NameNode '192.168.1.164:9000'
Started:  Tue Jul 06 14:37:10 CST 2010
Version:  0.20.2, r911707
Compiled: Fri Feb 19 08:07:34 UTC 2010 by chrisdo
Upgrades: There are no upgrades in progress.

Browse the filesystem | Namenode Logs

Cluster Summary
4 files and directories, 1 blocks = 5 total. Heap Size is 16.12 MB / 888.94 MB (1%)

Configured Capacity : 1.59 TB
DFS Used            : 72 KB
Non DFS Used        : 120.87 GB
DFS Remaining       : 1.47 TB
DFS Used%           : 0 %
DFS Remaining%      : 92.57 %
Live Nodes          : 2
Dead Nodes          : 0

NameNode Storage:
Storage Directory: /opt/hadoop/data/dfs.name.dir | Type: IMAGE_AND_EDITS | State: Active

Hadoop, 2010.
The above is the HDFS status page after installation completed and the cluster was running normally (accessed at http://210.66.44.88:50070/dfshealth.jsp). The "Browse the filesystem" link is the entry point for browsing the file system, but it can turn out to be inaccessible. I ran into exactly that, and a long search online turned up nothing. Eventually, by intercepting the requests with Firebug, I found that clicking "Browse the filesystem" redirects to another page at http://192.168.1.164:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=%2F (192.168.1.164 is the server's internal LAN address; the masters and slaves files are configured with LAN IPs). From outside, however, the page is only reachable through the external address (something like http://210.66.44.88:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=%2F), and that is the problem.

Let's look at the source code of nn_browsedfscontent.jsp:
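In essence, the NameNode picks a random DataNode's "host:port" address, reverse-resolves the host, and builds a browseDirectory.jsp URL from it. A minimal, self-contained sketch of just the URL construction (addresses hypothetical, the reverse-DNS step omitted so the string handling can be seen in isolation):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Sketch of the redirect-URL construction performed by nn_browsedfscontent.jsp.
// The real page first reverse-resolves the host with
// InetAddress.getCanonicalHostName(); that lookup is omitted here.
public class RedirectUrlSketch {

    // datanode is a "host:port" pair as reported by the NameNode;
    // nnInfoPort is the NameNode's HTTP port (50070 by default).
    public static String buildRedirect(String datanode, int nnInfoPort) {
        int colon = datanode.indexOf(':');
        String host = datanode.substring(0, colon);
        String port = datanode.substring(colon + 1);
        return "http://" + host + ":" + port
                + "/browseDirectory.jsp?namenodeInfoPort=" + nnInfoPort
                + "&dir=" + URLEncoder.encode("/", StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Hypothetical DataNode address: whatever host name or IP the
        // NameNode resolves ends up verbatim in the redirect URL.
        System.out.println(buildRedirect("192.168.1.164:50075", 50070));
        // -> http://192.168.1.164:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=%2F
    }
}
```

If the host part of that URL is a LAN-only IP, a browser on an external network cannot follow the redirect, which is exactly the failure described above.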
package org.apache.hadoop.hdfs.server.namenode;

import java.io.IOException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.URLEncoder;
import java.util.Vector;
import javax.servlet.ServletConfig;
import javax.servlet.ServletContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;
import javax.servlet.jsp.JspFactory;
import javax.servlet.jsp.JspWriter;
import javax.servlet.jsp.PageContext;
import javax.servlet.jsp.SkipPageException;
import org.apache.hadoop.util.ServletUtil;
import org.apache.jasper.runtime.HttpJspBase;
import org.apache.jasper.runtime.JspSourceDependent;
import org.apache.jasper.runtime.ResourceInjector;

public final class nn_005fbrowsedfscontent_jsp extends HttpJspBase
    implements JspSourceDependent {

  private static final JspFactory _jspxFactory = JspFactory.getDefaultFactory();
  private static Vector _jspx_dependants;
  private ResourceInjector _jspx_resourceInjector;

  // Redirects the request to a randomly chosen DataNode
  public void redirectToRandomDataNode(NameNode nn, HttpServletResponse resp)
      throws IOException {
    String nodeToRedirect;
    int redirectPort;
    FSNamesystem fsn = nn.getNamesystem();
    String datanode = fsn.randomDataNode();
    if (datanode != null) {
      redirectPort = Integer.parseInt(datanode.substring(datanode.indexOf(':') + 1));
      nodeToRedirect = datanode.substring(0, datanode.indexOf(':'));
    } else {
      nodeToRedirect = nn.getHttpAddress().getHostName();
      redirectPort = nn.getHttpAddress().getPort();
    }
    // Resolves the node's canonical host name (a reverse DNS lookup)
    String fqdn = InetAddress.getByName(nodeToRedirect).getCanonicalHostName();
    String redirectLocation = "http://" + fqdn + ":" + redirectPort
        + "/browseDirectory.jsp?namenodeInfoPort=" + nn.getHttpAddress().getPort()
        + "&dir=" + URLEncoder.encode("/", "UTF-8");
    resp.sendRedirect(redirectLocation);
  }

  public Object getDependants() {
    return _jspx_dependants;
  }

  public void _jspService(HttpServletRequest request, HttpServletResponse response)
      throws IOException, ServletException {
    PageContext pageContext = null;
    HttpSession session = null;
    ServletContext application = null;
    ServletConfig config = null;
    JspWriter out = null;
    Object page = this;
    JspWriter _jspx_out = null;
    PageContext _jspx_page_context = null;
    try {
      response.setContentType("text/html; charset=UTF-8");
      pageContext = _jspxFactory.getPageContext(this, request, response, null, true, 8192, true);
      _jspx_page_context = pageContext;
      application = pageContext.getServletContext();
      config = pageContext.getServletConfig();
      session = pageContext.getSession();
      out = pageContext.getOut();
      _jspx_out = out;
      this._jspx_resourceInjector =
          (ResourceInjector) application.getAttribute("com.sun.appserv.jsp.resource.injector");
      out.write(10);
      out.write("\n\n<html>\n\n<title></title>\n\n<body>\n");
      NameNode nn = (NameNode) application.getAttribute("name.node");
      redirectToRandomDataNode(nn, response);
      out.write("\n<hr>\n\n<h2>Local logs</h2>\n<a href=\"/logs/\">Log</a> directory\n\n");
      out.println(ServletUtil.htmlFooter());
      out.write(10);
    } catch (Throwable t) {
      if (!(t instanceof SkipPageException)) {
        out = _jspx_out;
        if ((out != null) && (out.getBufferSize() != 0))
          out.clearBuffer();
        if (_jspx_page_context != null)
          _jspx_page_context.handlePageException(t);
      }
    } finally {
      _jspxFactory.releasePageContext(_jspx_page_context);
    }
  }
}
As the code shows, when we click "Browse the filesystem" the NameNode redirects the request to a randomly chosen DataNode. The redirect host is whatever reverse DNS returns for the servers configured in the slaves file; a LAN IP usually does not reverse-resolve to a domain or host name, so the IP itself is used, and the link breaks. There are two fixes, depending on the URL that redirectToRandomDataNode generates:

1. If the resolved name is a host name, you only need to add a mapping to your local hosts file (on Windows, Windows Hosts Editor is a convenient tool: http://yymmiinngg.iteye.com/blog/360779).
2. If the resolved name is the machine's LAN IP, configure slaves and masters with domain names or external IPs instead.
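For the hosts-file fix, an entry like the following on the client machine points the browser at the reachable external address (the host name "datanode1" is hypothetical; use whatever name actually appears in the redirect URL):

```
# /etc/hosts on Linux/macOS, or
# C:\Windows\System32\drivers\etc\hosts on Windows.
# "datanode1" is a hypothetical host name from the redirect URL;
# map it to the address the client can actually reach.
210.66.44.88    datanode1
```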