Overall you can follow this post: http://blog.csdn.net/woshisunxiangfu/article/details/44026207
For the pseudo-distributed setup, see:
http://blog.csdn.net/xin_jmail/article/details/40556267
A few small things need to be added, though:
1) When configuring core-site.xml and hdfs-site.xml, the <description> element must not contain Chinese text.
2) Make the hostname resolve correctly:
hostname -f
vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 <host>
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
<ip> <host>
hostname -f
3) On the server, open ports 8088 (YARN) and 50070 (NameNode):
/sbin/iptables -I INPUT -p tcp --dport 8088 -j ACCEPT
/etc/init.d/iptables save
service iptables restart
/sbin/iptables -I INPUT -p tcp --dport 50070 -j ACCEPT
/etc/init.d/iptables save
service iptables restart
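The two port openings follow the same pattern, so the rules can be generated in one loop. A dry-run sketch (it only prints the commands, since actually applying them needs root):

```shell
# Print the iptables rules for the Hadoop web UI ports (8088 = YARN,
# 50070 = NameNode); run the printed lines as root to apply and persist them.
for port in 8088 50070; do
    echo "/sbin/iptables -I INPUT -p tcp --dport $port -j ACCEPT"
done
echo "/etc/init.d/iptables save"
echo "service iptables restart"
```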
When testing, use the server's IP, e.g. http://192.168.22.250:8099/ (the original post used the hostname: http://datanode-4:8099/).
After a server reboot, restart Hadoop as follows:
cd /root/hadoop2.6
sbin/start-dfs.sh
sbin/start-yarn.sh
Verify: http://192.168.22.250:8099/cluster
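The restart steps above can be wrapped in a small helper (a sketch; restart_hadoop is a hypothetical name, and HADOOP_HOME defaults to the install path from the text but is overridable):

```shell
# restart_hadoop: start HDFS, then YARN, from $HADOOP_HOME/sbin.
restart_hadoop() {
    home=${HADOOP_HOME:-/root/hadoop2.6}   # install path from the text
    "$home/sbin/start-dfs.sh" || return 1
    "$home/sbin/start-yarn.sh"
}

# Usage: restart_hadoop, then open http://<server-ip>:8099/cluster to verify.
```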
----------------------------------
Pseudo-distributed setup, all steps together:
cd /appl/hadoop-2.7.0/etc/hadoop

yarn-env.sh:
export JAVA_HOME=/appl/jdk1.7.0_80
...
hadoop-env.sh:
export JAVA_HOME=/appl/jdk1.7.0_80
...
yarn-site.xml:
yarn.nodemanager.aux-services = mapreduce_shuffle
mapred-site.xml:
mapreduce.framework.name = yarn
hdfs-site.xml:
dfs.replication = 1
core-site.xml:
fs.defaultFS = hdfs://localhost:9000
hadoop.tmp.dir = /appl/hadoop-2.7.0/tmp

vi /etc/profile
export JAVA_HOME=/appl/jdk1.7.0_80
export HADOOP_HOME=/appl/hadoop-2.7.0
export HADOOP_ROOT_LOGGER=INFO,console
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
...
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:...
export MASTER='local-cluster[1,1,1024]'
source /etc/profile

Set up SSH (passwordless login) and open the network ports (e.g. 8088, 50070), then:
sh start-dfs.sh
sh start-yarn.sh
jps
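As a concrete example, here are the two core-site.xml properties listed above in actual Hadoop XML form, generated via a heredoc (a sketch: it writes to a scratch directory by default so it is safe to try; point CONF_DIR at your real etc/hadoop directory to use it. Per the earlier tip, keep any <description> text ASCII-only):

```shell
# Write core-site.xml with the fs.defaultFS and hadoop.tmp.dir values from the text.
CONF_DIR=${CONF_DIR:-/tmp/hadoop-conf-demo}   # scratch dir; use your real etc/hadoop dir
mkdir -p "$CONF_DIR"

cat > "$CONF_DIR/core-site.xml" <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/appl/hadoop-2.7.0/tmp</value>
  </property>
</configuration>
EOF
echo "wrote $CONF_DIR/core-site.xml"
```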
[root@centos1 ~]# jps
4575 SecondaryNameNode
4857 NodeManager
4373 DataNode
4755 ResourceManager
5187 Jps
4276 NameNode
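A quick way to confirm all five daemons are up is to grep the jps listing for each expected name (a sketch; check_daemons is a hypothetical helper, and it takes the jps output as an argument so it can also be tried against a saved listing):

```shell
# check_daemons: given a jps listing, report any expected daemon that is absent.
check_daemons() {
    missing=0
    for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
        if ! echo "$1" | grep -qw "$d"; then
            echo "MISSING: $d"
            missing=1
        fi
    done
    if [ "$missing" -eq 0 ]; then
        echo "all daemons running"
    fi
    return 0
}

# Usage: check_daemons "$(jps)"
```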
Web verification:
Namenode - http://192.168.56.250:50070/
Yarn - http://192.168.56.250:8088/
Cluster - http://192.168.56.250:8088/cluster
----------------------------------
Troubleshooting
Q: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable
A: Turn on debug logging:
export HADOOP_ROOT_LOGGER=DEBUG,console
hadoop fs -text /mk/test/hadoop.log
This reveals the real error: Failed to load native-hadoop with error: ... /lib/libc.so.6: version `GLIBC_2.14' not found
ll /lib/libc.so.6    # check the currently installed version
Download and install a matching version (from http://www.filewatcher.com/; watch out for 32- vs 64-bit, see the post "CentOS安装glibc-2.14" for details).
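Whether the installed glibc really predates the 2.14 the native library wants can be checked with a small version comparison (a sketch; version_lt is a hypothetical helper, and the ldd parsing assumes a glibc-based system):

```shell
# version_lt A B: succeeds if version A sorts strictly before version B.
version_lt() {
    [ "$1" = "$2" ] && return 1
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

# The running glibc version, e.g. "2.12" on stock CentOS 6.
cur=$(ldd --version | head -n1 | awk '{print $NF}')
if version_lt "$cur" "2.14"; then
    echo "glibc $cur predates 2.14 - the upgrade described above is needed"
else
    echo "glibc $cur is new enough"
fi
```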
Q: jps shows no DataNode, or files put with hadoop fs -put do not appear in HDFS even though mkdir works
A: http://blog.csdn.net/hackerwin7/article/details/19973045