
Hadoop Cluster Setup (hadoop)

2011-12-05 14:31
First, the environment: three machines.

[code=plain]192.168.30.149  hadoop149  namenode and jobtracker    ### 149 has slightly better hardware
192.168.30.150  hadoop150  datanode and tasktracker
192.168.30.148  hadoop148  datanode and tasktracker[/code]
Configure passwordless SSH login:
[code=plain]$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys[/code]
My master is on 149, so you can copy 149's .pub file over to 150 and 148 and then run cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys on each of them.
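A sketch of that copy step, assuming root logins on all three machines and that ~/.ssh already exists on 150 and 148:
[code=plain]# on hadoop149: push the public key to each slave
$ scp ~/.ssh/id_dsa.pub root@192.168.30.150:/root/
$ scp ~/.ssh/id_dsa.pub root@192.168.30.148:/root/
# on hadoop150 and hadoop148: append it to authorized_keys
$ cat /root/id_dsa.pub >> ~/.ssh/authorized_keys
# back on hadoop149: these should now log in without asking for a password
$ ssh 192.168.30.150 hostname
$ ssh 192.168.30.148 hostname[/code]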
If it still asks for a password, the problem is most likely file permissions!
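A minimal sketch of the usual fix: sshd silently ignores the key when ~/.ssh or authorized_keys is group- or world-writable, so tighten them on each machine:
[code=plain]$ chmod 700 ~/.ssh
$ chmod 600 ~/.ssh/authorized_keys[/code]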
The Hadoop version I use is hadoop-0.20.2. Download link: Google it for now; in a couple of days I will put everything on a network drive and add the link here. After downloading, edit a few files under /root/hadoop-0.20.2/conf (note that the Hadoop installation path must be identical on all machines). Add the following line:
[code=plain][root@localhost conf]# vim hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0_01     ### set the variable[/code]
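A quick sanity check, worth running on every node, that the JAVA_HOME set above really points at a working JDK (the path is the one from this setup):
[code=plain]# should print the Java version, e.g. java version "1.7.0_01"
$ /usr/java/jdk1.7.0_01/bin/java -version[/code]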
<?xmlversion="1.0"?><?xml-stylesheettype="text/xsl" href="configuration.xsl"?><!-- Put site-specificproperty overrides in this file. --><configuration><property><name>fs.default.name</name><value>hdfs://192.168.30.149:9000</value> ###具体的意义之后会讲解</property></configuration>
[code=plain][root@localhost conf]# vim mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>hdfs://192.168.30.149:9004</value>
  </property>
</configuration>[/code]
[code=plain][root@localhost conf]# vim hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>[/code]
[code=plain][root@localhost conf]# vim masters
hadoop149
[root@localhost conf]# vim slaves
hadoop150
hadoop148[/code]
Five files have been edited in total; what each of them means will be covered later. Note that you must also configure the /etc/hosts file, as follows (on 192.168.30.149):
[code=plain][root@localhost conf]# vim /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1               localhost.localdomain localhost
::1                     localhost6.localdomain6 localhost6
192.168.30.149          hadoop149
192.168.30.150          hadoop150
192.168.30.148          hadoop148[/code]
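A quick check, run from each of the three machines, that the names now resolve to the right addresses:
[code=plain]$ ping -c 1 hadoop149
$ ping -c 1 hadoop150
$ ping -c 1 hadoop148[/code]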
4. Start Hadoop. Simple commands are used here.
A. Format the filesystem:
[code=plain]#bin/hadoop namenode -format[/code]
B. Start Hadoop:
[code=plain]#bin/start-all.sh[/code]
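Before the test below, you can confirm which daemons actually came up using the JDK's jps tool (a quick sketch; the daemon lists are what this layout should produce):
[code=plain]# on hadoop149, expect roughly: NameNode, SecondaryNameNode, JobTracker, Jps
$ jps
# on hadoop150 and hadoop148, expect: DataNode, TaskTracker, Jps
$ jps[/code]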
C. Use Hadoop's bundled example to test whether the cluster started successfully:
[code=plain]#bin/hadoop fs -mkdir input     ### create the input directory in the filesystem
#bin/hadoop fs -put README.txt input    ### upload the local README.txt into input
#bin/hadoop fs -lsr             ### list every file in the filesystem; if the files are there with non-zero size, the Hadoop filesystem is working
#bin/hadoop jar hadoop-0.20.2-examples.jar wordcount input/README.txt output   ### write the results into output
#bin/hadoop jar hadoop-0.20.2-examples.jar wordcount input/1.txt output[/code]
[code=plain]11/12/02 17:47:14 INFO input.FileInputFormat: Total input paths to process : 1
11/12/02 17:47:14 INFO mapred.JobClient: Running job: job_201112021743_0001
11/12/02 17:47:15 INFO mapred.JobClient:  map 0% reduce 0%
11/12/02 17:47:22 INFO mapred.JobClient:  map 100% reduce 0%
11/12/02 17:47:34 INFO mapred.JobClient:  map 100% reduce 100%
11/12/02 17:47:36 INFO mapred.JobClient: Job complete: job_201112021743_0001
11/12/02 17:47:36 INFO mapred.JobClient: Counters: 17
11/12/02 17:47:36 INFO mapred.JobClient:   Job Counters
11/12/02 17:47:36 INFO mapred.JobClient:     Launched reduce tasks=1
11/12/02 17:47:36 INFO mapred.JobClient:     Launched map tasks=1
11/12/02 17:47:36 INFO mapred.JobClient:     Data-local map tasks=1
11/12/02 17:47:36 INFO mapred.JobClient:   FileSystemCounters
11/12/02 17:47:36 INFO mapred.JobClient:     FILE_BYTES_READ=32523
11/12/02 17:47:36 INFO mapred.JobClient:     HDFS_BYTES_READ=44253
11/12/02 17:47:36 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=65078
11/12/02 17:47:36 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=23148
11/12/02 17:47:36 INFO mapred.JobClient:   Map-Reduce Framework
11/12/02 17:47:36 INFO mapred.JobClient:     Reduce input groups=2367
11/12/02 17:47:36 INFO mapred.JobClient:     Combine output records=2367
11/12/02 17:47:36 INFO mapred.JobClient:     Map input records=734
11/12/02 17:47:36 INFO mapred.JobClient:     Reduce shuffle bytes=32523
11/12/02 17:47:36 INFO mapred.JobClient:     Reduce output records=2367
11/12/02 17:47:36 INFO mapred.JobClient:     Spilled Records=4734
11/12/02 17:47:36 INFO mapred.JobClient:     Map output bytes=73334
11/12/02 17:47:36 INFO mapred.JobClient:     Combine input records=7508
11/12/02 17:47:36 INFO mapred.JobClient:     Map output records=7508
11/12/02 17:47:36 INFO mapred.JobClient:     Reduce input records=2367[/code]
You can also check the cluster status from a local browser on ports 50070 and 50030 (remember to configure your local C:\Windows\System32\drivers\etc\hosts file):
[code=plain]192.168.30.149      hadoop149
192.168.30.150      hadoop150
192.168.30.148      hadoop148[/code]
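Finally, a sketch of how to read the wordcount result back out of HDFS. In this Hadoop version the reducer writes its output as part-r-00000; note also that a job will refuse to start if its output directory already exists, so remove output between runs:
[code=plain]#bin/hadoop fs -cat output/part-r-00000   ### print the word counts
#bin/hadoop fs -rmr output                ### clear the way before re-running the job[/code]
The web interfaces mentioned above are then reachable at http://hadoop149:50070 (NameNode/HDFS) and http://hadoop149:50030 (JobTracker/MapReduce).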