
Setting Up a Fully Distributed Hadoop Cluster on Ubuntu

2016-09-28 16:20
Some details are not repeated here because they were already covered in 《Ubuntu Hadoop 伪分布式搭建》 (the pseudo-distributed setup guide). For the JDK and Hadoop installation steps, see http://blog.csdn.net/u010171031/article/details/52689700

OS: Ubuntu 16.04

JDK: jdk1.8.0_101

Hadoop: hadoop-2.7.3

You need at least two machines: one serves as the Master node and the others as Slave nodes, and the JDK environment must be configured on every server.

Here I prepared two servers as nodes:

Master 192.168.92.129

Slave1 192.168.92.130

First, modify the configuration on the Master node:

sudo vim /etc/hosts


Add the following entries:

192.168.92.129  Master
192.168.92.130  Slave1


(The same entries also need to be added on the Slave1 node, of course.)

Next, we set up passwordless SSH login from the Master node to the Slave nodes.
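
If ~/.ssh/id_rsa.pub does not exist yet on the Master, generate a key pair first; a minimal sketch, assuming the hadoop user and an empty passphrase:

# run on the Master only if ~/.ssh/id_rsa.pub is missing
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa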

The ~/.ssh directory on the Master node contains the file id_rsa.pub; copy it to the Slave1 node with scp:

scp ~/.ssh/id_rsa.pub hadoop@Slave1:/home/hadoop


Then run the following on the Slave1 node:

mkdir -p ~/.ssh
cat ~/id_rsa.pub >> ~/.ssh/authorized_keys


Back on the Master node, test the passwordless login:

ssh Slave1


If you are logged in directly without being asked for a password, the configuration was successful.

Next, we need to modify the Hadoop configuration files.

First, the core-site.xml file:

vim /usr/lib/hadoop/etc/hadoop/core-site.xml


Open the file and add the following between <configuration> and </configuration>. fs.defaultFS points at the Master node so that the DataNodes on the slave machines can reach the NameNode:

<property>
    <name>fs.defaultFS</name>
    <value>hdfs://Master:9000</value>
</property>
<property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
</property>
<property>
    <name>hadoop.tmp.dir</name>
    <value>file:/usr/lib/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
</property>
<property>
    <name>hadoop.proxyuser.spark.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.spark.groups</name>
    <value>*</value>
</property>
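
The hadoop.tmp.dir above points at file:/usr/lib/hadoop/tmp, so the directory should exist before HDFS is formatted; a minimal sketch, assuming the hadoop user owns /usr/lib/hadoop:

mkdir -p /usr/lib/hadoop/tmp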


Next, hdfs-site.xml:

vim /usr/lib/hadoop/etc/hadoop/hdfs-site.xml


Insert the following in the same way:

<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>Master:9001</value>
</property>
<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/lib/hadoop/tmp/dfs/name</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/lib/hadoop/tmp/dfs/data</value>
</property>
<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
<property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
</property>
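
As an optional sanity check, hdfs getconf prints the value Hadoop actually sees for a given key, for example:

/usr/lib/hadoop/bin/hdfs getconf -confKey dfs.replication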


Next, mapred-site.xml.
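
The stock Hadoop 2.7.3 distribution ships only mapred-site.xml.template; if mapred-site.xml does not exist yet, copy it from the template first:

cp /usr/lib/hadoop/etc/hadoop/mapred-site.xml.template /usr/lib/hadoop/etc/hadoop/mapred-site.xml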

vim /usr/lib/hadoop/etc/hadoop/mapred-site.xml


<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
<property>
    <name>mapreduce.jobhistory.address</name>
    <value>Master:10020</value>
</property>
<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>Master:19888</value>
</property>


Then yarn-site.xml:

vim /usr/lib/hadoop/etc/hadoop/yarn-site.xml


<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>Master</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>


Next, we need to add the slave node information on the master node; this goes in the slaves file:

vim /usr/lib/hadoop/etc/hadoop/slaves


This file holds the list of nodes that run a DataNode. It contains localhost by default; you can delete that line or leave it in place (leaving it means the Master node runs both a NameNode and a DataNode).

In either case, append the following after it:

Slave1


Only with this entry will the DataNode on Slave1 be started.
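
For reference, keeping localhost and adding Slave1 leaves the slaves file looking like this:

localhost
Slave1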

Finally, set JAVA_HOME in hadoop-env.sh:

vim /usr/lib/hadoop/etc/hadoop/hadoop-env.sh


Find the JAVA_HOME line and change it to:

export JAVA_HOME=/usr/lib/jdk/jdk1.8.0_101


Then send the configured Hadoop directory to the Slave node over ssh:

scp -r /usr/lib/hadoop hadoop@Slave1:/home/hadoop


On Slave1, move the hadoop directory to the same path as on the Master.
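
A minimal sketch of that step, assuming the scp above placed the directory at /home/hadoop/hadoop and that /usr/lib/hadoop does not exist on Slave1 yet:

# run on Slave1
sudo mv /home/hadoop/hadoop /usr/lib/hadoop
sudo chown -R hadoop:hadoop /usr/lib/hadoop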

Format HDFS on the Master node:

/usr/lib/hadoop/bin/hdfs namenode -format


Start Hadoop:

/usr/lib/hadoop/sbin/start-dfs.sh
/usr/lib/hadoop/sbin/start-yarn.sh
/usr/lib/hadoop/sbin/mr-jobhistory-daemon.sh start historyserver


At this point, running jps on the Master node should show:

JobHistoryServer
SecondaryNameNode
Jps
ResourceManager
NameNode


On the Slave node it should show:

Jps
DataNode
NodeManager
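
To confirm that the DataNode on Slave1 has registered with the NameNode, the HDFS report can be checked from the Master (an optional verification step):

/usr/lib/hadoop/bin/hdfs dfsadmin -report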


Next, we need to create directories on HDFS (relative paths such as input resolve under /user/hadoop, the hadoop user's HDFS home directory):

hdfs dfs -mkdir -p /user/hadoop
hdfs dfs -mkdir input


Create a local file named words and put some words in it:

word
edmond
monkey
broewning
king
...
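
One quick way to create such a file from the shell (the contents are arbitrary; these are just the sample words above):

printf 'word\nedmond\nmonkey\nbroewning\nking\n' > words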


Put the words file into HDFS:

hdfs dfs -put words input


Run the example bundled with Hadoop to check that everything works:

hadoop jar /usr/lib/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount input output


The output should look something like this:

16/10/13 12:55:19 INFO client.RMProxy: Connecting to ResourceManager at /127.0.0.1:8032
16/10/13 12:55:19 INFO input.FileInputFormat: Total input paths to process : 1
16/10/13 12:55:19 INFO mapreduce.JobSubmitter: number of splits:1
16/10/13 12:55:19 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1476329370564_0003
16/10/13 12:55:20 INFO impl.YarnClientImpl: Submitted application application_1476329370564_0003
16/10/13 12:55:20 INFO mapreduce.Job: The url to track the job: http://15ISK:8088/proxy/application_1476329370564_0003/
16/10/13 12:55:20 INFO mapreduce.Job: Running job: job_1476329370564_0003
16/10/13 12:55:25 INFO mapreduce.Job: Job job_1476329370564_0003 running in uber mode : false
16/10/13 12:55:25 INFO mapreduce.Job:  map 0% reduce 0%
16/10/13 12:55:29 INFO mapreduce.Job:  map 100% reduce 0%
16/10/13 12:55:33 INFO mapreduce.Job:  map 100% reduce 100%
16/10/13 12:55:33 INFO mapreduce.Job: Job job_1476329370564_0003 completed successfully
16/10/13 12:55:33 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=221
FILE: Number of bytes written=238271
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=283
HDFS: Number of bytes written=171
HDFS: Number of read operations=6
HDFS: Number of large read operations=0
HDFS: Number of write operations=2
Job Counters
Launched map tasks=1
Launched reduce tasks=1
Data-local map tasks=1
Total time spent by all maps in occupied slots (ms)=1771
Total time spent by all reduces in occupied slots (ms)=2005
Total time spent by all map tasks (ms)=1771
Total time spent by all reduce tasks (ms)=2005
Total vcore-milliseconds taken by all map tasks=1771
Total vcore-milliseconds taken by all reduce tasks=2005
Total megabyte-milliseconds taken by all map tasks=1813504
Total megabyte-milliseconds taken by all reduce tasks=2053120
Map-Reduce Framework
Map input records=13
Map output records=12
Map output bytes=204
Map output materialized bytes=221
Input split bytes=120
Combine input records=12
Combine output records=11
Reduce input groups=11
Reduce shuffle bytes=221
Reduce input records=11
Reduce output records=11
Spilled Records=22
Shuffled Maps =1
Failed Shuffles=0
Merged Map outputs=1
GC time elapsed (ms)=101
CPU time spent (ms)=1260
Physical memory (bytes) snapshot=459825152
Virtual memory (bytes) snapshot=3895697408
Total committed heap usage (bytes)=353370112
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=163
File Output Format Counters
Bytes Written=171
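
The word counts are written to the output directory on HDFS and can be inspected afterwards, for example:

hdfs dfs -cat output/*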


Note:

The hosts file must not contain any host entries other than the Master and Slave node information; the exact contents will depend on your own machines.
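
For reference, a working hosts file for this setup can be as small as the following (the 127.0.1.1 <hostname> line that Ubuntu adds by default is a common source of trouble and is best removed):

127.0.0.1       localhost
192.168.92.129  Master
192.168.92.130  Slave1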

Tags: ubuntu, hadoop, distributed