
Setting Up a Hadoop Distributed Cluster

Preparation

Master: hadoop1
Slaves: hadoop2, hadoop3

Create a hadoop user on every node and grant it root (sudo) privileges.

Map the hostnames in /etc/hosts on every node (format: IP first, then hostname):

<hadoop1-ip> hadoop1
<hadoop2-ip> hadoop2
<hadoop3-ip> hadoop3
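A concrete sketch of those three entries, assuming hypothetical addresses on a private 192.168.1.0/24 network (substitute the real IPs of your machines):

192.168.1.101 hadoop1
192.168.1.102 hadoop2
192.168.1.103 hadoop3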

1. Passwordless SSH between the hosts. Only the master actually needs passwordless access to every node (itself included), since the start scripts ssh out from there.

hadoop1>>ssh-keygen -t rsa
hadoop2>>ssh-keygen -t rsa
hadoop3>>ssh-keygen -t rsa

hadoop1>>ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop1
hadoop1>>ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop2
hadoop1>>ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop3
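To confirm passwordless login works from the master (a quick check, not part of the original steps), each of these should print the remote hostname without prompting for a password:

hadoop1>>ssh hadoop2 hostname
hadoop1>>ssh hadoop3 hostname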

2. Install the JDK and Hadoop, and configure the JAVA_HOME and HADOOP_HOME environment variables (see the sketch below).
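A minimal sketch of that environment-variable script, assuming the JDK and Hadoop are unpacked under ~/app and using hypothetical version numbers (adjust to whatever you actually installed):

# ~/app/env.sh -- source this from ~/.bashrc on every node
export JAVA_HOME=$HOME/app/jdk1.8.0_161      # hypothetical JDK directory
export HADOOP_HOME=$HOME/app/hadoop-2.7.5    # hypothetical Hadoop directory
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH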

3. Set JAVA_HOME in Hadoop's hadoop-env.sh, as shown below.
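hadoop-env.sh is read by the daemon start scripts, which do not always inherit your login environment, so the path should be hard-coded there. A sketch, reusing the hypothetical JDK path from step 2:

# $HADOOP_HOME/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/home/hadoop/app/jdk1.8.0_161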

4. Configure Hadoop's core-site.xml. (fs.default.name is the deprecated alias of fs.defaultFS; dfs.datanode.data.dir conventionally belongs in hdfs-site.xml, although it also takes effect here because every daemon reads core-site.xml.)

<property>
  <name>fs.default.name</name>
  <value>hdfs://hadoop1:8020</value>
</property>

<property>
  <name>dfs.datanode.data.dir</name>
  <value>a directory you create yourself; it must not be under /tmp, which is cleared on reboot</value>
</property>
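Before starting HDFS, that data directory has to exist on every node. A sketch, assuming a hypothetical ~/app/dfs/data location (then put /home/hadoop/app/dfs/data in the <value> above):

hadoop1>>mkdir -p ~/app/dfs/data
hadoop2>>mkdir -p ~/app/dfs/data
hadoop3>>mkdir -p ~/app/dfs/data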

5. Configure Hadoop's yarn-site.xml, pointing YARN at the master:

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>

<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>hadoop1</value>
</property>

6. Configure Hadoop's mapred-site.xml so that MapReduce runs on YARN:

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
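On stock Hadoop 2.x only a template for this file ships with the distribution, so it may need to be created first (check your own tarball):

hadoop1>>cp $HADOOP_HOME/etc/hadoop/mapred-site.xml.template $HADOOP_HOME/etc/hadoop/mapred-site.xml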

7. Configure Hadoop's slaves file, which lists the worker nodes; here the master doubles as a worker. Replace the original localhost with:

hadoop1
hadoop2
hadoop3

8. Distribute the installation. Assuming the JDK, Hadoop, and the environment-variable script all live under ~/app:

scp -r ~/app hadoop@hadoop2:~/
scp -r ~/app hadoop@hadoop3:~/
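After copying, the environment variables still have to be activated on each slave. A sketch, assuming the hypothetical env.sh from step 2:

hadoop2>>echo 'source ~/app/env.sh' >> ~/.bashrc
hadoop3>>echo 'source ~/app/env.sh' >> ~/.bashrc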

9. On the master, format the NameNode (only once; reformatting generates a new cluster ID, and existing DataNodes will refuse to register until their data directories are wiped):

bin/hdfs namenode -format

10. Start the whole cluster from the master:

sbin/start-all.sh
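start-all.sh is deprecated in Hadoop 2.x; the equivalent, and the form the deprecation warning itself suggests, is to start HDFS and YARN separately:

sbin/start-dfs.sh     # NameNode, SecondaryNameNode, and all DataNodes
sbin/start-yarn.sh    # ResourceManager and all NodeManagers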

11. Verify

Run jps on each node; the expected daemons are:

hadoop1:
NameNode
SecondaryNameNode
DataNode
ResourceManager
NodeManager

hadoop2:
DataNode
NodeManager

hadoop3:
DataNode
NodeManager

Web UI

YARN ResourceManager: http://hadoop1:8088
HDFS NameNode: http://hadoop1:50070
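As a final smoke test, one of the MapReduce examples bundled with Hadoop can be submitted from the master (the jar path follows the standard Hadoop 2.x layout; the version wildcard is left unexpanded):

bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 2 10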