1) Hadoop cluster setup

2016-07-21

Operating system

CentOS 7.2

Network environment

hostname     ip                 role
hadoop001    192.168.252.164    hdfs: namenode, datanode, secondarynamenode; yarn: resourcemanager, nodemanager
hadoop002    192.168.252.165    hdfs: datanode; yarn: nodemanager
hadoop003    192.168.252.166    hdfs: datanode; yarn: nodemanager

Software packages:

jdk-7u55-linux-x64.tar.gz

hadoop-2.6.4.tar.gz

1. Preparation

1.1 Disable the firewall

systemctl stop firewalld
systemctl disable firewalld
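
A quick optional check that the firewall is really off:

firewall-cmd --state   # prints "not running" once firewalld is stopped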


1.2 Disable SELinux

vi /etc/selinux/config

SELINUX=disabled
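
The config file change only applies after a reboot; to drop to permissive mode in the current session as well:

setenforce 0
getenforce   # Permissive now, Disabled after the next reboot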

1.3 Configure the network

vi /etc/sysconfig/network-scripts/ifcfg-eno16777736

TYPE=Ethernet
BOOTPROTO=static
NAME=eno16777736
DEVICE=eno16777736
ONBOOT=yes
IPADDR=192.168.252.164
NETMASK=255.255.255.0
GATEWAY=192.168.252.1

systemctl restart network
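
To verify the static address is up (the interface name eno16777736 comes from this host; yours may differ):

ip addr show eno16777736
ping -c 3 192.168.252.1   # the gateway configured above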


1.4 Set the hostname

hostnamectl set-hostname hadoop001

On CentOS 7 the HOSTNAME entry in /etc/sysconfig/network is no longer read, so hostnamectl (which writes /etc/hostname) is used instead; run the matching command on hadoop002 and hadoop003.
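
Check the result:

hostnamectl status   # or simply: hostname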

1.5 Configure hosts (on all three nodes)

vi /etc/hosts

192.168.252.164 hadoop001
192.168.252.165 hadoop002
192.168.252.166 hadoop003
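
With the same hosts file on every node, a quick name-resolution check:

ping -c 1 hadoop002
ping -c 1 hadoop003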


1.6 Set up passwordless SSH

Generate a key pair (creates id_rsa and id_rsa.pub under ~/.ssh):

ssh-keygen -t rsa

Copy the public key (in ~/.ssh):

cp id_rsa.pub authorized_keys

Once every node has done this, merge the authorized_keys files from all nodes and overwrite each node's original authorized_keys with the merged file, so that every node trusts every other node (see the ssh-copy-id shortcut below).
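
Assuming everything here runs as root, as the /etc edits above suggest, the manual merge can be replaced by ssh-copy-id, which appends the local public key to the remote authorized_keys in one step. A minimal sketch, run on every node against every other node:

ssh-copy-id root@hadoop002
ssh-copy-id root@hadoop003

ssh hadoop002 hostname   # should log in without a password and print hadoop002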

1.7 Install the JDK

tar zxvf jdk-7u55-linux-x64.tar.gz -C /usr

(extracted into /usr so the result matches the JAVA_HOME below)


Configure the Java environment variables:

vi ~/.bashrc

export JAVA_HOME=/usr/jdk1.7.0_55
export HADOOP_HOME=/opt/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

source ~/.bashrc
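
Verify the JDK is on the PATH:

java -version   # should report java version "1.7.0_55"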


2. Set up node one

2.1 Extract Hadoop (under /opt)

tar zxvf hadoop-2.6.4.tar.gz
mv hadoop-2.6.4 hadoop


2.2 Configure environment variables

vi /etc/profile

export JAVA_HOME=/usr/jdk1.7.0_55
export HADOOP_HOME=/opt/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

source /etc/profile


2.3 Edit the configuration files (all under /opt/hadoop/etc/hadoop)
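
Before editing the XML files, it is worth hard-coding JAVA_HOME in hadoop-env.sh, because the start scripts launch the daemons over non-interactive SSH sessions that do not necessarily source /etc/profile:

vi hadoop-env.sh

export JAVA_HOME=/usr/jdk1.7.0_55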

core-site.xml

<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop001:9000</value>
</property>

<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/data/tmp</value>
</property>

(fs.defaultFS supersedes the deprecated fs.default.name in Hadoop 2.x.)

hdfs-site.xml

<property>
<name>dfs.namenode.name.dir</name>
<value>/usr/local/data/namenode</value>
</property>

<property>
<name>dfs.datanode.data.dir</name>
<value>/usr/local/data/datanode</value>
</property>

<property>
<name>dfs.replication</name>
<value>3</value>
</property>

(dfs.namenode.name.dir and dfs.datanode.data.dir supersede the deprecated dfs.name.dir and dfs.data.dir; the temp directory is set with hadoop.tmp.dir in core-site.xml above, since dfs.tmp.dir is not an HDFS property.)

mapred-site.xml

<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
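
Note: the 2.6.4 tarball ships only mapred-site.xml.template; if mapred-site.xml does not exist yet, create it from the template before editing:

cp mapred-site.xml.template mapred-site.xml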

yarn-site.xml

<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop001</value>
</property>

<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>

slaves

hadoop001
hadoop002
hadoop003


3. Set up nodes two and three

3.1 Copy the hadoop directory to nodes two and three (from /opt on hadoop001)

scp -r hadoop 192.168.252.165:/opt
scp -r hadoop 192.168.252.166:/opt


3.2 Copy the environment variable file

scp /etc/profile 192.168.252.165:/etc
scp /etc/profile 192.168.252.166:/etc
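
The copied profile only takes effect in new login shells; a quick way to confirm the environment on the other nodes:

ssh hadoop002 'source /etc/profile; hadoop version'
ssh hadoop003 'source /etc/profile; hadoop version'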


3.3 Create the data directory (needed on all three nodes)

mkdir /usr/local/data
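
The NameNode, DataNode, and temp paths configured earlier all live under this directory, so it must exist everywhere; from hadoop001 the remote ones can also be created over SSH:

ssh hadoop002 mkdir -p /usr/local/data
ssh hadoop003 mkdir -p /usr/local/data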


4. Start the cluster

4.1 Format HDFS (on hadoop001, first start only)

hdfs namenode -format

4.2 Start the HDFS cluster (on hadoop001)

start-dfs.sh

4.3 Verify

Use the jps command, or open the NameNode web UI on port 50070. Expected processes:

hadoop001: NameNode, DataNode, SecondaryNameNode

hadoop002: DataNode

hadoop003: DataNode
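
Another quick check from hadoop001 that all DataNodes registered:

hdfs dfsadmin -report   # the report should list all three DataNodes as live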

4.4 Start YARN

start-yarn.sh

4.5 Verify

Use jps, or the ResourceManager web UI on port 8088. Expected processes:

hadoop001: ResourceManager, NodeManager

hadoop002: NodeManager

hadoop003: NodeManager
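
yarn node -list should show three RUNNING NodeManagers, and the bundled example job gives a simple end-to-end smoke test:

yarn node -list
hadoop jar /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.4.jar pi 2 10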
Tags: hadoop, cluster setup