Hadoop Cluster Setup
2016-07-21 00:00
Operating system
CentOS 7.2
Network environment
hostname | ip | role
hadoop001 | 192.168.252.164 | hdfs: namenode, datanode, secondarynamenode; yarn: resourcemanager, nodemanager
hadoop002 | 192.168.252.165 | hdfs: datanode; yarn: nodemanager
hadoop003 | 192.168.252.166 | hdfs: datanode; yarn: nodemanager
Packages:
jdk-7u55-linux-x64.tar.gz
hadoop-2.6.4.tar.gz
1. Preparation
1.1 Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
1.2 Disable SELinux
vi /etc/selinux/config
SELINUX=disabled
1.3 Configure the network
vi /etc/sysconfig/network-scripts/ifcfg-eno16777736
TYPE=Ethernet
BOOTPROTO=static
NAME=eno16777736
DEVICE=eno16777736
ONBOOT=yes
IPADDR=192.168.252.164
NETMASK=255.255.255.0
GATEWAY=192.168.252.1
systemctl restart network
1.4 Set the hostname
vi /etc/sysconfig/network
HOSTNAME=hadoop001
Note: on CentOS 7 this file is no longer read at boot; hostnamectl set-hostname hadoop001 is the supported way and takes effect without a reboot.
1.5 Configure hosts
vi /etc/hosts
192.168.252.164 hadoop001
192.168.252.165 hadoop002
192.168.252.166 hadoop003
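Since the same three entries must exist on every node, appending them in one command avoids hand-editing each file; a sketch that writes to a scratch file here rather than the real /etc/hosts:

```shell
# Append the cluster's name-resolution entries in one shot.
# Using a scratch file for illustration; on a real node the
# target would be /etc/hosts (run as root).
hosts_file=$(mktemp)
cat >> "$hosts_file" <<'EOF'
192.168.252.164 hadoop001
192.168.252.165 hadoop002
192.168.252.166 hadoop003
EOF
grep -c 'hadoop00' "$hosts_file"   # prints 3
```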
1.6 Configure SSH trust between nodes
Generate a key pair (creates id_rsa and id_rsa.pub under ~/.ssh):
ssh-keygen -t rsa
Copy the public key (in ~/.ssh):
cp id_rsa.pub authorized_keys
After every node has done this, merge the authorized_keys files from all nodes into one, and overwrite each node's authorized_keys with the merged file.
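The merge step is easy to get wrong, so here is a local simulation of it, using throwaway placeholder key files in a temp directory (on the real cluster you would concatenate each node's ~/.ssh/id_rsa.pub):

```shell
# Simulate merging each node's public key into one authorized_keys.
# The key contents below are placeholders, not real keys.
workdir=$(mktemp -d)
echo 'ssh-rsa AAAA...one root@hadoop001'   > "$workdir/hadoop001.pub"
echo 'ssh-rsa AAAA...two root@hadoop002'   > "$workdir/hadoop002.pub"
echo 'ssh-rsa AAAA...three root@hadoop003' > "$workdir/hadoop003.pub"

# One line per node; this merged file then overwrites
# ~/.ssh/authorized_keys on every node.
cat "$workdir"/hadoop00?.pub > "$workdir/authorized_keys"
chmod 600 "$workdir/authorized_keys"   # sshd rejects group/world-writable files

wc -l < "$workdir/authorized_keys"     # prints 3
```

Where available, ssh-copy-id automates distributing a key to a remote node's authorized_keys, which avoids the manual merge entirely.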
1.7 Install the JDK
tar zxvf jdk-7u55-linux-x64.tar.gz
Configure the Java environment variables:
vi ~/.bashrc
export JAVA_HOME=/usr/jdk1.7.0_55
export HADOOP_HOME=/opt/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
source ~/.bashrc
2. Node 1 setup
2.1 Extract Hadoop (under /opt)
tar zxvf hadoop-2.6.4.tar.gz
mv hadoop-2.6.4 hadoop
2.2 Configure environment variables
vi /etc/profile
export JAVA_HOME=/usr/jdk1.7.0_55
export HADOOP_HOME=/opt/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
source /etc/profile
2.3 Edit the configuration files (under /opt/hadoop/etc/hadoop)
core-site.xml:
<property>
  <name>fs.default.name</name>
  <value>hdfs://hadoop001:9000</value>
</property>
hdfs-site.xml:
<property>
  <name>dfs.name.dir</name>
  <value>/usr/local/data/namenode</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/usr/local/data/datanode</value>
</property>
<property>
  <name>dfs.tmp.dir</name>
  <value>/usr/local/data/tmp</value>
</property>
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
mapred-site.xml (copy it from mapred-site.xml.template if it does not exist):
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
yarn-site.xml:
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>hadoop001</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
slaves:
hadoop001
hadoop002
hadoop003
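One caveat: fs.default.name, dfs.name.dir, and dfs.data.dir still work in Hadoop 2.6 but are deprecated aliases; the current property names, with the same values as above, are:

core-site.xml:
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://hadoop001:9000</value>
</property>
hdfs-site.xml:
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/usr/local/data/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/usr/local/data/datanode</value>
</property>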
3. Node 2 and node 3 setup
3.1 Copy the hadoop directory to nodes 2 and 3
scp -r hadoop 192.168.252.165:/opt
scp -r hadoop 192.168.252.166:/opt
3.2 Copy the environment variable file
scp /etc/profile 192.168.252.165:/etc
scp /etc/profile 192.168.252.166:/etc
3.3 Create the data directory (on all three nodes; the HDFS paths in hdfs-site.xml point here)
mkdir /usr/local/data
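Per-node steps like this one can be pushed from node 1 over the SSH trust set up in 1.6 instead of logging in to each machine. A small sketch; the run_on_all helper and the RUNNER override are illustrative, not part of Hadoop:

```shell
# Run the same command on every cluster node over SSH.
# RUNNER defaults to ssh; it can be overridden (e.g. with a stub
# function) to dry-run the loop without a live cluster.
RUNNER=${RUNNER:-ssh}
run_on_all() {
  local h
  for h in hadoop001 hadoop002 hadoop003; do
    "$RUNNER" "$h" "$@"
  done
}

# Example (requires the passwordless trust from step 1.6):
# run_on_all mkdir -p /usr/local/data
```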
4. Startup
4.1 Format HDFS
hdfs namenode -format
4.2 Start the HDFS cluster
start-dfs.sh
4.3 Verify
Run jps on each node (or check the web UI on port 50070); expected processes:
hadoop001: namenode, datanode, secondarynamenode
hadoop002: datanode
hadoop003: datanode
4.4 Start YARN
start-yarn.sh
4.5 Verify
Run jps on each node (or check the web UI on port 8088); expected processes:
hadoop001: resourcemanager, nodemanager
hadoop002: nodemanager
hadoop003: nodemanager
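The checks in 4.3 and 4.5 can be scripted by filtering jps output. The sketch below runs against a canned sample of what hadoop001 should show; on a live node you would substitute the output of jps itself:

```shell
# Expected daemons on hadoop001 after start-dfs.sh and start-yarn.sh.
# The sample below stands in for real `jps` output.
sample='2001 NameNode
2102 DataNode
2203 SecondaryNameNode
2304 ResourceManager
2405 NodeManager
2506 Jps'

missing=0
for daemon in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
  # -w keeps "NameNode" from matching the SecondaryNameNode line
  echo "$sample" | grep -qw "$daemon" || { echo "missing: $daemon"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all expected daemons are running"
```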