Setting up a Hadoop 2.7.3 + Spark 2.1.0 cluster on CentOS 7 (1 NN + 2 DN)
2017-04-03 21:11
Environment
Hostname | IP | Processes |
---|---|---|
nn.hadoop.data.example.net | 172.16.156.220 | NameNode、Master、ResourceManager、SecondaryNameNode、JobHistoryServer |
dn1.hadoop.data.example.net | 172.16.156.221 | NodeManager、DataNode、Worker |
dn2.hadoop.data.example.net | 172.16.156.222 | NodeManager、DataNode、Worker |
Install the following packages with yum (some of them may not be strictly needed):
yum install pcre-devel openssl openssl-devel openssh-clients htop gcc zlib lrzsz zip unzip vim telnet-server ncurses wget net-tools
Disable the firewall
systemctl stop firewalld.service
systemctl disable firewalld.service
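A quick way to confirm the firewall is really off and will stay off after a reboot (firewalld is assumed to be the only firewall on these CentOS 7 hosts):

```bash
systemctl is-active firewalld    # expect "inactive" after the stop
systemctl is-enabled firewalld   # expect "disabled" after the disable
```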
Configure the hosts file
vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.156.220 nn.hadoop.data.example.net
172.16.156.221 dn1.hadoop.data.example.net
172.16.156.222 dn2.hadoop.data.example.net
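Each machine's own hostname should also match its entry here (hostnamectl set-hostname on CentOS 7). A quick sanity check that every node resolves, using the hostnames and IPs from the table above:

```bash
# run on each node; every host should answer from its 172.16.156.x address
for h in nn.hadoop.data.example.net dn1.hadoop.data.example.net dn2.hadoop.data.example.net; do
  ping -c 1 "$h"
done
```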
Install JDK and Scala
0. Create directories
mkdir -p /app/java
mkdir -p /app/scala
1. Download
Download the JDK:
wget http://download.oracle.com/otn-pub/java/jdk/8u121-b13/e9e7ea248e2c4826b92b3f075a80e441/jdk-8u121-linux-x64.tar.gz
If that link no longer works, download the JDK manually from Oracle's site and upload it to the server.
Download Scala:
wget http://downloads.lightbend.com/scala/2.12.1/scala-2.12.1.tgz
2. Move & extract
mv jdk-8u121-linux-x64.tar.gz /app/java
cd /app/java && tar -zxvf jdk-8u121-linux-x64.tar.gz
mv scala-2.12.1.tgz /app/scala
cd /app/scala && tar -zxvf scala-2.12.1.tgz
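To confirm the archives extracted where expected (the directory names are the defaults inside the two tarballs):

```bash
/app/java/jdk1.8.0_121/bin/java -version   # expect java version "1.8.0_121"
ls /app/scala/scala-2.12.1/bin             # scala, scalac, ... should be listed
```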
3. Set permissions
chmod -R 775 /app/
chown -R hadoop /app/
Note: the chown needs the hadoop user from the next step, so create the user first and then run it.
Create the hadoop user
useradd hadoop
passwd hadoop
Unless otherwise noted, all following steps are performed as the hadoop user.
Passwordless SSH login
Generate a key pair (~/.ssh/id_rsa and ~/.ssh/id_rsa.pub):
ssh-keygen -t rsa
Copy the public key to each machine:
ssh-copy-id -i nn.hadoop.data.example.net
ssh-copy-id -i dn1.hadoop.data.example.net
ssh-copy-id -i dn2.hadoop.data.example.net
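Passwordless login can then be verified from the NameNode host; each command should print the remote hostname without asking for a password:

```bash
for h in nn.hadoop.data.example.net dn1.hadoop.data.example.net dn2.hadoop.data.example.net; do
  ssh "$h" hostname
done
```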
Install Hadoop
0. Create directories
mkdir -p /app/hadoop/data
mkdir -p /app/hadoop/name
mkdir -p /app/hadoop/tmp
1. Download Hadoop
wget http://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz
2. Move & extract
mv hadoop-2.7.3.tar.gz /app/hadoop
cd /app/hadoop && tar -zxvf hadoop-2.7.3.tar.gz
3. Edit the configuration files
/etc/profile (requires root)
export HADOOP_HOME=/app/hadoop/hadoop-2.7.3
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
slaves
dn1.hadoop.data.example.net
dn2.hadoop.data.example.net
hadoop-env.sh
Change
# export JAVA_HOME=${JAVA_HOME}
to
export JAVA_HOME=/app/java/jdk1.8.0_121/
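The same edit can be scripted if you prefer; a minimal sketch, assuming hadoop-env.sh sits in the default etc/hadoop directory of the extracted tree:

```bash
# rewrite the JAVA_HOME line (commented or not) in place
sed -i 's|^#\? *export JAVA_HOME=.*|export JAVA_HOME=/app/java/jdk1.8.0_121/|' \
  /app/hadoop/hadoop-2.7.3/etc/hadoop/hadoop-env.sh
```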
core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://nn.hadoop.data.example.net:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
  </property>
</configuration>
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>nn.hadoop.data.example.net:50090</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/app/hadoop/name</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/app/hadoop/data</value>
  </property>
</configuration>
mapred-site.xml
cp mapred-site.xml.template mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>nn.hadoop.data.example.net:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>nn.hadoop.data.example.net:19888</value>
  </property>
</configuration>
yarn-site.xml
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>nn.hadoop.data.example.net</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>
4. Format the NameNode
hadoop namenode -format
(In Hadoop 2.x this still works but prints a deprecation warning; hdfs namenode -format is the current form.)
5. Copy files to the other machines
Copy /app/hadoop (including data, name, tmp and the configured hadoop-2.7.3 directory) to the other machines.
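One way to do the copy from the NameNode host, sketched with rsync (scp -r works just as well and avoids an extra package, since rsync is not in the yum list above):

```bash
# assumes /app already exists on dn1/dn2 (it does if the JDK/Scala steps were repeated there)
for h in dn1.hadoop.data.example.net dn2.hadoop.data.example.net; do
  rsync -az /app/hadoop/ "$h":/app/hadoop/
done
```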
6. Start DFS
start-dfs.sh
7. Start YARN
start-yarn.sh
8. Start the JobHistory server
mr-jobhistory-daemon.sh start historyserver
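At this point the process list on each node should match the table at the top. jps (shipped with the JDK) gives a quick check:

```bash
jps
# expected on nn.hadoop.data.example.net:       NameNode, SecondaryNameNode, ResourceManager, JobHistoryServer
# expected on dn1/dn2.hadoop.data.example.net:  DataNode, NodeManager
```

The web UIs are also worth a look: http://nn.hadoop.data.example.net:50070 (HDFS), port 8088 (YARN) and port 19888 (JobHistory, as configured in mapred-site.xml).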
Install Spark 2
0. Create directories
mkdir /app/spark
1. Download Spark 2
wget http://www.apache.org/dyn/closer.lua/spark/spark-2.1.0/spark-2.1.0-bin-hadoop2.7.tgz
2. Move & extract
mv spark-2.1.0-bin-hadoop2.7.tgz /app/spark
cd /app/spark && tar -zxvf spark-2.1.0-bin-hadoop2.7.tgz
3. Edit the configuration files
/etc/profile (requires root)
export SPARK_HOME=/app/spark/spark-2.1.0-bin-hadoop2.7
export PATH="$SPARK_HOME/bin:$PATH"
spark-env.sh
cp spark-env.sh.template spark-env.sh
export SCALA_HOME=/app/scala/scala-2.12.1
export JAVA_HOME=/app/java/jdk1.8.0_121
export SPARK_MASTER_IP=nn.hadoop.data.example.net
export SPARK_WORKER_MEMORY=1g
export HADOOP_CONF_DIR=/app/hadoop/hadoop-2.7.3/etc/hadoop
slaves
dn1.hadoop.data.example.net
dn2.hadoop.data.example.net
4. Copy files to the other machines
Copy /app/spark to the other machines (the same rsync/scp approach used for /app/hadoop works here).
5. Start Spark
/app/spark/spark-2.1.0-bin-hadoop2.7/sbin/start-all.sh
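A minimal smoke test of the standalone cluster, assuming the default master port 7077 and the examples jar that ships with this Spark build:

```bash
/app/spark/spark-2.1.0-bin-hadoop2.7/bin/spark-submit \
  --master spark://nn.hadoop.data.example.net:7077 \
  --class org.apache.spark.examples.SparkPi \
  /app/spark/spark-2.1.0-bin-hadoop2.7/examples/jars/spark-examples_*.jar 100
# the driver output should contain a line like "Pi is roughly 3.14..."
```

The Spark master web UI (port 8080 by default) should also list both workers.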
Installation complete ^_^
Since the environment variables were changed several times along the way, here is the complete set; it could also simply be pasted into /etc/profile before starting the configuration:
export JAVA_HOME=/app/java/jdk1.8.0_121
export SCALA_HOME=/app/scala/scala-2.12.1
export PATH=$JAVA_HOME/bin:$SCALA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export HADOOP_HOME=/app/hadoop/hadoop-2.7.3
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
export SPARK_HOME=/app/spark/spark-2.1.0-bin-hadoop2.7
export PATH="$SPARK_HOME/bin:$PATH"
Run source /etc/profile (or log out and back in) for the changes to take effect.
References
CentOS 6.5 hadoop 2.7.3 集群环境搭建
http://blog.csdn.net/mxxlevel/article/details/52653086
Spark修炼之道(进阶篇)——Spark入门到精通:第一节 Spark 1.5.0集群搭建
https://yq.aliyun.com/articles/60309?spm=5176.8251999.569296.66.0H8Bal
Hadoop2.7.3+Spark2.1.0 完全分布式环境 搭建全过程
http://www.cnblogs.com/purstar/p/6293605.html