
Compiling and Setting Up a hadoop-2.6.4 Cluster on Alibaba Cloud and Tencent Cloud


Compiling and building a Hadoop cluster on Tencent Cloud and Alibaba Cloud

From: http://blog.csdn.net/u014595668/article/details/52079753

Environment Preparation

Alibaba Cloud configuration:
[hadoop@lizer_ali ~]$ uname -a
Linux lizer_ali 2.6.32-573.22.1.el6.x86_64 #1 SMP Wed Mar 23 03:35:39 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
[hadoop@lizer_ali ~]$ head -n 1 /etc/issue
CentOS release 6.5 (Final)
[hadoop@lizer_ali ~]$ cat /proc/cpuinfo | grep name | cut -f2 -d: | uniq -c
1  Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz
[hadoop@lizer_ali ~]$ getconf LONG_BIT
64
[hadoop@lizer_ali ~]$ cat /proc/meminfo
MemTotal:        1018508 kB
MemFree:          353912 kB
Tencent Cloud configuration:
[hadoop@lizer_tx ~]$ uname -a
Linux lizer_tx 2.6.32-573.18.1.el6.x86_64 #1 SMP Tue Feb 9 22:46:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
[hadoop@lizer_tx ~]$ head -n 1 /etc/issue
CentOS release 6.7 (Final)
[hadoop@lizer_tx ~]$ cat /proc/cpuinfo | grep name | cut -f2 -d: | uniq -c
1  Intel(R) Xeon(R) CPU E5-26xx v3
[hadoop@lizer_tx ~]$ getconf LONG_BIT
64
[hadoop@lizer_tx ~]$ cat /proc/meminfo
MemTotal:        1020224 kB
MemFree:          688488 kB

Create a User

useradd hadoop
passwd hadoop

Installing JDK 1.7:

Download: http://www.oracle.com/technetwork/java/javase/downloads/java-archive-downloads-javase7-521261.html#jdk-7u80-oth-JPR
wget http://download.oracle.com/otn/java/jdk/7u80-b15/jdk-7u80-linux-x64.tar.gz?AuthParam=1469844164_7ce09e1f99570835183215c3510e95e0
mv jdk-7u80-linux-x64.tar.gz\?AuthParam\=1469844164_7ce09e1f99570835183215c3510e95e0 jdk-7u80-linux-x64.tar.gz
Unpack the JDK:
tar zxf jdk-7u80-linux-x64.tar.gz -C /opt/
Configure environment variables:
vim /etc/profile
export JAVA_HOME=/opt/jdk1.7.0_80
export JRE_HOME=/opt/jdk1.7.0_80/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
Apply the changes:
source /etc/profile
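A quick sanity check that the new JDK is active (the exact banner may differ slightly):

java -version    # should report java version "1.7.0_80"
which java       # should resolve to /opt/jdk1.7.0_80/bin/java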

Software Required to Compile Hadoop 2.6.4

yum install gcc cmake gcc-c++

Installing Maven

wget http://www-eu.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gz
(Maven installation reference: http://www.blogjava.net/caojianhua/archive/2011/04/02/347559.html)
tar zxf apache-maven-3.3.9-bin.tar.gz -C /usr/local/
vim /etc/profile
export MAVEN_HOME=/usr/local/apache-maven-3.3.9
export PATH=$PATH:$MAVEN_HOME/bin
source /etc/profile
[root@lizer_ali hadoop]# mvn -v
Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-11T00:41:47+08:00)
Maven home: /usr/local/apache-maven-3.3.9
Java version: 1.7.0_80, vendor: Oracle Corporation
Java home: /opt/jdk1.7.0_80/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "2.6.32-573.22.1.el6.x86_64", arch: "amd64", family: "unix"

Installing protobuf

Hadoop 2.6.4 requires protobuf 2.5.0 exactly:
wget https://github.com/google/protobuf/releases/download/v2.5.0/protobuf-2.5.0.tar.gz
tar zxf protobuf-2.5.0.tar.gz
cd protobuf-2.5.0/
./configure --prefix=/usr/local/protobuf-2.5.0
make && make install

vim /etc/profile
export PROTOBUF=/usr/local/protobuf-2.5.0
export PATH=$PROTOBUF/bin:$PATH
source /etc/profile
protoc --version    # should print: libprotoc 2.5.0

Installing Ant

wget http://www-eu.apache.org/dist//ant/binaries/apache-ant-1.9.7-bin.tar.gz
tar zxf apache-ant-1.9.7-bin.tar.gz -C /usr/local/
vim /etc/profile
export ANT_HOME=/usr/local/apache-ant-1.9.7
export PATH=$PATH:$ANT_HOME/bin
source /etc/profile
ant -version 

Apache Ant(TM) version 1.9.7 compiled on April 9 2016
yum install autoconf automake libtool
yum install openssl-devel

Installing FindBugs

Download page: http://findbugs.sourceforge.net/downloads.html
wget http://prdownloads.sourceforge.net/findbugs/findbugs-3.0.1.tar.gz?download
mv findbugs-3.0.1.tar.gz\?download findbugs-3.0.1.tar.gz
tar zxf findbugs-3.0.1.tar.gz -C /usr/local/
vim /etc/profile
export FINDBUGS_HOME=/usr/local/findbugs-3.0.1
export PATH=$FINDBUGS_HOME/bin:$PATH

source /etc/profile
findbugs -version

Compiling and Installing Hadoop:

Download Hadoop from http://hadoop.apache.org/releases.html:
wget http://www-eu.apache.org/dist/hadoop/common/hadoop-2.6.4/hadoop-2.6.4-src.tar.gz
tar zxf hadoop-2.6.4-src.tar.gz 

cd hadoop-2.6.4-src
more BUILDING.txt
Read it for the full build instructions, then run the build:
mvn clean package -Pdist,native,docs -DskipTests -Dtar
The build downloads a large number of dependencies, so expect a long wait. It has succeeded once every Hadoop module reports SUCCESS. If a download stalls, re-run the command above, or fetch the artifact manually from the repository named in the error message (https://repo.maven.apache.org/maven2) and place it in the local Maven repository.

Error 1:
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-common: An Ant BuildException
has occured: input file /home/hadoop/hadoop-2.6.4-src/hadoop-common-project/hadoop-common/target/findbugsXml.xml does not exist 

[ERROR] around Ant part ...<xslt style="/usr/local/findbugs-3.0.1/src/xsl/default.xsl" in="/home/hadoop/hadoop-2.6.4-src/hadoop-common-project/hadoop-common/target/findbugsXml.xml" out="/home/hadoop/hadoop-2.6.4-src/hadoop-common-project/hadoop-common/target/site/findbugs.html"/>...
@ 44:256 in /home/hadoop/hadoop-2.6.4-src/hadoop-common-project/hadoop-common/target/antrun/build-main.xml 
Fix 1 (see http://www.itnose.net/detail/6143808.html): drop the docs profile from the command and re-run:
mvn package -Pdist,native -DskipTests -Dtar

Error 2:
[INFO] Executing tasks 

main: 

[mkdir] Created dir: /home/hadoop/hadoop-2.6.4-src/hadoop-common-project/hadoop-kms/downloads 

[get] Getting: http://archive.apache.org/dist/tomcat/tomcat-6/v6.0.41/bin/apache-tomcat-6.0.41.tar.gz  
[get] To: /home/hadoop/hadoop-2.6.4-src/hadoop-common-project/hadoop-kms/downloads/apache-tomcat-6.0.41.tar.gz 
Fix 2: the build hangs here because the Tomcat tarball cannot be downloaded from this network. Download apache-tomcat-6.0.41.tar.gz elsewhere (through a proxy if necessary) and upload it to the downloads directory shown above.

Error 3:
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-javadoc-plugin:2.8.1:jar (module-javadocs) on project hadoop-hdfs:
MavenReportException: Error while creating archive: 

[ERROR] ExcludePrivateAnnotationsStandardDoclet 

[ERROR] Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000f31a4000,
130400256, 0) failed; error='Cannot allocate memory' (errno=12) 

[ERROR] # 

[ERROR] # There is insufficient memory for the Java Runtime Environment
to continue. 

[ERROR] # Native memory allocation (malloc) failed to allocate 130400256 bytes for committing reserved memory. 

[ERROR] # An error report file with more information is saved as: 

[ERROR] # /home/hadoop/hadoop-2.6.4-src/hadoop-hdfs-project/hadoop-hdfs/target/hs_err_pid24729.log 

[ERROR] 

[ERROR] Error occurred during initialization of VM, try to reduce the Java heap size for the MAVEN_OPTS environnement variable using -Xms:<size> and -Xmx:<size>. 

[ERROR] Or, try to reduce the Java heap size for the Javadoc goal using -Dminmemory=<size> and -Dmaxmemory=<size>. 
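As the log itself suggests, one mitigation is to cap the heap of the JVMs Maven spawns via MAVEN_OPTS (the sizes below are illustrative, not tuned values):

export MAVEN_OPTS="-Xms256m -Xmx512m"

On these 1 GB instances, though, the root cause is the lack of swap, which the fix below addresses.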
Fix 3: there is not enough memory because no swap was allocated. Add a 2 GB swap file.

Create or extend the swap file (bs is the block size, 512 bytes here; count is the number of blocks, so 512 B x 4096000 is a zero-filled file of roughly 2 GB at /home/swap; adjust the of= path as you like):
dd if=/dev/zero of=/home/swap bs=512 count=4096000

Check memory before and after with free -m. Format and enable the swap file:
mkswap /home/swap
swapon /home/swap

Check that it is active:
swapon -s

Enable it at boot by adding this line to /etc/fstab:
/home/swap swap swap defaults 0 0

To disable the swap file later:
swapoff /home/swap

Error 4:
main: 

[mkdir] Created dir: /home/hadoop/hadoop-2.6.4-src/hadoop-hdfs-project/hadoop-hdfs-httpfs/downloads 

[get] Getting: http://archive.apache.org/dist/tomcat/tomcat-6/v6.0.41/bin/apache-tomcat-6.0.41.tar.gz  
[get] To: /home/hadoop/hadoop-2.6.4-src/hadoop-hdfs-project/hadoop-hdfs-httpfs/downloads/apache-tomcat-6.0.41.tar.gz 
Fix 4: same network problem as above. Reuse the Tomcat tarball already downloaded for hadoop-kms:
cp /home/hadoop/hadoop-2.6.4-src/hadoop-common-project/hadoop-kms/downloads/apache-tomcat-6.0.41.tar.gz /home/hadoop/hadoop-2.6.4-src/hadoop-hdfs-project/hadoop-hdfs-httpfs/downloads/ 
The build completed successfully:
[INFO] Apache Hadoop Gridmix .............................. SUCCESS [  6.239 s]
[INFO] Apache Hadoop Data Join ............................ SUCCESS [  4.070 s]
[INFO] Apache Hadoop Ant Tasks ............................ SUCCESS [  3.304 s]
[INFO] Apache Hadoop Extras ............................... SUCCESS [  4.653 s]
[INFO] Apache Hadoop Pipes ................................ SUCCESS [  8.279 s]
[INFO] Apache Hadoop OpenStack support .................... SUCCESS [  7.736 s]
[INFO] Apache Hadoop Amazon Web Services support .......... SUCCESS [06:22 min]
[INFO] Apache Hadoop Client ............................... SUCCESS [  9.608 s]
[INFO] Apache Hadoop Mini-Cluster ......................... SUCCESS [  0.258 s]
[INFO] Apache Hadoop Scheduler Load Simulator ............. SUCCESS [  6.721 s]
[INFO] Apache Hadoop Tools Dist ........................... SUCCESS [ 15.171 s]
[INFO] Apache Hadoop Tools ................................ SUCCESS [  0.022 s]
[INFO] Apache Hadoop Distribution ......................... SUCCESS [ 37.343 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 31:45 min
[INFO] Finished at: 2016-07-30T23:59:50+08:00
[INFO] Final Memory: 101M/241M
[INFO] ------------------------------------------------------------------------
The executable distribution has been generated under hadoop-dist/target/. Copy it to the home directory:
cp -r hadoop-2.6.4 ~/
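Since the point of building from source is getting working native libraries, it is worth confirming that they load (checknative ships with Hadoop 2.x):

cd ~/hadoop-2.6.4
bin/hadoop checknative -a    # each library should show true with a resolved path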

Configuration

Passwordless SSH Between the Two Machines

ssh-keygen
Put the public key generated on each machine onto the other machine, then append it:
cat id_rsa_else.pub >> authorized_keys 

chmod 600 authorized_keys
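A quick round trip should now work without a password prompt (IP taken from the network plan below):

ssh hadoop@123.206.33.182 hostname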

Network plan:

hadoop1 123.206.33.182 slave 

hadoop0 114.215.92.77 master

Configure hosts

vim /etc/hosts 

123.206.33.182 hadoop1 tx lizer_tx 

114.215.92.77 hadoop0 ali lizer_ali
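To verify that the names resolve on both machines once the entries above are in place:

ping -c 1 hadoop0
ping -c 1 hadoop1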

Configure environment variables

vim /etc/profile 

export HADOOP_HOME=/home/hadoop/hadoop-2.6.4 

export PATH=$HADOOP_HOME/bin:$PATH
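Optionally, also put the daemon scripts on the PATH so the start/stop scripts under sbin/ can be invoked from any directory:

export PATH=$HADOOP_HOME/sbin:$PATH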

Hadoop Configuration

The configuration files live under $HADOOP_HOME/etc/hadoop/. Make the following changes:
vim hadoop-env.sh
export JAVA_HOME=/opt/jdk1.7.0_80
vim yarn-env.sh 
export JAVA_HOME=/opt/jdk1.7.0_80
vim slaves (this release no longer uses a masters configuration file)
hadoop1
vim core-site.xml
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop0:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>4096</value>
  </property>
</configuration>
vim hdfs-site.xml
<configuration>
  <property>
    <name>dfs.http.address</name>
    <value>hadoop0:50070</value>
  </property>
  <property>
    <name>dfs.secondary.http.address</name>
    <value>hadoop0:50090</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hadoop/hadoop/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadoop/hadoop/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.nameservices</name>
    <value>hadoop0</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop0:50090</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>
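The tmp, name, and data directories referenced in core-site.xml and hdfs-site.xml are easiest to create up front on both machines (paths taken straight from the configs above):

mkdir -p /home/hadoop/hadoop/tmp /home/hadoop/hadoop/name /home/hadoop/hadoop/data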
cp mapred-site.xml.template mapred-site.xml
vim mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
    <final>true</final>
  </property>
  <property>
    <name>mapreduce.jobtracker.http.address</name>
    <value>hadoop0:50030</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>hadoop0:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hadoop0:19888</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>hadoop0:9001</value>
  </property>
</configuration>
vim yarn-site.xml
<configuration>

  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop0</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>hadoop0:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>hadoop0:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>hadoop0:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>hadoop0:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>hadoop0:8088</value>
  </property>
</configuration>
vim master
hadoop0
scp -r /home/hadoop/hadoop-2.6.4/etc/hadoop/* tx:~/hadoop-2.6.4/etc/hadoop/

Starting and Stopping Hadoop

Reference: http://my.oschina.net/penngo/blog/653049
bin/hdfs namenode -format
sbin/start-dfs.sh
sbin/stop-dfs.sh
sbin/start-yarn.sh
sbin/stop-yarn.sh
sbin/mr-jobhistory-daemon.sh start historyserver
sbin/mr-jobhistory-daemon.sh stop historyserver
sbin/hadoop-daemon.sh start secondarynamenode
sbin/hadoop-daemon.sh stop secondarynamenode
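The commands above are the full start/stop menu; a typical first run on the master reduces to the following sequence (the format step is one-time only and wipes HDFS metadata):

bin/hdfs namenode -format
sbin/start-dfs.sh
sbin/start-yarn.sh
sbin/mr-jobhistory-daemon.sh start historyserver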
[hadoop@lizer_ali hadoop-2.6.4]$ jps
3099 ResourceManager
3430 SecondaryNameNode
2879 NameNode
3470 Jps
3382 JobHistoryServer
[hadoop@lizer_tx ~]$ jps
9757 DataNode
9853 NodeManager
10064 Jps
Check the node report:
bin/hadoop dfsadmin -report
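On Hadoop 2.x the hadoop dfsadmin form prints a deprecation warning; the current equivalent is:

bin/hdfs dfsadmin -report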
YARN ResourceManager web UI (cluster and applications):
http://114.215.92.77:8088/cluster
HDFS NameNode web UI:
http://114.215.92.77:50070/dfshealth.html#tab-overview

Create a Directory

bin/hdfs dfs -mkdir -p input
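To confirm the directory landed in the user's HDFS home (/user/hadoop by default):

bin/hdfs dfs -ls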
References:
http://blog.csdn.net/u014595668/article/details/52079753
http://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-common/SingleCluster.html
http://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-common/ClusterSetup.html