Setting Up a Hadoop Distributed Cluster

2016-07-27 17:33
Hadoop version: hadoop-0.20.205.0-1.i386.rpm

JDK version: jdk-6u35-linux-i586-rpm.bin

Environment: 64-bit Linux (the packages used here are 32-bit i386/i586 builds)

master: 10.10.30.64

slave1: 10.10.30.65

slave2: 10.10.30.68

Overall steps:

1. Edit the hostname mappings in /etc/hosts (after cloning the VMs, re-edit and redistribute the file if the entries no longer match the actual addresses).

2. Create a regular user (hadoop); Hadoop will run under this account.

3. Install the JDK as root.

4. Set the environment variables.

5. Install Hadoop and edit its configuration files.

6. Clone the VM twice, to serve as slave1 and slave2.

7. Configure SSH so that logins between any two nodes, and from each node to itself, are password-free.

8. Format the NameNode as the regular (hadoop) user.

9. Start everything and check that it is running properly.

------------------------1. Map host IPs in /etc/hosts-----------------------------
------------------------2. Add the hadoop user to run Hadoop---------------------
------------------------3. Install the JDK and set environment variables----------

[test@master Desktop]$ su - root
Password:
[root@master ~]# useradd hadoop
[root@master ~]# passwd hadoop
Changing password for user hadoop.
New password:
BAD PASSWORD: it is too short
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.
[root@master ~]# vim /etc/hosts
[root@master ~]# cat /etc/hosts
10.10.30.64 master
10.10.30.65 slave1
10.10.30.68 slave2
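
A quick sanity check before moving on (optional): make sure every hostname in /etc/hosts resolves and answers. A minimal sketch, run from master, assuming both slaves are already powered on:

for h in master slave1 slave2; do
    ping -c 1 $h > /dev/null && echo "$h ok" || echo "$h unreachable"
done
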
[root@master ~]# cd /home/test/
[root@master test]# ls
hadoop-0.20.205.0-1.i386.rpm  Public     Videos    Documents  Music
jdk-6u35-linux-i586-rpm.bin   Templates  Pictures  Downloads  Desktop
[root@master test]# chmod 744 jdk-6u35-linux-i586-rpm.bin  #make the .bin installer executable
[root@master test]# ./jdk-6u35-linux-i586-rpm.bin
Unpacking...
Checksumming...
Extracting...
UnZipSFX 5.50 of 17 February 2002, by Info-ZIP (Zip-Bugs@lists.wku.edu).
inflating: jdk-6u35-linux-i586.rpm
inflating: sun-javadb-common-10.6.2-1.1.i386.rpm
inflating: sun-javadb-core-10.6.2-1.1.i386.rpm
inflating: sun-javadb-client-10.6.2-1.1.i386.rpm
inflating: sun-javadb-demo-10.6.2-1.1.i386.rpm
inflating: sun-javadb-docs-10.6.2-1.1.i386.rpm
inflating: sun-javadb-javadoc-10.6.2-1.1.i386.rpm
Preparing...                ########################################### [100%]
1:jdk                    ########################################### [100%]
Unpacking JAR files...
rt.jar...
jsse.jar...
charsets.jar...
tools.jar...
localedata.jar...
plugin.jar...
javaws.jar...
deploy.jar...
Installing JavaDB
Preparing...                ########################################### [100%]
1:sun-javadb-common      ########################################### [ 17%]
2:sun-javadb-core        ########################################### [ 33%]
3:sun-javadb-client      ########################################### [ 50%]
4:sun-javadb-demo        ########################################### [ 67%]
5:sun-javadb-docs        ########################################### [ 83%]
6:sun-javadb-javadoc     ########################################### [100%]

Java(TM) SE Development Kit 6 successfully installed.

Product Registration is FREE and includes many benefits:
* Notification of new versions, patches, and updates
* Special offers on Oracle products, services and training
* Access to early releases and documentation

Product and system data will be collected. If your configuration
supports a browser, the JDK Product Registration form will
be presented. If you do not register, none of this information
will be saved. You may also register your JDK later by
opening the register.html file (located in the JDK installation
directory) in a browser.

For more information on what data Registration collects and
how it is managed and used, see: http://java.sun.com/javase/registration/JDKRegistrationPrivacy.html 
Press Enter to continue.....

Done.
[root@master test]# vim /etc/profile
[root@master test]# ls /usr/java/jdk1.6.0_35/
bin        lib          register.html        THIRDPARTYLICENSEREADME.txt
COPYRIGHT  LICENSE      register_ja.html
include    man          register_zh_CN.html
jre        README.html  src.zip
[root@master test]# tail -3 /etc/profile	#set the environment variables
export JAVA_HOME=/usr/java/jdk1.6.0_35
export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH
[root@master test]#
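
After appending those lines, reload /etc/profile in the current shell and confirm the JDK is picked up. A minimal check:

source /etc/profile
java -version       # should report java version "1.6.0_35"
echo $JAVA_HOME     # should print /usr/java/jdk1.6.0_35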

-----------------------Install Hadoop and edit the configuration files-----------------------
#After this step is done, clone the VM twice to serve as slave1 and slave2.
#If the clones come up with IPs that differ from /etc/hosts, update them to the actual addresses.

[root@master test]# ls
hadoop-0.20.205.0-1.i386.rpm            Public
jdk-6u35-linux-i586.rpm                 Templates
jdk-6u35-linux-i586-rpm.bin             Videos
sun-javadb-client-10.6.2-1.1.i386.rpm   Pictures
sun-javadb-common-10.6.2-1.1.i386.rpm   Documents
sun-javadb-core-10.6.2-1.1.i386.rpm     Downloads
sun-javadb-demo-10.6.2-1.1.i386.rpm     Music
sun-javadb-docs-10.6.2-1.1.i386.rpm     Desktop
sun-javadb-javadoc-10.6.2-1.1.i386.rpm
[root@master test]# rpm -ivh hadoop-0.20.205.0-1.i386.rpm
Preparing...                ########################################### [100%]
1:hadoop                 ########################################### [100%]
[root@master test]# cd /etc/hadoop/
[root@master hadoop]# ls
capacity-scheduler.xml      hadoop-policy.xml      slaves
configuration.xsl           hdfs-site.xml          ssl-client.xml.example
core-site.xml               log4j.properties       ssl-server.xml.example
fair-scheduler.xml          mapred-queue-acls.xml  taskcontroller.cfg
hadoop-env.sh               mapred-site.xml
hadoop-metrics2.properties  masters
[root@master hadoop]# vim hadoop-env.sh

export JAVA_HOME=/usr/java/jdk1.6.0_35

[root@master hadoop]# vim core-site.xml
[root@master hadoop]# cat core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop/tmp</value>   <!-- this directory must be created in advance -->
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://10.10.30.64:9000</value>   <!-- use the IP 10.10.30.64 here, not a hostname, or the hadoop-eclipse plugin will fail to connect -->
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/opt/hadoop/name</value>
  </property>
</configuration>
(Note: if hadoop.tmp.dir is not set, the default temporary directory is /tmp/hadoop-hadoop. /tmp is wiped on every reboot, so the NameNode would have to be re-formatted after each restart or it will fail with errors.)
[root@master hadoop]# vim hdfs-site.xml
[root@master hadoop]# cat hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/opt/hadoop/hdfs_data</value>
  </property>
</configuration>
(Note: this directory must also be created in advance; a creation sketch for all of these paths follows the mapred-site.xml section below.)
[root@master hadoop]# vim mapred-site.xml    #(in Hadoop 2.x the JobTracker settings here are superseded by YARN, configured in yarn-site.xml)
[root@master hadoop]# cat mapred-site.xml

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>hdfs://10.10.30.64:9001</value>
  </property>
  <property>
    <name>mapred.system.dir</name>
    <value>/opt/hadoop/mapred_system</value>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/opt/hadoop/mapred_local</value>
  </property>
</configuration>
(Note: here too, use the IP rather than localhost, otherwise Eclipse will not be able to connect!)
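
Before formatting, create the local directories these files point at, on every node. A minimal sketch run as root; the chown assumes Hadoop runs as the hadoop user (mapred.system.dir normally lives in HDFS and is created by the JobTracker, so only the local paths are made here):

mkdir -p /opt/hadoop/tmp /opt/hadoop/name /opt/hadoop/hdfs_data /opt/hadoop/mapred_local
chown -R hadoop:hadoop /opt/hadoop
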
[root@master hadoop]# cat masters    #edit the masters and slaves files ahead of time; they define which node is master and which are slaves
master
[root@master hadoop]# cat slaves
slave1
slave2
[root@master hadoop]#
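
Since the slaves here are clones of master they already carry these files, but any configuration edited later has to be pushed out again. A minimal sketch, assuming root SSH access to both slaves:

for h in slave1 slave2; do
    scp /etc/hadoop/*.xml /etc/hadoop/masters /etc/hadoop/slaves /etc/hadoop/hadoop-env.sh $h:/etc/hadoop/
done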

------------------------Switch to the hadoop user and set up passwordless SSH------------------------------

[hadoop@master ~]$ ssh-keygen -t dsa	#do this step on both slave nodes as well
Generating public/private dsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_dsa):
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_dsa.
Your public key has been saved in /home/hadoop/.ssh/id_dsa.pub.
The key fingerprint is:
6f:88:68:8a:d6:c7:b0:c7:e2:8b:b7:fa:7b:b4:a1:56 hadoop@master
The key's randomart image is:
+--[ DSA 1024]----+
|                 |
|                 |
|                 |
|                 |
|        S        |
|   . E . o       |
|  . @ + . o      |
| o.X B   .       |
|ooB*O            |
+-----------------+
[hadoop@master ~]$ cd .ssh/
[hadoop@master .ssh]$ ls
id_dsa  id_dsa.pub
[hadoop@master .ssh]$ cp id_dsa.pub authorized_keys	#the public key must end up in a file named authorized_keys

#Edit authorized_keys and append the contents of the id_dsa.pub generated on each of the two slave nodes.
[hadoop@master .ssh]$ vim authorized_keys
[hadoop@master .ssh]$ exit
logout
[test@master .ssh]$ su - root
Password:
[root@master ~]# cd /home/hadoop/
[root@master hadoop]# ls
[root@master hadoop]# cd .ssh/
[root@master .ssh]# ls
authorized_keys  id_dsa  id_dsa.pub

#Switch to root and copy authorized_keys into /home/hadoop/.ssh on both slave nodes.
#root is not prompted for a password here because I set up passwordless login for root earlier as well.
[root@master .ssh]# scp authorized_keys slave1:/home/hadoop/.ssh/
authorized_keys                                                100% 1602     1.6KB/s   00:00
[root@master .ssh]# scp authorized_keys slave2:/home/hadoop/.ssh/
authorized_keys                                                100% 1602     1.6KB/s   00:00
[root@master .ssh]#
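
sshd is strict about permissions: if ~/.ssh or authorized_keys is group- or world-writable, passwordless login silently falls back to password prompts. The usual safe modes, applied as the hadoop user on every node:

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys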

#Once the keys are copied, the three machines can ssh to each other as the hadoop user without passwords.
#Note: the very first login to each host still prompts (to accept the host key); after that it is password-free.
#Make sure to walk through every pair, including each machine logging in to itself, as in the sketch below.
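
A minimal loop to prime the host keys and confirm the password-free logins, run as the hadoop user on each of the three machines:

for h in master slave1 slave2; do
    ssh $h hostname    # answer "yes" to the host-key prompt on the first run; no password should be asked
done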

-------------------------------Format and start Hadoop---------------------------
#Note: turn off the firewalls on all three machines for this test, as shown below.
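On RHEL/CentOS-era systems, iptables -F only clears the rules until the next reboot. To keep the firewall and SELinux enforcement off across reboots while testing, run something like this as root on each node:

service iptables stop
chkconfig iptables off     # do not start iptables at boot
setenforce 0               # immediate; edit /etc/selinux/config to persist
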
[hadoop@master ~]$ /usr/bin/hadoop namenode -format
12/09/01 16:52:24 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master/10.10.30.64
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.205.0
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-205 -r 1179940; compiled by 'hortonfo' on Fri Oct  7 06:19:16 UTC 2011
************************************************************/
12/09/01 16:52:24 INFO util.GSet: VM type       = 32-bit
12/09/01 16:52:24 INFO util.GSet: 2% max memory = 2.475 MB
12/09/01 16:52:24 INFO util.GSet: capacity      = 2^19 = 524288 entries
12/09/01 16:52:24 INFO util.GSet: recommended=524288, actual=524288
12/09/01 16:52:24 INFO namenode.FSNamesystem: fsOwner=hadoop
12/09/01 16:52:24 INFO namenode.FSNamesystem: supergroup=supergroup
12/09/01 16:52:24 INFO namenode.FSNamesystem: isPermissionEnabled=true
12/09/01 16:52:24 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
12/09/01 16:52:24 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
12/09/01 16:52:24 INFO namenode.NameNode: Caching file names occuring more than 10 times
12/09/01 16:52:24 INFO common.Storage: Image file of size 112 saved in 0 seconds.
12/09/01 16:52:25 INFO common.Storage: Storage directory /tmp/hadoop-hadoop/dfs/name has been successfully formatted.
12/09/01 16:52:25 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/10.10.30.64
************************************************************/
[hadoop@master ~]$ /usr/bin/start-all.sh	#startup output like the following means everything launched normally
starting namenode, logging to /var/log/hadoop/hadoop/hadoop-hadoop-namenode-master.out
slave2: starting datanode, logging to /var/log/hadoop/hadoop/hadoop-hadoop-datanode-slave2.out
slave1: starting datanode, logging to /var/log/hadoop/hadoop/hadoop-hadoop-datanode-slave1.out
master: starting secondarynamenode, logging to /var/log/hadoop/hadoop/hadoop-hadoop-secondarynamenode-master.out
starting jobtracker, logging to /var/log/hadoop/hadoop/hadoop-hadoop-jobtracker-master.out
slave2: starting tasktracker, logging to /var/log/hadoop/hadoop/hadoop-hadoop-tasktracker-slave2.out
slave1: starting tasktracker, logging to /var/log/hadoop/hadoop/hadoop-hadoop-tasktracker-slave1.out
#But jps shows none of the Hadoop daemons. Two likely causes:
#1. The firewall is still on.
#2. If the firewall is already off and this still happens, reboot the machine.
[hadoop@master ~]$ /usr/java/jdk1.6.0_35/bin/jps
28499 Jps
[root@master ~]# iptables -F
[root@master ~]# exit
logout
[hadoop@master ~]$ /usr/bin/start-all.sh
starting namenode, logging to /var/log/hadoop/hadoop/hadoop-hadoop-namenode-master.out
slave2: starting datanode, logging to /var/log/hadoop/hadoop/hadoop-hadoop-datanode-slave2.out
slave1: starting datanode, logging to /var/log/hadoop/hadoop/hadoop-hadoop-datanode-slave1.out
master: starting secondarynamenode, logging to /var/log/hadoop/hadoop/hadoop-hadoop-secondarynamenode-master.out
starting jobtracker, logging to /var/log/hadoop/hadoop/hadoop-hadoop-jobtracker-master.out
slave2: starting tasktracker, logging to /var/log/hadoop/hadoop/hadoop-hadoop-tasktracker-slave2.out
slave1: starting tasktracker, logging to /var/log/hadoop/hadoop/hadoop-hadoop-tasktracker-slave1.out
[hadoop@master ~]$ /usr/java/jdk1.6.0_35/bin/jps
30630 Jps
---------------------------Normal after the reboot----------------------
------------------------master node---------------------------
[hadoop@master ~]$ /usr/bin/start-all.sh
starting namenode, logging to /var/log/hadoop/hadoop/hadoop-hadoop-namenode-master.out
slave2: starting datanode, logging to /var/log/hadoop/hadoop/hadoop-hadoop-datanode-slave2.out
slave1: starting datanode, logging to /var/log/hadoop/hadoop/hadoop-hadoop-datanode-slave1.out
master: starting secondarynamenode, logging to /var/log/hadoop/hadoop/hadoop-hadoop-secondarynamenode-master.out
starting jobtracker, logging to /var/log/hadoop/hadoop/hadoop-hadoop-jobtracker-master.out
slave2: starting tasktracker, logging to /var/log/hadoop/hadoop/hadoop-hadoop-tasktracker-slave2.out
slave1: starting tasktracker, logging to /var/log/hadoop/hadoop/hadoop-hadoop-tasktracker-slave1.out
[hadoop@master ~]$ /usr/java/jdk1.6.0_35/bin/jps
3388 JobTracker
3312 SecondaryNameNode
3159 NameNode
3533 Jps
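
With the NameNode, SecondaryNameNode and JobTracker up, a quick end-to-end check of HDFS from master; dfsadmin -report should list both DataNodes:

/usr/bin/hadoop dfsadmin -report    # should show 2 live datanodes
/usr/bin/hadoop fs -mkdir /test
/usr/bin/hadoop fs -ls /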

------------------------slave1---------------------------------

[hadoop@master ~]$ ssh slave1
Last login: Sat Sep  1 16:51:48 2012 from slave2
[hadoop@slave1 ~]$ su - root
Password:
[root@slave1 ~]# iptables -F
[root@slave1 ~]# setenforce 0
[root@slave1 ~]# exit
logout
[hadoop@slave1 ~]$ /usr/java/jdk1.6.0_35/bin/jps
3181 TaskTracker
3107 DataNode
3227 Jps

--------------------------slave2------------------------------
[hadoop@master ~]$ ssh slave2
Last login: Sat Sep  1 16:52:02 2012 from slave2
[hadoop@slave2 ~]$ su - root
Password:
[root@slave2 ~]# iptables -F
[root@slave2 ~]# setenforce 0
[root@slave2 ~]# exit
logout
[hadoop@slave2 ~]$ /usr/java/jdk1.6.0_35/bin/jps
3165 DataNode
3297 Jps
3241 TaskTracker
[hadoop@slave2 ~]$

--------------------------------------------------------------
Problem (seen later, on a YARN-based setup): the NodeManager on a slave node
starts, but its log shows it cannot reach the master and keeps retrying:
2016-08-04 09:40:03,188 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-08-04 09:40:04,189 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
...
2016-08-04 09:40:12,194 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8031. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
Fix:
Add the following to yarn-site.xml (without these addresses the slave falls back to the default 0.0.0.0 and never finds the ResourceManager):

<property>
  <name>yarn.resourcemanager.address</name>
  <value>master:8032</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>master:8030</value>
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>master:8031</value>
</property>
Restart Hadoop and check the NodeManager log again; the retries against 0.0.0.0:8031 should be gone.
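
A minimal restart-and-verify sequence; the log location below is an assumption based on the packaging used here and may differ on your install:

stop-all.sh && start-all.sh                   # run on master
tail -f /var/log/hadoop/*nodemanager*.log     # run on a slave; path is assumed, adjust to your layout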


Reference: http://blog.csdn.net/chen_jp/article/details/7933103
Tags: hadoop, distributed, cluster