
Detailed Steps for Installing a Hadoop Cluster on Linux

2011-08-26 16:04

1. Environment requirements (installing CentOS 6 in a virtual machine is not covered here)

CentOS 6 + hadoop-0.21.0.tar.gz

2. Server configuration (each of my servers has 2 CPUs, 2 GB of RAM, and a 100 GB disk)

The server IPs should ideally be fixed (static); in other words, addresses the machines can reach from one another with the ping command.

I recommend doing this on company infrastructure, since the company has the resources for it. I set up three datanodes here, named

Datanode1, Datanode2, Datanode3 (these are the hostnames of the virtual machines).

Server name    IP address (choose your own)

Namenode 192.168.16.1

Datanode1 192.168.16.2

Datanode2 192.168.16.3

Datanode3 192.168.16.4

2.1 To change a server's IP address, edit the network script as follows (the values below are sample values; substitute the addresses you planned above):

vi /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE="eth0"

# This is the physical (MAC) address of your network card; if the card was detected automatically you do not need to type it in.

# The entry is already present when you open the file and does not need to be changed.

HWADDR="00:0C:29:95:1D:A5"

BOOTPROTO="static"

ONBOOT="yes"

# This is the IP address; it must not clash with any other host. Addresses here are handed out in descending order from 253, and an address that is already taken must not be reused.

IPADDR=172.16.101.245

NETMASK=255.255.255.0

NETWORK=172.16.101.0

BROADCAST=172.16.101.255

GATEWAY=172.16.101.254

After saving and exiting, run the following commands so the new network settings take effect immediately:

shell>> ifdown eth0

shell>> ifup eth0

shell>> /etc/init.d/network restart
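To confirm the change took effect, you can check the interface and try to reach another machine (this verification is just a suggestion; the peer address below is only an example, so use one of the cluster IPs you planned above):

shell>> ifconfig eth0                 # should show the IPADDR you configured
shell>> ping -c 3 192.168.16.2        # example peer address; it should reply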

3. Install JDK 6 (I used jdk-6u26-linux-x64-rpm.bin)

3.1. Create the installation directory: mkdir /usr/java/

3.2. Move jdk-6u26-linux-x64-rpm.bin into /usr/java/, make it executable if necessary (chmod +x jdk-6u26-linux-x64-rpm.bin), and run:

./jdk-6u26-linux-x64-rpm.bin

The installer prompts for input while it runs; answering yes and pressing Enter where asked is enough.

When it finishes you will see a directory named jdk1.6.0_26.

3.3. Set the environment variables

Add the following to /etc/profile:

#config java

JAVA_HOME=/usr/java/jdk1.6.0_26

CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar

PATH=$JAVA_HOME/bin:$HOME/bin:$PATH

export PATH JAVA_HOME CLASSPATH

3.4. Apply the settings: source /etc/profile
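As a quick sanity check (not part of the original steps), the JDK and the new variables can be verified in a fresh shell:

shell>> java -version            # should report java version "1.6.0_26"
shell>> echo $JAVA_HOME          # should print /usr/java/jdk1.6.0_26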

4. Install the SSH server and client

a. yum search ssh

b. Find the server package to install (here openssh-server.x86_64)

c. Install the server: yum install openssh-server.x86_64

d. Find the client package to install (here openssh-clients.x86_64)

e. Install the client: yum install openssh-clients.x86_64
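After installation, make sure the SSH daemon is running and enabled at boot; on CentOS 6 the SysV-init commands below should work (it may already be active on your system):

shell>> service sshd start
shell>> chkconfig sshd on        # start sshd automatically on boot
shell>> service sshd status      # should report that sshd is running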

5. Configure SSH so the Namenode and Datanodes can access each other without passwords

a. Use ssh-keygen to create a public/private key pair on the local host

[root@Namenode ~]# ssh-keygen -t rsa

Enter file in which to save the key (/root/.ssh/id_rsa): [Press Enter key]

Enter passphrase (empty for no passphrase): [Press Enter key]

Enter same passphrase again: [Press Enter key]

Your identification has been saved in /root/.ssh/id_rsa.

Your public key has been saved in /root/.ssh/id_rsa.pub.

The key fingerprint is: 33:b3:fe:af:95:95:18:11:31:d5:de:96:2f:f2:35:f9 root@Namenode

b. Use ssh-copy-id to copy the public key to the remote host

[root@Namenode ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@Datanode1

root@Datanode1's password:

Now try logging into the machine, with "ssh 'root@Datanode1'", and check in:

.ssh/authorized_keys to make sure we haven't added extra keys that you weren't expecting.

[Note: ssh-copy-id appends the key to the remote host's .ssh/authorized_keys file.]

c. Log in to the remote host directly

[root@Namenode ~]# ssh Datanode1

Last login: Sun Nov 16 17:22:33 2008 from 192.168.1.2

[Note: SSH does not prompt for a password.]

[root@Datanode1 ~]#

[Note: you are now logged in on the remote host.]

d. Note: all of these commands are executed on the Namenode, and the Namenode also needs passwordless access to itself, i.e.

[root@Namenode ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@Namenode

For the remaining nodes, just repeat steps a-c for Datanode2 and Datanode3.

Passwordless access absolutely must work; otherwise the Hadoop cluster setup is guaranteed to fail.
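The per-host ssh-copy-id calls can also be wrapped in a small loop on the Namenode. This is only a convenience sketch; it assumes the four hostnames already resolve (e.g. via the /etc/hosts entries shown in step 6d below) and that you accept each host key when prompted:

shell>> for host in Namenode Datanode1 Datanode2 Datanode3; do ssh-copy-id -i ~/.ssh/id_rsa.pub root@$host; done
# Verify: each command below must return the remote hostname without asking for a password
shell>> for host in Namenode Datanode1 Datanode2 Datanode3; do ssh root@$host hostname; done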

6. Install Hadoop (here the JDK and Hadoop installation paths are identical on every server)

a. Create the installation directory: mkdir /usr/local/hadoop/

b. Copy the archive hadoop-0.21.0.tar.gz into the installation directory and unpack it:

tar -zxvf hadoop-0.21.0.tar.gz

(If this leaves everything under a hadoop-0.21.0/ subdirectory, move that content up into /usr/local/hadoop/ so the conf/ and bin/ paths used below exist.)

c. Set the environment variables

Add the following to /etc/profile:

#config hadoop

export HADOOP_HOME=/usr/local/hadoop/

export PATH=$HADOOP_HOME/bin:$PATH

# location of the Hadoop log files

export HADOOP_LOG_DIR=${HADOOP_HOME}/logs

Apply the settings: source /etc/profile
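To confirm the Hadoop binaries are on the PATH (again just a quick check, not in the original steps):

shell>> hadoop version           # should report Hadoop 0.21.0
shell>> echo $HADOOP_HOME        # should print /usr/local/hadoop/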

d. Set up the master/slave configuration

/etc/hosts on the Namenode:

192.168.16.1 Namenode

192.168.16.2 Datanode1

192.168.16.3 Datanode2

192.168.16.4 Datanode3

/usr/local/hadoop/conf/masters on the Namenode:

Namenode

/usr/local/hadoop/conf/slaves on the Namenode:

Datanode1

Datanode2

Datanode3

/etc/hosts on Datanode1 (the masters and slaves files in /usr/local/hadoop/conf/ are the same as on the Namenode):

192.168.16.1 Namenode

192.168.16.2 Datanode1

/etc/hosts on Datanode2 (the masters and slaves files in /usr/local/hadoop/conf/ are the same as on the Namenode):

192.168.16.1 Namenode

192.168.16.3 Datanode2

/etc/hosts on Datanode3 (the masters and slaves files in /usr/local/hadoop/conf/ are the same as on the Namenode):

192.168.16.1 Namenode

192.168.16.4 Datanode3
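A quick way to confirm that hostname resolution works is to ping, on each machine, the names listed in that machine's /etc/hosts (a verification suggestion only):

shell>> ping -c 1 Namenode
shell>> ping -c 1 Datanode1      # repeat for every hostname the node needs to reach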

e. Edit the configuration file /usr/local/hadoop/conf/hadoop-env.sh

and change JAVA_HOME to the path where the JDK is installed:

# The java implementation to use. Required.

export JAVA_HOME=/usr/java/jdk1.6.0_26/

f. Edit the configuration file core-site.xml as follows:

<configuration>

<property>

<name>fs.default.name</name>

<value>hdfs://Namenode:9000/</value>

</property>

<property>

<name>hadoop.tmp.dir</name>

<value>/usr/local/hadoop/tmp/</value>

</property>

</configuration>

g. Edit the configuration file hdfs-site.xml as follows:

<configuration>

<property>

<name>dfs.replication</name>

<!-- number of replicas kept for each block -->

<value>1</value>

</property>

</configuration>

h. Edit the configuration file mapred-site.xml as follows:

<configuration>

<property>

<name>mapred.job.tracker</name>

<!-- the JobTracker is usually placed on the same machine as the NameNode, but it can equally run on another node of the cluster -->

<value>Namenode:9001</value>

</property>

</configuration>

i. Note: all of the configuration files above were edited on the Namenode. Simply copy these three files (plus the hadoop-env.sh edited in step e) to every Datanode, for example with scp as sketched below.
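One way to push the files is with scp. This is only a sketch; it assumes the passwordless SSH set up in step 5 and the same /usr/local/hadoop/ layout on every node:

shell>> cd /usr/local/hadoop/conf/
shell>> for host in Datanode1 Datanode2 Datanode3; do scp core-site.xml hdfs-site.xml mapred-site.xml hadoop-env.sh root@$host:/usr/local/hadoop/conf/; done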

j. Initialize Hadoop: cd /usr/local/hadoop/

./bin/hadoop namenode -format

Output similar to the following appears; it must not contain any ERROR messages. (Note that the re-format prompt is case-sensitive: answer with an uppercase Y, because a lowercase y aborts the format, as the sample output below shows.)

... (long startup banner and Java classpath listing omitted) ...

STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.21 -r 985326; compiled by 'tomwhite' on Tue Aug 17 01:02:28 EDT 2010

************************************************************/

Re-format filesystem in /usr/local/hadoop/tmp/dfs/name ? (Y or N) y

Format aborted in /usr/local/hadoop/tmp/dfs/name

11/06/16 13:04:17 INFO namenode.NameNode: SHUTDOWN_MSG:

/************************************************************

SHUTDOWN_MSG: Shutting down NameNode at namenode/172.16.101.251

************************************************************/
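If the format really succeeded (see the note above about the uppercase Y), the NameNode metadata directory should now exist under the hadoop.tmp.dir configured in core-site.xml; a quick check, assuming that path:

shell>> ls /usr/local/hadoop/tmp/dfs/name/current    # should list files such as fsimage and VERSION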

k. Start Hadoop: ./bin/start-all.sh

This script is Deprecated. Instead use start-dfs.sh and start-mapred.sh

starting namenode, logging to /usr/local/hadoop//logs/hadoop-root-namenode-namenode.out

datanode1: starting datanode, logging to /usr/local/hadoop/bin/../logs/hadoop-root-datanode-datanode1.out

datanode2: starting datanode, logging to /usr/local/hadoop/bin/../logs/hadoop-root-datanode-datanode2.out

datanode3: starting datanode, logging to /usr/local/hadoop/bin/../logs/hadoop-root-datanode-datanode3.out

namenode: starting secondarynamenode, logging to /usr/local/hadoop/bin/../logs/hadoop-root-secondarynamenode-namenode.out

starting jobtracker, logging to /usr/local/hadoop//logs/hadoop-root-jobtracker-namenode.out

datanode3: starting tasktracker, logging to /usr/local/hadoop/bin/../logs/hadoop-root-tasktracker-datanode3.out

datanode2: starting tasktracker, logging to /usr/local/hadoop/bin/../logs/hadoop-root-tasktracker-datanode2.out

datanode1: starting tasktracker, logging to /usr/local/hadoop/bin/../logs/hadoop-root-tasktracker-datanode1.out

After startup, check the Java processes with the jps command; on the Namenode the result looks like this:

[root@namenode hadoop]# jps

1806 Jps

1368 NameNode

1694 JobTracker

1587 SecondaryNameNode

Then go to Datanode1/2/3 and run jps there; the result looks like this:

[root@datanode2 hadoop]# jps

1440 Jps

1382 TaskTracker

1303 DataNode


This means the Hadoop cluster has been installed successfully.
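Besides jps, HDFS itself can report whether all three datanodes registered; the command and web ports below are the Hadoop 0.2x defaults, so adjust if your build differs:

shell>> ./bin/hadoop dfsadmin -report     # look for "Datanodes available: 3"

The NameNode web UI at http://Namenode:50070/ and the JobTracker UI at http://Namenode:50030/ also show the number of live nodes.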

7. HDFS operations

Running the hadoop command in the bin/ directory with no arguments lists all the operations Hadoop supports and how to use them; a few simple operations follow as examples.

Create a directory

[root@namenode hadoop]# ./bin/hadoop dfs -mkdir testdir

This creates a directory named testdir in HDFS.

Copy a file

[root@namenode hadoop]# ./bin/hadoop dfs -put /home/dbrg/large.zip testfile.zip

This copies the local file /home/dbrg/large.zip into the HDFS home directory /user/dbrg/ under the name testfile.zip.

List existing files

[root@namenode hadoop]# ./bin/hadoop dfs -ls
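A few more everyday commands in the same style (illustrative only; testfile.zip and testdir are the names created above, and somefile.txt is a hypothetical file name):

[root@namenode hadoop]# ./bin/hadoop dfs -cat testdir/somefile.txt        # print a file's contents to the console
[root@namenode hadoop]# ./bin/hadoop dfs -get testfile.zip /tmp/testfile.zip   # copy a file from HDFS back to the local disk
[root@namenode hadoop]# ./bin/hadoop dfs -rmr testdir                     # remove a directory and everything under it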