
Pseudo-Distributed Hadoop Installation

2015-11-02 22:18
Keep the Linux machine connected to the network.

--------------------------------------------------------

Setting Up the Operating System Environment

1. Set the IP address

Right-click the virtual machine name and set its network adapter to bridged mode.

1) Via the CentOS desktop: right-click the small network icon in the upper-right corner, then choose Edit Connections -> IPv4 Settings.

Use the ifconfig command to check the current IP address.

After changing the IP, restart the network service with service network restart.

2) Via the configuration file:

vi /etc/sysconfig/network-scripts/ifcfg-eth0
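For reference, a static-IP ifcfg-eth0 typically looks something like the sketch below; the address, netmask, and gateway here are assumptions based on the 192.168.1.x network used later in this guide, so substitute your own values.

DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes               # bring the interface up at boot
BOOTPROTO=static         # static address instead of DHCP
IPADDR=192.168.1.97      # assumed; matches the address used later in this guide
NETMASK=255.255.255.0    # assumed
GATEWAY=192.168.1.1      # assumed; adjust to your router

After saving the file, service network restart applies the change.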

2. Set the hostname

Use the hostname command to check the current hostname.

1) vi /etc/sysconfig/network and change the HOSTNAME value to the name you want.

2) vi /etc/hosts and change localhost.localdomain on the first line to the new name.

3) reboot to apply the change.

Bind the hostname to the IP address

vi /etc/hosts and add a line: 192.168.1.97 crxy97

An error such as "unknown host: hadoop1" means that the hostname hadoop1 has not been bound to an IP address.
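For reference, a minimal /etc/hosts after this step could look like the sketch below; the address and the hostname crxy97 come from this guide, so substitute your own:

127.0.0.1       localhost localhost.localdomain
::1             localhost localhost.localdomain
192.168.1.97    crxy97        # maps the hostname to the machine's static IP

A quick ping crxy97 should then resolve to 192.168.1.97.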

3. Turn off the firewall

service iptables stop

service iptables status (check the status)

4. Disable the firewall's auto-start at boot

chkconfig --list |grep iptables

chkconfig iptables off

[root@mjc70 /]# chkconfig --list |grep iptables

iptables        0:off   1:off   2:on    3:on    4:on    5:on    6:off

[root@mjc70 /]# chkconfig iptables off

5. Set up passwordless SSH login

[root@mjc70 /]# ssh localhost

The authenticity of host 'localhost (::1)' can't be established.

RSA key fingerprint is 6b:0d:4d:69:33:11:09:86:60:cc:ff:7d:b9:b9:74:77.

Are you sure you want to continue connecting (yes/no)?

Host key verification failed.

[root@mjc70 /]# ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/root/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /root/.ssh/id_rsa.

Your public key has been saved in /root/.ssh/id_rsa.pub.

The key fingerprint is:

40:6a:0d:29:2f:ac:36:30:59:9c:31:69:a7:81:84:ef root@mjc70

The key's randomart image is:

+--[ RSA 2048]----+

|o+o+... |

|o O.o= |

| * *o o |

|+ =.. . |

|.+ . S |

|.oE |

|. . |

| |

| |

+-----------------+

[root@mjc70 /]# cd

[root@mjc70 ~]# ll .ssh

total 8

-rw-------. 1 root root 1675 Oct 17 23:09 id_rsa

-rw-r--r--. 1 root root 392 Oct 17 23:09 id_rsa.pub

[root@mjc70 ~]# ssh-copy-id localhost (don't forget to type yes)

The password is the one set when the operating system was installed.

[root@mjc70 ~]# cd

[root@mjc70 ~]# ll .ssh (the total was 8 before; it is 16 now)

[root@mjc70 ~]# cd .ssh

[root@mjc70 .ssh]# more id_rsa.pub

ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA9zyhSiWLZYBwngLr+v5Bf9QrhQsYvahiZMWuEt/nkio3TD8qSlljyoBgt3aSKFyuP4xFBI8reVSpL/Y8aUbIG0rVdr+VM3a0mHMMse0FLn3+ahEQozBrlQ8FSUIBn6ksAPgBGheELsyaOEkvgh4iRUX0zpPkg+eybCvhxR0P ... root@mjc70

[root@mjc70 .ssh]# ssh localhost

Last login: Sat Oct 17 22:18:44 2015 from 192.168.1.141
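To summarize, the passwordless login shown in the transcript above comes down to three commands (a minimal sketch):

ssh-keygen -t rsa        # generate the RSA key pair, accepting the defaults
ssh-copy-id localhost    # append the public key to ~/.ssh/authorized_keys (answer yes, then enter the password once)
ssh localhost            # should now log in without prompting for a password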

6. Install the JDK

Put jdk-7u79-linux-x64.tar.gz into the /usr/local directory on the Linux machine (clear out the existing contents of /usr/local first).

Run tar -zxvf jdk-7u79-linux-x64.tar.gz to extract it.

Run vi /etc/profile and add the following two lines:

export JAVA_HOME=/usr/local/jdk1.7.0_79

export PATH=.:$JAVA_HOME/bin:$PATH

Save and exit, then run source /etc/profile.

Run java -version to check whether the changes took effect.
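Put together, the JDK installation looks roughly like the following sketch (assuming the tarball has already been copied to /usr/local):

cd /usr/local
tar -zxvf jdk-7u79-linux-x64.tar.gz    # unpacks to /usr/local/jdk1.7.0_79
vi /etc/profile                        # append the two export lines shown above
source /etc/profile                    # reload the environment in the current shell
java -version                          # should now report version 1.7.0_79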

-------------------------------------------------------

Setting Up Pseudo-Distributed HDFS

1) Edit the configuration file etc/hadoop/hadoop-env.sh:

export JAVA_HOME=/usr/local/jdk1.7.0_79

2) Edit the configuration file etc/hadoop/core-site.xml:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.1.97:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop-2.6.0-64/tmp</value>
  </property>
  <property>
    <name>fs.trash.interval</name>
    <value>1440</value>
  </property>
</configuration>

3) Rename mapred-site.xml.template to mapred-site.xml (see the sketch below).
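A minimal way to do the rename, assuming Hadoop was unpacked to /usr/local/hadoop-2.6.0-64 (the same path used for hadoop.tmp.dir above):

cd /usr/local/hadoop-2.6.0-64
mv etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml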

4) Edit the configuration file etc/hadoop/hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>

Format the file system:

$ bin/hdfs namenode -format

(The bin here is the bin directory under the Hadoop installation, so navigate to the Hadoop installation directory first and run the command from there:

[root@mjc70 bin]# cd /usr

[root@mjc70 usr]# cd local

[root@mjc70 local]# ll

then cd into the Hadoop directory, as in the sketch below.)
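Concretely, a minimal sketch, assuming Hadoop was unpacked to /usr/local/hadoop-2.6.0-64 (consistent with hadoop.tmp.dir above):

cd /usr/local/hadoop-2.6.0-64
bin/hdfs namenode -format    # initializes the NameNode metadata under hadoop.tmp.dir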

Start the HDFS cluster:

$ sbin/start-dfs.sh
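A quick way to confirm the daemons started is the JDK's jps tool; on a healthy pseudo-distributed setup it typically lists NameNode, DataNode, and SecondaryNameNode:

$ jps    # should show NameNode, DataNode, and SecondaryNameNode (plus Jps itself)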

Check it in a web browser:

NameNode - http://localhost:50070/

Practice:

Create directories:

$ bin/hdfs dfs -mkdir /user

$ bin/hdfs dfs -mkdir /user/root

Copy a file into HDFS:

$ bin/hdfs dfs -put /etc/profile input

Stop the cluster:

$ sbin/stop-dfs.sh

Installing YARN

Edit the configuration file etc/hadoop/mapred-site.xml:

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

Edit the configuration file etc/hadoop/yarn-site.xml:

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>

Start the YARN cluster:

$ sbin/start-yarn.sh
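jps can confirm the new daemons as well; after start-yarn.sh the list typically also includes ResourceManager and NodeManager:

$ jps    # ResourceManager and NodeManager should now appear alongside the HDFS daemons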

Check it in a web browser:

ResourceManager - http://localhost:8088/

Run the example:

$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount input output

View the results:

$ bin/hdfs dfs -cat output/*

Stop the YARN cluster:

$ sbin/stop-yarn.sh