Linux: Hadoop 1.2.1 Installation Notes

2017-06-22 14:43
Environment: VMware Workstation 10, CentOS-7-x86_64-DVD-1511.iso, Xshell 4.0. Master: 192.168.216.141, Slave1: 192.168.216.142, Slave2: 192.168.216.143.

Grant sudo privileges to the regular user hadoop

[hadoop@localhost ~]$ su root

Password:

[root@localhost hadoop]# ll /etc/sudoers

-r--r-----. 1 root root 4188 Jul 7 2015 /etc/sudoers

[root@localhost hadoop]# chmod u+w /etc/sudoers

[root@localhost hadoop]# ll /etc/sudoers

-rw-r-----. 1 root root 4188 Jul 7 2015 /etc/sudoers

[root@localhost hadoop]# vim /etc/sudoers

Find the existing line "root ALL=(ALL) ALL" and add a matching entry for hadoop below it:

root ALL=(ALL) ALL

hadoop ALL=(ALL) ALL

[root@localhost hadoop]# chmod u-w /etc/sudoers

[root@localhost hadoop]# ll /etc/sudoers

-r--r-----. 1 root root 4215 Jun 20 17:56 /etc/sudoers

[root@localhost hadoop]# exit

exit
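Note: a safer alternative to the chmod-edit-chmod sequence above is visudo, which checks /etc/sudoers syntax before saving:

[root@localhost hadoop]# visudo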

Configure passwordless SSH login on the Master server

[hadoop@localhost ~]$ ssh-keygen -t rsa -P ''

Generating public/private rsa key pair.

Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):

Created directory '/home/hadoop/.ssh'.

Your identification has been saved in /home/hadoop/.ssh/id_rsa.

Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.

[hadoop@localhost ~]$ ll ~/.ssh

total 8

-rw-------. 1 hadoop hadoop 1675 Jun 20 18:02 id_rsa

-rw-r--r--. 1 hadoop hadoop 398 Jun 20 18:02 id_rsa.pub

[hadoop@localhost ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

[hadoop@localhost ~]$ chmod 600 ~/.ssh/authorized_keys

[hadoop@localhost ~]$ ll ~/.ssh/

total 16

-rw-------. 1 hadoop hadoop 398 Jun 20 18:03 authorized_keys

-rw-------. 1 hadoop hadoop 1675 Jun 20 18:02 id_rsa

-rw-r--r--. 1 hadoop hadoop 398 Jun 20 18:02 id_rsa.pub

Note: the permissions on authorized_keys must be 600.
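Equivalently, ssh-copy-id appends the key and fixes permissions in one step (shown against localhost; substitute a slave's address as needed):

[hadoop@localhost ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@localhost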

[hadoop@localhost ~]$ sudo vim /etc/ssh/sshd_config

Make sure the following line is present and uncommented:

AuthorizedKeysFile .ssh/authorized_keys

[hadoop@localhost ~]$ ssh localhost

The authenticity of host 'localhost (::1)' can't be established.

ECDSA key fingerprint is 1b:1d:06:bb:5c:dd:92:e6:22:ff:03:b3:a7:c0:e7:ed.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.

hadoop@localhost's password:

Last login: Tue Jun 20 17:44:21 2017 from 192.168.216.1

[hadoop@localhost ~]$ su

Password:

[root@localhost hadoop]# service sshd restart

Redirecting to /bin/systemctl restart sshd.service

[root@localhost hadoop]# exit

exit

[hadoop@localhost ~]$ ssh localhost

The authenticity of host 'localhost (::1)' can't be established.

ECDSA key fingerprint is 1b:1d:06:bb:5c:dd:92:e6:22:ff:03:b3:a7:c0:e7:ed.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.

Last failed login: Tue Jun 20 18:09:37 CST 2017 from localhost on ssh:notty

There were 4 failed login attempts since the last successful login.

Last login: Tue Jun 20 18:06:00 2017 from localhost

Send the entire ~/.ssh directory to Slave1 and Slave2:

[hadoop@localhost ~]$ scp -r ~/.ssh hadoop@192.168.216.142:~

hadoop@192.168.216.142's password:

id_rsa 100% 1675 1.6KB/s 00:00

id_rsa.pub 100% 398 0.4KB/s 00:00

authorized_keys 100% 398 0.4KB/s 00:00

known_hosts 100% 348 0.3KB/s 00:00

[hadoop@localhost ~]$ scp -r ~/.ssh hadoop@192.168.216.143:~

The authenticity of host '192.168.216.143 (192.168.216.143)' can't be established.

ECDSA key fingerprint is 64:05:c3:ec:5e:cb:64:d8:41:74:d6:ee:cb:c3:84:59.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added '192.168.216.143' (ECDSA) to the list of known hosts.

hadoop@192.168.216.143's password:

id_rsa 100% 1675 1.6KB/s 00:00

id_rsa.pub 100% 398 0.4KB/s 00:00

authorized_keys 100% 398 0.4KB/s 00:00

known_hosts 100% 525 0.5KB/s 00:00

Next, verify that Master, Slave1, and Slave2 can all log in to one another without a password.

Master server

[hadoop@localhost ~]$ ssh 192.168.216.142

Last login: Tue Jun 20 18:24:22 2017 from 192.168.216.142

[hadoop@localhost ~]$ ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN

link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

inet 127.0.0.1/8 scope host lo

valid_lft forever preferred_lft forever

inet6 ::1/128 scope host

valid_lft forever preferred_lft forever

2: eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

link/ether 00:0c:29:9e:9f:16 brd ff:ff:ff:ff:ff:ff

inet 192.168.216.142/24 brd 192.168.216.255 scope global dynamic eno16777736

valid_lft 1148sec preferred_lft 1148sec

inet6 fe80::20c:29ff:fe9e:9f16/64 scope link

valid_lft forever preferred_lft forever

[hadoop@localhost ~]$ exit

logout

Connection to 192.168.216.142 closed.

[hadoop@localhost ~]$ ssh 192.168.216.143

Last login: Tue Jun 20 17:30:54 2017 from 192.168.216.1

[hadoop@localhost ~]$ exit

logout

Connection to 192.168.216.143 closed.

Do the same from Slave1 to Master and Slave2, and from Slave2 to Master and Slave1.

This completes the preparation stage.

Master server

[hadoop@localhost ~]$ wget http://archive.apache.org/dist/hadoop/core/hadoop-1.2.1/hadoop-1.2.1.tar.gz

[hadoop@localhost ~]$ sudo tar -zxvf hadoop-1.2.1.tar.gz -C /usr/local/

[hadoop@localhost ~]$ sudo mv /usr/local/hadoop-1.2.1/ /usr/local/hadoop

[hadoop@localhost ~]$ ll /usr/local/ | grep hadoop

drwxr-xr-x. 15 root root 4096 Jul 23 2013 hadoop

[hadoop@localhost ~]$ sudo chown -R hadoop:hadoop /usr/local/hadoop/

[hadoop@localhost ~]$ ll /usr/local/ | grep hadoop

drwxr-xr-x. 15 hadoop hadoop 4096 Jul 23 2013 hadoop

[hadoop@localhost ~]$ sudo vim /etc/profile
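A minimal sketch of the /etc/profile additions, assuming the JDK path used in hadoop-env.sh below (append at the end of the file):

export JAVA_HOME=/usr/java/jdk1.8.0_131
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin

Note: Hadoop 1.x warns "$HADOOP_HOME is deprecated" at startup; export HADOOP_HOME_WARN_SUPPRESS=1 silences it.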



[hadoop@localhost ~]$ source /etc/profile

[hadoop@localhost ~]$ hadoop

Usage: hadoop [--config confdir] COMMAND

Configuration files

[hadoop@localhost ~]$ vim /usr/local/hadoop/conf/hadoop-env.sh

export JAVA_HOME=/usr/java/jdk1.8.0_131

[hadoop@localhost ~]$ vim /usr/local/hadoop/conf/core-site.xml
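A minimal sketch of core-site.xml, assuming the conventional NameNode port 9000; hadoop.tmp.dir points at the tmp directory created further down:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.216.141:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/tmp</value>
  </property>
</configuration>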



[hadoop@localhost ~]$ vim /usr/local/hadoop/conf/hdfs-site.xml
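A sketch of hdfs-site.xml rather than the author's exact file: dfs.replication 2 matches the two datanodes in the report below, dfs.name.dir matches the /usr/local/hadoop/name path in the format log, and dfs.data.dir matches the datanode directory created further down:

<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/usr/local/hadoop/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/usr/local/hadoop/datanode</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>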



[hadoop@localhost ~]$ vim /usr/local/hadoop/conf/mapred-site.xml
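A sketch of mapred-site.xml, assuming the conventional JobTracker port 9001 on the Master:

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>192.168.216.141:9001</value>
  </property>
</configuration>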



[hadoop@localhost ~]$ vim /usr/local/hadoop/conf/masters

192.168.216.141

[hadoop@localhost ~]$ vim /usr/local/hadoop/conf/slaves

192.168.216.142

192.168.216.143

[hadoop@localhost ~]$ cd /usr/local/hadoop/ && mkdir datanode tmp

[hadoop@localhost ~]$ chmod 755 /usr/local/hadoop/tmp

[hadoop@localhost ~]$ chmod 755 /usr/local/hadoop/datanode

Note: the datanode and tmp directories must have 755 permissions, otherwise the DataNode will fail to start.

Send the entire /usr/local/hadoop directory to Slave1 and Slave2:

[hadoop@localhost ~]$ scp -r /usr/local/hadoop/ 192.168.216.142:~

[hadoop@localhost ~]$ scp -r /usr/local/hadoop/ 192.168.216.143:~
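The files land in the hadoop user's home directory because /usr/local cannot be written over scp without root; they are moved into place on each slave in the next step.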

On Slave1 and Slave2

[hadoop@localhost ~]$ sudo mv hadoop /usr/local/

[hadoop@localhost ~]$ sudo vim /etc/profile
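Append the same JAVA_HOME, HADOOP_HOME, and PATH exports as on the Master (see the sketch above).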



[hadoop@localhost ~]$ source /etc/profile

[hadoop@localhost ~]$ hadoop

Usage: hadoop [--config confdir] COMMAND

At this point, the one-master, two-slave Hadoop deployment is complete.

Master, Slave1, and Slave2

Disable the firewall

[hadoop@localhost hadoop]$ firewall-cmd --state

running

[hadoop@localhost hadoop]$ sudo systemctl stop firewalld

[sudo] password for hadoop:

[hadoop@localhost hadoop]$ firewall-cmd --state

not running
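Note: this only stops firewalld for the current boot; run sudo systemctl disable firewalld as well to keep it off across reboots.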

Format the NameNode file system

[hadoop@localhost hadoop]$ hadoop namenode -format

17/06/22 14:16:49 INFO namenode.NameNode: STARTUP_MSG:

/************************************************************

STARTUP_MSG: Starting NameNode

STARTUP_MSG: host = localhost/127.0.0.1

STARTUP_MSG: args = [-format]

STARTUP_MSG: version = 1.2.1

STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013

STARTUP_MSG: java = 1.8.0_131

************************************************************/

17/06/22 14:16:49 INFO util.GSet: Computing capacity for map BlocksMap

17/06/22 14:16:49 INFO util.GSet: VM type = 64-bit

17/06/22 14:16:49 INFO util.GSet: 2.0% max memory = 1013645312

17/06/22 14:16:49 INFO util.GSet: capacity = 2^21 = 2097152 entries

17/06/22 14:16:49 INFO util.GSet: recommended=2097152, actual=2097152

17/06/22 14:16:49 INFO namenode.FSNamesystem: fsOwner=hadoop

17/06/22 14:16:49 INFO namenode.FSNamesystem: supergroup=supergroup

17/06/22 14:16:49 INFO namenode.FSNamesystem: isPermissionEnabled=false

17/06/22 14:16:49 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100

17/06/22 14:16:49 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)

17/06/22 14:16:49 INFO namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0

17/06/22 14:16:49 INFO namenode.NameNode: Caching file names occuring more than 10 times

17/06/22 14:16:50 INFO common.Storage: Image file /usr/local/hadoop/name/current/fsimage of size 112 bytes saved in 0 seconds.

17/06/22 14:16:50 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/usr/local/hadoop/name/current/edits

17/06/22 14:16:50 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/usr/local/hadoop/name/current/edits

17/06/22 14:16:50 INFO common.Storage: Storage directory /usr/local/hadoop/name has been successfully formatted.

17/06/22 14:16:50 INFO namenode.NameNode: SHUTDOWN_MSG:

/************************************************************

SHUTDOWN_MSG: Shutting down NameNode at localhost/127.0.0.1

************************************************************/

Master

[hadoop@localhost hadoop]$ start-all.sh

starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-hadoop-namenode-localhost.out

192.168.216.143: datanode running as process 12402. Stop it first.

192.168.216.142: datanode running as process 12458. Stop it first.

The authenticity of host '192.168.216.141 (192.168.216.141)' can't be established.

ECDSA key fingerprint is 45:3b:37:83:b6:7f:20:55:47:10:b7:ad:63:ae:96:33.

Are you sure you want to continue connecting (yes/no)? yes

192.168.216.141: Warning: Permanently added '192.168.216.141' (ECDSA) to the list of known hosts.

192.168.216.141: secondarynamenode running as process 12610. Stop it first.

starting jobtracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-hadoop-jobtracker-localhost.out

192.168.216.143: tasktracker running as process 12673. Stop it first.

192.168.216.142: tasktracker running as process 12731. Stop it first.

[hadoop@localhost hadoop]$ jps

12610 SecondaryNameNode

12866 Jps

12758 JobTracker

12442 NameNode

[hadoop@localhost hadoop]$ hadoop dfsadmin -report

Configured Capacity: 37492883456 (34.92 GB)

Present Capacity: 34187776030 (31.84 GB)

DFS Remaining: 34187759616 (31.84 GB)

DFS Used: 16414 (16.03 KB)

DFS Used%: 0%

Under replicated blocks: 0

Blocks with corrupt replicas: 0

Missing blocks: 0

Datanodes available: 2 (2 total, 0 dead)

Name: 192.168.216.142:50010

Decommission Status : Normal

Configured Capacity: 18746441728 (17.46 GB)

DFS Used: 8207 (8.01 KB)

Non DFS Used: 1652527089 (1.54 GB)

DFS Remaining: 17093906432(15.92 GB)

DFS Used%: 0%

DFS Remaining%: 91.18%

Last contact: Thu Jun 22 14:19:56 CST 2017

Name: 192.168.216.143:50010

Decommission Status : Normal

Configured Capacity: 18746441728 (17.46 GB)

DFS Used: 8207 (8.01 KB)

Non DFS Used: 1652580337 (1.54 GB)

DFS Remaining: 17093853184(15.92 GB)

DFS Used%: 0%

DFS Remaining%: 91.18%

Last contact: Thu Jun 22 14:19:56 CST 2017
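Strictly, start-all.sh only needs to run once, on the Master; in the Slave1 and Slave2 transcripts below most daemons report "running as process ... Stop it first." because they are already up, and jps alone suffices to verify the DataNode and TaskTracker on each slave.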

Slave1

[hadoop@localhost hadoop]$ start-all.sh

starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-hadoop-namenode-localhost.out

192.168.216.143: starting datanode, logging to /usr/local/hadoop/libexec/../logs/hadoop-hadoop-datanode-localhost.out

192.168.216.142: starting datanode, logging to /usr/local/hadoop/libexec/../logs/hadoop-hadoop-datanode-localhost.out

192.168.216.141: secondarynamenode running as process 12610. Stop it first.

starting jobtracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-hadoop-jobtracker-localhost.out

192.168.216.142: tasktracker running as process 12731. Stop it first.

192.168.216.143: tasktracker running as process 12673. Stop it first.

[hadoop@localhost hadoop]$ jps

12458 DataNode

12731 TaskTracker

12847 Jps

Slave2

[hadoop@localhost hadoop]$ start-all.sh

starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-hadoop-namenode-localhost.out

192.168.216.143: datanode running as process 12402. Stop it first.

192.168.216.142: datanode running as process 12458. Stop it first.

192.168.216.141: starting secondarynamenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-hadoop-secondarynamenode-localhost.out

starting jobtracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-hadoop-jobtracker-localhost.out

192.168.216.142: starting tasktracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-hadoop-tasktracker-localhost.out

192.168.216.143: starting tasktracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-hadoop-tasktracker-localhost.out

[hadoop@localhost hadoop]$ jps

12673 TaskTracker

12402 DataNode

12793 Jps

NameNode web UI: http://192.168.216.141:50070



JobTracker web UI: http://192.168.216.141:50030
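As a final smoke test (not in the original post), the examples jar bundled in the Hadoop 1.2.1 tarball can run a small MapReduce job:

[hadoop@localhost hadoop]$ hadoop jar /usr/local/hadoop/hadoop-examples-1.2.1.jar pi 2 10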
