
Hadoop 2.7 Environment Setup --- Passwordless SSH Login Configuration

2016-09-26 09:44
Thanks to the original author: http://blog.csdn.net/stark_summer/article/details/42424279
1. Configure the machine hostnames
Set the hostname on each machine respectively:
hostname Hadoop-Master
hostname Hadoop-Slave1
hostname Hadoop-Slave2
Then open the hosts file and add the name-to-IP mappings (vi /etc/hosts), appending at the end:
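The screenshot with the actual hosts entries did not survive extraction. Judging from the ping output below, the appended lines presumably look like the following (the Master's IP never appears in the article, so it is left as a placeholder):

```
<master-ip>     Hadoop-Master
10.144.13.146   Hadoop-Slave1
10.174.221.89   Hadoop-Slave2
```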



ping Hadoop-Slave1

PING Hadoop-Slave1 (10.144.13.146) 56(84) bytes of data.

64 bytes from Hadoop-Slave1 (10.144.13.146): icmp_seq=1 ttl=60 time=0.492 ms

64 bytes from Hadoop-Slave1 (10.144.13.146): icmp_seq=2 ttl=60 time=0.452 ms

64 bytes from Hadoop-Slave1 (10.144.13.146): icmp_seq=3 ttl=60 time=0.479 ms
ping Hadoop-Slave2

PING Hadoop-Slave2 (10.174.221.89) 56(84) bytes of data.

64 bytes from Hadoop-Slave2 (10.174.221.89): icmp_seq=1 ttl=63 time=0.350 ms

64 bytes from Hadoop-Slave2 (10.174.221.89): icmp_seq=2 ttl=63 time=0.308 ms

64 bytes from Hadoop-Slave2 (10.174.221.89): icmp_seq=3 ttl=63 time=0.385 ms
Both Slave machines can now be reached from the Master.

2. Passwordless SSH Configuration
1. Configure the Master machine first
1-1. Enter the .ssh directory: [root@Hadoop-Master ~]# cd ~/.ssh. If this directory does not exist, run ssh localhost once first; do not create it by hand, or you will still be prompted for a password after the configuration is done.

1-2. Generate a key pair with ssh-keygen: run ssh-keygen -t rsa and press Enter through every prompt; this produces the two files id_rsa and id_rsa.pub.
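If you prefer to skip the interactive prompts, the same key pair can be generated in one non-interactive command. A minimal sketch (the KEYDIR variable is my addition for flexibility; -N "" sets the empty passphrase that pressing Enter through the prompts would give you):

```shell
# Non-interactive equivalent of step 1-2; KEYDIR is overridable for testing
KEYDIR="${KEYDIR:-$HOME/.ssh}"
mkdir -p "$KEYDIR" && chmod 700 "$KEYDIR"
# -N "" = empty passphrase, -q = quiet; generate only if no key exists yet
[ -f "$KEYDIR/id_rsa" ] || ssh-keygen -t rsa -N "" -f "$KEYDIR/id_rsa" -q
```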
1-3. Create the authorized_keys file: [root@Hadoop-Master .ssh]# cat id_rsa.pub >> authorized_keys
1-4. Generate key pairs on the other two machines, Slave1 and Slave2, in the same way.
1-5. Copy Slave1's id_rsa.pub file to the Master machine: [root@Hadoop-Slave1 .ssh]# scp id_rsa.pub root@Hadoop-Master:~/.ssh/id_rsa.pub_s1
1-6. Copy Slave2's id_rsa.pub file to the Master machine: [root@Hadoop-Slave2 .ssh]# scp id_rsa.pub root@Hadoop-Master:~/.ssh/id_rsa.pub_s2
1-7. Then switch to the Master machine and merge both into authorized_keys:
[root@Hadoop-Master .ssh]# cat id_rsa.pub_s1 >> authorized_keys
[root@Hadoop-Master .ssh]# cat id_rsa.pub_s2 >> authorized_keys
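Steps 1-5 through 1-7 boil down to concatenating the three public keys into one authorized_keys file. A self-contained sketch of that merge, using stand-in keys in a temp directory so it can run anywhere (the file names mirror the article's):

```shell
# Demo of the merge in 1-5..1-7; the keys are fakes, for illustration only
demo="${TMPDIR:-/tmp}/ssh-merge-demo"
mkdir -p "$demo"
echo "ssh-rsa AAAAB3...master root@Hadoop-Master" > "$demo/id_rsa.pub"
echo "ssh-rsa AAAAB3...s1 root@Hadoop-Slave1"     > "$demo/id_rsa.pub_s1"
echo "ssh-rsa AAAAB3...s2 root@Hadoop-Slave2"     > "$demo/id_rsa.pub_s2"
# authorized_keys is simply all three public keys, one per line
cat "$demo/id_rsa.pub" "$demo/id_rsa.pub_s1" "$demo/id_rsa.pub_s2" \
    > "$demo/authorized_keys"
chmod 600 "$demo/authorized_keys"
wc -l < "$demo/authorized_keys"   # one line per key
```

On real machines, OpenSSH's ssh-copy-id achieves the same result per host (including the permission fix), e.g. ssh-copy-id root@Hadoop-Master run from each Slave, if that tool is available.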
1-8. Copy authorized_keys to the Slave1 and Slave2 machines:
[root@Hadoop-Master .ssh]# scp authorized_keys root@Hadoop-Slave1:~/.ssh/
[root@Hadoop-Master .ssh]# scp authorized_keys root@Hadoop-Slave2:~/.ssh/
1-9. On every machine, change the .ssh directory permissions to 700 and authorized_keys to 600 (or 644):
chmod 700 ~/.ssh

chmod 600 ~/.ssh/authorized_keys
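These permissions matter because sshd, with its default StrictModes yes, silently ignores authorized_keys when the directory or file is group- or world-writable. A quick check (stat -c assumes GNU coreutils; KEYDIR is my addition for flexibility):

```shell
# Apply and then print the octal modes to confirm 700/600 are in place
KEYDIR="${KEYDIR:-$HOME/.ssh}"
mkdir -p "$KEYDIR" && touch "$KEYDIR/authorized_keys"
chmod 700 "$KEYDIR"
chmod 600 "$KEYDIR/authorized_keys"
stat -c '%a %n' "$KEYDIR" "$KEYDIR/authorized_keys"
```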
1-10. Verify SSH:
[root@Hadoop-Master .ssh]# ssh Hadoop-Slave1

Welcome to aliyun Elastic Compute Service!

[root@Hadoop-Slave1 ~]# exit

logout

Connection to Hadoop-Slave1 closed.

[root@Hadoop-Master .ssh]# ssh Hadoop-Slave2

Welcome to aliyun Elastic Compute Service!

[root@Hadoop-Slave2 ~]# exit

logout

Connection to Hadoop-Slave2 closed.