Hadoop 2.7 Environment Setup --- Passwordless SSH Configuration
2016-09-26 09:44
Thanks for sharing: http://blog.csdn.net/stark_summer/article/details/42424279
I. Configure each machine's hostname
hostname Hadoop-Master   # run on the master
hostname Hadoop-Slave1   # run on slave 1
hostname Hadoop-Slave2   # run on slave 2
Open the hosts file and add the hostname-to-IP mappings (vi /etc/hosts); append at the end:
![](http://img.blog.csdn.net/20160926085826975)
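For reference, the appended /etc/hosts entries would look like the following. The two slave IPs match the ping output shown below; the Master's IP is a placeholder, substitute your own:

```
# appended to /etc/hosts on every machine
10.0.0.100     Hadoop-Master    # placeholder -- use your master's real IP
10.144.13.146  Hadoop-Slave1
10.174.221.89  Hadoop-Slave2
```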
ping Hadoop-Slave1
PING Hadoop-Slave1 (10.144.13.146) 56(84) bytes of data.
64 bytes from Hadoop-Slave1 (10.144.13.146): icmp_seq=1 ttl=60 time=0.492 ms
64 bytes from Hadoop-Slave1 (10.144.13.146): icmp_seq=2 ttl=60 time=0.452 ms
64 bytes from Hadoop-Slave1 (10.144.13.146): icmp_seq=3 ttl=60 time=0.479 ms
ping Hadoop-Slave2
PING Hadoop-Slave2 (10.174.221.89) 56(84) bytes of data.
64 bytes from Hadoop-Slave2 (10.174.221.89): icmp_seq=1 ttl=63 time=0.350 ms
64 bytes from Hadoop-Slave2 (10.174.221.89): icmp_seq=2 ttl=63 time=0.308 ms
64 bytes from Hadoop-Slave2 (10.174.221.89): icmp_seq=3 ttl=63 time=0.385 ms
The master can now reach both slave machines.
II. Passwordless SSH configuration
1. Configure the Master machine first
1-1. Enter the .ssh directory: [root@Hadoop-Master ~]# cd ~/.ssh — if the directory does not exist, run `ssh localhost` once first. Do not create it by hand, or you will still be prompted for a password after configuration.
1-2. Generate a key pair with ssh-keygen: run `ssh-keygen -t rsa` and press Enter through every prompt; this produces two files, id_rsa and id_rsa.pub.
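If you prefer to script this step, OpenSSH's ssh-keygen can skip the prompts; a minimal sketch:

```shell
# Non-interactive equivalent of "ssh-keygen -t rsa" + Enter through prompts:
# -P "" sets an empty passphrase, -f names the key file, -q silences output.
mkdir -p "$HOME/.ssh"
chmod 700 "$HOME/.ssh"
if [ ! -f "$HOME/.ssh/id_rsa" ]; then
    ssh-keygen -t rsa -P "" -f "$HOME/.ssh/id_rsa" -q
fi
ls "$HOME/.ssh/id_rsa" "$HOME/.ssh/id_rsa.pub"
```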
1-3. Create the authorized_keys file: [root@Hadoop-Master .ssh]# cat id_rsa.pub >> authorized_keys
1-4. Generate key pairs on the other two machines, Slave1 and Slave2, in the same way.
1-5. Copy Slave1's id_rsa.pub file to the Master machine: [root@Hadoop-Slave1 .ssh]# scp id_rsa.pub root@Hadoop-Master:~/.ssh/id_rsa.pub_s1
1-6. Copy Slave2's id_rsa.pub file to the Master machine: [root@Hadoop-Slave2 .ssh]# scp id_rsa.pub root@Hadoop-Master:~/.ssh/id_rsa.pub_s2
1-7. Switch to the Master machine and merge the keys into authorized_keys:
[root@Hadoop-Master .ssh]# cat id_rsa.pub_s1 >> authorized_keys
[root@Hadoop-Master .ssh]# cat id_rsa.pub_s2 >> authorized_keys
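Steps 1-3 and 1-7 are plain concatenation: authorized_keys ends up with one public-key line per machine. A local simulation with dummy key lines (the key material below is fake, for illustration only):

```shell
# Simulate the merge with dummy one-line "public keys" in a temp directory.
workdir=$(mktemp -d)
echo "ssh-rsa AAAA...master root@Hadoop-Master" > "$workdir/id_rsa.pub"
echo "ssh-rsa AAAA...s1 root@Hadoop-Slave1"     > "$workdir/id_rsa.pub_s1"
echo "ssh-rsa AAAA...s2 root@Hadoop-Slave2"     > "$workdir/id_rsa.pub_s2"
cat "$workdir/id_rsa.pub"    >> "$workdir/authorized_keys"   # step 1-3
cat "$workdir/id_rsa.pub_s1" >> "$workdir/authorized_keys"   # step 1-7
cat "$workdir/id_rsa.pub_s2" >> "$workdir/authorized_keys"
wc -l < "$workdir/authorized_keys"   # one key line per machine
```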
1-8. Copy authorized_keys to the Slave1 and Slave2 machines:
[root@Hadoop-Master .ssh]# scp authorized_keys root@Hadoop-Slave1:~/.ssh/
[root@Hadoop-Master .ssh]# scp authorized_keys root@Hadoop-Slave2:~/.ssh/
1-9. On every machine, set the .ssh/ directory permissions to 700 and authorized_keys to 600 (or 644):
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
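The chmod step matters because sshd silently ignores keys whose files are too open. A sketch that applies and then checks the permissions (the `stat -c` format flag assumes GNU coreutils):

```shell
# Tighten permissions, then verify them; run this on each machine.
mkdir -p "$HOME/.ssh"
touch "$HOME/.ssh/authorized_keys"
chmod 700 "$HOME/.ssh"
chmod 600 "$HOME/.ssh/authorized_keys"
stat -c '%a %n' "$HOME/.ssh" "$HOME/.ssh/authorized_keys"
```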
1-10. Verify ssh:
[root@Hadoop-Master .ssh]# ssh Hadoop-Slave1
Welcome to aliyun Elastic Compute Service!
[root@Hadoop-Slave1 ~]# exit
logout
Connection to Hadoop-Slave1 closed.
[root@Hadoop-Master .ssh]# ssh Hadoop-Slave2
Welcome to aliyun Elastic Compute Service!
[root@Hadoop-Slave2 ~]# exit
logout
Connection to Hadoop-Slave2 closed.