
Hadoop Setup Steps on openSUSE 42

2017-01-06 09:00
1. Passwordless SSH login

cd ~/.ssh

ssh-keygen -t rsa
 

cat id_rsa.pub >> authorized_keys

Local test: ssh localhost (you may be prompted the first time; after that, no password should be required)

Remote-machine test: append the local id_rsa.pub (the public key) to the authorized_keys file on the remote machine.
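If you want to script that remote-machine step, here is a minimal sketch; the user hadoop and the host name remote-host are placeholders for your own values:

ssh-copy-id hadoop@remote-host
# or, done by hand:
scp ~/.ssh/id_rsa.pub hadoop@remote-host:/tmp/id_rsa.pub
ssh hadoop@remote-host 'mkdir -p ~/.ssh && cat /tmp/id_rsa.pub >> ~/.ssh/authorized_keys'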

2. JDK configuration

vi  /etc/profile

export JAVA_HOME=/usr/java/jdk1.7.0_79

export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

export PATH=$JAVA_HOME/bin:$PATH

source /etc/profile (mind which user you run this as; source only takes effect for the current user's shell environment)
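A quick way to verify the JDK setup in a new shell (assuming the paths above):

java -version
echo $JAVA_HOME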

3. Install Hadoop

sudo tar -zxf ~/下载/hadoop-2.6.0.tar.gz -C /usr/local   # extract into /usr/local (~/下载 is the Downloads directory)
cd /usr/local/
sudo mv ./hadoop-2.6.0/ ./hadoop   # rename the directory to hadoop
sudo chown -R hadoop ./hadoop      # give ownership to the hadoop user

Installation complete.

Test:

cd /usr/local/hadoop
./bin/hadoop version

4. Hadoop configuration (pseudo-distributed)

The relative path ./etc/hadoop/core-site.xml corresponds to the absolute path /usr/local/hadoop/etc/hadoop/core-site.xml; be careful not to confuse it with the system /etc directory.

Edit the configuration file core-site.xml:

<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/usr/local/hadoop/tmp</value>
<description>Abase for other temporary directories.</description>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
Edit the configuration file hdfs-site.xml:

<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop/tmp/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoop/tmp/dfs/data</value>
</property>
</configuration>

Note: any of these directories that do not exist yet must be created first, e.g. /usr/local/hadoop/tmp, dfs, name, and data (see the sketch below).
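For example, the directories referenced by the hdfs-site.xml values above can be created like this (a sketch, assuming the hadoop user already owns /usr/local/hadoop):

mkdir -p /usr/local/hadoop/tmp/dfs/name
mkdir -p /usr/local/hadoop/tmp/dfs/data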
Once the configuration is done, format the NameNode:

./bin/hdfs namenode -format

Q1: Formatting HDFS reports the warning java.net.UnknownHostException: bogon: bogon

Solution: reboot the virtual machine; "bogon" appears when reverse DNS resolution of the host fails. After rebooting, run the NameNode format command again. If that still does not solve it, edit /etc/hostname and change it to slave02, add the line "127.0.0.1   slave02" to /etc/hosts, and reboot the VM (slave02 is just the hostname chosen here; use your own).
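A minimal sketch of that fix, assuming slave02 is the hostname you chose:

echo slave02 | sudo tee /etc/hostname
echo "127.0.0.1   slave02" | sudo tee -a /etc/hosts
sudo reboot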

The key output looks like this:

17/01/05 14:33:41 INFO namenode.FSImage: Allocated new BlockPoolId: BP-627974405-127.0.0.1-1483598021204

17/01/05 14:33:41 INFO common.Storage: Storage directory /usr/local/hadoop/tmp/dfs/name has been successfully formatted.

17/01/05 14:33:41 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0

17/01/05 14:33:41 INFO util.ExitUtil: Exiting with status 0

17/01/05 14:33:41 INFO namenode.NameNode: SHUTDOWN_MSG: 

/************************************************************

SHUTDOWN_MSG: Shutting down NameNode at slave02/127.0.0.1

************************************************************/

Success.

Formatting is complete; the next step is to start Hadoop.

Start the NameNode and DataNode daemons (make sure the ssh service is running first, e.g. service sshd start):

./sbin/start-dfs.sh

A successful start looks like this:

hadoop@slave02:/usr/local/hadoop> jps

2641 DataNode

2838 SecondaryNameNode

2536 NameNode

5853 Jps

hadoop@slave02:/usr/local/hadoop> 

It worked.
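As an extra check, the NameNode web UI should now be reachable at http://localhost:50070, and a simple HDFS smoke test can be run from /usr/local/hadoop (a sketch, assuming the setup above):

./bin/hdfs dfs -mkdir -p /user/hadoop
./bin/hdfs dfs -ls /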

5. Hadoop configuration (distributed cluster)

One master and two slaves.

1) Configuration file: /usr/local/hadoop/etc/hadoop/core-site.xml

<configuration>
        <property>
             <name>hadoop.tmp.dir</name>
             <value>file:/usr/local/hadoop/tmp</value>
             <description>Abase for other temporary directories.</description>
        </property>
        <property>
             <name>fs.defaultFS</name>
             <value>hdfs://master:9000</value>
        </property>
</configuration>

2) Configuration file: /usr/local/hadoop/etc/hadoop/mapred-site.xml

<configuration>
        <property>
             <name>mapred.job.tracker</name>
             <value>master:9001</value>
        </property>
</configuration>
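Two notes on this file. First, the stock Hadoop 2.6.0 tarball only ships mapred-site.xml.template, so if mapred-site.xml does not exist yet, copy the template first (assuming the install path used above):

cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml

Second, mapred.job.tracker is a Hadoop 1.x property; on Hadoop 2.x with YARN, the property normally set instead is mapreduce.framework.name with the value yarn.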

3) Configuration file: /usr/local/hadoop/etc/hadoop/hdfs-site.xml

<configuration>
        <property>
             <name>dfs.replication</name>
             <value>1</value>
        </property>
        <property>
             <name>dfs.namenode.name.dir</name>
             <value>file:/usr/local/hadoop/tmp/dfs/name</value>
        </property>
        <property>
             <name>dfs.datanode.data.dir</name>
             <value>file:/usr/local/hadoop/tmp/dfs/data</value>
        </property>
</configuration>

4) Configure the masters and slaves node lists

There is no masters file under etc/hadoop/, so nothing needs to be set up there; only the slaves file is configured:

$ vim slaves

Enter:

slave01

slave02

5) Configure /etc/hosts

This requires root privileges.

# vi /etc/hosts

#127.0.0.1      localhost

192.168.126.137 master

192.168.126.138 slave01

192.168.126.136 slave02 

Adjust the IP addresses to match your own machines.
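Once /etc/hosts is in place, a quick sanity check from master (assuming the hostnames above):

ping -c 1 slave01
ping -c 1 slave02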

6) Clone the virtual machine twice, for a total of three VMs: master, slave01, and slave02.

Change the hostname on each of the three machines with vi /etc/hostname, setting them to master, slave01, and slave02 respectively.

Set up passwordless SSH again so that master can log in to master, slave01, and slave02 without a password (if it already works, there is no need to reconfigure).

Verify that master can log in to master, slave01, and slave02; a quick check is sketched below.
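A quick way to check all three logins at once (a sketch, assuming the hadoop user on every node); if passwordless SSH is working, this prints each hostname without asking for a password:

for h in master slave01 slave02; do ssh hadoop@$h hostname; done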

7) Start Hadoop.

hadoop@master:/usr/local/hadoop> ./sbin/start-dfs.sh

Starting namenodes on [master]

master: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-namenode-master.out

slave01: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoop-datanode-slave01.out

slave02: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoop-datanode-slave02.out

Starting secondary namenodes [0.0.0.0]

0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-secondarynamenode-master.out

Run jps on master to check:

hadoop@master:/usr/local/hadoop> jps

3419 Jps

3116 NameNode

3311 SecondaryNameNode

Check slave01 with jps:

hadoop@slave01:~> jps

3278 Jps

3185 DataNode

Check slave02 with jps:

hadoop@slave02:~> jps

2355 DataNode

2457 Jps

Everything started successfully.

Open a browser and go to master:50070 to have a look at the HDFS web UI.

Now start the YARN daemons:

hadoop@master:/usr/local/hadoop> ./sbin/start-yarn.sh

starting yarn daemons

starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-resourcemanager-master.out

slave01: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-nodemanager-slave01.out

slave02: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-nodemanager-slave02.out

hadoop@master:/usr/local/hadoop> jps

3116 NameNode

3785 ResourceManager

3311 SecondaryNameNode

3920 Jps
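To confirm the installation can actually run a MapReduce job, one option is the example jar bundled with the distribution (a sketch, assuming the stock hadoop-2.6.0 tarball); the ResourceManager web UI is also available at master:8088 by default:

cd /usr/local/hadoop
./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 2 10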