Hadoop 2.2.0 Single-Node Pseudo-Distributed Setup
2014-01-30 19:40
Set up SSH
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
Verify that it works:
$ ssh -V
$ ssh localhost
Welcome to Ubuntu 12.04 LTS (GNU/Linux 3.2.0-23-generic x86_64)
 * Documentation: https://help.ubuntu.com/
Last login: Mon Feb 17 00:59:20 2014 from localhost.localdomain
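The passwordless login works because the public key was appended to authorized_keys. A minimal sketch of a check for that condition (key_installed is an illustrative helper, not part of OpenSSH or Hadoop):

```shell
# Illustrative helper: succeeds only if the public key in file $1
# already appears as a whole line in the authorized_keys file $2.
key_installed() {
  grep -qxF "$(cat "$1")" "$2"
}

# Example usage against the files created above:
# key_installed ~/.ssh/id_dsa.pub ~/.ssh/authorized_keys && echo "passwordless OK"
```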
Install the JDK (jdk1.7.0_45 in this setup)
See my earlier post for the JDK installation steps: http://blog.csdn.net/stanely_hwang/article/details/18883599
Install Hadoop
1. Directory layout
Before formatting, create the following directories and grant the appropriate permissions:

/home/hadoop              | user home directory |
/opt/hadoop/hadoop-2.2.0  | Hadoop software home |
/opt/hadoop/dfs/name      | name table data and edit files |
/opt/hadoop/mapred/local  | MapReduce local data |
/opt/hadoop/mapred/system | MapReduce system data |
After creating them, grant write permissions on the tree:

$ sudo chmod -R a+w /opt/hadoop
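The directory layout above can be created in one pass. A sketch, where HADOOP_BASE is a convenience variable for this example (it defaults to a user-writable path for illustration; set it to /opt/hadoop, with appropriate privileges, to match the layout above):

```shell
# Create the HDFS and MapReduce directories listed above in one pass.
# HADOOP_BASE is an illustrative default; the post itself uses /opt/hadoop.
HADOOP_BASE="${HADOOP_BASE:-$HOME/opt/hadoop}"
mkdir -p "$HADOOP_BASE/dfs/name" \
         "$HADOOP_BASE/dfs/data" \
         "$HADOOP_BASE/mapred/local" \
         "$HADOOP_BASE/mapred/system"
chmod -R a+w "$HADOOP_BASE"
```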
Extract the Hadoop tar.gz archive into ~/:

$ cd ~/
$ sudo tar -zxvf hadoop-2.2.0.tar.gz
2. Edit the Hadoop configuration files
All of the following configuration files are edited in this directory:

$ pwd
~/hadoop-2.2.0/etc/hadoop
Edit core-site.xml
$ vi core-site.xml

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:8020</value>
    <description>The name of the default file system. Either the literal string "local" or a host:port for NDFS.
    </description>
    <final>true</final>
  </property>
</configuration>
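As a quick sanity check after editing, the configured default-FS URI can be grepped back out of the file. A sketch, run here against a scratch copy of the snippet above:

```shell
# Write the core-site.xml snippet to a scratch file and grep the
# configured default-FS URI back out, to confirm the edit took.
scratch=$(mktemp)
cat > "$scratch" <<'EOF'
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:8020</value>
    <final>true</final>
  </property>
</configuration>
EOF
grep -o 'hdfs://[^<]*' "$scratch"
```

Against the real file, the same grep would be run on ~/hadoop-2.2.0/etc/hadoop/core-site.xml.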
Edit hdfs-site.xml
$ vi hdfs-site.xml

<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/opt/hadoop/dfs/name</value>
    <description>Determines where on the local filesystem the DFS name node should store the name table. If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy.
    </description>
    <final>true</final>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/opt/hadoop/dfs/data</value>
    <description>Determines where on the local filesystem a DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. Directories that do not exist are ignored.
    </description>
    <final>true</final>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>

Edit mapred-site.xml
$ vi mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapred.system.dir</name>
    <value>file:/opt/hadoop/mapred/system</value>
    <final>true</final>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>file:/opt/hadoop/mapred/local</value>
    <final>true</final>
  </property>
</configuration>

Edit yarn-site.xml
$ vi yarn-site.xml

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
    <description>Shuffle service that needs to be set for MapReduce to run.
    </description>
  </property>
</configuration>

Edit hadoop-env.sh
$ vi hadoop-env.sh

Add JAVA_HOME to hadoop-env.sh; here the JDK is installed under ~/opt/jdk1.7:
export JAVA_HOME=~/opt/jdk1.7
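A common failure at startup is a JAVA_HOME that does not point at a usable JDK. A minimal pre-flight sketch (java_home_ok is an illustrative helper, not a Hadoop command):

```shell
# Illustrative check: JAVA_HOME is usable only if it is non-empty and
# $JAVA_HOME/bin/java exists and is executable.
java_home_ok() {
  [ -n "$1" ] && [ -x "$1/bin/java" ]
}

# Example:
# java_home_ok "$JAVA_HOME" || echo "fix JAVA_HOME in hadoop-env.sh"
```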
Start Hadoop
1. Format the NameNode

Run from the bin directory of the Hadoop install (e.g. ~/hadoop-2.2.0/bin):

$ ./hdfs namenode -format
2. Start the HDFS daemons

Run from the sbin directory of the Hadoop install (e.g. ~/hadoop-2.2.0/sbin):

$ ./hadoop-daemon.sh start namenode
$ ./hadoop-daemon.sh start datanode
3. Start the YARN daemons

Run from the sbin directory of the Hadoop install (e.g. ~/hadoop-2.2.0/sbin):

$ ./yarn-daemon.sh start resourcemanager
$ ./yarn-daemon.sh start nodemanager
4. Check that the daemons are running

$ jps
2912 NameNode
5499 ResourceManager
2981 DataNode
6671 Jps
6641 NodeManager
6473 SecondaryNameNode
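The jps check can be scripted. A sketch (check_daemons is an illustrative helper; it takes the jps output as its argument):

```shell
# Illustrative helper: report which of the expected daemons are absent
# from a jps listing (one "pid Name" entry per line).
check_daemons() {
  out="$1"
  missing=""
  for d in NameNode DataNode ResourceManager NodeManager; do
    printf '%s\n' "$out" | grep -qw "$d" || missing="$missing $d"
  done
  if [ -n "$missing" ]; then
    echo "missing:$missing"
    return 1
  fi
  echo "all daemons running"
}

# Example usage: check_daemons "$(jps)"
```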
5. Check the Hadoop web UIs

http://localhost:8088 shows the ResourceManager (YARN) status; the NameNode status page is at http://localhost:50070.