Distributed setup: installing Hadoop 1.2.1 on Ubuntu 12.04
2015-07-23 14:42
The Hadoop 1.2.1 installation guide assumes Java is already installed. I tried several versions of Java against several versions of Hadoop, and found that oracle-java7 works well with Hadoop 1.2.1.
I. Installation steps:
1. Install Java: sudo apt-get install oracle-java7-installer (this package comes from the webupd8team/java PPA; if apt cannot find it, run sudo add-apt-repository ppa:webupd8team/java && sudo apt-get update first)
2. Install Hadoop 1.2.1 by following http://hadoop.apache.org/docs/r1.2.1/single_node_setup.html#Download
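The download step amounts to unpacking the release tarball and pointing Hadoop at the JDK. A minimal sketch of the one edit that is always needed, assuming oracle-java7-installer put the JDK in /usr/lib/jvm/java-7-oracle (that package's default location; adjust the path if yours differs):

```shell
# conf/hadoop-env.sh -- tell Hadoop where the Oracle JDK 7 lives.
# /usr/lib/jvm/java-7-oracle is where oracle-java7-installer installs it.
export JAVA_HOME=/usr/lib/jvm/java-7-oracle
```

After this edit, `bin/hadoop version` run from the unpacked directory should report 1.2.1.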
II. Testing the installation (pseudo-distributed mode):
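Pseudo-distributed mode needs a few entries in the conf/*.xml files before formatting HDFS. These are the values given in the official single-node setup guide linked above:

```xml
<!-- conf/core-site.xml -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

<!-- conf/hdfs-site.xml -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

<!-- conf/mapred-site.xml -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
```

start-all.sh also expects passwordless ssh to localhost: if `ssh localhost` prompts for a password, generate a key with ssh-keygen and append the public key to ~/.ssh/authorized_keys first.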
Format a new distributed-filesystem:
$ bin/hadoop namenode -format
Start the hadoop daemons:
$ bin/start-all.sh
The hadoop daemon log output is written to the ${HADOOP_LOG_DIR} directory (defaults to ${HADOOP_HOME}/logs).
Browse the web interface for the NameNode and the JobTracker; by default they are available at:
NameNode - http://localhost:50070/
JobTracker - http://localhost:50030/
Copy the input files into the distributed filesystem:
$ bin/hadoop fs -put conf input
Run some of the examples provided:
$ bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
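The last argument is a regular expression, not a literal string: the example job counts every match of dfs[a-z.]+ in the input files. Plain grep shows what that pattern picks up (the sample line here is made up for illustration):

```shell
# -E enables extended regexes, -o prints only the matching parts.
echo "the dfs.replication and dfs.name.dir settings" | grep -Eo 'dfs[a-z.]+'
# prints:
# dfs.replication
# dfs.name.dir
```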
Examine the output files:
Copy the output files from the distributed filesystem to the local filesystem and examine them:
$ bin/hadoop fs -get output output
$ cat output/*
When you're done, stop the daemons with:
$ bin/stop-all.sh
III. Web UIs:
1. NameNode:http://localhost:50070/
2. JobTracker :http://localhost:50030/