
Hadoop Pseudo-Distributed Installation (Detailed)

Installation environment:

VMware 11

CentOS 6.5





Installation steps:

1. Install the JDK

Transfer the downloaded .bin JDK installer into the Hadoop folder under the Linux home directory, and put the Hadoop tarball in the same folder.
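A sketch of unpacking a .bin JDK, assuming the installer file is named jdk-6u27-linux-i586.bin (a hypothetical name; the unpacked directory jdk1.6.0_27 matches the symlink created below):

mkdir -p /usr/java
cp ~/Hadoop/jdk-6u27-linux-i586.bin /usr/java/
cd /usr/java
chmod +x jdk-6u27-linux-i586.bin
./jdk-6u27-linux-i586.bin    # unpacks into /usr/java/jdk1.6.0_27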



Go into the JDK installation directory and create a symbolic link:

[root@CentOS-6 java]# ln -s jdk1.6.0_27 java





[root@CentOS-6 java]# cd

Return to the home directory and edit .bashrc:
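A minimal sketch of the entries .bashrc needs, assuming the JDK symlink created above lives under /usr/java and Hadoop will be linked at /usr/hadoop (both paths inferred from the prompts later in this article):

export JAVA_HOME=/usr/java/java
export HADOOP_HOME=/usr/hadoop
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH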





Extract the Hadoop directory to /usr/ (detailed in Part 3 below), then make the environment variables take effect:
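Reload the shell configuration and check that the JDK is visible:

source ~/.bashrc
java -version    # should report 1.6.0_27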





2. Set Up SSH Equivalence

Configure SSH equivalence, that is, passwordless SSH; otherwise a password must be typed every time the daemons are started.

First confirm whether you can ssh to localhost without entering a password:

# ssh localhost

If you cannot log in to localhost over ssh without a password, run the following commands:

[root@CentOS-6 ~]# ssh-keygen -t rsa
[root@CentOS-6 ~]# cd .ssh/
[root@CentOS-6 .ssh]# ls
id_rsa  id_rsa.pub  known_hosts
[root@CentOS-6 .ssh]# cat id_rsa.pub > authorized_keys
[root@CentOS-6 .ssh]# ls
authorized_keys  id_rsa  id_rsa.pub  known_hosts
[root@CentOS-6 .ssh]#
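Now verify that the key works; the chmod is a precaution, since some sshd configurations reject key files with loose permissions:

chmod 600 ~/.ssh/authorized_keys
ssh localhost    # should log in without prompting for a password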





3. Install the Hadoop Software

Extract the Hadoop tarball into the /usr/ directory:
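A sketch, assuming the tarball is hadoop-1.2.1.tar.gz (1.2.1 is the version used later in this article) and sits in the home Hadoop folder:

tar -zxvf ~/Hadoop/hadoop-1.2.1.tar.gz -C /usr/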





Create a symbolic link:
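A sketch matching the /usr/hadoop path used by the prompts later on:

ln -s /usr/hadoop-1.2.1 /usr/hadoop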





Edit the Hadoop configuration files. Go into the conf directory:
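The path below is assumed from the [root@CentOS-6 conf]# prompts later in the article:

cd /usr/hadoop/conf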





Edit three files: core-site.xml, hdfs-site.xml, and mapred-site.xml.

1) In core-site.xml, add the following between <configuration> and </configuration>:

<property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
</property>
<property>
    <name>hadoop.tmp.dir</name>
    <value>/tmp/hadoop/hadoop-${user.name}</value>
</property>

2) In hdfs-site.xml, add the following between <configuration> and </configuration>:

<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>

3) In mapred-site.xml, add the following between <configuration> and </configuration>:

<property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
</property>













Go into the Hadoop installation's bin directory:
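Assumed path, matching the bin prompts below:

cd /usr/hadoop/bin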





Format the distributed file system.

Formatting the NameNode builds the on-disk structures that hold the HDFS metadata.





[root@CentOS-6 bin]# ./hadoop namenode -format





Start Hadoop:

[root@CentOS-6 bin]# ./start-all.sh





Check which daemons started:

[root@CentOS-6 bin]# jps
3884 NameNode
4180 JobTracker
4111 SecondaryNameNode
4441 Jps
[root@CentOS-6 bin]#





An error occurred during startup: the DataNode did not start (it is missing from the jps output above).

Check the log:
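A sketch of inspecting the DataNode log; the file name follows Hadoop's hadoop-<user>-datanode-<hostname>.log pattern (the same pattern as the balancer log shown at the end of this article):

tail -n 50 /usr/hadoop/logs/hadoop-root-datanode-CentOS-6.5.log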









The log shows a java.net.UnknownHostException:

java.net.UnknownHostException: CentOS-6.5: CentOS-6.5
    at java.net.InetAddress.getLocalHost(InetAddress.java:1360)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.getHostname(MetricsSystemImpl.java:481)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configureSystem(MetricsSystemImpl.java:412)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configure(MetricsSystemImpl.java:408)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.start(MetricsSystemImpl.java:152)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.init(MetricsSystemImpl.java:133)
    at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.init(DefaultMetricsSystem.java:40)
    at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.initialize(DefaultMetricsSystem.java:50)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1650)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1669)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1795)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1812)

When Hadoop formatted HDFS, the hostname it obtained from the hostname command was CentOS-6.5, but no mapping for that name was found in /etc/hosts.

The /etc/sysconfig/network file stores the hostname. Check what the hostname is there, then edit /etc/hosts so that it contains an entry for that hostname.





The corrected hosts file:
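A minimal sketch of the required entry, mapping the hostname CentOS-6.5 to the loopback address (the exact line used in the original setup is assumed):

127.0.0.1   localhost localhost.localdomain CentOS-6.5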





Restart the network service:

[root@CentOS-6 ~]# /etc/rc.d/init.d/network restart

Reformat HDFS:
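The same command as before, run from the bin directory:

./hadoop namenode -format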









Start the cluster again:
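Again from the bin directory; the jps check confirms that the fix worked:

./start-all.sh
jps    # DataNode should now appear alongside the other daemons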





Startup succeeds.





Inside the virtual machine, open a browser and go to

http://192.168.141.2:50070

to see the NameNode's storage report.









This completes the installation.

A quick test (the first hadoop fs -ls below fails simply because the HDFS home directory /user/root does not exist yet):

[root@CentOS-6 ~]# hadoop fs -ls
ls: Cannot access .: No such file or directory.
[root@CentOS-6 ~]# ls
anaconda-ks.cfg  hbase-0.94.16-security  install.log.syslog  模板  文档  桌面
Hadoop  installer  workspace  视频  下载
Hbase  install.log  公共的  图片  音乐
[root@CentOS-6 ~]# mkdir input
[root@CentOS-6 ~]# cd input/
[root@CentOS-6 input]# ls
[root@CentOS-6 input]# echo "hello world">test2.txt
[root@CentOS-6 input]# echo "hello hadoop">test1.txt
[root@CentOS-6 input]# ls
test1.txt  test2.txt
[root@CentOS-6 input]# hadoop fs -mkdir input
[root@CentOS-6 input]# hadoop fs -ls
Found 1 items
drwxr-xr-x   - root supergroup          0 2015-04-19 17:15 /user/root/input
[root@CentOS-6 input]# hadoop fs -put test1.txt input
[root@CentOS-6 input]# hadoop fs -put test2.txt input
[root@CentOS-6 input]# hadoop fs -ls
Found 1 items
drwxr-xr-x   - root supergroup          0 2015-04-19 17:16 /user/root/input
[root@CentOS-6 input]#





Test MapReduce:

[root@CentOS-6 ~]# cd /usr/hadoop

[root@CentOS-6 hadoop]# hadoop jar hadoop-examples-1.2.1.jar wordcount input output

15/04/19 17:22:19 INFO input.FileInputFormat: Total input paths to process : 2

15/04/19 17:22:19 INFO util.NativeCodeLoader: Loaded the native-hadoop library

15/04/19 17:22:19 WARN snappy.LoadSnappy: Snappy native library not loaded

15/04/19 17:22:20 INFO mapred.JobClient: Running job: job_201504191711_0001

15/04/19 17:22:21 INFO mapred.JobClient: map 0% reduce 0%

15/04/19 17:22:57 INFO mapred.JobClient: map 50% reduce 0%

15/04/19 17:22:58 INFO mapred.JobClient: map 100% reduce 0%

15/04/19 17:23:09 INFO mapred.JobClient: map 100% reduce 100%

15/04/19 17:23:10 INFO mapred.JobClient: Job complete: job_201504191711_0001

15/04/19 17:23:10 INFO mapred.JobClient: Counters: 29

15/04/19 17:23:10 INFO mapred.JobClient: Job Counters

15/04/19 17:23:10 INFO mapred.JobClient: Launched reduce tasks=1

15/04/19 17:23:10 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=63538

15/04/19 17:23:10 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0

15/04/19 17:23:10 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0

15/04/19 17:23:10 INFO mapred.JobClient: Launched map tasks=2

15/04/19 17:23:10 INFO mapred.JobClient: Data-local map tasks=2

15/04/19 17:23:10 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=11731

15/04/19 17:23:10 INFO mapred.JobClient: File Output Format Counters

15/04/19 17:23:10 INFO mapred.JobClient: Bytes Written=25

15/04/19 17:23:10 INFO mapred.JobClient: FileSystemCounters

15/04/19 17:23:10 INFO mapred.JobClient: FILE_BYTES_READ=55

15/04/19 17:23:10 INFO mapred.JobClient: HDFS_BYTES_READ=249

15/04/19 17:23:10 INFO mapred.JobClient: FILE_BYTES_WRITTEN=169962

15/04/19 17:23:10 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=25

15/04/19 17:23:10 INFO mapred.JobClient: File Input Format Counters

15/04/19 17:23:10 INFO mapred.JobClient: Bytes Read=25

15/04/19 17:23:10 INFO mapred.JobClient: Map-Reduce Framework

15/04/19 17:23:10 INFO mapred.JobClient: Map output materialized bytes=61

15/04/19 17:23:10 INFO mapred.JobClient: Map input records=2

15/04/19 17:23:10 INFO mapred.JobClient: Reduce shuffle bytes=61

15/04/19 17:23:10 INFO mapred.JobClient: Spilled Records=8

15/04/19 17:23:10 INFO mapred.JobClient: Map output bytes=41

15/04/19 17:23:10 INFO mapred.JobClient: CPU time spent (ms)=48340

15/04/19 17:23:10 INFO mapred.JobClient: Total committed heap usage (bytes)=292167680

15/04/19 17:23:10 INFO mapred.JobClient: Combine input records=4

15/04/19 17:23:10 INFO mapred.JobClient: SPLIT_RAW_BYTES=224

15/04/19 17:23:10 INFO mapred.JobClient: Reduce input records=4

15/04/19 17:23:10 INFO mapred.JobClient: Reduce input groups=3

15/04/19 17:23:10 INFO mapred.JobClient: Combine output records=4

15/04/19 17:23:10 INFO mapred.JobClient: Physical memory (bytes) snapshot=324190208

15/04/19 17:23:10 INFO mapred.JobClient: Reduce output records=3

15/04/19 17:23:10 INFO mapred.JobClient: Virtual memory (bytes) snapshot=1133568000

15/04/19 17:23:10 INFO mapred.JobClient: Map output records=4

[root@CentOS-6 hadoop]#

[root@CentOS-6 hadoop]# hadoop fs -ls

Found 2 items

drwxr-xr-x - root supergroup 0 2015-04-19 17:16 /user/root/input

drwxr-xr-x - root supergroup 0 2015-04-19 17:23 /user/root/output

[root@CentOS-6 hadoop]# hadoop fs -ls output

Found 3 items

-rw-r--r-- 1 root supergroup 0 2015-04-19 17:23 /user/root/output/_SUCCESS

drwxr-xr-x - root supergroup 0 2015-04-19 17:22 /user/root/output/_logs

-rw-r--r-- 1 root supergroup 25 2015-04-19 17:23 /user/root/output/part-r-00000

[root@CentOS-6 hadoop]# hadoop fs -cat output/part-r-00000

hadoop 1

hello 2

world 1

[root@CentOS-6 hadoop]#

The results above match the contents of the two input files.





In a fully distributed cluster, the data actually lives on the DataNodes (slave1 and slave2 here) at the location we specify, for example via dfs.data.dir in hdfs-site.xml:

[hadoop@master conf]$ vim hdfs-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>dfs.data.dir</name>
        <value>/data/hadoop</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
</configuration>

[root@CentOS-6 conf]# hadoop fs

Usage: java FsShell

[-ls <path>]

[-lsr <path>]

[-du <path>]

[-dus <path>]

[-count[-q] <path>]

[-mv <src> <dst>]

[-cp <src> <dst>]

[-rm [-skipTrash] <path>]

[-rmr [-skipTrash] <path>]

[-expunge]

[-put <localsrc> ... <dst>]

[-copyFromLocal <localsrc> ... <dst>]

[-moveFromLocal <localsrc> ... <dst>]

[-get [-ignoreCrc] [-crc] <src> <localdst>]

[-getmerge <src> <localdst> [addnl]]

[-cat <src>]

[-text <src>]

[-copyToLocal [-ignoreCrc] [-crc] <src> <localdst>]

[-moveToLocal [-crc] <src> <localdst>]

[-mkdir <path>]

[-setrep [-R] [-w] <rep> <path/file>]

[-touchz <path>]

[-test -[ezd] <path>]

[-stat [format] <path>]

[-tail [-f] <file>]

[-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]

[-chown [-R] [OWNER][:[GROUP]] PATH...]

[-chgrp [-R] GROUP PATH...]

[-help [cmd]]

Generic options supported are

-conf <configuration file> specify an application configuration file

-D <property=value> use value for given property

-fs <local|namenode:port> specify a namenode

-jt <local|jobtracker:port> specify a job tracker

-files <comma separated list of files> specify comma separated files to be copied to the map reduce cluster

-libjars <comma separated list of jars> specify comma separated jar files to include in the classpath.

-archives <comma separated list of archives> specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is

bin/hadoop command [genericOptions] [commandOptions]

[root@CentOS-6 conf]#

[root@CentOS-6 conf]# hadoop dfsadmin

Usage: java DFSAdmin

[-report]

[-safemode enter | leave | get | wait]

[-saveNamespace]

[-refreshNodes]

[-finalizeUpgrade]

[-upgradeProgress status | details | force]

[-metasave filename]

[-refreshServiceAcl]

[-refreshUserToGroupsMappings]

[-refreshSuperUserGroupsConfiguration]

[-setQuota <quota> <dirname>...<dirname>]

[-clrQuota <dirname>...<dirname>]

[-setSpaceQuota <quota> <dirname>...<dirname>]

[-clrSpaceQuota <dirname>...<dirname>]

[-setBalancerBandwidth <bandwidth in bytes per second>]

[-help [cmd]]

Generic options supported are

-conf <configuration file> specify an application configuration file

-D <property=value> use value for given property

-fs <local|namenode:port> specify a namenode

-jt <local|jobtracker:port> specify a job tracker

-files <comma separated list of files> specify comma separated files to be copied to the map reduce cluster

-libjars <comma separated list of jars> specify comma separated jar files to include in the classpath.

-archives <comma separated list of archives> specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is

bin/hadoop command [genericOptions] [commandOptions]

[root@CentOS-6 conf]#

[root@CentOS-6 ~]# hadoop dfsadmin -report

Configured Capacity: 18536591360 (17.26 GB)

Present Capacity: 11065761792 (10.31 GB)

DFS Remaining: 11065618432 (10.31 GB)

DFS Used: 143360 (140 KB)

DFS Used%: 0%

Under replicated blocks: 0

Blocks with corrupt replicas: 0

Missing blocks: 0

-------------------------------------------------

Datanodes available: 1 (1 total, 0 dead)

Name: 127.0.0.1:50010

Decommission Status : Normal

Configured Capacity: 18536591360 (17.26 GB)

DFS Used: 143360 (140 KB)

Non DFS Used: 7470829568 (6.96 GB)

DFS Remaining: 11065618432(10.31 GB)

DFS Used%: 0%

DFS Remaining%: 59.7%

Last contact: Sun Apr 19 17:37:09 CST 2015

[root@CentOS-6 ~]#





The built-in help describes each subcommand; for example, the refreshNodes command, which refreshes the set of DataNodes:

[root@CentOS-6 ~]# hadoop dfsadmin -help refreshNodes





The archive command merges many small files into one large HAR file:

[hadoop@master ~]$ hadoop archive -archiveName files.har -p /user/hadoop/input /user/hadoop

The archive name must end with the .har extension.

[hadoop@master ~]$ hadoop fs -cat /user/hadoop/files.har/part-0
hello hadoop
hello world

To balance the data load across DataNodes:

[root@CentOS-6 ~]# start-balancer.sh

starting balancer, logging to /usr/hadoop-1.2.1/libexec/../logs/hadoop-root-balancer-CentOS-6.5.out

[root@CentOS-6 ~]#