
Single-Node Installation (for testing Hadoop 2.2.0)

SELinux must be disabled. Run: /usr/sbin/setenforce 0

Note: it is best to disable it manually and permanently.
Also: turn off the firewall on every server, otherwise errors will occur later at runtime.

Linux commands for the firewall:
1) Permanent (survives a reboot). Enable: chkconfig iptables on. Disable: chkconfig iptables off.
2) Immediate (reverts after a reboot). Start: service iptables start. Stop: service iptables stop.
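Put together, a minimal sequence for turning both off on a RHEL/CentOS-style system might look like the sketch below (the sed edit of /etc/selinux/config is an assumption about where SELinux is configured permanently on your distribution):

# Disable SELinux for the current session
/usr/sbin/setenforce 0
# Disable SELinux permanently (takes effect after the next reboot)
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# Stop iptables now and keep it off across reboots
service iptables stop
chkconfig iptables off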

1. Create the user and group

[root@hadoop ~]# groupadd -g 200 hadoop
[root@hadoop ~]# useradd -u 200 -g hadoop hadoop
[root@hadoop ~]# passwd hadoop
Changing password for user hadoop.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
[root@hadoop ~]# su - hadoop
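To confirm that the user and group came out with the intended IDs, a quick check is:

# Verify the hadoop user and its primary group
id hadoop
# Expected output is along the lines of: uid=200(hadoop) gid=200(hadoop) groups=200(hadoop)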

2. Install SSH and configure passwordless SSH login (set up trust between the nodes)

1) Generate keys as the hadoop user. For the RSA key, accept the defaults at every prompt:
[hadoop@hadoop ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
1a:d9:48:f8:de:5b:be:e7:1f:5b:fd:48:df:59:59:94 hadoop@hadoop
[hadoop@hadoop ~]$ cd .ssh
[hadoop@hadoop .ssh]$ ls
id_rsa id_rsa.pub

For the DSA key, also accept the default paths:
[hadoop@hadoop .ssh]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_dsa.
Your public key has been saved in /home/hadoop/.ssh/id_dsa.pub.
The key fingerprint is:
71:bf:c6:f3:dc:ca:1f:24:8c:0b:7e:94:6f:a2:98:8a hadoop@hadoop
[hadoop@hadoop .ssh]$ ls
id_dsa id_dsa.pub id_rsa id_rsa.pub

Append the public keys to authorized_keys:
[hadoop@hadoop ~]$ cat .ssh/id_rsa.pub >> .ssh/authorized_keys
[hadoop@hadoop ~]$ cat .ssh/id_dsa.pub >> .ssh/authorized_keys
[hadoop@hadoop ~]$ cd .ssh
[hadoop@hadoop .ssh]$ ls
authorized_keys id_dsa id_dsa.pub id_rsa id_rsa.pub

Change the permissions on authorized_keys; without this change, logging in still asks for a password:
[hadoop@hadoop .ssh]$ chmod 644 authorized_keys
(chmod go-wx authorized_keys has the same effect, as shown below.)
[hadoop@hadoop1 .ssh]$ ll
total 32
-rw-rw-r-- 1 hadoop hadoop 396 Dec 19 11:20 authorized_keys
-rw------- 1 hadoop hadoop 1675 Dec 19 11:19 id_rsa
-rw-r--r-- 1 hadoop hadoop 396 Dec 19 11:19 id_rsa.pub
-rw-r--r-- 1 hadoop hadoop 402 Dec 19 11:20 known_hosts
[hadoop@hadoop1 .ssh]$ chmod go-wx authorized_keys
[hadoop@hadoop1 .ssh]$ ll
total 32
-rw-r--r-- 1 hadoop hadoop 396 Dec 19 11:20 authorized_keys
-rw------- 1 hadoop hadoop 1675 Dec 19 11:19 id_rsa
-rw-r--r-- 1 hadoop hadoop 396 Dec 19 11:19 id_rsa.pub
-rw-r--r-- 1 hadoop hadoop 402 Dec 19 11:20 known_hosts
[hadoop@hadoop1 .ssh]$ ssh hadoop1
Last login: Thu Dec 19 11:20:39 2013 from hadoop1

Verify SSH: log in to the local host without a password.
[oracle@hadoop ~]$ ssh hadoop
Last login: Wed Dec 18 15:38:25 2013 from hadoop

Note: when configuring SSH, the RSA key alone is sufficient; passwordless login works without configuring the DSA key.
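Following that note, a minimal RSA-only setup for the hadoop user would be roughly the following sketch (run as the hadoop user; paths assume the default ~/.ssh layout):

# Generate an RSA key pair, accepting every default and an empty passphrase
ssh-keygen -t rsa
# Authorize the key for password-free login to this host
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# Verify: this should log in without prompting for a password
ssh localhost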

3. Download and install the Java JDK

(Note: install as root, otherwise you will get errors.)
See any guide on installing a JDK on Linux for details.
# Download the JDK
wget http://60.28.110.228/source/package/jdk-6u21-linux-i586-rpm.bin
# Install the JDK (do not forget to make it executable first)
chmod +x jdk-6u21-linux-i586-rpm.bin
./jdk-6u21-linux-i586-rpm.bin

Alternatively, to install from the RPM package directly, run:
[root@hn ~]# rpm -ivh jdk-6u17-linux-i586.rpm
(The default JDK path is /usr/java/jdk1.6.0_17.)

# Configure environment variables
Note: you can edit .bash_profile, /etc/profile, or /etc/profile.d/java.sh here. Editing /etc/profile is best, because it applies to every user.
[root@linux64 ~]# vi .bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH
unset USERNAME

Append the Java-related settings:
export JAVA_HOME=/usr/java/jdk1.6.0_21
export HADOOP_HOME=/opt/modules/hadoop/hadoop-1.0.3
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH

vi /etc/profile.d/java.sh
# Copy and paste the following content into vi:
export JAVA_HOME=/home/hadoop/java/jdk1.8.0_25
export HADOOP_HOME=/home/hadoop/hadoop/hadoop-2.2.0
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"

# Apply the changes immediately by hand
source /etc/profile

(Remember to reboot the system after making the change.) Then test again:

# Test
jps
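Besides jps, a couple of quick checks confirm that the JDK and the environment variables took effect (the exact version string depends on the JDK you installed):

# Confirm the JDK is on the PATH and report its version
java -version
# Confirm the variables point where you expect
echo $JAVA_HOME
echo $HADOOP_HOME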

4. Check the base environment

/sbin/ifconfig
[hadoop@master root]$ /sbin/ifconfig
eth0 Link encap:Ethernet HWaddr 00:0C:29:7A:DE:12
inet addr:192.168.1.100 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe7a:de12/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:14 errors:0 dropped:0 overruns:0 frame:0
TX packets:821 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1591 (1.5 KiB) TX bytes:81925 (80.0 KiB)
Interrupt:67 Base address:0x2024
ping master
ssh master
jps
echo $JAVA_HOME
echo $HADOOP_HOME
hadoop

5. Hadoop single-node installation and configuration

Note: the configuration file location differs between Hadoop 2.2.0 and Hadoop 1.x.x. In Hadoop 1.x.x the configuration files are by default under conf/ in the unpacked directory; in Hadoop 2.2.0 they are under etc/hadoop in the unpacked directory.

[hadoop@hadoop2 sbin]$ ls /home/hadoop/hadoop/hadoop-2.2.0/etc/
hadoop
[hadoop@hadoop2 sbin]$ ls /home/hadoop/hadoop/hadoop-2.2.0/etc/hadoop/
capacity-scheduler.xml hdfs-site.xml mapred-site.xml
configuration.xsl httpfs-env.sh mapred-site.xml.template
container-executor.cfg httpfs-log4j.properties slaves
core-site.xml httpfs-signature.secret ssl-client.xml.example
hadoop-env.cmd httpfs-site.xml ssl-server.xml.example
hadoop-env.sh log4j.properties yarn-env.cmd
hadoop-metrics2.properties mapred-env.cmd yarn-env.sh
hadoop-metrics.properties mapred-env.sh yarn-site.xml
hadoop-policy.xml mapred-queues.xml.template

1) Download and unpack the Hadoop files

Place the installation archive hadoop-2.2.0.tar.gz in a directory of your choice and unpack it.
# Unpack the copied or downloaded Hadoop archive
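For example, assuming the archive was copied to /home/hadoop/hadoop (the directory used elsewhere in this article), the unpack step could look like this sketch:

# Unpack the Hadoop 2.2.0 archive as the hadoop user
cd /home/hadoop/hadoop
tar -zxvf hadoop-2.2.0.tar.gz
# The configuration files are then under hadoop-2.2.0/etc/hadoop
ls hadoop-2.2.0/etc/hadoop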

2) Edit the configuration files

Edit the XML configuration files in the etc/hadoop folder under the unpacked directory (create a file yourself if it does not exist).
In hadoop-env.sh, change the following setting:
export JAVA_HOME=/home/hadoop/java/jdk1.8.0_25

3) Change the slaves file to the following:

hadoop2

4) Hadoop Common component: configure core-site.xml

# Edit the core-site.xml file
vi /home/hadoop/hadoop/hadoop-2.2.0/etc/hadoop/core-site.xml
core-site.xml (here "hadoop2" is a hostname configured in /etc/hosts; if you have not set one, use localhost instead). All of the property blocks below must sit inside a <configuration>...</configuration> root element:
<property>
<name>fs.default.name</name>
<value>hdfs://hadoop2:9000</value>
</property>

<property>
<name>dfs.replication</name>
<value>1</value>
</property>

<property>
<name>dfs.namenode.name.dir</name>
<value>/home/hadoop/hadoop/hadoop-2.2.0/dfs/name</value>
</property>

<property>
<name>dfs.datanode.data.dir</name>
<value>/home/hadoop/hadoop/hadoop-2.2.0/dfs/data</value>
</property>
Each of these directories must be outside /tmp. (The dfs.* properties above conventionally belong in hdfs-site.xml rather than core-site.xml.)
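Since those dfs.* paths must live outside /tmp, it does no harm to create them up front; a sketch, assuming the paths used above:

# Create the NameNode and DataNode storage directories referenced above
mkdir -p /home/hadoop/hadoop/hadoop-2.2.0/dfs/name
mkdir -p /home/hadoop/hadoop/hadoop-2.2.0/dfs/data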

5) yarn-site.xml:

<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>

6) mapred-site.xml

[oracle@hadoop conf]$ vi mapred-site.xml

mapred-site.xml:
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
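In the stock Hadoop 2.2.0 distribution only mapred-site.xml.template ships in etc/hadoop (see the directory listing in section 5), so if mapred-site.xml does not exist yet, create it from the template before editing:

# Create mapred-site.xml from the bundled template, then edit it
cd /home/hadoop/hadoop/hadoop-2.2.0/etc/hadoop
cp mapred-site.xml.template mapred-site.xml
vi mapred-site.xml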

6. Hadoop 2.2.0 commands

Note: the Hadoop 2.2.0 start/stop scripts live in the sbin/ directory of the installation, /home/hadoop/hadoop/hadoop-2.2.0/sbin:
[hadoop@hadoop2 sbin]$ ls /home/hadoop/hadoop/hadoop-2.2.0/sbin/
distribute-exclude.sh refresh-namenodes.sh start-secure-dns.sh stop-dfs.sh
hadoop-daemon.sh slaves.sh start-yarn.cmd stop-secure-dns.sh
hadoop-daemons.sh start-all.cmd start-yarn.sh stop-yarn.cmd
hdfs-config.cmd start-all.sh stop-all.cmd stop-yarn.sh
hdfs-config.sh start-balancer.sh stop-all.sh yarn-daemon.sh
httpfs.sh start-dfs.cmd stop-balancer.sh yarn-daemons.sh
mr-jobhistory-daemon.sh start-dfs.sh stop-dfs.cmd

Start the services:
Format HDFS:
bin/hadoop namenode -format
Start HDFS:
sbin/start-dfs.sh
Start YARN:
sbin/start-yarn.sh
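Once start-dfs.sh and start-yarn.sh have finished, jps run as the hadoop user should list the HDFS and YARN daemons. A rough sketch of what to expect on a single node (process IDs omitted, and the exact set depends on your configuration):

jps
# Expected daemons on a single-node setup:
#   NameNode
#   SecondaryNameNode
#   DataNode
#   ResourceManager
#   NodeManager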

7. Testing

1) Visit the URL http://yarn001:8088 (the YARN ResourceManager web UI).

2) Visit the URL http://yarn001:50070 (the HDFS NameNode web UI).
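If no browser is available on the machine, the same two checks can be done from the shell; a sketch, assuming the yarn001 hostname resolves as above:

# ResourceManager web UI (YARN); should return HTTP 200 if it is up
curl -s -o /dev/null -w "%{http_code}\n" http://yarn001:8088
# NameNode web UI (HDFS); should return HTTP 200 if it is up
curl -s -o /dev/null -w "%{http_code}\n" http://yarn001:50070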

3) Wordcount

Run the test program bundled with Hadoop to count the occurrences of each word.
For example, put the .sh files from /home/hadoop/hadoop-1.0.3/bin into HDFS and run the bundled word-count job over them.
Create an input directory in HDFS and put the .sh files from /home/hadoop/hadoop-1.0.3/bin into it:

[hadoop@hadoop bin]$ hadoop fs -mkdir /input
[hadoop@hadoop bin]$ ls
hadoop start-all.sh stop-balancer.sh
hadoop-config.sh start-balancer.sh stop-dfs.sh
hadoop-daemon.sh start-dfs.sh stop-jobhistoryserver.sh
hadoop-daemons.sh start-jobhistoryserver.sh stop-mapred.sh
rcc start-mapred.sh task-controller
slaves.sh stop-all.sh
[hadoop@hadoop bin]$ hadoop fs -put *.sh /input
[hadoop@hadoop bin]$ hadoop fs -ls /input
Found 14 items
-rw-r--r-- 1 hadoop supergroup 2377 2014-05-10 11:14 /input/hadoop-config.sh
-rw-r--r-- 1 hadoop supergroup 4336 2014-05-10 11:14 /input/hadoop-daemon.sh
-rw-r--r-- 1 hadoop supergroup 1329 2014-05-10 11:14 /input/hadoop-daemons.sh
-rw-r--r-- 1 hadoop supergroup 2143 2014-05-10 11:14 /input/slaves.sh
-rw-r--r-- 1 hadoop supergroup 1166 2014-05-10 11:14 /input/start-all.sh
-rw-r--r-- 1 hadoop supergroup 1065 2014-05-10 11:14 /input/start-balancer.sh
-rw-r--r-- 1 hadoop supergroup 1745 2014-05-10 11:14 /input/start-dfs.sh
-rw-r--r-- 1 hadoop supergroup 1145 2014-05-10 11:14 /input/start-jobhistoryserver.sh
-rw-r--r-- 1 hadoop supergroup 1259 2014-05-10 11:14 /input/start-mapred.sh
-rw-r--r-- 1 hadoop supergroup 1119 2014-05-10 11:14 /input/stop-all.sh
-rw-r--r-- 1 hadoop supergroup 1116 2014-05-10 11:14 /input/stop-balancer.sh
-rw-r--r-- 1 hadoop supergroup 1246 2014-05-10 11:14 /input/stop-dfs.sh
-rw-r--r-- 1 hadoop supergroup 1131 2014-05-10 11:14 /input/stop-jobhistoryserver.sh
-rw-r--r-- 1 hadoop supergroup 1168 2014-05-10 11:14 /input/stop-mapred.sh
[hadoop@hadoop bin]$

Run the word-count program bundled with Hadoop:

[hadoop@hadoop hadoop-1.0.3]$ hadoop jar hadoop-examples-1.0.3.jar wordcount /input /output
14/05/10 11:19:14 INFO input.FileInputFormat: Total input paths to process : 14
14/05/10 11:19:14 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/05/10 11:19:14 WARN snappy.LoadSnappy: Snappy native library not loaded
14/05/10 11:19:15 INFO mapred.JobClient: Running job: job_201405101042_0001
14/05/10 11:19:16 INFO mapred.JobClient: map 0% reduce 0%
14/05/10 11:19:31 INFO mapred.JobClient: map 14% reduce 0%
14/05/10 11:19:40 INFO mapred.JobClient: map 28% reduce 4%
14/05/10 11:19:46 INFO mapred.JobClient: map 42% reduce 4%
14/05/10 11:19:49 INFO mapred.JobClient: map 42% reduce 9%
14/05/10 11:19:52 INFO mapred.JobClient: map 57% reduce 9%
14/05/10 11:20:00 INFO mapred.JobClient: map 57% reduce 14%
14/05/10 11:20:03 INFO mapred.JobClient: map 64% reduce 14%
14/05/10 11:20:06 INFO mapred.JobClient: map 71% reduce 19%
14/05/10 11:20:09 INFO mapred.JobClient: map 78% reduce 19%
14/05/10 11:20:12 INFO mapred.JobClient: map 85% reduce 21%
14/05/10 11:20:15 INFO mapred.JobClient: map 92% reduce 21%
14/05/10 11:20:18 INFO mapred.JobClient: map 100% reduce 26%
14/05/10 11:20:30 INFO mapred.JobClient: map 100% reduce 100%
14/05/10 11:20:35 INFO mapred.JobClient: Job complete: job_201405101042_0001
14/05/10 11:20:35 INFO mapred.JobClient: Counters: 29
14/05/10 11:20:35 INFO mapred.JobClient: Job Counters
14/05/10 11:20:35 INFO mapred.JobClient: Launched reduce tasks=1
14/05/10 11:20:35 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=90823
14/05/10 11:20:35 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
14/05/10 11:20:35 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
14/05/10 11:20:35 INFO mapred.JobClient: Launched map tasks=14
14/05/10 11:20:35 INFO mapred.JobClient: Data-local map tasks=14
14/05/10 11:20:35 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=56147
14/05/10 11:20:35 INFO mapred.JobClient: File Output Format Counters
14/05/10 11:20:35 INFO mapred.JobClient: Bytes Written=6185
14/05/10 11:20:35 INFO mapred.JobClient: FileSystemCounters
14/05/10 11:20:35 INFO mapred.JobClient: FILE_BYTES_READ=28744
14/05/10 11:20:35 INFO mapred.JobClient: HDFS_BYTES_READ=23862
14/05/10 11:20:35 INFO mapred.JobClient: FILE_BYTES_WRITTEN=381254
14/05/10 11:20:35 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=6185
14/05/10 11:20:35 INFO mapred.JobClient: File Input Format Counters
14/05/10 11:20:35 INFO mapred.JobClient: Bytes Read=22345
14/05/10 11:20:35 INFO mapred.JobClient: Map-Reduce Framework
14/05/10 11:20:35 INFO mapred.JobClient: Map output materialized bytes=28822
14/05/10 11:20:35 INFO mapred.JobClient: Map input records=691
14/05/10 11:20:35 INFO mapred.JobClient: Reduce shuffle bytes=28822
14/05/10 11:20:35 INFO mapred.JobClient: Spilled Records=4022
14/05/10 11:20:35 INFO mapred.JobClient: Map output bytes=34175
14/05/10 11:20:35 INFO mapred.JobClient: Total committed heap usage (bytes)=2266947584
14/05/10 11:20:35 INFO mapred.JobClient: CPU time spent (ms)=8610
14/05/10 11:20:35 INFO mapred.JobClient: Combine input records=3139
14/05/10 11:20:35 INFO mapred.JobClient: SPLIT_RAW_BYTES=1517
14/05/10 11:20:35 INFO mapred.JobClient: Reduce input records=2011
14/05/10 11:20:35 INFO mapred.JobClient: Reduce input groups=499
14/05/10 11:20:35 INFO mapred.JobClient: Combine output records=2011
14/05/10 11:20:35 INFO mapred.JobClient: Physical memory (bytes) snapshot=2006347776
14/05/10 11:20:35 INFO mapred.JobClient: Reduce output records=499
14/05/10 11:20:35 INFO mapred.JobClient: Virtual memory (bytes) snapshot=5150412800
14/05/10 11:20:35 INFO mapred.JobClient: Map output records=3139
[hadoop@hadoop hadoop-1.0.3]$
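The run above uses the Hadoop 1.0.3 examples jar from the installation root. On Hadoop 2.2.0 the bundled examples jar lives under share/hadoop/mapreduce instead, so an equivalent invocation (a sketch, assuming the 2.2.0 installation path used in this article) would be:

# Run the bundled wordcount example on Hadoop 2.2.0
cd /home/hadoop/hadoop/hadoop-2.2.0
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount /input /output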

As a cross-check, count the occurrences of a particular word in the .sh files locally. For example, the word "required" occurs 14 times; compare this with the result produced by the Hadoop job:
[hadoop@hadoop bin]$ grep required *.sh
hadoop-config.sh:#Unless required by applicable law or agreed to in writing, software
hadoop-daemon.sh:#Unless required by applicable law or agreed to in writing, software
hadoop-daemons.sh:#Unless required by applicable law or agreed to in writing, software
slaves.sh:#Unless required by applicable law or agreed to in writing, software
start-all.sh:#Unless required by applicable law or agreed to in writing, software
start-balancer.sh:#Unless required by applicable law or agreed to in writing, software
start-dfs.sh:#Unless required by applicable law or agreed to in writing, software
start-jobhistoryserver.sh:#Unless required by applicable law or agreed to in writing, software
start-mapred.sh:#Unless required by applicable law or agreed to in writing, software
stop-all.sh:#Unless required by applicable law or agreed to in writing, software
stop-balancer.sh:#Unless required by applicable law or agreed to in writing, software
stop-dfs.sh:#Unless required by applicable law or agreed to in writing, software
stop-jobhistoryserver.sh:#Unless required by applicable law or agreed to in writing, software
stop-mapred.sh:#Unless required by applicable law or agreed to in writing, software
[hadoop@hadoop bin]$ grep required *.sh | wc
14 168 1209
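The same number can also be read back from the job's output in HDFS; a sketch, assuming the /output path used above and the default part-r-00000 output file name:

# Show the word counts produced by the job and filter for "required"
hadoop fs -cat /output/part-r-00000 | grep required
# The count printed for "required" should match the local grep | wc result (14)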

4) Verify that the deployment succeeded through the web interface

1) # Check whether the JobTracker and TaskTrackers are healthy: http://master:50030/
To see whether Hadoop started successfully, open http://localhost:50030 in a browser and check that MapReduce started correctly.
Note: you can use either the hostname (hadoop) or localhost in the URL; the result is the same and both confirm that Hadoop started correctly.

Open http://localhost:50070 in a browser to check whether the NameNode (HDFS) started correctly.

2) Check whether the NameNode and DataNodes are healthy: http://master:50070/
Use the hadoop command, run from the bin directory of the Hadoop installation (or from anywhere, since the environment variables are already configured):
[oracle@hadoop bin]$ ./hadoop fs -put stop-all.sh hdfs://hadoop:9000/
Warning: $HADOOP_HOME is deprecated.
This stores an arbitrary script (stop-all.sh here) in HDFS; the HDFS entry point hdfs://hadoop:9000/ has to be given explicitly.

Then click "Browse the filesystem" in the web UI: you can see that the command succeeded.
Here the stop-all.sh script comes from the Hadoop installation directory; the tmp directory shown is generated by Hadoop itself and can be ignored.

5) Check that the cluster works by running the Hadoop pi example

cd /opt/modules/hadoop/hadoop-1.0.3
bin/hadoop jar hadoop-examples-1.0.3.jar pi 10 100

# A healthy cluster produces output like the following
12/07/15 10:50:48 INFO mapred.FileInputFormat: Total input paths to process : 10
12/07/15 10:50:48 INFO mapred.JobClient: Running job: job_201207151041_0001
12/07/15 10:50:49 INFO mapred.JobClient: map 0% reduce 0%
12/07/15 10:51:42 INFO mapred.JobClient: map 40% reduce 0%
12/07/15 10:52:07 INFO mapred.JobClient: map 70% reduce 13%
12/07/15 10:52:10 INFO mapred.JobClient: map 80% reduce 16%
12/07/15 10:52:11 INFO mapred.JobClient: map 90% reduce 16%
12/07/15 10:52:22 INFO mapred.JobClient: map 100% reduce 100%
.....................
12/07/15 10:52:28 INFO mapred.JobClient: Virtual memory (bytes) snapshot=2155343872
12/07/15 10:52:28 INFO mapred.JobClient: Map output records=20
Job Finished in 100.608 seconds
Estimated value of Pi is 3.14800000000000000000
[oracle@hadoop hadoop-1.0.3]$ hadoop jar hadoop-examples-1.0.3.jar pi 10 100
Warning: $HADOOP_HOME is deprecated.

Number of Maps = 10
Samples per Map = 100
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
13/12/19 08:25:36 INFO mapred.FileInputFormat: Total input paths to process : 10
13/12/19 08:25:36 INFO mapred.JobClient: Running job: job_201312181743_0001
13/12/19 08:25:37 INFO mapred.JobClient: map 0% reduce 0%
13/12/19 08:25:52 INFO mapred.JobClient: map 20% reduce 0%
13/12/19 08:25:58 INFO mapred.JobClient: map 40% reduce 0%
13/12/19 08:26:04 INFO mapred.JobClient: map 60% reduce 0%
13/12/19 08:26:12 INFO mapred.JobClient: map 80% reduce 13%
13/12/19 08:26:18 INFO mapred.JobClient: map 100% reduce 26%
13/12/19 08:26:29 INFO mapred.JobClient: map 100% reduce 100%
13/12/19 08:26:34 INFO mapred.JobClient: Job complete: job_201312181743_0001
13/12/19 08:26:34 INFO mapred.JobClient: Counters: 30
13/12/19 08:26:34 INFO mapred.JobClient: Job Counters
13/12/19 08:26:34 INFO mapred.JobClient: Launched reduce tasks=1
13/12/19 08:26:34 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=59797
13/12/19 08:26:34 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
13/12/19 08:26:34 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
13/12/19 08:26:34 INFO mapred.JobClient: Launched map tasks=10
13/12/19 08:26:34 INFO mapred.JobClient: Data-local map tasks=10
13/12/19 08:26:34 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=34717
13/12/19 08:26:34 INFO mapred.JobClient: File Input Format Counters
13/12/19 08:26:34 INFO mapred.JobClient: Bytes Read=1180
13/12/19 08:26:34 INFO mapred.JobClient: File Output Format Counters
13/12/19 08:26:34 INFO mapred.JobClient: Bytes Written=97
13/12/19 08:26:34 INFO mapred.JobClient: FileSystemCounters
13/12/19 08:26:34 INFO mapred.JobClient: FILE_BYTES_READ=226
13/12/19 08:26:34 INFO mapred.JobClient: HDFS_BYTES_READ=2380
13/12/19 08:26:34 INFO mapred.JobClient: FILE_BYTES_WRITTEN=238526
13/12/19 08:26:34 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=215
13/12/19 08:26:34 INFO mapred.JobClient: Map-Reduce Framework
13/12/19 08:26:34 INFO mapred.JobClient: Map output materialized bytes=280
13/12/19 08:26:34 INFO mapred.JobClient: Map input records=10
13/12/19 08:26:34 INFO mapred.JobClient: Reduce shuffle bytes=280
13/12/19 08:26:34 INFO mapred.JobClient: Spilled Records=40
13/12/19 08:26:34 INFO mapred.JobClient: Map output bytes=180
13/12/19 08:26:34 INFO mapred.JobClient: Total committed heap usage (bytes)=1623957504
13/12/19 08:26:34 INFO mapred.JobClient: CPU time spent (ms)=7930
13/12/19 08:26:34 INFO mapred.JobClient: Map input bytes=240
13/12/19 08:26:34 INFO mapred.JobClient: SPLIT_RAW_BYTES=1200
13/12/19 08:26:34 INFO mapred.JobClient: Combine input records=0
13/12/19 08:26:34 INFO mapred.JobClient: Reduce input records=20
13/12/19 08:26:34 INFO mapred.JobClient: Reduce input groups=20
13/12/19 08:26:34 INFO mapred.JobClient: Combine output records=0
13/12/19 08:26:34 INFO mapred.JobClient: Physical memory (bytes) snapshot=1480073216
13/12/19 08:26:34 INFO mapred.JobClient: Reduce output records=0
13/12/19 08:26:34 INFO mapred.JobClient: Virtual memory (bytes) snapshot=4101021696
13/12/19 08:26:34 INFO mapred.JobClient: Map output records=20
Job Finished in 58.087 seconds
Estimated value of Pi is 3.14800000000000000000
[oracle@hadoop hadoop-1.0.3]$

8. Starting and stopping individual daemons

# Start the master node (NameNode):
/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop-daemon.sh start namenode

# Start the JobTracker:
/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop-daemon.sh start jobtracker

# Start the SecondaryNameNode:
/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop-daemon.sh start secondarynamenode

# Start the DataNode and TaskTracker:

/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop-daemon.sh start datanode
/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop-daemon.sh start tasktracker

To stop a daemon, the command is the same with start replaced by stop.
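The daemon scripts above are from the Hadoop 1.0.3 layout. On Hadoop 2.2.0 the per-daemon equivalents live under sbin/ and are split between hadoop-daemon.sh (HDFS) and yarn-daemon.sh (YARN); a sketch, assuming the installation path used in this article:

# HDFS daemons
/home/hadoop/hadoop/hadoop-2.2.0/sbin/hadoop-daemon.sh start namenode
/home/hadoop/hadoop/hadoop-2.2.0/sbin/hadoop-daemon.sh start secondarynamenode
/home/hadoop/hadoop/hadoop-2.2.0/sbin/hadoop-daemon.sh start datanode
# YARN daemons
/home/hadoop/hadoop/hadoop-2.2.0/sbin/yarn-daemon.sh start resourcemanager
/home/hadoop/hadoop/hadoop-2.2.0/sbin/yarn-daemon.sh start nodemanager
# As above, replace start with stop to shut a daemon down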

# If an error occurs, check the logs
tail -f /opt/modules/hadoop/hadoop-1.0.3/logs/*