
Hadoop 2.7.1: directory creation paths configured in the three .xml files

2016-03-01 21:25
I originally had Hadoop 1.0.1 set up and working, and I am now trying to replace it with 2.7.1. After starting the cluster, `jps` shows only the Jps process itself; none of the daemons come up. Posts online say the cause is insufficient write permission during formatting:

16/03/01 21:21:00 WARN namenode.NameNode: Encountered exception during format:

java.io.IOException: Cannot create directory /home/hadoop/leen/hadoop/tmp/dfs/name/current

So supposedly the user lacks permission to create this directory, but I am running as root! Could someone point me in the right direction? My configuration files follow a blog post (linked further down), with a few changes for my own setup. Any help is appreciated.
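One thing worth checking first: `hdfs namenode -format` runs as whichever user invokes it, and that user (not root) must be able to create the whole `dfs/name/current` tree under the base directory. Below is a minimal writability check, done in a scratch directory so it is safe to run anywhere; on the actual machine, substitute the real path from the log (`/home/hadoop/leen/hadoop/tmp`):

```shell
# Scratch stand-in for the real base dir /home/hadoop/leen/hadoop/tmp
base=$(mktemp -d)

# The format step must create dfs/name/current under the base dir,
# which only works if the invoking user can write the whole tree.
if [ -w "$base" ]; then
    mkdir -p "$base/dfs/name/current"
    echo "ok: created $base/dfs/name/current"
else
    echo "not writable: $base"
fi
```

On the real directory, if `ls -ld` shows it owned by root while the daemons run as `leen`, the usual fix is `chown -R leen /home/hadoop/leen/hadoop/tmp` (as root) followed by re-running the format.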

STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 15ecc87ccf4a0228f35af08fc56de536e6ce657a; compiled by 'jenkins' on 2015-06-29T06:04Z
STARTUP_MSG:   java = 1.7.0_79
************************************************************/
16/03/01 21:20:54 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
16/03/01 21:20:54 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-2e0fcf64-17b7-46da-88ce-4e4c0abc9029
16/03/01 21:20:59 INFO namenode.FSNamesystem: No KeyProvider found.
16/03/01 21:20:59 INFO namenode.FSNamesystem: fsLock is fair:true
16/03/01 21:20:59 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
16/03/01 21:20:59 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
16/03/01 21:20:59 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
16/03/01 21:20:59 INFO blockmanagement.BlockManager: The block deletion will start around 2016 Mar 01 21:20:59
16/03/01 21:20:59 INFO util.GSet: Computing capacity for map BlocksMap
16/03/01 21:20:59 INFO util.GSet: VM type       = 64-bit
16/03/01 21:20:59 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
16/03/01 21:20:59 INFO util.GSet: capacity      = 2^21 = 2097152 entries
16/03/01 21:20:59 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
16/03/01 21:20:59 INFO blockmanagement.BlockManager: defaultReplication         = 1
16/03/01 21:20:59 INFO blockmanagement.BlockManager: maxReplication             = 512
16/03/01 21:20:59 INFO blockmanagement.BlockManager: minReplication             = 1
16/03/01 21:20:59 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
16/03/01 21:20:59 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
16/03/01 21:20:59 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
16/03/01 21:20:59 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
16/03/01 21:20:59 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
16/03/01 21:20:59 INFO namenode.FSNamesystem: fsOwner             = leen (auth:SIMPLE)
16/03/01 21:20:59 INFO namenode.FSNamesystem: supergroup          = supergroup
16/03/01 21:20:59 INFO namenode.FSNamesystem: isPermissionEnabled = true
16/03/01 21:20:59 INFO namenode.FSNamesystem: HA Enabled: false
16/03/01 21:20:59 INFO namenode.FSNamesystem: Append Enabled: true
16/03/01 21:21:00 INFO util.GSet: Computing capacity for map INodeMap
16/03/01 21:21:00 INFO util.GSet: VM type       = 64-bit
16/03/01 21:21:00 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
16/03/01 21:21:00 INFO util.GSet: capacity      = 2^20 = 1048576 entries
16/03/01 21:21:00 INFO namenode.FSDirectory: ACLs enabled? false
16/03/01 21:21:00 INFO namenode.FSDirectory: XAttrs enabled? true
16/03/01 21:21:00 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
16/03/01 21:21:00 INFO namenode.NameNode: Caching file names occuring more than 10 times
16/03/01 21:21:00 INFO util.GSet: Computing capacity for map cachedBlocks
16/03/01 21:21:00 INFO util.GSet: VM type       = 64-bit
16/03/01 21:21:00 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
16/03/01 21:21:00 INFO util.GSet: capacity      = 2^18 = 262144 entries
16/03/01 21:21:00 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
16/03/01 21:21:00 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
16/03/01 21:21:00 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
16/03/01 21:21:00 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
16/03/01 21:21:00 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
16/03/01 21:21:00 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
16/03/01 21:21:00 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
16/03/01 21:21:00 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
16/03/01 21:21:00 INFO util.GSet: Computing capacity for map NameNodeRetryCache
16/03/01 21:21:00 INFO util.GSet: VM type       = 64-bit
16/03/01 21:21:00 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
16/03/01 21:21:00 INFO util.GSet: capacity      = 2^15 = 32768 entries
16/03/01 21:21:00 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1510641234-127.0.1.1-1456838460195
16/03/01 21:21:00 WARN namenode.NameNode: Encountered exception during format:
java.io.IOException: Cannot create directory /home/hadoop/leen/hadoop/tmp/dfs/name/current
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:337)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:548)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:569)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:161)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:991)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
16/03/01 21:21:00 ERROR namenode.NameNode: Failed to start namenode.
java.io.IOException: Cannot create directory /home/hadoop/leen/hadoop/tmp/dfs/name/current
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:337)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:548)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:569)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:161)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:991)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
16/03/01 21:21:00 INFO util.ExitUtil: Exiting with status 1
16/03/01 21:21:00 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1
************************************************************/
leen@ubuntu:/usr/share/hadoop-2.7.1$ sbin/start-dfs.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/share/hadoop-2.7.1/logs/hadoop-leen-namenode-ubuntu.out
localhost: starting datanode, logging to /usr/share/hadoop-2.7.1/logs/hadoop-leen-datanode-ubuntu.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/share/hadoop-2.7.1/logs/hadoop-leen-secondarynamenode-ubuntu.out
leen@ubuntu:/usr/share/hadoop-2.7.1$ jps
9155 Jps
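When `jps` shows only the Jps process itself, the reason each daemon died is recorded in its log file under the directory printed by `start-dfs.sh`. A sketch of pulling the tail of the NameNode log (directory taken from the startup output above; the loop simply does nothing if no log file exists yet):

```shell
# Log directory printed by start-dfs.sh above
logdir=/usr/share/hadoop-2.7.1/logs

# Show the last lines of each namenode log, if any exist
for f in "$logdir"/hadoop-*-namenode-*.log; do
    if [ -f "$f" ]; then
        tail -n 20 "$f"
    fi
done
```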


My configuration follows this online guide:
http://zhidao.baidu.com/link?url=a5o-u1MuyMW6HoCTjH5YoFcQbmFNZIHwl-VBFuUZELx3IeSkLbZML-VguNmQMgF_zuQjy4mPFLpBlBAJukkL-HGDyVkD9JqBM0S_Ml7T9Nm
The details are as follows:

1. conf/core-site.xml:
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/hadooptmp</value>
    <description>A base for other temporary directories.</description>
  </property>

  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>
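One way to be sure which directory the NameNode will actually use is to read `hadoop.tmp.dir` back out of core-site.xml and inspect it. A sketch using a naive grep/sed, which is enough for the flat one-property-per-line layout above; the config path is an assumption, so adjust it to your installation:

```shell
# Assumed location of the file; adjust to your installation
conf=${CORE_SITE:-/usr/share/hadoop-2.7.1/etc/hadoop/core-site.xml}

if [ -f "$conf" ]; then
    # Pull out the <value> line that follows the hadoop.tmp.dir <name> line
    dir=$(grep -A1 '<name>hadoop.tmp.dir</name>' "$conf" \
          | sed -n 's:.*<value>\(.*\)</value>.*:\1:p')
    echo "hadoop.tmp.dir = $dir"
    ls -ld "$dir"
else
    echo "config not found: $conf"
fi
```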

2.conf/hadoop-env.sh:
export JAVA_HOME=/home/hadoop/jdk1.x.x_xx

3. conf/hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>

  <property>
    <name>dfs.data.dir</name>
    <value>/home/hadoop/hadoopfs/data</value>
  </property>

  <property>
    <name>dfs.http.address</name>
    <value>master:50070</value>
  </property>

  <property>
    <name>dfs.back.http.address</name>
    <value>node1:50070</value>
  </property>

  <property>
    <name>dfs.name.dir</name>
    <value>/home/hadoop/hadoopfs/name</value>
  </property>

  <property>
    <name>fs.checkpoint.dir</name>
    <value>/home/hadoop/hadoopcheckpoint</value>
  </property>

  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>
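Before the first format it also pays to pre-create the storage directories named in `dfs.name.dir` and `dfs.data.dir` as the daemon user, so nothing under them ends up owned by root. A sketch; the `HADOOP_FS_BASE` variable is illustrative, and on the real machine it should point at `/home/hadoop/hadoopfs` from the config above:

```shell
# Illustrative base dir; point at /home/hadoop/hadoopfs on the real machine
fsbase=${HADOOP_FS_BASE:-$(mktemp -d)}

# Pre-create the name and data dirs with sane permissions
for d in "$fsbase/name" "$fsbase/data"; do
    mkdir -p "$d"
    chmod 755 "$d"
done
ls -ld "$fsbase/name" "$fsbase/data"
```

If the tree was already created by root, `chown -R` it back to the user that runs the daemons before formatting.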

4. conf/mapred-site.xml:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
  </property>

  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>4</value>
  </property>

  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>4</value>
  </property>

  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx1000m</value>
  </property>
</configuration>