
Deploying Hadoop with Cygwin on Win7: Configuring Custom Paths

Previously, on XP, I set up a pseudo-distributed environment with Cygwin using the default paths for both the namenode and datanode; after a format, the data directories were generated automatically and everything started without problems. (A full Cygwin download takes an entire night; contact me if you need it.)

Hadoop path: C:\cygwin\home\thinkpad\hadoop-1.0.4

Today I tried configuring custom paths in the config files on Win7 instead of relying on the defaults.

What I found:

1. The paths are not created automatically (a shell sketch for pre-creating them follows point 3).

2. After creating them manually, jps showed that the datanode and tasktracker would not start, failing with file permission errors.

3. The directories generated automatically after startup were as follows:

Created under the SYSTEM account (datanode, tasktracker, secondary namenode):

C:\tmp\hadoop-SYSTEM\dfs\data

C:\tmp\hadoop-SYSTEM\mapred\local\taskTracker

C:\tmp\hadoop-SYSTEM\dfs\namesecondary

Created under the user account (namenode):

C:\tmp\hadoop-thinkpad\dfs\name

The file paths themselves reveal the difference in permission level: one set belongs to SYSTEM, the other to the user thinkpad.
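
Since the paths are not created automatically and mismatched ownership is what later breaks startup, it helps to pre-create the custom directories from the Cygwin shell before formatting. A minimal sketch (the paths match the example config below; 0755 is the permission the tasktracker later insists on, though chmod under Cygwin maps imperfectly onto NTFS ACLs, so this alone may not suffice):

# pre-create the namenode, edit log, and datanode directories
mkdir -p /home/thinkpad/hadoop/local/namenode /home/thinkpad/hadoop/local/editlog /home/thinkpad/hadoop/block
# make them readable/traversable the way Hadoop's permission check expects (0755)
chmod -R 755 /home/thinkpad/hadoop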

Example:

hdfs-site.xml:

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
<property>
  <name>dfs.name.dir</name>
  <value>/home/thinkpad/hadoop/local/namenode</value>
</property>
<property>
  <name>dfs.name.edits.dir</name>
  <value>/home/thinkpad/hadoop/local/editlog</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/home/thinkpad/hadoop/block</value>
</property>
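
One thing the config alone does not do: once dfs.name.dir points at a new, empty location, the namenode will not start until that location is formatted. A quick sketch, run from the hadoop-1.0.4 directory under Cygwin (standard Hadoop 1.x commands):

# write a fresh fsimage into the new dfs.name.dir
bin/hadoop namenode -format
# start HDFS and MapReduce, then check which daemons are up
bin/start-all.sh
jps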

4. You must also add the following to mapred-site.xml:

<property>
  <name>mapred.child.tmp</name>
  <value>/home/Administrator/hadoop-1.0.4/tmp</value>
</property>

Otherwise startup fails, and the tasktracker log reports:

2013-12-04 00:34:53,468 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog

2013-12-04 00:34:53,515 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)

2013-12-04 00:34:53,531 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1

2013-12-04 00:34:53,531 INFO org.apache.hadoop.mapred.TaskTracker: Starting tasktracker with owner as SYSTEM

2013-12-04 00:34:53,531 INFO org.apache.hadoop.mapred.TaskTracker: Good mapred local directories are: /tmp/hadoop-SYSTEM/mapred/local

2013-12-04 00:34:53,531 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

2013-12-04 00:34:53,531 ERROR org.apache.hadoop.mapred.TaskTracker: Can not start task tracker because java.io.IOException: Failed to set permissions of path: \tmp\hadoop-SYSTEM\mapred\local\taskTracker to 0755

at org.apache.hadoop.fs.FileUtil.checkReturnValue(FileUtil.java:689)

at org.apache.hadoop.fs.FileUtil.setPermission(FileUtil.java:670)

at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:509)

at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:344)

at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:189)

at org.apache.hadoop.mapred.TaskTracker.initialize(TaskTracker.java:723)

at org.apache.hadoop.mapred.TaskTracker.<init>(TaskTracker.java:1459)

at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:3742)
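
Once the tasktracker has created SYSTEM-owned directories under C:\tmp, restarting alone does not clear this 0755 failure. One way to recover is to stop everything, delete the stale directories, and bring all daemons back up from a single shell so they share one owner. A sketch, assuming any HDFS data under the default tmp location is disposable (C:\tmp is /cygdrive/c/tmp from Cygwin):

bin/stop-all.sh
# remove the stale SYSTEM-owned and user-owned tmp trees (this destroys the data in them)
rm -rf /cygdrive/c/tmp/hadoop-SYSTEM /cygdrive/c/tmp/hadoop-thinkpad
# re-format and restart from this one shell
bin/hadoop namenode -format
bin/start-all.sh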

For later analysis of the startup flow, here are the logs from each daemon in pseudo-distributed mode (jobtracker, then datanode, then namenode):

2013-12-04 04:48:27,171 INFO org.apache.hadoop.mapred.JobTracker: STARTUP_MSG:

/************************************************************

STARTUP_MSG: Starting JobTracker

STARTUP_MSG: host = thinkpad-117f6f/192.168.1.100

STARTUP_MSG: args = []

STARTUP_MSG: version = 1.0.4-SNAPSHOT

STARTUP_MSG: build = -r ; compiled by 'thinkpad' on Mon Oct 21 21:26:40 CST 2013

************************************************************/

2013-12-04 04:48:27,250 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties

2013-12-04 04:48:27,250 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.

2013-12-04 04:48:27,250 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).

2013-12-04 04:48:27,250 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: JobTracker metrics system started

2013-12-04 04:48:27,296 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source QueueMetrics,q=default registered.

2013-12-04 04:48:27,437 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.

2013-12-04 04:48:27,437 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!

2013-12-04 04:48:27,437 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens

2013-12-04 04:48:27,437 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)

2013-12-04 04:48:27,437 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens

2013-12-04 04:48:27,437 INFO org.apache.hadoop.mapred.JobTracker: Scheduler configured with (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT, limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)

2013-12-04 04:48:27,437 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list

2013-12-04 04:48:27,453 INFO org.apache.hadoop.mapred.JobTracker: Starting jobtracker with owner as thinkpad

2013-12-04 04:48:27,468 INFO org.apache.hadoop.ipc.Server: Starting SocketReader

2013-12-04 04:48:27,468 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort9010 registered.

2013-12-04 04:48:27,468 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort9010 registered.

2013-12-04 04:48:27,515 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog

2013-12-04 04:48:27,562 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)

2013-12-04 04:48:29,093 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50030

2013-12-04 04:48:29,093 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50030 webServer.getConnectors()[0].getLocalPort() returned 50030

2013-12-04 04:48:29,093 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50030

2013-12-04 04:48:29,093 INFO org.mortbay.log: jetty-6.1.26

2013-12-04 04:48:29,296 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50030

2013-12-04 04:48:29,296 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.

2013-12-04 04:48:29,296 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source JobTrackerMetrics registered.

2013-12-04 04:48:29,296 INFO org.apache.hadoop.mapred.JobTracker: JobTracker up at: 9010

2013-12-04 04:48:29,296 INFO org.apache.hadoop.mapred.JobTracker: JobTracker webserver: 50030

2013-12-04 04:48:33,203 INFO org.apache.hadoop.mapred.JobTracker: Cleaning up the system directory

2013-12-04 04:48:35,453 INFO org.apache.hadoop.mapred.JobTracker: History server being initialized in embedded mode

2013-12-04 04:48:35,453 INFO org.apache.hadoop.mapred.JobHistoryServer: Started job history server at: localhost:50030

2013-12-04 04:48:35,453 INFO org.apache.hadoop.mapred.JobTracker: Job History Server web address: localhost:50030

2013-12-04 04:48:35,453 INFO org.apache.hadoop.mapred.CompletedJobStatusStore: Completed job store is inactive

2013-12-04 04:48:35,859 INFO org.apache.hadoop.mapred.JobTracker: Refreshing hosts information

2013-12-04 04:48:35,859 INFO org.apache.hadoop.util.HostsFileReader: Setting the includes file to

2013-12-04 04:48:35,859 INFO org.apache.hadoop.util.HostsFileReader: Setting the excludes file to

2013-12-04 04:48:35,859 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list

2013-12-04 04:48:35,859 INFO org.apache.hadoop.mapred.JobTracker: Decommissioning 0 nodes

2013-12-04 04:48:35,859 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting

2013-12-04 04:48:35,859 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9010: starting

2013-12-04 04:48:35,859 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 9010: starting

2013-12-04 04:48:35,859 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 9010: starting

2013-12-04 04:48:35,875 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 9010: starting

2013-12-04 04:48:35,875 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 9010: starting

2013-12-04 04:48:35,875 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 9010: starting

2013-12-04 04:48:35,875 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 9010: starting

2013-12-04 04:48:35,875 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 9010: starting

2013-12-04 04:48:35,875 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 9010: starting

2013-12-04 04:48:35,875 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 9010: starting

2013-12-04 04:48:35,875 INFO org.apache.hadoop.mapred.JobTracker: Starting RUNNING

2013-12-04 04:48:35,875 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9010: starting

2013-12-04 04:46:17,109 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:

/************************************************************

STARTUP_MSG: Starting DataNode

STARTUP_MSG: host = thinkpad-117f6f/192.168.1.100

STARTUP_MSG: args = []

STARTUP_MSG: version = 1.0.4-SNAPSHOT

STARTUP_MSG: build = -r ; compiled by 'thinkpad' on Mon Oct 21 21:26:40 CST 2013

************************************************************/

2013-12-04 04:46:17,265 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties

2013-12-04 04:46:17,281 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.

2013-12-04 04:46:17,281 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).

2013-12-04 04:46:17,281 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started

2013-12-04 04:46:17,421 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.

2013-12-04 04:46:17,421 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!

2013-12-04 04:46:20,875 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory \tmp\hadoop-SYSTEM\dfs\data is not formatted.

2013-12-04 04:46:20,875 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting ...

2013-12-04 04:46:23,984 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Registered FSDatasetStatusMBean

2013-12-04 04:46:24,000 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at 50010

2013-12-04 04:46:24,000 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s

2013-12-04 04:46:24,046 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog

2013-12-04 04:46:24,109 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)

2013-12-04 04:46:24,125 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dfs.webhdfs.enabled = false

2013-12-04 04:46:24,125 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50075

2013-12-04 04:46:24,125 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50075 webServer.getConnectors()[0].getLocalPort() returned 50075

2013-12-04 04:46:24,125 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50075

2013-12-04 04:46:24,125 INFO org.mortbay.log: jetty-6.1.26

2013-12-04 04:46:24,453 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075

2013-12-04 04:46:24,468 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.

2013-12-04 04:46:24,468 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source DataNode registered.

2013-12-04 04:46:24,609 INFO org.apache.hadoop.ipc.Server: Starting SocketReader

2013-12-04 04:46:24,609 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort50020 registered.

2013-12-04 04:46:24,609 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort50020 registered.

2013-12-04 04:46:24,609 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnRegistration = DatanodeRegistration(thinkpad-117f6f:50010, storageID=, infoPort=50075, ipcPort=50020)

2013-12-04 04:46:24,625 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: New storage id DS-489817737-192.168.1.100-50010-1386103584625 is assigned to data-node 127.0.0.1:50010

2013-12-04 04:46:24,625 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting asynchronous block report scan

2013-12-04 04:46:24,625 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-489817737-192.168.1.100-50010-1386103584625, infoPort=50075, ipcPort=50020)In DataNode.run, data = FSDataset{dirpath='C:\tmp\hadoop-SYSTEM\dfs\data\current'}

2013-12-04 04:46:24,625 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Finished asynchronous block report scan in 0ms

2013-12-04 04:46:24,625 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting

2013-12-04 04:46:24,625 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting

2013-12-04 04:46:24,625 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50020: starting

2013-12-04 04:46:24,625 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50020: starting

2013-12-04 04:46:24,625 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec

2013-12-04 04:46:24,625 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 50020: starting

2013-12-04 04:46:24,625 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Reconciled asynchronous block report against current state in 0 ms

2013-12-04 04:46:24,640 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks took 0 msec to generate and 15 msecs for RPC and NN processing

2013-12-04 04:46:24,640 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting Periodic block scanner.

2013-12-04 04:46:24,640 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Generated rough (lockless) block report in 0 ms

2013-12-04 04:46:24,640 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Reconciled asynchronous block report against current state in 0 ms

2013-12-04 04:48:35,718 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-6936235786958192295_1001 src: /127.0.0.1:12923 dest: /127.0.0.1:50010

2013-12-04 04:48:35,734 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:12923, dest: /127.0.0.1:50010, bytes: 4, op: HDFS_WRITE, cliID: DFSClient_-261273902, offset: 0, srvID: DS-489817737-192.168.1.100-50010-1386103584625, blockid: blk_-6936235786958192295_1001, duration: 1096187

2013-12-04 04:48:35,734 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_-6936235786958192295_1001 terminating

2013-12-04 04:52:28,640 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_-6936235786958192295_1001

2013-12-04 04:45:19,000 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:

/************************************************************

STARTUP_MSG: Starting NameNode

STARTUP_MSG: host = thinkpad-117f6f/192.168.1.100

STARTUP_MSG: args = []

STARTUP_MSG: version = 1.0.4-SNAPSHOT

STARTUP_MSG: build = -r ; compiled by 'thinkpad' on Mon Oct 21 21:26:40 CST 2013

************************************************************/

2013-12-04 04:45:19,125 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties

2013-12-04 04:45:19,140 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.

2013-12-04 04:45:19,140 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).

2013-12-04 04:45:19,140 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started

2013-12-04 04:45:19,234 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.

2013-12-04 04:45:19,234 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!

2013-12-04 04:45:19,234 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.

2013-12-04 04:45:19,234 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.

2013-12-04 04:45:19,250 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 32-bit

2013-12-04 04:45:19,250 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 19.33375 MB

2013-12-04 04:45:19,250 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^22 = 4194304 entries

2013-12-04 04:45:19,250 INFO org.apache.hadoop.hdfs.util.GSet: recommended=4194304, actual=4194304

2013-12-04 04:45:19,296 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=thinkpad

2013-12-04 04:45:19,296 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup

2013-12-04 04:45:19,296 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true

2013-12-04 04:45:19,296 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100

2013-12-04 04:45:19,296 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)

2013-12-04 04:45:19,390 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean

2013-12-04 04:45:19,406 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times

2013-12-04 04:45:19,421 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 1

2013-12-04 04:45:19,421 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0

2013-12-04 04:45:19,421 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 114 loaded in 0 seconds.

2013-12-04 04:45:19,421 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file \tmp\hadoop-thinkpad\dfs\name\current\edits of size 4 edits # 0 loaded in 0 seconds.

2013-12-04 04:45:19,437 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 114 saved in 0 seconds.

2013-12-04 04:45:19,562 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 114 saved in 0 seconds.

2013-12-04 04:45:19,703 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups

2013-12-04 04:45:19,703 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 422 msecs

2013-12-04 04:45:19,718 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks = 0

2013-12-04 04:45:19,718 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid blocks = 0

2013-12-04 04:45:19,718 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of under-replicated blocks = 0

2013-12-04 04:45:19,718 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of over-replicated blocks = 0

2013-12-04 04:45:19,718 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode termination scan for invalid, over- and under-replicated blocks completed in 15 msec

2013-12-04 04:45:19,718 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 0 secs.

2013-12-04 04:45:19,718 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes

2013-12-04 04:45:19,718 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks

2013-12-04 04:45:19,718 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list

2013-12-04 04:45:19,718 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec

2013-12-04 04:45:19,718 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles

2013-12-04 04:45:19,718 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec

2013-12-04 04:45:19,718 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles

2013-12-04 04:45:19,734 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source FSNamesystemMetrics registered.

2013-12-04 04:45:19,750 INFO org.apache.hadoop.ipc.Server: Starting SocketReader

2013-12-04 04:45:19,750 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort9000 registered.

2013-12-04 04:45:19,750 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort9000 registered.

2013-12-04 04:45:19,750 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: localhost/127.0.0.1:9000

2013-12-04 04:45:19,796 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog

2013-12-04 04:45:19,843 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)

2013-12-04 04:45:19,859 INFO org.apache.hadoop.http.HttpServer: dfs.webhdfs.enabled = false

2013-12-04 04:45:19,859 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070

2013-12-04 04:45:19,859 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50070 webServer.getConnectors()[0].getLocalPort() returned 50070

2013-12-04 04:45:19,859 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070

2013-12-04 04:45:19,859 INFO org.mortbay.log: jetty-6.1.26

2013-12-04 04:45:20,062 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50070

2013-12-04 04:45:20,062 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at: 0.0.0.0:50070

2013-12-04 04:45:20,062 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting

2013-12-04 04:45:20,062 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9000: starting

2013-12-04 04:45:20,062 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 9000: starting

2013-12-04 04:45:20,062 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 9000: starting

2013-12-04 04:45:20,062 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 9000: starting

2013-12-04 04:45:20,062 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 9000: starting

2013-12-04 04:45:20,062 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 9000: starting

2013-12-04 04:45:20,062 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 9000: starting

2013-12-04 04:45:20,062 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 9000: starting

2013-12-04 04:45:20,062 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 9000: starting

2013-12-04 04:45:20,062 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 9000: starting

2013-12-04 04:45:20,062 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9000: starting

2013-12-04 04:46:24,625 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:50010 storage DS-489817737-192.168.1.100-50010-1386103584625

2013-12-04 04:46:24,625 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/127.0.0.1:50010

2013-12-04 04:46:24,640 INFO org.apache.hadoop.hdfs.StateChange: *BLOCK* NameSystem.processReport: from 127.0.0.1:50010, blocks: 0, processing time: 0 msecs

2013-12-04 04:48:01,750 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user webuser

org.apache.hadoop.util.Shell$ExitCodeException: id: webuser: no such user

at org.apache.hadoop.util.Shell.runCommand(Shell.java:255)

at org.apache.hadoop.util.Shell.run(Shell.java:182)

at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:375)

at org.apache.hadoop.util.Shell.execCommand(Shell.java:461)

at org.apache.hadoop.util.Shell.execCommand(Shell.java:444)

at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:68)

at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:45)

at org.apache.hadoop.security.Groups.getGroups(Groups.java:79)

at org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1026)

at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.<init>(FSPermissionChecker.java:50)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5210)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkTraverse(FSNamesystem.java:5193)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:2019)

at org.apache.hadoop.hdfs.server.namenode.NameNode.getFileInfo(NameNode.java:848)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:601)

at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)

at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)

at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)

at java.security.AccessController.doPrivileged(Native Method)

at javax.security.auth.Subject.doAs(Subject.java:415)

at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)

at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

2013-12-04 04:48:01,750 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user webuser

2013-12-04 04:48:10,546 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user webuser

org.apache.hadoop.util.Shell$ExitCodeException: id: webuser: no such user

at org.apache.hadoop.util.Shell.runCommand(Shell.java:255)

at org.apache.hadoop.util.Shell.run(Shell.java:182)

at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:375)

at org.apache.hadoop.util.Shell.execCommand(Shell.java:461)

at org.apache.hadoop.util.Shell.execCommand(Shell.java:444)

at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:68)

at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:45)

at org.apache.hadoop.security.Groups.getGroups(Groups.java:79)

at org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1026)

at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.<init>(FSPermissionChecker.java:50)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5210)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkTraverse(FSNamesystem.java:5193)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:2019)

at org.apache.hadoop.hdfs.server.namenode.NameNode.getFileInfo(NameNode.java:848)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:601)

at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)

at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)

at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)

at java.security.AccessController.doPrivileged(Native Method)

at javax.security.auth.Subject.doAs(Subject.java:415)

at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)

at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

2013-12-04 04:48:10,546 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user webuser

org.apache.hadoop.util.Shell$ExitCodeException: id: webuser: no such user

at org.apache.hadoop.util.Shell.runCommand(Shell.java:255)

at org.apache.hadoop.util.Shell.run(Shell.java:182)

at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:375)

at org.apache.hadoop.util.Shell.execCommand(Shell.java:461)

at org.apache.hadoop.util.Shell.execCommand(Shell.java:444)

at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:68)

at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:45)

at org.apache.hadoop.security.Groups.getGroups(Groups.java:79)

at org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1026)

at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.<init>(FSPermissionChecker.java:50)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5210)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkTraverse(FSNamesystem.java:5193)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:2019)

at org.apache.hadoop.hdfs.server.namenode.NameNode.getFileInfo(NameNode.java:848)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:601)

at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)

at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)

at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)

at java.security.AccessController.doPrivileged(Native Method)

at javax.security.auth.Subject.doAs(Subject.java:415)

at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)

at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

2013-12-04 04:48:10,546 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user webuser

2013-12-04 04:48:10,546 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user webuser

2013-12-04 04:48:19,234 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user webuser

org.apache.hadoop.util.Shell$ExitCodeException: id: webuser: no such user

at org.apache.hadoop.util.Shell.runCommand(Shell.java:255)

at org.apache.hadoop.util.Shell.run(Shell.java:182)

at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:375)

at org.apache.hadoop.util.Shell.execCommand(Shell.java:461)

at org.apache.hadoop.util.Shell.execCommand(Shell.java:444)

at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:68)

at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:45)

at org.apache.hadoop.security.Groups.getGroups(Groups.java:79)

at org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1026)

at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.<init>(FSPermissionChecker.java:50)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5210)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkTraverse(FSNamesystem.java:5193)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:2019)

at org.apache.hadoop.hdfs.server.namenode.NameNode.getFileInfo(NameNode.java:848)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:601)

at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)

at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)

at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)

at java.security.AccessController.doPrivileged(Native Method)

at javax.security.auth.Subject.doAs(Subject.java:415)

at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)

at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

2013-12-04 04:48:19,234 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user webuser

org.apache.hadoop.util.Shell$ExitCodeException: id: webuser: no such user

at org.apache.hadoop.util.Shell.runCommand(Shell.java:255)

at org.apache.hadoop.util.Shell.run(Shell.java:182)

at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:375)

at org.apache.hadoop.util.Shell.execCommand(Shell.java:461)

at org.apache.hadoop.util.Shell.execCommand(Shell.java:444)

at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:68)

at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:45)

at org.apache.hadoop.security.Groups.getGroups(Groups.java:79)

at org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1026)

at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.<init>(FSPermissionChecker.java:50)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5210)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:5178)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:2338)

at org.apache.hadoop.hdfs.server.namenode.NameNode.getListing(NameNode.java:831)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:601)

at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)

at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)

at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)

at java.security.AccessController.doPrivileged(Native Method)

at javax.security.auth.Subject.doAs(Subject.java:415)

at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)

at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

2013-12-04 04:48:19,234 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user webuser

2013-12-04 04:48:19,234 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user webuser

2013-12-04 04:48:26,953 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user webuser

org.apache.hadoop.util.Shell$ExitCodeException: id: webuser: no such user

at org.apache.hadoop.util.Shell.runCommand(Shell.java:255)

at org.apache.hadoop.util.Shell.run(Shell.java:182)

at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:375)

at org.apache.hadoop.util.Shell.execCommand(Shell.java:461)

at org.apache.hadoop.util.Shell.execCommand(Shell.java:444)

at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:68)

at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:45)

at org.apache.hadoop.security.Groups.getGroups(Groups.java:79)

at org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1026)

at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.<init>(FSPermissionChecker.java:50)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5210)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPathAccess(FSNamesystem.java:5178)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:2338)

at org.apache.hadoop.hdfs.server.namenode.NameNode.getListing(NameNode.java:831)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:601)

at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)

at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)

at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)

at java.security.AccessController.doPrivileged(Native Method)

at javax.security.auth.Subject.doAs(Subject.java:415)

at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)

at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

2013-12-04 04:48:26,953 WARN org.apache.hadoop.security.UserGroupInformation: No groups available for user webuser

2013-12-04 04:48:33,203 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0

2013-12-04 04:48:35,687 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.allocateBlock: /tmp/hadoop-thinkpad/mapred/system/jobtracker.info. blk_-6936235786958192295_1001

2013-12-04 04:48:35,734 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:50010 is added to blk_-6936235786958192295_1001 size 4

2013-12-04 04:48:35,734 INFO org.apache.hadoop.hdfs.StateChange: Removing lease on file /tmp/hadoop-thinkpad/mapred/system/jobtracker.info from client DFSClient_-261273902

2013-12-04 04:48:35,734 INFO org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.completeFile: file /tmp/hadoop-thinkpad/mapred/system/jobtracker.info is closed by DFSClient_-261273902

2013-12-04 04:52:15,343 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 127.0.0.1

2013-12-04 04:52:15,343 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 9 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 6 SyncTimes(ms): 624

2013-12-04 04:52:16,156 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll FSImage from 127.0.0.1

2013-12-04 04:52:16,156 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 1 SyncTimes(ms): 79