
Fixing a sudden "unable to connect to server" error while working on Mahout 0.8 clustering

2014-05-18 21:30
Today, while experimenting with Mahout's clustering algorithms, the job suddenly failed because it could not connect to the server.

The error output, for reference:

hadoop@master:~$ mahout canopy -i /user/hadoop/mahout6/vecfile -o /user/hadoop/mahout6/canopy-result -t1 1 -t2 2 -ow

MAHOUT_LOCAL is not set; adding HADOOP_CONF_DIR to classpath.

Running on hadoop, using HADOOP_HOME=/home/hadoop/hadoop-1.2.1

No HADOOP_CONF_DIR set, using /home/hadoop/hadoop-1.2.1/conf

MAHOUT-JOB: /home/hadoop/mahout-distribution-0.6/mahout-examples-0.6-job.jar

Warning: $HADOOP_HOME is deprecated.

14/03/19 04:51:31 INFO common.AbstractJob: Command line arguments: {--distanceMeasure=org.apache.mahout.common.distance.SquaredEuclideanDistanceMeasure, --endPhase=2147483647, --input=/user/hadoop/mahout6/vecfile, --method=mapreduce, --output=/user/hadoop/mahout6/canopy-result,
--overwrite=null, --startPhase=0, --t1=1, --t2=2, --tempDir=temp}

14/03/19 04:51:36 INFO canopy.CanopyDriver: Build Clusters Input: /user/hadoop/mahout6/vecfile Out: /user/hadoop/mahout6/canopy-result Measure: org.apache.mahout.common.distance.SquaredEuclideanDistanceMeasure@775f6c9f t1: 1.0 t2: 2.0

14/03/19 04:51:38 INFO ipc.Client: Retrying connect to server: master/192.168.75.142:9001. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

14/03/19 04:51:39 INFO ipc.Client: Retrying connect to server: master/192.168.75.142:9001. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

14/03/19 04:51:40 INFO ipc.Client: Retrying connect to server: master/192.168.75.142:9001. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

14/03/19 04:51:41 INFO ipc.Client: Retrying connect to server: master/192.168.75.142:9001. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

14/03/19 04:51:42 INFO ipc.Client: Retrying connect to server: master/192.168.75.142:9001. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

14/03/19 04:51:43 INFO ipc.Client: Retrying connect to server: master/192.168.75.142:9001. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

14/03/19 04:51:44 INFO ipc.Client: Retrying connect to server: master/192.168.75.142:9001. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

14/03/19 04:51:45 INFO ipc.Client: Retrying connect to server: master/192.168.75.142:9001. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

14/03/19 04:51:46 INFO ipc.Client: Retrying connect to server: master/192.168.75.142:9001. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

14/03/19 04:51:47 INFO ipc.Client: Retrying connect to server: master/192.168.75.142:9001. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

14/03/19 04:51:47 ERROR security.UserGroupInformation: PriviledgedActionException as:hadoop cause:java.net.ConnectException: Call to master/192.168.75.142:9001 failed on connection exception: java.net.ConnectException: Connection refused

Exception in thread "main" java.net.ConnectException: Call to master/192.168.75.142:9001 failed on connection exception: java.net.ConnectException: 拒绝连接

at org.apache.hadoop.ipc.Client.wrapException(Client.java:1142)

at org.apache.hadoop.ipc.Client.call(Client.java:1118)

at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)

at org.apache.hadoop.mapred.$Proxy2.getProtocolVersion(Unknown Source)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:606)

at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)

at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)

at org.apache.hadoop.mapred.$Proxy2.getProtocolVersion(Unknown Source)

at org.apache.hadoop.ipc.RPC.checkVersion(RPC.java:422)

at org.apache.hadoop.mapred.JobClient.createProxy(JobClient.java:559)

at org.apache.hadoop.mapred.JobClient.init(JobClient.java:498)

at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:479)

at org.apache.hadoop.mapreduce.Job$1.run(Job.java:563)

at java.security.AccessController.doPrivileged(Native Method)

at javax.security.auth.Subject.doAs(Subject.java:415)

at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)

at org.apache.hadoop.mapreduce.Job.connect(Job.java:561)

at org.apache.hadoop.mapreduce.Job.submit(Job.java:549)

at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)

at org.apache.mahout.clustering.canopy.CanopyDriver.buildClustersMR(CanopyDriver.java:348)

at org.apache.mahout.clustering.canopy.CanopyDriver.buildClusters(CanopyDriver.java:236)

at org.apache.mahout.clustering.canopy.CanopyDriver.run(CanopyDriver.java:145)

at org.apache.mahout.clustering.canopy.CanopyDriver.run(CanopyDriver.java:109)

at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)

at org.apache.mahout.clustering.canopy.CanopyDriver.main(CanopyDriver.java:61)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:606)

at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)

at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)

at org.apache.mahout.driver.MahoutDriver.main(MahoutDriver.java:188)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:606)

at org.apache.hadoop.util.RunJar.main(RunJar.java:160)

Caused by: java.net.ConnectException: Connection refused

at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)

at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:708)

at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)

at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:511)

at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:481)

at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:457)

at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:583)

at org.apache.hadoop.ipc.Client$Connection.access$2200(Client.java:205)

at org.apache.hadoop.ipc.Client.getConnection(Client.java:1249)

at org.apache.hadoop.ipc.Client.call(Client.java:1093)

... 38 more



Cause of the error: the Hadoop JobTracker (the daemon the JobClient is trying to reach at master:9001 in the log above) had gone down. After shutting the Hadoop cluster down and starting it again, the job ran normally.
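A minimal check-and-restart sequence, as a sketch: the HADOOP_HOME path is taken from the log above, and the start/stop scripts are the standard ones shipped with Hadoop 1.x.

# List the running Hadoop daemons; JobTracker should be among them
hadoop@master:~$ jps

# Optionally confirm whether anything is listening on the JobTracker RPC port
hadoop@master:~$ netstat -tlnp | grep 9001

# If JobTracker is missing, restart the whole cluster (Hadoop 1.x scripts)
hadoop@master:~$ /home/hadoop/hadoop-1.2.1/bin/stop-all.sh
hadoop@master:~$ /home/hadoop/hadoop-1.2.1/bin/start-all.sh

# Verify: NameNode, SecondaryNameNode and JobTracker should now show up on the master
hadoop@master:~$ jps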

After the restart, the program could be run again (as shown below).
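The original post does not reproduce the rerun output; the command itself is the same canopy invocation as at the top of the post, which on a healthy cluster proceeds past the JobClient connection step instead of retrying against port 9001:

hadoop@master:~$ mahout canopy -i /user/hadoop/mahout6/vecfile -o /user/hadoop/mahout6/canopy-result -t1 1 -t2 2 -ow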
