Sqoop --- Got exception in update thread: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException
2016-07-21 16:28
Problem: while exporting data from HDFS into MySQL with Sqoop, the console log gets stuck at map 100% reduce 0% and never moves, as shown below:
![](https://oscdn.geek-share.com/Uploads/Images/Content/201607/21/ef6b02baf6e67bbea5e25313ebd4623c)
```
16/07/21 11:46:09 INFO mapreduce.Job: Job job_1469064014798_0012 running in uber mode : false
16/07/21 11:46:09 INFO mapreduce.Job:  map 0% reduce 0%
16/07/21 11:46:21 INFO mapreduce.Job:  map 100% reduce 0%
16/07/21 11:56:38 INFO mapreduce.Job: Task Id : attempt_1469064014798_0012_m_000000_0, Status : FAILED
AttemptID:attempt_1469064014798_0012_m_000000_0 Timed out after 600 secs
16/07/21 11:56:39 INFO mapreduce.Job:  map 0% reduce 0%
16/07/21 11:56:50 INFO mapreduce.Job:  map 100% reduce 0%
16/07/21 12:07:08 INFO mapreduce.Job: Task Id : attempt_1469064014798_0012_m_000000_1, Status : FAILED
AttemptID:attempt_1469064014798_0012_m_000000_1 Timed out after 600 secs
16/07/21 12:07:09 INFO mapreduce.Job:  map 0% reduce 0%
16/07/21 12:07:20 INFO mapreduce.Job:  map 100% reduce 0%
16/07/21 12:17:38 INFO mapreduce.Job: Task Id : attempt_1469064014798_0012_m_000000_2, Status : FAILED
AttemptID:attempt_1469064014798_0012_m_000000_2 Timed out after 600 secs
16/07/21 12:17:39 INFO mapreduce.Job:  map 0% reduce 0%
16/07/21 12:17:49 INFO mapreduce.Job:  map 100% reduce 0%
```
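For context, the kind of invocation that produces a run like this is a Sqoop export whose `--export-dir` points at the HDFS file seen in the input split later in the container log. Everything below except `--table` and `--export-dir` (which come from the error message and the log) is an assumption, not taken from the original post:

```shell
# Hypothetical reconstruction of the failing command; host, database,
# credentials and delimiter are guesses for illustration only:
sqoop export \
  --connect jdbc:mysql://hadoop44:3306/test \
  --username root -P \
  --table consumer.txt \
  --export-dir /dir1/consumer.txt \
  --fields-terminated-by ','
```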
Solution:

Check the corresponding job log on the NodeManager:
```
[root@hadoop44 container_1469064014798_0009_01_000002]# pwd
/usr/local/hadoop/logs/userlogs/application_1469064014798_0009/container_1469064014798_0009_01_000002
[root@hadoop44 container_1469064014798_0009_01_000002]# ll
total 4
-rw-r--r--. 1 root root    0 Jul 21 11:25 stderr
-rw-r--r--. 1 root root    0 Jul 21 11:25 stdout
-rw-r--r--. 1 root root 4028 Jul 21 11:26 syslog
[root@hadoop44 container_1469064014798_0009_01_000002]# more syslog
2016-07-21 11:25:58,030 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2016-07-21 11:25:58,088 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2016-07-21 11:25:58,460 INFO [main] org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2016-07-21 11:25:58,526 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2016-07-21 11:25:58,526 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system started
2016-07-21 11:25:58,535 INFO [main] org.apache.hadoop.mapred.YarnChild: Executing with tokens:
2016-07-21 11:25:58,535 INFO [main] org.apache.hadoop.mapred.YarnChild: Kind: mapreduce.job, Service: job_1469064014798_0009, Ident: (org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier@597355e7)
2016-07-21 11:25:58,596 INFO [main] org.apache.hadoop.mapred.YarnChild: Sleeping for 0ms before retrying again. Got null now.
2016-07-21 11:25:58,904 INFO [main] org.apache.hadoop.mapred.YarnChild: mapreduce.cluster.local.dir for child: /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1469064014798_0009
2016-07-21 11:25:59,077 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2016-07-21 11:25:59,096 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2016-07-21 11:25:59,471 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
2016-07-21 11:25:59,941 INFO [main] org.apache.hadoop.mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
2016-07-21 11:26:00,145 INFO [main] org.apache.hadoop.mapred.MapTask: Processing split: Paths:/dir1/consumer.txt:0+102
2016-07-21 11:26:00,149 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: map.input.file is deprecated. Instead, use mapreduce.map.input.file
2016-07-21 11:26:00,149 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: map.input.start is deprecated. Instead, use mapreduce.map.input.start
2016-07-21 11:26:00,149 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: map.input.length is deprecated. Instead, use mapreduce.map.input.length
2016-07-21 11:26:00,557 INFO [Thread-11] org.apache.sqoop.mapreduce.AutoProgressMapper: Auto-progress thread is finished. keepGoing=false
2016-07-21 11:26:00,613 ERROR [Thread-10] org.apache.sqoop.mapreduce.AsyncSqlOutputFormat: Got exception in update thread: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'consumer.txt' doesn't exist
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
	at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
	at com.mysql.jdbc.Util.getInstance(Util.java:386)
	at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1054)
	at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4237)
	at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4169)
	at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2617)
	at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2778)
	at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2825)
	at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2156)
	at com.mysql.jdbc.PreparedStatement.execute(PreparedStatement.java:1379)
	at org.apache.sqoop.mapreduce.AsyncSqlOutputFormat$AsyncSqlExecThread.run(AsyncSqlOutputFormat.java:233)
```
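If ssh-ing into the NodeManager is inconvenient, the same container log can usually be fetched with the YARN CLI from any cluster node (this assumes log aggregation is enabled; the application id is the one from the log above):

```shell
# Pull the aggregated container logs for the failed application:
yarn logs -applicationId application_1469064014798_0009
```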
The log says the MySQL table consumer.txt does not exist. But here is the catch: the table consumer.txt clearly does exist in the MySQL database, as the screenshot earlier shows. At that point I suspected the .txt suffix in the table name was the culprit, so I renamed consumer.txt to consumer and tried again. Problem solved.
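A likely explanation for why the rename works: MySQL parses an unquoted dotted identifier as database.table, so a generated statement like `INSERT INTO consumer.txt ...` makes the server look for a table named txt inside a database named consumer. That matches the wording of the error, since MySQL's "Table '...' doesn't exist" message is formatted as database.table. A pure-shell sketch of how the name gets split:

```shell
# Split the identifier the way MySQL resolves an unquoted dotted name:
name='consumer.txt'
db="${name%%.*}"    # text before the first dot -> database part
tbl="${name#*.}"    # text after the first dot  -> table part
echo "MySQL resolves '${name}' as table '${tbl}' in database '${db}'"
# -> MySQL resolves 'consumer.txt' as table 'txt' in database 'consumer'
```

An alternative to renaming through a GUI would be doing it on the server, e.g. ``RENAME TABLE `consumer.txt` TO consumer;`` (backticks keep the dot from being treated as a qualifier); whether Sqoop itself can be made to quote the table name reliably is version-dependent, so renaming is the safer fix.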