Fixing the Flume startup error org.apache.flume.ChannelFullException: Space for commit to queue couldn't be acquired. Sinks are likely not keeping up with sources, or the buffer size is too tight (illustrated walkthrough)
2017-07-29 10:36
[b]Previous post[/b]
Some practical tips on Flume custom interceptors (Interceptors) and the built-in interceptors (illustrated walkthrough)
[b]Problem details[/b]
Start the agent service:
[hadoop@master flume-1.7.0]$ bin/flume-ng agent --conf conf_MySearchAndReplaceInterceptor/ --conf-file conf_MySearchAndReplaceInterceptor/flume-conf.properties --name agent1 -Dflume.root.logger=INFO,console
![](https://oscdn.geek-share.com/Uploads/Images/Content/202002/04/31b02fc03ce5b2c594f5ff3838ddf863.png)
This is the error I ran into:
![](https://oscdn.geek-share.com/Uploads/Images/Content/202002/04/2553aae4ba0e9d1cd535a55f927e73a7.png)
2017-07-29 10:17:51,006 (lifecycleSupervisor-1-2) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:95)] Component type: SOURCE, name: fileSource started
2017-07-29 10:17:52,792 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.HDFSDataStream.configure(HDFSDataStream.java:57)] Serializer = TEXT, UseRawLocalFileSystem = false
2017-07-29 10:17:55,094 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:231)] Creating hdfs://master:9000/data/types/20170729//run.1501294672792.data.tmp
2017-07-29 10:17:55,842 (hdfs-hdfsSink-call-runner-0) [WARN - org.apache.hadoop.util.NativeCodeLoader.<clinit>(NativeCodeLoader.java:62)] Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2017-07-29 10:18:00,495 (pool-5-thread-1) [ERROR - org.apache.flume.source.ExecSource$ExecRunnable.run(ExecSource.java:352)] Failed while running command: tail -F /usr/local/log/server.log
org.apache.flume.ChannelFullException: Space for commit to queue couldn't be acquired. Sinks are likely not keeping up with sources, or the buffer size is too tight
    at org.apache.flume.channel.MemoryChannel$MemoryTransaction.doCommit(MemoryChannel.java:127)
    at org.apache.flume.channel.BasicTransactionSemantics.commit(BasicTransactionSemantics.java:151)
    at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:194)
    at org.apache.flume.source.ExecSource$ExecRunnable.flushEventBatch(ExecSource.java:381)
    at org.apache.flume.source.ExecSource$ExecRunnable.run(ExecSource.java:341)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
2017-07-29 10:18:00,544 (timedFlushExecService21-0) [ERROR - org.apache.flume.source.ExecSource$ExecRunnable$1.run(ExecSource.java:327)] Exception occured when processing event batch
org.apache.flume.ChannelException: java.lang.InterruptedException
    at org.apache.flume.channel.BasicTransactionSemantics.commit(BasicTransactionSemantics.java:154)
    at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:194)
    at org.apache.flume.source.ExecSource$ExecRunnable.flushEventBatch(ExecSource.java:381)
    at org.apache.flume.source.ExecSource$ExecRunnable.access$100(ExecSource.java:254)
    at org.apache.flume.source.ExecSource$ExecRunnable$1.run(ExecSource.java:323)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
[b]Solution[/b]
Raise the memory channel's keep-alive timeout and capacity in flume-conf.properties:
![](https://oscdn.geek-share.com/Uploads/Images/Content/202002/04/8dfb98b6616b356a0a5be353013536de.png)
![](https://oscdn.geek-share.com/Uploads/Images/Content/202002/04/6be13b6d5c2bf3129443b748fc04d037.png)
agent1.channels.memoryChannel.keep-alive = 60
agent1.channels.memoryChannel.capacity = 1000000
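For context, here is a hedged sketch of how the full memory channel section might look with this tuning applied. The agent and channel names (agent1, memoryChannel) are taken from the post; the transactionCapacity value is an assumption added for illustration (Flume's default is 100, and it must not exceed capacity):

```
# Memory channel tuning (names taken from the post's configuration)
agent1.channels.memoryChannel.type = memory
# Maximum number of events buffered in the channel
agent1.channels.memoryChannel.capacity = 1000000
# Events per transaction; assumed value, must be <= capacity (default 100)
agent1.channels.memoryChannel.transactionCapacity = 10000
# Seconds a put/take waits for free space before ChannelFullException (default 3)
agent1.channels.memoryChannel.keep-alive = 60
```

A larger capacity gives the sink more headroom to catch up, and a longer keep-alive lets the source block rather than fail immediately when the channel is momentarily full.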
Then, raise the maximum Java heap size:
vi bin/flume-ng
JAVA_OPTS="-Xmx1024m"
![](https://oscdn.geek-share.com/Uploads/Images/Content/202002/04/a08cab1a8433bfba29f3435e0a4495df.png)
![](https://oscdn.geek-share.com/Uploads/Images/Content/202002/04/77017e4eac18001e0dcb579bc3673a7d.png)
Change it to:
![](https://oscdn.geek-share.com/Uploads/Images/Content/202002/04/5a4a69de0a092ab67eaf6eae1d238fed.png)
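The same edit can be scripted instead of done by hand in vi. A minimal sketch, assuming the stock bin/flume-ng script contains the Flume 1.7.0 default JAVA_OPTS="-Xmx20m" (the file path here is a stand-in, not the real script):

```shell
# Sketch: bump the default heap in a copy of the flume-ng launcher line.
# Assumes GNU sed; the snippet file stands in for bin/flume-ng.
printf 'JAVA_OPTS="-Xmx20m"\n' > /tmp/flume-ng-snippet
sed -i 's/-Xmx20m/-Xmx1024m/' /tmp/flume-ng-snippet
cat /tmp/flume-ng-snippet
```

On the real installation you would run the sed command against bin/flume-ng itself (after backing it up), then restart the agent.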
After making these changes, start the agent again:
[hadoop@master flume-1.7.0]$ bin/flume-ng agent --conf conf_MySearchAndReplaceInterceptor/ --conf-file conf_MySearchAndReplaceInterceptor/flume-conf.properties --name agent1 -Dflume.root.logger=INFO,console
The error above no longer occurs.
[b]Reference blog[/b]
flume-ng troubleshooting (1)