Flume Sinks: A Walkthrough of the Official Docs (Recommended)
2017-04-25 15:28
No fluff; straight to the good stuff!
Flume Sinks
HDFS Sink
Hive Sink
Logger Sink
Avro Sink
Thrift Sink
IRC Sink
File Roll Sink
Null Sink
HBaseSinks
HBaseSink
AsyncHBaseSink
MorphlineSolrSink
ElasticSearchSink
Kite Dataset Sink
Kafka Sink
Custom Sink
See also the companion posts: Flume Sources, Flume Channels, and Flume Channel Selectors (walkthroughs of the official docs, recommended).
Everything below comes from the official Flume user guide: http://flume.apache.org/FlumeUserGuide.html
HDFS Sink
The reference example given in the official docs is:
a1.channels = c1
a1.sinks = k1
a1.sinks.k1.type = hdfs
a1.sinks.k1.channel = c1
a1.sinks.k1.hdfs.path = /flume/events/%y-%m-%d/%H%M/%S
a1.sinks.k1.hdfs.filePrefix = events-
a1.sinks.k1.hdfs.round = true
a1.sinks.k1.hdfs.roundValue = 10
a1.sinks.k1.hdfs.roundUnit = minute
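To make the round / roundValue / roundUnit settings concrete: with the values above, the event timestamp is rounded down to the nearest 10 minutes before being substituted into the escape sequences in hdfs.path. A small Python sketch of that behavior (not Flume code; the function name is made up for illustration):

```python
from datetime import datetime

def rounded_hdfs_path(ts, round_value=10):
    # Mimics hdfs.round = true with roundUnit = minute, roundValue = 10:
    # round the timestamp DOWN to the nearest 10 minutes, then substitute
    # it into /flume/events/%y-%m-%d/%H%M/%S.
    minute = (ts.minute // round_value) * round_value
    rounded = ts.replace(minute=minute, second=0, microsecond=0)
    return rounded.strftime("/flume/events/%y-%m-%d/%H%M/%S")

# An event at 15:28:47 lands in the 15:20 bucket:
print(rounded_hdfs_path(datetime(2017, 4, 25, 15, 28, 47)))
# /flume/events/17-04-25/1520/00
```

This is why rounding is useful: all events within each 10-minute window share one HDFS directory instead of creating a new path every second.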
In practice, we typically use something closer to:
agent1.sinks = hdfs-sink1
# Define and configure an HDFS sink
agent1.sinks.hdfs-sink1.channel = ch1
agent1.sinks.hdfs-sink1.type = hdfs
agent1.sinks.hdfs-sink1.hdfs.path = hdfs://master:8030/flume/%Y%m%d
agent1.sinks.hdfs-sink1.hdfs.filePrefix = flume
agent1.sinks.hdfs-sink1.hdfs.fileSuffix = .log
agent1.sinks.hdfs-sink1.hdfs.useLocalTimeStamp = true
agent1.sinks.hdfs-sink1.hdfs.rollInterval = 30
agent1.sinks.hdfs-sink1.hdfs.rollSize = 67108864
agent1.sinks.hdfs-sink1.hdfs.rollCount = 0
# Snappy must be installed separately before this line will work
agent1.sinks.hdfs-sink1.hdfs.codeC = snappy
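Note that the sink above references a channel ch1 that the snippet never defines. A complete minimal agent also needs a source and a channel; the following sketch fills those in under assumed names (the netcat source, its port, and the channel capacities are illustrative choices, not from the original post):

```
agent1.sources = netcat-source1
agent1.channels = ch1
agent1.sinks = hdfs-sink1

# Illustrative source: listens for lines on a local TCP port
agent1.sources.netcat-source1.type = netcat
agent1.sources.netcat-source1.bind = localhost
agent1.sources.netcat-source1.port = 44444
agent1.sources.netcat-source1.channels = ch1

# In-memory channel connecting the source to the hdfs sink above
agent1.channels.ch1.type = memory
agent1.channels.ch1.capacity = 10000
agent1.channels.ch1.transactionCapacity = 1000
```

The agent is then started with `flume-ng agent --conf conf --conf-file agent1.conf --name agent1`. Also note that rollSize = 67108864 is 64 × 1024 × 1024, i.e. files roll when they reach 64 MB.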
Hive Sink
Logger Sink
Avro Sink
Thrift Sink
IRC Sink
File Roll Sink
Null Sink
HBaseSinks (this covers both HBaseSink and AsyncHBaseSink)
See the official docs for these.
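For reference, the official docs give an HBaseSink example along these lines (the table and column-family names are the docs' own placeholders):

```
a1.channels = c1
a1.sinks = k1
a1.sinks.k1.type = hbase
a1.sinks.k1.table = foo_table
a1.sinks.k1.columnFamily = bar_cf
a1.sinks.k1.serializer = org.apache.flume.sink.hbase.RegexHbaseEventSerializer
a1.sinks.k1.channel = c1
```

AsyncHBaseSink is configured the same way with type = asynchbase.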
MorphlineSolrSink
ElasticSearchSink
The reference example provided in the official docs is:
a1.channels = c1
a1.sinks = k1
a1.sinks.k1.type = elasticsearch
a1.sinks.k1.hostNames = 127.0.0.1:9200,127.0.0.2:9300
a1.sinks.k1.indexName = foo_index
a1.sinks.k1.indexType = bar_type
a1.sinks.k1.clusterName = foobar_cluster
a1.sinks.k1.batchSize = 500
a1.sinks.k1.ttl = 5d
a1.sinks.k1.serializer = org.apache.flume.sink.elasticsearch.ElasticSearchDynamicSerializer
a1.sinks.k1.channel = c1
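The dotted keys above (a1.sinks.k1.hostNames, …) describe a hierarchy: agent name, component type, component name, then the component's own settings. A minimal Python sketch of how such flat properties nest (this is not Flume's actual parser, just an illustration of the key structure):

```python
def parse_props(lines):
    # Split each "a.b.c = value" line and build a nested dict
    # keyed by the dot-separated path components.
    tree = {}
    for line in lines:
        key, _, value = line.partition("=")
        parts = key.strip().split(".")
        node = tree
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value.strip()
    return tree

props = [
    "a1.sinks.k1.type = elasticsearch",
    "a1.sinks.k1.hostNames = 127.0.0.1:9200,127.0.0.2:9300",
    "a1.sinks.k1.indexName = foo_index",
]
sink = parse_props(props)["a1"]["sinks"]["k1"]
print(sink["type"])        # elasticsearch
print(sink["hostNames"].split(","))
```

Reading configs this way makes it obvious which settings belong to which component, which helps when an agent file defines several sources, channels, and sinks at once.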
Kite Dataset Sink
Kafka Sink
The official docs also cover Security and Kafka Sink, including TLS and Kerberos configuration; see the docs for details.
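The post gives no Kafka Sink example, but the official docs include one along these lines (the topic name and broker address are placeholders):

```
a1.sinks.k1.channel = c1
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.topic = mytopic
a1.sinks.k1.kafka.bootstrap.servers = localhost:9092
a1.sinks.k1.kafka.flumeBatchSize = 20
a1.sinks.k1.kafka.producer.acks = 1
a1.sinks.k1.kafka.producer.linger.ms = 1
a1.sinks.k1.kafka.producer.compression.type = snappy
```

Properties under the kafka.producer. prefix are passed through to the Kafka producer client, so any standard producer setting can be supplied that way.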
Custom Sink
I list all of these out deliberately: don't limit yourself to blog posts. Broaden your view and read the official docs more, because everything here originates from them.
For big data, as for open-source projects in general, the official docs are the best way to learn, supplemented by blog posts in your own language. Don't be afraid of the English; technical English is manageable! Keep pushing toward mastery. Good luck, zhouls!