Kafka learning (3): C# development
2017-02-06 14:49
Developing a Kafka client in .NET cost me quite a few detours: much of what is written online is careless, and not a single ready-made DLL was available, so I compiled one myself. Then I found a genuinely useful site, NuGet (https://www.nuget.org), which lists each package's source repository and latest release. Right now the two decent options are kafka-net and rdkafka. In my testing rdkafka comes out ahead, because it is built as a wrapper over librdkafka. GitHub links:
- kafka-net: https://github.com/Jroland/kafka-net
- rdkafka: https://github.com/ah-/rdkafka-dotnet
The compiled DLLs are in the dll folder at https://github.com/bluetom520/kafka_for_net. Below are examples using each of the two libraries.
1 kafka-net
kafka-net's consumer cannot be pointed at the latest offset; it can only consume from the beginning, and I found no fix, so I gave up on it for consuming (a possible workaround is sketched after the consumer listing below). The producer is solid, but consumption lags.
1.1 Global configuration
app.config:

```xml
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.6.1" />
  </startup>
  <appSettings>
    <add key="KafkaBroker" value="http://26.2.4.171:9092,http://26.2.4.172:9092,http://26.2.4.173:9092" />
    <add key="Topic" value="test_topic" />
  </appSettings>
</configuration>
```
1.2 producer
```csharp
using System;
using System.Linq;
using System.Configuration;
using KafkaNet;
using KafkaNet.Common;
using KafkaNet.Model;
using KafkaNet.Protocol;

namespace producer
{
    class Program
    {
        static void Main(string[] args)
        {
            do
            {
                Produce(GetKafkaBroker(), GetTopicName());
                System.Threading.Thread.Sleep(3000);
            } while (true);
        }

        private static void Produce(string broker, string topic)
        {
            // Build one Uri per broker in the comma-separated list
            // (the original hardcoded Uri[3] and broke with any other broker count).
            Uri[] url = broker.Split(',').Select(b => new Uri(b)).ToArray();

            var options = new KafkaOptions(url);
            var router = new BrokerRouter(options);
            using (var client = new Producer(router))
            {
                var currentDatetime = DateTime.Now;
                var key = currentDatetime.Second.ToString();
                var events = new[] { new Message("Hello World " + currentDatetime, key) };
                client.SendMessageAsync(topic, events).Wait(2000);
                Console.WriteLine("Produced: Key: {0}. Message: {1}", key, events[0].Value.ToUtf8String());
            }
        }

        private static string GetKafkaBroker()
        {
            const string kafkaBrokerKeyName = "KafkaBroker";
            return ConfigurationManager.AppSettings.AllKeys.Contains(kafkaBrokerKeyName)
                ? ConfigurationManager.AppSettings[kafkaBrokerKeyName]
                : "http://localhost:9092";
        }

        private static string GetTopicName()
        {
            const string topicNameKeyName = "Topic";
            if (!ConfigurationManager.AppSettings.AllKeys.Contains(topicNameKeyName))
            {
                throw new Exception("Key \"" + topicNameKeyName + "\" not found in Config file -> configuration/AppSettings");
            }
            return ConfigurationManager.AppSettings[topicNameKeyName];
        }
    }
}
```
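One small follow-up on the producer: instead of the fire-and-forget Wait(2000) above, the task result can be inspected to see where each message landed. This is a hedged sketch, assuming kafka-net's SendMessageAsync returns a list of ProduceResponse objects carrying PartitionId and Offset (that matches the API as I recall it, but verify against your package version):

```csharp
// Hedged sketch: a variant of Produce that inspects the broker acknowledgement
// instead of discarding it. Assumes SendMessageAsync returns List<ProduceResponse>
// with PartitionId/Offset fields (kafka-net's API as I recall it - verify).
private static void ProduceAndReport(Producer client, string topic, Message[] events)
{
    var responses = client.SendMessageAsync(topic, events).Result;
    foreach (var response in responses)
    {
        Console.WriteLine("Acked: Partition {0}, Offset {1}", response.PartitionId, response.Offset);
    }
}
```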
1.3 consumer
```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Configuration;
using KafkaNet;
using KafkaNet.Common;
using KafkaNet.Model;
using KafkaNet.Protocol;

namespace consumer
{
    class Program
    {
        static void Main(string[] args)
        {
            Consume(GetKafkaBroker(), GetTopicName());
        }

        private static void Consume(string broker, string topic)
        {
            Uri[] url = broker.Split(',').Select(b => new Uri(b)).ToArray();
            var options = new KafkaOptions(url);
            var router = new BrokerRouter(options);

            // Starting positions must be supplied per partition; there is no
            // built-in "latest" option. The offsets below are arbitrary.
            OffsetPosition[] off = new OffsetPosition[3];
            off[0] = new OffsetPosition(0, 9999);
            off[1] = new OffsetPosition(1, 9999);
            off[2] = new OffsetPosition(2, 9999);

            var consumer = new Consumer(new ConsumerOptions(topic, router), off);

            // Consume() returns a blocking IEnumerable, i.e. a never-ending stream.
            foreach (var message in consumer.Consume())
            {
                List<OffsetPosition> positions = consumer.GetOffsetPosition();
                consumer.SetOffsetPosition(positions.ToArray());
                Console.WriteLine("1 Response: Partition {0}, Offset {1} : {2}",
                    message.Meta.PartitionId, message.Meta.Offset, message.Value.ToUtf8String());
            }
        }

        private static string GetKafkaBroker()
        {
            var kafkaBrokerKeyName = "KafkaBroker";
            return ConfigurationManager.AppSettings.AllKeys.Contains(kafkaBrokerKeyName)
                ? ConfigurationManager.AppSettings[kafkaBrokerKeyName]
                : "http://localhost:9092";
        }

        private static string GetTopicName()
        {
            var topicNameKeyName = "Topic";
            if (!ConfigurationManager.AppSettings.AllKeys.Contains(topicNameKeyName))
            {
                throw new Exception("Key \"" + topicNameKeyName + "\" not found in Config file -> configuration/AppSettings");
            }
            return ConfigurationManager.AppSettings[topicNameKeyName];
        }
    }
}
```
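The experiments I left commented out in the original listing hinted at a possible workaround for the "cannot start from latest" limitation: query each partition's current head offset first, then seed the consumer with those positions. A hedged sketch, assuming GetTopicOffsetAsync(topic, maxOffsets, time) with time = -1 returns the newest offsets per partition (suggested by the commented-out GetTopicOffsetAsync(topic, 2, -1) call, but I never verified it end to end):

```csharp
// Hypothetical helper, not part of kafka-net: builds a consumer that starts
// at the current head of each partition instead of the beginning.
private static Consumer CreateLatestConsumer(BrokerRouter router, string topic)
{
    var probe = new Consumer(new ConsumerOptions(topic, router));

    // maxOffsets = 1, time = -1 should yield only the newest offset per partition
    // (assumption based on the commented-out GetTopicOffsetAsync(topic, 2, -1) above).
    List<OffsetResponse> offsets = probe.GetTopicOffsetAsync(topic, 1, -1).Result;

    OffsetPosition[] positions = offsets
        .Select(o => new OffsetPosition(o.PartitionId, o.Offsets.Max()))
        .ToArray();

    return new Consumer(new ConsumerOptions(topic, router), positions);
}
```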
2 rdkafka
Both the producer and the consumer perform very well and deliver in real time, and behavior can be tuned through config parameters to suit different needs; see https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md (reproduced in 2.3 below). rdkafka is based on librdkafka (https://github.com/edenhill/librdkafka) and needs two DLLs placed in the debug or release output directory:
- zlib.dll
- librdkafka.dll
2.1 producer
```csharp
using System;
using System.Text;
using System.Threading.Tasks;
using RdKafka;

namespace RDkafka_producer
{
    public class Program
    {
        public static void Main(string[] args)
        {
            string brokerList = "26.2.4.171:9092,26.2.4.172:9092,26.2.4.173:9092";
            string topicName = "test_topic";

            var topicConfig = new TopicConfig
            {
                // A custom partitioner may be supplied to control message placement, e.g.:
                // CustomPartitioner = (top, key, cnt) => (key != null) ? key.Length % cnt : 0
            };

            using (Producer producer = new Producer(brokerList))
            using (Topic topic = producer.Topic(topicName, topicConfig))
            {
                Console.WriteLine("{0} producing on {1}. q to exit.", producer.Name, topic.Name);

                string text;
                while ((text = Console.ReadLine()) != "q")
                {
                    byte[] data = Encoding.UTF8.GetBytes(text);
                    Task<DeliveryReport> deliveryReport = topic.Produce(data);
                    var unused = deliveryReport.ContinueWith(task =>
                    {
                        Console.WriteLine("Partition: {0}, Offset: {1}",
                            task.Result.Partition, task.Result.Offset);
                    });
                }
            }
        }
    }
}
```
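The commented-out CustomPartitioner above keys on the raw message bytes. A simpler route is to pass a key per message, so equal keys always hash to the same partition. A hedged sketch, assuming this rdkafka-dotnet version's Topic.Produce accepts an optional key argument (true for the 0.9.x wrapper as I recall, but verify); the key and payload values are illustrative, not from the original:

```csharp
// Hedged sketch: producing with an explicit key so equal keys always land on
// the same partition. "order-42" / "order created" are illustrative values.
private static void ProduceKeyed(Topic topic)
{
    byte[] key = Encoding.UTF8.GetBytes("order-42");
    byte[] payload = Encoding.UTF8.GetBytes("order created");
    DeliveryReport report = topic.Produce(payload, key).Result;
    Console.WriteLine("Keyed message -> Partition: {0}, Offset: {1}", report.Partition, report.Offset);
}
```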
2.2 consumer
```csharp
using System;
using System.Collections.Generic;
using System.Text;
using RdKafka;

namespace RD_kafka_consumer
{
    public class Program
    {
        public static void Run(string brokerList, List<string> topics)
        {
            bool enableAutoCommit = false;

            var config = new Config()
            {
                GroupId = "advanced-csharp-consumer",
                EnableAutoCommit = enableAutoCommit,
                StatisticsInterval = TimeSpan.FromSeconds(60)
            };

            using (var consumer = new EventConsumer(config, brokerList))
            {
                consumer.OnMessage += (obj, msg) =>
                {
                    string text = Encoding.UTF8.GetString(msg.Payload, 0, msg.Payload.Length);
                    Console.WriteLine("1 Response: Partition {0}, Offset {1} : {2}",
                        msg.Partition, msg.Offset, text);

                    // With auto-commit disabled, commit manually every 10 messages.
                    if (!enableAutoCommit && msg.Offset % 10 == 0)
                    {
                        Console.WriteLine("Committing offset");
                        consumer.Commit(msg).Wait();
                        Console.WriteLine("Committed offset");
                    }
                };

                consumer.OnConsumerError += (obj, errorCode) => { /* log errorCode */ };
                consumer.OnEndReached += (obj, end) => { /* reached end of end.Topic / end.Partition */ };
                consumer.OnError += (obj, error) => { /* log error.ErrorCode, error.Reason */ };

                if (enableAutoCommit)
                {
                    consumer.OnOffsetCommit += (obj, commit) =>
                    {
                        if (commit.Error != ErrorCode.NO_ERROR)
                        {
                            // log commit failure (commit.Error)
                        }
                    };
                }

                consumer.OnPartitionsAssigned += (obj, partitions) =>
                {
                    consumer.Assign(partitions);
                };

                consumer.OnPartitionsRevoked += (obj, partitions) =>
                {
                    consumer.Unassign();
                };

                consumer.OnStatistics += (obj, json) => { /* statistics JSON */ };

                consumer.Subscribe(topics);
                consumer.Start();

                // The consumer runs on background threads until enter is pressed.
                Console.ReadLine();
            }
        }

        public static void Main(string[] args)
        {
            string brokerList = "26.2.4.171:9092,26.2.4.172:9092,26.2.4.173:9092";
            var topics = new List<string> { "test_topic" };
            Run(brokerList, topics);
        }
    }
}
```
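One detail worth noting: with EnableAutoCommit = false, any messages processed since the last modulo-10 commit are uncommitted and will be redelivered after a restart (at-least-once semantics). A hedged sketch of a cleaner shutdown, assuming EventConsumer exposes a parameterless Commit() and a Stop() as in the 0.9.x wrapper (verify against your package version):

```csharp
// Hedged sketch - to run after Console.ReadLine() in Run(), before the
// consumer is disposed.
consumer.Commit().Wait(); // flush offsets for messages processed since the last modulo-10 commit
consumer.Stop().Wait();   // stop the background poll loop before Dispose
```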
2.3 Configuration parameters
Property | C/P | Range | Default | Description |
---|---|---|---|---|
builtin.features | * | | gzip, snappy, ssl, sasl, regex, lz4 | Indicates the builtin features for this build of librdkafka. An application can either query this value or attempt to set it with its list of required features to check for library support. Type: CSV flags |
client.id | * | | rdkafka | Client identifier. Type: string |
metadata.broker.list | * | | | Initial list of brokers. The application may also use rd_kafka_brokers_add() to add brokers during runtime. Type: string |
bootstrap.servers | * | | | Alias for metadata.broker.list |
message.max.bytes | * | 1000 .. 1000000000 | 1000000 | Maximum transmit message size. Type: integer |
message.copy.max.bytes | * | 0 .. 1000000000 | 65535 | Maximum size for message to be copied to buffer. Messages larger than this will be passed by reference (zero-copy) at the expense of larger iovecs. Type: integer |
receive.message.max.bytes | * | 1000 .. 1000000000 | 100000000 | Maximum receive message size. This is a safety precaution to avoid memory exhaustion in case of protocol hiccups. The value should be at least fetch.message.max.bytes * number of partitions consumed from + messaging overhead (e.g. 200000 bytes). Type: integer |
max.in.flight.requests.per.connection | * | 1 .. 1000000 | 1000000 | Maximum number of in-flight requests the client will send. This setting applies per broker connection. Type: integer |
max.in.flight | * | | | Alias for max.in.flight.requests.per.connection |
metadata.request.timeout.ms | * | 10 .. 900000 | 60000 | Non-topic request timeout in milliseconds. This is for metadata requests, etc. Type: integer |
topic.metadata.refresh.interval.ms | * | -1 .. 3600000 | 300000 | Topic metadata refresh interval in milliseconds. The metadata is automatically refreshed on error and connect. Use -1 to disable the intervalled refresh. Type: integer |
metadata.max.age.ms | * | 1 .. 86400000 | -1 | Metadata cache max age. Defaults to metadata.refresh.interval.ms * 3 Type: integer |
topic.metadata.refresh.fast.interval.ms | * | 1 .. 60000 | 250 | When a topic loses its leader a new metadata request will be enqueued with this initial interval, exponentially increasing until the topic metadata has been refreshed. This is used to recover quickly from transitioning leader brokers. Type: integer |
topic.metadata.refresh.fast.cnt | * | 0 .. 1000 | 10 | Deprecated: No longer used. Type: integer |
topic.metadata.refresh.sparse | * | true, false | true | Sparse metadata requests (consumes less network bandwidth) Type: boolean |
topic.blacklist | * | | | Topic blacklist, a comma-separated list of regular expressions for matching topic names that should be ignored in broker metadata information as if the topics did not exist. Type: pattern list |
debug | * | generic, broker, topic, metadata, queue, msg, protocol, cgrp, security, fetch, feature, all | | A comma-separated list of debug contexts to enable. Debugging the Producer: broker,topic,msg. Consumer: cgrp,topic,fetch. Type: CSV flags |
socket.timeout.ms | * | 10 .. 300000 | 60000 | Timeout for network requests. Type: integer |
socket.blocking.max.ms | * | 1 .. 60000 | 1000 | Maximum time a broker socket operation may block. A lower value improves responsiveness at the expense of slightly higher CPU usage. Deprecated Type: integer |
socket.send.buffer.bytes | * | 0 .. 100000000 | 0 | Broker socket send buffer size. System default is used if 0. Type: integer |
socket.receive.buffer.bytes | * | 0 .. 100000000 | 0 | Broker socket receive buffer size. System default is used if 0. Type: integer |
socket.keepalive.enable | * | true, false | false | Enable TCP keep-alives (SO_KEEPALIVE) on broker sockets Type: boolean |
socket.nagle.disable | * | true, false | false | Disable the Nagle algorithm (TCP_NODELAY). Type: boolean |
socket.max.fails | * | 0 .. 1000000 | 3 | Disconnect from broker when this number of send failures (e.g., timed out requests) is reached. Disable with 0. NOTE: The connection is automatically re-established. Type: integer |
broker.address.ttl | * | 0 .. 86400000 | 1000 | How long to cache the broker address resolving results (milliseconds). Type: integer |
broker.address.family | * | any, v4, v6 | any | Allowed broker IP address families: any, v4, v6 Type: enum value |
reconnect.backoff.jitter.ms | * | 0 .. 3600000 | 500 | Throttle broker reconnection attempts by this value +-50%. Type: integer |
statistics.interval.ms | * | 0 .. 86400000 | 0 | librdkafka statistics emit interval. The application also needs to register a stats callback using rd_kafka_conf_set_stats_cb(). The granularity is 1000ms. A value of 0 disables statistics. Type: integer |
enabled_events | * | 0 .. 2147483647 | 0 | See rd_kafka_conf_set_events() Type: integer |
error_cb | * | | | Error callback (set with rd_kafka_conf_set_error_cb()) Type: pointer |
throttle_cb | * | | | Throttle callback (set with rd_kafka_conf_set_throttle_cb()) Type: pointer |
stats_cb | * | | | Statistics callback (set with rd_kafka_conf_set_stats_cb()) Type: pointer |
log_cb | * | | | Log callback (set with rd_kafka_conf_set_log_cb()) Type: pointer |
log_level | * | 0 .. 7 | 6 | Logging level (syslog(3) levels) Type: integer |
log.thread.name | * | true, false | false | Print internal thread name in log messages (useful for debugging librdkafka internals) Type: boolean |
log.connection.close | * | true, false | true | Log broker disconnects. It might be useful to turn this off when interacting with 0.9 brokers with an aggressive connection.max.idle.ms value. Type: boolean |
socket_cb | * | | | Socket creation callback to provide race-free CLOEXEC Type: pointer |
connect_cb | * | | | Socket connect callback Type: pointer |
closesocket_cb | * | | | Socket close callback Type: pointer |
open_cb | * | | | File open callback to provide race-free CLOEXEC Type: pointer |
opaque | * | | | Application opaque (set with rd_kafka_conf_set_opaque()) Type: pointer |
default_topic_conf | * | | | Default topic configuration for automatically subscribed topics Type: pointer |
internal.termination.signal | * | 0 .. 128 | 0 | Signal that librdkafka will use to quickly terminate on rd_kafka_destroy(). If this signal is not set then there will be a delay before rd_kafka_wait_destroyed() returns true as internal threads are timing out their system calls. If this signal is set however the delay will be minimal. The application should mask this signal as an internal signal handler is installed. Type: integer |
api.version.request | * | true, false | false | Request broker's supported API versions to adjust functionality to available protocol features. If set to false the fallback version broker.version.fallback will be used. NOTE: Depends on broker version >=0.10.0. If the request is not supported by (an older) broker the broker.version.fallback fallback is used. Type: boolean |
api.version.fallback.ms | * | 0 .. 604800000 | 1200000 | Dictates how long the broker.version.fallback fallback is used in the case the ApiVersionRequest fails. NOTE: The ApiVersionRequest is only issued when a new connection to the broker is made (such as after an upgrade). Type: integer |
broker.version.fallback | * | | 0.9.0 | Older broker versions (<0.10.0) provide no way for a client to query for supported protocol features (ApiVersionRequest, see api.version.request), making it impossible for the client to know what features it may use. As a workaround a user may set this property to the expected broker version and the client will automatically adjust its feature set accordingly if the ApiVersionRequest fails (or is disabled). The fallback broker version will be used for api.version.fallback.ms. Valid values are: 0.9.0, 0.8.2, 0.8.1, 0.8.0. Type: string |
security.protocol | * | plaintext, ssl, sasl_plaintext, sasl_ssl | plaintext | Protocol used to communicate with brokers. Type: enum value |
ssl.cipher.suites | * | | | A cipher suite is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using the TLS or SSL network protocol. See the manual pages for ciphers(1) and SSL_CTX_set_cipher_list(3). Type: string |
ssl.key.location | * | | | Path to client's private key (PEM) used for authentication. Type: string |
ssl.key.password | * | | | Private key passphrase Type: string |
ssl.certificate.location | * | | | Path to client's public key (PEM) used for authentication. Type: string |
ssl.ca.location | * | | | File or directory path to CA certificate(s) for verifying the broker's key. Type: string |
ssl.crl.location | * | | | Path to CRL for verifying broker's certificate validity. Type: string |
sasl.mechanisms | * | | GSSAPI | SASL mechanism to use for authentication. Supported: GSSAPI, PLAIN. NOTE: Despite the name, only one mechanism must be configured. Type: string |
sasl.kerberos.service.name | * | | kafka | Kerberos principal name that Kafka runs as. Type: string |
sasl.kerberos.principal | * | | kafkaclient | This client's Kerberos principal name. Type: string |
sasl.kerberos.kinit.cmd | * | | kinit -S "%{sasl.kerberos.service.name}/%{broker.name}" -k -t "%{sasl.kerberos.keytab}" %{sasl.kerberos.principal} | Full kerberos kinit command string; %{config.prop.name} is replaced by the corresponding config value, and %{broker.name} returns the broker's hostname. Type: string |
sasl.kerberos.keytab | * | | | Path to Kerberos keytab file. Uses system default if not set. NOTE: This is not automatically used but must be added to the template in sasl.kerberos.kinit.cmd as ... -t %{sasl.kerberos.keytab}. Type: string |
sasl.kerberos.min.time.before.relogin | * | 1 .. 86400000 | 60000 | Minimum time in milliseconds between key refresh attempts. Type: integer |
sasl.username | * | | | SASL username for use with the PLAIN mechanism Type: string |
sasl.password | * | | | SASL password for use with the PLAIN mechanism Type: string |
group.id | * | | | Client group id string. All clients sharing the same group.id belong to the same group. Type: string |
partition.assignment.strategy | * | | range,roundrobin | Name of the partition assignment strategy to use when the elected group leader assigns partitions to group members. Type: string |
session.timeout.ms | * | 1 .. 3600000 | 30000 | Client group session and failure detection timeout. Type: integer |
heartbeat.interval.ms | * | 1 .. 3600000 | 1000 | Group session keepalive heartbeat interval. Type: integer |
group.protocol.type | * | | consumer | Group protocol type Type: string |
coordinator.query.interval.ms | * | 1 .. 3600000 | 600000 | How often to query for the current client group coordinator. If the currently assigned coordinator is down the configured query interval will be divided by ten to more quickly recover in case of coordinator reassignment. Type: integer |
enable.auto.commit | C | true, false | true | Automatically and periodically commit offsets in the background. Type: boolean |
auto.commit.interval.ms | C | 0 .. 86400000 | 5000 | The frequency in milliseconds that the consumer offsets are committed (written) to offset storage. (0 = disable) Type: integer |
enable.auto.offset.store | C | true, false | true | Automatically store offset of last message provided to application. Type: boolean |
queued.min.messages | C | 1 .. 10000000 | 100000 | Minimum number of messages per topic+partition in the local consumer queue. Type: integer |
queued.max.messages.kbytes | C | 1 .. 1000000000 | 1000000 | Maximum number of kilobytes per topic+partition in the local consumer queue. This value may be overshot by fetch.message.max.bytes. Type: integer |
fetch.wait.max.ms | C | 0 .. 300000 | 100 | Maximum time the broker may wait to fill the response with fetch.min.bytes. Type: integer |
fetch.message.max.bytes | C | 1 .. 1000000000 | 1048576 | Initial maximum number of bytes per topic+partition to request when fetching messages from the broker. If the client encounters a message larger than this value it will gradually try to increase it until the entire message can be fetched. Type: integer |
max.partition.fetch.bytes | C | | | Alias for fetch.message.max.bytes |
fetch.min.bytes | C | 1 .. 100000000 | 1 | Minimum number of bytes the broker responds with. If fetch.wait.max.ms expires the accumulated data will be sent to the client regardless of this setting. Type: integer |
fetch.error.backoff.ms | C | 0 .. 300000 | 500 | How long to postpone the next fetch request for a topic+partition in case of a fetch error. Type: integer |
offset.store.method | C | none, file, broker | broker | Offset commit store method: ‘file’ - local file store (offset.store.path, et.al), ‘broker’ - broker commit store (requires Apache Kafka 0.8.2 or later on the broker). Type: enum value |
consume_cb | C | | | Message consume callback (set with rd_kafka_conf_set_consume_cb()) Type: pointer |
rebalance_cb | C | | | Called after the consumer group has been rebalanced (set with rd_kafka_conf_set_rebalance_cb()) Type: pointer |
offset_commit_cb | C | | | Offset commit result propagation callback (set with rd_kafka_conf_set_offset_commit_cb()) Type: pointer |
enable.partition.eof | C | true, false | true | Emit RD_KAFKA_RESP_ERR__PARTITION_EOF event whenever the consumer reaches the end of a partition. Type: boolean |
queue.buffering.max.messages | P | 1 .. 10000000 | 100000 | Maximum number of messages allowed on the producer queue. Type: integer |
queue.buffering.max.kbytes | P | 1 .. 2147483647 | 4000000 | Maximum total message size sum allowed on the producer queue. Type: integer |
queue.buffering.max.ms | P | 0 .. 900000 | 1000 | Maximum time, in milliseconds, for buffering data on the producer queue. Type: integer |
message.send.max.retries | P | 0 .. 10000000 | 2 | How many times to retry sending a failing MessageSet. Note: retrying may cause reordering. Type: integer |
retries | P | | | Alias for message.send.max.retries |
retry.backoff.ms | P | 1 .. 300000 | 100 | The backoff time in milliseconds before retrying a message send. Type: integer |
compression.codec | P | none, gzip, snappy, lz4 | none | Compression codec to use for compressing message sets. This is the default value for all topics; it may be overridden by the topic configuration property compression.codec. Type: enum value |
batch.num.messages | P | 1 .. 1000000 | 10000 | Maximum number of messages batched in one MessageSet. The total MessageSet size is also limited by message.max.bytes. Type: integer |
delivery.report.only.error | P | true, false | false | Only provide delivery reports for failed messages. Type: boolean |
dr_cb | P | | | Delivery report callback (set with rd_kafka_conf_set_dr_cb()) Type: pointer |
dr_msg_cb | P | | | Delivery report callback (set with rd_kafka_conf_set_dr_msg_cb()) Type: pointer |
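How these global properties map into C#: rdkafka-dotnet is a thin binding, and as far as I can tell its Config class exposes a string indexer that feeds rd_kafka_conf_set() directly, so any property from the table above can be set by name. Treat the indexer as an assumption and verify against your package version:

```csharp
// Hedged sketch: setting raw librdkafka properties by name before creating a client.
var config = new Config { GroupId = "config-demo" }; // "config-demo" is an illustrative group id
config["socket.keepalive.enable"] = "true";          // global (*) property
config["fetch.wait.max.ms"] = "100";                 // consumer (C) property
```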
Topic configuration properties
Property | C/P | Range | Default | Description |
---|---|---|---|---|
request.required.acks | P | -1 .. 1000 | 1 | This field indicates how many acknowledgements the leader broker must receive from ISR brokers before responding to the request: 0=Broker does not send any response/ack to client, 1=Only the leader broker will need to ack the message, -1 or all=broker will block until the message is committed by all in sync replicas (ISRs) or the broker's in.sync.replicas setting before sending a response. Type: integer |
acks | P | | | Alias for request.required.acks |
request.timeout.ms | P | 1 .. 900000 | 5000 | The ack timeout of the producer request in milliseconds. This value is only enforced by the broker and relies on request.required.acks being != 0. Type: integer |
message.timeout.ms | P | 0 .. 900000 | 300000 | Local message timeout. This value is only enforced locally and limits the time a produced message waits for successful delivery. A time of 0 is infinite. Type: integer |
produce.offset.report | P | true, false | false | Report offset of produced message back to application. The application must use the dr_msg_cb to retrieve the offset from rd_kafka_message_t.offset. Type: boolean |
partitioner_cb | P | | | Partitioner callback (set with rd_kafka_topic_conf_set_partitioner_cb()) Type: pointer |
opaque | * | | | Application opaque (set with rd_kafka_topic_conf_set_opaque()) Type: pointer |
compression.codec | P | none, gzip, snappy, lz4, inherit | inherit | Compression codec to use for compressing message sets. Type: enum value |
auto.commit.enable | C | true, false | true | If true, periodically commit the offset of the last message handed to the application. This committed offset will be used when the process restarts to pick up where it left off. If false, the application will have to call rd_kafka_offset_store() to store an offset (optional). NOTE: This property should only be used with the simple legacy consumer; when using the high-level KafkaConsumer the global enable.auto.commit property must be used instead. NOTE: There is currently no zookeeper integration; offsets will be written to broker or local file according to offset.store.method. Type: boolean |
enable.auto.commit | C | | | Alias for auto.commit.enable |
auto.commit.interval.ms | C | 10 .. 86400000 | 60000 | The frequency in milliseconds that the consumer offsets are committed (written) to offset storage. Type: integer |
auto.offset.reset | C | smallest, earliest, beginning, largest, latest, end, error | largest | Action to take when there is no initial offset in offset store or the desired offset is out of range: ‘smallest’,’earliest’ - automatically reset the offset to the smallest offset, ‘largest’,’latest’ - automatically reset the offset to the largest offset, ‘error’ - trigger an error which is retrieved by consuming messages and checking ‘message->err’. Type: enum value |
offset.store.path | C | | . | Path to local file for storing offsets. If the path is a directory a filename will be automatically generated in that directory based on the topic and partition. Type: string |
offset.store.sync.interval.ms | C | -1 .. 86400000 | -1 | fsync() interval for the offset file, in milliseconds. Use -1 to disable syncing, and 0 for immediate sync after each write. Type: integer |
offset.store.method | C | file, broker | broker | Offset commit store method: ‘file’ - local file store (offset.store.path, et.al), ‘broker’ - broker commit store (requires "group.id" to be configured and Apache Kafka 0.8.2 or later on the broker). Type: enum value |
consume.callback.max.messages | C | 0 .. 1000000 | 0 | Maximum number of messages to dispatch in one rd_kafka_consume_callback*() call (0 = unlimited) Type: integer |
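Topic-level properties from this second table go on TopicConfig rather than Config. A hedged sketch, assuming TopicConfig exposes the same kind of string indexer and Config has a settable DefaultTopicConfig property applied to subscribed topics (both match my reading of the wrapper, but verify):

```csharp
// Hedged sketch: consume from the earliest offset when no committed offset
// exists, applied as the default for all subscribed topics.
var topicConfig = new TopicConfig();
topicConfig["auto.offset.reset"] = "smallest";

// "topic-config-demo" is an illustrative group id, not from the original.
var config = new Config { GroupId = "topic-config-demo", DefaultTopicConfig = topicConfig };
```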