Implementing a Kafka Message Producer and Consumer in Java
2016-12-14 16:56
1. Overview
I won't rehash how Kafka works here; besides the official site, there is plenty of material online, so let's go straight to the code. The basic requirement: a producer class generates messages in a for loop, and a consumer class consumes them. The environment I verified this on:
centos-6.5
kafka_2.10-0.10
scala-2.10.4
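Both examples need the Kafka client library on the classpath. With Maven, a dependency matching the 0.10 line above might look like this (the exact patch version is an assumption; pick the one matching your broker):

```xml
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.10.0.0</version>
</dependency>
```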
2. Code
Producer:
package com.unisk.bigdata.kafka;

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class MyProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "master:9092");
        props.put("acks", "all");                 // wait for the full ISR to acknowledge each record
        props.put("retries", 0);
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("buffer.memory", 33554432);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = null;
        try {
            producer = new KafkaProducer<>(props);
            for (int i = 0; i < 100; i++) {
                String msg = "Message " + i;
                producer.send(new ProducerRecord<String, String>("HelloKafka", msg));
                System.out.println("Sent:" + msg);
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (producer != null) {  // guard against an NPE if construction failed
                producer.close();    // flushes buffered records before exit
            }
        }
    }
}
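A small aside: the producer settings above are hard-coded, but the same key=value pairs can live in a .properties file and be loaded with the JDK's Properties.load. A minimal, Kafka-free sketch (the file contents are inlined as a string here; in practice you would read them from a FileInputStream):

```java
import java.io.StringReader;
import java.util.Properties;

public class ProducerConfigDemo {
    // Parse producer settings from .properties-style text.
    static Properties load(String text) throws Exception {
        Properties props = new Properties();
        props.load(new StringReader(text)); // parses key=value lines
        return props;
    }

    public static void main(String[] args) throws Exception {
        // Same settings as the producer above, in .properties syntax
        String text = "bootstrap.servers=master:9092\n"
                    + "acks=all\n"
                    + "retries=0\n";
        Properties props = load(text);
        System.out.println(props.getProperty("bootstrap.servers")); // prints master:9092
    }
}
```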
Consumer:
package com.unisk.bigdata.kafka;

import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class MyConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "master:9092");
        props.put("group.id", "group-1");            // consumer group id
        props.put("enable.auto.commit", "true");     // commit offsets automatically
        props.put("auto.commit.interval.ms", "1000");
        props.put("auto.offset.reset", "earliest");  // start from the beginning when no committed offset exists
        props.put("session.timeout.ms", "30000");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> kafkaConsumer = new KafkaConsumer<>(props);
        kafkaConsumer.subscribe(Arrays.asList("HelloKafka"));
        while (true) {
            // poll blocks for up to 100 ms waiting for new records
            ConsumerRecords<String, String> records = kafkaConsumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset = %d, value = %s%n", record.offset(), record.value());
            }
        }
    }
}
3. Results
After running the producer:
Sent:Message 0
Sent:Message 1
Sent:Message 2
Sent:Message 3
Sent:Message 4
Sent:Message 5
Sent:Message 6
Sent:Message 7
……
After running the consumer:
offset = 67, value = Message 2
offset = 68, value = Message 5
offset = 69, value = Message 8
offset = 70, value = Message 11
offset = 71, value = Message 14
offset = 72, value = Message 17
offset = 73, value = Message 20
offset = 74, value = Message 23
offset = 75, value = Message 26
offset = 76, value = Message 29
……
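Note the pattern in the consumer output: the offsets are consecutive, but the values are every third message (2, 5, 8, ...). A plausible explanation, assuming the topic HelloKafka has 3 partitions, is that the producer sends records without a key, and the 0.10 DefaultPartitioner distributes keyless records round-robin across partitions, so each partition receives every third message with its own consecutive offset sequence; the output above would then be one partition's slice. A quick simulation of that distribution (the partition count is an assumption):

```java
import java.util.ArrayList;
import java.util.List;

public class RoundRobinDemo {
    // Distribute messages 0..count-1 round-robin over numPartitions
    // and return the ones landing in the given partition, in order.
    static List<String> messagesInPartition(int count, int numPartitions, int partition) {
        List<String> result = new ArrayList<>();
        for (int i = 0; i < count; i++) {
            if (i % numPartitions == partition) { // round-robin assignment for keyless records
                result.add("Message " + i);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Assumed: 3 partitions; partition index 2 would hold messages 2, 5, 8, ...
        System.out.println(messagesInPartition(12, 3, 2)); // prints [Message 2, Message 5, Message 8, Message 11]
    }
}
```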