
Implementing a Kafka Message Producer and Consumer in Java

2016-12-14 16:56
1. Overview

I won't rehash how Kafka works internally; beyond the official site there is plenty of material online, so let's go straight to the code. The basic requirement is simple: a producer class generates messages in a for loop, and a consumer class consumes them. The environment I verified this on:

centos-6.5

kafka_2.10-0.10 (Kafka 0.10 built for Scala 2.10)

scala-2.10.4
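The topic needs to exist before either class runs. A minimal sketch of creating it with the stock CLI, assuming Kafka is unpacked at $KAFKA_HOME and ZooKeeper runs on master:2181 (both are assumptions; adjust to your cluster). Three partitions matches the consumer output shown later, where message numbers advance in steps of 3:

```shell
# Create the "HelloKafka" topic used by the producer and consumer below.
# In Kafka 0.10, kafka-topics.sh talks to ZooKeeper, not the broker.
$KAFKA_HOME/bin/kafka-topics.sh --create \
  --zookeeper master:2181 \
  --replication-factor 1 \
  --partitions 3 \
  --topic HelloKafka

# Verify the topic and its partition assignment:
$KAFKA_HOME/bin/kafka-topics.sh --describe \
  --zookeeper master:2181 --topic HelloKafka
```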

2. Code

Producer:

package com.unisk.bigdata.kafka;

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class MyProducer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "master:9092");
        props.put("acks", "all");             // wait for all in-sync replicas to acknowledge
        props.put("retries", 0);
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("buffer.memory", 33554432);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // try-with-resources closes the producer even on failure; a plain
        // finally { producer.close(); } would throw NullPointerException
        // if the constructor itself failed.
        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 100; i++) {
                String msg = "Message " + i;
                producer.send(new ProducerRecord<String, String>("HelloKafka", msg));
                System.out.println("Sent:" + msg);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

}
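send() above is fire-and-forget: it returns a Future and any broker-side error is silently dropped. A sketch of the callback variant, under the same assumptions as the class above (broker at master:9092, topic HelloKafka; needs a running broker to execute):

```java
package com.unisk.bigdata.kafka;

import java.util.Properties;

import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class MyCallbackProducer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "master:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 100; i++) {
                ProducerRecord<String, String> record =
                        new ProducerRecord<>("HelloKafka", "Message " + i);
                // The callback fires when the broker acknowledges (or rejects)
                // the record, so failures surface without blocking the loop.
                producer.send(record, new Callback() {
                    @Override
                    public void onCompletion(RecordMetadata metadata, Exception e) {
                        if (e != null) {
                            e.printStackTrace();
                        } else {
                            System.out.println("Acked: partition=" + metadata.partition()
                                    + ", offset=" + metadata.offset());
                        }
                    }
                });
            }
        } // close() flushes any buffered records before returning
    }
}
```

If you need the result synchronously, producer.send(record).get() blocks until the acknowledgement arrives.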


Consumer:

package com.unisk.bigdata.kafka;

import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class MyConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "master:9092");
        props.put("group.id", "group-1");
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "1000");
        props.put("auto.offset.reset", "earliest"); // new groups start from the beginning
        props.put("session.timeout.ms", "30000");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> kafkaConsumer = new KafkaConsumer<>(props);
        kafkaConsumer.subscribe(Arrays.asList("HelloKafka"));
        while (true) {
            // poll blocks for up to 100 ms waiting for records
            ConsumerRecords<String, String> records = kafkaConsumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset = %d, value = %s%n", record.offset(), record.value());
            }
        }
    }

}
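With enable.auto.commit=true, offsets are committed on a timer, so a crash mid-batch can skip records that were fetched but never processed. A sketch of manual committing, same assumptions as above (broker at master:9092, topic HelloKafka; the group id group-2 is made up for illustration, and a broker must be running to execute this):

```java
package com.unisk.bigdata.kafka;

import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class MyManualCommitConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "master:9092");
        props.put("group.id", "group-2");
        props.put("enable.auto.commit", "false"); // we commit offsets ourselves
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Arrays.asList("HelloKafka"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset = %d, value = %s%n",
                            record.offset(), record.value());
                }
                // Commit only after the batch is processed, so a crash
                // re-delivers unprocessed records instead of skipping them.
                consumer.commitSync();
            }
        }
    }
}
```

The trade-off is at-least-once delivery: after a crash, some records may be delivered twice, so processing should be idempotent.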


3. Results

After running the producer:

Sent:Message 0
Sent:Message 1
Sent:Message 2
Sent:Message 3
Sent:Message 4
Sent:Message 5
Sent:Message 6
Sent:Message 7
...

After running the consumer:

offset = 67, value = Message 2
offset = 68, value = Message 5
offset = 69, value = Message 8
offset = 70, value = Message 11
offset = 71, value = Message 14
offset = 72, value = Message 17
offset = 73, value = Message 20
offset = 74, value = Message 23
offset = 75, value = Message 26
offset = 76, value = Message 29
...

Note that the message numbers advance in steps of 3, which suggests the HelloKafka topic has 3 partitions: each poll returns records grouped by partition, so this excerpt shows one partition's messages, while the remaining messages sit at their own offsets in the other two partitions.
Tags: kafka