
Delete topics with caution in a Confluent environment

2016-07-04 09:37
Take a close look at this piece of code:

kafka-connect-hdfs-2.0.0\src\main\java\io\confluent\connect\hdfs\TopicPartitionWriter.java

private void writeRecord(SinkRecord record) throws IOException {
  long expectedOffset = offset + recordCounter;
  if (offset == -1) {
    offset = record.kafkaOffset();
  } else if (record.kafkaOffset() != expectedOffset) {
    // Currently it's possible to see stale data with the wrong offset after a rebalance when you
    // rewind, which we do since we manage our own offsets. See KAFKA-2894.
    if (!sawInvalidOffset) {
      log.info(
          "Ignoring stale out-of-order record in {}-{}. Has offset {} instead of expected offset {}",
          record.topic(), record.kafkaPartition(), record.kafkaOffset(), expectedOffset);
    }
    sawInvalidOffset = true;
    return;
  }
  // ... (rest of writeRecord: the record is only written when the offset check passes)
}
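To see how this guard behaves in isolation, here is a minimal, self-contained sketch of the same check. The class and method names (`OffsetGuard`, `accept`) are hypothetical, not Confluent's actual code; only the comparison logic mirrors `writeRecord` above:

```java
// Hypothetical sketch of the offset guard in writeRecord().
// A record is accepted only if its Kafka offset matches the offset the
// writer expects next; anything else is silently dropped.
class OffsetGuard {
    private long offset = -1;        // offset the writer believes it is at (-1 = none yet)
    private long recordCounter = 0;  // records accepted since 'offset' was set

    /** Returns true if the record would be written, false if it is skipped. */
    boolean accept(long kafkaOffset) {
        long expectedOffset = offset + recordCounter;
        if (offset == -1) {
            offset = kafkaOffset;    // first record seen: adopt its offset
        } else if (kafkaOffset != expectedOffset) {
            return false;            // stale/out-of-order: drop, do not advance
        }
        recordCounter++;
        return true;
    }
}
```

Feed it a contiguous offset sequence and everything is accepted; after the first gap, every record is dropped until one arrives with exactly the expected offset, because the dropped records never advance `recordCounter`.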


Now look at a log excerpt:

[2016-07-01 18:19:50,199] INFO Ignoring stale out-of-order record in beaver_http_response-1. Has offset 122980245 instead of expected offset 96789608 (io.confluent.connect.hdfs.TopicPartitionWriter:470)
[2016-07-01 18:19:50,200] INFO Starting commit and rotation for topic partition beaver_http_response-1 with start offsets {} and end offsets {} (io.confluent.connect.hdfs.TopicPartitionWriter:267)


This innocuous-looking log line is the key clue to why the data simply could not make it into HDFS. Once a record's Kafka offset no longer matches the offset the connector expects, nothing gets written: each mismatched record is silently dropped. And because the connector manages its own offsets (as the code comment notes), deleting and recreating a topic resets the offsets on the Kafka side while the connector's expectation stays where it was, so every incoming record fails the check.
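The stuck state described above can be sketched as follows. This is a hedged simulation with numbers mirroring the log, not the connector's actual recovery code; the method name `countWritten` is hypothetical:

```java
// Hypothetical simulation of the stuck connector: it expects offset
// 96789608 (recovered from its own bookkeeping), but after the topic was
// deleted and repopulated, Kafka delivers records starting at 122980245,
// so the guard rejects every single one of them.
class StuckWriter {
    /** Counts how many of 'records' consecutive records pass the offset check. */
    static long countWritten(long expectedOffset, long firstKafkaOffset, int records) {
        long written = 0;
        for (int i = 0; i < records; i++) {
            long kafkaOffset = firstKafkaOffset + i;
            if (kafkaOffset == expectedOffset) {  // only an exact match is accepted
                expectedOffset++;
                written++;
            }
            // on a mismatch the record is dropped and expectedOffset never advances
        }
        return written;
    }
}
```

With the offsets from the log, `countWritten(96789608L, 122980245L, n)` is 0 for any `n`: the delivered offsets are already past the expected one and can never catch up, so no data reaches HDFS until the connector's stored offsets are cleared.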
Tags: kafka confluent