java - Kafka offset not increasing

Tags: java apache-kafka kafka-producer-api spring-kafka

I am using Kafka with Spring Boot:

Kafka producer class:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.stereotype.Service;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;

@Service
public class MyKafkaProducer {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    private static final Logger LOGGER = LoggerFactory.getLogger(MyKafkaProducer.class);

    // Send Message
    public void sendMessage(String topicName, String message) throws Exception {
        LOGGER.debug("========topic Name===== " + topicName + "=========message=======" + message);
        ListenableFuture<SendResult<String, String>> result = kafkaTemplate.send(topicName, message);
        result.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
            @Override
            public void onSuccess(SendResult<String, String> result) {
                LOGGER.debug("sent message='{}' with offset={}", message, result.getRecordMetadata().offset());
            }

            @Override
            public void onFailure(Throwable ex) {
                LOGGER.error(Constants.PRODUCER_MESSAGE_EXCEPTION.getValue() + " : " + ex.getMessage());
            }
        });
    }
}

Kafka configuration:

spring.kafka.producer.retries=0
spring.kafka.producer.batch-size=100000
spring.kafka.producer.request.timeout.ms=30000
spring.kafka.producer.linger.ms=10
spring.kafka.producer.acks=0
spring.kafka.producer.buffer-memory=33554432
spring.kafka.producer.max.block.ms=5000
spring.kafka.bootstrap-servers=192.168.1.161:9092,192.168.1.162:9093

Problem:

I have a topic, say my-topic, with 5 partitions.

What happens is that I get the success log (i.e., the message was sent to Kafka successfully), but the offset of none of the partitions of my-topic increases.

As you can see above, I have added logging in onSuccess and onFailure. What I expect is that when sending a message to Kafka fails, I should get an error, but in this case I do not receive any error message.

This behaviour of Kafka happens in a ratio of 100:5 (i.e., after every 100 messages sent to Kafka successfully).
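For reference, here is a minimal sketch (not part of the original question) of a blocking variant of the send, reusing the kafkaTemplate and LOGGER fields from MyKafkaProducer above. Waiting on the returned future rethrows client-side failures such as serialization errors, metadata timeouts, or an exhausted buffer; note that with acks=0 the broker never acknowledges the write, so broker-side failures still cannot be observed this way.

    // Minimal sketch: block on the future so client-side send failures surface as exceptions.
    public void sendMessageAndWait(String topicName, String message) throws Exception {
        // With acks=0 no broker acknowledgement is requested, so the returned offset is -1.
        SendResult<String, String> result =
                kafkaTemplate.send(topicName, message).get(10, java.util.concurrent.TimeUnit.SECONDS);
        LOGGER.debug("sent message='{}' with offset={}", message, result.getRecordMetadata().offset());
    }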

Edit 1: Adding the Kafka producer logs for the success case (i.e., the message was successfully received on the consumer side):

ProducerConfig - logAll:180] ProducerConfig values: 
    acks = 0
    batch.size = 1000
    block.on.buffer.full = false
    bootstrap.servers = [10.20.1.19:9092, 10.20.1.20:9093, 10.20.1.26:9094]
    buffer.memory = 33554432
    client.id = 
    compression.type = none
    connections.max.idle.ms = 540000
    interceptor.classes = null
    key.serializer = class org.apache.kafka.common.serialization.StringSerializer
    linger.ms = 10
    max.block.ms = 5000
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.fetch.timeout.ms = 60000
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.ms = 50
    request.timeout.ms = 60000
    retries = 0
    retry.backoff.ms = 100
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    timeout.ms = 30000
    value.serializer = class org.apache.kafka.common.serialization.StringSerializer

2017-10-24 14:30:09, [INFO] [karma-unified-notification-manager - ProducerConfig - logAll:180] ProducerConfig values: 
    acks = 0
    batch.size = 1000
    block.on.buffer.full = false
    bootstrap.servers = [10.20.1.19:9092, 10.20.1.20:9093, 10.20.1.26:9094]
    buffer.memory = 33554432
    client.id = producer-1
    compression.type = none
    connections.max.idle.ms = 540000
    interceptor.classes = null
    key.serializer = class org.apache.kafka.common.serialization.StringSerializer
    linger.ms = 10
    max.block.ms = 5000
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.fetch.timeout.ms = 60000
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.ms = 50
    request.timeout.ms = 60000
    retries = 0
    retry.backoff.ms = 100
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    timeout.ms = 30000
    value.serializer = class org.apache.kafka.common.serialization.StringSerializer

Best answer

It does not show the error because you have set spring.kafka.producer.acks to 0. Set it to 1 and your callback should work. You can then check whether the offset increases.
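With the Spring Boot properties shown in the question, the change amounts to setting spring.kafka.producer.acks=1. Purely for illustration, a minimal sketch of the equivalent programmatic producer configuration follows (the class and bean names are assumptions, not taken from the answer):

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class KafkaProducerConfig {

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.1.161:9092,192.168.1.162:9093");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        // acks=1: the partition leader must confirm the write, so broker-side
        // failures are propagated to the onFailure callback instead of being dropped.
        props.put(ProducerConfig.ACKS_CONFIG, "1");
        return new DefaultKafkaProducerFactory<>(props);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate(ProducerFactory<String, String> producerFactory) {
        return new KafkaTemplate<>(producerFactory);
    }
}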

Regarding "java - Kafka offset not increasing", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/46892185/
