NullPointerException - NPE while deserializing Avro messages from a Kafka stream

Tags: nullpointerexception apache-kafka avro confluent-schema-registry

I wrote a small Java class to test consuming an Avro-encoded Kafka topic.

    Properties appProps = new Properties();

    appProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "http://***kfk14bro1.lc:9092");
    appProps.put(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, "http://***kfk14str1.lc:8081");
    appProps.put(StreamsConfig.APPLICATION_ID_CONFIG, "consumer");
    appProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
    appProps.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,LogAndContinueExceptionHandler.class);


    StreamsBuilder streamsBuilder = new StreamsBuilder();

    streamsBuilder.stream(
                  "coordinates", Consumed.with(Serdes.String(), new GenericAvroSerde()))
              .peek((key, value) -> System.out.println("key=" + key + ", value=" + value));

    new KafkaStreams(streamsBuilder.build(), appProps).start();

When I run this class, the SerDe configs are logged correctly, as can be seen in the log below:

[consumer-56b0e0ca-d336-45cc-b388-46a68dbfab8b-StreamThread-1] INFO io.confluent.kafka.serializers.KafkaAvroSerializerConfig - KafkaAvroSerializerConfig values: 
    schema.registry.url = [http://***kfk14str1.lc:8081]
    basic.auth.user.info = [hidden]
    auto.register.schemas = true
    max.schemas.per.subject = 1000
    basic.auth.credentials.source = URL
    schema.registry.basic.auth.user.info = [hidden]
    value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
    key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy

[normal-consumer-56b0e0ca-d336-45cc-b388-46a68dbfab8b-StreamThread-1] INFO io.confluent.kafka.serializers.KafkaAvroDeserializerConfig - KafkaAvroDeserializerConfig values: 
    schema.registry.url = [http://***kfk14str1.lc:8081]
    basic.auth.user.info = [hidden]
    auto.register.schemas = true
    max.schemas.per.subject = 1000
    basic.auth.credentials.source = URL
    schema.registry.basic.auth.user.info = [hidden]
    specific.avro.reader = false
    value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
    key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy

But no messages are consumed, and the following log entry is produced for every message:

[normal-consumer-56b0e0ca-d336-45cc-b388-46a68dbfab8b-StreamThread-1] WARN org.apache.kafka.streams.errors.LogAndContinueExceptionHandler - Exception caught during Deserialization, taskId: 0_0, topic: coordinates, partition: 0, offset: 782205986
org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id 83
Caused by: java.lang.NullPointerException
    at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserialize(AbstractKafkaAvroDeserializer.java:116)
    at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserialize(AbstractKafkaAvroDeserializer.java:88)
    at io.confluent.kafka.serializers.KafkaAvroDeserializer.deserialize(KafkaAvroDeserializer.java:55)
    at io.confluent.kafka.streams.serdes.avro.GenericAvroDeserializer.deserialize(GenericAvroDeserializer.java:63)
    at io.confluent.kafka.streams.serdes.avro.GenericAvroDeserializer.deserialize(GenericAvroDeserializer.java:39)
    at org.apache.kafka.common.serialization.Deserializer.deserialize(Deserializer.java:58)
    at org.apache.kafka.streams.processor.internals.SourceNode.deserializeValue(SourceNode.java:60)

But I am able to read from the Avro console consumer just fine, so I know there is nothing wrong with the data written to the topic. The following command prints the records correctly:

    ~/kafka/confluent-5.1.2/bin/kafka-avro-console-consumer --bootstrap-server http://***kfk14bro1.lc:9092 --topic coordinates --property schema.registry.url=http://***kfk14str1.lc:8081 --property auto.offset.reset=latest

Best Answer

When you instantiate an Avro Serde yourself, it is not automatically configured with the schema registry URL.

So you either have to configure it yourself, or you define default serdes by adding:

    appProps.setProperty(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
    appProps.setProperty(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, GenericAvroSerde.class.getName());

and by removing

    Consumed.with(Serdes.String(), new GenericAvroSerde())
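Put together, the default-serde variant of the topology from the question would look roughly like this. This is a sketch, not a tested program: it reuses the broker, registry, and topic names from the question, and assumes a Confluent 5.x-era classpath (kafka-streams plus kafka-streams-avro-serde).

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

import io.confluent.kafka.serializers.AbstractKafkaAvroSerDeConfig;
import io.confluent.kafka.streams.serdes.avro.GenericAvroSerde;

public class DefaultSerdeConsumer {
    public static void main(String[] args) {
        Properties appProps = new Properties();
        appProps.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "http://***kfk14bro1.lc:9092");
        appProps.put(StreamsConfig.APPLICATION_ID_CONFIG, "consumer");
        appProps.put(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, "http://***kfk14str1.lc:8081");

        // Serdes declared as defaults in the config are created AND configured
        // by Kafka Streams itself, so they receive the schema registry URL above.
        appProps.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
        appProps.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, GenericAvroSerde.class.getName());

        StreamsBuilder streamsBuilder = new StreamsBuilder();
        // No Consumed.with(...) needed: the default serdes apply.
        streamsBuilder.stream("coordinates")
                .peek((key, value) -> System.out.println("key=" + key + ", value=" + value));

        new KafkaStreams(streamsBuilder.build(), appProps).start();
    }
}
```

The key difference from the question's code is that no serde is instantiated by hand, so nothing is left unconfigured.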

To configure the Serde yourself, use the following code (adapt it to your case):

    GenericAvroSerde genericAvroSerde = new GenericAvroSerde();
    boolean isKeySerde = false;
    genericAvroSerde.configure(
        Collections.singletonMap(
            AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG,
            "http://confluent-schema-registry-server:8081/"),
        isKeySerde);
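With a serde configured this way, the explicit `Consumed.with(...)` call from the question can stay; you just pass in the configured instance instead of a bare `new GenericAvroSerde()`. A sketch, reusing the registry URL and topic from the question:

```java
import java.util.Collections;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;

import io.confluent.kafka.serializers.AbstractKafkaAvroSerDeConfig;
import io.confluent.kafka.streams.serdes.avro.GenericAvroSerde;

public class ConfiguredSerdeTopology {
    public static void main(String[] args) {
        // Configure the value serde by hand, since Kafka Streams will not
        // configure serdes that are instantiated in application code.
        GenericAvroSerde valueSerde = new GenericAvroSerde();
        valueSerde.configure(
                Collections.singletonMap(
                        AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG,
                        "http://***kfk14str1.lc:8081"),
                false); // isKeySerde = false: this serde is for record values

        StreamsBuilder streamsBuilder = new StreamsBuilder();
        streamsBuilder.stream("coordinates", Consumed.with(Serdes.String(), valueSerde))
                .peek((key, value) -> System.out.println("key=" + key + ", value=" + value));
        // Build and start a KafkaStreams instance as in the question.
    }
}
```

Either approach works; the default-serde route is less code, while configuring the serde explicitly is useful when different topics need different serdes.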

Regarding "NullPointerException - NPE while deserializing Avro messages from a Kafka stream", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/55475749/
