java - Flink Stream Sink ParquetWriter fails with ClassCastException

Tags: java apache-kafka apache-flink parquet

I wrote a consumer that reads from a Kafka topic and writes the data out in Parquet format with a StreamingFileSink, but it fails with the following error:

java.lang.Exception: org.apache.flink.streaming.runtime.tasks.ExceptionInChainedOperatorException: Could not forward element to next operator
    at org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.checkThrowSourceExecutionException(SourceStreamTask.java:212)
    at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.performDefaultAction(SourceStreamTask.java:132)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.run(StreamTask.java:298)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:403)
    at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.flink.streaming.runtime.tasks.ExceptionInChainedOperatorException: Could not forward element to next operator
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:651)
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:612)
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:592)
    at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:727)
    at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:705)
    at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
    at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collectWithTimestamp(StreamSourceContexts.java:111)
    at org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher.emitRecordWithTimestamp(AbstractFetcher.java:398)
    at org.apache.flink.streaming.connectors.kafka.internal.KafkaFetcher.emitRecord(KafkaFetcher.java:185)
    at org.apache.flink.streaming.connectors.kafka.internal.KafkaFetcher.runFetchLoop(KafkaFetcher.java:150)
    at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.run(FlinkKafkaConsumerBase.java:715)
    at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100)
    at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63)
    at org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:202)
Caused by: java.lang.ClassCastException: java.util.ArrayList cannot be cast to java.lang.CharSequence
    at org.apache.parquet.avro.AvroWriteSupport.fromAvroString(AvroWriteSupport.java:371)
    at org.apache.parquet.avro.AvroWriteSupport.writeValueWithoutConversion(AvroWriteSupport.java:346)
    at org.apache.parquet.avro.AvroWriteSupport.writeValue(AvroWriteSupport.java:278)
    at org.apache.parquet.avro.AvroWriteSupport.writeRecordFields(AvroWriteSupport.java:191)
    at org.apache.parquet.avro.AvroWriteSupport.write(AvroWriteSupport.java:165)
    at org.apache.parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:128)
    at org.apache.parquet.hadoop.ParquetWriter.write(ParquetWriter.java:299)
    at org.apache.flink.formats.parquet.ParquetBulkWriter.addElement(ParquetBulkWriter.java:52)
    at org.apache.flink.streaming.api.functions.sink.filesystem.BulkPartWriter.write(BulkPartWriter.java:50)
    at org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.write(Bucket.java:214)
    at org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.onElement(Buckets.java:268)
    at org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink.invoke(StreamingFileSink.java:370)
    at org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:56)
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:637)

The code is quite simple:

DataStream<GenericRecord> sourceStream = env.addSource(bikeDetailsKafkaConsumer010);

final StreamingFileSink<GenericRecord> sink = StreamingFileSink
        .forBulkFormat(path, ParquetAvroWriters.forGenericRecord(SchemaUtils.getSchema(schemaSubject)))
        .withBucketAssigner(new EventTimeBucketAssigner())
        .build();

sourceStream.addSink(sink).setParallelism(parallelism);

Could some of the records be malformed, and how can I debug which record triggers the exception?

Best Answer

If you get a ClassCastException while writing Parquet/Avro, it is almost always a mismatch between the schema and the data.
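Your trace pins down the mismatch: AvroWriteSupport.fromAvroString expected a CharSequence, but the record held an ArrayList, i.e. a field that the writer schema declares as string actually contains a list. As a minimal sketch of how this can happen (the schema and field names below are made up for illustration, not taken from your job): GenericData.Record.put does not validate types, so a bad value is only caught much later, inside the ParquetWriter.

// imports: org.apache.avro.Schema, org.apache.avro.generic.GenericData,
// org.apache.avro.generic.GenericRecord, java.util.ArrayList, java.util.Arrays
Schema schema = new Schema.Parser().parse(
        "{\"type\":\"record\",\"name\":\"Bike\",\"fields\":"
        + "[{\"name\":\"tags\",\"type\":\"string\"}]}");

GenericRecord record = new GenericData.Record(schema);
// put() accepts any Object without checking it against the schema...
record.put("tags", new ArrayList<>(Arrays.asList("road", "city")));
// ...so the ClassCastException only surfaces when the sink later tries
// to write "tags" as a string.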

I would suggest printing out the expected schema and the actual record formatted as JSON (toString works, or better, with full type information):

// imports: java.io.ByteArrayOutputStream, java.io.IOException, org.apache.avro.Schema,
// org.apache.avro.generic.GenericDatumWriter, org.apache.avro.generic.GenericRecord,
// org.apache.avro.io.DatumWriter, org.apache.avro.io.EncoderFactory, org.apache.avro.io.JsonEncoder
static String recordToJson(Schema schema, GenericRecord genericRecord) throws IOException {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    DatumWriter<GenericRecord> writer = new GenericDatumWriter<>(schema);
    JsonEncoder encoder = EncoderFactory.get().jsonEncoder(schema, baos);
    // GenericDatumWriter walks the record using the schema, so a field whose
    // runtime type does not match fails right here with a readable error.
    writer.write(genericRecord, encoder);
    encoder.flush();
    return new String(baos.toByteArray());
}
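To track down the offending message, one option is to run every record through this encoder in a map step in front of the sink and log the ones that fail. This is only a sketch: it assumes LOG is an slf4j logger in your job, reuses SchemaUtils.getSchema(schemaSubject) from the question, and for brevity looks the schema up inside the lambda (a real job would cache it, e.g. in open() of a RichMapFunction).

sourceStream
        .map(record -> {
            try {
                // Throws for any record whose fields do not match the schema.
                recordToJson(SchemaUtils.getSchema(schemaSubject), record);
            } catch (Exception e) {
                // GenericRecord.toString() prints the record as JSON-like text.
                LOG.error("Record does not match writer schema: {}", record, e);
            }
            return record;
        })
        .returns(GenericRecord.class) // type hint Flink typically needs for lambdas
        .addSink(sink)
        .setParallelism(parallelism);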

Regarding java - Flink Stream Sink ParquetWriter fails with ClassCastException, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/60458366/
