java - Amazon Kinesis Data Analytics for Java Applications: Avro issue when deserializing incoming messages

Tags: java amazon-web-services avro amazon-kinesis amazon-kinesis-analytics

I am trying to deploy my Flink application to AWS Kinesis Data Analytics. The application uses Apache Avro to deserialize/serialize incoming messages. It works fine on my local machine, but when I deploy it to AWS I get the following exception (in CloudWatch Logs): Caused by: java.io.InvalidClassException: org.apache.avro.specific.SpecificRecordBase; local class incompatible: stream classdesc serialVersionUID = 4445917349737100331, local class serialVersionUID = -1463700717714793795

Log details:

{
  "locationInformation": "org.apache.flink.runtime.taskmanager.Task.transitionState(Task.java:913)",
  "logger": "org.apache.flink.runtime.taskmanager.Task",
  "message": "Source: Custom Source -> Sink: Unnamed (1/1) (a72ff69f9dc0f9e56d1104ce21456a5d) switched from RUNNING to FAILED.",
  "throwableInformation": [
    "org.apache.flink.streaming.runtime.tasks.StreamTaskException: Could not instantiate serializer.",
    "\tat org.apache.flink.streaming.api.graph.StreamConfig.getTypeSerializerIn1(StreamConfig.java:160)",
    "\tat org.apache.flink.streaming.runtime.tasks.OperatorChain.createChainedOperator(OperatorChain.java:380)",
    "\tat org.apache.flink.streaming.runtime.tasks.OperatorChain.createOutputCollector(OperatorChain.java:296)",
    "\tat org.apache.flink.streaming.runtime.tasks.OperatorChain.<init>(OperatorChain.java:133)",
    "\tat org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:275)",
    "\tat org.apache.flink.runtime.taskmanager.Task.run(Task.java:714)",
    "\tat java.lang.Thread.run(Thread.java:748)",
    "Caused by: java.io.InvalidClassException: org.apache.avro.specific.SpecificRecordBase; local class incompatible: stream classdesc serialVersionUID = 4445917349737100331, local class serialVersionUID = -1463700717714793795",
    "\tat java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:699)",
    "\tat java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1885)",
    "\tat java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1751)",
    "\tat java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1885)",
    "\tat java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1751)",
    "\tat java.io.ObjectInputStream.readClass(ObjectInputStream.java:1716)",
    "\tat java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1556)",
    "\tat java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)",
    "\tat org.apache.flink.formats.avro.typeutils.AvroSerializer.readCurrentLayout(AvroSerializer.java:465)",
    "\tat org.apache.flink.formats.avro.typeutils.AvroSerializer.readObject(AvroSerializer.java:432)",
    "\tat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)",
    "\tat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)",
    "\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)",
    "\tat java.lang.reflect.Method.invoke(Method.java:498)",
    "\tat java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1170)",
    "\tat java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2178)",
    "\tat java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)",
    "\tat java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)",
    "\tat java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)",
    "\tat org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:566)",
    "\tat org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:552)",
    "\tat org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:540)",
    "\tat org.apache.flink.util.InstantiationUtil.readObjectFromConfig(InstantiationUtil.java:501)",
    "\tat org.apache.flink.streaming.api.graph.StreamConfig.getTypeSerializerIn1(StreamConfig.java:158)",
    "\t... 6 more"
  ],
  "threadName": "Source: Custom Source -> Sink: Unnamed (1/1)",
  "applicationARN": "arn:aws:kinesisanalytics:us-east-1:829044228870:application/poc-kda",
  "applicationVersionId": "8",
  "messageSchemaVersion": "1",
  "messageType": "INFO"
}

Library versions I am using:

  • Apache Avro - 1.9.1
  • Apache Flink - 1.9.1
  • Kinesis Producer Library - 0.13.1
  • AWS Flink - 1.8

Note that the same problem occurs if I use Apache Flink 1.8 or 1.6.

KDA Flink code:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.Properties;

import org.apache.flink.api.common.serialization.DeserializationSchema;
import org.apache.flink.api.common.serialization.SerializationSchema;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer;
import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisProducer;
import org.apache.flink.streaming.connectors.kinesis.config.AWSConfigConstants;
import org.apache.flink.streaming.connectors.kinesis.config.ConsumerConfigConstants;

import com.naya.avro.EventAttributes;

public class KinesisExampleKDA {
   private static final String REGION = "us-east-1";

   public static void main(String[] args) throws Exception {
       Properties consumerConfig = new Properties();
       consumerConfig.put(AWSConfigConstants.AWS_REGION, REGION);
       consumerConfig.put(ConsumerConfigConstants.STREAM_INITIAL_POSITION, "LATEST");

       StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
       env.enableCheckpointing(50000); // checkpoint every 50 seconds

       DataStream<EventAttributes> consumerStream = env.addSource(new FlinkKinesisConsumer<>(
               "dev-events", new KinesisSerializer(), consumerConfig));

       consumerStream
               .addSink(getProducer());
       env.execute("kinesis-example");
   }

   private static FlinkKinesisProducer<EventAttributes> getProducer() {
       Properties outputProperties = new Properties();
       // AWS_REGION is inherited from AWSConfigConstants, so it is valid for the producer too
       outputProperties.setProperty(ConsumerConfigConstants.AWS_REGION, REGION);
       outputProperties.setProperty("AggregationEnabled", "false");

       FlinkKinesisProducer<EventAttributes> sink = new FlinkKinesisProducer<>(new KinesisSerializer(), outputProperties);
       sink.setDefaultStream("dev-result");
       sink.setDefaultPartition("0");
       return sink;
   }
}

class KinesisSerializer implements DeserializationSchema<EventAttributes>, SerializationSchema<EventAttributes> {
   @Override
   public EventAttributes deserialize(byte[] bytes) throws IOException {
       // Decodes Avro's single-message encoding, as written by the producer below
       return EventAttributes.fromByteBuffer(ByteBuffer.wrap(bytes));
   }

   @Override
   public boolean isEndOfStream(EventAttributes eventAttributes) {
       return false;
   }

   @Override
   public byte[] serialize(EventAttributes eventAttributes) {
       try {
           return eventAttributes.toByteBuffer().array();
       } catch (IOException e) {
           e.printStackTrace();
       }
       return new byte[1];
   }

   @Override
   public TypeInformation<EventAttributes> getProducedType() {
       // EventAttributes extends SpecificRecordBase, so Flink resolves this to its
       // Avro type information; the resulting AvroSerializer is the object that
       // fails to deserialize in the stack trace above.
       return TypeInformation.of(EventAttributes.class);
   }
}

Kinesis producer code:

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import com.amazonaws.ClientConfiguration;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.kinesis.AmazonKinesis;
import com.amazonaws.services.kinesis.AmazonKinesisClientBuilder;
import com.amazonaws.services.kinesis.model.PutRecordsRequest;
import com.amazonaws.services.kinesis.model.PutRecordsRequestEntry;
import com.amazonaws.services.kinesis.model.PutRecordsResult;
import com.fasterxml.jackson.databind.JsonMappingException;
import com.naya.avro.EventAttributes;

public class KinesisProducer {

   private static String streamName = "dev-events";

   public static void main(String[] args) throws InterruptedException, JsonMappingException {

       AmazonKinesis kinesisClient = getAmazonKinesisClient("us-east-1");

       try {
           sendData(kinesisClient, streamName);
       } catch (IOException e) {
           e.printStackTrace();
       }
   }

   private static AmazonKinesis getAmazonKinesisClient(String regionName) {

       AmazonKinesisClientBuilder clientBuilder = AmazonKinesisClientBuilder.standard();
       clientBuilder.setEndpointConfiguration(
               new AwsClientBuilder.EndpointConfiguration("kinesis.us-east-1.amazonaws.com",
                       regionName));
       clientBuilder.withCredentials(DefaultAWSCredentialsProviderChain.getInstance());
       clientBuilder.setClientConfiguration(new ClientConfiguration());

       return clientBuilder.build();
   }

   private static void sendData(AmazonKinesis kinesisClient, String streamName) throws IOException {

       PutRecordsRequest putRecordsRequest = new PutRecordsRequest();

       putRecordsRequest.setStreamName(streamName);
       List<PutRecordsRequestEntry> putRecordsRequestEntryList = new ArrayList<>();
       for (int i = 0; i < 50; i++) {
           PutRecordsRequestEntry putRecordsRequestEntry = new PutRecordsRequestEntry();
           EventAttributes eventAttributes = EventAttributes.newBuilder().setName("Jon.Doe").build();
           putRecordsRequestEntry.setData(eventAttributes.toByteBuffer());
           putRecordsRequestEntry.setPartitionKey(String.format("partitionKey-%d", i));
           putRecordsRequestEntryList.add(putRecordsRequestEntry);
       }

       putRecordsRequest.setRecords(putRecordsRequestEntryList);
       PutRecordsResult putRecordsResult = kinesisClient.putRecords(putRecordsRequest);
       System.out.println("Put Result" + putRecordsResult);
   }

The Avro schema in .avdl format:

@version("0.1.0")
@namespace("com.naya.avro")
protocol UBXEventProtocol{

 record EventAttributes{
               union{null, string} name=null;
 }
}

The Avro auto-generated entity class (excerpt):

@org.apache.avro.specific.AvroGenerated
public class EventAttributes extends org.apache.avro.specific.SpecificRecordBase implements org.apache.avro.specific.SpecificRecord {
  private static final long serialVersionUID = 2780976157169751219L;
  public static final org.apache.avro.Schema SCHEMA$ = new org.apache.avro.Schema.Parser().parse("{\"type\":\"record\",\"name\":\"EventAttributes\",\"namespace\":\"com.naya.avro\",\"fields\":[{\"name\":\"name\",\"type\":[\"null\",{\"type\":\"string\",\"avro.java.string\":\"String\"}],\"default\":null}]}");
  public static org.apache.avro.Schema getClassSchema() { return SCHEMA$; }

  private static SpecificData MODEL$ = new SpecificData();

  private static final BinaryMessageEncoder<EventAttributes> ENCODER =
      new BinaryMessageEncoder<EventAttributes>(MODEL$, SCHEMA$);

  private static final BinaryMessageDecoder<EventAttributes> DECODER =
      new BinaryMessageDecoder<EventAttributes>(MODEL$, SCHEMA$);
…
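
As a quick local sanity check (a sketch of my own, not from the question), the single-message encode/decode round trip that both the producer and the KinesisSerializer rely on can be exercised like this, assuming the generated EventAttributes class above:

import java.io.IOException;
import java.nio.ByteBuffer;

import com.naya.avro.EventAttributes;

public class RoundTripCheck {
   public static void main(String[] args) throws IOException {
       // toByteBuffer()/fromByteBuffer() use the BinaryMessageEncoder/Decoder
       // shown in the generated class, i.e. Avro's single-message encoding.
       EventAttributes in = EventAttributes.newBuilder().setName("Jon.Doe").build();
       ByteBuffer buf = in.toByteBuffer();
       EventAttributes out = EventAttributes.fromByteBuffer(buf);
       System.out.println("round trip ok, name = " + out.getName());
   }
}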

GitHub link:

Could someone add more details? Why does it not work on AWS?

Thanks in advance.

Best answer

Looking at the stack trace, the failure does not seem to happen while the application is trying to read a message; it actually happens during the initialization phase of the operator itself.

This is how Flink works: it serializes (using Java serialization) every operator that needs to execute and distributes them across the cluster in serialized form. That means the KinesisSerializer gets serialized itself (as a class) to be sent over the network.

Now, the problem is that the Kinesis serializer references the EventAttributes model, which means a reference to EventAttributes (the class itself, not any particular instance) is serialized along with it. Part of that serialization metadata is what the class extends and implements; in your case that is SpecificRecordBase, which is not part of your deliverable but part of the Avro library.

So the full serialization chain for the operator itself is KinesisConsumer -> KinesisSerializer -> EventAttributes -> SpecificRecordBase (part of the Avro library).
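
To make that concrete, here is a minimal sketch (hypothetical, outside Flink) of how Java serialization drags the superclass descriptor into the stream:

import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;

import com.naya.avro.EventAttributes;

public class DescriptorChainDemo {
   public static void main(String[] args) throws Exception {
       ByteArrayOutputStream bos = new ByteArrayOutputStream();
       try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
           // Writing a Class reference records the descriptor chain
           // EventAttributes -> SpecificRecordBase, each stamped with the
           // serialVersionUID of the Avro version on the *writing* classpath.
           oos.writeObject(EventAttributes.class);
       }
       // Deserializing these bytes on a JVM with a different Avro version on
       // its classpath fails in ObjectStreamClass.initNonProxy with the same
       // InvalidClassException as in the CloudWatch log above.
       System.out.println("wrote " + bos.size() + " bytes of descriptors");
   }
}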

However, AWS runs Flink 1.8, which uses Avro 1.8.2, so all of the base Avro classes come from 1.8.2. You compiled your application against, and bundled it with, the Avro 1.9 binaries. So when Flink serializes your operators to send them to the cluster, it serializes a reference to the 1.9 version of SpecificRecordBase. But when Flink actually tries to deserialize it, it finds that the version does not match the class actually available to it (1.8.2), and linking fails.
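
A quick way to see which SpecificRecordBase you are actually linking against (a diagnostic sketch; the stream value 4445917349737100331 comes from the log above):

import java.io.ObjectStreamClass;

import org.apache.avro.specific.SpecificRecordBase;

public class SuidCheck {
   public static void main(String[] args) {
       // Compare this with the "stream classdesc serialVersionUID" in the
       // exception; a mismatch means writer and reader use different Avro versions.
       long suid = ObjectStreamClass.lookup(SpecificRecordBase.class)
               .getSerialVersionUID();
       System.out.println("local SpecificRecordBase serialVersionUID = " + suid);
   }
}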

You have 2 options here:

  1. Don't use KDA. Move to EMR (which packages Flink 1.9.1 as of January 2020) or to a standalone Flink cluster (which requires manual deployment, on EMR or on bare machines).
  2. Build your application entirely against Flink 1.8 (and therefore Avro 1.8.2). You mentioned that "the 1.8.2 version of the application does not compile" - try to fix that instead; a build sketch follows this list.
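
For option 2, a minimal build sketch (the Maven coordinates below are my assumptions; adapt them to your build tool). The key is to compile and bundle against Avro 1.8.2 and Flink 1.8.x, and to regenerate EventAttributes with the 1.8.2 code generator so the generated class compiles:

<!-- match what the KDA runtime actually ships: Flink 1.8.x with Avro 1.8.2 -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-streaming-java_2.11</artifactId>
  <version>1.8.2</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>org.apache.avro</groupId>
  <artifactId>avro</artifactId>
  <version>1.8.2</version>
</dependency>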

This question, Amazon Kinesis Data Analytics for Java Applications: Avro issue when deserializing incoming messages, was originally asked on Stack Overflow: https://stackoverflow.com/questions/59662118/
