java - Too many ongoing snapshots. Increase kafka producers pool size or decrease number of concurrent checkpoints

Tags: java kubernetes apache-kafka apache-flink checkpointing

I am working on a Flink application that sinks to Kafka. I created a Kafka producer with the default pool size of 5. I enabled checkpointing with the following configuration:

    env.enableCheckpointing(1800000); // checkpoint every 30 minutes

    // set mode to exactly-once (this is the default)
    env.getCheckpointConfig().setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE);

    // make sure 5000 ms of progress happen between checkpoints
    env.getCheckpointConfig().setMinPauseBetweenCheckpoints(5000);

    // checkpoints have to complete within one minute, or are discarded
    env.getCheckpointConfig().setCheckpointTimeout(60000);

    // allow only one checkpoint to be in progress at the same time
    env.getCheckpointConfig().setMaxConcurrentCheckpoints(1);
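
For context, the sink is a FlinkKafkaProducer011 created roughly as below (a minimal sketch with placeholder broker, topic and schema; with EXACTLY_ONCE the sink draws transactional producers from an internal pool whose default size is 5):

    import java.util.Properties;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011;
    import org.apache.flink.streaming.util.serialization.KeyedSerializationSchemaWrapper;

    Properties props = new Properties();
    props.setProperty("bootstrap.servers", "kafka:9092"); // placeholder broker address
    // keep the transaction timeout below the broker's transaction.max.timeout.ms
    props.setProperty("transaction.timeout.ms", "900000");

    // EXACTLY_ONCE activates the transactional producer pool (default size 5)
    FlinkKafkaProducer011<String> producer = new FlinkKafkaProducer011<>(
            "components-topic",                                   // placeholder topic
            new KeyedSerializationSchemaWrapper<>(new SimpleStringSchema()),
            props,
            FlinkKafkaProducer011.Semantic.EXACTLY_ONCE);

    stream.addSink(producer); // stream is the DataStream<String> being written out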

The application keeps crashing intermittently with the exception below. Is this a problem with the Kafka producer pool size or with the checkpointing?
2020-03-20 22:31:23,859 INFO  org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction  - FlinkKafkaProducer011 0/1 aborted recovered transaction TransactionHolder{handle=KafkaTransactionState [transactionalId=FileSplitReader -> metrics-map -> Sink: components-topic-sink-4ab008489d4c8ed0fe577883438cc1ff-1, producerId=21, epoch=3], transactionStartTime=1584742933826}
2020-03-20 22:31:23,860 ERROR org.apache.flink.streaming.runtime.tasks.StreamTask           - Error during disposal of stream operator.
java.lang.NullPointerException
    at org.apache.flink.streaming.api.functions.source.ContinuousFileReaderOperator.dispose(ContinuousFileReaderOperator.java:164)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.disposeAllOperators(StreamTask.java:668)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.cleanUpInvoke(StreamTask.java:579)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:481)
    at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:707)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:532)
    at java.lang.Thread.run(Thread.java:748)
2020-03-20 22:31:23,861 INFO  org.apache.flink.runtime.taskmanager.Task                     - FileSplitReader -> metrics-map -> Sink: components-topic-sink (1/1) (92b7f3ed8f6362fe0087efd40eb94016) switched from RUNNING to FAILED.
org.apache.flink.streaming.connectors.kafka.FlinkKafka011Exception: Too many ongoing snapshots. Increase kafka producers pool size or decrease number of concurrent checkpoints.
    at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.createTransactionalProducer(FlinkKafkaProducer011.java:934)
    at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.beginTransaction(FlinkKafkaProducer011.java:701)
    at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.beginTransaction(FlinkKafkaProducer011.java:97)
    at org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction.beginTransactionInternal(TwoPhaseCommitSinkFunction.java:394)
    at org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction.initializeState(TwoPhaseCommitSinkFunction.java:385)
    at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.initializeState(FlinkKafkaProducer011.java:862)
    at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.tryRestoreFunction(StreamingFunctionUtils.java:178)
    at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.restoreFunctionState(StreamingFunctionUtils.java:160)
    at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.initializeState(AbstractUdfStreamOperator.java:96)
    at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:284)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.initializeStateAndOpen(StreamTask.java:1006)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$0(StreamTask.java:454)
    at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:94)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:449)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:461)
    at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:707)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:532)
    at java.lang.Thread.run(Thread.java:748)

Best Answer

It is hard to say without access to the environment.

It could be related to the specific code you are running. You are basically hitting this exception.
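
The check behind that message (a simplified paraphrase of FlinkKafkaProducer011.createTransactionalProducer, not the exact Flink source) polls a transactional producer from a fixed-size pool each time a new transaction begins, and throws when none is free:

    // paraphrased: each checkpoint's transaction checks a producer out of the
    // pool and only returns it once the transaction is committed or aborted
    String transactionalId = availableTransactionalIds.poll();
    if (transactionalId == null) {
        throw new FlinkKafka011Exception(
                FlinkKafka011ErrorCode.PRODUCERS_POOL_EMPTY,
                "Too many ongoing snapshots. Increase kafka producers pool size"
                        + " or decrease number of concurrent checkpoints.");
    }

So the pool empties when transactions are opened faster than they are committed or aborted, for example during repeated restore cycles like the one in your log.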

Two things:

  • This is a similar issue related to the code in question:
    Interrupted while joining ioThread / Error during disposal of stream operator in flink application
  • It sounds like you are running in Kubernetes. If you look at this, you will see that the problem may be related to a failed teardown or to insufficient connectivity between the job manager and the task managers, so you may want to check the networking of your Kubernetes cluster and make sure all the Flink pods can communicate with each other.
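
If it turns out the pool really is being exhausted rather than a connectivity problem, the message's own suggestion can be followed literally: the pool size is a constructor argument of FlinkKafkaProducer011. A hedged sketch (placeholder broker, topic and schema; 5 is the default pool size):

    import java.util.Optional;
    import java.util.Properties;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011;
    import org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner;
    import org.apache.flink.streaming.util.serialization.KeyedSerializationSchemaWrapper;

    Properties props = new Properties();
    props.setProperty("bootstrap.servers", "kafka:9092"); // placeholder broker
    props.setProperty("transaction.timeout.ms", "900000");

    // the six-argument constructor exposes kafkaProducersPoolSize directly
    FlinkKafkaProducer011<String> producer = new FlinkKafkaProducer011<String>(
            "components-topic",                                   // placeholder topic
            new KeyedSerializationSchemaWrapper<>(new SimpleStringSchema()),
            props,
            Optional.<FlinkKafkaPartitioner<String>>empty(),      // default partitioning
            FlinkKafkaProducer011.Semantic.EXACTLY_ONCE,
            10);                                                  // pool size, up from the default 5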

Hope it helps!

Regarding "java - Too many ongoing snapshots. Increase kafka producers pool size or decrease number of concurrent checkpoints", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/60782483/
