apache-flink - Parallelism in Apache Flink

Tags: apache-flink

Can I set different degrees of parallelism for different parts of a task in a Flink program?
For example, how does Flink interpret the following sample code?
The two custom partitioners MyPartitioner1 and MyPartitioner2 partition the input data into 4 and 2 partitions, respectively.

partitionedData1 = inputData1
  .partitionCustom(new MyPartitioner1(), 1);
env.setParallelism(4);
DataSet<Tuple2<Integer, Integer>> output1 = partitionedData1
  .mapPartition(new calculateFun());

partitionedData2 = inputData2
  .partitionCustom(new MyPartitioner2(), 2);
env.setParallelism(2);
DataSet<Tuple2<Integer, Integer>> output2 = partitionedData2
  .mapPartition(new calculateFun());

I get the following error for this code:
Exception in thread "main" org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
    at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$receiveWithLogMessages$1.applyOrElse(JobManager.scala:314)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
    at org.apache.flink.runtime.ActorLogMessages$$anon$1.apply(ActorLogMessages.scala:36)
    at org.apache.flink.runtime.ActorLogMessages$$anon$1.apply(ActorLogMessages.scala:29)
    at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
    at org.apache.flink.runtime.ActorLogMessages$$anon$1.applyOrElse(ActorLogMessages.scala:29)
    at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
    at org.apache.flink.runtime.jobmanager.JobManager.aroundReceive(JobManager.scala:92)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
    at akka.actor.ActorCell.invoke(ActorCell.scala:487)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:254)
    at akka.dispatch.Mailbox.run(Mailbox.scala:221)
    at akka.dispatch.Mailbox.exec(Mailbox.scala:231)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 2
    at org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:80)
    at org.apache.flink.runtime.operators.shipping.OutputCollector.collect(OutputCollector.java:65)
    at org.apache.flink.runtime.operators.NoOpDriver.run(NoOpDriver.java:92)
    at org.apache.flink.runtime.operators.RegularPactTask.run(RegularPactTask.java:496)
    at org.apache.flink.runtime.operators.RegularPactTask.invoke(RegularPactTask.java:362)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:559)
    at java.lang.Thread.run(Unknown Source)

Best Answer

ExecutionEnvironment.setParallelism() sets the parallelism for the whole program, i.e., for all operators of the program.

You can specify the parallelism for each individual operator by calling the setParallelism() method on that operator.
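
For the example in the question, that would look roughly like the sketch below. It is only a sketch: it assumes inputData1 and inputData2 are of type DataSet<Tuple2<Integer, Integer>> and that the partitioners key on field 0, and it reuses the MyPartitioner1, MyPartitioner2 and calculateFun classes from your code.

ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(4);  // default parallelism for every operator without an explicit setting

// First branch: the mapPartition operator explicitly runs with 4 parallel tasks.
DataSet<Tuple2<Integer, Integer>> output1 = inputData1
  .partitionCustom(new MyPartitioner1(), 0)  // assumed key field: 0
  .mapPartition(new calculateFun())
  .setParallelism(4);

// Second branch: the mapPartition operator overrides the default and runs with 2 parallel tasks.
DataSet<Tuple2<Integer, Integer>> output2 = inputData2
  .partitionCustom(new MyPartitioner2(), 0)
  .mapPartition(new calculateFun())
  .setParallelism(2);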

The ArrayIndexOutOfBoundsException is thrown because your custom partitioner returns an invalid partition number, probably due to an unexpected degree of parallelism. The custom partitioner receives the actual parallelism of the receiving operator as the numPartitions argument of its partition(K key, int numPartitions) method.
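
One way to make a partitioner robust against whatever parallelism the receiver actually runs with is to derive the partition index from the numPartitions argument, for example with a modulo. A minimal sketch (the class name ModuloPartitioner and the Integer key type are illustrative, not taken from your code):

import org.apache.flink.api.common.functions.Partitioner;

// Always returns a partition index in the valid range [0, numPartitions),
// no matter which parallelism the runtime passes in.
public class ModuloPartitioner implements Partitioner<Integer> {
  @Override
  public int partition(Integer key, int numPartitions) {
    return Math.abs(key % numPartitions);
  }
}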

Regarding apache-flink - Parallelism in Apache Flink, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/34047548/
