c - MPI_Wait: Request pending due to failure

Tags: c, parallel-processing, mpi

I am having trouble implementing non-blocking sends and receives in the code below, and I get this error:

    Reading <edge192x128.pgm>
    Rank 2 [Sat Apr 28 11:24:58 2018] [c6-0c0s13n1] Fatal error in PMPI_Wait: Request pending due to failure, error stack:
    PMPI_Wait(207): MPI_Wait(request=0x7ffffff95534, status=0x7fffffff74b0) failed
    PMPI_Wait(158): Invalid MPI_Request
    Rank 3 [Sat Apr 28 11:24:58 2018] [c6-0c0s13n1] Fatal error in PMPI_Wait: Request pending due to failure, error stack:
    PMPI_Wait(207): MPI_Wait(request=0x7ffffff95534, status=0x7fffffff74b0) failed
    PMPI_Wait(158): Invalid MPI_Request
    _pmiu_daemon(SIGCHLD): [NID 01205] [c6-0c0s13n1] [Sat Apr 28 11:24:58 2018] PE RANK 2 exit signal Aborted
    [NID 01205] 2018-04-28 11:24:58 Apid 30656034: initiated application termination
    Application 30656034 exit codes: 134
    Application 30656034 resources: utime ~0s, stime ~0s, Rss ~7452, inblocks ~7926, outblocks ~19640

My program attempts to do the following (assume 4 processes for this example):

  • The root process reads the image file into masterbuf as a 2D array of size PM x PN;
  • The root process uses MPI_Issend to transfer subsections of masterbuf (PM/2 x PN/2) to all 4 processes, including itself. I used a strided datatype to split the original array into 4 parts (a short sketch of that datatype follows this list).
  • All processes use MPI_Irecv to store their PM/2 x PN/2 subsection in their own copy of buf.
  • MPI_Wait is called so that the program does not continue until the data distribution is complete (I know I could use MPI_Waitall here; I plan to switch to it once this part works).
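
For reference, the strided type is built with MPI_Type_vector. A minimal sketch of how its parameters map onto one MP x NP tile of the row-major M x N image (the name tile is illustrative; the full code below calls it MPI_block):

    /* One tile of the M x N row-major image:
     *   M / PX blocks (the rows of the tile),
     *   each block N / PY doubles long,
     *   consecutive blocks N doubles apart in masterbuf. */
    MPI_Datatype tile;
    MPI_Type_vector(M / PX, N / PY, N, MPI_DOUBLE, &tile);
    MPI_Type_commit(&tile);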

I have been poring over this code for hours and cannot resolve the problem, so any help would be much appreciated. The code is below; I have removed some irrelevant blocks.

    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>
    #include <math.h>
    #include "pgmio.h"

    #define M 192
    #define N 128

    #define PX 2            // number of processes in X dimension
    #define PY 2            // number of processes in Y dimension
    #define MP M/PX
    #define NP N/PY

    #define FILEIN "edge192x128.pgm"
    #define FILEOUT "ex7_0_192x128.pgm"

    int main(int argc, char **argv)
    {
      double buf[MP][NP];
      double old[MP + 2][NP + 2];
      double new[MP + 2][NP + 2];
      double edge[MP + 2][NP + 2];
      double masterbuf[M][N];
      double delta, delta_max, master_delta;

      int rank, cart_rank, size, left, right, up, down, iter;
      int dims[] = {2, 2};
      int periods[] = {0, 0};
      int reorder = 0;
      int tag = 0;

      MPI_Status status;
      MPI_Comm comm = MPI_COMM_WORLD;
      MPI_Comm cart_comm;

      /* initialise MPI */
      MPI_Init(&argc, &argv);
      MPI_Comm_size(comm, &size);
      MPI_Comm_rank(comm, &rank);
      MPI_Request request[2 * size];
      int coords[size][2];

      /* initialise cartesian topology */
      MPI_Cart_create(comm, 2, dims, periods, reorder, &cart_comm);
      MPI_Comm_rank(cart_comm, &cart_rank);
      MPI_Cart_shift(cart_comm, 1, 1, &left, &right);
      MPI_Cart_shift(cart_comm, 0, 1, &up, &down);
      printf("cart_rank: %d\n", cart_rank);

      /* ... block removed ... */

      /* create block datatype for allocation of subsections of image to processes */
      MPI_Datatype MPI_block;
      MPI_Type_vector(M / PX, N / PY, N, MPI_DOUBLE, &MPI_block);
      MPI_Type_commit(&MPI_block);

      /* ... block removed ... */

      /* master process: read edges data file into masterbuf and distribute */
      if (rank == 0)
      {
        printf("Reading <%s>\n", FILEIN);
        pgmread(FILEIN, masterbuf, M, N);

        printf("Distributing data to processes...\n");
        for (int i = 0; i < size; i++)
        {
          /* send chunk to each process: i refers to cart_rank */
          MPI_Cart_coords(cart_comm, i, 2, &coords[i][0]);
          printf("coords = (%d, %d), rank = %d\n", coords[i][0], coords[i][1],
            cart_rank);
          MPI_Issend(&masterbuf[coords[i][0] * MP][coords[i][1] * NP], MP * NP,
            MPI_block, i, tag, cart_comm, &request[i]);
        }

        MPI_Wait(&request[0], &status);
        MPI_Wait(&request[1], &status);
        MPI_Wait(&request[2], &status);
        MPI_Wait(&request[3], &status);
      }

      /* all processes: receive data sent by master process */
      MPI_Irecv(buf, MP * NP, MPI_block, cart_rank, tag, cart_comm,
        &request[cart_rank + size]);

      /* Could change this to MPI_Waitall */
      MPI_Wait(&request[5], &status);
      MPI_Wait(&request[4], &status);
      MPI_Wait(&request[7], &status);
      MPI_Wait(&request[6], &status);

      if (rank == 0)
      {
        printf("...complete.\n");
      }

Best answer

Your application deadlocks when rank 0 sends to itself, because the matching receive has not been posted yet. MPI_Issend is a synchronous send, so it cannot complete (and MPI_Wait on it cannot return) until the matching receive is posted, yet rank 0 only posts its MPI_Irecv after waiting on all four sends.

On top of that, each rank issues four MPI_Wait() calls but posts only one MPI_Irecv(): it waits on request[4] through request[7], of which only request[cart_rank + size] was ever initialised, and waiting on uninitialised requests is what produces the "Invalid MPI_Request" error above.
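
One way to address both points is to post the receive on every rank (including rank 0) before rank 0 waits on its sends, and to have each rank wait only on the one request it actually posted. A minimal sketch of that reordering, reusing the question's variable names; the count/datatype pairing (one MPI_block per send, MP * NP plain MPI_DOUBLEs into the contiguous buf on the receive side) and the source rank 0 in the receive are assumptions for illustration, not something stated in the question:

    /* Post the receive first on every rank, so the synchronous self-send
     * on rank 0 has something to match against.  recv_req is a new,
     * illustrative local variable. */
    MPI_Request recv_req;
    MPI_Irecv(buf, MP * NP, MPI_DOUBLE, 0, tag, cart_comm, &recv_req);

    if (rank == 0)
    {
      for (int i = 0; i < size; i++)
      {
        MPI_Cart_coords(cart_comm, i, 2, &coords[i][0]);
        /* send one strided MP x NP tile of masterbuf to rank i */
        MPI_Issend(&masterbuf[coords[i][0] * MP][coords[i][1] * NP], 1,
                   MPI_block, i, tag, cart_comm, &request[i]);
      }
      MPI_Wait(&request[0], &status);
      MPI_Wait(&request[1], &status);
      MPI_Wait(&request[2], &status);
      MPI_Wait(&request[3], &status);
    }

    /* each rank waits only on the request it posted itself */
    MPI_Wait(&recv_req, &status);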

As a side note, you can use MPI_Waitall() instead of several consecutive MPI_Wait() calls.
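
For example, the four consecutive waits on the send requests in the rank-0 branch could be collapsed into one call (MPI_STATUSES_IGNORE is used here only to avoid declaring a status array):

    /* complete all 'size' outstanding send requests at once */
    MPI_Waitall(size, request, MPI_STATUSES_IGNORE);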

The original question, "c - MPI_Wait: Request pending due to failure", can be found on Stack Overflow: https://stackoverflow.com/questions/50075763/
