c - How to use MPI_Scatterv to send the rows of a matrix to all processes?

Tags: c, matrix, mpi

I'm working with the MPI interface. I want to split a matrix by rows and distribute the pieces among all processes.

For instance, say I have this 7x7 square matrix M:

M = [
    0.00    1.00    2.00    3.00    4.00    5.00    6.00    
    7.00    8.00    9.00    10.00   11.00   12.00   13.00
    14.00   15.00   16.00   17.00   18.00   19.00   20.00
    21.00   22.00   23.00   24.00   25.00   26.00   27.00
    28.00   29.00   30.00   31.00   32.00   33.00   34.00
    35.00   36.00   37.00   38.00   39.00   40.00   41.00
    42.00   43.00   44.00   45.00   46.00   47.00   48.00
];

I have 3 processes, so one possible split is (a worked computation of the block sizes follows this list):

  • Process 0 gets rows 0 and 1
  • Process 1 gets rows 2, 3 and 4
  • Process 2 gets rows 5 and 6
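
For reference, here is the arithmetic the BLOCK_* macros in the code below actually perform (integer division throughout). Note that for n = 7 and p = 3 they group the rows as 2/2/3 rather than the 2/3/2 sketched above; either split is fine, as long as the counts and displacements are computed consistently:

BLOCK_SIZE(0, 3, 7) = 1*7/3 - 0*7/3 = 2 - 0 = 2   // process 0: rows 0..1
BLOCK_SIZE(1, 3, 7) = 2*7/3 - 1*7/3 = 4 - 2 = 2   // process 1: rows 2..3
BLOCK_SIZE(2, 3, 7) = 3*7/3 - 2*7/3 = 7 - 4 = 3   // process 2: rows 4..6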

After the Scatterv it should look like this:

Process 0:
M0 = [
    0.00    1.00    2.00    3.00    4.00    5.00    6.00    
    7.00    8.00    9.00    10.00   11.00   12.00   13.00
];

Process 1:
M1 = [
    14.00   15.00   16.00   17.00   18.00   19.00   20.00
    21.00   22.00   23.00   24.00   25.00   26.00   27.00
    28.00   29.00   30.00   31.00   32.00   33.00   34.00
];

Process 2:
M2 = [
    35.00   36.00   37.00   38.00   39.00   40.00   41.00
    42.00   43.00   44.00   45.00   46.00   47.00   48.00
];

I think that makes clear what I want to achieve. If I haven't explained it well, feel free to ask.

Now, here is my code:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define BLOCK_LOW(id,p,n) ((id)*(n)/(p))
#define BLOCK_HIGH(id,p,n) ((id+1)*(n)/(p) - 1)
#define BLOCK_SIZE(id,p,n) ((id+1)*(n)/(p) - (id)*(n)/(p))
#define BLOCK_OWNER(index,p,n) (((p)*((index)+1)-1)/(n))

void **matrix_create(size_t m, size_t n, size_t size) {
   size_t i; 
   void **p= (void **) malloc(m*n*size+ m*sizeof(void *));
   char *c=  (char*) (p+m);
   for(i=0; i<m; ++i)
      p[i]= (void *) c+i*n*size;
   return p;
}

void matrix_print(double **M, size_t m, size_t n, char *name) {
    size_t i,j;
    printf("%s=[",name);
    for(i=0; i<m; ++i) {
        printf("\n  ");
        for(j=0; j<n; ++j)
            printf("%f  ",M[i][j]);
    }
    printf("\n];\n");
}

int main(int argc, char *argv[]) {

    int npes, myrank, root = 0, n = 7, rows, i, j, *sendcounts, *displs;
    double **m, **mParts;

    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD,&npes);
    MPI_Comm_rank(MPI_COMM_WORLD,&myrank);

    // Matrix M is generated in the root process (process 0)
    if (myrank == root) {
        m = (double**)matrix_create(n, n, sizeof(double));
        for (i = 0; i < n; ++i)
            for (j = 0; j < n; ++j)
                m[i][j] = (double)(n * i + j);
    }

    // Array containing the numbers of rows for each process
    sendcounts = malloc(n * sizeof(int));
    // Array containing the displacement for each data chunk
    displs = malloc(n * sizeof(int));
    // For each process ...
    for (j = 0; j < npes; j++) {
        // Sets each number of rows
        sendcounts[j] = BLOCK_SIZE(j, npes, n);
        // Sets each displacement
        displs[j] = BLOCK_LOW(j, npes, n);
    }
    // Each process gets the number of rows that he is going to get
    rows = sendcounts[myrank];
    // Creates the empty matrixes for the parts of M
    mParts = (double**)matrix_create(rows, n, sizeof(double));
    // Scatters the matrix parts through all the processes
    MPI_Scatterv(m, sendcounts, displs, MPI_DOUBLE, mParts, rows, MPI_DOUBLE, root, MPI_COMM_WORLD);

    // This is where I get the Segmentation Fault
    if (myrank == 1) matrix_print(mParts, rows, n, "mParts");

    MPI_Finalize();
}

I get a Segmentation Fault when I try to read the scattered data, which suggests the scatter never took effect. I have already done this with one-dimensional arrays and it worked, but with two-dimensional arrays things get a bit trickier.

Could you help me find the error?

Thanks.

Best Answer

MPI_Scatterv expects a pointer to the data, and the data must be contiguous in memory. Your program is fine on the second point, but MPI_Scatterv is being handed a pointer to pointers to the data. So change the call to this (note that m is only allocated on the root, so only the root should dereference it; the send buffer argument is ignored on all other ranks anyway):

MPI_Scatterv(myrank == root ? &m[0][0] : NULL, sendcounts, displs, MPI_DOUBLE, &mParts[0][0], sendcounts[myrank], MPI_DOUBLE, root, MPI_COMM_WORLD);
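
Contiguity holds here because matrix_create does a single malloc: the row-pointer table sits at the front of the block and the m*n payload follows immediately after it, so &m[0][0] is the start of one unbroken run of doubles. As a rough sketch of that layout (not code from the post):

p → | p[0] | p[1] | ... | p[m-1] | row 0 | row 1 | ... | row m-1 |
    |<--- row-pointer table ---->|<-- m*n doubles, contiguous, -->|
                                     starting at &m[0][0]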

There are also a couple of things to change for sendcounts and displs: to go 2D, these counts must be multiplied by n. And the receive count in MPI_Scatterv is no longer rows, but sendcounts[myrank].
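
Concretely, for n = 7 and npes = 3 the loop in the code below fills in the following values, in units of doubles and following the 2/2/3 block split the macros produce:

sendcounts = { 14, 14, 21 }   // 2, 2 and 3 rows of 7 doubles each
displs     = {  0, 14, 28 }   // element offsets of rows 0, 2 and 4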

Here is the final code:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define BLOCK_LOW(id,p,n) ((id)*(n)/(p))
#define BLOCK_HIGH(id,p,n) ((id+1)*(n)/(p) - 1)
#define BLOCK_SIZE(id,p,n) ((id+1)*(n)/(p) - (id)*(n)/(p))
#define BLOCK_OWNER(index,p,n) (((p)*((index)+1)-1)/(n))

void **matrix_create(size_t m, size_t n, size_t size) {
    size_t i; 
    void **p= (void **) malloc(m*n*size+ m*sizeof(void *));
    char *c=  (char*) (p+m);
    for(i=0; i<m; ++i)
        p[i]= (void *)(c + i*n*size);
    return p;
}

void matrix_print(double **M, size_t m, size_t n, char *name) {
    size_t i,j;
    printf("%s=[",name);
    for(i=0; i<m; ++i) {
        printf("\n  ");
        for(j=0; j<n; ++j)
            printf("%f  ",M[i][j]);
    }
    printf("\n];\n");
}

int main(int argc, char *argv[]) {

    int npes, myrank, root = 0, n = 7, rows, i, j, *sendcounts, *displs;
    double **m = NULL, **mParts;

    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD,&npes);
    MPI_Comm_rank(MPI_COMM_WORLD,&myrank);

    // Matrix M is generated in the root process (process 0)
    if (myrank == root) {
        m = (double**)matrix_create(n, n, sizeof(double));
        for (i = 0; i < n; ++i)
            for (j = 0; j < n; ++j)
                m[i][j] = (double)(n * i + j);
    }

    // One element count and one displacement per process
    sendcounts = malloc(npes * sizeof(int));
    displs = malloc(npes * sizeof(int));
    // For each process ...
    for (j = 0; j < npes; j++) {
        // Number of elements (rows times n) sent to process j
        sendcounts[j] = BLOCK_SIZE(j, npes, n)*n;
        // Element offset of process j's first row
        displs[j] = BLOCK_LOW(j, npes, n)*n;
    }
    // Each process gets the number of rows that he is going to get
    rows = sendcounts[myrank]/n;
    // Creates the empty matrixes for the parts of M
    mParts = (double**)matrix_create(rows, n, sizeof(double));
    // Scatters the matrix parts to all processes; only the root may
    // dereference m, so the other ranks pass NULL as the (ignored) send buffer
    MPI_Scatterv(myrank == root ? &m[0][0] : NULL, sendcounts, displs, MPI_DOUBLE,
                 &mParts[0][0], sendcounts[myrank], MPI_DOUBLE, root, MPI_COMM_WORLD);

    // With the fixes above, rank 1 now prints its block without the segfault
    if (myrank == 1) matrix_print(mParts, rows, n, "mParts");

    MPI_Finalize();
    return 0;
}
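
If the pointer-array layout feels fragile, a common alternative is to drop the row-pointer table entirely and scatter plain 1D buffers, indexing rows by hand. Below is a minimal sketch along those lines; it follows the same block split as above, but the flat-buffer structure and variable names are my own, not from the original post:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int npes, myrank, root = 0, n = 7, i, j;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &npes);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

    // Per-rank element counts and offsets, same block formula as the macros
    int *sendcounts = malloc(npes * sizeof(int));
    int *displs = malloc(npes * sizeof(int));
    for (j = 0; j < npes; j++) {
        sendcounts[j] = ((j + 1) * n / npes - j * n / npes) * n;
        displs[j] = (j * n / npes) * n;
    }
    int rows = sendcounts[myrank] / n;

    // The full matrix lives only on the root, as one contiguous buffer
    double *m = NULL;
    if (myrank == root) {
        m = malloc(n * n * sizeof(double));
        for (i = 0; i < n * n; i++) m[i] = (double)i;
    }

    // Each rank receives its rows into a flat buffer of sendcounts[myrank] doubles
    double *part = malloc(sendcounts[myrank] * sizeof(double));
    MPI_Scatterv(m, sendcounts, displs, MPI_DOUBLE,
                 part, sendcounts[myrank], MPI_DOUBLE, root, MPI_COMM_WORLD);

    // Element (i, j) of the local block is part[i * n + j]
    if (myrank == 1) {
        for (i = 0; i < rows; i++) {
            for (j = 0; j < n; j++) printf("%6.2f  ", part[i * n + j]);
            printf("\n");
        }
    }

    free(part); free(sendcounts); free(displs);
    if (myrank == root) free(m);
    MPI_Finalize();
    return 0;
}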

If you want to know more about 2D arrays and MPI, take a look here.

Also take a look at the DMDA structures of the PETSc library, here and there.

Source: https://stackoverflow.com/questions/25597263/
