c++ - CUDA parallel scan algorithm shared memory race condition

Tags: c++ cuda gpu-shared-memory prefix-sum

I am reading the book Programming Massively Parallel Processors (3rd edition), which presents an implementation of the Kogge-Stone parallel scan algorithm. The kernel is meant to be run by a single block (this is just a preliminary simplification); the implementation is below.

// X is the input array, Y is the output array, InputSize is the size of the input array
__global__ void Kogge_Stone_scan_kernel(float* X, float* Y, int InputSize) {
    __shared__ float XY[SECTION_SIZE]; // SECTION_SIZE is basically blockDim.x
    
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < InputSize)
        XY[threadIdx.x] = X[i];

    for (unsigned int stride = 1; stride < blockDim.x; stride *= 2) {
        __syncthreads();
        if (threadIdx.x >= stride)
            XY[threadIdx.x] += XY[threadIdx.x - stride]; // Race condition here?
    }

    Y[i] = XY[threadIdx.x];
}

Regardless of how the algorithm works, I am a bit confused by the line XY[threadIdx.x] += XY[threadIdx.x - stride]. Suppose stride = 1; then the thread with threadIdx.x = 6 will perform XY[6] += XY[5]. At the same time, however, the thread with threadIdx.x = 5 will perform XY[5] += XY[4]. The question is: is there any guarantee that thread 6 will read the original value of XY[5] rather than XY[5] + XY[4]? Note that this is not limited to a single warp, where lockstep execution might prevent the race condition.

Thanks

Best answer

is there any guarantee that the thread 6 will read the original value of XY[5] instead of XY[5] + XY[4]

No, CUDA does not guarantee thread execution order (lockstep or otherwise), and nothing in your code imposes such an ordering.

Incidentally, cuda-memcheck (or its successor, compute-sanitizer) is very good at identifying shared memory race conditions:

$ cat t2.cu
const int SECTION_SIZE = 256;
__global__ void Kogge_Stone_scan_kernel(float* X, float* Y, int InputSize) {
    __shared__ float XY[SECTION_SIZE]; // SECTION_SIZE is basically blockDim.x

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < InputSize)
        XY[threadIdx.x] = X[i];

    for (unsigned int stride = 1; stride < blockDim.x; stride *= 2) {
        __syncthreads();
        if (threadIdx.x >= stride)
            XY[threadIdx.x] += XY[threadIdx.x - stride]; // Race condition here?
    }

    Y[i] = XY[threadIdx.x];
}

int main(){
  const int nblk = 1;
  const int sz = nblk*SECTION_SIZE;
  const int bsz = sz*sizeof(float);
  float *X, *Y;
  cudaMallocManaged(&X, bsz);
  cudaMallocManaged(&Y, bsz);
  Kogge_Stone_scan_kernel<<<nblk, SECTION_SIZE>>>(X, Y, sz);
  cudaDeviceSynchronize();
}
$ nvcc -o t2 t2.cu -lineinfo
$ cuda-memcheck ./t2
========= CUDA-MEMCHECK
========= ERROR SUMMARY: 0 errors
$ cuda-memcheck --tool racecheck ./t2
========= CUDA-MEMCHECK
========= ERROR: Race reported between Read access at 0x000001a0 in /home/user2/misc/junk/t2.cu:12:Kogge_Stone_scan_kernel(float*, float*, int)
=========     and Write access at 0x000001c0 in /home/user2/misc/junk/t2.cu:12:Kogge_Stone_scan_kernel(float*, float*, int) [6152 hazards]
=========
========= RACECHECK SUMMARY: 1 hazard displayed (1 error, 0 warnings)
$
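As a side note, newer CUDA toolkits have dropped cuda-memcheck in favor of compute-sanitizer, which provides the same race-detection tool. Assuming the toolkit's compute-sanitizer is on your PATH, the equivalent invocation should be:

$ compute-sanitizer --tool racecheck ./t2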

As you may have guessed, you can fix this by splitting the read and the write in the offending line and putting a barrier in between:

$ cat t2.cu
const int SECTION_SIZE = 256;
__global__ void Kogge_Stone_scan_kernel(float* X, float* Y, int InputSize) {
    __shared__ float XY[SECTION_SIZE]; // SECTION_SIZE is basically blockDim.x

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < InputSize)
        XY[threadIdx.x] = X[i];

    for (unsigned int stride = 1; stride < blockDim.x; stride *= 2) {
        __syncthreads();
        float val;
        if (threadIdx.x >= stride)
            val = XY[threadIdx.x - stride];
        __syncthreads();
        if (threadIdx.x >= stride)
            XY[threadIdx.x] += val;
    }

    Y[i] = XY[threadIdx.x];
}

int main(){
  const int nblk = 1;
  const int sz = nblk*SECTION_SIZE;
  const int bsz = sz*sizeof(float);
  float *X, *Y;
  cudaMallocManaged(&X, bsz);
  cudaMallocManaged(&Y, bsz);
  Kogge_Stone_scan_kernel<<<nblk, SECTION_SIZE>>>(X, Y, sz);
  cudaDeviceSynchronize();
}
$ nvcc -o t2 t2.cu -lineinfo
$ cuda-memcheck --tool racecheck ./t2
========= CUDA-MEMCHECK
========= RACECHECK SUMMARY: 0 hazards displayed (0 errors, 0 warnings)
$
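An alternative way to remove the hazard (not shown in the answer above, but a common variant of this kernel) is double buffering: ping-pong between two shared arrays so that each iteration reads from one buffer and writes to the other, needing only one __syncthreads() per iteration. The sketch below is illustrative; the kernel name and buffer names are my own, and SECTION_SIZE is assumed to equal blockDim.x as in the original code.

const int SECTION_SIZE = 256;
__global__ void Kogge_Stone_scan_double_buffer(float* X, float* Y, int InputSize) {
    __shared__ float bufA[SECTION_SIZE];
    __shared__ float bufB[SECTION_SIZE];
    float* in  = bufA;   // buffer that is read in the current iteration
    float* out = bufB;   // buffer that is written in the current iteration

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    in[threadIdx.x] = (i < InputSize) ? X[i] : 0.0f;

    for (unsigned int stride = 1; stride < blockDim.x; stride *= 2) {
        __syncthreads();                        // wait until 'in' is fully written
        if (threadIdx.x >= stride)
            out[threadIdx.x] = in[threadIdx.x] + in[threadIdx.x - stride];
        else
            out[threadIdx.x] = in[threadIdx.x]; // copy unchanged elements forward
        float* tmp = in; in = out; out = tmp;   // swap roles for the next iteration
    }

    if (i < InputSize)
        Y[i] = in[threadIdx.x];                 // after the last swap, 'in' holds the result
}

Because reads and writes in each iteration go to different arrays, no thread can observe a partially updated element, so the single barrier at the top of the loop is sufficient.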

The original question, "CUDA parallel scan algorithm shared memory race condition", can be found on Stack Overflow: https://stackoverflow.com/questions/70355891/
