CUDA kernels not launched before cudaDeviceSynchronize

Tags: cuda

CUDA profiler output:

I am having some trouble with concurrent CUDA. Look at the attached image. The kernel is launched at the marked point, at 0.395 seconds. Then there is some green CpuWork. Finally, there is a call to cudaDeviceSynchronize. The kernel that was launched before CpuWork does not start until that synchronize call. Ideally, it should run in parallel with the CPU work.

void KdTreeGpu::traceRaysOnGpuAsync(int firstRayIndex, int numRays, int rank, int buffer)
{
    int per_block = 128;
    int num_blocks = numRays/per_block + (numRays%per_block==0?0:1);

    Ray* rays = &this->deviceRayPtr[firstRayIndex];
    int* outputHitPanelIds = &this->deviceHitPanelIdPtr[firstRayIndex];

    kdTreeTraversal<<<num_blocks, per_block, 0>>>(sceneBoundingBox, rays, deviceNodesPtr, deviceTrianglesListPtr, 
                                                firstRayIndex, numRays, rank, rootNodeIndex, 
                                                deviceTHitPtr, outputHitPanelIds, deviceReflectionPtr);

    CUDA_VALIDATE(cudaMemcpyAsync(resultHitDistances[buffer], deviceTHitPtr, numRays*sizeof(double), cudaMemcpyDeviceToHost));
    CUDA_VALIDATE(cudaMemcpyAsync(resultHitPanelIds[buffer], outputHitPanelIds, numRays*sizeof(int), cudaMemcpyDeviceToHost));
    CUDA_VALIDATE(cudaMemcpyAsync(resultReflections[buffer], deviceReflectionPtr, numRays*sizeof(Vector3), cudaMemcpyDeviceToHost));
}

The memcopies are asynchronous. The result buffers are allocated like this:

unsigned int flag = cudaHostAllocPortable;

CUDA_VALIDATE(cudaHostAlloc(&resultHitPanelIds[0], MAX_RAYS_PER_ITERATION*sizeof(int), flag));
CUDA_VALIDATE(cudaHostAlloc(&resultHitPanelIds[1], MAX_RAYS_PER_ITERATION*sizeof(int), flag));

Hoping there is a solution for this. I have tried many things, including not running in the default stream. When I added cudaHostAlloc I realized that the async methods return control to the CPU. But that does not help when the kernel does not launch until the later deviceSynchronize call.
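For illustration, a non-default-stream variant of the launch and the first copy would look roughly like the sketch below; the rayStream handle and the surrounding create/synchronize calls are assumptions for this sketch and are not part of the original code.

cudaStream_t rayStream;                       // hypothetical stream handle, not in the original code
CUDA_VALIDATE(cudaStreamCreate(&rayStream));

// Issue the kernel and the copy into the same non-default stream; with pinned
// host buffers (cudaHostAlloc) the copy can overlap with independent CPU work.
kdTreeTraversal<<<num_blocks, per_block, 0, rayStream>>>(sceneBoundingBox, rays, deviceNodesPtr, deviceTrianglesListPtr,
                                            firstRayIndex, numRays, rank, rootNodeIndex,
                                            deviceTHitPtr, outputHitPanelIds, deviceReflectionPtr);
CUDA_VALIDATE(cudaMemcpyAsync(resultHitDistances[buffer], deviceTHitPtr, numRays*sizeof(double), cudaMemcpyDeviceToHost, rayStream));

// Wait on just this stream, instead of the whole device, when the results are needed.
CUDA_VALIDATE(cudaStreamSynchronize(rayStream));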

resultHitDistances[2] contains two allocated memory areas, so that while buffer 0 is being read by the CPU, the GPU should be putting its results into buffer 1.

Thanks!

Edit: This is the code that calls traceRaysAsync.

int numIterations = ceil(float(this->numPrimaryRays) / MAX_RAYS_PER_ITERATION);
int numRaysPrevious = min(MAX_RAYS_PER_ITERATION, this->numPrimaryRays);
nvtxRangePushA("traceRaysOnGpuAsync First");
traceRaysOnGpuAsync(0, numRaysPrevious, rank, 0);
nvtxRangePop();

for(int iteration = 0; iteration < numIterations; iteration++)
{

    int rayFrom = (iteration+1)*MAX_RAYS_PER_ITERATION;
    int rayTo = min((iteration+2)*MAX_RAYS_PER_ITERATION, this->numPrimaryRays) - 1;
    int numRaysIteration = rayTo-rayFrom+1;

    // Wait for results to finish and get them

    waitForGpu();
    // Trace the next iteration asynchronously. This will have data prepared for next iteration

    if(numRaysIteration > 0)
    {
        int nextBuffer = (iteration+1) % 2;
        nvtxRangePushA("traceRaysOnGpuAsync Interior");
        traceRaysOnGpuAsync(rayFrom, numRaysIteration, rank, nextBuffer);
        nvtxRangePop();
    }
    nvtxRangePushA("CpuWork");

    // Store results for current iteration

    int rayOffset = iteration*MAX_RAYS_PER_ITERATION;
    int buffer = iteration % 2;

    for(int i = 0; i < numRaysPrevious; i++)
    {
        if(this->activeRays[rayOffset+i] && resultHitPanelIds[buffer][i] >= 0)
        {
            this->activeRays[rayOffset+i] = false;
            const TrianglePanelPair & t = this->getTriangle(resultHitPanelIds[buffer][i]);
            double hitT = resultHitDistances[buffer][i];

            Vector3 reflectedDirection = resultReflections[buffer][i];

            Result res = Result(rays[rayOffset+i], hitT, t.panel);
            results[rank].push_back(res);
            t.panel->incrementIntensity(1.0);

            if (t.panel->getParent().absorbtion < 1)
            {
                numberOfRaysGenerated++;

                Ray reflected (res.endPoint() + 0.00001*reflectedDirection, reflectedDirection);

                this->newRays[rayOffset+i] = reflected;
                this->activeRays[rayOffset+i] = true;
                numNewRays++;

            }
        }
    }

    numRaysPrevious = numRaysIteration;

    nvtxRangePop();

}

Best answer

This is expected behavior on Windows with the WDDM driver model, where the driver tries to mitigate kernel launch overhead by batching kernel launches. Try inserting cudaStreamQuery(0) directly after the kernel invocation to trigger an early launch of the kernel before the batch is full.
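A minimal sketch of how that would fit into traceRaysOnGpuAsync from the question follows; only the cudaStreamQuery(0) line is new, and its return value is intentionally not passed through CUDA_VALIDATE because cudaErrorNotReady is an expected result while the kernel is still in flight.

kdTreeTraversal<<<num_blocks, per_block, 0>>>(sceneBoundingBox, rays, deviceNodesPtr, deviceTrianglesListPtr,
                                            firstRayIndex, numRays, rank, rootNodeIndex,
                                            deviceTHitPtr, outputHitPanelIds, deviceReflectionPtr);

// Querying the (default) stream forces the WDDM driver to flush its command batch,
// so the kernel is submitted to the GPU now rather than at cudaDeviceSynchronize.
cudaStreamQuery(0);

CUDA_VALIDATE(cudaMemcpyAsync(resultHitDistances[buffer], deviceTHitPtr, numRays*sizeof(double), cudaMemcpyDeviceToHost));
CUDA_VALIDATE(cudaMemcpyAsync(resultHitPanelIds[buffer], outputHitPanelIds, numRays*sizeof(int), cudaMemcpyDeviceToHost));
CUDA_VALIDATE(cudaMemcpyAsync(resultReflections[buffer], deviceReflectionPtr, numRays*sizeof(Vector3), cudaMemcpyDeviceToHost));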

Regarding "CUDA kernels not launched before cudaDeviceSynchronize", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/13568805/
