CUDA atomicAdd() with long long int

Tags: cuda, int, add, atomic, long-integer

Any time I try to use atomicAdd with anything other than (int*, int), I get this error:

error: no instance of overloaded function "atomicAdd" matches the argument list

But I need to use a data type larger than int. Is there a workaround here?

Device query output:
/usr/local/cuda/samples/1_Utilities/deviceQuery/deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GeForce GTX 680"
  CUDA Driver Version / Runtime Version          5.0 / 5.0
  CUDA Capability Major/Minor version number:    3.0
  Total amount of global memory:                 4095 MBytes (4294246400 bytes)
  ( 8) Multiprocessors x (192) CUDA Cores/MP:    1536 CUDA Cores
  GPU Clock rate:                                1084 MHz (1.08 GHz)
  Memory Clock rate:                             3004 Mhz
  Memory Bus Width:                              256-bit
  L2 Cache Size:                                 524288 bytes
  Max Texture Dimension Size (x,y,z)             1D=(65536), 2D=(65536,65536), 3D=(4096,4096,4096)
  Max Layered Texture Size (dim) x layers        1D=(16384) x 2048, 2D=(16384,16384) x 2048
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Maximum sizes of each dimension of a block:    1024 x 1024 x 64
  Maximum sizes of each dimension of a grid:     2147483647 x 65535 x 65535
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Bus ID / PCI location ID:           1 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 5.0, CUDA Runtime Version = 5.0, NumDevs = 1, Device0 = GeForce GTX 680

Best Answer

My guess is incorrect compilation flags. The atomicAdd() overloads beyond (int*, int) require compute capability 1.2 or higher, so for anything other than int you should compile for sm_12 or above (by default, nvcc in CUDA 5.0 targets sm_10, where only the int overload exists).

As Robert Crovella said, unsigned long long int variables are supported, but long long int is not.

The code below is adapted from: Beginner CUDA - Simple var increment not working

#include <iostream>

using namespace std;

__global__ void inc(unsigned long long int *foo) {
  atomicAdd(foo, 1);
}

int main() {
  unsigned long long int count = 0, *cuda_count;
  cudaMalloc((void**)&cuda_count, sizeof(unsigned long long int));
  cudaMemcpy(cuda_count, &count, sizeof(unsigned long long int), cudaMemcpyHostToDevice);
  cout << "count: " << count << '\n';
  inc <<< 100, 25 >>> (cuda_count);
  cudaMemcpy(&count, cuda_count, sizeof(unsigned long long int), cudaMemcpyDeviceToHost);
  cudaFree(cuda_count);
  cout << "count: " << count << '\n';
  return 0;
}

Compiled on Linux with:

nvcc -gencode arch=compute_12,code=sm_12 -o add add.cu

Result:

count: 0
count: 2500
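Since the deviceQuery output above reports compute capability 3.0 for the GTX 680, compiling for the card's native architecture should also work (an untested alternative compile line, assuming the same CUDA 5.0 toolchain):

```shell
# Target the GTX 680's native sm_30 architecture instead of sm_12
nvcc -gencode arch=compute_30,code=sm_30 -o add add.cu
```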

A similar question about CUDA atomicAdd() with long long int can be found on Stack Overflow: https://stackoverflow.com/questions/17302752/
