Using atomic operations

CUDA provides the atomicAdd operation to avoid race conditions when many threads access the same memory location in parallel. It is a blocking operation: when multiple threads try to modify the same memory location, only one thread is allowed to update it at a time, and the other threads must wait until that thread has finished and written its result back to memory. The kernel function that calculates a histogram using atomicAdd is shown as follows:

import pycuda.autoinit
import pycuda.driver as drv
import numpy
import matplotlib.pyplot as plt
from pycuda.compiler import SourceModule

mod = SourceModule("""
__global__ void atomic_hist(int *d_b, int *d_a, int SIZE)
{
  // Global thread index: one thread per input element
  int tid = threadIdx.x + blockDim.x * blockIdx.x;
  int item = d_a[tid];
  if (tid < SIZE)
  {
    // atomicAdd serializes concurrent increments of the same bin
    atomicAdd(&(d_b[item]), 1);
  }
}
""")
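
The listing above only defines the kernel. A minimal host-side sketch for launching it and plotting the histogram is shown below; the array size, bin count, and launch configuration are illustrative assumptions rather than values from the book's listing, and the code reuses the SourceModule object mod defined above:

# Hypothetical driver code for the atomic_hist kernel defined above
SIZE = 1024          # number of input elements (assumed value)
NUM_BIN = 256        # number of histogram bins (assumed value)

# Random input values in the range [0, NUM_BIN)
h_a = numpy.random.randint(0, NUM_BIN, SIZE).astype(numpy.int32)
h_b = numpy.zeros(NUM_BIN, dtype=numpy.int32)

atomic_hist = mod.get_function("atomic_hist")

# One thread per input element; SIZE is chosen as a multiple of the block size
n_threads = 256
n_blocks = SIZE // n_threads
atomic_hist(drv.Out(h_b), drv.In(h_a), numpy.int32(SIZE),
            block=(n_threads, 1, 1), grid=(n_blocks, 1))

# Plot the resulting histogram
plt.bar(numpy.arange(NUM_BIN), h_b)
plt.xlabel('Bin value')
plt.ylabel('Count')
plt.show()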
