Using shared memory

Shared memory resides on-chip on a GPU device, so it is much faster than global memory; its latency is roughly 100 times lower than that of uncached global memory. All threads in the same block can access the same shared memory, which is very useful in applications where threads need to share intermediate results with one another. However, unsynchronized access can produce incorrect results: if one thread reads a location before another thread has finished writing to it, it reads stale data. Access to shared memory therefore has to be managed properly. This is done with the __syncthreads() directive, which acts as a barrier: every thread in the block waits at it until all threads have arrived, ensuring that all preceding writes to shared memory are complete and visible before any thread continues.
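The following is a minimal sketch of this pattern (the kernel name and block size are illustrative, not from the text): each thread stages one element of the input into shared memory, the block synchronizes with __syncthreads(), and only then does each thread read a slot written by a different thread. Removing the barrier would let a thread read its neighbor's slot before the neighbor has written it.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

#define BLOCK_SIZE 256  // illustrative block size

// Each thread writes its own element into shared memory, then, after
// the barrier, reads the slot written by the *next* thread in the block.
__global__ void neighborSum(const float *in, float *out, int n)
{
    __shared__ float tile[BLOCK_SIZE];        // one slot per thread
    int tid = threadIdx.x;
    int gid = blockIdx.x * blockDim.x + tid;

    if (gid < n)
        tile[tid] = in[gid];                  // each thread writes its slot

    __syncthreads();                          // wait for all writes to finish

    // Safe to read another thread's slot only after the barrier.
    if (gid < n)
        out[gid] = tile[tid] + tile[(tid + 1) % blockDim.x];
}

int main()
{
    const int n = BLOCK_SIZE;
    float h_in[n], h_out[n];
    for (int i = 0; i < n; i++) h_in[i] = (float)i;

    float *d_in, *d_out;
    cudaMalloc(&d_in, n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMemcpy(d_in, h_in, n * sizeof(float), cudaMemcpyHostToDevice);

    neighborSum<<<1, BLOCK_SIZE>>>(d_in, d_out, n);
    cudaMemcpy(h_out, d_out, n * sizeof(float), cudaMemcpyDeviceToHost);

    printf("out[0] = %f\n", h_out[0]);        // 0 + 1 = 1
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```

Note that __syncthreads() synchronizes only the threads within one block; it cannot be used to coordinate threads across different blocks.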
