Shared memory

Shared memory is located on-chip, so it is much faster than global memory; its latency is roughly 100 times lower than that of uncached global memory. All threads in a block can access the same shared memory, which is useful in many applications where threads need to share intermediate results with one another. However, unsynchronized access can produce incorrect results: if one thread reads a location before another thread has finished writing to it, it will read stale or undefined data. Access to shared memory therefore has to be coordinated. This is done with the __syncthreads() directive, a barrier that guarantees all write operations to shared memory by threads in the block are complete before any thread moves past it. A short kernel that follows this write, synchronize, then read pattern is sketched after this paragraph.
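The following is a minimal sketch of that pattern, not code from the book: each thread writes one value into a shared array, the block synchronizes with __syncthreads(), and only then does each thread read the values written by the other threads to compute a running average. The kernel name shared_average and the block size N are illustrative choices.

#include <stdio.h>

#define N 10

// Each thread writes one element into shared memory, then reads the
// elements written by the other threads in the same block to compute
// a running average. The __syncthreads() barrier guarantees that every
// write has completed before any thread starts reading.
__global__ void shared_average(float *d_out)
{
    __shared__ float sh_arr[N];      // one slot per thread in the block
    int tid = threadIdx.x;

    sh_arr[tid] = (float)tid;        // write phase
    __syncthreads();                 // all writes are now visible to all threads

    // read phase: average of elements 0..tid, some written by other threads
    float sum = 0.0f;
    for (int i = 0; i <= tid; i++)
        sum += sh_arr[i];
    d_out[tid] = sum / (tid + 1);
}

int main(void)
{
    float h_out[N];
    float *d_out;

    cudaMalloc(&d_out, N * sizeof(float));
    shared_average<<<1, N>>>(d_out);             // one block of N threads
    cudaMemcpy(h_out, d_out, N * sizeof(float), cudaMemcpyDeviceToHost);

    for (int i = 0; i < N; i++)
        printf("Average of first %d elements: %f\n", i + 1, h_out[i]);

    cudaFree(d_out);
    return 0;
}

Without the barrier, a thread with a high index could read slots that slower threads have not yet filled; with it, the read phase only begins once the whole block has finished writing.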
