Shared memory is located on-chip, so it is much faster than global memory: its latency is roughly 100 times lower than that of uncached global memory. All threads in the same block can access the same shared memory, which is very useful in applications where threads need to share intermediate results with one another. However, unsynchronized access can cause race conditions and incorrect results: if one thread reads a location in shared memory before another thread has written to it, the value it reads is wrong. Memory access therefore has to be coordinated properly. This is done with the __syncthreads() barrier function, which ensures that all shared-memory writes issued by the block's threads are completed before any thread moves past the barrier.
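The pattern above can be sketched in a minimal kernel. This example is illustrative rather than taken from the book: the kernel name, array size, and the neighbour-read access pattern are assumptions chosen to make the need for the barrier obvious. Each thread writes its own slot of a shared array, and only after __syncthreads() does it read a slot written by a *different* thread.

```cuda
#include <cstdio>

// Illustrative kernel (not from the book): each thread stores i*i in its
// own shared-memory slot, then reads the slot written by the next thread.
// Without __syncthreads(), a thread could read its neighbour's slot before
// the neighbour had written it -- a race condition.
__global__ void neighbour_kernel(float *d_out)
{
    __shared__ float sh_arr[128];          // one slot per thread in the block
    int i = threadIdx.x;

    sh_arr[i] = (float)(i * i);            // each thread writes only its own slot

    __syncthreads();                       // barrier: all writes above are now visible

    // Safe to read another thread's slot only after the barrier.
    d_out[i] = sh_arr[(i + 1) % blockDim.x];
}

int main()
{
    const int N = 128;                     // one block of 128 threads (assumed size)
    float h_out[N];
    float *d_out;

    cudaMalloc(&d_out, N * sizeof(float));
    neighbour_kernel<<<1, N>>>(d_out);
    cudaMemcpy(h_out, d_out, N * sizeof(float), cudaMemcpyDeviceToHost);

    // h_out[0] holds the value written by thread 1 into shared memory.
    printf("h_out[0] = %f\n", h_out[0]);

    cudaFree(d_out);
    return 0;
}
```

If the __syncthreads() call were removed, the kernel might still appear to work on some hardware, because threads within a warp execute in lockstep; the bug would only surface across warps, which is exactly why such races are hard to find without the barrier.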