Threads

CUDA has a hierarchical architecture for parallel execution. A kernel is launched across multiple blocks that run in parallel, and each block is further divided into multiple threads. In the last chapter, we saw that the CUDA runtime can carry out parallel operations by launching multiple copies of the same kernel. We also saw that this can be done in two ways: by launching many blocks in parallel with one thread per block, or by launching a single block with many threads in parallel (a sketch of both launch configurations follows this paragraph). So, two questions you might ask are: which method should I use in my code? And, is there any limit on the number of blocks and threads that can be launched in parallel?
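To make the two launch configurations concrete, here is a minimal sketch. The kernel name (gpu_fill), variable names, and the array size N are illustrative and not taken from the book's own listings; the point is only how the execution configuration <<<blocks, threads>>> distributes the same kernel across blocks or across threads.

```cuda
#include <cstdio>

// Each parallel copy of the kernel writes its own global index into d_out.
// blockIdx.x identifies the block, threadIdx.x the thread within that block.
__global__ void gpu_fill(int *d_out)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    d_out[idx] = idx;
}

int main()
{
    const int N = 8;          // number of parallel copies (illustrative)
    int h_out[N];
    int *d_out;
    cudaMalloc(&d_out, N * sizeof(int));

    // Option 1: N blocks in parallel, one thread per block.
    gpu_fill<<<N, 1>>>(d_out);
    cudaDeviceSynchronize();

    // Option 2: a single block with N threads in parallel.
    gpu_fill<<<1, N>>>(d_out);
    cudaDeviceSynchronize();

    cudaMemcpy(h_out, d_out, N * sizeof(int), cudaMemcpyDeviceToHost);
    for (int i = 0; i < N; i++)
        printf("%d ", h_out[i]);
    printf("\n");

    cudaFree(d_out);
    return 0;
}
```

Both launches produce the same result for this tiny example; the questions raised above concern when each configuration is preferable and what hardware limits apply to each.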

The answers to these questions are pivotal. As we will ...
