CUDA – utilizing host processes
After you've successfully installed CUDA, you'll need to set a few environment variables so that the newly installed binaries are on your execution path. This approach works well if you don't have access to Docker on your host, or if you'd rather run GPU-intensive operations directly on your bare machine. If you'd like a more reproducible build, you can use the Docker configuration described in the following Docker for GPU-enabled programming section.
We'll need to update our PATH to include the CUDA binary paths we just installed. We can do this by executing the following:

export PATH=$PATH:/usr/local/cuda-10.2/bin:/usr/local/cuda-10.2/NsightCompute-2019.1
We also need to update our LD_LIBRARY_PATH ...
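Taken together, the two updates might look like the following sketch. The CUDA_HOME shorthand and the lib64 library directory are assumptions based on a default CUDA 10.2 install, not the book's exact commands; adjust the paths to match your installed version.

```shell
# Assumption: a default CUDA 10.2 install prefix; change if yours differs.
CUDA_HOME=/usr/local/cuda-10.2

# Put the CUDA compiler and profiling tools on the execution path.
export PATH="$PATH:$CUDA_HOME/bin:$CUDA_HOME/NsightCompute-2019.1"

# Assumption: shared libraries live under lib64 on a 64-bit Linux install.
# The :- guard keeps this safe when LD_LIBRARY_PATH was previously unset.
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH:-}:$CUDA_HOME/lib64"
```

Appending these lines to your shell profile (for example, ~/.bashrc) makes the settings persist across sessions rather than applying only to the current shell.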