CUDA in Gorgonia

Gorgonia has implemented support for NVIDIA's CUDA as part of its cu package. It abstracts away almost all of the complexity, so all we have to do is specify the --tags=cuda flag at build time and ensure that the operations we are calling are, in fact, among those Gorgonia implements for the GPU.
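To make this concrete, the following is a minimal sketch of what a CUDA-enabled build might look like. It assumes that the gorgonia package's UseCudaFor virtual machine option and a GPU kernel for tanh are available in your installed release (both are assumptions about the library version, not guarantees); in a build without the cuda tag, the option is expected to be a no-op and the graph simply runs on the CPU.

// Build with CUDA support (assumes the CUDA toolkit and a CUDA-capable GPU):
//   go build --tags=cuda .
package main

import (
	"fmt"
	"log"

	"gorgonia.org/gorgonia"
	"gorgonia.org/tensor"
)

func main() {
	g := gorgonia.NewGraph()

	// Two randomly initialized 128x128 input matrices.
	x := gorgonia.NewMatrix(g, tensor.Float32, gorgonia.WithName("x"),
		gorgonia.WithShape(128, 128), gorgonia.WithInit(gorgonia.GlorotU(1)))
	y := gorgonia.NewMatrix(g, tensor.Float32, gorgonia.WithName("y"),
		gorgonia.WithShape(128, 128), gorgonia.WithInit(gorgonia.GlorotU(1)))

	// Elementwise operations such as tanh are typical candidates for GPU kernels.
	z := gorgonia.Must(gorgonia.Tanh(gorgonia.Must(gorgonia.Add(x, y))))

	// UseCudaFor asks the tape machine to run the named op on the GPU
	// (assumption: present in CUDA-enabled builds of this Gorgonia version).
	m := gorgonia.NewTapeMachine(g, gorgonia.UseCudaFor("tanh"))
	defer m.Close()

	if err := m.RunAll(); err != nil {
		log.Fatal(err)
	}
	fmt.Println(z.Value().Shape())
}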

Not every possible operation is implemented, of course. The emphasis is on operations that benefit from parallel execution and are therefore amenable to GPU acceleration. As we will cover in Chapter 5, Next Word Prediction with Recurrent Neural Networks, many of the operations involved in Convolutional Neural Networks (CNNs) also meet this criterion.

So, what's available? The following list outlines the options:

  • 1D or 2D convolutions (used in CNNs)
  • 2D max pooling (also used in CNNs)
