Appendix I. Setting Up GPUs for DL4J Projects
Training neural networks involves a host of linear algebra calculations. Graphics processing units (GPUs), with their thousands of cores, were designed to excel at exactly this kind of problem. They are therefore often used to speed up training and can ultimately deliver better compute performance per dollar and per watt.
Switching Backends to GPU
DL4J can be used with NVIDIA GPUs, with support currently for CUDA 7.5; CUDA 8.0 will be supported when it becomes available. DL4J is designed to be plug-and-play, which means that switching the compute layer from CPU to GPU is as easy as changing the artifactId of the ND4J backend dependency in the pom.xml file.
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.nd4j</groupId>
      <artifactId>nd4j-cuda-7.5-platform</artifactId>
      <version>${nd4j.version}</version>
    </dependency>
  </dependencies>
</dependencyManagement>
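After changing the dependency, it can be useful to confirm which backend ND4J actually loaded at runtime. The following is a minimal sketch, assuming the CUDA dependency above (or the CPU equivalent, nd4j-native-platform) is on the classpath; the class name BackendCheck is just an illustrative placeholder.

import org.nd4j.linalg.factory.Nd4j;

// Minimal sketch: prints the ND4J backend resolved from the classpath,
// so you can verify that the CUDA backend is in use after editing pom.xml.
public class BackendCheck {
    public static void main(String[] args) {
        System.out.println("ND4J backend: " + Nd4j.getBackend().getClass().getName());
    }
}

If the GPU backend is active, the printed class name should come from the CUDA backend package rather than the native (CPU) one.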
Picking a GPU
In general, a high-end consumer-grade card or a professional Tesla device is recommended: as of this writing, a pair of NVIDIA GeForce GTX 1070 GPUs would be a solid choice with which to begin.
Here are features to consider when buying a GPU:
- Number of multiprocessors (or cores) on board: The more, the better. As simple as that. This determines how many parallel threads the GPU can process at any given moment.
- Memory available on device: This defines how much data can be uploaded ...