Training neural networks involves a host of linear algebra calculations. Graphics processing units (GPUs), with their thousands of cores, were designed to excel at such problems. They are therefore often used to speed up training and to achieve better compute performance per dollar and per watt.
DL4J can be used with NVIDIA GPUs, with support currently for CUDA 7.5; CUDA 8.0 will be supported when available. DL4J is written to be plug-and-play: switching the compute layer from CPU to GPU is as easy as switching out the artifactId line of the nd4j dependency in the pom.xml file.
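As a sketch of what that switch looks like, the dependency below shows the CPU backend and, commented out, a CUDA alternative. The exact artifact names and version are illustrative assumptions; check the ND4J documentation for the identifiers matching your release.

```xml
<!-- CPU backend: swap this artifactId for the CUDA one to run on GPU.
     Artifact names and version shown here are examples, not guaranteed. -->
<dependency>
    <groupId>org.nd4j</groupId>
    <artifactId>nd4j-native-platform</artifactId>
    <!-- For GPUs, replace the line above with something like:
         <artifactId>nd4j-cuda-7.5-platform</artifactId> -->
    <version>${nd4j.version}</version>
</dependency>
```

No application code changes are needed; ND4J selects the compute backend from the dependency on the classpath.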
In general, a high-end consumer-grade model or a professional Tesla device is recommended: as of this writing, a pair of NVIDIA GeForce GTX 1070 GPUs would be a solid choice with which to begin.
Here are features to consider when buying a GPU:
Number of cores: the more, the better. As simple as that. This relates to how many parallel threads the GPU can process at any given moment.
Amount of memory: this defines how much data can be uploaded ...