As we mentioned previously, DeepLearning4j relies on ND4J for numerical computations. ND4J itself is an interface with multiple possible backends; so far we have used the one based on OpenBLAS, but there are others. We also mentioned that ND4J can utilize a Graphics Processing Unit (GPU), which is much faster than a CPU for the linear algebra operations typical of neural networks, such as matrix multiplication. To use it, we need to get the CUDA backend for ND4J.
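Switching to the CUDA backend amounts to changing the ND4J dependency in the project's pom.xml. The following is a minimal sketch, assuming CUDA 8.0 is installed and that an nd4j.version property is already defined in the project; the artifact name (here nd4j-cuda-8.0-platform) must match the CUDA toolkit version on your machine, so adjust it accordingly. It replaces the nd4j-native-platform dependency we have used so far:

    <!-- GPU backend for ND4J; replaces nd4j-native-platform -->
    <dependency>
        <groupId>org.nd4j</groupId>
        <artifactId>nd4j-cuda-8.0-platform</artifactId>
        <version>${nd4j.version}</version>
    </dependency>

The -platform artifacts bundle the native binaries for all supported operating systems, so apart from installing the CUDA toolkit itself, no manual native library setup should be required.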
If you have previously executed all the code on the ...