
Java Deep Learning Projects by Md. Rezaul Karim


Distributed training on AWS deep learning AMI 9.0

So far, we have seen how to perform training and inference on a single GPU. To make training even faster in a parallel and distributed fashion, however, a machine or server with multiple GPUs is a viable option. An easy way to get one is to use Amazon EC2 GPU compute instances.
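In DL4J, multi-GPU data-parallel training on a single machine can be sketched with the `ParallelWrapper` class, which runs one training worker per GPU and periodically averages their parameters. The following is a minimal sketch, not a complete program: the model, the iterator, and the builder values (worker count, buffer size, averaging frequency) are illustrative assumptions that you would tune to your instance.

```java
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.parallelism.ParallelWrapper;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;

public class MultiGpuTrainingSketch {

    // Data-parallel training across the GPUs of one machine.
    // 'model' and 'trainData' are assumed to be built elsewhere.
    public static void trainOnAllGpus(MultiLayerNetwork model,
                                      DataSetIterator trainData) {
        ParallelWrapper wrapper = new ParallelWrapper.Builder<>(model)
                // mini-batches pre-loaded per worker (illustrative value)
                .prefetchBuffer(24)
                // one worker per GPU, e.g. 4 on a 4-GPU instance (assumption)
                .workers(4)
                // average parameters across workers every 3 iterations
                .averagingFrequency(3)
                // log the score after each averaging step
                .reportScoreAfterAveraging(true)
                .build();

        // Each worker trains on its own slice of the data;
        // gradients are combined via parameter averaging.
        wrapper.fit(trainData);
    }
}
```

The key design point is that `ParallelWrapper` keeps the single-GPU training code unchanged: you build the same `MultiLayerNetwork` as before and only wrap the `fit()` call.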

For example, P2 instances are well suited for distributed deep learning. Combined with the AWS Deep Learning AMI, they come with the latest binaries of popular deep learning frameworks (MXNet, TensorFlow, Caffe, Caffe2, PyTorch, Keras, Chainer, Theano, and CNTK) pre-installed in separate virtual environments.

An even bigger advantage is that they come fully configured with NVIDIA CUDA and cuDNN. Interested readers can take a look at https://aws.amazon.com/ec2/instance-types/p2/.
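Launching such an instance from the command line might look like the following sketch, assuming the AWS CLI is installed and configured with your credentials. The AMI ID and key-pair name are placeholders, not real values; look up the Deep Learning AMI ID for your region before running it.

```shell
# Launch one P2 instance from the Deep Learning AMI.
# ami-0123456789abcdef0 and my-key-pair are placeholders (assumptions).
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type p2.8xlarge \
    --key-name my-key-pair \
    --count 1

# Once the instance is running, SSH in and confirm the GPUs are visible:
#   ssh -i my-key-pair.pem ubuntu@<instance-public-dns>
#   nvidia-smi
```

Billing for GPU instances is comparatively expensive, so remember to stop or terminate the instance when training finishes.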
