Published April 2019 · Intermediate to advanced · 544 pages · 17h 29m · English
If you want to train or run your NLP pipeline quickly, a server with a GPU will often speed things up. GPUs are especially speedy for training a deep neural network when you use a framework such as Keras (TensorFlow or Theano), PyTorch, or Caffe to build your model. These computational graph frameworks can take advantage of the massively parallel multiplication and addition operations that GPUs are built for.
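As a minimal sketch of what that looks like in practice (assuming PyTorch installed with CUDA support; the layer sizes and batch shape here are just illustrative), placing a model and its input tensors on the GPU device is enough to have the framework run the underlying matrix multiplications on the GPU:

```python
import torch

# Use the GPU if one is available, otherwise fall back to the CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# A toy "model": one linear layer, i.e., a big matrix multiply plus a bias add
model = torch.nn.Linear(in_features=300, out_features=100).to(device)

# A hypothetical batch of 512 word vectors, each 300-dimensional
batch = torch.randn(512, 300, device=device)

# The forward pass runs on whichever device the model and batch live on
output = model(batch)
print(output.shape, output.device)
```

The same idea applies to the other frameworks: you describe the computation once, and the framework dispatches the heavy tensor math to the GPU when one is present.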
A cloud service is a great option if you don’t want to invest the time and money to build your own server. But it’s possible to build a GPU server that’s twice as fast as a comparable Amazon Web Services (AWS) instance for about what a single month on that instance would cost. Plus you can store ...