Training routine with multiple GPUs

In our experiment, we will use our custom machine instead of Amazon EC2. However, you can achieve the same result on any server with GPUs. In this section, we will use two Titan X GPUs with a batch size of 32 on each GPU. That way, we can process 64 videos in one training step, instead of 32 videos in a single-GPU configuration.

Now, let's create a new Python file named train_multi.py in the scripts package. In this file, add the following code to define some parameters:

    import tensorflow as tf
    import os
    import sys
    from datetime import datetime

    from tensorflow.python.ops import data_flow_ops

    import nets
    import models
    from utils import lines_from_file
    from datasets import sample_videos, input_pipeline

    # Dataset ...
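The listing above is truncated, so the parameter values do not appear here. As a rough sketch of how a two-GPU routine like this is usually set up, the snippet below defines hypothetical NUM_GPUS and BATCH_SIZE_PER_GPU constants matching the configuration described earlier, and shows a typical tower loop that splits the batch across GPUs and averages the gradients. The names average_gradients, build_towers, and tower_loss_fn are illustrative assumptions, not the book's own identifiers.

    import tensorflow as tf

    # Hypothetical constants matching the setup described above:
    # two GPUs, 32 videos per GPU, so 64 videos per training step.
    NUM_GPUS = 2
    BATCH_SIZE_PER_GPU = 32
    TOTAL_BATCH_SIZE = NUM_GPUS * BATCH_SIZE_PER_GPU

    def average_gradients(tower_grads):
        """Average the gradients computed by each GPU tower."""
        average_grads = []
        for grad_and_vars in zip(*tower_grads):
            grads = [tf.expand_dims(g, 0) for g, _ in grad_and_vars]
            grad = tf.reduce_mean(tf.concat(grads, axis=0), axis=0)
            # All towers share variables, so the variable from the
            # first tower is the one to update.
            average_grads.append((grad, grad_and_vars[0][1]))
        return average_grads

    def build_towers(frames, labels, optimizer, tower_loss_fn):
        """Build one tower per GPU; each tower consumes its own slice
        of the batch, and the averaged gradients drive a single update."""
        tower_grads = []
        for i in range(NUM_GPUS):
            with tf.device('/gpu:%d' % i):
                with tf.name_scope('tower_%d' % i):
                    start = i * BATCH_SIZE_PER_GPU
                    end = start + BATCH_SIZE_PER_GPU
                    loss = tower_loss_fn(frames[start:end], labels[start:end])
                    # Reuse variables so all towers share the same weights.
                    tf.get_variable_scope().reuse_variables()
                    tower_grads.append(optimizer.compute_gradients(loss))
        return optimizer.apply_gradients(average_gradients(tower_grads))

This is only a sketch under the stated assumptions; the book's train_multi.py wires the towers to its own nets and models modules and to the input_pipeline imported above.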
