Answer to question 1: This means the training is not being distributed; in effect, your program is running on just one GPU. To solve this, add the following line at the beginning of your main() method:
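The original one-line fix is not reproduced above. As a hedged sketch only: with frameworks like Horovod, the usual cause of silently single-GPU training is forgetting to initialize the distributed backend before any model or data code runs. The choice of Horovod here, and the helper name `init_distributed`, are assumptions for illustration, not necessarily the answer's actual line.

```python
def init_distributed():
    """Try to initialize a distributed training backend.

    Sketch under assumptions: Horovod is one possible backend; the
    original answer does not confirm which framework is in use.
    Returns a short status string so callers can verify what happened.
    """
    try:
        import horovod.tensorflow as hvd  # hypothetical backend choice
        hvd.init()  # must run before any GPU work for training to be distributed
        return f"horovod rank {hvd.rank()} of {hvd.size()}"
    except ImportError:
        # Horovod not installed; fall back to single-process mode.
        return "single-process (horovod not installed)"


def main():
    # Initialize the distributed backend first, then build the model, etc.
    print(init_distributed())


if __name__ == "__main__":
    main()
```

The key design point is ordering: the initialization call must be the first thing in `main()`, before any device placement happens, otherwise each process silently defaults to one GPU.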
Answer to question 2: This is really an AWS EC2 question, but here is a short explanation. If you inspect the default boot device, you will see that it provides only 7.7 GB of space, about 85% of which is allocated to the udev device, as shown here:
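The original terminal listing is not reproduced above. As a quick cross-check, the following stdlib-only sketch reports the root filesystem's size and usage on whatever machine it runs on (the 7.7 GB and 85% figures are specific to that instance, so your numbers will differ):

```python
import shutil


def root_usage_gb():
    """Return (total, used, free) for the root filesystem, in GB."""
    usage = shutil.disk_usage("/")
    gb = 1024 ** 3
    return usage.total / gb, usage.used / gb, usage.free / gb


total, used, free = root_usage_gb()
print(f"root: {total:.1f} GB total, {used:.1f} GB used, {free:.1f} GB free")
```

On an EC2 instance you would typically run `df -h` or `lsblk` instead to see the per-device breakdown, including the udev entry mentioned above.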
Now, to resolve this issue, ...