November 2019
When you perform k-fold cross-validation, the data is divided into k subsets (folds). In each iteration, one fold is held out for testing and the remaining k-1 folds are used for training; this is repeated k times, so every fold serves as the test set exactly once. Effectively, the entire dataset contributes to training, rather than permanently setting aside a portion for testing.
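The splitting scheme described above can be sketched in plain Python. This is a minimal illustration, not a production implementation (a library such as scikit-learn provides shuffling and stratified variants); the function name `kfold_splits` is chosen here for illustration.

```python
def kfold_splits(n_samples, k):
    """Yield (train_indices, test_indices) for each of the k folds."""
    indices = list(range(n_samples))
    fold_size = n_samples // k
    for i in range(k):
        start = i * fold_size
        # The last fold absorbs any remainder when n_samples % k != 0.
        end = start + fold_size if i < k - 1 else n_samples
        test = indices[start:end]
        train = indices[:start] + indices[end:]
        yield train, test

# With 10 samples and k=5, each iteration trains on 8 samples and tests on 2.
for train_idx, test_idx in kfold_splits(10, 5):
    print(len(train_idx), len(test_idx))  # 8 2
```

Note that every sample appears in exactly one test fold across the k iterations, which is what makes the evaluation use all of the data.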
This helps guard against underfitting caused by training on too small a portion of the data. Note, however, that training and evaluation are performed k times, which multiplies the computational cost accordingly.
When you perform batch training, the entire dataset is divided into batches of the specified size. If your dataset has 1,000 records and the batch size is 8, then you have 125 training batches.
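The batch arithmetic above can be sketched in a few lines of Python; the helper name `make_batches` is illustrative, not part of any particular framework.

```python
def make_batches(records, batch_size):
    """Split a list of records into consecutive batches of batch_size."""
    return [records[i:i + batch_size]
            for i in range(0, len(records), batch_size)]

# 1,000 records with a batch size of 8 yields 125 batches, as in the text.
batches = make_batches(list(range(1000)), 8)
print(len(batches))  # 125
```

If the dataset size is not an exact multiple of the batch size, the final batch is simply smaller; frameworks typically let you either keep or drop that remainder batch.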
You also need to take the training-to-testing ratio into account. According to that ratio, every ...