Dealing with a small training set – data augmentation

We have been fortunate so far to possess a large enough training dataset, 75% of the 39,209 samples. This is one of the reasons why we are able to achieve 99.3% to 99.4% classification accuracy. In reality, however, obtaining a large training set is not easy in many supervised learning tasks, where manual labeling is necessary or the cost of data collection and annotation is high. In our traffic sign classification project, can we still achieve the same performance if we are given far fewer training samples to begin with? Let's give it a shot.

We simulate a small training set with only 10% of the 39,209 samples and a testing set with the remaining 90%:

> train_perc_1 = 0.1
> train_index_1 ...
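The snippet above is truncated, but it suggests a random index split. A minimal sketch of such a 10%/90% split in R might look as follows, assuming the full dataset is held in a feature matrix `data.x` and a label vector `data.y` (hypothetical names, not from the original code):

```
# Simulate a small training set: 10% for training, 90% for testing.
# data.x and data.y are assumed placeholders for the full
# 39,209-sample feature matrix and label vector.
n <- 39209
train_perc_1 <- 0.1
set.seed(42)  # make the random split reproducible
train_index_1 <- sample(n, size = floor(train_perc_1 * n))

train_x_1 <- data.x[train_index_1, ]   # ~3,920 training samples
train_y_1 <- data.y[train_index_1]
test_x_1  <- data.x[-train_index_1, ]  # remaining ~35,289 samples
test_y_1  <- data.y[-train_index_1]
```

With roughly a tenth of the data left for training, the same model would be expected to overfit more easily, which is precisely the situation data augmentation is meant to alleviate.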
