April 2017
So, let's summarize: with five different variants, we were able to improve our performance from 92.36% to 97.93%. First, we defined a simple one-layer network in Keras. Then, we improved the performance by adding a couple of hidden layers. After that, we improved the performance on the test set by adding random dropout to our network and by experimenting with different optimizers. The current results are summarized in the following table:
| Model | Training | Validation | Test |
|---|---|---|---|
| Simple | 92.36% | 92.37% | 92.22% |
| Two hidden (128) | 94.50% | 94.63% | 94.41% |
| Dropout (30%) | 98.10% | 97.73% | 97.70% (200 epochs) |
| RMSprop | 97.97% | 97.59% | 97.84% (20 epochs) |
| Adam | 98.28% | 98.03% | 97.93% (20 epochs) |
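The best variant in the table can be sketched as a single Keras model: two hidden layers of 128 units, 30% dropout after each, and the Adam optimizer. The layer sizes and dropout rate come from the table; the activation functions and loss are assumptions based on a standard MNIST classifier setup.

```python
# Sketch of the final variant: two hidden Dense layers of 128 units,
# 30% dropout, and the Adam optimizer, for 784-dimensional MNIST inputs.
# Activations ('relu', 'softmax') and the loss are assumed, not given in the text.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout

model = Sequential([
    Dense(128, input_shape=(784,), activation='relu'),  # first hidden layer
    Dropout(0.3),                                       # 30% dropout
    Dense(128, activation='relu'),                      # second hidden layer
    Dropout(0.3),
    Dense(10, activation='softmax'),                    # one output per digit class
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
```

Training this for 20 epochs with `model.fit(X_train, Y_train, validation_split=0.2, epochs=20)` corresponds to the last row of the table; swapping `optimizer='adam'` for `'rmsprop'` gives the RMSprop variant.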