4.1 Defining performance metrics
4.1.1 Is accuracy the best metric for evaluating a model?
4.1.2 Confusion matrix
4.1.3 Precision and recall
4.1.4 F-score
4.2 Designing a baseline model
4.3 Getting your data ready for training
4.3.1 Splitting your data for train/validation/test
4.3.2 Data preprocessing
4.4 Evaluating the model and interpreting its performance
4.4.1 Diagnosing overfitting and underfitting
4.4.2 Plotting the learning curves
4.4.3 Exercise: Building, training, and evaluating a network
4.5 Improving the network and tuning hyperparameters
4.5.1 Collecting more data vs. tuning hyperparameters
4.5.2 Parameters vs. hyperparameters
4.5.3 Neural network hyperparameters
4.5.4 Network architecture
4.6 Learning and optimization
4.6.1 Learning rate and decay schedule
4.6.2 A systematic approach to find the optimal learning rate
4.6.3 Learning rate decay and adaptive learning
4.6.4 Mini-batch size
4.7 Optimization algorithms
4.7.1 Gradient descent with momentum
4.7.2 Adam
4.7.3 Number of epochs and early stopping criteria
4.7.4 Early stopping
4.8 Regularization techniques to avoid overfitting
4.8.1 L2 regularization
4.8.2 Dropout layers
4.8.3 Data augmentation
4.9 Batch normalization
4.9.1 The covariate shift problem
4.9.2 Covariate shift in neural networks
4.9.3 How does batch normalization work?
4.9.4 Batch normalization implementation in Keras
4.9.5 Batch normalization recap
4.10 Project: Achieve high accuracy on image classification
Summary