Chapter 7. Dense Neural Networks

[I]f you’re trying to predict the movements of a stock on the stock market given its recent price history, you’re unlikely to succeed, because price history doesn’t contain much predictive information.

François Chollet (2017)

This chapter covers important aspects of dense neural networks. Previous chapters have already made use of this type of neural network. In particular, the MLPClassifier and MLPRegressor models from scikit-learn and the Sequential model from Keras, used for classification and estimation, are dense neural networks (DNNs). This chapter focuses exclusively on Keras since it offers more freedom and flexibility in modeling DNNs.1
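As a point of reference, the following is a minimal sketch, not the book's exact model, of a dense neural network built with the Keras Sequential API; the feature count, layer sizes, and the randomly generated data are assumptions for illustration only:

```python
# Minimal sketch of a dense neural network (DNN) with Keras Sequential.
# The data below is random and purely illustrative.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

np.random.seed(100)
X = np.random.standard_normal((1000, 5))   # hypothetical feature matrix (5 features)
y = np.random.randint(0, 2, 1000)          # hypothetical binary labels

model = keras.Sequential([
    layers.Dense(32, activation='relu', input_shape=(5,)),  # first dense (hidden) layer
    layers.Dense(32, activation='relu'),                    # second dense (hidden) layer
    layers.Dense(1, activation='sigmoid')                   # output layer for binary classification
])
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit(X, y, epochs=10, batch_size=32, verbose=0)
```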

“The Data” introduces the foreign exchange (FX) data set that the other sections in this chapter use. “Baseline Prediction” generates a baseline in-sample prediction on the new data set. Normalization of training and test data is introduced in “Normalization”. “Dropout” and “Regularization” discuss dropout and regularization as popular methods for avoiding overfitting. Bagging, another method for avoiding overfitting that was already used in Chapter 6, is revisited in “Bagging”. Finally, “Optimizers” compares the performance of different optimizers that can be used with Keras DNN models.
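To preview how these techniques fit together, the following is a minimal sketch, under assumed hyperparameters (layer sizes, dropout rate, l2 factor, learning rate), of how dropout, regularization, and a particular optimizer can be combined in a Keras model; it is not the book's implementation:

```python
# Minimal sketch combining dropout, l2 regularization, and an explicit
# optimizer choice in a Keras Sequential model; all hyperparameters are
# illustrative assumptions.
from tensorflow import keras
from tensorflow.keras import layers, regularizers

model = keras.Sequential([
    layers.Dense(64, activation='relu', input_shape=(5,),
                 kernel_regularizer=regularizers.l2(0.001)),  # penalizes large weights
    layers.Dropout(0.3, seed=100),   # randomly drops 30% of units during training
    layers.Dense(64, activation='relu',
                 kernel_regularizer=regularizers.l2(0.001)),
    layers.Dropout(0.3, seed=100),
    layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=0.001),
              loss='binary_crossentropy',
              metrics=['accuracy'])
```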

Although the introductory quote for the chapter might give little reason for hope, the main goal for this chapter—as well as for Part III as a whole—is to discover statistical inefficiencies in financial markets (time series) ...
