We gained experience working with large datasets in Chapter 8, Scaling Up Prediction to Terabyte Click Logs. Here are a few tips that can help you model large-scale data more efficiently.
First, start with a small subset, for instance, one that fits on your local machine. This speeds up early experimentation. Obviously, you don't want to train on the entire dataset just to find out whether an SVM or a random forest works better. Instead, randomly sample data points and quickly run a few candidate models on the selected subset.
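A minimal sketch of this workflow in scikit-learn, using a synthetically generated dataset as a stand-in for the real large one (the subset size of 5,000 and the two candidate models are illustrative choices, not recommendations from the text):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier

# Hypothetical stand-in for a large dataset
X, y = make_classification(n_samples=100_000, n_features=20, random_state=42)

# Randomly sample a small subset (here 5,000 points) for quick experimentation
rng = np.random.default_rng(42)
idx = rng.choice(len(X), size=5_000, replace=False)
X_small, y_small = X[idx], y[idx]

X_train, X_test, y_train, y_test = train_test_split(
    X_small, y_small, test_size=0.2, random_state=42)

# Quickly compare candidate models on the subset before committing
# to a full-scale training run
for model in (LinearSVC(), RandomForestClassifier(n_estimators=50)):
    model.fit(X_train, y_train)
    print(type(model).__name__, model.score(X_test, y_test))
```

Once the better-performing model family is identified on the subset, you can invest in training and tuning it on the full dataset.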
The second tip is to choose scalable algorithms, such as logistic regression, linear SVM, and SGD-based optimization. Their training cost grows roughly linearly with the number of samples, and SGD-based learners can be trained incrementally on mini-batches, so the full dataset never has to fit in memory at once.