In my mind, machine learning is technically a subset of statistics. However, that's not how it might look from the outside. For historical reasons, machine learning has evolved largely independently of statistics, in some cases reinventing the same techniques under different names, and in other cases inventing wholly new ideas with little involvement from statisticians. Classical statistics grew largely out of the needs of governments processing census data and of the agriculture industry. Machine learning evolved later, largely as an outgrowth of computer science, and early computer scientists were in turn drawn from the ranks of physicists and engineers. So the DNA is quite different, and the tools have diverged a lot, but ultimately the two fields are tackling the same problems.
“Machine learning” has become a catchall term covering a lot of different areas, ranging from classification to clustering. As such, I can't really give you a crisp definition of what it means. However, there are several commonalities that pretty much all machine learning algorithms share:
- It's all done using computers, leveraging them to perform calculations that would be intractable by hand.
- It takes data as input. If you are simulating a system based on some idealized model, then you aren't doing machine learning.
- The data points are thought of as being samples from some underlying “real-world” probability distribution.
- The data is tabular (or at least ...