Machine learning: A quick and simple definition

Get a basic overview of machine learning and then go deeper with recommended resources.

By James Furbush
May 3, 2018

The following overview covers some of the basics of machine learning (ML): what it is, how it works, and what you need to keep in mind before taking advantage of it.

This information is curated from the expert ML material available on O’Reilly’s online learning platform.


What is machine learning?

“Machine learning is the science (and art) of programming computers so they can learn from data,” writes Aurélien Géron in Hands-On Machine Learning with Scikit-Learn and TensorFlow.

ML is a subset of the larger field of artificial intelligence (AI) that “focuses on teaching computers how to learn without the need to be programmed for specific tasks,” note Sujit Pal and Antonio Gulli in Deep Learning with Keras. “In fact, the key idea behind ML is that it is possible to create algorithms that learn from and make predictions on data.”
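
In practice, “learn from and make predictions on data” boils down to a fit/predict loop. Here is a minimal scikit-learn sketch (the toy data and model choice are ours, for illustration only, not from the books quoted above):

```python
# A minimal "learn from data, then predict" loop with scikit-learn.
# The toy data and model choice are illustrative, not from the quoted books.
from sklearn.linear_model import LogisticRegression

# Each row is one example with two features; each label marks its class.
X_train = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]]
y_train = [0, 1, 0, 1]

model = LogisticRegression()
model.fit(X_train, y_train)            # learn from data
print(model.predict([[0.85, 0.95]]))   # predict on unseen data -> [1]
```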

Examples of ML include the spam filter that flags messages in your email, the recommendation engine Netflix uses to suggest content you might like, and the self-driving cars being developed by Google and other companies.

Machine learning categories

ML algorithms generally fall into five broad categories based on the amount and type of human supervision they receive during training, according to authors Aurélien Géron (Hands-On Machine Learning with Scikit-Learn and TensorFlow) and François Chollet (Deep Learning with Python). These categories are:

  • Supervised learning consists of mapping input data to known labels, which humans have provided. Classifying radiology images for early detection of cancer is a good example.
  • Unsupervised learning is where the input data is unlabeled and the system tries to learn structure from that data automatically, without any human guidance. Anomaly detection, such as flagging unusual credit card transactions to prevent fraud, is an example of unsupervised learning.
  • Semi-supervised learning is often a combination of the first two approaches. That is, the system trains on partially labeled input data: usually a lot of unlabeled data and a little bit of labeled data. Facial recognition in photo services from Facebook and Google is a real-world application of this approach.
  • Reinforcement learning is mostly a research area, but industry use cases are starting to emerge. In reinforcement learning, a computer system observes its environment, takes actions, and learns which actions maximize its reward. DeepMind’s AlphaGo program, which learned to master the game of Go, is a recent example of this technique.
  • Transfer learning involves reusing a model that was trained while solving one problem and applying it to a different but related problem. In a talk, Lukas Biewald describes an example of transfer learning in which a deep learning model trained on millions of images of cats was then “fine-tuned” to detect melanoma in medical images (a code sketch of the general recipe follows this list).
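
To make the transfer-learning bullet concrete, here is a hedged sketch using tf.keras: a convolutional network pretrained on ImageNet is frozen and reused as a feature extractor for a new classification head. This is the common recipe, not Biewald’s exact setup, and the training data here is hypothetical.

```python
# A sketch of the transfer-learning recipe with tf.keras (assumptions:
# MobileNetV2 as the pretrained base; your own labeled images would
# replace the commented-out fit() call).
import tensorflow as tf

# Reuse a convolutional network whose weights were learned on ImageNet.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the transferred weights

# Attach a new head for the related problem (here, binary classification).
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# model.fit(train_images, train_labels, epochs=5)  # fine-tune on the new task
```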

Things to keep in mind before using machine learning

As with all technologies, ML has obstacles and issues that need to be addressed.

Machine learning requires careful preparation of lots of data

Data destined for ML applications must be cleaned and prepared before it can be useful.

Obviously, if your training data is full of errors, outliers, and noise (e.g., due to poor-quality measurements), it will make it harder for the system to detect the underlying patterns, so your system is less likely to perform well. It is often well worth the effort to spend time cleaning up your training data. (From Hands-On Machine Learning with Scikit-Learn and TensorFlow.)
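
A typical cleanup pass handles missing values and implausible outliers before training. Here is a small pandas sketch (the DataFrame, column names, and thresholds are invented for illustration):

```python
# A hypothetical cleanup pass: drop an implausible outlier and impute
# missing values before training. All values here are made up.
import pandas as pd

df = pd.DataFrame({
    "age":    [34, 29, None, 41, 500],             # one missing value, one outlier
    "income": [52_000, 48_000, 61_000, None, 55_000],
})

df = df[df["age"].between(0, 120) | df["age"].isna()].copy()  # drop impossible ages
df["age"] = df["age"].fillna(df["age"].median())              # impute missing ages
df["income"] = df["income"].fillna(df["income"].median())     # impute missing incomes
print(df)
```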

Also critical is validating and testing an ML model on data sets separate from its training data; otherwise, there is no way to tell whether the model has begun to “overfit.”

Say you are visiting a foreign country and the taxi driver rips you off. You might be tempted to say that all taxi drivers in that country are thieves. Overgeneralizing is something that we humans do all too often, and unfortunately machines can fall into the same trap if we are not careful. In machine learning this is called overfitting: it means that the model performs well on the training data, but it does not generalize well. (From Hands-On Machine Learning with Scikit-Learn and TensorFlow.)
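
A held-out test set makes overfitting visible as a gap between training and test accuracy, as in this short scikit-learn sketch (synthetic data; the unconstrained decision tree is our example of a model prone to memorization):

```python
# Overfitting made visible: an unconstrained decision tree nearly memorizes
# the training set but scores noticeably lower on held-out test data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("train accuracy:", tree.score(X_train, y_train))  # ~1.0 (memorized)
print("test accuracy: ", tree.score(X_test, y_test))    # lower: it overfits
```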

The more complex the machine learning model, the harder it can be to explain

The rush to reap the benefits of ML can outpace our understanding of the algorithms providing those benefits.

Many machine learning algorithms have been labeled “black box” models because of their inscrutable inner-workings. What makes these models accurate is what makes their predictions difficult to understand: they are very complex. This is a fundamental trade-off. These algorithms are typically more accurate for predicting nonlinear, faint, or rare phenomena. Unfortunately, more accuracy almost always comes at the expense of interpretability, and interpretability is crucial for business adoption, model documentation, regulatory oversight, and human acceptance and trust. (From An Introduction to Machine Learning Interpretability.)
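
One way to feel this trade-off is to compare an interpretable model with a more flexible one on the same task. In the scikit-learn sketch below (synthetic data, models chosen for illustration), the logistic regression exposes one readable weight per feature, while the random forest, though often more accurate, has no comparably simple summary of how it decides:

```python
# The accuracy/interpretability trade-off in miniature (synthetic data,
# models chosen for illustration only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

linear = LogisticRegression(max_iter=1000).fit(X_train, y_train)
forest = RandomForestClassifier(random_state=0).fit(X_train, y_train)

print("linear test accuracy:", linear.score(X_test, y_test))
print("forest test accuracy:", forest.score(X_test, y_test))
print("linear weights:", linear.coef_[0])  # one directly inspectable weight per feature
# The forest's "explanation" is 100 trees of nested if/else rules.
```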

Learn more about machine learning

Ready to take the next step with ML? Check out these recommended resources from O’Reilly’s editors.

Hands-On Machine Learning with Scikit-Learn and TensorFlow — Using concrete examples, minimal theory, and two production-ready Python frameworks, author Aurélien Géron helps you gain an intuitive understanding of the concepts and tools for building intelligent systems.

Sprouted Clams and Stanky Bean: When Machine Learning Makes Mistakes — Janelle Shane shows how machine learning mistakes can be embarrassing or even dangerous.

The Frontiers of Machine Learning and AI — Zoubin Ghahramani discusses recent advances in artificial intelligence, highlighting research in deep learning, probabilistic programming, Bayesian optimization, and AI for data science.

Deep Learning with Python — Written by Keras creator and Google AI researcher François Chollet, this book builds your understanding through intuitive explanations and practical examples.

An Introduction to Machine Learning Interpretability — Navdeep Gill and Patrick Hall examine a set of machine learning techniques and algorithms that can help data scientists improve the accuracy of their predictive models while maintaining interpretability.
