
What Is Machine Learning, and How Does It Work? Here’s a Short Video Primer

Deep learning, neural networks, imitation games—what does any of this have to do with teaching computers to “learn”?

Machine learning is the process by which computer programs grow from experience.

This isn’t science fiction, where robots advance until they take over the world.

When we talk about machine learning, we’re mostly referring to extremely clever algorithms.


In 1950 mathematician Alan Turing argued that it’s a waste of time to ask whether machines can think. Instead, he proposed a game: a player has two written conversations, one with another human and one with a machine. Based on the exchanges, the human has to decide which is which.

This “imitation game” would serve as a test for artificial intelligence. But how would we program machines to play it?

Turing suggested that we teach them, just like children. We could instruct them to follow a series of rules, while enabling them to make minor tweaks based on experience.

For computers, the learning process just looks a little different.

First, we need to feed them lots of data: anything from pictures of everyday objects to details of banking transactions.

Then we have to tell the computers what to do with all that information. 

Programmers do this by writing lists of step-by-step instructions, or algorithms. Those algorithms help computers identify patterns in vast troves of data.

Based on the patterns they find, computers develop a kind of “model” of how that system works.

For instance, some programmers are using machine learning to develop medical software. First, they might feed a program hundreds of MRI scans that have already been categorized. Then they'll have the computer build a model to categorize MRIs it hasn't seen before. That way, the software could spot problems in new patient scans or flag certain records for review.
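To make that workflow concrete, here is a minimal sketch of supervised learning in Python, assuming the scikit-learn library. The "scans" are just random numbers standing in for real, labeled medical images, and the pattern the model learns is made up for illustration; this is not the software described above.

```python
# A minimal sketch of supervised learning: train on labeled examples,
# then predict categories for examples the model has never seen.
# (Random numbers stand in for real scan data; the labels are hypothetical.)
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 200 "scans", each flattened into 64 numbers, each with a known category (0 or 1)
scans = rng.normal(size=(200, 64))
labels = (scans[:, :10].sum(axis=1) > 0).astype(int)  # a made-up pattern to learn

# Hold some examples back to test the model on data it hasn't seen before
X_train, X_test, y_train, y_test = train_test_split(scans, labels, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)  # "learn" a model from the labeled examples

print("accuracy on unseen scans:", model.score(X_test, y_test))
```

The key idea is the split between training data, which the algorithm uses to build its model, and test data, which checks whether that model holds up on examples it has never encountered.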

Complex models like this often require many hidden computational steps. To organize them, programmers arrange the processing decisions into layers; the more layers a model stacks, the "deeper" it is. That's where "deep learning" comes from.

These layers mimic the structure of the human brain, where neurons fire signals to other neurons. That’s why we also call them “neural networks.” 
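A rough sketch of what those layers look like in code, assuming a toy feed-forward network written with NumPy (the sizes and data are arbitrary, and the weights here are random rather than learned):

```python
# A toy "neural network": each layer takes the previous layer's output,
# multiplies it by a set of weights, and passes it through a simple
# activation function, loosely analogous to neurons passing signals along.
import numpy as np

rng = np.random.default_rng(1)

def layer(inputs, weights):
    # one layer of "neurons": weighted sum followed by a nonlinearity (ReLU)
    return np.maximum(0, inputs @ weights)

x = rng.normal(size=(1, 8))      # one input example with 8 features
w1 = rng.normal(size=(8, 16))    # weights for a hidden layer of 16 neurons
w2 = rng.normal(size=(16, 4))    # weights for a second, "deeper" hidden layer
w3 = rng.normal(size=(4, 1))     # weights for the output layer

hidden1 = layer(x, w1)           # signals flow through the first layer
hidden2 = layer(hidden1, w2)     # ...then the second layer
output = hidden2 @ w3            # ...and finally produce an output

print(output)
```

In a real network, training would repeatedly adjust those weights based on the model's errors rather than leaving them random; this sketch only shows how data flows through the layers.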

Neural networks are the foundation for services we use every day, like digital voice assistants and online translation tools. Over time, neural networks improve in their ability to listen and respond to the information we give them, which makes those services more and more accurate.

Machine learning isn't just something locked away in an academic lab, though. Lots of machine learning algorithms are open-source and widely available. And they're already being used for many things that influence our lives, in large and small ways.

People have used these open-source tools to do everything from training their pets to creating experimental art to monitoring wildfires.

They've also done some morally questionable things, like creating deep fakes, videos manipulated with deep learning. And because the data and algorithms that machines use are assembled and written by fallible human beings, they can contain biases. Algorithms can carry the biases of their makers into their models, exacerbating problems like racism and sexism.

But there is no stopping this technology. And people are finding more and more complicated applications for it, some of which will automate things we are accustomed to doing for ourselves, like using neural networks to help power driverless cars. Some of these applications will require sophisticated algorithmic tools, given the complexity of the task.

And while that may be down the road, the systems still have a lot of learning to do.

Jeff DelViscio is currently Chief Multimedia Editor/Executive Producer at Scientific American. He is former director of multimedia at STAT, where he oversaw all visual, audio and interactive journalism. Before that, he spent over eight years at the New York Times, where he worked on five different desks across the paper. He holds dual master's degrees from Columbia in journalism and in earth and environmental sciences. He has worked aboard oceanographic research vessels and tracked money and politics in science from Washington, D.C. He was a Knight Science Journalism Fellow at MIT in 2018. His work has won numerous awards, including two News and Documentary Emmy Awards.


Andrea Gawrylewski is chief newsletter editor at Scientific American. She writes the daily Today in Science newsletter and oversees all other newsletters at the magazine. In addition, she manages all special collector's editions and in the past was the editor for Scientific American Mind, Scientific American Space & Physics and Scientific American Health & Medicine. Gawrylewski got her start in journalism at the Scientist magazine, where she was a features writer and editor for "hot" research papers in the life sciences. She spent more than six years in educational publishing, editing books for higher education in biology, environmental science and nutrition. She holds a master's degree in earth science and a master's degree in journalism, both from Columbia University, home of the Pulitzer Prize.
