AI and Machine Learning

Doing machine learning the right way

MIT Professor Aleksander Madry strives to build machine-learning models that are more reliable, understandable, and robust.
By Rob Matheson March 15, 2020
MIT computer scientist Aleksander Madry. Courtesy: Ian MacLellan, MIT

The work of MIT computer scientist Aleksander Madry is fueled by one core mission: “doing machine learning the right way.”

Madry’s research centers largely on making machine learning — a type of artificial intelligence — more accurate, efficient, and robust against errors. In his classroom and beyond, he also worries about questions of ethical computing, as we approach an age where artificial intelligence will have great impact on many sectors of society.

“I want society to truly embrace machine learning,” said Madry, a recently tenured professor in the Department of Electrical Engineering and Computer Science. “To do that, we need to figure out how to train models that people can use safely, reliably, and in a way that they understand.”

Interestingly, his work with machine learning dates back only a couple of years, to shortly after he joined MIT in 2015. In that time, his research group has published several critical papers demonstrating that certain models can be easily tricked to produce inaccurate results — and showing how to make them more robust.

In the end, he aims to make each model’s decisions more interpretable by humans, so researchers can peer inside to see where things went awry. At the same time, he wants to enable nonexperts to deploy the improved models in the real world to, say, help diagnose disease or control driverless cars.

“It’s not just about trying to crack open the machine-learning black box. I want to open it up, see how it works, and pack it back up, so people can use it without needing to understand what’s going on inside,” he said.

For the love of algorithms

Madry was born in Wroclaw, Poland, where he attended the University of Wroclaw as an undergraduate in the mid-2000s. While he harbored interest in computer science and physics, “I actually never thought I’d become a scientist,” he said.

An avid video gamer, Madry initially enrolled in the computer science program with intentions of programming his own games. But in joining friends in a few classes in theoretical computer science and, in particular, theory of algorithms, he fell in love with the material. Algorithm theory aims to find efficient optimization procedures for solving computational problems, which requires tackling difficult mathematical questions. “I realized I enjoy thinking deeply about something and trying to figure it out,” said Madry, who wound up double-majoring in physics and computer science.

Getting adversarial

Shortly after joining MIT, Madry found himself swept up in a novel science: machine learning. In particular, he focused on understanding the re-emerging paradigm of deep learning. That’s an artificial-intelligence application that uses multiple computing layers to extract high-level features from raw input — such as using pixel-level data to classify images. MIT’s campus was, at the time, buzzing with new innovations in the domain.

But that raised the question: Was machine learning all hype or solid science? “It seemed to work, but no one actually understood how and why,” Madry said.

Answering that question set his group on a long journey, running experiment after experiment on deep-learning models to understand the underlying principles. A major milestone in this journey was an influential paper they published in 2018, developing a methodology for making machine-learning models more resistant to “adversarial examples.” Adversarial examples are slight perturbations to input data that are imperceptible to humans — such as changing the color of one pixel in an image — but cause a model to make inaccurate predictions. They illuminate a major shortcoming of existing machine-learning tools.
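The idea behind adversarial examples can be sketched in a few lines. The following is a toy illustration only, not the method from the 2018 paper: a hypothetical linear classifier is attacked with an FGSM-style perturbation, where each input coordinate is nudged by a tiny amount in the direction that most reduces the model's score.

```python
import numpy as np

# Toy sketch (not the paper's method): a linear classifier f(x) = sign(w . x)
# attacked with an FGSM-style perturbation. For a linear model, the gradient
# of the score w . x with respect to x is just w, so the worst-case small
# perturbation is x_adv = x - eps * sign(w).

rng = np.random.default_rng(0)
w = rng.normal(size=100)             # hypothetical trained weights
x = w / np.linalg.norm(w) * 0.2      # an input the model classifies as +1

def predict(v):
    return 1 if w @ v > 0 else -1

eps = 0.05                           # max change per coordinate
x_adv = x - eps * np.sign(w)         # push every coordinate against the score

print(predict(x))                    # prediction on the clean input
print(predict(x_adv))                # prediction on the perturbed input
print(np.max(np.abs(x_adv - x)))     # no coordinate moved by more than eps
```

Because the perturbation aligns with the weight vector's signs, it shifts the score by eps times the L1 norm of the weights, which can flip the prediction even though each individual coordinate barely changes. Real attacks on deep networks work analogously, using gradients computed through the whole model.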

Continuing this line of work, Madry’s group showed that the existence of these mysterious adversarial examples may contribute to how machine-learning models make decisions. In particular, models designed to differentiate images of, say, cats and dogs, make decisions based on features that do not align with how humans make classifications. Simply changing these features can make the model consistently misclassify cats as dogs, without changing anything in the image that’s really meaningful to humans.

Results indicated some models — which may be used to, say, identify abnormalities in medical images or help autonomous cars identify objects in the road — aren’t exactly up to snuff. “People often think these models are superhuman, but they didn’t actually solve the classification problem we intend them to solve,” Madry said. “And their complete vulnerability to adversarial examples was a manifestation of that fact. That was an eye-opening finding.”

That’s why Madry seeks to make machine-learning models more interpretable to humans. New models he’s developed show how much individual pixels in the images a system is trained on can influence its predictions. Researchers can then tweak the models to focus on pixel clusters more closely correlated with identifiable features, such as an animal’s snout, ears, and tail. In the end, that will help make the models more humanlike, or “superhumanlike,” in their decisions. To further this work, Madry and his colleagues recently founded the MIT Center for Deployable Machine Learning, a collaborative research effort within the MIT Quest for Intelligence, which is working toward building machine-learning tools ready for real-world deployment.
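One common way to measure how much a pixel influences a prediction, sketched here as an illustration rather than the group's actual models, is a sensitivity map: perturb each pixel slightly and record how much the model's output score changes. For a toy linear "model" the answer can be checked analytically.

```python
import numpy as np

# Illustrative sketch only: estimate per-pixel influence on a model's score
# by finite differences -- bump one pixel, measure the change in output.

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 8))          # hypothetical weights of a tiny linear "model"

def score(img):
    return float(np.sum(W * img))    # stand-in for a trained network's class score

img = rng.normal(size=(8, 8))
base = score(img)
delta = 1e-4                         # size of the per-pixel bump

saliency = np.zeros_like(img)
for i in range(8):
    for j in range(8):
        bumped = img.copy()
        bumped[i, j] += delta
        # approximate |d(score)/d(pixel)| at this pixel
        saliency[i, j] = abs(score(bumped) - base) / delta

# For a linear model the true per-pixel sensitivity is just |W|,
# so the finite-difference estimate should match it closely.
print(np.allclose(saliency, np.abs(W), atol=1e-5))
```

A map like this highlights which regions of an image drive the decision; if the bright spots sit on background texture instead of the snout, ears, and tail, that is a sign the model is keying on features humans would not use.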

“We want machine learning not just as a toy, but as something you can use in, say, an autonomous car, or health care. Right now, we don’t understand enough to have sufficient confidence in it for those critical applications,” Madry said.

Shaping education and policy

Madry views artificial intelligence and decision making (“AI+D” is one of the three new academic units in the Department of Electrical Engineering and Computer Science) as “the interface of computing that’s going to have the biggest impact on society.”

In that regard, he makes sure to expose his students to the human aspect of computing. In part, that means considering consequences of what they’re building. Often, he said, students will be overly ambitious in creating new technologies, but they haven’t thought through potential ramifications on individuals and society. “Building something cool isn’t a good enough reason to build something,” Madry said. “It’s about thinking about not if we can build something, but if we should build something.”

Madry has also been engaging in conversations about laws and policies to help regulate machine learning. A point of these discussions, he says, is to better understand the costs and benefits of unleashing machine-learning technologies on society.

“Sometimes we overestimate the power of machine learning, thinking it will be our salvation. Sometimes we underestimate the cost it may have on society,” Madry said. “To do machine learning right, there’s still a lot left to figure out.”

Massachusetts Institute of Technology (MIT)

www.mit.edu

– Edited by CFE Media.


Rob Matheson
Author Bio: Writer, MIT News Office