Research aims to give robots a second impression

Cornell University researchers, along with colleagues at MIT, are studying how humans form and update impressions of robots, with the goal of creating a computational model that allows robots to adjust their nonverbal behavior accordingly.

By Melanie Lefkowitz, September 16, 2019

Underestimate robots at your peril. The robot thought of as a lowly cleaner may also be an expert firefighter. A robot that clumsily fumbles an object may still have life-saving skills.

“If people are going to work together with robots, they’re going to form judgments, and they’re going to base decisions on those judgments,” said Ross Knepper, assistant professor in Computing and Information Science at Cornell University. “If the collaborations are going to be efficient, people need to have accurate estimates of robots’ competence.”

Knepper, along with Melissa Ferguson, professor of psychology and senior associate dean of social sciences in the College of Arts and Sciences, is studying how humans form and update impressions of robots – with the goal of creating a computational model allowing robots to adjust their nonverbal behavior accordingly. The research will also shed light on human memory.

“One of the primary goals of this basic research is to understand how people understand other individuals, whether those other individuals are humans or robots,” Ferguson said.

The team, which includes Julie Shah of the Massachusetts Institute of Technology (MIT), was recently awarded a $2.5 million grant from the Office of Naval Research.

The research – at the intersection of psychology, artificial intelligence (AI) and robotics – is partly motivated by robots working on Navy ships. A survey of robot-assisted search-and-rescue operations showed that human error was 30 times more likely to cause failures than hardware or software problems. Making the right decisions about deploying robots – which might be necessary to fight fires, find survivors or defuse bombs – can be a matter of life or death.

“We don’t want people to need to take a lengthy course or read an instruction manual to use a robot,” Knepper said. “We would prefer that robots work as closely as possible to the way people work, and that they understand human methods of communication and collaboration when they team up with people.”

This could also mean managing overly high expectations. Sometimes humans are so dazzled by new technology that they have unrealistic ideas of its capabilities. If people have too much faith in, for instance, a robot’s ability to stop a wound from bleeding, they might not dial 911 as quickly as they should.

“So it may be the case,” Knepper said, “that the first time you meet a robot, you might want to downplay expectations a little bit just to create a more accurately calibrated sense of what the robot can do.”

First, the researchers need to better understand how humans form their judgments of robots, and how long these judgments last. For this, they are drawing on psychological research on memory. Ferguson’s recent work has shown when and how people update their unconscious memories of other people. But questions remain about exactly how this happens, and what factors enable it.

“Robots are an especially interesting subject for examining human memory,” Ferguson said. “We have much less experience with robots, and so our expectations of them are more flexible and unformed. Because of this, we can better control and manipulate the kinds of first experiences people have with them, and then test the influence of those experiences on their memories and judgments of the robots.”

The team has designed a series of experiments aimed at assessing how humans form and update their consciously formed impressions – those that people can report and verbalize – as well as their unconsciously formed first impressions, which must be measured without asking people to talk about them.

Once the researchers have conducted these experiments, they can construct a computational model of how people form and update their memories of robots.

They will then apply the model using a combination of traditional robotics tools and artificial intelligence. The goal is for robots to set human expectations of their behavior and guide people to delegate tasks to them appropriately.
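The article does not describe the mathematical form the team’s model will take. As a loose illustration only, the sketch below shows one common way such belief updating is modeled computationally: a Beta-Bernoulli estimate of a robot’s competence that shifts with each observed success or failure. All names, priors and numbers here are hypothetical, not the Cornell/MIT model.

```python
# Hypothetical sketch: Beta-Bernoulli belief updating as one simple way to
# model how an observer's estimate of a robot's competence might shift with
# evidence. This is NOT the researchers' model; all values are invented.

from dataclasses import dataclass


@dataclass
class CompetenceBelief:
    successes: float = 1.0  # Beta prior pseudo-count for observed successes
    failures: float = 1.0   # Beta prior pseudo-count for observed failures

    def update(self, succeeded: bool) -> None:
        """Revise the belief after observing one task outcome."""
        if succeeded:
            self.successes += 1.0
        else:
            self.failures += 1.0

    @property
    def estimate(self) -> float:
        """Expected competence: the mean of the Beta distribution."""
        return self.successes / (self.successes + self.failures)


# A poor first impression takes many later observations to revise:
belief = CompetenceBelief()
belief.update(False)  # clumsy first encounter
print(f"after one failure: {belief.estimate:.2f}")  # 0.33
for _ in range(8):
    belief.update(True)  # repeated competent behavior
print(f"after eight successes: {belief.estimate:.2f}")  # 0.82
```

A model fit to human data would likely weight first impressions more heavily than this symmetric update does; measuring exactly how people revise those early judgments is part of what the team’s experiments are designed to capture.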

In preliminary work, the researchers experimented with a robotic deli operator to explore strategies robots might use to seem more competent. For example, a robot that apologized in advance of a likely error struck customers as less competent than one that waited to apologize until after the expected mistake occurred.

More research is needed before these findings can be applied, Knepper said.

“If robots can tailor their responses to their own failures to help people interpret them, then they can more accurately fine-tune people’s impressions,” he said. “Ultimately it’s about having a smoother collaboration between the two.”

The project is rooted in human psychology and not linked to any current technology, which means it will remain relevant as the field of robotics evolves.

“As robot capabilities advance, they’re going to succeed and fail in different ways than they do now,” Knepper said. “We want to be able to generalize this to all kinds of different tasks.”


– Edited by Chris Vavra, production editor, Control Engineering, CFE Media, cvavra@cfemedia.com.


Author Bio: Melanie Lefkowitz, Cornell University
