By Rob Seal
Tariq Iqbal is striving to build robot-human teams. (Photo by Dan Addison, University Communications)

Humans and robots will one day soon be working on the same teams, so we’re going to need to trust each other.

That belief underpins the research of Tariq Iqbal, an assistant professor at the University of Virginia’s School of Engineering. He recently earned a three-year, $450,000 grant for emerging scholars from the U.S. Air Force Office of Scientific Research (AFOSR) to continue and extend his work.

AFOSR will award about $21.5 million in grants to 48 scientists this year, including Iqbal, as part of its 2024 Young Investigator Program. 

Iqbal specializes in robotics and artificial intelligence. His lab focuses broadly on optimizing fluency and fluidity in human-robot interactions.

“So the goal here is: How can we make the robots work effectively with humans?” he said.

Can Humans Trust Robots?

When it comes to safe and effective collaboration, trust will be a key factor. Having just the right amount of trust, not too much and not too little, is a well-established predictor of success in human teams, Iqbal noted.

And trust is already starting to matter on mixed robot-human teams, he added, in settings such as automobile manufacturing and autonomous driving.

“In factories right now, there are robots that are building cars, and they can do it very rapidly,” Iqbal said. “But if you look at the same assembly line, downstream, there are humans doing all the screw tightening, putting all the parts together, checking the lines and everything. So we have the robots’ end of the line, and the humans’ end of the line, but there is a wall between them.”


“How do we break that wall and make it a full team? If we can do that, we can achieve something that neither the humans nor the robots can achieve alone.”

There are dangers, however, in developing too much trust, he said. Many autonomous driving accidents, for example, happen because the driver fully trusts the auto-drive feature, even though safe operation requires hands on the wheel.

Can Robots ‘Trust’ Humans? 

The research funded by the grant is titled “A Psychophysiological and Behavioral Measure-based Multimodal Trust Model for Generating Real-time Intervention to Facilitate Human-Robot Teaming.” It’s an early step toward establishing the baseline measurements researchers will need to quantify trust in these brave new team environments.

The first goal will be to establish an objective measure of human trust toward robots, Iqbal said.

But the research will also seek ways to help robots sense the human trust level and respond appropriately.

This function could help a robot sense whether a person is over- or under-trusting its capability to perform the task. Then, the robot can take appropriate action to optimize trust for the entire team’s benefit.
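To make the idea concrete, here is a minimal sketch, in Python, of what such a closed loop could look like. Everything in it is a hypothetical illustration: the signal names, fusion weights, and thresholds are stand-ins, not the model the grant will actually produce.

```python
# Hypothetical sketch of a multimodal trust monitor. Signal names,
# weights, and thresholds are illustrative stand-ins only.
from dataclasses import dataclass


@dataclass
class TrustReading:
    heart_rate_norm: float  # psychophysiological signal, scaled to [0, 1]
    gaze_on_robot: float    # fraction of time spent watching the robot, [0, 1]
    override_rate: float    # how often the human overrides the robot, [0, 1]


def estimate_trust(r: TrustReading) -> float:
    """Fuse the multimodal signals into a single trust score in [0, 1].

    Heavy monitoring (gaze) and frequent overrides suggest low trust;
    an elevated heart rate suggests stress. Weights are arbitrary here.
    """
    vigilance = 0.4 * r.gaze_on_robot + 0.4 * r.override_rate
    stress = 0.2 * r.heart_rate_norm
    return max(0.0, 1.0 - vigilance - stress)


def choose_intervention(trust: float, low: float = 0.3, high: float = 0.8) -> str:
    """Map the trust estimate to a real-time intervention."""
    if trust < low:
        return "explain_next_action"  # under-trust: robot narrates its plan
    if trust > high:
        return "prompt_human_check"   # over-trust: ask the human to verify
    return "continue"                 # trust sits in the calibrated band


reading = TrustReading(heart_rate_norm=0.2, gaze_on_robot=0.9, override_rate=0.7)
trust = estimate_trust(reading)
print(f"trust={trust:.2f}, action={choose_intervention(trust)}")
```

The design choice worth noting is the calibrated band: the robot intervenes only when its trust estimate drifts above or below that range, mirroring the “not too much, not too little” target Iqbal describes.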

“The challenge is not the robot itself; it’s actually the human,” Iqbal said. “Modeling the human can be the most challenging part.”