By Jennifer McManamay
Yen-Ling Kuo, an assistant professor of computer science, is building a driving simulator, similar to this one in UVA Engineering’s Link Lab, to collect data on driving behavior. She’ll use the data to help a robot’s AI associate the meaning of words with what it sees, either by watching how humans interact with the environment or through its own interactions with it. (Graeme Jenvey/UVA Engineering)

Self-driving cars are coming, but will you really be OK sitting passively while a 2,000-pound autonomous robot motors you and your family around town?

Would you feel more secure if, while autonomous technology is perfected over the next few years, your semi-autonomous car could explain to you what it’s doing — for example, why it suddenly braked when you didn’t? 

Better yet, what if it could help your teenager not only learn to drive, but also drive more safely? 

Yen-Ling Kuo, the Anita Jones Faculty Fellow and assistant professor of computer science at the University of Virginia School of Engineering and Applied Science, is training machines to use human language and reasoning to do all of that and more. The work is funded by a two-year Young Faculty Researcher grant from the Toyota Research Institute.

“This project is about how artificial intelligence can understand the meaning of drivers’ actions through language modeling and use this understanding to augment our human capabilities,” Kuo said.

“By themselves, robots aren’t perfect, and neither are we. We don’t necessarily want machines to take over for us, but we can work with them for better outcomes.”

Yen-Ling Kuo is the Anita Jones Faculty Fellow and an assistant professor in the UVA Department of Computer Science. She is a participating member of UVA Engineering’s Link Lab, a multidepartment collaborative center for cyber-physical research. (Tom Cogill/UVA Engineering)

Eliminating the Need to Program Every Scenario

To reach that level of cooperation, you need machine learning models that imbue robots with generalizable reasoning skills.

That’s “as opposed to collecting large datasets to train for every scenario, which will be expensive, if not impossible,” Kuo said.

Kuo is collaborating with a team at the Toyota Research Institute to build language representations of driving behavior that enable a robot to associate the meaning of words with what it sees, either by watching how humans interact with the environment or through its own interactions with it.
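
To make the idea concrete, here is a toy sketch in Python of how words might be grounded in logged driving behavior. This is an illustration only, not Kuo’s actual method; the feature choices, prototype values and function names are all assumptions.

```python
# Toy illustration (an assumption, not the project's method) of grounding
# words in driving data: logged behavior segments are summarized as feature
# vectors, each word gets a labeled prototype, and a new observation is
# described by whichever prototype it lies closest to.

import numpy as np

# Hypothetical features per segment: (mean acceleration m/s^2, mean steering rate rad/s)
prototypes = {
    "braking":      np.array([-2.5, 0.0]),
    "accelerating": np.array([ 1.8, 0.0]),
    "turning":      np.array([ 0.0, 0.4]),
}

def describe(segment: np.ndarray) -> str:
    """Return the word whose prototype is nearest to the observed segment."""
    return min(prototypes, key=lambda word: np.linalg.norm(prototypes[word] - segment))

print(describe(np.array([-2.0, 0.1])))  # -> "braking"
```

A real system would learn such associations from data rather than hand-code them, but the mapping from observed behavior to words is the core idea.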

Let’s say you’re an inexperienced driver, or maybe you grew up in Miami and moved to Boston. A car that helps you drive on icy roads would be handy, right?

This new intelligence will be especially important for handling out-of-the-ordinary circumstances, such as helping inexperienced drivers adjust to road conditions or guiding them through challenging situations.

“We would like to apply the learned representations in shared autonomy. For example, the AI can describe a high-level intention of turning right without skidding and give guidance to slow to a certain speed while turning right,” Kuo said. “If the driver doesn’t slow enough, the AI will adjust the speed further, or if the driver’s turn is too sharp, the AI will correct for it.”
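
The quote above describes a control loop in plain words, so here is a minimal sketch of that kind of shared control in Python. The names, thresholds and blending rule are assumptions for illustration, not details of Kuo’s system.

```python
# Minimal shared-autonomy sketch (assumed design, not Kuo's system): the AI
# states its intention as a safe envelope for the maneuver ("turn right
# without skidding"), then blends corrections into the driver's commands
# only when that envelope is exceeded.

from dataclasses import dataclass

@dataclass
class Advice:
    max_speed_mps: float   # advised speed while turning
    max_steer_rad: float   # sharpest safe steering angle

def shared_control(driver_speed: float, driver_steer: float,
                   advice: Advice, blend: float = 0.5) -> tuple[float, float]:
    """Return the executed (speed, steer) command.

    Within the advised envelope the driver's input passes through unchanged;
    outside it, the command is pulled toward the safe limit, weighted by
    `blend` (0.0 = driver only, 1.0 = AI only).
    """
    speed = driver_speed
    if driver_speed > advice.max_speed_mps:
        speed = (1 - blend) * driver_speed + blend * advice.max_speed_mps

    steer = driver_steer
    if abs(driver_steer) > advice.max_steer_rad:
        safe = advice.max_steer_rad if driver_steer > 0 else -advice.max_steer_rad
        steer = (1 - blend) * driver_steer + blend * safe
    return speed, steer

# Example: icy right turn. The driver is too fast and steering too sharply,
# so the AI damps both commands toward its advised limits.
advice = Advice(max_speed_mps=4.0, max_steer_rad=0.35)
print(shared_control(driver_speed=7.0, driver_steer=0.6, advice=advice))  # (5.5, 0.475)
```

Raising `blend` toward 1.0 lets the AI enforce its limits outright; intermediate values keep the driver in the loop while damping unsafe commands.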

Kuo will develop the language representations from a variety of data sources, including a driving simulator she is building for her lab this summer.

Her work is being noticed. Kuo recently gave an invited talk on related research at the Association for the Advancement of Artificial Intelligence’s New Faculty Highlights 2024 program. She also has a forthcoming paper, “Learning Representations for Robust Human-Robot Interaction,” slated for publication in AI Magazine.

Advancing Human-Centered AI

Kuo’s proposal closely aligns with the research institute’s goals for advancing human-centered AI, interactive driving and robotics. 

“Once language-based representations are learned, their semantics can be used to share autonomy between humans and vehicles or robots, promoting usability and teaming,” said Kuo’s co-investigator, Guy Rosman, who manages the Toyota Research Institute’s Human Aware Interaction and Learning team.

“This harnesses the power of language-based reasoning into driver-vehicle interactions that better generalize our notion of common sense, well beyond existing approaches,” Rosman said.

That means if you ever do hand the proverbial keys over to your car, the trust enabled by Kuo’s research should help you steer clear of any worries. 

Link Lab

UVA Engineering’s Link Lab is an interdisciplinary center of excellence for cyber-physical research that leads pioneering work in robotics and autonomous vehicles, smart health, hardware for the internet of things and smart cities. Prioritizing impact on real-world problems, the Link Lab and its members — 325 UVA faculty, graduate and undergraduate students from multiple departments — partner with industry to tackle the most critical questions at the intersection of the cyber and physical worlds.
