UVA Engineering researchers earned a national grant to develop a wearable device to provide data and feedback to first responders.
No matter how chaotic the scene, no matter how much noise, smoke, or suffering they encounter, the challenge for first responders remains the same: master the situation — quickly, coolly and accurately — and take appropriate action. Under the best of circumstances, this can be a difficult task, calling on responders to make sense of disparate streams of information from the command center, colleagues, family members, social media, and victims. And with the increasing ubiquity of the Internet and the rise of the Internet of Things, this stream of data — augmented by data from wearables, smartphones, and tablets — has become a torrent.
“Trying to aggregate this large and diverse data stream, interpret it for immediate use and log it for later evaluation requires significant cognitive effort that would be better used to address other incident complexities,” said Homa Alemzadeh, an assistant professor in the University of Virginia’s Department of Electrical and Computer Engineering and a member of UVA Engineering’s Link Lab for cyber-physical systems.
Together with her colleagues Ronald Williams, an associate professor of electrical and computer engineering, and Jack Stankovic, BP America Professor of Computer Science and director of the Link Lab, Alemzadeh is developing a wearable cognitive assistant system, essentially an artificial intelligence agent, that would improve the situational awareness, decision-making and safety of emergency responders. It would do so by automatically collecting and aggregating data from the incident scene in real time, then providing dynamic, data-driven feedback to support effective decision-making.
“Having prompts can help even the best-trained, experienced responders make sure they don’t overlook anything,” said Williams, a life member of the Charlottesville-Albemarle Rescue Squad. “And automated logging will provide data to better evaluate performance and assess the protocols we follow.”
The U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) thought enough of the project to award the group a $1.1 million grant to develop a prototype as part of its Public Safety Innovation Accelerator Program. Of the 162 proposals submitted, NIST funded just 33.
NIST is not the only organization to see the potential of their work. As part of the project, the UVA Engineering researchers are partnering with local emergency response and public service agencies in Charlottesville and Richmond, an effort facilitated by Williams’ long-time presence in the local emergency services community. As North Garden Volunteer Fire Company Chief George Stephens said, “The more we explore this technology, the more potential we see. It is exciting for all of us to be a part of this effort.”
Prepping Data for Analysis
There are a number of challenges that Alemzadeh and her colleagues must overcome to realize their vision. Unlike data that conforms to the requirements of a structured database, most of the data generated at an emergency scene (audio and free-form text) is unstructured. To be useful, it must be converted into structured data. This requires a natural language processing system similar to those Alexa, Siri and Google Assistant use to convert a command or query into data a computer can understand.
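To make that conversion step concrete, the sketch below pulls a few structured fields out of a free-form transcript using simple patterns. It is only an illustration of the unstructured-to-structured idea; the team's actual system relies on full natural language processing, and the transcript and field names here are invented.

```python
import re

# Hypothetical example: extract structured fields from a free-form
# EMS radio transcript. Simple regular expressions stand in for the
# full natural language processing a real system would use.
TRANSCRIPT = "Patient is a 58-year-old male, chest pain, BP 90 over 60, pulse 110."

PATTERNS = {
    "age":            re.compile(r"(\d{1,3})-year-old"),
    "sex":            re.compile(r"\b(male|female)\b", re.IGNORECASE),
    "blood_pressure": re.compile(r"BP (\d{2,3}) over (\d{2,3})"),
    "pulse":          re.compile(r"pulse (\d{2,3})"),
}

def extract_fields(text: str) -> dict:
    """Convert free-form text into a structured record."""
    record = {}
    for field, pattern in PATTERNS.items():
        match = pattern.search(text)
        if match:
            # Join multi-group matches (e.g. systolic/diastolic) with "/".
            record[field] = "/".join(match.groups())
    return record

print(extract_fields(TRANSCRIPT))
# {'age': '58', 'sex': 'male', 'blood_pressure': '90/60', 'pulse': '110'}
```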
Among the issues that the team must confront is the specialized vocabulary associated with emergency responders. They are compiling what is in effect a custom dictionary that will be used for speech-to-text and natural language processing.
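For illustration, here are a handful of entries such a dictionary might contain. The abbreviations are standard EMS shorthand, but the mapping format and normalization step are invented for this example, not taken from the team's work.

```python
# Hypothetical entries in a responder-vocabulary dictionary. A lexicon
# like this can expand jargon and abbreviations after speech-to-text so
# that downstream language processing sees consistent terms.
EMS_LEXICON = {
    "bvm": "bag valve mask",
    "gcs": "glasgow coma scale",
    "nrb": "non-rebreather mask",
    "loc": "level of consciousness",
}

def normalize(transcript: str) -> str:
    """Replace known jargon tokens with their expanded forms."""
    words = transcript.lower().split()
    return " ".join(EMS_LEXICON.get(w, w) for w in words)

print(normalize("Patient GCS 14 on NRB"))
# patient glasgow coma scale 14 on non-rebreather mask
```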
The second challenge is interpreting the data accurately. To do this, the team is translating existing emergency response protocols (for instance, guidelines that help responders determine whether a person is having a heart attack and react correctly) into behavioral models and decision rules that can be executed by an artificial intelligence agent. This will enable the cognitive assistant to track the situation in real time and offer suggestions to the responders.
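A minimal sketch of what an executable decision rule might look like appears below. The conditions, thresholds and suggestions are invented for demonstration and are not drawn from any actual medical protocol; in practice, such rules would be derived from and validated against published guidelines.

```python
# Illustrative sketch: protocol steps encoded as decision rules that an
# agent can evaluate against a structured record of the incident.
RULES = [
    {
        "name": "possible cardiac event",
        "condition": lambda r: ("chest pain" in r.get("complaints", [])
                                and int(r.get("age", 0)) >= 40),
        "suggestion": "Consider 12-lead ECG and aspirin per local protocol.",
    },
    {
        "name": "hypotension",
        "condition": lambda r: int(r.get("systolic", 999)) < 100,
        "suggestion": "Reassess vitals; consider fluid therapy per protocol.",
    },
]

def evaluate(record: dict) -> list:
    """Run every rule against the structured record and collect prompts."""
    return [rule["suggestion"] for rule in RULES if rule["condition"](record)]

record = {"age": "58", "complaints": ["chest pain"], "systolic": "90"}
for prompt in evaluate(record):
    print(prompt)
```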
Building for Real-World Emergencies
As if meeting these challenges weren’t difficult enough, the team has to build a system that can function in real-world emergency situations. For instance, they must create a system that is robust enough to capture speech in an environment characterized by background noise and overlapping speakers.
“When people ask Alexa a question, it is usually in a quiet room,” Alemzadeh said. “We will need to adapt speech-to-text and natural language processing to work in the field.”
As part of their collaboration with North Garden, the team is characterizing common noise profiles by recording voice streams generated during the company’s training sessions.
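One plausible use for such recordings (an assumption here, not a detail the team has described) is mixing captured noise into clean speech at a controlled signal-to-noise ratio, a common way to build training data for noise-robust recognition:

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix recorded noise into clean speech at a target
    signal-to-noise ratio (in dB)."""
    # Tile or trim the noise to match the speech length.
    reps = int(np.ceil(len(speech) / len(noise)))
    noise = np.tile(noise, reps)[: len(speech)]

    # Scale the noise so that 10*log10(P_speech / P_noise) == snr_db.
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise

# Example with synthetic signals standing in for real recordings.
rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 220 * np.linspace(0, 1, 16000))  # 1 s of "speech"
noise = rng.normal(0, 0.1, 8000)                             # 0.5 s of noise
noisy = mix_at_snr(speech, noise, snr_db=5.0)
```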
In addition, Alemzadeh and her colleagues must build for an environment in which broadband connectivity is not a given. This is important because many of the tools they will use — like natural language processing — rely on cloud resources.
“We must be able to figure out how to reconfigure the device so that, if necessary, it can run speech and natural language processing algorithms locally,” Alemzadeh said. “We need to do this in such a way that performance, although degraded, still provides essential functionality.”
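A rough sketch of that kind of graceful degradation might look like the following. The function names and the connectivity probe are hypothetical placeholders, not any specific API; the point is the fallback structure, with a full-accuracy cloud path and a smaller, degraded on-device path.

```python
import socket

def cloud_available(host: str = "example.com", timeout: float = 1.0) -> bool:
    """Cheap connectivity probe: can we open a socket to the cloud host?"""
    try:
        with socket.create_connection((host, 443), timeout=timeout):
            return True
    except OSError:
        return False

def transcribe(audio: bytes) -> str:
    """Prefer the cloud model; fall back to the local one when offline."""
    if cloud_available():
        return transcribe_via_cloud(audio)   # full-accuracy cloud model
    return transcribe_on_device(audio)       # smaller local model, degraded

def transcribe_via_cloud(audio: bytes) -> str:
    raise NotImplementedError("placeholder for a cloud speech API call")

def transcribe_on_device(audio: bytes) -> str:
    raise NotImplementedError("placeholder for an embedded speech model")
```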
The team will also have to accommodate the limited processing power and battery life of wearable devices and ensure the privacy and security of the data they collect.
A First Step
Looking to the future, Alemzadeh can envision the wearable device including augmented reality interfaces to assist responders, but for the moment the team is concentrating on building a basic prototype. At the NIST Annual Public Safety Broadband Stakeholder Meeting in June 2018, they presented a demonstration that incorporated protocols for a subset of the most common emergency response situations. They showed that the prototype can convert speech into a structured format, select and execute the relevant protocols in real time, and generate recommendations. (Watch a video interview of the presenters.)
“It will be a long time before the cognitive assistant will be as accurate and reliable as a responder,” Alemzadeh said. “But in the meantime, we do think we will make it much easier for responders to focus on the information that matters, and in the process save lives.”
The UVA team is also collaborating with the University of Oxford, which has an NIST-funded project to locate emergency responders, particularly firefighters, in situations where GPS may not be available. Stankovic spent three months of a sabbatical at Oxford working to integrate the UVA cognitive assistant with Oxford’s approach to location-based services.
Alemzadeh said the team has also developed a partnership with the Richmond Ambulance Authority to test the language processing algorithms using real EMS data collected and de-identified at that agency. The team is also in conversation with the National Fire Protection Association about potential collaboration on the analysis of unstructured EMS data.