Bio

Ph.D. Industrial and Operations Engineering, University of Michigan
M.S.E. Industrial and Operations Engineering, University of Michigan
B.S. Industrial and Systems Engineering, Virginia Tech
B.A. Economics, Virginia Tech

"It is critical to design displays and interfaces that can present the right information to the user at the right time."

My research focuses on task sharing, attention management, and interruption management in complex environments, including aviation, healthcare, military operations, and manufacturing. These work environments impose considerable and continually increasing attentional demands on operators by requiring them to work symbiotically with automation and technology. As a result, operators must divide their mental resources effectively among numerous tasks and sources of information. It is critical to draw on cognitive ergonomics and systems engineering to design interfaces that present the right information at the right time. My ongoing research addresses the following topics:

  • Multimodal displays. One of the main focuses of this lab is how to address visual data overload in data-rich environments. A promising means of addressing this challenge is the introduction of multimodal interfaces, i.e., interfaces that distribute information across different sensory channels: vision, audition, and touch (with a focus on the latter). The broad research goal is to identify what types of information are best presented via each sensory channel and under what contexts.
  • Adaptive displays. Operator demands, and consequently the information operators need, are constantly changing in complex environments. The goal here is to develop interfaces that adjust how information is presented in response to sensed parameters and conditions, for instance, user preference, task demands, and environmental conditions (a minimal sketch of this idea follows this list).
  • Cognitive limitations. Display design will be compromised if it does not account for the limits of human perception and cognition. For example, one phenomenon of interest is change blindness, i.e., the surprising difficulty people have in detecting even large changes in a visual scene or display when the change coincides with another visual event.
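
To make the adaptive-display idea concrete, below is a minimal, purely illustrative sketch (in Python) of a rule-based policy that routes an alert to a sensory channel based on sensed context. The Context fields, thresholds, and function names are hypothetical assumptions for illustration and do not come from the lab's actual systems.

    # Illustrative only: a toy rule-based modality selector.
    # All fields and thresholds are hypothetical assumptions.
    from dataclasses import dataclass

    @dataclass
    class Context:
        visual_load: float       # estimated visual-channel utilization, 0 to 1
        ambient_noise_db: float  # measured ambient noise level in dB
        user_prefers_tactile: bool

    def choose_modality(ctx: Context) -> str:
        """Route an alert to the most available sensory channel."""
        if ctx.visual_load < 0.6:
            return "visual"      # visual channel has spare capacity
        if ctx.ambient_noise_db < 70:
            return "auditory"    # audition is usable when it is quiet
        if ctx.user_prefers_tactile:
            return "tactile"     # respect user preference for touch
        return "auditory"        # fallback when no channel is clearly free

    # Example: high visual load and loud surroundings route to touch.
    print(choose_modality(Context(0.8, 85.0, True)))  # -> tactile

In a real adaptive system, the hand-coded thresholds above would be replaced by continuously sensed measures of workload and environment, but the core design question is the same: which channel should carry which information, and when.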

My research has been funded by the National Science Foundation, the Agency for Healthcare Research and Quality, the Air Force Office of Scientific Research, and the National Institutes of Health, with research expenditures totaling over $6 million. I am also the recipient of the NSF CAREER Award and the 2016 APA George E. Briggs Dissertation Award.

Awards

  • Jerome H. Ely Human Factors Article Award 2020
  • NSF CAREER Award 2018
  • American Psychological Association (APA) George E. Briggs Dissertation Award 2016

Research Interests

  • Human Factors
  • Tactile and Multimodal Displays and Interfaces
  • Perception and Performance
  • Adaptive/Adaptable Displays

Selected Publications

  • Devlin, S.P., Byham, J.K., & Riggs, S.L. (2021). Does What We See Shape History? Examining Workload History as a Function of Performance and Ambient/Focal Visual Attention. ACM Transactions on Applied Perception, 18(2), 1–17.
  • Clark, L.D., & Riggs, S.L. (2020). Extending Fitts' law in three-dimensional virtual environments with current low-cost virtual reality technology. International Journal of Human-Computer Studies, 139.
  • Gomes, K., Betza, S., & Riggs, S.L. (2019). Now you feel it, now you don't: The effect of movement, cue complexity, and body location on tactile change detection. Human Factors, 62(4), 643–655.
  • Riggs, S.L., & Sarter, N. (2019). Tactile, visual, and crossmodal visual-tactile change blindness: The effect of transient type and task demands. Human Factors, 61(1), 5–24.

Courses Taught

  • SYS 3023: Human-Machine Interface
  • SYS 4582/6582: Human Error in Complex Systems

Featured Grants & Projects

  • CAREER: Collaboratively perceiving, comprehending, and projecting into the future: Supporting team situational awareness with adaptive multimodal displays

    Funded by the National Science Foundation


    Effective teams, especially in data-rich and rapidly changing environments, need to give members the information required to develop awareness of their own, their teammates', and the overall team's current situation. However, such teams face high attentional demands, raising the questions of how to monitor those demands and how to develop systems that adaptively provide needed information, not just through often-overloaded visual displays but through other senses, including touch and sound. Most existing work on adaptive multimodal interfaces for situational awareness focuses on individuals; this project will address how to do this work for teams, using unmanned aerial vehicle (UAV) search and rescue as its primary domain.
