Paul D. S. Fink – VEMI Lab, University of Maine

Augmented Information in the Driving Environment: Increasing Pedestrian Safety with Augmented Reality Indicators for Older Adult Drivers in Maine

This report describes the project activities conducted at the University of Maine's VEMI Lab to address the increasing rate of vehicle-pedestrian collisions within the state of Maine and to improve older adult driving safety. Leveraging VEMI's experience with virtual reality environments, human-subjects research, and the lab's fully immersive autonomous vehicle simulator, a proof-of-concept, computer-vision-assisted augmented reality (CVAR) system was designed and tested by Maine drivers aged 65 and older. This system displays an augmented reality (AR) visual indicator (i.e., a CVAR icon) on the simulated vehicle's windshield whenever a pedestrian is detected in or around the roadway. The goal of this AR icon is to capture the driver's attention and direct it toward the pedestrian, increasing situational awareness, expediting driving reactions, and helping prevent potentially fatal collisions.

Citation:

Brown, J.R., Fink, P.D.S., Movalia, R.M., Kastelein, M.E., James, K.J., Corey, R.R., & Giudice, N.A. (2025). Augmented information in the driving environment: Increasing pedestrian safety with augmented reality indicators for older adult drivers in Maine. Technical report for the Maine Department of Transportation and U.S. Federal Highway Administration. 

Give us something to chauffeur it: Exploring user needs in traditional and fully autonomous ridesharing for people who are blind or visually impaired

As self-driving technology advances, there is enormous potential to optimize fully autonomous vehicles (FAVs) for use by people who are blind and visually impaired (BVI). Today, BVI users often rely on ridesharing services for daily travel, which present both challenges and opportunities for researchers interested in the accessible design of FAVs. The parallels between current BVI travel experiences in rideshares and predictions that FAV services will adopt rideshared models present an enticing opportunity to use ridesharing as a proxy for understanding BVI needs in future FAV transportation. However, a key challenge is identifying the extent to which FAVs should be designed to provide the same assistance that human drivers currently provide for BVI travelers in rideshares. To address this issue, ridesharing users with visual impairment (n = 187) within the United States completed a survey instrument designed to assess and compare desires for interactions, information, and assistance between human-operated and fully autonomous rideshare vehicles, as well as the modality of information delivery (auditory and/or haptic). Results indicate strong support for access to environmental information (e.g., spatial information about the destination) and contextual information (e.g., progress along the route) across the trip with automated vehicles via natural language interactions.
Although results suggest significantly less desire for social interaction with the AI "at the wheel" of FAVs than with human drivers, findings indicate that participants desire some social collaboration and human-in-the-loop control during autonomous driving. By empirically comparing human and autonomous ridesharing and exploring both information needs and modality preferences across information categories, the study provides much-needed guidance for the future design of humanlike, anthropomorphized FAV AIs, with important implications for social autonomous agents more generally. This study also speaks to the ways in which inclusive and accessible user interfaces can best support user needs across the range of vision loss in future transportation networks.

Keywords: Autonomous vehicles; People with visual impairment; Ridesharing; Accessible transportation; Social AI

Citation: Fink, P.D.S., Alsamsam, M., Brown, J.R., Kindler, H.D., & Giudice, N.A. (2023). Give us something to chauffeur it: Exploring user needs in traditional and fully autonomous ridesharing for people who are blind or visually impaired. Transportation Research Part F: Traffic Psychology and Behaviour, 98, 91–103. https://doi.org/10.1016/j.trf.2023.09.004

The Autonomous Vehicle Assistant (AVA): Emerging technology design supporting blind and visually impaired travelers in autonomous transportation

The U.S. Department of Transportation's Inclusive Design Challenge spurred innovative research promoting accessible technology for people with disabilities in the future of autonomous transportation. This paper presents the user-driven design of the Autonomous Vehicle Assistant (AVA), a winning project of the challenge focused on solutions for people who are blind and visually impaired. Results from an initial survey (n = 90) and a series of user interviews (n = 12) informed AVA's novel feature set, which was evaluated through a formal navigation study (n = 10) and participatory design evaluations (n = 6). Aggregate findings suggest that AVA's sensor fusion approach, combining computer vision, last-meter assistance, and multisensory alerts, provides critical solutions for users poised to benefit most from this emerging transportation technology.

Keywords: Autonomous vehicles, People with visual impairment, Accessibility

Citation: Fink, P.D.S., Doore, S.A., Lin, X., Maring, M., Zhao, P., Nygaard, A., Beals, G., Corey, R.R., Perry, R.J., Freund, K., Dimitrov, V., & Giudice, N.A. (2023). The Autonomous Vehicle Assistant (AVA): Emerging technology design supporting blind and visually impaired travelers in autonomous transportation. International Journal of Human-Computer Studies, 179, 103125. https://doi.org/10.1016/j.ijhcs.2023.103125

Autonomous is Not Enough: Designing Multisensory Mid-Air Gestures for Vehicle Interactions Among People with Visual Impairments

If fully autonomous vehicles (FAVs) are designed inclusively and accessibly, independence will be transformed for millions of people experiencing transportation-limiting disabilities worldwide. Although FAVs hold promise to provide efficient transportation without human intervention, a truly accessible experience must enable user input, for all people, in many driving scenarios (e.g., to alter a route or pull over during an emergency). Therefore, this paper explores desires for control in FAVs among people who are blind and visually impaired (n = 23). Results indicate strong support for control across a battery of driving tasks, as well as the need for multimodal information. These findings inspired the design and evaluation of a novel multisensory interface leveraging mid-air gestures, audio, and haptics. All participants successfully navigated driving scenarios using our gestural-audio interface, reporting high ease of use. Contributions include the first inclusively designed gesture set for FAV control and insight regarding supplemental haptic and audio cues.

Keywords: Spatial audio, Accessible design, Gestures, Autonomous vehicles, Interfaces for blind or visually impaired individuals, Situational awareness

Citation: Fink, P.D.S., Dimitrov, V., Yasuda, H., Chen, T.L., Corey, R.R., Giudice, N.A., & Sumner, E.S. (2023). Autonomous is not enough: Designing multisensory mid-air gestures for vehicle interactions among people with visual impairments. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI '23), April 23–28, 2023, Hamburg, Germany. ACM, New York, NY, USA, 13 pages. https://doi.org/10.1145/3544548.3580762

