Nicholas A. Giudice – VEMI Lab, University of Maine

GuideNav: User-Informed Development of a Vision-Only Robotic Navigation Assistant for Blind Travelers
/vemi/publication/guidenav-user-informed-development-of-a-vision-only-robotic-navigation-assistant-for-blind-travelers/ (Mon, 16 Mar 2026)

While commendable progress has been made in user-centric research on mobile assistive systems for blind and low-vision (BLV) individuals, references that directly inform robot navigation design remain rare. To bridge this gap, we conducted a comprehensive human study involving interviews with 26 guide dog handlers, four white cane users, nine guide dog trainers, and one orientation and mobility (O&M) trainer, along with more than 15 hours of observing guide dog–assisted walking. After de-identification, we open-sourced the dataset to promote human-centered development and informed decision-making for assistive systems for BLV people. Building on insights from this formative study, we developed GuideNav, a vision-only, teach-and-repeat navigation system. Inspired by how guide dogs are trained and assist their handlers, GuideNav autonomously repeats a path demonstrated by a sighted person using a robot. Specifically, the system constructs a topological representation of the taught route, integrates visual place recognition with temporal filtering, and employs a relative pose estimator to compute navigation actions, all without relying on costly, heavy, power-hungry sensors such as LiDAR. In field tests, GuideNav consistently achieved kilometer-scale route following across five outdoor environments, maintaining reliability despite noticeable scene variations between teach and repeat runs. A user study with three guide dog handlers and one guide dog trainer further confirmed the system's feasibility, marking, to our knowledge, the first demonstration of a quadruped mobile system guiding a route in a manner comparable to guide dogs.
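The teach-and-repeat idea described in the abstract can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the authors' implementation: the taught route is stored as a chain of image descriptors (the topological representation), and at repeat time the live descriptor is matched by cosine similarity against only a small window of nodes around the previous estimate, which is one simple form of visual place recognition with temporal filtering.

```python
import numpy as np

class TeachAndRepeat:
    """Illustrative teach-and-repeat localizer: a topological route of
    image descriptors, matched with windowed place recognition."""

    def __init__(self, window=2):
        self.route = []          # one descriptor vector per route node
        self.node = 0            # current node estimate along the route
        self.window = window     # temporal filter: search near last node

    def teach(self, descriptor):
        # Teach phase: append a normalized descriptor as a new route node.
        d = np.asarray(descriptor, dtype=float)
        self.route.append(d / np.linalg.norm(d))

    def localize(self, descriptor):
        # Repeat phase: cosine-similarity place recognition, restricted
        # to a window around the previous estimate so the estimate
        # cannot jump arbitrarily along the route between frames.
        q = np.asarray(descriptor, dtype=float)
        q = q / np.linalg.norm(q)
        lo = max(0, self.node - self.window)
        hi = min(len(self.route), self.node + self.window + 1)
        sims = [self.route[i] @ q for i in range(lo, hi)]
        self.node = lo + int(np.argmax(sims))
        return self.node
```

In the real system the localized node would then feed a relative pose estimator that turns the match into a steering action; here the sketch stops at localization, which is the part the abstract describes in most detail.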

Augmented Information in the Driving Environment: Increasing Pedestrian Safety with Augmented Reality Indicators for Older Adult Drivers in Maine
/vemi/publication/augmented-information-in-the-driving-environment-increasing-pedestrian-safety-with-augmented-reality-indicators-for-older-adult-drivers-in-maine/ (Fri, 31 Oct 2025)

This report describes the project activities conducted at the University of Maine's VEMI Lab to address the increasing rate of vehicle–pedestrian collisions within the state of Maine and to improve older adult driving safety. Leveraging VEMI's experience with virtual reality environments, human-subject research, and the lab's fully immersive autonomous vehicle simulator, a proof-of-concept, computer vision-assisted augmented reality (CVAR) system was designed and tested by Maine drivers aged 65 and older. This system displays an augmented reality (AR) visual indicator (i.e., a CVAR icon) on the simulated vehicle's windshield whenever a pedestrian is detected in or around the roadway. The goal of this AR icon is to capture the driver's attention and direct it toward the pedestrian, increasing situational awareness, expediting driving reactions, and helping prevent potentially fatal collisions.
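The core CVAR step, placing an icon over the detected pedestrian, can be sketched as a simple coordinate mapping. The function and coordinate conventions below are illustrative assumptions, not the project's implementation: a detector supplies a pedestrian bounding box in camera pixels, and its center is mapped proportionally onto the windshield display, assuming the camera and display cover the same field of view.

```python
def cvar_icon_position(bbox, frame_size, windshield_size):
    """Map a pedestrian detection to an AR icon position (illustrative).

    bbox            -- (x, y, width, height) in camera pixels
    frame_size      -- (width, height) of the camera frame
    windshield_size -- (width, height) of the windshield display
    """
    x, y, w, h = bbox
    fw, fh = frame_size
    ww, wh = windshield_size
    # Center of the detection, normalized to [0, 1] in the camera frame.
    cx = (x + w / 2) / fw
    cy = (y + h / 2) / fh
    # Proportional mapping onto windshield display coordinates, so the
    # icon appears in the pedestrian's direction from the driver's view.
    return (cx * ww, cy * wh)
```

A production system would also need per-driver calibration and head tracking to align the overlay with the driver's eye point; the sketch shows only the basic detection-to-display mapping.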

Citation:

Brown, J.R., Fink, P.D.S., Movalia, R.M., Kastelein, M.E., James, K.J., Corey, R.R., & Giudice, N.A. (2025). Augmented information in the driving environment: Increasing pedestrian safety with augmented reality indicators for older adult drivers in Maine. Technical report for the Maine Department of Transportation and U.S. Federal Highway Administration. 

Non-Visual Interfaces for Visual Learners: Multisensory Learning of Graphic Primitives
/vemi/publication/non-visual-interfaces-for-visual-learners-multisensory-learning-of-graphic-primitives/ (Mon, 09 Dec 2024)

Multimodal learning systems have been found to be effective in studies investigating the cognitive theory of multimedia learning. Yet this research is rarely put into practice in Science, Technology, Engineering, and Math (STEM) learning environments, which are dominated by visual graphics. Introducing multimodal learning systems into STEM settings and allowing students to access dual-channel cues beyond visual perception may help more students process information in their preferred modality. The purpose of this study was to investigate the usability, effectiveness, and design of multimodal interfaces for enhancing access to graphical representations. We used existing theories of multisensory information processing to study how sighted participants could learn and interpret spatial primitives and graphical concepts presented via three non-visual conditions: natural language (NL) descriptions, haptic renderings, and an NL–haptic combination. The results showed that access to haptic-only renderings produced the least accurate responses, whereas NL descriptions with and without haptics led to similar performance when participants learned graphical content without vision. Performance was also affected by the complexity of the graphical content, with the highest accuracy observed for closed forms, compared to paired line segments and line/polygon intersections. We argue that universally designed, multimodal learning environments can transcend traditional visual diagrams by utilizing non-visual channels and commercial hardware to support learners with different sensory abilities, preferences, and processing needs. Findings extend theoretical insights on non-visual information processing to better understand multisensory learning in sighted individuals.

Citation: Doore, S.A., Brown, J.R., Imai, S., Dimmel, J.K., & Giudice, N.A. (2024). Non-visual interfaces for visual learners: Multisensory learning of graphic primitives. IEEE Access, 12, 189926–189940.

Comparing Natural Language and Vibro-Audio Modalities for Inclusive STEM Learning with Blind and Low Vision Users
/vemi/publication/comparing-natural-language-and-vibro-audio-modalities-for-inclusive-stem-learning-with-blind-and-low-vision-users/ (Fri, 27 Oct 2023)

Data representations continue to be produced in predominantly visual forms within STEM disciplines. The disparity in access to these graphical representations between students who are blind or have low vision (BLV) and their sighted peers is exacerbated as the adoption of digital screens becomes more prevalent in educational settings. Standard accessibility solutions rely heavily on natural language processing, e.g., screen readers, for non-visual information access. But can other non-visual modalities, like touch, be effective for learning graphical content rendered on touchscreens? To investigate this question, we conducted a user study with a multimodal touchscreen learning system to assess the effectiveness of two non-visual graphical presentation modalities: 1) a vibro-audio condition, which used the device's embedded vibration motor plus an auditory content overview (a spatial and multimodal technique), and 2) a natural language condition that provided a complete description of the content (a cognitively mediated technique). BLV participants (N = 19) were presented with the learning system and asked to answer multiple-choice questions about three different graph types using both presentation modalities. Findings showed that the two presentation modalities were functionally equivalent for learning the graphical information presented, suggesting that for these stimuli, the presentation modality did not have a significant effect on participants' graph-learning accuracy. However, the type of graph being learned did have a reliable effect. Moreover, a majority of participants stated a preference for the natural language condition and, on average, learned graphs faster with it than with the vibro-audio condition. The similarity found between the two learning modalities is interpreted as supporting user learning preferences while providing redundancy in the information being communicated. This approach layers the various types of information found in graphical representations (textual, numerical, spatial) for individuals with accessible learning needs, providing more control, independence, and responsive tools for optimizing their own educational materials.

Keywords: Applied Computing, Education, Interactive Learning Environments, Human-Centered Computing, Accessibility, Accessibility Systems and Tools, Human Computer Interaction, HCI Design and Evaluation Methods, Usability Testing

Citation: Brown et al., "Comparing Natural Language and Vibro-Audio Modalities for Inclusive STEM Learning with Blind and Low Vision Users."

Give us something to chauffeur it: Exploring user needs in traditional and fully autonomous ridesharing for people who are blind or visually impaired
/vemi/publication/give-us-something-to-chauffeur-it-exploring-user-needs-in-traditional-and-fully-autonomous-ridesharing-for-people-who-are-blind-or-visually-impaired/ (Mon, 18 Sep 2023)

As self-driving technology advances, there is enormous potential to optimize fully autonomous vehicles (FAVs) for use by people who are blind and visually impaired (BVI). Today, BVI users often rely on ridesharing services for daily travel, which present both challenges and opportunities for researchers interested in the accessible design of FAVs. The parallels between current BVI travel experiences in rideshares and predictions that FAV services will adopt rideshared models present an enticing opportunity to use ridesharing as a proxy for understanding BVI needs in future FAV transportation. However, a key challenge is identifying the extent to which FAVs should be designed to provide the same assistance that human drivers currently provide for BVI travelers in rideshares. To address this issue, ridesharing users with visual impairment (n = 187) within the United States completed a survey instrument designed to assess and compare desires for interactions, information, and assistance between human-operated and fully autonomous rideshare vehicles, as well as the modality of information delivery (auditory and/or haptic). Results indicate strong support for access to environmental information (e.g., spatial information about the destination) and contextual information (e.g., progress along the route) across the trip with automated vehicles via natural language interactions. Although results suggest significantly less desire for social interaction with the AI "at the wheel" of FAVs than with human drivers, findings indicate that participants desire some social collaboration and human-in-the-loop control during autonomous driving. By empirically comparing human and autonomous ridesharing and exploring both information needs and modality preferences across information categories, the study provides much-needed guidance for the future design of humanlike, anthropomorphized FAV AIs, with important implications for social autonomous agents more generally. This study also speaks to the ways in which inclusive and accessible user interfaces should best support user needs across the range of vision loss in future transportation networks.

Keywords: Autonomous vehicles; People with visual impairment; Ridesharing; Accessible transportation; Social AI

Citation: Fink, P.D.S., Alsamsam, M., Brown, J.R., Kindler, H.D., & Giudice, N.A. (2023). Give us something to chauffeur it: Exploring user needs in traditional and fully autonomous ridesharing for people who are blind or visually impaired. Transportation Research Part F: Traffic Psychology and Behaviour, 98, 91–103. https://doi.org/10.1016/j.trf.2023.09.004

The Autonomous Vehicle Assistant (AVA): Emerging technology design supporting blind and visually impaired travelers in autonomous transportation
/vemi/publication/the-autonomous-vehicle-assistant-ava-emerging-technology-design-supporting-blind-and-visually-impaired-travelers-in-autonomous-transportation/ (Thu, 10 Aug 2023)

The U.S. Department of Transportation's Inclusive Design Challenge spurred innovative research promoting accessible technology for people with disabilities in the future of autonomous transportation. This paper presents the user-driven design of the Autonomous Vehicle Assistant (AVA), a winning project of the challenge focused on solutions for people who are blind and visually impaired. Results from an initial survey (n = 90) and a series of user interviews (n = 12) informed AVA's novel feature set, which was evaluated through a formal navigation study (n = 10) and participatory design evaluations (n = 6). Aggregate findings suggest that AVA's sensor fusion approach, combining computer vision, last-meter assistance, and multisensory alerts, provides critical solutions for users poised to benefit most from this emerging transportation technology.

Keywords: Autonomous vehicles, People with visual impairment, Accessibility

Citation: Fink, P.D.S., Doore, S.A., Lin, X., Maring, M., Zhao, P., Nygaard, A., Beals, G., Corey, R.R., Perry, R.J., Freund, K., Dimitrov, V., & Giudice, N.A. (2023). The Autonomous Vehicle Assistant (AVA): Emerging technology design supporting blind and visually impaired travelers in autonomous transportation. International Journal of Human-Computer Studies, 179, 103125. https://doi.org/10.1016/j.ijhcs.2023.103125

Autonomous is Not Enough: Designing Multisensory Mid-Air Gestures for Vehicle Interactions Among People with Visual Impairments
/vemi/publication/autonomous-is-not-enough-designing-multisensory-mid-air-gestures-for-vehicle-interactions-among-people-with-visual-impairments/ (Thu, 10 Aug 2023)

If fully autonomous vehicles (FAVs) are designed inclusively and accessibly, independence will be transformed for millions of people worldwide who experience transportation-limiting disabilities. Although FAVs hold promise for efficient transportation without driver intervention, a truly accessible experience must enable user input, for all people, in many driving scenarios (e.g., to alter a route or pull over during an emergency). Therefore, this paper explores desires for control in FAVs among people who are blind and visually impaired (n = 23). Results indicate strong support for control across a battery of driving tasks, as well as the need for multimodal information. These findings inspired the design and evaluation of a novel multisensory interface leveraging mid-air gestures, audio, and haptics. All participants successfully navigated driving scenarios using our gestural-audio interface, reporting high ease of use. Contributions include the first inclusively designed gesture set for FAV control and insight regarding supplemental haptic and audio cues.

Keywords: Spatial audio, Accessible design, Gestures, Autonomous vehicles, Interfaces for blind or visually impaired individuals, Situational awareness

Citation: Fink, P.D.S., Dimitrov, V., Yasuda, H., Chen, T.L., Corey, R.R., Giudice, N.A., & Sumner, E.S. (2023). Autonomous is not enough: Designing multisensory mid-air gestures for vehicle interactions among people with visual impairments. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI '23), April 23–28, 2023, Hamburg, Germany. ACM, New York, NY, USA, 13 pages. https://doi.org/10.1145/3544548.3580762
