Justin R. Brown – VEMI Lab, University of Maine

Augmented Information in the Driving Environment: Increasing Pedestrian Safety with Augmented Reality Indicators for Older Adult Drivers in Maine

This report describes the project activities conducted at the University of Maine’s VEMI Lab to address the increasing rate of vehicle-pedestrian collisions within the state of Maine and to improve older adult driving safety. Leveraging VEMI’s experience with virtual reality environments, human-subject research, and the lab’s fully immersive autonomous vehicle simulator, a proof-of-concept, computer vision-assisted augmented reality (CVAR) system was designed and then tested with Maine drivers aged 65 and older. The system displays an augmented reality (AR) visual indicator (i.e., a CVAR icon) on the simulated vehicle’s windshield whenever a pedestrian is detected in or around the roadway. The goal of this AR icon is to capture the driver’s attention and direct it toward the pedestrian, increasing situational awareness, expediting driving reactions, and helping prevent potentially fatal collisions.
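The report summarized above does not include implementation details, but the pipeline it describes (a pedestrian detector feeding an AR icon on the windshield display) can be sketched in a few lines. The Python below is a hypothetical illustration only: it assumes a generic object detector that returns labeled bounding boxes and a simple rescaling from camera to HUD coordinates; every class, function, and parameter name is an assumption, not part of the CVAR system.

```python
# Hypothetical sketch of a CVAR-style pipeline: pedestrian detections in the
# camera frame are mapped to windshield (HUD) positions where an AR icon
# would be drawn. All names and values are illustrative, not from the report.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Detection:
    label: str                      # e.g., "pedestrian"
    box: Tuple[int, int, int, int]  # (x_min, y_min, x_max, y_max) in camera pixels
    confidence: float


def camera_to_hud(x: float, y: float,
                  cam_size: Tuple[int, int],
                  hud_size: Tuple[int, int]) -> Tuple[int, int]:
    """Map a camera-pixel coordinate to HUD coordinates.

    A real system would use a calibrated projection; this assumes the camera
    and HUD fields of view are aligned and simply rescales.
    """
    return int(x * hud_size[0] / cam_size[0]), int(y * hud_size[1] / cam_size[1])


def cvar_icon_positions(detections: List[Detection],
                        cam_size=(1920, 1080), hud_size=(1280, 480),
                        min_confidence=0.6) -> List[Tuple[int, int]]:
    """Return HUD positions at which to draw the pedestrian indicator icon."""
    positions = []
    for det in detections:
        if det.label != "pedestrian" or det.confidence < min_confidence:
            continue
        x_min, y_min, x_max, _ = det.box
        # Anchor the icon just above the pedestrian's head in the camera frame.
        positions.append(camera_to_hud((x_min + x_max) / 2, y_min, cam_size, hud_size))
    return positions


if __name__ == "__main__":
    frame = [Detection("pedestrian", (900, 300, 980, 560), 0.91),
             Detection("vehicle", (100, 400, 400, 600), 0.88)]
    print(cvar_icon_positions(frame))  # one icon position, for the pedestrian only
```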

Citation: Brown, J.R., Fink, P.D.S., Movalia, R.M., Kastelein, M.E., James, K.J., Corey, R.R., & Giudice, N.A. (2025). Augmented information in the driving environment: Increasing pedestrian safety with augmented reality indicators for older adult drivers in Maine. Technical report for the Maine Department of Transportation and U.S. Federal Highway Administration.

Non-Visual Interfaces for Visual Learners: Multisensory Learning of Graphic Primitives

Multimodal learning systems have been found to be effective in studies investigating the cognitive theory of multimedia learning. Yet this research is rarely put into practice in Science, Technology, Engineering, and Math (STEM) learning environments, which are dominated by visual graphics. Introducing multimodal learning systems into STEM settings and allowing students to access dual-channel cues beyond visual perception may help more students process information in their preferred modality. The purpose of this study was to investigate the usability, effectiveness, and design of multimodal interfaces for enhancing access to graphical representations. We used existing theories of multisensory information processing to study how sighted participants could learn and interpret spatial primitives and graphical concepts presented via three non-visual conditions: natural language (NL) descriptions, haptic renderings, and an NL-Haptic combination. The results showed that access to haptic-only renderings produced the least accurate responses, whereas NL descriptions with and without haptics led to similar performance when participants learned graphical content without vision. Performance was also affected by the complexity of the graphical content, with the highest accuracy observed for closed forms, compared to paired line segments and line/polygon intersections. We argue that universally designed, multimodal learning environments can transcend traditional, visual diagrams by utilizing non-visual channels and commercial hardware to support learners with different sensory abilities, preferences, and processing needs. The findings extend theoretical insights of non-visual information processing toward a better understanding of multisensory learning in sighted individuals.
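As an illustration only (none of the following appears in the paper), the sketch below shows one way a graphic primitive could carry both of the non-visual encodings compared in the study: an NL description and a haptic outline sampled into vibration cue points. The Primitive class, the step size, and the sampling scheme are assumptions made for this example.

```python
# Illustrative pairing of the two non-visual encodings compared in the study:
# a natural language description and a haptic rendering traced as on-screen
# points. Hypothetical structure only; not taken from the published system.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Primitive:
    name: str
    description: str                    # natural language (NL) condition
    outline: List[Tuple[float, float]]  # points a finger would trace (haptic condition)


def haptic_samples(primitive: Primitive, step: float = 0.1) -> List[Tuple[float, float]]:
    """Interpolate points along the outline at which a vibration cue would fire."""
    samples = []
    pts = primitive.outline
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        n = max(1, int(((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / step))
        samples += [(x0 + (x1 - x0) * i / n, y0 + (y1 - y0) * i / n) for i in range(n)]
    samples.append(pts[-1])
    return samples


square = Primitive(
    name="closed form",
    description="A square with sides of equal length, centered on the screen.",
    outline=[(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)],
)
print(square.description)            # NL condition
print(len(haptic_samples(square)))   # number of vibration cue points (haptic condition)
```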

Citation: Doore, Stacy A., Justin R. Brown, Saki Imai, Justin K. Dimmel, and Nicholas A. Giudice. “Non-Visual Interfaces for Visual Learners: Multisensory Learning of Graphic Primitives.” IEEE Access 12 (2024): 189926–40.

Comparing Natural Language and Vibro-Audio Modalities for Inclusive STEM Learning with Blind and Low Vision Users

Data representations continue to be produced in predominantly visual forms within STEM disciplines. The disparity in access to these graphical representations between students who are blind or have low vision (BLV) and their sighted peers is exacerbated as the adoption of digital screens becomes more prevalent in educational settings. Standard accessibility solutions rely heavily on natural language processing, e.g., screen readers, for non-visual information access. But can other non-visual modalities, like touch, be effective for learning graphical content rendered on touchscreens? To investigate this question, we conducted a user study with a multimodal touchscreen learning system to assess the effectiveness of two non-visual graphical presentation modalities: 1) a vibro-audio condition, which used the device’s embedded vibration motor plus an auditory content overview (a spatial and multimodal technique), and 2) a natural language condition that provided a complete description of the content (a cognitively mediated technique). BLV participants (N = 19) were presented with the learning system and asked to answer multiple-choice questions about three different graph types using both presentation modalities. Findings showed that the two presentation modalities were functionally equivalent for learning the graphical information presented, suggesting that, for these stimuli, presentation modality did not have a significant effect on graph learning accuracy. However, the type of graph being learned did have a reliable effect. Moreover, a majority of participants stated a preference for the natural language condition and, on average, learned graphs faster with it than with the vibro-audio condition. The similarity found between the two learning modalities is interpreted as supporting user learning preferences while providing redundancy in the information being communicated. This approach layers the various types of information found in graphical representations (text, numerical, spatial), giving learners with accessibility needs more control, independence, and responsive tools to optimize their own educational materials.
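For readers unfamiliar with vibro-audio rendering, the following hypothetical Python sketch illustrates the interaction pattern described above: an auditory overview followed by a vibration cue whenever the touch point falls inside a rendered graph element. The speak() and vibrate() functions and the bar geometry are stand-ins for platform APIs and stimuli, not the study’s actual touchscreen system.

```python
# Hypothetical vibro-audio interaction: the device vibrates while the user's
# finger is over a rendered graph element, and an auditory overview is spoken
# once at the start. Placeholder functions only; not the published system.
from typing import List, Tuple

Bar = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max) of one bar


def speak(text: str) -> None:
    print(f"[TTS] {text}")  # placeholder for a text-to-speech call


def vibrate(on: bool) -> None:
    print("[vibration]", "on" if on else "off")  # placeholder for the vibration motor


def on_touch_move(x: float, y: float, bars: List[Bar]) -> None:
    """Fire the vibration cue whenever the touch point lies inside a bar."""
    inside = any(x0 <= x <= x1 and y0 <= y <= y1 for (x0, y0, x1, y1) in bars)
    vibrate(inside)


bars = [(0.1, 0.0, 0.2, 0.4), (0.3, 0.0, 0.4, 0.7)]
speak("Bar graph with two bars. Explore by touch to feel each bar.")
on_touch_move(0.15, 0.2, bars)  # finger over the first bar -> vibration on
on_touch_move(0.25, 0.2, bars)  # finger between bars -> vibration off
```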

Keywords: Applied Computing, Education, Interactive Learning Environments, Human-Centered Computing, Accessibility, Accessibility Systems and Tools, Human Computer Interaction, HCI Design and Evaluation Methods, Usability Testing

Citation: Brown et al., “Comparing Natural Language and Vibro-Audio Modalities for Inclusive STEM Learning with Blind and Low Vision Users.”


Give us something to chauffeur it: Exploring user needs in traditional and fully autonomous ridesharing for people who are blind or visually impaired

As self-driving technology advances, there is enormous potential to optimize fully autonomous vehicles (FAVs) for use by people who are blind and visually impaired (BVI). Today, BVI users often rely on ridesharing services for daily travel, which presents both challenges and opportunities for researchers interested in the accessible design of FAVs. The parallels between current BVI travel experiences in rideshares and predictions that FAV services will adopt rideshare models present an enticing opportunity to use ridesharing as a proxy for understanding BVI needs in future FAV transportation. However, a key challenge is identifying the extent to which FAVs should be designed to provide the same assistance that human drivers currently provide for BVI travelers in rideshares. To address this issue, ridesharing users with visual impairment (n = 187) within the United States completed a survey instrument designed to assess and compare desires for interactions, information, and assistance between human-operated and fully autonomous rideshare vehicles, as well as the modality of information delivery (auditory and/or haptic). Results indicate strong support for access to environmental information (e.g., spatial information about the destination) and contextual information (e.g., progress along the route) across the trip with automated vehicles via natural language interactions. Although results suggest significantly less desire for social interaction with the AI “at the wheel” of FAVs than with human drivers, findings indicate that participants desire some social collaboration and human-in-the-loop control during autonomous driving. By empirically comparing human and autonomous ridesharing and exploring both information needs and modality preferences across information categories, the study provides much-needed guidance for the future design of humanlike, anthropomorphized FAV AIs, with important implications for social autonomous agents more generally. This study also speaks to the ways in which inclusive and accessible user interfaces can best support user needs across the range of vision loss in future transportation networks.

Keywords: Autonomous vehicles; People with visual impairment; Ridesharing; Accessible transportation; Social AI

Citation: Fink, P.D.S., Alsamsam, M., Brown, J.R., Kindler, H.D., & Giudice, N.A. (2023). Give us something to chauffeur it: Exploring user needs in traditional and fully autonomous ridesharing for people who are blind or visually impaired. Transportation Research Part F: Traffic Psychology and Behaviour, 98, 91–103. https://doi.org/10.1016/j.trf.2023.09.004 (https://www.sciencedirect.com/science/article/pii/S1369847823001870)
