Conference Proceedings – VEMI Lab, University of Maine

GuideNav: User-Informed Development of a Vision-Only Robotic Navigation Assistant for Blind Travelers /vemi/publication/guidenav-user-informed-development-of-a-vision-only-robotic-navigation-assistant-for-blind-travelers/
While commendable progress has been made in user-centric research on mobile assistive systems for blind and low-vision (BLV) individuals, references that directly inform robot navigation design remain rare. To bridge this gap, we conducted a comprehensive human study involving interviews with 26 guide dog handlers, four white cane users, nine guide dog trainers, and one O&M trainer, along with 15+ hours of observing guide dog–assisted walking. After de-identification, we open-sourced the dataset to promote human-centered development and informed decision-making for assistive systems for BLV people. Building on insights from this formative study, we developed GuideNav, a vision-only, teach-and-repeat navigation system. Inspired by how guide dogs are trained and assist their handlers, GuideNav autonomously repeats a path demonstrated by a sighted person using the robot. Specifically, the system constructs a topological representation of the taught route, integrates visual place recognition with temporal filtering, and employs a relative pose estimator to compute navigation actions—all without relying on costly, heavy, power-hungry sensors such as LiDAR. In field tests, GuideNav consistently achieved kilometer-scale route following across five outdoor environments, maintaining reliability despite noticeable scene variations between teach and repeat runs. A user study with 3 guide dog handlers and 1 guide dog trainer further confirmed the system's feasibility, marking, to our knowledge, the first demonstration of a quadruped mobile system guiding a route in a manner comparable to guide dogs.
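As a rough illustration of the teach-and-repeat pipeline the abstract outlines, the Python sketch below localizes each live camera frame against the taught route's topological nodes, restricts matching to a temporal window around the previous match, and turns a relative pose estimate into a navigation action. The camera, VPR model, pose estimator, and controller interfaces are hypothetical stand-ins; this is a minimal sketch of the general technique, not the GuideNav implementation.

import numpy as np

def follow_route(route_descriptors, camera, vpr_model, pose_estimator,
                 controller, window=5):
    """Minimal teach-and-repeat loop: localize against the taught route's
    topological nodes, then steer toward the matched node's viewpoint.

    route_descriptors: (N, D) array of place-recognition descriptors, one
    per node recorded during the teach run. All interfaces here are
    illustrative stand-ins, not the GuideNav implementation."""
    last_idx = 0
    while last_idx < len(route_descriptors) - 1:
        frame = camera.read()                       # current RGB image
        query = vpr_model.describe(frame)           # (D,) descriptor

        # Temporal filtering: only consider nodes near the previous match,
        # which suppresses perceptual aliasing along long routes.
        lo = max(0, last_idx - 1)
        hi = min(len(route_descriptors), last_idx + window)
        sims = route_descriptors[lo:hi] @ query
        last_idx = lo + int(np.argmax(sims))

        # Relative pose between the live frame and the matched teach-run
        # frame gives the heading correction; no LiDAR or GPS required.
        dx, dy, dtheta = pose_estimator.relative_pose(frame, last_idx)
        controller.step(forward=dx, lateral=dy, turn=dtheta)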

Toward Accessible Parsons Problems on Mobile Platforms /vemi/publication/toward-accessible-parsons-problems-on-mobile-platforms/
Parsons problems have become a mainstay of computer science education. They are heavily used among students, especially in K-12, and provide a small, puzzle-like experience for students to practice their skills. While prior work has explored complex issues with accessibility and block languages in general, the 2024 changes to accessibility regulations by the U.S. Department of Justice include new rules around mobile platforms. These rules are ill-defined and in need of evaluation. In this work, we make several contributions. First, we conducted an evaluation of existing blocks with respect to their regulatory compliance and we discuss a new blocks technology that we developed that meets these new mobile guidelines. Second, we conducted three empirical studies using Parsons problems to evaluate the usability of the technology with teachers of the visually impaired (n = 32), high-school students with diverse disabilities (n = 28), and high-school students with blindness or low vision (n = 13).
Citation:
Kluthe, T., Contreras, G., Allee, W., Robinson-Moore, W.J., Roy, N., Williams, H., Hoffman, A., Smith, D.W., Blaser, B., Gorlewicz, J.L., Giudice, N.A., & Stefik, A. (2026). Toward Accessible Parsons Problems on Mobile Platforms. In the Proceedings of the 57th ACM Technical Symposium on Computer Science Education (SIGCSE TS ’26), Vol. 1, 589-596. Association for Computing Machinery, New York, NY, USA. https://doi.org/10.1145/3770762.3772646.
Towards a Cohesive Accessible Mobility Ecosystem /vemi/publication/towards-a-cohesive-accessible-mobility-ecosystem/
Ensuring widespread accessible mobility has the potential to positively impact the lives of millions of people with disabilities by moving and connecting them to what matters, in addition to improving mobility for everyone through the benefits of inclusive design. To enable cross-disciplinary solutions, an overview of several key research areas for advancing the field of accessible mobility will be provided, covering aspects including the accessibility ecosystem, physical and digital methods for improving accessibility, and accessibility safety. Recent advancements and opportunities for innovation will be highlighted, with the aim of developing a holistic inclusive transportation architecture, leveraging the ongoing and future transformation of the industry.
Citation:

Duerstock, B., Giudice, N.A., Klinich, K.D., Kreschmer, J., Lee, M., & Bond, J. (2025). Towards a Cohesive Accessible Mobility Ecosystem. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 69, No. 1, pp. 878-883. SAGE Publications. https://doi.org/10.1177/10711813251381732.

Accessible maps for the future of inclusive ridesharing /vemi/publication/accessible-maps-for-the-future-of-inclusive-ridesharing/
For people who are blind and low vision (BLV), ridesharing provides an important means of independence and mobility. However, a common challenge relates to finding the vehicle when it arrives at an unanticipated location. Although coordinating with the driver for assistance is serviceable in the near term, new solutions are necessary when a human is no longer available in future automated vehicles. Therefore, this paper presents and evaluates a multisensory smartphone-based map system designed to enable nonvisual tracking of summoned vehicles. Results from a user study with BLV users (N=12) suggest that vibro-audio maps (VAMs) promote superior spatial confidence and reasoning compared to current nonvisual audio interfaces in ridesharing apps, while also being desirable and easy to use. A subsequent expert evaluation, based on improvements suggested during the user study, indicates the practical utility of VAMs for addressing both current and future wayfinding challenges for BLV travelers.
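To illustrate the kind of nonvisual vehicle tracking a vibro-audio map affords, the sketch below pairs vibration with a spoken bearing cue as the finger explores the map. The activation radius, messages, and function shape are illustrative assumptions, not the interface evaluated in the paper.

import math

def vam_feedback(touch_xy, vehicle_xy, radius_px=40):
    """Vibro-audio map feedback sketch: vibrate when the user's finger is on
    the summoned vehicle's map marker, and speak its bearing otherwise.
    The radius, messages, and API shape are illustrative assumptions, not
    the interface evaluated in the paper."""
    dx = vehicle_xy[0] - touch_xy[0]
    dy = vehicle_xy[1] - touch_xy[1]
    if math.hypot(dx, dy) <= radius_px:
        return {"vibrate": True, "speech": "Your vehicle is here."}
    # Screen y grows downward, so negate dy to get a compass-style bearing.
    bearing = (math.degrees(math.atan2(dx, -dy)) + 360) % 360
    directions = ["north", "northeast", "east", "southeast",
                  "south", "southwest", "west", "northwest"]
    heading = directions[int((bearing + 22.5) // 45) % 8]
    return {"vibrate": False, "speech": f"Vehicle is to the {heading}."}

# Example: finger near the lower left, vehicle marker toward the upper right.
print(vam_feedback(touch_xy=(120, 300), vehicle_xy=(400, 180)))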
Citation:
Fink, P.D.S., Milne, H., Caccese, A., Alsamsam, M., Loranger, J., Colley, M., & Giudice, N.A. (2024). Accessible maps for the future of inclusive ridesharing. In the Proceedings of the 16th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ’24), 106-115. https://doi.org/10.1145/3640792.3675736.
X-ray vision as a compensatory augmentation for slowing cognitive map decay in older adults /vemi/publication/x-ray-vision-as-a-compensatory-augmentation-for-slowing-cognitive-map-decay-in-older-adults/
Safe and efficient navigation often relies on the development and retention of accurate cognitive maps that include inter-landmark relations. For many older adults, cognitive maps are difficult to form and remember over time, which introduces serious challenges for independence and mobility. To address this problem, we explore an innovative compensatory augmentation solution enabling enhanced inter-landmark learning via an “X-Ray Vision” simulation. Results from a user study (n=45) suggest superior cognitive map retention over time in older adults from a single learning session with the augmentation versus a control condition without it. Furthermore, results characterize differences in the decay of cognitive maps between older adults and a control group of younger adults. These findings suggest important implications for future augmented reality devices and the ways in which they can be used to promote memory and independence among older adults.
Citation:
Bennett, C.R., Fink, P.D.S., & Giudice, N.A. (2024). X-ray vision as a compensatory augmentation for slowing cognitive map decay in older adults. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (CHI ’24). DOI: 10.1145/3613904.3642644.
Spatial audio-enhanced multimodal graph rendering for efficient data trend learning on touchscreen devices /vemi/publication/spatial-audio-enhanced-multimodal-graph-rendering-for-efficient-data-trend-learning-on-touchscreen-devices/
Touchscreen-based rendering of graphics using vibrations, sonification, and text-to-speech is a promising approach for nonvisual access to graphical information, but extracting trends from complex data representations nonvisually is challenging. This work presents the design of a multimodal feedback scheme with integrated spatial audio for the exploration of histograms and scatter plots on touchscreens. We detail the hardware employed and the algorithms used to control vibrations and sonification adjustments through the change of pitch and directional stereo output. We conducted formative testing with 5 blind or visually impaired participants, and results illustrate that spatial audio has the potential to increase the identification of trends in the data, at the expense of a skewed mental representation of the graph. This design work and pilot study are critical to the iterative, human-centered approach of rendering multimodal graphics on touchscreens and contribute a new scheme for efficiently capturing data trends in complex data representations.
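As a concrete illustration of the pitch and directional stereo mapping described above, the sketch below converts a normalized touch position on a graph into a tone frequency and equal-power stereo gains. The frequency range and pan law are assumptions for illustration, not the parameters used in the study.

import math

def sonify_point(x_frac, y_frac, f_min=220.0, f_max=880.0):
    """Map a touched data point to a spatialized tone.

    x_frac, y_frac: touch position normalized to [0, 1] along the graph's
    x and y axes. The frequency range and pan law are illustrative choices,
    not the parameters used in the paper."""
    # Higher data values -> higher pitch (log scale feels perceptually even).
    freq = f_min * (f_max / f_min) ** y_frac
    # Horizontal position -> directional stereo output (equal-power pan).
    left_gain = math.cos(x_frac * math.pi / 2)
    right_gain = math.sin(x_frac * math.pi / 2)
    return freq, left_gain, right_gain

# Example: a point two-thirds of the way up, near the right edge of the plot.
print(sonify_point(x_frac=0.9, y_frac=0.66))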
Citation:
Robinson-Moore, W., Kalal, M., Tennison, J.L., Giudice, N.A., & Gorlewicz, J.L. (2024). Spatial audio-enhanced multimodal graph rendering for efficient data trend learning on touchscreen devices. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (CHI ’24). DOI: 10.1145/3613904.3641959.
Towards robotic companions: Understanding handler-guide dog interactions for informed guide dog robot design /vemi/publication/towards-robotic-companions-understanding-handler-guide-dog-interactions-for-informed-guide-dog-robot-design/
Dog guides are favored by blind and low-vision (BLV) individuals for their ability to enhance independence and confidence by reducing safety concerns and increasing navigation efficiency compared to traditional mobility aids. However, only a relatively small proportion of BLV individuals work with dog guides due to their limited availability and associated maintenance responsibilities. There is considerable recent interest in addressing this challenge by developing legged guide dog robots. This study was designed to determine critical aspects of the handler-guide dog interaction and to better understand handler needs in order to inform guide dog robot development. We conducted semi-structured interviews and observation sessions with 23 dog guide handlers and 5 trainers. Thematic analysis revealed critical limitations in guide dog work, desired personalization in handler-guide dog interaction, and important perspectives on future guide dog robots. Grounded in these findings, we discuss pivotal design insights for guide dog robots aimed at adoption within the BLV community.
Citation:
Hwang, H., Jung, H., Giudice, N.A., Biswas, J., Lee, S.I., & Kim, D. (2024). Towards robotic companions: Understanding handler-guide dog interactions for informed guide dog robot design. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (CHI ’24). DOI: 10.1145/3613904.3642181.
Comparing Natural Language and Vibro-Audio Modalities for Inclusive STEM Learning with Blind and Low Vision Users /vemi/publication/comparing-natural-language-and-vibro-audio-modalities-for-inclusive-stem-learning-with-blind-and-low-vision-users/
Data representations continue to be produced in predominantly visual forms within STEM disciplines. The disparity in access to these graphical representations between students who are blind or have low vision (BLV) and their sighted peers is exacerbated as the adoption of digital screens becomes more prevalent in educational settings. Standard accessibility solutions rely heavily on natural language processing, e.g., screen readers, for non-visual information access. But can other non-visual modalities, like touch, be effective for learning graphical content rendered on touchscreens? To investigate this question, we conducted a user study with a multimodal touchscreen learning system to assess the effectiveness of two non-visual graphical presentation modalities: 1) a vibro-audio condition, which used the device’s embedded vibration motor plus an auditory content overview (a spatial and multimodal technique), and 2) a natural language condition that provided a complete description of the content (a cognitively mediated technique). BLV participants (N = 19) were presented with the learning system and asked to answer multiple-choice questions about three different graph types using both presentation modalities. Findings showed that the two presentation modalities were functionally equivalent for learning the graphical information presented, suggesting that, for these stimuli, the presentation modality did not have a significant effect on participant graph-learning accuracy. However, the type of graph being learned did have a reliable effect. Moreover, a majority of participants stated a preference for the natural language condition and, on average, learned graphs faster with it than with the vibro-audio condition. The similarity found between the two learning modalities is interpreted as supporting user learning preferences while providing redundancy in the information being communicated. This approach layers the various types of information found in graphical representations (textual, numerical, spatial) for individuals with accessible learning needs, providing more control, independence, and responsive tools for optimizing their own educational materials.

Keywords: Applied Computing, Education, Interactive Learning Environments, Human-Centered Computing, Accessibility, Accessibility Systems and Tools, Human Computer Interaction, HCI Design and Evaluation Methods, Usability Testing

Citation: Brown et al., “Comparing Natural Language and Vibro-Audio Modalities for Inclusive STEM Learning with Blind and Low Vision Users.” (https://www.sciencedirect.com/science/article/pii/S1369847823001870)


The Design of mid-air ultrasonic haptic interfaces based on the perception of lines /vemi/publication/the-design-of-mid-air-ultrasonic-haptic-interfaces-based-on-the-perception-of-lines/
Mid-air ultrasonic feedback is a new form of haptic stimulation supporting mid-air, touch-free user interfaces. Functional implementation of ultrasonic haptic (UH) interfaces depends upon the ability to accurately distinguish between the intensity, shape, orientation, and movement of a signal. This user study (N = 15) investigates the ability to non-visually perceive two ultrasonic lines with varying lengths (3, 5, and 7 cm) and orientations (vertical and horizontal) using the palm of the hand. Key results showed that: (1) the orientation of the lines had no effect on a user’s accuracy when determining their relative lengths, (2) line length distinction significantly improved when the length difference was at least 4 cm, and (3) a clear learning curve was evident when evaluating a new user’s ability to perceive ultrasonic signals. The capabilities of UH technology identified and discussed within this study will help engineer user-friendly and functional mid-air haptic interfaces for future applications.

Keywords: Ultrasonic feedback, Mid-air haptics, User interfaces, Human-computer interaction

Citation: Loranger, J.A., Brown, J.R., Kindler, H.D., Fink, P.D.S., Dimitrov, V., & Giudice, N.A. (2023). The Design of mid-air ultrasonic haptic interfaces based on the perception of lines. In: I.L. Nunes (Ed.), Human Factors and Systems Interaction. Proceedings of the International Conference on Applied Human Factors and Ergonomics (AHFE ’23), Vol. 84, 18-26. AHFE International, USA. DOI: http://doi.org/10.54941/ahfe1003586

Natural-Language Scene Descriptions for Accessible Non-Visual Museum Exhibit Exploration and Engagement /vemi/publication/2230/
Indoor navigation and exploration of museum environments present unique challenges for visitors who are blind or have significant vision impairments (BVI). Like other indoor spaces, museums are dynamic environments that require both guided and self-guided tour experiences to support BVI visitor independence. In order to fully engage with a museum and its exhibits, BVI visitors need assistive technologies that support natural-language (NL) spatial descriptions, providing flexibility in the way users receive descriptive information about gallery scenes and exhibit objects. In addition, the user interface must be connected to a robust database of spatial information that can interact with mobile device tracking data and user queries. This paper describes the results of an early-stage demonstration project that utilizes an existing graph database model to support an NL information access and art gallery exploration system. Specifically, we investigated using a commercially available voice assistant interface to support NL descriptions of a gallery space and the art objects within it. Future work involves refining the language structures for scene and object descriptions, integrating the voice assistant interface with tracking and navigation technologies, and additional user testing with sighted and BVI museum visitors.
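To make the graph-backed description pipeline concrete, the sketch below models a gallery and its exhibit objects as a tiny in-memory graph and traverses it to compose a spoken scene description. The schema, node names, and wording are illustrative assumptions, not the project's actual database or voice-assistant integration.

# A minimal in-memory stand-in for the kind of graph-backed scene model the
# paper describes: gallery nodes connected to exhibit-object nodes, queried
# to produce a description for a voice assistant to speak. The schema and
# wording are illustrative assumptions, not the project's actual database.
GALLERY_GRAPH = {
    "nodes": {
        "gallery_1": {"type": "gallery", "name": "Modern Art Gallery"},
        "obj_1": {"type": "exhibit", "name": "Bronze Figure",
                  "position": "to your left, about three meters away"},
        "obj_2": {"type": "exhibit", "name": "Landscape Painting",
                  "position": "straight ahead on the far wall"},
    },
    "edges": [("gallery_1", "CONTAINS", "obj_1"),
              ("gallery_1", "CONTAINS", "obj_2")],
}

def describe_scene(graph, gallery_id):
    """Traverse CONTAINS edges from a gallery node and compose a
    natural-language scene description of the exhibit objects it holds."""
    gallery = graph["nodes"][gallery_id]["name"]
    objects = [graph["nodes"][dst] for src, rel, dst in graph["edges"]
               if src == gallery_id and rel == "CONTAINS"]
    parts = [f"{o['name']} is {o['position']}" for o in objects]
    return f"You are in the {gallery}. " + ". ".join(parts) + "."

print(describe_scene(GALLERY_GRAPH, "gallery_1"))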

Citation: Doore, S.A., Sarrazin, A.C., & Giudice, N.A. (2019). Natural-Language Scene Descriptions for Accessible Non-Visual Museum Exhibit Exploration and Engagement. In K. Stock, C. Jones, & T. Tenbrink (Eds.), Proceedings of Workshops and Posters at the 14th International Conference on Spatial Information Theory (COSIT 2019). Regensburg, Germany: Springer International Publishing, pp. 91-100.

Keywords and phrases: indoor environments, scene descriptions, voice assistant interface

