Fink, P. D. S. – VEMI Lab, University of Maine

Dude, Where’s My (Autonomous) Car? Defining an Accessible Description Logic for Blind and Low Vision Travelers Using Autonomous Vehicles
Thu, 16 Oct 2025

Purpose: Autonomous vehicles (AVs) are a promising transportation solution for blind and low-vision (BLV) travelers, offering the potential for greater independent mobility. This paper explores the information needs of BLV users across multiple steps of the transportation journey, including finding and navigating to, entering, and exiting vehicles independently.

Methods: A survey with 202 BLV respondents and interviews with 12 BLV individuals revealed the perspectives of BLV end-users and informed the sequencing of natural language information required for successful travel. Whereas the survey identified key information needs across the three trip segments, the interviews helped prioritize how that information should be presented in a sequence of accessible descriptions to travelers.

Results: Taken together, the survey and interviews reveal that BLV users prioritize knowing the vehicle’s make and model and how to find the correct vehicle during the navigation phase. They also emphasize the importance of confirmations about the vehicle’s destination and onboard safety features upon entering the vehicle. While exiting, BLV users value information about hazards and obstacles, as well as knowing which side of the vehicle to exit. Furthermore, results highlight that BLV travelers desire using their own smartphone devices when receiving information from AVs and prefer audio-based interaction.

Conclusion: The findings from this research contribute a structured framework for delivering trip-related information to BLV users, useful for designers incorporating natural language descriptions tailored to each travel segment. This work offers important contributions for sequencing transportation-related descriptions throughout the AV journey, ultimately enhancing the mobility and independence of BLV individuals.

Citation:

Fink, P.D.S., Brown, J.R., Coombs, R., Hamby, E.A., James, K.J., Harris, A., Bond, J., Andrulis, M.E., & Giudice, N.A. (in press). Dude, Where’s My (Autonomous) Car? Defining an Accessible Description Logic for Blind and Low Vision Travelers Using Autonomous Vehicles. Universal Access in the Information Society (UAIS).

 

Does Trust Even Matter? Behavioral Evidence for the Disconnect Between People’s Subjective Trust and Decisions to Use Autonomous Vehicles
Wed, 21 May 2025
One of the most prevalent findings in the autonomous transportation literature is the strong connection between human trust and intentions to use autonomous vehicles (AVs). Indeed, trust is widely regarded as an essential antecedent for the adoption of AVs, a finding based primarily on survey-based methodologies. However, psychological theory has long suggested that self-reported measures of intention are fraught with contradiction – people often say one thing and do another. We examine this potential dichotomy here by experimentally characterizing the relationship between self-reported trust and behavioral decisions to use AVs. An initial survey of 444 participants assessed trust in AVs, identifying three trust categories: high, moderate, and low. Results demonstrated that people trust human rideshare drivers more than AVs. A subsequent in-lab behavioral study with 72 of these participants involved choosing a ride in an AV or a human-driven vehicle. Contrary to prevailing assumptions, our results reveal a deep chasm between intention and behavior: 97% of participants, regardless of trust rating, chose to ride in the AV. This finding indicates that situational context, curiosity, and immediate circumstances heavily influence decision-making, mediating (and even overshadowing) self-reported trust levels. Employing Cognitive Dissonance Theory, we offer potential explanations for why participants reconciled their initial distrust with their subsequent actions. Our findings challenge the narrative that self-reported trust determines AV adoption and highlight the importance of situational factors in shaping user behavior. Using these results, we offer new insights and guidance for deploying AVs, suggesting that controlled, low-risk environments could facilitate broader acceptance of this emerging technology, providing a practical solution to the AV trust problem.
Citation:

Fink, P.D.S., Brown, J.R., Kutzer, K.M., & Giudice, N.A. (2025). Does Trust Even Matter? Behavioral Evidence for the Disconnect Between People’s Subjective Trust and Decisions to Use Autonomous Vehicles. Transportation Research Part F: Traffic Psychology and Behaviour (TRF), 114, 99-117. DOI: 10.1016/j.trf.2025.05.024.

Accessible maps for the future of inclusive ridesharing
Sun, 22 Sep 2024
For people who are blind and low vision (BLV), ridesharing provides an important means of independence and mobility. However, a common challenge relates to finding the vehicle when it arrives at an unanticipated location. Although coordinating with the driver for assistance is serviceable in the near term, new solutions are necessary when a human is no longer available in future automated vehicles. Therefore, this paper presents and evaluates a multisensory smartphone-based map system designed to enable nonvisual tracking of summoned vehicles. Results from a user study with BLV users (N=12) suggest that vibro-audio maps (VAMs) promote superior spatial confidence and reasoning compared to current nonvisual audio interfaces in ridesharing apps, while also being desirable and easy to use. A subsequent expert evaluation, based on improvements suggested during the user study, indicates the practical utility of VAMs to address both current and future wayfinding challenges for BLV travelers.
Citation:
Fink, P.D.S., Milne, H., Caccese, A., Alsamsam, M., Loranger, J., Colley, M., & Giudice, N.A. (2024). Accessible maps for the future of inclusive ridesharing. In the Proceedings of the 16th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ’24), 106-115. https://doi.org/10.1145/3640792.3675736.
X-ray vision as a compensatory augmentation for slowing cognitive map decay in older adults
Sat, 11 May 2024
Safe and efficient navigation often relies on the development and retention of accurate cognitive maps that include inter-landmark relations. For many older adults, cognitive maps are difficult to form and remember over time, which introduces serious challenges for independence and mobility. To address this problem, we explore an innovative compensatory augmentation solution enabling enhanced inter-landmark learning via an “X-Ray Vision” simulation. Results from a user study (n=45) suggest superior cognitive map retention over time among older adults after a single learning session with the augmentation versus a control condition without it. Furthermore, results characterize differences in cognitive map decay between older adults and a control group of younger adults. These findings suggest important implications for future augmented reality devices and the ways in which they can be used to promote memory and independence among older adults.
Citation:
Bennett, C.R., Fink, P.D.S., & Giudice, N.A. (2024). X-ray vision as a compensatory augmentation for slowing cognitive map decay in older adults. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (CHI ’24). DOI: 10.1145/3613904.3642644.
The Design of mid-air ultrasonic haptic interfaces based on the perception of lines
Tue, 18 Jul 2023

Mid-air ultrasonic feedback is a new form of haptic stimulation supporting mid-air, touch-free user interfaces. Functional implementation of ultrasonic haptic (UH) interfaces depends upon the ability to accurately distinguish between the intensity, shape, orientation, and movement of a signal. This user study (N = 15) investigates the ability to non-visually perceive two ultrasonic lines with varying lengths (3, 5, and 7 cm) and orientations (vertical and horizontal) using the palm of the hand. Key results showed that: (1) the orientation of the lines had no effect on a user’s accuracy when determining their relative lengths, (2) line length distinction significantly improved when the length difference was at least 4 cm, and (3) a clear learning curve was evident when evaluating a new user’s ability to perceive ultrasonic signals. The capabilities of UH technology identified and discussed within this study will help engineer user-friendly and functional mid-air haptic interfaces for future applications.

Keywords: Ultrasonic feedback, Mid-air haptics, User interfaces, Human-computer interaction

Citation: Loranger, J.A., Brown, J.R., Kindler, H.D., Fink, P.D.S., Dimitrov, V., & Giudice, N.A. (2023). The Design of mid-air ultrasonic haptic interfaces based on the perception of lines. In: I.L. Nunes (ed) Human Factors and Systems Interaction. Proceedings of the International Conference on Applied Human Factors and Ergonomics (AHFE’23). vol 84, 18-26. AHFE International, USA. DOI: http://doi.org/10.54941/ahfe1003586

Does it press? Investigating the efficacy of an ultrasonic haptic button interface for non-visual driving applications
Tue, 18 Jul 2023

Ultrasonic haptic (UH) feedback employs mid-air ultrasound waves detectable by the palm of the hand. This interface demonstrates a novel opportunity to utilize non-visual input and output (I/O) functionalities in interactive applications, such as vehicle controls that allow the user to keep their eyes on the road. However, more work is needed to evaluate the usability of such an interface. In this study, 16 blindfolded participants completed tasks involving finding and counting UH buttons, associating buttons with audio cues, learning spatial arrangements, and determining button states. Results showed that users were generally successful with 2–4 arranged buttons and could associate them with audio cues with an average accuracy of 77.1%. Participants were also able to comprehend button spatial arrangements with 77.8% accuracy and engage in reconstruction tasks, suggesting the development of reasonably accurate spatial representations. These results signify the capability of UH feedback to provide real-world I/O functionality and serve to guide future exploration in this area.

Keywords: Ultrasonic haptic feedback, Mid-air haptics, Ultrasonic buttons, Non-visual interface

Citation: Alsamsam, M., Fink, P.D.S., Brown, J.R., Dimitrov, V., & Giudice, N.A. (2023). Does it press? Investigating the efficacy of an ultrasonic haptic button interface for non-visual driving applications. In: G. Praetorius, C. Sellberg, & R. Patriarca (eds) Human Factors in Transportation. Proceedings of the International Conference on Applied Human Factors and Ergonomics (AHFE’23). vol 95, 343-353. AHFE International, USA. DOI: http://doi.org/10.54941/ahfe1003819

Comparing Map Learning between Touchscreen-Based Visual and Haptic Displays: A Behavioral Evaluation with Blind and Sighted Users
Sun, 26 Dec 2021

The ubiquity of multimodal smart devices affords new opportunities for eyes-free applications for conveying graphical information to both sighted and visually impaired users. Using previously established haptic design guidelines for generic rendering of graphical content on touchscreen interfaces, the current study evaluates the learning and mental representation of digital maps, representing a key real-world translational eyes-free application. Two experiments involving 12 blind participants and 16 sighted participants compared cognitive map development and test performance on a range of spatio-behavioral tasks across three information-matched learning-mode conditions: (1) our prototype vibro-audio map (VAM), (2) traditional hardcopy-tactile maps, and (3) visual maps. Results demonstrated that when perceptual parameters of the stimuli were matched between modalities during haptic and visual map learning, test performance was highly similar (functionally equivalent) between the learning modes and participant groups. These results suggest equivalent cognitive map formation between both blind and sighted users and between maps learned from different sensory inputs, providing compelling evidence supporting the development of amodal spatial representations in the brain. The practical implications of these results include empirical evidence supporting a growing interest in the efficacy of multisensory interfaces as a primary interaction style for people both with and without vision. Findings challenge the long-held assumption that blind people exhibit deficits on global spatial tasks compared to their sighted peers, with results also providing empirical support for the methodological use of sighted participants in studies pertaining to technologies primarily aimed at supporting blind users.

Citation:

Palani, H.P., Fink, P.D.S., & Giudice, N.A. (2021). Comparing Map Learning between Touchscreen-Based Visual and Haptic Displays: A Behavioral Evaluation with Blind and Sighted Users. Multimodal Technologies and Interaction, 6(1):1.

Fully Autonomous Vehicles for People with Visual Impairment: Policy, Accessibility, and Future Directions
Fri, 27 Aug 2021

A significant number of individuals in the United States report a disability that limits their ability to travel, including many people who are blind or visually impaired (BVI). The implications of restricted transportation result in negative impacts related to economic security, physical and mental health, and overall quality of life. Fully autonomous vehicles (FAVs) present a means to mitigate travel barriers for this population by providing new, safe, and independent travel opportunities. However, current policies governing interactions with the artificial intelligence (AI) ‘at the wheel’ of FAVs do not reflect the accessibility needs articulated by BVI people in the extant literature, failing to encourage use cases that would result in life-changing mobility. By reviewing the legislative and policy efforts surrounding FAVs, we argue that the heart of this problem is a disjointed, laissez-faire approach to FAV accessibility that has yet to actualize the full benefits of this new transportation mode, not only for BVI people, but also for all users. We outline the necessity for a policy framework that guides the design of FAVs to include the concerns of BVI people and then propose legislative and design recommendations aimed at promoting enhanced accessibility, transparency, and fairness during FAV travel.

Citation:

Fink, P. D. S., Holz, J. A., & Giudice, N. A. (2021). Fully Autonomous Vehicles for People with Visual Impairment: Policy, Accessibility, and Future Directions. ACM Transactions on Accessible Computing, 14(3), 15:1-15:17.
Federal Accessibility Standards for Fully Autonomous Vehicles
Fri, 14 May 2021

Self-driving technology is uniquely positioned to benefit people who cannot drive, including people with travel-limiting disabilities and many older adults. However, the lack of federal policy guiding the development of this technology has led to piecemeal recommendations that largely fail to guarantee accessible use in both public and private implementation scenarios. To leverage the full potential of self-driving technology, the Department of Transportation (DOT) should adopt accessibility standards to support autonomous transportation for people with disabilities and older adults. The Biden-Harris Administration has an important opportunity to reimagine accessible transit, capitalize on ongoing federal research programs such as the Inclusive Design Challenge, and extend the benefits of self-driving technology to those who need it most. If enacted, these recommendations will lead to increased independence, workforce participation, and mobility in the future of transportation.

Citation:

Fink, P.D.S., & Giudice, N. A. (2021). Federal accessibility standards for fully autonomous vehicles. Day One Project, Technology Policy Accelerator. https://www.dayoneproject.org/post/federal-accessibility-standards-for-fully-autonomous-vehicles

 

Design Guidelines for Schematizing and Rendering Haptically Perceivable Graphical Elements on Touchscreen Devices
Thu, 17 Sep 2020

This paper explores the viability of new touchscreen-based haptic/vibrotactile interactions as a primary modality for perceiving visual graphical elements in eyes-free situations. For touchscreen-based haptic information extraction to be both accurate and meaningful, the onscreen graphical elements should be schematized and downsampled to: (1) maximize the perceptual specificity of touch-based sensing and (2) account for the technical characteristics of touchscreen interfaces. To this end, six human behavioral studies were conducted with 64 blind and 105 blindfolded-sighted participants. Experiments 1–3 evaluated three key rendering parameters that are necessary for supporting touchscreen-based vibrotactile perception of graphical information, with results providing empirical guidance on both minimally detectable and functionally discriminable line widths, inter-line spacing, and angular separation that should be maintained. Experiments 4–6 evaluated perceptually-motivated design guidelines governing visual-to-vibrotactile schematization required for tasks involving information extraction, learning, and cognition of multi-line paths (e.g., transit maps and corridor intersections), with results providing clear guidance as to the stimulus parameters maximizing accuracy and temporal performance. The six empirically-validated guidelines presented here, based on results from 169 participants, provide designers and content providers with much-needed guidance on effectively incorporating perceptually salient touchscreen-based haptic feedback as a primary interaction style for interfaces supporting nonvisual and eyes-free information access.

Citation:

Palani, H.P., Fink, P.D.S., & Giudice, N.A. (2020). Design Guidelines for Schematizing and Rendering Haptically Perceivable Graphical Elements on Touchscreen Devices. International Journal of Human–Computer Interaction, 36(15), 1393-1414. DOI: 10.1080/10447318.2020.1752464.

