Justin K. Dimmel – VEMI Lab, University of Maine

Non-Visual Interfaces for Visual Learners: Multisensory Learning of Graphic Primitives
/vemi/publication/non-visual-interfaces-for-visual-learners-multisensory-learning-of-graphic-primitives/ (Mon, 09 Dec 2024)

Multimodal learning systems have been found to be effective in studies investigating the cognitive theory of multimedia learning. Yet this research is rarely put into practice in Science, Technology, Engineering, and Math (STEM) learning environments, which are dominated by visual graphics. Introducing multimodal learning systems into STEM settings and allowing students to access dual-channel cues beyond visual perception may help more students process information in their preferred modality. The purpose of this study was to investigate the usability, effectiveness, and design of multimodal interfaces for enhancing access to graphical representations. We used existing theories of multisensory information processing to study how sighted participants could learn and interpret spatial primitives and graphical concepts presented via three non-visual conditions: natural language (NL) descriptions, haptic renderings, and an NL-Haptic combination. The results showed that access to haptic-only renderings produced the least accurate responses, whereas NL descriptions with and without haptics led to similar participant performance when learning graphical content without vision. Performance was also affected by the complexity of the graphical content, with the highest accuracy observed for closed forms, compared with paired line segments and line/polygon intersections.
We argue that universally designed, multimodal learning environments can transcend traditional, visual diagrams by using non-visual channels and commercial hardware to support learners with different sensory abilities, preferences, and processing needs. These findings extend theoretical insights into non-visual information processing, advancing our understanding of multisensory learning in sighted individuals.

Citation: Doore, Stacy A., Justin R. Brown, Saki Imai, Justin K. Dimmel, and Nicholas A. Giudice. “Non-Visual Interfaces for Visual Learners: Multisensory Learning of Graphic Primitives.” IEEE Access 12 (2024): 189926–40.

Comparing Natural Language and Vibro-Audio Modalities for Inclusive STEM Learning with Blind and Low Vision Users
/vemi/publication/comparing-natural-language-and-vibro-audio-modalities-for-inclusive-stem-learning-with-blind-and-low-vision-users/ (Fri, 27 Oct 2023)

Data representations continue to be produced in predominantly visual forms within STEM disciplines. The disparity in access to these graphical representations between students who are blind or have low vision (BLV) and their sighted peers is exacerbated as the adoption of digital screens becomes more prevalent in educational settings. Standard accessibility solutions rely heavily on natural language processing, e.g., screen readers, for non-visual information access. But can other non-visual modalities, like touch, be effective for learning graphical content rendered on touchscreens? To investigate this question, we conducted a user study with a multimodal touchscreen learning system to assess the effectiveness of two non-visual graphical presentation modalities: 1) a vibro-audio condition, which used the device’s embedded vibration motor plus an auditory content overview (a spatial and multimodal technique), and 2) a natural language condition that provided a complete description of the content (a cognitively mediated technique). BLV participants (N = 19) were presented with the learning system and asked to answer multiple-choice questions about three different graph types using both presentation modalities. Findings showed that the two presentation modalities were functionally equivalent for learning the graphical information presented, suggesting that for these stimuli, the presentation modality did not have a significant effect on participant graph-learning accuracy. However, the type of graph being learned did have a reliable effect.
Moreover, a majority of the participants stated a preference for the natural language condition and, on average, learned graphs faster with it than with the vibro-audio condition. The similarity found between the two learning modalities is interpreted as supporting user learning preferences while providing redundancy in the information being communicated. This approach layers the various types of information found in graphical representations (textual, numerical, spatial) for learners with accessibility needs, providing them with more control, independence, and responsive tools for optimizing their own educational materials.

Keywords: Applied Computing, Education, Interactive Learning Environments, Human-Centered Computing, Accessibility, Accessibility Systems and Tools, Human Computer Interaction, HCI Design and Evaluation Methods, Usability Testing

Citation: Brown et al., “Comparing Natural Language and Vibro-Audio Modalities for Inclusive STEM Learning with Blind and Low Vision Users.”
(https://www.sciencedirect.com/science/article/pii/S1369847823001870)

