Klatzky, R. L. – VEMI Lab, University of Maine

Spatial updating of multiple targets: Comparison of younger and older adults

Abstract: When walking without vision, people mentally keep track of the directions and distances of previously viewed objects, a process called spatial updating. The current experiment indicates that while people across a large age range are able to update multiple targets in memory without perceptual support, aging negatively affects accuracy, precision, and decision time. Participants (20 to 80 years of age) viewed one, three, or six targets (colored lights) on the floor of a dimly lit room. Then, without vision, they walked to a target designated by color, either directly or indirectly (via a forward turning point). The younger adults’ final stopping points were both accurate (near target) and precise (narrowly dispersed), but updating performance did degrade slightly with the number of targets. Older adults’ performance was consistently worse than that of the younger group, but the lack of interaction between age and memory load indicates that the effect of age on performance was not further exacerbated by a greater number of targets. The number of targets also significantly increased the latency required to turn toward the designated target for both age groups. Taken together, the results extend previous work demonstrating impressive updating performance by younger adults and provide novel evidence that older adults manifest small but consistent degradation in updating multitarget arrays.

Citation: Bennett, C. R., Loomis, J. M., Klatzky, R. L., & Giudice, N. A. (2017). Spatial updating of multiple targets: Comparison of younger and older adults. Memory & Cognition, 45(7), 1240-1251.

Download PDF

Spatial updating of haptic arrays across the lifespan

Abstract:

Background: Aging research addressing spatial learning, representation, and action is almost exclusively based on vision as the input source. Much less is known about how spatial abilities from non-visual inputs, particularly from haptic information, may change during lifespan spatial development. This research studied whether learning and updating of haptic target configurations differs as a function of age.

Method: Three groups of participants, ranging from 20 to 80 years, felt four-target table-top circular arrays and then performed several tasks to assess lifespan haptic spatial cognition. Measures evaluated included egocentric pointing, allocentric pointing, and array reconstruction after physical or imagined spatial updating.

Results: All measures revealed reliable differences between the oldest and youngest participant groups. Our age effect for egocentric pointing contrasts with previous findings showing preserved egocentric spatial abilities. Error performance on the allocentric pointing and map reconstruction tasks showed a clear age effect, with the oldest participants exhibiting the greatest error, in line with other studies in the visual domain. Post-updating performance sharply declined with age but did not reliably differ between physical and imagined updating.

Conclusion: Results suggest that there is a general trend for age-related degradation of spatial abilities after haptic learning, with the greatest declines manifesting in all measures in people over 60 years of age. Results are interpreted in terms of a spatial aging effect on mental transformations of 3D representations of space in working memory.

Citation:

Giudice, N. A., Bennett, C. R., Klatzky, R. L., & Loomis, J. M. (2017). Spatial updating of haptic arrays across the lifespan. Experimental Aging Research, 43(3), 274-290.

Download PDF

Visual, tangible, and touch-screen: Comparison of platforms for displaying simple graphics

Abstract:

Four different platforms were compared in a task of exploring an angular stimulus and reporting its value. The angle was explored visually, tangibly as raised fine-grit sandpaper, or on a touchscreen with a frictional or vibratory signal. All platforms produced highly accurate angle judgments. Differences were found, however, in exploration time, with vision fastest as expected, followed by tangible, vibration, and friction. Relative to the tangible display, touchscreens evidenced greater noise in the perceived angular value, with a particular disadvantage for friction. The latter must be interpreted in the context of a first-generation display and a rapidly advancing technology. On the whole, the results point both to promise and barriers in the use of refreshable graphical displays for blind users.

Citation:

Gershon, P., Klatzky, R. L., Palani, H., & Giudice, N. A. (2016). Visual, tangible, and touch-screen: Comparison of platforms for displaying simple graphics. Assistive Technology, 28(1), 1-6.

Download PDF

Touch-Screen Technology for the Dynamic Display of 2D Spatial Information Without Vision: Promise and Progress

Abstract: Many developers wish to capitalize on touch-screen technology for developing aids for the blind, particularly by incorporating vibrotactile stimulation to convey patterns on their surfaces, which otherwise are featureless. Our belief is that they will need to take into account basic research on haptic perception in designing these graphics interfaces. We point out constraints and limitations in haptic processing that affect the use of these devices. We also suggest ways to use sound to augment basic information from touch, and we include evaluation data from users of a touch-screen device with vibrotactile and auditory feedback that we have been developing, called a vibro-audio interface.

Citation:

Klatzky, R. L., Giudice, N. A., Bennett, C. R., & Loomis, J. M. (2014). Touch-screen technology for the dynamic display of 2D spatial information without vision: Promise and progress. Multisensory Research, 27(5-6), 359-378.

Download PDF

Combining locations from working memory and long-term memory into a common spatial image

Abstract: This research uses a novel integration paradigm to investigate whether target locations read in from long-term memory (LTM) differ from perceptually encoded inputs in spatial working memory (SWM) with respect to systematic spatial error and/or noise, and whether SWM can simultaneously encompass both of these sources. Our results provide evidence for a composite representation of space in SWM derived from both perception and LTM, albeit with a loss in spatial precision of locations retrieved from LTM. More generally, the data support the concept of a spatial image in working memory and extend its potential sources to representations retrieved from LTM.

Citation: Giudice, N. A., Klatzky, R. L., Bennett, C. R., & Loomis, J. M. (2013). Combining locations from working memory and long-term memory into a common spatial image. Spatial Cognition and Computation, 13(2), 103-128.

Download PDF

Representing 3D Space in Working Memory: Spatial Images from Vision, Hearing, Touch, and Language

Abstract: The chapter deals with a form of transient spatial representation referred to as a spatial image. Like a percept, it is externalized, scaled to the environment, and can appear in any direction about the observer. It transcends the concept of modality, as it can be based on inputs from the three spatial senses, from language, and from long-term memory. Evidence is presented that supports each of the claimed properties of the spatial image, showing that it is quite different from a visual image. Much of the evidence presented is based on spatial updating. A major concern is whether spatial images from different input modalities are functionally equivalent: that is, whether once instantiated in working memory, the spatial images from different modalities have the same functional characteristics with respect to subsequent processing, such as that involved in spatial updating. Going further, the research provides some evidence that spatial images are amodal (i.e., do not retain modality-specific features).

Citation: Loomis, J. M., Klatzky, R. L., & Giudice, N. A. (2013). Representing 3D space in working memory: Spatial images from vision, hearing, touch, and language. In Multisensory imagery (pp. 131-155). Springer New York.

Download PDF

Perception of 3-D location based on vision, touch, and extended touch

Abstract: Perception of the near environment gives rise to spatial images in working memory that continue to represent the spatial layout even after cessation of sensory input. As the observer moves, these spatial images are continuously updated. This research is concerned with (1) whether spatial images of targets are formed when they are sensed using extended touch (i.e., using a probe to extend the reach of the arm) and (2) the accuracy with which such targets are perceived. In Experiment 1, participants perceived the 3-D locations of individual targets from a fixed origin and were then tested with an updating task involving blindfolded walking followed by placement of the hand at the remembered target location. Twenty-four target locations, representing all combinations of two distances, two heights, and six azimuths, were perceived by vision or by blindfolded exploration with the bare hand, a 1-m probe, or a 2-m probe. Systematic errors in azimuth were observed for all targets, reflecting errors in representing the target locations and updating. Overall, updating after visual perception was best, but the quantitative differences between conditions were small. Experiment 2 demonstrated that auditory information signifying contact with the target was not a factor. Overall, the results indicate that 3-D spatial images can be formed of targets sensed by extended touch and that perception by extended touch, even out to 1.75 m, is surprisingly accurate.

Citation: Giudice, N.A., Klatzky, R.L., Bennett, C.R., & Loomis, J.M. (2013). Perception of 3-D location based on vision, touch, and extended touch. Experimental Brain Research, 224(1), 141-153.

Download PDF

Sensory substitution of vision: Importance of perceptual and cognitive processing

Citation: Loomis, J. M., Klatzky, R. L., & Giudice, N. A. (2012). Sensory substitution of vision: Importance of perceptual and cognitive processing. In R. Manduchi & S. Kurniawan (Eds.), Assistive Technology for Blindness and Low Vision (pp. 162-191). Boca Raton, FL: CRC Press.

Download PDF
