Loomis, J.M. – VEMI Lab, University of Maine

Spatial working memory for locations specified by vision and audition: Testing the amodality hypothesis

Abstract: Spatial working memory can maintain representations from vision, hearing, and touch, representations referred to here as spatial images. The present experiment addressed whether spatial images from vision and hearing that are simultaneously present within working memory retain modality-specific tags or are amodal. Observers were presented with short sequences of targets varying in angular direction, with the targets in a given sequence being all auditory, all visual, or a sequential mixture of the two. On two thirds of the trials, one of the locations was repeated, and observers had to respond as quickly as possible when detecting this repetition. Ancillary detection and localization tasks confirmed that the visual and auditory targets were perceptually comparable. Response latencies in the working memory task showed small but reliable costs in performance on trials involving a sequential mixture of auditory and visual targets, as compared with trials of pure vision or pure audition. These deficits were statistically reliable only for trials on which the modalities of the matching location switched from the penultimate to the final target in the sequence, indicating a switching cost. The switching cost for the pair in immediate succession means that the spatial images representing the target locations retain features of the visual or auditory representations from which they were derived. However, there was no reliable evidence of a performance cost for mixed modalities in the matching pair when the second of the two did not immediately follow the first, suggesting that more enduring spatial images in working memory may be amodal.
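The trial structure is easiest to see in code. Below is a minimal Python sketch, not the authors' materials, of how a sequence of (direction, modality) targets could be classified: whether a location repeats, whether the matching pair mixes modalities, and whether the repeat follows its first presentation immediately, the case that showed the switching cost. All names and values are illustrative.

```python
# Minimal sketch of trial classification in this paradigm (hypothetical code).
# Each target is a (direction_deg, modality) pair.

def classify_trial(sequence):
    """Find a repeated direction and classify the matching pair.

    Returns None for no-repetition trials; otherwise a dict noting whether
    the two presentations of the matching location share a modality and
    whether they occurred in immediate succession.
    """
    seen = {}  # direction -> (index, modality) of its first presentation
    for i, (direction, modality) in enumerate(sequence):
        if direction in seen:
            first_i, first_mod = seen[direction]
            return {
                "match_index": i,
                "same_modality": modality == first_mod,
                "immediate": i - first_i == 1,  # e.g., penultimate-to-final repeat
            }
        seen[direction] = (i, modality)
    return None  # one third of trials contained no repetition

# Example: a mixed-modality trial where the repeat follows immediately.
trial = [(30, "visual"), (-45, "auditory"), (-45, "visual")]
print(classify_trial(trial))
# {'match_index': 2, 'same_modality': False, 'immediate': True}
```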

Citation: Loomis, J. M., Klatzky, R. L., McHugh, B., & Giudice, N. A. (2012). Spatial working memory for locations specified by vision and audition: Testing the amodality hypothesis. Attention, Perception, & Psychophysics, 74, 1260-1267.

Modality-independent coding of spatial layout in the human brain

Abstract: In many nonhuman species, neural computations of navigational information such as position and orientation are not tied to a specific sensory modality [1, 2]. Rather, spatial signals are integrated from multiple input sources, likely leading to abstract representations of space. In contrast, the potential for abstract spatial representations in humans is not known, because most neuroscientific experiments on human navigation have focused exclusively on visual cues. Here, we tested the modality independence hypothesis with two functional magnetic resonance imaging (fMRI) experiments that characterized computations in regions implicated in processing spatial layout [3]. According to the hypothesis, such regions should be recruited for spatial computation of 3D geometric configuration, independent of a specific sensory modality. In support of this view, sighted participants showed strong activation of the parahippocampal place area (PPA) and the retrosplenial cortex (RSC) for visual and haptic exploration of information-matched scenes but not objects. Functional connectivity analyses suggested that these effects were not related to visual recoding, which was further supported by a similar preference for haptic scenes found with blind participants. Taken together, these findings establish the PPA/RSC network as critical in modality-independent spatial computations and provide important evidence for a theory of high-level abstract spatial information processing in the human brain.

Citation: Wolbers, T.*, Loomis, J.M., Klatzky, R.L., Wutte, M., & Giudice, N.A.* (2011). Modality-independent coding of spatial layout in the human brain. Current Biology, 21(11), 984-989. (*Equal contribution of authors.)

Functional Equivalence of Spatial Images from Touch and Vision: Evidence from Spatial Updating in Blind and Sighted Individuals

Abstract: This research examined whether visual and haptic map learning yield functionally equivalent spatial images in working memory, as evidenced by similar encoding bias and updating performance. In 3 experiments, participants learned 4-point routes either by seeing or feeling the maps. At test, blindfolded participants made spatial judgments about the maps from imagined perspectives that were either aligned or misaligned with the maps as represented in working memory. Results from Experiments 1 and 2 revealed a highly similar pattern of latencies and errors between visual and haptic conditions. These findings extend the well-known alignment biases for visual map learning to haptic map learning, provide further evidence of haptic updating, and most important, show that learning from the 2 modalities yields very similar performance across all conditions. Experiment 3 found the same encoding biases and updating performance with blind individuals, demonstrating that functional equivalence cannot be due to visual recoding and is consistent with an amodal hypothesis of spatial images.

Citation: Giudice, N.A., Betty, M.R., & Loomis, J.M. (2011). Functional equivalence of spatial images from touch and vision: Evidence from spatial updating in blind and sighted individuals. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37(3), 621-634.

Evidence for amodal representations after bimodal learning: Integration of haptic-visual layouts into a common spatial image

Abstract: Participants learned circular layouts of six objects presented haptically or visually, then indicated the direction from a start target to an end target of the same or different modality (intramodal versus intermodal). When objects from the two modalities were learned separately, superior performance for intramodal trials indicated a cost of switching between modalities. When a bimodal layout intermixing modalities was learned, intra- and intermodal trials did not differ reliably. These findings indicate that a spatial image, independent of input modality, can be formed when inputs are spatially and temporally congruent, but not when modalities are temporally segregated in learning.

Citation: Giudice, N.A., Klatzky, R.L., & Loomis, J.M. (2009). Evidence for amodal representations after bimodal learning: Integration of haptic-visual layouts into a common spatial image. Spatial Cognition & Computation, 9(4), 287-304.

Environmental learning without vision: Effects of cognitive load on interface design

Abstract: Blindfolded participants were guided along routes using one of two display modes: spatial language (“left,” “right,” or “straight”) or spatialized audio (where the perceived sound location indicates the target direction). Half of the route guidance trials were run concurrently with a secondary vibrotactile N-back task. To assess cognitive map development, subjects performed a homing task from the route’s terminus. Because spatialized audio displays are processed perceptually, we hypothesized that they would be less affected by increased cognitive load than language displays, which require cognitive mediation. In corroboration, results showed that the secondary task had no effect on cognitive map performance for guidance by spatialized audio but led to significantly larger homing errors for guidance by spatial language. Spatialized audio was also reliably faster and more accurate than language for traversing the route. These results have important implications for the design of future navigation and guidance systems for visually impaired persons.
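To make the homing measure concrete, here is a minimal Python sketch, assuming homing error is scored as the straight-line distance between the participant's stopping point and the route origin (an assumption for illustration; coordinates are made up).

```python
import math

def homing_error(stop_xy, origin_xy=(0.0, 0.0)):
    """Straight-line (Euclidean) distance between the participant's
    stopping point and the route's starting point, the homing target."""
    dx = stop_xy[0] - origin_xy[0]
    dy = stop_xy[1] - origin_xy[1]
    return math.hypot(dx, dy)

# Hypothetical stopping point 1.2 m east and 0.8 m south of the origin.
print(homing_error((1.2, -0.8)))  # ~1.44 m of homing error
```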

Citation: Giudice, N.A., Marston, J.R., Klatzky, R.L., Loomis, J.M., & Golledge, R.G. (2008). Environmental learning without vision: Effects of cognitive load on interface design. Proceedings of the 9th International Conference on Low Vision (Vision 2008), Montreal, Canada.

An n-back task using vibrotactile stimulation with comparison to an auditory analogue

Abstract: We report a vibrotactile version of the common n-back task used to study working memory. Subjects wore vibrotactile stimulators on three fingers of one hand, and they responded by pressing a button with the other hand whenever the current finger matched the one stimulated n items back. Experiment 1 showed a steep decline in performance as n increased from 1 to 3; each additional level of n decreased performance by 1.5 d′ units on average. Experiment 2 supported a central capacity locus for the vibrotactile task by showing that it correlated strongly with an auditory analogue; both tasks were also related to standard digit span. The vibrotactile version of n-back may be particularly useful in dual-task contexts. It allows the assessment of cognitive capacity in sensory-impaired populations in which touch remains intact, and it may find use in brain-imaging studies in which vibrotactile stimuli impose a memory load.
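For readers unfamiliar with d′: it is the standard signal detection measure of sensitivity, the difference between the z-transformed hit rate and false-alarm rate. A minimal Python sketch of that computation follows; the rates are made up for illustration, not the study's data.

```python
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index from signal detection theory:
    d' = z(hit rate) - z(false-alarm rate)."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

# Hypothetical rates for an easy (1-back) and a hard (3-back) condition.
print(d_prime(0.95, 0.05))  # ~3.29: high sensitivity
print(d_prime(0.60, 0.30))  # ~0.78: much lower sensitivity
```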

Citation: Klatzky, R.L., Giudice, N.A., Tietz, J., Marston, J.R., Golledge, R.G., & Loomis, J.M. (2007). An n-back task using vibrotactile stimulation with comparison to an auditory analogue. Behavior Research Methods, 40(1), 367-372.

Cognitive Load of Navigating Without Vision When Guided by Virtual Sound Versus Spatial Language

Abstract: A vibrotactile N-back task was used to generate cognitive load while participants were guided along virtual paths without vision. As participants stepped in place, they moved along a virtual path of linear segments. Information was provided en route about the direction of the next turning point, by spatial language (“left,” “right,” or “straight”) or virtual sound (i.e., the perceived azimuth of the sound indicated the target direction). The authors hypothesized that virtual sound, being processed at direct perceptual levels, would impose a lower load than even simple language commands, which require cognitive mediation. As predicted, whereas the guidance modes did not differ significantly in the no-load condition, participants performing the N-back task traveled shorter distances and took less time to complete a path when guided by virtual sound than by spatial language. Virtual sound also produced better N-back performance than spatial language. By indicating the superiority of virtual sound for guidance when cognitive load is present, as is characteristic of everyday navigation, these results have implications for guidance systems for the visually impaired and others.
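The virtual sound display hinges on a simple piece of geometry: the rendered sound must appear at the bearing from the listener's current position and heading to the next turning point. Below is a minimal Python sketch of that bearing computation in a 2-D plan view; it is an illustration with hypothetical names, not the study's rendering pipeline, which then spatializes the sound at the computed azimuth.

```python
import math

def relative_azimuth(listener_xy, heading_deg, target_xy):
    """Bearing from the listener to the target, relative to the listener's
    heading (degrees; 0 = straight ahead, positive = to the right)."""
    dx = target_xy[0] - listener_xy[0]
    dy = target_xy[1] - listener_xy[1]
    bearing = math.degrees(math.atan2(dx, dy))  # 0 deg points along +y
    # Wrap the difference into [-180, 180) so "behind" stays near +/-180.
    return (bearing - heading_deg + 180.0) % 360.0 - 180.0

# Listener at the origin facing +y; next turning point ahead and to the right.
print(relative_azimuth((0.0, 0.0), 0.0, (3.0, 4.0)))  # ~36.9 deg to the right
```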

Citation: Klatzky, R.L., Marston, J.R., Giudice, N.A., Golledge, R.G., & Loomis, J.M. (2006). Cognitive load of navigating without vision when guided by virtual sound versus spatial language. Journal of Experimental Psychology: Applied, 12(4), 223-232.
