Mental Imagery and Human-Computer Interaction Lab
Spatial Navigation and Individual Differences in Environmental Representations
This project involves studies of navigational abilities in virtual (driving-simulator) and real large-scale environments. We examined whether procedural- and survey-type representations of an environment would be present after traversing a novel route, and whether individual differences in visual-spatial abilities predicted the types of representations formed. Our results challenge experience-based, sequential models of adults’ development of environmental representations. Furthermore, more spatially integrated sketch-maps were associated with higher spatial abilities. Our findings suggest that spatial abilities, not experience alone, affect the types of representations formed (Blajenkova, Motes, & Kozhevnikov, 2005; Motes, Blajenkova, & Kozhevnikov, 2004).

Spatial Updating
Spatial updating refers to the cognitive process that computes the spatial relationships between an individual and his/her surrounding environment as he/she moves, based on perceptual information about his/her own movements. It is one of the fundamental forms of navigation, and it contributes to object and scene recognition by predicting the appearance of objects or scenes from novel vantage points, so that the individual can recognize them easily while moving.
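As a rough computational analogy (a minimal sketch of our own, not the lab's model), spatial updating can be pictured as re-expressing an object's egocentric coordinates after a self-motion made up of a translation and a body turn; the 2-D simplification and the function name below are illustrative assumptions.

```python
import math

def update_egocentric(obj_xy, step_forward, turn_left_deg):
    """Toy 2-D spatial updating: re-express an object's egocentric
    coordinates (x = right, y = ahead) after the observer walks
    step_forward units ahead and then turns turn_left_deg degrees
    to the left. An illustrative analogy, not the lab's model."""
    x, y = obj_xy
    y -= step_forward                 # walking forward brings the object closer
    a = math.radians(-turn_left_deg)  # a leftward turn rotates the egocentric
                                      # frame, so object coordinates rotate clockwise
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

# An object 2 m straight ahead; walk 1 m forward, then turn 90 deg left:
print(update_egocentric((0.0, 2.0), 1.0, 90.0))  # -> about (1.0, 0.0): now 1 m to the right
```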
In our research (Motes, Finlay, & Kozhevnikov, 2006; Finlay, Motes, & Kozhevnikov, 2006), we systematically compared scene-recognition response time (RT) and accuracy patterns following observer versus scene movement, across view changes ranging from 0 to 360 degrees. Regardless of whether the scene was rotated or the observer moved, greater angular disparity between the judged and encoded views produced slower RTs. Thus, in contrast to previous findings, which did not consider such a wide range of observer movement, our data show that observer movement does not necessarily automatically update representations of spatial layouts in small-scale (room-sized) environments. The results also raise questions about the effects of encoding duration and encoding point of view on the automatic spatial updating of scene representations.
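For readers who want the manipulation in concrete terms, the helper below is a sketch of how angular disparity between an encoded and a judged view could be quantified; folding the 0 to 360 degree range onto 0 to 180 degrees (treating clockwise and counterclockwise view changes alike) is our illustrative choice, not the analysis from the cited papers.

```python
def angular_disparity(encoded_deg, judged_deg):
    """Shortest rotation (0-180 deg) separating the encoded and judged
    views. The fold to 0-180 deg is an illustrative assumption."""
    diff = abs(encoded_deg - judged_deg) % 360
    return min(diff, 360 - diff)

print(angular_disparity(0, 90))   # 90
print(angular_disparity(0, 270))  # 90: same shortest rotation, opposite direction
```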
Allocentric vs. Egocentric Spatial Processing
Our research on allocentric-egocentric spatial processing includes three main directions:
- Development of allocentric and egocentric spatial assessments
- Spatial Navigation and Individual Differences in Environmental Representations
- Spatial Updating
This line of research examines the dissociation between two types of spatial imagery transformations: allocentric spatial transformations, which involve an object-to-object representational system and encode the location of one object or its parts with respect to other objects, versus egocentric perspective transformations, which involve a self-to-object representational system.
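To make the distinction concrete, here is a minimal sketch of our own (a toy 2-D example, not one of the lab's tasks): an allocentric transformation imagines the objects themselves rotating while the observer stays fixed, whereas an egocentric perspective transformation leaves the objects in place and re-expresses the same layout from a newly imagined viewpoint.

```python
import math

def rotate(points, deg):
    """Rotate 2-D points counterclockwise by deg about the origin."""
    a = math.radians(deg)
    return [(x * math.cos(a) - y * math.sin(a),
             x * math.sin(a) + y * math.cos(a)) for x, y in points]

layout = [(1.0, 0.0), (0.0, 1.0)]  # two objects in environment coordinates

# Allocentric (object-to-object): the objects are imagined rotating
# within a fixed environment, from a stationary point of view.
rotated_objects = rotate(layout, 90)

# Egocentric (self-to-object): the objects stay put; the observer
# imagines turning 90 deg to the left, so the unchanged layout is
# re-expressed in the new body frame (numerically, a -90 deg rotation).
new_perspective = rotate(layout, -90)

print(rotated_objects)  # the array has moved relative to the environment
print(new_perspective)  # the same array, seen from the imagined viewpoint
```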

In our lab, we examine individual differences in egocentric (imagining taking a different perspective in space) and allocentric (mentally manipulating objects from a stationary point of view) spatial abilities, and we develop assessments of these abilities. Our research also seeks to determine how these two types of spatial ability relate to locomotion and spatial navigation.
Research
The research in the Mental Imagery lab focuses on investigating visualization processes and individual differences in mental imagery and cognitive style. In particular, we examine how individual differences in visualization ability affect more complex activities, such as spatial navigation, learning, and problem solving in mathematics, science, and art. We also explore ways to train visual-object and visual-spatial imagery skills and design three-dimensional immersive virtual environments that can accommodate individual differences and learning styles.
The Mental Imagery and Human-Computer Interaction lab research focuses on five main directions:
- Object-spatial dissociation in individual differences in imagery
- 3D visualization in immersive virtual environments
- Allocentric vs. egocentric spatial processing
- Visualization processes in different domains (meditation, science, arts, and medical applications)
- Cognitive style
Our approach integrates qualitative and quantitative behavioral research methods, as well as neuroimaging techniques (EEG, fMRI). Furthermore, we develop and validate assessment and training paradigms for visualization ability, using 3D immersive virtual reality.
Based on behavioral and neuroscience evidence, we formulated a theoretical framework of individual differences in visual imagery, suggesting that visualization ability is not a single undifferentiated construct but rather divides into two main dimensions, object and spatial, with the spatial dimension further divided into allocentric and egocentric components. These visualization abilities underlie success at different complex, real-world tasks and predict specialization in different professional and academic domains.

