Development of Spatial Ability Tests


In our lab, we design, refine, and validate assessments of visualization abilities in both immersive and non-immersive environments.


Our research has demonstrated individual differences in visualization abilities: a dissociation between object and spatial visual abilities (Kozhevnikov, Kosslyn, & Shephard, 2005), and a further distinction between allocentric and egocentric spatial abilities (Kozhevnikov & Hegarty, 2001). Although allocentric and egocentric spatial abilities are correlated, they have distinguishable characteristics and show different relationships to real-world performance (Kozhevnikov, Motes, Rasch, & Blajenkova, 2006; Kozhevnikov & Hegarty, 2001; Kozhevnikov, Blazhenkova, & Becker, 2010).


Our lab has developed unique tests for assessing egocentric mental transformation ability, the 2D and 3D Perspective-Taking tests, which reliably predict spatial navigation and orientation performance. These tests are currently used to assess the spatial abilities of navigators and pilots, and in the Man-Vehicle Laboratory at MIT to predict astronauts’ spatial orientation skills.


Furthermore, experimental verification and comparative analysis of the 2D non-immersive and the immersive 3D Perspective-Taking Ability (PTA) tests provided evidence that the new 3D PTA test is the most effective instrument for measuring spatial orientation and spatial navigation abilities.


In addition, we have developed 3D immersive tests of allocentric visualization ability (Mental Rotation).


Neural and Behavioral Correlates of 3D Visual-Spatial Transformations


Our research examines how different environments affect encoding strategies and the choice of allocentric versus egocentric frames of reference. We compared subjects’ performance on allocentric (e.g., Mental Rotation) and egocentric (e.g., Perspective-Taking) spatial tasks in non-immersive and immersive 2D vs. 3D environments. Our findings demonstrate a unique pattern of responses in the 3D immersive environment. They suggest that 3D immersive environments differ from 3D non-immersive and 2D environments, and that immersion is necessary to provide adequate information for building the spatial reference frame crucial for higher-order motor planning and egocentric encoding. Furthermore, they suggest that non-immersive environments might encourage more “artificial” encoding strategies, in which the 3D image is encoded with respect to an environmental frame of reference, in particular the computer screen. Immersive environments, in contrast, can provide the feedback an individual needs to use the same strategy and retinocentric frame of reference that he/she would use in a real-world situation.


We plan to further investigate the neural correlates of 3D visual-spatial processing in immersive virtual environments using EEG.

Spatial Updating


Spatial updating refers to the cognitive process that computes the spatial relationship between an individual and his/her surrounding environment as he/she moves, based on perceptual information about his/her own movements. It is one of the fundamental forms of navigation, and it contributes to object and scene recognition by predicting the appearance of objects or scenes from novel vantage points, so that the individual can recognize them easily as he/she moves.
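
As a minimal illustration (hypothetical code, not taken from the studies described here), the following Python sketch shows the kind of computation spatial updating performs: re-deriving an object's self-relative coordinates from its fixed room-centered coordinates each time the observer translates and rotates. All names and values are illustrative.

    import numpy as np

    def egocentric_position(obj_xy, observer_xy, heading_deg):
        # Express an object's allocentric (room-centered) position in the
        # observer's egocentric frame, given the observer's position and
        # heading (degrees counterclockwise from the allocentric +x axis).
        theta = np.radians(heading_deg)
        world_to_body = np.array([[np.cos(theta),  np.sin(theta)],
                                  [-np.sin(theta), np.cos(theta)]])
        return world_to_body @ (np.asarray(obj_xy) - np.asarray(observer_xy))

    # A stationary lamp keeps its room coordinates, but its egocentric
    # coordinates must be updated every time the observer moves.
    lamp = [2.0, 0.0]
    print(egocentric_position(lamp, [0.0, 0.0], 0.0))   # directly ahead
    print(egocentric_position(lamp, [1.0, 0.0], 90.0))  # now one unit to the right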


In our research (Motes, Finlay, & Kozhevnikov, 2006; Finlay, Motes, & Kozhevnikov, 2006), we systematically compared scene-recognition response time (RT) and accuracy patterns following observer versus scene movement, across view changes ranging from 0 to 360 degrees. The results demonstrated that, regardless of whether the scene was rotated or the observer moved, greater angular disparity between judged and encoded views produced slower RTs. Thus, in contrast to previous findings, which did not consider a wide range of observer movement, our data show that observer movement does not necessarily automatically update representations of spatial layouts in small-scale (room-sized) environments, and they raise questions about the effects of duration limitations and encoding points of view on the automatic spatial updating of scene representations.
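
A brief sketch of the disparity measure such a design relies on (hypothetical Python, not code from the published studies): over a full 0 to 360 degree sweep, the effective angular disparity between encoded and judged views folds back at 180 degrees, so if RT tracks disparity, responses should be slowest near 180.

    def angular_disparity(encoded_deg, judged_deg):
        # Minimal angular difference between two views, in [0, 180]:
        # view changes of 90 and 270 degrees leave the judged view the
        # same angular distance from the encoded view.
        diff = abs(encoded_deg - judged_deg) % 360
        return min(diff, 360 - diff)

    # Disparity rises to 180 degrees and falls back over a full rotation.
    for view_change in range(0, 361, 45):
        print(view_change, angular_disparity(0, view_change))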


Allocentric vs. Egocentric Spatial Processing


Our research on allocentric-egocentric spatial processing includes three main directions:


This line of research examines the dissociation between two types of spatial imagery transformations: allocentric spatial transformations, which involve an object-to-object representational system and encode the location of one object (or its parts) with respect to other objects, versus egocentric perspective transformations, which involve a self-to-object representational system.
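
To make the contrast concrete, here is a minimal, hypothetical Python sketch (the object names and coordinates are illustrative, not part of our tasks): an object-to-object code is invariant to where the viewer stands, while a self-to-object code must change with every viewer movement.

    import numpy as np

    # Allocentric (object-to-object) code: the relation between two
    # objects, independent of the viewer.
    def object_to_object(obj_a, obj_b):
        return np.asarray(obj_b) - np.asarray(obj_a)

    # Egocentric (self-to-object) code: the relation from the viewer to
    # an object; it changes whenever the viewer moves.
    def self_to_object(self_xy, obj_xy):
        return np.asarray(obj_xy) - np.asarray(self_xy)

    lamp, chair = [1.0, 2.0], [4.0, 2.0]
    print(object_to_object(lamp, chair))      # same wherever the viewer stands
    print(self_to_object([0.0, 0.0], chair))  # changes with self-motion
    print(self_to_object([2.0, 1.0], chair))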

Spatial Coding Systems

In our lab, we examine individual differences in egocentric (imagining taking a different perspective in space) and allocentric (mentally manipulating objects from a stationary point of view) spatial abilities, and we develop assessments of these abilities. Our research also seeks to discover how these two types of spatial ability relate to locomotion and spatial navigation.


Research



The research in the Mental Imagery lab focuses on investigating visualization processes and individual differences in mental imagery and cognitive style. In particular, we examine how individual differences in visualization ability affect more complex activities, such as spatial navigation, learning, and problem solving in mathematics, science, and art. We also explore ways to train visual-object and visual-spatial imagery skills, and we design three-dimensional immersive virtual environments that can accommodate individual differences and learning styles.


The Mental Imagery and Human-Computer Interaction lab research focuses on five main directions:


Our approach integrates qualitative and quantitative behavioral research methods with neuroimaging techniques (EEG, fMRI). Furthermore, we develop and validate assessment and training paradigms for visualization ability using 3D immersive virtual reality.


Based on behavioral and neuroscience evidence, we formulated a theoretical framework of individual differences in visual imagery, suggesting that visualization ability is not a single undifferentiated construct but is divided into two main dimensions, object and spatial, with the spatial dimension further divided into allocentric and egocentric dimensions. All of these visualization abilities underlie success in different complex, real-world tasks and predict specialization in different professional and academic domains.