Development of Spatial Ability Tests


In our lab, we design, refine, and validate assessments of visualization abilities, in both immersive and non-immersive environments.


Our research has demonstrated individual differences in visualization abilities: a dissociation between object and spatial visual abilities (Kozhevnikov, Kosslyn, & Shephard, 2005), and a further distinction between spatial allocentric and spatial egocentric abilities (Kozhevnikov & Hegarty, 2001). Although allocentric and egocentric spatial abilities are correlated, they have distinguishable characteristics and show different relationships to real-world performance (Kozhevnikov, Motes, Rasch, & Blajenkova, 2006; Kozhevnikov & Hegarty, 2001; Kozhevnikov, Blazhenkova, & Becker, 2010).


Our lab has developed unique tests for assessing egocentric mental transformation ability, the 2D and 3D Perspective-Taking Tests, which reliably predict spatial navigation and orientation performance. These tests are currently used to assess the spatial abilities of navigators and pilots, and in the Man-Vehicle Laboratory at MIT to predict astronauts' spatial orientation skills.
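To make the egocentric transformation concrete, the sketch below computes the correct response for a generic perspective-taking item: imagine standing at one object, facing a second, and point to a third. The layout, object names, and sign convention are invented for illustration and are not items from the actual test.

```python
import math

def signed_pointing_angle(station, facing, target):
    """Correct response for one item: imagine standing at `station`,
    facing `facing`; point to `target`. Returns the signed angle in
    degrees, negative = to the right, positive = to the left."""
    face = math.atan2(facing[1] - station[1], facing[0] - station[0])
    to_target = math.atan2(target[1] - station[1], target[0] - station[0])
    # Normalize the signed difference into [-180, 180).
    return (math.degrees(to_target - face) + 180.0) % 360.0 - 180.0

# Hypothetical item: stand at the flower, face the tree, point to the cat.
flower, tree, cat = (0.0, 0.0), (0.0, 4.0), (3.0, 3.0)
print(signed_pointing_angle(flower, tree, cat))  # -45.0: cat is 45 deg to the right
```

Scoring an actual response would then compare the direction a participant indicates against this correct angle, taking the wraparound at 180 degrees into account.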


Furthermore, experimental verification and comparative analysis of the 2D non-immersive and immersive 3D Perspective-Taking Ability tests provided experimental evidence that the new 3D PTA test is a uniquely effective instrument for measuring spatial orientation and spatial navigation abilities.


In addition, we developed 3D immersive tests of allocentric visualization ability (Mental Rotation).
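A mental rotation item asks whether two figures are the same object in different orientations or mirror images of each other. As a minimal sketch of that judgment (a toy 2D shape and the standard Kabsch alignment, not our test stimuli), the code below finds the proper rotation that best aligns two point sets: a rotated copy aligns almost perfectly, while a mirror image leaves a large residual.

```python
import numpy as np

def fit_rotation(a, b):
    """Kabsch fit: the proper rotation R (det = +1) minimizing ||a @ R.T - b||.
    `a` and `b` are N x 2 arrays of corresponding points, centred on the origin.
    Returns the rotation angle in degrees and the alignment residual."""
    u, _, vt = np.linalg.svd(a.T @ b)
    d = np.sign(np.linalg.det(vt.T @ u.T))  # force a rotation, not a reflection
    r = vt.T @ np.diag([1.0, d]) @ u.T
    angle = np.degrees(np.arctan2(r[1, 0], r[0, 0]))
    return angle, np.linalg.norm(a @ r.T - b)

# A toy shape, a 40-degree rotated copy, and a mirror image of it.
shape = np.array([[2.0, 0.0], [-1.0, 1.0], [-1.0, -1.0]])
theta = np.radians(40.0)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])

print(fit_rotation(shape, shape @ rot.T))        # ~ (40.0, 0.0): a pure rotation
print(fit_rotation(shape, shape * [-1.0, 1.0]))  # large residual: mirror image
```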


Spatial Navigation and Individual Differences in Environmental Representations


This project involves studies of navigational abilities in virtual (driving simulator) and in real large-scale environments. We examined whether procedural- and survey-type representations of an environment would be present after traversing a novel route. We also examined whether individual differences in visual-spatial abilities predicted the types of representations formed. Our results challenge experience-based, sequential models of adults' development of environmental representations. Furthermore, more spatially integrated sketch-maps were associated with higher spatial abilities. Our findings suggest that spatial abilities, not experience alone, affect the types of representations formed (Blajenkova, Motes, & Kozhevnikov, 2005; Motes, Blajenkova, & Kozhevnikov, 2004).
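The procedural/survey distinction can be made concrete with a toy example (an invented route, not our experimental materials): a procedural representation stores the route as a sequence of turns and distances, whereas a survey representation integrates that sequence into map-like coordinates that support shortcuts.

```python
import math

def integrate_route(segments):
    """Integrate a procedural route, a list of (turn_deg, distance) steps
    with turns relative to the current heading (positive = left), into
    survey-style (x, y) coordinates. Starts at the origin facing north."""
    x, y, heading = 0.0, 0.0, 90.0
    points = [(x, y)]
    for turn, dist in segments:
        heading += turn
        x += dist * math.cos(math.radians(heading))
        y += dist * math.sin(math.radians(heading))
        points.append((x, y))
    return points

# Procedural knowledge: "go 100 m, turn right, go 50 m, turn right, go 30 m".
route = [(0, 100), (-90, 50), (-90, 30)]
end_x, end_y = integrate_route(route)[-1]

print(sum(d for _, d in route))            # 180 m: distance along the route
print(round(math.hypot(end_x, end_y), 1))  # ~86.0 m: survey-based shortcut home
```

A purely procedural representation only supports retracing the route; it is the integrated survey representation that makes the direct homing vector available.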


Furthermore, with the ultimate goal of better assessing, training, and improving individuals' navigational abilities, we developed and validated an assessment of large-scale egocentric abilities: the Perspective Taking Test. In addition, to improve assessment and training, we examine how people find their way while navigating in space and what navigational strategies they employ.


Driving simulator


Video: Driving Simulator, two-level city, GMU

Neural and Behavioral Correlates of 3D Visual-Spatial Transformations


Our research examines how different environments affect encoding strategies and the choice of allocentric versus egocentric frames of reference. We compared subjects' performance on allocentric (e.g., Mental Rotation) and egocentric (e.g., Perspective-Taking) spatial tasks in non-immersive and immersive 2D vs. 3D environments. Our findings demonstrate a unique pattern of responses in the 3D immersive environment and suggest that 3D immersive environments differ from 3D non-immersive and 2D environments, and that immersion is necessary to provide adequate information for building a spatial reference frame crucial for higher-order motor planning and egocentric encoding. Furthermore, they suggest that non-immersive environments might encourage the use of more "artificial" encoding strategies, in which the 3D image is encoded with respect to an environmental frame of reference, in particular the computer screen. Immersive environments, on the other hand, can provide the necessary feedback for an individual to use the same strategy and retinocentric frame of reference as he/she would use in a real-world situation.


We plan to further investigate the neural correlates of 3D visual-spatial processing in immersive virtual environments using EEG.

Spatial Updating


Spatial updating refers to the cognitive process that computes the spatial relationship between an individual and his/her surrounding environment as he/she moves, based on perceptual information about his/her own movements. It is one of the fundamental forms of navigation, and it contributes to object and scene recognition by predicting the appearance of objects or scenes from novel vantage points so that the individual can recognize them easily as he/she moves.
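As a minimal sketch of the computation involved (a hypothetical scene and coordinate convention, not a model from our studies), the code below re-expresses world-anchored object locations in observer-centred coordinates for any observer position and heading; spatial updating amounts to recomputing this egocentric view as the observer moves.

```python
import math

def egocentric_view(objects, position, heading_deg):
    """Re-express world-anchored object locations in observer-centred
    coordinates (x = rightward, y = forward) for a given observer pose.
    `objects` maps names to allocentric (x, y) positions; `heading_deg`
    is the facing direction, counterclockwise from the +x axis."""
    h = math.radians(heading_deg)
    view = {}
    for name, (ox, oy) in objects.items():
        dx, dy = ox - position[0], oy - position[1]
        view[name] = (dx * math.sin(h) - dy * math.cos(h),  # rightward component
                      dx * math.cos(h) + dy * math.sin(h))  # forward component
    return view

# A hypothetical room scene; the observer then walks and turns.
scene = {"lamp": (0.0, 2.0), "chair": (2.0, 2.0)}
print(egocentric_view(scene, (0.0, 0.0), 90.0))   # facing north: lamp dead ahead
print(egocentric_view(scene, (2.0, 0.0), 180.0))  # after moving: chair directly right
```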


In our research (Motes, Finlay, & Kozhevnikov, 2006; Finlay, Motes, & Kozhevnikov, 2006), we systematically compared scene recognition response time (RT) and accuracy patterns following observer versus scene movement across view changes ranging from 0 to 360 degrees. The results demonstrated that, regardless of whether the scene was rotated or the observer moved, greater angular disparity between judged and encoded views produced slower RTs. Thus, in contrast to previous findings, which did not consider a wide range of observer movement, our data show that observer movement does not necessarily automatically update representations of spatial layouts in small-scale (room-sized) environments, and they raise questions about the effects of duration limitations and encoding points of view on the automatic spatial updating of representations of scenes.
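The core dependent measure, RT as a function of angular disparity, can be summarized with a simple linear fit. The sketch below uses invented numbers purely for illustration (they are not data from the studies cited above); the folding step reflects the fact that a 315-degree view change corresponds to only 45 degrees of angular disparity.

```python
import numpy as np

# Hypothetical mean RTs (ms) at each view change; illustrative numbers only,
# not data from the studies cited above.
view_change = np.array([0, 45, 90, 135, 180, 225, 270, 315, 360])
rt = np.array([820, 910, 1015, 1120, 1230, 1135, 1030, 925, 830])

# Fold the 0-360 range of view changes into the shortest angular
# distance between encoded and judged views, 0-180 degrees.
disparity = np.minimum(view_change % 360, 360 - (view_change % 360))

# Linear fit of RT on disparity: the slope estimates the extra time
# cost per degree of misalignment between encoded and judged views.
slope, intercept = np.polyfit(disparity, rt, 1)
print(f"slope = {slope:.2f} ms/deg, intercept = {intercept:.0f} ms")
```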


Allocentric vs. Egocentric Spatial Processing


Our research on allocentric-egocentric spatial processing includes three main directions:


This line of research focuses on examining the dissociation between the two types of spatial imagery transformations: allocentric spatial transformations, which involve an object-to-object representational system and encode information about the location of one object (or its parts) with respect to other objects, versus egocentric perspective transformations, which involve a self-to-object representational system and encode object locations relative to the observer's own position and orientation.
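To make the contrast concrete, here is a minimal sketch (object names and coordinates invented for illustration): an object-to-object vector is unchanged by observer movement, whereas a self-to-object vector must be recomputed every time the observer moves.

```python
import numpy as np

# Two objects and two observer positions, in allocentric (world) coordinates.
mug, laptop = np.array([1.0, 3.0]), np.array([4.0, 3.0])
observer_a, observer_b = np.array([0.0, 0.0]), np.array([5.0, 1.0])

# Object-to-object (allocentric) code: the mug-to-laptop vector is the
# same no matter where the observer stands.
print(laptop - mug)      # [3. 0.] from either vantage point

# Self-to-object (egocentric) code: the observer-to-mug vector changes
# whenever the observer moves, so it must be updated with self-motion.
print(mug - observer_a)  # [1. 3.]
print(mug - observer_b)  # [-4.  2.]
```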


In our lab, we examine individual differences in egocentric (imagining taking a different perspective in space) and allocentric (mentally manipulating objects from a stationary point of view) spatial abilities, and develop assessments of these abilities. Our research also seeks to discover the relation of these two types of spatial ability to locomotion and spatial navigation.