Science Learning in 3D Immersive Virtual Environments


This line of research explores the strengths and limitations of virtual reality as a medium for learning scientific concepts, that is, its potential to convey abstract scientific ideas. We investigate how various features of virtual reality (multisensory immersion, 3D representation, shifting among different frames of reference), when applied to scientific models, might facilitate students’ understanding of abstract phenomena and help displace intuitive misconceptions with more accurate mental models. We also study how the interaction between virtual reality’s features and other factors (e.g., learners’ individual characteristics, domain-specific knowledge, and interaction experience) shapes the learning process and learning outcomes.


In particular, we investigate learning of the concept of relative motion using the Relative Motion setting of our immersive 3D virtual environment simulations, which explores an innovative instructional technology platform as a new medium for learning concepts in the introductory physics curriculum for K-12 and higher education. This approach supports the learning process by giving students a unique possibility to interact with and explore their hypotheses in VR-generated worlds, letting them “experience” what they are learning in an entirely new way. The module includes educationally powerful dynamic visual representations (highly “realistic” objects; visualization of concepts such as forces and velocities; visualization of processes and things invisible to the naked eye; focusing on core concepts, e.g., highlighting, magnifying, and removing irrelevant aspects; a real-time graphing tool; etc.) and allows for real-time interaction. Students can move and look around, point and gesture, and experience motion in a simulation, and these “first-hand” experiences can significantly contribute to the sense of “presence” students feel in a virtual environment.
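To make the underlying physics concrete, the following is a minimal sketch, in Python, of the Galilean velocity transformation that the Relative Motion setting lets students experience; the function name and the numbers are illustrative assumptions, not code from the simulation itself.

import numpy as np

# Galilean velocity transformation: an object's velocity observed from a
# moving reference frame is its ground-frame velocity minus the frame's
# own velocity. (Illustrative sketch; not the module's actual code.)
def velocity_in_frame(v_object_ground, v_frame_ground):
    """Velocity of an object as seen from a frame moving relative to the ground."""
    return np.asarray(v_object_ground, dtype=float) - np.asarray(v_frame_ground, dtype=float)

# Example: a ball rolls at 3 m/s east while the student "rides" a cart
# moving 5 m/s east; from the cart, the ball appears to move 2 m/s west.
ball_ground = [3.0, 0.0]   # m/s, (east, north)
cart_ground = [5.0, 0.0]
print(velocity_in_frame(ball_ground, cart_ground))  # -> [-2.  0.]

The same subtraction generalizes directly to 3D, and it is this frame-dependence of observed motion that the module makes visible in real time.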


Neural and Behavioral Correlates of 3D Visual-Spatial Transformations


Our research examines how different environments affect encoding strategies and the choice of allocentric versus egocentric frames of reference. We compared subjects’ performance on allocentric (e.g., the Mental Rotation Task) and egocentric (e.g., the Perspective-Taking Task) spatial tasks in non-immersive and immersive 2D and 3D environments. Our findings demonstrate a unique pattern of responses in the 3D immersive environment, suggesting that 3D immersive environments differ from 3D non-immersive and 2D environments, and that immersion is necessary to provide adequate information for building a spatial reference frame crucial for higher-order motor planning and egocentric encoding. Furthermore, the findings suggest that non-immersive environments might encourage the use of more “artificial” encoding strategies, in which the 3D image is encoded with respect to an environmental frame of reference, in particular the computer screen. Immersive environments, on the other hand, can provide the feedback an individual needs to use the same strategy and retinocentric frame of reference that he/she would use in a real-world situation.
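For illustration only, the geometric difference between the two encodings can be written as a small coordinate transformation; the sketch below, with assumed conventions (a 2D world, heading measured from the +x axis, egocentric +x pointing ahead), is ours and not the lab’s experimental code.

import numpy as np

# Allocentric coordinates are anchored to the environment; egocentric
# coordinates are expressed relative to the observer's position and heading.
def to_egocentric(p_world, observer_pos, observer_heading_rad):
    """Map an allocentric (world) 2D point into the observer's egocentric frame."""
    d = np.asarray(p_world, dtype=float) - np.asarray(observer_pos, dtype=float)
    # Rotate the world-frame offset by the negative heading so that the
    # observer's facing direction becomes the egocentric +x ("ahead") axis.
    c, s = np.cos(-observer_heading_rad), np.sin(-observer_heading_rad)
    return np.array([[c, -s], [s, c]]) @ d

# A landmark 1 m north of an observer who faces north lies straight ahead.
print(to_egocentric([0.0, 1.0], [0.0, 0.0], np.pi / 2))  # ~[1. 0.]

A mental-rotation task operates on the world-anchored (allocentric) coordinates, whereas a perspective-taking task requires recomputing this observer-relative mapping.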


We plan to further investigate the neural correlates of 3D visual-spatial processing in immersive virtual environments using EEG.

Spatial Updating


Spatial updating refers to the cognitive process that computes the spatial relationship between an individual and his/her surrounding environment as he/she moves, based on perceptual information about his/her own movements. It is one of the fundamental forms of navigation, and it contributes to object and scene recognition by predicting the appearance of objects or scenes from novel vantage points so that the individual can recognize them easily as he/she moves.
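The prediction step that spatial updating performs can be sketched as a simple geometric computation; the conventions below (forward = +x, left = +y, a 2D layout) and the function name are assumptions made purely for illustration, not a cognitive model.

import numpy as np

# Given an object's current egocentric location and the observer's own
# movement (a forward step plus a turn), predict where the object should
# appear next.
def update_egocentric(p_ego, forward_step, turn_rad):
    """Predict an object's new egocentric position after the observer moves."""
    # Translation: stepping forward shifts the object backward in the
    # observer's frame.
    p = np.asarray(p_ego, dtype=float) - np.array([forward_step, 0.0])
    # Rotation: turning left by turn_rad rotates the scene by -turn_rad.
    c, s = np.cos(-turn_rad), np.sin(-turn_rad)
    return np.array([[c, -s], [s, c]]) @ p

# An object 2 m ahead; after a 1 m step forward and a 90-degree left turn,
# it should appear 1 m to the observer's right.
print(update_egocentric([2.0, 0.0], 1.0, np.pi / 2))  # ~[0. -1.]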


In our research (Motes, Finlay, & Kozhevnikov, 2006; Finlay, Motes, & Kozhevnikov, 2006), we systematically compared scene-recognition response time (RT) and accuracy patterns following observer versus scene movement, across view changes ranging from 0 to 360 degrees. The results demonstrated that, regardless of whether the scene was rotated or the observer moved, greater angular disparity between the judged and encoded views produced slower RTs. Thus, in contrast to previous findings, which did not consider a wide range of observer movement, our data show that observer movement does not necessarily automatically update representations of spatial layouts in small-scale (room-sized) environments, and they raise questions about the effects of duration limitations and encoding points of view on the automatic spatial updating of scene representations.
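As an illustration of how such a pattern is typically quantified (the numbers below are synthetic placeholders, not the study’s data), RT can be regressed on the angular disparity between encoded and judged views; a positive slope corresponds to the “greater disparity, slower RT” pattern described above.

import numpy as np

# Synthetic, illustrative RTs: the pattern is symmetric around 180 degrees,
# since a 360-degree view change returns to the encoded view.
disparity = np.array([0, 45, 90, 135, 180, 225, 270, 315, 360])  # degrees
rt = np.array([1.10, 1.25, 1.42, 1.58, 1.71, 1.60, 1.44, 1.27, 1.12])  # s (synthetic)

# Least-squares fit RT = a + b * disparity over the rising 0-180 degree range.
b, a = np.polyfit(disparity[:5], rt[:5], 1)
print(f"slope = {b * 1000:.2f} ms/degree, intercept = {a:.2f} s")  # ~3.4 ms/degree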


Allocentric vs. Egocentric Spatial Processing


Our research on allocentric-egocentric spatial processing includes three main directions:


This line of research examines the dissociation between two types of spatial imagery transformations: allocentric spatial transformations, which involve an object-to-object representational system and encode the location of one object (or its parts) with respect to other objects, versus egocentric perspective transformations, which involve a self-to-object representational system and encode object locations relative to the observer.

Spatial Coding Systems

In our lab, we examine individual differences in egocentric (imagining taking a different perspective in space) and allocentric (mentally manipulating objects from a stationary point of view) spatial abilities, and we develop assessments of these abilities. Our research also seeks to establish how these two types of spatial ability relate to locomotion and spatial navigation.


Research



The research in the Mental Imagery lab focuses on investigating visualization processes and individual differences in mental imagery and cognitive style. In particular, we examine how individual differences in visualization ability affect more complex activities, such as spatial navigation, learning, and problem solving in mathematics, science, and art. We also explore ways to train visual-object and visual-spatial imagery skills and design three-dimensional immersive virtual environments that can accommodate individual differences and learning styles.


The Mental Imagery and Human-Computer Interaction lab research focuses on five main directions:


Our approach integrates qualitative and quantitative behavioral research methods as well as neuroimaging techniques (EEG, fMRI). Furthermore, we develop and validate assessment and training paradigms for visualization ability using 3D immersive virtual reality.


Based on behavioral and neuroscience evidence, we have formulated a theoretical framework of individual differences in visual imagery, suggesting that visualization ability is not a single undifferentiated construct but is instead divided into two main dimensions, object and spatial, with the spatial dimension further divided into allocentric and egocentric components. These visualization abilities underlie success on different complex, real-world tasks and predict specialization in different professional and academic domains.