<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Imagery Lab &#187; Allocentric</title>
	<atom:link href="http://www.nmr.mgh.harvard.edu/mkozhevnlab/?feed=rss2&#038;tag=allocentric" rel="self" type="application/rss+xml" />
	<link>http://www.nmr.mgh.harvard.edu/mkozhevnlab</link>
	<description>Mental Imagery and Human-Computer Interaction Lab</description>
	<lastBuildDate>Wed, 24 Mar 2010 11:01:33 +0000</lastBuildDate>
	<generator>http://wordpress.org/?v=2.9.2</generator>
	<language>en</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
			<item>
		<title>Development of Spatial Ability Tests</title>
		<link>http://www.nmr.mgh.harvard.edu/mkozhevnlab/?page_id=657</link>
		<comments>http://www.nmr.mgh.harvard.edu/mkozhevnlab/?page_id=657#comments</comments>
		<pubDate>Tue, 23 Mar 2010 00:57:31 +0000</pubDate>
		<dc:creator>admin</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[2D vs 3D]]></category>
		<category><![CDATA[Allocentric]]></category>
		<category><![CDATA[Assessment]]></category>
		<category><![CDATA[Egocentric]]></category>
		<category><![CDATA[Immersive VR]]></category>
		<category><![CDATA[Perspective Taking]]></category>
		<category><![CDATA[Visualization Abilities]]></category>

		<guid isPermaLink="false">http://www.nmr.mgh.harvard.edu/mkozhevnlab/?page_id=657</guid>
		<description><![CDATA[
In our lab, we design, refine, and validate assessments of visualization abilities. In particular, we develop and validate tests in both immersive and non-immersive environments.
                
             <a href="http://www.nmr.mgh.harvard.edu/mkozhevnlab/?page_id=657" class="more-link">More &#62;</a>]]></description>
		<wfw:commentRss>http://www.nmr.mgh.harvard.edu/mkozhevnlab/?feed=rss2&amp;page_id=657</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Neural and Behavioral Correlates of 3D Visual-Spatial Transformations</title>
		<link>http://www.nmr.mgh.harvard.edu/mkozhevnlab/?page_id=651</link>
		<comments>http://www.nmr.mgh.harvard.edu/mkozhevnlab/?page_id=651#comments</comments>
		<pubDate>Tue, 23 Mar 2010 00:49:46 +0000</pubDate>
		<dc:creator>admin</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Allocentric]]></category>
		<category><![CDATA[Egocentric]]></category>
		<category><![CDATA[Frames of Reference]]></category>
		<category><![CDATA[Immersive VR]]></category>
		<category><![CDATA[Neural underpinnings]]></category>

		<guid isPermaLink="false">http://www.nmr.mgh.harvard.edu/mkozhevnlab/?page_id=651</guid>
		<description><![CDATA[
Our research examines how different environments affect encoding strategies and the choice of allocentric versus egocentric frames of reference.
 <a href="http://www.nmr.mgh.harvard.edu/mkozhevnlab/?page_id=651" class="more-link">More &#62;</a>]]></description>
		<wfw:commentRss>http://www.nmr.mgh.harvard.edu/mkozhevnlab/?feed=rss2&amp;page_id=651</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Spatial Updating</title>
		<link>http://www.nmr.mgh.harvard.edu/mkozhevnlab/?page_id=310</link>
		<comments>http://www.nmr.mgh.harvard.edu/mkozhevnlab/?page_id=310#comments</comments>
		<pubDate>Mon, 15 Mar 2010 02:50:13 +0000</pubDate>
		<dc:creator>admin</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Allocentric]]></category>
		<category><![CDATA[Egocentric]]></category>
		<category><![CDATA[Frames of Reference]]></category>
		<category><![CDATA[Spatial Updating]]></category>

		<guid isPermaLink="false">http://www.nmr.mgh.harvard.edu/mkozhevnlab/?page_id=310</guid>
		<description><![CDATA[
Spatial updating refers to the cognitive process that computes the spatial relationship between an individual and the surrounding environment during movement, based on perceptual information about self-motion. It is one of the fundamental forms of navigation, and it contributes to object and scene recognition by predicting the appearance of objects or scenes <a href="http://www.nmr.mgh.harvard.edu/mkozhevnlab/?page_id=310" class="more-link">More &#62;</a>]]></description>
		<wfw:commentRss>http://www.nmr.mgh.harvard.edu/mkozhevnlab/?feed=rss2&amp;page_id=310</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Allocentric vs. Egocentric Spatial Processing</title>
		<link>http://www.nmr.mgh.harvard.edu/mkozhevnlab/?page_id=308</link>
		<comments>http://www.nmr.mgh.harvard.edu/mkozhevnlab/?page_id=308#comments</comments>
		<pubDate>Mon, 15 Mar 2010 02:49:26 +0000</pubDate>
		<dc:creator>admin</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[Allocentric]]></category>
		<category><![CDATA[Assessment]]></category>
		<category><![CDATA[Egocentric]]></category>
		<category><![CDATA[Frames of Reference]]></category>
		<category><![CDATA[Navigation]]></category>
		<category><![CDATA[Spatial Updating]]></category>
		<category><![CDATA[Visualization Abilities]]></category>
		<category><![CDATA[Visualization Processes]]></category>

		<guid isPermaLink="false">http://www.nmr.mgh.harvard.edu/mkozhevnlab/?page_id=308</guid>
		<description><![CDATA[
Our research on allocentric-egocentric spatial processing includes three main directions:

 Development of allocentric and egocentric spatial assessments
 Spatial navigation and individual differences in environmental representations
 Spatial updating


This line of research focuses on examining the dissociation between the two types of spatial imagery transformations: allocentric spatial transformations, which involve an object-to-object representational system and encode <a href="http://www.nmr.mgh.harvard.edu/mkozhevnlab/?page_id=308" class="more-link">More &#62;</a>]]></description>
		<wfw:commentRss>http://www.nmr.mgh.harvard.edu/mkozhevnlab/?feed=rss2&amp;page_id=308</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Research</title>
		<link>http://www.nmr.mgh.harvard.edu/mkozhevnlab/?page_id=9</link>
		<comments>http://www.nmr.mgh.harvard.edu/mkozhevnlab/?page_id=9#comments</comments>
		<pubDate>Tue, 09 Mar 2010 16:39:57 +0000</pubDate>
		<dc:creator>admin</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[2D vs 3D]]></category>
		<category><![CDATA[Allocentric]]></category>
		<category><![CDATA[Assessment]]></category>
		<category><![CDATA[Cognitive Style]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[Egocentric]]></category>
		<category><![CDATA[Frames of Reference]]></category>
		<category><![CDATA[Immersive VR]]></category>
		<category><![CDATA[Individual Differences]]></category>
		<category><![CDATA[Meditation]]></category>
		<category><![CDATA[Navigation]]></category>
		<category><![CDATA[Neural underpinnings]]></category>
		<category><![CDATA[Object-Spatial-Verbal style]]></category>
		<category><![CDATA[Perspective Taking]]></category>
		<category><![CDATA[Science Learning]]></category>
		<category><![CDATA[Spatial Updating]]></category>
		<category><![CDATA[Theoretical Model]]></category>
		<category><![CDATA[Training]]></category>
		<category><![CDATA[Visualization Abilities]]></category>
		<category><![CDATA[Visualization in Arts]]></category>
		<category><![CDATA[Visualization in Physics]]></category>
		<category><![CDATA[Visualization Processes]]></category>

		<guid isPermaLink="false">http://www.nmr.mgh.harvard.edu/mkozhevnlab/?page_id=9</guid>
		<description><![CDATA[

The research in the Mental Imagery Lab focuses on investigating visualization processes and individual differences in mental imagery and cognitive style. In particular, we examine how individual differences in visualization ability affect more complex activities, such as spatial navigation, learning, and problem solving in mathematics, science, and art. We also explore ways to train visual-object <a href="http://www.nmr.mgh.harvard.edu/mkozhevnlab/?page_id=9" class="more-link">More &#62;</a>]]></description>
		<wfw:commentRss>http://www.nmr.mgh.harvard.edu/mkozhevnlab/?feed=rss2&amp;page_id=9</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>