J Cogn Neurosci. 2008 Mar;20(3):371-88. doi: 10.1162/jocn.2008.20027.

Integrated contextual representation for objects' identities and their locations

Gronau N, Neta M, Bar M.

Abstract

Visual context plays a prominent role in everyday perception. Contextual information can facilitate recognition of objects within scenes by providing predictions about objects that are most likely to appear in a specific setting, along with the locations that are most likely to contain objects in the scene. Is such identity-related ("semantic") and location-related ("spatial") contextual knowledge represented separately, or jointly as a bound representation? We conducted a functional magnetic resonance imaging (fMRI) priming experiment in which semantic and spatial contextual relations between prime and target object pictures were independently manipulated. This method allowed us to determine whether the two contextual factors affect object recognition with or without interacting, supporting a unified versus an independent representation, respectively. Results revealed a Semantic × Spatial interaction in reaction times for target object recognition: significant semantic priming was obtained when targets were positioned in expected (congruent), but not in unexpected (incongruent), locations. fMRI results showed corresponding interactive effects in brain regions associated with semantic processing (inferior prefrontal cortex), visual contextual processing (parahippocampal cortex), and object-related processing (lateral occipital complex). In addition, activation in fronto-parietal areas suggests that attention- and memory-related processes may also contribute to the observed contextual effects. These findings indicate that object recognition benefits from associative representations that integrate information about objects' identities and their locations, and that such representations directly modulate activation in object-processing cortical regions. Such context frames are useful for maintaining a coherent and meaningful representation of the visual world, and for providing a platform from which predictions can be generated to facilitate perception and action.

PMID: 18004950
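
The logic of the 2 x 2 priming design can be illustrated with a minimal statistical sketch. The Python code below is not the authors' analysis; the sample size, reaction-time values, and effect magnitudes are invented purely to show how a Semantic × Spatial interaction in a within-subject design reduces to comparing the semantic priming effect across spatially congruent and incongruent conditions.

import numpy as np
from scipy import stats

# Illustrative sketch only (not the published analysis): simulate per-subject
# mean reaction times for the 2 x 2 within-subject design described in the
# abstract, crossing semantic relatedness of the prime (related / unrelated)
# with spatial congruency of the target's location (congruent / incongruent).
rng = np.random.default_rng(0)
n_subjects = 24  # assumed sample size, chosen for illustration

base_rt = rng.normal(600, 40, n_subjects)       # baseline RT per subject (ms)
def noise():
    return rng.normal(0, 15, n_subjects)        # condition-level noise

# Pattern reported in the abstract: semantic priming (unrelated - related)
# appears only when the target occupies a contextually expected location.
rt = {
    ("related", "congruent"):     base_rt - 30 + noise(),
    ("unrelated", "congruent"):   base_rt + noise(),
    ("related", "incongruent"):   base_rt + noise(),
    ("unrelated", "incongruent"): base_rt + noise(),
}

# Semantic priming effect per subject, separately for each spatial condition.
priming_congruent = rt[("unrelated", "congruent")] - rt[("related", "congruent")]
priming_incongruent = rt[("unrelated", "incongruent")] - rt[("related", "incongruent")]

# In a 2 x 2 within-subject design, the interaction is a paired comparison
# of the two priming effects (a difference of differences).
t, p = stats.ttest_rel(priming_congruent, priming_incongruent)
print(f"semantic priming, congruent location:   {priming_congruent.mean():.1f} ms")
print(f"semantic priming, incongruent location: {priming_incongruent.mean():.1f} ms")
print(f"Semantic x Spatial interaction: t({n_subjects - 1}) = {t:.2f}, p = {p:.4f}")

A reliable interaction of this form (priming present only at congruent locations) is what supports a unified, bound representation of object identity and location, whereas two additive main effects would have been consistent with independent semantic and spatial representations.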