Scenes in the Human Brain: Comparing 2D versus 3D Representations


Iris I.A. Groen1,* and Chris I. Baker2,*
1Department of Psychology, New York University, New York, NY 10003, USA
2Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD 20892, USA
*Correspondence: [email protected] (I.I.A.G.), [email protected] (C.I.B.)
https://doi.org/10.1016/j.neuron.2018.12.014

Three cortical brain regions are thought to underlie our remarkable ability to perceive and understand visual scenes. In this issue of Neuron, Lescroart and Gallant (2018) use quantitative models of scene processing to reveal 3D representations in these regions.

We live in a three-dimensional world, a complex visual and spatial environment that we must navigate to discover places and objects of interest. Even brief glimpses of visual scenes are sufficient for us to determine properties such as the category (e.g., kitchen) and layout (e.g., fridge to the right, door to the left) of the local environment. To reveal how visual information is processed in the human brain to allow such understanding, much attention has focused on three regions identified using functional magnetic resonance imaging (fMRI) that respond more strongly when viewing scenes than when viewing objects or faces: parahippocampal place area (PPA), occipital place area (OPA), and retrosplenial complex (RSC, also termed the medial place area or MPA) (Epstein, 2014). But understanding what aspects of the visual environment are represented in these regions has proven extremely challenging given the complexity and heterogeneity of naturalistic scenes. In this issue of Neuron, Lescroart and Gallant test and compare quantitative models of scene processing, reflecting either 2D (e.g., local oriented edges) or 3D (e.g., surface orientation) properties, to determine which best account for the fMRI responses in these scene-selective areas (Figure 1).

Previous fMRI findings suggest that a broad range of scene properties are represented in PPA, OPA, and RSC/MPA, from low-level visual properties such as spatial frequency (Rajimehr et al., 2011) to spatial layout (Kravitz et al., 2011) to high-level semantic concepts such as scene category (e.g., a beach or a forest; Walther et al., 2009).

However, it has been suggested that the high-level representations in these regions can be explained purely by low-level properties (Watson et al., 2017). Distinguishing between low-level and high-level representations is challenging because real-world scenes contain inherent correlations between different scene properties (Malcolm et al., 2016), making it difficult to identify the "true" scene property driving scene-selective cortex. For example, imagine that an experimenter has observed a difference in PPA's fMRI response to beach and forest scenes. Beach scenes typically have an "open" 3D spatial layout (due to a lack of occluding boundaries and large objects being relatively far away), as well as more horizontally oriented 2D edges (due to the presence of sky and a prominent horizon), whereas forests have a closed 3D layout and relatively more vertically oriented edges (due to the presence of trees and other nearby structures). How can we determine whether the brain responses reflect the 2D versus the 3D features of the scenes?

To address this issue, Lescroart and Gallant generated complex visual scenes by embedding objects in 3D backgrounds and measured fMRI responses in the three scene-selective regions while participants viewed short movies of movement through these scenes. These scenes were deliberately made heterogeneous by adding variation in color, lighting, texture, and shadows.


The use of artificially generated scenes makes it possible to extract quantitative 3D information from the metadata of the graphics software, including the distance to and orientation of the rendered surfaces (e.g., the wall of a house), information that is hard to obtain from the real-world photographs that are often used in studies of scene perception. Several models were then generated based on either 2D (e.g., spatial frequency) or 3D (e.g., surface distance) properties of the scenes. Each 2D and 3D model was fit to the measured fMRI response of individual voxels and then tested on its ability to predict variance in an independent, held-out set of fMRI data. Such external validation of the model fits ensures that the models are not just picking up on idiosyncratic features of a particular set of images, but on features that generalize across images.

While testing multiple models is critical, this creates two additional challenges. First, as the number of tested models increases, the ability to distinguish statistically between the models decreases. To circumvent this problem, Lescroart and Gallant separately tested for the best 2D and the best 3D model and then pitted each against the other to see if the best 3D model can explain the fMRI responses above and beyond a representation of low-level features in the best 2D model. Second, it is important to determine whether the models capture different aspects of the response. To achieve this, Lescroart and Gallant used variance partitioning, combining the 2D and 3D models and assessing how much additional variance is explained compared to when each model is tested separately. If the models each explain independent variance, then the total variance explained by the combined model should equal the sum of the variance explained by each model separately.


Figure 1. Modeling 2D and 3D Representations in Human Scene-Selective Cortex
(A) Scene images were quantified using two classes of models that reflected either 2D features in the scene (e.g., oriented edges, spatial frequencies), depicted in blue, or 3D features (e.g., orientation of surfaces and distance from the observer), depicted in orange.
(B) Both classes of models were evaluated on their ability to predict fMRI responses in three scene-selective regions: OPA in lateral occipital cortex, PPA in ventral temporal cortex, and RSC/MPA in medial parietal cortex.
(C) The models were fit to the fMRI response for each individual voxel, after which the fitted weights were used to generate predictions for the fMRI response of that voxel to a new set of scene images. The variance explained by the best 2D and best 3D model was calculated for each model in isolation and in conjunction, yielding variance partitions that were unique to each model and shared between the models. Responses in PPA, OPA, and RSC/MPA were best characterized by a combination of shared 2D and 3D features (cyan) plus a unique contribution of 3D features above and beyond 2D features (orange).

However, if the two models are highly correlated and explain the same variance, then combining the two models should not explain any additional variance.
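To make the logic of this analysis concrete, the sketch below (not the authors' code) fits a ridge-regression encoding model per voxel for a 2D feature space, a 3D feature space, and their concatenation, and then partitions the held-out explained variance into unique and shared components. The choice of ridge regression, the feature matrices, and the toy dimensions are all placeholder assumptions; in the actual study the 2D features would be Gabor filter outputs and the 3D features surface distances and orientations.

    # Illustrative sketch only: variance partitioning for two encoding models.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(0)
    n_train, n_test, n_vox = 400, 100, 50              # toy sizes, not the study's
    X2d_tr = rng.normal(size=(n_train, 20))            # placeholder 2D (e.g., Gabor) features
    X2d_te = rng.normal(size=(n_test, 20))
    X3d_tr = rng.normal(size=(n_train, 15))            # placeholder 3D (surface) features
    X3d_te = rng.normal(size=(n_test, 15))
    Y_tr = rng.normal(size=(n_train, n_vox))           # voxel responses, training set
    Y_te = rng.normal(size=(n_test, n_vox))            # voxel responses, held-out set

    def heldout_r2(Xtr, Xte):
        # Fit one ridge model per voxel (multi-output) and score on held-out data.
        model = Ridge(alpha=1.0).fit(Xtr, Y_tr)
        return r2_score(Y_te, model.predict(Xte), multioutput="uniform_average")

    r2_2d = heldout_r2(X2d_tr, X2d_te)
    r2_3d = heldout_r2(X3d_tr, X3d_te)
    r2_joint = heldout_r2(np.hstack([X2d_tr, X3d_tr]), np.hstack([X2d_te, X3d_te]))

    unique_3d = r2_joint - r2_2d          # variance only the 3D model accounts for
    unique_2d = r2_joint - r2_3d          # variance only the 2D model accounts for
    shared = r2_2d + r2_3d - r2_joint     # variance the two models share
    print(f"unique 2D={unique_2d:.3f}, unique 3D={unique_3d:.3f}, shared={shared:.3f}")

If the two feature spaces explained independent variance, the shared term would approach zero and the joint R2 would approach the sum of the individual R2 values; with the random placeholder data used here, all values simply hover near (or below) zero.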

In all three scene-selective regions, both 2D and 3D models explained some of the variance in fMRI responses. Of the 3D models, a full structural model incorporating both surface orientation features and distance features explained the most variance, while a model based on oriented Gabor filters was the most accurate of the 2D models. When the best 2D and 3D models were compared, the 3D model explained the most variance in scene-selective regions, suggesting that the responses of these regions cannot be reduced to representations of low-level properties. Nevertheless, there was substantial shared variance between the 2D and 3D models in these regions, suggesting that the 3D model does reflect some low-level features. Interestingly, there was essentially no unique variance accounted for by the 2D model that was not captured by the 3D model. For comparison, Lescroart and Gallant also looked in early visual cortical regions (V1 and V3) and found that in these regions the 2D model explained the most variance and there was no additional variance accounted for by the 3D model. This suggests that, relative to early visual cortex, scene-selective regions carry out additional computations to represent the 3D structure of the local environment.

In a follow-up analysis, Lescroart and Gallant investigated what specific information is represented in the fMRI responses to the scenes by examining the estimated model weights of the best 3D model in the recorded data, and by testing how a set of static images of artificially generated 3D rooms would be organized based on simulated fMRI responses to those images. This analysis revealed that most of the variance in scene-selective cortex could be captured by two major dimensions. The first dimension reflected the distances to large surfaces in the scene from the observer's viewpoint. The second dimension reflected the amount of enclosing structure present in a scene (or, conversely, the degree of openness). While these results are consistent with earlier reports of spatial dimension encoding in scene-selective cortex (Kravitz et al., 2011), the current study explicitly shows that these features are encoded independently of the low-level features that are correlated with them.

A caveat of any study conducted with artificial scenes is that it is unclear to what extent the results will generalize to naturalistic scene images. For example, the artificial stimuli in this study lacked the typical semantic and contextual relationships present in everyday scenes, which might be an important component of their representation. Further, while the use of artificial scenes allowed Lescroart and Gallant to obtain a quantitative measure of 3D ground truth in the scene stimuli, this information is not readily available from real scene images (2D pictures). Obtaining it is possible, but requires specialized, time-consuming measurements (Adams et al., 2016).

There are two intriguing questions to consider. First, why do scene-selective regions represent the 3D distance and layout of visual scenes? Such information may be critical for identifying particular places or kinds of places, or may be particularly relevant for navigation (Malcolm et al., 2016). In the current study, participants simply needed to count the objects in the scenes, and the scene representations elicited may reflect automatic computations only.


There may also be distinctions between the three scene regions. Lescroart and Gallant found that OPA showed stronger selectivity for near/closed relative to far/open scenes than either of the other two regions, and prior work has suggested that while OPA may be particularly important for establishing navigational affordances (Bonner and Epstein, 2017), PPA may be more involved in scene recognition and RSC/MPA in more memory-related functions (Epstein, 2014).

Second, how does scene-selective cortex compute the 3D distance and layout of visual scenes? The 3D structural model is not computed directly from the image pixels, and it is unclear how these particular 3D features might be extracted by the brain. Insight might be provided by models directly trained on pixel-level data, such as deep neural networks, which also show a strong correspondence with responses in scene-selective cortex (Groen et al., 2018).


To summarize, the careful and quantitative modeling approach from Lescroart and Gallant provides some of the strongest evidence yet that scene-selective regions in the brain specifically represent the 3D structure of the local visual environment.

REFERENCES

Adams, W.J., Elder, J.H., Graf, E.W., Leyland, J., Lugtigheid, A.J., and Muryy, A. (2016). The Southampton-York Natural Scenes (SYNS) dataset: statistics of surface attitude. Sci. Rep. 6, 35805.

Bonner, M.F., and Epstein, R.A. (2017). Coding of navigational affordances in the human visual system. Proc. Natl. Acad. Sci. USA 114, 4793–4798.

Epstein, R.A. (2014). Neural systems for visual scene recognition. In Scene Vision: Making Sense of What We See, K. Kveraga and M. Bar, eds. (MIT Press), pp. 105–134.

Groen, I.I., Greene, M.R., Baldassano, C., Fei-Fei, L., Beck, D.M., and Baker, C.I. (2018). Distinct contributions of functional and deep neural network features to representational similarity of scenes in human brain and behavior. eLife 7, https://doi.org/10.7554/eLife.32962.

Kravitz, D.J., Peng, C.S., and Baker, C.I. (2011). Real-world scene representations in high-level visual cortex: it's the spaces more than the places. J. Neurosci. 31, 7322–7333.

Lescroart, M.D., and Gallant, J.L. (2018). Human scene-selective areas represent 3D configurations of surfaces. Neuron 101, this issue, 178–192.

Malcolm, G.L., Groen, I.I.A., and Baker, C.I. (2016). Making sense of real-world scenes. Trends Cogn. Sci. 20, 843–856.

Rajimehr, R., Devaney, K.J., Bilenko, N.Y., Young, J.C., and Tootell, R.B.H. (2011). The "parahippocampal place area" responds preferentially to high spatial frequencies in humans and monkeys. PLoS Biol. 9, e1000608.

Walther, D.B., Caddigan, E., Fei-Fei, L., and Beck, D.M. (2009). Natural scene categories revealed in distributed patterns of activity in the human brain. J. Neurosci. 29, 10573–10581.

Watson, D.M., Andrews, T.J., and Hartley, T. (2017). A data driven approach to understanding the organization of high-level visual cortex. Sci. Rep. 7, 3596.