Current Biology Dispatches

Cortical Processing: How Mice Predict the Visual Effects of Locomotion

Marina Fridman and Leopoldo Petreanu*
Champalimaud Research, Champalimaud Center for the Unknown, Lisbon, Portugal
*Correspondence: [email protected]
http://dx.doi.org/10.1016/j.cub.2017.10.038
New research identifies a frontal area in the mouse neocortex that sends predictions of locomotion-coupled visual flow to visual cortex. The findings support predictive coding theories of cortical processing.

Our brains are amazing prediction machines. When we drive on a busy highway, even though we might be having a conversation or enjoying a radio show, we instantly notice a swerving car ahead of us because of its unusual trajectory. We ignore the cars that follow their predictable paths, yet we easily detect the unexpected motion of the swerving one and adjust our driving to avoid an accident. How does the brain learn to distinguish the expected from the unexpected? Where and how is sensory information compared with the brain's predictions? A new study from Leinweber et al. [1], reported recently in Neuron, sheds light on the circuits mediating sensory predictions in the mouse visual cortex.

Predictions are so seamlessly built into our perception that we usually do not even notice them. Sensory information can be noisy or incomplete, so our brains use prior knowledge of the environment to shape percepts [2] or to improve our performance of complex motor tasks [3]. Another important role of predictions is to distinguish sensations produced by the environment from those generated by our own movements. This has the benefit that we do not see the world jump every time we move our eyes [4], although it comes with a downside: it is the reason we cannot tickle ourselves [5]. Theorists have proposed that these predictive skills are actually at the core of the brain's computational goals, a framework called predictive coding [6–9]. Under this view, neural activity results from comparing predictions about the state of the world with actual input from the senses to calculate a prediction error (Figure 1A). Thus, the role of sensory cortical areas would be not to build representations of the world around us, but rather to signal when incoming sensory stimuli deviate from our predictions. This theory is attractive because it can explain a variety of neural phenomena, such as responses in visual cortex [10,11] and retina [12], and has the additional advantage of being energetically efficient.
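In its simplest form, the comparison sketched in Figure 1A can be written as a subtractive prediction error; the notation below is an illustrative sketch rather than a formulation taken from the study:

\[
e_t = s_t - \hat{s}_t, \qquad \hat{s}_t = f(m_t),
\]

where \(s_t\) is the incoming sensory input (here, visual flow) at time \(t\), \(m_t\) is the concurrent motor command, \(f\) is a learned mapping from motor commands to their expected sensory consequences, and \(e_t\) is the prediction error: a large \(e_t\) flags input that deviates from what the brain expected.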
Recent studies have introduced the mouse visual cortex as a model for studying predictive coding in neocortical circuits [13,14]. In this work, a virtual reality setup is used to study responses to locomotion-predicted visual flow in visual cortex. Mice run spontaneously on a floating ball while navigating a virtual corridor (Figure 1B). Under normal conditions, the visual pattern on the virtual reality screen is matched to what the animal would expect to see while running. In other periods, this sensorimotor coupling is broken by briefly stopping the visual flow pattern while animals are running or by playing it out of sync with the animal's locomotion. Consistent with the predictive coding model, these previous studies found a population of neurons in primary visual cortex (V1) that fires when visual flow does not match the animal's locomotion [13]. This 'mismatch' activity seems to arise through the comparison of excitatory motor-related inputs and inhibitory visual signals [14]. The next piece of the puzzle would be to find the prediction signal: the locomotion-related signal used by visual cortex neurons to perform this comparison.
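A toy simulation of this comparison in Python, assuming a rectified subtraction of inhibitory visual drive from excitatory locomotion drive (a deliberate simplification, not the circuit model of [14]), could look like this:

import numpy as np

def mismatch_response(run_speed, visual_flow, gain=1.0):
    """Toy mismatch signal: excitatory locomotion drive minus
    inhibitory visual-flow drive, rectified at zero.
    run_speed and visual_flow are 1-D arrays sampled at the same times."""
    drive = gain * np.asarray(run_speed) - np.asarray(visual_flow)
    return np.maximum(drive, 0.0)  # responds only when flow lags behind locomotion

# Example: the mouse keeps running while visual flow is briefly halted.
run_speed = np.array([0.0, 1.0, 1.0, 1.0, 1.0, 0.0])
visual_flow = np.array([0.0, 1.0, 1.0, 0.0, 1.0, 0.0])  # flow halted at sample 3
print(mismatch_response(run_speed, visual_flow))  # peaks at the halted sample

The rectification is only meant to capture the reported case in which mismatch neurons fire when visual flow lags behind the animal's locomotion.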
Leinweber et al. [1] set out to find the source of this prediction signal. First, they identified a dense projection from frontal area A24b/M2 to V1. This projection targets the layer of V1 where they had previously seen mismatch signals. A24b/M2 overlaps both motor area M2 and part of the anterior cingulate cortex, which is known to modulate V1 responses [15]. This made it an excellent candidate for relaying locomotion-related signals to V1.

Next, Leinweber et al. [1] asked whether A24b/M2 neurons indeed send motor-related signals to V1. Using calcium imaging, they recorded neural activity directly in the axons of A24b/M2 neurons terminating in V1 while the mouse ran in the virtual reality setup. They found that A24b/M2 axons showed increased activity that started before the mouse began to run. Furthermore, they found that inactivating A24b/M2 reduced mismatch and locomotor activity in V1, while activating it induced activity in running-related V1 neurons. This showed that A24b/M2 not only sends locomotor-related signals to V1, but that this signal is also used for the mismatch computation.

One expectation of the predictive coding scheme is that the predictions generated by the brain reflect knowledge about the environment, and thus should adapt when the environment changes. Do the signals relayed by A24b/M2 inputs to V1 change with experience? To test this, Leinweber et al. [1] trained two groups of mice to navigate to the end of a virtual corridor. For one group, they maintained the normal relationship between locomotion and visual flow, that is, when the mice turned left, the corridor around them moved as expected for a left turn. For the other group, they inverted this relationship, so that when the mouse turned left the corridor looked as if it were turning right. Individual axons showed a preference for either left or right turns. If A24b/M2 indeed signals predictions of visual flow and not just a copy of the motor
command (Figure 1C), the signals should invert after the animal learns to navigate this new environment. Leinweber et al. [1] found that the ratio of left- to right-preferring axons indeed flipped. This result showed that the animals had internalized the new visuomotor coupling they experienced during training, and that the predictions sent from A24b/M2 had been updated to reflect the new environment.

Importantly, this experiment also addresses the 'language' spoken by the prediction signals sent to V1. While information in primary visual areas is encoded with respect to position in visual space, motor areas encode information in motor-related coordinates, for example linked to muscle activity. Thus, to build a prediction, the brain needs to convert a 'turn left' motor signal into the visual flow pattern expected from a left turn. The signals from A24b/M2 to V1 could have been sent in motor coordinates and translated into visual coordinates by the circuit in V1. The authors found that, instead, the translation happens within A24b/M2, because the signals sent to V1 already reflect expected visual flow.

The new study has identified a frontal region as a source of motor-related predictive signals sent to V1. Interestingly, a neighboring frontal region has been implicated in sending motor signals to the primary auditory cortex [16], where they cancel self-generated sounds. This suggests a general involvement of frontal areas in similar processes across sensory modalities. It also raises the question of whether the findings of Leinweber et al. [1] can be generalized to these other projections. For instance, do other frontal projections match the language of their target areas? The projection to auditory cortex, for example, would be expected to relay signals in the 'language' of auditory cortex. The results of Leinweber et al. [1] also raise the question of how neurons in A24b/M2 learn the expected visual flow that follows a motor command. Predictive coding theories would suggest that this learning process in A24b/M2 is instructed by projections from mismatch neurons in V1 [6,7,10]. Such a learning mechanism would require exquisite wiring specificity, given the diversity of signals relayed from motor to sensory areas within a single projection [17]. Future experiments should determine the role of projections from mismatch neurons to A24b/M2 in this learning process and the specificity of their connectivity.
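One way to make this learning problem concrete, purely as an illustration and not as a model proposed by the authors, is an error-driven update of the motor-to-visual mapping sketched earlier, with the prediction error fed back from V1 acting as the teaching signal:

\[
\hat{s}_t = W m_t, \qquad \Delta W = \eta \, e_t \, m_t^{\top},
\]

where \(W\) stands in for the synaptic weights implementing the mapping \(f\), \(\eta\) is a learning rate, and \(e_t = s_t - \hat{s}_t\) is the mismatch signal. Under such a rule, prolonged exposure to inverted visuomotor coupling would gradually reshape \(W\) until the predicted flow matches the new contingency, consistent with the flip in turn preference observed after training.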
[Figure 1 panels. (A) Prediction and sensory input are compared to yield a prediction error. (B) Virtual reality setup. (C) The language of the A24b/M2 → V1 signal under normal and inverted visuomotor coupling, contrasting H1 (signal in motor coordinates) with H2 (signal in visual coordinates).]
Figure 1. Testing for predictive signals in mouse cortical circuits. (A) Predictive coding theories posit that neurons compare predictions with incoming sensory information in a subtractive manner to signal prediction errors. (B) Virtual reality setup used by Leinweber et al. [1]. A head-fixed mouse runs on a floating ball while watching a screen. The visual flow on the screen is matched to its running speed and direction. (C) Summary of the expected activity of A24b/M2 inputs in V1 under the hypotheses that predictions are sent in motor or in visual coordinates. With normal visuomotor coupling it is impossible to distinguish the two hypotheses. With inverted visuomotor coupling, however, a left turn of the mouse is coupled to the visual flow usually associated with a right turn. In this case, a motor signal can be distinguished from one predicting visual flow.
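The logic of panel C can be made explicit with a toy example in Python (hypothetical code, not an analysis from the paper): under inverted coupling, a prediction expressed in motor coordinates keeps reporting the commanded turn, whereas one expressed in visual-flow coordinates flips to the flow that the turn now produces.

def predicted_signal(motor_turn, coupling, coordinates):
    """Direction reported by an A24b/M2 -> V1 axon in a toy model.
    motor_turn: 'left' or 'right' (the commanded turn)
    coupling: 'normal' or 'inverted' visuomotor coupling
    coordinates: 'motor' (hypothesis H1) or 'visual' (hypothesis H2)"""
    flip = {'left': 'right', 'right': 'left'}
    visual_flow = motor_turn if coupling == 'normal' else flip[motor_turn]
    return motor_turn if coordinates == 'motor' else visual_flow

# Under normal coupling the two hypotheses are indistinguishable...
assert predicted_signal('left', 'normal', 'motor') == predicted_signal('left', 'normal', 'visual')
# ...but inverted coupling dissociates them, which is what the training experiment exploits.
assert predicted_signal('left', 'inverted', 'motor') == 'left'
assert predicted_signal('left', 'inverted', 'visual') == 'right'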
Predictive coding has been an influential theoretical framework for how the brain might function. By finding a predictive signal that changes with the environment, contributes to the computation of mismatch signals, and whose 'language' matches that of its target area, Leinweber et al. [1] provide crucial evidence in support of predictive coding theories. Whether this framework can account more generally for cortical function, and how exactly it could be implemented in neural circuits, is a question that will keep neuroscientists busy for years to come.
REFERENCES

1. Leinweber, M., Ward, D.R., Sobczak, J.M., Attinger, A., and Keller, G.B. (2017). A sensorimotor circuit in mouse cortex for visual flow predictions. Neuron 95, 1420–1432.e5.

2. Summerfield, C., and de Lange, F.P. (2014). Expectation in perceptual decision making: neural and computational mechanisms. Nat. Rev. Neurosci. 15, 745–756.

3. Körding, K.P., and Wolpert, D.M. (2004). Bayesian integration in sensorimotor learning. Nature 427, 244–247.

4. Sommer, M.A., and Wurtz, R.H. (2008). Brain circuits for the internal monitoring of movements. Annu. Rev. Neurosci. 31, 317–338.

5. Blakemore, S.J., Wolpert, D., and Frith, C. (2000). Why can't you tickle yourself? Neuroreport 11, R11–R16.

6. Mumford, D. (1992). On the computational architecture of the neocortex. II. The role of cortico-cortical loops. Biol. Cybern. 66, 241–251.

7. Friston, K.J. (2010). The free-energy principle: a unified brain theory? Nat. Rev. Neurosci. 11, 127–138.

8. Barlow, H. (1994). What is the computational goal of the neocortex? In Large-Scale Neuronal Theories of the Brain, C. Koch and J.L. Davis, eds. (MIT Press), pp. 1–22.

9. Clark, A. (2013). Predictive brains, situated agents, and the future of cognitive science. Behav. Brain Sci. 36, 181–204.

10. Rao, R.P., and Ballard, D.H. (1999). Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nat. Neurosci. 2, 79–87.

11. Friston, K.J. (2005). A theory of cortical responses. Philos. Trans. R. Soc. Lond. B 360, 815–836.

12. Srinivasan, M.V., Laughlin, S.B., and Dubs, A. (1982). Predictive coding: a fresh view of inhibition in the retina. Proc. R. Soc. Lond. B 216, 427–459.

13. Keller, G.B., Bonhoeffer, T., and Hübener, M. (2012). Sensorimotor mismatch signals in primary visual cortex of the behaving mouse. Neuron 74, 809–815.

14. Attinger, A., Wang, B., and Keller, G.B. (2017). Visuomotor coupling shapes the functional development of mouse visual cortex. Cell 169, 1291–1302.e14.

15. Zhang, S., Xu, M., Kamigaki, T., Hoang Do, J.P., Chang, W.-C., Jenvay, S., Miyamichi, K., Luo, L., and Dan, Y. (2014). Long-range and local circuits for top-down modulation of visual cortex processing. Science 345, 660–665.

16. Schneider, D.M., Nelson, A., and Mooney, R. (2014). A synaptic and circuit basis for corollary discharge in the auditory cortex. Nature 513, 189–194.

17. Petreanu, L., Gutnisky, D.A., Huber, D., Xu, N., O'Connor, D.H., Tian, L., Looger, L.L., and Svoboda, K. (2012). Activity in motor–sensory projections reveals distributed coding in somatosensation. Nature 489, 299–303.
Actin Networks: Adapting to Load through Geometry

Klemens Rottner1,2,* and Frieda Kage2
1Zoological Institute, Braunschweig University of Technology, Spielmannstrasse 7, 38106 Braunschweig, Germany
2Helmholtz Centre for Infection Research, Inhoffenstrasse 7, 38124 Braunschweig, Germany
*Correspondence: [email protected]
http://dx.doi.org/10.1016/j.cub.2017.10.042
Cell migration frequently involves the protrusion of lamellipodial actin networks, the structure and regulation of which have been studied for decades. New work highlights how the geometry of these networks endows cells with the ability to adapt to environmental conditions and load.

Actin filaments are a major component of the cytoskeleton of eukaryotic cells. Their dynamic polymerization and turnover control cell shape, migration and organelle function, as well as the communication of cells with neighbours or extracellular substrates. The birth of actin filaments is known as nucleation, and the list of factors and mechanisms known to promote or inhibit this process has grown almost explosively in recent years. Nucleation and branching of actin filaments by Arp2/3 complex, followed by their elongation, is essential for various actin-dependent processes, including protrusion of the cell edge, vesicle trafficking and viral or bacterial hijacking of their hosts [1,2]. Arp2/3-independent nucleation mechanisms, for example those mediated by formins [3], generate subcellular actin structures that lack Arp2/3 complex, such as the contractile actin filament arrays of muscle or the stress fibres of non-muscle cells. Arp2/3 complex is considered to be obligatory for some processes, however, such as the protrusion of thin sheets of cytoplasm at the cell periphery, called lamellipodia ('fin feet') [4,5]. These highly dynamic structures are composed of nearly two-dimensional actin networks (100–200 nm thick) when formed on solid surfaces in vitro, but also appear as intrinsically flat structures when formed in complex three-dimensional environments [6]. Irrespective of environment and conditions, lamellipodia constitute key structures for promoting effective cell
migration [7] and are considered an excellent model system for Arp2/3-dependent actin assembly processes in vivo, even though they also contain formins [8]. A new paper from the Sixt group [9], recently published in Cell, establishes for the first time how the geometry of Arp2/3-dependent actin networks in lamellipodia can mediate adaptation to varying loads at the cell front during protrusion and migration. The mechanism described in the new work was deduced from experimental studies and fully recapitulated by mathematical modelling; it is remarkably simple and intuitive, and understanding it requires consideration of no more than a handful of widely accepted biochemical facts about the regulators