Editorial overview: Computational neuroscience
Christian Machens and Adrienne Fairhall
Current Opinion in Neurobiology 2017, 46:1–5
http://dx.doi.org/10.1016/j.conb.2017.09.009
0959-4388/© 2017 Published by Elsevier Ltd.
Christian Machens
Champalimaud Foundation, Lisbon, Portugal
e-mail: christian.machens@neuro.fchampalimaud.org

Christian Machens is a principal investigator at the Champalimaud Foundation in Lisbon, Portugal. He studied physics in Tübingen, Germany, and in Stony Brook, New York, and received a Ph.D. in computational neuroscience from the Humboldt University of Berlin, Germany, in 2002. He then worked as a postdoctoral fellow at the Cold Spring Harbor Laboratory before taking a faculty position at the École Normale Supérieure in Paris in 2007. In 2011, he joined the newly formed Neuroscience Programme at the Champalimaud Foundation. In his research, he seeks to understand how networks of neurons communicate and process information. His work combines the statistical analysis of neural population activity with the computational modeling of neural networks.
Adrienne Fairhall
UW Institute for Neuroengineering, University of Washington, Seattle, USA
e-mail: [email protected]

Adrienne Fairhall is a professor in the Department of Physiology and Biophysics and adjunct in the Departments of Physics and Applied Mathematics at the University of Washington; she co-directs the UW Institute for Neuroengineering. She obtained her Honors degree in theoretical physics from the Australian National University and a PhD in statistical physics from the Weizmann Institute of Science. She was a postdoctoral fellow at NEC Research Institute and at Princeton University. She joined the UW faculty in 2004 and now co-directs the University of Washington's Computational Neuroscience Center and the Institute for Neuroengineering. She has directed the MBL course, Methods in Computational Neuroscience, and co-directs the UW/Allen Workshop on the Dynamic Brain. She has held fellowships from Burroughs-Wellcome, the McKnight Foundation, the Sloan Foundation and the Allen Family Foundation. Her work focuses on the interplay between cellular and circuit dynamics in neural computation, with a particular interest in adaptive and state-dependent neural coding.
In summarizing the goals of the US BRAIN Initiative, Jorgenson et al. [1] beautifully highlighted the critical role of computational/theoretical neuroscience in the broader mission of the field: 'The overarching goal of theory, modelling and statistics in neuroscience is to create an understanding of how the brain works—how information is encoded and processed by the dynamic activity of specific neural circuits, and how neural coding and processing lead to perception, emotion, cognition and behaviour. Powerful new experimental technologies ... will produce datasets of unprecedented size and sophistication, but rigorous statistical analysis and theoretical insight will be essential for understanding what these data mean. Coherent lessons must be drawn not only from the analysis of single experiments, but also by integrating insights across experiments, scales and systems. Theoretical studies will allow us to check the rigour and robustness of new conceptualizations and to identify distinctive predictions of competing ideas to help direct further experiments. Neuroscience will mature to the extent that we discover basic principles of neural coding and computation that connect and predict the results of multi-modal experimental manipulations of brain and behaviour.'

This lays out a series of grand challenges for computation and underscores the extent to which the value of new experimental approaches relies on the co-development of theory [2]. Are we collectively living up to these challenges? In this issue, we invited authors to share recent contributions and perspectives that demonstrate the application of theory and modeling in the analysis of systems, and the formulation of new statistical tools to capture structure in neural data and in behavior.

Drawing together these three branches of computational neuroscience — theory, modeling, and data analysis — is the central truth that neural networks are driven nonlinear dynamical systems [3]. The identification of attractor solutions lies at the heart of classic theories of computations such as memory [4], robust coding [5], long-timescale integration [6], and decision-making [7], but recent demonstrations of the power of artificial neural networks to generate complex behaviors have newly expanded the space of possibilities [8,9]. As we will describe here, the field is pushing forward with advances in the tools required to identify, construct, train, analyze, simplify, and statistically characterize dynamical networks, and to recognize their signatures in high-dimensional neural data. While these powerful models can produce remarkable insight using fairly generic components, it remains necessary to understand the role of the detailed structural and molecular properties that are, after all, most accessible to genetic and pharmacological intervention.
Thus we have also made sure to include work that points to direct connections between dynamics and specific biophysics, including cases in which biophysics suggests or supports novel computational paradigms.
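As a reminder of what an 'attractor solution' means in the simplest setting, the following toy example (our own illustration, not taken from any of the reviewed papers) implements a Hopfield network in the spirit of [4]: a Hebbian rule stores a few patterns as fixed points of the dynamics, and a corrupted cue relaxes back onto the stored memory. All parameters are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

N, P = 100, 3                        # neurons, stored patterns
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian storage: W_ij = (1/N) * sum_mu xi_i^mu xi_j^mu, no self-coupling
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def recall(cue, steps=20):
    """Asynchronous updates until the state settles into an attractor."""
    s = cue.copy()
    for _ in range(steps):
        for i in rng.permutation(N):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Corrupt 20% of one stored pattern and let the dynamics clean it up
cue = patterns[0].copy()
flip = rng.choice(N, size=20, replace=False)
cue[flip] *= -1

recovered = recall(cue)
print("overlap with stored pattern:", (recovered @ patterns[0]) / N)  # ~1.0
```

The same logic, fixed points of a recurrent dynamical system serving as memories or decisions, underlies several of the network models discussed below.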
Quantifying behavior
On both the methodological and theoretical sides, the field has undergone a push toward more precise accounts of rich, naturalistic behavior. In 'Quantifying behavior to solve sensorimotor transformations: advances from worms and flies', Calhoun and Murthy [10] review recent advances in the automated analysis of (invertebrate) behavior. The goal is to reconstruct the (complete) sensory inputs and motor outputs of freely behaving organisms, and to use the resulting data to resolve several long-standing questions about sensorimotor transformations. At the interface of natural and lab-controlled behavior, Kolling and Akam [11] review the computational literature on foraging behavior, showing that several conflicting results can be unified under the common umbrella of reinforcement learning if the agent's goal is to maximize the average reward rate collected within an environment. Indeed, the concept of optimality remains one of the leading postulated design principles underlying behavioral strategies as well as information processing in general. Sharpee, in 'On texture, form, and fixational eye movements' [12], suggests that the statistics of eye movements may be tuned to make optimal use of texture detection in V2 for boundary identification.
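To make the average-reward framing of foraging [11] concrete, here is a toy simulation of our own devising (a sketch, not code from the reviewed paper): a forager keeps a running estimate of the average reward rate and leaves a depleting patch once the patch's instantaneous rate falls below that estimate, which is the marginal-value-theorem policy that average-reward reinforcement learning converges to. All parameter names and values (r0, decay, travel, alpha) are illustrative assumptions.

```python
import numpy as np

def forage(trials=200, r0=10.0, decay=0.1, travel=5, alpha=0.01):
    """Toy patch-leaving forager.

    A patch yields r0 * exp(-decay * t) reward per step; travel steps
    between patches yield nothing.  The agent leaves when the current
    patch rate drops below its running estimate of the average rate.
    """
    avg_rate = 0.0                     # running estimate of reward per step
    leave_times = []
    for _ in range(trials):
        t = 0
        while r0 * np.exp(-decay * t) > avg_rate:
            reward = r0 * np.exp(-decay * t)
            avg_rate += alpha * (reward - avg_rate)   # earn while in the patch
            t += 1
        for _ in range(travel):                        # pay the travel cost
            avg_rate += alpha * (0.0 - avg_rate)
        leave_times.append(t)
    return avg_rate, leave_times

rate, leave_times = forage()
print(f"estimated average rate: {rate:.2f} reward/step")
print(f"typical patch residence time: {np.mean(leave_times[-50:]):.1f} steps")
```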
Feedforward and recurrent network models
As noted, artificial neural networks form a vital testbed for exploring the potential capabilities of networks of biological neurons. Many-layered feedforward convolutional networks now solve many problems, particularly in image recognition, that were previously inaccessible [13]. Remarkably, such networks account for considerable response variance throughout ventral-stream visual areas [14]. Additional insight, both into neural function and into image structure, is gained by training such networks and then using them generatively, as discussed by Gatys et al. in 'Texture and art with deep neural networks' [15]. While these classical feedforward networks can provide important insight into sensory processing, the brain is known to be built largely of recurrent networks, which can generate their own, stimulus-independent internal dynamics. Understanding such networks and linking them to experimental observations therefore remains a strong focus of current work. Barak explores the role of 'Recurrent neural networks as versatile tools of neuroscience research' [16]. He shows how to train recurrent networks on specific behavioral tasks, reverse-engineer their function, and then compare their solution to that found in neural recordings from awake, behaving animals. The creation of slow, stimulus-independent dynamics is the topic of the article 'Once upon a (slow) time in the land of recurrent neuronal networks ...' by Huang and Doiron [17]. They compare slow dynamics generated at the edge of chaos with slow dynamics generated by random transitions between fixed points. A specific problem that is just coming into focus is how to incorporate the coming flood of connectomics data into our understanding of recurrent neural networks. Ocker et al. [18] take a step in that direction in their article 'From the statistics of connectivity to the statistics of spike times in neuronal networks'. They review how the statistics of network motifs can influence the spiking output of large neural networks.
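A stripped-down version of the recipe Barak describes [16] can be written in a few lines. The sketch below is our own illustration and assumes PyTorch is available; the task (integrating noisy evidence toward a binary choice), the network size, and the hyperparameters are arbitrary assumptions, not taken from the reviewed papers. Once trained, the hidden trajectories h are the object one would reverse-engineer, for example by locating fixed points or line attractors.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class EvidenceIntegrator(nn.Module):
    def __init__(self, n_hidden=64):
        super().__init__()
        self.rnn = nn.RNN(input_size=1, hidden_size=n_hidden, batch_first=True)
        self.readout = nn.Linear(n_hidden, 1)

    def forward(self, x):
        h, _ = self.rnn(x)                 # hidden states for every time step
        return self.readout(h[:, -1]), h   # decision from the final state

def make_batch(batch=128, T=50, coherence=0.2):
    """Noisy evidence stream whose mean sign is the correct choice."""
    signs = torch.randint(0, 2, (batch, 1, 1)).float() * 2 - 1
    x = coherence * signs + torch.randn(batch, T, 1)
    return x, (signs.squeeze(-1) > 0).float()

model = EvidenceIntegrator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(500):
    x, target = make_batch()
    logits, _ = model(x)
    loss = loss_fn(logits, target)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Accuracy on fresh trials; the hidden trajectories h are what one would
# reverse-engineer (fixed points, line attractors) as discussed in [16].
with torch.no_grad():
    x, target = make_batch(1000)
    logits, h = model(x)
    acc = ((logits > 0).float() == target).float().mean().item()
print(f"accuracy: {acc:.2f}")
```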
Dealing with high-dimensional data
The past few years have seen a continuous development, beyond single-neuron approaches [19], of new methodologies for handling multineuronal data and inferring dynamical structure. Herfurth and Tchumatchenko, in 'How linear response shaped models of neural circuits and the quest for alternatives' [20], summarize the successes and limitations of linear approaches in capturing stimulus-response properties and internal network dynamics, and discuss nonlinear extensions. Herz et al. [21] show that linear decoding in the form of population vectors, on the other hand, works even in complicated systems such as the grid-cell system of the entorhinal cortex ('Periodic population codes: from a single circular variable to higher dimensions and multiple nested scales'). Generally, though, the activity of a large population of neurons is more than just a collection of single-neuron activities. Tkačik and Savin [22] make this notion precise and show how to put it to a rigorous test using maximum entropy models ('Maximum entropy models as a tool for building precise neural controls'). Moreno-Bote et al., in 'What can neuronal populations tell us about cognition?' [23], show how such higher-order structure in neural population activity can be leveraged to make inferences about cognitive processes such as internal deliberations. Linderman and Gershman [24] emphasize that any statistical analysis of neuronal data becomes much more grounded once it can be compared against a properly formulated probabilistic theory ('Using computational theory to constrain statistical models of neural data').
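As a minimal illustration of the population-vector decoding that Herz et al. [21] extend to nested, multi-scale codes, consider the simplest case of a single circular variable. The sketch below is our own and purely illustrative (von Mises tuning curves, Poisson spiking, arbitrary parameters); it is not code from the reviewed paper.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 64                                   # neurons with evenly spaced preferred angles
preferred = np.linspace(0, 2 * np.pi, N, endpoint=False)

def rates(theta, gain=20.0, kappa=2.0):
    """Von Mises tuning curves for a circular stimulus theta."""
    return gain * np.exp(kappa * (np.cos(theta - preferred) - 1))

def decode(spike_counts):
    """Population vector: sum preferred-direction unit vectors weighted by activity."""
    z = np.sum(spike_counts * np.exp(1j * preferred))
    return np.angle(z) % (2 * np.pi)

theta_true = 1.3
counts = rng.poisson(rates(theta_true))   # noisy spike counts for one trial
theta_hat = decode(counts)
print(f"true {theta_true:.2f} rad, decoded {theta_hat:.2f} rad")
```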
State dependence
Given that neurophysiology experiments are increasingly conducted in awake, behaving animals, it is necessary to consider neural coding as a dynamic process that is strongly influenced by the brain's state and the task at hand. Lange and Haefner, in 'Characterizing and interpreting the influence of internal variables on sensory activity' [25], review how statistical approaches can be used to tease apart the influence of unobserved or partially observed internal processes, such as attention, on the sensory tuning of neurons. Wood et al., in 'Cortical inhibitory interneurons control sensory processing' [26], discuss their own and other work deconstructing the distinct contributions of different inhibitory interneuron types to this task-dependent and state-dependent modulation: one of the first significant identifications of a potential role for cell-type diversity in cortex. The observed context dependence of receptive fields has long pointed toward the possibility that neural representation should be thought of as predictive coding. In 'With or without you: predictive coding and Bayesian inference in the brain', Aitchison and Lengyel [27] go from this starting point to interpret context-dependent dynamics in terms of the more general framework of Bayesian inference. The ultimate, macroscopic state dependence of neural dynamics may occur at the transition between waking and sleep. In 'Modeling the mammalian sleep cycle', Weber [28] reviews the brainstem circuits that switch the brain into and out of REM sleep and discusses several mechanistic network models that can account for these switches.
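A caricature of the switching circuits Weber reviews [28] can be built from two mutually inhibiting populations with slow activity-dependent adaptation. The toy rate model below is our own sketch with arbitrary parameters, not a model from the reviewed paper; it simply shows that mutual inhibition plus a slow negative feedback is enough to produce spontaneous alternation between two quasi-stable states.

```python
import numpy as np

def f(x):
    """Steep sigmoid nonlinearity (near-binary rates)."""
    return 1.0 / (1.0 + np.exp(-x / 0.1))

def simulate(T=5000, dt=1.0, tau=10.0, tau_a=500.0, I=1.0, w=2.0, b=1.5):
    r = np.array([1.0, 0.0])   # population rates (e.g. REM-off, REM-on)
    a = np.zeros(2)            # slow adaptation variables
    trace = np.zeros((T, 2))
    for t in range(T):
        drive = I - w * r[::-1] - a          # tonic drive minus inhibition and adaptation
        r += dt * (-r + f(drive)) / tau
        a += dt * (-a + b * r) / tau_a       # adaptation builds up in the active population
        trace[t] = r
    return trace

trace = simulate()
switches = np.abs(np.diff((trace[:, 0] > 0.5).astype(int))).sum()
print(f"number of state switches in {len(trace)} steps: {switches}")
```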
Learning and plasticity
The neural substrates for learning and memory formation remain a key area for experimental and theoretical investigation. Two papers address the fundamental biophysics of plasticity and how these dynamics shape learning. Clopath et al., in 'Modelling plasticity in dendrites: from single cells to networks' [29], review the influence of complex dendritic structure in determining learning rules both within and between neurons. Mongillo et al. [30] discuss the puzzling dichotomy between stable long-term memories and unstable synaptic connections, and review theories that seek to reconcile the two ('Intrinsic volatility of synaptic connections – a challenge to the synaptic trace theory of memory'). In 'Learning with three factors: modulating Hebbian plasticity with errors', Kusmierz et al. [31] elaborate a general framework for multiple learning paradigms through the incorporation of a 'third factor', a modulatory factor that may carry information about errors or other feedback. Tokuda and colleagues, in 'New insights into olivo-cerebellar circuits for learning from a small training sample' [32], discuss the problem of learning when only a few examples or trials are available. Focusing on the cerebellum, they review literature suggesting a clever way of tightly controlling the degrees of freedom of a learning system, a constraint that is slowly relaxed as more data become available. While all of these learning paradigms are largely bottom-up and start with biophysics, Orsborn and Pesaran, in 'Parsing learning in networks using brain–machine interfaces' [33], review a promising systems-level learning paradigm, brain–machine interfaces, in which the population activities that give rise to a particular behavior are tightly controlled, so that the effects of learning can be studied at the level of neural populations.
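The 'third factor' of [31] can be summarized in a single update rule: the Hebbian coincidence of presynaptic and postsynaptic activity changes a weight only when gated by a modulatory signal, for example Δw = η (R − R̄) · pre · post, where R − R̄ is a reward prediction error. The toy example below is our own illustration, not code from the review: the postsynaptic factor is exploratory output noise, and the reward-gated rule learns a simple linear classification. Task, parameters, and variable names are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy task: classify 2D inputs by the sign of x[0] + x[1]
def sample(batch=1):
    x = rng.normal(size=(batch, 2))
    return x, np.sign(x[:, 0] + x[:, 1])

w = np.zeros(2)
eta, sigma = 0.1, 0.5
reward_baseline = 0.0

for trial in range(2000):
    x, label = sample()
    x, label = x[0], label[0]
    noise = sigma * rng.normal()            # postsynaptic exploration (the "post" factor)
    y = w @ x + noise
    reward = 1.0 if np.sign(y) == label else -1.0
    # Three-factor rule: (reward - baseline) gates the pre * post coincidence
    w += eta * (reward - reward_baseline) * noise * x
    reward_baseline += 0.05 * (reward - reward_baseline)

# Evaluate without exploration noise
x_test, label_test = sample(1000)
acc = np.mean(np.sign(x_test @ w) == label_test)
print(f"accuracy after learning: {acc:.2f}")
```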
Analysis of specific systems
A number of papers give an in-depth view of progress in specific neural systems. While development is an area of enormous interest for neuroscience and a focus of much experimental work, it has received relatively little theoretical attention. In 'Understanding neural circuit development through theory and models' [34], Richter and Gjorgjieva survey progress in theoretical models describing activity-dependent wiring and evolving dynamics in developing nervous systems. Kornblith and Tsao contribute a sweeping review, 'How thoughts arise from sights: inferotemporal and prefrontal contributions to vision' [35], of the recent dramatic progress in mapping higher-level visual processing. Reinforcement learning serves as a canonical example of a computational algorithm that appears to have a direct mapping onto specific neuronal outputs. In particular, since Schultz's influential studies [36],
dopaminergic firing in the VTA has been identified with the reward prediction error. Lau and co-authors, in 'The many worlds hypothesis of dopamine prediction error: implications of a parallel circuit architecture in the basal ganglia' [37], confront experimental evidence that seems inconsistent with this paradigm in order to broaden the possible interpretations of dopamine signals in light of reinforcement learning theories.
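The classic identification of phasic dopamine with a temporal-difference prediction error is easy to reproduce in a few lines. The sketch below is our own toy simulation, not code from [36] or [37]: state values are learned over a trial in which a cue reliably predicts a reward ten steps later, and after learning the TD error δ = r + V(next) − V(current) appears at the cue rather than at the reward.

```python
import numpy as np

delay, alpha, n_trials = 10, 0.1, 500

# States are "time since cue onset"; before the cue there is no stimulus,
# so the pre-cue value is fixed at zero (the cue arrives unpredictably).
V = np.zeros(delay + 2)             # V[k] = value k steps after the cue

for _ in range(n_trials):
    for k in range(delay + 1):
        r = 1.0 if k == delay else 0.0
        delta = r + V[k + 1] - V[k]        # TD(0) prediction error ("dopamine")
        V[k] += alpha * delta

# Prediction error on a trained trial: at cue onset (transition from the
# valueless pre-cue state into V[0]) and at reward delivery.
delta_at_cue = 0.0 + V[0] - 0.0
delta_at_reward = 1.0 + V[delay + 1] - V[delay]
print(f"TD error at cue onset: {delta_at_cue:.2f}")    # -> ~1: response transfers to the cue
print(f"TD error at reward   : {delta_at_reward:.2f}") # -> ~0: reward is fully predicted
```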
Disease
Possibly reflecting a growing confidence in and mainstreaming of model formalisms, a number of authors have applied broader theories of neural function to disease models — a vital goal for computation. Rubin's paper, 'Computational models of basal ganglia dysfunction: the dynamics is in the details' [38], provides a beautiful summary of network modeling of the basal ganglia and of current hypotheses for the pathologies that lead to tremor. Importantly, Rubin emphasizes the influence that single-neuron dynamics have on the emergent behavior of the circuit. Moving to a higher level of description, Keramati and co-authors advance a novel computational theory of addiction in 'Misdeed of the need: toward computational accounts of the transition to addiction' [39]. The framework seeks to understand addictive behavior across all the major stages of addiction, including the initial phases of intake escalation. It opens new ground by going beyond the typical focus on subcortical dopaminergic circuitry to include the executive brain systems that establish goals and control behavior. Similarly, Conceição and colleagues branch out from the basal ganglia to include cortical areas when reviewing the mechanisms and computations underlying the generation of Tourette syndrome, in 'Premonitory urges and tics in Tourette syndrome: computational mechanisms and neural correlates' [40]. Leptourgos et al., in 'Can circular inference relate the neuropathological and behavioral aspects of schizophrenia?' [41], apply concepts based on inference models to schizophrenia, arguing that the pathology may arise as a manifestation of circular reasoning.
Education
Finally, the increasing need for theoretical, computational and high-level statistical training has raised the urgency of providing appropriate educational programs and opportunities. We therefore invited Mark Goldman and Michale Fee to contribute an article that we believe will be of great value for the field, ‘Computational Neuroscience Training for the Next Generation of Neuroscientists’ [42]. The authors share their own and the broader community’s experience, advice, wisdom and pointers to resources for new programs or for those keen to update and improve their offerings.
References
1. Jorgenson LA et al.: The BRAIN Initiative: developing technology to catalyse neuroscience discovery. Philos Trans R Soc B 2015, 370:20140164 http://dx.doi.org/10.1098/rstb.2014.0164.
2. Alivisatos AP, Chun M, Church GM, Deisseroth K, Donoghue JP, Greenspan RJ, McEuen PL, Roukes ML, Sejnowski TJ, Weiss PS, Yuste R: The brain activity map. Science 2013, 339:1284-1285.
3. Kass RE et al.: Computational neuroscience: mathematical and statistical perspectives. Ann Rev Stat 2018 (in press).
4. Hopfield J: Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci U S A 1982, 79:2554-2558.
5. Ben-Yishai R, Bar-Or RL, Sompolinsky H: Theory of orientation tuning in visual cortex. Proc Natl Acad Sci U S A 1995, 92:3844-3848.
6. Seung HS, Lee DD, Reis BY, Tank DW: Stability of the memory of eye position in a recurrent network of conductance-based model neurons. Neuron 2000, 26:259-271.
7. Wang X-J: Decision making in recurrent neuronal circuits. Neuron 2008, 60:215-234.
8. Maass W, Natschläger T, Markram H: Real-time computing without stable states: a new framework for neural computation based on perturbations. Neural Comput 2002, 14:2531-2560.
9. Mante V, Sussillo D, Shenoy KV, Newsome WT: Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature 2013, 503:78-84.
10. Calhoun AJ, Murthy M: Quantifying behavior to solve sensorimotor transformations: advances from worms and flies.
11. Kolling N, Akam T: (Reinforcement?) learning to forage optimally.
12. Sharpee T: On texture, form, and fixational eye movements.
13. LeCun Y, Bengio Y, Hinton G: Deep learning. Nature 2015, 521:436-444.
14. Yamins DLK, DiCarlo JJ: Using goal-driven deep learning models to understand sensory cortex. Nat Neurosci 2016, 19:356-365.
15. Gatys L, Ecker A, Bethge M: Texture and art with deep neural networks.
16. Barak O: Recurrent neural networks as versatile tools of neuroscience research.
17. Huang C, Doiron B: Once upon a (slow) time in the land of recurrent neuronal networks.
18. Ocker GK, Hu Y, Buice MA, Doiron B, Josić K, Rosenbaum R, Shea-Brown E: From the statistics of connectivity to the statistics of spike times in neuronal networks.
19. Aljadeff J, Lansdell B, Fairhall AL, Kleinfeld D: Analysis of neuronal spike trains, deconstructed. Neuron 2016, 91:221-259.
20. Herfurth T, Tchumatchenko T: How linear response shaped models of neural circuits and the quest for alternatives.
21. Herz AVM, Mathis A, Stemmler MB: Periodic population codes: from a single circular variable to higher dimensions and multiple nested scales.
22. Tkačik G, Savin C: Maximum entropy models as a tool for building precise neural controls.
23. Moreno-Bote R, Arandia-Romero I, Nogueira R, Mochol G: What can neuronal populations tell us about cognition?
24. Linderman S, Gershman S: Using computational theory to constrain statistical models of neural data.
25. Lange R, Haefner R: Characterizing and interpreting the influence of internal variables on sensory activity.
26. Wood K, Blackwell J, Geffen M: Cortical inhibitory interneurons control sensory processing.
27. Aitchison L, Lengyel M: With or without you: predictive coding and Bayesian inference in the brain.
28. Weber F: Modeling the mammalian sleep cycle.
29. Clopath C, Bono J, Wilmes K: Modelling plasticity in dendrites: from single cells to networks.
30. Mongillo G, Rumpel S, Loewenstein Y: Intrinsic volatility of synaptic connections – a challenge to the synaptic trace theory of memory.
31. Kusmierz L, Isomura T, Toyoizumi T: Learning with three factors: modulating Hebbian plasticity with errors.
32. Tokuda I, Hoang H, Kawato M: New insights into olivo-cerebellar circuits for learning from a small training sample.
33. Orsborn AL, Pesaran B: Parsing learning in networks using brain–machine interfaces.
34. Richter MLA, Gjorgjieva J: Understanding neural circuit development through theory and models.
35. Kornblith S, Tsao D: How thoughts arise from sights: inferotemporal and prefrontal contributions to vision.
36. Schultz W: Predictive reward signal of dopamine neurons. J Neurophysiol 1998, 80:1-27.
37. Lau B, Monteiro T, Paton J: The many worlds hypothesis of dopamine prediction error: implications of a parallel circuit architecture in the basal ganglia.
38. Rubin J: Computational models of basal ganglia dysfunction: the dynamics is in the details.
39. Keramati M, Ahmed S, Gutkin B: Misdeed of the need: toward computational accounts of the transition to addiction.
40. Conceição V, Dias Â, Farinha A, Maia T: Premonitory urges and tics in Tourette syndrome: computational mechanisms and neural correlates.
41. Leptourgos P, Denève S, Jardri R: Can circular inference relate the neuropathological and behavioral aspects of schizophrenia?
42. Goldman M, Fee M: Computational neuroscience training for the next generation of neuroscientists.