Current Biology
Dispatches

…relies on the quality of the fossil evidence used to calibrate the clock; second, to improve the precision of these divergence times it is imperative to improve our knowledge of the earliest animal fossil record; third, if the molecular dataset of dos Reis et al. [1] is representative of genomic-scale datasets in general (which is most likely the case), then 'fossil-free' molecular divergence times (relative divergence times), which were introduced to avoid the known problems of the fossil record and are based exclusively on the information in molecular sequence data [19], are meaningless in deep time.

This imprecision of the molecular clock deep in the history of life is frustrating. While the clock once offered hope that lineage divergences could be dated in the absence of fossil information, it is now clear that the only way to increase its precision is to improve our knowledge of the fossil record itself: by discovering new fossils, resolving the affinities of existing ones, and accurately dating fossil occurrences. With genomic data now available [1], our focus should return to palaeontology, and particularly to the investigation of the early and middle Neoproterozoic. It is evident that, in isolation, neither fossils nor molecular data can deliver the precise and accurate timescale of life that is essential to robustly testing proposed correlations between the history of life and that of planet Earth.
REFERENCES
1. dos Reis, M., Thawornwattana, Y., Angelis, K., Telford, M.J., Donoghue, P.C.J., and Yang, Z. (2015). Uncertainty in the timing of origin of animals and the limits of precision in molecular timescales. Curr. Biol. 25, 2939–2950.
2. Erwin, D.H., Laflamme, M., Tweedt, S.M., Sperling, E.A., Pisani, D., and Peterson, K.J. (2011). The Cambrian conundrum: early divergence and later ecological success in the early history of animals. Science 334, 1091–1097.
3. Parfrey, L.W., Lahr, D.G.J., Knoll, A.H., and Katz, L.A. (2011). Estimating the timing of early eukaryotic diversification with multigene molecular clocks. Proc. Natl. Acad. Sci. USA 108, 13624–13629.
4. Eme, L., Sharpe, S.C., Brown, M.W., and Roger, A.J. (2014). On the age of eukaryotes: evaluating evidence from fossils and molecular clocks. Cold Spring Harb. Perspect. Biol. 6, 165–180.
5. Planavsky, N.J., Reinhard, C.T., Wang, X., Thomson, D., McGoldrick, P., Rainbird, R.H., Johnson, T., Fischer, W.W., and Lyons, T.W. (2014). Low mid-Proterozoic atmospheric oxygen levels and the delayed rise of animals. Science 346, 635–638.
6. Sperling, E.A., Pisani, D., and Peterson, K.J. (2007). Poriferan paraphyly and its implications for Precambrian palaeobiology. In The Rise and Fall of the Ediacaran Biota, P. Vickers-Rich, and P. Komarower, eds. Geological Society, London, Special Publications Vol. 286, pp. 355–368.
7. Lenton, T.M., Boyle, R.A., Poulton, S.W., Shields-Zhou, G.A., and Butterfield, N.J. (2014). Co-evolution of eukaryotes and ocean oxygenation in the Neoproterozoic era. Nat. Geosci. 7, 257–265.
8. Zuckerkandl, E., and Pauling, L. (1962). Molecular disease, evolution, and genetic heterogeneity. In Horizons in Biochemistry, M. Kasha, and B. Pullman, eds. (New York: Academic Press), pp. 189–228.
9. Darwin, C. (1859). On the Origin of Species by Means of Natural Selection (London: John Murray).
10. Benton, M.J., and Donoghue, P.C.J. (2007). Palaeontological evidence to date the tree of life. Mol. Biol. Evol. 24, 26–53.
11. Fedonkin, M.A., Gehling, J.G., Grey, K., Narbonne, G.M., and Vickers-Rich, P. (2007). The Rise of Animals: Evolution and Diversification of the Kingdom Animalia (Baltimore: Johns Hopkins University Press).
12. Xiao, S., Zhang, Y., and Knoll, A.H. (1998). Three-dimensional preservation of algae and animal embryos in a Neoproterozoic phosphorite. Nature 391, 553–558.
13. Huldtgren, T., Cunningham, J.A., Yin, C., Stampanoni, M., Marone, F., Donoghue, P.C., and Bengtson, S. (2011). Fossilized nuclei and germination structures identify Ediacaran 'animal embryos' as encysting protists. Science 334, 1696–1699.
14. Liu, A.G., McIlroy, D., and Brasier, M.D. (2010). First evidence for locomotion in the Ediacara biota from the 565 Ma Mistaken Point Formation, Newfoundland. Geology 38, 123–126.
15. Yin, Z., Zhu, M., Davidson, H.H., Bottjer, D.J., Zhao, F., and Tafforeau, P. (2015). Sponge grade body fossil with cellular resolution dating 60 Myr before the Cambrian. Proc. Natl. Acad. Sci. USA 112, E1453–E1460.
16. Love, G.D., Grosjean, E., Stalvies, C., Fike, D.A., Grotzinger, J.P., Bradley, A.S., Kelly, A.E., Bhatia, M., Meredith, W., Snape, C.E., et al. (2009). Fossil steroids record the appearance of Demospongiae during the Cryogenian period. Nature 457, 718–721.
17. Graur, D., and Martin, W. (2004). Reading the entrails of chickens: molecular timescales of evolution and the illusion of precision. Trends Genet. 20, 80–86.
18. Holder, M., and Lewis, P.O. (2003). Phylogeny estimation: traditional and Bayesian approaches. Nat. Rev. Genet. 4, 275–284.
19. Battistuzzi, F.U., Billing-Ross, P., Murillo, O., Filipski, A., and Kumar, S. (2015). A protocol for diagnosing the effect of calibration priors on posterior time estimates: a case study for the Cambrian explosion of animal phyla. Mol. Biol. Evol. 32, 1907–1912.
20. Liu, A.G., Matthews, J.J., Menon, L.R., McIlroy, D., and Brasier, M.D. (2014). Haootia quadriformis n. gen., n. sp., interpreted as a muscular cnidarian impression from the Late Ediacaran period (approx. 560 million years ago). Proc. R. Soc. B 281, 20141202.
Systems Neuroscience: Shaping the Reward Prediction Error Signal

William R. Stauffer
Systems Neuroscience Institute, Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA 15216, USA
Correspondence: [email protected]
http://dx.doi.org/10.1016/j.cub.2015.09.057
A recent study shows that midbrain GABA (inhibitory) neurons code for environmentally predicted rewards. These GABA neurons communicate with dopamine neurons, where the reward prediction is subtracted from the delivered reward. Thus, the GABA prediction signal shapes the dopamine reward prediction error signal.

Dopamine neurons are critical for the control of diverse behaviors, ranging from reward processing to motor control [1].
How can we understand such a complex system? Over three decades ago, the neuroscientist David Marr established a
Current Biology 25, R1070–R1091, November 16, 2015 ©2015 Elsevier Ltd All rights reserved R1081
[Figure 1, panels A–C; full caption below.]
Figure 1. Midbrain computation of reward prediction error.
(A) Top: raster plot and peristimulus time histogram of a single dopamine neuron demonstrate a positive prediction error response when an unpredicted reward was delivered at time 'R'. Middle: raster plot and peristimulus time histogram of a single dopamine neuron; no reward response is observed at 'R' because the reward was fully predicted at time 'CS'. Bottom: raster plot and peristimulus time histogram demonstrate a negative prediction error response when a predicted reward failed to be delivered (No R). (Figure from [6]; reprinted with permission from AAAS.) (B) Schematic diagrams of the task used in the current study. Top: during unpredicted reward trials, one reward was randomly chosen from a set of five reward volumes (the reward distribution) and delivered at unpredictable times. The animal could predict neither the time of reward delivery nor the delivered volume, so the reward prediction was near zero. Bottom: during predicted reward trials, an odor pulse was delivered, and the reward arrived exactly 1.5 seconds after odor pulse onset. The animal could predict the timing of the reward delivery, but not the reward volume, so the reward prediction was the mathematical expectation of the reward distribution. Red circles indicate the one reward randomly chosen from the reward distribution. (C) Hypothetical neural response functions. The grey line represents a hypothetical reward response function and resembles the true function found by Eshel et al. [11]. The red dashed line represents the hypothetical reward response function divided by the constant reward prediction. The blue line represents the hypothetical reward response function minus the constant reward prediction. The orange circles indicate the ranges where the subtractive and divisive models most diverge.
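The subtractive-versus-divisive contrast described for panel C can be illustrated numerically. The snippet below is a minimal sketch, not the study's analysis: the logarithmic response function and the constant prediction value are hypothetical choices, made only to show that subtraction shifts the response curve uniformly while division rescales it, with the largest disagreement at extreme reward sizes.

```python
import math

def unpredicted_response(reward):
    # Hypothetical logarithmic response to unpredicted reward
    # (the shape of the grey line in Figure 1C).
    return math.log1p(reward)

PREDICTION = 2.0  # hypothetical constant reward prediction set by the cue

# Five reward volumes spanning a wide range, as in the task's reward distribution.
rewards = [0.1, 0.3, 1.2, 2.5, 10.0]

for r in rewards:
    base = unpredicted_response(r)
    subtractive = base - PREDICTION  # uniform downward shift of the curve
    divisive = base / PREDICTION     # rescaling: shift size depends on the response
    print(f"reward={r:5.1f}  base={base:5.2f}  "
          f"sub shift={base - subtractive:4.2f}  div shift={base - divisive:4.2f}")
```

Running this shows the subtractive shift is identical at every reward volume, whereas the divisive shift grows with the response magnitude, which is exactly the divergence that widely spaced reward sizes are designed to expose.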
conceptual framework for investigating neural information processing systems. His framework comprises three distinct but complementary 'levels of analysis': the computational goal, what the brain system is trying to accomplish; the algorithm, the computations and representations the brain system uses to achieve that goal; and the implementation, how the structure and function of nervous tissue realize the algorithm. To understand a brain system at these three levels, suggested Marr [2], is to "understand [the system] completely".

Learning is a principal computational goal of dopamine neurons, which employ the reward prediction error algorithm [3–10]. That is, dopamine neurons code the discrepancy between the received and the predicted reward: they are activated when the reward is better than predicted, but inhibited when the reward is worse than predicted (Figure 1A) [6]. It is still not known, however, how the midbrain neural circuitry, comprised largely of dopamine neurons and GABA inhibitory neurons, implements reward prediction error coding. A new study by Eshel et al. [11] sheds light on the neural circuit computations that generate the dopamine reward prediction error signal.

Reward Prediction Error
Reward prediction errors are formalized in reinforcement learning as the difference between received and predicted rewards [12]:

Reward prediction error = Received reward − Predicted reward (equation 1)
Reward prediction errors are immensely useful for learning; they indicate both when outcomes deviate from predictions and how predictions ought to be updated. Decades of research have provided neural data that appear consistent with equation 1 (Figure 1A) [3–10]. However, either subtraction of or division by the predicted reward can produce similar results, especially when the outcome approaches the prediction. Moreover, division is a common neural operation in both sensory- and value-related brain systems; range adaptation, for instance, is an example of divisive normalization [13,14]. Thus, Eshel et al. [11] first examined the arithmetic underlying reward prediction errors.

Subtraction, Not Division
Eshel et al. [11] developed a Pavlovian behavioral task wherein subtraction of, or division by, the reward prediction forecast starkly different neural responses. In every trial, the animals received one juice reward chosen randomly from a set of five reward volumes (the reward distribution; Figure 1B). During half of the trials, a completely unpredicted juice reward was delivered (Figure 1B, top). Because the animal could not predict when a reward would be delivered, the expectation of future reward was close to zero. The neuronal responses to unpredicted rewards approximated a logarithmic function (Figure 1C, grey line). On the other half of the trials, a cue (odor puff) predicted the exact timing of a juice reward (drawn from the same reward distribution; Figure 1B, bottom). In this way, the cue predicted the mathematical expectation of the reward distribution. Reinforcement learning dictates that this value, the prediction, will diminish the response to the received reward (equation 1). The crucial manipulation was the spacing of the five distinct reward sizes. When the extreme reward magnitudes were delivered, the subtractive and divisive models forecast highly divergent neural responses (Figure 1C, red and
blue lines, orange circles). Eshel et al. [11] recorded from individual dopamine neurons while mice performed the Pavlovian task. Across all reward volumes, the constant reward prediction associated with the cue shifted the response curve downwards by a similar amount. That is, the authors observed a linear shift in the response function, as forecast by the subtractive model, rather than a nonlinear change, as forecast by the divisive model. These carefully collected neural data clearly demonstrate the underlying subtractive arithmetic and confirm the dopamine prediction error response.

Reward Prediction
The results described so far demonstrate that the reward prediction is subtracted from the received reward to generate the reward prediction error. A natural question is which midbrain circuit elements code this reward prediction signal. The midbrain ventral tegmental area (VTA) is composed largely of dopamine neurons and GABA inhibitory neurons. VTA GABA neurons make inhibitory synapses onto dopamine neurons [15]. Action potentials from VTA GABA neurons cause dopamine activity to pause, lead to conditioned place aversion, and disrupt reward consumption [16,17]. Moreover, VTA GABA neurons ramp their activity upwards as predicted reward delivery approaches [18]. These findings suggest that VTA GABA neurons may play a direct role in reward prediction error coding.

To investigate this potential role, Eshel et al. [11] used optogenetics to interrogate the computational function of VTA GABA neurons. The authors expressed the light-sensitive depolarizing channel channelrhodopsin (ChR2) in GABA neurons. Then, using a clever manipulation, they stimulated GABA neurons to mimic the reward-prediction-induced GABA activity, rather than predicting reward delivery with an actual cue. Concurrently, they recorded dopamine action potentials.
The artificially elevated GABA activity reduced the dopamine reward response, just as the dopamine reward response is reduced by reward prediction. The reduction in dopamine reward response was similar at different reward volumes, again just as forecast by the subtractive model (Figure 1C, blue line). This result
demonstrates that VTA GABA activation mimics the effect of reward prediction. In a complementary experiment, Eshel et al. [11] expressed the hyperpolarizing proton pump archaerhodopsin (ArchT) in VTA GABA neurons. This technique permitted optical inhibition of GABA neuron activity, and thus removal of the reward-prediction-like GABA activity. The authors silenced GABA neurons after reward was predicted by a cue. Sure enough, the resulting dopamine reward response was larger than in the same task condition without GABA silencing. This result demonstrates that VTA GABA inhibition partially removes the effect of reward prediction. Together, these two experiments provide evidence that GABA activity is causally related to the proper computation of the reward prediction error. Furthermore, the results suggest that VTA GABA neurons actually code the reward prediction.

Behavioral experiments confirmed the role of the VTA GABA reward prediction signal in the computation of reward prediction error. Anticipatory licking is a Pavlovian conditioned response that occurs between odor onset and reward delivery, and it is commonly used as an indicator of the learned predicted value. Interestingly, optogenetic stimulation of GABA neurons actually reduced conditioned responding. This outcome may at first seem confusing: shouldn't a higher reward prediction lead to more anticipatory licking? Not when the main function of the prediction signal is to participate in the computation of the reward prediction error. Artificially increasing the reward prediction signal ensured that the delivered reward would be less than the predicted reward. According to equation 1, this relationship results in negative prediction errors, and hence a learned decrease in the predicted value of the outcome. Thus, previous laser trials caused the mice to learn a new, reduced value for the odor.
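The direction of that behavioral effect follows from equation 1 combined with an error-driven update rule. The sketch below is a toy delta-rule simulation, not the authors' model: the reward size, learning rate, and size of the stimulation-induced prediction boost are all hypothetical, chosen only to show that an artificially inflated prediction drives negative prediction errors and a lower learned cue value.

```python
ALPHA = 0.2       # hypothetical learning rate
REWARD = 1.0      # hypothetical reward size
GABA_BOOST = 0.8  # hypothetical extra prediction added by GABA stimulation

value_control = 0.0  # learned cue value on normal trials
value_stim = 0.0     # learned cue value on laser (GABA-stimulated) trials

for _ in range(50):
    # Equation 1: prediction error = received reward - predicted reward,
    # and the prediction moves a fraction ALPHA of the way toward the outcome.
    value_control += ALPHA * (REWARD - value_control)
    # With stimulation the effective prediction is inflated, so the error
    # turns negative once the learned value approaches REWARD - GABA_BOOST.
    value_stim += ALPHA * (REWARD - (value_stim + GABA_BOOST))

print(round(value_control, 2))  # converges toward the reward size, 1.0
print(round(value_stim, 2))     # converges toward REWARD - GABA_BOOST, 0.2
```

In this toy setting the laser-paired cue settles at a lower value than the control cue, matching the reduced anticipatory licking the authors observed.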
This result demonstrates that the behavioral function of VTA GABA neurons is to code reward prediction specifically for the computation of reward prediction error.

This study [11] is the first to demonstrate the coding properties of VTA GABA neurons and to elucidate the implementation of the dopamine reward prediction error. Accordingly, it brings us closer to David Marr's 'complete understanding' of a brain system. But it is only a start. Dopamine neurons receive inputs from more than 30 brain structures [19], and many of these contribute to the calculation of the dopamine reward prediction error [20]. Furthermore, the dopamine prediction error response function codes economic utility, which can be highly nonlinear with regard to reward magnitude [9]. Although subtraction of the reward prediction can maintain faithful coding of this variable across a broad range of values, the nonlinear reward magnitude code must apparently originate elsewhere. Nevertheless, this impactful study makes clear that the synaptic inputs to dopamine neurons are compared and used to compute a dopamine neuron-specific reward prediction error response that functions specifically to update value estimates.

REFERENCES
1. Schultz, W. (2007). Multiple dopamine functions at different time courses. Annu. Rev. Neurosci. 30, 259–288.
2. Marr, D. (1982). Vision: A Computational Investigation into the Human Representation and Processing of Visual Information (New York: Henry Holt and Co., Inc.).
3. Ljungberg, T., Apicella, P., and Schultz, W. (1992). Responses of monkey dopamine neurons during learning of behavioral reactions. J. Neurophysiol. 67, 145–163.
4. Schultz, W., Apicella, P., and Ljungberg, T. (1993). Responses of monkey dopamine neurons to reward and conditioned stimuli during successive steps of learning a delayed response task. J. Neurosci. 13, 900–913.
5. Mirenowicz, J., and Schultz, W. (1996). Preferential activation of midbrain dopamine neurons by appetitive rather than aversive stimuli. Nature 379, 449–451.
6. Schultz, W., Dayan, P., and Montague, P.R. (1997). A neural substrate of prediction and reward. Science 275, 1593–1599.
7. Bromberg-Martin, E.S., Matsumoto, M., Hong, S., and Hikosaka, O. (2010). A pallidus-habenula-dopamine pathway signals inferred stimulus values. J. Neurophysiol. 104, 1068–1076.
8. Steinberg, E.E., Keiflin, R., Boivin, J.R., Witten, I.B., Deisseroth, K., and Janak, P.H. (2013). A causal link between prediction errors, dopamine neurons and learning. Nat. Neurosci. 16, 966–973.
9. Stauffer, W., Lak, A., and Schultz, W. (2014). Dopamine reward prediction error responses reflect marginal utility. Curr. Biol. 24, 2491–2500.
10. Nakahara, H., Itoh, H., Kawagoe, R., Takikawa, Y., and Hikosaka, O. (2004). Dopamine neurons can represent context-dependent prediction error. Neuron 41, 269–280.
11. Eshel, N., Bukwich, M., Rao, V., Hemmelder, V., Tian, J., and Uchida, N. (2015). Arithmetic and local circuitry underlying dopamine prediction errors. Nature 525, 243–246.
12. Sutton, R.S., and Barto, A.G. (1998). Reinforcement Learning: An Introduction (Cambridge, MA: The MIT Press).
13. Heeger, D.J. (1992). Normalization of cell responses in cat striate cortex. Vis. Neurosci. 9, 181–197.
14. Louie, K., Grattan, L.E., and Glimcher, P.W. (2011). Reward value-based gain control: divisive normalization in parietal cortex. J. Neurosci. 31, 10627–10639.
15. Omelchenko, N., and Sesack, S.R. (2009). Ultrastructural analysis of local collaterals of rat ventral tegmental area neurons: GABA phenotype and synapses onto dopamine and GABA cells. Synapse 63, 895–906.
16. Tan, K.R., Yvon, C., Turiault, M., Mirzabekov, J.J., Doehner, J., Labouèbe, G., Deisseroth, K., Tye, K.M., and Lüscher, C. (2012). GABA neurons of the VTA drive conditioned place aversion. Neuron 73, 1173–1183.
17. van Zessen, R., Phillips, J., Budygin, E., and Stuber, G. (2012). Activation of VTA GABA neurons disrupts reward consumption. Neuron 73, 1184–1194.
18. Cohen, J.Y., Haesler, S., Vong, L., Lowell, B.B., and Uchida, N. (2012). Neuron-type-specific signals for reward and punishment in the ventral tegmental area. Nature 482, 85–88.
19. Watabe-Uchida, M., Zhu, L., Ogawa, S.K., Vamanrao, A., and Uchida, N. (2012). Whole-brain mapping of direct inputs to midbrain dopamine neurons. Neuron 74, 858–873.
20. Tian, J., and Uchida, N. (2015). Habenula lesions reveal that multiple mechanisms underlie dopamine prediction errors. Neuron 87, 1304–1316.
Evolution: Big Bawls, Small Balls

John L. Fitzpatrick and Stefan Lüpold
Computational and Evolutionary Biology, Faculty of Life Sciences, University of Manchester, Manchester M13 9PT, UK
Correspondence: [email protected] (J.L.F.), [email protected] (S.L.)
http://dx.doi.org/10.1016/j.cub.2015.09.060
Males must carefully allocate the energy they devote to sex. A new study of howler monkeys shows that males who use vocalizations to ward off rivals invest less in producing large numbers of sperm.

When it comes to sex, you can't have it all. Animal reproduction is about compromising between the competing demands of finding a mate and successfully fertilizing eggs. Males, who are typically the sex that competes for access to mates, are particularly sensitive to this reproductive balancing act. In many animals, males show off ornate sexual colouration, ear-splitting vocalizations, and even dances to attract a female's attention. While catching a female's interest is a good first step toward successful reproduction, access to a receptive female often requires contending with, and outcompeting, rival males. Consequently, competition between males over access to females has in many species led to the evolution of male sexual weaponry (such as horns or antlers) or to massive divergence in body size between males and females. These conspicuous male sexual displays and weapons were instrumental in Darwin's [1] formulation of sexual selection. We now know, however, that females of most species mate with multiple males
[2], which means that male–male competition over a female is followed by competition among their sperm to fertilize her egg(s) [3]. Competition among rival males before and after mating imposes a host of evolutionary constraints. Males have to allocate limited resources to both securing mates and producing ejaculates that are better at fertilizing eggs than rival males’ sperm. Thus, investment in whatever helps a male get a mate should limit investment in ejaculate traits crucial during sperm competition, and vice versa [4]. However, we know surprisingly little about how males balance their investment in sexual traits. In a study published recently in Current Biology, Dunn et al. [5] address this gap in our understanding of the evolution of male sexual traits by comprehensively examining the evolution of male vocalizations and testes in howler monkeys. Dunn et al. [5] provide compelling evidence that male howler monkeys investing more in vocalizations produce less sperm. Their study capitalizes on the many characteristics that make
howler monkeys a particularly apt model for assessing evolutionary patterns of investment in vocalization. These primates produce extraordinarily loud calls using a specialized larynx with an enlarged hyoid bone, which contains a sound-amplifying air sac. The exact function of the howler monkey roar is unclear, but it almost certainly plays a role in male–male competition and in demarcating territories. But while all howler monkey species howl, they vary considerably in hyoid bone size. And that's not all: howler monkeys live in varying group sizes and environments, and they show remarkable sexual dimorphism in body size and wide variation in testicular investment among species (Figure 1). In short, these primates represent an evolutionary biologist's dream for testing hypotheses about how selection shapes sexual traits. Armed with this extreme variation in sexual traits, Dunn et al. [5] used laser surface scanning to generate three-dimensional representations of howler monkey hyoid bones to critically test the