could be a necessary adjustment to the presence of a long axon. This axon will filter rapidly varying signals, a biophysical consequence of the membrane capacitance [7]. In that case the slower responses in the fovea will tend to preserve the size of the final output that gets to the bipolar cell: the long axon will act like a low-pass filter, so that slow signals are transmitted better than fast ones. Remember that the long axon exists for purely optical reasons, to keep the inner retinal circuitry from scattering light and to allow close packing of the cones' light-sensitive outer segments. Thus, the evolutionary trade-off would sacrifice sensitivity to rapid signals as an indirect cost of maximal spatial resolution. In the first interpretation, slow integration is intrinsically a good thing. In the second, the slow response is a necessary evil, where speed is sacrificed in order to allow something more important (acuity). And of course, both things could be true.
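To make the low-pass intuition concrete, here is a minimal numerical sketch. It treats the axon as a single lumped RC stage (a deliberate simplification of the distributed cable properties analysed in [7]), and the 10 ms time constant is an assumed, purely illustrative value rather than a measurement from the fovea.

```python
# Minimal sketch: a first-order RC low-pass filter as a stand-in for the
# foveal cone axon. The time constant below is an illustrative assumption,
# not a value taken from [7].
import math

def rc_lowpass_gain(freq_hz: float, tau_s: float) -> float:
    """Amplitude gain of a first-order low-pass: 1 / sqrt(1 + (2*pi*f*tau)^2)."""
    return 1.0 / math.sqrt(1.0 + (2.0 * math.pi * freq_hz * tau_s) ** 2)

TAU_S = 0.010  # hypothetical 10 ms membrane time constant
for freq in (2.0, 10.0, 50.0):
    gain = rc_lowpass_gain(freq, TAU_S)
    print(f"{freq:5.1f} Hz signal retains {100 * gain:5.1f}% of its amplitude")
# Slow signals pass nearly intact; rapidly varying ones are strongly attenuated,
# which is the sense in which a long passive axon favours slow responses.
```

Under this toy model a 2 Hz signal loses almost nothing while a 50 Hz signal keeps only about a third of its amplitude, which is the trade-off sketched above.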
A separate but important technical point is made by Sinha et al., who used high-resolution fluorescence microscopy to compare the inhibitory input to ganglion cells in the fovea with that in the periphery. Their result (few inhibitory receptors on foveal ganglion cells) agrees with their physiology but conflicts with earlier studies using electron microscopy [8–10]. Because their technique strains the limits of light microscopy, more information will be needed to get to the bottom of this. But Sinha et al. make the important general point that the booming connectomics industry [11,12] is at present stuck with the assumption that structurally similar synapses all have the same weight — something unlikely to be correct. We need more test cases, situations in which electron microscope images of specific synapses can be linked to their chemical neuroanatomy and their behavior in transmitting information.

REFERENCES

1. Sinha, R., Hoon, M., Baudin, J., Okawa, H., Wong, R.O.L., and Rieke, F. (2017). Cellular and circuit mechanisms shaping the perceptual properties of the primate fovea. Cell 168, 413–426.
2. Masland, R.H. (2012). The neuronal organization of the retina. Neuron 76, 266–280.
3. Polyak, S.L. (1948). The Retina (Chicago: University of Chicago Press).
4. Tyler, C.W. (1985). Analysis of visual modulation sensitivity. II. Peripheral retina and the role of photoreceptor dimensions. J. Opt. Soc. Am. A 2, 393–398.
5. Solomon, S.G., Martin, P.R., White, A.J., Ruttiger, L., and Lee, B.B. (2002). Modulation sensitivity of ganglion cells in peripheral retina of macaque. Vision Res. 42, 2893–2898.
6. Walilko, T.J., Viano, D.C., and Bir, C.A. (2005). Biomechanics of the head for Olympic boxer punches to the face. Br. J. Sports Med. 39, 710–719.
7. Hsu, A., Tsukamoto, Y., Smith, R.G., and Sterling, P. (1998). Functional architecture of primate cone and rod axons. Vision Res. 38, 2539–2549.
8. Kolb, H., and Dekorver, L. (1991). Midget ganglion cells of the parafovea of the human retina: a study by electron microscopy and serial section reconstructions. J. Comp. Neurol. 303, 617–636.
9. Calkins, D.J., and Sterling, P. (1996). Absence of spectrally specific lateral inputs to midget ganglion cells in primate retina. Nature 381, 613–615.
10. Calkins, D.J., and Sterling, P. (2007). Microcircuitry for two types of achromatic ganglion cell in primate fovea. J. Neurosci. 27, 2646–2653.
11. Lichtman, J.W., and Denk, W. (2011). The big and the small: challenges of imaging the brain's circuits. Science 334, 618–624.
12. Seung, S. (2012). Connectome: How the Brain's Wiring Makes Us Who We Are (New York: Houghton Mifflin).
Multisensory Development: Calibrating a Coherent Sensory Milieu in Early Life

Andrew J. Bremner
Sensorimotor Development Research Unit, Department of Psychology, Goldsmiths, University of London, New Cross, London, SE14 6NW, UK
Correspondence: [email protected]
http://dx.doi.org/10.1016/j.cub.2017.02.055
A new study reveals the effects of visual deprivation in early life on the development of multisensory simultaneity perception. To understand the developmental processes underlying this, we need to consider the multisensory milieu of the newborn infant.

The newborn human arrives into a startlingly complex sensory world in which it is invaded by information from touch, taste, smell, proprioception, vestibular input, audition, and vision, all of which provide cues about features of objects and events which arrive in the central nervous system at different latencies, and
in almost uniformly varying neural codes. If infants are to develop an ability to perceive a spatiotemporally coherent and meaningful environment from amongst this multisensory mélange, they need to learn to solve the "crossmodal binding problem", i.e. they need to figure out how and when stimuli across the senses go
together. A new study [1] sheds important light on how this happens.

Developing Solutions to the Crossmodal Binding Problem
Whilst the advantages which multiple senses bestow on perception are well acknowledged (e.g., [2]), the challenges
which they create for the developing organism are significant. The crossmodal binding problem [3,4] threatens adults' and infants' ability to perceive a coherent multisensory world. Coherent multisensory perception of timing is threatened by temporal asynchrony introduced by differences in neural transduction between the senses (e.g., vision takes about 45 ms longer than sound to reach the central nervous system [3]). Coherent multisensory perception of space is threatened by the fact that the brain codes spatial position in different ways across the senses (e.g., visual location is coded in retinocentric coordinates, and tactile location in somatosensory coordinates, and the transformations between visual, auditory and tactile codes depend on the relative postures of eyes, ears, and limbs, which change frequently [5,6]). But the crossmodal binding problem is not purely biological: even before physical information has reached our sensory apparatus, differences in the physics of the different media of transmission (e.g., sound waves, pulses of electromagnetic radiation, the diffusion of chemicals) give rise to asynchronies and dislocations across those media.

So how do our central nervous systems learn to abstract a coherent world from this multisensory mess (or "sensory bouillabaisse" according to one of the authors of [1])? We now know a lot about how adults solve the "crossmodal binding problem" (e.g., [3,4,7,8]), but there has been remarkably little consideration of the origins of such solutions in developmental psychology. This is odd, because the various threats to multisensory perception discussed above change dramatically in their nature and impact across early life, requiring continual recalibrations [9].

Perhaps the most revealing studies to date come from research involving participants with visual deprivation early in life. In the domain of multisensory space, studies indicate the importance of visual experience in early development. The most recent case studies of congenitally blind individuals who have had their sight restored through the removal of cataracts seem to indicate that deprivation of vision in a sensitive period between 5 months [10] and two years [11] leads to atypical development of an ability to map touches into visual space.
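Before turning to the new findings, it is worth putting rough numbers on the two competing audiovisual lags just described. The short calculation below is illustrative only: it assumes a speed of sound of 343 m/s and the approximate 45 ms transduction difference cited above [3], and it is not an analysis from [1].

```python
# Illustrative arithmetic: net audiovisual asynchrony at the observer as a
# function of source distance. Assumptions: sound travels at ~343 m/s, light
# arrives effectively instantaneously, and visual neural transduction is
# ~45 ms slower than auditory transduction [3].
SPEED_OF_SOUND_M_S = 343.0
TRANSDUCTION_DIFF_S = 0.045  # vision slower than audition by ~45 ms

def net_audiovisual_lag_ms(distance_m: float) -> float:
    """Positive result: the visual signal is registered first."""
    sound_travel_s = distance_m / SPEED_OF_SOUND_M_S
    return 1000.0 * (sound_travel_s - TRANSDUCTION_DIFF_S)

for d in (0.3, 1.0, 5.0, 15.4, 30.0):
    print(f"source at {d:4.1f} m -> net lag {net_audiovisual_lag_ms(d):+6.1f} ms")
# The two delays cancel at roughly 15 m; within the near space a young infant
# actually attends to (well under 1 m), audition effectively always leads.
```

On these assumptions, distant events present vision-leading asynchronies, while the newborn's peripersonal world presents audition-leading ones.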
In their recent study, Chen et al. [1] show that visual experience is also crucial to the typical development of multisensory simultaneity perception. Presenting audiovisual stimuli with a range of onset asynchronies, they asked adult participants to judge whether or not the auditory and visual components occurred simultaneously. Participants who had had congenital cataracts in both eyes in early infancy (up to about 4 months of age) showed an atypical pattern of simultaneity perception; they were biased to judge visual-leading events as simultaneous. Their point of subjective simultaneity for such events was where the visual stimulus led the auditory stimulus by 43 ms. Interestingly, no such atypical development was seen when the same participants were asked to judge visual-tactile simultaneity. On this basis Chen et al. make an important point: effects of visual deprivation are not just visual. Visual deprivation in infancy also deprived these individuals of the multisensory experiences which they need in order to develop a typical sense of audiovisual timing. But what are these multisensory experiences, and how do they influence multisensory development?

Audiovisual Experiences in the First Months of Life
Newborn human babies do not live in the same multisensory milieu as adults. Altricial and myopic, they depend on focuses of attention and interest (people, faces, hands) [12] being brought to them (or vice versa). It would be very odd to see an adult talking to a newborn from across a room. As caregivers, we tend to get up close to provide young babies with a rich, nearby — almost peripersonal — multisensory environment.

The up-close-and-personal audiovisual environment of the newborn is why I doubt one of Chen et al.'s [1] explanations for their findings. They suggest that the specific effect of visual deprivation on audiovisual simultaneity perception (and not visual-tactile simultaneity) may be due to the fact that infants have to calibrate audiovisual simultaneity under conditions of visual-to-auditory lag, given the slower transit of sound waves towards the observer (visual-tactile events only occur in near space). But I can't think of many instances in which infants younger than 5 months would be interested in audiovisual events
happening at a distance where the visual-to-auditory lag would be at all significant. It seems more likely that the auditory-to-visual lags due to differences in the speed of auditory and visual neural transduction to the central nervous system, which are close to 45 ms in the adult, would be a more pertinent concern for crossmodal temporal calibration at this age. Interestingly, this 45 ms crossmodal transduction differential is quite close to the visual-auditory lag at which Chen et al.'s visually deprived participants demonstrated their point of subjective simultaneity. So perhaps the effect of audiovisual deprivation in the first months of life is that it prevents infants from partialing out the lags between auditory and visual signals due to crossmodal transduction differentials. And this might explain why these infants later, as adults, perceive events where the visual stimulus precedes the auditory stimulus as being simultaneous. But then why should visual-tactile simultaneity not also be affected by the corresponding deprivation of visual-tactile experience? Again, I think the answer lies in a consideration of the newborn multisensory experience.

Visual-tactile Experiences in the First Months of Life
It is surprising how little visual regard newborn infants have for their own bodies. Visual orienting to tactile stimuli is not behaviourally robust until around 10 months of age [13]. Even when they start reaching for objects, they tend to be just as successful in the light as in the dark (indicating that visual control of the hand is not important [14]). We also know that at 4 months of age infants do not refer vibrotactile stimuli on their feet to a location in external (visual) space [15]. So perhaps this is why Chen et al.'s participants showed no atypical development of visual-tactile simultaneity judgments. Because their sight was restored at around 4.5 months, they were only deprived of visual-tactile experience in a period in which infants do not spend much time attending to such multisensory content in any case. This would seem to chime with a recent study [10] which indicates that visual deprivation in the first five months of life does not impair the typical development of tactile spatial perception as it does when such deprivation continues into later infancy [11] and childhood.
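As an aside on method: the 43 ms point of subjective simultaneity discussed above comes from a simultaneity-judgment task. The sketch below shows one common way such a value can be estimated, by fitting a Gaussian to the proportion of "simultaneous" responses across onset asynchronies; the data and the choice of a Gaussian model here are hypothetical, and this is not Chen et al.'s [1] actual analysis pipeline.

```python
# Hypothetical sketch of point-of-subjective-simultaneity (PSS) estimation:
# fit a Gaussian to the proportion of "simultaneous" responses at each
# stimulus onset asynchrony (SOA). Convention: positive SOA = vision leads.
# The response proportions below are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(soa, peak, pss, width):
    return peak * np.exp(-((soa - pss) ** 2) / (2.0 * width ** 2))

soas_ms = np.array([-400.0, -200.0, -100.0, 0.0, 100.0, 200.0, 400.0])
p_simultaneous = np.array([0.05, 0.20, 0.55, 0.80, 0.90, 0.60, 0.10])

(peak, pss, width), _ = curve_fit(gaussian, soas_ms, p_simultaneous,
                                  p0=[1.0, 0.0, 150.0])
print(f"Estimated PSS: vision leading by {pss:.0f} ms (curve sd ~{width:.0f} ms)")
# A PSS shifted toward positive SOAs mirrors the visual-leading bias shown by
# Chen et al.'s visually deprived participants; a wider fitted curve
# corresponds to a wider temporal binding window.
```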
The Development of Multisensory Timing Beyond Early Infancy
In this dispatch, I have suggested that the sensorimotor and perceptual limitations of the newborn infant place certain constraints on the ways in which they experience and learn from their multisensory environment. But of course in the second half year of life, typically developing infants increasingly take a more active interest in their environment, starting to make their first successful grasps for objects, and later still beginning to locomote around their environments towards more distant goals [12]. These sensorimotor developments bring about new multisensory experiences which lead to new complications of the crossmodal binding problem. For instance, as infants begin to deal with a wider repertoire of different scales of spatial environment, they will have to cope with similarly varying temporal lags between visual and auditory stimuli arising from the same external events, and develop the kinds of compensatory mechanisms which adults use in such circumstances [3,7,8]. Maybe this is what made it difficult for Chen et al.'s participants to recalibrate audiovisual simultaneity after their sight was restored at 4 months. For both typically and atypically developing children, it probably makes sense in these temporally variable multisensory circumstances to bind stimuli together across a wider temporal window, and such wider temporal windows of crossmodal binding have now been observed several times in infants and children [16–19].

REFERENCES

1. Chen, Y.-C., Lewis, T.L., Shore, D.I., and Maurer, D. (2017). Early binocular input is critical for development of audiovisual but not visual-tactile simultaneity perception. Curr. Biol. 27, 583–589.
2. Stein, B.E., ed. (2012). The New Handbook of Multisensory Processing (Cambridge, MA: MIT Press).
3. Harris, L.R., Harrar, V., Jaekl, P., and Kopinska, A. (2010). Mechanisms of simultaneity constancy. In Space and Time in Perception and Action, R. Nijhawan, ed. (Cambridge, UK: Cambridge University Press), pp. 232–253.
4. Körding, K.P., Beierholm, U., Ma, W.J., Quartz, S., Tenenbaum, J.B., and Shams, L. (2007). Causal inference in multisensory perception. PLoS One 2, e943.
5. Bremner, A.J., and van Velzen, J. (2015). Sensorimotor control: Retuning the body–world interface. Curr. Biol. 25, R159–R161.
6. Jay, M.F., and Sparks, D.L. (1984). Auditory receptive fields in primate superior colliculus shift with changes in eye position. Nature 309, 345–347.
7. Sugita, Y., and Suzuki, Y. (2003). Audiovisual perception: Implicit estimation of sound-arrival time. Nature 421, 911.
8. Kopinska, A., and Harris, L.R. (2004). Simultaneity constancy. Perception 33, 1049–1060.
9. Burr, D., Binda, P., and Gori, M. (2011). Multisensory integration and calibration in adults and in children. In Sensory Cue Integration, J. Trommershäuser, K. Körding, and M.S. Landy, eds. (Oxford, UK: Oxford University Press), pp. 173–194.
10. Azañón, E., Camacho, K., Morales, M., and Longo, M.R. (2017). The sensitive period for tactile remapping does not include early infancy. Child Dev., in press.
11. Ley, P., Bottari, D., Shenoy, B.H., Kekunnaya, R., and Röder, B. (2013). Partial recovery of visual–spatial remapping of touch after restoring vision in a congenitally blind man. Neuropsychologia 51, 1119–1123.
12. Adolph, K.E., and Berger, S.E. (2015). Physical and motor development. In Developmental Science: An Advanced Textbook, 7th ed., M.H. Bornstein, and M.E. Lamb, eds. (New York: Psychology Press), pp. 261–333.
13. Fausey, C.M., Jayaraman, S., and Smith, L.B. (2016). From faces to hands: Changing visual input in the first two years. Cognition 152, 101–107.
14. Bremner, A.J., Mareschal, D., Lloyd-Fox, S., and Spence, C. (2008). Spatial localization of touch in the first year of life: Early influence of a visual spatial code and the development of remapping across changes in limb position. J. Exp. Psychol. Gen. 137, 149–162.
15. Bremner, A.J., Holmes, N.P., and Spence, C. (2008). Infants lost in (peripersonal) space? Trends Cogn. Sci. 12, 298–305.
16. Begum Ali, J., Spence, C., and Bremner, A.J. (2015). Human infants' ability to perceive touch in external space develops postnatally. Curr. Biol. 25, R978–R979.
17. Chen, Y.-C., Shore, D.I., Lewis, T.L., and Maurer, D. (2016). The development of the perception of audiovisual simultaneity. J. Exp. Child Psychol. 146, 17–33.
18. Lewkowicz, D.J. (1996). Perception of auditory–visual temporal synchrony in human infants. J. Exp. Psychol. Hum. Percept. Perform. 22, 1094–1106.
19. Lewkowicz, D.J., and Flom, R. (2014). The audiovisual temporal binding window narrows in early childhood. Child Dev. 85, 685–694.
Gut Microbiota: Small Molecules Modulate Host Cellular Functions

Jacob M. Luber1,2,3,4,* and Aleksandar D. Kostic1,2,4,*
1Section on Pathophysiology and Molecular Pharmacology, Joslin Diabetes Center, Boston, MA 02215, USA
2Section on Islet Cell and Regenerative Biology, Joslin Diabetes Center, Boston, MA 02215, USA
3Division of Medical Sciences, Harvard Medical School, Boston, MA 02115, USA
4Department of Microbiology and Immunobiology, Harvard Medical School, Boston, MA 02115, USA
*Correspondence: [email protected] (J.M.L.), [email protected] (A.D.K.)
http://dx.doi.org/10.1016/j.cub.2017.03.026
The human gut metagenome was recently discovered to encode vast collections of biosynthetic gene clusters with diverse chemical potential, almost none of which are yet functionally validated. Recent work elucidates common microbiome-derived biosynthetic gene clusters encoding peptide aldehydes that inhibit human proteases.

Biosynthetic gene clusters (BGCs) are tightly clustered groups of genes in bacteria that encode pathways capable of producing small-molecule natural
products without cellular machinery such as ribosomes [1]. The array of diverse natural products produced by BGCs is vast; current databases hold more than