Local model for contextual modulation in the cerebral cortex


Neural Networks 25 (2012) 30–40

Contents lists available at SciVerse ScienceDirect

Neural Networks journal homepage: www.elsevier.com/locate/neunet

Local model for contextual modulation in the cerebral cortex Simo Vanni ∗ Brain Research Unit, Low Temperature Laboratory, Aalto University School of Science, Espoo, Finland Advanced Magnetic Imaging Centre, Aalto University School of Science, Espoo, Finland

Article info

Article history: Received 11 May 2011; received in revised form 1 August 2011; accepted 6 August 2011.

Keywords: Intracellular; Network; Area summation function; Size tuning; Far surround facilitation; Surround suppression; Contextual modulation; Dendrites

Abstract

A neural response to a sensory stimulus in the cerebral cortex is modulated when other stimuli are presented simultaneously. The other stimuli can modulate responses even when they do not drive the neural output alone, indicating a non-linear summation of synaptic activity. The mechanisms of the non-linearity have remained unclear. Here, I explore a model which considers both network and intracellular processes, and which can account for various types of contextual modulation. The processes include a synaptic sensitivity function, determination of inhibition strength, dendritic decay of membrane voltage, and summation of excitatory and inhibitory membrane voltages. First, the model assumes that excitatory and inhibitory units have the same input sensitivity function, which is more broadly tuned than the output tuning function. Second, a central property of the model is that inhibition is a fraction of excitation, determined by the covariance between the input and the sensitivity function. With the proper fraction, a model neuron sums apparently decorrelated input, regardless of correlations in the original input. Third, the model assumes that synaptic input lands anisotropically on the dendrites, which together with passive dendritic decay causes exponential decay in summation along the input space. This explains the difference between the input sensitivity function and the output tuning function, and thus accounts for the division between driving classical and modulating extra-classical receptive fields. The model simulations replicate the single-cell area summation function, far surround facilitation, and a shift in tuning function due to contextual stimulation. The model is very general, and should be applicable to various interactions between cortical representations. © 2011 Elsevier Ltd. All rights reserved.

1. Introduction

The majority of neurons in the primary visual cortex (V1) show a nonlinear response as a function of stimulus size (the area summation function; Angelucci et al., 2002; Cavanaugh, Bair, & Movshon, 2002; Sceniak, Ringach, Hawken, & Shapley, 1999). When the size of the stimulus increases, the response first increases and then decreases until reaching an asymptote. The receptive field center, or the classical receptive field, has the lowest threshold for driving action potentials. Surrounding the classical receptive field is the extra-classical receptive field, where stimulation does not drive action potentials alone, but together with center stimulation can still modulate the response. The modulation is strongly dependent on the relative stimulus parameters of the center and surround, such as orientation or contrast (Knierim & Van Essen, 1992; Levitt & Lund, 1997). Such behavior cannot be explained by linear summation of the stimulus energy in the receptive field and

∗ Correspondence to: Advanced Magnetic Imaging Centre, Aalto University School of Science, P.O. Box 13000, FI-00076 Aalto, Finland. Tel.: +358 947026162; fax: +358 947022969. E-mail address: [email protected]. 0893-6080/$ – see front matter © 2011 Elsevier Ltd. All rights reserved. doi:10.1016/j.neunet.2011.08.001

sparked a number of non-linear network models (Kouh & Poggio, 2008; Schwabe, Obermayer, Angelucci, & Bressloff, 2006). The network models have been successful particularly in describing how excitation and inhibition are linked (Ozeki, Finn, Schaffer, Miller, & Ferster, 2009; Schwabe et al., 2006; Tsodyks, Skaggs, Sejnowski, & McNaughton, 1997), but pay little attention to intracellular physiology. Importantly, a single neuron's membrane voltage is sensitive to a much broader set of stimuli than expected from the relatively narrow tuning of the action potential output (Carandini & Ferster, 2000; Jia, Rochefort, Chen, & Konnerth, 2010). In addition, while postsynaptic potentials show strong local nonlinearity within dendritic branches, membrane voltage change between branches is linearly summed in the soma (London & Hausser, 2005; Sidiropoulou, Pissadaki, & Poirazi, 2006) after exponential decay along the dendrites, as expected from passive dendritic properties (Spruston, Jaffe, & Johnston, 1994). Are the intracellular properties associated with the non-linear contextual modulation of a neural response? Here I explore a theoretical model (Fig. 1(a)–(c)) where input is modulated within a single neuron, resulting in output which comes from an apparently decorrelated input (Fig. 1(d)). This is achieved by modulating the inhibition strength by a coefficient (d in Fig. 1(c)), and conceptually this is related to the inhibition-stabilized network model (Ozeki et al.,


Fig. 1. Modulation of single cell response and sensitivity. (a) Block diagram of the proposed model. (b) The model, applicable to a cortical pyramidal cell, separates the proximal and distal dendrites. Both the pyramidal cell (triangular) and the inhibitory interneuron (spherical) are sensitive to the same input space. (c) Single cell response; from the top-left: The original input distribution in an input space is intersected with the sensitivity function (centered at 0). This results in partial overlap of the input and sensitivity function, called the input function in the model. The arrows between the upper and lower rows of (c) indicate which parts of the equation for the response are determined by the overlap of the input and tuning function. The first term of the equation, excitation (gray area), is the pointwise multiplication of the intersected input with an exponential decay function (output decay). The strength of the second term, inhibition, is determined by the coefficient d, which is derived from the covariance between the input and sensitivity function. Inhibition is subtracted from excitation and the response (membrane potential) is summed to determine response strength. Further mapping from membrane potential to action potential probability has been omitted for simplicity. (d) When an input (at the bottom) is intersected with the sensitivity function and then decorrelated against the sensitivity function, the resulting decorrelated input is typically strongly attenuated. Summing the decorrelated input vector after pointwise multiplication with the dendritic decay is equivalent to the model response, as numerically simulated on the right.

2009; Tsodyks et al., 1997), because in both models the levels of inhibition and excitation are closely linked. A number of studies have suggested decorrelation of correlated neural activation (Ecker et al., 2010; Felsen, Touryan, & Dan, 2005; Renart et al., 2010; Vinje & Gallant, 2000), and the suggested mechanisms have included a tight interplay between excitation and inhibition (Renart et al., 2010). The decorrelation is linked to coding of visual information. Because the sensory environment contains statistical regularities, adaptation to such regularities could be a key to an efficient code (Simoncelli & Olshausen, 2001). Barlow (1961) suggested that a principle of equalization is implemented in sensory coding, resulting in an efficient code without loss of information, and later suggested that such response equalization emerges from decorrelation of correlated neural responses (Barlow & Földiák, 1989). When we studied the interaction of visual response patterns in human cerebral cortex with functional magnetic resonance imaging, we encountered a systematic interaction, either suppression or facilitation, between the response patterns (Vanni & Rosenström, 2011). The interaction seemed to follow a decorrelation rule, and later we learned

that this is best explained by a local rule which is replicated in each magnetic resonance imaging voxel (Sharifian, Nurminen and Vanni, unpublished observations). What has remained unclear in the earlier literature and in our own work is what exactly is decorrelated in the cerebral cortex, and whether a single model could explain the various interaction phenomena in the physiological literature (Felsen et al., 2005; Ichida, Schwabe, Bressloff, & Angelucci, 2007; Schwabe et al., 2006). The possible levels of explanation include single neurons and networks of neurons. The current model includes a mixture of both: the presynaptic network of neurons provides the input sensitivity function and determines the strength of inhibition, intracellular dendritic properties cause part of the output nonlinearity, and the summation of the excitatory and inhibitory membrane voltages causes the decorrelation. In the current model, qualitatively distinct input lands anisotropically on the dendritic tree (Fig. 1(b)–(c)), and with passive dendritic decay this results in summation clearly weighted at the center of the input parameter space (proximal input). While the decay as a function of input space was originally a trick to limit


summation in input space, recent two-photon imaging data have actually shown anisotropy of sensory input inside a single cell. In a mouse visual cortex neuron, the orientation preferences of dendritic postsynaptic responses form local hotspots, each hotspot comprising a distinct orientation preference (Jia et al., 2010). Such a finding leaves room for the assumed macroscopic anisotropies in larger dendritic structures (Spruston, 2008). The model unit here can be considered a pyramidal neuron, which sums excitatory input from a large network, and whose accompanying inhibitory neurons are sensitive to the same input (Schwabe, Ichida, Shushruth, Mangapathy, & Angelucci, 2010). A model neuron receives input from feedforward pathways, from horizontal connections within one brain area, as well as via feedforward–feedback loops across different brain areas (following Schwabe et al., 2006). The feedback from higher-order areas enables in principle very wide and complex input sensitivity functions. The output can be viewed as the membrane potential at the axon initial segment. Further output non-linearity, which derives from the mapping of membrane voltage to action potential probability (Anderson, Lampl, Gillespie, & Ferster, 2000), has been omitted for simplicity, because such a monotonic mapping would not significantly affect the results. Here, the current model diverges from the majority of earlier models, where the output nonlinearity is explicit. The model can explain the area summation function (Angelucci et al., 2002; Sceniak et al., 1999), facilitation of a V1 neuron from the very far surround (Ichida et al., 2007), and the change in orientation tuning as a function of surrounding stimulation (Felsen et al., 2005). In addition, the simulated excitation and inhibition peak at the same position in the input space, as shown earlier with intracellular recordings for orientation tuning (Anderson, Carandini, & Ferster, 2000).
2. Methods

The sensitivity function defines the input to a model neuron in an abstract parameter space. We can compare the sensitivity function to the tuning function, which is well known in electrophysiology. The tuning function is the action potential rate (output) of a neuron as a function of some input parameter, such as the orientation of a grating. While the tuning function is determined by the voltage summation at the axon initial segment, the sensitivity function corresponds to synaptic weights in the dendritic tree, and together with the input describes the postsynaptic membrane potential change. While the sensitivity functions in the current model are bell-shaped Gaussian functions, the model is not limited to any particular functional form. In the simulations the Gaussian function helps to demonstrate how position in input space, strength of response, and sensitivity of a unit interact. Next, the input is decorrelated against the sensitivity envelope by subtracting inhibition from the excitation, then the result is pointwise multiplied by a signal conduction decay function, and finally the response is read out from the unit by summating the membrane voltage. In vivo, signal conduction decay can be associated with passive dendritic leakage of input current. The whole process is illustrated in Fig. 1. If we assume an input space X, this process can be modeled as

R = Σ_{x∈X} F_S(x) · F_D(F_I(x), F_E(x))    (1)

Here, R is the unit response, and x is the position in an abstract one-dimensional input space. F_I(x) is the postsynaptic input and F_E(x) the sensitivity function (E for excitation potential) in the input parameter space. The pointwise min operator between the input and sensitivity function has been implemented for convenience, and should cover the fraction of network input (F_Inetwork(x)) for which the unit is sensitive:

F_I(x) ≡ min(F_Inetwork(x), F_E(x)).

Simulations where the input was intersected pointwise using a logistic saturation function (reaching an asymptote at the sensitivity function) gave comparable results. Both the min operator and the logistic function mimic local saturating non-linearity within dendrites (London & Hausser, 2005; Sidiropoulou et al., 2006). In the simulations, the sensitivity function is a simple Gaussian function

F_E(x) = e^{−(x−x_0)²/θ}    (2)

where θ is the sensitivity width parameter. The area under the sensitivity function is normalized to 1. The decorrelation emerges from the function

F_D(F_I(x), F_E(x)) = F_I(x) − d · F_E(x)    (3)

which originally follows Barlow and Földiák (1989), and was explicitly studied with functional magnetic resonance imaging (fMRI) activation pattern decorrelation in Vanni and Rosenström (2011). In the current model, the covariance between the input and sensitivity function determines the d coefficient (see Eq. (4) in Box I). This corresponds to supplementary Eq. (1) in Vanni and Rosenström (2011), where it was applied to the fMRI voxel population response. When a pair of vectors is placed into Eq. (4), the resulting coefficient d can fully decorrelate the two vectors with Eq. (3). Conceptually, this is close to sphering, a common preprocessing step in statistical signal analysis. The d is a monotonic but non-linear function of the correlation of the two variables. When the correlation is negative, d becomes negative, resulting in facilitation in Eq. (3). Note that in Eq. (4) the variance of the sensitivity function is the variance of the magnitude, not of the distribution of the sensitivity, in the input space. Thus, the synaptic weights and the input (after intersection with the sensitivity function) form the two random variables in Eq. (4). The d is calculated once over the whole x, and not for each x separately. In summary, a sensitivity function forms the space where all inputs will become decorrelated. After the input function, the system maps the covariance between input and sensitivity function to the d coefficient. This d coefficient determines how much inhibition emerges in the network. Before readout, the decorrelated input is pointwise multiplied with the function F_S, which is the signal conduction decay in input space,

F_S(x) = e^{−|x−x_0|/λ}    (5)

where λ is the space constant, and x_0 the center in the input space. The decorrelation (subtractive inhibition) and dendritic decay (pointwise multiplication) can be done in reversed order, as can summation and decorrelation, without the results changing. Thus the model should not be sensitive to whether the inhibition affects the distal or proximal dendrite, or even to whether negative feedback from the local excitatory neuron to inhibitory neurons and back is essential for the operation. The key is that the negative (hyperpolarizing) voltage change at the axon initial segment is the determined fraction (d) of the maximum possible excitation (depolarization).

3. Results

3.1. Simulation of area summation function

In the current work the sensitivity functions are one-dimensional, but should in principle be generalizable to multiple dimensions. In vision science, such a one-dimensional input space is equivalent to e.g. the orientation of a grating or distance in the visual field.
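To make the pipeline of Eqs. (1)–(5) concrete, here is a minimal numerical sketch in Python. All grid sizes, widths, strengths, and the space constant are illustrative choices for this sketch, not parameter values from the paper:

```python
import numpy as np

def d_coefficient(fi, fe):
    """Decorrelation coefficient d of Eq. (4), using population (ddof=0) moments."""
    di, de = fi - fi.mean(), fe - fe.mean()
    vi, ve, cov = np.mean(di * di), np.mean(de * de), np.mean(di * de)
    if cov == 0.0:
        return 0.0
    disc = max((vi + ve) ** 2 - 4.0 * cov ** 2, 0.0)  # guard tiny negative rounding
    return ((vi + ve) - np.sqrt(disc)) / (2.0 * cov)

def model_response(f_network, x, x0=0.0, theta=0.1, lam=0.3):
    """Unit response R of Eq. (1) for a network input sampled on grid x."""
    fe = np.exp(-(x - x0) ** 2 / theta)        # sensitivity function, Eq. (2)
    fi = np.minimum(f_network, fe)             # intersected input, F_I
    fd = fi - d_coefficient(fi, fe) * fe       # decorrelation, Eq. (3)
    fs = np.exp(-np.abs(x - x0) / lam)         # signal conduction decay, Eq. (5)
    return np.sum(fs * fd)                     # summed readout, Eq. (1)

x = np.linspace(-1.0, 1.0, 201)
# Small central patch vs. an input covering the whole space:
r_small = model_response(np.where(np.abs(x) <= 0.2, 1.0, 0.0), x)
r_full = model_response(np.ones_like(x), x)
```

Consistent with the text, a full-field input congruent with the sensitivity function drives d to 1 and is fully suppressed, while a small central patch yields a positive response.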


d = [(Var(F_I(x)) + Var(F_E(x))) − sqrt((Var(F_I(x)) + Var(F_E(x)))² − 4·Cov(F_I(x), F_E(x))²)] / (2·Cov(F_I(x), F_E(x)))    (4)

Box I.
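Eq. (4) in Box I is the root of the quadratic Cov·d² − (Var(F_I) + Var(F_E))·d + Cov = 0 that tends to zero as the covariance tends to zero. One way to read "fully decorrelate the two vectors" is mutual decorrelation in the sense of Vanni and Rosenström (2011): with this d, the pair (F_I − d·F_E, F_E − d·F_I) has zero covariance. A small numerical check, with illustrative input shapes on a discretized one-dimensional input space:

```python
import numpy as np

def d_coefficient(fi, fe):
    """d of Eq. (4); the minus branch of the quadratic root (population moments)."""
    di, de = fi - fi.mean(), fe - fe.mean()
    vi, ve, cov = np.mean(di * di), np.mean(de * de), np.mean(di * de)
    if cov == 0.0:
        return 0.0
    disc = max((vi + ve) ** 2 - 4.0 * cov ** 2, 0.0)
    return ((vi + ve) - np.sqrt(disc)) / (2.0 * cov)

x = np.linspace(-1.0, 1.0, 201)
fe = np.exp(-x ** 2 / 0.1)                                 # sensitivity function
fi = np.minimum(0.8 * np.exp(-(x - 0.1) ** 2 / 0.1), fe)   # overlapping input

d = d_coefficient(fi, fe)
# Covariance of the mutually decorrelated pair should vanish:
a, b = fi - d * fe, fe - d * fi
resid = np.mean((a - a.mean()) * (b - b.mean()))
```

For this positively correlated but non-identical pair, d lies strictly between 0 and 1 (d = 1 only for a fully congruent input), and the residual covariance is zero up to floating-point precision.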


Fig. 2. Emergence of the area summation function from the decorrelation model. Here the input space corresponds to position in the visual field, as e.g. in Sceniak et al. (1999). (a) Three different inputs (light blue, green and red lines at the bottom, not to scale with the sensitivity function) result in decreasing amounts of overlap between the input and the sensitivity function (dashed line). The arrows depict the integration radius of the input (grows in both directions), which is the abscissa in (b)–(e). Input which is outside the integration radius is set to zero, resulting in increasing width of the input. (b) The decorrelation coefficient (d) increases when the input size is increased from zero width (at the 0-position in the input space) to the full width of the input space. (c) Output of a linear unit, i.e. if no decorrelation is applied between input and tuning (d = 0 in Fig. 1 and Eq. (3)). (d) Output of a decorrelated unit. (e) Current model associated with physiological data (Sceniak et al., 1999). The data are presented in an arbitrary space, where the input size increases rightwards. The asterisks illustrate the data points, and the solid line the original fit to the data. The dashed line is the model unit output, with amplitude scaled, input space position (patch size) shifted, and input width, tuning width and output decay parameters modified to fit the data. The physiological data here and below were extracted from published figures. First the data were digitized from the publications, and then the model response was fitted to the data by adjusting the model parameters manually.
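The rise-and-fall shape of the decorrelated output in Fig. 2(d) can be reproduced with a short sketch that grows a congruent stimulus outwards from the center, computing the response at each integration radius. The grid, widths, and radii below are illustrative choices:

```python
import numpy as np

def d_coefficient(fi, fe):
    """d of Eq. (4), population (ddof=0) moments."""
    di, de = fi - fi.mean(), fe - fe.mean()
    vi, ve, cov = np.mean(di * di), np.mean(de * de), np.mean(di * de)
    if cov == 0.0:
        return 0.0
    disc = max((vi + ve) ** 2 - 4.0 * cov ** 2, 0.0)
    return ((vi + ve) - np.sqrt(disc)) / (2.0 * cov)

def response(fi, fe, x, lam=0.3):
    """Decorrelate (Eq. (3)), apply conduction decay (Eq. (5)), and sum (Eq. (1))."""
    return np.sum(np.exp(-np.abs(x) / lam) * (fi - d_coefficient(fi, fe) * fe))

x = np.linspace(-1.0, 1.0, 201)
fe = np.exp(-x ** 2 / 0.1)
radii = np.linspace(0.05, 1.0, 20)
# Congruent stimulus masked to lie within each integration radius:
resp = np.array([response(np.where(np.abs(x) <= r, fe, 0.0), fe, x)
                 for r in radii])
```

The response first rises above its small-radius value and then falls back; at the full radius the input is fully congruent with the sensitivity function, d reaches 1, and the output is fully suppressed, as described for Fig. 2(b) and (d).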

Fig. 2 shows how the unit response is modulated as a function of the size of an input in an input space. In this simulation, the input space corresponds to position in the visual field, simulating area summation experiments (Angelucci et al., 2002; Sceniak et al., 1999), but in a one-dimensional system. In an area summation experiment a visual stimulus is centered in the classical receptive field of a neuron. When the stimulus, typically a grating, increases in size, the response first increases and then decreases, until it reaches an asymptote (Fig. 2(e)). When the input is optimal (light blue line in Fig. 2(a)), i.e. overlapping with the sensitivity function


Fig. 3. Effect of noise on modulation strength. (a) Input (at the bottom, not to scale with the sensitivity function) is now a set of narrow Gaussian distributions in the input space. The set is enlarged symmetrically from the center (different colors), and on top the input is truncated with the sensitivity function (dashed line). Random noise is first pointwise multiplied with the sensitivity function, and then added on top of the truncated input. (b)–(d) As in Fig. 2, but for different levels of SNR.
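The SNR effect of Fig. 3 can be checked in the same sketch framework: adding noise to a fully congruent input reduces the correlation between input and sensitivity vectors, which lowers the d-coefficient below 1, i.e. weakens suppression. The noise model and amplitudes here are illustrative choices, not the paper's exact procedure:

```python
import numpy as np

def d_coefficient(fi, fe):
    """d of Eq. (4), population (ddof=0) moments."""
    di, de = fi - fi.mean(), fe - fe.mean()
    vi, ve, cov = np.mean(di * di), np.mean(de * de), np.mean(di * de)
    if cov == 0.0:
        return 0.0
    disc = max((vi + ve) ** 2 - 4.0 * cov ** 2, 0.0)
    return ((vi + ve) - np.sqrt(disc)) / (2.0 * cov)

x = np.linspace(-1.0, 1.0, 201)
fe = np.exp(-x ** 2 / 0.1)

# Fully congruent, noise-free input: complete suppression (d = 1).
d_clean = d_coefficient(fe.copy(), fe)

# Noise scaled by the sensitivity function, added on top of the input,
# then truncated with the sensitivity function (cf. Fig. 3 legend).
rng = np.random.default_rng(0)
noise = 0.5 * rng.standard_normal(x.size) * fe
fi_noisy = np.minimum(np.clip(fe + noise, 0.0, None), fe)
d_noisy = d_coefficient(fi_noisy, fe)
```

The noisy input yields d strictly below the clean value, matching the reduced suppression strength reported for decreasing SNR.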

(dashed line), increasing stimulus size (dashed lines and arrows in Fig. 2(a)) results in monotonically increasing suppression (d) until the output is fully suppressed (d = 1, Fig. 2(b)). The response of a linear unit (Fig. 2(c), corresponding to the sum of excitation in Fig. 1(a)) would be similarly monotonically increasing up to a summation field size, but the response of a decorrelated unit (Fig. 2(d)) shows non-linear behavior. First the response increases, but then it decreases down to 0. If the input is not fully congruent with the sensitivity function (green and red curves), the decorrelated output does not reach zero, but asymptotes when the exponentially decaying output non-linearity (Fig. 1) reaches zero weight. The non-linear decay in summation causes the summation to stop before d has reached its maximum value, and together with increasing inhibition (an increasing d-coefficient) causes the suppressive part of the area summation function. This model result predicts that increasing the overlap between neural sensitivity and input results in stronger suppression. One such manipulation could be to increase the bandwidth of a sinusoidal grating as a second dimension in an area tuning experiment. Because neurons have finite tuning bandwidths (De Valois, Albrecht, & Thorell, 1982; Foster, Gaska, Nagler, & Pollen, 1985; Movshon, Thompson, & Tolhurst, 1978), a sinusoidal pattern might stimulate a neuron suboptimally, and a more broadband stimulus could result in stronger suppression. Fig. 2(e) shows that the model response in a simulated area tuning experiment can explain experimental data (Sceniak et al., 1999). Fig. 3 shows that increasing noise (decreasing SNR) reduces the strength of suppression, and can be another means to achieve variable suppression in the area summation function. As in Fig. 2 with shifted input, increasing noise decreases the fit between the sensitivity function and the input, thus decreasing the congruence of the input and sensitivity vectors. This result predicts that

increasing the amount of noise on top of a stimulus should result in decreasing suppression strength in an area tuning experiment.

3.2. Simulating far surround facilitation

Some correspondence between decorrelation and the area summation function is intuitive, given that the area summation function can be modeled with an integration of a difference of two Gaussians (Sceniak et al., 1999), a form which has been shown to decorrelate retinal output (Atick & Redlich, 1992). In contrast, it is not intuitively clear whether other phenomena emerging from the interaction of two visual stimuli, such as far surround facilitation (Ichida et al., 2007) or changes in the tuning function (Felsen et al., 2005), would emerge from the decorrelation model. Fig. 4 shows that when the stimulus is confined close to the edges of the sensitivity function, facilitation indeed emerges at the center of the representation (arrow in Fig. 4(a)). Correspondingly, the net output from the unit reaches larger values with the decorrelating (solid curve in Fig. 4(c)) compared to the linearly summing (dashed curve) unit. Finding the parameters resulting in facilitation required input at the very peripheral part of the tuning function, which is in line with earlier findings in electrophysiology (Ichida et al., 2007) and psychophysics (Nurminen, Peromaa, & Laurinen, 2010), where facilitation of a center response emerges when the surround is placed at very large distances from the center. It is noteworthy that in the physiological experiment facilitation is apparent only when the center contrast is low. Accordingly, the model center stimulus must be of low input strength (arrow in Fig. 4(b)), or suppression starts to prevail in the responses. Fig. 4(d) shows that a simulation of decreasing surround inner edge diameter (distance to center) can model experimental data (Ichida et al., 2007). Here, the measured/simulated single unit


Fig. 4. Far surround facilitation emerges when the input is confined to the peripheral part of the tuning function. The input space corresponds to position in the visual field, as in Ichida et al. (2007). Here we explicitly assume that stimulating the visual field far from the center of the classical receptive field creates synaptic input to the unit. This conforms to the model by Schwabe et al. (2006). (a) Input before multiplication with the output decay. The linear (dashed line) and decorrelated (solid line) input behave differently at the center of the input space. In particular, when input strength is low, a stimulus in the far surround induces facilitation of the center response (arrow). (b) Input (at the bottom) comprises a weak center and a set of narrow Gaussian distributions more peripherally. The set is enlarged symmetrically from the center. Only a minor input is left after truncation of the input with the sensitivity function (solid line under the sensitivity function). (c) Linear (dashed line) and decorrelated output (solid line) of a single unit as a function of input diameter (starting from the inner edge of the peripheral stimulus, and continuing outwards). The decorrelated output climbs above the linear output, before reaching an asymptote. (d) Decorrelation model associated with physiological data (Ichida et al., 2007). The data are presented in an arbitrary space, where the surround ring inner diameter decreases rightwards. Both the experiment and the model simulation contain a relatively small center which has the same weak strength (contrast) as the surrounding ring. The asterisks illustrate the data points, and the solid line the original fit to the data. The dashed line is the simulation model unit output, with amplitude scaled, input space position (patch size) shifted, and signal conduction decay and input weight (‘‘contrast’’) parameters modified to fit the data.
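The facilitation regime of Fig. 4 can be sketched directly: a weak center plus a far-surround ring near the edge of the sensitivity function produces negative covariance, hence a negative d, so the decorrelated output exceeds the linear output. The strengths, ring position, and grid below are illustrative choices, not fitted values:

```python
import numpy as np

def d_coefficient(fi, fe):
    """d of Eq. (4), population (ddof=0) moments."""
    di, de = fi - fi.mean(), fe - fe.mean()
    vi, ve, cov = np.mean(di * di), np.mean(de * de), np.mean(di * de)
    if cov == 0.0:
        return 0.0
    disc = max((vi + ve) ** 2 - 4.0 * cov ** 2, 0.0)
    return ((vi + ve) - np.sqrt(disc)) / (2.0 * cov)

x = np.linspace(-1.0, 1.0, 401)
fe = np.exp(-x ** 2 / 0.1)            # sensitivity function
fs = np.exp(-np.abs(x) / 0.3)         # conduction decay

# Hypothetical stimulus: weak narrow center plus a peripheral ring.
ring = (np.abs(x) > 0.55) & (np.abs(x) < 0.75)
stim = 0.005 * np.exp(-x ** 2 / 0.01) + 0.05 * ring
fi = np.minimum(stim, fe)             # truncated input, F_I

d = d_coefficient(fi, fe)
r_linear = np.sum(fs * fi)            # linear unit (d = 0)
r_decorr = np.sum(fs * (fi - d * fe)) # decorrelated unit
```

With this peripherally dominated input d is negative, so the subtracted "inhibition" term in Eq. (3) becomes additive and the decorrelated response exceeds the linear one, mirroring the solid curve climbing above the dashed curve in Fig. 4(c).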

represents visual field position, which is centered at position 0. When the surrounding stimulus inner edge starts approaching from a large distance, the response first increases (facilitation), and then decreases (suppression).

3.3. Simulating shifts in orientation tuning

A tuning function refers to the mapping of an input parameter, such as orientation, to the output, such as action potential rate, of a neuron. How orientation tuning emerges in single neurons is still unclear. When comparing different models of orientation tuning, Anderson, Carandini et al. (2000) found that inhibitory and excitatory conductances in single neurons peak at the same orientation, and correspond to the orientation tuning peak measured by action potential frequency. This was in contrast to the predictions of normalization and veto models, which predict flat or complementary-to-tuning changes in conductances, respectively. Fig. 5(a) shows that the current model is congruent with the intracellular recordings (Anderson, Carandini et al., 2000) and predicts that excitation (black dotted line) and inhibition (gray dotted line) peak at the same orientation. Orientation tuning can be modulated by surrounding orientation (Felsen et al., 2005; Gilbert & Wiesel, 1990), but implementing both position (center vs. surround) and orientation in the current model would require a two-dimensional model of decorrelation. Instead, I use a surrogate

approach where the ‘‘surround’’ is a weak input in orientation space. When such additional input is included (input at the bottom of Fig. 5(a)) in the simulation, both excitation (black solid line) and inhibition (gray solid line) dip at the same position along the orientation space. Electrophysiological recordings show that the orientation tuning function deviates somewhat away from the adapting (Muller, Metha, Krauskopf, & Lennie, 1999) or surrounding orientation (Felsen et al., 2005; Gilbert & Wiesel, 1990). Fig. 5(b) shows similar behavior in the current model tuning function. When a weak input is presented close to the peak tuning, the model tuning function apparently deviates in the opposite direction. In addition, the constant weak input results in constant excitation, raising the baseline excitation level for the modulated tuning curve. This elevation would diminish if the exponential membrane voltage to spike non-linearity were included in the current model; a corresponding finding is seen in the Ozeki et al. (2009) supplementary material (their Fig. 1(d)), where membrane voltage is on average significantly positive for surrounding stimuli which do not evoke action potentials. The negative (below baseline activation rate) sidebands in orientation tuning (Fig. 5(b), dashed line) are due to the decorrelation process, where excitatory input inside the sensitivity function causes emerging inhibition while excitation is still weak due to the exponential signal conduction decay. Such negative side-bands are found in membrane potential orientation tuning


Fig. 5. Decorrelation of a tuning function. Here the input space corresponds to orientation as in Felsen et al. (2005). (a) The preferred orientation of the unit is at 0. Excitation (black curves) and inhibition (gray curves) are defined as presented in Fig. 1 and in the Methods section. Without additional input (dashed lines), both excitation and inhibition peak at zero. When an additional input (at the bottom) is placed aside from the peak of the sensitivity function, both excitation and inhibition (solid lines) dip at the same value in input space. (b) The tuning function is the response (unit output) for stimuli spanning the input space. Tuning function without additional input (dashed line) and with input at 0.2 in input space. Note the elevation of the response level for the constant additional input. (c) and (d) Decorrelation model associated with physiological data (Felsen et al., 2005). The orientation is presented in an arbitrary space, where the orientation shifts linearly. While the experiment contains a center, whose tuning is measured while a surround with a different orientation (at the orientation pointed to by the arrows) interacts with the center, the model simulations can have only one dimension (orientation or position) and thus are constructed with an additional input which does not reach full weight. The solid lines connect the data points in Felsen et al. (2005). The dashed line is the simulation model unit output, with amplitude scaled, input space position (orientation) shifted, and input weight (‘‘contrast’’) and tuning width parameters modified to fit the data. The solid arrows indicate the surround orientation in Felsen et al. (2005) and the additional input in the current model simulation.

curves (Carandini & Ferster, 2000), as well as in action potential orientation tuning curves when studied in detail (De Valois, Yund, & Hepler, 1982). Fig. 5(c) and (d) show that the model simulations can account well for real experimental data (Felsen et al., 2005), where the surrounding grating orientation repels the center orientation tuning away from the unmodulated (center only) tuning function maximum.

3.4. Excitation, inhibition and covariance

Input strength is a key parameter in the physiological model by Schwabe et al. (2006). In their model, input strength drives excitation with a lower threshold and slope than inhibition. Fig. 6 simulates inputs at different strengths. Fig. 6(e) shows that when modulating the input strength, the excitatory drive (which equals the linear input) starts growing faster than the inhibition (d · F_E(x) in the current model). When the hypothetical stimulus covers the sensitivity function, the inhibitory strength reaches the excitatory strength, resulting in zero output. A key difference between this and the Schwabe et al. (2006) model is the lack of a threshold in the current simulations. In vivo, the apparent lack of a threshold emerges from noise which pushes the membrane voltage stochastically above the action potential threshold (Anderson, Lampl et al., 2000). The outputs of linear and decorrelated units are displayed for comparison (Fig. 6(c) and (d)). Note that reducing input strength fails to reproduce the changes in summation field size as a function of stimulus contrast that have been found in the physiological literature (Sceniak et al., 1999). In Fig. 6(d) this is visible as a similar summation peak between the two strongest inputs, denoted with

light green and purple curves. The summation field size stays more or less constant before collapsing when the input strength approaches zero. If the dendritic decay followed a shallower decline with weaker input, the summation peak should shift to the right.

The input strength cannot be the only determinant of the modulation strength, however. During far surround facilitation the input is positive, but the modulation (d) is negative, suggesting an inverted relation between input strength and modulation strength compared to more central stimulation. Fig. 7 shows modulation for stimuli at three different locations along the sensitivity function, each with five different input strengths. Fig. 7(b) shows that although the modulation strength (d-coefficient) is always a linear function of input strength, the slope of the linear relationship depends on the position of the input in the sensitivity function. When the input is at the edge of the sensitivity function, increased input does not suppress, but enhances the response in a linear fashion.

The common denominator for the different behaviors is covariance. When modulation strength is plotted as a function of covariance (Fig. 7(c)), all data points from Fig. 7(b) align nicely. When the input is at the center of the sensitivity function, the covariance is positive, but when it is on the periphery, the covariance is negative. For a given position of the input in the sensitivity function, the covariance is always a linear function of input strength (Fig. 7(d)).

To understand why this happens, it is necessary to understand how covariance is calculated. In the simulations (Fig. 7) the covariance becomes negative at the peripheral part of the sensitivity function, because of the very different mean values of the

S. Vanni / Neural Networks 25 (2012) 30–40


Fig. 6. Excitation and inhibition at different relative input strengths. (a) The area summation function is constructed as in Fig. 3, using a set of narrow Gaussian input distributions for six different relative input strengths (0, 0.12, 0.24, 0.36, 0.48, 0.6 of maximum input, in different colors). (b) d-coefficient for the six different input strengths. (c) Response of a linear unit for increasing input size. (d) Response of a decorrelating unit for increasing input size. (e) Excitation (blue line) and inhibition (red line) for the strongest input (0.6) as a function of input size. (f) Excitation (blue line) and inhibition (red line) for the largest input size (9.95) as a function of input strength.
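The crossover in Fig. 6(e) and (f), where inhibition catches up with a linearly growing excitation, can be sketched numerically. The toy scan below assumes excitation linear in input strength and, following the linear d-versus-strength relation reported for Fig. 7(b), an inhibition fraction d that is itself linear in strength, so the inhibitory drive grows quadratically; gain and d_slope are hypothetical constants, not fitted values.

```python
import numpy as np

def output(strength, d_slope=1.0, gain=1.0):
    """Toy strength scan: excitation is linear in input strength, while the
    inhibition fraction d also grows linearly with strength, so the
    inhibitory drive d * excitation grows quadratically and eventually
    cancels the excitation. gain and d_slope are hypothetical constants."""
    excitation = gain * strength
    d = d_slope * strength
    inhibition = d * excitation
    return excitation - inhibition

strengths = np.linspace(0.0, 1.0, 101)
resp = output(strengths)   # rises, peaks at strength 0.5, returns to zero
```

At low strength excitation dominates, and the response vanishes when the inhibitory drive reaches the excitatory drive, qualitatively matching the zero output for a stimulus covering the sensitivity function.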

sensitivity function and the peripheral input. The mean value of the sensitivity function is at 0.5 (horizontal dashed line in Fig. 7(a)), and any input close to the center covaries positively with the sensitivity. In contrast, positive input confined to the peripheral part covaries negatively with the sensitivity: the mean value of the peripheral input is close to zero (horizontal dotted line at the bottom of Fig. 7(a)), so the input at the peripheral part is mainly above its own mean, precisely where the sensitivity lies below its mean.

How the dependence between covariance and d emerges in biological networks is unclear. If we look at the network dynamics in Tsodyks et al. (1997), we can associate peripheral input and facilitation with a point where the network is relatively little excited and inhibited, and the network state would progress toward higher activity. Correspondingly, when excitation is large, inhibition increases, and after some time the excitation stabilizes to moderate levels. Such a mechanism, named the inhibition-stabilized network, has been suggested to operate in surround suppression (Ozeki et al., 2009), and it depends on strong recurrent excitation and feedback inhibition.

4. Discussion

The current work suggests a mechanism for interaction between representations in the cerebral cortex. A simple model replicates qualitatively three known types of physiological stimulus–stimulus and stimulus–tuning function interactions found in earlier studies. The decorrelation of input within a neuron emerges from varying levels of inhibition, with the inhibition strength reflecting the covariance between the input and the sensitivity function. In addition, the modeled dendritic decay is associated with an assumption

of anisotropic input to the dendrites, which is partially supported by recent experimental data (Jia et al., 2010) and partially awaits experimental confirmation.

The current abstract mathematical formulation of the interaction between neural activation patterns reflects single-neuron experimental data, but was partially motivated also by a need to link a computational explanation to experimental findings. Decorrelation is highly beneficial in many signal analysis applications, and such a benefit might provide strong evolutionary pressure during the phylogeny of a neural network. To model the benefit in neural coding, the current model should be generalized to a population of neurons. The current simulations are not dependent on any particular form of input or sensitivity. Similarly, in retinal ganglion cells decorrelation and receptive field structure might not be linked at all (Graham, Chandler, & Field, 2006). According to the current model, neurons would need to ‘‘know’’ only one parameter, the covariance between input and tuning. This covariance then determines how the input and tuning function are modulated. Intuitively, detection of covariance can be associated with the weight matrix of synaptic strengths, but the actual implementation is unclear.

In physiological experiments, cortical responses are strongly modulated whenever a second stimulus appears in the visual field. This phenomenon has been studied especially in the center-surround paradigm, by looking at how the surround modulates the neural (Angelucci et al., 2002; Knierim & Van Essen, 1992; Levitt & Lund, 1997; Maffei & Fiorentini, 1976; Sceniak et al., 1999) or behavioral (Cannon & Fullenkamp, 1991; Ejima & Takahashi, 1985; Nurminen et al., 2010; Olzak & Laurinen, 1999; Xing & Heeger, 2001) responses to the center. Similar contextual modulation as a


Fig. 7. Relationship between input strength, covariance and d-coefficient in the decorrelation model. (a) The input is confined to three different intervals: central (red), middle (green) and peripheral (blue). Each interval is stimulated with five different input levels spaced linearly between 0.1 and 1, scaled to the sensitivity function maximum. The widths of the intervals are matched so that they have an equal sum of input (area under the curve) at each input level. To minimize errors in the numerical simulations, the input space between −10 and 10 was summed at 0.0001 intervals. After normalization to sum 1, this results in a lower peak sensitivity compared to earlier figures. The horizontal dashed line is the mean level of the sensitivity function. The horizontal dotted line is the mean level of the strongest input confined to the peripheral part of the tuning curve. (b) d-coefficient as a function of input strength in the three compartments. Note the negative values for the peripheral input. (c) d-coefficient as a function of the covariance between input and sensitivity function. (d) Covariance between input and tuning as a function of input strength.
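The sign of the covariance in Fig. 7 follows directly from the mean levels of the two signals, and this can be checked numerically. The sketch below uses a Gaussian sensitivity function on the same −10 to 10 input space and rectangular inputs; the interval positions, widths, and the sensitivity width are illustrative choices rather than the paper's equal-area intervals.

```python
import numpy as np

x = np.arange(-10.0, 10.0, 0.01)        # input space as in Fig. 7(a)
sens = np.exp(-x**2 / (2 * 2.0**2))     # sensitivity function (assumed width)

def cov_with_sens(center, width, strength):
    """Covariance between the sensitivity function and a rectangular
    input of the given strength confined to [center-width, center+width]."""
    inp = np.where(np.abs(x - center) <= width, strength, 0.0)
    return np.cov(inp, sens)[0, 1]

central = cov_with_sens(0.0, 1.0, 1.0)     # input near the sensitivity peak
peripheral = cov_with_sens(8.0, 1.0, 1.0)  # input in the far tail
```

The central input covaries positively with the sensitivity, while the peripheral input covaries negatively: it sits above its own near-zero mean exactly where the sensitivity is below its mean. Scaling the input strength scales the covariance linearly, matching Fig. 7(d).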

function of surround size was recently found with fMRI (Nurminen, Kilpeläinen, Laurinen, & Vanni, 2009), suggesting that macroscopic neural activity replicates findings in behavior and single neurons. The physiological findings have been explained with a particular network architecture, where the excitation and inhibition have a particular relationship (Ozeki et al., 2009; Schwabe & Obermayer, 2005; Schwabe et al., 2006), which can be described as inhibitory stabilization of excitatory neurons in a recurrent network (Tsodyks et al., 1997). The current model contributes to these earlier network models, because it not only predicts the quantity of inhibition within the local network, but also suggests an intracellular decorrelation mechanism.

How can one neuron, e.g. in the primary visual cortex, be sensitive to very distant input, such as a distant object in the visual field? When the stimuli are relatively far away from each other, precluding direct horizontal interaction in the primary visual cortex, non-linear summation emerges first in the extrastriate areas and then spreads to the rest of the system (Vanni et al., 2004). From the single-unit point of view, such spread follows the network of excitatory connections between cortical areas, as Schwabe et al. (2006) suggested, with part of the feedback landing on distal apical dendrites (Spruston, 2008). Thus, while the mechanisms of interaction in the current model are local, they can be sensitive to anatomically distant cortical areas via rapidly conducting feedforward–feedback loops (Bullier, 2001; Girard, Hupe, & Bullier, 2001). The model suggests that distal synaptic input within one neuron may reflect positions very far away from the center of the receptive field. Indeed, our

group has recently found that the fMRI signal spreads significantly (corresponding to up to 28 mm in the primary visual cortex) off the edge of the primary representation (Sharifian, Nurminen and Vanni, unpublished observations). In addition, this and earlier (Shmuel, Augath, Oeltermann, & Logothetis, 2006) data have shown systematically negative BOLD signals far away from the primary representation, much further than local horizontal connections reach.

The output decay function reflects passive dendritic properties, and the model suggests qualitatively varying presynaptic sources at different parts of the dendritic tree. This approach is critically different from a classical output nonlinearity, which is dependent on input strength alone. A single neuron can be described as a two-layer neural network, with a double sigmoid function (Gollo, Kinouchi, & Copelli, 2009; Poirazi, Brannon, & Mel, 2003; Polsky, Mel, & Schiller, 2004). In particular, synaptic input summates non-linearly within a dendritic branch, but the distinct branches are summed linearly at the axon initial segment. Thus, the passive dendritic properties weigh the input according to the distance between the dendritic branch and the soma. Despite voltage-gated mechanisms at the dendrites, the summation at the axon initial segment follows the classical decay function, which is exponentially dependent on the distance of the input branch from the soma (Spruston et al., 1994), and perhaps also on the order of branches in the dendritic tree (Poirazi et al., 2003). Recent data show (Jia et al., 2010) that a single neuron in mouse visual cortex receives a qualitatively wide set of inputs, each housing a local hotspot in the dendritic tree. While the output of the neurons was tuned for orientation, the input reflected all orientations. The study does not address the issue of


large-scale anisotropy across the dendritic tree, and mouse visual cortex, lacking clear clustering for orientation selectivity (Hubener, 2003; Schuett, Bonhoeffer, & Hubener, 2002), might actually not reflect what happens when an input parameter changes smoothly along the cortex. In monkeys, part of the anisotropy might emerge from basal dendrites with some 100 µm radius in V1 (Elston & Rosa, 1998), but sensitivity to distant representations, such as to a distant position in the visual field, must arrive via feedback (Angelucci et al., 2002). Given that feedback targets multiple layers, including layer 1 (reviewed in Felleman and Van Essen (1991)), the assumed dendritic anisotropy in pyramidal neurons should be found in both horizontal and vertical directions along the dendritic tree.

5. Conclusions

The current work suggests a mechanism of cortical interaction, which explains multiple physiological findings with a simple principle of decorrelation between component responses and the sensitivity of a unit, and with dendritic decay weighting the input anisotropically in the dendrites. Future work should address whether these principles are implemented, and whether the emerging predictions for particular stimulus conditions hold. If the answer is yes, we know that the robust cortical interactions in multiple experimental paradigms reflect very simple principles involving both network and intracellular mechanisms.

Authors' contributions

SV did all parts of this single-author work.

Acknowledgments

Tom Rosenström has commented on the manuscript and helped with the formulation of the mathematical equations. I thank Lauri Nurminen, Alessandra Angelucci, Fariba Sharifian, and Irtiza Gilani for useful discussions. This work has been supported by the Academy of Finland (grant Nos: 213464, 124698, 140726, 218054).

References

Anderson, J. S., Carandini, M., & Ferster, D. (2000). Orientation tuning of input conductance, excitation, and inhibition in cat primary visual cortex. Journal of Neurophysiology, 84, 909–926.
Anderson, J. S., Lampl, I., Gillespie, D. C., & Ferster, D. (2000). The contribution of noise to contrast invariance of orientation tuning in cat visual cortex. Science, 290, 1968–1972.
Angelucci, A., Levitt, J. B., Walton, E. J., Hupe, J. M., Bullier, J., & Lund, J. S. (2002). Circuits for local and global signal integration in primary visual cortex. Journal of Neuroscience, 22, 8633–8646.
Atick, J., & Redlich, A. (1992). What does the retina know about natural scenes? Neural Computation, 4, 196–210.
Barlow, H. (1961). Possible principles underlying the transformation of sensory messages. In W. Rosenblith (Ed.), Sensory communication (pp. 217–234). Cambridge, MA: MIT Press.
Barlow, H., & Földiák, P. (1989). Adaptation and decorrelation in the cortex. In R. Durbin, C. Miall, & G. Mitchison (Eds.), The computing neuron (pp. 54–72). Boston: Addison-Wesley Longman Publishing Co., Inc.
Bullier, J. (2001). Integrated model of visual processing. Brain Research Reviews, 36, 96–107.
Cannon, M. W., & Fullenkamp, S. C. (1991). Spatial interactions in apparent contrast: inhibitory effects among grating patterns of different spatial frequencies, spatial positions and orientations. Vision Research, 31, 1985–1998.
Carandini, M., & Ferster, D. (2000). Membrane potential and firing rate in cat primary visual cortex. Journal of Neuroscience, 20, 470–484.
Cavanaugh, J. R., Bair, W., & Movshon, J. A. (2002). Nature and interaction of signals from the receptive field center and surround in macaque V1 neurons. Journal of Neurophysiology, 88, 2530–2546.
De Valois, R. L., Albrecht, D. G., & Thorell, L. G. (1982). Spatial frequency selectivity of cells in macaque visual cortex. Vision Research, 22, 545–559.
De Valois, R. L., Yund, E. W., & Hepler, N. (1982). The orientation and direction selectivity of cells in macaque visual cortex. Vision Research, 22, 531–544.


Ecker, A. S., Berens, P., Keliris, G. A., Bethge, M., Logothetis, N. K., & Tolias, A. S. (2010). Decorrelated neuronal firing in cortical microcircuits. Science, 327, 584–587.
Ejima, Y., & Takahashi, S. (1985). Apparent contrast of a sinusoidal grating in the simultaneous presence of peripheral gratings. Vision Research, 25, 1223–1232.
Elston, G. N., & Rosa, M. G. (1998). Morphological variation of layer III pyramidal neurones in the occipitotemporal pathway of the macaque monkey visual cortex. Cerebral Cortex, 8, 278–294.
Felleman, D. J., & Van Essen, D. C. (1991). Distributed hierarchical processing in the primate cerebral cortex. Cerebral Cortex, 1, 1–47.
Felsen, G., Touryan, J., & Dan, Y. (2005). Contextual modulation of orientation tuning contributes to efficient processing of natural stimuli. Network, 16, 139–149.
Foster, K. H., Gaska, J. P., Nagler, M., & Pollen, D. A. (1985). Spatial and temporal frequency selectivity of neurones in visual cortical areas V1 and V2 of the macaque monkey. Journal of Physiology, 365, 331–363.
Gilbert, C. D., & Wiesel, T. N. (1990). The influence of contextual stimuli on the orientation selectivity of cells in primary visual cortex of the cat. Vision Research, 30, 1689–1701.
Girard, P., Hupe, J. M., & Bullier, J. (2001). Feedforward and feedback connections between areas V1 and V2 of the monkey have similar rapid conduction velocities. Journal of Neurophysiology, 85, 1328–1331.
Gollo, L. L., Kinouchi, O., & Copelli, M. (2009). Active dendrites enhance neuronal dynamic range. PLoS Computational Biology, 5, e1000402.
Graham, D. J., Chandler, D. M., & Field, D. J. (2006). Can the theory of ‘‘whitening’’ explain the center-surround properties of retinal ganglion cell receptive fields? Vision Research, 46, 2901–2913.
Hubener, M. (2003). Mouse visual cortex. Current Opinion in Neurobiology, 13, 413–420.
Ichida, J. M., Schwabe, L., Bressloff, P. C., & Angelucci, A. (2007). Response facilitation from the ‘‘suppressive’’ receptive field surround of macaque V1 neurons. Journal of Neurophysiology, 98, 2168–2181.
Jia, H., Rochefort, N. L., Chen, X., & Konnerth, A. (2010). Dendritic organization of sensory input to cortical neurons in vivo. Nature, 464, 1307–1312.
Knierim, J. J., & Van Essen, D. C. (1992). Neuronal responses to static texture patterns in area V1 of the alert macaque monkey. Journal of Neurophysiology, 67, 961–980.
Kouh, M., & Poggio, T. (2008). A canonical neural circuit for cortical nonlinear operations. Neural Computation, 20, 1427–1451.
Levitt, J. B., & Lund, J. S. (1997). Contrast dependence of contextual effects in primate visual cortex. Nature, 387, 73–76.
London, M., & Hausser, M. (2005). Dendritic computation. Annual Review of Neuroscience, 28, 503–532.
Maffei, L., & Fiorentini, A. (1976). The unresponsive regions of visual cortical receptive fields. Vision Research, 16, 1131–1139.
Movshon, J. A., Thompson, I. D., & Tolhurst, D. J. (1978). Spatial and temporal contrast sensitivity of neurones in areas 17 and 18 of the cat's visual cortex. Journal of Physiology, 283, 101–120.
Muller, J. R., Metha, A. B., Krauskopf, J., & Lennie, P. (1999). Rapid adaptation in visual cortex to the structure of images. Science, 285, 1405–1408.
Nurminen, L., Kilpeläinen, M., Laurinen, P., & Vanni, S. (2009). Area summation in human visual system: psychophysics, fMRI, and modeling. Journal of Neurophysiology, 102, 2900–2909.
Nurminen, L., Peromaa, T., & Laurinen, P. (2010). Surround suppression and facilitation in the fovea: very long-range spatial interactions in contrast perception. Journal of Vision, 10, 9.
Olzak, L. A., & Laurinen, P. I. (1999). Multiple gain control processes in contrast-contrast phenomena. Vision Research, 39, 3983–3987.
Ozeki, H., Finn, I. M., Schaffer, E. S., Miller, K. D., & Ferster, D. (2009). Inhibitory stabilization of the cortical network underlies visual surround suppression. Neuron, 62, 578–592.
Poirazi, P., Brannon, T., & Mel, B. W. (2003). Pyramidal neuron as two-layer neural network. Neuron, 37, 989–999.
Polsky, A., Mel, B. W., & Schiller, J. (2004). Computational subunits in thin dendrites of pyramidal cells. Nature Neuroscience, 7, 621–627.
Renart, A., de la Rocha, J., Bartho, P., Hollender, L., Parga, N., Reyes, A., et al. (2010). The asynchronous state in cortical circuits. Science, 327, 587–590.
Sceniak, M. P., Ringach, D. L., Hawken, M. J., & Shapley, R. (1999). Contrast's effect on spatial summation by macaque V1 neurons. Nature Neuroscience, 2, 733–739.
Schuett, S., Bonhoeffer, T., & Hubener, M. (2002). Mapping retinotopic structure in mouse visual cortex with optical imaging. Journal of Neuroscience, 22, 6549–6559.
Schwabe, L., Ichida, J. M., Shushruth, S., Mangapathy, P., & Angelucci, A. (2010). Contrast-dependence of surround suppression in macaque V1: experimental testing of a recurrent network model. NeuroImage.
Schwabe, L., & Obermayer, K. (2005). Adaptivity of tuning functions in a generic recurrent network model of a cortical hypercolumn. Journal of Neuroscience, 25, 3323–3332.
Schwabe, L., Obermayer, K., Angelucci, A., & Bressloff, P. C. (2006). The role of feedback in shaping the extra-classical receptive field of cortical neurons: a recurrent network model. Journal of Neuroscience, 26, 9117–9129.
Shmuel, A., Augath, M., Oeltermann, A., & Logothetis, N. K. (2006). Negative functional MRI response correlates with decreases in neuronal activity in monkey visual area V1. Nature Neuroscience, 9, 569–577.
Sidiropoulou, K., Pissadaki, E. K., & Poirazi, P. (2006). Inside the brain of a neuron. EMBO Reports, 7, 886–892.


Simoncelli, E. P., & Olshausen, B. A. (2001). Natural image statistics and neural representation. Annual Review of Neuroscience, 24, 1193–1216.
Spruston, N. (2008). Pyramidal neurons: dendritic structure and synaptic integration. Nature Reviews Neuroscience, 9, 206–221.
Spruston, N., Jaffe, D. B., & Johnston, D. (1994). Dendritic attenuation of synaptic potentials and currents: the role of passive membrane properties. Trends in Neurosciences, 17, 161–166.
Tsodyks, M. V., Skaggs, W. E., Sejnowski, T. J., & McNaughton, B. L. (1997). Paradoxical effects of external modulation of inhibitory interneurons. Journal of Neuroscience, 17, 4382–4388.

Vanni, S., Dojat, M., Warnking, J., Delon-Martin, C., Segebarth, C., & Bullier, J. (2004). Timing of interactions across the visual field in the human cortex. NeuroImage, 21, 818–828.
Vanni, S., & Rosenström, T. (2011). Local non-linear interactions in the visual cortex may reflect global decorrelation. Journal of Computational Neuroscience, 30, 109–124.
Vinje, W. E., & Gallant, J. L. (2000). Sparse coding and decorrelation in primary visual cortex during natural vision. Science, 287, 1273–1276.
Xing, J., & Heeger, D. J. (2001). Measurement and modeling of center-surround suppression and enhancement. Vision Research, 41, 571–583.