ARTICLE IN PRESS Neurocomputing 72 (2009) 1849–1858
Stability of random brain networks with excitatory and inhibitory connections

R.T. Gray (a,b,d,*), P.A. Robinson (a,b,c)

a School of Physics, The University of Sydney, NSW 2006, Sydney, Australia
b Brain Dynamics Center, Westmead Millennium Institute, Westmead Hospital and Western Clinical School of the University of Sydney, Westmead 2145, Australia
c Faculty of Medicine, The University of Sydney, NSW 2006, Australia
d National Centre in HIV Epidemiology and Clinical Research, Faculty of Medicine, The University of New South Wales, NSW 2010, Australia
Article history: Received 25 April 2007; received in revised form 29 May 2008; accepted 3 June 2008. Communicated by W.L. Dunin-Barkowski. Available online 16 July 2008.

Abstract
The stability of randomly connected networks of neural populations with randomly distributed excitatory and inhibitory connections is investigated using a simplified physiologically-based model of brain electrical activity. Connections within a random network are randomly assigned to be excitatory or inhibitory, and the strengths of excitatory and inhibitory connections are drawn from two distinct distributions. Stability is shown to depend on the size of the network, the connection probability, and the mean and variance of the network's distribution of connection strengths, thus constraining these quantities. Networks with a nonzero variance for their excitatory and inhibitory strengths are less likely to be stable than networks with zero variance. The effect of changes in overall network activity on an individual population is also investigated. The maximum excitatory and inhibitory inputs into a population are constrained by stability and occur when the magnitudes of the mean excitatory and inhibitory connection strengths are equal and the proportion of connections that are inhibitory has a fixed value less than 0.5. These results are consistent with experimentally determined brain networks.

Crown Copyright © 2008 Published by Elsevier B.V. All rights reserved.
Keywords: Cortical networks; Stability analysis; Random matrix theory; Graph spectra
1. Introduction

The brain consists of approximately 100 billion interconnected excitatory and inhibitory neurons that are arranged into distinct anatomical structures at large scales. These structures form a large nonrandom, modular, hierarchical network [16,22,23,26,44,57] with properties similar to those of small world networks [1,49], composed of a small number of structural building blocks or motifs [47]. The physical and evolutionary reasons for the brain displaying this type of structure have yet to be explained completely. A number of studies have investigated the structure of cortical networks using techniques from complex network theory [46,49], measurements of physical properties such as wiring length and axonal or dendritic volume [10,11,28], and information theoretic measures such as integration, mutual information, and complexity [48]. Complementing this work, we investigate the dynamics of brain networks using a physiologically-based model of the brain's electrical activity, seeking dynamical constraints on the physical and physiological structure of the brain.
* Corresponding author at: School of Physics A28, The University of Sydney, NSW 2006, Sydney, Australia. Tel.: +61 2 9036 7966; fax: +61 2 9351 7726. E-mail address: [email protected] (R.T. Gray).
One possible dynamical constraint is the linear stability of a network's electrical activity. Physiologically-based modeling of the brain's electrical activity in recent years suggests that the stability of this activity in response to a stimulus is an important constraint on brain physiology [7,38,41]. If the brain were unstable, a stimulus would lead to a continual increase in electrical activity (or possibly nonlinear cycles or chaos), likely corresponding to a disorder; e.g., epilepsy [7,38,41]. There is also evidence that the brain operates close to marginal stability, permitting a wide range of flexible, adaptable, and complex behavior [5,6,42,50]. Studies of complex networks have also shown that stability affects their structure. For example, assortative networks have been shown to be less stable than random networks [8], and the frequency of particular motifs in complex networks is also affected by stability [34,54].

In this work we investigate the stability of randomly connected networks. Even though large scale brain networks are known to have a specific nonrandom connectivity, it is useful to explore the stability of randomly connected networks. First, we can obtain insights into why evolution selected a particular connectivity for the brain out of the space of all possible connectivities. Second, as discussed in [17], it is unrealistic that a real network of neural populations has fixed parameter values; it is likely that these values change, or fluctuate, over time. We thus
0925-2312/$ - see front matter Crown Copyright & 2008 Published by Elsevier B.V. All rights reserved. doi:10.1016/j.neucom.2008.06.001
take a statistical approach in our work and investigate how changes in a network's parameters affect the probability that it is stable.

In this work we employ random matrix theory, which has been used extensively over the last 30 years to determine the statistical properties of the eigenvalues of large random matrices. This theory has many applications in physics [13,21,31], and has also been used to investigate the stability of complex networks in biology [30] and neuroscience [9,17,19,20,51]. Previously we studied the stability of randomly connected brain networks with a single distribution of gain values; brain networks with a large gain variance were shown to have a mixture of excitatory and inhibitory connections and to have multiple eigenvalues close to the stability boundary [20]. Rather than having all the connection gains drawn from the same distribution, a more realistic brain network has a certain percentage of inhibitory connections, with different distributions for the strengths of the excitatory and inhibitory connections. In this paper we apply the same model to random brain networks with excitatory and inhibitory connections to investigate how the number of inhibitory connections and the mean and variance of the excitatory and inhibitory strengths affect network stability.

The electrical activity of brain networks has previously been studied using simple dynamical models [3,17,24]. The model we use is similar to these models; however, since it is derived from a general physiologically-based model, a direct link is maintained between the structure and physiology of brain networks and the dynamics of their electrical activity.

The representation of brain networks using directed graphs and their corresponding connection matrices is described in Section 2. In Section 3 the simple physiological model used to describe the dynamics of a brain network is presented, and the central role of the network's gain matrix in determining its stability is demonstrated.
We then use random matrix theory to investigate the stability of brain networks that have randomly distributed excitatory and inhibitory connections with gain values drawn from different distributions in Section 4. We also examine the effect variations in the excitatory and inhibitory strengths have on stability and determine the parameter values that give the maximum average magnitude of the excitatory and inhibitory input gain into each population.
2. Brain network model
As in previous work [17,19,20,46] we represent a brain network of n neuronal populations with a directed graph N and its corresponding n × n connection matrix C(N) = [C_ij]. Each vertex represents a specific neuronal population or brain structure, and an edge signifies a connection from one population to another along which an electrical signal is sent. The neurons constituting each population can belong to physically distinct regions of the brain, for example the visual and motor cortices, or can be physically intermixed with the neurons of another population, as in the case of excitatory and inhibitory neurons in the cortex. If there is a connection from population j to population i then C_ij = 1, and if there is no connection C_ij = 0. In this work we investigate the stability of brain networks with randomly connected populations; the probability that an edge exists between populations is denoted by p.

The dynamics of a brain network's electrical activity is modeled with the same simplified physiological model used previously [19,20]. This model is a simplified version of a recently developed continuum mean-field model of neural behavior, the details of which are summarized in [19] and elsewhere [39,56]. The continuum model has been shown to accurately describe the large scale electrical activity of brain populations at scales greater than a few tenths of a millimeter and is thereby suitable for investigating the dynamics of large scale brain networks. In particular, it produces simulated time series and frequency spectra that closely match those seen in electroencephalograms (EEG) [38,39,42]. The model we use to investigate the dynamics of brain networks is derived from the general model by assuming there is no dendritic filtering and smearing of input, and that the firing rate is proportional to the total synaptic input into a population. However, the temporal damping of a population's neural activity, all the characteristics determining the strength of synaptic connections, and the structure of the underlying brain network remain. This means our simple brain network model is still able to accurately describe the low frequency electrical activity in a network of neural populations [42].

The network model describes the neural fields φ_a that propagate along the outgoing connections of each population a. The linear perturbations of these fields about the assumed steady state are described in Fourier space by

D(ω′)φ_a(ω′) = Σ_b G_ab φ_b(ω′),   (1)

where

D(ω′) = (1 − iω′/γ)² = (1 − iω)²,   (2)

G_ab is the gain from b to a, ω = ω′/γ, and ω′ is the angular frequency; in the spatially uniform case used here γ represents a temporal damping rate. The gain G_ab is a dimensionless quantity describing the effect of changes in the firing rate of neurons in population b on the neurons of population a. Physiologically, G_ab is the number of extra action potentials produced in a per extra action potential incident from b. Hence, the gain G_ab is a measure of how sensitive and responsive a is to changes in b's activity. If G_ab > 0 the connection is excitatory and if G_ab < 0 the connection is inhibitory. If G_ab ≠ 0 a connection exists from population b to population a, and hence C_ab ≠ 0. Therefore the gain matrix G(N) = [G_ab] encodes all of the information in C(N), as well as the strength of connections between populations.

Eq. (1) can be written in matrix form as

D(ω)U(ω) = GU(ω),   (3)

where U is a column vector of the φ_a, D = D(ω)I = (1 − iω)²I, and I is the identity matrix. Setting A = G − D, Eq. (3) can be simplified to

A(ω)U(ω) = 0.   (4)

This equation describes the linear dynamics of a network of neural populations without any external input, and thus determines the stability of the network. If external inputs are present they appear as a vector on the right-hand side of Eq. (4) and determine the activity level of the network, provided it is stable.
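To make the construction above concrete, the following sketch (our own illustration, not code from the paper; the parameter values are only indicative) builds a random connection matrix C(N) with edge probability p and a corresponding gain matrix G whose nonzero entries mark excitatory (G_ab > 0) or inhibitory (G_ab < 0) connections:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_gain_matrix(n, p, p_i, mu_e, mu_i, sigma_e=0.0, sigma_i=0.0):
    """Random gain matrix G: entry G[a, b] is the gain from population b to a.

    Edges exist with probability p; each edge is inhibitory with probability
    p_i, with gains drawn from N(mu_i, sigma_i^2), otherwise excitatory with
    gains from N(mu_e, sigma_e^2). (Hypothetical helper, not from the paper.)
    """
    C = (rng.random((n, n)) < p).astype(float)   # connection matrix C(N)
    inhib = rng.random((n, n)) < p_i             # which edges are inhibitory
    gains = np.where(inhib,
                     rng.normal(mu_i, sigma_i, (n, n)),
                     rng.normal(mu_e, sigma_e, (n, n)))
    return C * gains, C                          # zero out non-edges

# Illustrative values in the range used later in the paper's Fig. 1.
G, C = random_gain_matrix(n=50, p=0.4, p_i=0.3, mu_e=0.107, mu_i=-0.0833)
```

With σ_e = σ_i = 0 every excitatory entry equals μ_e and every inhibitory entry equals μ_i, so G is nonzero exactly where C is.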
3. Stability of brain networks

The linear stability of a network is determined by the solutions ω of the dispersion relation

det[A(ω)] = 0.   (5)
In Fourier space instabilities occur when solutions to the dispersion relation lie in the upper half of the complex plane. These roots correspond to exponential growth terms in the overall solution. The boundary between stable and unstable regions corresponds to the real axis, and networks with dispersion roots at this boundary are marginally stable. The solution with the largest imaginary part is the most unstable, or least stable, and determines the overall level of stability of the network. When stable the imaginary part of this solution corresponds to
the dominant asymptotic decay rate of the transient solution back to the steady state. If the network is unstable, the imaginary part of the most unstable root gives the dominant growth rate of the solution. We term this solution the dominant solution and denote it by ω₁. The frequency corresponding to ω₁, namely Re ω₁/2π, has the maximum power in the frequency spectrum and temporal time series of each neural population in the brain network.

The linear stability of a system is often studied by considering the eigenvalues of the system's characteristic equation. In the time domain this gives solutions with e^(λt) terms rather than the e^(−iωt) terms obtained here via the inverse Fourier transform. The dispersion solutions correspond to the eigenvalues of the system rotated 90° anticlockwise about the origin. Therefore, dispersion solutions with positive imaginary part correspond to eigenvalues with positive real part.
3.1. Determining stability from gain matrix eigenvalues

For the simplified model used here the solutions to Eq. (5) can be determined from the spectrum of the gain matrix G. Setting λ = (1 − iω)², the dispersion relation becomes

det(G − λI) = 0,   (6)

which is a polynomial in the parameter λ, with real coefficients and degree n. This implies the dispersion solutions of a network can be obtained from the eigenvalues λ of G, providing a link between the network's structure and dynamics. The set of eigenvalues of G is called the spectrum of G, which we denote by Sp(G). For each eigenvalue λ in Sp(G) there are two dispersion roots given by

ω = −i ± i√λ.   (7)

Since G is real-valued, the elements of Sp(G) are real or come in complex conjugate pairs. Taking the imaginary part of Eq. (7) for the less stable root, we obtain

Im ω = Im(−i + i√λ)   (8)
     = −1 + Im(i√λ)   (9)
     = −1 + Re √λ.   (10)

The stability condition Im ω < 0 implies that Re √λ < 1, which implies that for G to be stable, all the λ in Sp(G) must satisfy

Re λ + |λ| < 2.   (11)

The critical boundary between unstable and stable states is given by

Re λ + |λ| = 2.   (12)

If λ = x + iy, the stability zone is a parabolic region in the complex plane given by y² < 4 − 4x. The axis of the parabolic boundary is along the real axis, with a turning point at (x, y) = (1, 0) and imaginary axis intercepts at y = ±2. The eigenvalue corresponding to ω₁ is the least stable eigenvalue of the network and is termed the dominant eigenvalue, denoted by λ₁.

4. Spectrum and stability of random brain networks with inhibitory and excitatory connections

In this work we are interested in the effect of randomly distributed excitatory and inhibitory connections on the stability of brain networks. We consider random networks with n populations and probability of connection p. Connections are randomly assigned to be inhibitory with probability p_i. Excitatory connections are given gain values drawn from a probability distribution g_e(x) with mean μ_e > 0 and variance σ_e², while inhibitory connections have a probability distribution g_i(x) with mean μ_i < 0 and variance σ_i². Note that g_e(x) must equal zero for x ≤ 0 and g_i(x) must equal zero for x ≥ 0, otherwise connections would change from being excitatory to inhibitory and vice versa. In particular, if Gaussian excitatory and inhibitory distributions are used as approximations, σ_e and σ_i must be small compared to μ_e and |μ_i| to avoid any significant violation of this criterion. The distribution of gains g(x) for this type of network is given by

g(x) = (1 − p_i)g_e(x) + p_i g_i(x),   (13)

which has mean

μ_g = (1 − p_i)μ_e + p_i μ_i,   (14)

and variance

σ_g² = p_i(1 − p_i)(μ_e − μ_i)² + (1 − p_i)σ_e² + p_i σ_i².   (15)

Note that σ_g² cannot be zero if both excitatory and inhibitory connections exist in the network. Also, if σ_g² ≠ 0, the gain matrix is almost certainly asymmetric, even if the underlying structural connectivity of the network is highly symmetric, so we only consider asymmetric gain matrices. Finally, the entries G_ab of the gain matrix are independent and identically distributed with mean g = pμ_g and variance σ² = p[σ_g² + (1 − p)μ_g²], since the binomial connection probability distribution and the gain distribution are independent. If σ_e and σ_i are small compared to μ_e and |μ_i| then σ_g² ≈ p_i(1 − p_i)(μ_e − μ_i)².

In this section we calculate the probability that a random network is stable and determine stability constraints for a network in terms of its parameters n, p, p_i, μ_e, μ_i, σ_e, and σ_i. If σ_e = σ_i = 0 then the values of μ_e and μ_i can be determined from μ_g and σ_g² using Eqs. (14) and (15), giving

μ_e = √[p_i σ_g²/(1 − p_i)] + μ_g,   (16)

and

μ_i = μ_g − √[(1 − p_i)σ_g²/p_i].   (17)

If μ_g > 0 then μ_e > 0, and the condition μ_i < 0 requires

μ_g < √[(1 − p_i)σ_g²/p_i], i.e., σ_g² > p_i μ_g²/(1 − p_i).   (18)
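A quick worked check of Eqs. (14)–(17) (our own sketch with illustrative parameter values, not the paper's code): compute μ_g and σ_g², then recover μ_e and μ_i from them in the σ_e = σ_i = 0 case.

```python
import math

# Illustrative parameters (assumed for this example only).
p_i, mu_e, mu_i = 0.3, 0.107, -0.0833
sigma_e2 = sigma_i2 = 0.0

# Mean and variance of the gain distribution.
mu_g = (1 - p_i) * mu_e + p_i * mu_i                          # Eq. (14)
sigma_g2 = (p_i * (1 - p_i) * (mu_e - mu_i) ** 2
            + (1 - p_i) * sigma_e2 + p_i * sigma_i2)          # Eq. (15)

# Recover mu_e and mu_i from (mu_g, sigma_g2) when sigma_e = sigma_i = 0.
mu_e_rec = math.sqrt(p_i * sigma_g2 / (1 - p_i)) + mu_g       # Eq. (16)
mu_i_rec = mu_g - math.sqrt((1 - p_i) * sigma_g2 / p_i)       # Eq. (17)
```

For these values μ_g > 0 and σ_g² exceeds p_i μ_g²/(1 − p_i), so the condition of Eq. (18) holds and the recovered means reproduce μ_e and μ_i exactly.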
Experimentally determined connection networks for the cortex of the cat and macaque monkey have been published and analyzed with graph-theoretical methods [16,22,23,26,44,48,57]. These networks range in size from 30 to 80 structures, with between 20% and 40% of all possible connections present. (These connection percentages may increase with the future discovery of connections between cortical components.) In light of this work we present numerical results for random brain networks with parameters corresponding to these values. We demonstrate our results using Gaussian distributions, since an ensemble of brain networks with normally distributed gains can be interpreted as representing a brain network with connection gains that fluctuate randomly over time around the mean values μ_e and μ_i. However, as long as g and σ_g are well-defined, our results are valid for arbitrary distributions of the excitatory and inhibitory connection gains.

4.1. Spectrum of random networks

Section 3.1 showed that the stability of a brain network is determined by the distribution of its gain matrix eigenvalues in the complex plane. To determine the probability P_s that a brain
network is stable, we need to determine the probability that all the eigenvalues of the network satisfy Eq. (11). Hence we need to express the eigenvalue distribution for a network in terms of its parameters. Previous work [19,20] has shown that the eigenvalue distribution of large random brain networks with a distribution of gain values can be accurately approximated using results from random matrix theory. In [20] we used the random matrix results of [18,45] to numerically determine the eigenvalue distribution of ensembles of random gain matrices. It was shown that if G is a real n × n random asymmetric gain matrix with the G_ab being independent random variables with common mean g and variance σ², such that the correlation between asymmetric matrix entries C(G_ab, G_cd) = (⟨G_ab G_cd⟩ − g²)/σ² is zero for all (a, b) ≠ (c, d), (d, c), and the correlation between symmetric entries C(G_ab, G_ba) = (⟨G_ab G_ba⟩ − g²)/σ² equals the same constant τ for all a ≠ b, then the eigenvalue distribution ρ(λ) of G can be approximated by the superposition of two distributions. Note that for the asymmetric matrices studied here τ = C(G_ab, G_ba) = 0 for all a ≠ b.

The first distribution, called the principal distribution and denoted ρ_p(λ), represents the distribution of one eigenvalue of Sp(G), which is real and normally distributed with mean ng and variance σ²; i.e.,

ρ_p(x) ≈ [1/(σ√(2π))] exp[−(x − ng)²/(2σ²)].   (19)

The second distribution, called the bulk distribution and denoted ρ_b(λ), represents the distribution of the other n − 1 eigenvalues. The bulk distribution is approximately (with equality as n → ∞)

ρ_b(λ) ≈ (πnσ²)⁻¹,   (20)

if x² + y² ≤ nσ², and 0 otherwise, where λ = Re λ + i Im λ = x + iy. Our work in [20] showed that if g ≫ σ² > 0 the principal and bulk distributions are clearly distinguishable from each other and the principal distribution is the distribution of the dominant eigenvalue λ₁. If g ≈ σ² the principal and bulk distributions overlap, while if σ² ≫ g the bulk distribution completely overlaps the principal distribution and the entire spectral distribution of G is given by Eq. (20).

Eq. (20) shows that at least n − 1 eigenvalues of large random gain matrices are uniformly distributed within a circle centered on the origin with radius σ√n. The projection of Eq. (20) onto the real axis is given by

ρ_x(x) = ∫ ρ_b(λ) dy = [2/(πnσ²)] √(nσ² − x²),   (21)

if |x| ≤ σ√n, and ρ_x(x) = 0 otherwise. This is Wigner's semicircle law [45,55]. The boundary of this distribution intersects the positive real axis at σ√n.

For random brain networks with distinct distributions for excitatory and inhibitory connections, the random matrix theory results in this section imply that these networks have a principal distribution that is Gaussian with mean

ng = np[(1 − p_i)μ_e + p_i μ_i],   (22)

and variance

σ² = p[p_i(1 − p_i)(μ_e − μ_i)² + (1 − p){(1 − p_i)μ_e + p_i μ_i}²] + p[(1 − p_i)σ_e² + p_i σ_i²],   (23)

and a bulk spectrum which is almost certainly contained within a circle the square of whose radius is

nσ² = np[p_i(1 − p_i)(μ_e − μ_i)² + (1 − p){(1 − p_i)μ_e + p_i μ_i}²] + np[(1 − p_i)σ_e² + p_i σ_i²].   (24)

In Fig. 1 we show an example eigenvalue distribution for an ensemble of brain networks with normally distributed excitatory and inhibitory connections. This figure shows that the eigenvalue distribution of brain networks with this double-hump gain distribution is accurately described by the random matrix results. Fig. 1(b) shows the eigenvalue distribution of an ensemble of 100 brain networks with the gain distribution in Fig. 1(a). A subset of the ensemble eigenvalues, representing the principal distribution, is distributed along the real axis and centered on the stability boundary; the rest of the eigenvalues represent the bulk distribution. The distribution of the real part of the eigenvalues for the ensemble is shown in Fig. 1(c). Figs. 1(b) and (c) show that the principal distribution and the bulk distribution are well approximated by the theoretical predictions. The only significant discrepancy between the predicted and actual spectral distributions is an exponential tail in the real part of the bulk distribution seen in Fig. 1(c). This discrepancy is due to the finite size of the brain networks in the ensemble and disappears as n → ∞ [45].

4.2. Stability of random brain networks

The results in the previous section show that we can accurately predict the distribution of a network's eigenvalues from its parameters using random matrix theory. From this distribution we can calculate the probability that a brain network is stable. The probability P_s that a large random brain network G is stable equals the probability that all the eigenvalues in Sp(G) satisfy Eq. (11). For a random brain network with n populations and a gain matrix whose entries have mean g and variance σ², this probability equals the product of the probability that the eigenvalue in the principal distribution has real part less than one and the probability that all n − 1 eigenvalues of the bulk spectrum have real part less than one.

Since the principal distribution is approximately normal with mean ng and variance σ², the probability p_ps that the eigenvalue in the principal distribution has real part less than one is

p_ps = ∫_{−∞}^{1} ρ_p(x) dx = [1/(σ√(2π))] ∫_{−∞}^{1} e^{−(x−ng)²/(2σ²)} dx = (1/2){1 + erf[(1 − ng)/(σ√2)]},   (25)

while the probability p_bs that an eigenvalue in the bulk distribution is stable is approximately equal to the integral of Eq. (21) between −∞ and 1; i.e.,

p_bs = ∫_{−∞}^{1} ρ_x(x) dx = [2/(πnσ²)] ∫_{−σ√n}^{1} √(nσ² − x²) dx.   (26)

If nσ² < 1 then p_bs = 1; otherwise, the integral in Eq. (26) yields

p_bs = 1/2 + (1/π) sin⁻¹[1/(σ√n)] + √(nσ² − 1)/(πnσ²).   (27)

Since the bulk spectrum contains n − 1 eigenvalues, the probability that all the eigenvalues in the bulk spectrum are stable is

[p_bs]^{n−1} = {1/2 + (1/π) sin⁻¹[1/(σ√n)] + √(nσ² − 1)/(πnσ²)}^{n−1}.   (28)

As n → ∞ the value of p_bs converges to 0.5 if nσ² > 1. Therefore the probability that all the eigenvalues of the bulk spectrum are stable converges to zero as n → ∞ if nσ² > 1.
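Eqs. (25)–(28) can be combined into a single routine. The sketch below is our own transcription of these formulas (not the authors' code), giving the theoretical stability probability as a function of n and the per-entry gain mean g and standard deviation σ:

```python
import math

def stability_probability(n, g, sigma):
    """Theoretical P_s of Eq. (29): product of the principal-eigenvalue
    factor (Eq. 25) and the (n-1)-fold bulk factor (Eqs. 26-28)."""
    n_sigma2 = n * sigma ** 2
    # Eq. (25): probability the principal (real, Gaussian) eigenvalue < 1.
    p_ps = 0.5 * (1 + math.erf((1 - n * g) / (sigma * math.sqrt(2))))
    # Eqs. (26)-(27): probability one bulk eigenvalue has real part < 1.
    if n_sigma2 < 1:
        p_bs = 1.0
    else:
        p_bs = (0.5 + math.asin(1 / (sigma * math.sqrt(n))) / math.pi
                + math.sqrt(n_sigma2 - 1) / (math.pi * n_sigma2))
    return p_ps * p_bs ** (n - 1)
```

For example, with n = 50, ng = 1, and nσ² well below 1, the bulk factor is 1 and the principal factor is exactly 1/2, reproducing the critical value P_s = 0.5 at ng = 1 discussed below.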
Fig. 1. Eigenvalue distribution for an ensemble of 5000 random brain networks with n = 50, p = 0.4, p_i = 0.3, and normal distributions for excitatory and inhibitory connections with μ_e = 0.107, σ_e = 0.02, μ_i = −0.0833, and σ_i = 0.015. (a) Gain distribution for connections in the network. (b) Eigenvalue distribution for 100 networks in the ensemble; the solid circle is the predicted boundary of the bulk spectrum and the dot–dashed curve is the stability boundary given by Eq. (12). (c) Distribution of the real part of the eigenvalues for the ensemble; the solid curve is the bulk distribution prediction nρ_x(x) given by Eq. (21), and the smaller Gaussian solid curve is the predicted principal distribution.
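The random-matrix predictions illustrated in Fig. 1 are easy to check numerically. The sketch below (our own illustration, with arbitrary i.i.d.-Gaussian entries rather than the paper's two-distribution gains) verifies that one eigenvalue sits near ng while the remaining n − 1 lie inside the circle of radius σ√n:

```python
import numpy as np

rng = np.random.default_rng(1)
n, g, s = 200, 0.01, 0.05          # per-entry mean g and std s (illustrative)
G = rng.normal(g, s, (n, n))       # i.i.d. gain matrix, mean g, variance s^2
eig = np.linalg.eigvals(G)

# The principal eigenvalue is the real outlier near n*g = 2; the bulk is the
# remaining n-1 eigenvalues, predicted to lie within radius s*sqrt(n).
idx = np.argmax(eig.real)
principal = eig[idx]
bulk = np.delete(eig, idx)
radius = s * np.sqrt(n)
```

With these values ng = 2 is well outside the bulk radius σ√n ≈ 0.71, so the outlier is unambiguous (the network itself is unstable here, but only the spectrum is being checked).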
Since the principal and bulk distributions are independent, the expected probability P_s that a random brain network with gain matrix G is stable equals the product of Eqs. (25) and (28); i.e.,

P_s = (1/2){1 + erf[(1 − ng)/(σ√2)]} {1/2 + (1/π) sin⁻¹[1/(σ√n)] + √(nσ² − 1)/(πnσ²)}^{n−1}.   (29)

In Fig. 2(a) P_s is plotted as a function of ng and nσ² and compared to the predictions of Eq. (29). For a network to have both excitatory and inhibitory connections, its parameter values must satisfy Eq. (18), which corresponds to the region above the solid line in Figs. 2(a) and (b). Fig. 2(a) shows there is a transition from almost every network being stable to almost no networks being stable as ng and nσ² increase from 0 to 2, as seen in the theoretically predicted contour plot in Fig. 2(b). Comparing Figs. 2(a)–(c) shows that for fixed nσ² < 0.7 the numerical P_s agree well with the theoretical predictions. There is a rapid transition from P_s ≈ 1 to 0 that has a sigmoid shape with a critical value (where P_s = 0.5) at ng = 1. Fig. 2(c) shows that the theoretical transition is slightly steeper than the numerical results. As shown in Fig. 2(a), the width of this transition increases as the value of nσ² increases. In Fig. 2(b) the theoretical transition width from P_s ≈ 1 to 0 as nσ² increases is the same for all values of ng < 0.75. Fig. 2(d) shows that the theoretical P_s is approximately one until nσ² = 1 and then rapidly decreases to a critical value at nσ² ≈ 1.2. The decrease in stability becomes more gradual for nσ² > 1.2. The numerical results in Fig. 2(d) show that the actual transition has a sigmoid shape and is always slightly less than the theoretical P_s. In Fig. 2(a) the results show that for each value of ng < 0.8 the width of this transition is constant, but the critical value of nσ² decreases from approximately 1.2 to 0.8 as ng increases from 0 to 1. At ng = 0.8 the critical value of nσ² is approximately 1. For ng ≤ 1 the theoretical P_s is always greater than the numerical value. These discrepancies between the theoretical and numerical P_s are due to the exponential tails in the distribution of the real part of the ensemble eigenvalues seen in Fig. 1(c).

These results are repeated in Fig. 3 with the same parameter values except that n = 100. Fig. 3 shows the same features as Fig. 2; however, the transitions from a network being almost surely stable to almost surely unstable are steeper and more accurately predicted by the theoretical results. In Figs. 3(a) and (d) the critical value of nσ² is closer to unity than in the corresponding plots in Fig. 2. As n → ∞ the discrepancies between the numerical and theoretical results seen in Figs. 2 and 3 disappear, and the transitions from P_s ≈ 1 to 0 converge to step functions with critical values ng = 1 and nσ² = 1.

The results in this section show that the structure and physiology of large random brain networks with excitatory and inhibitory connections are constrained by stability. For nσ² ≪ 1 a brain network is almost certainly stable if ng = npμ_g < 1 and almost certainly unstable if ng = npμ_g > 1. The results in Figs. 2 and 3 show that stability also constrains nσ², though not as sharply as ng. If nσ² = np[σ_g² + (1 − p)μ_g²] < 1, a large random network is highly likely to be stable, and any increase in the variance of the gains of a network with nσ² ≈ 1 greatly decreases P_s.
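The numerical P_s values of the kind plotted in Figs. 2 and 3 can be estimated by Monte Carlo, counting the fraction of sampled gain matrices whose eigenvalues all satisfy Eq. (11). The sketch below (our own illustration; small ensembles for speed) demonstrates the stable and unstable regimes:

```python
import numpy as np

rng = np.random.default_rng(2)

def is_stable(G):
    """A network is stable when every eigenvalue lambda of its gain matrix
    satisfies Re(lambda) + |lambda| < 2 (Eq. 11)."""
    lam = np.linalg.eigvals(G)
    return bool(np.all(lam.real + np.abs(lam) < 2.0))

def monte_carlo_Ps(n, g, s, trials=200):
    """Fraction of i.i.d.-Gaussian gain matrices (mean g, std s) that are stable."""
    stable = sum(is_stable(rng.normal(g, s, (n, n))) for _ in range(trials))
    return stable / trials

Ps_stable = monte_carlo_Ps(50, 0.0, 0.05, trials=100)    # ng = 0, n*s^2 = 0.125
Ps_unstable = monte_carlo_Ps(50, 0.04, 0.05, trials=100)  # ng = 2, n*s^2 = 0.125
```

With ng = 0 and nσ² ≪ 1 essentially every sample is stable, while with ng = 2 the principal eigenvalue sits near 2 and essentially every sample is unstable, consistent with the sharp transition at ng = 1 described above.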
Fig. 2. Probability of stability P_s of brain networks with random gain matrices as a function of ng and nσ². Numerical results were obtained from ensembles of 1000 networks with n = 50, p = 0.4, p_i = 0.4, and σ_e = σ_i = 0. (a) Contour plot of the numerical P_s as a function of ng and nσ²; the contour lines run from 95% to 5% in increments of 10%, and the solid line is the boundary of the region defined by Eq. (18) (a network must have a value of nσ² above this line to contain both excitatory and inhibitory connections). (b) Theoretical contour plot given by Eq. (29). (c) P_s as a function of ng for nσ² = 0.496. (d) P_s as a function of nσ² for g = 0. The solid lines in (c) and (d) are the corresponding theoretical predictions.
4.3. Effect of variations in excitatory and inhibitory connection gains on stability

The results in the previous section show that stability constrains the mean and variance of the connection gains of a network. The predicted stability curves in Figs. 2(d) and 3(d) show that if nσ² ≈ 1 then a small increase in nσ² can result in a large decrease in P_s. The numerical results in the previous section were calculated with σ_e = σ_i = 0. In this section we study the effect of variations in σ_e and σ_i on the stability of a network. To do this we use Gaussian distributions for the excitatory and inhibitory connection gains. For these distributions the values of σ_e and σ_i must be small, so that excitatory connections remain excitatory and inhibitory connections remain inhibitory. This type of distribution would be expected if the gains in a brain network fluctuated around a mean value over time. However, in general the distributions for the excitatory and inhibitory gains could have large values of σ_e and σ_i without changing the type of connection.

Eq. (22) shows that the value of ng is independent of σ_e and σ_i. However, Eq. (24) shows that these parameters contribute to the value of nσ², and hence to P_s. The contribution of σ_e and σ_i to nσ² equals np[(1 − p_i)σ_e² + p_i σ_i²], which is always greater than or equal to zero and is small for small values of σ_e and σ_i. Fig. 4 shows the effect of an increase in σ_e and σ_i on the eigenvalue distribution of an ensemble of brain networks. In this figure the parameters of the networks in the ensemble are set so that ng = 0, nσ² ≈ 1, and σ_e/μ_e = σ_i/|μ_i|. The two rows in Fig. 4 show the change in the gain distribution and the corresponding spectrum of an ensemble of networks as σ_e and σ_i increase. Note that the specific results in this figure hold in general, as the distribution of the spectrum depends only on nσ². In Fig. 4 there is only a small change in the spectrum of the ensemble due to the increase in σ_e and σ_i. This increase results in a small increase in nσ², from 0.972 to 1.047, but a large decrease in the probability of stability, from 0.72 to 0.57 (these values were calculated from an ensemble of 5000 brain networks). These results show that, despite only giving a small contribution to the value of nσ², a small change in σ_e or σ_i can lead to a large change in the probability that a network is stable. For the Gaussian distributions used here these changes in stability only occur if nσ² ≈ 1, since σ_e and σ_i must be small. However, in the general case the values of σ_e and σ_i could be large and hence have a larger impact on stability. The results in this section show that a network is more likely to be stable if σ_e = σ_i = 0; i.e., if there is no variation in the excitatory and inhibitory gains.
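The two facts used here, namely that ng is independent of σ_e and σ_i while nσ² grows with them, can be checked directly from Eqs. (14), (15), (22), and (24). The sketch below uses our own illustrative parameter values chosen so that μ_g = 0 (hence ng = 0) and nσ² is near 1, mimicking the regime of Fig. 4:

```python
# Network size, connection probability, and inhibitory fraction (assumed values).
n, p, p_i = 50, 0.4, 0.3

def ng_ns2(mu_e, mu_i, sigma_e, sigma_i):
    """Return (ng, n*sigma^2) from Eqs. (14), (15), (22), (24)."""
    mu_g = (1 - p_i) * mu_e + p_i * mu_i                          # Eq. (14)
    sigma_g2 = (p_i * (1 - p_i) * (mu_e - mu_i) ** 2
                + (1 - p_i) * sigma_e ** 2 + p_i * sigma_i ** 2)  # Eq. (15)
    ng = n * p * mu_g                                             # Eq. (22)
    ns2 = n * p * (sigma_g2 + (1 - p) * mu_g ** 2)                # Eq. (24)
    return ng, ns2

# mu_i = -(1 - p_i)/p_i * mu_e makes mu_g = 0, so ng = 0 in both cases.
ng0, ns0 = ng_ns2(0.15, -0.35, 0.0, 0.0)     # zero gain variation
ng1, ns1 = ng_ns2(0.15, -0.35, 0.02, 0.03)   # small gain variation
```

Increasing σ_e and σ_i leaves ng at zero but pushes nσ² up by np[(1 − p_i)σ_e² + p_i σ_i²], which is exactly the mechanism by which small gain fluctuations reduce P_s near nσ² ≈ 1.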
Fig. 3. Probability of stability for random brain networks with the same parameters and plotting conventions as in Fig. 2, except n = 100.
In a network with σ_e = σ_i = 0, an excitatory connection from population b to population a has a gain equal to μ_e. The larger the value of μ_e, the larger the increase in activity in a due to changes in b, and the more responsive and sensitive a is to small changes in b. Similarly, if the connection from b to a is inhibitory, then the larger the value of |μ_i|, the larger the decrease in activity in a due to b. Therefore, the larger the values of μ_e and |μ_i|, the more sensitive and responsive a population is to activity changes in its neighboring populations. The results in this section show that for a network to have highly sensitive and responsive populations, stability requires σ_e = σ_i = 0, so that the network can have the largest possible values of μ_e and |μ_i| while remaining stable.
4.4. Random brain networks with maximum net excitatory and inhibitory input gains

At the end of the last section we considered the change in activity of a network population due to changes in activity of a neighboring population. In this section we consider the average excitatory and inhibitory effect on the activity of a population due to activity changes in all the populations of the network. To measure the effect of changes in activity of the entire network on the activity of a single population, the total excitatory and inhibitory input into that population must be determined. Here the total input into a population is defined by the right-hand side of
Eq. (1). Therefore, the total excitatory and inhibitory input into each population depends on the gains and the activity levels of the populations. However, since on average each population in a random brain network has the same number of excitatory and inhibitory connections, each population has on average the same transient firing rate. A measure of the total excitatory and inhibitory input into each population can then be defined by summing all the incoming excitatory gains and all the incoming inhibitory gains, respectively. Since the connections into a population can be either excitatory or inhibitory, the total input gain into a population can be split into the sum of the total excitatory and total inhibitory gains. The total excitatory gain into population a, denoted G_aE, equals Σ_b G_ab, where the sum extends over all the positive G_ab; we similarly define G_aI. The average total excitatory gain into each population of a network is denoted by G_E = Σ_a G_aE / n and the average net inhibitory gain by G_I = Σ_a G_aI / n. The larger the value of G_E in a network, the larger the average increase in electrical activity a population experiences due to changes in the activity of its excitatory neighbors. Similarly, the larger |G_I|, the greater the average decrease in activity. Thus, the larger the values of G_E and |G_I|, the more responsive a single population is to changes in the activity of other populations. The sum G_E + G_I equals the average total input gain into each population. For a random brain network G_E + G_I = nγ, while σ² is the variance of the net input gain. The results in Section 4.2 show that stability constrains the mean and variance of the net
Fig. 4. Distributions of eigenvalues for ensembles of networks with n = 50, p = 0.4, p_i = 0.4, and varying σ_e and σ_i. (a) and (c) show the double-hump gain distributions, both with μ_e = 0.18 and μ_i = −0.27. In (a), σ_e = 0.001 and σ_i = 0.0015; in (c), σ_e = 0.05 and σ_i = 0.075. (b) and (d) show the corresponding eigenvalue distributions for an ensemble of 100 brain networks. The solid circle is the predicted boundary of the bulk spectrum and the dot-dashed line is the stability boundary given by Eq. (12).
input gain into the populations of a random network, and hence the values of G_E and G_I. For a random brain network with a gain distribution given by Eq. (13), G_E = np(1 − p_i)μ_e and G_I = np p_i μ_i. Therefore G_E and G_I are directly proportional to μ_e and μ_i, respectively. The previous section showed that the maximum values of μ_e and |μ_i| allowed by stability occur when σ_e = σ_i = 0. We therefore only consider networks with σ_e = σ_i = 0 in this section; hence, μ_e and μ_i are given by Eqs. (16) and (17). Using Eq. (16) we find

G_E = npσ_g √[p_i(1 − p_i)] + np(1 − p_i)μ_g.    (30)

Differentiating this with respect to p_i gives

d(G_E)/dp_i = npσ_g (1 − 2p_i) / {2√[p_i(1 − p_i)]} − npμ_g,    (31)

which equals zero when

σ_g (1 − 2p_i) / √[p_i(1 − p_i)] = 2μ_g.    (32)

Rearranging and simplifying Eq. (32), we find

p_i² − p_i + σ_g² / (4μ_g² + 4σ_g²) = 0.    (33)

This can be solved to give the value of p_i that maximizes G_E as a function of μ_g and σ_g²:

p_i = 1/2 ∓ (1/2)√[μ_g² / (μ_g² + σ_g²)].    (34)

If μ_g = 0 then Eq. (34) implies that p_i = 0.5, while Eq. (32) implies that p_i < 0.5 if μ_g > 0 and p_i > 0.5 if μ_g < 0. Hence, the minus sign in Eq. (34) applies for μ_g > 0 and the plus sign for μ_g < 0. Note that the other solution of Eq. (33) is equal to 1 − p_i. Eq. (34) shows that as nγ increases from zero, the value of p_i that maximizes G_E decreases from 0.5. For example, in a network with n = 50, p = 0.4, nγ = 1, and nσ² = 1, we obtain p_i = 0.389. Substituting Eq. (34) into Eqs. (16) and (17) gives

μ_e = −μ_i = √(μ_g² + σ_g²).    (35)

Thus, for a brain network to have maximum G_E, its average excitatory and inhibitory connection gains must have the same absolute value. These results show that a random brain network with n populations and connection probability p has the largest possible excitatory input gain if it has particular values for p_i, μ_e, and μ_i (each of which can be expressed in terms of n and p). The
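As a numerical check on Eqs. (30)–(34), the optimal p_i can be verified by scanning G_E over p_i. This is an illustrative sketch, not the authors' code; the value σ_g ≈ 0.219 is inferred from the worked example (n = 50, p = 0.4, nγ = 1, nσ² = 1, p_i = 0.389) rather than stated explicitly in the text.

```python
import numpy as np

def G_E(p_i, n, p, mu_g, s_g):
    """Average total excitatory gain as a function of p_i, Eq. (30)."""
    return n * p * s_g * np.sqrt(p_i * (1 - p_i)) + n * p * (1 - p_i) * mu_g

def p_i_opt(mu_g, s_g):
    """The p_i that maximizes G_E, Eq. (34) (minus sign, valid for mu_g > 0)."""
    return 0.5 - 0.5 * np.sqrt(mu_g**2 / (mu_g**2 + s_g**2))
```

For μ_g = 0.05 and σ_g = 0.219, p_i_opt returns ≈ 0.389, and a brute-force scan of G_E over p_i ∈ (0, 1) peaks at the same point, confirming that the stationary point of Eq. (31) is the global maximum for μ_g > 0.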
maximum value that G_E can have for fixed μ_g and σ_g² is therefore given by

G_E = np[μ_g + √(μ_g² + σ_g²)]/2,    (36)

and the corresponding value of G_I is

G_I = np[μ_g − √(μ_g² + σ_g²)]/2.    (37)
Eq. (36) shows that the maximum of G_E increases with increasing μ_g and σ_g². Hence, for a network to have its largest average excitatory gain, μ_g and σ_g² need to have the largest values allowed by stability. Note that in this case |G_I| is also as large as possible. Therefore the maximum values of G_E and G_I are "balanced" with respect to the mean total input gain G_E + G_I = npμ_g = nγ; i.e., G_E − nγ/2 = nγ/2 − G_I. Such a network would be highly responsive to small changes in external input and would be expected to have similar properties to networks of balanced neurons that have been investigated previously [53]. We showed earlier that stability constrains nγ and nσ² for a random brain network, and hence its possible values of μ_g and σ_g². For large brain networks the critical value of nσ² equals one, independent of nγ. However, for finite networks the critical value of nσ² decreases as nγ increases. For example, in Fig. 2(a) it decreases from approximately 1.2 to 0.8 as nγ increases from 0 to 1. Calculating G_E over this range shows that G_E has a maximum for nγ < 1 rather than at nγ = 1.
5. Summary and discussion

A simplified physiologically-based model of neural activity was used to study the stability of randomly connected brain networks with randomly distributed excitatory and inhibitory connections. Other simple models have previously been used to investigate neural activity [3,17,24]. While our model can be written in a form similar to these models, it has the advantage of being derived from a general physiologically-based continuum model which can in principle be used to investigate the dynamics of brain networks. This means our brain network model maintains a direct link between the physiology and dynamics of brain networks; if additional physiological features are required, or of interest, they can easily be included. Our model incorporates the temporal damping of a population's neural activity, the characteristics determining the strength of synaptic connections, and the structure of the underlying brain network. Investigations of the general model show that our model accurately describes low-frequency oscillations of the network's electrical activity, reproducing the low-frequency power spectrum, for example [42]. In this paper we investigated the stability of randomly connected brain networks with randomly distributed excitatory and inhibitory connections with different probability distributions of strength. The stability of a brain network is determined by the solutions of the network's dispersion relation, which are obtained from its gain matrix. We showed that stability constrains the structure and physiology of brain networks with randomly distributed excitatory and inhibitory connections. A network is highly likely to be stable if its parameters satisfy nγ < 1 and nσ² < 1 (where nγ and nσ² are given by Eqs. (22) and (23), respectively).
These results imply that stability strongly constrains the number of populations, the number of connections, the average connection gain, and the variance of the connection gains in a network. Small variations in the excitatory and inhibitory gains, measured by σ_e and σ_i, were shown to decrease the probability that a network is stable, despite having only a small effect on the network's spectral distribution and on the value of nσ². This decrease in stability occurs only when nσ² ≈ 1 for brain networks with Gaussian distributions of the excitatory and inhibitory gains.
Such a distribution would be expected if the gain values fluctuated around their mean values over time. In the more general case where the distributions of the excitatory and inhibitory gains are not Gaussian, the values of σ_e and σ_i could have a larger impact on stability. However, in all cases a brain network is more likely to be stable if σ_e = σ_i = 0, i.e., if every excitatory connection has the same positive gain μ_e and every inhibitory connection has the same negative gain μ_i. This implies that for a random brain network to have multiple marginally stable modes with large excitatory and inhibitory gains, stability constrains σ_e and σ_i to be zero. In Section 4.4 the average net excitatory input gain into a population of a random network was found to be a maximum if the network has the largest possible values of nγ and nσ² allowed by stability, together with particular values of p_i, μ_e, and μ_i. These parameters satisfy p_i < 0.5 and μ_e = |μ_i|, and can be expressed in terms of n and p. Note that, in comparison, the proportion of neural connections in the brain that are inhibitory is approximately 0.2 [4]. These results show that to maximize the average excitatory gain of a brain network, the variance of the excitatory and inhibitory connection gains must be zero, and particular values are required for the proportion of inhibitory connections, the average excitatory strength, and the average inhibitory strength; each of these parameters depends on the number of populations and the number of connections. Furthermore, in Section 4.4 it was shown that the maximum values of the total excitatory gain G_E and total inhibitory gain G_I into a population allowed by stability occur when G_E and G_I are "balanced" with respect to the mean total input gain nγ (which must be less than one for the network to be stable). However, for finite networks of a size similar to experimentally determined networks, G_E and |G_I| have their maximum values when nγ < 1 rather than at nγ = 1.
Note that if nσ² is set to the critical value, nγ has a minimal effect on the dynamics of the network as long as it is less than one. This is because the bulk distribution of such a network overlaps the principal distribution, and the eigenvalue distribution of the network is circular in the complex plane with multiple marginally stable eigenvalues. This suggests that for a finite brain network to satisfy the competing "pressures" of being marginally stable and having maximum G_E (so that it is adaptable and responsive to external stimuli), nγ must be strictly less than one. For comparison, studies of the human corticothalamic system have estimated the total excitatory and inhibitory gains into the excitatory and inhibitory neural populations of the human cortex [39]. For these neural populations the gain from the excitatory population is approximately 6.8, the gain from the inhibitory population is approximately −8.1, and the excitatory gain from thalamic relay neurons is between 1 and 2. This gives a total input gain between −0.3 and 0.7, and shows that cortical neural populations are approximately balanced with respect to a small value less than one, while having large G_E and |G_I|, as predicted by the results in Section 4.4. Overall, a random brain network with excitatory and inhibitory connections and parameters that maximize the excitatory and inhibitory input into each population while keeping the network stable almost certainly has a circular spectrum in the complex plane, is marginally stable, has balanced excitatory and inhibitory inputs into each population, and has multiple marginally stable modes that can be activated by an external stimulus. Such a network will be responsive to external stimuli and have a wide range of flexible, adaptable, and complex behavior.
Acknowledgments

This work was supported by the Australian Research Council and the Westmead Millennium Foundation.
References

[1] S. Boccaletti, V. Latora, Y. Moreno, D.U. Hwang, Complex networks: structure and dynamics, Phys. Rep. 424 (2006) 175–308.
[3] V.E. Bondarenko, A simple neural network model produces chaos similar to the human EEG, Phys. Lett. A 196 (1994) 195–200.
[4] V. Braitenberg, A. Schüz, Anatomy of the Cortex: Statistics and Geometry, Springer, Berlin, 1991.
[5] M. Breakspear, Nonlinear phase desynchronization in human electroencephalographic data, Hum. Brain Mapp. 15 (2002) 175–198.
[6] M. Breakspear, J.R. Terry, K.J. Friston, Modulation of excitatory synaptic coupling facilitates synchronization and complex dynamics in a nonlinear model of neuronal dynamics, Neurocomputing 52–54 (2003) 151–158.
[7] M. Breakspear, J.A. Roberts, J.R. Terry, S. Rodrigues, N. Mahant, P.A. Robinson, A unifying explanation of primary generalized seizures through nonlinear brain modeling and bifurcation analysis, Cereb. Cortex 16 (2006) 1296–1313.
[8] M. Brede, S. Sinha, Assortative mixing by degree makes a network more unstable, arXiv:cond-mat/0507710, 2005.
[9] C.L. Buckley, S. Bullock, N. Cohen, Timescale and stability in adaptive behaviour, in: M.S. Capcarrère, et al. (Eds.), Advances in Artificial Life: Eighth European Conference, ECAL, Springer, Berlin, Heidelberg, 2005, pp. 292–301.
[10] C. Cherniak, Component placement optimization in the brain, J. Neurosci. 14 (1994) 2418–2427.
[11] D.B. Chklovskii, T. Schikorski, C.F. Stevens, Wiring optimization in cortical circuits, Neuron 34 (2002) 341–347.
[13] A. Crisanti, G. Paladin, A. Vulpiani, Products of Random Matrices in Statistical Physics, Springer, Berlin, 1993.
[16] D.J. Felleman, D.C. Van Essen, Distributed hierarchical processing in the primate cerebral cortex, Cereb. Cortex 1 (1991) 1–47.
[17] J. Feng, V.K. Jirsa, M. Ding, Synchronization in networks with random interactions: theory and applications, Chaos 16 (2006) 015109/1–21.
[18] Z. Füredi, J. Komlós, The eigenvalues of random symmetric matrices, Combinatorica 1 (1981) 233–241.
[19] R. Gray, P.A. Robinson, Stability and spectra of randomly connected excitatory cortical networks, Neurocomputing 70 (2007) 1000–1012.
[20] R. Gray, P.A. Robinson, Stability and synchronization of random brain networks with a distribution of connection strengths, Neurocomputing 71 (2008) 1373–1387.
[21] T. Guhr, A. Müller-Groeling, H.A. Weidenmüller, Random matrix theories in quantum physics: common concepts, Phys. Rep. 299 (1998) 189–425.
[22] C.C. Hilgetag, Anatomical connectivity defines the organization of clusters of cortical areas in the macaque monkey and the cat, Phil. Trans. R. Soc. London B 355 (2000) 91–110.
[23] C.C. Hilgetag, M.A. O'Neill, M.P. Young, Hierarchical organization of macaque and cat cortical sensory systems explored with a novel network processor, Phil. Trans. R. Soc. London B 355 (2000) 71–89.
[24] L.M. Hively, V.A. Protopopescu, Timely detection of dynamical change in scalp EEG signals, Chaos 10 (2000) 864–875.
[26] J.B. Jouve, P. Rosenstiehl, M. Imbert, A mathematical approach to the connectivity between the cortical visual areas of the macaque monkey, Cereb. Cortex 8 (1998) 28–39.
[28] M. Kaiser, C.C. Hilgetag, Nonoptimal component placement, but short processing paths, due to long-distance projections in neural systems, PLoS Comput. Biol. 2 (2006) 805–815.
[30] R.M. May, Will a complex system be stable?, Nature 238 (1972) 413–414.
[31] M.L. Mehta, Random Matrices, second ed., Academic Press, New York, 1991.
[34] R.J. Prill, P.A. Iglesias, A. Levchenko, Dynamic properties of network motifs contribute to biological network organization, PLoS Biol. 3 (2005) 1881–1892.
[38] P.A. Robinson, C.J. Rennie, D.L. Rowe, Dynamics of large-scale brain activity in normal arousal states and epileptic seizures, Phys. Rev. E 65 (2002) 041924/1–9.
[39] P.A. Robinson, C.J. Rennie, D.L. Rowe, S.C. O'Connor, Estimation of multiscale neurophysiologic parameters by electroencephalographic means, Hum. Brain Mapp. 23 (2004) 53–72.
[41] P.A. Robinson, C.J. Rennie, J.J. Wright, P.D. Bourke, Steady states and global dynamics of electrical activity in the cerebral cortex, Phys. Rev. E 58 (1998) 3557–3571.
[42] P.A. Robinson, C.J. Rennie, J.J. Wright, H. Bahramali, E. Gordon, D.L. Rowe, Prediction of electroencephalographic spectra from neurophysiology, Phys. Rev. E 63 (2001) 021903/1–18.
[44] J.W. Scannell, C. Blakemore, M.P. Young, Analysis of connectivity in the cat cerebral cortex, J. Neurosci. 15 (1995) 1463–1483.
[45] H.J. Sommers, A. Crisanti, H. Sompolinsky, Y. Stein, Spectrum of large random asymmetric matrices, Phys. Rev. Lett. 60 (1988) 1895–1898.
[46] O. Sporns, D.R. Chialvo, M. Kaiser, C.C. Hilgetag, Organization, development and function of complex brain networks, Trends Cogn. Sci. 8 (2004) 418–425.
[47] O. Sporns, R. Kötter, Motifs in brain networks, PLoS Biol. 2 (2004) 1910–1918.
[48] O. Sporns, G. Tononi, G. Edelman, Theoretical neuroanatomy: relating anatomical and functional connectivity in graphs and cortical connection matrices, Cereb. Cortex 10 (2000) 127–141.
[49] O. Sporns, J.D. Zwi, The small world of the cerebral cortex, Neuroinformatics 2 (2004) 145–162.
[50] C.J. Stam, J.P.M. Pijn, P. Suffczynski, F.H. Lopes da Silva, Dynamics of the human alpha rhythm: evidence for non-linearity?, Clin. Neurophysiol. 110 (1999) 1801–1813.
[51] M. Timme, T. Geisel, F. Wolf, Speed of synchronization in complex networks of neural oscillators: analytic results based on random matrix theory, Chaos 16 (2006) 015108.
[53] C. van Vreeswijk, H. Sompolinsky, Chaos in neuronal networks with balanced excitatory and inhibitory activity, Science 274 (1996) 1724–1726.
[54] E.A. Variano, J.H. McCoy, H. Lipson, Networks, dynamics, and modularity, Phys. Rev. Lett. 92 (2004) 188701/1–4.
[55] E. Wigner, Random matrices in physics, SIAM Rev. 9 (1967) 1–23.
[56] J.J. Wright, P.A. Robinson, C.J. Rennie, E. Gordon, P.D. Bourke, C.L. Chapman, N. Hawthorn, G.J. Lees, D. Alexander, Towards an integrated continuum model of cerebral dynamics: the cerebral rhythms, synchronous oscillation and cortical stability, Biosystems 63 (2001) 71–88.
[57] M. Young, The architecture of visual cortex and inferential processes in vision, Spat. Vision 13 (2000) 137–146.

Richard Gray completed his Ph.D. with Professor Peter Robinson in theoretical physics and neuroscience at the University of Sydney, Australia. He is currently working as a Senior Research Assistant at the National Centre in HIV Epidemiology and Clinical Research in Australia, modeling the spread of infectious diseases. His research interests range from theoretical neuroscience to quantitative medicine and public health.
Peter Robinson received his Ph.D. in Theoretical Physics from the University of Sydney in 1987, then worked as a Research Associate at the University of Colorado at Boulder until 1990. He then returned to Australia, joining the Permanent Faculty of the University of Sydney in 1994, and obtaining a chair in 2000. He is currently an Australian Research Council Federation Fellow working on topics ranging from Neuroscience to Space Physics and Plasma Physics.