Criticality and avalanches in neural networks


Chaos, Solitons & Fractals 55 (2013) 80–94

Contents lists available at SciVerse ScienceDirect

Chaos, Solitons & Fractals Nonlinear Science, and Nonequilibrium and Complex Phenomena journal homepage: www.elsevier.com/locate/chaos

Marzieh Zare, Paolo Grigolini *

Center for Nonlinear Sciences, University of North Texas, P.O. Box 311427, Denton, TX 76203-1427, United States


Article history: Available online 17 June 2013

Abstract: Experimental work, both in vitro and in vivo, reveals the occurrence of neural avalanches with an inverse power-law distribution in size and time duration. These properties are interpreted as an evident manifestation of criticality, thereby suggesting that the brain is a complex system operating near criticality: an attractive theoretical perspective that, according to Gerhard Werner, may help to shed light on the origin of consciousness. However, a recent experimental observation shows no clear evidence for power-law scaling in the awake and sleeping brain of mammals, casting doubt on the assumption that the brain works at criticality. This article rests on a model proposed by our group in earlier publications to generate neural avalanches with time duration and size distributions matching the experimental results on neural networks. We now refine the analysis of the time distance between consecutive firing bursts and observe the deviation of the corresponding distribution from Poisson statistics as the system moves from the non-cooperative to the cooperative regime. In other words, we make the assumption that the genuine signature of criticality may emerge from temporal complexity rather than from the size and time duration of avalanches. We argue that the Mittag–Leffler (ML) function, a generalized exponential, is a satisfactory indicator of temporal complexity, namely of the occurrence of non-Poisson and renewal events. The assumption that the onset of criticality corresponds to the birth of renewal non-Poisson events establishes a neat distinction between the ML regime and the regime generating power-law avalanches. We find that temporal complexity uncontaminated by periodicity occurs in a narrow range of values of the control parameter, at which the information transfer from one network to another becomes maximal.
We argue that if this enhancement of information transport is interpreted as a signature of criticality, then power-law avalanches are a manifestation of supercriticality rather than criticality. This leads us to conclude that the recent experiment showing no evidence of power-law avalanches in the brain of mammals does not conflict with the hypothesis that the brain works at criticality. © 2013 Elsevier Ltd. All rights reserved.

* Corresponding author. E-mail address: [email protected] (P. Grigolini).
http://dx.doi.org/10.1016/j.chaos.2013.05.009

1. Introduction

The brain is a complex dynamic system whose function is supposed to result from the interaction of many components. Although explaining its overall behavior in terms of the underlying mechanisms is a challenging problem, very difficult, if not impossible, to solve [1], some empirical results, including avalanche analysis [2,3], dynamic range [4], information storage [4–6], information transmission

[7,8] and computational power [9–13], suggest that the brain may work near criticality. This is the main topic of this special issue honoring Gerhard Werner for his contributions to neurophysiology and for his last paper on the subject [14]. For Werner, criticality and renormalization group theory, the main theoretical tool to study criticality [15], play a fundamental role in brain function, and further theoretical advances in this direction may lead to viewing consciousness as a criticality-generated physical phenomenon. What is the connection between criticality and neural avalanches? Similar to other complex phenomena such as


earthquakes, snow avalanches and forest fires, avalanches in neural networks are found to follow the popular scale-free condition. The size distribution density p(s), for example, turns out to be ∝ s^{−ν}, with ν being the system-specific exponent, found to be 1.5 in neural networks [2]. The corresponding cumulative distribution P(s) has, of course, the form ∝ s^{−(ν−1)}. According to the popular view of Bak et al. [16], criticality is realized spontaneously by complex systems rather than requiring the fine tuning of a control parameter to a critical value. They proposed the term self-organized criticality (SOC) for an operation of the system at criticality that generates the power-law behavior observed in natural phenomena. The authors of Ref. [17] established a connection between SOC and the theory of branching processes, remarkably yielding the critical index ν = 3/2. This result makes the connection between power laws and SOC so convincing that the authors of Ref. [4] define the control parameter K in such a way that it takes the value 1 when the avalanche size distribution p(s) gets the power-law form p(s) ∝ 1/s^ν, with ν = 1.5. In spite of these attractive results, the connection between criticality and inverse power laws is still under debate, especially in the case of in vivo experiments [18]. This is so because testing SOC in vivo requires subsampling, while SOC theories assume full sampling [18]. Furthermore, it is important to point out that the existence of a power law does not prove that the corresponding generator system is at criticality, since non-critical systems may also produce power laws [19,20]. The recent findings of Destexhe's group [21] cast some doubts on the very attractive conjecture that the brain operates at criticality.
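The scale-free condition discussed above can be made concrete with a short numerical sketch. The snippet below draws avalanche sizes from a density p(s) ∝ s^{−ν} with ν = 1.5 by inverse-transform sampling and recovers the exponent with the standard maximum-likelihood estimator; the function names and the lower cutoff s_min are illustrative choices, not part of the original model.

```python
import math
import random

def sample_avalanche_sizes(n, nu=1.5, s_min=1.0, seed=0):
    # Inverse-transform sampling from p(s) proportional to s^(-nu), s >= s_min:
    # s = s_min * u^(-1/(nu-1)) with u uniform on (0, 1].
    rng = random.Random(seed)
    return [s_min * (1.0 - rng.random()) ** (-1.0 / (nu - 1.0)) for _ in range(n)]

def mle_exponent(sizes, s_min=1.0):
    # Maximum-likelihood (Hill) estimator of the power-law exponent nu.
    n = len(sizes)
    return 1.0 + n / sum(math.log(s / s_min) for s in sizes)
```

With a large sample the estimated exponent should be close to the input value 1.5, with a statistical error of order (ν − 1)/√n.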
The close examination of avalanche scaling carried out by these authors using rigorous statistical analysis, based on multi-electrode ensemble recordings in cat, monkey and human cerebral cortex, during both wakefulness and sleep, did not confirm power-law scaling of neural avalanches. These results apparently contradict Werner's hypothesis that the brain works at criticality, if the inverse power law of the avalanche size distribution is assumed to be a fair criticality indicator. The aim of this article is to show that the size distribution density of avalanches may not be an appropriate indicator of criticality. In the more conventional case of the second-order phase transition [22–24], the occurrence of criticality is properly signaled by the temporal properties of the system's mean field. The mean field fluctuations are of Poisson type in both the subcritical and supercritical regimes, and at criticality depart from the Poisson condition for an extended time interval. The critical value of the control parameter [23] (in this case K = 1) corresponds to the emergence of cooperation between many interacting units and generates a fluctuating mean field with maximal correlation time at criticality. In the case of neural dynamics, a proper indicator of temporal complexity seems to be the time distance τ between two consecutive neuron bursts, whose distribution density is denoted by the symbol ψ(τ). The corresponding cumulative distribution is denoted by the symbol Ψ(τ). To discuss this form of criticality indicator, we use the same neural model as that of earlier publications [25–27]. The authors of [25,26], increasing the cooperation strength, found a smooth transition from the non-cooperative to the cooperative condition, significantly departing from the conventional second-order phase transition. For this reason these authors made the conjecture that this neural model may afford an interesting example of extended criticality [28]. Although both temporal complexity and the avalanche power-law distribution have been discussed in depth in these earlier publications, no adequate attention was devoted to the crucial fact that the two complexity indicators, P(s) and Ψ(τ), signal the emergence of complexity at different values of the cooperation parameter. In this article, we show that with increasing cooperation strength, temporal complexity emerges earlier than avalanche size complexity, and the inverse power-law distribution with ν = 1.5, fitting the important experimental observation of [2], belongs to a physical regime strongly influenced by periodicity, thereby departing from the renewal condition that, according to Refs. [22–24], should be a reliable signature of criticality. To confirm that temporal complexity is a more reliable indicator of criticality, we study the transmission of information from one neural network to another and find that its efficiency is maximal when the temporal complexity indicator signals criticality, while the system is not yet critical according to the avalanche size indicator. This result leads us to conclude that the work of Ref. [21] may be used to support rather than weaken Werner's hypothesis that the brain works near criticality. In fact, the complexity of the avalanche size distribution may be an indicator of an unhealthy rather than a healthy brain. The outline of this article is as follows. In Section 2 we illustrate a form of temporal complexity that may signal the emergence of the scale-free behavior advocated by the theoretical work of Refs. [22–24] as a proper indicator of criticality, even when the fat and slow tails are not yet clearly visible.
This form of complexity is called Mittag–Leffler (ML) complexity and we argue that, although overlooked so far, it may be a very appropriate criticality indicator. To make this article as self-contained as possible, in Section 3 we concisely illustrate the cooperative neural model already used in the earlier work [25–27] and we show how the survival probability Ψ(t) depends on the cooperation strength K. Section 4 is devoted to a detailed analysis of the region where criticality is expected to occur. In Section 5 we discuss two different kinds of power-law truncation, corresponding to whether periodicity is not yet significantly active or it is, respectively. Section 6 illustrates the numerical aging experiment that we adopt to prove the genuinely renewal nature of criticality. In Section 7 we illustrate a numerical experiment of information transmission from one neural network to another, showing the maximal efficiency of information transport at the onset of temporal complexity. The numerical experiment of Section 8 yields the power-law distribution of avalanche size with exponent 1.5 and of avalanche time duration with exponent 2 at a value of the cooperation parameter K generating significant periodicity, thereby forcing us to conclude, on the basis of the earlier sections, that this is a manifestation of the supercritical rather than the critical regime. Finally, in Section 9 we make concluding remarks.




2. Mittag–Leffler function, criticality and extended definition of temporal complexity

Metzler and Klafter [29] have pointed out the importance of the Mittag–Leffler (ML) survival probability for the field of complexity. In fact, the ML function settles the apparent conflict between the advocates of stretched-exponential and inverse power-law survival probabilities as important signs of complexity. This is so because the ML function is the generalized exponential function [29]:

$$E_\alpha(z) = \sum_{n=0}^{\infty} \frac{z^n}{\Gamma(1+n\alpha)}, \tag{1}$$

with α being an arbitrary positive real number. In the case of α < 1, this function is interpreted as a survival probability in time and is frequently written as E_α(−(λt)^α). In the short-time regime and in the long-time limit the ML function reads

$$E_\alpha\!\left(-(\lambda t)^{\alpha}\right) \approx \exp\!\left(-(\lambda t)^{\alpha}\right) \tag{2}$$

and

$$E_\alpha\!\left(-(\lambda t)^{\alpha}\right) \propto \frac{1}{t^{\alpha}}, \tag{3}$$

respectively. The concept of survival probability is connected to the theoretical perspective of a complex system generating events in time. A complex system at criticality generates events that are referred to as critical events. The time interval between two consecutive crucial events, denoted as a laminar region, is assigned the value +1 or −1 according to a coin-tossing prescription [30]. At time t = 0 the system is prepared by selecting all the realizations with an event occurring at that time and positive laminar regions. As a consequence, the probability that no event occurs up to time t, Ψ(t), is properly termed the survival probability, and the function ψ(t) ≡ −dΨ(t)/dt is called the waiting time distribution density. We adopt the symbols Ψ_ML(t) and ψ_ML(t) to denote the ML survival probability and the corresponding waiting time distribution, respectively. The work of Ref. [31] shows that the ML function emerges in the case of the random growth of surfaces, with the power index α connected to the anomalous scaling parameter β established by Kardar, Parisi and Zhang [32] by means of the renormalization group theory through the simple relation

$$\alpha = 2\beta. \tag{4}$$

This suggests a connection between the ML function and criticality, and consequently with the focus of this special issue. It is convenient to quote the earlier work of Ref. [33], on the EEG dynamics of the human brain, where the stretched exponential of Eq. (2) was interpreted as the top of a ML iceberg, with the inverse power law hidden below the sea surface. A strong support to the conjecture of a connection between the ML function and criticality is given by the recent discovery that the ML function is universal [34]. In the literature, complexity is supposed to be associated with the deviation from the exponential prescription through an inverse power-law structure. For instance, a plausible form for the time distance between two criticality-induced consecutive events may correspond to the survival probability

$$\Psi(t) = \left(\frac{T}{t+T}\right)^{\alpha}, \tag{5}$$

with α < 1. The process is thought to be complex because for times t ≫ T it is identical to the non-integrable inverse power law 1/t^α. No attention is paid to the time region t ≪ T, which does not even show up if a log–log representation is adopted. However, the temporal complexity of systems generated by the cooperation of a finite number of components [22] is characterized by an exponential truncation at long times. In this article, we also discuss a periodicity-generated exponential truncation. This has the effect of setting a time limit on the inverse power-law regime, and consequently of making the analysis of the short-time regime more important for establishing even tenuous signs of cooperation. We refer to Eq. (5) as an example of an analytical representation of temporal complexity where the analysis of the short-time behavior cannot disclose the temporal complexity of the process, which clearly emerges in the long-time limit through the inverse power law 1/t^α: no clear signs of the power index α may emerge from the short-time analysis. The same argument applies to the ML complexity if the stretched-exponential regime of Eq. (2) is not extended enough in time. In fact, if we use the ML representation of Eq. (1) with λ₀ = 1/T, we do not find a significant departure from Eq. (5), as we can see by noticing that the Laplace transform of ψ_ML(t) of Eq. (1) is

$$\hat{\psi}_{ML}(u) = \frac{1}{1+\left(\dfrac{u}{\lambda_0}\right)^{\alpha}}. \tag{6}$$

The adoption of the subscript 0 indicates that we are assuming that all the events are visible. The Laplace transform of the waiting time distribution density corresponding to Eq. (5) is

$$\hat{\psi}(u) = 1 - \Gamma(1-\alpha)\,(uT)^{\alpha} + \cdots \tag{7}$$

Thus, the waiting time distribution density corresponding to Eq. (5) becomes as close as possible to ψ_ML(t) by setting

$$\lambda_0 = \frac{1}{\Gamma(1-\alpha)^{1/\alpha}}\;\frac{1}{T}. \tag{8}$$
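As a sanity check on Eqs. (5)–(8), the sketch below computes the Laplace transform of the waiting time density of Eq. (5) numerically and compares it with the ML form of Eq. (6), using λ₀ from Eq. (8). The parameter values α = 0.75 and T = 1 are illustrative choices, not taken from the paper's fits.

```python
import math
from scipy.integrate import quad

ALPHA, T = 0.75, 1.0  # illustrative values
LAM0 = 1.0 / (math.gamma(1.0 - ALPHA) ** (1.0 / ALPHA) * T)  # Eq. (8)

def psi(t):
    # Waiting time density of Eq. (5): psi(t) = -dPsi/dt = alpha*T^alpha/(T+t)^(1+alpha)
    return ALPHA * T ** ALPHA / (T + t) ** (1.0 + ALPHA)

def psi_hat(u):
    # Numerical Laplace transform of psi(t)
    value, _ = quad(lambda t: math.exp(-u * t) * psi(t), 0.0, math.inf)
    return value

def psi_hat_ml(u):
    # Eq. (6) with lambda_0 fixed by Eq. (8)
    return 1.0 / (1.0 + (u / LAM0) ** ALPHA)
```

For small u the two transforms agree to the order displayed in Eq. (7); the agreement degrades at larger u, where the two functions are genuinely different.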

The emergence of the ML function becomes, on the contrary, visible if we make the assumption that not all the critical events are visible. This may correspond to neuron firings of very small intensity that may become visible only by increasing the number of neurons of the system [27]. Let us assume that the probability of recording a crucial event is P_S. It is shown in Ref. [34] that upon the time rescaling corresponding, in the Laplace domain, to

$$u' \equiv P_S^{-1/\alpha}\, u, \tag{9}$$

the waiting time distribution density of the visible events for P_S → 0 becomes

$$\hat{\psi}_V(u') = \frac{1}{1+\left(\dfrac{u'}{\lambda_0}\right)^{\alpha}}. \tag{10}$$

With no rescaling we obtain

$$\hat{\Psi}(u) = \frac{1}{u + \lambda^{\alpha}\, u^{1-\alpha}}, \tag{11}$$


with

$$\lambda = P_S^{1/\alpha}\, \lambda_0. \tag{12}$$

As a consequence, the stretched-exponential regime of the ML temporal complexity becomes surprisingly extended, and the inverse power law that is considered to be a reliable signature of complexity is shifted to the long-time regime, where it can be suppressed by the earlier mentioned truncation effects. The truncation effects, as mentioned in Section 1, will be extensively discussed in Section 5. In conclusion, temporal complexity is a condition referring to the time distance between two consecutive criticality-induced events, which is supposed to be given by a waiting time distribution density ψ(t) ∝ 1/t^{1+α} with α < 1, thereby making ⟨t⟩ = ∞ and the process non-ergodic [22]. This inverse power-law structure is realized at times t ≫ T, with T indicating the time duration of a regime where the complexity of the process is not yet perceived. If the critical events are recorded with small probability P_S, the waiting time distribution density becomes much more extended, and the definition of temporal complexity must be properly extended so as to include the survival probabilities with the Laplace transform of Eq. (11), which may look like stretched-exponential functions for extended time intervals. As done in the earlier work of Ref. [25], we establish the parameters λ and α by fitting the Laplace transform of the numerical survival probability by means of the function Ψ̂(u) of Eq. (11), with the additional caution of checking the accuracy of the fitting procedure with the help of some efficient algorithms generating the ML function.

3. Model description

The model used here is the popular leaky integrate-and-fire model [35]. The dynamics of a single neuron are expressed by

$$\frac{dx}{dt} = -\gamma\, x(t) + S + \sigma\, \xi(t), \tag{13}$$

where x is the membrane potential, 1/γ is the membrane time constant of the neuron, and S is proportional to a constant input current. To make it possible for the neuron potential to reach the threshold value x = 1, we set the condition γ < S. ξ(t) is a random fluctuation taking with equal probability either the value ξ(t) = 1 or ξ(t) = −1, so that σ is the intensity of the noise affecting the deterministic dynamics of Eq. (13). As pointed out in the earlier work [27], temporal complexity requires a noise intensity σ with a suitably large value. All these variables are dimensionless. When x(t) reaches the threshold value, the neuron fires and goes back to the resting state, x = 0. In the numerical calculations of this article, we adopt two different initial conditions: in the former, all the neurons are assigned the rest state x = 0, and in the latter, the initial value of x is a random value chosen between 0 and 1. The former condition can be realized physically by assigning to K a value so large as to produce exact synchronization, and by observing the system immediately after the global firing. At this time, we assign a different and much smaller value to K, close to the critical value Kc that,


as we shall see, is Kc ≈ 0.0075. Then there is an extended transition regime through which the system has to realize the cooperation property corresponding to its criticality. In Section 7 we shall argue that this condition, the former condition, makes it easier for the system to perceive the directions of another network already active in the cooperation state with Kc ≈ 0.0075, namely in a state that, according to the analysis of this article, is a genuinely critical state. The adoption of the latter condition makes the system realize quickly its own form of criticality. In Section 7 we shall find that when this initial condition is assigned to the driven network, the transmission of information from the driving to the driven network is affected by statistical inaccuracy. We use γ = 0.001, S = 0.001005, and σ = 0.0001. The system rests on the action of N neurons. All the calculations of this paper, except for special cases explicitly mentioned, are done with N = 100. We make these N neurons interact with the following prescription: we assume that the firing of one neuron increases the position of all the neurons linked to it from the value x(t) given by Eq. (13) to x(t) + K, with 0 < K < 1. Note that, as in Ref. [27], we assume that the neurons are the nodes of a two-dimensional lattice with periodic boundary conditions. Thus, the neural network of this paper is a square of size L = 10. Notice that the cooperation of the neural model adopted in this article fits the locality constraint: each neuron interacts only with its four neighbors. As we shall see, criticality corresponds to bypassing the locality condition: at criticality the cooperation between two neurons is possible, regardless of their Euclidean distance.
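The locality prescription above, four nearest neighbors on a 10 × 10 torus, with each firing pushing the linked neurons up by K, can be sketched as follows; the function names and the row-major site indexing are illustrative conventions, not taken from the paper.

```python
def neighbors(i, L=10):
    # Four nearest neighbors of site i on an L x L lattice with
    # periodic boundary conditions; sites are indexed row-major, 0..L*L-1.
    r, c = divmod(i, L)
    return [((r - 1) % L) * L + c, ((r + 1) % L) * L + c,
            r * L + (c - 1) % L, r * L + (c + 1) % L]

def fire(x, i, K):
    # When neuron i reaches threshold it fires: it is reset to the rest
    # state x = 0 and each linked neuron is pushed from x to x + K.
    x[i] = 0.0
    for j in neighbors(i, int(len(x) ** 0.5)):
        x[j] += K
```

On the torus the neighbor relation is symmetric, which is what makes the coupling local and bidirectional.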
Although in the supercritical regime the locality breakdown becomes more evident than in the critical regime, at criticality the locality breakdown has the beneficial role of making maximal the transport of information from one network to another, as shown in Section 7. In the absence of cooperation, K = 0, the time distance between two consecutive firings of the same neuron reads [36]

$$T_P = \frac{1}{\gamma}\,\ln\!\left(\frac{1}{1-\frac{\gamma}{S}}\right). \tag{14}$$

The probability that no event occurs up to time t, namely the survival probability Ψ(t), has the exponential form

$$\Psi(t) = \exp(-Gt). \tag{15}$$

Due to the lack of correlation, the rate of firing of a set of N neurons is given by G = N/⟨τ⟩, where ⟨τ⟩ denotes the mean time it takes a non-interacting neuron to move from x = 0 to x = 1. For the weak value of σ that we are adopting, ⟨τ⟩ ≈ T_P. It is interesting to notice that the exponential function of Eq. (15) corresponds to the ML function with α = 1 and λ = G. This suggests that temporal complexity becomes evident when the parameter α, as determined by means of the fitting procedure, becomes significantly smaller than 1. We now explain, using intuitive arguments, why the cooperative neural model leads to the emergence of the ML function and why the time distance between two consecutive firings is a sensitive indicator of cooperation, much more sensitive than the avalanche size distribution.
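A minimal sketch of the single-neuron dynamics of Eqs. (13) and (14) is given below: Euler integration of Eq. (13) with the stated parameters (γ = 0.001, S = 0.001005) reproduces the deterministic interspike time T_P of Eq. (14) when the noise is switched off. The function names and the step size dt are illustrative choices, not taken from the paper.

```python
import math
import random

GAMMA, S, SIGMA = 0.001, 0.001005, 0.0001  # parameter values from the text

def tp_theory(gamma=GAMMA, s=S):
    # Eq. (14): interspike time of a single noiseless neuron at K = 0
    return (1.0 / gamma) * math.log(1.0 / (1.0 - gamma / s))

def first_passage_time(dt=0.1, sigma=0.0, seed=1):
    # Euler integration of Eq. (13) from the rest state x = 0 up to the
    # firing threshold x = 1; xi(t) is a dichotomous +/-1 fluctuation.
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    while x < 1.0:
        xi = rng.choice((-1.0, 1.0))
        x += dt * (-GAMMA * x + S + sigma * xi)
        t += dt
    return t
```

With σ = 0 the simulated first-passage time should match Eq. (14) to within the discretization error; with σ > 0 the interspike times fluctuate around T_P.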


These intuitive arguments are based on Fig. 1. In panel a we see the sequence of consecutive firings upon increase of the cooperation parameter K. We see that at K = 0.002 the first signs of cooperation appear. There are multiple firings and there are dense and diluted regions. The dense regions correspond to neurons that try to fire at the same time. As a consequence, the time distance between two consecutive firings is smaller than the Poisson prediction 1/G. Although the corresponding red curve of panel b seems to be almost indistinguishable from an exponential function, the careful analysis of this article, yielding the results of Fig. 2 (panel a), proves that at short times the survival probability Ψ(t) has a slightly faster decay than the exponential function, while at larger times it is slightly slower than the exponential function. This effect becomes evident at K = 0.0075, where the green curve of panel b shows a decay faster at short times, and slower at large times, than the virtual exponential function corresponding to the red curve of the same panel. At K = 0.0075, which is proved to correspond to genuine criticality, the avalanches become visible, but the adoption of the time distance between two consecutive firings as an indicator of cooperation makes it possible for us to perceive cooperation much earlier. It is important to notice the similarity between the avalanche patterns of K = 0.0075 and the distinctly periodic nature of K = 0.013, suggesting a connection between avalanches and periodicity, whereas the patterns of K = 0.002 indicate a deviation from the homogeneous Poisson condition, but not necessarily time periodicity. The patterns of K = 0.013 require some time to become distinctly periodic, moving from the initial random

distribution assigned to the system. After this transient, the remaining complexity is the power-law distribution of avalanche sizes that will be illustrated by Fig. 9 (panel a). The fact that for small values of K, K ≈ 0.002, the survival probability Ψ(t) is slightly different from the exponential form of Eq. (15) affords an intuitive explanation of why cooperation makes the ML function emerge. The emergence of the fat tail would create a conflict with the system's periodicity, whereas a weak cooperation strength is expected to have a weak effect, namely, turning the exponential function into a slightly stretched exponential function, which is the iceberg top of a ML function [33]. As earlier remarked, increasing K has the effect of decreasing Ψ(t) in the short-time region and of increasing it in the long-time region, as a mere consequence of cooperation-induced pattern formation. There are many neurons that tend to fire at the same time, thereby generating avalanches of increasing intensity, while making larger and larger the time distance between different neuron bursts. The stretched exponential is a signature of the neurons firing within the same burst, while the inverse power law refers to the time distance between different bursts. A weak cooperation is not efficient enough to significantly trigger the second effect. This is why the neural model yields the ML survival probability. For α = 1 the stretched exponential is an ordinary exponential for t ranging from t = 0 to t = ∞, and the inverse power law of Eq. (3) should emerge at t = ∞, thereby remaining totally invisible, regardless of the accuracy of the numerical simulation. When α < 1 but very close to 1, the stretched exponential is turned into the inverse power law 1/t^α at very large but finite values of t. However, in the

Fig. 1. Panel a: Spiking pattern of the neural model for different values of the cooperation parameter K. Panel b: Survival probability Ψ(t) for different values of K. The green curve illustrates Ψ(t) with K = 0.0075, the cooperation strength that according to the analysis of this article corresponds to criticality; it is compared to the survival probability of a small-cooperation case (red curve) and to a survival probability significantly affected by cooperation-induced periodicity (black curve). Panel c: The green curve of panel b is shown to be a ML function, truncated by neural fluctuations. The red curve is the stretched exponential exp(−(λt)^α) and the blue line is the inverse power law 1/t^α. In both cases α = 0.75. (For interpretation of the references to colour in this figure caption, the reader is referred to the web version of this article.)


Fig. 2. Panel a: α as a function of K. In region I, the numerical survival probability is very close to the exponential function of Eq. (15). Region II denotes the regime where the ML function appears with negligible periodicity contamination. The gray region overlapping the first part of region II suggests that there is no abrupt change from the exponential to the ML regime. The vertical arrow indicates the portion of region II where the inverse power law 1/t^α distinctly emerges. Region III begins almost immediately after the critical value Kc = 0.0075 and is the region significantly influenced by periodicity. Panel b: the function C(K) of Eq. (17). The regions I, II, III and the gray region have the same meaning as in panel a. The insert of panel a serves the purpose of supporting our conviction that in the ideal case of an infinitely large number of units, N = ∞, the power index α may make an abrupt drop from α = 1 to α ≈ 0.75.

large-time region the inverse power law is hidden by fluctuations and statistical inaccuracies. The noise-induced truncation makes it impossible to see the inverse power-law tail that is usually considered to be a genuine manifestation of complexity. For slightly larger values of K, corresponding to smaller values of α, the inverse power law emerges, as shown by the green curve of Fig. 1 (panel b). It has to be pointed out that in the case of a second-order phase transition, the cooperative interaction between a finite rather than infinite number of units has the effect of annihilating the phase-transition singularity: the mean field is described by a curve making a fast but smooth change in the region around the critical value of the control parameter [37]. In the case of the neural model of this article, according to the theoretical and numerical arguments of the next section, the phase-transition singularity is replaced by a region with a finite size. The green curve of panel b of Fig. 1 is the center of a bunch of curves, not shown, with K ranging from K ≈ 0.005 to K ≈ 0.00875. A further increase of the control parameter K breaks the ML temporal complexity, as shown by the black curve of Fig. 1 (panel b). The power-law tail emerging at the end of the exponential region is not the ML tail, which should be much faster: it is an effect of periodicity. For even larger values of K the slope of this tail vanishes, the tail is horizontal, and the survival probability abruptly falls to zero at a time very close to T_P of Eq. (14). According to the statistical analysis of this article, the green curve of panel b of Fig. 1 is the genuine form of temporal complexity generated by this neural model. It is important to stress that the adoption of the fitting procedure based on the Laplace transform of the survival probability, Eq. (11), yields the typical behavior of a ML function, as clearly shown in panel c of Fig. 1, with the short- and long-time regions described by the stretched exponential exp(−(λt)^α) and the inverse power law 1/t^α, respectively, with α ≈ 0.75. The exponential truncation of the inverse power law at large times is due to the fact that K ≈ 0.0075

is not yet large enough to totally quench fluctuations. As discussed in the next sections, a further increase of K has that effect, at the price, however, of converting the noise-induced exponential truncation into a periodicity-induced truncation.

4. Exploring in detail the region where the phase transition occurs

The numerical results of earlier work [25,26] have established that the effects of cooperation are perceived immediately, with a finite value of the cooperation parameter K of even extremely small intensity. This led these authors to make the conjecture that this neural model may be a manifestation of the extended criticality advocated by Longo et al. [28]. In this article we propose a somewhat different interpretation, based on the observation that also a well-known second-order phase transition generates more extended cooperation properties due to the action of a finite number of units. We cannot determine the critical value Kc by using, for instance, the method of Binder's cumulants [38], because this method is usually applied to Ising-like phase transitions, see for instance [39]. The transition to cooperation of the neural model of this article does not seem to be an ordinary second-order phase transition. As done in the earlier work [25], we determine the parameter α of the ML function of Eq. (1) with a fitting procedure in the Laplace domain, using Eq. (11). By means of the same fitting procedure, we evaluate also the parameter λ of Eq. (1) and we determine the quantity

$$g(K) = \frac{\alpha(K)}{\lambda(K)}. \tag{16}$$

We make an attempt at defining the order parameter of this process by means of C(K), defined by

$$C(K) \equiv \int_0^K dk\, g(k). \tag{17}$$


M. Zare, P. Grigolini / Chaos, Solitons & Fractals 55 (2013) 80–94

This leads us to establish the results of Fig. 2. The numerical results of Fig. 2 suggest the following interpretation. In the ideal case of a very large number of interacting units, $\alpha = 1$ for $0 \le K \le K_c$. At $K = K_c$ the value of $\alpha$ should drop to a value significantly smaller than 1. The inset of Fig. 2 (panel a) shows that, as expected, the decrease of $\alpha$ becomes faster and faster with increasing N. It suggests that for a very large number of interacting units $\alpha$ may drop to $\alpha_c \approx 0.75$, which corresponds to

$K_c \approx 0.0075.$  (18)

Using the definition of order parameter of Eq. (17), we obtain the result of panel b of Fig. 2, which is qualitatively similar to the mean field of the Ising-like model of Ref. [22] as a function of the corresponding cooperation strength. However, the nature of the supercritical region in the case of the neural model of this paper is completely different from that of Ref. [22]: the supercritical regime is strongly influenced by periodicity. To make it easier for the reader to understand the effect of periodicity on the ML relaxation, in Fig. 3 we quantify the role of periodicity by means of the property

$R(K) = E_{\alpha}\!\left(-(\lambda T_P)^{\alpha}\right),$  (19)

where $E_{\alpha}$ is the exact ML function, evaluated with the algorithm of [40]. In the limiting case of complete periodicity, the survival probability $\Psi(t) = 1$ and R(K) = 1. In Fig. 3 we see that for $K < K_c = 0.0075$, R(K) is negligible, and that it undergoes a fast increase when we move beyond $K_c$. This would be a convincing indication that criticality has the effect of triggering periodicity, if $K_c = 0.0075$ were proved to be the genuine criticality parameter. Establishing at which value the phase transition occurs is a difficult task, because at the moment of writing this paper a theoretical approach to criticality for cooperative models of this kind is not yet available. Thus, we devote Sections 5 and 6 to illustrating additional arguments supporting the conjecture of Eq. (18).
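As a concrete illustration, the periodicity indicator of Eq. (19) can be evaluated with a direct series expansion of the ML function. The sketch below is a minimal stand-in for the full algorithm of Ref. [40], adequate only for moderate arguments; the parameter values in the second function are illustrative, not fitted values from the paper.

```python
import math

def mittag_leffler(z, alpha, n_terms=120):
    """One-parameter Mittag-Leffler function via its power series,
    E_alpha(z) = sum_{n>=0} z**n / Gamma(alpha*n + 1).
    Accurate for moderate |z|; the full algorithm of Ref. [40] is
    needed for large arguments."""
    return sum(z**n / math.gamma(alpha * n + 1) for n in range(n_terms))

def periodicity_indicator(lam, alpha, T_P):
    """R(K) of Eq. (19): the ML survival probability evaluated at the
    periodicity time T_P, with lam and alpha the fitted parameters."""
    return mittag_leffler(-(lam * T_P) ** alpha, alpha)
```

Since $E_\alpha(-x)$ is completely monotone for $0 < \alpha \le 1$, the indicator always lies between 0 and 1, with values close to 1 signaling strong periodicity.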

5. Two distinct sources of power law truncation

In this section, we validate the accuracy of the fitting procedure based on Eq. (11) by running two algorithms generating the ML survival probability corresponding to the parameters $\alpha$ and $\lambda$ [40,41]. The adoption of this procedure leads us to conclude that the regions I and II of Fig. 2 can be safely interpreted as corresponding to Poisson and ML dynamics uncontaminated by periodicity. However, to reach this important conclusion we have to address the issue of the two different origins of power law truncation. We refer to these truncation sources as noise- and periodicity-induced tail truncation.

5.1. Noise-induced tail truncation

When the cooperation strength is not intense enough to force all the neurons to fire at the same time, but is sufficiently intense as to break the condition of total independence of the neurons from one another, the time distance between two consecutive firings is a random quantity that must significantly depart from both the Poisson and the periodic condition. If we focus only on the long-time limit characterized by the inverse power law property, an attractive dynamical generator of this form of temporal complexity is given by the Pomeau–Manneville map [42,43]

$x_{t+1} = T(x_t) = x_t + x_t^{1+1/\alpha}, \quad \mathrm{mod}\ 1.$  (20)

This map has been widely studied and is a remarkable example of intermittency emerging at criticality [24]. It is possible to make this discrete-time map compatible with a continuous-time description [43–45] by means of

$\dfrac{d}{dt}x = a_0\, x(t)^{1+1/\alpha}.$  (21)

Its solution for a trajectory moving from the initial condition $x_0 = m$ is [46]

$x(t) = \dfrac{m}{\left(1 - \dfrac{a_0}{\alpha}\, m^{1/\alpha}\, t\right)^{\alpha}}.$  (22)

When the trajectory x(t) reaches the boundary x = 1, it is injected back to a new initial condition $x_0 = m$, selected with uniform probability in the interval 0 < m < 1. Let us call $\tau$ the time necessary to move from the initial condition m to x = 1. Setting $x(\tau) = 1$ in Eq. (22), we obtain



$\tau = \dfrac{\alpha}{a_0}\left[\left(\dfrac{1}{m}\right)^{1/\alpha} - 1\right].$  (23)
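As a sanity check of Eq. (23), the waiting times it prescribes can be sampled directly and their survival probability compared with the expected $t^{-\alpha}$ tail. The sketch below uses illustrative values of $\alpha$ and $a_0$, not fitted values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, a0 = 0.75, 1.0            # illustrative parameters

# Waiting times of Eq. (23): tau = (alpha/a0) * ((1/m)**(1/alpha) - 1),
# with m drawn uniformly in (0, 1) at each reinjection.
m = rng.random(200_000)
tau = (alpha / a0) * ((1.0 / m) ** (1.0 / alpha) - 1.0)

# Empirical survival probability Psi(t) = Prob(tau > t).
taus = np.sort(tau)
psi = 1.0 - np.arange(1, taus.size + 1) / taus.size

# Analytically Psi(t) = (1 + a0*t/alpha)**(-alpha), so the log-log
# slope of the tail should be close to -alpha.
mask = (taus > 10.0) & (taus < 1000.0) & (psi > 0)
slope = np.polyfit(np.log(taus[mask]), np.log(psi[mask]), 1)[0]
print(f"tail exponent ~ {slope:.2f} (expected ~ {-alpha})")
```

The fitted tail exponent approaches $-\alpha$ from above as the fitting window moves to larger times, which is the behavior exploited in the Laplace-domain fits of this article.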

In order to confirm the validity of our fitting results, we use two algorithms generating the ML function [40,41]. We take the values of $\alpha$ and $\lambda$ found by means of our fitting procedure based on Eq. (11) and we run the two algorithms to generate a ML function to compare with the numerical results. Of particular interest is the algorithm of Ref. [41]. According to this algorithm

$\tau = \dfrac{1}{\lambda}\,\ln\!\left(\dfrac{1}{u}\right)\left[\dfrac{\sin(\alpha\pi)}{\tan(\alpha\pi m)} - \cos(\alpha\pi)\right]^{1/\alpha}.$  (24)

Fig. 3. Quantitative measure of periodicity for different values of the cooperation parameter K. We plot only the restricted region with K < 0.0125 to make it clear that R(K) begins its fast increase at K = 0.0075. For values of K larger than 0.0125 this fast increase is maintained.
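The prescription of Eq. (24) is straightforward to implement: each waiting time is built from two independent uniform random numbers u and m. The sketch below follows that prescription; the parameter values are illustrative.

```python
import numpy as np

def ml_waiting_times(alpha, lam, size, rng):
    """Draw waiting times whose survival probability is the Mittag-Leffler
    function E_alpha(-(lam*t)**alpha), following Eq. (24): two independent
    uniforms u, m in (0, 1) per waiting time."""
    u = rng.random(size)
    m = rng.random(size)
    bracket = (np.sin(alpha * np.pi) / np.tan(alpha * np.pi * m)
               - np.cos(alpha * np.pi))
    return (1.0 / lam) * np.log(1.0 / u) * bracket ** (1.0 / alpha)

rng = np.random.default_rng(1)
tau = ml_waiting_times(0.75, 1.0, 100_000, rng)
# For alpha -> 1 the bracket tends to 1 and the ordinary exponential
# (Poisson) statistics is recovered.
```

The bracket stays positive for $0 < \alpha < 1$, so all waiting times are non-negative, and for $\alpha = 1$ the generator reduces to the exponential generator $\tau = (1/\lambda)\ln(1/u)$.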

M. Zare, P. Grigolini / Chaos, Solitons & Fractals 55 (2013) 80–94

This algorithm allows us to shed light on the noise-induced tail truncation. Let us see why. Although a map of the same kind as that of Eq. (20) generating the ML function is not yet known, the comparison of Eq. (24) with Eq. (23) suggests a plausible way to understand cooperation-induced criticality in the ML case as well as in the better known case of Ref. [24]. In fact, in the case of m very close to 0, both prescriptions yield

$\tau \propto T(u)\left(\dfrac{1}{m}\right)^{1/\alpha},$  (25)

with $T(u) = \alpha/a_0$ and $T(u) = (1/\lambda)\ln(1/u)$ in the case of Eqs. (23) and (24), respectively. The Pomeau–Manneville map seems to be appropriate to account for the inverse power law emerging at large times. To account for the ML complexity, as suggested by Eq. (24), we should use a two-dimensional map that, to the best of our knowledge, is not yet known. However, we make the plausible conjecture that, to account for the noise-induced power law truncation, we have to apply to this still unknown two-dimensional map the same arguments as those applied years ago to the Pomeau–Manneville map [47,48]. We assume that at criticality a system with a finite number of units obeys the modified Pomeau–Manneville map

$x_{t+1} = x_t + x_t^{1+1/\alpha} + f_t, \quad \mathrm{mod}\ 1,$  (26)

where $f_t$ is a random noise of intensity D. The work of Ref. [47] proves that even an extremely small intensity $D \propto 10^{-13}$ may produce a truncation at $t \propto 10^{4}$. The power law is truncated because the deterministic dynamics for m very close to zero may be extremely slow, even slower than the diffusion process generated by the random fluctuation $f_t$. In practice, this is equivalent to assuming that initial conditions with $m < \epsilon \ll 1$ are not allowed and that the effective initial condition is $\epsilon$ itself. The algorithm of Ref. [41] suggests that the ML function may derive from the deterministic prescription of a two-dimensional map. Thus, in this case, the noise-induced tail truncation depends on the fact that initial conditions with $m < m_\epsilon \ll 1$ and $u < u_\epsilon \ll 1$ are not allowed. The quantities $m_\epsilon$ and $u_\epsilon$ are proportional to the noise intensity, and for simplicity we refer to them as noise strengths. Fig. 4 illustrates the results of the numerical approach adopted to evaluate the noise-induced tail truncation. We run the neural model of this article for K = 0.005. According to Fig. 2 this value of the cooperation strength corresponds to the region where the power law tail is not yet visible and the survival probability $\Psi(t)$ is still a stretched exponential function. We have adopted the Laplace fitting to determine $\alpha$ and $\lambda$, and we have confirmed the results of this fitting procedure by running two ML algorithms [40,41]. Then we run the algorithm of Ref. [41] using the noise strengths $u_\epsilon = m_\epsilon = 0.03$; this noise intensity is itself determined by means of another fitting procedure. After assessing the noise intensity necessary to account for the tail truncation at K = 0.005, we moved to studying the condition K = 0.0075. This is the value of the cooperation parameter that, according to the statistical analysis of this article, should correspond to criticality, as pointed out by Eq. (18).
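The noise-induced truncation described above can be mimicked by forbidding reinjections below a cutoff. The sketch below applies the cutoff directly to the waiting-time prescription of Eq. (23), a deliberate simplification of the full noisy map of Eq. (26), with illustrative parameters.

```python
import numpy as np

def pm_waiting_times(alpha, n, rng, a0=1.0, eps=0.0):
    """Waiting times of Eq. (23), with reinjection restricted to
    m in (eps, 1): eps > 0 mimics the noise-induced exclusion of small
    initial conditions (Section 5.1), eps = 0 gives the bare power law."""
    m = eps + (1.0 - eps) * rng.random(n)
    return (alpha / a0) * ((1.0 / m) ** (1.0 / alpha) - 1.0)

rng = np.random.default_rng(3)
free = pm_waiting_times(0.75, 100_000, rng)
cut = pm_waiting_times(0.75, 100_000, rng, eps=0.03)

# With eps = 0.03 no waiting time can exceed this bound, so the power
# law tail is truncated at a finite time; the bare sequence has no bound.
bound = (0.75 / 1.0) * ((1.0 / 0.03) ** (1.0 / 0.75) - 1.0)
print(cut.max() <= bound, free.max() > bound)
```

Lowering the cutoff pushes the truncation time to larger values, which is the mechanism invoked in the text to explain why the ML tail becomes visible when the cooperation strength better counterbalances the noise.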

87

At this stage we run the algorithm of Ref. [41] with the fitted parameters $\alpha$ and $\lambda$ corresponding to K = 0.0075 and the same noise intensity as that adopted for K = 0.005. We find that this higher cooperation has the effect of better counterbalancing the noise, so that in this case the power law tail becomes visible, in satisfactory agreement with the numerical results. This is so because a fluctuation of the same intensity generates a truncation at larger times if the power index is reduced, as becomes immediately evident by inspecting Eq. (25). In conclusion, the physical interpretation of these results is that K ≈ 0.005 belongs to the criticality basin centered on K ≈ 0.0075. At K ≈ 0.005, the system begins its transition to the cooperative regime with $\alpha = 0.9$, and the finite-N fluctuations have the effect of making the ML tail invisible. Upon increase of the cooperation parameter from K = 0.005 to K = 0.0075, the ML tail becomes visible, as shown in panel b of Fig. 4: the internal noise is not strong enough to annihilate the ML power law tail. This source of truncation ought not to be confused with the periodicity-induced truncation, which is signaled by the vertical arrows of both panels, corresponding to the prediction of Eq. (14). With increasing K the noise-induced tail truncation occurs earlier than the periodicity-induced tail truncation.

5.2. Periodicity-induced tail truncation

The neural model of this article is a generalization of the model proposed many years ago by Mirollo and Strogatz [36] to explain the surprising phenomenon of Thailand fireflies flashing in perfect unison. That model can be recovered by setting r = 0 in Eq. (13), namely, by assuming that the motion of each neuron is fully deterministic. We must stress that the original model was based on the all-to-all assumption: each and every neuron interacts with all the other neurons. In this article, we study the case where each neuron interacts only with its four nearest neighbors.
However, one of the important properties of cooperation is that for sufficiently large values of K the locality condition is violated. Criticality marks the onset of this locality breakdown, which becomes more and more evident upon increase of K. In the supercritical regime a perfect synchronization may occur, identical to that of the original model of Ref. [36]. In Fig. 5 we show that raising K from K = 0.013 to K = 0.03 has the effect of generating results almost indistinguishable from the perfect synchronization of Ref. [36]. This affords further intuitive support to the notion that region III of Fig. 2 is dominated by periodicity. We want to emphasize that the almost perfect synchronization realized at K = 0.03 may give the misleading impression that the supercritical condition is more convenient than the critical one. Actually, a central result of this paper, discussed in Section 7, is that K = 0.0075 is the most convenient condition for the transmission of information from one neural network to another. At criticality, locality is violated while the system remains flexible enough to quickly adapt itself to an external stimulus changing in time; as shown in Section 7, the flexibility benefits are enhanced if the external stimulus shares the same complexity, namely, if it is generated by another network at criticality.


Fig. 4. Survival probability under the influence of noise. Panel a: K = 0.005; Panel b: K = 0.0075. The black curves are the results of numerical simulation. The red curves have been obtained by running the algorithm of Ref. [41] with $\alpha$ and $\lambda$ determined by means of the fitting procedure based on Eq. (11) and setting $u_\epsilon = m_\epsilon = 0.03$. See the text. The black vertical arrows denote $T_P$ of Eq. (14). (For interpretation of the references to colour in this figure caption, the reader is referred to the web version of this article.)

Fig. 5. Periodicity-induced truncation. The black curve refers to the condition K = 0.013 which is used in Fig. 1 to illustrate the super-critical condition. To make the influence of periodicity more transparent, we show also the case K = 0.03 which is very close to the perfect synchronization of Ref. [36].

6. Aging experiment

The criticality indicator that we are using in this article is temporal complexity and, according to the theoretical perspective of Refs. [22–24], temporal complexity is characterized by renewal aging. Renewal aging is a special form of aging requiring a detailed explanation. Aging is a property of out-of-equilibrium processes that can be explained as follows. Let us assume that a system is prepared at time t = 0 and that it generates a fluctuation f(t) that is studied by means of its autocorrelation function $\langle f(t_1) f(t_2)\rangle$. Note that $\langle \ldots \rangle$ indicates a Gibbs ensemble average. In other words, we have to observe infinitely many systems, each of which generates the fluctuation f(t) and undergoes an out-of-equilibrium preparation at time t = 0. For any system's realization, we evaluate the product $f(t_1)f(t_2)$, and the symbol $\langle f(t_1) f(t_2)\rangle$ denotes the average of this quantity over

all the realizations. For systems at thermodynamical equilibrium the correlation function $\langle f(t_1) f(t_2)\rangle$ depends only on $|t_2 - t_1|$, corresponding to the stationary condition. This condition is ensured in the case where the transition from out of equilibrium to equilibrium occurs with a finite time scale $T_{eq} < \infty$. Let us assume that $t_2 > t_1$ and that $t_1 \gg T_{eq}$. In this case $\langle f(t_1) f(t_2)\rangle = \Phi_f(t_2 - t_1)$. This stationary condition is not fulfilled if $t_1 < T_{eq}$, and in the case where $T_{eq} = \infty$ it never applies. This description of non-stationary processes leads to the conventional definition of aging as a process with correlation functions that do not depend only on $t_2 - t_1$, but also on $t_1$, which can be interpreted as the age of the system. The theoretical perspective of Refs. [22–24] refers to the specific case of renewal aging and to cases where the ensemble perspective cannot be adopted. The ensemble perspective, applied to the case of brain dynamics, would require averaging over infinitely many copies of the same system, which is obviously not possible. We have to turn the observation of a single time series into the observation of infinitely many copies of the same system. This is made possible by the renewal assumption: when an event occurs, the system is supposed to start a new time evolution that does not have any memory of the earlier dynamics. The probability of occurrence of new events is exactly the same as if the system were born at the moment of producing that event. Thus, we generate a very large number of sequences, each of them derived from the same time series, but beginning the observation from different events of the same time series. This procedure has the effect of producing infinitely many copies of the same system, each of them characterized by the occurrence of an event at the time origin. Of course, using the same perspective we can create an ensemble of many time series of age $t_a$.
This is done by associating to each event of the time series a sequence beginning at a time $t_a$ after that event. This set of sequences can be interpreted as a system of age $t_a$. Note that this corresponds to the theoretical prescription adopted in Ref. [49]


converting a single time series into many copies to make it possible to realize the Gibbs prescription. The age $t_a$ does not have anything to do with the absolute time, and the proposal of Ref. [49], as further discussed in Section 6.1, generates an efficient method, called the aging experiment, to assess whether a process is renewal or not.

6.1. Non-stationary correlation function

The correlation function of age $t_a$, $\Phi_f(s, t_a)$, is defined as follows

$\Phi_f(s, t_a) \equiv \langle f(t_1) f(t_2)\rangle,$  (27)

with $s \equiv t_2 - t_1$ and $t_a = t_1$. We begin observing the system's fluctuation at a time $t_a$ after the preparation; thus $t_a$ is the age of the system. The breakdown of the stationary condition and the ensuing aging effect are frequently met in nature. The case of spin glasses [50] is a popular example of this condition; however, to the best of our knowledge, it is not yet known whether they fit the condition of renewal aging or not. To establish if and when the process we are studying in this article is characterized by renewal aging, we have to adopt the procedure of Ref. [49], which, at the same time, as mentioned earlier, allows us to derive a Gibbs ensemble from a single time series. To illustrate this procedure with an example, let us refer to a sequence of neural firings produced by the model of this paper. We call laminar regions the time intervals between two consecutive firings. We create a dichotomous fluctuation, similar to the sequence of ''on'' and ''off'' states produced by blinking quantum dots [51], by assigning to each laminar region either the value f = +1 or f = -1, according to a coin-tossing prescription. The system's preparation is established by forcing all the realizations to be at the beginning of a laminar region at the time origin t = 0. The correlation function $\langle f(t_2) f(t_1)\rangle$ is expected to be non-stationary, and it is convenient to adopt the earlier notation $\langle f(t_2) f(t_1)\rangle = \Phi_f(s, t_a)$, with $s \equiv t_2 - t_1$ and $t_a = t_1$. How can we produce the large number of realizations necessary to evaluate the correlation function $\Phi_f(s, t_a)$? Let us evaluate the time distance between two consecutive firings. We evaluate first the waiting time distribution $\psi(s)$ and then the corresponding survival probability

$\Psi(t) \equiv 1 - \displaystyle\int_0^t ds\, \psi(s).$  (28)

This is nothing but the correlation function of vanishing age, $t_a = 0$. In fact, we begin waiting for the occurrence of the next event immediately after the occurrence of the earlier event. The survival probability of age $t_a > 0$, namely the correlation function $\Phi_f(s, t_a)$, is realized by means of a moving window of size $t_a$: the left end of the window coincides with the occurrence of an event (a neural firing) and the right end of the window is the time at which we begin waiting for the occurrence of the next event [49]. This is the approach we adopt to realize a Gibbs ensemble and the corresponding correlation function. The temporal complexity generated by criticality, according to Refs. [22–24],


is hypothesized to obey the renewal condition. In the renewal case, the time distance between two consecutive events, namely the length of a laminar region, does not have any memory of the time lengths of the other laminar regions. To establish if and when the renewal condition applies, we make two aging experiments, one on the sequence of laminar regions as they are produced by the system and another one on the sequence of shuffled laminar regions. The aged survival probability is expected to decay more slowly than that of the brand new condition. In the case of non-ergodic renewal non-Poisson processes, the slowing down with age is easily explained by noticing that in this case the mean length of a laminar region $\langle s \rangle$ diverges. If $s_{max}$ is the maximal length recorded when observing the process for a given time $T_1$, extending the observation to a much larger time $T_2$ is expected to lead to the recording of much larger laminar regions, $s > s_{max}$, since the lack of this condition would conflict with $\langle s \rangle = \infty$. If the shuffled and un-shuffled aged survival probabilities coincide, this is a clear indication that we are in the presence of renewal aging. In panel a of Fig. 6 we see that at K = 0.0075, which according to Eq. (18) is the critical value of the cooperation parameter, the process is non-Poisson and renewal. The departure from the Poisson condition is made evident by the fact that the system ages, lack of aging being a well known signature of the Poisson condition [49]. At the same time, the shuffled and the un-shuffled sequences generate the same aging, thereby proving that at K = 0.0075 the system is a renewal non-Poisson process, fitting the requirement of temporal complexity [22–24]. We also see, from panel b of Fig. 6, that the condition of strong synchronization violates the renewal condition, because the shuffled and un-shuffled sequences do not coincide.

6.2. Aging intensity as a way of establishing the occurrence of criticality

Renewal aging is a manifestation of deviation from the condition of ordinary thermodynamical equilibrium and consequently can be used to measure the departure from the Poisson condition. Cooperation makes the neural system violate the Poisson condition. For this reason, aging can be used as an indicator of the transition from non-cooperative to cooperative behavior. We evaluate the aging intensity by means of

$\delta(t_a) = \displaystyle\int_0^t \left|\Psi_a(t') - \Psi(t')\right| dt',$  (29)

where $\Psi_a(t)$ denotes the survival probability of age $t_a$. Fig. 7 shows that K = 0.0075 belongs to the region of cooperation coupling where the aging intensity undergoes an abrupt increase. The results of Fig. 7 must be considered with caution. In fact, the maximal amount of aging intensity is realized at values of K significantly larger than K = 0.0075, thereby suggesting that criticality may hold true for larger values of K. We must stress that in this kind of experiment the aging intensity is evaluated with no distinction between renewal and non-renewal aging and that


Fig. 6. Aging experiment. Panel a: K = 0.0075; Panel b: K = 0.03. The gray stars and the black triangles refer to the shuffled and un-shuffled sequences, respectively.
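The moving-window prescription for the aged survival probability and the aging intensity of Eq. (29) can be sketched as follows; the function names and the synthetic test sequences are our own illustrative choices, not the paper's code, and the integral is approximated on a uniform grid.

```python
import numpy as np

def aged_survival(taus, t_a, grid):
    """Aged survival probability from a single sequence of inter-event
    times (prescription of Ref. [49]): from each event, wait a time t_a,
    then record the residual time to the next event; the survival
    probability of these residuals is Psi_a(t), evaluated on `grid`."""
    events = np.concatenate(([0.0], np.cumsum(taus)))
    residuals = []
    for t0 in events[:-1]:
        j = np.searchsorted(events, t0 + t_a, side='right')
        if j >= events.size:
            break  # window runs past the last recorded event
        residuals.append(events[j] - (t0 + t_a))
    residuals = np.sort(residuals)
    return 1.0 - np.searchsorted(residuals, grid, side='right') / len(residuals)

def aging_intensity(taus, t_a, grid):
    """delta(t_a) of Eq. (29): integrated distance between the aged and
    the brand-new (t_a = 0) survival probabilities, rectangle rule on a
    uniform grid."""
    diff = np.abs(aged_survival(taus, t_a, grid)
                  - aged_survival(taus, 0.0, grid))
    return np.sum(diff) * (grid[1] - grid[0])
```

For a Poisson sequence the aged and brand-new survival probabilities coincide, so $\delta(t_a)$ stays at the level of statistical noise, while a renewal sequence with diverging mean waiting time gives a clearly positive $\delta(t_a)$; shuffling the inter-event times before the analysis provides the renewal check described above.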

for values of K larger than 0.0075 periodicity may break the renewal condition.

Fig. 7. Aging intensity for different values of the cooperation parameter. The arrow indicates the value of K that is expected to correspond to the genuine criticality of the process.

7. Information transfer at criticality

The results of Fig. 7 show that the aging intensity undergoes an abrupt change in the interval from K = 0.005 to K = 0.01, but they do not allow us to prove with compelling evidence that criticality occurs exactly in the middle of this interval, namely at K = 0.0075. To find a way out of this ambiguity, let us note that in recent years some authors [13,14,52] have argued that criticality is the condition corresponding to the maximal transfer of information. Thus we study two identical networks, S and P, with the same K; we drive S with P and we monitor the transmission of information from P to S to assess whether the maximal efficiency of information transport is realized at $K = K_c = 0.0075$. The coupling of S to P is realized as follows. We select randomly a subset $D_S$ of the neurons of the network S, more precisely 3% of the neurons of S, and a subset $D_P$, namely 3% of the neurons of P. Each neuron of the subset $D_S$ is coupled to a neuron of the subset $D_P$ and is forced to adopt its potential x. We study the correlation of the network S with P by means of the correlation function [53]

$C(X, Y) = \dfrac{\sum_{i=1}^{N}(X_i - \bar{X})(Y_i - \bar{Y})}{\sqrt{\sum_{i=1}^{N}(X_i - \bar{X})^2\, \sum_{i=1}^{N}(Y_i - \bar{Y})^2}}.$  (30)

The calculation is done at a given time t of the order of $10^5$ or $10^6$, and its results do not change with changing t. The symbols $X_i$ and $Y_i$ denote the membrane potentials of the neurons of the networks S and P, respectively. The sum runs from i = 1 to i = N, where N is the total number of neurons of each network, and the result is virtually independent of the order adopted to identify the neurons. The symbols $\bar{X}$ and $\bar{Y}$ denote the mean membrane potentials of the networks S and P, respectively. To study the transmission of information from P to S, the adoption of mutual information seems to be especially appropriate. Thus, we study also the mutual information, which reads [54]

$MI(X, Y) = \displaystyle\sum_{X_i \in S}\, \sum_{Y_i \in P} P(X_i, Y_i)\, \log\!\left[\dfrac{P(X_i, Y_i)}{P(X_i)\, P(Y_i)}\right],$  (31)

where $P(X_i, Y_i)$ denotes the joint probability of finding $X_i = x$ and $Y_i = y$ at the same time. Panel a of Fig. 8 shows the remarkable result that both the correlation and the mutual information reach their maximal value at $K = K_c \approx 0.0075$, thereby leading us to conclude that this is the critical value of the control parameter K. This explains why we emphasize this conclusion by means of Eq. (18). We must point out that the crucial result of Fig. 8 (panel a) depends on the adoption of the following special initial state: when the system S is coupled to the system P, which is already in the cooperative regime corresponding to K, all the neurons of S are in the resting state. If the neurons of S are instead randomly distributed, the results become statistically less reliable and show a shift of the maximum towards the state where


Fig. 8. Panel a: Correlation, C, between network S and network P, and Mutual Information, MI, as a measure of the information sent from network P to network S, as a function of the cooperation strength, K, of the two networks; Panel b: the membrane potential of a generic node of S (red curve) and the membrane potential of a generic node of P (blue curve); from top to bottom, K = 0.005, K = 0.0075, K = 0.016; Panel c: the abscissas are the nodes of the network S and the ordinates the nodes of the network P; the color of the squares denotes the intensity of the corresponding correlation; note that the white color represents negative correlation; from top to bottom, K = 0.005, K = 0.0075, K = 0.016. (For interpretation of the references to colour in this figure caption, the reader is referred to the web version of this article.)

the power law avalanches appear. The lack of accuracy prevents us from discussing this condition further. We limit ourselves to noticing that the choice of the initial condition generating Fig. 8 (panel a) may be very appropriate to establish the transport of information: in a sense, the condition with all the neurons in the resting state implies that the system is open to adapting itself to the directions of system P. Panel b of Fig. 8 illustrates the correlation between the membrane potentials of two neurons, one of S and one of P, randomly selected from those not directly coupled. It is remarkable that at criticality they show an almost perfect synchronization. Additional information on the cooperation-induced correlation is given by panel c of Fig. 8.
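Both indicators of Eqs. (30) and (31) are straightforward to compute from two simultaneous snapshots of the membrane potentials. The sketch below uses a simple histogram estimate of the joint probability and natural logarithms; the discretization into 16 states is our own assumption, since the paper does not specify its binning.

```python
import numpy as np

def correlation(x, y):
    """Pearson correlation of Eq. (30) between the N membrane
    potentials of networks S and P at a fixed time."""
    xc, yc = x - x.mean(), y - y.mean()
    return np.sum(xc * yc) / np.sqrt(np.sum(xc**2) * np.sum(yc**2))

def mutual_information(x, y, bins=16):
    """Mutual information of Eq. (31), in nats, with the joint
    probability P(X_i, Y_i) estimated by binning the potentials."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal P(X)
    py = pxy.sum(axis=0, keepdims=True)   # marginal P(Y)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))
```

A strongly synchronized pair of networks gives C close to 1 and a large MI, while independent networks give values close to zero, up to the small positive bias of the histogram estimator.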

8. Avalanche size and time duration distributions

Let us now discuss a central result of this paper: the comparison between the model of this article and the SOC prediction that the power law indexes for the avalanche size and time duration distributions should be $\nu = 1.5$ and $\nu = 2$, respectively [2,17]. The results of this comparison are illustrated in Fig. 9, where we devote panel a to the avalanche size and panel b to the avalanche time duration. We see from panel a that the crucial power index $\nu = 1.5$ shows up at K ≈ 0.013. Panel b shows that the crucial value $\nu = 2$ for the avalanche time duration appears at the same value of the control parameter. The value K ≈ 0.013 is significantly larger than what we estimate to be the proper critical value, K = 0.0075. On the basis of these results we are led to conclude that our model of neural dynamics does not fit the SOC prediction that power law avalanche size and time duration distributions are a manifestation of criticality. According to our model, these avalanches are rather a manifestation of super-criticality and correspond to the epileptic condition. The healthy condition is signaled by temporal complexity, namely, by the survival probability of panel a of Fig. 6, which the aging experiment proves to be a totally renewal condition. It is important to remark on the close connection between criticality, as generating the optimal condition for information transport, and locality breakdown. The model that we are using does not rest on the all-to-all coupling condition adopted by Mirollo and Strogatz [36] in their pioneering paper: each neuron is directly connected only to its four nearest neighbours. Yet, for very large values of K, see Fig. 5, with K = 0.03, an almost perfect synchronization is realized. This


Fig. 9. Panel a: avalanche size distribution for different values of the cooperation parameter K; Panel b: avalanche time duration distribution for different values of the cooperation parameter K.
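Operationally, avalanche sizes and durations are usually extracted from a binned firing record as maximal runs of consecutive non-empty bins. The sketch below follows that standard convention, which we assume also underlies Fig. 9, since the paper does not spell out its binning.

```python
import numpy as np

def avalanches(spike_counts):
    """Split a binned spike-count series into avalanches: maximal runs
    of consecutive non-empty bins, separated by empty bins. Returns the
    avalanche sizes (total firings) and durations (number of bins)."""
    sizes, durations = [], []
    s = d = 0
    for c in spike_counts:
        if c > 0:
            s += c
            d += 1
        elif d > 0:
            sizes.append(s)
            durations.append(d)
            s = d = 0
    if d > 0:                     # avalanche still open at the end
        sizes.append(s)
        durations.append(d)
    return np.array(sizes), np.array(durations)

sizes, durations = avalanches([0, 2, 3, 0, 0, 1, 0, 4])
# sizes -> [5, 1, 4]; durations -> [2, 1, 1]
```

Histogramming `sizes` and `durations` in log-spaced bins and fitting the tails then yields the exponents compared in Fig. 9 with the SOC values $\nu = 1.5$ and $\nu = 2$.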

implies that the correlation length is as large as the system's size. This condition, however, begins at criticality. As we remarked earlier to explain why the distance between two consecutive firings is sensitive to cooperation, when the Poisson distribution becomes a ML function, neurons that may be very far away from one another tend to fire together, thereby making the distance between their firings shorter than the distance 1/G. This is a consequence of the correlation length becoming as large as the system's size, a property that is evident and does not need a demonstration. A more difficult problem is raised by the question: if the correlation length is as large as the system size, would it not be better to use a cooperation strength so large as to make the cooperation-induced synchronization fully evident? The answer is that the information transfer refers to fluctuations, and in the supercritical regime the fluctuations around the ideal synchronization condition are weak and uncorrelated. This is the reason why the efficiency of information transfer becomes maximal at criticality.

9. Concluding remarks

As explained in the introduction, the results of this article may be used to support Werner's conjecture [14] that the brain is a system at criticality, in spite of the experimental results of Destexhe's group [21] revealing no power law in the avalanches of the healthy human brain. The interesting results of [21], however, must be considered with caution, because the work of Refs. [2,10] gives evidence to the contrary. On the other hand, there are experimental observations [55,56] yielding values of $\nu$ significantly larger than $\nu = 1.5$, and the recent experimental observation of Ref. [57] supports the emergence of the avalanche inverse power law at criticality. We also have to point out that the model of this article has the limited purpose of proving that temporal criticality and avalanche criticality, while being generated by the same cooperation

process, may depend on slightly different values of the control parameter, a warning relevant to the debate on the conflicting experimental results of Refs. [21,56,57]. Setting aside the ambitious issue of contributing results of significant interest for the human brain, there are additional issues to discuss. Do the results of this paper contradict the conclusion of the authors of Ref. [25] that the neural model of Section 3, studied in both articles, generates extended criticality? It is not easy to answer this question, because neither extended criticality nor the form of criticality emerging from the more accurate analysis of this article has yet a firm theoretical support. Here we find that at K = 0.0075 an uncontaminated form of temporal complexity emerges, and with it the maximal information transmission. These properties are shared by the more traditional form of second-order phase transition of Refs. [22–24,52], where both the supercritical and the sub-critical correlation lengths are shorter than the critical correlation length. With the model of this article, instead, going beyond the critical condition does not make the system recover the Poisson condition. Rather, the higher and higher cooperation strength has the effect of increasingly contaminating the renewal nature of critical temporal complexity with more ostensible signs of periodicity. On the one side, as remarked in Ref. [27], bridging temporal complexity and clock-like periodicity may fit an important condition of living organisms, the Winfree biological clocks [58] being a proper example. This may be, after all, a form of extended criticality, with the balance between complexity and periodicity changing with the cooperation parameter.
On the other hand, this form of extended criticality may afford a way to freely increase or decrease the intensity and the predictability of epileptic crises, so as to make it possible to check the efficiency of the methods currently proposed for the timely prediction of any form of crisis. Epileptic crises share a strong similarity with the geophysical and financial crises currently studied by the authors of Ref. [59], and the model of this paper may help their research work.
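The deviation from Poisson statistics that we adopt as the signature of temporal complexity can be made concrete numerically. The following is a minimal sketch, not a reproduction of the mlf.m routine of Ref. [40], comparing the exponential survival probability of a Poisson process with a Mittag–Leffler survival probability Ψ(t) = E_α(−(λt)^α); the values α = 0.9 and λ = 1 are illustrative choices, and the evaluation uses a truncated power series at small arguments and the leading inverse-power-law asymptotic term at large ones:

```python
import math

def ml_survival(t, alpha=0.9, lam=1.0, n_terms=80):
    """Mittag-Leffler survival probability Psi(t) = E_alpha(-(lam*t)**alpha).

    Truncated power series for small arguments; the one-term
    asymptotic expansion, an inverse power law, for large arguments.
    """
    x = (lam * t) ** alpha
    if x < 5.0:  # heuristic threshold: the series converges comfortably here
        return sum((-x) ** k / math.gamma(alpha * k + 1) for k in range(n_terms))
    # leading asymptotic term: E_alpha(-x) ~ 1 / (x * Gamma(1 - alpha))
    return 1.0 / (x * math.gamma(1.0 - alpha))

def poisson_survival(t, g=1.0):
    """Exponential survival probability of a Poisson process with rate g."""
    return math.exp(-g * t)

for t in (0.1, 1.0, 10.0, 100.0):
    print(t, poisson_survival(t), ml_survival(t))
```

At short times the two survival probabilities are close, while at long times the ML function decays as an inverse power law: the slow tail that distinguishes non-Poisson renewal events from ordinary Poisson statistics.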


It is important to point out that the lack of a theory for the form of criticality generated by the neural model of this article is closely related to the lack of a dynamical theory for the origin of the ML function. It is well known that the idealized Manneville equation of Eq. (20) generates the survival probability of Eq. (5), and the theoretical work of Ref. [24] establishes a connection between criticality and this form of temporal complexity. The results of this article suggest that the ML temporal complexity may be related to the criticality of the neural model of this article in the same way as the criticality of ordinary second-order phase transitions is connected to the Pomeau–Manneville map [24]. Although a theory for the form of criticality of the model of this paper is missing, the maximal information transfer at K = 0.0075 shown by the numerical results of Section 7 is in line with a widely shared opinion in the field of neurophysiological processes, provided that we adopt temporal complexity, rather than the power-law size distribution of avalanches, as the signature of criticality. Finally, we want to mention the possible sociological interest of the results of this article. The 3% of nodes of the network S perceiving the directions of the corresponding nodes of the network P are reminiscent of the committed minorities discussed in the recent work of Ref. [60]. Here 3% of the nodes of S is enough to drive the network S, while the committed minorities of Ref. [60] must reach the level of 10% to exert an influence on their sociological network. We think that this is an impressive property of cooperation-induced criticality.
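The connection between the Manneville intermittency of Refs. [42,43] and inverse-power-law waiting times can be illustrated in a few lines of code. The sketch below, with illustrative parameter values, iterates y → y + y^z from a uniformly reinjected initial condition in (0, 1) and records the laminar time needed to exceed 1; for uniform reinjection P(T > t) ∝ t^(−1/(z−1)), so z = 2 yields a survival probability decaying roughly as 1/t, the kind of slow tail that the survival probability of Eq. (5) displays at long times:

```python
import random

def manneville_waiting_times(z=2.0, n_events=20000, seed=1):
    """Laminar times of the idealized Manneville map y -> y + y**z.

    Each event starts from a uniform reinjection y0 in (0, 1); the
    waiting time is the number of iterations needed to exceed 1.
    For uniform reinjection, P(T > t) ~ t**(-1/(z-1)), so z = 2
    gives a survival probability decaying roughly as 1/t.
    """
    rng = random.Random(seed)
    times = []
    for _ in range(n_events):
        y = rng.random()
        t = 0
        while y < 1.0:
            y += y ** z
            t += 1
        times.append(t)
    return times

times = manneville_waiting_times()
for tau in (1, 10, 100):
    survival = sum(1 for t in times if t > tau) / len(times)
    print(tau, survival)
```

The empirical survival probability drops by roughly a factor of ten between τ = 10 and τ = 100, as expected for a 1/t tail, in sharp contrast with the exponential decay of a Poisson process.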

Acknowledgement

MZ and PG warmly thank ARO and Welch for their support through Grants No. W911NF-11-1-0478 and No. B1577, respectively.

References

[1] Loosemore RPW. The complex cognitive systems manifesto, nanotechnology, the brain, and the future. Yearbook Nanotechnol Soc 2013;3:195–217.
[2] Beggs JM, Plenz D. Neuronal avalanches in neocortical circuits. J Neurosci 2003;23:11167–77; Beggs JM, Plenz D. Neuronal avalanches are diverse and precise activity patterns that are stable for many hours in cortical slice cultures. J Neurosci 2004;24:5216–29.
[3] Lombardi F, Herrmann HJ, Perrone-Capano C, Plenz D, de Arcangelis L. Balance between excitation and inhibition controls the temporal organization of neuronal avalanches. Phys Rev Lett 2012;108:228703.
[4] Shew WL, Plenz D. The functional benefits of criticality in the cortex. Neuroscientist 2012;19:88–100.
[5] Haldeman C, Beggs JM. Critical branching captures activity in living neural networks and maximizes the number of metastable states. Phys Rev Lett 2005;94:058101.
[6] Socolar JES, Kauffman SA. Scaling in ordered and critical random Boolean networks. Phys Rev Lett 2003;90:068702.
[7] Shew WL, Yang H, Yu S, Roy R, Plenz D. Information capacity and transmission are maximized in balanced cortical networks with neuronal avalanches. J Neurosci 2011;31:55–63.
[8] Larremore DB, Shew WL, Ott E, Restrepo JG. Effects of network topology, transmission delays, and refractoriness on the response of coupled excitable systems to a stochastic stimulus. Chaos 2011;21:025117.
[9] Bertschinger N, Natschläger T. Real-time computation at the edge of chaos in recurrent neural networks. Neural Comput 2004;16:1413–36.
[10] Plenz D, Thiagarajan TC. The organizing principles of neuronal avalanches: cell assemblies in the cortex? Trends Neurosci 2007;30:101–10.
[11] Beggs JM. The criticality hypothesis: how local cortical networks might optimize information processing. Philos Trans R Soc A 2008;366:329–43.
[12] Chialvo DR. Critical brain networks. Phys A 2004;340:756–65.
[13] Chialvo DR. Emergent complex neural dynamics. Nat Phys 2010;6:744–50.
[14] Werner G. Consciousness-related brain processes viewed in the framework of phase space dynamics, criticality, and the renormalization group. Chaos Solitons Fract 2013 (this issue).
[15] Wilson KG, Kogut J. The renormalization group and the epsilon expansion. Phys Rep 1974;12:75–199.
[16] Bak P, Tang C, Wiesenfeld K. Self-organized criticality: an explanation of the 1/f noise. Phys Rev Lett 1987;59:381–4.
[17] Zapperi S, Lauritsen KB, Stanley HE. Self-organized branching processes: mean-field theory for avalanches. Phys Rev Lett 1995;75:4071–4.
[18] Priesemann V, Munk MHJ, Wibral M. Subsampling effects in neuronal avalanche distributions recorded in vivo. BMC Neurosci 2009;10:40.
[19] Beggs JM, Timme N. Being critical of criticality in the brain. Front Physiol 2012;3:163.
[20] Touboul J, Destexhe A. Can power-law scaling and neural avalanches arise from stochastic dynamics? PLoS ONE 2010;5:e8982.
[21] Dehghani N, Hatsopoulos NG, Haga ZD, Parker RA, Greger B, Halgren E, et al. Avalanche analysis from multielectrode ensemble recordings in cat, monkey, and human cerebral cortex during wakefulness and sleep. Front Physiol 2012;3:302.
[22] Turalska M, West BJ, Grigolini P. Temporal complexity of the order parameter at the phase transition. Phys Rev E 2011;83:061142.
[23] Svenkeson A, Bologna M, Grigolini P. Linear response at criticality. Phys Rev E 2012;86:041145.
[24] Contoyiannis YF, Diakonos FK. Criticality and intermittency in the order parameter space. Phys Lett A 2000;268:286–92; Contoyiannis YF, Diakonos FK, Malakis A. Intermittent dynamics of critical fluctuations. Phys Rev Lett 2002;89:035701.
[25] Lovecchio E, Allegrini P, Geneston E, West BJ, Grigolini P. From self-organized to extended criticality. Front Physiol 2012;3:96.
[26] Grigolini P, Zare M, Svenkeson A, West BJ. Criticality in neural systems. In: Plenz D, Niebur E, editors. New York: John Wiley & Sons; 2013. ISBN 978-3-527-41104-7.
[27] Zare M, Grigolini P. Cooperation in neural systems: bridging complexity and periodicity. Phys Rev E 2012;86:051918.
[28] Longo G, Montévil M. The inert vs. the living state of matter: extended criticality, time geometry, anti-entropy. Front Physiol 2012;3:39.
[29] Metzler R, Klafter J. From stretched exponential to inverse power-law: fractional dynamics, Cole–Cole relaxation processes, and beyond. J Non-Cryst Solids 2002;305:81–7.
[30] Zumofen G, Klafter J. Scale-invariant motion in intermittent chaotic systems. Phys Rev E 1993;47:851–63.
[31] Failla R, Grigolini P, Ignaccolo M, Schwettmann A. Random growth of interfaces as a subordinated process. Phys Rev E 2004;70:010101.
[32] Kardar M, Parisi G, Zhang Y-C. Dynamic scaling of growing interfaces. Phys Rev Lett 1986;56:889–92.
[33] Bianco S, Ignaccolo M, Rider MS, Ross MJ, Winsor P, Grigolini P. Brain, music, and non-Poisson renewal processes. Phys Rev E 2007;75:061911.
[34] Pramukkul P, Svenkeson A, Grigolini P, Bologna M, West BJ. Complexity and the fractional calculus. Adv Math Phys 2013 (online 28 March).
[35] Gerstner W, Kistler WM. Spiking neuron models: single neurons, populations, plasticity. Cambridge: Cambridge University Press; 2002.
[36] Mirollo RE, Strogatz SH. Synchronization of pulse-coupled biological oscillators. SIAM J Appl Math 1990;50:1645–62.
[37] Tseskis AL. Second-order phase transition in systems with a finite number of particles. JETP 1994;79:591.
[38] Binder K. Finite size scaling analysis of Ising model block distribution functions. Z Phys B 1981;43:119–40.
[39] Turalska M, West BJ, Grigolini P. Role of committed minorities in times of crisis. Sci Rep 2012;3:1371.
[40] Podlubny I, Kacenak M. Mittag–Leffler function: calculates the Mittag–Leffler function with desired accuracy. MATLAB Central File Exchange, File ID 8735, mlf.m; 2005.
[41] Fulger D, Scalas E, Germano G. Monte Carlo simulation of uncoupled continuous-time random walks yielding a stochastic solution of the space–time fractional diffusion equation. Phys Rev E 2008;77:021122.
[42] Pomeau Y, Manneville P. Intermittent transition to turbulence in dissipative dynamical systems. Commun Math Phys 1980;74:189–97.
[43] Manneville P. Intermittency, self-similarity and 1/f-spectrum in dissipative dynamical systems. J Phys (Paris) 1980;41:1235–43.
[44] Allegrini P, Benci V, Grigolini P, Hamilton P, Ignaccolo M, Menconi G, et al. Compression and diffusion: a joint approach to detect complexity. Chaos Solitons Fract 2003;15:517–35.
[45] Geisel T, Thomae S. Anomalous diffusion in intermittent chaotic systems. Phys Rev Lett 1984;52:1936–9.
[46] Akimoto T, Barkai E. Aging generates regular motions in weakly chaotic systems; 2012. arXiv:1209.6170v1.
[47] Bettin R, Mannella R, West BJ, Grigolini P. Influence of the environment on anomalous diffusion. Phys Rev E 1995;51:212–9.
[48] Floriani E, Mannella R, Grigolini P. Noise-induced transition from anomalous to ordinary diffusion: the crossover time as a function of noise intensity. Phys Rev E 1995;52:5910–7.
[49] Bianco S, Grigolini P, Paradisi P. Fluorescence intermittency in blinking quantum dots: renewal or small modulation? J Chem Phys 2005;123:174704.
[50] Amir A, Oreg Y, Imry Y. On relaxations and aging of various glasses. PNAS 2012;109:1850–5.
[51] Stefani FD, Hoogenboom JP, Barkai E. Beyond quantum jumps: blinking nanoscale light emitters. Phys Today 2009;62(2):34–9.
[52] Vanni F, Lukovic M, Grigolini P. Criticality and transmission of information in a swarm of cooperative units. Phys Rev Lett 2011;107:078103.
[53] Rodgers JL, Nicewander WA. Thirteen ways to look at the correlation coefficient. Am Stat 1988;42:59–66.
[54] Kraskov A, Stögbauer H, Andrzejak RG, Grassberger P. Hierarchical clustering using mutual information. Europhys Lett 2005;70:278–84.
[55] Hahn G, Petermann T, Havenith MN, Yu S, Singer W, Plenz D, et al. Neuronal avalanches in spontaneous activity in vivo. J Neurophysiol 2010;104:3312–22.
[56] Allegrini P, Paradisi P, Menicucci D, Laurino M, Bedini R, Piarulli A, Gemignani A. Sleep unconsciousness and breakdown of serial critical intermittency: new vistas on the global workspace. Chaos Solitons Fract 2013 (this issue).
[57] Tagliazucchi E, Balenzuela P, Fraiman D, Chialvo DR. Criticality in large-scale brain fMRI dynamics unveiled by a novel point process analysis. Front Physiol 2012;3:15.
[58] Winfree AT. The geometry of biological time. New York: Springer; 2001.
[59] da Fonseca EL, Ferreira FF, Muruganandam P, Cerdeira HA. Identifying financial crises in real time. Phys A 2012;392:1386–92.
[60] Xie J, Sreenivasan S, Korniss G, Zhang W, Lim C, Szymanski BK. Social consensus through the influence of committed minorities. Phys Rev E 2011;84:011130.