Neural Oscillators and Dynamical Systems Models

G. B. Ermentrout, University of Pittsburgh, Pittsburgh, PA, USA. © 2009 Elsevier Ltd. All rights reserved.

Introduction

Suppose that a synaptically isolated neuron is injected with a constant current. As the amount of current increases, the neuron might begin to fire action potentials. Or perhaps it will fire a burst of action potentials and then stop firing. Or, if it was already firing, maybe the spikes will stop. On the other hand, consider what happens as a walking horse speeds up: he switches first to a trot and then to a gallop. In both of these cases, some qualitative change in behavior has occurred as some parameter (current in the first case, frequency in the second) changes. How can we put these wildly different phenomena, at wildly different spatial and temporal scales, on a common footing?

Nonlinear dynamics, or dynamical systems theory, is an area of mathematics which attempts to classify the qualitative differences between systems as their initial states or their parameters change. Most modeling of neurons, circuits, and systems depends on the simulation of large systems of coupled differential equations: rates of change for voltages, synaptic strengths, etc., are given in terms of the variables themselves. Thus, in this article, we focus on how nonlinear dynamics can help us understand the behavior of the kinds of differential equations that occur in computational neuroscience.

To gain an intuitive feel for the concepts that are introduced in this article, it helps to think of some simple physical systems. A familiar example is a damped pendulum, that is, a solid rod with a weight at the end and with friction. If you move the pendulum, it will rock back and forth several times, eventually returning to the downward position. This position is called an attractor, since perturbations of the pendulum end up back in the downward state.

Let us consider a more complicated example. Place a flexible plastic ruler narrow edge down, and place your finger along the top edge. As long as you do not apply too much pressure, the ruler will remain completely straight. If you transiently bend the ruler from the side, keeping the same pressure on it, it will return to the straight position. Now, apply more pressure on the top of the ruler. Once the pressure becomes too great, the ruler will buckle (bend) to one side or the other. The physical system has gone to a new attractor as the parameter (the applied pressure) changes. The plastic ruler and the pendulum are governed by sets of nonlinear equations, and the goal of dynamical systems theory is to understand their behavior both qualitatively and quantitatively.

Equilibria

We start with a very simple example, a model for a passive membrane:

dV/dt = (-g_L(V - E_L) + I)/C    [1]

Here, 1/g_L is the input resistance, E_L is the resting potential, I is the applied current, and C is the capacitance. We divided by the capacitance because it is traditional to put just the derivatives of variables on the left-hand side of the equation. This is a linear differential equation, and we can find a closed-form solution, or, even easier, we could pop this onto a computer and solve it. However, to do this, we need one more piece of information: the voltage at a particular point in time, for example, t = 0. Thus, we specify V(0) = V0, which is called the initial condition.

Rather than solve the equation exactly, we will, instead, introduce the concept of state space and use this to deduce what solutions look like. Figure 1 shows a graph with dV/dt on the y-axis and V on the x-axis. The x-axis represents all possible voltages and is called the state space for this system. The y-axis tells us how fast and, more importantly, in which direction the voltage goes – to the left means it is decreasing and to the right, increasing. For example, if the initial voltage is to the left of the solid black circle, then dV/dt > 0 and the voltage increases. Note that as it approaches the filled circle, its rate, dV/dt, gets smaller, so that it decelerates as it gets closer to the filled circle. Initial conditions to the right will decrease until the filled circle is reached. An initial condition starting exactly on the filled circle will stay there forever. For this reason, we call this particular voltage an equilibrium point for eqn [1]. Other terms which are commonly used are steady state and fixed point.

Note that all initial conditions move to this unique equilibrium. Thus, we say that it is globally asymptotically stable or that it is a global attractor. No matter what the current injected, there will always be one and only one equilibrium point, and it is always stable. In fact, we can write it down explicitly:

V_eq = (I + g_L E_L)/g_L

Thus, for this simple linear model, if we call V(t) = f(t; V0) the solution to eqn [1] given the initial condition V(0) = V0, we see that V(t) → V_eq as t → ∞

Figure 1 One-dimensional dynamical system showing stable (black) and unstable (gray) equilibria.

for any initial condition. (Note that the function f is introduced to make explicit that the solution depends on the initial value.)

This graphical method may at first seem to be of little use, since we can write an explicit solution to the differential equation. However, suppose that in addition to a leak conductance, our membrane has an instantaneous inward current:

dV/dt = (-g_L(V - E_L) - g_Na m(V)(V - E_Na) + I)/C ≡ f(V, I)    [2]

where m(V) = 1/(1 + exp[-(V - V1)/V2]) (see Figure 1(b)). If the inward conductance is large enough, then dV/dt has the form shown in Figure 1(c). It is no longer possible to write down a solution to the differential equation! However, we can use the graphical method to completely understand it.

Unlike Figure 1(a), there are now three equilibria. Looking at the arrows on the x-axis, it is clear that initial conditions starting near V1 or V3 will be driven to V1 or V3, respectively, while initial conditions near V2 will be pushed away from V2. Thus, there are two asymptotically stable equilibria and one unstable equilibrium point. Physically, we would never be able to reach the unstable equilibrium, but we would be able to sense it in an experiment. Any initial condition V(0) > V2 will tend to V3 as time increases, and any initial condition V(0) < V2 will go to V1. Thus, eqn [2] is bistable; it has two asymptotically stable states, and V2 serves as a separatrix or threshold between the two stable equilibria.

Suppose the neuron is resting at V1 and a brief current stimulus is applied. If the stimulus is small, V(t) will not cross V2 and will relax back to V1. If the stimulus is large enough to cause V(t) to cross V2, then the neuron will tend to the depolarized state, V = V3. All we need now is some sort of delayed outward current and we will have an action potential. Nonlinear dynamics provides a geometric description of the threshold for an active neuron.

More generally, consider a set of n differential equations in n variables:

dx_j/dt = f_j(x_1, ..., x_n),  j = 1, ..., n    [3]

with initial conditions x_j(0) = x_{j,0}. Under fairly general conditions on the functions f_j, there is a solution to this which we write in vector form as X(t) = F(t; X0). In this case the state space is R^n, the set of all n-dimensional vectors, and eqn [3] defines an n-dimensional dynamical system which we can write concisely as

dX/dt = F(X)    [4]
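Popping a system like eqn [4] onto a computer amounts to stepping a scheme such as fourth-order Runge–Kutta. The integrator below is a minimal sketch, and the two-variable linear right-hand side is a hypothetical example, not a model from the article.

```python
import numpy as np

# Fixed-step RK4 integrator for eqn [4], dX/dt = F(X).
def rk4(F, X0, T, dt):
    X = np.asarray(X0, dtype=float)
    for _ in range(int(T / dt)):
        k1 = F(X)
        k2 = F(X + 0.5 * dt * k1)
        k3 = F(X + 0.5 * dt * k2)
        k4 = F(X + dt * k3)
        X = X + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return X

# Hypothetical example: a linear spiral sink, F(X) = AX, eigenvalues -1 +/- i.
A = np.array([[-1.0, -1.0], [1.0, -1.0]])
F = lambda X: A @ X
print(rk4(F, [1.0, 0.0], T=10.0, dt=0.01))  # spirals in toward the equilibrium at 0
```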

We call X_eq an equilibrium solution to eqn [4] if F(X_eq) = 0. In our simple example above, n = 1. The familiar space-clamped Hodgkin–Huxley equation consists of four variables, V, m, h, n, and forms a four-dimensional dynamical system. In the absence of an applied current, the nerve membrane sits at rest, which is an equilibrium point. We say that an equilibrium point, X_eq, is asymptotically stable if all the solutions to eqn [4] with initial conditions that are close to the equilibrium point decay back to the equilibrium point as t increases.

The nice geometric picture in Figure 1(c) does not help us for higher-dimensional systems. Suppose that A is an n × n matrix and that F(X) = AX. Then we call this a linear dynamical system. The study of linear dynamical systems is insightful, for, except in special cases (see below), the behavior of any nonlinear system in eqn [4] near an equilibrium point is the same as that of an associated linear system. Consider the following linear dynamical system:

dX/dt = AX    [5]

Clearly X = 0 is an equilibrium point. The general solution to a linear differential equation is a sum of terms of the form:

Y(t) = V e^{at} t^m cos(bt + c)    [6]

where m is a nonnegative integer, a, b, c are real numbers, and V is a constant vector. If a < 0, then

Y(t) decays to zero as t → ∞; if a > 0, then Y(t) grows exponentially; and if a = 0, then Y(t) grows when m > 0 and oscillates when m = 0 and b > 0. The numbers λ = a ± ib are the eigenvalues of the matrix A; that is, there is a constant vector, V, such that

AV = λV

For an n × n matrix, there are n eigenvalues, the roots of the nth-order characteristic polynomial

p_A(λ) ≡ det(λI - A)

Combining this fact with eqn [6], we see that all solutions to eqn [5] will decay to the equilibrium point at 0 if a < 0 for every eigenvalue of A, and that if any a > 0, then at least some solutions to eqn [5] will grow exponentially. In the former case, when all the a's are negative, we say that the system in eqn [5] is linearly asymptotically stable; in the latter case, when at least one a > 0, the system is unstable. The special case a = 0 will play an important role later in this article.

Let X_eq be an equilibrium solution of eqn [3] or, equivalently, eqn [4]. Form the following n × n matrix of partial derivatives:

B = [ ∂f_i/∂x_j ],  i, j = 1, ..., n    [7]

and evaluate it at the equilibrium point, X_eq. This matrix is called the Jacobi matrix for eqn [4] about the equilibrium point (also often called the Jacobian), and the system Y′ = BY is called the 'linearization.' One of the most important results in dynamical systems states that if the linear system defined by the matrix B(X_eq) is linearly asymptotically stable (all eigenvalues have negative real parts), then the equilibrium point X_eq is asymptotically stable as a solution to eqn [4]. This theorem provides a simple way to test the stability of any equilibrium point: form the matrix B and look at its eigenvalues. If all have negative real parts, then the equilibrium is stable; if any have positive real parts, then it is unstable.
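This test is easy to automate: approximate the Jacobi matrix of eqn [7] by finite differences and inspect the real parts of its eigenvalues. The right-hand side below is a hypothetical dimensionless damped pendulum (the rod-with-friction system from the introduction), not a model from the article.

```python
import numpy as np

def jacobian(F, X, h=1e-6):
    """Finite-difference approximation of the Jacobi matrix of F at X (eqn [7])."""
    X = np.asarray(X, dtype=float)
    n = X.size
    B = np.empty((n, n))
    F0 = np.asarray(F(X))
    for j in range(n):
        Xp = X.copy()
        Xp[j] += h
        B[:, j] = (np.asarray(F(Xp)) - F0) / h
    return B

def is_asymptotically_stable(F, Xeq):
    """Stable iff every eigenvalue of the linearization has negative real part."""
    return np.all(np.linalg.eigvals(jacobian(F, Xeq)).real < 0)

# Hypothetical damped pendulum: x' = y, y' = -sin(x) - 0.5*y.
F = lambda X: np.array([X[1], -np.sin(X[0]) - 0.5 * X[1]])
print(is_asymptotically_stable(F, [0.0, 0.0]))    # True: the downward position
print(is_asymptotically_stable(F, [np.pi, 0.0]))  # False: the inverted position
```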

Returning to Figure 1, the Jacobian is just the slope of dV/dt at the equilibrium point. Thus, if dV/dt has a positive slope through the equilibrium, it is unstable, and if it has a negative slope, the equilibrium is stable.

Let us continue to analyze eqn [2] and Figure 1(c), but now we will change the applied current, I. Changing the current just lifts the curve in Figure 1(c) up or down. This is shown in Figure 2 on the left. For large positive or negative currents, there is only one equilibrium point. However, if we start at I = 0, where there are three equilibria, and hyperpolarize the membrane (decrease I), then the right-most pair of equilibria will move closer to each other (compare blue with black line). As we continue to decrease the current, there is a current value, I = I_*, at which both equilibria come together (cyan curve, green circle), and for I < I_* they no longer exist. Unlike the situation when I = 0, when I < I_* there is only one equilibrium point (magenta curve, magenta circle). This is qualitatively different from three equilibria. A similar phenomenon occurs when we raise the current; the left-most equilibria merge when I = I^* (red curve, gold circle).

In dynamical systems, any time that a qualitative change in behavior occurs as a parameter varies, we say that a bifurcation has occurred. Specifically, when two equilibria merge and disappear, we say that there is a saddle-node bifurcation. This is often called a limit point or turning point bifurcation.

Clearly the left-hand picture is useful for a one-dimensional equation, but obviously, if the function f(V, I) is more complicated, we will run out of space and have to draw many different curves for many different values of I. A concise summary of the behavior of this nonlinear system as a parameter varies is shown in the right side of Figure 2. A plot is made of the equilibrium values of one of the variables on the y-axis, and the parameter is shown on the x-axis. This curve is called a bifurcation diagram. Traditionally, stable equilibria and unstable equilibria are distinguished by changing the color or line type of the curve. In our figure, stable equilibria are solid and unstable are dashed. The diagram should be read as follows. Pick a value of I and mentally draw a vertical

Figure 2 Behavior of nonlinear membrane as the current changes. (Left) Phase space picture. (Right) Bifurcation diagram. Special points are colored green and gold.

line through this value of I. Intersections with the bifurcation diagram correspond to equilibria. Saddle nodes are points where the diagram turns around (hence the name turning point); here, they are at I_* and I^*. Figure 2, right, shows that there will be bistability for all currents between I_* and I^*.

Suppose that we do the following experiment. Starting at I = 0, we gradually increase the current. As soon as it exceeds I^*, the voltage will jump up to the top part of the diagram. Now, we reduce the current. The voltage stays at the upper part of the diagram until the current falls below I_*. Thus, it jumps down at a different current than it jumped up. This phenomenon is called hysteresis, and our neuron has now been endowed with 'memory.' Once in the 'up' state, it will remain there even when the current is brought to zero. Bistability between states (here, equilibria) plays a central role in almost every aspect of the dynamics of single cells and networks of coupled cells.

Before turning to more complex dynamical systems, we close this discussion with an important point. Looking again to Figure 2, left, observe that when I = 0, the slope through the gray circle is positive while that through the black circle is negative. The linearized system for our model is

y′ = f′(V̄)y

where V̄ is an equilibrium. If the slope (f′) is negative, the equilibrium is stable, and if it is positive, it is unstable. At I = I_*, the slope at the equilibrium is zero and the linearized system is

y′ = 0

which tells us nothing about the stability of the nonlinear system. However, whenever a linearized system has a zero eigenvalue, this is the signature of a possible bifurcation for the nonlinear system. This is true no matter what the dimension of the system.

The methods for finding equilibria and their stability, as well as looking for bifurcations, are not confined to one-dimensional models. Nor are they restricted to single-neuron models. As an example of a bifurcation with symmetry, let us consider a mutually inhibitory neural network (Figure 3, left):

du1/dt = -u1 + F(I - g u2)
du2/dt = -u2 + F(I - g u1)

Here u1, u2 represent the firing rates of two different populations of neurons which are receiving equal inputs, I. The strength of the inhibition is g, and the function F(u) increases with increasing values of u. The reader can imagine this as representing two objects competing for the attention of the network. Since the two populations are identical, there is a symmetric equilibrium point at which u1 = u2 = ū. The stability is determined by looking at the linearized system:

dy1/dt = -y1 - ga y2
dy2/dt = -y2 - ga y1

where a = F′(I - gū). Notice that a > 0. The Jacobian matrix for this linear system is

A = [ -1  -ag ; -ag  -1 ]

Two-variable systems (also called planar systems) are easy to analyze for stability. Let T be the sum of the diagonal entries of A (called the trace of the matrix) and let D be its determinant. Then the linear system (and therefore the nonlinear system) will be stable provided that T < 0 and D > 0. For this example, T = -2 < 0 and D = 1 - g²a². If the mutual inhibition g is small, or the inputs are small (so that a is

Figure 3 Mutually inhibitory network. (Left) Network schematic. (Right) Bifurcation diagram as the input, I, increases: a pitchfork in which the symmetric 'Equal' branch loses stability to the 'u1 wins' and 'u2 wins' branches.

small), then D > 0. If the inputs increase, then D could become negative. The equilibrium point is then no longer stable. In fact, when the determinant is negative for a planar system, it means that there is one positive eigenvalue and one negative eigenvalue, and the equilibrium is called a saddle point. Because of the symmetry of our example, the eigenvalues and corresponding eigenvectors of A are easy to write down:

{λ1 = -1 - ga, v1 = [1, 1]^T}
{λ2 = -1 + ga, v2 = [1, -1]^T}

λ1 is always negative, but λ2 becomes positive when ag exceeds 1. This means that there will be growth along the vector v2 and decay along the vector v1. For this reason, v1 is called the stable set or the stable manifold for the saddle point, and v2 is called the unstable set or unstable manifold. Amazingly, if the linearized system has a saddle point and stable and unstable sets, then the nonlinear system does as well. However, they will not be straight lines.

When the input reaches a critical value so that ag = 1, the determinant is zero and we see that λ2 = 0. From the previous discussion, this means that we can expect a bifurcation. Figure 3 shows the bifurcation diagram for this system. It is very different from the diagram shown in Figure 2. Rather than losing solutions as the parameter changes, a solution becomes unstable and two new stable solutions appear. Because of its shape, this is known as a pitchfork bifurcation. It occurs only in systems which have some kind of intrinsic symmetry, but, as this is very common in many physical and biological problems, the pitchfork is a familiar bifurcation. The central line corresponds to the equilibrium point u1 = u2. As the input increases, when it reaches approximately 1, the symmetric equilibrium point

183

becomes unstable – symmetry is broken. The new equilibria have the form u1 = a, u2 = b and u1 = b, u2 = a, where a > b. A network like this is called winner-take-all, since one of the two objects dominates the other. Notice that when I = 2, the value a is about ten times larger than the value b.

Planar systems have another advantage over high-dimensional models. Like the one-dimensional models in Figure 1, it is possible to sketch the complete behavior of the differential equation. Instead of plotting u1 or u2 versus time, plot one against the other. The resulting diagram is called a phase-plane. Figure 4 shows the phase-plane for the network. Representative solutions (called trajectories; black) appear as curves moving toward the stable equilibrium points, which are filled circles. The advantage of the phase-plane is that it is possible to infer qualitative behavior without actually solving the equations. At each point in the (u1, u2) plane, the equations tell us the derivatives, du1/dt and du2/dt. These derivatives are tangent to the solutions, so they show the direction of trajectories. Magenta arrows are drawn at various points in the plane to show the direction. These arrows are called direction fields. Often only a crude approximation of the direction fields is enough: just the eight points on a compass. For example, northeast corresponds to both u1 and u2 increasing in time.

To determine where u1 is increasing or decreasing, we only need to determine where u1 is not changing at all. The graph of du1/dt = 0 is called the u1-nullcline. For our network, du1/dt = 0 when u1 = F(I - gu2). This curve is plotted in red in the phase-plane. The u2-nullcline, du2/dt = 0, is plotted in green. The nullclines break the phase-plane into regions where the direction fields are similar, and their intersections are the equilibria. Thus, much can be gained without ever solving the differential equations. In the right phase-plane, there are two stable equilibrium points separated by an unstable

Figure 4 Phase-plane for mutual inhibitory network. Direction fields (magenta arrows) and trajectories (black) are shown. The u1 nullcline is in red and the u2 nullcline is in green. For stronger inputs, there are three equilibria. Two (black circles) are stable. Stable (blue) sets for the saddle point (black square) are shown.

saddle point. The stable set of the saddle point (drawn in blue) divides the plane in two. All initial values of (u1, u2) above the stable set will end up at the equilibrium point where u2 wins and u1 loses, labeled a. The opposite occurs below the line. The upper triangular region of the plane is called the 'basin of attraction' for the equilibrium point a.
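The winner-take-all behavior and its basins of attraction can be sketched by direct simulation. The logistic gain F and the values g = 8 and I = 4 below are assumptions chosen so that ag = F′(0)g = 2 > 1, i.e., past the pitchfork; they are not the article's parameters.

```python
import math

# Sketch of the mutually inhibitory network with assumed parameters.
def F(u):
    return 1.0 / (1.0 + math.exp(-u))

def settle(u1, u2, g=8.0, I=4.0, T=200.0, dt=0.01):
    """Euler-integrate the network until it reaches an attractor."""
    for _ in range(int(T / dt)):
        u1, u2 = (u1 + dt * (-u1 + F(I - g * u2)),
                  u2 + dt * (-u2 + F(I - g * u1)))
    return u1, u2

# The stable set of the saddle is the diagonal u1 = u2: initial conditions
# below it flow to the "u1 wins" equilibrium, those above it to "u2 wins" --
# the two basins of attraction of the pitchfork's stable branches.
print(settle(0.51, 0.49))  # u1 wins: u1 near 1, u2 near 0
print(settle(0.49, 0.51))  # u2 wins, by symmetry
```

Starting exactly on the diagonal, the simulation stays at the symmetric saddle u1 = u2 = 0.5; any asymmetry in the initial condition grows until one population wins.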

Limit Cycles

When a neuron is injected with current, it begins to fire repetitively. If the interspike interval is constant, then this pattern of firing is periodic and, in dynamical systems language, is called a limit cycle. A limit cycle is an isolated periodic solution to eqn [4]. All autonomous rhythmic phenomena seen in biology are limit cycles, ranging in scale from the 24 h circadian oscillation in the suprachiasmatic nucleus to the 200 Hz ripple in the hippocampus. There are several ways in which limit cycles can appear in nonlinear systems. To illustrate these, we will use the simple membrane model of Morris and Lecar:

C dV/dt = I - g_L(V - E_L) - g_K n(V - E_K) - g_Ca m∞(V)(V - E_Ca)
dn/dt = a_n(V)(1 - n) - b_n(V)n

Depending on the parameters of the model, as the current increases, rhythmic firing appears in three different ways: (1) Hopf, (2) SNIC, and (3) homoclinic. These three possibilities have been found in almost all published simulations, and there is strong evidence that they are also observed in real neurons.

The best-known mechanism by which a neuron begins to fire periodically is via the Hopf or Andronov bifurcation. Recall that an equilibrium point is stable as long as the real parts of the eigenvalues of the linearized system are all negative. There are two ways in which a real part can switch to positive: (1) a real negative eigenvalue becomes a positive eigenvalue through a zero eigenvalue, and (2) the real part of a complex eigenvalue changes sign, leading to an imaginary eigenvalue, iω0. In the former case, saddle-node and pitchfork bifurcations occur – the number of equilibria changes. In the latter, there is no change in the number of equilibria; instead, limit cycles appear.

Figure 5 shows the bifurcation diagram for the Morris–Lecar model as the applied current increases. There is a single equilibrium point, and it changes from stable to unstable as the current increases past about 95. The magenta and green curves plotted in this figure correspond to limit cycles. They depict the maximum and minimum amplitude of the voltage. Like equilibria, the notion of stability can be defined for limit cycles. The magenta curves correspond to unstable and the green to stable limit cycles. The point HB denotes the point of the Hopf bifurcation. Models which undergo Hopf bifurcations have a limited frequency band and a nonzero minimum frequency (often close to ω0). For this model, the frequency ranges from 8 to 14 Hz. The small figures labeled a, b, and c are the phase-planes corresponding to points along the bifurcation diagram. At point (a) there is only a stable equilibrium and the neuron always returns to rest. At point (b), taken between the currents I1 and I2, there are two stable states: a stable

Figure 5 Hopf bifurcation formation of limit cycle. HB, Hopf bifurcation; FLC, fold of limit cycles.

equilibrium and a stable limit cycle. The unstable limit cycle (magenta) separates the plane into two regions. Everything outside of the unstable cycle ends up on the stable cycle, and everything inside ends up at the equilibrium. The well-known Hodgkin–Huxley model behaves exactly like this. As the current approaches the value I1 from the right, the magenta limit cycle and the green limit cycle collide and annihilate. This is called a fold of limit cycles (FLC) and is very much like the saddle-node bifurcation for equilibria: instead of losing two equilibria, two limit cycles are lost. In the transition from (b) to (c), the unstable magenta limit cycle shrinks to the equilibrium and renders it unstable, leaving only the stable limit cycle. There is compelling evidence that fast-spiking interneurons in the cortex have this type of bifurcation.

A less well-known, but very important, bifurcation to limit cycles is called the saddle-node invariant cycle (SNIC) bifurcation. Most computational models for cortical neurons make the transition from rest to repetitive firing via this bifurcation. An example is shown in Figure 6. Unlike the Hopf case, there is never bistability. Rather, there is a smooth transition from rest to rhythmicity. At the point of bifurcation, the frequency of the oscillation goes to zero. Note the huge dynamic range: 0–25 Hz for the simple model. For subthreshold currents, there are three equilibria; two are unstable and one, the rest state, is stable. Phase-plane (a) shows the big picture. The turquoise equilibrium serves as a threshold. If the potential crosses this, then a spike will be emitted before the cell returns to rest. As the current increases, the threshold and the rest state merge (phase-plane (b)), and at larger currents these two equilibria disappear, leaving just the stable limit cycle and the unstable equilibrium. Comparing phase-plane (c) in the Hopf and SNIC cases shows that for large enough currents, they cannot be distinguished. However, most neurons (at least in cortex) operate near rest, so the bifurcation matters. We note that near the critical current, I_*, the frequency grows like

ω = K √(I - I_*)

A key difference between the SNIC and the Hopf is that the Hopf has a much more restricted frequency range and thus is very sensitive to the frequency of its inputs, while the SNIC has a very broad frequency range and no preferred input frequencies. These differences could have important computational consequences: SNIC neurons respond to the total number of inputs (integrators), while Hopf neurons are more sensitive to timing.

The third way in which a limit cycle can appear also involves an arbitrarily low frequency, but with a different scaling from the SNIC. The bifurcation diagram is shown in Figure 7, and it is more complex than the other two. The frequency also has a large dynamic range, between 0 and 50 Hz. At point (a), there are three equilibria, one stable and two unstable. In phase-plane (a) the unstable set (gold) from the saddle point (turquoise) lies on the outside of the stable set (magenta). As the current approaches the value denoted (b) in the bifurcation diagram, the stable and unstable sets merge to form the red loop in phase-plane (b). This is called a 'homoclinic' solution. Just past the bifurcation point, the homoclinic becomes a stable limit cycle, at point (c), depicted by the green cycle in phase-plane (c). Note that there is both a stable

Figure 6 Saddle-node formation of limit cycle.


Figure 7 Homoclinic formation of limit cycle.

Figure 8 Bifurcation type determines synchronization properties. (a) Two Hopf neurons synchronize with excitatory coupling; (b) SNIC neurons fire in antiphase. (c) Frequency matters as well – if the frequency is halved, a pair of SNIC neurons switch from antiphase to a nonsymmetric rhythm.

limit cycle and a stable equilibrium (black circle). The stable set of the saddle point (magenta) separates the phase-plane into regions which tend to either the limit cycle or the equilibrium. Finally, as the current increases, the stable equilibrium and the saddle point merge (a saddle-node bifurcation), leaving only the limit cycle and the unstable equilibrium.

The reader may well wonder if these differences matter. Indeed, there seems to be little difference in the shapes of the spikes between the SNIC and the Hopf cases when the neuron is driven to fire at, say, 12 Hz. However, the mechanism by which a neuron makes the transition from rest to firing makes a large difference in how it responds to inputs. This fact is illustrated when two neurons are synaptically coupled into a network. Networks of coupled neural oscillators play a central role in many motor pattern circuits and may be important in sensory perception as well.

Figure 8 demonstrates that the type of bifurcation matters. In (a), the neurons use parameters like those in Figure 5 and are driven to 12 Hz. Two identical cells are coupled via a synaptic current and, no matter what the initial state, will eventually synchronize. In (b), the parameters are as in Figure 6, driven also to 12 Hz. However, no matter what the initial state is, the two neurons will fire alternately in 'antiphase,' a half-cycle apart. Not only does the type of bifurcation matter, but also the frequency. Panel (c) shows what happens to a pair as the frequency is halved from 12 to 6 Hz. Instead of firing in a perfect antiphase pattern, there is a switch so that the neurons fire with a short time between respective spikes, followed by a long silent period. In the context of the

questions posed at the beginning of this article, one can view the frequency-dependent switch shown in Figure 8(c) as analogous to a gait switch as an animal speeds up or slows down.
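The SNIC transition can be sketched with the Morris–Lecar model itself. The parameter set below is a commonly used Class I (SNIC) choice; it, the Euler step, and the spike-detection threshold are all assumptions for illustration rather than the article's values.

```python
import math

# Morris-Lecar with an assumed Class I (SNIC) parameter set.
C, g_L, E_L = 20.0, 2.0, -60.0
g_K, E_K = 8.0, -84.0
g_Ca, E_Ca = 4.0, 120.0
V1, V2, V3, V4, phi = -1.2, 18.0, 12.0, 17.4, 1.0 / 15.0

def m_inf(V): return 0.5 * (1.0 + math.tanh((V - V1) / V2))
def n_inf(V): return 0.5 * (1.0 + math.tanh((V - V3) / V4))
def tau_n(V): return 1.0 / math.cosh((V - V3) / (2.0 * V4))

def spike_count(I, T=2000.0, dt=0.05, thresh=-10.0):
    """Euler-integrate and count upward threshold crossings after a transient."""
    V, n, spikes = -60.0, 0.0, 0
    for k in range(int(T / dt)):
        dV = (I - g_L * (V - E_L) - g_K * n * (V - E_K)
              - g_Ca * m_inf(V) * (V - E_Ca)) / C
        dn = phi * (n_inf(V) - n) / tau_n(V)
        V_new = V + dt * dV
        if k * dt > 500.0 and V < thresh <= V_new:  # skip the initial transient
            spikes += 1
        V, n = V_new, n + dt * dn
    return spikes

# Below the SNIC current (roughly 40 uA/cm^2 for this set) the cell sits
# at rest; above it, repetitive firing appears, with rate rising from zero.
print(spike_count(30.0), spike_count(45.0), spike_count(60.0))
```

Counting spikes over the same window at increasing currents gives a crude frequency–current curve whose onset illustrates the broad dynamic range of the SNIC mechanism.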

Summary

Nonlinear dynamics provides a means of characterizing the temporal behavior of neurons and networks of neurons. Transitions between different qualitative behaviors as parameters change occur in only a few different ways. Despite differences in timescales and dimensions of the problem, these bifurcations share common features which provide insight into how neurons react to stimuli and into the number and variety of states that networks can achieve.

See also: Attractor Network Models; Hodgkin–Huxley Models; Spiking Neuron Models.

Further Reading

Guckenheimer J and Holmes P (1986) Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields. Applied Mathematical Sciences, Vol. 42. New York: Springer.
Izhikevich E (2006) Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting (Computational Neuroscience). Cambridge, MA: MIT Press.
Koch C and Segev I (1998) Methods in Neuronal Modeling: From Ions to Networks, 2nd edn. (Computational Neuroscience). Cambridge, MA: MIT Press.
Strogatz SH (1994) Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry and Engineering. Cambridge, MA: Perseus.