Controllability of Excitable Systems†

Bulletin of Mathematical Biology (2001) 63, 167–184
doi:10.1006/bulm.2000.0212
Available online at http://www.idealibrary.com

M. PERNAROWSKI
Department of Mathematical Sciences, Montana State University, Bozeman, MT 59717, U.S.A.
E-mail: [email protected]

Mathematical models of cell electrical activity typically consist of a current balance equation, channel activation (or inactivation) variables and concentrations of regulatory agents. These models can be thought of as nonlinear filters whose input is some applied current I (possibly zero) and output is a membrane potential V. A natural question to ask is whether the applied current I can be deduced from the potential V. For a surprisingly large class of models the answer to this question is yes. To show this, we first demonstrate how many models can be embedded into higher dimensional quasilinear systems. For quasilinear models, a procedure for determining the inverse of the nonlinear filter is then described and demonstrated on two models: (1) the FitzHugh–Nagumo model and (2) the Sherman–Rinzel–Keizer (SRK) (Sherman et al., 1988, Biophysical Journal 54, 411–425) model of bursting electrical activity in pancreatic β-cells. For the latter example, the inverse problem is then used to deduce model parameter values for which the model and experimental data agree in some measure. An advantage of the correlation technique is that experimental values for activation (and/or regulatory) variables need not be known to make the estimates for these parameter values.

© 2001 Society for Mathematical Biology

1. INTRODUCTION

Mathematical models of electrical activity in single cells consist of a set of differential equations, one of which represents a balance of transmembrane ionic currents. An external electrical stimulus is modelled by including an applied current in the current balance equation. Chemical stimuli are often modelled either by varying concentrations of relevant agents or by varying parameters which are believed to be correlated to the stimulating chemical. If the transmembrane potential of a cell at time t is V(t), the model equations have the form:

$$C_m \frac{dV}{dt} = -\sum_X I_X(V, \vec{w}) - I(t), \tag{1}$$

† This work was supported by the National Science Foundation grant DMS-97-04-966.


$$\frac{d\vec{w}}{dt} = \vec{W}(V, \vec{w}), \qquad \vec{w} \in \mathbb{R}^m. \tag{2}$$

The first of these equations represents a balance of capacitive currents (left side), currents through ionic channels (right side) and the applied current I. The parameter $C_m$ is the total capacitance, the $I_X$ are the transmembrane currents through ionic channels (e.g., voltage-gated sodium, calcium-inactivated potassium), and $\vec{w}$ are variables which describe gating and other regulatory effects for each ionic channel of type X. Often, many of these variables evolve slowly. A common experiment in electrophysiology is to isolate a cell, inject a known current I(t) into the cell and then measure the resulting membrane potential V(t). The aim in such experiments is to deduce what ionic currents are relevant for the given cell and, hopefully, to deduce quantitatively accurate functional forms for the $I_X$. In this regard the cell may be regarded as a nonlinear filter in which the input I(t) elicits a response V(t). In this study we address the inverse problem; namely, if V(t) is known, is there a way to deduce what the stimulus I(t) must have been? Surprisingly, the answer to this question is yes for a great many models. The approach we take is direct. Given a model, we outline a procedure for explicitly deducing the inverse problem. This procedure requires that $\vec{w}$ occur linearly in the given model (1)–(2). To be specific, the model equations must be of the form

$$\frac{dv}{dt} = f(v) - \vec{F}(v)\cdot\vec{w} + I(t), \tag{3}$$

$$\frac{d\vec{w}}{dt} = \vec{b}(v) - A(v)\vec{w} \tag{4}$$

for some scalar function f, vector-valued functions $\vec{F}, \vec{b} \in \mathbb{R}^m$ and matrix $A \in \mathbb{R}^{m\times m}$. Since $\vec{w}$ occurs linearly in these equations but the potential v does not, we shall refer to (3)–(4) as quasilinear. In many excitable cell models, (4) is an appropriate form since the variables $\vec{w} = (w_1, w_2, \ldots, w_m)^T$ often obey relaxation equations:

$$\frac{dw_i}{dt} = \frac{W_i(v) - w_i}{\tau_i(v)}, \qquad i = 1, 2, \ldots, m. \tag{5}$$

In vector form, this can be written as (4). Such relaxation variables, however, do not always occur linearly in (3). In Section 2, we show that despite the fact that many models (1)–(2) are not quasilinear, they can be embedded in a higher dimensional system which is quasilinear. This embedding is explicit and we demonstrate it on the Hodgkin–Huxley model (Hodgkin and Huxley, 1952) for electrical activity in the squid giant axon. Then, in Section 3, we outline the procedure to invert quasilinear models by deriving a single differential equation for I which depends only on v. This procedure is demonstrated on the FitzHugh–Nagumo model (FitzHugh, 1961; Nagumo et al.,


1962) where the associated differential equation for I(t) is solved to derive an explicit expression for the current I which elicits a given arbitrary (smooth) response v(t). Lastly, in Section 4, we demonstrate how the inversion procedure can be used to estimate model parameters from experimental data. As with most parameter identification problems, the problem associated with the parameter estimation is ill-posed. This is discussed, a regularization scheme is introduced and then demonstrated on a model of electrical activity in the pancreatic β-cell.

2. TRANSFORMATION TO QUASILINEAR FORM

Here we discuss how some common nonlinearities occurring in excitable cell models (1)–(2) can be eliminated by introducing new variables and embedding solutions of (1)–(2) in a quasilinear system (3)–(4) with degree n > m. Arguably the most common nonlinearities which occur in excitable cell models are those which arise from modelling voltage activation (and/or inactivation) in ionic channels. Two such examples occur in the Hodgkin–Huxley model (Hodgkin and Huxley, 1952):

$$C_m \frac{dV}{dt} = -I_{Na}(V, m, h) - I_K(V, n) - I_L(V), \tag{6}$$

$$\frac{d\phi}{dt} = \frac{\phi_\infty(V) - \phi}{\tau_\phi(V)} = \alpha_\phi(V) - \beta_\phi(V)\phi, \qquad \phi = m, h, n, \tag{7}$$

where $I_{Na}$, $I_K$ and $I_L$ are voltage-gated sodium, voltage-gated potassium and leakage currents, respectively. The voltage-gated currents have the form

$$I_{Na} = g_{Na}\, m^3 h\,(V - V_{Na}), \tag{8}$$

$$I_K = g_K\, n^4\,(V - V_K), \tag{9}$$

where $g_{Na}$, $g_K$ are (constant) maximal conductances and $V_{Na}$, $V_K$ are sodium and potassium Nernst potentials, respectively. Defining $\vec{w} = (w_1, w_2, w_3) = (m, h, n)$, this model fails to be quasilinear since the $w_i$ occur in a nonlinear fashion in (6). However, if we defined new variables $W_1 = m^3 h$ and $W_2 = n^4$, (6) would be linear in $\vec{W} = (W_1, W_2)$. But then, (7) would not be linear in $\vec{W}$. To circumvent this difficulty we introduce several new variables. First we consider the nonlinearities of the type in (9). Assume $I_X$ has the form

$$I_X = g(V)\phi^n, \tag{10}$$

where g is some function, n is an integer and the activation variable φ(t) satisfies the relaxation equation (7). Define the variables

$$z_i = \phi^i, \qquad i = 1, 2, \ldots, n. \tag{11}$$


Differentiating $z_i$ in t and using (7) we find

$$\frac{dz_i}{dt} = i\phi^{i-1}\frac{d\phi}{dt} = i\alpha_\phi z_{i-1} - i\beta_\phi z_i \tag{12}$$

for i = 2, 3, …, n. The equation for $z_1$ is (7). Defining $\vec{z} = (z_1, z_2, \ldots, z_n)$, these equations can be written

$$\frac{d\vec{z}}{dt} = \vec{b}(V) - A(V)\vec{z}, \tag{13}$$

where $\vec{b}(V) = (\alpha_\phi(V), 0, \ldots, 0)^T$ and

$$A(V) = \begin{pmatrix}
\beta_\phi & 0 & 0 & \cdots & 0 \\
-2\alpha_\phi & 2\beta_\phi & 0 & \cdots & 0 \\
0 & -3\alpha_\phi & 3\beta_\phi & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & 0 \\
0 & \cdots & 0 & -n\alpha_\phi & n\beta_\phi
\end{pmatrix}. \tag{14}$$

Thus, $I_X = g(V) z_n$ is linear in $z_n$ and (13) has the form (4). It is important to note that although the resulting system has n − 1 new variables, the initial conditions on the $z_i$ are dependent. In particular, $z_i(0) = z_1(0)^i$. For the Hodgkin–Huxley model, φ(t) = n(t) is the potassium activation variable. Now we turn to currents $I_X$ of the form

$$I_X = g(V)\phi^n \psi^m, \tag{15}$$

where g is some function, n, m are integers and the activation variables φ(t) and ψ(t) satisfy relaxation equations of the form (7). Define the new variables

$$Z_{i,j} = \phi^i \psi^j, \qquad i = 0, 1, \ldots, n, \quad j = 0, 1, \ldots, m, \quad i + j \neq 0. \tag{16}$$

Again, we differentiate $Z_{i,j}$ in t and use the relaxation equations (7) for φ and ψ to find

$$\frac{dZ_{i,j}}{dt} = i\alpha_\phi Z_{i-1,j} - (i\beta_\phi + j\beta_\psi) Z_{i,j} + j\alpha_\psi Z_{i,j-1}. \tag{17}$$

From (17) it is apparent that the derivative of $Z_{i,j}$ depends linearly on two other $Z_{k,l}$ variables. This dependence is illustrated in Fig. 1, where each $Z_{i,j}$ variable is indicated by a circle. The derivative of $Z_{i-1,j}$ will in turn depend on $Z_{i-2,j}$, $Z_{i-1,j}$ and $Z_{i-1,j-1}$. It is evident that by introducing (n + 1)(m + 1) − 1 variables, $I_X$ will be linear in $Z_{n,m}$ and the resulting system for the $Z_{i,j}$ will be of the form (4). For the Hodgkin–Huxley sodium current, φ and ψ are current activation and inactivation variables, respectively, with n = 3 and m = 1, so that the current equation is linear in $\phi^3\psi$. The inflated quasilinear system corresponding to this current requires the addition of six new variables. Defining

$$\vec{Z} = (Z_{1,0}, Z_{0,1}, Z_{1,1}, Z_{2,0}, Z_{2,1}, Z_{3,0}, Z_{3,1})^T \tag{18}$$

Figure 1. Figure showing the dependence of the derivative of $Z_{i,j}(t) = \phi^i\psi^j$ (located at the cross) on other $Z_{i,j}$ variables. The derivative of $Z_{i,j}$ depends linearly on the $Z_{l,k}$ variables indicated by solid circles.

the system for $\vec{Z}$ has the form

$$\frac{d\vec{Z}}{dt} = \vec{b}(V) - A(V)\vec{Z}, \tag{19}$$

where, using (17) for each component of $\vec{Z}$,

$$A(V) = \begin{pmatrix}
\beta_\phi & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & \beta_\psi & 0 & 0 & 0 & 0 & 0 \\
-\alpha_\psi & -\alpha_\phi & \beta_\phi + \beta_\psi & 0 & 0 & 0 & 0 \\
-2\alpha_\phi & 0 & 0 & 2\beta_\phi & 0 & 0 & 0 \\
0 & 0 & -2\alpha_\phi & -\alpha_\psi & 2\beta_\phi + \beta_\psi & 0 & 0 \\
0 & 0 & 0 & -3\alpha_\phi & 0 & 3\beta_\phi & 0 \\
0 & 0 & 0 & 0 & -3\alpha_\phi & -\alpha_\psi & 3\beta_\phi + \beta_\psi
\end{pmatrix}$$

and $\vec{b} = (\alpha_\phi, \alpha_\psi, 0, 0, 0, 0, 0)^T$. Again we note that the initial conditions are dependent on one another via the definition of the $Z_{i,j}$. In summary, the methods outlined in this section can be used to embed the Hodgkin–Huxley model in an 11th-order quasilinear system. Given the dependence of the initial conditions on one another, we see that solutions of the fourth-order Hodgkin–Huxley model are in fact projections of trajectories on a differentiable manifold embedded in the 11th-order solution space of the corresponding quasilinear system.
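The embedding can be checked numerically. The sketch below is an illustration of (10)–(14), not code from the paper: the constant rates α, β, the polynomial degree n and the step sizes are my own choices (in the models the rates depend on V). It integrates the scalar relaxation equation (7) alongside the quasilinear system (12)–(14) with the dependent initial data $z_i(0) = z_1(0)^i$ and confirms that $z_i$ tracks $\phi^i$:

```python
# Numerical check of the embedding z_i = phi^i: with dependent initial data
# z_i(0) = phi(0)^i, the linear system (12)-(14) reproduces the powers of phi.
# alpha, beta are illustrative constant rates; Euler stepping throughout.

def embedding_check(alpha=0.8, beta=0.5, n=4, phi0=0.1, dt=1e-4, T=2.0):
    phi = phi0
    z = [phi0 ** i for i in range(1, n + 1)]        # z_i(0) = phi(0)^i
    for _ in range(int(T / dt)):
        dphi = alpha - beta * phi                   # relaxation equation (7)
        # equation (12): dz_i/dt = i*alpha*z_{i-1} - i*beta*z_i, with z_0 = 1
        dz = [(i + 1) * alpha * (z[i - 1] if i > 0 else 1.0)
              - (i + 1) * beta * z[i]
              for i in range(n)]
        phi += dt * dphi
        z = [zi + dt * dzi for zi, dzi in zip(z, dz)]
    return phi, z

phi, z = embedding_check()
# z[i] should equal phi**(i+1) up to O(dt) discretization error
```

The same bookkeeping applied to (17) reproduces $\phi^3\psi$ for the Hodgkin–Huxley sodium current.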

3. CONTROLLABILITY

In this section we outline a procedure for inverting initial value problems of the form (3)–(4). We define the initial condition $\vec{v}_0 = (v(0), \vec{w}(0))^T \in \mathbb{R}^{m+1}$ and assume that for the applied current I(t)

$$\vec{I}(t) = \left( I, \frac{dI}{dt}, \ldots, \frac{d^{m-1}I}{dt^{m-1}} \right)^T \tag{20}$$


is continuous. Lastly, we let $\vec{I}_0 = \vec{I}(0)$. With these definitions and assumptions, we can view (3)–(4) as a nonlinear filter whose input is $(I, \vec{v}_0)$ and output is $(v(t), \vec{I}_0)$, i.e., there is some operator $N : (I, \vec{v}_0) \to (v, \vec{I}_0)$. In words, if the input current and initial cell state are known, the model yields an output voltage v. And since I(t) is known, $\vec{I}_0$ can be found. A natural question to ask is whether N has an inverse. That is to say, if the output v(t) is known, can I(t) be predicted given $\vec{I}_0$? Surprisingly, the answer to this question is yes in a great many instances. To demonstrate this fact, we explicitly compute the inverse operator. Owing largely to the linearity of (3)–(4) in $\vec{w}$, repeated differentiations of (3) and substitutions using (4) result in a procedure whereby $\vec{w}$ can be eliminated, resulting in a linear differential equation for I(t) involving v(t) and its derivatives. Solving (3) for I we have

$$I = \frac{dv}{dt} - f(v) + \vec{F}(v)\cdot\vec{w} = H_0\!\left(v, \frac{dv}{dt}\right) + \vec{K}_0(v)\cdot\vec{w}. \tag{21}$$

Differentiating (21) in t results in the calculations

$$\begin{aligned}
\frac{dI}{dt} &= \frac{dH_0}{dt} + \vec{w}^T\frac{d\vec{K}_0}{dt} + \vec{K}_0^T\frac{d\vec{w}}{dt} \\
&= \frac{dH_0}{dt} + \vec{w}^T\frac{d\vec{K}_0}{dt} + \vec{K}_0^T(\vec{b} - A\vec{w}) \\
&= \frac{dH_0}{dt} + \vec{K}_0^T\vec{b} + \vec{w}^T\frac{d\vec{K}_0}{dt} - \vec{K}_0^T A\vec{w} \\
&= \left( \frac{dH_0}{dt} + \vec{K}_0^T\vec{b} \right) + \vec{w}^T\left( \frac{d\vec{K}_0}{dt} - A^T\vec{K}_0 \right)
\end{aligned} \tag{22}$$

which can be written in the form

$$\frac{dI}{dt} = H_1\!\left(v, \frac{dv}{dt}, \frac{d^2v}{dt^2}\right) + \vec{K}_1\!\left(v, \frac{dv}{dt}\right)\cdot\vec{w} \tag{23}$$

for some functions $H_1$ and $\vec{K}_1$. Proceeding inductively, repeated differentiations of this equation in t, using the same substitutions, yield a set of equations of the form

$$\frac{d^n I}{dt^n} = H_n\!\left(v, \frac{dv}{dt}, \ldots, \frac{d^{n+1}v}{dt^{n+1}}\right) + \vec{K}_n\!\left(v, \frac{dv}{dt}, \ldots, \frac{d^n v}{dt^n}\right)\cdot\vec{w}, \tag{24}$$

where n = 1, …, m − 1. In matrix notation, (21) and (24) can be written

$$\vec{I} = \vec{H} + K\vec{w}, \tag{25}$$


where $\vec{H} = (H_0, H_1, \ldots, H_{m-1})^T$ and the matrix $K \in \mathbb{R}^{m\times m}$ has the $\vec{K}_n$, n = 0, 1, …, m − 1, as rows. So long as K is nonsingular,

$$\vec{w} = K^{-1}(\vec{I} - \vec{H}), \tag{26}$$

which when substituted into (3) yields

$$\frac{dv}{dt} = f(v) - \vec{F}(v)\cdot K^{-1}(\vec{I} - \vec{H}) + I. \tag{27}$$

This is a linear differential equation for I(t) of degree m − 1. Given the initial condition $\vec{I}_0$ and v(t), the solution I(t) defines the inverse operator $N^{-1}: (v, \vec{I}_0) \to (I, \vec{v}_0)$. Before explaining the significance of this result we first illustrate the procedure on a simple model. For this we choose the FitzHugh–Nagumo model (FitzHugh, 1961; Nagumo et al., 1962)

$$\dot{v} = \frac{dv}{dt} = f(v) - w + I(t), \tag{28}$$

$$\dot{w} = \frac{dw}{dt} = \varepsilon(v - \gamma w), \tag{29}$$

where f(v) = v(1 − v)(v − a) and (a, γ, ε) are parameters. Differentiating (28) in t we find

$$\ddot{v} = f'(v)\dot{v} - \dot{w} + \dot{I} = f'(v)\dot{v} - \varepsilon(v - \gamma w) + \dot{I}. \tag{30}$$

Solving for w we find

$$w = \frac{1}{\varepsilon\gamma}\left( \ddot{v} - f'(v)\dot{v} + \varepsilon v - \dot{I} \right). \tag{31}$$

Substituting this into (28) we obtain

$$\dot{I} + \varepsilon\gamma I = B(v, \dot{v}, \ddot{v})(t) = \ddot{v} - f'(v)\dot{v} + \varepsilon v + \varepsilon\gamma(\dot{v} - f(v)). \tag{32}$$

Given an initial condition I(0) = I₀, the solution of this first-order differential equation is:

$$I(t) = N^{-1}(v, I_0) = e^{-\varepsilon\gamma t}\left( I_0 + \int_0^t e^{\varepsilon\gamma s}\, B(v, \dot{v}, \ddot{v})(s)\, ds \right), \tag{33}$$


Figure 2. The input I(t) for the FitzHugh–Nagumo model which yields an output $v(t) = \sin^2 t$ for (a, γ, ε) = (1/3, 1, 1/10).

demonstrating explicitly a formula for the inverse of N. For example, (33) can be used to compute the stimulus I(t), $I_0 = 0$, which yields an output $v(t) = \sin^2 t$. Performing this calculation explicitly‡ we plot the corresponding I(t) in Fig. 2 for a fixed set of parameter values. In this example, the cell is initially at rest since for $v(t) = \sin^2 t$, $(v(0), v'(0)) = (0, 0)$.

‡ Done symbolically with Maple™.

Now we explain the significance of these results. It is clear that given the solution $(v, \vec{w})$ of (3)–(4) one can easily find I(t) by simply solving (3). Therefore, the elimination of $\vec{w}$ from the model equations seems irrelevant to defining the inverse of N. However, in most experimental situations it is often difficult or impossible to measure v and $\vec{w}$ simultaneously. Voltage measurements, on the other hand, are always made. And, if the sampling rate for this measurement is sufficiently fast, estimates of v and its derivatives are easily obtained. In this regard, the elimination of $\vec{w}$ is paramount to developing a technique for estimating model parameters and unknown input currents when only v(t) and its derivatives can be measured. These techniques will be described in the next section.

Before we conclude this section, however, we cast the inverse problem as a control theory problem. Casti (1985) defines a variety of different control problems. One which most closely mimics our system is the ‘realization–identification’ problem. For example, consider the system

$$\dot{x} = f(x(t), t) + u(t)g(x(t), t), \tag{34}$$

$$y(t) = h(x(t)), \tag{35}$$

where u(t) is an input, x(t) is a state and y(t) is the output. For the ‘realization–identification’ problem associated with this system one tries to find f, g and h so


that y(t) is known in terms of u(t). The control problem associated with (3)–(4) is a minor modification of the realization–identification problem. If we define $u(t) = (I(t), \vec{0})^T$, $x(t) = (x_1(t), \ldots, x_{m+1}(t))^T = (V(t), \vec{w}(t))^T$, $g = 1$ and $h(x) = x_1(t)$, our problem is one of finding f given the input and output.
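The inversion formula (33) and the experiment of Fig. 2 can be reproduced numerically. The sketch below is illustrative rather than the paper's symbolic calculation: the step size, horizon and the Euler/trapezoid discretizations are my own choices. It computes I(t) for the prescribed output $v(t) = \sin^2 t$ and then verifies the inversion by feeding the computed current back through the forward model (28)–(29):

```python
import math

# Sketch of the FitzHugh-Nagumo inversion (28)-(33): build I(t) that elicits
# v(t) = sin^2 t via the quadrature in (33), then forward-integrate the model
# with that input and confirm the output tracks the prescribed v(t).

a, gam, eps = 1 / 3, 1.0, 0.1                    # (a, gamma, epsilon) as in Fig. 2
f  = lambda v: v * (1 - v) * (v - a)
fp = lambda v: (1 - 2 * v) * (v - a) + v * (1 - v)   # f'(v)

v_t   = lambda t: math.sin(t) ** 2               # prescribed output v(t)
vd_t  = lambda t: math.sin(2 * t)                # v'(t)
vdd_t = lambda t: 2 * math.cos(2 * t)            # v''(t)

def B(t):
    # right-hand side of (32)
    return (vdd_t(t) - fp(v_t(t)) * vd_t(t) + eps * v_t(t)
            + eps * gam * (vd_t(t) - f(v_t(t))))

dt, T = 2e-4, 20.0
N = int(T / dt)

# I(t) from (33) with I(0) = 0: trapezoid quadrature of e^{eps*gam*s} B(s)
I, acc = [0.0] * (N + 1), 0.0
eb_prev = B(0.0)                                 # e^0 * B(0)
for k in range(1, N + 1):
    s = k * dt
    eb = math.exp(eps * gam * s) * B(s)
    acc += 0.5 * dt * (eb_prev + eb)
    I[k] = math.exp(-eps * gam * s) * acc
    eb_prev = eb

# forward Euler integration of (28)-(29) from the rest state (v, w) = (0, 0)
v, w = 0.0, 0.0
for k in range(N):
    v, w = v + dt * (f(v) - w + I[k]), w + dt * eps * (v - gam * w)

err = abs(v - v_t(T))    # the computed input should reproduce v(T) = sin^2(T)
```

In effect this checks that $N \circ N^{-1}$ is the identity for this input; the computed I(t) stays in roughly the range plotted in Fig. 2.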

4. PARAMETER IDENTIFICATION

We now use the formulation of the inverse problem to describe a method for estimating unknown model parameters given that the output potential v(t) and input current I(t) are known. We define two classes of problems:

P: For a given electrophysiological experiment, a model of the form (3)–(4) is assumed and estimates for all but n model parameters λ = (λ₁, …, λₙ) are known. In the experiment, an input current $I_e(t)$ and measured potential $V_e(t)$ are found for a discrete set of times t₁, t₂, …, t_N. The problem is then finding λ so that the assumed model most closely agrees with the measured output $V_e$.

P*: For a given experiment, a model of the form (3)–(4) is assumed but, since the stimulus is chemical, I(t) = 0. In this case one may still be interested in estimating unknown parameters λ defining the model.

Though these problems may seem different to experimentalists, the technique for estimating λ illustrated in this section is the same for both classes of problems. From (27), we define

$$\Lambda^* = \Lambda^*(\lambda, v, I) = \left\| \frac{dv}{dt} - f(v) + \vec{F}(v)\cdot K^{-1}(\vec{I} - \vec{H}) - I \right\|, \tag{36}$$

where ‖·‖ is some norm (perhaps simply absolute value). Then for problem P one might attempt to minimize§

$$\Lambda(V_e, I_e, \lambda) = \sum_{i=0}^{N} \Lambda^*(\lambda, V_e(t_i), I_e(t_i)) \tag{37}$$

in λ. If the agreement were exact, one would have Λ = 0. For a given model, Λ* is known; but since $V_e(t)$ is only known at discrete times and noise is often present, estimation of the derivatives evaluated at t = t_k is an issue in itself. Before addressing the issues of derivative estimation and data smoothing, there is a more fundamental theoretical issue of well-posedness. In general, Λ will not have a unique minimum. To illustrate this, consider problem P* for which

$$\Lambda^*(\lambda, v, I) = \left\| \frac{dv}{dt} - f(v) - \vec{F}(v)\cdot K^{-1}\vec{H} \right\|. \tag{38}$$

§ For problem P*, the only difference is that $I_e = 0$.


If v(t) is any solution of the differential equation (27), Λ*(λ, v, I) will be identically zero. If ṽ is not a solution of (27), Λ*(λ, ṽ, I) will not be zero. It is often the case that the model equations possess a rich bifurcation structure in parameter space. For some λ, (27) may possess stable fixed points while for other λ the model may have stable periodic solutions or other attractors. Thus, a minimization of Λ(λ, ṽ, I) in λ may result in a value of λ corresponding to a fixed point of (27) despite the output ṽ(t) being clearly periodic (and vice versa). To illustrate this, we note that for the FitzHugh–Nagumo model with I = 0 in (32),

$$\Lambda^* = \left\| \ddot{v} - f'(v)\dot{v} + \varepsilon v + \varepsilon\gamma(\dot{v} - f(v)) \right\|. \tag{39}$$
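The residual (39) is easy to evaluate numerically, and doing so makes the solution/non-solution distinction concrete. In the sketch below (parameter values, step sizes and the centered-difference derivative estimates are my own illustrative choices), (39) is small on a trajectory of the unforced FitzHugh–Nagumo model but order one on an arbitrary signal:

```python
import math

# Lambda* of (39) vanishes (up to discretization error) on solutions of the
# unforced (I = 0) FitzHugh-Nagumo model, but not on arbitrary signals.

a, gam, eps = 1 / 3, 1.0, 0.1
f  = lambda v: v * (1 - v) * (v - a)
fp = lambda v: (1 - 2 * v) * (v - a) + v * (1 - v)

def lam_star(vs, dt):
    """Mean absolute residual (39), with centered-difference derivatives."""
    total = 0.0
    for k in range(1, len(vs) - 1):
        vd  = (vs[k + 1] - vs[k - 1]) / (2 * dt)
        vdd = (vs[k + 1] - 2 * vs[k] + vs[k - 1]) / dt ** 2
        total += abs(vdd - fp(vs[k]) * vd + eps * vs[k]
                     + eps * gam * (vd - f(vs[k])))
    return total / (len(vs) - 2)

# a genuine trajectory of the I = 0 model, by Euler stepping
dt, N = 1e-3, 5000
v, w = 0.5, 0.0
traj = [v]
for _ in range(N):
    v, w = v + dt * (f(v) - w), w + dt * eps * (v - gam * w)
    traj.append(v)

fake = [math.sin(k * dt) for k in range(N + 1)]   # not a model solution

res_true = lam_star(traj, dt)   # ~ discretization error only
res_fake = lam_star(fake, dt)   # order-one residual
```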

To correlate periodic solutions of this model to data one might introduce phase and scaling variables for time and the potential:

$$\tau = \delta_t(t - \phi_t), \tag{40}$$

$$u = \delta_v(v - \phi_v), \tag{41}$$

$$\lambda = (\delta_t, \delta_v, \phi_t, \phi_v) \in \mathbb{R}^4. \tag{42}$$

Then, with v = φ_v + u/δ_v, as δ_t → 0 and δ_v → ∞,

$$\Lambda^* = \left\| \frac{\delta_t}{\delta_v}\left( \delta_t \frac{d^2u}{d\tau^2} + (\varepsilon\gamma - f'(v))\frac{du}{d\tau} \right) + \varepsilon v - \varepsilon\gamma f(v) \right\| \;\to\; \varepsilon\gamma \left\| f(\phi_v) - \gamma^{-1}\phi_v \right\| = 0 \tag{43}$$

if v = φ_v is a steady state of the model. In other words, if φ_v is a steady state, the parameter search λ → (0, ∞, 0, φ_v) will drive Λ to a minimum regardless of what the data are. For this reason, a penalty function P should be introduced to avoid parameter searches which yield undesirable solutions such as steady states. In general, if $V_\lambda(t)$ is a solution of the model equations for a given parameter value λ, a penalty function $P = P(V_e, V_\lambda, \lambda)$ should be introduced and then the functional

$$M = M(V_e, I_e, V_\lambda, \lambda) = \Lambda(V_e, I_e, \lambda) + P(V_e, V_\lambda, \lambda) \tag{44}$$

should be minimized in λ. The penalty function should be chosen so that it is numerically larger at λ for which $V_\lambda$ is an undesirable solution (such as a steady state) while smaller at other solutions (such as periodic solutions). In the next section we illustrate this procedure using the penalty function which is an approximation of the $L^2$ norm $\| V_\lambda(t) - V_e(t) \|$ on the time interval [0, T] over which the sampling of the data $V_e(t)$ was measured.


4.1. An example: parameter estimation in the SRK model. In this section we demonstrate how to use experimentally measured potential time series to estimate parameters in the Sherman–Rinzel–Keizer (SRK) model of β-cell electrical activity (Sherman et al., 1988). The model equations are:

$$C_m \frac{dv}{dt} = -I_{Ca-V}(v) - I_K(v, n) - I_{K-Ca}(v, c), \tag{45}$$

$$\frac{dn}{dt} = \frac{n_\infty(v) - n}{\tau_n(v)}, \tag{46}$$

$$\frac{dc}{dt} = f(-\alpha I_{Ca-V}(v) - k_{Ca}\, c). \tag{47}$$

The dependent variables v, n, and c are the transmembrane potential, the activation variable for the voltage-gated potassium channel, and the intracellular (free) calcium concentration, respectively. Equation (45) is a current balance equation where the terms on the right-hand side represent the transmembrane currents due to voltage-gated calcium channels ($I_{Ca-V}$), voltage-gated potassium channels ($I_K$), and calcium-activated potassium channels ($I_{K-Ca}$). Formulae for these currents and standard parameter values used in Sherman et al. (1988) are tabulated in Appendix A. The nondimensionalization of this model reveals (Pernarowski et al., 1991) that the calcium-activated potassium current can be approximated by

$$I_{K-Ca} = g_{K-Ca}\, c\,(v - v_K) = \frac{\bar{g}_{K-Ca}}{K_d}\, c\,(v - v_K). \tag{48}$$

We adopt this approximation here and then note that (45)–(47) is linear in $\vec{w} = (n, c)$ and has the form (3)–(4). Therefore, by the procedure outlined in Section 2, n and c can be eliminated from the equations. The resulting (equivalent) third-order equation for v(t) is derived in Appendix B:

$$\Lambda \equiv H_2\!\left(v, \frac{dv}{dt}, \frac{d^2v}{dt^2}, \frac{d^3v}{dt^3}\right) + \vec{K}_2\!\left(v, \frac{dv}{dt}, \frac{d^2v}{dt^2}\right)\cdot(n^*, c^*)^T = 0. \tag{49}$$

A sample of experimentally measured values for the membrane potential was obtained from David Mears (National Institutes of Health). These data are shown in Fig. 3 (top trace). The potential $V_e(t)$ was obtained from a single mouse pancreatic β-cell in an intact islet of Langerhans bathed in a 10 mM glucose concentration. The sampling rate for the data was 150 Hz, which translates to a measured $V_e$ value every Δt = 6.7 ms. The membrane potential for this experiment (type P*) is seen to lie roughly in the range of −30 to −20 mV. Superimposed on this figure (bottom trace) is a numerical integration of the SRK model using the ‘standard’ parameter values given in Sherman et al. (1988)¶.

¶ Tabulated in Appendix A.



Figure 3. Membrane potential of a pancreatic β-cell in mV. The top trace shows the membrane potential measured experimentally when the cell was bathed in a 10 mM glucose solution. The bottom trace shows a theoretical prediction using the Sherman–Rinzel–Keizer model with parameter values chosen from Sherman et al. (1988). For these standard parameter values, theory and experiment differ greatly. The object is to find parameter values for which theory and experiment agree in some measure.

Though the experimental data and theoretical prediction depict bursting oscillations, the two are badly out of phase, and have different periods and voltage ranges. Since these are the dominant differences, we introduced into the model equations the scaling and phase parameters λ as defined in (40)–(41). All other parameters in the model equations were fixed at their original values. To define the measure M in (44) we used Λ computed in (49) and a penalty function P equal to the square of the $L^2$ norm

$$P = \| V_e(t) - V_\lambda(t) \|^2_{L^2[0,T]} = \int_0^T (V_e(t) - V_\lambda(t))^2\, dt, \tag{50}$$

where T was chosen as 30 seconds (four burst durations in the experimental data). To evaluate Λ using experimental data, we approximated derivatives using a series of smoothing and discrete differentiation procedures. First, data were smoothed using a sliding window average. Given that the sampling times are t_k, k = 1, 2, …, N with Δt = t_{k+1} − t_k for all k, a smoothed time series $S(V_e) = \{S_k(V_e)\}_{k=1}^{N-p}$ of the data $V_e(t)$ was computed using

$$S_k(V_e) = \frac{1}{p}\sum_{i=k}^{k+p} V_e(t_i) \simeq \frac{1}{p\,\Delta t}\int_{k\Delta t}^{(k+p)\Delta t} V_e(t)\, dt, \tag{51}$$

where p is an averaging parameter indicating the width of the window over which the averaging is performed. A discrete differentiation operation D was used to compute derivatives. For example,

$$D(V_e) = \frac{V_e(t_{k+1}) - V_e(t_k)}{\Delta t} \simeq \frac{dV_e}{dt}(t_k). \tag{52}$$
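A minimal sketch of the operators S and D follows. The sine test signal, the sampling interval and the window width are my own illustrative choices, not values from the paper; the window here averages p samples, consistent with the 1/p normalization in (51):

```python
import math

# Sliding-window smoothing S (51) and forward-difference differentiation D (52).

def S(samples, p):
    """S_k = (1/p) * sum of the p samples starting at index k."""
    return [sum(samples[k:k + p]) / p for k in range(len(samples) - p + 1)]

def D(samples, dt):
    """D_k = (samples[k+1] - samples[k]) / dt, a derivative estimate at t_k."""
    return [(samples[k + 1] - samples[k]) / dt for k in range(len(samples) - 1)]

dt, p = 0.001, 10
V = [math.sin(k * dt) for k in range(3000)]   # noise-free test data
dV = D(S(V, p), dt)                           # dV/dt estimated as (DS)(V_e)
# dV[k] approximates cos(k*dt), up to a half-window lag of (p/2)*dt
```

Higher derivatives are obtained by composing the operators, e.g. $d^2V_e/dt^2 \approx (DSDS)(V_e)$, as used in the text.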



Figure 4. Comparison of experimental data and SRK model prediction for different model parameters. The scaling and phase parameters used to make the theoretical prediction agree better with the experimental trace when λ = (δt , δv , φt , φv ) = (1.72, 2.94, −2.7, −9.32). This choice was determined using the algorithm described in the text.

To compute Λ at a given λ, $dV_e/dt$ was approximated by $(DS)(V_e)$, $d^2V_e/dt^2$ by $(DSDS)(V_e)$, and so forth. Unlike the computations of Λ from the experimental data, the computation of the penalty function requires a (more computationally costly) numerical integration of the model equations. However, this computation need only be performed once when minimizing M with respect to the phase and scaling parameters λ defined in (42). After $V_{\lambda_0}(t)$ has been found by numerically integrating the model equations for λ = λ₀ = (1, 1, 0, 0), $V_\lambda(t)$ for other λ is easily computed from the relations (40)–(41) without further integrations. As a result, however, the time grid for $V_\lambda(t)$ in the numerical integration may not match the experimental time grid {t_k}, making it necessary to interpolate (via cubic splines) $V_\lambda(t)$ values at t = t_k so that the penalty function defined above may be approximated via

$$P = \| V_e(t) - V_\lambda(t) \|^2_{L^2[0,T]} \simeq \sum_k \left( V_e(t_k) - V_\lambda(t_k) \right)^2 \Delta t. \tag{53}$$

A sliding window ‘width’ of p = 40 was used to compute the measure M(λ) from the data and a numerical integration of the SRK model. The functional M(λ) was then minimized using the Nelder–Mead simplex algorithm [see Nelder and Mead (1965) and Himmelblau (1972)] using the initial value λ = (1, 1, 0, 0). After several iterations, the algorithm yielded the parameter values $\lambda = \lambda_m = (\delta_t, \delta_v, \phi_t, \phi_v) = (1.72, 2.94, -2.7, -9.32)$. In Fig. 4, the experimental data $V_e(t)$ are plotted against the scaled and phased theoretical prediction $V_{\lambda_m}(t)$. It is apparent that this choice of λ has significantly improved the correlation of phase and scaling (compared to Fig. 3).
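The structure of this fit can be sketched as follows. The synthetic 'data', the base model output u(τ) = sin τ, the generating parameter values, and the crude coordinate search standing in for the Nelder–Mead simplex are all my own illustrative substitutions; only the rescaling relations (40)–(41) and the discrete penalty (53) follow the text:

```python
import math

# Phase/scale fitting sketch: the "model" is integrated once to give u(tau);
# V_lambda is then produced by rescaling via (40)-(41), and the penalty (53)
# is minimized over lambda = (delta_t, delta_v, phi_t, phi_v).

u = math.sin                                   # base model output u(tau)
true = (1.2, 2.0, 0.3, -0.5)                   # hypothetical parameters generating the "data"

step_t = 0.05
ts = [k * step_t for k in range(600)]          # sampling grid, T = 30
Ve = [true[3] + u(true[0] * (t - true[2])) / true[1] for t in ts]

def V_lam(lam, t):
    d_t, d_v, p_t, p_v = lam
    return p_v + u(d_t * (t - p_t)) / d_v      # rescaling relations (40)-(41)

def P(lam):
    if abs(lam[1]) < 1e-6:                     # guard: delta_v = 0 is not a valid scaling
        return float("inf")
    return sum((ve - V_lam(lam, t)) ** 2 for t, ve in zip(ts, Ve)) * step_t   # (53)

lam0 = [1.0, 1.0, 0.0, 0.0]
lam, step = list(lam0), 0.5
while step > 1e-3:                             # crude coordinate search (accepts only improvements)
    improved = False
    for i in range(4):
        for s in (step, -step):
            trial = list(lam)
            trial[i] += s
            if P(trial) < P(lam):
                lam, improved = trial, True
    if not improved:
        step /= 2
```

As in the text, no further 'integrations' are needed once u(τ) is known: every trial λ only re-evaluates the cheap rescaling. A derivative-free simplex method such as Nelder–Mead would replace the coordinate search in a serious implementation.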


5. DISCUSSION

As has already been demonstrated, a large class of excitable cell models is controllable in the sense that there exists an input I(t) which elicits any (smooth) response v(t). This does not mean that these models are simultaneously controllable in the other dependent variables $\vec{w}$. Nevertheless, the implications are still broad and interesting. For example, the predicted controllability of the FitzHugh–Nagumo model implies that there is an input I(t) such that v(t) equals the first component of the Lorenz attractor. It is therefore not a surprise that forced excitable cell models exhibit such a rich variety of behaviors. At this early stage, it is not clear how (or if) the invertibility of these models can be used as an analytical tool to examine these behaviors.

In Section 4, however, it was demonstrated that model invertibility could be used for parameter estimation purposes. There, the SRK model was inverted and a parameter identification algorithm was applied to determine the phase and scaling parameters λ = (δ_t, δ_v, φ_t, φ_v) which minimized the functional M in (44). The results of these calculations (cf. Fig. 4) show much improved phase and scaling characteristics. Despite this improved fit, there is still a significant difference in the downstroke after the active phase of each burst. The experimental downstroke seen in Fig. 4 is less rapid than in the model. Clearly, mere scaling cannot fix this problem.

There are several possible explanations for this failing. The inclusion of additional parameters into the minimization procedure might have alleviated this problem. Alternatively, the model may not be able to reproduce a slower downstroke no matter what minimization procedure were applied. This latter possibility would indicate a deficiency in the model itself but would be untestable since ‘all’ minimization procedures would have to be checked to make such a conclusion.
Furthermore, if the inclusion of additional parameters into the minimization eliminated the difference in downstrokes, that would not necessarily validate the model. For instance, parameter sets which work well for one experiment may work poorly in other experiments. Thus, the method should not be misconstrued as a tool for validating models but more simply as an alternative technique for obtaining parameter estimates. Even as such, the method can clearly be altered or improved. There are many ways to de-noise data, minimize functionals, choose penalty functions and make derivative estimates from data.

The minimization procedure itself can be computationally intensive. Each functional evaluation requires that the model equations be numerically integrated for a sufficiently long duration to capture the qualitative behavior of the data. Thus, the time needed to minimize M depends greatly on the speed of the numerical integrator used. In Section 4 this issue was circumvented by introducing scale and phase parameters λ = (δ_t, δ_v, φ_t, φ_v). There, the model equations only needed to be integrated once using λ = (1, 1, 0, 0) to compute u(τ). Afterwards, the relations (40)–(41) were used to compute the solution v(t) for other λ. Were the data not periodic, it would not be useful to introduce scale and phase parameters to shorten the time needed to minimize M. Regardless, the results in Section 4 demonstrate that the minimization of M can lead to parameter values for which a given model more closely mimics the experimental behavior.

The ultimate goal would be to alter the technique to address the issues of noisy data and speed. The issue of noisy data would be easy to address if an integral formulation for Λ could be found. The algorithm speed might be improved by allowing parameters to change dynamically (such as in Kalman filters).

Lastly, a key feature of the parameter identification procedure discussed is that the model variables $\vec{w}(t)$ need not be measured to make parameter estimates. The inversion process indicates explicitly that appropriate derivative information of v(t) can be used to make such estimates. This suggests that by dynamically changing the applied current, a single experiment‖ might be sufficient to estimate channel-gating parameters in voltage-gated channels.

ACKNOWLEDGEMENTS

I would like to express my deep regret and sadness at the recent passings of Teresa Chay and Joel Keizer. Their influence on my work has been immense. Indeed, even in this manuscript Joel Keizer's influence can be noted in the references. Also, I would like to thank David Mears at the National Institutes of Health for sharing his data on pancreatic β-cells so that I could complete the last section of this paper.

APPENDIX A

SRK function definitions.

$$I_{Ca-V}(v) = \bar{g}_{Ca}\, m_\infty(v)\, h_\infty(v)\,(v - v_{Ca}), \tag{A1}$$

$$I_K(v, n) = \bar{g}_K\, n\,(v - v_K), \tag{A2}$$

$$I_{K-Ca}(v, c) = \bar{g}_{K-Ca}\, \frac{c}{K_d + c}\,(v - v_K), \tag{A3}$$

$$m_\infty(v) = \frac{1}{1 + \exp[(v_m - v)/S_m]}, \tag{A4}$$

$$h_\infty(v) = \frac{1}{1 + \exp[(v - v_h)/S_h]}, \tag{A5}$$

‖ Rather than the several needed in the typical voltage-clamp experimental protocol.


$$n_\infty(v) = \frac{1}{1 + \exp[(v_n - v)/S_n]}, \tag{A6}$$

$$\tau_n(v) = \frac{\bar{\tau}_n}{\exp[(v - v_b)/S_a] + \exp[-(v - v_b)/S_b]}. \tag{A7}$$

Standard parameter values in the SRK model:

    Parameter   (units)                     SRK value
    vCa         (mV)                        110
    vK          (mV)                        −75
    g¯Ca        (pS)                        1400
    g¯K         (pS)                        2500
    g¯K−Ca      (pS)                        30000
    VCell       (µm³)                       1150
    Cm          (fF)                        5310
    Kd          (µM)                        100
    kCa         (ms⁻¹)                      0.03
    α           [mole coul⁻¹ (µm³)⁻¹]       4.5062 × 10⁻⁶
    f           —                           0.001
    vb          (mV)                        −75
    vh          (mV)                        −10
    vm          (mV)                        4
    vn          (mV)                        −15
    Sa          (mV)                        65
    Sb          (mV)                        20
    Sh          (mV)                        10
    Sm          (mV)                        14
    Sn          (mV)                        5.6
    τ¯n         (ms)                        37.5

In the SRK model, the parameter α = 1/(2VCell F) where F is Faraday’s constant.
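For reference, a direct transcription of the gating functions (A4)–(A7) with the tabulated standard parameters (only the function and variable names are my own). Each sigmoid equals 1/2 at its half-activation potential, which provides an immediate sanity check:

```python
import math

# SRK gating functions (A4)-(A7) with the standard parameter values above.

vm, vh, vn, vb = 4.0, -10.0, -15.0, -75.0   # mV
Sm, Sh, Sn, Sa, Sb = 14.0, 10.0, 5.6, 65.0, 20.0   # mV
tau_bar_n = 37.5                             # ms

def m_inf(v):   # (A4)
    return 1.0 / (1.0 + math.exp((vm - v) / Sm))

def h_inf(v):   # (A5)
    return 1.0 / (1.0 + math.exp((v - vh) / Sh))

def n_inf(v):   # (A6)
    return 1.0 / (1.0 + math.exp((vn - v) / Sn))

def tau_n(v):   # (A7)
    return tau_bar_n / (math.exp((v - vb) / Sa) + math.exp(-(v - vb) / Sb))

# e.g. m_inf(vm) = 1/2, and tau_n(vb) = tau_bar_n / 2
```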

APPENDIX B

Elimination of (n, c) from the SRK model. In this section we reduce the modified SRK model equations to a single third-order differential equation in v(t). For simplicity, set $F(v) = I_{Ca-V}(v)$. The potential equation

$$C_m \frac{dv}{dt} = -F(v) - g_K\, n\,(v - v_K) - g_{K-Ca}\, c\,(v - v_K) \tag{B1}$$

is equivalent to

$$H_0\!\left(v, \frac{dv}{dt}\right) + \vec{K}_0(v)\cdot\vec{w} = 0, \tag{B2}$$


where $\vec{w} = (n, c)$, $\vec{K}_0 = (K_{01}, K_{02})^T$ and

$$H_0 = C_m \frac{dv}{dt} + F(v), \tag{B3}$$

$$\vec{K}_0 = \left( g_K(v - v_K),\; g_{K-Ca}(v - v_K) \right). \tag{B4}$$

Using the procedure discussed in Section 2 with I = 0,

$$H_1\!\left(v, \frac{dv}{dt}, \frac{d^2v}{dt^2}\right) + \vec{K}_1\!\left(v, \frac{dv}{dt}\right)\cdot\vec{w} = 0, \tag{B5}$$

where $\vec{K}_1 = (K_{11}, K_{12})^T$ and

$$H_1 = C_m\frac{d^2v}{dt^2} + F'\frac{dv}{dt} + \frac{g_K n_\infty}{\tau_n}(v - v_K) - f\alpha g_{K-Ca}\, F\,(v - v_K), \tag{B6}$$

$$\vec{K}_1 = \left( g_K\frac{dv}{dt} - \frac{g_K}{\tau_n}(v - v_K),\;\; g_{K-Ca}\frac{dv}{dt} - f g_{K-Ca} k_{Ca}(v - v_K) \right). \tag{B7}$$

To find $H_2$ and $\vec{K}_2 = (K_{21}, K_{22})^T$ in

$$H_2\!\left(v, \frac{dv}{dt}, \frac{d^2v}{dt^2}, \frac{d^3v}{dt^3}\right) + \vec{K}_2\!\left(v, \frac{dv}{dt}, \frac{d^2v}{dt^2}\right)\cdot\vec{w} = 0, \tag{B8}$$

differentiate (B5) in t:

$$\frac{dH_1}{dt} + \frac{d\vec{K}_1}{dt}\cdot\vec{w} + \vec{K}_1\cdot\left( \frac{n_\infty - n}{\tau_n},\; f(-\alpha F - k_{Ca}\, c) \right). \tag{B9}$$

Thus,

$$H_2 = \frac{dH_1}{dt} + \frac{K_{11}\, n_\infty}{\tau_n} - K_{12}\, f\alpha F, \tag{B10}$$

$$\vec{K}_2 = \frac{d\vec{K}_1}{dt} - \left( \frac{K_{11}}{\tau_n},\; K_{12}\, f k_{Ca} \right), \tag{B11}$$

where

$$\begin{aligned}
\frac{dH_1}{dt} = {}& C_m\frac{d^3v}{dt^3} + F'\frac{d^2v}{dt^2} + F''(v)\left(\frac{dv}{dt}\right)^2 - f\alpha g_{K-Ca} F'(v)(v - v_K)\frac{dv}{dt} && \text{(B12)} \\
& - f\alpha g_{K-Ca} F\frac{dv}{dt} && \text{(B13)} \\
& + \left( \frac{g_K n'_\infty}{\tau_n}(v - v_K) - \frac{g_K n_\infty}{\tau_n^2}\tau_n'(v - v_K) + \frac{g_K n_\infty}{\tau_n} \right)\frac{dv}{dt} && \text{(B14)}
\end{aligned}$$


and

$$\frac{d\vec{K}_1}{dt} = \left( g_K\frac{d^2v}{dt^2} + \frac{g_K}{\tau_n^2}\tau_n'(v - v_K)\frac{dv}{dt} - \frac{g_K}{\tau_n}\frac{dv}{dt},\;\; g_{K-Ca}\frac{d^2v}{dt^2} - f g_{K-Ca} k_{Ca}\frac{dv}{dt} \right)^T. \tag{B15}$$

Next, solve for (n, c) using (B2) and (B5) to find

$$n = n^* = \frac{K_{02} H_1 - K_{12} H_0}{\det K}, \tag{B16}$$

$$c = c^* = \frac{K_{11} H_0 - K_{01} H_1}{\det K}, \tag{B17}$$

where

$$\det K = K_{01} K_{12} - K_{02} K_{11}. \tag{B18}$$

Substituting these into (B8), one obtains a single third-order equation for v(t):

$$\Lambda \equiv H_2\!\left(v, \frac{dv}{dt}, \frac{d^2v}{dt^2}, \frac{d^3v}{dt^3}\right) + \vec{K}_2\!\left(v, \frac{dv}{dt}, \frac{d^2v}{dt^2}\right)\cdot(n^*, c^*)^T = 0. \tag{B19}$$

REFERENCES

Casti, J. L. (1985). Nonlinear System Theory, Orlando, FL: Academic Press.

FitzHugh, R. (1961). Impulses and physiological states in theoretical models of nerve membranes. Biophys. J. 1, 445–466.

Himmelblau, D. M. (1972). Applied Nonlinear Programming, New York, NY: McGraw-Hill Book Company.

Hodgkin, A. L. and A. F. Huxley (1952). A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. (Lond.) 117, 500–544.

Nagumo, J. S., S. Arimoto and S. Yoshizawa (1962). An active pulse transmission line simulating nerve axon. Proceedings of the IRE 50, 2061–2071.

Nelder, J. and R. Mead (1965). A simplex method for function minimization. Comput. J. 7, 308–313.

Pernarowski, M., R. M. Miura and J. Kevorkian (1991). The Sherman–Rinzel–Keizer model for bursting electrical activity in the pancreatic β-cell, in Differential Equation Models in Population Dynamics and Physiology, Lecture Notes in Biomathematics 92, S. Busenberg and M. Martelli (Eds), Berlin: Springer-Verlag, pp. 34–53.

Sherman, A., J. Rinzel and J. Keizer (1988). Emergence of organized bursting in clusters of pancreatic β-cells by channel sharing. Biophys. J. 54, 411–425.

Received 1 May 2000 and accepted 2 October 2000