Mathl Comput. Modelling, Vol. 14, pp. 686-689, 1990. Printed in Great Britain.
0895-7177/90 $3.00 + 0.00. Pergamon Press plc
Proc. 7th Int. Conf. on Mathematical and Computer Modelling

A GENERAL METHOD TO CHARACTERIZE HOMOGENEOUS NEURONAL POPULATIONS
Richard L. Port and Yi Lin
Departments of Psychology and Mathematics, Slippery Rock University, Slippery Rock, PA 16057 USA

Abstract. Nonlinear methods have been applied to the study of brain systems. However, experimental results have been disappointing in that higher-order terms are often found to have no effect, or increase the error in prediction for kernel-derived models. While simple neural systems may be second-order, it is possible that the efficacy of higher-order terms for complex systems may be degraded by computational error. Input parameters, output measures and computations which incorporate known parameters of neural systems are described. Results should provide system definition that is more accurate and amenable to modeling and simulation efforts.

Keywords. Neural systems; nonlinear analysis; brain function.
INTRODUCTION

Nonlinear analysis of neurobiological systems has become increasingly popular in recent years. The approach possesses remarkable promise in that the results may contribute to our understanding of brain function, provide a characterization of dysfunctional states, and produce a quantitative model of the intrinsic transformational properties of brain systems. However, many attempts to analyze neural structures have not been particularly enlightening.
Linear systems theory has proved useful in studying simple biological systems such as the circulatory, hemodynamic and respiratory systems. For a linear system with impulse response function k1(t), the output, y(t), in response to a particular input, x(t), is given as a weighted temporal sum of the time-translated input, x(t-t1), and is expressed by the convolution integral:

y(t) = ∫ k1(t1) x(t-t1) dt1

For nonlinear systems, the output may be determined by an interaction of inputs at multiple instants. Given a system in which the inputs at any two instants interfere with each other to yield an output:

y(t) = ∫∫ k2(t1,t2) x(t-t1) x(t-t2) dt1 dt2

For higher-order systems, the output can be described by the Volterra (1930) expression:

y(t) = Σ (n = 0 to ∞) ∫...∫ kn(t1,...,tn) x(t-t1) ... x(t-tn) dt1 ... dtn

The expression is valid for stochastic or deterministic inputs, but there are many systems for which the Volterra expansion does not exist, and the method is not applicable to unknown systems (in a practical sense). Consequently, Wiener (1950) constructed a hierarchy of functionals of increasing order which are mutually orthogonal to each other with respect to a random input:

y(t) = Σ (i = 0 to ∞) Gi[hi, x(t)]

where Gi is a complete set of orthogonal functionals, and hi is the set of symmetrical kernels that quantify relations between x(t) and y(t).

If the input is Gaussian white noise (GWN), the output is a stochastic process with a Gaussian distribution. The probabilistic character of x(t) and y(t) allows the expectation, or ensemble average, to be taken in terms of x(t) and y(t). The mean of a GWN, x(t), which is equal to the DC component of the input, can be transformed to another GWN with zero mean by simply subtracting the DC component. Consequently, the expectation of the product x(t1) x(t2) reduces to:

⟨x(t1) x(t2)⟩ = P δ(t1-t2)

where ⟨ ⟩ denotes the ensemble average, P is the power spectral density of the GWN, and δ is the Dirac delta function. The expectation of the product y(t) x(t-t1) reduces to:

⟨y(t) x(t-t1)⟩ = P k1(t1)

When the input is GWN, analytic procedures resemble linear methods. The fourfold product decomposes to:

⟨x(t1) x(t2) x(t3) x(t4)⟩ = P² [δ(t1-t2) δ(t3-t4) + δ(t1-t3) δ(t2-t4) + δ(t1-t4) δ(t2-t3)]

and the expectation of y(t) x(t-t1) x(t-t2) reduces to:

⟨y(t) x(t-t1) x(t-t2)⟩ = 2 P² k2(t1,t2)

Due to orthogonality, the series may be truncated after n functionals to give the best nth-order polynomial approximation of system output (in the sense of least mean square error). The first four terms are given:

G0[h0, x(t)] = h0

G1[h1, x(t)] = ∫ h1(t1) x(t-t1) dt1

G2[h2, x(t)] = ∫∫ h2(t1,t2) x(t-t1) x(t-t2) dt1 dt2 - P ∫ h2(t1,t1) dt1
G3[h3, x(t)] = ∫∫∫ h3(t1,t2,t3) x(t-t1) x(t-t2) x(t-t3) dt1 dt2 dt3 - 3P ∫∫ h3(t1,t2,t2) x(t-t1) dt1 dt2

Although the Volterra expansion is applicable to either stochastic or deterministic inputs, and the Wiener expansion is defined with respect to a stochastic input, the kernels are related to each other. The nth-order Wiener kernel is composed of a series of Volterra kernels of order n, n+2, n+4, etc. (see Rugh, 1981). Thus, the Wiener expansion can be construed as a rearrangement of the Volterra expansion to allow the use of deterministic inputs. A number of procedures have been developed to compute the Wiener kernels, including Ogura's generalization of Wiener's original method (Ogura, 1985), the French-Butz Fourier transformation procedure (French and Butz, 1973), and the Lee-Schetzen cross-correlation procedure (Lee and Schetzen, 1965). The Lee-Schetzen method has been applied to study a variety of systems. Computation of the nth-order kernel is accomplished by subtracting lower-order components from the output. The first four kernels of the expansion are obtained as:

h0 = ⟨y(t)⟩

h1(t1) = (1/P) ⟨y(t) x(t-t1)⟩

h2(t1,t2) = (1/2P²) ⟨[y(t) - G1[h1, x(t)] - h0] x(t-t1) x(t-t2)⟩

h3(t1,t2,t3) = (1/6P³) ⟨[y(t) - G2[h2, x(t)] - G1[h1, x(t)] - h0] x(t-t1) x(t-t2) x(t-t3)⟩

where ⟨ ⟩ denotes the time average.
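As an illustrative sketch (not part of the original procedure), the zeroth- and first-order Lee-Schetzen estimates can be computed numerically for a simulated first-order system. The impulse response k1_true, the record length, and the power level below are arbitrary assumptions for the sketch, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

dt = 1.0                    # sampling interval
P = 1.0                     # power spectral density of the discrete GWN
N = 200_000                 # record length (arbitrary)
x = rng.normal(0.0, np.sqrt(P / dt), N)   # zero-mean GWN input

# Hypothetical first-order system: y(t) = h0 + integral of k1(t1) x(t-t1) dt1
M = 30
tau = np.arange(M)
k1_true = np.exp(-tau / 5.0)              # assumed impulse response (illustrative)
h0_true = 2.0
y = h0_true + np.convolve(x, k1_true, mode="full")[:N] * dt

# Lee-Schetzen estimates: h0 = <y(t)>, h1(t1) = (1/P) <y(t) x(t-t1)>
h0_est = y.mean()
h1_est = np.array([(y[t1:] * x[:N - t1]).mean() for t1 in tau]) / P

print(h0_est)                             # close to the baseline of 2.0
print(np.max(np.abs(h1_est - k1_true)))   # small cross-correlation error
```

Because ⟨x(t-τ) x(t-t1)⟩ = (P/dt) δ at lag zero for the discrete noise, the time-averaged product recovers k1 up to estimation noise that shrinks with record length.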
Application of nonlinear analytical methods to the study of neural systems has not met with the level of success that would appear possible. A number of excellent studies have yielded results that are somewhat disappointing in that higher-order properties have marginal impact on error terms. Krausz and Friesen (1977) examined synaptic transmission in the lobster cardiac ganglion. A random impulse train input was applied and output was observed as the continuous intracellular potential. Kernels were computed using cross-correlation and their accuracy was evaluated by computing the mean square error (MSE) between the kernel-estimated and observed responses. The MSE for the first-order term was 19% of the total power of the output signal. Inclusion of the second-order term reduced the MSE to 9.5% but consistently underestimated the facilitation that occurred in observed responses. Addition of the third-order term had no apparent effect; the MSE remained at 9.5%. Response properties of catfish retinal neurons have been extensively investigated by Marmarelis and Naka (1973). Analysis of the horizontal cell response to a photic white-noise input revealed the following. The first-order model produced an MSE of 20%. Addition of the second-order term reduced the MSE to 15%. However, inclusion of the third-order term inflated the MSE to 30%.

The poor performance of higher-order predictions may result from a variety of sources (Sakuranaga, Shunsake, Hida and Naka, 1986). For deterministic systems, the MSE may reflect nonlinearity of a higher order. For stochastic systems, the MSE does not necessarily reflect discrepancy between kernel-predicted and observed output, but may capture the probabilistic nature of the system. Finally, the system output may contain intrinsic or extrinsic noise that is unrelated to the input stimulus. It is our position that much of the error present in analysis of neurobiological systems can be controlled through experimental and computational means.

CONSTRAINED METHODS

Essentially all brain systems are amenable to nonlinear analysis. Moreover, the assumptions of finite memory, stationarity and ergodicity are testable over the course of experimentation. The selection and application of the input stimulus, as well as the output signal, should reflect known parameters of the system. Consequently, a system can be tested over a range of inputs that are of scientific interest, and interpretation of the transformational properties is enhanced by using an output signal with a known generator. The following procedures are applicable to the general case where a homogeneous population of neurons comprises a delineated brain structure.

Definition of input parameters.
Two general forms of input stimulation may serve as a forcing function. A continuous, stochastic input, such as GWN, may be useful to study sensory systems. However, to approximate naturally-occurring input to the central nervous system, the stimulus must take the form of discrete impulses which mimic the action potentials that occur in the afferent region. Krausz (1975) has shown that the Wiener method can be approximated for a random impulse (Poisson process) input. Since the Wiener and Volterra kernels are related, and Volterra's procedure can be applied to a deterministic input, we suggest that the ideal input configuration is a pseudo-random impulse train varied on both frequency and amplitude dimensions. The impulse train can be constructed to exhaustively test the system but constrained by the limits of the afferent system so that only natural transformations are investigated. Frequency limits, defining the range of inter-impulse intervals, are obtainable experimentally (via the activity record of the afferent neurons). Thus, record length is maximized by excluding irrelevant configurations. Similarly, the amplitude of an impulse can be varied in a random fashion and constrained by limits of interest. Importantly, saturation levels can be defined to avoid over-estimation of higher-order responses (predicted output that is not physically realizable). Thus, the input train can be defined a priori by a series of randomly distributed inter-impulse
intervals and impulse intensities.

Definition of output signal.

Two general forms of output may be investigated. A continuous record can be obtained by recording transient potential differences (field potentials). However, these populations do not maintain a constant temporal relation, and the underlying generators are not well defined. Consequently, computations and interpretations are difficult. Conversely, a spike train output reflects the action potential discharges that occur within the delimited neuronal population. Spike data can be converted to continuous functions by producing a post-stimulus time histogram which approximates a waveform. Use of the multiple-unit record of neuronal discharge appears to be the most appropriate for general neural systems. Windows may be set to exclude other populations or elements in which the system of interest is embedded. Response data can be represented in histogram form, with the time bins set by the experimenter, and the results reflect the output that is transmitted to the next member in the network by the population of neurons under study.
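As a sketch of the input and output definitions above, a constrained pseudo-random impulse train and a histogram-style output record might be generated as follows. The interval and intensity limits, the bin width, and the placeholder spike record are illustrative assumptions, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(1)

# Input: inter-impulse intervals and intensities drawn at random within
# experimenter-defined limits (hypothetical bounds, in ms and arbitrary units)
min_ivl, max_ivl = 20.0, 500.0     # interval limits, e.g. from the afferent activity record
min_amp, max_amp = 0.1, 1.0        # intensity limits kept below saturation
n_impulses = 100

intervals = rng.uniform(min_ivl, max_ivl, n_impulses)
times = np.cumsum(intervals)                       # impulse times (ms), defined a priori
amplitudes = rng.uniform(min_amp, max_amp, n_impulses)

# Output: multiple-unit spike times converted to a PSTH-like waveform.
# spike_times is a placeholder record of unit discharges (ms).
spike_times = rng.uniform(0.0, times[-1], 5000)
bin_ms = 10.0                                      # time bin set by the experimenter
edges = np.arange(0.0, times[-1] + bin_ms, bin_ms)
psth, _ = np.histogram(spike_times, bins=edges)    # discharge counts per bin

print(times[:3], amplitudes[:3])
print(psth[:10])
```

The binned counts approximate the continuous output waveform that the kernel computations operate on.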
Amended computations. The zero-order term (h0) is intended to define the baseline of the system and provides an index for the deviations produced by higher-order interactions. For a multiple-unit record, h0 represents the spontaneous activity of the system. Previous methods estimate h0 as the time average of y(t) over the entire record length. However, input pathways to neural systems are exclusively excitatory or inhibitory. Further, the existence of feedback loops does not guarantee cancelling effects of excitatory and inhibitory elements. Thus, we propose that h0 may be more accurately obtained as the average of y(t) during quiescent periods of the record, as defined by the finite memory of the system (Δ). Hence, h0 is the average value of y(t) for periods of the record between the occurrence of an impulse plus the memory of the system (ti + Δ) and the occurrence of the next impulse in the train (ti+1). Consequently, h0 is not affected by the intensity parameters chosen by the experimenter, and statistical evaluation of h0 for various portions of the record may be used to test ergodicity of the system. It is important to note that neural systems may be nonstationary, but possess the property of ergodicity.

The first-order term (h1) defines the average system response to an input regardless of prior events. It is defined as the average deviation of y(t) from h0 over time for all inputs. However, in the case where the input varies on both frequency and amplitude dimensions, n first-order terms are required to define the system response to n intensity levels:

(h1)i = y(t)i - h0

Stationarity of the system can be evaluated by statistical comparison of h1 computed for various segments of the output record, provided that x(t) is a random process. The second-order term (h2) defines the deviation of the system response that is attributable to the preceding input. It can be defined as the average deviation from the linear response (h1 + h0) over time for all inputs meeting criteria. For the input, n² × m second-order responses are defined for interactions between n intensities and m intervals:

(h2)ij = y(t)ij - (h1)i - h0

where i defines the intensity levels for both inputs and j defines the interval separating them. We avoid the practice of ignoring "intervening" impulses to expand record length as employed by others (e.g., Krausz, 1975; Sclabassi et al., 1988). The practice is justified for systems receiving uncontrollable input from other sources but will yield significant artifact when input parameters are defined by the experimenter. Also, because we are evaluating real (causal) systems, only positive arguments are considered.
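A minimal sketch of the amended zero- and first-order computations: h0 is averaged over quiescent periods only, and (h1)i is computed per intensity level. The simulated record, the memory length Δ, and the response window are hypothetical choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

T = 20_000                  # record length, in bins
delta = 50                  # assumed finite memory of the system (Δ), in bins
win = 50                    # response window after each impulse, in bins

# Hypothetical multiple-unit record: baseline firing plus an intensity-graded
# response following each impulse
y = rng.poisson(5.0, T).astype(float)
impulse_bins = np.sort(rng.choice(np.arange(0, T - 200, 200), 60, replace=False))
intensity = rng.integers(0, 3, impulse_bins.size)        # 3 intensity levels
for t0, lvl in zip(impulse_bins, intensity):
    y[t0:t0 + win] += (lvl + 1) * 2.0                    # larger response at higher levels

# h0: average of y(t) over quiescent periods only, i.e. from (t_i + delta)
# up to the next impulse, so baseline is unaffected by the chosen intensities
quiet = np.ones(T, dtype=bool)
for t0 in impulse_bins:
    quiet[t0:t0 + delta] = False
h0 = y[quiet].mean()

# (h1)_i: average deviation of y(t) from h0 following impulses of level i
h1 = np.array([
    np.mean([y[t0:t0 + win].mean() for t0 in impulse_bins[intensity == lvl]]) - h0
    for lvl in range(3)
])
print(h0)                   # near the simulated baseline rate of 5
print(h1)                   # increases with intensity level
```

Comparing h0 across record segments, as the text suggests, would then serve as the ergodicity check.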
The third-order term (h3) characterizes the change in system output as a consequence of a pair of preceding inputs. The input must be defined in terms of three impulse intensities (i) and two intervals (j). There exist n³ × m² third-order conditions for which responses are defined:

(h3)ij = y(t)ij - (h2)ij - (h1)i - h0
For our procedure, the second-order term that is subtracted from y(t) is not the superposition of both preceding inputs, but only the most proximal input pair. It can be demonstrated that h3 is not a simple summation of second-order properties by contrasting both procedures, but for the characterization of the change in y(t) evoked by the most distal input, we suggest that the present method most accurately defines the functional impact.

The present methods were defined to provide a practical means to study functional properties of neural systems. By adapting the Volterra-Wiener method and including known properties of the concerned system, we hope to reduce the error associated with higher-order predictions. The minimal constraints placed on the input enhance experimental realizability, and the selection of multiple-unit discharge as the output should facilitate interpretation. Attempts to contrast error terms for various methods are currently in progress using simulated systems.

REFERENCES

French, A.S. and E.G. Butz (1973). Measuring the Wiener kernels of a nonlinear system using the fast Fourier transform algorithm. Int. J. Control, 17, 529-539.
Krausz, H.I. (1975). Identification of nonlinear systems using random impulse train inputs. Biol. Cybern., 19, 217-230.
Krausz, H.I. and W.O. Friesen (1977). The analysis of nonlinear synaptic transmission. J. Gen. Physiol., 70, 243-265.
Lee, Y.W. and M. Schetzen (1965). Measurement of the Wiener kernels of a nonlinear system by cross-correlation. Int. J. Control, 2, 237-254.
Marmarelis, P.Z. and K.I. Naka (1973). Nonlinear analysis and synthesis of receptive-field responses in the catfish retina. J. Neurophysiol., 36, 619-633.
Ogura, H. (1985). Estimate of Wiener kernels of a nonlinear system and fast algorithm using digital Laguerre filters. Proc. 16th NIBB Conference, Okazaki, Japan.
Rugh, W.J. (1981). Nonlinear Systems Theory: The Volterra/Wiener Approach. Johns Hopkins University Press, Baltimore, MD.
Sakuranaga, M., S. Shunsake, E. Hida and K.I. Naka (1986). Nonlinear analysis: mathematical theory and biological applications. CRC Crit. Rev. Biomed. Eng., 14, 127-184.
Sclabassi, R.J., J.L. Eriksson, R.L. Port, G.B. Robinson and T.W. Berger (1988). Nonlinear systems analysis of the hippocampal perforant path-dentate system. J. Neurophysiol., 60, 1066-1077.
Volterra, V. (1930). Theory of Functionals and of Integral and Integro-differential Equations. Blackwell Scientific, London.
Wiener, N. (1950). Nonlinear Problems in Random Theory. MIT Press, Cambridge, MA.