Available online at www.sciencedirect.com
Physica A 325 (2003) 152 – 164
www.elsevier.com/locate/physa
Nonlinear sensors activated by noise

L. Gammaitoni^a,*, A.R. Bulsara^b

^a Dipartimento di Fisica, Università di Perugia; Istituto Nazionale di Fisica Nucleare-Virgo project, Sezione di Perugia; and Istituto Nazionale di Fisica della Materia, Sezione di Perugia, Perugia I-06100, Italy
^b Space and Naval Warfare Systems Center, Code D-363, 49590 Lassing Road, San Diego, CA 92152-6147, USA

Received 22 December 2002
Abstract

We discuss a residence-time-based operating scheme that can be applied to a wide class of bistable sensor devices. The detection of physical signals embedded in a noise background is realized via the monitoring of the residence times in the metastable attractors of the system. This scheme for quantifying the response of a nonlinear dynamic device has been implemented in experiments involving fluxgate magnetometers.
© 2003 Elsevier Science B.V. All rights reserved.

PACS: 05.40.+j; 02.50.Ey; 85.25.Dq

Keywords: Noise; Bistability; Nonlinearity; Sensors
1. Introduction

Dynamical sensors are widely used to detect a number of different physical signals: from magnetic
* Corresponding author. Tel.: +39-075-5848458.
E-mail addresses: [email protected] (L. Gammaitoni), [email protected] (A.R. Bulsara).
URL: http://www.
sensors (see e.g. Ref. [5]), and mechanical sensors (see e.g. Ref. [6]), e.g. acoustic transducers made with piezoelectric materials. The operation of such devices is often performed by using an external known bias signal to drive the system response. Spectral techniques are then used to detect the unknown target signal. Usually, the amplitude of the bias signal is taken to be quite large (compared to the target signal to be detected) in order to drive the bistable dynamics and to reduce the influence of background noise. In this con
provides an observable that is used as a quantifier
2. Model dynamics

Our study has been developed in the context of design and tests on a so-called advanced dynamic fluxgate magnetometer (ADFM) prototype, a magnetic
where τ is a system time constant, and T a dimensionless temperature; h(t) is an external magnetic field. The potential energy function is

U(x, t) = \frac{x^2}{2} - \frac{1}{c}\,\ln\cosh[c\{x + h(t)\}] ,  (2)

where we set c = T^{-1}. The potential energy function (2) is bistable for c > 1. Model (1) can be augmented by an additive noise term. In this work, we will assume the deterministic bias signal h(t) = A sin ωt (period T_0 = 2π/ω) to be suprathreshold, i.e., switching between the two stable attractors in the potential system, or between the static thresholds when the device dynamics are irrelevant, is controlled by the bias signal, with one threshold crossing occurring during each half-cycle (the exact time to threshold crossing depends, of course, on the system and bias parameters). The variable of interest is, then, the difference ΔT = |T_+ − T_−| between the residence times in the states of the two-state system. This quantity is clearly a function of the system and bias parameters. It is zero when the two stable states are symmetric about the unstable fixed point.
operational conditions. One such waveform is obtained by properly adding a square wave (having amplitude A_1) to a triangular wave (amplitude A_2), both having frequency ω. The amplitudes of the component signals are set according to the prescription A_1 + A_2 = A. The result is the waveform

H(t) = A_1 + \frac{2A_2\omega}{\pi}\left[t - \left(2n - \frac{3}{2}\right)\frac{\pi}{\omega}\right] , \qquad (2n-2)\frac{\pi}{\omega} < t < (2n-1)\frac{\pi}{\omega} ,

H(t) = -A_1 - \frac{2A_2\omega}{\pi}\left[t - \left(2n - \frac{1}{2}\right)\frac{\pi}{\omega}\right] , \qquad (2n-1)\frac{\pi}{\omega} < t < 2n\frac{\pi}{\omega} .  (5)

For waveform (5), it is clear that the parameters A_1, A_2 determine whether threshold crossings occur on the signal segments having slope SL = ∞, SL < 0, or SL > 0. In fact, it is evident that for crossings of the upper threshold (at time t_{10}^{(i)}), one has t_{10}^{(i)} = 0 if A_1 − A_2 > b − ε (crossings occur on the SL = ∞ segment), and t_{10}^{(i)} > 0 if A_1 − A_2 < b − ε (crossings occur on the SL > 0 segment). For the lower threshold, the crossing times are t_{20}^{(i)} = π/ω for A_1 − A_2 > b + ε (crossings on the SL = ∞ segment) and t_{20}^{(i)} > π/ω for A_1 − A_2 < b + ε (crossings on the SL < 0 segment). For the cases when the threshold crossings occur on the finite-slope segments one can, analogous to the time-sinusoidal case, obtain the upper and lower threshold crossing times as
t_{10}^{(i)} = \frac{(b - \varepsilon - A_1 + A_2)\,\pi}{2A_2\omega} , \qquad t_{20}^{(i)} = \frac{(b + \varepsilon - A_1 + 3A_2)\,\pi}{2A_2\omega} ,  (6)
whence we obtain

\Delta T^{(i)} = T_0\,\frac{\varepsilon}{A_2} , \qquad A_1 - A_2 < b - \varepsilon ;

\Delta T^{(i)} = T_0\,\frac{b + \varepsilon - A_1 + A_2}{2A_2} , \qquad b - \varepsilon \le A_1 - A_2 \le b + \varepsilon ;

\Delta T^{(i)} = 0 , \qquad A_1 - A_2 > b + \varepsilon ,  (7)
where the sensitivity S^{(i)} = ∂ΔT^{(i)}/∂ε is obtained as S^{(i)} = T_0/A_2, S^{(i)} = T_0/2A_2, and S^{(i)} = 0 for each of the three regimes defined in (7). For the dynamics (1) driven by the sinusoidal bias signal, the residence times difference takes the form

\Delta T = \frac{2}{\omega}\,\left|\sin^{-1} g_p - \sin^{-1} g_m\right| ,  (8)

where g_{m,p} \equiv \{c^{-1}\tanh^{-1} x_{fs}^{m,p} - x_{fs}^{m,p} - \varepsilon\}/A. Analogous expressions for waveform (5) may be derived analytically.
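The piecewise result (7) can be cross-checked numerically by generating one period of waveform (5), shifting it by the DC target signal, and reading off the first up- and down-threshold crossing times directly. The sketch below does this for illustrative parameter values; the names `A1`, `A2`, `b`, `eps` mirror the text, the grid resolution is an assumption of the sketch, and a suprathreshold bias (A_1 + A_2 + ε > b) is assumed throughout.

```python
import numpy as np

def delta_T_analytic(eps, A1, A2, b, T0):
    """Residence-times difference of Eq. (7) for waveform (5),
    with a DC target signal eps and static thresholds at +/- b."""
    d = A1 - A2
    if d < b - eps:                        # both crossings on sloped segments
        return T0 * eps / A2
    if d <= b + eps:                       # upper crossing on the jump only
        return T0 * (b + eps - A1 + A2) / (2.0 * A2)
    return 0.0                             # both crossings on the jumps

def delta_T_numeric(eps, A1, A2, b, T0, n=400001):
    """Read |T+ - T-| off one period of waveform (5) plus the target
    signal eps, by locating the first up/down threshold crossings."""
    t = np.linspace(0.0, T0, n)
    up_half = t < T0 / 2.0
    slope = 4.0 * A2 / T0                  # triangular slope 2*A2*omega/pi
    H = np.where(up_half,
                 A1 + slope * (t - 0.25 * T0),
                 -A1 - slope * (t - 0.75 * T0))
    t10 = t[np.flatnonzero(up_half & (H + eps >= b))[0]]
    t20 = t[np.flatnonzero(~up_half & (H + eps <= -b))[0]]
    return abs(2.0 * (t20 - t10) - T0)     # T+ = t20 - t10, T- = T0 - T+
```

For example, with b = 1, T_0 = 1, ε = 0.05 and (A_1, A_2) = (0.6, 0.8), both routines give ΔT = T_0 ε/A_2 = 0.0625, while (A_1, A_2) = (1.3, 0.1) falls in the insensitive third regime, ΔT = 0.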
In the following we compute the mean residence times difference in the presence of system noise. As mentioned earlier, we expect the previous expressions to provide good approximations to the mean residence times difference when the known bias signal is well suprathreshold and the noise and target signal are small. We will assume that the noise is Gaussian and correlated, i.e., it is derived from a white-noise driven Ornstein–Uhlenbeck process [14]:

\dot\zeta(t) = -\tau_c^{-1}\,\zeta(t) + \sigma F(t) ,  (9)

where F(t) is a white noise process having zero mean and unit variance: ⟨F(t)⟩ = 0 and ⟨F(t)F(t′)⟩ = δ(t − t′). We readily obtain for the correlation function of the colored Gaussian noise ⟨ζ(t)ζ(t′)⟩ = ⟨ζ²⟩ exp[−|t − t′|/τ_c], where ⟨ζ²⟩ = σ²τ_c/2. We also assume that the signal frequency ω is well within the noise band, i.e., the noise is wideband vis-à-vis the signal.

For ε = 0 and A suprathreshold, the threshold crossings to the stable states are controlled by the signal, but the noise does introduce some randomness. The result is a broadening of the RTD, due to the noise. For A far above the deterministic switching threshold and moderate noise, the RTD assumes a symmetric, narrow (Gaussian-like) shape with a mean value (the mean crossing time) nearly the same as the most probable value or mode (this is the value around which most experimental observations are likely to be clustered). The mean values (or modes, in this case) of the histograms corresponding to transitions to the left and right stable states coincide. As the signal amplitude decreases and/or the noise intensity increases, the RTD starts to develop a tail, so that the mean and mode get separated; the appearance of the tail is an indication of the growing role of noise in producing switching events, although the suprathreshold signal is still the dominant mechanism. When the signal amplitude falls below the deterministic crossing threshold (Ax_0/ΔU < 1), the crossings are driven largely by the noise. The RTD can assume a characteristic multi-peaked structure [15] that shows "skipping" behavior, since the noise can actually cause the crossings to occur at different multiples nT_0/2 (n odd) of the half-period, and the stochastic resonance scenario comes into play [8]. For very special situations (primarily those in which there is a small amount of noise), one can carry out the above procedure with a very weak bias signal. In this case the RTDs for each potential well are almost unimodal with long tails.
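The colored noise of Eq. (9) can be generated with the standard exact one-step update for an Ornstein–Uhlenbeck process, which reproduces the stationary variance σ²τ_c/2 and the exponential correlation exp(−|t − t′|/τ_c) quoted above. A minimal sketch, with step size and parameter values that are purely illustrative:

```python
import numpy as np

def ou_noise(n, dt, tau_c, sigma, rng):
    """Sample the OU process (9): dzeta = -(zeta/tau_c) dt + sigma dW.
    Uses the exact one-step update, so any dt is admissible; the
    stationary variance is sigma**2 * tau_c / 2 and the one-step
    autocorrelation coefficient is exp(-dt/tau_c)."""
    rho = np.exp(-dt / tau_c)
    var_stat = sigma**2 * tau_c / 2.0
    zeta = np.empty(n)
    zeta[0] = np.sqrt(var_stat) * rng.standard_normal()   # stationary start
    kicks = np.sqrt(var_stat * (1.0 - rho**2)) * rng.standard_normal(n - 1)
    for k in range(n - 1):
        zeta[k + 1] = rho * zeta[k] + kicks[k]
    return zeta
```

For instance, τ_c = 0.5 and σ = 1 give ⟨ζ²⟩ = 0.25; a long trace reproduces this variance, and the lag-dt correlation exp(−dt/τ_c), to within sampling error.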
The mean values and modes are, again, dependent on the target signal; however, in this case, the slopes of the long-time tails of the density functions are different for the two wells, and this difference can also be used as an identifier
Fig. 1. RTDs for the Schmitt trigger stable states, measured for increasing values of the noise standard deviation σ. Each RTD (for a target signal ε = 0.2b) is presented.

We note the following:

1. For small noise (σ ≪ A − b), the RTD presents two well-separated, almost-symmetric peaks centered about the mean values ⟨T_{+,−}⟩.
2. As long as the noise stays small (σ < A − b), the mean values ⟨T_{+,−}⟩ are roughly the same as the deterministic values computed above (the larger A, the less they depart from the computed values). As the noise intensity increases, however, the distributions become broader, and start to develop tails.
3. In the presence of increasing amounts of noise (σ > A − b), the two peaks of the RTD tend to merge as a consequence of an increasing number of purely noise-activated switches between the stable states; simultaneously, the RTDs develop noise-dependent tails.
4. For large noise (σ ≫ A − b), the switching mechanism is completely dominated by noise. ΔT decreases and eventually goes to zero as σ → ∞.

The results of simulations, wherein we examine the effects of changing the noise variance σ², the bias amplitude A, and the (DC) target signal ε, are shown in Ref. [2]. As discussed in the NANDS context [1,2], the (theoretical) largest ΔT is obtained for zero bias signal. However, in real applications this observation must be tempered by the constraint of
without the bias signal. Otherwise, a bias signal must be applied. In the following we introduce a quantifier of the system performance.
(11)
(12)
where we set σ_{T_n^+} ≈ σ_{T_n^−} = σ_{T_n}, since the distributions are identical, with the separation of the means being the only manifestation of the presence of the target signal. Now, we introduce an output "signal-to-noise ratio" (SNR) via the definition of the number of measurements accumulated during an observation time T_{ob},

N = \frac{T_{ob}}{\langle T_+\rangle + \langle T_-\rangle} = \frac{T_{ob}}{\Delta T_n + 2\langle T_-\rangle} \approx \frac{T_{ob}}{2\langle T_-\rangle} .  (13)
Hence, we obtain
(14)
(15)
It is of interest to compute and analyse the SNR (15) as a function of the bias amplitude A and other system parameters, as a means of optimizing performance. The simple threshold description of the ST, as well as the potential-based models (mean-
compared to the threshold "height". To get an analytical estimate of the SNR (15), we resort to our simple ST model. We assume the noise floor to be small (compared to the threshold setting), and to manifest itself in a fluctuating threshold with mean value b; the fluctuations are assumed to be Gaussian:

P(\theta) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[-\frac{(\theta - b)^2}{2\sigma^2}\right] .  (16)

Let us first consider the sinusoidal bias signal. The density function of the upper threshold crossing times is then

P(t_1) = \frac{\omega A}{\sqrt{2\pi\sigma^2}}\,\cos\omega t_1\, \exp\left[-\frac{A^2}{2\sigma^2}\,(\sin\omega t_1 - \sin\omega t_{10})^2\right] ,  (17)

which is normalized to unity over the interval 0 ≤ t_1 ≤ T_0/4, which contains the deterministic crossing time. For the lower threshold crossings,

P(t_2) = \frac{\omega A}{\sqrt{2\pi\sigma^2}}\,\cos\omega t_2\, \exp\left[-\frac{A^2}{2\sigma^2}\,(\sin\omega t_2 - \sin\omega t_{20})^2\right] ,  (18)

normalized to unity in T_0/2 ≤ t_2 ≤ 3T_0/4. The bias signal must be well suprathreshold, and the noise intensity σ² also should be small compared to the threshold height. In (17) and (18), the deterministic crossing times t_{1,20} are given by (3). In terms of the density functions (17) and (18), we may write down formal expressions for the mean crossing times ⟨t_1⟩_{th} and ⟨t_2⟩_{th}, the subscript denoting the theoretical (in this case, approximate) quantity:

\langle t_1\rangle_{th} = \int_0^{T_0/4} P(t_1)\,t_1\,\mathrm{d}t_1  (19)
and

\langle t_2\rangle_{th} = \int_{T_0/2}^{3T_0/4} P(t_2)\,t_2\,\mathrm{d}t_2 .  (20)
The theoretical difference in residence times is then

\Delta T_{th} = \langle T_+\rangle_{th} - \langle T_-\rangle_{th} = 2(\langle t_2\rangle_{th} - \langle t_1\rangle_{th}) - T_0  (21)

in terms of definitions (19) and (20),
and the remaining term in the denominator of the square root factor in (15) replaced by the difference in the mean crossing times. The integrals above must, in general, be computed numerically. We then readily observe that, in the limit of small noise variance and large bias amplitude, the averaged quantities are well approximated by their deterministic counterparts:

\langle t_{1,2}\rangle_{th} \approx t_{1,20} , \qquad \Delta T_{th} \approx \Delta T_{ST0} ,  (23)

where the deterministic residence times difference is given in (4). We may also, in the regime of validity of correspondences (23), approximately evaluate integrals (19) and (20) using a second-order Laplace expansion (see e.g. Ref. [17]), in which we retain terms up to O(σ²) only. We then obtain

\langle t_1\rangle_{th} \approx t_{10} + \frac{\sigma^2}{A^2}\,\sec\omega t_{10}\,G_{10}(t_{10}) + \text{h.o.t.} ,

\langle t_2\rangle_{th} \approx t_{20} + \frac{\sigma^2}{A^2}\,\sec\omega t_{20}\,G_{20}(t_{20}) + \text{h.o.t.}  (24)
For the variance σ_{t_1}² we obtain

\sigma_{t_1}^2 \approx \frac{\sigma^2}{A^2}\,\sec\omega t_{10}\,\{G_2(t_{10}) - 2t_{10}\,G_{10}(t_{10})\} ,  (25)
where we have defined

G_{10}(t_{10}) = \frac{f_1^{(2)}}{2\phi_1^{(2)}}(t_{10}) + \frac{f_1^{(1)}\phi_1^{(3)}}{2[\phi_1^{(2)}]^2}(t_{10}) - \frac{f_1\,\phi_1^{(4)}}{8[\phi_1^{(2)}]^2}(t_{10}) + \frac{5f_1\,[\phi_1^{(3)}]^2}{24[\phi_1^{(2)}]^2}(t_{10}) ,  (26)

G_{20}(t_{20}) = \frac{f_1^{(2)}}{2\phi_2^{(2)}}(t_{20}) + \frac{f_1^{(1)}\phi_2^{(3)}}{2[\phi_2^{(2)}]^2}(t_{20}) - \frac{f_1\,\phi_2^{(4)}}{8[\phi_2^{(2)}]^2}(t_{20}) + \frac{5f_1\,[\phi_2^{(3)}]^2}{24[\phi_2^{(2)}]^2}(t_{20}) ,  (27)

G_2(t_{10}) = \frac{f_2^{(2)}}{2\phi_1^{(2)}}(t_{10}) + \frac{f_2^{(1)}\phi_1^{(3)}}{2[\phi_1^{(2)}]^2}(t_{10}) - \frac{f_2\,\phi_1^{(4)}}{8[\phi_1^{(2)}]^2}(t_{10}) + \frac{5f_2\,[\phi_1^{(3)}]^2}{24[\phi_1^{(2)}]^2}(t_{10}) ,  (28)

with f_1(t) = t cos ωt, f_2(t) = t² cos ωt, and
\phi_1(t) = -\frac{1}{2}\,(\sin\omega t - \sin\omega t_{10})^2 , \qquad \phi_2(t) = -\frac{1}{2}\,(\sin\omega t - \sin\omega t_{20})^2 .  (29)
In the above expressions, the superscripts (e.g. φ^{(m)}) denote the mth time derivative. The mean crossing times (24) agree very well (in the limit of small σ/A) with the values obtained by numerically evaluating integrals (19) and (20). Good agreement is also obtained between the standard deviation σ_{t_1} and its numerically obtained counterpart. In fact, a glance at Eqs. (24) shows that at large signal amplitude (and/or small noise intensity), the crossing times approach their deterministic values t_{1,20}; in turn, these behave as 1/A for large A. In this regime of operation, the residence times density functions (17) and (18) collapse into Gaussians having the form
P(t_1) \approx \frac{1}{\sqrt{2\pi\varsigma_s^2}} \exp\left[-\frac{(t_1 - t_{10})^2}{2\varsigma_s^2}\right] ,  (30)

which is normalized to unity on (−∞, ∞), and where ς_s² = σ²/A²ω², a "dressed" variance that is seen to decrease rapidly with decreasing σ and/or increasing A. A corresponding expression is obtained for P(t_2). This can readily be verified by computing the density function of the residence time T_u = t_2 − t_1 in the up state, which, after some manipulations, yields

P(T_u) = \frac{1}{\sqrt{4\pi\varsigma_s^2}} \exp\left[-\frac{(T_u - t_{20} + t_{10})^2}{4\varsigma_s^2}\right] .  (32)

An analogous expression may be computed for the residence times density function in the down state. Then, using expression (4), setting σ_{T_n}² = 2ς_s², and taking ⟨T_+⟩ = t_{20} − t_{10} (with the deterministic crossing times defined
to the above. Starting with expression (16) for the noise probability density function, we may obtain the crossing times density functions via a simple change of variables:

P(t_{1,2}^{(i)}) = \frac{1}{\sqrt{2\pi\varsigma_i^2}} \exp\left[-\frac{(t_{1,2}^{(i)} - t_{1,20}^{(i)})^2}{2\varsigma_i^2}\right] ,  (34)

which is also normalized to unity on (−∞, ∞). Here, we have introduced, as we did for the sinusoidal bias case above, the "dressed" variance parameter ς_i² ≡ π²σ²/4ω²A_2². Denoting by T_u^{(i)} = t_2^{(i)} − t_1^{(i)} the residence time in the up state, one obtains its density function in a manner analogous to that used above for (32):

P(T_u^{(i)}) = \frac{1}{\sqrt{4\pi\varsigma_i^2}} \exp\left[-\frac{(T_u^{(i)} - t_{20}^{(i)} + t_{10}^{(i)})^2}{4\varsigma_i^2}\right] ,  (35)

which is Gaussian, having mean t_{20}^{(i)} − t_{10}^{(i)} and variance 2ς_i² = π²σ²/2ω²A_2². We readily observe that ΔT^{(i)} → 0 and t_{20}^{(i)} − t_{10}^{(i)} → T_0/2 when ε → 0, as expected. The separation between the peaks in the residence times density function is given by (7), exactly as predicted for the noise-free case. The SNR (15) may now readily be estimated for this waveform. We find
\mathrm{SNR} = \frac{1}{\sigma}\,\frac{\Delta T^{(i)}}{4}\, \sqrt{\frac{\omega^2 A_2^2\,T_{ob}}{T_0 - \Delta T^{(i)}}} .  (36)
Note that, in the Gaussian regime (for large A/σ), the SNR behaves like σ^{−1} and like √T_{ob}. It is worth noting that the observation time T_{ob}, in practical scenarios, plays a pivotal role; certainly it determines how much data can be accumulated for averaging purposes when computing the quantities in (13).
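The two scalings just noted, SNR ∝ σ^{−1} and SNR ∝ √T_{ob}, can be checked directly from the form of the SNR reconstructed here for the square-plus-triangle bias; the parameter values in the sketch below are purely illustrative.

```python
import numpy as np

def snr_rtd(delta_T, sigma, A2, omega, T_ob, T0):
    """Readout SNR for the square-plus-triangle bias waveform, in the
    form reconstructed here from Eq. (36):
    SNR = (delta_T / (4*sigma)) * sqrt(omega^2 A2^2 T_ob / (T0 - delta_T))."""
    return (delta_T / (4.0 * sigma)) * np.sqrt(
        omega**2 * A2**2 * T_ob / (T0 - delta_T))
```

Halving the noise strength σ doubles the SNR, while quadrupling the observation time T_{ob} also doubles it, consistent with the σ^{−1} and √T_{ob} behavior quoted above.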
Acknowledgements

We gratefully acknowledge funding from the Office of Naval Research, Code 331.
References

[1] L. Gammaitoni, A. Bulsara, Phys. Rev. Lett. 88 (2002) 230601.
[2] A. Bulsara, C. Seberino, L. Gammaitoni, M.F. Karlsson, B. Lundqvist, J.W.C. Robinson, Phys. Rev. E 67 (2003) 016120.
[3] P. Ripka, Sensors and Actuators A 33 (1992) 129.
[4] J. Clarke, SQUIDs: theory and practice, in: H. Weinstock, R. Ralston (Eds.), The New Superconducting Electronics, Kluwer Publishers, Amsterdam, 1993.
[5] D. Damjanovic, P. Muralt, N. Setter, IEEE Sensors J. 1 (2001) 191.
[6] W. Bornhofft, G. Trenkler, in: W. Göpel, J. Hesse, J. Zemel (Eds.), Sensors, A Comprehensive Survey, Vols. 1–2, VCH, New York, 1992.
[7] R. Bartussek, P. Hänggi, P. Jung, Phys. Rev. E 49 (1994) 3930; A.R. Bulsara, M.E. Inchiosa, L. Gammaitoni, Phys. Rev. Lett. 77 (1996) 2162; M.E. Inchiosa, A.R. Bulsara, L. Gammaitoni, Phys. Rev. E 55 (1997) 4049.
[8] L. Gammaitoni, P. Hänggi, P. Jung, F. Marchesoni, Rev. Mod. Phys. 70 (1998) 225; A.R. Bulsara, L. Gammaitoni, Phys. Today 49 (3) (1996) 39; L. Gammaitoni, F. Marchesoni, S. Santucci, Phys. Rev. Lett. 74 (1995) 1052.
[9] L. Gammaitoni, F. Marchesoni, E. Menichella-Saetta, S. Santucci, Phys. Rev. Lett. 62 (1989) 349.
[10] G. Bertotti, Hysteresis in Magnetism, Academic Press, San Diego, 1998.
[11] H.E. Stanley, Introduction to Phase Transitions and Critical Phenomena, Oxford University Press, Oxford, 1971.
[12] J. Millman, Microelectronics, McGraw-Hill, New York, 1983.
[13] B. McNamara, K. Wiesenfeld, Phys. Rev. A 39 (1989) 4954; P. Jung, P. Hänggi, Phys. Rev. A 44 (1991) 8032.
[14] C. Gardiner, Handbook of Stochastic Methods, Springer, Berlin, 1985.
[15] L. Gammaitoni, F. Marchesoni, E. Menichella-Saetta, S. Santucci, Phys. Rev. Lett. 62 (1989) 349; A. Longtin, A. Bulsara, D. Person, F. Moss, Biol. Cybern. 70 (1994) 569.
[16] A. Papoulis, Probability, Random Variables, and Stochastic Processes, McGraw-Hill, New York, 1991.
[17] N. Bleistein, R. Handelsman, Asymptotic Expansions of Integrals, Dover, New York, 1986.