Bridging the gap between sensor noise modeling and sensor characterization

Kshitij Jerath¹, Sean Brennan, and Constantino Lagoa
The Pennsylvania State University
¹ Permanent address: Washington State University. Email: [email protected]

Abstract

In prior works, the tasks of noise modeling and sensor characterization have typically been studied independently of each other. In spite of extensive research on noise modeling and sensor characterization, there still exists a need to bridge the gap between parameters used in sensor noise models, and quantities used to characterize commercially available sensors. The included work addresses this need by presenting tutorial-style exemplary analyses that relate noise model parameters to sensor characteristics for some common noise types found in sensors. Specifically, this work seeks to demonstrate that sensor noise characterization techniques can be applied to simulated sensor noise data to recover the original noise model parameters. The presented relationships between noise models and sensor characterization tools can help engineers and scientists numerically verify the expected performance bounds of their systems using simulated signals from commercially available sensors. Moreover, these numerical tools can also guide design engineers towards developing or selecting sensors with specifications especially suited to the design constraints of a given application.

Keywords: sensor phenomena and characterization, noise, noise measurement, Allan variance

1. Introduction

Noise is ubiquitous in the lives of engineers and scientists. We observe noise everywhere – as errors in our measurements, as disturbances in input signals to our systems, and as unwanted irregularities in our system's output. It is often an engineer's design objective (or a scientist's research objective) to identify the noise source and determine its impact on system performance. In several instances, the noise is inherent in the operation of the sensor or system itself, such as quantization noise arising from discrete measurements of analog signals in digital systems, and may not be removed or mitigated easily. In many such scenarios, assessing the impact of various noise sources on sensor accuracy or system performance becomes an essential part of the design or analysis process. An additional motivation for studying noise sources is the recent proliferation of low-cost sensors [1]. While advances in miniaturization and commercialization have enabled cost reduction, the resulting sensors typically yield noisier measurements as compared to their more expensive counterparts. Consequently, low-cost sensors may require additional noise analysis in order to identify the performance bounds of the systems they are used in. That said, the discussion presented here is expected to be useful for a wide range of sensors with varying precisions and accuracies. Indeed, the techniques presented here have been used to analyze high-precision atomic clocks [2] as well as low-cost inertial sensors [3][4].

Currently, there exists a gap between (a) noise synthesis, which utilizes noise model parameters, and (b) sensor noise analysis, in which practitioners perform noise identification and characterization using Allan variance to determine sensor noise parameters. What is missing in the literature is an end-to-end discussion that relates noise modeling (synthesis) to sensor noise characterization (analysis). The goal of this paper is to demonstrate a method to bridge this gap, and to show that noise characterization techniques can be applied to simulated sensor noise to recover the original noise model parameters, using Allan variance and regression fitting. To the authors' knowledge, this link between the two has not been recorded in a detailed archival publication format. The presented work makes use of several examples to illustrate sensor noise modeling (synthesis) and noise analysis techniques, as well as the process that links the two traditionally separate research areas. Specifically, the use of linear regression coefficients to recover the original noise modeling parameters is shown. The demonstration of the link between noise modeling and characterization processes opens the door for several applications, such as creating sensor selection procedures using simulated noise, without the need for actually purchasing any test sensor hardware [3].

The remainder of this paper is partitioned into five sections that seek to illustrate both sensor modeling theory and implementation details. Section 2 provides a brief review by cataloging various early research efforts on noise modeling. Section 3 discusses several important issues with regard to sensor noise synthesis, such as the need for, the fundamental components of, and the simulation implementations of noise synthesis. Section 4 introduces the various common noise types, as well as their specific modeling, characterization and simulation techniques. Section 5 provides an overview of Allan variance analysis, reasons for its need, and calculation procedures. Section 6 summarizes the salient points of the paper.

2. Literature Review

The origins of early studies into the nature of noise can be traced back to research on the apparently random motion of pollen on a film of water, what we today know as Brownian motion [5][6]. Originally referred to as spontaneous fluctuations, the early studies into the nature of noise were primarily phenomenological in nature, i.e. they attempted to explain observed phenomena, rather than to understand the underlying functional forms of noise. Over the late 19th and early 20th centuries, several researchers looked into the nature of noise. In 1877, a thermodynamic explanation of Brownian motion was first presented [7]. In 1906, more detailed works linking the observed phenomena of Brownian motion with a theoretical formalism were independently presented by Einstein and Smoluchowski [6][7]. In 1918, flicker noise was observed by Schottky in sensitive amplifiers, and was later explained by Johnson and Nyquist in 1925 [8]. In the following decades, noise research evolved its focus from phenomenological studies to the development of theoretical frameworks and models to explain noise [7]. More specifically, the understanding of noise phenomena gained immensely from analysis within the framework of stochastic processes. Seminal works by Lévy (on modeling of stochastic processes) [9], Wiener (on modeling of Brownian motion) [10], Uhlenbeck and Ornstein [11], and Itô and Stratonovich (on stochastic differential equations) [12] helped lay a firm foundation for treating noise in a probabilistic framework. Recent work has built upon this foundation to characterize specific noise types in terms of their probability distribution functions and power spectral densities [13][14][15]. Some major postulates of these works will be discussed in Section 4. Alternative means of characterizing sensor noise, more amenable to a time series analysis, have been developed, first by Allan for studying frequency stability of atomic clocks [2][16], and later applied to inertial sensors [3][4]. These measures, including the standard Allan variance, will be discussed in detail in Section 5. The next section discusses some basics of synthesizing and simulating sensor noise signals.

3. Synthesizing Sensor Noise

Noise synthesis refers to the simulation of noise using given specifications. While noise synthesis has several applications, including those in the visual graphics [17] and music [18] industries, the discussion here is presented from the perspective of sensor selection and design. The following subsections discuss several pertinent issues related to the task of noise synthesis.

3.1. The Need for Noise Synthesis

In general, the need for noise synthesis can arise in several scenarios, but this subsection deals specifically with the domain of sensors. The two foremost scenarios deal with (a) sensor selection from a wide variety of sensors, and (b) sensor design for a specific application with desired system performance requirements. In the sensor selection scenario, the objective of the engineer is to identify the appropriate sensor from a wide range of available sensors that will produce the desired level of system performance. Typically, it is infeasible to test a system with a large number of sensors due to issues of procurement, setup and testing times. However, it is remarkably easy to simulate the system with a virtual sensor that corrupts system signals with noise synthesized from known specifications. In this scenario, the ability to synthesize noise that accurately replicates sensor behavior plays a crucial part during evaluation of system performance and eventual sensor selection. On the other hand, in the sensor design scenario, the objective is to design a sensor with specifications that meet a predetermined level of system performance. In this scenario, it is even more unlikely that the objective will be accomplished effectively using only experimental approaches, such as iterative design and testing for several sensor specifications. The preferred approach would be to first synthesize sensor noise and test the system performance via simulations using signals corrupted with virtual sensors. In both scenarios, noise synthesis plays an indispensable role. The next subsection discusses how synthesized noise may be used to obtain corrupted signals.

3.2. Fundamental Concepts of Noise Modeling

This subsection briefly discusses some fundamental concepts of noise modeling, viz. additive noise models, white noise, and stochastic differential equations. A firm understanding of these concepts can help generate a large variety of noise models, as will become evident in Section 4.

3.2.1. Additive Noise Model

An additive noise model is one where the noise is directly added to the true signal value y_true(t) to obtain the measured noisy output signal y_meas(t), i.e.

    y_meas(t) = y_true(t) + x(t)                                (1)

where x(t) denotes the resultant sum of all noise types included in the stochastic (or noise) model. For numerical simulations in the following work, the discrete-time counterpart of (1) is used. Typically, the noise x(t) for given sensor specifications is synthesized in units of the output measurement itself. However, in some alternative scenarios, it may be easier or beneficial to simulate sensor noise in a related measurement unit. A prime example of this concept can be found in the noise synthesis for inertial sensors. Inertial sensors typically report readings in units of position, velocity or attitude (roll, pitch and yaw), but the underlying sensing principle is based on measuring linear acceleration and angular velocity. Thus, it is preferable to simulate noise and add it to linear acceleration or angular velocity, before integrating to obtain the measured output signal. The next subsection discusses white noise and stochastic differential equations as building blocks for synthesizing different types of noise x(t).
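The discrete-time additive model of (1) is simple enough to sketch directly. The following minimal NumPy sketch (not part of the original paper; all signal values and the noise strength are illustrative) corrupts a true angular rate with band-limited white noise and integrates it to obtain the measured angle, as suggested above for inertial sensors.

import numpy as np

# Minimal sketch of the additive noise model of Eq. (1), applied at the rate
# level of a virtual gyroscope and then integrated to obtain the angle, as
# described in Section 3.2.1. All values below are illustrative.
fs = 20.0                                        # sampling frequency (Hz)
dt = 1.0 / fs
t = np.arange(0.0, 10.0, dt)

omega_true = 0.5 * np.sin(2 * np.pi * 0.2 * t)   # true angular rate (units/s)

N = 0.025                                        # assumed white-noise coefficient (units/s/sqrt(Hz))
rate_noise = N * np.sqrt(fs) * np.random.randn(t.size)  # band-limited white noise, std N*sqrt(fs)

omega_meas = omega_true + rate_noise             # Eq. (1): y_meas = y_true + x
theta_meas = np.cumsum(omega_meas) * dt          # integrate rate to obtain measured angle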



3.2.2. White Noise and Stochastic Differential Equations

White noise holds a special place in the realm of stochastic processes, and is defined by two properties: first, it is uncorrelated, and second, it has a flat power spectral density (PSD). The flat power spectral density extends over an infinite spectral range, implying that continuous white noise has infinite power. However, in typical engineering systems, sensors operate at a known sampling frequency f_s that is higher than the bandwidth of the given system [19]. As a result, white noise in the sensor may be approximated by band-limited white noise (or discrete white noise), which has finite power uniformly distributed in the range [−f_s/2, f_s/2]. Additionally, if the band-limited white noise has zero mean, then the power of the sampled noise signal is given by its variance [20]:

    σ²_{f_s} = ∫_{−f_s/2}^{+f_s/2} N² df = N² f_s               (2)

where N² denotes the power spectral density inside the band [−f_s/2, f_s/2]. It is evident from (2) that as the bandwidth tends to infinity and the process approaches continuous white noise in the limit, the variance tends to infinity as well. From an implementation standpoint of simulating white noise to corrupt sensor signals, one may attempt to represent white noise W(t) as the derivative of the Wiener process (or Brownian motion) B(t) [21], i.e.

    W(t) ≜ dB(t)/dt                                             (3)

though the authors would like to emphasize that Brownian motion is not differentiable at any time t, and white noise as presented in (3) is not a true stochastic process [12]. For further information in this regard, the authors would like to direct the interested reader to an excellent text by Kloeden and Platen [22]. For ease of engineering analysis (as shown in Section 4.2), numerical integration of discrete or sampled white noise may be assumed to yield the Wiener process in the limit, as the sampling interval tends to zero. Alternatively, increments of the Wiener process B(t) generated at regular time intervals may be understood to approximate discrete white noise. The Wiener process may be used as a component of a stochastic differential equation (SDE) to generate other forms of noise. For example, in the following SDE:

    dX(t) = a X(t) dt + dB(t)                                   (4)

where a is a constant, the Wiener process (or Brownian motion) B(t) has been used to generate the stochastic process X(t), which may then be used as a noise model. Corrupting a given signal in a system requires knowledge of modeling and simulating various noise sources, as described in the following section for some common noise types. The discussed techniques are equally applicable to several other sensing apparatuses, such as GPS sensors [23], nano-voltmeters [24] and spectrometers [25].
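The SDE in (4) can be simulated with a simple Euler-Maruyama recursion driven by Wiener increments. The sketch below is an illustration added for this discussion, with an arbitrarily chosen coefficient a and step count; it is not prescribed by the paper.

import numpy as np

# Minimal Euler-Maruyama sketch for the SDE of Eq. (4), dX = a*X dt + dB.
# The coefficient value and duration below are illustrative only.
fs = 20.0                    # sampling frequency (Hz)
dt = 1.0 / fs
n_steps = 1200
a = -0.5                     # a < 0 gives a stable (mean-reverting) process

x = np.zeros(n_steps)
dB = np.sqrt(dt) * np.random.randn(n_steps)   # Wiener process increments
for k in range(1, n_steps):
    x[k] = x[k - 1] + a * x[k - 1] * dt + dB[k]

# With a = 0 the same loop reduces to a discrete approximation of Brownian
# motion (a random walk), as discussed in Section 4.2.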

4. Common Types of Sensor Noise

This section discusses some common types of noise encountered in sensors, with focus on noise modeling and simulation. Noise modeling is the process of specifying a functional form and a set of parameter values that represent a noise source. The standard tools for this task are differential equations (ordinary as well as stochastic). Noise simulation is the process of using the known noise models to corrupt true values of a signal in order to simulate noisy measurements [3]. The standard tools for this task are computing environments, such as MATLAB or Mathematica, or simulation environments, such as Simulink. Some of the types of noise are named according to their observed noise sources in inertial sensors. For example, the noise models may be discussed for angular rate measurements obtained from inertial sensors, so a noise type may be referred to as angle random walk. Irrespective of the naming convention, the emphasis is on the underlying functional forms, which are quite general and widely applicable to several sensors.

4.1. Quantization noise

Quantization noise is a consequence of the digital nature of today's measurements. Specifically, analog signals of the real world are discretized during the measurement process into distinct levels. The deviation of the quantized output signal from the analog input signal is referred to as quantization noise. Fig. 1(a) and 1(b) describe the signal quantization and quantization error, respectively. Thanks to insights provided by the works of Widrow, Bennett, Claasen, and Sripad [13][14][15][26], today we possess a reasonable understanding that enables us to model and simulate quantization noise in sensors. These aspects of quantization noise are discussed next.

Figure 1: (a) Nonlinear staircase function depicting the relationship between an input signal value x and the quantized output x′ obtained as a result of rounding off to the nearest discrete level. Dashed line indicates x′ = x. (b) Error due to quantization, e_q = x′ − x.

4.1.1. Modeling of quantization noise

It would seem that synthesizing quantization noise is merely a rounding-off operation and as such would not require an elaborate modeling scheme. While this is true for synthesizing quantization noise for a particular sensor with known specifications, parametric models of quantization noise are useful in several scenarios, including sensor design and analytical studies, as discussed in Section 3.1. The prevalent approach for modeling quantization noise makes use of the pseudo quantization noise (PQN) model. In this approach, the quantization noise is modeled as uniformly distributed additive white noise within a predetermined bandwidth, such that the resulting white noise has properties that are statistically similar to observed quantization noise [13]. The predetermined bandwidth within which the model is applicable is given by the sampling frequency, i.e. [−f_s/2, f_s/2], and is typically much greater than the bandwidth of the input signal. The PQN model has been found to be a reasonable approximation for quantization noise in most engineering applications [13]. Given a set of design specifications, the appropriate value of the quantum step size can be obtained from the following two conditions that bound the applicability of the PQN model, presented here without proof. For additional details, the readers are directed to Sections 4.3 and 20.3 in [27], which include detailed results corresponding to the first and second conditions, respectively.

• Condition 1: Quantization noise may be modeled as additive noise if the following condition on the characteristic function Φ_x(ω) of the input signal is met:

    Φ_x(ω) = 0  for |ω| > 2π/q                                  (5)

Physically, this condition can be understood in the following manner: quantization of any input signal is a nonlinear operation that cannot, in general, be replaced with the addition of noise. However, if the quantum step size is made 'small enough', then the output signal generated by either quantization or noise addition can be used equally well to recover the PDF and characteristic function of the input signal. Thus, in this scenario, the quantization and noise addition operations are essentially statistically similar.

• Condition 2: Quantization noise may be modeled as white noise if the quantum step size is chosen appropriately. Specifically, the quantization noise has a white spectrum if the following condition on the quantum step size is met:

    q < K E[|ẋ(t)|] / f_s                                       (6)

where K is a constant in the interval (1, 2) and E[·] denotes the expected value. Physically, the condition corresponds to sampling only one or two data points at any quantum level. If this condition did not hold, several samples may be taken from a single quantum level, and these will invariably be correlated. In other words, this assumption requires that the quantization step of a signal cannot be large compared to the amplitude of the signals being measured.

4.1.2. Characterization of Quantization Noise

The autocorrelation function for quantization noise, as defined in (8), closely resembles that for white noise and is identical to the autocorrelation for white noise in the limit T → 0. The constant Q² is related to the quantum step size as:

    Q² = q²/12                                                  (7)

and denotes the variance of the quantization noise signal. Note that the quantization noise has finite variance and is independent of the bandwidth, unlike white noise. In fact, quantization noise is only approximately modeled as white noise under the specific conditions discussed in the previous subsection. The autocorrelation and power spectral density (PSD) for angular measurements can be determined from the probability density function, and are given by (8) and (9) respectively:

    R_θ(τ) = Q² (1 − |τ|/T)                                     (8)

    S_θ(f) = F[R_θ(τ)] = Q² T sinc²(π f T) ≈ Q² T,  f < f_s/2   (9)

and the associated rate noise PSD, i.e. the power spectral density of the noise in the angular rate measurements, is given by:

    S_Ω(f) = (2π f)² S_θ(f) ≈ (2π f)² Q² T                      (10)

4.1.3. Simulation of Quantization Noise

This subsection covers a simple example that deals with the simulation and analysis of quantization noise. It must be understood that the rounding-off operation is deterministic in nature. Consequently, its application to a periodic signal will always generate periodic noise, which will clearly not follow our model of additive band-limited white noise. Consequently, quantization of a simulated pure sinusoidal signal does not generate white noise as desired. However, most signals an engineer or scientist may observe are bound to have some deviation from perfect periodicity. Therefore, for the purposes of illustration, the input signal is considered to be a sinusoid that is corrupted by white noise. Additional discussion on quantization noise in experimental sinusoidal signals can be found in Section 20.2.5 of [27]. Example 1(a) discusses the simulation of quantization noise for a specific input signal. In Example 1(b), the simulated quantization noise is characterized, the noise model conditions are verified, and a model is generated to recover the original noise source parameters. As shown in Table 1, it is found that the recovered values match quite well with the designed value of the quantum step size.

Example 1(a): Simulation of quantization noise

Consider an input signal given by x(t) = A sin(2π f t) + v(t), where A = 3 units, f = 1 Hz, and v(t) is Gaussian white noise with mean zero and power spectral density of N² = (0.01A)² unit²/Hz, so that the variance of the band-limited white noise is N² f_s = 0.018 unit². The signal is sampled at f_s = 20 Hz. Condition 2, as described in (6), is used to obtain an estimate of the appropriate quantum step size for simulating quantization noise. As shown in Fig. 2(c), the derivative ẋ(t) of the original signal x(t) is obtained via numerical differentiation. The expected value of the numerical derivative is then calculated by taking an average over the length of the signal. The constant K is chosen to be 1 to ensure a smaller quantum step size, which in turn improves the white noise character of the resulting quantization noise. Then, (6) yields: q < E[|ẋ(t)|]/f_s = 0.6672.

The design value of q is chosen to be 0.6672, and the input signal x(t) is quantized at the appropriate discrete levels using this value. The resulting signal x_q(t) is shown as a solid red line in Fig. 2(a). The resulting quantization error is shown in Fig. 2(b). Note that the quantization error does not appear to be periodic.

Example 1(b): Verification and analysis of quantization noise

In order to verify Condition 1, the probability distribution function f_x(x) for the input signal x(t) is required. The input signal x(t) = A sin(2π f t) + v(t) is simulated for 100 seconds and its probability distribution function is generated using a histogram (see Fig. 3(a)). The characteristic function Φ_x(ω) is obtained via the Fourier transform of f_x(x) using a standard FFT algorithm (such as fft in MATLAB). Since (5) must be satisfied to model quantization noise as additive noise, the appropriate values are calculated as follows: Φ_x(ω) = 0 for |ω| > 2π/q = 9.41. It is evident from the plot included in Fig. 3(b) that Φ_x(ω) ≈ 0 for |ω| > 2.5, so this condition holds true. Additional verification of the white noise nature of the quantization noise is provided by the plots in Fig. 3(c-d). The autocorrelation function is similar to the Kronecker delta function, and is given by (8). The power spectral density is given by (9) and is approximately flat at the level Q²T, where T = 1/f_s. From equation (8) and Fig. 3(c), Q² = 0.0371. Equation (7) further implies that q = √(12Q²) = 0.6676.
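A compact numerical sketch of Example 1(a) is given below. It quantizes the noisy sinusoid and recovers the quantum step size from the variance of the quantization error using (7); the estimator shown (the sample variance of the error) is one reasonable choice, not necessarily the exact procedure used by the authors.

import numpy as np

# Sketch of Example 1(a): quantize a noisy sinusoid and estimate the quantum
# step size q from the variance of the quantization error (Eq. (7), Q^2 = q^2/12).
# Signal parameters follow the example; the rest is illustrative.
fs, A, f = 20.0, 3.0, 1.0
t = np.arange(0.0, 100.0, 1.0 / fs)
N2 = (0.01 * A) ** 2                          # white noise PSD (unit^2/Hz)
v = np.sqrt(N2 * fs) * np.random.randn(t.size)
x = A * np.sin(2 * np.pi * f * t) + v

# Condition 2 (Eq. (6)) with K = 1 suggests an upper bound on q
x_dot = np.gradient(x, 1.0 / fs)
q = np.mean(np.abs(x_dot)) / fs

x_q = q * np.round(x / q)                     # quantize to the nearest level
e_q = x_q - x                                 # quantization error

Q2_est = np.var(e_q)                          # should be close to q^2 / 12
q_recovered = np.sqrt(12.0 * Q2_est)
print(q, q_recovered)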


Figure 2: Quantization of input signal in Example 1. (a) Dashed blue line denotes original signal x(t), solid red line denotes quantized signal x_q(t), (b) Quantization error in measurement, e_q(t) = x(t) − x_q(t).

Figure 3: Analysis of quantization of input signal in Example 1. (a) Histogram-based approximate probability distribution function f_x(x) of input signal x(t), (b) Characteristic function of input signal Φ_x(ω), (c) Autocorrelation function of e_q(t), (d) Power spectral density of e_q(t).

4.2. Random Walk Noise

Several real world processes, such as diffusion [28], protein folding [29] and stock markets [30], may be modeled as random walks. In the context of gyroscopic measurements, noise in angle or attitude measurements may appear as a random walk, and is referred to as angle random walk noise. With specific regard to interferometric fiber-optic gyros, the origins of angle random walk noise can be traced back to spontaneous emission of photons [31].

4.2.1. Modeling of Angle Random Walk Noise

Angle random walk (ARW) noise can be modeled as a partial sum or integral of a sequence of independent random variables. The underlying theoretical basis of angle random walk noise remains Brownian motion B(t) (or the Wiener process). The increments of the Wiener process, denoted by W(t) = B(t + δt) − B(t), are stationary in nature and successive values of W(t) are independent [32]. Sampled at regular intervals, these increments yield sampled white noise. Consequently, the numerical integration of the sampled strictly white noise can be performed in discrete time to model Brownian motion or angle random walk as follows:

    x_{k+1} = x_k + w_k δt                                      (11)

where x_k represents the value of the random walk at time instant k, w_k represents the value of the sampled strictly white noise at time instant k, and δt represents the sampling interval. The sampled white noise w_k has zero mean and its variance is given by (2).

4.2.2. Characterization of Angle Random Walk Noise

As is evident from (11), the simple nature of the stochastic difference equation enables the characterization of angle random walk (ARW) noise in terms of the underlying white noise characteristics alone. Specifically, the ARW noise can be characterized via the autocorrelation or power spectral density of the white noise used to generate it. Realizing that the noise in angle measurements is obtained by passing the angular rate noise through an integrator H(f), the power spectral density of the angle random walk S_θ(f) can be expressed as follows:

    S_θ(f) = |H(f)|² S_Ω(f) = (1/(2π f))² S_Ω(f) = N²/(2π f)²   (12)

where N² is the PSD of the underlying white noise.

4.2.3. Simulation of Angle Random Walk Noise

Example 2 discusses a simple simulation and analysis of angle random walk noise X(t). The angle random walk noise is generated by assuming x_0 = 0. Subsequent values of the angle random walk x_k are generated by numerically integrating sampled white noise values, as described in (11). When considering this noise in the context of gyroscopic measurements, the generic unit may be replaced by degrees or radians. The value of the power spectral density of the white noise used to generate the random walk noise, and the corresponding values recovered from autocorrelation and spectral estimation, are included in Table 1 and show good agreement.

Example 2: Simulation and analysis of random walk noise

The random walks shown in Fig. 4(a) are simulated using x_0 = 0, i.e. they all begin from the origin. The underlying white noise used to generate these random walks is designed with power spectral density S_Ω(f) = N², where N = 0.025 unit/√s, and is sampled at f_s = 20 Hz. The chosen value of N has the same order of magnitude as specifications for low-cost gyroscopes. The value indicates that the standard deviation of the random walk will be 0.025 units after 1 second. The units of N can be understood by observing (2), which indicates that the units of the variance of white noise are (unit/s)², of the white noise itself are (unit/s), and of the integral of white noise, i.e. the random walk, are units. The parameter N may be recovered from the plot using the relation:

    S̄_Ω(f) = 10 log₁₀(N²)  (in dB/Hz)

where S̄_Ω(f) denotes the mean value of the white noise power spectral density. The power spectral densities in all following examples are plotted on the log scale (dB/Hz), as this makes it easier to visualize values that are separated by orders of magnitude. In the context of gyroscopes, the generic unit may be replaced by degrees or radians.

Figure 4: Simulation and analysis of random walk noise in Example 2. (a) Three different random walk signals generated from the same white noise specification, (b) White noise used to generate blue (dark) random walk, (c) Autocorrelation of underlying white noise, (d) PSD of underlying white noise.
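The following sketch mirrors Example 2: sampled white noise is integrated per (11) to produce an angle random walk, and N is recovered from the flat level of a periodogram estimate of the rate-noise PSD. The periodogram normalization shown is one common convention and is assumed here, not taken from the paper.

import numpy as np

# Sketch of Example 2: generate angle random walk by integrating sampled white
# noise (Eq. (11)) and recover N from the flat rate-noise PSD, S_Omega(f) = N^2.
# Design values follow the example; the estimator details are illustrative.
fs = 20.0
dt = 1.0 / fs
n = 20000
N = 0.025                                    # unit/sqrt(s), design value

w = N * np.sqrt(fs) * np.random.randn(n)     # sampled white rate noise, std = N*sqrt(fs)
theta = np.cumsum(w) * dt                    # Eq. (11): angle random walk

# Periodogram estimate of the rate-noise PSD; per Eq. (2) its flat (two-sided)
# level equals N^2, so N can be recovered from the mean level.
W = np.fft.rfft(w)
psd = (np.abs(W) ** 2) / (fs * n)
N_recovered = np.sqrt(np.mean(psd[1:]))
print(N, N_recovered)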

4.3. Flicker Noise or Bias Instability

Flicker noise is generally associated with random processes that have lower frequency content (or longer temporal correlations) than random walk noise. The noise originates in electronics or other components susceptible to random flickering [31]. In the context of sensors, such noise is often referred to as bias instability, since it manifests as a bias in the measurements that changes slowly over time. Specifically, flicker noise is often observed in the measurement rate, such as angular velocity in gyroscopes, and the bias instability is manifested in the measurements themselves, such as the slow drift observed in gyroscope angle measurements. Flicker noise is also known as 1/f noise due to its power spectrum, which is given by:

    S_x(f) ∝ 1/f^α                                              (13)

where α ∈ (0, 2), though its value is typically close to 1.

4.3.1. Modeling of Flicker Noise

The difficulty in simulating bias instability or flicker noise can be understood by observing the nature of its power spectral density. Many types of noise can be synthesized easily because white noise can be used as an underlying building block. In order to generate flicker noise with the desired 1/f spectrum, the linear filter should be given by H(ω) ∝ 1/√ω, or H(f) ∝ 1/√f, so that:

    S_x(f) = |H(f)|² S_w(f) = N²/f                              (14)

where S_x(f) represents the spectral density of the flicker noise, and S_w(f) represents the spectral density of white noise. It is evident that the filter H(ω) or H(f) cannot be rational, which makes it difficult to generate flicker noise through the usual procedure of passing white noise through a linear filter. The most commonly used models for modeling flicker noise include autoregressive moving average (ARMA) models, wavelet-based synthesis, and fractional integration [21][33]. While ARMA models are quite popular for noise modeling, they present certain challenges when creating models for flicker noise, which are a direct result of the fractional nature of its spectral density. Specifically, modeling flicker noise with ARMA models presents issues with real-time implementation, the need for significant computational resources, slow convergence of model parameters, etc. These challenges are well documented in the literature [21][34][35]. There are research efforts into providing workarounds, such as in [36], but often ARMA models are approximated as autoregressive (AR) models. These simpler AR models can be implemented as infinite impulse response (IIR) filters in real-time [37], providing an effective time-domain technique to simulate flicker noise for online simulation and corruption of sensor signals. One of the simplest AR models for simulating approximate flicker noise is a first-order Gauss-Markov process, as shown in (15):

    dX(t) = −βX(t) dt + dB(t)                                   (15)

where X(t) represents the approximate flicker noise, 1/β denotes the correlation time, and B(t) denotes Brownian motion. Additional details on first-order Gauss-Markov processes can be found in [19][38]. However, the first-order Gauss-Markov model approximates flicker noise in a very narrow frequency band around β. Consequently, higher order autoregressive models of flicker noise, which provide better fidelity, need to be considered, and their discussion originates by revisiting the discrete-time implementation of the Wiener process or Brownian motion for sensor noise simulation. It is known that Brownian motion can be simulated in discrete time using the following difference equation:

    x_k = x_{k−1} + w_k                                         (16)

where x_k represents the value of the Brownian motion (or Wiener process) at time instant k, and w_k represents the sampled white noise at time instant k. It is straightforward to see that the z-domain representation of (16) as an IIR filter is given by:

    H(z) = 1/(1 − z⁻¹)                                          (17)

Kasdin [21] and Hosking [39] independently conjectured that flicker noise with S_x(f) = 1/f^α could be simulated using the following digital model:

    H(z) = 1/(1 − z⁻¹)^(α/2),  |z| > 1                          (18)

where the denominator can be expanded as a power series to yield an IIR filter given by:

    H(z) = 1 / (1 − (α/2) z⁻¹ − (α/2)(1 − α/2)/2! · z⁻² − ...)  (19)

which is equivalent to the AR model given by:

    x_n = −a₁ x_{n−1} − a₂ x_{n−2} − ... + w_n                  (20)

where the coefficients can be obtained via a recursive relationship as follows (with a₀ = 1):

    a_k = (k − 1 − α/2) a_{k−1} / k                             (21)

In order to use the IIR filter for real-time sensor noise simulation, the series in the denominator may be truncated. The truncated IIR filter yields a difference equation with a finite number of initial conditions and can be used to simulate a stochastic process that approximately models flicker noise.

4.3.2. Characterization of Flicker Noise

It was mentioned in the introduction to this section that the power spectral density of flicker noise is proportional to 1/f^α. Given B, which denotes the standard deviation of the sampled white noise used as an input for the IIR filter in (20), the power spectral density for the discrete-time implementation of the IIR filter can be found using S_x(f) = Δt B² H(z) H(z⁻¹) with z = e^(jωΔt) to be [21]:

    S_x(f) = B² Δt / (2 sin(π f Δt))^α                          (22)

which, below the Nyquist frequency, may be approximated by:

    S_x(f) = B² Δt^(1−α) / (2π f)^α                             (23)

For α = 1, the above form reduces to:

    S_x(f) = B² / (2π f)                                        (24)

In the context of gyroscopes, this expression corresponds to the spectral density of flicker noise observed in angular velocity (S_Ω(f)).

4.3.3. Simulation of Flicker Noise

Example 3 discusses the simulation and analysis of flicker noise. Specifically, flicker noise is generated to corrupt measurement rate signals. The generated flicker noise may be integrated to simulate noise that corrupts measurement signals (as opposed to measurement rate signals). From the previous discussions on modeling and characterization, it is apparent that two parameters are necessary for practical simulation of flicker noise: the flicker noise coefficient (B), and the number of terms (n) at which the IIR filter is truncated. The parameter n is determined by the lowest frequency at which flicker noise is still dominant, and beyond which the effects of other noise types that occur at even lower frequencies become more significant. The value of the lowest frequency at which flicker noise is dominant varies from sensor to sensor but typically lies in the range of 0.1 Hz to 0.001 Hz for available gyroscopes [40]. Truncation of the IIR filter (or AR process) shown in (20) results in a difference equation that requires a finite number of initial conditions. Assuming that the initial conditions x₁ = x₂ = ... = x_{n−1} = 0, the bias instability and flicker noise are simulated as shown in Fig. 5(a) and (b), respectively. While the IIR filter described in (20) may be truncated to allow for real-time generation of flicker noise, it causes a deviation from the pure 1/f spectrum, as seen in Fig. 5(c). It is also evident that the method of truncation of the IIR filter provides a poor estimate of the flicker noise spectrum at low frequencies. However, this is not a significant concern, since lower frequency noise sources become more dominant in these frequency bands. Assuming that the lowest frequency at which flicker noise is dominant is given by f_d, and the sampling frequency is f_s, then the order of the AR process is given by:

    n = f_s / f_d                                               (25)

To complete the analysis, the value of the flicker noise coefficient B is recovered from Fig. 5(d) for various settings of the parameter n. The results are included in Table 1, and it is found that the recovered parameter values match quite well with the design value.
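A minimal implementation of the truncated AR flicker-noise generator of (19)-(21) is sketched below for α = 1. The function name and the direct inner loop are illustrative choices; a production implementation would typically use a faster filtering routine.

import numpy as np

# Sketch of the truncated AR/IIR flicker-noise generator of Eqs. (19)-(21)
# (after Kasdin/Hosking), for alpha = 1. Values of B, fs and n_terms are illustrative.
def flicker_noise(n_samples, B, alpha=1.0, n_terms=2000, seed=None):
    rng = np.random.default_rng(seed)
    # AR coefficients a_k from the recursion in Eq. (21), with a_0 = 1
    a = np.zeros(n_terms)
    a[0] = 1.0
    for k in range(1, n_terms):
        a[k] = (k - 1 - alpha / 2.0) * a[k - 1] / k
    w = B * rng.standard_normal(n_samples)        # driving white noise, std B
    x = np.zeros(n_samples)
    for i in range(n_samples):
        # Eq. (20): x_n = -a_1 x_{n-1} - a_2 x_{n-2} - ... + w_n (truncated)
        m = min(i, n_terms - 1)
        if m > 0:
            x[i] = -np.dot(a[1:m + 1], x[i - 1::-1][:m]) + w[i]
        else:
            x[i] = w[i]
    return x

rate_noise = flicker_noise(20 * 600, B=0.0224, n_terms=2000, seed=0)  # 10 min at 20 Hz
bias_instability = np.cumsum(rate_noise) / 20.0                       # integrate the rate signal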

Example 3: Simulation and analysis of flicker noise or bias instability

The flicker noise or bias instability shown in Fig. 5(a) is simulated at f_s = 20 Hz, using the flicker noise parameter B = 0.0224 units/s, and assuming that the lowest frequency for which flicker noise needs to be simulated is f_d = 0.01 Hz. These specifications are comparable to those of some low-cost gyroscopes. The order of the AR process used for the simulation is thus n = f_s/f_d = 2000. PSDs for different numbers of terms included in the IIR filter are shown in Fig. 5(c). The PSDs are calculated using a standard DFT implementation such as the FFT. The number of points chosen for calculating the DFT is governed in part by the lowest desired frequency, which in this case is equal to f_d. The PSD was calculated using an N-point DFT with N = 1024. It is evident that the model is in good agreement with the true PSD (dashed line) obtained from the design parameter for medium to high frequencies. It is also observed that as additional terms are added to the IIR filter, they do not provide a proportional increase in agreement with the true PSD, even though in the limit n → ∞ they agree completely. Fig. 5(d) shows the product 2πf S_Ω(f) used to obtain the estimate of B² for recovering the noise parameter. The true slope in Fig. 5(d) shown by the dashed line corresponds to the quantity B²_des, where B_des is the design parameter used to simulate the noise. The recovery of the noise parameter is performed using values of the PSD in the range of 0.1 Hz to 5 Hz (region shaded in light green). This region is selected by visual inspection in Fig. 5(d) as the region where the slope ≈ 0, since it is here that the models appear to be in greatest agreement with the expected flicker noise characteristics.

Figure 5: Simulation and analysis of flicker noise or bias instability in Example 3. (a) Three different bias instability signals generated from the same input flicker noise specification, (b) Measurement rate signal corresponding to blue (dark) bias instability signal, (c) Power spectral densities for flicker noise generated using different numbers of terms (n) in the IIR filter. Dashed line indicates true PSD for designed flicker noise parameter, (d) Product 2πf S_Ω(f) used to recover flicker noise specification. Dashed line indicates true value of B² for designed flicker noise parameter.

4.4. Rate Random Walk Noise

Rate random walk is very similar in nature to angle random walk noise. The origin of this noise is uncertain, but it is believed to be exponentially correlated noise with very long correlation time [31]. The following subsections briefly discuss the modeling, characterization and simulation of rate random walk noise.

4.4.1. Modeling of Rate Random Walk Noise

Rate random walk (RRW) can be modeled using the same functional form as described in Section 4.2. The only distinguishing feature that separates rate random walk noise from angle random walk noise is that while angle random walk noise manifests due to white noise in angular rate (or velocity), rate random walk manifests due to white noise in angular acceleration, so that:

    Ω_{k+1} = Ω_k + w_k δt                                      (26)

where Ω_k represents the value of the measurement rate at time instant k, w_k represents the value of the sampled strictly white noise at time instant k, and δt represents the sampling interval. The numerical integration can further be applied in the Euler form to obtain the noise in measurements as follows:

    x_{k+1} = x_k + Ω_k δt                                      (27)

4.4.2. Characterization of Rate Random Walk Noise

As was previously the case with angle random walk noise, rate random walk noise too can be completely characterized in terms of the white noise parameters alone. Specifically, the power spectral density of the measurement rate (or angular rate in the context of gyroscopes) is given by:

    S_Ω(f) = (1/(2π f))² S_acc(f) = K²/(2π f)²                  (28)

where K² represents the power spectral density of the white noise in, for lack of a better generic term, 'measurement acceleration'. The 'measurement acceleration' is simply the first derivative of the measurement rate with respect to time. In the context of gyroscopes, this corresponds to angular acceleration.

4.4.3. Simulation of Rate Random Walk Noise

Example 4 discusses the simulation and analysis of rate random walk noise. The rate random walk noise is generated by assuming Ω_0 = 0, with values of the rate random walk Ω_k generated via (26) and (27). The designed and recovered values of the parameter K are included in Table 1 and appear to be in good agreement.
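The rate random walk model of (26)-(27) reduces to two successive numerical integrations of white noise. A short sketch follows, with an illustrative record length; the value of K matches Example 4.

import numpy as np

# Sketch of rate random walk generation (Eqs. (26)-(27)): white noise in the
# 'measurement acceleration' is integrated once to obtain the rate noise and
# again to obtain the measurement noise. Record length is illustrative.
fs = 20.0
dt = 1.0 / fs
n = 20 * 60                                          # one minute of data
K = 1e-4                                             # units/s^(3/2), as in Example 4

acc_noise = K * np.sqrt(fs) * np.random.randn(n)     # white 'measurement acceleration'
omega = np.cumsum(acc_noise) * dt                    # Eq. (26): rate random walk
x = np.cumsum(omega) * dt                            # Eq. (27): measurement noise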

Example 4: Simulation and analysis of rate random walk noise

The three instances of rate random walk noise shown in Fig. 6(a) are simulated at f_s = 20 Hz, using the rate random walk parameter K = 0.0001 units/s^(3/2). The units of the parameter K are obtained from the use of the equations relating the parameter to the PSD, and the PSD to the standard deviation of the sampled white noise in the 'measurement acceleration' (see accompanying section on simulation of rate random walk noise). Specifically, these relations are:

    S_acc(f) = K²  and  σ_acc = √(S_acc(f) f_s)

where σ_acc has the units of units/s², corresponding to 'measurement acceleration'. In the context of gyroscopes the units of σ_acc are deg/s². Simple algebraic manipulation shows that the units of K are units/s^(3/2). Figs. 6(b-c) reflect the random walk and white noise nature of the noise in rate and acceleration, respectively. These figures are generated using the dark blue measurement noise signal in Fig. 6(a).

Figure 6: Simulation and analysis of rate random walk (RRW) noise in Example 4. (a) Three different rate random walk noise signals generated from the same input RRW noise specification, (b) Measurement rate signal corresponding to blue (dark) RRW noise, (c) 'Measurement acceleration' signal corresponding to blue (dark) RRW noise, (d) Autocorrelation of 'measurement acceleration', (e) Power spectral density of 'measurement acceleration' signal.

4.5. Rate Ramp Noise

Rate ramp noise appears as a very low frequency drift in the sensor noise. Some models suggest that it should be treated as a deterministic error for long, but finite, time intervals, much like the way constant bias is treated as a deterministic error [31]. In interferometric fiber optic gyroscopes (IFOGs), a possible source of this noise could be a slow monotonic change in the IFOG source intensity [31].

4.5.1. Modeling of Rate Ramp Noise

As mentioned earlier, rate ramp noise may be treated as a deterministic error, and can be modeled quite easily in continuous time as [31]:

    x(t) = R t                                                  (29)

or in discrete time as:

    x_k = R k / f_s                                             (30)

where R is the rate ramp noise coefficient, t (or k) represents the time (or time step index), and f_s represents the sampling frequency. Owing to the time-dependence inherent in this model, the autocorrelation R_xx(t, t+τ) is a function of both the time t and the time interval τ, rather than just the time interval. As a result, rate ramp noise does not possess a power spectral density [41]. Rate ramp noise can sometimes be confused with ramp instability, but they represent two distinct noise types. Ramp instability bears a similar qualitative relation to rate ramp noise as bias instability bears to constant bias, i.e. it reflects an instability in the constant drift in sensor noise. Unlike rate ramp noise, ramp instability has a well-defined power spectral density that is proportional to 1/f³. Since, in general, rational linear filters cannot generate an odd-power power spectral density using white noise as input, it is not possible to find a rational linear filter H(f) that can accurately reproduce a power spectral density S_xx(f) proportional to 1/f³. Ramp instability may be approximated as a second-order Gauss-Markov (G-M) process, given by the differential equation [42]:

    ẍ + 2ω₀ ẋ + ω₀² x = R_i w                                   (31)

where x(t) represents the ramp instability, ω₀ represents the cutoff frequency, R_i denotes the ramp instability noise coefficient, and w denotes white noise with zero mean and unit variance. The second-order Gauss-Markov model is able to reproduce the 1/f³ spectrum in only a narrow spectral band.

4.5.2. Characterization of Rate Ramp Noise

Characterization of rate ramp noise is especially simple, since the noise coefficient R simply denotes the slope of the ramp, as is evident from (29). This is fortunate, since the explicit time-dependent nature of the rate ramp noise model implies that traditional tools such as autocorrelation and power spectral density cannot be used to characterize this noise source.
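Because rate ramp noise is deterministic, both its synthesis and the recovery of R reduce to a ramp and a straight-line fit. The sketch below is an illustration; the least-squares fit simply stands in for the 'slope of the ramp' observation made above, and the value of R follows Example 5(a).

import numpy as np

# Sketch of the deterministic rate ramp model of Eqs. (29)-(30): the ramp is
# generated directly and the coefficient R is recovered as the slope of a
# straight-line fit (Section 4.5.2). Record length is illustrative.
fs = 20.0
t = np.arange(0.0, 3600.0, 1.0 / fs)      # one hour of data
R = 1e-8                                   # units/s^2
rate_ramp = R * t                          # Eq. (29): noise in the rate measurement

R_recovered = np.polyfit(t, rate_ramp, 1)[0]
print(R, R_recovered)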

4.5.3. Simulation of Rate Ramp Noise

Example 5 discusses the simulation and analysis of rate ramp noise using the deterministic model. The rate ramp noise is simulated using forward Euler integration assuming the initial condition x_0 = 0. Since traditional tools cannot be used to recover the noise specification R for rate ramp noise, this analysis is not performed on the simulated ramp signal.

Example 5(a): Simulation of rate ramp noise with deterministic model

Rate ramp noise is simulated using the deterministic model and the noise coefficient R = 1 × 10⁻⁸ units/s², which is a typical value for commercial grade inertial measurement units [43]. The ramp nature of the noise is clearly visible in Fig. 7(b), which denotes the noise magnitude in the rate measurements. This noise type does not possess a power spectral density, since the autocorrelation function is a function of time t and the time interval τ. Consequently, the PSD for this signal is not shown, even though it can be numerically calculated for the finite time signal in Fig. 7(b).

Figure 7: Simulation and analysis of rate ramp noise generated using the deterministic model in Example 5(a). (a) Measurement signals corresponding to noise generated using rate ramp specifications, (b) Rate ramp (measurement rate) signal corresponding to given noise specification.

5. Analyzing Sensor Noise

The analysis of each of the noise types discussed in Section 4 benefited from the fact that we possessed knowledge of the type of noise being analyzed. It was a result of this knowledge that the autocorrelation and spectral estimation could be used to extract the parameters that characterized the underlying noise process. However, in most sensors, the nature of the underlying noise is often unknown. In fact, several different noise sources may contribute simultaneously to corrupt the sensor signals. In such a scenario, tools such as autocorrelation and spectral estimation do not suffice to characterize the underlying noise. In this section, an alternative tool known as Allan variance (AVAR) will be introduced and discussed, with special emphasis being placed on its use for sensor noise characterization.

5.1. A Brief History of Allan Variance

In the 1960s, atomic clocks were increasingly being considered as the standard for accurate time keeping. Researchers who were studying the stability of these clocks – in the sense of them being able to repeatably and accurately measure the passage of time – found that while atomic clocks were very accurate, they appeared to drift slowly over time [2][44][45]. Quantifiable measures for the repeatability and stability of atomic clocks were necessary if they were to be used as a new time standard. Researchers found that the drift, or the error in the measurement of time, was correlated across time [2]. The correlated nature of the errors (or noise in the measurements) caused the variance of the errors to change as a function of time. This resulted in the need for an alternative technique to quantify the errors in the sensor measurements, i.e. the atomic clock measurements. This new technique is now referred to as Allan variance, after David W. Allan, who first proposed this method in 1966 [2]. From a more general viewpoint, most sensors will have some form of correlated noise that corrupts their measurements. Allan variance provides an alternative to spectral estimation as a technique for sensor characterization for various noise types.

5.2. The Basics of Allan Variance

Allan variance characterizes the noise in a sensor by quantifying the variance observed in measurements across various timescales. Thus, as opposed to frequency-domain characterization tools such as the power spectral density, Allan variance is a time-domain characterization of various noise sources that offers a more intuitive understanding of how a noisy sensor signal evolves over time. Allan variance is defined as follows:

    σ²_A(τ) = (1/2) E[(x̄_{k+m} − x̄_k)²]                        (32)

where σ²_A represents the Allan variance corresponding to a specific timescale given by the correlation time τ, x̄_k represents the mean value of the sensor measurements at time instant k averaged over m (= τ f_s) measurements, f_s represents the sampling frequency, and E denotes the expectation operator. Allan variance can be interpreted physically in a number of ways. Simply put, Allan variance corresponds to the spread in measurement values (or noise) when the data are observed at a given time scale τ. Alternatively, it can also be understood as a combination of high-pass (differencing) and low-pass (averaging) filtering operations to obtain a characteristic of the noise that is prevalent at a specific frequency f (= 1/τ) or frequency band. The various timescales (or correlation times) help further ease the analysis by differentiating between the various noise sources on the basis of their typical time scale of significance. For example, quantization noise is a high frequency noise and thus has the shortest correlation time among all noise types discussed in Section 4. At the other end of the spectrum is rate ramp noise, which is an extremely low frequency noise and thus has a large correlation time.

While Allan variance calculation is firmly rooted in the time domain, most work on noise characterization has occurred in the frequency domain through the use of tools such as power spectral density. Consequently, it is helpful to consider the relationship between these frequency-domain and time-domain noise characterization tools. Tehrani [46] provided the following relationship between Allan variance and power spectral density S_x(f) for any given noise type:

    σ²_A(τ) = 4 ∫_0^∞ S_x(f) sin⁴(π f τ)/(π f τ)² df            (33)

The proof of this relationship is provided in the appendix of [46]. Expressions for power spectral density for various noise types can be substituted into the above equation to obtain their corresponding Allan variance expressions.
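As a quick numerical sanity check of (33) – an illustration added here, not part of the original derivation – the integral can be evaluated for band-limited white noise with flat PSD N²; for τ much larger than 1/f_s the result should approach the familiar white-noise Allan variance N²/τ.

import numpy as np

# Numerical check of Eq. (33) for band-limited white noise with a flat PSD
# S_x(f) = N^2 over [0, fs/2]; for tau >> 1/fs the result should be close to
# the white-noise Allan variance N^2/tau. All values are illustrative.
N2 = 0.025 ** 2
fs = 100.0
tau = 1.0

f = np.linspace(1e-6, fs / 2.0, 200001)
df = f[1] - f[0]
integrand = N2 * np.sin(np.pi * f * tau) ** 4 / (np.pi * f * tau) ** 2
avar = 4.0 * np.sum(integrand) * df

print(avar, N2 / tau)        # the two values should agree closely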

Table 1: Designed and recovered noise coefficients using autocorrelation and PSD (included errors are relative errors in %)

                                     Quantum step size (q)    Angle random walk (N)    Bias instability (B)    Rate random walk (K)
Noise parameters →                   Value      % error       Value      % error       Value      % error      Value       % error
Designed                             0.6672     -             0.0250     -             0.0224     -            1.00E-04    -
Recovered (with autocorrelation)     0.6676     0.06          0.0249     0.40          -          -            9.94E-05    0.54
Recovered (with PSD)                 0.6639     0.49          0.0250     0.00          0.0238*    6.25         9.89E-05    1.14

*Bias instability noise coefficient recovered using an IIR filter truncated to n = 500 terms (see Figure 5).

5.3. Calculation of Allan Variance

As a rule, for the purpose of characterizing a sensor using Allan variance, measurements are obtained in the absence of any input to the sensor. For example, for an Allan variance-based characterization of a gyroscope, the data is collected by placing the gyroscope on a stationary flat surface so that it is not measuring any angular velocities that may bias the characterization ('zero excitation'). The sensor is also placed in a room whose environment is controlled to avoid effects such as temperature-induced bias drift in the characterization process. Similar controlled environment and zero excitation test cases can be developed for characterization of other sensors as well. Allan variance can be calculated in a recursive manner while changing the correlation time, as described in Algorithm 1. The general idea behind the algorithm is to break up the time series into non-overlapping blocks of data, each encompassing a segment of the measurements that spans the length defined by the correlation time τ, whose smallest value is inversely proportional to the sampling frequency as follows: τ_min = 2/f_s. This is also indicated in step 2 of Algorithm 1. The average of the data in each of these non-overlapping data blocks of length τ is calculated, and finally the difference between successive averages is taken. The mean of the squares of the differences is used to find the Allan variance corresponding to the specific correlation time. Example 6 describes the progress of the algorithm at three different correlation times for white noise.

Since Allan variance is a time-domain characterization technique, a prerequisite for its calculation is access to a time series of sensor measurements. However, the length of the time series data that is collected plays a pivotal role in accurately characterizing the sensor using Allan variance. Specifically, the longer the time series data is collected, the better chance one has of characterizing low frequency noise in a sensor. Thus, it is important to know how much data to collect from a given sensor in order to characterize noise sources accurately. In order to correctly characterize a given sensor, one must collect data for a time interval that is at least of the order of the timescale of the lowest frequency noise source that may exist in the sensor. In most sensors, the lowest frequency noise is the rate ramp noise. Thus, the total duration of the test is limited by the time needed to estimate this noise source. The percentage error in the estimation of a particular noise source is given by [32][43]:

    e = 1 / √(2 (M/m − 1))                                      (34)

where e represents the percentage error in the estimation of the noise source (i.e. the error in the estimation of a noise source coefficient), M denotes the total number of data points required, and m denotes the number of data points in each averaging block.

Algorithm 1 – Calculation of Allan variance
1.  Truncate sensor measurements time series {x_i} (i = 1, 2, ..., M) to the nearest integer power of 2 to yield data length 2^p (p ∈ Z+)
2.  Set initial data block size m_i = 2 (or correlation time τ = m_i/f_s)
3.  while block size m_i ≤ 2^(p−1) do
4.      Initialize block number k = 1
5.      Set number of data blocks, n_i = 2^p / m_i
6.      while block number k ≤ n_i do
7.          Find mean for kth block, x̄_k = (1/m_i) Σ_{j = k m_i}^{(k+1)m_i − 1} x_j
8.          k ← k + 1
9.      end while
10.     Reset k = 1
11.     while block number k ≤ (n_i − 1) do
12.         Find difference between successive block means, d_k = x̄_{k+1} − x̄_k
13.         k ← k + 1
14.     end while
15.     Calculate Allan variance, σ²_A(τ) = (1 / (2(n_i − 1))) Σ_{k=1}^{n_i − 1} d_k²
16.     m_i ← 2 m_i
17. end while
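A direct NumPy transcription of Algorithm 1 is sketched below. It follows the steps listed above (power-of-two truncation, doubling block sizes, non-overlapping block means, halved mean squared difference); the test signal is illustrative white noise.

import numpy as np

# Sketch of Algorithm 1: non-overlapping block averages are differenced, and
# half the mean squared difference gives the Allan variance at each block size.
def allan_variance(x, fs):
    p = int(np.floor(np.log2(x.size)))
    x = x[:2 ** p]                                   # step 1: truncate to a power of 2
    taus, avars = [], []
    m = 2                                            # step 2: initial block size
    while m <= 2 ** (p - 1):                         # step 3
        n_blocks = (2 ** p) // m                     # step 5
        means = x.reshape(n_blocks, m).mean(axis=1)  # steps 6-9: block means
        d = np.diff(means)                           # steps 10-14: successive differences
        avars.append(0.5 * np.mean(d ** 2))          # step 15
        taus.append(m / fs)
        m *= 2                                       # step 16
    return np.array(taus), np.array(avars)

fs = 100.0
white = 0.01 * np.random.randn(2 ** 16)              # zero-excitation white noise record
tau, avar = allan_variance(white, fs)
# For white noise the root Allan variance should fall off roughly as 1/sqrt(tau).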

Example 6: Steps in calculation of Allan variance

This example covers the calculation of Allan variance for a white noise signal with power spectral density N² = 0.1 dB/Hz. If the sensor is a gyroscope, its measurement units are deg/s, and the standard deviation of the white noise sampled at f_s = 100 Hz is N/√f_s = 0.01 deg/s. The Allan variance has been calculated for three different correlation times: τ = 0.02 s, 0.32 s and 2.56 s. The block averages are shown for the three correlation times in Figs. 8(a), (b) and (c), respectively. It is evident from these figures that the Allan variance for white noise decreases as the correlation time increases. For τ = 0.02 s (2 measurements), the block averages are spread out; for τ = 0.32 s (32 measurements), the spread of the block averages is reduced; whereas for τ = 2.56 s (256 measurements), the block averages can barely be distinguished. This decreasing trend is also evident in Fig. 8(d), which shows the Allan variance for increasing correlation times. Readers must note that this trend only corresponds to white noise. Other noise sources will be discussed in Section 5.4.

Example 7: Length of time series required for characterization

The expression presented in (34) can help determine the length of the data set required to estimate or characterize a noise source coefficient up to a given accuracy e. For example, assume that the rate ramp noise parameter needs to be estimated with a percentage accuracy of 33%, i.e. e = 0.33. Additionally, assume that the sensor data is being sampled at 20 Hz. Given that the rate ramp noise is one of the lowest-frequency noise sources, and the timescale at which it occurs is typically a few hours (say 2 hours), the number of data points in one averaging step can be calculated to be m = τ f_s = (2 hours)(20 Hz) = 144,000. Then, (34) can be rearranged as:

M = m ( 1 + 1/(2e²) )        (35)

Substituting in the values for e and m, the total number of data points (or sensor measurements) that must be collected is M ≈ 805,157, which, at a sampling frequency of 20 Hz, implies that the sensor data must be collected for 11.18 hours. If the same values were used to obtain an estimate of the rate ramp noise coefficient that was 5% accurate, i.e. e = 0.05, calculations show that the sensor data must be collected for 402 hours.
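The arithmetic in Example 7 can be reproduced with a few lines of MATLAB; the 2-hour rate ramp timescale and the 20 Hz sampling rate are the assumptions stated in the example.

    % Required record length per (34)-(35), using the values from Example 7.
    fs  = 20;                     % sampling frequency, Hz
    tau = 2 * 3600;               % assumed timescale of the slowest noise source, s
    e   = 0.33;                   % acceptable relative error of the coefficient estimate
    m   = tau * fs;               % measurements per averaging block
    M   = m * (1 + 1/(2*e^2));    % total measurements required, (35)
    fprintf('Collect about %d samples (%.2f hours of data).\n', ceil(M), M/(fs*3600));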

5.4. Allan Variance for Common Types of Sensor Noise

As mentioned earlier, Allan variance has significant advantages when performing noise characterization on signals corrupted by several different noise sources. Because sensor noise characterization is carried out in a zero-excitation environment, the sensor output consists purely of the inherent noise sources, and this output can be used to characterize the noise types present in the sensor. In this section, the Allan variance characterization of the noise sources discussed in Section 4 is presented along with detailed examples.


5.4.1. Quantization Noise

As discussed earlier in Section 4.1, the power spectral density of quantization noise is given by (10), and is repeated here for convenience:


S_Ω(f) ≈ (2π f)² Q² T        (for f < f_s/2)


Combined with Tehrani's expression relating the PSD to the Allan variance, i.e. (33), the expression for the Allan variance of quantization noise can be found by substituting (10) into (33) and evaluating the resulting integral:


σ²_a(τ) = 3Q² / τ²        (36)

Figure 8: White noise data, block averages and Allan variance calculation for various correlation times. Sensor measurements are indicated by tiny blue dots, whereas block averages are indicated by large red dots grouped together and spread over the data block. (a) Measurement data and block averages for τ = 0.02 s (2 sensor measurements), (b) measurement data and block averages for τ = 0.32 s (32 sensor measurements), (c) measurement data and block averages for τ = 2.56 s (256 sensor measurements), and (d) Allan variance (blue circles) calculated for the three correlation times; the red line indicates the Allan variance trend for white noise.

or equivalently, the expression for the square root of the Allan variance, known as the root Allan variance, can be written as:

σ_a(τ) = √3 Q / τ        (37)

where the quantization noise coefficient Q is identical to the constant discussed in (10), i.e. Q is related to the quantum step size q as Q² = q²/12. The Allan variance values calculated for a specific noise source or stochastic signal are traditionally depicted on a log-log plot, with the root Allan variance as the dependent variable on the y-axis versus the correlation time τ as the independent variable on the x-axis.


It is evident from (37) that the log-log representation will have a slope of −1, as σ_a is inversely proportional to τ. Figure 9 depicts this trend in the root Allan variance. The easiest way to determine the quantization noise coefficient Q from the root Allan variance plot is to look up the value of the root Allan variance corresponding to a correlation time of τ = √3 units. However, this technique does not always produce reliable results, especially if the sensor measurements are corrupted by more than one type of noise. In an alternative approach, the known values of the correlation time τ, along with the calculated values of the corresponding Allan variance σ²_a(τ), can be used in a linear regression framework to determine the quantization noise coefficient. For example, the expression of the Allan variance for quantization noise can be written as a linear function of the term 1/τ² as follows:

Σ_a = T_qn Ĉ_qn        (38)

where, given a set of k correlation times for which the Allan variance is calculated, T_qn represents a k × 1 vector [1/τ₁², 1/τ₂², ..., 1/τ_k²]ᵗ, Σ_a represents a k × 1 vector [σ²_a(τ₁), σ²_a(τ₂), ..., σ²_a(τ_k)]ᵗ consisting of the corresponding Allan variance values, and Ĉ_qn represents an estimate of the quantization noise coefficient Q, such that:

σ²_a(τ_i) = (1/τ_i²) Ĉ_qn ≈ (1/τ_i²) 3Q²        (39)

Now, the estimate of the quantization noise coefficient can be obtained using (38) as follows:

Ĉ_qn = (T_qnᵗ T_qn)⁻¹ T_qnᵗ Σ_a ≈ 3Q²        (40)

The authors advise the use of the linear regression framework since it is extremely useful in scenarios where the underlying noise model, i.e. the types of noise sources inherent in the sensor, is unknown. In such a scenario, several different regression models may be tested, and the model that provides the 'best' fit may be used to determine the types of noise sources. Criteria for model selection, such as Pearson's correlation coefficient, may be used to determine which sources constitute the noise in the sensor. Another word of caution for practitioners: since the independent variable, i.e. the correlation time, spans several orders of magnitude, it is advisable to use weighted linear regression to determine the noise model and the appropriate noise coefficients. Mathematically, (40) may be re-written to account for the weighting of different correlation times and Allan variances according to the orders of their magnitude to read as follows:

Ĉ_qn = (T_qnᵗ W T_qn)⁻¹ T_qnᵗ W Σ_a        (41)

where W is a diagonal matrix containing weights along the diagonal that correspond approximately to the order of magnitude, and all off-diagonal entries are zero. In order to get an accurate estimate of the noise coefficient, the diagonal elements of the matrix W may be taken to be roughly of the order of the calculated values of the Allan variance. Example 8 depicts the theoretical and calculated root Allan variance plots as well as the weighted least squares fit generated to recover the quantization noise coefficient. The quantization noise was simulated using the models discussed and used in Section 4.1. The root Allan variance plot shown in Figure 9(b) corresponds to the error rate in a quantized signal. Table 2 includes the designed and recovered values of the quantization noise coefficient and quantum step size using Allan variance.

The characterization of the remaining noise types, including angle random walk noise, bias instability, rate random walk noise, and rate ramp noise, follows the same pattern. The Allan variance is calculated using Algorithm 1, and weighted linear regression is used to recover the noise parameter or coefficient for the specific noise types. Later, an example will be used to show that the noise parameters or coefficients can be reliably retrieved even when several noise sources are present in a sensor.

5.4.2. Angle Random Walk Noise

The power spectral density of angle random walk noise, together with (33), can be used to determine the root Allan variance as a function of correlation time, yielding the following expression:

σ_a(τ) = N / τ^(1/2)        (43)

where the angle random walk noise coefficient N is identical to the constant discussed in (12). In light of the expression presented in (43), it is expected that the log-log plot describing the relation of the root Allan variance to the correlation time for angle random walk will have a slope of −1/2. As with quantization noise, a glance at (43) is sufficient to realize that the easiest method of identifying the angle random walk noise coefficient is to determine the value of the root Allan variance corresponding to the correlation time τ = 1 unit. However, weighted linear regression yields better estimates, so the noise coefficient can be evaluated from Allan variance measurements as follows:

Ĉ_arw = (T_arwᵗ W T_arw)⁻¹ T_arwᵗ W Σ_a ≈ N²        (44)

where the terms have their usual meanings except that T_arw represents a k × 1 vector [1/τ₁, 1/τ₂, ..., 1/τ_k]ᵗ. A word of caution for practitioners attempting to simulate angle random walk noise using given sensor specifications: recall that as per (2), the variance of the sampled white noise is related to the power spectral density as σ²_fs = N² f_s. Consequently, one must use the standard deviation σ_fs of the sampled white noise to simulate the noisy sensor signal, rather than the square root of the power spectral density, N. See Example 8 and Table 2 for more details.

5.4.3. Flicker Noise or Bias Instability

The power spectral density of flicker noise described in (24) can be used to evaluate the root Allan variance, yielding the following expression:

σ²_a(τ) = (2B²/π) [ ln 2 − sin⁴(x)/(2x²) − 2 sin³(x) cos(x)/x + Ci(2x) − Ci(4x) ]        (46)

Example 8: Quantization noise characterization with Allan variance

The quantization noise generated in this example has the same characteristics and statistics as the signal generated in Example 1. The input signal is given by x(t) = A sin(2π f t) + v(t), where A = 3 units, f = 1 Hz, and v(t) is Gaussian white noise with mean zero and power spectral density N² = (0.01A)² unit²/Hz, so that the variance of the band-limited white noise is N² f_s = 0.018 unit². The signal is simulated for 5000 s and sampled at f_s = 20 Hz. Figure 9(a) shows the theoretical root Allan variance plot as a function of the correlation time τ with the appropriate slope of −1 on the log-log plot, i.e. the root Allan variance decreases by 1 decade for an increase of 1 decade in the correlation time. Figure 9(b) shows the calculated root Allan variance generated using the rate measurement of the simulated quantization noise, which also has the appropriate slope of −1 on the log-log plot. The corresponding weighted least squares fit is used to calculate Ĉ_qn, from which the quantization noise coefficient is recovered as:

Q = √( Ĉ_qn / 3 )        (42)

In this example, the designed quantum step size was q = 0.6658, which is similar to the step size considered in Example 1. The recovered quantum step size is q = 0.6647. These quantum step sizes have been calculated using the known relationship Q² = q²/12 discussed earlier in Section 4.1.

Example 9: Angle random walk characterization with Allan variance

Random walk noise is observed in sensors as a result of white noise in the underlying sensor measurement. For example, white noise in the angular rate measurements of a gyroscope manifests as a random walk in the angle outputs. The angle random walk in this example is generated by an underlying white noise in the rate measurements, characterized by the ARW noise coefficient N = 0.025 unit/√s and sampled at f_s = 20 Hz. Figure 10 shows the calculated root Allan variance generated using the rate measurement of the simulated ARW noise, which has the appropriate slope of −1/2 on the log-log plot, i.e. the root Allan variance decreases by 1 decade for an increase of 2 decades in the correlation time. The corresponding weighted least squares fit is used to calculate Ĉ_arw, from which the angle random walk noise coefficient is recovered quite simply as:

N = √( Ĉ_arw )        (45)
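A minimal sketch of the weighted least-squares recovery used in Examples 8 and 9 is given below. It assumes tau and avar are the column vectors produced by the Allan variance routine sketched earlier, and it uses inverse-square weights so that every decade of correlation time contributes comparably; any weighting of the order of the Allan variance magnitudes, as suggested in the text, could be substituted.

    % Weighted least-squares recovery of Q and N, per (41)-(42) and (44)-(45).
    Sigma = avar(:);
    W     = diag(1 ./ Sigma.^2);           % illustrative weighting choice
    T_qn  = 1 ./ tau(:).^2;                % quantization noise regressor, (38)
    T_arw = 1 ./ tau(:);                   % angle random walk regressor, (44)
    C_qn  = (T_qn'  * W * T_qn ) \ (T_qn'  * W * Sigma);   % (41)
    C_arw = (T_arw' * W * T_arw) \ (T_arw' * W * Sigma);   % (44)
    Q = sqrt(C_qn / 3);                    % quantization noise coefficient, (42)
    N = sqrt(C_arw);                       % angle random walk coefficient, (45)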


Figure 10: Root Allan variance calculated using simulated angle random walk noise with known parameters. Blue circles indicate the calculated root Allan variance for a specific correlation time, red line indicates the weighted least squares fit used to recover the noise coefficient.


where B² denotes the power spectral density of the underlying white noise that is used to generate the approximate flicker noise, Ci(x) represents the cosine integral function, and x = π f₀ τ. For values of correlation time τ that are much larger than the inverse cutoff frequency (i.e. 1/f₀), the second and third terms in (46) tend to 0, as their denominators continue to grow while the range of the numerators is restricted. The fourth and fifth terms also tend to 0, as the cosine integrals approach zero for large values of τ. As a result, for large values of correlation time (or low frequencies), the root Allan variance of flicker noise or bias instability can be expressed as follows:

σ_a(τ) = √( 2 ln 2 / π ) B ≈ 0.664 B        (47)


where B is also referred to as the flicker noise or bias instability coefficient, and is identical to the constant discussed in (24). The expression for the root Allan variance in (47) implies that the root Allan variance is independent of τ for large correlation times. As a result, it is expected that for large values of correlation time τ the log-log plot of the root Allan variance will have zero slope. As before, there exists a quick method of determining the flicker noise coefficient – one only needs to visually identify the region on the Allan variance plot that has zero slope and use the corresponding value of the root Allan variance to find the flicker noise coefficient via (47). Alternatively, the weighted least squares methodology may be used to determine the noise coefficient, as given in (48) below.


Figure 9: Root Allan variance for quantization noise on a log-log plot (a) Theoretical root Allan variance plot, [31] and (b) Root Allan variance calculated using simulated quantization noise with known parameters. Blue circles indicate the calculated root Allan variance for a specific correlation time, red line indicates the weighted least squares fit used to recover the noise coefficient.


Ĉ_fn = (T_fnᵗ W T_fn)⁻¹ T_fnᵗ W Σ_a ≈ (0.664 B)²        (48)

where the terms have their usual meanings except that T_fn represents a k × 1 vector [τ₁⁰, τ₂⁰, ..., τ_k⁰]ᵗ, i.e. T_fn = [1, 1, ..., 1]ᵗ, and Ĉ_fn represents an estimate of the flicker noise or bias instability coefficient B. Practitioners may like to know that in many cases, such as when recovering noise parameters from simulated flicker noise, visual inspection of the root Allan variance plot offers a better technique than attempting to fit a zero-slope line to the entire data set using weighted linear regression. This is due to the fact that the suggested models for simulating flicker noise are only approximations of the actual phenomenon, and the truncated model in (19) has poor fidelity in the very low frequency region. Consequently, linear regression provides unreliable estimates of the coefficient Ĉ_fn. However, if several noise sources are present in the sensor, weighted linear regression yields much better results and remains the preferred method for characterization of sensors. Table 2 provides insight into the relative merits of the visual inspection and weighted linear regression methods when characterizing flicker noise alone. See Example 10 for additional details.

Example 10: Flicker noise characterization with Allan variance

The flicker noise generated in this example has the same characteristics as the one generated in Example 3. The flicker noise is generated at f_s = 20 Hz and the noise coefficient is B = 0.0224 units/s. The flicker noise is simulated using the AR model described in (19), truncated at n = 50, 500, and 5000 terms, as shown in Fig. 11(b). The root Allan variance plots for flicker noise models truncated at varying numbers of terms show significant departures from the theoretical plot (shown in Fig. 11(a)). This is an expected outcome of the modeling approach, which only approximates the true flicker noise spectrum, as is evident in Fig. 5(c-d). Specifically, in Fig. 5(c), the spectrum of the simulated noise deviates from the true spectrum below 0.1 Hz. This translates directly to a correlation time of 10 s, which is where deviations in the root Allan variance plot begin to appear. In this scenario, the root Allan variance corresponding to the flat region of the flicker noise plot is best determined via visual inspection, a fact that is corroborated by the relative errors seen in the coefficient estimates in Table 2. Specifically, the flicker noise coefficient B can be recovered using:

B = (1/0.664) √( Ĉ_fn ) = (1/0.664) σ_a0        (49)


where σ_a0 represents the value of the root Allan variance for which the slope of the Allan variance plot is approximately zero.
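A rough way to automate the visual-inspection reading of (47) and (49) is sketched below. Taking the minimum of the root Allan variance curve as a stand-in for the zero-slope region is a heuristic assumption of this sketch, not a prescription from the analysis above.

    % Crude bias instability estimate from the flat region of the root AVAR curve.
    radev = sqrt(avar(:));              % root Allan variance
    [sigma_a0, idx] = min(radev);       % heuristic proxy for the zero-slope region
    B = sigma_a0 / 0.664;               % bias instability coefficient, per (49)
    fprintf('Flat region near tau = %.2f s, B ~ %.4g\n', tau(idx), B);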

5.4.4. Rate Random Walk Noise

Using (33), the power spectral density can be used to arrive at the following expression for the root Allan variance of rate random walk noise as a function of the correlation time τ:

σ_a(τ) = (K/√3) τ^(1/2)        (50)

where K represents the rate random walk noise coefficient and is identical to the constant discussed in (28). Observing the expression in (50), it is expected that the root Allan variance plot will have a slope of +1/2. The simplest technique for identifying the rate random walk noise coefficient is to determine the value of the root Allan variance corresponding to the correlation time τ = 3 units. Alternatively, following a similar weighted linear regression framework as before, the rate random walk noise coefficient K can be evaluated as follows:

Ĉ_rrw = (T_rrwᵗ W T_rrw)⁻¹ T_rrwᵗ W Σ_a ≈ K²/3        (51)
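The regression in (51) can be sketched along the same lines as before; restricting the fit to long correlation times, where the +1/2 slope dominates, is an assumption of this sketch, and the 10 s threshold is purely illustrative.

    % Rate random walk coefficient via (51), restricted to long correlation times.
    t = tau(:);  a = avar(:);
    sel   = t > 10;                         % assumed +1/2-slope region of the curve
    T_rrw = t(sel);  Sigma = a(sel);
    W     = diag(1 ./ Sigma.^2);            % illustrative weighting choice
    C_rrw = (T_rrw' * W * T_rrw) \ (T_rrw' * W * Sigma);
    K     = sqrt(3 * C_rrw);                % rate random walk coefficient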


See Example 11 and Table 2 for additional details.

Figure 11: Root Allan variance calculated using simulated flicker noise with known parameters. Circles indicate the calculated root Allan variance for a specific correlation time, and lines indicate the weighted least squares fits used to recover the noise coefficient. Blue (dark) corresponds to n = 50 terms in the flicker noise model, orange (medium) corresponds to n = 500 terms, and green (light) corresponds to n = 5000 terms.

5.4.5. Rate Ramp Noise

As opposed to the procedure followed for the other types of noise, for rate ramp noise generated using the deterministic model discussed in (29), it is much simpler to obtain the expression for the Allan variance by directly substituting the model into the definition of the Allan variance in (32).

Table 2: Designed and recovered noise coefficients using Allan variance (AVAR) analysis (included errors are relative errors in %)

Parameter                  Designed       Recovered (AVAR)       % error
Quantization (Q)           0.1922         0.1919                 0.16
Angle random walk (N)      0.0250         0.0252                 0.80
Bias instability (B)       0.0224         0.0032* (0.0248†)      85.71* (10.71†)
Rate random walk (K)       1.00E-04       9.09E-05               9.09
Rate ramp (R)              1.0000E-08     1.0006E-08             0.06

*Bias instability noise coefficient from an IIR filter model truncated to n = 500 terms.
†Values obtained from visual inspection of the Allan variance plot.

Example 11: Rate random walk characterization with Allan variance

The rate random walk simulated here has the same characteristics as the one simulated in Example 4, i.e. it is generated using the rate random walk noise coefficient K = 0.0001 units/s^(3/2) and sampled at 20 Hz. The coefficient obtained from the weighted linear regression as per (51) is Ĉ_rrw = 2.75e-09, and the RRW noise coefficient is retrieved using:

K = √( 3 Ĉ_rrw )

The designed and recovered values of the rate random walk coefficient are included in Table 2.

Example 12: Rate ramp noise characterization with Allan variance

The rate ramp noise simulated here has the same characteristics as the one simulated in Example 5(a), i.e. it is generated using the rate ramp noise coefficient R = 1.00e-08 units/s² and sampled at 20 Hz. The coefficient obtained from the weighted linear regression as per (54) is Ĉ_rr = 5.006e-17, and the rate ramp noise coefficient is retrieved using:

R = √( 2 Ĉ_rr )

The designed and recovered values of the rate ramp noise coefficient are included in Table 2.
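For completeness, the coefficient back-calculations quoted in Examples 11 and 12 can be checked in one line each, using the regression outputs stated in those examples.

    K = sqrt(3 * 2.75e-9);      % approximately 9.08e-5 units/s^(3/2), cf. Table 2
    R = sqrt(2 * 5.006e-17);    % approximately 1.0006e-8 units/s^2, cf. Table 2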

Figure 12: Root Allan variance for rate random walk noise on a log-log plot. (a) Theoretical root Allan variance plot [31], and (b) root Allan variance calculated using simulated RRW noise with known parameters. Circles indicate the calculated root Allan variance for a specific correlation time, and the line indicates the weighted least squares fit used to recover the noise coefficient.

Figure 13: Root Allan variance calculated using simulated rate ramp noise with known parameters. Circles indicate the calculated root Allan variance for a specific correlation time, and the line indicates the weighted least squares fit used to recover the noise coefficient.

Substituting the rate ramp model of (29) into (32) yields:

σ²_a(τ) = (1/2) E[ (x̄_{k+m} − x̄_k)² ]
        = (1/2) E[ { R((t + τ) + t)/2 − R(t + (t − τ))/2 }² ]
        = (1/2) { R((t + τ) + t)/2 − R(t + (t − τ))/2 }²
        = R² τ² / 2        (52)

where the first term denotes the average of all sensor measurements in the time interval [t, t + τ) and the second term denotes the average of all sensor measurements in the time interval [t − τ, t). Equivalently, the root Allan variance is given by:

σ_a(τ) = (R/√2) τ        (53)

where R denotes the rate ramp noise coefficient and is identical to the constant discussed in (29). Observing the expression in (53), it is expected that the log-log plot of the root Allan variance will have a slope of +1. The rate ramp noise coefficient may be determined by evaluating the value of the root Allan variance corresponding to the correlation time τ = √2 units, or through weighted linear regression as follows:

Ĉ_rr = (T_rrᵗ W T_rr)⁻¹ T_rrᵗ W Σ_a ≈ R²/2        (54)

where T_rr = [τ₁², τ₂², ..., τ_k²]ᵗ, and Ĉ_rr estimates the rate ramp noise coefficient R. Additional details are available in Example 12.

5.5. Noise characterization with multiple noise sources

Thus far, the discussion has focused on a single type of noise, with emphasis on its modeling, characterization, and simulation.

However, one often encounters scenarios where sensor measurements are corrupted by multiple noise sources. It is in such scenarios that the power of Allan variance as a characterization technique really shines through. Moreover, it is in this scenario that the use of a linear regression based framework for evaluating the sensor noise specifications also achieves fruition. Specifically, the individual regression-based analyses carried out for each noise type can be grouped together to make full use of the capacity of the regression framework, as discussed below. Let us assume that the sensor measurements are corrupted by three noise sources – angle random walk noise, flicker noise and rate random walk noise. Additionally, since these noise sources are driven by diverse processes and are prevalent over disparate frequency bands, they may be assumed to be statistically independent, so that the expression for the Allan variance can be obtained as the sum of the individual Allan variance expressions for each noise source as follows:

σ²_a(τ) = σ²_arw(τ) + σ²_fn(τ) + σ²_rrw(τ)        (55)

Example 13: Noise characterization with multiple noise sources

In this example, the virtual sensor measurements are corrupted by three noise sources – angle random walk noise, flicker noise (or bias instability), and rate random walk noise. Each of these noise sources is simulated using the models discussed in previous sections, and the final noisy signal is obtained by adding the individual noise source signals together. The noise is simulated using the noise specifications N = 0.025 unit/s^(1/2), B = 0.0224 unit/s, and K = 0.001 unit/s^(3/2), and is sampled at f_s = 20 Hz. Fig. 14(a) depicts the simulated noise sources plotted separately. It may be observed that the noise sources have differing frequency content, as expected – ARW noise has a higher frequency content, whereas RRW noise has a lower frequency content compared to the others. Fig. 14(b) plots the calculated root Allan variance and the best fit obtained using a noise model that contains angle random walk, flicker, and rate random walk noises, yielding a coefficient of correlation ρ = 0.9875. The designed and recovered values of the various noise specifications are included in Table 3.

The expression presented in (55) holds true for any combination of the noise sources discussed above. In this situation, the noise specifications can be retrieved from the sensor data using:

[ σ²_a(τ₁) ]     [ τ₁⁻¹    τ₁⁰    τ₁  ]   [ Ĉ_arw ]
[ σ²_a(τ₂) ]  =  [ τ₂⁻¹    τ₂⁰    τ₂  ] · [ Ĉ_fn  ]        (56)
[    ⋮     ]     [  ⋮       ⋮      ⋮  ]   [ Ĉ_rrw ]
[ σ²_a(τ_k)]     [ τ_k⁻¹   τ_k⁰   τ_k ]


or, simply put,

Σ_a = T Ĉ        (57)

where Σ_a represents the vector of Allan variance values calculated at the various correlation times (τ₁, τ₂, ..., τ_k), T represents the array of correlation times raised to the appropriate powers based on the noise types being considered, and Ĉ represents a vector of the regression coefficients corresponding to each of the noise sources included in the regression model. The regression coefficients, which can be obtained by solving the linear regression problem as follows:

Ĉ = [Ĉ_arw, Ĉ_fn, Ĉ_rrw]ᵗ = (Tᵗ W T)⁻¹ Tᵗ W Σ_a        (58)


can be used to recover the noise specifications, as discussed in Examples 8 through 12. The astute reader may now ask: how can one decide which noise sources are present in the sensor before carrying out any analysis on the sensor measurement data? The answer is: we don't. The regression technique performed here is identical to any other regression analysis in that one has to try several different noise models and test how accurately a particular noise model represents the data. The usual technique to calculate the accuracy of a given noise model is to use Pearson's coefficient of correlation, which can be computed quite easily in computing environments such as MATLAB using predefined functions such as corr.

Figure 14: Sensor noise characterization using Allan variance for sensor measurements corrupted by three noise sources – angle random walk, flicker, and rate random walk noise. (a) Simulated noise sources – ARW noise is depicted in green (light), FN is depicted in red (medium), and RRW noise is depicted in blue (dark), and (b) root Allan variance and weighted least squares fit obtained using the appropriate noise model. Circles indicate actual root Allan variance values calculated using simulated data, and the line represents the regression model.

If the coefficient of correlation appears to be 'low', then the same procedure can be repeated for different combinations of noise sources that may be present in the sensor. The reader is directed to standard texts on regression analysis, such as [47], for further details.
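A compact sketch of the combined regression in (56)–(58) is given below. It assumes tau and avar come from the Allan variance routine sketched earlier, uses an illustrative inverse-square weighting, and recovers the three coefficients of the assumed ARW + flicker + RRW model; the call to corr requires MATLAB's Statistics and Machine Learning Toolbox.

    % Combined noise-model regression per (56)-(58) for ARW + flicker + RRW.
    t = tau(:);  a = avar(:);
    T = [1./t, ones(size(t)), t];        % columns: ARW (1/tau), flicker (1), RRW (tau)
    W = diag(1 ./ a.^2);                 % illustrative weighting choice
    C = (T' * W * T) \ (T' * W * a);     % regression coefficients, (58)
    N = sqrt(C(1));                      % angle random walk coefficient
    B = sqrt(C(2)) / 0.664;              % bias instability coefficient, via (47)
    K = sqrt(3 * C(3));                  % rate random walk coefficient
    rho = corr(T * C, a);                % Pearson correlation of the fit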

[5] M. Kac, Random walk and the theory of Brownian motion, The American Mathematical Monthly 54 (7) (1947) 369–391. [6] D. Abbott, Overview: Unsolved problems of noise and fluctuations., Chaos (Woodbury, N.Y.) 11 (3) (2001) 526–538. [7] A. van der Ziel, Noise, Prentice-Hall, 1954. [8] J. B. Johnson, The Schottky effect in low frequency circuits, Phys. Rev. 26 (1925) 71–85. [9] D. Applebaum, L´evy Processes and Stochastic Calculus, Cambridge Studies in Advanced Mathematics, Cambridge University Press, 2009. [10] V. Mandrekar, Mathematical work of Norbert Wiener, Notices of the AMS 42 (6) (1995) 664–669. [11] G. E. Uhlenbeck, L. S. Ornstein, On the theory of the Brownian motion, Phys. Rev. 36 (1930) 823–841. [12] H. Stark, J. W. Woods, Probability and Random Processes with Applications to Signal Processing, Prentice Hall PTR, 2002. [13] B. Widrow, I. Kollar, M.-C. Liu, Statistical theory of quantization, Instrumentation and Measurement, IEEE Transactions on 45 (2) (1996) 353– 361. doi:10.1109/19.492748. [14] W. R. Bennett, Bell System Technical Journal, Vol. 27, Blackwell Publishing Ltd, 1948, Ch. Spectra of Quantized Signals, pp. 446–472. [15] T. Claasen, A. Jongepier, Model for the power spectral density of quantization noise, Acoustics, Speech and Signal Processing, IEEE Transactions on 29 (4) (1981) 914–917. [16] W. J. Riley, Handbook of frequency stability analysis, US Department of Commerce, National Institute of Standards and Technology, 2008. [17] F. K. Musgrave, C. E. Kolb, R. S. Mace, The synthesis and rendering of eroded fractal terrains, SIGGRAPH Comput. Graph. 23 (3) (1989) 41–50. [18] M. Goodwin, Residual modeling in music analysis-synthesis, Acoustics, Speech, and Signal Processing, 1996. ICASSP-96. Conference Proceedings., 1996 IEEE International Conference on 2 (1996) 1005–1008. [19] A. Gelb, J. F. Kasper, R. A. Nash, C. F. Price, A. A. Sutherlan, Applied Optimal Estimation, 1974. [20] K. H. Lloyd, On the implementation of noise in the discrete simulation of continuous systems, Tech. rep., DTIC Document (1982). [21] N. J. Kasdin, Discrete simulation of colored noise and stochastic processes and 1/f: power law noise generation, Proceedings of the IEEE 83 (5) (1995) 802–827. [22] P. Kloeden, E. Platen, Numerical Solution of Stochastic Differential Equations, Applications of mathematics : stochastic modelling and applied probability, Springer, 1992. [23] S. R. Amin, Determining the uncertainty of a GPS-based collision vehicle detection system, Master’s thesis, The Pennsylvania State University (2011). [24] T. J. Witt, Using the Allan variance and power spectral density to characterize DC nanovoltmeters, Instrumentation and Measurement, IEEE Transactions on 50 (2) (2001) 445–448. [25] P. Werle, R. Mucke, F. Slemr, The limits of signal averaging in atmospheric trace-gas monitoring by tunable diode-laser absorption spectroscopy, Applied Physics B Photophysics and Laser Chemistry 57 (2) (1993) 131–139. [26] A. Sripad, D. Snyder, A necessary and sufficient condition for quantization errors to be uniform and white, Acoustics, Speech and Signal Processing, IEEE Transactions on 25 (5) (1977) 442–448. [27] B. Widrow, I. Koll´ar, Quantization Noise: Roundoff Error in Digital Computation, Signal Processing, Control, and Communications, NetLibrary, Inc, Cambridge University Press, 2008. [28] H. Mehrer, Diffusion in Solids: Fundamentals, Methods, Materials, Diffusion-Controlled Processes, Springer Series in Solid-State Sciences, Springer, 2007. [29] Y. Sugita, Y. 
Okamoto, Replica-exchange molecular dynamics method for protein folding, Chemical Physics Letters 314 (1–2) (1999) 141–151. [30] E. F. Fama, Random walks in stock market prices, Financial Analysts Journal 21 (5) (1965) 55–59. [31] 952-1997 - IEEE standard specification format guide and test procedure for single-axis interferometric fiber optic gyros, Tech. rep. (1998). doi:10.1109/IEEESTD.1998.86153. [32] A. Papoulis, S. U. Pillai, Probability, random variables, and stochastic processes, McGraw-Hill, 2002. [33] E. Milotti, 1/f noise: a pedagogical review, arXiv ePrints. [34] S. Nassar, K.-P. Schwarz, N. El-Sheimy, A. Noureldin, Modeling inertial sensor errors using autoregressive (AR) models, Navigation 51 (4) (2004)

Table 3: Designed values, recovered values, and % relative error of the various noise parameters using Allan variance (AVAR)

Noise parameter        Designed     Recovered    Rel. error (%)
Angle random walk      0.0250       0.0242       3.20
Flicker noise          0.0224       0.0207       7.58
Rate random walk       1.00e-03     7.52e-04     24.8

6. Concluding Remarks

In this paper, several noise sources found in sensors – quantization, angle random walk, bias instability, rate random walk, and rate ramp – were discussed in the context of their modeling and characterization techniques. Several examples pertaining to their simulation and implementation in computing environments were also discussed. The authors have made a concerted effort to explain the relationship between commonly used noise model parameters (as discussed in Section 4) and the quantities used to characterize commercially available sensors (as discussed in Section 5). Specifically, the several exemplary analyses presented throughout the paper aim to help the uninitiated reader bridge the gap between sensor noise modeling and sensor characterization techniques. For readers interested in advanced methods of sensor characterization, such as the modified Allan variance or the Hadamard variance, the authors would like to refer them to the NIST Handbook of Frequency Stability Analysis [16]. MATLAB code corresponding to all examples discussed here can be found on the MATLAB Central File Exchange. Since the paper assumes some prior knowledge of signal processing and stochastic processes, the authors have also provided an expanded tutorial-style version of the paper that assumes no prior knowledge. The expanded version includes additional description of the notation and basics as well as a couple of appendices, and is available on the authors' website.

Acknowledgments

Part of this work was performed with the help of funding provided by the Federal Highway Administration under the Exploratory Advanced Research Program (FHWA BAA DTFH6109-R-00004) on the project titled 'Next Generation Vehicle Positioning Techniques for GPS-degraded Environments to Support Vehicle Safety and Automation Systems'.

References

[1] N. Barbour, G. Schmidt, Inertial sensor technology trends, IEEE Sensors Journal 1 (2001) 332–339.
[2] D. W. Allan, Statistics of atomic frequency standards, Proceedings of the IEEE 54 (1966) 221–230.
[3] K. Jerath, S. N. Brennan, GPS-free terrain-based vehicle tracking performance as a function of inertial sensor characteristics, Proceedings of the 2011 Dynamic Systems and Control Conference, Arlington, VA, USA.
[4] N. El-Sheimy, H. Hou, X. Niu, Analysis and modeling of inertial sensors using Allan variance, IEEE Transactions on Instrumentation and Measurement 57 (2008) 140–149.


259–268. [35] S. Plaszczynski, Generating long streams of 1/ f α noise, Fluctuation and Noise Letters 7 (01) (2007) R1–R13. [36] L. Huang, Auto-regressive moving average (ARMA) modeling method for Gyro random noise using a robust Kalman filter, Sensors 15 (10) (2015) 25277–25286. [37] K. Wang, S. Xiong, Y. Li, Modeling with noises for inertial sensors, in: Position Location and Navigation Symposium (PLANS), 2012 IEEE/ION, IEEE, 2012, pp. 625–632. [38] M. Pittelkau, Attitude determination Kalman filter with a 1/ f flicker noise gyro model, in: Proceedings of the 26th international technical meeting of the ION satellite division, Nashville, Tennessee, 2013, pp. 2143–2159. [39] J. R. M. Hosking, Fractional differencing, Biometrika 68 (1) (1981) 165– 176. [40] J. Strus, M. Kirkpatrick, J. W. Sinko, GPS/IMU - Development of a high accuracy pointing system for maneuvering platforms, Inside GNSS 3 (2). [41] G. Matz, F. Hlawatsch, Time-varying power spectra of nonstationary random processes, in: B. Boashash (Ed.), Time Frequency Signal Analysis and Processing: A Comprehensive Reference, Elsevier Science & Technology Books, 2003. [42] V. Saini, S. Rana, M. Kuber, Online estimation of state space error model for MEMS IMU, Journal of Modelling and Simulation of Systems 1 (4) (2010) 219–225. [43] H. Hou, Modeling inertial sensors errors using Allan variance, University of Calgary, 2004. [44] J. A. Barnes, A. R. Chi, L. S. Cutler, D. J. Healey, D. B. Leeson, T. E. McGunigal, J. A. Mullen, W. L. Smith, R. L. Sydnor, R. F. C. Vessot, G. M. R. Winkler, Characterization of frequency stability, Instrumentation and Measurement, IEEE Transactions on IM-20 (2) (1971) 105–120. [45] M. A. Lombardi, T. P. Heavner, S. R. Jefferts, NIST primary frequency standards and the realization of the SI second, The Journal of Measurement Science 2 (4). [46] M. M. Tehrani, Ring laser gyro data analysis with cluster sampling technique, Proceedings of the SPIE (1983) 207–220. [47] L. R. Ott, M. Longnecker, An Introduction to Statistical Methods and Data Analysis, Brooks/Cole Cengage Learning, 2010.


Highlights
1. Outlines numerical simulation models for commonly found sensor noise types
2. Links noise model parameters to sensor noise characterization specifications
3. Provides method to find system performance bounds using simulated sensor data
