Estimation of parameters in random amplitude chirp signal

Journal Pre-proof

Swagata Nandi, Debasis Kundu

PII: S0165-1684(19)30381-0
DOI: https://doi.org/10.1016/j.sigpro.2019.107328
Reference: SIGPRO 107328

To appear in: Signal Processing

Received date: 5 December 2018
Revised date: 9 August 2019
Accepted date: 2 October 2019

Please cite this article as: Swagata Nandi, Debasis Kundu, Estimation of parameters in random amplitude chirp signal, Signal Processing (2019), doi: https://doi.org/10.1016/j.sigpro.2019.107328

This is a PDF file of an article that has undergone enhancements after acceptance, such as the addition of a cover page and metadata, and formatting for readability, but it is not yet the definitive version of record. This version will undergo additional copyediting, typesetting and review before it is published in its final form, but we are providing this version to give early visibility of the article. Please note that, during the production process, errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain. © 2019 Published by Elsevier B.V.

Highlights

• We consider the theoretical properties of estimators that are already available in the literature, and establish consistency and asymptotic normality under a fairly general set of assumptions on the model parameters and on the error random variables.
• We extend the existing model to a multi-component model, which is more effective for data analysis purposes, and establish the consistency and asymptotic normality of the proposed estimators.
• Extensive simulation results suggest that the methods work quite well and that the proposed multi-component model can be used quite effectively for data analysis purposes.


ESTIMATION OF PARAMETERS IN RANDOM AMPLITUDE CHIRP SIGNAL SWAGATA NANDI1 AND DEBASIS KUNDU2

Abstract. In this paper, we consider the random amplitude chirp signal model observed with independent and identically distributed additive error, as proposed by Besson et al. [2]. The random amplitudes are also independent and identically distributed random variables with non-zero mean and some finite higher-order moments. We consider the estimators proposed by Besson et al. [2] and study their theoretical properties. It is observed that the estimators are strongly consistent and asymptotically normally distributed. We also propose a new multi-component random amplitude chirp model, which can be more useful in practice. Numerical experiments are conducted to assess the small-sample performance.

1. Introduction

2000 Mathematics Subject Classification: 62J02; 62E20; 62C05.
Key words and phrases: Random Amplitude Chirp Signal; Non-linear Least Squares; Multiplicative Error; Asymptotic Distribution.

In this paper we consider the estimation of the unknown parameters of a complex-valued chirp signal model with random time-varying amplitude. Mathematically, the model can be expressed as follows:

    y(t) = α(t) e^{i(θ1^0 t + θ2^0 t²)} + e(t),  t = 1, ..., n.  (1)

Here i = √−1; y(t) = yR(t) + i yI(t) are the complex-valued signals at n equidistant time points; the amplitude {α(t)} is a sequence of real-valued random variables with non-zero mean µα, whose specific structure is given in Assumption 1; θ1^0 and θ2^0 are the frequency and chirp rate, respectively, and 0 < θ1^0, θ2^0 < π. The additive error {e(t)} is a sequence of complex-valued random variables with zero mean and finite fourth moment. The random amplitude α(t) acts like a multiplicative error. The problem is to estimate the unknown parameters θ1^0 and θ2^0 based on the measurements {y(1), ..., y(n)}. It should be mentioned that although all physical signals are real valued, it might be advantageous from an analytical, notational or algorithmic point of view to work with


signals in their analytic form, which is complex valued; see, for example, Gabor [7]. For a real-valued continuous signal, its analytic form can easily be obtained using the Hilbert transformation. Therefore, it is quite natural to work with complex-valued signals, to develop the necessary properties for the complex model, and to use them for the corresponding real model. For this reason, in most of the signal processing literature complex-valued models are used for analytical derivation or algorithmic development, although they can be used for analyzing any real-valued signal; see, for example, Stoica and Moses [13].

Model (1) is known as the random amplitude chirp signal model, and it was first introduced by Besson et al. [2]. As the authors mention, this kind of signal arises in many applications of signal processing, one of the most important being the radar problem. For instance, consider a radar illuminating a target. The transmitted signal will then be affected by two different phenomena. First, it will undergo a phase shift induced by the distance and relative motion between the target and the receiver. Assuming this motion is continuous and differentiable, the phase shift can be adequately modeled as φ(t) = a0 + a1 t + a2 t², where the parameters a1 and a2 are related either to speed and acceleration or to range and speed, depending on what the radar is intended for and on the kind of waveforms transmitted. The second phenomenon to be accounted for is amplitude distortion, caused either by target fluctuation or by scattering of the medium (e.g. fading). Note that Besson et al. [2] originally considered the following model:

    y(t) = α(t) e^{i(θ0^0 + θ1^0 t + θ2^0 t²)} + e(t),  t = 1, ..., n,  (2)

where α(t), e(t), θ1^0, θ2^0 are the same as in (1), but model (2) contains a phase term θ0^0, which is not present in model (1). It can easily be observed that when the mean of the multiplicative error, µα, is unknown and an unknown phase term θ0^0 is also present, the two are not jointly identifiable; if both are present, one of them has to be known. A separate phase term has therefore not been included in our model (1); it is absorbed into α(t) without any loss of generality. Hence, models (1) and (2) are equivalent.

Model (1) can be seen as a generalized version of the chirp model, in which α(t) is constant. An extensive amount of work has been done on different aspects of the chirp model because of its wide applicability; see, for example, Abatzoglou [1], Farquharson et al. [5], Gini et al. [8], Grover et al. [9] and the references cited therein. Model (1) can also be seen as a generalized version of the random amplitude sinusoidal model, obtained when θ2^0 = 0. The random amplitude sinusoidal model has also received a considerable amount of attention in the signal


processing literature; see, for example, Francos and Friedlander [6], Besson and Stoica [3], Zhou and Giannakis [14], etc. Finally, if θ2^0 = 0 and α(t) is constant, then model (1) becomes the sinusoidal model, one of the most studied models in the statistical signal processing literature; see, for example, the monograph by Kundu and Nandi [10].

The main aim of this paper is the estimation of the parameters θ1^0 and θ2^0 and the study of their theoretical properties under suitable assumptions on α(t) and e(t). We would like to emphasize that we are not proposing any new estimation procedure; rather, we study the theoretical properties of the estimators proposed by Besson et al. [2], who examined their performance using extensive simulation studies. The details of the procedure are provided in Section 2. They gave some heuristic arguments regarding the properties of these estimators and obtained their approximate theoretical variances. In this paper, we show that under a reasonable set of assumptions on the model parameters and on the additive as well as multiplicative error random variables, the estimators are strongly consistent and asymptotically normally distributed. Finally, we propose a multi-component random amplitude chirp model along the same lines as the multi-component chirp model of Djurić and Kay [4], and we extend our results to this case. We perform extensive simulation experiments to see how the estimators behave for different sample sizes and different error variances. The performance of the estimators in the simulation study agrees with the theoretical results obtained in this paper.

In Section 2, we provide the necessary assumptions and establish the consistency and asymptotic normality of the estimators proposed by Besson et al. [2] in the case of the single component model. We propose a multi-component random amplitude chirp model in Section 3 and extend the results of Section 2 to this case. Extensive numerical experiments are presented in Section 4. Finally, we conclude the paper in Section 5.

2. Estimation of Parameters of the Random Amplitude Single Chirp Model

In this section, we discuss the estimation of the unknown parameters, namely the frequency and chirp rate, present in model (1). The following assumptions on the random amplitude, the additive error and the true values of the parameters are required.

Assumption 1. The multiplicative error {α(t)} is a sequence of independent and identically distributed (i.i.d.) real-valued random variables with mean µα and variance σα², where µα ≠ 0 and σα² > 0. The fourth moment of {α(t)} exists.


Assumption 2. The additive error {e(t)} is a sequence of complex-valued i.i.d. random variables with mean zero and variance σ². Writing e(t) = eR(t) + i eI(t), the sequences {eR(t)} and {eI(t)} are i.i.d. (0, σ²/2), have finite fourth moment γ, and are independently distributed.

Assumption 3. {e(t)} is assumed to be independent of {α(t)}.

Assumption 4. (θ1^0, θ2^0) ∈ (0, π) × (0, π), that is, the space of admissible parameter values.

Note that in Assumptions 1 and 2 the existence of the fourth-order moments has been assumed. The existence of the fourth moment of the additive error has been assumed for the constant amplitude chirp model as well; see, for example, Nandi and Kundu [12]. This assumption is made mainly for technical reasons: without it, establishing the consistency and asymptotic normality results becomes difficult. Moreover, the errors are often assumed to be Gaussian, in which case all moments are finite and Assumptions 1 and 2 hold true.

We consider the same estimators as proposed by Besson et al. [2]. Unlike Besson et al. [2], who assumed that {e(t)} is a complex circular Gaussian process, we do not assume any specific distribution for {e(t)}. Our aim is to estimate the parameters θ1 and θ2 given a sample of size n from model (1). Write θ = (θ1, θ2) and let θ^0 be the true value of θ. Define

    Q(θ) = | (1/n) Σ_{t=1}^{n} y²(t) e^{−i2(θ1 t + θ2 t²)} |².  (3)

Note that Q(θ) is the periodogram function of y²(t), with the exponent replaced by twice the usual periodogram exponent. The unknown parameters θ1 and θ2 are estimated by maximizing Q(θ). Let θ̂ = (θ̂1, θ̂2) be the maximizer of Q(θ); then

    θ̂ = arg max_θ Q(θ).  (4)
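To make the procedure concrete, here is an illustrative sketch (ours, not from the paper) of criterion (3) and its maximization by the downhill simplex (Nelder-Mead) method. The sample size, parameter values and the use of NumPy/SciPy are our own choices; the search is started near the true values, mirroring the initialization used in Section 4.

```python
import numpy as np
from scipy.optimize import minimize

def Q(theta, y):
    """Criterion (3): squared modulus of the scaled periodogram-type sum of y^2(t)."""
    th1, th2 = theta
    t = np.arange(1, len(y) + 1)
    return np.abs(np.mean(y**2 * np.exp(-2j * (th1 * t + th2 * t**2)))) ** 2

# Simulate model (1) with illustrative values: mu_alpha = 5, sigma_alpha^2 = 1,
# sigma^2 = 0.1 (so Var(e_R) = Var(e_I) = sigma^2 / 2).
rng = np.random.default_rng(0)
n, th1_0, th2_0 = 200, 1.0, 0.15
t = np.arange(1, n + 1)
alpha = rng.normal(5.0, 1.0, n)
e = rng.normal(0.0, np.sqrt(0.05), n) + 1j * rng.normal(0.0, np.sqrt(0.05), n)
y = alpha * np.exp(1j * (th1_0 * t + th2_0 * t**2)) + e

# Downhill simplex started near the truth; the tiny initial simplex keeps the
# search inside the main lobe of the highly multimodal criterion.
x0 = np.array([th1_0, th2_0])
simplex = np.array([x0, x0 + [1e-3, 0.0], x0 + [0.0, 5e-5]])
res = minimize(lambda th: -Q(th, y), x0, method="Nelder-Mead",
               options={"initial_simplex": simplex, "xatol": 1e-8})
th1_hat, th2_hat = res.x
```

In practice, when the true values are unknown, a coarse grid search over [0, π] × [0, π] would precede the simplex refinement, since Q(θ) has many local maxima.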

If we write y²(t) = z(t), 2θ1 = β1 and 2θ2 = β2, then Q(θ) is nothing but the usual periodogram function for the chirp model. Explicit expressions for the real and imaginary parts of z(t), say zR(t) and zI(t), are given in Appendix A. Observe that the maximization of Q(θ), as given in (4), can be carried out by any two-dimensional optimization method over the bounded region [0, π] × [0, π]; we have used the downhill simplex method for this optimization. Under Assumptions 1-4, we prove the strong consistency of the proposed estimators and derive their asymptotic distribution. To prove consistency, we need the random amplitude α(t) to have non-zero mean and finite variance. The existence of the fourth moment


is required to develop the asymptotic distribution. We state the results in the following theorems. Theorem 2.1 is proved in Appendix A and Theorem 2.2 in Appendix B.

Theorem 2.1. Under Assumptions 1-4, θ̂1 and θ̂2, defined in (4), are strongly consistent estimators of θ1^0 and θ2^0, respectively.

Theorem 2.2. Under Assumptions 1-4, as n → ∞,

    (θ̂ − θ^0) D^{−1} →_d N2( 0, 4(σα² + µα²)² Σ^{−1} Γ Σ^{−1} ),

where, with Cα = 8(σα² + µα²)σ² + (1/2)γ + (1/8)σ⁴,

    D = diag(n^{−3/2}, n^{−5/2}),  Σ = 2(σα² + µα²)² [1, 1; 1, 16/15],  Γ = Cα [1/3, 1/4; 1/4, 1/5]

(matrix rows separated by semicolons).
Here →_d denotes convergence in distribution.

Remark 1. It can be shown explicitly that the asymptotic variances of θ̂1 and θ̂2 are

    Var(θ̂1) = 93 / (n³ (σα² + µα²)²) · [ 8(σα² + µα²)σ² + (1/2)γ + (1/8)σ⁴ ],
    Var(θ̂2) = 135 / (n⁵ (σα² + µα²)²) · [ 8(σα² + µα²)σ² + (1/2)γ + (1/8)σ⁴ ].

If the additive error sequence is Gaussian with mean zero and variance σ², then the fourth moment is γ = 3σ⁴ and the asymptotic variances reduce to

    Var(θ̂1) = 93 / (n³ (σα² + µα²)²) · [ 8(σα² + µα²)σ² + (13/8)σ⁴ ],
    Var(θ̂2) = 135 / (n⁵ (σα² + µα²)²) · [ 8(σα² + µα²)σ² + (13/8)σ⁴ ].

The asymptotic variances of θ̂1 and θ̂2 proposed heuristically by Besson et al. [2] are

    Var(θ̂1) = 96 / (n³ (σα² + µα²)²) · [ (σα² + µα²)σ² + 0.5σ⁴ ],  Var(θ̂2) = 90 / (n⁵ (σα² + µα²)²) · [ (σα² + µα²)σ² + 0.5σ⁴ ],

and they are slightly different from ours. It is not entirely clear how they were obtained.

Remark 2. The asymptotic variances of θ̂1 and θ̂2 depend on the mean µα and variance σα² of the random amplitude, and on the variance σ² and fourth moment γ of the additive error. According to Theorem 2.2, θ̂1 − θ1^0 = Op(n^{−3/2}) and θ̂2 − θ2^0 = Op(n^{−5/2}), where Op(·) denotes boundedness in probability: there exist K1 > 0 and K2 > 0 such that lim_{n→∞} P(n^{3/2} |θ̂1 − θ1^0| > K1) = 0 and lim_{n→∞} P(n^{5/2} |θ̂2 − θ2^0| > K2) = 0. Therefore, for a given sample size, the chirp rate θ2 can be estimated more accurately than the frequency θ1.
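As a numerical illustration (our sketch, taking the constants 93 and 135 and the Gaussian bracket from Remark 1 at face value), the asymptotic variances and the n^{−3} and n^{−5} decay rates of Remark 2 can be evaluated as follows; all parameter values are illustrative.

```python
# Asymptotic variances from Remark 1 for Gaussian additive error (gamma = 3 sigma^4).
def asym_vars(n, mu_a, var_a, sigma2):
    mu2 = var_a + mu_a**2                      # sigma_alpha^2 + mu_alpha^2
    bracket = 8 * mu2 * sigma2 + 13 * sigma2**2 / 8
    v1 = 93 * bracket / (n**3 * mu2**2)        # Var(theta1_hat), frequency
    v2 = 135 * bracket / (n**5 * mu2**2)       # Var(theta2_hat), chirp rate
    return v1, v2

v1_100, v2_100 = asym_vars(100, 5.0, 1.0, 0.5)
v1_200, v2_200 = asym_vars(200, 5.0, 1.0, 0.5)
# Doubling n divides Var(theta1_hat) by 2^3 = 8 and Var(theta2_hat) by 2^5 = 32,
# reflecting the O_p(n^{-3/2}) and O_p(n^{-5/2}) rates of Remark 2.
```

This also makes visible why, for a given sample size, the chirp rate is estimated far more accurately than the frequency.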


Remark 3. The (complex) random amplitude sinusoidal signal model y(t) = α(t) e^{iθ1^0 t} + e(t) is a special case of model (1). In this case the chirp rate θ2^0 = 0, so the effective frequency does not change over time. The unknown frequency can be estimated by maximizing a function similar to Q(θ), defined in (3). The consistency and asymptotic normality of the estimator follow in the same way.

Remark 4. The random amplitude generalized chirp is a complex-valued model of the form y(t) = α(t) e^{i(θ1^0 t + θ2^0 t² + ··· + θq^0 t^q)} + e(t). In this case the chirp rate is not linear, and the change in frequency is governed by the term θ2^0 t² + ··· + θq^0 t^q. The parameters are estimated using a periodogram-like function of the squared signal. Under assumptions on α(t), e(t) and the true parameter values similar to those for model (1), consistency and asymptotic normality can be obtained.

3. Multi-component Model

In this section, we consider a general model in which, instead of a single frequency and chirp rate pair, p such pairs are present. The model can be written as

    y(t) = Σ_{k=1}^{p} αk(t) e^{i(θ1k^0 t + θ2k^0 t²)} + e(t),  t = 1, ..., n.  (5)

The sequence of additive errors {e(t)} is complex-valued and satisfies Assumption 2, as in the single component model. The sequences of multiplicative errors {α1(t)}, ..., {αp(t)} are sequences of i.i.d. errors and satisfy the following assumptions.

Assumption 5. The multiplicative error corresponding to the k-th component, {αk(t)}, is a sequence of i.i.d. real-valued random variables with mean µkα, variance σkα² and finite fourth moment, where µkα ≠ 0 and σkα² > 0, k = 1, ..., p. Additionally, {αj(t)} and {αk(t)} are independent for j ≠ k.

Assumption 6. {e(t)} is assumed to be independent of {α1(t)}, ..., {αp(t)}.

Assumption 7. (θ1j^0, θ2j^0) ∈ (0, π) × (0, π) for j = 1, ..., p, and (θ1j^0, θ2j^0) ≠ (θ1k^0, θ2k^0) for j ≠ k, j, k = 1, ..., p.

The unknown parameters of the multi-component model (5) are estimated by maximizing Q(θ) locally. We write θk = (θ1k, θ2k) and let θk^0 be the true value of θk. We use the notation Nk for a neighborhood of θk^0 such that θj^0 ∉ Nk for j ≠ k. That is, θk^0 and θj^0, j ≠ k, have to be well separated, and Nk has to be chosen in such a way that no other θj^0 belongs


to Nk. For small samples, the choice of Nk also depends on the variance of {e(t)}. The exact dependence is difficult to establish theoretically and is a problem of future interest; in this paper it is established that, for a given Nk as described above, the method works for large sample sizes. Now estimate θk as

    θ̂k = arg max_{(θ1,θ2) ∈ Nk} | (1/n) Σ_{t=1}^{n} y²(t) e^{−i2(θ1 t + θ2 t²)} |² = arg max_{(θ1,θ2) ∈ Nk} Qp(θ),

where y(t) is given in (5). Write y²(t) = z^p(t) = zR^p(t) + i zI^p(t); the superscript p denotes a p-component model. The exact expressions for zR^p(t) and zI^p(t) are given in Appendix C. Then θ̂k is obtained as the argument of the maximum of

    Qp(θ) = [ (1/n) Σ_{t=1}^{n} { zR^p(t) cos(2(θ1 t + θ2 t²)) + zI^p(t) sin(2(θ1 t + θ2 t²)) } ]²
          + [ (1/n) Σ_{t=1}^{n} { −zR^p(t) sin(2(θ1 t + θ2 t²)) + zI^p(t) cos(2(θ1 t + θ2 t²)) } ]²

in Nk. In this case, to compute the estimators of the unknown parameters we need to solve p two-dimensional optimization problems over bounded regions; we have used the downhill simplex method in the simulations. A plot of Qp(θ) for a two component model is presented in Figure 1. The presence of two pairs of frequency and chirp rate is clearly visible as two large peaks. However, this depends on the means and variances of the random amplitudes and on the additive error variance, as well as on how well separated the pairs are. We now have the following results.

0.5

x

1

1.5

2

2.5

3

3.5 0

0.5

1

1.5

2

2.5

3

3.5

y

Figure 1. Plot of Qp (θ) for a two component model.
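The component-wise search over the neighborhoods Nk can be sketched as follows (our illustration, not the authors' code); the parameter values follow the two-component setup used later in Section 4, and each Nk is implicit in the choice of starting simplex near the corresponding true pair.

```python
import numpy as np
from scipy.optimize import minimize

def Qp(theta, y):
    """Same periodogram-type criterion, maximized locally for each component."""
    th1, th2 = theta
    t = np.arange(1, len(y) + 1)
    return np.abs(np.mean(y**2 * np.exp(-2j * (th1 * t + th2 * t**2)))) ** 2

# Simulate model (5) with p = 2 (illustrative values).
rng = np.random.default_rng(1)
n = 500
t = np.arange(1, n + 1)
true_pairs = [(1.0, 0.1), (2.0, 0.2)]        # (theta_1k^0, theta_2k^0)
means = [6.0, 5.0]                           # mu_1alpha, mu_2alpha
y = sum(rng.normal(m, 1.0, n) * np.exp(1j * (a * t + b * t**2))
        for m, (a, b) in zip(means, true_pairs))
y = y + rng.normal(0, 0.5, n) + 1j * rng.normal(0, 0.5, n)

# One two-dimensional downhill-simplex search per component, each started
# inside its neighborhood N_k (here: near the true pair).
estimates = []
for a, b in true_pairs:
    x0 = np.array([a, b])
    simplex = np.array([x0, x0 + [1e-3, 0.0], x0 + [0.0, 1e-5]])
    res = minimize(lambda th: -Qp(th, y), x0, method="Nelder-Mead",
                   options={"initial_simplex": simplex, "xatol": 1e-8})
    estimates.append(res.x)
```

Note that the same criterion function is used for every component; only the search region changes, which is exactly the "p two-dimensional optimizations" described above.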

Theorem 3.1. Under Assumptions 2 and 5-7, θ̂k, which maximizes Qp(θ) in Nk, is a strongly consistent estimator of θk^0.


Theorem 3.2. Under Assumptions 2 and 5-7, as n → ∞,

    (θ̂k − θk^0) D^{−1} →_d N2( 0, 4(σkα² + µkα²)² Σk^{−1} Γk Σk^{−1} ),

where Ckα = 8(σkα² + µkα²)σ² + (1/2)γ + (1/8)σ⁴, D is the same as defined in the previous section, and

    Σk = 2(σkα² + µkα²)² [1, 1; 1, 16/15],  Γk = Ckα [1/3, 1/4; 1/4, 1/5].
Theorem 3.3. Under Assumptions 2 and 5-7, (θ̂k − θk^0) D^{−1} and (θ̂j − θj^0) D^{−1} for k ≠ j are asymptotically independently distributed.

The proof of Theorem 3.1 is given in Appendix C. The proof of Theorem 3.2 goes along the same lines as the proof of Theorem 2.2, once we have the results of Lemma 5 (see Appendix A) for the multi-component model (5), with zR^p(t) and zI^p(t) in place of zR(t) and zI(t), respectively. A lemma analogous to Lemma 5 is stated and proved in Appendix C as Lemma 6; the proof of Theorem 3.2 is therefore omitted to avoid repetition. Theorem 3.3 is proved in Appendix D.

4. Numerical Experiments

In this section, we present the results of numerical experiments based on simulation. We consider two models. The first is a random amplitude single chirp model, given by

    Model 1:  y(t) = α(t) e^{i(θ1^0 t + θ2^0 t²)} + e(t),  t = 1, ..., n,  (6)

with parameter values θ1^0 = 1.0 and θ2^0 = 0.15, where α(t) ~ N(5, σα²), eR(t) ~ N(0, σ²/2), eI(t) ~ N(0, σ²/2), and {eR(t)} and {eI(t)} are independently distributed. The second is a two-component random amplitude chirp model (p = 2 in (5)),

    Model 2:  y(t) = α1(t) e^{i(θ11^0 t + θ21^0 t²)} + α2(t) e^{i(θ12^0 t + θ22^0 t²)} + e(t),  t = 1, ..., n,  (7)

with parameter values θ11^0 = 1.0, θ21^0 = 0.1, θ12^0 = 2.0 and θ22^0 = 0.2. The random amplitudes are α1(t) ~ N(6, σ1α²) and α2(t) ~ N(5, σ2α²); eR(t) and eI(t) are the same as in the single chirp model, and α1(t) and α2(t) are independently distributed. We have taken the chirp rate to be much smaller than the initial frequency, as it represents the rate of change of frequency and is comparatively small in general. We would like to see how the estimators discussed in the paper perform for different values of the sample size n, the additive error variance σ² and the multiplicative error variance σα² (σ1α² and σ2α² for the two-component model). That is, for model (6) we are interested in the estimators of θ1^0 and θ2^0, and for model (7), of θ11^0, θ21^0, θ12^0 and θ22^0. For


simulation, we consider n = 100, 200 and 500; σ² = 0.01, 0.1, 0.5 and 1.0; σα² = 0.5, 1.0 and 2.0; and σ1α², σ2α² = 0.5, 1.0 and 2.0. To generate a sample of size n for each combination of σ² and σα² in the case of model (6), we first generate e(t) and α(t) and then a sample y(1), ..., y(n) using the true values of the parameters. The parameters θ1 and θ2 are estimated by maximizing Q(θ), defined in (3). Similarly, in the case of model (7), we first generate e(t), α1(t) and α2(t) using σ², σ1α² and σ2α², respectively, and then obtain the sample y(1), ..., y(n) based on the true parameter values. The parameters are estimated by maximizing Qp(θ), p = 2, locally, that is, component-wise. The maximization of the criterion function Q(θ) for model (6) and Qp(θ) for model (7) has been carried out using the downhill simplex method. For both models, the true values of the parameters have been used as the initial estimates; three-dimensional plots of Q(θ) and Qp(θ) reveal that the local maximizers are quite close to the true values of the parameters, which justifies this choice. Note that the estimation of the parameters of model (6) involves one two-dimensional optimization, whereas model (7) involves two two-dimensional optimizations; in general, a p-component model requires p two-dimensional optimizations. We replicate the process of data generation and parameter estimation 5000 times and calculate the average estimate and mean squared error (MSE) of each parameter estimate over these 5000 replications. The biases are negligible and are not reported. For the single chirp model (6), the MSEs and asymptotic variances (ASYM) of the frequency parameter θ1, in log10 scale, are presented in Figure 2, and those of the chirp rate parameter θ2 in Figure 3; the three panels in each figure correspond to σα² = 0.5, 1.0 and 2.0. The ASYMs of the proposed estimators, based on Theorem 2.2, and the Cramér-Rao bounds (CRB) given by Besson et al. [2] are also plotted in Figures 2-3 for comparison. The results for the two-component model (7) are presented in Figures 4-7; since CRBs are not available for model (7), the asymptotic variances obtained from Theorem 3.2 are used for comparison in Figures 4-7. The important findings of these numerical experiments are as follows:

• As the sample size increases, the performance of the estimators improves: the biases and MSEs decrease as the sample size increases.
• The MSE increases as the additive error variance σ² increases.

Figure 2. Model 1: MSE, ASYM and CRB of θ1 in log scale for different sample sizes and additive error variances σ² (three panels: σα² = 0.5, 1, 2).

Figure 3. Model 1: MSE, ASYM and CRB of θ2 in log scale for different sample sizes and additive error variances σ² (three panels: σα² = 0.5, 1, 2).

Figure 4. Model 2: MSE and ASYM of θ11 in log scale for different sample sizes and additive error variances σ².

• The MSE decreases as the variance of the random amplitude, σα², increases in the case of model (6), as discussed in Remark 1. The same is observed for model (7) as σ1α² and σ2α² increase.

Figure 5. Model 2: MSE and ASYM of θ21 in log scale for different sample sizes and additive error variances σ².

Figure 6. Model 2: MSE and ASYM of θ12 in log scale for different sample sizes and additive error variances σ².

Figure 7. Model 2: MSE and ASYM of θ22 in log scale for different sample sizes and additive error variances σ².

• The MSEs are very close to the asymptotic variances and the Cramér-Rao bounds in most of the cases considered, and they become closer as the sample size increases.


• The signal-to-noise ratio (SNR) in the single chirp case is log10[(µα² + σα²)/σ²]. The MSE and ASYM are plotted against the SNR for sample size 500 in Figure 8. As the SNR increases, the MSE decreases and stays quite close to the asymptotic variance.
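For concreteness (our computation, using the Model 1 mean µα = 5 and the σα², σ² grids above), the SNR values spanned by the experiment are:

```python
import math

# SNR as used with Figure 8: base-10 log of the signal-to-noise power ratio.
mu_alpha, var_alpha = 5.0, 1.0
snrs = [math.log10((mu_alpha**2 + var_alpha) / s2) for s2 in (0.01, 0.1, 0.5, 1.0)]
# snrs runs from about 3.4 (sigma^2 = 0.01) down to about 1.4 (sigma^2 = 1.0),
# which matches the horizontal range of Figure 8.
```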


Figure 8. MSE and ASYM of θ1 in log10 scale against SNR when n = 500 for Model 1.

5. Concluding Remarks

In this paper, we discuss an estimator, equivalent to the non-linear least squares estimator, of the frequency and chirp rate parameters of a random amplitude chirp signal. It has been shown that the estimators are strongly consistent and asymptotically normally distributed. We have also proposed a multi-component random amplitude model and used the same estimator locally, implemented component by component. Numerical experiments based on simulation are reported to assess the small-sample performance. Many open problems remain: how the estimators behave if the additive errors are covariance stationary; estimation of the mean of the random amplitude and of the variances of the additive error as well as of the random amplitude; and, in the multi-component model, estimation of the number of components, which is assumed known here but needs to be estimated in practice. Further work is required in these directions.

Acknowledgements

The authors would like to thank three unknown reviewers and the associate editor for their constructive comments, which have helped to improve the manuscript significantly. Part of the work of the second author has been funded by grant no. MTR/2018/000179 of the Science and Engineering Research Board, Government of India.


Appendix A

Write z(t) = y²(t) = zR(t) + i zI(t); then

    zR(t) = α²(t) cos(2(θ1^0 t + θ2^0 t²)) + (eR²(t) − eI²(t)) + 2α(t)eR(t) cos(θ1^0 t + θ2^0 t²) − 2α(t)eI(t) sin(θ1^0 t + θ2^0 t²),  (8)

    zI(t) = α²(t) sin(2(θ1^0 t + θ2^0 t²)) + 2α(t)eI(t) cos(θ1^0 t + θ2^0 t²) + 2α(t)eR(t) sin(θ1^0 t + θ2^0 t²) + 2eR(t)eI(t).  (9)

Lemma 1. (Lahiri et al. [11]) If (θ1, θ2) ∈ (0, π) × (0, π), then, except for a countable number of points, the following results are true:

    lim_{n→∞} (1/n) Σ_{t=1}^{n} cos(θ1 t + θ2 t²) = lim_{n→∞} (1/n) Σ_{t=1}^{n} sin(θ1 t + θ2 t²) = 0,

    lim_{n→∞} (1/n^{k+1}) Σ_{t=1}^{n} t^k cos²(θ1 t + θ2 t²) = lim_{n→∞} (1/n^{k+1}) Σ_{t=1}^{n} t^k sin²(θ1 t + θ2 t²) = 1/(2(k+1)),  k = 0, 1, 2,

    lim_{n→∞} (1/n^{k+1}) Σ_{t=1}^{n} t^k cos(θ1 t + θ2 t²) sin(θ1 t + θ2 t²) = 0,  k = 0, 1, 2.
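As a quick sanity check (ours, not part of the paper), the second set of limits in Lemma 1 can be verified numerically at a generic parameter point; the point (1.3, 0.27) and the truncation n = 10^6 are arbitrary choices.

```python
import numpy as np

def avg_cos2(n, k, th1, th2):
    """(1/n^{k+1}) * sum_{t=1}^{n} t^k cos^2(th1*t + th2*t^2)."""
    t = np.arange(1, n + 1, dtype=np.float64)
    return np.sum(t**k * np.cos(th1 * t + th2 * t**2) ** 2) / n ** (k + 1)

th1, th2 = 1.3, 0.27            # a generic point of (0, pi) x (0, pi)
vals = [avg_cos2(1_000_000, k, th1, th2) for k in range(3)]
# vals[k] should be close to 1/(2(k+1)), i.e. 1/2, 1/4, 1/6
```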

Lemma 2. Let θ̂ = (θ̂1, θ̂2) be an estimator of θ^0 = (θ1^0, θ2^0) that maximizes Q(θ), defined in (3), and for any ε > 0 let Sε = {θ : |θ − θ^0| > ε} for some fixed θ^0 ∈ (0, π) × (0, π). If for any ε > 0,

    limsup_{n→∞} sup_{Sε} (1/n) [ Q(θ) − Q(θ^0) ] ≤ 0  a.s.,  (10)

then, as n → ∞, θ̂ → θ^0 a.s., that is, θ̂1 → θ1^0 and θ̂2 → θ2^0 a.s.

Proof of Lemma 2: We write θ̂ as θ̂n and Q(θ) as Qn(θ) to emphasize that these quantities depend on n. Suppose (10) is true but θ̂n does not converge to θ^0 as n → ∞. Then there exist an ε > 0 and a subsequence {nk} of {n} such that |θ̂_{nk} − θ^0| > ε for k = 1, 2, .... Therefore θ̂_{nk} ∈ Sε for all k = 1, 2, .... By definition, θ̂_{nk} is the estimator of θ^0 that maximizes Qn(θ) when n = nk. This implies that

    Q_{nk}(θ̂_{nk}) ≥ Q_{nk}(θ^0)  ⇒  (1/nk) [ Q_{nk}(θ̂_{nk}) − Q_{nk}(θ^0) ] ≥ 0.

Therefore limsup_{n→∞} sup_{θ̂_{nk} ∈ Sε} (1/nk) [ Q_{nk}(θ̂_{nk}) − Q_{nk}(θ^0) ] ≥ 0, which contradicts inequality (10). Hence the result follows.


Lemma 3. (Nandi and Kundu [12]) Let $\{e(t)\}$ be a sequence of i.i.d. real-valued random variables with mean zero and finite variance $\sigma^2>0$; then as $n\to\infty$,
$$\sup_{a,b}\left|\frac{1}{n}\sum_{t=1}^{n}e(t)\cos(at)\cos(bt^2)\right|\xrightarrow{a.s.}0.$$
The result is true for all combinations of sine and cosine functions.

Corollary of Lemma 3:
$$\sup_{a,b}\left|\frac{1}{n^{k+1}}\sum_{t=1}^{n}t^{k}e(t)\cos(at)\cos(bt^2)\right|\xrightarrow{a.s.}0,\qquad
\sup_{a,b}\left|\frac{1}{n^{k+1}}\sum_{t=1}^{n}t^{k}e(t)\cos(at+bt^2)\right|\xrightarrow{a.s.}0.$$
The results are true for all combinations of sine and cosine functions.

Lemma 4. (Grover et al. [9]) If $(\beta_1,\beta_2)\in(0,\pi)\times(0,\pi)$, then except for a countable number of points, the following results hold:
$$\lim_{n\to\infty}\frac{1}{n^{\frac{2m+1}{2}}}\sum_{t=1}^{n}t^{m}\cos(\beta_1t+\beta_2t^2)=\lim_{n\to\infty}\frac{1}{n^{\frac{2m+1}{2}}}\sum_{t=1}^{n}t^{m}\sin(\beta_1t+\beta_2t^2)=0,\qquad m=0,1,2.$$

Lemma 5. Under Assumptions 1-3, the following results are true for model (1): for $m=0,1,\ldots,4$,
$$\frac{1}{n^{m+1}}\sum_{t=1}^{n}t^{m}z_R(t)\cos(2\theta_1^0t+2\theta_2^0t^2)\xrightarrow{a.s.}\frac{1}{2(m+1)}(\sigma_\alpha^2+\mu_\alpha^2),\tag{11}$$
$$\frac{1}{n^{m+1}}\sum_{t=1}^{n}t^{m}z_R(t)\sin(2\theta_1^0t+2\theta_2^0t^2)\xrightarrow{a.s.}0,\tag{12}$$
$$\frac{1}{n^{m+1}}\sum_{t=1}^{n}t^{m}z_I(t)\cos(2\theta_1^0t+2\theta_2^0t^2)\xrightarrow{a.s.}0,\tag{13}$$
$$\frac{1}{n^{m+1}}\sum_{t=1}^{n}t^{m}z_I(t)\sin(2\theta_1^0t+2\theta_2^0t^2)\xrightarrow{a.s.}\frac{1}{2(m+1)}(\sigma_\alpha^2+\mu_\alpha^2).\tag{14}$$
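A pointwise numerical check of the Corollary of Lemma 3 is instructive (the lemma is uniform over $(a,b)$; the sketch below only tries the fixed, arbitrarily chosen values $a=0.8$, $b=1.7$, with standard Gaussian noise and $n=20000$, all our own illustrative choices):

```python
import math
import random

def noise_avg(e, a, b, k):
    """(1/n^{k+1}) * sum_{t=1}^{n} t^k e(t) cos(a t) cos(b t^2) for a noise sequence e."""
    n = len(e)
    s = sum(t ** k * et * math.cos(a * t) * math.cos(b * t * t)
            for t, et in enumerate(e, start=1))
    return s / n ** (k + 1)

rng = random.Random(42)
e = [rng.gauss(0.0, 1.0) for _ in range(20000)]   # i.i.d. noise, mean 0, variance 1
# each normalized sum should be close to 0 (its standard deviation is O(n^{-1/2}))
print([round(noise_avg(e, 0.8, 1.7, k), 4) for k in (0, 1, 2)])
```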

Proof of Lemma 5: We first note that $E[e_R(t)e_I(t)]=0$ and $\mathrm{Var}[e_R(t)e_I(t)]=\frac{\sigma^4}{4}$, and so $\{e_R(t)e_I(t)\}\overset{i.i.d.}{\sim}\bigl(0,\frac{\sigma^4}{4}\bigr)$. Similarly, under Assumptions 1-3, we can show that
$$\{e_R^2(t)-e_I^2(t)\}\overset{i.i.d.}{\sim}\Bigl(0,\ 2\gamma-\frac{\sigma^4}{2}\Bigr),\qquad
\{\alpha(t)e_R(t)\}\overset{i.i.d.}{\sim}\Bigl(0,\ (\sigma_\alpha^2+\mu_\alpha^2)\frac{\sigma^2}{2}\Bigr),\qquad
\{\alpha(t)e_I(t)\}\overset{i.i.d.}{\sim}\Bigl(0,\ (\sigma_\alpha^2+\mu_\alpha^2)\frac{\sigma^2}{2}\Bigr).\tag{15}$$
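The moment identities in (15) can be checked by Monte Carlo. The sketch below assumes, purely for illustration, Gaussian noise with $\mathrm{Var}(e_R)=\mathrm{Var}(e_I)=\sigma^2/2$ and a Gaussian amplitude $\alpha(t)$ (the paper only requires finite moments); the numeric parameter values are our own:

```python
import random
import statistics

rng = random.Random(7)
sigma2 = 2.0            # total noise variance sigma^2, so Var(e_R) = Var(e_I) = sigma2/2
mu_a, var_a = 1.5, 0.5  # mean and variance of the random amplitude alpha(t)
N = 200000

eR = [rng.gauss(0.0, (sigma2 / 2) ** 0.5) for _ in range(N)]
eI = [rng.gauss(0.0, (sigma2 / 2) ** 0.5) for _ in range(N)]
al = [rng.gauss(mu_a, var_a ** 0.5) for _ in range(N)]

gamma = 3 * (sigma2 / 2) ** 2   # gamma = E[e_R^4] for Gaussian noise
v1 = statistics.pvariance([r * i for r, i in zip(eR, eI)])          # theory: sigma^4/4 = 1.0
v2 = statistics.pvariance([r * r - i * i for r, i in zip(eR, eI)])  # theory: 2*gamma - sigma^4/2 = 4.0
v3 = statistics.pvariance([a * r for a, r in zip(al, eR)])          # theory: (var_a + mu_a^2)*sigma2/2 = 2.75
print(round(v1, 2), round(v2, 2), round(v3, 2))
```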

SWAGATA NANDI1 AND DEBASIS KUNDU2

Consider
$$\frac{1}{n^{m+1}}\sum_{t=1}^{n}t^{m}z_R(t)\cos(2\theta_1^0t+2\theta_2^0t^2)
=\frac{1}{n^{m+1}}\sum_{t=1}^{n}t^{m}\alpha^2(t)\cos^2(2\theta_1^0t+2\theta_2^0t^2)
+\frac{1}{n^{m+1}}\sum_{t=1}^{n}t^{m}\bigl(e_R^2(t)-e_I^2(t)\bigr)\cos(2\theta_1^0t+2\theta_2^0t^2)$$
$$+\frac{2}{n^{m+1}}\sum_{t=1}^{n}t^{m}\alpha(t)e_R(t)\cos(\theta_1^0t+\theta_2^0t^2)\cos(2\theta_1^0t+2\theta_2^0t^2)
-\frac{2}{n^{m+1}}\sum_{t=1}^{n}t^{m}\alpha(t)e_I(t)\sin(\theta_1^0t+\theta_2^0t^2)\cos(2\theta_1^0t+2\theta_2^0t^2).$$

Write $\epsilon(t)=e_R^2(t)-e_I^2(t)$; then $\{\epsilon(t)\}$ is a sequence of i.i.d. random variables with mean zero and finite variance. Therefore, the second term converges to zero as $n\to\infty$ using the Corollary of Lemma 3. Similarly, the third and fourth terms also converge to zero as $n\to\infty$ using (15). Now the first term can be written as
$$\frac{1}{n^{m+1}}\sum_{t=1}^{n}t^{m}\alpha^2(t)\cos^2(2\theta_1^0t+2\theta_2^0t^2)
=\frac{1}{n^{m+1}}\left[\sum_{t=1}^{n}t^{m}\{\alpha^2(t)-E[\alpha^2(t)]\}\cos^2(2\theta_1^0t+2\theta_2^0t^2)
+\sum_{t=1}^{n}t^{m}E[\alpha^2(t)]\cos^2(2\theta_1^0t+2\theta_2^0t^2)\right]$$
$$\xrightarrow{a.s.}0+\frac{1}{2(m+1)}E[\alpha^2(t)]=\frac{1}{2(m+1)}(\sigma_\alpha^2+\mu_\alpha^2),$$
using Lemmas 1 and 3. Note that here we have used the fact that the fourth moment of $\alpha(t)$ exists. In a similar way, (12), (13) and (14) can be proved.
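The limits (11)-(14) with $m=0$ can be seen directly in a simulation of model (1). The sketch below uses Gaussian amplitude and noise and arbitrary illustrative values $\theta_1^0=1.1$, $\theta_2^0=0.4$, $\mu_\alpha=1.5$, $\sigma_\alpha=0.7$, all our own choices:

```python
import cmath
import math
import random

def simulate_z(th1, th2, n, rng, mu_a=1.5, sd_a=0.7, sd_e=0.6):
    """z(t) = y(t)^2 for model (1): y(t) = alpha(t) e^{i(th1 t + th2 t^2)} + e(t)."""
    z = []
    for t in range(1, n + 1):
        y = rng.gauss(mu_a, sd_a) * cmath.exp(1j * (th1 * t + th2 * t * t)) \
            + complex(rng.gauss(0.0, sd_e), rng.gauss(0.0, sd_e))
        z.append(y * y)
    return z

def avg(z, th1, th2, part, trig):
    """(1/n) sum of Re/Im part of z(t) times trig(2 th1 t + 2 th2 t^2)."""
    n = len(z)
    return sum((zt.real if part == "R" else zt.imag) * trig(2 * th1 * t + 2 * th2 * t * t)
               for t, zt in enumerate(z, start=1)) / n

th1, th2 = 1.1, 0.4
z = simulate_z(th1, th2, 50000, random.Random(3))
half = 0.5 * (0.7 ** 2 + 1.5 ** 2)   # (sigma_a^2 + mu_a^2)/2 = 1.37
print(round(avg(z, th1, th2, "R", math.cos), 2),   # close to 1.37, as in (11)
      round(avg(z, th1, th2, "R", math.sin), 2),   # close to 0,    as in (12)
      round(avg(z, th1, th2, "I", math.cos), 2),   # close to 0,    as in (13)
      round(avg(z, th1, th2, "I", math.sin), 2))   # close to 1.37, as in (14)
```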


Proof of Theorem 2.1: Expanding $Q(\boldsymbol{\theta})$ and using $y^2(t)=z(t)=z_R(t)+iz_I(t)$, we obtain
$$\frac{1}{n}\bigl[Q(\boldsymbol{\theta})-Q(\boldsymbol{\theta}^0)\bigr]
=\left[\frac{1}{n}\sum_{t=1}^{n}\bigl\{z_R(t)\cos(2\theta_1t+2\theta_2t^2)+z_I(t)\sin(2\theta_1t+2\theta_2t^2)\bigr\}\right]^2
+\left[\frac{1}{n}\sum_{t=1}^{n}\bigl\{-z_R(t)\sin(2\theta_1t+2\theta_2t^2)+z_I(t)\cos(2\theta_1t+2\theta_2t^2)\bigr\}\right]^2$$
$$-\left[\frac{1}{n}\sum_{t=1}^{n}\bigl\{z_R(t)\cos(2\theta_1^0t+2\theta_2^0t^2)+z_I(t)\sin(2\theta_1^0t+2\theta_2^0t^2)\bigr\}\right]^2
-\left[\frac{1}{n}\sum_{t=1}^{n}\bigl\{-z_R(t)\sin(2\theta_1^0t+2\theta_2^0t^2)+z_I(t)\cos(2\theta_1^0t+2\theta_2^0t^2)\bigr\}\right]^2$$
$$=S_1+S_2+S_3+S_4.$$

Using Lemma 5 with $m=0$, we have
$$\frac{1}{n}\sum_{t=1}^{n}z_R(t)\cos(2\theta_1^0t+2\theta_2^0t^2)\xrightarrow{a.s.}\frac{1}{2}(\sigma_\alpha^2+\mu_\alpha^2),\qquad
\frac{1}{n}\sum_{t=1}^{n}z_I(t)\cos(2\theta_1^0t+2\theta_2^0t^2)\xrightarrow{a.s.}0,$$
$$\frac{1}{n}\sum_{t=1}^{n}z_R(t)\sin(2\theta_1^0t+2\theta_2^0t^2)\xrightarrow{a.s.}0,\qquad
\frac{1}{n}\sum_{t=1}^{n}z_I(t)\sin(2\theta_1^0t+2\theta_2^0t^2)\xrightarrow{a.s.}\frac{1}{2}(\sigma_\alpha^2+\mu_\alpha^2).$$
Then
$$\lim_{n\to\infty}S_3=-(\sigma_\alpha^2+\mu_\alpha^2)^2\quad\text{and}\quad\lim_{n\to\infty}S_4=0.$$


Now
$$\overline{\lim_{n\to\infty}}\sup_{S_\epsilon}S_1
=\overline{\lim_{n\to\infty}}\sup_{S_\epsilon}\left[\frac{1}{n}\sum_{t=1}^{n}\bigl\{z_R(t)\cos(2\theta_1t+2\theta_2t^2)+z_I(t)\sin(2\theta_1t+2\theta_2t^2)\bigr\}\right]^2$$
$$=\overline{\lim_{n\to\infty}}\sup_{S_\epsilon}\left[\frac{1}{n}\sum_{t=1}^{n}\Bigl\{\alpha^2(t)\cos[2(\theta_1^0-\theta_1)t+2(\theta_2^0-\theta_2)t^2]
+2e_R(t)e_I(t)\sin(2\theta_1t+2\theta_2t^2)
+\bigl(e_R^2(t)-e_I^2(t)\bigr)\cos(2\theta_1t+2\theta_2t^2)$$
$$+2\alpha(t)e_R(t)\cos[(2\theta_1^0-\theta_1)t+(2\theta_2^0-\theta_2)t^2]
+2\alpha(t)e_I(t)\sin[(2\theta_1^0-\theta_1)t+(2\theta_2^0-\theta_2)t^2]\Bigr\}\right]^2$$
$$=\overline{\lim_{n\to\infty}}\sup_{|\boldsymbol{\theta}-\boldsymbol{\theta}^0|>\epsilon}\left[\frac{1}{n}\sum_{t=1}^{n}\Bigl\{\bigl(\alpha^2(t)-(\sigma_\alpha^2+\mu_\alpha^2)\bigr)\cos[2(\theta_1^0-\theta_1)t+2(\theta_2^0-\theta_2)t^2]
+2\alpha(t)e_R(t)\cos[(2\theta_1^0-\theta_1)t+(2\theta_2^0-\theta_2)t^2]$$
$$+2\alpha(t)e_I(t)\sin[(2\theta_1^0-\theta_1)t+(2\theta_2^0-\theta_2)t^2]
+(\sigma_\alpha^2+\mu_\alpha^2)\cos[2(\theta_1^0-\theta_1)t+2(\theta_2^0-\theta_2)t^2]\Bigr\}\right]^2$$
[the terms $2e_R(t)e_I(t)\sin(2\theta_1t+2\theta_2t^2)$ and $(e_R^2(t)-e_I^2(t))\cos(2\theta_1t+2\theta_2t^2)$ are independent of $\boldsymbol{\theta}^0$ and vanish using Lemma 3]
$$\xrightarrow{a.s.}0,$$
using Lemma 1 and Lemma 3. Similarly, we can show that $\overline{\lim}_{n\to\infty}\sup_{S_\epsilon}S_2\xrightarrow{a.s.}0$. Therefore,
$$\overline{\lim_{n\to\infty}}\sup_{S_\epsilon}\frac{1}{n}\bigl[Q(\boldsymbol{\theta})-Q(\boldsymbol{\theta}^0)\bigr]
=\overline{\lim_{n\to\infty}}\sup_{S_\epsilon}\bigl[S_1+S_2+S_3+S_4\bigr]\to-(\sigma_\alpha^2+\mu_\alpha^2)^2<0\quad a.s.,$$
and using Lemma 2, $\hat\theta_1$ and $\hat\theta_2$, which maximize $Q(\boldsymbol{\theta})$, are strongly consistent estimators of $\theta_1^0$ and $\theta_2^0$, respectively.
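The mechanism of this proof is visible numerically: the normalized criterion $Q(\boldsymbol{\theta})/n$ concentrates near $(\sigma_\alpha^2+\mu_\alpha^2)^2$ at the true parameters and collapses away from them, which is exactly condition (10). A sketch under the same illustrative Gaussian simulation as before (parameter values and perturbation offsets are our own choices):

```python
import cmath
import math
import random

def criterion(z, th1, th2):
    """Q(theta)/n: the two squared averages appearing in S1 and S2."""
    n = len(z)
    c = s = 0.0
    for t, zt in enumerate(z, start=1):
        ph = 2 * th1 * t + 2 * th2 * t * t
        c += zt.real * math.cos(ph) + zt.imag * math.sin(ph)
        s += -zt.real * math.sin(ph) + zt.imag * math.cos(ph)
    return (c / n) ** 2 + (s / n) ** 2

rng = random.Random(11)
th1, th2, n = 1.1, 0.4, 5000
z = []
for t in range(1, n + 1):
    y = rng.gauss(1.5, 0.7) * cmath.exp(1j * (th1 * t + th2 * t * t)) \
        + complex(rng.gauss(0.0, 0.6), rng.gauss(0.0, 0.6))
    z.append(y * y)

q0 = criterion(z, th1, th2)                      # near (sigma_a^2 + mu_a^2)^2, about 7.5
q_far = max(criterion(z, th1 + d1, th2 + d2)     # criterion collapses off the truth
            for d1, d2 in [(0.5, 0.0), (-0.4, 0.0), (0.0, 0.2), (0.3, -0.1)])
print(round(q0, 2), round(q_far, 3))
```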

Appendix B

In this Appendix, we prove Theorem 2.2, which states the asymptotic distribution of the estimators of the unknown parameters of model (1). The first order derivatives of $Q(\boldsymbol{\theta})$ with respect to $\theta_k$, $k=1,2$, are as follows:
$$\frac{\partial Q(\boldsymbol{\theta})}{\partial\theta_k}=\frac{2}{n}f_1(\boldsymbol{\theta})g_1(k;\boldsymbol{\theta})+\frac{2}{n}f_2(\boldsymbol{\theta})g_2(k;\boldsymbol{\theta}),\tag{16}$$


where
$$f_1(\boldsymbol{\theta})=\sum_{t=1}^{n}z_R(t)\cos(2\theta_1t+2\theta_2t^2)+\sum_{t=1}^{n}z_I(t)\sin(2\theta_1t+2\theta_2t^2),$$
$$g_1(k;\boldsymbol{\theta})=\sum_{t=1}^{n}z_I(t)2t^{k}\cos(2\theta_1t+2\theta_2t^2)-\sum_{t=1}^{n}z_R(t)2t^{k}\sin(2\theta_1t+2\theta_2t^2),$$
$$f_2(\boldsymbol{\theta})=\sum_{t=1}^{n}z_I(t)\cos(2\theta_1t+2\theta_2t^2)-\sum_{t=1}^{n}z_R(t)\sin(2\theta_1t+2\theta_2t^2),$$
$$g_2(k;\boldsymbol{\theta})=-\sum_{t=1}^{n}z_I(t)2t^{k}\sin(2\theta_1t+2\theta_2t^2)-\sum_{t=1}^{n}z_R(t)2t^{k}\cos(2\theta_1t+2\theta_2t^2).$$

Observe that using Lemma 5, it immediately follows that
$$(a)\ \lim_{n\to\infty}\frac{1}{n}f_1(\boldsymbol{\theta}^0)=(\sigma_\alpha^2+\mu_\alpha^2)\quad\text{and}\quad(b)\ \lim_{n\to\infty}\frac{1}{n}f_2(\boldsymbol{\theta}^0)=0\quad a.s.\tag{17}$$
Therefore, for large $n$, $\frac{\partial Q(\boldsymbol{\theta})}{\partial\theta_k}\approx\frac{2}{n}f_1(\boldsymbol{\theta})g_1(k;\boldsymbol{\theta})$, ignoring the second term in (16), which involves $f_2(\boldsymbol{\theta})$. The second order derivatives with respect to $\theta_k$ for $k=1,2$ are

$$\frac{\partial^2Q(\boldsymbol{\theta})}{\partial\theta_k^2}
=\frac{2}{n}\left\{-\sum_{t=1}^{n}z_R(t)2t^{k}\sin(2\theta_1t+2\theta_2t^2)+\sum_{t=1}^{n}z_I(t)2t^{k}\cos(2\theta_1t+2\theta_2t^2)\right\}^{2}$$
$$+\frac{2}{n}\left\{\sum_{t=1}^{n}z_R(t)\cos(2\theta_1t+2\theta_2t^2)+\sum_{t=1}^{n}z_I(t)\sin(2\theta_1t+2\theta_2t^2)\right\}\times\left\{-\sum_{t=1}^{n}z_R(t)4t^{2k}\cos(2\theta_1t+2\theta_2t^2)-\sum_{t=1}^{n}z_I(t)4t^{2k}\sin(2\theta_1t+2\theta_2t^2)\right\}$$
$$+\frac{2}{n}\left\{\sum_{t=1}^{n}z_R(t)2t^{k}\cos(2\theta_1t+2\theta_2t^2)+\sum_{t=1}^{n}z_I(t)2t^{k}\sin(2\theta_1t+2\theta_2t^2)\right\}^{2}$$
$$+\frac{2}{n}\left\{-\sum_{t=1}^{n}z_R(t)\sin(2\theta_1t+2\theta_2t^2)+\sum_{t=1}^{n}z_I(t)\cos(2\theta_1t+2\theta_2t^2)\right\}\times\left\{\sum_{t=1}^{n}z_R(t)4t^{2k}\sin(2\theta_1t+2\theta_2t^2)-\sum_{t=1}^{n}z_I(t)4t^{2k}\cos(2\theta_1t+2\theta_2t^2)\right\},$$


and
$$\frac{\partial^2Q(\boldsymbol{\theta})}{\partial\theta_1\partial\theta_2}
=\frac{2}{n}\left\{\sum_{t=1}^{n}z_R(t)\cos(2\theta_1t+2\theta_2t^2)+\sum_{t=1}^{n}z_I(t)\sin(2\theta_1t+2\theta_2t^2)\right\}\times\left\{-\sum_{t=1}^{n}z_R(t)4t^{3}\cos(2\theta_1t+2\theta_2t^2)-\sum_{t=1}^{n}z_I(t)4t^{3}\sin(2\theta_1t+2\theta_2t^2)\right\}$$
$$+\frac{2}{n}\left\{-\sum_{t=1}^{n}z_R(t)2t^{2}\sin(2\theta_1t+2\theta_2t^2)+\sum_{t=1}^{n}z_I(t)2t^{2}\cos(2\theta_1t+2\theta_2t^2)\right\}\times\left\{-\sum_{t=1}^{n}z_R(t)2t\sin(2\theta_1t+2\theta_2t^2)+\sum_{t=1}^{n}z_I(t)2t\cos(2\theta_1t+2\theta_2t^2)\right\}$$
$$+\frac{2}{n}\left\{-\sum_{t=1}^{n}z_R(t)\sin(2\theta_1t+2\theta_2t^2)+\sum_{t=1}^{n}z_I(t)\cos(2\theta_1t+2\theta_2t^2)\right\}\times\left\{\sum_{t=1}^{n}z_R(t)4t^{3}\sin(2\theta_1t+2\theta_2t^2)-\sum_{t=1}^{n}z_I(t)4t^{3}\cos(2\theta_1t+2\theta_2t^2)\right\}$$
$$+\frac{2}{n}\left\{\sum_{t=1}^{n}z_R(t)2t^{2}\cos(2\theta_1t+2\theta_2t^2)+\sum_{t=1}^{n}z_I(t)2t^{2}\sin(2\theta_1t+2\theta_2t^2)\right\}\times\left\{\sum_{t=1}^{n}z_R(t)2t\cos(2\theta_1t+2\theta_2t^2)+\sum_{t=1}^{n}z_I(t)2t\sin(2\theta_1t+2\theta_2t^2)\right\}.$$

Now, using Lemma 5 for $m=0,\ldots,4$, we can show the following:
$$\lim_{n\to\infty}\frac{1}{n^3}\frac{\partial^2Q(\boldsymbol{\theta})}{\partial\theta_1^2}\Big|_{\boldsymbol{\theta}^0}=-\frac{2}{3}(\sigma_\alpha^2+\mu_\alpha^2)^2,\tag{18}$$
$$\lim_{n\to\infty}\frac{1}{n^5}\frac{\partial^2Q(\boldsymbol{\theta})}{\partial\theta_2^2}\Big|_{\boldsymbol{\theta}^0}=-\frac{32}{45}(\sigma_\alpha^2+\mu_\alpha^2)^2,\tag{19}$$
$$\lim_{n\to\infty}\frac{1}{n^4}\frac{\partial^2Q(\boldsymbol{\theta})}{\partial\theta_1\partial\theta_2}\Big|_{\boldsymbol{\theta}^0}=-\frac{2}{3}(\sigma_\alpha^2+\mu_\alpha^2)^2.\tag{20}$$
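These limits can be checked by finite differences of the normalized criterion $q(\boldsymbol{\theta})=Q(\boldsymbol{\theta})/n$ on simulated data, since $(1/n^3)\partial^2Q/\partial\theta_1^2=(1/n^2)\partial^2q/\partial\theta_1^2$ and similarly for the others. The sketch below is illustrative only (Gaussian simulation, arbitrary parameter values, step sizes chosen small relative to the $O(1/n)$ and $O(1/n^2)$ peak widths):

```python
import cmath
import math
import random

def q(z, th1, th2):
    """Q(theta)/n for simulated data z, with z(t) = y(t)^2."""
    n = len(z)
    c = s = 0.0
    for t, zt in enumerate(z, start=1):
        ph = 2 * th1 * t + 2 * th2 * t * t
        c += zt.real * math.cos(ph) + zt.imag * math.sin(ph)
        s += -zt.real * math.sin(ph) + zt.imag * math.cos(ph)
    return (c / n) ** 2 + (s / n) ** 2

rng = random.Random(5)
th1, th2, n = 1.1, 0.4, 10000
msq = 1.5 ** 2 + 0.7 ** 2                 # sigma_a^2 + mu_a^2
z = []
for t in range(1, n + 1):
    y = rng.gauss(1.5, 0.7) * cmath.exp(1j * (th1 * t + th2 * t * t)) \
        + complex(rng.gauss(0.0, 0.6), rng.gauss(0.0, 0.6))
    z.append(y * y)

h1, h2 = 0.05 / n, 0.05 / n ** 2          # steps well inside the peak widths
q0 = q(z, th1, th2)
d11 = (q(z, th1 + h1, th2) - 2 * q0 + q(z, th1 - h1, th2)) / (h1 * h1 * n ** 2)
d22 = (q(z, th1, th2 + h2) - 2 * q0 + q(z, th1, th2 - h2)) / (h2 * h2 * n ** 4)
d12 = (q(z, th1 + h1, th2 + h2) - q(z, th1 + h1, th2 - h2)
       - q(z, th1 - h1, th2 + h2) + q(z, th1 - h1, th2 - h2)) / (4 * h1 * h2 * n ** 3)
# theory: -(2/3)*msq^2, -(32/45)*msq^2, -(2/3)*msq^2 (about -5.0, -5.3, -5.0)
print(round(d11, 2), round(d22, 2), round(d12, 2))
```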

Write $Q'(\boldsymbol{\theta})=\Bigl(\frac{\partial Q(\boldsymbol{\theta})}{\partial\theta_1},\frac{\partial Q(\boldsymbol{\theta})}{\partial\theta_2}\Bigr)$ and
$$Q''(\boldsymbol{\theta})=\begin{pmatrix}\frac{\partial^2Q(\boldsymbol{\theta})}{\partial\theta_1^2}&\frac{\partial^2Q(\boldsymbol{\theta})}{\partial\theta_1\partial\theta_2}\\ \frac{\partial^2Q(\boldsymbol{\theta})}{\partial\theta_1\partial\theta_2}&\frac{\partial^2Q(\boldsymbol{\theta})}{\partial\theta_2^2}\end{pmatrix}.$$
Define a diagonal matrix $D=\mathrm{diag}\bigl\{n^{-3/2},\,n^{-5/2}\bigr\}$. Expanding $Q'(\hat{\boldsymbol{\theta}})$ in a bivariate Taylor series around $\boldsymbol{\theta}^0$,
$$Q'(\hat{\boldsymbol{\theta}})-Q'(\boldsymbol{\theta}^0)=(\hat{\boldsymbol{\theta}}-\boldsymbol{\theta}^0)Q''(\bar{\boldsymbol{\theta}}),$$


where $\bar{\boldsymbol{\theta}}$ is a point on the line joining $\hat{\boldsymbol{\theta}}$ and $\boldsymbol{\theta}^0$. As $\hat{\boldsymbol{\theta}}$ maximizes $Q(\boldsymbol{\theta})$, $Q'(\hat{\boldsymbol{\theta}})=0$, and the above equation can be written as
$$-[Q'(\boldsymbol{\theta}^0)D]=(\hat{\boldsymbol{\theta}}-\boldsymbol{\theta}^0)D^{-1}[DQ''(\bar{\boldsymbol{\theta}})D]
\ \Rightarrow\ (\hat{\boldsymbol{\theta}}-\boldsymbol{\theta}^0)D^{-1}=-[Q'(\boldsymbol{\theta}^0)D][DQ''(\bar{\boldsymbol{\theta}})D]^{-1},$$
provided $[DQ''(\bar{\boldsymbol{\theta}})D]$ is an invertible matrix a.s. Because $\hat{\boldsymbol{\theta}}\to\boldsymbol{\theta}^0$ a.s. and $Q''(\boldsymbol{\theta})$ is a continuous function of $\boldsymbol{\theta}$, using the continuous mapping theorem, we have
$$\lim_{n\to\infty}[DQ''(\bar{\boldsymbol{\theta}})D]=\lim_{n\to\infty}[DQ''(\boldsymbol{\theta}^0)D]=-\Sigma,$$
where $\Sigma$ can be obtained using (18)-(20) as
$$\Sigma=\frac{2(\sigma_\alpha^2+\mu_\alpha^2)^2}{3}\begin{pmatrix}1&1\\ 1&\frac{16}{15}\end{pmatrix}.$$
Using (17), the elements of $Q'(\boldsymbol{\theta}^0)D$ are, for large $n$,
$$\frac{1}{n^{3/2}}\frac{\partial Q(\boldsymbol{\theta}^0)}{\partial\theta_1}=\frac{2}{n}f_1(\boldsymbol{\theta}^0)\frac{1}{n^{3/2}}g_1(1;\boldsymbol{\theta}^0)\quad\text{and}\quad
\frac{1}{n^{5/2}}\frac{\partial Q(\boldsymbol{\theta}^0)}{\partial\theta_2}=\frac{2}{n}f_1(\boldsymbol{\theta}^0)\frac{1}{n^{5/2}}g_1(2;\boldsymbol{\theta}^0).$$
Therefore, to find the asymptotic distribution of $Q'(\boldsymbol{\theta}^0)D$, we need to study the large sample distribution of $\frac{1}{n^{3/2}}g_1(1;\boldsymbol{\theta}^0)$ and $\frac{1}{n^{5/2}}g_1(2;\boldsymbol{\theta}^0)$. Replacing $z_R(t)$ and $z_I(t)$ in $g_1(k;\boldsymbol{\theta}^0)$, $k=1,2$, we have

$$\frac{1}{n^{\frac{2k+1}{2}}}g_1(k;\boldsymbol{\theta}^0)
=\frac{2}{n^{\frac{2k+1}{2}}}\left[\sum_{t=1}^{n}t^{k}z_I(t)\cos(2\theta_1^0t+2\theta_2^0t^2)-\sum_{t=1}^{n}t^{k}z_R(t)\sin(2\theta_1^0t+2\theta_2^0t^2)\right]$$
$$=\frac{4}{n^{\frac{2k+1}{2}}}\sum_{t=1}^{n}t^{k}e_R(t)e_I(t)\cos(2\theta_1^0t+2\theta_2^0t^2)
-\frac{2}{n^{\frac{2k+1}{2}}}\sum_{t=1}^{n}t^{k}\bigl(e_R^2(t)-e_I^2(t)\bigr)\sin(2\theta_1^0t+2\theta_2^0t^2)$$
$$-\frac{4}{n^{\frac{2k+1}{2}}}\sum_{t=1}^{n}t^{k}\alpha(t)e_R(t)\sin(\theta_1^0t+\theta_2^0t^2)
+\frac{4}{n^{\frac{2k+1}{2}}}\sum_{t=1}^{n}t^{k}\alpha(t)e_I(t)\cos(\theta_1^0t+\theta_2^0t^2).$$

The random variables $e_R(t)e_I(t)$, $\alpha(t)e_R(t)$, $\alpha(t)e_I(t)$ and $e_R^2(t)-e_I^2(t)$ are all mean zero, finite variance i.i.d. random variables (Lemma 5). Therefore, $E\bigl[\frac{1}{n^{3/2}}g_1(1;\boldsymbol{\theta}^0)\bigr]=0$ and $E\bigl[\frac{1}{n^{5/2}}g_1(2;\boldsymbol{\theta}^0)\bigr]=0$ for large $n$, and all the terms above satisfy the Lindeberg-Feller condition. So, $\frac{1}{n^{3/2}}g_1(1;\boldsymbol{\theta}^0)$ and $\frac{1}{n^{5/2}}g_1(2;\boldsymbol{\theta}^0)$ converge to normal distributions with zero means and finite variances. In order to find the large sample covariance matrix of $Q'(\boldsymbol{\theta}^0)D$, we first find the variances of $\frac{1}{n^{3/2}}g_1(1;\boldsymbol{\theta}^0)$ and $\frac{1}{n^{5/2}}g_1(2;\boldsymbol{\theta}^0)$ and their covariance.
$$\mathrm{Var}\Bigl[\frac{1}{n^{3/2}}g_1(1;\boldsymbol{\theta}^0)\Bigr]
=\mathrm{Var}\Bigl[\frac{4}{n^{3/2}}\sum_{t=1}^{n}te_R(t)e_I(t)\cos(2\theta_1^0t+2\theta_2^0t^2)
+\frac{4}{n^{3/2}}\sum_{t=1}^{n}t\alpha(t)e_I(t)\cos(\theta_1^0t+\theta_2^0t^2)$$
$$-\frac{4}{n^{3/2}}\sum_{t=1}^{n}t\alpha(t)e_R(t)\sin(\theta_1^0t+\theta_2^0t^2)
-\frac{2}{n^{3/2}}\sum_{t=1}^{n}t\bigl(e_R^2(t)-e_I^2(t)\bigr)\sin(2\theta_1^0t+2\theta_2^0t^2)\Bigr]$$
$$=E\Bigl[\frac{16}{n^3}\sum_{t=1}^{n}t^2e_R^2(t)e_I^2(t)\cos^2(2\theta_1^0t+2\theta_2^0t^2)
+\frac{16}{n^3}\sum_{t=1}^{n}t^2\alpha^2(t)e_I^2(t)\cos^2(\theta_1^0t+\theta_2^0t^2)$$
$$+\frac{16}{n^3}\sum_{t=1}^{n}t^2\alpha^2(t)e_R^2(t)\sin^2(\theta_1^0t+\theta_2^0t^2)
+\frac{4}{n^3}\sum_{t=1}^{n}t^2\bigl(e_R^2(t)-e_I^2(t)\bigr)^2\sin^2(2\theta_1^0t+2\theta_2^0t^2)\Bigr]$$
[the cross-product terms vanish due to Lemma 1 and the independence of $\alpha(t)$, $e_R(t)$ and $e_I(t)$]
$$\to 16\cdot\frac{\sigma^2}{2}\cdot\frac{\sigma^2}{2}\cdot\frac{1}{6}
+16\cdot\frac{\sigma^2}{2}(\sigma_\alpha^2+\mu_\alpha^2)\cdot\frac{1}{6}
+16\cdot\frac{\sigma^2}{2}(\sigma_\alpha^2+\mu_\alpha^2)\cdot\frac{1}{6}
+4\cdot\Bigl(2\gamma-\frac{\sigma^4}{2}\Bigr)\cdot\frac{1}{6}
=\frac{8}{3}\Bigl[(\sigma_\alpha^2+\mu_\alpha^2)\sigma^2+\frac{\gamma}{2}+\frac{\sigma^4}{8}\Bigr].$$
Similarly, we can show that for large $n$,
$$\mathrm{Var}\Bigl[\frac{1}{n^{5/2}}g_1(2;\boldsymbol{\theta}^0)\Bigr]\to\frac{8}{5}\Bigl[(\sigma_\alpha^2+\mu_\alpha^2)\sigma^2+\frac{\gamma}{2}+\frac{\sigma^4}{8}\Bigr],\qquad
\mathrm{Cov}\Bigl[\frac{1}{n^{3/2}}g_1(1;\boldsymbol{\theta}^0),\ \frac{1}{n^{5/2}}g_1(2;\boldsymbol{\theta}^0)\Bigr]\to2\Bigl[(\sigma_\alpha^2+\mu_\alpha^2)\sigma^2+\frac{\gamma}{2}+\frac{\sigma^4}{8}\Bigr].$$
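The limiting variance of $n^{-3/2}g_1(1;\boldsymbol{\theta}^0)$ can be checked by Monte Carlo replication of the simulated model. As before, the Gaussian choices for $\alpha(t)$ and the noise, and all numeric values, are our own illustrative assumptions; the replicate size and $n$ are deliberately modest, so agreement is only approximate:

```python
import cmath
import math
import random

def g1_1(th1, th2, n, rng, mu_a=1.5, sd_a=0.7, sd_e=0.6):
    """One draw of n^{-3/2} g1(1; theta^0) from simulated model-(1) data."""
    acc = 0.0
    for t in range(1, n + 1):
        y = rng.gauss(mu_a, sd_a) * cmath.exp(1j * (th1 * t + th2 * t * t)) \
            + complex(rng.gauss(0.0, sd_e), rng.gauss(0.0, sd_e))
        z = y * y
        ph = 2 * th1 * t + 2 * th2 * t * t
        acc += 2 * t * (z.imag * math.cos(ph) - z.real * math.sin(ph))
    return acc / n ** 1.5

rng = random.Random(19)
vals = [g1_1(1.1, 0.4, 500, rng) for _ in range(500)]
m = sum(vals) / len(vals)
v = sum((x - m) ** 2 for x in vals) / len(vals)   # Monte Carlo variance estimate

sigma2 = 2 * 0.6 ** 2               # total noise variance sigma^2
gamma = 3 * (sigma2 / 2) ** 2       # E[e_R^4] for Gaussian noise
msq = 1.5 ** 2 + 0.7 ** 2           # sigma_a^2 + mu_a^2
target = (8 / 3) * (msq * sigma2 + gamma / 2 + sigma2 ** 2 / 8)
print(round(v, 2), round(target, 2))
```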

Now, note that $Q'(\boldsymbol{\theta}^0)D$ can be written as
$$Q'(\boldsymbol{\theta}^0)D=\frac{2}{n}f_1(\boldsymbol{\theta}^0)\Bigl[\frac{1}{n^{3/2}}g_1(1;\boldsymbol{\theta}^0),\ \frac{1}{n^{5/2}}g_1(2;\boldsymbol{\theta}^0)\Bigr].$$
Then, as $n\to\infty$, $\frac{2}{n}f_1(\boldsymbol{\theta}^0)\xrightarrow{a.s.}2(\sigma_\alpha^2+\mu_\alpha^2)$ and
$$\Bigl[\frac{1}{n^{3/2}}g_1(1;\boldsymbol{\theta}^0),\ \frac{1}{n^{5/2}}g_1(2;\boldsymbol{\theta}^0)\Bigr]\xrightarrow{d}\mathcal{N}_2(0,\Gamma),\tag{21}$$
where
$$\Gamma=8\Bigl[(\sigma_\alpha^2+\mu_\alpha^2)\sigma^2+\frac{\gamma}{2}+\frac{\sigma^4}{8}\Bigr]\begin{pmatrix}\frac13&\frac14\\ \frac14&\frac15\end{pmatrix}.$$
Therefore, using (17) and (21), Slutsky's theorem can be applied. Hence, as $n\to\infty$,
$$Q'(\boldsymbol{\theta}^0)D\xrightarrow{d}\mathcal{N}_2\bigl(0,\ 4(\sigma_\alpha^2+\mu_\alpha^2)^2\Gamma\bigr),$$


and hence
$$(\hat{\boldsymbol{\theta}}-\boldsymbol{\theta}^0)D^{-1}\xrightarrow{d}\mathcal{N}_2\bigl(0,\ 4(\sigma_\alpha^2+\mu_\alpha^2)^2\Sigma^{-1}\Gamma\Sigma^{-1}\bigr).$$
That proves the theorem.

Appendix C

As defined in Section 3, $z_R^p(t)$ and $z_I^p(t)$ are the real and imaginary parts of $y^2(t)$ in case of the multicomponent model (5), and are given as follows:
$$z_R^p(t)=\sum_{k=1}^{p}\alpha_k^2(t)\cos(2(\theta_{1k}^0t+\theta_{2k}^0t^2))
+2\sum_{j<k}\alpha_k(t)\alpha_j(t)\cos(\theta_{1k}^0t+\theta_{1j}^0t+\theta_{2k}^0t^2+\theta_{2j}^0t^2)$$
$$+\bigl(e_R^2(t)-e_I^2(t)\bigr)
+2\sum_{k=1}^{p}\alpha_k(t)\bigl\{e_R(t)\cos(\theta_{1k}^0t+\theta_{2k}^0t^2)-e_I(t)\sin(\theta_{1k}^0t+\theta_{2k}^0t^2)\bigr\},$$
$$z_I^p(t)=\sum_{k=1}^{p}\alpha_k^2(t)\sin(2(\theta_{1k}^0t+\theta_{2k}^0t^2))
+2\sum_{j<k}\alpha_k(t)\alpha_j(t)\sin(\theta_{1k}^0t+\theta_{1j}^0t+\theta_{2k}^0t^2+\theta_{2j}^0t^2)$$
$$+2e_R(t)e_I(t)
+2\sum_{k=1}^{p}\alpha_k(t)\bigl\{e_R(t)\sin(\theta_{1k}^0t+\theta_{2k}^0t^2)+e_I(t)\cos(\theta_{1k}^0t+\theta_{2k}^0t^2)\bigr\}.$$
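This decomposition is pure algebra and can be verified exactly at a single time point. The sketch below takes $p=2$ with arbitrary illustrative parameter values (and reads the cross-amplitude sum as running over unordered pairs $j<k$), and compares $y^2(t)$ computed directly against the formulas above:

```python
import cmath
import math
import random

rng = random.Random(23)
t = 7
thetas = [(1.1, 0.4), (2.0, 0.7)]                 # (theta_1k^0, theta_2k^0), p = 2
alpha = [rng.gauss(1.5, 0.7), rng.gauss(0.9, 0.5)]
eR, eI = rng.gauss(0.0, 0.6), rng.gauss(0.0, 0.6)

phi = [a1 * t + a2 * t * t for a1, a2 in thetas]
y = sum(a * cmath.exp(1j * f) for a, f in zip(alpha, phi)) + complex(eR, eI)

zR = (sum(a * a * math.cos(2 * f) for a, f in zip(alpha, phi))
      + 2 * alpha[0] * alpha[1] * math.cos(phi[0] + phi[1])
      + (eR * eR - eI * eI)
      + 2 * sum(a * (eR * math.cos(f) - eI * math.sin(f)) for a, f in zip(alpha, phi)))
zI = (sum(a * a * math.sin(2 * f) for a, f in zip(alpha, phi))
      + 2 * alpha[0] * alpha[1] * math.sin(phi[0] + phi[1])
      + 2 * eR * eI
      + 2 * sum(a * (eR * math.sin(f) + eI * math.cos(f)) for a, f in zip(alpha, phi)))

print(abs((y * y).real - zR) < 1e-9, abs((y * y).imag - zI) < 1e-9)  # True True
```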

Now a lemma similar to Lemma 5 is required for the multicomponent model (5).

Lemma 6. Under Assumptions 2, 5 and 6, the following results are true for model (5): for $k=1,2,\ldots,p$ and $m=0,1,\ldots,4$,
$$\frac{1}{n^{m+1}}\sum_{t=1}^{n}t^{m}z_R^p(t)\cos(2\theta_{1k}^0t+2\theta_{2k}^0t^2)\xrightarrow{a.s.}\frac{1}{2(m+1)}(\sigma_{k\alpha}^2+\mu_{k\alpha}^2),$$
$$\frac{1}{n^{m+1}}\sum_{t=1}^{n}t^{m}z_I^p(t)\cos(2\theta_{1k}^0t+2\theta_{2k}^0t^2)\xrightarrow{a.s.}0,$$
$$\frac{1}{n^{m+1}}\sum_{t=1}^{n}t^{m}z_R^p(t)\sin(2\theta_{1k}^0t+2\theta_{2k}^0t^2)\xrightarrow{a.s.}0,$$
$$\frac{1}{n^{m+1}}\sum_{t=1}^{n}t^{m}z_I^p(t)\sin(2\theta_{1k}^0t+2\theta_{2k}^0t^2)\xrightarrow{a.s.}\frac{1}{2(m+1)}(\sigma_{k\alpha}^2+\mu_{k\alpha}^2).$$


Proof of Lemma 6: Along the same line as in the proof of Lemma 5, it can be shown that
$$\{e_R(t)e_I(t)\}\overset{i.i.d.}{\sim}\Bigl(0,\frac{\sigma^4}{4}\Bigr),\qquad
\{e_R^2(t)-e_I^2(t)\}\overset{i.i.d.}{\sim}\Bigl(0,2\gamma-\frac{\sigma^4}{2}\Bigr),$$
$$\{\alpha_k(t)e_R(t)\}\overset{i.i.d.}{\sim}\Bigl(0,(\sigma_{k\alpha}^2+\mu_{k\alpha}^2)\frac{\sigma^2}{2}\Bigr),\qquad
\{\alpha_k(t)e_I(t)\}\overset{i.i.d.}{\sim}\Bigl(0,(\sigma_{k\alpha}^2+\mu_{k\alpha}^2)\frac{\sigma^2}{2}\Bigr).\tag{22}$$
Consider
$$\frac{1}{n^{m+1}}\sum_{t=1}^{n}t^{m}z_R^p(t)\cos(2\theta_{1k}^0t+2\theta_{2k}^0t^2)
=\frac{1}{n^{m+1}}\sum_{t=1}^{n}t^{m}\Bigl\{\sum_{j=1}^{p}\alpha_j^2(t)\cos(2\theta_{1j}^0t+2\theta_{2j}^0t^2)\Bigr\}\cos(2\theta_{1k}^0t+2\theta_{2k}^0t^2)$$
$$+\frac{1}{n^{m+1}}\sum_{t=1}^{n}t^{m}\bigl(e_R^2(t)-e_I^2(t)\bigr)\cos(2\theta_{1k}^0t+2\theta_{2k}^0t^2)$$
$$+\frac{2}{n^{m+1}}\sum_{t=1}^{n}t^{m}\sum_{j=1}^{p}\alpha_j(t)\bigl\{e_R(t)\cos(\theta_{1j}^0t+\theta_{2j}^0t^2)-e_I(t)\sin(\theta_{1j}^0t+\theta_{2j}^0t^2)\bigr\}\cos(2\theta_{1k}^0t+2\theta_{2k}^0t^2)$$
$$+\frac{2}{n^{m+1}}\sum_{t=1}^{n}t^{m}\sum_{j<l}\alpha_j(t)\alpha_l(t)\cos(\theta_{1j}^0t+\theta_{1l}^0t+\theta_{2j}^0t^2+\theta_{2l}^0t^2)\cos(2\theta_{1k}^0t+2\theta_{2k}^0t^2).$$
The second, third and fourth terms converge to zero a.s. using the Corollary of Lemma 3 and the results given in (22). Using the same trick as used in the proof of Lemma 5, we can show that the first term
$$\frac{1}{n^{m+1}}\sum_{t=1}^{n}t^{m}\Bigl\{\sum_{j=1}^{p}\alpha_j^2(t)\cos(2\theta_{1j}^0t+2\theta_{2j}^0t^2)\Bigr\}\cos(2\theta_{1k}^0t+2\theta_{2k}^0t^2)\xrightarrow{a.s.}\frac{1}{2(m+1)}(\sigma_{k\alpha}^2+\mu_{k\alpha}^2),$$
using Lemma 1, for $m=0,1,\ldots,4$. Similarly, the other three identities can be proved.

Lemma 7. Let $\hat{\boldsymbol{\theta}}_k=(\hat\theta_{1k},\hat\theta_{2k})$ be an estimate of $\boldsymbol{\theta}_k^0=(\theta_{1k}^0,\theta_{2k}^0)$ that maximizes $Q_p(\boldsymbol{\theta})$ locally, and for any $\epsilon>0$, let $S_{k,\epsilon}=\{\boldsymbol{\theta}_k:|\boldsymbol{\theta}_k-\boldsymbol{\theta}_k^0|>\epsilon\}$ for some fixed $\boldsymbol{\theta}_k^0\in(0,\pi)\times(0,\pi)$. If for any $\epsilon>0$,
$$\overline{\lim_{n\to\infty}}\sup_{S_{k,\epsilon}}\frac{1}{n}\bigl[Q_p(\boldsymbol{\theta}_k)-Q_p(\boldsymbol{\theta}_k^0)\bigr]<0\quad a.s.,\tag{23}$$
then as $n\to\infty$, $\hat{\boldsymbol{\theta}}_k\xrightarrow{a.s.}\boldsymbol{\theta}_k^0$, that is, $\hat\theta_{1k}\xrightarrow{a.s.}\theta_{1k}^0$ and $\hat\theta_{2k}\xrightarrow{a.s.}\theta_{2k}^0$, $k=1,\ldots,p$.

The proof of Lemma 7 is similar to the proof of Lemma 2, so is omitted.
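The local-maximization idea behind Lemma 7 can be illustrated numerically: for a two-component simulation, the criterion $Q_p(\boldsymbol{\theta})/n$ shows a separate peak near each component's parameter pair, of height roughly $(\sigma_{k\alpha}^2+\mu_{k\alpha}^2)^2$. All simulation settings below (Gaussian amplitudes and noise, the two chirp parameter pairs, the off-peak probe point) are our own illustrative choices:

```python
import cmath
import math
import random

def qp(z, th1, th2):
    """Q_p(theta)/n evaluated on z(t) = y(t)^2 (same form as the one-component criterion)."""
    n = len(z)
    c = s = 0.0
    for t, zt in enumerate(z, start=1):
        ph = 2 * th1 * t + 2 * th2 * t * t
        c += zt.real * math.cos(ph) + zt.imag * math.sin(ph)
        s += -zt.real * math.sin(ph) + zt.imag * math.cos(ph)
    return (c / n) ** 2 + (s / n) ** 2

rng = random.Random(29)
comps = [(1.1, 0.4, 1.5, 0.7), (2.0, 0.7, 0.9, 0.5)]   # (th1k, th2k, mu_k, sd_k)
n = 5000
z = []
for t in range(1, n + 1):
    y = sum(rng.gauss(mu, sd) * cmath.exp(1j * (a * t + b * t * t))
            for a, b, mu, sd in comps) + complex(rng.gauss(0.0, 0.6), rng.gauss(0.0, 0.6))
    z.append(y * y)

print(round(qp(z, 1.1, 0.4), 2),    # near (0.7^2 + 1.5^2)^2, about 7.5
      round(qp(z, 2.0, 0.7), 2),    # near (0.5^2 + 0.9^2)^2, about 1.1
      round(qp(z, 1.5, 0.55), 3))   # away from both components: near 0
```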


Proof of Theorem 3.1: We first consider $\frac{1}{n}Q_p(\boldsymbol{\theta}_k^0)$. Using Lemma 6, it can be shown, similarly as in the proof of Theorem 2.1, that
$$\frac{1}{n}Q_p(\boldsymbol{\theta}_k^0)\xrightarrow{a.s.}(\sigma_{k\alpha}^2+\mu_{k\alpha}^2)^2.\tag{24}$$
Now, write $\frac{1}{n}Q_p(\boldsymbol{\theta}_k)=R_1+R_2$, where
$$R_1=\left[\frac{1}{n}\sum_{t=1}^{n}\bigl\{z_R^p(t)\cos(2\theta_{1k}t+2\theta_{2k}t^2)+z_I^p(t)\sin(2\theta_{1k}t+2\theta_{2k}t^2)\bigr\}\right]^2,\qquad
R_2=\left[\frac{1}{n}\sum_{t=1}^{n}\bigl\{-z_R^p(t)\sin(2\theta_{1k}t+2\theta_{2k}t^2)+z_I^p(t)\cos(2\theta_{1k}t+2\theta_{2k}t^2)\bigr\}\right]^2.$$

Consider
$$\overline{\lim_{n\to\infty}}\sup_{S_{k,\epsilon}}R_1
=\overline{\lim_{n\to\infty}}\sup_{S_{k,\epsilon}}\left[\frac{1}{n}\sum_{t=1}^{n}\bigl\{z_R^p(t)\cos(2\theta_{1k}t+2\theta_{2k}t^2)+z_I^p(t)\sin(2\theta_{1k}t+2\theta_{2k}t^2)\bigr\}\right]^2$$
$$=\overline{\lim_{n\to\infty}}\sup_{S_{k,\epsilon}}\left[\frac{1}{n}\sum_{t=1}^{n}\Bigl\{\sum_{j=1}^{p}\alpha_j^2(t)\cos[2(\theta_{1j}^0-\theta_{1k})t+2(\theta_{2j}^0-\theta_{2k})t^2]
+2e_R(t)e_I(t)\sin(2\theta_{1k}t+2\theta_{2k}t^2)
+\bigl(e_R^2(t)-e_I^2(t)\bigr)\cos(2\theta_{1k}t+2\theta_{2k}t^2)$$
$$+2\sum_{j=1}^{p}\alpha_j(t)\bigl\{e_R(t)\cos[(2\theta_{1j}^0-\theta_{1k})t+(2\theta_{2j}^0-\theta_{2k})t^2]
+e_I(t)\sin[(2\theta_{1j}^0-\theta_{1k})t+(2\theta_{2j}^0-\theta_{2k})t^2]\bigr\}$$
$$+2\sum_{j<l}\alpha_j(t)\alpha_l(t)\bigl\{\cos(\theta_{1j}^0t+\theta_{1l}^0t+\theta_{2j}^0t^2+\theta_{2l}^0t^2)\cos(2\theta_{1k}t+2\theta_{2k}t^2)
+\sin(\theta_{1j}^0t+\theta_{1l}^0t+\theta_{2j}^0t^2+\theta_{2l}^0t^2)\sin(2\theta_{1k}t+2\theta_{2k}t^2)\bigr\}\Bigr\}\right]^2$$
$$=\overline{\lim_{n\to\infty}}\sup_{|\boldsymbol{\theta}_k-\boldsymbol{\theta}_k^0|>\epsilon}\left[\frac{1}{n}\sum_{t=1}^{n}\Bigl\{\sum_{j=1}^{p}\bigl(\alpha_j^2(t)-(\sigma_{j\alpha}^2+\mu_{j\alpha}^2)\bigr)\cos[2(\theta_{1j}^0-\theta_{1k})t+2(\theta_{2j}^0-\theta_{2k})t^2]
+\sum_{j=1}^{p}(\sigma_{j\alpha}^2+\mu_{j\alpha}^2)\cos[2(\theta_{1j}^0-\theta_{1k})t+2(\theta_{2j}^0-\theta_{2k})t^2]$$
$$+2\sum_{j=1}^{p}\alpha_j(t)\bigl\{e_R(t)\cos[(2\theta_{1j}^0-\theta_{1k})t+(2\theta_{2j}^0-\theta_{2k})t^2]
+e_I(t)\sin[(2\theta_{1j}^0-\theta_{1k})t+(2\theta_{2j}^0-\theta_{2k})t^2]\bigr\}$$
$$+2\sum_{j<l}\alpha_j(t)\alpha_l(t)\bigl\{\cos(\theta_{1j}^0t+\theta_{1l}^0t+\theta_{2j}^0t^2+\theta_{2l}^0t^2)\cos(2\theta_{1k}t+2\theta_{2k}t^2)
+\sin(\theta_{1j}^0t+\theta_{1l}^0t+\theta_{2j}^0t^2+\theta_{2l}^0t^2)\sin(2\theta_{1k}t+2\theta_{2k}t^2)\bigr\}\Bigr\}\right]^2$$
[the terms $2e_R(t)e_I(t)\sin(2\theta_{1k}t+2\theta_{2k}t^2)$ and $(e_R^2(t)-e_I^2(t))\cos(2\theta_{1k}t+2\theta_{2k}t^2)$ inside the squared bracket vanish using Lemma 3]
$$\xrightarrow{a.s.}0,$$
using Lemmas 1, 3 and 6. Similarly, we can show that $\overline{\lim}_{n\to\infty}\sup_{S_{k,\epsilon}}R_2\xrightarrow{a.s.}0$. Therefore,
$$\overline{\lim_{n\to\infty}}\sup_{S_{k,\epsilon}}\frac{1}{n}\bigl[Q_p(\boldsymbol{\theta}_k)-Q_p(\boldsymbol{\theta}_k^0)\bigr]
=\overline{\lim_{n\to\infty}}\sup_{S_{k,\epsilon}}\Bigl[R_1+R_2-\frac{1}{n}Q_p(\boldsymbol{\theta}_k^0)\Bigr]\to-(\sigma_{k\alpha}^2+\mu_{k\alpha}^2)^2<0\quad a.s.$$

Then, using Lemma 7, $\hat\theta_{1k}$ and $\hat\theta_{2k}$, which maximize $Q_p(\boldsymbol{\theta})$ locally, are strongly consistent estimators of $\theta_{1k}^0$ and $\theta_{2k}^0$, respectively.

Appendix D

In this Appendix, we prove Theorem 3.3. The unknown parameters are estimated by maximizing $Q_p(\boldsymbol{\theta})$ locally. Let $Q_p'(\boldsymbol{\theta})$ be the vector of first order derivatives and $Q_p''(\boldsymbol{\theta})$ be the matrix of second order derivatives. Then, the elements of $Q_p'(\boldsymbol{\theta})$ can be written as
$$\frac{\partial Q_p(\boldsymbol{\theta})}{\partial\theta_k}=\frac{2}{n}f_1^p(\boldsymbol{\theta})g_1^p(k;\boldsymbol{\theta})+\frac{2}{n}f_2^p(\boldsymbol{\theta})g_2^p(k;\boldsymbol{\theta}),\qquad k=1,2,$$
where
$$f_1^p(\boldsymbol{\theta})=\sum_{t=1}^{n}z_R^p(t)\cos(2\theta_1t+2\theta_2t^2)+\sum_{t=1}^{n}z_I^p(t)\sin(2\theta_1t+2\theta_2t^2),$$
$$g_1^p(k;\boldsymbol{\theta})=\sum_{t=1}^{n}z_I^p(t)2t^{k}\cos(2\theta_1t+2\theta_2t^2)-\sum_{t=1}^{n}z_R^p(t)2t^{k}\sin(2\theta_1t+2\theta_2t^2),$$
$$f_2^p(\boldsymbol{\theta})=\sum_{t=1}^{n}z_I^p(t)\cos(2\theta_1t+2\theta_2t^2)-\sum_{t=1}^{n}z_R^p(t)\sin(2\theta_1t+2\theta_2t^2),$$
$$g_2^p(k;\boldsymbol{\theta})=-\sum_{t=1}^{n}z_I^p(t)2t^{k}\sin(2\theta_1t+2\theta_2t^2)-\sum_{t=1}^{n}z_R^p(t)2t^{k}\cos(2\theta_1t+2\theta_2t^2).$$
Using Lemma 6, similarly as (17), we have
$$(a)\ \lim_{n\to\infty}\frac{1}{n}f_1^p(\boldsymbol{\theta}_k^0)=(\sigma_{k\alpha}^2+\mu_{k\alpha}^2)\quad\text{and}\quad(b)\ \lim_{n\to\infty}\frac{1}{n}f_2^p(\boldsymbol{\theta}_k^0)=0\quad a.s.,\qquad k=1,\ldots,p.\tag{25}$$
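The limits in (25) are easy to see in a two-component simulation. As in the earlier sketches, the Gaussian choices and all numeric values below are our own illustrative assumptions:

```python
import cmath
import math
import random

rng = random.Random(31)
comps = [(1.1, 0.4, 1.5, 0.7), (2.0, 0.7, 0.9, 0.5)]   # (th1k, th2k, mu_k, sd_k)
n = 20000
z = []
for t in range(1, n + 1):
    y = sum(rng.gauss(mu, sd) * cmath.exp(1j * (a * t + b * t * t))
            for a, b, mu, sd in comps) + complex(rng.gauss(0.0, 0.6), rng.gauss(0.0, 0.6))
    z.append(y * y)

def f1_f2(th1, th2):
    """(1/n) f1^p and (1/n) f2^p at the point (th1, th2)."""
    f1 = f2 = 0.0
    for t, zt in enumerate(z, start=1):
        ph = 2 * th1 * t + 2 * th2 * t * t
        f1 += zt.real * math.cos(ph) + zt.imag * math.sin(ph)
        f2 += zt.imag * math.cos(ph) - zt.real * math.sin(ph)
    return f1 / n, f2 / n

for a, b, mu, sd in comps:
    f1, f2 = f1_f2(a, b)
    print(round(f1, 2), round(f2, 2))   # close to (sd^2 + mu^2, 0) for each component
```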


Now, as in the proof of Theorem 2.2, we can write
$$(\hat{\boldsymbol{\theta}}_k-\boldsymbol{\theta}_k^0)D^{-1}=-[Q_p'(\boldsymbol{\theta}_k^0)D][DQ_p''(\bar{\boldsymbol{\theta}}_k)D]^{-1},\qquad k=1,\ldots,p,$$
where $\bar{\boldsymbol{\theta}}_k$ is a point on the line joining $\hat{\boldsymbol{\theta}}_k$ and $\boldsymbol{\theta}_k^0$, $k=1,\ldots,p$. Therefore, to prove the asymptotic independence of $(\hat{\boldsymbol{\theta}}_k-\boldsymbol{\theta}_k^0)D^{-1}$ and $(\hat{\boldsymbol{\theta}}_j-\boldsymbol{\theta}_j^0)D^{-1}$, $k\ne j$, we need to show that
$$\Sigma^p=((\Sigma_{ij}^p))=\mathrm{Cov}\bigl[Q_p'(\boldsymbol{\theta}_k^0)D,\ Q_p'(\boldsymbol{\theta}_j^0)D\bigr]=0.$$
Assume $p=2$. Now, because of (25)(b), to prove
$$\Sigma_{kj}^2=\mathrm{Cov}\left[\frac{1}{n^{\frac{2k+1}{2}}}\frac{\partial Q_2(\boldsymbol{\theta})}{\partial\theta_k}\Big|_{\boldsymbol{\theta}_1^0},\ \frac{1}{n^{\frac{2j+1}{2}}}\frac{\partial Q_2(\boldsymbol{\theta})}{\partial\theta_j}\Big|_{\boldsymbol{\theta}_2^0}\right]=0,\qquad k,j=1,2,$$
it is enough to show that for $k,j=1,2$,
$$\mathrm{Cov}\left[\frac{1}{n^{\frac{2k+1}{2}}}g_1^2(k;\boldsymbol{\theta}_1^0),\ \frac{1}{n^{\frac{2j+1}{2}}}g_1^2(j;\boldsymbol{\theta}_2^0)\right]=0.$$
Here $\Sigma_{kj}^2$ denotes the $(k,j)$-th element of $\Sigma^p$ with $p=2$. Expanding $z_R^p(t)$ and $z_I^p(t)$, we have

$$\frac{1}{n^{\frac{2k+1}{2}}}g_1^2(k;\boldsymbol{\theta}_1^0)
=\frac{2}{n^{\frac{2k+1}{2}}}\left[\sum_{t=1}^{n}t^{k}z_I^p(t)\cos(2\theta_{11}^0t+2\theta_{21}^0t^2)-\sum_{t=1}^{n}t^{k}z_R^p(t)\sin(2\theta_{11}^0t+2\theta_{21}^0t^2)\right],$$
which, after replacing $\alpha_2^2(t)$ by $\alpha_2^2(t)-(\sigma_{2\alpha}^2+\mu_{2\alpha}^2)$ and dropping the resulting deterministic term, which is asymptotically negligible by Lemma 4, equals
$$\frac{2}{n^{\frac{2k+1}{2}}}\sum_{t=1}^{n}t^{k}\bigl(\alpha_2^2(t)-(\sigma_{2\alpha}^2+\mu_{2\alpha}^2)\bigr)\sin(2\theta_{12}^0t-2\theta_{11}^0t+2\theta_{22}^0t^2-2\theta_{21}^0t^2)
+\frac{4}{n^{\frac{2k+1}{2}}}\sum_{t=1}^{n}t^{k}e_R(t)e_I(t)\cos(2\theta_{11}^0t+2\theta_{21}^0t^2)$$
$$+\frac{4}{n^{\frac{2k+1}{2}}}\sum_{t=1}^{n}t^{k}\alpha_1(t)\alpha_2(t)\sin(\theta_{12}^0t-\theta_{11}^0t+\theta_{22}^0t^2-\theta_{21}^0t^2)
-\frac{2}{n^{\frac{2k+1}{2}}}\sum_{t=1}^{n}t^{k}\bigl(e_R^2(t)-e_I^2(t)\bigr)\sin(2\theta_{11}^0t+2\theta_{21}^0t^2)$$
$$-\frac{4}{n^{\frac{2k+1}{2}}}\sum_{t=1}^{n}t^{k}\alpha_1(t)e_R(t)\sin(\theta_{11}^0t+\theta_{21}^0t^2)
+\frac{4}{n^{\frac{2k+1}{2}}}\sum_{t=1}^{n}t^{k}\alpha_1(t)e_I(t)\cos(\theta_{11}^0t+\theta_{21}^0t^2)$$
$$-\frac{4}{n^{\frac{2k+1}{2}}}\sum_{t=1}^{n}t^{k}\alpha_2(t)e_R(t)\sin(2\theta_{11}^0t-\theta_{12}^0t+2\theta_{21}^0t^2-\theta_{22}^0t^2)
+\frac{4}{n^{\frac{2k+1}{2}}}\sum_{t=1}^{n}t^{k}\alpha_2(t)e_I(t)\cos(2\theta_{11}^0t-\theta_{12}^0t+2\theta_{21}^0t^2-\theta_{22}^0t^2)+o(1).\tag{26}$$
Similarly, $\frac{1}{n^{\frac{2j+1}{2}}}g_1^2(j;\boldsymbol{\theta}_2^0)$ can be expressed, with the roles of the two components interchanged, as
$$\frac{2}{n^{\frac{2j+1}{2}}}\sum_{t=1}^{n}t^{j}\bigl(\alpha_1^2(t)-(\sigma_{1\alpha}^2+\mu_{1\alpha}^2)\bigr)\sin(2\theta_{11}^0t-2\theta_{12}^0t+2\theta_{21}^0t^2-2\theta_{22}^0t^2)
+\frac{4}{n^{\frac{2j+1}{2}}}\sum_{t=1}^{n}t^{j}e_R(t)e_I(t)\cos(2\theta_{12}^0t+2\theta_{22}^0t^2)$$
$$+\frac{4}{n^{\frac{2j+1}{2}}}\sum_{t=1}^{n}t^{j}\alpha_1(t)\alpha_2(t)\sin(\theta_{11}^0t-\theta_{12}^0t+\theta_{21}^0t^2-\theta_{22}^0t^2)
-\frac{2}{n^{\frac{2j+1}{2}}}\sum_{t=1}^{n}t^{j}\bigl(e_R^2(t)-e_I^2(t)\bigr)\sin(2\theta_{12}^0t+2\theta_{22}^0t^2)$$
$$-\frac{4}{n^{\frac{2j+1}{2}}}\sum_{t=1}^{n}t^{j}\alpha_2(t)e_R(t)\sin(\theta_{12}^0t+\theta_{22}^0t^2)
+\frac{4}{n^{\frac{2j+1}{2}}}\sum_{t=1}^{n}t^{j}\alpha_2(t)e_I(t)\cos(\theta_{12}^0t+\theta_{22}^0t^2)$$
$$-\frac{4}{n^{\frac{2j+1}{2}}}\sum_{t=1}^{n}t^{j}\alpha_1(t)e_R(t)\sin(2\theta_{12}^0t-\theta_{11}^0t+2\theta_{22}^0t^2-\theta_{21}^0t^2)
+\frac{4}{n^{\frac{2j+1}{2}}}\sum_{t=1}^{n}t^{j}\alpha_1(t)e_I(t)\cos(2\theta_{12}^0t-\theta_{11}^0t+2\theta_{22}^0t^2-\theta_{21}^0t^2)+o(1).\tag{27}$$
Under the assumption of independence of $\alpha_1(t)$ and $\alpha_2(t)$, and using Lemmas 1, 3 and 4 on expressions (26) and (27), we can show that
$$\mathrm{Cov}\left[\frac{1}{n^{\frac{2k+1}{2}}}g_1^2(k;\boldsymbol{\theta}_1^0),\ \frac{1}{n^{\frac{2j+1}{2}}}g_1^2(j;\boldsymbol{\theta}_2^0)\right]=0,\qquad k,j=1,2.$$
Therefore, $\Sigma^2$ is a zero matrix, and $(\hat{\boldsymbol{\theta}}_1-\boldsymbol{\theta}_1^0)D^{-1}$ and $(\hat{\boldsymbol{\theta}}_2-\boldsymbol{\theta}_2^0)D^{-1}$ are asymptotically independently distributed when $p=2$. Similarly, for $p>2$, it can be proved that $(\hat{\boldsymbol{\theta}}_k-\boldsymbol{\theta}_k^0)D^{-1}$ and $(\hat{\boldsymbol{\theta}}_j-\boldsymbol{\theta}_j^0)D^{-1}$ for $k\ne j$ are asymptotically independently distributed.

Declaration of Competing Interest

There is no conflict of interest.

ESTIMATION OF PARAMETERS IN RANDOM AMPLITUDE CHIRP SIGNAL

29

References [1] Abatzoglou, T. (1986), “Fast maximum likelihood joint estimation of frequency and frequency rate”, IEEE Transactions on Aerospace and Electronic Systems, vol. 22, 708 – 715. [2] Besson, O., Ghogho, M. and Swami, A. (1999), “Parameter estimation for random amplitude chirp signals”, IEEE Trans. Signal Process., Vol. 47, No. 12, 3208-3219. [3] Besson, O. and Stoica, P. (1995), “Sinusoidal signals with random amplitude: Least-squares estimation and their statistical analysis”, IEEE Transactions on Signal Processing, vol. 43, 2733 – 2744. [4] Djuri´c, P.M. and Kay, S.M. (1990), ”Parameter estimation of chirp signals”, IEEE Trans. on Acoustics, Speech and Signal Process., Vol. 38, No. 12, 2118-2126. [5] Farquharson, M., O’Shea, P. and Ledwich, G. (2005), “A computationally efficient technique for estimating the parameters phase signals from noisy observations”, IEEE Transactions on Signal Processing, vol. 53, 3337 – 3342. [6] Francos, J.M. and Friedlander, B. (1995), “Bounds for estimation of multicomponent signals with random amplitudes and deterministic phase”, IEEE Transactions on Signal Processing, vol. 43, 1161 – 1172. [7] Gabor, D. (1946), “Theory of communication. Part 1: The analysis of information” Journal of the Institution of Electrical Engineers - Part III: Radio and Communication Engineering, vol. 93, 429 – 441. [8] Gini, F., Montanari, M. and Verrazzani, L. (2000), “Estimation of chirp signals in compound Gaussian clutter: a cyclostationary approach”, IEEE Transactons on Acoustics, Speech and Signal Processing, vol. 48, 1029 – 1039. [9] Grover, R., Kundu, D. and Mitra, A. (2018), “On approximate least squares estimators of parameters on one dimensional chirp signal”, Statistics, vol. 52, 1060 – 1085. [10] Kundu, D. and Nandi, S. (2012), Statistical Signal Processing: Frequency Estimation, Springer, New Delhi. [11] Lahiri, A., Kundu, D. and Mitra, A. 
(2015), “Estimating the parameters of multiple chirp signals”, Journal of Multivariate Analysis, 139, 189 - 205. [12] Nandi, S. and Kundu, D. (2004),“Asymptotic properties of the least squares estimators of the parameters of the chirp signals”, Annals of the Institute of Statistical Mathematics, 56, 529-544. [13] Stoica, P. and Moses, R. (2005), Spectral Analysis of Signals, Prentice Hall, Upper Saddle River, New Jersey, U.S.A. [14] Zhou, G. and Giannakis, G.B. (1994), “On estimating random amplitude modulated harmonics using higher order spectra”, IEEE Transactions on Oceanic Engineering, vol. 19, No. 4, 529 – 539.

1

Theoretical Statistics and Mathematics Unit, Indian Statistical Institute, 7, S.J.S.

Sansanwal Marg, New Delhi - 110016, India, [email protected] 2

Department of Mathematics and Statistics, Indian Institute of Technology Kanpur, Pin

208016, India, [email protected]