Signal Processing 80 (2000) 577–595
Minimax lower bounds for nonparametric estimation of the instantaneous frequency and time-varying amplitude of a harmonic signal

Vladimir Katkovnik*
Department of Statistics, University of South Africa (UNISA), P.O. Box 392, Pretoria 0001, South Africa

Received 22 May 1997; received in revised form 26 October 1999
Abstract

Estimation of the instantaneous frequency and time-varying amplitude, along with their derivatives, is considered for a harmonic complex-valued signal observed in additive noise. Asymptotic minimax lower bounds are derived for the mean-squared errors of estimation provided that the phase and amplitude are arbitrary piece-wise differentiable functions of time. It is shown that these lower bounds differ only in constant factors from the optimal upper bounds on the mean-squared errors of the estimates given by the generalized local polynomial periodogram. The time-varying phase and amplitude are derived which are "worst", respectively, for estimation of the instantaneous frequency, the amplitude and their derivatives. These "worst" functions can be applied in order to test the accuracy of algorithms used for estimation of the instantaneous frequency and amplitude. © 2000 Elsevier Science B.V. All rights reserved.
This work was supported by the Foundation for Research Development of South Africa.
* Fax: +12-429-6298. E-mail address: [email protected] (V. Katkovnik).
0165-1684/00/$ - see front matter © 2000 Elsevier Science B.V. All rights reserved. PII: S0165-1684(99)00155-3
Keywords: Instantaneous frequency; Fourier analysis; Minimax lower bound; Nonparametric estimation; Spectrum analysis; Time-varying amplitude; Time–frequency analysis
1. Introduction

We consider the problem of estimating the instantaneous frequency (IF)

Ω(t) = dφ(t)/dt   (1)

and the real-valued time-varying amplitude A(t) from the discrete-time observations

y(sT) = r(sT) + e(sT),  r(sT) = A(sT) exp(jφ(sT)),  s = 1, 2, ..., N,   (2)

where T is the sampling interval and N is the number of observations. The {e(sT)} are zero-mean white Gaussian circular complex-valued random variables:

E(Re e(sT))² = E(Im e(sT))² = σ²/2,
E[Re e(sT) · Im e(sT)] = 0,  E e(sT)e*(sT) = σ²,

where the asterisk denotes complex conjugation. The two-dimensional variable ē(sT) = (Re e(sT), Im e(sT))′ is assumed to be Gaussian. Here and in what follows the prime denotes the transpose. Thus, the covariance matrix of the variable ē(sT) can be represented as follows:

E ē(sT) ē′(sT) = (σ²/2) I,   (3)

where I is the 2×2 identity matrix.

Estimation of the IF and the amplitude, as well as their derivatives, from the noisy observations y(sT) is the subject of this paper. Let us recall that a complex-valued harmonic with time-varying phase and amplitude has been utilized to study a wide range of signals, including speech, music and other acoustic signals, biological signals, radar and sonar signals, and other signals used in engineering systems (e.g. [1,4]). In some of these applications estimates of the derivatives of the IF and amplitude can be more efficient than estimates of the IF and amplitude themselves, say, for early diagnostics and detection of crucial changes in a system.

The main assumption of this paper is that the amplitude A(t) and the phase φ(t) to be estimated are unknown real-valued time-varying nonparametric functions. The word "nonparametric" indicates that nothing is known about a parametric form of A(t) and φ(t). It is assumed in the paper that the time-varying IF and amplitude belong to the following classes of nonparametric piece-wise smooth functions. Denoting

A^{(q)}(t) = d^q A(t)/dt^q,  Ω^{(q−1)}(t) = d^{q−1}Ω(t)/dt^{q−1} = d^q φ(t)/dt^q,

we set

F_A(m_A) = {A: sup_t |A^{(m_A)}(t)| ≤ L_A(m_A)},  m_A ≥ 1,
F_φ(m_φ) = {φ: sup_t |Ω^{(m_φ−1)}(t)| ≤ L_φ(m_φ)},  m_φ ≥ 2.   (4)

Eq. (4) means that φ(t) and A(t) are smooth functions and that for almost every t there is a neighbourhood of the time instant t where φ(t) and A(t) are differentiable with bounded derivatives of the corresponding orders.
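As a concrete illustration of the observation model (2) and the noise assumptions (3), the following sketch (a hypothetical NumPy snippet, not part of the paper; all signal parameters are invented) generates noisy samples of a harmonic with time-varying amplitude and phase and checks the circular-noise moments empirically:

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1e-3            # sampling interval
N = 200_000         # number of samples (large, to check the moments)
sigma2 = 0.25       # noise power E|e|^2 = sigma^2

s = np.arange(1, N + 1)
t = s * T

# Example smooth amplitude and phase (both piece-wise differentiable)
A = 1.0 + 0.3 * np.sin(2 * np.pi * 0.5 * t)
phi = 2 * np.pi * (50 * t + 10 * t**2)   # IF = dphi/dt = 2*pi*(50 + 20 t)

# Circular complex Gaussian noise: Re and Im are i.i.d. N(0, sigma^2/2)
e = (rng.normal(0, np.sqrt(sigma2 / 2), N)
     + 1j * rng.normal(0, np.sqrt(sigma2 / 2), N))

y = A * np.exp(1j * phi) + e             # observations (2)

# Empirical check of the noise moments used in (3)
print(np.mean(np.abs(e) ** 2))           # ~ sigma^2
print(np.mean(e.real * e.imag))          # ~ 0
```

With circular complex Gaussian noise the real and imaginary parts each carry half of the noise power σ², which is exactly the normalization appearing in (3).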
Note that the polynomial phase and amplitude functions of the powers m_φ and m_A,

φ(t) = d₀ + d₁t + ... + d_{m_φ} t^{m_φ}/m_φ!,  |d_{m_φ}| ≤ L_φ(m_φ),
A(t) = r₀ + r₁t + ... + r_{m_A} t^{m_A}/m_A!,  |r_{m_A}| ≤ L_A(m_A),   (5)

belong to the corresponding classes F_φ(m_φ) and F_A(m_A) as a special case. Two points determine the major difference between these polynomial parametric functions and the corresponding nonparametric functions. Let us make it clear for the phase function. First, for the polynomial phase of the power m_φ all of the derivatives d^r φ(t)/dt^r ≡ 0 for r > m_φ. This is not true for a nonparametric function: the class F_φ(m_φ) does not assume that the higher-order derivatives equal zero or do not exist. Second, the derivative d^{m_φ}φ(t)/dt^{m_φ} is constant in the parametric case, while it is an arbitrary but bounded function in the nonparametric case.

Estimation of parametric functions reduces to the estimation of a finite number of constant coefficients d_k and r_k instead of the unknown functions of time A(t) and φ(t). The nonparametric amplitude A(t) and phase φ(t) can be estimated in a point-wise mode only, i.e. for every value of t. One of the advantages of the parametric representation is a well-established theory and a large number of estimation algorithms. A quite different situation arises in the nonparametric case. First of all, there are no universal methods, say, similar to maximum likelihood in the parametric case, which could be applied for a regular design of nonparametric estimates. It seems that only the local polynomial approximation (LPA), or similar approximations by different functions, gives a flexible approach to obtaining nonparametric estimates (e.g. [2,5,9,10,13]). A further difference of the nonparametric case appears in the characterization of the estimation accuracy. Unbiased estimates at least exist in the parametric case, and it is proved that the
Cramer–Rao lower bounds give an explicit picture of the best accuracy available in the asymptotics. In the nonparametric case even the best-possible estimates are biased, with bias values bounded by the constants L_φ(m_φ) and L_A(m_A). This biasedness is a general property of nonparametric estimates. For linear regression estimates it is discussed in particular in [5,9], while for IF estimation this problem is considered in [10,13,18]. As a matter of fact, this biasedness appears because the derivatives d^{m_φ}φ(t)/dt^{m_φ} and d^{m_A}A(t)/dt^{m_A} are not constant in the classes F_φ(m_φ) and F_A(m_A). It deserves to be mentioned that the trade-off between the bias and the variance usually employed in nonparametric estimation means that the bias and the standard deviation of the estimation errors are of the same order (see [10,13,18]). As a result of this biasedness, the Cramer–Rao lower bounds derived for different parametric models of the IF and amplitude (see in particular [6,7,17,20–22]) are not relevant to the nonparametric setting of the estimation problem.

Finally, we would like to say more about the difference between the usual parametric Cramer–Rao bound and the nonparametric lower bound considered in this paper. It is assumed in the parametric setting of the problem that the number of observations N is given, and it is shown that the Cramer–Rao lower bound is a decreasing function of N. It means that the best estimation accuracy improves as N increases. It is emphasized that the accuracy does not impose any restrictions on the number of observations if the accurate parametric model is used in estimation. In the considered nonparametric problem the number of observations is found from the bias–variance trade-off. This optimal number of observations, as well as the accuracy of the nonparametric estimates on the whole, depends on the constants L_φ(m_φ) and L_A(m_A) (see for example the accuracy analysis in Section 3 of this paper).

Thus we arrive at a situation where the parametric Cramer–Rao lower bound depends on N and does not depend on L_φ(m_φ) and L_A(m_A), while the nonparametric lower bounds depend on L_φ(m_φ) and L_A(m_A) but do not depend on N. It means that these lower bounds have very different forms and cannot be compared.
In essence, these lower bounds cannot be compared, as they relate to different classes of the time-varying frequency and amplitude and to different accuracy-evaluation problems.

The minimax lower bounds developed in this paper are a generalization of the results obtained in [16] for IF estimation provided that the amplitude is time invariant and known. The generalized local polynomial periodogram (GLPP) proposed in [12] is studied in this paper as a nonparametric estimator of the time-varying amplitude and frequency. The accuracy analysis of the GLPP is presented, and it is shown that the derived minimax lower bounds differ only in constant factors from the optimal mean-squared error of the GLPP estimator. First, this demonstrates that the GLPP estimates attain the optimal convergence rates. Second, it proves that the derived lower bounds are attainable within constant factors, and that these lower bounds cannot in principle be decreased.

For the rectangular window and Gaussian observation errors the formulas of the GLPP coincide with the maximum likelihood estimator of the time-varying parametric amplitude and phase proposed and studied in [6,7]. However, a principal difference is that the GLPP estimator has a bias error caused by the systematic variation of the IF and amplitude, and this bias error increases with the window size. The leading order of the expected estimation error and the optimal window size minimizing the mean-squared error (MSE) are calculated in this paper. This bias error is not considered in [6,7], as only the parametric representation of the phase and amplitude is assumed; as a result, the variance of those estimates and the Cramer–Rao lower bounds are monotonically decreasing functions of the number of observations. It is found in [6,20] that the maximum likelihood estimates of the parametric time-varying amplitude and phase are decoupled. It is proved in this paper that this effect is valid also for the nonparametric biased GLPP estimates.

We wish to note that the emphasis in this paper is on the performance analysis of nonparametric amplitude and frequency estimation and not on algorithmic issues. The specific point of the nonparametric approach is window size optimization as a first step, and it is done in this paper. The second step assumes a development of algorithms with an adaptive, data-driven window size. It can be done for the GLPP on the basis of the approach developed in [11,14].

This paper is organized as follows. The proposition presenting the lower bounds and the "worst" phase and amplitude is given in Section 2. The GLPP as a nonparametric estimator of the time-varying IF and amplitude is presented in Section 3. The mean-squared errors of the estimates, as well as the window sizes optimal for the estimation of the IF, amplitude and their derivatives, are obtained in that section. Some simulation results are considered in Section 4. The proofs of the propositions are given in Appendices A and B.
2. Minimax lower bounds and "worst" phase and amplitude

In this section we consider an accuracy evaluation which is not algorithm dependent and which accurately reflects the information embedded in the observations. For this particular problem the so-called minimax lower bounds have been developed in the theory of nonparametric regression estimation (e.g. [8,19]). These minimax lower bounds play the role of universal accuracy measures which replace the standard Cramer–Rao lower bounds of parametric estimation. Finding similar minimax lower bounds for the IF and amplitude estimation is the main goal of this paper.

We assume that the amplitude A(t) and the phase φ(t) belong to the nonparametric classes F_A(m_A) and F_φ(m_φ), and that ω̂(t) and â(t) are arbitrary estimates of the IF Ω(t) and the amplitude A(t), respectively. Then the corresponding mean-squared errors (MSE) of estimation are of the form

r_φ(ω̂(t), φ(t), A(t)) = E(ω̂(t) − Ω(t))²,
r_A(â(t), φ(t), A(t)) = E(â(t) − A(t))²,

where the arguments of the risks r_φ and r_A indicate the dependence of the MSE on the estimates ω̂(t) and â(t) as well as on the phase φ(t) and the amplitude
A(t) to be estimated. The standard way to eliminate the dependence on φ(t) and A(t), and thus to come to a situation where one can evaluate the quality of the estimate itself, is to pass from the risks at fixed φ(t) and A(t) to the uniform risks over the families F_φ, F_A of signals, i.e. to the quantities

r_φ(ω̂(t), F_φ, F_A) = sup_{φ∈F_φ, A∈F_A} r_φ(ω̂(t), φ(t), A(t)),
r_A(â(t), F_φ, F_A) = sup_{φ∈F_φ, A∈F_A} r_A(â(t), φ(t), A(t)).

For fixed families F_φ and F_A one can define the optimal minimax risks as follows:

r*_φ(F_φ, F_A) = inf_{ω̂(t)} sup_{φ∈F_φ, A∈F_A} r_φ(ω̂(t), φ(t), A(t)),
r*_A(F_φ, F_A) = inf_{â(t)} sup_{φ∈F_φ, A∈F_A} r_A(â(t), φ(t), A(t)).   (6)

These optimal risks determine the precise minimax lower bounds, as for any estimates ω̂(t) and â(t) over the classes F_φ, F_A

r_φ(ω̂(t), F_φ, F_A) ≥ r*_φ(F_φ, F_A),
r_A(â(t), F_φ, F_A) ≥ r*_A(F_φ, F_A).   (7)

As a solution of problem (6) we obtain the minimax lower bounds r*_φ(F_φ, F_A), r*_A(F_φ, F_A) and the corresponding "worst" in F_φ, F_A functions φ(t) and A(t) for which these lower bounds are achieved. These "worst" phase and amplitude can be applied as test functions for a comparison of algorithms over the classes F_φ, F_A.

However, this study concerns a more general problem. We consider estimation of the IF and amplitude together with their derivatives, and we find minimax lower bounds for the corresponding mean-squared errors

r_{φk}(t) = E(ω̂^{(k−1)}(t) − Ω^{(k−1)}(t))²,  k = 1, ..., m_φ − 1,
r_{Ak}(t) = E(â^{(k)}(t) − A^{(k)}(t))²,  k = 0, 1, ..., m_A − 1.   (8)

Here ω̂^{(k−1)}(t) and â^{(k)}(t) are estimates of Ω^{(k−1)}(t) and A^{(k)}(t), respectively. The values of the derivative orders m_φ and m_A, as well as the upper bounds L_φ(m_φ) and L_A(m_A) of the derivatives, are assumed to be a priori information about the processes φ(t) and A(t), or are used as parameters in the accuracy analysis.

Proposition 1. Let ω̂^{(k−1)}(t) be an arbitrary estimator of Ω^{(k−1)}(t), 1 ≤ k ≤ m_φ − 1, and â^{(k)}(t) be an arbitrary estimator of A^{(k)}(t), 0 ≤ k ≤ m_A − 1. Let these estimators be functions of the observations {y(sT)} in (2). Then:

(1) For any fixed t = t₀ the following inequalities hold:

sup_{φ∈F_φ(m_φ)} r_{φk}(t₀) ≥ K_{km_φ} ((L_φ(m_φ))^{2k+1} (Tσ²/A²(t₀))^{m_φ−k})^{2/(2m_φ+1)},  1 ≤ k ≤ m_φ − 1,   (9)

sup_{A∈F_A(m_A)} r_{Ak}(t₀) ≥ K_{km_A} ((L_A(m_A))^{2k+1} (Tσ²)^{m_A−k})^{2/(2m_A+1)},  0 ≤ k ≤ m_A − 1,   (10)

as T → 0 and N → ∞. Here K_{km} are finite constants depending only on k and m.

(2) The minimax lower bound (9) for estimation of the (k−1)th derivative of the IF is achieved on the following class of "worst" phases:

φ(t) = φ₀(t) + θ h_φ^k ψ_{km_φ}(h_φ^{−1}(t − t₀)),  1 ≤ k ≤ m_φ − 1,   (11)

h_φ^{m_φ−k} = θ · k*_{km_φ}/L_φ(m_φ),  θ = Ω^{(k−1)}(t₀),   (12)

where φ₀(t) is an arbitrary polynomial of the power k − 1. The minimax lower bound (10) for estimation of the kth derivative of the amplitude is achieved on the following class of "worst" time-varying amplitudes:

A(t) = A₀(t) + θ h_A^k ψ_{km_A}(h_A^{−1}(t − t₀)),   (13)

h_A^{m_A−k} = θ · k*_{km_A}/L_A(m_A),  θ = A^{(k)}(t₀),  0 ≤ k ≤ m_A − 1,   (14)

where A₀(t) is an arbitrary polynomial of the power k − 1, and A₀(t) = 0 for k = 0.
(3) The function ψ_{km} in part (2) is defined as a solution of the optimization problem

inf ∫_{−1}^{1} ψ²_{km}(u) du,  ψ^{(k)}_{km}(0) = 1,
sup_u |ψ^{(m)}_{km}(u)| = κ,  ψ_{km}(u) = 0 ∀ |u| ≥ 1,   (15)

where κ is a parameter, and k*_{km} in (12) and (14) is found as follows:

k*_{km} = arg min_κ { κ^{2k+1} ( 2 ∫_{−1}^{1} ψ²_{km}(u) du )^{m−k} }.   (16)

The proof of Proposition 1 is outlined in Appendix A.

Comments on Proposition 1:

(1) The minimax lower bounds on the right-hand sides of inequalities (9) and (10) give the estimation accuracy as an explicit function of the parameters T, σ²/|A|², and L. These lower bounds cannot be improved for the classes F_φ(m_φ) and F_A(m_A). The accuracy naturally depends on the orders m_φ and m_A determining the classes F_φ(m_φ) and F_A(m_A) and on the order k of the derivatives Ω^{(k−1)} and A^{(k)} to be estimated. The constants K_{km} as well as the functions ψ_{km} depend only on the values of k and m. It is important to emphasize also that the derived asymptotic minimax lower bounds for the IF and the amplitude estimates are decoupled: the lower bounds for the IF and amplitude estimates are independent even if we estimate the IF and amplitude simultaneously.

(2) Consider some particular results concerning the lower bounds (9) and (10) and the corresponding "worst" phase and amplitude. Let the IF be estimated (k = 1) and m_φ = 2, i.e.

|Ω̇(t)| ≤ L_φ(2).

Then calculations give the lower bound (9) for the IF estimation as follows:

sup_{φ∈F_φ(2)} E(ω̂(t₀) − Ω(t₀))² ≥ K₁₂ ((L_φ(2))³ Tσ²/A²(t₀))^{2/5},  K₁₂ ≈ 0.2968.   (17)

The "worst" phase is of the form

φ(t) = φ₀ + Ω(t₀) · h_φ ψ₁₂(h_φ^{−1}(t − t₀)),
h_φ = Ω(t₀) · k*₁₂/L_φ(2),  k*₁₂ = 1 + √2.   (18)

Here we use the subscript φ in order to indicate that h_φ gives a dilation of the phase function. Differentiation of (18) gives for the "worst" IF

Ω(t) = Ω(t₀) · ψ̇₁₂(h_φ^{−1}(t − t₀)).

The formula for the function ψ₁₂ is as follows [16]:

ψ₁₂(u) = { u − 0.5κu²,   0 ≤ u ≤ 1/√2,
           0.5κ(1 − u)²,  1/√2 ≤ u ≤ 1,   (19)

and for the derivative it gives

ψ̇₁₂(u) = { 1 − κu,      0 ≤ u ≤ 1/√2,
            −κ(1 − u),   1/√2 ≤ u ≤ 1,   (20)

where κ = 1 + √2. Functions (19) and (20) are depicted for h_φ = 1 in Figs. 1a and b. Values h_φ > 1 result in a natural compression of the IF and the phase. Finally, we wish to note that the phase and the IF are the "worst" for estimation of the IF at one particular time instant t = t₀: Ω(t₀) in (18) is the value of the IF to be estimated, while the function ψ₁₂ centred at t = t₀ determines a neighbourhood which is "worst" for estimation of this IF value.

(3) Now consider similar results concerning the lower bound (10) for estimation of the amplitude. Let the amplitude be estimated (k = 0) and m_A = 1, i.e. |Ȧ(t)| ≤ L_A(1). Then calculations give K₀₁ ≈ 0.2845 and the lower bound (10) as follows:

sup_{A∈F_A(1)} E(â(t₀) − A(t₀))² ≥ 0.2845 · (L_A(1) Tσ²)^{2/3}.   (21)
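The piece-wise polynomial branches of (19) and (20) are straightforward to tabulate. The sketch below (an illustrative NumPy implementation of the two branch formulas with κ = 1 + √2; the function names are ours) checks that ψ₁₂ and its derivative are continuous at the switch point u = 1/√2 and that the normalization ψ̇₁₂(0) = 1 from (15) holds:

```python
import numpy as np

KAPPA = 1.0 + np.sqrt(2.0)   # k* for k = 1, m = 2

def psi_12(u):
    """'Worst' phase shape of (19) on [0, 1]; zero outside."""
    u = np.asarray(u, dtype=float)
    out = np.zeros_like(u)
    a = (0 <= u) & (u <= 1 / np.sqrt(2))
    b = (1 / np.sqrt(2) < u) & (u <= 1)
    out[a] = u[a] - 0.5 * KAPPA * u[a] ** 2
    out[b] = 0.5 * KAPPA * (1 - u[b]) ** 2
    return out

def dpsi_12(u):
    """Its derivative (20): the 'worst' IF shape."""
    u = np.asarray(u, dtype=float)
    out = np.zeros_like(u)
    a = (0 <= u) & (u <= 1 / np.sqrt(2))
    b = (1 / np.sqrt(2) < u) & (u <= 1)
    out[a] = 1 - KAPPA * u[a]
    out[b] = -KAPPA * (1 - u[b])
    return out

us = 1 / np.sqrt(2)
print(psi_12([us - 1e-9]), psi_12([us + 1e-9]))   # continuous at 1/sqrt(2)
print(dpsi_12([0.0]))                             # the normalization psi'(0) = 1
```

Both branches of (19) meet at the value (√2 − 1)/(2√2) at u = 1/√2, and the derivative branches meet at −1/√2, so the "worst" phase is continuously differentiable on its support.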
Fig. 1. The phase (a) and frequency (b) "worst" for the IF estimation, k = 1 and m = 2.
The "worst" amplitude is of the form

A(t) = A₀ · ψ₀₁(h_A^{−1}(t − t₀)),  h_A = A₀ · k*₀₁/L_A(1),  k*₀₁ = 1,   (22)

where A₀ is a constant and A(t₀) = A₀. Here we use the subscript A for h_A in order to indicate that h_A gives a dilation of the amplitude function. It can be shown that the function ψ₀₁ is triangular:

ψ₀₁(u) = { 1 − u,  0 ≤ u ≤ 1,
           1 + u,  −1 ≤ u ≤ 0.   (23)

The interpretation of A(t) in (22) as the "worst" amplitude function is similar to that given above for the "worst" phase function: A(t₀) = A₀ in (22) is the value of the amplitude to be estimated, while the function ψ₀₁ centred at t = t₀ determines a neighbourhood which is "worst" for the estimation of this amplitude value.

(4) Let us make some notes about the calculation of the constants K_{km} and functions ψ_{km} in the general case. Denote ψ_{km} = x and rewrite problem (15) in the equivalent form:
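A small numeric sketch (hypothetical values, not from the paper) of the triangular "worst" amplitude (22)-(23): on its support the worst function changes at exactly the maximal rate L_A(1) allowed by the class F_A(1), which is what makes it hard to estimate; the corresponding lower bound (21) is also evaluated:

```python
import numpy as np

def psi_01(u):
    """Triangular 'worst' amplitude shape (23): 1 - |u| on [-1, 1], else 0."""
    u = np.asarray(u, dtype=float)
    return np.where(np.abs(u) <= 1, 1 - np.abs(u), 0.0)

# Worst amplitude around t0, eq. (22), with k*_{01} = 1
A0, L_A, t0 = 2.0, 4.0, 0.5
h_A = A0 * 1.0 / L_A                       # dilation of the neighbourhood
t = np.linspace(t0 - 2 * h_A, t0 + 2 * h_A, 1001)
A_worst = A0 * psi_01((t - t0) / h_A)

# |dA/dt| = A0/h_A = L_A on the support: the class constraint is active
slope = np.max(np.abs(np.diff(A_worst) / np.diff(t)))
print(slope)                               # ~ L_A

# Lower bound (21) for estimating A(t0): k = 0, m_A = 1
T, sigma2 = 1e-3, 0.25
bound = 0.2845 * (L_A * T * sigma2) ** (2 / 3)
print(bound)
```

The maximal slope of the triangle equals A₀/h_A = L_A(1), confirming that the "worst" amplitude saturates the derivative bound defining F_A(1).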
inf_U ∫_{−1}^{1} x²(t) dt,  x^{(m)}(t) = U(t),  |U(t)| ≤ κ,
x^{(k)}(0) = 1,  x(t) = 0 ∀ |t| ≥ 1.   (24)

It is clear that for the dynamic system x^{(m)}(t) = U(t), Eq. (24) defines an optimal terminal control problem with restrictions on the control signal U(t) and the coordinate x(t). It is well known from optimal control theory (see e.g. [3]) that for problem (24) the optimal control signal U is a piece-wise constant signal which takes only the values ±κ. The number of its switches between ±κ is less than or equal to m − 1 on the time interval [−1, 1]. Thus the well-developed technique of terminal optimal control can be applied in order to find the optimal x(t) and then to calculate the criterion ∫_{−1}^{1} x²(t) dt as a function of κ. After that the constant K_{km} is calculated by routine minimization with respect to the parameter
κ in the problem

K_{km} = b_{km} · [ min_κ κ^{2k+1} ( 2 ∫_{−1}^{1} x²(t) dt )^{m−k} ]^{2/(2m+1)},   (25)

where b_{km} is a constant given in Appendix A by formula (A.11). The value of κ minimizing (25) determines k*_{km} in (16).

(5) Proposition 1 presents a generalization of the results given in [16], where the minimax lower bound (9) is derived provided that the amplitude A is time invariant and known. The constants K_{km} and the corresponding ψ_{km}(u) for k = 1 and m = 2, 3, 4 are derived in [16].

(6) About a comparison of the lower bound (9) with a Cramer–Rao lower bound. We notice immediately that the latter contains the number of observations, which is one of the main parameters of any Cramer–Rao lower bound, while the minimax lower bound (9) depends on L_φ(m_φ), which does not appear in a Cramer–Rao lower bound. Thus, as was mentioned in the introduction, a comparison of these lower bounds is impossible. The essence of the problem is that in nonparametric estimation, in order to obtain the best lower bound, an optimization of the bias–variance trade-off is performed, and this optimization in fact determines an optimal number of observations. The effect of the bias–variance trade-off optimization, embedded in the derivation of the minimax lower bounds, can be tracked explicitly in the accuracy analysis of specific algorithms; in particular, it can be seen in Section 3 of this paper, where the analysis of the GLPP is presented.

It is useful also to mention the following fact, which gives another idea of why the Cramer–Rao and nonparametric lower bounds are so different. As discussed in the introduction, the polynomial phase and amplitude can be treated as elements of the nonparametric classes F_φ(m_φ) and F_A(m_A), but it should be emphasized that they are the "best" elements of these classes. Here "best" means that these polynomial functions allow unbiased, minimum-variance estimates. In contrast, the "worst" elements of the classes F_φ(m_φ) and F_A(m_A) are used in the nonparametric minimax lower bounds. Thus, the Cramer–Rao lower bounds obtained for the polynomial phase and amplitude and the minimax lower bounds considered in this paper are addressed to different elements of the classes F_φ(m_φ) and F_A(m_A).

3. Generalized local polynomial periodogram (GLPP): accuracy analysis

For IF and time-varying amplitude estimation the LPA is applied in the following form. First, truncated Taylor series are used in order to approximate the time-varying phase and amplitude. Second, these expansions are exploited locally, to approximate the phase and amplitude on a small time interval only. In fact, the local expansion is used to calculate the estimates of the IF and time-varying amplitude for a single time instant; for the next time instant the calculations are repeated. This sliding-window estimation determines the nonparametric character of the point-wise estimation. The localization of the estimates is ensured by a weight function (window) which discounts observations outside a neighbourhood of the centre t₀ of the approximation. The point-wise LPA ensures reproduction properties of the estimator with respect to polynomial components of the IF and the time-varying amplitude. It should be emphasized that at the same time the estimated IF and time-varying amplitude are not assumed to be polynomial functions of time. This is a principal difference between the nonparametric LPA and the parametric estimators of the polynomial phase and amplitude, where constant coefficients of a polynomial series are estimated.

The LPA of the time-varying phase was used in [10,13] in order to derive the local polynomial periodogram (LPP) as a high-accuracy nonparametric estimator of an arbitrary time-varying IF. A generalization of these results to simultaneous nonparametric estimation of the IF and time-varying amplitude was done in [12], where several forms of the estimates were developed. In this paper we use the accuracy analysis of the GLPP proposed in [12] mainly in order to
demonstrate that the minimax lower bounds derived in the previous section are achievable, in the sense that we are able to obtain an accuracy close enough to the corresponding lower bounds.

Let us introduce the polynomial vectors of the powers m_φ − 1 and m_A − 1:

U_φ(u) = (1, u, ..., u^{m_φ−1}/(m_φ − 1)!)′,
U_A(u) = (1, u, ..., u^{m_A−1}/(m_A − 1)!)′.   (26)

Then the polynomials θ(u, C_φ) and A(u, C_A),

θ(u, C_φ) = U_φ′(u) C_φ,  C_φ ∈ R^{m_φ},
C_φ = (C_{φ0}, C_{φ1}, ..., C_{φ,m_φ−1})′,   (27)

and

A(u, C_A) = U_A′(u) C_A,  C_A ∈ R^{m_A},
C_A = (C_{A0}, C_{A1}, ..., C_{A,m_A−1})′,   (28)

are used in the considered GLPP for the LPA of the phase and amplitude with the powers m_φ − 1 and m_A − 1, respectively.

The GLPP, with notation I(C_φ), is defined as follows [12]:

I(C_φ) = Z_h′(C_φ, t₀) Φ_h^{−1} Z_h(C_φ, t₀),
Z_h(C_φ, t₀) = Σ_s ρ_h(sT) U_A(sT) Re[y(t₀ + sT) exp(−jθ(sT, C_φ))],
Φ_h = Σ_s ρ_h(sT) U_A(sT) U_A′(sT),   (29)

where ρ_h(u) ≥ 0 is a window function and h > 0 is a parameter scaling the window size. It is assumed that ρ_h(sT) = (T/h) ρ(sT/h) with ∫_{−∞}^{∞} ρ(u) du = 1, so that Σ_s ρ_h(sT) → 1 as h/T → ∞.

Then the estimates of the IF and phase are obtained as a solution of the optimization problem:

Ĉ_φ = argmax_{C_φ ∈ Q_φ} I(C_φ),
Q_φ = {C_φ: 0 ≤ C_{φs} < 2π s!/T^s,  s = 0, 1, ..., m_φ − 1}.   (30)
Let Ĉ_{φk} be an element of the vector Ĉ_φ = (Ĉ_{φ0}, Ĉ_{φ1}, ..., Ĉ_{φ,m_φ−1})′; then Ĉ_{φk} is an estimate of Ω^{(k−1)}(t₀) = ∂^k φ(t₀)/∂t^k for k = 1, ..., m_φ − 1, and Ĉ_{φ0} is an estimator of the phase φ(t₀). The estimator of the amplitude is of the form [12]

Ĉ_A = Φ_h^{−1} Z_h(Ĉ_φ, t₀),   (31)

where the components of the vector Ĉ_A = (Ĉ_{A0}, Ĉ_{A1}, ..., Ĉ_{A,m_A−1})′ yield the estimates: Ĉ_{A0} for the amplitude A(t₀) and Ĉ_{As} for the derivatives A^{(s)}(t₀), s = 1, 2, ..., m_A − 1.

Algorithm (29)–(31) is derived in [12] by minimizing the loss function

V_h = Σ_s ρ_h(sT) |y(t₀ + sT) − U_A′(sT) C_A exp(j U_φ′(sT) C_φ)|²,   (32)

where C_φ and C_A are given by (27) and (28). It is obvious that for the rectangular window ρ_h the loss function V_h is the same as the maximum likelihood loss function provided that the noise is Gaussian. Then, for a fixed value of the time instant t₀, estimates (29)–(31) coincide with the estimates studied in [6,7,20]. It was mentioned in the beginning of this section that the main difference of the LPA estimates is that the minimization of V_h is used in order to obtain the estimate for the single time instant t₀ only, and for the next time instant the calculations are repeated. This sliding-window estimation determines the nonparametric character of this point-wise estimation, which is very different from the parametric estimates considered in [6,7,20]. However, using the same or a similar loss function V_h immediately means that for every t₀ the complexity of the nonparametric algorithm (29)–(31) is just the same as the complexity of the corresponding parametric algorithms. It is also clear that all of the implementation methods developed for the parametric algorithms are fully applicable to the nonparametric LPA estimation.

The main result of this section concerns asymptotic formulas for the covariance and bias of the estimation errors, and, as discussed in the introduction, these results are different from those well known for the parametric settings of the problem.
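For the lowest-order case m_φ = 2, m_A = 1 (locally linear phase, locally constant amplitude), maximizing I(C_φ) over the phase offset reduces the search to a windowed periodogram in the single frequency variable, and the amplitude estimate reduces to a demodulated weighted mean. The sketch below illustrates this special case on a simulated chirp (window, grid and signal parameters are invented for the example, and a rectangular window is assumed):

```python
import numpy as np

rng = np.random.default_rng(1)

T = 1e-3                       # sampling interval
t0 = 0.5                       # estimation point
h = 0.1                        # window size
f0, c = 50.0, 20.0             # chirp: IF(t) = 2*pi*(f0 + c*t) rad/s

# Observations y(t0 + sT) on the window |sT| <= h (rectangular rho)
s = np.arange(-int(h / T), int(h / T) + 1)
u = s * T
phi = 2 * np.pi * (f0 * (t0 + u) + 0.5 * c * (t0 + u) ** 2)
y = np.exp(1j * phi) + (rng.normal(0, 0.1, u.size)
                        + 1j * rng.normal(0, 0.1, u.size))

# Local periodogram: for locally linear phase, maximizing over the phase
# offset leaves |sum_s y(t0+sT) e^{-j w sT}|^2, maximized over frequency w
w_grid = 2 * np.pi * np.arange(40.0, 80.0, 0.25)
I = np.abs(np.exp(-1j * np.outer(w_grid, u)) @ y) ** 2
w_hat = w_grid[np.argmax(I)]

# Amplitude estimate: demodulated weighted mean (Phi_h is scalar here)
A_hat = np.abs(np.mean(y * np.exp(-1j * w_hat * u)))

print(w_hat / (2 * np.pi))     # IF estimate in Hz; true value f0 + c*t0
print(A_hat)                   # amplitude estimate; true value 1
```

This is exactly the sliding-window, point-wise mode described above: to track the IF over time, the same maximization is repeated for each new centre t₀.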
Let the estimation errors be given by the vectors

Δω̂(t₀, h) = (Δω̂₁(t₀, h), Δω̂₂(t₀, h), ..., Δω̂_{m_φ−1}(t₀, h))′ ∈ R^{m_φ−1},
ΔÂ(t₀, h) = (ΔÂ₀(t₀, h), ΔÂ₁(t₀, h), ..., ΔÂ_{m_A−1}(t₀, h))′ ∈ R^{m_A},   (33)
Δω̂_k(t₀, h) = Ω^{(k−1)}(t₀) − Ĉ_{φk},  k = 1, ..., m_φ − 1,
ΔÂ_k(t₀, h) = A^{(k)}(t₀) − Ĉ_{Ak},  k = 0, ..., m_A − 1,

where Ω(t₀), Ω̇(t₀), ..., Ω^{(m_φ−2)}(t₀) and A(t₀), Ȧ(t₀), ..., A^{(m_A−1)}(t₀) denote the true values of the IF, the amplitude and their derivatives at t = t₀.

Proposition 2. Let ω̂(t₀, h) and Â(t₀, h) be given by (30) and (31), with T → 0, h → 0, T/h^{2m_φ−1} → 0, T/h^{2m_A−1} → 0, and φ ∈ F_φ(m_φ), A ∈ F_A(m_A). Then the covariances of the estimates ω̂ and Â are as follows:

cov(S_φ(h) Δω̂(t₀, h)) = (σ²T / (2A²(t₀)h)) W_φ   (34)

and

cov(S_A(h) ΔÂ(t₀, h)) = (σ²T / (2h)) W_A,   (35)

and the upper bounds for the estimation bias are given by the formulas

sup_{φ∈F_φ} |E(S_φ(h) Δω̂(t₀, h))| ≤ h^{m_φ} B_φ L_φ(m_φ),   (36)

sup_{A∈F_A} |E(S_A(h) ΔÂ(t₀, h))| ≤ h^{m_A} B_A L_A(m_A),   (37)

where S_φ = diag(h, h², ..., h^{m_φ−1}), S_A = diag(1, h, ..., h^{m_A−1}), and the matrices W_φ and W_A as well as the vectors B_φ and B_A are given by formulas (B.18)–(B.21) in Appendix B. The vector inequalities |x| ≤ b in (36) and (37) mean |x_s| ≤ b_s for all s.
Proposition 2 is proved in Appendix B.

Comments on Proposition 2:

(1) It follows from (34) and (35) that

cov(Δω̂_k(t₀, h)) = (σ²T / (2A²(t₀)h^{2k+1})) (W_φ)_{kk},  k = 1, 2, ..., m_φ − 1,
cov(ΔÂ_k(t₀, h)) = (σ²T / (2h^{2k+1})) (W_A)_{k+1,k+1},  k = 0, 1, ..., m_A − 1,   (38)

where (W_φ)_{kk} and (W_A)_{k+1,k+1} are the corresponding diagonal elements of the matrices W_φ and W_A. Eqs. (36) and (37) give for the bias

sup_{φ∈F_φ(m_φ)} |E(Δω̂_k(t₀, h))| ≤ h^{m_φ−k} B_{φk} L_φ(m_φ),
sup_{A∈F_A(m_A)} |E(ΔÂ_k(t₀, h))| ≤ h^{m_A−k} B_{A,k+1} L_A(m_A),   (39)

where B_{φk} and B_{A,k+1} are the corresponding elements of the vectors B_φ and B_A. Then the upper bounds of the corresponding MSE are as follows:

sup_{φ∈F_φ(m_φ)} r_{φk}(t₀) ≤ (σ²T / (2A²(t₀)h^{2k+1})) (W_φ)_{kk} + (h^{m_φ−k} B_{φk} L_φ(m_φ))²,  k = 1, 2, ..., m_φ − 1,   (40)

sup_{A∈F_A(m_A)} r_{Ak}(t₀) ≤ (σ²T / (2h^{2k+1})) (W_A)_{k+1,k+1} + (h^{m_A−k} B_{A,k+1} L_A(m_A))²,  k = 0, 1, ..., m_A − 1.   (41)

In order to find the optimal bandwidth h we differentiate the right-hand sides of (40) and (41) with respect to h and set these derivatives equal to zero. This minimization in (40) gives

sup_{φ∈F_φ(m_φ)} r_{φk}(t₀) ≤ M_{km_φ} (L_φ^{2k+1}(m_φ) (Tσ²/A²(t₀))^{m_φ−k})^{2/(2m_φ+1)},   (42)
KP \IKP > (= ) SI M P "(2m #1) ) IK P 4(m !k) P I>KP > B SI ) , (43) 2k#1
(= ) (2k#1) p hKP >"¹ S I ) , I 4(m !k)B A(t )¸ (m ) P PI P P k"1,2,2, m !1, P
(44)
where h_k is the optimal bandwidth found from minimization of the upper bound of the mean-squared risk. This choice of the scale parameter h determines the bias-variance trade-off usual for nonparametric estimation. The optimal h_k are different for the estimates of the IF and of its derivatives. Note also that minimization of the exact MSE and of its upper bound can in general give different values for the optimal bandwidth. A comparison of (42) against (9) shows that the MSE of the GLPP estimator differs from the minimax lower bound only in a constant factor depending on m_r and k:

K_{k,m_r} ≤ sup_{φ∈F_r(m_r)} r_{ωk}(t₀) / (L_r^{2k+1}(m_r)(Tσ²/A²(t₀))^{m_r−k})^{2/(2m_r+1)} ≤ M_{k,m_r}.   (45)
A formula similar to (45) is valid for the MSE of the amplitude estimate:

K_{k,m_A} ≤ sup_{A∈F_A(m_A)} r_{Ak}(t₀) / (L_A^{2k+1}(m_A)(Tσ²)^{m_A−k})^{2/(2m_A+1)} ≤ M_{k,m_A},

where

M_{k,m_A} = (2m_A+1) ((W_A)_{k+1,k+1}/(4(m_A−k)))^{2(m_A−k)/(2m_A+1)} (B_{A,k+1}²/(2k+1))^{(2k+1)/(2m_A+1)},   (46)

h_k^{2m_A+1} = Tσ²(W_A)_{k+1,k+1}(2k+1)/(4(m_A−k)B_{A,k+1}²L_A²(m_A)),  k = 0, 1, …, m_A − 1.   (47)
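As a sanity check on (44) and (47), here is a small sketch computing the optimal bandwidths. The rectangular-window constants (W_ω)₁₁ = 12, B_{ω1} = 3/16, (W_A)₁₁ = 1, B_{A1} = 1/4 are those quoted in Section 4; the numeric arguments are illustrative assumptions.

```python
# Optimal-bandwidth formulas (44) and (47), specialized to m_r = 2, k = 1
# and m_A = 1, k = 0 with the rectangular-window constants.
def h_if(T, sigma2, A, L_r, W=12.0, B=3.0/16.0, m_r=2, k=1):
    """Optimal bandwidth (44) for the IF estimate."""
    return (T * sigma2 * W * (2*k + 1)
            / (4.0 * (m_r - k) * B**2 * A**2 * L_r**2)) ** (1.0 / (2*m_r + 1))

def h_amp(T, sigma2, L_A, W=1.0, B=0.25, m_A=1, k=0):
    """Optimal bandwidth (47) for the amplitude estimate."""
    return (T * sigma2 * W * (2*k + 1)
            / (4.0 * (m_A - k) * B**2 * L_A**2)) ** (1.0 / (2*m_A + 1))

# For the rectangular window these reduce to the closed forms (53)-(54):
#   h_if  = 3.0314 * (T*sigma2 / (A^2 * L_r^2))^{1/5}
#   h_amp = (4*T*sigma2 / L_A^2)^{1/3}
print(h_if(1e-3, 0.04, 1.0, 50.0), h_amp(1e-3, 0.04, 50.0))
```

With unit arguments the functions return 256^{1/5} ≈ 3.0314 and 4^{1/3}, the coefficients of (53) and (54).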
Thus formulas (45) and (46) show that the GLPP is able to give estimates of the IF and the amplitude with MSE values which differ only in constants from the corresponding lower bounds derived in Proposition 1.

(2) Let us calculate the constants M_{k,m_r} and M_{k,m_A} for the symmetric windows ρ(u) = ρ(−u) and compare their values with the corresponding K_{k,m_r} and K_{k,m_A} in order to obtain an accurate comparison with the minimax lower bounds. For the IF estimation we consider the case m_r = 2 and k = 1, for which the corresponding K_{1,2} = 0.2968 is given in (17). It can be seen from (43) that for this case

M_{1,2} = 0.6466 · (∫ρ²(u)u² du)^{2/5} (∫ρ(u)|u|³ du)^{6/5} / (∫ρ(u)u² du)².   (48)

For the amplitude estimation we consider the case m_A = 1 and k = 0, for which the corresponding K_{0,1} = 0.2845 is given in (21). It can be shown from (46) that in this case

M_{0,1} = 1.1906 · (∫ρ²(u) du)^{2/3} (∫ρ(u)|u| du)^{2/3} / (∫ρ(u) du)².   (49)
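The constants (48) and (49) are easy to evaluate numerically. The sketch below does so for the four windows of Table 1 by a simple Riemann sum on |u| ≤ 1/2; the grid size is an arbitrary choice.

```python
import numpy as np

u = np.linspace(-0.5, 0.5, 100001)
du = u[1] - u[0]
integ = lambda f: np.sum(f) * du   # Riemann-sum approximation of the integral over [-1/2, 1/2]

def M_if(rho):
    # M_{1,2} of Eq. (48)
    return 0.6466 * integ(rho**2 * u**2)**0.4 * integ(rho * np.abs(u)**3)**1.2 \
           / integ(rho * u**2)**2

def M_amp(rho):
    # M_{0,1} of Eq. (49)
    return 1.1906 * integ(rho**2)**(2/3) * integ(rho * np.abs(u))**(2/3) \
           / integ(rho)**2

windows = {
    "rectangular": np.ones_like(u),
    "triangular":  2.0 * (1.0 - 2.0 * np.abs(u)),
    "quadratic":   1.5 * (1.0 - (2.0 * u)**2),
    "cosine":      (np.pi / 2.0) * np.cos(np.pi * u),
}
for name, rho in windows.items():
    print(name, round(M_if(rho), 4), round(M_amp(rho), 4))
```

The printed values match the entries of Table 1 to within rounding of the coefficients 0.6466 and 1.1906.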
Table 1 presents the values of the constants M_{1,2} and M_{0,1} for some popular windows. These values are quite close to the corresponding minimax lower bounds given by K_{1,2} = 0.2968 and K_{0,1} = 0.2845. It is interesting to note that there is no appreciable difference between the constants corresponding to the different window types. We have the simple formulas (48) and (49) for the constants M_{1,2} and M_{0,1} because the matrices W_ω and W_A in Proposition 2 depend only on the values of m_r, m_A and the window function ρ. Thus, once the order of the LPA is fixed, the constants M depend only on ρ.

(3) Formulas (44) and (47), as well as the accuracy analysis, show that the accuracy optimization requires different bandwidth values in algorithm (29)–(31) in order to get the best accuracy for estimation of the IF, the amplitude and their derivatives.

(4) For the rectangular window the integer part of the ratios h_k/T, N_k = ⌊h_k/T⌋, gives the optimal numbers of observations minimizing the
MSE. It follows from the above formulas that all h_k, N_k → ∞ as L_r(m_r), L_A(m_A) → 0, while sup_{φ∈F_r(m_r)} r_{ωk}(t₀) and sup_{A∈F_A(m_A)} r_{Ak}(t₀) → 0. This result reflects a quite trivial fact: as soon as L_r(m_r), L_A(m_A) → 0 the estimation bias disappears, while the optimal window size requires a larger and larger number of observations, which results in a corresponding decrease of the estimation variance. At the same time the nonparametric φ(t) and A(t), according to (4), become parametric polynomials and the standard Cramér–Rao lower bounds become applicable for the accuracy evaluation. In this case the minimax and Cramér–Rao lower bounds give the same but trivial results: the corresponding MSEs approach zero simultaneously as N_k → ∞.

Table 1
Factors K_{k,m} and M_{k,m} of the formulas for the optimal values of the estimation MSE

  Window type                                                M_{1,2} (K_{1,2} = 0.2968)   M_{0,1} (K_{0,1} = 0.2845)
1 Rectangular: ρ(u) = 1, |u| ≤ 1/2                           0.5383                       0.4725
2 Triangular: ρ(u) = 2(1 − 2|u|), |u| ≤ 1/2                  0.4972                       0.4368
3 Quadratic (Epanechnikov): ρ(u) = 3(1 − (2u)²)/2, |u| ≤ 1/2 0.4990                       0.4404
4 Cosine: ρ(u) = (π/2)cos(πu), |u| ≤ 1/2                     0.4981                       0.4353

4. Simulation

Here we illustrate how the "worst" phase φ(t) and amplitude A(t) can be used for accuracy evaluation over the classes of functions. Let us assume that we consider estimation of the IF provided that the phase φ(t) ∈ F_r(2) and A(t) ∈ F_A(1), i.e. the phase and amplitude are arbitrary nonparametric functions with sup_t |Ω'(t)| ≤ L_r(2) and sup_t |A'(t)| ≤ L_A(1). It means that we can use the formulas of Sections 2 and 3 with m_r = 2 and k = 1 for the IF and with m_A = 1 and k = 0 for the amplitude estimation. The considered minimax approach assumes that the phase φ(t) and amplitude A(t) are "worst" in a neighbourhood of the time instant t = t₀. Then we can represent the observation model as follows:
y(t₀ + sT) = r(t₀ + sT) + ε(t₀ + sT),  r(t) = A(t) exp(jφ(t)),   (50)
where φ(t) and A(t) are determined according to (18) and (22), respectively. The following parameter values are assumed in the simulation: Ω(t₀) = 100 1/s, σ = 0.2, T = 10⁻³ s, A₀ = 1. It is assumed for simplicity that h_r = h_A. It means that the "worst" amplitude and IF have an equal dilation, i.e. they are concentrated in the same neighbourhood of the time instant t₀. We denote this common width by h and consider it as a parameter of the simulation. Then

L_A(1) = A₀ L_r(2)/(Ω(t₀)(1 + √2)).   (51)

There is a one-to-one link between h and L_r(2) (and L_A(1)). Increasing L_r(2) results in decreasing the dilation h. This increase of L_r(2) compresses the "worst" phase φ(t) and amplitude A(t) and naturally makes the estimation more difficult. We assume in the GLPP that the window ρ is rectangular and that the orders m_r = 1 and m_A = 0 are minimal, corresponding to the assumptions about the phase and amplitude accepted above. Then the GLPP algorithm (29)–(31) can be represented in the following form:

(φ̂(t₀), Ω̂(t₀)) = arg min_{C_{r0}, C_{r1}} I(C_r),

I(C_r) = −[ (1/(2N_r + 1)) Σ_{s=−N_r}^{N_r} Re( y(t₀ + sT) exp(−j(C_{r0} + sT C_{r1})) ) ]²,
Fig. 2. The experimental SRMSE (marked by 'o') and the corresponding upper (D̄) and lower (D̲) bounds as functions of the dilation h for estimation of the IF (a) and the amplitude (b).
Â(t₀) = (1/(2N_A + 1)) Σ_{s=−N_A}^{N_A} Re[ y(t₀ + sT) exp(−j(Ĉ_{r0} + sT Ĉ_{r1})) ].   (52)
It is assumed here that different window lengths h_r = (2N_r + 1)T and h_A = (2N_A + 1)T are used for estimation of the IF and of the amplitude.
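In the estimator (52) with a rectangular window, the inner optimization over C_{r0} can be done in closed form: the squared correlation is maximized when C_{r0} equals the phase of the complex local correlation, leaving a one-dimensional search over the trial frequency C_{r1}. A minimal sketch of this reduction follows; the signal parameters and the frequency grid are illustrative assumptions.

```python
import numpy as np

def glpp_estimate(y, T, omega_grid):
    """Rectangular-window GLPP (52): scan the trial frequency, recover the
    phase C_r0 as the argument of the best complex correlation."""
    s = np.arange(len(y)) - len(y) // 2          # symmetric sample index s
    corr = np.array([np.mean(y * np.exp(-1j * om * s * T)) for om in omega_grid])
    i = np.argmax(np.abs(corr))
    omega_hat = omega_grid[i]                    # IF estimate
    phi_hat = np.angle(corr[i])                  # phase estimate C_r0
    amp_hat = np.abs(corr[i])                    # amplitude estimate, as in (52)
    return phi_hat, omega_hat, amp_hat

# Noiseless check: harmonic signal with constant IF and amplitude.
T = 1e-3
s = np.arange(-50, 51)
y = 1.0 * np.exp(1j * (0.3 + 100.0 * s * T))
phi, om, amp = glpp_estimate(y, T, np.linspace(0.0, 500.0, 2001))
print(phi, om, amp)
```

On this noiseless constant-frequency signal the estimator recovers the phase 0.3, the frequency 100 and the unit amplitude.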
For the window size selection we use formulas (44) and (47) for the optimal window sizes, which give, respectively:

(a) for estimation of the IF,

h_ω = 3.0314 (Tσ²/(A²(t₀)L_r²(2)))^{1/5},   (53)

with (W_ω)₁₁ = 12 and B_{ω1} = 0.1875;

(b) for estimation of the amplitude,

h_A = (4Tσ²/L_A²(1))^{1/3},   (54)

where (W_A)₁₁ = 1 and B_{A1} = 1/4.

Section 3 also provides the formulas for the accuracy analysis corresponding to the considered cases m_r = 2, k = 1 and m_A = 1, k = 0. Let us represent these expressions in the form in which they are used in the simulation. Let D̄_Ω = M_{1,2}(L_r³(2)Tσ²/A²(t₀))^{2/5} and D̲_Ω = K_{1,2}(L_r³(2)Tσ²/A²(t₀))^{2/5}; then inequality (45) takes the form

D̲_Ω ≤ E(Ω(t₀) − Ω̂(t₀, h_ω))² ≤ D̄_Ω.   (55)

Thus D̲_Ω and D̄_Ω provide theoretical lower and upper bounds of the mean-squared error. Note that K_{1,2} = 0.2968 and M_{1,2} = 0.5383. For the amplitude estimation formulas (46) give D̄_A = M_{0,1}(L_A(1)Tσ²)^{2/3} and D̲_A = K_{0,1}(L_A(1)Tσ²)^{2/3}; then

D̲_A ≤ E(A(t₀) − Â(t₀, h_A))² ≤ D̄_A.   (56)

Note that K_{0,1} = 0.2845 and M_{0,1} = 0.4725. In the simulation we reproduce the dilation effect by replacing the initial sampling period T by Th. It can then be verified that D̄ and D̲ do not depend on h. The simulation results for the accuracy estimation of the IF and the amplitude are given in Figs. 2a and b, where the square-root mean-squared error (SRMSE), averaged over M = 100 simulation runs, is depicted as a function of h. For the IF estimation the SRMSE has the form

SRMSE = [ (1/M) Σ_{j=1}^{M} (Ω(t₀) − Ω̂_j(t₀, h_ω))² ]^{1/2}.

The corresponding upper and lower bounds for the amplitude estimation are also given in Figs. 2a and b. The experimental values of the SRMSE for the IF and amplitude estimation are mainly located between the upper and lower bounds, and in this way the theoretical results are confirmed. For the IF estimation the experimental results show a clear tendency towards the lower bound.

5. Conclusions

The minimax lower bounds are derived for the MSE of nonparametric estimation of the time-varying IF and amplitude, along with their derivatives, provided that the time-varying phase and amplitude are arbitrary piece-wise differentiable functions of time. It is shown that the optimal choice of the window size in the GLPP estimates of the IF and amplitude, as well as of their derivatives, results in MSE values which differ from the corresponding minimax lower bounds only by constant factors.

Acknowledgements

The author would like to thank the anonymous referees for helpful and stimulating comments.

Appendix A
Proof of Proposition 1. The proof is mainly based on the approach and results developed in [16] for deriving the minimax lower bound for nonparametric estimation of the IF when the amplitude is time-invariant and known. As the generalization of the proof is straightforward, we only outline the basic arguments. As approximations of F_r(m_r) and F_A(m_A) we introduce two parametric families of functions P_r(m_r) and P_A(m_A) [15]:

P_r(m_r) = {φ_θ(t) = φ₀(t) + θ h^k ψ(h⁻¹(t − t₀)): |θ| ≤ ρ_θ},  1 ≤ k ≤ m_r − 1,   (A.1)
P_A(m_A) = {A_a(t) = a₀(t) + a h^k ψ(h⁻¹(t − t₀)): |a| ≤ ρ_a},  0 ≤ k ≤ m_A − 1,   (A.2)

where φ₀(t) and a₀(t) are polynomials of degree k − 1 for the corresponding k in (A.1) and (A.2), respectively, and a₀(t) = 0 for k = 0. It is clear that

P_r(m_r) ⊂ F_r(m_r),  P_A(m_A) ⊂ F_A(m_A);   (A.3)

then we apply the straightforward inequalities

sup_{φ∈F_r(m_r)} D_{ωk}(t₀) ≥ sup_{φ∈P_r(m_r)} D_{ωk}(t₀),   (A.4)

sup_{A∈F_A(m_A)} D_{Ak}(t₀) ≥ sup_{A∈P_A(m_A)} D_{Ak}(t₀).   (A.5)

In order to derive the right-hand sides of (A.4) and (A.5) we assume, to begin with, that in φ_θ(t) and A_a(t) only the constants θ and a are unknown. The Cramér–Rao lower bounds for the estimates of these parameters are obtained and then maximized over ψ. The obtained lower bounds do not depend on the polynomials φ₀(t) and a₀(t), and the maximization over ψ is used in order to find the "worst" phase and amplitude functions. Let z = (y(T), y(2T), …, y(NT)) be the vector of the observations, r_θ(sT) = A_a(sT)·(cos(φ_θ(sT)), sin(φ_θ(sT))), and u_s = (Re(y(sT)), Im(y(sT))). Then, according to (2) and (3), the corresponding probability density function is of the form

q(z|θ, a) = ∏_{s=1}^{N} N(u_s − r_θ(sT), Iσ²/2),

where N(0, Iσ²/2) represents the probability density function of the two-dimensional Gaussian random variable with zero mean and covariance matrix Iσ²/2. The Cramér–Rao lower bound is determined by the Fisher information matrix

J_F = [ J_F(θ)   J_F(θ, a) ; J_F(θ, a)   J_F(a) ],

where

J_F(θ) = ∫ (∂q(z|θ, a)/∂θ)² q⁻¹(z|θ, a) dz = 2A²h^{2k}σ⁻² Σ_{s=1}^{N} ψ²(h⁻¹(sT − t₀))
       ≈ (2A²/(Tσ²)) h^{2k} ∫ ψ²(h⁻¹(v − t₀)) dv = (2A²/(Tσ²)) h^{2k+1} ∫ ψ²(u) du,   (A.6)

J_F(a) = ∫ (∂q(z|θ, a)/∂a)² q⁻¹(z|θ, a) dz = 2h^{2k}σ⁻² Σ_{s=1}^{N} ψ²(h⁻¹(sT − t₀))
       ≈ (2/(Tσ²)) h^{2k} ∫ ψ²(h⁻¹(t − t₀)) dt = (2/(Tσ²)) h^{2k+1} ∫ ψ²(u) du,   (A.7)

and

J_F(θ, a) = ∫ (∂q(z|θ, a)/∂θ)(∂q(z|θ, a)/∂a) q⁻¹(z|θ, a) dz = 0.

It is assumed in the limit passages in (A.6) and (A.7) that h → 0 and T/h → 0. Thus the information matrix J_F is diagonal. Now it follows from formula (A.2) of Theorem A.1 of [16, p. 3242] that

sup_{|θ|≤ρ_θ} E_θ(θ̂_k(t₀) − θ)² ≥ κ(ρ_θ² J_F(θ))/J_F(θ),   (A.8)

sup_{|a|≤ρ_a} E_a(â_k(t₀) − a)² ≥ κ(ρ_a² J_F(a))/J_F(a),   (A.9)

where J_F(θ) and J_F(a) are given by (A.6) and (A.7). From this point the proof of the results of Proposition 1 concerning the IF estimation repeats the proof presented in [16, pp. 3243–3244].
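The limit passage in (A.6) can be checked numerically: the Riemann sum over the sampling grid approaches the integral form of the Fisher information as T/h → 0. The test function ψ and all parameter values below are illustrative assumptions.

```python
import numpy as np

def psi(u):
    # A smooth compactly supported test function (an assumption for the check).
    return np.where(np.abs(u) <= 1.0, 1.0 - u**2, 0.0)

A, sigma2, k = 2.0, 0.5, 1
T, h, t0 = 1e-4, 0.05, 0.0
s = np.arange(-10000, 10001)                      # grid covering the support of psi

# Sum form of J_F(theta) in (A.6)
J_sum = 2 * A**2 * h**(2*k) / sigma2 * np.sum(psi((s * T - t0) / h)**2)

# Integral form of J_F(theta) in (A.6)
u = np.linspace(-1.0, 1.0, 100001)
J_int = 2 * A**2 / (T * sigma2) * h**(2*k + 1) * np.sum(psi(u)**2) * (u[1] - u[0])
print(J_sum, J_int)
```

With T/h = 0.002 the two expressions already agree to a small fraction of a percent, which is the content of the limit passage.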
In our results we use the following function: let κ(q): [0, ∞) → [0, 1) be the monotone increasing function described by its inverse q = q(κ),

q = arccos(−√κ)/√(1 − κ) − π/2,  κ ∈ [0, 1);   (A.10)

then

b_{k,m} = max_{w>0} w^{−(2k+1)/(2m+1)} κ(w).   (A.11)

The results of Proposition 1 concerning the amplitude estimation are obtained by a slight modification of the calculations, following the same basic steps.
Appendix B

Proof of Proposition 2. The technique used in this analysis is a modification of the one developed in [10,13]. It is based on the assumption that all estimation errors are small. Linearized equations for these estimation errors are then derived, and they are used for calculation of the bias and the variance. These calculations are quite bulky and only the basic steps are presented here.

(1) The Taylor series for φ(t₀ + sT) and A(t₀ + sT), with remainder terms in Lagrange's form, are used as follows:

φ(t₀ + sT) = U_r(sT)C̄_r + Δφ(t₀, sT),   (B.1)

C̄_r = (φ(t₀), Ω(t₀), …, Ω^{(m_r−2)}(t₀)),

where U_r(sT) is given by (26) and Δφ is the residual of the polynomial approximation of φ(t₀ + sT) by U_r(sT)C̄_r. Then

Δφ(t₀, sT) = ((sT)^{m_r}/m_r!) Ω^{(m_r−1)}(ξ),  ξ = t₀ + λsT,  0 ≤ λ ≤ 1,

and according to (4)

|Δφ(t₀, sT)| ≤ (|sT|^{m_r}/m_r!) L_r(m_r).   (B.2)
In a similar way, for the time-varying amplitude,

A(t₀ + sT) = U_A(sT)Ā(t₀) + ΔA(t₀, sT),   (B.3)

where U_A(sT) is given by (26) and ΔA(t₀, sT) is the residual of the polynomial approximation of A(t₀ + sT) by U_A(sT)Ā(t₀), with

|ΔA(t₀, sT)| ≤ (|sT|^{m_A}/m_A!) L_A(m_A).   (B.4)

(2) Algorithm (29)–(31) is derived by minimization of the loss function (32):

V_h = Σ_s ρ_h(sT) |y(t₀ + sT) − U_A(sT)C_A exp(jU_r(sT)C_r)|².   (B.5)

Then the estimates Ĉ_r, Ĉ_A in (30) and (31) are a solution of the equations

∂V_h/∂C_r = 0,  ∂V_h/∂C_A = 0.   (B.6)

(3) Substituting (B.1) and (B.3) into (B.6) and linearizing the derivatives ∂V_h/∂C_r, ∂V_h/∂C_A with respect to the estimation errors

ΔC_r = C̄_r − Ĉ_r,  ΔC_A = Ā − Ĉ_A,

assumed to be small, results in the following set of linear equations:

Σ_s ρ_h Wᵀ I W ΔC = e − 2 Σ_s ρ_h Wᵀ (AΔφ, ΔA)ᵀ,   (B.7)

W = [ U_r(sT)  0 ; 0  U_A(sT) ],  ΔC = (ΔC_r, ΔC_A)ᵀ,  I = 2 · diag(A², 1),

e = Σ_s ρ_h Wᵀ { (jA e^{−jφ}, −e^{−jφ})ᵀ ε + (−jA e^{jφ}, −e^{jφ})ᵀ ε* }.   (B.8)
In order to simplify the notation we omit some of the arguments in (B.7) and (B.8); it is emphasized, however, that A, φ and ε enter the equations with the argument (t₀ + sT), and W with the argument sT.

(4) Calculation of the first two moments gives formulas for the bias and the variance of the estimation
errors. It can be shown that

cov(e) = E(e eᴴ) = σ²Φ₁,  Φ₁ = Σ_s ρ_h² Wᵀ I W,   (B.9)

and then we obtain from (B.7) the following formula for the covariance:

cov(ΔC) = (σ²/2) Φ⁻¹ Φ₁ Φ⁻¹,   (B.10)

Φ = Σ_s ρ_h [ U_rᵀU_r A²  0 ; 0  U_AᵀU_A ],  Φ₁ = Σ_s ρ_h² [ U_rᵀU_r A²  0 ; 0  U_AᵀU_A ],

and for the bias

E(ΔC) = −Φ⁻¹ Σ_s ρ_h Wᵀ (AΔφ, ΔA)ᵀ.   (B.11)

As the matrices Φ and Φ₁ are block diagonal, the equations are decoupled and we obtain separate equations for the covariances of the estimates of the phase and of the amplitude. Further calculations give the following result for the IF estimation errors:

cov(Δφ̂(t₀, h)) = (σ²/2) Q⁻¹ [ Φ_{1ω} − (g_{1ω}g_ωᵀ + g_ω g_{1ω}ᵀ)/g₀ + g_{10} g_ω g_ωᵀ/g₀² ] Q⁻¹,

Q = Φ_ω − g_ω g_ωᵀ/g₀,   (B.12)

where

g_ω = Σ_s ρ_h U_{ωs} A²,  g_{1ω} = Σ_s ρ_h² U_{ωs} A²,

U_{ωs} = (sT, (sT)²/2, …, (sT)^{m_r−1}/(m_r − 1)!),   (B.13)

Φ_ω = Σ_s ρ_h U_{ωs}U_{ωs}ᵀ A²,  Φ_{1ω} = Σ_s ρ_h² U_{ωs}U_{ωs}ᵀ A²,

g₀ = Σ_s ρ_h A²,  g_{10} = Σ_s ρ_h² A².

The formula for the bias of the IF estimate follows from (B.11) in the form

E(Δφ̂(t₀, h)) = Σ_s ρ_h Q⁻¹ (g_ω/g₀ − U_{ωs}) A² Δφ.   (B.14)

Substituting (B.2) into (B.14) gives the upper bound for the bias

|E(Δφ̂(t₀, h))| ≤ Σ_s ρ_h A² |Q⁻¹(g_ω/g₀ − U_{ωs})| · |Δφ|
              ≤ (L_r(m_r)/m_r!) Σ_s ρ_h A² |Q⁻¹(g_ω/g₀ − U_{ωs})| · |sT|^{m_r}.   (B.15)

In a similar way we obtain from (B.3) and (B.11), for the bias of the error of the amplitude estimation,

|E(ΔA)| ≤ (L_A(m_A)/m_A!) Σ_s ρ_h |(Σ_s ρ_h U_{As}U_{As}ᵀ)⁻¹ U_{As}| · |sT|^{m_A}.   (B.16)

Note that in (B.15) and (B.16), |x| for a vector x means the vector of the absolute values of the elements of x.

(5) Now the integral formulas of Proposition 2 can be derived from the expressions given above as h → 0, T → 0, h/T → ∞. The diagonal matrices S_ω(h) and S_A(h) in Proposition 2 are used for the scaling of the estimation errors of the IF, the amplitude and their derivatives. In particular, it can be verified that

(T/h) S_ω⁻¹ (Σ_s ρ_h U_{ωs}U_{ωs}ᵀ A²) S_ω⁻¹ → A²(t₀) ∫ ρ(u) U_ω(u)U_ωᵀ(u) du,

(T/h) S_ω⁻¹ (Σ_s ρ_h² U_{ωs}U_{ωs}ᵀ A²) S_ω⁻¹ → A²(t₀) ∫ ρ²(u) U_ω(u)U_ωᵀ(u) du,

(T/h) S_ω⁻¹ Σ_s ρ_h U_{ωs} → ∫ ρ(u) U_ω(u) du,  (T/h) S_ω⁻¹ Σ_s ρ_h² U_{ωs} → ∫ ρ²(u) U_ω(u) du,

(T/h) Σ_s ρ_h → ∫ ρ(u) du = 1,  (T/h) Σ_s ρ_h² → ∫ ρ²(u) du,   (B.17)

and similarly for the amplitude blocks with U_{As} and S_A.
Then the formulas (34) and (35) for the covariances of the IF and the amplitude estimates follow from formulas (B.10) and (B.12), and the formulas (36) and (37) for the bias follow from formulas (B.15) and (B.16). The following notation is used in formulas (34)–(37):

W_ω = H_ω⁻¹ H_{1ω} H_ω⁻¹,  W_A = H_A⁻¹ H_{1A} H_A⁻¹,   (B.18)

H_ω = ∫ ρ U_ω U_ωᵀ du − ∫ ρ U_ω du (∫ ρ U_ω du)ᵀ,

H_{1ω} = ∫ ρ² U_ω U_ωᵀ du − ∫ ρ² U_ω du (∫ ρ U_ω du)ᵀ − ∫ ρ U_ω du (∫ ρ² U_ω du)ᵀ + ∫ ρ² du ∫ ρ U_ω du (∫ ρ U_ω du)ᵀ,   (B.19)

H_A = ∫ ρ U_A U_Aᵀ du,  H_{1A} = ∫ ρ² U_A U_Aᵀ du,   (B.20)

B_ω = (1/m_r!) ∫ ρ(u) |H_ω⁻¹(∫ ρ(v) U_ω(v) dv − U_ω(u))| · |u|^{m_r} du,

B_A = (1/m_A!) ∫ ρ(u) |H_A⁻¹ U_A(u)| · |u|^{m_A} du,   (B.21)

where the vectors U_ω and U_A are of the forms (26) and (B.13). This completes the proof of Proposition 2. □
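As a numerical check of (B.18)–(B.21), the sketch below evaluates W_ω and B_ω for the rectangular window with m_r = 2, in which case U_ω(u) = u is scalar; it reproduces the values (W_ω)₁₁ = 12 and B_{ω1} = 0.1875 quoted in Section 4.

```python
import numpy as np

u = np.linspace(-0.5, 0.5, 100001)
du = u[1] - u[0]
rho = np.ones_like(u)                       # rectangular window on |u| <= 1/2

# (B.18)-(B.19) in the scalar case: H = int rho u^2 - (int rho u)^2, etc.
m1 = np.sum(rho * u) * du                   # int rho(u) u du (zero by symmetry)
H = np.sum(rho * u**2) * du - m1**2
H1 = (np.sum(rho**2 * u**2) * du
      - 2.0 * m1 * np.sum(rho**2 * u) * du
      + np.sum(rho**2) * du * m1**2)
W = H1 / H**2                               # W_omega = H^{-1} H_1 H^{-1}

# (B.21) with m_r = 2: B_omega = (1/2!) int rho |H^{-1}(int rho u du - u)| u^2 du
B = 0.5 * np.sum(rho * np.abs((m1 - u) / H) * np.abs(u)**2) * du
print(W, B)
```

For this window H = 1/12 and H₁ = 1/12, so W = 12, and B = 6 ∫|u|³ du = 3/16 = 0.1875.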
References

[1] B. Boashash, Estimating and interpreting the instantaneous frequency of a signal. Part 1: Fundamentals; Part 2: Algorithms and applications, Proc. IEEE 80 (April 1992) 520–568.
[2] R.G. Brown, Smoothing, Forecasting and Prediction of Discrete Time Series, Prentice-Hall, Englewood Cliffs, NJ, 1963.
[3] A.E. Bryson, Y.-C. Ho, Applied Optimal Control: Optimization, Estimation and Control, Wiley, New York, 1975.
[4] L. Cohen, Time-Frequency Analysis, Prentice-Hall, Englewood Cliffs, NJ, 1995.
[5] J. Fan, I. Gijbels, Local Polynomial Modelling and its Applications, Chapman & Hall, London, 1996.
[6] B. Friedlander, M. Francos, Estimation of amplitude and phase of multicomponent signals, IEEE Trans. Signal Process. 43 (4) (1995) 917–925.
[7] S.G. Golden, B. Friedlander, Maximum likelihood estimation, analysis, and applications of exponential polynomial signals, IEEE Trans. Signal Process. 47 (6) (1999) 1493–1501.
[8] I.A. Ibragimov, R.Z. Khasminskii, Statistical Estimation: Asymptotic Theory, Springer, New York, 1981.
[9] V. Katkovnik, Nonparametric Identification and Smoothing of Data (Local Approximation Method), Nauka, Moscow, 1985 (in Russian).
[10] V. Katkovnik, Local polynomial periodogram for time-varying frequency estimation, South African Statist. J. 29 (1995) 169–198.
[11] V. Katkovnik, Adaptive local polynomial periodogram for time-varying frequency estimation, in: Proceedings of the IEEE-SP International Symposium on Time-Frequency and Time-Scale Analysis, Paris, June 1996, pp. 329–332.
[12] V. Katkovnik, Local polynomial periodogram for signals with the time-varying frequency and amplitude, in: Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Atlanta, GA, May 7–10, 1996, pp. 1399–1402.
[13] V. Katkovnik, Nonparametric estimation of instantaneous frequency, IEEE Trans. Inform. Theory 43 (1) (1997) 183–189.
[14] V. Katkovnik, L. Stanković, Periodogram with varying and data-driven window length, Signal Processing 67 (3) (1998) 345–358.
[15] A.V. Nazin, On minimax bound for parameter estimation in ball (bias accounting), in: V. Sazonov, T. Shervashidze (Eds.), New Trends in Probability and Statistics, VSP/Mokslas, 1991, pp. 612–616.
[16] A.V. Nazin, V. Katkovnik, Minimax lower bound for time-varying frequency estimation of harmonic signal, IEEE Trans. Signal Process. 46 (12) (1998) 3235–3245.
[17] S. Peleg, B. Porat, The Cramér–Rao lower bound for signals with constant amplitude and polynomial phase, IEEE Trans. Acoust. Speech Signal Process. 39 (1991) 749–752.
[18] K.S. Riedel, Kernel estimation of the instantaneous frequency, IEEE Trans. Signal Process. 42 (10) (1994) 2644–2649.
[19] H.L. Van Trees, Detection, Estimation and Modulation Theory, Part 1, Wiley, New York, 1968.
[20] G. Zhou, G.B. Giannakis, A. Swami, On polynomial phase signals with time-varying amplitudes, IEEE Trans. Signal Process. 44 (4) (1996) 848–861.
[21] G. Zhou, A. Swami, Performance analysis for a class of amplitude modulated polynomial phase signals, in: Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Detroit, MI, May 1995, pp. 1593–1596.
[22] K.M. Wong, Estimation of the time-varying frequency of a signal: the Cramér–Rao bound and the application of Wigner distributions, IEEE Trans. Acoust. Speech Signal Process. 38 (3) (1990) 519–535.