Physics of the Earth and Planetary Interiors 160 (2007) 108–123

Least-squares self-coherency analysis of superconducting gravimeter records in search for the Slichter triplet

Spiros D. Pagiatakis a,∗, Hui Yin b, Mahmoud Abd El-Gelil a

a Department of Earth and Space Science and Engineering, York University, Toronto, Canada
b School of Geodesy and Geomatics, Wuhan University, Wuhan, PR China

Received 20 September 2005; received in revised form 13 July 2006; accepted 11 October 2006

Abstract

We develop a new approach for the spectral analysis of superconducting gravimeter data to search for the spheroidal oscillation 1S1 of the Earth's solid inner core. The new method, which we call least-squares (LS) self-coherency analysis, is based on the product of the least-squares spectra of segments of the time series under consideration. The statistical foundation of this method is presented in the new least-squares product spectrum theorem, which rigorously establishes confidence levels for detecting significant peaks. We apply this approach, along with a number of other innovative ideas, to a 6-year-long gravity series collected at the Canadian Superconducting Gravimeter Installation (CSGI) in Cantley, Canada, by splitting it into 72 statistically independent monthly records. Each monthly record is analysed spectrally, and all monthly LS spectra are multiplied to construct the self-coherency spectrum of the 6-year gravity series. The self-coherency spectrum is then used to detect significant peaks in the band 3–7 h at various significance levels, with the aim of identifying a triplet of periods associated with the rotational/ellipsoidal splitting of 1S1 (the Slichter triplet). Of all the Slichter periods predicted by various researchers so far, Smylie's triplet appears to be the most supported one, albeit very weakly, both before and after the atmospheric pressure effect is removed from the series. Using the viscous splitting law [Smylie, D.E., 1992. The inner core translational triplet and the density near Earth's center. Science 255, 1678–1682] as a guide, we can also see one interesting and statistically significant triplet with periods A = {4.261 h, 4.516 h, 4.872 h}, which changes slightly to A = {4.269 h, 4.516 h, 4.889 h} after the atmospheric pressure correction is applied to the gravity series.

© 2006 Elsevier B.V. All rights reserved.

Keywords: Slichter triplet; Least-squares; Spectral analysis; Coherency; Product spectrum

1. Introduction

The Earth's solid inner core, held inside the fluid outer core mainly by gravitational forces, undergoes many oscillations with periods that are very sensitive to the physical (e.g. density, viscosity, elasticity) and geometrical (size, shape) characteristics of both the

∗ Corresponding author. Tel.: +1 416 736 2100x20644; fax: +1 416 736 5817. E-mail address: [email protected] (S.D. Pagiatakis).

0031-9201/$ – see front matter © 2006 Elsevier B.V. All rights reserved. doi:10.1016/j.pepi.2006.10.002

inner and outer core and the inner core boundary (ICB). Slichter (1961) studied one particular oscillation, also known as the 1S1 spheroidal oscillation, and realized that the ellipticity of the inner core and the rotation of the Earth split this motion into three fundamental translational modes, namely one axial and two equatorial (one prograde and one retrograde), known as the Slichter triplet. Ever since Slichter's prediction of the translational triplet, there has been an intensive effort to find evidence of this motion in gravimeter records by carefully analyzing long records, particularly from superconducting gravimeters. Smylie (1992) performed analyses of four long records from superconducting gravimeters and identified three translational resonances at periods of 4.0150, 3.7677 and 3.5820 h, which he supported by theoretical calculations (Smylie, 1992) based on the CORE11 Earth model (Widmer et al., 1988). More recent calculations by Courtier et al. (2000) using the stacking method refined Smylie's (1992) estimates to 4.0150, 3.7656 and 3.5822 h for the retrograde, axial and prograde translational modes, respectively. The most recent searches for the Slichter triplet, e.g. by Kroner et al. (2004) and by Rosat et al. (2003, 2006), found no evidence of spectral peaks on or close to these periods. On the theoretical front, Smylie's model predictions and observational estimates of the Slichter periods were challenged by Crossley et al. (1992, 1999) and Rogister (2003), who calculated them using the preliminary reference Earth model (PREM) (Dziewonski and Anderson, 1981); their periods, although mutually consistent at about 5.98, 5.31 and 4.77 h, are distinctly different from Smylie's. The above authors, as well as Rieutord (2002), predicted another set of periods based on the 1066A Earth model; these are not as consistent amongst themselves, with an axial mode period ranging from 4.26 to 4.53 h. We note that for the latter calculations, Crossley (1993) and Rogister (2003) predict almost identical central periods at 4.533 and 4.529 h, respectively. The latest model predictions of the Slichter triplet periods are attributed to Rosat et al. (2006), who used Earth models 1066A and PREM with modifications of the density jump at the ICB. In this study we focus on the development of a new approach for the analysis of superconducting gravimeter data in an effort to detect statistically significant spectral peaks and compare their periods to the ones obtained by other researchers, either observationally or theoretically.
Our method is based on the least-squares spectral analysis (LSSA) (Vaníček, 1969, 1971) and the statistics of the least-squares (LS) spectrum (Pagiatakis, 1999), which are extended to the product least-squares spectrum and the development of its probability distribution function (pdf) in order to define confidence levels above which product spectrum peaks are statistically significant. More specifically, we use the original 1 s data from the Canadian Superconducting Gravimeter Installation (CSGI) at Cantley, Canada, collected over a period of 6 years (1997–2002), along with the corresponding atmospheric pressure data, and we follow an innovative and rigorous analysis approach comprising the following steps (details of each step are given in Section 5):


(a) We remove the body tide from the records using GWAVE (Merriam, 1992).
(b) We filter the 1 s residual series using a non-overlapping Parzen window in the time domain to produce statistically independent filtered values at random, unequally spaced intervals to increase spectral resolution and avoid possible residual aliasing.
(c) We propagate the noise level (variance of the record) of the original 1 s data to the filtered values, thus assigning realistic variances (hence weights) to the filtered values obtained in step (b), in order to be able to use a weighted least-squares spectral analysis (LSSA) approach (see also Pagiatakis, 2000). Based on our extensive experience with the weighted LSSA, we reckon that this is the most important feature of the method and that it significantly enhances the spectrum.
(d) We divide the 6-year residual series into monthly segments and use LSSA on these segments (unevenly spaced and unevenly weighted), simultaneously suppressing (by least-squares fitting) ocean tide loading constituents.
(e) We produce monthly LS spectra and carefully examine the presence of peaks and their significance through the rigorous statistical testing developed by Pagiatakis (1999).
(f) We use frequency-domain multiplication of the monthly LS spectra to produce the product LS spectrum, or LS self-coherency spectrum, to enhance spectral peaks that are common to all monthly spectra while filtering random noise. The identification of significant peaks in the LS self-coherency spectrum at various confidence levels is done rigorously using the probability distribution function of the product LS spectrum developed in this study.
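Step (f), the frequency-domain multiplication of the monthly spectra, can be sketched in a few lines of Python. The sketch below is illustrative only: it uses scipy's Lomb–Scargle periodogram as an unweighted stand-in for the weighted LSSA of the paper, and the segment count, frequency grid and variable names are assumptions rather than the authors' processing code.

```python
import numpy as np
from scipy.signal import lombscargle

def product_spectrum(t, x, n_segments, omegas):
    """Split the series x(t) into contiguous segments, compute a
    least-squares (here: Lomb-Scargle) spectrum for each segment,
    and multiply the normalized segment spectra together."""
    prod = np.ones_like(omegas)
    for ts, xs in zip(np.array_split(t, n_segments),
                      np.array_split(x, n_segments)):
        xs = xs - xs.mean()                     # remove the datum offset
        pgram = lombscargle(ts, xs, omegas)
        s = pgram * 2.0 / (xs.size * xs.var())  # ~ percentage variance
        prod *= s
    return prod
```

A constituent present in every segment survives the multiplication, whereas peaks present in only some segments, and random noise, are suppressed.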
In the sequel we elaborate on the significance of the product of spectra of segments of the same time series, we develop the probability distribution function of the product LS spectrum for the statistical evaluation of its peaks, and we emphasize the significance and effectiveness of the LSSA in the detection of weak signals buried in noise. In addition, we perform experiments by injecting known signals into the gravity series to examine their effect on the LS spectrum and assess their detectability using the statistical theory developed herein. We point out that the theory of statistics offers a useful tool for the evaluation of experimental data but must be used with discretion, combined with common sense, practical experience, and external evidence (Vaníček and Krakiwsky, 1986), and this is the approach we follow in this study.


The atmospheric pressure effect on the gravity spectrum is dealt with in two different ways. In the first approach, and similar to the product spectrum of the gravity series, we estimate the product spectrum of the pressure time series and identify the periods of all significant peaks above the 90% confidence level. We then detect common peaks in both spectra to single out peaks in the gravity spectrum that are due to pressure. In the second approach we correct the gravity series for the atmospheric pressure effect using a constant admittance of −3.0 nm/s2 per hPa and subsequently estimate the gravity spectrum, which is supposedly free from atmospheric pressure effects. Here the term “supposedly” is used to underline our apprehension in using the constant atmospheric admittance of −3.0 nm/s2 per hPa, at least in the band 3–7 h (see rationale in Section 7).

2. The product spectrum and self-coherency analysis

The notion of the product spectrum has been used as an effective and rigorous approach to identify common spectral peaks among many different time series obtained from various superconducting gravimeter installations for the purpose of detecting the Slichter triplet (e.g. Smylie et al., 1993; Courtier et al., 2000; Rosat et al., 2003; Kroner et al., 2004). It is conceptually straightforward to understand that the product spectrum, as a multiplication operation in the frequency domain, is equivalent to time-domain convolution, i.e. to time-domain filtering. Multiplication of two amplitude spectra corresponding to two experimental time series results in the sample cross-amplitude spectrum (Jenkins and Watts, 1968), which shows whether the amplitude of a component at a particular frequency in one series is associated with a large or small amplitude at the same frequency in the other series. Similarly, the sample phase spectrum tells us whether a frequency component in one series leads or lags the component at the same frequency in the other series.
Both sample cross-amplitude and phase spectra compose the cross spectrum and define the coherency spectrum; the latter expresses the frequency-domain correlation of two series (Jenkins and Watts, 1968). Multivariate spectral analysis is the generalization of the bivariate cross spectrum to many dimensions. In this paper we use multivariate spectral analysis and the new notion of the LS self-coherency spectrum of one and the same time series: we subdivide it into many segments of sufficient length, then estimate segment LS spectra, and subsequently calculate the product spectrum, thus effectively suppressing non-common peaks and noise whilst enhancing common frequency constituents. The product spectrum of the segments of the series expresses coherency within the series itself, and we call our approach least-squares (LS) self-coherency analysis. The validity of the proposed method is derived straightforwardly by working in the Fourier domain and taking the Fourier transform Y(f) of a signal y(t), where t is time and f is frequency. Without loss of generality, signal y(t) is assumed to be the output from two linear systems S and S′ in series, when signal x(t) is passed through them. If the linear systems S and S′ are characterized by the impulse response (weight) functions h(τ) and h′(τ), respectively, of characteristic time lag τ, then after well-known transformations (see, e.g. Jenkins and Watts, 1968, Section 2.3), the Fourier transform Y(f) of y(t) can be written as

Y(f) = H(f) ⊗ H′(f) ⊗ X(f), (1)

where ⊗ signifies frequency-domain multiplication. The upper-case letters indicate Fourier transforms, and H(f) and H′(f) are the transfer functions of systems S and S′, respectively. Eq. (1) can be further generalized to establish the equivalency of time-domain convolution and frequency-domain multiplication when a signal passes through more than two linear systems in series. Moving now to the real world of digital signals, we consider a discrete time series x(ti), ∀i ∈ [1, nm], ∧ n, m ∈ Z+ (Z+ is the set of positive integers), that can be split into n mutually disjoint segments of equal length m as follows:

x(t) = ⋃_{i=1}^{n} x_i(t_j), ∀j ∈ [1, m], ∧ n, m ∈ Z+. (2)

We then consider all Fourier transforms Xi(f), ∀i ∈ [1, n], ∧ f ∈ [f1, f2], within the band of interest [f1, f2], and take the following product:

Y(f) = ∏_{i=1}^{n} X_i(f), f ∈ [f1, f2]. (3)

Eq. (3), which is the product spectrum of the n time series segments, is equivalent to the convolution of any segment i, xi(tj), ∀j ∈ [1, m], ∧ i, m ∈ Z+, with all the remaining (n − 1) segments, one at a time, until all segments are exhausted. These convolutions achieve repeated filtering of a segment of a series using a weighting function that is in fact another segment of the series. This repeated filtering is actually the geometric mean of all factor spectra and results in a smooth spectrum Y(f) with peaks at frequencies that are common to all segments. The product spectrum is strongly biased due


to repeated filtering, in that the peaks will have unrealistic heights: peaks common to all factor spectra, albeit perhaps statistically insignificant themselves, may produce a significant product peak. On the other hand, the periods of the product peaks in the frequency band of interest are unbiased and in fact, as we show in this paper, they are maximum probability and maximum likelihood estimates of the true peaks if the noise of the data is Gaussian. This fact renders the product spectrum particularly suitable for detecting persistent peaks in a series, thus expressing the self-coherency of the signals throughout the series. We must emphasize here that the splitting of the series into shorter segments for the purpose of producing a product spectrum has at least one more important property: the analysis of the series in segments is equivalent to multiresolution analysis, which is particularly applicable to time series with varying statistical properties (non-stationary) and variable frequency content. This is chiefly desirable when it comes to the detection of the Slichter triplet, which is suspected to come and go according to an unknown excitation mechanism. However, caution must be exercised when dividing the series into shorter segments so as not to lose resolution within the band of interest. We revisit this issue of resolution later when we perform the analysis of the actual GWR series. The Fourier approach to the product spectrum requires additional data pre-processing, because the segmentation of the series arises from repeated boxcar function applications (sinc function in the frequency domain) to the original series. This operation will produce, in general, much shorter series, which, when cast in a Fourier system, will produce a strongly biased spectrum (smearing or Gibbs effects). Care must be taken to remove this effect by appropriately windowing the data in advance (in the time domain).
This pre-windowing approach is usually unknowingly omitted and the resulting Fourier spectrum is contaminated (“dirty spectrum”) by the sinc function ripples; this is clearly demonstrated, for instance, in Baisch and Bokelmann (1999). However, windowing of the data is not required when the least-squares spectral analysis (LSSA) is used, because it is a double projection operation: (a) orthogonal projection of the observation space (observed time series) onto the model space using trigonometric and other base functions and (b) orthogonal projection of the model space back onto the observation space. For more details the interested reader can refer to Vaníček (1969, 1971) and Wells et al. (1985).

3. The least-squares spectrum revisited

Least-squares spectral analysis (LSSA) has its real roots in Vaníček (1969, 1971); it was developed as an


alternative to the classical Fourier methods, to bypass their inherent limitations and stringent requirements such as the need for long records, a constant sampling rate, equal weights for the time series values, the absence of gaps or datum shifts (offsets), and other similar characteristics that render experimental series strongly non-stationary. The LSSA was implemented in computational algorithms by other researchers, most notably by Lomb (1976) and Scargle (1982), and has been applied heavily and successfully in many fields, such as astronomy and biology, because the experimental data series in these fields usually have gaps, a non-constant sampling rate, unequal weights and other irregularities. The LSSA method has been described in many other research papers and texts (e.g. Taylor and Hamilton, 1972; Maul and Yanaway, 1978; Press and Teukolsky, 1988; Press et al., 1992, and others) and need not be elaborated on here. Most recently, Pagiatakis (1999) gave a concise and comprehensive overview of the method along with the development of a statistical theory for the rigorous identification of significant peaks in the LS spectrum. He also showed (Pagiatakis, 1999) the superiority of the LSSA over the various Fourier approaches using real, inherently unequally spaced and unequally weighted time series (VLBI baselines) and proved that the LS spectrum is equivalent to the classic hindcast skill of the linear regression analysis that is routinely used in meteorology and weather forecasting (e.g. Preisendorfer, 1988). However, the most important property of the LSSA, particularly for the present study, is that it can be applied directly to short data series without prior windowing of the data and without any pre-processing such as de-gapping, de-trending, de-spiking or other; this is demonstrated with an example in Section 5 of this contribution.

4. Statistics of the product LS spectrum

In order to assess the significance of the peaks in the product LS spectrum we need to derive its probability distribution function (pdf). The derivation of the product pdf is based on well-known statistical principles, concepts and theorems, and more specifically on the central limit theorem (e.g. Hogg and Craig, 1995). Let the random variables Xi, ∀i ∈ [1, n], ∧ n ∈ Z+, denote n statistically independent LS spectra, each of which is distributed as beta(xi; α, β), with α = 1 and β dependent upon the degrees of freedom of the linear LS system as follows (Pagiatakis, 1999, Theorem 3):

β = (m − u − 2)/2, (4)

112

S.D. Pagiatakis et al. / Physics of the Earth and Planetary Interiors 160 (2007) 108–123

where m is the size of the time series, and u is the number of unknown parameters (such as a linear or other trend, periodic constituents of known frequency, or other) that are estimated simultaneously with the LS spectrum. All segmented series are of equal size with no overlap, which explains why their respective spectra are statistically independent. Their corresponding LS spectra are all estimated exactly the same way and thus all random variables Xi, ∀i ∈ [1, n], are distributed identically as beta(xi; α, β), with finite first and second moments (Pagiatakis, 1999):

μb = 2/(m − u) and σb² = 4(m − u − 2)/[(m − u + 2)(m − u)²]. (5)
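As a sanity check on (5), one can build a minimal unweighted LS spectral value — the fraction of the series' energy captured by a cosine-plus-sine fit — and verify by Monte Carlo that, for Gaussian white noise, its mean approaches μb = 2/(m − u) with u = 0. This sketch is illustrative and not the paper's implementation; all names are assumptions.

```python
import numpy as np

def ls_spectrum_value(t, y, omega):
    """Unweighted LS spectral value at angular frequency omega:
    the normalized energy of the orthogonal projection of y onto
    span{cos(omega t), sin(omega t)}; a number in [0, 1]."""
    A = np.column_stack([np.cos(omega * t), np.sin(omega * t)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    y_hat = A @ coef
    return float(y_hat @ y) / float(y @ y)

# Monte Carlo: spectral values of Gaussian white noise should be
# beta(1, (m - u - 2)/2) distributed with mean 2/(m - u); here u = 0.
rng = np.random.default_rng(1)
m = 200
t = np.arange(m, dtype=float)
vals = [ls_spectrum_value(t, rng.standard_normal(m), 0.7)
        for _ in range(2000)]
mean_emp = float(np.mean(vals))   # expect about 2/m = 0.01
```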

For the derivation of the pdf of the product LS spectrum, i.e. the pdf of the statistic Hn = X1·X2·X3···Xn, we take its natural logarithm:

ln Hn = Σ_{i=1}^{n} ln Xi. (6)

According to the central limit theorem, ln Hn follows approximately the normal distribution; thus Hn follows the log-normal distribution. The term “approximately” is directly related to the number of random variables being added, i.e. the value of n: how large should n be so that the central limit theorem applies reasonably accurately? This is determined by the Berry–Esseen inequality, which quantifies the rate at which the convergence to normality takes place. In layman's terms, the Berry–Esseen theorem states that, in addition to the requirement that the random variables be statistically independent and with identical probability distribution functions of finite mean and variance, their third central moment (related to skewness) must be finite. If all the above conditions are satisfied, then the rate of convergence to the normal distribution is at least of the order of n^{−1/2}. There is an additional, more stringent requirement to ensure uniform convergence to the normal pdf even if the individual random variables follow different distributions (Lyapunov's theorem), but this does not apply to our case. In practice, even the sum of only 10 random variables can converge reasonably well to the normal pdf (Hogg and Craig, 1995, p. 249). Looking at our particular case, the LS spectrum is distributed as beta(1, β) with a finite third central moment; thus the validity of the central limit theorem is ensured when more than about 10 monthly spectra are multiplied (when their logarithms are added). In order to define the parameters of the normal distribution of ln Hn, we need to know the mean and variance of the individual distributions of ln Xi. To do this we follow the approach of transformation of variables

(Hogg and Craig, 1995, Section 4.3). If X is a random variable of continuous type, such that X ∼ beta(x; 1, β) = β(1 − x)^{β−1} on the set A = {x : 0 < x < 1}, find the pdf of the new random variable Y = ln X. We introduce the transformation y = v(x) = ln x, which gives x = w(y) = e^y. We now have the mapping of A onto B = {y : −∞ < y < 0}, and the Jacobian of the transformation is J = dx/dy = e^y. The pdf of Y = ln X is then g(y) = beta(e^y)|J|, or

Y = ln X ∼ g(y) = β e^y (1 − e^y)^{β−1}, B = {y : −∞ < y < 0}. (7)
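Eq. (7) can be verified numerically: the density g(y) should integrate to one over (−∞, 0), and its first moment should reproduce the convergent series for the mean derived below (Eq. (10)). The value of β used here is illustrative.

```python
import numpy as np
from scipy.integrate import quad

beta = 50.0  # illustrative; beta is about half the series length (Eq. (4))

def g(y):
    """pdf of Y = ln X for X ~ beta(1, beta), from Eq. (7)."""
    return beta * np.exp(y) * (1.0 - np.exp(y)) ** (beta - 1.0)

total, _ = quad(g, -np.inf, 0.0)                    # normalization: 1
mean_y, _ = quad(lambda y: y * g(y), -np.inf, 0.0)  # E[ln X]

# Truncated series of Eq. (10) for the same mean
k = np.arange(200000, dtype=float)
mu_series = -float(np.sum(1.0 / (k + 1.0) - 1.0 / (k + 1.0 + beta)))
```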

The determination of the mean and variance of g(y) is now in order. By definition:

μy = ∫_{−∞}^{0} y g(y) dy = β ∫_{−∞}^{0} y e^y (1 − e^y)^{β−1} dy. (8)

Introducing the substitution w = e^y, Eq. (8) gives

μy = β ∫_{0}^{1} ln w (1 − w)^{β−1} dw. (9)

The calculation of the above integral involves the Beta function B(1, β) and the logarithmic factorial function Ψ(β) (see for instance Gradshteyn and Ryzhik, 1965, p. 538). After some development we obtain:

μy = −Σ_{k=0}^{∞} [1/(k + 1) − 1/(k + 1 + β)]. (10)

Eq. (10) gives the mean of g(y) as a convergent series. The convergence is relatively fast for β < 10,000 (β is directly related to the length of the data series under analysis), whereas for β > 10,000 the convergence is slower but does not pose any numerical problem. The determination of the variance of g(y) is very similar to that of the mean. By definition:

σy² = ∫_{−∞}^{0} y² g(y) dy − μy² = β ∫_{−∞}^{0} y² e^y (1 − e^y)^{β−1} dy − μy². (11)

Similar to the evaluation of the mean, we introduce the substitution w = e^y, which gives

σy² = β ∫_{0}^{1} (ln w)² (1 − w)^{β−1} dw − μy². (12)

Once again, the calculation of integral (12) involves the Beta function B(1, β) as well as the logarithmic factorial function Ψ(β) and its first derivative Ψ′(β)


Fig. 1. Plot of the probability function of random variable Y = ln X, when X ∼ beta(1, β), for different values of parameter β.

(Gradshteyn and Ryzhik, 1965, p. 541). After some development we obtain:

σy² = Σ_{k=0}^{∞} [1/(k + 1)² − 1/(k + 1 + β)²]. (13)

Eq. (13) gives the variance of g(y) as a series that converges much faster than that of the mean (Eq. (10)). Fig. 1 and Table 1 give the plot of g(y) and values of μy and σy² for characteristic values of β. Note that from (4) the size m of the data series under investigation is about twice the value of β. Recalling (6) and the central limit theorem, we have that Hn ∼ ln(h; nμy, nσy²), with μy and σy² given by (10) and (13), respectively, and its pdf by Hines and Montgomery (1990):

f(h) = [1/(h σy √(2nπ))] e^{−(ln h − nμy)²/(2nσy²)}. (14)
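The series (10) and (13) have closed forms in terms of the digamma function ψ and its derivative: μy = −(ψ(β + 1) − ψ(1)) and σy² = ψ′(1) − ψ′(β + 1). The following sketch (not part of the paper) evaluates them with scipy and reproduces the small-β entries of Table 1:

```python
from scipy.special import digamma, polygamma

def moments_ln_beta(beta):
    """Mean and variance of Y = ln X for X ~ beta(1, beta):
    closed forms of the series in Eqs. (10) and (13)."""
    mu_y = -(digamma(beta + 1.0) - digamma(1.0))
    var_y = polygamma(1, 1.0) - polygamma(1, beta + 1.0)
    return float(mu_y), float(var_y)

mu100, var100 = moments_ln_beta(100.0)   # Table 1: -5.1874, 1.6350
mu1k, var1k = moments_ln_beta(1000.0)    # Table 1: -7.4855, 1.6439
```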

Knowing μy and σy², the mean and variance of the log-normal distribution given by (14) (e.g. Aitchison and Brown, 1957) are:

μh = e^{n(μy + σy²/2)}, (15a)

σh² = e^{n(σy² + 2μy)} (e^{nσy²} − 1). (15b)

The development of the pdf of the product LS spectrum reveals a very significant property for the identification of its peaks. We note that the pdf of the LS spectrum as derived by Pagiatakis (1999) is a beta distribution with parameter α always equal to unity, which makes this pdf significantly skewed. This means that the statistically significant peaks in the LS spectrum will not be maximum probability estimates. However, the product LS spectrum follows very closely the normal distribution, and thus the significant peaks in this spectrum will not only be maximum probability (MP) estimates (symmetrical pdf) but also maximum likelihood (ML) estimates (because of normality). For more information on MP and ML estimates see Gelb (1974), Kendall and Stuart (1979), and Priestley (1981). This property of the product spectrum makes the LS self-coherency spectrum a best linear unbiased estimator (BLUE) that is much more attractive than the simple LS spectrum. We now state the following theorem, which defines the probability distribution of the product LS spectrum and from which we can establish confidence levels above which spectral peaks can be considered statistically significant.

Product least-squares spectrum theorem. Let Xi, ∀i ∈ [1, n], ∧ n ∈ Z+, denote n statistically independent random variables corresponding to n LS spectra of n experimental data series, all being distributed identically as beta(xi; α, β) with α = 1 and β = (m − u − 2)/2, where m is the number of data points in each series and u is the number of unknown parameters of the linear LS system. The logarithm of the product LS spectrum, ln Hn, is a maximum probability and maximum likelihood estimator that follows approximately the normal distribution N(yn; nμy, nσy²), where μy and σy² are given by (10) and (13). Equivalently, the product LS spectrum Hn follows the log-normal distribution ln(hn; μh, σh²) with parameters μh and σh² given by (15a) and (15b), respectively.
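Putting the theorem to work, a critical value for the product spectrum at a given confidence level follows from the normal quantile of ln Hn. The sketch below also validates the threshold by simulation; the values of n, m and u are illustrative, not those of the actual analysis.

```python
import numpy as np
from scipy.special import digamma, polygamma
from scipy.stats import norm

def product_threshold(n, m, u, confidence=0.90):
    """Critical value of the product LS spectrum H_n: peaks above it
    are significant at the given confidence level.  Uses the closed
    forms of Eqs. (10) and (13) with beta = (m - u - 2)/2 (Eq. (4))."""
    beta = (m - u - 2) / 2.0
    mu_y = -(digamma(beta + 1.0) - digamma(1.0))
    var_y = polygamma(1, 1.0) - polygamma(1, beta + 1.0)
    z = norm.ppf(confidence)
    # ln H_n ~ N(n mu_y, n var_y)  =>  one-sided upper quantile
    return float(np.exp(n * mu_y + z * np.sqrt(n * var_y)))

# Validate by simulation for a small, fast case
rng = np.random.default_rng(4)
n, m, u = 12, 200, 0
h_crit = product_threshold(n, m, u, 0.90)
beta = (m - u - 2) / 2.0
ln_h = np.log(rng.beta(1.0, beta, size=(20000, n))).sum(axis=1)
exceed = float(np.mean(ln_h > np.log(h_crit)))   # expect about 0.10
```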


5. Data reduction and filtering procedures


We use program GWAVE (Merriam, 1992) to subtract the body tide from the 6-year, 1 s raw data obtained from CSGI. The GWAVE software is based on the Xi (1987, 1989) series of 3070 tidal waves and uses latitude- and frequency-dependent gravimetric factors. Subsequently, we use a Parzen window in the time domain to decimate the 1 s residual data to produce unequally spaced series with a random sampling interval ranging from 100 to 200 s. The Parzen window has a lag equal to 50 units (in our case this lag corresponds to 50 s), which gives a variance ratio (smoothing factor) I/T × 10−5 and a bandwidth b = 1.86/50 (1 s) = 37 mHz (∼27 s) (see Jenkins and Watts, 1968, p. 255).

Table 1
Numerical values of the mean, variance and standard deviation of the random variable Y = ln X, when X ∼ beta(1, β), for various degrees of freedom. The length of the data series is approximately twice the value of β (cf. Eq. (4))

β        μ          σ²       σ
100      −5.1874    1.6350   1.2787
1,000    −7.4855    1.6439   1.2822
10,000   −9.7875    1.6448   1.2825
100,000  −12.0891   1.6449   1.2825

The variance ratio provides a means to measure or evaluate the reduction in the variance of the estimator (Parzen lag window) due to smoothing. In our case the variance ratio is much too low to introduce any bias in the spectrum, whereas the bandwidth is narrow relative to the narrowest peak in the spectrum of the original 1 s data within the band of interest, and therefore does not mask any peaks during the decimation procedure. Therefore, the Parzen window achieves adequate decimation and unevenly spaced series without masking any possible narrow-bandwidth peaks present in the original data, and without introducing appreciable bias, if any, in the spectrum. In addition to the decimation of the original data, we also propagate the local noise level of the series (the variance of the original data within the Parzen window) into each filtered value, thus estimating realistic weights for the time series values as required in a weighted LSSA algorithm. The concept of the uneven decimation procedure and the estimation of the variance of the filtered values is based on the well-known digital time-domain convolution of a lag window (weight function) with the raw 1 s GWR data. This method is routinely used by all GGP investigators to produce equally spaced series at a sampling interval of 1 min, albeit different investigators use different weight functions. In our case, the Parzen lag window is applied at uneven time intervals defined by a pseudorandom number generator. This approach was described in detail and followed by Pagiatakis (2000), but we repeat it here for clarity and completeness. Once the position (in time) of the window is determined, the algorithm searches for and identifies all series values that fall within the window width and determines, for each data point, its time difference from the centre of the window; this time difference is used in the Parzen lag window algorithm (see for instance Jenkins and Watts, 1968, p. 244) to calculate the weight of each series value. Obviously, the series values that are closer (in time) to the centre of the window attain higher weight than those close to its tails. This approach does not require the input series to be equally spaced and without gaps, for the algorithm considers only the data points that are within the window width, and thus any missing points simply do not contribute to the filtered value (weighted mean). In the case where there is a gap in the series and no data points fall within the window width, no filtered value is produced; thus, we maintain the gap in the filtered series as it was in the original series. The estimation of the variance of the filtered value is based on the noise level of the segment of the series that was used to produce that value. More specifically, the algorithm estimates the variance of the raw series

values used to produce the filtered value, i.e. the variance of all those values that fall within the window. This variance is then propagated to the filtered value through the Parzen window algorithm (weighted mean algorithm) using the well-known covariance law (law of propagation of random errors). In this procedure it is tacitly assumed that the noise of the series within the lag window is uncorrelated. Obviously, noisier series segments will produce filtered values of higher variance than quieter ones, thus contributing less to the spectral estimates via the weighted least-squares spectral analysis approach, as they should. The above decimation approach is applied directly to the original data series (raw 1 s data) without prior pre-processing such as de-spiking, de-gapping, de-noising, de-trending or other similar “repairs”, which appear to be very popular, albeit very subjective in our view. These “cleaning” procedures may be critical or in some cases may even dominate the resulting spectra (Baisch and Bokelmann, 1999). Obviously, our approach does not require any de-gapping or de-noising of the original series, simply because gaps are not problematic features in the LSSA approach, whereas the varying noise level in the series is taken into consideration by producing weighted filtered values. In addition, to show that with this approach we need not be concerned about spikes or segments that have been saturated by earthquakes, suffice it to say that in the worst-case scenario of an earthquake disturbance, the filtered value will attain such a high variance compared to the variances of other, undisturbed segments that it will contribute very little, if anything, to the spectral estimates, due to its low weight. In order to demonstrate the effectiveness of the above algorithm, we apply it to a real 1 s gravity record obtained at Cantley on day 339, 1997.
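The decimation-with-variance-propagation scheme just described can be sketched as follows. The Parzen weight formula is the standard lag-window form (Jenkins and Watts, 1968); the window placement, names and parameters are illustrative rather than the authors' code.

```python
import numpy as np

def parzen_weight(u, M):
    """Standard Parzen lag-window weight for lag u, zero for |u| > M."""
    a = np.abs(u) / M
    w = np.where(a <= 0.5,
                 1.0 - 6.0 * a**2 + 6.0 * a**3,
                 2.0 * (1.0 - np.minimum(a, 1.0)) ** 3)
    return np.where(a <= 1.0, w, 0.0)

def filtered_value(t, y, t0, M):
    """Weighted mean of the samples inside the window centred at t0,
    with its variance propagated from the local noise level under the
    assumption of uncorrelated noise within the window."""
    inside = np.abs(t - t0) <= M
    if not inside.any():
        return None                          # maintain the gap
    w = parzen_weight(t[inside] - t0, M)
    w = w / w.sum()
    value = float(w @ y[inside])
    local_var = float(np.var(y[inside]))     # noise level in the window
    var = local_var * float(np.sum(w ** 2))  # covariance law
    return value, var
```

A noisy (e.g. earthquake-saturated) window yields a filtered value with a large variance, hence a small weight in the subsequent weighted LSSA, while a window containing no data yields no filtered value at all.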
In this record, we introduce an artificial gap of about 1.3 h duration (10.5–11.8 h UTC) and a spike at 7.7 h, whereas the two seismic events present are real (Fig. 2). We run the algorithm with lags of 50 s and 300 s to show the effect of the window width on the filtered values and on their standard deviations. In Fig. 2 (both panels), the black line represents the original 1 s data, whereas the white dots and error bars correspond to the filtered values; the error bars shown have been scaled by a factor of 5 to be more visible. In the top panel we plot the filtered values obtained with the smoothing window of lag = 50 s, whereas the bottom panel shows the smoothed values with lag = 300 s. Both panels show very clearly that the filtered values during both earthquake events have error bars that are many times larger than those of the quieter segments. The spike has been filtered out and the


Fig. 2. Demonstration of the decimation algorithm used in this study. Black lines depict the original 1 s gravity data and white circles with error bars indicate filtered values. Whereas the original data are real, the spike at about 7.7 h and the gap are introduced artificially to demonstrate the effectiveness of the algorithm. Error bars have been exaggerated 5×. Panel (a) shows filtered values using Parzen window with lag = 50 s, whereas panel (b) depicts the results of Parzen window using lag = 300 s. The error bars during both seismic events are many times larger than in quieter segments.

filtered value affected by it has a larger error bar than the neighbouring points, whilst the gap is maintained in the filtered series. Another characteristic of the filtered values is that their error bars are smaller for the wide window (bottom panel) than for the narrow window (top panel), as they should be. The filtered values produced by this approach can be fed directly to the weighted LSSA algorithm without any user intervention. Obviously, our approach does not replace noisy segments, nor does it fill gaps; simply put, noisier data segments contribute less to the analysis results. De-trending the original series is also of no concern to the LSSA, for trends (linear or other) are estimated simultaneously with the LS spectrum, thus achieving unbiased estimates of both, as has already been shown


by Pagiatakis (1999). Lastly, the worrisome presence of offsets in the time series is also treated rigorously in the LSSA approach, simply by identifying the times at which offsets exist or are suspected to exist; the LSSA algorithm then determines each offset, together with its statistical significance, simultaneously with the LS spectrum and the other parameters. If we wrongly suspected an offset at a specific time, the estimated offset would appear statistically insignificant and no further action would be taken. The treatment of offsets has been described in detail by Pagiatakis (2000) and need not be repeated here. Overall, our approach is simple, straightforward, fast and, most importantly, rigorous: it relies on objective algorithms that do not require or impose questionable data repairs of any kind stemming from visual inspection of, subjective decisions about, or manual intervention on the series. The unequal sampling interval of the filtered series also eliminates any possible residual aliasing effects that may arise from the decimation of the data. Our approach of producing unequally spaced data is without doubt diametrically opposite to what most researchers do to satisfy the requirements of the Fourier method, with consequences for the final results. In our case we expect to recover correctly periods shorter than the average sampling interval, without any possibility of residual aliasing, because the randomly spaced data have some points spaced much closer than the "average" sampling rate (see for instance Press et al., 1992); this would be impossible with equally spaced data. Finally, the Parzen windows, as applied here, do not overlap with each other and thus produce time series values with independent statistical properties. The above decimation procedure gives us a total of 786,816 data points over the period of 6 years, or an average sampling interval of about 4 min.
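To illustrate how a least-squares spectrum accommodates such an unequally spaced, weighted series, the sketch below fits, at each trial frequency, a sine/cosine pair together with an intercept and a linear trend, and reports the fraction of the weighted variance explained. This is only a minimal stand-in for the full LSSA of Pagiatakis (1999): simultaneous offsets and suppressed constituents are omitted, and the function name is ours.

```python
import numpy as np

def ls_spectrum(t, y, sigma2, freqs):
    """Weighted LS spectrum of an unequally spaced series: for each trial
    frequency, the fraction of the weighted residual variance (relative to a
    trend-only fit) explained by a sine/cosine pair fitted jointly with an
    intercept and a linear trend."""
    W = 1.0 / np.asarray(sigma2)                 # diagonal weights from filtered-value variances
    base = np.column_stack([np.ones_like(t), t]) # intercept + linear trend
    x0 = np.linalg.solve(base.T @ (W[:, None] * base), base.T @ (W * y))
    r0 = y - base @ x0
    denom = (r0 * W) @ r0                        # weighted variance left by the trend-only fit
    spec = []
    for f in freqs:
        A = np.column_stack([base, np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)])
        N = A.T @ (W[:, None] * A)               # weighted normal equations
        x = np.linalg.solve(N, A.T @ (W * y))
        r = y - A @ x
        spec.append(1.0 - ((r * W) @ r) / denom)
    return np.array(spec)
```

Because the fit is done by least squares at each frequency, gaps and uneven spacing need no special handling, which is the point made in the text.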
The 6-year long GWR time series is split into 72 monthly segments to be analysed independently. Monthly records are sufficiently long to allow reliable resolution of all peaks in the band 2–10 h and to detect minute signals at the nanogal level (e.g., see Rosat et al., 2003 for more details). We produce weighted monthly LS spectra while simultaneously suppressing eight tidal frequencies that may have a significant ocean loading effect; that is, we fit and remove eight sine waves corresponding to the strongest ocean load constituents. We follow this approach of subtracting the ocean load periods, rather than removing them using predicted amplitudes and phases, because the ocean load is not as precisely modeled as the body tide, and a residual load signal may remain in the series. As expected, each of the 72 monthly spectra exhibits different characteristics, particularly in their



Fig. 3. Log-linear, period-time wavelet representation of 72 standardized LS monthly spectra at Cantley, Canada. The plot shows clearly significant activity in the band 3–7 h.

variance, and in order to have a complete "image" of the time behaviour of the spectra we produce a 3D wavelet representation after standardizing each monthly spectrum by its standard deviation (Fig. 3). Fig. 3 shows significant activity at least in the band 3–7 h. Similar activity is found in other superconducting gravimeter records from France (Strasbourg), Australia (Canberra) and Japan (Matsushiro), although for these we used 1 min data that were readily available to us (plots are not shown here). The next step involves frequency-domain multiplication of all monthly LS spectra, carried out by taking the natural logarithms of the monthly spectra and summing them up to form the natural logarithm of the product LS spectrum,

which follows approximately the normal distribution, i.e. W = ln H_n ∼ N(w; μ_w = nμ_y, σ_w² = nσ_y²), where W is the random variable that describes the logarithm of the product LS spectrum. Defining the well-known z-score as (e.g. Ary and Jacobs, 1976, p. 125)

Z_n = (W − μ_w)/σ_w = (ln H_n − nμ_y)/(√n σ_y),  (16)

we see that Z_n ∼ N(z; 0, 1), which can be used to define confidence levels, for example at 90, 95 and 99%, i.e. at 1.65, 1.96 and 2.58 units, respectively, above the mean of the product LS spectrum.
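Eq. (16) translates directly into a peak-detection recipe: standardize the summed log spectra and flag frequency bins above the chosen confidence level. The sketch below does this under the assumptions of the theorem (independent monthly spectra whose log values share a common mean μ_y and variance σ_y²; here both are simply estimated from the data, which is our own simplification), and the function names are ours.

```python
import numpy as np

def self_coherency_zscores(monthly_spectra):
    """Standardized log of the product of n monthly LS spectra, per Eq. (16):
    Z = (sum_i ln s_i - n*mu_y) / (sqrt(n) * sigma_y)."""
    logs = np.log(np.asarray(monthly_spectra))      # shape (n_months, n_freqs)
    n = logs.shape[0]
    mu_y, sigma_y = logs.mean(), logs.std(ddof=1)   # noise-level estimates (assumption)
    W = logs.sum(axis=0)                            # ln of the product LS spectrum
    return (W - n * mu_y) / (np.sqrt(n) * sigma_y)

def significant_peaks(z, level=1.65):
    """Bin indices exceeding the chosen level (1.65/1.96/2.58 for 90/95/99%)."""
    return np.flatnonzero(z > level)
```

A spectral line that recurs month after month accumulates in the sum of logarithms, while incoherent noise averages out, which is exactly the self-coherency idea.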


6. Injection of known artificial signals

We use the approach of injecting known signals into the real time series in order to investigate and understand the sensitivity of the LS self-coherency spectrum, as well as to identify the signal strength (amplitude) that makes a peak statistically significant above the 90, 95 and 99% confidence levels. First we inject successively five sine waves of amplitudes 5, 10, 20, 30 and 40 ngal, respectively, all of which have the same period of 4.8567 h (0.2059 cph). This period is chosen to fall between two existing peaks that are very close to one another and significant at about the 90% level. Fig. 4(a) shows the details of the five injected peaks along with the original spectrum (before the injection; black line). The injected signal interferes with the existing adjacent peaks and, as its strength grows, it begins to dominate, while its period converges to the true injected period. The bandwidth of


the injected signal increases to about four times the bandwidth of the adjacent peaks, indicating fusion of peaks. The second experiment involves the injection of a 20 ngal signal of period 4.1 h (0.2439 cph). This injected signal is in a band that is very quiet, with no obvious peaks present in the immediate vicinity. Fig. 4(b) shows the result of this injection. The peak has a bandwidth similar to the other peaks in the immediate band and it is precisely centered at 4.1 h. The peak is well above the 99% confidence level, as was the 20 ngal peak in the previous injection. The above injection experiments indicate that the LS self-coherency spectrum of the Cantley data has a noise level of less than about 5 ngal at the 95% confidence level and gives unbiased estimates of the periods of peaks, as shown by the second injection experiment. The latter confirms experimentally the product LS spectrum theorem as presented in Section 4.

7. The atmospheric pressure

Fig. 4. Injection of known signals into the residual gravity series. In both panels the black line indicates the original LS self-coherency spectrum before the injection of signals. Panel (a) shows the results of the injection of five signals of the same period (4.856 h) but different amplitudes. Panel (b) shows a 20 ngal signal of period 4.1 h (blue line) injected into a relatively flat spectral band.

The atmospheric pressure effect on the GWR series is without doubt worrisome and is expected to pose problems in the detection of weak gravity signals that stem from geodynamic phenomena such as the Earth's core motions. In general, there are two approaches to removing the atmospheric pressure effect from the gravity records prior to their analysis. The first and theoretically most accurate method takes into consideration the global atmospheric pressure field and convolves it with appropriate Green's functions to predict the atmospheric pressure loading at the station of interest (e.g. Mukai et al., 1995; Boy et al., 2001). This method can be refined to consider, for instance, pressure variation with altitude. However, present-day capabilities in determining the global atmospheric pressure field are limited to a six-hour sampling interval at best (see for instance the European Centre for Medium-Range Weather Forecasts (ECMWF), or the National Centres for Environmental Prediction (NCEP)), which is clearly inadequate for high-frequency calculations. The second method is known as the response method and considers the atmospheric pressure variations only in the near vicinity of the station, as recorded by a local barometer. These atmospheric pressure records are then used together with the gravity records in a cross-spectral analysis scheme to estimate admittance functions, which are in turn used to remove the atmospheric pressure effect from the gravity records. The response method described above is not new. The earliest attempts we could identify are attributed to Warburton and Goodkind (1977) and Steeves (1981). Steeves (1981) developed the least-squares response



method to study the effect of atmospheric pressure on tilt measurements. Later on, Pagiatakis and Vaníček (1985) applied Steeves's least-squares response method to model the response of gravity and tilt to various atmospheric effects such as atmospheric temperature and pressure and bedrock temperature. Although the data were very noisy, they showed frequency-dependent admittances (Pagiatakis and Vaníček, 1985). More recently, the response method (not necessarily the least-squares response method) has been used, for instance, by Crossley et al. (1995), Neumeyer and Dittfeld (1997), Sun et al. (2002), Hu et al. (2005) and others, to model the gravity response to air pressure using GWR data. Today, it is believed that a constant atmospheric pressure admittance, usually −3.0 nm/s²/hPa, can remove about 90% of the atmospheric pressure effect. Atmospheric admittance functions are in general location-, time- and frequency-dependent, but so far only the very low frequency admittance appears to be reliably determined, near the value of −3.0 nm/s²/hPa, whereas in the high-frequency band we find conflicting results showing that the admittance either increases (e.g. Crossley et al., 1995; Hu et al., 2005) or decreases (e.g. Sun et al., 2002) with increasing frequency. We are presently working on reconciling this discrepancy, particularly in the high-frequency band. Our preliminary investigations using the CSGI data (Cantley, Canada) indicate that the pressure admittance may be different from the nominally accepted value of −3.0 nm/s²/hPa, and this makes us very apprehensive about applying a constant admittance of −3.0 nm/s²/hPa; a constant admittance that is, for example, twice as large as the real (and unknown) admittance will in fact overcorrect the gravity record and will produce essentially the same (amplitude) spectrum as the uncorrected gravity record. In this case the phase spectrum must be used to resolve the problem.
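For reference, the constant-admittance correction discussed above amounts to one scalar operation per sample. The sketch below applies the nominal −3.0 nm/s²/hPa to demeaned local pressure; it is a minimal illustration, not the authors' processing chain, and the function name and the demeaning convention are our assumptions.

```python
import numpy as np

def apply_constant_admittance(gravity_nm_s2, pressure_hpa, admittance=-3.0):
    """Remove the local barometric effect with a constant admittance
    (nominally -3.0 nm/s^2 per hPa): g_corr = g - a * (p - mean(p))."""
    p = np.asarray(pressure_hpa, dtype=float)
    g = np.asarray(gravity_nm_s2, dtype=float)
    return g - admittance * (p - p.mean())
```

The text's warning is visible here: if `admittance` is, say, twice the true value, the correction overshoots and the residual series carries a pressure signal of the same amplitude but opposite sign, leaving the amplitude spectrum essentially unchanged.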
In this contribution we treat the problem of the atmospheric pressure effect in two ways. First, we do not correct the gravity data for atmospheric pressure effects, but we perform analyses of the local atmospheric pressure data for the same time period, decimating the 10 s original data at unequally spaced intervals using a Parzen lag window with a lag of 10 units (100 s). Monthly atmospheric pressure spectra are then produced in the band 2–10 h after filtering the decimated series above the 10 h cutoff point. Subsequently, product spectra are calculated (not shown here) to identify the periods of all significant peaks (see next section). Second, we apply a pressure correction to the gravity series using a constant admittance of −3.0 nm/s²/hPa and subsequently perform spectral analyses of the corrected residual gravity series as in the first case. We wish to emphasize here that the atmospheric effects on gravity are still insufficiently modeled and pose a roadblock to the detection of minute signals like the Slichter triplet. However, this should not stop us from developing objective and rigorous spectral analysis approaches to detect such signals, and this is the main goal of this contribution.

8. Analyses results and discussion

Fig. 5 depicts the LS self-coherency spectrum of 72 independent monthly gravity LS spectra before removing the atmospheric pressure effect. Fig. 6 shows the spectrum after the removal of the atmospheric pressure effect using a constant admittance of −3.0 nm/s²/hPa. Overall, in the band 2–10 h, we observe that the removal of the constant pressure admittance reduces the power of the spectrum by a mere 5%, whereas the number of significant peaks (above 90%) remains practically the same: the pressure correction reduced 18 peaks below the 90% confidence level but strengthened 21 existing peaks (>90%). Details of both spectra (Figs. 5 and 6) show (with arrows) the positions of the Slichter triplet predictions by various researchers. In all spectra (Figs. 5 and 6) we show the 90, 95 and 99% confidence levels.

Fig. 5. Normalized LS self-coherency spectrum of 72 independent monthly gravity series from Cantley, Canada, with no atmospheric pressure correction applied. The confidence levels at 90, 95 and 99% are shown. Dotted lines indicate the positions of the 99% significant peaks. Slichter triplets predicted by other researchers, as well as two triplets A and B discovered in this study, are also shown. Triplet B is narrower by about 1.5% w.r.t. triplet A and it does not appear after the atmospheric pressure correction is applied (see Fig. 6).

Fig. 6. Normalized LS self-coherency spectrum of 72 independent monthly gravity series from Cantley, Canada, after a constant atmospheric admittance of −3.0 nm/s²/hPa was applied to the gravity series. The confidence levels at 90, 95 and 99% are shown. Dotted lines indicate the positions of the 99% significant peaks. Slichter triplets predicted by other researchers, as well as triplet A′ (very close to triplet A; see Fig. 5), are shown.

We identify all spectral peaks that are significant above the three different confidence levels in both spectra (before and after the removal of the constant pressure admittance), calculating also their bandwidths, hereafter denoted b. Bandwidths vary from 90 mHz (11 s) to about 40 mHz (25 s) (in general, from higher to lower frequencies). Occasionally a peak has a much wider bandwidth, indicating that two or more peaks with nearly the same period have fused into one. We use the estimated bandwidths of the observed peaks to make a judgment about their fidelity and their resolvability from other peaks found in the atmospheric pressure data and in non-linear tides. Table 2 summarizes the periods of the Slichter triplet predicted by various researchers, using different Earth structures and different models. For each model prediction we give the corresponding reference and the triplet periods, and make comments. Below these predicted periods (same row) we list our estimated periods that are closest to the predicted ones, for comparison purposes. Here "closest" means that the estimated peaks are within 1% or better of the model predictions. A dash indicates that no estimated period can be identified within 1% of the model prediction, or that peaks are insignificant and below the average noise level (50% confidence level). The choice of the 1% threshold was based on theoretical estimates of the effect of various parameters on the Slichter periods. Such parameters include viscosity, density and Earth rotation (centrifugal and Coriolis), some of which may have been neglected in the models cited herein, with consequences for the final prediction of the periods. Each of the above parameters probably contributes no more than about 0.5% (see for instance


Rieutord, 2002; Rogister, 2003); hence the assumed threshold. Obviously, within this threshold we do not allow for possible variations in the density jump at the ICB, which have far more significant effects on the prediction of the periods (see for instance Rosat et al., 2006). The first two rows of Table 2 (see also Figs. 5 and 6 for details) indicate that Smylie's estimated periods and ours (before the removal of the pressure effect) agree at the 0.22% level or better. Smylie's predicted periods (second row of Table 2) are within 0.13% (or better) of ours. The removal of a constant atmospheric pressure admittance changes our estimated periods slightly, but they still remain well below the 0.8% level of difference from Smylie's. We note that our estimated period of 4.011 h is unaffected by the removal of the pressure effect; however, its significance drops from the 99% to the 65% confidence level. This is due to the near proximity of this period (b = 0.003 h (93 mHz)) to the solar heating tide (S6). No significant atmospheric peaks were found around our estimated "axial" and "prograde" gravity peaks. The second set of comparisons is based on calculations using the 1066A Earth model (Table 2, rows 3–7). The five models differ essentially as follows: Dahlen and Sailor (1979) use a rotating (centrifugal and Coriolis accelerations) and elliptical Earth with an inviscid outer core; Crossley et al. (1992) and Crossley (1993) use an inviscid outer core, a spherical inner core boundary (ICB) and Coriolis acceleration; Rieutord (2002) assumes a viscous and neutrally stratified outer core with a spherical ICB and full rotation (centrifugal and Coriolis accelerations); Rogister (2003) considers an inviscid outer core, an elliptical ICB and full rotation. Crossley's and Rogister's calculated periods are very consistent.
We note, however, that none of our estimated periods (with or without the pressure effect) is close to any of the five predicted "retrograde" periods to better than 1%. We can draw the conclusion that the 1066A Earth model calculations, as a whole, are not in good agreement with our estimated periods. In essence, the removal of the pressure effect (constant admittance) does not affect the results in any appreciable way. The third set of model predictions, by Crossley (1993), Rogister (2003) and most recently Rosat et al. (2006), on PREM (or modifications thereof), is summarized in the last three rows of Table 2. We notice that our estimated periods are, in general, in much better agreement with the predicted ones in this group than in the case of the 1066A model. When the pressure effect is removed, our estimated "prograde" and "retrograde" periods shift closer to the model predictions, whereas the "axial" diminishes below the average noise level.
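The "closest within 1%" bookkeeping used to fill Table 2 can be expressed compactly. The sketch below is our own illustration of that matching rule (the function name and the return convention, with None standing for a table dash, are assumptions), using the Smylie (1992) row as an example.

```python
def match_triplet(predicted, estimated, tol=0.01):
    """For each predicted period (hours), find the closest estimated peak and
    keep it only if the relative difference is below `tol` (1%, the threshold
    adopted in the text); None marks a dash in Table 2."""
    matches = []
    for tp in predicted:
        best = min(estimated, key=lambda te: abs(te - tp))
        matches.append(best if abs(best - tp) / tp < tol else None)
    return matches

# Smylie (1992) predictions vs. a few estimated peaks (values from the text)
predicted = [3.582, 3.766, 4.015]
estimated = [3.574, 3.765, 4.011, 4.516, 6.259]
print(match_triplet(predicted, estimated))   # all three match within 0.22%
```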



Table 2
Comparison of model (predicted) Slichter periods by various researchers with the ones estimated in this study

Reference | Model | Prograde (m = −1) | Axial (m = 0) | Retrograde (m = +1) | Comments
1. Smylie (1992)¹ | Data | 3.582 | 3.766 | 4.015 | ¹ Refined estimates reported in Smylie et al. (2001)
   This study | | 3.574 | 3.765² | 4.011 | ² Almost significant at 90% confidence level
   This study | | 3.563 | 3.736 | 4.011³ | ³ Significant at 65% confidence level
2. Smylie et al. (2001) | CORE11¹ | 3.579 | 3.765 | 4.012 | ¹ Viscous outer core
   This study | | 3.574 | 3.765² | 4.011 | ² Almost significant at 90% confidence level
   This study | | 3.563 | 3.736 | 4.011³ | ³ Significant at 65% confidence level
3. Dahlen and Sailor (1979) | 1066A | 5.540 | 6.127 | 6.729 |
   This study | | – | – | – |
   This study | | – | – | – |
4. Crossley et al. (1992) | 1066A | 5.674 | 6.259 | 7.032 |
   This study | | 5.648 | – | 7.005 |
   This study | | – | 6.259 | – |
5. Crossley (1993) | 1066A | 4.127 | 4.533 | 5.016 | ¹ Significant at 80% confidence level
   This study | | 4.132¹ | 4.516 | – |
   This study | | 4.132 | 4.516 | – |
6. Rieutord (2002) | 1066A | 3.894 | 4.255 | 4.687 |
   This study | | 3.881 | – | – |
   This study | | – | – | – |
7. Rogister (2003) | 1066A | 4.129 | 4.529 | 5.024 | ¹ Significant at 80% confidence level
   This study | | 4.132¹ | 4.516 | – |
   This study | | 4.132 | 4.516 | – |
8. Rogister (2003) | PREM | 4.777 | 5.310 | 5.979 | ¹ Significant at 60% confidence level
   This study | | 4.759 | 5.313² | 5.981¹ | ² Significant at 80% confidence level
   This study | | 4.788 | 5.355 | 6.021 | ³ Significant at 85% confidence level
   This study | | 4.798³ | – | 5.981 |
9. Crossley (1993) | PREM | 4.770 | 5.309 | 5.991 | ¹ Significant at 60% confidence level
   This study | | 4.759 | 5.313² | 5.965¹ | ² Significant at 80% confidence level
   This study | | 4.788³ | 5.355 | 6.021 | ³ Significant at 85% confidence level
   This study | | 4.788³ | – | 5.981 |
10. Rosat et al. (2006)¹ | PREM¹ | 4.648 | 5.244 | 5.835 | ¹ PREM, method 1 (Rosat et al., 2006, Table 2)
   This study | | 4.630² | 5.254 | – | ² Significant at 85% confidence level
   This study | | 4.630 | – | – |

All boldface periods are significant at least at the 90% confidence level. Normal text signifies that no pressure correction has been applied, whereas italics indicate that atmospheric pressure correction has been applied to the gravity series. Dashes in place of periods indicate that no peaks were found within 1% of the model predictions, or peaks are below the 50% confidence level.

We perform a rather basic search for other possible triplets using Smylie's splitting law (e.g. Smylie, 1992; Smylie et al., 2001) as a guide, for both a viscous and an inviscid outer core, taking as axial periods the most significant peaks (>90%) and calculating the corresponding prograde and retrograde periods. The calculated inviscid prograde and retrograde periods do not seem to fit any of the peaks of the self-coherency spectrum well, including the inviscid periods for PREM predicted by Smylie et al. (2001), i.e. {4.678 h, 5.181 h, 5.799 h}. Our closest (better than 0.2%) estimated triplet (excluding Smylie's

(1992)) to the viscous splitting law (Smylie, 1992) is A = {4.261 h, 4.516 h, 4.872 h} before the removal of the pressure effect. This triplet becomes A′ = {4.269 h, 4.516 h, 4.889 h} after the removal of the pressure effect. For both triplets, the "axial" period remains constant, but it reduces slightly in amplitude after the pressure effect is removed. We note that our "prograde" period has also been seen by Kroner et al. (2004, p. 277) in their product spectra of five GGP stations as an "interesting peak…close to 0.23 cph." Although we do not have any estimates of the periods seen in their coherency spectrum


(Kroner et al., 2004, Fig. 14a), we make the conjecture that we can see peaks very similar to our "axial" and "retrograde" frequencies, near 0.221 and 0.205 cph, respectively. We must note that the periods of triplet A have amplitudes of the order of a few nGal, with the "axial" and "retrograde" being slightly higher and near 5 nGal. Triplet A is in agreement with the splitting law (e.g. Smylie, 1992) at the 0.2% level or better, whereas in triplet A′ only the "retrograde" period differs from the splitting law, by about 0.5%. We note that before the removal of the atmospheric effect (cf. Fig. 5) we also see triplet B = {4.302 h, 4.516 h, 4.798 h}, which is narrower than triplet A by about 1.2%. Triplet B cannot be seen in the spectrum after the removal of the atmospheric pressure effect.

9. Concluding remarks

In this paper we introduced several innovative approaches to the spectral analysis of any data series that may contain important geophysical signals that are barely above the noise level, like the spheroidal oscillation of the Earth's inner core. Our approach consists of very simple, yet effective, steps that can lead to the detection of such signals. As a test bed for our method we used 6 years of superconducting gravimeter records and atmospheric pressure data from the Canadian Superconducting Gravimeter Installation (CSGI) in Cantley, Canada. The analyses of the data were straightforward and easy to carry out because, unlike other researchers, we chose, as is our usual practice, not to "repair" the data through a series of algorithms and visual inspections. We started with the 1 s raw data, subtracted the body tide, and decimated the residual series with a simple symmetric lag window (Parzen) of very narrow bandwidth (37 mHz or ∼27 s) centered at unequal time intervals to increase peak resolvability and eliminate possible residual aliasing.
Along with the decimation we applied the covariance law to the Parzen algorithm to propagate the noise level of the record within the window, thus producing a realistic variance for each filtered value. The product of this straightforward and simple operation was an unevenly spaced and weighted series that was subsequently analysed via least-squares spectral analysis. We introduced a new concept, called self-coherency analysis, by calculating the product spectrum of the series itself from the spectra of shorter, statistically independent segments, whose length was chosen in such a way as to achieve good spectral resolution (resolvability of peaks) in the band 3–7 h. We called this spectrum the (least-squares) self-coherency spectrum and, after deriving its pdf, we showed that it provides unbiased, maximum


likelihood and maximum probability spectral estimates. The pdf of the self-coherency spectrum allows easy identification of significant spectral peaks at any significance level. These findings were recapitulated in the new (least-squares) product spectrum theorem. The main focus of this paper was the development of a spectral analysis approach that would lead to the detection of weak signals buried in noise, particularly targeting the controversial Slichter triplet predictions and estimations. Nevertheless, we could not resist the temptation to glimpse at the self-coherency spectrum with the hope of recognizing a triplet that could reasonably be identified as Slichter. In order to discover a triplet that might be called "Slichter" we used previous observational and theoretical results, which we summarized in Table 2. We could not find any observational evidence for the predicted triplets in any of the model calculations using Earth models 1066A and PREM. In addition, we found no evidence of significant triplets matching the latest model predictions on 1066A and PREM by Rosat et al. (2006), despite the fact that they used a wide range of density contrasts at the ICB. Moreover, we were unable to identify any periods in the vicinity of 3.6988 h (7.51 × 10⁻⁵ Hz) and 4.3952 h (6.32 × 10⁻⁵ Hz) as seen by Rosat et al. (2006, Fig. 7). In our coherency spectrum we were able to identify a weak triplet very close to the much criticised triplet of Smylie (1992). A constant atmospheric pressure correction of −3.0 nm/s²/hPa did not change appreciably the periods of our estimated triplet; only the significance of the "retrograde" period diminished to about the 65% confidence level (from 99% before the pressure correction), most likely because of its proximity to S6. We need a much more reliable pressure admittance in this high-frequency band to resolve this.
Our results do not seem to support any other model predictions (see Table 2), but this statement should not be taken as passing judgment on any model; simply, we could not find any observational evidence using our method of analysis. We performed a basic search for other possible triplets using Smylie's splitting law for both a viscous and an inviscid outer core (Smylie, 1992), taking as axial periods the most significant ones (>90%), from which we calculated prograde and retrograde periods. We found no match of the predicted inviscid prograde and retrograde periods to our estimates, whereas for the viscous periods we found triplet A = {4.261 h, 4.516 h, 4.872 h} (before the removal of the pressure effect) and triplet A′ = {4.269 h, 4.516 h, 4.889 h} (after the removal of the pressure effect). These triplets agree to better than 0.3%, both between themselves and with the splitting law. These periods, or ones very similar thereto, are perhaps visible in the analysis by Kroner et al. (2004), at least as far as we can tell from interpreting their spectrum (Kroner et al., 2004, Fig. 14a). In this paper we went as far as the interpretation of the self-coherency least-squares spectrum, using its pdf to identify significant peaks. However, our approach can be extended to produce a "clean" self-coherency spectrum, which can be used to isolate the true peaks from the false ones with a higher degree of certainty. The term "clean spectrum" is borrowed from Baisch and Bokelmann (1999), who presented an approach to clean the "dirty" spectrum and identify true peaks; incidentally, the very same approach was also presented independently by Pagiatakis (1999). We performed a very basic search for triplets that might be identified as Slichter. Our search was also limited to the data set obtained from Cantley, but we are currently working with data from other GGP stations to parallel this analysis and see whether similar features are observed. We are of the opinion that an exhaustive search as proposed by other research teams (e.g. Rosat et al., 2003, 2006), albeit useful in identifying triplets that perhaps follow one splitting law or another, cannot give unique and definite answers, particularly for the Slichter problem. For instance, are we willing to accept, if found, a triplet whose peaks are of markedly different power? Is the axial mode stronger than the equatorial ones, and how does this depend upon the location of the station? The effect of atmospheric pressure still hinders such an investigation. We are currently working on estimating an improved atmospheric pressure admittance in the high-frequency band, which is the subject of a future contribution.

Acknowledgements

This research was financially supported in part by the GEOIDE National Centre of Excellence, Canada and by the Natural Sciences and Engineering Research Council of Canada (NSERC).
The second author was partially supported by the Chinese Scholarship Council of PRC and the Key Laboratory of Geospace Environment and Geodesy, Ministry of Education of China (04-0108). We deeply appreciate the comments from two anonymous reviewers, which served to improve significantly the original manuscript.

References

Aitchison, J., Brown, J.A.C., 1957. The Lognormal Distribution with Special Reference to its Uses in Economics. Cambridge University Press, Cambridge, London.

Ary, D., Jacobs, L.C., 1976. Introduction to Statistics. Holt, Rinehart and Winston, New York.
Baisch, S., Bokelmann, G.H.R., 1999. Spectral analysis with incomplete time series: an example from seismology. Comput. Geosci. 25, 739–750.
Boy, J.P., Gegout, P., Hinderer, J., 2001. Gravity variations and global pressure loading. J. Geod. Soc. Jpn. 47, 267–272.
Courtier, N., Ducarme, B., Goodkind, J., Hinderer, J., Imanishi, Y., Seama, N., Sun, H., Merriam, J., Bengert, B., Smylie, D.E., 2000. Global superconducting gravimeter observations and the search for the translational modes of the inner core. Phys. Earth Planet. Int. 117, 3.
Crossley, D.J., 1993. Eigensolutions and seismic excitation of the Slichter mode triplet for a fully rotating earth model. EOS 73 (43), 60.
Crossley, D.J., Rochester, M.G., Peng, Z.R., 1992. Slichter modes and Love numbers. Geophys. Res. Lett. 19, 1679–1682.
Crossley, D., Jensen, O., Hinderer, J., 1995. Effective barometric admittance and gravity residuals. Phys. Earth Planet. Int. 90, 355–358.
Crossley, D., Hinderer, J., Casula, G., Francis, O., Hsu, H.-T., Imanishi, Y., Jentzsch, G., Kääriäinen, J., Merriam, J., Meurers, B., Neumeyer, J., Richter, B., Shibuya, K., Sato, T., van Dam, T., 1999. Network of superconducting gravimeters benefits a number of disciplines. EOS 80, 121–126.
Dahlen, F.A., Sailor, R.V., 1979. Rotational and elliptical splitting of the free oscillations of the earth. Geophys. J. R. Astr. Soc. 58, 609–623.
Dziewonski, A.M., Anderson, D.L., 1981. Preliminary reference earth model. Phys. Earth Planet. Int. 25, 297–356.
Gelb, A. (Ed.), 1974. Applied Optimal Estimation. The M.I.T. Press, Cambridge, MA, USA.
Gradshteyn, I.S., Ryzhik, I.M., 1965. Table of Integrals, Series and Products. Academic Press, New York.
Hines, W.W., Montgomery, D.C., 1990. Probability and Statistics in Engineering and Management Science, 3rd ed. John Wiley and Sons, New York.
Hogg, R.V., Craig, A.T., 1995. Introduction to Mathematical Statistics, 5th ed. Prentice Hall, New Jersey.
Hu, X.-G., Liu, L.-T., Hinderer, J., Sun, H.-P., 2005. Wavelet filter analysis of local atmospheric pressure effects on gravity variations. J. Geodesy 79, 447–459.
Jenkins, G.M., Watts, D.G., 1968. Spectral Analysis and its Applications. Holden-Day, San Francisco.
Kendall, M., Stuart, A., 1979. The Advanced Theory of Statistics: Inference and Relationship, vol. 2, 4th ed. MacMillan, New York.
Kroner, C., Jahr, Th., Jentzsch, G., 2004. Results from 44 months of observations with a superconducting gravimeter at Moxa/Germany. J. Geodyn. 38, 263–280.
Lomb, N.R., 1976. Least-squares frequency analysis of unequally spaced data. Astrophys. Space Sci. 39, 447–462.
Maul, G.A., Yanaway, A., 1978. Deep Sea Tides Determination from GEOS-3. NASA Contractor Report 141435. NOAA Atlantic Oceanographic and Meteorological Laboratories, Miami, FL.
Merriam, J.B., 1992. An ephemeris for gravity tide predictions at the nanogal level. Geophys. J. Int. 108, 415–422.
Mukai, A., Higashi, T., Takemoto, S., Nakagawa, I., Naito, I., 1995. Accurate estimation of atmospheric effects on gravity observations made with a superconducting gravity meter at Kyoto. Phys. Earth Planet. Int. 91, 149–159.
Neumeyer, J., Dittfeld, H.J., 1997. Results of three years observation with superconducting gravimeter at the GeoForschungsZentrum Potsdam. J. Geodesy 71 (2), 59–80.

Pagiatakis, S.D., Vaníček, P., 1985. Atmospheric perturbations of tidal tilt and gravity measurements at the U.N.B. Earth Tides Station. In: Vieira, R. (Ed.), Proceedings of the Tenth International Symposium on Earth Tides, September 23–27, Madrid, Spain. Consejo Superior de Investigaciones Científicas, Madrid.
Pagiatakis, S.D., 1999. Stochastic significance of peaks in the least-squares spectrum. J. Geodesy 73, 67–78.
Pagiatakis, S.D., 2000. Superconducting gravimeter data treatment and analysis. Cahiers du Centre Européen de Géodynamique et de Séismologie 17, 103–113.
Preisendorfer, R.W., 1988. Principal Component Analysis in Meteorology and Oceanography. Elsevier, Amsterdam.
Press, W.H., Teukolsky, S.A., 1988. Search algorithm for weak periodic signals in unevenly spaced data. Comput. Phys. 2 (6), 77–82.
Press, W.H., Teukolsky, S.A., Vetterling, W.T., Flannery, B.P., 1992. Numerical Recipes in FORTRAN: The Art of Scientific Computing, 2nd ed. Cambridge University Press, New York.
Priestley, M.B., 1981. Spectral Analysis and Time Series. Academic Press, London.
Rieutord, M., 2002. Slichter modes of the earth revisited. Phys. Earth Planet. Int. 131, 269–278.
Rogister, Y., 2003. Splitting of seismic-free oscillations and of the Slichter triplet using the normal mode theory of a rotating, ellipsoidal earth. Phys. Earth Planet. Int. 140, 169–182.
Rosat, S., Hinderer, J., Crossley, D., Rivera, L., 2003. The search for the Slichter mode: comparison of noise levels of superconducting gravimeters and investigation of the stacking method. Phys. Earth Planet. Int. 140, 183–202.
Rosat, S., Rogister, Y., Crossley, D., Hinderer, J., 2006. A search for the Slichter triplet with superconducting gravimeters: impact on the density jump at the inner core boundary. J. Geodyn. 41 (1–3), 296–306.
Scargle, J.D., 1982. Studies in astronomical time series analysis. II. Statistical aspects of spectral analysis of unevenly spaced data. Astrophys. J. 263, 835–853.
Slichter, L.B., 1961. The fundamental free mode of the Earth's inner core. Proc. Natl. Acad. Sci. 47 (2), 186–190.


Smylie, D.E., 1992. The inner core translational triplet and the density near Earth's center. Science 255, 1678–1682.
Smylie, D.E., Hinderer, J., Richter, B., Ducarme, B., 1993. The product spectra of gravity and barometric pressure in Europe. Phys. Earth Planet. Int. 80, 135–157.
Smylie, D.E., Francis, O., Merriam, J.B., 2001. Beyond tides: determination of core properties from superconducting gravimeter observations. J. Geod. Soc. Jpn. 47 (1), 364–372.
Steeves, R.R., 1981. Estimation of gravity tilt response to atmospheric phenomena at the Fredericton tiltimetric station using a least-squares response method. Ph.D. thesis, TR 79, Department of Surveying Engineering, University of New Brunswick, Fredericton, N.B., Canada.
Sun, H.-P., Hsu, H.-T., Jentzsch, G., Xu, J.-Q., 2002. Tidal gravity observations obtained with a superconducting gravimeter at Wuhan/China and its application to geodynamics. J. Geodyn. 33, 187–198.
Taylor, J., Hamilton, S., 1972. Some tests on the Vaníček method of spectral analysis. Astrophys. Space Sci. 17, 357–367.
Vaníček, P., 1969. Approximate spectral analysis by least-squares fit. Astrophys. Space Sci. 4, 387–391.
Vaníček, P., 1971. Further development and properties of the spectral analysis by least-squares. Astrophys. Space Sci. 12, 10–33.
Vaníček, P., Krakiwsky, E.J., 1986. Geodesy: The Concepts, 2nd ed. North Holland, Amsterdam.
Warburton, R., Goodkind, J., 1977. The influence of barometric pressure variations on gravity. Geophys. J. R. Astr. Soc. 48, 281–292.
Wells, D.E., Vaníček, P., Pagiatakis, S.D., 1985. Least Squares Spectral Analysis Revisited. TR 84, Department of Surveying Engineering, University of New Brunswick, Fredericton, Canada.
Widmer, R., Masters, G., Gilbert, F., 1988. The spherical Earth revisited. In: Proceedings of the 17th International Conference on Mathematical Geophysics, June, Blanes, Spain.
Xi, Q., 1987. A new complete development of the tide generating potential for the epoch J2000. Bull. Inf. Mar. Terr. 99, 6786–6812.
Xi, Q., 1989. The precision of the development of the tidal generating potential and some explanatory notes. Bull. Inf. Mar. Terr. 105, 7396–7404.