Statistics and Probability Letters 119 (2016) 155–162

Characterization based symmetry tests and their asymptotic efficiencies

B. Milošević, M. Obradović*
Faculty of Mathematics, University of Belgrade, Studentski trg 16, Belgrade, Serbia
* Corresponding author. E-mail address: [email protected] (M. Obradović).

Article history: Received 22 December 2015; Received in revised form 6 July 2016; Accepted 7 July 2016; Available online 6 August 2016.

Abstract: A new characterization of symmetric distributions is presented. Two classes of distribution-free symmetry tests based on it are proposed. Their Bahadur efficiency is calculated and used for comparison with similar tests. Some classes of most favorable alternatives are also determined.

MSC: 60F10; 62F05; 62G30

Keywords: Symmetry tests; Order statistics; Bahadur efficiency; U-statistics

1. Introduction

The assumption of symmetry of a distribution is important in many statistical procedures, and testing for symmetry has been a prominent topic in the statistics literature. Starting from the classical sign and Wilcoxon tests, a number of symmetry tests have been developed; some of them can be found in, e.g., Burgio and Nikitin (2001, 2007). In recent times, tests based on characterizations have become a popular direction in goodness-of-fit testing. Tests of this kind, for different classes of distributions, can be found in, e.g., Henze and Meintanis (2002), Jovanović et al. (2015) and Volkova and Nikitin (2014). Symmetric distributions, as a special class of distributions, are no exception; some symmetry tests based on characterizations have been studied in Baringhaus and Henze (1992), Nikitin (1996) and Litvinova (2001).

Ahsanullah (1992) proposed a characterization of symmetric distributions with respect to zero using the equidistribution of the absolute values of the minimal and maximal order statistics from a sample of arbitrary size. Based on this characterization, Nikitin and Ahsanullah (2015) proposed two classes of symmetry tests. Here we generalize Ahsanullah's characterization to arbitrary order statistics and construct new symmetry tests based on central order statistics.

2. Characterization and test statistics

Let $X_{(k;m)}$ be the $k$th order statistic from a sample of size $m$. We present the following characterization of symmetry.



http://dx.doi.org/10.1016/j.spl.2016.07.007


Theorem 1. Let $X_1,\ldots,X_m$ be i.i.d. continuous random variables with distribution function $F(x)$ and let $k \le \frac{m}{2}$. Then the random variables $|X_{(k;m)}|$ and $|X_{(m-k+1;m)}|$ are equally distributed if and only if $X_1$ is symmetric with respect to zero.

Proof. If $X_1$ is symmetric with respect to zero, then $|X_{(k;m)}|$ and $|X_{(m-k+1;m)}|$ are obviously equidistributed. We now consider the ``only if'' part. Since with probability one $X_{(k;m)} < X_{(m-k+1;m)}$, from the equidistribution of $|X_{(k;m)}|$ and $|X_{(m-k+1;m)}|$ we have that $X_{(k;m)}$ and $-X_{(m-k+1;m)}$ have the same distribution, and the following identity holds:
$$\sum_{j=k}^{m}\binom{m}{j}F(x)^{j}(1-F(x))^{m-j} = \sum_{j=k}^{m}\binom{m}{j}F(-x)^{m-j}(1-F(-x))^{j}. \qquad (1)$$
Put
$$r(a) = \sum_{j=k}^{m}\binom{m}{j}a^{j}(1-a)^{m-j}.$$

Expression (1) then becomes $r(F(x)) = r(1-F(-x))$. Since $r(a)$ is the distribution function of the $k$th order statistic from the uniform distribution, it is monotonically increasing for $a\in(0,1)$, and we conclude that $F(x) = 1-F(-x)$ for all $x\in\mathbb{R}$, i.e. $X_1$ is symmetric. □

Let $\mathcal{G}_0$ be the class of all continuous distributions that are symmetric about zero. Based on a sample of size $n$, $X_1,\ldots,X_n$, from an unknown distribution $F$, we want to test the null hypothesis $H_0: F\in\mathcal{G}_0$ against the alternative $H_1: F\notin\mathcal{G}_0$. In view of our characterization, numerous tests can be constructed, depending on the choice of $k$ and $m$ in Theorem 1. The tests based on extremal order statistics were considered in Nikitin and Ahsanullah (2015). Here we propose two classes of tests, of integral and Kolmogorov type, based on the equidistribution of the two central order statistics from subsamples of even size $m = 2k$. These tests have the following test statistics:
$$J_n^k = \int_0^{\infty}\big(H_n^{(k)}(t) - G_n^{(k)}(t)\big)\,dQ_n(t), \qquad (2)$$
$$K_n^k = \sup_{t>0}\big|H_n^{(k)}(t) - G_n^{(k)}(t)\big|, \qquad (3)$$
where $Q_n$ is the empirical distribution function of the sample $|X_1|,\ldots,|X_n|$ and
$$H_n^{(k)}(t) = \binom{n}{2k}^{-1}\sum_{I_{2k}} I\{|X_{(k),X_{i_1},\ldots,X_{i_{2k}}}| < t\}, \qquad (4)$$
$$G_n^{(k)}(t) = \binom{n}{2k}^{-1}\sum_{I_{2k}} I\{|X_{(k+1),X_{i_1},\ldots,X_{i_{2k}}}| < t\}, \qquad (5)$$
are U-empirical distribution functions related to the characterization. Here $X_{(k),X_1,\ldots,X_m}$ denotes the $k$th order statistic from the sample $X_1,\ldots,X_m$ and $I_m = \{(i_1,\ldots,i_m): 1\le i_1 < \cdots < i_m \le n\}$. We consider large values of both statistics to be significant. The tests with test statistics $J_n^k$ are not consistent against all alternatives; however, consistency holds for many common alternatives.

Next we show that our test statistics are distribution-free under the null hypothesis of symmetry. Statistic $J_n^k$ can be written as
$$J_n^k = \int_{1/2}^{1}\big(H_n^{(k)}(F^{-1}(y)) - G_n^{(k)}(F^{-1}(y))\big)\,dQ_n(F^{-1}(y)), \qquad (6)$$
where $F^{-1}(y)$ is the inverse of the distribution function, assuming, for simplicity, that it is strictly monotone. For $y > \frac12$ we have
$$H_n^{(k)}(F^{-1}(y)) = \binom{n}{2k}^{-1}\sum_{I_{2k}} I\{|X_{(k),X_{i_1},\ldots,X_{i_{2k}}}| < F^{-1}(y)\}.$$
Using the symmetry of the null distribution and the probability integral transform, we get
$$H_n^{(k)}(F^{-1}(y)) = \binom{n}{2k}^{-1}\sum_{I_{2k}} I\{1-y < U_{(k),U_{i_1},\ldots,U_{i_{2k}}} < y\},$$
where $U_1,\ldots,U_n$ are independent random variables with uniform $U[0,1]$ distribution. In a similar manner, both $G_n^{(k)}(F^{-1}(y))$ and $Q_n(F^{-1}(y))$ can also be expressed as functions of $U_1,\ldots,U_n$. Thus, the statistics $J_n^k$ are distribution-free. Analogously, the same holds for the statistics $K_n^k$. Therefore, without loss of generality, we may suppose that, under the null hypothesis, the random variables $X_1,\ldots,X_n$ are from the uniform $U[-1,1]$ distribution.
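The definitions (2)–(5) translate directly into code by enumerating all $\binom{n}{2k}$ subsamples. The following sketch is ours, not the authors' implementation, and is feasible only for small $n$; all function names are illustrative.

```python
# Brute-force evaluation of the U-empirical dfs (4)-(5) and of the statistics
# J_n^k in (2) and K_n^k in (3). Illustrative only: the cost grows like C(n, 2k).
from itertools import combinations
import numpy as np

def u_empirical_dfs(x, k, t):
    """Return (H_n^(k)(t), G_n^(k)(t)) for the sample x at the point t > 0."""
    x = np.asarray(x)
    n = len(x)
    h = g = 0
    count = 0
    for idx in combinations(range(n), 2 * k):
        sub = np.sort(x[list(idx)])
        h += abs(sub[k - 1]) < t   # k-th order statistic of the subsample
        g += abs(sub[k]) < t       # (k+1)-th order statistic of the subsample
        count += 1
    return h / count, g / count

def symmetry_statistics(x, k):
    """Compute J_n^k (integral type) and K_n^k (Kolmogorov type) by brute force."""
    x = np.asarray(x)
    assert len(x) > 2 * k, "need n > 2k observations"
    abs_x = np.abs(x)
    # (2): integrate H - G against the empirical df of |X_1|, ..., |X_n|
    j_stat = np.mean([np.subtract(*u_empirical_dfs(x, k, t)) for t in abs_x])
    # (3): H - G is piecewise constant with jumps at observed absolute values,
    # so the supremum is attained just to the right of one of them
    eps = 1e-9
    k_stat = max(abs(np.subtract(*u_empirical_dfs(x, k, t + eps))) for t in abs_x)
    return j_stat, k_stat

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sample = rng.normal(loc=0.3, size=15)   # a sample that is not symmetric about zero
    print(symmetry_statistics(sample, k=2))
```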


3. Statistic $J_n^k$

After integrating, the expression (2) becomes
$$J_n^k = \frac{1}{n\binom{n}{2k}}\sum_{I_{2k}}\sum_{i_{2k+1}=1}^{n}\Big(I\{|X_{(k),X_{i_1},\ldots,X_{i_{2k}}}| < |X_{i_{2k+1}}|\} - I\{|X_{(k+1),X_{i_1},\ldots,X_{i_{2k}}}| < |X_{i_{2k+1}}|\}\Big).$$
Statistic $J_n^k$ is asymptotically equivalent to the U-statistic with symmetric kernel (see Korolyuk and Borovskikh, 1994)
$$\Phi_k(X_1,\ldots,X_{2k+1}) = \frac{1}{(2k+1)!}\sum_{\pi\in\Pi(2k+1)}\Big(I\{|X_{(k),X_{\pi(1)},\ldots,X_{\pi(2k)}}| < |X_{\pi(2k+1)}|\} - I\{|X_{(k+1),X_{\pi(1)},\ldots,X_{\pi(2k)}}| < |X_{\pi(2k+1)}|\}\Big), \qquad (7)$$

where $\Pi(m)$ is the set of permutations of the set $\{1,\ldots,m\}$.

The projection of $\Phi_k(X_1,\ldots,X_{2k+1})$ on $X_1$ under $H_0$ is
$$\varphi_k(s) = E\big(\Phi_k(X_1,\ldots,X_{2k+1})\mid X_1 = s\big) = \frac{1}{2k+1}\Big(P\{|X_{(k),X_2,\ldots,X_{2k+1}}| < |s|\} - P\{|X_{(k+1),X_2,\ldots,X_{2k+1}}| < |s|\}\Big) + \frac{2k}{2k+1}\Big(P\{|X_{(k),s,X_2,\ldots,X_{2k}}| < |X_{2k+1}|\} - P\{|X_{(k+1),s,X_2,\ldots,X_{2k}}| < |X_{2k+1}|\}\Big). \qquad (8)$$

After some calculations we obtain
$$\varphi_k(s) = \begin{cases}
-\dfrac{\binom{2k}{k}}{(2k+1)2^{2k-1}}\big(1-(1-s^2)^k\big), & s\in(-1,0),\\[6pt]
\dfrac{\binom{2k}{k}}{(2k+1)2^{2k-1}}\big(1-(1-s^2)^k\big), & s\in(0,1).
\end{cases} \qquad (9)$$

It is easy to show that $E(\varphi_k(X_1)) = 0$. The variance of this projection is
$$\sigma_k^2 = E\big(\varphi_k^2(X_1)\big) = \binom{2k}{k}^{2}\,\frac{\sqrt{\pi}\,\Gamma\!\big(k+\tfrac32\big)\Gamma(2k+1) + 2\Gamma\!\big(2k+\tfrac32\big)\Big(\Gamma\!\big(k+\tfrac32\big) - \sqrt{\pi}\,\Gamma(k+1)\Big)}{2^{4k-1}(2k+1)^{2}\,\Gamma\!\big(k+\tfrac32\big)\Gamma\!\big(2k+\tfrac32\big)}. \qquad (10)$$

Thus, this projection is non-degenerate. Applying the theorem from Hoeffding (1948) we get that the asymptotic distribution of $\sqrt{n}\,J_n^k$, under $H_0$, is normal $N\big(0, (2k+1)^2\sigma_k^2\big)$.
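As a quick sanity check of (9) and (10), one can compare the closed form for $\sigma_k^2$ with a direct numerical integration of $\varphi_k^2$ under the $U[-1,1]$ reduction. The sketch below is ours and assumes scipy is available; the two printed columns should agree.

```python
# Numerical check that formula (10) equals E(phi_k^2(X_1)) for X_1 ~ U[-1, 1],
# with phi_k taken from (9). This is an illustration, not code from the paper.
from math import comb, gamma, sqrt, pi
from scipy.integrate import quad

def phi_k(s, k):
    """Projection phi_k from (9), defined on (-1, 1)."""
    c = comb(2 * k, k) / ((2 * k + 1) * 2 ** (2 * k - 1))
    val = c * (1.0 - (1.0 - s * s) ** k)
    return -val if s < 0 else val

def sigma2_closed_form(k):
    """Formula (10) for the variance of the projection."""
    num = (sqrt(pi) * gamma(k + 1.5) * gamma(2 * k + 1)
           + 2 * gamma(2 * k + 1.5) * (gamma(k + 1.5) - sqrt(pi) * gamma(k + 1)))
    den = 2 ** (4 * k - 1) * (2 * k + 1) ** 2 * gamma(k + 1.5) * gamma(2 * k + 1.5)
    return comb(2 * k, k) ** 2 * num / den

def sigma2_numeric(k):
    """E(phi_k^2(X_1)) for X_1 uniform on [-1, 1]."""
    return quad(lambda s: 0.5 * phi_k(s, k) ** 2, -1.0, 1.0)[0]

for k in (1, 2, 3):
    print(k, sigma2_closed_form(k), sigma2_numeric(k))   # the two values should match
```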

3.1. Local Bahadur efficiency

For measuring the quality of our tests we shall use the local Bahadur efficiency. We give a brief introductory explanation here; for more details we refer to Nikitin (1995) and Bahadur (1971). For two sequences of test statistics with the same null and alternative hypotheses, the Bahadur relative efficiency is defined as the ratio of sample sizes needed to reach a fixed power when the level of significance approaches zero.

Let $\mathcal{G} = \{G(x;\theta),\ \theta\ge 0\}$ be the family of distributions with densities $g(x;\theta)$ such that $g(x;\theta)\in\mathcal{G}_0$ only for $\theta = 0$. Then our null symmetry hypothesis can be reformulated as $H_0:\theta = 0$, and the alternative as $H_1:\theta > 0$. We also assume that the distributions from the class $\mathcal{G}$ satisfy the regularity conditions from Nikitin (1995, ch. 6), including differentiation under the integral sign with respect to $\theta$. Denote $h(x) = g'_\theta(x;0)$.

Suppose that, for $\theta > 0$, the sequence $\{T_n\}$ of test statistics converges in probability to some finite function $b(\theta) > 0$. Suppose also that the large deviation limit
$$\lim_{n\to\infty} n^{-1}\ln P_{H_0}(T_n \ge t) = -f(t) \qquad (11)$$
exists for any $t$ in an open interval $I$ on which $f$ is continuous, and that $\{b(\theta),\ \theta>0\}\subset I$. In such a case, the local Bahadur efficiency can be defined as (see Nikitin and Peaucelle, 2004)
$$\mathrm{eff}(T) = \lim_{\theta\to 0}\frac{c_T(\theta)}{2K(\theta)}, \qquad (12)$$
where $c_T(\theta) = 2f_T(b(\theta))$ is the Bahadur exact slope, a function proportional to the exponential rate of decrease of the test size as the sample size increases, while $2K(\theta)$ is twice the Kullback–Leibler distance between $G(x;\theta)$ and the family of symmetric distributions. For close alternatives the following expansion holds (see Nikitin, 1995):
$$2K(\theta) = \frac{1}{4}\int_{-\infty}^{\infty}\frac{(h(x)-h(-x))^2}{g(x;0)}\,dx\cdot\theta^2 + o(\theta^2), \qquad \theta\to 0. \qquad (13)$$


Let us now find the large deviation function for the sequence of statistics $J_n^k$ under the null hypothesis. The kernel $\Phi_k$ is centered, non-degenerate and bounded. Applying the results on large deviations of non-degenerate U- and V-statistics from Nikitin and Ponikarov (1999), we state the following lemma.

Lemma 2. For the statistic $J_n^k$, under $H_0$, the large deviation limit from (11) exists and is an analytic function for sufficiently small $\varepsilon>0$. Moreover,
$$f_{J_n^k}(\varepsilon) = \frac{1}{2(2k+1)^2\sigma_k^2}\,\varepsilon^2 + o(\varepsilon^2), \qquad \varepsilon\to 0,$$
where $\sigma_k^2$ is given in (10).

The limit in probability of the sequence $J_n^k$ under the alternative is given in the following lemma. The proof follows from the general result of Nikitin and Peaucelle (2004).

Lemma 3. For a given alternative distribution $g(x;\theta)$ from $\mathcal{G}$ it holds that
$$b_{J_n^k}(\theta) = (2k+1)\int_{-\infty}^{\infty}\varphi_k(2G(x;0)-1)h(x)\,dx\cdot\theta + o(\theta), \qquad \theta\to 0, \qquad (14)$$
where $\varphi_k$ is given in (9).

We now proceed with the calculation of local Bahadur efficiencies against the following three classes of alternatives:

• a location alternative with the density
$$g(x;\theta) = g(x-\theta;0), \qquad \theta>0; \qquad (15)$$

• a skew alternative in the sense of Azzalini (2014) with the density
$$g(x;\theta) = 2g(x;0)G(\theta x;0), \qquad \theta>0; \qquad (16)$$

• a skew alternative in the sense of Fernandez and Steel (1998) with the density
$$g(x;\theta) = \begin{cases}
\dfrac{2}{(1+\theta)+\frac{1}{1+\theta}}\, g\!\left(\dfrac{x}{1+\theta};0\right), & x<0,\\[8pt]
\dfrac{2}{(1+\theta)+\frac{1}{1+\theta}}\, g\big((1+\theta)x;0\big), & x\ge 0,
\end{cases} \qquad \theta>0; \qquad (17)$$

in the case of normal, logistic and Cauchy null distributions. Since the case $k=1$ lies in the intersection with Ahsanullah's characterization and has already been considered in Nikitin and Ahsanullah (2015), for illustrating the calculation of efficiencies we shall use the case $k=2$. From Lemma 2 we have that $f_{J_n^2}(\varepsilon) = 280\varepsilon^2/107 + o(\varepsilon^2)$, $\varepsilon\to 0$. In the case of the logistic distribution with distribution function $F_L$ and the location alternative (15) we have, using Lemma 3,
$$b_{J_n^2}(\theta) = 5\int_{-\infty}^{\infty}\varphi_2(2F_L(x)-1)\,\frac{e^x(e^x-1)}{(1+e^x)^3}\,dx\cdot\theta + o(\theta) \approx 0.25\,\theta + o(\theta).$$
The expression (13) in this case is equal to 0.33; hence from (12) we get that the local Bahadur efficiency is equal to 0.981. For the other values of $k$ and the other alternatives the calculations are similar. We present the obtained values of local Bahadur efficiencies in Table 1. The letters (N, L, C) stand for the null distributions, and the indices (L, A, F) for the type of alternative; e.g. NA stands for the Azzalini alternative in the case of a normal null distribution. The Kullback–Leibler distance for the alternative (16) in the case of a Cauchy null distribution is infinite, so we excluded this particular case.
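The illustration above can also be redone numerically. The sketch below is ours rather than the authors' code: it evaluates $b$ from (14), takes the large-deviation coefficient $280/107$ quoted above, computes $2K(\theta)$ from (13) and the efficiency from (12) for the logistic null with the location alternative (15).

```python
# Numerical rerun (ours) of the k = 2 illustration: logistic null, location alternative (15).
import numpy as np
from scipy.integrate import quad

F = lambda x: 1.0 / (1.0 + np.exp(-x))               # logistic cdf
g = lambda x: np.exp(-x) / (1.0 + np.exp(-x)) ** 2   # logistic density
h = lambda x: g(x) * (2.0 * F(x) - 1.0)              # h = dg(x;theta)/dtheta at 0 = -g'(x) for the shift (15)

def phi_2(s):
    """Projection (9) for k = 2; the constant 3/20 is C(4,2) / (5 * 2^3)."""
    val = (3.0 / 20.0) * (1.0 - (1.0 - s * s) ** 2)
    return -val if s < 0 else val

b_coef = 5.0 * quad(lambda x: phi_2(2.0 * F(x) - 1.0) * h(x), -40, 40)[0]   # (14) with k = 2
f_coef = 280.0 / 107.0                                                      # Lemma 2, k = 2 (quoted above)
K2_coef = 0.25 * quad(lambda x: (h(x) - h(-x)) ** 2 / g(x), -40, 40)[0]     # (13)
eff = 2.0 * f_coef * b_coef ** 2 / K2_coef                                  # (12)
print(b_coef, K2_coef, eff)   # approximately 0.25, 0.33 and 0.98, as quoted in the text
```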

4. Statistic $K_n^k$

For every fixed $t\in(0,1)$, the expression $H_n^{(k)}(t) - G_n^{(k)}(t)$ is a U-statistic with symmetric kernel
$$\Xi_k(X_1,\ldots,X_{2k};t) = I\{|X_{(k),X_1,\ldots,X_{2k}}| < t\} - I\{|X_{(k+1),X_1,\ldots,X_{2k}}| < t\}. \qquad (18)$$
Its projection under $H_0$ on $X_1$ is
$$\xi_k(s;t) = E\big(\Xi_k(X_1,\ldots,X_{2k};t)\mid X_1 = s\big) = \binom{2k-1}{k-1}\left(\frac{t+1}{2}\right)^{k-1}\left(\frac{1-t}{2}\right)^{k-1} t\,\big({-I}\{s<-t\} + I\{s>t\}\big). \qquad (19)$$


Table 1
Local Bahadur efficiencies for $J_n^k$.

Alt. | eff($J_n^1$) | eff($J_n^2$) | eff($J_n^3$) | max_k eff($J_n^k$)
NL   | 0.977 | 0.957 | 0.932 | 0.977 (k = 1)
NA   | 0.977 | 0.957 | 0.932 | 0.977 (k = 1)
NF   | 0.819 | 0.704 | 0.638 | 0.819 (k = 1)
LL   | 0.938 | 0.981 | 0.989 | 0.989 (k = 3)
LA   | 0.962 | 0.920 | 0.885 | 0.962 (k = 1)
LF   | 0.915 | 0.819 | 0.766 | 0.915 (k = 1)
CL   | 0.358 | 0.497 | 0.587 | 0.877 (k = 42)
CF   | 0.958 | 0.998 | 0.974 | 0.998 (k = 2)

It is easy to show that, for every $t\in(0,1)$, $E(\xi_k(X_1;t)) = 0$, and its variance is
$$\tau_k^2(t) = \binom{2k-1}{k-1}^{2} 2^{4-4k}\, t^2 (1-t)(1-t^2)^{2k-2}. \qquad (20)$$
The maximum of $\tau_k^2(t)$ is attained at $t_0^{(k)} = \big(\sqrt{32k-7}-1\big)/\big(2(4k-1)\big)$ and is equal to
$$\tau_k^2 = \binom{2k-1}{k-1}^{2}\,\frac{\big(\sqrt{32k-7}-1\big)^{2}\left(1-\frac{\sqrt{32k-7}-1}{2(4k-1)}\right)^{2k-1}\left(1+\frac{\sqrt{32k-7}-1}{2(4k-1)}\right)^{2k-2}}{2^{4k-2}(4k-1)^{2}}. \qquad (21)$$
The variance $\tau_k^2(t)$ is a polynomial, positive on $(0,1)$. Therefore, the family of kernels $\{\Xi_k(\cdot;t),\ t\in(0,1)\}$ is non-degenerate in the sense of Nikitin (2010).
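The maximizer and maximum in (21) are easy to verify numerically against (20); the short sketch below (ours, not from the paper) does so for $k = 1, 2, 3$.

```python
# Check (ours) that the closed-form maximizer t_0^(k) and maximum tau_k^2 from (21)
# agree with a direct numerical maximization of tau_k^2(t) given by (20).
from math import comb, sqrt
import numpy as np

def tau2_of_t(t, k):
    """Variance of the projection xi_k(.; t), formula (20)."""
    return (comb(2 * k - 1, k - 1) ** 2 * 2 ** (4 - 4 * k)
            * t ** 2 * (1 - t) * (1 - t ** 2) ** (2 * k - 2))

def tau2_max_closed_form(k):
    """Maximizer and maximum value from (21)."""
    t0 = (sqrt(32 * k - 7) - 1) / (2 * (4 * k - 1))
    num = (comb(2 * k - 1, k - 1) ** 2 * (sqrt(32 * k - 7) - 1) ** 2
           * (1 - t0) ** (2 * k - 1) * (1 + t0) ** (2 * k - 2))
    return t0, num / (2 ** (4 * k - 2) * (4 * k - 1) ** 2)

for k in (1, 2, 3):
    t_grid = np.linspace(1e-6, 1 - 1e-6, 200001)
    vals = tau2_of_t(t_grid, k)
    t0, tau2 = tau2_max_closed_form(k)
    # closed form vs. grid maximization; the pairs should coincide
    print(k, (t0, tau2), (t_grid[np.argmax(vals)], vals.max()))
```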



Using the arguments from Silverman (1983) we have that, under $H_0$, the U-empirical process $\sqrt{n}\big(H_n^{(k)}(t) - G_n^{(k)}(t)\big)$ weakly converges to a certain centered Gaussian process $\{\kappa(t),\ t\in(0,1)\}$, while the statistic $\sqrt{n}\,K_n^k$ converges in distribution to the random variable $\sup_{t\in(0,1)}|\kappa(t)|$, whose distribution is unknown.

4.1. Local Bahadur efficiencies

In this section we shall calculate the local Bahadur efficiencies for the statistics $K_n^k$. We begin with the large deviation function for the sequence of statistics $K_n^k$ under the null hypothesis. The kernel $\Xi_k$ is centered, non-degenerate and bounded. Applying the results on large deviations of non-degenerate U- and V-statistics from Nikitin (2010), we state the following lemma.

Lemma 4. For the statistic $K_n^k$, under $H_0$, the large deviation limit from (11) exists and is an analytic function for sufficiently small $\varepsilon>0$. Moreover,
$$f_{K_n^k}(\varepsilon) = \frac{1}{8k^2\tau_k^2}\,\varepsilon^2 + o(\varepsilon^2), \qquad \varepsilon\to 0,$$
where $\tau_k^2$ is given in (21).

The limit in probability of the sequence $K_n^k$ under the alternative is given in the following lemma.

Lemma 5. For a given alternative distribution $g(x;\theta)$ from $\mathcal{G}$ it holds that
$$b_{K_n^k}(\theta) = 2k\sup_{t>0}\left|\int_{-\infty}^{\infty}\xi_k\big(2G(x;0)-1;\,2G(t;0)-1\big)h(x)\,dx\right|\cdot\theta + o(\theta), \qquad \theta\to 0.$$

Proof. Using the Glivenko–Cantelli theorem for U-empirical distribution functions from Helmers et al. (1988) we get
$$H_n^{(k)}(t) - G_n^{(k)}(t) \xrightarrow{\ P_\theta\ } P\{|X_{(k),X_1,\ldots,X_{2k}}| < t\} - P\{|X_{(k+1),X_1,\ldots,X_{2k}}| < t\} = \binom{2k}{k}G(t;\theta)^k(1-G(t;\theta))^k - \binom{2k}{k}G(-t;\theta)^k(1-G(-t;\theta))^k. \qquad (22)$$

Table 2
Local Bahadur efficiencies for $K_n^k$.

Alt. | eff($K_n^1$) | eff($K_n^2$) | eff($K_n^3$) | max_k eff($K_n^k$)
NL   | 0.764 | 0.810 | 0.804 | 0.810 (k = 2)
NA   | 0.764 | 0.810 | 0.804 | 0.810 (k = 2)
NF   | 0.677 | 0.570 | 0.518 | 0.677 (k = 1)
LL   | 0.750 | 0.865 | 0.886 | 0.889 (k = 4)
LA   | 0.747 | 0.767 | 0.754 | 0.767 (k = 2)
LF   | 0.753 | 0.678 | 0.627 | 0.753 (k = 1)
CL   | 0.376 | 0.569 | 0.662 | 0.858 (k = 37)
CF   | 0.803 | 0.904 | 0.903 | 0.904 (k = 2)

Denote the expression on the right-hand side of the last equation by $a_k(\theta)$. Then we have
$$a_k'(0) = -\binom{2k}{k}k\big(G(t;0)(1-G(t;0))\big)^{k-1}(2G(t;0)-1)\big(G'_\theta(t;0)+G'_\theta(-t;0)\big)$$
$$= 2k\binom{2k-1}{k-1}\big(G(t;0)(1-G(t;0))\big)^{k-1}(2G(t;0)-1)\cdot\left(\int_{-\infty}^{\infty}h(s)I\{s>t\}\,ds - \int_{-\infty}^{\infty}h(s)I\{s<-t\}\,ds\right)$$
$$= 2k\int_{-\infty}^{\infty}\xi_k\big(2G(s;0)-1;\,2G(t;0)-1\big)h(s)\,ds.$$
Expanding the function $a_k(\theta)$ in a Maclaurin series we complete the proof. □

As in the previous section, we shall illustrate the calculation of Bahadur efficiency for $k=2$. From Lemma 4 we have that the large deviation function is $f_{K_n^2}(\varepsilon) = 0.78\varepsilon^2 + o(\varepsilon^2)$, $\varepsilon\to 0$. In the case of a normal distribution and the alternative (17) we have from Lemma 5 that the limit in probability is $b_{K_n^2}(\theta) \approx 1.046\,\theta + o(\theta)$, $\theta\to 0$; therefore we obtain an efficiency equal to 0.570. For the other alternatives the calculations are similar. We present them in Table 2, in an analogous way as in the case of the integral-type statistics.
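The $k=2$ illustration can again be checked numerically. The sketch below is ours: the score function $h$ is obtained by numerically differentiating the density (17) with respect to $\theta$ at zero, and the expression from Lemma 5 is evaluated on a grid of $t$; the supremum should be close to the value 1.046 quoted above.

```python
# Numerical rerun (ours) of the Lemma 5 illustration: normal null, alternative (17), k = 2.
import numpy as np
from math import comb
from scipy.stats import norm
from scipy.integrate import quad

def fs_density(x, theta):
    """Fernandez-Steel density (17) built on the standard normal g(.;0)."""
    gam = 1.0 + theta
    c = 2.0 / (gam + 1.0 / gam)
    return c * np.where(x < 0, norm.pdf(x / gam), norm.pdf(gam * x))

def h(x, eps=1e-5):
    """Numerical derivative of (17) with respect to theta at theta = 0."""
    return (fs_density(x, eps) - fs_density(x, -eps)) / (2 * eps)

def xi_k(s, t, k):
    """Projection xi_k(s; t) from (19)."""
    c = comb(2 * k - 1, k - 1) * ((t + 1) / 2) ** (k - 1) * ((1 - t) / 2) ** (k - 1) * t
    return c * (-1.0 if s < -t else (1.0 if s > t else 0.0))

def b_K(k, t_grid):
    """Leading coefficient of b_{K_n^k}(theta) from Lemma 5, by quadrature and a grid sup."""
    vals = []
    for t in t_grid:
        u_t = 2 * norm.cdf(t) - 1
        inner, _ = quad(lambda x: xi_k(2 * norm.cdf(x) - 1, u_t, k) * h(x),
                        -10, 10, points=[-t, t], limit=200)
        vals.append(abs(inner))
    return 2 * k * max(vals)

print(b_K(2, np.linspace(0.05, 3.0, 60)))   # the text reports approximately 1.046
```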

5. Conditions of local asymptotic optimality

Since in nonparametrics it is usually not possible to determine the best test for a given set of null and alternative hypotheses, it is often useful to find the alternatives for which a test performs optimally. Such alternatives, with local Bahadur efficiency equal to one, are called most favorable alternatives. For more on this subject we refer to Nikitin (1995). Here we present two classes of most favorable alternatives, one for each class of our tests.

Theorem 6. Let $g(x;\theta)$ be a density from $\mathcal{G}$ that satisfies the condition
$$\int_{-\infty}^{\infty}\frac{h^2(x)}{g(x;0)}\,dx < \infty.$$
Then, for small $\theta>0$, the alternative densities
$$g(x;\theta) = g(x;0) + C\varphi_k(x)\theta + o(\theta), \qquad C>0,\ g(x;0)\in\mathcal{G}_0,\ \theta\to 0, \qquad (23)$$
$$g(x;\theta) = g(x;0) + C\xi_k\big(x;t_0^{(k)}\big)\theta + o(\theta), \qquad C>0,\ g(x;0)\in\mathcal{G}_0,\ \theta\to 0, \qquad (24)$$
where $t_0^{(k)} = \big(\sqrt{32k-7}-1\big)/\big(2(4k-1)\big)$, are most favorable for $J_n^k$ and $K_n^k$, respectively.

Proof. Denote $h_0(x) = (h(x)-h(-x))/2$ and let $g(x;0)\in\mathcal{G}_0$. Then
$$\frac{1}{4}\int_{-\infty}^{\infty}\frac{(h(x)-h(-x))^2}{g(x;0)}\,dx = \int_{-\infty}^{\infty}\frac{h_0^2(x)}{g(x;0)}\,dx.$$
Since the projection $\varphi_k$ is an odd function, we get
$$\int_{-\infty}^{\infty}\varphi_k(x)h_0(x)\,dx = \int_{-\infty}^{\infty}\varphi_k(x)h(x)\,dx.$$


Table 3
Comparison of efficiencies. The comparison is made between statistics of the same order: $J_n^1\equiv I_n^{(3)}$ (order 3); $J_n^2$ and $I_n^{(5)}$ (order 5); $J_n^3$ and $I_n^{(7)}$ (order 7); $K_n^1\equiv D_n^{(2)}$ (order 2); $K_n^2$ and $D_n^{(4)}$ (order 4); $K_n^3$ and $D_n^{(6)}$ (order 6).

Alt. | $J_n^1\equiv I_n^{(3)}$ | $J_n^2$ | $I_n^{(5)}$ | $J_n^3$ | $I_n^{(7)}$ | $K_n^1\equiv D_n^{(2)}$ | $K_n^2$ | $D_n^{(4)}$ | $K_n^3$ | $D_n^{(6)}$
NL   | 0.977 | 0.957 | 0.975 | 0.932 | 0.956 | 0.764 | 0.810 | 0.733 | 0.804 | 0.636
NA   | 0.977 | 0.957 | 0.975 | 0.932 | 0.956 | 0.764 | 0.810 | 0.733 | 0.804 | 0.636
NF   | 0.819 | 0.704 | 0.837 | 0.638 | 0.884 | 0.677 | 0.570 | 0.694 | 0.518 | 0.702
LL   | 0.937 | 0.981 | 0.923 | 0.989 | 0.868 | 0.750 | 0.865 | 0.697 | 0.885 | 0.549
LA   | 0.962 | 0.920 | 0.964 | 0.885 | 0.959 | 0.747 | 0.767 | 0.725 | 0.754 | 0.651
LF   | 0.915 | 0.819 | 0.928 | 0.766 | 0.959 | 0.753 | 0.678 | 0.756 | 0.627 | 0.727
CL   | 0.358 | 0.497 | 0.332 | 0.587 | 0.253 | 0.376 | 0.570 | 0.314 | 0.662 | 0.162
CF   | 0.958 | 0.998 | 0.944 | 0.974 | 0.890 | 0.803 | 0.904 | 0.747 | 0.903 | 0.582

Table 4
Simulated empirical sizes and powers at the level 0.05. For each alternative the columns give, in order, $J_n^1$, $J_n^2$, $K_n^1$, $K_n^2$, first for n = 20 and then for n = 50.

Alternative |  n = 20: $J_n^1$  $J_n^2$  $K_n^1$  $K_n^2$  |  n = 50: $J_n^1$  $J_n^2$  $K_n^1$  $K_n^2$
N           |  0.05  0.05  0.04  0.04  |  0.05  0.05  0.05  0.04
C           |  0.05  0.05  0.04  0.04  |  0.04  0.05  0.05  0.05
L           |  0.05  0.05  0.04  0.04  |  0.05  0.05  0.05  0.05
t           |  0.05  0.05  0.04  0.04  |  0.05  0.05  0.05  0.05
NL (0.5)    |  0.68  0.73  0.26  0.34  |  0.96  0.96  0.80  0.88
NL (1)      |  1.00  1.00  0.82  0.92  |  1.00  1.00  1.00  1.00
CL (0.5)    |  0.24  0.33  0.04  0.09  |  0.40  0.50  0.15  0.36
CL (1)      |  0.48  0.62  0.10  0.26  |  0.72  0.83  0.60  0.84
LL (0.5)    |  0.33  0.40  0.08  0.12  |  0.60  0.63  0.30  0.44
LL (1)      |  0.76  0.82  0.30  0.44  |  0.92  0.94  0.81  0.92
tL (0.5)    |  0.61  0.68  0.20  0.30  |  0.92  0.94  0.69  0.84
tL (1)      |  0.98  0.99  0.67  0.86  |  1.00  1.00  0.98  1.00
NF (0.5)    |  0.85  0.80  0.50  0.51  |  1.00  1.00  1.00  0.96
CF (0.5)    |  0.67  0.67  0.24  0.35  |  0.96  0.96  0.78  0.90
LF (0.5)    |  0.82  0.78  0.43  0.47  |  0.99  0.99  0.95  0.96
tF (0.5)    |  0.74  0.73  0.32  0.41  |  0.98  0.98  0.88  0.93

The local Bahadur efficiency of the statistic $J_n^k$ is
$$\mathrm{eff}(J_n^k) = \frac{\left(\int_{-\infty}^{\infty}\varphi_k(x)h_0(x)\,dx\right)^{2}}{\int_{-\infty}^{\infty}\dfrac{h_0^2(x)}{g(x;0)}\,dx\;\int_{-\infty}^{\infty}\varphi_k^2(x)g(x;0)\,dx}.$$

This expression is less than or equal to 1 due to the Cauchy–Schwarz inequality. The equality holds if $h_0(x) = C\varphi_k(x)g(x;0)$ for some $C>0$. Since this is true for the alternatives from (23), the first part of the theorem is proven. The proof of the second part is analogous, so we omit it here. □

6. Comparison of efficiencies

In Table 3 the efficiencies obtained in Sections 3 and 4 are compared with the corresponding efficiencies of the classes of test statistics $I_n^{(k)}$ and $D_n^{(k)}$ based on the characterization via extremal order statistics from Nikitin and Ahsanullah (2015). The comparison is always done for statistics of the same order, and the statistics are denoted in the same way as in Nikitin and Ahsanullah (2015). Note that the pairs of statistics $J_n^1$ and $I_n^{(3)}$, as well as $K_n^1$ and $D_n^{(2)}$, and therefore their efficiencies, coincide. From the table we can see that no test dominates another. However, one can see which statistic is preferable in which case and use it accordingly. There is no clear general rule, except in the case of a Cauchy null distribution, where the tests based on the characterization via central order statistics perform better than the corresponding tests based on extremal order statistics.

7. Power study

The empirical sizes and powers of our tests are estimated by the Monte Carlo method with 10 000 replications at the 0.05 level of significance. The critical values used for the simulation were obtained from the uniform $U[-1,1]$ distribution. We considered the following null distributions: standard normal, logistic, Cauchy and Student's $t$ with two degrees of freedom.
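To redo a simulation of this kind one also needs samplers for the alternative families (15)–(17) used below. The sketch that follows is ours, not the code used for Table 4; it relies on scipy.stats distribution objects for the symmetric null density, which are assumptions of the illustration.

```python
# Samplers (ours) for the three alternative families (15)-(17).
# `rvs` and `cdf` denote the sampler and cdf of the symmetric null density g(.;0),
# e.g. scipy.stats.norm.rvs and scipy.stats.norm.cdf.
import numpy as np

def sample_location(rvs, theta, size, rng):
    """Location alternative (15): X = Y + theta."""
    return rvs(size=size, random_state=rng) + theta

def sample_azzalini(rvs, cdf, theta, size, rng):
    """Azzalini-type skew alternative (16): density 2 g(x;0) G(theta x;0)."""
    y = rvs(size=size, random_state=rng)
    u = rng.uniform(size=size)
    # keep y with probability G(theta*y; 0), otherwise reflect it
    return np.where(u < cdf(theta * y), y, -y)

def sample_fernandez_steel(rvs, theta, size, rng):
    """Fernandez-Steel skew alternative (17) with scale factor 1 + theta."""
    gam = 1.0 + theta
    y = np.abs(rvs(size=size, random_state=rng))
    neg = rng.uniform(size=size) < gam ** 2 / (1.0 + gam ** 2)
    return np.where(neg, -gam * y, y / gam)

if __name__ == "__main__":
    from scipy.stats import norm
    rng = np.random.default_rng(1)
    x = sample_azzalini(norm.rvs, norm.cdf, theta=0.5, size=1000, rng=rng)
    print(x.mean())   # positively skewed, so the sample mean lies above zero
```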


For each null distribution we consider two location alternatives, with parameters θ = 0.5 and θ = 1, and a skew alternative in the sense of Fernandez and Steel, with parameter θ = 0.5. The results are shown in Table 4. We can see that the empirical sizes are close to the nominal level and that the powers are high in most cases.

8. Conclusion

In this paper we stated and proved a new characterization of symmetric distributions. We constructed two classes of symmetry tests, of integral and Kolmogorov type. The integral-type tests, being one-sided, are consistent against, in a sense, positively biased alternatives; the Kolmogorov-type tests are consistent against all alternatives. The values of the efficiencies range from reasonable to high. The integral-type tests, except for the location alternatives in the Cauchy case, have greater efficiencies, as expected. In comparison with the tests from Nikitin and Ahsanullah (2015) we can conclude that no test dominates another. For small samples, the values of the simulated powers suggest that the tests can be used in practice.

Acknowledgments

We would like to thank the Editor, the Associate Editor and the referees for their useful remarks which improved the quality of the paper. Research of B. Milošević is supported by the Ministry of Education, Science and Technological Development of the Republic of Serbia, Grant No. 174012.

References

Ahsanullah, M., 1992. On some characteristic property of symmetric distributions. Pakistan J. Statist. 8 (3), 19–22.
Azzalini, A., with the collaboration of Capitanio, A., 2014. The Skew-Normal and Related Families. Cambridge University Press, New York.
Bahadur, R.R., 1971. Some Limit Theorems in Statistics. SIAM, Philadelphia.
Baringhaus, L., Henze, N., 1992. A characterization of and new consistent tests for symmetry. Comm. Statist. Theory Methods 21 (6), 1555–1566.
Burgio, G., Nikitin, Ya.Yu., 2001. The combination of the sign and Wilcoxon tests for symmetry and their Pitman efficiency. In: Asymptotic Methods in Probability and Statistics with Applications. Birkhäuser, Boston, pp. 395–407.
Burgio, G., Nikitin, Ya.Yu., 2007. On the combination of the sign and Maesono tests for symmetry and its efficiency. Statistica 63 (2), 213–222.
Fernandez, C., Steel, M.F.J., 1998. On Bayesian modeling of fat tails and skewness. J. Amer. Statist. Assoc. 93 (441), 359–371.
Helmers, R., Janssen, P., Serfling, R., 1988. Glivenko–Cantelli properties of some generalized empirical df's and strong convergence of generalized L-statistics. Probab. Theory Related Fields 79 (1), 75–93.
Henze, N., Meintanis, S.G., 2002. Goodness-of-fit tests based on new characterization of the exponential distribution. Comm. Statist. Theory Methods 31 (9), 1479–1497.
Hoeffding, W., 1948. A class of statistics with asymptotically normal distribution. Ann. Math. Statist. 19 (3), 293–325.
Jovanović, M., Milošević, B., Nikitin, Ya.Yu., Obradović, M., Volkova, K.Yu., 2015. Tests of exponentiality based on Arnold–Villasenor characterization and their efficiencies. Comput. Statist. Data Anal. 90, 100–113.
Korolyuk, V.S., Borovskikh, Yu.V., 1994. Theory of U-Statistics. Kluwer, Dordrecht.
Litvinova, V.V., 2001. New nonparametric test for symmetry and its asymptotic efficiency. Vestnik St. Petersburg Univ. Math. 34 (4), 12–14.
Nikitin, Ya.Yu., 1995. Asymptotic Efficiency of Nonparametric Tests. Cambridge University Press.
Nikitin, Ya.Yu., 1996. On Baringhaus–Henze test for symmetry: Bahadur efficiency and local optimality for shift alternatives. Math. Methods Statist. 5 (2), 214–226.
Nikitin, Ya.Yu., 2010. Large deviation of U-empirical Kolmogorov–Smirnov tests and their efficiency. J. Nonparametr. Stat. 22 (5), 649–668.
Nikitin, Ya.Yu., Ahsanullah, M., 2015. New U-empirical tests of symmetry based on extremal order statistics, and their efficiencies. In: Hallin, M., Mason, D.M., Pfeifer, D., Steinebach, J.G. (Eds.), Mathematical Statistics and Limit Theorems, Festschrift in Honour of Paul Deheuvels. Springer International Publishing, pp. 231–248.
Nikitin, Ya.Yu., Peaucelle, I., 2004. Efficiency and local optimality of nonparametric tests based on U- and V-statistics. Metron LXII (2), 185–200.
Nikitin, Ya.Yu., Ponikarov, E.V., 1999. Rough large deviation asymptotics of Chernoff type for von Mises functionals and U-statistics. In: Proceedings of the St. Petersburg Mathematical Society, Vol. 7, pp. 124–167. English translation in AMS Translations, Ser. 2, 203, 107–146, 2001.
Silverman, B.W., 1983. Convergence of a class of empirical distribution functions of dependent random variables. Ann. Probab. 11 (3), 745–751.
Volkova, K.Yu., Nikitin, Ya.Yu., 2014. Goodness-of-fit tests for the power function distribution based on the Puri–Rubin characterization and their efficiencies. J. Math. Sci. (NY) 199 (2), 130–138.
Nikitin, Ya.Yu., Ahsanullah, M., 2015. New U-empirical tests of symmetry based on extremal order statistics, and their efficiencies. In: Hallin, M., Mason, D.M., Pfeifer, D., Steinebach, J.G. (Eds.), Mathematical Statistics and Limit Theorems, Festschrift in Honour of Paul Deheuvels. Springer International Publishing, pp. 231–248. Nikitin, Ya.Yu., Peaucelle, I., 2004. Efficiency and local optimality of nonparametric tests based on U- and V-statistics. Metron LXII (2), 185–200. Nikitin, Ya.Yu., Ponikarov, E.V., 1999. Rough large deviation asymptotics of Chernoff type for von Mises functionals and U-statistics. In: Proceedings of Saint-Petersburg Mathematical Society, Vol. 7. pp. 124–167. English translation in AMS Translations, ser. 2, 203:107–146, 2001. Silverman, B.W., 1983. Convergence of a class of empirical distribution functions of dependent random variables. Ann. Probab. 11 (3), 745–751. Volkova, K.Yu., Nikitin, Ya.Yu., 2014. Goodness-of-fit tests for the power function distribution based on the Puri-Rubin characterization and their efficiencies. J. Math. Sci. (NY) 199 (2), 130–138.