The relative efficiency of Liu-type estimator in a partially linear model

Applied Mathematics and Computation 243 (2014) 349–357

Jibo Wu
Department of Mathematics and KLDAIP, Chongqing University of Arts and Sciences, Chongqing 402160, China
School of Mathematics and Finances, Chongqing University of Arts and Sciences, Chongqing 402160, China


Abstract

In this paper, we study the partially linear model $y = X\beta + f + \varepsilon$. We introduce a new Liu-type estimator in the partially linear model, and then we compare the new estimator with the two-step estimator in the mean squared error sense. Finally, we give a simulation study to illustrate the validity and feasibility of the approach.

Keywords: Liu-type estimator; partially linear model; mean squared error

1. Introduction

Consider the following partially linear model

$$y_i = x_i'\beta + f(t_i) + \varepsilon_i, \qquad i = 1, \ldots, n, \tag{1.1}$$

where the $y_i$'s are observations, $x_i' = (x_{i1}, \ldots, x_{ip})$ and $x_1, \ldots, x_n$ are known $p$-dimensional vectors with $p \le n$, the $t_i$'s are values of an extra univariate variable such as the time at which the observation is made, and $\beta = (\beta_1, \ldots, \beta_p)'$ is an unknown parameter vector. $f(\cdot)$ is an unknown smooth function, and the $\varepsilon_i$'s are random errors assumed to be i.i.d. $N(0, \sigma^2)$ distributed. In matrix-vector notation, model (1.1) can be written as follows:

$$y = X\beta + f + \varepsilon, \tag{1.2}$$

where $y = (y_1, \ldots, y_n)'$, $X = (x_1, \ldots, x_n)'$, $f = (f(t_1), \ldots, f(t_n))'$ and $\varepsilon = (\varepsilon_1, \ldots, \varepsilon_n)'$. Engle et al. [1] also called model (1.1) a partial spline model. $f(t)$ is called the smooth part of the model and is assumed to represent a smooth unparameterized functional relationship. The main problem we consider is to estimate the unknown parameter vector $\beta$ and the nonparametric function $f$ from the data $\{y_i, x_i, t_i\}$. In this paper, we mainly discuss how to estimate the unknown parameter vector $\beta$ and the nonparametric function $f$; once an estimator of $\beta$ is known, an estimator of the function $f$ follows. There are many methods to estimate $\beta$ and $f$, such as penalized least squares (see [2]), smoothing splines (see [3]), piecewise polynomials (see [4]) and two-step estimation methods (see [5]). Hu [6] used the two-step approach to introduce a ridge estimator for the partially linear model. Duran et al. [7] also discussed the two-step method. The main idea of two-step estimation is the following: in the first step, $f(t, \beta)$ is defined under the supposition that $\beta$ is known; in the second step, the estimator of the parametric part $\beta$ is obtained by a least squares method, and then we may obtain $\hat f(t, \hat\beta)$. When the regressors suffer from multicollinearity, many authors have proposed improved estimators to overcome this problem. Tabakan and Akdeniz [8] introduced a difference-based ridge estimator in the partially linear model. Duran et al. [9] proposed difference-based ridge


and Liu-type estimators in the partially linear model. Tabakan [10] considered the semiparametric regression model with linear equality restrictions and proposed a new restricted difference-based ridge estimator. In this paper, we use the two-step method to introduce a new Liu-type estimator in the partially linear model. We also discuss the superiority of the new estimator over the two-step estimator in terms of the mean squared error.

The paper is organized as follows. In Section 2, the Liu-type estimator in the partially linear model is introduced. In Section 3, we compare the new estimator with the two-step estimator in the mean squared error sense, and we give a method to choose the biasing parameters in Section 4. A simulation study illustrating the new method is given in Section 5, and some concluding remarks are given in Section 6.

2. The new estimator

In the following, we introduce a Liu-type estimation method based on a two-step estimation process. In the first step, we suppose that $\beta$ is known, and then the nonparametric estimator of $f$ is given as follows:

$$f(t, \beta) = S(y - X\beta), \tag{2.1}$$

where $S = (I + \alpha K)^{-1}$ is a smoother matrix which depends on a smoothing parameter $\alpha$, and $K$ is a symmetric nonnegative definite matrix [2]. It is based on $\{y_i - x_i'\beta,\ t_i\}$ $(i = 1, \ldots, n)$, and $S = S(t_1, \ldots, t_n)$ is an $n \times n$ positive-definite smoother matrix from univariate cubic spline smoothing [7]. Consider the following model

$$\tilde y = \tilde X\beta + \tilde\varepsilon, \tag{2.2}$$

where $\tilde y = (I - S)y$, $\tilde X = (I - S)X$, $\tilde f = (I - S)f$, $\varepsilon^* = (I - S)\varepsilon$ and $\tilde\varepsilon = \tilde f + \varepsilon^*$. Model (2.2) is a linear model. The least squares estimator of $\beta$ in the semiparametric regression model is obtained by minimizing

$$(\tilde y - \tilde X\beta)'(\tilde y - \tilde X\beta). \tag{2.3}$$

If we suppose that $\tilde X = (I - S)X$ has full column rank, then we obtain

$$\hat\beta_p = (\tilde X'\tilde X)^{-1}\tilde X'\tilde y, \tag{2.4}$$

$$\hat f_p = S(y - X\hat\beta_p). \tag{2.5}$$

The estimator $\hat\beta_p$ can also be called the two-step estimator, $\hat\beta_p = \hat\beta_{TS}$. In the second step, we add a penalizing function $\big\| k^{-1/2}d\hat\beta_p - k^{1/2}\beta \big\|^2$ to the least squares objective (2.3), which is the same as minimizing the criterion

$$(\tilde y - \tilde X\beta)'(\tilde y - \tilde X\beta) + \left(\frac{d\hat\beta_p}{k^{1/2}} - k^{1/2}\beta\right)'\left(\frac{d\hat\beta_p}{k^{1/2}} - k^{1/2}\beta\right); \tag{2.6}$$

setting the derivative of (2.6) with respect to $\beta$ equal to zero gives $(\tilde X'\tilde X + kI)\beta = \tilde X'\tilde y + d\hat\beta_p = (\tilde X'\tilde X + dI)\hat\beta_p$, and we obtain its solution, namely

$$\hat\beta_p(k, d) = (\tilde X'\tilde X + kI)^{-1}(\tilde X'\tilde X + dI)\hat\beta_p, \qquad 0 < d < 1,\ d \le k, \tag{2.7}$$

where $d$ and $k$ are tuning parameters. Then we obtain

$$\hat f_p(k, d) = S(y - X\hat\beta_p(k, d)). \tag{2.8}$$
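To make the construction concrete, the following is a minimal sketch (not taken from the paper) of how the two-step estimator (2.4)-(2.5) and the Liu-type estimator (2.7)-(2.8) could be computed with NumPy, assuming the response $y$, the design matrix $X$ and a smoother matrix $S$ are already available; the function names are illustrative only.

```python
import numpy as np

def two_step_estimator(y, X, S):
    """Two-step estimator (2.4)-(2.5): returns beta_hat_p and f_hat_p."""
    n = len(y)
    I = np.eye(n)
    y_t = (I - S) @ y                                      # y tilde = (I - S) y
    X_t = (I - S) @ X                                      # X tilde = (I - S) X
    beta_p = np.linalg.solve(X_t.T @ X_t, X_t.T @ y_t)     # (2.4)
    f_p = S @ (y - X @ beta_p)                             # (2.5)
    return beta_p, f_p

def liu_type_estimator(y, X, S, k, d):
    """Liu-type estimator (2.7)-(2.8), built from the two-step estimator."""
    assert 0 < d < 1 and d <= k
    n, p = X.shape
    I = np.eye(n)
    X_t = (I - S) @ X
    beta_p, _ = two_step_estimator(y, X, S)
    G = X_t.T @ X_t
    beta_kd = np.linalg.solve(G + k * np.eye(p), (G + d * np.eye(p)) @ beta_p)  # (2.7)
    f_kd = S @ (y - X @ beta_kd)                                                # (2.8)
    return beta_kd, f_kd
```

The sketch simply mirrors the closed-form expressions; in practice one would reuse the transformed matrices rather than recomputing them.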

Because there is a formal resemblance between (2.7) and the Liu-type estimator in the linear model, we call it a Liu-type estimator of the partially linear model.

Remark 1. Let $k = d$ and suppose that $\tilde X'\tilde X$ is of full rank; then the Liu-type estimator becomes the two-step estimator of the partially linear model.

Remark 2. Let $f(t) = 0$; then the Liu-type estimator becomes the Liu-type estimator in a linear model.

3. Comparison of the Liu-type estimator $\hat\beta_p(k,d)$ with the two-step estimator $\hat\beta_{TS}$

In this section, we compare the Liu-type estimator $\hat\beta_p(k,d)$ with the two-step estimator $\hat\beta_p$ in the mean squared error (MSE) sense.

Theorem 1. Suppose $\mathrm{rank}(X) = p$, and suppose that there exists a matrix $S$ such that $\mathrm{rank}[(I - S)X] = \mathrm{rank}(\tilde X) = p$. Then:

(a) for fixed $k > 0$:


(1) If $\phi_i > 0$ and $\zeta_i > 0$, then $MSE(\hat\beta_p(k,d)) \le MSE(\hat\beta_{TS})$ for $0 < \min_i\{\phi_i/\zeta_i\} \le \max_i\{\phi_i/\zeta_i\} < d < 1$.
(2) If $\phi_i < 0$ and $\zeta_i > 0$, then $MSE(\hat\beta_p(k,d)) \le MSE(\hat\beta_{TS})$ for arbitrary $0 < d < 1$.
(3) If $\phi_i < 0$ and $\zeta_i < 0$, then $MSE(\hat\beta_p(k,d)) \le MSE(\hat\beta_{TS})$ for $0 < d < \min_i\{\phi_i/\zeta_i\} \le \max_i\{\phi_i/\zeta_i\} < 1$.
(4) If $\phi_i > 0$ and $\zeta_i < 0$, then $MSE(\hat\beta_p(k,d)) \ge MSE(\hat\beta_{TS})$ for arbitrary $0 < d < 1$,

where $\phi_i = k\lambda_i^2\xi_i^2 - (2\lambda_i + k)(\sigma^2 r_{ii} + \eta_i^2) - 2b_{ii}\lambda_i^2$ and $\zeta_i = \sigma^2 r_{ii} + \eta_i^2 + \lambda_i^2\xi_i^2 + 2b_{ii}\lambda_i$.

(b) for fixed $0 < d < 1$:

(1) If $\omega_i > 0$ and $u_i > 0$, then $MSE(\hat\beta_p(k,d)) \le MSE(\hat\beta_{TS})$ for $0 < k < u_i/\omega_i$.
(2) If $\omega_i < 0$ and $u_i > 0$, then $MSE(\hat\beta_p(k,d)) \le MSE(\hat\beta_{TS})$ for arbitrary $k > 0$.
(3) If $\omega_i < 0$ and $u_i < 0$, then $MSE(\hat\beta_p(k,d)) \le MSE(\hat\beta_{TS})$ for $k > u_i/\omega_i > 0$.
(4) If $\omega_i > 0$ and $u_i < 0$, then $MSE(\hat\beta_p(k,d)) \ge MSE(\hat\beta_{TS})$ for arbitrary $k > 0$,

where $\omega_i = \lambda_i^2\xi_i^2 - (\sigma^2 r_{ii} + \eta_i^2)$ and $u_i = (2\lambda_i + d)(\sigma^2 r_{ii} + \eta_i^2) + d\lambda_i^2\xi_i^2 + 2\lambda_i b_{ii}(\lambda_i + d)$. All notation will be defined in the course of the proof.

Proof. Considering (2.4), (2.5) and (2.7), we obtain

$$E(\hat\beta_p(k,d)) = (\tilde X'\tilde X + kI)^{-1}(\tilde X'\tilde X + dI)E(\hat\beta_p) = H_k^{-1}H_d\,E(\hat\beta_p) = H_k^{-1}H_d\big(\beta + (\tilde X'\tilde X)^{-1}\tilde X'\tilde f\big), \tag{3.1}$$

where $H_k = \tilde X'\tilde X + kI$, and

$$\mathrm{Cov}(\hat\beta_p(k,d)) = H_k^{-1}H_d\,\mathrm{Cov}(\hat\beta_p)\,H_dH_k^{-1} = \sigma^2 H_k^{-1}H_d(\tilde X'\tilde X)^{-1}\tilde X'(I - S)(I - S)'\tilde X(\tilde X'\tilde X)^{-1}H_dH_k^{-1}. \tag{3.2}$$

Then using the definition of the mean squared error together with (3.1) and (3.2), we obtain

$$
\begin{aligned}
MSE(\hat\beta_p(k,d)) &= E\big[(\hat\beta_p(k,d) - \beta)'(\hat\beta_p(k,d) - \beta)\big] \\
&= \mathrm{tr}\big(\mathrm{Cov}(\hat\beta_p(k,d))\big) + \big[H_k^{-1}H_d\big(\beta + (\tilde X'\tilde X)^{-1}\tilde X'\tilde f\big) - \beta\big]'\big[H_k^{-1}H_d\big(\beta + (\tilde X'\tilde X)^{-1}\tilde X'\tilde f\big) - \beta\big] \\
&= \mathrm{tr}\big(\mathrm{Cov}(\hat\beta_p(k,d))\big) + (k - d)^2\,\mathrm{tr}\big(H_k^{-1}\beta\beta'H_k^{-1}\big) - 2(k - d)\,\mathrm{tr}\big\{H_k^{-1}\beta\tilde f'\tilde X(\tilde X'\tilde X)^{-1}H_dH_k^{-1}\big\} \\
&\quad + \mathrm{tr}\big\{H_k^{-1}H_d(\tilde X'\tilde X)^{-1}\tilde X'\tilde f\tilde f'\tilde X(\tilde X'\tilde X)^{-1}H_dH_k^{-1}\big\} \\
&= U_1 + U_2 - 2U_3 + U_4.
\end{aligned} \tag{3.3}
$$

Since $\mathrm{rank}(\tilde X) = p$, $\tilde X'\tilde X$ is a positive-definite matrix. There exists an orthogonal matrix $P$ such that

$$P'\tilde X'\tilde X P = \Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_p), \qquad PP' = P'P = I.$$

Then we have

$$U_1 = \sigma^2\,\mathrm{tr}\big\{H_k^{-1}H_d(\tilde X'\tilde X)^{-1}\tilde X'(I - S)(I - S)'\tilde X(\tilde X'\tilde X)^{-1}H_dH_k^{-1}\big\} = \sigma^2\,\mathrm{tr}\big\{P(\Lambda + kI)^{-1}(\Lambda + dI)\Lambda^{-1}R\Lambda^{-1}(\Lambda + dI)(\Lambda + kI)^{-1}P'\big\} = \sigma^2\sum_{i=1}^{p}\frac{(\lambda_i + d)^2 r_{ii}}{\lambda_i^2(\lambda_i + k)^2}, \tag{3.4}$$

where $R = P'\tilde X'(I - S)(I - S)'\tilde X P = (r_{ij})_{p\times p}$. Next,

$$U_2 = (k - d)^2\,\mathrm{tr}\big\{H_k^{-1}\beta\beta'H_k^{-1}\big\} = (k - d)^2\,\mathrm{tr}\big\{P(\Lambda + kI)^{-1}P'\beta\beta'P(\Lambda + kI)^{-1}P'\big\} = (k - d)^2\sum_{i=1}^{p}\frac{\xi_i^2}{(\lambda_i + k)^2}, \tag{3.5}$$

where $\xi = (\xi_1, \ldots, \xi_p)' = P'\beta$. Similarly,

$$U_3 = (k - d)\,\mathrm{tr}\big\{H_k^{-1}\beta\tilde f'\tilde X(\tilde X'\tilde X)^{-1}H_dH_k^{-1}\big\} = (k - d)\,\mathrm{tr}\big\{P(\Lambda + kI)^{-1}P'\beta\tilde f'\tilde X P\Lambda^{-1}(\Lambda + kI)^{-1}(\Lambda + dI)P'\big\} = (k - d)\sum_{i=1}^{p}\frac{b_{ii}(\lambda_i + d)}{\lambda_i(\lambda_i + k)^2}, \tag{3.6}$$

where $B = P'\beta\tilde f'\tilde X P = (b_{ij})_{p\times p}$. Finally,

$$U_4 = \mathrm{tr}\big\{H_k^{-1}H_d(\tilde X'\tilde X)^{-1}\tilde X'\tilde f\tilde f'\tilde X(\tilde X'\tilde X)^{-1}H_dH_k^{-1}\big\} = \mathrm{tr}\big\{P(\Lambda + kI)^{-1}(\Lambda + dI)\Lambda^{-1}P'\tilde X'\tilde f\tilde f'\tilde X P\Lambda^{-1}(\Lambda + dI)(\Lambda + kI)^{-1}P'\big\} = \sum_{i=1}^{p}\frac{\eta_i^2(\lambda_i + d)^2}{\lambda_i^2(\lambda_i + k)^2}, \tag{3.7}$$


where $(\eta_1, \ldots, \eta_p) = \tilde f'\tilde X P$. Thus, using (3.2)–(3.7), we obtain

$$MSE(\hat\beta_p(k,d)) = \sigma^2\sum_{i=1}^{p}\frac{(\lambda_i + d)^2 r_{ii}}{\lambda_i^2(\lambda_i + k)^2} + (k - d)^2\sum_{i=1}^{p}\frac{\xi_i^2}{(\lambda_i + k)^2} - 2(k - d)\sum_{i=1}^{p}\frac{b_{ii}(\lambda_i + d)}{\lambda_i(\lambda_i + k)^2} + \sum_{i=1}^{p}\frac{\eta_i^2(\lambda_i + d)^2}{\lambda_i^2(\lambda_i + k)^2}. \tag{3.8}$$

Letting $k = d$ in (3.8), we obtain the MSE of the two-step estimator $\hat\beta_{TS}$:

$$MSE(\hat\beta_{TS}) = \sum_{i=1}^{p}\frac{\sigma^2 r_{ii} + \eta_i^2}{\lambda_i^2}. \tag{3.9}$$
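As a numerical aside (not part of the proof), the scalar expressions (3.8) and (3.9) can be evaluated directly once the spectral quantities $\lambda_i$, $\xi_i$, $\eta_i$, $r_{ii}$ and $b_{ii}$ are available; the following sketch assumes they are supplied as arrays, and the function names are only illustrative.

```python
import numpy as np

def mse_liu_type(k, d, lam, xi, eta, r, b, sigma2):
    """Evaluate MSE(beta_hat_p(k, d)) via (3.8); all inputs are length-p arrays
    (r holds the diagonal r_ii of R, b the diagonal b_ii of B)."""
    t1 = sigma2 * np.sum((lam + d) ** 2 * r / (lam ** 2 * (lam + k) ** 2))
    t2 = (k - d) ** 2 * np.sum(xi ** 2 / (lam + k) ** 2)
    t3 = 2 * (k - d) * np.sum(b * (lam + d) / (lam * (lam + k) ** 2))
    t4 = np.sum(eta ** 2 * (lam + d) ** 2 / (lam ** 2 * (lam + k) ** 2))
    return t1 + t2 - t3 + t4

def mse_two_step(lam, eta, r, sigma2):
    """Evaluate MSE(beta_hat_TS) via (3.9)."""
    return np.sum((sigma2 * r + eta ** 2) / lam ** 2)
```

Comparing the two values over a grid of $(k, d)$ reproduces numerically the sign analysis carried out below.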

Now we consider the difference:

$$
\begin{aligned}
MSE(\hat\beta_p(k,d)) - MSE(\hat\beta_{TS}) &= \sigma^2\sum_{i=1}^{p}\frac{(\lambda_i + d)^2 r_{ii}}{\lambda_i^2(\lambda_i + k)^2} + (k - d)^2\sum_{i=1}^{p}\frac{\xi_i^2}{(\lambda_i + k)^2} - 2(k - d)\sum_{i=1}^{p}\frac{b_{ii}(\lambda_i + d)}{\lambda_i(\lambda_i + k)^2} + \sum_{i=1}^{p}\frac{\eta_i^2(\lambda_i + d)^2}{\lambda_i^2(\lambda_i + k)^2} - \sum_{i=1}^{p}\frac{\sigma^2 r_{ii} + \eta_i^2}{\lambda_i^2} \\
&= \sum_{i=1}^{p}\big[\lambda_i(\lambda_i + k)\big]^{-2}\Big\{\sigma^2(\lambda_i + d)^2 r_{ii} + (k - d)^2\xi_i^2\lambda_i^2 + \eta_i^2(\lambda_i + d)^2 - 2(k - d)\lambda_i b_{ii}(\lambda_i + d) - (\lambda_i + k)^2\big(\sigma^2 r_{ii} + \eta_i^2\big)\Big\}.
\end{aligned} \tag{3.10}
$$

Now we discuss (3.10).

(a) For fixed $k > 0$: since the expression in braces in (3.10) vanishes at $d = k$, the factor $(k - d)$ can be extracted, and (3.10) can be written as

$$MSE(\hat\beta_p(k,d)) - MSE(\hat\beta_{TS}) = (k - d)\sum_{i=1}^{p}\big[\lambda_i(\lambda_i + k)\big]^{-2}\Big\{\big[k\lambda_i^2\xi_i^2 - (2\lambda_i + k)(\sigma^2 r_{ii} + \eta_i^2) - 2b_{ii}\lambda_i^2\big] - d\big[\sigma^2 r_{ii} + \eta_i^2 + \lambda_i^2\xi_i^2 + 2b_{ii}\lambda_i\big]\Big\} = (k - d)\sum_{i=1}^{p}\big[\lambda_i(\lambda_i + k)\big]^{-2}(\phi_i - d\zeta_i), \tag{3.11}$$

where $\phi_i = k\lambda_i^2\xi_i^2 - (2\lambda_i + k)(\sigma^2 r_{ii} + \eta_i^2) - 2b_{ii}\lambda_i^2$ and $\zeta_i = \sigma^2 r_{ii} + \eta_i^2 + \lambda_i^2\xi_i^2 + 2b_{ii}\lambda_i$. Then by (3.11), for fixed $k > 0$, we have:

(1) If $\phi_i > 0$ and $\zeta_i > 0$, then $MSE(\hat\beta_p(k,d)) \le MSE(\hat\beta_{TS})$ for $0 < \min_i\{\phi_i/\zeta_i\} \le \max_i\{\phi_i/\zeta_i\} < d < 1$.
(2) If $\phi_i < 0$ and $\zeta_i > 0$, then $MSE(\hat\beta_p(k,d)) \le MSE(\hat\beta_{TS})$ for arbitrary $0 < d < 1$.
(3) If $\phi_i < 0$ and $\zeta_i < 0$, then $MSE(\hat\beta_p(k,d)) \le MSE(\hat\beta_{TS})$ for $0 < d < \min_i\{\phi_i/\zeta_i\} \le \max_i\{\phi_i/\zeta_i\} < 1$.
(4) If $\phi_i > 0$ and $\zeta_i < 0$, then $MSE(\hat\beta_p(k,d)) \ge MSE(\hat\beta_{TS})$ for arbitrary $0 < d < 1$.

(b) For fixed $0 < d < 1$, (3.10) can be written as

$$MSE(\hat\beta_p(k,d)) - MSE(\hat\beta_{TS}) = (k - d)\sum_{i=1}^{p}\big[\lambda_i(\lambda_i + k)\big]^{-2}\Big\{k\big[\lambda_i^2\xi_i^2 - (\sigma^2 r_{ii} + \eta_i^2)\big] - \big[(2\lambda_i + d)(\sigma^2 r_{ii} + \eta_i^2) + d\lambda_i^2\xi_i^2 + 2\lambda_i b_{ii}(\lambda_i + d)\big]\Big\} = (k - d)\sum_{i=1}^{p}\big[\lambda_i(\lambda_i + k)\big]^{-2}(k\omega_i - u_i), \tag{3.12}$$

where $\omega_i = \lambda_i^2\xi_i^2 - (\sigma^2 r_{ii} + \eta_i^2)$ and $u_i = (2\lambda_i + d)(\sigma^2 r_{ii} + \eta_i^2) + d\lambda_i^2\xi_i^2 + 2\lambda_i b_{ii}(\lambda_i + d)$. Then by (3.12), for fixed $0 < d < 1$, we have:

(1) If $\omega_i > 0$ and $u_i > 0$, then $MSE(\hat\beta_p(k,d)) \le MSE(\hat\beta_{TS})$ for $0 < k < u_i/\omega_i$.
(2) If $\omega_i < 0$ and $u_i > 0$, then $MSE(\hat\beta_p(k,d)) \le MSE(\hat\beta_{TS})$ for arbitrary $k > 0$.
(3) If $\omega_i < 0$ and $u_i < 0$, then $MSE(\hat\beta_p(k,d)) \le MSE(\hat\beta_{TS})$ for $k > u_i/\omega_i > 0$.
(4) If $\omega_i > 0$ and $u_i < 0$, then $MSE(\hat\beta_p(k,d)) \ge MSE(\hat\beta_{TS})$ for arbitrary $k > 0$.

The proof of Theorem 1 is completed. □

Theorem 2. Suppose $\mathrm{rank}(X) = p$, $\mathrm{rank}(\tilde X) = p$, and $f - f(t, \beta) = 0$.

(a) For fixed $k > 0$:
(1) If $k\xi_i^2\lambda_i > (2\lambda_i + k)\sigma^2$, then $MSE(\hat\beta_p(k,d)) \le MSE(\hat\beta_{TS})$ for $\dfrac{k\xi_i^2\lambda_i - (2\lambda_i + k)\sigma^2}{\sigma^2 + \xi_i^2\lambda_i} < d < 1$.
(2) If $k\xi_i^2\lambda_i < (2\lambda_i + k)\sigma^2$, then $MSE(\hat\beta_p(k,d)) \le MSE(\hat\beta_{TS})$ for arbitrary $0 < d < 1$.

(b) For fixed $0 < d < 1$:
(1) If $\sigma^2 < \xi_i^2\lambda_i$, then $MSE(\hat\beta_p(k,d)) \le MSE(\hat\beta_{TS})$ for $0 < k < \dfrac{\sigma^2(2\lambda_i + d) + d\xi_i^2\lambda_i}{\xi_i^2\lambda_i - \sigma^2}$.
(2) If $\sigma^2 > \xi_i^2\lambda_i$, then $MSE(\hat\beta_p(k,d)) \le MSE(\hat\beta_{TS})$ for arbitrary $k > 0$,

where $\xi_i$ and $\lambda_i$ are the same as in Theorem 1.


Proof. Since $f - f(t, \beta) = 0$, we obtain $\tilde\varepsilon - \varepsilon^* = 0$, and then we have

$$E(\hat\beta_p(k,d)) = H_k^{-1}H_d\beta \tag{3.13}$$

and

$$\mathrm{Cov}(\hat\beta_p(k,d)) = \sigma^2 H_k^{-1}H_d(\tilde X'\tilde X)^{-1}H_dH_k^{-1}, \tag{3.14}$$

where $H_k = \tilde X'\tilde X + kI$. Then we obtain the mean squared error (MSE) of $\hat\beta_p(k,d)$:

$$MSE(\hat\beta_p(k,d)) = E\big[(\hat\beta_p(k,d) - \beta)'(\hat\beta_p(k,d) - \beta)\big] = \sigma^2\sum_{i=1}^{p}\frac{(\lambda_i + d)^2}{\lambda_i(\lambda_i + k)^2} + (k - d)^2\sum_{i=1}^{p}\frac{\xi_i^2}{(\lambda_i + k)^2}. \tag{3.15}$$

Letting $k = d$ in (3.15), we obtain

$$MSE(\hat\beta_{OLS}) = MSE(\hat\beta_{TS}) = \sigma^2\sum_{i=1}^{p}\frac{1}{\lambda_i}. \tag{3.16}$$

Then by (3.15) and (3.16), we have:

$$MSE(\hat\beta_{TS}) - MSE(\hat\beta_p(k,d)) = \sigma^2\sum_{i=1}^{p}\frac{1}{\lambda_i} - \sigma^2\sum_{i=1}^{p}\frac{(\lambda_i + d)^2}{\lambda_i(\lambda_i + k)^2} - (k - d)^2\sum_{i=1}^{p}\frac{\xi_i^2}{(\lambda_i + k)^2} = (k - d)\sum_{i=1}^{p}\frac{\sigma^2(2\lambda_i + k + d) - (k - d)\lambda_i\xi_i^2}{\lambda_i(\lambda_i + k)^2}. \tag{3.17}$$

(a) For fixed k > 0, we can write (3.17) as follows:

$$MSE(\hat\beta_{TS}) - MSE(\hat\beta_p(k,d)) = (k - d)\sum_{i=1}^{p}\frac{\big(\sigma^2 + \xi_i^2\lambda_i\big)d - \big[k\xi_i^2\lambda_i - (2\lambda_i + k)\sigma^2\big]}{\lambda_i(\lambda_i + k)^2}. \tag{3.18}$$

So we have:

(1) If $k\xi_i^2\lambda_i > (2\lambda_i + k)\sigma^2$, then $MSE(\hat\beta_p(k,d)) \le MSE(\hat\beta_{TS})$ for $\dfrac{k\xi_i^2\lambda_i - (2\lambda_i + k)\sigma^2}{\sigma^2 + \xi_i^2\lambda_i} < d < 1$.
(2) If $k\xi_i^2\lambda_i < (2\lambda_i + k)\sigma^2$, then $MSE(\hat\beta_p(k,d)) \le MSE(\hat\beta_{TS})$ for arbitrary $0 < d < 1$.

(b) For fixed $0 < d < 1$, we can write (3.17) as follows:

$$MSE(\hat\beta_{TS}) - MSE(\hat\beta_p(k,d)) = (k - d)\sum_{i=1}^{p}\frac{\big(\sigma^2 - \xi_i^2\lambda_i\big)k + \sigma^2(2\lambda_i + d) + d\xi_i^2\lambda_i}{\lambda_i(\lambda_i + k)^2}. \tag{3.19}$$

So from (3.19), we have:

(1) If $\sigma^2 < \xi_i^2\lambda_i$, then $MSE(\hat\beta_p(k,d)) \le MSE(\hat\beta_{TS})$ for $0 < k < \dfrac{\sigma^2(2\lambda_i + d) + d\xi_i^2\lambda_i}{\xi_i^2\lambda_i - \sigma^2}$.
(2) If $\sigma^2 > \xi_i^2\lambda_i$, then $MSE(\hat\beta_p(k,d)) \le MSE(\hat\beta_{TS})$ for arbitrary $k > 0$. □

4. Selection of the biasing parameters k and d

In applications, the choice of the biasing parameters in the Liu-type estimator (LTE) is very important. In this section we give a method to choose the biasing parameters $k$ and $d$. Optimal values for $d$ and $k$ can be derived by minimizing

$$MSE(\hat\beta_p(k,d)) = \sum_{i=1}^{p}\big[\lambda_i(\lambda_i + k)\big]^{-2}\Big\{\sigma^2(\lambda_i + d)^2 r_{ii} + (k - d)^2\xi_i^2\lambda_i^2 + \eta_i^2(\lambda_i + d)^2 - 2(k - d)\lambda_i b_{ii}(\lambda_i + d)\Big\}. \tag{4.1}$$

Let $f(k, d) = MSE(\hat\beta_p(k,d))$. We can see that $f(k, d)$ is a quadratic function of $d$, so the value of $d$ which minimizes $f(k, d)$ for a fixed value of $k$ can be obtained by differentiating $f(k, d)$ with respect to $d$:

$$\frac{\partial}{\partial d}f(k, d) = 2\sum_{i=1}^{p}\big[\lambda_i(\lambda_i + k)\big]^{-2}\Big\{d\big(\sigma^2 r_{ii} + \eta_i^2 + \xi_i^2\lambda_i^2 + 2\lambda_i b_{ii}\big) - \big[k\xi_i^2\lambda_i^2 - (\sigma^2 r_{ii} + \eta_i^2)\lambda_i + k\lambda_i b_{ii} - \lambda_i^2 b_{ii}\big]\Big\} \tag{4.2}$$

Equating (4.2) to zero and replacing the unknown parameters $\sigma^2$ and $\xi_i^2$ by their unbiased estimators, we obtain the estimator of $d$ for a fixed value of $k$ as

$$\hat d = \frac{k\hat\xi_i^2\lambda_i^2 - (\hat\sigma^2 r_{ii} + \eta_i^2)\lambda_i + k\lambda_i b_{ii} - \lambda_i^2 b_{ii}}{\hat\sigma^2 r_{ii} + \eta_i^2 + \hat\xi_i^2\lambda_i^2 + 2\lambda_i b_{ii}}. \tag{4.3}$$


Thus we obtain the optimal estimator of d for fixed k

$$\max\left\{0,\ \frac{k\hat\xi_i^2\lambda_i^2 - (\hat\sigma^2 r_{ii} + \eta_i^2)\lambda_i + k\lambda_i b_{ii} - \lambda_i^2 b_{ii}}{\hat\sigma^2 r_{ii} + \eta_i^2 + \hat\xi_i^2\lambda_i^2 + 2\lambda_i b_{ii}}\right\} \le \hat d_{opt} \le \min\left\{1,\ \frac{k\hat\xi_i^2\lambda_i^2 - (\hat\sigma^2 r_{ii} + \eta_i^2)\lambda_i + k\lambda_i b_{ii} - \lambda_i^2 b_{ii}}{\hat\sigma^2 r_{ii} + \eta_i^2 + \hat\xi_i^2\lambda_i^2 + 2\lambda_i b_{ii}}\right\}. \tag{4.4}$$

Similarly, we could obtain the optimal $k$ by the same method; however, the resulting expression is very complex, so we use the estimator proposed by Hoerl and Kennard [11] to estimate the ridge parameter $k$, which is defined as

$$\hat k_{opt} = \frac{p\hat\sigma^2}{\hat\xi'\hat\xi}. \tag{4.5}$$
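As a sketch (an assumption of ours, not the authors' code): given estimates of the spectral quantities, $k$ can be chosen by (4.5) and then $d$ by the componentwise rule (4.3), truncated to $(0, 1)$ as suggested by (4.4). The averaging across components below is an illustrative choice, since the paper only bounds the optimal $d$ between the truncated componentwise values.

```python
import numpy as np

def select_k(sigma2_hat, xi_hat):
    """Hoerl-Kennard ridge parameter (4.5): k = p * sigma^2 / (xi' xi)."""
    p = len(xi_hat)
    return p * sigma2_hat / np.sum(xi_hat ** 2)

def select_d(k, lam, xi_hat, eta, r, b, sigma2_hat):
    """Componentwise d from (4.3), averaged and clipped to (0, 1) per (4.4).

    Averaging is an illustrative assumption; any value between the clipped
    componentwise estimates would be consistent with (4.4).
    """
    num = k * xi_hat ** 2 * lam ** 2 - (sigma2_hat * r + eta ** 2) * lam \
          + k * lam * b - lam ** 2 * b
    den = sigma2_hat * r + eta ** 2 + xi_hat ** 2 * lam ** 2 + 2 * lam * b
    d_i = num / den
    return float(np.clip(np.mean(d_i), 0.0, 1.0))
```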

5. Numerical example

In this section, we study the MSE performance of the proposed estimator. Our sampling experiment consists of different combinations of $k$, $d$ and $n$. In this study, the explanatory variables are generated by the following equation [12,13]:

$$x_{ij} = (1 - \rho^2)^{1/2} z_{ij} + \rho z_{i(p+1)}, \qquad i = 1, \ldots, n,\ j = 1, \ldots, p,$$

where $z_{ij}$ and $z_{i(p+1)}$ are independent standard normal pseudo-random numbers and $\rho$ is specified so that the correlation between any two explanatory variables is given by $\rho^2$. Observations on the dependent variable are then generated by

$$y_i = \beta_1 x_{i1} + \beta_2 x_{i2} + \beta_3 x_{i3} + \beta_4 x_{i4} + f(t_i) + \varepsilon_i, \qquad \varepsilon_i \sim N(0, \sigma^2),$$

where

$$f(t_i) = \sqrt{t_i(1 - t_i)}\,\sin\!\left(\frac{2.1\pi}{t_i + 0.05}\right)$$

is the Doppler function, with $t_i = (i - 0.5)/n$, $i = 1, \ldots, n$. For the weight function $W_{ni}(t_j)$, we use

$$W_{ni}(t_j) = \frac{1}{nh_n}K\!\left(\frac{t_i - t_j}{h_n}\right) = \frac{1}{nh_n}\frac{1}{\sqrt{2\pi}}\exp\!\left\{-\frac{(t_i - t_j)^2}{2h_n^2}\right\}, \tag{5.1}$$

which is Priestley and Chao's weight with the Gaussian kernel. We use the cross-validation (C.V.) method to select the optimal bandwidth $h_n$. In this paper, we consider $n = 1000$, $p = 4$, $\sigma^2 = 0.1$ and $\beta = (10, 1, 2, 3)'$. Using the method proposed in Section 4, we choose the optimal $k = 0.503$ and $d = 0.212$, and then we can compute that

$$MSE(\hat\beta_p(k,d)) = 0.00197 < MSE(\hat\beta_{TS}) = 0.00215. \tag{5.2}$$


Thus we can see that the method we propose is meaningful in practice. Figs. 1 and 2 give the MSE values of the Liu-type estimator and the two-step estimator when $d$ is fixed. For $d = 0.3$, when $k > 0.312$ the Liu-type estimator has smaller MSE values than the two-step estimator. Accordingly, for $d = 0.5$, when $k > 0.515$ the Liu-type estimator has smaller MSE values than the two-step estimator.
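For readers who wish to replicate the sampling scheme, the following is an illustrative reconstruction of the data-generating setup of this section (not the authors' code); the correlation level `rho` used here is an assumption, while $n$, $p$, $\sigma^2$ and $\beta$ follow the values stated above.

```python
import numpy as np

rng = np.random.default_rng(0)

def doppler(t):
    """Doppler function used for the nonparametric part f(t)."""
    return np.sqrt(t * (1.0 - t)) * np.sin(2.1 * np.pi / (t + 0.05))

def generate_data(n=1000, p=4, rho=0.9, sigma2=0.1, beta=(10.0, 1.0, 2.0, 3.0)):
    """Generate (y, X, t) following the sampling scheme of Section 5."""
    beta = np.asarray(beta)
    z = rng.standard_normal((n, p + 1))
    # x_ij = (1 - rho^2)^(1/2) z_ij + rho z_i(p+1), giving pairwise correlation rho^2
    X = np.sqrt(1.0 - rho ** 2) * z[:, :p] + rho * z[:, [p]]
    t = (np.arange(1, n + 1) - 0.5) / n
    eps = rng.normal(0.0, np.sqrt(sigma2), size=n)
    y = X @ beta + doppler(t) + eps
    return y, X, t

def priestley_chao_smoother(t, h):
    """Smoother matrix with Priestley-Chao Gaussian weights (5.1)."""
    n = len(t)
    D = t[:, None] - t[None, :]
    return np.exp(-D ** 2 / (2.0 * h ** 2)) / (n * h * np.sqrt(2.0 * np.pi))
```

The bandwidth passed to the smoother would in practice be chosen by cross-validation, as described above.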

Fig. 1. The MSE values of the LTE and TS estimator vs. $k$ when $d = 0.3$.

Fig. 2. The MSE values of the LTE and TS estimator vs. $k$ when $d = 0.5$.

Fig. 3. The MSE values of the LTE and TS estimator vs. $k$ when $d = 0.7$.

Fig. 4. The MSE values of the LTE and TS estimator vs. $d$ when $k = 0.1$.

Figs. 4–7 give the MSE values of the Liu-type estimator and the two-step estimator when $k$ is fixed. We can see that, for fixed $k$, the MSE value of the Liu-type estimator increases as $d$ increases. For $k = 0.5$, when $0 < d < 0.55$ the Liu-type estimator has smaller MSE values than the two-step estimator. Accordingly, for $k = 0.7$, when $d < 0.71$ the Liu-type estimator has smaller MSE values than the two-step estimator (see Figs. 6 and 7). So, from our simulation study, the new estimator is more efficient than the two-step estimator in most cases.


Fig. 5. The MSE values of the LTE and TS estimator vs. $d$ when $k = 0.3$.

Fig. 6. The MSE values of the LTE and TS estimator vs. $d$ when $k = 0.5$.

Fig. 7. The MSE values of the LTE and TS estimator vs. $d$ when $k = 0.7$.
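The qualitative behaviour summarized in Figs. 1–7 can be traced directly from the spectral MSE expressions; the following self-contained sketch (our illustration, not the paper's code) sweeps $k$ for a fixed $d$ under the simplified Theorem 2 setting of (3.15)-(3.16), with illustrative spectral inputs that are assumptions rather than the paper's values.

```python
import numpy as np

def mse_lte(k, d, lam, xi, sigma2):
    """MSE of the Liu-type estimator under the Theorem 2 setting, eq. (3.15)."""
    return sigma2 * np.sum((lam + d) ** 2 / (lam * (lam + k) ** 2)) \
           + (k - d) ** 2 * np.sum(xi ** 2 / (lam + k) ** 2)

def mse_ts(lam, sigma2):
    """MSE of the two-step (least squares) estimator, eq. (3.16)."""
    return sigma2 * np.sum(1.0 / lam)

# Illustrative spectral inputs (assumed): one near-zero eigenvalue mimics multicollinearity.
lam = np.array([2.0, 1.0, 0.5, 0.05])   # eigenvalues of X~'X~
xi = np.array([0.3, 0.2, 0.1, 0.1])     # components of P' beta
sigma2 = 0.1

d_fixed = 0.3
for k in np.linspace(d_fixed, 1.0, 8):  # sweep k for fixed d (cf. Figs. 1-3)
    print(f"k={k:.2f}  MSE_LTE={mse_lte(k, d_fixed, lam, xi, sigma2):.4f}"
          f"  MSE_TS={mse_ts(lam, sigma2):.4f}")
```

Sweeping $d$ for fixed $k$ instead reproduces the pattern of Figs. 4–7.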

6. Conclusions

In this paper, we propose a Liu-type estimator in the partially linear model. We also show that, under certain conditions, the new estimator is better than the two-step estimator in the mean squared error sense. Finally, we give a simulation study to illustrate the new estimator. As is well known, the choice of the biasing parameters in the new estimator is very important in practice. Although a selection method is given in Section 4, a more thorough study of how to choose the biasing parameters is left for future work.


Acknowledgments

This work was supported by the Scientific Research Foundation of Chongqing University of Arts and Sciences (Grant No. R2013SC12), the Program for Innovation Team Building at Institutions of Higher Education in Chongqing (Grant No. KJTD201321), and the National Natural Science Foundation of China (Grant No. 71271227).

References

[1] R.F. Engle, C.W. Granger, J. Rice, A. Weiss, Semiparametric estimates of the relation between weather and electricity sales, J. Am. Stat. Assoc. 81 (1986) 310–320.
[2] B. Fischer, M. Hegland, Collocation, filtering and nonparametric regression: Part I, Zfv 1 (1999) 17–24.
[3] P.J. Green, B.W. Silverman, Nonparametric Regression and Generalized Linear Models, Chapman & Hall, London, 1994.
[4] S. Hong, The estimate theory of a semiparametric regression model, Sci. China Ser. A 12 (1991) 1258–1272.
[5] C. Hung, Convergence rates for parametric components in a partly linear model, Ann. Stat. 16 (1) (1988) 136–146.
[6] H.C. Hu, Ridge estimation of a semiparametric regression model, J. Comput. Appl. Math. 176 (2005) 215–222.
[7] E.A. Duran, F. Akdeniz, H.C. Hu, Efficiency of a Liu-type estimator in semiparametric regression models, J. Comput. Appl. Math. 235 (2011) 1418–1428.
[8] G. Tabakan, F. Akdeniz, Difference-based ridge estimator of parameters in partial linear model, Stat. Pap. 51 (2010) 357–368.
[9] E.A. Duran, W.K. Härdle, M. Osipenko, Difference based ridge and Liu type estimators in semiparametric regression models, J. Multivariate Anal. 105 (2012) 164–175.
[10] G. Tabakan, Performance of the difference-based estimators in partially linear models, Statistics 47 (2013) 329–347.
[11] A.E. Hoerl, R.W. Kennard, Ridge regression: biased estimation for nonorthogonal problems, Technometrics 12 (1970) 55–67.
[12] Y.L. Li, H. Yang, A new Liu-type estimator in linear regression model, Stat. Pap. 53 (2) (2012) 427–437.
[13] J.W. Xu, H. Yang, On the restricted r-k class estimator and restricted r-d class estimator in linear regression, J. Stat. Comput. Simul. 81 (6) (2011) 679–691.