
Microelectron. Reliab., Vol. 34, No. 6, pp. 1121-1125, 1994
Copyright © 1994 Elsevier Science Ltd. Printed in Great Britain. All rights reserved. 0026-2714/94 $7.00 + .00

Pergamon

BAYESIAN PREDICTION FOR THE RANGE WITH A BURR DISTRIBUTION AND A RANDOM SAMPLE SIZE

S. K. ASHOUR
Institute of Statistical Studies and Research, Cairo University, Cairo, Egypt

and

M. A. M. H. EL-WAKEEL
Sadat Academy for Management Sciences, Maadi, Egypt

(Received for publication 12 December 1992)

Abstract--This paper is concerned with the problem of predicting the future range under the two-parameter Burr distribution, given a sample of random size, using the Bayesian approach. Two possible distributions of the random sample size are considered, and the results so obtained are illustrated numerically, using iterative techniques and computer facilities.

1. INTRODUCTION

Suppose that the parent distribution is the two-parameter generalized Burr distribution (GBD) with p.d.f.

$$f(x) = kc\,x^{c-1}(1 + x^{c})^{-(k+1)}, \qquad x, c, k > 0, \qquad (1)$$

where c and k are two shape parameters. Let $X_{(1)}, X_{(2)}, \ldots, X_{(r)}$ be the r order statistics from an old sample of size n (i.e., a type II censored sample) from an underlying distribution having p.d.f. (1). Let $Y_{(1)}, Y_{(2)}, \ldots, Y_{(m)}$ be a second, independent sample of future observations from a distribution with p.d.f. (1). The problem of predicting the future range $R = Y_{(m)} - Y_{(1)}$ by the Bayesian approach, under the assumption that the future sample size is a random variable, has been discussed for the one-parameter exponential distribution [1] and for the two-parameter exponential distribution [2]. This paper is concerned with the same prediction problem when the underlying distribution is the two-parameter GBD with p.d.f. (1). The predictive distribution of R is derived when the future sample size has a negative binomial distribution and also when it has a Poisson distribution. A numerical illustration, obtained with numerical techniques and computer facilities, is provided and is used to assess the efficiency of the new results.

2. PREDICTION USING A FIXED SAMPLE SIZE

Suppose we have a type II censored sample of n items whose lifetimes have the two-parameter Burr distribution (1), i.e., the first r failures in a sample of size n are $X_{(1)}, X_{(2)}, \ldots, X_{(r)}$, where $X_{(i)}$ is the ith order statistic. Following Ref. [3] and using gamma priors for c and k, the joint posterior distribution of k and c is

$$f(k, c \mid \text{data}) = \{I\,\Gamma(\gamma + r + 1)\}^{-1} \exp\bigl[-k\{1 + y(c)(\delta c)^{-1}\}\bigr]\, k^{\gamma + r}\, e^{-\alpha c} c^{\beta} x(c), \qquad (2)$$

where

$$y(c) = \sum_{i=1}^{r} \ln(1 + x_{(i)}^{c}) + (n - r)\ln(1 + x_{(r)}^{c}),$$

$$x(c) = \prod_{i=1}^{r} x_{(i)}^{c-1}(1 + x_{(i)}^{c})^{-1},$$

$$I = \int_{0}^{\infty} e^{-\alpha c} c^{\beta} x(c)\,\{1 + y(c)(\delta c)^{-1}\}^{-(\gamma + r + 1)}\, \mathrm{d}c$$

and α, β, γ and δ are prior parameters.

Let $R = Y_{(m)} - Y_{(1)}$ denote the future range. The conditional p.d.f. of R given k and c is

$$f(R \mid k, c) = m(m-1)c^{2}k^{2} \sum_{i=0}^{m-2} (-1)^{i} \binom{m-2}{i} \int_{0}^{\infty} \exp\bigl[-k\{(m - i - 1)\ln(1 + y^{c}) + (i + 1)\ln(1 + (y + R)^{c})\}\bigr]\, y(c)\,R(c)\, \mathrm{d}y, \qquad (3)$$

where $y = Y_{(1)}$,

$$y(c) = y^{c-1}(1 + y^{c})^{-1}$$

and

$$R(c) = (y + R)^{c-1}\{1 + (y + R)^{c}\}^{-1}.$$
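As a concrete illustration, the density (1) and the sample statistics y(c) and x(c) that enter the posterior (2) are straightforward to evaluate numerically. The following is a minimal Python sketch under the reconstruction above; the censored observations are placeholders, not the data of Refs [3, 7].

```python
import numpy as np

def burr_pdf(x, c, k):
    """Two-parameter Burr (GBD) density of equation (1)."""
    return k * c * x ** (c - 1.0) * (1.0 + x ** c) ** (-(k + 1.0))

def y_of_c(c, x_obs, n):
    """y(c) = sum_i ln(1 + x_(i)^c) + (n - r) ln(1 + x_(r)^c)."""
    r = len(x_obs)
    return np.sum(np.log1p(x_obs ** c)) + (n - r) * np.log1p(x_obs[-1] ** c)

def x_of_c(c, x_obs):
    """x(c) = prod_i x_(i)^(c-1) (1 + x_(i)^c)^(-1)."""
    return np.prod(x_obs ** (c - 1.0) / (1.0 + x_obs ** c))

# placeholder censored sample: first r = 5 failures out of n = 10 items on test
x_obs = np.sort(np.array([0.32, 0.48, 0.61, 0.77, 0.90]))
print(burr_pdf(0.5, c=5.0, k=2.0), y_of_c(5.0, x_obs, n=10), x_of_c(5.0, x_obs))
```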

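The normalizing constant I in (2) involves only a one-dimensional integral over c and can be handled by standard quadrature. A sketch combining the pieces above, assuming the reconstructed form of I; the prior values and data are again placeholders.

```python
import numpy as np
from scipy.integrate import quad

def I_const(alpha, beta, gamma, delta, r, x_obs, n):
    """Numerical evaluation of the normalizing constant I in (2)."""
    def integrand(c):
        yc = np.sum(np.log1p(x_obs ** c)) + (n - r) * np.log1p(x_obs[-1] ** c)
        xc = np.prod(x_obs ** (c - 1.0) / (1.0 + x_obs ** c))
        return (np.exp(-alpha * c) * c ** beta * xc
                * (1.0 + yc / (delta * c)) ** (-(gamma + r + 1)))
    val, _ = quad(integrand, 0.0, np.inf)
    return val

x_obs = np.sort(np.array([0.32, 0.48, 0.61, 0.77, 0.90]))
print(I_const(alpha=1.0, beta=2.0, gamma=4, delta=1, r=len(x_obs), n=10))
```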

From equations (2) and (3), the Bayesian predictive p.d.f. of R is

$$f(R \mid \text{data}) = A_{4} I^{-1} \sum_{i=0}^{m-2} (-1)^{i} \binom{m-2}{i} \int_{0}^{\infty}\!\!\int_{0}^{\infty} e^{-\alpha c} c^{\beta+2}\, y(c)\,R(c)\,x(c)\,\{R(i, m)\}^{-(\gamma + r + 3)}\, \mathrm{d}c\, \mathrm{d}y, \qquad (4)$$

where

$$A_{4} = m(m - 1)(\gamma + r + 1)(\gamma + r + 2)$$

and

$$R(i, m) = 1 + y(c)(\delta c)^{-1} + (m - i - 1)\ln(1 + y^{c}) + (i + 1)\ln\{1 + (y + R)^{c}\}.$$

The mean and variance obtained from equation (4) are, respectively, the Bayesian predictive estimate of R and the Bayesian risk for R. Further, the one-sided predictive interval for R is

$$P(R \geq R^{*} \mid \text{data}) = m(\gamma + r + 1)\, I^{-1} \sum_{i=0}^{m-2} (-1)^{i} \binom{m-1}{i+1} \int_{0}^{\infty}\!\!\int_{0}^{\infty} e^{-\alpha c} c^{\beta+1}\, y(c)\,x(c)\,\{R^{*}(i, m)\}^{-(\gamma + r + 2)}\, \mathrm{d}c\, \mathrm{d}y. \qquad (5)$$

Numerical solutions and computer facilities are needed to obtain the point predictive estimate of R and the predictive interval for R. For the one-parameter GBD (c is known), equations (4) and (5) reduce to

$$f(R \mid \text{data}) = A_{2} A_{4} \sum_{i=0}^{m-2} (-1)^{i} \binom{m-2}{i} \int_{0}^{\infty} y(c)\,R(c)\,\{R(i, m, c)\}^{-(\gamma + r + 3)}\, \mathrm{d}y, \qquad (6)$$

where $A_{2}$ is the corresponding normalizing constant when c is known,

$$R(i, m, c) = 1 + (\delta c + y(c))^{-1}\{(m - i - 1)\ln(1 + y^{c}) + (i + 1)\ln(1 + (y + R)^{c})\}$$

and

$$P(R \geq R^{*} \mid \text{data}) = mc(\gamma + r + 1)(\delta c + y(c))^{-1} \sum_{i=0}^{m-2} (-1)^{i} \binom{m-1}{i+1} \int_{0}^{\infty} y(c)\,\{R^{*}(i, m, c)\}^{-(\gamma + r + 2)}\, \mathrm{d}y.$$
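For known c, the one-sided bound that follows equation (6) is a finite alternating sum of one-dimensional integrals, which standard quadrature handles directly. The sketch below implements that expression as reconstructed here; the argument values are placeholders rather than the paper's data.

```python
import numpy as np
from math import comb, log1p
from scipy.integrate import quad

def tail_prob_fixed(R_star, m, c, r, gamma, delta, yc):
    """P(R >= R* | data) for known c and fixed future sample size m,
    following the bound after equation (6); yc is the statistic y(c)."""
    s = delta * c + yc
    total = 0.0
    for i in range(m - 1):                        # i = 0, ..., m-2
        def integrand(y, i=i):
            Ric = 1.0 + ((m - i - 1) * log1p(y ** c)
                         + (i + 1) * log1p((y + R_star) ** c)) / s
            yc_y = y ** (c - 1.0) / (1.0 + y ** c)   # y(c) as a function of y
            return yc_y * Ric ** (-(gamma + r + 2))
        val, _ = quad(integrand, 0.0, np.inf)
        total += (-1.0) ** i * comb(m - 1, i + 1) * val
    return m * c * (gamma + r + 1) / s * total

# illustrative call with placeholder statistics (not the paper's data)
print(tail_prob_fixed(R_star=0.6, m=4, c=5.0, r=5, gamma=4, delta=1, yc=2.3))
```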

Table 1. Coefficients of variation of the range using fixed and random sample sizes (when c = 5). FSS: fixed sample size; NBSS: negative binomial sample size; PSS: Poisson sample size; NIP: noninformative prior.

 r    γ    δ        FSS   NBSS(θ=0.25)   NBSS(θ=0.50)   NBSS(θ=0.75)   PSS(λ=1)   PSS(λ=3)   PSS(λ=5)
 5    4    1     4.1778         2.5954         2.9445         0.9211     0.5546     0.7590     0.4461
 5    4    2     1.0584         0.9524         1.1313         0.7715     0.5120     0.1129     0.1463
 5    4    3     0.5552         0.3320         0.3488         0.3160     0.0838     0.0498     0.0791
 5    5    1     5.6922         1.3782         1.8766         0.6378     0.4258     0.4119     0.3949
 5    5    2     1.6575         0.3746         0.7157         0.6799     0.4017     0.1765     0.2377
 5    5    3     0.6223         0.2990         0.6725         0.4057     0.1966     0.1112     0.1452
 5    6    1     7.7743         0.9460         1.2679         1.1036     0.3556     0.1918     0.2547
 5    6    2     1.7202         0.9411         1.0727         0.2157     0.2951     0.2310     0.2142
 5    6    3     0.6178         0.4313         0.7701         0.4289     0.2444     0.1510     0.0725
 5     NIP       9.0730         1.4289         2.1417         0.8522     0.3158     0.2240     0.3028
 7    4    1     6.6121         0.7024         1.1656         0.8457     0.7398     0.5444     0.4754
 7    4    2     1.5980         0.8932         1.0065         0.1627     0.3143     0.2157     0.1247
 7    4    3     0.5638         0.4683         0.5506         0.5005     0.2226     0.0955     0.1094
 7    5    1     8.9340         0.8356         0.8805         1.2600     0.2445     0.4322     0.2386
 7    5    2     1.9783         0.5604         0.7136         0.8425     0.3900     0.1736     0.1582
 7    5    3     0.7732         0.3473         0.4888         0.3374     0.2691     0.1296     0.0768
 7    6    1    12.1126         0.7496         1.0496         1.2320     0.2520     0.2835     0.2220
 7    6    2     2.4247         0.5478         0.5803         0.2942     0.2791     0.2336     0.1675
 7    6    3     0.9826         0.3496         0.4239         0.6774     0.2046     0.1790     0.1502
 7     NIP      21.5187         1.4241         1.3110         0.5777     0.2903     0.4020     0.1621
10    4    1     5.2896         1.9199         2.2039         0.7764     0.4598     0.5094     0.4090
10    4    2     1.7353         0.4630         0.8711         0.4842     0.4563     0.2598     0.2473
10    4    3     0.7636         0.3736         0.5425         0.5608     0.1840     0.1471     0.1351
10    5    1     6.6548         1.2473         0.7044         1.0654     0.3666     0.3543     0.4369
10    5    2     2.0713         0.6076         0.6217         0.4923     0.3832     0.1221     0.1025
10    5    3     0.9485         0.8963         0.3460         0.2559     0.2525     0.1747     0.0658
10    6    1     8.3846         0.7388         0.6080         1.2590     0.3602     0.3316     0.1201
10    6    2     2.4563         0.6200         0.9693         0.5141     0.2060     0.1925     0.1609
10    6    3     1.1402         0.5030         0.3218         0.3476     0.1852     0.1752     0.1270
10     NIP       8.0672         1.3646         2.1766         0.6687     0.2407     0.2529     0.2204
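The coefficients of variation in Table 1 are the ratio of the square root of the Bayes risk to the Bayes predictor, i.e., sd(R)/E(R) under the predictive density. A generic Python sketch of that computation follows, with a stand-in density rather than equation (6) itself.

```python
import numpy as np
from scipy.integrate import quad

def cv_of_range(pred_pdf):
    """Coefficient of variation of R under a (possibly un-normalized) density."""
    norm, _ = quad(pred_pdf, 0.0, np.inf)
    m1, _ = quad(lambda R: R * pred_pdf(R), 0.0, np.inf)
    m2, _ = quad(lambda R: R * R * pred_pdf(R), 0.0, np.inf)
    mean, second = m1 / norm, m2 / norm
    return np.sqrt(second - mean ** 2) / mean

# sanity check with a Gamma(2, 1) density: CV should be close to 0.7071
print(cv_of_range(lambda R: np.exp(-R) * R))
```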


Table 2. Tail probabilities of the range using fixed and random sample sizes (when c = 5)

 γ    δ    R*      FSS   NBSS(θ=0.25)   NBSS(θ=0.50)   NBSS(θ=0.75)   PSS(λ=1)   PSS(λ=3)   PSS(λ=5)
 4    1   0.6   0.7859         0.5192         0.3686         0.1286     0.4995     0.2917     0.1122
 4    1   0.7   0.8001         0.5382         0.3798         0.2757     0.4997     0.3065     0.1851
 4    1   0.8   0.8386         0.5891         0.3897         0.3297     0.5198     0.3194     0.2861
 4    1   0.9   0.8731         0.5993         0.3957         0.3466     0.5399     0.3396     0.2870
 4    2   0.6   0.2368         0.1440         0.1234         0.0984     0.1179     0.1076     0.0632
 4    2   0.7   0.2872         0.1600         0.1314         0.1281     0.1485     0.1245     0.0744
 4    2   0.8   0.3601         0.1761         0.1578         0.1635     0.1588     0.1297     0.1022
 4    2   0.9   0.4493         0.2880         0.2582         0.1773     0.1989     0.1848     0.1477
 4    3   0.6   0.1736         0.1685         0.1384         0.0719     0.1618     0.1062     0.0747
 4    3   0.7   0.1797         0.1768         0.1616         0.1291     0.1659     0.1570     0.1079
 4    3   0.8   0.1888         0.1784         0.1633         0.1305     0.1791     0.1592     0.1258
 4    3   0.9   0.1962         0.1910         0.1783         0.1564     0.1869     0.1649     0.1539
 5    1   0.6   0.8373         0.4988         0.3879         0.1646     0.4805     0.3767     0.1404
 5    1   0.7   0.8558         0.5193         0.4225         0.2316     0.4941     0.3959     0.1467
 5    1   0.8   0.8807         0.5395         0.4564         0.2709     0.5097     0.4068     0.2667
 5    1   0.9   0.9082         0.5997         0.4886         0.3506     0.5799     0.4779     0.2889
 5    2   0.6   0.3484         0.2700         0.1941         0.1081     0.1842     0.1701     0.0861
 5    2   0.7   0.3951         0.3782         0.2808         0.1408     0.1889     0.1795     0.0905
 5    2   0.8   0.4622         0.3877         0.3615         0.1758     0.1988     0.1832     0.1222
 5    2   0.9   0.5434         0.4023         0.3877         0.2304     0.1991     0.1856     0.1523
 5    3   0.6   0.1864         0.1432         0.0977         0.0441     0.1407     0.0801     0.0427
 5    3   0.7   0.1924         0.1541         0.0994         0.0763     0.1432     0.0921     0.0699
 5    3   0.8   0.2117         0.2078         0.1671         0.1216     0.1824     0.1345     0.1519
 5    3   0.9   0.2122         0.2103         0.2058         0.1920     0.2008     0.1811     0.1708
 6    1   0.6   0.8766         0.4789         0.8953         0.2320     0.4213     0.2434     0.1906
 6    1   0.7   0.8917         0.4994         0.3072         0.2505     0.4345     0.2909     0.2002
 6    1   0.8   0.9118         0.5196         0.3187         0.2546     0.4672     0.2928     0.2401
 6    1   0.9   0.9336         0.5297         0.3495         0.2840     0.4809     0.3077     0.2551
 6    2   0.6   0.4442         0.3827         0.2761         0.1323     0.3222     0.2091     0.1129
 6    2   0.7   0.4872         0.3885         0.2827         0.1551     0.3317     0.2447     0.1349
 6    2   0.8   0.5485         0.3910         0.2890         0.1875     0.3699     0.2555     0.1789
 6    2   0.9   0.6218         0.3962         0.3478         0.1991     0.3807     0.2993     0.1990
 6    3   0.6   0.1796         0.1771         0.1487         0.0632     0.1699     0.1409     0.0604
 6    3   0.7   0.2373         0.2109         0.1798         0.0944     0.1981     0.1677     0.0739
 6    3   0.8   0.3222         0.2458         0.1961         0.1263     0.2121     0.1883     0.1008
 6    3   0.9   0.4284         0.2718         0.2139         0.1656     0.2309     0.1999     0.1231
  NIP     0.6   0.8656         0.5785         0.3787         0.2416     0.5333     0.3368     0.2097
  NIP     0.7   0.8700         0.5800         0.3838         0.2821     0.5417     0.3671     0.2123
  NIP     0.8   0.9058         0.6875         0.3943         0.2464     0.5998     0.3778     0.2779
  NIP     0.9   0.9818         0.7032         0.4323         0.3233     0.6234     0.3909     0.3193

3. PREDICTION USING A RANDOM SAMPLE SIZE

Suppose that the sample size m of the future sample has a negative binomial distribution. Using Refs [4, 5] and the p.d.f. (1), the conditional p.d.f. of R given k and c is

$$f(R \mid k, c) = D_{t1}^{-1} c^{2} k^{2} \sum_{t=0}^{\infty} \sum_{i=0}^{t} B(t, i, \theta) \int_{0}^{\infty} \exp\bigl[-k\{(t - i + 1)\ln(1 + y^{c}) + (i + 1)\ln(1 + (y + R)^{c})\}\bigr]\, y(c)\,R(c)\, \mathrm{d}y, \qquad (7)$$

where

$$B(t, i, \theta) = (-1)^{i} \binom{t}{i} (t + 1)(t + 2) \binom{m + t + 1}{t + 2}\, \theta^{t+2} (1 - \theta)^{m}$$

and

$$D_{t1} = \sum_{j=2}^{\infty} \binom{m + j - 1}{j}\, \theta^{j} (1 - \theta)^{m}.$$

From equations (2) and (7), the Bayesian predictive p.d.f. of R is

$$f(R \mid \text{data}) = (I D_{t1})^{-1} A_{t4} \sum_{t=0}^{\infty} \sum_{i=0}^{t} B(t, i, \theta) \int_{0}^{\infty}\!\!\int_{0}^{\infty} e^{-\alpha c} c^{\beta+2}\, y(c)\,R(c)\,x(c)\,\{R(t, i)\}^{-(\gamma + r + 3)}\, \mathrm{d}y\, \mathrm{d}c, \qquad (8)$$
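The negative binomial machinery in (7) and (8) reduces to the truncated normalizer D_t1 and the weights B(t, i, θ). A Python sketch under the parametrization assumed in the reconstruction above (pmf proportional to C(m+j-1, j) θ^j (1-θ)^m), which should be checked against the original paper:

```python
from math import comb

def nb_pmf(j, m, theta):
    """Negative binomial pmf with shape parameter m and probability theta."""
    return comb(m + j - 1, j) * theta ** j * (1.0 - theta) ** m

def D_t1(m, theta, jmax=200):
    """Truncated normalizer: P(future sample size >= 2)."""
    return sum(nb_pmf(j, m, theta) for j in range(2, jmax))

def B(t, i, m, theta):
    """Weight of the (t, i) term in equation (8), absorbing the m(m-1)
    factor of the fixed-size case via (t + 1)(t + 2) nb_pmf(t + 2, ...)."""
    return (-1) ** i * comb(t, i) * (t + 1) * (t + 2) * nb_pmf(t + 2, m, theta)

print(D_t1(m=4, theta=0.25), B(t=1, i=0, m=4, theta=0.25))
```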


where

$$A_{t4} = m(\gamma + r + 1)(\gamma + r + 2)$$

and

$$R(t, i) = 1 + y(c)(\delta c)^{-1} + (t - i + 1)\ln(1 + y^{c}) + (i + 1)\ln\{1 + (y + R)^{c}\}.$$

Clearly, the predictive p.d.f. (8) can be used to obtain the Bayes predictor of R and the Bayes risk for R. It also provides a one-sided interval for R, as

$$P(R \geq R^{*} \mid \text{data}) = (I D_{t1})^{-1} m(\gamma + r + 1) \sum_{t=0}^{\infty} \sum_{i=0}^{t} B(t, i, \theta)(i + 1)^{-1} \int_{0}^{\infty}\!\!\int_{0}^{\infty} e^{-\alpha c} c^{\beta+1}\, y(c)\,x(c)\,\{R^{*}(t, i)\}^{-(\gamma + r + 2)}\, \mathrm{d}y\, \mathrm{d}c. \qquad (9)$$

If c is known, we can obtain the predictive distribution

$$f(R \mid \text{data}) = D_{t1}^{-1} A_{2} A_{t4} \sum_{t=0}^{\infty} \sum_{i=0}^{t} B(t, i, \theta) \int_{0}^{\infty} y(c)\,R(c)\,\{R(t, i, c)\}^{-(\gamma + r + 3)}\, \mathrm{d}y, \qquad (10)$$

where

$$R(t, i, c) = 1 + (\delta c + y(c))^{-1}\{(t - i + 1)\ln(1 + y^{c}) + (i + 1)\ln(1 + (y + R)^{c})\}$$

and the one-sided Bayesian predictive interval is

$$P(R \geq R^{*} \mid \text{data}) = D_{t1}^{-1} mc(\gamma + r + 1)(\delta c + y(c))^{-1} \sum_{t=0}^{\infty} \sum_{i=0}^{t} B(t, i, \theta)(i + 1)^{-1} \int_{0}^{\infty} y(c)\,\{R^{*}(t, i, c)\}^{-(\gamma + r + 2)}\, \mathrm{d}y. \qquad (11)$$

Now suppose that the sample size m has a Poisson distribution with parameter λ. Then the conditional distribution of R given k and c is

$$f(R \mid k, c) = D_{t2}^{-1} c^{2} k^{2} \sum_{t=0}^{\infty} \sum_{i=0}^{t} P(t, i, \lambda) \int_{0}^{\infty} \exp\bigl[-k\{(t - i + 1)\ln(1 + y^{c}) + (i + 1)\ln(1 + (y + R)^{c})\}\bigr]\, y(c)\,R(c)\, \mathrm{d}y, \qquad (12)$$

where

$$P(t, i, \lambda) = (-1)^{i} \binom{t}{i} (t + 1)(t + 2)\, e^{-\lambda} \lambda^{t+2} \{(t + 2)!\}^{-1}$$

and

$$D_{t2} = \sum_{j=2}^{\infty} e^{-\lambda} \lambda^{j} / j!.$$

By using equations (2) and (12), the Bayesian predictive p.d.f. of R is

$$f(R \mid \text{data}) = (I D_{t2})^{-1} A_{t3} \sum_{t=0}^{\infty} \sum_{i=0}^{t} P(t, i, \lambda) \int_{0}^{\infty}\!\!\int_{0}^{\infty} e^{-\alpha c} c^{\beta+2}\, y(c)\,R(c)\,x(c)\,\{R(t, i)\}^{-(\gamma + r + 3)}\, \mathrm{d}y\, \mathrm{d}c, \qquad (13)$$

where $A_{t3} = (\gamma + r + 1)(\gamma + r + 2)$. Hence the Bayesian predictive interval for R is

$$P(R \geq R^{*} \mid \text{data}) = (I D_{t2})^{-1} (\gamma + r + 1) \sum_{t=0}^{\infty} \sum_{i=0}^{t} P(t, i, \lambda)(i + 1)^{-1} \int_{0}^{\infty}\!\!\int_{0}^{\infty} e^{-\alpha c} c^{\beta+1}\, y(c)\,x(c)\,\{R^{*}(t, i)\}^{-(\gamma + r + 2)}\, \mathrm{d}y\, \mathrm{d}c. \qquad (14)$$

For the one-parameter GBD (i.e., c is known), the Bayesian predictive p.d.f. of R is

$$f(R \mid \text{data}) = D_{t2}^{-1} A_{2} A_{t3} \sum_{t=0}^{\infty} \sum_{i=0}^{t} P(t, i, \lambda) \int_{0}^{\infty} y(c)\,R(c)\,\{R(t, i, c)\}^{-(\gamma + r + 3)}\, \mathrm{d}y, \qquad (15)$$

so that the predictive interval for R is

$$P(R \geq R^{*} \mid \text{data}) = D_{t2}^{-1} c(\gamma + r + 1)(\delta c + y(c))^{-1} \sum_{t=0}^{\infty} \sum_{i=0}^{t} P(t, i, \lambda)(i + 1)^{-1} \int_{0}^{\infty} y(c)\,\{R^{*}(t, i, c)\}^{-(\gamma + r + 2)}\, \mathrm{d}y. \qquad (16)$$

The corresponding results for the generalized negative binomial and the generalized Poisson sample-size distributions have already been discussed and displayed in Ref. [6].
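Putting the Poisson pieces together, the bound (16) for known c can be approximated by truncating the infinite t-sum. The following sketch assumes the reconstructed forms of P(t, i, λ), D_t2 and R*(t, i, c); all parameter values are placeholders, not the paper's data.

```python
import numpy as np
from math import comb, exp, factorial, log1p
from scipy.integrate import quad

def P_weight(t, i, lam):
    """P(t, i, lambda) as defined after equation (12)."""
    return ((-1) ** i * comb(t, i) * (t + 1) * (t + 2)
            * exp(-lam) * lam ** (t + 2) / factorial(t + 2))

def D_t2(lam, jmax=200):
    """Poisson normalizer: P(future sample size >= 2)."""
    return sum(exp(-lam) * lam ** j / factorial(j) for j in range(2, jmax))

def tail_prob_poisson(R_star, c, r, gamma, delta, yc, lam, tmax=25):
    """Truncated evaluation of the bound (16) for known c."""
    s = delta * c + yc
    acc = 0.0
    for t in range(tmax):
        for i in range(t + 1):
            def integrand(y, t=t, i=i):
                Rtic = 1.0 + ((t - i + 1) * log1p(y ** c)
                              + (i + 1) * log1p((y + R_star) ** c)) / s
                return y ** (c - 1.0) / (1.0 + y ** c) * Rtic ** (-(gamma + r + 2))
            val, _ = quad(integrand, 0.0, np.inf)
            acc += P_weight(t, i, lam) / (i + 1) * val
    return c * (gamma + r + 1) / s * acc / D_t2(lam)

print(tail_prob_poisson(R_star=0.6, c=5.0, r=5, gamma=4, delta=1, yc=2.3, lam=1.0))
```

The weights P(t, i, λ) decay factorially in t, so modest truncation levels such as tmax = 25 are typically sufficient for the λ values used in Section 4.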

4. A NUMERICAL ILLUSTRATION

This section contains a numerical illustration of the theoretical results for the range when c is known (c = 5), making use of Papadopoulos' example [7] with type II censored samples (50% and 70% censored as well as complete samples), a double-integration technique and personal computer facilities. We consider cases in which the future sample size is fixed (m = 4) and cases in which it is a random variable; the parameters of the negative binomial and Poisson distributions are taken as θ = 0.25, 0.5 and 0.75 and λ = 1, 3 and 5, respectively. Different priors are chosen and used. The coefficients of variation and tail probabilities of the future range obtained in these cases are displayed in Tables 1 and 2.

The use of computer facilities makes these methods practical and applicable to experiments in which the GBD is the appropriate statistical model. On the basis of this example, we conclude that the use of a random sample size gives smaller coefficients of variation and tail probabilities, and that the Poisson distribution performs better than the negative binomial. This conclusion holds for censored and complete samples and for informative and noninformative priors alike. These conclusions rest, however, on a single numerical illustration, and substantially more calculation is needed before they can be regarded as firm.

REFERENCES

1. G. S. Lingappaiah, Bayesian prediction in exponential life testing when sample size is a random variable, IEEE Trans. Reliab. 35, 106-110 (1986).
2. S. A. Salem and F. A. El-Galad, Bayesian prediction for the range based on two parameter exponential with random sample size, Microelectron. Reliab. 33, 623-632 (1993).
3. S. K. Ashour and M. A. El-Wakeel, Estimation of the unknown parameters of the generalized Burr distribution, Egyptian Stat. J. 36, 1-14 (1992).
4. P. C. Consul, On the distribution of order statistics for a random sample size, Statist. Neerlandica 38, 249-256 (1984).
5. D. Gupta and R. C. Gupta, On the distribution of order statistics for random sample size, Statist. Neerlandica 38, 13-18 (1984).
6. M. A. El-Wakeel, On the generalized Burr distribution, Ph.D. Thesis, Institute of Statistical Studies and Research, Cairo University, Egypt (1992).
7. A. S. Papadopoulos, The Burr distribution as a failure model from a Bayesian approach, IEEE Trans. Reliab. R-27, 369-371 (1978).