Microelectron. Reliab., Vol. 33, No. 10, pp. 1523-1532, 1993.
0026-2714/93 $6.00 + .00 © 1993 Pergamon Press Ltd
Printed in Great Britain.
BAYESIAN PREDICTION OF THE LIFE TIMES OF THE UNUSED ITEMS FOR THE BURR DISTRIBUTIONS WITH A RANDOM SAMPLE SIZE

S. K. ASHOUR
Cairo University, Institute of Statistical Studies and Research, Cairo, Egypt

and

M. A. M. H. EL-WEKEEL
Sadat Academy for Management Sciences, Maadi, Egypt
(Received for publication 12 May 1992)
Summary
Evans and Ragab (1983) used a Bayesian approach to predict the number of unused items in a censored type II sample when the parent distribution is Burr type XII (GBD) and the sample sizes of the old and future samples are fixed. The present paper is concerned with the same prediction problem when the size of the future sample is a random variable; more realistic priors are considered and used for k and c. A numerical illustration has been provided.
1. Introduction

In reliability studies, the generalized Burr distribution (GBD) plays a very important role. The density function of the GBD is

f(x) = k c x^{c-1} (1 + x^{c})^{-(k+1)},   x > 0, k > 0, c > 0,   (1)

where k and c are the two shape parameters. Having observed the first r failure times 0 < X(1) < X(2) < ... < X(r) from an old sample of size n drawn from (1), the problem here is to predict the life times of the remaining (n-r) items in the sample, denoted by X(r+1) < X(r+2) < ... < X(n) respectively, where X(r+s), 0 < s ≤ n-r, denotes the life time of the s-th item that would have failed. Evans and Ragab (1983) discussed a Bayesian prediction problem for X(r+s). They used a discrete prior for c and a gamma prior for k. This discrete prior is not a realistic choice. Also, they considered the sample size as fixed. This paper is concerned with the same prediction problem under a wider set-up, namely one in which (i) the sample size is a random variable having a Negative binomial distribution or, alternatively, a Poisson distribution, and (ii) the joint prior distribution of k and c is a bivariate gamma distribution. A numerical illustration will be provided to check the efficiency of the new set-up and to compare it with the previous results.
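The distribution function corresponding to (1) is F(x) = 1 - (1 + x^c)^(-k), which inverts in closed form, so GBD variates are easy to simulate. The following Python sketch (ours, not part of the original paper) implements the density, the distribution function and inverse-CDF sampling:

```python
import random

def burr_pdf(x, k, c):
    """Density (1): f(x) = k c x^(c-1) (1 + x^c)^(-(k+1)) for x > 0."""
    if x <= 0:
        return 0.0
    return k * c * x ** (c - 1) * (1 + x ** c) ** (-(k + 1))

def burr_cdf(x, k, c):
    """F(x) = 1 - (1 + x^c)^(-k), obtained by integrating (1)."""
    if x <= 0:
        return 0.0
    return 1.0 - (1 + x ** c) ** (-k)

def burr_sample(k, c, rng=random):
    """Inverse-CDF draw: solve F(x) = u with u ~ Uniform(0, 1)."""
    u = rng.random()
    return ((1.0 - u) ** (-1.0 / k) - 1.0) ** (1.0 / c)
```

For k = c = 1 the density reduces to (1 + x)^(-2), a convenient special case for checking the implementation.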
2. Prediction Using A Fixed Sample Size

Suppose we have a type II censored sample, resulting in r failures out of n, where the life times have the two-parameter Burr distribution with the p.d.f. (1). Following Ashour and El-Wekeel (1992) and using the gamma priors for k and c, the joint posterior distribution of k and c is
f(k, c | data) = {I Γ(γ+r+1)}^{-1} exp[-k{1 + y(c)(δc)^{-1}}δc] k^{γ+r} e^{-αc} c^{β} x(c),   (2)

where

y(c) = Σ_{i=1}^{r} ln(1 + x(i)^{c}) + (n-r) ln(1 + x(r)^{c}),

x(c) = Π_{i=1}^{r} x(i)^{c-1} (1 + x(i)^{c})^{-1},

I = ∫_{0}^{∞} e^{-αc} c^{β} x(c) {1 + y(c)(δc)^{-1}}^{-(γ+r+1)} dc,

and α, β, γ and δ are prior parameters. Setting, for simplicity, Y = X(r+s) and X = X(r) and using (1), the conditional p.d.f. of X(r+s) given X(r) is

f(y | x) = k c s C(n-r, s) Σ_{i=0}^{s-1} (-1)^{i} C(s-1, i) exp[-k(n+i-s-r+1){ln(1+y^{c}) - ln(1+x^{c})}] y*(c),   (3)

where y*(c) = y^{c-1}(1 + y^{c})^{-1}.
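Because y(c), x(c) and the normalizing constant I depend on c through logarithms and products, I has no closed form and must be approximated numerically. The sketch below is our own illustration (a midpoint rule on a truncated c-range; the data and truncation point are placeholders, not values from the paper):

```python
import math

def y_of_c(xs, n, c):
    """y(c) = sum_i ln(1 + x(i)^c) + (n - r) ln(1 + x(r)^c); xs = the r observed failures."""
    r = len(xs)
    return sum(math.log(1 + x ** c) for x in xs) + (n - r) * math.log(1 + xs[-1] ** c)

def x_of_c(xs, c):
    """x(c) = prod_i x(i)^(c-1) (1 + x(i)^c)^(-1)."""
    p = 1.0
    for x in xs:
        p *= x ** (c - 1) / (1 + x ** c)
    return p

def posterior_normaliser(xs, n, alpha, beta, gamma, delta, c_max=20.0, m=4000):
    """Midpoint-rule approximation to
    I = integral_0^inf e^(-alpha c) c^beta x(c) {1 + y(c)/(delta c)}^(-(gamma+r+1)) dc,
    truncated at c_max (an assumption; widen it until the value stabilises)."""
    r = len(xs)
    h = c_max / m
    total = 0.0
    for j in range(m):
        c = (j + 0.5) * h  # midpoints avoid the possible singularity at c = 0
        total += (math.exp(-alpha * c) * c ** beta * x_of_c(xs, c)
                  * (1 + y_of_c(xs, n, c) / (delta * c)) ** (-(gamma + r + 1)))
    return total * h
```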
From (2) and (3), the Bayesian predictive p.d.f. of X(r+s) given X(r) is

f(y | data) = s(γ+r+1) Σ_{i=0}^{s-1} A(i,s) ∫_{0}^{∞} e^{-αc} c^{β+1} y*(c) x(c) {y(i,s)}^{-(γ+r+2)} dc · I^{-1},   (4)

where

A(i,s) = (-1)^{i} C(n-r, s) C(s-1, i)

and

y(i,s) = 1 + y(c)(δc)^{-1} + (n+i-s-r+1){ln(1+y^{c}) - ln(1+x^{c})}.

The mean and variance, obtained from (4), are respectively the Bayes predictive estimate of X(r+s) and the Bayes risk for X(r+s). Moreover, the one-sided predictive interval for X(r+s) is

P(X(r+s) > y | data) = s Σ_{i=0}^{s-1} A(i,s) (n+i-s-r+1)^{-1} ∫_{0}^{∞} e^{-αc} c^{β} x(c) {y(i,s)}^{-(γ+r+1)} dc · I^{-1}.   (5)

Numerical solutions and computer facilities are needed to obtain a point estimator and an interval predictive estimator for X(r+s). For the one-parameter GBD (c known), equations (4) and (5) reduce to

f(y | data) = s c (γ+r+1) (δc+y(c))^{-1} Σ_{i=0}^{s-1} A(i,s) y*(c) {y(i,s,c)}^{-(γ+r+2)},   (6)

where

y(i,s,c) = 1 + (n+i-s-r+1)(δc+y(c))^{-1} {ln(1+y^{c}) - ln(1+x^{c})},

and

P(X(r+s) ≥ y* | data) = s Σ_{i=0}^{s-1} A(i,s) (n+i-s-r+1)^{-1} {y(i,s,c)}^{-(γ+r+1)}.   (7)
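When c is known, (7) is a finite sum that evaluates directly. In the Python sketch below (our own), A(i,s) is taken as (-1)^i C(n-r, s) C(s-1, i); this form is an assumption, chosen because it makes (7) reduce exactly to the s = 1 special case and gives probability 1 at y* = x(r):

```python
from math import comb, log

def tail_prob_c_known(y_star, s, xs, n, c, gamma, delta):
    """Eq. (7): P(X(r+s) >= y* | data) for known c; xs holds the r observed failures."""
    r = len(xs)
    x_r = xs[-1]
    yc = sum(log(1 + x ** c) for x in xs) + (n - r) * log(1 + x_r ** c)  # y(c)
    T = log(1 + y_star ** c) - log(1 + x_r ** c)
    total = 0.0
    for i in range(s):
        m = n + i - s - r + 1
        A = (-1) ** i * comb(n - r, s) * comb(s - 1, i)   # assumed form of A(i, s)
        y_isc = 1 + m * T / (delta * c + yc)              # y(i, s, c)
        total += A / m * y_isc ** (-(gamma + r + 1))
    return s * total
```

At y* = x(r) the alternating sum telescopes to 1, a useful sanity check on any implementation.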
Special cases of (7) of particular interest include s = 1 (corresponding to the prediction of the time at which the next failure occurs) and s = n-r (corresponding to the prediction of the last failure time). For s = 1, equation (7) reduces to

P(X(r+1) ≥ y* | data) = [1 + (n-r)(δc+y(c))^{-1} {ln(1+y*^{c}) - ln(1+x^{c})}]^{-(γ+r+1)},   (8)

while for s = n-r we have

P(X(n) ≥ y* | data) = Σ_{i=0}^{n-r-1} (-1)^{i} C(n-r, i+1) [1 + (i+1)(δc+y(c))^{-1} {ln(1+y*^{c}) - ln(1+x^{c})}]^{-(γ+r+1)}.   (9)

Equations (8) and (9) are the same as those of Evans and Ragab (1983) when the informative prior does not depend on c. Moreover, from (8), the lower 100α% prediction bound for X(r+1) is given explicitly by the value L1, where

L1 = [(1+x^{c}) exp{(δc+y(c))(n-r)^{-1}(α^{-1/(γ+r+1)} - 1)} - 1]^{1/c}.   (10)

It is not possible to obtain an explicit form for the 100α% lower prediction bound for X(r+s) when 2 ≤ s < n-r, and therefore numerical solutions and computer facilities are needed. The upper 100(1-α)% prediction bounds for X(r+1) and X(n) can be obtained similarly.
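Since (10) is explicit, the lower prediction bound for the next failure costs a single expression to evaluate. A small Python sketch (ours; alpha_level is the prediction level α):

```python
import math

def lower_bound_next_failure(alpha_level, xs, n, c, gamma, delta):
    """Eq. (10): lower 100*alpha_level % prediction bound L1 for X(r+1), c known."""
    r = len(xs)
    x_r = xs[-1]
    yc = sum(math.log(1 + x ** c) for x in xs) + (n - r) * math.log(1 + x_r ** c)
    T = (delta * c + yc) / (n - r) * (alpha_level ** (-1.0 / (gamma + r + 1)) - 1.0)
    return ((1 + x_r ** c) * math.exp(T) - 1.0) ** (1.0 / c)
```

Substituting L1 back into (8) returns alpha_level, which makes the formula easy to unit-test.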
3. Prediction Using A Random Sample Size

Suppose that the size of the future sample has a Negative binomial distribution. Referring to Consul (1984) and Gupta and Gupta (1984) and using the p.d.f. (1), the conditional p.d.f. of X(r+s) given k and c is

f(y | k, c) = s n k c Σ_{t=0}^{∞} Σ_{i=0}^{s-1} B(t,i,θ) exp[-k(i+t+1){ln(1+y^{c}) - ln(1+x^{c})}] y*(c) D₁^{-1},   (11)

where

B(t,i,θ) = (-1)^{i} C(s-1, i) (n+t+s-1)! {(t+s)! (n-1)!}^{-1} θ^{t+s} (1-θ)^{n}

and

D₁ = Σ_{j=1}^{n+i+t} (n+i+t-1)! {j! (n+i+t-j)!}^{-1} θ^{j} (1-θ)^{n+i+t-j}.

From (2) and (11), the Bayesian predictive p.d.f. of X(r+s) is
f(y | data) = s n (γ+r+1) Σ_{t=0}^{∞} Σ_{i=0}^{s-1} B(t,i,θ) ∫_{0}^{∞} e^{-αc} c^{β+1} y*(c) x(c) {y(t,i)}^{-(γ+r+2)} dc (I D₁)^{-1},   (12)

where

y(t,i) = 1 + y(c)(δc)^{-1} + (i+t+1){ln(1+y^{c}) - ln(1+x^{c})}.   (12.a)
Evidently the p.d.f. in (12) can be used to obtain the Bayes predictive estimate of X(r+s), the Bayes risk for X(r+s) and a one-sided interval

P(X(r+s) ≥ y* | data) = s n Σ_{t=0}^{∞} Σ_{i=0}^{s-1} B(t,i,θ) (i+t+1)^{-1} ∫_{0}^{∞} e^{-αc} c^{β} x(c) {y*(t,i)}^{-(γ+r+1)} dc (I D₁)^{-1},   (13)

where y*(t,i) is given by (12.a) when X(r+s) = y*. If c is known, one can obtain the predictive distribution

f(y | data) = c s n (γ+r+1) (δc+y(c))^{-1} Σ_{t=0}^{∞} Σ_{i=0}^{s-1} B(t,i,θ) y*(c) {y(t,i,c)}^{-(γ+r+2)} D₁^{-1},   (14)

where

y(t,i,c) = 1 + (i+t+1)(δc+y(c))^{-1} {ln(1+y^{c}) - ln(1+x^{c})},   (14.a)
so that the one-sided Bayesian predictive interval is

P(X(r+s) ≥ y₁ | data) = s n Σ_{t=0}^{∞} Σ_{i=0}^{s-1} B(t,i,θ) (i+t+1)^{-1} {y₁(t,i,c)}^{-(γ+r+1)} D₁^{-1},   (15)

where y₁(t,i,c) is given by (14.a) when X(r+s) = y₁. In particular, putting s = 1 in (15), it follows that the lower and upper 100τ% prediction bounds for X(r+1) are the solutions y₁ of the equation

Σ_{t=0}^{∞} B(t,θ) (t+1)^{-1} {y₁(t,c)}^{-(γ+r+1)} D₁^{-1} = τ,   (16)

where

B(t,θ) = (n+t)! {(t+1)! (n-1)!}^{-1} θ^{t+1} (1-θ)^{n},

y₁(t,c) = 1 + (t+1)(δc+y(c))^{-1} {ln(1+y₁^{c}) - ln(1+x^{c})},

taking τ = α and τ = 1-α respectively. Similarly, the upper and lower 100τ% predictive bounds for X(n) (i.e., s = n-r) are obtained from

Σ_{t=0}^{∞} Σ_{i=0}^{n-r-1} B₁(t,i,θ) (i+t+1)^{-1} {y*(t,i,c)}^{-(γ+r+1)} D₁^{-1} = τ,   (17)

where

B₁(t,i,θ) = (-1)^{i} C(n-r-1, i) (2n+t-r-1)! {(n+t-r)! (n-1)!}^{-1} θ^{n+t-r} (1-θ)^{n},
on taking τ = α and τ = 1-α respectively.

Now suppose that the size of the future sample has a Poisson distribution with parameter λ; then the conditional distribution of X(r+s) given k and c is given by

f(y | k, c) = s k c Σ_{t=0}^{∞} Σ_{i=0}^{s-1} p(t,i,λ) exp[-k(i+t+1){ln(1+y^{c}) - ln(1+x^{c})}] y*(c) D₂^{-1},   (18)

where

p(t,i,λ) = (-1)^{i} C(s-1, i) e^{-λ} λ^{t+s} / (t+s)!

and

D₂ = Σ_{j=1}^{∞} e^{-λ} λ^{j} / j!.

Using (2) and (18), the Bayesian predictive p.d.f. of X(r+s) is

f(y | data) = s (γ+r+1) Σ_{t=0}^{∞} Σ_{i=0}^{s-1} p(t,i,λ) ∫_{0}^{∞} e^{-αc} c^{β+1} x(c) y*(c) {y(t,i)}^{-(γ+r+2)} dc (I D₂)^{-1},   (19)
so that the Bayesian predictive interval is

P(X(r+s) > y | data) = s Σ_{t=0}^{∞} Σ_{i=0}^{s-1} p(t,i,λ) (i+t+1)^{-1} ∫_{0}^{∞} e^{-αc} c^{β} x(c) {y(t,i)}^{-(γ+r+1)} dc (I D₂)^{-1}.   (20)

For the one-parameter GBD (c known), the Bayesian predictive p.d.f. of X(r+s) is

f(y | data) = s c (γ+r+1) (δc+y(c))^{-1} Σ_{t=0}^{∞} Σ_{i=0}^{s-1} p(t,i,λ) y*(c) {y(t,i,c)}^{-(γ+r+2)} D₂^{-1},   (21)

where

y(t,i,c) = 1 + (i+t+1)(δc+y(c))^{-1} {ln(1+y^{c}) - ln(1+x^{c})},   (21.a)
so that the predictive interval is

P(X(r+s) ≥ y* | X(r)) = s Σ_{t=0}^{∞} Σ_{i=0}^{s-1} p(t,i,λ) (i+t+1)^{-1} {y(t,i,c)}^{-(γ+r+1)} D₂^{-1}.   (22)

Putting s = 1 in (22), it follows that the lower and upper 100τ% predictive bounds for X(r+1) are the solutions y* of the equation

Σ_{t=0}^{∞} p(t,λ) (t+1)^{-1} {y(t,c)}^{-(γ+r+1)} D₂^{-1} = τ,   (23)

where p(t,λ) = e^{-λ} λ^{t+1} / (t+1)! and τ is taken to be α and 1-α respectively. Similarly, putting s = n-r in (22), the upper and lower 100τ% prediction bounds for X(n) are the solutions y* of the equation

Σ_{t=0}^{∞} Σ_{i=0}^{n-r-1} p₁(t,i,λ) (i+t+1)^{-1} {y(t,i,c)}^{-(γ+r+1)} D₂^{-1} = τ,   (24)

where

p₁(t,i,λ) = (-1)^{i} C(n-r-1, i) e^{-λ} λ^{n+t-r} / (n+t-r)!

and τ is taken as α and 1-α respectively. The results using the generalized Negative binomial distribution and the generalized Poisson distribution have been discussed and displayed in El-Wekeel (1992).

4. A Numerical Illustration

This section contains a numerical illustration of most of the theoretical results mentioned in sections (2) and (3). Computer facilities, numerical techniques and Papadopoulos' (1978) example (with
censored type II and complete samples) have been used. For the future sample, and for the case when the sample size is a random variable, we used θ = 0.25, 0.5 and 0.75 for the Negative binomial parameter and we took λ = 1, 3 and 5 for the Poisson parameter. Prior distributions of the parameters (including a non-informative prior) have been employed to obtain the Bayes coefficient of variation and tail probabilities for X(8), the predicted failure time of one of the remaining items in the sample of Papadopoulos' (1978) example, where k and c are unknown. An analogous analysis has been carried out when c is known, taking c = 5. The results are presented in Tables (1), (2), (3) and (4).
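As an illustration of the kind of computer calculation these bounds require, the s = 1 bound equation (23) for a Poisson sample size with c known can be solved by bisection, since its left side decreases in y. The sketch below is ours; it assumes the forms p(t,λ) = e^(-λ) λ^(t+1)/(t+1)! and D₂ = 1 - e^(-λ) as reconstructed in section 3, truncates the infinite t-series, and uses illustrative data rather than Papadopoulos' example:

```python
import math

def lhs_eq23(y, xs, n, c, gamma, delta, lam, t_max=200):
    """Left side of eq. (23): sum_t p(t,lam) (t+1)^(-1) {y(t,c)}^(-(gamma+r+1)) / D2."""
    r = len(xs)
    x_r = xs[-1]
    yc = sum(math.log(1 + x ** c) for x in xs) + (n - r) * math.log(1 + x_r ** c)
    T = math.log(1 + y ** c) - math.log(1 + x_r ** c)
    d2 = 1.0 - math.exp(-lam)            # D2 = sum_{j>=1} e^(-lam) lam^j / j!
    total = 0.0
    p = math.exp(-lam) * lam             # p(0, lam) = e^(-lam) lam / 1!
    for t in range(t_max):               # truncation of the infinite t-series
        ytc = 1 + (t + 1) * T / (delta * c + yc)
        total += p / (t + 1) * ytc ** (-(gamma + r + 1))
        p *= lam / (t + 2)               # p(t+1, lam) from p(t, lam)
    return total / d2

def predictive_bound(tau, xs, n, c, gamma, delta, lam):
    """Solve lhs_eq23(y) = tau for y > x(r) by bisection (lhs is decreasing in y)."""
    lo, hi = xs[-1], xs[-1] + 1.0
    while lhs_eq23(hi, xs, n, c, gamma, delta, lam) > tau:
        hi *= 2.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if lhs_eq23(mid, xs, n, c, gamma, delta, lam) > tau:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Taking tau = α gives the lower bound and tau = 1-α the upper bound, provided tau does not exceed the value of the left side at y = x(r).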
Table (1) Coefficients of variation of X(8) using fixed (r = 6, n = 10) and random sample sizes

Prior parameters         FSS       NBSS                          PSS
 α    β    γ    δ                  θ=0.25   θ=0.50   θ=0.75      λ=1      λ=3      λ=5
 1   -1   -1    1        1.2462    1.0561   0.6258   0.1622      0.2538   0.1829   0.1000
 1   -1    7    5        1.2462    0.1892   0.0658   0.2225      0.1299   0.0168   0.1918
 1   -1    9    8        0.8540    0.5892   0.4925   0.2531      0.7329   0.4881   0.1818
 3   -1    7    1        2.6502    0.6060   0.2086   0.3419      0.6461   0.1888   0.1111
 5    7   -1    1        1.9548    0.8355   0.5288   0.2011      0.9104   0.7672   0.1774
 8    9   -1    1        4.8335    0.3474   0.3175   0.2671      0.5712   0.4035   0.0926
 1    7   -1    3        4.1065    1.2209   0.2742   0.2390      0.6658   0.5789   0.2220
 NIP                     1.9752    0.3615   0.1713   0.2718      0.2570   0.2760   0.1589
Table (2) Coefficients of variation of X(8) using fixed (r = 6, n = 10) and random sample sizes (when c = 5)

Prior          FSS       NBSS                          PSS
 γ    δ                  θ=0.25   θ=0.50   θ=0.75      λ=1      λ=3      λ=5
 4    1        1.2411    0.4943   0.4470   0.1475      0.4992   0.1473   0.0348
 4    2        0.6913    0.4351   0.4789   0.1443      0.8156   0.4779   0.0983
 4    3        0.5812    0.4586   0.1182   0.0262      0.4706   0.0952   0.0223
 5    1        1.4223    0.8404   0.4887   0.1417      0.4983   0.2935   0.1287
 5    2        0.6088    1.0348   0.4896   0.5019      0.5155   0.3852   0.1263
 5    3        0.5015    0.4955   0.1132   0.0862      0.5714   0.9248   0.1451
 6    1        1.7542    0.9177   0.4297   0.1486      0.4367   0.6752   0.2821
 6    2        0.8504    0.2579   0.3499   0.0251      0.5053   0.4757   0.1288
 6    3        0.8039    0.7659   0.4935   0.0998      0.4710   0.1419   0.0198
 NIP           10.8938   0.6194   0.5155   0.1583      0.3387   0.1714   0.1030
Table (3) Tail probabilities of X(8) using fixed and random sample sizes

Prior (α, β, γ, δ)    y     FSS      NBSS                         PSS
                                     θ=0.25   θ=0.50   θ=0.75     λ=1      λ=3      λ=5
 (1, -1, -1,  1)     0.6    0.5610   0.3579   0.1294   0.0319     0.4297   0.2311   0.0522
                     0.7    0.5730   0.3731   0.1651   0.0582     0.4461   0.2503   0.0597
                     0.8    0.5813   0.3932   0.1817   0.0696     0.4681   0.2699   0.0804
                     0.9    0.5943   0.4196   0.2345   0.0815     0.4709   0.2812   0.1118
 (1, -1,  7,  5)     0.6    0.3898   0.2689   0.1292   0.0695     0.3890   0.1911   0.0972
                     0.7    0.4283   0.3042   0.1501   0.0503     0.4203   0.2099   0.1183
                     0.8    0.4572   0.3528   0.1809   0.1179     0.4185   0.2518   0.1826
                     0.9    0.6582   0.4134   0.2098   0.1609     0.4236   0.2508   0.2154
 (1, -1,  9,  8)     0.6    0.3906   0.3494   0.0548   0.0789     0.0368   0.0308   0.0879
                     0.7    0.4086   0.3682   0.0518   0.0298     0.1411   0.1234   0.0843
                     0.8    0.5052   0.4539   0.1245   0.0409     0.3482   0.2518   0.1726
                     0.9    0.5534   0.5007   0.1421   0.0612     0.4011   0.2737   0.2636
 (3, -1,  7,  1)     0.6    0.7932   0.3699   0.2576   0.1791     0.2789   0.1703   0.0629
                     0.7    0.8117   0.3823   0.2889   0.1909     0.2906   0.1599   0.0932
                     0.8    0.8213   0.3859   0.2818   0.1869     0.3092   0.1778   0.0913
                     0.9    0.8314   0.3912   0.2983   0.1891     0.3062   0.1942   0.0971
 (5,  7, -1,  1)     0.6    0.5817   0.3123   0.1407   0.0711     0.2511   0.1428   0.0715
                     0.7    0.6002   0.3119   0.1348   0.0816     0.3112   0.1367   0.0689
                     0.8    0.7068   0.3129   0.1502   0.0801     0.3214   0.1513   0.0867
                     0.9    0.7219   0.3308   0.1412   0.0863     0.3182   0.1853   0.1128
 (8,  9, -1,  1)     0.6    0.7113   0.2546   0.1298   0.0448     0.2817   0.0378   0.0139
                     0.7    0.7208   0.2617   0.1355   0.0406     0.3001   0.0603   0.0208
                     0.8    0.7438   0.2898   0.1413   0.0439     0.3004   0.0613   0.0428
                     0.9    0.7611   0.3022   0.1412   0.0417     0.3291   0.0718   0.0514
 (1,  7, -1,  3)     0.6    0.8249   0.3281   0.1808   0.0213     0.3201   0.1753   0.0918
                     0.7    0.8280   0.3299   0.1903   0.0718     0.3291   0.1942   0.1506
                     0.8    0.8377   0.3643   0.2077   0.0723     0.3379   0.2464   0.1549
                     0.9    0.8569   0.3769   0.2157   0.1157     0.4027   0.2603   0.1564
 NIP                 0.6    0.6877   0.2735   0.1704   0.0715     0.2641   0.0658   0.0411
                     0.7    0.7289   0.3128   0.1654   0.0698     0.3209   0.0713   0.0438
                     0.8    0.7606   0.3392   0.1807   0.1013     0.3311   0.1011   0.0612
                     0.9    0.7821   0.3578   0.2041   0.1234     0.3402   0.1097   0.0703
Table (4) Tail probabilities of X(8) using fixed and random sample sizes, when c = 5

Prior (γ, δ)   y     FSS      NBSS                         PSS
                              θ=0.25   θ=0.50   θ=0.75     λ=1      λ=3      λ=5
 (4, 1)       0.6    0.5623   0.2713   0.2586   0.1463     0.2609   0.1512   0.0631
              0.7    0.6431   0.5136   0.4025   0.1849     0.4534   0.2767   0.1025
              0.8    0.7565   0.4327   0.2743   0.2348     0.4569   0.3034   0.1354
              0.9    0.7809   0.5819   0.3062   0.1743     0.6571   0.3469   0.0516
 (4, 2)       0.6    0.5423   0.2786   0.1853   0.1243     0.2914   0.1924   0.1143
              0.7    0.6489   0.3663   0.2902   0.2036     0.3492   0.2473   0.1805
              0.8    0.6543   0.3147   0.2267   0.1178     0.3810   0.2787   0.2072
              0.9    0.6452   0.3623   0.2610   0.1804     0.4028   0.3623   0.1308
 (4, 3)       0.6    0.7196   0.1875   0.0971   0.0851     0.2013   0.1651   0.1042
              0.7    0.5167   0.2817   0.1769   0.1194     0.2991   0.2411   0.136?
              0.8    0.7182   0.3528   0.2443   0.1523     0.3831   0.3022   0.1320
              0.9    0.6562   0.3141   0.1876   0.1164     0.3742   0.3609   0.1201
 (5, 1)       0.6    0.5305   0.2637   0.0653   0.0609     0.2845   0.1118   0.0608
              0.7    0.5073   0.3045   0.2633   0.1559     0.2543   0.1336   0.1001
              0.8    0.6483   0.3282   0.3605   0.2261     0.2511   0.3521   0.1565
              0.9    0.7896   0.5035   0.3998   0.2602     0.4567   0.3051   0.2419
 (5, 2)       0.6    0.6446   0.3095   0.1913   0.0911     0.5909   0.2113   0.02?6
              0.7    0.5089   0.3077   0.1765   0.1398     0.4982   0.2014   0.0923
              0.8    0.7551   0.4029   0.2811   0.1328     0.4101   0.2725   0.1009
              0.9    0.7149   0.6290   0.4123   0.1608     0.5524   0.2948   0.1207
 (5, 3)       0.6    0.7012   0.4809   0.2681   0.1316     0.4978   0.1149   0.0063
              0.7    0.6131   0.4718   0.1892   0.1623     0.4835   0.2229   0.0213
              0.8    0.5741   0.4179   0.2168   0.1718     0.4259   0.1819   0.1169
              0.9    0.7436   0.7003   0.3736   0.1827     0.6609   0.5234   0.1034
 (6, 1)       0.6    0.5419   0.1697   0.1571   0.1569     0.3817   0.2362   0.0864
              0.7    0.4639   0.2889   0.2459   0.2289     0.4372   0.4117   0.0971
              0.8    0.6551   0.3671   0.2523   0.2136     0.5109   0.3792   0.1158
              0.9    0.7856   0.4273   0.2829   0.2126     0.5343   0.3141   0.1019
 (6, 2)       0.6    0.5192   0.4839   0.3881   0.1809     0.4881   0.4709   0.1448
              0.7    0.5781   0.5573   0.4404   0.2368     0.5716   0.4011   0.1869
              0.8    0.7311   0.5794   0.4109   0.2147     0.6364   0.4102   0.1193
              0.9    0.7147   0.6889   0.5609   0.2307     0.5423   0.3923   0.1457
 (6, 3)       0.6    0.443?   0.2691   0.0823   0.0801     0.2955   0.1443   0.0487
              0.7    0.5028   0.3018   0.1991   0.1607     0.2987   0.2117   0.1156
              0.8    0.6054   0.3799   0.2021   0.1809     0.3600   0.1726   0.0978
              0.9    0.6358   0.6309   0.2534   0.1843     0.5379   0.2129   0.0999
 NIP          0.6    0.6529   0.3800   0.3168   0.1199     0.5428   0.1007   0.0723
              0.7    0.5176   0.3383   0.2619   0.1038     0.5162   0.2219   0.0841
              0.8    0.5851   0.5039   0.3700   0.1532     0.5178   0.3114   0.1017
              0.9    0.5347   0.4724   0.4401   0.2397     0.4738   0.4738   0.1696

(A "?" marks a digit that is illegible in the printed table.)
Based on the analysis of the above-mentioned example, we can conclude that the use of a personal computer makes these new results practical and applicable to experiments in which the generalized Burr distribution is the appropriate statistical model. From Tables (1) to (4), we can also draw the following conclusions:

(a) The coefficients of variation for X(8), using a fixed sample size, are greater than those for a random sample size having either the Negative binomial or the Poisson distribution, for both the informative and non-informative priors. This conclusion is valid when c and k are unknown and also when c is known, for both the censored type II and the complete samples.

(b) The comparison of the results for the Negative binomial and the Poisson distributions shows that the coefficients of variation of X(8), based on the Poisson distribution, are smaller than those for the Negative binomial distribution in all cases when c and k are unknown, but they are smaller in only 70% of the cases when c is known. This holds for both the censored type II samples and the complete samples, with informative as well as non-informative priors.

(c) The tail probabilities of X(8), using a fixed sample size, are greater than those for a random sample size in all cells of the table. This is valid for censored type II and complete samples, for informative as well as non-informative priors, and both when c and k are unknown and when c is known.

(d) The tail probabilities of X(8), based on a sample of random size having a Negative binomial distribution, are greater than the corresponding probabilities using a Poisson distribution in 59.3% of the samples when k and c are unknown and in 92% of the samples when c is known.

From the above analysis, we conclude that the use of a random sample size is better than that of a fixed one. This is similar to Lingappaiah's (1986) conclusion for the exponential distribution. In addition, the use of the Poisson distribution is more convenient than that of the Negative binomial distribution. All these comments are based on only one illustration. To arrive at concrete conclusions, it is necessary to investigate this problem numerically, using Monte Carlo simulation and extensive computer calculations.
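Such a Monte Carlo study would start by repeatedly simulating type II censored samples from the GBD. A minimal sketch (ours; the parameter values are illustrative only):

```python
import random

def censored_burr_sample(n, r, k, c, rng):
    """Draw n GBD(k, c) lifetimes by inverse CDF, F(x) = 1 - (1 + x^c)^(-k),
    and keep the first r order statistics (a type II censored sample)."""
    draws = sorted(((1.0 - rng.random()) ** (-1.0 / k) - 1.0) ** (1.0 / c)
                   for _ in range(n))
    return draws[:r]

# One replication of the experiment analysed above: r = 6 failures out of n = 10.
sample = censored_burr_sample(10, 6, 2.0, 5.0, random.Random(0))
```

Each replication would then be fed through the predictive formulae of sections 2 and 3 to compare the fixed- and random-sample-size set-ups over many repetitions.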
REFERENCES

[1] Ashour, S. K. and El-Wekeel, M. A. M. H. (1992). Estimation of the unknown parameters of the generalized Burr distribution. (Under publication.)
[2] Consul, P. C. (1984). On the distribution of order statistics for a random sample size. Statistica Neerlandica, 38, 249-256.
[3] El-Wekeel, M. A. M. H. (1990). On the Generalized Burr Distribution. Ph.D. Thesis, Institute of Statistical Studies & Research, Cairo University, Egypt.
[4] Evans, I. G. and Ragab, A. S. (1983). Bayesian inference given a type II censored sample from a Burr distribution. Communications in Statistics - Theory and Methods, 12, 1569-1580.
[5] Gupta, D. and Gupta, R. C. (1984). On the distribution of order statistics for a random sample size. Statistica Neerlandica, 38, 13-18.
[6] Lingappaiah, G. S. (1986). Bayesian prediction in exponential life testing when sample size is a random variable. IEEE Transactions on Reliability, 35, 106-110.
[7] Papadopoulos, A. S. (1978). The Burr distribution as a failure model from a Bayesian approach. IEEE Transactions on Reliability, R-27, 369-371.