Estimation of the inverted exponentiated Rayleigh Distribution Based on Adaptive Type II Progressive Hybrid Censored Sample

Hanieh Panahi(1,*) and Nasrin Moradi(2)

(1) Department of Mathematics and Statistics, Lahijan Branch, Islamic Azad University, Lahijan, Iran.
(2) Department of Statistics, Razi University, Kermanshah, Iran.
(*) Corresponding author: [email protected]
Abstract
In this paper, the problem of estimating the parameters of the inverted exponentiated Rayleigh distribution under an adaptive Type II progressive hybrid censored sample is discussed. The maximum likelihood estimators (MLEs) of the unknown parameters are developed. The asymptotic normality of the MLEs is used to construct approximate confidence intervals for the parameters. By applying the Bayesian approach, estimators of the unknown parameters are derived under symmetric and asymmetric loss functions. The Bayesian estimates are evaluated by using Lindley's approximation as well as the Markov chain Monte Carlo (MCMC) technique together with the Metropolis-Hastings algorithm. The MCMC samples are further utilized to construct Bayesian intervals for the unknown parameters. Monte Carlo simulations are implemented and observations are given. Finally, data on the maximum spreading diameter of nano-droplets impacting hydrophobic surfaces are analyzed for illustrative purposes.
Keywords: Adaptive Type II progressive hybrid censoring; Diameter of nano-droplet; Gibbs sampling; Inverted exponentiated Rayleigh distribution; Metropolis-Hastings; Symmetric and asymmetric loss functions.
1. Introduction
Several statistical distributions are used in the analysis of experimental data and in problems related to the modeling of failure processes. The inverted exponentiated Rayleigh distribution (IERD) is a particular member of the general class of inverse exponentiated distributions introduced by Ghitany et al. (2014). The probability density function of the IERD is given by
\[
f(x;\alpha ,\lambda )=2\alpha \lambda x^{-3}\exp(-\lambda /x^{2})\left[1-\exp(-\lambda /x^{2})\right]^{\alpha -1};\qquad x>0,\ \alpha >0,\ \lambda >0, \tag{1}
\]
and the corresponding cumulative distribution function is
\[
F(x;\alpha ,\lambda )=1-\left[1-\exp(-\lambda /x^{2})\right]^{\alpha };\qquad x>0,\ \alpha >0,\ \lambda >0. \tag{2}
\]
The hazard rate function can be written as
\[
h(x;\alpha ,\lambda )=2\alpha \lambda x^{-3}\exp(-\lambda /x^{2})\left[1-\exp(-\lambda /x^{2})\right]^{-1};\qquad x>0.
\]
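For quick numerical checks, the density, distribution and hazard functions in (1)-(2) can be coded directly. The following is a minimal R sketch; the function names dier, pier and hier are ours and are not part of the paper.

# Minimal R sketch of the IERD(alpha, lambda) density, CDF and hazard in (1)-(2).
dier <- function(x, alpha, lambda)
  2 * alpha * lambda * x^(-3) * exp(-lambda / x^2) *
    (1 - exp(-lambda / x^2))^(alpha - 1)

pier <- function(x, alpha, lambda)
  1 - (1 - exp(-lambda / x^2))^alpha

hier <- function(x, alpha, lambda)            # hazard = f / (1 - F)
  dier(x, alpha, lambda) / (1 - pier(x, alpha, lambda))

# The hazard rises to a maximum and then decreases (upside-down bathtub shape).
curve(hier(x, alpha = 2, lambda = 1), from = 0.1, to = 4, ylab = "h(x)")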
Here α, λ > 0 are the shape and scale parameters, respectively. It is observed that the hazard rate of the IERD is non-monotone (see Figure 1). As pointed out by Kundu and Howlader (2010), in many practical situations it is often known a priori that the hazard rate cannot be monotone. The IERD is therefore a more appropriate distribution than the exponentiated Rayleigh distribution (ERD), because the ERD does not provide a satisfactory parametric fit when the data indicate a non-monotone hazard rate function. Hence, if the empirical study suggests that the hazard rate function of the underlying distribution is non-monotone, the IERD may be used to analyze such data sets. In fact, the hazard rate of the IERD shows behavior similar to that of some well-known statistical distributions, such as the lognormal, inverse Weibull and generalized inverted exponential distributions, so the applications of the IERD are widespread. In particular, it can effectively be utilized to model phenomena in which the components under study exhibit early failure behavior, such as mechanical or electrical devices. In recent years, the ERD and IERD have gained some attention among researchers and interesting results have been obtained. For example, Raqab and Madi (2009) studied Bayesian estimation and prediction for the ERD using informative and noninformative priors. Mahmoud and Ghazal (2017) considered different point and interval estimation methods for the ERD under generalized Type II hybrid censored data. Kayal et al. (2018) discussed maximum likelihood and Bayesian estimation of the IERD based on a hybrid censoring scheme.
Moreover, in reliability and engineering experiments, censoring is adopted in order to save time and reduce the number of failed items. Type I and Type II censoring are the two most popular censoring schemes, and hybrid censoring is a mixture of these two schemes. Conventional Type I, Type II and hybrid censoring schemes have been studied in detail by many authors; see Banerjee and Kundu (2008), Gupta and Singh (2012) and Balakrishnan and Kundu (2013). One drawback of these schemes is that they do not allow removal of units from the experiment at any time point other than the terminal point. Because of this lack of flexibility, the progressive censoring scheme, which enables us to save time and cost of the experiment, was proposed by Aggarwala and Balakrishnan (1998). The disadvantage of the Type II progressive censoring scheme is that the duration of the experiment can be very long. To address this problem, Kundu and Joarder (2006) and Childs et al. (2008) combined the concepts of progressive and hybrid censoring to develop the progressive hybrid censoring scheme. Unfortunately, under this censoring scheme the statistical inference procedures may not be applicable or may have low efficiency. To overcome this drawback, Ng et al. (2009) introduced the adaptive Type II progressive hybrid (Adaptive-IIPH) censoring scheme, which has advantages in terms of saving total test time and increasing the efficiency of the statistical analysis.
To the best of our knowledge, the problem of estimation for the inverted exponentiated Rayleigh distribution (IERD) has not yet been studied under the Adaptive-IIPH censoring scheme. The main aim of this paper is twofold. First, we consider the maximum likelihood estimators and the Bayesian inference of the unknown parameters under different loss functions when the data are Adaptive-IIPH censored.
The problem of choosing the initial value for the MLEs is solved here by using the graphical method proposed by Balakrishnan and Kateri (2008). Asymptotic confidence intervals are constructed using the normality property of the MLEs. The Bayes estimators cannot be evaluated in explicit form, so Lindley's approximation and the Markov chain Monte Carlo (MCMC) technique together with the Metropolis-Hastings (M-H) sampling procedure are proposed to compute the Bayes estimates and the associated MCMC intervals. Second, due to the practicality of the IERD in engineering data analysis, we consider the data on the maximum spreading diameter of nano-droplets on hydrophobic surfaces for illustrative purposes.
The rest of the article is organized as follows. In Section 2, we provide the model description. In Section 3, we obtain the maximum likelihood estimates of the unknown parameters and construct the approximate confidence intervals using the normality property of the corresponding MLEs. In Section 4, we discuss Lindley's approximation and the MCMC technique for computing the Bayes estimates; the MCMC samples are further utilized to construct MCMC intervals for the unknown parameters. Section 5 presents a simulation study to compare the different estimators. Finally, the maximum spreading diameter of nano-droplet data set is analyzed in Section 6 to illustrate the proposed methods of estimation. In the last section, we draw some conclusions.
2. Model Description
Ng et al. (2009) introduced the Adaptive-IIPH censoring scheme, in which the experiment is allowed to run beyond an ideal total test time T. In this censoring scheme, the effective sample size m is fixed in advance and the progressive censoring scheme (R_1, R_2, ..., R_m) is pre-specified, but the values of some of the R_i may change during the experiment. If X_{m:m:n} occurs before time T, the experiment proceeds with the pre-specified progressive censoring scheme (R_1, R_2, ..., R_m) and stops at X_{m:m:n}; in this case we have the usual Type II progressive censoring scheme. Otherwise, if only J failures occur before time T, i.e. X_{J:m:n} < T < X_{J+1:m:n} for some J < m, then we do not withdraw any surviving unit from the experiment after T, by setting R_{J+1} = R_{J+2} = ... = R_{m-1} = 0 and R_m = n - m - \sum_{i=1}^{J} R_i. This setting enables us to obtain the effective number of failures m and ensures that the total test time is not too far from T (a small numerical illustration is given at the end of this section).
In recent years the Adaptive-IIPH censoring scheme has gained attention among researchers. For example, Mahmoud et al. (2013) considered the maximum likelihood and Bayes estimates of the unknown parameters of the Pareto distribution. Ismail (2014) studied statistical inference for the Weibull distribution under a step-stress partially accelerated life test model. AL Sobhi and Soliman (2015) investigated different estimation methods for the two-parameter exponentiated Weibull distribution. Nassar and Abo-Kasem (2017) discussed point and interval estimation for the inverse Weibull distribution, and Nassar et al. (2018) studied the maximum likelihood and Bayes estimators of the unknown parameters of the Weibull distribution.
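As a small illustration of how the scheme adapts, the following R sketch (the function name adapt_scheme and the example values are ours, not from the paper) returns the removal pattern actually applied when only J failures are observed before T.

# Illustrative sketch: modify the pre-specified scheme when J failures occur before T.
adapt_scheme <- function(R, n, m, J) {
  stopifnot(length(R) == m, J <= m)
  if (J >= m) return(R)                      # X_{m:m:n} < T: usual progressive Type II
  R_adj <- c(R[seq_len(J)], rep(0, m - J))   # no withdrawals after the J-th failure
  R_adj[m] <- n - m - sum(R[seq_len(J)])     # remaining units removed at X_{m:m:n}
  R_adj
}

# Example: n = 10, m = 5, planned R = (1,1,1,1,1); only J = 3 failures before T.
adapt_scheme(R = c(1, 1, 1, 1, 1), n = 10, m = 5, J = 3)   # returns 1 1 1 0 2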
3. Maximum Likelihood Estimation
This section is concerned with obtaining the MLEs of the unknown parameters of the IERD based on data observed under the proposed censoring scheme. Let T denote the ideal total test time and let
\[
x_{1:m:n}<x_{2:m:n}<\cdots<x_{J:m:n}<T<x_{J+1:m:n}<\cdots<x_{m:m:n}
\]
denote the Adaptive-IIPH censored sample from IERD(α, λ) with censoring scheme (R_1, ..., R_J, 0, ..., 0, n - m - \sum_{i=1}^{J} R_i). The corresponding likelihood function based on these data can be written as
\[
L(\alpha ,\lambda \mid J=j)=c_{j}\prod_{i=1}^{m}2\alpha \lambda x_{i:m:n}^{-3}\exp(-\lambda /x_{i:m:n}^{2})\left[1-\exp(-\lambda /x_{i:m:n}^{2})\right]^{\alpha -1}
\prod_{i=1}^{j}\left[1-\exp(-\lambda /x_{i:m:n}^{2})\right]^{\alpha R_{i}}
\left[1-\exp(-\lambda /x_{m:m:n}^{2})\right]^{\alpha \left(n-m-\sum_{i=1}^{j}R_{i}\right)}, \tag{3}
\]
where
\[
c_{j}=n(n-R_{1}-1)(n-R_{1}-R_{2}-2)\cdots\Bigl(n-\sum_{i=1}^{j}R_{i}-j\Bigr)\Bigl(n-\sum_{i=1}^{j}R_{i}-j-1\Bigr)\cdots\Bigl(n-\sum_{i=1}^{j}R_{i}-m+1\Bigr).
\]
The log-likelihood function, ignoring the additive constant, is given by
\[
l(\alpha ,\lambda \mid J=j)=m\ln \alpha +m\ln \lambda -\lambda \sum_{i=1}^{m}x_{i:m:n}^{-2}+(\alpha -1)\sum_{i=1}^{m}\ln \bigl(1-\exp(-\lambda /x_{i:m:n}^{2})\bigr)
+\alpha \sum_{i=1}^{j}R_{i}\ln \bigl(1-\exp(-\lambda /x_{i:m:n}^{2})\bigr)+\alpha \Bigl(n-m-\sum_{i=1}^{j}R_{i}\Bigr)\ln \bigl(1-\exp(-\lambda /x_{m:m:n}^{2})\bigr). \tag{4}
\]
For computing the maximum likelihood estimates of α and λ, we differentiate l(α, λ | J = j) partially with respect to α and λ and equate the derivatives to zero. The resulting equations are
\[
\frac{\partial l(\alpha ,\lambda \mid J=j)}{\partial \alpha }=\frac{m}{\alpha }+\sum_{i=1}^{m}\ln \bigl(1-\exp(-\lambda /x_{i:m:n}^{2})\bigr)+\sum_{i=1}^{j}R_{i}\ln \bigl(1-\exp(-\lambda /x_{i:m:n}^{2})\bigr)
+\Bigl(n-m-\sum_{i=1}^{j}R_{i}\Bigr)\ln \bigl(1-\exp(-\lambda /x_{m:m:n}^{2})\bigr)=0, \tag{5}
\]
and
\[
\frac{\partial l(\alpha ,\lambda \mid J=j)}{\partial \lambda }=\frac{m}{\lambda }-\sum_{i=1}^{m}x_{i:m:n}^{-2}+(\alpha -1)\sum_{i=1}^{m}\frac{\exp(-\lambda /x_{i:m:n}^{2})}{x_{i:m:n}^{2}\bigl(1-\exp(-\lambda /x_{i:m:n}^{2})\bigr)}
+\alpha \sum_{i=1}^{j}\frac{R_{i}\exp(-\lambda /x_{i:m:n}^{2})}{x_{i:m:n}^{2}\bigl(1-\exp(-\lambda /x_{i:m:n}^{2})\bigr)}
+\alpha \Bigl(n-m-\sum_{i=1}^{j}R_{i}\Bigr)\frac{\exp(-\lambda /x_{m:m:n}^{2})}{x_{m:m:n}^{2}\bigl(1-\exp(-\lambda /x_{m:m:n}^{2})\bigr)}=0. \tag{6}
\]
The maximum likelihood estimate of the parameter α, say α̂_ML, can be obtained from (5) as
\[
\hat\alpha_{ML}=\frac{-m}{\sum_{i=1}^{m}\ln \xi_{i:m:n}+\sum_{i=1}^{j}R_{i}\ln \xi_{i:m:n}+\bigl(n-m-\sum_{i=1}^{j}R_{i}\bigr)\ln \xi_{m:m:n}}, \tag{7}
\]
where ξ_{i:m:n} = 1 - exp(-λ/x_{i:m:n}^2) and ξ_{m:m:n} = 1 - exp(-λ/x_{m:m:n}^2). Also, λ̂_ML can be computed using the following equation:
\[
\lambda =W^{-1}(\lambda ), \tag{8}
\]
where
\[
W(\lambda )=\frac{1}{m}\Biggl[\sum_{i=1}^{m}x_{i:m:n}^{-2}-(\hat\alpha_{ML}-1)\sum_{i=1}^{m}\frac{\eta_{i:m:n}}{x_{i:m:n}^{2}\,\xi_{i:m:n}}
-\hat\alpha_{ML}\Biggl(\sum_{i=1}^{j}\frac{R_{i}\,\eta_{i:m:n}}{x_{i:m:n}^{2}\,\xi_{i:m:n}}+\Bigl(n-m-\sum_{i=1}^{j}R_{i}\Bigr)\frac{\eta_{m:m:n}}{x_{m:m:n}^{2}\,\xi_{m:m:n}}\Biggr)\Biggr].
\]
Here η_{i:m:n} = exp(-λ/x_{i:m:n}^2) and η_{m:m:n} = exp(-λ/x_{m:m:n}^2). We use a simple iterative scheme to solve (8): start with an initial guess of λ, say λ_0, obtain λ_1 = W^{-1}(λ_0) and, proceeding in this way, obtain λ_{i+1} = W^{-1}(λ_i). Stop the iterative procedure when |λ_{i+1} - λ_i| < ε, some pre-assigned tolerance limit.
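A minimal R sketch of this profile/fixed-point iteration is given below. It assumes the m observed failure times and the adjusted removal vector (R_1, ..., R_J, 0, ..., 0, n - m - \sum_{i<=J} R_i) are available; the function name mle_ier and its interface are ours.

# Hedged R sketch of the iteration in (7)-(8); 'x' are the observed times,
# 'Radj' the adjusted removal vector.
mle_ier <- function(x, Radj, lambda0 = 1, tol = 1e-6, maxit = 500) {
  m <- length(x)
  for (it in seq_len(maxit)) {
    xi  <- 1 - exp(-lambda0 / x^2)                 # xi_{i:m:n}
    eta <- exp(-lambda0 / x^2)                     # eta_{i:m:n}
    alpha <- -m / sum((1 + Radj) * log(xi))        # alpha_hat(lambda), eq. (7)
    W <- (sum(1 / x^2) - (alpha - 1) * sum(eta / (x^2 * xi)) -
          alpha * sum(Radj * eta / (x^2 * xi))) / m
    lambda1 <- 1 / W                               # lambda = W^{-1}(lambda), eq. (8)
    if (abs(lambda1 - lambda0) < tol) break
    lambda0 <- lambda1
  }
  c(alpha = alpha, lambda = lambda1)
}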
3.1. Approximate Confidence Intervals
In this subsection, we compute the approximate confidence intervals of α and λ using the asymptotic normal distribution of the corresponding MLEs. Under the usual regularity conditions, the MLEs (α̂_ML, λ̂_ML) are approximately bivariate normal with mean (α, λ) and covariance matrix I^{-1}(α, λ). Therefore, (α̂_ML, λ̂_ML) ~ N((α, λ), Î^{-1}(α̂_ML, λ̂_ML)), where Î(α̂_ML, λ̂_ML) is the observed information matrix
\[
\hat I(\hat\alpha_{ML},\hat\lambda_{ML})=-\begin{pmatrix}
\dfrac{\partial^{2}l(\alpha ,\lambda \mid J=j)}{\partial \alpha^{2}} & \dfrac{\partial^{2}l(\alpha ,\lambda \mid J=j)}{\partial \alpha \,\partial \lambda }\\[2mm]
\dfrac{\partial^{2}l(\alpha ,\lambda \mid J=j)}{\partial \lambda \,\partial \alpha } & \dfrac{\partial^{2}l(\alpha ,\lambda \mid J=j)}{\partial \lambda^{2}}
\end{pmatrix}\Bigg|_{(\alpha ,\lambda )=(\hat\alpha_{ML},\hat\lambda_{ML})}.
\]
Using the log-likelihood function in Equation (4), we have
\[
\frac{\partial^{2}l(\alpha ,\lambda \mid J=j)}{\partial \alpha^{2}}=-\frac{m}{\alpha^{2}},
\]
\[
\frac{\partial^{2}l(\alpha ,\lambda \mid J=j)}{\partial \alpha \,\partial \lambda }=\sum_{i=1}^{m}\frac{T_{i:m:n}}{x_{i:m:n}^{2}\,\xi_{i:m:n}}+\sum_{i=1}^{j}\frac{R_{i}T_{i:m:n}}{x_{i:m:n}^{2}\,\xi_{i:m:n}}+\Bigl(n-m-\sum_{i=1}^{j}R_{i}\Bigr)\frac{T_{m:m:n}}{x_{m:m:n}^{2}\,\xi_{m:m:n}},
\]
and
\[
\frac{\partial^{2}l(\alpha ,\lambda \mid J=j)}{\partial \lambda^{2}}=-\frac{m}{\lambda^{2}}-(\alpha -1)\sum_{i=1}^{m}\frac{T_{i:m:n}}{x_{i:m:n}^{4}\,\xi_{i:m:n}^{2}}-\alpha \sum_{i=1}^{j}\frac{R_{i}T_{i:m:n}}{x_{i:m:n}^{4}\,\xi_{i:m:n}^{2}}-\alpha \Bigl(n-m-\sum_{i=1}^{j}R_{i}\Bigr)\frac{T_{m:m:n}}{x_{m:m:n}^{4}\,\xi_{m:m:n}^{2}},
\]
where T_{i:m:n} = exp(-λ/x_{i:m:n}^2) and T_{m:m:n} = exp(-λ/x_{m:m:n}^2). Hence,
\[
\hat I(\hat\alpha_{ML},\hat\lambda_{ML})=\begin{pmatrix}
\dfrac{m}{\hat\alpha^{2}} & -\Bigl[\sum\limits_{i=1}^{m}Q_{i:m:n}^{*}+\sum\limits_{i=1}^{j}R_{i}Q_{i:m:n}^{*}+\bigl(n-m-\sum\limits_{i=1}^{j}R_{i}\bigr)Q_{m:m:n}^{*}\Bigr]\\[3mm]
-\Bigl[\sum\limits_{i=1}^{m}Q_{i:m:n}^{*}+\sum\limits_{i=1}^{j}R_{i}Q_{i:m:n}^{*}+\bigl(n-m-\sum\limits_{i=1}^{j}R_{i}\bigr)Q_{m:m:n}^{*}\Bigr] &
\dfrac{m}{\hat\lambda^{2}}+(\hat\alpha -1)\sum\limits_{i=1}^{m}Q_{i:m:n}^{**}+\hat\alpha \Bigl[\sum\limits_{i=1}^{j}R_{i}Q_{i:m:n}^{**}+\bigl(n-m-\sum\limits_{i=1}^{j}R_{i}\bigr)Q_{m:m:n}^{**}\Bigr]
\end{pmatrix},
\]
where
\[
Q_{i:m:n}^{*}=\frac{T_{i:m:n}}{x_{i:m:n}^{2}\,\xi_{i:m:n}}\bigg|_{(\alpha ,\lambda )=(\hat\alpha_{ML},\hat\lambda_{ML})},\qquad
Q_{i:m:n}^{**}=\frac{T_{i:m:n}}{x_{i:m:n}^{4}\,\xi_{i:m:n}^{2}}\bigg|_{(\alpha ,\lambda )=(\hat\alpha_{ML},\hat\lambda_{ML})},
\]
and Q*_{m:m:n}, Q**_{m:m:n} are defined analogously with x_{m:m:n}. So, the approximate 100(1 - γ)% confidence intervals of α and λ can be constructed as
\[
\Bigl(\hat\alpha_{ML}-z_{\gamma /2}\sqrt{\mathrm{Var}(\hat\alpha_{ML})},\ \hat\alpha_{ML}+z_{\gamma /2}\sqrt{\mathrm{Var}(\hat\alpha_{ML})}\Bigr)\quad\text{and}\quad
\Bigl(\hat\lambda_{ML}-z_{\gamma /2}\sqrt{\mathrm{Var}(\hat\lambda_{ML})},\ \hat\lambda_{ML}+z_{\gamma /2}\sqrt{\mathrm{Var}(\hat\lambda_{ML})}\Bigr), \tag{9}
\]
where Var(α̂_ML) and Var(λ̂_ML) are the entries on the main diagonal of Î^{-1}(α̂_ML, λ̂_ML), and z_{γ/2} is the upper γ/2 percentile of the standard normal distribution.
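The computation of (9) can be sketched in R as follows (a sketch under the same notation; the function name ci_ier and its interface are ours).

# Hedged sketch of the observed information matrix and the intervals in (9);
# builds on mle_ier() above.
ci_ier <- function(x, Radj, alpha, lambda, gamma = 0.05) {
  xi  <- 1 - exp(-lambda / x^2)
  eta <- exp(-lambda / x^2)
  m   <- length(x)
  Qs  <- eta / (x^2 * xi)        # Q*_{i:m:n}
  Qss <- eta / (x^4 * xi^2)      # Q**_{i:m:n}
  I11 <- m / alpha^2
  I12 <- -sum((1 + Radj) * Qs)
  I22 <- m / lambda^2 + (alpha - 1) * sum(Qss) + alpha * sum(Radj * Qss)
  Iobs <- matrix(c(I11, I12, I12, I22), 2, 2)
  V <- diag(solve(Iobs))         # asymptotic variances of (alpha_hat, lambda_hat)
  z <- qnorm(1 - gamma / 2)
  rbind(alpha  = alpha  + c(-1, 1) * z * sqrt(V[1]),
        lambda = lambda + c(-1, 1) * z * sqrt(V[2]))
}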
4. Bayesian Estimation
In this section, we evaluate the Bayes estimates of the unknown parameters α and λ of the IERD(α, λ). Independent prior distributions are considered for α and λ, namely Gamma(a, b) and Gamma(c, d), respectively. So, the joint prior distribution of α and λ is of the form
\[
\pi(\alpha ,\lambda )\propto \alpha^{a-1}e^{-b\alpha }\,\lambda^{c-1}e^{-d\lambda };\qquad \alpha >0,\ \lambda >0,\ a>0,\ b>0,\ c>0,\ d>0. \tag{10}
\]
It follows that the joint posterior density of α and λ can be written as
\[
\pi(\alpha ,\lambda \mid \text{data})=\frac{\alpha^{m+a-1}\lambda^{m+c-1}e^{-b\alpha -d\lambda }\prod\limits_{i=1}^{m}x_{i:m:n}^{-3}e^{-\lambda /x_{i:m:n}^{2}}\,\xi_{i:m:n}^{\alpha -1}\prod\limits_{i=1}^{j}\xi_{i:m:n}^{\alpha R_{i}}\,\xi_{m:m:n}^{\alpha \left(n-m-\sum_{i=1}^{j}R_{i}\right)}}
{\int_{0}^{\infty }\!\!\int_{0}^{\infty }\alpha^{m+a-1}\lambda^{m+c-1}e^{-b\alpha -d\lambda }\prod\limits_{i=1}^{m}x_{i:m:n}^{-3}e^{-\lambda /x_{i:m:n}^{2}}\,\xi_{i:m:n}^{\alpha -1}\prod\limits_{i=1}^{j}\xi_{i:m:n}^{\alpha R_{i}}\,\xi_{m:m:n}^{\alpha \left(n-m-\sum_{i=1}^{j}R_{i}\right)}\,d\alpha \,d\lambda }, \tag{11}
\]
where, as before, ξ_{i:m:n} = 1 - exp(-λ/x_{i:m:n}^2) and ξ_{m:m:n} = 1 - exp(-λ/x_{m:m:n}^2).
Since the loss function plays an important role in Bayesian inference, we consider both a symmetric and an asymmetric loss function:
Squared error (SE) loss function: L(θ, θ̂) = (θ̂ - θ)^2.
Linear-exponential (LINEX) loss function: L(θ̂, θ) = e^{s(θ̂ - θ)} - s(θ̂ - θ) - 1, s ≠ 0.
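For completeness, we recall the standard argument (not specific to the present model) that yields the logarithmic form of the LINEX Bayes estimate used below: minimizing the posterior expected LINEX loss over θ̂ gives
\[
E\left[L(\hat\theta,\theta)\mid\text{data}\right]=e^{s\hat\theta}\,E\left[e^{-s\theta}\mid\text{data}\right]-s\bigl(\hat\theta-E[\theta\mid\text{data}]\bigr)-1,
\]
\[
\frac{\partial}{\partial\hat\theta}E\left[L(\hat\theta,\theta)\mid\text{data}\right]=s\,e^{s\hat\theta}\,E\left[e^{-s\theta}\mid\text{data}\right]-s=0
\;\Longrightarrow\;
\hat\theta_{LI}=-\frac{1}{s}\log E\left[e^{-s\theta}\mid\text{data}\right].
\]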
So, based on Equation (11), the Bayes estimate of any function of α and λ, say g(α, λ), under the squared error (SE) and LINEX (LI) loss functions can, after simplification, be written as
\[
\hat g_{SE}=\frac{\int_{0}^{\infty }\!\!\int_{0}^{\infty }g(\alpha ,\lambda )\,\tilde\pi(\alpha ,\lambda )\,d\alpha \,d\lambda }{\int_{0}^{\infty }\!\!\int_{0}^{\infty }\tilde\pi(\alpha ,\lambda )\,d\alpha \,d\lambda } \tag{12}
\]
and
\[
\hat g_{LI}=-\frac{1}{s}\log\Biggl[\frac{\int_{0}^{\infty }\!\!\int_{0}^{\infty }e^{-s\,g(\alpha ,\lambda )}\,\tilde\pi(\alpha ,\lambda )\,d\alpha \,d\lambda }{\int_{0}^{\infty }\!\!\int_{0}^{\infty }\tilde\pi(\alpha ,\lambda )\,d\alpha \,d\lambda }\Biggr], \tag{13}
\]
respectively, where \tilde\pi(\alpha ,\lambda ) denotes the numerator of (11). These ratios of integrals cannot be obtained in closed form, so we adopt two different approximation procedures for the Bayes estimates in the following subsections.
4.1. Lindley's Approximation
In this subsection, we compute approximate Bayes estimates of α and λ using Lindley's approximation technique (Lindley, 1980). For the two-parameter case (θ_1, θ_2) = (α, λ), Lindley's approximation of the posterior expectation of a function g(α, λ) is
\[
E\bigl[g(\alpha ,\lambda )\mid \text{data}\bigr]\approx g+\frac{1}{2}\sum_{i=1}^{2}\sum_{k=1}^{2}\bigl(g_{ik}+2g_{i}\rho_{k}\bigr)\sigma_{ik}
+\frac{1}{2}\sum_{i=1}^{2}\sum_{k=1}^{2}\sum_{p=1}^{2}\sum_{q=1}^{2}L_{ikp}\,\sigma_{ik}\,\sigma_{pq}\,g_{q},
\]
where all quantities are evaluated at the MLEs (α̂, λ̂); g_i and g_{ik} denote the first and second partial derivatives of g, ρ_k are the partial derivatives of the log-prior ρ(α, λ) = ln π(α, λ), L_{ikp} are the third partial derivatives of the log-likelihood (4), and σ_{ik} is the (i, k)th element of Î^{-1}(α̂, λ̂) given in Section 3.1. For the gamma priors in (10),
\[
\rho_{1}=\frac{\partial \rho }{\partial \alpha }=\frac{a-1}{\alpha }-b,\qquad
\rho_{2}=\frac{\partial \rho }{\partial \lambda }=\frac{c-1}{\lambda }-d.
\]
Writing L̂_{rs} for the third derivative of l(α, λ | J = j) taken r times with respect to α and s times with respect to λ, evaluated at (α̂, λ̂) (so that L_{111} = L̂_{30}, L_{112} = L_{121} = L_{211} = L̂_{21}, L_{122} = L_{212} = L_{221} = L̂_{12} and L_{222} = L̂_{03}), we have
\[
\hat L_{30}=\frac{2m}{\hat\alpha^{3}},\qquad \hat L_{21}=0,
\]
\[
\hat L_{12}=-\Biggl[\sum_{i=1}^{m}\frac{T_{i:m:n}}{x_{i:m:n}^{4}\,\xi_{i:m:n}^{2}}+\sum_{i=1}^{j}\frac{R_{i}T_{i:m:n}}{x_{i:m:n}^{4}\,\xi_{i:m:n}^{2}}+\Bigl(n-m-\sum_{i=1}^{j}R_{i}\Bigr)\frac{T_{m:m:n}}{x_{m:m:n}^{4}\,\xi_{m:m:n}^{2}}\Biggr]_{(\hat\alpha ,\hat\lambda )},
\]
\[
\hat L_{03}=\frac{2m}{\hat\lambda^{3}}+\Biggl[(\hat\alpha -1)\sum_{i=1}^{m}\frac{T_{i:m:n}(1+T_{i:m:n})}{x_{i:m:n}^{6}\,\xi_{i:m:n}^{3}}+\hat\alpha \sum_{i=1}^{j}\frac{R_{i}T_{i:m:n}(1+T_{i:m:n})}{x_{i:m:n}^{6}\,\xi_{i:m:n}^{3}}+\hat\alpha \Bigl(n-m-\sum_{i=1}^{j}R_{i}\Bigr)\frac{T_{m:m:n}(1+T_{m:m:n})}{x_{m:m:n}^{6}\,\xi_{m:m:n}^{3}}\Biggr]_{(\hat\alpha ,\hat\lambda )},
\]
where α̂ and λ̂ are the MLEs of α and λ, and T_{i:m:n}, ξ_{i:m:n} are as defined in Section 3.1. The approximate Bayes estimate of α under the SE loss function, α̂_{Lindley-SE}, is obtained by applying the above expression with g(α, λ) = α (so g_1 = 1 and all other derivatives of g vanish), and λ̂_{Lindley-SE} is obtained analogously with g(α, λ) = λ. Under the LINEX loss function, the approximate Bayes estimates are obtained by applying the approximation to g(α, λ) = e^{-sα} and g(α, λ) = e^{-sλ}, respectively, and then setting
\[
\hat\alpha_{\text{Lindley-}LI}=-\frac{1}{s}\log\Bigl(\widehat{E\bigl[e^{-s\alpha }\mid \text{data}\bigr]}\Bigr),\qquad
\hat\lambda_{\text{Lindley-}LI}=-\frac{1}{s}\log\Bigl(\widehat{E\bigl[e^{-s\lambda }\mid \text{data}\bigr]}\Bigr).
\]
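A generic R sketch of this two-parameter Lindley approximation is given below. It is written under the notation and analytic derivatives reconstructed in this subsection, with the adjusted removal vector Radj used earlier; the function name lindley_ier and its interface are ours, not the authors' code.

# Hedged sketch: Lindley approximation of E[g(alpha, lambda) | data] at the MLEs.
lindley_ier <- function(x, Radj, alpha, lambda, a, b, c, d,
                        g, g1, g2, g11, g12, g22) {
  m   <- length(x)
  xi  <- 1 - exp(-lambda / x^2)                  # xi_{i:m:n}
  eta <- exp(-lambda / x^2)                      # T_{i:m:n}
  # entries of the observed information matrix (Section 3.1)
  psi1 <- m / alpha^2
  psi2 <- m / lambda^2 + sum(((alpha - 1) + alpha * Radj) * eta / (x^4 * xi^2))
  psi3 <- sum((1 + Radj) * eta / (x^2 * xi))
  Sig  <- solve(matrix(c(psi1, -psi3, -psi3, psi2), 2, 2))   # sigma_{ik}
  # third derivatives of the log-likelihood (4)
  L30 <- 2 * m / alpha^3
  L21 <- 0
  L12 <- -sum((1 + Radj) * eta / (x^4 * xi^2))
  L03 <- 2 * m / lambda^3 +
         sum(((alpha - 1) + alpha * Radj) * eta * (1 + eta) / (x^6 * xi^3))
  # derivatives of the log-prior
  rho1 <- (a - 1) / alpha - b
  rho2 <- (c - 1) / lambda - d
  A1 <- L30 * Sig[1, 1] + 2 * L21 * Sig[1, 2] + L12 * Sig[2, 2]
  A2 <- L21 * Sig[1, 1] + 2 * L12 * Sig[1, 2] + L03 * Sig[2, 2]
  g +
    0.5 * ((g11 + 2 * g1 * rho1) * Sig[1, 1] +
           2 * (g12 + g1 * rho2 + g2 * rho1) * Sig[1, 2] +
           (g22 + 2 * g2 * rho2) * Sig[2, 2]) +
    0.5 * (A1 * (Sig[1, 1] * g1 + Sig[1, 2] * g2) +
           A2 * (Sig[2, 1] * g1 + Sig[2, 2] * g2))
}

# e.g. the Lindley SE estimate of alpha (g = alpha), with MLEs ah, lh:
#   lindley_ier(x, Radj, ah, lh, a, b, c, d, g = ah, g1 = 1, g2 = 0, g11 = 0, g12 = 0, g22 = 0)
# for the LINEX estimate apply it to g = exp(-s*ah) (g1 = -s*exp(-s*ah), g11 = s^2*exp(-s*ah))
# and take -log(result)/s.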
4.2. Markov Chain Monte Carlo Method
Markov chain Monte Carlo (MCMC) is a useful technique for estimating complex Bayesian models. The Gibbs sampler and the Metropolis-Hastings algorithm are the two most frequently applied MCMC methods and are used in statistics, statistical physics, digital communications, signal processing, machine learning, etc. Due to their practicality, they have gained considerable attention among researchers and interesting results have been obtained. For example, Ritter and Tanner (1992) introduced the Griddy-Gibbs sampler for bivariate posterior densities; they developed a Gibbs sampler for sampling from the conditional distributions in the absence of conjugacy that preserves the conceptual and implementational simplicity of the Gibbs sampler. Gilks and Wild (1992) proposed adaptive rejection sampling for handling non-conjugacy in applications of Gibbs sampling. Koch (2007) studied the Gibbs sampler by means of the sampling-importance-resampling algorithm and demonstrated this method for the reconstruction and smoothing of 3D digital images based on a linear relation between observations and unknown parameters. Shao et al. (2013) proposed the B-spline proposal Metropolis-Hastings algorithm for obtaining an efficient proposal distribution, which significantly raises the acceptance rate and allows relatively large step-size transitions in the MCMC sampler. Martino et al. (2018) established a new approach, namely the recycling Gibbs sampler, for improving efficiency without adding any extra computational cost. In the present work, we develop a hybrid strategy combining the Metropolis-Hastings algorithm (Metropolis et al., 1953; Hastings, 1970) within the Gibbs sampler for generating samples from the posterior arising from the inverted exponentiated Rayleigh distribution. From (3) and (10), the joint posterior, up to proportionality, is given by
\[
\pi(\alpha ,\lambda \mid \text{data})\propto \alpha^{m+a-1}\lambda^{m+c-1}e^{-b\alpha -d\lambda }\prod_{i=1}^{m}x_{i:m:n}^{-3}\exp(-\lambda /x_{i:m:n}^{2})\bigl[1-\exp(-\lambda /x_{i:m:n}^{2})\bigr]^{\alpha -1}
\prod_{i=1}^{j}\bigl[1-\exp(-\lambda /x_{i:m:n}^{2})\bigr]^{\alpha R_{i}}\bigl[1-\exp(-\lambda /x_{m:m:n}^{2})\bigr]^{\alpha \left(n-m-\sum_{i=1}^{j}R_{i}\right)}.
\]
So we can write π(α, λ | data) ∝ π_1(α | λ, data) π_2(λ | α, data). It is clear that the conditional posterior density of α, π_1(α | λ, data), is a gamma density:
\[
\pi_{1}(\alpha \mid \lambda ,\text{data})\propto \alpha^{m+a-1}\exp\Biggl\{-\alpha \Biggl[b-\sum_{i=1}^{m}\ln \bigl(1-e^{-\lambda /x_{i:m:n}^{2}}\bigr)-\sum_{i=1}^{j}R_{i}\ln \bigl(1-e^{-\lambda /x_{i:m:n}^{2}}\bigr)-\Bigl(n-m-\sum_{i=1}^{j}R_{i}\Bigr)\ln \bigl(1-e^{-\lambda /x_{m:m:n}^{2}}\bigr)\Biggr]\Biggr\}.
\]
Therefore, samples of α can easily be generated. The conditional posterior density of λ, π_2(λ | α, data), can be written as
\[
\pi_{2}(\lambda \mid \alpha ,\text{data})\propto \lambda^{m+c-1}\exp\Biggl\{-\lambda \Bigl(d+\sum_{i=1}^{m}x_{i:m:n}^{-2}\Bigr)\Biggr\}
\prod_{i=1}^{m}\bigl(1-e^{-\lambda /x_{i:m:n}^{2}}\bigr)^{\alpha -1}\prod_{i=1}^{j}\bigl(1-e^{-\lambda /x_{i:m:n}^{2}}\bigr)^{\alpha R_{i}}\bigl(1-e^{-\lambda /x_{m:m:n}^{2}}\bigr)^{\alpha \left(n-m-\sum_{i=1}^{j}R_{i}\right)}.
\]
It is observed that the density π_2(λ | α, data) cannot be reduced analytically to a well-known distribution. So, we apply Metropolis-Hastings steps within the Gibbs sampler to generate random samples from the conditional posterior densities as follows (a short R sketch is given after the steps):
Step 1: Initialize the values (α^(0), λ^(0)).
Step 2: At stage h, and for given m, n and the Adaptive-IIPH censored data, generate α^(h) from
\[
\mathrm{Gamma}\Biggl(m+a,\ b-\sum_{i=1}^{m}\ln \bigl(1-e^{-\lambda^{(h-1)}/x_{i:m:n}^{2}}\bigr)-\sum_{i=1}^{j}R_{i}\ln \bigl(1-e^{-\lambda^{(h-1)}/x_{i:m:n}^{2}}\bigr)-\Bigl(n-m-\sum_{i=1}^{j}R_{i}\Bigr)\ln \bigl(1-e^{-\lambda^{(h-1)}/x_{m:m:n}^{2}}\bigr)\Biggr).
\]
Step 3: Generate λ^(h) from π_2(λ | α^(h), data) using the following algorithm:
Step 3-1: Generate a candidate λ* from Normal(λ^(h-1), Var(λ̂_ML)).
Step 3-2: Generate w from a Uniform(0, 1) distribution.
Step 3-3: Put λ^(h) = λ* if w ≤ min{1, π_2(λ* | α^(h), data) / π_2(λ^(h-1) | α^(h), data)}; otherwise put λ^(h) = λ^(h-1).
Step 4: Set h = h + 1.
Step 5: Repeat Steps 2-4, N times.
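A compact R sketch of Steps 1-5 is given below; the function names, the default proposal standard deviation sd_prop and the default initial value are our assumptions, not the authors' implementation (the proposal spread can, for example, be set to the square root of the asymptotic variance of λ̂_ML).

# Hedged sketch of the Metropolis-Hastings-within-Gibbs sampler.
log_post_lambda <- function(lambda, alpha, x, Radj, c, d) {
  if (lambda <= 0) return(-Inf)
  xi <- 1 - exp(-lambda / x^2)
  (length(x) + c - 1) * log(lambda) - lambda * (d + sum(1 / x^2)) +
    sum(((alpha - 1) + alpha * Radj) * log(xi))     # log pi_2 up to a constant
}

gibbs_ier <- function(x, Radj, a, b, c, d, N = 10000, lambda0 = 1, sd_prop = 0.5) {
  m <- length(x)
  out <- matrix(NA_real_, N, 2, dimnames = list(NULL, c("alpha", "lambda")))
  lambda <- lambda0
  for (h in seq_len(N)) {
    xi <- 1 - exp(-lambda / x^2)
    alpha <- rgamma(1, shape = m + a, rate = b - sum((1 + Radj) * log(xi)))  # Step 2
    cand <- rnorm(1, lambda, sd_prop)                                        # Step 3-1
    logr <- log_post_lambda(cand, alpha, x, Radj, c, d) -
            log_post_lambda(lambda, alpha, x, Radj, c, d)
    if (log(runif(1)) <= logr) lambda <- cand                                # Steps 3-2, 3-3
    out[h, ] <- c(alpha, lambda)
  }
  out
}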
Therefore, the Bayesian estimates of α and λ under the SE and LINEX loss functions are
\[
\hat\alpha_{\text{MCMC-}SE}=\frac{1}{N-M}\sum_{h=M+1}^{N}\alpha^{(h)},\qquad
\hat\lambda_{\text{MCMC-}SE}=\frac{1}{N-M}\sum_{h=M+1}^{N}\lambda^{(h)},
\]
and
\[
\hat\alpha_{\text{MCMC-}LI}=-\frac{1}{s}\log\Biggl[\frac{1}{N-M}\sum_{h=M+1}^{N}\exp\bigl(-s\alpha^{(h)}\bigr)\Biggr],\qquad
\hat\lambda_{\text{MCMC-}LI}=-\frac{1}{s}\log\Biggl[\frac{1}{N-M}\sum_{h=M+1}^{N}\exp\bigl(-s\lambda^{(h)}\bigr)\Biggr],
\]
respectively, where M is the burn-in period of the Markov chain, which is used to remove the effect of the initial values and to guarantee the convergence of the algorithm. For constructing the MCMC intervals, first order the α^(h) and λ^(h), h = 1, 2, ..., N. Then the 100(1 - γ)% MCMC intervals of α and λ are given by
\[
\bigl(\alpha_{(N\gamma /2)},\ \alpha_{(N(1-\gamma /2))}\bigr)\quad\text{and}\quad\bigl(\lambda_{(N\gamma /2)},\ \lambda_{(N(1-\gamma /2))}\bigr).
\]
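From the stored chain, the point estimates and MCMC intervals above can be computed as in the following sketch; here the burn-in samples are discarded before ordering, and the LINEX constant s and the burn-in M are placeholders chosen by us.

# Hedged sketch: summaries of the MCMC output of gibbs_ier().
summarize_chain <- function(chain, M = 2000, s = 0.5, gamma = 0.05) {
  kept <- chain[(M + 1):nrow(chain), ]
  se   <- colMeans(kept)                                   # SE-loss estimates
  li   <- -log(colMeans(exp(-s * kept))) / s               # LINEX-loss estimates
  ci   <- apply(kept, 2, quantile, probs = c(gamma / 2, 1 - gamma / 2))
  list(SE = se, LINEX = li, MCMC_interval = ci)
}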
5. Simulation Study
In this section, we conduct a Monte Carlo simulation study to compare the different methods of estimation proposed for the inverted exponentiated Rayleigh distribution under Adaptive-IIPH censored data. For this purpose, different combinations of sample sizes and censoring schemes are taken into account. We take T = 1.5, 3, 6, (n, m) = (15,5), (25,5), (35,5), (15,10), (25,10), (35,10), (25,15), (35,15), and three different censoring schemes (SC):
SC 1: R_1 = n - m - 1, R_m = 1 and R_i = 0 for i ≠ 1, m.
SC 2: R_1 = n - m and R_i = 0 for i ≠ 1.
SC 3: R_m = n - m and R_i = 0 for i ≠ m.
It is noted that Scheme 3 is the Type II censoring scheme, where n - m units are withdrawn from the test at the time of the m-th failure; Scheme 2 is the first-step censoring scheme, where the n - m units are removed at the time of the first failure; and Scheme 1 is a usual Type II progressive censoring scheme. We used the R statistical software for all computations. First, we use the following steps (implemented in the short R sketch following the list) to generate an Adaptive-IIPH censored sample of the IERD:
1. Generate an ordinary Type II progressive censored sample X_{1:m:n}, ..., X_{m:m:n} with censoring scheme (R_1, ..., R_m) based on the method proposed by Balakrishnan and Sandhu (1995), as follows.
2. Generate m independent U(0,1) observations W_1, W_2, ..., W_m.
3. For the given values of n, m, T and (R_1, ..., R_m), set
\[
V_{i}=W_{i}^{1/\left(i+\sum_{j=m-i+1}^{m}R_{j}\right)},\qquad i=1,2,\ldots ,m.
\]
4. Set U_i = 1 - V_m V_{m-1} ... V_{m-i+1} for i = 1, 2, ..., m. Then U_1, U_2, ..., U_m is a progressive Type II censored sample of size m from the U(0,1) distribution.
5. For given values of the parameters α and λ,
\[
X_{i}=\Bigl[-\lambda \big/\ln \bigl(1-(1-U_{i})^{1/\alpha }\bigr)\Bigr]^{1/2},\qquad i=1,\ldots ,m,
\]
is a progressive Type II censored sample from IERD(α, λ).
6. Determine the value of J, where X_{J:m:n} < T < X_{J+1:m:n}, and discard the sample X_{J+2:m:n}, ..., X_{m:m:n}.
7. Generate the first m - J - 1 order statistics from the truncated distribution f(x) / [1 - F(x_{J+1:m:n})] with sample size n - \sum_{i=1}^{J} R_i - J - 1 as X_{J+2:m:n}, ..., X_{m:m:n}.
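The following R sketch implements Steps 1-7 under the notation above (the function name and interface are ours); it returns the observed sample together with the adjusted removal vector used by the estimation routines sketched earlier.

# Hedged sketch: simulate one Adaptive-IIPH censored sample from IERD(alpha, lambda).
r_adaptive_ier <- function(n, m, R, T, alpha, lambda) {
  # Steps 2-4: progressive Type-II censored U(0,1) sample (Balakrishnan and Sandhu, 1995)
  W <- runif(m)
  V <- W^(1 / (seq_len(m) + cumsum(rev(R))))
  U <- 1 - cumprod(rev(V))
  # Step 5: transform through the IERD quantile function
  Finv <- function(p) sqrt(lambda / (-log(1 - (1 - p)^(1 / alpha))))
  X <- Finv(U)
  # Step 6: number of failures falling before the ideal test time T
  J <- sum(X < T)
  if (J >= m) return(list(x = X, Radj = R))        # usual progressive Type-II case
  # Step 7: regenerate X_{J+2}, ..., X_m from the distribution left-truncated at X_{J+1}
  FJ1  <- 1 - (1 - exp(-lambda / X[J + 1]^2))^alpha
  nrem <- n - sum(R[seq_len(J)]) - J - 1
  tail_new <- sort(Finv(FJ1 + runif(nrem) * (1 - FJ1)))[seq_len(m - J - 1)]
  x    <- c(X[seq_len(J + 1)], tail_new)
  Radj <- c(R[seq_len(J)], rep(0, m - J - 1), n - m - sum(R[seq_len(J)]))
  list(x = x, Radj = Radj)
}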
The average values and mean squared errors (MSEs) of the MLEs and of the Bayesian estimates of α and λ are evaluated. The Bayesian estimates are computed by using Lindley's approximation (BEL) and the MCMC technique (BEM). The BEMs are obtained using the Metropolis-Hastings-within-Gibbs scheme based on 5000 MCMC samples. The hyperparameters of the gamma priors are taken as a = 5, b = 10, c = 19 and d = 10. All the estimates are computed for the arbitrarily chosen parameter values α = 0.5 and λ = 1.9. The results, based on 10^4 Adaptive-IIPH censored samples, are reported in Tables 1-6. In all tables we have tabulated the average estimates and MSEs of the MLEs (first and second rows), of Lindley's approximation under the squared error loss function (BEL_SE, third and fourth rows), of Lindley's approximation under the LINEX loss function (BEL_LI, fifth and sixth rows), of the MCMC approximation under the squared error loss function (BEM_SE, seventh and eighth rows) and of the MCMC approximation under the LINEX loss function (BEM_LI, ninth and tenth rows). Further, the average lengths of the 95% approximate confidence intervals (AI) and MCMC intervals (MI) of α and λ are computed to examine how the proposed interval methods work; the average AIs and MIs are displayed in Tables 7-9. From these tables the following conclusions are made:
1. For fixed n and T, as m increases the MSEs of the MLEs, BELs and BEMs decrease, so we tend to get better estimation results with an increase in the effective sample size.
2. For fixed n, m and T, Scheme 2, (n - m, 0, 0, ..., 0), outperforms Schemes 1 and 3 in the sense of MSEs and of the average lengths of the approximate confidence/MCMC intervals.
3. For fixed n and m, when T increases we do not observe any specific trend in the MSEs. This can be attributed to the fact that the number of observed failures is fixed in advance and no additional failures are observed when T increases.
4. The average lengths of the approximate confidence/MCMC intervals narrow as m increases while T and n remain fixed.
5. The Bayesian estimates perform very well in the sense of MSE. In most cases, the MCMC technique is better than Lindley's approximation with respect to MSE.
6. In most cases, the Bayesian estimates under the LINEX loss function based on the MCMC method are the best choice among all competing estimators, for all values of n, m and T.
7. Further, it is observed that the MCMC intervals have shorter average lengths than the approximate intervals.
6. Real Data Analysis
Coating by spreading nano-droplets plays a critical role in numerous novel industries such as plasma spraying, nano self-assembly, nano safeguard coatings and ink-jet printing processes. In this section, we consider the data on the maximum diameter of nano-droplets colliding with hydrophobic surfaces obtained by Hai-Bao et al. (2014). The maximum diameter of a nano-droplet (Max) depends on the contact time (t) and velocity (V) of the collision (see Figure 2). Several authors have studied different statistical models for real experimental data; see, for example, Shao et al. (2004), Saxena and Rao (2015) and Panahi (2017a, 2017b). Before performing the numerical calculations and proceeding to further analysis of these data, we obtain the Kolmogorov-Smirnov (K-S) distance between the empirical distribution function and the fitted distribution function; it is 0.14011, with an associated p-value of 0.52. Therefore, the result indicates that the IERD fits these data well. First, we compute the maximum likelihood estimates of the unknown parameters using the iterative method described in Section 3. Using the graphical method proposed by Balakrishnan and Kateri (2008), we determine the initial value of λ (see Figure 3) and stop the iterative process when |λ_{i+1} - λ_i| ≤ 10^{-6}. From Figure 3, it is observed that the initial value of λ can be taken as 270.15. Using this initial value, the MLEs of α and λ are obtained as 2.13126 and 272.2386, respectively. We plot the profile log-likelihood function of λ in Figure 4, and it clearly indicates that the profile log-likelihood function is unimodal. The log-likelihood function is also plotted over the whole parameter space in Figure 5. Now, we generate Adaptive-IIPH censored samples from the original measurements, considering different values of m (m = 20, 22, 23), T (T = 18, 19, 20) and the following censoring schemes:
SC 1: R_1 = n - m - 1, R_2 = ... = R_{m-1} = 0, R_m = 1:
6.7528, 8.0809, 9.0289, 9.9806, 12.4030, 14.9220, 16.0850, 17.3770, 18.9920, 20.1280, 20.5430, 21.3180, 21.5510, 22.4810, 22.5950, 23.4500, 23.8280, 24.4190, 24.6810, 25.5350.
SC 2: R_1 = n - m, R_2 = ... = R_m = 0:
6.7528, 8.0809, 9.0289, 9.1085, 9.9767, 9.9806, 10.2710, 10.8530, 11.0200, 11.2400, 12.4030, 12.6330, 14.7210, 14.9220, 16.0850, 16.1440, 17.3770, 17.6360, 20.1280, 20.5430, 21.5510, 25.5350.
SC 3: R_1 = ... = R_{m-1} = 0, R_m = n - m:
6.7528, 8.0809, 9.0289, 9.1085, 9.9767, 9.9806, 10.2710, 10.8530, 11.0200, 11.2400, 12.4030, 12.6330, 13.6630, 14.7210, 14.9220, 16.0850, 16.1440, 17.3770, 17.6360, 18.7050, 18.9920, 20.1280, 25.5350.
The maximum likelihood estimates and the Bayesian estimates obtained using Lindley's approximation and using MCMC samples of size 10^4 are reported in Table 10. We mention that the Bayesian estimates are obtained under the SE and LINEX loss functions. Because we have no prior information about the unknown parameters, we consider noninformative gamma priors with a = b = c = d = 0. Further, the 95% approximate confidence intervals and MCMC intervals are provided in Table 11. To calculate the Bayesian estimates using the Metropolis-Hastings-within-Gibbs sampling, we use the MLEs as the initial values and generate a Markov chain of N = 10^4 samples. We use a normal candidate-generating density to simulate samples from π_2(λ | α, data). The trace plots of the first 10^4 MCMC outputs for the posterior distributions of α and λ are presented in Figures 6 and 7, respectively. A trace plot, which plots the iteration number (x axis) against the value of the parameter draw at that iteration (y axis), is a diagnostic tool for assessing the convergence of the MCMC chain. It is evident that the MCMC procedure converges very well.
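The convergence check via trace plots can be reproduced with a few lines of R; this is a sketch assuming 'chain' is the output of the sampler sketched in Section 4.2 run on the real data.

# Hedged sketch: trace plots of the chains as a convergence diagnostic (Figures 6 and 7).
par(mfrow = c(2, 1))
plot(chain[, "alpha"],  type = "l", xlab = "iteration", ylab = expression(alpha))
plot(chain[, "lambda"], type = "l", xlab = "iteration", ylab = expression(lambda))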
7. Conclusions
In this paper, we have considered the estimation of the parameters of the inverted exponentiated Rayleigh distribution when the data come from an Adaptive-IIPH censoring scheme. Using an iterative procedure and asymptotic normality theory, we have developed the MLEs and the approximate confidence intervals of the unknown parameters. The Bayesian estimates are derived by Lindley's approximation under the squared error and LINEX loss functions. Since Lindley's method cannot be used to construct intervals, we utilized Gibbs sampling together with the Metropolis-Hastings sampling procedure to compute the Bayesian estimates of the unknown parameters and also to construct the associated MCMC intervals. A Monte Carlo simulation study has been carried out to assess all the inferential results, and the results illustrate that the proposed methods perform well. The applicability of the considered model in a real situation has been illustrated with the nano-droplet maximum spreading diameter data, and it was observed that the considered model can be used to analyze the nano-droplet data well.
References
Aggarwala, R. and Balakrishnan, N. (1998). Some properties of progressive censored order statistics from arbitrary and uniform distributions with application to inference and simulation. Journal of Statistical Planning and Inference. 70, 35-49.
AL Sobhi, M.M. and Soliman, A.A. (2015). Estimation for the exponentiated Weibull model with adaptive Type-II progressive censored schemes. Applied Mathematical Modelling. 40(2), 1180-1192.
Balakrishnan, N. and Kateri, M. (2008). On the maximum likelihood estimation of parameters of Weibull distribution based on complete and censored data. Statistics and Probability Letters. 78, 2971-2975.
Balakrishnan, N. and Kundu, D. (2013). Hybrid censoring: models, inferential results and applications. Computational Statistics and Data Analysis. 57, 166-209.
Balakrishnan, N. and Sandhu, R.A. (1995). A simple simulational algorithm for generating progressive Type-II censored samples. The American Statistician. 49, 229-230.
Banerjee, A. and Kundu, D. (2008). Inference based on Type-II hybrid censored data from a Weibull distribution. IEEE Transactions on Reliability. 57, 369-378.
Childs, A., Chandrasekar, B. and Balakrishnan, N. (2008). Exact likelihood inference for an exponential parameter under progressive hybrid censoring. In: Vonta, F., Nikulin, M., Limnios, N. and Huber-Carol, C. (eds) Statistical Models and Methods for Biomedical and Technical Systems. Birkhauser, Boston, 319-330.
Ghitany, M.E., Tuan, V.K. and Balakrishnan, N. (2014). Likelihood estimation for a general class of inverse exponentiated distributions based on complete and progressively censored data. Journal of Statistical Computation and Simulation. 84, 96-106.
Gilks, W.R. and Wild, P. (1992). Adaptive rejection sampling for Gibbs sampling. Journal of the Royal Statistical Society, Series C (Applied Statistics). 41, 337-348.
Gupta, P.K. and Singh, B. (2012). Parameter estimation of Lindley distribution with hybrid censored data. International Journal of System Assurance Engineering and Management. 1, 1-8.
Hai-Bao, H., Li-Bin, C., Lu-Yao, B. and Su-He, H. (2014). Molecular dynamics simulations of the nano-droplet impact process on hydrophobic surfaces. Chinese Physics B. 23, 074702.
Hastings, W.K. (1970). Monte Carlo sampling methods using Markov chains and their applications. Biometrika. 57, 97-109.
Ismail, A.A. (2014). Inference for a step-stress partially accelerated life test model with an adaptive Type-II progressively hybrid censored data from Weibull distribution. Journal of Computational and Applied Mathematics. 260, 533-542.
Kayal, T., Tripathi, Y.M. and Rastogi, M.K. (2018). Estimation and prediction for an inverted exponentiated Rayleigh distribution under hybrid censoring. Communications in Statistics - Theory and Methods. 47, 1615-1640.
Koch, K.R. (2007). Gibbs sampler by sampling-importance-resampling. Journal of Geodesy. 81, 581-591.
Kundu, D. and Howlader, H. (2010). Bayesian inference and prediction of the inverse Weibull distribution for Type-II censored data. Computational Statistics and Data Analysis. 54, 1547-1558.
Kundu, D. and Joarder, A. (2006). Analysis of Type-II progressively hybrid censored data. Computational Statistics and Data Analysis. 50, 2509-2528.
Lindley, D.V. (1980). Approximate Bayesian methods. Trabajos de Estadistica. 31, 223-245.
Mahmoud, M.A.W., Soliman, A.A., Abd-Ellah, A.H. and El-Sagheer, R.M. (2013). Estimation of generalized Pareto under an adaptive Type-II progressive censoring. Intelligent Information Management. 5, 73-83.
Mahmoud, M.A.W. and Ghazal, M.G.M. (2017). Estimations from the exponentiated Rayleigh distribution based on generalized Type-II hybrid censored data. Journal of the Egyptian Mathematical Society. 25, 71-78.
Martino, L., Elvira, V. and Camps-Valls, G. (2018). The recycling Gibbs sampler for efficient learning. Digital Signal Processing. 74, 1-13.
Metropolis, N., Rosenbluth, A.W., Rosenbluth, M.N., Teller, A.H. and Teller, E. (1953). Equations of state calculations by fast computing machines. Journal of Chemical Physics. 21, 1087-1092.
Nassar, M. and Abo-Kasem, O. (2017). Estimation of the inverse Weibull parameters under adaptive Type-II progressive hybrid censoring scheme. Journal of Computational and Applied Mathematics. 315, 228-239.
Nassar, M., Abo-Kasem, O., Zhang, C. and Dey, S. (2018). Analysis of Weibull distribution under adaptive Type-II progressive hybrid censoring scheme. Journal of the Indian Society for Probability and Statistics. 19, 25-65.
Ng, H.K.T., Kundu, D. and Chan, P.S. (2009). Statistical analysis of exponential lifetimes under an adaptive Type-II progressive censoring scheme. Naval Research Logistics. 56, 687-698.
Panahi, H. (2017a). Estimation of the Burr type III distribution with application in unified hybrid censored sample of fracture toughness. Journal of Applied Statistics. 44, 2575-2592.
Panahi, H. (2017b). Estimation methods for the generalized inverted exponential distribution under Type II progressively hybrid censoring with application to spreading of micro-drops data. Communications in Mathematics and Statistics. 5, 159-174.
Raqab, M.Z. and Madi, M.T. (2009). Bayesian analysis for the exponentiated Rayleigh distribution. METRON - International Journal of Statistics. 3, 269-288.
Ritter, C. and Tanner, M.A. (1992). Facilitating the Gibbs sampler: the Gibbs stopper and the Griddy-Gibbs sampler. Journal of the American Statistical Association. 87, 861-868.
Saxena, B.K. and Rao, K.V.S. (2015). Comparison of Weibull parameters computation methods and analytical estimation of wind turbine capacity factor using polynomial power curve model: case study of a wind farm. Renewables: Wind, Water, and Solar. 2, 1-11.
Shao, Q., Wong, H., Xia, J. and Ip, W.C. (2004). Models for extremes using the extended three-parameter Burr XII system with application to flood frequency analysis. Hydrological Sciences Journal. 49, 685-702.
Shao, W., Guo, G., Meng, F. and Jia, S. (2013). An efficient proposal distribution for Metropolis-Hastings using a B-splines technique. Computational Statistics and Data Analysis. 57, 465-478.
Figure captions:
Figure 1. The hazard rate function of the IERD for different parameter values.
Figure 2. The surface evaluation plot of the maximum diameter of nano-droplet data.
Figure 3. Plot of the λ and W^{-1}(λ) functions.
Figure 4. The profile log-likelihood function of λ.
Figure 5. Likelihood profile with respect to the parameters for the complete data.
Figure 6. Trace plot of 10^4 iterations of α under the complete data.
Figure 7. Trace plot of 10^4 iterations of λ under the complete data.
Table 1. The average estimates and MSEs of α for different choices of n, m, R and T. (15,5) SC MLE
1 0.621399 0.069418
T=1.5 2 0.651415 0.068201
3 0.591425 0.069879
1 0.629558 0.068632
T=3 2 0.536107 0.067861
3 0.543888 0.068970
1 0.638037 0.067232
T=6 2 0.692036 0.066018
3 0.604070 0.067987
BELSE
0.643845 0.065572
0.624238 0.062956
0.622865 0.064577
0.638092 0.066658
0.573092 0.063515
0.559152 0.065156
0.576829 0.066894
0.584544 0.063975
0.586804 0.065758
BELLI
0.593325 0.062261
0.544008 0.060306
0.520597 0.061534
0.590337 0.063992
0.582269 0.060822
0.546084 0.061599
0.569685 0.063787
0.589886 0.061560
0.573334 0.062042
BEMSE
0.554892 0.053616
0.560891 0.051627
0.577193 0.052820
0.575413 0.053368
0.578215 0.051786
0.519363 0.052111
0.501447 0.053747
0.574564 0.051955
0.579847 0.052506
BEMLI
0.554893 0.052427
0.554192 0.051616
0.547419 0.052961
0.501947 0.052787
1 0.585593 0.069935
3 0.561746 0.067843
1 0.558929 0.070199
0.574564 0.051955 T=6 2 0.574219 0.066907
0.542347 0.052006
3 0.630419 0.069597
0.574715 0.051786 T=3 2 0.583597 0.067075
0.519363 0.052001
1 0.643621 0.069854
0.554871 0.050616 T=1.5 2 0.641103 0.068277
BELSE
0.614524 0.066240
0.541277 0.062705
0.598899 0.064946
0.633744 0.067854
0.603648 0.062776
0.502676 0.065245
0.517977 0.065967
0.663395 0.062699
0.538981 0.064622
BELLI
0.582056 0.065537
0.542876 0.061779
0.555526 0.061852
0.501980 0.066571
0.501986 0.060571
0.586207 0.065046
0.607087 0.064174
0.521879 0.061306
0.600886 0.063178
BEMSE
0.524530 0.053521
0.570791 0.052266
0.537310 0.052984
0.619238 0.053546
0.530821 0.051859
0.525898 0.052203
0.572277 0.053810
0.514225 0.052355
0.561437 0.052972
BEMLI
0.516250 0.052552
0.535632 0.052244
0.614995 0.053039
0.573397 0.052722
1 0.522315 0.072572
3 0.523231 0.069247
1 0.516503 0.072671
0.513743 0.051363 T=6 2 0.517116 0.067446
0.562151 0.052042
3 0.520849 0.069095
0.531772 0.051317 T=3 2 0.517636 0.066256
0.504783 0.053839
1 0.622359 0.071556
0.575368 0.051036 T=1.5 2 0.531091 0.064530
BELSE
0.540516 0.064647
0.587510 0.060658
0.631392 0.062263
0.569732 0.066723
0.552430 0.063749
0.580429 0.065272
0.572580 0.065130
0.643201 0.061506
0.569586 0.063842
BELLI
0.544653 0.063845
0.496372 0.058482
0.556416 0.061711
0.612611 0.063681
0.579351 0.059682
0.575394 0.061984
0.595732 0.063112
0.590827 0.059444
0.631599 0.062318
BEMSE
0.562678 0.053763
0.544672 0.051443
0.529239 0.052141
0.625664 0.053262
0.504179 0.051441
0.614441 0.052806
0.504332 0.053472
0.556291 0.052050
0.538051 0.052171
BEMLI
0.565432 0.053362
0.525674 0.051002
0.514663 0.052072
0.546499 0.053217
0.523476 0.051087
0.546980 0.052002
0.529965 0.052381
0.517623 0.050241
0.545732 0.051954
(25,5) Scheme MLE
(35,5) SC MLE
3 0.584355 0.067438
3 0.531177 0.068501
Table 2. The average estimates and MSEs of α for different choices of n, m, R and T. (15,10) SC MLE
1 0.569213 0.058105
T=1.5 2 0.573446 0.055015
3 0.575895 0.056401
1 0.589643 0.059178
T=3 2 0.544121 0.054298
3 0.546445 0.057579
1 0.542912 0.059674
T=6 2 0.544146 0.055290
3 0.543679 0.057437
BELSE
0.645482 0.045165
0.624376 0.041469
0.482564 0.042084
0.574846 0.045602
0.602379 0.042952
0.495936 0.043585
0.552560 0.045062
0.603525 0.042717
0.573784 0.044441
BELLI
0.539613 0.043032
0.572832 0.040315
0.515022 0.041222
0.511625 0.044139
0.547012 0.041221
0.491021 0.043577
0.605030 0.044933
0.540769 0.041557
0.615712 0.043348
BEMSE
0.558482 0.042244
0.619712 0.040113
0.513393 0.041401
0.669918 0.042979
0.554444 0.041152
0.598575 0.041893
0.574393 0.042912
0.605299 0.040842
0.599436 0.041390
BEMLI
0.525678 0.041251
0.548800 0.039232
0.543678 0.041896
0.595433 0.041236
1 0.594303 0.056311
3 0.596885 0.055258
1 0.596811 0.055285
0.598765 0.039873 T=6 2 0.529528 0.052868
0.545632 0.040401
3 0.593181 0.054774
0.532467 0.039762 T=3 2 0.595213 0.052937
0.576544 0.040685
1 0.594705 0.056145
0.578543 0.038076 T=1.5 2 0.595789 0.051702
BELSE
0.496979 0.045497
0.558890 0.043468
0.634928 0.045204
0.564618 0.045575
0.579921 0.042391
0.572423 0.044241
0.569327 0.044806
0.437277 0.041934
0.556342 0.043174
BELLI
0.595407 0.044102
0.542768 0.041829
0.510049 0.043665
0.566255 0.044389
0.615636 0.042371
0.509392 0.043600
0.638154 0.045086
0.583552 0.042980
0.557019 0.043251
BEMSE
0.553225 0.042933
0.515202 0.039465
0.497900 0.042036
0.622996 0.042729
0.551607 0.039632
0.549652 0.042092
0.585632 0.041902
0.618365 0.039709
0.525083 0.040738
BEMLI
0.567621 0.041025
0.499876 0.038972
0.524588 0.042253
0.553652 0.041251
1 0.530519 0.061723
3 0.564346 0.057532
1 0.565751 0.054872
0.534322 0.038709 T=6 2 0.563722 0.051826
0.543244 0.039921
3 0.529583 0.057041
0.538712 0.039126 T=3 2 0.562735 0.055294
0.539006 0.041423
1 0.594216 0.061046
0.532466 0.037862 T=1.5 2 0.575518 0.054491
BELSE
0.557016 0.044256
0.507059 0.042393
0.566512 0.043153
0.579832 0.044373
0.564297 0.043134
0.516245 0.044004
0.528275 0.045028
0.577462 0.042005
0.628883 0.044618
BELLI
0.543552 0.043896
0.523255 0.041540
0.593773 0.042792
0.534154 0.043966
0.609803 0.042056
0.563314 0.043002
0.523003 0.043529
0.451095 0.041397
0.542467 0.042803
BEMSE
0.547804 0.043693
0.533460 0.040224
0.594608 0.041972
0.586009 0.043834
0.540727 0.041638
0.605168 0.041999
0.533890 0.043007
0.491683 0.040637
0.547106 0.042204
BEMLI
0.546543 0.042842
0.532315 0.039789
0.499876 0.040276
0.534456 0.043032
0.532432 0.040034
0.536890 0.040449
0.534278 0.041628
0.508214 0.039704
0.499803 0.040938
(25,10) SC MLE
(35,10) SC MLE
3 0.594019 0.053038
3 0.562010 0.052637
Table 3. The average estimates and MSEs of α for different choices of n, m, R and T. (25,15) SC MLE
1 0.534209 0.037486
T=1.5 2 0.535530 0.034050
3 0.636165 0.036847
1 0.616496 0.047674
T=3 2 0.618648 0.045888
3 0.616576 0.046643
1 0.516931 0.0475141
T=6 2 0.518910 0.043793
3 0.519592 0.045546
BELSE
0.607751 0.035012
0.526128 0.032685
0.655475 0.034176
0.559113 0.035494
0.598863 0.032773
0.573643 0.033423
0.595050 0.035734
0.547814 0.032873
0.543290 0.034874
BELLI
0.519584 0.033042
0.582903 0.031872
0.564844 0.033264
0.598517 0.034993
0.516965 0.032853
0.564558 0.032967
0.595628 0.034044
0.524801 0.031165
0.616907 0.033667
BEMSE
0.598046 0.031812
0.5370911 0.029850
0.539145 0.029980
0.604936 0.031988
0.603555 0.028655
0.599226 0.029963
0.631447 0.030641
0.575522 0.028184
0.566129 0.029249
BEMLI
0.499867 0.027912
0.521367 0.026110
0.599854 0.027988
0.568765 0.027642
1 0.583767 0.046752
3 0.584501 0.043439
1 0.584161 0.046586
0.598433 0.025184 T=6 2 0.583251 0.042976
0.543556 0.026249
3 0.617434 0.043334
0.599964 0.025655 T=3 2 0.583273 0.041970
0.575432 0.025963
1 0.584813 0.046306
0.598765 0.024850 T=1.5 2 0.618603 0.041904
BELSE
0.672777 0.035856
0.629262 0.034708
0.605575 0.035133
0.624642 0.036535
0.582445 0.034289
0.614283 0.035033
0.609564 0.035954
0.603022 0.033415
0.617493 0.035056
BELLI
0.672622 0.034801
0.653641 0.033602
0.601542 0.034234
0.577133 0.035224
0.654317 0.033286
0.628665 0.034036
0.654956 0.034065
0.654044 0.032003
0.645493 0.033052
BEMSE
0.496886 0.035724
0.578245 0.032844
0.522822 0.033489
0.580940 0.035032
0.577289 0.031150
0.531481 0.033253
0.654728 0.034579
0.620351 0.031891
0.500531 0.032198
BEMLI
0.498766 0.033723
0.507654 0.029844
0.513455 0.031489
0.565433 0.033842
0.535677 0.029243
0.524561 0.032752
0.608875 0.032594
0.587655 0.028891
0.576555 0.030198
(35,15) SC MLE
3 0.584990 0.044229
Table 4. The average estimates and MSEs of λ for different choices of n, m, R and T. (15,5) SC MLE
1 1.973238 0.079675
T=1.5 2 1.939789 0.077831
3 1.885638 0.078876
1 1.867392 0.079728
T=3 2 1.834502 0.074958
3 1.847884 0.078985
1 1.830894 0.079596
T=6 2 1.911641 0.075108
3 1.934364 0.078967
BELSE
1.891014 0.068272
1.986081 0.065491
2.034979 0.066235
1.937474 0.068426
2.042167 0.065244
1.969651 0.066756
1.945366 0.067094
1.973607 0.063975
2.071729 0.064758
BELLI
1.972583 0.067765
1.976285 0.064376
1.982689 0.065884
1.984463 0.066992
1.921817 0.063222
2.012516 0.064599
2.040661 0.064777
2.032517 0.062560
1.965087 0.063042
BEMSE
1.948169 0.065671
1.944176 0.062970
1.944173 0.063497
2.027663 0.065730
1.953410 0.063016
1.961679 0.064774
2.010709 0.065280
1.992756 0.062741
1.934141 0.063194
BEMLI
1.944162 0.063427
1.957643 0.061589
2.021383 0.063786
1.960236 0.063980
1 1.881025 0.076943
3 1.972855 0.075491
1 1.942213 0.078296
1.995454 0.061805 T=6 2 1.933767 0.067667
1.932178 0.062594
3 1.95347 0.072971
1.924521 0.060496 T=3 2 1.980954 0.069489
1.961024 0.062427
1 1.853804 0.075851
1.944136 0.060467 T=1.5 2 1.825498 0.069386
BELSE
1.973509 0.068540
1.990955 0.065572
1.945768 0.066946
1.915587 0.068054
1.970208 0.065786
1.944995 0.066245
1.931701 0.068067
1.967516 0.065697
2.012349 0.067622
BELLI
2.002665 0.066537
1.969559 0.063779
1.999626 0.063952
2.028731 0.067571
2.028731 0.064532
1.984655 0.065646
1.904064 0.066134
2.045968 0.062306
1.906449 0.066178
BEMSE
2.008749 0.059877
2.038169 0.057133
1.934372 0.057895
1.988794 0.059997
2.038660 0.057649
2.04286 0.058457
2.015523 0.060303
2.051860 0.058425
1.973484 0.058966
BEMLI
2.005363 0.058807
1.927432 0.057119
1.985327 0.058077
2.028395 0.058769
1 1.862806 0.076893
3 1.960376 0.069495
1 1.942912 0.074680
2.026639 0.056436 T=6 2 1.959376 0.069775
1.9547321 0.057694
3 1.950116 0.074465
2.027658 0.055432 T=3 2 1.944698 0.064118
2.015797 0.057542
1 1.969658 0.076889
2.027952 0.056038 T=1.5 2 1.988566 0.070417
BELSE
1.975878 0.065657
1.919534 0.062658
1.913352 0.063263
1.952183 0.066723
1.969423 0.062749
1.947671 0.064272
1.844051 0.067130
1.917058 0.062506
1.929258 0.0654842
BELLI
2.013363 0.058845
1.966952 0.055482
2.003492 0.056711
1.966344 0.058982
1.941013 0.055682
1.867518 0.057684
1.924766 0.058121
1.938004 0.055444
1.953426 0.057318
BEMSE
1.972401 0.059323
1.972294 0.057503
1.989587 0.058653
1.964800 0.059054
1.972930 0.058231
1.984801 0.059962
2.038772 0.059961
1.993871 0.058338
1.991805 0.059645
BEMLI
1.956825 0.057323
1.956789 0.054324
1.987430 0.056432
1.946542 0.057543
1.946537 0.054324
1.9265437 0.056436
2.021675 0.056382
1.975433 0.055032
1.998406 0.056802
(25,5) SC MLE
(35,5) SC MLE
3 1.912932 0.075006
3 1.967892 0.071453
Table 5. The average estimates and MSEs of λ for different choices of n, m, R and T. (15,10) SC MLE
1 1.921846 0.068631
T=1.5 2 1.950832 0.065752
3 1.851199 0.066422
1 1.934297 0.068957
T=3 2 1.864161 0.062627
3 1.975013 0.066872
1 1.819948 0.068772
T=6 2 1.968442 0.064489
3 1.960876 0.066629
BELSE
1.887372 0.047165
1.937644 0.045417
1.984162 0.046084
1.970386 0.047692
1.862481 0.045407
1.997905 0.046585
2.020812 0.047595
1.957482 0.043304
1.987886 0.045421
BELLI
1.955154 0.045034
1.942713 0.042312
1.930631 0.044221
1.928957 0.045139
1.983933 0.040210
1.986196 0.045569
1.976542 0.045031
2.007507 0.042557
1.972874 0.044383
BEMSE
1.991172 0.046908
1.949181 0.043669
2.072178 0.045484
1.884554 0.047173
1.983401 0.043890
1.896977 0.046003
1.905299 0.046562
1.924939 0.045591
1.965875 0.046993
BEMLI
1.995432 0.044903
2.037654 0.043484
1.896005 0.045173
1.921355 0.044962
1 1.953421 0.058091
3 1.962184 0.056809
1 1.962701 0.057149
1.935568 0.042528 T=6 2 1.946895 0.053552
1.976543 0.043143
3 1.852712 0.055989
1.947678 0.042390 T=3 2 1.960107 0.054465
1.823577 0.044372
1 1.919238 0.058589
1.936547 0.042669 T=1.5 2 1.958045 0.052106
BELSE
1.980606 0.047497
1.988533 0.045468
1.956964 0.046204
1.960849 0.047702
2.045273 0.045102
1.932482 0.046055
1.947302 0.047237
1.937212 0.044384
1.944635 0.046992
BELLI
1.910302 0.04502
1.947216 0.043292
2.007776 0.044565
1.973022 0.047389
2.000368 0.043073
1.997982 0.044600
2.002009 0.045405
1.993926 0.043980
1.878747 0.044251
BEMSE
1.955231 0.046702
2.021834 0.042872
2.003352 0.045073
1.995063 0.046231
1.995571 0.043733
2.016105 0.044108
1.924083 0.046821
1.949492 0.044762
1.974315 0.045271
BEMLI
1.965444 0.044023
2.010986 0.043873
1.978890 0.044483
1.932556 0.045948
1 1.870547 0.067648
3 1.891902 0.064779
1 1.892254 0.070355
1.923456 0.042943 T=6 2 1.988194 0.064307
1.954477 0.043984
3 1.968083 0.065784
1.965339 0.042733 T=3 2 1.882219 0.059653
2.032598 0.043843
1 1.870235 0.068744
2.015667 0.042157 T=1.5 2 2.003352 0.063362
BELSE
1.994248 0.045882
1.948913 0.043392
2.000763 0.044423
1.923891 0.045973
1.957092 0.043134
1.948427 0.044817
1.992887 0.044628
1.911851 0.043005
1.928474 0.045618
BELLI
2.028727 0.043570
1.983665 0.042999
1.964908 0.043113
1.943154 0.044862
1.933773 0.042142
1.952075 0.043711
1.933632 0.043130
1.984804 0.041197
1.994212 0.042879
BEMSE
1.875302 0.044681
2.049804 0.041910
1.920990 0.043877
1.942404 0.044929
2.063353 0.042722
1.916561 0.044430
1.986064 0.043712
2.013146 0.042157
2.006899 0.042993
BEMLI
1.887096 0.041884
2.021458 0.039789
1.937652 0.041045
1.958763 0.041895
2.032675 0.039834
1.945678 0.040948
1.988753 0.041354
2.024567 0.038196
1..865443 0.038943
(25,10) SC MLE
(35,10) SC MLE
3 2.117773 0.055879
3 1.980001 0.067399
Table 6. The average estimates and MSEs of λ for different choices of n, m, R and T. (25,15) SC MLE
1 1.866557 0.040872
T=1.5 2 1.968989 0.035770
3 1.871328 0.043759
1 1.890274 0.044655
T=3 2 1.894774 0.038388
3 1.892327 0.042335
1 1.888119 0.045132
T=6 2 1.903932 0.037745
3 1.935332 0.042490
BELSE
1.915776 0.035613
2.008091 0.033685
1.898416 0.034176
1.941877 0.035765
1.922352 0.034773
1.963585 0.034993
1.998062 0.035034
1.999351 0.033872
1.880872 0.034365
BELLI
1.945238 0.035049
1.999221 0.032872
1.945921 0.033264
1.950923 0.035593
1.929221 0.033853
1.940075 0.034005
1.982881 0.034144
2.067825 0.032165
1.957472 0.032892
BEMSE
1.997672 0.032429
2.023193 0.029682
1.956038 0.029883
2.000252 0.032795
1.912962 0.029261
2.080173 0.031288
1.966123 0.032757
2.010151 0.028900
1.985813 0.029779]
BEMLI
1.956970 0.032132
2.021456 0.029046
1.975433 0.029345
2.065543 0.032003
1.934566 0.028932
2.065433 0.030980
1.935677 0.031936
2.003445 0.028237
1.976500 0.029014
1
T=1.5 2
3
1
T=3 2
3
1
T=6 2
3
1.934616 0.055812
1.884386 0.052734
1.878364 0.053777
1.892415 0.056184
1.830252 0.052512
1.834508 0.055978
1.933839 0.056003
1.930806 0.051659
1.866458 0.051985
BELSE
1.828333 0.036725
1.865634 0.035183
1.891321 0.036233
1.833686 0.036996
1.818133 0.034282
1.838547 0.035655
1.858423 0.036065
1.835306 0.032730
1.870373 0.035477
BELLI
1.843732 0.036432
1.865632 0.033395
1.996534 0.034206
1.833625 0.035045
1.865132 0.033273
1.988383 0.034930
1.968426 0.035384
1.843530 0.031284
1.850247 0.034432
BEMSE
2.012367 0.033043
2.032550 0.028653
1.986052 0.029879
1.925456 0.032914
1.972121 0.029789
1.983224 0.030407
1.988772 0.031682
1.945005 0.028582
2.052181 0.029938
BEMLI
2.012340 0.030564
2.004321 0.028343
1.956643 0.028994
1.935467 0.031453
1.927775 0.028375
1.924678 0.030056
1.9367886 0.028802
1.934566 0.026532
1.976543 0.028376
(35,15) SC MLE
Table 7. The average approximate and MCMC intervals when α = 0.5, λ = 1.9, T = 1.5. 95% Approximate interval n 15
25
35
%95 MCMC intervals
m 5
SC 1 2 3
[0.217022,1.358931] [0.244219,1.252215] [0.234674,1.335452]
[0.753219,2.626949] [0.775861,2.590252] [0.768880,2.612829]
[0.303775,1.056285] [0.334656,0.993757] [0.317652,1.038810]
[1.138125,2.427819] [1.187773,2.388941] [1.159461,2.408751]
10
1 2 3
[0.240401,1.337099] [0.263754,1.156657] [0.252596,1.329209]
[0.623226,2.130524] [0.684697,2.065956] [0.659259,2.115271]
[0.308849,0.975802] [0.359969,0.905473] [0.338008,0.952347]
[1.150683,2.380817] [1.201467,2.254766] [1.199697,2.376148]
5
1 2 3
[0.167447,1.230995] [0.182344,1.222979] [0.177845,1.230671]
[0.611088,2.310787] [0.669659,2.124191] [0.645937,2.302617]
[0.313906,1.066825] [0.346258,0.998411] [0.327805,1.046719]
[1.136538,2.329818] [1.288125,2.165185] [1.155937,2.202617]
10
1 2 3
[0.199255,1.136108] [0.223697,1.101103] [0.213386,1.105934]
[0.719246,2.001449] [0.763035,2.026965] [0.756254,2.100958]
[0.326247,0.979562] [0.361775,0.821899] [0.352543,0.959334]
[1.250283,2.228965] [1.294394,2.103436] [1.291609,2.178296]
15
1 2 3
[0.345477,1.053997] [0.391948,1.023427] [0.378436,1.051638]
[0.820817,2.002554] [0.868080,2.025040] [0.841107,1.939172]
[0.359715,0.949391] [0.389983,0.788405] [0.364437,0.869676]
[1.372504,2.168136] [1.440046,2.087659] [1.397334,2.108525]
5
1 2 3
[0.203479,1.289107] [0.209396,1.235995] [0.207164,1.260111]
[0.661323,2.666724] [0.692741,2.309759] [0.677811,2.405088]
[0.303372,0.989979] [0.346285,0.773775] [0.332875,0.831268]
[1.250081,2.115357] [1.298975,2.092936] [1.291965,2.107951]
10
1 2 3
[0.359182,1.108818] [0.393171,1.006819] [0.387053,1.100818]
[0.726563,2.223468] [0.764041,2.195959] [0.742930,2.129226]
[0.355766,0.858461] [0.398515,0.729625] [0.370248,0.797905]
[1.376925,2.109218] [1.400836,1.995781] [1.395965,2.001896]
15
1 2 3
[0.400805,1.084763] [0.437723,1.000157] [0.411788,1.008134]
[0.747260,2.044919] [0.804645,2.004055] [0.790557,2.014546]
[0.374919,0.822556] [0.410997,0.683886] [0.397873,0.787312]
[1.425328,2.041592] [1.489525,1.969232] [0.447873,1.997012]
Table 8. The average approximate and MCMC intervals when α = 0.5, λ = 1.9, T = 3. 95% Approximate interval n 15
25
35
%95 MCMC intervals
m 5
SC 1 2 3
[0.224248,1.265712] [0.254119,1.294967] [0.239339,1.269471]
[0.633868,2.671132] [0.685412,2.577928] [0.652724,2.601150]
[0.337975,1.055654] [0.370383,1.036897] [0.349107,1.050247]
[1.043504,2.235438] [1.172732,2.072939] [1.03585,2.219853]
10
1 2 3
[0.232678,1.251914] [0.259395,1.229257] [0.246952,1.259233]
[0.785632,2.342022] [0.737616,2.236895] [0.754714,2.340643]
[0.343042,0.996277] [0.379806,0.945076] [0.355677,0.983412]
[1.162148,2.175408] [1.222694,2.004307] [1.170572,2.103897]
5
1 2 3
[0.221671,1.268165] [0.263001,1.248558] [0.248447,1.253809]
[0.646945,2.243136] [0.675518,2.168390] [0.648447,2.325809]
[0.338666,1.049723] [0.377557,0.999906] [0.359632,1.005783]
[1.105181,2.106707] [1.209713,2.068766] [1.130088,2.100555]
10
1 2 3
[0.242887,1.129554] [0.283850,1.109576] [0.269891,1.110828]
[0.725133,2.11655] [0.787907,2.083375] [0.739286,2.127549]
[0.350185,0.976462] [0.396392,0.884766] [0.367782,0.956533]
[1.222812,2.087969] [1.285179,2.015836] [1.246572,2.047622]
15
1 2 3
[0.334387,1.095496] [0.355385,1.045646] [0.354576,1.091015]
[0.839834,2.068341] [0.875492,2.001039] [0.852091,2.100831]
[0.379067,0.967548] [0.401344,0.703184] [0.380039,0.820625]
[1.310081,2.002744] [1.440254,1.993098] [1.397476,1.911347]
5
1 2 3
[0.240762,1.249942] [0.279268,1.212839] [0.253797,1.226203]
[0.640079,2.254758] [0.668887,2.113641] [0.654465,2.325535]
[0.351212,0.971625] [0.377775,0.889712] [0.359858,0.896125]
[1.276975,2.220625] [1.292189,2.061825] [1.246575,2.144073]
10
1 2 3
[0.311536,1.115066] [0.350134,1.001746] [0.331276,1.106024]
[0.758862,2.201022] [0.788261,2.185489] [0.762818,2.110932]
[0.372388,0.823597] [0.388009,0.748818] [0.376337,0.793208]
[1.385937,2.182031] [1.506658,2.087969] [1.429283,2.122316]
15
1 2 3
[0.392429,1.087593] [0.421392,1.000493] [0.405153,1.0016398]
[0.768746,2.164669] [0.808448,2.013349] [0.781400,2.086290]
[0.383675,0.872811] [0.406542,0.696875] [0.388354,0.755863]
[1.408032,2.243869] [1.657825,2.169232] [1.502578,2.272949]
Table 9. The average approximate and MCMC intervals when α = 0.5, λ = 1.9, T = 6. 95% Approximate interval n 15
25
35
%95 MCMC intervals
m 5
SC 1 2 3
[0.221891,1.2752253] [0.283617,1.199379] [0.251148,1.253276]
[0.608440,2.2507189] [0.723457,2.116543] [0.649635,2.383343]
[0.317277,1.180417] [0.357541,1.010436] [0.329613,1.119967]
[1.089664,2.232345] [1.147473,2.046963] [1.046665,2.179237]
10
1 2 3
[0.303742,1.266257] [0.347452,1.153279] [0.326523,1.247536]
[0.717448,2.077521] [0.831368,2.000435] [0.735369,2.015607]
[0.343434,0.985875] [0.378667,0.901116] [0.361312,0.977086]
[1.043985,2.186484] [1.060122,2.000154] [1.019565,2.084831]
5
1 2 3
[0.220948,1.270295] [0.289017,1.186525] [0.254049,1.235544]
[0.624507,2.227998] [0.757070,2.089176] [0.700374,2.165152]
[0.399814,0.995857] [0.359375,0.935078] [0.376891,0.961672]
[1.082685,2.233715] [1.139969,2.113675] [1.059394,2.212123]
10
1 2 3
[0.315960,1.162922] [0.365186,1.014376] [0.330608,1.118971]
[0.686196,2.126698] [0.817042,2.048195] [0.749592,2.154782]
[0.302462,0.976481] [0.335375,0.864737] [0.389095,0.951769]
[1.028748,2.055669] [1.176805,2.008775] [1.067871,2.123748]
15
1 2 3
[0.320771,1.091281] [0.403206,1.002819] [0.362202,1.028794]
[0.695345,2.109273] [0.865616,2.092113] [0.769822,2.051780]
[0.302138,0.922981] [0.471875,0.748545] [0.432651,0.903185]
[1.165829,1.989146] [1.381787,1.969844] [1.255047,1.970017]
5
1 2 3
[0.227461,1.210659] [0.291349,1.179637] [0.246174,1.101398]
[0.624023,2.197847] [0.734257,2.086509] [0.675452,2.107993]
[0.390374,0.951461] [0.361399,0.902894] [0.328926,0.950301]
[1.089107,2.202266] [1.114722,2.098443] [1.071864,2.109439]
10
1 2 3
[0.306472,1.182667] [0.362522,1.098167] [0.348845,1.108166]
[0.687714,2.120852] [0.711664,2.024642] [0.692569,2.096505]
[0.356604,0.808634] [0.476327,0.769373] [0.451984,0.743334]
[1.157401,2.096215] [1.490766,1.981642] [1.346735,2.050934]
15
1 2 3
[0.355319,1.012721] [0.388670,1.007495] [0.373173,1.011453]
[0.718673,2.015370] [0.757981,2.008766] [0.724214,2.016424]
[0.404964,0.799065] [0.405145,0.748584] [0.439907,0.794385]
[1.399024,2.009608] [1.614981,1.943753] [1.516132,1.993497]
Table 10. The MLEs, BELs and BEMs of α and λ for the nano-droplet data (n = 32).

m    T    SC   Method    α           λ
20   18   1    MLE       1.991372    369.92656
               BEL_SE    2.100513    285.22854
               BEL_LI    2.122482    284.79011
               BEM_SE    2.334395    271.24814
               BEM_LI    2.318505    271.24802
22   20   2    MLE       1.823382    231.80413
               BEL_SE    2.012144    298.03556
               BEL_LI    2.021051    298.98869
               BEM_SE    2.125400    271.22787
               BEM_LI    2.114121    271.22784
23   19   3    MLE       2.127238    233.26713
               BEL_SE    2.091510    269.21696
               BEL_LI    2.003203    285.85099
               BEM_SE    2.189406    271.22517
               BEM_LI    2.179646    271.22511
Table 11. The 95% AI and MI of α and λ for the nano-droplet data.

m    T    SC   95% AI of α         95% AI of λ             95% MI of α         95% MI of λ
20   18   1    [0.8967, 3.0860]    [231.6689, 508.1842]    [1.9549, 2.4012]    [271.1947, 271.3412]
22   20   2    [0.9366, 2.7102]    [148.2428, 315.3654]    [1.9643, 2.4883]    [271.1487, 271.2337]
23   19   3    [1.1146, 3.1398]    [151.0406, 315.4937]    [1.9526, 2.3750]    [271.1446, 271.2336]