Reliability Engineering and System Safety 87 (2005) 53–63 www.elsevier.com/locate/ress
Modeling the reliability of threshold weighted voting systems

Minge Xie (a,1), Hoang Pham (b,*)

(a) Department of Statistics, Rutgers University, Piscataway, NJ 08854, USA
(b) Department of Industrial Engineering, Rutgers University, 96 Frelinghuysen Road, Piscataway, NJ 08854, USA

Received 28 November 2003; accepted 1 April 2004
Abstract

In many applications, ranging from target detection to safety monitoring, we are interested in determining whether or not to accept a hypothesis based on the available information. In this paper we model the reliability of threshold weighted voting systems (WVS) with multiple failure modes, and present a general recursive reliability function of the WVS. We also develop approximation formulas for calculating the reliability of a WVS with a large number of units, as well as reliability functions for time-dependent threshold weighted voting systems, in which the error probabilities of each unit are functions of time. Finally, we discuss the optimal stopping time that minimizes the total cost of the system subject to a reliability constraint.
© 2004 Elsevier Ltd. All rights reserved.

Keywords: System reliability; Weighted threshold voting; Time dependent system; Iterative reliability function; Saddle point approximation
1. Introduction

In many applications, ranging from human decision-making to target detection, including human organization (HO) systems and undersea communication systems, a decision has to be made on whether or not to accept a hypothesis using a threshold weighted voting mechanism [1]. In HO systems, for example, a group with n members needs to decide whether or not to accept an innovation-oriented proposal. The proposal is of one of two types: good or bad. Let us assume that the communication among members is limited, and that each member makes a yes-no decision on a given proposal. Each member can make two types of errors: rejecting a good proposal and accepting a bad one. It is therefore worthwhile to determine the optimal acceptance rule that minimizes the probability of making an incorrect decision. In undersea communication systems, the system consists of n electronic sensors, each scanning the water for enemy targets [2]. Each sensor might falsely detect a target when none is approaching, and the sensors may have different failure probabilities; thus, each will carry a different weight on its output. Therefore, it is important to determine a threshold level that maximizes the probability of making a correct decision.

These applications share a common principle: the individual decisions of the system units need not be consistent and can even be contradictory, so for any such system, rules must be made on how to incorporate all available information into a final decision. System units and their outputs are, in general, subject to different errors, which in turn affect the reliability of the system. Reliability models become even more complicated when weights are imposed. In HO systems [1], for example, some members within the organization carry more weight than others, whether because of their expertise or their status within the organization. Hence, it is of interest to develop methods for evaluating the reliability of such weighted threshold voting systems.

Much of the existing research concerns restrictive models. Un-weighted problems with multiple failure modes have been studied intensively [3-8]; details of such studies can be found in Ref. [9]. There are a few recent works on weighted systems with multiple failure modes [10,11]. This research focuses on threshold weighted voting systems with multiple failure modes. In Section 1.1 we summarize some existing WVS reliability models in the literature.

* Corresponding author. Fax: +1-732-445-5467. E-mail addresses: [email protected] (H. Pham), [email protected] (M. Xie).
1 Supported in part by NSF SES-0241859.

0951-8320/$ - see front matter © 2004 Elsevier Ltd. All rights reserved. doi:10.1016/j.ress.2004.04.001
Nomenclature

n            number of units in the system
w_i          weight of unit i, i = 1, 2, …, n
τ            threshold value, 0 < τ ≤ 1
P            system input (either 0 or 1); 0 = unfavorable (reject), 1 = favorable (accept)
t            time that each unit spends on decision making
ν            index for the input of a unit; ν = 0, 1
ν′           index for the output of a unit (either 0, x, or 1); 0 = unfavorable (reject), x = no opinion, 1 = favorable (accept)
I_{ν′}       set of indices of units whose output is ν′; ν′ = 0, x, 1
h_{i,10}(t)  Pr(unit i stuck-at-0 at time t | P = 1)
h_{i,1x}(t)  Pr(unit i stuck-at-x at time t | P = 1)
h_{i,11}(t)  Pr(unit i makes the right decision 1 at time t | P = 1) = 1 − h_{i,10}(t) − h_{i,1x}(t)
h_{i,01}(t)  Pr(unit i stuck-at-1 at time t | P = 0)
h_{i,0x}(t)  Pr(unit i stuck-at-x at time t | P = 0)
h_{i,00}(t)  Pr(unit i makes the right decision 0 at time t | P = 0) = 1 − h_{i,01}(t) − h_{i,0x}(t)
S            system output; there are only two types of system output, 0 or 1
S_t          system output by time t
R            system reliability = Pr(the system makes the correct decision for any input P) = Pr(S = 1 | P = 1)Pr(P = 1) + Pr(S = 0 | P = 0)Pr(P = 0)
R_t          system reliability by time t
Q_n(0)       Pr(the system output S = 1 given the input P = 1) = Pr(S = 1 | P = 1)
Q_{n,t}(0)   same as Q_n(0), except that its value depends on the time t
Q̃_n(0)      Pr(the system output S = 0 given the input P = 0) = Pr(S = 0 | P = 0)
Q̃_{n,t}(0)  same as Q̃_n(0), except that its value depends on the time t

1.1. Existing models

Parhami [12] studied inexact weighted threshold voting systems. As opposed to exact voting systems, inexact voting systems have a range of output values that are considered correct. In an inexact weighted threshold voting system, a given instance is an output of the system when the cumulative vote for that instance equals or exceeds a predefined threshold. Parhami presents a pseudo-linear algorithm that determines all system outputs, given a set of weighted inputs and a threshold, and also proposes an improved O(n) algorithm for computing all system outputs over a small object space [13]. Blough and Sullivan [14] develop a general probabilistic model for voting systems that incorporates four distinct probability distributions, and use maximum likelihood estimation to determine the value that is most likely to be correct. Pham [15] studies dynamic redundant systems with three failure modes, in which each unit is subject to stuck-at-0, stuck-at-1 and stuck-at-x failures and the system outcome is either good or failed; focusing on dynamic majority and k-out-of-n systems, he derives optimal design policies for maximizing the system reliability.

In recent years, several authors [1,10,11] have studied the introduction of weights into threshold voting systems. A particular threshold voting system, with three failure modes for the individual units and two failure modes for the system, is studied in Refs. [1,10]. Nordmann and Pham [10] present an algorithm to evaluate the reliability of dynamic-threshold weighted voting systems, and more recently presented a general model for the same purpose [1]. They show that the introduction of weights brings
an immense computational complexity; therefore, by restricting the weights to integer values and the threshold to a rational number, they present analytic and efficient computational methods for obtaining the reliability of weighted threshold voting systems. A recent paper by Levitin and Lisnianski [11] develops an algorithm based on the universal generating function technique to compute the reliability. The threshold weighted voting system in Ref. [11] differs slightly from the one in Refs. [1,10] in how indecisive votes are treated: the system in Ref. [11] produces an indecisive output when all units vote for the indecisive 'x'. The method of Ref. [11] is extended in Ref. [16] to asymmetric threshold weighted voting systems.

Some related problems, such as classification with multilayer perceptrons, have been studied by several authors [17,18]. De Stefano et al. [19] recently presented a method for defining a reject option applicable to a given 0-reject classifier. The introduction of a reject option aims to reject the highest possible percentage of the samples that would otherwise be misclassified. The reject option is based on an estimate of the classification reliability, measured by a reliability evaluator function. This method for determining the optimal threshold value is independent of the specific 0-reject classifier, while the definition of the reliability evaluators is tied to the classifier's architecture.

In this paper, we present a simple approach to evaluating the reliability of threshold weighted voting systems with multiple failure modes, for both regular and time-dependent cases. Section 2, through the illustration of the human organization (HO) system, provides a detailed description of regular threshold weighted voting systems. It also presents a general
recursive reliability function of the systems, which leads to a simple recursive algorithm for computing the reliability. For systems with a large number of units (voters), a set of highly accurate approximation formulas, based on the saddle point approximation, is developed for computing the system reliabilities; these formulas are very simple and easy to use. Section 3 extends the results of Section 2 to time-dependent systems, in which the error probabilities of each unit are functions of a given time. Section 4 formulates a cost function and derives the optimal stopping time that minimizes the total cost of the system subject to a pre-specified level of reliability.
2. Model formulation

In this section, we use the HO system as a general illustrative example of the generalized threshold weighted voting systems. Weights are not used at first, in order to present the basic principles more clearly; the extension to weighted models is straightforward. In an un-weighted model of an HO system with n officers, the organization must decide whether or not to accept an innovation-oriented proposal. Each officer reviews the proposal against a given set of available information and returns his or her decision by (or before) a fixed time t. Afterward, the HO system makes a final decision in a manner that depends on the system structure. For example, if the organization operates as a dynamic threshold system, it ignores all indecisive units and accepts the proposal if and only if at least a pre-specified fraction of the remaining voters decides for the proposal; otherwise, the proposal is rejected. The reliability of the HO system is defined as

    Pr{the HO system makes the correct decision}

This assumes that the proposal is inherently either good or bad prior to the system's decision. To see how such systems fit into the framework of the general model, note that the proposal in question is either acceptable or unacceptable, and this information is implicitly contained in the material that each officer reviews. Thus, though in disguised form, we can assume that each officer (in the following, referred to as a 'unit') receives an input that is either 1 (for acceptable) or 0 (for unacceptable). All units are provided with the same input, which depends on the actual proposal on which a decision is to be made, and each unit produces an individual output that is 1 (accept), 0 (reject), or x (indecisive). Ideally, each unit's output should always equal its input, which means the unit makes the right decision about the reviewed information.
However, the units are subject to errors, of three types: (Type 1) accept an input that should be rejected; (Type 2) reject an
input that should be accepted; and (Type 3) no decision is made. Depending on the input, the third type of error can be further divided into two sub-types: (a) no decision is made on an input that should be rejected, and (b) no decision is made on an input that should be accepted. Denote by h_{i,01}, h_{i,10}, h_{i,0x} and h_{i,1x} the probabilities of type 1, type 2, type 3a and type 3b errors, respectively. As in Refs. [1,10,11], and for simplicity, the values of h_{i,νν′} are assumed to be known; in practice, they can often be specified by experts or estimated from past samples.

In this paper, we study a general reliability model of threshold weighted voting systems subject to multiple failure modes and a variable threshold. Suppose w_i is the weight of unit i, I_1 is the collection of all units that provide favorable outcomes for the system input, I_x is the collection of all units that cannot reach a conclusion, and τ is the threshold value, 0 < τ ≤ 1. The weighted voting system incorporates all the available information from the individual units and produces a single system output based on the following criterion [1,10,11]: the system output is '1' (favorable) if and only if

    Σ_{i∈I_1} w_i ≥ τ Σ_{i∉I_x} w_i                                      (1)

and it is '0' (unfavorable) otherwise. Such a model is a dynamic threshold weighted voting system subject to two failure modes. Failure mode 1: the system accepts an input that should be rejected. Failure mode 2: the system rejects an input that should be accepted. The reliability of the system is the probability that the system makes a correct decision for any given input.

2.1. System reliability modeling

Assume that the amount of time (say, t) that each unit spends on decision making does not affect the result. Such weighted voting systems have been studied by Nordmann and Pham [1,10], who developed a mathematical model for evaluating their reliability. Their model, however, cannot be applied directly to many systems in practice because of its immense combinatorial complexity, and it places two restrictions on the generality of the model parameters: unit weights must be scaled to integers, and the threshold must be described by a rational number. In this subsection, we present a simple recursive formula for the reliability of weighted voting systems without imposing these two restrictions. Let us define random variables Z_i, i = 1, 2, …, n, as

    Z_i = 1 − τ   if i ∈ I_1
        = 0       if i ∈ I_x                                              (2)
        = −τ      if i ∈ I_0
So, each unit i contributes the value Z_i w_i to the weighted sum in criterion (1). In other words, as first discussed in Ref. [16], 'each unit i adds value (1 − τ)w_i to the total WVS score if it votes for proposition acceptance, value −τw_i if it votes for proposition rejection, and nothing if it abstains.' We further define the function Q_n(s) = Pr(Σ_{i=1}^n w_i Z_i ≥ s | P = 1). Then

    Q_n(0) = Pr(Σ_{i=1}^n w_i Z_i ≥ 0 | P = 1) = Pr(Σ_{i∈I_1} w_i ≥ τ Σ_{i∉I_x} w_i | P = 1) = Pr(S = 1 | P = 1)

is the probability that the system output is 1 given that the input is P = 1. Similarly, if we write Q̃_n(s) = Pr(Σ_{i=1}^n w_i Z_i < s | P = 0), then

    Q̃_n(0) = Pr(Σ_{i=1}^n w_i Z_i < 0 | P = 0) = Pr(Σ_{i∈I_1} w_i < τ Σ_{i∉I_x} w_i | P = 0) = Pr(S = 0 | P = 0)

is the probability that the system output is 0 given that the input is P = 0. We now present the following result.
Theorem 1. The reliability of a weighted threshold voting system with n units is given by

    R = Q_n(0) Pr(P = 1) + Q̃_n(0) Pr(P = 0)                              (3)

where Q_n(s) and Q̃_n(s) can be written in terms of recursive functions as follows. For n ≥ 2,

    Q_n(s)  = Q_{n−1}(s_n^−) h_{n,11} + Q_{n−1}(s) h_{n,1x} + Q_{n−1}(s_n^+) h_{n,10}
    Q̃_n(s) = Q̃_{n−1}(s_n^−) h_{n,01} + Q̃_{n−1}(s) h_{n,0x} + Q̃_{n−1}(s_n^+) h_{n,00}

where s_n^− = s − (1 − τ)w_n and s_n^+ = s + τw_n, and, for n = 1,

    Q_1(s)  = 1                      if s ≤ −τw_1
            = h_{1,11} + h_{1,1x}    if −τw_1 < s ≤ 0
            = h_{1,11}               if 0 < s ≤ (1 − τ)w_1
            = 0                      if s > (1 − τ)w_1

    Q̃_1(s) = 0                      if s ≤ −τw_1
            = h_{1,00}               if −τw_1 < s ≤ 0
            = h_{1,00} + h_{1,0x}    if 0 < s ≤ (1 − τ)w_1
            = 1                      if s > (1 − τ)w_1
The proof of Theorem 1 is in Appendix A.1. Based on the results in the theorem, we have written a simple C program to calculate the system reliability. Since Pr(P = 1) + Pr(P = 0) = 1, it is easy to see from (3) that the reliability value R lies between R_min = min{Q_n(0), Q̃_n(0)} and R_max = max{Q_n(0), Q̃_n(0)}. To get a better sense of weighted voting systems and of the results in Theorem 1, we consider the following example.
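Before turning to the examples, note that the recursion in Theorem 1 is straightforward to implement directly (the authors' own implementation is the C program mentioned above). Below is a minimal Python sketch; the function and variable names are ours, and the 4-unit parameter values are hypothetical. It computes Q_n(0), Q̃_n(0), and R of Eq. (3):

```python
def upper_tail(s, w, p1, px, tau):
    """Pr(sum_i w_i Z_i >= s) via the recursion of Theorem 1.
    Unit i contributes (1 - tau) * w_i with probability p1[i] (votes 1),
    0 with probability px[i] (abstains), and -tau * w_i otherwise (votes 0)."""
    if len(w) == 1:                       # base case Q_1(s) of Theorem 1
        if s <= -tau * w[0]:
            return 1.0
        if s <= 0.0:
            return p1[0] + px[0]
        if s <= (1.0 - tau) * w[0]:
            return p1[0]
        return 0.0
    wn, q1, qx = w[-1], p1[-1], px[-1]
    w0, a0, b0 = w[:-1], p1[:-1], px[:-1]
    return (q1 * upper_tail(s - (1.0 - tau) * wn, w0, a0, b0, tau)          # unit n votes 1
            + qx * upper_tail(s, w0, a0, b0, tau)                            # unit n abstains
            + (1.0 - q1 - qx) * upper_tail(s + tau * wn, w0, a0, b0, tau))   # unit n votes 0

# Hypothetical 4-unit system, threshold tau = 0.5, Pr(P = 1) = 0.5
w   = [2.0, 1.0, 1.5, 1.0]
h11 = [0.80, 0.70, 0.75, 0.85]   # Pr(correct '1' vote | P = 1)
h1x = [0.10, 0.10, 0.10, 0.05]   # Pr(abstain | P = 1)
h01 = [0.10, 0.15, 0.10, 0.10]   # Pr(wrong '1' vote | P = 0)
h0x = [0.10, 0.10, 0.10, 0.10]   # Pr(abstain | P = 0)
Qn  = upper_tail(0.0, w, h11, h1x, 0.5)          # Q_n(0) = Pr(S = 1 | P = 1)
Qtn = 1.0 - upper_tail(0.0, w, h01, h0x, 0.5)    # Q~_n(0) = Pr(S = 0 | P = 0)
R   = 0.5 * Qn + 0.5 * Qtn                       # Eq. (3)
```

Q̃_n(0) is obtained here by complementation, Q̃_n(0) = 1 − Pr(Σ w_i Z_i ≥ 0 | P = 0), because the recursion has exactly the same form under P = 0 with (h_{i,01}, h_{i,0x}, h_{i,00}) in place of (h_{i,11}, h_{i,1x}, h_{i,10}).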
Example 1. Suppose Pr(P = 1) = Pr(P = 0) = 1/2. Consider the following three extreme cases: (a) the output of the system is always 1; (b) the output of the system is always 0; and (c) the output of the system is either 0 or 1 with equal probability. For simplicity, and without loss of generality, assume there is only one unit in the system, i.e. n = 1. In this case, the probability that the system outputs 1 is (h_{1,11} + h_{1,01})/2 and the probability that it outputs 0 is (h_{1,10} + h_{1,00})/2; these two probabilities differ, as they should. Also, from Theorem 1, the system reliability is R = (h_{1,11} + h_{1,1x} + h_{1,00})/2.

In case (a), where the system output is always 1, we need (h_{1,11} + h_{1,01})/2 ≡ 1. This implies h_{1,11} ≡ h_{1,01} ≡ 1, with all other h_{1,νν′} equal to 0; thus, the reliability in case (a) is 1/2. Similarly, the reliability in case (b) is also 1/2. These results make sense: the reliability of a system is defined as the probability of making a correct decision, and if the input is 0 or 1 with probability 1/2 each, the system has an equal chance of making the right decision in either case (a) or (b), even though the probabilities of these two cases are different. In case (c), the output of the system is 0 or 1 with probability 1/2 each, so we need (h_{1,11} + h_{1,01})/2 ≡ (h_{1,10} + h_{1,00})/2 ≡ 1/2. A straightforward calculation gives h_{1,00} ≡ 1 − h_{1,01} ≡ h_{1,11} ≡ 1 − h_{1,10} and h_{1,0x} ≡ h_{1,1x} ≡ 0. Thus, the reliability of the system equals h_{1,00} ≡ 1 − h_{1,01} ≡ h_{1,11} ≡ 1 − h_{1,10}, which is not necessarily 1/2; it equals 1/2 only when h_{1,00} ≡ h_{1,01} ≡ h_{1,11} ≡ h_{1,10} ≡ 1/2.

The next example uses Theorem 1 directly to compute the reliability of a system with 4 units.
Example 2. In the HO system described in Section 2, suppose we have n = 4 officers (units) with threshold τ = 0.5, and suppose the probabilities of their judgment errors and their weights are, respectively,

    h_{1,10} = 0.17, h_{1,1x} = 0.14, h_{1,01} = 0.15, h_{1,0x} = 0.10, w_1 = 2
    h_{2,10} = 0.14, h_{2,1x} = 0.10, h_{2,01} = 0.11, h_{2,0x} = 0.14, w_2 = 1
    h_{3,10} = 0.11, h_{3,1x} = 0.12, h_{3,01} = 0.16, h_{3,0x} = 0.12, w_3 = 1.5
    h_{4,10} = 0.15, h_{4,1x} = 0.12, h_{4,01} = 0.10, h_{4,0x} = 0.15, w_4 = 1

Then we have the following results: if Pr(P = 1) = 1, then R = 0.91615; if Pr(P = 0) = 1, then R = 0.91641; and if Pr(P = 1) = 0.7, then R = 0.91623.

2.2. System reliability approximation

The number of possible combinations of votes in a weighted voting system defined by Eq. (1) (equivalently, the size of the sample space of the random variable Y_n = Σ_{i=1}^n w_i Z_i) is 3^n. Without further special assumptions on the w_i's and the h_i's (for example, those in Nordmann and Pham [1]), any
algorithm for computing the system reliability (including the one discussed in Section 2.1 and the most recent one by Levitin and Lisnianski [11]) involves a summation of 3^n terms. When n is large, computing the reliability may be difficult, if not impossible. To overcome this problem, we propose an accurate large-sample approximation formula for evaluating the system reliability. The approximation formulas are developed from the so-called saddle point approximation technique (see, e.g. Barndorff-Nielsen and Cox [20]), and in practice they work well even for mid-sized or small samples. Except for solving a pair of simple nonlinear equations to find the saddle points, the computations of the proposed approach are straightforward: for any given size n, the approximate values of the system reliabilities can be calculated almost instantly.

Define Z_i* = Z_i + τ, where Z_i is defined in (2). When the system input is P = 1, the logarithm of the (conditional) moment generating function (also called the cumulant function) of Σ_{i=1}^n w_i Z_i* is

    K_n(u) = Σ_{i=1}^n log(h_{i,11} e^{u w_i} + h_{i,1x} e^{u τ w_i} + h_{i,10})

Let ′ and ′′ denote the first and second derivatives of a function, respectively; i.e. K_n′(u) = ∂K_n(u)/∂u, and so on. The solution point u = û_n of the equation K_n′(u) = τ Σ_{i=1}^n w_i is called a saddle point. Similarly, when the system input is P = 0, the (conditional) cumulant function of Σ_{i=1}^n w_i Z_i* is

    K̃_n(u) = Σ_{i=1}^n log(h_{i,01} e^{u w_i} + h_{i,0x} e^{u τ w_i} + h_{i,00})
The saddle point ũ_n is obtained by solving the equation K̃_n′(u) = τ Σ_{i=1}^n w_i. Denote

    ê_n = û_n {K_n′′(û_n)}^{1/2},   f̂_n = sign(û_n) |2{û_n K_n′(û_n) − K_n(û_n)}|^{1/2}

and

    ẽ_n = ũ_n {K̃_n′′(ũ_n)}^{1/2},   f̃_n = sign(ũ_n) |2{ũ_n K̃_n′(ũ_n) − K̃_n(ũ_n)}|^{1/2}

where sign(u) is the sign function, taking the value +1 if u is positive and −1 if u is negative. The following theorem presents the approximation results for Q_n(0) and Q̃_n(0).
Theorem 2. Suppose there exist small positive constants ε > 0, δ_1 > 0, δ_2 > 0 and a large positive number M < ∞, such that ε < w_i < M and δ_1 < h_{i,νν′} < 1 − δ_2 for ν ≠ ν′. Then Q_n(0) and Q̃_n(0) can be approximated by

    Q_n(0)  = Pr(Σ_{i=1}^n w_i Z_i ≥ 0 | P = 1) = 1 − Φ(ê_n) + φ(ê_n){1/ê_n − 1/f̂_n} + O(n^{−3/2})

    Q̃_n(0) = Pr(Σ_{i=1}^n w_i Z_i < 0 | P = 0) = Φ(ẽ_n) − φ(ẽ_n){1/ẽ_n − 1/f̃_n} + O(n^{−3/2})
where Φ(·) and φ(·) are, respectively, the cumulative distribution function and the density function of a standard normal random variable, and the notation O(n^{−3/2}) indicates that, as n → ∞, the remaining terms tend to 0 at a rate equal to or faster than n^{−3/2}.

A sketch of the proof of Theorem 2 is included in Appendix A.2. The error terms in Theorem 2 are of the order O(n^{−3/2}), so it is easy to see that the error in the system reliability R = Q_n(0)Pr(P = 1) + Q̃_n(0)Pr(P = 0), calculated from the above approximation, is at most of the order O(n^{−3/2}). It is worth noting that the rate O(n^{−3/2}) is substantially more accurate than the O(n^{−1/2}) rate of the standard normal approximation. When n is large, these approximations are extremely accurate. The next numerical example shows that the approximations are fairly accurate even for small n and become more accurate as n grows.
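The Theorem 2 computation involves only a one-dimensional root search plus a few closed-form quantities. Below is a minimal Python sketch (the helper names are ours; the saddle equation is solved by simple bisection, and the tail formula is implemented in the standard Lugannani-Rice form on which such saddle point approximations are based), illustrated on two copies of the Example 2 units, so that n = 8:

```python
import math

def cumulant(u, w, p1, px, tau):
    """K(u), K'(u), K''(u) for sum_i w_i Z_i*, where unit i contributes
    w_i with prob p1[i], tau*w_i with prob px[i], and 0 otherwise."""
    K = K1 = K2 = 0.0
    for wi, a, b in zip(w, p1, px):
        c = 1.0 - a - b
        m0 = a * math.exp(u * wi) + b * math.exp(u * tau * wi) + c
        m1 = a * wi * math.exp(u * wi) + b * tau * wi * math.exp(u * tau * wi)
        m2 = a * wi**2 * math.exp(u * wi) + b * (tau * wi)**2 * math.exp(u * tau * wi)
        K += math.log(m0)
        K1 += m1 / m0
        K2 += m2 / m0 - (m1 / m0) ** 2
    return K, K1, K2

def saddle_upper_tail(w, p1, px, tau):
    """Saddle-point approximation to Pr(sum_i w_i Z_i >= 0)."""
    y = tau * sum(w)                     # saddle equation: K'(u_hat) = tau * sum(w_i)
    lo, hi = -60.0, 60.0                 # K' is increasing (K is convex), so bisect
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if cumulant(mid, w, p1, px, tau)[1] < y:
            lo = mid
        else:
            hi = mid
    u = 0.5 * (lo + hi)
    K, K1, K2 = cumulant(u, w, p1, px, tau)
    e = u * math.sqrt(K2)                                          # e_hat_n
    f = math.copysign(math.sqrt(abs(2.0 * (u * K1 - K))), u)       # f_hat_n
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    phi = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    return 1.0 - Phi(f) + phi(f) * (1.0 / e - 1.0 / f)

# Two copies of the Example 2 units (n = 8), input P = 1, tau = 0.5
w   = [2.0, 1.0, 1.5, 1.0] * 2
h11 = [0.69, 0.76, 0.77, 0.73] * 2     # h_{i,11} = 1 - h_{i,10} - h_{i,1x}
h1x = [0.14, 0.10, 0.12, 0.12] * 2
Qn_approx = saddle_upper_tail(w, h11, h1x, 0.5)
```

For n = 8 the exact value is still cheap to obtain by enumerating all 3^8 vote combinations, which provides a direct check on the approximation's accuracy.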
Example 3. Table 1 lists the reliabilities calculated for a weighted voting system with the number of units n = 5, 10, 15, 18, 20, 25, 40, 100, and 300. The error probabilities of these n units are randomly re-sampled from the base set of 4 units provided in Example 2. The 2nd to 4th columns in Table 1 relate to Q_n(0): the 2nd column gives the exact Q_n(0) values calculated with the recursive formulas of Theorem 1, the 3rd column gives the approximate values calculated with the formulas of Theorem 2, and the 4th column gives the differences between them. The 5th to 7th columns are the corresponding values for Q̃_n(0), and the last three columns are the values for the system reliability R given Pr(P = 1) = 0.7. Up to n = 18, our C code takes less than 50 seconds per calculation on a Sun Sparc 1 computer; at n = 25, the exact reliability calculation takes a little over 3 hours, so we did not use the exact formula for n > 25. We see that even when n is as small as 5, the approximations are fairly reasonable, and the accuracy increases as n grows, as expected; for n ≥ 18, the approximation is very accurate. We repeated this study with several different sets of error probabilities, and all runs led to similar conclusions.

The methodology discussed in this section is very general. It can incorporate any form of the error probabilities h_{i,νν′} in the calculation. For example, h_{i,νν′} can be a function of time t, or a function of covariates that describe
Table 1

         Q_n(0)                           Q̃_n(0)                          R
n        Exact    Approx   Diff           Exact    Approx   Diff           Exact    Approx   Diff
5        0.95658  0.97326  −0.01668       0.96856  0.94974  0.01882        0.96018  0.96621  −0.00603
10       0.99188  0.99528  −0.00340       0.99655  0.99436  0.00219        0.99297  0.99452  −0.00155
15       0.99834  0.99900  −0.0³654       0.99929  0.99887  0.0³4225       0.99863  0.99895  −0.0³331
18       0.99936  0.99960  −0.0³258       0.99976  0.99962  0.0³1412       0.99947  0.99961  −0.0³138
20       0.99966  0.99978  −0.0³138       0.99988  0.99982  0.0⁴686        0.99972  0.99979  −0.0⁴759
25       0.9⁴23   0.9⁴52   −0.0⁴29        0.9⁴80   0.9⁴69   0.0⁴11         0.9⁴40   0.9⁴57   −0.0⁴17
40       –        0.9⁶123  –              –        0.9⁶898  –              –        0.9⁶356  –
100      –        0.9¹⁴78  –              –        1.000    –              –        1.000    –
300      –        1.000    –              –        1.000    –              –        1.000    –

Note: −0.0³654 denotes −0.000654, 0.9⁴23 denotes 0.999923, and so on.
the characteristics of the ith unit. As long as we can compute the h_{i,νν′}'s, we can calculate the system reliability.
3. Time dependent reliability function

In this section, we consider that each unit has up to time t to make its decision. For example, in the HO system discussed in Section 2, the units are the officers who evaluate a proposal; the requirement to complete their reports within a limited time, together with their expertise, affects the results of their evaluations. Suppose each error probability h_{i,νν′}(t_i) is a function of time t_i, where t_i is the time by which unit i must return its result. To simplify notation and discussion, we assume in this section that t_1 = t_2 = ⋯ = t_n ≡ t; that is, all units are given the same period of time t to make a decision, and the system produces an output based on the units' results. Our development extends easily to the general case of different t_i without any major difficulty, beyond more complicated notation and additional bookkeeping.

Although there are other possibilities, in this study we consider a typical pattern in which the error probabilities h_{i,νν′}(t), for ν ≠ ν′, are decreasing functions of t; other patterns can be modeled accordingly. For example, a unit has a larger chance of making a mistake at the beginning, when time is limited; the chance of making a wrong decision decreases after a certain learning period; and when sufficient time is available, the chance of a mistake is reduced to its saturated minimal value. Fig. 1 provides an example: at the beginning, the error probability decreases slowly from its maximum value, then
Fig. 1. Time-dependent error probabilities h_{1,νν′}(t) of Unit 1 in Example 4.
the error probability drops steeply as the unit (officer) has more time to evaluate the input, and eventually the error probability approaches its minimum value. The curves in Fig. 1 are generated by the following formula: for t > 0 and for ν ≠ ν′, ν = 0, 1, ν′ = 0, 1, x,

    h_{i,νν′}(t) = h_{i,νν′}^{(min)} + (1 + e^{−b_{i,νν′}}) (h_{i,νν′}^{(max)} − h_{i,νν′}^{(min)}) · e^{−a_{i,νν′} t + b_{i,νν′}} / (1 + e^{−a_{i,νν′} t + b_{i,νν′}})     (4)

where a_{i,νν′} > 0 determines how fast h_{i,νν′}(t) decreases as t increases and b_{i,νν′} > 0 determines the location of the curve. The constants h_{i,νν′}^{(min)} and h_{i,νν′}^{(max)}, for ν ≠ ν′, lie between 0 and 1 with h_{i,νν′}^{(min)} ≤ h_{i,νν′}^{(max)}; they are, respectively, the minimum and maximum bounds of h_{i,νν′}(t). For simplicity, these constants are assumed to be known; in practice, they can be specified by subject experts or estimated from past samples. When h_{i,νν′}^{(min)} = 0 and h_{i,νν′}^{(max)} = e^{b_{i,νν′}}/(1 + e^{b_{i,νν′}}), formula (4) reduces to the standard logistic function for sigmoid curves. In fact, formula (4) is a modification of the standard logistic function such that h_{i,νν′}(t) stays between h_{i,νν′}^{(min)} and h_{i,νν′}^{(max)} for all t > 0, with h_{i,νν′}(0) = h_{i,νν′}^{(max)}.

To evaluate the reliability of threshold voting systems in the time-dependent setting, we extend the results of Section 2 to the following two theorems. Here Q_{n,t}(s) and Q̃_{n,t}(s) are defined in the same way as Q_n(s) and Q̃_n(s), except that their values depend on the time t.

Theorem 3. Suppose the error probabilities of each individual unit in a threshold weighted voting system are functions of time t. The reliability of the system with n units at time t is given by the same formulas as in Theorem 1, except that Q_n(s), Q̃_n(s) and the h_{i,νν′}'s are replaced by Q_{n,t}(s), Q̃_{n,t}(s) and the h_{i,νν′}(t)'s, respectively.

Theorem 4. Suppose there exist small positive constants ε > 0, δ_1 > 0, δ_2 > 0 and a large positive number M < ∞ such that ε < w_i < M and δ_1 < h_{i,νν′}(t) < 1 − δ_2 for ν ≠ ν′ and for all t. Then Q_{n,t}(0) and Q̃_{n,t}(0) can be approximated by the formulas in Theorem 2, except that ê_n, f̂_n, ẽ_n and f̃_n are replaced by ê_{n,t}, f̂_{n,t}, ẽ_{n,t} and f̃_{n,t}, which are calculated from the same formulas but with h_{i,νν′}(t) in place of h_{i,νν′}.

The proofs of Theorems 3 and 4 follow those of Theorems 1 and 2 and are omitted here. In the case when the error probabilities of each individual unit follow the typical sigmoid pattern (4), we have the following corollary of Theorem 3.

Corollary 1. If the error probabilities of each unit take the form of (4), then Q_{n,t}(0), Q̃_{n,t}(0) and the system reliability R_t are non-decreasing functions of t.

The proof of Corollary 1 is provided in Appendix A.3.

Example 4. Suppose there are n = 3 officers (units) in the system. Assume their judgment errors, as functions of time t, follow the form of (4), with parameters

Unit 1: (h^{(min)}_{1,10}, h^{(max)}_{1,10}) = (0.02, 0.20), a_{1,10} = 0.30, b_{1,10} = 5.0
        (h^{(min)}_{1,1x}, h^{(max)}_{1,1x}) = (0.01, 0.15), a_{1,1x} = 0.15, b_{1,1x} = 3.0
        (h^{(min)}_{1,01}, h^{(max)}_{1,01}) = (0.02, 0.30), a_{1,01} = 0.20, b_{1,01} = 4.0
        (h^{(min)}_{1,0x}, h^{(max)}_{1,0x}) = (0.01, 0.20), a_{1,0x} = 0.15, b_{1,0x} = 4.0
Unit 2: (h^{(min)}_{2,10}, h^{(max)}_{2,10}) = (0.00, 0.20), a_{2,10} = 0.30, b_{2,10} = 4.5
        (h^{(min)}_{2,1x}, h^{(max)}_{2,1x}) = (0.01, 0.15), a_{2,1x} = 0.15, b_{2,1x} = 2.3
        (h^{(min)}_{2,01}, h^{(max)}_{2,01}) = (0.01, 0.23), a_{2,01} = 0.20, b_{2,01} = 4.5
        (h^{(min)}_{2,0x}, h^{(max)}_{2,0x}) = (0.00, 0.18), a_{2,0x} = 0.15, b_{2,0x} = 5.0
Unit 3: (h^{(min)}_{3,10}, h^{(max)}_{3,10}) = (0.01, 0.20), a_{3,10} = 0.25, b_{3,10} = 4.0
        (h^{(min)}_{3,1x}, h^{(max)}_{3,1x}) = (0.01, 0.10), a_{3,1x} = 0.20, b_{3,1x} = 2.0
        (h^{(min)}_{3,01}, h^{(max)}_{3,01}) = (0.00, 0.10), a_{3,01} = 0.15, b_{3,01} = 4.5
        (h^{(min)}_{3,0x}, h^{(max)}_{3,0x}) = (0.01, 0.20), a_{3,0x} = 0.20, b_{3,0x} = 4.0

Note that Fig. 1 is, in fact, the plot of the time-dependent error probabilities of Unit 1.

For two choices of the threshold value, τ = 0.5 and τ = 0.7, we use Theorem 3 to obtain the reliabilities R_t at time t. Fig. 2 shows the four time-dependent reliability curves against time t: plot (a) is for τ = 0.5 and plot (b) is for τ = 0.7, and the two curves in each plot correspond to the reliability curves for inputs P = 1 and P = 0, respectively. In plot (a), the reliability for P = 0 is always smaller than the reliability for P = 1. In plot (b), the two reliability curves cross twice, at t = 19.30 and t = 33.94.
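Formula (4) is straightforward to evaluate. The short Python sketch below (function name ours), using the Unit 1 stuck-at-0 parameters from Example 4, checks the endpoint behaviour described above, namely h_{i,νν′}(0) = h^{(max)} and h_{i,νν′}(t) → h^{(min)} as t grows:

```python
import math

def h_err(t, h_min, h_max, a, b):
    """Sigmoid error-probability curve of formula (4)."""
    g = math.exp(-a * t + b)
    return h_min + (1.0 + math.exp(-b)) * (h_max - h_min) * g / (1.0 + g)

# Unit 1 stuck-at-0 parameters from Example 4
h_min, h_max, a, b = 0.02, 0.20, 0.30, 5.0
curve = [h_err(t, h_min, h_max, a, b) for t in range(0, 61, 10)]
```

At t = 0 the weighting factor (1 + e^{−b}) e^{b}/(1 + e^{b}) equals 1, so the curve starts exactly at h^{(max)}; as t increases, it decreases monotonically toward h^{(min)}.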
4. Optimal stopping time with minimal costs

Often, the longer the system takes, the higher the cost; in practice, it is impractical to wait a long time for the system to generate an output. We assume that the error probabilities of the units decrease as the time spent on decision making increases. Thus, the system reliability is often an increasing
Fig. 2.
function of t, the time period each unit spends on making a decision. If t is limited, the reliability of the system may not be high enough to warrant any confidence in the output. The question is how long the units should spend on evaluating the input. It is desirable that a system take minimal time to generate an output while maintaining high reliability.
Consider the HO system discussed in Section 2, in which the proposal is evaluated by the officers. Generally speaking, the longer an officer spends on evaluating the proposal, the smaller the chance that he or she makes a wrong decision. Thus, a sufficiently long time is desirable to ensure high reliability. However, waiting a long time increases the cost. When there are only one or a few slow referees, the system does not need to wait for these slow individuals, provided it can make a correct decision with reliability higher than a pre-specified (tolerable) value; the output from the other referees may contain enough information for the system to make a correct decision with high reliability. Our goal in this section is to determine the optimal time that minimizes the total cost of the system by trading off reliability against cost.
Let RL be a pre-specified minimal reliability value that the system should achieve. Assume that c1 and c2 are, respectively, the cost associated with making a wrong decision and the cost of each time unit the system units spend on evaluating the input. We can formulate the cost function as follows:

    C(t) = c1 (1 − R_t) + c2 t,   such that R_t >= RL          (5)
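The trade-off in Eq. (5) is easy to sketch numerically. In this minimal Python sketch, `reliability` is a hypothetical stand-in for the curve R_t (any non-decreasing function of t will do); only the form C(t) = c1(1 − R_t) + c2 t and the Example 5 values c1 = 85, c2 = 1 come from the paper.

```python
import math

def cost(t, reliability, c1, c2):
    """C(t) = c1*(1 - R_t) + c2*t, the cost function of Eq. (5)."""
    return c1 * (1.0 - reliability(t)) + c2 * t

# Illustrative only: an exponential-type reliability curve, NOT from the paper.
R = lambda t: 1.0 - math.exp(-0.25 * t)

# Wrong decisions cost far more than waiting (c1 = 85, c2 = 1, as in Example 5).
waiting_cost = cost(10.0, R, c1=85, c2=1)
```

As t grows, the c1(1 − R_t) term shrinks while c2 t grows, which is why C(t) typically has an interior minimum.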
Often, c1 is much larger than c2. We assume that both c1 and c2 are given and do not depend on the time t. We also assume that the additional cost of the system producing an output after receiving all decisions from its units is constant. In this case, this additional cost does not affect the determination of the optimal time, so it is left out of the equation defining C(t).
Denote T = argmin_{0 < t < ∞, R_t >= RL} C(t); T is the optimal stopping time that minimizes the cost C(t) subject to the constraint R_t >= RL. Also, write t0 = inf{t | R_t >= RL and t > 0}; t0 is the minimum time needed to achieve the pre-specified reliability. We have the following result.
Theorem 5. In a threshold weighted voting system with n units, suppose R_t is a continuous and non-decreasing function of t for t > 0. Then there exists an optimal time T such that T ∈ [t0, t0 + c1 c2^{-1}(1 − RL)]. At the optimal time T, R_T >= RL and the cost function C(t) is minimized.
The proof of Theorem 5 is in Appendix A.4. Theorem 5 states that the optimal time T falls inside the interval [t0, t0 + c1 c2^{-1}(1 − RL)]. To obtain the numerical value of the optimal time T inside this interval, one possible approach is to find the root(s) of the estimating equation

    (d/dt) C(t) |_{t=T} = 0,   or equivalently   (d/dt) R_t |_{t=T} = c2/c1          (6)

Eq. (6) can be solved by the Newton–Raphson algorithm. Starting with an initial value T^(0) ∈ [t0, t0 + c1 c2^{-1}(1 − RL)], we calculate

    T^(k) = T^(k−1) − [ (d/dt) R_t |_{t=T^(k−1)} − c2/c1 ] / [ (d²/dt²) R_t |_{t=T^(k−1)} ]

for k = 1, 2, .... When the algorithm converges numerically, we obtain a solution (say, T*) to Eq. (6). If T* ∈ [t0, t0 + c1 c2^{-1}(1 − RL)] and C(T*) < C(t0), we take T = T*; otherwise, we take T = t0.
Example 5. (Continued from Example 4.) Under the setting of Example 4, suppose c1 = 85 and c2 = 1. Fig. 3 plots the cost function C(t) against t for the four cases studied in Fig. 2.
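Under an assumed reliability curve, the Newton–Raphson scheme for Eq. (6) can be sketched as follows. Here `cost`, `dR` and `d2R` are hypothetical callables standing in for C(t) and the two derivatives of R_t (which the paper computes via Lemma A); the fallback to t0 mirrors the rule T = t0 when the root lies outside Theorem 5's interval or fails to beat C(t0).

```python
import math

def optimal_stopping_time(cost, dR, d2R, t0, c1, c2, RL, tol=1e-8, max_iter=200):
    """Newton-Raphson for Eq. (6): find T with dR/dt(T) = c2/c1.

    cost, dR, d2R are assumed callables for C(t) and the first/second
    derivatives of R_t; Theorem 5 restricts the search to
    [t0, t0 + (c1/c2)*(1 - RL)].
    """
    hi = t0 + (c1 / c2) * (1.0 - RL)   # upper endpoint from Theorem 5
    T = 0.5 * (t0 + hi)                # initial value T^(0) inside the interval
    for _ in range(max_iter):
        g = dR(T) - c2 / c1            # a root of g is a root of Eq. (6)
        curv = d2R(T)
        if curv == 0.0:
            break
        step = g / curv
        T -= step
        if abs(step) < tol:
            break
    # Accept T* only if it lies in the interval and beats C(t0); else T = t0.
    if t0 <= T <= hi and cost(T) < cost(t0):
        return T
    return t0

# Illustrative check with an assumed curve R_t = 1 - exp(-a*t), NOT from
# the paper: Eq. (6) then has the closed form T = ln(a*c1/c2)/a.
a, c1, c2, RL = 0.25, 85.0, 1.0, 0.90
R = lambda t: 1.0 - math.exp(-a * t)
C = lambda t: c1 * (1.0 - R(t)) + c2 * t
dR = lambda t: a * math.exp(-a * t)
d2R = lambda t: -a * a * math.exp(-a * t)
t0 = math.log(1.0 / (1.0 - RL)) / a    # solves R(t0) = RL
T = optimal_stopping_time(C, dR, d2R, t0, c1, c2, RL)
```

With this assumed curve the iteration reproduces the closed-form root T = ln(a c1/c2)/a, which lies inside Theorem 5's interval and lowers the cost relative to C(t0).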
Assume the minimal required reliability value is RL = 0.90. In the case when τ = 0.50, the minimum time to achieve RL is t0 = 9.70 if the input P = 1, and t0 = 27.27 if the input P = 0. When τ = 0.70, we have t0 = 18.18 if the input P = 1, and t0 = 16.36 if the input P = 0. From Theorem 5, the optimal time T should be in the intervals [9.70, 18.20), [27.27, 35.77), [18.18, 26.68) and [16.36, 24.86), respectively, in these four cases. Taking an initial value from these intervals, we apply the aforementioned method of solving Eq. (6) to calculate the optimal T in each of the four cases: (i) τ = 0.50 and P = 1; (ii) τ = 0.50 and P = 0; (iii) τ = 0.70 and P = 1; and (iv) τ = 0.70 and P = 0. The optimal times T obtained in these four cases are 9.70, 27.88, 21.21 and 16.36, respectively. In other words, if each individual (unit) has time t = 9.70, 27.88, 21.21 or 16.36, respectively, in each of the four cases to make his or her decision, the system cost C(t) is minimized while the reliability is at least RL = 0.90.
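These interval endpoints follow from Theorem 5: with c1 = 85, c2 = 1 and RL = 0.90, the interval width is c1 c2^{-1}(1 − RL) = 8.5. A quick arithmetic check in Python:

```python
c1, c2, RL = 85, 1, 0.90
width = (c1 / c2) * (1 - RL)          # Theorem 5's interval width, = 8.5
t0s = [9.70, 27.27, 18.18, 16.36]     # minimum times t0 from Example 5
intervals = [(t0, round(t0 + width, 2)) for t0 in t0s]
# reproduces [9.70, 18.20), [27.27, 35.77), [18.18, 26.68), [16.36, 24.86)
```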
5. Concluding remarks
It is worth mentioning two possible extensions of the system model in Eq. (1):
Extension 1: Replace the inequality in Eq. (1) with Σ_{i∈I_1} w_i >= τ~, where τ~ > 0 is a fixed threshold value.
Extension 2: Replace the inequality in Eq. (1) with Σ_{i∈I_1} w_i >= τ Σ_{i∉I_x} w_i + θ Σ_{i∈I_x} w_i, for a given constant θ, 0 <= θ <= 1. Here, θ can be viewed as the fractional weight of the indecisive votes relative to the decisive votes.
The first extension uses an absolute instead of a relative threshold value in Eq. (1). Levitin [23] recently considered such a model, referring to it as a 'static' threshold voting system. The second extension brings the indecisive votes into the equation. Due to space limitations and to avoid a lengthy discussion, we will address these two possible extensions in a future paper. In fact, the techniques used in this paper are quite general and can easily be adapted to deal with both extensions.

Acknowledgements
The authors wish to thank the editor and the reviewers for their kind comments and suggestions.

Appendix A
A.1. Proof of Theorem 1
It is clear that
    R = Pr(system output = 1 | P = 1) Pr(P = 1) + Pr(system output = 0 | P = 0) Pr(P = 0)
      = Pr( Σ_{i∈I_1} w_i >= τ Σ_{i∉I_x} w_i | P = 1 ) Pr(P = 1)
        + Pr( Σ_{i∈I_1} w_i < τ Σ_{i∉I_x} w_i | P = 0 ) Pr(P = 0)
      = Q_n(0) Pr(P = 1) + Q̃_n(0) Pr(P = 0)
Next, we develop iterative formulas to evaluate Q_n(s) and Q̃_n(s). For n = 1,

    Q_1(s) = Pr(Z_1 w_1 >= s | P = 1)
           = Pr((1 − τ) w_1 >= s, (unit 1) ∈ I_1 | P = 1)
             + Pr(0 >= s, (unit 1) ∈ I_x | P = 1)
             + Pr(−τ w_1 >= s, (unit 1) ∈ I_0 | P = 1)
           = 1_{(1−τ)w_1 >= s} Pr((unit 1) ∈ I_1 | P = 1)
             + 1_{0 >= s} Pr((unit 1) ∈ I_x | P = 1)
             + 1_{−τ w_1 >= s} Pr((unit 1) ∈ I_0 | P = 1)
           = h_{1,11} 1_{(1−τ)w_1 >= s} + h_{1,1x} 1_{0 >= s} + h_{1,10} 1_{−τ w_1 >= s}

where 1_{A} is an indicator function equal to 1 if the event A is true and 0 otherwise. Similarly, we have

    Q̃_1(s) = h_{1,01} 1_{(1−τ)w_1 < s} + h_{1,0x} 1_{0 < s} + h_{1,00} 1_{−τ w_1 < s}

For n >= 2,

    Q_n(s) = Pr( Σ_{i=1}^{n} w_i Z_i >= s | P = 1 )
           = Pr( Σ_{i=1}^{n−1} w_i Z_i >= s_n^−, (unit n) ∈ I_1 | P = 1 )
             + Pr( Σ_{i=1}^{n−1} w_i Z_i >= s, (unit n) ∈ I_x | P = 1 )
             + Pr( Σ_{i=1}^{n−1} w_i Z_i >= s_n^+, (unit n) ∈ I_0 | P = 1 )
           = Pr( Σ_{i=1}^{n−1} w_i Z_i >= s_n^− | P = 1 ) Pr((unit n) ∈ I_1 | P = 1)
             + Pr( Σ_{i=1}^{n−1} w_i Z_i >= s | P = 1 ) Pr((unit n) ∈ I_x | P = 1)
             + Pr( Σ_{i=1}^{n−1} w_i Z_i >= s_n^+ | P = 1 ) Pr((unit n) ∈ I_0 | P = 1)
           = Q_{n−1}(s_n^−) h_{n,11} + Q_{n−1}(s) h_{n,1x} + Q_{n−1}(s_n^+) h_{n,10}

where s_n^− = s − (1 − τ) w_n and s_n^+ = s + τ w_n. Similarly,

    Q̃_n(s) = Pr( Σ_{i=1}^{n−1} w_i Z_i < s_n^−, (unit n) ∈ I_1 | P = 0 )
             + Pr( Σ_{i=1}^{n−1} w_i Z_i < s, (unit n) ∈ I_x | P = 0 )
             + Pr( Σ_{i=1}^{n−1} w_i Z_i < s_n^+, (unit n) ∈ I_0 | P = 0 )
           = Q̃_{n−1}(s_n^−) h_{n,01} + Q̃_{n−1}(s) h_{n,0x} + Q̃_{n−1}(s_n^+) h_{n,00}

A.2. Sketch of the proof of Theorem 2

The result in Theorem 2 is directly related to the Lugannani and Rice formula [21] for sums of independent random variables. Since the weights w_i are all bounded away from zero and from infinity, a rigorous way to prove Theorem 2 is to follow the proof in Lugannani and Rice [21] almost line by line. Alternatively, the formulas in Theorem 2 can be obtained directly from formula (5.14) of Daniels [22], a review article on tail probability approximations. In using formula (5.14) of Daniels, we treat Y_n = Σ_{i=1}^{n−1} w_i Z_i as a single random variable. The conditions in Theorem 2 ensure that (5.14) of Daniels holds in our case. Due to space constraints, the detailed steps are omitted.

A.3. Proof of Corollary 1

We first prove a lemma to evaluate the first and second derivatives of R_t with respect to t.

Lemma A. In a threshold weighted voting system with n units, suppose the error probability of each individual unit is a function of t. Then the first and second derivatives of the reliability function R_t can be evaluated by

    (d/dt) R_t = U_{n,t}(0) Pr(P = 1) + Ũ_{n,t}(0) Pr(P = 0)

and

    (d²/dt²) R_t = V_{n,t}(0) Pr(P = 1) + Ṽ_{n,t}(0) Pr(P = 0)

where the functions (U_{n,t}(s), Ũ_{n,t}(s)) and (V_{n,t}(s), Ṽ_{n,t}(s)) are, respectively, the first and second derivatives of Q_{n,t}(s) and Q̃_{n,t}(s) with respect to t. They can be obtained by iteration. For n >= 2,

    U_{n,t}(s) = U_{n−1,t}(s_n^−) h_{n,11}(t) + U_{n−1,t}(s) h_{n,1x}(t) + U_{n−1,t}(s_n^+) h_{n,10}(t)
               + Q_{n−1,t}(s_n^−) (d/dt) h_{n,11}(t) + Q_{n−1,t}(s) (d/dt) h_{n,1x}(t)
               + Q_{n−1,t}(s_n^+) (d/dt) h_{n,10}(t)

    V_{n,t}(s) = V_{n−1,t}(s_n^−) h_{n,11}(t) + V_{n−1,t}(s) h_{n,1x}(t) + V_{n−1,t}(s_n^+) h_{n,10}(t)
               + 2 U_{n−1,t}(s_n^−) (d/dt) h_{n,11}(t) + 2 U_{n−1,t}(s) (d/dt) h_{n,1x}(t)
               + 2 U_{n−1,t}(s_n^+) (d/dt) h_{n,10}(t)
               + Q_{n−1,t}(s_n^−) (d²/dt²) h_{n,11}(t) + Q_{n−1,t}(s) (d²/dt²) h_{n,1x}(t)
               + Q_{n−1,t}(s_n^+) (d²/dt²) h_{n,10}(t)

where s_n^− = s − (1 − τ) w_n and s_n^+ = s + τ w_n. The formulas for Ũ_{n,t}(s) and Ṽ_{n,t}(s) are the same as the formulas above for U_{n,t}(s) and V_{n,t}(s), but with Q, U and V replaced by Q̃, Ũ and Ṽ, and the h_{n,νν'}'s replaced by the h̃_{n,νν'}'s. For n = 1, formulas for U_{1,t}(s), V_{1,t}(s), Ũ_{1,t}(s) and Ṽ_{1,t}(s) can easily be derived from the formulas of Q_{1,t}(s) and Q̃_{1,t}(s).
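The iterative formulas for Q_n(s) and Q̃_n(s) developed in Appendix A.1 translate directly into code. In this minimal Python sketch the argument names are assumptions for illustration: `w` holds the unit weights, `h1[i-1] = (h_{i,11}, h_{i,1x}, h_{i,10})` and `h0[i-1] = (h_{i,01}, h_{i,0x}, h_{i,00})` are unit i's conditional decision probabilities, `tau` is the threshold, and Python booleans serve as the indicator 1_{A}.

```python
def Q(n, s, w, h1, tau):
    """Q_n(s) = Pr(sum_{i<=n} w_i Z_i >= s | P = 1) via the A.1 recursion."""
    h11, h1x, h10 = h1[n - 1]
    if n == 1:
        return (h11 * ((1 - tau) * w[0] >= s)
                + h1x * (0 >= s)
                + h10 * (-tau * w[0] >= s))
    sm = s - (1 - tau) * w[n - 1]       # s_n^-
    sp = s + tau * w[n - 1]             # s_n^+
    return (Q(n - 1, sm, w, h1, tau) * h11
            + Q(n - 1, s, w, h1, tau) * h1x
            + Q(n - 1, sp, w, h1, tau) * h10)

def Qtilde(n, s, w, h0, tau):
    """Qtilde_n(s) = Pr(sum w_i Z_i < s | P = 0); same recursion, strict inequalities."""
    h01, h0x, h00 = h0[n - 1]
    if n == 1:
        return (h01 * ((1 - tau) * w[0] < s)
                + h0x * (0 < s)
                + h00 * (-tau * w[0] < s))
    sm = s - (1 - tau) * w[n - 1]
    sp = s + tau * w[n - 1]
    return (Qtilde(n - 1, sm, w, h0, tau) * h01
            + Qtilde(n - 1, s, w, h0, tau) * h0x
            + Qtilde(n - 1, sp, w, h0, tau) * h00)

def reliability(w, h1, h0, tau, p1):
    """R = Q_n(0) Pr(P=1) + Qtilde_n(0) Pr(P=0), as in the proof of Theorem 1."""
    n = len(w)
    return Q(n, 0.0, w, h1, tau) * p1 + Qtilde(n, 0.0, w, h0, tau) * (1.0 - p1)
```

Each call branches three ways, so the cost grows as 3^n; for larger n one would memoize on (n, s) or switch to the saddlepoint approximation of Theorem 2.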
Proof of Lemma A. The results are obtained directly by taking the first and second derivatives of both sides of the equations in Theorem 3. □

Proof of Corollary 1. We only need to prove that U_{n,t}(0) and Ũ_{n,t}(0) are non-negative. We prove U_{n,t}(0) >= 0 below; the proof of Ũ_{n,t}(0) >= 0 is similar.
First, from (4), it is obvious that, when ν ≠ ν', (d/dt) h_{i,νν'}(t) < 0 for t > 0, ν = 0, 1 and ν' = 0, 1, x. When n = 1,

    U_{1,t}(s) = −(d/dt) h_{1,10}(t)                        if −τ w_1 < s <= 0
    U_{1,t}(s) = −(d/dt) h_{1,10}(t) − (d/dt) h_{1,1x}(t)   if 0 < s <= (1 − τ) w_1
    U_{1,t}(s) = 0                                          otherwise

So U_{1,t}(s) >= 0 for all s and t > 0. When n >= 2, from Lemma A,

    U_{n,t}(s) = U_{n−1,t}(s_n^−) h_{n,11}(t) + U_{n−1,t}(s) h_{n,1x}(t) + U_{n−1,t}(s_n^+) h_{n,10}(t)
               + Q_{n−1,t}(s_n^−) (d/dt) h_{n,11}(t) + Q_{n−1,t}(s) (d/dt) h_{n,1x}(t)
               + Q_{n−1,t}(s_n^+) (d/dt) h_{n,10}(t)
             = U_{n−1,t}(s_n^−) h_{n,11}(t) + U_{n−1,t}(s) h_{n,1x}(t) + U_{n−1,t}(s_n^+) h_{n,10}(t)
               − (Q_{n−1,t}(s_n^−) − Q_{n−1,t}(s)) (d/dt) h_{n,1x}(t)
               − (Q_{n−1,t}(s_n^−) − Q_{n−1,t}(s_n^+)) (d/dt) h_{n,10}(t)

using h_{n,11}(t) + h_{n,1x}(t) + h_{n,10}(t) = 1. Note that (from the proof of Theorem 1 or 3) Q_{n−1,t}(s_n^−) − Q_{n−1,t}(s) >= 0 and Q_{n−1,t}(s_n^−) − Q_{n−1,t}(s_n^+) >= 0. By induction, it is easy to see that U_{n,t}(s) >= 0 for all s and t > 0. □

A.4. Proof of Theorem 5

Since R_t is a continuous non-decreasing function and t0 = inf{t | R_t >= RL and t > 0}, we have R_{t0} = RL and R_t < RL for t < t0. So, to satisfy the constraint R_t >= RL, the optimal time must satisfy T >= t0. On the other hand, from Eq. (5), for t >= t0 + c1 c2^{-1}(1 − RL) we have

    C(t) >= c2 t >= c2 {t0 + c1 c2^{-1}(1 − RL)} = c2 t0 + c1 (1 − RL) = C(t0)

So the optimal time T should be no larger than t0 + c1 c2^{-1}(1 − RL). □

References
[1] Nordmann L, Pham H. Weighted voting systems. IEEE Trans Reliab 1999;48(1):42–9. [2] Pham H. Reliability analysis of digital communication systems with imperfect voters. Math Comput Model J 1997;26:103–12. [3] Ben-Dov Y. Optimal reliability design of k-out-of-n systems subject to two kinds of failure. J Oper Res Soc 1980;31:743–8. [4] Biernat J. The effect of compensating fault models on n-tuple modular redundant system reliability. IEEE Trans Reliab 1994;43:294–300. [5] Mathur FP, de Sousa PT. Reliability models of NMR systems. IEEE Trans Reliab 1975;24:604–16. [6] Pham H, Pham M. Optimal design of ðk; n 2 k þ 1Þ systems subject to two modes. IEEE Trans Reliab 1991;40:559–62. [7] Pham H. Optimal system size for k-out-of-n systems with competing failure modes. Math Comput Model J 1991;15:77–82. [8] Satoh N, Sasaki M, Yuge T, Yanagi S. Reliability of 3-state device systems with simultaneous failures. IEEE Trans Reliab 1993;42: 470–7. [9] Pham H, Malon DM. Optimal design of systems with competing failure modes. IEEE Trans Reliab 1994;43:251–4. [10] Nordmann L, Pham H. Weighted voting human-organization systems. IEEE Trans Syst Man, Cybernet-Part A 1997;30(1):543– 9. [11] Levitin G, Lisnianski A. Reliability optimization for weighted voting system. Reliab Engng Syst Safety 2001;71:131–8. [12] Parhami B. Threshold voting is fundamentally simpler than plurality voting. Int J Reliab, Quality Safety Engng 1994;1(1):95–102. [13] Parhami B. Voting algorithms. IEEE Trans Reliab 1994;43:617 –29. [14] Blough DM, Sullivan GF. Voting using predispositions. IEEE Trans Reliab 1994;43:604–16. [15] Pham H. Reliability analysis for dynamic configurations of systems with three failure modes. Reliab Engng Syst Safety 1999;63:13–23. [16] Levitin G. Asymmetric weighted voting systems. Reliab Engng Syst Safety 2002;76:199–206. [17] Vasconcelos GC, Fairhust MC, Bisset DL. Investigating feed-forward neural networks with respect to the rejection of spurious patterns. 
Pattern Recogn Lett 1995;16(2):207– 12. [18] Cordella LP, De Stefano C, Tortorella F, Vento M. A method for improving classification reliability of multi-layer perceptions. IEEE Trans Neural Netw 1995;6:1140–7. [19] De Stefano C, Sansone C, Vento M. To reject or not to reject: that is the question—an answer in case of neural classifiers. IEEE Trans Syst Man, Cybernet-Part C 2000;30(1):84–94. [20] Barndorff-Nielsen OE, Cox DR. Asymptotic techniques for use in statistics. New York: Chapman and Hall; 1989. [21] Lugannani R, Rice S. Saddle point approximation for the distribution of the sum of independent random variables. Adv Appl Prob 1980;12: 475–90. [22] Daniels HE. Tail probability approximations. Int Stat Rev 1987;55: 37–48. [23] Levitin G. Threshold optimization for weighted voting classifiers. Naval Res Logistics 2003;50:322–44.