Computers & Industrial Engineering 57 (2009) 298–303
Optimal maintenance policy for a multi-state deteriorating system with two types of failures under general repair

Michael Jong Kim, Viliam Makis
University of Toronto, 5 King's College Road, Toronto, Canada M5G 3G8
Article history: Received 29 May 2007; Received in revised form 1 April 2008; Accepted 27 November 2008; Available online 7 December 2008.

Keywords: Semi-Markov decision process; General repair; Policy-iteration algorithm; Embedded decision process; Optimal maintenance policy
Abstract

We present a general repair model for a multi-state deteriorating system subject to major and minor failures. The process is modeled as a semi-Markov decision process with the optimality criterion being the minimization of the long-run expected average cost per unit time. A modified policy-iteration algorithm using the embedded technique is developed as the computational approach used to find the optimal maintenance policy. The advantage of using the embedded technique is that it reduces the size of the linear system in the value-determination step of the algorithm, resulting in reduced computational efforts for large state spaces. Numerical examples are given which illustrate the implementation of the computational approach.

© 2008 Elsevier Ltd. All rights reserved.
1. Introduction

Most systems deteriorate with usage and age and are subject to random failures. These failures can be of several kinds. A system can experience a major (catastrophic) failure which is irreparable, and the system must then be replaced by a new one. On the other hand, a system can experience a minor failure, such as a failure of a cheaper part, and it can be restored back to the level at which it was operating just prior to failure. Since it is often very costly to repair or replace a failed system, preventive maintenance is usually carried out applying a maintenance policy which determines in which states repair should be done. Repairs that can bring a system to an operating level somewhere between the level of the current system and a new system are known as general repairs.

Problems concerning the optimal maintenance of multi-state deteriorating systems have been modeled and analyzed extensively in the literature. Valdez-Flores and Feldman (1989) give an excellent summary of the fundamental models in inspection and maintenance. Wang (2002) gives a more current and thorough review of the area. The semi-Markov decision process (SMDP) is a natural framework in which maintenance decision models can be formulated; see Tijms (1994) for a general exposition of the SMDP framework. Moustafa, Abdel Maksoud, and Sadek (2004) and Chen and Feldman
(1997) considered similar problems with minimal repair and replacement under the total expected discounted cost criterion. Love, Zhang, Zitron, and Guo (2000) formulated and analyzed a related maintenance model in the SMDP framework considering general repair and replacement. Chen and Trivedi (2005) presented a similar model in which inspection and maintenance decisions were considered simultaneously. Castanier, Berenguer, and Grall (2003) studied an SMDP model in which both the long-run system availability and the long-run expected maintenance cost criteria were considered. Childress and Durango-Cohen (2005) analyzed a replacement problem of interdependent parallel machines in the SMDP framework. Their paper, in contrast to the previous papers, focused more on the theoretical structural properties of the optimal replacement policy.

In many real situations in which systems can experience several types of failures, the above models are not suitable since they assume only one major failure state in the SMDP formulation. The models are also limited since maintenance decisions are restricted to only three types of actions: do-nothing, minimal repair, and replacement. The model presented in this paper differs from the previously published models in that our SMDP formulation incorporates both major failures and minor failures. Furthermore, our model also allows for general repair which can bring an operating system to a state between the state of the current system and a new system. The optimality criterion considered is the minimization of the long-run expected average cost per unit time.

To compute the optimal maintenance policy, several computational approaches have been suggested in the literature. Jayakumar and Asgarpoor (2006) considered a maintenance problem that
could be represented as a linear program which can be solved by standard numerical algorithms such as the simplex method. Chen and Trivedi (2005) used the value-iteration algorithm to compute the optimal inspection and maintenance policy. Moustafa et al. (2004) used the well-known policy-iteration algorithm and discussed how this algorithm compares to a sub-optimal control-limit type policy. For an overview of different computational approaches used to compute optimal policies for a variety of maintenance models, see Lam and Yeh (1994). To compute the optimal policy we develop a new modified policy-iteration algorithm which uses the embedded technique introduced by De Leve, Federgruen, and Tijms (1977). The embedded technique reduces computational efforts for large state spaces during the value-determination step of the policy-iteration algorithm.

The remainder of this paper is organized as follows. Section 2 introduces the formulation of the SMDP. Section 3 presents the modified policy-iteration algorithm using the embedded technique. Section 4 provides numerical examples which illustrate the implementation of the computational approach. Section 5 contains concluding remarks.
2. Model formulation

In this section we formulate a mathematical model of a multi-state deteriorating system that is subject to major and minor failures. In each state of the system a decision is made whether to perform a general repair. The process is modeled as an SMDP. This section is divided into four subsections which sequentially define the state space, the action space, the transition probabilities, and the expected costs and times associated with the SMDP.

2.1. The state space

We denote the set of operational states as O = {1, 2, ..., N}. State 1 represents the state of a new system. The degree of deterioration increases with each subsequent operational state. We suppose that there are K types of minor failures that can occur from any operational state. In real systems this may correspond to the failure of any of K minor components of a system, as opposed to a major system failure. If a minor failure of type i occurs from operational state j, then the state of the system is represented by Mij. We denote the set of all minor failure states as M = {Mij | i ∈ {1, 2, ..., K}, j ∈ O}. The state representing a major system failure is denoted as F. The system can enter failure state F from any operational state. Thus, the state space is S = O ∪ M ∪ {F}. Fig. 1 illustrates the relationship between the operational and failure states; the arrows represent the evolution of the system if no general repairs take place, which De Leve et al. (1977) referred to as the natural process.

Fig. 1. An illustration of the system evolution when no general repair takes place.

2.2. Action space

At each decision epoch, if the system is in some operational state i and a general repair, a, is performed, the system is repaired from state i to state i − a. Choosing a = 0 corresponds to performing no repair on the system. The action space for each operational state i is therefore given as A(i) = {a ∈ Z+ ∪ {0} | a ≤ i − 1}. If the system is in some minor failure state Mij, a repair Rij is done which repairs the system back to operational state j. If the system is in the major failure state F, a repair R takes place which repairs the system back to operational state 1. Thus, A(F) = {R} and A(Mij) = {Rij}. For each i ∈ S, after repair a ∈ A(i) is carried out, the system will be in repaired state

$$r_i(a) = \begin{cases} i - a, & \text{if } i \in O, \\ j, & \text{if } i = M_{kj} \in M, \\ 1, & \text{if } i = F. \end{cases} \qquad (1)$$

It should be noted that if for any operational state i we restrict A(i) = {0, 1, i − 1}, the action space reduces to the action space defined by Moustafa et al. (2004).

2.3. Transition probabilities

If the system is in some state i ∈ S and general repair a ∈ A(i) is performed, the system will be repaired to state r_i(a). For each i ∈ S and a ∈ A(i) we define p_ij(a) as the probability that the system will be in state j at the next decision epoch given that its current state is i and repair a is performed. Thus, we have p_ij(a) = p_{r_i(a),j}(0). Furthermore, for all operational states i and j, p_ij(0) = 0 whenever j ≤ i. To implement the model in real applications, it is necessary to determine the system states for a particular technical system and to estimate the cost components and the transition probabilities {p_ij(a)}. These can be estimated using the information in maintenance records for the particular system.
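As an illustration of Sections 2.1–2.3, the following Python sketch encodes the state space S = O ∪ M ∪ {F}, the action sets A(i), the repaired-state map r_i(a) of Eq. (1), and the derived transition probabilities p_ij(a) = p_{r_i(a),j}(0). The state encoding (integers for operational states, ('M', i, j) tuples for minor-failure states, 'F' for the major failure state) and the function names are illustrative choices rather than part of the model specification; the natural-process probabilities p0 would be estimated from maintenance records.

```python
# Illustrative encoding of the SMDP formulation (Section 2).
# Operational states are integers 1..N, minor failure states are tuples
# ('M', i, j) for failure type i occurring in operational state j, and 'F'
# is the major failure state.  p0[s][s'] holds the natural-process
# probabilities p_{ss'}(0), assumed to be supplied by the modeller.

def state_space(N, K):
    ops = list(range(1, N + 1))
    minors = [('M', i, j) for i in range(1, K + 1) for j in ops]
    return ops + minors + ['F']

def actions(state):
    """A(i): general repairs a = 0..i-1 for operational i, forced repairs otherwise."""
    if isinstance(state, int):                 # operational state i
        return list(range(0, state))           # {0, 1, ..., i-1}
    if state == 'F':
        return ['R']                           # replacement after a major failure
    _, i, j = state
    return [('R', i, j)]                       # minor repair R_ij

def repaired_state(state, action):
    """r_i(a) from Eq. (1)."""
    if isinstance(state, int):
        return state - action                  # i - a
    if state == 'F':
        return 1                               # replacement restores state 1
    return state[2]                            # M_kj is repaired back to state j

def transition_prob(p0, state, action, target):
    """p_ij(a) = p_{r_i(a), j}(0)."""
    return p0[repaired_state(state, action)].get(target, 0.0)
```

For example, state_space(3, 2) generates the ten-state space S = {1, 2, 3, M11, M21, M12, M22, M13, M23, F} used in the second numerical example of Section 4.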
2.4. Expected cost and sojourn time between epochs

Let l_i represent the expected time the system will remain in operational state i when no repair is carried out. When the system is in operational state i, the operating cost per unit time is denoted as d_i. For each state i ∈ S the mean cost and time to carry out repair a ∈ A(i) are denoted as b_i(a) and t_i(a), respectively. It is assumed that b_i(0) = t_i(0) = 0. Let m represent the idle cost per unit time while the system is undergoing general repair. Thus, for each
i ∈ S and a ∈ A(i), the mean cost incurred until the next decision epoch is

$$c_i(a) = b_i(a) + m\, t_i(a) + d_{r_i(a)}\, l_{r_i(a)}, \qquad (2)$$

and the mean sojourn time until the next decision epoch is

$$\tau_i(a) = t_i(a) + l_{r_i(a)}. \qquad (3)$$
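Given the repaired-state map and the cost data, Eqs. (2) and (3) translate directly into code. The sketch below assumes the repair costs b and times t are stored in dictionaries keyed by (state, action) pairs, with the do-nothing action absent (so it defaults to zero), and that d and l hold the operating cost rates and mean sojourn times of the operational states; these container and parameter names are illustrative only.

```python
# Mean one-step cost and sojourn time, Eqs. (2)-(3).
# r(state, action) is the repaired-state map of Eq. (1); m is the idle
# cost rate incurred per unit time while the system is under repair.

def mean_cost(state, action, r, b, t, d, l, m):
    """c_i(a) = b_i(a) + m*t_i(a) + d_{r_i(a)} * l_{r_i(a)}   (Eq. 2)"""
    j = r(state, action)
    return (b.get((state, action), 0.0)
            + m * t.get((state, action), 0.0)
            + d[j] * l[j])

def mean_sojourn(state, action, r, t, l):
    """tau_i(a) = t_i(a) + l_{r_i(a)}   (Eq. 3)"""
    j = r(state, action)
    return t.get((state, action), 0.0) + l[j]
```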
3. The computational approach

Our optimality criterion is to minimize the long-run expected average cost per unit time. This criterion has been widely used in the literature as well as in real applications. Maintenance cost minimization, which is directly related to equipment reliability improvement and failure reduction, is of main interest to maintenance managers. We denote the set of all stationary policies by Z. Every stationary policy z ∈ Z is a function which assigns to each state i ∈ S an action z(i) ∈ A(i). To determine an optimal maintenance policy for the formulated semi-Markov decision process we use a modified version of the policy-iteration algorithm, in which an embedded subset of the state space is defined in order to decrease the size of the linear system in the value-determination step. This can significantly reduce computational efforts for large state spaces. We first review the steps of the policy-iteration algorithm without the use of the embedded technique.
3.1. The policy-iteration algorithm
Step 0: Initialization. Choose any stationary policy z ∈ Z.

Step 1: Value-determination step. Compute g and v_i, i ∈ S, as the solution to the following system of linear equations:

$$v_i = c_i(z(i)) - g\,\tau_i(z(i)) + \sum_{j \in S} p_{ij}(z(i))\, v_j, \quad i \in S, \qquad (4)$$
$$v_s = 0 \quad \text{for some } s \in S.$$

Step 2: Policy-improvement step. For each state i ∈ S, using the values g and v_i, i ∈ S, obtained in the previous step, determine the action a_i ∈ A(i) that minimizes the expression

$$c_i(a_i) - g\,\tau_i(a_i) + \sum_{j \in S} p_{ij}(a_i)\, v_j. \qquad (5)$$

The new policy z̃ ∈ Z is obtained by taking z̃(i) = a_i for each i ∈ S.

Step 3: Convergence test. If z̃ = z, the iteration stops and the optimal stationary policy is z. Otherwise, return to Step 1 replacing z with z̃.
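For reference, the standard algorithm can be written down directly once p_ij(a), c_i(a), and τ_i(a) are available. The sketch below is an illustrative Python rendering rather than an implementation taken from the paper: value_determination solves the linear system (4) together with the normalizing equation v_s = 0, and policy_iteration alternates it with the improvement step (5). The argument names (trans, cost, sojourn, and the list states fixing the state ordering) are placeholders for the model quantities of Section 2.

```python
import numpy as np

def value_determination(P, c, tau, s=0):
    """Solve Eq. (4): v_i = c_i - g*tau_i + sum_j P[i,j]*v_j, with v_s = 0.

    P, c, tau are indexed by a fixed ordering of the state space under the
    current policy z, i.e. P[i, j] = p_ij(z(i)), c[i] = c_i(z(i)),
    tau[i] = tau_i(z(i)).  Returns (g, v)."""
    n = len(c)
    A = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    A[:n, :n] = np.eye(n) - P          # v_i - sum_j p_ij v_j ...
    A[:n, n] = tau                     # ... + g * tau_i
    rhs[:n] = c                        # = c_i
    A[n, s] = 1.0                      # normalizing equation v_s = 0
    x = np.linalg.solve(A, rhs)
    return x[n], x[:n]

def policy_iteration(states, actions, trans, cost, sojourn, z0):
    """Steps 0-3 of the standard algorithm for a stationary policy z."""
    z = dict(z0)
    while True:
        idx = {i: k for k, i in enumerate(states)}
        P = np.array([[trans(i, z[i], j) for j in states] for i in states])
        c = np.array([cost(i, z[i]) for i in states])
        tau = np.array([sojourn(i, z[i]) for i in states])
        g, v = value_determination(P, c, tau)
        # Step 2: policy improvement, Eq. (5)
        z_new = {}
        for i in states:
            z_new[i] = min(actions(i),
                           key=lambda a: cost(i, a) - g * sojourn(i, a)
                           + sum(trans(i, a, j) * v[idx[j]] for j in states))
        if z_new == z:                 # Step 3: convergence test
            return z, g, v
        z = z_new
```

Note that the linear system solved in each iteration has |S| + 1 unknowns (the relative values v_i and the average cost g); it is this system that the embedded technique reduces.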
The next two subsections develop the modified policy-iteration algorithm using the embedded technique. Section 3.2 defines the embedded set of states and the associated elements necessary in carrying out the value-determination step. Section 3.3 presents the modified value-determination step which will replace the value-determination step of the standard policy-iteration algorithm.

3.2. The embedded subset

Under policy z ∈ Z define the embedded set of states as E(z) = {i ∈ S | z(i) ≠ 0}. Thus E(z) is the set of all states in which repair takes place. To use the embedded technique we require that E(z) ≠ S and that E(z) be accessible from every state i ∈ S. Both of these clearly hold since state 1 ∉ E(z) and M ∪ {F} ⊆ E(z). We define p^{E(z)}_{ij}(z) as the probability that the first state the system will enter in the set E(z) is state j, given that the current state is i and policy z is used. If the initial state is an element of E(z), we take the first state entered in the set E(z) to be the first state entered upon the next return to E(z). To calculate p^{E(z)}_{ij}(z), E(z) is thought of as a set of absorbing states and the following system of linear equations is solved:

$$p_{ij}^{E(z)}(z) = p_{ij}(0) + \sum_{k \notin E(z)} p_{ik}(0)\, p_{kj}^{E(z)}(z), \quad i \notin E(z),\ j \in E(z). \qquad (6)$$

For the remaining states i ∈ E(z) we have

$$p_{ij}^{E(z)}(z) = \sum_{k \notin E(z)} q_{ik}(z)\, p_{kj}^{E(z)}(z), \quad i \in E(z),\ j \in E(z), \qquad (7)$$

where q_{ij}(z), i ∈ E(z) and j ∉ E(z), is the solution to the following system of equations:

$$q_{ij}(z) = p_{ij}(z(i)) + \sum_{k \in E(z)} p_{ik}(z(i))\, q_{kj}(z), \quad i \in E(z),\ j \notin E(z). \qquad (8)$$

Next we define c^{E(z)}_i(z) and τ^{E(z)}_i(z) as the expected cost incurred and the expected time until the system enters some state in E(z), given that the current state is i and policy z is used. For each i ∉ E(z) we compute c^{E(z)}_i(z) and τ^{E(z)}_i(z) by solving the following system of equations:

$$c_i^{E(z)}(z) = d_i l_i + \sum_{j \notin E(z)} p_{ij}(0)\, c_j^{E(z)}(z), \quad i \notin E(z), \qquad (9)$$
$$\tau_i^{E(z)}(z) = l_i + \sum_{j \notin E(z)} p_{ij}(0)\, \tau_j^{E(z)}(z), \quad i \notin E(z). \qquad (10)$$

For the remaining states i ∈ E(z), if r_i(z(i)) ∉ E(z),

$$c_i^{E(z)}(z) = c_{r_i(z(i))}^{E(z)}(z) + \bigl(b_i(z(i)) + m\, t_i(z(i))\bigr), \qquad (11)$$
$$\tau_i^{E(z)}(z) = \tau_{r_i(z(i))}^{E(z)}(z) + t_i(z(i)), \qquad (12)$$

and if r_i(z(i)) ∈ E(z) we solve the system of equations

$$c_i^{E(z)}(z) = c_i(z(i)) + \sum_{j \in S} p_{r_i(z(i)),j}(0)\, c_j^{E(z)}(z), \qquad (13)$$
$$\tau_i^{E(z)}(z) = \tau_i(z(i)) + \sum_{j \in S} p_{r_i(z(i)),j}(0)\, \tau_j^{E(z)}(z). \qquad (14)$$
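The quantities for the states outside E(z) can be computed with standard absorbing-chain calculations. The following sketch is an illustration under the equations above, with our own function and variable names: it treats E(z) as absorbing and solves Eqs. (6), (9) and (10) in matrix form; the quantities for the states inside E(z) would then follow from Eqs. (7)–(8) and (11)–(14).

```python
import numpy as np

def embedded_quantities(states, E, p0, d, l):
    """Solve Eqs. (6), (9) and (10) for the states outside E(z).

    p0[i][j] are the natural-process probabilities p_ij(0); d[i] and l[i]
    are the operating cost rate and mean sojourn time of operational
    state i.  Returns pE[i][j], cE[i], tauE[i] for i not in E(z), j in E(z)."""
    T = [i for i in states if i not in E]          # transient (no-repair) states
    A = [j for j in states if j in E]              # embedded states, treated as absorbing
    Q = np.array([[p0[i].get(j, 0.0) for j in T] for i in T])
    R = np.array([[p0[i].get(j, 0.0) for j in A] for i in T])
    M = np.linalg.inv(np.eye(len(T)) - Q)          # fundamental matrix
    PE = M @ R                                     # Eq. (6)
    cE = M @ np.array([d[i] * l[i] for i in T])    # Eq. (9)
    tauE = M @ np.array([l[i] for i in T])         # Eq. (10)
    return ({i: dict(zip(A, PE[k])) for k, i in enumerate(T)},
            dict(zip(T, cE)), dict(zip(T, tauE)))
```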
3.3. The modified value-determination step

Tijms (1994) showed that once E(z), p^{E(z)}_{ij}(z), c^{E(z)}_i(z), and τ^{E(z)}_i(z) are defined, the system of linear equations in (4) can be replaced by

$$v_i = c_i^{E(z)}(z) - g\,\tau_i^{E(z)}(z) + \sum_{j \in E(z)} p_{ij}^{E(z)}(z)\, v_j, \quad i \in E(z), \qquad (15)$$
$$v_s = 0 \quad \text{for some } s \in E(z).$$

The calculation of v_i for the remaining states i ∉ E(z) requires simply substituting values in the equations

$$v_i = c_i^{E(z)}(z) - g\,\tau_i^{E(z)}(z) + \sum_{j \in E(z)} p_{ij}^{E(z)}(z)\, v_j, \quad i \notin E(z). \qquad (16)$$
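The modified step is therefore a linear solve of dimension |E(z)| + 1 followed by direct substitution, which is the source of the computational savings for large state spaces. The sketch below is again an illustrative rendering with assumed data structures: pE, cE and tauE are dictionaries holding the embedded quantities of Section 3.2 for every state, and E is an ordered list of the embedded states.

```python
import numpy as np

def modified_value_determination(E, pE, cE, tauE, s=None):
    """Eq. (15): solve the reduced system on E(z) with v_s = 0."""
    n = len(E)
    s = E[0] if s is None else s
    A = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    for k, i in enumerate(E):
        A[k, :n] = [-pE[i].get(j, 0.0) for j in E]
        A[k, k] += 1.0                     # v_i - sum_{j in E} pE_ij v_j ...
        A[k, n] = tauE[i]                  # ... + g * tauE_i
        rhs[k] = cE[i]                     # = cE_i
    A[n, E.index(s)] = 1.0                 # v_s = 0 for some s in E(z)
    x = np.linalg.solve(A, rhs)
    return x[n], dict(zip(E, x[:n]))

def extend_values(states, E, pE, cE, tauE, g, v):
    """Eq. (16): v_i for i not in E(z) by direct substitution."""
    out = dict(v)
    for i in states:
        if i not in E:
            out[i] = cE[i] - g * tauE[i] + sum(pE[i].get(j, 0.0) * v[j] for j in E)
    return out
```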
4. Numerical examples

This section provides two numerical examples illustrating the use of the embedded technique. To demonstrate the correctness of the new algorithm developed in this paper, we first consider the same example as the one presented by Moustafa et al. (2004). The model presented in this paper includes as a special case the model presented by Moustafa et al. (2004), where only major failures were considered and only three types of maintenance decisions were allowed in each state of the system, namely do-nothing, minimal maintenance, or replacement. Thus, the use of the new algorithm based on the embedded technique to solve the numerical example presented by Moustafa et al. (2004) should yield identical results.

This numerical example considers a state space with 9 operational states (states 1–9), no minor failure states, and one major failure state (state 10). Table 1 provides the mean transition costs and times of the process. The natural evolution of the system is described by the one-step probability matrix
$$P = \begin{bmatrix}
0 & 0.28 & 0.2 & 0.15 & 0.12 & 0.09 & 0.07 & 0.05 & 0.03 & 0.01 \\
0 & 0 & 0.4 & 0.19 & 0.15 & 0.1 & 0.07 & 0.05 & 0.03 & 0.01 \\
0 & 0 & 0 & 0.32 & 0.23 & 0.17 & 0.12 & 0.08 & 0.05 & 0.03 \\
0 & 0 & 0 & 0 & 0.32 & 0.26 & 0.2 & 0.13 & 0.07 & 0.02 \\
0 & 0 & 0 & 0 & 0 & 0.56 & 0.2 & 0.14 & 0.08 & 0.02 \\
0 & 0 & 0 & 0 & 0 & 0 & 0.6 & 0.25 & 0.13 & 0.02 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.65 & 0.22 & 0.13 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.7 & 0.3 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix} \qquad (17)$$

Table 1
Expected costs and times.

State       1      2      3      4      5      6      7      8      9      10
l_i         100    90     80     70     60     50     40     30     20     –
d_i         10     13     16     19     22     25     28     31     34     –
b_i(R)      4500   4600   4900   5400   6100   7000   8100   9400   10,900 12,600
t_i(R)      20     21     23     26     30     35     41     48     56     65
b_i(R_i)    45     46     49     54     61     70     81     94     109    126
t_i(R_i)    0.2    0.21   0.23   0.26   0.3    0.35   0.41   0.48   0.56   0.65

We start with an initial policy z0 = {0, 0, 0, 0, 0, 0, 0, 0, 0, R} which performs no repairs in operational states. Thus, the embedded set of states is E(z0) = {10}. Using Eqs. (6)–(14) we obtain values for p^{E(z0)}_{ij}(z0), c^{E(z0)}_i(z0), and τ^{E(z0)}_i(z0) as summarized in Table 2. Carrying out the value-determination step as described in Section 3.3, we solve the system of equations in (15) and (16) and obtain v1 = 10241.5, v2 = 8781.2, v3 = 6785.7, v4 = 5343.4, v5 = 4332.7, v6 = 3044.8, v7 = 1925.1, v8 = 1070.3, v9 = 445.7, v10 = 0, and g = 56.28. After carrying out the policy-improvement step using Eq. (5) we get a new policy z1 = {0, 1, 1, 1, 1, R, R, R, R, R} ≠ z0. Thus, we return to Step 1 taking the initial policy to be z1. The policy obtained after every iteration is summarized in Table 3. We obtain a final optimal policy of z = {0, 1, 1, 1, 1, 1, 1, R, R, R} and a long-run expected average cost per unit time of g = 41.74. This is identical to the optimal policy and optimal expected cost rate obtained by Moustafa et al. (2004).

Table 2
Summary of embedded components.

State   p^{E(z)}_{i,10}(z)   c^{E(z)}_i(z)   τ^{E(z)}_i(z)
1       1                    4740.6          266.2
2       1                    4743.8          240.3
3       1                    4313.0          197.2
4       1                    3957.2          165.2
5       1                    3687.9          142.5
6       1                    3000.0          107.4
7       1                    2183.5          73.0
8       1                    1406.0          44.0
9       1                    680.0           20.0
10      1                    18640.6         331.2
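The entries of Table 2 for states 1–9 can be reproduced with a few lines of code, since under z0 state 10 is the only embedded state and is reached with probability one from every state, so p^{E(z0)}_{i,10}(z0) = 1. The script below is a verification sketch using the data of Table 1 and the upper-left block of the matrix in Eq. (17); the idle cost rate m is not listed in Table 1, so the row for state 10, which requires Eq. (11), is omitted.

```python
import numpy as np

# Data from Table 1 (operational states 1-9): mean sojourn times l_i and
# operating cost rates d_i.
l = np.array([100, 90, 80, 70, 60, 50, 40, 30, 20], dtype=float)
d = np.array([10, 13, 16, 19, 22, 25, 28, 31, 34], dtype=float)

# Natural-process probabilities among states 1-9 (upper-left block of Eq. (17)).
Q = np.array([
    [0, 0.28, 0.2, 0.15, 0.12, 0.09, 0.07, 0.05, 0.03],
    [0, 0,    0.4, 0.19, 0.15, 0.1,  0.07, 0.05, 0.03],
    [0, 0,    0,   0.32, 0.23, 0.17, 0.12, 0.08, 0.05],
    [0, 0,    0,   0,    0.32, 0.26, 0.2,  0.13, 0.07],
    [0, 0,    0,   0,    0,    0.56, 0.2,  0.14, 0.08],
    [0, 0,    0,   0,    0,    0,    0.6,  0.25, 0.13],
    [0, 0,    0,   0,    0,    0,    0,    0.65, 0.22],
    [0, 0,    0,   0,    0,    0,    0,    0,    0.7],
    [0, 0,    0,   0,    0,    0,    0,    0,    0],
])

M = np.linalg.inv(np.eye(9) - Q)     # fundamental matrix of Eqs. (9)-(10)
cE = M @ (d * l)                     # expected cost until reaching state 10
tauE = M @ l                         # expected time until reaching state 10
print(np.round(cE, 1))               # 4740.6 4743.8 4313.0 ... 680.0 (Table 2)
print(np.round(tauE, 1))             # 266.2  240.3  197.2  ...  20.0 (Table 2)
```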
Table 3
Summary of policies after each iteration.

Policy   1   2   3   4   5   6   7   8   9   10   g
z0       0   0   0   0   0   0   0   0   0   R    56.28
z1       0   1   1   1   1   R   R   R   R   R    42.77
z2       0   1   1   1   1   1   1   R   R   R    41.74
The previous numerical example considered a system with only a major failure state. In the next example we consider a deteriorating system with both major and minor failure states. In particular, we assume a state space S = {1, 2, 3, M11, M21, M12, M22, M13, M23, F}. The modified policy-iteration algorithm using the embedded technique is the computational approach used to calculate the optimal policy. We also determine the optimal policy by using the standard policy-iteration algorithm (without the use of the embedded technique) to validate the results obtained when using the embedded technique. Table 4 provides the mean transition costs and times of the process. The evolution of the system is described by the one-step transition probability matrix

$$P = \begin{bmatrix}
0 & 0.1 & 0.2 & 0.1 & 0.2 & 0 & 0 & 0 & 0 & 0.4 \\
0 & 0 & 0.1 & 0 & 0 & 0.1 & 0.2 & 0 & 0 & 0.6 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.1 & 0.2 & 0.7 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix} \qquad (18)$$

where the rows and columns are ordered as 1, 2, 3, M11, M21, M12, M22, M13, M23, F.
Table 4
Expected costs and times.

State       1     2     3     M11   M21   M12   M22   M13   M23   F
l_i         100   80    60    –     –     –     –     –     –     –
d_i         10    15    20    –     –     –     –     –     –     –
t_i(1)      –     20    30    –     –     –     –     –     –     –
b_i(1)      –     4000  4500  –     –     –     –     –     –     –
t_i(2)      –     –     50    –     –     –     –     –     –     –
b_i(2)      –     –     8000  –     –     –     –     –     –     –
t_i(Rij)    –     –     –     10    15    20    25    30    35    –
b_i(Rij)    –     –     –     3000  3500  4000  4500  5000  5500  –
t_i(R)      –     –     –     –     –     –     –     –     –     60
b_i(R)      –     –     –     –     –     –     –     –     –     9000
Table 5
Summary of embedded components.

State   c^{E(z)}_i(z0)   τ^{E(z)}_i(z0)   c^{E(z)}_i(z1)   τ^{E(z)}_i(z1)
1       1372             120.6            1000             100
2       1320             86               1300             120
3       1200             60               1700             150
M11     1722             130              1350             110
M21     1822             135.6            1450             115
M12     1870             106              1550             120
M22     1970             111              1650             125
M13     1950             90               1750             130
M23     2050             95               1850             135
F       2972             180.6            2600             160
Table 6
Summary of embedded components, p^{E(z0)}_{i,j}(z0).

State   M11    M21    M12    M22    M13     M23     F
1       0.1    0.2    0.01   0.02   0.021   0.042   0.607
2       0      0      0.1    0.2    0.01    0.02    0.67
3       0      0      0      0      0.1     0.2     0.7
M11     0.1    0.2    0.01   0.02   0.021   0.042   0.607
M21     0.1    0.2    0.01   0.02   0.021   0.042   0.607
M12     0      0      0.1    0.2    0.01    0.02    0.67
M22     0      0      0.1    0.2    0.01    0.02    0.67
M13     0      0      0      0      0.1     0.2     0.7
M23     0      0      0      0      0.1     0.2     0.7
F       0.1    0.2    0.01   0.02   0.021   0.042   0.607
Table 7
Summary of embedded components, p^{E(z1)}_{i,j}(z1).

State   2     3     M11   M21   M12   M22   M13   M23   F
1       0.1   0.2   0.1   0.2   0     0     0     0     0.4
2       0.1   0.2   0.1   0.2   0     0     0     0     0.4
3       0.1   0.2   0.1   0.2   0     0     0     0     0.4
M11     0.1   0.2   0.1   0.2   0     0     0     0     0.4
M21     0.1   0.2   0.1   0.2   0     0     0     0     0.4
M12     0.1   0.2   0.1   0.2   0     0     0     0     0.4
M22     0.1   0.2   0.1   0.2   0     0     0     0     0.4
M13     0.1   0.2   0.1   0.2   0     0     0     0     0.4
M23     0.1   0.2   0.1   0.2   0     0     0     0     0.4
F       0.1   0.2   0.1   0.2   0     0     0     0     0.4
Table 8
Summary of policies after each iteration.

Policy   1   2   3   M11   M21   M12   M22   M13   M23   F   g
z0       0   0   0   R11   R21   R12   R22   R13   R23   R   16.04
z1       0   1   2   R11   R21   R12   R22   R13   R23   R   14.40
We start with an initial policy z0 = {0, 0, 0, R11, R21, R12, R22, R13, R23, R} which performs no repair in operational states. Thus, the embedded set of states is E(z0) = {M11, M21, M12, M22, M13, M23, F}. Using Eqs. (6)–(14) we obtain values for p^{E(z0)}_{ij}(z0), c^{E(z0)}_i(z0), and τ^{E(z0)}_i(z0) as summarized in Tables 5–7. Carrying out the value-determination step as described in Section 3.3, we solve
the system of equations in (15) and (16). Choosing vF = 0 we obtain v1 = 637.58, v2 = 50.52, v3 = 460.08, vM11 = 448.01, vM21 = 428.20, vM12 = 279.66, vM22 = 299.45, vM13 = 728.82, vM23 = 748.61, and g = 16.04. After carrying out the policy-improvement step using Eq. (5) we get a new policy z1 = {0, 1, 2, R11, R21, R12, R22, R13, R23, R} ≠ z0. Thus, we return to Step 1 taking the initial policy to be z1. The policies obtained after each iteration are summarized in Table 8. We obtain the final optimal long-run expected average cost per unit time g = 14.40.

Next we compute the optimal policy using the standard policy-iteration algorithm without the use of the embedded technique. We start with the same initial policy z0 = {0, 0, 0, R11, R21, R12, R22, R13, R23, R} which performs no repairs in operational states. Table 9 summarizes the values of vi and g obtained during each iteration as well as the new policy obtained at the end of the iteration. We see that the results obtained when using the policy-iteration algorithm without the use of the embedded technique are consistent with the results obtained when the embedded technique is used. In both cases, the optimal maintenance policy is given as z1 = {0, 1, 2, R11, R21, R12, R22, R13, R23, R} ≠ z0 with the optimal cost rate g = 14.40.
5. Conclusions and future research

This paper presented a general repair model for a multi-state deteriorating system subject to major and minor failures. The problem was formulated as a semi-Markov decision process with the optimality criterion being the minimization of the long-run expected average cost per unit time. A new modified policy-iteration algorithm using the embedded technique was developed as the computational approach used to find the optimal maintenance policy. The advantage of using the embedded technique is the reduction of the size of the linear system in the value-determination step of the algorithm, resulting in reduced computational efforts for large state spaces which are common in real applications where the use of the traditional computational techniques is not feasible.

Two numerical examples were presented. To verify the correctness of the new algorithm, it was first applied to the numerical example presented by Moustafa et al. (2004) and identical results have been obtained. The second numerical example showed that results obtained when using the policy-iteration algorithm with and without the embedded technique are identical, further confirming the correctness of the algorithm developed in this paper.

To implement the model and the algorithm in real applications, it is necessary to determine the number of states for a particular technical system and to estimate the model parameters, namely the cost components and the transition probabilities, using the information in maintenance records. Thus, a suggestion for future research is the development of a case study using real maintenance data utilizing the model and the fast computational algorithm presented in this paper. This should also lead to a further refinement of both the model and the algorithm.
Table 9
Summary of values and policies determined after each iteration.

         1        2        3        M11      M21      M12      M22      M13      M23      F      g
z0       0        0        0        R11      R21      R12      R22      R13      R23      R
v_i      637.58   50.52    460.08   448.01   428.21   279.66   299.45   728.82   748.61   0      16.04
z1       0        1        2        R11      R21      R12      R22      R13      R23      R
v_i      736.30   724.20   354.54   530.25   502.23   515.21   443.24   942.93   970.96   0      14.40
z2       0        1        2        R11      R21      R12      R22      R13      R23      R
References

Castanier, B., Berenguer, C., & Grall, A. (2003). A sequential condition-based repair/replacement policy with non-periodic inspections for a system subject to continuous wear. Applied Stochastic Models in Business and Industry, 19, 327–347.
Chen, M., & Feldman, R. M. (1997). Optimal replacement policies with minimal repair and age-dependent costs. European Journal of Operational Research, 98, 75–84.
Chen, D., & Trivedi, K. S. (2005). Optimization for condition-based maintenance with semi-Markov decision process. Reliability Engineering and System Safety, 90, 25–29.
Childress, S., & Durango-Cohen, P. (2005). On parallel machine replacement problems with general replacement cost functions and stochastic deterioration. Naval Research Logistics, 52, 409–419.
De Leve, G., Federgruen, A., & Tijms, H. C. (1977). A general Markov decision method I: Model and techniques. Advances in Applied Probability, 9, 296–315.
Jayakumar, A., & Asgarpoor, S. (2006). Maintenance optimization of equipment by linear programming. Probability in the Engineering and Informational Sciences, 20, 183–193.
Lam, C. T., & Yeh, R. H. (1994). Optimal maintenance-policies for deteriorating systems under various maintenance strategies. IEEE Transactions on Reliability, 43, 423–430.
Love, C. E., Zhang, Z. G., Zitron, M. A., & Guo, R. (2000). A discrete semi-Markov decision model to determine the optimal repair/replacement policy under general repair. European Journal of Operational Research, 125, 398–409.
Moustafa, M. S., Abdel Maksoud, E. Y., & Sadek, S. (2004). Optimal major and minimal maintenance policies for deteriorating systems. Reliability Engineering & System Safety, 83, 363–368.
Tijms, H. C. (1994). Stochastic models – An algorithmic approach. New York: John Wiley.
Valdez-Flores, C., & Feldman, R. M. (1989). A survey of preventive maintenance models for stochastically deteriorating single-unit systems. Naval Research Logistics, 36, 419–446.
Wang, H. (2002). A survey of maintenance policies of deteriorating systems. European Journal of Operational Research, 139, 469–489.