Time-critical dynamic decision modeling in medicine

Yanping Xiang, Kim-Leng Poh ∗
Department of Industrial and Systems Engineering, National University of Singapore, 10 Kent Ridge Crescent, Singapore 119260, Singapore

Computers in Biology and Medicine 32 (2002) 85–97
www.elsevier.com/locate/compbiomed

Received 18 September 2001; accepted 3 December 2001

Abstract

Many real-world medical applications require timely actions to be taken in time-pressured situations. Existing approaches to dynamic decision modeling have provided relatively efficient methods for representation and reasoning, but the process of computing the optimal solution has remained intractable. A major reason for this difficulty is the lack of models that are capable of modeling temporal processes and dealing with time-critical situations. This paper presents a formalism called the time-critical dynamic influence diagram that provides the capability for both temporal and space abstraction. To deal with time criticality, we exploit the concepts of space and temporal abstraction to reduce the computational complexity, and we propose an anytime algorithm for the solution process. Throughout the paper, we illustrate the various approaches with a medical problem on the treatment of cardiac arrest. © 2002 Elsevier Science Ltd. All rights reserved.

Keywords: Medical decision-support systems; Model abstraction; Temporal representation; Time-critical dynamic decision making

1. Introduction

The goal of dynamic decision making is to select an optimal course of action that satisfies some objectives in a time-dependent environment. The decisions may be made in di

∗ Corresponding author. Tel.: +65-874-2193; fax: +65-777-1434. E-mail address: [email protected] (K.-L. Poh).

0010-4825/02/$ - see front matter © 2002 Elsevier Science Ltd. All rights reserved. PII: S0010-4825(01)00036-1


solution processes will have very little or no time left to carry out any action. This problem is particularly significant for large and complex models involving temporal relations and many uncertainties. In the medical domain, for example, a doctor treating a patient experiencing a cardiac arrest must decide on the optimal or most appropriate course of treatment under very limited time, as the patient's condition may deteriorate rapidly over time. Existing dynamic decision formalisms are inadequate in their capabilities to model and solve these time-critical problems. A more e
[Fig. 1. An example of a TDID in condensed form: chance nodes Y (time indices <1,2,3,4>) and X (time indices <1,3>), decision node D, and utility node U.]

[Fig. 2. A deployed form: time points t1=1 to t4=4, with nodes Y1–Y4, X1–X4, D1–D4, and U1–U4.]

A TDID model has two forms: the condensed form and the deployed form. The condensed form is used mainly in the modeling process, whereas the deployed form, which may be constructed from the condensed form, is used mainly for inference purposes. Although, in principle, the condensed and deployed forms can be converted to and from each other, they play di

its values only at time points 1 and 3. Its values at the other time points, 2 and 4, are not explicitly represented. The use of temporal abstraction is evident from the deployed form of the model as shown in Fig. 2, where for variable Y, nodes Y1, Y2, Y3 and Y4 are explicitly represented, while for variable X, only nodes X1 and X3 are explicitly represented as probabilistic nodes; nodes X2 and X4 are assumed to be deterministically dependent on (possibly equal to) nodes X1 and X3, respectively.

2.2. Definition of TDID

We shall now provide a formal definition of time-critical dynamic influence diagrams. A fully specified time-critical dynamic influence diagram is defined as a 9-tuple ⟨D, C, V, Tm, Ai, At, G, P, Trd⟩ where:

D is a set of temporal decision variables. Each D ∈ D is a sequence of decision variables indexed by a time sequence TD and is represented in the graph by a square node.

C is a set of temporal chance variables. Each C ∈ C is a sequence of chance variables indexed by a time sequence TC and is represented in the graph by an oval node.

V is a temporal utility variable. It is a sequence of utility functions indexed by a time sequence TV. V is represented in the graph by a diamond node.

Tm is the master time sequence. A time sequence is a set of time indices t1, …, tn, where t1 is the initial time point of interest and tn is the last time point of interest. Let TD = {TD | D ∈ D}, TC = {TC | C ∈ C}, and TV = Tm. Each time sequence in TD and TC must be a subsequence of the master time sequence.

Ai ⊆ (D ∪ C) × (D ∪ C ∪ {V}) is a set of instantaneous arcs such that (X, Y) ∈ Ai if and only if there exists an instantaneous arc from node X ∈ (D ∪ C) to node Y ∈ (D ∪ C ∪ {V}). An instantaneous arc is represented in the graph by a solid directed arc.

At ⊆ (D ∪ C) × (D ∪ C ∪ {V}) is a set of time-lag arcs such that (X, Y) ∈ At if and only if there exists a time-lag arc from node X ∈ (D ∪ C) to node Y ∈ (D ∪ C ∪ {V}). A time-lag arc is represented in the graph by a dashed (broken) directed arc.

G is the temporal unit granularity, representing the time step in our model.

P is a set of conditional probability distributions. For each chance node X ∈ C, we assess a sequence of conditional probability distributions p(Xi | π(Xi)), where i ∈ TX and π(Xi) = {Yj | (Y, X) ∈ At, j = max{k | k ∈ TY, k < i}} ∪ {Yj | (Y, X) ∈ Ai, j = i}.

Trd is a deployment transformation, described below.

Given a TDID ⟨D, C, V, Tm, Ai, At, G, P, Trd⟩, we define its condensed form to be FC = ⟨D, C, V, Tm, Ai, At, G, P⟩. The condensed form of a TDID contains information on causal processes and the way they behave and evolve over time. This information is represented in a compact form and has to be unfolded over time to perform inference. The unfolded representation is called the deployed form of the TDID. The transformation from the condensed form to the deployed form is called a deployment transformation. We define a number of graphs associated with the TDID to facilitate the transformation of the model from the condensed to the deployed form. The single graph Gs of a TDID is a directed graph with nodes corresponding to the three types of variables, namely, temporal decision variable,

[Fig. 3. A simplified deployed form with some time indices omitted: nodes Y1, Y3, X1, X3, D1, D3, U1, U3.]

temporal chance variable and temporal utility variable, and arcs corresponding to instantaneous arcs. We say that a single graph Gs is acyclic if and only if it contains no cycle. The condensed graph Gc of a TDID is a directed graph with the same nodes as the single graph but with arcs corresponding to both instantaneous arcs and time-lag arcs. We define the deployed graph of a TDID to be the graph obtained by applying the first four steps of the deployment transformation to the condensed graph. The steps in the deployment transformation are as follows:

1. The time pattern for the deployed form is determined by the master time sequence Tm.
2. The single graph is replicated N times, where N is the number of time steps in the master time sequence. Let Gsi be the ith single graph for i = 1, …, N.
3. Connect the nodes in two di
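The replication and connection steps above can be sketched in code. The following is a minimal illustration under the assumption that time-lag arcs connect each slice to the next one in the master time sequence (as suggested by Fig. 2); the function and parameter names are our own, not from the paper.

```python
def deploy(master_time, nodes, instantaneous_arcs, time_lag_arcs):
    """Unfold a condensed graph into a deployed graph.

    master_time: list of time points t1..tN
    nodes: iterable of variable names
    instantaneous_arcs / time_lag_arcs: sets of (parent, child) name pairs
    Returns (deployed_nodes, deployed_arcs); each deployed node is a
    (variable, time) pair.
    """
    # Step 2: replicate the single graph once per time point.
    deployed_nodes = [(v, t) for t in master_time for v in nodes]
    deployed_arcs = set()
    # Within each replica, copy the instantaneous (same-slice) arcs.
    for t in master_time:
        for (p, c) in instantaneous_arcs:
            deployed_arcs.add(((p, t), (c, t)))
    # Step 3: connect consecutive slices with the time-lag arcs.
    for t_prev, t_next in zip(master_time, master_time[1:]):
        for (p, c) in time_lag_arcs:
            deployed_arcs.add(((p, t_prev), (c, t_next)))
    return deployed_nodes, deployed_arcs
```

For a single graph with nodes X, D, U, instantaneous arcs X→U and D→U, and a time-lag arc X→X, deploying over three time points yields nine nodes and eight arcs, including ((X,1),(X,2)).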

[Fig. 4. TDID condensed form for treatment of cardiac arrest: nodes CD, POA, CBF, CR, Med & Inter, and U.]

are equal to 1, 3. In general, we say that a TDID is well-defined if and only if its corresponding single graph is acyclic.

2.3. Application of TDID modeling to the cardiac arrest problem

We shall model the cardiac arrest problem [13] using time-critical dynamic influence diagrams. In this problem, the goal of the medical treatment is to maintain life and to prevent anoxic injury to the brain. The observable variable is the electrocardiogram or rhythm strip (CR). While patient survival is of primary importance, cerebral damage must be taken into account and can be viewed as part of the cost of a resuscitation attempt. The length of time that the patient has been without cerebral blood flow (CBF) determines the period of anoxia (POA). If the patient has ine
[Fig. 5. Solution form for TDID: time points t1–t3, with nodes Y1–Y3, X1–X3, D1–D3, and V1–V3.]

after action at time 1 influences the rhythm before action at time 3. Similar time-lagged probabilistic relations may be read from the other broken arcs in Fig. 4.

3. Solving time-critical dynamic influence diagrams

In this section, we describe an anytime algorithm for solving TDIDs. Detailed descriptions of other available algorithms, including their e

The policy is refined by choosing the optimal temporal interval and then adding a time slice in that interval. The anytime algorithm is as follows:

1. Select an initial model with one temporal interval (N = 2).
2. For each stage, compute its transition value

   Trv(i, m, t) = Σ_{j ∈ Y_{t+1}, x ∈ X_t} p(j | x, m) p(x | i, d) Tv(x, j, m),  t ∈ Tm,   (1)

   where Tv(x, j, m) is the transition value function with parameters x, j, m. Form a queue of all the transition values generated.
3. Repeat
   • Find the transition with the greatest transition value, max_t Trv(i, m, t).
   • Add a time slice in that transition. If there is no additional information available, add a stage at the mid-point of the transition.
   • Use value iteration (see details below) to evaluate the resulting network.
   • Record the decision recommendation for this iteration.
   • Update the queue of transition values.
   Until interrupted.

The value iteration step is based on dynamic programming [16,17,1]. The value of the optimal policy at time t ∈ Tm for state i ∈ Y_t satisfies the equation

   V_t(i) = max_{d ∈ D_t} { r_t(i, d) + Σ_{j ∈ Y_{t+1}, x ∈ X_t} p(j | x, m) p(x | i, d) (Tv(x, j, m) + V_{t+1}(j)) }   (2)

with the terminal condition V_{N+1}(·) = 0. Here N is the total number of stages; x stands for the state after action; m stands for the duration between the current time slice and the next time slice; j stands for the state before action at the next time slice; r_t(i, d) = Σ_{x ∈ X_t} p(x | d, i) v(x, i, d) denotes the expected one-step return at the tth time slice, where v is a function with parameters d, x, i; and Tv(x, j, m) is the transition value function with parameters x, j, m.

The anytime algorithm is interruptible. It returns the current best policy when the decision-maker interrupts the process to take immediate action. In each refinement step, the current abstraction model is refined, and the computational results from the previous step can be reused. In the worst case, the complexity of one refinement step is O(|D Y|^2).

3.2. Anytime solution of the cardiac arrest problem

We shall now apply the approach described in the previous section to solve the cardiac arrest problem of Section 2.3. In this example, we have used a value function of the form

   v(x, i, d) = (value(x) + value(i))/2.   (3)

The values of this function for all states, as well as the transition value function Tv(x, j, m), are estimated with the help of medical experts.
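As an illustration of the value-iteration step in Eqs. (2) and (3), the sketch below computes V_t(i) backwards from the terminal condition V_{N+1}(·) = 0. It simplifies the paper's setting by using one shared state set for the before-action and after-action states and by taking the lag m for the final slice to be one time unit; all names and table layouts here are our own assumptions, not the authors' implementation.

```python
def value_iteration(slices, states, actions, p_after, p_next, v, tv):
    """Backward recursion of Eq. (2).

    slices: sorted time points of the current (abstracted) model
    p_after[d][i][x]: p(x | i, d), after-action state x given state i, action d
    p_next[m][x][j]:  p(j | x, m), state j at the next slice after lag m
    v(x, i, d):       one-step value; r_t(i, d) = sum_x p(x | d, i) v(x, i, d)
    tv(x, j, m):      transition value function Tv(x, j, m)
    Returns the value table V[t][i] and the greedy policy[t][i].
    """
    V = {slices[-1] + 1: {i: 0.0 for i in states}}  # terminal: V_{N+1}(.) = 0
    policy = {}
    for k in range(len(slices) - 1, -1, -1):
        t = slices[k]
        last = k + 1 >= len(slices)
        m = 1 if last else slices[k + 1] - t        # lag to the next slice
        nxt = V[slices[-1] + 1] if last else V[slices[k + 1]]
        V[t], policy[t] = {}, {}
        for i in states:
            best_d, best_val = None, float("-inf")
            for d in actions:
                total = 0.0
                for x in states:
                    px = p_after[d][i][x]
                    total += px * v(x, i, d)        # expected one-step return
                    for j in states:
                        total += px * p_next[m][x][j] * (tv(x, j, m) + nxt[j])
                if total > best_val:
                    best_d, best_val = d, total
            V[t][i], policy[t][i] = best_val, best_d
    return V, policy
```

In the anytime loop, this evaluation would be rerun after each time slice is inserted, and the recorded policy for the first slice is the recommendation returned on interruption.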

[Fig. 6. Typical profile of the expected utility obtained with the anytime algorithm under uncertain deadline: utility (0–1) increases with run time.]

When a patient experiences pain in her chest and left arm, a medical doctor runs the algorithm to obtain the optimal solution. Action will be taken when the patient experiences severe pain; thus, in this problem, the deadline is uncertain. The anytime algorithm starts from an initial model (the condensed form was shown in Fig. 4, and the master time sequence is 1, 10). The model is refined repeatedly in each refinement step. Fig. 6 shows the value of the optimal policy at any time. The utility increases with increasing computation time, and the computation can be interrupted at any time to supply the current best policy. The result reflects the trade-off between computation cost and decision quality. In our experiment, we also noted that the computational situation a

[Fig. 7. Space abstraction of the TDID model for cardiac arrest: nodes CR, Med & Inter, and U.]

[Fig. 8. Temporal abstraction of the TDID model for cardiac arrest: nodes CR, Med & Inter, and U, replicated over time.]
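Following the idea of Section 2.1 (Fig. 2), where X2 and X4 are treated as deterministically dependent on X1 and X3, temporal abstraction of a variable can be sketched as coarsening its time sequence. The helper below is purely illustrative; its name and interface are our own assumptions.

```python
def abstract_time_sequence(time_sequence, keep):
    """Split a variable's time sequence into explicitly represented points
    and abstracted points, mapping each abstracted point to the latest
    retained point on which it deterministically depends."""
    retained = sorted(t for t in time_sequence if t in keep)
    inherited = {}
    for t in time_sequence:
        if t not in keep:
            earlier = [r for r in retained if r < t]
            if earlier:
                inherited[t] = max(earlier)  # copy the most recent retained slice
    return retained, inherited
```

For the variable X of Section 2.1, abstract_time_sequence([1, 2, 3, 4], {1, 3}) returns ([1, 3], {2: 1, 4: 3}), matching the deployed form in Fig. 2.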

Unlike most existing work, which focuses on abstracting the state space, our focus here is on abstracting the network structure. In our approach, the model abstraction task is decomposed into three sub-tasks: context interpretation, space abstraction, and temporal abstraction. These tasks are supported by a domain knowledge base. Context interpretation is a set of relevant interpretation contexts, such as the relation between temporal patterns and the context of states in di
[Fig. 9. Temporal abstraction: nodes CR, Med & Inter, and U.]

5. Summary and conclusion

Our work here is related to a number of previous works as well as some ongoing ones. The idea of representing a temporal sequence of probabilistic models in a compact form was reported by Aliferis et al. [22–24]. These related works focused mainly on Bayesian networks, while our work here includes temporal decisions. The use of anytime algorithms and meta-reasoning for directing the course of computation has been investigated by Horvitz [25], Russell [26], and Horsch [27]. Horvitz [25] explored desirable properties of flexible computation for problems under varying and uncertain time constraints and analyzed the problem of computing the expected value of computation (EVC). The idea of using the expected value of refinement and the value of computation to direct model refinement and abstraction was described in [28]. In this paper, we have proposed a domain-independent approach to time-critical dynamic decision making. A knowledge and model representation scheme called the time-critical dynamic influence diagram was proposed. This formalism represents a dynamic decision model using two forms: the condensed form is used mainly for modeling and space–temporal abstraction, while the deployed form is used mainly for inference purposes. We have applied this formalism to model a medical problem on cardiac arrest. An anytime algorithm was described to solve time-critical dynamic influence diagrams and provide optimal solutions to time-critical problems with uncertain deadlines. The approach o

intractable. A major reason for this difficulty is the lack of models that are capable of modeling temporal processes and dealing with time-critical situations where a trade-off between decision quality and computational tractability is essential. Hence, a more e
[16] D.J. White, Markov Decision Processes, Wiley, Chichester, 1992.
[17] R.E. Bellman, Dynamic Programming, Princeton University Press, Princeton, 1957.
[18] R.A. Howard, Dynamic Programming and Markov Processes, The MIT Press, Cambridge, MA, 1960.
[19] J.H. Nguyen, Y. Shahar, S.W. Tu, A.K. Das, M.A. Musen, A temporal database mediator for protocol-based decision support, AMIA Annual Fall Symposium, Nashville, TN, 1997, pp. 298–302.
[20] Y. Shahar, Knowledge-based interpolation of time-oriented clinical data, Report No. SMI-97-0661, Stanford Medical Informatics, Stanford University, 1997.
[21] Y. Shahar, M.A. Musen, Knowledge-based temporal abstraction in clinical domains, Artif. Intell. Med. 8 (1996) 276–298.
[22] C.F. Aliferis, G.F. Cooper, M.E. Pollack, B.G. Buchanan, M.M. Wagner, Representing and developing temporally abstracted knowledge as a means towards facilitating time modeling in medical decision-support systems, Comput. Biol. Med. 27 (5) (1997) 411–434.
[23] C.F. Aliferis, G.F. Cooper, A structurally and temporally extended Bayesian belief network model: definitions, properties, and modelling technique, in: E. Horvitz, F.V. Jensen (Eds.), Proceedings of the 12th Conference on Uncertainty in Artificial Intelligence, Morgan Kaufmann, San Francisco, 1996, pp. 28–39.
[24] C.F. Aliferis, G.F. Cooper, A new formalism for time modeling in medical decision-support systems, Proceedings of the 19th Annual Symposium on Computer Applications in Medical Care, New Orleans, Hanley & Belfus, Philadelphia, 1995, pp. 213–217.
[25] E.J. Horvitz, Computation and action under bounded resources, Ph.D. Thesis, Stanford University, 1990.
[26] S.J. Russell, Do the Right Thing: Studies in Limited Rationality, MIT Press, Cambridge, 1991.
[27] M.C. Horsch, Flexible policy construction by information refinement, Ph.D. Dissertation, Department of Computer Science, University of British Columbia, 1998.
[28] K.L. Poh, E.J. Horvitz, Reasoning about the value of decision-model refinement: methods and application, in: D. Heckerman, A. Mamdani (Eds.), Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence, Morgan Kaufmann, San Francisco, 1993, pp. 174–182.

Yanping Xiang obtained her Bachelor's and Master's degrees in Computer Science and Engineering from Shanghai Jiao Tong University in 1993 and 1996, respectively. She is currently a Ph.D. candidate in the Department of Industrial & Systems Engineering at the National University of Singapore.

Kim-Leng Poh is an Associate Professor in the Department of Industrial & Systems Engineering of the National University of Singapore. He obtained his Ph.D. in Engineering-Economic Systems from Stanford University in 1993. His research interests include automated decision making under resource constraints in medicine and engineering.