Expert Systems With Applications, Vol. 6, pp. 411-420, 1993
0957-4174/93 $6.00 + .00 © 1993 Pergamon Press Ltd
Printed in the USA
SIMON: A Distributed Computer Architecture for Intelligent Patient Monitoring

BENOIT M. DAWANT, SERDAR UCKUN, ERIC J. MANDERS, AND DANIEL P. LINDSTROM

Vanderbilt University, Nashville, TN
Abstract--Intelligent real-time patient monitoring encompasses data acquisition and reduction, sensor validation, diagnosis, therapy advice, and selective display of information. This paper describes the architecture and the functionality of a prototype intelligent patient monitoring system, named SIMON, designed to meet these requirements. In SIMON, the various aspects of a monitoring task are performed by three semi-independent modules running asynchronously: the feature extraction, the patient model, and the display modules. Central to SIMON is the notion of context sensitivity, which permits (a) the adaptation of the monitoring strategy in response to changes either in the patient state or in the monitoring equipment, and (b) the contextual interpretation of incoming data. SIMON is currently applied to the task of monitoring newborn infants with respiratory distress syndrome (RDS) and undergoing assisted ventilation.
1. INTRODUCTION

Instruments and techniques for intensive care
monitoring continue to evolve at a steady pace. As a result, health care professionals working in intensive care units (ICUs) face the challenge of interpreting vast amounts of patient data in relatively short periods of time. Although conventional computing resources are invaluable in storing, retrieving, processing, and displaying patient information, most of the interpretation task remains the responsibility of ICU personnel. Examples of these tasks are diagnosis of clinical states, assessment of prognosis, explanation of causal mechanisms, interpretation of alarms, and management of therapy. In the past decade, a number of artificial intelligence (AI) techniques and methods have been developed in order to assist ICU personnel with one or more of these information management and decision-making tasks. Such AI systems are commonly referred to as intelligent monitoring systems, although the term is not specific to medical domains. This paper describes the architecture and the functionality of an intelligent patient monitoring system named SIMON. The acronym stands for Signal Interpretation and MONitoring. SIMON is implemented as a series of independent and asynchronous processes performing the tasks of collecting, processing, and analyzing data, interpreting patient status, predicting the evolution of monitored variables and parameters, and selectively displaying information. SIMON is currently applied to the task of monitoring newborn infants with respiratory distress syndrome (RDS) and undergoing assisted ventilation.
Requests for reprints should be sent to Benoit M. Dawant, Vanderbilt University, P.O. Box 1824, Station B, Nashville, TN 37235.
2. BACKGROUND
Following the pioneering work of Fagan, Shortliffe, and Buchanan (1984) on VM, the last ten years have witnessed a number of research efforts aiming at developing intelligent monitoring systems in the medical domain. VM is a rule-based expert system designed to interpret on-line quantitative data about postsurgical patients undergoing mechanical ventilation. COMPAS (Sittig, Pace, Gardner, Beck, & Morris, 1989), Respaid (Chambrin et al., 1989), and VentPlan (Rutledge et al., 1989) are other examples of systems designed to assist respiratory therapy management. COMPAS is implemented as a large rule-based system; it has been the subject of an extensive validation study, and a 90% level of agreement between expert judgment and the system's advice has been reported. VentPlan combines quantitative and qualitative modeling and simulation techniques to provide therapy plan recommendations for patients in the intensive care unit. Respaid is designed to interpret on-line data on patients undergoing assisted ventilation. By analyzing a vector of parameters every five seconds, it is capable of estimating dynamic states which are subsequently used to interpret changes in ventilator parameters and to activate alarms. Respaid is currently in use as a consultation system. A number of systems such as Poni (Garfinkel, Matsiras, Mavrides, McAdams, & Aukburg,
1989) or DYNASCENE (Cohn et al., 1989) have also been developed for other monitoring tasks. Poni, a rule-based system, provides a framework for comparing and correlating the output of several devices measuring related parameters. The main objective of the Poni project is the elimination of false alarms in the operating room. Finally, DYNASCENE is able to interpret time-varying signals and clinical states in the context of cardiovascular monitoring. It is based on the concept of "scenes," which represent the temporal evolution of hemodynamic abnormalities.

Although generally successful in their respective domains of application, the aforementioned systems are not designed to handle the entire spectrum of activities future monitoring systems will have to coordinate. As recently proposed by Hayes-Roth et al. (1992), the next generation of monitoring systems should be able to continuously sense and interpret a wide variety of data channels, summarize these data, and predict the evolution of the monitored parameters. They should be able to detect, diagnose, and propose corrective actions for abnormal conditions and equipment malfunctions requiring immediate response. They should be capable of proposing, refining, and revising short-term and long-term management plans, of suggesting parameter adjustments on assisting devices, and of selectively filtering and displaying information, and they should be fitted with explanation facilities. Finally, they should be able to prioritize these diverse activities to meet real-time constraints.

Guardian (Hayes-Roth et al., 1992), the Intelligent Cardiovascular Monitor (ICM) (Factor et al., 1989), and SIMON (the system described herein) are long-term research projects which address some of these issues, albeit from different perspectives. Guardian is developed as an application of a general architecture for intelligent agents (Hayes-Roth, 1990) and is currently applied to respiratory and cardiovascular monitoring problems. In the Guardian project, which grows out of a long history of research on reasoning and problem solving, much emphasis is put on the generic problem of coordinating and selecting one among possible perceptual, cognitive, or action operations while meeting real-time constraints. The ICM system has been developed using a domain-independent software architecture referred to as the process trellis. In the ICM project, the monitoring task is viewed as a real-time hierarchical data fusion problem, and the system is able to process a large number of continuous waveforms, to convert them into a qualitative representation, to correlate the information extracted from the various waveforms, and to identify intermediate-level physiological states in real time. As opposed to the approach taken in Guardian, the process trellis approach assumes that sufficient computing resources are guaranteed to perform all relevant tasks, thus precluding the need for their scheduling. At the time of writing, ICM is not fitted with any patient model and does not possess predictive
capabilities.

In SIMON, the monitoring task is divided into three subtasks performed by three semi-independent modules: the feature extraction, the patient model, and the user interface modules. The feature extraction module is in charge of acquiring, storing, validating, and reducing large amounts of raw data to provide key information to the patient model. The patient model relies on the information provided by the feature extraction module to estimate the patient status, to predict the evolution of the monitored parameters, to provide patient management suggestions, and to define a monitoring context. The notion of monitoring context is central to the SIMON project because it is the mechanism by which information can be utilized to reconfigure the monitoring strategy and the user interface in response to changes in the clinical state of the patient.

Although the concepts developed in the SIMON project are tested on the problem of monitoring neonates in the intensive care unit, the long-term objective of the research effort is to explore methods by which a new generation of monitoring devices, which satisfy the following requirements, can be developed:
1. Capability to reconfigure the data acquisition and the low-level processing of incoming data in response to either variations in signal characteristics (e.g., signal-to-noise ratio) or context-dependent directives issued by the domain model.
2. Low-level data and sensor validation using domain-specific knowledge relating information extracted from various data sources.
3. Context-dependent display of information.
4. High-level interpretation of patient status, prediction, and definition of a monitoring context utilized to issue directives both to the feature extraction and to the display components of the system.
5. Real-time operation.

The following section discusses the architecture of SIMON in detail. Section 4 presents a short description of the domain in which the system is currently being tested; that section also explicates the role of each component and describes their interaction. The current state of the project is discussed in Section 5.

3. ARCHITECTURE

The SIMON system consists of a number of modules implemented as independent UNIX processes and communicating with each other through an interprocess communication (IPC) protocol. The IPC is modeled after the server-client communication protocol in the X window system, and the relevant terminology is introduced in Table 1.

A block diagram representation of the system is shown in Figure 1. Central to this figure is the server/feature extraction module responsible for acquisition, processing, and storage of the data. The outside world (i.e., the patient/ventilator complex) providing the data
SIMON" Dtstributed Computer Architecture TABLE 1 Terminology Related to the IPC Protocol
1. Client: A UNIX process which sends request messages to the server process.
2. Server: A UNIX process which receives and handles requests sent by the client.
3. Request: A message sent by the client to the server, asking for a service. Requests may or may not require a response from the server.
4. Reply: A message from the server to the client in response to a request to which the client expects an answer. There are two possible uses for the reply mechanism: (a) the client actively waits for the reply, and (b) the client proceeds with other tasks and handles the reply when it becomes available. In the latter case, the reply is referred to as a response.
5. Event: A message from the server to the client as a side effect of a request to which the client does not expect an answer.
6. Error: A message from the server to the client on a failed request.
to the system, the patient model, and one or several display/user interface modules are attached to the server. The last two components are implemented as clients to the central module. Each of these components is described in detail in the following subsections.
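The message categories of Table 1 can be pictured as a small set of tagged structures exchanged between clients and the server. The sketch below is illustrative only; the type and field names are assumptions rather than the actual SIMON protocol definitions.

// Illustrative sketch only: hypothetical message types for the IPC
// categories of Table 1; not the actual SIMON protocol definitions.
#include <cstdint>
#include <iostream>
#include <string>

enum class MsgKind : std::uint8_t {
    Request,  // client -> server, may or may not expect an answer
    Reply,    // server -> client, answer to a request the client is waiting on
    Event,    // server -> client, side effect of a request with no expected answer
    Error     // server -> client, report of a failed request
};

struct Message {
    MsgKind       kind;
    std::uint32_t clientId;   // identifies the client connection
    std::uint32_t requestId;  // lets a Reply or Error be matched to its Request
    std::string   payload;    // application-specific content
};

int main() {
    Message req{MsgKind::Request, 1, 42, "get PaO2"};
    Message rep{MsgKind::Reply,   1, 42, "PaO2 = 75.0"};
    // A reply is matched to its originating request by (clientId, requestId).
    std::cout << (rep.requestId == req.requestId ? "matched\n" : "unmatched\n");
    return 0;
}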
3.1. The Feature Extraction Module

The feature extraction component of the system is responsible for the storage, validation, and processing of incoming data. Figure 2 illustrates the hierarchical data structure utilized to represent several levels of abstraction on monitored signals. Each of these components is implemented as an object in the C++ language. At the intermediate level are the measurement objects. Each measurement object represents a sensor (real-time data) or an analysis device (off-line data) from which data are acquired. At the lowest level, raw data and processed data objects are attached to each measurement object. Incoming measurement points
are stored in the raw data object, while processed data objects contain the outcome of transformations and data processing algorithms applied to the raw data. The knowledge and the methods required to perform these operations are located in the measurement objects. For example, possible numerical methods able to estimate the heart rate from an arterial pressure waveform are located in the arterial_pressure_waveform measurement object. Also located at that level is the knowledge required to select one among several possible numerical methods, when such a choice is available. Criteria used in that decision could include signal characteristics, required precision or reliability of the estimate, speed at which the information needs to be extracted, etc. The highest level in the hierarchy is the quantity. A quantity object represents a significant parameter in the domain of application. Quantities can either be related directly to measurement objects or to other quantities. In the former case, information about the value of the quantity can be extracted from one or several measurement objects. The quantity object contains the knowledge required to select the most appropriate measurement object in the context in which the information is needed, and Section 4.2.2 illustrates how that knowledge can be utilized. Finally, quantities whose values cannot be estimated directly from measurement objects can be related to other quantities. For example, the quantity I:E_Ratio (i.e., the inspiratory/expiratory time ratio) is related to the quantities inspiratory_time and expiratory_time, the values of which are required for the calculation of the I:E ratio.
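A minimal C++ sketch of this three-level hierarchy is given below. It is an illustration under assumed class and member names; the paper does not publish the actual class definitions.

// Illustrative sketch only: quantity -> measurement -> raw/processed data.
// Class and member names are assumptions, not the actual SIMON source.
#include <string>
#include <utility>
#include <vector>

struct RawData       { std::vector<double> samples; };          // unprocessed points
struct ProcessedData { std::string method; double value = 0; }; // outcome of an algorithm

class Measurement {
public:
    explicit Measurement(std::string n) : name(std::move(n)) {}
    virtual ~Measurement() = default;
    // Knowledge for turning raw data into processed data lives here, e.g.,
    // alternative heart-rate estimators for an arterial pressure waveform.
    virtual ProcessedData process(const RawData& raw) const = 0;
    std::string name;
    RawData     raw;
};

class Quantity {
public:
    // A quantity may be connected to several measurement objects ...
    void attach(Measurement* m) { sources.push_back(m); }
    // ... and holds the knowledge to pick the most appropriate one for the
    // current context (here trivially the first source, as a placeholder).
    Measurement* select(const std::string& /*context*/) const {
        return sources.empty() ? nullptr : sources.front();
    }
private:
    std::vector<Measurement*> sources;
};

int main() {
    Quantity heartRate;
    return heartRate.select("default") == nullptr ? 0 : 1;  // no sources attached yet
}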
FIGURE 1. Block diagram representation of SIMON. Central to the system is the server/feature extraction module responsible for the acquisition, storage, and low-level processing of incoming data. The patient model and the display/user interfaces are implemented as clients to the server.
3.2. The Patient Model
The patient model and diagnostic core of SIMON is developed using a hybrid qualitative/quantitative ontology named YAQ (Uckun & Dawant, 1992). As such, YAQ provides the user with a modeling paradigm and an inference engine able to drive the model. YAQ is based on the Qualitative Process theory (QPT) (Forbus,
FIGURE 2. The hierarchical data structure utilized to represent several levels of abstraction on monitored signals.
1984), which was originally devised for qualitative physics. The main constructs of YAQ are quantities, views, processes, and assertions. Each of these components is briefly described below:
1. Quantity: Each quantity represents a measurable, estimable, or constant parameter in the domain model. Quantities may take on symbolic or numerical values.
2. Process: Processes are the driving force of simulations based on QPT. Their influence on the system's variables is expressed in terms of qualitative constraint equations defined by influences and qualitative proportionalities (a small sketch follows this list). The former relate the derivative of a quantity to the value of another quantity. For example, the expression (INFLUENCE+ A B) (i.e., quantity B has a positive influence on quantity A) states that quantity A will increase (decrease) as long as quantity B is positive (negative). Qualitative proportionalities express weakly constrained relations between quantities. These relations can either be monotonically increasing or decreasing. For instance, the expression (QPROP+ A B) (i.e., quantity A is qualitatively proportional to quantity B) states that quantity A will increase (decrease) when quantity B increases (decreases). Note that qualitative proportionalities carry very little information because they do not describe how one variable varies with respect to the other.
A process can be either in an active or inactive state. When the process is active, its qualitative constraint equations contribute to the model of the system. The activity status of a process is determined by means of a series of activation conditions typically expressed in terms of quantity values, relations between quantities, or the activity status of one or several views.
3. View: Each view models a specific context defined by particular values or histories of quantities, or by means of ordinality, cardinality, or proportionality relations among quantities. In the context of SIMON, views typically correspond to clinically meaningful conditions. As is the case for processes, views can either be in an active or inactive state, depending on the value of their activation conditions. Views may also define qualitative proportionalities which are valid when the view is active. However, as opposed to processes, views cannot define influences.
4. Assertions: Assertions are external facts which are crucial to the reasoning process and which may intervene in the activation conditions of views and processes, but which cannot be assessed by YAQ itself. A simple example is the transition between different modes of ventilation, such as a shift from continuous mandatory ventilation (CMV) to continuous positive airway pressure (CPAP).
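The semantics of influences and qualitative proportionalities can be made concrete with a small sketch that resolves their effect on a quantity's direction of change. The code below is an illustration with assumed names, not YAQ's implementation.

// Illustrative sketch only: how an influence and a qualitative
// proportionality constrain the direction of change of a quantity.
#include <iostream>

enum class Trend { Decreasing = -1, Steady = 0, Increasing = +1 };

// (INFLUENCE+ A B): the derivative of A follows the sign of the *value* of B.
Trend influencePlus(double valueOfB) {
    if (valueOfB > 0) return Trend::Increasing;
    if (valueOfB < 0) return Trend::Decreasing;
    return Trend::Steady;
}

// (QPROP+ A B): A moves in the same direction as B; the magnitude of the
// change is left unconstrained.
Trend qpropPlus(Trend trendOfB) { return trendOfB; }

int main() {
    // Example from the text: (Inf+ HCO3 pHDifference) -- bicarbonate rises
    // as long as the pH difference (7.4 - measured pH) is positive.
    double pHDifference = 7.4 - 7.31;
    std::cout << static_cast<int>(influencePlus(pHDifference)) << "\n";  // prints 1
    return 0;
}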
(View Excessive_O2_Content
  Description ("Blood O2 content is higher than normal")
  Preconditions ()
  ActivateConditions (OR (If GT O2Content 20.5)
                         (If GT PaO2 150))
  ActivateFct ()
  ActivateActions ((Alarm TRUE)
                   (Sampling-Interval PaO2 Decrease)))
FIGURE 3. A typical view description.
Figures 3 and 4 illustrate the view Excessive_O2_Content and the process Met_Comp_Acidosis (metabolic and renal compensation for acidosis). In this case, the view does not contribute any constraint equations, while the process definition indicates that, when the process is active, the difference between the normal blood pH value (i.e., 7.4) and the measured pH has a positive influence on the bicarbonate content of the blood. The fields Preconditions, ActivateConditions, and ActivateFct are evaluated sequentially by YAQ when determining the activation status of each view and process. If the list of conditions in the Preconditions field evaluates to false, the corresponding view or process is inactive. If it evaluates to true, but the list of conditions in the ActivateConditions or the ActivateFct fields evaluates to false, the corresponding view or process is labeled as possible although not verified. The activation status of possible views and processes can be changed to active when YAQ reasons inductively (see next subsection). When the list of conditions in all three fields evaluates to true, the view or process is active. The ActivateConditions and ActivateFct fields are distinguished only by the type of expression they can contain. The former contains expressions which can be parsed by YAQ and included in the views and processes describing an application. The latter are C functions of arbitrary complexity which need to be independently developed and linked with YAQ prior to execution. Views and processes may also be fitted with an ActivateActions field which permits the triggering of specific actions when the view or process is activated. This mechanism permits the contextual modification of normal and abnormal ranges for parameter values, the ranking of monitored or displayed parameters, or
the issuance of warnings and alarms. These actions are the main method by which the monitoring context is established. For instance, the ActivateActions field of the view shown in Figure 3 indicates that an alarm needs to be sent to the user when the view is activated. It also specifies that the sampling rate of the PaO2 parameter needs to be increased to closely monitor its evolution. The ActivateActions field of the process shown
in Figure 4 indicates that the user should simply be warned that the process has been activated but that the condition does not justify raising an alarm.

The reasoning mechanism in YAQ involves two major steps: creation of a model instance, and prediction. The creation of a model instance consists of checking the activation conditions of all views and processes and building a model using the qualitative constraint equations contributed by active views and processes. Once the model is built, predicted changes in the system's variables are determined first by resolving the influences on the quantities, and then by resolving the qualitative proportionalities in proper causal order. When available, YAQ also relies on numerical methods and data to resolve ambiguities created by qualitative constraint equations. Mapping observations to the activity status of views and processes to create a model instance is based on the abductive reasoning paradigm (Punch, Tanner, Josephson, & Smith, 1990). YAQ also uses its prediction capability to reason inductively. In this mode, YAQ compares observed values at time t with predicted values (i.e., values predicted with the model instance at time t - 1) and computes a discrepancy score. It then goes back to the model instance at time t - 1 and modifies it to reduce the discrepancy between observed and predicted values. The reduction in the discrepancy
(Process Met_Comp_Acidosis
  Description ("Metabolic and renal compensation for acidosis")
  Preconditions (AND (If Acidosis)
                     (If Kidneys_Functional)
                     (If GT pHDifference 0.05))
  ActivateConditions ()
  ActivateFct ("activate_met_comp_acidosis")
  ActivateActions (Alarm INFORM)
  Relations (Inf+ HCO3 pHDifference))

FIGURE 4. A typical process description.
score is achieved by reassessing the activation status of views and processes at state t - 1, creating a novel model instance, and running the simulation with that new model instance. The activation status of views and processes is modified and model instances reevaluated until the difference between predicted and measured values is minimized. Thus, inductive reasoning permits the adjustment and the refinement of the estimation of patient status as information becomes available.
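The three-field activation test described above (Preconditions, ActivateConditions, ActivateFct) amounts to a small decision procedure. The following sketch paraphrases it under assumed names; the possible status is the one that inductive reasoning may later promote to active.

// Illustrative sketch only: the activation-status decision for a view or
// process; names mirror the YAQ description but the code is an assumption.
#include <functional>
#include <iostream>

enum class Status { Inactive, Possible, Active };

struct ViewOrProcess {
    std::function<bool()> preconditions;       // parsed YAQ expressions
    std::function<bool()> activateConditions;  // parsed YAQ expressions
    std::function<bool()> activateFct;         // externally linked C function
};

Status activationStatus(const ViewOrProcess& v) {
    if (!v.preconditions()) return Status::Inactive;
    if (!v.activateConditions() || !v.activateFct())
        return Status::Possible;  // may later be promoted by inductive reasoning
    return Status::Active;
}

int main() {
    ViewOrProcess met{[] { return true; },    // preconditions hold
                      [] { return true; },    // parsed conditions hold
                      [] { return false; }};  // external check fails
    std::cout << (activationStatus(met) == Status::Possible ? "possible\n" : "other\n");
    return 0;
}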
3.3. The Display Modules

The display processes are implemented as clients of the SIMON server as well as X clients. Displays can thus be spawned on any workstation running an X server. The prototype user interface consists of several scrollable strips of real-time or static data, and text interfaces for system messages and user input. The user or the model can dynamically turn the display of any quantity on or off. The system supports several displays with different configurations on different workstations. Possible applications of such a capacity include the display of a few critical parameters on a central monitoring console and a more complete set of information at a bedside terminal.
3.4. I/O Management

3.4.1. The IPC Algorithm on the Server Side. The role of the server/feature extraction module is twofold: dealing with incoming data from multiple data channels, and answering requests from multiple clients that are handling various aspects of the monitoring task. In a real-time system, problems occur when the computer resources (i.e., the speed at which the data can be stored, analyzed, and processed) are insufficient to satisfy all requirements. For instance, a fast data acquisition process may fill a temporary buffer with data before the server can access and handle the information. Although the loss of data cannot always be avoided, the impact of such a loss can be minimized by scheduling (prioritizing) the tasks to be performed by the server. The strategy adopted in SIMON is to consider all messages coming from clients as essential. The only way to reduce the load on the server is therefore to ignore some of the incoming data. However, losing information indiscriminately on all data channels may be inappropriate because some of the data channels may carry more important information than others. Moreover, the criticality of each of these data channels may vary over time. Thus, the scheduling algorithm needs to be able to assign priority levels to each of the data channels and to act accordingly. In SIMON, a scheduling scheme has been implemented that takes into account a so-called base priority for each channel (i.e., a priority level assigned by the application), the remaining space
in the buffer assigned to that channel, and its sampling rate. The criticality, and thus the priority level, of each data channel can be modified by the application by changing the value of the base priority.

Processing messages issued by the clients is slightly different from handling incoming data because none of these messages can be discarded. Also, these messages do not typically have to be dealt with immediately (except for alarms). Thus, a time period (the timeout parameter) for which requests are allowed to remain unanswered can be defined for each client. If there is incoming data to be handled, the server responds to the client's request only after its timeout period has expired. The pseudocode representation of the scheduling algorithm is given in Figure 5. First, it is assumed that the server is in the waitForSomething function. As long as it is, the server checks the clients for pending requests, and the timeout parameter is computed for each client. The smallest timeout value determines the maximum period of time the server can dedicate to reading incoming data. The data channels are then checked and their relative priorities are computed. Finally, data channels are handled as long as the timeout value is not reached. When the timeout period is reached, the corresponding client's request is handled, and the process is repeated as long as a termination message is not received by the server.

3.4.2. The IPC Algorithm on the Client's Side. Figure 6 illustrates the communication algorithm implemented on the client side in SIMON. This figure shows that the client can be in one of two modes: actively waiting for an answer, or processing events dispatched by the server as they arrive. In the former case, the client lets the events which are not answers to its request accumulate in the event queue until the proper answer is received. Once such an answer is received, the client returns to handling the event queue.
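As a rough illustration of the server-side scheduling discussed in Section 3.4.1, the per-channel priority could combine the three factors named above as follows. The paper does not give the exact formula, so the weighting shown is an assumption.

// Illustrative sketch only: one possible combination of base priority,
// remaining buffer space, and sampling rate into a priority score. The
// actual weighting used in SIMON is not published.
#include <iostream>

struct Channel {
    double basePriority;     // assigned (and adjustable) by the application
    double bufferFillRatio;  // fraction of the channel buffer already used, 0..1
    double samplingRate;     // samples per second
};

double priority(const Channel& c) {
    // Channels closer to overflowing and sampled faster are served first,
    // scaled by their application-level importance.
    return c.basePriority * (1.0 + c.bufferFillRatio * c.samplingRate);
}

int main() {
    Channel ecg{2.0, 0.8, 250.0};   // high-rate waveform, nearly full buffer
    Channel temp{1.0, 0.1, 0.2};    // slow trend channel
    std::cout << priority(ecg) << " " << priority(temp) << "\n";
    return 0;
}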
4. INTELLIGENT MONITORING TASK USING SIMON
4.1. The Domain of Application

The application domain on which the SIMON architecture is being tested is the respiratory management of infants with respiratory distress syndrome (RDS). Typically, the treatment of RDS is based on assisted ventilation and oxygenation using a mechanical ventilator. Ventilator management is a delicate task because half a dozen parameters have to be adjusted while vital signs and variables relating to the oxygenation, ventilation, and acid-base balance of the patient have to be continuously monitored.
MAIN LOOP OF SERVER
stop := FALSE
WHILE stop = FALSE
    waitForSomething
    IF there is a client process trying to connect THEN
        accept new client
    END IF
    FOR all connected clients
        IF input pending THEN
            update timeout; add client to clientlist
        END IF
    END FOR
    determine first client on list to time out
    FOR all channels
        IF input pending THEN
            calculate priority; add channel to current channellist
        END IF
    END FOR
    sort current channellist for decreasing priority
    FOR all channels in current channellist
        IF timeout not reached THEN
            read data from channel
        END IF
    END FOR
    IF client timeout OR channellist empty THEN
        read request from client with highest priority
        handle request
        delete request
    END IF
END WHILE
FIGURE 5. Pseudocode of the scheduling algorithm on the server side.
Moreover, the relationships between ventilator parameters and physiological variables can only be described in qualitative terms. Although this complicates the task of optimally controlling the patient/ventilator complex, it suggests that an approach based on qualitative modeling concepts may be appropriate for this problem.
MAIN LOOP OF CLIENT
stop := FALSE
WHILE stop = FALSE
    waitForSomething
    IF waitingForReply THEN
        wait until reply comes in
        waitingForReply := FALSE
        handle reply
        delete reply
    END IF
    WHILE something in EventQueue
        get next Event from EventQueue
        handle Event
        delete Event
    END WHILE
    WHILE input pending
        read Event from Server
        handle Event
        delete Event
    END WHILE
END WHILE

FIGURE 6. Pseudocode of the scheduling algorithm on the client side.
4.2. The Domain File
The first step in using SIMON for a specific application is to define the measurement, quantity, view, and process objects. This is done by describing each of these components by a Lisp-like expression in the domain file (for reference, the domain file for the RDS application contains 40 quantities, over 40 views describing various aspects of the disease and therapy, and eight processes). This file is then parsed by the patient model and the feature extraction module. Data structures corresponding to the process, view, and quantity objects are created in the patient model. Data structures corresponding to the measurement and quantity objects are created in the feature extraction module. Figure 7 illustrates the quantity object PaO2 (arterial oxygen partial pressure); typical views and processes have been illustrated in Figures 3 and 4. The description of the quantity PaO2 shows that information about PaO2 can be obtained either from the BloodGasO2 or TCPO2 (transcutaneous PO2) measurement objects. Also, it contains a reference to a selection function ("select_O2"). The knowledge expressing the advantages and disadvantages of either method and permitting the context-dependent selection of one of them is coded in that function: transcutaneous readings are noninvasive, readily available, and always current, while blood gas mea-
(Quantity PaO2
  Description ("arterial oxygen partial pressure")
  Measurement (BloodGasO2 TCPO2)
  SelectionFct ("select_O2")
  Threshold (0.05)
  QuantitySpace ((0.0 - 20.0 "very low")
                 (20.0 - 50.0 "low")
                 (50.0 - 80.0 "normal")
                 (80.0 - 150.0 "high")
                 (150.0 - P_MAX "very high"))
  Default (75.0))
FIGURE 7. A typical quantity description.
surements are more accurate in general. A criterion based on the recency of the last blood gas measurement is utilized by the select_O2 function to decide the method of measurement to be preferred. The quantity description also includes a threshold value, a default value, and a quantity space. Typically, the threshold parameter is utilized to detect changes in the value of a monitored variable. Because of normal variations and measurement noise, not all changes in these variables are significant, and the threshold is used to discriminate between these normal variations and significant changes. Any variation smaller than the threshold value is ignored, while variations larger than the threshold value are interpreted as significant events warranting further analysis, as detailed in the next subsection. Since the significance of a change may vary with the state of the patient, the value of the threshold can be modified by listing it in the ActivateActions field of the appropriate views and processes. Finally, the quantity space partitions the possible range of values for the quantity into clinically meaningful intervals. As is the case for the value of the threshold, both the default value and the quantity space can be dynamically modified.
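The role of the threshold parameter can be pictured with a small change detector; the names and the numeric threshold below are hypothetical.

// Illustrative sketch only: a (context-dependent) threshold discriminating
// normal variation from significant events; names and values are assumptions.
#include <cmath>
#include <iostream>

struct MonitoredQuantity {
    double lastReportedValue;
    double threshold;  // may be changed through the ActivateActions mechanism
};

bool isSignificantChange(MonitoredQuantity& q, double newValue) {
    if (std::fabs(newValue - q.lastReportedValue) < q.threshold)
        return false;            // normal variation or measurement noise: ignore
    q.lastReportedValue = newValue;
    return true;                 // significant event warranting further analysis
}

int main() {
    MonitoredQuantity paO2{75.0, 2.0};  // hypothetical threshold of 2 mmHg
    std::cout << isSignificantChange(paO2, 76.0) << " "   // 0: within threshold
              << isSignificantChange(paO2, 82.0) << "\n"; // 1: significant event
    return 0;
}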
4.2.1. The Role of the Patient Model. In addition to predicting the evolution of the monitored parameters, the patient model is responsible for establishing the monitoring context. The monitoring context encompasses (a) ranking the importance of the monitored quantities, (b) setting context-dependent values for a number of parameters pertaining to these quantities, and (c) specifying conditions on these quantities warranting special attention. As was mentioned previously, the ranking of the monitored parameters and the context-dependent parameter values are specified via the ActivateActions mechanism. The former affects the base priority of the corresponding data channel and the configuration of the display. The latter permits the adaptation of detection thresholds and alarming conditions at the feature extraction level.

In SIMON, the predictive capability of the patient model is used to drastically reduce the amount of information from the raw data level to the level at which it is exchanged between the patient model and the feature extraction module, in a way reminiscent of the approach followed in Guardian (Washington & Hayes-Roth, 1989). The main concept is to filter the information at the feature extraction level and to transmit only the most critical portion of that information to the model. The model establishes the criticality of that information by predicting the evolution of each quantity and by identifying landmark values for these quantities (landmark values are values that necessitate a complete reassessment of the patient status when reached). The predicted evolution of the quantities and the landmark values are then communicated to the feature extraction module. This module processes every incoming data point and alerts the model only if a quantity departs from its expected course or when a quantity has reached a landmark value. If a landmark value has been reached, a new model instance is created and a new prediction cycle is performed.

When the feature extraction module reports an unexpected value for one or several of the monitored parameters, the model attempts to explain that discrepancy via its inductive reasoning capability. If an explanation can be found, the model is simply updated. If no explanation is found, a message is sent back to the feature extraction module, instructing it to run a detailed analysis on the reliability of that data channel. If an error is detected and it can be corrected, an updated value is sent to the model, which reassesses the situation. If no error is detected, the validity of the data is assumed, the absence of explanation is attributed to the incompleteness of or an error in the model, and a warning message is sent to the user indicating an unexplained situation.

4.2.2. The Role of the Feature Extraction Module. The task of the feature extraction module is threefold: respond to the directives put forth by the model, perform low-level data validation, and react to malfunctions in the monitoring equipment. Responding to the directives put forth by the model is done simply by checking the alarming condition, expected range, and progression of each quantity every time a new data point comes in. Data validation is performed at the quantity level using knowledge about relationships between measurements. For example, the quantity heart_rate could be connected to two measurement objects: ECG and arterial_pressure_waveform. Normally, values for the quantity heart_rate are derived from the ECG measurement object. Unexpected values reported by the ECG measurement object can, however, be questioned by the quantity heart_rate. The quantity can then query the arterial_pressure_waveform measurement object for supporting
SIMON" Dtstributed Computer Architecture
evidence. If the latter measurement object corroborates the information provided by the ECG measurement object, the information is transmitted to the model. In the alternative, the quantity decides that the ECG measurement object provides information that is no longer reliable, and it issues a warning indicating that a malfunction in the monitoring equipment is suspected. As long as the user does not indicate that the equipment has been verified, information about the heart rate is provided by the arterial_pressure_waveform measurement object.
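The corroboration step just described could look roughly as follows; the function names and the tolerance are assumptions made for illustration.

// Illustrative sketch only: quantity-level cross-validation of an unexpected
// ECG-derived heart rate against the arterial pressure waveform.
#include <cmath>
#include <iostream>

struct Estimate { double value; bool available; };

enum class Verdict { Corroborated, SuspectEquipment };

Verdict crossValidate(const Estimate& ecgRate, const Estimate& pressureRate,
                      double tolerance) {
    if (pressureRate.available &&
        std::fabs(ecgRate.value - pressureRate.value) <= tolerance)
        return Verdict::Corroborated;      // forward the value to the model
    return Verdict::SuspectEquipment;      // warn: possible equipment malfunction
}

int main() {
    Estimate ecg{180.0, true};       // unexpected jump reported by the ECG
    Estimate pressure{128.0, true};  // rate derived from the pressure waveform
    Verdict v = crossValidate(ecg, pressure, 10.0);
    std::cout << (v == Verdict::SuspectEquipment ? "suspect ECG channel\n"
                                                 : "corroborated\n");
    return 0;
}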
5. DISCUSSION AND CONCLUSIONS

Although a specific domain of application has been selected, SIMON is an ongoing research project aiming at developing generic computer architectures for intelligent real-time patient monitoring. Central to that problem are the communication between the feature extraction, the patient model, and the display modules and the notion of monitoring context. The distributed architecture of the current system is flexible, and it permits the exploration of several approaches to the implementation of such functionalities. At the time of writing, the patient model has been implemented and is being evaluated. The evaluation of the model focuses on two aspects of the reasoning process: (a) the accuracy of prediction, and (b) the accuracy of diagnoses. The accuracy of prediction can be evaluated by comparing predicted and observed values for the monitored parameters. The accuracy of diagnosis, however, is more difficult to evaluate because of its subjectivity and the lack of accepted gold standards. A possible evaluation method would be to compare the outcome of the system with the opinion of panels of physicians with various levels of expertise. This method, however, is extremely labor intensive and is not practical in the environment in which the system is being developed. Therefore, the evaluation method that has been selected is critiquing, in which experts are asked to evaluate and to critique the output of a computer system. The following protocol will be utilized for the evaluation of the model. A number of patient records will be selected at random from the 300 records available at the Vanderbilt NICU. These records will be divided into two sets referred to as the training and the testing sets, respectively. For the records in the training set, the developer of the model (the second author of this paper) and a senior physician from the Vanderbilt NICU will critique the interpretation produced by the model on a state-by-state basis (typically, a state corresponds to an arterial blood gas (ABG) measurement). The model will then be modified to correct discrepancies between the system's output and the two critics' opinions. Once modified, the system will be evaluated
on the testing set. The diagnoses proposed by the system will again be critiqued on a state-by-state basis by the same critics, and the assessment of prediction accuracy will be performed automatically.

The IPC protocol, the server, the framework for the feature extraction module, and a prototype of the display have also been implemented and are being tested off-line. However, the decision rules implemented at the quantity level to verify the validity of incoming data and to react to malfunctions in the monitoring equipment are still primitive. Also, measurement objects are only fitted with very limited signal analysis capabilities. Both these functionalities will be expanded progressively, and the next phase in the testing of the architecture will focus on the interaction between the patient model and the feature extraction module. This will be done by processing a number of data channels simulating various system malfunctions as well as typical disease conditions and evolution.
Acknowledgments--This research is supported, in part, by NSF grant MIP-8909513, NSF grant INT-8913499, and NIH grant HL-38354.
REFERENCES

Chambrin, M.-C., Chopin, C., Ravaux, P., Mangalaboyi, J., Lestavel, P., & Fourrier, F. (1989). RESPAID: Computer-aided decision support for respiratory data in ICU. In Proceedings of the 11th Annual Conference of the IEEE Engineering in Medicine and Biology Society (pp. 1776-1777). Seattle, WA.
Cohn, A.I., Rosenbaum, S., Factor, M., Sittig, D.F., Gelernter, D., & Miller, P.L. (1989). Sequential clinical "scenes": A paradigm for computer-based intelligent hemodynamic monitoring. In Proceedings of the 13th Annual Symposium on Computer Applications in Medical Care (pp. 5-10). Washington, DC.
Factor, M., Sittig, D.F., Cohn, A.I., Gelernter, D.H., Miller, P.L., & Rosenbaum, S. (1989). A parallel software architecture for building intelligent medical monitors. In Proceedings of the 13th Annual Symposium on Computer Applications in Medical Care (pp. 11-16). Washington, DC.
Fagan, L.M., Shortliffe, E.H., & Buchanan, B.G. (1984). Computer-based medical decision making: From MYCIN to VM. In W.J. Clancey & E.H. Shortliffe (Eds.), Readings in medical artificial intelligence: The first decade (pp. 241-255). Reading, MA: Addison-Wesley.
Forbus, K.D. (1984). Qualitative process theory. Unpublished PhD thesis, Massachusetts Institute of Technology, Cambridge, MA.
Garfinkel, D., Matsiras, P.V., Mavrides, T., McAdams, J., & Aukburg, S.J. (1989). Patient monitoring in the operating room: Validation of instrument reading by artificial intelligence methods. In Proceedings of the 13th Annual Symposium on Computer Applications in Medical Care (pp. 575-579). Washington, DC.
Hayes-Roth, B. (1990). Architectural foundations for real-time performance in intelligent systems. The Journal of Real-Time Systems, 2, 99-125.
Hayes-Roth, B., Washington, R., Ash, D., Hewett, R., Collinot, A., Vina, A., & Seiver, A. (1992). Guardian: A prototype intelligent agent for intensive-care monitoring. Artificial Intelligence in Medicine, 4, 165-185.
Punch, W.F., Tanner, M.C., Josephson, J.R., & Smith, J.W., Jr. (1990). Peirce: A tool for experimenting with abduction. IEEE Expert, 5(5), 34-44.
Rutledge, G., Thomsen, G., Beinlich, I., Farr, B., Sheiner, L., & Fagan, L.M. (1989). Combining qualitative and quantitative computation in a ventilator therapy planner. In Proceedings of the 13th Annual Symposium on Computer Applications in Medical Care (pp. 315-319). Washington, DC.
Sittig, D.F., Pace, N.L., Gardner, R.M., Beck, E., & Morris, A.H. (1989). Implementation of a computerized patient advice system
using the HELP clinical information system. Computers and Biomedical Research, 22, 474-487.
Uckun, S., & Dawant, B.M. (1992). Qualitative modeling as a paradigm for diagnosis and prediction in critical care environments. Artificial Intelligence in Medicine, 4(2), 37-54.
Washington, R., & Hayes-Roth, B. (1989). Input data management in real-time AI systems. In Proceedings of the 11th International Joint Conference on Artificial Intelligence (pp. 250-255). Detroit, MI: Morgan Kaufmann Publishers, Inc.