international journal of medical informatics 76S (2007) S252–S260
journal homepage: www.intl.elsevierhealth.com/journals/ijmi

To decay is system: The challenges of keeping a health information system alive

Thomas Wetter∗, Department of Medical Informatics, University of Heidelberg, Im Neuenheimer Feld 400, D-69120 Heidelberg, Germany

Article history: Received 5 March 2006; Accepted 11 May 2006

Keywords: Systems analysis; Socio-technical approaches; Information systems; Quality control

Abstract

Health information system (HIS) architecture and socio-technical approaches for system deployment have been topics of systematic research for decades. Sustainable operation in gradually changing environments, however, has not yet received sufficient attention. Even HIS that have gone live to the satisfaction of their developers and end-users may degrade gracefully or fail catastrophically if not continuously and thoroughly kept in sync with their environment. Critical environmental changes may owe their origins to the complexity of health care and its delivery. Seemingly minor environmental changes can result in significant failures on the part of the information system and may adversely affect the quality of health care delivered. Such minor degradation or near failure may go unnoticed for a while and then hit unexpectedly. Five origins of decay will be analyzed. Methods of systematic observation and containment of such decaying processes will tentatively be presented. Some origins of system decay exist in the immediate hospital or regional setting of usage. Indicators to identify processes of decay will be suggested and methods to preemptively reduce the risk of decay will be presented. Other origins span national health care systems or beyond. Not all such risks can hence be controlled locally. Software Oversight Committees may be an instrument to monitor those risks that cannot be controlled through routine local management.

© 2006 Elsevier Ireland Ltd. All rights reserved.
1. Introduction
Designing and building information systems for the health care industry has received attention for at least three decades [1]. Despite remarkable technical progress, failures have still been reported when integrating technically sound systems into processes of care [2]. An IMIA IT in Health Care—Sociotechnical Approaches Conference in 2004 reflected this theme in its title "To err is system", alluding to the 2000 Institute of Medicine report "To err is human" [3]. The socio-technical approach to system development and introduction [4], attempts to handle change [5], or attempts to handle user expectations, strains, and strain relief systematically [6], beneficial as they may
be, cannot prevent significant failures from occurring. The impact of one such failure in 2003 was so deep that it created headlines in national newspapers [7]. But what happens down the road, when a system has been successfully integrated into work processes and has been well accepted by its clinical users, remains to be studied. Will the system remain 'as is' for the next month, year, or decade, or will it have to be continually altered to suit changes in the organizational environment? Is it an inherent trait of systems to decay, unless carefully safeguarded against all kinds of influences? This paper points out that seemingly minor environmental changes may cause a system to deteriorate in its usability in non-anticipated ways unless it evolves or its users evolve. The paper also demonstrates that regular upgrades and creative, partially subversive workarounds by flexible end-users can compensate for the process of system deterioration. Other imposed changes can cause catastrophic failures. Mechanisms of decay will be classified according to their origins. Most of these decaying effects will be substantiated by reference to published reports or to episodic and systematic observation from the author's work environment. Some speculative ones will likely resonate immediately with professionals in the field. This is a conceptual paper intended to direct attention to the factors that become effective as soon as a system has gone live. For some of the phenomena of decay a frame of investigation will be suggested. This frame is intended to guide future empirical work in the field of medical or health informatics towards systematic observations that may eventually support or reject the conjectures made in this article. Some suggested concrete methods to prevent decay systematically and to steer systematic evolution need further investigation.

∗ Tel.: +49 6221 567490. E-mail addresses: [email protected], [email protected].
1386-5056/$ – see front matter © 2006 Elsevier Ireland Ltd. All rights reserved. doi:10.1016/j.ijmedinf.2006.05.011
2. What are origins of system decay?
Reasons for degradation or failure can be organized in a variety of ways. One can set out from the data and knowledge required in clinical care and proceed towards the processes of care, key functions in those processes, etc. Or one can locate reasons for decay first in hardware, then in basic and application software, and move on to clinical workflows enhanced by decision support systems. Yet another principle of analyzing decay is applied here: in an inside-out manner we start with causes that apply to the individual clinical user's work sphere and move to increasingly larger spheres, ending with causes that need to be addressed on a corporate level and affect one or even several organizations. There are two related reasons for proceeding this way:
• The spheres serve as frames of reference within which to perform observations and to substantiate the conjectures made.
• To design a solution or to guide an evolution, the spheres are the target environments where the proposed solution must work (though the solution itself may originate from a different sphere).
The following are the spheres discussed in this article:
• targeted users;
• care processes;
• software environment;
• state-of-the-art medical knowledge;
• vendor–provider–customer constellation.
2.1. Spheres within a hospital
The clinical user is viewed first as an individual with an educational background and a set of attitudes and expectations in his role and responsibilities in patient care, for which he or she uses corresponding IT applications. Then he is regarded as a member of a professional team whose service is intertwined with those of others in clinical care and administration. Next he is seen as a member of the large set of users of a possibly hospital-wide IT infrastructure. In contrast to the former, smaller sphere, which is characterized by his normal work environment that he can directly influence and maybe control, he is now regarded as a more or less nameless customer of an IT department. This is the outermost sphere considered here that can still be regarded as internal to one organizational unit (the hospital), where change can be managed and evolution can be steered. Hospital administration can, e.g., decide to change vendors, implement IT services that worked in one pilot department in other departments, etc.
2.2. Outer spheres that reach into the hospital
In contrast, the standards of clinical practice that a hospital implements should be based on knowledge approved by a mainly external professional community. A hospital and its professionals may have contributed items to that knowledge, but public release of a consolidated body of knowledge, e.g. in the form of a clinical practice guideline, happens through scientific, professional, or quality assurance institutions outside the hospital. Therefore, when new medical knowledge becomes consolidated in the professional community to an extent that neglecting it would deteriorate care, requirements of change and risks of decay are imported into the hospital sphere from an outer sphere. Changes with a comprehensible medical reason originating from the professional community of the clinical user – such as a revised guideline – need to be incorporated. Failing to incorporate them, or incorporating them incorrectly, creates the risk of decay. The next sphere is that of IT vendors, which may overwhelm their hospital customers with new products, new services, business partners, contracts, etc. Therefore, the entailed risks are imported, too. However, as opposed to the public and principally comprehensible nature of medical knowledge, IT vendor strategy, partnerships, product (dis-)continuation, etc. may follow undisclosed purposes on the vendor's side that implement shareholder value but are virtually unrelated to the necessities of the clinical workplace. Other origins of decay not detailed here are details of code, the hardware environment, and legislation. Concerning hardware, one may think of unexpected usability issues raised through new I/O devices such as PDAs or voice recognition. As to legislation, HIPAA has haunted several of us with data we had relied on becoming no longer accessible.
3. How can a phenomenon of decay be investigated systematically?

Generally, we can regard any pattern of varying use of some technology as a sociological phenomenon that can be empirically studied by methods from anthropology, behavioral science, organizational psychology, etc. (see [8] for a collection of such approaches). Drawing on such approaches, reasons suspected to cause decay or interventions intended to stop decay can be hypothesized, and systematic observation can be organized. This may go as far as to conjecture a cause–effect relation, where decay is the effect, and to describe experiments
and observables that would indicate whether the cause–effect relation is supported or rejected through the observations. These approaches will be outlined for one of the spheres below. But first all five spheres will be introduced in some more detail.
3.1. Targeted users
A clinical system may have been targeted at a certain circumscribed user group such as physicians or medical coders. Its functions have been neatly adapted to the responsibilities of that group of professionals. The functions are in accordance with the capacities and role models of the intended users, who can fully exploit the possibilities. For instance, ordering medications or procedures is usually reserved for physicians, while they leave ADT/ICD/ICPM coding to medical coders. Let us assume that physicians are satisfied with the Computerized Physician Order Entry (CPOE) function of their system [9], while coders capture all billing related data in sufficient quality in their system from the clinically oriented charting that physicians provide. Let us further assume that the management of a clinical unit decides that, in order to simplify certain routine processes, registered nurses are to be given permission to order certain medications or procedures.1 And let us assume that the hospital administration requests physicians to provide billing related coding under certain circumstances. In the first place this means that the new responsibilities no longer match acquired role models. The nurses' mind set now has to incorporate final responsibility for patient related decisions, while the physicians' mind set has to incorporate billing as a second reason for charting that may be in contradiction to medical reasons for charting. For example, some procedures can legitimately be administered to a patient but not be coded in DRG groupers. How behaviour according to the evolving new role models matches existing or evolving IS functionality heavily depends on the general system architecture. Where there is a comprehensive system for all user groups, it may be possible to grant nurses permission for ordering functions that they did not have before and to give physicians permissions for coding functions that they did not have before.
This appears to be a straightforward evolution, but strange things can still happen unless there is a highly sophisticated concept of permissions and effective monitoring of the ordering process: with typical architectures a user is granted permission for a function like ordering medications as a whole. It may be restricted to those patients he or she is assigned to. However, the information system architecture is not normally selective as to what medications can be ordered. Irrespective of the clinical specialty: if one has the permission to order, then one can order any medication. If nurses are permitted to order medications they may inadvertently start ordering cytostatics. Concerning the new rules for procedure coding, strange things may happen because the fact that a physician has emotionally adopted the role of a coder and has been granted
1 The former has recently happened in the Children's Hospital of the University of Heidelberg Medical Center for the management of hyperbilirubinemia [21].
permission to code does not imply that he is trained to code. He may believe that a procedure is regarded as standard therapy for a given primary diagnosis of a patient and need not be coded separately. The grouper software would not ask him, because it has been designed for coding specialists, who can be assumed to know – or to know how to find out – that the respective procedure co-determines the patient's DRG and must be coded explicitly. Therefore, what was intended to simplify turns out to make processes more complicated and prone to error. While the latter is a hypothetical US related scenario, the following is the authentic German situation – with similar software architecture related ramifications. The question "Who codes what?" is a matter of ongoing debate in Germany these days [10] – interestingly, in the reverse direction from the US: here physicians routinely code, and medical coders argue that they could provide higher coding accuracy. In parallel, advanced software architectures strive for context integration of clinical charting and DRG related coding to allow professionals of either group to switch perspectives on a case between clinical and DRG views.2 The consequences of new rules for procedure coding are even worse when administration/billing, nursing, and medical charting are contained in three separate systems that have not been integrated with one another. The nursing system would not have medication ordering functionality and the medical system would not have coding functionality. As a result, the needed workarounds become very bizarre and almost impossible: physicians may code on paper for coders to key in the results, and nurses may shout along the corridor: "Dr. Miller, what is your new password?" Such usage that was not anticipated prior to developing the system may cause loss of accuracy, completeness, and control, and thus restrict the ability to introduce new functionalities.
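The coarse permission granularity described above can be made concrete with a small sketch. All names, medications, and the permission vocabulary here are invented for illustration, not drawn from any real system: the check asks only "may this user order at all?", so a nurse newly granted ordering rights can order any medication, including cytostatics, unless a more selective concept consults the medication class:

```python
# Hypothetical sketch of a coarse-grained permission model: the right to
# "order_medication" is granted per function, not per medication class.
PERMISSIONS = {
    "nurse_meyer": {"order_medication"},  # newly granted for routine orders
    "dr_miller": {"order_medication", "code_procedures"},
}

MEDICATION_CLASS = {"paracetamol": "analgesic", "cisplatin": "cytostatic"}

def may_order(user: str, medication: str) -> bool:
    # Only asks whether the user may order at all -- the medication class
    # is never consulted, which is exactly the decay risk described above.
    return "order_medication" in PERMISSIONS.get(user, set())

# The nurse can now inadvertently order a cytostatic:
assert may_order("nurse_meyer", "cisplatin")

def may_order_selective(user, medication, allowed_classes):
    # A more sophisticated permissions concept would additionally restrict
    # orders by medication class per user.
    return (may_order(user, medication)
            and MEDICATION_CLASS[medication] in allowed_classes.get(user, set()))

assert not may_order_selective("nurse_meyer", "cisplatin",
                               {"nurse_meyer": {"analgesic"}})
```

The point of the sketch is only the shape of the check: the first function mirrors the typical architecture in the text, the second the "highly sophisticated concept of permissions" that would be needed to prevent the failure.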
A nice yet slightly different example of the deteriorating effects of changing targeted users has recently been reported in [12]. In that case an oncology information system that had been unanimously accepted after its launch in the 1980s came under pressure, and arguments started to mount to replace the legacy technology, although physicians agreed that its function still served its purpose and day-to-day users agreed that the line based interface did not hamper usability. Concerns rather grew from the fact that a system that had met the vision to support clinical care was later "hijacked" for billing related coding. This came along with the replacement of highly qualified data coordinators by data clerks. The effect intended by hospital administration was to increase revenue by at least the data clerks' payroll cost. It worked. But physicians now no longer had medically meaningful, comprehensive data sets at their disposal, because the data clerks had neither the objective nor the qualification to draw an inspired medical picture of the patient through their coding. Physicians could no longer order new types of reports because the data clerks were not capable of providing them. In other words: a fully functional system was at risk of being replaced because, with its new data clerks, it could no longer fulfill its medical users' expectations. [12] also report that the decision making about replacement is also a
2 Such as the Torex GAP MDK Manager.
matter of controversy between cultures in the hospital (see Section 5.2). Administration cherishes revenue and weighs the cost of technically maintaining legacy code, while oncology cherishes meaningful data and function and their intangible benefits, which, however, do not at all translate easily into revenue. How such a clash of cultures can influence the overall performance of a health care institution has been analyzed in [13].
3.2. Processes of care
With reorganization of processes of care, responsibilities may change, and system functions required by a certain group of users in the old process may now be required by persons involved in the new process. The system may have been well designed in granting access to patient data based on physician assignment and patient admittance to a certain ward. With streamlined processes and attempts to reduce the cost of labor, night shifts may be organized in such a way that where one physician used to be assigned per ward, now one physician is in charge of two wards. This requires access to patient data from the two wards at least at night, but maybe continually, because labs must be followed up and progress notes may have to be written outside the shift. If the permissions concept of an information system is based on an M–N relationship between M physicians and N patients, the streamlined process can be mapped immediately by adding additional patients to a physician's permissions. The only disadvantage is that a large number of permissions must be maintained: with m < M physicians put in charge of more than one ward and typically n < N patients in a ward, we have m times ∼2n permissions at some point in time. If the permissions concept is based on an M–K relationship between M physicians and K wards, the streamlined process can be mapped more easily by extending the m physicians' permissions from 1 to a small number k < K of wards. Unfortunately, commonly used vendor based HIS are designed around M–K = 1 ward relationships: as noted at a recent DSAG3 meeting, the SAP system cannot handle the problem and a cure is not in sight (see also Section 3.5). This has been marked as an essential lack of flexibility in [14], a presentation to DSAG 2004. When a process change as outlined above is implemented and the software cannot be changed, loopholes in the system tend to be found and misused, which introduces specific risks that are likely to materialize.
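The difference in maintenance burden between the two permission concepts can be sketched with illustrative numbers (the figures below are hypothetical): under a physician–patient (M–N) model, m physicians covering two wards of roughly n patients each need about m × 2n individual permissions kept current as patients are admitted and discharged, while a physician–ward (M–K) model needs only m × k entries:

```python
# Illustrative comparison of permission-maintenance burden for the two
# permission architectures discussed above (all numbers hypothetical).
def permissions_patient_based(m_physicians, wards_each, patients_per_ward):
    # M-N model: every physician-patient pair is an explicit permission,
    # re-maintained on every admission and discharge.
    return m_physicians * wards_each * patients_per_ward

def permissions_ward_based(m_physicians, wards_each):
    # M-K model: one permission per physician-ward pair, stable over time.
    return m_physicians * wards_each

# Night-shift reorganization: 10 physicians now each cover 2 wards of
# roughly 30 patients.
print(permissions_patient_based(10, 2, 30))  # 600 entries to maintain
print(permissions_ward_based(10, 2))         # 20 entries to maintain
```

The sketch only quantifies the text's point that the M–N mapping works but scales poorly, whereas the M–K mapping is a small, stable configuration that vendor systems built around M–K = 1 cannot express.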
Physicians may use colleagues' identities; rather than signing out, the day shift physician leaves the workstation open for the night shift physician to take over, with all the entailed risks for accountability, auditing, etc. Or physicians get multiple identities, one for each organizational unit they may eventually be in charge of. The risks now are subtler: physicians may lose track of their identities and authorizations, hence failing to access critical data. To avoid this risk they may stop continually logging in and out. In the case of failed access attempts, patient related decisions may be based on less information than is principally available, or progress and ordering documentation may be entered less timely and/or less completely than before—added later, when user id and password have been found or reset again. Additionally, in the case of permanent access through one or more identities, the risk of illegitimate exposure of patient data increases. Problems of this kind with even wider reach may result when the organization of care is changed from structure to process; when the assignment of patients to a ward, whose physician enacts care, is transformed into clinical pathways where several physicians and paramedical professionals share the responsibility and require respective read and write access. While the "architecture of permissions" was clear cut and easy to maintain before moving to clinical pathways, granting permission now becomes an error prone patchwork. Unless the concept of permissions of the software is fundamentally redesigned, access can only be granted manually, individual-by-individual, with risks of both critical denial of access and critical exposure of inappropriate data to non-authorized personnel. The literature shows no evidence that transformations of structure oriented architectures into process oriented ones have worked in the past. Positive evidence is reported about building a pathway oriented information system from scratch along with designing the pathways themselves [16].

3 DSAG = Deutsche SAP Anwendergruppe. German SAP Users' Group, a not-for-profit independent association of users of SAP software products.
3.3. Software environment
In the context of the Software Oversight Committee initiative, Miller and Gardner have estimated that 10^24 combinations of individual pieces of software may make up a clinical information system [17]. They use this fact to argue that certification of clinical software in complex clinical systems is not feasible. Miller and Gardner's argument transfers to this article in the following way: when sufficient functionality has been achieved at a certain point in time with some combination of pieces of software, the exchange of one of these pieces for a new version – let alone for a new vendor's technology – creates the risk of incompatibility with all other pieces of software in that system. In the favorable case the incompatibility becomes visible as downtime: the new combination gives clear indication of its non-functionality. This may be the case when the new software violates data access rules or integrity checks that were in place before its implementation. The affected other pieces of software strictly reject access to the data, and the querying software terminates with a fatal error. In the unfavorable case the new combination appears to work. However, subtle errors occur, such as different physical units being used, digits being represented as text rather than as integer or floating point numbers, patient ids of subsequent transactions being mixed up, correction messages for laboratory values being transmitted like newly measured data, etc. Performing arithmetic calculations on inappropriately coded numerical data may still support the illusion that everything works appropriately. Respective decision support software, benevolently designed for robustness and zero downtime, interprets non-numeric input as zero or missing, while the human eye believes it sees a numerical value. Calculations now lead to faulty yet plausible values and recommendations for a while. The flaw will go unnoticed and users will feel safe until some catastrophic
treatment error occurs. Of course, standards such as HL7 are an important step towards avoiding such failures. But with its backdoor of locally defined, non-standardized fields, HL7 does not reliably prevent such problems.
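The "unfavorable case" above — decision support that robustly maps non-numeric input to zero while the screen still appears to show a value — can be illustrated with a minimal sketch. The message content (a lab value transmitted with a German decimal comma after an upgrade) is an invented example, not taken from any real interface:

```python
# Sketch of a "robust" parser that silently coerces malformed lab values
# to 0.0 instead of failing -- the illusion of correct function described
# in the text.
def parse_lab_value(raw: str) -> float:
    try:
        return float(raw)
    except ValueError:
        return 0.0  # benevolent robustness: no downtime, but silently wrong

# After a software exchange the sender transmits "1,2" (decimal comma)
# instead of "1.2" -- the human eye still reads a number on screen, but
# downstream dose calculations now operate on 0.0.
print(parse_lab_value("1,2"))  # 0.0

# A stricter variant surfaces the incompatibility instead of hiding it,
# turning the unfavorable case into the favorable, visible one:
def parse_lab_value_strict(raw: str) -> float:
    return float(raw.replace(",", "."))  # or simply raise and reject the message

print(parse_lab_value_strict("1,2"))  # 1.2
```

The design choice is the crux: zero-downtime robustness converts an integration fault into plausible-but-wrong clinical data, whereas failing loudly makes the decay visible as downtime.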
3.4. State-of-the-art medical knowledge
Guidelines and the Arden syntax, as part of the HL7 standardization effort, make it possible for more and more medical knowledge to be made available at the point of care. Automatic reminders have clearly been demonstrated to have the capability of improving care [18]. Knowledge based guidance of care bears different risks from the perspective of a changing environment. In the first place, knowledge incorporated in a system may be well supported by evidence when the system is released. However, new and conflicting scientific evidence may later be reported. A system that keeps offering the previous, now outdated guideline makes its users falsely believe that they are acting correctly. How should the system be upgraded? There is a whole set of stairs that the valuable new knowledge has to climb before reaching the clinical user: a guideline needs to be revised, the revision customized for local use and made available in the information system as the former version was. But guideline revision itself is already among the weakest points in the medical professional communities. Eccles et al. analyzed a UK cross-section of guidelines, finding that "three quarters (of the guidelines) . . . needed updating" [19]. According to our own less elaborate analysis of the 1386 German guidelines, 368 (∼26%) are older than 5 years, which is deemed the maximal interval after which guideline revision or re-approval is recommended. In the US, the National Guideline Clearinghouse clearly flags withdrawn national guidelines [20] but does not enforce withdrawal of outdated local implementations. It is an interesting question whether a professional can be held legally liable for not doing updates, be it as an author or in any role in the local deployment process. The legal manifestations of such an occurrence have not been previously documented and should be analyzed, owing to the widespread use of guidelines as standards of care.
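Staleness figures like those quoted above can be computed mechanically once guideline metadata carry an approval date; a sketch follows, in which the registry entries are invented and only the 5-year threshold is taken from the text:

```python
from datetime import date

# Hypothetical guideline registry entries: (identifier, last approval date).
GUIDELINES = [
    ("hyperbilirubinemia-mgmt", date(2004, 6, 1)),
    ("community-acquired-pneumonia", date(1999, 3, 15)),
    ("acute-otitis-media", date(2005, 1, 10)),
]

MAX_AGE_YEARS = 5  # recommended maximal interval before revision or re-approval

def stale_guidelines(guidelines, today):
    # A guideline is stale when its approval date lies more than
    # MAX_AGE_YEARS before the reference date.
    cutoff = date(today.year - MAX_AGE_YEARS, today.month, today.day)
    return [gid for gid, approved in guidelines if approved < cutoff]

# Of the three sample entries, one exceeds the 5-year interval:
print(stale_guidelines(GUIDELINES, date(2006, 3, 5)))
# ['community-acquired-pneumonia']
```

Running such a check routinely against a local guideline repository would turn the "stairs the new knowledge has to climb" into an observable indicator rather than an unnoticed source of decay.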
Software that supports the whole lifecycle of guidelines, from authoring to feedback driven revision, is available and will be discussed in some more detail below [21]. In principle it allows propagating the effects of changes to a national guideline to all affected places and hence preventing decay. For this software to take effect, however, guideline authors need to comply with update intervals or concrete update requests. Local adaptation specialists need to map changes of medical contents into the practice of local processes and structures. For example, a revised diagnostic guideline may have good medical reason to request an additional lab test, which, however, cannot be performed in some institution, cannot be ordered by its information system, or whose result cannot be transmitted electronically. Without a hospital-policy-driven authorized substitution or omission of that lab test, using paper, telephone, or other media for ordering, or other kinds of local workarounds (such as local fields in HL7 messages), will turn a well designed system into a slightly compromised and – potentially – error prone one. Errors here may create degradation or failure as described in Section 3.3. In contrast, continuing
to present an outdated guideline may lead to fading trust of its medical users—as a form of degradation. A JCAHO accredited hospital that keeps presenting outdated guidelines without any user notification and fallback solution for accessing the up to date guideline should have its accreditation revoked—as a form of catastrophic failure.
3.5. Vendor–provider–customer constellation
A wide majority of health care institutions now use vendor purchased systems for at least parts of their information processing. They may be happy with a set of vendors at some point in time. But a vendor may go bankrupt or merge with a different company. In the bankruptcy case, lack of maintenance may lead to gradual decay and consequently to choosing a system from a different vendor. This presents risks similar to those described in Section 3.3. In the case of a merger the situation may be less obvious: the new vendor may say that they will fulfill contracts but may in fact not be willing or able to deliver. This kind of failure has been personally analyzed by the author in a project at Intermountain Health Care (IHC) in Salt Lake City. It has occurred similarly on a national scale in Norway [23] and on a province scale in Canada [24]. In both the Norwegian and the Canadian experience, the vendors' primary interest was to diversify their markets, and they had virtually no competence in the clinical product lines they were successfully bidding for. Recently, a merger of two database companies has created concerns among customers whether the product of the acquired company will survive for an extended period of time, since the buying company can only reduce its cost by cutting down the duplicate workforce.4 Another set of problems is reported from constellations where different vendors offer a bundle of products: SAP, GSD, and T-Systems5 are an example, where the clinical user requires the ISH information system (from SAP) and the ISH-med clinical workstation (from GSD/T-Systems), and requires that the two interoperate. According to DSAG (see footnote 3), the clinical customer, however, when reporting some inappropriate function or error to either of the vendors, keeps hearing that the problem is in the other vendor's part of the system.
In this case there is no common provider, the two vendors do not cooperate sufficiently, and the customer does not have sufficient technical skills to trace the root cause and pinpoint the responsible party. It may be argued that customers in the marketplace of hospital information systems cannot survive without such skills. Then, however, it is just as fair to argue that they may be better off developing the systems themselves instead of purchasing them.6,7
4 The Oracle–PeopleSoft merger has not yet been scientifically analyzed but has already filled the newspapers.
5 Three Germany- and Austria-based software companies.
6 This is not fanciful. While the first version of this paper was being submitted, Intermountain Health Care in Salt Lake City, Utah terminated their contract with a vendor and decided to develop HELP 2 themselves.
7 Between submission and the final version of the paper for the journal, the decision has been re-revised: HELP 2 will now be purchased from a different vendor.
4. Systematic investigation of decay—an example

In the following we will use the "Processes of care" sphere to outline systematic observation. With a set of software systems in daily use in a clinical setting, several observables suggest themselves as indicators of sub-optimal or decaying function. Looking at the scenarios in Section 3.2, we may trace:
• Number of user ids per person. When the number of users who apply for more than one id suddenly surges, this may optimistically be interpreted as higher acceptance or penetration of IT in the hospital. But it is just as likely a response to some unnoticed development that makes it impossible for users to continue the work they used to do with one user id.8 The variable "Number of user ids per person", by the way, is very easy to trace.
• Number of progress notes, orders for procedures, etc. that are documented at the time they refer to. When delayed charting suddenly goes up, it may indicate that physicians are overworked. But it is just as likely a response to some change that makes system access more difficult than it used to be. This variable is harder to trace: automatic time stamps can be assigned for the moment of recording; however, identifying the time the record refers to requires disciplined manual recording by the users. This discipline is likely to deteriorate, too, when satisfaction with the system does. Rather than failing in the attempt to observe the timeliness of charting, it may be sufficient to count the overall number of orders and progress notes documented electronically as an approximation to the above observable. When the number and case mix of patients have not changed and the number of electronic orders and electronically documented progress notes goes down, there is good reason to check for the causes of the decay.
• Number and duration of sign-ins of clinical users.
When users who used to sign in for half an hour six times in a 12 h shift are now signed in for an uninterrupted 72 h, and when services and applications have not changed in any other way that would explain the new pattern of behavior, sharing of an identity among different individuals may be the explanation, and authorizations that are no longer in accordance with reorganized clinical processes the cause. Of course, there is a technical solution to prevent the latter, namely the timeout. But of course there is a compromising workaround: "Anybody passing the PC, please click the mouse somewhere!" And of course there is a quality-of-care hazard: when the workaround fails to work around the timeout once in a while, a whole shift may treat their patients without seeing their patients' data. And they may not even be willing to report the problem, because that might expose months of misconduct.
8 Users may try to get additional user ids while attracting as little attention as possible. They may fear the additional effort of delivering input for a systematic solution and hence prefer a surreptitious approach. In Germany one sometimes speaks of U-Boot-Projekte ("submarine projects") when progress is achieved without the knowledge, and against the policies, of management.
• Complaining would be the better option: it opens the chance for things to get better. Johnson et al. [15] claim that the lack of complaints has falsely been taken as a measure of success of a software system. If their observations from other industries transfer to the health care industry, it is highly unlikely that a system fits its environment over years without change. Hence, when there are no complaints, there is either no use or there are workarounds. Therefore, the number of complaints should be considered a measure of the vitality of a system; its continuous decrease, or a small value after sentinel changes, should be considered an alert. Or, as Barlow and Møller [22] put it: "A complaint is a gift."

The above observables can also be regarded through Donabedian's dimensions of quality [25]. Interestingly, the number of user ids per person is an indicator of (bad) structure quality, which points to a problem in the care process of monitoring more than one ward. The number of notes taken, or taken at the time they refer to, is an indicator of process quality in Donabedian's perspective. It may point to a problem of structure quality consisting of an insufficient permissions concept hard-coded in the software.
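The first and third observables above lend themselves to simple automated tracing over the system's audit trail. The following is a minimal sketch under assumed conditions: the log format, the names, the sample data, and the 12 h threshold are all invented for illustration, and a real HIS would feed such a monitor from its own audit records.

```python
from datetime import datetime, timedelta

FMT = "%Y-%m-%d %H:%M"

# (person, user id, sign-in, sign-out) -- invented sample audit data
log = [
    ("Dr. A", "id_a1", "2006-03-01 08:00", "2006-03-01 08:30"),
    ("Dr. A", "id_a2", "2006-03-01 09:00", "2006-03-01 09:20"),  # second id
    ("Dr. B", "id_b1", "2006-03-01 08:00", "2006-03-04 08:00"),  # 72 h sign-in
]

def ids_per_person(entries):
    """First indicator: persons who hold more than one user id."""
    ids = {}
    for person, uid, _t_in, _t_out in entries:
        ids.setdefault(person, set()).add(uid)
    return {p: len(s) for p, s in ids.items() if len(s) > 1}

def overlong_sessions(entries, max_hours=12):
    """Third indicator: sign-ins longer than a plausible shift."""
    flagged = []
    for person, uid, t_in, t_out in entries:
        duration = datetime.strptime(t_out, FMT) - datetime.strptime(t_in, FMT)
        if duration > timedelta(hours=max_hours):
            flagged.append((person, uid, duration))
    return flagged

print(ids_per_person(log))      # {'Dr. A': 2}
print(overlong_sessions(log))   # flags Dr. B's uninterrupted 72 h sign-in
```

Such counts say nothing by themselves; as argued above, it is their sudden change against an unchanged case mix that warrants investigation.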
5. How to handle the threat of decay

5.1. Different levels of criticality
In the first place a crucial distinction should be made. All of us have seen obvious incompatibilities like those in Section 3.3; they do not go unnoticed and require immediate action. We have seen different kinds of graceful degradation worked around by creative human users, as in Sections 3.1, 3.2 and 3.4. Gradually decaying user trust and satisfaction can be expected when the number of required workarounds increases. Most critical, however, are catastrophic failures disguised as normal function, as also discussed in Sections 3.3 and 3.5. Software Oversight Committees [17] can earn their merits here.
5.2. Addressing inherent traits of the health care industry

Part of the problems in Sections 3.1 and 3.2 are home-made. As long as we believe that we can afford three separate communities and academic cultures for administrative, medical, and nursing informatics, we need not be astonished that we fail to serve a hospital that must bridge the gaps between the professions. But legacy structures of hospitals may just as well be the origin of problems: in times when each department head "owned" resources and patients and coexisted with other department heads who also "owned" resources and patients, corresponding legacy systems may have been good at mimicking such a structure. However, with the breaking up of hierarchies and the introduction of clinical pathways orchestrated with professionals from different departments, the "ownership" metaphor needs to be replaced by a "processes and services" metaphor, and software has to reflect that change. If health care is a laggard in transforming itself from structure to process, health care software is a likely laggard, too.
Parts of the problems in Section 3.4 are typically caused by a research-dominated medical community that honors innovation and disregards maintenance. Authoring a guideline increases professional reputation; revising it is comparatively tedious and unrewarding work. However, when the present situation of wide-ranging graceful degradation of guidelines gives rise to successful lawsuits, the whole beneficial movement of evidence-based medical care may catastrophically fail. A system of incentives for maintenance is likely more beneficial for the health care system than a sudden wave of sanctioning and mistrust, which may make outstanding scientists withdraw from the guideline process.

Parts of the problems in Section 3.5 relate to underfunded IT in health care institution budgets. A vendor company that is profitable through its projects with hospitals need not go bankrupt, and need not become a cheap acquisition for some large company that is not really interested in the products and customers but just wants to invest, diversify, or recruit workforce. The author has seen three failing IT projects in his immediate work environments where mergers, discontinuation of unprofitable product lines, or bankruptcies have hampered major health care customers.
5.3. Systematic containment of decay—an example
New knowledge can affect care, and the software systems that support care, in a variety of ways. For those effects in the sphere of Section 3.4 that manifest themselves through guidelines, we have developed the HELEN system, which is intended to detect and, hopefully, correct decay early. Its core asset is a highly differentiated ontology for guideline contents and usage. Besides allowing parts of a guideline to be characterized as to their purpose, intended audience, type of contents, local adaptation status, etc., it establishes all kinds of cross-references between contents and status information. Therefore, whenever an element of knowledge itself, or a pattern of care linked to an element of knowledge, undergoes some change, the XML representation of the guideline immediately shows which other elements are affected and should therefore be checked as to whether they need to be changed accordingly [21]. With such software in place, revisions still need to be done, but they are much less tedious than with a flat text representation of a guideline where all dependencies have to be identified by insight before local adaptation needs become visible. More generally speaking, the HELEN system is preemptive, change-responsive software that links the definition of environmental conditions to software functions. When the environment changes and no longer matches the definition, HELEN indicates which of its functions are affected and need to be reconsidered or changed.
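The propagation principle behind this can be sketched in a few lines. This is not the actual HELEN implementation, which rests on a much richer XML ontology [21]; the element types, names, and link structure below are invented for illustration only.

```python
# Sketch of change propagation across cross-referenced guideline
# elements: when one element changes, every element reachable via
# a cross-reference is flagged for review.

class GuidelineElement:
    def __init__(self, eid, purpose, audience):
        self.eid = eid
        self.purpose = purpose    # e.g. "evidence", "recommendation"
        self.audience = audience  # e.g. "physician", "nurse"
        self.links = set()        # ids of elements depending on this one

def affected_by_change(elements, changed_id):
    """Return all elements reachable from the changed one via
    cross-references, i.e. everything that should be re-checked."""
    to_visit, affected = [changed_id], set()
    while to_visit:
        for dep in elements[to_visit.pop()].links:
            if dep not in affected:
                affected.add(dep)
                to_visit.append(dep)
    return affected

# Hypothetical mini-guideline: evidence e1 backs recommendation r1,
# which local adaptation a1 specializes.
e1 = GuidelineElement("e1", "evidence", "physician")
r1 = GuidelineElement("r1", "recommendation", "physician")
a1 = GuidelineElement("a1", "local_adaptation", "physician")
e1.links.add("r1")
r1.links.add("a1")
elements = {x.eid: x for x in (e1, r1, a1)}

print(sorted(affected_by_change(elements, "e1")))  # ['a1', 'r1']
```

The point of the sketch is merely that, once dependencies are explicit, the reviewer's job shrinks from re-reading the whole flat text to inspecting a computed set of affected elements.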
6. Discussion
The analysis so far seems to suggest that decay is part of the life cycle of information systems. This is in accordance with the common experience that well-organized processes begin to fade after a while because changes in the implicit circumstances of their functioning have gone unnoticed. Far-fetched as this may appear, this is also in accordance with
the laws of thermodynamics, which predict development away from unlikely ordered states towards more likely, less ordered states unless energy is added to a system. This last thought may be the key to understanding what it means to contain decay. "Energy" needs to be added to a system in order to enable it to maintain its orderly and well-organized state. What does energy mean here, and where does it come from?

Financial and human resources suggest themselves as the energy required. For systems to evolve rather than decay, the availability and attention of knowledgeable humans needs to be kept on standby in order to act as soon as an indication of decay occurs. "Maintenance" should be understood broadly here: it can mean changing a software or a process, but also changing a contract, educating a new target group or generation of users, etc. Where the energy, i.e. the resources, comes from depends on whether a system has achieved market pull status in its environment of use. In that case there is abundant evidence that users will put some of their own energy in (for a CPOE example where this worked, see [9]). When the situation is still one of technology push, we may have serious problems: at the end of an implementation project the system is ready in the sense of project management, has been approved, and responsibility has been transferred from vendor/developer to customer. The developer's workforce is no longer in charge and not obliged to offer resources, while the users have not yet profited enough to feel they should put their energy in. A good example of a wise approach that created market pull before the real work on the part of the users started is presented in [11]: Kaiser Permanente Northwest had offered their physicians online access to the majority of the outpatient data the insurance company had collected and electronically stored over the years.
Only after the physicians had felt the benefits of comprehensive present and longitudinal data at their fingertips were they invited, and happily accepted, to chart electronically themselves.

Wherever resources may come from, it is most important to note that we have not explored change management in this article. Change management is the set of methods to optimally implement planned changes. Containment of decay is the awareness, and the means, to act when non-anticipated changes happen. We might better speak of "change responsiveness". Johnson et al. [15] have introduced the term "resynchronization" for the efforts needed to keep a software system in synchronization with environmental conditions as they change after launch of the software. They offer model calculations showing that a tremendous part of an IT budget is required to avoid writing off software soon after it has been introduced. Drawing on model calculations in [26] from as early as 1991, they predict that within five years an IT budget that does not grow leaves but 7% for development, while 93% is "absorbed" by maintenance, if maintenance goes beyond maintaining technical function (estimated as 20% of a yearly budget) and includes adaptation to non-anticipated environmental changes (estimated as 40% of a yearly budget). One may argue that software technology has evolved and makes changes easier than they were 15 years ago. However, business re-engineering, and hence the entailed need for resynchronization, happens at a faster rate now than it did 15 years ago. The health care industry, with its pressure to reduce cost, is no exception to that trend. Hence, the increasing rate of reorganization may outweigh the improved adaptability of modern software. A recent example is the German set of DRG billing rules, which keep changing from year to year. Furthermore, [15] and [26] do not address at all one aspect that was introduced here as the least controllable one: vendor mergers and the entailed risks of diffusion or dilution of the skills required for resynchronization.

The theory of improvisational change seems to counter such arguments [27]. It is based on the assumption that knowledgeable and creative humans exploit and enhance options and opportunities that information systems offer. As a result, unforeseen functionality is discovered and enables unforeseen productive organizational behavior. Such effects have also been observed with IT in health care.9 They do not cover, however, developments where creativity is applied not to invent new productive behaviors but to bypass required behaviors: if it is necessary for good care that a physician accesses his patient's electronic patient record, and if it is required by law that we can trace who accessed which data at what point in time, then using the id and authentication of someone else is not jazz (an early metaphor stimulating the improvisational change debate that is now being contrasted with other metaphors) but crime. Circumventing an operational but presumably outdated guideline is not jazz but a gamble. In other words, improvisation is applied to keep doing with inferior and possibly illegal means what could be done with appropriate means before a change intruded into the working sphere. These considerations should make obvious that presently there are more questions than answers.
They should make obvious, too, that answers do not come from approved methods of user participation, HIS architectures, standards, and change management alone, but that a new field of research needs to be established for methods to monitor and safeguard operational information systems against changes intruding from all directions.
Acknowledgments

The author wants to thank the anonymous reviewers of former versions for valuable comments that helped straighten the arguments. Valuable comments and hints to pertinent other work were also contributed by Shobha Phansalkar, University of Utah. Thanks also to my Medical Informatics students at Heidelberg-Heilbronn, whose journal club contributions and controversial views on the topic helped to come up with more pertinent examples. Thanks to my medical students, who highly appreciated being taught about standardized documentation and the value of data in medical care, and who may some day be a generation of physicians willing to offer some of their energy upfront to have better data at their disposal down the road.
9 A University Medical Center in Germany has exploited MS Outlook® to implement an X-ray imaging appointment system.
References

[1] R.M. Gardner, T.A. Pryor, H.R. Warner, The HELP hospital information system: update 1998, Int. J. Med. Inf. 54 (1999) 169–182.
[2] N.M. Lorenzi, R.T. Riley, Organizational issues = change, Int. J. Med. Inf. 69 (2003) 197–203.
[3] L.T. Kohn, J.M. Corrigan, M.S. Donaldson (Eds.), To Err is Human: Building a Safer Health Care System, Institute of Medicine, National Academy Press, Washington, DC, 2000.
[4] M. Berg, J. Aarts, J. van der Lei, ICT in health care: sociotechnical approaches, Methods Inf. Med. 42 (2004) 297–301.
[5] N.M. Lorenzi, R.T. Riley, Managing change: an overview, J. Am. Med. Inform. Assoc. 7 (2000) 116–124.
[6] S. Garde, A.C. Wolff, U. Kutscha, T. Wetter, P. Knaup, A procedure model to support the integration of health care information system components into established processes of care, Medinfo 11 (2004) 1609.
[7] T. Chin, Doctors pull plug on paperless system, amednews.com, February 17, 2003, available at: http://www.ama-assn.org/amednews/2003/02/17/bil20217.htm, last visited May 13, 2005.
[8] E. Coiera, When conversation is better than computation, J. Am. Med. Inform. Assoc. 7 (2000) 277–285.
[9] F.D. Baldwin, CPRs in the winner's circle, Healthcare Informatics, available at: http://www.healthcareinformatics.com/issues/2003/05 03/cover.htm, last visited May 10, 2005.
[10] E. Linczak, A. Tempka, N. Haas, Plädoyer für die Beseitigung arztfremder Kodiertätigkeit, Deutsches Ärzteblatt 101 (2004) A2242–A2244.
[11] A.T. Khoury, H.L. Chin, M.A. Krall, Successful implementation of a comprehensive computer-based patient record system in Kaiser Permanente Northwest: strategy and experience, Eff. Clin. Pract. 1 (1998) 51–60.
[12] R. Rada, S. Finley, The aging of clinical information systems, J. Biomed. Inform. 37 (2004) 319–324.
[13] C.P. Friedman, C. Milton, A.J. Krumrey, D.R. Perry, R.H. Stevens, Managing information technology in academic medical centers: a "multicultural" experience, Acad. Med. 73 (1998) 975–979.
[14] T. Barthel, M. Resch, Bericht über eine Fallstudie, Forbit e.V./Caro GmbH, available at: http://caro-gmbh.de/4-1-2.pdf, last visited May 10, 2005.
[15] B. Johnson, W.W. Woolfolk, P. Ligezinski, Counterintuitive management of information technology, Bus. Horiz. 42 (1999) 29–36.
[16] K.S. Glassman, J. Kelly, Facilitating care management through computerized clinical pathways, Top. Health Inf. Manage. 19 (1998) 70–78.
[17] R.A. Miller, R.M. Gardner, Recommendations for responsible monitoring and regulation of clinical software systems, J. Am. Med. Inform. Assoc. 4 (1997) 442–457.
[18] R.S. Evans, S.L. Pestotnik, D.C. Classen, T.P. Clemmer, L.K. Weaver, J.F. Orme, J.F. Lloyd, J.P. Burke, A computer-assisted management program for antibiotics and other antiinfective agents, N. Engl. J. Med. 338 (1998) 232–238.
[19] M. Eccles, P. Shekelle, J. Grimshaw, S. Woolf, Updating of CPGs, experiences from the UK (and USA), in: Proceedings of the 18th Annual Meeting Int. Soc. HTA, Berlin, 2002.
[20] http://www.guideline.gov/whatsnew/whatsnew GuidelineArchive.aspx, last visited May 10, 2005.
[21] S. Skonetzki, H.J. Gausepohl, M. van der Haak, S. Knaebel, O. Linderkamp, T. Wetter, HELEN, a modular framework for representing and implementing clinical practice guidelines, Methods Inf. Med. 43 (2004) 413–426.
[22] J. Barlow, C. Møller, A Complaint is a Gift, Berrett-Koehler, San Francisco, 1996.
[23] G. Elingsen, E. Monteiro, Big is beautiful: electronic patient records in large Norwegian hospitals 1980–2001, Methods Inf. Med. 42 (2003) 366–370.
[24] E. Balka, Getting the big picture: the macro-politics of information system development (and failure) in Canadian hospitals, Methods Inf. Med. 42 (2003) 324–330.
[25] A. Donabedian, Evaluating the quality of medical care, Milbank Q. 3 (1966) 166–206.
[26] P.G.W. Keen, Shaping the Future: Business Design Through Information Technology, Harvard Business School Press, Cambridge, MA, 1991.
[27] K. Kamoche, M.P.E. Cunha, J.O.O.V. da Cunha, Towards a theory of organisational improvisation: looking beyond the jazz metaphor, J. Manage. Stud. 40 (2003) 2023–2051.