Clin Lab Med 24 (2004) 997–1022
Patient safety in point-of-care testing

Bruce A. Jones, MD, Frederick A. Meier, MD, CM*

Department of Pathology, Henry Ford Hospital, 2799 West Grand Boulevard, Detroit, MI 48202, USA
The probable connection between patient safety and point-of-care testing appears at the intersection of the technical limitations, rapid availability, and therapeutic implications of this genre of tests [1]. In the absence of a systematic evidence base, a series of definitions may be used to approach the topic. (These include definitions of patient safety, medical error, and preventable adverse event, as well as definitions of point-of-care testing itself and of the point-of-care test operator, and the stipulations of three regulatory categories: waived testing, provider-performed microscopy testing, and moderate-complexity testing.) The eight definitions collide with actual point-of-care testing practice to reveal latent conditions that predispose point-of-care test systems to active medical errors [2–4]. These latent conditions appear with special prominence among tests in the regulatory category of waived point-of-care testing [5]. The medical consequences of these limitations require more attention, both to detect point-of-care testing errors and to prevent the adverse events that flow from them [6–10].

A modified version of Kost’s taxonomy of potential defects in the point-of-care testing process exposes the conditions conducive to error that are implicit in this kind of assay [11,12]. The taxonomy provides a framework for detecting where errors arise in the process, so that the relative frequencies of different errors may be measured. It also permits measurement of the frequency with which various types of errors progress to preventable adverse events [13]. Systematic investigations of the ‘‘connectivity’’ between point-of-care testing error and preventable adverse events need to be performed. The authors suggest some topics for this research agenda.

The authors conclude by outlining the existing point-of-care testing laboratory ethos, embodied in the standard model of safe laboratory testing [14].
* Corresponding author. E-mail address: [email protected] (F.A. Meier).

0272-2712/04/$ - see front matter © 2004 Elsevier Inc. All rights reserved.
doi:10.1016/j.cll.2004.06.001 labmed.theclinics.com
Adherence to this ethos encourages a ‘‘culture of safety’’ in point-of-care testing by (1) advancing operator competency, procedural consistency, quality control, and result integrity and (2) monitoring patient identification, specimen adequacy, and report accuracy [15,16].
Definitions

The Institute of Medicine (IOM) report ‘‘To Err Is Human: Building a Safer Health System’’ (1999) defines patient safety, ‘‘the first domain of quality,’’ as freedom from accidental injury from the patient’s perspective [3]. The report defines medical error as either the failure of a planned action to be completed as intended (ie, error of execution) or the choice of an incorrect plan to achieve an aim (ie, error of planning) [2]. In the IOM report, an adverse event is an injury caused by medical management, rather than by the underlying condition of the patient [3]. Closing the circle of definitions, preventable adverse event, in the report, is an adverse event attributable to medical error [4].

Price and Hicks [3] define point-of-care testing as all testing that is undertaken close to the patient and not in a central laboratory, with the test result leading to a possible change in the care of the patient. In this context, the test operator is the person who undertakes test procedures close to patients to produce the results that potentially have an immediate influence on patient care. When considering the responsibilities of point-of-care test operators, one encounters three more definitions—these are for the United States federal regulatory categories into which point-of-care tests have been divided.
Waived testing

Waived testing is the most common category for point-of-care tests performed in outpatient settings. Under the Clinical Laboratory Improvement Amendments of 1988 (CLIA ’88), waived tests are supposed to possess at least one of three attributes: (1) be cleared by the US Food and Drug Administration for home use, (2) ‘‘be so simple and accurate to perform that the likelihood of erroneous results would be negligible,’’ and (3) ‘‘not pose a reasonable risk to the patient [even] if performed incorrectly’’ [4].

Here we witness the first collision between regulatory definition and clinical practice. In many common clinical circumstances, particularly in the management of critically ill or unstable patients, waived point-of-care tests, specifically blood glucose testing in ICU patients [17–22] or prothrombin time determinations in unstable patients anticoagulated with warfarin [23–27], cannot lay claim to attributes (2) and (3) [28,29]. Waived regulatory status has nonetheless exempted tests in this category from the further mandate of quality monitoring by proficiency testing [30].
In the CLIA ’88 statutes and the subsequently developed regulations, waived regulatory status does not, however, exempt waived point-of-care tests from all quality standards [30]. Adherence to standards of practice in four aspects of test performance is required of point-of-care test operators, not only de jure in the CLIA regulations, but also de facto by inspection in institutions accredited by the Joint Commission on Accreditation of Healthcare Organizations (JCAHO), the College of American Pathologists (CAP), or the Commission on Office Laboratory Accreditation (COLA). These quality attributes are (1) test operator competence demonstration, (2) procedure documentation, (3) quality control demonstration, and (4) result reporting documentation [3,30]. Survey results, cited later in this article, uncovered latent conditions conducive to error in the performance of substantial fractions of point-of-care tests that lacked any one of the first three of these attributes [5]. These survey findings suggest that the emphases accrediting bodies place on test operator training, procedural consistency, and quality control are well chosen.

Provider-performed microscopy testing

Provider-performed microscopy testing is the second United States federal category into which point-of-care tests fall under CLIA ’88. Three characteristics define this regulatory category: (1) use of a light (bright-field or phase-contrast) microscope, (2) performance by a physician or other licensed practitioner, and (3) production of results of a primarily qualitative (or no more than semiquantitative) nature [4]. Most of the point-of-care tests that are performed in outpatient venues and are not in the waived testing category show up in the provider-performed microscopy (PPM) testing classification [31]. Methods in this classification, like those in the waived testing category, avoid federally mandated proficiency testing [30].
In PPM we come upon a second collision between regulatory mandate and actual clinical practice. In their own health system, upon introducing active surveillance by point-of-care test coordinators, the authors discovered (1) that only about half of the physicians listed as performing PPM could produce evidence that their competence at test interpretation had been demonstrated (K.M. Bourlier, personal communication, 2004), and (2) that, in a common outpatient setting (an obstetrical clinic), only about half of microscopic urinalyses performed by nursing staff produced recoverable result reports (M. Diegle, personal communication, 2004).

Moderate-complexity testing

Moderate-complexity testing carries a heavier regulatory burden than tests in the first two categories. Moderately complex point-of-care tests are more frequent in inpatient settings and in emergency departments than they are in physicians’ offices and outpatient clinics. In typical acute care venues,
many point-of-care tests are performed using hand-held devices that measure, for instance, activated clotting time or partial thromboplastin time [32], blood gases, or ‘‘critical care panels’’ (ie, brief basic chemistry profiles that report whole blood electrolytes [sodium, potassium, chloride, and bicarbonate], glucose, and blood urea nitrogen, as well as blood calcium and hematocrit) [32,34].

In the CLIA regulations, moderate complexity has, up until now, added to the initial four quality attributes five more features: (1) nondeviation from manufacturers’ instructions for testing; (2) documentation of assay calibration at least every 6 months; (3) performance of two levels of control material each day of testing; (4) documentation of review (by both test operators and laboratory supervisory personnel) of procedure manuals, quality control results, and patient results, with evidence of investigation and, if necessary, of remedial action in response to deviations from expected quality control or patient results; and (5) participation in proficiency testing [35].

Attributes (1) and (4) amplify and specify three of the de jure requirements already made by the CLIA regulations in the waived test category, for test operator training, procedural consistency, and a critical approach to results. Attribute (2), ongoing demonstration of method calibration, and attribute (5), performance of proficiency testing, are additional specific features that provide ways to monitor the accuracy of moderate-complexity point-of-care tests. These two additional de jure requirements recognize moderately complex point-of-care tests’ similarity to assays in the highly complex testing category. Highly complex tests, in the CLIA scheme, can be performed only in hospital and reference laboratories, where ongoing calibration and proficiency testing are standard practices. No point-of-care tests fall into this final CLIA category [36].
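Attribute (3), two levels of control material on each day of testing, is simple enough to audit mechanically. The Python sketch below is purely illustrative, not part of any CLIA or middleware tooling: it assumes a hypothetical QC log of (day, control level) records and compares it against the days on which patient testing occurred, returning the noncompliant testing days.

```python
from collections import defaultdict
from datetime import date

def audit_daily_qc(test_days, qc_log):
    """List testing days lacking two distinct levels of control material.

    test_days -- dates on which patient tests were performed
    qc_log    -- (date, control_level) records, eg (date(2004, 3, 1), "low")
    """
    levels_by_day = defaultdict(set)
    for day, level in qc_log:
        levels_by_day[day].add(level)
    # A testing day is compliant only if at least two distinct levels were assayed.
    return [day for day in test_days if len(levels_by_day[day]) < 2]

# Hypothetical example: two testing days, QC complete on the first only.
days = [date(2004, 3, 1), date(2004, 3, 2)]
log = [(date(2004, 3, 1), "low"), (date(2004, 3, 1), "high"),
       (date(2004, 3, 2), "low")]
print(audit_daily_qc(days, log))  # the second testing day lacks a second level
```

An ‘‘electronic control’’ would appear in such a log, if at all, as a single self-check event, which is precisely why it cannot satisfy the two-level requirement as written.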
Requirement (3), of performance of two levels of control material each day, has been the occasion of a third collision between a defined regulatory requirement and actual clinical practice. For some moderately complex point-of-care tests, the control material stipulation has been honored more in the breach than in the observance: ‘‘electronic controls’’ have been substituted. These circuit checks test the integrity of the hand-held devices’ signal processing system but do not introduce any liquid control material into the chambers from which the devices sample blood specimens to generate the initial signal [14]. At the time of this writing (spring 2004), federal regulators appear to be moving slowly toward greater enforcement of the letter of the two-levels-of-control-material regulation, by insisting on regular, frequent use of liquid control materials in point-of-care chemical analyzers classified in the moderately complex category [37].

A final example of the collision between regulatory definitions and clinical practice is the way in which the same point-of-care tests can be held to different performance standards. Clinicians, taking the term ‘‘waived’’ at face value, are sometimes surprised to learn, usually in the context of an
impending inspection for accreditation by the bodies listed above, that JCAHO, CAP, and COLA extend all five of the moderately complex requirements de facto to waived tests in the many institutions they inspect [4]. Because of this extension, clinical test operators performing any point-of-care tests in, for example, CAP-accredited organizations find themselves performing proficiency testing on a regular basis [38]. Studies to determine whether external proficiency testing affects the quality of clinical testing, regardless of category, were mandated in the original CLIA ’88 legislation, but, to the authors’ knowledge, they were never performed. Thus the question of their necessity remains open. The authors consider the evidence for the value of such testing below.

Looking back over this list of definitions ‘‘from the patient’s perspective,’’ one finds it far from demonstrated that the practical response to the current regulatory classification increases the likelihood that point-of-care tests, as planned actions, will be completed as intended [2]. (In addition, none of the test-related monitoring requirements focuses on test selection—that is, on the choice of plan to achieve an aim, which is the IOM’s definition of the second opportunity for medical error [2].) The authors see four reasons for this state of affairs:

1. Some common waived tests contradict two thirds of the definition of this largest category of point-of-care testing. The elements of the definition they contradict are (1) negligible likelihood of erroneous results and (2) absence of reasonable risk of incorrect performance.

2. Without extensive, systematic supervisory efforts, at least in the authors’ setting, the second largest point-of-care testing category, provider-performed microscopy, fails to deliver two of its four mandated quality attributes about half the time. These are (1) demonstration of test operator competence and (2) result reporting documentation.

3. Moderately complex point-of-care tests, typically used in acute care settings, have also evaded one of the statutory requirements of that test category: the requirement that two levels of quality control material be assayed on each day of test use. If the use of liquid controls really improves testing quality, then the reliability of point-of-care tests is undermined by the systematic avoidance of this requirement.

4. The degree of monitoring applied to point-of-care testing, rather than being site-neutral, as was the initial statute writers’ objective [4], varies from practice site to practice site depending on accreditation status. If the current regulatory requirements do indeed produce a higher standard of patient safety, then there is reason to believe that this mark is being missed in point-of-care testing settings where the requirements are ignored.

This connection—between the measures incorporated in the regulatory definitions, medical errors, and preventable adverse events—remains to be investigated.
Latent conditions

Regulatory controls on point-of-care testing justify themselves on the plausible hypothesis that freedom from accidental injury can be increased by greater test operator adherence to the standard model of good laboratory practice, the model to which the authors refer below as the laboratory ethos. This hypothesis advances from two underlying assumptions: first, that most medical errors that arise from point-of-care testing are errors of execution (failure to complete a planned action as intended), and, second, that the immediate availability of point-of-care test results, one of the defining characteristics of point-of-care testing (and its chief attraction), makes such results more likely to lead either to imprudent changes or to unwise persistence in courses of care that produce preventable adverse effects.

The standard error-prevention model, based on these two assumptions, incorporates lessons learned by students of human error in contexts other than point-of-care testing. The English psychologist James Reason [39] lists (1) shortfalls in training, (2) unworkable procedures, and (3) less than adequate tools and equipment as latent conditions that interact with local circumstances to defeat any system’s safety defenses. Operator competence demonstration, combined with explicit documentation of procedures, demonstration of nondeviation from them, performance of quality control, and periodic method calibration respond precisely to Reason’s three latent conditions regarding training, working procedures, and equipment adequacy. These measures make up three fifths of the application of the standard model of safety to the point-of-care testing system. The entire model is summarized in Box 1.
The fourth feature of the standard model arose from the demands of physicians and payers that laboratory results, including those from point-of-care tests, be retrievable over time, so that practitioners could chart patients’ courses and payers could find the information they had purchased. The two consumer groups were distressed both by the disappearance of test results and by their plasticity. Disappearance of results from the record most often proved to be due to the failure to enter them in a retrievable form. The plasticity of results was most often due to the absence of an edit function once the results reached the record: hence the ‘‘recording’’ and ‘‘verification’’ components of the fourth feature. The concept and the industry of point-of-care test result ‘‘connectivity’’ arose as a response to the clinical and financial problems caused by these defects [12].

Box 1. The standard safety model
- Test operator training and demonstration of competence, with ongoing supervision of performance
- Adherence to explicit written procedures
- Assessment of reagent and device adequacy by demonstration of quality control
- Recording and verification of retrievable patient results
- Statistically valid, comparative proficiency testing of the entire testing process

The fifth feature of the model responded to the discovery that test results reported from one testing site could be incommensurate with results for the same analyte, from the same specimen, tested with the same method at a different site [40,41]. Statistically valid comparative testing of the same test material by multiple sites proved to be an effective strategy for reducing this source of variation [42,43].

In at least one practice setting, point-of-care testing has been shown to deviate from the first three features of the standard model of safety just presented. In a survey performed 4 years ago by the Centers for Medicare and Medicaid Services (CMS), the federal agency charged with enforcing the CLIA ’88 regulations, CMS surveyors observed test operators, in practice settings where only waived tests were performed, as they carried out waived-category point-of-care testing and provider-performed microscopy. The surveyors reported three relevant observations regarding Reason’s three latent conditions for error: defects in training, procedures, and equipment.

Regarding test operator training, the latent condition was that 19% of the personnel whom surveyors found performing point-of-care tests had not received any training in how to do the tests, nor had anyone ever evaluated how well they did them.
Observed consequences of this condition were that (1) 9% of test operators failed to store or handle reagents or kits in the ways manufacturers mandated and (2) 7% did not know how to calibrate the test assays that they were using before performing them.

Regarding adherence to procedure, the latent condition was that 32% of test operators did not have current test manufacturers’ instructions available to follow. The observed consequence was that 16% of test operators deviated significantly from appropriate procedures.

Regarding quality control, 32% of point-of-care test operators failed to perform quality control as required by test manufacturers or as mandated by the Centers for Disease Control and Prevention (the federal agency assigned, with the US Food and Drug Administration, to advise CMS on CLIA ’88 testing standards). The observed consequence was that in 20% of tests fecal occult blood test cards were cut in half, literally severing the manufacturer’s internal quality control indicators from the patient tests [5].

The survey does not address the fourth and fifth elements of the standard model: result reporting practices were not observed, nor were surveyed sites required to perform proficiency testing. The scope of the survey did not include demonstration of observed medical errors; however, in 7% to 19% of the instances in which the various tests were attempted, the observed defects would render their results inaccurate. The absence of quality control in another third of tests left no evidence that this second sizable fraction of tests was functioning correctly.

The CMS survey suggests two unanswered research questions. First, can these substantial error rates be linked to rates of preventable adverse events? Specifically, can the fractions of results from tests performed by untrained test operators in misperformed procedures, or those without quality control, be correlated with (1) fractions of results not available at the intended moments of clinical decision or (2) fractions of erroneous results made available to clinical decision makers at such moments? Second, are error rates in testing venues inspected by accrediting bodies that apply all five elements of the standard model different from the rates in the CMS-observed cohort?

From the survey itself, two things can be concluded: (1) three of the standard safety defenses—operator competence, standard operating procedure, and quality control—were, in substantial fractions of point-of-care tests, no longer present and (2) the findings of the CMS survey appear to put paid to one of the two dubious claims embedded in the initial CLIA regulations themselves, that waived point-of-care tests are ‘‘so simple and accurate to perform that the likelihood of erroneous results would be negligible’’ [4].

What of the second dubious claim embedded in the initial CLIA regulations: that waived point-of-care tests’ results do ‘‘not pose a reasonable risk to the patient [even] if performed incorrectly’’ [4]? Laboratory experience with rapid (stat) testing provides evidence that can be used against this claim.
Laboratory stat testing certainly differs from point-of-care testing, but it differs in ways that increase the time to result availability. This is because stat testing must clear the two hurdles that point-of-care testing eliminates: specimen transport from the patient to the laboratory and result reporting from the laboratory to the testing/treating caregiver. Advocates of point-of-care testing have made much of these inherent delays, despite difficulties linking them with differences in measurable patient outcomes in randomized controlled studies [34,44]. Stat testing nonetheless shares with point-of-care testing short turnaround times compared with the processing times associated with other types of laboratory tests: in the case of laboratory stat testing, these turnaround times typically range between 5 and 60 minutes [45,46].

For laboratory stat testing, links have been demonstrated, as they have not been for point-of-care testing, between risk of error and the likelihood that error affects delivery of care. Error and adverse event likelihoods both increase with reduction in stat test turnaround time [47,48]. With respect to speed, point-of-care chemistry tests (eg, whole blood glucose) and point-of-care coagulation test methods
(eg, the whole blood simulacrum of prothrombin time) derive from and remain, as assay methods, very similar to stat laboratory tests that also assay whole blood [32–34]. It is possible, then, to conclude, from (1) the similarity of processing times and (2) the similarity of method design, that the rapidity and format of point-of-care testing also increase both the risk of error and the likelihood that error will affect care.

Regarding the latent conditions that predispose to error in point-of-care testing, it thus appears reasonable to believe three things that contradict current regulatory assumptions: (1) that point-of-care tests are prone to errors of execution, (2) that the rapid availability of point-of-care test results makes them likely to produce preventable adverse events, and (3) that the standard model of safe laboratory testing, as a set of defenses against point-of-care testing errors, is not consistently applied. To investigate the validity of these propositions further, a classification of point-of-care testing errors is necessary.

Kost’s error classification framework for point-of-care testing

Kost’s approach [7] employs the useful division of the clinical laboratory testing process into three phases: preanalytic, analytic, and postanalytic. It also works from the hypothesis, well established by evidence in non–point-of-care laboratory testing, that defects leading to errors in point-of-care testing are more likely to occur in the preanalytic than in the analytic or postanalytic phases [49]. Kost does not include, however, an initial preanalytic variable that the authors suggest adding: test order indication and frequency. Point-of-care testing may be performed too often or in the midst of therapeutic interventions.
Testing too often or in the midst of confounding interventions leads to the test orderer/interpreter’s ‘‘chasing his own tail,’’ that is, becoming unable to distinguish the ‘‘noise’’ of excessive or inappropriately timed testing from the ‘‘signal’’ of pathophysiologic or pharmacologic change in the patient. With the addition of this initial step and error category, the authors summarize the now-modified Kost framework in Table 1.

The modified Kost error classification framework has several attractive features. The preanalytic, analytic, and postanalytic phase separations are intuitively comprehensible. With the addition of the initial test-ordering step at the beginning of the preanalytic phase, each phase has an easily remembered four steps arranged in time-sequence order. The steps also call to the attention of nonlaboratorian test operators and quality analysts some lessons learned about error causation in laboratory practice, which are likely to apply to point-of-care testing.

The first of these lessons, already alluded to, is that errors are most likely to appear in the preanalytic phase [49]. If this proves not to be the case in a particular point-of-care testing process, the investigator of the process should look further to discover why this is so. Is the lesser rate of preanalytic
Table 1
Modified Kost error classification
Phase/step in the point-of-care testing process: step-specific defects with error-producing potential

1. Preanalytic phase: suspected locus for most defects
   Test ordering: excessive or mistimed testing
   Patient/specimen identification: testing the wrong patient, testing the wrong specimen, or entering the wrong patient/specimen identification information
   Specimen collection: inappropriate or inconsistent specimen type, volume, or application to the testing surface/chambers
   Specimen attributes: patient-related (anemia/leukocytosis) or collection-related (hemolysis/microclot formation) defects

2. Analytic phase: opaque ‘‘black box’’ kits or devices
   Method calibration: no, nonprotocol, or misentered calibration data
   Specimen/reagent interaction: patient-related ‘‘native interference’’ (eg, nonspecific agglutinins); specimen-related ‘‘nontarget influences’’ (diluents, anticoagulants, other drugs); specimen–reagent combination-related ‘‘matrix effects’’ (method-specific differences)
   Result generation: results outside the method’s validated range (eg, newborn glucose or hematocrit measured by devices validated only for adult populations’ ranges)
   Result validation: lack of quality control and/or other performance monitors

3. Postanalytic phase: suspected locus for catastrophic defects
   Result generation: absent or inappropriate units or reference interval, garbled machine output, or mistaken human report or transcription
   Critical value reporting: criticality of result not recognized, not brought to the effective decision-maker’s attention, or not documented for retrieval
   Other result reporting: reports not communicated, their communication delayed, or the communication lost to subsequent reference/retrieval
   Report verification, recording, storage, and retrieval: lack of correlation of recorded with initially generated result; failure to document initially, to transfer subsequently to an electronic record, or to locate finally with previous point-of-care patient reports
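Where an institution logs detected defects, the framework in Table 1 can be applied mechanically: classify each defect by phase and step, then read relative frequencies off the tallies. The Python sketch below is purely illustrative (the `DefectTally` class and its methods are hypothetical, not drawn from any published point-of-care middleware); the phase and step names follow the modified Kost framework.

```python
from collections import Counter

# Phases and steps of the modified Kost framework (Table 1).
KOST_STEPS = {
    "preanalytic": ["test ordering", "patient/specimen identification",
                    "specimen collection", "specimen attributes"],
    "analytic": ["method calibration", "specimen/reagent interaction",
                 "result generation", "result validation"],
    "postanalytic": ["result generation", "critical value reporting",
                     "other result reporting",
                     "report verification, recording, storage, and retrieval"],
}

class DefectTally:
    """Count detected point-of-care testing defects by (phase, step)."""

    def __init__(self):
        self.counts = Counter()

    def record(self, phase, step):
        if step not in KOST_STEPS[phase]:
            raise ValueError(f"unknown step {step!r} in phase {phase!r}")
        self.counts[(phase, step)] += 1

    def phase_fractions(self):
        """Relative frequency of defects by phase."""
        total = sum(self.counts.values())
        if total == 0:
            return {p: 0.0 for p in KOST_STEPS}
        by_phase = Counter()
        for (phase, _), n in self.counts.items():
            by_phase[phase] += n
        return {p: by_phase[p] / total for p in KOST_STEPS}

# Toy log: two preanalytic defects and one analytic defect.
tally = DefectTally()
tally.record("preanalytic", "patient/specimen identification")
tally.record("preanalytic", "specimen collection")
tally.record("analytic", "method calibration")
print(tally.phase_fractions())  # preanalytic defects dominate, as the lesson predicts
```

Such a tally is also the natural input for the second measurement the text calls for: the frequency with which defects in each cell progress to preventable adverse events.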
errors due to a real difference between the error profiles of point-of-care and laboratory testing, or is it due to a failure to detect preanalytic errors in point-of-care testing?

The second lesson from laboratory practice is the importance, and rather paradoxical difficulty, of accurate patient and specimen identification. Laboratorians led the way to the demonstration that patient identification is a major patient safety issue, an understanding that is now widely embraced [50–52]. A point-of-care test operator in a busy emergency department is just as liable as a phlebotomist to confuse Mr. J. Smith, aged 59, with shortness of breath and a new cough, with Mr. J. Smith, aged 70, with acute urinary retention and new-onset confusion, when the operator is asked to ‘‘get a set of lytes on Mr. Smith’’ and does not attempt to verify two forms of patient identification. Regarding specimen labeling in a busy outpatient venue, two urine samples placed in unlabeled specimen cups, side by side on the same clinic counter, are as likely to be switched before point-of-care human chorionic gonadotropin (hCG) testing as they are before laboratory hCG testing, should the cups’ labeling be ‘‘batched’’ and their origins confused. In this instance, one patient might be told that she was pregnant and the other that she was not, both in error.

Incorrect or absent patient identification can also prevent uploading of large-volume point-of-care test results to electronic patient records. This defect delays both the availability of the results and their comparison with previous test results. In the most frequent clinical circumstance, this delay affects whole blood glucose determinations, making it difficult to trend and compare them over time [53]. In the authors’ practice, an ongoing quality monitor has been necessary to reduce uploading defects in point-of-care glucose test results from 30% to 1%–2% (K.M. Bourlier, personal communication, 2004).
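The two-identifier check that the hurried operator skips can be made explicit at the device or in connectivity middleware. The Python sketch below is hypothetical (the function, the identifier keys, and the dates of birth are invented for the example): testing proceeds only when two independent identifiers both match the ordered patient.

```python
def verify_patient(ordered, presented):
    """Require two independent identifiers to match before testing proceeds.

    ordered, presented -- dicts with hypothetical keys "name" and "dob".
    Returns True only when both identifiers agree.
    """
    return (ordered["name"].strip().lower() == presented["name"].strip().lower()
            and ordered["dob"] == presented["dob"])

# The two Mr. J. Smiths from the text: same name, different dates of birth.
order = {"name": "J. Smith", "dob": "1945-02-11"}   # dates of birth invented
bedside = {"name": "J. Smith", "dob": "1934-07-30"}
print(verify_patient(order, bedside))  # False -- the name alone is not enough
```

The design point is simply that a single matching identifier must never be sufficient; the second identifier is what separates the two Mr. Smiths.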
Regarding specimen collection, in the authors’ practice, point-of-care testing coordinators have noted that the ‘‘learning curve’’ for test operators involves two processes. First, new operators must come to grasp the differences between venous blood samples and finger-stick ‘‘capillary’’ blood specimens when one or the other specimen type is introduced into a point-of-care testing device. Second, the trainees must develop relative consistency, from test event to test event, in delivering the same amount of specimen (adequate but not excessive to the method’s specification) onto the device’s testing surface or into its intake chamber (S.M. Margle, personal communication, 2004) [54,55]. On the basis of these two observations, the authors hypothesize that the extent of test operator training and the frequency with which a trained operator performs a test method both influence the frequency of specimen collection defects.

Regarding specimen attributes, the large test volumes processed in hospital and reference clinical laboratories allow people who operate tests in these settings to appreciate ‘‘in parallel’’ population variation in test results, because many results are generated and their quality is analyzed collectively [56,57]. Point-of-care test operators almost always experience patient-to-patient variation in results ‘‘in series,’’ a pattern in which the operator
observes one test after another, making it much more difficult to appreciate population effects. The serial acquisition of patient test data also makes it difficult to separate patient-related specimen attributes that may precipitate erroneous results from collection-related attributes that may do so. In point-of-care settings, modest degrees of patient-related variation (eg, modest anemia [hemoglobin <7.5 g/dL] and leukocytosis [>50,000 cells/µL] [11]), of which the test operator and result interpreter may not be aware, can lead to defects in test results, which can progress to medical errors.

In large studies of laboratory specimen quality, the authors have shown that small-volume samples produce more specimen defects of hemolysis or clotting than do large-volume specimens [58,59]. All point-of-care whole blood tests are small volume, so they are liable to the same effects that the authors observed in non–point-of-care specimens. These collection-related defects can lead to the deployment of erroneous results if they are not recognized [60,61]. Such defects can be very hard to detect in closed microsystems, especially those through which point-of-care test specimens are processed in sequence. Specifically, (1) artifactual hemolysis falsely elevates potassium and depresses bicarbonate [11,62] and (2) poorly mixed plasma proteins are ‘‘partitioned’’ in the small whole blood samples to form microclots that consume coagulation factors before they are assayed. The latter phenomenon falsely increases coagulation indices [11,63].

The analytic phase of the point-of-care testing process is, on the one hand, the phase least affected by clinical (patient, operator, and sample) variability and, on the other hand, the phase least penetrable by test operator observation. This opacity is the reason for the common designation of point-of-care methods as ‘‘black box’’ tests. Four sources of variation do, however, appear around the edges of the black boxes.
Test operators and test result interpreters should recognize that the relative mix of these sources of analytic error varies from method to method. The first analytic-phase opportunity for error is one of the CLIA-mandated operational responsibilities, method calibration. In non–point-of-care tests, failure to calibrate, departures from manufacturers' specified calibration protocols, and clerical errors in the processing of calibration data have all been suspects in investigations of dissonant results. The frequency with which such defects reach patients as unrecognized medical errors is thought to be low but is not known from large reliable studies.

Considerations of analytic phase error usually lead clinician point-of-care test operators deeper into the lair of laboratory medicine than they prefer to explore. Just as it is difficult, in small point-of-care testing specimens, to separate preanalytic patient-related from preanalytic collection-related specimen characteristics, so it is difficult to distinguish the analogous analytic variables that are cited by Kost [11] as confounding specimen/reagent interactions. These are "native interference" and "nontarget influences," which are difficult to separate not only from each other but also from method-specific "matrix effects." Examples of patient-related "native interference" include false-positive results that the presence of heterophile antibodies (both
Forssman and Paul-Bunnell) produces nonspecifically in point-of-care blood tests whose endpoints are agglutination. Specimen-related "nontarget influences" include (1) 5% glucose solutions being infused immediately upstream from a stopcock through which a point-of-care testing sample is collected, diluting with glucose solution a blood sample being collected for point-of-care whole blood glucose testing; (2) heparin introduced to keep the convenient stopcock open, inducing spuriously prolonged point-of-care coagulation tests; and (3) cardiotropic and antidysrhythmic drugs that interfere with calcium channels in hand-held chemistry devices. All these "interfering substances" could also affect non–point-of-care laboratory methods; they are, however, more difficult to detect and more difficult to take rapidly into clinical account in the point-of-care setting. As with calibration defects, the frequency with which these analytic phase errors become preventable adverse events is considered low, but it is not known from large, statistically and methodologically valid studies.

"Matrix effects" are intermethod differences in test results from the same specimens that are due to systematic, rather than random, variation traced to specimen–reagent combinations. They can be quantified in point-of-care assays; for example, when one compares the simulacrum whole blood International Normalized Ratio (INR) produced by a point-of-care prothrombin time method with the thromboplastin-derived plasma INR produced by a laboratory prothrombin time assay, one detects matrix differences that make it confusing to move back and forth between these two method genres while managing anticoagulated patients [64,65]. This effect is a fourth source of analytic error known to be present in point-of-care testing whose impact on patient safety awaits quantification. The situation is clearer regarding issues of "linearity" and results not within a method's clinically reportable range.
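The INR matrix effect described above can be made concrete with the standard defining relation for the INR; the numbers below are purely illustrative, not taken from any device or study cited here:

```latex
% INR is defined from the measured prothrombin time (PT), the mean normal
% prothrombin time (MNPT), and the method-specific International
% Sensitivity Index (ISI):
\[
\mathrm{INR} = \left(\frac{\mathrm{PT}}{\mathrm{MNPT}}\right)^{\mathrm{ISI}}
\]
% Hypothetical example: two methods with different reagent/instrument
% combinations (hence different ISI values) applied to the same patient:
%   Method A (plasma method,      ISI = 1.0, MNPT = 12 s, PT = 24 s):
%     INR = (24/12)^{1.0} = 2.0
%   Method B (whole blood method, ISI = 1.4, MNPT = 12 s, PT = 22 s):
%     INR = (22/12)^{1.4} \approx 2.3
```

Because the exponent ISI differs between methods, nominally "equivalent" INRs can diverge in a systematic, method-dependent way, which is exactly the kind of specimen–reagent matrix difference that complicates moving between point-of-care and laboratory prothrombin time results.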
Use of point-of-care glucose testing among neonates at risk for hypoglycemia has led to fears that decisions due to apparent differences, or lack of differences, in point-of-care results may not be substantiated by the results of laboratory methods [66]. Although there are no randomized controlled trials demonstrating the impact of quality control or other performance monitors in the analytic phase, the reliability of statutorily required measures appears to be established, not only by legislative fiat, but also by the historical record of the improvement in clinical laboratory tests' precision and accuracy under the influence of these measures [67,68]. The tendency of some point-of-care test operators to disregard quality control, especially in settings performing tests in the regulatory category of waived testing [69,70] and when they are not inspected and accredited by agencies that require quality control and proficiency testing in a "site-neutral" fashion (ie, by JCAHO, CAP, and COLA) [5], may offer an opportunity for comparative study of the impact of these laboratory practice measures on patient safety.

Compared with errors in the preanalytic and analytic phases, postanalytic phase errors are more easily detected, at least in retrospect, and
easier to connect to preventable adverse events. The settings in which the authors have observed clinically significant result reporting errors are (1) "in the heat of battle," where report defects are not detected by test consumers who are distracted from critical assessment of results by their own craving for them and (2) "out of the blue," where there is no clinical context available for consideration in which the mistranscribed, erroneous results might stand out as odd or inconsistent. In the authors' practice, failure to notice, report, or document critical values for subsequent reference is the postanalytic defect that point-of-care test coordinators most often detect in emergency room settings (M. Diegle, personal communication, 2004) [71].

A whole industry supplies "connectivity," or the electronic transfer of point-of-care testing results directly from test devices to electronic medical records [72,73]. The connectivity industry has developed to deal with the previously massive problem of the disappearance of point-of-care test results over time. The main safety impetus to connectivity, as opposed to economic, regulatory, and medicolegal impetuses, was the loss of previous data points needed to evaluate trends in test results in relation to clinical interventions, and hence to assess the interventions' impact. The reliability of connectivity requires constant monitoring. The effect of its absence on medical error rates (eg, in the management of patients with diabetes mellitus during acute medical illnesses) remains unquantified. The assessment of the completeness of connectivity has, among other consequences, revealed the importance of the regulatory requirement of report verification. These verification strategies turn up at least a 1% to 2% rate of reports revised because of the failure to transmit or to transcribe (in the case of manual tests) results into the patient record. Again, it is well established that these errors occur.
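The report-verification check discussed above amounts to a reconciliation between a device's result log and what actually reached the patient record. A minimal sketch of such a reconciliation follows; the function name, accession identifiers, and values are all hypothetical, not drawn from any system cited in this article:

```python
# Illustrative sketch of postanalytic report verification: compare results
# logged on a point-of-care device with those filed in the patient record,
# and compute the fraction of reports needing revision because they were
# never transmitted or were mistranscribed. All data here are hypothetical.

def verification_report(device_log, patient_record):
    """Return (list of reports needing revision, revision rate)."""
    revised = []
    for accession, device_value in device_log.items():
        charted = patient_record.get(accession)
        if charted is None:
            revised.append((accession, "not transmitted"))
        elif charted != device_value:
            revised.append((accession, "mistranscribed"))
    rate = len(revised) / len(device_log)
    return revised, rate

# Four glucose results on the device; one transposed, one never charted.
device_log = {"A1": 108, "A2": 95, "A3": 310, "A4": 88}
patient_record = {"A1": 108, "A2": 95, "A3": 130}

revised, rate = verification_report(device_log, patient_record)
assert revised == [("A3", "mistranscribed"), ("A4", "not transmitted")]
assert rate == 0.5
```

Run periodically, the revision rate from such a reconciliation is the kind of index that could be trended over time, in the spirit of the monitoring the authors recommend.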
The frequency of postanalytic phase errors is reasonably quantified, as opposed to that of errors in the preanalytic and analytic phases of point-of-care testing; what is lacking is the clinical correlation needed to determine what fraction of these errors progress to adverse events.

Research questions raised by the framework

The modified Kost framework for detecting point-of-care testing errors uncovers research questions regarding the relative impact of various error types on patient safety. The authors now list what appear to them to be some pressing topics for investigation in this context.

First, are point-of-care testing errors indeed most likely to occur in the preanalytic phase of the testing process? If so, then troubleshooting and prevention efforts should, as they should in other genres of laboratory testing, focus on the preanalytic phase.

Second, are some patterns of point-of-care test ordering in themselves conducive to preventable adverse events? If so, this finding has implications
for the training of clinicians who order point-of-care tests and warrants advising them against these ordering patterns. It also has implications for monitoring of point-of-care test ordering practice to detect the frequency of dangerous patterns.

Third, what kind of point-of-care test operator training and what extent of test operator observation do trainees need when they are being oriented to specific point-of-care test methods? What expertise do they need to develop (eg, the ability to deliver a uniform and adequate amount of specimen to a point-of-care device at each test event)?

The fourth question regarding the preanalytic phase comes in two parts: (1) Which specimen attributes are more conducive to error: those related to the patient's pathophysiologic state (eg, anemia or leukocytosis) or those related to the specimen-collection process (eg, hemolysis or microclot formation)? (2) Between the pathophysiologic and collection characteristics conducive to error, which are more likely to lead to preventable adverse events? The answers to this pair of questions should inform efforts, on one hand, to detect patient-related error-producing specimen attributes and, on the other hand, to prevent collection-related error-producing attributes.

In the analytic phase, the first research question is: what are the method-specific frequency profiles of native interference, nontarget influences, and matrix effects? These variables can be detected and managed, but they cannot be prevented. The second analytic phase question articulates a potentially pressing patient safety research issue: what adaptations are needed for appropriate analysis in special patient populations? Neonates have been the prime example of a special population [74,75] in which the reference and clinically reportable ranges "wired into" point-of-care devices need to be different from those validated for adult populations.
Neonates' margins of safety in the face of analytic error may also be narrower than the margins in adult populations, and the pattern of preventable adverse events in the neonatal population is likely to be different from the patterns discovered in older age groups [76].

Regarding the postanalytic phase, the CLIA regulations on the one hand and the accreditation standards of the JCAHO, CAP, and COLA on the other have produced two cadres of point-of-care test operators performing waived tests. Members of the first group are not subject to inspections that check for the presence of the postanalytic elements of the standard model, especially regarding result documentation. Point-of-care test operators in the second group, those subjecting themselves to accreditation, are both inspected and required to maintain result report records. This state of affairs can be thought of as a regulatory experiment. Comparison of the error rates and associated preventable adverse event rates, measured in matched representative samples from the two groups, would offer a setting for an otherwise missing randomized controlled trial that could confirm or refute the presumed value of the "result management" elements of the standard laboratory model for patient safety. The other research questions
that arise from the postanalytic phase of testing are more operational: (1) What are effective strategies to ensure timely, accurate, complete, and retrievable reporting of critical values generated by point-of-care tests? (2) What are valid measures of point-of-care data-transfer integrity, completeness, and retrievability by electronic systems offering ‘‘connectivity’’? (3) Which point-of-care result presentation formats are conducive to effective medical decision-making? The amount of effort and resources devoted to answering these operational questions should follow from the risk profiles that erroneous result reporting creates; these profiles await demonstration.
The standard model as a culture of safety in point-of-care testing

As outlined earlier, the laboratory ethos for point-of-care testing consists of a standard model of four quality attributes that are supposedly present in all such testing and five more features that are supposedly present in moderate-complexity point-of-care testing (Box 2). An evidence base that connects deviation from specific elements in this standard model with statistically valid rates of either medical errors or preventable adverse events does not yet exist. But a modified version of Kost's taxonomy of point-of-care testing errors may serve as a framework for building such an evidence base through the pursuit of the relevant research questions just listed. Having said this, the authors still regard the nine quality attributes of the ethos as the best patient safety objectives currently available: ones that are worth pursuing until they are modified by statistically valid evidence. On this basis, the authors recommend the practical deployment of a series of measures aimed at achieving the objectives of the ethos, as the best means
Box 2. The laboratory ethos for point-of-care testing

The first tablet of the ethos
Operator competence
Procedure adherence
Quality control
Result recording

The second tablet of the ethos
Nondeviation from manufacturer instructions
Assay calibration every 6 months
Two levels of quality control daily
Documentation review
Proficiency demonstration
currently available for building a culture of safety in point-of-care testing. The first four measures they recommend are (1) operator training, (2) program supervision, (3) competence assessment, and (4) proficiency testing. Participation in an accreditation program is the easiest path to these first four components of the point-of-care testing culture of safety. Second, the authors recommend ongoing, standardized, statistically analyzable, internally or externally comparable monitoring of three specific steps in the point-of-care testing process. The performance of each of these tasks, in the authors' estimation, is worth the effort of monthly data collection and quarterly data analysis so that the monitoring indices can be trended over years. These three additional steps, important in the authors' view to patient safety, are (5) patient identification, (6) specimen collection, and (7) result reporting. At present, the combination of these seven elements makes up the best available culture of point-of-care testing safety, pending the modifications that development of an evidence base will introduce [77,78].

Operator training

Most point-of-care tests in acute care settings appear to be performed by nursing personnel or other personnel under nursing supervision. The authors of the 2003 report "Keeping Patients Safe: Transforming the Work Environment of Nurses" [79] point out that the education and training of nursing staff in technologies is an important error-reducing intervention. For point-of-care testing, the authors argue that the training curriculum should focus on eight moments of test operation when error can plausibly be prevented (Box 3). Trainees who demonstrate competence in the eight elements listed in Box 3 for specific point-of-care methods should be validated with unique personal identification codes and passwords as test operators, so that their actual performance can be supervised [78,80].
Box 3. Point-of-care curriculum
Device maintenance, including function checks
Patient identification procedures
Collection-site selection/preparation
Collection procedures
Test performance itself
Result reporting/recording
Critical review of quality control, patient results, and device maintenance records
Solution of common, foreseeable problems arising from testing
Program supervision

Training, evaluation of trainees, and maintenance of the list of method-specific trained operators are the first three tasks required of program supervisors in practice. The responsible program leader also needs to collect information from direct observation of patient test performance (easily documented by checklists), to review quality control and patient result reporting records (particularly those documenting critical value reporting), and to inspect instruments and review their maintenance records. In most nursing units, clinics, and office settings, carrying out the second, "on-site" trio of supervisory responsibilities entails the collaboration of a multidisciplinary team that collects, analyzes, and acts effectively on information about point-of-care testing quality.

Operator competence assessment

Experts argue that competence assessment is crucial to error prevention [81,82]. The CAP's Laboratory Accreditation Program Checklist recommends that this evaluative task be performed by combining training and supervision elements (Table 2) [83,84].

Proficiency testing

When supervision and competence assessment are applied in practice, they usually include external proficiency testing, either as an accrediting agency requirement for waived and PPM testing or as a regulatory requirement for moderate-complexity testing. In both contexts, a proficiency testing program must include the characteristics listed in Box 4 [68]. Proficiency testing provides independent measures that a point-of-care testing program can use to demonstrate its ability to produce accurate test results. The CLIA regulations use external proficiency testing results, in the
Table 2
Competence assessment

Checking
Routine test performance (eg, by structured observation)
Result recording/reporting (by record review)
Instrumentation maintenance/functionality (by direct observation)
Result transmission/transcription, quality control, preventive maintenance (by comparison of intermediate, ie, instrument or worksheet, results with those in the patient record and by review of maintenance records)

Testing
Review of previous proficiency testing results
Observation of proficiency testing events with subsequent correlation of results
Assessment of test performance with previously analyzed "unknown" (to the test operator) samples
Box 4. Requirements of proficiency testing performance (2004)

Must occur in the course of routine testing, with no deviation from the usual procedure, from start to finish
Must be performed by the same mix of test operators (eg, physicians, nurses, or office staff) who actually carry out routine testing
Must be performed without nonroutine discussion, comparison, or repeating of tests and their results
Must be performed with an 80% success rate
circumstances of moderately complex testing where the regulations require them, to place a floor of 80% proficiency testing success (for any specific analyte) as a qualification for offering this genre of testing at a specific CLIA-licensed site.

Operator training, program supervision, operator competence assessment, and proficiency testing are a series of measures that realize the standard model's objectives not only of test operator competence but also of procedure adherence. (This last objective encompasses nondeviation from manufacturer instructions, semiannual assay calibration, quality control [specifically, two levels on each day of operation], result recording, and supervisory review of documentation.) However, even when coupled with proficiency testing, these measures appear to fall short of the best available safety culture for point-of-care testing unless they are supplemented by a second set of activities. These activities approach pre- and postanalytic sources of error in an ongoing (continuous) and systematic (statistical) manner. They measure three steps in the point-of-care testing process: patient identification, specimen collection, and result reporting.

Patient identification

The authors note parenthetically that in their experience the first step in the testing process, test ordering, is more difficult to document in point-of-care testing than it is in any other laboratory process. After a point-of-care test has been ordered, the critical second step in the preanalytic phase of testing has proved to be patient identification [51]. At the time of this writing (2004), accurate patient identification is the first patient safety goal of the JCAHO [52]. The JCAHO recommends that two patient identifiers (neither being a room number or location designation) be used whenever a patient sample is obtained [52].
Patient name and assigned identification number, birth date, social security number, telephone number, home address, and unique bar code have all been suggested as candidate unique patient identifiers in various settings [52]. Although the patient identification band provides
a convenient (if not perfect) source of inpatient identification, other systematic identification procedures need to be developed in the outpatient settings where most point-of-care testing is done. An ongoing monitor, along the lines of the CAP Q-Tracks patient identification monitor, works as follows: (1) the presence of two unique patient identifiers is sought each time a test sample is collected and (2) defects in identification are documented, characterized, and corrected each time they are discovered. This monitor offers a proven way to improve local integration of patient identification into a patient safety culture [51].

Specimen acceptability

When an unexpected or, in retrospect, inconsistent point-of-care test result appears, experienced point-of-care test coordinators investigate first whether any defects in patient identification can be discovered. Second, they look for evidence of defective specimen quality [85]. This troubleshooting approach indicates the value of ongoing monitoring of specimen acceptability, with monitoring information aggregated to show variation in event reporting by local area (nursing unit, clinic, office). When increased rates of unacceptable specimens are detected at this level, supervisory personnel can subsequently "drill down" to individual specimen collectors or test operators to find the sources of variation. This approach has already worked well for subscribers to the CAP Q-Tracks specimen acceptability monitor [86]. This type of monitoring is a second proven means to improve patient safety by building better specimen quality into the culture.

Result reporting

A third ongoing monitor, last in the trio that the authors recommend, takes advantage of the widespread availability of computer programs that connect point-of-care instruments to electronic patient databases.
These information systems prevent transcription errors and make results more widely and more rapidly available, often in more easily comprehended forms. These systems may also encourage a culture of safety by giving supervisory personnel quicker access to defects in data integrity and completeness, which point-of-care supervisors can begin to examine by measuring the frequency and distribution of results that the data transfer programs cannot upload. These programs also document the history of critical value reporting, at least as it is recorded in the test device. The feedback loop that these programs facilitate can increase the integrity of point-of-care result reporting.

Other safety measures

The seven measures the authors have just described (operator training, program supervision, competence assessment, proficiency testing, and active
monitoring of patient identification, specimen acceptability, and report integrity) seem to them to be the basic practical applications of the standard model of the laboratory ethos to point-of-care testing. This list, however, does not exhaust opportunities to build a patient safety culture in point-of-care testing. Other opportunities may respond to the structural barriers to consistent test performance. These responses include developing more effective teaching materials to impart the laboratory ethos to nursing or clerical test operators. The authors have, in their own practice, adopted educational approaches developed by nonlaboratory personnel who "get it"; these personnel suggest teaching vocabulary and evaluation formats that effectively "pass it [the ethos] on" to other nurses or physician office staff [80]. These teaching aids can, for example, help nonlaboratorians better understand the practical implications of biases between point-of-care and standard laboratory testing methods. The responses also include innovative deployment of "supervisor extenders" who conduct structured reviews of records in dispersed testing sites, then e-mail the results of these reviews to point-of-care coordinators for comparative analysis to detect local variation.

A further set of opportunities revolves around general efforts to align laboratory and nursing understandings of quality care. The 2003 Board on Health Sciences Policy/Institute of Medicine report "Keeping Patients Safe: Transforming the Work Environment of Nurses" suggests a vocabulary in which to develop this mutual understanding. Multidisciplinary point-of-care testing committees provide a forum for this work. In particular, attention to the dangers induced by interruptions and distractions in point-of-care testing, as much as in medication management, needs to find its way into nursing practice [87].
Efficient record-keeping about point-of-care test operators should include items that are both easy to enter and convenient to retrieve and should allow for method specificity, for example, documenting color discrimination testing in operators being validated for tests requiring this ability. Maintaining such records of operator competence in electronic files can make supervision of point-of-care testing easier and testing safer [78].

"Making the right thing to do the easiest thing to do" is a slogan of human factors engineers, who try to reduce the risks that irreducible human variability introduces into complex processes [39,87]. Following this advice in point-of-care testing has been made much easier over the past decade by point-of-care testing device manufacturers, who have introduced engineering controls, such as "required fields" and "lock out" devices, to prevent patient testing without quality control. These engineering controls have increased the "robustness to error" of point-of-care methods [78]. Bar coding for patient identification has also shown promise in this context, although the promise has not yet been widely fulfilled [88,89]. Better patient identification strategies are especially needed in outpatient venues, where unequivocal patient identification is more difficult than it is in inpatient units.
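The "lock out" engineering control mentioned above can be sketched as a simple rule: refuse patient testing unless the operator is on the validated list and two levels of quality control have passed on the current day of operation. The sketch below is purely illustrative (it does not describe any vendor's actual firmware), and the class, operator codes, and dates are hypothetical:

```python
from datetime import date

# Illustrative sketch of a QC "lock-out" engineering control: patient
# testing is blocked until two levels of quality control have passed on
# the current day of operation and the operator ID is on the validated
# list. Not any real device's logic; names and IDs are hypothetical.

class PocDevice:
    def __init__(self, certified_operators):
        self.certified_operators = set(certified_operators)
        self.qc_passed = {}  # (date, qc level) -> True once that level passes

    def record_qc(self, level, in_range, on=None):
        """Record a QC event; only in-range results count toward unlocking."""
        on = on or date.today()
        if in_range:
            self.qc_passed[(on, level)] = True

    def may_run_patient_test(self, operator_id, on=None):
        on = on or date.today()
        if operator_id not in self.certified_operators:
            return False  # operator not validated for this method
        # Two levels of QC must have passed today (cf. Box 2).
        return all((on, level) in self.qc_passed for level in (1, 2))

device = PocDevice(certified_operators={"RN-0421"})
today = date(2004, 6, 1)
assert not device.may_run_patient_test("RN-0421", on=today)  # locked: no QC yet
device.record_qc(1, in_range=True, on=today)
device.record_qc(2, in_range=True, on=today)
assert device.may_run_patient_test("RN-0421", on=today)      # both levels passed
assert not device.may_run_patient_test("RN-9999", on=today)  # unknown operator
```

The design point is that the safe path (QC first, validated operator) is the only path the device allows, which is precisely what "making the right thing to do the easiest thing to do" asks of an engineering control.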
Discussion of these engineering control measures frequently leads to consideration of current real practice situations in which control routines need to be circumvented. Such exceptions need to be anticipated, recognized as inherently error-prone, monitored, and evaluated in the context of the clinical outcomes to which they contribute.

Adverse event reporting

Outcome review is, in general, both as difficult and as valuable in assessing the relation of errors to preventable adverse events in point-of-care testing as it is elsewhere in health care. Measures that make reporting of events in both categories simple, convenient, and free of punitive implications have proved to be very difficult to activate in most health care settings.

Summary

In the authors' view, the following four points compose the current state of the question of patient safety in point-of-care testing:

(1) The collision of the definitions used in this article with actual practice in point-of-care testing is evidence for the likelihood of error in this genre of clinical tests. Uncovering the latent conditions conducive to error is the objective of investigations of this likelihood.

(2) A modified Kost classification serves as a basis for determining where latent conditions appear in the point-of-care testing process and as a framework in which to recognize and classify these errors.

(3) Errors in point-of-care testing are likely to arise most frequently in the steps of patient identification, specimen collection, and result reporting.

(4) In the absence of an adequate evidence base, the authors recommend, as measures to build a culture of patient safety in point-of-care testing, the components of the standard model of safe laboratory testing. This model inculcates the laboratory ethos of test operator competence, procedure adherence, quality control, and result integrity.
These objectives can be achieved by integrating operator training, program supervision, competence assessment, and proficiency demonstration into an institution's or practice's point-of-care testing program. Based on the authors' hypothesis that medical errors in point-of-care testing that lead to preventable adverse events most often arise in three testing processes (patient identification, specimen collection, and result reporting), they recommend ongoing monitors of these critical steps. If they are wrong, such monitoring will
disprove their hypothesis; if they are right, it will measurably reduce medical error in point-of-care testing.

References

[1] Price CP. Point-of-care testing: impact on medical outcomes. Clin Lab Med 2001;21:285–303.
[2] Kohn LT, Corrigan JM, Donaldson MS, editors. To err is human: building a safer health system. Washington, DC: National Academy Press; 1999.
[3] Price CP, Hicks JM. Point of care testing: an overview. In: Price CP, Hicks JM, editors. Point of care testing. Washington, DC: AACC Press; 1999. p. 3–4.
[4] Demers LM. Regulatory issues in point-of-care testing. In: Price CP, Hicks JM, editors. Point of care testing. Washington, DC: AACC Press; 1999. p. 252–6.
[5] Szabo J. Quality concerns uncovered at laboratories doing waived tests. MLO Med Lab Obs 2001;33:32–4.
[6] Terry LM. Point-of-care testing and recognizing and preventing errors. Br J Nurs 2002;11:1036–9.
[7] Bonini P, Plebani M, Ceriotti F, Rubboli F. Errors in laboratory medicine. Clin Chem 2002;48:691–8.
[8] Ioannidis JPA, Lau J. Evidence on interventions to reduce medical errors. J Gen Intern Med 2001;16:325–34.
[9] Bion JF, Heffner JE. Challenges in the care of the acutely ill. Lancet 2004;363:970–7.
[10] Pronovost PJ, Nolan T, Zeger S, Miller M, Rabin H. How can clinicians measure safety and quality in acute care? Lancet 2004;363:1061–7.
[11] Kost GJ. Preventing problems, medical errors, and biohazards in point-of-care testing. Point of Care 2003;2:78–88.
[12] Kost GJ. Preventing medical errors in point-of-care testing: security, validation, safeguards, connectivity. Arch Pathol Lab Med 2001;125:1307–15.
[13] Vincent C. Understanding and responding to adverse events. N Engl J Med 2003;348:1051–6.
[14] Bullock DG. Quality control and quality assurance. In: Price CP, Hicks JM, editors. Point of care testing. Washington, DC: AACC Press; 1999. p. 157–73.
[15] Humbertson SK. Management of a point-of-care program: organization, quality assurance, and data management. Clin Lab Med 2001;21:255–68.
[16] Jacobs E, Hinson KA, Tolnai J, Simson E. Implementation, management, and continuous quality improvement of point-of-care testing in an academic healthcare setting. Clin Chim Acta 2001;307:49–59.
[17] van den Berghe G, Wouters P, Weekers F, et al. Intensive insulin therapy in critically ill patients. N Engl J Med 2001;345:1359–67.
[18] Coursin DB, Connery LE, Ketzler JT. Perioperative diabetic and hyperglycemic management issues. Crit Care Med 2004;32(Suppl):S116–25.
[19] Finney SJ, Zekveld C, Elia A, Evans TW. Glucose control and mortality in critically ill patients. JAMA 2003;290:2041–7.
[20] Krinsley JS. Association between hyperglycemia and increased hospital mortality in a heterogeneous population of critically ill patients. Mayo Clin Proc 2003;78:1471–8.
[21] Herout PM, Erstad BL. Medication errors involving continuously infused medications in a surgical intensive care unit. Crit Care Med 2004;32:428–32.
[22] Clement S, Braithwaite SS, Magee MF, Ahmann A, et al. Management of diabetes and hyperglycemia in hospitals. Diabetes Care 2004;27:553–91.
[23] Yuoh C, Elghetany MT, Peterson JR, Mohammad A, Okorodudu AO. Accuracy and precision of point-of-care testing for glucose and prothrombin time at the critical care units. Clin Chim Acta 2001;307:119–23.
[24] Johi RR, Cross MH, Hansbro SD. Near-patient testing for coagulopathy after cardiac surgery. Br J Anaesth 2003;90:499–501.
[25] Poller L, Keown M, Chauhan N, van den Besselaar AMHP, Tripodi A, Shiach C, et al; European Concerted Action on Anticoagulation (ECAA). Reliability of international normalized ratios (INR) from CoaguChek Mini and TAS PT-NC whole blood point-of-care test monitor systems. BMJ 2003;327:30–2.
[26] Tripodi A, Chantarangkul V, Mannucci PM. Near-patient testing devices to monitor oral anticoagulant therapy. Br J Haematol 2001;113:847–52.
[27] Cachia PG, McGregor E, Adlakha S, Davey P, Goudie BM. Accuracy and precision of the TAS analyzer for near-patient INR testing by non-pathology staff in the community. J Clin Pathol 1998;51:68–72.
[28] Hortin GL. Handbook of bedside glucose testing. Washington, DC: AACC Press; 1998.
[29] Van Cott EM. Coagulation point-of-care testing. Clin Lab Med 2001;21:337–50.
[30] Lee-Lewandrowski E, Lewandrowski K. Regulatory compliance for point-of-care testing: a perspective from the United States (circa 2000). Clin Lab Med 2001;21:241–51.
[31] Kiechle FL, Gauss I. Provider-performed microscopy. Clin Lab Med 2001;21:375–87.
[32] Cox CJ. Acute care testing: blood gases and electrolytes at the point-of-care. Clin Lab Med 2001;21:321–35.
[33] Samama CM, Ozier Y. Near-patient testing of haemostasis in the operating theatre: an approach to appropriate use of blood in surgery. Vox Sang 2003;84:251–5.
[34] Giuliano KK, Grant ME. Blood analysis at the point-of-care: issues in application for use in critically ill patients. AACN Clin Issues 2002;13:204–20.
[35] Centers for Disease Control and Prevention, Division of Laboratory Systems (DLS). Moderate complexity testing overview. Available at: www.phppo.cdc.gov/CLIA/moderate.aspx. Accessed July 2004.
[36] Department of Health and Human Services, Centers for Disease Control and Prevention. Notice of specific list for categorization of laboratory test systems, assays, and examinations by complexity. Fed Regist 2000;65:25796–814.
[37] Department of Health and Human Services, Centers for Medicare and Medicaid Services, Centers for Disease Control and Prevention. Medicare, Medicaid, and CLIA programs: laboratory requirements relating to quality systems and certain personnel qualifications; final rule: paragraph 493.1256, standard: control procedures. Fed Regist 2003;68:3707–8.
[38] Boulier KM. High complexity test. The hitchhiker's guide to point-of-care testing. Washington, DC: AACC Press; 1998.
[39] Reason J. Managing the risks of organizational accidents. Burlington (VT): Ashgate; 2003.
[40] Westgard JO, Wiebe DA. Cholesterol operational process specifications for assuring the quality required by CLIA proficiency testing. Clin Chem 1991;37:1938–44.
[41] Bennett ST, Eckfeldt JH, Belcher JD, Connelly DP. Certification of cholesterol measurements by the National Reference Method Laboratory Network with routine clinical specimens: effects of network laboratory bias and imprecision. Clin Chem 1992;38:651–7.
[42] Ross JW, Myers GL, Gilmore BF, Cooper GR, Naito HK, Eckfeldt J. Matrix effects and the accuracy of cholesterol analysis. Arch Pathol Lab Med 1993;117:345–51.
[43] Wiebe DA, Westgard JO. Cholesterol: a model system to relate medical needs with analytic performance. Clin Chem 1993;39:1504–12.
[44] Price CP. Delivering clinical outcomes. Point of Care 2003;2:151–7.
[45] Sazama K, Haugh MG. Stat: the laboratory's role. Chicago: American Society for Clinical Pathology Press; 1986.
[46] Hilborne L, Lee H, Cathcart P. STAT testing: a guideline for meeting clinician turnaround time requirements (practice parameter). Am J Clin Pathol 1996;105:671–5.
[47] Plebani M, Carraro P. Mistakes in a stat laboratory: types and frequency. Clin Chem 1997;43:1348–51.
[48] Witte DL, Van Ness SA, Angstadt DS, Pennell BJ. Errors, mistakes, blunders, outliers, or unacceptable results: how many? Clin Chem 1997;43:1352–6.
[49] Astion ML, Shojania KG, Hamill TR, Kim S, Ng VL. Classifying laboratory incident reports to identify problems that jeopardize patient safety. Am J Clin Pathol 2003;120:18–26.
[50] Renner SW, Howanitz PJ, Bachner P. Wristband identification error reporting in 712 hospitals: a College of American Pathologists' Q-Probes study of quality issues in transfusion practice. Arch Pathol Lab Med 1993;117:573–7.
[51] Howanitz PJ, Renner SW, Walsh MK. Continuous wristband monitoring over two years decreased identification errors: a College of American Pathologists' Q-Tracks study. Arch Pathol Lab Med 2002;126:809–15.
[52] Joint Commission on Accreditation of Healthcare Organizations. 2004 National patient safety goals: 1) improve the accuracy of patient identification. Available at: www.jcaho.org/accredited+organizations/patient+safety/04+npsg. Accessed July 2004.
[53] Salka K, Kiechle FL. Connectivity for point-of-care glucose testing reduces error and increases compliance. Point of Care 2003;2:114–8.
[54] Skeie S, Thue G, Nerhus K, Sandberg S. Instruments for self-monitoring of blood glucose: comparisons of testing quality achieved by patients and a technician. Clin Chem 2002;48:994–1003.
[55] Du Plessis M, Ubbink JB, Vermaak WJH. Analytical quality of near-patient blood cholesterol and glucose determinations. Clin Chem 2000;46:1085–90.
[56] Johnson RN, Baker JR. Error detection and measurement in glucose monitors. Clin Chim Acta 2001;307:61–7.
[57] Boyd JC, Bruns DE. Quality specifications for glucose meters: assessment by simulation modeling of errors in insulin dose. Clin Chem 2001;47:209–14.
[58] Jones BA, Calam RR, Howanitz PJ. Chemistry specimen acceptability: a College of American Pathologists' Q-Probes study of 455 laboratories. Arch Pathol Lab Med 1997;121:19–26.
[59] Jones BA, Meier FA, Howanitz PJ. Complete blood count specimen acceptability: a College of American Pathologists' Q-Probes study of 703 laboratories. Arch Pathol Lab Med 1995;119:203–8.
[60] Lock JP, Szuts EZ, Malomok J, Anagnostopoulos A. Whole-blood glucose testing at alternate sites: glucose values and hematocrit of capillary blood drawn from fingertip and forearm. Diabetes Care 2002;25:337–41.
[61] Kilpatrick ES, Rumley AG, Rumley CW. The effect of haemolysis on blood glucose meter measurement. Diabet Med 1995;12:341–3.
[62] Hawkins R. Measurement of whole-blood potassium—is it clinically safe? Clin Chem 2003;49:105–6.
[63] Witte DL, Van Ness SA. Frequency of unacceptable results in point-of-care testing. Arch Pathol Lab Med 1999;123:761.
[64] Kemme MJ, Faaij RA, Schoemaker RC, Kluft C, Meijer P, Cohen AF, et al. Disagreement between bedside and laboratory activated partial thromboplastin time and international normalized ratio for various novel anticoagulants. Blood Coagul Fibrinolysis 2001;12:583–91.
[65] Hirsch J, Wendt T, Kahly P, Schaffartzik W. Point-of-care testing apparatus: measurement of coagulation. Anaesthesia 2001;56:760–3.
[66] Jain R, Myers TF, Kahn SE, Zeller WP. How accurate is glucose analysis in the presence of multiple interfering substances in the neonate? J Clin Lab Anal 1996;10:13–6.
[67] Miller WG. Specimen materials, target values, and commutability for external quality assessment (proficiency testing) schemes. Clin Chim Acta 2003;327:25–7.
[68] Shahangian S. Proficiency testing in laboratory medicine: uses and limitations. Arch Pathol Lab Med 1998;122:15–30.
[69] Jones BA, Bachner P, Howanitz PJ. Bedside glucose monitoring: a College of American Pathologists Q-Probes study of the program characteristics and performance in 605 institutions. Arch Pathol Lab Med 1993;117:1080–7.
[70] Jones BA. Testing at the patient's bedside. Clin Lab Med 1994;14:473–91.
[71] Emancipator K. Critical values: ASCP practice parameter. Am J Clin Pathol 1997;108:247–53.
[72] Blick KE. The essential role of information management in point-of-care/critical care testing. Clin Chim Acta 2001;307:159–68.
[73] Taylor DM, Magness DJ, Held MS. Improving a point-of-care testing glucose program with connectivity and informatics. Point of Care 2003;2:106–13.
[74] Peet AC, Kennedy DM, Hocking MD, Ewer AK. Near-patient testing of blood glucose using the Bayer Rapidlab 860 analyser in a regional neonatal unit. Ann Clin Biochem 2002;39:502–3.
[75] Sirkin A, Jalloh T, Lee L. Selecting an accurate point-of-care testing system: clinical and technical issues and implications in neonatal blood glucose monitoring. J Spec Pediatr Nurs 2002;7:104–12.
[76] Newman JD, Ramsden CA, Balazs NDH. Monitoring neonatal hypoglycemia with the Accu-Chek Advantage II glucose meter: the cautionary tale of galactosemia. Clin Chem 2002;48:2071.
[77] Wilcox RA, Whitham EM. Reduction of medical error at the point-of-care using electronic information delivery. Intern Med J 2003;33:537–40.
[78] Page A, editor. Keeping patients safe: transforming the work environment of nurses. Washington, DC: National Academy Press; 2003.
[79] Miller K, Miller N. Benefits of a joint nursing and laboratory point-of-care program: nursing and laboratory working together. Crit Care Nurs Q 2001;24:15–20.
[80] Hoff T, Jameson L, Hannan E, Flint E. A review of the literature examining linkages between organizational factors, medical errors and patient safety. Med Care Res Rev 2004;61:3–37.
[81] Croskerry P, Wears RL, Binder LS. Setting the educational agenda and curriculum for error prevention in emergency medicine. Acad Emerg Med 2000;7:1194–200.
[82] College of American Pathologists Commission on Laboratory Accreditation. Laboratory accreditation program point-of-care checklist. Northfield (IL): College of American Pathologists; 2003.
[83] College of American Pathologists Commission on Laboratory Accreditation. Most frequent deficiencies—point-of-care testing checklist 2002–2003. Available at: www.cap.org. Accessed July 2004.
[84] Kilgore ML, Steindel SJ, Smith JA. Continuous quality improvement for point-of-care testing using background monitoring of duplicate specimens. Arch Pathol Lab Med 1999;123:824–8.
[85] Zarbo RJ, Jones BA, Friedberg RC, Valenstein PN, Renner SW, Schifman RB, et al. Q-Tracks: a College of American Pathologists program of continuous laboratory monitoring and longitudinal tracking. Arch Pathol Lab Med 2002;126:1036–44.
[86] Vande Voorde KM, France AC. Proactive error prevention in the intensive care unit. Crit Care Nurs Clin North Am 2002;14:347–58.
[87] Reason J. Human error. Cambridge (UK): Cambridge University Press; 1990.
[88] Douglas J, Larrabee S. Bring barcoding to the bedside. Nurs Manage 2003;34:36–40.
[89] Anonymous. FDA proposes new rules that would require barcoding and new reporting procedures. Healthc Leadersh Manag Rep 2003;11:12–3.