Supervision during automated perimetry

Since the greatest learning increment is evident between the first and second tests, though it may continue subsequently, we felt examination of the testing order over the first two tests would detect any influence of learning effect on our results. No influence was evident, though a learning effect was present. Finally, we included in our analysis all objective data elements generated during our testing that might be relevant to reliability. In an environment where the costs of performing visual field testing are irrelevant, it is probably safer to supervise all visual fields. But the take-home message of our paper is that in fully cooperative, alert, younger patients, continuous supervision may not be a cost-effective strategy; if you have doubts about the patient, you'd be wise to supervise.

ROSITA E. VAN COEVORDEN, MD
RICHARD P. MILLS, MD, MPH
LAN WANG, MD
DEREK C. STANFORD
Lexington, Kentucky

Dear Editor:

In their study, Van Coevorden et al assessed the effect of supervision during automated perimetry and the patient characteristics predictive of the need for supervision. The rationale for their study was that "There are no data on whether patients should be left alone during testing, whether unsupervised test results would be different in predictable ways, or whether there are patient characteristics that would predict when supervision is still necessary." We were rather surprised at this statement, because we had published a similar study only 6 years ago in this Journal, entitled "Effect of Intermittent versus Continuous Patient Monitoring on Reliability Indices during Automated Perimetry." It is possible that the authors did not identify our article when they performed a literature search of relevant articles.
Out of curiosity, we performed a MEDLINE search combining the terms "perimetry" and "technician or supervision," which returned 11 articles, one of which was ours.1 Although the study design and analysis of Van Coevorden and colleagues have similarities to those of Johnson et al, there are some elemental differences. Johnson and colleagues randomly divided their 156 patients into two groups: 79 patients who were continuously monitored by technicians and 77 patients who were intermittently monitored by technicians. Both groups performed a supervised 30-2 test, which was stopped after 1.5 minutes. The number of errors in the reliability indices (fixation losses, false-positive errors, and false-negative errors) was recorded for this 1.5-minute practice test. The patients then performed a new, permanently stored 30-2 test. During the permanent test, both groups were monitored for the first 1.5 minutes of visual field testing; this monitoring was maintained continuously for the continuously monitored patients but was performed intermittently thereafter for the intermittently monitored patients. Van Coevorden and colleagues tested each of their 200 patients twice, with 15 minutes of rest between tests: once under continuous supervision and once under no supervision.

Both studies found no significant differences in fixation losses, false-positive errors, false-negative errors, short-term fluctuation, mean defect, or pattern standard deviation between continuously and intermittently monitored patients (Johnson et al) or between continuously monitored and unsupervised patients (Van Coevorden et al). Ten (12.7%) of the continuously monitored patients and 9 (11.7%) of the intermittently monitored patients in the study by Johnson and colleagues had unreliable visual fields. Van Coevorden et al found that 19 (9.5%) of the continuously monitored patients and 31 (15.5%) of the unsupervised patients had unreliable visual fields. In neither study was there a difference in the proportion of unreliable visual fields after Bonferroni adjustment for multiple statistical tests.

Van Coevorden et al concluded that patients with an educational level below grade 12 and those with a prior test with false-positive errors may need supervision, whereas Johnson et al concluded differently. Johnson and colleagues indicated that continuous monitoring of patients with potentially poor test performance may not be the answer, because even with continuous monitoring, more than 10% of patients had poor reliability parameters. The heuristic value of these studies is that regardless of whether patients receive no supervision, intermittent supervision, or continuous supervision, approximately 10% will have poor reliability parameters. Poor performers can be identified within the first 1.5 minutes of visual field testing. How to improve visual field performance once a poor performer is identified was a problem documented in the publication by Johnson and colleagues 6 years ago and continues to be a challenge.

LENWORTH N. JOHNSON, MD
Columbia, Missouri
JOSEPH W. SASSANI, MD
ALI AMINLARI, MD
Hershey, Pennsylvania

References
1. Johnson LN, Aminlari A, Sassani JW. Effect of intermittent versus continuous patient monitoring on reliability indices during automated perimetry. Ophthalmology 1993;100:76–84.
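As an aside, the proportion comparisons cited above can be checked with a standard two-proportion z-test. This is a sketch only: the published analyses may have used different tests, and the Bonferroni-adjusted threshold would be stricter still.

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """Pooled two-proportion z-test; returns z and a two-sided p value."""
    pool = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(pool * (1 - pool) * (1 / n1 + 1 / n2))
    z = (x1 / n1 - x2 / n2) / se
    p = math.erfc(abs(z) / math.sqrt(2))              # two-sided p value
    return z, p

# Johnson et al: 10/79 continuous vs. 9/77 intermittent
z_j, p_j = two_prop_z(10, 79, 9, 77)
# Van Coevorden et al: 19/200 supervised vs. 31/200 unsupervised
z_v, p_v = two_prop_z(19, 200, 31, 200)
```

Under this test, neither comparison reaches p < 0.05 even before Bonferroni adjustment (roughly p ≈ 0.85 and p ≈ 0.07, respectively), consistent with both letters' conclusion of no significant difference in unreliable-field proportions.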

Author's reply

Dear Editor:

Our profound apologies are due to Johnson, Sassani, and Aminlari for our unintended omission of their prior study. Two of us had performed MEDLINE searches and inexplicably failed to access their article. Had we done so, our discussion would have included the following:

1. Johnson et al1 studied intermittent vs. continuous technician monitoring, the intermittent group being visited an average of four times during the test after a monitored 90-second practice test. Because a busy technician would find it difficult to return intermittently to the visual field testing room every 3 to 4 minutes, we studied no monitoring vs. continuous monitoring after the first 30 to 60 seconds of the full-threshold test.

2. Johnson et al randomly assigned patients to either continuous or intermittent monitoring, whereas we used a paired-sample design, in which each tested eye underwent both supervised and unsupervised tests in random order. This allowed us to use paired-sample statistics, with their greater power to detect small differences.

3. Not surprisingly, because of study design and patient numbers, Johnson et al found no differences between supervision types. We found a difference in reliability
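The extra power of a paired design (point 2) can be sketched with McNemar's test, which conditions on the discordant pairs. The marginal counts (19 and 31 unreliable fields of 200) come from the letters above, but the split into concordant and discordant cells below is purely hypothetical, invented for illustration only.

```python
import math

def mcnemar(b, c):
    """McNemar chi-square (1 df) for paired binary outcomes.
    b = pairs unreliable only when supervised,
    c = pairs unreliable only when unsupervised."""
    stat = (b - c) ** 2 / (b + c)
    p = math.erfc(math.sqrt(stat / 2))    # chi-square (1 df) upper tail
    return stat, p

# Hypothetical split of the 19 vs. 31 unreliable fields (assumed):
# 12 eyes unreliable under both conditions, 7 unreliable only
# supervised, 19 unreliable only unsupervised.
stat, p = mcnemar(7, 19)
```

With this assumed split, the paired test yields p ≈ 0.02, whereas an unpaired comparison of the same 9.5% vs. 15.5% margins falls short of significance; this is the sense in which paired-sample statistics have greater power to detect small differences.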
