LETTERS TO THE EDITOR
Assessing Agreement in Cardiac Output Monitoring Validation Studies

To the Editor:

We have read the meta-analysis by Mayer et al1 with interest. They are to be congratulated for bringing together a number of heterogeneous studies into a single paper evaluating the accuracy and precision of the FloTrac/Vigileo System (Edwards Lifesciences, Irvine, CA) for monitoring cardiac output. The article addresses a very important issue: how to take into consideration the results of several validation studies assessing the same device. Determining the precision of a device is not straightforward, and it is made more difficult by the confusing and sometimes conflicting use of terminology. Precision is traditionally determined in these types of studies from the Bland-Altman plot and quoted as twice the standard deviation (SD) of the differences between the 2 measurements. The range of this precision around the mean difference (bias) is known as the limits of agreement. Unfortunately, some studies quote precision as 1 SD and others as 2 SD, a discrepancy that makes the generalization and comparison of results difficult. The authors did not address this inconsistency: they report the Manecke and Auger study2 as 1 SD and the Mayer et al study1 as 2 SD. Reporting the limits of agreement, rather than the varying definitions of precision, would have made the article easier to understand. The authors are to be praised for explaining that the interpretation of the limits of agreement from these studies is limited at best unless the precision of the reference technique is itself tested and described. Only when this is understood can the precision of the assessed device be put into any context.
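As a minimal sketch of the conventions discussed above (with illustrative numbers only, not data from any of the cited studies), the Bland-Altman statistics and the quadrature argument for the reference technique's precision might be written as:

```python
import math
import statistics

def bland_altman(reference, test):
    """Bias, 1 SD of the differences, precision (as 2 SD), and limits of
    agreement for paired cardiac output readings (L/min)."""
    diffs = [t - r for r, t in zip(reference, test)]
    bias = statistics.mean(diffs)          # mean difference between methods
    sd = statistics.stdev(diffs)           # some studies call this "precision"
    precision = 2 * sd                     # others use 2 SD, as discussed above
    limits = (bias - precision, bias + precision)  # limits of agreement
    return bias, sd, precision, limits

def device_precision(combined_precision, reference_precision):
    """Back out the tested device's own precision from the comparison,
    assuming the two methods' errors are independent and therefore add
    in quadrature -- the reasoning invoked when the reference technique's
    precision is reported alongside the limits of agreement."""
    return math.sqrt(combined_precision**2 - reference_precision**2)
```

For example, `bland_altman([4.0, 5.0, 6.0, 5.5, 4.5], [4.2, 5.1, 5.8, 5.9, 4.6])` yields the bias and limits of agreement for 5 hypothetical paired readings, and `device_precision(1.0, 0.6)` illustrates that a comparison precision of 1.0 L/min against a reference with a precision of 0.6 L/min implies a device precision of about 0.8 L/min.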
This has previously been described by ourselves3,4 and, in a similar vein, by others.5,6 Mayer et al1 correctly note that the precision of the reference technique is very rarely (if ever) reported, which makes meta-analysis more difficult. We agree entirely. Interestingly, this challenge was already taken up by Jansen and Van den Berg in a book chapter published in 2005.6 In that publication, the authors proposed weighting the percentage error according to different assumed levels of precision of the reference technique. In this way, the weighted data are seen within a range of possible values, depending on the possible precision of the reference technique. We believe that this would have been a more robust approach. This article takes us a step forward in our ability to understand the accuracy and precision of new cardiac output monitoring techniques. The difficulty the authors had in weighting the percentage errors from different studies further highlights the need to perform future studies in a more controlled fashion. In this sense, we cannot stress enough how much a consensus on how to validate new technologies is needed.7

Maurizio Cecconi, MD, MD(UK)*
Christopher Hofer, MD, DEAA†
Giorgio Della Rocca, MD‡
Robert Michael Grounds, MD, FRCA*
Andrew Rhodes, FRCP, FRCA*
*Department of General Intensive Care, St George’s Hospital NHS Trust, London, UK
†Institute of Anaesthesiology and Intensive Care Medicine, Triemli City Hospital, Zurich, Switzerland
‡Department of Anesthesia and Intensive Care Medicine, S. Maria della Misericordia University Hospital, University of Udine, Udine, Italy

REFERENCES

1. Mayer J, Boldt J, Poland R, et al: Continuous arterial pressure waveform-based cardiac output using the FloTrac/Vigileo: A review and meta-analysis. J Cardiothorac Vasc Anesth 23:401-406, 2009
2. Manecke GR, Auger WR: Cardiac output determination from the arterial pressure wave: Clinical testing of a novel algorithm that does not require calibration. J Cardiothorac Vasc Anesth 21:3-7, 2007
3. Cecconi M, Grounds M, Rhodes A: Methodologies for assessing agreement between two methods of clinical measurement: Are we as good as we think we are? Curr Opin Crit Care 13:294-296, 2007
4. Cecconi M, Rhodes A, Poloniecki J, et al: Bench-to-bedside review: The importance of the precision of the reference technique in method comparison studies with specific reference to the measurement of cardiac output. Crit Care 13:201, 2009
5. Critchley LA, Critchley JA: A meta-analysis of studies using bias and precision statistics to compare cardiac output measurement techniques. J Clin Monit Comput 15:85-91, 1999
6. Jansen JR, Van den Berg PCM: Cardiac output by thermodilution and arterial pulse contour techniques, in Pinsky MR, Payen D (eds): Functional Hemodynamic Monitoring. Berlin, Springer-Verlag, 2005
7. Cecconi M, Rhodes A: Validation of continuous cardiac output technologies: Consensus still awaited. Crit Care 13:159, 2009

doi:10.1053/j.jvca.2009.11.008
Response

To the Editor:

We would like to thank Dr Cecconi and his colleagues for their thoughtful, insightful comments. We agree wholeheartedly that consensus and consistency in terminology and study design must be established for validation studies. Furthermore, we regret any confusion we might have contributed by our reporting of the “precision” in our previous articles. Indeed, in the Manecke study, “precision” refers to 1 standard deviation from the mean, whereas in the Mayer study “precision” refers to 2 standard deviations from the mean. We agree we should have clarified this. In our meta-analysis, “precision” referred to the absolute value of 1 standard deviation from the mean, and “limits of agreement” referred to the actual values of 2 standard deviations from the mean. In reviewing the literature, we found these definitions to be the ones most commonly used. We fully recognized the variability of reporting between articles, and, when necessary, calculated the values according to these defi-