
Pathology (April 2010) 42(3), pp. 205–206

EDITORIAL

Objectivity in image analysis

ANTHONY S-Y. LEONG

Hunter Area Pathology Service, John Hunter Hospital, Newcastle, NSW, Australia

In this issue of Pathology, Tadrous1 embarks on the difficult task of convincing us that digital image analysis performed by computers is not as objective as we have been led to believe.

Over the past 150 or so years pathologists have accumulated sufficient data on the morphological changes of a large range of diseases to allow us to render diagnoses based on such observations. Diagnosis formulation is knowledge-based and is applied in combination with visual recognition, the two processes not being mutually exclusive. The pathologist must firstly have knowledge of the entities that can occur at the specific anatomical site, with an appreciation of their relative frequencies, as common things occur commonly. In addition, there must be cognisance of the spectrum of histological features manifested by each pathological entity and the ability to recognise them, a skill commonly known as pattern recognition. While a specific diagnostic entity may display a number of histological changes, they may not all be present in every case, and the minimum or most important set of features that permits a definite diagnosis is very much determined by published data, the experience of the pathologist, and the clinical circumstances of the patient. The diagnostic process is thus extremely complicated. Because there is currently limited understanding of how the human brain processes all this information to formulate a diagnosis, it is impossible to teach or learn the diagnostic process in the same manner as the brain performs it. Furthermore, there is also the very useful intuitive or ‘gut’ feeling expressed by some experienced pathologists that defies definition. Undoubtedly, the unravelling of the brain processes involved in formulating a histological diagnosis from tissue sections will greatly contribute to the design and building of a computer for this function. Thankfully, for those of us in the profession of making histological diagnoses, this development is still a few years away!

Despite these shortfalls in the formulation of histological diagnoses, there is still good reproducibility between pathologists for the recognition of many common diagnostic entities; however, both inter-observer and intra-observer reproducibility are acknowledged to be poor in several areas, notably the assessment of endometrial hyperplasia, atypia in inflammatory bowel disease, atypia of breast proliferations, atypia in cervical lesions, atypia of melanocytic lesions and, in particular, the grading of tumours.2–7 The identification of uncommon diagnoses, or of cases that present under unusual clinical circumstances, is another problem area.
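The reproducibility studies cited above commonly summarise inter-observer agreement with statistics such as Cohen's kappa. As a purely illustrative aside, not part of the editorial, the following minimal Python sketch computes Cohen's kappa for two hypothetical observers grading the same ten tumours; the grades are invented for illustration.

```python
# Minimal sketch of Cohen's kappa, the chance-corrected agreement statistic
# commonly reported in inter-observer reproducibility studies. The grades
# below are hypothetical, not taken from any cited study.
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same cases."""
    a = np.asarray(rater_a)
    b = np.asarray(rater_b)
    labels = np.union1d(a, b)
    # Confusion matrix: grades by rater A (rows) versus rater B (columns).
    table = np.array(
        [[np.logical_and(a == i, b == j).sum() for j in labels] for i in labels],
        dtype=float,
    )
    n = table.sum()
    observed = np.trace(table) / n
    expected = (table.sum(axis=1) * table.sum(axis=0)).sum() / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical grades (1-3) assigned to ten tumours by two observers.
grades_a = [1, 2, 2, 3, 1, 2, 3, 3, 1, 2]
grades_b = [1, 2, 3, 3, 1, 1, 3, 2, 1, 2]
print(f"Cohen's kappa: {cohens_kappa(grades_a, grades_b):.2f}")
```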

Recognition of this fallibility has led us to embrace the concept of image analysis because of its promise of objectivity and reproducibility. In his paper, Tadrous1 firstly establishes the assertions of objectivity by quoting from the authors of publications on image analysis. He then proceeds to show that all seven of the methodological steps employed to perform digital image analysis are open to subjective decisions and choices. For example, in segmentation, several mathematical approaches are available for selection, and thresholding, another important aspect of image analysis, is achieved through the subjective selection of grey levels to separate objects from background (a choice illustrated in the sketch below). Tadrous quotes examples where two ‘automated’ commercial systems of image analysis for measuring oestrogen receptor staining produced poorer inter-observer agreement than manual scoring by four pathologists,8 and where two out of three automated computer algorithms gave erroneous results when attempting to identify the boundaries of epithelial cell nuclei in cervical cytology.9

Immunohistology is well established as an adjunct to morphological diagnosis and has contributed significantly to its objectivity. The technique was developed primarily as a qualitative tool, but its use to identify prognostic markers and, more recently, predictive markers, the latter with a major role in the determination of humanised antibody treatment, has placed increasing pressure on pathologists to report the results of these stains in quantitative terms. Attempts to do so have produced semiquantitative assessments of stain intensity and of the extent or percentage of positively staining cells. These two variables are combined to produce a seemingly objective number, and such values are sometimes compared with external known positive control tissues. While it is comforting to think that one is comparing and somehow titrating the stain in the test sample against that of a known control, it needs to be emphasised that the two tissues have been subjected to different durations of fixation and methods of tissue processing, variables that have a significant impact on antigen preservation.10 Comparisons of results between laboratories are also invalid, as variables in antigen retrieval methodology, including the molarity, pH and composition of the retrieval solution,11 and the duration and temperature of the retrieval process, all pivotal to the intensity and extent of staining,12 can differ significantly between laboratories.
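To make the point about thresholding concrete, the minimal sketch below, which is not from Tadrous's paper, assumes Python with numpy and scikit-image and uses a synthetic image in place of a stained field. It shows how an operator-chosen grey-level cutoff and an automatic Otsu threshold, both defensible choices, return different ‘positive’ fractions from the same image.

```python
# Sketch: the 'positive area' measured from one image depends on which
# threshold is chosen. The image is synthetic; numpy and scikit-image are assumed.
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(0)

# Synthetic grey-level image: dim background with a few brighter 'stained' blobs.
image = rng.normal(loc=60, scale=10, size=(256, 256))
yy, xx = np.mgrid[0:256, 0:256]
for cy, cx in [(64, 64), (128, 200), (200, 90)]:
    blob = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 15 ** 2))
    image += 120 * blob
image = np.clip(image, 0, 255)

# Choice 1: an operator-selected grey level (a subjective decision).
manual_cutoff = 100
manual_fraction = (image > manual_cutoff).mean()

# Choice 2: Otsu's automatic threshold (itself one of several possible algorithms).
otsu_cutoff = threshold_otsu(image)
otsu_fraction = (image > otsu_cutoff).mean()

print(f"manual cutoff {manual_cutoff}: positive fraction {manual_fraction:.3f}")
print(f"Otsu cutoff {otsu_cutoff:.1f}: positive fraction {otsu_fraction:.3f}")
```

Neither result is wrong; the point is simply that the number reported depends on a design decision made before the ‘objective’ measurement begins.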

Interpretations also differ between observers, as do cut-off values, which are often set arbitrarily. Importantly, it has also recently been shown that variations in tissue section thickness, and unevenness within the same tissue section, can produce variations in staining intensity.13 All these variables clearly affect objectivity and reproducibility.

Tadrous1 points out that the colour separation technique employed in image analysis of immunostains is also subject to choice, based on informed opinion and the developer’s perception of what is ‘best’ (the sketch below illustrates one such choice). He provides the example of ‘non-subjective’ analysis of ploidy with Feulgen stains, where different results were obtained because one study employed tissue sections and another studied imprint cytology. Direct fluorescent stains produce different results from amplified peroxidase anti-peroxidase complex methods, and polymers and chromogens also influence staining results. Furthermore, the gamma characteristic of the camera and the shutter speed are reported to produce different values. Clearly, choices exist at every step of image analysis, and numerous variables influence the consistency and reproducibility of results between specimens. Just because the analytical process is performed by a ‘black box’ computer does not make it objective, as the numerous subjective influences operative in the design of the analytical process cannot be eliminated. Such automated analytical processes are certainly useful for repetitive tasks such as counting, but ‘repeatability and automaticity must not be confused with objectivity’.1

What then is the way forward? Clearly, methodology needs to be standardised between laboratories. One way of achieving this is the use of a uniformly approved design for image analysis instruments. In design engineering terms, ‘standardisation’ means ‘the adoption of generally accepted uniform procedures, dimensions, materials, or parts that directly affect the design of a product or a facility’. In the case of immunostains, however, uniformity in image analysis design will not address the many existing variables that are pivotal to the preservation of tissue antigens, and it may be necessary to titrate or compare the test sample against a control of known value that has been subjected to the same preparation variables as the test sample. Currently, only internal controls satisfy these requirements, but it is not practical to introduce a standardised control into every tissue block employed for testing.
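As an illustration of the stain separation choice referred to above, the sketch below, again an aside rather than part of the editorial, assumes Python with numpy and scikit-image and compares the DAB signal recovered from the same synthetic brown patch using two of that library's built-in stain matrices. The recovered values differ because each matrix encodes different assumptions about which stains are present, which is one of the design decisions Tadrous highlights.

```python
# Sketch: the DAB value returned by colour deconvolution depends on the stain
# matrix the designer builds in. The patch is synthetic, standing in for a
# DAB-stained field; scikit-image's H+E+DAB and H+DAB matrices are compared.
import numpy as np
from skimage.color import separate_stains, hed_from_rgb, hdx_from_rgb

# Synthetic 64x64 RGB patch in a brown range typical of DAB reaction product.
rgb = np.zeros((64, 64, 3))
rgb[..., 0] = 0.55  # red
rgb[..., 1] = 0.35  # green
rgb[..., 2] = 0.20  # blue

dab_hed = separate_stains(rgb, hed_from_rgb)[..., 2]  # DAB channel, H+E+DAB vectors
dab_hdx = separate_stains(rgb, hdx_from_rgb)[..., 1]  # DAB channel, H+DAB vectors

print(f"mean DAB (H+E+DAB matrix): {dab_hed.mean():.3f}")
print(f"mean DAB (H+DAB matrix):   {dab_hdx.mean():.3f}")
```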

Address for correspondence: Professor A. S-Y. Leong, Hunter Area Pathology Service, John Hunter Hospital, Level 3, Lookout Road, New Lambton Heights, NSW 2305, Australia. E-mail: Anthony.Leong@newcastle.edu.au

References

1. Tadrous PJ. On the concept of objectivity in digital image analysis in pathology. Pathology 2010; 42: 207–11.
2. McKenna BJ, Appelman HD. Dysplasia can be a pain in the gut. Pathology 2002; 34: 518–28.
3. Palli D, Galli M, Bianchi S, et al. Reproducibility of histological diagnosis of breast lesions: results of a panel in Italy. Eur J Cancer 1996; 32A: 603–7.
4. Stoler MH, Schiffman M; Atypical Squamous Cells of Undetermined Significance-Low-grade Squamous Intraepithelial Lesion Triage Study (ALTS) Group. Interobserver reproducibility of cervical cytologic and histologic interpretations: realistic estimates from the ASCUS-LSIL Triage Study. JAMA 2001; 285: 1500–5.
5. Farmer ER, Gonin R, Hanna MP. Discordance in the histopathologic diagnosis of melanoma and melanocytic nevi between expert pathologists. Hum Pathol 1996; 27: 528–31.
6. Franc B, de la Salmonière P, Lange F, et al. Interobserver and intraobserver reproducibility in the histopathology of follicular thyroid carcinoma. Hum Pathol 2003; 34: 1092–100.
7. Chandler I, Houlston RS. Interobserver agreement in grading of colorectal cancers: findings from a nationwide web-based survey of histopathologists. Histopathology 2008; 52: 494–9.
8. Gokhale S, Rosen D, Sneige N, et al. Assessment of two automated imaging systems in evaluating estrogen receptor status in breast carcinoma. Appl Immunohistochem Mol Morphol 2007; 15: 451–5.
9. Bamford P. Empirical comparison of cell segmentation algorithms using an annotated dataset. Proceedings of the IEEE International Conference on Image Processing (ICIP 2003) 2003; 2: 1073–6.
10. Leong AS-Y. Quantitation in immunohistology – fact or fiction? A discussion of factors that influence results. Appl Immunohistochem Mol Morphol 2004; 12: 1–7.
11. Leong AS-Y, Haffajee Z. Citraconic anhydride: a new antigen retrieval solution. Pathology 2010; 42: 77–81.
12. Leong TY-M, Leong AS-Y. Variables that influence outcomes in immunohistology. Aust J Med Sci 2007; 28: 47–59.
13. Leong AS-Y. Editorial: Quantitative immunohistology. Tissue section thickness, another glitch in the path to standardization. Appl Immunohistochem Mol Morphol 2009; 17: 465–9.