ISPRS Journal of Photogrammetry & Remote Sensing 56 (2002) 311–325. www.elsevier.com/locate/isprsjprs
Medical imaging: examples of clinical applications

H.P. Meinzer *, M. Thorn, M. Vetter, P. Hassenpflug, M. Hastenteufel, I. Wolf

Division of Medical and Biological Informatics, German Cancer Research Center, Im Neuenheimer Feld 280, D-69120 Heidelberg, Germany

Received 1 March 2002; accepted 3 May 2002
Abstract

Clinical routine currently produces a multitude of diagnostic digital images, but only a few are used in therapy planning and treatment. Medical imaging is involved in both diagnosis and therapy. Using a computer, existing 2D images can be transformed into interactive 3D volumes and results from different modalities can be merged. Furthermore, it is possible to calculate functional areas that are not visible in the primary images. This paper presents examples of clinical applications that are integrated into clinical routine and are based on medical imaging fundamentals. In liver surgery, the importance of virtual planning is increasing because surgery is still the only possible curative procedure. Visualisation and analysis of heart defects are also gaining in significance due to improved surgical techniques. Finally, an outlook is provided on future developments in medical imaging using navigation to support the surgeon's work. The paper intends to give an impression of the wide range of medical imaging that goes beyond the mere calculation of medical images. © 2002 Elsevier Science B.V. All rights reserved.

Keywords: medical imaging; operation planning; computer-based surgery; clinical applications; 3D visualisation; medical diagnosis; therapy support
1. Introduction

In the last 25 years, several new image-producing modalities have been developed and integrated into the clinical workflow. Today, computed tomography (CT), magnetic resonance imaging (MRI), colour Doppler ultrasound (US), single photon emission computed tomography (SPECT) and positron emission tomography (PET) are used in routine applications, while the next generation of modalities, such as functional MRI (fMRI) and multidetector CT, are being introduced into the clinic.

* Corresponding author. Tel.: +49-6221-422366; fax: +49-6221-422345. E-mail address: [email protected] (H.P. Meinzer).

These techniques are used across
a wide spectrum of patient applications. CT is an X-ray procedure that produces cross-sectional pictures of the patient's body. Because of the defined X-ray spectrum, the grey values of the resulting images are standardised as Hounsfield units (a grey value of zero is equal to the density of water). In contrast, the grey values of MR images are not standardised. This technique produces cross-sectional pictures with the help of a powerful magnetic field (≥1.5 T). Resonances of atomic nuclei in different tissues are transformed into a grey value image. MRI differentiates soft tissue very well, but liquids like blood and solid material like bones are not well visualised. Because MRI is a more expensive procedure with a lower image resolution, CT is still the workhorse of radiological routine, while MRI is only used in special cases. SPECT and PET belong to the family of scintigraphic imaging modalities. Here, radioactive material with a very short half-life is administered to the patient and detected by the modality. With this technique, metabolic activity can be visualised to locate cancer-affected areas in the patient.

The only common feature of these modalities is that they produce digital images. Whenever something is calculated from the raw digital images, we speak of medical imaging. This definition means that the range of medical imaging is very broad. During the early years of medical imaging, research focusing on image processing resembled that of industrial image processing. A processing cascade was created consisting of the following steps: acquisition, image reconstruction, image filtering, feature extraction or segmentation, and classification. With ongoing research in this field, the flow was expanded by displays of medical images and by combining these steps in applications such as teleradiology, computer-based surgery planning, computer-supported and computer-aided surgery (CSS, CAS) and computer-based radiotherapy.

The main technical problems in medical imaging are still organ segmentation and especially anatomical changes caused by accidents, illness or cancer. While it is easy to find bones in CT and X-ray images because nearly everything above 250 Hounsfield units (HU) is bony tissue, it is a challenge to find a pancreatic tumour that looks like pancreatitis or to automatically extract the liver with an error rate of less than 2% in each individual dataset. These are some of the numerous technical challenges in medical imaging, but there are also practical challenges in this field of research. The question of how to integrate these tremendous results into both the daily clinical routine and the clinical workflow has often been neglected. In particular, computer-generated diagnosis and therapy proposals raise very sensitive issues regarding influence, consequence and responsibility.
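The Hounsfield rule of thumb quoted above (values above roughly 250 HU are bony tissue) amounts to a single mask operation on a calibrated CT volume. A minimal sketch in Python; the toy intensity values are made up for illustration:

```python
import numpy as np

def bone_mask(ct_volume_hu, threshold_hu=250):
    """Label voxels above the Hounsfield threshold as candidate bone.

    ct_volume_hu: array of CT intensities already calibrated to
    Hounsfield units (0 HU = water, about -1000 HU = air).
    """
    return ct_volume_hu > threshold_hu

# Toy 1D "volume": air, water, soft tissue, bone-like voxel
ct = np.array([-1000, 0, 40, 400])
print(bone_mask(ct).tolist())  # [False, False, False, True]
```

Real bone segmentation adds connected-component and morphological clean-up, but the threshold is, as the text notes, almost sufficient on its own.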
As a sampling of 25 years of research in medical imaging, we want to present selected applications developed in our division in cooperation with the University Clinic in Heidelberg. These include all typical image-processing steps in order to provide an idea of possible overlap with photogrammetry. The first section shows the results of visceral surgery¹ planning, with emphasis on liver surgery planning. CT and MR images are analysed to establish a therapy proposal. The next section presents a project that uses 3D ultrasound for the analysis of heart defects. Both projects are at a clinical integration stage, while the project in the third section deals with a very innovative task: by integrating 3D ultrasound, fast volume visualisation and navigation, the results of liver operation planning will be used to guide the surgeon during the operation.

¹ Visceral surgery deals with diseases of the abdomen and its organs, e.g. liver, pancreas, bowel and spleen.
2. Planning visceral surgery

Visceral surgery is the only potentially curative strategy in most cancer cases. The typical requirement in such cases is to estimate the volume of tissue that must be resected and the volume of functional, healthy tissue that will remain in the patient. Such estimations are not very sophisticated when the tumour tissue is far away from important vascular structures. The challenging decisions, e.g. regarding tumour tissue near major vascular structures, must be made with the help of 2D image stacks, from which the radiologists and surgeons must mentally reconstruct three-dimensional (3D) volumes. Whether their decision is right for the patient depends on their experience. The first thing medical imaging can help with is a 3D reconstruction of segmented images, so that physicians can see and interact with the images in real 3D (Figs. 1 and 2). Furthermore, it is possible to calculate the tissue that has to be resected (Fig. 5). There are different approaches to the planning of surgical strategies (Selle et al., 2000) and also to teaching and simulating such strategies (Marescaux et al., 1998). In the following, we discuss a procedure for achieving this with the liver surgery planning system developed in our department. Our approach calculates the segmental classification by means of an analysis of the portal vessel structure. It defines partial volumes of the organ tissue that are assigned to the different branches of the vessel tree. The location of the interfaces between these segments provides significant surgical information in addition to the position of the tumour and its relation to the vessel system. Additionally, tissue areas that depend on branches located inside the safety margin are detected. These volumes will presumably have to be resected as well.
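The branch-based segmental classification can be caricatured as assigning each parenchyma voxel to a territory of the portal tree. The nearest-point criterion, the coordinates and all names below are illustrative simplifications of the actual vessel-tree analysis (Selle et al., 2000), not its implementation:

```python
import numpy as np

def classify_segments(voxel_coords, branch_points, branch_labels):
    """Assign each tissue voxel to the closest vessel-branch point.

    A crude stand-in for portal-vein-based segment classification:
    the liver volume is partitioned by proximity to branch territory.
    """
    voxel_coords = np.asarray(voxel_coords, dtype=float)
    branch_points = np.asarray(branch_points, dtype=float)
    # Distance from every voxel to every branch point
    d = np.linalg.norm(voxel_coords[:, None, :] - branch_points[None, :, :], axis=2)
    return [branch_labels[i] for i in d.argmin(axis=1)]

branches = [(0, 0, 0), (10, 0, 0)]   # two hypothetical portal branches
labels = ["segment_A", "segment_B"]
voxels = [(1, 0, 0), (9, 1, 0)]
print(classify_segments(voxels, branches, labels))  # ['segment_A', 'segment_B']
```

The interfaces between the resulting territories are exactly the inter-segmental planes that the text identifies as surgically relevant.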
Fig. 1. Visualisation of a human pancreas (orange) including the arterial vessels (red) and the ribs (grey).
The integration of computer-aided operation planning in this field requires that all steps of the analysis process be embedded in a system that enables the reception of image data and forwarding of the results. The individual steps are:

1. segmentation of the liver and tumour,
2. segmentation of the vessel system,
3. editing of the vessel tree in order to differentiate the portal system, and
4. calculation of a resection proposal and visualisation of the results.

2.1. Liver surgery planning

The first step is the segmentation of the image data in order to tag the liver and diseased areas of liver tissue. Various semi-automatic algorithms for this task
have been integrated into a segmentation module and are stable for use in clinical routine. This module provides basic interactions like region growing (Justice et al., 1997), merging and cutting, thresholding, active contours (Kunert et al., 2002), undo and propagation (Herman et al., 1992). The framework in which the modules are integrated permits generating a graphical user interface for each image processing function, and the introduction of new functions is easily realised (Cardenas, 2001). Fig. 3 shows the segmentation module of the system.

Vessel segmentation is carried out using an algorithm that generates a symbolic description of the vessel tree in addition to segmenting the vessels (Zahlten et al., 1995). It begins with the selection of a starting point and a grey value range. This is done interactively by changing the level/window values for a two-dimensional image until only vessels are displayed. The starting point is placed in the stem of the
Fig. 2. Visualisation of a dog's kidney.
portal vein system. The algorithm passes through the dataset in order to find connected vessel structures. The main component of the subsequent editing module is a three-dimensional reconstruction of the result of the preceding segmentation step. In this module, the user can edit the vessel system. The vessel branch that contains the starting point of the segmentation algorithm is coloured green. The segmentation process may have generated invalid connections between the two venous systems, but these can be severed interactively (Fig. 4) (Thorn et al., 2001). The location of an invalid connection can be detected by calculating the path from the portal stem to a part of the hepatic system that belongs to the segmentation result. In this display, the location where the segmentation result must be severed is easily detected. Each branch can be chosen interactively and tagged as a "stop branch". The tag then severs the pseudo-connection. Only those parts of the vessel tree that are still connected to the portal stem are displayed in bright colours. This is continued
Fig. 3. Liver surgery planning system: module for the semi-automatic segmentation of the liver and tumour tissue (here, only the liver is shown).
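The seed-plus-grey-value-range growth used by the segmentation and vessel modules is classic region growing. A 2D sketch with 4-connectivity; the toy image and threshold range are invented, and the real modules work on 3D volumes:

```python
from collections import deque

def region_grow(image, seed, lo, hi):
    """Grey-value region growing from a seed pixel (2D sketch).

    Collects all 4-connected pixels whose value lies in [lo, hi],
    mirroring the interactive level/window selection described above.
    """
    rows, cols = len(image), len(image[0])
    region, queue = set(), deque([seed])
    while queue:
        r, c = queue.popleft()
        if (r, c) in region or not (0 <= r < rows and 0 <= c < cols):
            continue
        if not (lo <= image[r][c] <= hi):
            continue
        region.add((r, c))
        queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return region

img = [[0, 0, 9],
       [0, 9, 9],
       [0, 0, 9]]
print(sorted(region_grow(img, (0, 2), 8, 10)))
# [(0, 2), (1, 1), (1, 2), (2, 2)]
```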
Fig. 4. Separating the two vessel systems (vena porta and liver veins) with five clicks on average.
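The "stop branch" mechanism of Fig. 4 can be sketched as a graph traversal from the portal stem that refuses to pass through tagged branches; everything no longer reachable drops out of the brightly displayed portal tree. The mini vessel graph below is hypothetical:

```python
from collections import deque

def connected_to_stem(adjacency, stem, stop_branches):
    """Return the vessel branches still reachable from the portal stem.

    Tagging a branch as a "stop branch" severs the pseudo-connection:
    the traversal never passes through it, so everything supplied only
    via that branch is excluded from the portal tree.
    """
    reachable, queue = {stem}, deque([stem])
    while queue:
        node = queue.popleft()
        for nxt in adjacency.get(node, []):
            if nxt in stop_branches or nxt in reachable:
                continue
            reachable.add(nxt)
            queue.append(nxt)
    return reachable

# Hypothetical mini vessel graph: "bridge" stands for an invalid
# connection created by the segmentation between the two venous systems.
graph = {
    "portal_stem": ["p1", "p2"],
    "p1": ["bridge"],
    "bridge": ["hepatic_1"],
    "hepatic_1": [],
    "p2": [],
}
print(sorted(connected_to_stem(graph, "portal_stem", {"bridge"})))
# ['p1', 'p2', 'portal_stem']
```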
Fig. 5. Visualisation of the planning results (two vessel systems, tumour, and the resection proposal in compliance with the safety margin).
until all parts of the hepatic system are removed from the segmentation result. After that, the portal vein system is classified, and those vessels running through the tumour or within the safety margin around the tumour can be identified as the vessels supplying the liver parenchyma that must be resected.

The last step in the planning procedure is the presentation of results (Fig. 5). It includes a visualisation showing the tumour position in relation to the vessels. Vessels located inside the safety margin can also be shown. Operation planning visualises the proposed resection lines on the surface of the liver. The quantitative results include total liver volume and tumour size. The volume of the sound liver tissue that must be resected in addition to the tumour is particularly important, because this size approximates the loss of liver functionality.

2.2. Result presentation in the operating theatre

We installed a touch-screen display to permit direct access by the surgeon to all information (see Fig. 6). With this installation, the surgeon is able to manipulate visualisations and can choose different views of the result data. First, he can look at the CT data and leaf through it in cine mode. Several results from operation planning, i.e. the safety margin around the tumour, the resection proposal and the specific vessel trees for the vena porta or the vena hepatica, can be blended into the CT data as overlays. This application emerged as an obligatory requirement for the system, so that a surgeon's decision is backed by the CT data. In a second view, the surgeon can have a three-dimensional look at the supplying and draining vessels to support his spatial visualisation (Lamadé et al., 2000). All three-dimensional views can be freely rotated and zoomed. In addition to pure anatomy, it is also possible to view the tumour, safety margin and resection proposal in three dimensions.

The resection proposal is calculated with the aid of a module that allows the user to attribute parts of the portal vessel tree to be resected (Glombitza et al., 1999). For this purpose, the surgeon has the option of clicking onto the visualised vessel tree to increase or decrease the resection proposal during the surgery (Thorn et al., 2001) and to adjust the strategy to the actual intra-operative situation. The results of these intra-operative adjustments are also viewable in the two- and three-dimensional visualisations. As a result of its high acceptance, the system was used by various surgeons in five liver surgeries during the last 4 months. This cumulative experience, as well as software ergonomics and additional features, is currently being examined. At the same time, we are carrying out extensive studies to evaluate the results of resection planning to encourage the acceptance of the software system. We are also considering an expansion of the system so that the preceding operation planning can be updated during the surgery, because intra-operative ultrasound examination has shown that tumours were sometimes underrated or overlooked in CT images. That has led us to also use MR images for operation planning.

Fig. 6. Touch-screen monitor over the operation field.
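The quantitative planning results described above (total liver volume, tumour size, volume of sound tissue to be resected) reduce to voxel counting once the segmentation masks exist. A sketch with invented voxel counts and spacing:

```python
def resection_volume_ml(n_voxels, spacing_mm):
    """Convert a voxel count from a segmentation mask to millilitres.

    spacing_mm: (x, y, z) voxel edge lengths in mm; 1 ml = 1000 mm^3.
    """
    sx, sy, sz = spacing_mm
    return n_voxels * sx * sy * sz / 1000.0

# Hypothetical numbers: a CT with 0.7 x 0.7 x 2.0 mm voxels.
spacing = (0.7, 0.7, 2.0)
total_liver = resection_volume_ml(1_600_000, spacing)
resected = resection_volume_ml(300_000, spacing)
remaining_fraction = (total_liver - resected) / total_liver
print(f"resect {resected:.1f} ml of {total_liver:.1f} ml "
      f"({remaining_fraction:.0%} of liver volume remains)")
```

The remaining-volume fraction is the figure the text singles out, since it approximates the post-operative loss of liver functionality.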
3. Diagnostic support for heart diseases using three-dimensional echocardiography

Echocardiography (cardiac ultrasound) is today the predominant technique for the evaluation of left ventricular function (vitality of the left ventricle) and also for the assessment and quantification of valvular heart lesions. Three-dimensional echocardiography has been proven to be superior to conventional two-dimensional echocardiographic techniques (Fenster et al., 2001). In addition to improved imaging of structures and details that can only be studied by 3D methods, a major advantage is the ability to perform measurements on the 3D datasets. Segmentation of cardiac structures is required for determining many important diagnostic parameters. However, manual quantitative analysis of 3D (or even 4D = 3D + time) data is very time consuming and has prevented widespread use of 3D echocardiographic methods. A major reason for the time intensiveness is that many quantification tasks require segmentation of 4D datasets. We have developed an echocardiographic analysis system, called EchoAnalyzer, that integrates many up-to-date visualisation and quantification methods (Wolf et al., 2001). This includes a new segmentation approach with the potential of speeding up the use of three-dimensional echocardiographic techniques, making them available for routine clinical medicine. The EchoAnalyzer system provides colour-coded, interactive 4D volume visualisation; measurements of cavity volumes, ejection fraction and cardiac output; regurgitation quantification; and the display of flow profiles.
3.1. Visualisation methods

Transesophageal Doppler echocardiography is used for data acquisition. The patient swallows the ultrasonic probe, which is placed in the esophagus to provide a good view of the cardiac structure of interest. We use a Sonos 2500 DSR/Sonos 5500 DSR ultrasound system (Philips, Andover, MA, USA) with a transesophageal multiplane probe (5 MHz) that allows digital data storage. The multiplane technology allows the two-dimensional acquisition sector to be rotated around its bisection line. Four-dimensional datasets are acquired by the built-in rotational controller of the system, triggered by the ECG and respiratory gating. The backscatter (i.e., morphological) data and the Doppler data are each stored separately with 8 bits per pixel. The Doppler data can contain either velocity information only, 6 bits of velocity and 2 bits of turbulence, or 5 bits of velocity and 3 bits of turbulence information. Turbulence information is calculated as the variance of the Doppler signal and is important for the assessment of regurgitation, i.e. blood flowing in the wrong direction due to a defective heart valve (De Simone et al., 1999a).

Fig. 7 shows a typical two-dimensional echocardiographic image. Such images are used to examine the function of the mitral valve, i.e. the valve separating the left atrium from the left ventricle. In the case of mitral valve insufficiency, segmentation of the left atrium is required for reliable quantification of regurgitant volumes with the three- or even four-dimensional approach developed in our group (De Simone et al., 1999a). The quantification of heart valve defects is important to avoid unnecessary replacement of heart valves. Substitution of heart valves by artificial prostheses is not only a severe operation, but also leads to an increased risk of thromboembolism, resulting in a life-long necessity to take anticoagulants.
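The 8-bit Doppler layouts described above (velocity only, 6+2 or 5+3 bits of velocity/turbulence) suggest simple bit unpacking. Which bit positions carry which signal is an assumption of this sketch, not a documented Sonos format:

```python
def unpack_doppler(sample, turbulence_bits):
    """Split one 8-bit Doppler sample into velocity and turbulence.

    turbulence_bits is 0, 2 or 3, matching the layouts in the text.
    This sketch assumes turbulence occupies the low-order bits.
    """
    velocity_bits = 8 - turbulence_bits
    turbulence = sample & ((1 << turbulence_bits) - 1)
    velocity = sample >> turbulence_bits
    assert velocity < (1 << velocity_bits)
    return velocity, turbulence

# 6 bits velocity, 2 bits turbulence: 0b101101 / 0b10
v, t = unpack_doppler(0b10110110, 2)
print(v, t)  # 45 2
```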
The EchoAnalyzer system allows the display of the originally two-dimensional slices, either with combined backscatter/Doppler signals, backscatter only, velocity only or turbulence only. Additionally, the data can be reconstructed in arbitrary cut planes (multiplanar reformations, MPR). Volume visualisation of 3D/4D echocardiographic data provides additional information (see Fig. 8), e.g. about the shape, size and origin of regurgitant jets. The
Fig. 7. Typical view obtained from transesophageal acquisition position. LA: left atrium, LV: left ventricle, MV: mitral valve.
volume renderer integrated into the EchoAnalyzer software enhances three-dimensional perception by virtual illumination with two light sources. The system allows interactive rotation of the visualised volume. 4D datasets are displayed as movies. Doppler data can be visualised with the same colour encoding that was used during a conventional two-dimensional examination on the screen of the ultrasound equipment, either in combined mode together with the backscatter data or without the surrounding tissue. Any number of cutaway planes can be used to mask the volume parts that obstruct the view of the structures of interest.

3.2. Quantification methods

The vitality of the heart muscle can be assessed by measuring the heart's performance, which can be done by determining cardiac output and the ejection fraction. Cardiac output is the amount of blood, in litres per minute, pumped from the heart into the body. Cardiac output measurements can be performed either by
segmenting the left ventricle or by spherical surface integration of velocity vectors (SIVV). SIVV is a three-dimensional, angle-independent method that calculates flow by integrating the Doppler velocity information over a spherical shell with its centre located at the position of the ultrasound transducer. It has the potential of being independent of the differences in flow area occurring during the cardiac cycle (Brandberg et al., 1999). The ejection fraction represents the percentage or fraction of the left ventricular diastolic volume that is ejected during the systolic phase. From a technical point of view, the determination of the ejection fraction is a segmentation problem. In medical image processing, segmentation is still a difficult task to perform automatically, especially if the signal-to-noise ratio is low and the object contour is incomplete, both of which are often the case in echocardiography. We have integrated into the analysis software a nearly automatic segmentation algorithm that greatly reduces the user time necessary for segmentation (Wolf et al., 2000). The algorithm is a contour-based approach that integrates region-based information. If necessary, the segmentation results can be corrected interactively.

Fig. 8. Hybrid visualisation of volume-rendered blood flow and surface-rendered mitral annulus.

To quantify regurgitation, a three-dimensional method is implemented in the analysis software that yields reliable volumetric measurements even for asymmetrical, wall-impinging regurgitation jets (De Simone et al., 1999b). Regurgitant jets are defined as the fast, turbulent flow component located above the atrioventricular valves. Blood flow distributions downstream of the aortic valves may be analysed by visualisation as velocity landscapes to compare the effects of various artificial aortic valves on blood constituents. After definition of the primary direction of flow, an angle correction of the velocity component supplied in the Doppler data is performed and the result is displayed as a profile graph that can be interactively rotated.
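The ejection fraction and cardiac output definitions above translate directly into arithmetic on the segmented end-diastolic and end-systolic volumes; the numeric values below are illustrative, not patient data:

```python
def ejection_fraction(edv_ml, esv_ml):
    """Fraction of the end-diastolic volume ejected during systole."""
    return (edv_ml - esv_ml) / edv_ml

def cardiac_output_l_per_min(edv_ml, esv_ml, heart_rate_bpm):
    """Cardiac output = stroke volume x heart rate, converted to l/min."""
    stroke_volume_ml = edv_ml - esv_ml
    return stroke_volume_ml * heart_rate_bpm / 1000.0

# Illustrative values for a segmented left ventricle
print(ejection_fraction(120.0, 50.0))             # ~0.583
print(cardiac_output_l_per_min(120.0, 50.0, 70))  # 4.9
```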
The echocardiographic analysis system presented here enables visualisation and quantification of three- and four-dimensional echocardiographic datasets. It integrates leading-edge methods in a user-friendly environment and makes them available for clinical medicine. Internal test results of the software package at our institution are promising. Once these tests are complete, the software will be made available for multicentre evaluation.
4. Computer-assisted surgical interventions Ever since the first surgical interventions, surgeons have had to compensate for the lack of human organ transparency. They have learned to orient themselves by means of their visual and tactile senses while their actions were limited by their dexterity. Nowadays,
augmented reality (AR) allows for enhanced perception of the surgical situs by superimposing stereoscopic projections over the field of operation. This can be achieved by an image-guided surgery system (IGSS). Image-guided surgery means that the surgeon is guided by computer-generated instrument visualisations that are displayed relative to planned target structures in the pre- and intra-operative image data. Technically, this is accomplished by navigation systems that combine components for intra-operative image acquisition, image registration, instrument and marker tracking, modelling of organ deformations, and visualisation. The whole IGSS must integrate seamlessly into the surgical workflow to avoid interfering with the medical staff. Using computer-assisted surgery (CAS), the surgeon is supported in an ideal manner during the whole sequence of complex surgical procedures, consisting of image acquisition, surgical planning, surgical simulation, therapy support and long-term follow-up.

Computer-assisted navigation systems for rigid anatomical regions, i.e. for bones and tissue adjacent to bones, are highly advanced and used in clinical routine. Examples of IGSS in clinical use include applications in skull base surgery, ear-nose-throat (ENT) surgery, cranio-maxillofacial surgery, neurosurgery and orthopaedics. Navigation systems make the results of the surgical planning available during the intervention: information gained by the preoperative surgical planning can be transferred to the intra-operative situs. This enhances the indication and allows a more exact execution of the intervention. In soft tissue operations, support by a navigation system was previously unavailable. The movement of soft organs results in a strong deviation from the preoperatively recorded dataset; therefore, the preoperatively obtained resection proposals no longer coincide with the current situs. In our work, we are developing a prototype of image-guided navigation in liver surgery named ARION.
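At the core of such navigation is mapping tracked instrument coordinates into the planning-image frame through a registration transform. A minimal homogeneous-coordinate sketch; the transform values are toy data, not ARION's:

```python
import numpy as np

def instrument_tip_in_image(T_image_from_tracker, tip_in_tracker):
    """Map a tracked instrument tip into pre-operative image coordinates.

    T_image_from_tracker: 4x4 homogeneous transform obtained from
    registration; tip_in_tracker: (x, y, z) reported by the tracking
    system. All values here are illustrative only.
    """
    p = np.append(np.asarray(tip_in_tracker, dtype=float), 1.0)
    return (T_image_from_tracker @ p)[:3]

# Toy example: registration found a pure translation of (5, -2, 10) mm
T = np.array([[1, 0, 0, 5],
              [0, 1, 0, -2],
              [0, 0, 1, 10],
              [0, 0, 0, 1]], dtype=float)
print(instrument_tip_in_image(T, (1, 2, 3)).tolist())  # [6.0, 0.0, 13.0]
```

In a rigid-anatomy system this single transform suffices; the soft-tissue case discussed below additionally needs a deformation model.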
Arion was a handsome and talented lute player in Greek mythology. His rivals wanted to throw him overboard into the Mediterranean Sea, but he was allowed to play a last tune. The high pitch from his lute called a rescuing dolphin that carried him to the coast. He was therefore the first person whose life was saved by ultrasound. ARION is also an acronym for Augmented Reality for Intra-Operative Navigation. Surgical cutting instruments are tracked by an electromagnetic tracking system (AURORA, Northern Digital, Waterloo, Ontario, Canada). A display from Dresden 3D (Dresden, Germany) with a Diamond Fire GL3 graphics card is used for auto-stereoscopic visualisation. For the shutter glasses, we use the ELSA (San Jose, USA) Gladiac Ultra technology.

Current problems in oncological liver surgery are non-adherence to safety margins, possible injury of major blood vessels, spreading of tumour cells, and the difficulty of implementing the resection proposal at the intra-operative site. The quality of the surgery depends on the surgeon's experience and spatial imagination; all these problems therefore have their cause in the intra-operative orientation of the surgeon. Our requirement analysis indicated that the major problems are the identification of vessels and orientation within the liver (Hassenpflug et al., 2001). Surgeons want support in global and local orientation. Globally, they want to identify vessels and determine their course through the organ. Locally, they are more interested in estimating distances and the relationship of their instrument to intrahepatic structures. Visual depth perception is therefore important for intra-operative orientation and navigation. Computerised surgical planning for intra-operative navigation provides segmented vessels and tumours as well as resection proposals (Fig. 9). Stradx (Prager et al., 1998) is used as the software for intra-operative freehand 3D ultrasound. Intra-operatively, the liver is fixated and three-dimensional freehand Doppler ultrasound and B-scans are acquired simultaneously. These data provide ARION with the necessary features for registration of the preoperative planning data: a vessel tree reconstructed from pre-operative CT data and the corresponding vessels extracted from three-dimensional ultrasound images are used for registration.
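As an illustration of vessel-based registration: a least-squares rigid (Kabsch) alignment of matched vessel landmarks. The registration actually used in ARION is elastic; this sketch shows only the kind of rigid first step one might use, and the landmark coordinates are invented:

```python
import numpy as np

def rigid_fit(source_pts, target_pts):
    """Least-squares rigid transform (Kabsch) between matched points.

    source_pts: vessel landmarks from intra-operative 3D ultrasound;
    target_pts: corresponding landmarks from the pre-operative CT model.
    Returns rotation R and translation t with R @ s + t ~= target.
    """
    src = np.asarray(source_pts, float)
    tgt = np.asarray(target_pts, float)
    src_c, tgt_c = src.mean(axis=0), tgt.mean(axis=0)
    H = (src - src_c).T @ (tgt - tgt_c)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Toy check: target is source rotated 90 degrees about z and shifted
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
tgt = src @ Rz.T + np.array([2.0, 0.0, 1.0])
R, t = rigid_fit(src, tgt)
print(np.allclose(src @ R.T + t, tgt))  # True
```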
An efficient registration is performed by elastically mapping corresponding vessels. The major challenge is maintaining the registered condition during the resection, after the fixation of the liver is revoked (Vetter et al., 2001). This is necessary in real time for monitoring a surgical volume of interest. We therefore developed navigation aids that can be localised by an electromagnetic tracking system. The navigation aids (Vetter et al., 2002a) are anchored deep within the registered liver. Subsequently, liver deformation can be described by a deformation model based on parameters from the localisation of the navigation aids. The instruments are then visualised in real time, with respect to the tumour and the intrahepatic vessels, on an auto-stereoscopic display for proper depth perception (Vetter et al., 2002b) (Fig. 10).

Fig. 9. Combined surface visualisation of a clipped liver with intrahepatic vessels, tumour, and a depiction of the virtual instrument in ARION: liver hull (grey), liver vein (light blue), portal vein (dark blue), tumour (red) and instrument (yellow).

On an auto-stereoscopic display, the stereo images are separated by a prism mask (Schwerdtner and Heidrich, 1998). The prism mask is moved horizontally relative to the position of the observer's eyes. For this purpose, a stereo camera system atop the stereo display automatically detects the position of the observer's eyes using real-time image processing and pattern recognition. This enables quasi-holographic visualisation by moving the virtual cameras in tandem with the observer's eye positions, which offers the possibility of looking stereoscopically around virtual objects in a natural way. The quasi-holographic technique is superior to other visualisation techniques in the case of complex anatomical scenarios. The accuracy of the magnetic tracking system can be continuously monitored by dynamically comparing its measurements with an additional optical tracking system (Hassenpflug et al., 2002).

Fig. 10. Interaction with ARION using augmented reality: tracking of the surgical instrument and the registered liver, and auto-stereoscopic visualisation on a prism-based display.

We are convinced that IGSS like ARION will bridge the gap between surgical planning and the difficult intra-operative orientation in open liver surgery. Augmented-reality techniques will significantly impact operation time and quality in the not too distant future. Enhanced cooperation between disciplines such as medicine, medical informatics, mathematics and engineering is therefore becoming more and more important.
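The deformation model driven by the localised navigation aids could, in its simplest conceivable form, interpolate each aid's measured displacement over the organ. The inverse-distance weighting below is a deliberately crude stand-in for the model of Vetter et al. (2002a), with invented positions and displacements:

```python
import numpy as np

def interpolate_displacement(point, aid_positions, aid_displacements, eps=1e-9):
    """Inverse-distance-weighted displacement at an arbitrary liver point.

    aid_positions: pre-resection locations of the tracked navigation aids;
    aid_displacements: their measured movement since registration.
    """
    point = np.asarray(point, float)
    pos = np.asarray(aid_positions, float)
    disp = np.asarray(aid_displacements, float)
    d = np.linalg.norm(pos - point, axis=1)
    if d.min() < eps:                       # query point lies on an aid
        return disp[d.argmin()]
    w = 1.0 / d                             # closer aids dominate
    return (w[:, None] * disp).sum(axis=0) / w.sum()

aids = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
moves = [(1.0, 0.0, 0.0), (3.0, 0.0, 0.0)]
print(interpolate_displacement((5.0, 0.0, 0.0), aids, moves).tolist())
# [2.0, 0.0, 0.0]
```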
5. Pitfalls and drawbacks

Medical image processing is nothing more than a specialisation of computerised image processing in general. Since the early days of image processing, several well-explored methods and tools have become known, e.g. convolution, the Fourier transform, and wavelet-based approaches. All of them are very mathematical and focus on what can be computed, mostly by looking at the given images from a frequency or band-pass point of view. Unfortunately, this is not the way a human being perceives images. We open our eyes and see stereo,
perspective, depth, colour and motion, just to name a few features. Neither medical doctors nor scientists and engineers understand human perception in sufficient depth to simulate it on a computer. All attempts at using artificial intelligence to comprehend images have failed, with the artificial exception of automated bar-code reading. Instead of asking "what can we see?", the field stalled on the question "what can we calculate?" In medical image processing, all problems of generalised image processing are encountered again. In the hope of helping patients to better health by enabling medical doctors to better understand the patient's problems, images have been used since Wilhelm Conrad Röntgen's time, more than 100 years ago. Nowadays, we collect images by the hundreds for one patient in one session. Simple planar presentation of the 2D slices from CT, MR tomography and US, to name just the most popular modalities, is still the standard.
In order to improve diagnosis and subsequent therapy planning, we have worked on algorithms for image understanding, i.e. perception. The dream of fully automated image analysis in the medical context has failed dramatically: radiologists are not able to tell computer scientists how they perceive radiographs and thus cannot help construct a perception algorithm. This is why semi-automatic approaches are in vogue. We use the human eye to control and guide the computer analysis of medical images. The result of the semi-automatic segmentation process is then visualised by surface- and volume-rendering techniques. Research in this field was propelled by the film industry, and the resulting techniques are now widely available; the remaining task in medical applications is to render the images in time, i.e. in real time.

The most difficult remaining obstacle to a full implementation of all possible enhancements of imaging technologies in clinical routine is the lack of smooth integration into the traditional clinical workflow. Human and methodological hierarchies are not easily changed, because clinical routine has been highly ritualised for centuries. New approaches are only accepted when there is an obvious benefit to the medical procedure. This can mean better care for the patient, but often it relates to benefits for the health care professionals. The benefit is not necessarily monetary: health care professionals may be motivated to accept imaging technology by personal fame, theoretical considerations or the wish to improve patient care. From our own experience, we know that this process of clinical integration of imaging technologies supporting diagnostics and therapy planning can take years after the first publication of a new approach.
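A classic example of the semi-automatic division of labour described above is seeded region growing (cf. Justice et al., 1997, in the reference list): the human supplies a seed point inside the structure of interest, and the machine does the bookkeeping of collecting connected, similar pixels. A minimal 2D sketch follows (our illustration; clinical pipelines operate on 3D volumes and use far more elaborate homogeneity criteria):

```python
from collections import deque
import numpy as np

def region_grow(image, seed, tol):
    """Grow a region from a user-supplied seed pixel, accepting
    4-connected neighbours whose intensity lies within `tol` of the
    seed intensity. Returns a boolean mask of the grown region."""
    mask = np.zeros(image.shape, dtype=bool)
    seed_val = float(image[seed])
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]
                    and not mask[ny, nx]
                    and abs(float(image[ny, nx]) - seed_val) <= tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask

# Synthetic slice: a bright square "organ" on a dark background.
img = np.zeros((32, 32))
img[8:20, 8:20] = 1.0
# The seed coordinate stands in for the radiologist's mouse click.
organ = region_grow(img, seed=(12, 12), tol=0.5)
```

The point of the example is the division of roles: the perceptual decision (where the organ is) stays with the human, while the computer performs only the mechanical propagation step.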
References

Brandberg, I.J., Janerot-Sjöberg, B., Ask, P., 1999. Increased accuracy of echocardiographic measurement of flow using automated spherical integration of multiple plane velocity vectors. Ultrasound Med. Biol. 25 (2), 249–257.
Cardenas, C.E., 2001. Konzeption und Implementierung eines objektorientierten Frameworks und dessen Komponenten für die multimodale präoperative Operationsplanung in der Leber- und Nierenchirurgie. PhD thesis, University of Heidelberg, Medical Faculty.
De Simone, R., Glombitza, G., Vahl, C.F., Albers, J., Meinzer, H.P., Hagl, S., 1999a. Three-dimensional color Doppler: a clinical study in patients with mitral regurgitation. J. Am. Coll. Cardiol. 33 (6), 1646–1654.
De Simone, R., Glombitza, G., Vahl, C.F., Albers, J., Meinzer, H.P., Hagl, S., 1999b. Three-dimensional color Doppler: a new approach for quantitative assessment of mitral regurgitant jets. J. Am. Soc. Echocardiogr. 12 (3), 173–185.
Fenster, A., Downey, D.B., Cardinal, H.N., 2001. Three-dimensional ultrasound imaging. Phys. Med. Biol. 46 (5), 67–99.
Glombitza, G., Lamadé, W., Demiris, A.M., Göpfert, M.R., Mayer, A., Bahner, M.L., Meinzer, H.P., Richter, G., Lehner, Th., Herfarth, C., 1999. Virtual planning of liver resections: image processing, visualization and volumetric evaluation. Int. J. Med. Inf. 53 (2–3), 225–237.
Hassenpflug, P., Vetter, M., Cárdenas, C., Thorn, M., Meinzer, H.-P., 2001. Navigation in liver surgery – results of a requirement analysis. In: Lemke, H.U., Vannier, M.W., Inamura, K., Farman, A.G., Doi, K. (Eds.), Computer Assisted Radiology and Surgery (CARS 2001), Proceedings of the 15th International Congress and Exhibition, Berlin, June 27–30, vol. ICS 1230. Elsevier, Amsterdam, p. 1162.
Hassenpflug, P., Vetter, M., Schneberger, M., Liebler, T., Richter, G.M., Thorn, M., Cárdenas, C., Lamadé, W., Herfarth, C., Meinzer, H.-P., 2002. Influence of the operating room on magnetic tracking. Proc. SPIE 4681 (to be published).
Herman, G.T., Zheng, J., Bucholz, C.A., 1992. Shape-based interpolation. IEEE Comput. Graph. Appl. 12 (3), 69–79.
Justice, R.K., Stokely, E.M., Strobel, J.S., Ideker, R.E., Smith, W.M., 1997. Medical image segmentation using 3-D seeded region growing. Proc. SPIE 3034, 900–910.
Kunert, T., Heiland, M., Meinzer, H.P., 2002. User-driven segmentation approach: interactive snakes. Proc. SPIE 4684 (to be published).
Lamadé, W., Glombitza, G., Fischer, L., Chiu, P., Cardenas Sr., C.E., Thorn, M., Meinzer, H.P., Grenacher, L., Bauer, H., Lehnert, T., Herfarth, C., 2000. The impact of 3-dimensional reconstructions on operation planning in liver surgery. Arch. Surg. 135 (11), 1256–1261.
Marescaux, J., Clement, J.-M., Tassetti, V., Koehl, C., Cotin, S., Russier, Y., Mutter, D., Delingette, H., Ayache, N., 1998. Virtual reality applied to hepatic surgery simulation: the next revolution. Ann. Surg. 228 (5), 627–637.
Prager, R.W., Gee, A.H., Berman, L., 1998. Stradx: real-time acquisition and visualization of freehand 3D ultrasound. Technical Report CUED/F-INFENG/TR 319. University of Cambridge, Dept. of Engineering.
Schwerdtner, A., Heidrich, H., 1998. Dresden 3D display: a flat autostereoscopic display. Proc. SPIE 3295, 203–210.
Selle, D., Spindler, W., Schenk, A., 2000. Computerized models minimize surgical risk. Diagn. Imag. Eur. 16 (9), 16–20.
Thorn, M., Vetter, M., Cardenas, C., Hassenpflug, P., Fischer, L., Grenacher, L., Richter, G.M., Lamadé, W., Meinzer, H.P., 2001. Interaktives Trennen von Gefäßbäumen am Beispiel der Leber. In: Handels, H., Horsch, A., Lehmann, T.M., Meinzer, H.P. (Eds.), Informatik Aktuell, Bildverarbeitung für die Medizin 2001 – Algorithmen, Systeme, Anwendungen. Springer, Berlin, pp. 147–151.
Vetter, M., Hassenpflug, P., Thorn, M., Cárdenas, C., Glombitza, G., Lamadé, W., Richter, G.M., Meinzer, H.-P., 2001. Navigation in der Leberchirurgie – Anforderungen und Lösungsansatz. In: Wörn, H., Mühling, J., Vahl, C., Meinzer, H.P. (Eds.), Proc. Rechner- und sensorgestützte Chirurgie. Lect. Notes Inf. (LNI), vol. P-4. Gesellschaft für Informatik, Bonn, pp. 92–102.
Vetter, M., Hassenpflug, P., Wolf, I., Thorn, M., Cárdenas, C., Grenacher, L., Richter, G.M., Lamadé, W., Büchler, M.W., Meinzer, H.-P., 2002a. Intraoperative Navigation in der Leberchirurgie mittels Navigationshilfen und Verformungsmodellierung. Proc. Bildverarbeitung für die Medizin 2002. Springer, Heidelberg, pp. 73–76.
Vetter, M., Hassenpflug, P., Thorn, M., Cárdenas, C., Grenacher, L., Richter, G.M., Lamadé, W., Herfarth, C., Meinzer, H.-P., 2002b. Superiority of auto-stereoscopic visualization for image-guided navigation in liver surgery. Proc. SPIE 4681 (to be published).
Wolf, I., De Simone, R., Glombitza, G., Meinzer, H.P., 2000. Automatic segmentation of heart cavities in multidimensional ultrasound images. Proc. SPIE 3979, 273–283.
Wolf, I., De Simone, R., Glombitza, G., Meinzer, H.P., 2001. EchoAnalyzer – a system for three-dimensional echocardiographic visualization and quantification. In: Lemke, H.U., Vannier, M.W., Inamura, K., Farman, A.G., Doi, K. (Eds.), Proc. Computer-Assisted Radiology and Surgery. Elsevier, Amsterdam, pp. 902–907.
Zahlten, C., Juergens, H., Peitgen, H.O., 1995. Reconstruction of branching blood vessels from CT-data. In: Goebel, M., Müller, H., Urban, B. (Eds.), Visualization in Scientific Computing. Springer Verlag, Wien, pp. 41–52.