CHAPTER SEVENTEEN
Biomedical image visualization and display technologies

Jinman Kim 1, Younhyun Jung 1, David Dagan Feng 1 and Michael J. Fulham 1,2,3

1 Biomedical & Multimedia Information Technology (BMIT) Research Group, School of Computer Science, The University of Sydney, Sydney, NSW, Australia
2 Department of Molecular Imaging, Royal Prince Alfred Hospital, Sydney, NSW, Australia
3 Sydney Medical School, The University of Sydney, Sydney, NSW, Australia
17.1 Introduction

Biomedical imaging has revolutionized healthcare and now plays an indispensable role in routine clinical care [1]. Biomedical imaging depicts the interior of the human body noninvasively, giving effective access to the information from patient studies that is necessary for clinical decision-making. Since the advent of computerized imaging modalities such as computed tomography (CT), magnetic resonance (MR) imaging, and positron emission tomography (PET) in the 1970s and 1980s, biomedical images have evolved from two-dimensional (2-D) images to fully isotropic three- and four-dimensional (3-D and 4-D) images, e.g., 4-D dynamic scans of the heart. Biomedical imaging instrumentation has rapidly moved toward combining technologies (or modalities) in a single device, so-called "multimodality" imaging. Such multimodality imaging scanners include PET-CT, single-photon emission computed tomography (SPECT)-CT, and PET-MR. These imaging modalities provide unprecedented visualizations of human anatomy and function. At the same time, the enormous increase in imaging data size has challenged clinicians with information overload and practical visualization constraints as they assimilate, understand, and make sense of the data.

PET-MR first introduced the ability to acquire anatomical (MR) and functional (PET) data simultaneously in a limited region of the body (brain, heart, part of the body). Recently, the first total-body PET-CT scanner was built, which at 1.94 m in length is able to image the entire body at one time. Named "The Explorer," this scanner generates a massive amount of data and will soon be available commercially [2].

There is constant research to develop novel hardware and algorithms to visualize the data in a form that clinicians can interpret effectively and intuitively. However, conventional visualization techniques for biomedical images still rely mainly on a 2-D slice-based format; an example PET-CT visualization is shown in Fig. 17.1. With 2-D views, clinicians must manually browse through the images slice-by-slice in different
Figure 17.1 Conventional 2-D cross-sectional views of multimodality PET-CT images: transaxial (top row), coronal (middle row), and sagittal (bottom row) views of, from left to right, PET (human function), CT (human anatomy), and fused PET-CT images. Bounding boxes (dashed red lines) outline regions of interest that correspond to the site of lung tumors (black regions on PET images and yellow regions on fused images). All patient studies were scanned in the supine position, so in the transaxial and coronal planes, the right side of the subject appears on the left of the images.
cross-sectional views and may need to adjust window and level parameters to define appropriate brightness and contrast settings [3,4].

Research on 3-D visualization of biomedical imaging data has been motivated by the desire to simplify interpretation. A 3-D visualization can navigate the image volume to depict regions of interest (ROIs) and the surrounding structures (context). The data from Fig. 17.1 are displayed in a 3-D visualization in Fig. 17.2, showing abnormalities (from PET) with contextual anatomical structures (from CT). The 3-D volumetric visualization also allows for the depiction of the shape of an organ, e.g., the lungs, spleen, and liver. Three-dimensional visualizations provide complementary views, e.g., an overview of whole-body PET-CT [5] and the 3-D shape of an organ to visualize the deformation of the lungs in CT [6]. They are used in presurgical planning to identify the spatial relationships between ROIs, such as pathologic lesions, and surrounding structures; in the surgery itself; and in the planning of radiation therapy [7–9]. Further applications of 3-D visualization include education/training for surgery and radiological diagnosis [10] and quantification for modeling [11]. These clinical applications use different rendering techniques to visualize the imaging data on various types of display devices (hardware) to view and navigate the data.
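The window and level adjustment mentioned above linearly maps a chosen intensity range (the window, centered at the level) onto the display's gray levels. A minimal sketch in Python, with illustrative rather than clinical preset values:

```python
import numpy as np

def window_level(img, window=400.0, level=40.0):
    """Map raw intensities (e.g., HUs) to 8-bit display gray levels.

    `window` is the width of the intensity range shown; `level` is its
    center. Values outside the range are clipped to black or white.
    """
    lo = level - window / 2.0
    hi = level + window / 2.0
    clipped = np.clip(img, lo, hi)
    return ((clipped - lo) / (hi - lo) * 255.0).astype(np.uint8)

# Hypothetical soft-tissue preset applied to a CT slice held as an array.
ct_slice = np.random.normal(0.0, 300.0, (256, 256))  # stand-in for real data
display = window_level(ct_slice, window=400.0, level=40.0)
```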
Figure 17.2 An example of 3-D visualization of the regions of interest (ROIs, outlined in red) from a PET-CT patient study from Fig. 17.1. The rendering is generated using a direct volume rendering technique; the ROIs outline the sites of disease detected with PET overlain on the underlying anatomy.
In this chapter, we outline fundamental visualization concepts (the visualization pipeline) and then discuss state-of-the-art rendering algorithms and display hardware technologies. We have focused on visualization in diagnostic imaging, given its critical role in everyday patient care, but much of the research discussed is relevant to other biomedical imaging application areas. There is a major body of work on 3-D visualization research. Our chapter identifies and builds on previous work and provides a snapshot of the future. For further reading, we suggest the papers of Kersten-Oertel, Zhang, Ljung, Caban, and Douglas [9,12–15]. Zhang et al. [12] provide technical explanations of the different types of volume rendering techniques, e.g., direct volume rendering (DVR) versus indirect volume rendering, with examples from biomedical imaging data. Ljung et al. [13] discuss the extensive literature on making 3-D visualization manipulations intuitive or automated. Caban et al. [14] compared the pros and cons of different publicly available, open-source 3-D biomedical imaging visualization tools. Applications of AR and VR have been reviewed for surgery by Kersten-Oertel et al. [9], and their applications for diagnostic biomedical images are discussed in a recent review by Douglas et al. [15].

The rest of the chapter is organized as follows. In Section 17.2, we outline the major types of biomedical imaging modalities and the properties that allow 3-D visualizations. A summary of the visualization pipeline is given in Section 17.3. In Section 17.4, recent advances in rendering techniques, in particular DVR and its optimization algorithms, are discussed. In Section 17.5, display technologies are explained, and new technologies such as virtual reality (VR) and augmented reality (AR) are introduced. In Section 17.6, we discuss publicly available development software and platforms for biomedical imaging visualization.
17.2 Biomedical imaging modalities

In this section, we briefly summarize the characteristics of the major imaging modalities used in 3-D biomedical visualization. Volumetric biomedical imaging modalities can be categorized as either scalar or vector image types. In scalar images, each acquired image consists of discrete sample points, known as voxels, that are reconstructed on a regular 3-D grid with a predefined resolution. Each voxel of this grayscale image has a value that represents the intensity (a scalar) of structures in the object. The alternative to scalar data is vector data, where each voxel stores multiple values; examples include functional and tensor imaging that capture diffusion processes over time, representing the white-matter tracts connecting different parts of the brain. This chapter only refers to scalar data types. For readers interested in vector data types, we refer to prior surveys of vector modalities [16,17].
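To make the scalar data type concrete, a volume can be held as a 3-D array of voxel intensities together with the physical voxel spacing. A brief sketch, assuming a hypothetical NIfTI file and using the nibabel library:

```python
import nibabel as nib  # one common library for volumetric file formats

# Hypothetical file name; a scalar volume is a 3-D grid of voxel intensities.
img = nib.load("ct_volume.nii.gz")
vol = img.get_fdata()                # e.g., shape (512, 512, 326)
spacing = img.header.get_zooms()     # physical voxel size in mm, e.g., (0.98, 0.98, 1.0)

# Each voxel holds one scalar value; its physical position follows from its
# (i, j, k) index, the grid spacing, and the image affine.
i, j, k = 256, 256, 150
print(vol[i, j, k], spacing)
```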
Figure 17.3 Volumetric biomedical imaging data acquisition with a CT scanner.
A typical scalar data acquisition process is shown in Fig. 17.3 for a CT image. Here, volumetric data are acquired as the scanning table moves continuously through the scanner, which comprises an X-ray source and detectors; the trajectory of the detector array through the patient traces out a spiral pattern. Cross-sectional images, with levels of grayscale depicting different structures, are then reconstructed from the X-ray projections acquired at multiple angles around the patient.
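The reconstruction principle can be illustrated in a few lines with scikit-image, which provides a Radon transform (the forward projection) and filtered back projection; this toy sketch substitutes a software phantom for the patient:

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

# A classic test object stands in for a patient cross-section.
phantom = shepp_logan_phantom()

# Forward model: line-integral projections at many angles (the sinogram),
# analogous to the detector readings as the source rotates around the patient.
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(phantom, theta=angles)

# Inverse problem: filtered back projection recovers the cross-sectional image.
reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")
```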
17.2.1 Single-modality volumetric biomedical imaging data

Several biomedical imaging modalities are acquired as single-modality 3-D volumes. We focus on three common imaging modalities, CT, MR, and PET, that can be visualized using 3-D rendering techniques.

CT is an anatomical imaging modality that captures images of the organs, the bony skeleton, and soft tissues. CT is used to diagnose many disorders, e.g., hemorrhage, fractures, lung tumors/inflammation/disease, bowel obstruction, tumors, and kidney stones (pyelography) [18–20]. CT produces high-resolution volumetric images that are particularly well suited to inspecting dense structures [21], such as bone or lung tissue. Different anatomical structures attenuate X-rays in a highly predictable way, and the intensity range is represented in Hounsfield units (HUs) [22]. HUs can categorize certain structures, e.g., bone versus soft tissue versus air versus brain. A transaxial image of a CT of the thorax is shown in Fig. 17.3. CT and MR can be used with contrast agents to make structures easier to identify. CT angiography is an example where contrast agents combined with very rapid CT scanning allow for the production of images of the vasculature [23].

MR scanners use strong magnetic fields and radiofrequency pulses to generate images of the brain, spine, heart, liver, etc. [24]. MR provides exquisite anatomical definition in static structures such as the brain and limbs and has been used routinely in the assessment of disorders of the brain and spine. Motion
is a problem for MR imaging, but engineering advances in the development of more rapid image acquisition methods and motion correction algorithms have meant that MR is now being applied to moving structures such as the heart, lungs, and liver [25].

PET is a functional imaging modality that captures physiological activities, such as metabolism and blood flow [26]. PET requires the intravenous injection of a PET radiopharmaceutical; a scanner then detects the gamma rays emitted when the radiopharmaceutical decays. The PET radiopharmaceutical is a compound, such as an analog of glucose (deoxyglucose) or a protein/peptide/drug, labeled with a positron emitter: 18F, 15O, 11C, or 13N. The positron emitters are produced by cyclotrons. The PET radiopharmaceutical, once injected into the body, participates in the biological process but does not perturb it, and with the PET scanner it depicts the biochemical process of interest, such as glucose metabolism with 18F-fluorodeoxyglucose (FDG), the most commonly used PET radiopharmaceutical in patient care. FDG PET-CT is now widely used in the staging of cancer because cancer cells generally have higher rates of glucose metabolism than other parts of the body and so "show up" as regions of higher FDG uptake [27].
17.2.2 Multimodality biomedical imaging

Multimodality imaging refers to the acquisition of data using different imaging modalities (CT, PET, MR) in one patient in a single scanning session. PET-CT scanners allow the sequential acquisition first of CT data and then PET data, while with PET-MR, PET and MR data are acquired simultaneously [28,29]. Other modalities such as ultrasound are now being added to these scanning sessions to expand the diagnostic and therapeutic options in medical imaging. These multiple data acquisitions also allow the different types of data to be "overlain" on each other: PET data can be registered to the CT data to give a "fused" image, where data in color from the PET are overlain on the grayscale images of the CT to provide anatomical localization of the ROIs detected on PET.
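A simple way to produce such a fused view, assuming the PET slice has already been registered and resampled to the CT grid, is alpha blending of a color-mapped PET image over the grayscale CT; a sketch with matplotlib, where random arrays stand in for real data:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical co-registered 2-D slices: CT in HUs, PET as uptake values.
ct_slice = np.random.normal(0.0, 200.0, (256, 256))
pet_slice = np.random.rand(256, 256)

fig, ax = plt.subplots()
ax.imshow(ct_slice, cmap="gray")             # grayscale anatomy underneath
ax.imshow(pet_slice, cmap="hot", alpha=0.4)  # color-mapped function on top
ax.set_axis_off()
plt.show()
```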
17.2.3 Serial scans of biomedical imaging modalities

Serial scans refer to the acquisition of imaging data across multiple scans, usually to assess the response or lack of response to treatment [30,31]. As an example, a serial PET study of a patient with multiple tumor recurrences (within the red boxes) is presented in Fig. 17.4, which shows the response of the tumor to treatment between scans 1 and 3. These serial scans require different visualizations, such that tumor changes between the scans can be visualized while retaining consistent anatomical data.
Figure 17.4 PET images with tumors (indicated by red bounding boxes) from three serial scans, (A) to (C), of a patient. The images are in coronal views.
17.3 Biomedical image visualization pipeline

We define a typical biomedical image visualization pipeline to comprise four components (gray boxes on the top row), as in Fig. 17.5. A PET-CT scanner (Box 1: Data Acquisition) is used to sequentially acquire PET and CT images, which are then processed (Box 2: Image Processing) to segment out tumor ROIs from PET; the processed data are then rendered (Box 3: Volume Rendering) using a rendering technique, and the rendered result is visualized on a conventional "flat-screen" monitor display (Box 4: Display Technology). Along this pipeline, the image processing component is optional and may be omitted, i.e., raw data can be direct volume rendered. The following sections focus our discussion on the last two components of the pipeline, with references to the other, complementary components where relevant.
17.4 Volume rendering techniques

This section first describes conventional 2-D rendering techniques and then the two common 3-D rendering techniques, surface rendering and DVR, with applications to various biomedical imaging modalities including single-modality, multimodality, and serial scan datasets.
Figure 17.5 Three-dimensional visualization pipeline for biomedical imaging data illustrated with a PET-CT lung cancer (thorax) patient study.
17.4.1 Two-dimensional visualization

The conventional and default visualization of biomedical images is in 2-D cross-sectional views, with multiple images presented in a "montage" or "slice-by-slice" format. The 2-D views can be formatted into orthogonal orientations, namely transaxial, sagittal, and coronal views, and displayed simultaneously, as shown in Fig. 17.1. Besides orthogonal views, other approaches to 2-D slice formatting include volume slicing [32], multiplanar reformatting [33], and curved sectioning [34].
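For a volume stored as a 3-D array, the three orthogonal views are simply index slices along different axes. A minimal sketch (the axis order shown is one convention; it varies between file formats and scanners):

```python
import numpy as np

# Hypothetical volume indexed as (slice, row, column).
vol = np.random.rand(200, 256, 256)

transaxial = vol[100, :, :]  # plane perpendicular to the body's long axis
coronal    = vol[:, 128, :]  # plane dividing front and back
sagittal   = vol[:, :, 128]  # plane dividing left and right
```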
17.4.2 Three-dimensional surface rendering visualization

Surface rendering is a 3-D visualization technique that renders the "surfaces" of structures derived from the image volume. It is commonly used, e.g., to render skeletal and vascular structures [35]. Surface rendering builds a geometrical contour representation of the surface defined by the segmentation of the image volume. The contours of the segmented volume(s) are then extracted with surface tiling techniques [36–38], which create polygonal surface representations of the structure. One of the best-known algorithms for surface tiling is the marching-cubes algorithm [37], which creates triangulated representations of the surface and has spawned many optimizations and variations for different applications. Generally, surface rendering requires segmentation of the image volume to determine the structures for tiling. Segmentation, as a preprocess, is the main limiting factor for surface rendering: segmentation of the structures is difficult to automate (via image processing) and often requires manual refinements and corrections. Further, once the surfaces
have been reconstructed, it is computationally inefficient to modify the generated surfaces, so surface rendering does not provide fully interactive visualization capabilities.
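A brief sketch of surface tiling with the marching-cubes implementation in scikit-image; a synthetic binary volume stands in for a segmentation result:

```python
import numpy as np
from skimage import measure

# Hypothetical binary segmentation (or thresholded CT) as input.
vol = np.zeros((64, 64, 64))
vol[16:48, 16:48, 16:48] = 1.0   # a cube standing in for a segmented structure

# Marching cubes tiles the iso-surface at the chosen level with triangles.
verts, faces, normals, values = measure.marching_cubes(vol, level=0.5)
print(verts.shape, faces.shape)  # (n_vertices, 3), (n_triangles, 3)
```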
17.4.3 Three-dimensional direct volume rendering visualization

DVR visualization, with its ability to render the entire volume, is the dominant rendering technique for visualizing biomedical images, based on the number of applications [12,35]. An example of a 3-D DVR visualization of a PET-CT patient study is shown in Fig. 17.2. It is also feasible to combine surface rendering, e.g., of an ROI, together with volume rendering [13].

17.4.3.1 Direct volume rendering computing pipeline

DVR imitates the functionality of a pinhole camera, with light passing through a volume before reaching the viewer (camera), as shown in Fig. 17.6. This process is based on a simplified model of the real physical phenomena that occur when light (source) interacts with a participating object (medium). Optical models are often described as the net gain or loss (differential changes) of radiance at any spatial location as light travels through a volume. Entry and exit points, a and b, on the surfaces of the image volume (the cube encapsulating the object) are defined as the first and last intersections between the ray of light and the volume. By evaluating the optical contribution (color and opacity) of the voxels along the ray, the initial light radiance, I0, is modulated to I(b) at the exit point, i.e., the pixel value of the final rendering. This DVR process, which relies on independent ray-by-ray computation (i.e., one ray per pixel of the image), has seen rapid computational performance gains from formulating the ray computation to leverage the parallel processing support of graphics processing units (GPUs) [39,40].
Figure 17.6 An overview of a direct volume rendering computing pipeline.
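The discrete form of this optical model is typically evaluated by front-to-back compositing along each ray. The sketch below is a minimal, unoptimized illustration of how sampled intensities between a and b are mapped through a transfer function and accumulated into one pixel; the function names and toy transfer function are illustrative, not a library API:

```python
import numpy as np

def composite_ray(samples, tf_color, tf_alpha):
    """Front-to-back emission-absorption compositing along one ray.

    `samples` are the interpolated voxel intensities between entry point a
    and exit point b; `tf_color`/`tf_alpha` map intensity to optical
    properties (the transfer function). Returns the final pixel color,
    i.e., the modulation of I0 into I(b).
    """
    color = np.zeros(3)
    alpha = 0.0
    for s in samples:
        a_i = tf_alpha(s)
        color += (1.0 - alpha) * a_i * tf_color(s)  # emission, attenuated
        alpha += (1.0 - alpha) * a_i                # absorption accumulates
        if alpha > 0.99:                            # early ray termination
            break
    return color

# Toy transfer function: brighter voxels are redder and more opaque.
pixel = composite_ray(np.linspace(0.0, 1.0, 128),
                      tf_color=lambda s: np.array([s, 0.2, 0.2]),
                      tf_alpha=lambda s: 0.05 * s)
```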
17.4.3.2 Image semantic analysis for direct volume rendering visualization

The DVR process combined with image semantics, such as ROI segmentation (results from an image processing component), can produce effective visualizations of biomedical imaging data that provide "focus-and-context" views of the image volume [41]. The main advantage of this approach is in facilitating visualization for data interpretation in a user-perceivable manner, i.e., the user's attention can be directed toward the "focused" ROI with automated emphasis on "contextual" structures that may enhance the understanding of the ROI. The use of image semantics is exemplified in Fig. 17.7, based on the work of Jung et al. [42], where a semantic-based visualization was demonstrated to provide focus and context for individual structures (as ROIs) in a CT volume. Here, the initial visualization (Fig. 17.7A) contains three sets of ROIs (red boxes): the lungs, spine, and kidney transplant (in the left lower abdomen), all visually cluttered by the surrounding structures. By allowing the user to specify an ROI, i.e., by selecting a point (yellow circle), the method updated the DVRs such that the selected ROI was brought into focus while the surrounding contextual structures were omitted from the visualization.

Figure 17.7 An example of a focus-and-context visualization. (A) Default rendering of the whole-body CT; (B–D) renderings of ROIs based on the user's ROI selection (focus), where all surrounding (contextual) structures have been removed.

17.4.3.3 Transfer function designs

Transfer functions (TFs) are a design editor that allows users to assign different optical parameters to a volume in DVR visualizations [13]. TFs can be used to interactively identify and visually emphasize different ROIs. A standard one-dimensional (1-D) TF
and its resulting DVR are shown for a CT volume of the human abdomen in Fig. 17.8. Here, a user can manipulate the voxels in the volume by assigning different intensity and opacity attributes in order to, for example, selectively visualize ROIs. In a typical 1-D TF design editor, the horizontal axis represents the intensity range while the vertical axis represents the opacity values; a background histogram (red bars along the x-axis) shows the intensity distribution of the volume. The histogram provides "indicators" for the user to detect and classify ROIs through identifiable intensity distributions, where certain structures can be associated with a specific intensity range (e.g., the CT intensity range in HUs) and with the "peaks" and "valleys" in the histogram. Using the TF, a user can select an intensity range and then assign color and opacity values to it. In this example, the 1-D TF design editor has four opacity assignments (with different colors): blue assigned to the lungs, red to the kidneys, orange to vessels, and white to bones.

Figure 17.8 A standard one-dimensional transfer function (TF) and the resulting direct volume rendering visualization from the TF manipulation.
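In code, a 1-D TF amounts to interpolation between user-placed control points. The sketch below maps intensity to opacity; the control points are illustrative values loosely inspired by CT HU ranges, not the exact settings shown in the figure, and color channels would be handled analogously:

```python
import numpy as np

# Hypothetical control points of a 1-D TF: (intensity in HUs, opacity).
pts_x = np.array([-1000.0, -600.0, -400.0, 200.0, 500.0, 3000.0])
pts_a = np.array([0.0,      0.3,    0.0,   0.0,   0.8,   0.8])

def opacity_tf(intensities):
    # Piecewise-linear interpolation between control points, mirroring
    # the curves drawn in a 1-D TF design editor.
    return np.interp(intensities, pts_x, pts_a)

print(opacity_tf(np.array([-500.0, 0.0, 1200.0])))
```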
Multidimensional TF: TF research has been extended to multidimensional TFs that use additional visual attributes derived from the rendered volume to complement the standard intensity/opacity attributes of 1-D TFs. Projecting these attributes as extra dimensions in TF design editors facilitates greater volume exploration, where the new dimensions typically act as additional indicators to guide the user during TF design. Kindlmann et al. [43,44] pioneered 2-D TFs based on intensity and gradient attributes, such as the first- or second-order derivative of the intensity, which allowed for the identification and emphasis of ROI boundaries. The use of other attributes, including the ROIs' local shapes [45], relative sizes [46], and spatial occlusions [47], was later proposed to improve TF capabilities.

Attribute clustering-based TF: Conceptually, the introduction of additional attributes led to greater classification (identification) capabilities. However, these attributes also made the TF more complex for the user to understand and manipulate. In
practice, a 1-D TF can be defined as a line or a curve, as in Fig. 17.8, but 2-D TFs require the modification of "shapes" or "polygons" in a 2-D space and thus raise design complexities [48]. To mitigate this increased complexity, several works have been proposed to simplify multidimensional TF design. In one approach, the TF attribute space (i.e., the volume histogram) was grouped (via clustering) in an automated way, with each group (or cluster) representing a distinct ROI. With these ROIs, TF manipulation was simplified, as the TF design was restricted to individual ROIs rather than the whole image volume. However, as with all automated image processing, the ability to define ROIs was limited by the performance of the clustering algorithms; many variants of clustering algorithms were applied, including K-means clustering [49], Gaussian mixture models [50], nonparametric clustering [51], and multilevel clustering [52].

TF parameter optimization: Extensive research in TF parameter (or attribute) optimization has aimed to generate TFs automatically and thereby reduce the complexity of TF design. A typical TF parameter optimization process is iterative: initial TF parameters are set and then continuously adjusted from one set of parameters to another until an optimal set is found that satisfies a user-defined objective metric. In an early work, Correa et al. [53] proposed the use of the "visibility" distribution as the objective metric to represent how "visible" ROIs were in the resulting DVR visualization. Ruiz et al. [54] proposed another variant of the visibility-driven TF optimization scheme that optimized an initial TF by matching target visibilities assigned by the user against the resulting visibilities at each iteration of the optimization process. Other approaches to TF optimization include viewpoint selection (described in Section 17.4.3.4 below) and optimization for multimodality (Section 17.4.4) and serial scan (Section 17.4.5) visualization.

17.4.3.4 Volume clipping and viewpoint selection

Volume clipping allows for a selective view of ROIs in the interior of a volume by omitting other extraneous details, as exemplified in Fig. 17.9. This is achieved by defining a conventional clipping plane and moving it through the volume. Rectangular clipping planes are typically used, but more complex clipping geometries [55–57] have also been introduced to improve the usability of volume clipping.

Viewpoint selection is another factor in visualization. Different viewpoints can be used to render different spatial relationships (occlusions) among ROIs in a volume. For instance, from a particular viewpoint an ROI may appear to be occluded by irrelevant regions (rendered behind other regions), whereas from the opposite viewpoint the same ROI will be in front. Identifying an ideal viewpoint can be a time-consuming manual process. In an attempt to automate viewpoint selection, early works such as those by Bordoloi et al. [58] relied on the use of "visibility" parameters to measure the "goodness" of viewpoints, where high visibility of the ROIs was taken to be ideal. As an extension, Takahashi et al. [59] used only the visibility of the boundaries of the ROIs
Figure 17.9 Volume clipping applied to a DVR rendering of a PET-CT patient study. (A) Default DVR of an entire fused PET-CT image volume in transaxial orientation. A 2-D TF (top right corner; intensity, its gradient, and opacity as the image attributes) was manually adjusted to minimize occlusion from the bones and soft tissue. (B) Volume clipping of the same volume to the lung structure, now revealing the tumor ROI from PET while retaining anatomical information from CT.
in their viewpoint measurements. As such, a good viewpoint under this approach was one in which the boundaries of the ROIs were well preserved.
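At the voxel level, the clipping described above can be sketched as masking out the half-space on one side of a plane before rendering; a minimal NumPy illustration with arbitrary plane parameters:

```python
import numpy as np

# Hypothetical volume and a clipping plane defined by n . x = d.
vol = np.random.rand(64, 64, 64)
n = np.array([0.0, 0.0, 1.0])   # plane normal (here: axial clipping)
d = 32.0                        # plane offset along the normal

# Voxels on one side of the plane are made fully transparent so the
# interior of the volume becomes visible.
coords = np.stack(np.indices(vol.shape), axis=-1)  # (i, j, k) per voxel
mask = coords @ n > d
clipped = np.where(mask, 0.0, vol)  # 0.0 standing in for "transparent"
```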
17.4.4 Multimodality direct volume rendering visualization

In a typical multimodality visualization, volume manipulations such as TFs are applied independently to the individual volumes before the resulting volumes are fused. The rendering in Fig. 17.2 is an example of a PET-CT visualization, where the PET TF was first defined to visualize the abnormal ROIs, with the counterpart CT TF set to minimize occlusion of the PET ROIs. Unfortunately, such individual TF manipulations are complicated and time-consuming. Many approaches have been proposed to mitigate this complexity in multimodality TF design, with variations in the use of visual attributes and/or in the fusion of the volume data [17,54,60–62]. An example of multimodality TF optimization is from Jung et al. [62], where a visibility-based rendering parameter set was optimized for a multimodality volume. As depicted in Fig. 17.10, in their work, visibility was used to calculate a "relative spatial metric" such that the visibility of ROIs in one volume was influenced by the visibility of other regions in the counterpart volume.
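The visibility metric underlying these approaches can be sketched by extending the ray-compositing loop shown earlier: a sample's contribution is weighted by how much opacity has already accumulated in front of it. A simplified single-ray version, in our own illustrative formulation rather than the authors' exact code:

```python
import numpy as np

def roi_visibility(samples, roi_flags, tf_alpha):
    """Sketch of a visibility metric: how much the ROI samples contribute
    to the final pixel, attenuated by the opacity accumulated in front.

    `roi_flags[i]` is True where sample i belongs to the ROI; `tf_alpha`
    maps intensity to opacity (the current transfer function).
    """
    alpha = 0.0
    visibility = 0.0
    for s, in_roi in zip(samples, roi_flags):
        a_i = tf_alpha(s)
        if in_roi:
            visibility += (1.0 - alpha) * a_i  # contribution still visible
        alpha += (1.0 - alpha) * a_i
    return visibility

# An optimizer would adjust the TF's parameters until the ROI visibility
# matches a target value.
samples = np.linspace(0.0, 1.0, 128)
print(roi_visibility(samples, samples > 0.7, lambda s: 0.05 * s))
```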
17.4.5 Direct volume rendering visualization for serial scans

Serial scans of a patient study are often acquired to assess the response or lack of response to treatment. Treatment response can be variable; for example, in serial PET scans, some parts of the original ROI may show reduced FDG uptake while other parts show uptake that remains the same or increases. The accurate interpretation of serial scans always requires reference to previous studies [17] to compare them side-by-side and above-and-below. The ability to let a user identify and track ROI changes
Figure 17.10 Transfer function parameter optimization using “visibility” as the objective metric for use in multimodality PET-CT visualization.
over serial scans, with consistent structure and without distracting nonrelevant image data, would permit improved evaluation of the ROIs. Recent work by Jung et al. [63], an example of serial scan visualization with serial PET-CT datasets, is presented in Fig. 17.11. In that study, a combination of TF parameter optimization, ROI segmentation, and a parallel GPU architecture was introduced to address the challenges of serial scan visualization. To achieve functional and anatomical visual consistency among the scans, a CT TF was automatically optimized with respect to the segmented ROIs over the serial PET scans, and the optimal CT TF was then applied to all the serial scans.
17.5 Display technology

Display technology is an important component of the visualization pipeline, as it is the hardware that physically presents the visualizations, ideally in a full 3-D "native" representation [64,65]. In recent years, key developments in stereoscopic (e.g., VR) and holographic (e.g., AR) technologies have made them accessible and practical for the biomedical imaging domain. This section discusses conventional 2-D "flat" monitors, which are currently the standard in biomedical image display, followed by 2-D screen-based stereoscopic displays. We then present emerging technologies in VR and AR that are opening up new clinical applications in 3-D biomedical imaging visualization. The differences between the display technologies are illustrated in Fig. 17.12 through a CT imaging visualization scenario (hip joint analysis). With "reality" visualization, the conventional 2-D cross-sectional viewer can be viewed alongside a 3-D printed femur (derived from segmentation of the CT). The hip structure can also be 3-D printed but is not used in this scenario. In an AR visualization, the 3-D printed femur can be "augmented" via holograms to simulate pressure points as it interacts with the hip structure (a virtual object, also derived from the CT) that is rendered
Figure 17.11 A 3-D DVR visualization of three serial scans, (A) to (C), of the same patient study from Fig. 17.4. Compared with Fig. 17.4, this visualization provides a 3-D overview of the changes in the ROIs (abnormalities from PET rendered in red) over the serial scans.
Figure 17.12 Reality to virtual continuum schema for biomedical image visualization.
using surface rendering. The 2-D viewer is the same as in the reality scene. For VR visualization, the femur and the hip bone structures are both rendered (virtual objects) as well as the 2-D viewer.
17.5.1 Two-dimensional conventional visualization display technologies

In a typical biomedical imaging workstation, it is common to find multiple 2-D flat-screen monitors for both 2-D and 3-D visualizations. These screen displays typically
have resolutions of FHD (full high definition, 1920 × 1080) and greater, including UHD (ultrahigh definition, 3840 × 2160), with a pixel depth of up to 24-bit color. On a 2-D screen display, true 3-D visualization can be obtained only by displaying the data stereoscopically (creating the illusion of depth). This technique has been adopted in visualization applications such as surgical simulations [66,67]. Fundamentally, stereoscopy presents slightly different images to the left and right eyes, such that each eye sees only its own image; the brain fuses the two into a single view with the perception of depth.
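One classic, easily reproduced stereoscopic encoding is the red-cyan anaglyph, which delivers a different rendering to each eye through colored filters; a minimal sketch (commercial medical displays more commonly use polarized or shutter-glass techniques):

```python
import numpy as np

def anaglyph(left_gray, right_gray):
    """Combine two renderings from slightly offset camera positions into
    a red-cyan anaglyph, so each eye receives its own image through the
    matching filter of the glasses. Inputs are 2-D grayscale images.
    """
    rgb = np.zeros(left_gray.shape + (3,))
    rgb[..., 0] = left_gray    # red channel   -> left eye
    rgb[..., 1] = right_gray   # green channel -> right eye
    rgb[..., 2] = right_gray   # blue channel  -> right eye
    return rgb

# Random arrays stand in for two renderings of the same volume.
stereo = anaglyph(np.random.rand(64, 64), np.random.rand(64, 64))
```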
17.5.2 Virtual reality visualization

Stereoscopic visualization can be extended to a virtual environment. VR visualization is the provision of a computer-generated artificial (virtual) environment in which users can feel as though they are "present." A VR system typically comprises a head-mounted display (HMD) with sensors tracking the wearer's movements and location. HMDs present a virtual image that completely occludes the real world from the user's field of view. The user can maneuver through the virtual environment by head movements (via HMD tracking) or by walking (via external camera tracking). Using additional handheld navigation devices with haptic feedback, or voice and gesture input, users can control the visualizations in an immersive way rather than from a user interface on a computer screen. As such, biomedical imaging data can be presented in a 3-D immersive and interactive environment for natural and intuitive visualization and navigation [68].

An example of a biomedical imaging application of VR is in the imaging of breast cancer [69]. In this example, a survey among clinicians found that VR visualization, compared with 2-D flat monitors, was better at characterizing the linear and branching patterns of microcalcifications in breast imaging. Another VR example is virtual endoscopy, which provides better structure-navigation capabilities than flat screens. Virtual endoscopy has been applied to many anatomical structures, such as the trachea, colon, aorta, brain ventricles, nasal cavity, and paranasal sinuses [70–73].

Several challenges need to be addressed for the broader application of VR. Aside from the inherent limitations of VR, which include symptoms of motion sickness and strain on the ocular system, technical challenges must also be overcome, e.g., HMDs with a limited field of view, sensor navigation devices with inadequate robustness, and problems of latency and poor registration. Lack of standardization in VR devices and software is another concern.
17.5.3 Augmented reality visualization

AR is a visualization concept in which devices render virtual objects in the user's "real" physical environment. AR can provide an environment where real-world and virtual-world objects are presented together within a single display [64]. Here, virtual-world objects are
seamlessly integrated into the real world. The virtual-world objects blend in with the user's own reality, enabling the user to walk around them, anchor them to a specific location in the real world, and have them interact with real-world objects. AR has traditionally been delivered on screens that project a camera feed, such as on a smartphone. More recently, HMDs have been introduced in which virtual renderings are displayed on a transparent screen in front of the user's eyes, such as Microsoft's HoloLens [74] and the Magic Leap [75]. Built-in sensing, e.g., the hand-gesture tracking in HoloLens, or additional sensing and navigation devices, e.g., the Leap Motion controller [76] or Microsoft Kinect [77], can be further included to provide control of the visualization environment.

Because AR by its nature augments information on top of physical objects, most research on AR for biomedical imaging visualization has been in image-guided surgery, e.g., visualizing preoperative or intraoperative models of patient images registered to the patient's physical body [9]. There are opportunities, however, for adopting AR technologies in other applications, e.g., chemotherapy visualization [15], and this is a promising research area for future development. These examples have begun to demonstrate the capabilities of AR for biomedical image visualization. However, as with VR, several technical challenges must be addressed before AR can be more broadly applied. These include the limited field of view and low screen resolution of HMDs, sensor navigation devices with inadequate robustness, and the need for accurate registration of virtual objects with the real world in 3-D and in real time. Nevertheless, both hardware and software are actively being developed to overcome these challenges.
17.6 Development platforms for biomedical image visualization

Rapid developments in software are important for the advancement of research and innovation in biomedical imaging and visualization. Several well-established software development platforms are specifically designed and optimized for the biomedical visualization community. These platforms are based on libraries and frameworks that are publicly available, enabling efficient development that leverages extensive existing visualization techniques. These libraries and frameworks are typically developed via open-source collaboration and made freely available for use, modification, and redistribution [14]. This section provides an overview of three popular open-source, publicly available visualization platforms, summarized in Table 17.1.
17.6.1 Voreen (volume rendering engine)

Volume rendering engine (Voreen) is a rapid prototyping framework for GPU-based, high-performance 3-D visualization [78]. It is well known for its stereoscopic and VR support. Voreen is equipped with a wide array of biomedical visualization algorithms for
Table 17.1 Differences between three selected visualization development platforms.

Volume rendering engine (Voreen). Compatibility: Windows, Linux, Mac. Open source: entirely. Visualization strengths: advanced 3-D rendering technologies, stereoscopic (virtual reality) support. Programming support: visual (plug and play), scripting (modules), and C++.

Visualization toolkit (VTK). Compatibility: Windows, Linux, Mac. Open source: entirely. Visualization strengths: commercial and expert support, extensive documentation with large community support. Programming support: scripting and C++.

MeVisLab. Compatibility: Windows, Linux, Mac. Open source: partially. Visualization strengths: extensive open-source code library integration, e.g., with the insight toolkit and VTK. Programming support: visual (plug and play), scripting (modules), and C++.
2-D, 3-D, 3-D + time, and multimodality images. It also includes fundamental biomedical image processing algorithms as built-in processors. Voreen is an open-source C++ framework and supports visual, scripting, and C++ programming; the visual and scripting interfaces allow for rapid research prototyping.
17.6.2 The visualization toolkit

The visualization toolkit (VTK) is an open-source, freely available software framework for the rapid development of biomedical imaging and visualization applications [79]. VTK contains numerous functionalities for 2-D and 3-D biomedical imaging visualization, together with strong support for biomedical image processing algorithms, e.g., image segmentation and classification. Developed in C++, VTK provides a number of high-level classes, extensive documentation, and examples, making it a popular development platform for practitioners and developers. VTK supports C++ programming as well as bindings for popular scripting languages, including the tool command language (Tcl) and Python. Additionally, Kitware, Inc. (Clifton Park, NJ) provides commercial and expert support for VTK and custom-designed software for companies and customers.
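A minimal VTK volume rendering pipeline in Python gives a feel for the toolkit's style; this sketch assumes a DICOM series in a hypothetical ./study directory and uses deliberately simple transfer functions:

```python
import vtk

# Read a volumetric study (directory name is a placeholder).
reader = vtk.vtkDICOMImageReader()
reader.SetDirectoryName("./study")

# GPU ray-casting mapper performs the DVR described in Section 17.4.3.
mapper = vtk.vtkGPUVolumeRayCastMapper()
mapper.SetInputConnection(reader.GetOutputPort())

# Illustrative 1-D transfer functions over a CT-like intensity range.
color = vtk.vtkColorTransferFunction()
color.AddRGBPoint(-1000, 0.0, 0.0, 0.0)  # air: black
color.AddRGBPoint(500, 1.0, 0.9, 0.8)    # dense tissue: near white
opacity = vtk.vtkPiecewiseFunction()
opacity.AddPoint(-1000, 0.0)
opacity.AddPoint(500, 0.8)

prop = vtk.vtkVolumeProperty()
prop.SetColor(color)
prop.SetScalarOpacity(opacity)

volume = vtk.vtkVolume()
volume.SetMapper(mapper)
volume.SetProperty(prop)

# Standard renderer / window / interactor scaffolding.
renderer = vtk.vtkRenderer()
renderer.AddVolume(volume)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)
window.Render()
interactor.Start()
```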
17.6.3 MeVisLab

MeVisLab is a C++-based visualization framework for biomedical imaging data [80]. MeVisLab is known for its extensive support for biomedical image processing and visualization algorithms (more than 500 components), both by integrating existing open-source libraries, including the high-performance 3-D visualization library released by Silicon Graphics, the insight toolkit for image processing, and VTK, and by enabling the simple
addition of new algorithms. MeVisLab also offers visual-level programming, scripting support (e.g., Python and NumPy), and C++ programming.
17.7 Conclusions

We have outlined the main visualization techniques for multidimensional and multimodality biomedical imaging data, presented some of the ongoing developments in the rapidly changing field of display technologies, in particular VR and AR wearable displays, and discussed the open-source software development platforms that can be used for rapid visualization development.
17.8 Questions

1. Explain the main processes in the biomedical imaging visualization pipeline using dual-modality PET-CT imaging data.
2. Compare the two 3-D rendering techniques of direct volume rendering (DVR) and surface rendering (both similarities and differences).
3. Explain the role of transfer function (TF) design in DVR visualization. What are the main differences between single-dimensional and multidimensional TFs?
4. TF parameter optimization is an important area of visualization research due to its ability to automate the complex and tedious TF design. Explain the TF parameter optimization concept.
5. For temporal visualization, list the important visual features from image volumes that should be visualized consistently among all series of rendered volumes.
6. Describe the main differences between virtual and augmented reality display technologies and how they can be used in different clinical scenarios.
References
[1] K. Doi, Computer-aided diagnosis in medical imaging: historical review, current status, and future potential, Computerized Medical Imaging and Graphics 31 (2007) 198–211.
[2] S.R. Cherry, T. Jones, J.S. Karp, J. Qi, W.W. Moses, R.D. Badawi, Total-body PET: maximizing sensitivity to create new opportunities for clinical research and patient care, Journal of Nuclear Medicine 59 (2018) 3–12.
[3] H. Song, J. Yun, B. Kim, J. Seo, GazeVis: interactive 3D gaze visualization for contiguous cross-sectional medical images, IEEE Transactions on Visualization and Computer Graphics 20 (2014) 726–739.
[4] A. Van der Gijp, M. Van der Schaaf, I. Van der Schaaf, J. Huige, C. Ravesloot, J. Van Schaik, T.J. ten Cate, Interpretation of radiological images: towards a framework of knowledge and skills, Advances in Health Sciences Education 19 (2014) 565–580.
[5] Y. Jung, J. Kim, D. Feng, M. Fulham, Occlusion and slice-based volume rendering augmentation for PET-CT, IEEE Journal of Biomedical and Health Informatics 21 (2017) 1005–1014.
[6] A.P. Santhanam, C. Imielinska, P. Davenport, P. Kupelian, J.P. Rolland, Modeling real-time 3-D lung deformations for medical visualization, IEEE Transactions on Information Technology in Biomedicine 12 (2008) 257.
[7] D. Jaffray, P. Kupelian, T. Djemil, R.M. Macklis, Review of image-guided radiation therapy, Expert Review of Anticancer Therapy 7 (2007) 89–103.
[8] Y. Masutani, T. Dohi, F. Yamane, H. Iseki, K. Takakura, Augmented reality visualization system for intravascular neurosurgery, Computer Aided Surgery: Official Journal of the International Society for Computer Aided Surgery (ISCAS) 3 (1998) 239–247.
[9] M. Kersten-Oertel, P. Jannin, D.L. Collins, The state of the art of visualization in mixed reality image guided surgery, Computerized Medical Imaging and Graphics 37 (2013) 98–112.
[10] C. Silén, S. Wirell, J. Kvist, E. Nylander, Ö. Smedby, Advanced 3D visualization in student-centred medical education, Medical Teacher 30 (2008) e115–e124.
[11] P. Beylot, P. Gingins, P. Kalra, N.M. Thalmann, W. Maurel, D. Thalmann, J. Fasel, 3D interactive topological modeling using visible human dataset, in: Computer Graphics Forum, 1996, pp. 33–44.
[12] Q. Zhang, R. Eagleson, T.M. Peters, Volume visualization: a technical overview with a focus on medical applications, Journal of Digital Imaging 24 (2011) 640–664.
[13] P. Ljung, J. Krüger, E. Groller, M. Hadwiger, C.D. Hansen, A. Ynnerman, State of the art in transfer functions for direct volume rendering, in: Computer Graphics Forum, 2016, pp. 669–691.
[14] J.J. Caban, A. Joshi, P. Nagy, Rapid development of medical imaging tools with open-source libraries, Journal of Digital Imaging 20 (2007) 83–93.
[15] D.B. Douglas, D. Venets, C. Wilke, D. Gibson, L. Liotta, E. Petricoin, B. Beck, R. Douglas, Augmented reality and virtual reality: initial successes in diagnostic radiology, in: State of the Art Virtual Reality and Augmented Reality Knowhow, IntechOpen, 2018.
[16] Z. Peng, R.S. Laramee, Higher dimensional vector field visualization: a survey, in: TPCG, 2009, pp. 149–163.
[17] A. Brambilla, R. Carnecky, R. Peikert, I. Viola, H. Hauser, Illustrative flow visualization: state of the art, trends and challenges, in: Visibility-oriented Visualization Design for Flow Illustration, 2012.
[18] B. Daly, P.A. Templeton, Real-time CT fluoroscopy: evolution of an interventional tool, Radiology 211 (1999) 309–315.
[19] R.K. Kaza, J.F. Platt, R.H. Cohan, E.M. Caoili, M.M. Al-Hawary, A. Wasnik, Dual-energy CT with single- and dual-source scanners: current applications in evaluating the genitourinary tract, RadioGraphics 32 (2012) 353–369.
[20] W. Brisbane, M.R. Bailey, M.D. Sorensen, An overview of kidney stone imaging techniques, Nature Reviews Urology 13 (2016) 654.
[21] G.T. Herman, Fundamentals of Computerized Tomography: Image Reconstruction from Projections, Springer Science & Business Media, 2009.
[22] R. Wendt (Ed.), The Mathematics of Medical Imaging: A Beginner's Guide, Soc Nuclear Med, 2010.
[23] G.D. Rubin, J. Leipsic, U.J. Schoepf, D. Fleischmann, S. Napel, CT angiography after 20 years: a transformation in cardiovascular disease characterization continues to advance, Radiology 271 (2014) 633–652.
[24] C. Catana, Y. Wu, M.S. Judenhofer, J. Qi, B.J. Pichler, S.R. Cherry, Simultaneous acquisition of multislice PET and MR images: initial results with a MR-compatible PET scanner, Journal of Nuclear Medicine 47 (2006) 1968.
[25] W. Hollingworth, C.J. Todd, M.I. Bell, Q. Arafat, S. Girling, K.R. Karia, A.K. Dixon, The diagnostic and therapeutic impact of MRI: an observational multi-centre study, Clinical Radiology 55 (2000) 825–831.
[26] H. Zaidi, I. El Naqa, PET-guided delineation of radiation therapy treatment volumes: a survey of image segmentation techniques, European Journal of Nuclear Medicine and Molecular Imaging 37 (2010) 2165–2187.
[27] V. Kalff, R.J. Hicks, M.P. MacManus, D.S. Binns, A.F. McKenzie, R.E. Ware, A. Hogg, D.L. Ball, Clinical impact of 18F fluorodeoxyglucose positron emission tomography in patients with non-small-cell lung cancer: a prospective study, Journal of Clinical Oncology 19 (2001) 111–118.
[28] D.W. Townsend, T. Beyer, T.M. Blodgett, PET/CT scanners: a hardware approach to image fusion, in: Seminars in Nuclear Medicine, 2003, pp. 193–204.
[29] D.W. Townsend, T. Beyer, A combined PET/CT scanner: the path to true image fusion, The British Journal of Radiology 75 (2002) S24–S30.
[30] C. Rousseau, E. Bourbouloux, L. Campion, N. Fleury, B. Bridji, J. Chatal, I. Resche, M. Campone, Brown fat in breast cancer patients: analysis of serial 18F-FDG PET/CT scans, European Journal of Nuclear Medicine and Molecular Imaging 33 (2006) 785–791.
[31] K.M. Greven, D.W. Williams III, W.F. McGuirt Sr., B.A. Harkness, R.B. D'Agostino Jr., J.W. Keyes Jr., N.E. Watson Jr., Serial positron emission tomography scans following radiation therapy of patients with head and neck cancer, Head & Neck: Journal for the Sciences and Specialties of the Head and Neck 23 (2001) 942–946.
[32] R.A. Robb, Visualization in biomedical computing, Parallel Computing 25 (1999) 2067–2110.
[33] A. Rosset, L. Spadola, O. Ratib, OsiriX: an open-source software for navigating in multidimensional DICOM images, Journal of Digital Imaging 17 (2004) 205–216.
[34] K.U. Barthel, Volume Viewer, ImageJ Plugin, 2005.
[35] R. Shahidi, B. Lorensen, R. Kikinis, J. Flynn, A. Kaufman, S. Napel, Surface rendering versus volume rendering in medical imaging: techniques and applications, in: Visualization '96. Proceedings, 1996, pp. 439–440.
[36] G. Lohmann, Volumetric Image Analysis, 1998.
[37] W.E. Lorensen, H.E. Cline, Marching cubes: a high resolution 3D surface construction algorithm, in: ACM SIGGRAPH Computer Graphics, 1987, pp. 163–169.
[38] J.K. Udupa, Multidimensional digital boundaries, CVGIP: Graphical Models and Image Processing 56 (1994) 311–323.
[39] E. Gobbetti, F. Marton, J.A.I. Guitián, A single-pass GPU ray casting framework for interactive out-of-core rendering of massive volumetric datasets, The Visual Computer 24 (2008) 797–806.
[40] J. Kruger, R. Westermann, Acceleration techniques for GPU-based volume rendering, in: Proceedings of the 14th IEEE Visualization 2003 (VIS '03), 2003, p. 38.
[41] C. Tominski, S. Gladisch, U. Kister, R. Dachselt, H. Schumann, Interactive lenses for visualization: an extended survey, in: Computer Graphics Forum, 2017, pp. 173–200.
[42] Y. Jung, J. Kim, A. Kumar, D. Feng, M. Fulham, Feature of interest-based direct volume rendering using contextual saliency-driven ray profile analysis, in: Computer Graphics Forum, 2018, pp. 5–19.
[43] J. Kniss, G. Kindlmann, C. Hansen, Interactive volume rendering using multi-dimensional transfer functions and direct manipulation widgets, in: Proceedings of the Conference on Visualization '01, 2001, pp. 255–262.
[44] G. Kindlmann, R. Whitaker, T. Tasdizen, T. Moller, Curvature-based transfer functions for direct volume rendering: methods and applications, in: Visualization, 2003. VIS 2003. IEEE, 2003, pp. 513–520.
[45] J.-S. Praßni, T. Ropinski, J. Mensmann, K. Hinrichs, Shape-based transfer functions for volume visualization, in: Visualization Symposium (PacificVis), 2010 IEEE Pacific, 2010, pp. 9–16.
[46] C. Correa, K.-L. Ma, Size-based transfer functions: a new volume exploration technique, IEEE Transactions on Visualization and Computer Graphics 14 (2008) 1380–1387.
[47] C. Correa, K.-L. Ma, The occlusion spectrum for volume classification and visualization, IEEE Transactions on Visualization and Computer Graphics 15 (2009) 1465–1472.
[48] H. Pfister, B. Lorensen, C. Bajaj, G. Kindlmann, W. Schroeder, L.S. Avila, K. Martin, R. Machiraju, J. Lee, The transfer function bake-off, IEEE Computer Graphics and Applications (2001) 16–22.
[49] J.J. Caban, P. Rheingans, Texture-based transfer functions for direct volume rendering, IEEE Transactions on Visualization and Computer Graphics 14 (2008) 1364–1371.
[50] Y. Wang, W. Chen, J. Zhang, T. Dong, G. Shan, X. Chi, Efficient volume exploration using the Gaussian mixture model, IEEE Transactions on Visualization and Computer Graphics 17 (2011) 1560–1573.
[51] R. Maciejewski, I. Woo, W. Chen, D. Ebert, Structuring feature space: a non-parametric method for volumetric transfer function generation, IEEE Transactions on Visualization and Computer Graphics 15 (2009) 1473–1480.
[52] C.Y. Ip, A. Varshney, J. JaJa, Hierarchical exploration of volumes using multilevel segmentation of the intensity-gradient histograms, IEEE Transactions on Visualization and Computer Graphics (2012) 2355–2363.
[53] C.D. Correa, K.-L. Ma, Visibility histograms and visibility-driven transfer functions, IEEE Transactions on Visualization and Computer Graphics 17 (2011) 192–204.
[54] R. Bramon, M. Ruiz, A. Bardera, I. Boada, M. Feixas, M. Sbert, Information theory-based automatic multimodal transfer function design, IEEE Journal of Biomedical and Health Informatics 17 (2013) 870–880.
[55] K.H. Hohne, M. Bomans, M. Riemer, R. Schubert, U. Tiede, W. Lierse, A volume-based anatomical atlas, IEEE Computer Graphics and Applications 73 (1992) 77–78.
[56] O. Konrad-Verse, A. Littmann, B. Preim, Virtual resection with a deformable cutting plane, in: SimVis, 2004, pp. 203–214.
[57] T. McInerney, S. Broughton, HingeSlicer: interactive exploration of volume images using extended 3D slice plane widgets, in: Proceedings of Graphics Interface 2006, 2006, pp. 171–178.
[58] U.D. Bordoloi, H.-W. Shen, View selection for volume rendering, in: Visualization, 2005. VIS 05. IEEE, 2005, pp. 487–494.
[59] S. Takahashi, I. Fujishiro, Y. Takeshima, T. Nishita, A feature-driven approach to locating optimal viewpoints for volume visualization, in: Visualization, 2005. VIS 05. IEEE, 2005, pp. 495–502.
[60] K. Lawonn, N.N. Smit, K. Bühler, B. Preim, A survey on multimodal medical data visualization, in: Computer Graphics Forum, 2018, pp. 413–438.
[61] Y. Jung, J. Kim, A. Kumar, D.D. Feng, M. Fulham, Efficient visibility-driven medical image visualisation via adaptive binned visibility histogram, Computerized Medical Imaging and Graphics 51 (2016) 40–49.
[62] Y. Jung, J. Kim, S. Eberl, M. Fulham, D.D. Feng, Visibility-driven PET-CT visualisation with region of interest (ROI) segmentation, The Visual Computer 29 (2013) 805–815.
[63] Y. Jung, J. Kim, L. Bi, A. Kumar, D.D. Feng, M. Fulham, A direct volume rendering visualization approach for serial PET-CT scans that preserves anatomical consistency, International Journal of Computer Assisted Radiology and Surgery (2019) 1–12.
[64] P. Milgram, F. Kishino, A taxonomy of mixed reality visual displays, IEICE Transactions on Information and Systems 77 (1994) 1321–1329.
[65] A. Badano, AAPM/RSNA tutorial on equipment selection: PACS equipment overview: display systems, RadioGraphics 24 (2004) 879–889.
[66] P.-D. Dai, T.-Y. Zhang, J.X. Chen, Z.-M. Wang, K.-Q. Wang, A virtual laboratory for temporal bone microanatomy, Computing in Science & Engineering 7 (2005) 75–79.
[67] A. Gronningsaeter, T. Lie, A. Kleven, T. Mørland, T. Langø, G. Unsgård, H. Myhre, R. Mårvik, Initial experience with stereoscopic visualization of three-dimensional ultrasound data in surgery, Surgical Endoscopy 14 (2000) 1074–1078.
[68] D. Douglas, C. Wilke, J. Gibson, J. Boone, M. Wintermark, Augmented reality: advances in diagnostic imaging, Multimodal Technologies and Interaction 1 (2017) 29.
[69] D.B. Douglas, E.F. Petricoin, L. Liotta, E. Wilson, D3D augmented reality imaging system: proof of concept in mammography, Medical Devices (Auckland, NZ) 9 (2016) 277.
[70] M. De Nicola, L. Salvolini, U. Salvolini, Virtual endoscopy of nasal cavity and paranasal sinuses, European Journal of Radiology 24 (1997) 175–180.
[71] F.A. Jolesz, W.E. Lorensen, H. Shinmoto, H. Atsumi, S. Nakajima, P. Kavanaugh, P. Saiviroonporn, S.E. Seltzer, S.G. Silverman, M. Phillips, Interactive virtual endoscopy, AJR. American Journal of Roentgenology 169 (1997) 1229–1235.
[72] G. Kagadis, V. Patrinou, C. Kalogeropoulou, D. Karnabatidis, T. Petsas, G. Nikiforidis, D. Dougenis, Virtual endoscopy in the diagnosis of an adult double tracheal bronchi case, European Journal of Radiology 40 (2001) 50–53.
[73] R. Robb, Virtual endoscopy: development and evaluation using the Visible Human datasets, Computerized Medical Imaging and Graphics 24 (2000) 133–151.
[74] M.G. Hanna, I. Ahmed, J. Nine, S. Prajapati, L. Pantanowitz, Augmented reality technology using Microsoft HoloLens in anatomic pathology, Archives of Pathology & Laboratory Medicine 142 (2018) 638–644.
[75] N.E. Samec, J.G. Macnamara, C.M. Harrises, B.T. Schowengerdt, R. Abovitz, M. Baerenrodt, Methods and systems for diagnosing and treating eyes using laser therapy, in: Google Patents, 2017.
[76] D. Bachmann, F. Weichert, G. Rinkenauer, Review of three-dimensional human-computer interaction with focus on the Leap Motion controller, Sensors 18 (2018) 2194.
[77] C. Chang, Y. Gao, 3D medical image interaction and segmentation using Kinect, in: 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), 2016, pp. 498–501.
[78] J. Meyer-Spradow, T. Ropinski, J. Mensmann, K. Hinrichs, Voreen: a rapid-prototyping environment for ray-casting-based volume visualizations, IEEE Computer Graphics and Applications 29 (2009) 6–13.
[79] W.J. Schroeder, L.S. Avila, W. Hoffman, Visualizing with VTK: a tutorial, IEEE Computer Graphics and Applications 20 (2000) 20–27.
[80] F. Heckel, M. Schwier, H.-O. Peitgen, Object-oriented application development with MeVisLab and Python, GI Jahrestagung 154 (2009) 1338–1351.