Visualization of perfusion abnormalities with GPU-based volume rendering

Computers & Graphics 36 (2012) 163–169

Novel Applications of VR

Tomasz Hachaj a,*, Marek R. Ogiela b

a Institute of Computer Science and Computer Methods, Pedagogical University of Krakow, 2 Podchorazych Avenue, 30-084 Krakow, Poland
b AGH University of Science and Technology, 30 Mickiewicza Avenue, 30-059 Krakow, Poland

* Corresponding author. Tel.: +48 12 662 63 22; fax: +48 12 662 61 66. E-mail addresses: [email protected] (T. Hachaj), [email protected] (M.R. Ogiela).

Article history: Received 12 March 2011; received in revised form 30 November 2011; accepted 11 January 2012; available online 21 January 2012.

Abstract

This article presents an innovative GPU-based solution for the visualization of perfusion abnormalities detected in dynamic brain perfusion computer tomography (dpCT) maps in an augmented-reality environment. This new graphic algorithm is a vital part of a complex system called DMD (detection, measure and description), which was recently proposed by the authors. The benefit of this algorithm over previous versions is its ability to operate in real time to satisfy the needs of augmented reality simulation. The performance speed (in frames per second) of six volume-rendering algorithms was determined for models with and without semi-transparent pixels.

© 2012 Elsevier Ltd. All rights reserved.

Keywords: Augmented reality; Dynamic brain perfusion; Pattern recognition; Diagnostic support system

1. Introduction

This article presents a GPU-based solution for the visualization of perfusion abnormalities detected in dynamic brain perfusion computer tomography (dpCT) maps. This new graphic algorithm is a vital part of a complex diagnosis support system called DMD (detection, measure and description), which was recently proposed by the authors [1]. The system enables the quantitative and qualitative analysis of head injuries, epilepsy, brain vascular disease, ischemic and hemorrhagic stroke, and other pathologies that change blood perfusion. The new implementation of the DMD also includes an intuitive augmented reality (AR) visualization module that is an extension of the authors' previous work [2]. The recent literature on AR systems reveals several significant directions for current research, including:

- tracking techniques,
- display technologies,
- mobile augmented reality,
- interaction techniques.

Tracking techniques comprise a group of computer methods that are used to achieve a robust and accurate overlay of virtual imagery. Current optical tracking techniques include fitting a projection of a 3D model onto detected features in the video image, matching a video frame with still photos, and real-time image registration algorithms [3]. One of the most popular methods is to calculate the 3D position of a camera relative to a single square tracking marker [4].

Commonly used display technologies are head-mounted, hand-held and projection displays for AR. Many contemporary mobile devices, such as iPods and cell phones, can execute sophisticated algorithms and are equipped with high-resolution color displays and cameras. For those devices, mobile AR applications for outdoor usage can be developed. Interaction techniques enable user interaction with AR content. This group also contains augmented reality simulators (e.g., suturing training [5] or laparoscopic simulators [6]). While most virtual reality simulators lack realistic haptic feedback, augmented reality simulation systems offer a combination of physical objects and VR simulation.

Augmented reality has proven useful, especially in the medical field [7]. The most notable examples are deformable body atlases, AR surgical navigation systems, interfaces and visualization systems [8,9]. Augmented reality aims at improving support for pre-, intra- and post-operative visualization of clinical data by presenting more informative and realistic 3D visualizations [10]. Not many years ago, medical AR systems required expensive hardware configurations and dedicated operating systems to handle real-time superimposition and visualization [3]. Most publications in the field of medical AR concentrate on developing new types of systems rather than improving the speed or portability of existing ones. The authors' intention in this article is to fill this gap. Our goal is to create a portable augmented reality interface for the visualization of volumetric medical data in a commercial OS that can be used with off-the-shelf hardware to test its capability. The main advantages of the proposed solution are the speed and quality of rendering a virtual object as well as the low hardware cost.

To achieve that goal, three basic conditions must be satisfied. First, the software must be written in managed code that can run on a virtual machine in order to be independent of the GPU hardware vendor. Second, the algorithm must be capable of detecting the position of virtual objects in real scenes in real time. Third, it must be able to perform real-time rendering of volumetric medical data. All of these conditions were satisfied by the authors' solution based on the .NET platform, NyARToolkit CS and the XNA Framework 2.0. The authors implemented this solution in C# and tested the application on the .NET Framework 2.0, the DirectX 9.0c library and the XNA Framework Redistributable 2.0. These components are all freely available for download. The algorithm requires hardware that supports at least DirectX 9.0c and Shader Model 2.0; these requirements are satisfied by all commercial models of GPUs.

The main drawback of the proposed implementation is the difficulty of running the software on non-Microsoft operating systems. This problem can be solved in the future with cross-platform, open-source .NET and XNA framework ports such as Mono (Mono.Xna or MonoGame), but this strategy requires further research. The application may be virtualized on a different operating system with a proper Microsoft OS emulator, but the authors did not find a universal multi-platform solution that assured proper rendering speed. The authors' system enables real-time visualization of high-resolution 2D and 3D medical data on any device supporting the .NET framework with the XNA module. Three-dimensional volume rendering also requires GPU support that may not be included in small ubiquitous computing devices, but it has quickly become standard in laptops and gaming consoles.

The new solution presented in this publication is an innovative fusion of an advanced diagnostic support algorithm with an easy-to-use and intuitive augmented reality interface. Many researchers who apply AR to medical systems (for example, [11]) use the common GPU ray-casting implementation included in the C++ OpenGL libraries. The authors decided to inspect three types of volume rendering algorithms at different viewport sizes and measure their performance speed in frames per second while rendering volume data from computer tomography (CT) images with superimposed dpCT. This was necessary because it is difficult to find reliable and up-to-date information on the performance of different visualization techniques on contemporary computer hardware with a portable software implementation. Direct volume-rendering algorithms with hardware-accelerated linear interpolation of volume voxels are commonly accepted and well-described visualization methods for scalar fields in computer medicine. Because the quality of the rendered image (e.g., the level of fringing artifacts) for those methods has been reported to be sufficiently good for medical usage, rendering speed was the main goal of this research. Superimposing the dpCT data onto another medical imaging modality, such as volume CT, enables physicians to perform more detailed diagnoses by simultaneously presenting dynamic and static aspects of brain functionality.

In the second section, the authors introduce the DMD system in more detail, specifying the input and output data of the algorithm. The third section contains a short description of the GPU-based volume rendering algorithms tested by the authors. The last two sections present the results of the performed tests and the conclusions.
The authors' achievements are not directly related to augmented reality as such; rather, AR is only one possible application. The results can be applied to any GPU-based 3D scalar field visualization problem.

2. Detection of perfusion abnormalities

The analysis of dpCT images proposed by the authors is a fusion of image processing, pattern recognition and image analysis procedures. All of these stages will be described below. The input data for the DMD algorithm are a cerebral blood flow (CBF) map, a cerebral blood volume (CBV) map and a CT image. Because the software used in this study is capable of automatically detecting and describing both ischemic and hemorrhagic anomalies, it has a wider potential usage than other commercial solutions. It can display quantitative color maps of cerebral blood flow, cerebral blood volume and time-to-peak, providing valuable clinical information that can assist in treatment planning. Detailed information on perfusion parameters and anatomy allows clinicians to make reasonably accurate predictions in individual patients about which brain tissue ought to be the target of medical treatment. A detailed comparison of state-of-the-art dpCT analysis methods that justifies the authors' choice of the DMD system is presented in [12].

2.1. Image processing

The image-processing step consists of a lesion detection algorithm and an image registration algorithm. The lesion detection algorithm finds potentially pathologic tissues (regions of interest, ROI). The image registration algorithm is used to create a deformable brain atlas that can facilitate a detailed description of visible tissue. Both algorithms are briefly described below.

2.1.1. Lesion detection

The algorithm used for detecting potential lesions is the Unified Algorithm described in detail in [1]; thus, only a basic conceptual overview is presented here. To perform a perfusion image analysis, it is necessary to find the line that separates the left brain hemisphere from the right one (referred to here as the "symmetry axis"). In many CT scans, the head of the patient is at a small angular offset to the tomography bed, making the symmetry axis non-orthogonal to the OX axis. The symmetry axis of the image is not always the same as the line that separates the brain hemispheres, complicating any location algorithm. In commercial software, the symmetry axis is selected manually. The algorithm proposed here derives the symmetry axis by computing centers of mass of horizontal image "slices". The symmetry axis is then approximated as the straight line that minimizes the least-squares error over all center-of-mass coordinates (a sketch of this procedure is given below). After computing the symmetry axis, the values of corresponding pixels on the left and right sides of the image are divided, which generates the asymmetry map. The asymmetry map is thresholded to find differences of sufficient value. Then, morphological operations are used to eliminate small asymmetries that cannot be classified as lesions. The same lesion detection algorithm can be used with both CBF and CBV maps. After image processing, two symmetric regions are detected in the left and right hemispheres. The position of the lesion and its type (ischemic or hemorrhagic) is not determined unambiguously in this image analysis process.
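A minimal C# sketch of this symmetry-axis derivation is given below: a center of mass is computed for every horizontal image slice and a straight line is fitted to these points by least squares. The 2D array layout, the background threshold and all identifiers are illustrative assumptions, not the authors' implementation.

    // Sketch: symmetry axis from per-slice centers of mass + least-squares fit.
    using System.Collections.Generic;

    static class SymmetryAxis
    {
        // Returns (a, b) of the axis x(y) = a*y + b fitted to the centers of
        // mass of all horizontal slices of the image.
        public static double[] Fit(double[,] image, double backgroundLevel)
        {
            int rows = image.GetLength(0), cols = image.GetLength(1);
            List<double> ys = new List<double>(), xs = new List<double>();

            for (int y = 0; y < rows; y++)
            {
                double mass = 0.0, moment = 0.0;
                for (int x = 0; x < cols; x++)
                {
                    double v = image[y, x];
                    if (v <= backgroundLevel) continue; // skip air/background
                    mass += v;
                    moment += v * x;
                }
                if (mass > 0.0) { ys.Add(y); xs.Add(moment / mass); } // slice center of mass
            }

            // Least-squares line through the centers of mass; the head may be
            // tilted, so the axis is not assumed to be vertical.
            int n = ys.Count;
            double sy = 0, sx = 0, syy = 0, syx = 0;
            for (int i = 0; i < n; i++)
            {
                sy += ys[i]; sx += xs[i];
                syy += ys[i] * ys[i]; syx += ys[i] * xs[i];
            }
            double a = (n * syx - sy * sx) / (n * syy - sy * sy);
            double b = (sx - a * sy) / n;
            return new double[] { a, b };
        }
    }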
2.1.2. Image registration and atlas construction

The anatomy atlas (AA) approach to labeling brain images is a reliable and broadly used method [13]. The important issue is to choose the proper image registration technique. The goal of a registration algorithm is to minimize errors and maximize similarities between two images: the template image (also called the static image) and the newly acquired image (the moving image). Because 2D brain images are registered and the gantry tilt may differ between the template and moving images, it is not possible to use methods that are based on the detection of landmarks. To overcome this limitation, the authors used an intensity-based approach. The correlation coefficient was used as a function that measures the similarity between images before and after the registration process:


cc(A,B) = \frac{\sum_{i=1}^{n}\sum_{j=1}^{m}\left(A_{ij}-\bar{A}\right)\left(B_{ij}-\bar{B}\right)}{\sqrt{\sum_{i=1}^{n}\sum_{j=1}^{m}\left(A_{ij}-\bar{A}\right)^{2}\;\sum_{i=1}^{n}\sum_{j=1}^{m}\left(B_{ij}-\bar{B}\right)^{2}}}    (1)

where \bar{A} and \bar{B} denote the average pixel values of images A and B, respectively.

Registration based on an affine transformation captures only the global motion of the brain. In many cases, an additional transformation is required that models the local deformation of tissues. The nature of the local deformation of the brain can vary significantly across patients and with age. Therefore, it is difficult to describe local deformation via parameterized transformations. Instead, the authors employ the free-form deformation (FFD) model based on B-splines, which has been successfully used in the registration of breast MR images [14], CT liver images [15] and the tracking and motion analysis of cardiac images [16]. FFDs deform an object by manipulating an underlying mesh of control points. The resulting deformation controls the shape of the 3D or 2D object and produces a smooth and continuous transformation. For registration purposes, the authors have chosen the FFD algorithm proposed in [17]. One familiar technique for representing a nonrigid transformation is to employ spline functions such as B-splines. Because B-splines are locally controlled, they are computationally efficient compared to other globally controlled splines:

ffd(x,y) = \sum_{l=0}^{3}\sum_{m=0}^{3} b_{l}(u)\, b_{m}(v)\, \phi_{i+l,\,j+m}    (2)

The parameters \phi_{i,j} are the deformation coefficients, defined on a sparse, regular grid of control points placed over the moving image, and b_{0} through b_{3} are third-order B-spline polynomials [17]. The minimum of the error function (1) was found using a gradient descent method. A sketch of both formulas is given below.
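Illustrative C# versions of Eqs. (1) and (2) follow; the control-grid indexing and boundary handling are simplifying assumptions (see [17] for the complete algorithm).

    using System;

    static class Registration
    {
        // Eq. (1): correlation coefficient between equally sized images A and B.
        public static double Cc(double[,] a, double[,] b)
        {
            int n = a.GetLength(0), m = a.GetLength(1);
            double meanA = 0.0, meanB = 0.0;
            foreach (double v in a) meanA += v;
            foreach (double v in b) meanB += v;
            meanA /= n * m; meanB /= n * m;

            double num = 0.0, da = 0.0, db = 0.0;
            for (int i = 0; i < n; i++)
                for (int j = 0; j < m; j++)
                {
                    double x = a[i, j] - meanA, y = b[i, j] - meanB;
                    num += x * y; da += x * x; db += y * y;
                }
            return num / Math.Sqrt(da * db);
        }

        // Cubic B-spline basis functions b0..b3 of Eq. (2).
        static double B(int l, double t)
        {
            switch (l)
            {
                case 0: return (1 - t) * (1 - t) * (1 - t) / 6.0;
                case 1: return (3 * t * t * t - 6 * t * t + 4) / 6.0;
                case 2: return (-3 * t * t * t + 3 * t * t + 3 * t + 1) / 6.0;
                default: return t * t * t / 6.0;
            }
        }

        // Eq. (2): FFD displacement at (x, y) from one component of the control
        // grid phi with spacing (dx, dy); phi is assumed padded so the indices
        // i+l, j+m stay in range.
        public static double Ffd(double[,] phi, double x, double y, double dx, double dy)
        {
            int i = (int)Math.Floor(x / dx), j = (int)Math.Floor(y / dy);
            double u = x / dx - i, v = y / dy - j; // local cell coordinates in [0,1)

            double s = 0.0;
            for (int l = 0; l <= 3; l++)
                for (int m = 0; m <= 3; m++)
                    s += B(l, u) * B(m, v) * phi[i + l, j + m];
            return s;
        }
    }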

2.2. Image analysis

Defining the features of the entire image after lesion detection is an easy task. The algorithm measures several features that are important from a medical point of view (a sketch of these measurements follows the list):

- perfusion in the ROI of the left and right hemispheres;
- relative perfusion (perfusion in the ROI of the left hemisphere divided by perfusion in the ROI of the right hemisphere, and vice versa);
- size of the ROI.
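As a sketch, the measurements above reduce to a few lines of C#; the mean-over-ROI definition and the pixelArea parameter (taken from the DICOM pixel-spacing tags) are assumptions for illustration.

    using System;

    static class RoiFeatures
    {
        // leftRoi/rightRoi: perfusion values of the ROI pixels in each
        // hemisphere; pixelArea: area of one pixel in mm^2 (from DICOM).
        public static void Measure(double[] leftRoi, double[] rightRoi, double pixelArea)
        {
            double left = Mean(leftRoi), right = Mean(rightRoi);
            Console.WriteLine("L = {0:F1}, R = {1:F1}, L/R = {2:F2}, R/L = {3:F2}, size = {4:F0} mm^2",
                left, right, left / right, right / left, leftRoi.Length * pixelArea);
        }

        static double Mean(double[] v)
        {
            double s = 0.0;
            foreach (double x in v) s += x;
            return s / v.Length;
        }
    }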

The scaling factors between the perfusion map and the "real brain" can be derived directly from the DICOM files.

2.3. Pattern recognition

In the pattern recognition step, the algorithm determines what type of lesion has been detected and in which hemisphere. This step requires medical knowledge of average perfusion values. After the image-processing step, two symmetric regions are detected in the left and right hemispheres. The algorithm compares perfusion in the left and right (symmetrical) ROIs with average perfusion norms and places the potential lesion in the hemisphere where the modulus of the difference between this average and the ROI value is greater. After this step, the type of lesion (hemorrhagic or ischemic) can be determined simply by checking whether perfusion in the ROI is greater or smaller than the average. The last step of the algorithm is to state the prognosis for lesion evolution in the examined brain tissues. In many cases, simultaneous analysis of both CBF and CBV perfusion parameters enables accurate diagnosis of ischemia in visualized brain tissues and can predict further changes.


This permits not only a qualitative but also a quantitative evaluation of the severity of the perfusion disturbance resulting from the particular type of occlusion and collateral blood perfusion. The algorithm analyzes both perfusion maps simultaneously to detect (see the sketch after this list):

- tissues that can be salvaged (tissues that are present on the CBF and CBV asymmetry maps and whose rCBF values are not below 0.48 [18]);
- tissues that will eventually become infarcted (tissues that are present on the CBF and CBV asymmetry maps and whose rCBF values are below 0.48 [18]);
- tissues with an auto-regulation mechanism in the ischemic region (decreased CBF with correct or increased CBV).
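The decision rules of the pattern recognition step can be sketched as below; the perfusion norm value, the exact rCBF definition and all identifiers are illustrative assumptions rather than the authors' implementation.

    using System;

    static class LesionRules
    {
        public enum Fate { Salvageable, WillInfarct, AutoRegulation }

        // The lesion lies in the hemisphere whose mean ROI perfusion deviates
        // more from the norm; it is hemorrhagic if that value exceeds the norm.
        public static void Classify(double leftRoi, double rightRoi, double norm,
                                    out bool inLeftHemisphere, out bool hemorrhagic)
        {
            inLeftHemisphere = Math.Abs(leftRoi - norm) > Math.Abs(rightRoi - norm);
            double lesion = inLeftHemisphere ? leftRoi : rightRoi;
            hemorrhagic = lesion > norm;
        }

        // Tissue fate from the CBF/CBV asymmetry maps; rCBF is assumed to be the
        // lesion-side CBF relative to the contralateral side, thresholded at 0.48 [18].
        public static Fate Prognosis(double rCbf, bool cbvDecreased)
        {
            if (!cbvDecreased) return Fate.AutoRegulation; // CBV correct or increased
            return rCbf < 0.48 ? Fate.WillInfarct : Fate.Salvageable;
        }
    }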

Potential lesions should be compared with a corresponding CT/MR image. This process enables proper treatment planning. The detailed (AA-based) description of the relevant images has been presented elsewhere [19]. In sum, the algorithm's output consists of regions of perfusion abnormalities, an AA description of brain tissues, measures of perfusion parameters and a prognosis for infarcted tissues. This information is superimposed onto volumetric CT data and displayed for the radiologist.

3. GPU-based volume rendering

Volume rendering describes a wide range of techniques for generating images from 3D scalar data [20]. These techniques were originally motivated by scientific visualization, where volume data (3D arrays of voxels) are acquired by the measurement or numerical simulation of natural phenomena. Typical examples are medical data on the interior of the human body obtained by computer tomography (CT) or magnetic resonance imaging (MRI).

The power of GPUs is increasing much faster than that of CPUs. This trend forces many computer programmers to move the burden of visual algorithm computation to GPU processors. Additionally, hardware support for the interpolation of 2D and 3D texture data enables the rendering of complex volumetric images in real time. Currently, graphics programmers utilize high-level languages (such as HLSL) that support the implementation of parallel algorithms on GPUs. The main obstacle that must be overcome when creating a volume-rendering algorithm is that contemporary graphics hardware still does not support the direct rendering of volumetric data.

There are two main groups of algorithms that support fast visualization of volumetric data with hardware-accelerated interpolation: ray-casting algorithms and texture-based algorithms. Both groups can produce almost identical visualization results, but their schemas are quite different. Moreover, the difference in performance speed between these two algorithm types is an important factor for the augmented reality environment. The algorithms described below use hardware-supported tri-linear interpolation and pre-computed volume gradients for the purpose of illumination. The tissue density and gradient data are passed to the GPU memory as 3D textures with two-byte precision per voxel.

3.1. Volume ray-casting algorithms

To render each pixel of the image, the ray-casting process casts a single ray from the eye through the pixel's center into the volume and integrates the optical properties obtained from the encountered volume densities along the ray. This algorithm uses standard front-to-back blending equations to find the color and opacity of rendered pixels:


C_{dst} \leftarrow C_{dst} + (1-\alpha_{dst})\,\alpha_{src}\,C_{src}
\alpha_{dst} \leftarrow \alpha_{dst} + (1-\alpha_{dst})\,\alpha_{src}

where C_{dst} and \alpha_{dst} are the color and opacity values of the rendered pixels and C_{src} and \alpha_{src} are the color and opacity values of the incoming fragment. The approach proposed in [21] includes standard acceleration techniques for volume ray casting, such as early ray termination and empty-space skipping. By means of these acceleration techniques, the framework is capable of efficiently rendering large volumetric data sets that include opaque structures with occlusion effects and empty regions. A CPU-side sketch of this compositing loop is given below.
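The following C# reference sketch shows the front-to-back compositing with early ray termination (the authors' version runs in an HLSL pixel shader; the 0.99 opacity cut-off is an assumed threshold).

    static class RayCasting
    {
        // samples: transfer-function-mapped (r, g, b, a) values at equidistant
        // positions along one ray, ordered front to back.
        public static double[] Composite(double[][] samples)
        {
            double r = 0, g = 0, b = 0, a = 0;
            foreach (double[] s in samples)
            {
                double w = (1 - a) * s[3];   // (1 - alpha_dst) * alpha_src
                r += w * s[0]; g += w * s[1]; b += w * s[2];
                a += w;
                if (a > 0.99) break;         // early ray termination
            }
            return new double[] { r, g, b, a };
        }
    }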

3.2. Texture-based algorithms

The ray-casting approach is a classic image-order approach because it divides the resulting image into pixels and then computes the contribution of the entire volume to each pixel [20]. Image-order approaches, however, are contrary to the way GPU hardware generates images. Graphics hardware generally uses an object-order approach that divides the object into primitives and then calculates which set of pixels each primitive influences. To perform volume rendering in an object-order approach, resampling locations are generated by rendering proxy geometry with interpolated texture coordinates. The most basic proxy geometry is a stack of planar slices, all of which are aligned with one of the major axes of the volume (the x, y or z axis) [20]. During rendering, the stack with slices most parallel to the viewing direction is chosen. To minimize switching artifacts, inter-slice interpolation should be included [20]. A more complex proxy geometry uses slices aligned with the viewport [20]. Such slices closely mimic the sampling used by the ray-casting algorithm, and the number of slices required for accurate visualization can easily be adjusted during the algorithm's run. A conceptual sketch of this slicing scheme is given below.
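A conceptual C# sketch of the back-to-front slice traversal follows; drawQuadAtDepth stands in for the XNA draw call that rasterizes one textured proxy quad, and the uniform slice spacing is an assumption.

    using System;

    static class SliceRenderer
    {
        // Draws sliceCount view-aligned proxy quads back to front; each quad is
        // textured by hardware tri-linear lookups into the 3D volume texture
        // and composited with standard alpha blending.
        public static void Render(int sliceCount, double volumeDepth,
                                  Action<double> drawQuadAtDepth)
        {
            for (int i = sliceCount - 1; i >= 0; i--)
            {
                double depth = volumeDepth * (i + 0.5) / sliceCount; // slice position along the view direction
                drawQuadAtDepth(depth);
            }
        }
    }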

4. Results

The current visualization solution was tested on a consumer-quality PC with an Intel Core 2 Duo 3.00 GHz processor, 3.25 GB RAM and an Nvidia GeForce 9600 GT graphics card, running 32-bit Windows XP Professional. All algorithms were implemented in managed DirectX SDK in C# and HLSL. The authors tested six volume-rendering algorithms:

Algorithm 1. Texture-based volume rendering with 512 view-aligned slices.
Algorithm 2. Texture-based volume rendering with 256 view-aligned slices.
Algorithm 3. Volume ray casting with empty-space skipping.
Algorithm 4. Volume ray casting without empty-space skipping.
Algorithm 5. Texture-based volume rendering with image-aligned slices.
Algorithm 6. Texture-based volume rendering with image-aligned slices and inter-slice interpolation.

Model 1 had dimensions of 256 × 256 × 221 voxels, model 2 had 256 × 256 × 207 voxels, and model 3 had 256 × 256 × 248 voxels. Each model was rendered using a transfer function without (Table 1) and with (Table 2) semi-transparent pixels. The data from Tables 1 and 2 are also presented graphically in Fig. 2. Average performance speed was computed for all algorithms; each algorithm rendered three different volumetric models (Figs. 1 and 2). There is no visible difference in the quality of the rendered volume data among the algorithms (Fig. 3).

Table 1
Average speed of rendering 3D models by viewport size, in frames per second. Transfer function without semi-transparent pixels.

Screen size (megapixels)    Alg. 1   Alg. 2   Alg. 3   Alg. 4   Alg. 5   Alg. 6
800 × 600    (0.48)          20.00    39.33    58.33    33.67    20.00    20.33
832 × 624    (0.52)          18.33    36.67    54.33    32.00    19.00    19.00
960 × 600    (0.54)          19.67    38.33    58.33    33.33    20.00    20.33
1088 × 612   (0.67)          19.00    38.00    57.33    31.00    19.67    20.00
1024 × 768   (0.79)          12.67    25.00    46.00    21.67    13.00    13.33
1280 × 720   (0.92)          14.33    29.00    48.33    25.33    14.33    14.67
1280 × 768   (0.98)          12.33    25.00    46.00    21.00    13.00    13.33
1152 × 864   (1.00)          10.33    20.00    40.33    18.33    10.00    10.33
1280 × 800   (1.02)          11.00    23.33    45.33    20.33    12.33    12.33
1280 × 960   (1.23)           8.33    16.67    34.67    15.00     8.67     8.33
1280 × 1024  (1.31)           7.33    15.00    33.00    12.67     7.00     7.00

Table 2
Average speed of rendering 3D models by viewport size, in frames per second. Transfer function with semi-transparent pixels.

Screen size (megapixels)    Alg. 1   Alg. 2   Alg. 3   Alg. 4   Alg. 5   Alg. 6
800 × 600    (0.48)          20.33    39.33    32.33    23.00    20.00    20.33
832 × 624    (0.52)          18.33    36.67    30.33    22.00    19.00    19.00
960 × 600    (0.54)          20.00    38.33    32.33    22.67    20.00    20.33
1088 × 612   (0.67)          19.67    37.67    31.00    22.33    19.67    20.00
1024 × 768   (0.79)          12.00    25.00    22.33    14.67    13.00    13.33
1280 × 720   (0.92)          14.00    29.00    24.33    16.67    14.33    14.67
1280 × 768   (0.98)          12.67    25.00    21.33    14.67    13.00    13.33
1152 × 864   (1.00)          10.33    20.00    18.33    13.33    10.00    10.33
1280 × 800   (1.02)          11.00    23.33    20.67    13.33    12.33    12.33
1280 × 960   (1.23)           8.33    16.67    16.00     9.67     8.67     8.33
1280 × 1024  (1.31)           7.33    15.00    14.00     8.00     7.00     7.00


Fig. 1. Three different models based on volume CT data. Top row – models using a transfer function without semi-transparent pixels. Bottom row – models using a transfer function with semi-transparent pixels.

Fig. 2. Data from Tables 1 and 2 presented as charts. Average speed of rendering 3D models by viewport size (megapixels) in frames per second. Left: models without semi-transparent pixels (Table 1). Right: models with semi-transparent pixels (Table 2).

Fig. 3. Left: three AR markers. Right: example visualization of volume CT data in augmented reality with three different GPU-based volume-rendering algorithms. From left to right: texture-based with image-aligned slices, texture-based with view-aligned slices, volume ray casting.

For visualizing the superimposition of dpCT onto volume CT data in AR, the authors recommend Algorithm 3. This algorithm is more than 1.5 times faster than the second-fastest (Algorithm 2) for models without semi-transparent pixels, and it has asymptotically the same speed as Algorithm 2 for models with semi-transparent pixels. An example visualization of the DMD system output in AR is presented in Fig. 4.

A validation of this DMD solution on a large corpus of real medical data is described in [1]. The system was validated on a set of 75 triplets of medical images acquired from 30 different adult patients (male and female) with indications of ischemia/stroke.


Fig. 4. Example of the superimposition of dpCT onto volume CT data in AR. (A) CBV map with marked perfusion abnormalities. (B) Detailed view of CBV and diagnostic image. (C) Volume CT data. (D) Detailed view of diagnostic image. In the upper left of each image is a description generated by the DMD system.

Each triplet consisted of perfusion CBF and CBV maps and a "plain" CT image for anatomical atlas purposes. The algorithm's response was compared to the radiologist's report for each case. In total, 77.3% of the tested maps were correctly classified, and the visible lesions were detected and described by the DMD system identically to the human radiologist. This is an acceptable level for a supplementary advisory system.

5. Conclusions

The requirements for contemporary computer-aided diagnostic (CAD) systems are constantly increasing. To be accepted by medical professionals, a CAD system must be not only reliable but also fast and intuitive to use. The solution presented in this publication satisfies all of these requirements. Data from the perfusion abnormality detection algorithm can be instantly displayed with a volume-rendering technique. The superimposition of perfusion data on volumetric CT images supports the diagnostic process and helps radiologists interpret visualized symptoms.

The AR module supports the visualization of clinical data by presenting more informative and realistic 3D visualizations. The image data can be intuitively rotated and zoomed in and out simply by changing its distance from the viewer's eyes. If the image is near enough to the camera, a description of the image generated by the DMD algorithm is displayed in the upper left corner. For viewport resolutions less than 1088 × 612, the visualization algorithm performs faster than 30 fps even with semi-transparent image data.

In addition to the proposed detection and visualization algorithms, the authors present the results of comparative research on the speed of popular GPU-based volume rendering algorithms. These timely findings are valuable for all researchers who plan to utilize similar solutions in their future work. Future directions for this research will focus on integrating the DMD system with a marker-less, template-tracking AR system [22] that might be easier to introduce into routine medical practice. To achieve the full functionality of this solution on a portable device, the application must be deployed on hardware with a GPU that supports programmable shaders. In the authors' opinion, this hardware will become standard within a few years.

Acknowledgments

This work has been supported by the National Science Centre, Republic of Poland, under project number N N516 511939.

References

[1] Hachaj T, Ogiela MR. CAD system for automatic analysis of CT perfusion maps. Opto-Electron Rev 2011;19(1):95–103.
[2] Hachaj T, Ogiela MR. Augmented reality interface for visualization of volumetric medical data. In: Image processing and communications challenges 2. Berlin, Heidelberg: Springer-Verlag; 2010. p. 271–7.
[3] Warfield S, et al. Advanced nonrigid registration algorithms for image fusion. In: Brain mapping: the methods. 2nd ed. San Diego: Academic Press; 2002. p. 661–90.
[4] Kato H, Billinghurst M. Marker tracking and HMD calibration for a video-based augmented reality conferencing system. In: Proceedings of the 2nd international workshop on augmented reality (IWAR 99); 1999. p. 85–94.
[5] Botden SMBI, de Hingh IHJT, Jakimowicz JJ. Suturing training in augmented reality: gaining proficiency in suturing skills faster. Surg Endosc 2009;23(9):2131–7, doi:10.1007/s00464-008-0240-2.
[6] Botden SMBI, Buzink SN, Schijven MP, Jakimowicz JJ. Augmented versus virtual reality laparoscopic simulation: what is the difference? A comparison of the ProMIS augmented reality laparoscopic simulator versus the LapSim virtual reality laparoscopic simulator. World J Surg 2007;31(4):764–72, doi:10.1007/s00268-006-0724-y.
[7] Yang G, Jiang T, editors. Medical imaging and augmented reality. 2nd international workshop, MIAR 2004; 2004.
[8] Soler L, Forest C, Nicolau S, Vayssiere C, Wattiez A, Marescaux J. Computer-assisted operative procedure: from preoperative planning to simulation. Eur Clin Obstet Gynaecol 2006;2(4):201–8, doi:10.1007/s11296-006-0055-4.
[9] Marmulla R, Hoppe H, Mühling J, Eggers G. An augmented reality system for image-guided surgery. Int J Oral Maxillofac Surg 2005;34(6):594–6.
[10] Denis K, et al. Integrated medical workflow for augmented reality applications. In: International workshop on augmented environments for medical imaging and computer-aided surgery (AMI-ARCS); 2006.
[11] Kutter O, Aichert A, Bichlmeier C, Traub J, Heining SM, Ockert B, et al. Real-time volume rendering for high quality visualization in augmented reality. In: International workshop on augmented environments for medical imaging including augmented reality in computer-aided surgery (AMI-ARCS 2008), New York, USA; 2008.
[12] Hachaj T, Ogiela MR. A system for detecting and describing pathological changes using dynamic perfusion computer tomography brain maps. Comput Biol Med 2011;41:402–10.
[13] Thompson PM, Mega MS, Narr KL, Sowell ER, Blanton RE, Toga AW. Brain image analysis and atlas construction. In: Handbook of medical imaging, chapter 17. SPIE; 2000. p. 1066–119.
[14] Rueckert D, Sonoda LI, Hayes C, Hill DLG, Leach MO, Hawkes DJ. Nonrigid registration using free-form deformations: application to breast MR images. IEEE Trans Med Imaging 1999;18(8):712–21.
[15] Ino F, Ooyama K, Hagihara K. A data distributed parallel algorithm for nonrigid image registration. Parallel Comput 2005.
[16] Bardinet E, Cohen LD, Ayache N. Tracking and motion analysis of the left ventricle with deformable superquadrics. Med Image Anal 1996;1(2):129–49.
[17] Parraga A, Pettersson J, Susin A, De Craene M, Macq B. Non-rigid registration methods assessment of 3D CT images for head-neck radiotherapy. In: Medical imaging 2007: image processing. SPIE; 2007.
[18] Koenig M, Kraus M, Theek C, Klotz E, Gehlen W, Heuser L. Quantitative assessment of the ischemic brain by means of perfusion-related parameters derived from perfusion CT. Stroke 2001;32(2):431–7.
[19] Hachaj T. The registration and atlas construction of noisy brain computer tomography images based on free form deformation technique. Bio-Algorithms Med-Syst 2008;7:43–50. Collegium Medicum, Jagiellonian University.
[20] Engel K, Hadwiger M, Kniss J, Rezk-Salama C, Weiskopf D. Real-time volume graphics. CRC Press; 2006.
[21] Kruger J, Westermann R. Acceleration techniques for GPU-based volume rendering. In: Proceedings of the 14th IEEE visualization conference (VIS'03); 2003.
[22] Lin L, Wang Y, Liu Y, Xiong C, Zeng K. Marker-less registration based on template tracking for augmented reality. Multimed Tools Appl 2009;41(2):235–52.