European Journal of Radiology 81 (2012) e406–e413

Short communication

Clinical significance of creative 3D-image fusion across multimodalities [PET + CT + MR] based on characteristic coregistration

Matthew Jian-qiao Peng a, Xiangyang Ju b, Balvinder S. Khambay c, Ashraf F. Ayoub c, Chin-Tu Chen d, Bo Bai a,*

a Department of Joint Surgery, 1st Affiliated Hospital of Guangzhou Medical College & Orthopedic Implantation Key Lab of Guangdong Province, China
b Glasgow Dental Hospital and School, University of Glasgow, UK & Medical Device Unit, NHS Greater Glasgow and Clyde, UK
c Glasgow Dental Hospital and School, University of Glasgow, UK
d Functional & Molecular Imaging Research Institute, Pritzker Medical School, University of Chicago, USA

* Corresponding author. Tel.: +86 20 8306 2680; fax: +86 20 8306 2680. E-mail address: [email protected] (B. Bai).

Article history: Received 27 October 2011; received in revised form 10 December 2011; accepted 12 December 2011.

Keywords: PET/MRI/CT; characteristic registration; hybrid detector; cross-modality image fusion

Abstract

Objective: To investigate a 2-dimensional (2D) registration approach based on characteristic localization and to achieve 3-dimensional (3D) fusion of PET, CT and MR images pairwise and in combination.

Method: A cube-oriented "9-point & 3-plane" co-registration scheme was verified to be geometrically practical. After acquiring DICOM data from PET/CT/MR (PET traced with the radiotracer 18F-FDG, among others), the images were reconstructed in 3D and virtually dissected, and internal human feature points were sorted and combined with preselected external feature points for the matching process. Following the procedure of feature extraction and image mapping, "picking points to form planes" and "picking planes for segmentation" were executed. Finally, image fusion was implemented on the real-time workstation Mimics using the auto-fusion techniques known as "information exchange" and "signal overlay".

Result: The 2D and 3D images fused across the modalities [CT + MR], [PET + MR], [PET + CT] and [PET + CT + MR] were tested on data from patients suffering from tumors. Complementary 2D/3D images simultaneously presenting metabolic activity and anatomic structure were created, with detectable rates of 70%, 56%, 54% (98% for the hybrid scanner) and 44%, with no statistically significant difference among them.

Conclusion: Given that no hybrid detector integrating the triple modalities [PET + CT + MR] exists anywhere yet, this kind of multimodality fusion is undoubtedly an essential complement to the existing capabilities of single-modality imaging.

Crown Copyright © 2012 Published by Elsevier Ireland Ltd. All rights reserved.

1. Introduction

Positron emission tomography (PET) is widely used for early diagnosis and accurate therapy because it noninvasively detects molecular metabolic information; however, its anatomical visualization is unclear owing to low spatial resolution, so it is considered only a detector for functional localization [1]. X-ray computed tomography (CT) shows superior contrast for hard tissue [2] but inferior contrast for soft tissue compared with magnetic resonance imaging (MRI or MR). "Image fusion" is therefore the shortest path to triply complementary information in the field of diagnostics and treatment [3], and such development implies a novel direction for nuclear imaging in the future. Image fusion is classified into cross-modality imaging (images of diverse sources scanned on diverse detection systems at separate times) and hybrid imaging (images of diverse sources scanned on a single detection system simultaneously).


Hybrid-imaging [PET + CT] products such as the PET–CT machine have been commercially accepted for ten years; hybrid equipment (two devices in one) for [PET + MR] and [CT + MR] was introduced to the clinic last year but remains at the 2D, hardware-based stage [4], while hybrid [PET + CT + MR] imaging with 3D function has not been invented, or even investigated, yet; it can, however, alternatively be realized by "cross-modality image fusion".

2. Materials and methods

2.1. Research target

We extracted 20 samples by a random-number table from the 200 patients, primarily suffering from cancers, who were enrolled in our hospital during 2010–2011; 14 males and 6 females aged 21 to 80 (mean 60.5 ± 15.4) years were included. The three cases with the most significant image-fusion findings, selected for this article, are: case # 00333803 (last name Zhou), male, 63 years old, with left-side lung cancer;



case # 00321246 (last name Cai), female, 54 years old, with right-side breast cancer; and case # 00316561 (last name Li), male, 38 years old, with central-type carcinoma of the right lung (informed consent was obtained from all patients).

2.2. Instrumental equipment

PET: "Discovery ST8" (GE Medical Systems, USA); MRI: "Intera 1.5T Nova" (Philips Healthcare, the Netherlands); CT: "Aquilion TSX-101A" (Toshiba, Japan); medical imaging software: "Mimics 14" (Materialise, Belgium).
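All three scanners export their series in DICOM format before import into the Mimics 14 workstation (Section 2.3.3.1). As a rough stand-in for that import step only, here is a minimal sketch using the open pydicom and NumPy libraries (assumed tools, not part of the study; the folder path is a placeholder):

    # Stand-in for the DICOM import of Section 2.3.3.1 (pydicom/NumPy assumed).
    import os
    import numpy as np
    import pydicom

    def load_dicom_volume(folder):
        # Read every slice of the series, then sort along the scanner axis by
        # the z component of ImagePositionPatient so the stack is ordered.
        slices = [pydicom.dcmread(os.path.join(folder, name))
                  for name in sorted(os.listdir(folder))]
        slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
        return np.stack([s.pixel_array.astype(np.float32) for s in slices])

    pet_volume = load_dicom_volume("series/pet")  # hypothetical exported series
    print(pet_volume.shape)                       # (slices, rows, columns)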


2.3. Research methodology

2.3.1. Somatotopic localization

We fixed the patients' feet with a fast-holder on a plastic vacuum body cushion and fixed their heads with dedicated loops, so as to keep the same exposure and minimize positioning errors while the [PET], [MR] and [CT] examinations were undergone at different times [4]. The position line is usually localized by picking four points on the head (the supraorbital foramina, the glabella and the mid-submandibular point) and three points on the chest (the sternal angle and both nipples). These points are marked with small "cod-liver-oil" or "lead" capsules as fluorescent symbols before scanning; such remarkable somatotopic localization not only suggests the approximate region of examination but also provides references for locating the co-registration points of the next step after the images are built.

2.3.2. Data acquisition

2.3.2.1. PET scan protocol. After fasting for 4 h to keep the blood glucose level within the normal limits of 100–150 mg/dl, patients were injected with 5.5 MBq/kg of 18F-FDG, by weight, 40–60 min before scanning (6 bed positions, 4 min per bed, 24 min in total). All patients were scanned supine with arms up, from the parietal bone to the hip joint, to capture axial images, while sagittal and coronal images were acquired by multi-planar reformation (MPR) [3,5].

2.3.2.2. CT scan protocol. Parameters: voltage = 120 kV, intelligent auto current = 50–150 mAs, pitch = 1.0, slice thickness = 0.6 mm, interval gap = 5 mm, field of view (FOV) = 350 mm × 350 mm. The data acquisition mode is similar to the standard of Section 2.3.2.1 [3,5].

2.3.2.3. MR scan protocol. Parameters: T1-weighted TSE sequence without interval, repetition time (TR) = 500 ms, echo time (TE) = 17 ms, matrix = 256 × 256, number of signals acquired (NSA) = 3, pixel size = 0.98 mm × 0.98 mm × 1.10 mm, flip angle = 90°, slice thickness = 2 mm, FOV = 300 mm × 240 mm.

2.3.3. Experimental processes

2.3.3.1. "First phase" co-registration of dual-modality in 2D. Raw [PET], [MR] and [CT] data in DICOM (Digital Imaging and Communications in Medicine) format were input into the Mimics 14 workstation; these image sources from diverse devices were transferred, exchanged and then built into initial single-modality 3D models (say, a [PET] "arrow" solid or an [MR] "target" solid). We implemented a "3-plane & 9-point" co-registration method composed of two steps (a registration sketch follows this subsection). <1> "3-plane" principle: in the 3D image of the "arrow" solid, three "external characteristic (feature) points" are picked according to particular human structures (bony landmarks as the first choice). From the three groups of planes passing through each of those feature points, three 2D characteristic planes are extracted in turn by selection and virtual segmentation: an axial plane from the 1st plane group, a sagittal plane from the 2nd plane group, and a coronal plane from the 3rd plane group; similarly, in the 3D image of the "target" solid, three 2D "thin planes" (axial, sagittal and coronal) are selected and virtually cut away. <2> "9-point" principle: by reference to the detailed procedure in the Theory section, establish three "mixed feature points" on each "feature plane"; specifically, starting from the 3 × 3 = 9 "focus points" in series, the three segmented planes from this radio detector (say planes A, B and C) and the three planes from that detector (say planes D, E and F) are doubly aligned to coincide, set by set [6]. Co-registration of the dual modalities [PET + CT] or [MR + CT] is performed by a procedure similar to that of [PET + MR] after the corresponding original image data are imported.
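The alignment idea of the "9-point" step can be illustrated outside Mimics. The sketch below (NumPy assumed; the coordinates are synthetic, not from the study) computes the standard least-squares rigid transform, the Kabsch/Procrustes solution, mapping nine "arrow" registration points onto their nine "target" counterparts; the paper's own matching runs inside Mimics 14, so this is a stand-in for the idea rather than the authors' implementation:

    # Least-squares rigid alignment of the nine "focus points" (Kabsch sketch).
    import numpy as np

    def rigid_register(arrow_pts, target_pts):
        # Estimate rotation R and translation t minimizing the summed squared
        # distances between R @ arrow + t and target.
        a_mean, t_mean = arrow_pts.mean(axis=0), target_pts.mean(axis=0)
        A, T = arrow_pts - a_mean, target_pts - t_mean
        U, _, Vt = np.linalg.svd(A.T @ T)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no mirror
        R = Vt.T @ D @ U.T
        return R, t_mean - R @ a_mean

    arrow = np.random.rand(9, 3) * 100            # nine mixed feature points
    target = arrow + np.array([5.0, -3.0, 12.0])  # same points on the target solid
    R, t = rigid_register(arrow, target)
    print(np.linalg.norm(arrow @ R.T + t - target))  # ~0: points superimposed

A zero residual corresponds to the δ = 0 coincidence criterion developed in the Theory section.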

2.3.3.2. "First phase" image fusion of dual-modality in 2D. Based on the cubic "9-point & 3-plane" registration rule mentioned above, [MR] images are associatively mapped onto [PET] images. Merging is selected through the Mimics 14 graphics-toolkit interface on the oriented images; there are 12 optional methods of overlaying image signals, such as plus, subtract, multiply, divide, differentiate, maximize, minimize and average. According to the clinical need for "earlier findings" in this pilot study, we chose "plus": based on the signals yielded, regardless of tissue source, an algorithm similar to the superposition principle of "value composition" when sine (or cosine) waves of various frequencies (or wavelengths) meet, mathematically, acoustically or optically, so that any abnormally small signals found early are amplified. Images are then combined into 2D [PET + MR] in the auto-fusing style of information exchange by the signal-overlay technique (the "composition value" is visualized immediately); a minimal sketch of this overlay arithmetic is given at the end of this section. In parallel, 2D dual-modality image fusion of [PET + CT] or [MR + CT] is associatively mapped by the same procedure as for [PET + MR].

2.3.3.3. "Second phase" co-registration of triple-modality in 2D. We treated the "first phase" dual-modality fusion resulting from Section 2.3.3.2 as a "target" solid (say [PET + MR]), and the "third party" image that never participated in the previous co-registration and fusion as the "arrow" solid (say [CT]), and continued with a process called "second phase co-registration", following the pattern of Section 2.3.3.1, to obtain a triple-modality co-registration (say [(PET + MR) + CT]). Further instances: co-registration of the target [MR + CT] re-mapped by the arrow [PET]; co-registration of the arrow [PET + CT] re-matched to the target [MR].

2.3.3.4. "Second phase" image fusion of triple-modality in 2D. Using the result of the "second phase co-registration of triple-modality" from Section 2.3.3.3, we continued a further image-fusion process following the scheme of Section 2.3.3.2 to complete this "second phase image fusion". For example, the target image [MR + CT] is combined with the arrow image [PET] to form a fused 2D triple-modality image [(MR + CT) + PET].

2.3.3.5. "Second phase" reconstruction of triple-modality in 3D. The 2D images ([(MR + CT) + PET], [(PET + CT) + MR] or [(PET + MR) + CT]) merged in Section 2.3.3.4 are imported into the reverse-engineering software Mimics; triangulated grids and point-cloud meshes are exported and then edited by fairing, padding, compressing and erasing on the real-time workstation; computer-aided design (CAD) models are organized by Boolean operations; inner and outer curved surfaces are re-drawn as NURBS (non-uniform rational B-splines); and these multiple surfaces are sewn into a 3D solid model (a reconstruction sketch also follows at the end of this section). This data-processing procedure can also be reversed, specifically: (1) rebuild "first phase" single-modality 3D images such as 3D [PET], 3D [MR] or 3D [CT], and combine two of these cubic images into a "second phase" dual-modality 3D image such as 3D [MR + CT], 3D [PET + CT] or 3D [PET + MR], obeying a co-registration solution similar to Section 2.3.3.1; (2) combine this "first phase reconstructed, second phase fused" dual-modality 3D arrow solid (say the cubic [PET + MR]) with the "non-participating third party" single-modality 3D target solid (say the cubic [CT]) to form, thirdly, a triple-modality 3D model (say the cubic [(PET + MR) + CT]), obeying a co-registration solution similar to Section 2.3.3.3.
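As an illustration of the "plus" overlay of Section 2.3.3.2, here is a minimal sketch under stated assumptions (NumPy arrays stand in for two co-registered slices; the 12 Mimics operators themselves are not public): intensities are rescaled and summed so that coincident signals amplify, and np.maximum, np.minimum or averaging would realize the "maximize/minimize/average" options analogously.

    # "Plus" signal overlay of two co-registered, same-size slices (sketch).
    import numpy as np

    def fuse_plus(img_a, img_b):
        rescale = lambda x: (x - x.min()) / (np.ptp(x) + 1e-9)  # map to [0, 1]
        return rescale(img_a) + rescale(img_b)                  # composition value

    pet_slice = np.random.rand(256, 256)    # placeholder registered PET slice
    mr_slice = np.random.rand(256, 256)     # placeholder matching MR slice
    fused = fuse_plus(pet_slice, mr_slice)  # values in [0, 2]; window for display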

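For the 2D → 3D reconstruction step of Section 2.3.3.5, the following sketch shows only the surface-extraction idea, with scikit-image's marching cubes as an assumed open-source stand-in for Mimics' meshing; the sphere volume is synthetic, and the fairing, Boolean CAD and NURBS-sewing steps have no short open equivalent here.

    # Iso-surface extraction from a fused image stack (marching-cubes sketch).
    import numpy as np
    from skimage import measure

    z, y, x = np.mgrid[-32:32, -32:32, -32:32]
    volume = (np.sqrt(x**2 + y**2 + z**2) < 20.0).astype(np.float32)  # toy stack

    # Vertices and triangular faces of the surface: the raw point-cloud mesh
    # that would then be faired, padded and sewn into a solid model.
    verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
    print(verts.shape, faces.shape)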


3. Results

3.1. First phase fusion

3.1.1. Dual-modality fusion [PET + MR]

In terms of the strength of each imaging system, [PET] provides functional information on metabolism but is limited in anatomic visualization, while [MR] shows high morphologic resolution for soft tissue but lacks definitive functional information. For the specific example of resident account # 00333803, [PET] demonstrates a metabolic lesion of compact opacity in the lung with its position unspecified, as Fig. 1A illustrates; [MR] visualizes higher contrast for the parenchyma but lower sensitivity to tumor signs, as Fig. 1B illustrates; and the combined [PET + MR] image expresses complementary diagnostic information on structure and metabolism, as Fig. 1C and D illustrate.

3.1.2. Dual-modality fusion [CT + MR]

In terms of the features of each imaging system, [CT] provides superior hard-tissue contrast but inferior soft-tissue contrast, while [MR] provides the reverse. For the specific instance of resident account # 00321246, bright hard tissue from [CT] is illustrated in Fig. 2A and clear soft tissue from [MR] in Fig. 2B; the fused [CT + MR] image takes advantage of both, and comprehensive dual images such as Fig. 2C and D compose signal properties from both hard and soft tissue, making it possible to explore in depth the correlations among cancerous tissues and organs [5].

3.1.3. Dual-modality fusion [PET + CT]

Whereas [CT] presents high morphologic contrast for hard tissue but lacks functional information, as in Fig. 3A illustrating a density lesion of unknown nature, [PET] diagnoses minimal pleural abnormality clearly but is poor at addressing the region, as Fig. 3B illustrates; the fused [PET + CT] image consequently gives radiologists a more convenient and accurate basis for interpretation than individual [PET] or [CT] does. For the specific example of resident account # 00316561, the half-distinct images created by conventional [PET + CT] present metabolic activity and the vicinity of the structures simultaneously, as in Fig. 3C and D; they not only depict the nature of the tumor by [PET] but also define the location of the lesion found by [CT] [7]. "Central carcinoma of the right lung" was thus discovered early for this case [8], because metabolic changes precede morphologic ones.

3.2. Second phase fusion

3.2.1. Triple-modality fusion [(CT + MR) + PET]

Tri-modality fusion of [(CT + MR) + PET] is processed similarly to the dual-modality case: [PET] is merged on the basis of [CT + MR]. For the instance of resident account # 00317106, even though it contains more medical information, what the dual modality [CT + MR] expresses belongs to anatomic structure with little morphologic abnormality; the abnormal biological properties are, however, better explored by [(CT + MR) + PET], as Fig. 4 illustrates.

3.2.2. Triple-modality fusion [(PET + MR) + CT]

Tri-modality fusion of [(PET + MR) + CT] is processed similarly to the [(CT + MR) + PET] fusion mentioned previously: [CT] is combined on the basis of [PET + MR]. For the instance of resident account # 00321246, the purpose of adding the [CT] image is to exploit its strength in localizing lesions, so as to learn in advance the correlative location of the phenomena found by [PET + MR], as Fig. 5 illustrates.

Image fusions are so diverse that they are classified into series of variants, so suitable fusion options must be picked according to the specifics of different diseases as well as their common aspects. For mammary and lung tumors, [PET], [CT] and [MR] scans present identical value [8]; for secondary oncologic metastases and lymphoma, the sensitivity and specificity of [PET + CT] fusion are respectively higher, making it preferable. Potential indications in which [PET + MR] may be superior to [PET + CT] are those in which [MR] alone has been found more accurate than [CT] alone, including musculoskeletal, intracranial, neck, breast, prostatic and hepatic tumors with metastatic spread [9–11]; the detectable rate of [PET + MR] is comparably higher there and probably improves assessment accordingly. Tri-modality [PET + CT + MR] would eventually and entirely dominate [PET], [CT] and [MR], becoming the most advantageous option and the first priority of all.

4. Calculation

By selecting specialists of the same qualification and proficiency, a 5-expert team was formed comprising three radiologists and two nuclear-medicine physicians. Using a "double blinded" methodology (each individual read the images without communication and with clinical information erased) [12], the visualization efficiency of PET/CT/MR and their inter-merged images for these 20 cases was analyzed. Each expert assessed each image with one judgment from the following five options: "definite positive (+), probable positive (±), definite negative (−), probable negative (∓), non-recognition (x)". In consequence, the effect of each image source was professionally "graded" and accumulated into 20 × 5 = 100 "score" items; the result of this survey is shown in Table 1. If we classify the degree of distinguishability into three larger groups, "non-recognition (x)" = "indistinguishable (x)", "probable positive + probable negative" = "probable (±)", and "definite positive + definite negative" = "absolute (+)", the above items add up to form Table 2.

In terms of authenticity indications such as specificity, sensitivity and precision (correctness) [11], the imaging principles of these three devices differ; body parts such as the head, chest, abdomen and knee contain different (hard or soft) tissues; and equipment such as PET and CT, addressing different clinical needs, regularly provides information for various experimental samples emphasizing different metabolic or structural tendencies, so the specificities with which image fusion tests disease also differ and become difficult to compare. Meanwhile, patients are usually glad to accept the gold standard of biopsy proof [7] only when their radiologic diagnosis is "positive (+)"; for "negative (−)" patients, further section information is difficult to collect, so objective "correct diagnostic indices" are hard to acquire and are ignored here. Our experimental testing plan is therefore limited to the reliability indices of a rough census (labeled "indistinguishable", "probable" and "absolute") until the visualization index reaches "uniformity".

The outcome of the above survey was analyzed with SPSS-16.0 and is expressed as the broken-line and cylinder charts of Fig. 6. After Friedman matched-pair testing for multiple samples, P < 0.001 < 0.05 indicates a "significant difference" in the discrimination of fusion results between single modality and multimodality.
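The Friedman matched-pair test reported above can be reproduced in outline with SciPy in place of SPSS-16.0 (an assumed equivalence). A minimal sketch with random placeholder scores shaped like the study's 100 graded items per source, not the study's data:

    # Friedman test across matched score vectors, one per imaging pattern.
    import numpy as np
    from scipy.stats import friedmanchisquare

    rng = np.random.default_rng(0)
    sources = ["CT", "MR", "PET", "CT+MR", "PET+MR",
               "PET+CT", "PET+CT hybrid", "PET+CT+MR"]
    # 100 items per source, coded 0 = indistinguishable, 1 = probable, 2 = absolute.
    scores = {name: rng.integers(0, 3, size=100) for name in sources}
    stat, p = friedmanchisquare(*scores.values())
    print(stat, p)  # p < 0.05: the sources discriminate differently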



Fig. 1. PET as arrow image, MR as target image, 2D → 3D [PET + MR] cross-modality fusion. 1A: PET coronal. 1B: MR coronal. 1C: [PET + MR] coronal. 1D: [PET + MR] 3D.

Fig. 2. CT as arrow image, MR as target image, 2D → 3D [CT + MR] cross-modality fusion. 2A: CT coronal. 2B: MR coronal. 2C: [CT + MR] coronal. 2D: [CT + MR] 3D.

Fig. 3. PET as arrow image, CT as target image, 2D → 3D [PET + CT] cross-modality fusion. 3A: CT sagittal. 3B: PET sagittal. 3C: [PET + CT] sagittal. 3D: [PET + CT] 3D.

Fig. 4. [CT + MR] as target image, [PET] as arrow image, 2D → 3D [(CT + MR) + PET] tri-modality fusion. 4A: CT axial. 4B: MR axial. 4C: [CT + MR] axial. 4D: [(CT + MR) + PET] 3D.

Fig. 5. [PET + MR] as target image, [CT] as arrow image, 2D → 3D [(PET + MR) + CT] tri-modality fusion. 5A: PET 3D segment. 5B: CT 3D segment. 5C: MR 3D segment. 5D: [(PET + MR) + CT] 3D cut.



Table 1. Evaluation of resolution of multimodality image identification.

Source pattern   [CT]     [MR]     [PET]    [CT+MR]  [PET+MR]  [PET+CT]  [PET+CT]  [PET+CT+MR]     Accumulated
                 Single   Single   Single   Dual     Dual      Dual      Hybrid    Multimodality
(+)              63       12       85       59       49        46        91        25              430
(±)              12       3        2        11       17        14        1         19              79
(−)              10       56       8        11       7         8         7         19              126
(∓)              8        20       3        9        13        16        1         15              85
(x)              7        9        2        10       14        16        0         22              80
Total            100      100      100      100      100       100       100       100             800

Table 2. Classification of cross-modality image identification.

Source pattern          [CT]     [MR]     [PET]    [CT+MR]  [PET+MR]  [PET+CT]  [PET+CT]  [PET+CT+MR]
                        Single   Single   Single   Dual     Dual      Dual      Hybrid    Multimodality
Indistinguishable (x)   7        9        2        10       14        16        0         22
Probable (±)            20       23       5        20       30        30        2         34
Absolute (+)            73       68       93       70       56        54        98        44
Detectable (%)          73       68       93       70       56        54        98        44

Fig. 6. Comparison of contrast between multimodalities (K-related-samples nonparametric test).



Table 3. Partnership crosstabulation testing of Dual [PET + CT] vs Dual [PET + MR]. Percentages are row percentages.

Across [PET + MR]   Across [PET + CT]
                    (x)          (±)           (+)           Total
(x)                 3 (21.4%)    4 (28.6%)     7 (50.0%)     14
(±)                 5 (16.7%)    13 (43.3%)    12 (40.0%)    30
(+)                 8 (14.3%)    13 (23.2%)    35 (62.5%)    56
Total               16           30            54            100

z = 0.401, p = 0.688 (inspection level = 0.05/3 = 0.017).

Table 4. Partnership crosstabulation testing of Across [CT + MR] vs Single [CT]. Percentages are row percentages.

Single [CT]   Across [CT + MR]
              (x)          (±)           (+)           Total
(x)           1 (14.3%)    1 (14.3%)     5 (71.4%)     7
(±)           1 (5.0%)     5 (25.0%)     14 (70.0%)    20
(+)           8 (11.0%)    14 (19.2%)    51 (69.9%)    73
Total         10           20            70            100

z = 0.702, p = 0.483 (inspection level = 0.05/3 = 0.017).

Therefore, further 2-vs-2 comparisons were needed for the visual patterns whose detectable rates lie close together in Fig. 6, as Tables 3–5 show in turn: (1) z = 0.401 and P = 0.688 for the crosstabulation test of cross-modality [PET + CT] vs cross-modality [PET + MR]; (2) z = 0.702 and P = 0.483 for cross-modality [CT + MR] vs single-modality [CT]; (3) z = 1.811 and P = 0.070 for hybrid-modality [PET + CT] vs single-modality [PET]. In summary, all three P values are greater than the "inspection level" of 0.05/3 = 0.017, so all matched pairs result in "no significant difference" [6] for these three comparisons; a sketch of such a matched-pair comparison is given at the end of this section.

In conclusion, cross-modality [PET + CT] is visually blurred, primarily because the two categories of equipment differ hugely in their imaging principles and because scanning occurs at different times and places; hybrid-modality [PET + CT] is visually distinct, primarily because the two categories of equipment are organically integrated and scanning occurs at the same time and place; these are unmatchable advantages that other fusion methods cannot attain. [CT + MR] and [PET + MR] visualize "half-distinct" images; [CT + MR] is, however, a little better than [PET + MR], because the information of CT/MR is based on closer levels of cell tissue, while PET/MR are based on different levels [9].
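The paper reports z and p values for these pairwise follow-ups without naming the exact SPSS procedure; a Wilcoxon signed-rank test on the matched score vectors is one plausible reading. A minimal sketch under that assumption, with placeholder data rather than the study's scores:

    # One 2-vs-2 follow-up at the Bonferroni-corrected level 0.05/3 = 0.017.
    import numpy as np
    from scipy.stats import wilcoxon

    rng = np.random.default_rng(1)
    pet_ct = rng.integers(0, 3, size=100)  # e.g. cross-modality [PET + CT] scores
    pet_mr = rng.integers(0, 3, size=100)  # e.g. cross-modality [PET + MR] scores
    stat, p = wilcoxon(pet_ct, pet_mr)     # matched-pair comparison
    print(p < 0.05 / 3)                    # significant only below 0.017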

5. Theory

In 1992, Brown et al. summarized a two-step procedure of "image exchange" and "image locating". <1> "Image exchange" ensures that pixels/voxels represent equal spatial sizes in practice, through translation, rotation, reflection and so on. <2> For "image locating", two methods, "external characteristic based" and "internal characteristic based" (primarily based on configuration and segmentation), are carried out in this research: following the procedure of "feature extraction" and "image mapping", and starting from marking and dividing internal features (tissue characterization), the processes of "picking points to form planes" and "picking planes for segmentation" are enforced.

"Localizing registration" is the principal step and critical condition of image fusion; it enforces a so-called "one to one" correlation through which one floating image maps onto another spatially and relocatably [3], so that a certain anatomic point of the human body in the 1st image lies at the same position as the corresponding point of the 2nd image in the coordinate set. The detailed process: on each segmented plane where a human "feature point" lies, geometrically align its principal axis and center of mass by translation and rotation; a pair of perpendicular axes, forming a local coordinate set, is auto-created on these transactions (inside the domain joined by the crossing structural lines of the body surface, organ boundaries and infectious foci); extend this double axis until it intersects the plane edge or organ surface, giving 4 intersection points (internal "feature points"); together with the previous external "feature point" the plane passes through, 3 mixed (internal plus external) "focus points" can then be picked from at least 4 + 1 = 5 candidates. We thus obtain 3 feature planes decided by 3 external feature points, and pick 3 focus points on each as "registration points"; when the imaging workstation judges that the coordinate-transformation error of the 3 × 3 = 9 focus points reaches its minimum, the two "9-to-9" 3D solids are caught and entirely located (this objective is written compactly in the display following this paragraph). As a geometrical example: suppose there is a registration point P(a, b, c) on plane M and a corresponding registration point Q(i, j, k) on the corresponding plane N; when the squared distance Δ = (a − i)^2 + (b − j)^2 + (c − k)^2 is minimal, its square root δ = √Δ is also optimal, and δ = 0 stands for the coincidence of the superimposed points P and Q. Obviously, the superiority of the "3-plane & 9-point" algorithm over other algorithms rests on the geometric foundation that "3 points form a plane, and 3 planes form a solid."
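The "9-to-9" criterion and the δ example can be stated compactly. A sketch in LaTeX, where the rotation R and translation t are symbols introduced here for exposition, not notation from the paper:

    \min_{R,\,t} \sum_{n=1}^{9} \delta_n^{2},
    \qquad
    \delta_n = \left\| R\,p_n + t - q_n \right\|
             = \sqrt{(a_n - i_n)^2 + (b_n - j_n)^2 + (c_n - k_n)^2},

where (a_n, b_n, c_n) are the coordinates of the transformed arrow point R p_n + t, q_n = (i_n, j_n, k_n) is its target counterpart, and δ_n = 0 for all n means the two solids are exactly superimposed.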

Table 5. Partnership crosstabulation testing of Hybrid [PET + CT] vs Single [PET]. Percentages are row percentages.

Single [PET]   Hybrid [PET + CT]
               (x)         (±)          (+)            Total
(x)            0 (0.0%)    0 (0.0%)     2 (100.0%)     2
(±)            0 (0.0%)    0 (0.0%)     5 (100.0%)     5
(+)            0 (0.0%)    2 (2.2%)     91 (97.8%)     93
Total          0           2            98             100

z = 1.811, p = 0.070 (inspection level = 0.05/3 = 0.017).



The accuracy of "localizing registration" directly affects the quality of image fusion. Because of patients' posture changes or physical repositioning, as well as the biological activity of their internal organs, the gathered data are incompatible, and this creates difficulties for the subsequent co-registration. This is the hard-to-overcome disadvantage of this cross-modality fusion experiment; for now, the degree of accuracy can only be managed through the delineating experience and visual insight of the processor [12,13].

There are two solutions for image integration: valuable information is extracted from "this" image and combined with "that" image; or valuable information is extracted from both images and cast onto a new imaging space where the combination occurs [4]. The first way is preferred in this paper: the "target" image set is treated as the background, a so-called vector, while the referenced "arrow" image set is treated as the carrier of supplementary information, whose abnormal pathologic regions are extracted and cast onto the base image. After matching, fusion is built by one of two categories of solution, image-pixel-based fusion and image-feature-based fusion [3]; here we pick the second. In this experiment we chose the "overlay technique" according to clinical needs, based on cubic registration; among the 12 techniques mentioned in Section 2.3.3.2, overlaying is considered the most feasible for enforcing fusion and is thus the most widely practiced.

6. Discussion

Medical information identified by [MR] lies within the range of anatomic structure after all; it only visualizes mild morphologic abnormality, whereas abnormality of biological nature can be detected by the dual modality [PET + MR]. In many cases, pure [MR] images keep presenting oncologic lesions after patients undergo radiotherapy; this appearance is not necessarily caused by ineffectiveness of the treatment, so further observation of metabolic inhibition must be made, because the degree of malignancy is proportional to metabolic activity, which plays a key role in evaluating therapeutic efficiency. Image fusion can also identify whether a tumor is benign, for instance to clarify whether a lesion is a post-surgical scar or an initial tumor; the data in a single-modality image are insufficient for this, whereas the metabolic factor brought in by the dual modality is particularly valuable diagnostically for locating complex structures. The deficiency of judging only by morphologic change or metabolic abnormality is thereby remedied, and missed diagnoses or misdiagnoses naturally decrease as a result.

Previously, thoracic diagnosis depended mostly on [CT], on which a tiny lesion is often not visible enough to detect, resulting in missed diagnoses. On the other hand, [MR] is usually preferable for parenchyma such as the abdominal organs, musculoskeletal system and brain, but seldom for pleural pathology. The principle of [PET] diagnosis is based on marking rapidly growing malignant tissues, whose pathophysiology is signaled by high receptor or antibody levels and by high glycolysis and enzyme expression in cancer cells, which therefore take up and concentrate 18F-FDG; the radioactive morphology, location, distribution, density and intensity can be scanned to present a "positive (+)" visualization with high sensitivity. Therefore, once a [PET] image is combined synergistically with an [MR] or [CT] image, say as 2D cross-modality [PET + MR], it can be used to judge the nature of abnormalities earlier and to improve specificity compared with [PET] alone, extending the [MR] application to detection of thoracic disease (say breast cancer). Historically, thoracic treatment is classified into surgery, chemotherapy and radiotherapy; for 3D multimodality [PET + MR + CT] in particular, its advantages over unique [PET] or pure [MR] or [CT] alone can be envisioned as follows:

firstly, to provide accurate values for surgical-area strategy preoperatively, or for the target region of radiotherapy before treatment; secondly, during the middle stage of therapy, to follow up, monitor, adjust and update the treatment plan [14] by learning the cancer's reactions to medication dose or radiation quantity; and finally, to enable reliable statistical data for evaluating recurrence testing or radiosurgery during the last, post-therapeutic stage of treatment.

7. Conclusions

This paper highlights some perspectives of applying a [PET + CT + MR] integrated scheme for simultaneous morphologic (soft-tissue and hard-tissue) and functional imaging, by a mechanism combining three images of different modal sources into one tri-modality image available to the diagnostic facility. The characteristic (feature) co-registration approach of "9-point & 3-plane" designed for this experiment is the basis for realizing cross-modality image fusion; rather than using any single algorithm alone, we solved the problems of three-modality image merging by the "overlay technique" on the real-time workstation of the imaging system Mimics 14 in a single session, and finally realized the innovative triple-modality [PET + CT + MR] image fusion advantageously derived from [PET], [CT] and [MR] [1]. This combination of mutually exchanged data can not only integrate related information into one single image but also detect new clues for early diagnosis, and thus improves both diagnostic accuracy and, through therapy response, treatment effect in clinical oncology [4].

With the exception that the detectable rate of the cross modality [CT + MR] is slightly superior to that of the single modality [MR], the remaining cross modalities [PET + MR] and [PET + CT] are all inferior to single modalities, and the detectable rate of the triple modality [PET + CT + MR] (44%) is not even half that of the single modality [PET] (93%). However, the hypothetical importance of cross modalities cannot be denied by this phased limitation, because "hybrid [PET + CT]" climbed to the top candidate of examination, with a detectable rate of 98%, exactly through industrial participation that developed from the "cross-modality [PET + CT]" of the past. Even though our research on cross-modality image fusion is currently at an initial stage whose efficiency cannot compete with hybrid detectors, so that it cannot adequately serve as a clinical routine tool [6,14,15], given that no complete 3D hybrid radio detector of [PET + CT + MR] yet exists internationally, this sort of 3D fusion is doubtlessly an essential complement to the existing dual-modality hybrid detectors, and can at least be contemplated as the next step in the predictable "hybridization" evolution toward a 3D hybrid device for the multimodalities [PET + CT + MR].

Conflict of interest

This work is supported by the "Guangdong Dept. of Science & Technology, China" granted project (#2011B050400029) titled «Creative Thermal Simulation based on 3D Thermography» (no potential conflict of interest). This work is supported by the "Guangdong Natural Science Foundation, China" granted project (#S2011010005011) titled «Creative Thermal Simulation based on 4D Virtual Thoracic Operation» (no potential conflict of interest).

References

[1] Schulthess GK, Schlemmer H-PW. A look ahead: PET/MR versus PET/CT. Eur J Nucl Med Mol Imaging 2009.
[2] Aziz F. Radiological findings in a case of advance staged mesothelioma. J Thorac Dis 2009;1(1):34–5.
[3] Karlo CA, Steurer-Dober I, Leonardi M, et al. MR/CT image fusion of the spine after spondylodesis: a feasibility study. Eur Spine J 2010;19:1771–5.
[4] Pichler BJ, Judenhofer MS, Wehrl HF, et al. PET/MRI hybrid imaging: devices and initial results. Eur Radiol 2008;18:1077–86.
[5] Okane K, Ibaraki M, Toyoshima H, et al. 18F-FDG accumulation in atherosclerosis: use of CT and MR co-registration of thoracic and carotid arteries. Eur J Nucl Med Mol Imaging 2006;33:590–4.
[6] Schlemmer H-P, Pichler BJ, Krieg R, et al. An integrated MR/PET system: prospective applications. Abdom Imaging 2009;34:668–74.
[7] Lv Y-G, Yu F, Yao Q, et al. The role of survivin in diagnosis, prognosis and treatment of breast cancer. J Thorac Dis 2010;2:100–10.
[8] Schulz V, Torres-Espallardo I, Renisch S, et al. Automatic three-segment MR-based attenuation correction for whole-body PET/MR data. Eur J Nucl Med Mol Imaging 2011;38:138–52.
[9] Nakamoto Y, Tamai K, Saga T, et al. Clinical value of image fusion from MR and PET in patients with head and neck cancer. Mol Imaging Biol 2009;11:46–53.
[10] Heiss W-D. The potential of PET/MR for brain imaging. Eur J Nucl Med Mol Imaging 2009;36(Suppl 1):S105–12.
[11] Antoch G, Bockisch A. Combined PET/MRI: a new dimension in whole-body oncology imaging? Eur J Nucl Med Mol Imaging 2009;36(Suppl 1):S113–20.
[12] Wang Y-X. Medical imaging in new drug clinical development. J Thorac Dis 2010;2:245–52.
[13] Brix G, Nekolla EA, Nosske D, et al. Risks and safety aspects related to PET/MR examinations. Eur J Nucl Med Mol Imaging 2009;36(Suppl 1):S131–8.
[14] Kartachova MS, Valdés Olmos RA, Haas RLM, et al. Mapping of treatment induced apoptosis in normal structures: 99mTc-Hynic-rh-annexin V SPECT and CT image fusion. Eur J Nucl Med Mol Imaging 2006;33(8):893–9.
[15] Giovanella L, Lucignani G. Hybrid versus fusion imaging: are we moving forward judiciously? Eur J Nucl Med Mol Imaging 2010;37:973–9.

Author biographies

M. J. Peng, Senior Medical Engineer and primary investigator, Dept. of Joint Surgery, 1st Affiliated Hospital of Guangzhou Medical College, China, and Computational Orthopedics Research Institute, Joint Implantation Key Lab of Guangdong Province, China. Tel.: +86 20 8306 2253, +86 189 2234 6634; fax: +86 20 8306 2253; [email protected]; [email protected]. Address: 151 YanJianXi Road, GuangZhou City, GuangDong Province, PR China 510120. Senior engineer and medical scientist with the Computational Orthopedics Research Institute; interests include radiologic imaging and virtual surgery. Principal investigator of the projects "High dimension images fused across multiple modules of diverse sources in unification", funded by the Oversea Scholar Foundation of Guangzhou Medical College; "Virtual operation in dynamic surgical simulation based on in-out fixed instrument for hard tissue", funded by the Guangzhou Service of Scholar Exchange; "Training program of virtual operation in dynamic surgical simulation for hard tissue", funded by the Guangzhou Dept. of Educational Administration; and "Creative thermal simulation based on 4D virtual thoracic operation", funded by the Guangdong Natural Science Foundation, China.

Dr. Xiangyang Ju, [email protected]; tel.: +44 7923 481386; fax: +44 141 211 9737. 378 Sauchiehall St., Glasgow G2 3JZ, United Kingdom. Medical engineer; received a BE in naval architecture in 1986 and an ME in experimental mechanics in 1989 from Tianjin University, China, and a PhD in mechanical engineering from Nottingham Trent University, UK, in 1997. He has researched 3D capture and shape modeling for over 15 years; pioneered the development of a high-resolution 3D photogrammetry system funded by the BBSRC; and proposed shape-conformation approaches that conform anatomical template models to captured 3D models, establishing dense correspondence for morphometric analysis, funded by the Faraday Imaging Partnership. Under a Scottish Enterprise funded Proof of Concept project these techniques provided the core IP to launch the company Dimensional Imaging.

Dr. Balvinder S. Khambay, [email protected]; tel.: +44 141 211 9604; fax: +44 141 211 9604. Room G14, 378 Sauchiehall Street, Glasgow G2 3JZ, United Kingdom. Senior clinical lecturer/consultant in orthodontics at the Glasgow Dental School; has a longstanding interest in the correction of facial dysmorphology, working closely with his oral and maxillofacial surgical colleagues; his interests lie in 3D planning for orthognathic patients; he has evaluated several 2D planning computer packages and is currently developing the "virtual patient" to be used for 3D planning.

Dr. Ashraf F. Ayoub, [email protected]; tel.: +44 141 211 9604; fax: +44 141 211 9604. 378 Sauchiehall Street, Glasgow G2 3JZ, United Kingdom. Professor and head of the Biotechnology and Craniofacial Sciences Research Group, supervising Master's and PhD students in the area of stereophotogrammetry to study facial deformities using recent advances in 3D imaging. His research team has obtained several grants for extensive multidisciplinary projects to characterize facial deformities and quantify surgical changes; the results have been presented at several international conferences and have received several awards and honors. He is editor of the oral surgery section of the International Journal of Oral and Maxillofacial Surgery, has published around 40 research articles in this field, and has lectured on the same topic.

Dr. Chin-Tu Chen, [email protected]; tel.: +1 773 702 6269; fax: +1 773 702 6269. 5841 South Maryland Avenue, MC2026, USA. Director and professor, Functional and Molecular Imaging Core, University of Chicago. Education: 1974, BSc (Physics), National Tsing-Hua University, Hsinchu, Taiwan; 1978, MS (Physics), Northwestern University, USA; 1986, PhD (Medical Physics), University of Chicago, USA. Grants: 1987–1994, Department of Energy; 1989–present, National Institutes of Health; 1990, Louisiana Board of Regents; 1990, Radiological Society of North America; 1992, National Science Foundation; 1993, National Health Research Institutes (Taiwan); total funding $3,713,537. He holds 5 U.S. patents in nuclear medicine, is first editor of 6 books (or book chapters) on multimodality images, and has published around 150 research papers internationally in functional and molecular imaging.

B. Bai, Director of Surgery at the Orthopedic Surgery Department and professor at the Orthopedic Implantation Key Lab of Guangdong Province, China, 1st Affiliated Hospital of Guangzhou Medical College, Guangzhou, China. Tel.: +86 20 8306 2680; fax: +86 20 8306 2680; [email protected]; [email protected]. Address: 151 YanJianXi Rd, GuangZhou City, GuangDong Province, China 510120. September 1978–August 1983: MD, Medical School of Guangzhou Medical University; September 1984–August 1989: PhD, Sun Yat-sen University of Medical Sciences; October 1996–August 1997: fellowship, Department of Orthopedic Surgery, Health Science Center at Syracuse, State University of New York, USA; September 1997–December 1999: fellowship, Hospital for Joint Diseases Orthopedic Institute, New York University Medical Center, USA; 1984–1989: resident and chief resident, Dept. of Orthopedic Surgery, First University Hospital, Sun Yat-sen University of Medical Sciences; 1992–1996: associate professor, Dept. of Orthopedic Surgery, The First Affiliated Hospital, Guangzhou Medical University; 2000–present: professor and chairman, Dept. of Orthopedic Surgery, The First Affiliated Hospital, Guangzhou Medical University.