Optik 124 (2013) 4266–4273
A non-rigid cardiac image registration method based on an optical flow model

Xiaoqi Lu, Yongjie Zhao, Baohua Zhang ∗, Jianshuai Wu, Na Li, Weitao Jia

School of Information Engineering, Inner Mongolia University of Science & Technology, Baotou City, Inner Mongolia 014010, PR China
Article history: Received 3 July 2012; accepted 29 December 2012.

Keywords: Non-rigid registration; Medical image; SIFT algorithm; Optical flow model
Abstract

For non-rigid medical image registration, a new classification registration method is proposed. First, feature points are extracted from the reference and floating images based on SIFT (Scale Invariant Feature Transform) and matched, and coarse registration is performed using the least squares method. Then precise registration is achieved using the optical flow model algorithm. The SIFT algorithm is based on local image features with good scale, rotation, and illumination invariance. The optical flow algorithm does not extract features but uses the image gray information directly, so its registration speed is fast. The two algorithms are complementary: the SIFT algorithm improves the convergence speed of the optical flow algorithm, and the optical flow algorithm makes the registration result more accurate. The experimental results prove that the algorithm improves the accuracy of non-rigid medical image registration and enhances the convergence speed. Therefore, the algorithm has advantages in image registration.
© 2013 Elsevier GmbH. All rights reserved.
1. Introduction

Medical image registration is an important technology in medical image analysis and the basis of medical image fusion. Its purpose is to find a series of spatial transformations for one medical image so that it matches the corresponding points of another medical image in space or anatomical structure. In the diagnosis process, because of various objective factors (e.g., different physical mechanisms, changes of imaging parameters, and different imaging device resolutions), images of different modalities show different properties. There are therefore many limitations when doctors themselves register two different-modality images or image groups in space: the procedure is highly subjective and inevitably causes errors. Especially in fields such as stereotactic radiosurgery and cardiac surgery visualization, where the heart and other soft-tissue organs are usually involved, the required registration accuracy is higher than traditional methods can provide, and medical image registration becomes a necessary and difficult task. In many cases the deformation is non-rigid and non-linear; even head images, which are nearly rigid objects, cannot simply be registered rigidly. Rigid and affine transformations therefore cannot simulate local soft-tissue deformation efficiently. For some body parts, the susceptibility difference between soft tissue and air changes the magnetic field considerably, which results in heavy geometric
∗ Corresponding author. Tel.: +86 13644723970. E-mail address: zbh [email protected] (B. Zhang).
http://dx.doi.org/10.1016/j.ijleo.2012.12.055
distortion. To meet the needs of clinical practice, many domestic and foreign researchers study non-rigid medical image registration algorithms in depth.

Moravec first proposed a corner detection operator to realize stereoscopic vision matching [1], based on which Harris improved the Moravec operator [2]. The Harris corner operator, which is rotation invariant, has been introduced into many image registration algorithms, but it is sensitive to illumination, scale, and viewpoint changes, and its noise resistance is poor [3]. In 2004, Lowe, from the University of British Columbia, proposed a new feature point extraction algorithm, the SIFT algorithm [4]. The algorithm can extract stable features and has a powerful ability for matching images. It is invariant to scale, illumination, and rotation, and it is also robust to changes of viewpoint, affine distortion, and noise; therefore this method has been used successfully in the image registration field. Horn innovatively related the 2-D velocity field to the gray scale, introduced the optical flow constraint equation, and established the basic algorithm of optical flow computation. Because the displacement field in registration and the velocity field in the optical flow model are similar, Palos introduced the optical flow model into image registration, but such methods are still based on the Horn model. Jean-Philippe Thirion proposed the "demons-based" algorithm [5], a simple elastic registration method based on image gray-scale information and analogous to the Maxwell's demons thought experiment of the 19th century. In this algorithm, the first step is to judge the motion direction of all pixel points in the registration images, and then elastic registration is achieved by moving the pixel points [6–8]. In the optical flow algorithm, when \|\nabla R(x)\| is equal to zero, the direction of movement at pixel x cannot be decided. The gray-level distributions of the images are different, and the gray-level distribution
of the same anatomical position is also different, so the image gray difference and the gray gradient are not sufficient to drive the image transformation; the Demons algorithm can therefore only be used for mono-modality image registration. In order to move the pixels correctly, an improved Demons algorithm for multi-modality registration is proposed in this article; the algorithm exploits the mutual information between the two images. The implementation is that another force is attached to the former image-deformation force: the floating image is adjusted to match the reference image, and mutual information is used to measure the change between the two images. When the mutual information reaches its maximum, the two images are registered.

The main registration methods are medical image registration based on image features and medical image registration based on intensity information, and both have defects. The former needs to segment the images to extract the image features, and it is hard to preselect features by manual control, so the registration speed and accuracy are not good enough. The latter has to compute over the entire image directly, so the registration time, speed, and robustness are also not satisfactory. To overcome the defects of present registration technology, this article proposes a non-rigid cardiac image classification registration method: precise registration based on the optical flow algorithm is performed after coarse registration by the SIFT algorithm. Feature point extraction is the basis of medical image registration, and its accuracy affects the matching result directly. The SIFT algorithm is based on local image features with good scale, rotation, and illumination invariance. The optical flow algorithm does not need to extract features; it speeds up the registration process by using the image gray information directly. The two algorithms are complementary: the SIFT algorithm improves the convergence speed of the optical flow algorithm, and the optical flow algorithm makes the registration result more accurate. In this article the interpolation method combines linear interpolation and the central difference method, and the final registration is achieved using a multi-resolution strategy.
2. Cardiac motion analysis based on the theory of non-rigid motion

Time-series cardiac images are studied in this article: a cardiac image is analyzed to build a cardiac image model that is used to simulate the deformation process, and the registration results are then analyzed. First of all, it is important and necessary to analyze the research objects using motion analysis. During the cardiac cycle, cardiac motion and deformation are complicated. According to Potel's findings [9], besides global cardiac deformation (such as contraction and expansion), there are also rigid cardiac motion and local cardiac deformation. About 90% of ventricular wall motion is toward or away from the ventricular center, which means that contraction and expansion are more important than translation, torsion, and rotation. Based on non-rigid motion analysis theory and Potel's research, cardiac motion can be divided into global and local motion (Fig. 2.1).

Based on the non-rigid motion theory, cardiac motion analysis is a motion decomposition process. It is also an estimation process from coarse to precise registration and from global to local. The main steps are as follows:
(1) A moving reference coordinate system is constructed to calculate the global motion and the cardiac deformation.
(2) Extended superquadrics are fitted to estimate global cardiac deformation in the moving reference coordinate system.
(3) The local cardiac deformation is estimated.
Fig. 2.1. Cardiac motion analysis.
2.1. Global cardiac motion analysis

First, the global rigid cardiac motion is estimated and separated from the global motion during motion decomposition. A local coordinate system is built whose axis is the cardiac principal axis and whose origin is the systole center. Global rigid cardiac motion between two moments can then be determined from the changes of the local coordinate system's location and direction [10]. Global cardiac deformation consists of contraction, expansion, and torsion along the long axis in three orthogonal directions. Global rigid motion compensation is performed before the global deformation analysis. The 3-D vessel skeleton points of two moments are fitted into a spatial surface when the local coordinate systems are concentric. At last, the global cardiac deformation is estimated by analyzing the deformation of this surface.

2.2. Local cardiac motion analysis

Sun Zheng used a 3-D vessel centerline motion estimation method [11] to get the correspondence between the vascular skeleton points of two moments (i.e., the displacement of the points). According to the calculated global motion and deformation parameters, the skeleton points of the second moment are recovered to their former positions. Without considering errors, the difference in position between corresponding points is the displacement caused by local motion.

3. SIFT feature vector matching algorithm and coarse registration

3.1. SIFT feature vector generation

Image feature points with rich information are very important to local image features. They are convenient to express and measure, and they can adapt to changes of lighting conditions, geometric distortion, and occlusion. The intuitive definition of a feature point is a point where the image gray scale changes strongly in two directions, such as a corner of an outline or the end of a segment. Image feature point detection plays an important role in image registration: not only can it greatly reduce computation, but it also reduces the effects of noise and adapts well to gray-scale transformation, image deformation, occlusion, and other disturbances. The SIFT feature is a local image feature. The SIFT algorithm is divided into two steps: the first is to produce the SIFT feature vectors, extracting feature vectors invariant to rotation, scale, and illumination from the reference and floating images; the second is to match the feature vectors [12,13]. The registration images are normalized before the SIFT feature vectors are generated: the image is expanded to double its size and the noise is eliminated by pre-filtering. In this way the bottom of the Gaussian
pyramid is obtained (i.e., the first layer of the first order). Four steps are needed to generate the SIFT vectors:

(1) The scale space is constructed and extrema are detected. Scale transformation is performed on the registration images with a Gaussian kernel, scale-space sequences are obtained at multiple scales, and scale-space features are extracted from these sequences. Formula (1) is the 2-D Gaussian kernel function, where σ is the scale parameter (standard deviation) of the Gaussian distribution:

G(x, y, \sigma) = \frac{1}{2\pi\sigma^2} e^{-(x^2 + y^2)/2\sigma^2}    (1)

The scale-space representation L(x, y, σ) is obtained by convolving the 2-D image I(x, y) with the Gaussian kernel, as formula (2) shows:

L(x, y, \sigma) = G(x, y, \sigma) * I(x, y)    (2)

where * denotes convolution, L represents the scale space, (x, y) is a point of image I, and σ is the scale factor: the larger the scale factor, the smoother the image, and vice versa. Choosing an appropriate scale factor is important for establishing the scale space. The DOG (Difference-of-Gaussian) pyramid is constructed by subtracting adjacent scale images. The Gaussian pyramid usually has 4 orders with 5 layers in every order. The constructions of the Gaussian pyramid and the DOG pyramid are shown in Tables 3.1 and 3.2.

Table 3.1
The construction of the Gaussian pyramid (scale of each layer).

Order   Layer 1   Layer 2   Layer 3   Layer 4   Layer 5
1       σ         kσ        k²σ       k³σ       k⁴σ
2       k²σ       k³σ       k⁴σ       k⁵σ       k⁶σ
3       k⁴σ       k⁵σ       k⁶σ       k⁷σ       k⁸σ
4       k⁶σ       k⁷σ       k⁸σ       k⁹σ       k¹⁰σ

Table 3.2
The construction of the DOG pyramid (scale of each layer).

Order   Layer 1   Layer 2   Layer 3   Layer 4
1       σ         kσ        k²σ       k³σ
2       k²σ       k³σ       k⁴σ       k⁵σ
3       k⁴σ       k⁵σ       k⁶σ       k⁷σ
4       k⁶σ       k⁷σ       k⁸σ       k⁹σ

In the constructed DOG scale-space pyramid, if a point's value is the maximum or minimum among the 26 adjacent pixels in the present layer and the upper and lower layers of the DOG pyramid, the point is considered a feature point at this scale, as Fig. 3.1 shows.

Fig. 3.1. Determining an extreme point of scale space.

(2) The feature points are localized, and unstable edge points and low-contrast feature points are eliminated. Because the DOG value is sensitive to noise and edges, the local extrema detected in DOG scale space must pass further tests before they are accepted as feature points.

(3) The magnitude and direction of each feature point are determined. Every feature point is assigned one or several directions so that the operator remains rotation invariant; the gradient direction distribution of the pixels in the neighborhood of the feature point is used in this step. The modulus and direction of a feature point are given by formulas (3) and (4):

m(x, y) = \sqrt{(L(x+1, y) - L(x-1, y))^2 + (L(x, y+1) - L(x, y-1))^2}    (3)

\theta(x, y) = \tan^{-1}\frac{L(x, y+1) - L(x, y-1)}{L(x+1, y) - L(x-1, y)}    (4)

where (x, y) is the coordinate in multi-scale space and L is taken at the scale of the feature point. The sample range is a neighborhood window centered on the feature point, and the gradient directions of the neighborhood pixels are accumulated in a gradient orientation histogram. Its peak represents the main direction of the neighborhood gradient, which is taken as the direction of the feature point. Let the main peak be P1 and another peak be P2, and let P = P2/P1. If P is greater than 0.8, the corresponding direction is regarded as an auxiliary direction of the feature point. In this way a feature point may be assigned several different directions, which enhances the robustness of feature matching.

(4) The SIFT feature vectors are generated. The coordinate axes are rotated to the direction of the feature point, and a 16 × 16 window centered on the feature point is divided into 16 sub-blocks of size 4 × 4. A gradient histogram of eight directions is calculated in each 4 × 4 sub-block, yielding seed points with eight directions each, so every feature point generates a 128-D feature vector. The SIFT feature vectors thus eliminate the effects of geometric deformation (rotation, scale change, etc.); the feature vector length is then normalized, which eliminates the effect of illumination change. Once the SIFT feature vectors of the two registration images are generated, the next step is to match the feature vectors and realize the coarse registration.

3.2. Feature vectors matching and coarse registration realization

After the SIFT feature vectors are found, they are matched to establish the correspondences between the feature vectors of the two images.

(1) Let the feature point set of image M be F_m = \{f_1^{(m)}, f_2^{(m)}, \ldots, f_{L_m}^{(m)}\} and the feature point set of image N be F_n = \{f_1^{(n)}, f_2^{(n)}, \ldots, f_{L_n}^{(n)}\}, where L_m and L_n are the numbers of feature points in images M and N. The distance between two feature points is the Euclidean distance; when the feature vector dimension is k, the distance formula is

d(F_m, F_n) = \sqrt{\sum_{i=1}^{k} (f_i^{(m)} - f_i^{(n)})^2}    (5)

Feature matching is realized according to the distance-ratio criterion. The distance ratio is the ratio of the shortest Euclidean distance d1 to the second shortest Euclidean distance d2, where d1 and d2 are the distances from a feature point in one image to its nearest and second nearest feature points in the other image:

ratio < ε: success; otherwise: failure.    (6)
In formula (6), ε is a preset threshold and ratio is equal to d1/d2. If the distance ratio is smaller than the threshold, the feature point match is accepted; otherwise it is rejected.

(2) Mismatches are eliminated. The potential matching pairs obtained by the similarity measure inevitably contain some mismatches, so geometric constraints and other additional constraints are needed to eliminate them and improve robustness. The similarity transformation parameters are then obtained by the least squares method, and the position of the floating image within the reference image is found, which completes the coarse registration.
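To make these two steps concrete, the following minimal Python sketch performs SIFT extraction, distance-ratio matching per formulas (5) and (6), and a least-squares similarity fit for the coarse registration. It assumes OpenCV's SIFT implementation and grayscale inputs; the paper does not name an implementation, and the function and parameter names here (sift_coarse_registration, ratio_threshold) are illustrative only.

```python
import cv2
import numpy as np

def sift_coarse_registration(reference, floating, ratio_threshold=0.8):
    """Coarse registration: SIFT matching plus a least-squares similarity fit."""
    sift = cv2.SIFT_create()
    kp_r, des_r = sift.detectAndCompute(reference, None)   # reference features
    kp_f, des_f = sift.detectAndCompute(floating, None)    # floating features

    # Euclidean distances, formula (5); ratio test of formula (6):
    # accept a match only if d1/d2 < ratio_threshold.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des_f, des_r, k=2)
    good = [m for m, n in pairs if m.distance < ratio_threshold * n.distance]

    # Least-squares similarity transform (scale, rotation, translation):
    # x' = a*x - b*y + tx,  y' = b*x + a*y + ty.
    src = np.float32([kp_f[m.queryIdx].pt for m in good])
    dst = np.float32([kp_r[m.trainIdx].pt for m in good])
    n = len(good)
    A = np.zeros((2 * n, 4))
    b = np.zeros(2 * n)
    A[0::2] = np.c_[src[:, 0], -src[:, 1], np.ones(n), np.zeros(n)]
    A[1::2] = np.c_[src[:, 1],  src[:, 0], np.zeros(n), np.ones(n)]
    b[0::2], b[1::2] = dst[:, 0], dst[:, 1]
    (a, bb, tx, ty), *_ = np.linalg.lstsq(A, b, rcond=None)

    M = np.float32([[a, -bb, tx], [bb, a, ty]])
    coarse = cv2.warpAffine(floating, M, reference.shape[::-1])
    return coarse, M
```

Each accepted match contributes two linear equations in the four unknowns (a, b, t_x, t_y), so the similarity parameters are obtained directly by linear least squares.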
4. The precise registration based on optical flow model

4.1. The optical flow algorithm

The optical flow matching algorithm is an image registration method based on intensity and elasticity. Registration methods based on image intensity, which use the gray information directly and avoid a feature extraction process, have received wide attention and developed quickly [14]. The idea of the algorithm is as follows [15], where F is the floating image and R is the reference image:

V(x) \cdot \nabla R(x) = F(x) - R(x)    (7)

F(x) and R(x) are the gray values of images F and R at coordinate position x, V(x) is the offset from image F to image R, and \nabla R(x) is the gray gradient of image R at position x. The offset from image F to image R is what image registration has to determine:

V(x) = \frac{(F(x) - R(x))\,\nabla R(x)}{\|\nabla R(x)\|^2}    (8)

If \nabla R(x) approaches zero, the offset V goes to infinity. The problem is solved by adding a component to the denominator:

V(x) = \frac{(F(x) - R(x))\,\nabla R(x)}{\|\nabla R(x)\|^2 + (F(x) - R(x))^2}    (9)

The image deformation can be controlled to some extent by formula (9), and the transformed image is obtained using local image information. After each iteration the offset field is smoothed to keep the transformation continuous and the image topological structure stable over the global range; a Gaussian filter realizes this regularization, as formula (10) shows:

V_{n+1}(x) = G_\sigma \otimes \left( V_n(x) + \frac{(F(x) - R(x))\,\nabla R(x)}{\|\nabla R(x)\|^2 + (F(x) - R(x))^2} \right)    (10)

The Gaussian filter is G_\sigma(x) = e^{-x^2/2\sigma^2}. The reference image gradient reflects the relationship of adjacent pixels in the reference image and is the internal force that causes the offset of the pixels, while the differences of corresponding pixels in the two images are the external forces by which the image is deformed. The standard deviation of the Gaussian filter is called the elasticity coefficient.

The mutual information gradient is the derivative of the mutual information between the two images with respect to the spatial displacement vector. Based on the initial deforming force, the mutual information gradient is regarded as another force: the direction in which the mutual information increases is the direction of image transformation, and when the mutual information reaches its maximum the registration finishes:

V_{n+1}(x) = G_\sigma \otimes \left( V_n(x) + \frac{(F(x) - R(x))\,\nabla R(x)}{\|\nabla R(x)\|^2 + (F(x) - R(x))^2} + \alpha \nabla MI(V_n(x)) \right)    (11)

In formula (11), \nabla MI(V_n(x)) is the gradient of the mutual information between the two images at pixel x with respect to the current transformation, and \alpha is a positive constant that represents the iteration step.

4.2. Mutual information gradient

In the mutual information gradient theory of Hermosillo [16,17], the reference image R is taken as the target of the floating image F for registration, and a point x in image R is mapped to a point x' in image F by the local transformation U = x + V(x). The mutual information gradient is the derivative of the mutual information between the two images with respect to the spatial displacement vector V, and the joint intensity distribution P_V^{R,F}(I_1, I_2) is changed into a continuous function using the Parzen window, where I_1 is the gray scale of the reference image at point x and I_2 is the gray scale of the floating image at point x'. The mutual information of the transformed image F and the reference image R is:

I(V) = \iint P_V^{R,F}(I_1, I_2)\, \log \frac{P_V^{R,F}(I_1, I_2)}{P^R(I_1)\, P_V^F(I_2)}\, dI_1\, dI_2    (12)

If the deformation field V is perturbed to V + \varepsilon H, the variation is:

\frac{\partial I(V + \varepsilon H)}{\partial \varepsilon}\Big|_{\varepsilon=0} = \iint \frac{\partial P_{V+\varepsilon H}^{R,F}(I_1, I_2)}{\partial \varepsilon}\Big|_{\varepsilon=0} \left( 1 + \log \frac{P_V^{R,F}(I_1, I_2)}{P^R(I_1)\, P_V^F(I_2)} \right) dI_1\, dI_2 - \iint \frac{P_V^{R,F}(I_1, I_2)}{P_V^F(I_2)}\, \frac{\partial P_{V+\varepsilon H}^F(I_2)}{\partial \varepsilon}\Big|_{\varepsilon=0}\, dI_1\, dI_2    (13)

The second integral vanishes, because integrating the joint density over I_1 gives the marginal density, whose total mass is constant:

\iint \frac{P_V^{R,F}(I_1, I_2)}{P_V^F(I_2)}\, \frac{\partial P_{V+\varepsilon H}^F(I_2)}{\partial \varepsilon}\, dI_1\, dI_2 = \frac{\partial}{\partial \varepsilon} \int \left( \int P_{V+\varepsilon H}^{R,F}(I_1, I_2)\, dI_1 \right) dI_2 = \frac{\partial}{\partial \varepsilon} \int P_{V+\varepsilon H}^F(I_2)\, dI_2 = 0    (14)

The overlapped part of the two images is used to estimate the joint gray-level distribution, and the number of pixels belonging to the overlapped part \omega is N. A 2-D Parzen window function \psi_\delta of width \delta is used in the estimation; the Parzen window algorithm, which estimates a stochastic variable's probability density directly from measured samples, is a precise non-parametric estimation method:

P_{V+\varepsilon H}^{R,F}(I_1, I_2) = \frac{1}{N} \int_\omega \psi_\delta\big(R(x) - I_1,\; F(x + V(x) + \varepsilon H(x)) - I_2\big)\, dx    (15)

The derivative of P_{V+\varepsilon H}^{R,F}(I_1, I_2) is easily obtained by calculation, where \psi'_\delta denotes the derivative of \psi_\delta with respect to its second argument:

\frac{\partial P_{V+\varepsilon H}^{R,F}(I_1, I_2)}{\partial \varepsilon} = \frac{1}{N} \int_\omega \psi'_\delta\big(R(x) - I_1,\; F(x + V(x) + \varepsilon H(x)) - I_2\big)\, \nabla F(x + V(x) + \varepsilon H(x)) \cdot H(x)\, dx    (16)

Setting \varepsilon equal to 0 and substituting into formula (13) gives formula (17), with L_V(I_1, I_2) = 1 + \log\big(P_V^{R,F}(I_1, I_2)/(P^R(I_1) P_V^F(I_2))\big):

\frac{\partial I(V + \varepsilon H)}{\partial \varepsilon}\Big|_{\varepsilon=0} = \frac{1}{N} \iint \int_\omega L_V(I_1, I_2)\, \psi'_\delta\big(R(x) - I_1,\; F(x + V(x)) - I_2\big)\, \nabla F(x + V(x)) \cdot H(x)\, dx\, dI_1\, dI_2    (17)

In formula (17), \nabla F(x + V(x)) is the gray gradient of the floating image F. With convolution denoted by \otimes, formula (17) can be expressed as formula (18):

\frac{\partial I(V + \varepsilon H)}{\partial \varepsilon}\Big|_{\varepsilon=0} = \frac{1}{N} \int_\omega \left[ \psi_\delta \otimes \frac{\partial L_V}{\partial I_2} \right]\big(R(x), F(x + V(x))\big)\, \nabla F(x + V(x)) \cdot H(x)\, dx    (18)

where \psi_\delta(I_1, I_2) = K_\delta(I_1) K_\delta(I_2), K_\delta(T) = \frac{1}{\sqrt{2\pi\delta^2}} \exp\left(-\frac{T^2}{2\delta^2}\right), and

\frac{\partial L_V}{\partial I_2} = \frac{1}{P_V^{R,F}(I_1, I_2)}\, \frac{\partial P_V^{R,F}(I_1, I_2)}{\partial I_2} - \frac{1}{P_V^F(I_2)}\, \frac{\partial P_V^F(I_2)}{\partial I_2}    (19)

The mutual information gradient is then given by formula (20):

\nabla I_V = \frac{1}{N} \left[ \psi_\delta \otimes \frac{\partial L_V}{\partial I_2} \right]\big(R(x), F(x + V(x))\big)\, \nabla F(x + V(x))    (20)

The mutual information between the two images cannot increase unless each pixel x of the floating image shifts along [\nabla_x I_V(x), \nabla_y I_V(x)]^T.
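As a concrete reading of formula (15) and of the histogram-convolution shortcut described in Section 4.3 below, the sketch here estimates the Parzen-window joint density and the weighting term L_V used in formulas (17)–(20). This is an illustrative reconstruction, not the authors' code; the bin count, window width, and the assumption of 8-bit gray levels are choices made for the example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def parzen_joint_density(R, F_warped, bins=64, delta=1.5):
    """Formula (15): joint gray-level density of the overlapped region,
    i.e. a joint histogram smoothed by the 2-D Gaussian Parzen window
    psi_delta (the smoothing below is the kernel-histogram convolution
    described in Section 4.3). Assumes 8-bit gray levels."""
    joint, _, _ = np.histogram2d(R.ravel(), F_warped.ravel(),
                                 bins=bins, range=[[0, 256], [0, 256]])
    joint = gaussian_filter(joint, sigma=delta)  # convolve with K_delta x K_delta
    return joint / joint.sum()                   # normalize to a density

def mi_weight(pxy):
    """L_V(I1, I2) = 1 + log(P(I1, I2) / (P(I1) P(I2))), the bracketed
    weighting term kept from formula (13) and used in formulas (17)-(20)."""
    px = pxy.sum(axis=1, keepdims=True)          # marginal of the reference image
    py = pxy.sum(axis=0, keepdims=True)          # marginal of the floating image
    with np.errstate(divide='ignore', invalid='ignore'):
        lv = 1.0 + np.log(pxy / (px * py))
    return np.nan_to_num(lv, nan=0.0, posinf=0.0, neginf=0.0)  # empty bins -> 0
```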
4.3. The implementation process of precise registration based on optical flow model

The mutual information gradient that Hermosillo put forward has high precision but high computational complexity, because all pixels of the image are traversed to compute it. To improve the efficiency of the registration algorithm, the Parzen-window estimate of the joint gray-level probability density can be obtained by convolving a 2-D Gaussian kernel with the joint histogram. The spatial discretization of the image gray gradient caused by image sampling depends strongly on the chosen interpolation method; linear interpolation and the central difference are combined in this article, and a multi-resolution strategy is adopted to implement the algorithm. The multi-resolution method improves the execution speed effectively and avoids local extrema. The registration proceeds from coarse to precise: less time is spent on coarse registration at low resolution and more time on precise registration at high resolution. The offset transformation of the low resolution is supersampled and taken as the initial transformation of the next higher resolution [18]. The algorithm is as follows, and a sketch of this loop appears after the steps:

Step 1: The initial offset V_I of pixel I, taken from the position vector of the floating image, is set to 0, or the initial offset is obtained from the rigid coarse registration. The reference image and the floating image are used to construct a Gaussian multi-resolution pyramid with K levels.
Step 2: If the resolution level is less than K, the position vector of the floating image pixel is x_I^N = x_I + V_I^{N-1} after N iterations, and the offset is calculated by formula (11).
Step 3: If the mutual information increment between the two images is less than the preset threshold, the iteration is regarded as convergent at the current resolution level and the process moves to the next resolution (Step 4); otherwise iteration N + 1 starts.
Step 4: The offset transformation obtained at the previous resolution is oversampled to form the initial offset transformation, and the iteration restarts from Step 2.
Step 5: After the iterations at all resolutions are finished, the obtained offset field V is applied to the floating image, and the registered image is achieved after interpolation.
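A minimal NumPy/SciPy sketch of Steps 1–5 follows. It implements only the gray-difference force of formula (10) with Gaussian regularization, replaces the mutual-information increment test of Step 3 with a fixed iteration count, and omits the α∇MI force of formula (11); the function names are illustrative, not the authors' implementation, and the inputs are assumed to be float grayscale arrays.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates, zoom

def demons_step(R, F_warped, V, sigma_elastic=1.0):
    """One iteration of formula (10): gray-difference force followed by
    Gaussian regularization. V has shape (2, H, W): row/column offsets."""
    diff = F_warped - R                           # external force F(x) - R(x)
    gy, gx = np.gradient(R)                       # gray gradient of the reference
    denom = gy**2 + gx**2 + diff**2 + 1e-9        # stabilized denominator, formula (9)
    V = V + diff * np.stack([gy, gx]) / denom     # additive offset update
    return gaussian_filter(V, sigma=(0, sigma_elastic, sigma_elastic))

def warp(F, V):
    """Resample the floating image through the offset field (linear interp.)."""
    rows, cols = np.mgrid[0:F.shape[0], 0:F.shape[1]].astype(float)
    return map_coordinates(F, [rows + V[0], cols + V[1]],
                           order=1, mode='nearest')

def register_multiresolution(R, F, levels=3, iters=50):
    """Steps 1-5 of Section 4.3, coarsest level first."""
    V = None
    for k in reversed(range(levels)):
        s = 2.0 ** k
        Rk, Fk = zoom(R, 1 / s, order=1), zoom(F, 1 / s, order=1)
        if V is None:
            V = np.zeros((2,) + Rk.shape)         # Step 1: zero initial offset
        else:                                     # Step 4: upsample previous offset
            fy = Rk.shape[0] / V.shape[1]
            fx = Rk.shape[1] / V.shape[2]
            V = np.stack([zoom(V[0], (fy, fx), order=1) * fy,
                          zoom(V[1], (fy, fx), order=1) * fx])
        for _ in range(iters):                    # Steps 2-3 (fixed count stands in
            V = demons_step(Rk, warp(Fk, V), V)   # for the MI-increment test)
    return warp(F, V), V                          # Step 5: final resampling
```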
5. Experimental results and analysis

To validate the feasibility and correctness of the proposed registration algorithm, several groups of different images were selected for the experiments. The first group consists of cardiac CT images of the same modality at different points of a time series, with image size 256 × 256. The second group consists of cardiac images of the same modality at certain times, with image size 512 × 512. All experiments were performed on a computer with 2 GB of memory and a 2.4 GHz CPU; the program was written in Visual C++ 2005 under the Windows operating system. Widely accepted indicators for evaluating non-rigid registration results were selected for the evaluation, including the Mean Squared Error (MSE), the Correlation Coefficient (COEF) [19], and the Normalized Mutual Information (NMI) [20].

5.1. 2-D CT heart image registration experiment (I)

To validate the feasibility and rapidity of the proposed registration algorithm, the cardiac time-series images of size 256 × 256 are registered. Fig. 5.1 shows the images to be registered: Fig. 5.1(1) is the reference image and Fig. 5.1(2) is the floating image.

Fig. 5.1. The registration images.

Fig. 5.2 shows the result of feature matching and coarse registration. Fig. 5.2(1) shows the feature vectors produced from the reference image, Fig. 5.2(2) the feature vectors produced from the floating image, Fig. 5.2(3) the result of feature matching, and Fig. 5.2(4) the registered image after coarse registration. The image after coarse registration is then registered with the reference image using optical flow registration; the precise registration result is shown in Fig. 5.3.

To validate the feasibility and correctness of the proposed registration algorithm, the feature-based algorithm and the B-spline-based algorithm were selected for comparison. The registration results are shown in Fig. 5.4: Fig. 5.4(1) is the feature-based registration result and Fig. 5.4(2) is the B-spline-based registration result. Comparison results for the main parameters are shown in Table 5.1.

5.2. 2-D CT heart image registration experiment (II)

In this experiment, 2-D CT images of size 512 × 512 are used. Fig. 5.5(1) is the floating image and Fig. 5.5(2) is the reference image.
Table 5.1
The performance of the registration methods (experiment I).

Registration method      RMS       Correlation   Normalized mutual information
Based on feature         19.0304   0.98496       1.63253
Original optical flow    18.1256   0.98723       1.70825
Proposed method          17.033    0.99237       1.81743
Fig. 5.4. The registration results of different algorithms.
Fig. 5.5. The registration images.
Fig. 5.2. The registration results of SIFT algorithm.
Fig. 5.3. The result of precise registration.
Fig. 5.6 shows the result of feature matching and coarse registration. Fig. 5.6(1) shows the feature vectors produced from the reference image, Fig. 5.6(2) the feature vectors produced from the floating image, Fig. 5.6(3) the result of feature matching, and Fig. 5.6(4) the registered image after coarse registration. The image after coarse registration is then registered with the reference image using optical flow registration; the fine registration result is shown in Fig. 5.7. As shown in Fig. 5.8, Fig. 5.8(1) is the feature-based registration result and Fig. 5.8(2) is the B-spline-based registration result. Comparison results for the main parameters are shown in Table 5.2.

Comparison of the parameters in the tables shows that, with the registration algorithm put forward in this article, the root mean square decreases while the normalized mutual information and the correlation increase after registration, all of which is more advantageous than the feature-based method and the original optical flow method. This shows that the similarity of the registered images is high and the registration result is satisfactory.
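For reference, the three indicators reported in Tables 5.1 and 5.2 can be computed as in the following sketch, assuming RMS is the root-mean-square gray difference, the correlation is Pearson's coefficient, and NMI is (H(A) + H(B))/H(A, B) estimated from a binned joint histogram; the bin count is a choice made for this example, not a value given in the paper.

```python
import numpy as np

def rms_error(A, B):
    """Root-mean-square gray-level difference (the RMS column)."""
    return np.sqrt(np.mean((A.astype(float) - B.astype(float)) ** 2))

def correlation_coefficient(A, B):
    """Pearson correlation of the gray levels (the Correlation column)."""
    return np.corrcoef(A.ravel(), B.ravel())[0, 1]

def normalized_mutual_information(A, B, bins=64):
    """NMI = (H(A) + H(B)) / H(A, B) from a binned joint histogram
    (the Normalized mutual information column; values above 1 indicate
    statistical dependence, as in Tables 5.1 and 5.2)."""
    joint, _, _ = np.histogram2d(A.ravel(), B.ravel(), bins=bins)
    pxy = joint / joint.sum()

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    return (entropy(pxy.sum(axis=1)) + entropy(pxy.sum(axis=0))) / entropy(pxy.ravel())
```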
Table 5.2
The performance of the registration methods (experiment II).

Registration method      RMS       Correlation   Normalized mutual information
Based on feature         16.1413   0.97962       1.53356
Original optical flow    16.3575   0.98334       1.66754
Proposed method          14.7855   0.98903       1.80545
Fig. 5.6. The coarse registration results and feature matching.

Fig. 5.7. The precise registration results.

Fig. 5.8. The registration results of different algorithms.

6. Conclusion

This article studies a non-rigid medical image registration algorithm based on the optical flow model and puts forward a classification registration solution. First, the SIFT algorithm is used for coarse registration, and then precise registration is achieved using the optical flow model algorithm. Feature point extraction is the basis of medical image registration, and its accuracy directly affects the result of image matching. SIFT, which is based on local image features, is unaffected by scale transformation, rotation, lighting conditions, and so on. The optical flow algorithm does not require a feature extraction process, uses the image gray information directly, and registers quickly. The two algorithms are complementary: the SIFT algorithm improves the convergence speed of the optical flow algorithm, and the optical flow model algorithm makes the registration result more accurate. The experimental results prove that the algorithm improves the accuracy of non-rigid medical image registration, the convergence speed, the anti-noise ability, and the robustness of image registration, which gives it advantages in image registration.

Acknowledgements

This work was supported by the National Natural Science Foundation of China (61179019, 61261028), a Ministry of Education project (Z2009-1-01033), the Inner Mongolia Natural Science Foundation (2010MS0907), and Inner Mongolia college research projects (NJ10097).

References
[1] H. Moravec, Rover visual obstacle avoidance, in: International Joint Conference on Artificial Intelligence, San Francisco, CA, Morgan Kaufmann Publishers Inc., 1987, pp. 785–790.
[2] C. Harris, M. Stephens, A combined corner and edge detector, in: Fourth Alvey Vision Conference, Manchester, UK, 1988, pp. 147–151.
[3] D.G. Lowe, Object recognition from local scale-invariant features, in: International Conference on Computer Vision, IEEE Computer Society, Washington, DC, 1999, pp. 1150–1157.
[4] T. Lindeberg, Scale space theory: a basic tool for analyzing structures at different scales, J. Appl. Stat. 21 (1994) 224–270.
[5] J.P.W. Pluim, J.M. Fitzpatrick, Image registration, IEEE Trans. Med. Imaging 22 (11) (2003).
[6] X. Lu, Y. Zhao, B. Zhang, H. Ma, Study on non-rigid medical image registration based on optical flow model, in: 2012 International Workshop on Image Processing and Optical Engineering, SPIE, United States, 2011.
[7] H. Van Luong, J.M. Kim, A massively parallel approach to affine transformation in medical image registration, in: 11th IEEE International Conference on High Performance Computing and Communications, 2009, pp. 117–123.
[8] B. Fang, H. Tang, Non-rigid medical image registration based on thin plate splines, Comput. Simul. 27 (6) (2010) 275–278.
[9] M.J. Potel, S.A. MacKay, J.M. Rubin, et al., Three-dimensional left ventricular wall motion in man: coordinate systems for representing wall movement direction, Invest. Radiol. 19 (6) (1984) 499–509.
[10] Z. Sun, D. Yu, X. Chen, et al., 3-D heart motion model based on coronary artery angiographic image sequence, Chin. J. Biomed. Eng. 24 (2) (2005) 194–198.
[11] Z. Sun, D. Yu, J. Huang, et al., 3-D motion estimation of coronary artery from single-plane cineangiogram sequences, J. Biomed. Eng. 23 (2) (2006) 428–432.
[12] Y. Dong, Automatic Registration Based on Improved SIFT for Medical Microscopic Sequence Images, Harbin Engineering University, Harbin, 2009.
[13] G. Wang, X. Chen, Study on SIFT feature matching algorithm, J. Yancheng Inst. Technol. Nat. Sci. Ed. 20 (2) (2007) 1–5.
[14] X. Lin, T. Qiu, et al., The study of active demons algorithm for deformable image registration, Chin. J. Biomed. Eng. 27 (4) (2008) 636–640.
[15] H. Zhang, J. Zhang, J. Sun, Non-rigid medical image registration based on improved Demons algorithm, Opt. Precision Eng. 15 (1) (2007) 145–150.
[16] M.B. Zadeh, C. Jutten, K. Nayebi, Differential of the mutual information, IEEE Signal Process. Lett. 11 (1) (2004) 48–51.
[17] C. Chefd'hotel, G. Hermosillo, et al., A variational approach to multi-modal image matching, in: Proceedings of the IEEE Workshop on Variational and Level Set Methods (VLSM'01), Canada, 2001, pp. 21–28.
[18] H. Zhang, Research of Medical Image Registration Algorithms, Tianjin University, 2007, pp. 72–73.
[19] W. Wang, W. Lu, S. Bao, et al., Image registration and fusion in medical imaging, CT Theory Appl. 6 (2) (1997) 34–39.
[20] G. Chen, The Research and Implementation of Non-rigid Medical Image Registration, Nanjing University of Science and Technology, 2009, pp. 13–17.