Medical Image Analysis 11 (2007) 540–554
www.elsevier.com/locate/media

Boundary element method-based regularization for recovering of LV deformation

Ping Yan *, Albert Sinusas, James S. Duncan

Biomedical Engineering, Radiology and Medicine Departments, Yale University, P.O. Box 208042, New Haven, CT 06520, USA

Received 4 August 2006; received in revised form 17 April 2007; accepted 26 April 2007; available online 22 May 2007

* Corresponding author. E-mail address: [email protected] (P. Yan).
Abstract

The quantification of left ventricular (LV) deformation from noninvasive image sequences is an important clinical problem. To date, feature information from magnetic resonance (MR), computed tomographic (CT) or echocardiographic image data has been assembled with the help of different regularization models to estimate LV deformation. The currently available regularization models have tradeoffs related to accuracy, lattice density, physical plausibility and computation time. This paper introduces a new regularization model based on the boundary element method (BEM) which can overcome these tradeoffs. We then employ this new regularization model with the generalized robust point matching (GRPM) strategy to estimate dense displacement fields and strains from 3D LV image sequences. The approach is evaluated on in vivo cardiac magnetic resonance image sequences, and all results are compared to displacements found using implanted markers, taken to be a gold standard. The approach is also evaluated on 4D real-time echocardiographic image sequences, and the results demonstrate that it is capable of tracking LV deformation in echocardiography.

© 2007 Elsevier B.V. All rights reserved.

Keywords: BEM; GRPM; Biomechanical model
1. Introduction

The estimation of left ventricular (LV) deformation has been an area of active research within the medical image analysis community, as ischemic heart disease remains a major clinical problem. Many methods have been proposed for the estimation of LV deformation. Methods based on tracking image features have the advantage of being applicable to modalities other than magnetic resonance, such as echocardiography. However, image-derived features are in general very noisy and available only at sparse and unevenly sampled spatial positions. Interpolation and smoothing from discrete features in either 2D or 3D space are required in many situations. For instance, in estimating LV deformation, displacements estimated from image feature information are usually noisy and located at
sparse positions. To form robust, dense displacement estimates, a regularization model is required to smoothly propagate the displacements associated with the feature points onto regularly sampled dense positions in the domain. Here, we briefly review several popular regularization models. Generic models have no particular prior information incorporated except for some generic smoothness constraints. Typical examples are radial basis functions (RBF) (Arad et al., 1994), free form deformation (FFD) (Sederberg and Parry, 1986; Hsu et al., 1992), multilevel free form deformation (MFFD) (Lee et al., 1996) and extended free form deformation (EFFD) (Coquillart, 1990). RBF defines the interpolation function as a linear combination of radially symmetric basis functions, each centered at a data point. The unknown coefficients for the basis functions are determined by solving a linear system of equations; popular choices for the basis functions include Gaussians, multiquadrics, etc. Cachier and
Ayache proposed a generalization of RBF to linear elasticity in Cachier et al. (2004). This method assumes that the underlying object is infinite, which is not suitable for interpolating cardiac motion. The B-Spline based regularization model FFD was introduced to directly manipulate an object. It calculates the pseudo-inverse of a matrix containing B-Spline basis function values to minimize the approximation error. This model suffers from a tradeoff between shape smoothness and approximation accuracy. Lee proposed a multilevel B-Spline interpolation algorithm, MFFD, to circumvent this tradeoff (Lee et al., 1996). In that algorithm, a function from a coarse lattice provides a rough approximation, which is further refined by functions derived from finer lattices. However, this method is computationally complicated and does not have physical constraints. The EFFD was first introduced for regularizing the cardiac motion field in previous work from our group (Lin and Duncan, 2004). The advantage of EFFD over FFD is that the EFFD model can use arbitrarily-shaped lattices, which makes it more appropriate for cardiac motion tracking. However, the EFFD also has a tradeoff between shape smoothness and approximation accuracy, and does not have physical constraints. When physical models are used, physical properties can constrain the deformation of the underlying objects. Such models can be based on the kinematics of continuum mechanics, so that the deformation has a physical meaning. For example, to estimate linear-elastic-like solid deformation, elastic models are adopted (Papademetris et al., 2002; Shi et al., 1999), and for viscous fluid deformation, fluid models are used (Christensen et al., 1996). Using the concepts of continuum mechanics to model the myocardium for image-based motion analysis was proposed in Shi et al. (1999) and Papademetris et al. (2002). A biomechanical model is usually constructed using the finite element method (FEM) (Clough and Tocher, 1965). This approach involves creating some type of triangulation on the set of data points to delimit local neighborhoods over which surface patches are defined. The FEM is considered closer to physical reality than the other methods, but has drawbacks: sensitivity to data distribution, expensive computation and a complicated formulation. A stronger constraint comes from statistical prior models or eigen-deformation models, which have become very popular recently. If we already know the basic patterns of deformation, we can incorporate this statistical information into the nonrigid mapping framework; the object cannot deform arbitrarily but only in some "eigen-pattern" way. Typical techniques of this category include the "Active Shape Model" (ASM) (Cootes et al., 1995), the "Active Appearance Model" (AAM) (Cootes et al., 2001) and FFD-based statistical motion models (Chandrashekara et al., 2003). The differences among them lie in the objects to be trained: ASM uses shape, AAM uses intensity and the FFD-based method uses control points. A training stage is required to generate a statistical prior. The main
disadvantage of such models is that they require training data covering all types of motion, which is often an unrealistic assumption, and they often lead to computational problems.

In this paper, we propose a novel BEM-based regularization model which can circumvent the tradeoffs of previous models. Over the past several decades, the BEM has emerged as a versatile and powerful alternative to the FEM for the solution of engineering problems (Brebbia and Dominguez, 1998). The most important advantage of the BEM over the FEM is that it only requires discretization of the surface rather than the volume; the number of nodes in the BEM is much smaller than in the FEM, hence the BEM can speed up the solution. When the BEM is used to solve an isotropic and linear problem, it is as accurate as the FEM. The other advantage of the BEM is its simple formulation. We use the BEM formulation and inverse techniques (Hsu et al., 1992) to construct a new regularization model, which can easily be combined with the GRPM framework. The limitation of the BEM is its difficulty with nonlinear and nonhomogeneous problems; in this paper, we assume that our problem is linear and homogeneous. In our experiments on recovering the deformation of simulated data, the new BEM-based regularization model yields the most accurate estimates when compared with B-Spline based regularization models, and it is also shown to be computationally efficient.

2. Methods

2.1. B-Spline based regularization models: FFD and EFFD

The FFD is a technique for deforming solid geometric models in a free-form manner. It was first proposed by Sederberg and Parry (1986). A good physical analogy is to consider a rubber-like box in which the object that we want to deform is embedded: to deform the object, one just needs to deform the box, which is defined by the control points. The reader can refer to Sederberg and Parry (1986) for the details of the mathematical formulation; here, we only introduce some important principles. Let \Omega = \{(x, y, z) \mid 0 \le x < L, 0 \le y < M, 0 \le z < N\} be a domain in xyz-space, and consider a set of scattered points P = \{(x_c, y_c, z_c)\} in \Omega. \Phi is an (L + 3) \times (M + 3) \times (N + 3) lattice which spans the integer grid in \Omega, and \phi_{ijk} is the value of the ijk-th control point on the lattice \Phi. The approximation function f is defined in terms of these control points by:

f(x, y, z) = \sum_{l=0}^{3} \sum_{m=0}^{3} \sum_{n=0}^{3} B_l(r) B_m(s) B_n(t) \phi_{(i+l)(j+m)(k+n)},    (1)

where i = \lfloor x \rfloor - 1, j = \lfloor y \rfloor - 1, k = \lfloor z \rfloor - 1, and r = x - \lfloor x \rfloor, s = y - \lfloor y \rfloor, t = z - \lfloor z \rfloor. \lfloor x \rfloor, \lfloor y \rfloor and \lfloor z \rfloor denote the largest integers not greater than x, y and z, respectively. B_l, B_m and B_n are the uniform cubic B-Spline basis functions defined as:
B_0(t) = (1 - t)^3 / 6,
B_1(t) = (3t^3 - 6t^2 + 4) / 6,
B_2(t) = (-3t^3 + 3t^2 + 3t + 1) / 6,
B_3(t) = t^3 / 6,    (2)

where 0 \le t < 1. They serve to weight the contribution of each control point to f(x, y, z) based on its distance to (x, y, z). An example of FFDs applied to cardiac MR images is shown in Fig. 1. The approximation function f(x, y, z) is a function of 64 neighboring control points. With a coarse lattice, the estimates are smooth but may lose local detail; with a fine lattice, the estimates capture local detail but are not smooth.
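As an illustration (not part of the original formulation), the following sketch evaluates the approximation function of Eqs. (1) and (2) on a unit-spaced control lattice; the array layout and example values are our own assumptions.

import numpy as np

def bspline_basis(t):
    """Uniform cubic B-Spline basis functions B_0..B_3 of Eq. (2), 0 <= t < 1."""
    return np.array([
        (1 - t) ** 3 / 6.0,
        (3 * t ** 3 - 6 * t ** 2 + 4) / 6.0,
        (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6.0,
        t ** 3 / 6.0,
    ])

def ffd_evaluate(phi, x, y, z):
    """Evaluate f(x, y, z) of Eq. (1) on an (L+3) x (M+3) x (N+3) control lattice phi.

    Each point is influenced by its 4 x 4 x 4 = 64 neighboring control points.
    The paper's lattice index starts at -1, so +1 shifts it to 0-based array indexing.
    """
    i, j, k = int(np.floor(x)) - 1, int(np.floor(y)) - 1, int(np.floor(z)) - 1
    r, s, t = x - np.floor(x), y - np.floor(y), z - np.floor(z)
    Br, Bs, Bt = bspline_basis(r), bspline_basis(s), bspline_basis(t)
    f = 0.0
    for l in range(4):
        for m in range(4):
            for n in range(4):
                f += Br[l] * Bs[m] * Bt[n] * phi[i + l + 1, j + m + 1, k + n + 1]
    return f

# Example: a 10 x 10 x 10 domain with control values forming a smooth ramp.
L = M = N = 10
phi = np.fromfunction(lambda i, j, k: 0.1 * i, (L + 3, M + 3, N + 3))
print(ffd_evaluate(phi, 4.3, 5.7, 2.2))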
In comparison with FFD, EFFD uses non-parallelepipedical lattices to allow arbitrarily-shaped deformations. The formulation of EFFD is similar to that of FFD; the only difference is that the lattice is arbitrarily-shaped. An example of the EFFD applied to cardiac MR images is shown in Fig. 2. The arbitrarily-shaped lattice makes the EFFD a better model for cardiac deformation, but it still suffers from the tradeoff between smoothness and accuracy depending on the lattice density.

Fig. 1. 3D FFD mesh overlapped on cardiac MR images.

Fig. 2. 3D EFFD mesh overlapped on cardiac MR images.

2.2. The boundary element method (BEM)

The BEM is a technique for solving partial differential equations by reformulating a dense field problem as an integral equation over the boundary of the field (Brebbia and Dominguez, 1998); only the boundary needs to be partitioned. The deformation of an elastic object can be expressed by Navier's equation:

\mu \nabla^2 u + (\mu + \lambda) \nabla(\nabla \cdot u) = F,    (3)
where \mu and \lambda are the Lamé constants of the elastic material, u is the displacement vector and F is the external force. This biomechanical model is used not to predict deformation, but to regularize and constrain image-derived information. In this paper, we assume that F is zero. We can rewrite this equation as the integral equation known as Somigliana's identity (Brebbia and Dominguez, 1998):

u_l^i + \int_\Gamma p_{lk}^* u_k \, d\Gamma = \int_\Gamma u_{lk}^* p_k \, d\Gamma,    (4)

where u_{lk}^* and p_{lk}^* represent the fundamental solutions for displacement and traction at any point in the k direction when a unit load is applied at 'i' in the l direction. The fundamental solution is the solution of the elastic problem with the same material properties as the body under consideration, but corresponding to an infinite domain loaded with a concentrated unit point load. p is the traction vector (pressure on the surface) and \Gamma is the boundary.

The myocardium is anisotropic and nonlinear, but for simplicity the linear, isotropic myocardium model is widely used in the image analysis literature (Shi et al., 1999). Recently, several anisotropic and nonlinear myocardium models constructed with the FEM have been proposed (Papademetris et al., 2002; Hunter et al., 1996; MacKenna et al., 1996). The BEM is known to have difficulties with anisotropic and nonlinear materials: in anisotropic analysis, the closed form of the fundamental solution is hard to obtain and its calculation becomes very time-consuming. By using the Wilson and Cruse interpolation scheme (Wilson and Cruse, 1978), however, the computation time becomes almost the same as for isotropic analysis. We do not pursue this here and use the isotropic model in this paper. The Young's modulus of the myocardium is set to 0.05 GPa, which is roughly an average of the results in Hoffmeister et al. (1996). Although the myocardium is nearly incompressible, we set Poisson's ratio to 0.4 instead of 0.5, because a Poisson's ratio close to 0.5 causes numerical instabilities such as mesh locking (Babuska and Suri, 1992); 0.4 is almost incompressible but stays safely away from 0.5. For an isotropic material, the fundamental solutions for displacement u_{lk}^* and traction p_{lk}^* are taken from Brebbia and Dominguez (1998):

u_{lk}^* = \frac{1}{16 \pi \mu (1 - \nu) |r|} \left[ (3 - 4\nu) \delta_{lk} + r_{,l} r_{,k} \right],    (5)

p_{lk}^* = -\frac{1}{8 \pi (1 - \nu) |r|^2} \left\{ \frac{\partial r}{\partial n} \left[ (1 - 2\nu) \delta_{lk} + 3 r_{,l} r_{,k} \right] + (1 - 2\nu)(n_l r_{,k} - n_k r_{,l}) \right\},    (6)

where r is the distance from the load point to the point under consideration and \nu is Poisson's ratio. Note that r_{,l} is the derivative of r in the l direction, and n_l and n_k are the direction cosines of the normal with respect to the l and k directions.
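For concreteness, a small sketch of Eqs. (5) and (6) follows. It is not from the original paper; the function and variable names are ours, and the expressions simply transcribe the two equations as written above.

import numpy as np

def fundamental_solutions(load_pt, field_pt, normal, nu=0.4, E=0.05e9):
    """Kelvin fundamental solutions u*_{lk} (Eq. 5) and p*_{lk} (Eq. 6) in 3D.

    load_pt  : point where the unit load is applied
    field_pt : point under consideration
    normal   : unit outward normal at field_pt (needed for the traction kernel)
    """
    mu = E / (2.0 * (1.0 + nu))                      # shear modulus from E and nu
    d = np.asarray(field_pt, float) - np.asarray(load_pt, float)
    r = np.linalg.norm(d)
    dr = d / r                                       # r_{,l}: derivatives of r
    n = np.asarray(normal, float)
    drdn = dr @ n                                    # dr/dn
    delta = np.eye(3)

    u_star = ((3.0 - 4.0 * nu) * delta + np.outer(dr, dr)) / (
        16.0 * np.pi * mu * (1.0 - nu) * r)
    p_star = -(drdn * ((1.0 - 2.0 * nu) * delta + 3.0 * np.outer(dr, dr))
               + (1.0 - 2.0 * nu) * (np.outer(n, dr) - np.outer(dr, n))
               ) / (8.0 * np.pi * (1.0 - nu) * r ** 2)
    return u_star, p_star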
When 'i' is taken to the boundary, Eq. (4) becomes:

c_{lk}^i u_l^i + \int_\Gamma p_{lk}^* u_k \, d\Gamma = \int_\Gamma u_{lk}^* p_k \, d\Gamma.    (7)

When 'i' is at a point where the boundary is smooth, c_{lk}^i = \frac{1}{2} \delta_{lk}, with \delta_{lk} = 1 for l = k and \delta_{lk} = 0 for l \neq k. By dividing the boundary into a series of elements, Eq. (7) can be written as:

c_{lk}^i u^i + \sum_{j=1}^{N} \left\{ \int_{\Gamma_j} p_{lk}^* \Phi |J| \, d\xi_1 d\xi_2 \right\} u^j = \sum_{j=1}^{N} \left\{ \int_{\Gamma_j} u_{lk}^* \Phi |J| \, d\xi_1 d\xi_2 \right\} p^j,    (8)

where N is the number of elements on the boundary and \Phi is the shape function. \{\xi_1, \xi_2, \eta\} is the local coordinate system, where \xi_1, \xi_2 are oblique coordinates and \eta is in the direction of the normal. The Jacobian of the transformation is d\Gamma = |J| \, d\xi_1 d\xi_2. Eq. (8) can also be expressed in matrix form as:

H^{ij} u^j = G^{ij} p^j,    (9)

where

H^{ij} = \int_{\Gamma_j} p_{lk}^* \Phi |J| \, d\xi_1 d\xi_2 for i \neq j,   H^{ij} = \int_{\Gamma_j} p_{lk}^* \Phi |J| \, d\xi_1 d\xi_2 + c_{lk}^j for i = j,   and   G^{ij} = \int_{\Gamma_j} u_{lk}^* \Phi |J| \, d\xi_1 d\xi_2.

When 'i' is an interior point, Eq. (4) becomes:

u_l^i + \int_\Gamma p_{lk}^* u_k \, d\Gamma = \int_\Gamma u_{lk}^* p_k \, d\Gamma.    (10)

In matrix form:

u^i = \hat{G}^{ij} p^j - \hat{H}^{ij} u^j,    (11)

where \hat{G} and \hat{H} have the same definitions as G and H, but must be calculated anew for each internal point.
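Once the element integrals have been assembled into H, G, \hat{G} and \hat{H}, Eqs. (9), (13) and (11) reduce to linear algebra. The following sketch is our own illustration (the matrix assembly itself is omitted, and the function name is an assumption).

import numpy as np

def interior_displacement(H, G, G_hat, H_hat, u_boundary):
    """Given assembled BEM matrices, recover the displacement at one interior point.

    H, G         : (3J x 3J) boundary matrices of Eq. (9)
    G_hat, H_hat : (3 x 3J) matrices of Eq. (11) for the interior point
    u_boundary   : (3J,) known boundary nodal displacements
    """
    # Eq. (13): tractions implied by the boundary displacements
    p_boundary = np.linalg.solve(G, H @ u_boundary)
    # Eq. (11): displacement at the interior point
    return G_hat @ p_boundary - H_hat @ u_boundary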
2.3. The BEM-based regularization model

The procedure of this model is illustrated in Fig. 3. The red points and arrows in Fig. 3a are the feature points and estimated displacements derived from the image sequences. (For interpretation of the references to colour in Fig. 3, the reader is referred to the web version of this article.) In the first step, we map the displacements associated with the feature points to the boundary grid nodes, as shown in Fig. 3b. Then we interpolate the dense displacements on the dense grid, as shown in Fig. 3c.

Fig. 3. The procedure of the BEM-based regularization model: (a) displacements associated with scattered points; (b) model-interpolated displacements at the boundary nodes; and (c) model-interpolated dense displacement fields.

Let Q = {q_j, j = 1, 2, ..., J} be the nodes at regular positions on the boundary. Consider a set of scattered points A = {a_i, i = 1, 2, ..., I} extracted from the image, and let G = {g_l, l = 1, 2, ..., L} be the dense points at regular positions. Given known displacements associated with the scattered points in A, our goal is to find the optimal displacements for all the points in G. The points in A are the red points in Fig. 3a, the points in Q are the green points in Fig. 3b, and the points in G are the yellow points in Fig. 3c.

As described in the previous section, the displacements U(q_j) and tractions P(q_j) of the boundary nodes satisfy:

H U(q_j) = G P(q_j),    (12)

where H and G are the fundamental matrices for the boundary nodes. The tractions P(q_j) can then be written as:

P(q_j) = G^{-1} H U(q_j),    (13)

where G^{-1} is the inverse of G. With Eq. (11), the displacement at any point can be derived from the known boundary displacements U(q_j) and tractions P(q_j):

u(a_i) = \sum_{j=1}^{J} \left[ \hat{G} P(q_j) - \hat{H} U(q_j) \right],    (14)

where u(a_i) is the displacement at the point a_i, and \hat{G} and \hat{H} are the fundamental matrices of the point a_i. Substituting P(q_j) with Eq. (13), we get:

u(a_i) = \sum_{j=1}^{J} \left( \hat{G} G^{-1} H - \hat{H} \right) U(q_j) = \sum_{j=1}^{J} C_{ij} U(q_j),    (15)

where C = \hat{G} G^{-1} H - \hat{H}.

Here, we use a method similar to that of Hsu et al. (1992) and Lee et al. (1996) to determine the unknown displacements at the boundary nodes. We first consider one data point a_i in A and denote the measured displacement associated with a_i as \tilde{u}(a_i). From Eq. (15), \tilde{u}(a_i) is related to the displacements at the boundary nodes. For the displacement at the point a_i to take on the value \tilde{u}(a_i), the displacements \tilde{U}_i(q_j) at the boundary nodes must satisfy:

\tilde{u}(a_i) = \sum_{j=1}^{J} C_{ij} \tilde{U}_i(q_j).    (16)

There are many values of \tilde{U}_i(q_j) that solve Eq. (16). Here we choose the solution in the least-squares sense, such that:

\tilde{U}_i(q) = \arg\min \left( \sum_{j=1}^{J} \tilde{U}_i^2(q_j) \right).    (17)

Simple linear algebra using the pseudo-inverse gives the solution (Hsu et al., 1992):

\tilde{U}_i(q) = \frac{C_{ij} \tilde{u}(a_i)}{\sum_{j=1}^{J} C_{ij}^2}.    (18)

Now we consider all the data points in A. For each point in A, Eq. (18) can be used to determine the displacements of the boundary nodes. However, different points in A may assign different displacements to a boundary node. We therefore choose the displacements U(q) of the boundary nodes that minimize the weighted sum of squared differences between the displacements u(a_i) and \tilde{u}(a_i) over all points in A:

U(q) = \arg\min \left( \sum_{i=1}^{I} \lambda_i \| u(a_i) - \tilde{u}(a_i) \|^2 \right),    (19)

where \lambda_i is a weight based on the confidence in \tilde{u}(a_i). Both u(a_i) and \tilde{u}(a_i) must satisfy Eq. (15), thus:

U(q) = \arg\min \left( \sum_{i=1}^{I} \lambda_i \left\| \sum_{j=1}^{J} C_{ij} U(q_j) - \sum_{j=1}^{J} C_{ij} \tilde{U}_i(q_j) \right\|^2 \right),    (20)

where \tilde{U}_i(q_j) is defined by Eq. (18). Differentiating \sum_{i=1}^{I} \lambda_i \| \sum_{j=1}^{J} C_{ij} U(q_j) - \sum_{j=1}^{J} C_{ij} \tilde{U}_i(q_j) \|^2 with respect to U(q), we get:
U(q) = \frac{\sum_{i=1}^{I} \lambda_i C_{ij}^2 \tilde{U}_i(q)}{\sum_{i=1}^{I} \lambda_i C_{ij}^2}.    (21)

Once we know the displacements of all the boundary nodes, the displacement of any point g_l in the dense point-set G can be calculated by:

U(g_l) = \sum_{j=1}^{J} C_{lj} U(q_j).    (22)
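The interpolation defined by Eqs. (18), (21) and (22) can be summarized in a few lines of code. The sketch below is our own illustration, under the simplifying assumption that the coefficients C_ij act as scalar weights on each displacement component, as in the equations above.

import numpy as np

def bem_regularize(C, C_dense, u_meas, weights):
    """Propagate scattered, noisy displacements to dense positions.

    C       : (I x J) coefficients C_ij linking boundary nodes to feature points (Eq. 15)
    C_dense : (L x J) coefficients C_lj linking boundary nodes to dense points (Eq. 22)
    u_meas  : (I x 3) measured displacements u~(a_i)
    weights : (I,)   confidences lambda_i
    """
    I, J = C.shape
    num = np.zeros((J, u_meas.shape[1]))
    den = np.zeros(J)
    for i in range(I):
        # Eq. (18): least-norm boundary displacements explaining u~(a_i) alone
        U_i = np.outer(C[i], u_meas[i]) / np.sum(C[i] ** 2)
        # Accumulate the weighted least-squares combination of Eq. (21)
        num += weights[i] * (C[i] ** 2)[:, None] * U_i
        den += weights[i] * C[i] ** 2
    U_boundary = num / den[:, None]
    # Eq. (22): interpolate the dense displacement field
    return C_dense @ U_boundary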
Fig. 4 shows an example of the 3D BEM-based regularization model applied to cardiac MR images.

2.4. The BEM-GRPM computational framework

In this paper, we use the BEM-based regularization model with the GRPM strategy to estimate the deformation of the LV. Assume the feature point-sets A = {a_i, i = 1, 2, ..., I} and B = {b_k, k = 1, 2, ..., K} are derived from two subsequent images I_1 and I_2. The boundary node-set is Q = {q_j, j = 1, 2, ..., J} and the dense point-set is denoted as G = {g_l, l = 1, 2, ..., L}. To match set A onto B, we propose to minimize the following objective energy function:

E(M) = \sum_{i=1}^{I} \sum_{k=1}^{K} m_{ik} \left[ \| f(a_i) - b_k \|^2 + \lambda_A g(\| \kappa_A(a_i) - \kappa_B(b_k) \|^2) \right] + \lambda \| Lf \|^2 + T \sum_{i=1}^{I} \sum_{k=1}^{K} m_{ik} \log m_{ik} + T_0 \sum_{i=1}^{I} m_{i,K+1} \log m_{i,K+1} + T_0 \sum_{k=1}^{K} m_{I+1,k} \log m_{I+1,k},    (23)

where M is the correspondence matrix, g(·) is a strictly increasing function, and \lambda_A balances the significance of distance and curvature. A mesh is generated on the boundary of I_1.
T is the annealing temperature, which gradually decreases toward zero as the matching iterations proceed, and T_0 is the annealing temperature for outliers. \kappa_A(a_i) and \kappa_B(b_k) are the two principal curvatures at a_i and b_k, respectively. f is the non-rigid transformation function and L is an operator acting on f; in this paper, we use the BEM-based regularization model for f. The flowchart of BEM-GRPM is shown in Fig. 5, and the following steps summarize the scheme.

Step 1: Feature point extraction. The optimal feature points are those that best represent the shape of the object. Lin et al. extracted feature points by thresholding curvature (Lin et al., 2003). When image quality is poor, the feature points extracted with this method are not suitable for shape tracking. In this paper, we use the Canny edge detector to extract feature points. First, we manually segment the first frame, which is usually the end-diastolic frame. The region of interest (ROI) is chosen between the endocardium and epicardium. Inside the ROI, we use the Canny edge detector to derive points that are on or close to the boundary. The feature points extracted from a 3D echocardiographic image of a dog using the Canny edge detector and using thresholded curvature (Lin and Duncan, 2004) are shown in Fig. 6; the feature points extracted with the Canny edge detector represent the shape better.

Step 2: Estimate the correspondence matrix M between the points a_i and b_k. Unlike the standard RPM, the correspondence matrix M is calculated using both distance and shape information:
m_{ik} = \frac{1}{\sqrt{2 \pi T^2}} \exp\left( - \frac{ \| f(a_i) - b_k \|^2 + \lambda_A g(\| \kappa_A(a_i) - \kappa_B(b_k) \|^2) }{T} \right),    (24)

subject to

\sum_{i=1}^{I+1} m_{ik} = 1  and  \sum_{k=1}^{K+1} m_{ik} = 1.

We denote the outlier clusters as a_{I+1} and b_{K+1}.

Fig. 4. 3D BEM mesh overlapped on cardiac MR images.

Fig. 5. The flowchart of the BEM-GRPM.

Fig. 6. The feature points extracted by using thresholded curvature (left) and the Canny edge detector (right).
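A sketch of Eq. (24) is given below. The alternating row/column normalization used to approach the two summation constraints is our own assumption, following common soft-assign practice (the paper does not spell out this step), and g(·) is taken to be the identity.

import numpy as np

def correspondence_matrix(fa, b, kappa_a, kappa_b, T, lam_A=1.0, n_norm=20):
    """Estimate the correspondence matrix m_ik of Eq. (24).

    fa, b            : (I, 3) transformed feature points f(a_i) and (K, 3) target points b_k
    kappa_a, kappa_b : (I, 2) and (K, 2) principal curvatures
    T                : current annealing temperature
    """
    I, K = fa.shape[0], b.shape[0]
    d2 = np.sum((fa[:, None, :] - b[None, :, :]) ** 2, axis=2)              # ||f(a_i) - b_k||^2
    c2 = np.sum((kappa_a[:, None, :] - kappa_b[None, :, :]) ** 2, axis=2)   # curvature distance, g = identity
    m = np.exp(-(d2 + lam_A * c2) / T) / np.sqrt(2.0 * np.pi * T ** 2)

    # Pad with an outlier row a_{I+1} and column b_{K+1}, then alternate
    # row/column normalization so the two constraints below Eq. (24) are approached.
    m = np.pad(m, ((0, 1), (0, 1)), mode="constant", constant_values=1e-3)
    for _ in range(n_norm):
        m[:I, :] /= m[:I, :].sum(axis=1, keepdims=True)   # rows 1..I sum to 1
        m[:, :K] /= m[:, :K].sum(axis=0, keepdims=True)   # columns 1..K sum to 1
    return m[:I, :K]                                      # correspondences between real points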
Surface curvatures are important geometric features for the computer-aided analysis and diagnosis of 3D medical images. In general, there are two approaches to computing curvature estimates for isosurfaces of a volume data set (Huang et al., 2005). The first approach works directly on the gray-value information by exploiting the local differential structure of the image; it is favored for computational efficiency, but can yield erroneous results when the image is noisy. The second approach employs a surface patch fitting strategy; it is more accurate than the first but computationally expensive. In this paper, we use the first approach for computational reasons. One could utilize geometry-based curvature operators, in which a geometric surface for a region is extracted from the image and differentiated with respect to some parameterized space to yield curvatures (Do Carmo, 1976). However, curvature can be expressed directly in terms of partial derivatives of the gray-level image by using area-based operators (Ter Haar Romeny et al., 1994). For example, the Gaussian curvature \kappa_{gaussian} and the mean curvature \kappa_{mean} of a 3D volume are given by:

\kappa_{gaussian} = \frac{ L_x^2 L_{yy} L_{zz} + L_y^2 L_{xx} L_{zz} + L_z^2 L_{xx} L_{yy} }{ (L_x^2 + L_y^2 + L_z^2)^2 },    (25)

\kappa_{mean} = \frac{ L_x^2 (L_{yy} + L_{zz}) + L_y^2 (L_{xx} + L_{zz}) + L_z^2 (L_{xx} + L_{yy}) }{ 2 (L_x^2 + L_y^2 + L_z^2)^{3/2} },    (26)

where L(x, y, z) denotes the image function, L_x, L_y and L_z are the first derivatives of L in the x, y and z directions, and L_{xx}, L_{yy} and L_{zz} are the second partial derivatives in the x, y and z directions. The two principal curvatures \kappa_1 and \kappa_2 in the 3D case can be calculated from the Gaussian curvature \kappa_{gaussian} (25) and the mean curvature \kappa_{mean} (26):

\kappa_1 = \kappa_{mean} + \sqrt{ \kappa_{mean}^2 - \kappa_{gaussian} },    (27)

\kappa_2 = \kappa_{mean} - \sqrt{ \kappa_{mean}^2 - \kappa_{gaussian} }.    (28)
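The curvature computation of Eqs. (25)–(28) can be implemented with finite differences, for example as in the following sketch (our own illustration, not from the paper; in practice the volume should be smoothed before differentiation).

import numpy as np

def principal_curvatures(volume):
    """Gaussian, mean and principal curvatures of a 3D image, Eqs. (25)-(28)."""
    Lx, Ly, Lz = np.gradient(volume)          # first derivatives by central differences
    Lxx = np.gradient(Lx, axis=0)             # second partial derivatives
    Lyy = np.gradient(Ly, axis=1)
    Lzz = np.gradient(Lz, axis=2)

    grad2 = Lx**2 + Ly**2 + Lz**2 + 1e-12     # avoid division by zero in flat regions
    k_gauss = (Lx**2 * Lyy * Lzz + Ly**2 * Lxx * Lzz + Lz**2 * Lxx * Lyy) / grad2**2
    k_mean = (Lx**2 * (Lyy + Lzz) + Ly**2 * (Lxx + Lzz) + Lz**2 * (Lxx + Lyy)) / (2.0 * grad2**1.5)

    root = np.sqrt(np.maximum(k_mean**2 - k_gauss, 0.0))   # clamp small negative values
    return k_mean + root, k_mean - root                    # kappa_1, kappa_2 of Eqs. (27), (28)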
Step 3: Calculate corresponding points and confidence parameters. Let \hat{b}_i be the corresponding point to a_i and \lambda_i the confidence in the match:

\hat{b}_i = \sum_{k=1}^{K} m_{ik} b_k \Big/ \sum_{k=1}^{K} m_{ik}, \qquad \lambda_i = \sum_{k=1}^{K} m_{ik}.    (29)

When \lambda_i is very small, we treat the corresponding point as an outlier.

Step 4: Update the non-rigid transformation function. The non-rigid transformation function f is defined by:

f = \arg\min_f \sum_{i=1}^{I} \lambda_i \| f(a_i) - \hat{b}_i \|^2.    (30)

Here, f(a_i) = a_i + u(a_i) and \hat{b}_i = a_i + \tilde{u}(a_i).
\tilde{u}(a_i) and u(a_i) are the estimated and the smoothed displacements at the point a_i, respectively. Eq. (30) can then be written as:

u(a) = \arg\min_{u(a)} \sum_{i=1}^{I} \lambda_i \| u(a_i) - \tilde{u}(a_i) \|^2.    (31)

This equation is similar to Eq. (19). Using the interpolation method described in the previous section, we first calculate the displacements at the boundary nodes by Eq. (21), and then the smoothed displacement of each feature point a_i by:

u(a_i) = \sum_{j=1}^{J} C_{ij} U(q_j).    (32)

After obtaining the smoothed displacements of all the feature points, we update f(a_i): f(a_i) = a_i + u(a_i).

Fig. 7. The local coordinate system of the LV. Directional strains (radial, circumferential and longitudinal) are calculated based on the orthonormal basis {e_R, e_C, e_L}.
Step 5: Annealing process. If T is larger than a threshold, set T = T \cdot \delta and go back to Step 2, where \delta is the annealing rate of T.

Step 6: Dense displacement fields and strains. At the end of the iterations, we derive the dense displacement fields using Eq. (22). Once the dense displacement fields have been estimated, the strains can be calculated from them by a finite difference method. Given the dense displacement vector (u(x, y, z), v(x, y, z), w(x, y, z)) in the Cartesian coordinate system (x, y, z), the Lagrangian strain tensor E is defined by:

E = \begin{pmatrix} E_{xx} & E_{xy} & E_{xz} \\ E_{yx} & E_{yy} & E_{yz} \\ E_{zx} & E_{zy} & E_{zz} \end{pmatrix},    (33)

where E is symmetric, i.e. E_{xy} = E_{yx}, E_{xz} = E_{zx} and E_{yz} = E_{zy}, and

E_{xx} = \frac{\partial u}{\partial x} + \frac{1}{2}\left[ \left(\frac{\partial u}{\partial x}\right)^2 + \left(\frac{\partial v}{\partial x}\right)^2 + \left(\frac{\partial w}{\partial x}\right)^2 \right],
E_{yy} = \frac{\partial v}{\partial y} + \frac{1}{2}\left[ \left(\frac{\partial u}{\partial y}\right)^2 + \left(\frac{\partial v}{\partial y}\right)^2 + \left(\frac{\partial w}{\partial y}\right)^2 \right],
E_{zz} = \frac{\partial w}{\partial z} + \frac{1}{2}\left[ \left(\frac{\partial u}{\partial z}\right)^2 + \left(\frac{\partial v}{\partial z}\right)^2 + \left(\frac{\partial w}{\partial z}\right)^2 \right],
E_{xy} = \frac{1}{2}\left[ \frac{\partial u}{\partial y} + \frac{\partial v}{\partial x} + \frac{\partial u}{\partial x}\frac{\partial u}{\partial y} + \frac{\partial v}{\partial x}\frac{\partial v}{\partial y} + \frac{\partial w}{\partial x}\frac{\partial w}{\partial y} \right],
E_{yz} = \frac{1}{2}\left[ \frac{\partial v}{\partial z} + \frac{\partial w}{\partial y} + \frac{\partial u}{\partial y}\frac{\partial u}{\partial z} + \frac{\partial v}{\partial y}\frac{\partial v}{\partial z} + \frac{\partial w}{\partial y}\frac{\partial w}{\partial z} \right],
E_{zx} = \frac{1}{2}\left[ \frac{\partial w}{\partial x} + \frac{\partial u}{\partial z} + \frac{\partial u}{\partial z}\frac{\partial u}{\partial x} + \frac{\partial v}{\partial z}\frac{\partial v}{\partial x} + \frac{\partial w}{\partial z}\frac{\partial w}{\partial x} \right].    (34)

From the Lagrangian strain tensor, we can calculate the normal, shear, principal and directional strain values, which describe the local deformation of the myocardium (Spencer, 1980). For a general direction e, the strain along that direction is given by:

E_{ee} = e^T E e.    (35)
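For illustration, the strain computation of Eqs. (33)–(35) can be carried out with finite differences as sketched below; the function names and the grid spacing argument are our own assumptions.

import numpy as np

def lagrangian_strain(u, v, w, spacing=(1.0, 1.0, 1.0)):
    """Lagrangian strain tensor E (Eqs. 33-34) from a dense displacement field.

    u, v, w : 3D arrays with the x, y and z displacement components.
    Returns E with shape (3, 3, Nx, Ny, Nz).
    """
    # Displacement gradient F[i, j] = d(u_i)/d(x_j), by central differences
    comps = [u, v, w]
    F = np.empty((3, 3) + u.shape)
    for i, c in enumerate(comps):
        gx, gy, gz = np.gradient(c, *spacing)
        F[i, 0], F[i, 1], F[i, 2] = gx, gy, gz

    # E_ij = 1/2 (dU_i/dx_j + dU_j/dx_i + sum_k dU_k/dx_i dU_k/dx_j), the tensor form of Eq. (34)
    E = np.empty_like(F)
    for i in range(3):
        for j in range(3):
            E[i, j] = 0.5 * (F[i, j] + F[j, i] + sum(F[k, i] * F[k, j] for k in range(3)))
    return E

def directional_strain(E, e):
    """Directional strain E_ee = e^T E e of Eq. (35) for a unit direction e."""
    e = np.asarray(e, float)
    return sum(e[i] * e[j] * E[i, j] for i in range(3) for j in range(3))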
In cardiac strain analysis, we are most interested in the directional strains along the radial axis (e_R), the circumferential axis (e_C) and the longitudinal axis (e_L) (Cerqueira et al., 2002). The local coordinate system used in this paper is shown in Fig. 7.

3. Experimental results

3.1. Comparisons of the BEM-based regularization model with B-Spline based regularization models

The BEM-based regularization model is compared with the B-Spline based regularization models on a synthetic 3D cylinder, shown in Fig. 8a. We deform this 3D cylinder by prescribing displacements on the boundary; the deformation is calculated using a biomechanical model constructed with the FEM, and these results serve as the ground truth for later evaluation. The maximum displacement in this simulation is 0.5 mm, the Young's modulus of the cylinder is set to 0.05 GPa, and the Poisson's ratio is set to 0.4. A quadrilateral mesh is used for the BEM in this paper. The scattered red points in Fig. 8a are randomly chosen and the green points are the boundary nodes. We denote u(a) and U(q) as the true displacements of the red and green points, respectively. Suppose the measured displacements of the red points are u'(a); they are noisy: u'(a) = u(a) + n, where n is white Gaussian noise with an SNR of 11.7 dB. We use three different models to recover the deformation of the 3D cylinder from the measured noisy displacements u'(a). For the FFD, we use lattice densities with spacings of one, two and three pixels between neighboring nodes. The results for the 3D images are shown in Fig. 8b–f, viewed from the top for better visualization. The blue and red arrows represent the true and the recovered displacements, respectively.
Fig. 8. Comparison of FFD and EFFD with the BEM-based regularization model. (b–f) The model-regularized displacements using FFD with lattice densities of (b) one pixel, (c) two pixels and (d) three pixels, (e) EFFD and (f) the BEM-based regularization model. Blue arrows: the true displacements; red arrows: the algorithm-derived displacements. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
To evaluate the results, we calculate the absolute difference between the recovered and the true displacements at the red and green points, respectively. The mean displacement errors at the red and green points for the three models are illustrated in Fig. 9. The errors associated with the red points represent the local errors and the errors associated with the green points represent the global errors. Fig. 9 shows that the BEM-based regularization model has the smallest error both locally and globally. There are several reasons why the BEM-based regularization model is more accurate than the B-Spline based regularization models. First, there is a tradeoff between shape smoothness and approximation accuracy of FFD and EFFD depending on the lattice density (Lee et al., 1996): the motion of a point only affects its 64 neighboring control points in 3D FFD and EFFD. In this experiment, the FFD with the finest lattice can approximate the local deformation well but causes large global errors; the opposite is true for coarser lattices. For the BEM-based regularization model, however, the motion of a point affects all the points on the boundary, so the lattice density does not affect the accuracy of the results. Second, although the FFD has proven to be a very efficient and promising technique, the intrinsic parallelepipedical shape of the FFD lattice prohibits arbitrarily-shaped deformations, whereas the BEM-based regularization model uses an arbitrarily-shaped lattice. Third, the FFD and EFFD do not have physically based constraints, whereas the BEM-based regularization model uses interpolation basis functions derived from the fundamental solutions of an elastic object; the BEM-based regularization model is therefore more physically plausible than FFD and EFFD. The other attraction of the BEM-based regularization model is that it is computationally efficient. In this experiment, the computation time of the BEM-based regularization model is comparable to that of FFD and EFFD.
Fig. 9. Left: the mean local displacement error (at the red points); right: the mean global displacement error (at the green points). FFD with a lattice density of one pixel (green), two pixels (blue) and three pixels (red); EFFD (cyan); BEM-based regularization model (magenta). (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
Table 1
The computation time for the different regularization models: FFD with a lattice density of one pixel (FFD1), two pixels (FFD2), three pixels (FFD3), EFFD and the BEM-based model

Models     FFD1   FFD2   FFD3   EFFD   BEM-based
Time (s)   5      4      3      15     7
The computation times for FFD with different lattice densities, EFFD and the BEM-based regularization model are shown in Table 1. The most efficient model is the FFD with a lattice density of three pixels and the least efficient is the EFFD; the computation time of the BEM-based regularization model lies between those of FFD and EFFD. The computation of EFFD is expensive because the position of each scattered feature point must be located and parameterized within the control lattice; since the EFFD lattice is not regular, this process is time consuming. The BEM-based regularization model does not need this process because the interior points are controlled by all the nodes on the boundary. During the iterations, we only need to calculate the displacements of points on the boundary; the dense displacement fields are computed from the boundary displacements at the end of the iterations. Hence the BEM-based regularization model can speed up the computation, especially when used with the GRPM, which uses an annealing strategy.
Fig. 10. An example of the models used to compare the FEM, BEM and EFFD approaches.
3.2. Comparisons among FEM, BEM and EFFD

We demonstrated in the previous section that the BEM-based regularization model has several advantages over the B-Spline based regularization models FFD and EFFD. Here we compare the interpolated results of the FEM, BEM and EFFD. The experiments are carried out on four LV-shaped models, shown in Fig. 10. Given the same synthetic displacements on the boundary, we use FEM, BEM and EFFD to interpolate the dense displacement fields associated with the red grid nodes in Fig. 10. One example of the dense displacement fields interpolated by the three methods is shown in Fig. 11. The mean differences between the displacements derived by each pair of methods are shown in Fig. 12. The mean differences between the displacements derived from the BEM and the FEM are less than 0.4 pixel for all the datasets in this experiment. The displacement results derived from the EFFD model are different from those of the
Fig. 12. The mean displacement differences between each pair of methods among FEM, BEM and EFFD. Green: BEM and FEM; blue: EFFD and FEM; red: BEM and EFFD. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
Fig. 11. The derived displacement fields from FEM (left), BEM (middle) and EFFD (right), given the same boundary displacements.
FEM and the BEM. This experiment demonstrates two facts: (1) the BEM can be an alternative to the FEM for cardiac motion analysis and (2) the B-Spline based model EFFD might lead to a biased estimation because it does not include physical model-based information.

3.3. Strain results derived from 3D in vivo cardiac MRI data

We then evaluate the BEM-GRPM algorithm on four in vivo cardiac MRI datasets. Each dataset has 16 frames during one cardiac cycle, and the images have a size of 100 × 100 × 24 pixels. Fig. 13a and b shows a 3D cardiac MRI dataset and its 3D mesh; the mesh is generated on the manually segmented boundary at the first frame. The coordinates and curvature values of the feature points are normalized at the outset. We set the starting temperature T to 0.3 and, in the annealing process, gradually reduce it with an annealing rate of 0.9. The procedure of the algorithm is illustrated in Fig. 13. The feature points extracted from a 3D cardiac MRI dataset are shown in Fig. 15; here, we show every fourth slice from apex to base. The blue and red points are the feature points at frames 1 and 2, respectively. For each dataset, strains are calculated between end-diastole (ED) and end-systole (ES). A 2D slice of the 3D-derived radial strains (Err) and circumferential strains (Ecc) of one dataset is illustrated in Fig. 14. Note the normal behavior in the LV, showing positive radial strain (thickening) and negative circumferential strain (shortening) as we move from ED to ES.

3.4. Comparisons with implanted markers in MRI studies

To further quantitatively validate the resulting motion trajectories, we use four canine MRI datasets with implanted markers for point-by-point displacement comparison (see Papademetris et al., 2002 for more details on the marker implantation and localization). The mean displacement errors of BEM-GRPM for the four datasets are calculated and compared to the errors of EFFD-GRPM (Lin and Duncan, 2004), which used EFFD as the interpolation and smoothing function with GRPM. EFFD is an extension of FFD that uses an arbitrarily-shaped lattice instead of a parallelepipedical lattice. Lin et al. demonstrated in Lin and Duncan (2004) that EFFD could achieve better results than FFD. The displacement errors of BEM-GRPM and EFFD-GRPM are illustrated in Fig. 16, which shows that the displacement errors of BEM-GRPM are smaller than those of EFFD-GRPM.

3.5. Strain results derived from echocardiographic image data

Recently, a new phased array transducer was used to acquire real-time 4D volumetric image data from a Philips 7500 ultrasound machine. An example of a 4DE closed-chest canine image acquired from this transducer is shown
Fig. 13. An example of the procedure of BEM-GRPM: (a) the 3D in vivo cardiac MRI data; (b) the 3D mesh; (c) the extracted feature points from image I1; (d) the extracted feature points from image I2; (e) the correspondence of the feature points from two images; (f) the smoothly interpolated displacements of the boundary nodes; (g) the interpolated dense displacement fields; and (h) the original mesh (yellow) and deformed mesh (red). (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
in Fig. 17. Orthogonal slices from a contrast-enhanced full-volume 3D B-mode canine acquisition are displayed at three time periods from ED to ES. We used a contrast agent (Definity) to enhance image quality; a more detailed description of contrast echocardiography can be found in Meza et al. (1996). We evaluated our algorithm on the B-mode echocardiographic images. A 2D slice of the 3D-derived radial (Err) and circumferential strains (Ecc) is illustrated in Fig. 18. Although we cannot evaluate our results from echocar-
Fig. 14. A 2D slice of the 3D radial strains Err (top) and circumferential strains Ecc (bottom) derived from an in vivo cardiac MRI dataset (ED → ES). Positive values represent thickening and negative values shortening.
Fig. 15. The feature points extracted from a 3D cardiac MRI dataset. Blue: feature points at frame 1; red: feature points at frame 2. The slice numbers from left to right, top to bottom are 1, 5, 9, 13, 17, 21. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
diographic image data quantitatively, the derived LV motion patterns are visually correct.

4. Conclusions and future work

Many regularization models have been proposed for LV motion tracking, but they have tradeoffs related to accuracy, lattice density, physical plausibility and computation time. An inappropriate regularization model may lead to biased estimates, or may be impractical because of its computational expense. This paper has presented a novel regularization model based on the BEM. This regularization model provides advantages over currently available regularization models in that: (1) it avoids the tradeoff between smoothness and approximation accuracy of the
function based on the lattice density; (2) it is more physically plausible because it is constrained by the physical properties and the arbitrary shape of the object; and (3) it is computationally efficient and easy to implement. We then incorporate this new regularization model into the GRPM strategy for the estimation of LV deformation and name it BEM-GRPM. This approach is capable of accurately estimating LV motion with only one segmentation of the myocardium at the first frame. It is robust and can be applied to images contaminated by noise and complicated by missing or diffuse edges, and it is both computationally efficient and physically plausible.

In this paper, we first evaluate the BEM-based regularization model on synthetic data. The experiment shows that the motions derived using the BEM-based regulariza-
Fig. 16. Absolute displacement error vs. implanted markers. The errors are estimated from ED to ES. Blue: the error of BEM-GRPM; red: the error of EFFD-GRPM. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
tion model have the smallest errors compared with those derived using the B-Spline based regularization models FFD and EFFD. We also see that FFD has a tradeoff between smoothness and accuracy based on the lattice
density, whereas the BEM-based regularization model avoids this tradeoff because the interpolation is based on the physical properties of the myocardium and the shape of the LV, not on the positions of feature points and control points. In this experiment, the computation time of the BEM-based regularization model is comparable with FFD and faster than EFFD. The comparisons among FEM, BEM and EFFD show that the interpolation results of FEM and BEM are very close to each other, whereas the interpolation results of EFFD differ substantially from those of FEM and BEM. We then use the BEM-GRPM to derive the dense displacement fields from 3D LV image sequences. The approach is first applied to the four in vivo cardiac MRI datasets. By comparing displacement results against implanted markers, we find that BEM-GRPM has smaller errors than our previous model EFFD-GRPM; this demonstrates, once again, that the BEM-based regularization model is more accurate than the B-Spline based regularization model. We also apply the approach to echocardiographic images, where it derives plausible motions and strains of the LV. This new approach has many advantages over previous approaches, but there is still room for improvement. In this paper, we use a linear elastic
Fig. 17. 4DE Closed-Chest Canine echocardiographic image acquisition. Three frames: orthogonal slices from contrast-enhanced full volume 3D B-mode canine acquisition displayed at three time periods from end diastole (ED) to end systole (ES).
Fig. 18. A 2D slice of the 3D radial strains Err (top) and circumferential strains Ecc (bottom) derived from contrast-enhanced echocardiographic images (ED → ES). Positive values represent thickening and negative values shortening.
material model for the myocardium. This may cause inaccuracy because the myocardium is anisotropic; in a forthcoming paper, we will use an anisotropic material model for the myocardium. We also assume that the outside force is zero in this model; in the future, we will consider pressure and active forces. For echocardiographic analysis, further research could include phase-sensitive speckle as complementary motion information for mid-wall tracking, and the integration of both mid-wall speckle tracking and boundary shape-based tracking into the BEM-GRPM framework would be expected to yield more robust and accurate deformation estimates for 4D echocardiographic images. We also find that a proper feature extraction model is very important. In this paper we use the Canny edge detector to extract feature points from the images; most of these points lie on the endocardium and few on the epicardium. Although the motions of the epicardium are very small, this may still introduce errors. The curvature calculated from gray-value derivatives is computationally efficient, but may not be accurate when the image is noisy; our future work will seek an algorithm to calculate curvature both accurately and efficiently. In this paper, we do not have a way to verify the results from echocardiography; in the future, we will verify them by comparison with results from MRI tagging.

Acknowledgments

This research is supported by NIH Grant 5R01 HL44803. The first author thanks the members of Dr. A. Sinusas' group for access to the MRI and echocardiographic datasets.

Appendix A. Singular integration

When the load point i is located in the same element as j, the G^{ij} and H^{ij} kernels become singular because they contain terms of order r^{-1} and r^{-2}, respectively. Here we describe the Lachat–Watson transformation used in the BEM for singular integration (Lachat and Watson, 1976); details of this method can be found in Gaul et al. (2003). The essence of the technique is to map a triangular sub-element into the square intrinsic element space (Lachat and Watson, 1976; Mustoe, 1984); thus the gray triangle in Fig. A.1a is transformed into the rectangle in Fig. A.1b. Introducing the new coordinates \zeta_1 and \zeta_2:

\xi_1 = \zeta_1, \qquad \xi_2 = \frac{1}{2}(\zeta_1 + 1)(\zeta_2 + 1) - 1,    (A.1)

we obtain for the Jacobian:

J = \frac{\partial \xi_1}{\partial \zeta_1}\frac{\partial \xi_2}{\partial \zeta_2} - \frac{\partial \xi_1}{\partial \zeta_2}\frac{\partial \xi_2}{\partial \zeta_1} = \frac{1}{2}(\zeta_1 + 1).    (A.2)
Fig. A.1. Lachat–Watson transformation.
If the singular point is at position [-1, -1] in the \xi_1–\xi_2 system, the Euclidean distance between the singular point and any other point is:

r = \sqrt{ (\xi_1 + 1)^2 + (\xi_2 + 1)^2 }.    (A.3)

Substituting \xi_1 and \xi_2 with \zeta_1 and \zeta_2:

r = \sqrt{ (\zeta_1 + 1)^2 + \left[ \frac{1}{2}(\zeta_1 + 1)(\zeta_2 + 1) \right]^2 } = \frac{1}{2}(\zeta_1 + 1)\sqrt{ 4 + (1 + \zeta_2)^2 }.    (A.4)

There is no singular point in the new coordinate system.

References

Arad, N., Dyn, N., Reisfeld, D., Yeshurun, Y., 1994. Image warping by radial basis functions: applications to facial expressions. Graphical Models and Image Processing 56, 161–172.
Babuska, I., Suri, M., 1992. On locking and robustness in the finite element method. SIAM Journal of Numerical Analysis 29 (5), 1261–1293.
Brebbia, C.A., Dominguez, J., 1998. Boundary Elements: An Introductory Course. Computational Mechanics Publications.
Cachier, P., Ayache, N., 2004. Isotropic energies, filters and splines for vectorial regularization. Journal of Mathematical Imaging and Vision 20 (3), 251–265.
Cerqueira, M., Weissman, N., Dilsizian, V., et al., 2002. Circulation 105, 539–542.
Chandrashekara, R., Rao, A., Sanchez-Ortiz, G.I., Mohiaddin, R.H., Rueckert, D., 2003. Construction of a statistical model for cardiac motion analysis using non-rigid image registration. In: Information Processing in Medical Imaging: Proceedings of the 18th International Conference (IPMI'03), pp. 599–610.
Christensen, G.E., Rabbitt, R.D., Miller, M.I., 1996. Deformable templates using large deformation kinematics. IEEE Transactions on Image Processing 5 (10), 1435–1447.
Clough, R., Tocher, J., 1965. Finite element stiffness matrices for analysis of plates in bending. In: Proceedings of the Conference on Matrix Methods in Structural Mechanics, pp. 515–545.
Cootes, T.F., Cooper, D., Taylor, C.J., Graham, J., 1995. Active shape models – their training and application. Computer Vision and Image Understanding 61 (1), 38–59.
Cootes, T.F., Edwards, G.J., Taylor, C.J., 2001. Active appearance models. PAMI 23, 681–685.
Coquillart, S., 1990. Extended free-form deformation: a sculpturing tool for 3D geometric modeling. Computer Graphics 24 (4), 187–196.
Do Carmo, M.P., 1976. Differential Geometry of Curves and Surfaces. Prentice-Hall, Englewood Cliffs, NJ.
Gaul, L., Kogl, M., Wagner, M., 2003. Boundary Element Methods for Engineers and Scientists: An Introductory Course with Advanced Topics. Springer, Berlin.
Hoffmeister, B.K., Handley, S.M., Wickline, S.A., Miller, J.G., 1996. Ultrasonic determination of the anisotropy of Young's modulus of fixed tendon and fixed myocardium. Journal of the Acoustical Society of America 100 (6), 3933–3940.
Hsu, W., Hughes, J., Kaufman, H., 1992. Direct manipulation of free-form deformations. Computer Graphics 26 (2), 177–184.
Huang, A., Summers, R.M., Hara, A.K., 2005. Surface curvature estimation for automatic colonic polyp detection. Medical Imaging: Physiology, Function, and Structure from Medical Images 5746, 393–402.
Hunter, P.J., Smaill, B.H., Nielsen, P.M.F., LeGrice, I.J., 1996. A mathematical model of cardiac anatomy. Computational Biology of the Heart, 173–217.
Lachat, J.C., Watson, J.O., 1976. Effective numerical treatment of boundary integral equation. International Journal of Numerical Methods in Engineering 10, 991–1005.
Lee, S., Wolberg, G., Chwa, K., Shin, S.Y., 1996. Image metamorphosis with scattered feature constraints. VCG 2 (4), 337–354.
Lin, N., Duncan, J.S., 2004. Generalized robust point matching using an extended free-form deformation model: application to cardiac images. In: International Symposium on Biomedical Imaging.
Lin, N., Papademetris, X., Sinusas, A.J., Duncan, J.S., 2003. Analysis of left ventricular motion using a general robust point matching algorithm. In: Medical Image Computing and Computer-Assisted Intervention (LNCS 1496), pp. 556–563.
MacKenna, D.A., Omens, J.H., Covell, J.W., 1996. Left ventricular perimysial collagen fibres uncoil rather than stretch during diastolic filling. Basic Research in Cardiology 91, 111–122.
Meza, M., Greener, Y., Hunt, R., Perry, B., Revall, S., Barbee, W., Murgo, J.P., Cheirif, J., 1996. Myocardial contrast echocardiography: reliable, safe, and efficacious myocardial perfusion assessment after intravenous injections of a new echocardiographic contrast agent. American Heart Journal 132 (4), 871–881.
Mustoe, G.G.W., 1984. Advanced integration schemes over boundary elements and volume cells for two- and three-dimensional non-linear analysis. In: Developments in Boundary Element Methods – 3. Elsevier, London.
Papademetris, X., Sinusas, A.J., Dione, D.P., Constable, R.T., Duncan, J.S., 2002. Estimation of 3-D left ventricular deformation from medical images using biomechanical models. IEEE TMI 21 (7), 786–800.
Sederberg, T.W., Parry, S.R., 1986. Free-form deformation of solid geometric models. Proceedings of SIGGRAPH '86, in Computer Graphics 20 (4), 151–160.
Shi, P., Sinusas, A.J., Constable, R.T., Duncan, J.S., 1999. Volumetric deformation analysis using mechanics-based data fusion: applications in cardiac motion recovery. International Journal of Computer Vision 35 (1), 87–107.
Spencer, A.J.M., 1980. Continuum Mechanics. Longman, London.
Ter Haar Romeny, B.M., Florack, L.M.J., Salden, A.H., Viergever, M.A., 1994. Higher order differential structure of images. IVC 12 (16), 317–325.
Wilson, R.B., Cruse, T.A., 1978. Efficient implementation of anisotropic three dimensional boundary-integral equation stress analysis. International Journal for Numerical Methods in Engineering 12, 1383–1397.