ISA Transactions xxx (xxxx) xxx
Research article
Attitude measurement based on imaging ray tracking model and orthographic projection with iteration algorithm
Xiaoting Guo, Jun Tang, Jie Li, Chong Shen∗, Jun Liu
Key Laboratory of Instrumentation Science & Dynamic Measurement, Ministry of Education, School of Instruments and Electronics, North University of China, Taiyuan, 030051, PR China
Highlights
• The mapping model is expressed by a non-linear high-order polynomial.
• AOPIT is based on the IRT model associating image coordinates and space projection rays.
• The specific implementation procedure of the proposed IRT-AOPIT is given in detail.
Article info
Article history: Received 10 September 2018; Received in revised form 15 April 2019; Accepted 11 May 2019; Available online xxxx
Keywords: Attitude estimation; Imaging ray tracking; Generic imaging model; Orthographic projection; Iterations algorithm
Abstract
In the field of vision-based attitude estimation, the camera model and the attitude solving algorithm are the key technologies that determine measurement accuracy, effectiveness and applicability. Aiming at this issue, in this paper we investigate the generic imaging model and develop a corresponding generic camera calibration method using two auxiliary calibration planes; the resulting camera model is named the imaging ray tracking model. Based on the imaging ray tracking camera model and with knowledge of the calibration parameters, an advanced attitude solving algorithm, attitude from orthographic projection with iterations built on the imaging ray tracking model, is investigated in depth, inspired by the classical POSIT algorithm. The initial attitude value is provided by the orthographic projection of the object on the two calibration planes and then refined by iteration to approximate the true object attitude. An experimental platform is set up to conduct the imaging ray tracking camera calibration procedure and to evaluate our attitude estimation algorithm. We show the effectiveness and superiority of the proposed attitude estimation algorithm by thorough testing on real data and by comparison with the POSIT algorithm. © 2019 ISA. Published by Elsevier Ltd. All rights reserved.
1. Introduction
Vision-based attitude measurement technology [1–7] has widespread applications in control, positioning, tracking and navigation. Vision-based attitude measurement mainly contains two stages: image processing to extract useful information from captured images, and attitude estimation itself [8]. Vision-based pose estimation based on point markers [9–11] is the well-known Perspective-n-Point (PnP) problem, which usually involves the calculation of rotation and translation [12–14]; rotation is the main question studied in this paper. To obtain attitude by a vision method based on point markers, the geometrical relationship between the point markers on the measured object is known in advance [15]. In the motion process, the feature points are driven by the measured object and form continuously updated image information, which is captured by the vision sensor [16]. By image processing, the identified image points and the object points form a series of correspondences, known as point pairs. Based on these point pairs, the unknown rotation between the object coordinate system and the camera coordinate system can be obtained by a specific attitude solving algorithm.
In the field of vision-based attitude estimation, the most commonly used visual sensors are pinhole cameras, and the pinhole projection model with a perspective projection form is the dominant imaging model [17,18], which directly defines a mapping from 3D space object points to 2D image coordinates. In detail, the pinhole camera model can be divided into two procedures: the projection from 3D space object coordinates to 3D camera coordinates, and the projection from 3D camera coordinates to 2D image coordinates. These two projections require the external parameters and the intrinsic parameters of the camera respectively. Therefore, camera calibration is a fundamental process, and the Tsai calibration method [19] and Zhang calibration method [20] are two commonly adopted camera calibration methods in practical
∗ Corresponding author. E-mail addresses: [email protected] (C. Shen), [email protected] (J. Liu).
https://doi.org/10.1016/j.isatra.2019.05.009 0019-0578/© 2019 ISA. Published by Elsevier Ltd. All rights reserved.
Please cite this article as: X. Guo, J. Tang, J. Li et al., Attitude measurement based on imaging ray tracking model and orthographic projection with iteration algorithm. ISA Transactions (2019), https://doi.org/10.1016/j.isatra.2019.05.009.
work. In addition, many scholars have thoroughly studied camera calibration and many methods have been put forward for different application scenarios [21–24]. On the whole, these methods are mainly aimed at the pinhole camera model. They are camera calibration in the normal sense, which means explicitly calculating a camera's physical parameters, including intrinsic parameters (image center, focal length, lens distortion) and external parameters (rotation and translation).
With the continuous deepening and expansion of vision-based attitude measurement applications, the limitations of the pinhole camera model have gradually appeared [25]. Firstly, the assumption that all projecting rays intersect at a single point, known as the optical center or camera center, applies only to an ideal pinhole camera. In reality, some beams of light may be off-center, resulting in degraded attitude estimation accuracy. Besides, the geometric distortion caused by finite-sized apertures increases from the center toward the image edges, which brings in distorted image coordinates especially at the edge of the captured image. Additionally, some other factors cause the ideal pinhole camera model to fail, such as magnification varying with object distance, edge blur and perspective errors. Secondly, the pinhole camera model is not applicable to certain cameras such as non-central catadioptric cameras [26] or telecentric cameras [25]: non-central catadioptric cameras do not possess a single viewpoint, and telecentric cameras follow an orthographic projection model [25,26]. This means that it is necessary to find other camera calibration methods for general use and to study the corresponding attitude measurement algorithms.
Grossberg and Nayar [27] first proposed the generic imaging model, which can represent an arbitrary imaging system. In their model, the projection process is treated as a black box.
The mapping is defined from projection rays in space to pixels in the image, and each image pixel corresponds to one projection ray in 3D space, referred to as a raxel. The set of all raxels forms the complete generic imaging model. A corresponding calibration procedure was also presented by Grossberg and Nayar [28] to define the raxels. However, the use of a binary coding pattern made their calibration procedure complicated. Besides, although the calibration process was discussed in detail, the mathematical model expressing the raxel was not given, which makes it difficult to reproduce their work. And as far as we know, their work deals only with camera calibration and does not concern attitude calculation. Miraldo and Araujo extended the generic imaging model: by using only one image of the pattern, their calibration method was simpler and more flexible than that of Grossberg and Nayar [28]. They also studied attitude measurement. By building the correspondence between a 3D line and its image, Miraldo and Araujo proposed an iterative and direct solution to estimate pose [1], and they further presented a non-iterative algorithm using planes for pose estimation [29]. However, both pose estimation algorithms lack clear physical meaning and a corresponding mathematical model, which makes them obscure and intuitively difficult to understand. Sun [4] used the generic camera model for pose estimation; both the calibration model and the pose algorithm were presented. However, they did not discuss how to solve the two scaling factors, which are key parameters in the pose algorithm, and the relationship between the two factors was not given. Zhang [5] also studied the generic camera model for pose estimation, with a focus on the calibration procedure. For pose estimation, they presented a method based on geometric constraints using a coplanar target.
However, with four points, moving from a noncoplanar to a coplanar target greatly reduces the available constraint conditions, which makes their algorithm susceptible to external interference. Sun [30] introduced the popular orthogonal iteration algorithm into the generic camera model to solve the pose estimation problem for multiple feature points. As there were many points, the correspondences between image points and object points had to be determined in advance by the SoftPosit method, which made their algorithm complex. It should be pointed out that the above-mentioned attitude measurement methods were not specifically proposed for the generic imaging model, and they have their own limitations when combined with it. Considering the limitations and applicability of the existing attitude measurement algorithms for the generic imaging model, and to find a more suitable attitude algorithm for it, in this paper we combine the generic camera model [30], which we call the imaging ray tracking (IRT) model here, with a novel attitude estimation method, the attitude from orthographic projection with iteration (AOPIT) algorithm, inspired by the classic iterative algorithm POSIT [17]. The main contributions of this paper are summarized as follows.
• In our IRT model, the mapping is expressed by a non-linear high-order polynomial. By using the polynomial model, lens distortion is naturally included in the mapping, as polynomial models can be used to correct imaging distortion [31].
• We deeply investigate the characteristics of the POSIT algorithm and integrate it with the IRT model. Just as the POSIT algorithm is derived from the characteristics of the pinhole camera model with perspective projection, AOPIT is derived from the IRT model, which associates the image coordinates with the space projection rays. Therefore, our proposed attitude estimation method is targeted at the generic camera model.
• The specific implementation procedure of the proposed IRT-AOPIT is given in detail, including the calibration process and the attitude measurement process. All the relevant parameters in the algorithm are fully defined and thoroughly discussed, which makes the meaning and the implementation of the algorithm clear.

The rest of the paper is organized as follows. The mathematical model and the calibration procedure of the imaging ray tracking camera model are given in Section 2. In Section 3, the attitude from orthographic projection with iterations algorithm based on the imaging ray tracking camera model is described. The results of the evaluation experiments and discussions are reported in Section 4. The paper ends with a few concluding remarks in Section 5.

2. Imaging ray tracking camera model

For attitude measurement, the camera's physical parameters are not necessarily required [32]. What we are interested in is just the mapping from space objects to image coordinates. The projection model can be ignored and the mapping itself can take any form, which can be called implicit calibration, as the physical parameters are not solved. Because of these advantages, the generic camera model has attracted increasing attention.
As a kind of generic camera model, the IRT model can represent a wide variety of cameras [27,28,33]. It does not deal with a specific projection model between the 2D image coordinates and the 3D space object points, but connects them with projection rays. Since at least two points are needed to define a ray, the space projection rays are specified by two auxiliary planes, as shown in Fig. 1. As depicted in Fig. 1, the projection model can be of any type, and the path along which an incoming ray arrives at the image plane can
Fig. 1. Imaging ray tracking model which connects 2D image coordinates to 3D space projection rays.
be arbitrarily complex [27,30]. Besides, it can also be concluded that the mapping function from light rays to image coordinates [34] is a non-injective, non-surjective function. To specify the mapping function and solve the model parameters, two auxiliary calibration planes are used, obtained by locating a feature circle direction pattern at two parallel positions. The specific calibration process is displayed in Fig. 2.

Fig. 2. The parameters calibration of imaging ray tracking model.

As displayed in Fig. 2, the camera captures the direction pattern at position A and position B respectively. With two sets of point pairs $I_t(u_t^a, v_t^a) \leftrightarrow M_t^a(x_t^a, y_t^a, a)$ and $I_t(u_t^b, v_t^b) \leftrightarrow M_t^b(x_t^b, y_t^b, b)$ ($I_t$ represents the $t$th image point; $M_t^a$/$M_t^b$ represent the projections of the $t$th point on calibration plane $a$/$b$), the mapping between image coordinates and pattern feature points can be parameterized by a non-linear high-order polynomial [31]:

$$
\begin{cases}
x_t^p = g_x^p(u_t^p, v_t^p) = \displaystyle\sum_{r=0}^{q}\sum_{s=0}^{q-r} C_{rs}^p \left(u_t^p\right)^r \left(v_t^p\right)^s \\
z_t^p = p \\
y_t^p = g_y^p(u_t^p, v_t^p) = \displaystyle\sum_{r=0}^{q}\sum_{s=0}^{q-r} D_{rs}^p \left(u_t^p\right)^r \left(v_t^p\right)^s
\end{cases}
\qquad p = a/b
\tag{1}
$$

where $t = 0, 1, \ldots, N-1$ and $N$ is the total number of available feature points. $(x_t^p, y_t^p)$ and $(u_t^p, v_t^p)$ are respectively the calibration plane coordinates and image coordinates of the extracted feature circle centers, $r$ and $s$ are the orders of the polynomial terms, and $C_{rs}^p$ and $D_{rs}^p$ are the parameters to be solved. For a $q$-order polynomial there are $(q+1)(q+2)/2$ parameters per coordinate, so at least $(q+1)(q+2)/2$ useful points are needed. In practice, far more points are available than needed, and the Levenberg–Marquardt method can be used to solve for the parameters.

3. Attitude from orthographic projection with iterations algorithm

The attitude estimation problem for an object means computing a series of rotations, which define the rigid transformations between the object and a predefined coordinate system, for example the camera coordinate system. The solved rotation matrices represent the absolute attitude between the object and the camera; by comparing the current rotation matrix with the initial rotation matrix, the relative attitude of the object can be obtained. By introducing orthographic projection into the IRT model, an initial approximation is obtained in the proposed method. To reach an accurate attitude, iterative calculation is then used to refine the initial approximation toward the true projection relationship.

3.1. Problem description

To estimate the attitude of an object, the feature points attached to the object are denoted $M_0^{m}, M_1^{m_1}, \ldots, M_t^{m_t}, \ldots, M_N^{m_N}$, where the subscript is the index of the feature point and the superscript indicates the plane on which the point lies, parallel to the two calibration planes. As shown in Fig. 3, $M_0^m\!-\!uvw$ is the object coordinate system with $M_0^m$ as the origin, and $O\!-\!ijk$ is the calibration plane coordinate system. The coordinates of the object point $M_0^m$ are $(x_0^m, y_0^m, m)$, where the superscript is the $z$ coordinate in the calibration plane coordinate system and the subscript is the serial number. The image point $I_0(u_0, v_0)$ of $M_0^m$ maps onto calibration planes A and B at the two points $M_0^a(x_0^a, y_0^a, a)$ and $M_0^b(x_0^b, y_0^b, b)$ by rectilinear propagation. As mentioned above, three kinds of coordinate system are defined in total: the image coordinate system, the object coordinate system, and the calibration plane coordinate system.

Fig. 3. The object points projection on the image plane and two calibration planes.

Our objective is to calculate the rotation matrix $R$ of the object. The purpose of $R$ is to rotate object vectors such as $M_0^m M_t^n$ into coordinates defined in the calibration plane coordinate system. The dot product $M_0^m M_t^n \cdot \mathbf{i}$ is the projection of the vector $M_0^m M_t^n$ on the unit vector $\mathbf{i}$ of the calibration plane coordinate system. The rotation matrix can be expressed as

$$
R = \begin{bmatrix} i_u & i_v & i_w \\ j_u & j_v & j_w \\ k_u & k_v & k_w \end{bmatrix}
\tag{2}
$$
where $i_u$, $i_v$, $i_w$ are the coordinates of $\mathbf{i}$ in the object coordinate system $(M_0^m u, M_0^m v, M_0^m w)$. To calculate the rotation matrix, we only need to calculate $\mathbf{i}$ and $\mathbf{j}$ in the object coordinate system; the vector $\mathbf{k}$ can then be obtained by the cross product $\mathbf{i} \times \mathbf{j}$. Once the IRT model is calibrated, the correspondences between space projection rays and image coordinates are determined, which means that the light rays are also determined. For an arbitrary object point $M_t^\lambda(x_t^\lambda, y_t^\lambda, \lambda)$ with projection points $M_t^a(x_t^a, y_t^a, a)$ on plane A and $M_t^b(x_t^b, y_t^b, b)$ on plane B, $M_t^a$ and $M_t^b$ can be obtained by substituting its image point $I_t(u_t, v_t)$ into (1). The projection ray defined by the two projection points can then be expressed as

$$
\frac{x_t^\lambda - x_t^a}{x_t^b - x_t^a} = \frac{y_t^\lambda - y_t^a}{y_t^b - y_t^a} = \frac{\lambda - a}{b - a}
\tag{3}
$$
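The ray equation above means any point on a calibrated ray can be recovered from its plane-A and plane-B intersections by linear interpolation along $z$. A minimal numpy sketch (the function name and data layout are ours, not from the paper):

```python
import numpy as np

def point_on_ray(pa, pb, a, b, lam):
    """Interpolate (or extrapolate) along the projection ray defined by
    its intersections pa=(x^a, y^a) with plane A (z=a) and pb=(x^b, y^b)
    with plane B (z=b), evaluated at depth z=lam, following Eq. (3)."""
    s = (lam - a) / (b - a)            # normalized depth along the ray
    x = pa[0] + (pb[0] - pa[0]) * s
    y = pa[1] + (pb[1] - pa[1]) * s
    return np.array([x, y, lam])

p = point_on_ray((10.0, 20.0), (12.0, 26.0), 130.0, 180.0, 155.0)
# halfway between the two planes -> [11., 23., 155.]
```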
In (3), the only unknown quantities are the coordinates $(x_t^\lambda, y_t^\lambda, \lambda)$ of point $M_t^\lambda$. Rewriting (3), the calibration plane coordinates of $M_t^\lambda$ can be represented as

$$
\begin{cases}
x_t^\lambda = x_t^a + \left(x_t^b - x_t^a\right)(\lambda - a)/(b - a) \\
z_t^\lambda = \lambda \\
y_t^\lambda = y_t^a + \left(y_t^b - y_t^a\right)(\lambda - a)/(b - a)
\end{cases}
\tag{4}
$$

For an arbitrary visible object point, the $x$ and $y$ coordinate values can thus be expressed through the $z$ coordinate value, namely $\lambda$. If $\lambda$ is determined, the coordinates of the object point in the calibration plane coordinate system are solved, and the object attitude is fully defined once $\mathbf{i}$, $\mathbf{j}$ and the $z$ coordinates are determined.

3.2. Orthographic projection model and rectilinear propagation model

To solve the object attitude, we introduce the orthographic projection model [4,17] as an approximation within the rectilinear propagation model of projection rays. The specific projection relationships between the two projection models are shown in Fig. 4. Object points $M_0^m$, $M_t^n$ on rectilinear propagation rays $l_0$, $l_t$ project on calibration plane A at points $M_0^a$, $M_t^a$ and on plane B at points $M_0^b$, $M_t^b$ respectively. Plane M is a newly introduced auxiliary plane, containing $M_0^m$ and parallel to planes A and B. Ray $l_t$ intersects plane M at point $M_t^m$ by rectilinear propagation. Object point $M_t^n$ projects onto plane M at point $M_{t'}^m$ by orthographic projection. Point $M_{t'}^m$ lies on ray $l_{t'}$, which intersects planes A and B at $M_{t'}^a$ and $M_{t'}^b$ respectively.

Fig. 4. Orthographic projection model in the rectilinear propagation model of projection rays.

3.3. Equations for the two projection models

The vector $M_0^m M_t^n$, formed by points $M_0^m$ and $M_t^n$ as displayed in Fig. 4, can be expressed by piecewise vectors in the following form:

$$
M_0^m M_t^n = M_0^m M_{t'}^m + M_{t'}^m M_t^n
\tag{5}
$$

where the vectors $M_0^m M_{t'}^m$ and $M_{t'}^m M_t^n$ are formed by the same rule as the vector $M_0^m M_t^n$. As point $M_{t'}^m$ is the orthographic projection of point $M_t^n$, the $x$ and $y$ coordinates of the two points are the same. Thus, the following dot products are zero:

$$
\begin{cases}
M_{t'}^m M_t^n \cdot \mathbf{i} = 0 \\
M_{t'}^m M_t^n \cdot \mathbf{j} = 0
\end{cases}
\tag{6}
$$

where the vectors $\mathbf{i}$, $\mathbf{j}$ have been defined in Section 3.1. Then, the projection of the vector $M_0^m M_t^n$ on the unit vector $\mathbf{i}$ of the calibration plane coordinate system can be written as

$$
M_0^m M_t^n \cdot \mathbf{i} = M_0^m M_{t'}^m \cdot \mathbf{i}
= \left( \frac{b - m}{b - a} + \frac{m - a}{b - a}\,\frac{x_t^b - x_0^b}{x_t^a - x_0^a} \right)\left( x_{t'}^a - x_0^a \right)
= s_x^{-1}\left( x_{t'}^a - x_0^a \right)
\tag{7}
$$

where the term $s_x$ is defined as

$$
s_x = \frac{(b - a)\left(x_t^a - x_0^a\right)}{(b - m)\left(x_t^a - x_0^a\right) + (m - a)\left(x_t^b - x_0^b\right)}
\tag{8}
$$

Similarly, the projection of the vector $M_0^m M_t^n$ on the unit vector $\mathbf{j}$ of the calibration plane coordinate system can be written as

$$
M_0^m M_t^n \cdot \mathbf{j} = M_0^m M_{t'}^m \cdot \mathbf{j}
= \left( \frac{b - m}{b - a} + \frac{m - a}{b - a}\,\frac{y_t^b - y_0^b}{y_t^a - y_0^a} \right)\left( y_{t'}^a - y_0^a \right)
= s_y^{-1}\left( y_{t'}^a - y_0^a \right)
\tag{9}
$$

where the term $s_y$ is defined as

$$
s_y = \frac{(b - a)\left(y_t^a - y_0^a\right)}{(b - m)\left(y_t^a - y_0^a\right) + (m - a)\left(y_t^b - y_0^b\right)}
\tag{10}
$$

To prove $s_x = s_y$, we only need to prove that

$$
\frac{x_t^b - x_0^b}{x_t^a - x_0^a} = \frac{y_t^b - y_0^b}{y_t^a - y_0^a}
\tag{11}
$$

As displayed in Fig. 4, rays $l_0$, $l_t$ intersect the two parallel planes A and B at $M_0^a$, $M_t^a$ and $M_0^b$, $M_t^b$ respectively. Thus, the following equation holds:

$$
\frac{x_t^b - x_0^b}{y_t^b - y_0^b} = \frac{x_t^a - x_0^a}{y_t^a - y_0^a}
\tag{12}
$$

Then (11) is also true and $s_x = s_y$, and Eqs. (7) and (9) can be merged into

$$
\begin{cases}
M_0^m M_t^n \cdot \mathbf{I} = x_{t'}^a - x_0^a \\
M_0^m M_t^n \cdot \mathbf{J} = y_{t'}^a - y_0^a
\end{cases}
\tag{13}
$$

where $\mathbf{I} = \mathbf{i} \cdot s_x$ and $\mathbf{J} = \mathbf{j} \cdot s_y$. The relationship between the $z$ coordinates of the object points $M_0^m$ and $M_t^n$ in the calibration plane coordinate system can be written as

$$
n = m + M_0^m M_t^n \cdot \mathbf{k} = m + \varepsilon_t
\tag{14}
$$

where the term $\varepsilon_t$ is defined as

$$
\varepsilon_t = M_0^m M_t^n \cdot \mathbf{k}
\tag{15}
$$

which is the orthographic projection of the vector $M_0^m M_t^n$ on the unit vector $\mathbf{k}$ of the calibration plane coordinate system. As shown in Fig. 4, it is simply the $z$ coordinate difference between points $M_0^m$ and $M_t^n$.
Since the image point of ray $l_{t'}$ is unknown, $x_{t'}^a$ and $y_{t'}^a$ in (13) cannot be solved directly through (1), and another expression must be found for further calculation. Considering the orthographic projection relationship between points $M_t^n$ and $M_{t'}^m$, we directly obtain

$$
x_t^n = x_{t'}^m, \qquad y_t^n = y_{t'}^m
\tag{16}
$$

Based on this and the vector proportional relation between plane A and plane M, $x_{t'}^a$ and $y_{t'}^a$ can be expressed as

$$
\begin{cases}
x_{t'}^a = x_t^a + \dfrac{\left(x_t^b - x_t^a\right)\left(x_t^a - x_0^a\right)}{(b - m)\left(x_t^a - x_0^a\right) + (m - a)\left(x_t^b - x_0^b\right)}\,\varepsilon_t \\[2ex]
y_{t'}^a = y_t^a + \dfrac{\left(y_t^b - y_t^a\right)\left(y_t^a - y_0^a\right)}{(b - m)\left(y_t^a - y_0^a\right) + (m - a)\left(y_t^b - y_0^b\right)}\,\varepsilon_t
\end{cases}
\tag{17}
$$
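The $x$-coordinate branch of Eq. (17) can be checked numerically. The sketch below (the helper name and argument order are ours) evaluates the plane-A intersection of ray $l_{t'}$ from the plane-A/B intersections of ray $l_t$ and the current estimates of $m$ and $\varepsilon_t$; the $y$ branch is identical with $y$ coordinates substituted:

```python
def x_t_prime_a(xa_t, xb_t, xa_0, xb_0, a, b, m, eps_t):
    """Evaluate Eq. (17), x branch: the plane-A intersection of ray l_t'
    predicted from the plane-A/B intersections of rays l_t and l_0."""
    num = (xb_t - xa_t) * (xa_t - xa_0)
    den = (b - m) * (xa_t - xa_0) + (m - a) * (xb_t - xb_0)
    return xa_t + num / den * eps_t
```

As a sanity check with a synthetic central camera (center on the z axis at z = -200, planes at a = 0 and b = 100): a point at (24, 36, 60) seen together with a reference point at (20, 30, 50) gives, with m = 50 and eps_t = 10, exactly the plane-A coordinate 19.2 of the orthographically projected point, confirming the reconstructed formula.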
Now, if initial values are given to $\varepsilon_t$ and $m$, (13) provides linear systems of equations in which the only unknowns are $\mathbf{I}$ and $\mathbf{J}$ respectively. Once $\mathbf{I}$ and $\mathbf{J}$ have been calculated, $\mathbf{i}$ and $\mathbf{j}$ are obtained by normalizing $\mathbf{I}$ and $\mathbf{J}$, and $m$ can be updated by (8). We call this algorithm, which finds the attitude by solving linear systems, AOP (attitude from orthographic projection). The solutions of the AOP algorithm are approximations if the values given to $\varepsilon_t$ and $m$ are not exact. But once the unknowns $\mathbf{i}$ and $\mathbf{j}$ have been calculated, more exact values can be computed for $\varepsilon_t$ and $m$, and the equations can be solved again with these better values. This iterative algorithm is named AOPIT (AOP with iterations). The desired effect is that $\mathbf{i}$, $\mathbf{j}$ and $m$ converge toward values corresponding to the correct attitude within a few iterations.

3.4. Solving equations and attitude estimation by iteration process

To solve (13), rewrite it in a more compact form
$$
\begin{cases}
M_0^m M_t^n \cdot \mathbf{I} = \xi_t \\
M_0^m M_t^n \cdot \mathbf{J} = \eta_t
\end{cases}
\tag{18}
$$

with $\mathbf{I} = \mathbf{i} \cdot s_x$, $\mathbf{J} = \mathbf{j} \cdot s_y$ and

$$
\begin{cases}
\xi_t = x_{t'}^a - x_0^a = x_t^a - x_0^a + \dfrac{\left(x_t^b - x_t^a\right)\left(x_t^a - x_0^a\right)}{(b - m)\left(x_t^a - x_0^a\right) + (m - a)\left(x_t^b - x_0^b\right)}\,\varepsilon_t \\[2ex]
\eta_t = y_{t'}^a - y_0^a = y_t^a - y_0^a + \dfrac{\left(y_t^b - y_t^a\right)\left(y_t^a - y_0^a\right)}{(b - m)\left(y_t^a - y_0^a\right) + (m - a)\left(y_t^b - y_0^b\right)}\,\varepsilon_t
\end{cases}
\tag{19}
$$

Fig. 5. Flow chart of the proposed algorithm.
where the terms $\varepsilon_t$ and $m$ have been updated at the previous iteration step. The dot products of these equations, written with vector coordinates in the object coordinate system, become

$$
\begin{cases}
[U_t \; V_t \; W_t]\,[I_u \; I_v \; I_w]^{T} = \xi_t \\
[U_t \; V_t \; W_t]\,[J_u \; J_v \; J_w]^{T} = \eta_t
\end{cases}
\tag{20}
$$

where $T$ denotes matrix transposition. As the unknowns are the vectors $\mathbf{I}$ and $\mathbf{J}$, these are linear equations. $x_0^a, y_0^a, x_0^b, y_0^b$ and $x_t^a, y_t^a, x_t^b, y_t^b$ are the coordinates of the projections of rays $l_0$ and $l_t$ on calibration planes A and B, and $U_t, V_t, W_t$ are the known coordinates of point $M_t$ in the object coordinate system. Writing (18) for the $N$ object points $M_0, M_1, \ldots, M_t, \ldots, M_{N-1}$ and their projection points on planes A and B, we generate linear systems for the coordinates of the unknown vectors $\mathbf{I}$ and $\mathbf{J}$:

$$
\begin{cases}
G\,\mathbf{I} = \mathbf{x}' \\
G\,\mathbf{J} = \mathbf{y}'
\end{cases}
\tag{21}
$$

where $G$ is the matrix of the coordinates of the object points $M_t$ in the object coordinate system, $\mathbf{x}'$ is the vector with $t$th coordinate $\xi_t$ and $\mathbf{y}'$ is the vector with $t$th coordinate $\eta_t$. If there are at least three object points other than $M_0$ and these points are noncoplanar, the rank of $G$ is 3 and the least-squares solutions of the linear systems can be expressed as

$$
\begin{cases}
\mathbf{I} = H\,\mathbf{x}' \\
\mathbf{J} = H\,\mathbf{y}'
\end{cases}
\tag{22}
$$
where $H$ is the pseudoinverse of the matrix $G$, which can be called the object matrix. To estimate the attitude of an object by the iterative process of the proposed algorithm, the detailed operating procedure is as follows.

Step 0 (initial step, executed once for a set of simultaneously visible feature points): write the matrix $G$ and calculate the object matrix $H$ as the pseudoinverse of $G$.
(1) Set $\varepsilon_t = 0$, which means $x_{t'}^a = x_t^a$ and $y_{t'}^a = y_t^a$, i.e. $M_t^a$ and $M_{t'}^a$ coincide; simultaneously, $n = m$.
(2) Beginning of loop; calculate $\mathbf{i}$ and $\mathbf{j}$:
(a) Calculate the vectors $\mathbf{x}'$ and $\mathbf{y}'$ by (19).
(b) Multiply the object matrix $H$ with the vectors $\mathbf{x}'$ and $\mathbf{y}'$ to obtain $\mathbf{I}$ and $\mathbf{J}$ by (22).
(c) Calculate the terms $s_x = (\mathbf{I} \cdot \mathbf{I})^{1/2}$ and $s_y = (\mathbf{J} \cdot \mathbf{J})^{1/2}$.
(d) Calculate the unit vectors $\mathbf{i} = \mathbf{I}/s_x$ and $\mathbf{j} = \mathbf{J}/s_y$.
(3) Update $m$ from $s_x$ using (8), or from $s_y$ using (10) in a similar manner.
(4) Update $\varepsilon_t$:
(a) Calculate the unit vector $\mathbf{k}$ as the cross product of $\mathbf{i}$ and $\mathbf{j}$.
(b) Calculate $\varepsilon_t$ by (15).
(5) If $|\varepsilon_t(it) - \varepsilon_t(it-1)| > \mathrm{Threshold}$, set $it = it + 1$ ($it$ is the number of iterations) and go to step (2).
(6) Otherwise, calculate $n$ by (14) and output the attitude using the values found at the last iteration: the rotation matrix is the matrix with row vectors $\mathbf{i}$, $\mathbf{j}$ and $\mathbf{k}$.
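The steps above can be sketched as an illustrative numpy implementation. Several choices here are our own assumptions, not the authors': the input layout (one row of plane-A/B ray intersections per point, the reference point first), the update of m by algebraically inverting Eq. (8) at a single reference point (the paper does not specify which point to use), and the absence of degenerate denominators in Eq. (19):

```python
import numpy as np

def aopit(obj_pts, pa, pb, a, b, n_iter=30, tol=1e-9):
    """Hypothetical sketch of the AOPIT loop.
    obj_pts : (N,3) object-frame coordinates (U_t, V_t, W_t) of the points
              M_1..M_N relative to the reference point M_0 (noncoplanar).
    pa, pb  : (N+1,2) intersections of rays l_0..l_N with calibration
              planes A (z=a) and B (z=b); row 0 is the ray of M_0.
    Returns the rotation matrix with row vectors i, j, k (Eq. (2))."""
    G = np.asarray(obj_pts, float)             # object matrix, rank 3
    H = np.linalg.pinv(G)                      # pseudoinverse, Eq. (22)
    dxa = pa[1:, 0] - pa[0, 0]                 # x_t^a - x_0^a
    dya = pa[1:, 1] - pa[0, 1]
    dxb = pb[1:, 0] - pb[0, 0]
    dyb = pb[1:, 1] - pb[0, 1]
    eps = np.zeros(len(G))                     # step (1): epsilon_t = 0
    m = a                                      # crude initial depth of plane M
    r = np.argmax(np.abs(dxb - dxa))           # reference point for the m update
    for _ in range(n_iter):
        den_x = (b - m) * dxa + (m - a) * dxb  # denominators of Eq. (19)
        den_y = (b - m) * dya + (m - a) * dyb
        xi = dxa + (pb[1:, 0] - pa[1:, 0]) * dxa / den_x * eps
        eta = dya + (pb[1:, 1] - pa[1:, 1]) * dya / den_y * eps
        I, J = H @ xi, H @ eta                 # step (2b), Eq. (22)
        sx, sy = np.linalg.norm(I), np.linalg.norm(J)
        i, j = I / sx, J / sy                  # step (2d)
        k = np.cross(i, j)                     # step (4a)
        # step (3): update m by inverting Eq. (8) at the reference point
        m = ((b - a) * dxa[r] / sx - b * dxa[r] + a * dxb[r]) / (dxb[r] - dxa[r])
        eps_new = G @ k                        # step (4b), Eq. (15)
        if np.max(np.abs(eps_new - eps)) < tol:  # step (5) stopping test
            eps = eps_new
            break
        eps = eps_new
    return np.vstack([i, j, k])
```

On synthetic data generated by a central camera (where the IRT model is exact), this sketch converges toward the true rotation in a handful of iterations, mirroring the POSIT behaviour described above.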
The flow chart of the detailed operating procedure of the proposed algorithm is given in Fig. 5. In the image-processing box that extracts the image feature points, different conditions are set for different binary threshold values suitable for different noise levels. Specifically, to account for measurement noise in the proposed algorithm, the image threshold value should change automatically with the image itself during image processing [35,36]. In our algorithm, the gray-level difference, namely the difference between the maximum and the minimum gray-level values, is used as a reference, as shown in Fig. 5. For different gray-level differences, a corresponding coefficient is multiplied by the image threshold value, which is obtained by the peak-peak-threshold algorithm. Adaptive threshold selection ensures that the algorithm adapts to different image noise levels [37–39].
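The adaptive selection just described can be sketched as follows. This is only an illustration of the scheme: the coefficient table and the stand-in used for the peak-peak threshold are hypothetical, since the paper does not give the exact values:

```python
import numpy as np

def adaptive_threshold(img, coeffs=((50, 0.8), (150, 1.0), (256, 1.2))):
    """Illustrative adaptive threshold selection (hypothetical values).
    The gray-level difference max - min selects a coefficient from a
    lookup table; the coefficient scales a base threshold standing in
    for the paper's peak-peak-threshold result."""
    img = np.asarray(img, dtype=np.uint8)
    diff = int(img.max()) - int(img.min())           # gray-level difference
    base = (int(img.max()) + int(img.min())) / 2.0   # stand-in base threshold
    for upper, c in coeffs:                          # pick coefficient by range
        if diff < upper:
            return c * base
    return base
```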
Fig. 6. IRT camera model calibration experimental system.
Fig. 7. Feature circle direction pattern.
4. Practical evaluation experiments and discussions

4.1. IRT model calibration results

The experimental setup for IRT camera model calibration is shown in Fig. 6 and is used to perform the parameter calibration of the system. The vision sensor used in our experiment is a CCD camera with an image resolution of 1024 × 1024 pixels and a pixel size of 0.00645 mm × 0.00645 mm. The CCD camera is fixed onto the support bracket, facing the calibration system, namely the calibration pattern, which is fixed on the linear translation stage. The linear translation stage is made by Zolix, with a repetitive positioning accuracy of 3 µm. The controller drives the parallel movement of the translation stage and the pattern fixed on it. The feature circle direction pattern used is shown in Fig. 7. The distance between the centers of each two adjacent circles is 20 mm. The four big circles in the center of the pattern indicate direction and are convenient for central positioning. The dense dot distribution guarantees the high accuracy of the calibration results. The pattern is printed on A3 paper and pasted on composite glass. The flatness of the pattern is required to ensure that the z coordinates of all the feature points are the same [40]. To calibrate the parameters of the IRT camera model, the linear translation stage carries the pattern to three parallel positions: plane A, plane B (as mentioned in the previous contents), plus the beginning position serving as the base location (if the base location is chosen at one of the two planes, only two parallel positions are needed). But only the images of the calibration pattern at plane A and plane B are needed to solve the calibration model parameters. The captured image of the pattern after feature extraction at plane B is displayed in Fig. 8. The
Fig. 8. Processed calibration pattern image and its local enlarged image.
features of the pattern are circles, and the feature information used here is the centers of the circles. The bottom big circle is chosen as the origin of the pattern plane, and the coordinates of the other feature points are established relative to it. Actually, the origin position of the base location is just the origin of the coordinate system O−ijk, and the coordinates in the O−ijk system are given in Fig. 8. For the calibration pattern, with the coordinates in the calibration plane coordinate system and the extracted image coordinates, the IRT camera model parameters can be calculated by (1). In our experiment, the order of the polynomial is chosen as 5; thus, there are in total 21 parameters for each of $C_{rs}^p$ and $D_{rs}^p$ ($p = a$ or $b$). The calibration results are given in Table 1. From Table 1, it can be seen that the parameters of the higher-order terms are quite small: for the 5th-order terms, the parameters are nearly all smaller than $10^{-15}$, and for the 4th-order terms, the parameters are at the $10^{-12}$ level. Such small values imply that the corresponding terms have only a slight effect on the camera model.
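Because Eq. (1) is linear in the coefficients $C_{rs}^p$ and $D_{rs}^p$, the fit itself can be sketched with an ordinary least-squares solve; the function name is ours, and the use of numpy's `lstsq` in place of the Levenberg–Marquardt refinement mentioned in Section 2 is a simplification:

```python
import numpy as np

def fit_mapping(u, v, x, q=5):
    """Least-squares fit of the Eq. (1) polynomial coefficients
    (one call per output coordinate and per calibration plane).
    u, v : image coordinates of the feature circle centers.
    x    : corresponding calibration-plane coordinate.
    A q-order model has (q+1)(q+2)/2 coefficients, so at least that
    many well-distributed points are required."""
    u, v, x = map(np.asarray, (u, v, x))
    # one design-matrix column per monomial u^r * v^s with r + s <= q
    cols = [(u ** r) * (v ** s) for r in range(q + 1) for s in range(q + 1 - r)]
    A = np.stack(cols, axis=1)
    coeff, *_ = np.linalg.lstsq(A, x, rcond=None)
    return coeff, A

# a 5th-order model has 21 coefficients, matching Table 1:
assert sum(1 for r in range(6) for s in range(6 - r)) == 21
```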
Fig. 9. Calculated calibration pattern positions and its standard positions.

Table 1
IRT camera model calibration parameters.

Parameters   | Plane A (130 mm)                      | Plane B (180 mm)
             | g_x^a(u^a,v^a)   g_y^a(u^a,v^a)       | g_x^b(u^b,v^b)   g_y^b(u^b,v^b)
(C00, D00)   | −1.51E+02        −1.50E+02            | −1.34E+02        −1.38E+02
(C05, D05)   | 2.35E−15         1.64E−14             | 4.36E−16         9.03E−15
(C04, D04)   | −5.99E−12        −3.76E−11            | −9.53E−13        −2.03E−11
(C14, D14)   | −5.23E−16        2.92E−15             | −2.23E−15        2.96E−15
(C03, D03)   | 4.80E−09         3.23E−08             | 5.38E−11         1.89E−08
(C13, D13)   | 1.52E−12         1.64E−12             | 4.22E−12         −3.19E−12
(C23, D23)   | −1.45E−16        −2.72E−15            | 6.70E−16         6.17E−16
(C02, D02)   | −4.37E−06        −1.41E−05            | −2.24E−06        −1.02E−05
(C12, D12)   | −4.65E−10        −6.85E−09            | −1.93E−09        −6.82E−10
(C22, D22)   | 1.09E−11         5.75E−12             | 9.60E−12         2.19E−12
(C32, D32)   | −5.99E−15        −1.59E−15            | −5.96E−15        −2.30E−15
(C01, D01)   | 2.79E−03         2.97E−01             | 2.42E−03         2.71E−01
(C11, D11)   | 4.16E−07         −2.28E−06            | 3.32E−07         −5.10E−06
(C21, D21)   | −1.35E−08        1.54E−09             | −1.18E−08        7.72E−09
(C31, D31)   | 1.02E−11         2.45E−12             | 8.62E−12         −6.42E−12
(C41, D41)   | −2.47E−15        2.92E−16             | −1.55E−15        5.15E−15
(C10, D10)   | 2.95E−01         2.86E−03             | 2.70E−01         3.37E−03
(C20, D20)   | 2.42E−06         −1.29E−06            | −3.58E−06        −5.07E−06
(C30, D30)   | −2.02E−08        −4.58E−09            | −2.75E−09        3.54E−09
(C40, D40)   | 2.69E−11         4.12E−12             | 6.16E−12         −2.35E−12
(C50, D50)   | −9.98E−15        −1.53E−15            | −1.47E−15        9.89E−17
To evaluate the effectiveness of the IRT camera model, the projection errors are calculated. With the solved model parameters, the extracted feature points at the base position, position A and position B are projected into the calibration plane coordinate system by (1), as shown in Fig. 9. The standard positions correspond to the coordinates of the calibration pattern in the O-ijk system. To compare the projection points with the actual calibration plane coordinates of the feature points, each projection error is given as the square root of the sum of the squared x and y coordinate errors, i.e., the Euclidean distance. The error distributions at the base position, position A and position B are displayed in detail in Fig. 10 by error contour maps and 3D error distribution maps. From the error contour maps of Fig. 10 it can be directly inferred that the projection position errors are irregular, and from the 3D error distribution maps that the calculated errors are quite small. The maximum projection position errors at the base position, position A and position B are respectively 0.184, 0.067 and 0.076. The irregularity of the error distributions and the small error values imply the feasibility and effectiveness of the IRT camera model and the calibration method.
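The projection-error metric described above (the Euclidean distance between the model-projected points and their standard plane coordinates) can be sketched as follows; the function name is illustrative.

```python
import numpy as np

def projection_errors(projected, standard):
    # Per-point position error sqrt(dx^2 + dy^2) between the
    # model-projected points and their standard plane coordinates.
    d = np.asarray(projected, dtype=float) - np.asarray(standard, dtype=float)
    return np.hypot(d[:, 0], d[:, 1])
```

With this, `projection_errors(proj_pts, std_pts).max()` gives the maximum projection position error reported for each pattern position.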
4.2. Attitude estimation results

4.2.1. Experimental system

For the final goal of attitude estimation, the experimental system is established as depicted in Fig. 11; it is similar to the calibration system except for the movement part and the pattern. In the measurement process, the movement part is an integrated turntable [41–43] and the pattern is a four-point non-coplanar target. The turntable includes three rotation parts: a Zolix RAK-200 for the yaw direction and RAK-100s for the pitch and roll directions. The repeatability of both the RAK-200 and the RAK-100 is 0.005°. The four-point pattern has the minimum number of points required by the attitude estimation algorithm. With as few feature points as possible, feature recognition, extraction and matching are quite easy to conduct. The code for imaging system calibration, image processing and the attitude calculation algorithm was developed in the Visual Studio 2010 environment on a computer with a 2.0 GHz CPU, with OpenCV 2.4.4 as an additional library for image processing.
Fig. 10. Projection errors of base position (column 1), position A (column 2) and position B (column 3). The first row is error contour maps and the second row is 3D error distribution maps.
Table 2 Motion form pseudocodes of biaxial motion and tri-axial motion.
Fig. 11. Attitude estimation experimental system.

Fig. 12. Four sample images from different attitudes; the red crosses denote the centroid positions of the infrared LED control points; the green Arabic numerals indicate the serial numbers of the feature points. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Table 3
Statistical results of attitude estimation errors for uniaxial motion: X-axis and Z-axis.
4.2.2. Three kinds of motion experiments

Experiments including uniaxial motion, biaxial motion and tri-axial motion are conducted to evaluate the attitude estimation performance of the proposed method. The x-, y- and z-axes are defined as the pitch, roll and yaw axes, respectively. During the motion process, the non-coplanar pattern is fixed onto the turntable and rotates with its movement. The infrared LED feature points inlaid in the pattern are captured by the camera and used as the object points. Four sample captured images of the pattern from different attitudes are displayed in Fig. 12, with the feature point extraction results marked in the images. Within a certain range of motion, all four feature points can be completely captured and correctly extracted. In the process of image processing and feature matching, the workload is small because the number of control points is small, which improves the computational efficiency. The specific motion trajectories of the four-point non-coplanar pattern are shown in Fig. 13, which includes uniaxial motion (z-axis), biaxial motion (x–z-axes) and tri-axial motion (x–y–z-axes). In uniaxial motion, the turntable rotates around the pitch axis and the measurement pattern is captured every three degrees; in biaxial and tri-axial motion, the measurement pattern is captured every five degrees. The pseudocodes of the biaxial and tri-axial motions used in our experiment are described in Table 2. For uniaxial motions, the attitude estimation errors of the proposed IRT-AOPIT method are depicted in Fig. 14, with the errors of the classical POSIT method also given for comparison. As displayed in Fig. 14, the pitch and roll angle errors of the proposed method are smaller than those of the POSIT method, while its yaw angle error is larger; on the whole, this indicates the superiority of the proposed method.
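The body of Table 2 is not reproduced in this excerpt, so the following is only a plausible sketch of a biaxial capture schedule consistent with the described 5° steps. The angle ranges, function name and tuple layout are my own illustrative choices, not taken from the paper.

```python
def biaxial_schedule(pitch_range=(-15, 15), yaw_range=(-15, 15), step=5):
    # Hypothetical nested-loop capture schedule for x-z biaxial motion:
    # for each pitch setpoint, sweep all yaw setpoints and record a
    # (pitch, roll, yaw) pose at which an image would be captured.
    schedule = []
    for pitch in range(pitch_range[0], pitch_range[1] + 1, step):
        for yaw in range(yaw_range[0], yaw_range[1] + 1, step):
            schedule.append((pitch, 0.0, yaw))  # roll stays zero in x-z motion
    return schedule
```

A tri-axial schedule would simply add a third loop over roll setpoints.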
To provide more persuasive results, three groups of uniaxial motion experiments are conducted to verify the reliability
Method  Run   Maximum errors                Single-axis RMSE              Total RMSE
              Pitch     Roll      Yaw       Pitch     Roll      Yaw
A^a     1^b   0.731     1.165    -0.288     0.296     0.774     0.126     0.839
A       2     0.722     1.147    -0.289     0.294     0.771     0.126     0.835
A       3     0.699     1.133    -0.298     0.276     0.773     0.126     0.831
B       1     0.260    -0.483     0.393     0.104     0.299     0.157     0.353
B       2     0.231    -0.477     0.391     0.101     0.297     0.159     0.352
B       3     0.232    -0.497     0.386     0.106     0.300     0.160     0.356

^a A, B in the first column respectively represent the z-axis uniaxial motion by POSIT and the z-axis uniaxial motion by IRT-AOPIT.
^b The Arabic numerals in the second column represent the serial number of the repetitive experiments.
and repeatability of the two methods, and the statistical results are displayed in Table 3. For z-axis motion the proposed IRT-AOPIT method is superior to the POSIT method, with smaller error values. The good stability of the statistical results indicates that the proposed method is reproducible. For the x–z-axes biaxial motion form, three groups of experiments are also conducted. The attitude estimation error statistics and the error graphs are shown in Table 4 and Fig. 15, respectively. Compared with the classical POSIT method, the performance of the proposed IRT-AOPIT method is superior: the total attitude estimation errors are obviously decreased by the proposed method, and it has a more compact error distribution, as displayed in the tri-axial error separation maps of Fig. 15(d), (e) and (f). The good stability of the statistical results also shows that the proposed method has excellent repeatability. More complex tri-axial motion is conducted to further evaluate the performance of the proposed method. The error distributions are shown in Fig. 16. It can be directly seen that both the total error values and the separation errors calculated by the proposed method are smaller, which indicates the effectiveness of the IRT-AOPIT algorithm. Besides, from the error separation maps of Fig. 16(c), (d) and (e), it can be intuitively seen that the error distributions
Fig. 13. Motion trajectories of four-point non-coplanar pattern.
Fig. 14. Attitude estimation errors of uniaxial motions.
Fig. 15. Attitude estimation errors of biaxial motion, where in (a) and (b) the magnitudes of the error values are described by the color of the scatter points using a thermal map; the correspondence is pitch–roll–yaw color to x–y–z error. For x–z biaxial motion the roll angles are always zero, so the correspondence can be changed to pitch–yaw magnitude to x–z error, as in (c). The error values in (a), (b) and (c) are the same; they are merely presented in different forms. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
of the proposed method are more compact, which indicates that the variances are smaller.
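The error statistics reported in Tables 3–5 appear to combine the three single-axis RMSEs into the "Total" column as a Euclidean norm (the tabulated totals match this to rounding). A minimal sketch under that assumption, with function names of my own choosing:

```python
import math

def axis_rmse(errors):
    # Root-mean-square error for one rotation axis over a motion sequence.
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def total_rmse(rmse_pitch, rmse_roll, rmse_yaw):
    # Euclidean combination of the per-axis RMSEs; this reproduces the
    # "Total RMSE" column of Tables 3-5 from the single-axis values.
    return math.sqrt(rmse_pitch ** 2 + rmse_roll ** 2 + rmse_yaw ** 2)
```

For example, `total_rmse(0.296, 0.774, 0.126)` reproduces the 0.839 total of Table 3's first POSIT run to rounding.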
In addition, from Fig. 16(a) and (b) it can be seen that, for both the POSIT method and the proposed method, the attitude estimation errors are small in the small-angle motion area and large in the large-angle motion area, which reveals that the errors tend to be greater in the range far away from the camera's optical axis. One reason is that the feature circles undergo greater projective distortion at large angles: their roundness decreases and they project as ellipses, which makes the feature extraction inaccurate. Another explanation is that the images captured in the large-angle motion area always lie near the edge of the field of view (FOV)
Fig. 16. Attitude estimation errors of tri-axial motion, where in (a) and (b) the correspondence is pitch–roll–yaw color to x–y–z error. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Fig. 17. Local screenshots of the captured image at the same position with different levels of noise.

Table 4
Statistical results of attitude estimation errors for biaxial motion: x–z-axes.

Method  Run   Maximum errors                Single-axis RMSE              Total RMSE
              Pitch     Roll      Yaw       Pitch     Roll      Yaw
A^a     1^b   0.497     1.411     0.537     0.175     0.622     0.220     0.682
A       2     0.491     1.414    -0.569     0.175     0.630     0.225     0.692
A       3     0.485     1.406    -0.551     0.172     0.627     0.222     0.687
B       1     0.407     0.745     0.652     0.170     0.337     0.244     0.450
B       2    -0.397     0.743     0.740     0.172     0.341     0.256     0.459
B       3    -0.395     0.735     0.751     0.170     0.338     0.255     0.456

^a A, B in the first column respectively represent the POSIT method and the IRT-AOPIT method.
^b The Arabic numerals in the second column represent the serial number of the repetitive experiments.
as displayed in Fig. 12, where the lens distortion is most severe. The images captured there suffer more serious distortion, which can also make the feature extraction inaccurate. As the key input of the attitude estimation algorithm, inaccurate image coordinates of the feature points will certainly bring about large calculation errors.

The integrated turntable is moved 100 mm farther from the camera along the optical axis and another tri-axial experiment is conducted to prove the robustness of the system. The statistical results of the two tri-axial experiments are given in Table 5.

Table 5
Statistical results of attitude estimation errors for the two tri-axial experiments.

Method  Run   Maximum errors                Single-axis RMSE              Total RMSE
              Pitch     Roll      Yaw       Pitch     Roll      Yaw
A^a     1^b  -0.926     1.388    -1.145     0.312     0.671     0.433     0.857
A       2    -0.942    -1.407    -1.218     0.336     0.675     0.434     0.870
B       1     0.928    -0.757     0.974     0.253     0.328     0.235     0.476
B       2     0.843     0.707    -0.973     0.243     0.313     0.259     0.474

^a A, B in the first column respectively represent the POSIT method and the IRT-AOPIT method.
^b The Arabic numerals in the second column represent the serial number of the repetitive experiments; 1 represents position one and 2 represents position two.

Both the maximum attitude errors and the RMSE are decreased by the proposed method, which shows that its reliability and repeatability are superior.

4.2.3. Noise-resisting ability of the proposed method

To test how well the proposed algorithm resists noise, another group of experiments is conducted. Different levels of Gaussian noise are added to the captured images of the tri-axial motion. The mean of the Gaussian noise is set to zero, and its standard deviations (σ) are set to 5, 10, 20 and 30 pixels, respectively. The function used in the
Fig. 18. Attitude estimation errors of tri-axial motion with different levels of Gaussian noises.
code is cvRandArr(&rng, Imgnoise, CV_RAND_NORMAL, cvScalarAll(0), cvScalarAll(30)), where the last parameter is the standard deviation. Local screenshots of one captured image with different levels of noise are shown in Fig. 17. It can be directly seen from Fig. 17 that the features of the images, including lines and points, become blurred to a certain extent. For σ = 20 and σ = 30, the added noise points are quite apparent. In the image processing step, there are no obvious differences among the feature point extraction results for images with different noise levels. As displayed in Fig. 17, even when much noise is added to the images, the feature points can still be exactly extracted. The only difference is the threshold value set for image binarization: for images with no or little added noise the threshold is smaller, and for those with more added noise it is larger. Once the feature points can be recognized, the center points of the ellipse features are always reliable. Then both the proposed IRT-AOPIT algorithm and the POSIT algorithm are used to calculate the attitude from the noisy images. The errors of the four groups of calculated results are shown in Fig. 18. It can be directly seen that the attitude calculation errors range from −1.5° to 1.5°, −1.0° to 1.5°, or −1.5° to 1.0°.
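cvRandArr belongs to OpenCV's legacy C API. For reference, a NumPy equivalent of this noise-injection step might look as follows; the function name and the clipping policy are my own choices, not from the paper.

```python
import numpy as np

def add_gaussian_noise(image, sigma, seed=None):
    # Equivalent of cvRandArr(..., CV_RAND_NORMAL, cvScalarAll(0),
    # cvScalarAll(sigma)): zero-mean Gaussian noise with standard
    # deviation sigma, added per pixel and clipped to the 8-bit range.
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=np.asarray(image).shape)
    noisy = np.asarray(image, dtype=np.float64) + noise
    return np.clip(noisy, 0, 255).astype(np.uint8)
```

Clipping keeps the noisy image a valid 8-bit image, so the subsequent binarization and centroid extraction run unchanged.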
The attitude calculation errors of the noise-free images range from −1.5° to 1.5°, −1.0° to 1.0°, or −1.5° to 1.0°. Comparing the attitude errors between the noisy and noiseless images, the proposed algorithm shows strong robustness to Gaussian noise. And comparing the proposed algorithm with the POSIT algorithm on the same set of noisy images, our algorithm performs better. The noise-resisting ability of our algorithm is superior mainly because an iterative process is included in the attitude solving procedure.

5. Conclusion

The attitude estimation algorithm from orthographic projection with iteration, based on the generic imaging ray tracking camera model, proves to be of high accuracy, stability and robustness. This not only demonstrates that the proposed imaging model represented by a high-order polynomial can characterize the imaging properties of central camera lenses very well, but also demonstrates the validity and effectiveness of the attitude-from-orthographic-projection-with-iteration algorithm for solving attitude in combination with the imaging ray tracking model.
Besides, the improved accuracy indicates its superiority over the classical pinhole camera model and the POSIT algorithm. In addition, different forms of motion experiments were conducted, and the superior performance of the proposed method indicates its reliability and stability. Noise experiments showed the robustness of the proposed algorithm to different levels of noise. Extending the method to non-central cameras is the main content of our future work; the online processing capability and its real-time coding specifications will also be considered. The proposed algorithm is an alternative attitude estimation method for central cameras and may provide new ideas for non-central cameras.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (61603353, 51705477), the Pre-research Field Foundation, China (6140518010201), the Scientific and Technology Innovation Programs of Higher Education Institutions in Shanxi, China (201802084), the Shanxi Province Science Foundation for Youths, China (201601D021067), the Program for the Top Young Academic Leaders of Higher Learning Institutions of Shanxi, China, the Young Academic Leaders Foundation in North University of China, the Science Foundation of North University of China (XJJ201822), and the Fund for Shanxi "1331 Project" Key Subjects Construction.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

[1] Miraldo Pedro, Araujo Helder, Gonçalves Nuno. Pose estimation for general cameras using lines. IEEE Trans Cybern 2015;45(10):2156–64. [2] Huang Haoqian, Zhou Jun, Zhang Jun, et al. Attitude estimation fusing Quasi-Newton and cubature Kalman filtering for inertial navigation system aided with magnetic sensors. IEEE Access 2018;6:28755–67. [3] Guo Xiaoting, Sun Changku, Wang Peng.
Multi-rate cubature Kalman filter based data fusion method with residual compensation to adapt to sampling rate discrepancy in attitude measurement system. Rev Sci Instrum 2017;88(8). 085002. [4] Sun Pengfei, Sun Changku, Li Wenqiang, Wang Peng. A new pose estimation algorithm using a perspective-ray-based scaled orthographic projection with iteration. PLoS One 2015;10(7). e013402. [5] Zhang Zimiao, Zhang Shihai. An efficient vision-based pose estimation algorithm using the assistant reference planes based on the perspective projection rays. Sensors Actuators A 2018;272:301–9. [6] Cui Bingbo, Chen Xiyuan, Tang Xihua, Huang Haoqian, Liu Xiao. Robust cubature Kalman filter for GNSS/INS with missing observations and colored measurement noise. ISA Trans 2018;72:138–46. [7] Shen Chong, Song Rui, Li Jie, et al. Temperature drift modeling of MEMS gyroscope based on genetic-Elman neural network. Mech Syst Signal Process 2016;72–73:897–905. [8] Lepetit Vincent, Fua Pascal. Monocular model-based 3D tracking of rigid objects: A survey. Found Trends Comput Graph Vis 2005;1(1):1–89. [9] Collins Robert T, Liu Yanxi, Leordeanu Marius. On-line selection of discriminative tracking features. IEEE Trans Pattern Anal Mach Intell 2005;27(10):1631–43. [10] Koller D, Daniilidis K, Nagel H-H. Model-based object tracking in monocular image sequences of road traffic scenes. Int J Comput Vis 1993;10(3):257–81. [11] Asl Hamed Jabbari, Yoon Jungwon. Adaptive vision-based control of an unmanned aerial vehicle without linear velocity measurements. ISA Trans 2016;65:296–306. [12] Fischler Martin A, Bolles Robert C. A paradigm for model fitting with applications to image analysis and automated cartography. Commun ACM 1981;24(6):381–95. [13] Lepetit Vincent, Moreno-Noguer Francesc, Fua Pascal. EPnP: An accurate O(n) solution to the PnP problem. Int J Comput Vis 2009;81(2):155–66.
[14] Li Shiqi, Xu Chi, Xie Ming. A robust O(n) solution to the perspective-n-point problem. IEEE Trans Pattern Anal Mach Intell 2012;34(7):1444–50. [15] Guo Xiaoting, Sun Changku, Wang Peng, Huang Lu. Vision sensor and dual MEMS gyroscope integrated system for attitude determination on moving base. Rev Sci Instrum 2018;89(1). 015002. [16] Lu Chien-Ping, Hager Gregory D, Mjolsness Eric. Fast and globally convergent pose estimation from video images. IEEE Trans Pattern Anal Mach Intell 2000;22(6):610–22. [17] Dementhon Daniel F, Davis Larry S. Model-based object pose in 25 lines of code. Int J Comput Vis 1995;15(1–2):123–41. [18] David Philip, DeMenthon Daniel, Duraiswami Ramani, Samet Hanan. SoftPOSIT: Simultaneous pose and correspondence determination. Int J Comput Vis 2004;59(3):259–84. [19] Tsai RY. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J Robot Autom 1987;3(4):323–44. [20] Zhang Z. A flexible new technique for camera calibration. IEEE Trans Pattern Anal Mach Intell 2000;22(11):1330–4. [21] Wu F, Hu Z, Zhu H. Camera calibration with moving one-dimensional objects. Pattern Recognit 2005;40:755–65. [22] Cornic Philippe, Illoul Cédric, et al. Another look at volume self-calibration: calibration and self-calibration within a pinhole model of Scheimpflug cameras. Meas Sci Technol 2016;27(9). 094004. [23] Samper David, Santolaria Jorge, et al. Analysis of Tsai calibration method using two- and three-dimensional calibration objects. Mach Vis Appl 2013;24(1):117–31. [24] Carlos Ricolfe-Viala, Sánchez-Salmerón AJ. Using the camera pin-hole model restrictions to calibrate the lens distortion model. Opt Laser Technol 2011;43(6):996–1005. [25] Chen Kepeng, Shi Tielin, et al. Calibration of telecentric cameras with an improved projection model. Opt Eng 2018;57(4). 044103. [26] Branislav Mičušík, Pajdla T. Autocalibration & 3D reconstruction with non-central catadioptric cameras. 
In: Computer vision and pattern recognition, 2004. CVPR 2004. Proceedings of the 2004 IEEE computer society conference on IEEE, Vol. 1. 2004, p. I–58–I–65. [27] Grossberg Michael D, Nayar Shree K. A general imaging model and a method for finding its parameters. In: Computer vision, 2001. ICCV 2001. Proceedings. Eighth IEEE international conference on IEEE, (2). 2001, p. 108–15. [28] Grossberg Michael D, Nayar Shree K. The raxel imaging model and ray-based calibration. Int J Comput Vis 2005;61(2):119–37. [29] Miraldo Pedro, Araujo Helder. Pose estimation for non-central cameras using planes. J Intell Robot Syst 2015;80(3–4):1–14. [30] Sun Changku, Dong Hang, Zhang Baoshang, Wang Peng. An orthogonal iteration pose estimation algorithm based on an incident ray tracking model. Meas Sci Technol 2018;29(9). 095402. [31] Sun Changku, Guo Xiaoting, Wang Peng, Zhang Baoshang. Computational optical distortion correction based on local polynomial by inverse model. Optik 2017;(132):388–400. [32] Sadeghzadeh-Nokhodberiz Nargess, Poshtan Javad, Wagner Achim, Nordheimer Eugen, Badreddin Essameddin. Distributed observers for pose estimation in the presence of inertial sensory soft faults. ISA Trans 2014;53:1307–19. [33] Miraldo Pedro, Araujo Helder, Queiró Joao. Point-based calibration using a parametric representation of the general imaging model. In: International conference on computer vision IEEE computer society. 2011, p. 2304–11. [34] Pjanic Petar, Grundhöfer Anselm. Paxel: A generic framework to superimpose high-frequency print patterns using projected light. IEEE Trans Image Process 2018;27(7):3541–55. [35] Shen Chong, Liu Xiaochen, Cao Huiliang, Zhou Yuchen, et al. Brain-like navigation scheme based on MEMS-INS and place recognition. Appl Sci 2019;9(8):1708. [36] Guo Hao, Zhu Qiang, Tang Jun, Nian Fushun, et al. A temperature and humidity synchronization detection method based on microwave coupled-resonator. Sensors Actuators B 2018;261:434–40.
[37] Zhijian Wang, et al. A novel fault diagnosis method of gearbox based on maximum kurtosis spectral entropy deconvolution. IEEE Access 2019;7:29520–32. [38] Wang Zhijian, Du Wenhua, Wang Junyuan. Research and application of improved adaptive MOMEDA fault diagnosis method. Measurement 2019;140:63–75. [39] Wang Z, et al. Application of parameter optimized variational mode decomposition method in fault diagnosis of gearbox. IEEE Access 2019;7:44871–82. [40] Guo Xiaoting, Tang Jun, Li Jie, Wang Chenguang, Shen Chong, Liu Jun. Determine turntable coordinate system considering its non-orthogonality. Rev Sci Instrum 2019;90(3). 033704. [41] Cao Huiliang, Zhang Yingjie, Han Ziqi, Shao Xingling, Gao Jinyang, Huang Kun, Shi Yunbo, Tang Jun, Shen Chong, Liu Jun. Pole-zerotemperature compensation circuit design and experiment for dual-mass MEMS gyroscope bandwidth expansion. IEEE/ASME Trans Mechatronics 2019;24(2):677–88. http://dx.doi.org/10.1109/TMECH.2019.2898098.
[42] Shen Chong, Yang Jiangtao, Tang Jun, Liu Jun, Cao Huiliang. Note: Parallel processing algorithm of temperature and noise error for micro-electro-mechanical system gyroscope based on variational mode decomposition and augmented nonlinear differentiator. Rev Sci Instrum 2018;89(7). 076107.
[43] Cao Huiliang, Zhang Yingjie, Shen Chong, Liu Yu, Wang Xinwang. Temperature energy influence compensation for MEMS vibration gyroscope based on RBF NN-GA-KF method. Shock Vib 2018;283068.