Real-time drogue recognition and 3D locating for UAV autonomous aerial refueling based on monocular machine vision
Wang Xufeng a,b, Kong Xingwei a, Zhi Jianhui a, Chen Yong a, Dong Xinmin a,*
a School of Aeronautics and Astronautics Engineering, Air Force Engineering University, Xi'an 710038, China
b State Key Laboratory of Intelligent Technology and Systems, Tsinghua University, Beijing 100084, China
Received 26 January 2015; revised 26 March 2015; accepted 14 June 2015
KEYWORDS
Autonomous aerial refueling; Drogue 3D locating; Drogue attitude measurement; Drogue detection; Drogue recognition; Monocular machine vision
Abstract Drogue recognition and 3D locating is a key problem during the docking phase of autonomous aerial refueling (AAR). To solve this problem, a novel and effective method based on monocular vision is presented in this paper. Firstly, by augmenting computer vision with a red-ring-shape feature, a drogue detection and recognition algorithm is proposed that guarantees safety and ensures robustness to drogue diversity and to changes in environmental conditions, without using a set of infrared light emitting diodes (LEDs) on the parachute part of the drogue. Secondly, with camera lens distortion considered, a monocular vision measurement algorithm for drogue 3D locating is designed to ensure the accuracy and real-time performance of the system, with the drogue attitude provided as well. Finally, experiments are conducted to demonstrate the effectiveness of the proposed method. Experimental results show the performance of the entire system in contrast with other methods, validating that the proposed method can recognize and locate the drogue three-dimensionally, rapidly and precisely.
© 2015 Production and hosting by Elsevier Ltd. on behalf of CSAA & BUAA. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
* Corresponding author. Tel.: +86 29 84787400 801.
E-mail addresses: [email protected] (X. Wang), [email protected] (X. Kong), [email protected] (J. Zhi), [email protected] (Y. Chen), [email protected] (X. Dong).
Peer review under responsibility of Editorial Committee of CJA.
1. Introduction
Recently, unmanned aerial vehicles (UAVs) have been employed extensively in military and civil fields. As UAVs take on increasingly diverse missions, one of their biggest limitations is the lack of autonomous aerial refueling (AAR) capability, which results in a short loiter time. Consequently, AAR has gained much attention and become increasingly important for UAVs,1-5 and finding a solution to AAR has great socio-economic value.6 AAR has two hardware configurations, known as the "Boom and Receptacle" system and the "Probe and Drogue" system, respectively.7-9 Compared with the "Boom and Receptacle" refueling system, the "Probe and Drogue" refueling system is more economical and flexible. As a result, "Probe and Drogue" UAV autonomous aerial refueling (PD-UAV-AAR) has received much attention from many countries and researchers,10-14 and it is the configuration considered in this research.

Many research institutes are engaged in research on PD-UAV-AAR. One of the key problems in implementing AAR is how to obtain precisely the relative position between the UAV and the refueling drogue during the close proximity operations of PD-UAV-AAR.10,11 Conventional relative navigation systems use GPS, which cannot guarantee signal reliability because of distortion in the final docking phase.12 With the development of optical sensors, measuring the relative position between the UAV and the drogue based on machine vision has become a research hotspot, by virtue of its high accuracy in close-range navigation, strong resistance to electromagnetic interference, and autonomy.13,14

Although machine vision-based methods have been employed in the docking phase of PD-UAV-AAR, it remains challenging to develop a robust, real-time drogue recognition and 3D locating method because of drogue diversity and changes in environmental conditions.10,11,13,14 In Refs. 10,11,13, the superior performance of light emitting diodes (LEDs) allowed researchers to set LEDs on the drogue as markers and then measure the relative position between the UAV and the drogue by acquiring the spatial locations of the markers. However, this is a difficult task owing to the loss of information caused by the projection of the 3D world onto 2D images. In addition, LEDs set on the drogue inevitably add power-supply wires to the drogue, creating a great potential danger for PD-UAV-AAR. Although cameras have been utilized in vision-based measurement methods,15,16 camera lens distortion introduces nonlinear systematic errors and must be accounted for in the entire measurement system.17 Furthermore, given the high sampling rate required to measure the drogue movement during the docking phase of PD-UAV-AAR, a large amount of data must be processed in time to ensure the real-time performance of the system, which demands a vision-based measurement scheme with a simple structure and high operation speed.

Moreover, the method should adapt to real situations in which airflow is present. A NASA flight test of an aerial refueling system showed that, under airflow, the drogue randomly meandered along the horizontal and vertical axes by as much as its real diameter (approximately 2 ft, namely about 61 cm).18 This valuable information indicates that frontal images of the drogue can reasonably be expected during the docking phase of the aerial refueling task.18-20 Compared with Ref. 18, a stronger airflow and a wider drogue movement range are considered in our method.

Motivated by the above discussion, a novel and effective drogue recognition and 3D locating method for the docking phase of PD-UAV-AAR is proposed.
Firstly, in order to guarantee safety and ensure robustness to drogue diversity under changing environmental conditions, the approach incorporates prior domain knowledge, using a high-reflection red-ring-shape feature set on the parachute part of the drogue instead of LEDs, to achieve optimal drogue recognition and 3D locating performance. Secondly, with the camera lens distortion taken into account, a monocular vision-based measurement method for drogue 3D locating is designed on the basis of the pinhole imaging theory and a camera calibration model; its simple structure and high operation speed ensure the accuracy and real-time performance of the system. Thirdly, in comparison with Ref. 19, a practical implementation that considers the effect of airflow is provided to further test the proposed method in real-world experiments: a plausible drogue 20 cm in diameter is moved randomly along the horizontal and vertical axes by 2 m, ten times the plausible drogue diameter, and moved continuously toward the camera for 10 m until it reaches the camera. Finally, the camera calibration results and their accuracy analysis are presented, and the drogue recognition and 3D locating results and measurement analysis are reported, together with a comparison between the proposed method and competing methods.

Our contributions can be summarized as follows.

(1) A novel framework for drogue recognition and 3D locating in PD-UAV-AAR that combines domain knowledge and monocular vision.
(2) A novel drogue detection and recognition method that guarantees safety and identifies the drogue position regardless of changes in environmental conditions.
(3) An efficient monocular vision measurement method that ensures the performance of drogue 3D locating and attitude measurement.
2. System design
We present a method based on monocular machine vision for drogue recognition and 3D locating during the docking phase of PD-UAV-AAR that offers safety, robustness, accuracy and real-time performance. Firstly, the vision-based drogue detection and recognition algorithm takes as input the image acquired from the camera fixed on the UAV, estimates the red-ring feature region of the drogue, and outputs the center of the drogue feature region in the image in real-time. Then, with the aid of this center, the relative spatial position between the drogue and the camera is obtained by the drogue 3D locating algorithm in real-time. Meanwhile, the attitude of the drogue is acquired in real-time with high accuracy. Fig. 1 shows the flowchart of the vision-based drogue recognition and 3D locating algorithm.

Fig. 1 Flowchart of vision-based drogue recognition and 3D locating algorithm.
3. Algorithm description
3.1. Drogue detection and recognition
By augmenting the computer vision with the red-ring-shape feature, the drogue detection and recognition algorithm guarantees safety and ensures robustness to drogue diversity and to changes in environmental conditions, without using a set of infrared LEDs on the parachute part of the drogue. The algorithm has three key components: (1) color feature detection and recognition; (2) ring feature detection and recognition; (3) calculation of the center (in the image plane) of the red-ring-shape feature of the drogue. The flowchart of the drogue detection and recognition algorithm is shown in Fig. 2.

Fig. 2 Flowchart of drogue detection and recognition algorithm.

Prior research in object detection and recognition has shown that performance improves when domain knowledge is considered. To constrain the scope of the vision-based PD-UAV-AAR problem, we incorporate prior domain knowledge in our approach to achieve the optimal recognition and 3D locating performance of the refueling drogue during the docking phase of PD-UAV-AAR.21 The high-reflection red-ring-shape feature is set on the parachute part of the drogue following the principles of concision, separability and reliability. Fig. 3 shows the drogue with the high-reflection red-ring-shape feature for PD-UAV-AAR. In Fig. 3, the inner and outer edges of the red-ring feature region of the drogue are labeled "Inner loop" and "Outer loop", respectively.

Fig. 3 Drogue with red-ring-shape feature for PD-UAV-AAR.

The steps of the drogue detection and recognition are described as follows.
Step 1. Red color feature detection and recognition. The images of the drogue received from the camera are converted from the RGB color space to the HSV color space. In this way, the red color feature region of the drogue can be perceived through a color phase filter followed by image binarization.

Step 2. Ring feature detection and recognition. In order to eliminate the noise remaining in the color detection and recognition results, improve the efficiency of drogue detection and recognition, and detect the center of the refueling drogue correctly, we estimate the ring-shape feature of the drogue according to the following two principles.

Principle 1. Theoretically, the centers of the "Inner loop" and "Outer loop" acquired from edge detection and ellipse fitting of the feature region should coincide. Considering the errors of detection and fitting, we set a threshold on the distance between the two centers.

Principle 2. Considering the deflection of the drogue, the ratio of the long axes of the "Inner loop" and "Outer loop" acquired from edge detection and ellipse fitting of the feature region should lie within a given range.

The threshold and range used for ellipse fitting were obtained from a great many experiments, in which we found that the system detects and recognizes the drogue accurately in real-time when the center-distance threshold is about twenty pixels and the axis-ratio bound is about 0.7. These values are used throughout this paper, and the experimental results in Section 4 validate their accuracy and effectiveness.

Step 3. Calculating the center (in the image plane) of the red-ring-shape feature. Using least-squares ellipse fitting on the feature, we obtain the ellipse region where the feature of the refueling drogue is located, and then compute the center of this ellipse region, that is, the center of the red-ring feature region of the drogue.

From the above analysis, it can be seen that the drogue can be detected and recognized by machine vision successfully. Through the three steps above, we not only estimate the red-ring feature region of the refueling drogue correctly, but also obtain the center of the refueling drogue in the image in real-time.
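Section 4.2 states that the system is implemented in C++ with OpenCV, but no listing is given. The following is a minimal sketch of Steps 1-3 under our reading of the text: the 20-pixel center threshold and the ~0.7 axis-ratio bound come from the paragraph above, while the HSV bounds, the contour filtering, and the function name are our own assumptions, not the authors' code.

```cpp
// Minimal sketch of Steps 1-3 (our reading; HSV bounds are assumptions,
// the 20 px / 0.7 constants come from the text above).
#include <opencv2/opencv.hpp>
#include <optional>
#include <vector>

std::optional<cv::Point2f> detectDrogueCenter(const cv::Mat& frameBgr) {
    // Step 1: RGB -> HSV, hue filter for red, then binarize.
    cv::Mat hsv, maskLo, maskHi, mask;
    cv::cvtColor(frameBgr, hsv, cv::COLOR_BGR2HSV);
    // Red wraps around hue = 0, so combine two hue ranges (assumed bounds).
    cv::inRange(hsv, cv::Scalar(0, 80, 60),   cv::Scalar(10, 255, 255), maskLo);
    cv::inRange(hsv, cv::Scalar(170, 80, 60), cv::Scalar(180, 255, 255), maskHi);
    mask = maskLo | maskHi;

    // Edge/contour extraction; a filled ring yields an outer boundary and
    // an inner hole boundary, i.e. the "Outer loop" and "Inner loop".
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_LIST, cv::CHAIN_APPROX_NONE);

    std::vector<cv::RotatedRect> ellipses;
    for (const auto& c : contours)
        if (c.size() >= 20)  // fitEllipse needs >= 5 points; 20 rejects specks
            ellipses.push_back(cv::fitEllipse(c));

    // Step 2: ring test over candidate ellipse pairs.
    const double kCenterThreshPx = 20.0;  // Principle 1 threshold (from text)
    const double kAxisRatioBound = 0.7;   // Principle 2 bound (from text)
    for (size_t i = 0; i < ellipses.size(); ++i) {
        for (size_t j = 0; j < ellipses.size(); ++j) {
            if (i == j) continue;
            const cv::RotatedRect &inner = ellipses[i], &outer = ellipses[j];
            double aIn  = std::max(inner.size.width, inner.size.height);
            double aOut = std::max(outer.size.width, outer.size.height);
            if (aIn >= aOut) continue;                    // inner must be smaller
            if (cv::norm(inner.center - outer.center) > kCenterThreshPx)
                continue;                                 // Principle 1
            if (aIn / aOut < kAxisRatioBound) continue;   // Principle 2 (our reading)
            // Step 3: center of the fitted ring feature.
            return outer.center;
        }
    }
    return std::nullopt;  // no ring found in this frame
}
```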
3.2. Drogue 3D locating
Considering the camera lens distortion, the monocular vision measurement algorithm for drogue 3D locating is designed to ensure the accuracy and real-time performance of the system. The algorithm is derived from the pinhole imaging theory and the camera calibration model, with the distortion of the camera lens taken into account. The flowchart of the drogue 3D locating algorithm is shown in Fig. 4.

Fig. 4 Flowchart of drogue 3D locating algorithm.
The camera calibration model, with the camera lens distortion considered, can be described as follows. Generally, the geometric model of an optical camera is the perspective camera model (see Fig. 5). As shown in Fig. 5, OcXcYcZc is the camera frame; p, the image of P, is the point at which the straight line through P and Oc intersects the image plane π; the 3D point Oc is the center or focus of projection; the line through Oc perpendicular to the image plane π is the optical axis Zc; the distance between π and Oc is the focal length f; oc, named the principal point or image center, is the intersection of π and Zc.

Fig. 5 Perspective camera model.

Define the intrinsic parameters fc, e, αc and kc as follows.

(1) Focal length. The focal length in pixels is stored in the 2×1 vector fc.
(2) Principal point. The principal point coordinates are stored in the 2×1 vector e.
(3) Skew coefficient. The skew coefficient defining the angle between the x and y pixel axes is stored in the scalar αc.
(4) Distortions. The image distortion coefficients (radial and tangential) are stored in the 4×1 vector kc.

Let P be a point in space with coordinate vector [Xc, Yc, Zc] in the camera reference frame, projected onto the image plane according to the intrinsic parameters (fc, e, αc, kc). Let x_n be the normalized (pinhole) image projection; then

$$\mathbf{x}_n = \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} X_c/Z_c \\ Y_c/Z_c \end{bmatrix} \tag{1}$$

Considering the lens distortion, the new normalized point coordinate x_d can be described as

$$\mathbf{x}_d = \begin{bmatrix} x_d(1) \\ x_d(2) \end{bmatrix} = \left(1 + k_c(1)r^2 + k_c(2)r^4\right)\mathbf{x}_n + \mathrm{d}\mathbf{x} \tag{2}$$

where $r^2 = x^2 + y^2$ and dx is the tangential distortion vector:

$$\mathrm{d}\mathbf{x} = \begin{bmatrix} 2k_c(3)xy + k_c(4)\left(r^2 + 2x^2\right) \\ k_c(3)\left(r^2 + 2y^2\right) + 2k_c(4)xy \end{bmatrix} \tag{3}$$

Usually, the tangential distortion is far smaller than the radial distortion, so the tangential distortion vector dx can be omitted when the required accuracy permits. Once the distortion is applied, the final pixel coordinates (xp, yp) of the projection of P on the image plane can be expressed as

$$\begin{bmatrix} x_p \\ y_p \end{bmatrix} = \begin{bmatrix} f_c(1)\left(x_d(1) + \alpha_c x_d(2)\right) + e(1) \\ f_c(2)x_d(2) + e(2) \end{bmatrix} \tag{4}$$

Therefore, the pixel coordinate vector (xp, yp) and the normalized (distorted) coordinate vector x_d are related through Eq. (5):

$$\begin{bmatrix} x_p \\ y_p \\ 1 \end{bmatrix} = \begin{bmatrix} f_c(1) & \alpha_c f_c(1) & e(1) \\ 0 & f_c(2) & e(2) \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x_d(1) \\ x_d(2) \\ 1 \end{bmatrix} = \mathbf{K}\begin{bmatrix} x_d(1) \\ x_d(2) \\ 1 \end{bmatrix} \tag{5}$$
where K is known as the camera matrix. Therefore, with the aid of the center of the red-ring feature region of the drogue obtained from drogue detection and recognition, the relative spatial position between the drogue and the camera can be measured based on the optimized camera calibration model.
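Eqs. (1)-(5) transcribe directly into code. The sketch below is a plain C++ rendering of the forward model (camera-frame point to pixel coordinates) with the parameter layout fc, e, αc, kc defined above; the struct and function names are ours, not from the paper.

```cpp
// Direct transcription of Eqs. (1)-(5): camera frame -> pixel coordinates.
#include <array>

struct Intrinsics {
    double fc[2];   // focal length in pixels (Eq. (4))
    double e[2];    // principal point
    double alpha_c; // skew coefficient
    double kc[4];   // kc[0..1] radial, kc[2..3] tangential (Eqs. (2)-(3))
};

std::array<double, 2> projectPoint(const Intrinsics& in,
                                   double Xc, double Yc, double Zc) {
    // Eq. (1): normalized pinhole projection.
    double x = Xc / Zc, y = Yc / Zc;
    double r2 = x * x + y * y;

    // Eq. (3): tangential distortion vector dx.
    double dx0 = 2 * in.kc[2] * x * y + in.kc[3] * (r2 + 2 * x * x);
    double dx1 = in.kc[2] * (r2 + 2 * y * y) + 2 * in.kc[3] * x * y;

    // Eq. (2): radial distortion applied to the normalized point.
    double radial = 1 + in.kc[0] * r2 + in.kc[1] * r2 * r2;
    double xd0 = radial * x + dx0;
    double xd1 = radial * y + dx1;

    // Eqs. (4)/(5): apply the camera matrix K.
    return { in.fc[0] * (xd0 + in.alpha_c * xd1) + in.e[0],
             in.fc[1] * xd1 + in.e[1] };
}
```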
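Going the other way, from the detected ring center to the relative position, is what the 3D locating step needs. The paper mentions solving on the calibration model with a trust-region dogleg algorithm (Section 4.3) but does not spell out the formulation, so the sketch below is only one plausible scheme and not the authors' exact method: undistort the center by fixed-point iteration (inverting Eqs. (2)-(3)), then recover depth from the known 20 cm outer-ring diameter. It reuses the Intrinsics struct from the previous sketch.

```cpp
// Hedged sketch: invert Eqs. (2)-(5) for the ring center, then take depth
// from the known outer-ring diameter (20 cm in the experiments).
struct Position3D { double Xc, Yc, Zc; };

Position3D locateDrogue(const Intrinsics& in, double up, double vp,
                        double majorAxisPx, double ringDiameterM) {
    // Invert Eqs. (5)/(4): pixel -> distorted normalized coordinates.
    double xd1 = (vp - in.e[1]) / in.fc[1];
    double xd0 = (up - in.e[0]) / in.fc[0] - in.alpha_c * xd1;

    // Invert Eq. (2) by fixed-point iteration (tangential term folded in).
    double x = xd0, y = xd1;
    for (int i = 0; i < 20; ++i) {
        double r2 = x * x + y * y;
        double radial = 1 + in.kc[0] * r2 + in.kc[1] * r2 * r2;
        double dx0 = 2 * in.kc[2] * x * y + in.kc[3] * (r2 + 2 * x * x);
        double dx1 = in.kc[2] * (r2 + 2 * y * y) + 2 * in.kc[3] * x * y;
        x = (xd0 - dx0) / radial;
        y = (xd1 - dx1) / radial;
    }

    // Depth from apparent size: a circle of diameter D at depth Zc images
    // with a major axis of approximately f * D / Zc (the diameter parallel
    // to the image plane is not foreshortened by tilt).
    double f  = 0.5 * (in.fc[0] + in.fc[1]);  // mean focal length, pixels
    double Zc = f * ringDiameterM / majorAxisPx;
    return { x * Zc, y * Zc, Zc };            // Eq. (1) inverted
}
```

In OpenCV the same undistortion step is available as cv::undistortPoints.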
3.3. Drogue attitude measurement
Since the parachute part of the refueling drogue is rotationally symmetric, we do not measure the roll angle of the drogue, but only the pitch angle θ and the yaw angle φ. The theory of drogue attitude measurement is as follows.

Fig. 6 φ–θ rotating mode.
As shown in Fig. 6, the coordinate system s (OXYZ) is stationary and the rotating coordinate system b (Oxyz) rotates around the y axis and the x axis successively, which is called the φ–θ rotating mode; the yaw angle φ and the pitch angle θ are defined according to these rotations. The rotating coordinate system b (Oxyz) turns into b1 (Ox1y1z1) and b2 (Ox1y2z2) in sequence. The rotation matrix C_s^b can be obtained through the φ–θ rotating mode, and any attitude of the drogue can be reached from the initial attitude, perpendicular to the camera, by using this mode. The rotation matrices C_s^{b1} and C_{b1}^{b2} are produced during the φ–θ rotating mode. Then we have

$$\mathbf{C}_s^b = \mathbf{C}_{b_1}^{b_2}\mathbf{C}_s^{b_1} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & \sin\theta \\ 0 & -\sin\theta & \cos\theta \end{bmatrix}\begin{bmatrix} \cos\varphi & 0 & -\sin\varphi \\ 0 & 1 & 0 \\ \sin\varphi & 0 & \cos\varphi \end{bmatrix} = \begin{bmatrix} \cos\varphi & 0 & -\sin\varphi \\ \sin\theta\sin\varphi & \cos\theta & \sin\theta\cos\varphi \\ \cos\theta\sin\varphi & -\sin\theta & \cos\theta\cos\varphi \end{bmatrix} \tag{6}$$

The rotation matrix C_s^b can also be obtained through the α–β–γ rotating mode shown in Fig. 7, with the corresponding parameters defined in Fig. 8. As shown in Fig. 8, α = ∠xOx′ is the angle between the long axis of the ellipse and the x axis in the image; the x′ axis and y′ axis are the major and minor axes of the ellipse; β = |arccos(b/a)| is the absolute value of the arccosine of the ratio of the minor and major axes of the ellipse captured when the circle is deflected; γ is an auxiliary angle. Note that the x2 axis is the same as the x1 axis, and the z1 axis is the same as the Z axis.

Fig. 7 α–β–γ rotating mode.
Fig. 8 Parameter definitions.

The rotation matrices C_s^{b′1}, C_{b′1}^{b′2} and C_{b′2}^{b} are produced during the α–β–γ rotating mode. Then the rotation matrix C_s^b can be expressed as

$$\mathbf{C}_s^b = \mathbf{C}_{b'_2}^{b}\mathbf{C}_{b'_1}^{b'_2}\mathbf{C}_s^{b'_1} = \begin{bmatrix} \cos\gamma & \sin\gamma & 0 \\ -\sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\beta & \sin\beta \\ 0 & -\sin\beta & \cos\beta \end{bmatrix}\begin{bmatrix} \cos\alpha & \sin\alpha & 0 \\ -\sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} \cos\alpha\cos\gamma - \sin\alpha\cos\beta\sin\gamma & \sin\alpha\cos\gamma + \cos\alpha\cos\beta\sin\gamma & \sin\beta\sin\gamma \\ -\cos\alpha\sin\gamma - \sin\alpha\cos\beta\cos\gamma & -\sin\alpha\sin\gamma + \cos\alpha\cos\beta\cos\gamma & \sin\beta\cos\gamma \\ \sin\alpha\sin\beta & -\cos\alpha\sin\beta & \cos\beta \end{bmatrix} \tag{7}$$

As Eq. (6) is equivalent to Eq. (7), we have

$$\begin{cases} \cos\theta\sin\varphi = \sin\alpha\sin\beta \\ \sin\theta = \cos\alpha\sin\beta \\ \cos\theta\cos\varphi = \cos\beta \end{cases} \tag{8}$$

Then, the pitch angle θ and the yaw angle φ can be measured as

$$\begin{cases} \theta = \arcsin(\cos\alpha\sin\beta) \\ \varphi = \arcsin\left(\dfrac{\sin\alpha\sin\beta}{\cos\theta}\right) \end{cases} \tag{9}$$

where α and β are acquired from the ellipse fitting of the feature region.
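Eq. (9) reduces to two lines of code. A minimal sketch, with α and β taken from the ellipse fit as defined in Fig. 8 (angles in radians; function name is ours):

```cpp
// Eq. (9): drogue pitch/yaw from the fitted ellipse parameters.
#include <cmath>

// alpha: angle between the ellipse's major axis and the image x axis;
// beta = |acos(b/a)| with a, b the major/minor axes (Fig. 8).
void attitudeFromEllipse(double alpha, double beta,
                         double& theta, double& phi) {
    theta = std::asin(std::cos(alpha) * std::sin(beta));                   // pitch
    phi   = std::asin(std::sin(alpha) * std::sin(beta) / std::cos(theta)); // yaw
}
```

For a head-on circle b = a, so β = 0 and both angles vanish; note that because β = |arccos(b/a)| is unsigned, a single ellipse fit alone leaves the sign of the deflection ambiguous.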
4. Experiments and discussion
In this section, experiments are conducted to demonstrate the effectiveness of the proposed method. The results and measurement accuracy of the drogue recognition and 3D locating are discussed in comparison with other methods, validating that the drogue can be recognized and located in 3D rapidly and precisely.
4.1. Experimental environment
In our experiment, we chose a common color industrial camera as the machine vision and image acquisition equipment. Based on a total of 16 images of a planar checkerboard acquired in the laboratory environment, the camera intrinsic parameters were obtained using the MATLAB calibration toolbox. The drogue 3D locating experiment was then conducted in the same laboratory environment, which proved the validity of the monocular vision measurement method for drogue 3D locating during the docking phase of PD-UAV-AAR. The plausible drogue model used in the drogue 3D locating is shown in Fig. 9; the outer loop of its red-ring-shape feature region is 20 cm in diameter. The platform of the drogue 3D locating is shown in Fig. 10.

Fig. 9 Drogue model with red-ring feature.
Fig. 10 Platform of drogue 3D locating.
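The paper obtains the intrinsics (fc, e, αc, kc) with the MATLAB calibration toolbox. As an illustrative stand-in only, not the authors' tooling, the same estimate can be produced with OpenCV's calibrateCamera, which uses the same radial/tangential model as Eqs. (2)-(3); the board size and square size below are assumptions.

```cpp
// Illustrative OpenCV stand-in for the MATLAB toolbox calibration step:
// estimate the intrinsics from N checkerboard images.
#include <opencv2/opencv.hpp>
#include <vector>

void calibrateFromCheckerboards(const std::vector<cv::Mat>& images,
                                cv::Size boardSize,   // inner corners, e.g. {9, 6}
                                float squareSizeM,    // checker edge length, metres
                                cv::Mat& K, cv::Mat& distCoeffs) {
    std::vector<std::vector<cv::Point3f>> objectPoints;
    std::vector<std::vector<cv::Point2f>> imagePoints;

    // Planar board model: z = 0 grid of corner positions.
    std::vector<cv::Point3f> board;
    for (int r = 0; r < boardSize.height; ++r)
        for (int c = 0; c < boardSize.width; ++c)
            board.emplace_back(c * squareSizeM, r * squareSizeM, 0.f);

    for (const cv::Mat& img : images) {
        std::vector<cv::Point2f> corners;
        if (!cv::findChessboardCorners(img, boardSize, corners)) continue;
        cv::Mat gray;
        cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
        cv::cornerSubPix(gray, corners, cv::Size(11, 11), cv::Size(-1, -1),
            cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01));
        imagePoints.push_back(corners);
        objectPoints.push_back(board);
    }

    std::vector<cv::Mat> rvecs, tvecs;
    // Returns K (cf. Eq. (5)) and distortion coefficients [k1 k2 p1 p2 ...],
    // matching the radial/tangential model of Eqs. (2)-(3).
    cv::calibrateCamera(objectPoints, imagePoints, images[0].size(),
                        K, distCoeffs, rvecs, tvecs);
}
```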
4.2. Experimental procedure and results
The steps of the drogue 3D locating experiment are as follows. Firstly, we fixed the LED light source on the camera and set the LED to a high-intensity mode, which efficiently lights the whole field of the camera view. Then, considering the effect of the airflow, we moved the 20 cm diameter plausible drogue randomly along the horizontal and vertical axes by 2 m, simulating the true movement of the drogue when affected by airflow during the docking phase of PD-UAV-AAR. We moved the drogue toward the camera continuously, starting from 11.2 m in front of the camera. Meanwhile, in the OpenCV programming environment, images of the moving drogue were captured at a rate of 30 frames per second by the color industrial camera, so that the drogue could be detected, recognized and spatially located in real-time. The procedure and results of drogue 3D locating are shown in Figs. 11 and 12, respectively.

Fig. 11 Procedure of drogue 3D locating.

Fig. 12(a) shows the results of color detection and recognition with much less noise than Ref. 21, primarily owing to augmenting the computer vision with the red-ring-shape feature. The results of ring-shape detection and recognition are shown in Fig. 12(b), which indicates the good performance of our method in eliminating noise and the efficiency of drogue detection and recognition. Fig. 12(c) shows the drogue 3D locating results in real-time, whose accuracy is analyzed in Section 4.3. Note that the precision of the drogue 3D locating relies on the prior results of drogue detection and recognition: only by detecting and recognizing the drogue precisely in 2D can we locate it accurately in 3D.

Fig. 12 Results of drogue 3D locating.

The algorithm is implemented in C++ on a personal computer with a Pentium dual-core 3.3 GHz CPU and 2 GB RAM. For video with a resolution of 640×480, the time cost of our algorithm is about 30 ms per frame, which is suitable for real-time drogue recognition and 3D locating applications during the docking phase of PD-UAV-AAR.
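The ~30 ms per frame figure at 640×480 can be checked with a simple tick-count harness around the per-frame pipeline. The detectDrogueCenter call below refers to our hypothetical sketch in Section 3.1, and the camera index is an assumption.

```cpp
// Timing harness for the per-frame pipeline (cv::getTickCount based).
#include <opencv2/opencv.hpp>
#include <cstdio>
#include <optional>

std::optional<cv::Point2f> detectDrogueCenter(const cv::Mat& frameBgr); // Section 3.1 sketch

int main() {
    cv::VideoCapture cap(0);                  // color industrial camera (assumed index)
    cap.set(cv::CAP_PROP_FRAME_WIDTH, 640);   // 640x480, as in the experiments
    cap.set(cv::CAP_PROP_FRAME_HEIGHT, 480);

    cv::Mat frame;
    while (cap.read(frame)) {
        int64 t0 = cv::getTickCount();
        auto center = detectDrogueCenter(frame);
        double ms = (cv::getTickCount() - t0) * 1000.0 / cv::getTickFrequency();
        if (center)
            std::printf("center=(%.1f, %.1f)  %.1f ms/frame\n",
                        center->x, center->y, ms);
    }
    return 0;
}
```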
4.3. Accuracy analysis of drogue 3D locating and attitude measurement
To further test whether the precision of camera calibration satisfies the high-precision requirements of the PD-UAV-AAR system, the accuracy analysis of camera calibration is depicted in Fig. 13. Let (X, Y, Z) be the relative position between the drogue and the camera, where X, Y and Z are the lateral, vertical and forward axes, respectively, and let (X, Y, Z) take all four sign combinations of (±2, ±2, Z) and of (±1, ±1, Z), where Z = 0.1 m, 0.2 m, 0.3 m, ..., 10 m. The relative spatial position between the drogue and the camera is measured based on the optimized camera calibration model, applying the trust-region dogleg algorithm to the center of the red-ring feature region obtained from drogue detection and recognition. It can be seen from Fig. 13 that the error rates of the relative position between the drogue and the camera in both the lateral and vertical distances are less than 1.5%, with the measuring precision of drogue 3D locating at centimeter-level accuracy, which satisfies and is even superior to the precision requirement of the PD-UAV-AAR system.

In order to validate the accuracy of the drogue 3D locating, we collected drogue images over the range of the drogue movement and computed the relative position between the drogue and the camera using the drogue recognition and 3D locating method proposed in this paper. The performance of position measurement is shown in Table 1. The results suggest that, when the relative forward distance is less than 10 m, the measurement precision of the relative position between the drogue and the camera is better than 1 cm in the lateral, vertical and forward distances, which satisfies the stabilization, precision and real-time requirements for drogue recognition and 3D locating during PD-UAV-AAR and validates the feasibility of the proposed method.

To further evaluate the performance of our proposed method, we compare it with other works10,11,13 in Table 2, where the performance analysis among different drogue recognition and 3D locating methods is reported.
Fig. 13 Analysis of calibration accuracy.
Table 1 Performance of position measurement.

Distance   Maximum error (cm)   Minimum error (cm)   Mean error (cm)   Standard deviation (cm)
Lateral    0.6                  0.1                  0.277             0.314
Vertical   0.9                  0.1                  0.438             0.236
Forward    1.0                  0.1                  0.538             0.290
Table 2 Performance comparison between previous methods and the proposed method.

Method                      Maximum distance estimation error (cm)   Experiment type
                            Lateral    Vertical    Forward
Pollini et al.'s method10   9          5           >10               Virtual simulation
Mati et al.'s method11      <10        <10         <10               Virtual simulation
Xie and Wang's method13     5          5           15                Virtual simulation
The proposed method         0.6        0.9         1                 Physical experiment
As shown in Table 2, in contrast to Pollini et al.'s method,10 Mati et al.'s method11 improves the forward distance estimation accuracy but makes little difference in the lateral and vertical distances. Xie and Wang's method13 improves the results for both the lateral and vertical distances, but performs much worse in the forward distance. Chen and Stettner14 present a novel approach to drogue recognition and 3D locating; however, their work does not provide exact figures for the maximum estimation error of the lateral, vertical and forward distances. Similar to Ref. 14, we find that incorporating prior domain knowledge improves drogue recognition and 3D locating during the docking phase of UAV autonomous aerial refueling. We achieve maximum distance estimation errors of 0.6 cm laterally, 0.9 cm vertically and 1 cm forward when the relative forward distance is less than 10 m. Thus, the proposed method outperforms the competing approaches in drogue recognition and 3D locating.
Table 3 Performance of attitude measurement.

Performance   Maximum error (°)   Minimum error (°)   Mean error (°)   Standard deviation (°)
Pitch angle   0.9                 0                   0.033            0.539
Yaw angle     0.8                 0                   0.004            0.337
In order to validate the accuracy of the drogue attitude measurement, we measured pitch and yaw angles varying from −30° to 30° using the method proposed in this paper. The performance of attitude measurement is shown in Table 3. The results suggest that the measurement precision of the drogue attitude is better than 1° in both pitch and yaw angle, which verifies the effectiveness of the proposed method.
5. Conclusions
(1) The proposed method accomplishes drogue recognition and 3D locating simultaneously, which makes the scheme more efficient. Simulation and real-world experimental results show that the proposed method is accurate, fast and able to identify the drogue regardless of changes in environmental conditions.

(2) The camera lens distortion is considered in the monocular vision measurement algorithm for drogue 3D locating, which ensures the accuracy and real-time performance of the system.

(3) The closer the UAV is to the drogue, the lower the error. The errors in the estimation of the spatial location and attitude of the drogue are small enough to meet the requirements of PD-UAV-AAR.

(4) The proposed method has great potential for drogue recognition and 3D locating in real environments, and is competitive and attractive in practical applications. Further work is currently in progress.

(5) However, as the drogue recognition and locating algorithm estimates the drogue location in every frame independently, false alarms are unavoidable. We will address this by incorporating tracking information in our future study.
Acknowledgment
This study was supported by the National Natural Science Foundation of China (Nos. 61473307 and 61304120).
References
1. Nalepka JP, Hinchman JL. Automated aerial refueling: extending the effectiveness of unmanned air vehicles. Report No.: AIAA-2005-6005; 2005.
2. Valasek J, Gunnam K, Kimmett J, Junkins JL, Hughes D, Tandale MD. Vision-based sensor and navigation system for autonomous air refueling. J Guid Control Dyn 2005;28(5):979–89.
3. Tandale MD, Bowers R, Valasek J. Trajectory tracking controller for vision-based probe and drogue autonomous aerial refueling. J Guid Control Dyn 2006;29(4):846–57.
4. Campa G, Napolitano MR, Perhinschi M, Fravolini ML, Pollini M, Mammarella M. Addressing pose estimation issues for machine vision based UAV autonomous serial refuelling. Aeronaut J 2007;111(1120):389–96.
5. Wang HT, Dong XM, Xue JP, Liu JL. Dynamic modeling of a hose-drogue aerial refueling system and integral sliding mode backstepping control for the hose whipping phenomenon. Chin J Aeronaut 2014;27(4):930–46.
6. Dong XM, Xu YJ, Chen B. Progress and challenges in automatic aerial refueling. J Air Force Eng Univ Nat Sci Edn 2008;9(6):1–5 [Chinese].
7. Ding M, Wei L, Wang BF. Vision-based estimation of relative pose in autonomous aerial refueling. Chin J Aeronaut 2011;24(6):807–15.
8. Li BR, Mu CD, Wu BT. Vision based close-range relative pose estimation for autonomous aerial refueling. J Tsinghua Univ Sci Tech 2012;52(12):1664–9 [Chinese].
9. Mu CD, Li BR. Vision-based autonomous aerial refueling. J Tsinghua Univ Sci Tech 2012;52(5):670–6 [Chinese].
10. Pollini L, Campa G, Giulietti F, Innocenti M. Virtual simulation set-up for UAVs aerial refueling. Report No.: AIAA-2003-5682; 2003.
11. Mati R, Pollini L, Lunghi A, Innocenti M, Campa G. Vision-based autonomous probe and drogue aerial refueling. Proceedings of the IEEE 14th Mediterranean conference on control and automation; 2006 Jun 28–30. Piscataway (NJ): IEEE Press; 2006. p. 1–6.
12. Pollini L, Mati R, Innocenti M, Campa G, Napolitano M. A synthetic environment for simulation of vision-based formation flight. Report No.: AIAA-2003-5376; 2003.
13. Xie HW, Wang HL. Binocular vision-based short-range navigation method for autonomous aerial refueling. J Beijing Univ Aeronaut Astronaut 2011;37(2):206–9 [Chinese].
14. Chen CI, Stettner R. Drogue tracking using 3D flash lidar for autonomous aerial refueling. Proceedings of SPIE 8037, laser radar technology and applications XVI; 2011 Jun 8. Bellingham: SPIE; 2011. p. 1–11.
15. Han YX, Zhang ZS, Dai M. Monocular vision system for distance measurement based on feature points. Opt Precision Eng 2011;19(5):1110–7 [Chinese].
16. Zhou J, Gao YH, Liu CY, Zhang YC. Attitude calculation of single camera visual system based on adaptive algorithm. Opt Precision Eng 2012;20(12):2796–803 [Chinese].
17. Krinidis M, Nikolaidis N, Pitas I. 3D head pose estimation in monocular video sequences by sequential camera self-calibration. IEEE Trans Circuits Syst Video Technol 2009;19(2):261–72.
18. Hansen JL, Murray JE, Campos NV. The NASA Dryden AAR project: a flight test approach to an aerial refueling system. Report No.: AIAA-2004-4939; 2004.
19. Vachon MJ, Ray RJ, Calianno C. Calculated drag of an aerial refueling assembly through airplane performance analysis. Report No.: AIAA-2004-0381; 2004.
20. Ro K, Kamman JW. Modeling and simulation of hose-paradrogue aerial refueling systems. J Guid Control Dyn 2010;33(1):53–63.
21. Wang XF, Dong XM, Kong XW. Feature recognition and tracking of aircraft tanker and refueling drogue for UAV aerial refueling. Proceedings of the 25th IEEE Chinese control and decision conference; 2013 May 25–27. Piscataway (NJ): IEEE Press; 2013. p. 2057–62.

Wang Xufeng received the B.S. and M.S. degrees from Air Force Engineering University in 2011 and 2013, respectively, where he is currently pursuing the Ph.D. degree. He has been a visiting scholar with the Department of Computer Science and Technology, Tsinghua University, since 2014. His research interests include computer vision, autonomous aerial refueling, scene parsing and deep learning.
Kong Xingwei is a lecturer at Air Force Engineering University. He received the B.S. and Ph.D. degrees from Tsinghua University in 2005 and 2010, respectively. His current research interests include machine vision, autonomous aerial refueling, integrated navigation, flight control and intelligent robots.
Zhi Jianhui received the B.S. degree from Changchun University of Science and Technology in 2012, and the M.S. degree from Air Force Engineering University in 2014, where he is currently pursuing the Ph.D. degree. His research interests include computer vision, autonomous aerial refueling, dynamic control allocation and learning control.
Chen Yong is a lecturer at Air Force Engineering University. He received the B.S., M.S., and Ph.D. degrees from Air Force Engineering University in 2006, 2009 and 2012, respectively. His research interests include flight control theory and application.
565
Dong Xinmin is a professor at Air Force Engineering University. He received the B.S. and M.S. degrees from Northwestern Polytechnical University in 1983 and 1988, respectively, and then received the Ph.D. degree from Xi’an Jiaotong University in 1991. He has worked in the field of flight control for more than 20 years and his current research interests include modern control theory, machine vision and autonomous aerial refueling.