Measurement xxx (xxxx) xxx
Contents lists available at ScienceDirect
Measurement journal homepage: www.elsevier.com/locate/measurement
The real-time vision measurement of multi-information of the bridge crane's workspace and its application

Qingxiang Wu a,b,c, Xiaokai Wang a,b,c,*, Lin Hua a,b,c, Gang Wei a,b,c
a Hubei Key Laboratory of Advanced Technology for Automotive Components (Wuhan University of Technology), Wuhan 430070, China
b Hubei Collaborative Innovation Center for Automotive Components Technology, Wuhan 430070, China
c School of Automotive Engineering, Wuhan University of Technology, Wuhan 430070, China
Article info

Article history: Received 22 August 2019; Received in revised form 26 September 2019; Accepted 23 October 2019; Available online xxxx

Keywords: Monocular vision; Vision measurement; Calibration; Auto-centering control; Obstacle avoidance
Abstract

The measurement of the workspace information of the bridge crane is the premise of its intelligent control and safety monitoring. Existing vision systems can obtain only very limited information, such as the swing angle of the payload. In this paper, a monocular vision measurement method is proposed to acquire multiple kinds of information: the payload off-centered angle, the payload rotation angle and the obstacle height. A hierarchical calibration method is designed to divide the large lifting height of the crane into multiple layers. At each layer, four markers fixed symmetrically on a plate are used to obtain the center position of the hook via Blob analysis and an ellipse fitting algorithm. After calibration, two fitting equations, one for the payload lifting height and one for the pixel coordinate of the vertical lifting center, are established from the pixel distance between two markers and the height of each layer. On the basis of these two fitting equations, the measurement models of the payload off-centered angle, the payload rotation angle and the obstacle height are established by combining them with the geometric relations in the crane's workspace. Finally, a crane prototype is built to validate the effectiveness of the vision measurement models. Moreover, strategies for auto-centering control and automatic obstacle avoidance are designed utilizing the vision measurement system. The corresponding experiments were carried out and achieved good performance. The research results of this paper have practical significance for the intelligent control and safety monitoring of cranes.

© 2019 Elsevier Ltd. All rights reserved.
1. Introduction

The bridge crane [1], consisting of a trolley, a cart and a lifting mechanism with a flexible wire rope to hang the payload, can transport heavy payloads quickly and plays an important role in workshops, warehouses, metallurgical plants, nuclear industries, waste-derived energy production and so on. However, in view of the crane's underactuated characteristic and diverse operating environments, there are many potential risk factors in its operation, such as undesired off-centered lifting of the payload, surrounding moving workers, and equipment with complicated structures. Thus, the crane system information shown in Fig. 1, including the payload lifting height, the payload off-centered angle, the payload rotation angle and the obstacle height, is essential for the automatic control of the crane, for instance, auto-centering control [2] during the payload lifting process, automatic rotation of the payload and automatic obstacle avoidance [3] throughout transportation.

(* Corresponding author at: School of Automotive Engineering, Wuhan University of Technology, Wuhan 430070, China. E-mail address: [email protected] (X. Wang).)

Many researchers have developed effective methods to measure the information of the bridge crane's workspace. The state observer [4-7], based on a mathematical model of the crane system, is a typical angle prediction method, but the difficulty is to accurately obtain the compensation error of the mathematical model. In addition, high-accuracy sensors have begun to be applied in crane control systems. For example, rotary encoders [8-11], accelerometers or inclinometers [12,13] and a payload cell sensor [14] have been adopted to measure the swing angle of the payload; the rotary encoder can also be used to measure the rotation angle [15] and the payload lifting height [11]. Vision technology [16], with the advantage of non-contact measurement, has been extensively applied in the industrial field, for example to estimate the three-dimensional (3D) position of an object [17], to track underwater robots [18,19] and aircraft [20], and to measure the size of equipment [21]. It has also been introduced to solve the problem of information acquisition for cranes and can be classified into three categories: monocular, binocular and
https://doi.org/10.1016/j.measurement.2019.107207
0263-2241/© 2019 Elsevier Ltd. All rights reserved.
Please cite this article as: Q. Wu, X. Wang, L. Hua et al., The real-time vision measurement of multi-information of the bridge crane’s workspace and its application, Measurement, https://doi.org/10.1016/j.measurement.2019.107207
Nomenclature

u, v: image coordinate system in pixels
X, Y: image coordinate system in millimeters
x, y, z: camera coordinate system
Xw, Yw, Zw: world coordinate system
p, q: points in the world coordinate system
p', q': imaging points in the image coordinate system
β: scale factor
ΔXw, ΔYw: actual distances between p and q
Δu, Δv: pixel distances between p' and q'
kx, ky: camera internal parameters
uO1, vO1: center of the imaging chip
R, t: rotation and translation matrices
f: focal length
D: vertical distance from the imaged object to the camera
Dn: maximum vertical distance from the imaged object to the camera
O3: lifting center of the payload
O4: intersection of the z-axis and the plane O3XwYw
e: horizontal distance between O3 and O4
eXw, eYw: components of e in the Xw-axis and Yw-axis directions
uO3, vO3: pixel coordinates of O3
ue, ve: pixel coordinates of the marker
ut, vt: pixel coordinates of the target center
ua, va: pixel coordinates of the center of the marker on the long payload
H: lifting height of the payload
Fig. 1. The measurement information of the bridge crane’s workspace.
multi-camera vision measurement methods. Binocular [22-24] and multi-camera vision measurement systems can easily construct 3D position information, but their image processing algorithms are usually complex because they must match features across two or more images. Various investigations using monocular vision have also been published to acquire the information of the crane's workspace, including the swing angle [25,26], the payload lifting height [27] and the distance [28] between the payload and the ambient equipment. A camera can track the payload swing angle through the wire rope according to color histogram matching [29] or binary images [30-32]. This method requires two cameras to estimate the spatial angle of the payload, which is expensive for
Hi: lifting height of each layer
Hob: obstacle height
Hs: height from the top of the obstacle
l: lifting rope length
Δhi: height difference between adjacent layers
ΔH: payload lifting height difference
S: side length of the target plate
Ssafe: safe distance between the payload and the obstacle
Slimit: limit safe distance
ΔS: pixel distance between two markers
A, B, C, D, E, F: parameters of the ellipse equation
O31, O32, O3n: lifting center at different heights
O'31, O'32, O'3n: pixel coordinates of the lifting center at different heights
ΔXwt, ΔYwt: off-centered distance between the target center and the vertical lifting center in the Xw-axis and Yw-axis directions, respectively
Δut3, Δvt3: pixel distance between the target center and the vertical lifting center in the Xw-axis and Yw-axis directions, respectively
θmax: maximum angle
θX, θY: payload off-centered angle in the direction of the trolley and cart
θr: rotation angle of the long payload
FT: lift cable tension
G0: threshold of the lift cable tension
the measurement system. Besides, the lifting height cannot be measured. Alternatively, the camera is mounted on the bottom of the trolley, facing the payload, to measure the information of the crane's workspace. The swing angle is obtained from two fiducial markers mounted on the top of the payload [33] or from vector code correlation (VCC) markers attached to the upper surface of a spreader [34,35]. The camera can obtain the payload rotation angle from two red markers [36]. The 3D workspace of the crane has also been reconstructed from a stereo pair of images acquired by a monocular camera at different crane positions [37]. It should be noted that current vision measurement methods, based on a stereo pair of images or a single image, can obtain specific information of the crane's workspace, namely the payload swing angle and the payload rotation angle. However, current vision systems also have disadvantages, such as limited measurement information and a single function. In this paper, a vision measurement method with a monocular camera is proposed to acquire multiple kinds of information of the crane's workspace from a single image, as shown in Fig. 1. A plate with four markers is installed on the hook to obtain the depth information of the image, i.e., the lifting height. Specifically, a hierarchical calibration strategy is designed to divide the lifting height of the payload into multiple layers. At each layer, the center pixel coordinates of the four markers are solved by Blob analysis and an ellipse fitting algorithm, and the pixel coordinate of the vertical lifting center is determined from the four symmetrical markers. After calibration, two fitting equations, one for the payload lifting height and one for the pixel coordinate of the vertical lifting center, are established from the pixel distance between two markers and the height of each layer.
Further, the measurement models of the payload off-centered angle, the payload rotation angle and the obstacle height are established by combining the fitting equations with the geometric relations in the crane's workspace. Finally,
the effectiveness of the vision measurement models is verified by experiments. The measurement models are also applied to auto-centering control and automatic obstacle avoidance.

2. Vision measurement method

2.1. Hierarchical calibration strategy of the monocular camera

The image coordinate system (u, v), the camera coordinate system (x, y, z) and the world coordinate system (Xw, Yw, Zw) are shown in Fig. 2. The object's position in the workspace is represented by the world coordinates, and its corresponding image is stored in the computer as row and column pixel arrays. When the camera coordinate system is introduced into the pinhole imaging model, the relationship between the image coordinates and the world coordinates is established. The imaging model relating the actual distance between points p and q to the pixel distance between the projected points p' and q' can be expressed as follows.
$$\beta\begin{bmatrix}\Delta u\\ \Delta v\\ 1\end{bmatrix}=\begin{bmatrix}k_x & 0 & u_{O1} & 0\\ 0 & k_y & v_{O1} & 0\\ 0 & 0 & 1 & 0\end{bmatrix}\begin{bmatrix}R & t\\ \mathbf{0}^{\mathrm T} & 1\end{bmatrix}\begin{bmatrix}\Delta X_w\\ \Delta Y_w\\ Z_w\\ 1\end{bmatrix}\qquad(1)$$
where β is the scale factor; ΔXw and ΔYw are the actual distances between p and q in the Xw-axis and Yw-axis directions, and Δu and Δv are the corresponding pixel distances along the row and column. kx, ky, uO1 and vO1 are the camera internal parameters. Ignoring the manufacturing error of the camera, that is, assuming the optical axis is perpendicular to the imaging chip, (uO1, vO1) is the center of the imaging chip, which is the origin of the image coordinate system (X, Y) in millimeters. Relative to the camera coordinate system, the rotation and translation of the world coordinate system are represented by the matrices R and t. f is the focal length, and D is the vertical distance from the imaged object to the camera. As shown in Fig. 3, when the camera is mounted vertically under the crane's trolley, the payload in the world coordinate system does not rotate with respect to the camera coordinate system.
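As a concrete check of Eq. (1), the projection of a world point through the intrinsic and extrinsic matrices can be sketched in a few lines; the numerical intrinsics below are illustrative placeholders, not the calibrated values of this paper.

```python
import numpy as np

def project(K, R, t, Pw):
    """Pinhole projection of Eq. (1): K is the 3x4 intrinsic matrix
    [[kx, 0, uO1, 0], [0, ky, vO1, 0], [0, 0, 1, 0]]; R, t are extrinsics."""
    T = np.vstack([np.hstack([R, t.reshape(3, 1)]),
                   [0.0, 0.0, 0.0, 1.0]])        # [[R, t], [0, 1]]
    p = K @ T @ np.append(Pw, 1.0)               # homogeneous image point
    return p[:2] / p[2]                          # divide out the scale factor beta

# Illustrative intrinsics: kx = ky = 800 px, principal point (640, 512)
K = np.array([[800.0, 0.0, 640.0, 0.0],
              [0.0, 800.0, 512.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
uv = project(K, np.eye(3), np.zeros(3), np.array([0.1, 0.2, 2.0]))
```

With the identity extrinsics above, the point (0.1, 0.2, 2.0) lands at pixel (680, 592), i.e., the principal point plus f·X/Z scaled by kx, ky.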
Fig. 3. Diagram of camera installation position.
The trolley moves along the x-axis and the cart moves along the y-axis. The Zw-axis is parallel to the optical axis z of the camera, so the rotation of the payload in the world coordinate system can be neglected. The payload position can be represented by its position relative to the lifting center O3. O4 is the intersection of the z-axis and the plane O3XwYw, and the horizontal distance between O3 and O4 is denoted by e. Eq. (1) can then be written as
$$\beta\begin{bmatrix}\Delta u\\ \Delta v\\ 1\end{bmatrix}=\begin{bmatrix}k_x & 0 & u_{O1} & 0\\ 0 & k_y & v_{O1} & 0\\ 0 & 0 & 1 & 0\end{bmatrix}\begin{bmatrix}\Delta X_w+e_{Xw}\\ \Delta Y_w+e_{Yw}\\ D\\ 1\end{bmatrix}\qquad(2)$$
where eXw and eYw represent the components of e in the Xw-axis and Yw-axis directions, respectively. According to Eq. (2), the pixel distances corresponding to eXw and eYw are $k_x e_{Xw}/D$ and $k_y e_{Yw}/D$, respectively, so the pixel coordinates of O3 are $(u_{O3}=k_x e_{Xw}/D+u_{O1},\; v_{O3}=k_y e_{Yw}/D+v_{O1})$. Eq. (2) can therefore be expressed in a simpler form as follows.
$$\beta\begin{bmatrix}\Delta u\\ \Delta v\\ 1\end{bmatrix}=\begin{bmatrix}k_x & 0 & u_{O3} & 0\\ 0 & k_y & v_{O3} & 0\\ 0 & 0 & 1 & 0\end{bmatrix}\begin{bmatrix}\Delta X_w\\ \Delta Y_w\\ D\\ 1\end{bmatrix}\qquad(3)$$
Camera calibration is adopted to calculate the parameters of Eq. (3). The traditional camera calibration process usually assumes that the vertical distance D is constant, and the parameters of the imaging model are solved from a number of equations obtained with a precise calibration template. However, when the payload lifting height is large, the depth of field must be considered. A hierarchical calibration method is therefore proposed in this paper to divide the lifting height range into multiple layers, which reduces
Fig. 2. The pinhole imaging model.
Fig. 4. Hierarchical diagram of the lifting height range.
the height of each layer as D decreases, as shown in Fig. 4. Di is the vertical distance from the imaged object to the camera. The lifting height of each layer, Δhi, is defined by

$$\Delta h_i = D_n(\cos\theta_c)^{\,n-i}(1-\cos\theta_c),\quad i=1,2,\ldots,n\qquad(4)$$

where $\theta_c\in(0^\circ,45^\circ)$ denotes the angle that determines the number of layers n, and Dn is the maximum vertical distance, that is, the maximum lifting height of the crane. According to Eq. (4), the number of layers n decreases as θc increases, and θc is chosen according to the specific application requirements: if the measurement accuracy of the lifting height does not satisfy the requirements, θc should be reduced. The payload lifting height Hi can then be calculated by the following equation.
$$H_i = D_n - D_i,\quad i=1,2,\ldots,n\qquad(5)$$
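The layer heights of Eq. (4) can be generated directly. With the values used later in the calibration (Dn = 1.72 m, θc = 20°, n = 12), the sketch below reproduces the roughly 5-10 cm layers of Table 2; small rounding differences from the printed table are expected.

```python
import math

def layer_heights(Dn_cm, theta_c_deg, n):
    """Eq. (4): dh_i = Dn * cos(theta_c)^(n - i) * (1 - cos(theta_c)).
    Layers nearest the camera (large i) are the thickest."""
    c = math.cos(math.radians(theta_c_deg))
    return [Dn_cm * c ** (n - i) * (1 - c) for i in range(1, n + 1)]

dh = layer_heights(172.0, 20.0, 12)   # Dn = 1.72 m expressed in cm
```

Summing the geometric series shows the layers jointly cover $D_n(1-\cos^n\theta_c)$ of the height range, which is why a smaller θc (more layers) covers more of it.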
Based on Eqs. (4) and (5), Eq. (3) can be rewritten as follows.
2
3 2 Dui kxi 6 7 6 b4 Dv i 5 ¼ 4 0 1 0
0 kyi 0
uO3
vO
3
1
2 3 3 DXwi 0 6 7 76 DY wi 7 0 56 7 ; 4 Hi 5 0 1
i ¼ 1; 2; 3:::n
ð6Þ
Eq. (6) is the imaging model relating pixel distance, actual distance and payload lifting height.

2.2. Image processing and vision location

To achieve vision location, a marker plate with four markers is designed (Fig. 5). The plate is a square with side length S. Four circular reflective stickers serving as markers are fixed on the corners of the plate, each tangent to two edges of the plate. The four markers are center-symmetric, and the center of symmetry is the hook center. In the image processing, the markers are extracted by Blob analysis, which comprises image acquisition, image segmentation, connectivity analysis and feature selection. More specifically, the marker plate is distinguished from the background by image segmentation based on an optimal threshold. The marker areas are identified by connectivity analysis, in which pixels with the same attributes are connected to form domains. The images of the four markers are then recognized by image features such as shape and area, as shown in Fig. 6. Furthermore, the pixel coordinates of each marker center are located by the ellipse fitting method. First, the edges of the marker image are obtained by the Canny edge detection algorithm, which smooths the edge using the gradient of the gray value determined by the first derivative. Then, the ellipse fitting algorithm is applied to the marker edge, from which the pixel coordinates of the marker center are solved.

Fig. 6. The image of the four markers identified by different colors.

With A, B, C, D, E and F denoting the parameters of the ellipse fitting formula, the general equation of the ellipse is
$$Ax^2+Bxy+Cy^2+Dx+Ey+F=0\qquad(7)$$
The algebraic distance y(A, B, C, D, E, F) from the edge pixels to the fitted elliptic curve can be expressed as

$$y(A,B,C,D,E,F)=\sum_{i=1}^{n}\left(Ax_i^2+Bx_iy_i+Cy_i^2+Dx_i+Ey_i+F\right)^2\qquad(8)$$
Based on the principle of least squares fitting, the parameters A, B, C, D, E and F are optimal when the function y is minimized. The pixel coordinates of the marker center (ue, ve) can then be calculated from these parameters.
$$u_e=\frac{BE-2CD}{4AC-B^2},\qquad v_e=\frac{BD-2AE}{4AC-B^2}\qquad(9)$$
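The least-squares conic fit of Eqs. (7)-(9) can be sketched with NumPy: the conic coefficients are taken as the right singular vector for the smallest singular value of the design matrix, and the center then follows from Eq. (9). This is a minimal sketch on synthetic edge points, without the Canny step.

```python
import numpy as np

def ellipse_center(xs, ys):
    """Fit A x^2 + B xy + C y^2 + D x + E y + F = 0 in the least-squares
    sense of Eq. (8) and return the center (u_e, v_e) from Eq. (9)."""
    M = np.column_stack([xs**2, xs * ys, ys**2, xs, ys, np.ones_like(xs)])
    _, _, Vt = np.linalg.svd(M)          # minimizer of ||M c|| under ||c|| = 1
    A, B, C, D, E, F = Vt[-1]
    denom = 4 * A * C - B**2
    return (B * E - 2 * C * D) / denom, (B * D - 2 * A * E) / denom

# Synthetic marker edge: ellipse centered at (300, 250), semi-axes 12 and 9 px
t = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
ue, ve = ellipse_center(300 + 12 * np.cos(t), 250 + 9 * np.sin(t))
```

On noise-free points the recovered center matches (300, 250) to machine precision; on real Canny edges the least-squares fit averages out pixel-level noise.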
The pixel coordinate (ut, vt) of the marker plate center is the intersection of the red lines shown in Fig. 7.

2.3. The measurement of the payload lifting height based on the monocular camera

To accurately measure the lifting height of the payload, the hierarchical calibration method is used to establish the fitting equations between the lifting height and the pixel distance of the diagonal markers. To make sure that the Zw-axis is parallel to the optical axis, a bracket with fixing and adjusting bolts is designed to install the camera on the cross beam of the trolley, as shown in Fig. 8. The marker plate is placed on a horizontal plane, such as the horizontal ground. When the marker plate is located in the upper part of the camera's field of view, the diagonal pixel distance obtained by the vision measurement system is shown in Fig. 9(a); when the marker plate is located in the lower part of the camera's field of view, the diagonal pixel distance is depicted in Fig. 9(b). If the difference between the diagonal pixel distances at the two locations is within two pixels, as shown in Fig. 9, the Zw-axis is approximately parallel to the optical axis; otherwise, the bolts should be adjusted again. In practical
Fig. 5. The schematic diagram of the marker plate and its installation on the hook.

Fig. 7. Pixel coordinates of the marker center (ue, ve) and the plate center (ut, vt).
calculated by Eq. (4), as shown in Table 2. Based on the lifting height of each layer, the marker plate is lifted starting from Dn = 1.72 m, and the pixel distance at each layer is obtained by the vision system, as shown in Fig. 10. In a similar manner, the camera is fixed at three different positions along the cross beam of the trolley, namely camera positions 1, 2 and 3. At each layer height, the payload lifting height and the pixel distance of the diagonal markers are obtained, as shown in Fig. 11(a). Fig. 11(b) displays three groups of calibration data of the payload lifting height and the pixel distance of the diagonal markers at camera position 3. The data show that the relation between lifting height and pixel distance is affected by neither the camera position nor the calibration run. A fitting equation for the payload lifting height can be obtained by numerical analysis of the pixel distance of the diagonal markers and the payload lifting height in Fig. 10.
Fig. 8. The camera and its bracket, with the fixing and adjusting bolts.
$$H = -0.009\,\Delta S^2 + 10.320\,\Delta S - 1621.710\qquad(10)$$
where ΔS is the pixel distance of the diagonal markers. Compared with Eq. (6), Eq. (10) contains a higher-order term of the pixel distance, because Eq. (6) does not consider lens distortion.

2.4. Establishing the measurement models of the information of the bridge crane's workspace
Fig. 9. The diagonal pixel distance: (a) above; (b) below.
application, the criterion of two pixels can be adapted to the required measurement accuracy; the smaller the allowed pixel difference, the better the alignment. Considering the external environment, the marker plate may not always be stable and horizontal, so the measured pixel distance may be distorted or unavailable, and a criterion for valid calibration data must be considered. The two diagonal pixel distances are denoted by S1 and S2, in units of pixels. In the calibration process, if the difference between the two diagonal pixel distances is greater than one pixel, the data should be remeasured, as shown for position 3 in Table 1. If the difference is less than one pixel, the marker plate is considered horizontal and the diagonal pixel distance is recorded, as shown for positions 1 and 2. In practical application, the one-pixel criterion can likewise be selected according to the required measurement accuracy; a smaller allowed pixel difference improves the measurement accuracy of the proposed vision system.
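The quadratic fit of Eq. (10) is an ordinary least-squares polynomial fit of the recorded (ΔS, H) pairs. The sketch below regenerates it from synthetic pairs sampled from Eq. (10) itself; the real coefficients come from the calibration data of Fig. 10.

```python
import numpy as np

# Synthetic calibration pairs: diagonal-marker pixel distance vs. lifting
# height, generated from Eq. (10) purely for illustration.
dS = np.array([250.0, 300.0, 350.0, 400.0, 450.0, 500.0, 550.0, 600.0])
H = -0.009 * dS**2 + 10.320 * dS - 1621.710

coeffs = np.polyfit(dS, H, 2)          # least-squares quadratic of Eq. (10)
H_pred = np.polyval(coeffs, 420.0)     # height at a 420 px diagonal distance
```

Because the synthetic pairs are exactly quadratic, `polyfit` recovers the three coefficients of Eq. (10) to machine precision; with real, noisy calibration data it returns the best least-squares quadratic instead.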
According to the analysis in the previous section, the payload position on the horizontal plane can be expressed by its position relative to the lifting center. As shown in Fig. 12, O31, O32 and O3n denote the lifting centers at different heights, and the corresponding pixel coordinates are expressed by O'31, O'32 and O'3n. The pixel coordinates of the vertical lifting center vary with Hi because the camera cannot be installed exactly above the payload center. Therefore, it is necessary to establish a fitting equation for the pixel coordinate of the lifting center. Fig. 13 shows the pixel coordinates of the lifting center at different payload lifting heights with the camera installed at positions 1, 2 and 3. The data in Fig. 13 show that the pixel coordinate of the lifting center depends on the installation position of the camera, and that the pixel coordinates of the lifting center decrease as the vertical distance decreases. A fitting equation for the pixel coordinates of the lifting center is obtained by numerical analysis of the data at camera position 3 as follows.
$$\begin{cases}u_{O3}=0.00007H^2-0.041H+544.175\\ v_{O3}=0.00005H^2-0.697H+682.196\end{cases}\qquad(11)$$
In the calibration process, when Dn = 1.72 m and θc = 20°, the number of layers n is twelve, and the lifting height of each layer can be
The off-centered distance between the target center and the vertical lifting center can be expressed as follows.
$$\begin{cases}\Delta X_{wt3}=\dfrac{S}{\Delta S}\,\Delta u_{t3}\\[4pt]\Delta Y_{wt3}=\dfrac{S}{\Delta S}\,\Delta v_{t3}\end{cases}\qquad(12)$$

where ΔXwt3 and ΔYwt3 represent the off-centered distances between the marker plate center and the vertical lifting center in the Xw-axis and Yw-axis directions, respectively, and Δut3 and Δvt3 are the corresponding pixel distances. Further, the measurement models of the payload off-centered angles in the directions of the trolley and the cart, expressed by θX and θY, are respectively

$$\theta_X=\arctan\frac{\Delta X_{wt3}}{D},\qquad \theta_Y=\arctan\frac{\Delta Y_{wt3}}{D}\qquad(13)$$

Table 1
The pixel distances of the diagonal markers with the marker plate at different positions.

            position 1   position 2   position 3
S1 (pixel)  253.282      252.968      252.044
S2 (pixel)  252.538      252.607      250.889
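Eqs. (12) and (13) amount to one scale conversion and one arctangent; a small numeric sketch follows, with all pixel values illustrative rather than measured.

```python
import math

def off_centered_angles(S_mm, dS_px, du_px, dv_px, D_mm):
    """Eq. (12): metric offsets via the mm-per-pixel scale S / dS;
    Eq. (13): off-centered angles toward the trolley (X) and cart (Y)."""
    scale = S_mm / dS_px                       # mm per pixel at this height
    dXw, dYw = scale * du_px, scale * dv_px    # Eq. (12)
    theta_x = math.degrees(math.atan(dXw / D_mm))
    theta_y = math.degrees(math.atan(dYw / D_mm))
    return dXw, dYw, theta_x, theta_y

# 200 mm plate imaged over 400 px, offsets 40 px / -20 px, camera 1 m above
dXw, dYw, tx, ty = off_centered_angles(200.0, 400.0, 40.0, -20.0, 1000.0)
```

Here the 0.5 mm/px scale turns the 40 px offset into 20 mm, giving an off-centered angle of about 1.15° in the trolley direction.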
Table 2
The lifting height of each layer.

n         1     2     3     4     5     6     7     8     9     10    11    12
Δhi (cm)  5.24  5.58  5.94  6.32  6.72  7.15  7.61  8.10  8.62  9.17  9.76  10.39
Fig. 10. Pixel distance of diagonal markers and lifting height.
Fig. 11. Pixel distance of diagonal markers and lifting height: (a) the camera mounted at different positions; (b) the camera mounted at the same position.

In detail, the pixel distance of the diagonal markers ΔS and the pixel coordinates (ut, vt) of the marker plate center are obtained from the collected image. Based on these, the payload lifting height H is acquired with Eq. (10), and the pixel coordinates of the lifting center (uO3, vO3) are then calculated by Eq. (11). Thus, the off-centered pixel distances Δut3 = ut - uO3 and Δvt3 = vt - vO3 are acquired. Finally, the off-centered distance and off-centered angle are obtained by Eqs. (12) and (13). The off-centered distance and off-centered angle in the direction of the trolley are shown in Fig. 14. Besides, a long payload, such as an angle steel or a steel beam, will rotate in the horizontal plane. To display the rotation angle of the payload in real time, two additional circular reflective stickers are attached to the two ends of the long payload, as shown in Fig. 15, with (ua, va) denoting the pixel coordinate of such a marker. The measurement model for the rotation angle of the long payload, expressed by θr, is given by

$$\theta_r=\arctan\frac{u_a-u_t}{v_a-v_t}\qquad(14)$$
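Eq. (14) in code, following the source's arctan form; `math.atan2` would widen the range beyond ±90° if needed, and the pixel values below are illustrative.

```python
import math

def rotation_angle_deg(ua, va, ut, vt):
    """Eq. (14): rotation of the long payload from an end-marker pixel
    (ua, va) and the plate-center pixel (ut, vt), in degrees."""
    return math.degrees(math.atan((ua - ut) / (va - vt)))

theta_r = rotation_angle_deg(110.0, 92.0, 100.0, 82.0)  # 10 px over 10 px
```

Equal 10 px offsets in the two pixel axes give a 45° rotation, as expected from the arctangent of their ratio.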
From the foregoing description, the lifting height can be obtained by the vision measurement method with the marker plate. If the marker plate is fixed on the upper surface of equipment or a payload, as shown in Fig. 16, the obstacle height Hob can be acquired in the same way.

2.5. Vision measurement system design and verification

A vision measurement system for the crane's workspace information is designed. An industrial camera captures images of the crane's workspace in real time, and a light source attached to the bottom of the trolley enhances the illumination of the workspace. The vision measurement system, with the interactive interface shown in Fig. 17, operates on an industrial personal computer (IPC) in the Microsoft Visual Studio environment by using
Fig. 12. Lifting center and corresponding pixel coordinates at different heights.
Fig. 13. The pixel coordinates of the lifting center at different lifting heights of the payload.

Fig. 16. The obstacle with the marker plate.
Fig. 17. The interactive interface of the vision measurement system.
Fig. 14. The off-centered distance and off-centered angle.

Fig. 15. Long payload with two additional markers.

Fig. 18. The total flowchart of the vision measurement process: collect the plate image; perform Blob analysis and ellipse fitting; calculate the pixel coordinates of the marker centers; calculate the lifting height of the payload; solve the pixel coordinate of the lifting center; acquire the off-centered angle, rotation angle and obstacle height.
Table 4
The maximum measurement error of the payload lifting height.

Measurement results   Measurement error (mm)
                      Error 1    Error 2
1                     9.998      11.238
2                     13.204     16.681
3                     14.521     16.683

Fig. 19. Vision measurement system and motion control system.

Table 3
The device parameters of the experiment system.

Device Name            Device Parameter
Basler acA1300-60gm    Resolution 1280 × 1024
Industrial lens        8 mm focal length
IPC                    Intel Core i7 processor
Siemens 1200 PLC       14 inputs / 10 outputs
Inclination sensor     Measurement range ±15°
Fig. 21. Vision measurement angle of the payload in the direction of the Xw-axis and Yw-axis.
the Windows 7 system. The overall flowchart of the vision measurement of the crane's workspace information is displayed in Fig. 18. As shown in Fig. 19, an experimental bridge crane system is established, consisting of the vision measurement system and the motion control system. The motion control system includes a programmable logic controller (PLC), inverters, alternating current (AC) motors, a joystick, the trolley and the lifting machinery. The parameters of the main equipment in the experimental system are listed in Table 3. To track the spatial position of the payload, including the swing angle and the rotation angle, an experiment on the measurement accuracy of the lifting height is first carried out. When the payload is lifted to 600 mm, 900 mm and 1200 mm, respectively, the payload lifting heights measured by the vision system based on Eq. (10) are shown in Fig. 20, and the maximum measurement errors of the payload lifting height are presented as Error 1 in Table 4. Besides, if the number of layers n is six, another fitting formula for the lifting height can be obtained; the corresponding maximum measurement errors are shown as Error 2 in Table 4. The results reveal that the measurement error of the lifting height is within 15 mm with the proposed hierarchical strategy, and that increasing the number of layers is beneficial to the measurement accuracy. A further experiment demonstrates the performance of the vision measurement system in tracking the angle of the payload. The swing angles in the directions of the Xw-axis and Yw-axis are shown in Fig. 21. Besides, for comparison, a contact inclination sensor fixed on the lifting wire rope measures the swing angle, as shown in Fig. 19. The swing angles measured by the vision system and the inclination sensor are shown in Fig. 22.
Fig. 20. The lifting height of the payload (measuring results 1, 2 and 3; lifting height in mm vs. time in s).
Fig. 22. Comparison of payload swing angles measured by the vision system and the inclination sensor (swing angle in ° vs. time in s; minimum swing angles per period marked for both sensors).
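As a rough sketch of the geometry behind such a comparison: once the pixel coordinates of the vertical lifting center and of the hook marker are known, the swing angles about the Xw- and Yw-axes follow from the horizontal displacement and the cable length. The function name, the pixel-to-mm scale and the small-angle geometry are illustrative assumptions, not values or formulas taken from the paper.

```python
import math

def swing_angles_deg(marker_px, center_px, mm_per_px, cable_len_mm):
    """Swing angles (deg) about the Xw- and Yw-axes from the hook marker's
    pixel offset relative to the vertical lifting center."""
    dx_mm = (marker_px[0] - center_px[0]) * mm_per_px
    dy_mm = (marker_px[1] - center_px[1]) * mm_per_px
    theta_x = math.degrees(math.atan2(dx_mm, cable_len_mm))
    theta_y = math.degrees(math.atan2(dy_mm, cable_len_mm))
    return theta_x, theta_y
```

For example, a 10 px offset at 1 mm/px on a 1 m cable corresponds to a swing angle of roughly 0.57°, the magnitude of the small angles discussed in the enlarged section of Fig. 22.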
Please cite this article as: Q. Wu, X. Wang, L. Hua et al., The real-time vision measurement of multi-information of the bridge crane’s workspace and its application, Measurement, https://doi.org/10.1016/j.measurement.2019.107207
Before the payload starts to swing, the angles acquired by the inclination sensor and the vision measurement system are both approximately 0°. As shown in the enlarged section of Fig. 22, the vision system can measure small angles with a smaller error than the inclination sensor. The maximum swing angle of the payload obtained by the inclination sensor is 2.57°, while that calculated by the vision measurement system is 2.498°. The minimum angles measured by the vision system and the inclination sensor in each swing period of the payload are shown by the green line and the purple line, respectively. The results reveal that the angle measured by the proposed vision system better reflects the attenuation rule of the payload swing. Thus, the vision measurement system can replace the contact sensor, and is even more accurate in angle measurement.

Fig. 23 shows the rotation angle of the long payload. During the measurement, the long payload is kept perpendicular to the trolley motion direction, which is realized by aligning the long payload perpendicular to the rail. The experimental results show that the maximum measurement error is 0.33°.

3. Applications of vision measurement system on crane

When cranes lift the payload off the ground, the payload may slide or swing sideways unexpectedly, which seriously threatens the safety of operators, cranes, payloads and surrounding equipment. Therefore, during crane operation the payload should be lifted vertically off the ground, which means that the lifting mechanism must be positioned directly above the payload center. In the traditional method, a limit switch is adopted to assist the operator, who eliminates the off-centered distance by repeatedly adjusting the positions of the cart and trolley. However, for lifting heights of tens or even hundreds of meters, it is difficult for crane operators to judge whether the lift cable is perfectly vertical before they start to lift.
Besides, this method is time-consuming. Alternatively, a dynamic model of the interaction between the payload and the ground can be developed to predict the dynamic effects of an off-centered lift and to prevent dangerous levels of sliding and swinging [38,39]. However, such a dynamic model is complex, considering the different payload masses in industrial applications.

The operating environment of the crane is also complex: the surrounding equipment has complicated shapes and varying heights. Therefore, if any accident occurs in the work area during crane operation, it may result in injury or even property damage. Automatic obstacle avoidance, as a safety assistance technology, can avoid collisions between the payload and obstacles. Obstacles can be avoided by a horizontal bypass with accurate crane workspace models established by sensors [40]. In addition, the state of the payload, including its edges and location, should be obtained by sensors.

Traditional crane control, including the elimination of the off-centered distance and the anti-collision function mentioned above, requires adding one or more sensors for each function. Therefore, realizing multiple functions requires the cooperation of multiple sensors. This involves the fusion and processing of multi-sensor data, which makes the control system quite complex. In addition, an increased number of measurement sensors affects the measurement stability of the system and increases the cost of the crane system. In view of the above problems, the vision system proposed in this paper can collect multiple kinds of information about the crane's workspace, so that auto-centering control and automatic obstacle avoidance can be achieved.

3.1. The auto-centering control for bridge crane

An auto-centering control system is designed based on the proposed vision measurement method to quickly eliminate the off-centered angle. The off-centered distance and off-centered angle of the payload are applied as the error signals of the auto-centering control system to calculate the velocity trajectory of the trolley. In detail, if the off-centered distance is too large, the auto-centering control system cuts off the lifting of the payload until the trolley is within a safe region. A proportional-integral-derivative (PID) controller sends commands to drive the trolley so as to reduce the off-centered angle. The block diagram of the vision measurement and auto-centering control system is shown in Fig. 24. Before the experiment of auto-centering control, the measurement errors of the off-centered angle and off-centered distance are verified.
When the payload is lifted to a certain height and remains stationary, the actual off-centered angle and off-centered distance of the payload under gravity are zero. In this case, the off-centered angle and off-centered distance obtained by the vision measurement system are shown in Fig. 25. The measurement data show that the error range of the off-centered angle is within 0.06°, and the measurement error of the off-centered distance is within 0.001 m.

The flowchart of the auto-centering control is shown in Fig. 26. A weight-measuring sensor is adopted to ensure that the camera can capture the marker plate image: when the lift cable tension FT is greater than the threshold G0, the rope is in tension. Fig. 27 shows the auto-centering control process in the direction of the trolley. When the lifting mechanism stops operating, the off-centered angle and off-centered distance detected by the vision system are 3.783° and 0.161 m, respectively. Based on this measured information, the theoretical velocity of the trolley is obtained by the auto-centering control system, as shown by the black line in Fig. 28. The red curve in Fig. 28, detected by an encoder installed at the end of the trolley's motor, is the actual velocity trajectory. The off-centered angle tracked by the vision system during the auto-centering control process is shown in Fig. 29.
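A minimal sketch of the PID loop described above, closing the loop on the off-centered angle. The gains, the time step and the first-order stand-in for the trolley dynamics are all assumptions for illustration, not the paper's values; only the initial angle of 3.783° is taken from the experiment.

```python
class PID:
    """Discrete PID controller driving the trolley from the angle error."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, error):
        self.integral += error * self.dt
        deriv = (error - self.prev_err) / self.dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Illustrative gains and a toy plant: the off-centered angle shrinks as the
# trolley moves toward the point above the payload center.
pid = PID(kp=0.5, ki=0.05, kd=0.01, dt=0.02)
angle = 3.783  # initial off-centered angle (deg), as measured in the experiment
for _ in range(2000):
    v_cmd = pid.step(0.0 - angle)  # setpoint: zero off-centered angle
    angle += v_cmd * 0.02          # toy kinematics: angle rate proportional to velocity
```

In the real system the controller output is a trolley velocity command sent through the PLC and inverter, and the loop is fed by the vision measurement rather than a simulated plant.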
Fig. 23. The rotation angle of the long payload measured by the vision system (rotation angle in ° vs. time in s).
Fig. 24. Block diagram of the vision measurement system and auto-centering control system (camera and image processor feed the crane auto-centering/PID controller, which commands the crane system).
Fig. 25. The off-centered angle (a) and off-centered distance (b) measured by the vision system while the payload hangs stationary (vs. time in s).
Fig. 26. Flowchart of the auto-centering control system of the crane.
Fig. 27. A whole view of the auto-centering control. (The image was generated by overlapping three snapshots taken during operation.)
Fig. 28. Velocity trajectory of the trolley in the auto-centering control process (theoretical vs. operating velocity, m/s vs. time in s).
Fig. 29. The off-centered angle measured by the vision system in the auto-centering control process (° vs. time in s).
The above experimental results demonstrate that the residual off-centered angle is 0.115°, and the time needed to reduce the off-centered angle with the PID controller is 1.446 s. Compared with the traditional way, the auto-centering system designed with the vision measurement method can effectively and quickly eliminate the off-centered angle of the payload, improving the working efficiency and operation safety of the crane. Furthermore, the translational motions of the bridge crane's cart and trolley are perpendicular to each other, and, as shown in Fig. 30, the off-centered angle and off-centered distance in 3D space can be decomposed into components in the directions of the cart and the trolley. Therefore, the cart's auto-centering control is similar to that of the trolley in practice.
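The decomposition just mentioned amounts to projecting the spatial off-centered vector onto the cart and trolley axes. A sketch, where the azimuth of the offset in the Xw-Yw plane is an illustrative parameter:

```python
import math

def decompose(off_dist_m, azimuth_deg):
    """Project the 3D off-centered distance onto the trolley (Xw) and
    cart (Yw) directions, given the azimuth of the offset in the Xw-Yw plane."""
    a = math.radians(azimuth_deg)
    return off_dist_m * math.cos(a), off_dist_m * math.sin(a)
```

Each component is then fed to the corresponding axis controller, which is why the cart's auto-centering control mirrors the trolley's.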
3.2. Obstacle recognition and automatic obstacle avoidance for bridge crane

A payload-lifting strategy is adopted to achieve automatic obstacle avoidance, because real-time measurements of the payload and obstacle states, including their heights and the distance between them, can be acquired by the proposed vision system. Moreover, when the shapes of the payload and obstacle can be neglected, the proposed automatic obstacle avoidance system can easily be applied in industry at low cost. The realization process of the automatic obstacle avoidance system is shown in Fig. 31, where the dashed line indicates the trajectory of the payload. To be specific, a limit safe distance Slimit from the payload center is preset. During crane operation, the vision system performs real-time identification of obstacles. When the obstacle enters the camera's field of view, the horizontal distance Ssafe from the payload to the obstacle is smaller than Slimit, and the payload height H is smaller than the obstacle height Hob, the payload is lifted by ΔH, which is obtained from the following equation.
ΔH = Hob + Hs − H    (15)
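Eq. (15), together with the guard conditions above (Ssafe < Slimit and H < Hob), gives the lifting decision. A minimal sketch with illustrative names:

```python
def lift_increment(h_ob, h_s, h, s_safe, s_limit):
    """Extra lift DH needed to clear the obstacle per Eq. (15); zero when
    the payload is far enough away or already above the obstacle."""
    if s_safe < s_limit and h < h_ob:
        return h_ob + h_s - h
    return 0.0
```

With the experiment's values (obstacle height 346.6 mm, safe height 50 mm, payload at 245 mm), the required lift is just over 150 mm, consistent with the payload being raised from 245 mm to 400 mm.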
where Hs is the safe height, i.e., the required clearance above the top of the obstacle.

The marker plate is placed on the upper surface of the obstacle. Since the marker plate must be installed and fixed, this method is most suitable for situations where the upper surfaces of the equipment and the payload are horizontal, such as a sheet warehouse. Specifically, the marker plate can be fixed to the surface of the equipment and the payload by a magnet, a fixed bracket, and so on.

Fig. 30. Decomposition diagram of off-centered angle and off-centered distance.
Fig. 31. Schematic diagram of automatic obstacle avoidance.
Fig. 32. The whole view of the automatic obstacle avoidance. (The image was generated by overlapping four snapshots taken during operation.)
Fig. 33. Height of the obstacle and the payload measured by the vision system in automatic obstacle avoidance (mm vs. time in s).

Fig. 32 shows the automatic obstacle avoidance process in the direction of the trolley. The height of the obstacle acquired by the vision system is 346.6 mm, with a measurement error within ±1.528 mm. Slimit and Hs are preset to 100 mm and 50 mm, respectively. When Ssafe becomes smaller than Slimit, the payload is lifted from 245 mm to 400 mm; the lifting height of the payload tracked by the vision system is shown by the blue curve in Fig. 33.

4. Conclusions

This paper proposes a simple but efficient vision measurement method to acquire multiple kinds of information about the bridge crane's workspace from a single image, such as the payload off-centered angle, the payload rotation angle and the obstacle height. The depth information of an image, namely the lifting height, can also be easily obtained using the proposed plate with four symmetrical markers. Experimental results show that the proposed vision measurement method can efficiently detect the multi-information of the bridge crane's workspace, and that automatic centering and automatic obstacle avoidance of cranes can be realized with the designed vision measurement method. Thus, the vision measurement system designed in this paper will benefit the intelligent control and safety monitoring of cranes. Future improvements to the proposed vision measurement system can be made by considering specific industrial application requirements.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgements

This work is supported by the Innovative Research Team Development Program of Ministry of Education of China (No. IRT_17R83), the National Natural Science Foundation of China (No. 51875428), and the 111 Project (No. B17034) of China.
References

[1] W. He, S.S. Ge, Cooperative control of a nonuniform gantry crane with constrained tension, Automatica 66 (2016) 146–154.
[2] A. Garcia, W. Singhose, A. Ferri, Three-dimensional dynamic modeling and control of off-centered bridge crane lifts, J. Dyn. Syst. Meas. Contr. 139 (4) (2017) 041005.
[3] B. Ma, Y. Fang, Y. Zhang, Switching-based emergency braking control for an overhead crane system, IET Control Theory Appl. 4 (9) (2010) 1739.
[4] H. Sano, K. Sato, K. Ohishi, et al., Robust design of vibration suppression control system for crane using sway angle observer considering friction disturbance, Electr. Eng. Jpn. 184 (3) (2013) 36–46.
[5] S. Ohtomo, T. Murakami, Estimation method for sway angle of payload with reaction force observer, IEEE International Workshop on Advanced Motion Control, IEEE, 2014, pp. 581–585.
[6] K. Sato, K. Ohishi, T. Miyazaki, Anti-sway crane control considering wind disturbance and container mass, Electr. Eng. Jpn. 193 (1) (2015) 21–32.
[7] C. Zhou, Z. Wang, J. Sun, Information fusion state observer of overhead crane system, Mach. Electron. (2015).
[8] A. Aksjonov, V. Vodovozov, E. Petlenkov, Three-dimensional crane modelling and control using Euler-Lagrange state-space approach and anti-swing fuzzy logic, Electr. Control Commun. Eng. 9 (1) (2015) 5–13.
[9] C.Y. Chang, T.C. Chiang, Overhead cranes fuzzy control design with dead zone compensation, Neural Comput. Appl. 18 (7) (2009) 749–757.
[10] K.T. Hung, Z.R. Tsai, Y.Z. Chang, Switched two-level H∞ and robust fuzzy learning control of an overhead crane, Math. Prob. Eng. 2013 (2013) 1–12.
[11] L.A. Tuan, J.J. Kim, S.G. Lee, et al., Second-order sliding mode control of a 3D overhead crane with uncertain system parameters, Int. J. Precis. Eng. Manuf. 15 (5) (2014) 811–819.
[12] B. Gao, Z. Zhu, J. Zhao, et al., A wireless swing angle measurement scheme using attitude heading reference system sensing units based on microelectromechanical devices, Sensors 14 (12) (2014) 22595–22612.
[13] J. Xu-Nan, Research on acquisition of the crane's swinging angle based on accelerometer, Equip. Manuf. Technol. (2010).
[14] M. Ebrahimi, M. Ghayour, S.M. Madani, et al., Swing angle estimation for anti-sway overhead crane control using load cell, Int. J. Control Autom. Syst. 9 (2) (2011) 301–309.
[15] G. Lee, H.H. Kim, C.J. Lee, et al., A laser-technology-based lifting-path tracking system for a robotic tower crane, Autom. Constr. 18 (7) (2009) 865–874.
[16] E.H. Trinklein, G.G. Parker, M.S. Zawisza, Active load damping of an extending boom crane using a low cost RGB-D camera, 2017 IEEE Sensors Applications Symposium (SAS), IEEE, 2017.
[17] L.Y. Xu, Z.Q. Cao, P. Zhao, et al., A new monocular vision measurement method to estimate 3D positions of objects on floor, Int. J. Autom. Comput. 14 (2) (2017) 1–10.
[18] T. Liu, L. Wan, X.W. Liang, A monocular vision measurement algorithm based on the underwater robot, Appl. Mech. Mater. 532 (2014) 165–169.
[19] X.G. Li, J.H. Liu, Monocular vision measurement of object pose based on dual quaternion, Packag. Eng. (2017).
[20] Z. Wang, Y. Wang, W. Cheng, et al., A monocular vision system based on cooperative targets detection for aircraft pose measurement, 2017, 012029.
[21] Q. Xu, J. Wang, R. Che, 3D mosaic method in monocular vision measurement system for large-scale equipment, Proceedings of SPIE – The International Society for Optical Engineering, 2010, 7544.
[22] Z.K. Chen, H.Y. Wang, W.H. Wang, et al., System design of crane robot based on binocular stereo vision, Appl. Mech. Mater. 303–306 (2013) 4.
[23] M.S. Rahman, J. Vaughan, Simple near-real time crane workspace mapping using machine vision, ASME Dynamic Systems & Control Conference, 2014.
[24] J. Smoczek, J. Szpytko, P. Hyla, Non-collision path planning of a payload in crane operating space, Solid State Phenom. 198 (2013) 6.
[25] Y. Yoshida, H. Tabata, Visual feedback control of an overhead crane and its combination with time-optimal control, IEEE/ASME International Conference on Advanced Intelligent Mechatronics, IEEE, 2008.
[26] E. Maleki, W. Singhose, S.S. Gurleyuk, Increasing crane payload swing by shaping human operator commands, IEEE Trans. Hum.-Mach. Syst. 44 (1) (2014) 106–114.
[27] B. He, Y. Fang, N. Sun, A practical visual positioning method for industrial overhead crane systems, 2017.
[28] M. Kajkouj, S.A. Shaer, K. Hatamleh, et al., SURF and image processing techniques applied to an autonomous overhead crane, Control & Automation, IEEE, 2016.
[29] C.Y. Chang, Lie H. Wijaya, Real-time visual tracking and measurement to control fast dynamics of overhead cranes, IEEE Trans. Ind. Electron. 59 (3) (2012) 1640–1649.
[30] H. Osumi, A. Miura, S. Eiraku, Positioning of wire suspension system using CCD cameras, IEEE/RSJ International Conference on Intelligent Robots & Systems, IEEE, 2005.
[31] L.H. Lee, C.H. Huang, S.C. Ku, et al., Efficient vision feedback method to control a three-dimensional overhead crane, IEEE Trans. Ind. Electron. 61 (8) (2014) 4073–4083.
[32] P. Hyla, J. Szpytko, Vision method for rope angle swing measurement for overhead travelling cranes – validation approach, Commun. Comput. Inf. Sci. 395 (2013) 370–377.
[33] K. Sorensen, H. Fisch, S. Dickerson, et al., A multi-operational-mode anti-sway and positioning control for an industrial bridge crane, IFAC Proc. Volumes 41 (2) (2008) 881–888.
[34] H. Kawai, Y. Choi, Y.B. Kim, et al., Measurement system design for sway motion based on image sensor, International Conference on Networking, Sensing and Control, IEEE, 2009, pp. 185–188.
[35] H. Kawai, Y.B. Kim, Y.W. Choi, Anti-sway system with image sensor for container cranes, J. Mech. Sci. Technol. 23 (10) (2009) 2757–2765.
[36] J. Huang, Z. Liang, Q. Zang, Dynamics and swing control of double-pendulum bridge cranes with distributed-mass beams, Mech. Syst. Sig. Process. 54–55 (2015) 357–366.
[37] J. Smoczek, J. Szpytko, P. Hyla, The application of an intelligent crane control system, IFAC Proc. Volumes 45 (24) (2012) 280–285.
[38] A. Garcia, W. Singhose, A. Ferri, Dynamics and control of off-centered crane lifts, Control Conference, IEEE, 2015, pp. 1–6.
[39] K. Peng, W. Singhose, Prediction and measurement of payload swing from off-centered crane lifts, 2015.
[40] R. Sato, Y. Noda, T. Miyoshi, et al., Operational support control by haptic joystick considering load sway suppression and obstacle avoidance for intelligent crane, Conference of the IEEE Industrial Electronics Society, IEEE, 2009.