Robotics and Computer-Integrated Manufacturing 38 (2016) 52–66
Contents lists available at ScienceDirect
Robotics and Computer-Integrated Manufacturing journal homepage: www.elsevier.com/locate/rcim
A laser back-lighting based metal transfer monitoring system for robotic gas metal arc welding

Zhenzhou Wang*, Shenyang Institute of Automation, Chinese Academy of Sciences, China
Article history: Received 20 December 2014; Received in revised form 6 September 2015; Accepted 7 October 2015

Abstract
Gas metal arc welding (GMAW) can be considered the most widely used process in automated welding due to its high productivity. However, its solid-to-liquid metal transfer also complicates the process and causes fumes and spatters. It needs to be better understood and controlled for assured weld quality, improved process stability, and reduced fumes, spatters, and energy consumption. For automated robotic gas metal arc welding, automatic and efficient image processing algorithms are required to extract the metal transfer robustly. In addition, the machine vision apparatus in real welding environments should be compact and easy to handle. To this end, a simplified laser back-lighting based monitoring system is proposed in this paper to measure the metal transfer. To facilitate the image analysis, the arc light and the image are modeled based on physical laws. A double-threshold method is proposed to segment the image robustly, with a linear membership assigned to the fuzzy edge region. To compute the two thresholds accurately and simultaneously, the slope difference of the histogram distribution is calculated, and the gray-scale positions with the largest and second largest peaks are selected as the two thresholds respectively. Experimental results verify the effectiveness of the online monitoring system and the subsequent automatic image processing methods. © 2015 Elsevier Ltd. All rights reserved.
Keywords: GMAW; Modeling; Image analysis; Machine vision; Threshold selection; Segmentation
1. Introduction

Gas metal arc welding (GMAW) is the most widely used process in the automated welding industry. Because of its consumable electrode, GMAW has a higher productivity than another widely used arc welding process, gas tungsten arc welding (GTAW). However, although the consumable electrode increases productivity, its solid-to-liquid metal transfer complicates the process. In particular, the metal-transfer process plays an important role in determining the arc stability, the resultant weld quality, and the production of fumes and spatters. To better understand and control the GMAW process, the metal transfer process has been studied widely and many metal-transfer sensing methods have been proposed [1–10]. In [7], a technique named laser back-lighting is proposed to image the metal transfer with good quality and contrast. This back-lighting technique has been adopted by many other researchers to conduct metal transfer research [8–10]. In [8], the back-lighting apparatus is used to determine the surface tension of the metal transfer droplets. In [9], the back-lighting apparatus is used to image and classify the metal transfer modes by setting different welding parameters. In [10], the back-lighting apparatus is used to image and control the metal transfer.
* Corresponding author. E-mail address: [email protected]

http://dx.doi.org/10.1016/j.rcim.2015.10.004
0736-5845/© 2015 Elsevier Ltd. All rights reserved.
However, none of them addressed automatic image processing algorithms for automated robotic welding. Instead, past efforts focused on the relationship between the metal transfer and the welding parameters. Since machine vision has been widely recognized as an important component of the next generation of intelligent welding robots, automatic and robust image processing algorithms become essential for the potential applications of the related robotic welding techniques. To fill this research gap and prepare for the next generation of machine-vision-guided robotic welding, a series of image processing algorithms is proposed in this paper to automatically and robustly extract the useful metal transfer information. In [10], the authors developed a closed-loop control system that controls the metal transfer by the detaching of the droplet imaged by the back-lighting apparatus. Fig. 1 shows 16 adjacent images with both a droplet and undetached liquid metal from the video captured by the back-lighting apparatus developed in [10]. The detaching of the droplet is determined by the droplet's downward momentum, which is mainly affected by the current ejection level and the mass of the droplet. The trajectory of the droplet and the current phase can be matched. The authors tried to achieve one droplet per pulse by switching the current to the base current after the droplet's detachment. The position of the droplet was extracted in pixels and its trajectory was matched with the current phase to achieve one droplet per pulse. The conclusion was drawn that the
Fig. 1. Illustration of droplet formation/detaching and undetached liquid metal with images captured in [10].
droplet with greater size results in slower oscillation. Their monitoring system calculated the detaching of the droplet, whose frequency was mainly determined by the mass of the droplet. Unfortunately, the mass of the imaged droplet was not calculated at that time due to the lack of automatic and robust image processing algorithms. For more accurate control, both the size of the droplet and that of the undetached liquid metal are required, which no past research addressed adequately. In addition, the authors stated that "a significant limitation of the developed control system is the use of the imaging system. To actually apply the proposed metal transfer control principle to GMAW, a low cost compact sensor must be developed." As can be seen, the complex back-lighting apparatus is not suitable for practical robotic welding. To solve this issue, a simplified back-lighting based monitoring system was developed. In addition, automatic and robust image processing algorithms were proposed to extract the metal transfer information online. This paper is organized as follows. First, the proposed monitoring system is described in Section 2, and some typical images captured by it are shown. In Section 3, the captured images are analyzed. In Section 4, state-of-the-art segmentation methods are evaluated and analyzed. In Section 5, the double-threshold method is proposed to segment the droplet and the undetached liquid metal, and effective methods are developed to extract the metal transfer information. More experimental results are shown in Section 6 to demonstrate the effectiveness of the proposed monitoring system and image processing algorithms. Finally, conclusions are drawn and future work is discussed in Section 7.
2. Simplified laser back-lighting based monitoring system

Fig. 2 shows the diagram of the proposed monitoring system installed on a pulsed welding system powered by an adaptive pulse GMAW machine, whose welding current and voltage are shown in Fig. 3(a) and (b). A mild steel electrode wire with a diameter of 1.2 mm is used. The shielding gas is 95% argon and 5% CO2 with a flow rate of 17 L/min. The pulsed GMAW system controls the welding current and wire feed speed through the central controller and obtains the metal transfer information through the developed laser back-lighting system. A high speed camera connected to a high speed frame grabber is aimed orthogonally at the imaging plane and records the images in real time for online feedback control. A 20 mW laser centered at 685 nm with a 10° divergence angle is projected across the metal transfer process and reaches the imaging plane placed on the other side. As pointed out in [10], complex optical lenses are difficult to set up in practical welding environments. In addition, the complex optical lenses cannot avoid the adverse effect of the arc light, which defeats segmentation by a global threshold. Most importantly, effective image processing algorithms can extract the metal transfer information robustly whether or not the images are captured with the complex optical lenses. Hence, they are removed in the developed monitoring system. Without parallel rays, the imaged metal transfer is enlarged, and an off-line calibration is needed to compute a scalar that transforms the enlarged coordinates back to the real ones. The wire with known width W1 in mm is imaged at the imaging plane and its width W2 in pixels in the camera view is computed. The scalar is computed as W1/W2. During online monitoring, each computed width is multiplied by W1/W2 and each computed size/area is multiplied by (W1/W2)². A potential advantage of the divergent laser is that images with larger droplets are captured, which might increase the image processing accuracy. As shown in Fig. 2, the imaging plane is placed at a distance d2 = 300 mm from the welding torch, where the intensity of the arc radiation has been sufficiently reduced while the laser is still strong enough.
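The calibration arithmetic above is simple enough to sketch directly; widths scale linearly with the scalar W1/W2 and areas with its square. The wire and droplet measurements below are illustrative values, not data from the paper:

```python
def calibration_scalar(wire_width_mm, wire_width_px):
    """Compute the mm-per-pixel scalar W1/W2 from the imaged wire width."""
    return wire_width_mm / wire_width_px

def to_real_width(width_px, scalar):
    # Lengths and widths scale linearly with the calibration scalar.
    return width_px * scalar

def to_real_area(area_px, scalar):
    # Areas scale with the square of the scalar.
    return area_px * scalar ** 2

# Illustrative example: a 1.2 mm wire imaged as 30 px wide.
s = calibration_scalar(1.2, 30.0)   # 0.04 mm per pixel
print(to_real_width(50, s))          # a 50 px width -> 2.0 mm
print(to_real_area(200, s))          # a 200 px^2 area -> 0.32 mm^2
```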
The laser projector is placed at a distance d1 = 250 mm from the torch on the other side to ensure that the metal transfer can be fully imaged by the laser beam and that the beam is still strong enough when reaching the imaging plane. Three examples of typical metal transfer images are shown in Fig. 4(a)–(c). A droplet exists in (a) and the wire tapering is insignificant. In (b), there is no droplet and the wire tapering is still insignificant. A significant tapering is shown in (c). It is caused by
Fig. 2. Diagram of the proposed monitoring system.
Fig. 3. Welding Parameters of P-GMAW (a) Current waveform; (b) Voltage waveform.
the fact that the peak current has exceeded the transition current, which is considered the minimum current level needed to produce droplets no larger than the wire's diameter. The image looks noisy and the contrast is low. The effectiveness of the Gaussian weighted kernel moving filter in suppressing such noise has been verified in past research [11,12]. The Gaussian kernel is formulated below:
h(x, y) = (1/(σ√(2π))) Exp(−((x − μx)² + (y − μy)²)/(2σ²))   (1)
where μx and μy are the coordinates of the kernel center and σ is the standard deviation of the kernel. The filtered images are shown in Fig. 5(a)–(c). As can be seen, the fuzziness of the images is sufficiently reduced and the effectiveness of the filter is further justified.
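As a rough sketch of this filtering step, the kernel of Eq. (1) can be built and applied with a moving window; the kernel size and σ below are assumptions, and in practice a library routine such as `scipy.ndimage.gaussian_filter` would replace the explicit loop:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Build a 2-D Gaussian kernel centered on the window (cf. Eq. (1)),
    normalized so the filter preserves mean brightness."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def gaussian_filter(img, size=5, sigma=1.0):
    """Moving-window convolution with the Gaussian kernel (edge padding)."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * k)
    return out
```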
3. Image analysis

The physical model of the arc light is used to analyze the captured image. The surface brightness Ib of a specific position p on the imaging plane caused by the arc light is modeled by the following equations [13]:

I = I0/r²   (2)

Ib = I × N × cos α × C   (3)

I0 = (2hν³/c²) × 1/(e^(hν/kT) − 1)   (4)

where N is the surface normal at position p, α is the angle between the surface normal and the incident light, C denotes the color value, r is the distance between the arc light center and position p, ν is the spectral frequency of the arc light, h is Planck's constant, c is the speed of light, and k is Boltzmann's constant. T is the temperature of the arc light source, which is determined by the welding current. Since the welding current changes periodically, the temperature T changes accordingly. Consequently, the brightness Ib at position p changes from frame to frame in the time domain.

The brightness at different positions on the imaging plane varies with the distance r to the arc light center. Combining Eqs. (2)–(4), we get:

Ib = (2hν³/c²) × 1/(e^(hν/kT) − 1) × (N × C × d)/r³ = Cb/r³   (5)

where Cb is constant for one frame and varies from frame to frame with the value of T, and d is the distance from the arc light center to the imaging plane. From Eq. (5), it becomes obvious that the intensity distribution produced by the arc light on the captured image is inversely proportional to r³. In the image, the intensity distribution caused by the arc light can therefore be modeled as:

a(x, y) = Cb/[(x − x0)² + (y − y0)² + d²]^(3/2)   (6)

where (x0, y0) denotes the center of the arc light distribution, which may lie outside the image. Accordingly, the intensity distribution of the image impacted by both the laser and the arc light can be modeled as:

I(x, y) = p + Cb/[(x − x0)² + (y − y0)² + d²]^(3/2)   (7)
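To illustrate why the arc-light term makes the background uneven, Eq. (7) can be synthesized numerically. The parameter values p, Cb, d, (x0, y0) below are arbitrary illustrative choices, not measurements from the paper:

```python
import numpy as np

def arc_background(shape, p, Cb, d, x0, y0):
    """Intensity model of Eq. (7): laser baseline p plus an arc-light
    term that falls off as 1/r^3 from the arc center (x0, y0)."""
    ys, xs = np.indices(shape)
    r2 = (xs - x0) ** 2 + (ys - y0) ** 2 + d ** 2
    return p + Cb / r2 ** 1.5

# Illustrative parameters: the arc center lies outside the 100x100 image.
bg = arc_background((100, 100), p=30.0, Cb=5e6, d=40.0, x0=-20.0, y0=50.0)
print(bg.max() - bg.min())  # non-zero spread: the background is uneven
```

The spread between the brightest and darkest background pixels is exactly what makes a single global threshold unreliable over the whole image, motivating the ROI defined below.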
where p is a constant representing the intensity caused by the laser light. The gray-scale distribution of the area blocked by the droplet or the wire tip should ideally be 0. However, it is affected by the scattered laser and arc light, so it can be modeled as:

Ib(x, y) = β(x, y)I(x, y)   (8)

where β(x, y) denotes the light scattering coefficient at (x, y). From the above equations, there are two major difficulties for the segmentation problem: (1) the gray-scale distribution of the blocked area is a modification of that of the light-illuminated area, which means that a low threshold (close to 0) will not segment the droplet accurately; (2) the gray-scale distributions of these areas are uneven, which makes it difficult for a global threshold to segment the image accurately. Fortunately, the arc light distribution does not vary greatly at the deliberately selected imaging distance, where the arc light has already been attenuated greatly. Hence, the uneven intensity caused by the arc light in a small region will not cause segmentation by threshold selection to fail, which will be validated in the following sections. Thus, a region of interest (ROI) is defined for accurate segmentation of the droplet and undetached liquid metal.
Besides the uneven background, the image also looks fuzzy, which is mainly caused by the arc plasma refracting the laser. The refraction of the laser affects the sharpness of the edges and the gradient of the gray-scale at the edges of the droplet and the tapering. The variability of the arc plasma during arc welding causes variability of the pixel intensity and fuzziness at the edges. To deal with the fuzziness effectively, [12] and [14] proposed a second-order model to fit the detected edges and compute the position and size of the droplet more accurately than threshold segmentation methods [15–22]. In this paper, we introduce a linear membership function to deal with the fuzzy edge of the droplet and then propose a double-threshold segmentation method in Section 5.
4. Evaluation of state-of-the-art threshold selection methods

As the most widely used segmentation approach for gray-scale images, automatic threshold selection has been studied extensively in the past decades [15–22]. We choose the most popular methods to segment the enhanced images. As analyzed in the above section, a ROI is defined first. Considering the resolution of the captured image and the average size of the droplet in the image, we define the ROI as 50 × 50 around the center of the droplet. The following threshold selection methods are evaluated: the histogram shape method, the histogram bi-level function approximating method, Otsu's method, the entropy method, and the fuzzy entropy method. A typical droplet image is selected for segmentation by these five methods and the results are shown in Fig. 6. The results show that only the histogram shape method and the fuzzy entropy method could find an acceptable threshold, which is 45 for the selected droplet image. However, the histogram shape method is prone to falling into a local minimum for unsupervised segmentation, and the fuzzy entropy method has an undesirable computational complexity in finding the global maximum value for real-time application. Most importantly, none of these methods deals with the fuzzy edge of the image satisfactorily, since uncertainty exists in the edge region. In [13], a threshold selection method that performs significantly better than the other threshold selection methods has been proposed. In this paper, we make use of the threshold selection method proposed in [13] and the two thresholds implied at the concave valley in Fig. 6(a).
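Of the five evaluated methods, Otsu's is the easiest to sketch compactly. The implementation below is a generic textbook version, and the synthetic bimodal ROI merely stands in for the droplet image of Fig. 6:

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Otsu's method: choose the gray level that maximizes the
    between-class variance of the two resulting pixel classes."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    hist = hist.astype(float)
    total = hist.sum()
    levels = np.arange(bins)
    best_t, best_var = 0, -1.0
    for t in range(1, bins):
        w0 = hist[:t].sum() / total          # class probabilities
        w1 = 1.0 - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * hist[:t]).sum() / hist[:t].sum()
        mu1 = (levels[t:] * hist[t:]).sum() / hist[t:].sum()
        var = w0 * w1 * (mu0 - mu1) ** 2     # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Synthetic bimodal ROI: dark droplet pixels near 30, laser background near 120.
rng = np.random.default_rng(0)
roi = np.concatenate([rng.normal(30, 5, 800),
                      rng.normal(120, 10, 1700)]).clip(0, 255)
t = otsu_threshold(roi)
print(t)  # falls between the two modes
```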
5. Proposed algorithms

The objectives of the image processing are to (1) accurately identify the outlines or contours of the detached droplets and undetached liquid metal; (2) compute the characteristic parameters of the metal transfer image from the identified outlines. First, we need to determine whether a droplet exists in the image. If a droplet exists, we need to segment both the droplet and the undetached liquid metal and compute their characteristic parameters. If no droplet exists, we only need to segment the undetached liquid metal and compute its characteristic parameters. The most
Fig. 4. Typical examples of acquired images (a) insignificant tapering with a droplet; (b) insignificant taper without droplet; (c) significant tapering.
Fig. 5. Enhanced images (a) insignificant tapering with a droplet; (b) insignificant taper without droplet; (c) significant tapering.
Fig. 6. Segmenting a droplet by threshold segmentation methods; (a) Droplet ROI and threshold at 45 (valley) by histogram shape method; (b) Threshold at 42 (peak) by bi-level function approximation method; (c) Threshold at 48 (valley) by Otsu's method; (d) Threshold at 48 (peak) by entropy method; (e) Threshold at 45 (peak) by fuzzy entropy method.
Fig. 7. Illustration of thresholds computation based on ROD. (a) The droplet in the 30 × 30 ROI; (b) The computed ROD; (c) The distribution of sorted pixels in the ROD; (d) The computed slope differences.
challenging part of this work is to segment the fuzzy zone of the droplet or the liquid metal accurately.

5.1. Droplet detection

To detect whether there is a detached droplet and locate it if it exists, the image is first binarized using a rough threshold determined by offline analysis. As discussed in Section 3, the intensity of the arc radiation on the imaging plane fluctuates from frame to frame with the current; thus the threshold should vary accordingly. In addition, based on the model formulated by Eqs. (7) and (8) and the subsequent analysis, we need to define a ROI for the subsequent processing. We select the ROI as 200 × 100 around the center of the wire tip directly, based on the fixed imaging distance, imaging resolution, and welding process. The threshold is then formulated as follows:
Tg = To + (1 + (−1)^i)ΔT   (9)
where both To and ΔT are constants computed offline from the gray-scales of several frames, and i = 1, 2, 3, …, N is the index of the captured images. This global threshold is used to detect whether there is a droplet, not to segment it accurately.

5.2. Droplet segmentation by double threshold method

If a droplet is detected, a smaller 30 × 30 ROI is defined around the center of the detected droplet. To deal with the fuzziness of the edge region and identify the outline of the droplet accurately, we
Fig. 8. Illustration of thresholds computation based on slice for the droplet. (a) The slice in the 30 × 30 ROI; (b) The distribution of sorted pixels in the slice; (c) The distribution after eliminating pixels greater than 40; (d) The computed slope differences.
Fig. 9. Segmentation results of the droplet with thresholds computed based on slice. (a) The droplet in the 30 × 30 ROI; (b) The segmented small droplet; (c) The segmented large droplet; (d) The segmented fuzzy region.
Fig. 10. Illustration of the detected fuzzy zone and fitted boundary.
propose a double threshold method that finds an underestimate of the droplet using a relatively low threshold T2 and an overestimate using a relatively high threshold T1. The region between the two contours is referred to as the fuzzy region, where the effect of laser refraction is significant. While any pixel inside the inner contour or outside the outer contour is confidently labeled droplet or non-droplet respectively, it is uncertain whether a particular pixel in the fuzzy region is a droplet point or a non-droplet point. Instead of making a hard decision, we use a linear membership function to assign a membership to each pixel in the fuzzy region. The linear membership function is based on the Euclidean distance between the gray-scale Gi and the threshold T1 in the gray-scale domain and is formulated as:
Pi = (T1 − Gi)/(T1 − T2)   if T1 ≥ Gi ≥ T2
Pi = 1                     if T2 > Gi          i = 1, …, N   (10)
Pi = 0                     if Gi > T1
where N denotes the number of pixels in the ROI and Gi denotes the gray-scale value of the ith pixel in the fuzzy region. A region that contains the center region, the fuzzy region, and the laser-illuminated region is defined as the Region of Droplet (ROD). All pixels Oj (j = 1, …, M) in the ROD can be sorted by gray-scale. If the pixel index is taken as the x coordinate and the gray-scale as y0, an equation y = y0(x) results, and y0(x) never decreases with x because of the sorting. When y0(x) (the gray-scale) increases with x (the pixel index) from the center of the droplet, through the fuzzy region, to the laser-illuminated region, y0(x) changes most abruptly at the two boundaries. Hence, it is reasonable to define the optimal T1 at the outer boundary and the optimal T2 at the inner boundary respectively, as formulated by Eq. (10). As can be seen, accurately determining the two thresholds T1 and T2 is critical for accurate segmentation of the droplet. In the state-of-the-art literature [15–22], we could not find any automatic method capable of computing two thresholds for one image simultaneously and accurately. In past research [13], a flexible threshold selection method has been proposed to compute multiple thresholds simultaneously and robustly. Here, we utilize it to compute the two thresholds for this application. The previously proposed threshold selection method is based on the assumption that the variation of the histogram distribution is greatest at the threshold point. When there are multiple thresholds, multiple variation peaks occur. When the gray-scale value increases with the order index from the droplet center, through the fuzzy-area points, to the laser-illuminated area points, there should be an optimal threshold T1 at the outer boundary and an optimal threshold T2 at the inner boundary.
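A minimal sketch of the membership assignment of Eq. (10), together with the membership-weighted area and centroid used later in the paper; the two thresholds are assumed to be supplied by the slope-difference procedure:

```python
import numpy as np

def fuzzy_membership(roi, t1, t2):
    """Eq. (10): per-pixel droplet membership. Pixels darker than T2 are
    droplet (P = 1), brighter than T1 are background (P = 0), and the
    fuzzy band [T2, T1] receives a linear membership."""
    p = np.zeros_like(roi, dtype=float)
    p[roi < t2] = 1.0
    band = (roi >= t2) & (roi <= t1)
    p[band] = (t1 - roi[band]) / (t1 - t2)
    return p

def area_and_centroid(p):
    """Membership-weighted area and centroid of the segmented droplet."""
    area = p.sum()
    ys, xs = np.indices(p.shape)
    return area, (p * xs).sum() / area, (p * ys).sum() / area
```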
Table 1
Comparison of accuracy and speed.

Method            | Average area  | Average time
Double threshold  | 156.8 pixels  | 0.004 s
Angle edge        | 157.7 pixels  | 0.027 s
Fig. 11. Illustration of thresholds computation based on slice for liquid metal. (a) The slice in the 20 × 40 ROI; (b) The distribution of sorted pixels in the slice; (c) The distribution after eliminating pixels greater than 21; (d) The computed slope differences.

Fig. 12. Segmentation results of liquid metal with thresholds computed based on slice. (a) The liquid metal in the 110 × 40 ROI; (b) The segmented small liquid metal; (c) The segmented large liquid metal; (d) The segmented fuzzy region.
Its implementation comprises the following four steps:

Step 1: for each 30 × 30 ROI, a ROD is computed based on the relatively high threshold Tg + 20. All pixels in the ROD are sorted by gray-scale to form a distribution.

Step 2: for any point on the sorted distribution, there are two slopes, one on the left and the other on the right. They are computed by fitting a line model to the seven adjacent points on each side. The line model is formulated and solved as:
Fig. 13. Illustration of solid–liquid boundary position for liquid metal. (a) The width distribution of the liquid metal; (b) The computed slope differences for the width distribution.
yi = a xi + b   (11)

[a, b]^T = (B^T B)^(−1) B^T Y   (12)

B = [[x1, 1], [x2, 1], ⋮, [x7, 1]]   (13)

Y = [y1, y2, …, y7]^T   (14)
Step 3: the two slopes at point i, a1(i) and a2(i), are obtained from Eq. (12). The slope difference sd(i) at point i is then computed as:

sd(i) = a2(i) − a1(i),  i = 8, …, Ns − 7   (15)
where Ns is the total number of pixels in the ROD.

Step 4: find the two positions with the largest and second largest peaks, corresponding to T1 and T2 respectively.

Fig. 7(a) shows the droplet in the 30 × 30 ROI. The bright area in Fig. 7(b) shows the ROD obtained using Tg + 20. A larger threshold is used to increase the number of points with higher gray-scale in order to calculate the slope of the fitted line on the right of the boundary points, as can be seen in Fig. 7(b). The sorted gray-scale distribution is shown in Fig. 7(c). As can be seen, the gray-scale distribution is almost linear and the boundary points are difficult to identify visually. However, there are still variations, and the maximum slope difference principle is thus applied to determine the boundary points statistically. Fig. 7(d) shows the plot of the computed slope differences. The global peak occurs at horizontal coordinate 65, which corresponds to the gray-scale value 29.76 in Fig. 7(c). This position is expected to be the boundary between the center region and the fuzzy region, and T2 is thus chosen as 29.76. There is also a secondary peak at horizontal coordinate 122, which corresponds to the gray-scale 33.92 in Fig. 7(c). Hence, 33.92 is selected as T1 for the boundary between the fuzzy region and the laser-illuminated region. In Fig. 7(c), however, the distribution appears too gradual, and the characteristics of the gray-scale gradient in the three regions are not observable. The results obtained from such a gradual distribution are not the most trustworthy. To investigate further, the ROD is replaced with a 5 × 20 slice selected from the center of the 30 × 30 ROI. Fig. 8(a) shows the selected slice with the corresponding pixels from the ROI, and Fig. 8(b) gives its sorted gray-scale distribution. In Fig. 8(c), all points with gray-scale greater than 40 are not plotted, to exclude apparent laser-illuminated points for better visual effect. Now it is apparent that the distribution contains three regions: approximately [1, 42] represents the center part of the droplet, approximately [43, 62] represents the fuzzy region of the droplet, and approximately [63, 84] represents the laser-illuminated region around the droplet. Fig. 8(d) shows the plot of the slope differences computed from Fig. 8(c). As can be seen, the two maximum peaks occur at horizontal coordinates 42 and 62 respectively. The corresponding T2 is 29.9 and the corresponding T1 is 34.04. Although these are similar to the values computed earlier from the gray-scale distribution of the whole region, the confidence of the decision is improved because it is based on the observed characteristics of the gray-scale gradient in the three regions. Careful thought suggests that in the distribution generated from the whole region, shown in Fig. 7(c), the gradient change in the two-dimensional image has been re-arranged in one dimension such that more points in the laser-illuminated area are included. As a result, the number of pixels used in the distribution increases with the distance from the center of the droplet toward the laser-illuminated area. Hence, the gradient is reduced from its original visual impact as the distance increases, and the confidence level is thus also reduced.
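Steps 1–4 can be sketched as follows. The seven-point line fits follow Eqs. (11)–(14); the synthetic sorted distribution in the test is an assumption standing in for a real ROD, and the magnitude of the slope difference is used here so that both boundaries appear as peaks:

```python
import numpy as np

def slope_difference_thresholds(gray_sorted, w=7):
    """Fit a line to the w points on each side of every position,
    take the slope difference (cf. Eq. (15)), and return the gray
    levels at the two largest peaks as (T1, T2)."""
    n = len(gray_sorted)
    x = np.arange(n, dtype=float)
    sd = np.full(n, -np.inf)
    for i in range(w, n - w):
        a_left = np.polyfit(x[i - w:i], gray_sorted[i - w:i], 1)[0]
        a_right = np.polyfit(x[i:i + w], gray_sorted[i:i + w], 1)[0]
        # Magnitude, so both the inner and outer boundaries show up as peaks.
        sd[i] = abs(a_right - a_left)
    i1 = int(np.argmax(sd))                 # largest peak
    sd[max(0, i1 - w):i1 + w] = -np.inf     # suppress its neighborhood
    i2 = int(np.argmax(sd))                 # second largest peak
    lo, hi = sorted((i1, i2))
    return gray_sorted[hi], gray_sorted[lo]  # (T1, T2): outer, inner boundary
```

On a sorted slice distribution such as Fig. 8(c), the two returned gray levels play the roles of T1 and T2.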
By taking a slice, the one-dimensional characteristics of the gradient change across the three zones are preserved in its gray-scale distribution shown in Fig. 8(c), and the decision confidence and accuracy are improved. Substituting the two automatically determined thresholds T1 and T2 into Eq. (10), the droplet is segmented as shown in Fig. 9. The area of the droplet is computed as:

Ad = Σ_{i=1}^{N} Pi   (16)
The position of the droplet is computed as follows:

xd = (1/Ad) Σ_{i=1}^{N} Pi × xi   (17)

yd = (1/Ad) Σ_{i=1}^{N} Pi × yi   (18)
where xi and yi are the vertical and horizontal coordinates of the ith point respectively.

5.3. Droplet segmentation by angle-edge modeling method

To verify the robustness of the proposed double-threshold method, we utilize the method proposed in [14] to compute the droplet and then compare the results with those of the proposed double-threshold method. Eqs. (11)–(15) are used to detect the edges of the droplet along the direction from the droplet's center to the boundary. The maximum peak is selected and the corresponding position is chosen as an edge. Fig. 10(a) and (c) show the detected edges for two different droplets. As can be seen, the detected edges form a fuzzy zone that defeats all single-threshold segmentation methods. After all the edges are detected, the following second-order model is used to fit the boundary of the droplet.
Fig. 14. Droplet segmentation results; (a) Frame 1; (b) Frame 2; (c) Frame 4; (d) Frame 5; (e) Frame 7; (f) Frame 8.
r = a0 + a1θ + a2θ²   (19)

where r is the radius pointing from the droplet center to the detected edge and θ is the angle of the radius vector. a0, a1, and a2 are the coefficients, which can be calculated by least squares estimation using all the detected edges, denoted by (ri, θi); i = 1, 2, …, N.
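A sketch of this angle-edge fit: least squares estimation of the coefficients of Eq. (19), and the droplet area of Eq. (24) evaluated in closed form (odd powers of θ integrate to zero over [−π, π]). The circular edge points below are synthetic, not detected edges from the paper:

```python
import numpy as np

def fit_polar_boundary(r, theta):
    """Least squares fit of r(theta) = a0 + a1*theta + a2*theta^2."""
    psi = np.column_stack([np.ones_like(theta), theta, theta**2])
    coeffs, *_ = np.linalg.lstsq(psi, r, rcond=None)
    return coeffs  # [a0, a1, a2]

def droplet_area(a0, a1, a2):
    """Closed form of A = 1/2 * integral of r(theta)^2 over [-pi, pi]."""
    return (np.pi * a0**2
            + (np.pi**3 / 3.0) * (a1**2 + 2.0 * a0 * a2)
            + (np.pi**5 / 5.0) * a2**2)

# Synthetic edge points on a circle of radius 5: area should be pi * 5^2.
theta = np.linspace(-np.pi, np.pi, 50)
r = np.full_like(theta, 5.0)
a0, a1, a2 = fit_polar_boundary(r, theta)
print(droplet_area(a0, a1, a2))  # ≈ 78.54
```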
The model of Eq. (19) is fitted by least squares estimation:

φ = (ψ^T ψ)^(−1) ψ^T Y   (20)

where

Y = [r1, r2, …, rN]^T   (21)

ψ = [[1, θ1, θ1²], [1, θ2, θ2²], ⋮, [1, θN, θN²]]   (22)

φ = [a0, a1, a2]^T   (23)

After φ is computed, the boundary of the droplet can be fitted by Eq. (19), as shown in Fig. 10(b) and (d), denoted in blue. Accordingly, the area of the droplet can be computed by the following equation:

Ad = (1/2) ∫_{−π}^{π} (a0 + a1θ + a2θ²)² dθ   (24)

To verify the effectiveness of the proposed double-threshold method, the droplets in twenty adjacent frames are segmented and computed by the proposed double-threshold method and the angle-edge modeling method respectively. The average areas computed by these two methods and their computational times are shown in Table 1. As can be seen, the accuracy of the proposed double-threshold method is similar to that of the angle-edge modeling method while it is much more efficient. Hence, the effectiveness of the proposed double-threshold method is verified.

5.4. Undetached liquid metal segmentation by double threshold method

A 120 × 40 ROI is selected for accurate segmentation of the undetached liquid metal. Similarly, a 20 × 40 slice is used to generate the gray-scale distribution and compute the slope differences. Fig. 11(a) shows an example of the selected slice and Fig. 11(b) shows the sorted gray-scale distribution. As can be seen, it contains the same three parts as in the droplet processing. We again eliminate the brightest laser-illuminated part to exclude apparent laser-illuminated points, and the resultant distribution is shown in Fig. 11(c). Fig. 11(d) plots the computed slope differences. Fig. 12 shows the segmentation results with the computed double thresholds. After the liquid metal is segmented, its width distribution is computed by the following equation:

w(y) = Σ_{x=1}^{Xmax} P(x, y)   (25)

where P(x, y) is the probability of the pixel at position (x, y) and Xmax is the size of the ROI in the row direction. The solid–liquid boundary position Ys is computed based on the maximum slope difference of the width distribution, as illustrated in Fig. 13. As can be seen, the global maximum occurs at horizontal position 6; hence, the boundary point occurs at position 6. A careful observation of the image suggests that this estimate is accurate. The length and area of the undetached liquid metal are then computed by the following equations respectively:

L = Ye − Ys   (26)

Al = Σ_{y=Ys}^{Ye} w(y)   (27)

where Ye denotes the point where the liquid metal ends.

6. Results and discussions

To verify the proposed methods, we select ten adjacent frames from the recorded video that contain both droplets and undetached liquid metal. Since all the captured images comply with the model of Eqs. (7) and (8), the qualitative results are adequate to evaluate the accuracy.

6.1. Processing results

Ten adjacent images are processed by the proposed double-threshold method and the results are shown in Fig. 14. Since some frames do not contain a droplet, the processing for those frames returns none. For each frame, we give the original image, the segmentation result by the smaller threshold, the segmentation result by the larger threshold, and the segmented fuzzy area with probabilities. Only six droplets exist in these ten frames and their areas and positions are plotted in Fig. 15. As can be seen, both the size and position of the droplet change periodically.

Fig. 15. Droplets in different frames; (a) Areas; (b) X coordinates; (c) Y coordinates (the scale of the y axis is in pixels).

Fig. 16 shows the segmentation results of undetached liquid metal, and Fig. 17 shows the change of the solid–liquid boundary position, the change of the end position of the liquid metal, their computed lengths, and their average widths. From these results, we see that both the droplet and the undetached liquid metal are segmented with adequate accuracy. The periodicity of the metal transfer is conveyed successfully.

6.2. Real time processing evaluation
In the developed on-line monitoring system, a high-speed camera connected to a high-speed frame grabber is aimed at the imaging plane. The maximum capturing rate is up to 420 frames per second. The frame rate used in this work is 200 frames per second and the processing speed of the double-threshold method is about 100 frames per second, which is adequate for the control of the current period shown in Fig. 3(a). In [10], a frame rate of 800 frames per second was used and many more positions were plotted for each droplet than in this research; as plotted in Fig. 15, only two positions are computed for each droplet here.

The hardware used for the monitoring system includes: 1) a computer with a dual-core Intel Pentium M processor at 2.99 GHz and 8 GB RAM; 2) a Matrox Meteor-II/Digital frame grabber with a maximum grabbing rate of 420 frames per second; 3) a DALSA CA-D6 industrial camera with a maximum frame rate of 920 frames per second; 4) a laser generator with 20 mW power, 685 nm wavelength and a 10° divergence angle; 5) an imaging plane consisting of a transparent glass and a white paper. The software used includes: 1) the Windows XP operating system; 2) the Matrox Imaging Library.

Fig. 16. Undetached liquid metal segmentation results: (a) Frame 1; (b) Frame 2; (c) Frame 3; (d) Frame 4; (e) Frame 5; (f) Frame 6; (g) Frame 7; (h) Frame 8; (i) Frame 9; (j) Frame 10.
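For readers reproducing a similar pipeline on comparable hardware, the core of the double-threshold selection — locating the two strongest peaks of the slope difference along the gray-scale distribution, then applying a linear membership between the two thresholds — can be sketched as below. This is a simplified, hypothetical illustration, not the paper's exact implementation: the function names, the line-fit window size `w`, and the synthetic test histogram are all assumptions.

```python
import numpy as np

def slope_difference_thresholds(hist, w=5):
    """Select two thresholds from a gray-level distribution by the
    slope-difference idea: at each gray level, fit a line to the
    distribution on its left and on its right; the difference of the
    two slopes peaks where the distribution changes trend.  The gray
    levels of the two largest peaks are returned as the thresholds."""
    hist = np.asarray(hist, dtype=float)
    n = len(hist)
    x = np.arange(w + 1)
    sdiff = np.zeros(n)
    for i in range(w, n - w - 1):
        left = np.polyfit(x, hist[i - w:i + 1], 1)[0]    # slope on the left
        right = np.polyfit(x, hist[i:i + w + 1], 1)[0]   # slope on the right
        sdiff[i] = right - left
    # strict local maxima of the slope difference
    peaks = [i for i in range(1, n - 1)
             if sdiff[i] > sdiff[i - 1] and sdiff[i] >= sdiff[i + 1]]
    lo, hi = sorted(sorted(peaks, key=lambda i: sdiff[i])[-2:])
    return lo, hi

def segment(img, t_low, t_high):
    """Linear fuzzy membership of each pixel in the dark (metal) class:
    1 below t_low, 0 above t_high, linear in the fuzzy band between."""
    mu = (t_high - np.asarray(img, dtype=float)) / float(t_high - t_low)
    return np.clip(mu, 0.0, 1.0)
```

On a synthetic three-mode distribution (dark metal, fuzzy band, bright laser/background), the two returned gray levels bracket the fuzzy band, and `segment` assigns each pixel a membership in [0, 1] rather than a hard label.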
7. Discussions

The original contributions of this paper include: 1) the model of the arc light and the image facilitates the subsequent image analysis and would help other researchers develop effective image processing algorithms for captured images affected by the arc light; 2) a double-threshold segmentation method is proposed to segment fuzzy-edge images, with the fuzzy region computed by a linear membership function. The segmentation accuracy is verified against the previously validated method [12–14]. The advantage of the double-threshold method over other threshold selection methods [15–22] is its better accuracy in segmenting fuzzy images; its advantage over the previous angle-edge method [12–14] is its better efficiency and wider applicability: the double-threshold method can be used to compute both the droplet and the undetached liquid metal, whereas the angle-edge method can only be used to model the droplet; 3) a simplified laser back-lighting based monitoring system with readily available hardware and software is developed in the lab and its real-time processing ability is verified with real data. Compared to past research [7–10], the simplified system is cost-effective and easy to handle in real welding environments. The proposed monitoring system is capable of robustly extracting the metal transfer information on-line with the proposed automatic and efficient image processing algorithms, a capability not addressed by the past research [7–10].

When an image-based real-time feedback control system is developed to control the metal transfer process, a much reduced arc length may be used without unintentional short-circuiting. This would improve the concentration of the arc such that the needed weld penetration is achieved with a reduced current. The reduced arc length also directly reduces the arc voltage, and both reductions lower the energy consumption. In addition, the reduced arc length reduces the impact of the droplets on the weld pool and thus the chance of generating spatters. With the image-based feedback, it is also possible to switch the current down from the peak level rapidly right after the droplet is detached. The reduced application time of the peak current would greatly reduce the over-heating of the droplet metal that produces harmful fumes and metal vapors. In summary, a feedback control system built on the monitoring system would reduce energy consumption and be more environmentally friendly. This study establishes part of the foundation for a closed-loop welding robot control system that uses the monitoring system [13,23–24] for metal transfer control.

Fig. 17. Undetached liquid metal in different frames: (a) change of the solid–liquid boundary position Ys; (b) change of the end position of the liquid metal Ye; (c) length; (d) average width (the y axis is in pixels).
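As an illustration of the droplet modeling the discussion refers to, Eqs. (20)–(23) amount to an ordinary least-squares fit of a quadratic boundary r(θ) = a0 + a1θ + a2θ², and the area integral of Eq. (24) has a closed form because odd powers of θ integrate to zero over [−π, π]. The sketch below (NumPy, synthetic boundary samples rather than data from the paper; function names are hypothetical) shows both steps:

```python
import numpy as np

def fit_boundary(theta, r):
    """Least-squares fit of r(theta) ~ a0 + a1*theta + a2*theta^2,
    i.e. Eqs. (20)-(23): phi = (Psi^T Psi)^-1 Psi^T Y, with Psi the
    design matrix of Eq. (22)."""
    psi = np.column_stack([np.ones_like(theta), theta, theta**2])  # Psi
    phi, *_ = np.linalg.lstsq(psi, r, rcond=None)  # phi = [a0, a1, a2]
    return phi

def droplet_area(phi):
    """Eq. (24): A_d = (1/2) * integral_{-pi}^{pi} r(theta)^2 dtheta,
    evaluated in closed form (odd powers vanish on the symmetric
    interval)."""
    a0, a1, a2 = phi
    return (np.pi * a0**2
            + (np.pi**3 / 3.0) * (a1**2 + 2.0 * a0 * a2)
            + (np.pi**5 / 5.0) * a2**2)
```

The closed form makes the per-frame area computation a handful of multiplications, which is consistent with the real-time budget reported in Section 6.2.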
8. Conclusion and future work

This paper establishes a capability needed to monitor the metal transfer process for real-time control in real welding environments. A cost-effective laser back-lighting based monitoring system is developed with readily available hardware and software to automatically extract the metal transfer information robustly and efficiently. For efficient and robust segmentation, the arc light and the image are modeled mathematically, a double-threshold method is proposed to calculate the two thresholds simultaneously from the histogram distribution, and a linear membership function is proposed to characterize the fuzzy region. Experimental results verified the efficiency and robustness of the proposed image processing algorithms. With the on-line monitoring system, a processing speed of up to 100 frames per second was achieved, which is adequate for the control frequency. Future work includes, but is not limited to: (1) establishing a closed-loop control system for P-GMAW metal transfer based on the monitoring system; (2) monitoring and controlling other welding processes with the proposed algorithms and system.
Acknowledgments

The experiments were conducted at the University of Kentucky under Grant DMI-0355324.
References

[1] Y. Shao, Y.M. Zhang, Gas metal arc welding enhanced by pulsed laser, Weld. J. 93 (6) (2014) 205s–215s.
[2] J.A. Johnson, N.M. Carlson, H.B. Smartt, D.E. Clark, Process control of GMAW: sensing of metal transfer mode, Weld. J. 70 (4) (1991) 91-s–99-s.
[3] T. Siewert, B. Madigan, T. Quinn, M. Mornis, Through-the-arc sensing for gas metal arc weld quality in real time, Mater. Eval. (Nov. 1992) 1314–1318.
[4] Q.L. Wang, P.J. Li, Arc light sensing of droplet transfer and its analysis in pulsed GMAW process, Weld. J. 76 (11) (1997) 458-s–469-s.
[5] S. Rhee, E. Kannatey-Asibu Jr., Observation of metal transfer during gas metal arc welding, Weld. J. 71 (10) (1992) 381-s–386-s.
[6] B. Bachmann, E. Siewert, J. Schein, In situ droplet surface tension and viscosity measurements in gas metal arc welding, J. Phys. D: Appl. Phys. 45 (2012) 175202.
[7] C.D. Allemand, R. Schoeder, D.E. Ries, T.W. Eager, A method of filming metal transfer in weld arcs, Weld. J. 64 (1) (1985) 45–47.
[8] S. Subramaniam, D.R. White, D.J. Scholl, W.H. Weber, In situ optical measurement of liquid drop surface tension in gas metal arc welding, J. Phys. D: Appl. Phys. 31 (1998) 1963–1967.
[9] A. Scotti, V. Ponomarev, W. Lucas, A scientific application oriented classification for metal transfer modes in GMA welding, J. Mater. Process. Technol. 212 (2012) 1406–1413.
[10] Y.M. Zhang, E. Liguo, R. Kovacevic, Active metal transfer control by monitoring excited droplet oscillation, Weld. J. 77 (9) (1998) 388-s–395-s.
[11] Z.Z. Wang, Y.M. Zhang, X.J. Ma, Machine recognition of laser reflection from gas metal arc weld pool surfaces, J. Manuf. Sci. Eng. 133 (4) (2011) 041001–041013.
[12] Y. Shao, Z.Z. Wang, Y.M. Zhang, Monitoring of liquid droplets in laser enhanced GMAW, Int. J. Adv. Manuf. Technol. 57 (2011) 203–214.
[13] Z.Z. Wang, Monitoring of GMAW weld pool from reflected laser lines for real time control, IEEE Trans. Ind. Inform. 10 (4) (2014) 2073–2083.
[14] Z.Z. Wang, Y.M. Zhang, Brightness based selection and edge detection based enhancement separation algorithm for low resolution metal transfer images, IEEE Trans. Autom. Sci. Eng. 6 (1) (2009) 181–187.
[15] M. Sezgin, B. Sankur, Survey over image thresholding techniques and quantitative performance evaluation, J. Electron. Imaging 13 (2004) 146–165.
[16] M.E. Sieracki, S.E. Reichenbach, K.L. Webb, Evaluation of automated threshold selection methods for accurately sizing microscopic fluorescent cells by image analysis, Appl. Environ. Microbiol. 55 (1989) 2762–2772.
[17] A. Rosenfeld, P.D. Torre, Histogram concavity analysis as an aid in threshold selection, IEEE Trans. Syst. Man Cybern. SMC-9 (1983) 38–52.
[18] N. Ramesh, J.H. Yoo, I.K. Sethi, Thresholding based on histogram approximation, IEE Proc. Vis. Image Signal Process. 142 (5) (1995) 271–279.
[19] N. Otsu, A threshold selection method from gray level histogram, IEEE Trans. Syst. Man Cybern. SMC-9 (1979) 62–66.
[20] P. Sahoo, C. Wilkins, J. Yeager, Threshold selection using Renyi's entropy, Pattern Recognit. 30 (1997) 71–84.
[21] L.K. Huang, F.K. Lam, Image thresholding by minimizing the measures of fuzziness, Pattern Recognit. 28 (1995) 41–51.
[22] H.D. Cheng, Y.H. Chen, Fuzzy partition of two-dimensional histogram and its application to thresholding, Pattern Recognit. 32 (1999) 825–843.
[23] S.C. Huang, B.H. Chen, Automatic moving object extraction through a real-world variable-bandwidth network for traffic monitoring systems, IEEE Trans. Ind. Electron. 61 (4) (2014) 2099–2112.
[24] J. Neuzil, O. Kreibich, R. Smid, A distributed fault detection system based on IWSN for machine condition monitoring, IEEE Trans. Ind. Inform. 10 (2) (2014) 1118–1123.