Advances in Engineering Software 83 (2015) 99–108
Thin crack observation in a reinforced concrete bridge pier test using image processing and analysis

Yuan-Sen Yang a,*, Chung-Ming Yang a,b, Chang-Wei Huang c

a National Taipei University of Technology (NTUT), Taiwan
b Chunghwa Telecom, Taiwan
c Chung Yuan Christian University, Taiwan
Article history: Received 25 July 2014; Received in revised form 5 January 2015; Accepted 11 February 2015; Available online 3 March 2015.

Keywords: Thin crack; Crack observation; Image analysis; Reinforced concrete; Structural experiments; Surface strain measurement
Abstract: In reinforced concrete (RC) structural experiments, the development of concrete surface cracks is an important factor of concern to experts. One conventional crack observation method is to suspend a test at a few selected testing steps and send inspectors to mark pen strokes on visible cracks, but this method is dangerous and labor intensive. Many image analysis methods have been proposed to detect and measure the dark shadow lines of cracks, reducing the need for manual pen marking. However, these methods are not applicable to thin cracks, which do not present clear dark lines in images. This paper presents an image analysis method to capture thin cracks and minimize the requirement for pen marking in reinforced concrete structural tests. The paper presents the mathematical models, procedures, and limitations of our image analysis method, as well as the analysis flowchart, the adopted image processing and analysis methods, and the software implementation. Finally, the results of applying the proposed method in full-scale reinforced concrete bridge experiments are presented to demonstrate its performance. Results demonstrate that this method can capture concrete surface cracks even before dark crack lines visible to the naked eye appear. © 2015 Elsevier Ltd. All rights reserved.
1. Background

Crack observations play an important role in reinforced concrete (RC) structural tests. The directions, patterns, and density of crack distributions reveal different failure modes, the degradation of the stiffness and strength of structures, and many other kinds of significant information to researchers (e.g., Figs. 1a and 1b) for failure mode studies [1] and numerical model development [2–3]. Conventionally, cracks are observed by suspending a test to allow inspectors to approach the concrete surfaces and manually sketch lines on the cracks (Fig. 2). However, this method can be more dangerous than it would seem. The specimen structures are damaged and partially unstable. During test suspension for crack observations, specimens are normally under tensioned, compressed, bent, or distorted conditions, with tons of internal forces keeping the cracks open. The hydraulic actuators that apply forces to the specimens are normally still running control loops, slightly vibrating and adjusting tons of applied force on the specimens even when they seem to be paused. Helmets are insufficient to completely ensure human safety if any
* Corresponding author. E-mail address: [email protected] (Y.-S. Yang). http://dx.doi.org/10.1016/j.advengsoft.2015.02.005. 0965-9978/© 2015 Elsevier Ltd. All rights reserved.
parts of the specimens, experimental devices, or connecting components unexpectedly become unstable. As digital imaging technology rapidly improves and hardware costs drop, image analysis offers an alternative and cost-effective way to take structural experimental measurements. Displacements, strain fields, cracks, and even dynamic motions can be captured and measured by using image analysis. Many crack detection methods have been proposed in the literature. Most of them are based on edge detection methods that detect and measure large cracks, whose presence appears as dark shadow lines in images [4]. Recent studies include improvements in crack depth prediction [5], change detection without image registration [5], crack pattern recognition based on artificial neural networks [6], applications to micro-cracks of rocks [7], and efficient sub-pixel width measurement [8]. However, the development of the concrete cracks of interest starts from thin cracks that are not wide enough to produce a distinct dark line in images taken by remote cameras. The image resolution of cameras typically used in structural tests is on the order of 1 mm per pixel or coarser: dark shadow lines thinner than this do not appear in the images. To address this deficiency, this paper presents an image analysis method that is suitable for thin crack observations, in which cracks too thin to appear as dark lines in the images can still be captured.
The proposed image-based crack observation method includes two procedure types: (1) experimental procedures and (2) image analysis procedures. The aim of the experimental procedures is to acquire clear photos that contain sufficient and accurate information for the image analysis procedures; the aim of the image analysis procedures is to transform and analyze the photos into crack observation information and to visualize the results. The experimental procedures include the following steps: (1) surface processing, (2) setting up two cameras, (3) taking calibration photos, and (4) taking experimental photos. Surface processing consists of painting or marking sufficient features or textures so that the observed region of interest (ROI) contains sufficient image features for movement trace detection. The speckle spray method, which applies random speckles and texture patterns over a surface, has been widely used in past image measurements in structural experiments (e.g., Carroll et al. [9] and Yang et al. [10]). However, speckle painting may increase the difficulty of crack
observation by the naked eye. It is therefore necessary to obtain the agreement of the other members of the testing team before applying speckle painting. When setting up the cameras, it must be ensured that the cameras are firmly fixed for both the calibration photos and the experimental photos. The cameras should be adjusted so that their fields of view (FOVs) sufficiently contain the region of interest of the specimen. The ROI of the specimen ought to be kept within the central region of the camera view (see Fig. 3), leaving the boundary part unused. This is suggested not only because displacements at the boundary may be larger than expected, but also because the boundary normally contains stronger (high-order nonlinear) distortion effects that are more difficult to mitigate during calibration. In addition, the focal lengths and focal distances of the cameras should not be allowed to adjust automatically, and must instead be manually fixed. It is also suggested that the apertures, exposure times, and environmental lighting conditions be kept as constant as possible so that all photos taken in an experiment have a consistent exposure. A few pairs of photos of a calibration object are taken to provide information for estimating the parameters of each camera (i.e., the intrinsic parameters) and the transformation between the coordinate systems of the two cameras (i.e., the extrinsic parameters) [11]. A chessboard-like pattern is one of the most widely used models for camera calibration. Both cameras need to take their photos at the same time for each pair so that the objects in the left photo are at the same 3D positions as those in the right photo. Some experts have suggested taking ten or more pairs of calibration photos of a 7-by-8 or larger chessboard [11]. The calibration object should be moved to different positions so as to cover the entire ROI.
Experimental photos are taken as a series of photos of the ROI of the specimen during the entire experiment. It is very important to keep the cameras firmly and constantly fixed without movement; even a very slight touch may induce camera movement and affect the accuracy of the results. The camera shutters should be controlled by remote controllers to minimize any external interference. The remote controllers can be operated manually by a person, by a timer (that triggers the shutters at equal time intervals), or by an electronic device that connects the shutters to an experimental data logging system (so that the photos can be easily synchronized with other experimental data). The image analysis procedures of the proposed method include the following major steps: (1) stereo calibration, (2) control points positioning, (3) surface formula estimation, (4) metric rectification, (5) deformation analysis, and (6) visualization. The procedures were implemented in a tool named ImPro Stereo [12]. Most of ImPro Stereo was written in MATLAB, with mixed-language programming for external calls to the
Fig. 1b. A 45-degree crack indicating shear failure.
Fig. 2. Manual pen marking on cracks.
Fig. 1a. A horizontal crack indicating flexural failure.
This method assumes that the materials on different sides of a crack move along different directions. A movement analysis method developed in the computer vision field was employed to analyze such small movements. The formula and the procedure are briefly introduced, followed by applications of this method to selected full-scale RC structural experiments. Dark line detection and edge detection-based approaches are not considered in this paper. However, this study does not diminish the value of the existing dark-line detection and edge-based methods; in practical applications, the proposed method and these past methods can be employed concurrently or combined into a more thorough method.
2. Procedures and analysis formulas
Fig. 3. Example of an ROI and a selected FOV.

Fig. 4. Control points positioning using stereo triangulation.
C/C++-based OpenCV library [11]. The mathematical formula and the computer implementation of each step are introduced next. Stereo calibration estimates the extrinsic and intrinsic parameters of the stereo camera system. The extrinsic parameters represent the geometrical relationship between the left and right cameras (i.e., the translation vector and rotation matrix between the cameras' coordinate systems). Meanwhile, the intrinsic parameters of each camera describe how to convert an image point to the homogeneous plane (the z = 1 plane of the camera's coordinate system), and mainly include the focal lengths, image size, principal point, and lens distortion. The mathematical formula for this calibration can be found widely in the literature (e.g., [11]) and is not repeated in this paper. Many computer tools are available for stereo calibration; this work employed Bouguet's Camera Calibration Toolbox (BCCT) [13]. Control points positioning calculates the 3D coordinates of the control points of the observed surface by using the stereo triangulation technique. The number of control points should be sufficient to determine the location and shape of the observed surface. In this work, four control points are used and positioned to determine the size and position of the ROI cylinder surface on the pier specimen. The control points in the first pair of photos (the initial photos taken by the left and right cameras) are selected manually by the user (e.g., using a computer mouse). The control points in later pairs of photos (those of the deformed specimen) are positioned automatically by image tracing using sub-pixel precision template matching [10], a modified version of the template matching function in the OpenCV library [11]. After the control points are positioned in the image coordinates (i.e., pixel positions in the photos), their 3D coordinates can be estimated by stereo triangulation [14].
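As a concrete illustration of the triangulation step, the following is a minimal linear (DLT) triangulation sketch in NumPy. It is not the BCCT/OpenCV implementation used in this work; the intrinsics and the 0.5 m baseline are hypothetical.

```python
import numpy as np

def triangulate(Pl, Pr, xl, xr):
    """Linear (DLT) stereo triangulation of one matched point pair.
    Pl, Pr : 3x4 projection matrices (intrinsics times [R|t]) of the
             left and right cameras, from stereo calibration.
    xl, xr : matched pixel coordinates (u, v) in the two photos."""
    A = np.array([
        xl[0] * Pl[2] - Pl[0],
        xl[1] * Pl[2] - Pl[1],
        xr[0] * Pr[2] - Pr[0],
        xr[1] * Pr[2] - Pr[1],
    ])
    # The homogeneous 3D point is the null vector of A, i.e., the right
    # singular vector with the smallest singular value.
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Hypothetical rig: identical intrinsics, right camera 0.5 m to the side.
K = np.array([[2000.0, 0, 1296], [0, 2000.0, 864], [0, 0, 1]])
Pl = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
Pr = K @ np.hstack([np.eye(3), [[-0.5], [0.0], [0.0]]])

X_true = np.array([0.3, -0.2, 5.0])      # a control point 5 m away
X_est = triangulate(Pl, Pr, project(Pl, X_true), project(Pr, X_true))
```

Applying this to each control point yields its 3D coordinates; OpenCV's cv2.triangulatePoints implements the same idea.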
The observed region is on a cylinder surface that is defined by four control points, as shown in Fig. 4. The user is expected to pick the control points P1 to P4 on the photos such that P1, P2, and P3 are roughly at the same height on the cylinder and P1 and P4 lie on the same vertical line, parallel to the longitudinal axis of the cylinder. The formula of stereo triangulation can be found in [14]. Popular implementations of stereo triangulation can be found in BCCT (MATLAB-based) and in the OpenCV library; this work employed BCCT for stereo triangulation and OpenCV optical flow analysis for deformation analysis. Surface formula estimation estimates the assumed formula of the surface of the observed region, as shown in Fig. 5. The surface formula is defined in a parametric form because this is convenient for generating a metric rectification image of the ROI, which is introduced later. The control points P1 to P4 define a
Fig. 5. Estimation of ROI surface formula.
cylinder coordinate system as shown in Fig. 5, where point O is the origin, OC is the central axis, and $\vec{x}$, $\vec{y}$, and $\vec{z}$ are the three unit axes of the cylinder orientation. By assuming that P1 and P4 are on a vertical line, $\vec{z}$ can be estimated by:

\vec{z} = (P_1 - P_4)/\|P_1 - P_4\|    (1)

The central axis OC can be estimated by calculating the intersection of the perpendicular bisectors of lines P1–P2 and P1–P3. The estimation of OC is similar to stereo triangulation. The central axis OC is then adjusted so that it is parallel to $\vec{z}$:

C = \tfrac{1}{4}(2P_1 + P'_2 + P'_3) + \tfrac{1}{2} a_{12} \vec{v}'_{x12} + \tfrac{1}{2} a_{13} \vec{v}'_{x13}    (2)

O = C + (P_4 - P_1)    (3)

where the positions of P2 and P3 are adjusted to P2′ and P3′ while ensuring that P1, P2′, and P3′ are on a plane that is perpendicular to $\vec{z}$. Note that $\vec{v}'_{x12}$, $\vec{v}'_{x13}$, $a_{12}$, and $a_{13}$ are calculated by:

\vec{v}_{x12} = (P_2 - P_1)/\|P_2 - P_1\|    (4)

\vec{v}_{y12} = (\vec{z} \times \vec{v}_{x12})/\|\vec{z} \times \vec{v}_{x12}\|    (5)

\vec{v}'_{x12} = (\vec{v}_{y12} \times \vec{z})/\|\vec{v}_{y12} \times \vec{z}\|    (6)

P'_2 = P_1 + ((P_2 - P_1) \cdot \vec{v}'_{x12})\, \vec{v}'_{x12}    (7)

where $\vec{v}'_{x12}$ is the vector pointing to C from the midpoint between P1 and P2′.
\vec{v}_{x13} = (P_3 - P_1)/\|P_3 - P_1\|    (8)

\vec{v}_{y13} = (\vec{z} \times \vec{v}_{x13})/\|\vec{z} \times \vec{v}_{x13}\|    (9)

\vec{v}'_{x13} = (\vec{v}_{y13} \times \vec{z})/\|\vec{v}_{y13} \times \vec{z}\|    (10)

P'_3 = P_1 + ((P_3 - P_1) \cdot \vec{v}'_{x13})\, \vec{v}'_{x13}    (11)

where $\vec{v}'_{x13}$ is the vector pointing to C from the midpoint between P1 and P3′. Note that $a_{12}$ and $a_{13}$ in Eq. (2) are obtained by solving a linear system as shown in Eqs. (12) and (13). Eq. (13) uses the least squares method to solve the intersection of the perpendicular bisectors defined by P1–P2′ and P1–P3′.

A = [\vec{v}'_{x12} \mid \vec{v}'_{x13}]    (12)

[a_{12}, a_{13}]^{T} = \tfrac{1}{2}(A^{T} A)^{-1} A^{T} (P'_3 - P'_2)    (13)

$\vec{x}$ and $\vec{y}$ are estimated by

\vec{x} = (P_1 - C)/\|P_1 - C\|    (14)

\vec{y} = \vec{z} \times \vec{x}    (15)

The radius of the cylinder is estimated with

R = \|P_1 - C\|    (16)
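Eqs. (1)–(16) can be condensed into a short NumPy sketch. This is an illustrative reading of the construction, not the ImPro Stereo source: the perpendicular-bisector directions are formed here as $\vec{z} \times$ (chord direction), consistent with the description of the bisectors pointing from the chord midpoints toward C, and the example control points are hypothetical.

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def fit_cylinder(P1, P2, P3, P4):
    """Estimate the ROI cylinder from four control points (Eqs. (1)-(16)).
    P1-P3 lie roughly at the same height; P4 is below P1 on a vertical line."""
    z = unit(P1 - P4)                                    # Eq. (1)

    def flatten(P):                                      # Eqs. (4)-(11)
        vx = unit(P - P1)
        vy = unit(np.cross(z, vx))
        vxp = unit(np.cross(vy, z))    # chord direction in the plane perp. to z
        Pp = P1 + np.dot(P - P1, vxp) * vxp   # P projected into that plane
        return vxp, Pp

    v12, P2p = flatten(P2)
    v13, P3p = flatten(P3)

    # Perpendicular-bisector directions of chords P1-P2' and P1-P3'.
    d12 = unit(np.cross(z, v12))
    d13 = unit(np.cross(z, v13))

    # Least squares intersection of the two bisectors, Eqs. (12)-(13).
    A = np.column_stack([d12, -d13])
    a12, a13 = 0.5 * np.linalg.lstsq(A, P3p - P2p, rcond=None)[0]

    C = 0.25 * (2 * P1 + P2p + P3p) + 0.5 * a12 * d12 + 0.5 * a13 * d13  # Eq. (2)
    O = C + (P4 - P1)                                    # Eq. (3)
    x = unit(P1 - C)                                     # Eq. (14)
    y = np.cross(z, x)                                   # Eq. (15)
    R = np.linalg.norm(P1 - C)                           # Eq. (16)
    return O, C, R, x, y, z

# Example: four hypothetical control points on a cylinder of radius 2
# whose axis passes through (1, 2, 0) along the global Z direction.
c, r = np.array([1.0, 2.0, 0.0]), 2.0
P1 = c + [r, 0, 5]
P2 = c + [r * np.cos(0.6), r * np.sin(0.6), 5]
P3 = c + [r * np.cos(1.2), r * np.sin(1.2), 5]
P4 = c + [r, 0, 1]
O, C, R, x, y, z = fit_cylinder(P1, P2, P3, P4)
```

For exact points on a cylinder the two bisectors intersect at the axis, so the least squares step reproduces the center and radius; with noisy control points it returns the best-fit intersection.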
The parametric form of the ROI is expressed as

P(\theta, h) = O + R\cos(\theta)\,\vec{x} + R\sin(\theta)\,\vec{y} + h\,\vec{z}    (17)

where

0 \le \theta \le \arccos(\vec{x} \cdot (P'_2 - O)/\|P'_2 - O\|)    (18)

0 \le h \le \|P_1 - P_4\|    (19)

The formulas above are calculated for each pair of photos. They imply that the initial and deformed surfaces of the observed region can be described by a formula with variable parameters. Images of the observed region are projected onto the formulated surface in the following image rectification step. A certain amount of spurious deformation will be generated if the observed region surface departs from the formulated surface. This study assumes that the observed region surface of the pier is approximately a cylinder, and any spurious deformation arising from this assumption is ignored because the focus is crack observation rather than accurate strain field measurement. The method can be adjusted for other shapes of structures, e.g., walls [10] or rectangular pillars. The accuracy of the ROI formula may be sensitive to the accuracy of the control points' 3D coordinates. For example, as the three orthonormal unit vectors $\vec{x}$, $\vec{y}$, and $\vec{z}$ are determined by the directions of the lines between P1–P2–P3 and P1–P4, the calculated vectors are sensitive to the error of the P1–P4 direction. Errors may be induced if this P1–P4 direction is not parallel to the central line of the pier.

Image rectification in the proposed method is a process used to generate a rectangular image that represents the surface of the ROI. This process is similar to unfolding the curved surface of the ROI onto a plane image, as shown in Fig. 6. The first step is to generate a set of 3D re-sampling points by substituting uniformly distributed parameters $\theta$ and $h$ into Eq. (17). Each re-sampling point represents a pixel in the rectified image, which typically consists of millions of pixels. These re-sampling points form a 2D structured mesh uniformly distributed over the ROI. Each point is then projected to the photo of one of the cameras, and the image coordinate of the projected point can be calculated (see Fig. 6(a)). The color intensity (gray level or red/green/blue levels) of the point can then be interpolated from its neighboring pixels in the photo (see Fig. 6(b) and (c)). The intervals of $\theta$ and $h$ (i.e., the density of the mesh) depend on the image size of the rectified image. For example, if the increment of $h$ (i.e., $\Delta h$) is H/1000, the rectified image height will be 1001 pixels. To keep the aspect ratio of the rectified image unchanged, $\Delta h$ and $R\,\Delta\theta$ are kept the same. To minimize information loss, $\Delta h$ should be no greater than the physical length that is equivalent to one pixel in the photo. However, the physical length of one pixel varies over an image, and the selection of $\Delta h$ is subjective. In this work, the parameters $\Delta h$ and $\Delta\theta$ are set according to Eqs. (20) and (21):

\Delta h = 0.5 \times \frac{\text{physical distance between } P_1 \text{ and } P_4}{\text{pixel distance between } P_1 \text{ and } P_4}    (20)

\Delta\theta = \Delta h / R    (21)
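The re-sampling of Eqs. (17)–(21) can be sketched as follows (hedged: the camera below is a plain pinhole model with hypothetical intrinsics standing in for the calibrated camera, and the bilinear interpolation is a simplified stand-in for the ImPro Stereo implementation).

```python
import numpy as np

def rectify(photo, P, O, x, y, z, R, theta_max, H, dh):
    """Unfold the cylindrical ROI into a flat image (Eqs. (17)-(21)).
    photo : 2D gray-level image;  P : 3x4 camera projection matrix;
    (O, x, y, z, R) : cylinder from the surface-formula step;
    theta_max, H : parameter ranges (Eqs. (18)-(19)); dh : Eq. (20)."""
    dtheta = dh / R                                   # Eq. (21)
    thetas = np.arange(0.0, theta_max + 1e-12, dtheta)
    hs = np.arange(0.0, H + 1e-12, dh)
    th, hh = np.meshgrid(thetas, hs)                  # structured mesh
    # Eq. (17): 3D re-sampling points on the cylinder surface.
    pts = (O + R * np.cos(th)[..., None] * x
             + R * np.sin(th)[..., None] * y
             + hh[..., None] * z)
    # Project each point to the photo (homogeneous pinhole projection).
    ph = np.concatenate([pts, np.ones(pts.shape[:2] + (1,))], axis=-1)
    uvw = ph @ P.T
    u, v = uvw[..., 0] / uvw[..., 2], uvw[..., 1] / uvw[..., 2]
    # Bilinear interpolation of the gray level at (u, v).
    u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
    fu, fv = u - u0, v - v0
    g = lambda r, c: photo[np.clip(r, 0, photo.shape[0] - 1),
                           np.clip(c, 0, photo.shape[1] - 1)]
    return ((1 - fv) * (1 - fu) * g(v0, u0) + (1 - fv) * fu * g(v0, u0 + 1)
            + fv * (1 - fu) * g(v0 + 1, u0) + fv * fu * g(v0 + 1, u0 + 1))

# Example: unit cylinder viewed by a hypothetical camera 5 units away
# that looks along the world +Y axis; photo gray level = row index.
K = np.array([[100.0, 0, 50], [0, 100.0, 50], [0, 0, 1]])
Rc = np.array([[1.0, 0, 0], [0, 0, -1], [0, 1, 0]])
P = K @ np.hstack([Rc, [[0.0], [0.0], [5.0]]])
photo = np.tile(np.arange(100.0)[:, None], (1, 100))
rect = rectify(photo, P, np.zeros(3), np.array([1.0, 0, 0]),
               np.array([0.0, 1, 0]), np.array([0.0, 0, 1]),
               R=1.0, theta_max=np.pi / 2, H=1.0, dh=0.1)
```

Each output pixel corresponds to one (θ, h) sample, so the rectified image has a constant physical length per pixel.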
The factor 0.5 in Eq. (20) can be adjusted by the user. A smaller $\Delta h$ minimizes the loss of image information during image rectification but leads to larger image sizes and longer computing times, while a larger factor results in an increased risk of losing image details from the photos. Image rectification is quite time consuming in the current implementation of ImPro Stereo, probably because this process is implemented in MATLAB scripts rather than a compiled language such as C/C++. Generating a rectified image with a size of about 1700 by 2400 pixels on an Intel Mobile 2.66-GHz T9550 CPU-powered laptop takes more than 20 s. The computing time may be shortened if the image rectification MATLAB code is recoded in C/C++ and compiled as a MATLAB external interface. Displacement and deformation analysis estimates the displacement and deformation of the rectified images of the deformed pairs of photos with respect to the initial one. This can be done by using digital image correlation [15], optical flow analysis [16], or modified versions of these (e.g., [17]). These methods are capable of estimating two-dimensional displacement fields (including horizontal u and vertical v) by comparing the initial and deformed rectified images (see Fig. 7(a) for an example). Because the rectified image has a constant ratio of physical length to pixel (i.e., $\Delta h$), it is easy to convert image displacement fields to physical displacement fields (see Fig. 7(b) for an example). This work employed an optical flow analysis method implemented by [17] and available in OpenCV [11], mainly because of its good performance in terms of
Fig. 6. Process of image rectification: (a) re-sampling in 3D coordinates; (b) projection to a photo; (c) generating a rectified image.
Fig. 7. Displacement/strain estimation and visualization: (a) grid (blue lines) and initial rectified image of ROI; (b) re-positioned grid and deformed rectified image; (c) estimated displacement (uy); (d) estimated strain (eyy); (e) backward projection of strain field. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
computing time and sub-pixel accuracy. Analyzing a rectified image with a size of 1700 by 2400 pixels or so on the aforementioned laptop takes less than 0.2 s. The deformation of the ROI surface is estimated by using a central difference approximation. The deformation of the cell (a rectangle in the grid) on the ith row and jth column of the grid is estimated by the following equations if the cell is not on the boundary:
e_{xx} \approx \frac{u_{i+1,j} - u_{i-1,j}}{2\Delta x}    (22)

e_{yy} \approx \frac{v_{i,j+1} - v_{i,j-1}}{2\Delta y}    (23)

\gamma_{xy} \approx \frac{1}{2}\left(\frac{u_{i,j+1} - u_{i,j-1}}{2\Delta y} + \frac{v_{i+1,j} - v_{i-1,j}}{2\Delta x}\right)    (24)
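The central-difference scheme of Eqs. (22)–(24) maps directly onto np.gradient, which uses central differences in the interior and falls back to one-sided differences on the boundary. A small NumPy sketch with a synthetic uniform-stretch field (the grid size and displacement field are hypothetical; only the 16.67 mm cell size comes from the test described later):

```python
import numpy as np

def strain_fields(u, v, dx, dy):
    """Strain estimation per Eqs. (22)-(24).
    u, v : displacement fields sampled on the grid, indexed [row, col],
           with rows along y and columns along x.
    np.gradient applies central differences in the interior and
    forward/backward differences on the boundary cells."""
    du_dy, du_dx = np.gradient(u, dy, dx)
    dv_dy, dv_dx = np.gradient(v, dy, dx)
    exx = du_dx                          # Eq. (22)
    eyy = dv_dy                          # Eq. (23)
    gxy = 0.5 * (du_dy + dv_dx)          # Eq. (24)
    return exx, eyy, gxy

# Hypothetical uniform stretching: u = 0.001*x, v = 0.002*y.
ys, xs = np.mgrid[0:50, 0:40] * 1.0
dx = dy = 16.67                          # mm per cell
u = 0.001 * xs * dx
v = 0.002 * ys * dy
exx, eyy, gxy = strain_fields(u, v, dx, dy)
```

For this linear field the estimated strains are constant (exx = 0.001, eyy = 0.002, gxy = 0), which is a quick sanity check on the differencing.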
When the cell is on the boundary, a forward or backward difference approximation is used instead [18]. See Fig. 7(c) for an example. The final step is visualization. ImPro Stereo provides two visualization techniques: plane plots (Fig. 7(b) and (c)) and projection onto photos (Fig. 7(d)). Plane plots present displacement or deformation fields of the observed region, with color bars indicating the field values of the different colors. The perspective effect is removed so that the size scale is consistent over the region.
Projection displays the field on the photo, representing the observed region in an intuitive way. As mentioned previously, these procedures were implemented in a program named ImPro Stereo (see Fig. 8). This program is based on the MATLAB GUI (graphical user interface), which is suitable for general mathematical operations, with external C++ calls to the OpenCV library. It handles all of the image analysis procedures described in this paper to minimize the need to create additional code, and it utilizes some existing tools for stereo calibration, triangulation, and the kernel of image analysis. The tools used for each image analysis procedure are listed below:
(1) Stereo calibration: mainly based on Bouguet's Camera Calibration Toolbox (BCCT).
(2) Control points positioning: mainly based on BCCT.
(3) Surface formula estimation: MATLAB scripts (see Eqs. (1)–(19)).
(4) Metric rectification: MATLAB scripts (see Eqs. (20) and (21)) with some BCCT subprograms.
(5) Deformation analysis: MATLAB scripts (see Eqs. (22) and (23)) with some OpenCV functions.
(6) Visualization: MATLAB scripts with some BCCT subprograms.
Fig. 8. ImPro Stereo, an integrated image analysis tool.
Fig. 9. The Niu-dou Bridge In-situ Test (Pier 4): (a) Niu-dou Bridge Pier 4 being tested; (b) experimental preparation. (Labels in the figure: hydraulic actuators, 9.3 m, cameras, region of interest (ROI).)

Table 1
Information of cameras used in this test.

                 Left camera            Right camera
  Model          Canon EOS 550D         Canon EOS 500D
  Resolution     5184 × 3456            4752 × 3168
  Sensor         CMOS, 22.3 × 14.9 mm   CMOS, 22.3 × 14.9 mm
  Exposure time  1/125 s                1/200 s
  Focal length   55 mm                  55 mm
This method has some limitations that should be considered before it is adopted: (1) the need for photos to be taken by two cameras to perform stereo triangulation of the control points; (2) the need for the cameras to be firmly fixed to ensure the accuracy of stereo triangulation; (3) the need for stereo calibration to be carried out; and (4) the need for photos of the initial status to be taken as a reference status. Therefore, this method is most suitable for laboratory tests or for in-situ tests in which the lighting environment is stable. It is not suitable if the following conditions exist: (1) Crack inspection in which initial condition photos are not available; this method is based on movement analysis that compares a reference (initial) status and a deformed status.
(2) Photos are taken by a single camera. This method requires the 3D positions of the control points, which are obtained by using stereo triangulation. (3) The cameras are uninstalled without performing stereo calibration, i.e., the camera parameters are not evaluated. This method requires the camera parameters to eliminate (or at least reduce) perspective and lens distortion. (4) The cameras are not firmly fixed. Even a slight movement of the cameras can lead to severe errors in the analysis results, as the accuracy of stereo triangulation and movement analysis may be much finer than a pixel. (5) The outdoor lighting environment changes radically. Movement analysis needs to be accurate and is sensitive to lighting changes between photos.

3. In-situ test

The proposed method was applied for concrete crack observation in an in-situ experiment on an old bridge named Niu-dou Bridge, located in Yilan County, Taiwan. While the specimen construction cost of a full-scale bridge pier test is normally high, this bridge was scheduled to be demolished and thus offered a relatively low-cost specimen for research purposes. The in-situ test was carried out for multiple purposes and drew researchers investigating various topics such as structural failure behavior, numerical modeling strategies, and experimental techniques. For instance, Liao et al. verified a structural health monitoring method using piezoceramic sensors [19], Chao and Loh demonstrated the
Fig. 10. Images taken by the two cameras: (a) left camera image; (b) right camera image.
Fig. 11. Definition of the ROI.
Fig. 12. Shadows of rugged surface and cables leading to incorrect template matching.
performance of their signal processing method [20], and Chiou et al. studied the lateral strength of bridge foundations in gravel [21]. In this test, a bridge pier named Pier 4 was loaded at its top with a cyclic displacement. Fig. 9(a) and (b) shows a 3D visualization model of the experiment setup and a field photo of the pier, respectively. The cameras were installed 5 m from the bottom of the pier; their information is listed in Table 1. Fig. 10(a) and (b) presents a pair of photos showing the bottom of the pier taken by the left and right cameras, respectively. The two photos of each pair seem identical but together contain sufficient three-dimensional information when processed by stereo triangulation with the aid of stereo calibration. The lower part of the bridge pier, including our ROI, was painted white with regular black grid lines and without speckle painting. This is a typical painting pattern for general reinforced concrete structural experiments when image analysis is not used. Speckle painting, or other types of painting that can provide sufficient image features to facilitate image analysis, was not used because some researchers preferred to use this conventional method (combined with image
analysis) for crack observation: painting the pier surface white, drawing black grids, and finally making pen marks on the cracks. The only image features left on the concrete surface were the black grid lines and the surface roughness. The black grid lines painted in this test are separated by intervals of 0.1 m along both the horizontal and vertical directions. The concrete surface of the ROI is rough in places, resulting in irregular light intensity and thin shadows that provide some distinguishing features for image analysis (see Fig. 10(a) and (b)). The ROI range used in this test was limited. Fig. 11 presents the ROI (region 1), which is a 50 cm by 70 cm curved rectangle near the bottom of the pier. Other regions in the photo were either highly distorted (e.g., region 2), lacked surface features for image analysis (e.g., region 3), or were affected by the swinging cables of contact sensors (region 4). In the highly distorted region (region 2), the calibration error would be magnified when carrying out metric rectification in the image analysis procedure, resulting in spurious image deformation and unacceptable strain estimation error. The image at region 3 contains only a bright white area between black grid lines and would not provide sufficient image features for a refined deformation analysis; an addition of speckle painting, which would add granularity and features to the white space, would make region 3 more suitable for the proposed method. The swinging cables connected to other contact sensors (region 4) would disturb the deformation analysis and lead to wrong displacement results because the cables have stronger image contrast than the pier surface. Region 1 in Fig. 11 was selected because it contained roughness on the concrete surface that provided more features for image analysis, had less calibration error, and would not be interfered with by the cables.
In this test, the cameras took photos automatically with a constant time interval of 60 s throughout the 5-h testing period. This paper only presents the first 48 min (25 steps) of the test because the sunlight started to appear in the ROI and the shadows of the rugged surface and the cables started to change one hour after the test began. The slowly moving shadows changed the ROI image patterns (as shown in Fig. 12) and led to incorrect optical flow analysis. Fig. 13 shows the pre-defined cyclic displacement history
Fig. 13. Selected experimental steps.
Fig. 14. Selected strain visualizations for crack observation: (a)–(j) steps A–J.
Fig. 15. Comparison of image analysis crack observation and pen strokes: (a) pen strokes of step D; (b) image analysis of step D; (c) pen strokes of step F; (d) image analysis of step F; (e) pen strokes of step I; (f) image analysis of step I.
that was applied on the top of the pier. The 25 steps shown in Fig. 13 were not carried out at a constant speed. The photos were mapped to the steps by matching the date/time tags of the photos with those of the experimental data. The three circles at steps D, F, and I denote the planned suspension points that allowed graduate students to approach the specimen and mark pen strokes on the cracks. Triangles denote ten selected steps (A to J) at which image analysis captured the deformation and cracks at the displacement peaks of the test. It should be mentioned that the displacement of the pier specimen was not smoothly controlled following a pre-defined protocol; rather, the test was suspended and resumed occasionally over its entire duration due to technical issues such as unexpected wired signal loss and hydraulic actuator control issues. With the date/time tags recorded by the cameras and the experimental data logger, the proposed method does not require continuous displacement to generate accurate results. Image analysis results of this test showed that concrete cracks were observable using the proposed method, and these cracks were observed much earlier than the dark crack lines that appeared in the photos. Fig. 14 shows the progressive development of cracks; the strain estimation was projected onto the photos. Red denotes a high positive value of estimated strain. While concrete typically cracks when it reaches a small tensile strain (normally less than 0.001), the red color indicates an estimated strain of 0.01, much higher than the strain that typical concrete material can sustain without cracking. Therefore, a red line implies that a concrete crack occurred at that location. Steps B, F, H, and I present relatively clear cracks, and they were at the displacement peaks in Fig. 13. Step I shows the most obvious crack distribution among these results, in which four horizontal cracks can be observed.
The image analysis revealed more crack development information than the pen marking method. Steps D, F, and I, during which manual pen marking was done, were selected and compared between the methods. Fig. 15(a), (c), and (e) shows the pen strokes of these three steps, while Fig. 15(b), (d), and (f) shows the corresponding image analysis results of strain estimation. The pen strokes in these figures were re-plotted on the images because the original pen strokes were too thin to be seen clearly in the small figures. All of the cracks that were observed by the naked eye were also observed in the image analysis. In addition, image analysis presented cracks earlier than the naked eye could see them. For example, two cracks that were marked by pen at step F (see Fig. 15(c)) were actually observed at the earlier step B in the image analysis (see Fig. 14(b)). Image analysis results of steps F, H, and J (see Fig. 14(f), (h), and (j), respectively) show cracks that were not observed by the naked eye until pen marking afterwards. In addition to the appearance of cracks and their rough locations, image analysis can estimate the development of the crack width. A crack induces a difference of displacement between the two sides of the crack. This displacement difference, which corresponds to the estimated crack width, can be calculated from the concentrated high strain value. The vertical and horizontal crack widths can be estimated from Eqs. (22) and (23), respectively. While the strain is non-dimensional, the crack width can be estimated by introducing Δy, the actual height of a cell. This work estimated horizontal crack widths as 2 e_yy Δy, in which Δy is 16.67 mm in this test. Δy can be calculated from the ratio of physical length (mm) to pixel, which is determined by ImPro Stereo automatically. This work estimated the widths of four observed cracks and obtained the development of the width of each crack, as shown in Fig. 16.
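The width conversion can be illustrated with a small synthetic example (hedged: the displacement jump and grid are invented; only the 16.67 mm cell height and the w = 2 e_yy Δy rule come from the text).

```python
import numpy as np

# A crack concentrated in one cell row produces a localized eyy spike.
dy = 16.67                      # cell height in mm (length-to-pixel ratio)
true_width = 0.2                # mm of crack opening to simulate

# Synthetic vertical displacement: a jump of `true_width` across row 10.
rows = np.arange(21)
v = np.where(rows > 10, true_width, 0.0)

# Central-difference eyy (Eq. (23)); the jump is smeared over two cells,
# so the peak strain is true_width / (2 * dy).
eyy = np.gradient(v, dy)

# Crack width from the peak strain: w = 2 * eyy * dy, as in the text.
width = 2 * eyy.max() * dy
print(round(width, 6))          # recovers the simulated opening
```

Note that the peak strain here (about 0.006) is well above the roughly 0.001 tensile strain at which concrete typically cracks, which is why the concentrated strain reliably marks a crack rather than elastic deformation.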
Fig. 16. History of the estimated widths of the four observed cracks (Cracks 1–4) in this experiment.

Fig. 17. Photos showing that Crack 1 was invisible to the naked eye: (a) initial state at the location where Crack 1 later appeared (no crack); (b) Step 2 at the same location (an invisible 0.2-mm crack in the red-dot region). (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

It should be noted that the ROI tended to be in a tensile state when the top displacement of the pier was positive, and the cracks were then expected to have opened wider. The steps during which the top displacement of the pier (see Fig. 13) was positive were 1–3, 5–7, 9–11, 13–15, 17–19, and 21–23 (white background in Fig. 16). Cracks were apparent in the image analysis when their widths exceeded 0.1 mm, using images taken by cameras as far as 5 m from the ROI; a 0.1-mm crack is difficult to observe with the unaided eye even when standing less than 1 m in front of the specimen. It should be mentioned that the crack widths shown in Fig. 16 were calculated semi-automatically: the widths were first computed automatically, and those significantly affected by data noise were re-calculated manually after canceling out the noise. Most of the cracks that occurred in the test were not visible in the photos and would not have been detected by edge-based crack detection, mainly because the cracks were too thin and there was considerable background noise, such as background grid lines and roughness on the concrete surface. Fig. 17(a) shows the location of Crack 1 (see Fig. 16) before the crack occurred, and Fig. 17(b) shows the same location, with the only difference being that the photo was taken when the crack was open; no dark line representing the crack can be observed directly in the photo. Using the proposed method, Fig. 16 shows that the width of Crack 1 was about 0.2 mm at Step 2. Because the physical distance per pixel in the photos of this test was about 1 mm per pixel (estimated between P1 and P4, see Eq. (20)), 0.2 mm is roughly equivalent to 0.2 pixels in the photos.
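The semi-automatic noise cleaning mentioned above was performed manually by the authors. As a hedged illustration (not the authors' procedure), an automatic alternative would be a moving-median filter over a crack-width history, which suppresses isolated noisy samples while preserving the overall trend:

```python
def moving_median(history, window=3):
    """Moving-median filter over a sequence of crack-width samples (mm).

    A sketch of one possible automatic noise-cancellation step; the paper's
    noisy width histories were instead re-calculated manually.
    """
    half = window // 2
    n = len(history)
    out = []
    for i in range(n):
        # Clamp the window at the sequence boundaries.
        seg = sorted(history[max(0, i - half):min(n, i + half + 1)])
        out.append(seg[len(seg) // 2])
    return out
```

A single spurious spike in the width history is replaced by a neighboring value, while genuine step-to-step growth of the crack width passes through unchanged.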
4. Summary

This paper presents the formulas and procedures of a novel image analysis method for crack observation in a concrete pier test. By using a cylinder-formula approximation and image rectification, the surface of the observed region can be unfolded into a plane image for subsequent displacement and deformation analysis. An experimental study of an in-situ cyclic bridge test showed that the proposed method could identify and visualize concrete cracks as thin as 0.2 pixels in the photos, even before dark crack lines appeared in the photos or became visible to the naked eye. The computer implementation of this method, ImPro Stereo, is publicly available and can be freely downloaded [12].

This method aims to minimize the need for manual pen marking of cracks. It does not aim to replace edge-based crack detection methods; rather, it seeks to complement them in hazardous scenarios. The method requires two firmly fixed cameras, camera parameters obtained by stereo calibration, photos of the initial condition, and a stable lighting environment. These conditions are relatively easy to satisfy in laboratories and can also be achieved in an outdoor environment.

Acknowledgements

The research work reported in this paper was partially supported by the National Center for Research on Earthquake Engineering and the Ministry of Science and Technology, Taiwan (project numbers NSC 99-2221-E-027-042-MY3 and NSC 101-2211-E-126-MY3).
References

[1] Manos GC, Theofanous M, Katakalos K. Numerical simulation of the shear behaviour of reinforced concrete rectangular beam specimens with or without FRP-strip shear reinforcement. Adv Eng Softw 2014;67:47–56.
[2] Wu CL, Kuo WW, Yang YS, Hwang SJ, Elwood KJ, Loh CH, Moehle JP. Collapse of a nonductile concrete frame: shaking table tests. Earthq Eng Struct Dynam 2009;38(2):205–24.
[3] Yang YS, Yang CM, Hsieh TJ. GPU parallelization of an object-oriented nonlinear dynamic structural analysis platform. Simul Model Pract Theor 2014;40:112–21.
[4] Abdel-Qader I, Abudayyeh O, Kelly ME. Analysis of edge-detection techniques for crack identification in bridges. J Comput Civ Eng 2003;17(4):255–63.
[5] Adhikari RS, Moselhi O, Bagchi A. Image-based retrieval of concrete crack properties for bridge inspection. Autom Construct 2014;39:180–94.
[6] Lee BY, Kim YY, Yi ST, Kim JK. Automated image processing technique for detecting and analysing concrete surface cracks. Struct Infrastruct Eng 2013;9(6):567–77.
[7] Arena A, Delle Piane C, Sarout J. A new computational approach to cracks quantification from 2D image analysis: application to micro-cracks description in rocks. Comput Geosci 2014;66:106–20.
[8] Nguyen HN, Kam TY, Cheng PY. An automatic approach for accurate edge detection of concrete crack utilizing 2D geometric features of crack. J Sign Process Syst 2013:1–20.
[9] Carroll JD, Abuzaid W, Lambros J, Sehitoglu H. High resolution digital image correlation measurements of strain accumulation in fatigue crack growth. Int J Fatigue 2013;57:140–50.
[10] Yang YS, Huang CW, Wu CL. A simple image-based strain measurement method for measuring the strain fields in an RC-wall experiment. Earthq Eng Struct Dynam 2012;41(1):1–17.
[11] Bradski G, Kaehler A. Learning OpenCV: computer vision with the OpenCV library. O'Reilly Media; 2008.
[12] Yang YS, Chen HM, Lee CS, Jien YH, Hu ZH, Lai WZ. ImPro Stereo. Official site of the ImPro Stereo; 2014.
[13] Bouguet JY. Camera calibration toolbox for Matlab; 2013.
[14] Hartley R, Zisserman A. Multiple view geometry in computer vision. Cambridge University Press; 2003.
[15] Pan B, Yu L, Wu D, Tang L. Systematic errors in two-dimensional digital image correlation due to lens distortion. Opt Lasers Eng 2013;51(2):140–7.
[16] Baker S, Matthews I. Lucas–Kanade 20 years on: a unifying framework. Int J Comput Vis 2004;56(3):221–55.
[17] Bouguet JY. Pyramidal implementation of the Lucas–Kanade feature tracker: description of the algorithm. Technical report, Microprocessor Research Labs, Intel Corporation; 2000.
[18] Burden RL, Faires JD. Numerical analysis. Boston: Brooks/Cole, Cengage Learning; 1993.
[19] Liao WI, Wang JX, Song G, Gu H, Olmi C, Mo YL, Loh CH. Structural health monitoring of concrete columns subjected to seismic excitations using piezoceramic-based sensors. Smart Mater Struct 2011;20(12):125015.
[20] Chao SH, Loh CH. Application of singular spectrum analysis to structural monitoring and damage diagnosis of bridges. Struct Infrastruct Eng 2014;10(6):708–27.
[21] Chiou JS, Ko YY, Hsu SY, Tsai YC. Testing and analysis of a laterally loaded bridge caisson foundation in gravel. Soils Found 2012;52(3):562–73.