Comparison and utilization of point cloud generated from photogrammetry and laser scanning: 3D world model for smart heavy equipment planning
Daeyoon Moon a, Suwan Chung a, Soonwook Kwon b,⁎, Jongwon Seo c, Joonghwan Shin a

a Department of Convergence Engineering for Future City, Sungkyunkwan University, Cheoncheon-dong, Jangan-gu, Suwon, Gyeonggi-do, Republic of Korea
b School of Civil, Architectural Engineering and Landscape Architecture, Sungkyunkwan University, Cheoncheon-dong, Jangan-gu, Suwon, Gyeonggi-do, Republic of Korea
c Department of Civil and Environmental Engineering, Hanyang University, 222 Wangsimni-ro, Seongdong-gu, Republic of Korea

⁎ Corresponding author. E-mail addresses: [email protected] (S. Kwon), [email protected] (J. Seo).
https://doi.org/10.1016/j.autcon.2018.07.020. Received 9 June 2017; received in revised form 12 July 2018; accepted 27 July 2018. © 2018 Elsevier B.V. All rights reserved.
ARTICLE INFO

Keywords: Earthwork; Terrestrial LiDAR; UAV; Image processing; Point cloud

ABSTRACT
Inaccurate terrain information is a major challenge in the earthwork process of construction projects. In large earthwork projects, both construction quality and productivity must be addressed through efficient construction information management in order to improve the cost-effectiveness of such projects. Research into technologies for creating precise three-dimensional data and maps of earthwork sites is progressing steadily; these technologies aim to enable unmanned operations and the effective management of earthworking equipment. In recent years, as the importance of three-dimensional (3D) shape information management has grown in the construction industry, research on and application of 3D point cloud acquisition methods have likewise increased. The current method of acquiring point cloud data through laser scanning makes it difficult to obtain point clouds in large construction projects, especially earthwork projects, due to the topographic conditions of the site as well as the physical and material limitations of the laser scanning equipment. To overcome and compensate for the limitations of laser scanning, image-processing technology involving unmanned aerial vehicles (UAVs) has been used to acquire point cloud data, although its application has been limited by its low accuracy. Therefore, this study proposes a method for generating and merging hybrid point cloud data acquired from laser scanning and UAV-based image processing. In addition, the datasets acquired from laser scanning and image processing are compared using examples from case studies. Finally, an analytical comparison is performed to verify the accuracy of UAV-based image processing technology for earthwork projects.
1. Introduction

In recent years, the growing importance of quality control and project management in the construction industry has resulted in an increased need to acquire three-dimensional (3D) image data [11]. More specifically, topographic data for construction sites based on 3D geometric information have been generated and used during the earthwork stage, which requires substantial heavy equipment, to establish accurate equipment operations and construction plans based on the actual working environment. Indeed, 3D topographic data also enable onsite surveying and earth-volume calculation, since the data reflect actual coordinate values [11]. In addition, as technical advancements have made it possible to generate 3D topographic data promptly and accurately, the data are actively used for various purposes, including real-time onsite monitoring and determining the progress of construction projects [18]. It is particularly important to create and
map the exact 3D topography of construction sites. This process will ultimately facilitate the technology needed for the automatic control and operation of earthwork equipment, which is linked to the technology for self-driving cars. This automation process will make it possible to calculate the amount of equipment required to match the daily workload and to plan ahead for the effective management of the workflow. This 3D geometric information is mainly acquired through laser scanning and photographic surveying technologies [12], and it is used for digital surface modeling (DSM), as-built building information modeling (BIM), digital elevation modeling (DEM), and digital terrain modeling (DTM) [3,11], all of which can be performed either by using the point cloud data directly or by processing the data for the intended purpose [11]. Laser scanning acquires the spatial coordinates of the surface of an object using the time-of-flight method, which measures the travel time of lasers through their emission and reflection [30]. As lasers directly
arrive at the surface of an object and are reflected from it, this technology can acquire spatial coordinates precisely, with an error range of only 1–10 mm. However, laser scanners are expensive, and their use may be limited by specific circumstances that can distort measurements, including the penetration and diffuse reflection of lasers [12,16]. While this technology certainly has limitations in terms of its usage, it has a high overall level of usability and is widely used in the manufacturing, shipping, construction, and aviation industries.

Photogrammetry acquires point cloud data by reorganizing 2D images with overlapping intervals into 3D point clouds. The point cloud data are acquired using techniques such as structure from motion (SfM), which recovers spatial coordinates from the overlap of multiple photos and the calculation of distances based on keypoints detected in the photos. Photogrammetry is currently used and researched as an alternative to light detection and ranging (LiDAR), since it can acquire point clouds easily and at low cost compared to LiDAR [16]. However, unlike laser scanning, photogrammetry does not involve a medium that directly measures the object. For this reason, it exhibits low measurement accuracy, with errors ranging from centimeters to meters [43]. Due to this lower accuracy, it is mainly used in broad topographic surveying, cartography, agriculture, and forestry.

The construction industry requires precision, and therefore point clouds are typically acquired using LiDAR. However, as the industry becomes increasingly large and complex, data acquisition by means of laser scanning presents some limitations. In particular, limitations arise from the complexity involved in transporting and installing laser scanners and from the complexity of the scanned objects [42]. In large spaces, due to the nature of laser scanners, a longer distance leads to a corresponding decline in the accuracy of the point cloud data as well as an increase in the number of scans and the time required for data acquisition; the formulation of a scanning plan prior to data acquisition is therefore essential. Moreover, data acquisition becomes difficult in areas where lasers are diffusely reflected or penetrate the material, which was mentioned earlier as a limitation of laser scanners. A new method is therefore required to resolve this problem. As it is necessary to acquire large amounts of data promptly and accurately while compensating for the areas that cannot be captured with laser scanners, acquiring point cloud data by means of photogrammetry using unmanned aerial vehicles (UAVs), which offer outstanding mobility, represents a viable alternative [30].

This study is part of preliminary research on digital model creation technology. It was conducted to support automated equipment operation planning on civil engineering work sites by creating the base data for a 3D world model using the data acquired from laser scanners and photogrammetry. First, data accuracy was verified by comparing precise laser scan data with photogrammetry data in order to propose hybrid data creation and optimization methods that allow both types of data to be used in an integrated manner.
For smart heavy equipment planning, the creation of a 3D world model through point cloud mesh processing is essential, because equipment operation and simulation are performed based on this model. In the 3D world model, the point cloud data is the base data used for the automated operation of civil engineering machinery. Data about the topography of the civil engineering work site is also required, but it does not need as high a resolution or density as the 3D data used in other processes, such as construction management and quality inspection. However, merged point cloud data presents problems in terms of mesh-processing times and interconnection with other software applications due to its large volume. Therefore, in this study, a data optimization method for merged point cloud data is proposed, and the applicability of the optimized data is verified.
2. Terrestrial laser scanning and UAV-based image processing research within civil engineering

As the accuracy and precision of broadband scanning increase due to technical advancements in laser scanners, research is actively underway on the use of 3D geometric information based on laser scanning in the construction industry. 3D geometric information consists of point clouds, each of which is made up of numerous data points. These studies aim to automatically create as-built models and BIM models based on point cloud data [1,2,4,5] and then use them in the construction industry. Furthermore, researchers are studying the expansion of application targets and fields by using various laser scanners in combination [38], and some are researching the generation of 3D maps and the updating of data through real-time processing of the data acquired from equipment [39]. As laser-scanning-based geometric information has the advantage of high resolution and accuracy, research has also been conducted on soil microtopography [6–9] and volume estimation [10,11] using such geometric information.

Laser scanning is costly due to the need for expensive scanning equipment and the skilled manpower necessary to acquire the initial basic data. It also consumes large amounts of time and money because it requires specialized programs, and workers who can use those programs, for the post-processing of the basic data and the production of images [12]. For this reason, research and technical development have recently begun on technologies that acquire 3D geometric information from photos, in order to obtain 3D geometric information with low-cost equipment and minimal manpower and expenditure. Photogrammetry reorganizes objects into 3D shapes using multiple photos. This technology can be divided into traditional photogrammetry, which reorganizes objects into 3D shapes based on prior knowledge of the camera position and orientation, and SfM, which simultaneously estimates the camera poses and reconstructs the 3D structure using feature detection and matching techniques [13–16].

Unmanned aerial vehicles have increasingly been used for the data acquisition necessary for photogrammetry because they cover relatively larger working areas than conventional ground surveying equipment while being less expensive than traditional aerial surveying equipment. UAVs are recognized as a technology that can either replace or complement existing surveying equipment [17], and they are used for topographic surveying in various fields, including civil engineering [18–20], the management and restoration of cultural heritage [12], disaster prevention [21,22], agriculture [23], and mining [24]. The recent market growth and technical development of UAVs have also led to the increased use of UAV-based photogrammetry in the construction industry. Existing UAV-based photogrammetry has been employed to perform onsite surveying or to promptly establish 3D spatial information from topographic changes as construction progresses at the construction site [18]. In addition, studies based on photogrammetry include the measurement of cracks in reinforced concrete (RC) members [25], the monitoring of bridges and structures [26], risk assessment for gas and oil pipelines [27], and the evaluation of usability in relation to the safety assessment of construction sites. As construction projects are typically large in scale and involve onsite complexity, numerous obstacles exist to data acquisition.
Therefore, using only laser scanning technology lowers the efficiency of data acquisition due to equipment-related and regional limitations. Studies focusing on the technical integration of photogrammetry and laser scanning have been conducted to overcome this problem. Rather than using each technology separately, this approach maintains high accuracy by using the LiDAR data and increases both the accuracy of the photogrammetry and the amount of data acquired by additionally merging the data acquired from UAV-based photogrammetry with the LiDAR data. Previous studies concerning the integration of photogrammetry and terrestrial laser scanning (TLS) have discussed the applicability and usability of this approach by
evaluating its usefulness through comparisons of datasets. These comparisons were conducted either by comparing surface models, such as the digital elevation model (DEM) and the digital surface model (DSM) [28,29,36], or by comparing point clouds [30,31,37]. In many previous studies [28,31,36], the results obtained using the two methods were compared by creating and comparing surface models rather than by merging the data at the point cloud stage. The studies on data integration at the point cloud data stage proposed TLS and photogrammetry data creation and integration schemes and comparatively verified the results. However, DEM and DSM creation has limitations due to the differences in the point cloud density acquired using TLS and SfM [30]. Targets such as cultural heritage recording, displacement measurement, and change prediction require highly dense point cloud data for the creation of surface models. In this study, the targets were limited to earthworks, with the purpose of investigating a) a data integration technique at the point cloud data stage to allow for the planning of unmanned equipment operation and b) a point cloud minimization scheme to maximize interconnection and utilization in post-processing through noise elimination, triangular mesh generation, or contour layer creation. Table 1 presents an analysis of the literature on the integration of TLS and photogrammetry.

3. Scope and methodology

The purpose of this study was to evaluate the potential and usability of data integration by comparing data processed through photogrammetry against laser scanning data, with a focus on earthworks. The study also aimed to increase the utility of point cloud data by proposing a method for optimizing the creation and utilization of integrated hybrid point cloud data, as shown in Fig. 1. In this paper, we propose a full process and methodology for the creation of world models of earthworks: 1) image processing for converting the 2D images acquired through a UAV into 3D point cloud data; 2) data registration, such as scale and coordinate transformation of the photogrammetric data, for merging with the laser scan data; and 3) data optimization for facilitating interoperation with other software and data handling.

Fig. 1. Research process.

First, targets were installed for the ground control points (GCPs), and the coordinate information for these GCPs was acquired. The 2D images were acquired from the automated flight of the UAV, the flight plan having first been set. The 3D point cloud was created through image processing. In this study, feature detection and matching were implemented using the SIFT algorithm, and 3D scene reconstruction was implemented using Bundler [32] and CMVS/PMVS2 [33]. Laser scan data, which is on a 1:1 scale, has very high accuracy; in this study, the photogrammetry data were evaluated using the TLS data as the reference data. Furthermore, we optimized the data by setting a minimum distance between points and removing points closer together than the specified value, and we conducted the experiment by gradually increasing the distance between points of the data set up to the level at which the maximum error was not exceeded.

4. Comparison of TLS data and UAV-based image processing data

4.1. Reference data

Data created by laser scanning is on a 1:1 scale, and its accuracy and data processing time differ according to the resolution setting. However, it has the advantage that the acquired data has a higher precision level than data acquired by image processing.
This study therefore used the TLS data as its reference data and used baseline markers (datum points provided by the National Geographic Information Institute) to verify its accuracy. As each datum point has unique x, y, and z values, the accurate distance between the markers can be computed, and this is used for testing the
Table 1. Literature analysis on the integration of TLS with photogrammetry.

Study | Research summary | Final output(s) | Application area | Differentiation
H. Eisenbeiss [28] (2006) | Creation and comparison of DSMs using TLS and UAV imagery | DSM | Spatial cultural heritage modeling | –
K. Lambers et al. [29] (2007) | Use of a 3D space model integrating TLS with photogrammetry | DSM, orthoimage, textured 3D models | Spatial archaeological analysis | Merging in the point cloud data stage; proposing a data-linking scheme for integrated data when data is large
Xu et al. [30] (2014) | 3D reconstruction using TLS and UAV-based SfM and comparative verification | Point cloud data | Spatial cultural heritage modeling | Merging in the point cloud data stage; determining differences in point cloud density between techniques
D. Mader [31] (2015) | Creation and comparison of PCD through UAV-based laser scanning and photogrammetry | Point cloud data | 3D bridge reconstruction | Determining differences in point cloud density between techniques
A. Prokop et al. [37] (2015) | TLS, total station, and dynamic avalanche modeling using photogrammetry | Point cloud data, DEM | Dynamic avalanche model | Merging in the point cloud data stage
B. Joshua et al. [36] (2016) | Creation and comparison of DEMs using TLS and photogrammetry | DEM | Coastal change analysis | –
performance of the measurement equipment. Leica C5 scanning equipment was used to obtain the reference data, with the resolution set at a medium level so that points were created at 4 mm intervals over shooting ranges of 0 to 40 m. We checked the performance by comparing the distance between the baseline markers, as indicated by the point cloud, with the actual distance between the markers; the error was approximately 2 mm. In addition, we compared the distances between columns, the column heights, and the distances between the stations of a neighboring building against the actual measured values. In this comparison, the average error was 3 mm, as shown in Fig. 2 and Table 2.
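As a minimal illustration of this check, the sketch below computes the distance between two points picked from the scan and its deviation from a known reference distance, in the spirit of the baseline-marker and column comparisons above. The coordinates and the function name are illustrative placeholders, not values or code from the study.

```python
import numpy as np

def scan_vs_reference(p1, p2, reference_m):
    """Distance between two picked scan points and its error against a known
    reference distance (as done for the baseline markers and building columns)."""
    scan_dist = np.linalg.norm(np.asarray(p1) - np.asarray(p2))
    return scan_dist, scan_dist - reference_m

# Placeholder marker coordinates picked from the TLS cloud (x, y, z in metres).
marker_a = (0.000, 0.000, 0.125)
marker_b = (39.512, 4.901, 0.118)
dist, err = scan_vs_reference(marker_a, marker_b, reference_m=39.813)
print(f"scan distance {dist:.3f} m, error {err:+.3f} m")
```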
4.2. Generation of 3D point clouds

4.2.1. Image acquisition

3D point cloud data was created for the earthwork project on a construction site using laser scanning and photogrammetry. For the photogrammetry, images of the objects were acquired using a DJI Phantom 3 with a built-in Sony EXMOR camera (12 megapixels, CCD width 6.16 mm, f/2.8). The ground sampling distance (GSD) was 0.86 cm/px for the images acquired with the UAV. The Pix4D flight planning tool was used for image acquisition, with the altitude set at 20 m and the image overlap at 70%. The waypoints for the two objects were set in advance before the flight, as shown in Fig. 3.
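The reported GSD of 0.86 cm/px follows from the flight altitude and camera geometry. The sketch below shows the standard GSD relation under stated assumptions: the sensor width (6.16 mm) and altitude (20 m) are taken from the text, while the focal length (about 3.61 mm) and image width (4000 px) are assumed values typical of the Phantom 3 camera that the paper does not state.

```python
def ground_sampling_distance(sensor_width_mm, focal_length_mm, flight_height_m, image_width_px):
    """GSD (cm/px) = (sensor width x flight height x 100) / (focal length x image width)."""
    return (sensor_width_mm * flight_height_m * 100.0) / (focal_length_mm * image_width_px)

# Sensor width (6.16 mm) and altitude (20 m) are from the paper; the focal length
# and image width are assumed, typical Phantom 3 values.
gsd = ground_sampling_distance(6.16, 3.61, 20.0, 4000)
print(f"GSD ~ {gsd:.2f} cm/px")  # roughly 0.85 cm/px, close to the reported 0.86 cm/px
```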
Fig. 2. Verifying accuracy for setting of reference data (TLS).
Table 2. Error analysis of reference data.

Item | Measured distance | Laser scan | Error
Distance between columns (A) | 5.828 m | 5.831 m | +0.003 m
Column height (B) | 4.185 m | 4.188 m | +0.003 m
Distance between stations (C) | 8.070 m | 8.066 m | −0.004 m
Distance between baseline markers (D) | 39.813 m | 39.815 m | −0.002 m
Fig. 3. Flight planning tool by Pix4D: altitude (20 m), camera orientation (vertical), image overlap (70%), flight speed (medium).
Fig. 4. Ground control point (GCP) target.
4.2.2. Image processing

Image processing of the acquired images proceeded in the following order: feature detection and matching, bundle adjustment, sparse reconstruction, GCP coordinate information input and coordinate conversion, and dense scene reconstruction. The Patch-based Multi-View Stereo (PMVS) parameters were set to a threshold of 0.7, a cell size of 2, and an image pyramid level of 1. For coordinate referencing with the reference data, GCP targets of 2 m × 2 m in size were installed before shooting, as seen in Fig. 4. GCP coordinate-based rotation and shift were implemented for the created point cloud data. Coordinate values of the reference data (TLS data) were used for the GCP coordinates, as shown in Table 3.
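The study implements feature detection and matching with SIFT before bundle adjustment in Bundler and dense reconstruction with CMVS/PMVS2. The sketch below illustrates only that first step, using OpenCV's SIFT as a stand-in (the paper does not state that OpenCV was used); the image file names are placeholders, and an OpenCV build with SIFT support is assumed.

```python
import cv2

# Detect SIFT keypoints in two overlapping UAV images and match them with a
# Lowe ratio test. This mirrors only the feature detection/matching step of the
# SfM pipeline; Bundler and CMVS/PMVS2 handle bundle adjustment and dense reconstruction.
img1 = cv2.imread("uav_001.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file names
img2 = cv2.imread("uav_002.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)

# Keep matches whose best distance is clearly smaller than the second-best distance.
good = [m for m, n in matches if m.distance < 0.7 * n.distance]
print(f"{len(good)} putative correspondences between the two images")
```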
Table 3. Ground control point (GCP) coordinates based on TLS points (TLS coordinate system).

GCP | X | Y | Z
GCP1 | 11.061032 | −12.355436 | −0.183851
GCP2 | 10.940860 | −11.754832 | −0.418477
GCP3 | 6.173809 | −13.332584 | 0.257104
GCP4 | 3.083943 | −8.948701 | −0.575688
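For the GCP coordinate-based rotation and shift, one common way to estimate the transform is a least-squares similarity fit (Kabsch/Umeyama) over the matched GCP centers. The sketch below assumes this approach, which the paper does not spell out; the TLS-side coordinates are taken from Table 3, while the photogrammetry-side GCP coordinates and the dense cloud are placeholders.

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    mapping src -> dst, estimated from matched GCP centers (Kabsch/Umeyama)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)                 # 3x3 cross-covariance
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])   # keep a proper rotation
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (src_c ** 2).sum()         # isotropic scale
    t = mu_d - s * R @ mu_s
    return s, R, t

# TLS-side GCP coordinates from Table 3.
gcp_tls = np.array([
    [11.061032, -12.355436, -0.183851],
    [10.940860, -11.754832, -0.418477],
    [ 6.173809, -13.332584,  0.257104],
    [ 3.083943,  -8.948701, -0.575688],
])
# Matching GCP centers picked in the photogrammetric cloud (placeholder values).
gcp_photo = np.array([
    [0.52, 1.87, 0.10],
    [0.49, 1.92, 0.07],
    [0.95, 1.60, 0.12],
    [1.21, 2.05, 0.05],
])

s, R, t = fit_similarity(gcp_photo, gcp_tls)
photo_cloud = np.random.rand(1000, 3)                          # stand-in for the dense cloud
registered = (s * (R @ photo_cloud.T)).T + t                   # apply scale, rotation, shift
```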
Table 4. Test overview.

Equipment | Number | Range | Acquisition time | Processing time | Material
TLS | 4 stations | 45 m × 30 m (26 m × 15 m comparison area) | 20 min | 26 min | Soil
UAV (image) | 73 images | 64 m × 39 m (26 m × 15 m comparison area) | 5 min | 32 min | Soil
4.3. Data registration

In order to create hybrid data, whereby inadequate TLS data are complemented by photogrammetry data, precise registration between the two data sets is required. Target-based, feature-based, and ICP-based registration are the methods most frequently used for registering point cloud data sets [40], and several studies have used these methods in combination [41]. Point matching registration, which is a technique for combining two data sets into one, first selects a matching point in each data set and then, using this point as the reference, makes the necessary adjustments [34]. The ICP registration technique searches for pairs of nearest adjacent points in the two data sets and then calculates the transformation parameters between them [35]. The point cloud transformation consists of a rotation matrix R (Eq. (1)) and a translation vector T (Eq. (2)) and is used to transform the position and orientation of the point cloud. In Eq. (1), α, β, and γ are the rotation angles about the z, y, and x axes, respectively, and in Eq. (2), t_x, t_y, and t_z are the displacements.

R = \begin{bmatrix} \cos\alpha & -\sin\alpha & 0 \\ \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\gamma & -\sin\gamma \\ 0 & \sin\gamma & \cos\gamma \end{bmatrix}   (1)

T = \begin{bmatrix} t_x & t_y & t_z \end{bmatrix}^T   (2)

This study implemented both the point matching registration and the ICP registration using the open-source software CloudCompare. Eqs. (3) and (4) are the transformation matrices that were applied in the point matching and ICP registration, respectively.

\text{Point matching transformation matrix} = \begin{bmatrix} 2.50 & 0.08 & 0.55 & -13.473 \\ 0.43 & 1.35 & -2.13 & -0.019 \\ -0.35 & 2.18 & 1.30 & 30.617 \\ 0 & 0 & 0 & 1 \end{bmatrix}   (3)

Table 5. A comparison of the accuracy distribution before and after the ICP registration.

Result data | Below 10 cm | 10–20 cm | Above 20 cm
Point matching registration | 82.723% (720,202 points) | 8.590% (74,782 points) | 8.687% (75,636 points)
Point matching + ICP registration | 86.604% (753,995 points) | 5.249% (45,698 points) | 8.147% (70,927 points)
Fig. 5. Splitting of the dense point cloud: (a) dense point cloud registered with the matching point method: accuracy distribution 0–10 cm (left), 10–20 cm (center), and above 20 cm (right); (b) dense point cloud registered with the matching point and ICP methods: accuracy distribution 0–10 cm (left), 10–20 cm (center), and above 20 cm (right).
Fig. 6. (a) TLS data, (b) photogrammetry data above 20 cm, and (c) the result of merging (a) and (b).
Fig. 7. Cloud-to-cloud absolute distance (a) between TLS-based GCP points and GCP points registered with the matching point method (x, y, z axes and xyz), and (b) between TLS-based GCP points and GCP points registered with the matching point and ICP methods (x, y, z axes and xyz).
\text{ICP transformation matrix} = \begin{bmatrix} 0.995490 & -0.001366 & -0.001805 & -0.005347 \\ 0.001378 & 0.995472 & 0.006358 & -0.079558 \\ 0.001796 & -0.006360 & 0.995472 & -0.076208 \\ 0 & 0 & 0 & 1 \end{bmatrix}   (4)
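The registration itself was performed in CloudCompare. Purely as an illustration of Eqs. (1)–(4), the sketch below composes R and T into a 4 × 4 homogeneous matrix, applies it to a cloud, and performs a single nearest-neighbour, point-to-point ICP refinement step. The data, the rotation angles, and the simple ICP variant are assumptions made for the example, not the study's implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def rotation_zyx(alpha, beta, gamma):
    """R = Rz(alpha) @ Ry(beta) @ Rx(gamma), as in Eq. (1)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rz = np.array([[ca, -sa, 0], [sa, ca, 0], [0, 0, 1]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rx = np.array([[1, 0, 0], [0, cg, -sg], [0, sg, cg]])
    return Rz @ Ry @ Rx

def homogeneous(R, t):
    """Pack R (Eq. (1)) and T (Eq. (2)) into a 4x4 matrix like Eqs. (3) and (4)."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = t
    return M

def apply_transform(M, points):
    """Apply a 4x4 homogeneous transform to an N x 3 point cloud."""
    return (points @ M[:3, :3].T) + M[:3, 3]

def icp_step(source, target):
    """One ICP iteration: pair each source point with its nearest target point,
    then solve for the rigid transform that best aligns the pairs (Kabsch)."""
    idx = cKDTree(target).query(source)[1]
    paired = target[idx]
    mu_s, mu_t = source.mean(axis=0), paired.mean(axis=0)
    U, _, Vt = np.linalg.svd((paired - mu_t).T @ (source - mu_s))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt
    t = mu_t - R @ mu_s
    return homogeneous(R, t)

# Example: a coarse alignment (as after point matching), then one ICP refinement.
tls = np.random.rand(5000, 3) * 20                       # stand-in for the TLS cloud
photo = apply_transform(homogeneous(rotation_zyx(0.01, -0.02, 0.005),
                                    np.array([0.05, -0.08, 0.02])), tls)
M_icp = icp_step(photo, tls)                             # refine photo toward the TLS cloud
photo_refined = apply_transform(M_icp, photo)
```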
4.4. Data optimization

TLS and photogrammetry data have different densities, and the hybrid data created through the registration of the two data types requires smooth interoperation with other software applications and processes to create a 3D world model. Furthermore, a scheme to minimize the point density and data volume should be considered. Typical methods for decreasing the density of point cloud data are to set a minimum distance between the points of the data set or to create cells and keep only the points close to the cell centers [44,45]. In this study, the data was optimized by setting a minimum distance between the points and removing points closer together than the specified value. The minimum error value (based on the z-axis) for the creation of the 3D world model was set, and data sets satisfying the error value were created by varying the distance between the points.

5. Experimental work & results

5.1. Verification of merged point cloud data

Using TLS and UAV-based image processing, we compared the distance error between the two data sets at a site where earthwork and bridge construction were simultaneously underway, using a point-to-point comparison. The TLS data for four stations were acquired at the medium resolution level, and data registration was implemented using the same
method for each station. Data acquisition took a total of 20 min, and the data processing time was 26 min. For the photogrammetry, a total of 73 images were acquired from the UAV; the flight time was 5 min, and the image processing time was 32 min, as shown in Table 4.

To measure the point cloud distance, the UAV and TLS data were overlaid and a maximum distance was set. The distance to the UAV data was measured based on the points of the TLS data: for each TLS point, the distance to the closest UAV point was measured, and points farther apart than the set maximum distance were excluded from the distance calculation. Furthermore, we identified the error values by expressing the distances between points as a distribution and confirmed the effective data generation rate satisfying the target error value of this study. Color differences reflect the distance between points, and the data visualization interval was divided into below 10 cm and between 10 cm and 20 cm. This interval reflects the public measurement error range of ±10 cm and the slope earthwork error range of ±20 cm defined in the specifications of the National Geographic Information Institute, and it was also set so as not to exceed the average GPS error range of 20–30 cm. The standard can be changed depending on the purpose and type of use in an earthwork project. In this study, point cloud data below 10 cm and between 10 cm and 20 cm were regarded as effective data and categorized as usable according to this standard.

To test the effectiveness of the data in the overlapping section, we first defined the area and compared the data for this area. After overlaying the TLS data and the photogrammetry data, we implemented point matching registration to increase the data coherence. As a point matching method, one can either choose features of geometric objects or use targets. The accurate selection of matching points among hundreds of thousands of potential points requires a great deal of time and effort; hence, this study used the GCP targets as the matching points for implementing point matching registration between the TLS data and the photogrammetry data, rather than using features. To increase the data coherence, we also implemented ICP registration.

Table 5 presents the point distribution both before and after the ICP registration, and Fig. 5 shows the splitting of the dense point cloud: Fig. 5(a) depicts the dense point cloud registered with the matching point method, and Fig. 5(b) depicts the dense point cloud registered with the matching point and ICP methods. In the case of point matching registration only, the distribution of distance differences was 82.723% below 10 cm and 8.590% between 10 and 20 cm, meaning that 91.313% of the data were effective. When the ICP registration was implemented in addition to the point matching registration, the distribution was 86.604% below 10 cm and 5.249% between 10 and 20 cm, meaning that 91.853% of the created data were effective. Compared to point matching only, the data below 10 cm showed a denser distribution, i.e., a smaller error range among the effective data. Data with distance differences of over 20 cm resulted from a lack of accuracy and from noise in the image processing. The main reason for this is the blind spots in the TLS data where no data are created; in these areas the distance between points cannot be calculated, or the error range exceeds that of the effective data, as shown in Fig. 6.

The GCP coordinates were also compared for accuracy measurement. The GCP coordinates used in the comparison were those of the TLS point cloud, the dense point cloud registered by point matching, and the dense point cloud registered by point matching and ICP. Fig. 7 presents the point-to-point distances between the TLS-based GCPs and the photogrammetry-based GCPs; the distances were calculated based on the X, Y, Z, and XYZ values. The x-axis of the graph represents the distance between the points in meters, and the y-axis represents the number of points. In the case of the photogrammetry-based GCPs using point matching registration only, the cloud-to-cloud distances (xyz) were −0.054 m, −0.0675 m, −0.072 m, and 0.0765 m. When the ICP registration was implemented in addition to the point matching registration, the cloud-to-cloud distances (xyz) were −0.04 m (two points), 0.065 m, and 0.117 m. After ICP registration, three of the four coordinates improved in accuracy (by −0.014, −0.0275, and −0.007 m), and the average over the coordinates improved by 0.002 m (Fig. 7).
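A minimal sketch of the cloud-to-cloud distance analysis described above is given below: for each TLS point, the nearest UAV point within a cutoff is found, and the distances are binned into the effective-data classes (below 10 cm, 10–20 cm, above 20 cm). The clouds, the 0.30 m cutoff, and the handling of unmatched points are illustrative assumptions; the paper does not specify the tool used for this step, and this code is only a stand-in.

```python
import numpy as np
from scipy.spatial import cKDTree

def c2c_distribution(reference, compared, max_dist=0.30):
    """Nearest-neighbour (cloud-to-cloud) distances from each reference point to
    the compared cloud, ignoring points farther than max_dist, then binned into
    the effective-data classes used in this study (<10 cm, 10-20 cm, >20 cm)."""
    dists, _ = cKDTree(compared).query(reference, distance_upper_bound=max_dist)
    dists = dists[np.isfinite(dists)]                # drop points beyond the cutoff
    bins = {
        "below 10 cm": np.mean(dists < 0.10),
        "10-20 cm":    np.mean((dists >= 0.10) & (dists < 0.20)),
        "above 20 cm": np.mean(dists >= 0.20),
    }
    return {k: f"{100 * v:.3f}%" for k, v in bins.items()}

# tls_points and uav_points stand in for the registered N x 3 clouds (placeholders).
tls_points = np.random.rand(10000, 3) * 20
uav_points = tls_points + np.random.normal(scale=0.05, size=tls_points.shape)
print(c2c_distribution(tls_points, uav_points))
```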
5.2. Verification of optimized hybrid point cloud data

The distances between the points and the error values were set to optimize the hybrid point cloud data. The error value for creating the 3D world model was set to ±20 cm (see Section 5.1). The minimum distance between the points of the data set was increased in 1 cm intervals, and points closer together than the specified distance were removed. The error value of the created data set was compared with that of the original data set, as determined by the distance between neighboring data points based on the Z value. As shown in Table 6, the number of points was 1,410,067, approximately 79.1% of the original, when the minimum distance between the points was set to 1 cm. This number decreased to 855,072 (48.0%) at a minimum distance of 2 cm, 446,892 (25.1%) at 3 cm, 262,440 (14.7%) at 4 cm, and 172,204 (9.7%) at 5 cm. When comparing the shape and error value of the data along a straight line based on the Z value, the shape deviated more from the original as the distance between the points increased, and the maximum error value also increased, as shown in Fig. 8. At the 1 cm (0.05 m), 2 cm (0.11 m), 3 cm (0.12 m), and 4 cm (0.15 m) distances, the error values were within the set value. However, at the 5 cm distance, the error value was 0.23 m, which was outside the specified range.

Table 6. A comparison of point density before and after data optimization.

Minimum distance between points | Number of points (original PCD: 1,781,633) | Error
1 cm | 1,410,067 | 0.05 m
2 cm | 855,072 | 0.11 m
3 cm | 446,892 | 0.12 m
4 cm | 262,440 | 0.15 m
5 cm | 172,204 | 0.23 m
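The minimum-distance filtering in Sections 4.4 and 5.2 can be realised in several ways; the sketch below shows one possible greedy implementation with a KD-tree, together with an approximate Z-error check against the reduced cloud. The greedy suppression strategy, the XY-nearest-neighbour error metric, and the synthetic cloud are assumptions, since the paper does not describe the exact algorithm used.

```python
import numpy as np
from scipy.spatial import cKDTree

def min_distance_subsample(points, min_dist):
    """Greedy spatial subsampling: keep a point only if every previously kept
    point is at least min_dist away (one way to realise the minimum-distance
    filtering described in Section 4.4)."""
    tree = cKDTree(points)
    keep = np.ones(len(points), dtype=bool)
    for i in range(len(points)):
        if keep[i]:
            # Suppress every not-yet-visited neighbour closer than min_dist.
            for j in tree.query_ball_point(points[i], r=min_dist):
                if j > i:
                    keep[j] = False
    return points[keep]

def max_z_error(original, reduced):
    """Maximum vertical deviation of the original cloud from the reduced cloud,
    measured at the nearest remaining point in XY (cf. the Z-based check in 5.2)."""
    idx = cKDTree(reduced[:, :2]).query(original[:, :2])[1]
    return np.abs(original[:, 2] - reduced[idx, 2]).max()

# Sweep the spacing in 1 cm steps, as in Table 6, and report size and Z error.
cloud = np.random.rand(20000, 3) * [26.0, 15.0, 2.0]     # stand-in for the hybrid PCD
for d in (0.01, 0.02, 0.03, 0.04, 0.05):
    reduced = min_distance_subsample(cloud, d)
    err = max_z_error(cloud, reduced)
    print(f"min spacing {d*100:.0f} cm: {len(reduced)} points, max Z error {err:.3f} m")
```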
Fig. 8. A comparison of the original hybrid PCD and the optimized PCD.
6. Conclusion
The usefulness of point cloud data that depends only on photogrammetry is limited in the construction industry; its shortcomings relate to environment-related errors at the construction site and the lower accuracy of photogrammetry relative to TLS. This study conducted an accuracy test and proposed an improvement by combining TLS data with photogrammetry-based point cloud data, thereby aiming to create a model suited to the unmanned, automated operation of civil engineering equipment in construction works. GCP targets were installed to increase the accuracy of the image processing as well as the coherence with the TLS data. The TLS data were used as the reference data for coordinating the data, and the GCP targets were used as the matching points in the point matching registration. The distance differences between the data sets were analyzed after the additional implementation of ICP registration, and the data distribution below 10 cm was found to have improved from 82.723% to 86.604%. In the case of data that fell outside the 10 cm error range, the distance differences were large because no data were created in the blind spots that occurred at the time of the TLS data acquisition.

The hybrid point cloud data, in which TLS data and photogrammetry data were integrated, was used as the base data for automated equipment operation in earthworks. A data optimization method was proposed to maximize the utilization of this data in other software applications and tasks, and the error values and shapes were compared as the point cloud density decreased. This study confirmed the usability of photogrammetry data in earthwork situations and suggested a data creation and registration method that can compensate for the geographic and physical limitations of TLS technology. Ultimately, the paper proposed point cloud-based model creation and demonstrated its usability by creating hybrid point cloud data. Through this, diverse applications, including the creation of a 3D surface model, field surveying, and the computation of earthwork volumes, become possible. For the model for the unmanned automatic operation of equipment, additional work is required on the optimization of point cloud data, clarifying the error tolerance according to the types of equipment and construction.

Acknowledgements

This work was financially supported by the Korea Ministry of Land, Infrastructure and Transport (MOLIT) under the U-City Master and Doctor Course Grant Program, and by a Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Land, Infrastructure and Transport (Grant 17SCIP-B079689-04).
References

[1] S.W. Kwon, F. Bosche, C.W. Kim, C.T. Haas, K.A. Liapi, Fitting range data to primitives for rapid local 3D modeling using sparse range point clouds, Autom. Constr. 13 (2004) 67–81.
[2] P. Tang, D. Huber, B. Akinci, R. Lipman, A. Lytle, Automatic reconstruction of as-built building information models from laser-scanned point clouds: a review of related techniques, Autom. Constr. 19 (2010) 829–843.
[3] S. Gehrke, K. Morin, M. Downey, N. Boehrer, T. Fuchs, Semi-global matching: an alternative to LIDAR for DSM generation, Proceedings of the 2010 Canadian Geomatics Conference and Symposium of Commission I, vol. 2, 2010, p. 6.
[4] F. Bosché, Automated recognition of 3D CAD model objects in laser scans and calculation of as-built dimensions for dimensional compliance control in construction, Adv. Eng. Inform. 24 (2010) 107–118.
[5] X. Xiong, A. Adan, B. Akinci, D. Huber, Automatic creation of semantically rich 3D building models from laser scanner data, Autom. Constr. 31 (2013) 325–337.
[6] J.J. Roering, L.L. Stimely, B.H. Mackey, D.A. Schmidt, Using DInSAR, airborne LiDAR, and archival air photos to quantify landsliding and sediment transport, Geophys. Res. Lett. 36 (2009) (Article no. L19402).
[7] J.U.H. Eitel, C.J. Williams, L.A. Vierling, O.Z. Al-Hamdan, F.B. Pierson, Suitability of terrestrial laser scanning for studying surface roughness effects on concentrated flow erosion processes in rangelands, Catena 87 (2011) 398–407.
[8] C. Castillo, R. Perez, M.R. James, J.N. Quinton, E.V. Taguas, J.A. Gomez, Comparing the accuracy of several field methods for measuring gully erosion, Soil Sci. Soc. Am. J. 76 (2012) 1319–1332.
[9] J.B. Sankey, S. Ravi, C.S.A. Wallace, R.H. Webb, T.E. Huxman, Quantifying soil surface change in degraded drylands: shrub encroachment and effects of fire and vegetation removal in a desert grassland, J. Geophys. Res. Biogeosci. 117 (2012) 1–11.
[10] K.T. Slattery, D.K. Slattery, Modeling earth surfaces for highway earthwork computation using terrestrial laser scanning, Int. J. Constr. Educ. Res. 9 (2) (2013) 132–146.
[11] J.-C. Du, H.-C. Teng, 3D laser scanning and GPS technology for landslide earthwork volume estimation, Autom. Constr. 16 (5) (2007) 657–663.
[12] J.B. Koo, The study on recording method for buried cultural property using photo scanning technique, Journal of Digital Contents Society 16 (5) (2015) 835–847.
[13] C. Harris, M. Stephens, A combined corner and edge detector, Proceedings, Alvey Vision Conference, Manchester, UK, 1988.
[14] D.G. Lowe, Object recognition from local scale-invariant features, Proceedings of the Seventh IEEE International Conference on Computer Vision, September 20–27, Kerkyra, 1999, pp. 1150–1157.
[15] H. Bay, T. Tuytelaars, L. Van Gool, SURF: speeded up robust features, Computer Vision–ECCV, Springer, 2006, pp. 404–417.
[16] S.K. Nouwakpo, M.A. Weltz, K. McGwire, Assessing the performance of structure-from-motion photogrammetry and terrestrial LiDAR for reconstructing soil surface microtopography of naturally vegetated plots, Earth Surf. Process. Landf. 41 (2016) 308–322.
[17] S. Siebert, J. Teizer, Mobile 3D mapping for surveying earthwork projects using an Unmanned Aerial Vehicle (UAV) system, Autom. Constr. 41 (2014) 1–14.
[18] M.H. Park, S.G. Kim, S.Y. Choi, The study about building method of geospatial informations at construction sites by unmanned aircraft system (UAS), Korean Association of Cadastre Information 15 (1) (2013) 145–156.
[19] A. Lucieer, S. de Jong, D. Turner, Mapping landslide displacements using Structure from Motion (SfM) and image correlation of multi-temporal UAV photography, Prog. Phys. Geogr. (2013) 0309133313515293.
[20] H.K. Lee, K.Y. Jung, Applicability evaluation of earth volume calculation using unmanned aerial image, Asia Pac. J. Multimed. Serv. Converg. Art Humanit. Sociol. 5 (5) (2015) 497–504.
[21] G. Astuti, D. Longo, C.D. Melita, G. Muscato, A. Orlando, HIL tuning of UAV for exploration of risky environments, Int. J. Adv. Robot. Syst. 5 (4) (2008) 419–424.
[22] A. Birk, B. Wiggerich, H. Bülow, M. Pfingsthorn, S. Schwertfeger, Safety, security, and rescue missions with an Unmanned Aerial Vehicle (UAV), J. Intell. Robot. Syst. 64 (1) (2011) 57–76.
[23] P.J. Zarco-Tejada, R. Diaz-Varela, V. Angileri, P. Loudjani, Tree height quantification using very high resolution imagery acquired from an unmanned aerial vehicle (UAV) and automatic 3D photo-reconstruction methods, Eur. J. Agron. 55 (2014) 89–99.
[24] T. McLeod, C. Samson, M. Labrie, K. Shehata, J. Mah, P. Lai, J.H. Elder, Using video acquired from an unmanned aerial vehicle (UAV) to measure fracture orientation in an open-pit mine, Geomatica 67 (3) (2013) 173–180.
[25] S.H. Woo, G.J. Ha, D.R. Lee, J.H. Ha, M.S. Ha, A study on BIM based reliability of RC measurements using 3D photo scan, The Conference of Korea Institute for Structural Maintenance and Inspection, 2015, pp. 37–38.
[26] N. Metni, T. Hamel, A UAV for bridge inspection: visual servoing control law with orientation limits, Autom. Constr. 17 (1) (2007) 3–10.
[27] J. Gao, Y. Yan, C. Wang, Research on the application of UAV remote sensing in geologic hazards investigation for oil and gas pipelines, ASCE ICPTT Sustainable Solutions for Water, Sewer, Gas, and Oil Pipelines, 2011, pp. 381–390.
[28] H. Eisenbeiss, L. Zhang, Comparison of DSMs generated from mini UAV imagery and terrestrial laser scanner in a cultural heritage application, Int. Arch. Photogramm. Remote. Sens. Spat. Inf. Sci. 36 (5) (2006) 90–96.
[29] K. Lambers, H. Eisenbeiss, M. Sauerbier, D. Kupferschmidt, T. Gaisecker, S. Sotoodeh, T. Hanusch, Combining photogrammetry and laser scanning for the recording and modelling of the Late Intermediate Period site of Pinchango Alto, Palpa, Peru, J. Archaeol. Sci. 34 (10) (2007) 1702–1712.
[30] Z. Xu, L. Wu, Y. Shen, F. Li, Q. Wang, R. Wang, Tridimensional reconstruction applied to cultural heritage with the use of camera-equipped UAV and terrestrial laser scanner, Remote Sens. 6 (11) (2014) 10413–10434.
[31] D. Mader, R. Blaskow, P. Westfeld, H.-G. Maas, UAV-based acquisition of 3D point cloud - a comparison of a low-cost laser scanner and SfM-tools, Int. Arch. Photogramm. Remote. Sens. Spat. Inf. Sci. XL-3/W3 (2015) 335–341.
[32] N. Snavely, S.M. Seitz, R. Szeliski, Photo tourism: exploring photo collections in 3D, ACM Transactions on Graphics, Proceedings of SIGGRAPH, 2006.
[33] Y. Furukawa, J. Ponce, Accurate, dense, and robust multi-view stereopsis, IEEE Trans. Pattern Anal. Mach. Intell. 32 (8) (2010) 1362–1376.
[34] B. Jian, B.C. Vemuri, Robust point set registration using Gaussian mixture models, IEEE Trans. Pattern Anal. Mach. Intell. 33 (8) (2011) 1633–1645.
[35] Y.D. Rajendra, S.C. Mehrotra, K.V. Kale, R.R. Manza, R.K. Dhumal, A.D. Nagne, A.D. Vibhute, Evaluation of partially overlapping 3D point cloud's registration by using ICP variant and CloudCompare, Int. Arch. Photogramm. Remote. Sens. Spat. Inf. Sci. 40 (8) (2014) 891.
[36] B. Joshua, C.F. Igwe, I.A. Adekunle, Modelling and assessment of coastal changes at Golspie Beach, Scotland, UK; an integration of terrestrial laser scanning and digital photogrammetric techniques, for an effective coastal land use management, International Journal of Scientific Research in Science and Technology 2 (3) (2016) 361–371.
[37] A. Prokop, P. Schön, F. Singer, G. Pulfer, M. Naaim, E. Thibert, A. Soruco, Merging terrestrial laser scanning technology with photogrammetric and total station data for the determination of avalanche modeling parameters, Cold Reg. Sci. Technol. 110 (2015) 223–230.
[38] S.M. Sepasgozar, S. Shirowzhan, Challenges and opportunities for implementation of laser scanners in building construction, International Symposium on Automation and Robotics in Construction (ISARC) 33 (2016) 18–21.
[39] J. Chen, Y. Cho, Real-time 3D mobile mapping for the built environment, International Symposium on Automation and Robotics in Construction (ISARC) 33 (2016) 18–21.
[40] P. Kim, Y. Cho, An automatic robust point cloud registration on construction sites, Proceedings of the 2017 International Workshop on Computing for Civil Engineering (IWCCE), 2017, pp. 25–27.
[41] Y. Cho, C. Wang, P. Tang, C. Haas, Target-focused local workspace modeling for construction automation applications, ASCE Journal of Computing in Civil Engineering 26 (5) (2012) 661–670.
[42] C.H. Hugenholtz, J. Walker, O. Brown, S. Myshak, Earthwork volumetrics with an unmanned aerial vehicle and softcopy photogrammetry, J. Surv. Eng. 141 (1) (2014) 06014003.
[43] M.C. Kim, H.J. Yoon, A study on utilization of 3D shape point cloud without GCPs using UAV images, Journal of the Korea Academia-Industrial Cooperation Society 19 (2) (2018) 97–104.
[44] V. Potó, J.Á. Somogyi, T. Lovas, Á. Barsi, Laser scanned point clouds to support autonomous vehicles, Transportation Research Procedia 27 (2017) 531–537.
[45] C. Moenning, N.A. Dodgson, Intrinsic point cloud simplification, Proc. 14th GraphiCon, vol. 14, 2004, p. 23.