Integrating geometric models, site images and GIS based on Google Earth and Keyhole Markup Language

Automation in Construction 89 (2018) 317–331

Duanshun Li, Ming Lu




Department of Civil and Environmental Engineering, University of Alberta, Edmonton, AB, Canada

Keywords: Information integration; Visualization; Augmented reality; Google Earth; KML

Abstract

Technologies for information management and visualization are instrumental in enhancing human perceptions and interpretations of complicated project information. 3D/4D modeling, Virtual Reality (VR), Building Information Models (BIM) and Geographical Information Systems (GIS) have been increasingly used for data management and analytics in construction. Apart from virtual models, it is essential to represent the ever-changing site reality by integrating images captured with drones, mobile devices, and digital cameras. To improve the cognitive perception of the site environment from fragmented datasets, this paper proposes a framework to integrate unordered images, geometric models and the surrounding environment in Google Earth using Keyhole Markup Language (KML). A ground-control-free methodology to geo-reference sequential aerial imageries and ground imageries is proposed in order to place unordered images into the physical coordinate system of Google Earth. To combine geometric models, site images and panoramic images with the site's surrounding environment in 3D GIS, a KML and cloud storage based data management system is conceptualized to handle large scale datasets. The research provides construction engineers with a low-cost and low-technology-barrier solution to represent a dynamic construction site through information management, integration and visualization.

1. Introduction

A successful project control system requires not only a practical plan to carry out, but also a feedback loop to check the current status of the site, evaluate the performance of field crews and adjust the plan based on updated circumstances. As such, with heterogeneous sources of as-planned and as-built information integrated and well presented, a transparent construction site information system is instrumental in problem identification, formulation and further analysis. It also provides the basis for a SmartSite [1], enabling timely optimized decision making during the design, planning and construction stages.

Construction is a fragmented industry involving numerous stakeholders and specialist trades [2,3]; the data, documents and digital files obtained during the project life cycle to keep track of both the design and the actual construction process are thus highly scattered, diversified and unstructured. The use of various data sources and formats makes information generated during the course of construction even more fragmented [4]. This raises difficulties in making sound decisions, which require a complete assessment of all the information buried in each piece of data [5–7]. Often, failure to effectively manage and retrieve information may result in delays, missed opportunities or poor decisions [8].



Extra cost is incurred on data collection and storage, and extra time is spent on information retrieval. If such huge amounts of data are not managed efficiently and interpreted in a timely and integral fashion, the available data present construction managers with more "noise" than "signal", more "waste" than "value".

With the ultimate goal of building an integrated site information management and visualization system for construction field applications, the present research proposes a methodology to seamlessly link data collection, processing, information management and visualization while integrating as-planned information (represented by geometric models), as-built information (captured by images), and the site environment provided by GIS systems. This enables construction practitioners to obtain a fast-adaptive, comprehensive visualization and an in-depth perception of the dynamic construction site environment with more complete information. In particular, a UAV-centric image collection and processing method is formalized to align both sequential aerial images and unordered ground images into the physical coordinate system with a ground-control-free approach. Equipped with better localization units, a UAV takes aerial images with localization accuracy much higher than that of ground imageries, which are usually taken by mobile devices and suffer from low-quality localization units subject to the multipath effect of GPS signals caused by the surrounding environment.

Corresponding author. E-mail addresses: [email protected] (D. Li), [email protected] (M. Lu).

https://doi.org/10.1016/j.autcon.2018.02.002 Received 10 April 2017; Received in revised form 14 January 2018; Accepted 1 February 2018 0926-5805/ © 2018 Elsevier B.V. All rights reserved.


By taking the optimized geo-locations of aerial images as references, ground imageries are aligned in the physical coordinate system precisely and in an automated manner. Additionally, a KML based construction site information management and visualization system is prototyped. The research investigated the capability of KML for handling the unstructured data typically collected on a construction site, especially unordered images and as-planned models. By selecting Google Earth as the platform technology, this research also demonstrates the possibility of integrating images, geometric models, a 3D GIS system, virtual reality and augmented reality in a seamless, straightforward fashion while keeping the application cost low and making construction practitioners the eventual beneficiaries of the research deliverable.

The paper is organized as follows. First, we investigate various types of information and data commonly available on a construction site. Then we review state-of-the-art technologies for integrating and managing heterogeneous information. Next, we introduce the general framework of the proposed methodology, along with its two major components: UAV-centric image alignment and processing, and a KML based image and 3D model management system, which are explained in detail in the following sections. After that, two case studies are presented to illustrate applications of the proposed methodology for pre-project planning and project control. Finally, we summarize the advantages, contributions and limitations of the proposed methodology in terms of technological originality and industry impact.

2. Literature review

In the past decades, data sources to support construction decision making have been significantly enriched due to the adoption of geographic information systems (GIS), Building Information Models (BIM), various location sensors and material tracking devices, and ubiquitous images captured from mobile devices and unmanned aerial vehicle (UAV) systems [9–11]. As-planned models with rich geometric information have shown great potential in congestion analysis on construction sites [12], temporary work and access planning [13], safety monitoring [14], environmental impact visualization [15], site layout planning [16,17], crane path and lift planning [18], and other construction site activities [13,19]. Considerable effort has also been put into the automation of progress monitoring [10,20,21] based on comparing the as-planned model with 3D reconstructions of the site from unordered site images [22,23] and laser scanning [24–26]. Apart from original photos, recent research adapting panoramic image stitching algorithms has also emphasized the importance of panoramic views for site management [27]. However, adoption is greatly hindered by (1) high expenses on system development but unclear benefits of implementation [5,28,29], (2) inefficient visualization and oversimplified site modeling compared to the complicated site environment [29], (3) insufficient integration and interoperability [30,31], and (4) technology barriers and organizational difficulties in information sharing and distribution [29,32].

Recent works [33] recognized challenges and limitations of current analytical approaches, including the lack of spatiotemporal functionalities and capacity, the absence of site logistics, accessibility and route planning, inefficient visualization, and the lack of human involvement and interaction. Other works [29,34] increasingly emphasized the critical role of visualization, information integration and user involvement in final plan validation and modification in site layout planning. However, information visualization presents a distinctive challenge in general due to diversified usability requirements from different users, scalability to handle large scale datasets, difficulty in the integrated analysis of heterogeneous data from varying sources, in-situ visualization requiring timely and incremental updates, and errors and uncertainties in the data [35]. In construction, a fragmented industry, these challenges become even more severe.

Augmented reality (AR) [36,37] gained substantial attention recently due to its capability to combine as-planned information and as-built information. Beyond operational level applications, AR was also used in recent work [38] for electrical design communication during the design and planning stages. The D4AR system [39] was capable of visualizing project-wide progress by highlighting the discrepancy between the as-planned model and the images captured with video cameras mounted on several stations surrounding the site using AR technology, thus enabling intuitive progress visualization. Nonetheless, its capacity for information integration and exploration is still limited due to fixed camera positions and unknown absolute scales. Most importantly, the absence of an accurate model of the surrounding environment, for example the 3D site models provided by 3D GIS systems, makes it less instrumental for applications entailing frequent, intensive interactions between the facilities being built and the site environment, especially where the project is situated in crowded or environmentally fragile areas. Additionally, visualizing large volumes of images might also be challenging for D4AR without a comprehensive "level of detail" (LOD) system. Researchers have also leveraged the benefits of integrating BIM and AR [34,40–42]. However, incorporating AR into BIM software is still practically infeasible due to the inherent limits of BIM software in handling large datasets and real-time rendering [34].

To tackle unstructured data, much related research has focused on the utilization of the standardized eXtensible Markup Language (XML) for shared project information models due to its extensibility and interoperability through web schemas [4,43,44]. Both the open BIM standard Industry Foundation Classes (IFC) [45–47] and Web GIS formats including LandXML [7,47], CityGML [48–51] and KML are based on XML. The combination of BIM, providing detailed information about the structure, and GIS, with rich geospatial information surrounding the construction site, has also emerged as an important research area. Some methods tried to map the IFC schema to the CityGML schema using ontology based methods and instance mapping [51–53]; others attempted to separate geometric information from property information [53]. These studies demonstrated the high flexibility of XML in handling diversified data with varying structures and formats.

3. Methodology

As one of the most popular virtual globe platforms, Google Earth is widely used not only by the general public, but also by scientists and stakeholders in addressing environmental and construction planning issues, because of its rich geographic information. Diversified geographical information is presented to the user through a combined visualization of digital elevation models, satellite imagery, 3D building models, street views and user-uploaded images. Tiling and LOD mechanisms for images and 3D models enable Google Earth to manage large datasets, which is challenging for BIM software. However, its ability to integrate the fragmented data of a construction site through a location based data management approach remains largely unexplored. In addition, KML significantly enriches the extensibility of the software by providing users with a standardized language to add customized data. The Google Earth system thus serves as a cost effective and low technology barrier information exploration platform, as well as an information management system. With temporal and spatial information associated with each object, the system enables efficient information retrieval through content navigation, 3D exploration and time window filtering.

What underlies the integrated construction site information management system is a KML based methodology that integrates the information contained in unordered images, geometric models and the 3D GIS system. As presented in Fig. 1, the system covers data collection, data processing, data management, and information visualization and distribution. Efficient construction site data collection is critical for project planning and control, which demand timely reaction. Therefore, aerial and ground imageries of the construction site captured with UAVs and mobile devices are selected as the major data sources for actual site monitoring.


Fig. 1. Integrated construction information management system based on KML and Google Earth.

Apart from that, as-planned models and schedules are taken as the major data sources to build up the virtual construction process. In addition, the 3D model of the surrounding construction environment is provided by the visualization system itself. After data acquisition, images and models need to be processed such that they are suitable for KML management. Models are divided into parts to reflect the construction stages together with the schedule. Photogrammetry algorithms are used to align unordered images within the WGS84 coordinate system adopted by Google Earth. Panoramic views and a 3D reconstruction of the construction site are also produced to facilitate better understanding of the construction environment. With a few control points, centimeter reconstruction accuracy can be achieved with much denser measurements compared to traditional surveying. Nonetheless, it is crucial to provide methods that require as little ground control as possible for a continuously changing construction site. As such, aerial imageries are essential in the proposed method, because they are used to provide geo-references for other images. Geo-referenced and time stamped images and geometric models are then integrated together using KML documents. By storing hyperlinks to data on the cloud, KML enables efficient management of large volumes of images and models. Sharing KML documents of limited size rather than the original datasets also makes distribution of information much easier. Google Earth is used for data visualization because it provides users with various ways to interact with those data through exploration, zooming, VR and AR visualization, and animation of the construction site at varying levels of detail and from various angles.

4. UAV-centric unordered image processing and alignment

Organizing contents on a GIS platform requires the absolute positions of objects in a physical coordinate system, for example WGS84. To facilitate advanced visualization such as AR, the orientation of the images is also necessary. Different from D4AR, which requires only relative orientation, the scale, location, and orientation of objects need to be fixed in absolute terms in order to integrate with GIS. Ground control points are typically applied for the absolute orientation of images and 3D reconstruction, but it is difficult to maintain visible ground control points in a dynamically changing environment. Meanwhile, it is almost impossible to provide sufficient control points for sparsely distributed ground images with limited coverage. An alternative approach is to align sequentially obtained imageries to a previously aligned imagery; however, sufficient common features must be ensured between the imageries. Aerial images have a much larger coverage that is able to ensure overlapping common features between both subsequent aerial imageries and ground imageries. Recent works [54,55] revealed that aerial imageries with a few ground control points can achieve centimeter accuracy with a much denser 3D reconstruction compared to traditional methods. Besides, the navigation system of the UAV already provides the absolute location and orientation of images with good precision. As such, geo-referenced aerial imageries can be taken as geo-references for both ground images and 3D models for information management purposes.

4.1. Sequential image positioning and 3D reconstruction

Unordered image positioning and 3D reconstruction have been fully automated with structure from motion (SfM) technology [56]. Both commercial and open source software systems are currently available to estimate the pose (position and orientation) of the images and to reconstruct the 3D structure of the scene. In this research, the commercial software Pix4D [57] is used to estimate and optimize image poses, generate panoramic images from aerial images, and reconstruct a 3D model of the scene. Details of the 3D reconstruction algorithms can be found in relevant literature such as [56]. Recent work on panoramic image creation from UAV images of a construction site can be found in [27]. This section focuses on the methodology to align unordered imageries captured with varying sensors in Google Earth by taking aerial imageries as geo-references.

To provide sufficient details and reflect the changes of a construction site, images captured at different stages of the construction with UAVs, mobile phones and cameras are added to the system incrementally, as presented in Fig. 2. The combined use of aerial photos and ground photos provides much richer details due to their distinctive view angles and perspectives. Taking unordered images as input, the system outputs precise image positions and orientations, a 3D reconstruction of the site as a point cloud or model, and panoramic image mosaics stitched from aerial imageries.

The first aerial imagery is processed with the default settings of Pix4D in a fully automated manner. Succeeding images are added into the processing pipeline incrementally. Tie points indicating common features between the current imagery and the aligned imageries are used to align the imageries together. With sufficient overlap between imageries, tie points can be detected automatically using feature matching and detection algorithms.


Fig. 2. Sequential image alignment and processing pipeline.

Fig. 3. Three tie points are added to align ground imagery to aerial imagery.

As the appearance of the structure may change during the construction process, automatic tie point detection may fail due to texture and color variations and the lack of overlap. In this event, manual tie point editing is required. However, it is worth mentioning that manual editing is usually limited to the alignment of ground imageries to aerial imageries, and the interface is usually straightforward and user-friendly in commercial software. As an example, three tie points between aerial images and ground images are shown in Fig. 3. With sufficient tie points (more than three), the poses of succeeding ground images can be aligned within the coordinate system automatically using epipolar geometry constraints.

The 3D reconstruction algorithm starts from the estimation of the pose of the images, including both position and orientation. With the poses of the images, the 3D coordinates of each tie point are derived through triangulation. Bundle adjustment (BA) [58] is then applied whenever new images are added, so that the poses of the images and the coordinates of the reconstructed 3D points are optimized from the initial poses of the images under the epipolar geometry constraints raised by millions of detected tie points. By optimizing the entire image collection as a whole, BA is able to improve the localization and orientation accuracy of images obtained from noisy GPS measurements. It ensures that images and models are precisely geo-referenced within the corresponding coordinate system.

With optimized poses, aerial images are stitched together to provide a better panoramic view of the site for entire-site monitoring. Acting as a substitute for outdated high resolution satellite images, panoramas are especially vital for the identification of layout and access constraints of a construction site. Subsequently, very dense reconstruction of the structure and mesh reconstruction can be performed to generate as-built 3D models of the actual site environment. Typical outputs of the 3D reconstruction algorithms are presented in Fig. 4.
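For illustration only, the following minimal Python sketch shows the kind of feature matching and epipolar geometry computation that underlies the automatic tie point detection and pairwise alignment described above. It is not the Pix4D pipeline (which additionally performs triangulation and bundle adjustment over the whole collection), and the image file names and camera intrinsics are hypothetical placeholders.

```python
# Minimal sketch: detect tie points between two overlapping images and recover the
# relative camera pose from epipolar geometry using OpenCV. Not the Pix4D pipeline;
# file names and intrinsics below are hypothetical placeholders.
import cv2
import numpy as np

K = np.array([[4097.4, 0.0, 2304.0],   # assumed focal length and principal point
              [0.0, 4097.4, 1728.0],   # in pixels (hypothetical camera)
              [0.0, 0.0, 1.0]])

img1 = cv2.imread("aerial_001.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("aerial_002.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and describe local features, then match them with a ratio test.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = []
for pair in matches:
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Epipolar geometry: essential matrix with RANSAC, then the relative pose.
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
print("tie points used:", int(inliers.sum()))
print("relative rotation:\n", R)
```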


Fig. 4. 3D model, precise image pose estimation and high resolution panoramic image generated from 3D reconstruction of unordered imageries.

4.2. Image alignment in Google Earth

Because different orientation parameterization systems are adopted in image processing and in the Google Earth system, the orientation of images obtained from photogrammetry software needs to be converted to the parameterization used by Google Earth. The most commonly applied method in photogrammetry is the omega (ω), phi (φ), kappa (κ) convention, where ω, φ, κ are the successive rotation angles about the X, Y, and Z axes of the intermediate coordinate system obtained after the previous rotation. The rotation in Google Earth follows the Euler angle convention defined with heading (h), tilt (t) and roll (r) angles around the Z, X and Z axes. The coordinate frames and the rotation sequences of the two systems are presented in Fig. 5. In photogrammetry, the rotation matrix to transform points from an image coordinate system to the world coordinate system based on the ω, φ, κ convention can be formulated with Eq. (1):

R_p = R_x R_y R_z =
\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\omega & -\sin\omega \\ 0 & \sin\omega & \cos\omega \end{bmatrix}
\begin{bmatrix} \cos\varphi & 0 & \sin\varphi \\ 0 & 1 & 0 \\ -\sin\varphi & 0 & \cos\varphi \end{bmatrix}
\begin{bmatrix} \cos\kappa & -\sin\kappa & 0 \\ \sin\kappa & \cos\kappa & 0 \\ 0 & 0 & 1 \end{bmatrix}    (1)

Fig. 5. Coordinate systems, orientation parameterization and rotation sequences of the photogrammetry system and the Google Earth system.

In Google Earth, the rotation matrix to transform points in the Google Earth camera coordinate system to the world coordinate system is formulated with Eq. (2):


R_{GE} = R_z R_x R_z =
\begin{bmatrix} \cos h & -\sin h & 0 \\ \sin h & \cos h & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos t & -\sin t \\ 0 & \sin t & \cos t \end{bmatrix}
\begin{bmatrix} \cos r & -\sin r & 0 \\ \sin r & \cos r & 0 \\ 0 & 0 & 1 \end{bmatrix}
=
\begin{bmatrix}
c(h)c(r) - s(h)c(t)s(r) & -c(h)s(r) - s(h)c(t)c(r) & s(h)s(t) \\
s(h)c(r) + c(h)c(t)s(r) & -s(h)s(r) + c(h)c(t)c(r) & -c(h)s(t) \\
s(t)s(r) & s(t)c(r) & c(t)
\end{bmatrix}    (2)

where c and s stand for cos and sin, respectively. As illustrated in Fig. 5, the image coordinate system and the Google Earth camera coordinate system are essentially the same. Therefore R_{GE} is equal to R_p:

R_{GE} = R_p    (3)

Combining Eq. (2) and Eq. (3), the heading (h), tilt (t) and roll (r) angles for Google Earth can thus be obtained in Eq. (4):

h = -\operatorname{atan2}(R_{p23}, R_{p13}), \quad t = \operatorname{acos}(R_{p33}), \quad r = \operatorname{atan2}(R_{p32}, -R_{p31})    (4)

where R_{pij} indicates the element at the i-th row and j-th column of the rotation matrix R_p. Because anticlockwise angles are used in the derivation but a clockwise azimuth angle is used for heading in Google Earth, the heading in Google Earth is the negation of the h derived earlier. In summary, the heading, tilt and roll required to define the orientation of an image in Google Earth can be derived with Eq. (5) from the ω, φ, κ obtained from Pix4D or other photogrammetry software:

\mathrm{heading} = -\operatorname{atan2}(R_{p23}, R_{p13}), \quad \mathrm{tilt} = \operatorname{acos}(R_{p33}), \quad \mathrm{roll} = \operatorname{atan2}(R_{p32}, -R_{p31})    (5)
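As a minimal illustration of Eqs. (1) and (5), the following Python sketch builds the rotation matrix Rp from given ω, φ, κ angles and returns the heading, tilt and roll values expected by Google Earth; the sample angles passed at the end are hypothetical.

```python
# Sketch of the omega/phi/kappa -> heading/tilt/roll conversion of Eqs. (1) and (5).
# Angles are in degrees; the sample values are hypothetical.
import numpy as np

def opk_to_htr(omega, phi, kappa):
    w, p, k = np.radians([omega, phi, kappa])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(w), -np.sin(w)],
                   [0, np.sin(w),  np.cos(w)]])
    Ry = np.array([[ np.cos(p), 0, np.sin(p)],
                   [0, 1, 0],
                   [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(k), -np.sin(k), 0],
                   [np.sin(k),  np.cos(k), 0],
                   [0, 0, 1]])
    Rp = Rx @ Ry @ Rz                                            # Eq. (1)
    heading = -np.degrees(np.arctan2(Rp[1, 2], Rp[0, 2]))        # Eq. (5): -atan2(Rp23, Rp13)
    tilt = np.degrees(np.arccos(Rp[2, 2]))                       # acos(Rp33)
    roll = np.degrees(np.arctan2(Rp[2, 1], -Rp[2, 0]))           # atan2(Rp32, -Rp31)
    return heading, tilt, roll

print(opk_to_htr(12.3, -4.5, 178.9))
```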

5. KML based image & 3D model management and visualization

5.1. Basic KML elements for images and 4D models

Based on XML, KML uses a tag-based structure with nested elements to manage the data and information associated with an object in a hierarchical manner. Different from CityGML, which is designed to represent geometric objects, the focus of KML is visualization on a web GIS platform. It defines basic elements to represent geometric objects and raster images, as well as their visualization effects. Elements predefined in KML are divided into several categories according to their functionality: Feature for vector and raster geo-data, Geometry for 3D objects, AbstractView for navigation, TimePrimitive for date and time, and others for visualization style, LOD and so on. As a tool for GIS, geo-referencing elements are the most important elements for every object. Each object needs to be geo-referenced by ⟨Location⟩ and ⟨Orientation⟩ elements; a ⟨Scale⟩ element is also available if scaling is necessary. Detailed information can be found in the KML reference; the elements which are intensively used in this research are listed in Table 1.

Table 1
Intensively used elements and their functions.

Element | Objects | Function
⟨Model⟩ | 3D models | 3D model representation and visualization.
⟨GroundOverlay⟩ | Panoramic mosaics | Raster data alignment and overlay on Google Earth terrain.
⟨PhotoOverlay⟩ | Original images | Image placement and orientation for AR visualization.
⟨Camera⟩ | Image pose | Camera position and orientation for AR and navigation.
⟨TimeStamp⟩, ⟨TimeSpan⟩ | Schedule | Associate date/time for 4D exploration of objects and activities.
⟨ExtendedData⟩ | Documents, web pages, etc. | Customized data organization and visualization.

Objects defined with elements in the Feature category are listed on the navigation panel of the Google Earth interface for interactive selection. These elements include ⟨GroundOverlay⟩ and ⟨PhotoOverlay⟩ for images, as well as ⟨Placemark⟩ and ⟨NetworkLink⟩ for geometries and models. ⟨GroundOverlay⟩ elements are used to align satellite images or panoramic images over the 3D terrain model. ⟨PhotoOverlay⟩ elements are intended to align normal images with the 3D environment for AR visualization. A 3D model can be placed under ⟨Placemark⟩ or ⟨NetworkLink⟩ elements. Geometric objects can be represented either with the basic primitive shapes predefined in KML or with hyperlinks to models as KML files or XML-based COLLADA [59] files. ⟨Folder⟩ and ⟨Document⟩ are very helpful elements that can be used repetitively to organize hierarchical contents.

An example of a KML document for a COLLADA 3D model is given in Fig. 6. In the example, the model is nested in a ⟨Placemark⟩ element. The model is geo-referenced with a specific location and orientation, with 0 m elevation relative to the ground. The 3D model itself is represented by a COLLADA model with the '.dae' file extension. The model is linked in the KML file as a local path within the local file system. As a local path is used, the model file needs to be packaged with the KML document or zipped as a compressed KMZ file. This increases the size of the file considerably, especially when raster images are the major elements; the size of a KMZ file easily grows to gigabytes with high resolution mosaics or hundreds of ground images. By uploading the original data onto the cloud and linking the data through permanent URLs, the size of a typical KML file can be limited to megabytes. The combined use of cloud storage and KML substantially eases information exchange by reducing the size of the file. On the other hand, the use of the cloud also centralizes data management (including photos, models, and drawings), which is usually scattered in current practice.


Fig. 6. Example KML document for a COLLADA 3D model.
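As a minimal sketch of the idea behind Fig. 6, the Python snippet below writes a KML ⟨Placemark⟩ wrapping a geo-referenced COLLADA ⟨Model⟩ whose geometry is linked through a cloud URL instead of a local path; the coordinates, file names and URL are hypothetical.

```python
# Sketch: generate a KML <Placemark> wrapping a geo-referenced COLLADA <Model>,
# analogous to the example of Fig. 6. Coordinates, names and URL are hypothetical.
PLACEMARK_KML = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Foundation</name>
    <Model>
      <altitudeMode>relativeToGround</altitudeMode>
      <Location>
        <longitude>{lon}</longitude><latitude>{lat}</latitude><altitude>0</altitude>
      </Location>
      <Orientation>
        <heading>{heading}</heading><tilt>0</tilt><roll>0</roll>
      </Orientation>
      <Link><href>{href}</href></Link>
    </Model>
  </Placemark>
</kml>"""

with open("foundation.kml", "w") as f:
    # A cloud URL keeps the KML file small; a local .dae path would require a KMZ package.
    f.write(PLACEMARK_KML.format(lon=-113.52, lat=53.53, heading=15.0,
                                 href="https://example.com/models/foundation.dae"))
```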

5.2. Animation of as-planned models with KML

Though KML provides some basic elements for primitive geometry objects such as points, lines, and polygons, its editing and interactive functions are limited. Thus the as-planned models of structures still need to be generated in other 3D modeling software, such as Google SketchUp or Autodesk Revit. As the lowest-level controllable objects in Google Earth are Feature elements, every object that needs interaction should be wrapped in a Feature element. To manipulate the visibility of specific parts in accordance with the as-planned construction progress, the model needs to be divided into several distinguishable parts that reflect the construction states or activities. After that, time stamps or time spans can be associated with each element so as to control its visibility over the time lapse. Color and other properties can also be added to emphasize the structure under construction.

To represent the construction stages, the residential house in Fig. 7 is divided into five parts: the foundation, first floor, railing, second floor and the roof. The level of division depends entirely on particular application needs. After that, each part is exported as a COLLADA model or a KMZ model which is packaged in the KMZ file. By associating each model with a time span specifying its starting date and demolishing date, the Google Earth visualization is able to change the visibility and other visualization properties along with the elapse of time controlled by a slider. The KML animation based on ⟨TimeSpan⟩ is given on the right hand side of Fig. 7. The ⟨Model⟩ and ⟨TimeSpan⟩ elements can be nested under a ⟨Placemark⟩, ⟨NetworkLink⟩ or other Feature element. This procedure can be fully integrated with 4D BIM software which has a project schedule linked with the 3D models. On the other hand, object-related information can be assigned to each model with extended data elements. Through this approach, not only can structured data stored in BIM software be added to the system, but object-related documents can also be appended to relevant 3D models for efficient information retrieval.

Fig. 7. Model split into parts based on schedule for animation of construction stages.
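Along the same lines, a hypothetical set of model parts can be wrapped in ⟨Placemark⟩ elements carrying ⟨TimeSpan⟩ tags so that the Google Earth time slider reveals them according to the schedule, as sketched below (part names, dates, coordinates and URLs are placeholders).

```python
# Sketch: wrap each model part in a <Placemark> with a <TimeSpan> so the Google Earth
# time slider reveals the parts by schedule. Dates, coordinates and URLs are hypothetical.
parts = [("Foundation",  "2016-05-01", "https://example.com/models/foundation.dae"),
         ("First floor", "2016-06-01", "https://example.com/models/floor1.dae"),
         ("Roof",        "2016-08-15", "https://example.com/models/roof.dae")]

placemarks = []
for name, begin, href in parts:
    # Each part appears on its start date; the end date is omitted so it stays visible.
    placemarks.append(f"""  <Placemark>
    <name>{name}</name>
    <TimeSpan><begin>{begin}</begin></TimeSpan>
    <Model>
      <Location><longitude>-113.52</longitude><latitude>53.53</latitude><altitude>0</altitude></Location>
      <Link><href>{href}</href></Link>
    </Model>
  </Placemark>""")

doc = ('<?xml version="1.0" encoding="UTF-8"?>\n'
       '<kml xmlns="http://www.opengis.net/kml/2.2">\n<Document>\n'
       + "\n".join(placemarks) + "\n</Document>\n</kml>\n")
with open("as_planned_animation.kml", "w") as f:
    f.write(doc)
```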

5.3. Organize images within KML

5.3.1. Ground overlay of stitched panoramic images

Aerial images provide a unique view angle of the construction site with fewer obstacles. Besides, these images can be captured regularly and efficiently on the site. The stitched panoramic image has a much higher resolution than satellite images, and the ⟨GroundOverlay⟩ element can be applied to replace the outdated, lower resolution satellite image with the most recent high resolution mosaics. However, the stitched panoramic image is usually too large for real-time visualization. As a platform designed for handling massive data, Google Earth and KML provide users with an efficient mechanism to manage the LOD in data visualization with ⟨Region⟩ elements. A Region is characterized by a bounding box ⟨LatLonAltBox⟩ of the object, and the LOD specified by the minimum pixels ⟨minLodPixels⟩ and maximum pixels ⟨maxLodPixels⟩ when the object is projected on the screen. Whenever the number of pixels of the image tile on the screen exceeds this range, Google Earth will zoom in/out to the next level. For the mosaic image, the longitude and latitude extents covered by the image can be applied as the bounding box, and the altitude of the bounding box can be ignored. A sample of the ⟨GroundOverlay⟩ and ⟨Region⟩ elements of an image tile is given in Fig. 8.

Fig. 8. GroundOverlay and Region elements of an image tile.

Taking the mosaic image in Fig. 9 for example, an image pyramid of the mosaic is created at different resolutions with a scaling factor of 2. For each resolution, the image is then divided into regularized tiles of a user-specified width. After that, a hierarchical KML document of ⟨Folder⟩ or ⟨NetworkLink⟩ elements is generated to specify the visualization mechanism. Regions are inherited from ⟨Folder⟩ and ⟨NetworkLink⟩ elements. Each Folder is associated with a tile defined by: 1) a ⟨Region⟩ element; 2) a ⟨GroundOverlay⟩ element; and 3) folders associated with the higher resolution tiles for the current tile. Regions defined locally take precedence over regions defined higher in the folder hierarchy. The KML document structure of a tile at Level 1 resolution is presented in Fig. 9. The ⟨Folder⟩ for each tile can be repeated recursively until the most detailed tile with the highest resolution is reached. To simplify the implementation, the open source Geospatial Data Abstraction Library (GDAL) [60] is used to convert GeoTIFF images to tiled KMZ files. However, local paths of the tiles are utilized instead of URLs; therefore the generated KMZ file is extremely large and not very efficient for information distribution. Three additional steps implemented with Python are taken to: 1) upload the tiles onto the cloud; 2) generate and obtain the URLs; and 3) replace the local paths with the URLs.

Fig. 9. KML and LOD structure of an image tile with 3 levels of scaling factor 2.
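To illustrate the tile structure described above, the following sketch assembles the ⟨Region⟩/⟨GroundOverlay⟩ fragment of a single mosaic tile with the raster linked through a cloud URL, in the spirit of Fig. 8; the bounding box values, LOD thresholds and URL are hypothetical, and nested ⟨Folder⟩ elements for higher resolution tiles would be appended recursively.

```python
# Sketch: build the <Region>/<GroundOverlay> fragment for one mosaic tile, with the
# raster linked through a cloud URL rather than a local path. All values are hypothetical.
def tile_kml(north, south, east, west, href, min_lod=128, max_lod=-1):
    return f"""  <Folder>
    <Region>
      <LatLonAltBox>
        <north>{north}</north><south>{south}</south>
        <east>{east}</east><west>{west}</west>
      </LatLonAltBox>
      <Lod><minLodPixels>{min_lod}</minLodPixels><maxLodPixels>{max_lod}</maxLodPixels></Lod>
    </Region>
    <GroundOverlay>
      <Icon><href>{href}</href></Icon>
      <LatLonBox>
        <north>{north}</north><south>{south}</south>
        <east>{east}</east><west>{west}</west>
      </LatLonBox>
    </GroundOverlay>
  </Folder>"""

fragment = tile_kml(53.531, 53.529, -113.518, -113.522,
                    "https://example.com/tiles/level1/0_0.jpg")
print(fragment)  # higher-resolution child tiles would be nested inside the <Folder>
```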

5.3.2. Unordered photo alignment

On a construction site, ground imageries are usually taken at "random" locations and orientations to capture a specific problem. Consequently, they are very fragmented in nature and, in practice, are only used as proofing material in documents. However, by aligning the images at their exact locations and orientations in relation to the 3D models and the site environment, the fragmented information provided by individual images can be integrated. Moreover, the utilization of AR and VR makes the construction site more transparent to managers through a visualized approach. Different from real-time AR technologies, which demand considerable computing resources and are expensive to implement, the ⟨PhotoOverlay⟩ element in KML provides a pragmatic approach to a cost effective AR experience and site photo management. Each ⟨PhotoOverlay⟩ object is defined by: 1) a ⟨Camera⟩ element specifying the position and orientation of the image; 2) a ⟨ViewVolume⟩ element specifying the field of view (FOV) of the image; 3) an ⟨Icon⟩ element to store the URL of the image; and 4) an optional ⟨TimeStamp⟩ element stating the date when the image was captured. The ⟨Icon⟩ and ⟨TimeStamp⟩ elements are straightforward; the ⟨Camera⟩ and ⟨ViewVolume⟩ elements are critical for the alignment and the visual effect of the image. The structure of the ⟨PhotoOverlay⟩ element and the inputs to define the elements are presented in Fig. 10.

Given the rotation angles (omega, phi, kappa) obtained from photogrammetry software, the heading, tilt and roll are derived based on Eq. (5) presented in the previous section. The image capturing date and time can be easily extracted from the header of the image file. A time stamp can thus be added to each image to show the actual progress; it also enables time-window based image retrieval when images from a particular period are required. The view volume of the image can be derived from the estimated focal length and the image size. A detailed example of the ⟨Camera⟩, ⟨ViewVolume⟩ and ⟨TimeStamp⟩ elements is shown in Fig. 11.

Fig. 10. PhotoOverlay element generation and example.

The view volume element is used to define the perspective projection of the image for AR visualization. For photogrammetric analytics and 3D reconstruction, the shift of the optical axis is critical; however, the shift is negligible for visualization purposes considering its relatively tiny scale. Assuming the center of the optical axis is at the center of the image without any shift, the horizontal FOV (FOVH) and the vertical FOV (FOVV) can be derived from the estimated focal length f and the width W and height H of the imaging sensor as illustrated in Eq. (6).

\mathrm{FOV}_H = 2\,\operatorname{atan}\!\left(\frac{W}{2f}\right), \qquad \mathrm{FOV}_V = 2\,\operatorname{atan}\!\left(\frac{H}{2f}\right)    (6)

The ⟨leftFov⟩ and ⟨rightFov⟩ each account for one half of FOVH; the same applies to ⟨bottomFov⟩ and ⟨topFov⟩. For the example given in Fig. 11, the focal length is 4097.40 pixels and the width and height of the image are 4608 and 3456 pixels respectively. It should be noted that the focal length and the size of the imaging sensor must be measured in the same unit. The horizontal and vertical FOVs calculated from Eq. (6) are 58.6232° and 45.6700° respectively. If the shift of the optical axis is considered, a corresponding adjustment is required in the distribution of the FOV on each side. The ⟨near⟩ element is used to define the size of the image in the Google Earth 3D virtual reality system; a value of 2 means the image plane is drawn 2 m in front of the camera center. Moreover, a ⟨Shape⟩ element defines the projection surface of the image and can also be nested into the photo overlay element for images of different types: rectangle for ordinary images, cylinder for panoramas, and sphere for spherical panoramas.
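Putting the pieces together, a ⟨PhotoOverlay⟩ can be generated from the estimated pose, focal length and image size, with the FOV halves computed from Eq. (6); in the sketch below the pose values, time stamp and image URL are hypothetical.

```python
# Sketch: assemble a <PhotoOverlay> from an image pose and focal length, with the
# FOV halves computed from Eq. (6). Pose, focal length, date and URL are hypothetical.
import math

def photo_overlay(lon, lat, alt, heading, tilt, roll, f_px, w_px, h_px, href, when):
    half_h = math.degrees(math.atan(w_px / (2.0 * f_px)))  # half of FOV_H
    half_v = math.degrees(math.atan(h_px / (2.0 * f_px)))  # half of FOV_V
    return f"""<PhotoOverlay>
  <TimeStamp><when>{when}</when></TimeStamp>
  <Camera>
    <longitude>{lon}</longitude><latitude>{lat}</latitude><altitude>{alt}</altitude>
    <heading>{heading}</heading><tilt>{tilt}</tilt><roll>{roll}</roll>
  </Camera>
  <Icon><href>{href}</href></Icon>
  <ViewVolume>
    <leftFov>-{half_h:.4f}</leftFov><rightFov>{half_h:.4f}</rightFov>
    <bottomFov>-{half_v:.4f}</bottomFov><topFov>{half_v:.4f}</topFov>
    <near>2</near>
  </ViewVolume>
  <shape>rectangle</shape>
</PhotoOverlay>"""

print(photo_overlay(-113.52, 53.53, 695.0, 171.2, 78.5, -2.1,
                    4097.40, 4608, 3456,
                    "https://example.com/photos/IMG_0001.jpg", "2016-07-15T10:30:00Z"))
```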

Fig. 11. Example of Camera, ViewVolume and TimeStamp elements in KML.

6. Information visualization for construction planning and control

The proposed site information management system has shown great potential in integrating and managing large scale datasets collected during the lifecycle of construction projects. With a location based information management system, it is able to effectively manage the huge amounts of information embedded in unordered images, which used to be infeasible with documents or file systems. On the other hand, the solution provides an all-in-one system that enables a 360-degree view of both the facilities under construction and the construction site environment through integration with as-planned/as-built geometric models and the 3D GIS provided by Google Earth. Moreover, well organized images provide sufficient visuals to facilitate communication, to identify constraints for further planning, and to estimate field progress. The 3D reconstruction from these images can also be used for automated material quantity takeoff. Such a system provides construction practitioners with a cost effective and low technology barrier method that can be applied to improve the cognitive perception of the construction environment and assist in making critical decisions in regards to:

1) The pre-project planning process, for constructability verification, access planning, layout planning, temporal traffic planning and other tasks that require spatial information about the construction site and nearby buildings and neighborhoods.
2) Construction phase planning and control, for accessibility constraint identification, site layout and activity monitoring, visually assisted progress monitoring, and automated material quantity takeoff. The quantitative estimates and qualitative constraints derived from the developed system feed inputs to further model based planning, such as layout/cost/schedule optimization using simulation or analytical techniques.
3) The operation and maintenance phase, by providing detailed building and environment information, and image based infrastructure health monitoring with effective image documentation and visualization.


Information integration, management and visualization are essential in every decision making process in the whole lifecycle of a construction project. The methodology proposed in this research can be generalized and applied to address critical problems on the construction site. The rest of this section presents two case studies to demonstrate the potential use of the proposed method for the pre-project planning process and the construction phase, respectively. However, as general information technology research, the focus of the case studies is mainly on information extraction and constraint identification rather than on finding solutions to specific problems. The pre-project planning case demonstrates the usage of the system on an infrastructure project in a confined urban area with heavy traffic, for constructability issue identification along with constraint identification for access and traffic planning. Because no imagery was taken for this project, this case integrates geometric models and GIS only. The other case, a residential project, gives a more comprehensive demonstration of 4D animation, multi-source image processing and integration, ground overlay of the mosaic, and AR alignment of ground photos.

6.1. Pre-project planning in a confined urban area with heavy traffic

The potential impact of pre-project planning on project success has long been recognized by industry practitioners. Two of the major tasks in pre-project planning are selecting project alternatives and developing the project definition package [61]. Both of them require sufficient understanding of the structure to be built, the surrounding environment, and the interactions between the structure and the environment in order to identify potential problems, select construction methods, and estimate cost and risks. In the case study below, the proposed system is used to identify major problems, such as access, traffic, space, and risks, in the pre-project planning process of a viaduct construction project.

As part of a port expansion project, an 800 m long viaduct with 558 m elevated by 13 segments of steel girders is designed to connect two busy terminals (Eastern and Western). The geographical overview of the project is presented in Fig. 12, with 2D drawings overlaid on satellite images. As the project is located in a crowded urban center with complicated transportation infrastructure including both roadways and railways, it is essential to maintain existing traffic during the construction process to avoid affecting the operation of the port. Featuring high resolution satellite images, Google Earth has been widely applied in the pre-project planning process of infrastructure projects for site layout planning, traffic planning, access planning, and environmental studies, as presented in the upper images of Fig. 12. The lower part of Fig. 12 gives a 3D model of the viaduct, which is also the major visualization interface in BIM. However, without the surrounding environment, it is difficult to understand the construction site and its interactions with existing infrastructure systems and surroundings, such as space and access limits. Consequently, these views provide limited support for construction method selection, layout planning and scheduling.

With the proposed research, a 3D model of the viaduct was quickly drafted from the design drawings. To reflect the proposed construction sequence and facilitate the selective visualization of particular parts, the model was separated into abutments, piers, pier caps, girders, and decks for each segment. After the model was established, it was organized using a KML document and loaded in Google Earth. Documents or text related to an object, such as drawings, could also be inserted into the KML and accessed with hyperlinks. Fig. 13 (a) shows the 3D viaduct with the 3D surrounding environment being explored from varying views. The 3D virtual reality system with the as-planned model inserted was used to identify constructability issues and potential problems and risks during construction by examining specific points of interest, zooming in from different angles as presented in Fig. 13 (b, c, d), or selectively visualizing specific objects (Fig. 13 (e)). Its capability to associate additional structured or unstructured properties with individual parts is also demonstrated in Fig. 13 (f), where the drawing of the pier cap and the quantities of concrete and rebar are linked with the pier cap. Based on a virtual walk-through inspection, several critical problems or constraints for pre-project planning were immediately identified:

1) There is one railway beneath the viaduct and another crossing the viaduct. One of them is used to connect the west side of the Western terminal to the Eastern terminal; thus special attention is needed to resolve the traffic issue.
2) The roadway beneath the viaduct needs to be closed. Being the only road directly connecting the two terminals, alternative routes need to be identified to divert the traffic.
3) The viaduct is very close to nearby buildings; extra attention should be paid to dust and noise control during construction.
4) Limited space is available for equipment and storage in the middle of the viaduct because the northern side is crowded with buildings and the southern side is occupied by railways.
5) The east end of the viaduct involves construction beneath an existing bridge with limited space.

Fig. 12. 2D drawings overlaid on satellite images and 3D model of the viaduct.


Fig. 13. 3D viaduct in Google Earth and its interactions with surroundings.

The listed problems are major issues that may result in adjustments of the design or modifications to the project manager's layout plan, access plan, traffic plan, environmental plan, and even the construction method and schedule. Effective information integration and sharing at this stage enables better constructability issue identification and earlier problem detection and resolution. During the course of 2017, the visualization platform was prototyped based on the proposed methodology and tested by engineers and managers who were bidding for projects in the real world. The first-hand user experiences have corroborated the advantages of the research deliverable as intended, which has been publicized by one of the major applied research funding organizations in Canada (MITACS·CA) as a game-changing technology that has the potential to benefit the construction industry [62]. According to feedback from industry partners, the platform facilitated the sharing of information throughout the life cycle of a construction project, providing snapshots at key intervals to support decision making related to estimating, planning and control. Because ground images could be easily retrieved and checked against the as-designed models, project teams were able to compare as-planned models to structures as they were being built, making it possible to track progress remotely and identify deviations on a timely basis. Compared to the traditional approach presented in Fig. 12, the proposed methodology represents "a leap from flipping through binders of documents and photos to much more clear and much more accurate understanding of construction site conditions by combining the real-world, accurate imagery, and 3D designs all together" (feedback from Rod Wales, Vice President of Ledcor Group Canada, in [63]).

6.2. Continuous monitoring of a residential project under construction

The construction site becomes even more complicated during the construction phase due to the concurrent execution of various construction activities. The continuously changing site environment poses significant challenges for site management and for any analytical optimization of construction operations, because significant changes quickly render any plan obsolete and any analysis invalid. Besides, a thorough understanding of the site environment and a short turnaround time in obtaining solutions are critical for problem solving in the construction field. The case presented below demonstrates the usage of the proposed system in as-planned and as-built information management and visualization, site layout monitoring and problem identification, material quantity takeoff, and a demo application for earthworks planning.

To integrate as-planned and as-built information of the construction site, as-planned models, sequentially acquired unordered images and the 3D GIS system are integrated together in Google Earth to provide stakeholders with a transparent construction site through integrated and visualized information management and sharing. To present the construction milestones, the as-planned model was divided into five parts as shown in Fig. 7. A time stamp was associated with each part to indicate the start date of the relevant activities. To monitor the changing construction site, sequential imageries were captured along with the construction progress. The first imagery, with 160 aerial images, was captured with an Inspire One Pro UAV manufactured by DJI (the expense of this UAV system is around CAD $5,000) without any ground control points.


Fig. 14. Distribution of images captured at different time with different sensors on the site.

In addition, two sets of ground imageries were captured later with a mobile phone and a digital camera separately. The distribution of the images captured with different sensors at different time periods on the construction site is presented in Fig. 14. It is noteworthy that ground images are much sparser than aerial images, as shown in the figure. To make the proposed approach more flexible and applicable on site, no ground control points were used for image data processing. The first aerial imagery was processed automatically using Pix4D. The images with optimized locations and orientations were used as references for the other images. A 300-megabit stitched image with 0.012 m ground resolution was generated from 116 original aerial images. The total time to obtain the results was about four hours on a desktop PC with eight 3.4 GHz CPU cores and 16 GB of RAM. Because there was only limited overlap between the ground imageries and the aerial photos, the ground imageries were sequentially added to the framework through manual selection of tie points. The estimated poses of all of these images are also presented in Fig. 14, together with a 3D mesh model reconstructed from the images.

A Python program was implemented to automatically generate the KML for photo alignment based on the results obtained from the SfM software. The images were stored in a Dropbox folder and linked to the KML with hyperlinks. The final size of the KML document is less than 100 KB, which makes information sharing and distribution much easier. The final visualization of the collected data is presented in Fig. 15. From the mosaic overlay presented in Fig. 15 (a), detailed interactions of the construction site with the surrounding parks (blue) and the neighborhood (yellow) can be readily observed. The only available access to the site is also marked in red. The time slider of Google Earth can be used to filter data with timestamps, such as the images in Fig. 15 (b) and (c), and to animate the as-planned construction process as presented in Fig. 15 (d). By changing the time window, the user is able to check the as-planned construction progress from the animation of the 3D model and the actual construction progress from the images captured during the time period. Users can walk through the virtual construction (Fig. 15 (e)) and check the actual progress of particular parts through the AR view by mouse clicks on specific images (Fig. 15 (f)).

Fig. 15. Integrated management and visualization of residential project data.


Fig. 16. Site layout planning and volume estimation.

With accurate alignment, the AR view is capable of rendering a visual comparison between as-planned models and the actual progress captured in the images. Considering the challenges in existing automated progress tracking, the proposed method is more practical and efficient, enhancing human cognitive experiences through efficient information retrieval, information integration and visualized comparison.

Beyond visual analysis, analytical simulation or optimization for work planning requires knowledge of the practical constraints on the construction site in order to make a relevant problem definition. Taking site layout planning of the project as an example, it is easy to identify one access road (pattern fill), four storage areas (solid lines) and three stockpiles (dashed lines) on the construction site from the high resolution panoramic image overlay presented in Fig. 16. It is noticeable that storage areas B and D are too close to stockpiles A and C. This may result in material re-handling if the storage affects the earthmoving operations, or present potential hazards when both the earthmoving crew and the carpenter crew are working in the same area. These constraints provide basic inputs for advanced planning analysis by operation simulation, constraint programming and other analytical methods.

On the other hand, lengths, areas and volumes can be easily acquired from the dense 3D reconstruction of the construction site. The 3D reconstruction can thus be used for quantity takeoff and status updates for certain activities, for example earthworks. Taking the presented case as an instance, the volume of the stockpiles can be precisely estimated from the 3D reconstruction using Pix4D, as presented in Fig. 17 (a). The relative positioning accuracy was evaluated using the width of the paved road in front of the house. The average width out of 20 measurements is 8.008 m; detailed measurements can be found in Fig. 17 (b). Compared with the actual width of 8 m, the average error is about 8 mm and the standard deviation is around 29 mm. Note that the absolute positioning accuracy was not evaluated in this research due to the lack of ground truth; however, the visualization effect also indicates that the precision is acceptable for construction planning and monitoring purposes.

Involving heavy equipment, earthmoving operations are heavily constrained by accessibility and site layouts. Timely volume estimation is also essential for earthwork operations to maintain cut-fill balance and reduce cost, especially for large scale projects with soil property uncertainties.

Fig. 17. Volume estimation from automated 3D reconstruction and relative positioning accuracy evaluation.


Though an automated temporal-spatial conflict-free earthwork plan generation method [64] has been proposed, the applicability of the plan relies on precise modeling of the construction site, which includes: 1) the availability of access roads, such as the access road and the paved road in this case; 2) the unit cost of hauling on particular roads, because the costs on a rough grading road and a paved road are considerably different; 3) reserved areas, such as the storage areas and the house under construction, which have higher grading priority; and 4) the cut/fill layout and volume estimation. With the proposed system, qualitative constraints including accesses and layouts can be easily identified from the information management system, and quantitative estimates can be quickly obtained from the dense 3D reconstruction of the site. The proposed system can also provide useful inputs for other analytics including operation simulation, layout planning, environmental impact analysis and so on.

7. Conclusion

Information in various data formats is collected during the life cycle of a construction project in support of making critical decisions in construction estimating, planning, control and operations. However, fragmented data without a complete view and an information visualization system severely increases the cost of information extraction while delaying the decision making process. This paper presents a methodology to integrate unordered images, geometric models, and the surrounding environment of a construction site with a visualization approach based on Google Earth and Keyhole Markup Language (KML), aimed at improving the cognitive perception of the site environment in support of decision making. The research has made it clear that unordered imageries can be aligned within the physical coordinate system with acceptable accuracy using a flexible approach without ground control. In addition, sequential ground imageries captured during different time periods can be aligned precisely taking aerial photos as references. The proposed method also demonstrates high potential for the integration of large volumes of geo-referenced images, models and high resolution panoramic views of construction sites using KML and cloud storage. Through integration with Google Earth, the structure being built can be rendered within the spatial context of the existing 3D surrounding environment provided by Google Earth. This enables a much more realistic perception of the construction site and supports sufficient work planning. By integrating as-planned models, actual images and AR technology, the research deliverable provides a cost effective methodology to check construction progress and conduct visual analytics. To manage huge amounts of data, especially images, data sources are stored on the cloud and accessed with permanent hyperlinks in KML. Associated with time stamps, as-planned information and as-built information can be quickly retrieved for exploration with a time period filter. The various interaction interfaces provided by Google Earth give users a more comprehensive understanding of the construction site through virtual reality and augmented reality visualization of multi-source information.

Two cases are given to demonstrate the potential of applying KML to the integration of fragmented information in varying data formats in the construction industry. The first case demonstrates the integration of the as-planned model and 3D GIS during the pre-project planning process of a viaduct construction project in a congested urban area; the other demonstrates the integration of unordered images, as-planned models, and 3D GIS for site layout management, quantity estimation, and progress checking in a residential construction project.

The limitations of the proposed method are as follows. First, as a method focusing on visualization, KML provides limited support for geometric model representation; 3D models need to be divided into separate 3D models of individual parts in order to show the project progress. Second, the KML document for as-planned model animation is not generated from 4D BIM software in this research. Future research is recommended on the integration with 4D BIM and the automated generation of the as-planned KML animation by developing relevant plugins. Third, the absolute positioning accuracy of the ground-control-free approach is not evaluated due to the limited availability of required data in the present research. Fourth, the reconstructed mesh models from images are not incorporated into the system because the huge numbers of vertices and edges in the mesh demand a sophisticated LOD for effective visualization in Google Earth. Fifth, due to restrictions on available projects, the research did not conduct a complete field test integrating unordered images and as-planned models on a larger scale. In-depth utilization of the system on specific tasks together with analytical approaches, such as automated earthworks operations planning, will be a worthwhile endeavor in the future. Sixth, though very positive and optimistic feedback has been received from the industry, the research did not quantitatively evaluate the effectiveness of the proposed system on an actual ongoing site. It would be an interesting and important pursuit down the path to measure and confirm the effectiveness of applying the proposed innovative technologies in the field.

Acknowledgement

Acknowledgement

This work was supported by Mitacs through the Mitacs-Accelerate program and the Ledcor Group of Companies (IT06594). Rod Wales, vice-president of operations for Ledcor Infrastructure of the Ledcor Group Canada, is sincerely acknowledged for sharing his experience, insight and vision, and for lending support and guidance to the research team.

References

[1] R. Kuenzel, J. Teizer, M. Mueller, A. Blickle, SmartSite: intelligent and autonomous environments, machinery, and processes to realize smart road construction projects, Autom. Constr. (2016), http://dx.doi.org/10.1016/j.autcon.2016.03.012.
[2] H.C. Howard, R.E. Levitt, B.C. Paulson, J.G. Pohl, C.B. Tatum, Computer integration: reducing fragmentation in AEC industry, J. Comput. Civ. Eng. 3 (1989) 18–32, http://dx.doi.org/10.1017/CBO9781107415324.004.
[3] N. Dawood, A. Akinsola, B. Hobbs, Development of automated communication of system for managing site information using internet technology, Autom. Constr. 11 (2002) 557–572, http://dx.doi.org/10.1016/S0926-5805(01)00066-8.
[4] Y. Zhu, R.R.A. Issa, Viewer controllable visualization for construction document processing, Autom. Constr. 12 (2003) 255–269, http://dx.doi.org/10.1016/S0926-5805(02)00089-4.
[5] V. Peansupap, D.H.T. Walker, Factors enabling information and communication technology diffusion and actual implementation in construction organisations, J. Inf. Technol. Constr. 10 (2005) 193–218, http://www.itcon.org/2005/14.
[6] C.H. Caldas, L. Soibelman, L. Gasser, Methodology for the integration of project documents in model-based information systems, J. Comput. Civ. Eng. 19 (2005) 25–33, http://dx.doi.org/10.1061/(ASCE)0887-3801(2005)19:1(25).
[7] T. Le, H.D. Jeong, Interlinking life-cycle data spaces to support decision making in highway asset management, Autom. Constr. 64 (2016) 54–64, http://dx.doi.org/10.1016/j.autcon.2015.12.016.
[8] E.F. Finch, R. Flanagan, L.E. Marsh, Electronic document management in construction using auto-ID, Autom. Constr. 5 (1996) 313–321, http://dx.doi.org/10.1016/S0926-5805(96)00156-2.
[9] T. Omar, M.L. Nehdi, Data acquisition technologies for construction progress tracking, Autom. Constr. 70 (2016) 143–155, http://dx.doi.org/10.1016/j.autcon.2016.06.016.
[10] S. El-Omari, O. Moselhi, Integrating automated data acquisition technologies for progress reporting of construction projects, Autom. Constr. 20 (2011) 699–705, http://dx.doi.org/10.1016/j.autcon.2010.12.001.
[11] A. Bradley, H. Li, R. Lark, S. Dunn, BIM for infrastructure: an overall review and constructor perspective, Autom. Constr. 71 (2016) 139–152, http://dx.doi.org/10.1016/j.autcon.2016.08.019.
[12] Z. Mallasi, Dynamic quantification and analysis of the construction workspace congestion utilising 4D visualisation, Autom. Constr. 15 (2006) 640–655, http://dx.doi.org/10.1016/j.autcon.2005.08.005.
[13] A. Retik, A. Shapira, VR-based planning of construction site activities, Autom. Constr. 8 (1999) 671–680, http://dx.doi.org/10.1016/S0926-5805(98)00113-7.
[14] Y. Zhou, L.Y. Ding, L.J. Chen, Application of 4D visualization technology for safety management in metro construction, Autom. Constr. 34 (2013) 25–36, http://dx.doi.org/10.1016/j.autcon.2012.10.011.
[15] P.P.A. Zanen, T. Hartmann, S.H.S. Al-Jibouri, H.W.N. Heijmans, Using 4D CAD to visualize the impacts of highway construction on the public, Autom. Constr. 32 (2013) 136–144, http://dx.doi.org/10.1016/j.autcon.2013.01.016.
[16] Y.-C. Chang, W.-H. Hung, S.-C. Kang, A fast path planning method for single and dual crane erections, Autom. Constr. 22 (2012) 468–480, http://dx.doi.org/10.1016/j.autcon.2011.11.006.
[17] J. Wang, X. Zhang, W. Shou, X. Wang, B. Xu, M. Jeong, P. Wu, A BIM-based approach for automated tower crane layout planning, Autom. Constr. 59 (2015) 168–178, http://dx.doi.org/10.1016/j.autcon.2015.05.006.
[18] Y. Tan, Y. Song, X. Liu, X. Wang, J.C.P. Cheng, A BIM-based framework for lift planning in topsides disassembly of offshore oil and gas platforms, Autom. Constr. (2017), http://dx.doi.org/10.1016/j.autcon.2017.02.008.

[19] J.I. Kim, J. Kim, M. Fischer, R. Orr, BIM-based decision-support method for master planning of sustainable large-scale developments, Autom. Constr. 58 (2015) 95–108, http://dx.doi.org/10.1016/j.autcon.2015.07.003.
[20] M. Golparvar-Fard, J. Bohn, J. Teizer, S. Savarese, F. Peña-Mora, Evaluation of image-based modeling and laser scanning accuracy for emerging automated performance monitoring techniques, Autom. Constr. 20 (2011) 1143–1155, http://dx.doi.org/10.1016/j.autcon.2011.04.016.
[21] S. El-Omari, O. Moselhi, Integrating 3D laser scanning and photogrammetry for progress measurement of construction work, Autom. Constr. 18 (2008) 1–9, http://dx.doi.org/10.1016/j.autcon.2008.05.006.
[22] C. Kim, B. Kim, H. Kim, 4D CAD model updating using image processing-based construction progress monitoring, Autom. Constr. 35 (2013) 44–52, http://dx.doi.org/10.1016/j.autcon.2013.03.005.
[23] K.K. Han, M. Golparvar-Fard, Appearance-based material classification for monitoring of operation-level construction progress using 4D BIM and site photologs, Autom. Constr. 53 (2015) 44–57, http://dx.doi.org/10.1016/j.autcon.2015.02.007.
[24] Y. Turkan, F. Bosche, C.T. Haas, R. Haas, Automated progress tracking using 4D schedule and 3D sensing technologies, Autom. Constr. 22 (2012) 414–421, http://dx.doi.org/10.1016/j.autcon.2011.10.003.
[25] C. Zhang, D. Arditi, Automated progress control using laser scanning technology, Autom. Constr. 36 (2013) 108–116, http://dx.doi.org/10.1016/j.autcon.2013.08.012.
[26] F. Bosché, M. Ahmed, Y. Turkan, C.T. Haas, R. Haas, The value of integrating scan-to-BIM and scan-vs-BIM techniques for construction monitoring using laser scanning and BIM: the case of cylindrical MEP components, Autom. Constr. 49 (2014) 201–213, http://dx.doi.org/10.1016/j.autcon.2014.05.014.
[27] S. Bang, H. Kim, H. Kim, UAV-based automatic generation of high-resolution panorama at a construction site with a focus on preprocessing for image stitching, Autom. Constr. 84 (2017) 70–80, http://dx.doi.org/10.1016/j.autcon.2017.08.031.
[28] V. Peansupap, D.H.T. Walker, Information communication technology (ICT) implementation constraints: a construction industry perspective, Eng. Constr. Archit. Manag. 13 (2006) 364–379, http://dx.doi.org/10.1108/09699980610680171.
[29] F. Leite, Y. Cho, A.H. Behzadan, S. Lee, S. Choe, Y. Fang, R. Akhavian, S. Hwang, Visualization, information modeling, and simulation: grand challenges in the construction industry, J. Comput. Civ. Eng. 30 (2016) 4016035, http://dx.doi.org/10.1061/(ASCE)CP.1943-5487.0000604.
[30] N. Forcada, M. Casals, X. Roca, M. Gangolells, Adoption of web databases for document management in SMEs of the construction sector in Spain, Autom. Constr. 16 (2007) 411–424, http://dx.doi.org/10.1016/j.autcon.2006.07.011.
[31] M. Ajam, M. Alshawi, T. Mezher, Augmented process model for e-tendering: towards integrating object models with document management systems, Autom. Constr. 19 (2010) 762–778, http://dx.doi.org/10.1016/j.autcon.2010.04.001.
[32] P.T.I. Lam, F.W.H. Wong, K.T.C. Tse, Effectiveness of ICT for construction information exchange among multidisciplinary project teams, J. Comput. Civ. Eng. 24 (2010) 365–376, http://dx.doi.org/10.1061/(ASCE)CP.1943-5487.0000038.
[33] A. Alsaggaf, A. Jrade, A framework for an integrated BIM-GIS decision support model for site layout planning, International Construction Specialty Conference, Vancouver, 2017, p. 182.
[34] X. Wang, P.E.D. Love, M.J. Kim, C.-S. Park, C.-P. Sing, L. Hou, A conceptual framework for integrating building information modeling with augmented reality, Autom. Constr. 34 (2013) 37–44, http://dx.doi.org/10.1016/j.autcon.2012.10.012.
[35] S. Liu, W. Cui, Y. Wu, M. Liu, A survey on information visualization: recent advances and challenges, Vis. Comput. 30 (2014) 1373–1393, http://dx.doi.org/10.1007/s00371-013-0892-3.
[36] X. Wang, M.J. Kim, P.E.D. Love, S. Kang, Augmented reality in built environment: classification and implications for future research, Autom. Constr. 32 (2013) 1–13, http://dx.doi.org/10.1016/j.autcon.2012.11.021.
[37] H.-L. Chi, S. Kang, X. Wang, Research trends and opportunities of augmented reality applications in architecture, engineering, and construction, Autom. Constr. 33 (2013) 116–122, http://dx.doi.org/10.1016/j.autcon.2012.12.017.
[38] J. Chalhoub, S.K. Ayer, Using mixed reality for electrical construction design communication, Autom. Constr. 86 (2018) 1–10, http://dx.doi.org/10.1016/j.autcon.2017.10.028.
[39] M. Golparvar-Fard, F. Pena-Mora, S. Savarese, D4AR – a 4-dimensional augmented reality model for automating construction progress monitoring data collection, processing and communication, J. Inf. Technol. Constr. 14 (2009) 129–153, http://www.itcon.org/paper/2009/13.
[40] S. Meža, Ž. Turk, M. Dolenc, Component based engineering of a mobile BIM-based augmented reality system, Autom. Constr. 42 (2014) 1–12, http://dx.doi.org/10.1016/j.autcon.2014.02.011.
[41] C.S. Park, D.Y. Lee, O.S. Kwon, X. Wang, A framework for proactive construction defect management using BIM, augmented reality and ontology-based data collection template, Autom. Constr. 33 (2013) 61–71, http://dx.doi.org/10.1016/j.autcon.2012.09.010.
[42] X. Wang, M. Truijens, L. Hou, Y. Wang, Y. Zhou, Integrating augmented reality with building information modeling: onsite construction process controlling for liquefied natural gas industry, Autom. Constr. 40 (2014) 96–105, http://dx.doi.org/10.1016/j.autcon.2013.12.003.
[43] Z. Ma, H. Li, Q.P. Shen, J. Yang, Using XML to support information exchange in construction projects, Autom. Constr. 13 (2004) 629–637, http://dx.doi.org/10.1016/j.autcon.2004.04.010.
[44] Y. Song, M.J. Clayton, R.E. Johnson, Anticipating reuse: documenting buildings for operations using web technology, Autom. Constr. 11 (2002) 185–197, http://dx.doi.org/10.1016/S0926-5805(00)00097-2.
[45] K. Afsari, C.M. Eastman, D. Castro-Lacouture, JavaScript object notation (JSON) data serialization for IFC schema in web-based BIM data exchange, Autom. Constr. 77 (2017) 24–51, http://dx.doi.org/10.1016/j.autcon.2017.01.011.
[46] A. Redmond, A. Hore, M. Alshawi, R. West, Exploring how information exchanges can be enhanced through Cloud BIM, Autom. Constr. 24 (2012) 175–183, http://dx.doi.org/10.1016/j.autcon.2012.02.003.
[47] T.W. Kang, Object composite query method using IFC and LandXML based on BIM linkage model, Autom. Constr. 76 (2017) 14–23, http://dx.doi.org/10.1016/j.autcon.2017.01.008.
[48] I. Hijazi, M. Ehlers, S. Zlatanova, U. Isikdag, IFC to CityGML transformation framework for geo-analysis: a water utility network case, 4th International Workshop on 3D Geo-Information, 2009, pp. 123–127, http://dx.doi.org/10.13140/RG.2.1.4623.0246.
[49] R. De Laat, L. Van Berlo, Integration of BIM and GIS: the development of the CityGML GeoBIM extension, Advances in 3D Geo-Information Sciences, 2011, pp. 211–225, http://dx.doi.org/10.1007/978-3-642-12670-3_13.
[50] M. El-Mekawy, Integrating BIM and GIS for 3D City Modelling: the Case of IFC and CityGML, (2010) (urn:nbn:se:kth:diva-28899).
[51] Y. Deng, J.C.P. Cheng, C. Anumba, Mapping between BIM and 3D GIS in different levels of detail using schema mediation and instance comparison, Autom. Constr. 67 (2016) 1–21, http://dx.doi.org/10.1016/j.autcon.2016.03.006.
[52] E.P. Karan, J. Irizarry, Extending BIM interoperability to preconstruction operations using geospatial analyses and semantic web services, Autom. Constr. 53 (2015) 1–12, http://dx.doi.org/10.1016/j.autcon.2015.02.012.
[53] T.W. Kang, C.H. Hong, A study on software architecture for effective BIM/GIS-based facility management data integration, Autom. Constr. 54 (2015) 25–38, http://dx.doi.org/10.1016/j.autcon.2015.03.019.
[54] C.H. Hugenholtz, J. Walker, O. Brown, S. Myshak, Earthwork volumetrics with an unmanned aerial vehicle and softcopy photogrammetry, J. Surv. Eng. 141 (2015) 6014003, http://dx.doi.org/10.1061/(ASCE)SU.1943-5428.0000138.
[55] S. Siebert, J. Teizer, Mobile 3D mapping for surveying earthwork projects using an unmanned aerial vehicle (UAV) system, Autom. Constr. 41 (2014) 1–14, http://dx.doi.org/10.1016/j.autcon.2014.01.004.
[56] N. Snavely, S.M. Seitz, R. Szeliski, Modeling the world from internet photo collections, Int. J. Comput. Vis. 80 (2008) 189–210, http://dx.doi.org/10.1007/s11263-007-0107-3.
[57] Pix4D, https://pix4d.com/, (2017), Accessed date: 10 April 2017.
[58] B. Triggs, P.F. McLauchlan, R.I. Hartley, A.W. Fitzgibbon, Bundle adjustment — a modern synthesis, Vision Algorithms: Theory and Practice, 1883, 2000, pp. 298–372, http://dx.doi.org/10.1007/3-540-44480-7_21.
[59] COLLADA, https://www.khronos.org/collada/, (2017), Accessed date: 3 April 2017.
[60] GDAL, http://gdal.org/, (2017), Accessed date: 10 April 2017.
[61] G.E. Gibson Jr., J.H. Kaczmarowski, H.E. Lore Jr., Preproject-planning process for capital facilities, J. Constr. Eng. Manag. 121 (1995) 312–318, http://dx.doi.org/10.1061/(ASCE)0733-9364(1995)121:3(312).
[62] Mitacs, Innovative use of 3D technology leaps construction projects into future, Mitacs, 2017, http://mitacs.ca/en/newsroom/news-release/innovative-use-3dtechnology-leaps-construction-projects-future, Accessed date: 11 January 2018.
[63] Rod Wales, Using 3D to help projects leap into the future - Construction Canada, https://www.constructioncanada.net/using-3d-help-projects-leap-future/, (2017), Accessed date: 13 January 2018.
[64] D. Li, M. Lu, Automated generation of work breakdown structure and project network model for earthworks project planning: a flow network-based optimization approach, J. Constr. Eng. Manag. 143 (2016) 4016086, http://dx.doi.org/10.1061/(ASCE)CO.1943-7862.0001214.
