Implementation of ultra-light UAV systems for cultural heritage documentation


Journal of Cultural Heritage xxx (2020) xxx–xxx. Available online at ScienceDirect (www.sciencedirect.com).



Original article

Tolga Bakirman a,∗, Bulent Bayram b, Burak Akpinar b, M. Fahri Karabulut b, Onur Can Bayrak b, Alper Yigitoglu c, Dursun Zafer Seker d

a Research and Application Center for Satellite Communications and Remote Sensing, Istanbul Technical University, 34469 Maslak, Istanbul, Turkey
b Geomatics Engineering, Yildiz Technical University, 34220 Esenler, Istanbul, Turkey
c Motif Harita, 34805, Istanbul, Turkey
d Geomatics Engineering, Istanbul Technical University, 34469 Maslak, Istanbul, Turkey

Article info

Article history: Received 2 May 2019; Accepted 10 January 2020; Available online xxx

Keywords: Ultra-light drone; Documentation; Cultural heritage; SfM; Photogrammetry; Point cloud; Low-cost UAV

Abstract

New technologies, such as unmanned aerial vehicles (UAVs) and terrestrial laser scanning systems, provide opportunities for the digital documentation of cultural heritage. Although ultra-light drones (ULDs) are usually used for recreational activities, in this study we examined their suitability for digital documentation. We investigated the efficiency of a ULD for the documentation of a selected historical building. The structure from motion (SfM) method was used to create three-dimensional point cloud data of the building using a ULD and a low-cost UAV, and the resulting point clouds were compared with terrestrial laser scanner data. The maximum standard deviations were calculated as 0.62 cm and 1.87 cm for the low-cost UAV and the ULD, respectively. The results show that ULDs can be used under suitable circumstances as a low-cost alternative for cultural heritage documentation. © 2020 Elsevier Masson SAS. All rights reserved.

Research aims

Historical monuments and buildings are vulnerable to disasters, tourism, and human actions; therefore, the digital documentation of cultural heritage is essential for their preservation and restoration. The use of unmanned aerial vehicles for this purpose is time-saving compared to traditional methods. In this study, we aim to provide a very low-cost solution for cultural heritage documentation using an ultra-light drone. We explore the implementation and usability of structure-from-motion photogrammetry to produce a 3D point cloud and orthoimage from ultra-light-drone images, and we investigate the accuracy of the obtained outputs against a terrestrial laser scanner. We also discuss the benefits and disadvantages of ultra-light drones with respect to consumer-grade low-cost unmanned aerial vehicles and terrestrial laser scanners.

Introduction

Historical buildings are an essential part of cultural heritage. They reflect the architecture, social structure, development, and the

∗ Corresponding author. E-mail address: [email protected] (T. Bakirman).

lifestyle of a society. Cultural heritage documentation is a vital process in the conservation and sustainable management of cultural monuments [1]. With the latest advances in surveying technology, three-dimensional (3D) digital documentation allows us to understand the present state of structures and conduct structural analysis [2]. Traditional photogrammetric methods have long been used for digital documentation. Although the advantages of digital close-range photogrammetry have been noted in many studies [3–5], unmanned aerial vehicles (UAVs) and structure from motion (SfM) photogrammetry have created opportunities in many applications [6]. Digital documentation studies with UAVs have become increasingly popular because UAVs are a low-cost alternative. Due to the rapid development of the electronic sensors and instruments used in photogrammetry, the potential of this technique for digital documentation has increased as these sensors have been implemented on low-cost UAVs. Further details regarding UAV systems for photogrammetric mapping and accuracy assessment were provided by Agüera-Vega et al. [7]. The integration of computer vision and photogrammetry has increased the range of possible applications for object modelling, 3D surface generation, and point cloud generation, as well as 3D documentation of cultural heritage sites and buildings. An alternative to conventional measurement techniques,

https://doi.org/10.1016/j.culher.2020.01.006 1296-2074/© 2020 Elsevier Masson SAS. All rights reserved.

Please cite this article in press as: T. Bakirman, et al., Implementation of ultra-light UAV systems for cultural heritage documentation, Journal of Cultural Heritage (2020), https://doi.org/10.1016/j.culher.2020.01.006


Fig. 1. Study area.

SfM photogrammetry provides the ability to generate 3D point clouds using unsorted images captured by cameras [8,9]. The SfM algorithm generates dense point clouds of objects, including historical monuments. The combination of SfM and UAV technology with existing terrestrial photogrammetry comprehensively covers the structure of high and unreachable historical buildings, so that a cultural heritage site and its surroundings can be fully modelled without any missing parts [10].

Different types of UAVs have been proposed for digital documentation. Xu et al. [11] used a camera-equipped UAV system and a terrestrial laser scanning (TLS) system for the 3D reconstruction of complex monuments. Bolognesi et al. [12] analyzed the potential and constraints of using low-cost UAVs in the modelling of cultural heritage using a DJI Phantom 2, with an accuracy of approximately 2 cm. Hidayat and Cahyono [10] used a DJI Phantom 2 and terrestrial images to document Singosari Temple. Fernández-Lozano and Gutiérrez-Alonso [13] investigated the performance of a low-cost UAV system for creating digital elevation models (DEMs) for the analysis of archaeological remains. Mesas-Carrascosa et al. [37] studied the system specifications required of UAVs for archaeological applications. Deng et al. [15] proposed a framework for the 3D reconstruction of heritage sites using a small unmanned helicopter. Chiabrando et al. [16] created a 3D model of an ancient village building damaged by an earthquake using a multi-sensor 3D survey system consisting of SLAM-based LiDAR and a UAV. Murtiyoso et al. [17] recorded the 19th-century St-Pierre-le-Jeune church in Strasbourg, France, using the Albris and DJI Phantom 3 commercial UAV systems. A cultural heritage site was modelled using HTC Desire and Meizu M3 MAX smartphones by Shults et al. [18]. To avoid gaps in the created model, UAV images have been integrated with terrestrial images using a FocusDrone UAV for the upper areas of a cultural heritage site, with 9 mm accuracy. Yastikli and Özerdem [19] used the low-cost GoPro Hero 4 fisheye-lens camera and the 3DR Solo UAV, and generated a solid model of Otag-i Humayun, which was also selected as our study area. Murtiyoso et al. [20] generated accurate 3D point cloud data of a historical church in the city centre of Strasbourg, France, using a low-cost UAV.

To the best of our knowledge, few studies have used ultra-light drones (ULDs) for the digital documentation of cultural heritage [21]. In our study, we investigated the reliability of a ULD system, relative to low-cost commercial UAVs, for historical building documentation. The most appropriate distribution and number of ground control points (GCPs) were considered to obtain similar results with both the ULD and the low-cost UAV system.

Table 1. Technical specifications of the devices used.

Parameter            UAV                    ULD
Name                 DJI Phantom 4 Pro      Corby Drone CX012
Camera resolution    20 MP                  0.3 MP
Image resolution     5472 × 3648            480 × 640
Focal length         f/2.8                  –
Weight               1388 g                 50–100 g
Flight range         3500 m                 50–70 m
Flight duration      30 min                 5–6 min
Max. speed           72 km/h                5–10 km/h



Fig. 2. GCPs, CPs and MTPs for (a) DJI and (b) ULD.

Table 2. DJI and ULD bundle adjustment results and average projection errors.

DJI — GCP (total 25), CP (total 23):

            GCP X (m)   GCP Y (m)   GCP Z (m)   CP X (m)    CP Y (m)    CP Z (m)
Mean        −0.00191    −0.00017    −0.00175    −0.00291    −0.00403    0.000886
σ           0.00923     0.006097    0.016392    0.011233    0.009731    0.012642
RMSE        0.009424    0.006099    0.016485    0.011603    0.010534    0.012673

Overall mean error: 2.6 mm (GCP), 5.1 mm (CP). Number of MTPs: 1; average projection error: 0.585 pixel; GSD: 0.81 cm.

ULD — GCP (total 72), CP (total 38):

            GCP X (m)   GCP Y (m)   GCP Z (m)   CP X (m)    CP Y (m)    CP Z (m)
Mean        −0.00085    0.001214    −0.00061    0.001013    0.005785    −0.00094
σ           0.018826    0.022556    0.019121    0.01465     0.014902    0.016624
RMSE        0.018845    0.022588    0.019131    0.014685    0.015986    0.01665

Overall mean error: 1.6 mm (GCP), 6.0 mm (CP). Number of MTPs: 20; average projection error: 1.02235 pixels; GSD: 1.16 cm.
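As a cross-check on per-axis statistics like those in Table 2, the mean, standard deviation, and RMSE of control/check point residuals satisfy RMSE² = mean² + σ² when σ is the population standard deviation. A minimal sketch (the residual values in the usage note are illustrative, not taken from the paper):

```python
import numpy as np

def residual_stats(residuals):
    """Per-axis statistics of control/check point residuals.

    Returns (mean, sigma, rmse), where sigma is the population standard
    deviation, so that rmse**2 == mean**2 + sigma**2 holds exactly.
    """
    r = np.asarray(residuals, dtype=float)
    mean = r.mean()
    sigma = r.std()                    # population standard deviation
    rmse = np.sqrt(np.mean(r ** 2))    # root mean square error
    return mean, sigma, rmse
```

For the DJI GCP X column in Table 2, for instance, √(0.00191² + 0.00923²) ≈ 0.0094 m, consistent with the reported RMSE.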


Table 3. Comparison of statistics of all regions for DJI and ULD.

              DJI (cm)                     ULD (cm)
              Max    Min    Mean   SD      Max     Min    Mean   SD
Region 1      5.00   0.00   0.38   0.30    20.00   0.00   1.86   1.87
Region 2      5.00   0.00   0.52   0.62    21.25   0.01   2.67   1.50
Region 3      5.00   0.00   0.55   0.31    19.97   0.00   1.01   0.93
Region 4      4.79   0.01   0.66   0.35    15.47   0.01   1.30   1.27
Region 5      4.00   0.01   0.84   0.40    17.93   0.01   1.22   1.03

Materials and methods

Study area and data acquisition

In this study, Otağ-ı Humayun (Sultan's Pavilion), located in Davutpaşa, Istanbul, Turkey, was selected as the study area. The pavilion was built in the Ottoman period by Grand Vizier Koca Davut Pasha in 1483. Otağ-ı Humayun is made of smooth cut stone and has a domed structure with a square plan and two floors. Currently, the structure stands within the Davutpaşa Campus of Yildiz Technical University. Otağ-ı Humayun was originally located on the caravan road (Via Egnatia) connecting Istanbul to Edirne. It was partly demolished and rebuilt during the reign of Sultan Süleyman the Magnificent; construction was restarted by Sultan Mehmet III (1595–1603) and completed during the reign of Sultan Ahmed I (1603–1617) [22]. The building was restored by Yildiz Technical University in 2010 (Fig. 1). In this study, we focused only on the front façade of Otağ-ı Humayun, as it has more complex details than the others.

Images were taken using a DJI Phantom 4 Pro (DJI) and a Corby Drone CX012 (ULD). We preferred the term 'ultra-light' since the ULD's price is around 0.2% that of professional UAVs. Some technical specifications of the drones are provided in Table 1. A total of 240 and 839 images were captured via DJI and ULD, respectively. The flight altitude was 5–70 m for DJI and 3–15 m for ULD, and the average object distance for ULD was 10 m. The sidelap and along-track overlap were 85% and 40%, respectively, for both drones.

TLS data were used as reference for the point cloud comparison and to create GCPs for point cloud generation via the SfM method. The TLS data were collected using a Faro X 130, which has a range of up to 130 m; its specified systematic measurement (ranging) error between 10 and 25 m is ±2 mm (1σ). A complete point cloud of the front façade was created from 11 scanning sessions.

Methods

The commercial Pix4D software, which is based on the SfM method, was used for point cloud generation. This method has been implemented in many applications, such as geomorphologic analysis [23], land relief determination [24], DEM creation [8], archaeology [25], and agriculture [26]. Unlike traditional photogrammetry, SfM does not require known interior orientation parameters: the calibration and orientation parameters are calculated, given a sufficient number of GCPs, by iterative bundle adjustment using automatically extracted matched features from multiple overlapping two-dimensional (2D) images [8,27].

SfM starts with feature detection, followed by a matching step. This is solved using the scale-invariant feature transform (SIFT) feature detection algorithm [28]. Although the SIFT algorithm is included in various computer vision applications, several modified SIFT algorithms have been proposed to avoid its computational complexity, especially for real-time applications [29]. The SIFT algorithm comprises four main processes: scale-space extrema detection, key point localization, orientation assignment, and key point descriptor computation. Scale-space extrema detection identifies potential points of interest by searching over all scales and locations using the difference-of-Gaussian function in Eq. (1) [30]:

D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) ∗ I(x, y) = L(x, y, kσ) − L(x, y, σ)    (1)

where G(x, y, σ) is a variable-scale Gaussian, I(x, y) is the input image, L(x, y, σ) is the scale space of the image, and ∗ is the convolution operation. Key point localization is the next process, in which the location and scale of each potential feature are determined by fitting a 3D quadratic function [31]. Then, an orientation is assigned to each key point by applying Eq. (2):

m(x, y) = √[(L(x + 1, y) − L(x − 1, y))² + (L(x, y + 1) − L(x, y − 1))²]

θ(x, y) = tan⁻¹[(L(x, y + 1) − L(x, y − 1)) / (L(x + 1, y) − L(x − 1, y))]    (2)

where L(x, y) is an image sample, m(x, y) is the gradient magnitude, and θ(x, y) is the orientation. Finally, unique descriptors for each key point are computed to conclude the SIFT process.

Following the SIFT process, key point descriptors are matched between each pair of images using approximate nearest-neighbour search [32]. Tracks, which connect key points across multiple images, are created from these matches. Finally, the extracted key point correspondences are used to reconstruct the 3D scene via triangulation; the result is generally called the sparse point cloud. Dense point clouds can then be generated by exploiting different multi-view stereo algorithms, at resolutions down to one point per pixel [33]. Pix4D uses a patch-based matching algorithm for point cloud densification with the following process:

- The software takes an automatic tie point and selects the image closest to it.
- It then creates a patch (a 7 × 7 or 9 × 9 pixel window).
- All the images containing that patch are computed and the best ones are selected.
- The software then tries to fill all of the cells by matching them with the selected images [34].

The distributions of the GCPs, check points (CPs), and manual tie points (MTPs) used for DJI and ULD are depicted in Fig. 2. All GCPs and CPs were measured from TLS data in local coordinates. Since the number and distribution of GCPs are crucial for calculating the interior and exterior orientation parameters [14], the dense point cloud from the DJI drone was generated using 25 GCPs, 23 CPs, and 1 MTP. We tested whether the same GCPs could be used for the ULD. However, due to the field of view (FoV) and picture size of the ULD's camera, more GCPs and CPs were needed to approximate the DJI results. The


Fig. 3. Raw point clouds of (a) TLS, (b) DJI, and (c) ULD.


Fig. 4. Divided regions used for statistical analysis of the DJI point cloud.

numbers of GCP, CP, and MTP were 72, 38, and 20 for the ULD, respectively.
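The SIFT building blocks in Eqs. (1) and (2) can be sketched in a few lines of NumPy. This is a single-scale illustration only; the kernel radius, σ, and k below are illustrative choices, not Pix4D's internal settings:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian convolution with edge padding (same-size output)."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()
    padded = np.pad(img, radius, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, rows)

def difference_of_gaussians(img, sigma, k=np.sqrt(2.0)):
    """Eq. (1): D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma)."""
    return gaussian_blur(img, k * sigma) - gaussian_blur(img, sigma)

def gradient_orientation(L):
    """Eq. (2): gradient magnitude m(x, y) and orientation theta(x, y)
    from central finite differences of the smoothed image L."""
    dx = np.zeros_like(L)
    dy = np.zeros_like(L)
    dx[:, 1:-1] = L[:, 2:] - L[:, :-2]   # L(x+1, y) - L(x-1, y)
    dy[1:-1, :] = L[2:, :] - L[:-2, :]   # L(x, y+1) - L(x, y-1)
    return np.sqrt(dx**2 + dy**2), np.arctan2(dy, dx)
```

Extrema of the difference-of-Gaussians response across scales yield the candidate key points; the orientation map lets each key point's descriptor be expressed relative to a dominant direction.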

Results and discussion

This study was an experimental investigation into the usability of a ULD for cultural heritage documentation. Low-cost UAVs are widely used for various applications such as digital orthophoto generation, target tracking, the film industry, and recreational activities. To the best of our knowledge, this is the first time a ULD has been used for the documentation of a cultural structure. In this study, state-of-the-art computer vision and photogrammetric point cloud generation methods were applied to obtain point cloud data from both UAVs and to test the efficiency of the ULD.

Bundle adjustment results were used to obtain a dense point cloud for both UAVs. The bundle adjustment results for the GCPs and CPs and the average projection errors of the MTPs are listed in Table 2. These results provide preliminary insight into the stability of the generated point cloud data. Table 2 shows that the mean errors for DJI were 2.6 and 5.1 mm for GCPs and CPs, respectively; for ULD, they were 1.6 and 6.0 mm. These results indicate that more GCPs are required for the ULD system to achieve results similar to those of DJI. The root mean square errors (RMSEs) were acceptable for both systems relative to their calculated ground sampling distances (GSDs). Considering the minimum and maximum flight altitudes of both systems (5–70 m for DJI and 3–15 m for ULD), the theoretical GSD values for DJI were 0.17, 0.28, 0.85, and 3.95 cm for 3, 5, 15, and 70 m flight altitudes, respectively. Since the camera parameters of the ULD system are unknown, its GSD was evaluated against the DJI values. The calculated average GSD of the DJI system matches the theoretical GSD for a 15 m flight altitude (Table 2), and the calculated GSD of the ULD system is close to the DJI value for the same altitude. The TLS and generated dense point cloud data are depicted in Fig. 3.
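Theoretical GSD values like those above follow from the usual pinhole relation GSD = pixel pitch × altitude / focal length. A sketch with placeholder sensor parameters (the pitch and focal length used in the test are hypothetical, chosen only to show the linear scaling with altitude):

```python
def ground_sampling_distance(pixel_pitch_mm, focal_mm, altitude_m):
    """Ground footprint of one pixel, in centimetres.

    pixel_pitch_mm: physical size of one sensor pixel (mm) - placeholder value
    focal_mm:       lens focal length (mm) - placeholder value
    altitude_m:     flight altitude above the object (m)
    """
    # similar triangles: ground size / altitude = pixel size / focal length
    return pixel_pitch_mm / focal_mm * altitude_m * 100.0  # metres -> cm
```

Because the relation is linear in altitude, the DJI values quoted in the text (0.17, 0.28, 0.85, and 3.95 cm at 3, 5, 15, and 70 m) all correspond to roughly the same centimetres-per-metre factor.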
The statistical outlier removal (SOR) method was implemented for noise removal from point cloud data. According to Rusu et al.

[35], this method is based on the calculation of the mean distance d̄_i between a point p_i = (x_i, y_i, z_i) and its k nearest neighbouring points n_j, as in Eq. (3):

d̄_i = (1/k) Σ_{j=1}^{k} ‖p_i − n_j‖    (3)

Let n be the number of points in the cloud. The average of the mean distances d̄_i and their standard deviation are calculated using Eqs. (4) and (5), respectively:

μ = (1/n) Σ_{i=1}^{n} d̄_i    (4)

σ = √[ (1/(n − 1)) Σ_{i=1}^{n} (d̄_i − μ)² ]    (5)

The distance threshold is calculated using Eq. (6):

t = μ + mul × σ    (6)

where mul is a multiplier of the standard deviation, defined empirically by the user; increasing mul decreases the sensitivity of the method. In this study, the number of neighbouring points and mul were set to 6 and 1, respectively, for both generated point clouds.

The point clouds were compared by taking the TLS point cloud as reference data. The TLS–ULD and TLS–DJI pairs were compared using the CloudCompare [36] open-source point cloud processing software. This software uses an octree structure, corresponding to a recursive partition of a cubical volume of space, to calculate the nearest-neighbour distance between the reference and compared point clouds: for each point of the compared cloud, it searches for the nearest point in the reference cloud and calculates the Euclidean distance. This approach is sufficient as long as the reference data are dense enough; otherwise, a mesh model must be created to serve as a reference surface, although characteristics of the data may be lost during interpolation. In our case, the TLS data, which contained around 180 million points, were satisfactory for this type of distance calculation.
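Eqs. (3)–(6) can be sketched directly in NumPy; brute-force neighbour search is used here for clarity (real clouds need a k-d tree or octree), and the defaults k = 6, mul = 1 mirror the values chosen in this study:

```python
import numpy as np

def sor_filter(points, k=6, mul=1.0):
    """Statistical outlier removal following Eqs. (3)-(6) [35].

    For each point p_i, the mean distance d_i to its k nearest neighbours
    is computed (Eq. 3); points whose d_i exceeds t = mu + mul * sigma
    (Eq. 6), where mu and sigma are the mean and standard deviation of all
    d_i (Eqs. 4-5), are removed.
    """
    pts = np.asarray(points, dtype=float)
    # (n, n) pairwise Euclidean distances (brute force, for small clouds)
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    knn = np.sort(dist, axis=1)[:, 1:k + 1]   # k nearest, excluding self
    d_mean = knn.mean(axis=1)                 # Eq. (3)
    mu = d_mean.mean()                        # Eq. (4)
    sigma = d_mean.std(ddof=1)                # Eq. (5)
    t = mu + mul * sigma                      # Eq. (6)
    return pts[d_mean <= t]
```

A point far from a tight cluster inflates its own mean neighbour distance well past the threshold and is discarded, while the cluster itself survives.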


Fig. 5. The distributions of the distances for DJI (a–e) and ULD (f–j) for regions 1–5, respectively.


Fig. 6. Comparisons for (a) DJI and (b) ULD with TLS.

To compare the DJI and ULD point cloud data with TLS, the façade was divided into five regions, as shown in Fig. 4. The 3D distances between the TLS and DJI and between the TLS and ULD point cloud data were calculated using the nearest-neighbour method. The maximum, minimum, and average distances and the standard deviations are listed in Table 3, and the distributions of the calculated distances are provided in Fig. 5 for all regions. Since we did not use a reference surface and performed comparisons using cloud-to-cloud distance, the distances were calculated as absolute values. The maximum distance values in Fig. 5 are 5 and 20 cm for DJI and ULD, respectively. The maximum standard deviations among all selected regions were calculated as 0.62 and 1.87 cm, for DJI in Region 2 and ULD in Region 1, respectively. The mean distances for DJI were similar across all five regions, with the maximum mean value in Region 5. ULD also produced relatively similar results between all regions, except Region 2, as shown in Fig. 6.

Fig. 6 shows the general distribution of the measured distances for both systems. Blue represents the minimum distances, which were 0.0023 cm for DJI and 0.0031 cm for ULD; red indicates the maximum distances, which were 1 cm for DJI and 4 cm for ULD.

In addition to investigating the spatial reliability of both systems, we also considered textural quality. For this purpose, digital orthophotos of Region 3 were created. The results are presented in Fig. 7, which shows that the orthophoto generated from the DJI system (Fig. 7b) has richer texture than that from the ULD (Fig. 7c). However, the textural quality of the ULD orthophoto cannot be ignored: sharp contours, the main characteristic structural details, and the transitions between stones are all distinguishable. The differences between the same details were measured on superimposed TLS- and ULD-derived orthophotos; the measured distances can be seen in Fig. 8.
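Assuming the reference cloud is dense enough, the cloud-to-cloud comparison summarised in Table 3 reduces to one nearest-neighbour query per compared point (CloudCompare accelerates this with an octree; the brute-force version below is a sketch for small clouds):

```python
import numpy as np

def cloud_to_cloud_stats(compared, reference):
    """Absolute nearest-neighbour Euclidean distances from each point of the
    compared cloud to the reference cloud, summarised as in Table 3."""
    c = np.asarray(compared, dtype=float)
    r = np.asarray(reference, dtype=float)
    # (n_compared, n_reference) pairwise distances via broadcasting
    d = np.linalg.norm(c[:, None, :] - r[None, :, :], axis=2)
    nn = d.min(axis=1)          # distance to the closest reference point
    return {"max": nn.max(), "min": nn.min(),
            "mean": nn.mean(), "sd": nn.std(ddof=1)}
```

The distances are non-negative by construction, which is why the comparison yields absolute values when no signed reference surface is used.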


Fig. 7. Digital orthophotos of Region 3: (a) TLS, (b) DJI, and (c) ULD.

Conclusions

Cultural heritage objects are witnesses to history, connecting the past and the present; their documentation is therefore indispensable for understanding history and building the future. Historical documentation using low-cost UAV-based images is significantly time-saving compared with traditional measurement techniques. The costs of TLS and consumer-grade UAV systems are major factors in the budgets of cultural heritage documentation projects. Conventional geodetic measurements are also time-consuming, and the number of measured points is limited. In mega-cities such as Istanbul, documentation with TLS can be challenging due to traffic, pedestrian movement, and the geometric complexity of historical buildings. The proposed low-cost systems can therefore be used for the accurate documentation of historical buildings and monuments. The cost ratio between the ULD and the DJI is approximately 1:500, and the results obtained from the ULD system were encouraging.

In many countries, such as Turkey, civil aviation rules require flight permission and a pilot certificate for UAV systems heavier than 500 g. In some cases, obtaining flight permission is almost impossible for security reasons in urban areas, and permit processes can be slow for bureaucratic reasons, so a planned flight may not take place. Since ULD systems are exempt from these requirements and provide a safer working environment, they are a potential candidate to overcome these bottlenecks. As ULD systems are easy to learn and little affected by external factors, they are advantageous for the documentation of historical buildings.

The obtained results showed that the orthoimage produced from the ULD has suitable resolution for drawing the details of the façade. Our experiments showed that ULD systems can be used effectively for investigations prior to restoration projects, for rapid integrated documentation studies, and as complementary data in emergency situations such as natural disasters and unwanted damage. One disadvantage of ULDs is the limitation of the camera system and FoV angle, which increases the number of required GCPs; the development of ULD technology will soon overcome this challenge. Provided that stability problems are solved and camera resolutions increase in new ULD systems, these systems will play a significant role in the digital documentation of historical buildings.

This study is one of the pioneering attempts to demonstrate the application of ULDs to cultural heritage documentation. Many cultural objects have disappeared due to human actions and natural disasters, and these losses are irreparable; such objects can only be reconstructed from digitally documented data. Our proposal allows the rapid, accurate, and inexpensive collection of such data with minimal personnel and hardware.

Acknowledgements

This study was supported by TUBITAK, Turkey (The Scientific and Technological Research Council of Turkey) through the project "Automatic 3D Shoreline Extraction and Analysis from UAV and UAV-LiDAR Data for Sustainable Monitoring: Case Study of Terkos (Istanbul)", grant ID 115Y718.


Fig. 8. Orthophoto comparison between ULD and TLS.

References

[1] J. Sánchez, E. Quirós, Semiautomatic detection and classification of materials in historic buildings with low-cost photogrammetric equipment, J. Cult. Herit. (2017).
[2] M. Vatan, M.O. Selbesoglu, B. Bayram, The use of 3D laser scanning technology in preservation of historical structures, Wiadomości Konserwatorskie (2009) 659–669.
[3] H.M. Yilmaz, M. Yakar, S.A. Gulec, O.N. Dulgerler, Importance of digital close-range photogrammetry in documentation of cultural heritage, J. Cult. Herit. 8 (2007) 428–433.
[4] H.M. Yilmaz, M. Yakar, F. Yildiz, Documentation of historical caravansaries by digital close range photogrammetry, Autom. Constr. 17 (2008) 489–498.
[5] M. Yakar, H.M. Yilmaz, A.A. Gulec, M. Korunmaz, Advantage of digital close range photogrammetry in drawing muqarnas in architecture, Inf. Technol. J. 8 (2) (2009) 202–207.
[6] A. Şasi, M. Yakar, Photogrammetric modelling of Hasbey Dar'ülhuffaz (Masjid) using an unmanned aerial vehicle, Int. J. Eng. Geosci. (IJEG) 3 (1) (2018) 006–011, http://dx.doi.org/10.26833/ijeg.328919.
[7] F. Agüera-Vega, F. Carvajal-Ramírez, P. Martínez-Carricondo, Assessment of photogrammetric mapping accuracy based on variation ground control points number using unmanned aerial vehicle, Measurement 98 (2017) 221–227.
[8] M.J. Westoby, J. Brasington, N.F. Glasser, M.J. Hambrey, J.M. Reynolds, 'Structure-from-Motion' photogrammetry: a low-cost, effective tool for geoscience applications, Geomorphology 179 (2012) 300–314.
[9] A. Akar, Evaluation of accuracy of DEMs obtained from UAV point clouds for different topographical areas, Int. J. Eng. Geosci. (IJEG) 2 (3) (2017) 110–117, http://dx.doi.org/10.26833/ijeg.329717.
[10] H. Hidayat, A.B. Cahyono, Combined aerial and terrestrial images for complete 3D documentation of Singosari Temple based on Structure from Motion algorithm, in: IOP Conference Series: Earth and Environmental Science, vol. 47, no. 1, 012004, IOP Publishing, 2016.



[11] Z.H. Xu, L.X. Wu, Y.L. Shen, F.S. Li, Q.L. Wang, R. Wang, Tridimensional reconstruction applied to cultural heritage with the use of camera-equipped UAV and terrestrial laser scanner, Remote Sensing 6 (2014) 10413–10434.
[12] M. Bolognesi, A. Furini, V. Russo, A. Pellegrinelli, P. Russo, Testing the low-cost RPAS potential in 3D cultural heritage reconstruction, in: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XL-5/W4, 3D Virtual Reconstruction and Visualization of Complex Architectures, 25–27 February 2015, Avila, Spain, 2015.
[13] J. Fernández-Lozano, G. Gutiérrez-Alonso, Improving archaeological prospection using localized UAVs assisted photogrammetry: an example from the Roman Gold District of the Eria River Valley (NW Spain), J. Archaeol. Sci. Rep. (2016).
[14] P. Martínez-Carricondo, F. Agüera-Vega, F. Carvajal-Ramírez, F.J. Mesas-Carrascosa, A. García-Ferrer, F.J. Pérez-Porras, Assessment of UAV-photogrammetric mapping accuracy based on variation of ground control points, Int. J. Appl. Earth Obs. Geoinf. 72 (2018) 1–10.
[15] F. Deng, X. Zhu, X. Li, M. Li, 3D digitisation of large-scale unstructured Great Wall heritage sites by a small unmanned helicopter, Remote Sensing 9 (5) (2017) 423.
[16] F. Chiabrando, G. Sammartano, A. Spanò, A comparison among different optimization levels in 3D multi-sensor models. A test case in emergency context: 2016 Italian earthquake, in: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XLII-2/W3, 3D Virtual Reconstruction and Visualization of Complex Architectures, 1–3 March 2017, Nafplio, Greece, 2017, pp. 155–162.
[17] A. Murtiyoso, P. Grussenmeyer, T. Freville, Close range UAV accurate recording and modeling of St-Pierre-le-Jeune neo-romanesque church in Strasbourg (France), in: International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 42 (2/W3), 2017, pp. 519–526.
[18] R. Shults, P. Krelshtein, I. Kravchenk, O. Rogoza, O. Kyselov, Low-cost photogrammetry for culture heritage, in: "Environmental Engineering" 10th International Conference, Vilnius Gediminas Technical University, Lithuania, 27–28 April 2017, http://dx.doi.org/10.3846/enviro.2017.237.
[19] N. Yastikli, Ö.Z. Özerdem, Architectural heritage documentation by using low cost UAV with fisheye lens: Otag-i Humayun in Istanbul as a case study, in: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. IV-4/W4, 4th International GeoAdvances Workshop, 14–15 October 2017, Safranbolu, Karabuk, Turkey, 2017.
[20] A. Murtiyoso, P. Grussenmeyer, N. Börlin, J. Vandermeerschen, T. Freville, Open source and independent methods for bundle adjustment assessment in close-range UAV photogrammetry, Drones 2 (2018) 3.
[21] L. Carnevali, E. Ippoliti, F. Lanfranchi, S. Menconero, M. Russo, V. Russo, Close-range mini-UAVs photogrammetry for architecture survey, in: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XLII-2, ISPRS TC II Mid-Term Symposium "Towards Photogrammetry 2020", 4–7 June 2018, Riva del Garda, Italy, 2018.


[22] S. Gul, Otag-ı Humayun in Davutpasa, Istanbul J. Hist. Civilization 7 (2015) 129–142 (Orig. in Turkish).
[23] K.L. Cook, An evaluation of the effectiveness of low-cost UAVs and structure from motion for geomorphic change detection, Geomorphology 278 (2017) 195–208.
[24] W. Gruszczyński, W. Matwij, P. Ćwiąkała, Comparison of low-altitude UAV photogrammetry with terrestrial laser scanning as data-source methods for terrain covered in low vegetation, ISPRS J. Photogramm. Remote Sens. 126 (2017) 168–179.
[25] S. Peña-Villasenín, M. Gil-Docampo, J. Ortiz-Sanz, Professional SfM and TLS vs a simple SfM photogrammetry for 3D modelling of rock art and radiance scaling shading in engraving detection, J. Cult. Herit. 37 (2018) 238–246, http://dx.doi.org/10.1016/j.culher.2018.10.009.
[26] W.H. Maes, K. Steppe, Perspectives for remote sensing with unmanned aerial vehicles in precision agriculture, Trends Plant Sci. 24 (2) (2019) 152–164.
[27] N. Snavely, S.M. Seitz, R. Szeliski, Modeling the world from internet photo collections, Int. J. Comput. Vis. 80 (2) (2008) 189–210.
[28] D.G. Lowe, Distinctive image features from scale-invariant keypoints, Int. J. Comput. Vis. 60 (2) (2004) 91–110.
[29] L. Chiu, T. Chang, J. Chen, N.Y. Chang, Fast SIFT design for real-time visual feature extraction, IEEE Trans. Image Process. 22 (8) (2013) 3158–3167, http://dx.doi.org/10.1109/TIP.2013.2259841.
[30] D.G. Lowe, Object recognition from local scale-invariant features, in: The International Conference on Computer Vision, 20–27 September 1999, Kerkyra, Greece, 1999.
[31] M. Brown, D.G. Lowe, Invariant features from interest point groups, in: British Machine Vision Conference, Cardiff, Wales, UK, 2002, pp. 656–665.
[32] S. Arya, D.M. Mount, N.S. Netanyahu, R. Silverman, A.Y. Wu, An optimal algorithm for approximate nearest neighbor searching in fixed dimensions, J. Assoc. Comput. Mach. 45 (1998) 891–923.
[33] A. Murtiyoso, P. Grussenmeyer, Documentation of heritage buildings using close-range UAV images: dense matching issues, comparison and case studies, Photogramm. Rec. 32 (159) (2017) 206–229.
[34] Pix4D, https://support.pix4d.com/ (accessed 8 November 2019).
[35] R.B. Rusu, Z.C. Marton, N. Blodow, M. Dolha, M. Beetz, Towards 3D point cloud based object maps for household environments, Rob. Auton. Syst. 56 (2008) 927–941.
[36] CloudCompare (version 2.10) [GPL software], 2019, retrieved from http://www.cloudcompare.org/.
[37] F.J. Mesas-Carrascosa, M.D. Notario García, J.E. Meroño de Larriva, A. García-Ferrer, An analysis of the influence of flight parameters in the generation of unmanned aerial vehicle (UAV) orthomosaicks to survey archaeological areas, Sensors 16 (2016) 1838.

Please cite this article in press as: T. Bakirman, et al., Implementation of ultra-light UAV systems for cultural heritage documentation, Journal of Cultural Heritage (2020), https://doi.org/10.1016/j.culher.2020.01.006