A geometry- and texture-based automatic discontinuity trace extraction method for rock mass point cloud



International Journal of Rock Mechanics & Mining Sciences 124 (2019) 104132


Jiateng Guo a,*, Yinhe Liu a, Lixin Wu b, Shanjun Liu a, Tianhong Yang a, Wancheng Zhu a, Zirui Zhang a

a College of Resources and Civil Engineering, Northeastern University, Shenyang, 110819, China
b School of Geosciences and Info-Physics, Central South University, Changsha, 410012, China

Keywords: rock mass; discontinuity trace; automatic extraction; 3D point cloud; image processing

Abstract

This paper presents an automatic extraction method for discontinuity trace mapping from 3D point clouds of natural rocky slopes. Unlike methods based only on geometrical information, the proposed method also takes textural information into account. Using the texture mapping method, a texture map and a normal map are generated from the point cloud, containing the textural and geometrical information, respectively. Then, a rolling guidance filter and a multi-scale edge chain detector are applied to the images to extract discontinuity traces. Finally, the edge chains in the images are projected back into the 3D coordinate system through the previously established texture mapping. The proposed method is applied to three case studies, each representing a different situation. The results show that the proposed discontinuity trace extraction method is automatic, robust and extensible and performs better than geometry-based methods for smooth rock surfaces along which the curvature changes are not apparent. The proposed method could be used as a supplement to traditional contact discontinuity trace mapping methods.

1. Introduction

A discontinuity trace is produced when a rock mass surface intersects a rock discontinuity. Rock slope discontinuities play a key role in the strength, permeability and stability analysis of rock masses.1,2 Discontinuity trace mapping can be roughly divided into two types, namely, contact and non-contact approaches. The contact approach, which consists of manual surveys performed by an operator directly on the rock mass,3 is the most common and direct method but is generally costly, inefficient, dangerous and difficult to apply in less accessible areas.4,5 With the development of photogrammetry6,7 and light detection and ranging (LiDAR) techniques,8 rock outcrops can be reconstructed conveniently and rapidly with considerable resolution and accuracy; thus, such non-contact approaches have attracted considerable research attention.9,10

In general, point cloud data of a rock slope, representing a set of data points that store spatial XYZ coordinates, can be obtained by either a terrestrial laser scanner (TLS) or the structure from motion (SfM) technique. Furthermore, SfM-derived point clouds are generated from images and therefore always contain the red, green and blue (RGB) band attribute. Additionally, TLS-derived point clouds may contain RGB information from a camera mounted on the scanner. However, analysis of rock discontinuities from point cloud data is typically based on geometrical features and involves the use of the XYZ coordinates. The associated approaches are mainly divided into three categories. The first kind of method is applied directly to the raw point cloud.11–14 Drews et al.15 used a 3D region growing algorithm to segment the point cloud and automatically detect the rock plane. The second kind of method analyses the discontinuities from the 3D mesh model or the digital surface model (DSM) derived from raw point cloud data.16,17 For example, Li et al.17 triangulated the point cloud and extracted traces through the mesh. Third, traces can be obtained through human-computer interaction.18 Mavrouli et al.18 used InnovMetric Software Polyworks to manually extract discontinuity information from a point cloud, and the approximate block size was calculated. However, only rock discontinuities with obvious curvature changes can be detected on rough outcrop surfaces according to geometrical information, and traces embedded in a gradually changing smooth rock surface are easily ignored and left undetected.19

The texture information, which reflects the reflectivity of the rock

* Corresponding author. E-mail address: [email protected] (J. Guo).
https://doi.org/10.1016/j.ijrmms.2019.104132
Received 4 February 2019; Received in revised form 16 September 2019; Accepted 12 October 2019; Available online 24 October 2019.
1365-1609/© 2019 Elsevier Ltd. All rights reserved.


surface in the RGB bands, can be used to extract a fracture embedded in a flat region. In earlier studies, texture-based discontinuity analysis methods were applied to 2D digital images of exposed rock faces rather than to 3D point clouds.5,20–23 Such methods have been studied for a long time, and novel software has been developed.24 However, these methods are very limited in terms of the topography of the rock surface and the relative orientation between the camera and the rock surface. The geometrical information is also missing; therefore, the extracted traces have no accurate 3D coordinates. In addition, these methods are generally applied to a single image of a rock surface, which can hardly cover an entire study area of complex topography. Therefore, with these methods, the range of rock mass that can be processed at one time is limited.

The point cloud data can contain both textural and geometrical information. The geometry of the rock is reflected by the XYZ coordinates of the point cloud. The texture data, which are stored as an assignment of RGB values to each XYZ coordinate, occupy half of the cloud data. However, in most cases, the texture information of point cloud data is used only for visualization and is discarded during discontinuity analysis.17,25 Therefore, it is necessary to propose an automated extraction method for rock discontinuities from point clouds based on both geometrical and textural information.

Combining the theories of computer graphics and image processing, this paper proposes a new workflow to utilize both the geometrical and textural information of a rock point cloud. First, the texture and

normal maps are generated from the 3D point cloud to establish a projection relationship between the 2D image and the 3D geological model. The discontinuity traces can then be easily extracted from the 2D image by image processing methods and are then mapped back into 3D space. By using the proposed workflow, ideal extraction results were obtained from 3D point cloud datasets with different scales in three experimental areas, proving that this method has the advantages of automaticity, accuracy and extensibility.

2. Methodology

Existing discontinuity trace extraction methods are generally based on either geometrical point cloud information or single digital images of rock surfaces.24 This paper proposes a new workflow based on both geometrical and textural information. The workflow of the proposed method involves four main steps:

Step 1, generate images from the point cloud: triangulate the point cloud and unfold the 3D mesh, then generate 2D images that contain the textural and normal information by the texture mapping method.

Step 2, image pre-processing: apply a rolling guidance filter26 to the images to reduce the noise of the rock surface and refine the performance of subsequent processing steps.

Step 3, edge detection: extract edges from the texture and normal maps by a multi-scale edge detection method.27

Fig. 1. Point cloud datasets for the three cases.


Step 4, result integration: merge the outputs of the previous step and project the traces back into 3D space.

2.1. Description of the datasets

Three 3D point cloud datasets were used in this study (Fig. 1). The first dataset is a high-quality LiDAR dataset that is used to demonstrate the workflow of the proposed method, and the results are compared with those of a previous study.28 The second dataset is a large but sparse point cloud dataset, and the third dataset is a point cloud dataset generated by means of photogrammetry with slight distortion and lower accuracy. The second and third datasets were used to test the applicability of the proposed method in different situations. Considering that this method involves multiple transformations of data formats, five or six control points were added to each case manually for accuracy evaluation.

2.1.1. Case A
Case study A used complex high-precision and high-resolution LiDAR data obtained from the Rock Bench repository.29 The data were collected along Highway 15 near Ontario, Canada, and were obtained using a Leica HDS6000 laser scanning system, with a point spacing of less than 1 cm. A total of 2,087,187 points were collected. The data were scanned from a distance of approximately 15 m, and the entire study area was 13.28 m by 4.21 m by 3.71 m. These data have been widely used as a benchmark.30

2.1.2. Case B
The raw data of case study B were collected in Padua, Italy, and were provided by Siefko Slob.11 The vegetation and peripheral sandstone data were removed manually. The size of the entire study area is 31.71 m by 47.86 m by 18.79 m, but the dataset contains only 135,118 points. As seen directly from Fig. 1, the density of the point cloud is relatively sparse.

2.1.3. Case C
The data in case C are point cloud data obtained via the SfM technique along a road near Benxi city, Liaoning Province, China. Although this dataset contains 1,564,016 points, the data are not as accurate as the data obtained directly by the LiDAR method, and the boundary of the region is deformed to a certain extent.

2.2. Automatic discontinuity trace extraction method

Case A is taken as an example to introduce the method presented in this paper.

2.2.1. Map generation
Due to the unorganized data structure of the point cloud, we cannot directly apply the theory of image processing to the 3D point cloud.31 Therefore, 2D images first need to be generated from the 3D point cloud by means of UV unwrapping (here, the letters "U" and "V" denote the axes of the 2D texture image).32 This step guarantees that the entire geometrical and textural information of the original point cloud can be stored in an unabridged 2D image with no limitation of region size.

Given any two surfaces with similar topologies, a one-to-one mapping between them can be calculated. If one of the surfaces is represented by a triangular mesh, the problem of computing this mapping is called mesh parameterization.33 UV mapping is a method for defining surface texture and colour information on a 3D model.34 Fig. 2 illustrates the relationship between a 3D model and the UV map in an example map projection. A bijective mapping between the 2D world map and the 3D earth surface is performed, and any point on earth represented by XYZ coordinates can be converted to UV coordinates in the 2D image.

To apply UV unwrapping to the point cloud, the point cloud was first triangulated in Geomagic software. Considering the complexity of the shape of the rock face, using a primitive projection surface such as a cylinder or sphere may cause loss of information; the Blender software was therefore used to automatically unwrap the mesh to guarantee the continuity and integrity of the texture. Once the mapping between the mesh and the image was established, the texture map was constructed according to the RGB values of the original mesh.

In addition to textural information, the geometrical information of the original mesh also needs to be stored in the image. A normal map35,36 is a special form of texture map that stores the components of the surface normal in the X, Y, and Z directions as regular RGB images. The unit normal vector is converted to an RGB value (ranging from 0 to 255). For instance, a normal pointing in the Y direction (0, 1, 0) is mapped to (128, 255, 128). Hence, surfaces oriented to the east appear green, and an intersecting surface appears red in the normal map (Fig. 3). The point cloud of the rock surface is eventually converted into a texture map and a normal map, which contain unabridged textural and geometrical information, respectively, and share the same bidirectional mapping between the 2D image and the 3D mesh surface. Then, a series of image processing methods can be applied to the images.

2.2.2. Image pre-processing
Due to the irregularity of the geological model surface, both the texture map and the normal map contain obvious noise, and this noise has a great impact on the edge extraction results, so the images need to be pre-processed before edge detection. The traditional Gaussian filter can smooth an image and reduce noise but always results in the loss of object contours and boundaries. The rolling guidance filter26 is a simple and fast method that can smooth an

Fig. 2. Illustration of UV mapping.
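The normal-map encoding described in section 2.2.1 can be sketched as follows. This is a minimal illustration of the RGB encoding convention, not the authors' code; the function names are ours:

```python
import math

def encode_normal(nx, ny, nz):
    """Map a unit normal vector (components in [-1, 1]) to an RGB
    triple in [0, 255], as done when baking a normal map."""
    return tuple(round((c + 1.0) / 2.0 * 255) for c in (nx, ny, nz))

def decode_normal(r, g, b):
    """Invert the encoding: RGB in [0, 255] back to a (roughly unit)
    normal vector with components in [-1, 1]."""
    v = [2.0 * c / 255.0 - 1.0 for c in (r, g, b)]
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)  # re-normalize to counter quantization

# A normal pointing in the +Y direction, (0, 1, 0), is stored as
# (128, 255, 128), so east-facing surfaces appear green in the map.
print(encode_normal(0.0, 1.0, 0.0))
```

Decoding is lossy only up to the 8-bit quantization, which is why the decoder re-normalizes the recovered vector.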


image while retaining the obvious structure. Rolling guidance filtering involves only two steps. First, a Gaussian filter is applied to remove small structures. Second, a joint bilateral filter is applied iteratively to recover the edges: the result of the t-th iteration, J^(t+1), is obtained by joint bilateral filtering given the original input image I and the result of the previous iteration, J^t. Initially, J^1 is set to the result of the Gaussian filter from the first step. As the results in Fig. 4 show, the region in the cyan box exhibits an obvious division between two rock surfaces, but the edge was blurred after applying a Gaussian filter to the raw image; therefore, the boundary cannot be located precisely. The rolling guidance filter can effectively suppress the noise generated by the texture of the geological body while preserving the accuracy of the boundary without blurring the image. This step ensures the performance of subsequent processing.

2.2.3. Trace extraction
After image pre-processing, a series of image processing methods, such as image segmentation, edge extraction and object detection, can be applied to the image according to the application scene. In this paper, the edge detection method was applied to the image obtained in the previous step.

2.2.3.1. Multi-scale edge detection algorithm. Edge detection is a basic method in image processing and computer vision. There are many algorithms for edge detection, but most can be divided into two categories: search based and zero-crossing based. The search-based methods first calculate the first-order derivative, such as the gradient amplitude. Then, an estimate of the local orientation of the edge is computed (usually the direction of the gradient) to search for the maximum amplitude in that direction. The zero-crossing-based methods search for zero crossings of a second-order derivative expression derived from the image to extract the edges. The Canny edge detector37 is one of the most strictly defined and reliable edge detection methods and has been widely applied in various computer vision systems. However, the edges of the rock surface are irregular at different scales because of the complex geological structure. A small-scale edge detection method can provide more accurate results but is particularly sensitive to noise. Edge chains detected at large scales are more robust against noise and textures but tend to suffer from displacements of the edges from their actual positions. However, the Canny algorithm can extract edges only at a single scale. Therefore, a multi-scale edge chain detector (MSEdge)27 was used to refine the performance of the Canny algorithm.

As shown in Fig. 5, the MSEdge process can be described as follows. First, a set of pyramid images at different scales is constructed by down-sampling the original image. Then, the Canny algorithm is used to detect and verify the edge chains from these pyramid images. However, since the size of the sampled images varies, the sizes of the extracted edges also vary across scales. Next, these edges are fused at a single scale, and the Guo-Hall thinning algorithm38 is applied to connect the edge chains to obtain the final single-pixel-width edge chains. Fig. 5 shows that this method can simultaneously perform edge detection at different scales and fuse the detection results at the scale of the original image.

2.2.3.2. Texture map edge detection. To apply MSEdge to the texture map, the gradient of the image must be calculated first. However, gradient calculation methods, such as the Sobel operator, are generally applied to greyscale images. Because the colour of the rock surface is not rich, the three bands of the original image contain almost the same information, and a single grey image can effectively represent the texture map; therefore, we can use the following equation to convert the colour image to a grey image:

$$\text{grey value} = (R \times 30 + G \times 59 + B \times 11)\,/\,100 \quad (1)$$
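The grey conversion of Eq. (1) uses the classic 30/59/11 integer luma weights; a one-line sketch (the helper name is ours, not the paper's code):

```python
def to_grey(r, g, b):
    """Integer-weight grey conversion of Eq. (1):
    grey = (30*R + 59*G + 11*B) / 100, for 8-bit channel values."""
    return (r * 30 + g * 59 + b * 11) // 100

# White stays white; a saturated green pixel maps to 59% grey.
print(to_grey(255, 255, 255), to_grey(0, 255, 0))
```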

Fig. 4. Comparison of image pre-processing results: (a) raw image, (b) 5 × 5 Gaussian filter, (c) 5 × 5 rolling guidance filter.
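The two-step rolling guidance filtering compared in Fig. 4 can be sketched as below. This is a minimal NumPy implementation for single-channel images, assuming values in [0, 1]; window handling uses wrap-around via np.roll purely for brevity:

```python
import numpy as np

def rolling_guidance_filter(I, sigma_s=3.0, sigma_r=0.1, iterations=4):
    """Rolling guidance filter sketch for a 2-D float image I.
    Step 1: Gaussian smoothing removes small structures (J^1).
    Step 2: iterated joint bilateral filtering recovers large edges,
    using the previous result J^t as the guidance image."""
    I = np.asarray(I, dtype=float)
    r = max(1, int(3 * sigma_s))  # window radius

    def filt(guide):
        num = np.zeros_like(I)
        den = np.zeros_like(I)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                Iq = np.roll(I, (dy, dx), axis=(0, 1))
                w = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                if guide is not None:  # range term only in step 2
                    Gq = np.roll(guide, (dy, dx), axis=(0, 1))
                    w = w * np.exp(-(guide - Gq) ** 2 / (2 * sigma_r ** 2))
                num += w * Iq
                den += w
        return num / den

    J = filt(None)               # step 1: plain Gaussian, J^1
    for _ in range(iterations):  # step 2: J^{t+1} from I, guided by J^t
        J = filt(J)
    return J
```

The parameters correspond to those discussed in section 3.3.3: sigma_s and sigma_r set the spatial and range weights, and iterations is the number of joint bilateral passes.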



Fig. 5. The framework of the MSEdge.27
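The pyramid-detect-fuse idea of Fig. 5 can be illustrated with a deliberately simplified stand-in: 2× box down-sampling and a plain gradient-magnitude threshold instead of the Canny detector, and no Guo-Hall thinning. This is not the MSEdge implementation, only a sketch of its structure:

```python
import numpy as np

def edge_mask(img, thresh=0.2):
    """Binary edge mask from central-difference gradient magnitude
    (a stand-in for the per-scale Canny step of MSEdge)."""
    gy, gx = np.gradient(img)
    return np.hypot(gx, gy) > thresh

def multiscale_edges(img, levels=3, thresh=0.2):
    """Detect edges at several pyramid scales and fuse them by
    upsampling each mask back to the original resolution (logical OR)."""
    img = np.asarray(img, dtype=float)
    fused = np.zeros(img.shape, dtype=bool)
    level = img
    for k in range(levels):
        mask = edge_mask(level, thresh)
        # nearest-neighbour upsample of the coarse mask by 2^k
        up = np.kron(mask, np.ones((2 ** k, 2 ** k), dtype=bool))
        pad = np.zeros(img.shape, dtype=bool)
        pad[: up.shape[0], : up.shape[1]] = up[: img.shape[0], : img.shape[1]]
        fused |= pad
        # 2x2 box-filter down-sampling builds the next pyramid level
        h, w = (level.shape[0] // 2) * 2, (level.shape[1] // 2) * 2
        level = level[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return fused
```

The real MSEdge additionally verifies chains across scales and thins the fused result to single-pixel width; this sketch only shows why coarse scales are robust (noise averaged away) while the finest scale keeps positioning accuracy.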

Then, for any 3 × 3-pixel window A in the image, the Sobel gradients in the horizontal and vertical directions at the central pixel e can be calculated by the following two formulas:

$$G_x = S_x * A = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix} * \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix} = (c - a) + 2(f - d) + (i - g) \quad (2)$$

$$G_y = S_y * A = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix} * \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix} = (a - g) + 2(b - h) + (c - i) \quad (3)$$

where * denotes convolution. Then, the edges of the texture map at different scales can be extracted by inputting the obtained gradient values into MSEdge. As Fig. 6(b) shows, at a small scale, the edges can be located precisely but are more sensitive to noise, and some relatively large edges (green rectangular box in the figure) are difficult to recognize. On the other hand, due to the down-sampling of the image to a large scale, the extracted edges in Fig. 6(c) are very coarse, although edges at a large scale can be detected. In contrast, MSEdge can detect edges at different scales simultaneously and maintain the positioning accuracy at the single-pixel level (Fig. 6(d)).

2.2.3.3. Normal map edge detection. Unlike those of the texture map, the RGB bands of the normal map store the normal vector components of the triangulated mesh in the X, Y and Z directions. Simply using Eq. (1) to greyscale the image would cause a great loss of information about the model shape. The common calculation of the first derivative is based on the greyscale image, and traditional colour image edge detection algorithms always process the three bands separately and then fuse the results.39 However, the practical significance of the information stored in the three bands of the normal map necessitates that they not be considered separately. Therefore, the algorithm needs to be modified according to the characteristics of the normal map. To calculate the gradient of the normal map, the first derivative of the image is calculated using the Sobel operator. Unlike approaches that use the general greyscale image, this approach uses Eq. (4) to calculate the included angle between two normal directions to replace the difference of greyscale values:

Fig. 6. Comparison of the results of different methods: (a) raw image, (b) Canny in low resolution, (c) Canny at a large scale, (d) MSEdge.


$$\theta_{12} = \arccos\frac{v_1^{T} v_2}{\|v_1\|\,\|v_2\|} \quad (4)$$

Case C is a high-density but slightly deformed point cloud dataset obtained by photogrammetry. After using Case B and Case C to test the applicability of the method for different datasets, we evaluated the accuracy of the results for these three regions by determining the cumulative error due to the multiple data conversions in this method. Finally, based on the experimental results, a sensitivity analysis of the parameters that need to be set in the method was carried out. The first map generation step used Geomagic and Blender software to triangulate the point cloud and modify the UV map. The other operations were realized with MATLAB and OpenCV.

where $v = 2\,[r\ g\ b]^{T}/255 - [1\ 1\ 1]^{T}$ is used to convert the RGB value of the normal map into the components of the normal vector in the X, Y and Z directions. After substituting Eq. (4) into Eq. (2) and Eq. (3), the gradients of the normal map in the horizontal and vertical directions are obtained:

$$G_x = S_x * A = \theta_{ac} + 2\theta_{df} + \theta_{ig} \quad (5)$$

$$G_y = S_y * A = \theta_{ag} + 2\theta_{bh} + \theta_{ci} \quad (6)$$
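The normal-angle gradient of Eqs. (4)-(6), operating on a 3 × 3 window of normal-map RGB pixels, can be sketched as follows (illustrative code with our own function names, not the authors' implementation):

```python
import math

def rgb_to_normal(rgb):
    """RGB in [0, 255] -> normal vector components in [-1, 1]."""
    return [2.0 * c / 255.0 - 1.0 for c in rgb]

def angle(rgb1, rgb2):
    """Eq. (4): included angle between the normals stored in two pixels."""
    v1, v2 = rgb_to_normal(rgb1), rgb_to_normal(rgb2)
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(a * a for a in v2))
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

def normal_map_gradient(win):
    """Eqs. (5) and (6): Sobel-style gradients of a 3x3 window
    win = [[a, b, c], [d, e, f], [g, h, i]] of RGB normal pixels,
    with greyscale differences replaced by normal angles."""
    (a, b, c), (d, e, f), (g, h, i) = win
    gx = angle(a, c) + 2 * angle(d, f) + angle(g, i)   # Eq. (5)
    gy = angle(a, g) + 2 * angle(b, h) + angle(c, i)   # Eq. (6)
    return gx, gy
```

A window of identical normals yields zero gradient, while a crease between two discontinuity surfaces produces large angles and hence a strong edge response.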

3.1. Results for case study A: comparison with the existing method

The results of the multi-scale edge detection algorithm with the improved gradient calculation method are shown in Fig. 7. Fig. 7(a) and (b) show that, after the transformation between the normal map and the greyscale image, information about the model shape was lost, and adjacent discontinuity surfaces were fused together. Therefore, some valley and ridge lines cannot be detected from greyscale images (Fig. 7(c)), but the normal-angle-based method can effectively detect these valley and ridge lines from colour normal maps, as shown in Fig. 7(d).

The Case A dataset contains information on a complex geological structure with high resolution and accuracy. In section 2.2, we took Case A as an example to show the whole process of the proposed method. In this part, Case A is used as a benchmark to compare the proposed method with a previous method, which extracts traces directly from the point cloud without triangulation.28 Unlike the proposed method, the compared method selects potential points based on the curvature of the point cloud, i.e., the geometric information, and then thins and assigns the potential points to the trace lines. In terms of the ability to extract trace lines, the proposed method not only utilizes the geometrical information of the geological model but also references the textural information of the model. As shown in Fig. 9, taking a typical flat area in Case A as an example, the green lines are the traces extracted by the simple curvature-based method,28 and the red lines are the traces extracted by the proposed method. Compared with the curvature-based method, the proposed method can not only extract edges with obvious curvature changes but can also detect traces embedded in areas with no obvious curvature changes (within the magenta rectangles in Fig. 9) through colour intensity changes, and it thus has better extraction performance than methods based only on geometrical information.

2.2.4. Result integration
After processing the texture map and the normal map, the final trace extraction results can be obtained by integrating the two sets of edges and projecting them back into 3D space. Because edges with obvious curvature changes are often also reflected in the texture, many edges extracted from the two maps overlap, and a simple process is needed to remove the redundant edges. The textural information of the point cloud is affected by shadows and noise; we therefore consider the shape information of the point cloud to be more accurate when two edges overlap, and the edges extracted from the model's shape are taken as the standard. For each edge chain in the texture map, the proportion of its pixels whose neighbouring pixels overlap the edge chains in the normal map is calculated; when this overlap rate exceeds 50%, the edge is discarded. In Case A, 1259 edges were extracted from the texture map, and 1526 edges were extracted from the normal map. After fusion, 59 edges were discarded. Finally, the 2D coordinates of the edges were projected back to 3D coordinates through the mapping established during UV unwrapping, and the 3D traces were obtained (Fig. 8).
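The 50% overlap-rate rule of section 2.2.4 can be sketched as below (hypothetical data structures: an edge chain as a list of (row, col) pixels, the normal-map edges as a set of pixels):

```python
def overlap_rate(chain, normal_edges):
    """Fraction of a texture-map edge chain whose pixels fall on, or
    8-neighbour, a normal-map edge pixel."""
    def near(p):
        r, c = p
        return any((r + dr, c + dc) in normal_edges
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1))
    return sum(near(p) for p in chain) / len(chain)

def fuse(texture_chains, normal_edges, max_overlap=0.5):
    """Keep only the texture-map chains that are not already represented
    among the normal-map edges (overlap rate above max_overlap)."""
    return [ch for ch in texture_chains
            if overlap_rate(ch, normal_edges) <= max_overlap]
```

The normal-map edges are kept in full; only redundant texture-map chains are dropped, matching the paper's choice of treating the shape-derived edges as the standard.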

3.2. Results for cases B and C: applicability tests

Case B is a large but sparse point cloud dataset. Case C is a photogrammetry-based point cloud dataset with slight distortion, causing it to have a lower precision than that of Case B. Cases B and C were introduced to verify whether the method performs well in different situations. As seen from Fig. 10, the proposed method can still effectively extract the traces from the Case B and Case C datasets based on the textural and geometrical information. However, the method presented in this paper has no obvious advantages over the method based only on geometrical information.28 This lack of significant improvement arises because the data are very sparse (and therefore provide less texture information) and even contain some unfavourable factors, such as shadows, which reduce the performance of the method in this paper. In Case C, although the data are slightly deformed, this issue does not affect the final extraction results. Therefore, the proposed method does not have obvious advantages when there are no trace lines embedded in the

3. Case studies

To verify the effectiveness of the proposed method, experiments were carried out on the three different regions introduced in section 2.1. Because Case A is a point cloud dataset of a complex geological structure, it has typical traces embedded in flat areas and has high precision and resolution. Case A is mainly used to verify the effectiveness of the method and to compare this method with methods based only on geometrical information. In addition, Case B and Case C provide further verification. Case B is a large but relatively sparse point cloud dataset.

Fig. 7. Comparison between the improved and the original method: (a) raw image, (b) grey image, (c) MSEdge results in grey image, (d) improved MSEdge results in colour image. (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.)


Fig. 8. Discontinuity trace extraction results for Case A.

Fig. 9. Comparison of extraction results of different methods: (a) geometry-based method, (b) proposed method.

Fig. 10. Traces extracted from Cases B and C.

rock surface or when the point cloud does not provide enough texture information.

3.3. Experiment result analysis

3.3.1. Quantitative analysis
In addition to statistics on the number of traces, there are better indicators to quantitatively reflect the results of trace extraction. The area density is a good proxy for quantifying the development of fractures in a specific area. All the extracted trace lines are projected onto the projection plane to obtain the sum of the projected fracture lengths ΣL and the projected area S. The projection plane is the plane perpendicular to the overall normal vector of the point cloud. Then the area density f is calculated using the formula:

$$f = \frac{\sum L}{S} \quad (7)$$

Table 1. Area density results for the two case studies.

Case      Points count   Proposed method              Previous curvature-based method28
                         Traces count   Area density  Traces count   Area density
Case A    2,170,033      2266           10.58         1890           3.83
Case B    135,118        1010           2.24          509            1.35
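Eq. (7) reduces to a simple ratio once the traces are projected; a small numeric check with hypothetical trace data:

```python
import math

def area_density(segments_2d, area):
    """Eq. (7): f = (sum of projected trace lengths) / projected area.
    segments_2d: list of ((x1, y1), (x2, y2)) segments already projected
    onto the plane perpendicular to the overall normal of the cloud."""
    total = sum(math.dist(p, q) for p, q in segments_2d)
    return total / area

# Two traces of length 3 and 4 over a 2 m^2 projected area -> f = 3.5 m^-1
print(area_density([((0, 0), (3, 0)), ((0, 0), (0, 4))], 2.0))
```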

The results for Cases A and B under the proposed method and the curvature-based method are presented in Table 1. It can be seen from the results that, due to the use of additional texture information, the area density extracted by the proposed method in Case A is much higher than that of the method based on curvature alone, which makes the assessment of rock mass quality more accurate. In Case B, due to the low density of the point cloud, effective texture information is not available, so the method in this paper does not show obvious advantages.


3.3.2. Accuracy assessment
Since the method proposed in this paper uses three format conversions, namely, from the point cloud to the mesh model, from the mesh model to the 2D texture map, and from the 2D texture map to 3D space, the accumulation of errors in the conversion process is unavoidable. To test how much precision is lost by using this method in practical applications, we first conducted an accuracy assessment. An additional five or six control points were evenly distributed in each original point cloud dataset, and the error between the converted control point coordinates and the original coordinates was calculated by

$$Error = \sqrt{\frac{\Delta X^2 + \Delta Y^2 + \Delta Z^2}{3}} \quad (8)$$

where ΔX, ΔY and ΔZ denote the differences between the converted and original X, Y and Z coordinates. Table 2 shows that the error induced by the method is within 1 cm in all three cases, which is an acceptable range.

3.3.3. Sensitivity analysis
On this basis, we conducted a sensitivity analysis based on the experimental process for the three regions and discuss the setting of the relevant parameters in the method. In the first step, the point cloud is meshed only to enable the model to undergo UV unwrapping, and the texture and noise of the generated mesh can be effectively removed in the image pre-processing step. Therefore, when converting a point cloud to a mesh, there is no need to re-divide the triangle network, so the mesh size does not need to be set. At the same time, the size of the image can be set to 4096 × 4096 by default, which is sufficient in most cases. In conclusion, the parameters that need to be determined manually are concentrated in the image processing stage. In the pre-processing stage, σs and σr determine the spatial and range weights, respectively, and the number of iterations needs to be specified. For Cases B and C, based on experience, we set σs = 5 and σr = 0.05 for ten iterations. This set of parameters reliably suppresses noise and texture in the experimental areas. The parameters of this step can also be modified according to the characteristics of the actual area under investigation to improve the extraction effect.

In the edge detection step, the thresholds for non-maximum suppression and edge selection need to be specified; here, the recommended threshold of the non-maximum suppression, cth, is 5. The double threshold of the Canny operator is difficult to determine and has a great impact on the final result. Taking the edge extraction of the normal map as an example, the Canny operator needs two thresholds, glow and ghigh. To avoid the complicated problem of selecting these two parameters, we set ghigh = 2 × glow by default. That is, if glow = 30°, when the angle with a neighbouring point is greater than 60°, the point is directly recognized as an edge, and when the angle is greater than 30° but less than 60°, the point is identified as a potential feature point, and further judgement is required. The change in the number of traces extracted under different thresholds in Case A is shown in Fig. 11. As the threshold increases, the number of traces extracted first decreases slowly. When the threshold reaches approximately 50°, the rate of decline begins to accelerate. We think it is appropriate to set the threshold at this inflection point: 50°. This decision is also supported by the results extracted under different thresholds (Fig. 12). When the threshold was set to 10°, the threshold was too small, and some indistinct protrusions were also identified as traces; thus, the extraction results were relatively trivial. When the threshold was too high, for example, 70°, apparent structural planes were also ignored, and only the boundaries with particularly significant changes were identified. In contrast, when the threshold was set to 50°, the method did not extract some of the useless details that were extracted with a threshold of 30°. Moreover, the threshold of 50° allows the retention of obvious discontinuity traces and has an ideal performance. However, it is difficult to define a universal method for selecting the Canny operator threshold, requiring the user to select an

Fig. 11. Number of extracted traces with different low-angle thresholds.

Table 2 Accuracy assessment of the three case studies. Case

Control Point

Original Coordinate (m)

Case A

Pt1 Pt2 Pt3 Pt4 Pt5 Pt6

2.6877 0.2330 0.6378 1.5294 5.0243 7.0551

Case B

Pt1 Pt2 Pt3 Pt4 Pt5

1855.5200 1846.6801 1843.2000 1833.6899 1831.8500

8421.6299 8428.2002 8432.6104 8448.8301 8457.3896

29.7761 32.4688 29.3614 29.3964 36.2192

1855.5161 1846.6813 1843.1978 1833.6865 1831.8594

8421.6314 8428.1997 8432.6109 8448.8298 8457.3863

29.7767 32.4708 29.3579 29.4016 36.2140

0.0024 0.0014 0.0024 0.0036 0.0065

Case C

Pt1 Pt2 Pt3 Pt4 Pt5

3006.7633 3003.9045 3002.5444 2999.7872 3000.0293

4006.2498 4006.8667 4004.4445 4008.4668 4010.4640

204.3786 201.5394 199.1074 198.9595 200.7642

3006.7634 3003.9052 3002.5481 2999.7904 3000.0313

4006.2479 4006.8649 4004.4430 4008.4647 4010.4666

204.3700 201.5373 199.1069 198.9542 200.7673

0.0051 0.0016 0.0023 0.0038 0.0026

X

Y

Transformed Coordinate (m) Z

1.5793 0.1802 1.4993 0.6477 0.5868 1.7943

X 12.4403 13.1304 13.1604 13.2501 14.1942 14.3984

8

2.6870 0.2333 0.6378 1.5295 5.0273 7.0585

Y

Error (m) Z

1.5797 0.1802 1.5002 0.6480 0.5856 1.7931

12.4402 13.1308 13.1610 13.2509 14.1934 14.3978

0.0005 0.0003 0.0006 0.0005 0.0019 0.0022

J. Guo et al.

International Journal of Rock Mechanics and Mining Sciences 124 (2019) 104132

Fig. 12. Extraction results under different thresholds.

in the loss of information and thus affect the final extraction results. (2) Textural information not only provides more features for trace extrac­ tion but also introduces more interference, such as that due to the in­ fluence of shadows and other noise, and this interference needs to be eliminated. (3) Some parameters in the algorithm still need to be pro­ vided manually and determined by experience. More rigorous and effective parameter-setting standards need to be studied further.

appropriate value according to the characteristics of the actual study area. 4. Discussion and conclusions In this work, a new method for automatic discontinuity trace map­ ping from terrestrial LiDAR- and SfM-obtained point cloud data is pre­ sented. Unlike the geometry-based discontinuity trace extraction method, which has difficulty extracting traces embedded in a rock face that gradually changes shape, this method takes both geometrical and textural information into consideration. The proposed method was based on the following: (1) generation of texture and normal maps from a point cloud; (2) pre-processing of the image by using a rolling guidance filter; (3) application of a multi-scale edge detector to extract traces from the image; and (4) merging and mapping the traces back into 3D space. This method has three advantages: (1) Traces are extracted based on both the textural and geometrical information of the rock model; therefore, this method can perform well in smooth regions. (2) The process is completely automatic and requires no human intervention. However, the user can modify the relevant parameters to improve the extraction results. (3) This method is easily expandable: the proposed method is a general framework and can be easily extended for thematic applications by changing the image processing algorithm. Compared with the geometry-based automatic method, the proposed method can ensure sufficient accuracy but also uses textural information to extract the trace lines embedded in regions without obvious curvature changes. The proposed method was tested using three cases and showed good adaptability to the different data conditions. However, this approach raises new issues: (1) Undesirable UV mapping can be quite time consuming and produce highly deformed textures, which can result
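The per-point errors reported in Table 2 are consistent with the root-mean-square of the three coordinate differences. A minimal sketch of that error metric, assuming this RMS definition (the display equation is not reproduced in this extract), could look as follows; `control_point_error` is an illustrative helper name, not from the paper:

```python
import math

def control_point_error(original, transformed):
    """RMS of the per-axis differences between a transformed control point
    and its original surveyed coordinate, both given as (X, Y, Z) tuples."""
    dx, dy, dz = (t - o for o, t in zip(original, transformed))
    return math.sqrt((dx * dx + dy * dy + dz * dz) / 3.0)

# Pt1 of Case A from Table 2:
err = control_point_error((2.6877, 1.5793, 12.4403), (2.6870, 1.5797, 12.4402))
# rounds to 0.0005 m, matching the tabulated value
```

Evaluating the same expression on the other control points reproduces the tabulated errors to the printed precision.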
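The pre-processing stage relies on the rolling guidance filter, with σs and σr controlling the spatial and range weights and a fixed iteration count. Below is a minimal pure-NumPy grayscale sketch of that filter; it assumes intensities scaled to [0, 1] and is an illustration of the algorithm rather than the authors' implementation (a production pipeline would use an accelerated version, and a smaller σs than the paper's 5 keeps the example window small):

```python
import numpy as np

def rolling_guidance_filter(img, sigma_s=2.0, sigma_r=0.05, iterations=5):
    """Rolling guidance filter on a 2D grayscale image in [0, 1].

    Each iteration is a joint bilateral filter of the input image, guided by
    the previous iteration's result. Starting from a constant guide makes the
    first pass a plain Gaussian blur (removing small-scale texture); later
    passes progressively restore large-scale edges.
    """
    radius = int(3 * sigma_s)
    h, w = img.shape
    pad = np.pad(img, radius, mode='reflect')
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2.0 * sigma_s ** 2))
    guide = np.zeros_like(img)          # constant guide => first pass is Gaussian
    for _ in range(iterations):
        gpad = np.pad(guide, radius, mode='reflect')
        num = np.zeros_like(img)
        den = np.zeros_like(img)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                w_s = spatial[dy + radius, dx + radius]
                win_i = pad[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
                win_g = gpad[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
                w_r = np.exp(-(guide - win_g) ** 2 / (2.0 * sigma_r ** 2))
                num += w_s * w_r * win_i
                den += w_s * w_r
        guide = num / den               # den >= 1 (centre weight), never zero
    return guide
```

On a noisy step image this suppresses the small-amplitude texture while leaving the step intact, which is the behaviour exploited here to remove rock surface texture and noise before edge detection.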
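The ghigh = 2 × glow rule amounts to a Canny-style double threshold on the normal map: points whose angle with a neighbour exceeds ghigh become edges outright, while points between glow and ghigh survive only if connected to a strong edge. A minimal sketch, assuming the gradient is supplied directly as an angle map in degrees and using a simple 8-connected flood fill to stand in for the "further judgement" (hysteresis) step:

```python
import numpy as np
from collections import deque

def hysteresis_threshold(angle, g_low=30.0):
    """Double-threshold classification of an angle map (degrees), g_high = 2 * g_low.

    Pixels above g_high are edges directly; pixels in (g_low, g_high] are kept
    only when 8-connected to a strong edge pixel.
    """
    g_high = 2.0 * g_low
    strong = angle > g_high
    weak = (angle > g_low) & ~strong
    edge = strong.copy()
    queue = deque(zip(*np.nonzero(strong)))
    h, w = angle.shape
    while queue:                      # grow edges into adjacent weak pixels
        y, x = queue.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and weak[ny, nx] and not edge[ny, nx]:
                    edge[ny, nx] = True
                    queue.append((ny, nx))
    return edge
```

With glow = 30 this reproduces the 30°/60° classification described in the text: an isolated 45° point is discarded, while the same point adjacent to a 70° point is retained.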

Declaration of competing interest

The authors declare no conflict of interest.

Acknowledgements

This work was financially supported by the National Natural Science Foundation of China [grant numbers: 41671404, 51874069], the Fundamental Research Funds for the China Central Universities [grant numbers: N170104019, N180101028] and the China Geological Survey Projects [grant number: DD20190416]. The raw laser scanner data for case study A were obtained from the Rock Bench open repository. The work carried out by the Rock Bench founders (Harrap, Lato, Kemeny, and Bevan) is kindly appreciated. The raw laser scanner data for case study B were privately provided by Siefko Slob and collected by Alessia Viero and Antonio Galgaro from the Department of Geosciences, University of Padova, Italy. The help from Siefko Slob is kindly appreciated.

Appendix A. Supplementary data

Supplementary data to this article can be found online at https://doi.org/10.1016/j.ijrmms.2019.104132.



References

1 Barton N, Choubey V. The shear strength of rock joints in theory and practice. Rock Mech. 1977;10(1-2):1–54.
2 Hudson J, Harrison J. Engineering Rock Mechanics: An Introduction to the Principles. Oxford: Pergamon; 1997.
3 Barton N. Suggested methods for the quantitative description of discontinuities in rock masses. ISRM, Int J Rock Mech Min Sci Geomech Abstr. 1978;15(6).
4 Assali P, Grussenmeyer P, Villemin T, Pollet N, Viguier F. Solid images for geostructural mapping and key block modeling of rock discontinuities. Comput Geosci. 2016;89(C):21–31.
5 Kemeny J, Post R. Estimating three-dimensional rock discontinuity orientation from digital images of fracture traces. Comput Geosci. 2003;29(1):65–77.
6 Assali P, Grussenmeyer P, Villemin T, Pollet N, Viguier F. Solid images for geostructural mapping and key block modeling of rock discontinuities. Comput Geosci. 2016;89:21–31.
7 Sturzenegger M, Stead D. Close-range terrestrial digital photogrammetry and terrestrial laser scanning for discontinuity characterization on rock cuts. Eng Geol. 2009;106(3-4):163–182.
8 Fisher JE, Shakoor A, Watts CF. Comparing discontinuity orientation data collected by terrestrial LiDAR and transit compass methods. Eng Geol. 2014;181:78–92.
9 Kemeny J, Turner K, Norton B. LIDAR for rock mass characterization: hardware, software, accuracy and best-practices. In: Laser and Photogrammetric Methods for Rock Face Characterization. 2006:49–62.
10 Riquelme A, Cano M, Tomás R, Abellán Fernández A. Identification of rock slope discontinuity sets from laser scanner and photogrammetric point clouds: a comparative analysis. Procedia Eng. 2017;191:838–845.
11 Slob S, Van Knapen B, Hack R, Turner K, Kemeny J. Method for automated discontinuity analysis of rock slopes with three-dimensional laser scanning. Transp Res Rec: J Transp Res Board. 2005;1913:187–194.
12 Guo J, Liu S, Zhang P, Wu L, Zhou W, Yu Y. Towards semi-automatic rock mass discontinuity orientation and set analysis from 3D point clouds. Comput Geosci. 2017;103:164–172.
13 Wang X, Zou L, Shen X, Ren Y, Qin Y. A region-growing approach for automatic outcrop fracture extraction from a three-dimensional point cloud. Comput Geosci. 2017;99:100–106.
14 Drews T, Miernik G, Anders K, et al. Validation of fracture data recognition in rock masses by automated plane detection in 3D point clouds. Int J Rock Mech Min Sci. 2018;109:19–31.
15 Mavrouli O, Corominas J, Jaboyedoff M. Size distribution for potentially unstable rock masses and in situ rock blocks using LiDAR-generated digital elevation models. Rock Mech Rock Eng. 2015;48(4):1589–1604.
16 Umili G, Ferrero A, Einstein HH. A new method for automatic discontinuity traces sampling on rock mass 3D model. Comput Geosci. 2013;51(2):182–192.
17 Li X, Chen J, Zhu H. A new method for automated discontinuity trace mapping on rock mass 3D surface model. Comput Geosci. 2016;89:118–131.
18 Thiele ST, Grose L, Samsu A, Micklethwaite S, Cruden AR. Rapid, semi-automatic fracture and contact mapping for point clouds, images and geophysical data. Solid Earth. 2017;8(6):1–19.
19 Zhang P, Zhao Q, Tannant DD, Ji T, Zhu H. 3D mapping of discontinuity traces using fusion of point cloud and image data. Bull Eng Geol Environ. 2018;(4):1–13.
20 Crosta G. Evaluating rock mass geometry from photographic images. Rock Mech Rock Eng. 1997;30(1):35–58.
21 Reid TR, Harrison JP. A semi-automated methodology for discontinuity trace detection in digital images of rock mass exposures. Int J Rock Mech Min Sci. 2000;37(7):1073–1089.
22 Lemy F, Hadjigeorgiou J. Discontinuity trace map construction using photographs of rock exposures. Int J Rock Mech Min Sci. 2003;40(6):903–917.
23 Deb D, Hariharan S, Rao UM, Ryu CH. Automatic detection and analysis of discontinuity geometry of rock mass from digital images. Comput Geosci. 2008;34(2):115–126.
24 Healy D, Rizzo RE, Cornwell DG, et al. FracPaQ: a MATLAB™ toolbox for the quantification of fracture patterns. J Struct Geol. 2017;95:1–16.
25 Ge Y, Tang H, Xia D, et al. Automated measurements of discontinuity geometric properties from a 3D-point cloud based on a modified region growing algorithm. Eng Geol. 2018;242:44–54.
26 Zhang Q, Shen X, Xu L, Jia J. Rolling guidance filter. In: Computer Vision – ECCV 2014. Springer International Publishing; 2014.
27 Lu X, Yao J, Zhang X, Li L, Liu Y. MSEdge: a multi-scale edge chain detector. In: The 5th International Conference on Computational Visual Media (CVM), Tianjin; 2017.
28 Guo J, Wu L, Zhang M, Liu S, Sun X. Towards automatic discontinuity trace extraction from rock mass point cloud without triangulation. Int J Rock Mech Min Sci. 2018;112:226–237.
29 Lato M, Kemeny J, Harrap R, Bevan G. Rock bench: establishing a common repository and standards for assessing rockmass characteristics using LiDAR and photogrammetry. Comput Geosci. 2013;50:106–114.
30 Riquelme AJ, Tomás R, Abellán A. Characterization of rock slopes through slope mass rating using 3D point clouds. Int J Rock Mech Min Sci. 2016;84:165–176.
31 Wang R, Lai X, Hou W. Study on edge detection of LIDAR point cloud. In: International Conference on Intelligent Computation and Bio-Medical Instrumentation. 2011:71–73.
32 Robleda PG, Caroti G, Martínez-Espejo ZI, Piemonte A. Computational vision in UV-mapping of textured meshes coming from photogrammetric recovery: unwrapping frescoed vaults. Int Arch Photogramm Remote Sens Spatial Inf Sci. 2016;41(B5):391–398.
33 Sheffer A, Praun E, Rose K. Mesh parameterization methods and their applications. Found Trends Comput Graph Vis. 2007;2(2):105–171.
34 Heckbert PS. Survey of texture mapping. IEEE Comput Graph Appl. 1986;6(11):56–67.
35 Cohen J, Olano M, Manocha D. Appearance-preserving simplification. In: Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques. ACM; 1998:115–122.
36 Cignoni P, Montani C, Rocchini C, Scopigno R. A general method for preserving attribute values on simplified meshes. In: Visualization '98. Proceedings. IEEE; 1998:59–66.
37 Canny J. A computational approach to edge detection. In: Readings in Computer Vision. Elsevier; 1987:184–203.
38 Guo Z, Hall RW. Parallel thinning with two-subiteration algorithms. Commun ACM. 1989;32(3):359–373.
39 Hedley M, Yan H. Segmentation of color images using spatial and color space information. J Electron Imaging. 1992;1(4):374–381.
