A prediction algorithm for data analysis in GPR-based surveys


J.B. Rodriguez (a), M.F. Pantoja (a,*), X.L. Travassos (b), D.A.G. Vieira (c), R.R. Saldanha (d)

(a) Departamento de Electromagnetismo y Física de la Materia, Universidad de Granada, Granada 18071, Spain
(b) Centro de Engenharias da Mobilidade, Universidade Federal de Santa Catarina, Joinville, Brazil
(c) ENACOM Handcrafted Technologies, Belo Horizonte, Brazil
(d) Departamento de Engenharia Elétrica, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil

Article history: Received 1 November 2014; received in revised form 1 April 2015; accepted 22 May 2015. Communicated by V. Palade.

Abstract

This paper presents a prediction algorithm for feature detection in Ground Penetrating Radar (GPR) based surveys. Based on signal-processing and soft-computing techniques, the coupled use of principal-component analysis and neural networks enables the definition of an efficient method for analyzing GPR electromagnetic data. To guarantee a low error rate, the main numerical parameters of the algorithm were studied by means of synthetic electromagnetic data models. Results for detecting features of geological layers demonstrate not only the accuracy of the method's predictions but also the simple interpretation of its output through reconstructed images of the scenarios.

© 2015 Elsevier B.V. All rights reserved.

Keywords: Ground-penetrating radar; neural network applications; radar signal processing; geophysics

1. Introduction

In recent decades, the advent of commercial ground-penetrating radar (GPR) has led to a multidisciplinary revolution in the field of buried-object detection, with broad application in areas such as archaeology (e.g., planning of surveys) [1], geology (e.g., aquifer detection) [2], and the military industry (e.g., non-metallic mine detection) [3]. One of the main challenges in GPR systems, beyond the mere detection of buried objects, is to gather information on the composition of the objects or the environment surrounding them. Although the electronic technology necessary for implementing these systems is now mature and constantly developing [4,5], limitations persist in the detection and interpretation of the results provided.

In recent years, two main lines have emerged to solve this problem. On the one hand, some systems apply tomographic techniques [6] as well as approaches using integral equations [7], but these have had only partial success, due mainly to the complexity of field data, which contain high levels of noise caused by non-homogeneities of the host media. On the other hand, techniques based on Neural Networks (NNs) with different topologies [8–12] have been proposed to solve canonical electromagnetic-inversion problems, e.g., a spheroid embedded in a host medium [13], and further improvements have been introduced in relation to more realistic geometrical forms related

* Corresponding author. E-mail address: [email protected] (M.F. Pantoja).

to civil-engineering applications [14], even including the consideration of a non-homogeneous host medium [15]. A common point in all these NNs is the implementation, as a step prior to the NN training phase, of a computational model of the GPR scenarios. In this way, the scattered field in a randomly generated scenario can be calculated by numerical methods, usually the finite-difference time-domain (FDTD) method [16,17] or, for cases where numerical instabilities arise, the alternating-direction implicit FDTD (ADI-FDTD) method [18,19].

One of the main shortcomings of applying NNs as a prediction system in GPR problems is the curse of dimensionality, which makes the training slow and the prediction capacity of the system poor [20]. Therefore, a key point is to reduce the high dimensionality of the scattered-field data, enabling a reduced number of inputs for which the NN will be trained, making the process faster and more reliable. At this point, signal-processing techniques such as Principal-Component Analysis (PCA) can be introduced as part of the algorithm. The usefulness of PCA as a compression technique with minimum loss of information in time-domain GPR signals has been shown in [15], where the objective was to estimate the depth and radius of buried tubes in a non-homogeneous concrete structure.

In this context, the present paper applies techniques based on PCA and NNs to build prediction systems for geological features in GPR-based geological surveys. The main challenge of this procedure is not only to achieve a high rate of success in the predictions but also to build on previous works in this research line by producing B-scan graphic results. In this sense, the proposed algorithm outperforms previous NN predictors, which


provide only one-dimensional numerical outputs, by enabling the interpretation of the solutions by users not specialized in GPR data processing.

The paper is structured as follows. First, a general overview of the background theory is provided, briefly describing the PCA algorithm, the NNs, and the creation of synthetic data with FDTD. Next, the scheme of the prediction system is presented, paying special attention to the differences in the implementation for A- and B-scan surveys. Then, another section shows the influence of some numerical parameters on the performance of the prediction system and, finally, illustrative examples related to the detection and prediction of geological layers are provided.

2. Background theory

Fig. 1 shows the flowchart synthesizing the prediction system. The prediction algorithm can be described as a modular system which combines three different resources: (1) numerical electromagnetic simulation codes, (2) signal-processing compression techniques, and (3) neural-network theory. Further improvements and incoming advances in any of these theories could be accommodated separately at each stage of the process.

The first step in developing a NN-based prediction algorithm is to gather data representative of the situations in which the neural network will work. Successful accomplishment of the NN training and configuration phases is directly related to the diversity, quantity, and quality of the data provided. For GPR systems, the use of experimental data is hardly affordable because (1) it is time consuming and labor intensive, and (2) it is rarely free of undesired objects and other experimental sources of error. For these reasons, the use of data from electromagnetic simulations is considered here. The FDTD numerical approach for the solution of Maxwell's equations has been broadly employed for GPR simulations [17]. It provides higher accuracy than ray-tracing methods [21] at the cost of an increased computational burden. Moreover, realistic GPR scenarios can be solved owing to its ability to deal with non-homogeneous and dispersive materials. However, in some cases numerical instabilities can arise, invalidating the computed results. In such cases, an improved version of the FDTD method, called ADI-FDTD, which is based on an implicit finite-difference formulation of the Maxwell

Fig. 1. Flowchart of the prediction system. Numbers denote the external resources applied at each step (see text for details).

time-domain equations, can generate accurate results. Therefore, proper modeling of the GPR equipment and of the different possible scenarios (e.g., electromagnetic sources, feeding pulse, constitutive material parameters, geometries of non-homogeneous soils) can be efficiently introduced and solved with the aid of scripts. Automation of the process is required, since the training phase typically needs to run hundreds of cases before the NN can be determined.

Signal-processing compression techniques have constituted an active field of research in recent decades, mainly for applications related to audio and image processing [22]. Designed initially for communication systems, they are aimed at handling a large amount of information with the least data possible. In this sense, the problem considered here is analogous. The huge amount of data obtained from GPR electromagnetic simulations makes their direct use inefficient for the NN configuration, mainly due to the high complexity of the training algorithms, which require the handling of data from hundreds of simulations in order to determine the variables and NN weights. Even in scenarios where a sufficiently high number of simulations can be calculated, it is possible that advanced training algorithms do not converge for high-dimensional NNs, primarily due to the difficulty of providing a non-sparse set of training data [23,24]. For this reason, it becomes necessary to process the synthetic data and remove the redundant information. This redundancy is a typical feature of GPR systems, where exhaustive measurements are made over the same scenario and only minor differences between adjacent traces appear.

To exploit the strong correlation in the data, Principal-Component Analysis (PCA) can be applied [25]. PCA identifies similar patterns in data and reorganizes the data in such a way that the similarities and differences are highlighted. Mathematically, this is achieved by an orthogonalization of a matrix constructed by adding rows with traces of input data, so that these rows are not correlated with each other. Another main feature of PCA is that, once these patterns in the data are found (i.e., the orthogonal basis is determined), the data can also be compressed without significant loss of information by simply removing some of the basis vectors. In the present paper, the compression ability of PCA is used to extract the most relevant information from the data set employed in the NN training phase. The principal components of a given trace are defined as its components in the new orthogonal basis derived from the initial data matrix, which is composed of a large number of examples; the NN input is precisely these principal components. As a reduction in the basis dimension can be performed with low impact on the original trace, the number of principal components can be reduced, thus leading to low-dimensional NNs. The next section will show that the decomposition of different GPR signals using PCA is a key factor in the ability of the NN to reconstruct the original geological scenarios.

Finally, some considerations concerning the NNs are necessary to finish the description of the theories on which the proposed system is founded. The initial development of the theoretical basis of NN systems [26–28] has in recent years been followed by a mature period of real-world applications [29–31].
A major decision in any problem to be solved is the choice of the network topology, because this topology has a significant impact on the system performance, and the same problem can often be successfully solved with different topologies. Among a considerable number of network topologies [32,33], the work described here uses a particular topology called the Parallel-Layer Perceptron (PLP) [10,24]. Regarding the inversion problem, this topology has some advantages over other classical networks, such as the multilayer perceptron (MLP) [33] and the adaptive-network-based fuzzy-inference system (ANFIS) [34]: PLP offers better performance than MLP while maintaining the ability of ANFIS to handle complex problems. The training algorithm for PLP used in the present paper is a hybrid method based


on the back-propagation learning algorithm [33], which combines a least-squares estimation (LSE) with the second-order correction algorithm of Levenberg–Marquardt [35]. Preliminary results with this training algorithm have proved satisfactory in terms of training time and accuracy of the solutions, and it has also shown superior performance compared to alternative algorithms tested, such as the gradient-descent algorithm and a hybrid of gradient descent and LSE.
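As an illustration of how such a hybrid scheme divides the work, the following minimal sketch alternates a least-squares solve for the weights that enter the output linearly with a Levenberg–Marquardt refinement of the nonlinear weights. It is not the PLP implementation of [24]: `phi` is a hypothetical callable standing in for the response of the network's nonlinear layer, the output is assumed scalar (one NN per mesh box, as in Section 3), and SciPy's generic Levenberg–Marquardt routine replaces the paper's custom second-order correction.

```python
import numpy as np
from scipy.optimize import least_squares

def train_hybrid(phi, x, y, w_nl0, n_iter=10):
    """Alternate LSE and Levenberg-Marquardt steps (hedged sketch)."""
    w_nl = np.asarray(w_nl0, dtype=float)
    for _ in range(n_iter):
        # With the nonlinear weights frozen, the output weights enter
        # linearly, so they are obtained by least-squares estimation.
        H = phi(x, w_nl)                      # (n_samples, n_hidden)
        w_lin, *_ = np.linalg.lstsq(H, y, rcond=None)

        # Refine the nonlinear weights with a Levenberg-Marquardt step
        # on the residuals, keeping the output weights fixed.
        residuals = lambda w: phi(x, w) @ w_lin - y
        w_nl = least_squares(residuals, w_nl, method='lm').x
    return w_nl, w_lin
```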

3. NN-PCA tool for GPR-based detection

The algorithm developed in this project aims at predicting the composition and structure of complex GPR scenarios, where multiple non-homogeneities can appear. In this way, simulated grounds emulate physical hosts as far as possible, making it feasible to undertake further research on experimental measurements. To this end, a Cartesian mesh of the scenario under test is constructed, and the proposed system provides an output map of the constitutive parameter of each grid square. By transforming these values, the software can estimate a physical material for each part of the mesh, thereby reconstructing the original GPR image, in which each pixel corresponds to a box of the Cartesian mesh.

In this section, a detailed description is presented for the 1-D model. This approach is called 1-D because it is based only on GPR A-scans, i.e., only one trace from each example is considered for the training and prediction stages. Next, a subsection is devoted to explaining an extension of the 1-D model to B-scans, which are more frequent in experimental GPR surveys, resulting in an improved algorithm called the 2-D model.

3.1. 1-D model

Fig. 2 illustrates the process of generating a PLP-NN based on A-scan traces for prediction purposes, and Fig. 3 summarizes the neural network training process, including the neural-network blocks and the parallel-layer perceptron. The main stages of the algorithm are as follows:

1. Determination of the training matrix: The first stage consists of the random generation of meshed scenarios (i.e., with varying permittivities at each block), which are simulated by means of ADI-FDTD. The voltages measured at the receiving antenna (modeled as a Hertzian dipole) in each scenario are taken as the output of the simulations; these time series of voltages are called traces (Fig. 2(a)). All the resulting traces are grouped and arranged to form a rectangular data matrix of dimensions number of examples × trace size (Fig. 2(b)). The trace size is determined by the number of time steps computed by the electromagnetic GPR simulator. At this point, it bears mentioning that in some cases it is helpful to preprocess the data,


thereby removing the first reflection due to the air–ground interface, which is of no interest for predicting buried materials. In practice, this is accomplished by subtracting the trace corresponding to a scenario composed exclusively of the host medium, without additional materials inside.

2. Data-matrix processing: The data matrix has high dimensionality, but the trace components are closely correlated, because they correspond to successive electromagnetic scans of geological scenarios whose reflections arise from features of different permittivities (the scattering centers) embedded in the same host medium. In this case, the application of the PCA algorithm reduces the high dimensionality of the data matrix. This reduction is a key step in the performance of the prediction system, because the use of high-dimensional NNs (i.e., NNs containing a high number of inputs/outputs to handle the information provided by the scattered fields) would lead to ineffective prediction systems, even in cases where the training stage of the NN can be accomplished. PCA extracts the main features of the traces by means of an orthogonal decomposition of the input vectors (i.e., the rows of the data matrix) into a set of non-physical traces grouped in a new matrix of dimensions number of PCA × trace size, named the signal matrix. The mathematical projection of the GPR traces onto the signal matrix yields the principal components of these traces, which correspond to the weight of each PCA signal in the reconstruction of the original trace. These can be arranged to form a number of examples × number of PCA matrix, called the PCA matrix. Mathematically, this operation is expressed as:

$$(\text{Data Matrix}) = (\text{PCA Matrix}) \times (\text{Signal Matrix}) \qquad (1)$$
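As a concrete illustration of Eq. (1), the sketch below derives the signal matrix from the data matrix with a singular-value decomposition, keeping only the dominant basis vectors, and also shows the projection that will be used in the prediction stage (Eq. (2)). This is a minimal sketch under the assumption that an SVD is an acceptable stand-in for the numerical procedures of [25]; for simplicity the traces are not mean-centered, although standard PCA would center them first.

```python
import numpy as np

def build_signal_matrix(data_matrix, n_pca):
    # Orthogonal decomposition of the data matrix (Eq. (1)):
    # the rows of vt are the uncorrelated, non-physical traces,
    # sorted by their contribution to reconstructing the data.
    _, _, vt = np.linalg.svd(data_matrix, full_matrices=False)
    signal_matrix = vt[:n_pca]               # (n_pca, trace_size)
    # Principal components of every training trace (the PCA matrix).
    pca_matrix = data_matrix @ signal_matrix.T
    return signal_matrix, pca_matrix

def decompose_trace(trace, signal_matrix):
    # Projection of a new trace onto the compressed basis (Eq. (2));
    # the pseudo-inverse plays the role of (Signal Matrix)^-1 in the
    # text, since the compressed matrix is not square.
    return trace @ np.linalg.pinv(signal_matrix)
```

Because the rows of the compressed signal matrix are orthonormal, the pseudo-inverse reduces to a transpose, so the projection is cheap even for long traces.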

Well-known numerical procedures for the calculation of the signal matrix can be found in [25]. Nevertheless, it is worth noting the fundamentals of the reduction in the dimensionality of the NN. A key factor here is the meaning of the parameter number of PCA: the higher this value, the higher the accuracy of the decomposition of the original trace in the new vector basis. However, not all the vectors of the new basis contribute equally to that decomposition. PCA not only determines the signal matrix but also measures the contribution of each row of this matrix to the reconstruction of the data matrix, through the numerical values that compose the PCA matrix. Thus, it is possible to selectively remove those rows that contribute the least information to the original data set, effectively compressing the data matrix. Taking into account that the number of PCA parameter corresponds to the number of neurons in the input layer, it becomes clear that computational advantages derive from an effective reduction of the new vector basis that removes only those signals without significant information. Moreover, the signal matrix plays an important role in the prediction stage. Once the compressed signal matrix is calculated, the set of PCA components corresponding to a new trace may be determined

Fig. 2. Illustrated scheme of the computational simulation of a single scenario (a) and the formation of the data matrix (b).


Fig. 3. Neural network training process.

mathematically for any scenario not covered in the training stage, as follows:

$$(\text{DPCA}) = (\text{trace}) \times (\text{Signal Matrix})^{-1} \qquad (2)$$

where the decomposed PCA (DPCA) is a vector containing a number of values equal to the number of chosen PCA components. The DPCA vector is the input of the NN which, once trained, will provide the prediction of the geological features of the scenario under test.

3. NN training: The next step is to calculate the NN weights and parameters, configured according to the PLP topology. Fig. 3 also shows a graphic summary of the procedure. A number of different NNs equal to the number of mesh boxes has to be determined (Fig. 3a). Then, the rows of the PCA matrix act as input vectors, needed to train a number of PLP-NNs equal to the number of vertical boxes established in the scenario mesh. In this way, each box corresponding to a geographical location has its own PLP-NN, which is trained separately following the procedures described in [24,15].

4. Test phase and error analysis: Once the NN block is fully determined, the system performance can be estimated in the test stage. For this, a set of input data is provided; these data, for which the corresponding original scenario is known, do not belong to the training set. The goal is to compare the predicted and original scenarios obtained with the 1-D model. As synthesized in Fig. 3b, these test traces are first decomposed using (2). The resulting vector is then introduced into each PLP-NN, yielding the estimated constitutive parameter of the box represented by that NN. As a measure for evaluating the performance of the system, the following error is used:

$$\%\text{error}_{pos} = \frac{|y_d - y_o|}{y_d} \times 100 \qquad (3)$$

where $y_d$ is the desired output (e.g., the relative permittivity of the material) and $y_o$ is the output obtained from the NN. As a global figure, the general error is also used, defined as the average error over all positions of the reconstructed scenario. In practice, geological applications using GPR benefit more from identified materials than from numerical maps of constitutive parameters. Better scenario reconstructions can therefore be depicted by applying threshold values to the output of each NN, which snap the NN output to a previously defined set of materials. For instance, in problems involving the geological distribution of two different materials (of relative permittivities $\epsilon_{r1}$ and $\epsilon_{r2}$), the threshold operator is defined as:

$$\text{threshold}(y) = \begin{cases} \epsilon_{r1} & \text{if } |y - \epsilon_{r1}| \le |y - \epsilon_{r2}| \\ \epsilon_{r2} & \text{if } |y - \epsilon_{r2}| \le |y - \epsilon_{r1}| \end{cases} \qquad (4)$$

The application of the threshold operator leads to the corresponding local and global error figures, called the threshold error. In this case, success is 100% when the thresholded value corresponds to the original material, and 0% otherwise.
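The two error measures translate directly into code. Below is a short transcription of Eqs. (3) and (4); the threshold operator is written for an arbitrary list of candidate permittivities, a small generalization of the two-material case defined above.

```python
import numpy as np

def percent_error(y_desired, y_out):
    # Local error of Eq. (3), in percent.
    return abs(y_desired - y_out) / y_desired * 100.0

def threshold(y, materials):
    # Threshold operator of Eq. (4): snap the NN output to the nearest
    # candidate permittivity (e.g., materials = (eps_r1, eps_r2)).
    materials = np.asarray(materials, dtype=float)
    return float(materials[np.argmin(np.abs(materials - y))])
```

The general error of the text is then simply the mean of `percent_error` over all boxes of the reconstructed scenario.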

3.2. 2-D model

This model extends the method explained for the 1-D case by exploiting the availability of adjacent traces in typical B-scans. The idea is to link the adjacent traces in the time domain from B-scan data, as shown in Figs. 4 and 5. The hypothesis behind the improvement in the results is the ability of the PCA decomposition to generate adequate basis vectors for any situation. For the extended traces (named 2-D models), these basis vectors account for the physical information corresponding to the hyperbolas produced in the electric field reflected in GPR surveys with local non-homogeneities. For example, a scattering center (located at the center of Fig. 4, at trace number 21 and a time of 2 ns) reflects any incoming electromagnetic signal, forming a typical scattering hyperbola (with the upper part corresponding to A-scans with the source antenna just on top of the scattering center). The correlation between adjacent traces (i.e., distinct A-scan traces chosen as shown in Fig. 4) can then be exploited by a properly trained neural network, affording some degree of improvement in the results compared to the 1-D model. Again, it bears noting the key role played by the PCA decomposition, which allows the original algorithm to be extended to multiple dimensions in a straightforward way.

An indication of the degree of improvement offered by the extended method can be seen in Fig. 6, which compares the performance of three NN systems (the 1-D model, the 2-D model with 3 traces, and the 2-D model with 5 traces) in a two-material GPR scenario. The NN-based prediction system was trained with 150 examples, and the compression in the PCA processing was undertaken considering 4, 6, 8, 10, 12, and 14 PCA vectors. Focusing only on comparisons related to the performance of the different models, Fig. 6a and b shows that the general and threshold errors, respectively, decrease by up to 12.5% when applying 2-D models. However, the degree of improvement for 5-trace systems is not very high compared with 3-trace systems. Although there is a certain tendency for the system to work better with a higher number of traces, the computational costs needed to achieve satisfactory training stages for such models discourage the use of systems based on 5 traces. In the next section, a more detailed study is provided concerning the impact of other relevant parameters, such as the number of PCA vectors or the number of examples in the training stage, on prediction accuracy.
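The construction of the extended traces is straightforward to sketch. The fragment below concatenates each A-scan with its nearest neighbours, following Figs. 4 and 5; the handling of the edge positions (reusing the nearest available traces) is an assumption, since the paper does not detail it.

```python
import numpy as np

def extend_traces(bscan, n_traces=3):
    # bscan: (n_positions, trace_size) array of adjacent A-scans.
    # Each row of the result links a trace with its (n_traces - 1)
    # neighbours in the time domain, as in the 2-D model.
    half = n_traces // 2
    n_pos = bscan.shape[0]
    rows = []
    for i in range(n_pos):
        idx = np.clip(np.arange(i - half, i + half + 1), 0, n_pos - 1)
        rows.append(np.concatenate([bscan[j] for j in idx]))
    return np.vstack(rows)    # (n_positions, n_traces * trace_size)
```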


Fig. 4. Reconstruction of 2-D models from B-scan data (red, green, and yellow arrows correspond to 1, 3, and 5 traces). The inset illustrates the formation of traces detailed in Fig. 5. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this paper.)

Fig. 6. Comparison of the one-dimensional system (equivalently, a two-dimensional system with 1 trace), the two-dimensional system with 3 traces, and the two-dimensional system with 5 traces, for a simulation of 150 examples and a frequency of 1200 MHz: (a) general error and (b) threshold error.

Fig. 5. Trace structure for the 1-D case and for the 2-D case with 3 and 5 traces.

4. Main parameters of the system

The success of the predictions depends heavily on the choice of several parameters described above. The most important of these are the number of principal components used in the PCA processing and the number of examples employed in the NN training stage. In addition, other choices associated with GPR should be taken into account, for example the pulse waveform and the survey frequency range. Regarding the PLP-NN topology, the activation function applied in the non-linear layer is sigmoidal, with the number of neurons in the intermediate layer ranging from 2 to 5, empirically optimized for best performance. This choice is relevant because the higher the number of neurons, the higher the number of examples needed in the training stage, implying an additional computational cost. The training method applied is the hybrid LSE-Levenberg–Marquardt method, which offers a high rate of success at a low computational cost.

Some guidelines governing the number of examples in the training stage and the PCA compression number can be derived from a canonical two-material problem. Consider a host with the electrical characteristics of marble (ϵr = 8.0) and a size of 0.25 m × 0.6 m (Fig. 7). Inside the host, a vertical column

Fig. 7. Simulation scenario employed in the 1-D and 2-D cases.

is placed, composed of 10 square zones of size 0.025 m × 0.025 m, which are filled randomly with marble (ϵr = 8.0) or dry sand (ϵr = 4.0). The objective of the problem is to determine the correct material for each position, giving a total of 2^10 = 1024 possible solutions. Computer simulations of the GPR survey were made considering a distance of 0.05 m between the transmitting and


Fig. 9. Comparison between the general and threshold errors at each position, for the optimal parameter choices.

Fig. 8. Working surfaces for a 500 MHz one-dimensional system: (a) general error and (b) threshold error.

receiving dipoles, both located over the air–ground interface at a height of 0.05 m. The transmitting antenna was fed with a Ricker pulse with a central frequency of 500 MHz, widely used in geological surveys. For the sake of brevity, this paper shows the results for 500 MHz, but similar results were achieved with pulses of 900 MHz and 1200 MHz. Once the NN was trained, a sufficiently high number of test cases not used in the training phase were employed to evaluate the results.

Fig. 8 shows the general and threshold errors as functions of the number of PCA vectors and the number of training examples, the latter scaled to the number of possible solutions (in our case, number of examples / 2^10). It should be taken into account that a high number of PCA vectors decreases the general and threshold errors but increases the computational costs, because a greater number of examples is needed in the training phase. Therefore, the error surfaces suggest a compromise in the choice of both parameters. Training-example ratios below 0.05 result in extremely poor predictions, regardless of the number of PCA components selected. Larger training sets and numbers of PCA vectors offer more satisfactory results. With 14 PCA vectors and an examples-to-possibilities ratio of 0.195, the general and threshold errors were 4% and 0.5%, respectively. In

Fig. 10. Comparison between the general and threshold errors for different resolutions, for systems trained with 150 examples and 12 principal components, at frequencies of 500, 900, and 1200 MHz.

general, good choices lie in the range from 6 PCA vectors with 100 examples to 14 PCA vectors with 200 examples.

Fig. 9 shows the results as a function of depth for several parameter choices. It can be seen that the predictions in the regions nearer to the surface of the host (positions 1–5) have low errors, while deeper zones give poorer results. This is because the transmitted signal is degraded by the multiple reflections that occur during the propagation of the electromagnetic waves, diminishing their ability to illuminate deeper areas.

Another useful study involves the size of the grid and different survey central frequencies. That is, different meshing resolutions were considered (0.0125 m, 0.01 m, and 0.005 m), with the general and threshold errors calculated for frequencies of 500 MHz, 900 MHz, and 1200 MHz. As only 10 unknown zones are considered, the maximum depths under study for the different resolutions are 0.125 m, 0.1 m, and 0.05 m. The prediction system was trained in all cases with 150 examples, and 12 PCA vectors were employed. Fig. 10 shows that higher survey frequencies work better at shallower depths, while lower frequencies function better for deeper explorations.
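The error surfaces of Fig. 8 result from precisely this kind of two-parameter sweep. A hedged sketch of the bookkeeping is shown below; `train_system` and `evaluate_system` are hypothetical wrappers around the pipeline of Section 3 (scenario generation, PCA compression, and PLP training) and around the test-phase general error of Eq. (3), respectively.

```python
import numpy as np

def error_surface(train_system, evaluate_system,
                  pca_counts=(4, 6, 8, 10, 12, 14),
                  example_counts=(50, 100, 150, 200)):
    # Sweep the two key parameters and tabulate the general error,
    # reproducing the structure of the working surfaces in Fig. 8.
    surface = np.empty((len(pca_counts), len(example_counts)))
    for i, n_pca in enumerate(pca_counts):
        for j, n_examples in enumerate(example_counts):
            system = train_system(n_pca, n_examples)
            surface[i, j] = evaluate_system(system)
    return surface
```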


Fig. 11. (a) Parallel-layers scenario generated to test the system and (b) reconstruction of the test scenario offered by the system at the output of the neural network.

5. Results

The detection and reconstruction of geological layers is a paradigm in GPR-based surveys. This problem applies not only to geology but also to civil engineering, where GPR can be employed as a nondestructive tool to measure pavement thickness in roads [4,36]. The use of GPR is justified in these cases because asphalt and soil are materials with low conductivity; electromagnetic waves in the range of hundreds of MHz therefore undergo low attenuation when propagating through them, making it possible to gather enough information to reconstruct the scenario. In this section, the algorithm described above is applied to the exploration of host media consisting of layers of materials of varying lengths and thicknesses. Specifically, the simulated scenarios to be reconstructed are composed of dry asphalt and dry loamy soil, intended to resemble as far as possible the materials that make up road pavement. The main objective is therefore to reconstruct, with maximum accuracy, a ground profile formed by layers of different materials, in terms of the depth, thickness, and geometry of each layer.

The first test was performed on a soil profile composed of only two materials, dry asphalt (ϵr = 3.0) and dry loamy soil (ϵr = 8.0), in

parallel layers. The upper part of Fig. 11 shows the depth and thickness of each layer. The electromagnetic pulse radiated by the GPR system is approximated as a Ricker waveform [37] with a central frequency of 500 MHz. The training stage of the NN is completed according to the procedures described above. The lower part of Fig. 11 depicts the reconstruction of the soil, with colors corresponding to the permittivity predicted by the proposed algorithm. Each layer of material can be distinguished: darker colors represent the dry asphalt layers and brighter ones the loamy layers. The intermediate tones corresponding to deeper layers are a consequence of the higher error rates existing at those depths even for well-trained NNs, as pointed out in the previous section.

Another case of interest is the detection of geometrical features, such as slopes, in the buried layers. Based on the former example, but with only two different materials, the scenario depicted in Fig. 12 represents this situation. Using an identical NN topology and numerical parameters as in the first example, the lower part of Fig. 12 shows the output of the prediction system. The interpolation properties of the NNs enable the reconstruction of the shape of the interface between the two materials. This reconstruction can be considered adequate because the main features of the


Fig. 12. (a) Diagonal-layers scenario generated to test the system and (b) reconstruction of the test scenario offered by the system at the output of the neural network.

scenario (i.e., the horizontal and sloped boundaries separating air, asphalt, and loamy soil) are visually similar, with a pixel resolution limited computationally by the size of the NN.

Finally, the test of the system is extended to the rebuilding of a scenario of parallel layers composed of three possible materials: dry asphalt (ϵr = 3.0), dry loamy soil (ϵr = 8.0), and granite (ϵr = 5.0). The introduction of a new material increases the number of possible solutions, for a total of 3^10 possibilities with the resolution considered, and the NN must be trained with a larger number of examples, 1000 in this case, to achieve satisfactory results. Fig. 13 illustrates the ability of the system to reconstruct the original scenario, in terms of permittivity as well as the thickness and depth of the different layers. Notably, it provides good results despite the lower ratio of training scenarios to possible solutions compared with the other examples. The introduction of a new material of intermediate ϵr does not impair the performance, as might be expected initially. On the contrary, the prediction system is enhanced by the higher number of training scenarios, with the interpolation properties of the NNs being the main reason for this improvement.
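Since each NN predicts the permittivity of a single mesh box, producing images like those of Figs. 11–13 amounts to a reshape and a color map. A minimal sketch, assuming the per-box predictions are ordered by depth and horizontal position:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_reconstruction(predictions, n_depth, n_horizontal):
    # One pixel per mesh box: with a gray color map, low permittivities
    # (e.g., dry asphalt) render dark and high permittivities (e.g.,
    # loamy soil) render bright, matching the convention in the text.
    grid = np.asarray(predictions).reshape(n_depth, n_horizontal)
    plt.imshow(grid, cmap='gray', aspect='auto')
    plt.colorbar(label='predicted relative permittivity')
    plt.xlabel('horizontal position (box index)')
    plt.ylabel('depth (box index)')
    plt.show()
```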

At this point, we would like to emphasize the main challenges in applying the proposed method to realistic scenarios. Firstly, the practical equipment available includes realistic antennas with directive radiation patterns, which should be taken into account. These effects will be noticeable at higher frequencies, and a proper calibration of the available antennas should be made in advance. Secondly, in most applications the host medium will be a non-homogeneous soil containing a high number of small scattering centers that act as noise sources in the data matrix; consequently, the prediction accuracy of the present procedure will be lower in these cases.

6. Conclusion

A prediction system composed of a numerical electromagnetic solver, a NN system, and a data-compression algorithm constitutes a powerful tool for the interpretation of the data compiled in GPR-based geological surveys. Special attention needs to be paid to tuning the key parameters of the complete system in cases where


Fig. 13. (a) Parallel-layer scenario generated to test the system and (b) reconstruction of the test scenario offered by the system at the output of the neural network.

both A- and B-scan data are available. B-scan information, arranged as multiple traces (3 or 5), decreases the general error in comparison to the previously proposed A-scan predictors, particularly in scenarios including discrete scattering centers or non-horizontal boundaries. The results show adequate visual reconstructions for computerized geological surveys in which parallel and diagonal layers of buried material are arranged. The adaptability of neural networks to changes in the environment (e.g., a random distribution of materials in the scenario), together

with the use of data-compression techniques, makes this model promising for other field-sensing applications where complete reconstructions of the subsoil are required.

Acknowledgements

The authors would like to thank the Spanish Ministry of Education and the Brazilian CAPES for their support of this work,


through Project PR2009-0067. The authors would also like to acknowledge partial support from the EU FP7/2007-2013 under GA 205294 (HIRF SE project), from the Spanish National Projects TEC2007-66698-C04-02, CSD2008-00068, and DEX-5300002008105, and from the Junta de Andalucía Project TIC1541. This work was also supported by the Brazilian agencies CNPq and FAPEMIG.

References

[1] C. Colla, C. Maierhofer, Investigations of historic masonry via radar reflection and tomography, in: 8th International Conference on Ground Penetrating Radar (GPR), 2000, pp. 893–898.
[2] S.A. Arcone, A.J. Delaney, GPR images of hidden crevasses in Antarctica, in: 8th International Conference on Ground Penetrating Radar (GPR), 2000, pp. 760–765.
[3] K. Ho Lee, C.-C. Chen, R. Lee, A numerical study of the effects of realistic GPR antennas on the scattering characteristics from unexploded ordnances, in: IGARSS '02, IEEE International Geoscience and Remote Sensing Symposium, vol. 3, 2002, pp. 1572–1574.
[4] D. Daniels, Ground Penetrating Radar, IET Press, Stevenage, 2004.
[5] C. van Coevorden, A. Bretones, M. Pantoja, F. Ruiz, S. Garcia, R. Martin, GA design of a thin-wire bow-tie antenna for GPR applications, IEEE Trans. Geosci. Remote Sens. 44 (4) (2006) 1004–1010. http://dx.doi.org/10.1109/TGRS.2005.862264.
[6] R.M. Morey, S.M. Conklin, S.P. Farrington, J.D. Shinn II, Tomographic Site Characterization Using CPT, ERT and GPR, 1999.
[7] N. Joachimowicz, C. Pichot, J.-P. Hugonin, Inverse scattering: an iterative numerical method for electromagnetic imaging, IEEE Trans. Antennas Propag. 39 (12) (1991) 1742–1753. http://dx.doi.org/10.1109/8.121595.
[8] C. Christodoulou, M. Georgiopoulos, Applications of Neural Networks in Electromagnetics, Artech House, Norwood, 2001.
[9] D.A.G. Vieira, Rede perceptron com camadas paralelas (PLP - parallel layer perceptron) (Ph.D. thesis), Universidade Federal de Minas Gerais, 2006.
[10] W.M. Caminhas, D.A.G. Vieira, J.A. Vasconcelos, Parallel layer perceptron, Neurocomputing 55 (3–4) (2003) 771–778.
[11] C. Christodoulou, J. Huang, M. Georgiopoulos, Design of gratings and frequency-selective surfaces using ARTMAP neural networks, J. Electromagn. Waves Appl. 9 (1/2) (1995) 17–36.
[12] M.K. Smail, Y.L. Bihan, L. Pichon, Fast diagnosis of transmission lines using neural networks and principal component analysis, Int. J. Appl. Electromagn. Mech. 39 (1) (2012) 435–441. http://dx.doi.org/10.3233/JAE-2012-1493.
[13] S. Caorsi, G. Cevini, An electromagnetic approach based on neural networks for the GPR investigation of buried cylinders, IEEE Geosci. Remote Sens. Lett. 2 (1) (2005) 3–7. http://dx.doi.org/10.1109/LGRS.2004.839648.
[14] L. Newnham, A. Goodier, Using neural networks to interpret subsurface radar imagery of reinforced concrete, 4084 (2000) 434–440.
[15] X. Travassos, D. Vieira, N. Ida, C. Vollaire, A. Nicolas, Characterization of inclusions in a nonhomogeneous GPR problem by artificial neural networks, IEEE Trans. Magn. 44 (6) (2008) 1630–1633. http://dx.doi.org/10.1109/TMAG.2007.915332.
[16] S.G. Garcia, A.R. Bretones, B.G. Olmedo, R.G. Martin, Finite difference time domain methods, in: D. Poljak (Ed.), Time Domain Techniques in Computational Electromagnetics, WIT Press, 2003, pp. 91–132.
[17] A. Giannopoulos, The investigation of transmission-line matrix and finite-difference time-domain methods for the forward problem of ground probing radar (Ph.D. thesis), University of York, 1997.
[18] R. Godoy-Rubio, Métodos de diferencias finitas incondicionalmente estables para la resolución de las ecuaciones de Maxwell en el dominio del tiempo (Ph.D. thesis), Universidad de Granada, 2005.
[19] B.G. Olmedo, S.G. Garcia, A.R. Bretones, R.G. Martin, New trends in FDTD methods in computational electrodynamics: unconditionally stable schemes, in: Recent Research Developments in Electronics, Transworld Research Network, 2005.
[20] R. Tibshirani, T. Hastie, J. Friedman, The Elements of Statistical Learning, Springer, New York, 2001.
[21] S. Campana, S. Piro, Seeing the Unseen: Geophysics and Landscape Archaeology, Taylor & Francis, London, 2008.
[22] S. Stergiopoulos, Advanced Signal Processing Handbook: Theory and Implementation for Radar, Sonar, and Medical Imaging Real Time Systems, CRC Press, Boca Raton, 2000.
[23] L. Travassos, D.A.G. Vieira, N. Ida, A. Nicolas, In the use of parametric and non-parametric algorithms for the non-destructive evaluation of concrete structures, Res. Nondestruct. Eval. 20 (2) (2009) 71–93. http://dx.doi.org/10.1080/09349840802513242.
[24] D.A.G. Vieira, R.H.C. Takahashi, V. Palade, J.A. Vasconcelos, W.M. Caminhas, The Q-norm complexity measure and the minimum gradient method: a novel approach to the machine learning structural risk minimization problem, IEEE Trans. Neural Netw. 19 (8) (2008) 1415–1430. http://dx.doi.org/10.1109/TNN.2008.2000442.
[25] I.T. Jolliffe, Principal Component Analysis, Springer, New York, 2002.
[26] F. Rosenblatt, The perceptron: a probabilistic model for information storage and organization in the brain, Psychol. Rev. 65 (6) (1958) 386–408.
[27] F. Rosenblatt, Principles of Neurodynamics, Spartan Books, Washington, 1962.
[28] M. Minsky, S. Papert, Perceptrons, MIT Press, Cambridge, 1969.
[29] S. Grossberg, Adaptive pattern classification and universal recoding: I. Parallel development and coding of neural feature detectors, Biol. Cybern. 23 (3) (1976) 121–134.
[30] T. Kohonen, Correlation matrix memories, IEEE Trans. Comput. C-21 (4) (1972) 353–359. http://dx.doi.org/10.1109/TC.1972.5008975.
[31] K. Fukushima, Neocognitron: a self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position, Biol. Cybern. 36 (4) (1980) 193–202. http://dx.doi.org/10.1007/BF00344251.
[32] J.J. Hopfield, Neural networks and physical systems with emergent collective computational abilities, Proc. Natl. Acad. Sci. 79 (1982) 2554–2558.
[33] D.E. Rumelhart, G.E. Hinton, R.J. Williams, Learning internal representations by error propagation, in: Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1, MIT Press, Cambridge, MA, 1986, pp. 318–362.
[34] J.-S.R. Jang, ANFIS: adaptive-network-based fuzzy inference system, IEEE Trans. Syst. Man Cybern. 23 (1993) 665–685.
[35] M. Hagan, M.-B. Menhaj, Training feedforward networks with the Marquardt algorithm, IEEE Trans. Neural Netw. 5 (6) (1994) 989–993. http://dx.doi.org/10.1109/72.329697.
[36] C. Guattari, F. Amico, A. Benedetto, Integrated road pavement survey using GPR and LFWD, in: 13th International Conference on Ground Penetrating Radar (GPR), 2010, pp. 1–6. http://dx.doi.org/10.1109/ICGPR.2010.5550077.
[37] K.-Y. Huang, K.-S. Fu, Decision-theoretic approach for classification of Ricker wavelets and detection of seismic anomalies, IEEE Trans. Geosci. Remote Sens. GE-25 (2) (1987) 118–123. http://dx.doi.org/10.1109/TGRS.1987.289721.

Jesús Babío Rodríguez received his B.S. in telecommunication engineering from the University of Granada, Granada, Spain, in 2009. Since 2010 he has been working as a senior consultant at an IT consultancy company. His research interests focus mainly on prediction algorithms based on signal processing, neural networks, and electromagnetic synthetic-data models.

Mario Fernández Pantoja received the B.S., M.S., and Ph.D. degrees in electrical engineering from the University of Granada, Granada, Spain, in 1996, 1998, and 2001, respectively. Between 1997 and 2001, he was an Assistant Professor at the University of Jaén, Spain. He then joined the University of Granada, where he was appointed Associate Professor in 2004. He has been a Guest Researcher at the Dipartimento di Ingegneria dell'Informazione of the University of Pisa, Italy, and with the Antenna and Electromagnetics Group at the Technical University of Denmark. His research focuses mainly on the areas of time-domain analysis of electromagnetic radiation and scattering problems, and optimization methods applied to antenna design.

X.L. Travassos received his B.Sc. and M.Sc. degrees from the Universidade Federal de Santa Catarina, Brazil, in 2002 and 2004, respectively. He finished his Ph.D. at the École Centrale de Lyon, France, in 2007. In 2007, he was a researcher at the Faculté Polytechnique de Mons, Belgium. Currently, he is with the Mobility Engineering Center at UFSC. His research interests are in the design and optimization of electromagnetic devices, electromagnetic compatibility, and antennas and propagation.


D.A.G. Vieira was born in Brazil in 1980. He received the B.Sc. and Ph.D. degrees in electrical engineering from the Universidade Federal de Minas Gerais (UFMG), Brazil, in 2003 and 2006, respectively. In 2005, he was a visiting researcher at Oxford University, UK, and in 2007 an associate researcher at Imperial College London, UK. Currently he is the Executive Director of ENACOM Handcrafted Technologies. His interests are in multiobjective optimization, machine learning, and their applications.

Rodney R. Saldanha was born in Belo Horizonte, Brazil, in 1954. He received the B.Sc. degree in electrical engineering from the Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, Brazil, in 1980, the M.Sc. degree in electrical engineering from UFMG in 1983, and the Ph.D. degree in electrical engineering from the Institut National Polytechnique de Grenoble, Grenoble, France, in 1992. Currently, he is with the Department of Electrical Engineering, UFMG. His research interests are in the design and optimization of electromagnetic devices, optimization theory, network optimization, multiobjective optimization, and stochastic and deterministic optimization algorithms.
