Image and Vision Computing 32 (2014) 568–578
Timely autonomous identification of UAV safe landing zones

Timothy Patterson a, Sally McClean a, Philip Morrow a, Gerard Parr a, Chunbo Luo b

a School of Computing and Information Engineering, University of Ulster, Cromore Road, Coleraine, BT52 1SA, Northern Ireland, United Kingdom
b School of Computing, University of the West of Scotland, Paisley, PA1 2BE, Scotland, United Kingdom
Article history: received 14 September 2012; received in revised form 14 February 2014; accepted 26 June 2014; available online 3 July 2014.

Keywords: UAV safe landing zone detection; terrain classification; fuzzy logic; UAV safety.
Abstract

For many applications, such as environmental monitoring in the aftermath of a natural disaster and mountain search-and-rescue, swarms of autonomous Unmanned Aerial Vehicles (UAVs) have the potential to provide a highly versatile and often relatively inexpensive sensing platform. Their ability to operate as an 'eye-in-the-sky', processing and relaying real-time colour imagery and other sensor readings, facilitates the removal of humans from situations which may be considered dull, dangerous or dirty. However, as with manned aircraft, they are likely to encounter errors, the most serious of which may require the UAV to land as quickly and safely as possible. Within this paper we therefore present novel work on autonomously identifying Safe Landing Zones (SLZs) which can be utilised upon the occurrence of a safety critical event. SLZs are detected and subsequently assigned a safety score either solely using multichannel aerial imagery or, whenever practicable, by fusing knowledge in the form of Ordnance Survey (OS) map data with such imagery. Given the real-time nature of the problem we subsequently model two SLZ detection options, one of which utilises knowledge, enabling the UAV to choose an optimal, viable solution. Results are presented based on colour aerial imagery captured during manned flight, demonstrating the practical potential of the methods discussed.
1. Introduction

Unmanned Aerial Vehicles (UAVs) have the potential to revolutionise current working practices for many military and civilian applications, such as assisting in search-and-rescue missions [1] and environmental monitoring [2]. The widespread availability of multiple Commercial Off-The-Shelf (COTS) airframe designs, coupled with recent advances in sensing technologies such as lightweight, high resolution colour cameras, ensures that, in comparison to manned aircraft, UAVs offer a versatile and often inexpensive solution for many such applications. A key advantage provided by UAVs is the removal of humans from situations which may be classified as dull, dangerous or dirty, for example power line inspection, aerial surveillance and monitoring of atmospheric pollution.

There are two main types of UAV control: piloted and autonomous. Piloted UAVs are controlled in real-time by a human operator, often located many miles from the deployment area. Autonomous UAVs, on the other hand, generate low-level flight control commands in response to high-level goals, for example GPS waypoints. One such project concerned with the creation and evaluation of autonomous UAVs is the Sensing Unmanned Autonomous Aerial Vehicles (SUAAVE) project [3]
which has a primary focus of utilising swarms of communicating, autonomous UAVs for a mountain search-and-rescue type scenario. Currently, there are three types of adapted Ascending Technologies rotor-based UAV platforms used within the SUAAVE project. Each platform has a flight speed of around 10 m/s and approximately 20 min of battery life. Colour cameras of varying dimensions, power requirements and capabilities are fitted to the UAVs. Additionally, all platforms contain an IEEE 802.11 wireless networking card, a GPS receiver, an Inertial Navigation System (INS) and an ATOM processing board, enabling automated, in-flight processing and analysis of captured colour aerial imagery and other sensory inputs.
1.1. Aims and motivation

As with manned aircraft, the dependability and integrity of a UAV platform can be influenced by the occurrence of various endogenous and exogenous events; for example, a change in wind strength may impact upon the UAV's remaining battery life. Due to the potential ethical and legal implications of an in-flight UAV failure, the UK Civil Aviation Authority's UAV regulations are currently similar to those specified for model aircraft [4]. As such, one regulation is that UAVs must remain within 500 m of the human operator at all times, thereby limiting the usefulness of UAVs for many real-world applications. Before these operational constraints can be relaxed there are a number of safety related technical challenges which must be addressed, including sense-and-avoid capabilities and the provision of a safe landing system.
For many safety critical situations the safest course of action may be to instruct the UAV to land as quickly and safely as possible. In particular, there are three types of error which may impact upon the safe operation of a UAV, thus necessitating an emergency landing:

1. Loss of communication link. A key system requirement of the SUAAVE project is that each swarm member must maintain a direct or multi-hop communication link with the base station. This communication link uses the IEEE 802.11 wireless networking protocol and enables each UAV to receive commands, such as return home or land immediately, from a human-in-the-loop. The loss of this link would potentially result in the UAV being uncontrollable and is therefore deemed a safety critical event.

2. Hardware/software errors. The most serious of this class of error, for example an actuator failure, may require the UAV to descend immediately in a controlled fashion and land on the ground directly below. Other less serious errors, for example a software module crashing, may require the UAV to land and perform a soft reset.

3. GPS failure. During normal flight conditions the UAVs used within the SUAAVE project navigate using coordinates obtained from a
GPS receiver. However, the signal strength and reliability of GPS can be influenced by terrain profile [5]. Whilst a method of position estimation using Ordnance Survey (OS) map data and aerial imagery has been developed [6], in the event of prolonged loss of GPS signal the UAV would be commanded to land.

Intuitively, it cannot be assumed that the ground directly beneath the UAV is suitable for landing, as it may contain humans, animals or hazards. Furthermore, due to the limited flight time of the UAVs it cannot be assumed that previously determined landing sites are attainable. It is therefore desirable to have an autonomous method of Safe Landing Zone (SLZ) detection which can be executed on colour aerial imagery obtained from onboard, passive colour cameras. Due to the flight speed of the UAVs, algorithms using such imagery, for example SLZ detection, must execute in real-time, as failure to do so will result in areas on the UAV's flight path remaining unprocessed.

In Fig. 1 the primary phases of UAV operation are illustrated in conjunction with an overview of SLZ detection. Upon bootstrap success the UAV becomes airborne and recursively cycles through the self-diagnostics and operation modes. During operation mode the SLZ detection
Fig. 1. Primary modes of UAV operation demonstrating when SLZ detection is considered as a soft or hard real-time system. Potential SLZs are identified within an input aerial image using a combination of edge detection and dilation. These potential SLZs are then assigned a safety score and depending on the mode of operation either used immediately as a SLZ or stored for future use.
algorithm is continuously executed on sensed aerial imagery. Detected SLZs with a high safety score are stored in a database, thereby providing an invaluable measure of robustness should a UAV encounter an error in a location where no SLZs exist. We consider this mode of operation to be a soft real-time system, whereby it is not regarded as a safety critical event if processing a colour image requires slightly longer than a predetermined threshold. Should diagnostics fail, the UAV will attempt to recover by, for example, navigating back within communication range of other swarm members. In the event that this recovery fails, the UAV is issued with an abort command and must determine if a suitable SLZ exists from its current location. We consider abort mode to be a hard real-time system, whereby failure to process a colour aerial image within the required time frame may have catastrophic consequences.

Within this paper we present an extension of our previous work on autonomous SLZ detection discussed in [7] and [8], resulting in a SLZ detection algorithm which has the capability of exploiting knowledge in the form of OS data in addition to utilising the multichannel nature of colour aerial imagery. There are two primary contributions. Firstly, we incorporate OS data into the potential SLZ detection phase, thus providing a measure of robustness against image noise. Furthermore, we use OS data to assist with the assignment of SLZ safety scores, fusing the OS data and multichannel aerial imagery to perform terrain classification and also to determine the Euclidean distance between a SLZ and man-made objects. Secondly, given the scenario of an abort command, the available time frame under which the hard real-time system must execute is variable and strongly influenced by the UAV's remaining battery life. Whilst incorporating knowledge into the SLZ detection algorithm provides a more reliable result, there is a time overhead incurred. We therefore model the execution time of two SLZ detection options, one of which incorporates knowledge, enabling the UAV to choose an optimal, viable method.

The remainder of this paper is structured as follows: in Section 2 a brief overview of work related to SLZ detection is given. In Section 3 we discuss autonomously identifying potential SLZs. These potential SLZs are then assigned a safety score, as presented in Section 4. In Section 5 we model the execution times of two SLZ detection options, thus enabling the UAV to choose an optimal, viable SLZ detection method. An evaluation is presented in Section 6. Finally, conclusions and proposed further work are outlined in Section 7.

2. Related work

There are two main types of SLZ detection algorithms within the literature, namely semi-autonomous and fully autonomous. Semi-autonomous approaches rely on a human operator delineating a generally suitable landing area, after which the UAV detects a specific landing site. Alternatively, fully autonomous approaches rely solely on SLZ detection algorithms on-board a UAV. For completeness, a brief overview of the relevant literature is provided in the following subsections.

2.1. Semi-autonomous SLZ detection

Many of the early semi-autonomous approaches to UAV landing, such as the work by Sharp and Shakernia [9] and Saripalli et al. [10], focused on specially constructed landing pads of known size, shape and location.
The design of many of these landing pads enabled the UAV to utilise image processing techniques in order to reliably estimate altitude and pose, thus providing low-level flight control with invaluable positional information. For example, Merz et al. [11] propose a landing pad consisting of 5 circle triplets. The pose of the UAV is determined from the projection of three circles positioned on the corner points of an equilateral triangle. This is fused with inertial measurement data using a Kalman filter to provide a more accurate estimation of UAV pose and altitude. A further example landing site design is presented by Lange et al. [12], where the pattern consists of concentric circles with increasing diameters. Each circle has a unique ratio of inner-to-outer radius which
can be used in conjunction with camera properties to estimate the UAV's height above ground level (AGL). However, extending an approach which utilises landing pads for use in a safety critical situation would necessitate a significant amount of human effort, as such pads would be required in multiple geographic locations.

More recent advances have focused on reconstructing the terrain profile of an area chosen by a human operator. Such approaches are based on the underlying assumption that planar regions are suitable landing areas. This assumption is reasonable given an operator's ability to use human intuition in collating contextual information in order to choose a potentially suitable landing area. The use of active sensing devices, for example laser scanners, provides a relatively robust and accurate method of determining terrain profile [13,14]; however, due to their high weight and power requirements such sensors are generally impracticable for small rotor-based platforms. In the semi-autonomous SLZ detection algorithms proposed by Templeton et al. [15] and Cesetti et al. [16], passive sensors, for example colour cameras, are used in conjunction with image processing techniques such as computation of optical flow to detect planar regions. In the work by Templeton et al. [15] the terrain is reconstructed using a single camera; however, in order to achieve this, multiple passes of the same area are required. In the scenario of an emergency forced landing this may not be achievable due to limited battery life. A further disadvantage is the requirement for an accurate estimation of camera movement, which may be difficult to obtain for a UAV with constantly changing velocity.

In the semi-autonomous approach by Cesetti et al. [16] a user chooses a safe landing area via a series of navigation waypoints, either from an aerial image or from the live UAV camera feed. A modification of the Scale Invariant Feature Transform (SIFT) [17] is used to extract and match natural features in the navigation waypoints to the UAV images. A hierarchical control structure is used to translate high-level commands, for example navigation waypoints, to low-level controls, for example rotor speed. The SIFT algorithm was chosen in [16] due to the difficulties in robustly identifying reliable features in an outdoor environment from a moving platform. SIFT image descriptors are invariant to scale, translation, rotation and, to some extent, illumination conditions and can therefore overcome these difficulties with relative success. However, one of the main disadvantages of this algorithm is that the SIFT feature description of an image can be computationally expensive to compute. Cesetti et al. overcome this by dividing an input image into sub-images upon which SIFT is executed only if the sub-image under consideration has a high contrast threshold value, based upon the sum and mean of the sub-image's greyscale intensity. This results in SIFT image descriptors being computed at a rate of 5 frames per second on 320 × 240 pixel images.

The computed SIFT features are utilised for two possible safe landing site identification scenarios. In the first scenario, the UAV is maintaining a steady altitude and, with a translational motion, is tasked with identifying a landing site. The optical flow between two successive images is estimated using the SIFT features and used to estimate depth structure information.
An assumption is made that areas with low variance between optical flow vectors indicate flat areas and are therefore deemed to be safe landing sites. A threshold for determining the boundaries between safe and unsafe areas is calculated during a supervised training phase. The second scenario is where the UAV is descending vertically. In this scenario, a graph of distances is calculated between the features of two successive images. A hypothesis is proposed that linear zooming over a flat surface will result in constant variance in graph distances between two images. Areas with a low variance between successive images are considered to be safe as they are assumed to be flat. Both heuristics, i.e. that low variance between optical flow vectors and that low variance between successive image features during linear zooming indicate flat areas, are validated to some extent using both simulations and real data. For the first scenario, an example
is provided using real data in which the variance of the optical flow vectors for an unsafe landing site is 302.06, as opposed to 2.78 for a safe landing site. Cesetti et al. [18] further the potential autonomy of their original work by incorporating terrain classification into the SLZ detection algorithm; however, human interaction is still required.
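To make the first of these heuristics concrete, the following minimal C++ sketch computes the variance of optical-flow magnitudes from pre-matched feature pairs and compares it against a trained threshold. This is a sketch of the idea in [16], not their implementation; the function name and threshold handling are illustrative assumptions.

```cpp
// A minimal sketch of the flatness heuristic of [16], assuming feature
// correspondences between two successive frames are already available
// (e.g. from SIFT matching). Names are illustrative.
#include <opencv2/core.hpp>
#include <cmath>
#include <cstddef>
#include <vector>

// Returns true if the variance of the flow-vector magnitudes falls below a
// threshold learned during a supervised training phase.
bool isLikelyFlat(const std::vector<cv::Point2f>& prevPts,
                  const std::vector<cv::Point2f>& currPts,
                  double trainedVarThreshold)
{
    if (prevPts.empty() || prevPts.size() != currPts.size())
        return false;

    std::vector<double> mags;
    mags.reserve(prevPts.size());
    for (std::size_t i = 0; i < prevPts.size(); ++i) {
        const cv::Point2f d = currPts[i] - prevPts[i]; // per-feature flow vector
        mags.push_back(std::hypot(d.x, d.y));
    }

    double mean = 0.0;
    for (double m : mags) mean += m;
    mean /= static_cast<double>(mags.size());

    double var = 0.0;
    for (double m : mags) var += (m - mean) * (m - mean);
    var /= static_cast<double>(mags.size());

    // Low variance between flow vectors is taken to indicate a planar region.
    return var < trainedVarThreshold;
}
```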
Fundamentally, there are two primary reasons why a semi-autonomous approach does not provide a robust solution to SLZ detection for our application of multiple autonomous UAVs conducting a mountain search-and-rescue mission. Firstly, there is a requirement for an available communication link between the UAV and a human operator, the failure of which may have been the very source of the error. Secondly, the human operator is responsible for a number of tasks, including validating images which have been flagged as potentially containing a missing person. Placing on the operator the additional burden of identifying suitable, attainable SLZs may result in other tasks being neglected, possibly negatively impacting upon the overall success of the mission.

2.2. Autonomous SLZ detection

The work contained within [19–21] presents an approach to autonomous SLZ detection using colour aerial imagery for a fixed-wing UAV. The system architecture is divided into two main stages. Firstly, potential landing sites are identified using edge detection, dilation and the identification of homogeneous areas of sufficient size. Secondly, the suitability of potential landing sites is determined based on terrain classification and slope. The terrain of potential landing sites is classified using a back propagation neural network. By using a multi-layered classification approach, selecting appropriate input features and implementing an automated subclass generation algorithm, an overall terrain type classification accuracy of 91.95% was achieved. An estimation of terrain slope was derived from digital elevation maps (DEM). The DEM used in the work by Fitzgerald et al. was a grid-based model with intervals of approximately 90 m, i.e. one grid square represented an area of 90 m × 90 m. The DEM data is projected onto the image plane and each pixel assigned a linguistic slope measure based on the maximum DEM value between 4 grid points. However, it should be noted that the work by Fitzgerald et al. is primarily for a large fixed-wing UAV which generally operates at much higher altitudes than the quadrotor UAV used within SUAAVE. In the case of a UAV determining slope at a lower altitude, a DEM of this resolution is not sufficient.

Within [19–21] results are presented indicating a 92% potential landing site detection accuracy and a 93% terrain classification accuracy. However, when considered from the perspective of SLZ detection for a small quadrotor UAV there are two main limitations to the approach described. Firstly, the identification of potentially suitable SLZs is solely based upon edge detection on a greyscale representation of the input image, which may render the method susceptible to noise such as shadows. Secondly, the textural features utilised are not invariant to scale and rotation, which may reduce terrain classification accuracy for imagery captured at multiple scales and rotations caused by frequent UAV movements such as altitude and yaw adjustments. A further limitation is that, aside from Digital Elevation Models (DEM), potentially useful external knowledge such as OS map data is not exploited. Within this paper, we therefore seek to address these limitations.
3. Potential SLZ detection

Following our previous work and that of Fitzgerald et al. [19,20], the SLZ detection algorithm is divided into two main components (Fig. 1). Firstly, potential SLZs are detected within an input colour aerial image. Secondly, these potential SLZs are assigned a numeric safety score which either confirms or discounts their suitability (Section 4).
3.1. Identification of region and object boundaries

The focus of region and object boundary identification is to detect areas in an input aerial image which are relatively homogeneous and free from obstacles, for example animals. Two sources of data are utilised for this stage, namely aerial imagery and OS data.

3.1.1. Edge detection

Whilst OS data specifies region boundaries, due to its static nature it cannot ensure that such boundaries accurately reflect the real-world area; examples include locations where houses, roads or paths are constructed after the survey date. It is therefore desirable to complement static OS data with real-time aerial imagery. Consequently, the process of edge detection is of fundamental importance to the overall success or failure of the SLZ detection algorithm. Edge detection identifies points within an image exhibiting a steep change in, typically, intensity values. This property renders it particularly useful for the problem of locating suitable SLZs as, generally speaking, such areas, for example grass regions, have relatively constant intensity values and therefore, at higher altitudes, do not contain edges. Furthermore, there are types of man-made objects, such as power pylons or wind turbines, which pose a risk to safe UAV landing yet may not be represented in OS data. Such objects typically exhibit sudden changes in greyscale intensity and are therefore identified by edge detection, with the neighbouring area subsequently discounted as a potential SLZ.

At this stage in development the Canny edge detector [22] is used. This method conducts a smoothing operation using a Gaussian filter as a prerequisite to edge detection, thereby reducing its susceptibility to noise. The width of the Gaussian filter, w, can be defined by the user, which provides a useful advantage: it is likely that at very low altitudes safe areas such as grass regions may contain a number of edges which do not represent significant region boundaries, so in a real-world implementation the width of the Gaussian filter would be related to altitude. Further user-defined parameters are a high threshold, tH, and a low threshold, tL. Due to the fundamental role of edge detection within the SLZ identification algorithm it is desirable that detected edges correspond as accurately as possible to region boundaries. Whilst it is possible to set low values for thresholds tH and tL, this results in many edges which would be considered spurious. With this in mind, an offline training phase is conducted during which a human operator chooses edge detection parameters which yield intuitive results for a series of images. For the work presented in this paper these parameters are fixed to tH = 16.75, tL = 8.5 and w = 3; in the future they will be related to UAV altitude.
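As a concrete illustration, a minimal sketch of this edge detection step is given below, assuming OpenCV is available; the parameter values are those quoted above, and the function name is an assumption.

```cpp
// A minimal sketch of the edge detection step, assuming OpenCV. The fixed
// parameter values are those reported in the text (tH = 16.75, tL = 8.5,
// w = 3); the function name is illustrative.
#include <opencv2/imgproc.hpp>

cv::Mat detectRegionBoundaries(const cv::Mat& colourImage)
{
    cv::Mat grey, smoothed, edges;
    cv::cvtColor(colourImage, grey, cv::COLOR_BGR2GRAY);

    // Gaussian smoothing of width w = 3 reduces susceptibility to noise; in a
    // real-world implementation w would be related to UAV altitude.
    cv::GaussianBlur(grey, smoothed, cv::Size(3, 3), 0);

    // Hysteresis thresholds tL = 8.5 and tH = 16.75, chosen during the
    // offline training phase.
    cv::Canny(smoothed, edges, 8.5, 16.75);
    return edges; // binary image: non-zero pixels mark detected edges
}
```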
3.1.2. OS data

For the majority of locations within the UK, invaluable knowledge regarding the landscape may be derived from OS data. This data specifies regions as a series of line, point or polygonal features in easting/northing coordinates to an accuracy of ±0.4 m [23]. Whilst such data is inherently historic and its incorporation increases the overall execution time of the algorithm, it is nevertheless an invaluable resource which can be utilised to complement captured aerial imagery in assisting with the detection of potential SLZs. Of particular interest to this stage of the algorithm is the relatively reliable specification of region boundaries such as roads, buildings and vegetation extents.

In order to ensure seamless compatibility with the image-based components of SLZ detection, vector format OS data for the area enclosed by an image's geographic bounding box is converted into raster format. Raster format data represents a real-world area as a matrix with each cell containing a value. For this component of the algorithm a matrix of dimensions equal to those of the input colour aerial image is created, with each cell containing 1 if a region boundary is present and 0 otherwise. For the purposes of identifying potential SLZs we consider relevant region boundaries specified in OS data analogous to edges detected within an image. This enables the lightweight fusion of such boundaries with the results of edge detection.
Fig. 2. (A) is an input aerial image on which edge detection is executed resulting in the edges highlighted in white displayed in (B). In (C), OS data is combined with the output of the edge detection phase. After dilating the edges in (C), regions which do not contain edges are considered as potential SLZs (D).
An example which demonstrates the potential usefulness of utilising OS data in this way is displayed in Fig. 2. In Fig. 2B the results of executing edge detection on a greyscale version of an input colour aerial image are overlaid in white. It can be seen that whilst many region boundaries are successfully detected, portions of a hedge, a road and a path are missed. Such boundaries are generally specified in OS data (Fig. 2C). The results of edge detection are therefore combined with OS data using a logical OR, ensuring that all specified and detected edges are included in the output. Whilst logical OR currently yields an acceptable output, it may be desirable in the future to combine the edges from OS data and the results of edge detection using, for example, weighted linear combination [24] or the generative model described in [25]. This would enable the reliability of both sources at detecting certain types of edges to be weighted in a principled fashion.

3.2. Dilation

The morphological process of dilation increases the width of the detected edges discussed in the previous subsection. The dilation of a binary image containing edges, E, with a structuring element, S, is denoted by E ⊕ S and defined as:

$$E \oplus S = \left\{ z \;\middle|\; \left(\hat{S}\right)_{z} \cap E \neq \emptyset \right\}, \tag{1}$$
where E and S have coordinates in 2-D integer space, Z². This equation is based on reflecting S about its origin to form Ŝ and translating the reflection by z. For the objective of identifying potentially suitable SLZs, dilation has two main purposes. Firstly, from a safety perspective, assuming detected edges correspond to region or object boundaries, the process of dilation enables a safety buffer to be placed around such boundaries. This safety buffer allows for a margin of error when performing the actual
landing. Furthermore, the process of dilation closes small gaps in region boundaries, helping to ensure consistency between detected and real-world boundaries. Secondly, an important component of potential SLZ detection is the identification of areas of sufficient size to contain the UAV. In our previous work [7] and that of Fitzgerald et al. [19,20], a second safety-related parameter was included which specifies the size of an area surrounding a candidate pixel or group of pixels¹ which must be free from edges before such candidates can be considered as potential SLZs. This was implemented by passing a mask over each pixel (or group of pixels) in the image. If the region under the mask did not contain edges then the pixel(s) were flagged as a potentially suitable SLZ. As a post-processing step, groups of adjacent flagged pixels were merged to form larger regions which correspond to potentially suitable SLZs. However, a similar result may be achieved solely using dilation, as a pixel is marked as an edge in the output dilated image if E and Ŝ overlap by ≥ 1 element [26]. We therefore determine the width, S_w, in pixels, of the square structuring element by:

$$S_w = \frac{b}{I_r} + \frac{n}{I_r} + \frac{u}{I_r}, \tag{2}$$
where b is the required buffer size in metres to be placed around each detected edge, I_r is the spatial resolution in metres of a single pixel, n is the required surrounding neighbourhood size in metres which must be free from edges before a pixel or group of pixels is considered as a potential SLZ, and u is the size of the UAV in metres. An example demonstrating the result of dilation is shown in Fig. 2, where a set of edges (Fig. 2C) is dilated, resulting in the potential SLZs displayed in Fig. 2D.

¹ Whilst at a higher altitude the size of the real-world area represented by a single pixel may be sufficient to contain the UAV, it is likely that at lower altitudes it will be necessary to analyse an area surrounding groups of pixels.
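The sketch below illustrates these steps under the assumption that OpenCV is available: the detected edges are OR-ed with the rasterised OS boundaries (Section 3.1.2), the structuring element width is computed from Eq. (2), and the combined edge image is dilated. The names, and rounding S_w up to a whole pixel, are illustrative choices.

```cpp
// A sketch of the remaining potential SLZ detection steps, assuming OpenCV.
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <algorithm>
#include <cmath>

cv::Mat dilateBoundaries(const cv::Mat& imageEdges,       // output of edge detection
                         const cv::Mat& osBoundaryRaster, // 0/1 OS boundary matrix
                         double b,   // buffer around each edge (m)
                         double n,   // required edge-free neighbourhood (m)
                         double u,   // UAV size (m)
                         double Ir)  // spatial resolution of one pixel (m)
{
    // Logical OR ensures all detected and OS-specified boundaries are kept.
    cv::Mat combined;
    cv::bitwise_or(imageEdges, osBoundaryRaster, combined);

    // Eq. (2): structuring element width in pixels (rounded up here).
    const int Sw = std::max(1, static_cast<int>(std::ceil(b / Ir + n / Ir + u / Ir)));
    const cv::Mat S = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(Sw, Sw));

    cv::Mat dilated;
    cv::dilate(combined, dilated, S);
    // Connected regions of zero-valued pixels in 'dilated' are the potential SLZs.
    return dilated;
}
```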
4. Assignment of SLZ safety score

Having identified a set of potential SLZs within the input image which are of sufficient size to contain the UAV and are additionally located within homogeneous regions, the next stage is to assign each SLZ a safety score in the range [0–1]. This safety score is a measure of a SLZ's suitability as a UAV landing area. For UAV safety critical contingency management, such as the identification of SLZs, Cox et al. [27] propose three main objectives:

1. Minimise the expectation of human casualty.
2. Minimise the expectation of external property damage.
3. Maximise the chance of survival for the aircraft and its payload.

With these priorities in mind we evaluate each potential SLZ and assign it a safety score based on terrain classification, roughness and distance to man-made objects.

4.1. Terrain classification

Intuitively, a key parameter when determining the suitability of a SLZ is its terrain classification; for example, in the majority of scenarios it may be assumed that grass is more suitable for landing in than water. At this stage in development a Maximum Likelihood Classifier (MLC) is used, which estimates the probability of a pixel represented by a multivariate feature vector x belonging to class ω_i by [28]:
$$p(\mathbf{x} \mid \omega_i) = (2\pi)^{-1/2}\,|\Sigma_i|^{-1/2} \exp\left[-\frac{1}{2}(\mathbf{x}-\mathbf{m}_i)^{t}\,\Sigma_i^{-1}\,(\mathbf{x}-\mathbf{m}_i)\right]. \tag{3}$$
The covariance matrix, Σ_i, and mean vector, m_i, for each class i are calculated during an offline training phase during which a human expert delineates examples of each class. Within the literature there are many types of features which may be used to assist in discriminating between classes. These include statistical measures derived from grey-level co-occurrence matrices [29], the use of Gabor filters [30] and colour based features such as mean RGB within a pixel's neighbourhood [31]. It is likely that in an actual implementation the input aerial image will be subject to rotation and scaling due to UAV movements. We therefore focus entirely on colour based statistical features computed within each pixel's neighbourhood, as these features are invariant to such movements, thereby ensuring that training data accurately reflects the spectral appearance of classes regardless of the UAV's pose.

In order to determine an appropriate set of features a manually labelled dataset was created using aerial imagery captured during manned flight. A total of 490 samples were created for 9 classes. The terrain classes subsequently used throughout this work are: ω1 = Gorse, ω2 = Grass, ω3 = Heather, ω4 = Path, ω5 = Scrubland, ω6 = Stone, ω7 = Tarmac, ω8 = Trees 1 (coniferous) and ω9 = Trees 2 (deciduous). A series of tests were conducted during which 70% of the labelled dataset was used for training and 30% for testing. Fifty iterations were conducted for each test and the overall classification results were averaged to form a mean classification accuracy for each group of features. It was subsequently decided to use mean RGB and mean HSV computed within a 3 × 3 pixel window. For the labelled dataset this provided an overall
accuracy of 85.6%, in comparison to 78.6% when using mean RGB alone and 79.1% when using mean HSV alone. A key advantage provided by the probabilistic nature of the MLC is the ability to leverage prior knowledge, p(ω_i), in a principled fashion using Bayes' rule:

$$p(\omega_i \mid \mathbf{x}) = \frac{p(\mathbf{x} \mid \omega_i)\,p(\omega_i)}{p(\mathbf{x})}. \tag{4}$$
For the purposes of SLZ terrain classification such prior knowledge regarding an area's terrain may be inferred from feature codes specified within OS data. Of particular relevance to this stage of the algorithm are the feature codes for roads, paths and vegetation. To enable compatibility between OS feature codes and the probabilities returned by the MLC, an offline knowledge solicitation phase is required during which a human expert quantifies the prior probability of a class, p(ω_i), given a specific feature code. A list of the feature codes used for the terrain classification component and the associated prior probabilities is presented in Table 1.

There are two significant advantages provided by incorporating expert knowledge in this way. Firstly, whilst OS data is generally a relatively reliable indicator as to the type of terrain in an area, it is nevertheless historic in nature. As such, new features, for example roads or paths, may be constructed over previously green field areas. Furthermore, terrain changes caused by precipitation may result in previously suitable landing areas becoming unsafe, for example waterlogged fields; conversely, changes induced by evaporation may result in previously unsafe landing areas such as lakes or streams becoming suitable. By using prior probabilities the likelihood of such changes may be incorporated. Secondly, an OS feature code may be imprecise. For example, the OS feature code '1228' specifies an 'extent of vegetation', which may refer to classes ranging from gorse and grass to trees. Thus higher prior probabilities may be assigned across a number of classes given an imprecise feature code. The output probabilities from Eq. (3) are fused with the relevant priors specified in Table 1 using Bayes' rule, and each pixel is subsequently assigned membership of a class using the decision rule:

$$\mathbf{x} \in \omega_i \quad \text{if} \quad p(\omega_i \mid \mathbf{x}) > p(\omega_j \mid \mathbf{x}) \;\; \text{for all } j \neq i, \tag{5}$$
resulting in a set of classified pixels for each SLZ. A SLZ is subsequently assigned membership of the class to which the majority of its constituent pixels belong; a sketch of this fused classification step is given after Tables 1 and 2. When knowledge in the form of OS data was fused with the probabilities returned by the colour based MLC, the terrain classification accuracy for the labelled dataset increased to 88.1%.

Intuitively, different terrain types have varying levels of suitability as a SLZ. Therefore each terrain type is assigned a suitability measure in the range [0–1] by a human expert (Table 2). These values are allocated bearing in mind the overall objectives outlined at the beginning of Section 4. Highest priority is given to ensuring that the expectation of human casualty is minimised, resulting in the terrain types road and path receiving low suitability values. The suitability value associated with a SLZ's terrain classification is input to the terrain suitability membership function (Fig. 3), which in turn determines an output classification of 'unsuitable', 'risky' or 'suitable' along with an associated degree of membership, subsequently influencing the overall safety score which a SLZ receives.
Table 1
OS feature codes with assigned prior probabilities of class membership.

OS feature code   Gorse   Grass   Heather   Path   Scrubland   Stone   Tarmac   Trees 1   Trees 2
Road              0.07    0.07    0.07      0.07   0.07        0.09    0.5      0.03      0.03
Track             0.09    0.09    0.09      0.5    0.04        0.09    0.04     0.03      0.03
Vegetation        0.14    0.1     0.14      0.03   0.14        0.12    0.03     0.15      0.15
Parcel print      0.175   0.25    0.06      0.06   0.175       0.1     0.06     0.06      0.06
Table 2
Terrain types with assigned suitability measure.

Class                 Gorse   Grass   Heather   Path   Scrubland   Stone   Tarmac   Trees 1   Trees 2
Suitability measure   0.4     1       0.4       0.1    0.7         0.5     0.1      0.3       0.3
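The following minimal sketch illustrates the classification decision of Eqs. (3)–(5), assuming the class means and inverse covariances were computed during the offline training phase and that a prior vector has been looked up from Table 1 for the relevant OS feature code. All structure and function names are illustrative.

```cpp
// A sketch of the fused class decision: the Gaussian likelihood of each class
// (Eq. (3)) is scaled by an OS-derived prior (Table 1, Eq. (4)) and the pixel
// is assigned to the class with the highest posterior (Eq. (5)).
#include <cmath>
#include <cstddef>
#include <vector>

struct ClassModel {
    std::vector<double> mean;                // m_i, one entry per feature
    std::vector<std::vector<double>> covInv; // precomputed inverse of Sigma_i
    double covDet;                           // |Sigma_i|
};

// Eq. (3): multivariate Gaussian likelihood p(x | omega_i).
double likelihood(const std::vector<double>& x, const ClassModel& c)
{
    const double kPi = 3.141592653589793;
    const std::size_t d = x.size();

    double mahal = 0.0; // (x - m_i)^t Sigma_i^{-1} (x - m_i)
    for (std::size_t i = 0; i < d; ++i)
        for (std::size_t j = 0; j < d; ++j)
            mahal += (x[i] - c.mean[i]) * c.covInv[i][j] * (x[j] - c.mean[j]);

    return std::pow(2.0 * kPi, -0.5) * std::pow(c.covDet, -0.5)
           * std::exp(-0.5 * mahal);
}

// Eqs. (4)-(5): fuse the OS-derived priors and return the index of the most
// probable class; p(x) is omitted as it is constant across classes.
std::size_t classifyPixel(const std::vector<double>& x,
                          const std::vector<ClassModel>& classes,
                          const std::vector<double>& osPriors) // e.g. a Table 1 row
{
    std::size_t best = 0;
    double bestScore = -1.0;
    for (std::size_t i = 0; i < classes.size(); ++i) {
        const double score = likelihood(x, classes[i]) * osPriors[i];
        if (score > bestScore) { bestScore = score; best = i; }
    }
    return best;
}
```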
4.2. Distance from man-made objects

In order to ensure that the expectation of human casualty and damage to property is minimised, it is, heuristically speaking, undesirable to land in close proximity to man-made structures such as houses, roads or schools. We therefore utilise OS data to assist in calculating the distance from a SLZ to nearby man-made structures. There are two main advantages provided by exploiting OS data for this task in comparison to relying solely on image processing based techniques. Firstly, an image may contain noise such as fog or shadows, obscuring potentially important details such as corners. Secondly, it is possible that a man-made structure may be located 'off-frame', resulting in an erroneous assumption that a SLZ is located in an area free from such structures. At this stage we focus solely on static man-made structures such as roads, paths and buildings; however, as part of future work it may be desirable to incorporate the ability to detect moving objects such as cars. The Euclidean distance is computed between a SLZ's centroid position and each point of a man-made structure. The minimum distance in metres is subsequently used as input to the man-made structure distance membership function to determine a fuzzy classification of 'near', 'medium' or 'far' (Fig. 4).

4.3. Roughness

For the purposes of preserving the UAV and its payload it is undesirable to choose areas which may be considered rough, for example stony areas. With natural textures such areas generally exhibit a high variance of greyscale values. Following [32], we therefore use the greyscale standard deviation of a SLZ's member pixels as a simple, albeit relatively effective, approach to determining the roughness of a SLZ. An offline training phase is conducted during which a human expert specifies examples of 'very rough', 'rough' and 'smooth' textures, from which ranges for class membership are computed (Fig. 5). Both attribute computations are illustrated in the sketch below.
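A minimal sketch of the two computations, assuming OpenCV types; names are illustrative.

```cpp
// Minimum Euclidean distance from a SLZ centroid to OS man-made structure
// points (Section 4.2), and greyscale standard deviation as a roughness
// proxy (Section 4.3).
#include <opencv2/core.hpp>
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

// Minimum distance in metres from the SLZ centroid to any structure point.
double minStructureDistance(const cv::Point2d& slzCentroid,
                            const std::vector<cv::Point2d>& structurePoints)
{
    double best = std::numeric_limits<double>::infinity();
    for (const auto& p : structurePoints)
        best = std::min(best, std::hypot(p.x - slzCentroid.x,
                                         p.y - slzCentroid.y));
    return best; // input to the 'near'/'medium'/'far' membership function
}

// Greyscale standard deviation over a SLZ's member pixels.
double roughness(const cv::Mat& grey, const std::vector<cv::Point>& slzPixels)
{
    if (slzPixels.empty()) return 0.0;

    double mean = 0.0;
    for (const auto& p : slzPixels) mean += grey.at<uchar>(p);
    mean /= static_cast<double>(slzPixels.size());

    double var = 0.0;
    for (const auto& p : slzPixels) {
        const double d = grey.at<uchar>(p) - mean;
        var += d * d;
    }
    return std::sqrt(var / static_cast<double>(slzPixels.size()));
}
```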
4.4. Combination of attribute values

In order to calculate an overall safety score for a SLZ it is necessary to combine the attribute values of terrain suitability, roughness and distance to man-made features. Fuzzy logic is used for this stage primarily as it enables human experts to linguistically describe rules in an intuitive manner, for example: if terrain suitability = suitable and roughness = smooth and distance to man-made structures = far, then a SLZ is safe. The extent to which a SLZ is considered safe, i.e. the input safety score (Fig. 6), is determined by the values of the three input parameters. A fuzzy linguistic rule base containing such rules is created offline. When the fuzzy attributes of a potential SLZ are input into the fuzzy system the relevant rules are fired, aggregated and then defuzzified to give a crisp numeric output, which is the SLZ safety weighting. Depending on the mode of operation (Fig. 1), SLZs with a high safety weighting are either stored for future use or used in conjunction with the decision control process described in [8].
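The sketch below illustrates the rule-combination idea. Note that it substitutes a weighted-average (Sugeno-style) shortcut for the Mamdani-style aggregation and defuzzification used in the actual system, and every membership breakpoint and rule output is an illustrative placeholder.

```cpp
// A heavily simplified sketch of fuzzy rule combination for the SLZ safety
// score; all numeric values are hypothetical, not the authors' values.
#include <algorithm>
#include <vector>

// Triangular membership function with peak at b and support (a, c).
double tri(double x, double a, double b, double c)
{
    if (x <= a || x >= c) return 0.0;
    return x < b ? (x - a) / (b - a) : (c - x) / (c - b);
}

double safetyScore(double terrainSuit, double roughStd, double distMetres)
{
    // Degrees of membership for a handful of linguistic terms (hypothetical).
    const double suitable   = tri(terrainSuit, 0.5, 1.0, 1.5);
    const double unsuitable = tri(terrainSuit, -0.5, 0.0, 0.5);
    const double smooth     = tri(roughStd, -10.0, 0.0, 25.0);
    const double nearBy     = tri(distMetres, -50.0, 0.0, 60.0);
    const double far        = tri(distMetres, 50.0, 150.0, 250.0);

    // Each rule fires with strength min(antecedents) and proposes a crisp score.
    struct Rule { double strength, output; };
    const std::vector<Rule> rules = {
        { std::min({suitable, smooth, far}), 1.0 },  // safe
        { std::min(suitable, nearBy),        0.3 },  // risky: close to structures
        { unsuitable,                        0.0 },  // unsafe terrain
    };

    double num = 0.0, den = 0.0;
    for (const Rule& r : rules) { num += r.strength * r.output; den += r.strength; }
    return den > 0.0 ? num / den : 0.0; // crisp safety score in [0, 1]
}
```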
Fig. 3. The membership functions of terrain suitability which determine the extent to which an input terrain is appropriate for landing in.
Fig. 4. The membership functions which determine if an input distance in metres from a SLZ to a man-made structure is considered as ‘near’, ‘medium’ or ‘far’.
5. Modelling options

Whilst it may be desirable to always incorporate knowledge into the SLZ detection algorithm, there may be occasions where such an inclusion is impracticable, as the increased execution time which it requires may be greater than the UAV's remaining battery life. Within this section we model two SLZ detection options, one of which incorporates knowledge, and illustrate how these models may assist with choosing an optimal, viable solution. We assume a log-normal distribution, which is commonly used when modelling duration data such as execution times [33]. This assumption was made as the profile of the log-normal Probability Density Function (P.D.F.) fitted the histogram of observed times. These times, and subsequently the parameters for each model, μ and σ, were obtained by executing a C++ implementation of the SLZ detection algorithm on a dataset of 1024 colour aerial images captured during manned flight. These aerial images are of the Antrim Plateau region in Northern Ireland and primarily contain mountainous and agricultural terrain. An Ascending Technologies Pelican UAV [34] equipped with an ATOM processing board containing a 1.6 GHz processor and 1 GB RAM was used to obtain the timings. In Table 3 the measured μ and σ parameters are displayed for the major components of each option. It is likely that with a state-of-the-art onboard computer such as the Ascending Technologies Mastermind [35], and with further optimisation and refinement of code, it would be possible to process multiple frames per second. However, at this stage in development the presented timings provide a useful indication as to the performance of the SLZ detection algorithm.

The time required to identify boundaries and perform dilation is relatively constant across all images for each option, with a correspondingly low σ value. It can be seen that there is an additional overhead incurred by incorporating OS data for this phase. This overhead is primarily caused by the 'selector' operation, which reads relevant OS data from a large shapefile, and by the process of converting OS vector format data to raster format. As the process of computing SLZ attributes is conducted for each SLZ, the required execution time is directly related to the number and size of detected SLZs within a colour image. Generally, more SLZs were detected when knowledge was not incorporated, as there were fewer region and object boundaries, thereby increasing the number of potential SLZs. Within the dataset a total of 8166 SLZs were detected when knowledge was included, resulting in a mean execution time of 0.06 s per SLZ when assigning a safety score. In comparison, when knowledge was not used a total of 9388 SLZs were detected, with a mean execution time of 0.03 s per SLZ to assign a safety score. Thus the difference in execution time between options, when considered from the perspective of a single SLZ, is greater than indicated in Table 3. It should further be noted that when knowledge is not incorporated into the SLZ detection algorithm the distance from a SLZ to man-made objects is not computed.
A key process in the overall SLZ detection algorithm is determining whether it is viable to incorporate knowledge (Fig. 1). Bearing in mind the overall safety objectives outlined at the beginning of Section 4, we consider incorporating knowledge to provide a more robust and reliable option for SLZ detection and therefore consider it to be optimal in terms of ensuring the safety objectives are met. In order to determine the viability of incorporating knowledge we use the parameters obtained from the experiments to construct a model of the execution time of each option (Fig. 7), which can be used in conjunction with an execution threshold. If the UAV is in normal operation mode this threshold is a soft real-time constraint representing a maximum desired execution time. This is particularly useful if the previous execution of the SLZ detection algorithm required longer than expected, resulting in the formation of a queue of unprocessed images. Upon receiving an abort command the threshold is based upon the UAV's remaining battery life and represents a hard real-time constraint.

Given the models and a required threshold, the probability of an option completing execution can be calculated using the Cumulative Distribution Function, thus enabling the UAV to choose an optimal, viable solution. This is illustrated in Fig. 7, where a threshold of 5 s is input. Using the models it can be computed that option 1 has a probability of 0.492 and option 2 a probability of 0.66 of completing execution before the threshold is breached. In an implementation of this approach to
Fig. 5. The membership functions determining if a SLZ is considered ‘very rough’, ‘rough’ or ‘smooth’.
Fig. 6. The membership functions which determine the overall safety score which a SLZ receives.
decision making in a real-time system it is likely that an acceptable lower limit would be set. Thus, for example, if option 1 had a probability of completion of more than 0.6 it would be chosen; in this example, however, the UAV would subsequently choose to execute option 2. Whilst a constraint of 5 s is used to illustrate the approach to decision making, it should be noted that in a real-world implementation an abort command would be issued when the UAV's remaining battery life falls below a predetermined threshold, which would be in the range of 1–2 min. A sketch of this viability test is given below.
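The log-space parameters in the sketch are placeholders, not the measured values behind the 0.492 and 0.66 probabilities quoted above.

```cpp
// A sketch of the viability test: given a log-normal execution-time model for
// each option, compute P(T <= threshold) via the log-normal CDF and prefer
// the knowledge-based option when its completion probability exceeds an
// acceptance limit.
#include <cmath>

// Log-normal CDF: P(T <= t) = 0.5 * erfc(-(ln t - mu) / (sigma * sqrt(2))).
double lognormalCdf(double t, double mu, double sigma)
{
    return 0.5 * std::erfc(-(std::log(t) - mu) / (sigma * std::sqrt(2.0)));
}

// Returns 1 to run option 1 (incl. knowledge), 2 otherwise.
int chooseOption(double thresholdSeconds)
{
    const double mu1 = 0.4, sigma1 = 0.9; // hypothetical log-space parameters
    const double mu2 = 0.1, sigma2 = 0.8; // for options 1 and 2 respectively
    const double acceptLimit = 0.6;       // minimum acceptable completion prob.

    if (lognormalCdf(thresholdSeconds, mu1, sigma1) >= acceptLimit)
        return 1; // the optimal (knowledge-based) option is viable
    return 2;     // otherwise fall back to the faster option
}
```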
6. Experimental results/evaluation

For the purposes of evaluating SLZ detection accuracy, identified SLZs within a subset of 100 aerial images were manually validated by a human expert using a Matlab-based GUI. To collate the results we consider a true positive (TP) to be a correctly identified SLZ with a high safety weighting, and a true negative (TN) to be a correctly identified SLZ which is subsequently assigned a low safety weighting. A false positive (FP) is an area which is incorrectly identified as a SLZ, or an unsafe area which is assigned a high safety weighting. A false negative (FN) is a suitable SLZ which is assigned a low safety weighting. An overview of the SLZ detection results is displayed in Table 4. Due to the boundaries specified in OS data, fewer potential SLZs were identified when knowledge was included within the algorithm. However, overall it was found that incorporating knowledge provided a more reliable method of SLZ detection, with 94.7% of potentially suitable SLZs assigned a correct safety score. In contrast, when knowledge was not included there was a significant number of false positives (10.1%), with 86.3% of potential SLZs being assigned a correct safety score.

Table 3
Timings in seconds per image for each SLZ detection option.

                                  Option 1 incl. knowledge    Option 2 no knowledge
Task                              μ         σ                 μ        σ
Potential SLZ detection
  Boundary identification         0.15      0.1               0.093    0.009
  Dilation                        0.2       0.05              0.144    0.03
Compute SLZ attributes
  Terrain classification          0.81      1.01              0.656    0.82
  Roughness                       0.032     0.03              0.034    0.03
  Distance to man-made objects    0.0006    0.0001            NA       NA
Misc. functions                   0.435     0.03              0.325    0.026
Total time                        1.63      1.04              1.254    0.857
A fundamental component in the computation of a SLZ's overall safety score is its terrain classification, with each terrain type assigned a suitability measure (Section 4.1). In both cases all of the false negatives were caused by misclassified terrain. When knowledge was fused into the terrain classification component a slightly greater number of SLZs had misclassified terrain. This difference was not significant but was somewhat unexpected, and is most likely caused by priors not accurately reflecting the existent terrain types within small portions of the dataset. Additionally, in both cases an overarching cause of terrain misclassification is likely to be low separability within the chosen feature space for certain classes; examples include grass, which may appear similar to deciduous trees, and scrubland, which can appear similar to paths. As paths and trees were both assigned low suitability measures, SLZs which were erroneously classified as these terrain types were subsequently assigned a low safety score.

A key advantage provided by incorporating knowledge is the ability to compute the minimum distance between a SLZ and nearby man-made structures such as roads or paths. SLZs which are very close to such structures are assigned a low safety weighting as they are deemed to present a risk to humans. This resulted in substantially more true negatives when knowledge was incorporated into the SLZ detection algorithm. In comparison, when OS data was not used, 100 SLZs (9.6%) were erroneously assigned a high safety weighting, i.e. false positives, despite their close proximity to such structures. Furthermore, 2 SLZs spanned a man-made path which did not exhibit steep changes in greyscale intensity at its borders and therefore remained unidentified by the edge detection stage. A large shadow region cast over a road by trees also resulted in 2 areas being considered as potential SLZs; however, these were assigned a low safety score as they were classified as coniferous trees. We consider false positives to be the most serious type of error as they may result in wholly unsuitable landing areas being assigned high safety scores and therefore deemed safe. This may subsequently have potentially catastrophic consequences for humans, property and the UAV and its payload.

7. Conclusions/future work

Within this paper we have presented an autonomous approach to SLZ detection which utilises colour aerial imagery and additionally has the potential to incorporate external knowledge in the form of OS data. We propose incorporating knowledge into the potential SLZ detection phase by combining the boundaries specified in OS data with the results of edge detection, thus providing a measure of robustness against image noise. We further use knowledge in the assignment of safety
Fig. 7. P.D.F. of execution time for each option along with example execution threshold.
scores, fusing OS data with the probabilities of class membership returned by a MLC using Bayes' rule. Additionally, the distance between a SLZ and nearby man-made structures is computed, enabling SLZs which are close to such structures to be assigned a low safety score, thereby helping to minimise the expectation of human casualty. Whilst the boundaries of many man-made features exhibit sudden changes in greyscale intensity and are thus identified during the edge detection stage, it may be useful to consider additional man-made classes that are likely to obstruct a UAV's flight path. One such example is power pylons, which may not be explicitly represented in OS data. Autonomously identifying such structures in 3D space would require see-and-avoid capabilities such as those described in [36] and is therefore likely to form an important part of an overall UAV safety system.

A key potential improvement to the SLZ detection algorithm may be to consider sequences of images, i.e. video, as opposed to considering images in isolation. When used in conjunction with an estimate of the UAV's motion and feature descriptors such as SIFT, such sequences of imagery would enable moving objects such as animals to be detected, thus forming an important part of a real-world implementation. A further area of future work may be to incorporate additional knowledge in the form of Digital Terrain Models (DTMs), thus enabling the slope of a SLZ to be taken into consideration.

Within this paper we show that by using knowledge the accuracy of potential SLZ detection and the subsequent assignment of safety scores can be improved. Overall, 94.7% of detected SLZs were assigned a correct safety score when knowledge was incorporated, in comparison to 86.3% when knowledge was omitted. Whilst incorporating knowledge into the algorithm provides a more reliable method of SLZ detection, there is an additional computational overhead incurred, resulting in increased execution time. Therefore, due to the real-time nature of the problem of SLZ detection and the potential hard constraints imposed by remaining battery life, it may not always be practicable to include knowledge. We therefore model the execution times of two SLZ detection options and demonstrate how they could be used to assist a UAV in autonomously choosing an optimal, viable solution.

Results are presented based on colour aerial imagery captured during manned flight of the Antrim Plateau region in Northern Ireland.
Table 4
Validated results based on SLZs detected within 100 images.

                  TP          TN            FP            FN          Total SLZs
Incl. knowledge   688 (76%)   169 (18.7%)   0             47 (5.3%)   904
No knowledge      807 (78%)   86 (8.3%)     104 (10.1%)   37 (3.6%)   1034
A main advantage of utilising such imagery at this stage is that it is geo-registered, thereby enabling OS data to be readily incorporated into the algorithm. Whilst this imagery has enabled us to perform a proof-of-concept implementation and evaluation of the SLZ detection algorithm, an immediate extension of this work is to implement the approach using real, and thus potentially noisy, UAV aerial imagery. This is likely to present a number of technical challenges; however, it will enable us to further refine and validate the approach, thereby helping to ensure the real-world usefulness of the SLZ detection algorithm. It is hoped that this will ultimately increase the safety of autonomous UAV systems, thus expediting their integration into civilian airspace.

Acknowledgements

This research was supported by a Department for Employment and Learning studentship and through the Engineering and Physical Sciences Research Council (EPSRC) funded Sensing Unmanned Autonomous Aerial Vehicles (SUAAVE) project under grants EP/F064217/1, EP/F064179/1 and EP/F06358X/1.

References

[1] H. Almurib, Control and path planning of quadrotor aerial vehicles for search and rescue, no. 2, Tokyo, Japan, 2011, pp. 700–705.
[2] M. Bryson, A. Reid, F. Ramos, S. Sukkarieh, Airborne vision-based mapping and classification of large farmland environments, J. Field Robot. 27 (5) (2010) 632–655.
[3] S. Cameron, G. Parr, R. Nardi, S. Hailes, A. Symington, S. Julier, L. Teacy, S. McClean, G. Mcphillips, S. Waharte, N. Trigoni, M. Ahmed, SUAAVE: combining aerial robots and wireless networking, Unmanned Air Vehicle Systems, Bristol, 2010, pp. 7–20.
[4] D.R. Haddon, C.J. Whittaker, UK-CAA Policy for Light UAV Systems, UK Civil Aviation Authority, London, 2004.
[5] W.Y. Ochieng, K. Sauer, D. Walsh, G. Brodin, S. Griffin, M. Denney, GPS integrity and potential impact on aviation safety, J. Navig. 56 (1) (2003) 51–65.
[6] T. Patterson, S. McClean, P. Morrow, G. Parr, Utilizing geographic information system data for unmanned aerial vehicle position estimation, 2011 Canadian Conference on Computer and Robot Vision, IEEE, St. John's, Newfoundland, 2011, pp. 8–15.
[7] T. Patterson, S. McClean, P. Morrow, G. Parr, Towards autonomous safe landing site identification from colour aerial images, 2010 Irish Machine Vision and Image Processing Conference, Cambridge Scholars Publishing, Ireland, 2010, pp. 291–304.
[8] T. Patterson, S. McClean, G. Parr, P. Morrow, L. Teacy, J. Nie, Integration of terrain image sensing with UAV safety management protocols, The Second International ICST Conference on Sensor Systems and Software (S-Cube 2010), Springer, Miami, Florida, USA, 2010, pp. 36–51.
[9] C. Sharp, O. Shakernia, A vision system for landing an unmanned aerial vehicle, International Conference on Robotics and Automation, IEEE, Seoul, Korea, 2001, pp. 1720–1727.
[10] S. Saripalli, J. Montgomery, G. Sukhatme, Vision-based autonomous landing of an unmanned aerial vehicle, IEEE International Conference on Robotics and Automation (ICRA '02), vol. 3, 2002, pp. 371–380.
[11] T. Merz, S. Duranti, G. Conte, Autonomous landing of an unmanned helicopter based on vision and inertial sensing, 9th International Symposium on Experimental Robotics, Springer, Singapore, 2004, pp. 57–65.
[12] S. Lange, N. Sunderhauf, P. Protzel, A vision based onboard approach for landing and position control of an autonomous multirotor UAV in GPS-denied environments, Advanced Robotics, 2009, IEEE, Munich, Germany, 2009, pp. 1–6.
[13] K.W. Sevcik, N. Kuntz, P.Y. Oh, Exploring the effect of obscurants on safe landing zone identification, J. Intell. Robot. Syst. 57 (1–4) (2009) 281–295.
[14] S. Scherer, L. Chamberlain, S. Singh, First results in autonomous landing and obstacle avoidance by a full-scale helicopter, IEEE International Conference on Robotics and Automation, IEEE, St. Paul, Minnesota, USA, 2012.
[15] T. Templeton, D.H. Shim, C. Geyer, S. Sastry, Autonomous vision-based landing and terrain mapping using an MPC-controlled unmanned rotorcraft, Proceedings of the IEEE International Conference on Robotics and Automation, 2007, pp. 1349–1356.
[16] A. Cesetti, E. Frontoni, A. Mancini, P. Zingaretti, S. Longhi, A vision-based guidance system for UAV navigation and safe landing using natural landmarks, J. Intell. Robot. Syst. 57 (1–4) (2010) 233–257.
[17] D. Lowe, Distinctive image features from scale-invariant keypoints, Int. J. Comput. Vis. 60 (2) (2004) 91–110.
[18] A. Cesetti, E. Frontoni, A. Mancini, P. Zingaretti, Autonomous safe landing of a vision guided helicopter, IEEE/ASME International Conference on Mechatronics and Embedded Systems and Applications (MESA), IEEE, Qingdao, China, 2010, pp. 125–130.
[19] D. Fitzgerald, R. Walker, D. Campbell, A computationally intelligent framework for UAV forced landings, IASTED Computational Intelligence Conference, Calgary, Canada, 2005, pp. 187–192.
[20] D. Fitzgerald, R. Walker, D. Campbell, A vision based emergency forced landing system for an autonomous UAV, Australian International Aerospace Congress, Melbourne, Australia, 2005, pp. 60–85.
[21] L. Mejias, D. Fitzgerald, P. Eng, L. Xi, Forced landing technologies for unmanned aerial vehicles: towards safer operations, in: M.T. Lam (Ed.), Aerial Vehicles, 1st edition, In-Tech, Kirchengasse, Austria, 2009, Ch. 21, pp. 415–442.
[22] J. Canny, A computational approach to edge detection, IEEE Trans. Pattern Anal. Mach. Intell. 8 (6) (1986) 679–698.
[23] Ordnance Survey Northern Ireland, OSNI Large-Scale Technical Specification, http://www.osni.gov.uk/large-scale_spec.pdf (accessed September 2012).
[24] I. Oruç, L.T. Maloney, M.S. Landy, Weighted linear cue combination with possibly correlated error, Vision Res. 43 (23) (2003) 2451–2468.
[25] C. Zhou, B.W. Mel, Cue combination and color edge detection in natural scenes, J. Vis. 8 (2008) 1–25.
[26] R.C. Gonzalez, R.E. Woods, Digital Image Processing, 3rd edition, Pearson Education, New Jersey, 2008.
[27] T.H. Cox, C.J. Nagy, M.A. Skoog, I.A. Somers, Civil UAV Capability Assessment, Tech. Rep., NASA, December 2004.
[28] J. Richards, J. Xiuping, Remote Sensing Digital Image Analysis, 3rd edition, Springer, New York, 1999.
[29] R.M. Haralick, Statistical and structural approaches to texture, Proc. IEEE 67 (5) (1979) 786–804.
[30] L. Chen, G. Lu, D. Zhang, Effects of different Gabor filter parameters on image retrieval by texture, Multimedia Modelling Conference, Brisbane, Australia, 2004, pp. 273–278.
[31] B. Majidi, A. Bab-Hadiashar, Real time aerial natural image interpretation for autonomous ranger drone navigation, Proceedings Digital Image Computing: Techniques and Applications, IEEE, Australia, 2005, pp. 448–453.
[32] A. Howard, H. Seraji, Multi-sensor terrain classification for safe spacecraft landing, IEEE Trans. Aerosp. Electron. Syst. 40 (2004) 1122–1131.
[33] H. Pacheco, J. Pino, J. Santana, P. Ulloa, J. Pezoa, Classifying execution times in parallel computing systems: a classical hypothesis testing approach, in: C. San Martin, S.-W. Kim (Eds.), Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, Lecture Notes in Computer Science, vol. 7042, Springer, Berlin, Heidelberg, 2011, pp. 709–717.
[34] Ascending Technologies, AscTec Pelican, http://www.asctec.de/uav-applications/research/products/asctec-pelican/ (accessed January 2013).
[35] Ascending Technologies, AscTec Mastermind, http://www.asctec.de/uav-applications/research/products/asctec-mastermind/ (accessed July 2013).
[36] T. Zsedrovits, A. Zarandy, B. Vanek, T. Peni, J. Bokor, T. Roska, Visual detection and implementation aspects of a UAV see and avoid system, 2011 20th European Conference on Circuit Theory and Design (ECCTD), IEEE, 2011, pp. 472–475.
[24] I. Oruç, L.T. Maloney, M.S. Landy, Weighted linear cue combination with possibly correlated error, Vision Res. 43 (23) (2003) 2451–2468. [25] C. Zhou, B.W. Mel, Cue combination and color edge detection in natural scenes, J. Vis. 8 (2008) 1–25. [26] R.C. Gonzalez, R.E. Woods, Digital Image Processing, 3rd edition Pearson Education, New Jersey, 2008. [27] T.H. Cox, C.J. Nagy, M.A. Skoog, I.A. Somers, Civil UAV Capability Assessment, Tech. Rep, NASA, December 2004. [28] J. Richards, J. Xiuping, Remote Sensing Digital Image Analysis, 3rd edition Springer, New York, 1999. [29] R.M. Haralick, Statistical and structural approaches to texture, Proc. IEEE 67 (5) (1979) 786–804. [30] L. Chen, G. Lu, D. Zhang, Effects of different gabor filter parameters on image retrieval by texture, Multimedia Modelling Conference, Brisbane, Australia, 2004, pp. 273–278. [31] B. Majidi, A. Bab-Hadiashar, Real time aerial natural image interpretation for autonomous ranger drone navigation, Proceedings Digital Image Computing: Technqiues and Applications, IEEE, Australia, 2005, pp. 448–453. [32] A. Howard, H. Seraji, Multi-sensor terrain classification for safe spacecraft landing, IEEE Transactions on Aerospace and Electronic Systems, vol. 40, 2004, pp. 1122–1131. [33] H. Pacheco, J. Pino, J. Santana, P. Ulloa, J. Pezoa, Classifying execution times in parallel computing systems: a classical hypothesis testing approach, in: C. San Martin, S.-W. Kim (Eds.), Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, Vol. 7042 of Lecture Notes in Computer ScienceSpringer Berlin, Heidelberg, 2011, pp. 709–717. [34] Ascending Technologies, AscTec Pelican, (Accessed January 2013) http://www. asctec.de/uav-applications/research/products/asctec-pelican/. [35] Ascending Technologies, AscTec Mastermind, http://www.asctec.de/uavapplications/research/products/asctec-mastermind/ Accessed July 2013. [36] T. Zsedrovits, A. Zarandy, B. Vanek, T. Peni, J. Bokor, T. Roska, Visual detection and implementation aspects of a UAV see and avoid system, 2011 20th European Conference on Circuit Theory and Design (ECCTD), IEEE, 2011, pp. 472–475.