Optimization Model of Routing-and-Searching for Unmanned Aerial Vehicles in Special Target-Detection Operations


7th IFAC Conference on Manufacturing Modelling, Management, and Control, International Federation of Automatic Control, June 19-21, 2013, Saint Petersburg, Russia. DOI: 10.3182/20130619-3-RU-3018.00354

Optimization Model of Routing-and-Searching for Unmanned Aerial Vehicles in Special Target-Detection Operations

B. Kriheli*, E. Levner**, A. Spivak**

* Ashkelon Academic College, Ashkelon 78109, Israel (e-mail: [email protected])
** Holon Institute of Technology, Holon 58102, Israel (e-mail: {levner, spivak}@hit.ac.il)

Abstract: We model the problem of routing and search for a hidden target in a directed graph in which nodes represent possible target locations and arcs correspond to possible moves of an unmanned aerial vehicle (UAV) between them. Prior probabilities of the target's possible locations are known. The inspections are imperfect: there is a probability of overlooking the hidden target and a probability of a "false alarm". The decision maker has to inspect the locations sequentially so as to find the target with the minimum cost, within an a priori given level of confidence. An index-based algorithm for finding the optimal search strategy is developed.

Keywords: search and detection, discrete search, imperfect inspection, greedy algorithm.

1. INTRODUCTION

The need to search for hidden or lost objects arises in many civilian and military applications. Suppose a single target or a set of several targets are hidden in an area. Until they are found, they cause damage or loss to the whole system, the damage scale being dependent on the location of the targets and the time needed to detect and neutralize them. The problem is to efficiently detect and remove or neutralize the targets. A typical example arising in military logistics is a discrete-time path-search optimization problem where a single searcher moves through a discretized 3-dimensional airspace and needs to find a (moving or static) target operating in a finite set of possible geographical locations. In general, the searcher is subject to path continuity and search time constraints, fuel consumption limits, and other factors. Whereas the typical objective of the search is to maximize the probability of detecting the target (Dell et al. 1996, Kress and Royset 2008, Sato and Royset 2010), we consider an essentially more general formulation involving the search time and cost. Such a problem arises in military search, surveillance, and reconnaissance operations with patrol aircraft and unmanned aerial vehicles (UAVs), where physical and operational constraints limit the probability of detection. The resulting optimization problem is quite challenging from the computational point of view because its general resource- and time-constrained version is proved to be NP-hard (Trummel and Weisinger 1986). The choice of a search strategy strongly influences the search costs and losses incurred by the search, as well as the probability of finding the target within a specific time limit. In military search missions with UAVs, the problem of finding good strategies for UAVs performing the search-and-detection mission is especially complicated because, like manned aircraft, these mobile agents need to operate in hostile environments.

Under hostile, sabotage-prone and insidious circumstances, two types of errors in search-and-detection tests usually occur: (i) a so-called "false-negative" detection, wherein a target object or location is overlooked; and (ii) a "false-positive" detection (also called "a false alarm"), which wrongly classifies a good object as failed or malicious. Unfortunately, the problem of selecting the "best" search strategy is fundamentally hard due to its stochastic nature and the nonlinearity induced by the detection probability. In particular, looking twice into the same location generally does not double the detection probability. Recent reviews and texts provide a deeper background and further applications of search problems (see, e.g., Washburn 2002, Song and Teneketzis 2004, Kress et al. 2008, Sato and Royset 2010, and Chung and Burdick 2012).

In this work, we study a stochastic search-and-detection process subject to false-negative and false-positive inspection outcomes. The general resource-constrained problem being NP-hard, we restrict our study to finding locally optimal strategies. The aim of the search is to minimize the losses incurred; accordingly, the objective of the search (which we will call the "search cost") is to minimize these losses. We are interested in organizing the search process by using greedy, locally optimal strategies. Such a strategy is sequential: at each step, the decision maker computes a current "search effectiveness" for each component and recommends searching next a component with the highest current effectiveness. We focus on an important special case of the problem in which the locally optimal strategy yields a globally optimal one. Being attractive because of its simplicity and computational efficiency, such a local search strategy guarantees finding an optimal search sequence with a given confidence level for the problem considered in this paper.


2. RELATED WORK

The discrete search problem is one of the oldest problems of Operations Research and Artificial Intelligence. Its initial study was made by Bernard Koopman and his team during World War II to provide efficient methods for detecting submarines (Koopman 1956). Search theory was used by the US Navy in the search for the H-bomb lost in the ocean near Palomares, Spain, in 1966 (see a historical survey of search theory and the recent bibliography of the discrete search literature in Stone (1989) and Washburn (2002)). Much of the early research relies on the assumption that only the false-negative detection ("the overlook") occurs while the false-alarm probability is zero. Kadane (1971) and Stone (1989) considered a more general situation in which the times and probabilities of fault detections change during the search. Kadane (1971) examined the optimal "whereabouts" search problem of locating a stationary target or, if unsuccessful within a fixed search budget, specifying its most likely location. Chew (1973), Wegener (1975), Assaf and Zamir (1985), and Levner (1994) also considered only false-negative detection probabilities, but with varying costs associated with inspecting different components. Chew (1973) and Kadane (1971) considered a situation where the probability of detection is to be maximized subject to a cost constraint and showed that a locally optimal strategy is optimal when the cost is measured in the number of searches. Kadane and Simon (1977) proposed a unified approach to the min-cost and max-reward (finite-horizon) search problems. Trummel and Weisinger (1986) and Wegener (1985) showed that the general minimum-expected-time-to-detection and cost-with-switches search problems are NP-hard. The criteria for the termination of the search process, called the optimal stopping rule, are treated by Chew (1973) and Ross (1969); however, such termination conditions still rely on the assumption of zero false-positive detections and, thus, only correctly capture the false-negative search process.

The robotics community has also made great strides on the problem of sequential search. Chung and Burdick (2012) and the references therein present a Bayesian construction for the problem of searching for multiple lost targets by robots, using the probability of detection of these targets as the objective for the optimal search trajectory. However, this work also does not address the presence of false-positive detections, assuming that any positive detection always reveals a true target. Kress et al. (2008) and Chung and Burdick (2012) are the only researchers known to us who address the possibility of false-positive detections. Their specific contributions are the following: (a) they show that a local index-based policy is optimal when each positive detection by the (autonomous) search sensor is followed by an investigation by a (human) verification team; (b) if the verification time is prohibitively longer than the search time, an alternative measure of effectiveness, namely, the probability that the first positive detection is a true one, is introduced and analysed. However, these authors do not study a stopping rule for the case where the human verification team is absent. In contrast with this work, we study below a different situation wherein no human inspection team is involved.

A main contribution of Chung and Burdick (2012) is the decision-theoretic representation of the spatio-temporal search process, where the sequentially gathered observations drive the searcher's search decision in the search area. The evolution of the search decision provides a unified approach both for the affirmative search problem, i.e., finding a target known to be present in an area, and for the search quitting option, specifying a termination criterion to stop the search process and declare the object absent. Another major contribution of this work is the evaluation of several proposed (myopic and biology-inspired) search strategies minimizing the search time. Complementary to this research, in the present paper we consider a more general search cost objective function, develop a greedy index-based strategy for its minimization, and prove its optimality. The general framework and algorithm developed in the present paper include, as special cases, several earlier known models and algorithms for perfect inspections (see Rabinowitz and Emmons 1997), as well as models and algorithms for inspections with only false-negative outcomes.

3. PROBLEM FORMULATION

In this paper, the sequential discrete search problem is defined as follows. A discrete search area of interest (AOI) may be a complex urban system or a set of possible locations of stationary or moving targets. The subjects of the search are hidden or lost targets. We start with the case in which one and only one target is to be found within the considered framework. The target location in the search area is uncertain: there are nonzero probabilities that the target is located in given locations, and these probabilities are available to the decision maker before the search commences. Given the stationary nature of the target, the object is assumed to be present throughout the entire duration of the search process, i.e., the object does not leave the search area.

We assume that an inspection of each location is imperfect. This means that there is given a prior probability α_i of a false alarm (in a clean location) and, in addition, a prior probability β_i of overlooking the target in the location where it is actually hidden. This implies that examination of each location can happen more than once; in other words, a search sequence, in general, may be infinite.

When the search starts, a single mobile searcher, or a group of searchers working in parallel, should perform a set of sequential inspections in order to identify the target. The times and costs of inspections of all locations being given, the goal of the search is to determine a search strategy which the searcher should employ in order to locate the target with the minimum or near-minimum cost.

Let us give a formal description of the problem. A system contains N modules 1, 2, …, N. When the search starts, each location i, i = 1, …, N, is characterized by the following known parameters:

• p_i – prior probability that location i is "infected" and dangerous;
• α_i – prior probability of a "false alarm", or a false-positive outcome, that is, the conditional probability that an inspection declares that the target is in i whereas in fact it is not in this location;
• β_i – prior probability of overlooking, or a false-negative outcome, that is, the conditional probability that an inspection declares that the target is not in i whereas in fact it is there;
• t_i – expected time to inspect location i;
• c_i – search cost rate per unit time when searching location i.

Each sequential inspection strategy specifies an infinite sequence

$$s = \langle S[0], s[1], s[2], \dots, s[n], \dots \rangle,$$

where s[n] denotes the location (further called an element), more exactly, the number of the element which is inspected at the n-th step of sequence s; all s[n] ∈ {1, …, N}, and S[0] is an initializing sub-sequence which will be defined below.

Given the above input data, the optimal search scenario is specified by the following conditions:

(i) the elements are inspected sequentially;

(ii) for any search strategy and any failure location, the outcomes of inspections are independent;

(iii) the stopping rule is defined as follows:
(a) first, for any integer h, we define the conditional probability a[i,h] that element i is really failed under the condition that it is declared to be failed in h inspections; a[i,h] depends on the given p_i, α_i and β_i and is computed in the next section;
(b) second, a confidence level CL is given a priori, say CL = 0.95, and for each element the parameter H_i, called the "height", is defined as the minimum positive integer h such that the probability a[i,h] reaches the confidence level CL, that is, a[i,h] ≥ CL (notice that all H_i can be computed by the decision maker before the search process starts);
(c) in any sequence s, the search ends when, at some step, for some element i, the outcome of the inspection is: "the element i is declared failed for the H_i-th time in s".

For a given sequence s, we shall use the following notation:

• T_{s[n]} = T(s[n], s) – the (accumulated) time spent to detect the failed component s[n] by the n-th step of strategy s (its explicit form is given below);
• P_{s[n]} – the unconditional probability that the element s[n] is detected as failed H_{s[n]} times up to the n-th step of strategy s, if s[n] is known to be failed (the values H_{s[n]} and P_{s[n]}, depending on the parameters α and β and guaranteeing a required confidence level, will be defined below);
• F(s) – the expected total search cost.

In accordance with the above conditions (i) and (ii) of the considered search scenario, the expected (linear) total search cost F(s) is defined as follows:

$$F(s) = \sum_{n=1}^{\infty} P_{s[n]}\, c_{s[n]}\, T_{s[n]}.$$

In the above notation, the stochastic scheduling problem studied in this paper is to find a sequence s* minimizing the expected total search cost F(s). When all α_i = β_i = 0, we have the special case of so-called perfect inspections, extensively studied in the scheduling literature (see, for example, Kadane and Simon (1977) and Rabinowitz and Emmons (1997)). When all α_i = 0 but β_i > 0, this is the case of false-negative inspections; if α_i > 0 but all β_i = 0, we have the false-positive inspections. In what follows, the terms strategy s and sequence s will be used interchangeably.

4. PROBLEM ANALYSIS

We start with the following notation:

Event B_i = {the inspection declares that element i is a target};
Event C_i = {element i is really a target}.

Using the aforementioned notation of Section 3, the probability of C_i and the probabilities of the errors of the two types are expressed as follows:

• p_i = P(C_i);
• α_i = P(B_i | C̄_i);
• β_i = P(B̄_i | C_i),

where C̄_i and B̄_i denote the complements of the events C_i and B_i, respectively.

The probability that element i is detected as failed (i.e., declared to be a target) in a single inspection is equal to

$$f_i = P(B_i) = P(C_i)\,P(B_i \mid C_i) + P(\bar{C}_i)\,P(B_i \mid \bar{C}_i) = p_i(1-\beta_i) + (1-p_i)\,\alpha_i.$$

The probability to correctly detect the target element i within a single inspection is equal to

$$P(C_i \mid B_i) = \frac{P(C_i)\,P(B_i \mid C_i)}{P(C_i)\,P(B_i \mid C_i) + P(\bar{C}_i)\,P(B_i \mid \bar{C}_i)} = \frac{p_i(1-\beta_i)}{p_i(1-\beta_i) + (1-p_i)\,\alpha_i}.$$
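To make these two expressions concrete, here is a minimal Python sketch (illustrative only: the function names and the numeric values are assumptions made for this example, not material from the paper) that evaluates f_i and P(C_i | B_i) for a single location:

```python
# Minimal sketch (not from the paper): single-inspection detection
# probability f_i and the posterior P(C_i | B_i) for one location.

def detection_probability(p: float, alpha: float, beta: float) -> float:
    """f_i = P(B_i): probability that a single inspection declares a target."""
    return p * (1.0 - beta) + (1.0 - p) * alpha

def posterior_after_one_detection(p: float, alpha: float, beta: float) -> float:
    """P(C_i | B_i): probability the target is really there, given one 'target' outcome."""
    return p * (1.0 - beta) / detection_probability(p, alpha, beta)

if __name__ == "__main__":
    # Illustrative values: prior 0.2, false-alarm rate 0.05, overlook rate 0.1.
    p, alpha, beta = 0.2, 0.05, 0.1
    print(f"f_i = {detection_probability(p, alpha, beta):.4f}")
    print(f"P(C_i | B_i) = {posterior_after_one_detection(p, alpha, beta):.4f}")
```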

The following claim is straightforward.

Claim 1. Given a sequence s, the probability a[i,h] that a module i is really a target, under the condition that it is discovered to be a target within exactly h inspections in s (where h is some integer), satisfies the following relations:


$$a[i,h] = P\bigl(C_i \mid B_i^{(1)} \cap B_i^{(2)} \cap \dots \cap B_i^{(h)}\bigr)
= \frac{P(C_i)\,P\bigl(B_i^{(1)} \cap B_i^{(2)} \cap \dots \cap B_i^{(h)} \mid C_i\bigr)}{P\bigl(B_i^{(1)} \cap B_i^{(2)} \cap \dots \cap B_i^{(h)}\bigr)}
= \frac{P(C_i)\,P^{\,h}(B_i \mid C_i)}{P(C_i)\,P^{\,h}(B_i \mid C_i) + P(\bar{C}_i)\,P^{\,h}(B_i \mid \bar{C}_i)}
= \frac{p_i(1-\beta_i)^h}{p_i(1-\beta_i)^h + (1-p_i)\,\alpha_i^h}.$$


Corollary. Given a predetermined confidence level CL (say, CL = 0.95), the "height" H_i for any element i is computed as the minimum integer h satisfying the following condition:

$$a[i,h] = \frac{p_i(1-\beta_i)^h}{p_i(1-\beta_i)^h + (1-p_i)\,\alpha_i^h} \;\ge\; CL.$$
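The height H_i can be found by a direct scan over h. The following Python sketch (an illustration under the assumptions of Claim 1 and the Corollary, with hypothetical parameter values) computes a[i,h] and the resulting H_i:

```python
# Sketch (not from the paper): posterior a[i,h] after h 'target' outcomes and
# the height H_i, i.e. the smallest h with a[i,h] >= CL.

def posterior(p: float, alpha: float, beta: float, h: int) -> float:
    """a[i,h] = p(1-beta)^h / (p(1-beta)^h + (1-p) alpha^h)."""
    num = p * (1.0 - beta) ** h
    return num / (num + (1.0 - p) * alpha ** h)

def height(p: float, alpha: float, beta: float, cl: float = 0.95, h_max: int = 1000) -> int:
    """Minimum number of 'target' outcomes needed to reach confidence level cl."""
    for h in range(1, h_max + 1):
        if posterior(p, alpha, beta, h) >= cl:
            return h
    raise ValueError("confidence level not reachable within h_max inspections")

if __name__ == "__main__":
    # Illustrative parameter values (assumed, not the paper's data).
    p, alpha, beta = 0.2, 0.05, 0.1
    for h in (1, 2, 3):
        print(h, round(posterior(p, alpha, beta, h), 4))
    print("H_i =", height(p, alpha, beta))
```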

The search strategy is defined as an infinite sequence of component numbers where, at each step n, a module s[n] is inspected and tested whether or not it is a target: s = ⟨S[0], s[1], s[2], …, s[n], …⟩. In this sequence, S[0] denotes an initializing sub-sequence defined in such a way that the probability to stop during this sub-sequence is zero. Now we are able to compute the conditional probability P_{s[n]} that an element s[n] is detected as a target (not necessarily successively) exactly H_{s[n]} times up to the n-th step of strategy s, if this element s[n] is known to be a target. (Recall that this probability is needed for computing the problem's objective function F(s).) For this aim, we need to introduce an auxiliary parameter s*[n]. For any given s and s[n], let s*[n] be the total number of inspections of module s[n] (not necessarily successive, one after the other) up to the n-th step of strategy s; evidently, s*[n] ≤ n for all n. Notice that s*[n] can be easily computed as soon as the sequence s is known up to the n-th step.

Claim 2.

$$P_{s[n]} = \binom{s^*[n]-1}{H_{s[n]}-1}\; f_{s[n]}^{\,H_{s[n]}}\; \bigl(1 - f_{s[n]}\bigr)^{\,s^*[n]-H_{s[n]}} \quad \text{for } n \ge 1, \qquad P_{s[n]} = 0 \ \text{for } n = 0.$$

This claim immediately follows from the above definitions and the binomial distribution of the H_{s[n]} − 1 inspections of the element s[n] with the outcome "a target" within the total number s*[n] of inspections of the element s[n] up to the n-th step of s.

Now we can define more exactly all the components of the problem's objective function F(s):

• The time T_{s[n]} spent for the inspection of all the elements of S[0], s[1], …, s[n] up to the n-th step of strategy s (including the initializing sub-sequence S[0]) is

$$T_{s[n]} = \sum_{i=1}^{N} t_i\,(H_i - 1) + \sum_{m=1}^{n} t_{s[m]};$$

• The unconditional joint probability that a target element s[n] is detected as a target exactly H_{s[n]} times at the n-th step of strategy s is P_{s[n]};

• The search cost attributed to strategy s is the expected total cost F(s) defined in Section 3.
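To illustrate how these components combine, the Python sketch below evaluates s*[n], P_{s[n]}, T_{s[n]} and a truncated approximation of F(s) for a finite prefix of a candidate sequence; all data and the sequence itself are hypothetical, not taken from the paper:

```python
# Sketch (illustrative data, not the paper's): evaluate P_{s[n]}, T_{s[n]} and a
# truncated estimate of F(s) for a finite prefix of a candidate search sequence.
from math import comb

p     = [0.2, 0.3, 0.5]      # prior location probabilities
alpha = [0.05, 0.06, 0.04]   # false-alarm probabilities
beta  = [0.10, 0.08, 0.05]   # overlook probabilities
t     = [5.0, 8.0, 10.0]     # inspection times
c     = [1.0, 1.0, 1.0]      # cost rates
H     = [2, 2, 1]            # heights, assumed pre-computed for CL = 0.95
f     = [p[i]*(1-beta[i]) + (1-p[i])*alpha[i] for i in range(3)]

def truncated_cost(seq):
    """Sum of P_{s[n]} * c_{s[n]} * T_{s[n]} over the finite prefix 'seq'."""
    T0 = sum(t[i] * (H[i] - 1) for i in range(3))    # duration of S[0]
    inspections = [0, 0, 0]                          # s*[n] counters
    elapsed, cost = T0, 0.0
    for i in seq:                                    # i = element inspected at this step
        inspections[i] += 1
        elapsed += t[i]
        s_star, h = inspections[i], H[i]
        if s_star >= h:                              # otherwise P_{s[n]} = 0
            P = comb(s_star - 1, h - 1) * f[i]**h * (1 - f[i])**(s_star - h)
            cost += P * c[i] * elapsed
    return cost

if __name__ == "__main__":
    # A hypothetical round-robin prefix of nine inspections.
    print(round(truncated_cost([0, 1, 2, 0, 1, 2, 0, 1, 2]), 3))
```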

The greedy algorithm below permits finding the optimal strategy, according to the following theorem.

Theorem 1. Let H_i f_i < 1 for all components i. The strategy s* is an optimal strategy if and only if the ratios

$$Q_{s[n]} = \frac{c_{s[n]}}{t_{s[n]}}\,\binom{s^*[n]-1}{H_{s[n]}-1}\; f_{s[n]}^{\,H_{s[n]}}\; \bigl(1 - f_{s[n]}\bigr)^{\,s^*[n]-H_{s[n]}}$$

are arranged in non-increasing order, for all n ≥ 1.

The proof is done by the interchange argument.

Theorem 2. Let F(s) be the expected cost attributed to the strategy s. Denote

$$T = \sum_{i=1}^{N} t_i, \qquad T_0 = \sum_{i=1}^{N} t_i\,(H_i - 1).$$

The following inequality holds:

$$F(s) \;\le\; \sum_{i=1}^{N} c_i \left\{\, T_0 + T \cdot \Bigl[\, H_i \cdot \frac{1}{f_i} - 1 \Bigr] \right\}.$$

Notice that we call the search policy in Theorem 1 greedy, or index-based, because, in order to find an optimal sequence of steps, at each step the searcher selects for inspection the component with the maximal current ratio Q_{s[n]}.
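A compact Python sketch of this index rule is given below; it is an illustration only, simulating one run of the policy on assumed data (with the heights H_i taken as pre-computed), not the authors' implementation:

```python
# Sketch of the greedy index-based policy of Theorem 1: at each step, inspect
# the location with the largest current ratio Q. All numeric data are
# illustrative assumptions, not taken from the paper.
import random
from math import comb

p     = [0.2, 0.3, 0.5]      # prior location probabilities
alpha = [0.05, 0.06, 0.04]   # false-alarm probabilities
beta  = [0.10, 0.08, 0.05]   # overlook probabilities
t     = [5.0, 8.0, 10.0]     # inspection times
c     = [1.0, 1.0, 1.0]      # cost rates
H     = [2, 2, 1]            # heights, assumed pre-computed for CL = 0.95
f     = [p[i]*(1-beta[i]) + (1-p[i])*alpha[i] for i in range(3)]

def Q(i, s_star):
    """Index of location i if it were inspected for the s_star-th time next."""
    return (c[i]/t[i]) * comb(s_star-1, H[i]-1) * f[i]**H[i] * (1-f[i])**(s_star-H[i])

def greedy_search(rng, max_steps=10_000):
    target = rng.choices(range(3), weights=p)[0]     # hidden target location
    inspections = [0, 0, 0]                          # s*[n] counters
    detections  = [0, 0, 0]                          # 'target' outcomes so far
    elapsed = 0.0

    def inspect(i):
        nonlocal elapsed
        inspections[i] += 1
        elapsed += t[i]
        hit = rng.random() < ((1-beta[i]) if i == target else alpha[i])
        detections[i] += hit
        return detections[i] >= H[i]                 # stopping rule reached?

    # Initializing sub-sequence S[0]: each location is inspected H_i - 1 times,
    # so the stopping rule cannot fire during S[0].
    for i in range(3):
        for _ in range(H[i] - 1):
            inspect(i)

    # Greedy phase: always inspect the location with the maximal current index.
    for _ in range(max_steps):
        i = max(range(3), key=lambda j: Q(j, inspections[j] + 1))
        if inspect(i):
            return i, i == target, elapsed
    return None, False, elapsed

if __name__ == "__main__":
    declared, correct, time_spent = greedy_search(random.Random(7))
    print(declared, correct, time_spent)
```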

5. CONCLUSIONS

In this work, we study a stochastic search-and-detection process subject to false-negative and false-positive inspection outcomes. We extend previous results for the discrete search for lost or hidden components to a more general case where the searcher performs imperfect inspections with errors of the two aforementioned types. For optimizing the search process, we use a greedy strategy, called the index-based strategy, which is proven to be optimal when the objective is to minimize the expected cost of the search-and-detection process. This is a sequential strategy which, at each step, computes a current "search effectiveness" for each component and recommends inspecting next the component with the highest current effectiveness. Being attractive because of its computational efficiency and simplicity, such a local search strategy guarantees finding an optimal (minimum-cost) search sequence with an arbitrary pre-specified confidence level.

This work gives a starting point for exploring other search scenarios and models. Different search scenarios (e.g., with multiple mobile agents, multiple targets, resource constraints, precedence relations, agents with memory, etc.) are of theoretical and practical importance. Advanced solution methods like dynamic programming, branch-and-bound, biology-motivated algorithms, and stochastic programming can be employed for solving more complicated search problems. Another promising direction for future research is to compare the efficiency of different search methods using modern analytical and simulation tools.


6. EXAMPLE: OPTIMAL ROBOTIC SEARCH FOR A LOST TARGET

Consider a problem of searching for a lost target by a single robot in the stochastic setting described in Chung and Burdick (2012). An area of interest is divided into N possible locations, in one of which a target object is hidden. The specificity of the considered robotic search is that this autonomous device, at each step of the search, has a limited memory and so does not remember all the outcomes of its previous search steps. The robot only remembers how many times a target has been detected in each location up to the current step in the search sequence; the search stops as soon as the number of such outcomes in one of the locations reaches its pre-specified height H_i. Further details of the search for the lost target by an autonomous device are given in Chung and Burdick (2012) and are skipped here.

For simplicity, we consider an area with only three locations, numbered 1, 2 and 3. The input data are given in Table 1; the confidence level is CL = 0.95.

Table 1. Input data

  Location i              1       2       3
  p_i = P(C_i)            0.1     0.15    0.75
  α_i = P(B_i | C̄_i)      0.04    0.06    0.12
  β_i = P(B̄_i | C_i)      0.1     0.07    0.05
  t_i                     5       8       10
  c_i                     1       1       1

The results of the computations according to the formulas of Sections 3 and 4 are given in Table 2.

Table 2. Numerical results

  Location i                              1       2       3
  H_i                                     2       2       1
  f_i = p_i(1 − β_i) + (1 − p_i)α_i       0.126   0.190   0.743
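The entries of Table 2 can be re-derived from the Table 1 data; the following short Python check (a sketch written for illustration, not code from the paper) recomputes f_i and H_i with CL = 0.95:

```python
# Recompute the Table 2 quantities (f_i and H_i) from the Table 1 data,
# using the formulas of Claim 1 and its Corollary, with CL = 0.95.
p     = [0.10, 0.15, 0.75]
alpha = [0.04, 0.06, 0.12]
beta  = [0.10, 0.07, 0.05]
CL    = 0.95

def a(i, h):
    """Posterior a[i,h] after h 'target' outcomes in location i (Claim 1)."""
    num = p[i] * (1 - beta[i]) ** h
    return num / (num + (1 - p[i]) * alpha[i] ** h)

for i in range(3):
    f_i = p[i] * (1 - beta[i]) + (1 - p[i]) * alpha[i]
    H_i = next(h for h in range(1, 100) if a(i, h) >= CL)
    print(f"location {i + 1}:  f_i = {f_i:.4f}   H_i = {H_i}")
# For these data: H_i = 2, 2, 1 and f_i ≈ 0.126, 0.19, 0.743, as in Table 2.
```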

This example shows that the probability of stopping the search in an incorrect location (caused by a false alarm), like the ratios Q_{s[n]}, quickly decreases as the number of steps in s grows. Notice that, in this setting, the searcher does not use the full history of all outcomes obtained at each step of the search sequence. Incorporating this information into the search model leads to a more complex dynamic programming setting that falls outside the scope of this paper and will be explored in future research.

REFERENCES

Assaf, D., and Zamir, S. (1985) Optimal sequential search: A Bayesian approach, Annals of Statistics, 13(3), pp. 1213-1221.
Chew, C., Jr. (1973) Optimal stopping in a discrete search problem, Operations Research, 21(3), pp. 741-747.
Chung, T.H., and Burdick, J.W. (2012) Analysis of search decision making using probabilistic search strategies, IEEE Transactions on Robotics, 28(1), pp. 245-256.
Dell, R.F., Eagle, J.N., Martins, G.H.A., and Santos, A.G. (1996) Using multiple searchers in constrained-path, moving-target search problems, Naval Research Logistics, 43, pp. 463-480.
Kadane, J.B. (1971) Optimal whereabouts search, Operations Research, 19(4), pp. 894-904.
Kadane, J.B., and Simon, H.A. (1977) Optimal strategies for a class of constrained sequential problems, The Annals of Statistics, 5, pp. 237-255.
Koopman, B. (1956) The theory of search, II. Target detection, Operations Research, 4, pp. 324-346.
Kress, M., Royset, J.O., and Rozen, N. (2012) European Journal of Operational Research, 220(2), pp. 550-558.
Kress, M., and Royset, J.O. (2008) Aerial search optimization model (ASOM) for UAVs in special operations, Military Operations Research, 13(1), pp. 23-33.
Kress, M., Lin, K.Y., and Szechtman, R. (2008) Optimal discrete search with imperfect specificity, Mathematical Methods of Operations Research, 68(3), pp. 539-549.
Levner, E. (1994) Infinite-horizon scheduling algorithms for optimal search for hidden objects, International Transactions in Operational Research, 1(2), pp. 241-250.
Rabinowitz, G., and Emmons, H. (1997) Optimal and heuristic inspection schedules for multistage production systems, IIE Transactions, 29(12).
Ross, S.M. (1969) A problem in optimal search and stop, Operations Research, 17(6), pp. 984-992.
Sato, H., and Royset, J.O. (2010) Path optimization for the resource-constrained searcher, Naval Research Logistics, 57(5), pp. 422-440.
Song, N.-O., and Teneketzis, D. (2004) Discrete search with multiple sensors, Mathematical Methods of Operations Research, 60(1), pp. 1-13.
Stone, L.D. (1989) Theory of Optimal Search, 2nd ed. Academic Press, New York.
Trummel, K.E., and Weisinger, J.R. (1986) The complexity of the optimal searcher path problem, Operations Research, 34(2), pp. 324-327.
Washburn, A.R. (2002) Search and Detection (Topics in Operations Research Series), 4th ed. INFORMS.
Wegener, I. (1985) Optimal search with positive switch cost is NP-hard, Information Processing Letters, 21(1), pp. 49-52.
Zabarankin, M., Uryasev, S., and Murphey, R. (2006) Aircraft routing under the risk of detection, Naval Research Logistics, 53, pp. 728-747.
