Acta Mathematica Scientia 2015,35B(2):359–365 http://actams.wipm.ac.cn

THREE PROBLEMS IN SEARCHING FOR A MOVING TARGET BETWEEN TWO SITES∗

Jinghu YU†    Wenmin YE

Department of Mathematics, School of Sciences, Wuhan University of Technology, Wuhan 430070, China
E-mail: [email protected]

Abstract  Suppose that a moving target moves randomly between two sites and that its movement is modeled by a homogeneous Markov chain. We consider three classical problems: (1) what kind of strategies are valid? (2) which strategy is optimal? (3) what is the infimum of the expected numbers of looks needed to detect the target? Problem (3) is solved completely, and partial solutions to problems (1) and (2) are obtained.

Key words

Search theory; moving target; Markov chain

2010 MR Subject Classification  60J20; 90B40

1  Introduction

Moving target search has received much attention in the last few decades [Ahlswede et al (1979), Ross (1983), Stone (1975)]; both the theory [Assaf et al (1994), Benkoski et al (1991), Eagle (1984), Schweitzer (1971)] and the algorithms [Brown (1980), Moldenhauer et al (2009), Singh et al (2003)] of optimal search have been extensively studied because of their wide-ranging applications. Many mathematical models have been employed to deal with moving targets, and Markov chains have proved to be fairly effective among them [Kan (1977), Macphee et al (1995), Nakai (1973), Pollock (1970), Weber (1986)]. Building on existing results in this area, several main problems have received considerable attention, including: what strategy produces the minimum expected number of looks needed to detect the target? what strategy produces the maximum probability of detecting the target when n (n ≥ 1) looks are available? and what is the value of this maximum probability? Although these problems have been discussed comprehensively, some weaknesses remain that can be improved upon.
Let us consider a simple Markov model of a moving target: the target moves randomly between two sites, referred to from here on as 0 and 1 respectively. It is always assumed that (1) at each time t = 0, 1, 2, · · · , the target is located in one of the two sites; (2) at the start t = 0, the target is located in one of the two sites according to a given initial distribution π = (π0, π1), and then it moves between the two sites according to a

∗ Received December 24, 2013; revised January 8, 2014.
† Corresponding author.

Markov transition matrix

    P = ( p00  p01 )
        ( p10  p11 )

at every unit interval of time; (3) π and P are known to the searcher in advance; (4) at each time t = 0, 1, 2, · · · , the searcher looks into exactly one of the two sites just before each transition of the target, and the target is detected with certainty if the searcher looks into the site where the target is. As a result of assumption (4), the searcher specifies a sequence of looks to be made in the two sites, conditional upon the target not having been detected up to any given time; this sequence of looks is called a strategy.
In this article, three questions are posed: (1) which search strategies are valid (or which strategies are invalid)? (2) which strategy is optimal? (3) how can the expected number of looks needed to detect the target be derived? These have always been classical themes in search theory. With the aid of stopping times of stochastic processes and by constructing a shift transformation on the strategy space, rigorous mathematical descriptions and proofs concerning the above problems are provided, and these results improve the existing ones. In order to state them clearly and precisely, most of the main concepts and notations of this article are presented here first.
Let C = {0, 1}^∞, which is termed the strategy space of a moving target between two sites, named "0" and "1". An element c ∈ C is then called a strategy. In particular, we call (1, 1, · · · , 1, · · · ) and (0, 0, · · · , 0, · · · ) uniform strategies, and (1, 0, 1, 0, · · · ) and (0, 1, 0, 1, · · · ) alternative strategies. The search is conducted as follows: the nth (n ≥ 1) look is made before the nth transition, and the nth look is carried out in site cn if the searcher adopts the strategy c = (c1, c2, · · · , cn, · · · ). A homogeneous Markov chain {Xn : n ≥ 1} is introduced to describe the movement of the target; {Xn : n ≥ 1} is assumed to take values in {0, 1} with initial distribution π and transition matrix P.
For any given strategy c, let τc = inf{k ≥ 1 : Xk = ck}. Then τc is the first hitting time at which the strategy c meets the moving target; it is also the number of looks the searcher needs to detect the target by using strategy c. If P{τc < ∞} = 1, the strategy c is called valid; otherwise, c is called invalid. So a valid strategy is one that detects the target in finitely many steps with probability one. The collection of all valid strategies is denoted by Cv. For any 0 ≤ p ≤ 1 and c ∈ C, let Ep τc denote the expected number of looks needed to detect the target when the strategy c is applied and, at the start of the search, the target is located in site 0 with probability p. For any two strategies c, d ∈ Cv, c is said to be prior to d (denoted by c ⪯ d) provided Eπ0 τc ≤ Eπ0 τd. A strategy c is called optimal if c ⪯ d for every d ∈ Cv.
This article is arranged as follows. Question (1) is addressed in Section 2; Section 3 tackles Questions (2) and (3), with special attention paid to the infimum of all expected numbers of looks; in Section 4, an example is presented to show how to obtain the expression of the infimum in a concrete problem, and some unsolved problems are stated.
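The setup above is easy to simulate. The following sketch is our own illustration (not part of the paper); the function names and parameters are ours. It draws the target's path from (π, P) and returns the number of looks τc used by a given strategy:

```python
import random

def search(strategy, pi0, P, max_looks=10_000, rng=random):
    """Simulate one search for tau_c = inf{k >= 1 : X_k = c_k}.

    strategy(n) returns the site c_n looked into just before the n-th
    transition; P = [[p00, p01], [p10, p11]] is the target's transition
    matrix and pi0 the probability that the target starts in site 0.
    Returns the number of looks used, or None if no detection occurs
    within max_looks looks."""
    site = 0 if rng.random() < pi0 else 1              # X_1 ~ (pi0, 1 - pi0)
    for n in range(1, max_looks + 1):
        if strategy(n) == site:                        # look hits the target's site
            return n
        site = 0 if rng.random() < P[site][0] else 1   # target transitions under P
    return None

uniform1 = lambda n: 1              # uniform strategy (1, 1, ...)
alternative01 = lambda n: (n + 1) % 2   # alternative strategy (0, 1, 0, 1, ...)
```

Averaging `search(...)` over many independent runs estimates Ep τc for a candidate strategy, which is useful for checking the closed-form expressions derived later.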

2  Valid Strategies

In this section, we show the existence of valid strategies when the target's movement is modeled by a Markov chain. Note that, for any strategy c ∈ C, we have

P{τc = 1} = P{X1 = c1} = πc1,
P{τc = 2} = P{X1 = 1−c1, X2 = c2} = π1−c1 p1−c1,c2,
. . .
P{τc = n} = P{X1 = 1−c1, X2 = 1−c2, · · · , Xn−1 = 1−cn−1, Xn = cn} = π1−c1 p1−c1,1−c2 × · · · × p1−cn−1,cn,  ∀n > 2.

Thereby, the probability P{τc < ∞} that the target can be detected in finite steps by using strategy c is Σ_{n≥1} P{τc = n}. From this expression we can see that it is not easy to justify whether Σ_{n≥1} P{τc = n} equals 1 or not for a general strategy c; however, the judgement

for some designated strategies can be made easily, and we list our results one by one according to the structure of the transition matrix P.
Result 1  If the transition matrix P satisfies p00 = p11 = 1/2, then for any initial distribution π, all strategies are valid.
Result 2  If the transition matrix P satisfies p00 < 1, then for any initial distribution π, the uniform strategy (1, 1, · · · , 1, · · · ) is valid; if P satisfies p11 < 1, then for any initial distribution π, the uniform strategy (0, 0, · · · , 0, · · · ) is valid.
Result 3  If the transition matrix P is symmetric or if P has two identical rows, then for any initial distribution π, the alternative strategies c = (0, 1, 0, 1, · · · ) and c = (1, 0, 1, 0, · · · ) are valid.
Their proofs are stated below.
Proof of Result 1  For any c ∈ C, P{τc = 1} = πc1, and for any n ≥ 2,

P{τc = n} = π1−c1 p1−c1,1−c2 × · · · × p1−cn−1,cn = π1−c1 (1/2)^(n−1).

Thereby,

P{τc < ∞} = Σ_{n≥1} P{τc = n} = πc1 + π1−c1 [1/2 + (1/2)^2 + (1/2)^3 + · · · ] = 1.  □
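As a numerical sanity check of Result 1 (our own illustration, not part of the paper), the partial sums of P{τc = n} approach 1 regardless of the strategy when p00 = p11 = 1/2; the terms depend only on c1:

```python
def detection_prob(pi0, c1, N):
    """Partial sum of P{tau_c = n} for n = 1..N when p00 = p11 = 1/2.

    P{tau_c = 1} = pi_{c1}, and P{tau_c = n} = pi_{1-c1} * (1/2)**(n-1)
    for n >= 2, independent of the later entries of the strategy c."""
    pi = (pi0, 1 - pi0)
    return pi[c1] + sum(pi[1 - c1] * 0.5 ** (n - 1) for n in range(2, N + 1))
```

For any starting distribution and either value of c1, the partial sum is within floating-point precision of 1 once N is moderately large, in line with the geometric series above.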

Proof of Result 2  For the uniform strategy c = (1, 1, · · · ), we have

P{τc = 1} = π1,
P{τc = 2} = π0 p01,
P{τc = 3} = π0 p00 p01 = π0 p01 p00,
P{τc = 4} = π0 p00 p00 p01 = π0 p01 (p00)^2,
. . .
P{τc = n} = π0 p01 (p00)^(n−2),
. . .

Thereby, P{τc < ∞} = π1 + π0 p01 [1 + p00 + (p00)^2 + · · · + (p00)^n + · · · ] = π1 + π0 p01/(1 − p00) = 1, since p01 = 1 − p00 > 0. Applying the same argument to the strategy c = (0, 0, · · · ) completes the proof of Result 2.  □
Proof of Result 3

For the alternative strategy c = (0, 1, 0, 1, · · · ), we have

P{τc = 1} = π0,
P{τc = 2} = π1 p11 = π1 (p10 p01)^0 p11,
P{τc = 3} = π1 p10 p00 = π1 p10 (p01 p10)^0 p00,
P{τc = 4} = π1 p10 p01 p11 = π1 (p10 p01)^1 p11,
P{τc = 5} = π1 p10 p01 p10 p00 = π1 p10 (p01 p10)^1 p00,
P{τc = 6} = π1 p10 p01 p10 p01 p11 = π1 (p10 p01)^2 p11,
P{τc = 7} = π1 p10 p01 p10 p01 p10 p00 = π1 p10 (p01 p10)^2 p00,
. . .

So, for any initial distribution π, we always have

P{τc < ∞} = π0 + π1 [ p11/(1 − p10 p01) + p10 p00/(1 − p01 p10) ] = π0 + π1 × (p11 + p10 p00)/(1 − p10 p01).

Let A = (p11 + p10 p00)/(1 − p10 p01). If P is symmetric, then p01 = 1 − p00 = p10 and p11 = p00 = 1 − p01, so that A = 1, and hence P{τc < ∞} = π0 + π1 = 1. If P has two identical rows, then p10 = p00 and p11 = 1 − p10 = 1 − p00, so that A = 1, and thereby P{τc < ∞} = 1. A similar computation applies to the alternative strategy c = (1, 0, 1, 0, · · · ).  □
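The two hypotheses of Result 3 can be checked numerically (our own sketch, not part of the paper): in both cases the constant A from the proof evaluates to 1 exactly.

```python
def A(P):
    """The constant A = (p11 + p10*p00) / (1 - p10*p01) from the proof of Result 3."""
    (p00, p01), (p10, p11) = P
    return (p11 + p10 * p00) / (1 - p10 * p01)

symmetric = [[0.7, 0.3], [0.3, 0.7]]   # p01 = p10: a symmetric transition matrix
same_rows = [[0.4, 0.6], [0.4, 0.6]]   # the two rows are identical
```

For both example matrices, `A(...)` returns 1, so P{τc < ∞} = π0 + π1·A = 1 and the alternative strategy is valid.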

Remark  Whatever the transition matrix P may be, its structure falls into at least one of the cases described in Result 2 and Result 3, so a valid strategy exists for every transition matrix P and every initial distribution π.

3  Expected Number of Looks

A basic aim of the searcher is to detect the target within a finite expected number of looks, so a strategy c is improper if Eπ0 τc = ∞. Clearly, Eπ0 τc = ∞ if c is an invalid strategy, and Eπ0 τc = Σ_{n≥1} n P{τc = n} for any valid strategy c. A question raised here is: does there exist a valid strategy c such that Eπ0 τc < ∞? Our answer is affirmative. In what follows, it is shown that the designated strategies in Results 1–3 possess finite mean numbers of looks. Our conclusions about those means, respectively, are as follows.
Conclusion 1  If the transition matrix P satisfies p00 = p11 = 1/2, then Eπ0 τc = πc1 + 3π1−c1 for any strategy c = (c1, c2, · · · ).
Conclusion 2  If the transition matrix P satisfies p00 < 1, then Eπ0 τc = 1 + π0/p01 for the uniform strategy (1, 1, · · · , 1, · · · ); if P satisfies p11 < 1, then Eπ0 τc = 1 + π1/p10 for the uniform strategy (0, 0, · · · , 0, · · · ).

Conclusion 3  If the transition matrix P is symmetric or if P has two identical rows, then for the alternative strategies,

Eτc = 1 + π1 (1 + p10)/(1 − p10 p01)  if c = (0, 1, 0, 1, · · · ),
Eτc = 1 + π0 (1 + p01)/(1 − p10 p01)  if c = (1, 0, 1, 0, · · · ).

It is easy to obtain Conclusions 1 and 2, so their proofs are omitted here. We are now in a position to prove Conclusion 3.
Proof of Conclusion 3  For the alternative strategy c = (0, 1, 0, 1, · · · ),

Eτc = Σ_{n≥1} n P{τc = n}
    = π0 + π1 p11 [2 + 4 p10 p01 + 6 (p10 p01)^2 + · · · ] + π1 p10 p00 [3 + 5 p10 p01 + 7 (p10 p01)^2 + · · · ]
    = π0 + 2π1 p11/(1 − p10 p01)^2 + π1 p10 p00/(1 − p10 p01) + 2π1 p10 p00/(1 − p10 p01)^2.

As proved in Result 3, A = (p11 + p10 p00)/(1 − p10 p01) = 1 when the transition matrix P is symmetric or P has two identical rows. This fact immediately implies that Eτc = 1 + π1 (1 + p10)/(1 − p10 p01). Repeating the same procedure for the alternative strategy c = (1, 0, 1, 0, · · · ) completes the proof of Conclusion 3.  □
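Conclusions 2 and 3 can be cross-checked by summing the series n·P{τc = n} directly (our own sketch, not part of the paper; the function names are ours). The term formulas below are the ones listed in Section 2:

```python
def mean_looks_uniform1(pi0, P, N=4000):
    """Truncated sum of n * P{tau_c = n} for the uniform strategy (1, 1, ...)."""
    p00, p01 = P[0]
    total = 1 - pi0                                  # n = 1 term: pi_1
    for n in range(2, N + 1):
        total += n * pi0 * p01 * p00 ** (n - 2)      # P{tau = n} = pi_0 p01 p00^(n-2)
    return total

def mean_looks_alt01(pi0, P, N=4000):
    """Truncated sum of n * P{tau_c = n} for the alternative strategy (0, 1, 0, 1, ...)."""
    (p00, p01), (p10, p11) = P
    pi1 = 1 - pi0
    total = pi0                                      # n = 1 term: pi_0
    for n in range(2, N + 1):
        k = (n - 2) // 2
        if n % 2 == 0:                               # even n: pi_1 (p10 p01)^k p11
            total += n * pi1 * (p10 * p01) ** k * p11
        else:                                        # odd n >= 3: pi_1 p10 (p01 p10)^k p00
            total += n * pi1 * p10 * (p01 * p10) ** k * p00
    return total
```

For a symmetric P, the truncated sums agree with the closed forms 1 + π0/p01 and 1 + π1(1 + p10)/(1 − p10 p01) to floating-point accuracy.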

Remarks  (1) Conclusion 1 shows that all strategies have the same expected number of looks if p00 = p11 = 1/2; thereby every strategy is optimal. (2) Conclusion 2 shows that if p00 < 1, p11 < 1, and π0/p01 < π1/p10, then (1, 1, · · · ) ⪯ (0, 0, · · · ). But for any other two strategies c and d, it is hard to check which one is prior to the other. An analogous analysis can be made from Conclusion 3.
The above analysis reveals that it is hard to obtain an exact optimal strategy in general, so from now on we focus on inf_{c∈C} Eπ0 τc (= inf_{c∈Cv} Eπ0 τc), the infimum of all expected numbers of looks. Notice that this infimum is bounded from above by a finite number according to Conclusions 2 and 3; but what is its value? To solve this problem, a shift transformation S must first be defined on the strategy space C. Let S be the map from C to C defined by Sc = (c2, c3, · · · ) for any c = (c1, c2, c3, · · · ); S is termed the shift transformation on C. For convenience of notation, we set p00 = a and p11 = b. Then, applying the properties of conditional expectation, we obtain, for any 0 ≤ p ≤ 1,

Ep(τc) = Ep[Ep[τc | X1]] = P{X1 = 0} Ep[τc | X1 = 0] + P{X1 = 1} Ep[τc | X1 = 1]
       = p Ep[τc | X1 = 0] + (1 − p) Ep[τc | X1 = 1].

So we have

Ep(τc) = p + (1 − p)[1 + E1−b(τSc)] = 1 + (1 − p) E1−b(τSc)  if c1 = 0,
Ep(τc) = p[1 + Ea(τSc)] + (1 − p) = 1 + p Ea(τSc)            if c1 = 1.

Let m(p) = inf_{c∈C} Ep(τc). Then we have

m(p) = min{ 1 + (1 − p) inf_{c∈{0,1}^∞, c1=0} E1−b(τSc),  1 + p inf_{c∈{0,1}^∞, c1=1} Ea(τSc) }.

Notice that {Sc : c = (c1, c2, · · · ) ∈ C, c1 = 0} = {Sc : c = (c1, c2, · · · ) ∈ C, c1 = 1} = C, thus the expression of m(p) becomes

m(p) = 1 + min{(1 − p) m(1 − b), p m(a)}.

Let p∗a,b = m(1 − b)/(m(a) + m(1 − b)). Then we have

m(p) = 1 + (1 − p) m(1 − b)  if p ≥ p∗a,b,
m(p) = 1 + p m(a)            if p < p∗a,b.

We now proceed to determine p∗a,b for given a and b. To do this, let us first consider the relations between a, b, and p∗a,b. For any given a and b, their relation is one of the following four cases: (1) 1 − b ≥ p∗a,b, a ≥ p∗a,b; (2) 1 − b ≥ p∗a,b, a < p∗a,b; (3) 1 − b < p∗a,b, a < p∗a,b; (4) 1 − b < p∗a,b, a ≥ p∗a,b. Next, we treat them one after another to derive the expression of p∗a,b.
In Case 1, from the equations m(1 − b) = 1 + b m(1 − b) and m(a) = 1 + (1 − a) m(1 − b), we get m(1 − b) = 1/(1 − b) and m(a) = 1 + (1 − a)/(1 − b). In this situation, p∗a,b = 1/(3 − (a + b)).
In Case 2, from the equations m(1 − b) = 1 + b m(1 − b) and m(a) = 1 + a m(a), we have m(1 − b) = 1/(1 − b) and m(a) = 1/(1 − a). In this situation, p∗a,b = (1 − a)/(2 − (a + b)).
In Case 3, from the equations m(a) = 1 + a m(a) and m(1 − b) = 1 + (1 − b) m(a), we have m(a) = 1/(1 − a) and m(1 − b) = 1 + (1 − b)/(1 − a). In this situation, p∗a,b = (2 − (a + b))/(3 − (a + b)).
In Case 4, we solve the system

m(1 − b) = 1 + (1 − b) m(a),
m(a) = 1 + (1 − a) m(1 − b),

and obtain m(1 − b) = 1 + (1 − b)(2 − a)/(1 − (1 − a)(1 − b)) = (2 − b)/(1 − (1 − a)(1 − b)) and m(a) = (2 − a)/(1 − (1 − a)(1 − b)). In this situation, p∗a,b = (2 − b)/(4 − (a + b)).
Now, the four possible expressions of p∗a,b have been provided. In the next section, an example shows how to choose the right one for a specific problem.
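The case analysis above can be cross-checked numerically (our own sketch, not part of the paper, and assuming the fixed-point iteration converges to the paper's solution): iterate the recursion m(p) = 1 + min{(1 − p)m(1 − b), p m(a)} at the two pivot arguments a and 1 − b, then form p∗a,b from the definition.

```python
def pivot_values(a, b, iters=5000):
    """Iterate the system
         m(a)     = 1 + min{(1-a)*m(1-b), a*m(a)},
         m(1 - b) = 1 + min{b*m(1-b), (1-b)*m(a)},
       obtained by evaluating m(p) = 1 + min{(1-p)*m(1-b), p*m(a)}
       at p = a and p = 1 - b.  Returns (m(a), m(1-b))."""
    ma = mb = 0.0
    for _ in range(iters):
        ma, mb = 1 + min((1 - a) * mb, a * ma), 1 + min(b * mb, (1 - b) * ma)
    return ma, mb

def p_star(a, b):
    """p*_{a,b} = m(1-b) / (m(a) + m(1-b))."""
    ma, mb = pivot_values(a, b)
    return mb / (ma + mb)
```

For a = 1/2 and b = 1/4 this yields m(a) = 5/3, m(1 − b) = 4/3, and p∗ = 4/9, which matches the Case 1 formula 1/(3 − (a + b)), in agreement with the example in the next section.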

4  Example and Questions

An exhaustive analysis of the analytical expression of the infimum m(p) was made in Section 3, so we start this section with an example showing how to determine the concrete expression of m(p) for given a and b.
Example  Let b = 1/4 and a = 1/2. According to the expressions of p∗a,b derived in Section 3, p∗1/2,1/4 may take the values 4/9, 2/5, 5/9, and 7/13, respectively. For a = 1/2 and b = 1/4, which one is right? Note that 1 − b = 3/4 and a = 1/2, so 1 − b > 4/9, a > 4/9; 1 − b > 2/5, a > 2/5; 1 − b > 5/9; and 1 − b > 7/13.

If p∗1/2,1/4 = 4/9, then a and 1 − b should satisfy 1 − b ≥ 4/9 and a ≥ 4/9; fortunately, both inequalities hold. If p∗1/2,1/4 = 2/5, then a and 1 − b should satisfy 1 − b ≥ 2/5 and a < 2/5; unfortunately, the second inequality fails. If p∗1/2,1/4 = 5/9, then a and 1 − b should satisfy 1 − b < 5/9 and a < 5/9; unfortunately, the first inequality fails. If p∗1/2,1/4 = 7/13, then a and 1 − b should satisfy 1 − b < 7/13 and a ≥ 7/13; unfortunately, the first inequality fails.
The above analysis leads to the conclusion that only the relation described in Case (1) holds. Thereby, the expression of m(p) is

m(p) = 1 + 4(1 − p)/3  if p ≥ 4/9,
m(p) = 1 + 5p/3        if p < 4/9.
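This closed form is straightforward to encode and evaluate (our own illustration, not part of the paper):

```python
def m_example(p):
    """Closed-form infimum m(p) for a = 1/2, b = 1/4 (Case 1, p* = 4/9)."""
    return 1 + 4 * (1 - p) / 3 if p >= 4 / 9 else 1 + 5 * p / 3
```

Evaluating it at p = 1/3 and p = 3/4 reproduces the two values computed next.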

In addition, if the initial distribution is p = P{X1 = 0} = 1/3 = 1 − P{X1 = 1}, then m(1/3) = 1 + (5/3) × (1/3) = 14/9; if the initial distribution is p = 3/4, then m(3/4) = 1 + (4/3) × (1/4) = 4/3.
Problems  Up to now, the problem of how to obtain the minimum expected number of looks needed to detect the target has been solved completely, but some questions remain unanswered: does there exist a strategy whose expected number of looks equals the infimum? If such strategies exist, they are optimal without doubt; if not, what, then, is the optimal strategy? These problems are important because they can help the searcher draw up an effective search plan.
References

[1] Ahlswede R, Wegener I. Search Problems. Salisbury: John Wiley, 1979
[2] Assaf D, Sharlin-Bilitzky A. Dynamic search for a moving target. J Appl Prob, 1994, 31: 438–457
[3] Benkoski S J, Monticino M G, Weisinger J R. A survey of the search theory literature. Nav Res Log Quart, 1991, 38: 469–494
[4] Brown S S. Optimal search for a moving target in discrete time and space. Operations Research, 1980, 28: 1275–1289
[5] Eagle J N. The optimal search for a moving target when the search path is constrained. Operations Research, 1984, 32: 1107–1115
[6] Kan Y C. Optimal search for a moving target. Operations Research, 1977, 25: 864–870
[7] Macphee I M, Jordan B P. Optimal search for a moving target. Probability in the Engineering and Informational Sciences, 1995, 9: 159–182
[8] Moldenhauer C, Sturtevant N R. Optimal solutions for moving target search. 8th International Conference on Autonomous Agents and Multiagent Systems, 2009: 1249–1252
[9] Nakai T. A model of search for a target moving among three boxes: some special cases. J of Operations Research Society of Japan, 1973, 16: 151–162
[10] Pollock S M. A simple model of search for a moving target. Operations Research, 1970, 18: 883–903
[11] Ross S M. Introduction to Stochastic Dynamic Programming. New York: Academic Press, 1983
[12] Schweitzer P J. Threshold probabilities when searching for a moving target. Operations Research, 1971, 19: 707–709
[13] Singh S, Krishnamurthy V. The optimal search for a Markovian target when the search path is constrained: the infinite-horizon case. IEEE Transactions on Automatic Control, 2003, 48(3): 493–497
[14] Stone L D. Theory of Optimal Search. New York: Academic Press, 1975
[15] Weber R R. Optimal search for a randomly moving object. J Appl Prob, 1986, 23: 708–717