Three-step-ahead prediction for object tracking


Marjan Abdechiri*a,b, Karim Faeza, Hamidreza Amindavara, Javad Alikhani Koupaeib

a Electrical Engineering Department, Amirkabir University of Technology, Tehran, Iran

b Mathematics Department, Payame Noor University, P.O. Box 19395-3697, Tehran, Iran
* Corresponding author: [email protected]

Abstract

In this paper, a three-step-ahead prediction method is introduced that uses chaotic dynamics for state estimation in object tracking. The nonlinear movement of an object is embedded into a low-dimensional state space in order to exploit the short-term predictability of chaotic systems. The computational architecture of the method is structured as follows. A pseudo-orbit methodology embeds the high-dimensional observations of nonlinear movement into a pseudo trajectory with chaotic characteristics in the state space. After the Grey theory is applied to the pseudo trajectory in order to reduce its dimension, the fractal method is used to predict three states of the object's movement. For state correction, ensemble members are used to select the best state based on the likelihood function of the color model of the candidates. In order to evaluate the efficiency of the chaotic tracker, we compare it against tracking-by-detection and stochastic methods. The numerical results demonstrate that the method predicts the target under full occlusion and abrupt motion with a high level of accuracy; thus, the chaos-based target prediction is vastly superior to the existing trackers in these situations. The tracker can localize small targets in video sequences accurately. The proposed algorithm is about two times faster than the particle filter, while the error of the particle filter is larger than that of the proposed tracker. The limitations of the proposed method in cluttered backgrounds and complex scenes are also illustrated.


Keywords: Multi-step ahead prediction, fractal theory, Grey theory, pseudo-orbit, object tracking.


1. Introduction

Object tracking is an important topic in computer vision, with applications in motion analysis and pattern recognition. Tracking methods are used in motion analysis, traffic monitoring, and video analysis. Tracking is a challenging task because it is hard to estimate the target state under fast motion, low frame rates, and uncertain motion in image sequences. Tracking methods can be categorized into two groups, namely deterministic [1] and stochastic [2] methods. The mean shift algorithm is a kernel-based deterministic procedure which searches a region to maximize a similarity measure between a template image and the current image [3]. The method is computationally efficient, but it is sensitive to background distraction and clutter [4]. The Kalman filter [5, 6] and the particle filter [7, 8] estimate the next state based on Bayesian theory. The Kalman filter is developed for linear systems with Gaussian observational noise [9], so it cannot be applied to nonlinear movement. The particle filter can handle the nonlinearity and uncertainty of the model in the evaluation and analysis steps of visual tracking [10]. In the particle filter, however, a multimodal distribution may lead to a noisy estimate of the target position, and the method performs only one-step-ahead prediction. Many extended trackers have been proposed to improve the weaknesses of the particle filter; its main drawbacks are the limitations of the dynamic model, the computational complexity, and one-step prediction. Efficient and effective filtering based on the deterministic dynamics of motion information can reduce the computations and prediction errors of multi-step-ahead prediction for tracking [11, 12]. The multi-step-ahead prediction (MSP) method has been presented to handle these problems based on chaos theory [11]; the weaknesses of MSP are the 2D object tracking (the Ikeda map as the chaotic system) used for global search and the ensemble members used for local search. The chaotic particle filter (CPF) was introduced to improve the local search of MSP based on the particle filter [12].

The purpose of this paper is to introduce a chaos-based prediction method that yields a simple tracker able to predict and locate chaotic motions in video sequences. We propose a multi-step-ahead prediction using chaotic dynamics to handle the main weaknesses of the probabilistic and deterministic methods. The stochastic dynamics of object movement in video sequences are high dimensional and noisy. In order to use chaotic dynamics for state estimation in object tracking, the high-dimensional observations of nonlinear movement can be embedded into a pseudo trajectory in the state space. For this purpose, the Ikeda map is applied to create a deterministic trajectory in the state space for the past observations. Then the Grey theory is applied to the pseudo orbit to reduce the dimension of the state space for low-order prediction, and the fractal prediction method is introduced to predict the next three states based on the transformed data. A correction step corrects the state for each frame using the chaotic ensemble members and the color models of the candidates. We compare the performance of the three-step state estimation based on chaos theory against several stochastic methods based on particle filters on 15 video sequences. The results demonstrate that the chaos-based method significantly outperforms all trackers under fast motion and occlusion. All trackers suffer when tracking a small object, while our method can predict the target based on the dynamical information of motion. For fast motion and abrupt changes, the chaotic method tracks the object accurately while particle-filter-based methods cannot localize the target. To highlight the advantages of the chaotic tracker, the accuracy of the proposed algorithm is also compared with that of other algorithms on a large dataset. The results indicate that the proposed method improves the performance of the traditional particle filter under nonlinear movement and abrupt motion changes, and it reduces the parameter settings of the MSP and CPF.

The paper is organized as follows. The basic concepts of stochastic trackers are described in Section 2. In Section 3, we present chaos-based prediction methods for state estimation of nonlinear dynamics. In Section 4, we introduce the architecture of our tracking algorithm. In Section 5, the effectiveness of the proposed method is validated under different challenges. Finally, Section 6 concludes the paper with some suggestions for future research.

2. Stochastic methods

Motion estimation is an important step in predicting the target's location in each frame. In video sequences, the state of the object evolves over time. In the stochastic view, filtering methods can be used for motion estimation. The target state is modeled as an $n$-dimensional random vector $x_t$. The state evolution is described by a stochastic model from state $x_t$ to $x_{t+1}$. The target dynamics are modeled by

$$x_{t+1} = F(x_t, v_t) \qquad (1)$$

where $F$ is the evolution function over time, and $x_t$ and $v_t$ are the current state and the system noise, respectively. The observations are modeled by the measurement equation

$$z_t = h(x_t, w_t) \qquad (2)$$

where $h$ is the measurement function and $w_t$ is the measurement noise. The Bayes filter is a recursive tracking model which handles the multimodal problems of nonlinear and non-Gaussian systems. Bayesian filtering includes two main steps: prediction and updating. The predicted density is defined by

$$p(x_t \mid z_{1:t-1}) = \int p(x_t \mid x_{t-1})\, p(x_{t-1} \mid z_{1:t-1})\, dx_{t-1} \qquad (3)$$

where $p(x_t \mid x_{t-1})$ is the transition density. The measurement-update PDF is

$$p(x_t \mid z_{1:t}) = \frac{p(z_t \mid x_t)\, p(x_t \mid z_{1:t-1})}{\int p(z_t \mid x_t)\, p(x_t \mid z_{1:t-1})\, dx_t} \qquad (4)$$

where $p(z_t \mid x_t)$ is the likelihood function [13]. However, this approach is useful only when the dimensions of the state and observation spaces are low, because of the computational complexity. The particle filter is introduced as an approximation of Eq. (3) and Eq. (4) using a set of particles $\{x_t^1, x_t^2, \ldots, x_t^N\}$ and weights $\{w_t^1, w_t^2, \ldots, w_t^N\}$:

$$w_t^i = w_{t-1}^i \, \frac{p(z_t \mid x_t^i)\, p(x_t^i \mid x_{t-1}^i)}{q(x_t^i \mid x_{t-1}^i, z_t)}$$

where $q(x_t \mid x_{t-1}^i, z_t) = p(x_t \mid x_{t-1}^i)$ and the posterior density is

$$p(x_t \mid z_{1:t}) = \sum_{i=1}^{N} w_t^i\, \delta(x_t - x_t^i),$$

where $\delta(\cdot)$ is the Dirac delta function and the weights satisfy $\sum_{i=1}^{N} w_t^i = 1$. The method has an important limitation, filter divergence, which generates inappropriate samples in complex scenes and under occlusion.
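To make the recursion above concrete, the following is a minimal sketch of one step of a bootstrap (SIR) particle filter for a two-dimensional position state; the random-walk motion model, the Gaussian likelihood, and the noise levels are illustrative assumptions rather than the observation model of any tracker discussed here.

```python
import numpy as np

def sir_step(particles, weights, z, motion_std=5.0, obs_std=10.0, rng=np.random):
    """One bootstrap (SIR) particle filter step for a 2-D position state.

    particles: (N, 2) array of [x, y] states; weights: (N,) normalized weights;
    z: observed [x, y] position for the current frame.
    """
    # Prediction: propagate each particle with an assumed random-walk motion model
    # (the proposal equals the transition prior, so the weight update is the likelihood).
    particles = particles + rng.normal(0.0, motion_std, particles.shape)

    # Update: weight by a placeholder Gaussian likelihood p(z | x).
    d2 = np.sum((particles - np.asarray(z)) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / obs_std ** 2)
    weights = weights / (weights.sum() + 1e-12)

    # Systematic resampling to mitigate the degeneracy problem.
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    particles, weights = particles[idx], np.full(n, 1.0 / n)

    # Posterior mean as the state estimate for this frame.
    return particles, weights, particles.mean(axis=0)
```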

The observation model based on the area and invariant moments of the object region can be applied to the particle weights to address the uncertainty of the object information in a complex environment. In cluttered backgrounds, common features of the background and the target lead to failures and short-term errors. To solve this problem, a two-step estimation method was introduced to prevent excessive particle drifting [14]. However, in this method the error accumulates because the prediction and updating steps cannot reduce the degeneracy problem. The high-order particle filtering method for tracking extends the first-order Markov chain [15]. For an $m$-order Markov chain over the state space, the current state $x_t$ depends on the past $m$ states, $p(x_t \mid x_{t-1}, x_{t-2}, \ldots, x_{t-m})$, and the posterior density is approximated as

$$p(x_{0:t} \mid z_{1:t}) = \sum_{i=1}^{N} w_t^i\, \delta(x_{0:t} - x_{0:t}^i).$$

The posterior filtered density is

$$p(x_{t-m+1:t} \mid z_{1:t}) = \sum_{i=1}^{N} w_t^i\, \delta(x_{t-m+1:t} - x_{t-m+1:t}^i). \qquad (5)$$

The method reduces to the traditional particle filter when $m = 1$. The approach requires more memory for particle storage, and its computational time is not suitable for online tracking [15]. The particle filter suffers from a heavy computational load because it requires a large number of particles for robust tracking. To reduce the number of particles, the hierarchical Kalman-particle filter estimates the global linear motion using the Kalman filter and the local nonlinear motion using the particle filter [16]. Many extended particle filters have been proposed to improve the weaknesses of the particle filter in object tracking, including deep learning representations [17], the iterative particle filter [13], mean-state estimation and resampling of the particle filter with rough-set-theoretic fuzzy cues [18], and the extended-target box-particle filter [19]. The particle filter can handle abrupt-motion challenges using a saliency map of the current frame [20]. The mean shift algorithm has been embedded into the particle filter to reduce the number of particles [21]. A top-down visual computational model based on frequency analysis and the particle filter has been presented for abrupt-motion challenges [22]. The adaptive convolution particle filter introduced a theoretical expression of the generalized likelihood function in the presence of clutter [23]. Overall, the Kalman and particle filters have major limitations in the motion model and the high-dimensional search for visual object tracking.

3. Chaos prediction methods

Many methods aim at chaotic modeling to process nonlinear dynamics. Chaos theory improves short-term predictability and enhances the understanding of nonlinear dynamics [24]. It can be utilized in different applications, including compressive sensing [25, 26], object representation [27, 28], and optimization [29]. These methods need to find an optimal dimension and delay of the state space based on the observed data. The trajectory obtained in the state space reflects the determinism and regularity of chaotic systems. The deterministic model can be used for global chaos prediction. The deterministic model

$$Y(t+1) = f(Y(t)), \quad t = 1, 2, \ldots$$

is approximated by the map $\hat{f}$. Although global chaos prediction methods perform well in different applications, the computational complexity of finding the minimum of the cost $\sum_{t=1}^{N} [Y(t+1) - \hat{f}(Y(t))]^2$ is high.

The fractal prediction can reduce this computational complexity by using the Grey theory for a small number of data.

Definition 1: Grey theory. Suppose that an original time series is $x^{(0)}(t) = \{x^{(0)}(1), x^{(0)}(2), \ldots, x^{(0)}(n)\}$ and the accumulating generation operator transforms the data to $x^{(1)}(t)$; then

$$x^{(1)}(t) = \{x^{(1)}(1), x^{(1)}(2), \ldots, x^{(1)}(n)\}, \qquad x^{(1)}(t) = \sum_{i=1}^{t} x^{(0)}(i),$$

where $x^{(1)}(t)$ is the one-order accumulation of $x^{(0)}(t)$ [30]. In this case, the dimension of the state space is 1. The inverse accumulating generation operator can be applied to obtain the $n$-step-ahead prediction. Short-term prediction methods have been demonstrated by using the chaotic characteristics of time series. The fractal prediction uses the scale-invariance and self-similarity of the trajectory, characteristics that are helpful for simplifying and analyzing problems.
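The following is a small sketch of the accumulating generation operator of Definition 1 and of its inverse, which is assumed here to be the first-order difference of the accumulated series.

```python
import numpy as np

def ago(x0):
    """Accumulating generation operator of Def. 1: x1(t) = sum_{i<=t} x0(i)."""
    return np.cumsum(np.asarray(x0, dtype=float))

def inverse_ago(x1):
    """Inverse accumulating generation operator: recover x0 by differencing."""
    x1 = np.asarray(x1, dtype=float)
    return np.concatenate(([x1[0]], np.diff(x1)))

# Example: accumulating a short series and inverting recovers the original data.
x0 = [3.0, 1.0, 4.0, 1.0, 5.0]
assert np.allclose(inverse_ago(ago(x0)), x0)
```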

Definition 2: Fractal theory. A fractal function can be defined by the power-exponent distribution

$$N(r) = c\, r^{-D} \qquad (6)$$

where $N(r)$ is the observation data at time $r$ [31]. The fractal dimension $D$ and the constant $c$ are given by

$$D = \frac{\ln(N_A / N_B)}{\ln(r_B / r_A)}, \qquad c = N_A r_A^{D} = N_B r_B^{D}.$$

The fractal theory can be used in chaos prediction methods. Data sequences cannot always be fitted with a single fractal dimension; therefore, the sectioned variable-dimension fractal is introduced to make predictions, and the Grey theory can be used to improve the prediction accuracy. The optimal dimension is estimated so as to directly reduce the intersections of points in the phase space [31]. For the predicted existence of chaos, three conditions are necessary in nonlinear systems: a trajectory represented by a closed path or a stable predicted limit cycle, an unstable equilibrium point, and a suitable filtering effect that creates the stability properties between the predicted limit cycle and the equilibrium point of the system.

4. The proposed algorithm

To propose a simple chaotic prediction for object tracking, we present an effective filtering of the system to reduce the prediction error. The filtering method reduces the dimension of the system using chaotic motion prediction and exploits the instability of the system. In the first step of the proposed method, to obtain the existence of chaos in the object movement, we perform the filtering using the Ikeda map and gradient minimization to guide the past observations of motion into chaos; then we use the fractal theory to localize the target. In the object tracking problem, the object moves in time. At first, such movements were assumed to be a consequence of random and unpredictable events. Chaos theory can be used to understand this random-like dynamics with a deterministic model that is predictable in the short term with simple modeling. To handle the limitations of stochastic methods, we propose a chaotic framework that captures the evolution of the object's state over time in video sequences.

Figure 1. The chaos prediction for object tracking. (Block diagram: for each frame, the observations $\{X_{n-1}, \ldots, X_{n-m}\}$ initialize a pseudo-orbit $y_{n-1}, \ldots, y_1, y_0$ computed with the Ikeda map $F$ and gradient descent on $C(U)$, $U \leftarrow U - \partial C(U)/\partial u_n$; the Grey theory and fractal prediction then perform dimensionality reduction and multi-step-ahead prediction of the states $\{X_n, X_{n+1}, X_{n+3}\}$ from the cumulative series $\{N_i\}, \{N1_i\}, \ldots, \{N4_i\}$.)

However, a filtering method using deterministic chaotic dynamics can significantly reduce the computations and the prediction errors for tracking across frames. The current paper presents a three-state estimation method to predict an object in video sequences based on nonlinear chaotic dynamics. A framework of the object tracking based on chaos prediction is shown in Figure 1. The past observations of the object's states have high-dimensional stochastic dynamics. In order to use chaos theory for state estimation in object tracking, the high-dimensional observations should be embedded into a chaotic or pseudo trajectory in the state space. For this purpose, a pseudo-orbit methodology is applied to the previous observations using chaotic maps in order to extract a trajectory of observations; the chaotic map is used to create a deterministic map between successive observations. Then, as a preprocessing step, the Grey theory is applied to the pseudo trajectory in order to reduce the dimension of the state space for low-order prediction using Definition 1. The transformed data are modeled using the fractal theory to predict the next three states as a multi-step-ahead prediction step using Eq. (6) of Definition 2. The ensemble members of the state are generated by the chaotic system to correct the state. The color model is used as an appearance model to select the object template from the candidates based on the likelihood function, which is applied to correct the states of the object. Then the target color model is updated in order to reduce drift.

4.1. Problem formulation

We manually initialize a bounding box around an object as the target for tracking. The state of the target is a two-dimensional vector $X_n = (x_n, y_n)$, where $x_n$ and $y_n$ give the object's location. The object movement is modeled as

$$X_{n+1} = F(X_n, a) \qquad (7)$$

The map $F$ describes the dynamics of the movement. The pseudo-orbit approach is applied to $m$ frames with states $\{X_{n-1}, \ldots, X_{n-m}\}$ in order to generate a trajectory. The fractal prediction is then used to estimate the states $\{X_n, X_{n+1}, X_{n+3}\}$ from the past $m$ states. In this step, the optimal dimension of the transformed data is found using Definition 2. Then, the algorithm generates a set of ensemble members to correct the three states using the corresponding weights of the observation models (color model) [32, 33]. To evaluate how well a candidate state represents the target, we use the histogram measurement of the ensemble members $\tilde{X}$. The distance between the reference histogram and the candidate histograms is computed using the KL divergence [34]. The likelihood is computed by comparing color histograms in RGB space or a gray-level histogram. For gray images we use the MATLAB function "imhist"; for RGB images we use LAB histograms, converting the RGB image to LAB space with channels L, a, and b [35]. The histograms of the channels are computed over the histogram bin edges and concatenated into a single multi-channel histogram; the normalized histogram is taken as the LAB histogram. The color model of the bounding box defined by $X_n$ is extracted using this LAB histogram. The likelihood is modeled as

$$p = p(z_n \mid \tilde{X}) = e^{-\lambda\, dist(hist,\, hist^*)} \qquad (8)$$

where $dist$ is the KL divergence of the two histograms, $hist$ is the color histogram of the image region in the previous frame, and $hist^*$ is the color histogram of the ensemble member corresponding to $\tilde{X}_n$. The parameter $\lambda$ is a hyperparameter that adjusts the sensitivity of the likelihood function. The next state is the mean state vector computed from all members by a weighted averaging of the state vectors.
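A minimal sketch of the color-likelihood computation of Eq. (8), assuming 8-bit image channels, 16 histogram bins per channel, and λ = 1; the LAB conversion used in the paper is omitted and would precede the histogram step.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-10):
    """KL divergence between two normalized histograms."""
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def patch_histogram(patch, bins=16):
    """Concatenated per-channel histogram of an image patch (H, W, C), normalized.
    Assumes 8-bit channel values; a LAB conversion would be applied beforehand."""
    hists = [np.histogram(patch[..., c], bins=bins, range=(0, 255))[0]
             for c in range(patch.shape[-1])]
    h = np.concatenate(hists).astype(float)
    return h / (h.sum() + 1e-10)

def likelihood(hist_ref, hist_candidate, lam=1.0):
    """Color likelihood of Eq. (8): p = exp(-lambda * dist(hist, hist*))."""
    return float(np.exp(-lam * kl_divergence(hist_ref, hist_candidate)))
```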

4.2. Pseudo orbit methodology

In the pseudo-orbit method, a chaotic map is applied to generate a trajectory in the state space for the past observations [36]. The chaotic model $F$ is the Ikeda map [36] and the parameters of the system are $a \in \mathbb{R}^l$. The dynamical system is described by the equation

$$F: \quad \begin{cases} x_{n+1} = \gamma + u\,(x_n \cos\theta - y_n \sin\theta) \\ y_{n+1} = u\,(x_n \sin\theta + y_n \cos\theta) \end{cases} \qquad (9)$$

where $\theta = \varphi - \alpha/(1 + x_n^2 + y_n^2)$, $\alpha = 6$, $\varphi = 0.4$, $\gamma = 1$, and $u = 0.83$.
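A direct implementation of the Ikeda map of Eq. (9) with the parameter values stated above; the initial point and the number of iterations are arbitrary illustrative choices.

```python
import numpy as np

def ikeda_step(x, y, phi=0.4, alpha=6.0, gamma=1.0, u=0.83):
    """One iteration of the Ikeda map of Eq. (9)."""
    theta = phi - alpha / (1.0 + x ** 2 + y ** 2)
    x_next = gamma + u * (x * np.cos(theta) - y * np.sin(theta))
    y_next = u * (x * np.sin(theta) + y * np.cos(theta))
    return x_next, y_next

# Generate a short trajectory on the Ikeda attractor.
pts = [(0.1, 0.1)]
for _ in range(100):
    pts.append(ikeda_step(*pts[-1]))
```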

The observation is $S_n = h(X_n) + \eta_n$, where $h(\cdot)$ is the observation operator and the observational noise is $\eta_n \in \mathbb{R}^2$. The number of observations in each window is $m$ and the dimension of the model $F$ is $M$; therefore, the observations are embedded into an $M \times m$ dimensional state space. In the state space with $m \times n$ points, some $u_n$ are trajectories of the model, $u_{n+1} = F(u_n)$, and some are not, $u_{n+1} \neq F(u_n)$ [36]. A pseudo orbit $U \equiv \{u_{m-1}, \ldots, u_1, u_0\}$ is considered as a point in the $m \times n$ dimensional state space obtained from the observations, with $u_{n+1} \approx F(u_n)$. The gradient descent (GD) algorithm is used to minimize the mismatch error $e_n = F(u_n) - u_{n+1}$, $n = m-1, \ldots, 1$, with the cost function $C(U) = \sum_n e_n^2$, as can be seen in Figure 2. The Ikeda attractor $F$ is considered to model the object movement in the state space.

Figure 2. Pseudo orbit method.

The pseudo-orbit $y_{n-1}, \ldots, y_1, y_0$ is obtained from the observations using the gradient direction

$$\nabla C(U) = 2 \begin{cases} -\left(u_{n+1} - F(u_n)\right) d_n F(u_n), & n = m-1, \\ u_n - F(u_{n-1}) - \left(u_{n+1} - F(u_n)\right) d_n F(u_n), & m-1 > n > 0, \\ u_n - F(u_{n-1}), & n = 0, \end{cases} \qquad (10)$$

where $d_n F(u_n)$ is the Jacobian matrix of $F$, and the pseudo orbit $U$ is updated as

$$U \leftarrow U - \frac{\partial C(U)}{\partial u_n} \qquad (11)$$

to generate an optimal chaotic trajectory in the state space.
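The following is a numerical sketch of this pseudo-orbit gradient descent: it assembles the gradient of the mismatch cost term by term, in the spirit of Eqs. (10)-(11), but uses a finite-difference Jacobian in place of the analytic one; the step size and iteration count are assumed hyperparameters.

```python
import numpy as np

def jacobian(F, u, eps=1e-6):
    """Finite-difference Jacobian d_n F(u) of the map F at the point u."""
    u = np.asarray(u, dtype=float)
    f0 = np.asarray(F(u))
    J = np.zeros((f0.size, u.size))
    for k in range(u.size):
        du = np.zeros_like(u)
        du[k] = eps
        J[:, k] = (np.asarray(F(u + du)) - f0) / eps
    return J

def pseudo_orbit_gd(F, U, steps=50, lr=0.1):
    """Gradient descent on the mismatch cost C(U) = sum_n ||u_{n+1} - F(u_n)||^2,
    with the update U <- U - grad C(U) of Eq. (11).

    F maps a state vector to the next state, e.g.
    F = lambda u: np.asarray(ikeda_step(u[0], u[1])).
    """
    U = np.array(U, dtype=float)                  # shape (m, d): sequence of states
    for _ in range(steps):
        grad = np.zeros_like(U)
        for n in range(len(U) - 1):
            e = U[n + 1] - np.asarray(F(U[n]))    # mismatch e_n
            grad[n] += -2.0 * jacobian(F, U[n]).T @ e
            grad[n + 1] += 2.0 * e
        U -= lr * grad
    return U
```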

4.3. Multi-step-ahead prediction and tracking

In this paper, we use $m$ past observations to predict $p$ states. These parameters have a significant effect on the accuracy of the proposed method; we illustrate this effect in Section 5.3. The results demonstrate that $m = 20$ and $p = 3$ are the best parameters for prediction.

ym / 2 and equation zn1  F ( zn ) with ym / 2  zm / 2 . The starting point zm/ 2 is perturbed to generate some candidate trajectories using random variable  , which is Gaussian with zero mean and standard deviation of the difference



IP

1 0 h( zn* )  Sn T  1 h( zn* )  Sn  2 n m / 2

(12)

CR

L( z * ) 

T

between the truth and zm/ 2 [36]. Then, ensemble initial conditions are selected from the candidate trajectories using a likelihood function

US

where  1 is the inverse of the covariance matrix, z n* is candidate trajectory and the end component of selected candidate trajectory is considered as an ensemble member. Then, we use the chaotic model to predict two remaining states.

For the next frames, we use the 20 previous states to predict three states as the multi-step-ahead prediction. The fractal prediction is applied to the pseudo-orbit trajectory for the three-step prediction. For the fractal prediction, the Grey theory is applied to the 20 states of the trajectory to reduce the dimension of the state space. Suppose that the observations are $\{N_i\} = \{N_1, N_2, \ldots, N_n\}$ [31]. The cumulative-sum series of $\{N_i\}$ are

$$\{N1_i\} = \{N1_1, N1_2, \ldots, N1_n\}, \quad \{N2_i\} = \{N2_1, N2_2, \ldots, N2_n\}, \quad \{N3_i\} = \{N3_1, N3_2, \ldots, N3_n\}, \quad \{N4_i\} = \{N4_1, N4_2, \ldots, N4_n\},$$

where

$$N1_i = \sum_{j=1}^{i} N_j, \quad N2_i = \sum_{j=1}^{i} N1_j, \quad N3_i = \sum_{j=1}^{i} N2_j, \quad N4_i = \sum_{j=1}^{i} N3_j.$$

This preprocessing generates a similar sectioned fractal dimension for the observations. The inverse accumulating generation operator is then used to predict the states based on the best fractal parameter $D$ obtained from Eq. (6) [30], as follows:

$$\hat{x}^{i}(n+1) = \hat{x}^{i+1}(n+1) - \hat{x}^{i+1}(n). \qquad (13)$$
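A sketch of one fractal prediction step on a Grey-accumulated series, combining Definition 1, the power law of Eq. (6), and the inverse operator of Eq. (13); the accumulation depth, the log-log least-squares fit of D, and the assumption of positive observations (e.g., pixel coordinates) are illustrative choices, not the exact procedure of the paper.

```python
import numpy as np

def fractal_one_step(series, level=2):
    """One-step-ahead fractal prediction on a Grey-accumulated series (sketch)."""
    levels = [np.asarray(series, dtype=float)]
    for _ in range(level):                       # cumulative-sum series {N1}, {N2}, ...
        levels.append(np.cumsum(levels[-1]))

    top = levels[-1]
    r = np.arange(1, len(top) + 1, dtype=float)
    slope, intercept = np.polyfit(np.log(r), np.log(top), 1)   # log N = log c - D log r
    D, c = -slope, np.exp(intercept)
    pred = [c * (len(top) + 1) ** (-D)]          # extrapolated accumulated value

    # Inverse accumulation, Eq. (13): x^i(n+1) = x^{i+1}(n+1) - x^{i+1}(n).
    for lv in range(level, 0, -1):
        pred.append(pred[-1] - levels[lv][-1])
    return pred[-1]                              # prediction on the original series
```

Applying such a step to each coordinate of the trajectory, and feeding the predictions back in, would yield the three-step-ahead estimates used by the tracker.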

Therefore, our method can be applied for multi-step-ahead prediction, based on the short-term predictability of chaotic systems, for the two coordinates of the object location. An overview of the proposed method is given in Algorithm 1. Finally, we update the target model because the appearance of the target changes during the tracking process. The model is updated for each bin as

$$hist_t = (1 - \rho)\, hist_{t-1} + \rho\, hist_E \qquad (14)$$

where $\rho$ is a persistence factor that controls the abrupt motion and appearance changes and weights the contribution of the mean-state histogram. The mean state of the object is estimated at each time as $E = \sum_{n=1}^{N} p\, x_n$.

Algorithm 1. The chaotic tracker

7

ACCEPTED MANUSCRIPT Input: the previous states

xn20 ,..., xn2 , xn1 , set the parameters of

Ikeda map, i=100. Read frame (n=1:N) Find the Pseudo-orbit trajectory. Using the gradient descent method to update U  U  C (U ) Using Eq. (10) for i steps. u n If n<20 Using Eq. (12) and ensemble members to predict three states [36].

IP

T

Else Three-step estimation. The Grey theory and Eq. (6) is used to predict the sates with Eq. (13).

CR

End if Apply the correction method. A set of ensemble members is used to correct the states using the corresponding weights of the observation models using Eq. (8).

US

Update the observation model using Eq. (14). n=n+3. End for n

Under occlusion and sudden changes, the model is not updated when the tracker has lost the object. Therefore, the updating condition is $p_{best} > p_{Tr}$, where $p_{best}$ is the observation probability of the best state and $p_{Tr}$ is a threshold [16].
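A high-level sketch of the loop of Algorithm 1 is given below; the callables predict, correct, and update_model stand for the pseudo-orbit and fractal prediction of Sections 4.2-4.3, the ensemble correction of Eq. (8), and the model update of Eq. (14), and are assumptions of this sketch rather than the authors' implementation.

```python
import numpy as np

def chaotic_tracker(frames, init_center, predict, correct, update_model,
                    m=20, p=3, p_tr=0.5):
    """High-level loop of Algorithm 1 (sketch).

    predict(states)                    -> p predicted centres (Secs. 4.2-4.3),
    correct(frame, centre, model)      -> (corrected centre, likelihood score),
    update_model(model, frame, centre) -> updated color model (Eq. (14)).
    p_tr is an assumed occlusion-gating threshold (p_Tr in the text).
    """
    states, model = [np.asarray(init_center, dtype=float)], None
    n = 0
    while n < len(frames):
        for k, x_hat in enumerate(predict(states[-m:])):      # three-step prediction
            if n + k >= len(frames):
                break
            x_corr, score = correct(frames[n + k], x_hat, model)
            if model is None or score > p_tr:                  # no update when occluded
                model = update_model(model, frames[n + k], x_corr)
            states.append(np.asarray(x_corr, dtype=float))
        n += p                                                 # advance three frames
    return states
```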


4.4. Prediction error analysis

In this section, we analyze the prediction error of the fractal method. Let $X_1, \ldots, X_m$ be observations of the object states. The prediction method estimates three states $X_{T+1}, X_{T+2}, X_{T+3}$. For random data, the prediction error increases as the prediction interval increases [37]. For chaotic dynamics, the prediction error starts out small for a small prediction interval, and for short-term predictions of a chaotic system the prediction errors approach zero at the minimum embedding dimension [24]. In the pseudo-orbit methodology, we extract a pseudo trajectory and exploit the accuracy of short-term chaos prediction. The variance $\sigma_{p+1}$ of the $(p+1)$-step-ahead prediction error can be smaller than the variance $\sigma_p$ of the $p$-step-ahead prediction error when $\{F'[F^{p}(x)]\}^{2} < 1 - 1/\sigma_{p}$ [38], based on Kullback-Leibler and Fisher information [38]. Thus, $m$-step-ahead prediction can reduce the prediction errors for object tracking.

5. Experimental results

To evaluate the performance of the proposed algorithm, we conduct experiments on 15 benchmark sequences and compare with 10 trackers. The sequences contain several challenges, including illumination changes, fast motion, and occlusion. The proposed method is applied to Bike, David3, Face, Jogging, Subway, Walking, and Woman for occlusion. In the same vein, the Boy, Car scale, Couple, Crossing, and Owl sequences are investigated for fast motion. The Car4, Dog1, and Singer1 video sequences include illumination, scale, and pose changes. The proposed tracker is compared against the VTD [39], Struck [40], TLD [41], CT [42], KCF [43], LRT [44], SH [45], PF [46], LGT [47], and RPF [14] trackers. The PF algorithm uses a color histogram as features and position, scale, and velocity as the target state, with 100 particles. The RPF method applies the color histogram and the histogram of gradients (HOG) with two-step prediction to handle occlusions. We implemented some trackers in MATLAB, including PF, KCF, and SH, with the parameters reported in the original papers. For the tracking-by-detection methods, we set the parameters as follows. In the SH algorithm, the number of positive templates is set to 10, the binary hash function K is set to 100, the number of negative templates is set to 1000, and the balancing parameters for the hash-function optimization are set to 0.01 and 0.7. In the proposed tracker, we set the number of past observations for prediction to m = 20 and the number of prediction steps to p = 3. The number of iterations for the pseudo orbit is K = 50 and the number of ensembles is M = 30, which decreases the computational complexity of the estimation process. In our method, both K and M are much smaller than the number of particles in the other algorithms. Our algorithm is independently run ten times on each dataset, and we report the average over the ten runs. The algorithms are implemented on a computer with a 2-core CPU at 2.2 GHz and 2 GB RAM running Windows 7.

In this paper, two metrics are used to quantitatively evaluate the performance of the algorithms: the center location error and the success rate, which measure the precision and robustness of a tracker. The center location error measures the Euclidean distance between the center of the tracked position and the ground-truth label [48]. It is defined as

$$error(t) = dist(center(t), gt(t)), \qquad (15)$$

where $t$ is the frame number, $dist$ is the Euclidean distance, $center(t)$ is the center location of the tracked object, and $gt(t)$ is the ground-truth center of the object location. The average center location error over $K$ runs is calculated as

$$\overline{error}(t) = \frac{1}{K} \sum_{k=1}^{K} error_k(t). \qquad (16)$$

The success rate counts the number of successfully tracked locations in a video sequence. We use the PASCAL score to calculate the success rate: with ground-truth bounding box $B_T$ and tracked bounding box $B_R$,

$$PS = \frac{|B_R \cap B_T|}{|B_R \cup B_T|},$$

and a tracking result is considered correct if $PS > 0.5$ [48].
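The two metrics can be computed as follows; bounding boxes are assumed to be given as (x, y, w, h).

```python
import numpy as np

def center_error(center, gt_center):
    """Euclidean center location error of Eq. (15)."""
    return float(np.linalg.norm(np.asarray(center, float) - np.asarray(gt_center, float)))

def pascal_overlap(box_r, box_t):
    """Intersection-over-union (PASCAL) score for boxes (x, y, w, h);
    a frame counts as a success when the score exceeds 0.5."""
    xr, yr, wr, hr = box_r
    xt, yt, wt, ht = box_t
    ix = max(0.0, min(xr + wr, xt + wt) - max(xr, xt))
    iy = max(0.0, min(yr + hr, yt + ht) - max(yr, yt))
    inter = ix * iy
    union = wr * hr + wt * ht - inter
    return inter / union if union > 0 else 0.0
```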

5.1. The efficiency of the proposed method in different challenges

All trackers fail when tracking a small object, while our method can predict the target based on the dynamical information of motion. The particle-based methods are sensitive to the confidence scores of the particles and to the features. In Figure 3, the tracked target is shown with a blue bounding box, and the ensemble members of the proposed method and the particles of the PF method are shown with green bounding boxes. Figure 3 shows that the tracker can handle the fast motion of a small object in the Car sequence (frame #201), while the particle filter misses the target with 100 particles.

Figure 3. Results of small-object tracking with fast motion on the Car sequence: the particle filter (left) with 100 particles and the proposed tracker (right) with 10 ensemble members, for frame 201. The blue bounding box marks the tracked target; the green boxes mark the ensemble members (right) and the particles (left).


Figure 4. Sampled tracking results of the PF method and our algorithm on the David and Face occluded sequences, which contain fast motion and various occlusions; frame #1 shows the target. The particle filter uses 100 particles, while the proposed method uses 10 ensemble members to localize the target.

For fast motion and abrupt changes, the chaotic method tracks the object accurately, while the particle filter method [46] cannot estimate the next state. As can be seen in Figure 4, the accuracy of our method is higher than that of the particle filter under fast motion and motion blur in the David sequence (frames 120 and 205). Under occlusion (frames 50 and 779), the proposed tracker localizes the object state, whereas the particle filter loses the target during the occlusion changes in Face occluded. Although the proposed method deals effectively and efficiently with occlusion and fast motion, the color features and the two-dimensional state vector for motion estimation are its main limitations. The tracker cannot handle heavy pose changes, illumination changes, and background clutter well, since it is not equipped with a discriminative appearance model. As can be seen in Figure 5, although the proposed method can track the target, the color features are not good features for separating the target from its background under motion blur and background clutter, where the color features of the background and the target are the same. In the motion estimation, the Ikeda map predicts the target based on a two-dimensional vector of the center coordinates of the object. Therefore, the proposed method cannot fit the bounding box to the object's region, which leads to poor updating of the appearance model; this case is shown in Figure 5. To address these challenges, we could use chaotic dynamics of higher dimension to estimate the scale, rotation, and translation, improving the correction step and the accuracy of the algorithm.

Figure 5. Tracking results under background clutter and in a complex scene: the proposed method and the particle filter on the Car dark video sequence, frames 1, 125, 293, and 303.

5.2. Comparison of the proposed tracker with the state of the art

In this section, we compare the performance of the three-step state estimation based on chaos theory against several stochastic methods based on particle filters. The purpose of the experiments is to test the robustness of the proposed tracker against abrupt changes and occlusion. As can be seen in Table 1, the tracker is more robust under these challenges. The algorithm locates the object using a low-dimensional state space and multi-step-ahead prediction, while particle filters make assumptions about the motion model and use a high-dimensional search in visual object tracking. Object tracking based on chaotic dynamics is a low-dimensional deterministic method which can model the complex dynamics of movements. The proposed method uses rich motion information to extract the trajectory of movement from the past 20 frames. The Grey theory and fractal prediction can predict the three states in the low-dimensional space. This information, together with the color histogram, helps to find the object positions in the next four frames under different challenges, whereas the particle filters may lose the target or fail to locate the target position correctly. As can be seen in Tables 1 and 2, the proposed method significantly outperforms all the stochastic and probabilistic methods on the average over the sequences; averaged over both metrics, our method significantly outperforms all the other methods. In the occlusion sequences (Bike, David3, Face, Jogging, Subway, Walking, and Woman), the chaotic tracker is very effective under heavy occlusion changes. In terms of the tracking success rates, the proposed method obtains the best accuracy in all occlusion sequences; in terms of the error metric, the method has the first or second lowest error in 5 out of 7 occlusion videos. The VTD [39], CT [42], TLD [41], Struck [40], KCF [43], and LRT [44] trackers perform poorly in most video sequences.

1) In the occlusion: The VTD, TLD, and CT trackers fail due to template distraction during the updating process; their success rates and center errors show that these methods lose the target under occlusion. As can be seen in Table 2, the precision of VTD, TLD, and CT on Jogging is low. On Walking, the VTD, TLD, and CT trackers are not robust to occlusion changes, while the proposed method and the SH tracker succeed under occlusion changes. The success rates and center errors of the proposed method show that the chaos-based tracker can localize the target accurately. The SH method is based on a local patch-pooling method and adaptive template updating; the proposed method, using motion information, can handle occlusions better than the SH tracker. Our algorithm re-finds the target after losing it under heavy occlusion through motion estimation with the dynamical information of motion. The proposed method can locate the target under occlusion without model updating, and the three-step prediction handles drifting when the target is occluded.

2) In the fast motion: As can be seen in Table 1 and Table 2, our method significantly outperforms all trackers on the Boy, Car scale, Couple, Crossing, and Owl sequences with fast motion. As in the occlusion sequences, the chaotic tracker is very effective for tracking a target with fast motion. For blurred images, tracking methods use different approaches to extract discriminative features and to separate the target from the background; in stochastic methods, the feature-extraction methodology is very important for tracking under fast motion. The proposed method can track the target accurately even though it uses simple color features and dynamic information for state estimation. On the Owl sequence, the proposed method and the SH tracker handle the fast-motion challenges. For these methods, it is difficult to extract effective features from the blurred images for representation. The proposed method performs well based on chaos theory; the dynamic information contributes to a more accurate target location.

3) In the illumination and appearance changes: In Car4, the CT and LRT trackers lose the target under the illumination and scale variations. In the Singer1 sequence, the TLD, CT, and Struck trackers cannot locate the object accurately because the features of the small object are not sufficiently discriminative. In the Car4, Dog1, and Singer1 video sequences, the proposed method adapts to these changes by benefiting from motion information and online updating.

Table 1. The center errors of the state-of-the-art trackers and our algorithm (sequence order: Bike, Boy, Car4, Car scale, Couple, Crossing, David3, Dog1, Face, Jogging, Owl, Singer1, Subway, Walking, Woman, Average).
VTD [39]: 9.8, 7.6, 12.3, 38.5, 104, 26.1, 66.7, 11.0, 11.1, 83.3, 86.8, 4.1, 141, 5.8, 137, 49.67
Struck [40]: 8.6, 3.8, 7.7, 36.4, 11.3, 2.8, 107, 5.7, 6.9, 62.1, 71.9, 21.9, 4.5, 4.6, 4.3, 23.96
TLD [41]: 4.5, 2.5, 24.3, 4.2, 15.4, 6.7, 30, 10.2, -
CT [42]: 214, 9.0, 234, 26.0, 36.4, 3.6, 88.7, 7.0, 30.7, 92.5, 150, 19.4, 11.1, 1.9, 113, 69.15
KCF [43]: 7.7, 2.9, 9.9, 16.1, 47.5, 2.2, 4.3, 4.2, 16.0, 88.3, 183.4, 12.8, 3.0, 4.0, 10.1, 27.49
LRT [44]: 9.4, 14.8, 83.8, 7.9, 115.5, 4.2, 86.0, 3.7, 17.4, 109, 176.2, 13.1, 148.8, 3.3, 157.3, 63.36
SH [45]: 11.5, 6.7, 11.4, 15.3, 8.0, 4.2, 13.1, 3.5, 4.2, 6.1, 7.1, 6.3, 3.5, 2.1, 4.3, 7.15
PF [46]: 38, 16, 41, 17, 13, 145, 79, -
LGT [47]: 52, 54, 6, 14, 92, 6, 6, -
RPF [14]: 9, 24, 10, 16, 12, 9, 4, -
Ours: 7.4, 2.3, 7.3, 7.6, 6.9, 2.8, 4.1, 8.5, 5.7, 16.4, 6.9, 10.4, 3.0, 8.4, 8.0, 7.04

Table 2. The success rates of the state-of-the-art trackers and our algorithm (sequence order: Bike, Boy, Car4, Car scale, Couple, Crossing, David3, Dog1, Face, Jogging, Owl, Singer1, Subway, Walking, Woman, Average).
VTD [39]: 0.70, 0.63, 0.73, 0.44, 0.07, 0.32, 0.41, 0.60, 0.77, 0.16, 0.12, 0.79, 0.16, 0.46, 0.15, 0.43
Struck [40]: 0.71, 0.77, 0.49, 0.41, 0.54, 0.69, 0.29, 0.71, 0.83, 0.17, 0.19, 0.35, 0.66, 0.59, 0.61, 0.53
TLD [41]: 0.20, 0.67, 0.64, 0.43, 0.78, 0.41, 0.10, 0.59, 0.65, 0.77, 0.60, 0.79, 0.19, 0.46, 0.60, 0.52
CT [42]: 0.14, 0.60, 0.24, 0.44, 0.47, 0.69, 0.31, 0.54, 0.60, 0.18, 0.10, 0.34, 0.58, 0.01, 0.17, 0.36
KCF [43]: 0.71, 0.78, 0.48, 0.42, 0.20, 0.71, 0.77, 0.55, 0.75, 0.71, 0.19, 0.35, 0.76, 0.53, 0.71, 0.57
LRT [44]: 0.70, 0.49, 0.26, 0.48, 0.07, 0.70, 0.41, 0.68, 0.73, 0.70, 0.09, 0.42, 0.18, 0.69, 0.16, 0.45
SH [45]: 0.60, 0.70, 0.72, 0.57, 0.60, 0.64, 0.61, 0.82, 0.90, 0.60, 0.77, 0.79, 0.73, 0.72, 0.76, 0.70
PF [46]: 0.28, 0.35, 0.31, 0.51, 0.55, 0.09, 0.30, -
LGT [47]: 0.31, 0.43, 0.55, 0.60, 0.09, 0.53, 0.48, -
RPF [14]: 0.44, 0.47, 0.52, 0.63, 0.57, 0.52, 0.69, -
Ours: 0.63, 0.79, 1.00, 0.52, 0.86, 0.75, 0.77, 0.75, 0.93, 0.79, 0.80, 0.75, 0.80, 0.76, 0.81, 0.78

The proposed method is a high-order estimation and multi-step-ahead prediction method, and the tracker is memory-based. Therefore, the proposed method improves the tracking performance under fast motion and occlusion.


5.3. Effect of parameters and runtime

There is a trade-off between the two parameters of the proposed method. We investigate the effects of the parameters using the occlusion and fast motion sequences (12 videos). In the proposed method, the number of observations is a very important parameter for the prediction. If the number of observations is too small, the algorithm is not robust to occlusion and fast motion; the method can capture the dynamical information of the movement when the number of observations is large enough. In the prediction step, the algorithm needs more memory to keep the states of the past observations, whereas the stochastic methods need more memory for the particles. The number of observations m has an important effect on the stability of the chaos-based method. To investigate the stability of the method, we consider four scenarios for the occlusion and fast motion sequences with different numbers of observations: Scenario 1 uses the past observations to predict the remaining two states, Scenario 2 predicts the remaining three states, Scenario 3 predicts the remaining four states, and the last scenario predicts five states. The results are shown in Figure 6.

Figure 6. The average center error on the occlusion and fast motion sequences for different parameters m (the number of observations, 15 to 25) and p (two-, three-, four-, and five-step predictions).

The results illustrate that when fewer than 17 observations are used to predict the remaining states, the fractal prediction fails to make an accurate prediction for the video sequences, and the failure becomes more serious as the number of observations decreases. Therefore, the number of observations should be larger than 18. The stability of the fractal method shows that the time series can be predicted using less anterior data [31]: when only the initial 17 data points were used to predict the remaining data, the fractal prediction failed to make an accurate prediction [31], and the failure became more serious when the number of data points was smaller [31]. The results show that the proposed method achieves its best performance with 20 observations and 3 prediction steps, which balances accuracy and speed. Figure 6 illustrates that the prediction accuracy with m = 20 is higher than with m = 18. For four- and five-step predictions, the proposed method needs a larger number of observations; in those cases, the fractal prediction cannot find the optimal dimension for prediction directly. The results of three-step-ahead prediction are shown in Table 3 for the 15 video sequences. The success rates and center errors demonstrate that the proposed method achieves its best performance with 20 past observations.

Table 3. Effect of the parameter m on all video sequences for three-step-ahead prediction.

              #observations m    Average center error    Average success rate
Scenario 1    <17                12.15                   0.61
Scenario 2    18                 7.10                    0.76
Scenario 3    19                 7.08                    0.74
Scenario 4    20                 7.04                    0.70

The PF tracker needs more computation time to locate the target because of its large number of particles. The proposed algorithm needs 0.00126 seconds for feature extraction, and the run time for each frame with estimation is 9.6970 s, while the run time of the particle filter is 4.6382 s. The three-step prediction reduces the run time over whole sequences; therefore, the computational time of the proposed method is lower than that of the particle filter on each video sequence. The proposed method uses fractal estimation with a deterministic process in a low-dimensional state-space search and predicts the states of the next three frames at once. Therefore, the proposed method is about two times faster than the particle filter method.


5.4. Accuracy of the proposed method on a large dataset

In this section, the proposed method is applied to a large dataset [49]. We apply the proposed method to a large benchmark which includes 50 video sequences and 29 tracking algorithms [50]. The robustness metric is the success rate, which gives the percentage of correctly predicted target locations within location-error thresholds; the precision score is the value of this metric at a threshold of 20 pixels. The success plots show the success rates for overlap thresholds from 0 to 1, and the area under the curve (AUC) of the success plot is used to rank the algorithms. Three evaluations are used: one-pass evaluation (OPE), temporal robustness evaluation (TRE), and spatial robustness evaluation (SRE). Both TRE and SRE are evaluated based on overlap precision. TRE evaluates the temporal robustness of the algorithms with 20 runs started at different frames; SRE evaluates whether a tracker is sensitive to spatial initialization errors by applying different shifts and scales to the ground-truth bounding box of the object. As can be seen in Figure 7(a), the success rate of the proposed method is higher than that of Struck and TLD when the threshold is small. The success rate of the proposed algorithm is comparable with Struck and better than TLD. The results show that the chaos-based tracker handles fast motion; our method has the highest ranking on fast motion. Figure 7(b) shows the robustness of the proposed method under occlusion. On occlusion, our algorithm, Struck, and SCM outperform the others because of the learning process and rich representations in Struck and SCM and the powerful motion estimation in the proposed method. As can be seen in Figures 7(c) and (d), the robustness of the chaotic estimation in TRE is higher than in OPE. The chaotic method can extract the motion information of the target; therefore, our algorithm performs well on the TRE and SRE metrics. Object tracking based on the dynamical information of movement is a useful approach for tracking the target under full occlusion. The variance of the error decreases over time after fast or irregular motion when three-step-ahead prediction is used, while the errors of the stochastic methods increase over time under fast and irregular motion. In stochastic methods, the optimal state is obtained by searching the whole state space, while the proposed method searches the state space based on the chaotic characteristics. The stochastic methods cannot handle abrupt changes over the previous frames because they lose the motion information.
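A sketch of the success-plot and AUC computation used to rank the trackers, assuming a list of per-frame overlap scores is available; the threshold grid is an assumed choice.

```python
import numpy as np

def success_plot(overlaps, thresholds=np.linspace(0.0, 1.0, 101)):
    """Success rates over overlap thresholds and their AUC (area under the curve)."""
    overlaps = np.asarray(overlaps, dtype=float)
    rates = np.array([(overlaps > t).mean() for t in thresholds])
    auc = np.trapz(rates, thresholds)
    return rates, auc
```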

Figure 7. Plots of SRE, OPE, and TRE on the large dataset [50]. The AUC scores of the trackers are shown in the legend; the top ten trackers are presented in the plots. (a) SRE, fast motion (17 sequences): Ours [0.468], Struck [0.451], TLD [0.385], CXT [0.348], OAB [0.322], MIL [0.319], CPF [0.310], CSK [0.309], RS-V [0.306], DFT [0.304]. (b) SRE, occlusion (29 sequences): Ours [0.418], Struck [0.405], SCM [0.398], TLD [0.384], LSK [0.384], ASLA [0.381], VTD [0.359], CPF [0.359], VTS [0.354], CSK [0.352]. (c) OPE, all sequences: SCM [0.499], Ours [0.487], Struck [0.474], TLD [0.437], ASLA [0.434], CXT [0.426], VTS [0.416], VTD [0.416], CSK [0.398], LSK [0.395]. (d) TRE, all sequences: Ours [0.541], Struck [0.514], SCM [0.514], ASLA [0.485], CXT [0.463], VTD [0.462], VTS [0.460], CSK [0.454], TLD [0.448], LSK [0.447].

6. Conclusion and future work

In this paper, we proposed a three-step-ahead prediction algorithm for object tracking based on a chaotic system. We used the pseudo-orbit method to extract the dynamic information of motion for object tracking. After the pseudo-orbit method is applied to the past observations, the Grey theory is applied to reduce the dimension of the data, and the fractal prediction method is used to predict the object location. The algorithm can localize the object's position accurately using the short-term predictability of the pseudo trajectory in the state space. The method can track small targets in video sequences, whereas the stochastic methods are sensitive to the poor features of a small target. We compared the performance of the proposed tracker against several stochastic and tracking-by-detection methods. The experimental results demonstrate that the proposed method is more effective than the state-of-the-art methods and is more robust to abrupt motion and occlusion. The algorithm predicts the target in video sequences under sudden dynamic changes using a low-dimensional state space. Therefore, the proposed method can be applied to efficiently and effectively predict the target state in video object tracking. In the future, the Lorenz system could be used to handle the limitation of the Ikeda map's two-dimensional state estimation; this would improve the global search and the accuracy of the estimation.

References

[1] Comaniciu, Dorin, Visvanathan Ramesh, and Peter Meer. "Kernel-based object tracking." IEEE Transactions on pattern analysis and machine intelligence 25, no. 5 (2003): 564-577. [2] Sun, Xin, Hongxun Yao, and Xiusheng Lu. "Dynamic multi-cue tracking using particle filter." Signal, Image and Video Processing 8, no. 1 (2014): 95-101. [3] Kim, Dae-Hwan, Hyo-Kak Kim, Seung-Jun Lee, Won-Jae Park, and Sung-Jea Ko. "Kernel-based structural binary pattern tracking." IEEE Transactions on Circuits and Systems for Video Technology 24, no. 8 (2014): 1288-1300.


[4] Del Bimbo, Alberto, and Fabrizio Dini. "Particle filter-based visual tracking with a first order dynamic model and uncertainty adaptation." Computer Vision and Image Understanding 115, no. 6 (2011): 771-786. [5] Sun, Jun, Fa-zhi He, Yi-lin Chen, and Xiao Chen. "A multiple template approach for robust tracking of fast motion target." Applied Mathematics-A Journal of Chinese Universities 31, no. 2 (2016): 177-197. [6] Ait Abdelali, Hamd, Fedwa Essannouni, Leila Essannouni, and Driss Aboutajdine. "An Adaptive Object Tracking Using Kalman Filter and Probability Product Kernel." Modelling and Simulation in Engineering 2016 (2016). [7] Li, Wanyi, Peng Wang, and Hong Qiao. "Top–down visual attention integrated particle filter for robust object tracking." Signal Processing: Image Communication 43 (2016): 28-41. [8] Yi, Shuangyan, Zhenyu He, Xinge You, and Yiu-Ming Cheung. "Single object tracking via robust combination of particle filter and sparse representation." Signal Processing 110 (2015): 178-187. [9] Cai, Zebin, Zhenghui Gu, Zhu Liang Yu, Hao Liu, and Ke Zhang. "A real-time visual object tracking system based on Kalman filter and MB-LBP feature matching." Multimedia Tools and Applications 75, no. 4 (2016): 2393-2409. [10] Arulampalam, M. Sanjeev, Simon Maskell, Neil Gordon, and Tim Clapp. "A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking." IEEE Transactions on signal processing 50, no. 2 (2002): 174-188. [11] Firouznia, Marjan, Karim Faez, Hamidreza Amindavar, Javad Alikhani Koupaei, Pietro Pantano, and Eleonora Bilotta. "Multi-step prediction method for robust object tracking." Digital Signal Processing 70 (2017): 94104. [12] Firouznia, Marjan, Karim Faez, Hamidreza Amindavar, and Javad Alikhani Koupaei. "Chaotic particle filter for visual object tracking." Journal of Visual Communication and Image Representation (2018). [13] Fan, Zhenhua, Hongbing Ji, and Yongquan Zhang. "Iterative particle filter for visual tracking." Signal Processing: Image Communication 36 (2015): 140-153. [14] Xiao, Jingjing, Rustam Stolkin, Mourad Oussalah, and Aleš Leonardis. "Continuously Adaptive Data Fusion and Model Relearning for Particle Filter Tracking With Multiple Features." IEEE Sensors Journal 16, no. 8 (2016): 2639-2649. [15] Pan, Pan, and Dan Schonfeld. "Visual tracking using high-order particle filtering." IEEE signal processing letters 18, no. 1 (2011): 51-54. [16] Yin, Shimin, Jin Hee Na, Jin Young Choi, and Songhwai Oh. "Hierarchical Kalman-particle filter with adaptation to motion changes for object tracking." Computer Vision and Image Understanding 115, no. 6 (2011): 885-900. [17] Ding, Jianwei, Yongzhen Huang, Wei Liu, and Kaiqi Huang. "Severely Blurred Object Tracking by Learning Deep Image Representations." IEEE Transactions on Circuits and Systems for Video Technology 26, no. 2 (2016): 319-331. [18] Chiranjeevi, Pojala, and Somnath Sengupta. "Rough-Set-Theoretic Fuzzy Cues-Based Object Tracking Under Improved Particle Filter Framework." IEEE Transactions on Fuzzy Systems 24, no. 3 (2016): 695-707. [19] Zhang, Yongquan, Hongbing Ji, and Qi Hu. "A box-particle implementation of standard PHD filter for extended target tracking." Information Fusion 34 (2017): 55-69. [20] Su, Yingya, Qingjie Zhao, Liujun Zhao, and Dongbing Gu. "Abrupt motion tracking using a visual saliency embedded particle filter." Pattern Recognition 47, no. 5 (2014): 1826-1834. [21] Dou, Jianfang, and Jianxun Li. 
"Robust visual tracking based on joint multi-feature histogram by integrating particle filter and mean shift." Optik-International Journal for Light and Electron Optics 126, no. 15 (2015): 1449-1456. [22] Li, Wanyi, Peng Wang, and Hong Qiao. "Top–down visual attention integrated particle filter for robust object tracking." Signal Processing: Image Communication 43 (2016): 28-41. [23] De Freitas, Allan, Lyudmila Mihaylova, Amadou Gning, Donka Angelova, and Visakan Kadirkamanathan. "Autonomous crowds tracking with box particle filtering and convolution particle filtering." Automatica 69 (2016): 380-394.

15

ACCEPTED MANUSCRIPT

AC

CE

PT

ED

M

AN

US

CR

IP

T

[24] Casdagli, Martin. "Nonlinear forecasting, chaos and statistics." In Modeling complex phenomena, pp. 131-152. Springer New York, 1992. [25] Abdechiri, Marjan, Karim Faez, and Hamidreza Amindavar. "Exploring chaotic attractors in nonlinear dynamical system under fractal theory." Multidimensional Systems and Signal Processing (2017): 1-17. [26] Abdechiri, Marjan, Karim Faez, Hamidreza Amindavar, and Eleonora Bilotta. "The chaotic dynamics of highdimensional systems." Nonlinear Dynamics 87, no. 4 (2017): 2597-2610. [27] Abdechiri, Marjan, Karim Faez, and Hamidreza Amindavar. "Visual object tracking with online weighted chaotic multiple instance learning." Neurocomputing 247 (2017): 16-30. [28] Abdechiri, Marjan, Karim Faez, Hamidreza Amindavar, and Eleonora Bilotta. "Chaotic target representation for robust object tracking." Signal Processing: Image Communication 54 (2017): 23-35. [29] Alikhani Koupaei, Javad, and Seyed Mohammad Mehdi Hosseini. "A new hybrid algorithm based on chaotic maps for solving systems of nonlinear equations." Chaos, Solitons & Fractals 81 (2015): 233-245. [30] Kayacan, Erdal, Baris Ulutas, and Okyay Kaynak. "Grey system theory-based models in time series prediction." Expert systems with applications 37, no. 2 (2010): 1784-1789. [31] Wu, Jun, Jian Lu, and Jiaquan Wang. "Application of chaos and fractal models to water quality time series prediction." Environmental Modelling & Software 24, no. 5 (2009): 632-636. [32] Karami, Amir Hossein, Maryam Hasanzadeh, and Shohreh Kasaei. "Online adaptive motion model-based target tracking using local search algorithm." Engineering Applications of Artificial Intelligence 37 (2015): 307-318. [33] Pérez, Patrick, Carine Hue, Jaco Vermaak, and Michel Gangnet. "Color-based probabilistic tracking." In European Conference on Computer Vision, pp. 661-675. Springer Berlin Heidelberg, 2002 [34] Sugandi, Budi, Hyoungseop Kim, Joo Koi Tan, and Seiji Ishikawa. "A color-based particle filter for multiple object tracking in an outdoor environment." Artificial Life and Robotics 15, no. 1 (2010): 41-47. [35] Ciobanu, Adrian, Ioan Pavaloi, Mihaela Luca, and Elena Musca. "Color feature vectors based on optimal LAB histogram Bins." In 2014 International Conference on Development and Application Systems (DAS). 2014. [36] Du, Hailiang, and Leonard A. Smith. "Pseudo-orbit data assimilation. Part II: Assimilation with imperfect models." Journal of the Atmospheric Sciences 71, no. 2 (2014): 483-495. [37] Leonardi, Mary L. Prediction and geometry of chaotic time series. Naval Postgraduate School Monterey CA, 1997. [38] Yao, Qiwei, and Howell Tong. "On prediction and chaos in stochastic systems." Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences 348, no. 1688 (1994): 357369. [39] Kwon, Junseok, and Kyoung Mu Lee. "Visual tracking decomposition." In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pp. 1269-1276. IEEE, 2010. [40] Hare, Sam, Amir Saffari, and Philip HS Torr. "Struck: Structured output tracking with kernels." In 2011 International Conference on Computer Vision, pp. 263-270. IEEE, 2011. [41] Kalal, Zdenek, Krystian Mikolajczyk, and Jiri Matas. "Tracking-learning-detection." IEEE transactions on pattern analysis and machine intelligence 34, no. 7 (2012): 1409-1422. [42] Zhang, Kaihua, Lei Zhang, and Ming-Hsuan Yang. "Real-time compressive tracking." In European Conference on Computer Vision, pp. 864-877. Springer Berlin Heidelberg, 2012. 
[43] Henriques, João F., Rui Caseiro, Pedro Martins, and Jorge Batista. "High-speed tracking with kernelized correlation filters." IEEE Transactions on Pattern Analysis and Machine Intelligence 37, no. 3 (2015): 583596. [44] Zhang, Tianzhu, Si Liu, Narendra Ahuja, Ming-Hsuan Yang, and Bernard Ghanem. "Robust visual tracking via consistent low-rank sparse learning." International Journal of Computer Vision 111, no. 2 (2015): 171-190. [45] Zhang, Lihe, Huchuan Lu, Dandan Du, and Luning Liu. "Sparse hashing tracking." IEEE Transactions on Image Processing 25, no. 2 (2016): 840-849. [46] Nummiaro, Katja, Esther Koller-Meier, and Luc Van Gool. "An adaptive color-based particle filter." Image and vision computing 21, no. 1 (2003): 99-110.


[47] Cehovin, Luka, Matej Kristan, and Ales Leonardis. "Robust visual tracking using an adaptive coupled-layer visual model." IEEE transactions on pattern analysis and machine intelligence 35, no. 4 (2013): 941-953. [48] Smeulders, Arnold WM, Dung M. Chu, Rita Cucchiara, Simone Calderara, Afshin Dehghan, and Mubarak Shah. "Visual tracking: An experimental survey." IEEE Transactions on Pattern Analysis and Machine Intelligence 36, no. 7 (2014): 1442-1468. [49] Wu, Yi, Jongwoo Lim, and Ming-Hsuan Yang. "Object tracking benchmark." IEEE Transactions on Pattern Analysis and Machine Intelligence 37, no. 9 (2015): 1834-1848. [50] Wu, Yi, Jongwoo Lim, and Ming-Hsuan Yang. "Online object tracking: A benchmark." In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2411-2418. 2013.
