Active energy image plus 2DLPP for gait recognition

Signal Processing 90 (2010) 2295–2302

Fast communication

Erhu Zhang, Yongwei Zhao*, Wei Xiong
Department of Information Science, Xi'an University of Technology, Xi'an 710048, China

Article history: Received 24 June 2009; received in revised form 10 December 2009; accepted 28 January 2010; available online 2 February 2010.

Abstract

This paper proposes a novel active energy image (AEI) method for gait recognition. Existing human gait feature representation methods usually suffer from low-quality human silhouettes and insufficient dynamic characteristics. To this end, we apply the proposed AEI for gait representation. Given a gait silhouette sequence, we first extract the active regions by calculating the difference of two adjacent silhouette images, and construct an AEI by accumulating these active regions. Then, we project each AEI onto a low-dimensional feature subspace via the newly proposed two-dimensional locality preserving projections (2DLPP) method to further improve the discriminative power of the extracted features. Experimental results on the CASIA gait database (datasets B and C) demonstrate the effectiveness of the proposed method. © 2010 Elsevier B.V. All rights reserved.

Keywords: Gait recognition; Active energy image; Dynamic gait characteristics; Two-dimensional locality preserving projections

1. Introduction

During the past decade, gait recognition has been extensively investigated in the computer vision community, and a number of gait recognition methods have been proposed [1–10]. These methods can be mainly classified into three categories: model-based, appearance-based and spatiotemporal-based. Model-based methods [3,4] aim to model the body and shape of a person while he/she is walking. Due to the highly flexible structure of the nonrigid human body and to self-occlusion, the performance of these methods is usually limited. Appearance-based methods focus more on extracting the static information of a walking person, and many such algorithms have been proposed in the literature [5–8]. Among them, the gait energy image (GEI) [7] has proved to be the most effective. GEI represents human motion by a single image, which not only reduces storage and computational costs, but is also less sensitive to noise in the human silhouettes. Spatiotemporal-based methods uncover gait shape variation information in both the spatial

* Corresponding author. E-mail address: [email protected] (Y. Zhao).
doi:10.1016/j.sigpro.2010.01.024

and temporal domains; a representative work is the shape variation-based frieze feature [9,10]. While existing gait recognition methods can attain good performance under controlled conditions, there is still room for improvement, as these methods are sensitive to many factors, such as viewpoint constraints, clothing variety, carried objects, different walking speeds, low-quality silhouettes and insufficient dynamic characteristics. To address the problem of low-quality silhouettes, Han and Bhanu [7] proposed the GEI method to reduce the influence of noise, and achieved encouraging performance in gait recognition. However, GEI represents a gait sequence by a single image used as a holistic feature, and arguably loses some intrinsic dynamic characteristics of the gait pattern. In our previous work [11], we proposed a dynamic gait energy method to better extract the dynamic information of gait sequences and address this issue. Moreover, Yang et al. [8] proposed an enhanced GEI (EGEI) gait representation method that applies dynamic region analysis to enrich the dynamic information of the extracted features, obtaining better performance than the conventional GEI method. However, these improvements are still heavily affected by other factors such as clothing and carried objects.


Fig. 1. An example of silhouette extraction in CASIA_B: (a) raw image, (b) background image, (c) background subtraction, (d) shadow elimination, (e) post-processing and (f) normalized silhouette.

Aiming at solving the low-quality silhouette problem and enhancing the dynamic characteristics of gait feature representation, we propose in this paper a novel active energy image (AEI) method for gait recognition. The basic idea of AEI is to extract the active regions of a gait sequence by calculating the difference between two adjacent silhouette images. AEI focuses more on dynamic regions than GEI does, and can therefore include more dynamic information for discrimination. Moreover, it can alleviate the effects caused by low-quality silhouettes, carried objects and clothing variety. To further enhance the discriminative power of AEI, we apply the newly proposed two-dimensional locality preserving projections (2DLPP) method [12] to reduce the feature dimension and improve the separability of the features. Experimental results on the CASIA gait database (datasets B and C) demonstrate the effectiveness of the proposed method. The rest of the paper is organized as follows. Section 2 proposes the AEI method. Section 3 presents 2DLPP for reducing the feature dimension of AEI. Section 4 presents and analyzes the experimental results, and Section 5 concludes the paper.

2. Active energy image

2.1. Human silhouette extraction

The performance of a gait recognition algorithm is heavily affected by low-quality silhouettes. In this study, two different gait datasets, CASIA_B and CASIA_C, are used. CASIA_B is an indoor dataset collected by visible-light cameras, and CASIA_C is an outdoor one collected by an infrared camera at night. Hence, the two datasets differ intrinsically in image contrast, and we apply two different preprocessing methods to extract the human silhouettes.

For the CASIA_B dataset, the human silhouettes are extracted by background subtraction and thresholding, similar to the approach proposed in our previous work [1]. It consists of background extraction, object segmentation, connected component analysis and morphological operations. Fig. 1 shows an example of human silhouette extraction for a gait sequence from the CASIA_B dataset, where (a) is an original gait frame, and (b–f) are the extracted background image, the background subtraction result, the shadow elimination result, the morphological post-processing result and the normalized silhouette, respectively. For the CASIA_C dataset, we use the preprocessing method proposed in [13,14] to extract the human silhouettes. Fig. 2 shows two raw images and the corresponding extracted silhouettes. It can be seen that the silhouettes extracted from this dataset are incomplete; the aim of using this dataset is to evaluate the robustness of the proposed method.
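As a concrete illustration of this preprocessing step, the sketch below implements a minimal background-subtraction-and-thresholding silhouette extractor in NumPy. It is only a toy version of the pipeline above: the threshold value, the 64 × 64 output size and the nearest-neighbour resampling are our own assumptions, and the shadow elimination, connected component analysis and morphological post-processing stages are omitted.

```python
import numpy as np

def extract_silhouette(frame, background, thresh=30):
    """Crude background-subtraction silhouette: a sketch, not the exact
    pipeline of [1] (which adds shadow elimination, connected component
    analysis and morphological post-processing)."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > thresh).astype(np.uint8)  # binary silhouette

def normalize_silhouette(sil, out_h=64, out_w=64):
    """Crop to the foreground bounding box and resample by nearest
    neighbour so all silhouettes share a common (hypothetical) size."""
    ys, xs = np.nonzero(sil)
    if ys.size == 0:
        return np.zeros((out_h, out_w), dtype=np.uint8)
    crop = sil[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    ri = np.arange(out_h) * crop.shape[0] // out_h
    ci = np.arange(out_w) * crop.shape[1] // out_w
    return crop[np.ix_(ri, ci)]
```

In practice, a real pipeline would also suppress shadows and keep only the largest connected component before normalization.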

2.2. Active energy image

For a walking person, there are two kinds of information in his/her gait signature: static and dynamic [8,11,15]. According to [7,8], GEI and EGEI are two effective gait representations that extract these two types of information explicitly or implicitly. However, these methods compromise some useful information, as a single 2D GEI image may lose important walking information, such as the fore-and-aft (temporal order) relations, in the gait feature description. For example, if the frames in a gait sequence are shuffled, the GEI and EGEI representations remain the same as those of the original sequence. Moreover, GEI and EGEI are appearance-based methods and are easily affected by clothing variety and carried objects. Fig. 3 shows some GEI and EGEI samples of one person under normal walking, walking with a bag and walking in a coat conditions, where (a, c, e) are the GEI


Fig. 2. Two examples of silhouette extraction in CASIA_C: (a) example 1 and (b) example 2.

Fig. 3. Some GEI and EGEI samples.

samples, and (b, d, f) are the corresponding EGEI ones, respectively. We can observe from Fig. 3 that EGEI helps little for gait sequences collected under different carrying and clothing conditions. Since EGEI needs to obtain the dynamics weight mask (DWM) [8] using all the sequences in the training set, and the locations of the bag and the appearance of the coat may differ between individuals, the effectiveness of the DWM is limited under these two conditions. In Fig. 3(d and f), we can see that the major parts of the bag and coat, which should be removed, remain in the EGEI samples, which indicates that they are treated as dynamic regions in both GEI and EGEI. Moreover, because the DWM is calculated from all the training sequences, one needs to recalculate the DWM and update all the EGEIs accordingly whenever gait sequences are added to or removed from the training set.

To solve the above problems, we propose the active energy image (AEI) feature representation method as follows. Given a preprocessed binary gait silhouette sequence F = {f_0, f_1, ..., f_{N-1}}, where f_t represents the t-th silhouette and N is the number of frames in the sequence, we first calculate the difference image between two adjacent silhouettes:

D_t(x, y) = \begin{cases} f_t(x, y), & t = 0 \\ | f_t(x, y) - f_{t-1}(x, y) |, & t > 0. \end{cases}    (1)

From Eq. (1), it is easy to see that D_t is the difference between f_t and f_{t-1}, i.e., D_t is the active region at time t, and it is desirable to use the difference image to extract the dynamic parts of the moving body.

Having obtained the difference images, we formally define the AEI as

A(x, y) = \frac{1}{N} \sum_{t=0}^{N-1} D_t(x, y).    (2)

It is generally believed that walking is a dynamic procedure of the body parts. When one is walking, the moving parts are the external protrusions (such as the limbs), while the primary body (such as the trunk) is almost stationary relative to the centroid across neighboring frames. From Eqs. (1) and (2), we can see that AEI extracts only the moving parts and discards the stationary parts. In other words, AEI not only concentrates on the dynamic parts of the body, but also retains the walking information of the person. The intensity of a pixel in the AEI reflects the frequency (i.e., the energy) with which active parts occur at that pixel position during a walking procedure. Besides, the AEI representation also effectively eliminates the influences of a carried bag and clothing variety, because the bag and clothes move little between two adjacent frames and are thus treated as stationary regions in the difference image. Moreover, an AEI is calculated from each gait sequence rather than from all the sample sequences of the training set, as EGEI is. Hence, there is no need to recalculate all the AEIs when adding or removing sequences. Some AEI samples extracted from gait sequences collected under normal walking, walking with a bag and walking in a coat conditions are shown in Fig. 4. Comparing Fig. 4 with Fig. 3, we can see that the proposed AEI method not only concentrates on dynamic regions, but also eliminates or suppresses the influence of a carried bag and clothing variety.
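Eqs. (1) and (2) translate directly into a few lines of NumPy; the sketch below is a minimal implementation under the assumption that the silhouettes are equal-sized binary arrays.

```python
import numpy as np

def active_energy_image(frames):
    """Compute the AEI of Eqs. (1)-(2) from a sequence of binary
    silhouette arrays f_0, ..., f_{N-1} of identical shape."""
    frames = [f.astype(np.float64) for f in frames]
    diffs = [frames[0]]                                  # D_0 = f_0
    for t in range(1, len(frames)):
        diffs.append(np.abs(frames[t] - frames[t - 1]))  # D_t, t > 0
    return np.mean(diffs, axis=0)  # A(x, y) = (1/N) * sum_t D_t(x, y)
```

Because a pixel that is foreground in both f_{t-1} and f_t differences out to zero, the trunk or a carried bag contributes almost nothing beyond D_0, while swinging limbs accumulate energy frame after frame.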


Fig. 4. Some AEI samples: (a) AEI samples of normal walking, (b) AEI samples of walking with a bag and (c) AEI samples of walking in a coat.

For the low-quality silhouette problem, we consider two cases: the incomplete silhouettes occur inconsecutively or consecutively. In the first case, when f_t is incomplete and f_{t-1} is complete, the lost body portions of the silhouette appear in D_t, although ideally the differencing operation would remove them. Even so, their values are still reduced through the averaging operation in Eq. (2). In practice, however, incomplete silhouettes usually appear consecutively [15], as shown in Fig. 5. In this case, the same body portion is lost in each frame; that is, the lost portion behaves as a stationary region and is removed in the AEI. Hence, the AEI result is again not heavily affected. Fig. 6 shows some AEI samples obtained from the CASIA_C dataset. From these samples, it can be seen that the incomplete silhouettes have little influence on the corresponding AEIs. Our AEI representation is similar to the newly proposed frame difference energy image (FDEI) method [15]. However, the dominant energy image (DEI) of FDEI is a stationary region containing few dynamic characteristics, whereas AEI concentrates more on the dynamic regions
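The consecutive-loss argument above can be checked numerically: if the same region is missing from both adjacent silhouettes, it cancels in D_t. A toy demonstration (the frame shapes and the "lost" region are invented purely for illustration):

```python
import numpy as np

# Two consecutive silhouettes; column 0 changes (genuine motion).
full_prev = np.ones((6, 4))
full_curr = np.ones((6, 4))
full_curr[:, 0] = 0                        # real movement in column 0

lost = (slice(0, 3), slice(2, 4))          # region dropped by bad segmentation
bad_prev, bad_curr = full_prev.copy(), full_curr.copy()
bad_prev[lost] = 0                         # same portion lost in BOTH
bad_curr[lost] = 0                         # consecutive frames

d_full = np.abs(full_curr - full_prev)     # D_t from clean silhouettes
d_bad = np.abs(bad_curr - bad_prev)        # D_t from damaged silhouettes
# The consistently missing region is stationary, so D_t is unchanged there.
assert np.array_equal(d_full[lost], d_bad[lost])
```

The difference image is identical in the damaged region, which is exactly why consecutively incomplete silhouettes barely perturb the AEI.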

that reflect the walking manner of an individual [8]. Moreover, DEI is easily affected by changes in gait appearance. Fig. 7 shows some DEIs, positive portions and FDEIs of a subject from the CASIA_B dataset collected under different conditions; the DEI, positive portion and FDEI are shown in the first, second and third columns, respectively. Comparing Fig. 7 with Fig. 4, it can be observed that FDEI is sensitive to appearance changes, while AEI is more robust.

3. 2DLPP

Similar to GEI and EGEI, AEI suffers from the "curse of dimensionality" problem, and it is desirable to reduce the feature dimension of AEI using a subspace learning algorithm. Recent advances have demonstrated that matrix-based subspace learning methods usually outperform vector-based ones, as they preserve the spatial structure information of the original images and alleviate the "curse of dimensionality" problem. Based on this consideration, we apply the newly proposed


Fig. 5. Some consecutive incomplete silhouettes from CASIA_C.

Fig. 6. Some AEI samples from CASIA_C.

two-dimensional locality preserving projections (2DLPP) method [12] to achieve this goal. Given a set of training AEI images X_i of size m × n, i = 1, 2, ..., N, the objective function of 2DLPP is defined as

\min \sum_{i,j} \frac{1}{2} \| Y_i - Y_j \|^2 S_{ij},    (3)

where Y_i = X_i v, v is an n-dimensional transformation vector, and S is the similarity matrix defined as S_{ij} = \exp\{-\|X_i - X_j\|^2 / t\} if X_i is among the k nearest neighbors of X_j or X_j is among the k nearest neighbors of X_i, and S_{ij} = 0 otherwise, where t and k are two suitable constants. Here, k defines the size of the local neighborhood. By simple algebraic operations, the objective function of Eq. (3) becomes

\sum_{i,j} \frac{1}{2} \| Y_i - Y_j \|^2 S_{ij} = \sum_{i,j} \frac{1}{2} \| (X_i - X_j) v \|^2 S_{ij}
  = v^T \Big( \sum_{i,j} S_{ij} (X_i^T X_i - X_i^T X_j) \Big) v
  = v^T \Big( \sum_{i,j} X_i^T S_{ij} X_i - \sum_{i,j} X_i^T S_{ij} X_j \Big) v
  = v^T X^T (L \otimes I_m) X v,    (4)

where X = [X_1^T, X_2^T, ..., X_N^T]^T is an (mN × n) matrix generated by stacking all the image matrices in column form, L = D - S is the Laplacian matrix, and D is a diagonal matrix with D_{ii} = \sum_j S_{ij}. To remove an arbitrary scaling factor in the embedding, the following constraint is added:

\sum_i D_{ii} Y_i^T Y_i = 1 \;\Rightarrow\; v^T \Big( \sum_i D_{ii} X_i^T X_i \Big) v = 1 \;\Rightarrow\; v^T X^T (D \otimes I_m) X v = 1,    (5)

where the operator \otimes in Eqs. (4) and (5) denotes the Kronecker product of two matrices and I_m is the identity matrix of order m. The transformation vector v can now be obtained by solving the following generalized eigenvalue problem:

X^T (L \otimes I_m) X v = \lambda X^T (D \otimes I_m) X v.    (6)

As the matrices X^T (L \otimes I_m) X and X^T (D \otimes I_m) X are both symmetric and positive semi-definite, the eigenvalues of Eq. (6) are all nonnegative. Let v_1, v_2, ..., v_d be the unitary orthogonal solution vectors of Eq. (6) corresponding to the d smallest eigenvalues, ordered as 0 \le \lambda_1 \le \lambda_2 \le \cdots \le \lambda_d, and construct the transformation matrix V = [v_1, v_2, ..., v_d]. Each image X_i can then be projected to

Y_i = X_i V, \quad i = 1, 2, ..., N,    (7)

where Y_i is an (m × d) feature matrix of X_i and V is an (n × d) transformation matrix. For recognition, let Y_p and Y_{g_i} be the low-dimensional features of the probe sequence and the i-th gallery sequence, respectively. The Euclidean distance between Y_p and Y_{g_i} (i = 1, 2, ..., N) is then used to perform recognition with the nearest neighbor rule:

c = \arg\min_i \| Y_p - Y_{g_i} \|.    (8)
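The whole training procedure of Eqs. (3)–(7) can be sketched in NumPy as follows. This is a minimal illustration, not the authors' implementation: the neighbourhood size k, heat-kernel parameter t and output dimension d are invented defaults, and the generalized eigenproblem of Eq. (6) is solved by whitening the right-hand matrix rather than with a dedicated generalized solver. Note that for X stacked as above, X^T (L ⊗ I_m) X = Σ_{i,j} L_{ij} X_i^T X_j, which the code exploits.

```python
import numpy as np

def train_2dlpp(images, d=2, k=3, t=100.0):
    """Sketch of 2DLPP training (Eqs. (3)-(7)); k, t, d are illustrative."""
    N = len(images)
    X = np.stack([im.astype(np.float64) for im in images])  # N x m x n

    # Similarity matrix S: heat kernel over k-nearest neighbours.
    dist = np.array([[np.sum((X[i] - X[j]) ** 2) for j in range(N)]
                     for i in range(N)])
    S = np.zeros((N, N))
    for i in range(N):
        nn = np.argsort(dist[i])[1:k + 1]      # k nearest, skipping self
        S[i, nn] = np.exp(-dist[i, nn] / t)
    S = np.maximum(S, S.T)                     # symmetrise
    D = np.diag(S.sum(axis=1))
    L = D - S                                  # graph Laplacian

    # A = X^T (L kron I_m) X and B = X^T (D kron I_m) X, both n x n.
    A = sum(L[i, j] * X[i].T @ X[j] for i in range(N) for j in range(N))
    B = sum(D[i, i] * X[i].T @ X[i] for i in range(N))

    # Solve A v = lambda B v (Eq. (6)) by whitening B.
    w, U = np.linalg.eigh(B)
    B_inv_half = U @ np.diag(1.0 / np.sqrt(np.maximum(w, 1e-12))) @ U.T
    lam, Q = np.linalg.eigh(B_inv_half @ A @ B_inv_half)  # ascending order
    V = B_inv_half @ Q[:, :d]                  # d smallest eigenvalues
    return V                                   # n x d projection matrix

def project(image, V):
    return image.astype(np.float64) @ V        # Eq. (7): Y = X V
```

Recognition (Eq. (8)) is then simply a nearest-neighbour search over the Frobenius distances between the projected probe matrix and the projected gallery matrices.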

4. Experiments and results

The proposed method was tested on the CASIA gait database (datasets B and C) [13,16] to evaluate its effectiveness and robustness. The CASIA_B dataset contains gait sequences of 124 subjects collected under


Fig. 7. Some samples during the construction of FDEI on the CASIA_B: (a) walking with bag, (b) walking on coat and (c) normal walking.

different views, clothing and carrying conditions. There are 11 views (0°, 18°, ..., 180°) for each subject and 10 sequences per view. Among the 10 sequences, two are walking with a bag, two are walking in a coat, and the rest are normal walking. We used the sequences collected at the 90° view to test our proposed method. The CASIA_C dataset consists of 153 subjects collected outdoors at night under different conditions. There are four walking conditions: normal walking, slow walking, fast walking and normal walking with a bag. Two samples of this dataset are shown in Fig. 2. Hence, most

silhouettes extracted from CASIA_C are incomplete, as shown in Fig. 5. Therefore, we can evaluate the effectiveness and robustness of the proposed method against different walking conditions and low-quality human silhouettes. We compared AEI with GEI [7], EGEI [8] and FDEI [15] in terms of recognition accuracy. Moreover, we also applied several representative subspace learning methods, including principal component analysis (PCA), linear discriminant analysis (LDA), locality preserving projections (LPP) [17], two-dimensional PCA (2DPCA) [18] and two-dimensional LDA (2DLDA) [19], for performance comparison.

4.1. Recognition accuracy test

In this subsection, the performance of AEI, GEI, EGEI and FDEI combined with PCA, LDA, LPP, 2DPCA, 2DLDA and 2DLPP is evaluated on the CASIA_B dataset, where the normal walking sequences of all subjects were selected. The six sequences of each subject were divided into two sets, and the methods were tested through 2-fold cross validation (124 × 3 sequences were used for training and the rest for testing). For both LDA and LPP, the number of samples in the training set is smaller than the dimension of each sample, which results in the small sample size problem. In our experiments, we apply PCA to address this problem, keeping 98% of the information in the sense of reconstruction error in the PCA step. The average recognition results of each method are listed in Table 1.

Table 1
Recognition rates (%) of normal walking in CASIA_B.

Method                         PCA    PCA+LDA  PCA+LPP  2DPCA  2DLDA  2DLPP
GEI                            66.34  85.67    86.02    87.17  93.33  95.17
EGEI                           63.34  81.17    87.10    80.67  93.84  95.67
AEI                            72.58  88.71    90.73    89.52  94.35  98.39
FDEI (frieze feature + HMM)    96.51 (a)
FDEI (wavelet feature + HMM)   93.28 (a)

(a) These recognition results are a little higher than those of Ref. [11] because the human silhouettes we use in the experiment are almost complete.

From the above results, we can make the following two observations: (1) AEI achieves a higher recognition rate than GEI, EGEI and FDEI, which demonstrates its effectiveness for gait recognition. (2) All 2D subspace learning methods outperform their corresponding 1D versions, and the gains are larger when the number of training samples is insufficient. Moreover, 2DLPP consistently outperforms the other dimensionality reduction algorithms regardless of the feature representation method used.

4.2. Recognition robustness test

This subsection evaluates the recognition robustness of the proposed method. First, we tested the robustness against clothing variety and carried objects on the CASIA_B dataset. The two experiments designed for this test are outlined in Table 2, and the recognition results of the different methods are tabulated in Tables 3 and 4, respectively. Then, the robustness against low-quality silhouettes was evaluated on the CASIA_C dataset. Table 5 presents the details of these experiments, and the recognition results are listed in Tables 6–9.

Table 2
Two experiments on CASIA_B for robustness test.

Gallery set  Probe set  Gallery sequences  Probe sequences
Normal       Bag        124 × 3            124 × 2
Normal       Coat       124 × 3            124 × 2

Table 3
Recognition rates (%) of walking with a bag in CASIA_B.

Method                         PCA    PCA+LDA  PCA+LPP  2DPCA  2DLDA  2DLPP
GEI                            38.71  34.27    46.77    47.58  42.34  55.65
EGEI                           24.19  42.34    45.56    44.35  44.76  48.79
AEI                            73.79  75.00    76.21    81.85  85.08  91.94
FDEI (frieze feature + HMM)    41.94
FDEI (wavelet feature + HMM)   48.79

Table 4
Recognition rates (%) of walking in a coat in CASIA_B.

Method                         PCA    PCA+LDA  PCA+LPP  2DPCA  2DLDA  2DLPP
GEI                            29.84  25.81    31.45    45.16  41.53  44.35
EGEI                           14.52  21.37    30.24    30.24  34.68  48.39
AEI                            54.84  57.26    56.85    63.31  68.95  72.18
FDEI (frieze feature + HMM)    54.03
FDEI (wavelet feature + HMM)   50.81

Table 5
Four experiments in CASIA_C for robustness test.

Gallery set  Probe set  Gallery sequences  Probe sequences
Normal       Normal     153 × 3            153
Normal       Fast       153 × 3            153 × 2
Normal       Slow       153 × 3            153 × 2
Normal       Bag        153 × 3            153 × 2

Table 6
Recognition rates (%) of normal walking in CASIA_C.

Method                         PCA    PCA+LDA  PCA+LPP  2DPCA  2DLDA  2DLPP
GEI                            56.21  56.86    59.48    58.17  57.52  65.36
EGEI                           54.90  56.86    58.82    58.17  58.17  64.71
AEI                            79.74  77.78    81.05    83.01  84.31  88.89
FDEI (frieze feature + HMM)    87.58
FDEI (wavelet feature + HMM)   84.18

From the above experimental results, we can make the following observations:

(1) The AEI gait representation obtains the best recognition performance in all cases, because AEI concentrates on the intrinsic dynamic characteristics of gait patterns and thus achieves comparatively high accuracy.

(2) FDEI performs well in dealing with low-quality silhouettes, but it is sensitive to varying appearances such as clothing variety and a carried bag, whereas the proposed AEI method is more robust to appearance variation. The reason is that FDEI consists of the positive portion of the frame difference and the dominant energy image (DEI), which contains little dynamic information and is easily affected by varying appearances; consequently, the recognition rates for walking with a bag and walking in a coat are low when FDEI is used. AEI, in contrast, considers only the active regions at each time step, so it not only concentrates on the dynamic information but also effectively suppresses the influence of a carried bag and clothing variety, because the bag and clothes move little between two adjacent frames and are removed as stationary regions in the difference image.

(3) For all dimensionality reduction methods, 2D subspace learning methods outperform their corresponding 1D versions, and the gains are larger when the number of training samples is insufficient. Moreover, 2DLPP achieves a higher recognition rate than the other subspace learning algorithms regardless of the feature representation method used.

Table 7
Recognition rates (%) of fast walking in CASIA_C.

Method                         PCA    PCA+LDA  PCA+LPP  2DPCA  2DLDA  2DLPP
GEI                            60.46  58.50    62.75    61.11  59.15  63.40
EGEI                           58.50  56.54    61.76    60.13  58.82  67.73
AEI                            79.08  82.03    88.24    87.25  88.56  90.20
FDEI (frieze feature + HMM)    89.87
FDEI (wavelet feature + HMM)   88.89

Table 8
Recognition rates (%) of slow walking in CASIA_C.

Method                         PCA    PCA+LDA  PCA+LPP  2DPCA  2DLDA  2DLPP
GEI                            57.84  50.98    54.25    60.78  57.52  62.42
EGEI                           61.44  52.94    60.46    62.09  58.50  63.07
AEI                            79.74  80.39    83.01    83.99  84.97  89.22
FDEI (frieze feature + HMM)    88.89
FDEI (wavelet feature + HMM)   86.60

Table 9
Recognition rates (%) of walking with a bag in CASIA_C.

Method                         PCA    PCA+LDA  PCA+LPP  2DPCA  2DLDA  2DLPP
GEI                            37.91  24.51    31.05    46.41  48.37  49.35
EGEI                           30.39  23.06    38.56    39.87  40.52  42.48
AEI                            74.84  73.86    77.12    77.78  78.43  79.74
FDEI (frieze feature + HMM)    56.54
FDEI (wavelet feature + HMM)   66.34

5. Conclusion

This paper described a novel active energy image (AEI) method for gait recognition. We first used difference images to locate the active regions in a walking sequence, and accumulated the active regions of each frame to construct an AEI. We then performed 2DLPP analysis to reduce the dimensionality and extract the discriminative features of the AEI. The experimental results demonstrated that the proposed method outperforms other gait recognition methods in both accuracy and robustness. Our future work will concentrate on gait recognition under arbitrary viewpoints and on developing more discriminative gait representation methods.

References

[1] J. Lu, E. Zhang, Gait recognition for human identification based on ICA and fuzzy SVM through multiple views fusion, Pattern Recognition Letters 28 (16) (2007) 2401–2411.
[2] F. Jean, R. Bergevin, A.B. Albu, Computing and evaluating view-normalized body part trajectories, Image and Vision Computing 27 (2009) 1272–1284.
[3] S.L. Dockstader, M.J. Berg, A.M. Tekalp, Stochastic kinematic modeling and feature extraction for gait analysis, IEEE Trans. Image Processing 12 (8) (2003) 962–976.
[4] L. Wang, H. Ning, T. Tan, W. Hu, Fusion of static and dynamic body biometrics for gait recognition, IEEE Trans. Circuits and Systems for Video Technology 14 (2) (2004) 149–158.
[5] S. Sarkar, P.J. Phillips, Z. Liu, I.R. Vega, P. Grother, K.W. Bowyer, The humanID gait challenge problem: datasets, performance, and analysis, IEEE Trans. Pattern Anal. Mach. Intell. 27 (2) (2005) 162–177.
[6] Z. Liu, S. Sarkar, Improved gait recognition by gait dynamics normalization, IEEE Trans. Pattern Anal. Mach. Intell. 28 (6) (2006) 863–876.
[7] J. Han, B. Bhanu, Individual recognition using gait energy image, IEEE Trans. Pattern Anal. Mach. Intell. 28 (2) (2006) 316–322.
[8] X. Yang, Y. Zhou, T. Zhang, G. Shu, J. Yang, Gait recognition based on dynamic region analysis, Signal Processing 88 (2008) 2350–2356.
[9] Y. Liu, R. Collins, Y. Tsin, Gait sequence analysis using frieze patterns, in: Proceedings of the Seventh European Conference on Computer Vision, May 2001.
[10] S. Lee, Y. Liu, R. Collins, Shape variation-based frieze pattern for robust gait recognition, in: IEEE Conference on Computer Vision and Pattern Recognition, 2007, pp. 1–8.
[11] E. Zhang, H. Ma, J. Lu, et al., Gait recognition using dynamic gait energy and PCA+LPP method, in: Proceedings of the Eighth International Conference on Machine Learning and Cybernetics, vol. 1, Baoding, China, 2009, pp. 50–53.
[12] S. Chen, H. Zhao, M. Kong, B. Luo, 2D-LPP: a two-dimensional extension of locality preserving projections, Neurocomputing 70 (2007) 912–921.
[13] D. Tan, K. Huang, S. Yu, et al., Efficient night gait recognition based on template matching, in: Proceedings of the 18th International Conference on Pattern Recognition, vol. 3, 2006, pp. 1000–1003.
[14] D. Tan, K. Huang, S. Yu, et al., Recognizing night walkers based on one pseudoshape representation of gait, in: IEEE Conference on Computer Vision and Pattern Recognition, 2007, pp. 1–8.
[15] C. Chen, J. Liang, H. Zhao, et al., Frame difference energy image for gait recognition with incomplete silhouettes, Pattern Recognition Letters 30 (2009) 977–984.
[16] S. Yu, D. Tan, T. Tan, A framework for evaluating the effect of view angle, clothing and carrying condition on gait recognition, in: Proceedings of the 18th International Conference on Pattern Recognition, 2006, pp. 441–444.
[17] X. He, P. Niyogi, Locality preserving projections, in: Advances in Neural Information Processing Systems, 2003.
[18] J. Yang, D. Zhang, A.F. Frangi, J.Y. Yang, Two-dimensional PCA: a new approach to appearance-based face representation and recognition, IEEE Trans. Pattern Anal. Mach. Intell. 26 (1) (2004) 131–137.
[19] M. Li, B. Yuan, 2D-LDA: a statistical linear discriminant analysis for image matrix, Pattern Recognition Letters 26 (2005) 527–532.