Dual-ellipse fitting approach for robust gait periodicity detection


Neurocomputing 79 (2012) 173–178


Xianye Ben a,*, Weixiao Meng b, Rui Yan c

a School of Information Science and Engineering, Shandong University, No. 27 Shanda South Road, Jinan City, Shandong Province, PR China
b School of Electronics Information Engineering, Harbin Institute of Technology, No. 92 Western Dazhi Street, Nangang District, Harbin, Heilongjiang Province, PR China
c Department of Mechanical, Aerospace & Nuclear Engineering, Rensselaer Polytechnic Institute, 110 8th Street, Jonsson Engineering Center Rm. 2049, Troy, NY, USA
* Corresponding author. Tel.: +86 13945666293; fax: +86 15945688113. E-mail address: [email protected] (X. Ben).

Article history: Received 5 May 2011; received in revised form 2 August 2011; accepted 28 October 2011; available online 15 November 2011. Communicated by L. Shao.

Abstract

A new gait period detection algorithm, the dual-ellipse fitting (DEF) approach, is proposed. In DEF, the two regions of the whole silhouette divided at the centroid are fitted into two ellipses, respectively. We construct the gait fluctuation as a periodic function of the eccentricities of the two halves of the silhouette over time. Experimental results show that the proposed method is robust to scale, translation, walking direction and carrying a bag.

Keywords: Gait recognition; Gait period detection; Dual-ellipse fitting; Eccentricity

1. Introduction

A gait sequence is a spatio-temporal periodic signal. If the entire gait video is used as the object for human identification, not only is the computation unnecessarily large, but the data also contain a great deal of redundant information. Detecting gait periodicity has therefore gained increasing research interest. BenAbdelkader et al. proposed two kinds of gait periodicity detection. In the first, the similarity plot is tiled into contiguous rectangular blocks, termed Units of Self-Similarity (USS), each of which contains the person's self-similarity over two gait periods; clearly, a different tiling is obtained for each starting phase of the periods [1]. The second computes the autocorrelation of the bounding-box width for a time series of binary silhouettes [2]. Collins et al. [3] analyzed periodic width and height signals of the silhouettes over time. Kale et al. [4] employed the norm of the width vector to show a periodic variation. Boulgouris et al. [5] not only partitioned the gait sequence into cycles by locating the frame indices at which the sum of foreground pixels is minimized, but also identified the cycle length by calculating the autocorrelation. Sarkar et al. [6] estimated gait periodicity by counting the number of pixels in the bottom half of the silhouette in each frame over time. Li et al. [7] claimed that the zero crossings, local maxima and local minima of the locally linear


embedding (LLE) representation indicate the gait cycle. Veres et al. [8] achieved gait period detection by analyzing the variation of the distance between the legs. In Ref. [9], gait cycle estimation is performed by normalized correlation on the distance vectors. Ma et al. [10] utilized the periodicity of swing distances to estimate the gait period. Chen et al. [11] horizontally partitioned the lower one-fourth of the minimum bounding region of the human body into three equal sub-regions, counted their contour points and detected the gait cycle from the corresponding histogram distributions. Mori et al. [12] detected the gait period by maximizing the normalized autocorrelation of the gait silhouette sequence along the temporal axis. Most of the aforementioned methods need to unify the size of all gait images and make their centers coincide with the center of the frame. Moreover, the correlation-based methods assume a constant walking speed [1,2,5,9,12], and methods that use the legs and feet of the lower body as cues for separating gait cycles are seriously affected by shadows. In this paper, we propose a new gait periodicity detection method based on dual-ellipse fitting (DEF). Experimental results on the CASIA(B) gait database demonstrate the convincing performance of the proposed approach.

2. Gait periodicity detection

In this section, the major and minor axes of the fitted ellipse and their orientations, computed from spatial moments, are introduced. Because the parameters of the fitted ellipse can describe the shape variation of gait silhouettes over time, a new gait periodicity analysis method based on the fitted ellipse is then proposed and discussed.

2.1. Fitted ellipse

With the same second-order spatial moments, the gait silhouette region R is fitted into an ellipse whose center is the origin; its formula can be expressed as

R = \{(r, c) \mid d r^2 + 2 e r c + f c^2 \le 1\}    (1)

A relationship exists between the ellipse coefficients d, e, f and the second-order moments, i.e. the second-order row moment m_{rr}, second-order column moment m_{cc} and second-order mixed moment m_{rc}. It is given by

\begin{pmatrix} d & e \\ e & f \end{pmatrix} = \frac{1}{4(m_{rr} m_{cc} - m_{rc}^2)} \begin{pmatrix} m_{cc} & -m_{rc} \\ -m_{rc} & m_{rr} \end{pmatrix}    (2)

An algebraic meaning can thus be given to the second spatial moments. Moreover, since the lengths of the major and minor axes and the orientation of the ellipse are determined by the coefficients d, e and f, they are also determined by the second-order moments m_{rr}, m_{cc} and m_{rc}. These spatial moments measure the row variation from the row mean, the column variation from the column mean and the row-column variation from the centroid, respectively. They are defined as follows:

m_{rr} = \frac{1}{A} \sum_{(r,c) \in R} (r - \bar{r})^2    (3)

m_{cc} = \frac{1}{A} \sum_{(r,c) \in R} (c - \bar{c})^2    (4)

m_{rc} = \frac{1}{A} \sum_{(r,c) \in R} (r - \bar{r})(c - \bar{c})    (5)

where A is the area, a simple count of the pixels in R, and \bar{r} and \bar{c} denote the row mean and column mean, respectively:

A = \sum_{(r,c) \in R} 1    (6)

Table 1 shows the lengths of the major and minor axes and their orientations computed from the second-order spatial moments. It is then convenient to compute the length of the major axis (L), the length of the minor axis (l) and the eccentricity (E) of the fitted ellipse:

E = \frac{\sqrt{L^2 - l^2}}{L}    (7)

Fig. 1 illustrates the orientations and axes of the dual-ellipse fitting of the two halves of one gait silhouette. Owing to the centralization, the three second-order spatial moments are invariant to translation of the silhouette. Furthermore, E is determined by the spatial moments and normalized by L, which makes it robust to translation and scale changes of the silhouette. Therefore, E can characterize the silhouette region without each silhouette being centered or all gait images being resized to identical dimensions, and it can directly describe the shape variation of silhouettes with respect to time.

Fig. 1. Orientations and axes of the fitted ellipses.

Table 1
The lengths of the major and minor axes and their orientations computed from second-order spatial moments.

Case m_{rc} = 0, m_{rr} > m_{cc}: major axis length 4 m_{rr}^{1/2}, orientation -90 deg; minor axis length 4 m_{cc}^{1/2}, orientation 0 deg.

Case m_{rc} = 0, m_{rr} \le m_{cc}: major axis length 4 m_{cc}^{1/2}, orientation 0 deg; minor axis length 4 m_{rr}^{1/2}, orientation -90 deg.

Case m_{rc} \ne 0, m_{rr} \le m_{cc}: major axis length [8\{m_{rr} + m_{cc} + [(m_{rr} - m_{cc})^2 + 4 m_{rc}^2]^{1/2}\}]^{1/2}, orientation \tan^{-1}\{2 m_{rc} / (m_{rr} - m_{cc} - [(m_{rr} - m_{cc})^2 + 4 m_{rc}^2]^{1/2})\}; minor axis length [8\{m_{rr} + m_{cc} - [(m_{rr} - m_{cc})^2 + 4 m_{rc}^2]^{1/2}\}]^{1/2}, at an angle 90 deg counterclockwise from the major axis.

Case m_{rc} \ne 0, m_{rr} > m_{cc}: major axis length [8\{m_{rr} + m_{cc} + [(m_{rr} - m_{cc})^2 + 4 m_{rc}^2]^{1/2}\}]^{1/2}, orientation \tan^{-1}\{(m_{cc} - m_{rr} - [(m_{rr} - m_{cc})^2 + 4 m_{rc}^2]^{1/2}) / (2 m_{rc})\}; minor axis length [8\{m_{rr} + m_{cc} - [(m_{rr} - m_{cc})^2 + 4 m_{rc}^2]^{1/2}\}]^{1/2}, at an angle 90 deg counterclockwise from the major axis.
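To make Eqs. (3)-(7) and the closed forms of Table 1 concrete, the following minimal NumPy sketch computes the two eccentricities of one binary silhouette frame. It is an illustration under our own assumptions, not the authors' code: the function names are ours, and the silhouette is split into its two regions at the centroid column, following the left and right silhouettes of Fig. 2.

import numpy as np

def half_eccentricity(mask):
    """Eccentricity E (Eq. (7)) of the ellipse fitted to the binary
    region `mask` via its second-order central moments (Eqs. (3)-(5))."""
    rows, cols = np.nonzero(mask)                      # pixels (r, c) in R
    r_bar, c_bar = rows.mean(), cols.mean()            # row and column means
    m_rr = np.mean((rows - r_bar) ** 2)                # Eq. (3)
    m_cc = np.mean((cols - c_bar) ** 2)                # Eq. (4)
    m_rc = np.mean((rows - r_bar) * (cols - c_bar))    # Eq. (5)
    disc = np.hypot(m_rr - m_cc, 2.0 * m_rc)           # [(m_rr - m_cc)^2 + 4 m_rc^2]^(1/2)
    L = np.sqrt(8.0 * (m_rr + m_cc + disc))            # major axis length (Table 1)
    l = np.sqrt(8.0 * (m_rr + m_cc - disc))            # minor axis length (Table 1)
    return np.sqrt(L ** 2 - l ** 2) / L                # Eq. (7)

def dual_eccentricities(silhouette):
    """Split a silhouette at its centroid column and return (E1, E2)."""
    c0 = int(round(np.nonzero(silhouette)[1].mean()))  # centroid column
    left, right = silhouette.copy(), silhouette.copy()
    left[:, c0:] = 0                                   # keep left region only
    right[:, :c0] = 0                                  # keep right region only
    return half_eccentricity(left), half_eccentricity(right)

Note that the axis lengths are obtained here from the eigenvalues of the second-moment matrix, which is equivalent to evaluating the case-by-case formulas of Table 1.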

2.2. Gait periodicity analysis

To extract the walking silhouette, a difference image between the captured frame and an estimated background image is usually produced by background subtraction. First, single frames are extracted from the original RGB video and converted to grayscale. Secondly, either an original frame containing no human body or an image constructed by the Least Median of Squares method is selected as the initial background image for the whole video; the background image is then updated in real time. The pixels of the kth frame are classified as follows: a pixel cannot be used to update the background image if it is judged to be a moving human-body pixel; otherwise, it can be used to revise the background image. We distinguish the class of pixel (x, y) via the difference between the current frame f_k(x, y) and the previous background B_{k-1}(x, y). If |f_k(x, y) - B_{k-1}(x, y)| > T, where T is a threshold, then (x, y) is a moving human-body pixel and the background value of the previous frame is retained, namely g_k(x, y) = B_{k-1}(x, y); otherwise (x, y) is a background pixel, and g_k(x, y) = f_k(x, y). The background is updated as

B_k(x, y) = \frac{1}{m} \sum_{i = k-m+1}^{k} g_i(x, y)    (8)

where m is the number of accumulated frames; the initial background image serves as the background for the first m frames. In this paper we choose m = 10 and T = 35. Thirdly, the difference image is obtained by subtracting the current background from the grayed RGB frame. Finally, the silhouette is extracted through shadow elimination, dilation, erosion and single-connectivity analysis.
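A minimal sketch of this pixel classification and the running-average update of Eq. (8), assuming the frames arrive as grayscale NumPy arrays and the first frame is person-free; the Kapur entropy binarization, shadow elimination and morphological clean-up of the pipeline are deliberately omitted, and the function name is ours.

import numpy as np

T, M = 35, 10   # threshold T and window length m chosen in the paper

def foreground_masks(frames):
    """Classify each pixel against the previous background B_{k-1} and
    update the background with the running average of Eq. (8)."""
    frames = [np.asarray(f, dtype=np.float64) for f in frames]
    B = frames[0].copy()                      # initial background image
    g_history = [B.copy()]                    # update images g_i of Eq. (8)
    masks = []
    for f in frames[1:]:
        moving = np.abs(f - B) > T            # moving human-body pixels
        g = np.where(moving, B, f)            # g_k: keep old background there
        g_history.append(g)
        B = np.mean(g_history[-M:], axis=0)   # Eq. (8): B_k = (1/m) sum g_i
        masks.append(moving)                  # raw mask; clean up afterwards
    return masks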

After removing redundant frames that contain an incomplete silhouette, we extract the centroids of the remaining frames and, for each frame, divide the whole silhouette into two halves, as illustrated in Fig. 1. These two regions are then fitted into two ellipses, respectively, whose eccentricities E1 and E2 are calculated by Eq. (7). Since the appearance of the ellipses directly depends on the limbs' fluctuation, we model the gait fluctuation as a periodic function of the eccentricities of the two halves of the silhouette over time:

\varphi_t = \frac{(a E_1 + b E_2)_t - \min\{(a E_1 + b E_2)_t\}}{\max\{(a E_1 + b E_2)_t\} - \min\{(a E_1 + b E_2)_t\}}    (9)

where the weights a, b \in \{0, 1\}: if an eccentricity signal cannot reflect the gait periodicity, its weight is set to zero. (\cdot)_t denotes the synthesized signal at time t (t = 1, ..., N), where N is the total number of gait video frames, and \max\{\cdot\} and \min\{\cdot\} denote the maximum and minimum over the whole sequence, respectively. A gait cycle is defined as a sequence of silhouettes taken from one extreme value of the parameter \varphi_t to the following extreme value. Fig. 2 gives an overview of the proposed gait periodicity analysis method.
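Given per-frame eccentricity sequences E1 and E2 (e.g., from the dual_eccentricities sketch above), Eq. (9) reduces to a min-max normalization; the Gaussian smoothing shown here mirrors the smoothed curves of Fig. 4, with a smoothing width we assume since the paper does not report one.

import numpy as np
from scipy.ndimage import gaussian_filter1d

def synthesized_signal(E1, E2, a=1.0, b=1.0, sigma=2.0):
    """Normalized periodicity signal phi_t of Eq. (9). Set the weight a or
    b to 0 when that half-silhouette shows no periodic change; sigma is an
    assumed smoothing width."""
    s = gaussian_filter1d(a * np.asarray(E1) + b * np.asarray(E2), sigma)
    return (s - s.min()) / (s.max() - s.min())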

Fig. 2. Overview of the proposed method (gait-sequence video -> RGB-frame graying -> background image and background updating -> Kapur entropy binarization -> mathematical morphology and single-connectivity analysis -> redundant frames removal -> centroid extraction -> left and right silhouettes -> eccentricities E1 and E2 -> gait periodicity).

3. Experimental results

To study the characteristics of the proposed gait periodicity analysis method, we evaluate it on the CASIA(B) gait database [13], which contains 124 subjects captured from 11 views; each subject has 10 sequences per view: 6 normal gaits, 2 gaits with a bag and 2 gaits with a coat. In this section we first briefly introduce our experimental data and report the performance of the proposed DEF approach; we then give comparative experimental results against Kale's method [4], Chen's method [11] and Mori's method [12].

3.1. Performance of the DEF

We select four kinds of typical gaits to illustrate the gait periodicity detection: (a) normal lateral-view gait; (b) lateral-view gait with a bag; (c) front-view gait and (d) 162-degree-view gait, shown in Fig. 3. Their corresponding gait periodicity detection results are shown in Fig. 4.


Fig. 3. Four kinds of typical gaits: (a) normal lateral-view, (b) lateral-view with a bag, (c) front-view and (d) 162-degree-view.


The top row and middle row of Fig. 4 show the curves of E1 and E2 over time, smoothed by a Gaussian filter, and the bottom row shows the synthesized signal curve of Eq. (9). A normal lateral-view gait is the ideal case. The nature of bipedal locomotion demands vertical oscillations of the body, and when the legs are spread furthest apart, the eccentricities of the two fitted ellipses drop to local minima at nearly the same time. Therefore, owing to the gait's symmetry, a half gait cycle lies between two extreme points of the synthesized signal with a = b = 1. Once a bag is carried, the periodicity characteristic of walking changes: the ellipse fitted to the right half of the silhouette shows no periodic change, so b is set to zero and only the eccentricity E1 contributes to the detection result of the synthesized signal. When a person walks toward the camera, the height of the body in the image plane appears to grow, and there is a quarter-cycle difference between the periodic characteristics of the two halves of the silhouette; a whole gait cycle therefore runs from the first extreme point to the third extreme point of the synthesized signal with a = b = 1. For these three situations, a period is defined as the interval from the first extreme point to the third extreme point, and it contains two half-periods because, under the symmetry of 2D imaging, the left and right limbs swing alternately to nearly the same image positions in both the lateral view and the front view. For the 162-degree view, in contrast, a period runs from the first local extreme point to the second. The method proposed in this paper achieves 100% accuracy of gait periodicity detection on the CASIA(B) gait database.
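The view-dependent counting rule above can be sketched with off-the-shelf peak detection; scipy.signal.find_peaks locates the extreme points of the synthesized signal, and the one- versus two-half-period span is our paraphrase of the rule, not the authors' implementation.

import numpy as np
from scipy.signal import find_peaks

def one_gait_cycle(phi, lateral_or_frontal=True):
    """Return frame indices bounding one gait cycle from the extreme
    points of the synthesized signal phi (Eq. (9)). For lateral and
    frontal views a cycle spans two half-periods (1st to 3rd extreme
    point); for oblique views such as 162 deg it spans one (1st to 2nd)."""
    phi = np.asarray(phi, dtype=np.float64)
    maxima, _ = find_peaks(phi)               # local maxima
    minima, _ = find_peaks(-phi)              # local minima
    extrema = np.sort(np.concatenate([maxima, minima]))
    span = 2 if lateral_or_frontal else 1     # extreme points per cycle
    if extrema.size <= span:
        raise ValueError("too few extreme points to bound a gait cycle")
    return int(extrema[0]), int(extrema[span])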

3.2. Comparative experimental results

Because most existing work addresses non-frontal gait period detection, front-view gait period detection remains a challenging problem. We therefore estimate gait periodicity under the frontal view using (a) Kale's method [4], (b) Chen's method [11] and (c) Mori's method [12] (see Fig. 5(a)-(c)). In each case, the top and bottom subfigures show the results without and with normalization, respectively. For Kale's method, as the human body moves nearer and nearer to the camera, the silhouette becomes broader, so each gait image in a sequence must be centered and resized to a uniform size (a second-order tensor), e.g. 64 x 64 pixels.

Fig. 4. Detection results of gait periodicity.


Fig. 5. Detection results of other methods: (a) Kale’s method; (b) Chen’s method and (c) Mori’s method.

Table 2
The average time (s) consumed by the four methods (CPU: Intel(R) Core(TM)2 Duo T8300 @ 2.40 GHz; RAM: 1 GB).

Method           Without normalization   With normalization
DEF              4.4                     -
Kale's method    2.4                     3.7
Chen's method    4.5                     5.2
Mori's method    18,330.4                1427.1

However, the width changes of the gait silhouette are very small, fluctuating within four pixels, as can be seen in Fig. 5(a); this result implies that Kale's method has no partitioning power for the gait period. For Chen's method, normalization by resizing to 64 x 64 pixels performs no better than no normalization, because the distribution of the signal constructed by Chen et al. depends on an aspect ratio, which is itself a form of normalization. From Fig. 5(b) it can be seen that the posture variations over the periods are not well captured by the lower limbs; moreover, Chen's method is not significantly better than Kale's. Mori et al. computed the normalized autocorrelation of the gait silhouette sequence along the temporal axis; for the same reason as with Kale's method, each gait image in a sequence is centered and resized to 64 x 64 pixels. The descriptor derived by Mori et al. can describe the shape variation of the silhouette to some extent, but it is less clear than our method. We also compare the time complexity of the above three methods with ours. Time complexity is in fact related to the length of a video sequence, so we select all sequences with roughly 100 frames to test the average runtime of each method, listed in Table 2. The time consumed by our method is close to that of Kale's and Chen's methods. Since every pixel of the whole video must be processed to compute the normalized autocorrelation of the gait silhouette sequence in Mori's method, its runtime is the longest, more than 300 times as long as ours.

4. Conclusion

We propose a new dual-ellipse fitting approach for robust gait periodicity detection, in which the two regions of the whole silhouette divided at the centroid are fitted into two ellipses, respectively. Experimental results demonstrate the effectiveness of the gait periodicity detection for arbitrary walking directions and for carrying a bag. Automatically partitioning a video sequence into cycles is essential for identifying a periodic action over elapsed time; the present study therefore appears to be a good addition to the work in [14].

Acknowledgment

We sincerely thank the Institute of Automation, Chinese Academy of Sciences for granting us permission to use the CASIA(B) gait database. This project has been partly supported by the National Science Foundation for Post-doctoral Scientists of China (Grant no. 20110491087).

References

[1] C. BenAbdelkader, R. Cutler, L. Davis, Motion-based recognition of people in EigenGait space, in: Proceedings of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition, 2002, pp. 267–272.
[2] C. BenAbdelkader, R. Cutler, L. Davis, Stride and cadence as a biometric in automatic person identification and verification, in: Proceedings of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition, Washington, DC, USA, 2002, pp. 357–362.


[3] R.T. Collins, R. Gross, Jianbo Shi, Silhouette-based human identification from body shape and gait, in: Proceedings of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition, 2002, pp. 366–371.
[4] A. Kale, A. Sundaresan, A.N. Rajagopalan, N.P. Cuntoor, A.K. Roy-Chowdhury, V. Kruger, R. Chellappa, Identification of humans using gait, IEEE Transactions on Image Processing 13 (9) (2004) 1163–1173.
[5] N.V. Boulgouris, K.N. Plataniotis, D. Hatzinakos, Gait recognition using dynamic time warping, in: Proceedings of the IEEE International Symposium on Multimedia Signal Processing, 2004, pp. 263–266.
[6] S. Sarkar, P.J. Phillips, Z. Liu, I.R. Vega, P. Grother, K.W. Bowyer, The human ID gait challenge problem: data sets, performance and analysis, IEEE Trans. Pattern Anal. Mach. Intell. 27 (2) (2005) 162–177.
[7] Hong-Gui Li, Cui-Ping Shi, Xing-Guo Li, LLE based gait recognition, in: Proceedings of the 2005 International Conference on Machine Learning and Cybernetics, 2005, pp. 4516–4521.
[8] G.V. Veres, M.S. Nixon, L. Middleton, J.N. Carter, Fusion of dynamic and static features for gait recognition over time, in: Proceedings of the Eighth International Conference on Information Fusion, 2005, pp. 1204–1210.
[9] E. Gedikli, M. Ekinci, Silhouette based gait recognition, in: Proceedings of the IEEE 15th Signal Processing and Communications Applications, 2007, pp. 1–4.
[10] Q. Ma, S. Wang, D. Nie, J. Qiu, Recognizing humans based on gait moment image, in: Proceedings of the Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD), vol. 2, 2007, pp. 606–610.
[11] Chen Shi, Ma Tian-jun, Huang Wan-hong, Gao You-xing, A multi-layer windows method of moments for gait recognition, J. Electron. Inf. Technol. 31 (1) (2009) 116–119.
[12] A. Mori, Y. Makihara, Y. Yagi, Gait recognition using period-based phase synchronization for low frame-rate videos, in: Proceedings of the 20th International Conference on Pattern Recognition (ICPR), 2010, pp. 2194–2197.
[13] S. Yu, D. Tan, T. Tan, A framework for evaluating the effect of view angle, clothing and carrying condition on gait recognition, in: Proceedings of the 18th International Conference on Pattern Recognition (ICPR06), Hong Kong, China, 2006, pp. 441–444.
[14] L. Shao, L. Ji, Y. Liu, J. Zhang, Human action segmentation and recognition via motion and shape analysis, Pattern Recogn. Lett., doi:10.1016/j.patrec.2011.05.015.

Xianye Ben was born in Harbin, China, in 1983. She received the B.S. degree in electrical engineering and automation from the College of Automation, Harbin Engineering University, Harbin, China, in 2006, and the Ph.D. degree in pattern recognition and intelligent systems from the College of Automation, Harbin Engineering University, Harbin, in 2010. She is currently working as an Assistant Professor in the School of Information Science and Engineering, Shandong University, Jinan, China. She has published more than 20 papers in major journals and conferences. Her current research interests include pattern recognition, digital image processing and analysis, and machine learning.

Weixiao Meng was born in Harbin, China, in 1968. He received his B.Sc. degree in Electronic Instrument and Measurement Technology from Harbin Institute of Technology (HIT), China, in 1990, and his M.S. and Ph.D. degrees, both in Communication and Information Systems, from HIT in 1995 and 2000, respectively. He is now a professor in the School of Electronics and Communication Engineering, HIT. He is a senior member of the IEEE, a senior member of the China Institute of Electronics and the China Institute of Communication, and a member of the Expert Advisory Group on Harbin E-Government. His research interests mainly focus on adaptive signal processing. In recent years he has published one authored book and more than 100 academic papers in journals and at international conferences, more than 60 of which have been indexed by SCI, EI and ISTP. Up to now he has completed more than 20 research projects and holds 6 Chinese patents; one standard proposal was accepted by the IMT-Advanced technical group.

Rui Yan was born in Jilin, China, in 1988. He received the B.S. degree in automation from the College of Automation, Harbin Engineering University, Harbin, China, in 2011. He is currently a Ph.D. student in the Mechanical, Aerospace and Nuclear Engineering Department, Rensselaer Polytechnic Institute. His current research interests include pattern recognition and virtual reality.