Physiological emotion analysis using support vector regression


Neurocomputing 122 (2013) 79–87


Chuan-Yu Chang a,*, Chuan-Wang Chang b, Jun-Ying Zheng a, Pau-Choo Chung c

a Department of Computer Science and Information Engineering, National Yunlin University of Science & Technology, Yunlin, Taiwan
b Department of Computer Science and Information Engineering, Far East University, Tainan, Taiwan
c Department of Electrical Engineering, National Cheng Kung University, Tainan, Taiwan

Article history: Received 31 October 2012; received in revised form 1 February 2013; accepted 3 February 2013; available online 14 June 2013.

Abstract

Physical and mental health is deeply affected by stress and negative emotions. Emotions can be roughly recognized from facial expressions, but since facial expressions may be controlled and are expressed differently by different people, inaccurate recognition is likely. In contrast, physiological responses and the corresponding signals are hard to control when emotions are aroused. Hence, an emotion recognition method that considers physiological signals is proposed in this paper. We designed a specific emotion induction experiment to collect five physiological signals of subjects, namely electrocardiogram (ECG), respiration, galvanic skin response (GSR), blood volume pulse (BVP), and pulse. We use support vector regression (SVR) to train the trend curves of three emotions (sadness, fear, and pleasure). Experimental results show that the proposed method achieves a recognition rate of up to 89.2%. © 2013 Elsevier B.V. All rights reserved.

Keywords: Emotion recognition; Emotion induction experiment; Physiological signal; Support vector regression; Emotion trend curve

1. Introduction

Emotional expressions play an important role in human-to-human interaction. General expressions include tone of voice, body language, and word choice. Human emotions are accompanied by physiological signal changes. Common physiological signal patterns, including electrocardiography (ECG), heart rate, respiration, skin conductivity, blood volume pressure, finger temperature, electromyography (EMG), and electroencephalography (EEG), are used in psychology, physiology, medicine, and human-computer interaction. In addition, physiological signals can be used to help assess and quantify stress, tension, anger, and other emotions that influence health. In general, physiological responses are produced by the autonomic nervous system, so the responses and the corresponding signals are difficult to control when a person is overcome with emotion. Many studies have analyzed physiological signals to determine and classify various kinds of emotion [1–3]. Leon et al. analyzed four kinds of signals to classify eight classes of emotion [4]. Mauss et al. calculated five features from ECG signals to analyze the impact of participants' angry memories [5]. Bailenson et al. combined human faces and physiological signals to obtain 251 features, from which a feature selection method was designed to find significant features of emotions [6]. In emotion recognition research, it is necessary to collect many physiological signals that represent specific emotional states.

Kim et al. used dolls and situational stories to establish an environment that induced participants' emotions [7]. Mandryk and Atkins recorded the physiological signals of subjects playing video games [8]. Jonghwa designed a musical induction approach that naturally led subjects to real emotional states outside a lab setting; participants were requested to fill out an emotion model [9] (see Fig. 1). Katsis et al. designed a racing simulator and measured the excitement level of participants [10]. Many classifiers [11,12] have been used in emotion classification, such as support vector machines (SVMs) [7,8], multilayer perceptrons (MLPs) [5,11], the adaptive neuro-fuzzy inference system (ANFIS) [10], and linear discriminant analysis (LDA) [9]. The features extracted from various physiological signals are used for classification. Since a given emotion produces similar physiological responses, support vector regression (SVR) [13] is adopted in this paper to find the trend curve of each emotion.

The rest of this paper is organized as follows. Section 2 presents the emotion induction experiment and the collected physiological signals. The support vector regression approach is described in Section 3. Experimental results are given in Section 4. Conclusions are drawn in Section 5.

2. Emotion induction experiment and physiological signal sensors

2.1. Emotion induction experiment

* Corresponding author. Tel.: +886 5 5342601x4337. E-mail address: [email protected] (C.-Y. Chang).

0925-2312/$ - see front matter © 2013 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/j.neucom.2013.02.041

In order to collect emotional physiological signals, an experiment for inducing emotion from participants was performed.


Fig. 2. Flow diagram of the emotion induction experiment: experiment introduction and instruction → wear five biosensors → take a rest of 5 minutes → fill out the pre-test questionnaire → watch the emotion induction movie → fill out the post-test questionnaire.

Fig. 1. The emotion model [9].

Audio-visual equipment has been used to induce emotional responses in some studies [7,14,15]. Bailenson et al. asked participants to watch 9-min film clips with amusing, sad, and neutral plots [7]. Nasoz and Lisetti designed a 45-min slide show to induce five emotions (anger, surprise, fear, frustration, and amusement) [14]. We believe that short videos or static images are insufficient to induce emotions from participants. Therefore, participants in our study were required to watch a 1.5-h movie. In the proposed emotion induction experiment, we prepared three movies (Sad Movie, The Grudge: Old Lady in White, and Superhero Movie) to induce three emotions (sadness, fear, and pleasure). The selected movies are briefly described as follows:

1. Sad Movie is a 2005 South Korean romantic melodrama. It follows four couples through their trials, pains, heartaches, and subsequent separations. This movie is used to induce sadness.

2. The Grudge: Old Lady in White is a 2009 Japanese horror film directed by Ryuta Miyake. In the film, a curse is born when someone dies: at a certain house, a son brutally murdered all five of his family members after failing the bar exam and then hanged himself, leaving behind a cassette recorder at the scene on which his voice says "Go…go, now". This movie is used to induce fear.

3. Superhero Movie is a 2008 American spoof film directed by Craig Mazin. It parodies superhero films, mainly Spider-Man and other modern Marvel Comics film adaptations. This movie is used to induce pleasure.

Fig. 2 shows the procedure of the designed emotion induction experiment. An audio-visual room was prepared so that participants could watch the movies individually. When a participant sat down, the physiological signal sensors were attached, and their function was described. The participant then closed his or her eyes for 5 min, during which all lights were turned off, and afterwards filled out a pre-test questionnaire to measure his or her emotional state. The participants watched the movies alone in the audio-visual room on a 42″ LCD TV. Fig. 3 shows a participant watching an emotion induction movie, and Fig. 4 shows a participant wearing the five physiological signal sensors used to collect physiological signals. When the movie finished, the participant filled out a post-test questionnaire that mainly asked questions about the plot. A non-verbal pictorial assessment technique, the Self-Assessment Manikin (SAM), is adopted to measure a person's

Fig. 3. A participant watching an emotion induction movie.

affective reaction to a wide variety of stimuli [16]. Fig. 5 shows the Self-Assessment Manikin questionnaire.

2.2. Physiological signal sensors

The physiological signals were acquired using the ML870 PowerLab 8/30 data acquisition system with five physiological signal sensors (see Fig. 4) that measured the electrocardiogram (ECG), respiration, galvanic skin response (GSR), blood volume pulse (BVP), and pulse signals. The sampling rate was 400 Hz for all channels. The ECG signals were measured from both wrists and the right ankle using the three-electrode approach. Respiration signals were measured from the chest with a flexible strap. The GSR signals were measured from two metal electrodes attached to the index and middle fingers of the right hand. The BVP signals were measured with a clamp containing an infrared sensor attached to the middle finger of the left hand. The pulse signals were measured with a piezoelectric sensor attached to the ring finger of the left hand.
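Downstream of acquisition, the features used in this paper (Section 3.1) are beat-to-beat measures such as the R–R interval. As an illustrative sketch only (the detector thresholds and the synthetic trace below are our assumptions, not part of the paper), R–R intervals can be extracted from a 400 Hz ECG with SciPy's `find_peaks`:

```python
import numpy as np
from scipy.signal import find_peaks

FS = 400  # sampling rate used in the paper (Hz)

def rr_intervals(ecg, fs=FS):
    """Return successive R-R intervals (in seconds) of an ECG trace."""
    # Detect R peaks: prominent maxima at least 0.3 s apart
    # (a ~200 bpm ceiling). Both thresholds are illustrative choices.
    peaks, _ = find_peaks(ecg, distance=int(0.3 * fs),
                          prominence=0.5 * np.ptp(ecg))
    return np.diff(peaks) / fs

# Synthetic ECG-like trace: one sharp spike per second plus mild noise.
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / FS)
ecg = np.exp(-((t % 1.0 - 0.5) ** 2) / 0.0005) + 0.02 * rng.standard_normal(t.size)
rr = rr_intervals(ecg)
print(rr.round(2))  # roughly 1.0 s between simulated beats
```

In the paper, the resulting R–R series is then re-sampled at 4 Hz and normalized before SVR training.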

3. The proposed method

The flowchart of the proposed emotion recognition system is shown in Fig. 6. A preprocessing step is applied to remove noise and to extract significant features from the physiological signals (the ECG, GSR, BVP, and pulse signals). After preprocessing, the emotional signal segments (R–R interval of the ECG, GSR, peak of the BVP, and peak of the pulse) were used to train the SVR, yielding three emotional trend curves (sadness, fear, and pleasure). In the testing phase, the emotion state of the input data was determined according to the similarity between the emotional trend curves and an unknown emotional curve.
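A minimal sketch of the preprocessing stage (low-pass and high-pass filtering, re-sampling, and min-max normalization to [0, 1], as detailed in Section 3.1), assuming SciPy; the Butterworth design and filter order are our illustrative assumptions, and the cut-offs shown are the paper's ECG settings:

```python
import numpy as np
from scipy.signal import butter, filtfilt, resample

FS = 400  # acquisition sampling rate (Hz)

def preprocess(sig, fs=FS, lp_cut=15.0, hp_cut=4.0, out_fs=4):
    """Low-pass to remove noise, high-pass to remove baseline wander,
    re-sample to a lower rate, then normalize intensity to [0, 1]."""
    b, a = butter(2, lp_cut / (fs / 2), btype="low")
    sig = filtfilt(b, a, sig)
    b, a = butter(2, hp_cut / (fs / 2), btype="high")
    sig = filtfilt(b, a, sig)
    sig = resample(sig, int(len(sig) * out_fs / fs))
    return (sig - sig.min()) / (sig.max() - sig.min())

x = np.random.default_rng(0).standard_normal(FS * 10)  # dummy 10 s signal
y = preprocess(x)
print(len(y), y.min(), y.max())  # 40 samples spanning [0, 1]
```

Note that in the paper the cut-off frequencies differ per signal (e.g. a 1 Hz low-pass for GSR), and only the ECG, respiration, and BVP channels are high-pass filtered.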


3.1. Preprocessing

Biosensors used for physiological signal acquisition (labels in Fig. 4): (A) the electrocardiographic sensors, (B) the respiratory sensors, (C) the GSR sensors, (D) the BVP sensors, (E) the pulse sensors.

There are four processes in the preprocessing stage. We used a low-pass filter and a high-pass filter to remove noise and baseline wander from the physiological signals. The cut-off frequency of the low-pass filter was set separately for each physiological signal: 15 Hz for ECG, 2 Hz for respiration, 1 Hz for GSR, 40 Hz for BVP, and 20 Hz for pulse. In addition, to remove baseline wander from the ECG, respiration, and BVP signals, a high-pass filter was applied with cut-off frequencies of 4 Hz, 0.1 Hz, and 1 Hz, respectively. To obtain useful information, we calculated the duration between two successive R waves of the ECG (the R–R interval), the number of breaths taken within a set amount of time (the respiration rate), the peak of the BVP, and the peak of the pulse. To reduce the amount of data, the GSR signal was re-sampled at 32 Hz; the R–R interval, respiration rate, and peaks of the BVP and pulse were re-sampled at 4 Hz [8]. In the post-test questionnaire, each question related to one movie played in the experiment, and participants reported the emotion induced by each movie. The questionnaire answers thus identified the emotions induced in the participants. Because an emotion is a short-term physiological response, we extracted emotion response signals of 10 s. Moreover, the intensity of each signal was normalized to [0, 1] by

Fig. 4. A participant wearing five biosensors to collect physiological signals.

x′(t) = (x(t) − min) / (max − min)    (1)

where x′(t) is the normalized signal intensity, x(t) is the intensity of the original signal, and min and max are the minimum and maximum intensities of the original signal. The emotion response signals were then input to the SVR model.

3.2. Support vector regression

Fig. 5. Self-Assessment Manikin Questionnaire [16].

Support vector machines (SVMs), proposed by Vapnik et al., are supervised learning machines [17]. SVMs, which generalize well, generate a hyperplane that separates two data sets. SVMs can also be applied to regression problems, where they are known as support vector regression (SVR) [13]. Like an SVM, SVR seeks a hyperplane in a feature space; however, whereas an SVM finds a hyperplane that separates the data into two parts, SVR finds a hyperplane that accurately predicts the distribution of the original data (see Fig. 7). In this paper, SVR is used to find the three kinds of emotion trend curves. Suppose that a training data set is given by

{(x_1, y_1), (x_2, y_2), …, (x_l, y_l)} ⊂ X × ℝ    (2)

where x_i denotes the time index, y_i ∈ {E, R, G, B, P} denotes the signal intensities of E (R–R interval), R (respiration rate), G (GSR), B (peak of BVP), and P (peak of pulse), and l is the signal length. The parameter l is set to 40 for all signals except the GSR, for which it is set to 320. The goal of ε-SVR is to find a function f(x) such that the deviation between the actual target value y_i and the predicted target is at most ε [18]. The function f is described as follows:

f(x) = w · x + b,  w ∈ X, b ∈ ℝ    (3)

where w is the hyperplane parameter solved for by SVR. We can rewrite this problem as a convex optimization problem:

minimize (1/2)||w||²
subject to |y_i − (w · x_i + b)| ≤ ε    (4)

Fig. 6. Block diagram of the proposed emotion recognition system.

where ε ≥ 0 denotes the maximum deviation between the actual and predicted targets. However, noise exists in most applications. This can be handled by introducing (non-negative) slack


Table 1. Number of emotional response signals of each physiological signal.

Type of emotion   Type of physiological signal   Number of emotional response signals
Sadness           R–R interval                    84
                  GSR                             94
                  Peak of the BVP                 75
                  Peak of the pulse               48
Fear              R–R interval                   185
                  GSR                             80
                  Peak of the BVP                152
                  Peak of the pulse              133
Pleasure          R–R interval                   123
                  GSR                            108
                  Peak of the BVP                127
                  Peak of the pulse              150

Fig. 7. Linear SVR in the feature space (input instances are indicated by crosses).

variables ξ_i, ξ_i* to measure the deviation of training samples outside the ε-insensitive zone. Thus support vector regression is formulated as the minimization of the following functional:

minimize (1/2)||w||² + C Σ_{i=1}^{l} (ξ_i + ξ_i*)
subject to y_i − w · x_i − b ≤ ε + ξ_i,  w · x_i + b − y_i ≤ ε + ξ_i*,  ξ_i, ξ_i* ≥ 0    (5)

In Eq. (5), each training sample has its own ξ_i and ξ_i* values, which determine whether the training instance falls outside the ε-insensitive zone. The penalty parameter C > 0 determines the trade-off between the flatness of f and the amount up to which deviations larger than ε are tolerated. The optimization problem of Eq. (5) can be solved with the Lagrange multiplier technique and standard quadratic programming. Finally, the regression function f(x) is given by

f(x) = Σ_{i=1}^{N} (α_i − α_i*) K(x_i, x) + b    (6)

where K(x_i, x) = φ^T(x_i) φ(x) is the kernel function. Some of the coefficients α_i − α_i* have nonzero values; the corresponding training instances are known as support vectors and have approximation errors equal to or larger than the error level ε. In this paper, an SVR with a radial basis function (RBF) kernel is adopted to train the three emotion models.
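A sketch of fitting one emotion trend curve with an RBF-kernel ε-SVR, here via scikit-learn rather than the LIBSVM package [19] used in the paper, with the parameter values reported in Section 4 (C = 1, ε = 0.2, σ = 0.57738); the training segments below are synthetic stand-ins for the normalized emotion response signals, and the γ = 1/(2σ²) conversion is our assumption about the kernel parameterization:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Twenty synthetic normalized 10 s response segments sampled at 4 Hz
# (l = 40 points per segment, as for the R-R interval signals).
t = np.linspace(0.0, 10.0, 40)
segments = [np.clip(0.5 + 0.3 * np.sin(0.3 * t)
                    + 0.05 * rng.standard_normal(t.size), 0.0, 1.0)
            for _ in range(20)]

X = np.tile(t, len(segments)).reshape(-1, 1)  # time index (cf. Eq. (2))
y = np.concatenate(segments)                  # signal intensity

# RBF kernel exp(-gamma * ||x - x'||^2) with gamma = 1 / (2 * sigma^2)
sigma = 0.57738
svr = SVR(kernel="rbf", C=1.0, epsilon=0.2, gamma=1.0 / (2.0 * sigma ** 2))
svr.fit(X, y)

trend_curve = svr.predict(t.reshape(-1, 1))   # the emotion trend curve f_k(t)
print(trend_curve.shape)  # (40,)
```

One such regressor is trained per emotion and per signal type; its predictions over the 10 s window form trend curves like those in Fig. 9.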

4. Experimental results

In the emotion induction experiment, we collected physiological signals from 11 participants (eight males and three females) aged 22–26. Three emotional physiological signal classes, namely sadness, fear, and pleasure, were used. After the preprocessing stage, a total of 1359 emotional response signals were obtained. Table 1 shows the number of emotional response signals for each physiological signal. The participants were randomly divided into two groups: one half was used for training, and the other for testing. Ten samples of the three emotional response signals obtained from one participant are shown in Fig. 8. The length of each response signal is 10 s. Since people breathe only two or three times in 10 s, the respiration signal is not suitable for emotion recognition over such a short time. Fig. 9 shows the emotional trend curves obtained by SVR training. In Figs. 8 and 9, the x-axis represents the time index and the y-axis represents the signal intensity. In Fig. 9(a), the R–R interval trend curve for sadness decreases slowly and then increases sharply. In Fig. 9(b–d), the trend curves of the BVP and pulse peaks for sadness are similar to the R–R interval curve, but with a smaller increase in slope; the GSR signal decreases rapidly between 2 and 10 s. In Fig. 9(e–h), the R–R interval, BVP peak, and pulse peak trend curves for fear all decrease, whereas the GSR trend curve for fear increases. In Fig. 9(i), the R–R interval for pleasure produces a decreasing-then-increasing waveform, but its response time is usually short. In Fig. 9(j–l), the BVP and pulse peak trend curves for pleasure are similar to those for sadness, but with a different increase in slope; the GSR signal decreases slowly between 2 and 10 s. In this paper, recognition accuracy is calculated by

Accuracy = T_N / N    (7)

where N is the number of test samples and T_N is the number of correct recognitions. In ε-SVR [19], the parameters ε, σ, and C should be set appropriately to obtain high accuracy. Since the penalty parameter C does not affect the prediction capacity here, C was set to 1. σ is the standard deviation of the RBF kernel, which can be calculated from the input instances of the SVR; in this paper, σ is set to 0.57738. To obtain a proper ε for the SVR, experiments with various values of ε and fixed values of C and σ were performed. The results are shown in Fig. 10. The highest accuracy (90.6%) was obtained when ε was set to 0.2. Accordingly, all experiments were carried out with C = 1, σ = 0.57738, and ε = 0.2. Fig. 11 shows the flow diagram of the emotion recognition experiment in the testing stage. After preprocessing, the test signal was compared with the three emotional trend curves, and the least squared error (LSE) was used to determine the emotion. The test signal f(x) was compared with each emotional trend curve f_k(t) (obtained from the SVR) by

LSE_k = (1/N) Σ_{i=1}^{N} [f_k(t_i) − f(x_i)]²    (8)

where k ∈ {s, f, p} represents the emotions sadness, fear, and pleasure, respectively. Three LSE_k values were thus obtained. The potential emotion (PE) was obtained using

PE = arg min_k {LSE_k}    (9)

where PE is the index of the smallest of the values LSE_s, LSE_f, and LSE_p. Since only three emotions were considered, if the difference


Fig. 8. Samples of physiological signals for three emotions: (a–d) the R–R interval, GSR, peak of BVP, and peak of pulse for sadness; (e–h) the R–R interval, GSR, peak of BVP, and peak of pulse for fear; (i–l) the R–R interval, GSR, peak of BVP, and peak of pulse for pleasure.


Fig. 9. Emotion trend curves for three emotions: (a–d) the R–R interval, GSR, peak of the BVP, and peak of the pulse trend curves for sadness; (e–h) the same signals for fear; (i–l) the same signals for pleasure.


Fig. 10. Accuracy for different values of ε.

Table 2. Accuracy of emotion recognition by R–R interval.

R–R interval       Sadness   Fear   Pleasure   Uncertain   Accuracy (%)
Sadness                 31      0          4           4           79.5
Fear                     0     89          0           1           98.9
Pleasure                 0      1         58           1           96.7
Average accuracy                                                   91.7

Table 3. Accuracy of emotion recognition by GSR.

GSR                Sadness   Fear   Pleasure   Uncertain   Accuracy (%)
Sadness                 45      0          0           0          100.0
Fear                     0     32          6           0           84.2
Pleasure                12      1         40           0           75.5
Average accuracy                                                   86.6

Fig. 11. Flow diagram of the emotion recognition experiment.

between the test signal and the potential emotion's trend curve was larger than a threshold T, the potential emotion was regarded as uncertain. That is,

emotion = { uncertain, if LSE_PE > T;  PE, otherwise }    (10)

In this paper, the threshold T was set to 0.06. Tables 2–5 show the recognition results for the various signals. The smallest average accuracy is 80.2% and the highest is 91.7%. In Table 3, the GSR signal has a lower accuracy for pleasure because the GSR trend curves for sadness and pleasure are similar; since the GSR trend curve for pleasure decreases more slowly than that for sadness, and the physiological signals may sometimes be affected by noise, such signals are not always classified correctly. In Table 5, the pulse peak signal has a lower accuracy for fear: the pulse peak trend curves for sadness and fear both decrease initially, but then one decreases while the other increases, and this difference causes misclassification. Table 6 compares our method with Nasoz's method [14], in which features including the average GSR, heart rate, and skin temperature are used with a k-Nearest Neighbor (KNN) classifier to recognize emotion states. Since the heart rate is calculated from the R–R interval, the R–R interval signal was used instead of the heart rate signal. Moreover, the

Table 4. Accuracy of emotion recognition by peak of BVP signals.

Peak of BVP signals   Sadness   Fear   Pleasure   Uncertain   Accuracy (%)
Sadness                    36      0          1           0           97.3
Fear                        0     66          6           0           91.7
Pleasure                    6      4         53           0           84.1
Average accuracy                                                      91.0

Table 5. Accuracy of emotion recognition by peak of pulse signals.

Peak of pulse signals   Sadness   Fear   Pleasure   Uncertain   Accuracy (%)
Sadness                      20      3          1           0           83.3
Fear                         12     49          0           2           77.8
Pleasure                     13      0         58           2           79.5
Average accuracy                                                        80.2

skin temperature signal was not adopted. The comparison experiment was conducted on our database to recognize the three emotions (sadness, fear, and pleasure); the database was divided into two parts, one half for training and the other for testing.


Table 6. Comparison with Nasoz's method [14].

                    Nasoz's method (%)   Our method (%)
Sadness                           81.6             89.8
Fear                              72.8             91.6
Pleasure                          70.1             86.1
Average accuracy                  74.8             89.2

As shown in Table 6, the accuracies for recognizing sadness, fear, and pleasure are 81.6%, 72.8%, and 70.1%, respectively, for Nasoz's method, with an average accuracy of 74.8%. For our method, the accuracies are 89.8%, 91.6%, and 86.1%, with an average accuracy of 89.2%. The experimental results therefore show that the proposed method outperforms Nasoz's method.
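The decision rule of Eqs. (8)–(10) used in these experiments can be sketched as follows; the trend curves and the test segment are illustrative placeholder arrays, while the threshold T = 0.06 is the paper's value:

```python
import numpy as np

T = 0.06  # uncertainty threshold used in the paper

def classify(test_seg, trend_curves):
    """trend_curves maps emotion name -> trend curve f_k sampled at the
    same N points as the test segment (Eqs. (8)-(10))."""
    lse = {k: float(np.mean((curve - test_seg) ** 2))   # Eq. (8)
           for k, curve in trend_curves.items()}
    pe = min(lse, key=lse.get)                          # Eq. (9)
    return "uncertain" if lse[pe] > T else pe           # Eq. (10)

curves = {
    "sadness":  np.linspace(0.8, 0.2, 40),
    "fear":     np.linspace(0.2, 0.8, 40),
    "pleasure": np.full(40, 0.5),
}
test = np.linspace(0.75, 0.25, 40) + 0.02  # close to the sadness curve
print(classify(test, curves))  # sadness
```

A test segment far from all three trend curves (LSE above T for every emotion) is reported as uncertain, matching the "Uncertain" columns of Tables 2–5.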

5. Conclusion

In this paper, an SVR-based emotion recognition system that considers physiological signals is proposed. An emotion induction experiment was conducted to induce emotions in participants, who individually watched three movies. Five physiological signals, namely electrocardiography (ECG), respiration, galvanic skin response (GSR), blood volume pulse (BVP), and pulse, were collected to train the SVR, from which three emotional trend curves (sadness, fear, and pleasure) were obtained. An unknown input signal is compared with the obtained emotional trend curves to determine the emotion. Experimental results show that the proposed SVR-based method is useful for emotion recognition, with an accuracy of up to 89.2%. In the future, we will collect a wider variety of movies and conduct more experiments with long-term analysis information for the emotion recognition system. Moreover, we will investigate the correlations between various features and attempt to measure participants' emotion levels.

Acknowledgment

This work was supported by the National Science Council, Taiwan, under grant NSC 98-2218-E-006-004.

References

[1] C.Y. Chang, J.S. Tsai, C.J. Wang, P.C. Chung, Emotion recognition with consideration of facial expression and physiological signals, Proc. IEEE Symp. Comput. Intell. Bioinformatics Comput. Biol. (2009) 278–283.
[2] Q. Zhang, S. Jeong, M. Lee, Autonomous emotion development using incremental modified adaptive neuro-fuzzy inference system, Neurocomputing 86 (2012) 33–44.
[3] G. Caridakis, K. Karpouzis, S. Kollias, User and context adaptive neural networks for emotion recognition, Neurocomputing 71 (13–15) (2008) 2553–2562.
[4] E. Leon, G. Clarke, V. Callaghan, F. Sepulveda, Real-time detection of emotional changes for inhabited environments, Comput. Gr. 28 (2004) 635–642.
[5] I.B. Mauss, C.L. Cook, J.Y.J. Cheng, J.J. Gross, Individual differences in cognitive reappraisal: experiential and physiological responses to an anger provocation, Int. J. Psychophysiol. 66 (2007) 116–124.
[6] J.N. Bailenson, E.D. Pontikakis, I.B. Mauss, J.J. Gross, M.E. Jabon, C.A.C. Hutcherson, C. Nass, O. John, Real-time classification of evoked emotions using facial feature tracking and physiological responses, Int. J. Hum.-Comput. Stud. 66 (2008) 303–317.
[7] K.H. Kim, S.W. Bang, S.R. Kim, Emotion recognition system using short-term monitoring of physiological signals, Med. Biol. Eng. Comput. 42 (2004) 419–427.
[8] R.L. Mandryk, M.S. Atkins, A fuzzy physiological approach for continuously modeling emotion during interaction with play technologies, Int. J. Hum.-Comput. Stud. 65 (2007) 329–347.
[9] K. Jonghwa, E. André, Emotion recognition based on physiological changes in music listening, IEEE Trans. Pattern Anal. Mach. Intell. 30 (2008) 2067–2083.
[10] C.D. Katsis, N. Katertsidis, G. Ganiatsas, D.I. Fotiadis, Toward emotion recognition in car-racing drivers: a biosignal processing approach, IEEE Trans. Syst. Man Cybern. Part A: Syst. Hum. 38 (2008) 502–512.
[11] E. Leon, G. Clarke, V. Callaghan, F. Sepulveda, A user-independent real-time emotion recognition system for software agents in domestic environments, Eng. Appl. Artif. Intell. 20 (2007) 337–345.
[12] L. Ogiela, M.R. Ogiela, Cognitive Techniques in Visual Data Interpretation, Studies in Computational Intelligence, vol. 228, Springer-Verlag, Berlin Heidelberg, 2009.
[13] V. Vapnik, A. Lerner, Pattern recognition using generalized portrait method, Autom. Remote Control 24 (1963) 774–780.
[14] F. Nasoz, C.L. Lisetti, MAUI avatars: mirroring the user's sensed emotions via expressive multi-ethnic facial avatars, J. Visual Languages Comput. 17 (2006) 430–444.
[15] O. Pollatos, B.M. Herbert, E. Matthias, R. Schandry, Heart rate response after emotional picture presentation is modulated by interoceptive awareness, Int. J. Psychophysiol. 63 (2007) 117–124.
[16] P.J. Lang, Behavioral treatment and bio-behavioral assessment: computer applications, in: Technology in Mental Health Care Delivery Systems, 1980, pp. 119–137.
[17] V. Vapnik, S.E. Golowich, A. Smola, Support vector method for function approximation, regression estimation, and signal processing, Adv. Neural Inf. Process. Syst. 9 (1996) 281–287.
[18] V. Vapnik, The Nature of Statistical Learning Theory, Springer, New York, 1995.
[19] C.C. Chang, C.J. Lin, LIBSVM: a library for support vector machines. Available from: 〈http://www.csie.ntu.edu.tw/~cjlin/libsvm〉, 2001.

Chuan-Yu Chang received his M.S. degree in electrical engineering from National Taiwan Ocean University, Keelung, Taiwan, in 1995, and his Ph.D. degree in electrical engineering from National Cheng Kung University, Tainan, Taiwan, in 2000. From 2001 to 2002, he was with the Department of Computer Science and Information Engineering, Shu-Te University, Kaohsiung, Taiwan, R.O.C. From 2002 to 2006, he was with the Department of Electronic Engineering, National Yunlin University of Science and Technology, Yunlin, Taiwan. Since 2007, he has been with the Department of Computer and Communication Engineering (later Department of Computer Science and Information Engineering), National Yunlin University of Science & Technology, where he is currently a full professor and the Dean of the Research and Development Office. His current research interests include neural networks and their application to medical image processing, wafer defect inspection, digital watermarking, and pattern recognition. In the above areas, he has over 150 publications in journals and conference proceedings. Dr. Chang received the excellent paper award of the Image Processing and Pattern Recognition society of Taiwan in 1999, 2001, and 2009. He was also the recipient of the best paper award in the International Computer Symposium in 1998 and 2010, the best paper award in the Conference on Artificial Intelligence and Applications in 2001, 2006, 2007, and 2008, and the best paper award in the National Computer Symposium in 2005. Dr. Chang is a senior member of IEEE, and a life member of IPPR (Chinese Image Processing and Pattern Recognition Society) and TAAI (Taiwanese Association for Artificial Intelligence), and is listed in Who's Who in Science and Engineering, Who's Who in the World, Who's Who in Asia, and Who's Who of Emerging Leaders. He is a member of the Program Committees of more than 50 conferences. He has chaired over 30 technical sessions at international conferences.

Chuan-Wang Chang received his M.S. degree in electrical engineering from National Sun Yat-Sen University, Kaohsiung, Taiwan, in 1995, and his Ph.D. degree in electrical engineering from National Cheng Kung University (NCKU), Tainan, Taiwan, in 2010. From 2000 to 2011, he was with the Department of Computer Application Engineering, Far East University, Tainan, Taiwan. Since 2012, he has been with the Department of Computer Science and Information Engineering, Far East University. His research interests include music retrieval systems, multimedia databases, image processing, and artificial intelligence.

Jun-Ying Zheng received his B.S. degree in electrical engineering from National Yunlin University of Science and Technology, Taiwan, in 2008, and his M.S. degree in computer science and information engineering from National Yunlin University of Science and Technology, Taiwan, in 2010. He is currently an Engineer at InterServ International Inc., Taipei, Taiwan. His research interests include neural networks and their applications to image processing.

Pau-Choo (Julia) Chung received the Ph.D. degree in electrical engineering from Texas Tech University, USA, in 1991. She then joined the Department of Electrical Engineering, National Cheng Kung University (NCKU), Taiwan, in 1991 and became a full professor in 1996. She served as the Director of the Institute of Computer and Communication Engineering (2008–2011), the Vice Dean of the College of Electrical Engineering and Computer Science (2011), the Director of the Center for Research of E-life Digital Technology (2005–2008), and the Director of the Electrical Laboratory (2005–2008), NCKU. She was elected Distinguished Professor of NCKU in 2005. She currently serves as Chair of the Department of Electrical Engineering, NCKU. Dr. Chung's research interests include image/video analysis and pattern recognition, biosignal analysis, computer vision, and computational intelligence. She applies most of her research results to healthcare and medical applications. Dr. Chung has served as a program committee member for many international


conferences. She was a Member of the IEEE International Steering Committee, IEEE Asian Pacific Conference on Circuits and Systems (2006–2008), the Special Session Co-Chair of ISCAS 2009 and 2010, the Special Session Co-Chair of ICECS 2010, and a TPC member of APCCAS 2010. Dr. Chung was the Chair of the IEEE Computational Intelligence Society (CIS) Tainan Chapter (2004–2005). She was the Chair of the IEEE Life Science Systems and Applications Technical Committee (2008–2009) and a member of the BioCAS Technical Committee and the Multimedia Systems & Applications Technical Committee of the CAS Society. She also serves as an Associate Editor of IEEE Transactions on Neural Networks and has served as an Editor of the Journal of Information Science and Engineering, a Guest Editor of the Journal of High Speed Networks, a Guest Editor of IEEE Transactions on Circuits and Systems-I, and the Secretary General of the Biomedical Engineering Society of the Republic of China. She is one of the co-founders of the Medical Image Standard Association (MISA) in Taiwan and is currently on the Board of Directors of MISA. Pau-Choo Chung was a member of the Board of Governors of the CAS Society (2007–2009, 2010–2012). She served as an IEEE CAS Society Distinguished Lecturer (2005–2007). She is an ADCOM member of the IEEE CIS, a member of the Phi Tau Phi honor society, and has been an IEEE Fellow since 2008.