Chaos, Solitons and Fractals 26 (2005) 685–694 www.elsevier.com/locate/chaos
Study on the affective property of music

Xia Mao a,*, Na Zhang a, Yun Sun a, Lee-Lung Cheng b

a School of Electronic Engineering, Beijing University of Aeronautics and Astronautics, P.O. Box 206, Beijing 100083, China
b Department of Computer Engineering and Information Technology, City University of Hong Kong, Hong Kong

Accepted 12 January 2005
Abstract

In this paper, we obtain human psychological data through emotional tests on different kinds of music and use a factor analysis approach to identify three major factors that represent the primary affective properties of music. Based on these three factors, the test music can be further classified into two groups using cluster analysis. We then analyze the same test music using short-time signal processing; the resulting classification agrees with that obtained from cluster analysis, which indicates that a computer model for perceiving the affective characteristics of music can be established. A verification procedure is also conducted to support our conclusion.
© 2005 Elsevier Ltd. All rights reserved.
1. Introduction

Emotion is a complex psychological phenomenon of human beings. Research in the medical and psychological fields indicates that emotion plays an essential role in human cognitive activities, and the interaction of emotion and logical reasoning contributes to human intelligence [1,2]. Artificial intelligence aims to make machines exhibit intelligence and behavior like human beings, and an important part of achieving this goal is to implement emotional abilities in machines. Affective computing is a new and promising area of research that gives machines the ability to recognize, understand and even express emotions. The concept was first proposed by the MIT Media Lab in the USA. Although affective computing is still under development, it has already attracted attention from scientific and commercial organizations [3–6], and many companies are beginning to develop products in this field, such as IBM's Emotion Mouse.

Music is an art for expressing our emotions, and it presents a rich affective world to us by combining different basic elements [7]. Dulcet music not only makes us feel happier but also increases our working efficiency. This paper aims to develop a model that allows computers to perceive the affective characteristics of music as humans do. First, we collect human psychological data through an emotional test on different kinds of music and apply factor analysis to find the essential emotional factors. Then, we use cluster analysis to classify the test music into two groups based on these factors. Using digital signal processing techniques, we establish a measuring parameter, a coefficient a, for each piece of tested music, and using statistical analysis we show that the coefficient a is an effective parameter
* Corresponding author. Tel.: +86 10 82316739. E-mail address: [email protected] (X. Mao).
0960-0779/$ - see front matter 2005 Elsevier Ltd. All rights reserved. doi:10.1016/j.chaos.2005.01.033
to distinguish different groups of music. In conclusion, we have found a coefficient a which allows computers to perceive the dulcet affective characteristic of music.
2. Data collection and pretreatment

2.1. Data collection

Subjective evaluations of different kinds of music are first collected through a music emotional test. In this test, 20 pieces of music of different kinds are used, including light music by Bandari, Chinese urheen music, Richard Clayderman's piano music, Chopin's piano music, rock and roll, symphony and so forth. Table 1 lists the 20 selected pieces and their corresponding reference numbers, which will be used throughout the text for convenience. We then investigate the test music with the semantic differential (SD) scale method [8,9]. Eighteen pairs of antonyms are chosen as test variables: sorrowful–joyful, fluent–slack, strong–light, noisy–quiet, undulant–smooth, messy–regular, graceful–unconstrained, relaxing–boiling, solemn–dashing, soft–harsh, elegant–vulgar, sprightly–obscure, natural–stiff, harsh–euphonious, harmonious–discordant, plump–shriveled, bright–oppressive and orphean–scrannel. During the analysis, these antonym pairs are labeled with 18 symbols, x1 . . . x18. Each variable is rated on five levels, −2, −1, 0, +1, +2, where level "−2" denotes a deep affective characteristic of the left word in the pair of antonyms, level "−1" a slight affective characteristic of that word, level "0" a neutral affective characteristic of the pair, level "+1" a slight affective characteristic of the right word and level "+2" a deep affective characteristic of the right word. Since human emotion is very complicated and different people may judge the same piece of music differently, in order to capture the common taste shared across groups of people, we randomly select 60 people from different social and age groups (ranging from 20 to 35 years old) in our university as quizzees to evaluate the music we have chosen.
In order to avoid interference from disturbances and stimulation in the external environment, all evaluations take place in our laboratory, which is fully soundproofed. The laboratory room temperature is set to around 26 °C and the humidity to 50%. All test music, in digital "wav" format, is stored on a computer server and can be retrieved and played remotely on computer terminals. The questionnaires for evaluation are also displayed on the terminal screens. This web-based investigation system records the scores from each quizzee automatically and stores them on the server for further analysis. Other information such as the ages, genders and habits of the quizzees is also recorded for future studies.
Table 1
Test music

Number  Player              Music style      Name of music
1       Bandari             Light music      One day in spring
2       Bandari             Light music      Starry sky
3       A Bing              Urheen           Er quan yin yue
4       A Bing              Urheen           Ting song
5       U2                  Rock and roll    Pride
6       NA                  Saxophone        Unchained melody
7       Ottawan             Disco song       D.I.S.C.O
8       NA                  Chinese Guzheng  Yang chun bai xue
9       NA                  Chinese Guzheng  Ping hu qiu yue
10      Deng Lijun          Pop music        Sweat
11      Richard Clayderman  Piano            Love melody
12      Bandari             Light music      Morning air
13      Smile               Pop music        Boys
14      Richard Clayderman  Piano            Blue love song
15      Richard Clayderman  Piano            Destiny
16      Richard Clayderman  Piano            For Alice
17      NA                  Symphony         My heart will go on
18      John Williams       Guitar           Cavatina
19      U2                  Rock and roll    New Year's Day
20      Yanni               Piano            Before I go
2.2. Data pretreatment

In the music emotional tests, 20 sets of test music data were used. Each set has 18 emotional test variables, and each variable was rated by 60 quizzees, so the original data form a 20 × 18 × 60 matrix. Before we process and analyze the original data, pretreatment is essential: it converts the original data into means and standard deviations suitable for multivariate analysis. Let s_{ijn} be the evaluation value given by the nth quizzee to the ith test music on the jth test variable. The mean and standard deviation of the evaluation values over the 60 quizzees are defined as

\bar{s}_{ij} = \frac{1}{60}\sum_{n=1}^{60} s_{ijn}    (1)

S_{s_{ij}} = \sqrt{\frac{1}{60}\sum_{n=1}^{60}\left(s_{ijn}-\bar{s}_{ij}\right)^{2}}    (2)

where \bar{s}_{ij} is the mean evaluation of the 60 quizzees of the ith test music on the jth test variable and S_{s_{ij}} is the standard deviation of all scores for the ith test music on the jth test variable. Using Eq. (2) we obtain a 20 × 18 matrix of standard deviations, of which 81.4% are less than 1; the mean scores \bar{s}_{ij} therefore effectively represent the general feelings of most people toward the test music. The 20 × 18 matrix of mean scores is used in the subsequent multivariate analysis.
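The pretreatment of Eqs. (1) and (2) is straightforward to vectorize; a minimal NumPy sketch follows, with randomly generated stand-in scores since the survey data themselves are not reproduced here:

```python
import numpy as np

# Simulated evaluation scores: 20 pieces x 18 variables x 60 quizzees,
# each an integer level in {-2, -1, 0, +1, +2} (hypothetical stand-in data).
rng = np.random.default_rng(0)
scores = rng.integers(-2, 3, size=(20, 18, 60)).astype(float)

# Eq. (1): mean over the 60 quizzees -> 20 x 18 matrix of mean scores.
mean_scores = scores.mean(axis=2)

# Eq. (2): population standard deviation (divide by 60, not 59).
std_scores = np.sqrt(((scores - mean_scores[:, :, None]) ** 2).mean(axis=2))
```

The resulting 20 × 18 matrix of mean scores is the input to the factor analysis of Section 3.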
3. Multivariate analysis

3.1. Factor analysis

It is hard to understand or visualize the patterns of association among a vast number of variables. Factor analysis is a method for reducing the dimensionality of multivariate data and uncovering these patterns: each observed variable correlates with only one or a few of the underlying factors, and each factor is associated with only a few variables. The objective of factor analysis is to extract a small number of factors from many original variables whose correlations are too complex to handle directly; the variables can then be clustered into several groups according to these correlations, i.e. variables in the same group are highly correlated while variables in different groups are weakly correlated [10–15]. In our study, factor analysis is applied to extract the prime affective factors from the 18 original test variables.

The first step is to calculate the correlation matrix R of the input data. R = (r_{ij})_{18×18} is an 18 × 18 matrix of coefficients, where r_{ij}, the correlation coefficient between variables x_i and x_j, is given by

r_{ij} = \frac{\sum_{a=1}^{20}\left(x_{ai}-\bar{x}_{i}\right)\left(x_{aj}-\bar{x}_{j}\right)}{\sqrt{\sum_{a=1}^{20}\left(x_{ai}-\bar{x}_{i}\right)^{2}}\sqrt{\sum_{a=1}^{20}\left(x_{aj}-\bar{x}_{j}\right)^{2}}}    (3)

In Eq. (3), x_{ai} denotes the ath element of variable x_i, x_{aj} the ath element of variable x_j, and \bar{x}_i, \bar{x}_j are the means of all elements of x_i and x_j respectively. Then the eigenvalues and eigenvectors of the correlation matrix R are calculated. The eigenvalues (λ1, λ2, ..., λ18) of R are listed in Table 2 and the corresponding eigenvectors are u1, u2, ..., u18. Table 2 also shows the proportion and cumulative proportion of the original information carried by each eigenvalue, denoted V1 and V2 respectively. The next question is how many common factors the analysis model needs. Our objective is to retain as much of the information in the original variables as possible with the smallest number of common factors. From Table 2 we see that the first three eigenvalues λ1 = 10.818, λ2 = 4.981 and λ3 = 1.312 are much larger than the other 15, and their cumulative proportion of variance is 95.056%. We therefore select three common factors in place of the original variables to reduce the dimensionality of the original data; these three common factors account for approximately 95.056% of the information available in the original 18 variables. With the three eigenvalues λ1, λ2, λ3 and their corresponding eigenvectors u1, u2, u3, we calculate the factor loading matrix A using the following equation:
Table 2
Total variance explained

Component   λ           V1 (%)      V2 (%)
1           10.818      60.097      60.097
2           4.981       27.671      87.769
3           1.312       7.288       95.056
4           0.273       1.516       96.573
5           0.201       1.118       97.691
6           0.170       0.944       98.635
7           6.578E-02   0.365       99.001
8           4.873E-02   0.271       99.271
9           3.762E-02   0.209       99.480
10          3.514E-02   0.195       99.676
11          1.904E-02   0.106       99.781
12          1.550E-02   8.612E-02   99.868
13          1.056E-02   5.869E-02   99.962
14          8.230E-03   4.572E-02   99.972
15          3.034E-03   1.686E-02   99.989
16          1.504E-03   8.354E-03   99.997
17          3.437E-04   1.910E-03   99.999
18          1.709E-04   9.494E-04   100.000
A = \begin{bmatrix}
a_{1,1} & a_{1,2} & a_{1,3} \\
a_{2,1} & a_{2,2} & a_{2,3} \\
\vdots & \vdots & \vdots \\
a_{18,1} & a_{18,2} & a_{18,3}
\end{bmatrix}
= \begin{bmatrix}
u_{1,1}\sqrt{\lambda_1} & u_{1,2}\sqrt{\lambda_2} & u_{1,3}\sqrt{\lambda_3} \\
u_{2,1}\sqrt{\lambda_1} & u_{2,2}\sqrt{\lambda_2} & u_{2,3}\sqrt{\lambda_3} \\
\vdots & \vdots & \vdots \\
u_{18,1}\sqrt{\lambda_1} & u_{18,2}\sqrt{\lambda_2} & u_{18,3}\sqrt{\lambda_3}
\end{bmatrix}    (4)
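Eqs. (3) and (4) amount to an eigen-decomposition of the correlation matrix; the steps can be sketched with NumPy as below (the 20 × 18 mean-score matrix is replaced by hypothetical stand-in data, since it is not reproduced in the paper):

```python
import numpy as np

# Hypothetical stand-in for the 20 x 18 matrix of mean scores.
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 18))

# Eq. (3): 18 x 18 correlation matrix R between the test variables.
R = np.corrcoef(X, rowvar=False)

# Eigenvalues/eigenvectors of R, sorted in descending order (as in Table 2).
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Proportion (V1) and cumulative proportion (V2) of variance explained;
# the eigenvalues of an 18 x 18 correlation matrix sum to 18.
V1 = 100 * eigvals / eigvals.sum()
V2 = np.cumsum(V1)

# Eq. (4): factor loading matrix A for the first three common factors.
A = eigvecs[:, :3] * np.sqrt(eigvals[:3])
```

By construction, the sum of squared loadings in column j of A equals λ_j, so the three columns together carry the cumulative variance shown in Table 2.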
Based on the above analysis, we obtain the factor loading matrix shown in Table 3, where F1′, F2′ and F3′ represent the three common factors. To obtain a better interpretation of these common factors, we choose a different orientation of the factor loading matrix by rotating the factor solution so that the matrix is simplified: each variable has a relatively high absolute loading on one factor while its absolute loadings on the other factors are close to zero. In this paper, we choose the varimax rotation approach to simplify the factor loading matrix.
Table 3
Factor loading matrix

Test variable   F1′        F2′        F3′
x1              0.585      0.641      0.439
x2              0.485      0.744      0.155
x3              0.914      0.297      0.237
x4              0.958      0.172      0.171
x5              0.861      0.282      0.405
x6              0.969      1.884E-02  9.543E-02
x7              0.938      0.237      0.205
x8              0.944      0.195      0.239
x9              0.705      0.579      0.383
x10             0.963      0.131      0.195
x11             0.884      9.567E-02  0.346
x12             0.346      0.902      0.106
x13             0.837      0.506      1.665E-02
x14             0.780      0.567      9.881E-02
x15             0.832      0.491      0.136
x16             0.340      0.712      0.516
x17             0.405      0.858      0.289
x18             0.656      0.698      0.209
Before rotating the factor loading matrix, we use Eq. (5) to compute the communality of each variable in order to remove the imbalance among them. The communality, denoted h_i^2, is the proportion of the variation of each variable contributed by the three common factors; the closer a communality is to one, the more of the information carried by the original variable is captured by the three common factors:

h_{i}^{2} = \sum_{j=1}^{3} a_{ij}^{2}, \quad i = 1, \ldots, 18    (5)
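Given a loading matrix, Eq. (5) is a one-line row-wise sum; a small sketch (the loading values here are random placeholders, not the paper's data):

```python
import numpy as np

# Hypothetical 18 x 3 factor loading matrix (rows: variables, columns: factors).
rng = np.random.default_rng(2)
A = rng.uniform(-1, 1, size=(18, 3))

# Eq. (5): the communality of variable i is the sum of its squared loadings.
h2 = (A ** 2).sum(axis=1)
```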
In Eq. (5), a_ij is an element of the factor loading matrix. Table 5 shows the communalities of the 18 original test variables, which are all close to 1, with a maximum of 0.987 and a minimum of 0.813. This indicates that the three common factors adequately represent the information contained in the original variables. After the communalities are obtained, we divide each element of the factor loading matrix by the communality of its variable; the resulting matrix, still called the factor loading matrix, is denoted A and is an 18 × 3 matrix. We then simplify A by rotation. Choosing two of the three common factors at a time, the angle of rotation that maximizes the total column variance can be found from the extremum theorem. After the first rotation, if the resulting matrix does not explain each common factor clearly, the rotation is continued until successive matrices show little difference in total variance. The final matrix is the rotated factor loading matrix, which has a simpler structure. To clarify this structure, the loadings of the original variables on the three new factors are sorted and shown in Table 4, where F1, F2 and F3 are the new common factors after rotation.

Now we try to interpret these three common factors. Our goal is to find a term that describes the content domain of the variables that load highly on each factor. From the rotated factor loading matrix shown in Table 4, we can observe that each common factor has a clear meaning. The variables loading highly on the first common factor are undulant–smooth, relaxing–boiling, strong–light, graceful–unconstrained, noisy–quiet, soft–harsh and messy–regular. We label this factor the "speed factor", as it mainly describes the speed characteristic of music.
In Table 4, the loadings of the seven variables on the first emotional factor have both positive and negative values: x5, x3, x4 and x6 are negatively related to the speed factor, while x8, x7 and x10 are positively related to it. Whether a loading is positive or negative, its absolute value describes the strength of the relationship between the test variable and the speed factor. The relations between the other two affective factors and the test variables are interpreted in the same way. The variables loading highly on the second common factor are orphean–scrannel, plump–shriveled, harsh–euphonious, harmonious–discordant, natural–stiff, fluent–slack and elegant–vulgar, which describe the pleasant characteristic of music; we name it the "pleasant factor". The variables loading highly on the third common factor are sorrowful–joyful, solemn–dashing, sprightly–obscure and bright–oppressive; we name it the "tragicomic factor", as it describes the happy and sorrowful characteristic of music.

Table 4
Rotated factor loading matrix

Test variable                  F1         F2         F3         Meaning
x5 (undulant–smooth)           0.958      7.669E-02  0.248      Speed factor
x8 (relaxing–boiling)          0.903      0.259      0.321
x3 (strong–light)              0.896      0.172      0.383
x7 (graceful–unconstrained)    0.884      0.243      0.371
x4 (noisy–quiet)               0.868      0.315      0.354
x10 (soft–harsh)               0.834      0.519      0.131
x6 (messy–regular)             0.803      0.464      0.300
x18 (orphean–scrannel)         0.252      0.940      0.115      Pleasant factor
x16 (plump–shriveled)          0.184      0.923      4.515E-02
x14 (harsh–euphonious)         0.437      0.864      5.041E-02
x15 (harmonious–discordant)    0.467      0.856      4.366E-02
x13 (natural–stiff)            0.542      0.813      3.858E-02
x2 (fluent–slack)              0.339      0.712      0.438
x11 (elegant–vulgar)           0.443      0.703      0.467
x1 (sorrowful–joyful)          0.283      6.824E-02  0.928      Tragicomic factor
x17 (solemn–dashing)           0.277      0.250      0.918
x9 (sprightly–obscure)         0.399      0.150      0.893
x12 (bright–oppressive)        0.352      0.398      0.813
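The pairwise rotation procedure described above is the classic varimax criterion; a compact SVD-based implementation (a common equivalent formulation, not necessarily the exact iteration used in the paper) might look like:

```python
import numpy as np

def varimax(A, tol=1e-8, max_iter=100):
    """Varimax rotation of a p x k loading matrix A (SVD formulation)."""
    p, k = A.shape
    T = np.eye(k)          # accumulated orthogonal rotation matrix
    d = 0.0
    for _ in range(max_iter):
        L = A @ T
        # Gradient of the varimax criterion, expressed through an SVD.
        u, s, vt = np.linalg.svd(A.T @ (L ** 3 - L * (L ** 2).mean(axis=0)))
        T = u @ vt
        d_old, d = d, s.sum()
        if d_old != 0 and d / d_old < 1 + tol:
            break
    return A @ T, T

# Rotate a hypothetical 18 x 3 loading matrix.
rng = np.random.default_rng(4)
A = rng.uniform(-1, 1, size=(18, 3))
A_rot, T = varimax(A)
```

Because T is orthogonal, the rotation leaves the communalities of Eq. (5) unchanged while concentrating each variable's loading on a single factor, which is exactly the simplification sought in Table 4.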
Table 5
Communality

Test variable   Initial   Extraction
x1              1.00      0.946
x2              1.00      0.813
x3              1.00      0.979
x4              1.00      0.977
x5              1.00      0.985
x6              1.00      0.949
x7              1.00      0.978
x8              1.00      0.987
x9              1.00      0.979
x10             1.00      0.982
x11             1.00      0.909
x12             1.00      0.945
x13             1.00      0.957
x14             1.00      0.940
x15             1.00      0.953
x16             1.00      0.889
x17             1.00      0.983
x18             1.00      0.961
Usually, factor analysis is just a step toward further analysis of the data. For subsequent analysis, we need the location of each original observation in the reduced factor space; these values are called "factor scores". In our analysis, the three common factors are linear combinations of the original test variables, and the coefficient matrix is the factor score matrix. Through factor analysis, the original 18-dimensional space is transformed into a new 3-dimensional space, in which each piece of test music has a location with three coordinates on the three axes describing its affective degree on each emotion axis. The factor score matrix is calculated and shown in Table 6, where S1, S2 and S3 denote the factor scores of each piece of test music on the first, second and third affective factor respectively. According to the factor analysis, we can conclude that in the process of music appreciation human feelings focus on the speed, pleasant and tragicomic characteristics, of which the pleasant characteristic is a prime description of
Table 6
Factor score matrix

Music item   S1        S2        S3
1            1.05340   0.07631   0.15044
2            0.04910   1.37167   0.14699
3            0.35417   0.25572   2.49700
4            0.52559   0.54012   0.69686
5            0.84113   1.90845   0.49341
6            0.65178   0.04069   0.44595
7            1.10764   0.37930   1.1318
8            0.25862   0.11415   0.45529
9            0.88304   0.28096   1.56047
10           1.06501   0.53227   1.6788
11           1.74772   0.88985   0.62772
12           1.25207   0.42597   0.31693
13           1.00489   0.40451   1.5316
14           0.28361   0.89501   0.49712
15           1.99849   1.26993   0.78170
16           0.68847   0.63760   0.71339
17           0.09736   0.93056   0.79186
18           0.96296   0.63092   0.41827
19           0.76755   2.71221   0.04602
20           1.19995   0.12371   0.19906
music. As the second affective factor accounts for the pleasant characteristic of music, the second factor score evaluates the pleasant degree of the test music. We further analyze the second factor scores using cluster analysis in the following section.

3.2. Cluster analysis

Cluster analysis is used to rearrange a large number of observations into smaller groups such that observations within the same group share largely the same characteristics while observations in different groups are relatively dissimilar. For simplicity, we can represent an observation as a single point in a multi-dimensional space and represent the similarity of two observations by the distance between their locations [10–15]. In our study, we use the Minkowski p-metric, a general class of distance metrics, to describe the similarities of the test music:

d_{ij} = \left( \sum_{a=1}^{p} \left| S_{ia} - S_{ja} \right|^{q} \right)^{1/q}    (6)

In our cluster analysis, i and j denote the ith and jth test music samples and d_ij is the comparability measurement between them; S_ia is the ath factor score of the ith test music and S_ja the ath factor score of the jth, with a running up to a maximum of 3. We choose only the second factor scores of the test music for cluster analysis, so p = 1 and Eq. (6) reduces to

d_{ij} = \left( \left| S_{i2} - S_{j2} \right|^{q} \right)^{1/q} = \left| S_{i2} - S_{j2} \right|    (7)
The distance between two clusters is defined as the distance between the two nearest objects from the two groups: D_{ij} = \min_{x_i \in G_i,\, x_j \in G_j} d_{ij}, where D_ij represents the distance between the two clusters and G_i and G_j denote cluster i and cluster j respectively. At the start of the cluster analysis, each piece of test music forms its own cluster, numbered 1, 2, 3, ..., 20, and the corresponding distance matrix is formed. We first find the smallest distance between these clusters, D_{11,14} = 0.005, so clusters 11 and 14 are merged and renamed cluster 21; cluster 21 is added to the matrix and clusters 11 and 14 are removed. The same step is repeated until only one cluster is left. The dendrogram representing the single-linkage clustering solution is shown in Fig. 1. From Fig. 1, we can see that the 20 pieces of test music are classified into two clusters: Cluster I {1,2,3,4,6,7,8,9,10,11,12,13,14,15,16,17,18,20} and Cluster II {5,19}. Looking back at Table 6, the second factor scores of the test music in Cluster I are comparatively small, with a maximum of 0.63092, a minimum of −1.37167 and a mean of −0.2567, whereas the second factor scores of the test music in Cluster II, 1.90845 and 2.31033, are much higher. The second factor score evaluates the pleasant degree of
Fig. 1. Rescaled-distance cluster dendrogram of the 20 pieces of test music.
music, and a higher value means the music feels more chaotic. Therefore, the test music in Cluster I is classified as pleasant music, while the music in Cluster II is classified as annoying music.
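The single-linkage procedure on the one-dimensional distance of Eq. (7) can be reproduced with SciPy. The second-factor scores below are illustrative placeholders (the signs in Table 6 were lost in reproduction); only pieces 5 and 19 are given the clearly higher values described in the text:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Illustrative second-factor scores for pieces 1..20: eighteen values in a
# narrow band (Cluster I) and two clear outliers for pieces 5 and 19
# (Cluster II), echoing the magnitudes reported in the text.
s2 = np.array([-0.08, -1.37, 0.26, -0.54, 1.91, 0.04, -0.38, 0.11, -0.28,
               -0.53, -0.89, 0.43, -0.40, -0.90, -1.27, 0.64, -0.93, 0.63,
               2.31, -0.12])

# Single-linkage clustering on the 1-D distance |S_i2 - S_j2| of Eq. (7).
Z = linkage(s2.reshape(-1, 1), method='single', metric='euclidean')
labels = fcluster(Z, t=2, criterion='maxclust')
```

Cutting the dendrogram into two clusters isolates the two high-scoring pieces, mirroring the Cluster I / Cluster II split of Fig. 1.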
4. Short-time analysis

In order to enable a computer to recognize the affective characteristics of music, we employ short-time analysis, discussed below. Rhythm is one of the most important means by which music expresses affective information; it is the soul of music, and people's feelings about rhythm play a main role in their appreciation of music. Rhythm is constructed from many different pitches arranged in time, and a pitch is determined by the frequency of sound [8]. Hence, we use the following method to extract the rhythm of music. The test music is all in "wav" format. In the first step, we read in the music data over time and divide it into many frames using Hamming windows:

w(n) = \begin{cases} 0.54 + 0.46\cos\!\left[\left(\dfrac{2n}{N-1}-1\right)\pi\right], & n = 0, \ldots, N-1 \\ 0, & \text{otherwise} \end{cases}    (8)
Here N is the size of each frame. In order to increase the coherence between neighboring frames, each frame should overlap its adjacent frames during windowing; the overlap is generally between 0 and 0.5 times the frame size. Here, each frame has a size of 2048 points and the frame shift is 1024 points, i.e. half the frame size. Next we use the fast Fourier transform (FFT) to obtain the frequency spectra of all frames [16]. For each frame, we extract the frequency with the maximal power amplitude; after processing all frames we thus have a list of frequencies which, based on the definition of rhythm, can be treated as the rhythm of the music. The properties of the music rhythm are analyzed in two steps:

(1) Calculate the power spectral density of the music rhythm:

\hat{P}_{\mathrm{PER}}(k) = \frac{1}{N} \left| X_{N}(k) \right|^{2}    (9)
where N is now the length of the rhythm sequence and X_N(k) is the discrete Fourier transform of the rhythm.

(2) Fit the power spectral density curve to a line and take the slope of the line; the coefficient a refers to this slope value.

The power spectral densities of all test music are then calculated, and we find that the coefficients a of the two groups of test music differ. The mean, standard deviation, maximum and minimum of the coefficients a in the two clusters are shown in Table 7. Fig. 2 gives two representative power spectral densities and their coefficients, with pictures (a) and (b) for Cluster I and Cluster II respectively; to make their difference easier to see, the 1/f line is drawn in each picture. In Table 7, the mean of coefficient a is 1.27266 for Cluster I and 0.49616 for Cluster II, while the standard deviations are both rather small, 0.30365 for Cluster I and 0.05471 for Cluster II. We therefore conclude that the coefficient a distinguishes the two groups of music very well: when a is quite small (close to zero) the music is rather annoying, and when a is relatively high the music is pleasant. So we can give computers the ability to perceive the pleasant characteristic of music by measuring the value of coefficient a.
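The whole short-time procedure, Hamming-windowed framing (Eq. (8)), per-frame FFT peak extraction, rhythm periodogram (Eq. (9)) and the line fit, can be sketched as below. The sign convention and the log-log fitting details are our assumptions; the paper gives only the outline.

```python
import numpy as np

def rhythm_slope(signal, fs, frame=2048, shift=1024):
    """Estimate the coefficient a of a music signal: the (negated) slope of
    a line fitted to the log-log power spectral density of its 'rhythm',
    i.e. the sequence of per-frame peak frequencies."""
    # Step 1: frame the signal with a Hamming window and 50% overlap.
    w = np.hamming(frame)
    n_frames = (len(signal) - frame) // shift + 1
    rhythm = np.empty(n_frames)
    for i in range(n_frames):
        x = signal[i * shift : i * shift + frame] * w
        # Step 2: FFT each frame; keep the frequency of maximal amplitude.
        spec = np.abs(np.fft.rfft(x))
        rhythm[i] = np.argmax(spec) * fs / frame
    # Step 3: periodogram of the rhythm sequence (Eq. (9)).
    n = len(rhythm)
    psd = np.abs(np.fft.rfft(rhythm - rhythm.mean())) ** 2 / n
    freqs = np.fft.rfftfreq(n)
    # Step 4: fit log PSD vs log frequency to a line; for a 1/f^a spectrum
    # the slope is -a, so return the negated slope.
    mask = (freqs > 0) & (psd > 0)
    if mask.sum() < 2:          # constant rhythm: nothing to fit
        return 0.0
    slope, _ = np.polyfit(np.log10(freqs[mask]), np.log10(psd[mask]), 1)
    return -slope
```

A 1/f-like rhythm spectrum then yields a near 1 (pleasant, Cluster I), while a flat, noise-like rhythm spectrum yields a near 0 (annoying, Cluster II).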
Table 7
The statistics of coefficient a

Cluster no.   Number of pieces   Mean      Standard deviation   Maximum   Minimum
Cluster I     18                 1.27266   0.30365              1.83869   0.84539
Cluster II    2                  0.49616   0.05471              0.53484   0.45747
Fig. 2. The power spectra of rhythm: (a) Cluster I (a = 1.83869) and (b) Cluster II (a = 0.45747) show the rhythm power spectra of the two different groups.
Fig. 3. Results of verification.
5. Verification

The following verification is carried out to support the conclusion [17]. First we choose another two pieces of music and let the computer compute their coefficients a in the same way as discussed above. "Andante spianato. Tranquillo" by Chopin has a coefficient of 0.807671, which means the computer will treat it as a piece of pleasant music; "I will follow", a rock and roll piece, has a coefficient of 0.311659, which means the computer will treat it as a piece of chaotic music. Five new quizzees are then invited to take part in the verification. The mean evaluations of these 5 quizzees on the two pieces of test music are plotted in Fig. 3, in which the blue curve shows the results for "Andante spianato. Tranquillo" and the red curve those for "I will follow". From Fig. 3 it is easily observed that "Andante spianato. Tranquillo" is more pleasant to the quizzees, while "I will follow" is more chaotic. These results match the computer's judgments, confirming again that a computer can perceive the pleasant characteristics of music by using the coefficient a.
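A minimal decision rule consistent with this verification might threshold the coefficient a in the gap between the two clusters of Table 7. The midpoint threshold is our assumption for illustration; the paper does not state an explicit rule:

```python
# Coefficient-a boundaries of the two clusters, from Table 7.
CLUSTER_I_MIN = 0.84539    # smallest a among pleasant pieces
CLUSTER_II_MAX = 0.53484   # largest a among annoying pieces

# Assumed decision threshold: the midpoint of the gap between the clusters.
THRESHOLD = (CLUSTER_I_MIN + CLUSTER_II_MAX) / 2   # about 0.690

def classify(a):
    """Label a piece of music as pleasant or annoying from its coefficient a."""
    return 'pleasant' if a > THRESHOLD else 'annoying'

# The two verification pieces from this section:
print(classify(0.807671))  # Chopin, "Andante spianato. Tranquillo" -> pleasant
print(classify(0.311659))  # U2, "I will follow" -> annoying
```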
6. Conclusion

In this paper, we analyzed music in two ways. On the one hand, we analyzed people's evaluation data using multivariate analysis methods to obtain subjective results: three emotion factors representing the primary affective properties of music were extracted, and the 20 pieces of test music could be classified into two groups based on the second factor scores. On the other hand, we analyzed the two clusters of music using short-time techniques and found that the value of the coefficient a is consistent with the results of the mental test. Based on this finding, the coefficient a can be used by a computer to judge whether music is pleasant or not. We conclude that computers can acquire the ability to perceive the pleasant degree of music through the use of the coefficient a.
Acknowledgement

Supported by the Natural Science Foundation of Beijing (No. 303313).
References

[1] Damasio AR. Descartes' error: emotion, reason and the human brain. New York: Grosset/Putnam; 1994.
[2] Yao Fangchuan. Affective spirit obstacle. Hunan Science and Technology Press; 1998.
[3] Picard RW. Affective computing. Cambridge, MA: MIT Press; 1997.
[4] Tu Yanxu. Artificial emotion. In: Proceedings of the tenth annual conference of the Chinese Association for Artificial Intelligence (CAAI-10), Beijing, 2001. p. 27–31.
[5] Wang Zhiliang. The development of research on artificial psychology. In: The 1st Chinese conference on affective computing and intelligent interaction, 2003. p. 25–30.
[6] Mao Xia. Kansei information processing. J Telemetry, Tracking, and Command 2000;21(6):58–62.
[7] Vesterinen E, et al. Affective computing. Tik-111.590 Digital media research seminar, Finland, 2001.
[8] Wang Cizhao. Music aesthetics. Beijing: Higher Education Press; 1994.
[9] Yang Bomin, Chen Shuyong. The statistical technique of psychology. Guangming Daily Press; 1989.
[10] Sumiya M, Agu M. Factor analysis of the human pleasant feeling of 1/f controlled electric fan and massager. J Japan Soc Inst Electron 1990;J73-D-II(3):478–85.
[11] Lattin JM, Carroll JD, Green PE. Analyzing multivariate data. China Machine Press; 2003.
[12] Yu Xiulin, Ren Xuesong. Multivariate statistical analysis. China Statistics Press; 1999.
[13] Hu Dingguo, Zhang Yunchu. The analyzing method of multivariate data. Tianjin: Nankai University Press; 1990.
[14] Sun Wenshuang, Chen Lanxiang. Multivariate statistical analysis. China: Higher Education Press; 1994.
[15] Mao Xia. Study on features of fuzzy dried seaweed image by multivariate analysis and fluctuation. Zuo He University; 1996.
[16] Zeng Shangcui, Yu Zhenli. FFT spectrum analysis and display based on MATLAB system. Bulletin of Science and Technology 2000;16(4).
[17] Mao Xia, Chen Bin. Affective property of image and fractal dimension. Chaos, Solitons & Fractals 2003;15(5):905–10.