ARTICLE IN PRESS
JID: KNOSYS
[m5G;July 27, 2016;17:59]
Knowledge-Based Systems 000 (2016) 1–9
Word sense disambiguation based sentiment lexicons for sentiment classification
Chihli Hung∗, Shiuan-Jeng Chen
Department of Information Management, Chung Yuan Christian University, Taiwan
Article info
Article history: Received 18 April 2016; Revised 19 June 2016; Accepted 23 July 2016; Available online xxx.
Keywords: Sentiment analysis; Opinion mining; Word sense disambiguation; SentiWordNet; Sentiment lexicon
Abstract
Sentiment analysis has attracted much attention from both researchers and practitioners, as word-of-mouth (WOM) has a significant influence on consumer behavior. One core task of sentiment analysis is the discovery of sentimental words. This can be done efficiently when an accurate and large-scale sentiment lexicon is used. SentiWordNet is one such lexicon, which defines each synonym set within WordNet with sentiment scores and orientation. As human language is ambiguous, the exact sense of a word in SentiWordNet needs to be justified according to the context in which the word occurs. However, most sentiment-based classification tasks extract sentimental words from SentiWordNet without dealing with word sense disambiguation (WSD), but directly adopt the sentiment score of the first sense or the average sense. This paper proposes three WSD techniques based on the context of WOM documents to build WSD-based SentiWordNet lexicons. The experiments demonstrate that an improvement is achieved when the proposed WSD-based SentiWordNet is used. © 2016 Elsevier B.V. All rights reserved.
1. Introduction

This paper investigates the disambiguation of ambiguous words and builds domain oriented sentiment lexicons based on a well-known sentiment lexicon, SentiWordNet [3], for the task of word-of-mouth (WOM) sentiment classification. With the development of the internet, online forums, micro-blogs, blogs, social networks and web platforms have become a primary channel for users to share their personal experiences, feelings and opinions regarding products, services, brands and events with family, friends, and the general public. This information includes large amounts of product WOM documents, which have long been a major influence on decision-making in consumer purchasing behavior [6]. Through the sharing of WOM documents, consumers are able to reference the experiences and recommendations of others to help them decide whether or not they want to purchase a product or service [27]. For businesses and commercial entities, these WOM documents provide a crucial means of understanding the views and preferences of consumers [35]. In contrast to traditional market surveys, WOM document analysis can provide information that is both closer to real time and to consumers’ opinions, and can thus create new opportunities and a competitive advantage [8,32,65].

∗ Corresponding author. Fax: 88632655499. E-mail address: [email protected] (C. Hung).

The primary goal of sentiment analysis is to employ automated methods to extract positive, negative, or neutral emotions from
WOM documents, while determining the overall feeling that the writer of the document wishes to express, thereby extracting implicit information from the text. In the literature, sentiment analysis frequently utilizes a sentiment lexicon to help identify the words used in documents that reflect sentiment. The SentiWordNet sentiment lexicon uses WordNet [42] as its foundation and takes a semi-supervised approach toward constructing a vocabulary database that includes the ability to determine the emotional polarity of words. Although SentiWordNet can help identify sentimental words used in WOM documents, it can only be considered a general sentimental vocabulary database. This is because, when SentiWordNet is used to conduct sentiment analysis, its accuracy is still affected by the issue of word sense identification. Many words have more than one meaning, and the meanings expressed by words change with different backgrounds and environments [43]. When the meanings of words change, the sentimental attitudes that they express also change. Therefore, the ability to select the correct meaning represented by a word in a particular text also affects the effectiveness of the sentiment analysis results. In SentiWordNet each meaning of a word is considered a sense, and each sense is given a corresponding sentiment polarity score. Sense 1 represents the sense used most often in general situations [1,24,38,43]. There are two common approaches to the selection of sentiment scores for words that contain multiple meanings. The first method is to directly pick sense 1 of the word to serve as the
http://dx.doi.org/10.1016/j.knosys.2016.07.030 0950-7051/© 2016 Elsevier B.V. All rights reserved.
Please cite this article as: C. Hung, S.-J. Chen, Word sense disambiguation based sentiment lexicons for sentiment classification, Knowledge-Based Systems (2016), http://dx.doi.org/10.1016/j.knosys.2016.07.030
meaning of the word in the text (e.g. [29]). This method does not consider the impact of domain knowledge and may produce biased sentiment analysis results. For example, in the context of movie reviews, the word “suck” is most likely to mean “inappropriate or lousy”, and is used to express the opinion that the movie is very poor. The sentiment, in this case, is negative. However, in SentiWordNet, the meaning represented by sense 1 of the word is “a sucking action”, which is classified as having neutral sentiment. If the sense 1 meaning is automatically selected, the sentiment would clearly be incorrect. The second method is to take the average of the sense scores for all meanings of words with multiple meanings, and use the average sentiment score to conduct analysis (e.g. [44]). However, this method also fails to take the effects of domain knowledge into consideration, and could likewise result in issues with the accuracy of sentiment analysis. Therefore, this paper proposes an approach in which word sense disambiguation (WSD) techniques are applied to movie and hotel review documents, to revise SentiWordNet 3.0 [3] from a general purpose sentiment lexicon into a movie domain oriented sentiment lexicon and a hotel domain oriented sentiment lexicon, in order to improve the performance of sentiment classification for the domain documents. The remainder of this paper is organized as follows. In Section 2, we briefly review related work including sentiment analysis and word sense disambiguation. Section 3 introduces the approach to building a WSD-based sentiment lexicon. Section 4 shows the experiment design and results. Finally, a conclusion and possible future work are presented in Section 5.
2. Related work

Sentiment analysis is divided into the following six tasks: sentiment classification, subjectivity classification, opinion summarization, opinion retrieval, sarcasm and irony, and others [54]. Most sentiment analysis tasks focus on the sentiment classification of WOM documents according to their sentiment polarity, i.e. positive or negative [18,19,26,29,34,50,54,58]. These tasks can generally be completed more efficiently when an accurate and large-scale sentiment lexicon is used [22,30,54]. Various researchers have developed a number of sentiment lexicons, such as General Inquirer (GI) [57], WordNet-Affect [59], SentiWordNet [3,23], and SenticNet [9,10,12]. Of these lexicons, SentiWordNet, based on the online English dictionary WordNet [42], has become a frequently used sentiment lexicon due to its large-scale coverage. SentiWordNet defines each synonym set (or synset) in WordNet with three sentiment labels: positivity, neutrality and negativity. Each label has a specific value between zero and one, and the sum of the three labels is equal to one. Many sentiment analysis models have been developed based on SentiWordNet [20,29,44,45,52,54]. For example, Ohana and Tierney [45] evaluated the function of sentiment scores in SentiWordNet for the automatic sentiment classification of film reviews. Their proposed approach using the sentiment values in SentiWordNet performs slightly better than an approach in which only the frequency of sentimental words is used. Saggion and Funk [52] applied SentiWordNet to opinion classification for a business-related data source. Devitt and Ahmad [21] classified financial and economic news based on SentiWordNet and analyzed whether or not these sentimental documents influenced the market. Hung et al. [30] applied SentiWordNet for tagging sentimental orientations and classifying documents into five qualitative categories.
Hung and Lin [29] revised the sentiment scores for neutral words defined in SentiWordNet and obtained an improved sentiment classification performance. More complete literature reviews in the field of sentiment analysis or opinion mining can be found in [4,11,40,48,54,58].
One significant step in sentiment analysis is the discovery of sentimental words [64]. As many words in SentiWordNet contain more than one sense, they must be interpreted according to the context in which they occur. The task of automatically identifying the meaning of words in context is called word sense disambiguation (WSD) [43]. WSD is a long-standing task in the field of natural language processing and has been used in various applications such as spam filtering, document classification, information retrieval, etc. [2,15,25,31,37,41,43,49,55,60]. For example, the traditional Lesk algorithm [39] disambiguates words in short phrases, based on the greatest number of common words appearing in the definition sentence of each word in the same phrase. This algorithm looks up word definitions in traditional dictionaries, such as the Oxford Advanced Learner’s Dictionary. Banerjee and Pedersen [5] modified the traditional Lesk algorithm and proposed the adapted Lesk algorithm, which disambiguates words in sentences. It compares the target word and its surrounding words by the glosses of their synonym sets defined in WordNet. However, most sentiment-based classification tasks extract sentimental words from SentiWordNet without dealing with word sense disambiguation, but directly adopt the sentiment score of the first sense or the average sense [29,44,54]. Unlike existing work in the literature, this paper focuses on the issue of ambiguous words and proposes three WSD techniques that improve the performance of sentiment classification by re-ranking the sense order in SentiWordNet.
3. Methodology

This paper recognizes that words used in different domains may have different senses, different sentiment values and even different sentiment orientations. We propose three methods to revise a general sentiment lexicon, SentiWordNet, into a WSD-based or domain oriented sentiment lexicon, in order to improve its effectiveness for sentiment classification. Fig. 1 shows the structure of the proposed approaches. Our methodology is divided into four phases: preprocessing of WOM documents, tokenization, word sense disambiguation, and building the WSD-based sentiment lexicon. Finally, the model that extracts sentiments from the traditional SentiWordNet for the unseen test set is treated as the benchmark. We then compare this benchmark with the proposed model, which extracts sentiments from the WSD-based SentiWordNet for the unseen test set.
3.1. Preprocessing of WOM documents

A number of general preprocessing steps are followed when dealing with text from the field. Firstly, we remove all unnecessary HTML tags from WOM documents. Secondly, as a word may have different meanings when used in different parts of speech (POS), we use the Brill tagger [7] in the Natural Language Toolkit (NLTK) to choose a suitable part-of-speech tag for each word. Thirdly, we lemmatize words to their base forms according to WordNet, as morphological variants usually convey the same meaning. Fourthly, a stop list with 596 stop words is used for word cleansing. The preprocessing of sentences using POS and stop words is commonly used in the field. For example, Chaturvedi et al. [13] used POS and stop words in their work on multilingual subjectivity detection. As the purpose of this paper is to form domain-oriented sentiment lexicons, only words found in SentiWordNet are retained, and completely neutral words, whose sentiment values in both positive and negative orientations are zero, are omitted.
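The four preprocessing steps can be sketched as follows. This is a self-contained toy: the tiny POS and lemma lookup tables stand in for the Brill tagger and the WordNet lemmatizer used in the paper, and the stop list is illustrative rather than the actual 596-word list.

```python
import re

# Toy stand-ins for the paper's tools: the Brill tagger and the WordNet
# lemmatizer (NLTK) are replaced by tiny lookup tables so the sketch
# stays self-contained. The tags, lemmas and stop words are assumptions.
POS = {"movies": "NNS", "were": "VBD", "amazing": "JJ", "actors": "NNS"}
LEMMA = {"movies": "movie", "actors": "actor", "were": "be"}
STOP = {"the", "were", "be", "in"}

def preprocess(html_doc):
    # 1) remove HTML tags
    text = re.sub(r"<[^>]+>", " ", html_doc)
    # 2) tokenize and lower-case
    tokens = re.findall(r"[a-z]+", text.lower())
    # 3) POS-tag each token (toy tagger; unknown words default to NN)
    tagged = [(t, POS.get(t, "NN")) for t in tokens]
    # 4) lemmatize to base forms, then 5) drop stop words
    lemmas = [LEMMA.get(t, t) for t, _ in tagged]
    return [t for t in lemmas if t not in STOP]

print(preprocess("<p>The movies were amazing</p>"))  # ['movie', 'amazing']
```

In the real pipeline, tokens surviving this stage would additionally be checked against SentiWordNet and dropped if completely neutral.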
Fig. 1. The structure of the proposed approaches.
3.2. Tokenization

A token is a basic unit for text processing and is a feature of the text. Two tokenization approaches, word-based and phrase-based tokenization, are used in this paper.

3.2.1. Word-based tokenization

Word-based tokenization, also called the unigram approach, treats each single word in the text as a feature. After the preprocessing of WOM, we use term frequency (TF) to select the more important features from the remaining words. The frequency of a word in the text implies its significance in relation to the text. If a word occurs in a document several times, it can be considered an important concept of the document. This paper uses TF both as a vector representation approach and as a feature selection criterion. We show the equation of TF as in (1) and remove words whose TF is under the threshold of 0.000015. Various thresholds have been tested and this threshold was selected for reasons of computational capacity.
$$TF_i = \frac{w_i}{\sum_{k} w_k} \qquad (1)$$
where $w_i$ denotes the frequency of word $i$ in the WOM document collection.

3.2.2. Phrase-based tokenization

The word-based tokenization approach treats each word as independent, so it may fail to recognize the relationships between words. Thus, this paper also uses the phrase-based tokenization approach to choose significant tokens for building the proposed domain oriented sentiment lexicon. The phrase-based method used in this paper is based on the bigram language model [16]. Co-occurrence relationships between words are determined by counting the number of times that two consecutive significant words appear together. The following sentence is used as an example: “The movie critics like the stars in that film.” After WOM preprocessing, only the following four words are left: “movie”, “critic”, “like”, “star”. These four words form three bigrams, i.e. “movie critic”, “critic like”, “like star”. Each bigram is treated as a token and its TF is counted as in (2). The number of different tokens for the bigram tokenization is much greater than that for its unigram counterpart. For reasons of computational capacity and comparison of the two tokenization approaches, bigrams are removed if their TF is below the threshold of 0.00000195. This threshold is chosen because the resulting number of tokens is similar to that in the word-based or unigram tokenization approach. Thus, we compare the two tokenization approaches based on similar dimensionality.
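The TF-based token filtering behind both tokenization approaches, Eqs. (1) and (2), can be sketched as follows; the threshold here is a toy value for the small example, not the paper's 0.000015 (unigram) or 0.00000195 (bigram).

```python
from collections import Counter

def tf_filter(tokens, threshold, use_bigrams=False):
    """Keep tokens whose relative term frequency TF_i = w_i / sum_k(w_k)
    meets the threshold; tokens are unigrams or, optionally, bigrams of
    consecutive significant words left after preprocessing."""
    if use_bigrams:
        tokens = [f"{a} {b}" for a, b in zip(tokens, tokens[1:])]
    counts = Counter(tokens)
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items() if c / total >= threshold}

# Significant words from two short reviews after preprocessing (toy data):
words = ["movie", "critic", "like", "star", "movie", "critic"]
print(tf_filter(words, threshold=0.3))                    # frequent unigrams
print(tf_filter(words, threshold=0.3, use_bigrams=True))  # frequent bigrams
```

With real WOM collections the token counts are in the millions, which is why the paper's thresholds are several orders of magnitude smaller.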
$$TF_i = \frac{w_{i,i+1}}{\sum_{k} w_{k,k+1}} \qquad (2)$$
where $w_{i,i+1}$ denotes the frequency of bigram $i$, whose first significant word is word $i$ and whose second significant word is word $i+1$ after the preprocessing of WOM documents.

3.3. Word sense disambiguation

SentiWordNet is based on WordNet, which lists the possible senses of a word in order of usage frequency. A word may have different senses with different sentiment values or even different sentiment orientations. SentiWordNet is able to provide a word with a proper sentiment orientation and sentiment value only if the sense of this word is clear. This paper proposes three WSD techniques based on WordNet semantic relations and the context of WOM documents. A WSD-based lexicon is thus domain oriented.

3.3.1. Similarity of word and document

A word may contain several senses, depending on the correct part of speech. Using the concept of the Lesk algorithm, the proper
sense of a word can be found by comparing the target document with the glosses of each sense of this word defined in WordNet. The sense with the greatest cosine similarity is treated as the proper sense for this word. In this paper, an extended version of Lesk’s original work is proposed. We expand the target word with its related words by looking up the semantic relations in WordNet. This expansion is required to ensure that the number of words in the example sentences is not too small. Besides the synonyms shown in WordNet, different parts of speech have different semantic relation structures, so the methods used for the expansion also vary. In addition to the synonym relation, this approach expands the word’s hypernym, hyponym, entailment, and derivationally related form relations. A hypernym of a word is the generic term used to designate a whole class of specific instances: Word Y is a hypernym of Word X if Word X is a kind of Word Y. Conversely, a hyponym is the specific term used to designate a member of a class; thus, in the above example, Word X is a hyponym of Word Y. Both nouns and verbs have hypernym and hyponym relationships. For example, canine is a hypernym of dog and dog is a hyponym of canine. The entailment relationship indicates that Word X entails Word Y if Word X cannot be done unless Word Y is done, or has been done. Only verbs have this semantic relation. For example, to sleep is entailed by to snore. The derivationally related form relation links words in different syntactic categories that have the same root form and are semantically related; it is a semantic pointer from one adjective cluster to another. For example, the derivationally related form of pretty is prettiness. When expanding nouns, this paper uses the synonym, hypernym and hyponym relations. The first-level hypernym and the first two levels of hyponym relations of the target word are chosen.
For expanding verbs, in addition to the synonym, hypernym and hyponym semantic relations, the entailment relation of the target word is chosen. For expanding adjectives, in addition to the synonym relation, the derivationally related forms from the semantic relation structure of the target word are chosen. We then combine the example sentences of the target word with the example sentences of all the expanded relations. After the expansion of the word sense example sentences is complete, we conduct part-of-speech tagging and word cleansing, as previously outlined. The final example sentence vector after integration is $P = (P_1, P_2, \ldots, P_n)$; the document vector is $R = (R_1, R_2, \ldots, R_n)$. Cosine similarity is calculated as in Eq. (3). The sense of the target word with the greatest cosine similarity is considered its proper sense. For example, the adjective “bad” is the target word, which has 14 senses. The WordNet semantic relation structure is used to look up the words that share a derivative relationship with it, which returns the word “badness”. The sense 1 example sentence of the word “bad” is then combined with the example sentence of the word “badness”. Finally, the similarity between the final example sentence vector for the target word “bad” and the document vector is calculated, thereby determining the applicability of the proper sense in this document’s domain.
$$\text{Score}_{cosine}(P, R) = \frac{\sum_{k=1}^{n} P_k \times R_k}{\sqrt{\sum_{k=1}^{n} P_k^2} \times \sqrt{\sum_{k=1}^{n} R_k^2}} \qquad (3)$$
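A minimal sketch of sense selection with Eq. (3): each sense's (already expanded) example sentence is turned into a bag-of-words vector and compared with the document vector. The token lists below are hypothetical stand-ins for expanded WordNet glosses, not real SentiWordNet entries.

```python
import math
from collections import Counter

def cosine(p, q):
    """Cosine similarity between two bag-of-words vectors (Eq. (3))."""
    dot = sum(p[w] * q.get(w, 0) for w in p)
    norm = math.sqrt(sum(v * v for v in p.values())) * \
           math.sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

def best_sense(sense_examples, document_tokens):
    """Return the 1-based index of the sense whose expanded example
    sentence is most similar to the document."""
    doc = Counter(document_tokens)
    scored = [(cosine(Counter(ex), doc), i)
              for i, ex in enumerate(sense_examples, 1)]
    return max(scored)[1]

# Hypothetical expanded example sentences for two senses of "suck":
senses = [["draw", "mouth", "vacuum", "straw"],            # literal sense
          ["inadequate", "objectionable", "movie", "bad"]]  # "this sucks!"
doc = ["movie", "bad", "actor", "plot"]
print(best_sense(senses, doc))  # 2
```

In the paper the document vector covers the whole WOM document and the sense vectors are built from glosses expanded via the WordNet relations described above.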
3.3.2. Similarity of word and its neighbors

This method is based on the previous approach and further extends the example sentence with the neighboring words of the target word. The basic concept is that words located nearby in a sentence may have a similar sentimental or semantic concept. For example, “The new movie features two of my favorite actors” is a sentence in the document. The word “features” is a target word. After stop words are removed, the result is “movie feature favorite actor”. When the neighboring distance threshold is set to one, the
word in front of the target word and the word after it are extracted to serve as neighboring words, which are “movie” and “favorite”. In addition, the WordNet semantic relation structure is used to expand the amount of information in the example sentences of the target word and its neighboring words. Finally, the cosine similarities between the example sentences for each sense of the target word “features” and the example sentences for each sense of the neighboring words are calculated. The closer the similarity value is to one, the higher the degree of applicability of the corresponding sense of the target word to documents of this domain. In this paper the neighboring distance threshold value is two; different thresholds have been tried and similar conclusions have been reached.

3.3.3. WordNet path similarity

The hypernyms and hyponyms defined by the WordNet semantic relation structure form a hierarchical tree, so the mutual distance between words implies their semantic similarity, i.e. WordNet path similarity. The third WSD approach decides the proper sense of the target word by evaluating the WordNet path similarity between the target word and its neighboring words according to the WordNet hierarchical tree. Two words are similar if they have the same hypernym. A different sense of a word has a different sense identifier, and thus each sense of the word forms its own synonym set. To calculate the WordNet path similarity of two words, the first stage is to look up the WordNet hierarchical tree for a common hypernym. The second stage is to calculate the shortest mutual distance between the two words, as shown in Eq. (4).
$$\text{WordNet path similarity}(X, Y) = \frac{1}{\min(\text{mutual distance}_{X,Y}) + 1} \qquad (4)$$
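Eq. (4) can be illustrated in pure Python over a toy fragment of the hypernym tree; the hand-coded edges below are a stand-in for the real WordNet hierarchy and are chosen so that the dog–cat path length is 4, matching the worked example that follows.

```python
from collections import deque

# Toy fragment of a hypernym tree: each pair links a word to its direct
# hypernym. With these (assumed) edges, dog -> canine -> carnivore and
# cat -> feline -> carnivore, so the shortest dog-cat distance is 4.
EDGES = {("dog", "canine"), ("canine", "carnivore"),
         ("cat", "feline"), ("feline", "carnivore")}

def path_similarity(x, y):
    """1 / (shortest mutual distance + 1), as in Eq. (4), found by
    breadth-first search over the undirected hypernym tree."""
    graph = {}
    for a, b in EDGES:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    seen, queue = {x}, deque([(x, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == y:
            return 1 / (dist + 1)
        for nxt in graph[node] - seen:
            seen.add(nxt)
            queue.append((nxt, dist + 1))
    return 0.0  # no connecting path

print(path_similarity("dog", "cat"))  # 1 / (4 + 1) = 0.2
```
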
The calculation of the similarity of the two nouns “cat” and “dog” can be used as an example. Based on the part of the WordNet hierarchical tree shown in Fig. 2, “cat” and “dog” have many of the same hypernyms, i.e. carnivore, placental, etc. The shortest mutual distance between the nouns “cat” and “dog” is 4. Thus the value of WordNet path similarity is 0.2 (= 1/5). However, only nouns and verbs have hypernym and hyponym relationships. This paper calculates the values of WordNet path similarity for each sense of the target word, when its part of speech is a noun or verb, with its neighboring words of the same part of speech. The sense of the target word whose WordNet path similarity is the greatest is treated as the proper sense. For adjectives and adverbs, we continue to use the first sense as the proper sense. In this paper the neighboring distance threshold value is two. Fig. 3 shows the conceptual diagram for this approach.

3.4. Building the WSD-based sentiment lexicon

As the goal of this paper is to build a WSD-based sentiment lexicon, which is domain oriented, we retain the first sense policy, i.e. the first sense of a word represents the sense used most often [43]. Based on the word sense disambiguation techniques above, we re-rank the sense order defined in SentiWordNet according to how similar a sense of a word is to the sense used in the specific domain. Since a word may appear in several documents, we average the similarity values of the word sense across the review documents, as in Eq. (5). Then we re-rank the sense order according to the average similarity value in descending order. Thus the only difference between the WSD-based SentiWordNet for different domains and the original SentiWordNet is the sense order. Table 1 shows the similarity values for the verb “suck” based on the approach in the Similarity of Word and Document subsection for the
movie domain. The original sense number 4 is re-ranked as the first sense due to its greatest average similarity. Further examples are shown in Table 2. For instance, the noun “taste” and the adjective “sensitive” are neutral words when the original SentiWordNet is used. Our approach re-ranks their original sense order for the movie domain. By this approach the WSD-based sentiment lexicon was built.

$$\text{Average Similarity}_i = \frac{\sum_{j=1}^{n} \text{Similarity}_{i,j}}{n} \qquad (5)$$

where $\text{Similarity}_{i,j}$ is the similarity value of sense $i$ in review document $j$ and $n$ is the number of review documents in which the word appears.

Fig. 2. A part of the WordNet hierarchical tree for nouns cat and dog.

4. Experiment design and results

4.1. Datasets

In this paper, two datasets, IMDb and Hotels.com, are used. IMDb (Internet movie database, http://www.cs.cornell.edu/people/pabo/movie-review-data) includes 27,886 movie review documents and has become a common benchmark in the field of sentiment analysis [14,28,29,47,54,56,63]. Among the IMDb dataset, 1000 documents have a pre-assigned positive sentiment tag, 1000 documents have a pre-assigned negative sentiment tag and 25,886 documents have no pre-assigned sentiment tag. Thus, we treat the 25,886 untagged documents as the training set, which is used to build the proposed lexicon, and the 2000 tagged documents as the unseen test set, which is used to evaluate the proposed lexicon. The hotel review dataset is collected from an online hotel booking website, Hotels.com, which adopts a 5-star rating system. A five-star WOM document indicates an absolutely positive sentiment tendency toward the target hotel from the viewpoint of the reviewer, whereas a one-star WOM document indicates an absolutely negative sentiment tendency. This paper collects 25,000 review documents from Hotels.com. Each star category includes 5000 documents. Although all documents in Hotels.com are untagged, the rating system provides a useful hint as to the sentiment tendency. Thus a 1- or 2-star WOM document is treated as a negative review while a 4- or 5-star WOM document is treated as a positive review. As we omit all 3-star documents, the dataset contains 20,000 hotel review documents. From this dataset, this paper randomly chooses 1000 positive reviews and 1000 negative reviews as the unseen test set. The rest of the hotel WOM documents form the training set.

4.2. Sentiment vector space modeling
Traditionally, the TF×IDF (term frequency × inverse document frequency) vector representation approach is used in text classification [53]. In this paper, the sentiment vector space modeling approach [29] is used. A document vector $j$ is represented as $D_j = [W_1, W_2, \ldots, W_m]$, where $m$ indicates the total number of tokens in the dataset and $W_i$ is the weight value of token $i$. Such tokens are filtered by the WOM preprocessing module and appear in SentiWordNet. The sentiment polarity and score are decided by looking up the first sense of the target word in the proposed sentiment lexicon. The weight value of a positive word equals its term frequency (TF) multiplied by its positive score. The weight value of a negative word equals its term frequency multiplied by its negative score multiplied by (−1). The remaining neutral words are
Table 1
Average similarity of the verb “suck” for the movie domain.

Word | Sense number | Sentiment score (P/O/N) | Gloss (example sentence) | Similarity | New sense number
suck_v | 1 | 0/1/0 | draw into the mouth by creating a practical vacuum in the mouth; “suck the poison from the place where the snake bit”; “suck on a straw”; “the baby sucked on the mother’s breast” | 0.0167 | 3
suck_v | 2 | 0/1/0 | draw something in by or as if by a vacuum; “Mud was sucking at her feet” | 0.0044 | 4
suck_v | 3 | 0/1/0 | attract by using an inexorable force, inducement, etc.; “The current boom in the economy sucked many workers in from abroad” | 0.0171 | 2
suck_v | 4 | 0/0.375/0.625 | be inadequate or objectionable; “this sucks!”; “this blows!” | 0.0878 | 1
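The re-ranking step of Section 3.4 can be sketched as follows, using the average similarities of the verb “suck” from Table 1. Each sense maps to the list of its per-document similarity values; with real data each list would hold one value per review document, which Eq. (5) averages.

```python
def rerank_senses(similarities_per_doc):
    """For one word, map sense_number -> list of similarity values
    observed across review documents, average them (Eq. (5)), and
    return the new sense order, most domain-appropriate sense first."""
    averages = {s: sum(v) / len(v) for s, v in similarities_per_doc.items()}
    return sorted(averages, key=averages.get, reverse=True)

# Average similarities for the verb "suck" from Table 1 (single values
# here stand in for per-document lists):
suck = {1: [0.0167], 2: [0.0044], 3: [0.0171], 4: [0.0878]}
print(rerank_senses(suck))  # [4, 3, 1, 2]: sense 4 becomes the first sense
```

This reproduces the "new sense number" column of Table 1: original sense 4 is promoted to first, matching the movie-domain meaning of “suck”.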
Fig. 3. The conceptual diagram for the third WSD approach.

Table 2
Some examples of sense order re-ranking.

Word | Sentiment score (P/O/N) | Gloss (example sentence) | Sense order re-ranking | Domain
taste_n | 0/1/0 | the sensation that results when taste buds in the tongue and throat convey information about the chemical composition of a soluble stimulus; “the candy left him with a bad taste”; “the melon had a delicious taste” | 1->3 | movie
taste_n | 0.375/0.625/0 | a strong liking; “my own preference is for good literature”; “the Irish have a penchant for blarney” | 2->1 | movie
taste_n | 0.5/0.5/0 | delicate discrimination (especially of aesthetic values); “arrogance and lack of taste contributed to his rapid success”; “to ask at that particular time was the ultimate in bad taste” | 3->2 | movie
sensitive_a | 0/1/0 | responsive to physical stimuli; “a mimosa’s leaves are sensitive to touch”; “a sensitive voltmeter”; “sensitive skin”; “sensitive to light” | 1->2 | movie
sensitive_a | 0.375/0.25/0.375 | being susceptible to the attitudes, feelings, or circumstances of others; “sensitive to the local community and its needs” | 2->1 | movie
reserve_v | 0/1/0 | hold back or set aside, especially for future use or contingency; “they held back their applause in anticipation” | 1->2 | hotel
reserve_v | 0.125/0.875/0 | give or assign a resource to a particular person or cause; “I will earmark this money for your research”; “She sets aside time for meditation every day” | 2->1 | hotel
center_n | 0.25/0.75/0 | the choicest or most essential or most vital part of some idea or experience; “the gist of the prosecutor’s argument”; “the heart and soul of the Republican Party”; “the nub of the story” | 5->1 | hotel
free_v | 0.5/0.5/0 | relieve from; “Rid the house of pests” | 2->1 | hotel
unconventional_a | 0.25/0.125/0.625 | not conventional or conformist; “unconventional life styles” | 2->1 | hotel
assigned zero, as shown in (6).

$$W_i = \begin{cases} TF_i \times posW_i, & \text{where } W_i \in [\text{positive words}] \\ TF_i \times negW_i \times (-1), & \text{where } W_i \in [\text{negative words}] \\ 0, & \text{otherwise} \end{cases} \qquad (6)$$
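A minimal sketch of the token weighting in Eq. (6). Deciding a word's polarity by comparing its positive and negative scores is an assumption of this sketch; in the paper the polarity comes from the first sense of the (re-ranked) lexicon.

```python
def token_weight(tf, pos_score, neg_score):
    """Weight of a token in the sentiment vector space model (Eq. (6)):
    TF x positive score for positive words, TF x negative score x (-1)
    for negative words, and zero for neutral words. Treating the larger
    of the two scores as the word's polarity is an assumption here."""
    if pos_score > neg_score:
        return tf * pos_score
    if neg_score > pos_score:
        return tf * neg_score * (-1)
    return 0.0

# "suck", sense 4 after re-ranking: P/O/N = 0/0.375/0.625 -> negative word
print(token_weight(tf=3, pos_score=0.0, neg_score=0.625))  # -1.875
```
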
4.3. Classification

According to Serrano-Guerrero et al. [54], many recent studies on machine-learning-oriented classification have used the support vector machine (SVM) algorithm [17] and obtained outstanding results. This classification algorithm is commonly used for sentiment analysis [29,36,54,61] and is also used in this paper. More specifically, both linear and radial basis function (RBF) kernel functions, and a more efficient version of SVM, the sequential minimal optimization (SMO) algorithm [46], are used. In addition to SVM, this paper adopts the J48 decision tree [62] and Naive Bayes [51] classifiers to evaluate the proposed models. All classification approaches use the default parameter values of the Weka 3.6 package (http://www.cs.waikato.ac.nz/ml/weka).
4.4. Experiment results

We use two tokenization approaches, i.e. unigram and bigram tokenization, and three word sense disambiguation techniques, i.e. similarity of word and document, similarity of word and its neighbors, and WordNet path similarity, to build the domain oriented SentiWordNet (DSWN) lexicons, i.e. the movie-domain and hotel-domain sentiment lexicons. The traditional domain independent sentiment lexicon, SentiWordNet (SWN), is treated as a benchmark. As many words in SentiWordNet are ambiguous, two common approaches are used to decide the proper sense and its sentiment score: the first uses the first sense policy and the second uses the average sentiment score of a word. The comparative results evaluated by accuracy are shown in Tables 3 and 4. Generally, the SVM models perform best, followed by the Naive Bayes models. The decision tree based models perform worst, possibly due to the high dimensionality of the dataset. The performance of the two tokenization approaches is similar. A possible reason for this is that SentiWordNet is a word-based sentiment lexicon, so not many bigrams can be found. In terms of
Please cite this article as: C. Hung, S.-J. Chen, Word sense disambiguation based sentiment lexicons for sentiment classification, Knowledge-Based Systems (2016), http://dx.doi.org/10.1016/j.knosys.2016.07.030
Table 3
Experiment results evaluated by the criterion of accuracy for IMDb.

Classification approach | Tokenization | SWN-first | SWN-average | DSWN-method1 | DSWN-method2 | DSWN-method3
SVM-Linear              | unigram      | 72.85%    | 73.75%      | 75.35%       | 75.10%       | 74.40%
                        | bigram       | 72.85%    | 73.75%      | 76.00%       | 74.35%       | 73.05%
SVM-RBF                 | unigram      | 74.80%    | 75.45%      | 75.75%       | 75.70%       | 74.70%
                        | bigram       | 74.80%    | 75.45%      | 75.75%       | 74.95%       | 75.05%
J48                     | unigram      | 66.60%    | 66.50%      | 67.60%       | 69.20%       | 66.15%
                        | bigram       | 66.60%    | 66.50%      | 67.60%       | 69.15%       | 67.20%
Naive Bayes             | unigram      | 73.10%    | 73.55%      | 74.25%       | 74.45%       | 74.05%
                        | bigram       | 73.10%    | 73.55%      | 74.30%       | 74.40%       | 74.45%
Average                 | unigram      | 71.84%    | 72.31%      | 73.24%       | 73.61%       | 72.33%
                        | bigram       | 71.84%    | 72.31%      | 73.41%       | 73.21%       | 72.44%
Table 4
Experiment results evaluated by the criterion of accuracy for Hotels.com.

Classification approach | Tokenization | SWN-first | SWN-average | DSWN-method1 | DSWN-method2 | DSWN-method3
SVM-Linear              | unigram      | 77.61%    | 73.80%      | 77.89%       | 78.29%       | 77.65%
                        | bigram       | 77.69%    | 73.89%      | 77.84%       | 78.24%       | 77.70%
SVM-RBF                 | unigram      | 75.40%    | 68.10%      | 76.40%       | 76.00%       | 75.65%
                        | bigram       | 75.40%    | 68.10%      | 76.00%       | 75.80%       | 75.70%
J48                     | unigram      | 67.72%    | 70.40%      | 73.34%       | 72.99%       | 72.40%
                        | bigram       | 67.84%    | 70.29%      | 73.34%       | 72.24%       | 72.40%
Naive Bayes             | unigram      | 73.34%    | 72.15%      | 74.14%       | 74.19%       | 73.25%
                        | bigram       | 73.49%    | 72.04%      | 74.14%       | 73.79%       | 73.25%
Average                 | unigram      | 73.52%    | 71.11%      | 75.44%       | 75.37%       | 74.74%
                        | bigram       | 73.61%    | 71.08%      | 75.33%       | 75.02%       | 74.76%
IMDb, the first WSD approach used in this paper is suitable for SVM models, while the second WSD approach is suitable for decision tree and Naive Bayes models. However, for the Hotels.com dataset, the first WSD approach is better than the second one for SVM-RBF and J48. The third WSD approach performs worse than the other two WSD approaches, as only nouns and verbs have hypernym and hyponym relationships. Finally, the proposed models all perform better than the two benchmark models for all three classification techniques.
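The comparisons above are point estimates; the paper checks them with a t-test over 10 sub-datasets. A minimal sketch of that comparison follows, using scipy's paired t-test; the accuracy figures below are invented for illustration, not taken from the paper:

```python
# Paired t-test comparing a DSWN model against the SWN-first baseline over
# 10 sub-dataset accuracies (illustrative numbers only).
from scipy import stats

swn_first_acc = [0.700, 0.705, 0.698, 0.702, 0.699,
                 0.703, 0.701, 0.697, 0.704, 0.700]
dswn_acc      = [0.722, 0.725, 0.719, 0.724, 0.721,
                 0.726, 0.720, 0.723, 0.725, 0.722]

t_stat, p_value = stats.ttest_rel(dswn_acc, swn_first_acc)
print(f"t = {t_stat:.2f}, p = {p_value:.6f}")
print("significant at alpha = 0.05" if p_value < 0.05 else "not significant")
```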
4.5. Statistical significance test

Finally, the t-test [33] is used to test whether or not the difference between our proposed DSWN models and the traditional benchmark model, i.e. SWN-first, reaches the statistical significance level. For the movie domain, we use a further 10 sub-datasets, formed by random sampling without replacement from the 2000 tagged documents of the IMDb, to produce 1000 review documents for each sub-dataset. According to the results of the t-test shown in Table 5, our proposed DSWN-method1 and DSWN-method2 models significantly outperform their associated SWN-first models, regardless of which tokenization and classification approaches are used. The proposed DSWN-method3 models are only significantly better than their associated SWN-first models when the Naive Bayes classification technique is used. Generally speaking, there is not a statistically significant difference between the SWN-first and SWN-average models.

With regard to the hotel domain, 10 sub-datasets are also used for the significance test. We randomly choose 1000 reviews for each sub-dataset from the 2000 review documents. According to the results of the t-test shown in Table 6, as in the movie domain, our DSWN-method1 and DSWN-method2 models are significantly better than the associated traditional SWN-first models. Unlike the movie domain, our DSWN-method3 models significantly outperform the SWN-first models only when the J48 decision tree is used. There is not a statistically significant difference between the DSWN-method3 and SWN-average models when SVM and Naive Bayes are applied. Except when applying the J48 decision tree, the SWN-first models perform significantly better than their associated SWN-average models. Based on the average accuracy for all models shown in Tables 5 and 6, we can conclude that our proposed DSWN-method1 and DSWN-method2 models significantly outperform the traditional SWN-first model. Our proposed DSWN-method3 models are generally better than their associated SWN-first models, but the difference between them does not reach the statistical significance level.

5. Conclusion and possible further work

The goal of sentiment analysis is to use an automated method to obtain implicit sentiment attitudes from word-of-mouth (WOM). Generally speaking, SentiWordNet is a useful sentiment lexicon for the identification of a word's sentiments. Like WordNet, many words in SentiWordNet may contain multiple senses, and each sense may have its own sentiment score or sentiment orientation for the same word form. This paper therefore argues that the task of word sense disambiguation (WSD) should be done before a proper sentiment score or sentiment orientation for a word in SentiWordNet can be justified. We proposed three WSD techniques, which are based on the context of domain WOM documents and are thus able to build a domain oriented sentiment lexicon for sentiment classification. We integrated two tokenization approaches with sentiment vector space modeling, using SVM-Linear, SVM-RBF, the J48 decision tree and Naive Bayes classification, and evaluated them by the accuracy criterion. The experiments demonstrate that the proposed word sense disambiguation based SentiWordNet has the potential to improve sentiment classification.

In terms of possible further work, this paper evaluates the performance of WSD by the accuracy measure of sentiment classification, which is an indirect evaluation approach. Thus one area which could be developed is how to directly evaluate the performance of WSD. Furthermore, the WSD techniques used in this
Table 5
Statistical significance test results based on t-test for IMDb. Accuracy (p-value).

Classification approach | Tokenization | SWN-first | SWN-average     | DSWN-method1       | DSWN-method2       | DSWN-method3
SVM-Linear              | unigram      | 70.13%    | 70.45% (0.4686) | 72.39% (0.0021**)  | 71.91% (0.0034**)  | 70.92% (0.0352*)
                        | bigram       | 70.33%    | 70.66% (0.4470) | 72.41% (0.0176*)   | 71.90% (0.0009***) | 70.91% (0.0750)
SVM-RBF                 | unigram      | 67.93%    | 66.39% (0.4449) | 70.27% (0.0000***) | 69.03% (0.0081**)  | 67.79% (0.5041)
                        | bigram       | 67.93%    | 66.40% (0.4461) | 69.74% (0.0000***) | 68.76% (0.0189*)   | 67.69% (0.2683)
J48                     | unigram      | 64.67%    | 64.63% (0.9477) | 66.43% (0.0005***) | 66.65% (0.0010**)  | 65.20% (0.1420)
                        | bigram       | 64.62%    | 64.25% (0.598)  | 66.06% (0.0354*)   | 66.22% (0.0322*)   | 65.17% (0.1248)
Naive Bayes             | unigram      | 71.02%    | 72.15% (0.0228*) | 72.93% (0.0003***) | 72.63% (0.0011**) | 71.83% (0.0001***)
                        | bigram       | 71.20%    | 72.27% (0.0308*) | 72.46% (0.0049**)  | 72.88% (0.0008***) | 71.90% (0.0001***)

* denotes significance level α < 0.05.
** denotes significance level α < 0.01.
*** denotes significance level α < 0.001.
Table 6
Statistical significance test results based on t-test for Hotels.com. Accuracy (p-value).

Classification approach | Tokenization | SWN-first | SWN-average        | DSWN-method1      | DSWN-method2      | DSWN-method3
SVM-Linear              | unigram      | 77.20%    | 76.24% (0.0389*)   | 77.72% (0.0043**) | 77.87% (0.0069**) | 77.35% (0.6350)
                        | bigram       | 77.23%    | 76.27% (0.0401*)   | 77.75% (0.0023**) | 77.99% (0.0012**) | 77.42% (0.5914)
SVM-RBF                 | unigram      | 77.61%    | 67.69% (0.0000***) | 77.33% (0.0173*)  | 77.30% (0.0088**) | 76.78% (0.2527)
                        | bigram       | 76.62%    | 67.70% (0.0000***) | 77.30% (0.0190*)  | 77.37% (0.0166*)  | 76.72% (0.4867)
J48                     | unigram      | 69.24%    | 70.03% (0.1421)    | 72.09% (0.0075**) | 72.03% (0.0025**) | 71.35% (0.0054**)
                        | bigram       | 69.27%    | 70.01% (0.1496)    | 72.11% (0.0073**) | 71.97% (0.0011**) | 71.39% (0.0046**)
Naive Bayes             | unigram      | 74.66%    | 73.17% (0.0480*)   | 74.76% (0.6953)   | 75.21% (0.0209*)  | 74.67% (0.9833)
                        | bigram       | 74.73%    | 73.11% (0.0278*)   | 74.79% (0.8113)   | 75.22% (0.0189*)  | 74.61% (0.4789)

* denotes significance level α < 0.05.
** denotes significance level α < 0.01.
*** denotes significance level α < 0.001.
paper are based on the WordNet ontology. Some supervised learning techniques for WSD may provide further developments in sentiment classification. The proposed approach is also suitable for other application domains. The use of more datasets, including different languages, could be explored, and possible cross-domain issues could be considered in further work.

Acknowledgments

This work was partially supported by the National Science Council of Taiwan under grant No. NSC 102-2410-H-033-033-MY2.

References

[1] S. Agrawal, T. Siddiqui, Using syntactic and contextual information for sentiment polarity analysis, in: Proceedings of the 2nd International Conference on Interaction Sciences: Information Technology, Culture and Human, 2009, pp. 620–623.
[2] B.F.Z. AL_Bayaty, S. Joshi, Empirical comparative study to supervised approaches for WSD problem: survey, in: Proceedings of 2015 IEEE Canada International Humanitarian Technology Conference, 2015, pp. 1–7.
[3] S. Baccianella, A. Esuli, F. Sebastiani, SentiWordNet 3.0: an enhanced lexical resource for sentiment analysis and opinion mining, in: Proceedings of the 7th International Conference on Language Resources and Evaluation, 2010, pp. 2200–2204.
[4] J.A. Balazs, J.D. Velásquez, Opinion mining and information fusion: a survey, Inf. Fusion 27 (2016) 95–110.
[5] S. Banerjee, T. Pedersen, An adapted Lesk algorithm for word sense disambiguation using WordNet, in: Proceedings of the Third International Conference on Intelligent Text Processing and Computational Linguistics, 2002, pp. 136–145.
[6] B. Bickart, R.M. Schindler, Internet forums as influential sources of consumer information, J. Interact. Mark. 15 (3) (2001) 31–40.
[7] E. Brill, A simple rule-based part of speech tagger, in: Proceedings of the 3rd Applied Natural Language Processing Conference, 1992, pp. 152–155.
[8] K. Cai, S. Spangler, Y. Chen, L. Zhang, Leveraging sentiment analysis for topic detection, Web Intell. Agent Syst. 8 (3) (2010) 291–302.
[9] E. Cambria, C. Havasi, A. Hussain, SenticNet 2: a semantic and affective resource for opinion mining and sentiment analysis, in: Proceedings of the 25th International Florida Artificial Intelligence Research Society Conference, 2012, pp. 202–207.
[10] E. Cambria, D. Olsher, D. Rajagopal, SenticNet 3: a common and common-sense knowledge base for cognition-driven sentiment analysis, in: Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, 2014, pp. 1515–1521.
[11] E. Cambria, B. Schuller, Y. Xia, C. Havasi, New avenues in opinion mining and sentiment analysis, IEEE Intell. Syst. 28 (2) (2013) 15–21.
[12] E. Cambria, R. Speer, C. Havasi, A. Hussain, SenticNet: a publicly available semantic resource for opinion mining, in: Proceedings of the AAAI Commonsense Knowledge Symposium, 2010, pp. 14–18.
[13] I. Chaturvedi, E. Cambria, F. Zhu, L. Qiu, W.K. Ng, Multilingual subjectivity detection using deep multiple kernel learning, in: Proceedings of the Workshop on Issues of Sentiment Discovery and Opinion Mining, 21st ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2015.
[14] C. Chen, D. Zeng, A dynamic user adaptive combination strategy for hybrid movie recommendation, in: Proceedings of the IEEE International Conference on Service Operations and Logistics, and Informatics, 2012, pp. 172–176.
[15] A.-G. Chifu, F. Hristea, J. Mothe, M. Popescu, Word sense discrimination in information retrieval: a spectral clustering-based approach, Inf. Process. Manage. 51 (2015) 16–31.
[16] M.J. Collins, A new statistical parser based on bigram lexical dependencies, in: Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics, 1996, pp. 184–191.
[17] C. Cortes, V. Vapnik, Support-vector networks, Mach. Learn. 20 (3) (1995) 273–297.
[18] S. Das, M. Chen, Yahoo! for Amazon: extracting market sentiment from stock message boards, in: Proceedings of the Asia Pacific Finance Association Annual Conference, 2001.
[19] K. Dave, S. Lawrence, D. Pennock, Mining the peanut gallery: opinion extraction and semantic classification of product reviews, in: Proceedings of the 12th International Conference on World Wide Web, 2003, pp. 519–528.
[20] K. Denecke, Using SentiWordNet for multilingual sentiment analysis, in: Proceedings of the 24th International Conference on Data Engineering Workshop, 2008, pp. 507–512.
[21] A. Devitt, K. Ahmad, Sentiment polarity identification in financial news: a cohesion-based approach, in: Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, 2007, pp. 984–991.
[22] E.C. Dragut, C. Yu, W. Meng, Construction of a sentimental word dictionary, in: Proceedings of the 19th ACM International Conference on Information and Knowledge Management, 2010, pp. 1761–1764.
[23] A. Esuli, F. Sebastiani, SentiWordNet: a publicly available lexical resource for opinion mining, in: Proceedings of the 5th Conference on Language Resources and Evaluation, 2006, pp. 417–422.
[24] C. Fellbaum, WordNet: An Electronic Lexical Database, MIT Press, 1998.
[25] D. Fernandez-Amoros, R. Heradio, Understanding the role of conceptual relations in word sense disambiguation, Expert Syst. Appl. 38 (8) (2011) 9506–9516.
[26] M. Gamon, A. Aue, S. Corston-Oliver, E. Ringger, Pulse: mining customer opinions from free text, in: Proceedings of the International Symposium on Intelligent Data Analysis (IDA), 2005, pp. 121–132.
[27] D. Godes, D. Mayzlin, Using online conversations to study word-of-mouth communication, Mark. Sci. 23 (4) (2004) 545–560.
[28] Y. Hu, Z. Wang, W. Wu, J. Guo, M. Zhang, Recommendation for movies and stars using YAGO and IMDB, in: Proceedings of the 12th International Asia-Pacific Web Conference, 2010, pp. 123–129.
[29] C. Hung, H.-K. Lin, Using objective words in SentiWordNet to improve word-of-mouth sentiment classification, IEEE Intell. Syst. 28 (2) (2013) 47–54.
[30] C. Hung, C.-F. Tsai, H. Huang, Extracting word-of-mouth sentiments via SentiWordNet for document quality classification, Recent Pat. Comput. Sci. 5 (2) (2012) 145–152.
[31] N. Ide, J. Véronis, Word sense disambiguation: the state of the art, Comput. Linguist. 24 (1) (1998) 1–40.
[32] H. Jeong, D. Shin, J. Choi, FEROM: feature extraction and refinement for opinion mining, ETRI J. 33 (5) (2011) 720–730.
[33] R.A. Johnson, G.K. Bhattacharyya, Statistics: Principles and Methods, Wiley, 2014.
[34] G. Katz, N. Ofek, B. Shapira, ConSent: context-based sentiment analysis, Knowl.-Based Syst. 84 (2015) 162–178.
[35] K. Khan, B.B. Baharudin, A. Khan, F. e-Malik, Mining opinion from text documents: a survey, in: Proceedings of the Third IEEE International Conference on Digital Ecosystems and Technologies, 2009, pp. 217–222.
[36] J. Kranjc, J. Smailović, Grčar, Podpečan, M. Žnidaršič, N. Lavrač, Active learning for sentiment analysis on data streams: methodology and workflow implementation in the ClowdFlows platform, Inf. Process. Manage. 51 (2015) 187–203.
[37] C. Laorden, I. Santos, B. Sanz, G. Alvarez, P.G. Bringas, Word sense disambiguation for spam filtering, Electron. Commer. Res. Appl. 11 (2012) 290–298.
[38] Y. Lee, J. Kim, J.H. Lee, Extracting domain-dependent semantic orientations of latent variables for sentiment classification, in: Proceedings of the 22nd International Conference on the Computer Processing of Oriental Languages, 2009, pp. 201–212.
[39] M. Lesk, Automatic sense disambiguation using machine readable dictionaries: how to tell a pine cone from an ice cream cone, in: Proceedings of the 5th Annual International Conference on Systems Documentation, 1986, pp. 24–26.
[40] B. Liu, Sentiment analysis and subjectivity, in: N. Indurkhya, F.J. Damerau (Eds.), Handbook of Natural Language Processing, Chapman & Hall/CRC Press, 2010.
[41] D. McCarthy, Word sense disambiguation: an overview, Lang. Linguist. Compass 3 (2) (2009) 537–558.
[42] G.A. Miller, WordNet: a lexical database for English, Commun. Assoc. Comput. Mach. 38 (11) (1995) 39–41.
[43] R. Navigli, Word sense disambiguation: a survey, ACM Comput. Surv. 41 (2) (2009) 1–69.
[44] T.H. Nguyen, K. Shirai, J. Velcin, Sentiment analysis on social media for stock movement prediction, Expert Syst. Appl. 42 (24) (2015) 9603–9611.
[45] B. Ohana, B. Tierney, Sentiment classification of reviews using SentiWordNet, in: Proceedings of the 9th IT&T Conference, 2009.
[46] J. Platt, Fast training of support vector machines using sequential minimal optimization, in: B. Schölkopf, et al. (Eds.), Advances in Kernel Methods: Support Vector Learning, MIT Press, Cambridge, 1999, pp. 185–208.
[47] S. Poria, E. Cambria, A. Gelbukh, F. Bisio, A. Hussain, Sentiment data flow analysis by means of dynamic linguistic patterns, IEEE Comput. Intell. Mag. 10 (4) (2015) 26–36.
[48] K. Ravi, V. Ravi, A survey on opinion mining and sentiment analysis: tasks, approaches and applications, Knowl.-Based Syst. 89 (2015) 14–46.
[49] P. Resnik, Semantic similarity in a taxonomy: an information-based measure and its application to problems of ambiguity in natural language, J. Artif. Intell. Res. 11 (1999) 95–130.
[50] M. Rushdi-Saleh, M.T. Martín-Valdivia, A. Montejo-Ráez, L.A. Ureña-López, Experiments with SVM to classify opinions in different domains, Expert Syst. Appl. 38 (12) (2011) 14799–14804.
[51] S. Russell, P. Norvig, Artificial Intelligence: A Modern Approach, Prentice-Hall International, 1995.
[52] H. Saggion, A. Funk, Interpreting SentiWordNet for opinion classification, in: Proceedings of the International Conference on Language Resources and Evaluation, 2010, pp. 1129–1133.
[53] G. Salton, Automatic Text Processing: The Transformation, Analysis, and Retrieval of Information by Computer, Addison-Wesley, USA, 1989.
[54] J. Serrano-Guerrero, J.A. Olivas, F.P. Romero, E. Herrera-Viedma, Sentiment analysis: a review and comparative analysis of web services, Inf. Sci. 311 (2015) 18–38.
[55] J. Sreedhar, S.V. Raju, A.V. Babu, A. Shaik, P.P. Kumar, Word sense disambiguation: an empirical survey, Int. J. Soft Comput. Eng. 2 (2) (2012) 494–503.
[56] A. Stanescu, S. Nagar, D. Caragea, A hybrid recommender system: user profiling from keywords and ratings, in: Proceedings of the IEEE/WIC/ACM International Joint Conferences on Web Intelligence and Intelligent Agent Technologies, 2013, pp. 73–80.
[57] P.J. Stone, D.C. Dunphy, M.S. Smith, D.M. Ogilvie, The General Inquirer: A Computer Approach to Content Analysis, MIT Press, 1966.
[58] H. Tang, S. Tan, X. Cheng, A survey on sentiment detection of reviews, Expert Syst. Appl. 36 (7) (2009) 10760–10773.
[59] A. Valitutti, C. Strapparava, O. Stock, Developing affective lexical resources, PsychNology J. 2 (1) (2004) 61–83.
[60] E.M. Voorhees, Using WordNet to disambiguate word senses for text retrieval, in: Proceedings of the 16th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 1993, pp. 171–180.
[61] G. Wang, Z. Zhang, J. Sun, S. Yang, G.A. Larson, POS-RS: a random subspace method for sentiment classification based on part-of-speech analysis, Inf. Process. Manage. 51 (2015) 548–579.
[62] I.H. Witten, E. Frank, Data Mining: Practical Machine Learning Tools and Techniques, second ed., Morgan Kaufmann, 2005.
[63] H. Wu, X. Wang, Z. Peng, Q. Li, Actively building collaborative filtering recommendation in clustered social data, in: Proceedings of the IEEE 16th International Conference on Computer Supported Cooperative Work in Design, 2012, pp. 693–697.
[64] S. Yueheng, W. Linmei, D. Zheng, Automatic sentiment analysis for web user reviews, in: Proceedings of the First International Conference on Information Science and Engineering, 2009, pp. 806–809.
[65] J. Zhan, H.T. Loh, Y. Liu, Gather customer concerns from online product reviews – a text summarization approach, Expert Syst. Appl. 36 (2) (2009) 2107–2115.