SACPC: A framework based on probabilistic linguistic terms for short text sentiment analysis

To appear in: Knowledge-Based Systems
DOI: https://doi.org/10.1016/j.knosys.2020.105572
Received: 4 April 2019; Revised: 28 October 2019; Accepted: 23 January 2020
© 2020 Published by Elsevier B.V.

Chao Song1, Xiao-Kang Wang1*, Peng-fei Cheng2, Jian-qiang Wang2,1, Lin Li3


1. School of Business, Central South University, Changsha 410083, PR China
2. Hunan Engineering Research Center for Intelligent Decision Making and Big Data on Industrial Development, Hunan University of Science and Technology, Xiangtan 411201, PR China
3. School of Business, Hunan University, Changsha 410082, PR China

Correspondence should be addressed to Xiao-Kang Wang: [email protected]


Abstract: Short text sentiment analysis is challenging because short texts are limited in length and lack context. Short texts are often ambiguous because of polysemy and the typos they contain. Polysemy, the coexistence of multiple meanings for one word, appears in every language, and different uses of a word may give it both positive and negative meanings. Previous studies often ignore this variability of words, which can cause analysis errors. To address this problem, we propose a novel text representation model named Word2PLTS for short text sentiment analysis by introducing probabilistic linguistic term sets (PLTSs) and the relevant theory. In this model, every word is represented as a PLTS that fully describes the possible sentiment polarities of the word. Then, by using support vector machines (SVMs), a novel sentiment analysis and polarity classification framework named SACPC is obtained. The framework combines supervised and unsupervised learning. We compare SACPC with lexicon-based and machine learning approaches on three benchmark datasets and observe a noticeable improvement in performance. To further verify the superiority of SACPC, comparisons with state-of-the-art methods are conducted, and the results are impressive.

Keywords: semantic change; sentiment analysis; probabilistic linguistic terms; polarity classification

1. Introduction

In recent decades, the rapid development of the Internet has promoted a transformation from the "reading Internet" to the "interactive Internet". Social networking has revolutionized people's daily lives. The Internet has not only become an important source of information but also provides an important platform for people to voice their views, share their experiences and express feelings of anger, sadness and joy. As a result, the Internet hosts a huge volume of sentiment-laden reviews. Such a flow of information may seem trivial and chaotic, but the potential value hidden in it should not be ignored. Natural language processing (NLP) is widely applied to process this information. Sentiment analysis is an important branch of NLP that involves disciplines such as statistics, linguistics, psychology and artificial intelligence. Sentiment analysis usually captures an evaluative factor (positive or negative) and a potency or strength (the degree to which the word, phrase, sentence, or document in question is positive or negative) towards a topic, person, or idea [1]. Sentiment analysis systems have been applied to many kinds of texts, including customer reviews [2-4], news [5, 6] and tweets [7-10].

There are two main categories of methods for the automatic extraction of sentiment information: lexicon-based approaches and machine learning approaches. An overview of the categorization of classification methods is given in Fig. 1. In most lexicon-based approaches, the sentiment polarity score of a word is expressed as a real number; when the number is greater than zero, the word is labelled positive, and otherwise the word is classified as negative. In machine learning, the vector space model is a frequently used representation of text documents, with each dimension corresponding to a specific word. The value of each word, known as its weight, is calculated on the basis of the word's frequency. However, almost all approaches ignore the variability of words: the sentiment polarity of a word is treated as constant.

Fig. 1. Sentiment analysis methods.

Unlike the sentiment analysis of sentences and documents, sentiment analysis of short texts is difficult. Short texts are rather ambiguous and fuzzy because of limited contextual information, polysemy and typos. Polysemy, the coexistence of multiple word meanings, appears widely in every language. For example, in Fig. 2, the word "funny" may describe someone or something that makes you laugh or smile, as in "a funny story". However, "funny" can also imply something suspicious or illegal, as in "I suspect there may be something funny going on." This double use of language, which gives the same word both a positive and a negative meaning, is not an accident. The sentiment polarity of a word fluctuates with the context. Thus, it is unreasonable to express the emotional information of a word with a single real number: doing so not only loses information but also ignores the fuzziness and uncertainty of human language. In machine learning, the "bag of words" (BOW) model and word embedding models (such as word2vec) ignore the issue of semantic change, and short texts lack contextual information and contain typos, which pose challenges for machine learning approaches. Therefore, it is probably better to describe a word by a set of possible sentiment polarities when performing sentiment analysis.

Fig. 2. Different uses of the word “funny”

Similarly, a single text review may contain various sentiment polarities. Simply dividing reviews into negative or positive is only approximately 70% accurate [11]. Each individual text review may include positive, neutral and negative information [12]. For instance, a consumer may comment on a restaurant where the food tastes only so-so, the price is reasonable but the service is rude; neutral, positive and negative sentiment information thus come together in one review. The consumer may express that he is 20% sure that an item is good, 30% sure that the item is neither better nor worse, and 50% sure that the item is awful. The value of the corresponding linguistic variable can then be denoted by a probabilistic linguistic term set, e.g., $\{s_{1}(0.2),\,s_{0}(0.3),\,s_{-1}(0.5)\}$.

To fully describe and quantify this information, it is necessary to consider both the possible sentiment polarities and the associated probabilistic information. This information can be interpreted as a probabilistic distribution [13, 14], importance [15], degree of belief [16] and so on. Ignoring it leads to inaccurate analysis and may cause erroneous subsequent decisions.

Thus, it is preferable to calculate the probabilistic distribution over all levels of sentiment polarity when performing sentiment analysis on short texts. In this study, we introduce PLTSs and the relevant theory into short text sentiment analysis and put forward a novel text representation model named Word2PLTS. Then, a novel sentiment analysis and polarity classification framework named SACPC is proposed. In this framework, the sentiment information of a word is expressed as a PLTS that depicts the fuzziness and uncertainty of the word. By aggregating the PLTSs of its words, the sentiment information of a sentence is obtained. Finally, support vector machines (SVMs) are used to classify the text. The process can be roughly divided into three steps: preprocessing, text representation and classification. In addition, we evaluate the performance of our framework on three datasets: movie reviews, Stanford Twitter Sentiment and TripAdvisor reviews. The experimental results show the validity and reliability of the proposed framework.

The rest of this study is organized as follows. Related work is presented in Section 2. Section 3 elaborates the sentiment analysis framework. In Section 4, we introduce the datasets used in this research. The results, evaluation and discussion are presented in Section 5. Finally, Section 6 concludes this study and provides suggestions for future research directions.

2. Related work

In this section, several sentiment analysis techniques and basic concepts related to PLTS theory are described.

2.1 Sentiment analysis

Sentiment analysis, also referred to as opinion mining, is a popular research topic in the field of NLP. It is used to analyse a writer's opinions, evaluations, attitudes and emotions towards a particular entity. The approaches can be divided into two groups: lexicon-based approaches and machine learning approaches.

2.1.1 Lexicon-based approaches

A large number of sentiment analysis methods rely heavily on sentiment lexicons and related tools. We briefly review five widely used resources: one manually built sentiment lexicon, two sentiment analysis tools based on sentiment lexicons, and two sentiment lexicons built with supervised learning.

The General Inquirer1 (GI) [17] is one of the oldest manually built sentiment lexicons but is still popular. It was built in 1966 and has been improved constantly since then; GI has been widely used in the sociological, psychological, economic and anthropological fields [18]. The lexicon contains approximately 11,789 words, and each word carries one or more labels, such as negative, positive and pain. In this research, we focus only on the words labelled negative or positive. However, GI ignores sentiment intensity, so it is inappropriate for fine-grained sentiment analysis. We downloaded the GI dictionary from its web page2.

TextBlob3 (TB) [19] is an open-source Python library for processing textual data that stands on the shoulders of NLTK4 and pattern5. Sentiment analysis is one of the most basic functions of TextBlob. TB has a lexicon of adjectives (e.g., awful, funny, worst, and badly) that occur frequently in reviews. Using regular expressions and this lexicon, TextBlob calculates the polarity intensity of a given text; the polarity scores of words are adjusted based on the frequency of the adjectives and the words that follow them. The polarity score is a real number within the range [-1.0, 1.0], where -1.0 is the most negative and 1.0 is the most positive.
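A minimal sketch of reading such a polarity score with TextBlob (our own illustration, not code from the paper; only the standard sentiment property of the library is used):

```python
# Minimal sketch: polarity scoring with TextBlob; the result lies in [-1.0, 1.0].
from textblob import TextBlob

review = TextBlob("The food was awful but the staff were funny.")
print(review.sentiment.polarity)  # a float between -1.0 (negative) and 1.0 (positive)
```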

Valence Aware Dictionary and sEntiment Reasoner (VADER)6 [20] is a rule-based sentiment analysis tool. It is specifically attuned to sentiments expressed in social media but also works well on texts from other domains. VADER collected intensity ratings for common sentiment words (including adjectives, nouns and adverbs) from 10 independent human raters. The sentiment intensity is given in the range [-4, 4], from extremely negative to extremely positive. Compared with other well-established lexicons, VADER incorporates numerous lexical features common to sentiment expression in social media: a full list of Western-style emoticons, sentiment-related acronyms and commonly used slang with sentiment values are included in the lexicon. We accessed the VADER polarity scores through the online VADER API and a publicly available Python package.

SentiWordNet7 (SWN) [21, 22] is a lexical resource for opinion mining based on WordNet8. More than 140,000 words are included in this lexicon, and each word has one or more synsets. SWN assigns two sentiment scores to each synset: a positive score (PosScore) and a negative score (NegScore). The scores range from 0 to 1 and are calculated using a complex mix of semi-supervised algorithms. A sample is given in Table 1. The POS tag can be 'a', 'v', 'r' or 'n', representing adjectives, verbs, adverbs and nouns, respectively. Although SWN contains many words, its reported performance is not impressive: the lexicon is very noisy, and a large majority of synsets have no positive or negative polarity. We interfaced with SWN via NLTK.

1. http://www.wjh.harvard.edu/~inquirer
2. http://www.wjh.harvard.edu/~inquirer/spreadsheet_guide.htm
3. https://textblob.readthedocs.io/en/dev/
4. http://www.nltk.org
5. https://www.clips.uantwerpen.be/pages/pattern-en
6. https://pypi.org/project/vaderSentiment/
7. http://sentiwordnet.isti.cnr.it

Table 1. SentiWordNet 3.0 sample

Word    Synsets           PosScore    NegScore
happy   happy.a.01        0.875       0
        felicitous.s.02   0.75        0
        glad.s.02         0.50        0
        happy.s.04        0.125       0
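The PosScore and NegScore values of Table 1 can be read through the NLTK interface mentioned above. The following minimal sketch is our own illustration (it assumes the sentiwordnet and wordnet corpora have been downloaded with nltk.download()):

```python
# Reading PosScore/NegScore for each synset of a word from SentiWordNet via NLTK.
from nltk.corpus import sentiwordnet as swn

def synset_scores(word):
    """Return (synset name, PosScore, NegScore) for every SWN synset of `word`."""
    return [(s.synset.name(), s.pos_score(), s.neg_score())
            for s in swn.senti_synsets(word)]

print(synset_scores("happy"))
# e.g. [('happy.a.01', 0.875, 0.0), ('felicitous.s.02', 0.75, 0.0), ...]
```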

SenticNet (SCN)9 is a public resource for emotionally aware intelligence [23]. There are more than 14,000 commonsense concepts in the SCN lexicon, such as wrath, adoration, woe and admiration. In the SCN lexicon, sentiment information is expressed in terms of four affective dimensions (pleasantness, attention, sensitivity and aptitude), and the sentiment polarity of a concept is a numeric value ranging from -1 to 1. Instead of simply relying on counting word co-occurrence frequencies, SCN is constructed with the bag-of-concepts model, a paradigm that exploits both AI and Semantic Web techniques. SCN 5 is available through its Python library.

Almost all lexicons except SWN ignore the problem of polysemy. Although SWN provides a sentiment score for each synset, it does not assign a probability of occurrence to each synset. In certain studies [24, 25] based on SWN, the average sentiment score of the synsets is regarded as the sentiment score of a word; this undoubtedly introduces information loss and deviation.

2.1.2 Machine learning approaches

It takes a large amount of time and labour to manually build a comprehensive sentiment lexicon and to keep improving it. Many researchers have therefore explored methods for the automatic identification of sentiment features [26]. Machine learning is a technique that can "learn" the sentiment features of text: a classifier is trained on a training dataset and then applied to the test dataset. There are a variety of classifiers; we cover the ones used in our experiments. The naive Bayes (NB) classifier [27] is a simple classifier that is extensively used and usually achieves good results. It relies on Bayesian probability and assumes that the feature probabilities are independent of each other. Logistic regression (LR) [28] is a general machine learning technique that can be regarded as a case of the generalized linear model; by employing logistic functions, LR usually obtains good classification results. Unlike NB and LR, the SVM [29] is a non-probabilistic classifier: it constructs a hyperplane in a high-dimensional space that separates the data points and then classifies them. Pang et al. [30] compared three machine learning approaches (NB, SVM and maximum entropy classification) on sentiment classification; the SVM tends to do slightly better than the other approaches.

8. https://wordnet.princeton.edu
9. http://www.sentic.net/

Scikit-learn is a Python library that provides various machine learning algorithms; we use it for the NB, LR and SVM classification models. However, machine learning approaches have certain problems. First, they require a large amount of training data, which is sometimes not easy to obtain, and labelling a dataset manually requires considerable time and resources. Second, data sparsity is a major concern when the quantity of labelled data is insufficient to train a classifier. Third, when classification errors occur, they are difficult to explain and correct.
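For concreteness, the following sketch (our own illustration with toy data, not the experimental setup of the paper) shows the typical scikit-learn pipeline behind such bag-of-words baselines: a count vectorizer followed by one of the three classifiers.

```python
# Toy bag-of-words pipeline with the three scikit-learn classifiers used as baselines.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

texts = ["a delightful and wise film", "a dull and absurd film"]  # illustrative data
labels = [1, -1]                                                  # +1 positive, -1 negative

for clf in (MultinomialNB(), LogisticRegression(), SVC(kernel="rbf")):
    model = make_pipeline(CountVectorizer(), clf).fit(texts, labels)
    print(type(clf).__name__, model.predict(["a wise film"]))
```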

2.2 PLTS

A PLTS is a powerful mathematical tool for depicting the fuzziness and uncertainty of language. Pang et al. [31] were the first to provide definitions and operations for PLTSs. However, there were problems with the original arithmetic operation rules: the resulting values could go beyond the bounds of the given linguistic term set, which may lead to the loss of linguistic information. To solve this problem, Gou et al. [32, 33] redefined more logical operations for PLTSs, which have been widely used in later studies. The basic theory, operations and operators of PLTSs are provided in Appendix A.

PLTSs have attracted widespread attention from scholars, and many studies based on PLTSs have emerged. Zhang et al. [34] applied a PLTS to assess investment risk and proposed a novel concept named the probabilistic linguistic preference relation (PLPR). Peng et al. [35] proposed a cloud decision support model named the probabilistic linguistic integrated cloud and used it for hotel selection on TripAdvisor.com. Regarding multicriteria decision-making problems, Liao et al. [36] presented a probabilistic linguistic linear programming method to evaluate hospital levels in China. Tian et al. [37] presented a multicriteria decision-making method based on PLTSs and evidential reasoning. Luo et al. [38] used PLTSs to evaluate the sustainability of constructed wetlands. Krishankumar et al. [39] proposed a probabilistic linguistic preference relation (PLPR)-based decision framework. There are also many studies on the application of PLTSs in group decision making (GDM). Wu et al. [40] proposed a multiple criteria group decision making (MCGDM) method with PLTSs based on consensus measures and outranking methods. Mao et al. [41] applied an MCGDM method with PLTSs to finance. Nie et al. [42] proposed a GDM support model integrating prospect theory and PLTSs.

An effective way to handle such uncertainty is to apply a PLTS, as is done in group decision making. A PLTS can comprehensively depict sentiment information, uncertainty and fuzziness. In this study, the sentiment information of each word is translated into a PLTS $L(p)=\{s_{\alpha}(p_{\alpha})\mid s_{\alpha}\in S\}$ defined on the linguistic term set $S=\{s_{\alpha}\mid \alpha=-\tau,\ldots,-1,0,1,\ldots,\tau\}$, where $\tau$ determines the number of levels, $s_{\alpha}$ represents a sentiment polarity level and $p_{\alpha}$ is the probability of $s_{\alpha}$. If $\alpha>0$, greater values of $\alpha$ indicate more positive sentiment polarity; if $\alpha<0$, lower values of $\alpha$ indicate more negative sentiment polarity. In this way, the subset $\{s_{\alpha}\mid \alpha<0\}$ quantifies the negative meaning, the subset $\{s_{0}\}$ quantifies the neutral meaning, and the subset $\{s_{\alpha}\mid \alpha>0\}$ quantifies the positive meaning. For example, supposing $\tau=3$, the sentiment information of "funny" may be represented by the PLTS $\{s_{3}(0.7),\,s_{-1}(0.2),\,s_{-3}(0.1)\}$. This representation means that the word conveys positive information of level 3 with 70% probability and negative information of level 1 and level 3 with 20% and 10% probability, respectively. We can thus assign sentiment scores and probabilities to each meaning of a word and form a set, so the problem of polysemy can be readily handled. Obviously, for larger values of $\tau$, the sentiment information can be depicted more finely. In this way, a PLTS depicts the sentiment information of a word to the greatest possible extent.
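To make the representation concrete, the following minimal sketch (our own illustration) stores a PLTS as a Python dictionary that maps each linguistic-term subscript to its probability, using the "funny" example above:

```python
# A PLTS as a plain dictionary: key = linguistic-term subscript alpha, value = probability.
# Subscripts and probabilities follow the "funny" example in the text.
funny_plts = {3: 0.7, -1: 0.2, -3: 0.1}

def polarity_breakdown(plts):
    """Split the probability mass of a PLTS into negative, neutral and positive parts."""
    neg = sum(p for a, p in plts.items() if a < 0)
    neu = plts.get(0, 0.0)
    pos = sum(p for a, p in plts.items() if a > 0)
    return neg, neu, pos

print(polarity_breakdown(funny_plts))  # (0.3, 0.0, 0.7)
```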

3. Application of probabilistic linguistic terms in sentiment analysis

As illustrated in Section 1, certain deficiencies exist in prior studies; to address these shortcomings, a novel sentiment analysis and classification framework is proposed. The framework is divided into three processes: preprocessing, Word2PLTS and classification. The flowchart of this framework is shown in Fig. 3.

Fig. 3. Flowchart of the proposed framework

3.1 Preprocessing

To enhance data quality and support better analysis, preprocessing is necessary. Preprocessing includes spelling correction, negation word checking, part-of-speech tagging and stop word removal.

3.1.1 Spelling correction

Because spelling errors are very common in online reviews, spelling correction is necessary. Spelling correction is performed using TextBlob10, which is based on Peter Norvig's "How to Write a Spelling Corrector"11 as implemented in the pattern library. For example, the sentence "I havv goood speling!" can be corrected to "I have good spelling!".
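A one-line sketch of this correction step with TextBlob's standard correct() method (our own illustration):

```python
# Spelling correction with TextBlob's correct() method.
from textblob import TextBlob

print(TextBlob("I havv goood speling!").correct())  # -> I have good spelling!
```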

3.1.2 Negation word checking

When negation words appear in a text, its sentiment polarity may shift or reverse. For example, when the negation words "not" or "no" appear, as in "not happy" and "no success", the positive words "happy" and "success" have their polarity reversed. When there are multiple negation words in a passage, the polarity is reversed multiple times. The negative coefficient $\delta$ is computed as follows:

$$\delta = (-1)^{m} \qquad (1)$$

where $m$ is the number of negation words. If $\delta = -1$, the sentence is treated as negated and its sentiment polarity is reversed; if $\delta = 1$, the sentiment polarity remains unchanged.
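A minimal sketch of this check, assuming a small hand-picked negation list (the paper does not enumerate its negation words, so the list below is illustrative):

```python
# Negation check: delta = (-1) ** (number of negation words), as in Eq. (1).
NEGATIONS = {"not", "no", "never", "n't"}  # illustrative subset, not the paper's list

def negation_coefficient(tokens):
    m = sum(1 for t in tokens if t.lower() in NEGATIONS)
    return (-1) ** m

print(negation_coefficient("this is not a good movie".split()))  # -1
```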

3.1.3 Part-of-speech (POS) tagging

POS tagging, also called grammatical tagging, is the process of labelling words as nouns, verbs, adjectives, adverbs, etc. The Stanford POS tagger12 was employed in our study. The tagger uses Penn Treebank POS tags, which are converted into coarse tags for verbs, nouns, adjectives and adverbs; the details are shown in Table 2. Only verbs, nouns, adjectives and adverbs are tagged in this research, while the remaining categories, such as prepositions and numerals, are ignored.

Table 2. POS tag conversion table

POS         Penn Tag
Verb        VB (verb, base form); VBD (verb, past tense); VBG (verb, gerund or present participle); VBN (verb, past participle); VBP (verb, non-3rd person singular present); VBZ (verb, 3rd person singular present)
Noun        NN (noun, singular or mass); NNS (noun, plural); NNP (proper noun, singular); NNPS (proper noun, plural)
Adjective   JJ (adjective); JJR (adjective, comparative); JJS (adjective, superlative)
Adverb      RB (adverb); RBR (adverb, comparative); RBS (adverb, superlative)
Other       Other

10. https://textblob.readthedocs.io/en/dev/
11. http://norvig.com/spell-correct.html
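The Penn-to-coarse mapping of Table 2 can be written compactly. The sketch below is our own illustration and uses NLTK's tagger as a stand-in for the Stanford POS tagger named above (it assumes the relevant NLTK models have been downloaded):

```python
# Sketch of the Penn-tag -> coarse-category mapping in Table 2.
# nltk.pos_tag is used here only as a stand-in for the Stanford POS tagger.
import nltk

def coarse_tag(penn_tag):
    if penn_tag.startswith("VB"):
        return "Verb"
    if penn_tag.startswith("NN"):
        return "Noun"
    if penn_tag.startswith("JJ"):
        return "Adjective"
    if penn_tag.startswith("RB"):
        return "Adverb"
    return "Other"

tokens = nltk.word_tokenize("In its chicken heart, crush goes to absurd lengths")
print([(w, coarse_tag(t)) for w, t in nltk.pos_tag(tokens)])
```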

3.1.4 Stop word removal

In a language, stop words are the most common words that carry no special meaning, such as 'it', 'is' and 'the'. These words are not useful for text sentiment analysis. To reduce computation, storage space and noise, a stop word list13 is used to filter them out.

13. https://www.ranks.nl/stopwords

3.2 Word2PLTS: A novel text representation model based on PLTS

As illustrated in Section 2.2, a PLTS can fully depict the sentiment information of each meaning of a word and effectively handle uncertainty and fuzziness. In this section, the novel PLTS-based text representation model is introduced in detail. In this model, every word is converted into a PLTS, so we call the model "Word2PLTS". The model consists of three steps: transformation, modification and calculation.

3.2.1 Transformation

In this step, a simple transformation between a word and a PLTS is established. We use the difference between the positive and negative scores of each synset from SWN as the sentiment valence of the synset, so that differences in sentiment intensity between words are preserved. The SynsetScore is calculated as:

$$\text{SynsetScore} = \text{PosScore} - \text{NegScore} \qquad (2)$$

There are one or more synsets for each word, so we can collect the scores into a one-dimensional array of SynsetScores (DAOS):

$$\text{DAOS} = [\text{SynsetScore}_1, \text{SynsetScore}_2, \ldots, \text{SynsetScore}_n] \qquad (3)$$

where $n$ is the number of synsets of the word. Next, a function $F$ is defined that maps each SynsetScore in $[-1,1]$ to $\alpha$, the subscript of a linguistic term $s_{\alpha}$:

$$\alpha = F(\text{SynsetScore}) \qquad (4)$$

In this way, each SynsetScore is transformed into a linguistic term $s_{\alpha}$, and the DAOS can be converted into a one-dimensional array of linguistic terms (DAOL). Now, we count the occurrence frequency of each linguistic term $s_{\alpha}$ in the DAOL and roughly calculate the probability of each term as its relative frequency:

$$p_{\alpha} = \frac{\#\{\,s_{\alpha}\in \text{DAOL}\,\}}{n} \qquad (5)$$

The sentiment polarity of the word can then be converted into a PLTS:

$$L(p) = \{\,s_{\alpha}(p_{\alpha})\,\} \qquad (6)$$
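The sketch below strings Eqs. (2)-(6) together; it is our own illustration, and because the paper does not spell out the mapping function $F$, it assumes that $F$ rounds SynsetScore multiplied by $\tau$ to the nearest integer subscript (both that choice and the value of TAU are assumptions):

```python
# Sketch of the Word2PLTS transformation (Eqs. 2-6).
from collections import Counter
from nltk.corpus import sentiwordnet as swn

TAU = 3  # assumed granularity of the linguistic term set

def word2plts(word, tau=TAU):
    scores = [s.pos_score() - s.neg_score() for s in swn.senti_synsets(word)]  # Eqs. (2)/(3)
    if not scores:
        return {}
    subscripts = [round(tau * x) for x in scores]   # Eq. (4), assumed rounding form of F
    counts = Counter(subscripts)
    n = len(subscripts)
    return {alpha: c / n for alpha, c in counts.items()}  # Eqs. (5)/(6)

print(word2plts("funny"))
```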

3.2.2 Modification

In the previous section, the procedures for extracting the sentiment information of a word from SWN and converting it into a PLTS were introduced. However, these procedures are still insufficient. First, the PosScore and NegScore extracted from SWN are not 100% accurate: the scores were calculated using a complex mix of semi-supervised algorithms (propagation methods and classifiers), which inevitably introduces errors. For instance, the sentiment polarity scores of "absurd" are shown in Table 4; the PLTS obtained from these scores implies that "absurd" conveys a positive intensity, yet "absurd" is a derogatory word.

Table 4. Sentiment information of "absurd" from SWN

Word     Synsets        PosScore    NegScore
absurd   absurd.n.01    0           0
         absurd.s.01    0.3750      0
         absurd.s.02    0.6250      0

Second, polysemy is a pervasive phenomenon in human language, but the probabilities with which each meaning appears in text are not equal. For example, there are ten meanings of "fine" in SWN; certain meanings are common, and others are rare. SWN provides only the sentiment polarity of each synset and does not include probability information. Last, the sentiment polarity of a word is sometimes determined by context: in different contexts, the sentiment polarity may be shifted or strengthened, and the same word may have opposite polarities in different domains. We should take context fully into account rather than only extracting sentiment information from SWN.

In consideration of these problems, which may lead to underfitting, it is necessary to modify the PLTS obtained from SWN. We used a training set to build a simple dictionary through word frequency distribution statistics. A novel concept named the context sentiment score (CSS) is introduced, calculated as:

$$\mathrm{CSS}(w) = \frac{f_{+}(w) - f_{-}(w)}{f_{+}(w) + f_{-}(w)} \qquad (7)$$

where $w$ is a word, $f_{+}(w)$ is the frequency of $w$ in text labelled positive, and $f_{-}(w)$ is the frequency of $w$ in text labelled negative, so that $\mathrm{CSS}(w)\in[-1,1]$. For example, $\mathrm{CSS}(\text{'bad'})=-1$ means that the word 'bad' only appears in text labelled negative. Then, by employing Eq. (4), we map $\mathrm{CSS}(w)$ from $[-1,1]$ to a linguistic term subscript and create a word frequency distribution dictionary. We name the proposed dictionary CSS-V. Samples from CSS-V are shown in Table 5.

Table 5. Samples from the proposed CSS-V vocabulary

Term         Linguistic term subscript    Term         Linguistic term subscript
chicken      -2                           usually      1
nothing      -3                           dull         -6
work         0                            delightful   6
unfaithful   -2                           wise         3
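The sketch below illustrates how a CSS-V style dictionary can be built from a labelled training set following Eq. (7); it is our own illustration, and the mapping from CSS to a term subscript is again assumed to be rounding (the value of TAU is an assumption):

```python
# Sketch of building a CSS-V dictionary (Eq. 7) from labelled training texts.
from collections import Counter

TAU = 8  # assumed granularity for CSS-V subscripts

def build_cssv(pos_texts, neg_texts, tau=TAU):
    f_pos = Counter(w for t in pos_texts for w in t.lower().split())
    f_neg = Counter(w for t in neg_texts for w in t.lower().split())
    cssv = {}
    for w in set(f_pos) | set(f_neg):
        css = (f_pos[w] - f_neg[w]) / (f_pos[w] + f_neg[w])  # Eq. (7)
        cssv[w] = round(tau * css)                            # assumed mapping to a subscript
    return cssv

print(build_cssv(["a delightful film"], ["a dull and absurd film"]))
```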

Finally, by taking advantage of CSS-V, the PLTS extracted from SWN is modified. The modification is defined as follows:

$$L'(p) = (1-\lambda)\,L(p) \oplus \lambda\,\{s_{\beta}(1)\} \qquad (8)$$

where $L(p)$ is the PLTS of the word extracted from SWN, $s_{\beta}$ is the linguistic term of the word whose subscript $\beta$ is queried from CSS-V, and $\lambda\in[0,1]$ can be considered a modifying factor. When $\lambda=0$, the modification is not conducted; when $\lambda$ is larger, the degree of modification is higher but overfitting occurs more easily. In our study, a fixed value of $\lambda$ was used to modify the PLTS from SWN.

3.2.3 Calculation

A sentence can be considered as a sequence of words, and this sequence determines the sentiment information of the sentence. Thus, by aggregating the sentiment information of its words, the sentiment information of the sentence becomes available.

Suppose there are $k$ words left in a sentence after preprocessing; these $k$ words can be converted into $k$ PLTSs. Because the operational laws of PLTSs are computationally demanding, neutral words are removed to reduce the computational complexity. The criterion for judging whether a word is neutral is defined in terms of the probabilities $p_{\alpha}$ of its linguistic terms (Eq. (9)); if a PLTS is neutral, it is removed.

After neutral word filtering, $k'$ PLTSs are left. The sentiment polarity of the sentence can then be calculated by employing the probabilistic linguistic averaging (PLA) operator:

$$L_{S}(p) = \delta\,\mathrm{PLA}\big(L_{1}(p), L_{2}(p), \ldots, L_{k'}(p)\big) \qquad (10)$$

where $L_{S}(p)$ is the PLTS of the sentence and $\delta$ is the negative coefficient: if the sentence is negated, $\delta=-1$; otherwise, $\delta=1$.

Usually, there is a large number of linguistic terms in $L_{S}(p)$ and little difference between adjacent terms. It is therefore necessary to decrease the computational complexity, reduce storage consumption and normalize the storage schema. We map $L_{S}(p)$ to a coarser linguistic term set and merge the terms that have identical subscripts.

In Word2PLTS, a text such as a sentence is thus represented as a PLTS $L_{S}(p)$ that describes the possibilities for its sentiment polarity. This processing makes full use of the sentiment information from the sentiment lexicon and the word frequency distribution information.

Jo

supervised learning algorithm, SVM, that is known to work best according to many researchers, and the radial basis function kernel is employed. In this classification model, linguistic terms corresponding probability

are regarded as features and feature values, respectively.

and the is then

converted into a format that is understandable for the SVM classifier. This format is as given below and in Fig. 4 :

13

Journal Pre-proof where is

and -1 for positive and negative, respectively, is the linguistic term

of the linguistic term.

pro of

is the probability

, and
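The sketch below shows how term subscripts can become feature indices and probabilities their values before training an RBF-kernel SVM with scikit-learn; it is our own illustration, and the granularity TAU is an assumed value rather than one taken from the paper:

```python
# Sketch of feeding sentence PLTSs to an SVM (RBF kernel) with scikit-learn.
import numpy as np
from sklearn.svm import SVC

TAU = 8  # assumed granularity; the feature vector covers subscripts -TAU..TAU

def plts_to_vector(plts, tau=TAU):
    vec = np.zeros(2 * tau + 1)
    for alpha, p in plts.items():
        vec[alpha + tau] = p  # shift subscripts so feature indices are non-negative
    return vec

X = np.array([plts_to_vector(p) for p in [{3: 0.7, -1: 0.3}, {-2: 0.9, 1: 0.1}]])
y = np.array([1, -1])         # +1 positive, -1 negative
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X))
```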

3.4 Example for SACPC

For easier understanding, this section provides a step-by-step example of how SACPC is used for text sentiment analysis.

Suppose that the SVM classifier has been trained on the training set, producing a model that can be applied for classification, and that the following movie review needs to be classified: "In its chicken heart, crush goes to absurd lengths to duck the very issues it raises." The detailed procedure is given as follows, and a visualization of the processing is provided in Appendix B.

(1) First, the preprocessing steps, including spelling correction, word segmentation and POS tagging, are performed.

('in', 'IN'), ('its', 'PRP$'), ('chicken', 'JJ'), ('heart', 'NN'), ('crush', 'NN'), ('goes', 'VBZ'), ('to', 'TO'), ('absurd', 'VB'), ('lengths', 'NNS'), ('to', 'TO'), ('duck', 'VB'), ('the', 'DT'), ('very', 'RB'), ('issues', 'NNS'), ('it', 'PRP'), ('raises', 'VBZ')

(2) Second, stop word removal is applied, and only words tagged as verbs, nouns, adjectives and adverbs are kept.

('chicken', 'JJ'), ('heart', 'NN'), ('crush', 'NN'), ('goes', 'VBZ'), ('absurd', 'VB'), ('lengths', 'NNS'), ('duck', 'VB'), ('issues', 'NNS'), ('raises', 'VBZ')

(3) Next, we extract the sentiment polarity scores of the synsets of each word and convert the scores into PLTSs. For clarity, we take "funny" as an example to describe the extraction and conversion process.

(3.1) First, we obtain the synsets of the word and the corresponding sentiment intensities from SWN. The synsets for "funny" are shown in Table 3.

Table 3. Sentiment information of "funny" from SWN

Word    Synsets            PosScore    NegScore
funny   funny_story.n.01   0           0
        amusing.s.02       0.5         0
        curious.s.01       0.125       0.375
        fishy.s.02         0           0.5
        funny.s.04         0           0.5

(3.2) Next, by employing Eqs. (2) and (3), we calculate the SynsetScore of each synset and obtain the DAOS: [0, 0.5, -0.25, -0.5, -0.5].

(3.3) In this step, using the mapping function $F$ of Eq. (4), each SynsetScore is transformed into a linguistic term subscript, and we obtain the DAOL.

(3.4) Finally, by counting the term frequencies as in Eqs. (5) and (6), the transformation between "funny" and its PLTS is achieved.

Similarly, the PLTSs of the words in the sentence (chicken, heart, crush, goes, absurd, lengths, issues and raises) are obtained.

(4) In this step, by referring to CSS-V, the PLTSs of the words are modified. For instance, for the PLTS of "funny" extracted from SWN, we query CSS-V to obtain its linguistic term subscript and then conduct the modification by employing Eq. (8). The PLTSs of the other words in the sentence are modified in the same way.

(5) To reduce the computational complexity, we eliminate the neutral words; only 4 words are left: chicken, heart, crush and absurd.

(6) At this step, by employing the PLA operator of Eq. (10), the sentiment polarity of the sentence, $L_{S}(p)$, is calculated; the negative coefficient is $\delta = 1$ because the sentence contains no negation words.

(7) Now, we map $L_{S}(p)$ to the reduced linguistic term set and transform it into a format that can be understood by the machine learning classifier: "33:0.0006 …34:0.0053 … 69:0.0015".

(8) We use the trained model to classify the review, which is then labelled positive or negative.

4. Datasets

In this study, we applied SACPC to short text corpora from three different domains.

Movie reviews (MR): This dataset was first used by Bo Pang and Lillian Lee14 [43]. There is a total of 10,662 snippets, each containing roughly one sentence. Half of the snippets are labelled positive and half negative. The snippets were collected from Rotten Tomatoes, a United States website that hosts film reviews; the researchers assumed that snippets from reviews marked 'fresh' are positive and those from reviews marked 'rotten' are negative. A total of 80% of the data were randomly selected as training data, and the remaining 20% were testing data. The average length of the snippets is 20 words.

Stanford Twitter Sentiment (STS): The Stanford Twitter Sentiment15 corpus [44] is our second labelled corpus. The training set consists of 1.6 million tweets that were automatically labelled positive/negative. To reduce training time, we randomly selected 100k tweets as training data. The test set contains 182 positive and 177 negative tweets with manual annotation. The average length of the tweets is 19 words.

TripAdvisor reviews (TR): TripAdvisor.com is the largest and most successful social networking and community site for tourism in the world. We crawled more than 10,000 text reviews from this website, involving 60 restaurants in New York and 1,945 customers. We split the reviews into sentence-level snippets and annotated them manually. A total of 900 positive and 900 negative sentence-level snippets are available16. We randomly selected 720 positive and 720 negative snippets as training data, and the remaining snippets were testing data. The average length of the reviews is 22 words.

14. http://www.cs.cornell.edu/people/pabo/movie-review-data/
15. http://cs.stanford.edu/people/alecmgo/trainingandtestdata.zip
16. https://tianchi.aliyun.com/dataset/dataDetail?dataId=5177 (the TripAdvisor data have been uploaded to Aliyun for download)

5. Results and discussion

As the most common evaluation indicators, accuracy (Acc), precision (Prec), recall (Rec) and F-measure (F-M) were used to comprehensively evaluate the classification model:

$$\mathrm{Acc} = \frac{TP + TN}{TP + TN + FP + FN} \qquad (11)$$

$$\mathrm{Prec} = \frac{TP}{TP + FP} \qquad (12)$$

$$\mathrm{Rec} = \frac{TP}{TP + FN} \qquad (13)$$

$$\mathrm{F\text{-}M} = \frac{2\cdot \mathrm{Prec}\cdot \mathrm{Rec}}{\mathrm{Prec} + \mathrm{Rec}} \qquad (14)$$

where TP, TN, FP and FN denote true positives, true negatives, false positives and false negatives, respectively. Accuracy is an important indicator for classification but is informative only when the classes are balanced. Precision measures how accurate the positive predictions are, and recall measures how well the model finds all the positives; however, precision and recall are mutually restrictive and each is one-sided. The F-measure, the harmonic mean of precision and recall, is a comprehensive evaluation index. For all four indicators, higher values indicate better classification.

First, we compared the performance of five well-established sentiment lexicons on the datasets: General Inquirer (GI), TextBlob (TB), VADER, SenticNet (SCN) and SentiWordNet (SWN). Detailed results for the comparison are shown in Table 6. VADER achieves the best performance among the lexicons, and TB is slightly inferior to VADER. There is little difference between SCN and SWN. GI tends to perform the worst, but the differences are not very large. The average precision, recall and F-measure of the five sentiment analysis tools/techniques are 58.50%, 69.39% and 63.36% on the movie reviews, 62.36%, 57.92% and 59.77% on Stanford Twitter Sentiment, and 63.99%, 78.38% and 70.41% on the TripAdvisor reviews, respectively. The recall is higher than the precision for the lexicons, which means that the lexicon-based methods are more prone to false positives than to false negatives. The highest F-M values for MR, STS and TR are 68.90%, 66.50% and 77.67%, respectively, which shows that the performance of lexicon-based approaches is far from satisfactory.

Table 6. Detailed results for several sentiment analysis tools/techniques

          MR                            STS                           TR
          Prec     Rec      F-M        Prec     Rec      F-M        Prec     Rec      F-M
GI        0.5542   0.6314   0.5902     0.5674   0.5969   0.5818     0.5892   0.6524   0.6192
TB        0.6052   0.7184   0.6569     0.6683   0.5623   0.6107     0.6896   0.8667   0.7680
VADER     0.6419   0.7551   0.6890     0.7275   0.6124   0.6650     0.6897   0.8888   0.7767
SCN       0.5563   0.6997   0.6198     0.5071   0.5902   0.5455     0.5992   0.7511   0.6666
SWN       0.5674   0.6648   0.6122     0.6475   0.5341   0.5854     0.6316   0.7600   0.6899

Fig. 5. Comparison of results averaged over the evaluation datasets

Second, we compared SACPC with certain machine learning approaches, namely NB, SVM and LR. We trained the machine learning classifiers on the "bag of words" (BOW) text representation model [45]. The detailed results are given in Table 7. Comparing Tables 6 and 7, the highest F-M values for MR, STS and TR rise to 76%, 72% and 87%, respectively, which shows that machine learning methods perform much better than lexicon-based methods. Then, taking the mean results of the machine learning methods as the baseline, the improvement achieved by SACPC was calculated, as given in Table 8. As a technique that combines unsupervised and supervised learning, SACPC achieves the highest performance improvement on STS. These results show that as the coverage of the training datasets increases, the accuracy and F-measure improve. The improvement in recall (6.5%) is smaller than that in precision (9.4%), which indicates that our model is more sensitive to false negatives. It is possible that in TripAdvisor reviews people usually focus on the quality of the food and hotels and tend to use similar words to convey emotion, whereas in movie reviews and tweets the contents expressed are more extensive and diffuse and many obscure words are used to express feelings. In that case, it is not easy to extract effective features for machine learning approaches, and the BOW model suffers because many words appear only in the test set. SACPC, however, can effectively address these problems by employing a lexicon and word frequency distribution information.

Table 7. Detailed results of the comparison between machine learning methods and SACPC

           MR                        STS                       TR
           Prec    Rec     F-M      Prec    Rec     F-M      Prec    Rec     F-M
Bow+NB     0.75    0.78    0.76     0.67    0.79    0.71     0.87    0.86    0.86
Bow+SVM    0.75    0.76    0.75     0.65    0.82    0.72     0.86    0.86    0.86
Bow+LR     0.76    0.77    0.76     0.68    0.79    0.72     0.87    0.88    0.87
SACPC      0.87    0.81    0.84     0.81    0.92    0.86     0.89    0.90    0.89

Table 8. SACPC improvement over the baseline

           Prec     Rec      F-M
MR         0.117    0.040    0.083
STS        0.143    0.120    0.143
TR         0.023    0.033    0.023
Average    0.094    0.065    0.083

Finally, many researchers have used the movie reviews and Stanford Twitter Sentiment corpora as benchmarks to evaluate sentiment classification methods. To verify the effectiveness and superiority of SACPC, we compared it with other techniques; detailed comparison results are presented in Table 9. The results are encouraging. For the movie reviews, SACPC outperforms the other state-of-the-art techniques. For Stanford Twitter Sentiment, SACPC performs slightly worse than certain deep learning approaches, possibly because movie reviews are usually written in a more standardized and orderly way than tweets. Tweets contain a variety of incomplete expressions, such as acronyms, slang, ill-formed words and emoji, that carry sentiment information. It is not easy for a lexicon to recognize such incomplete expressions and provide sentiment information, so only the word frequency distribution information is effective there.

Table 9. Comparison with state-of-the-art approaches for movie reviews and Stanford Twitter Sentiment

Movie reviews
Author/Year                  Method                                           Acc
Socher et al. (2011) [46]    Semi-Supervised Recursive Auto-encoders (RAE)    77.7%
Appel et al. (2016) [48]     Semantic Rules, Fuzzy Sets                       76.0%
Kim (2014) [50]              CNN                                              81.52%
Socher et al. (2013) [51]    MV-RNN                                           82.9%
Wang et al. (2017) [54]      KPCNN                                            83.25%
SACPC                        Word2PLTS+SVM                                    84.22%

Stanford Twitter Sentiment
Author/Year                     Method           Acc
Speriosu et al. (2011) [47]     LProp            84.7%
Socher et al. (2013) [49]       RNTN             85.4%
Zhang et al. (2015) [52]        CharCNN          77.01%
Dos Santos et al. (2014) [53]   CharSCNN         85.7%
Zhao et al. (2018) [55]         GloVe-DCNN       87.62%
SACPC                           Word2PLTS+SVM    85.22%

In general, SACPC is an approach that combines unsupervised and supervised techniques. First, we extract the sentiment information of words from an existing lexicon and a word frequency distribution dictionary, so that every word can be represented as a PLTS. Then, by aggregating the PLTSs of the words, the PLTS of a sentence is obtained. Last, an SVM is used to improve the classification performance. In the comparisons, SACPC displayed excellent and steady classification performance across different datasets. Compared with unsupervised classifiers, SACPC can mitigate the problems of domain dependence and missing contextual information by using word frequency distribution information and introducing the SVM. Compared with supervised classifiers, SACPC can extract the sentiment information of a word from an existing lexicon, information that is not easily obtained by relying only on data training; this efficiently alleviates the problems of data unavailability and data sparsity. More importantly, by introducing PLTSs and the relevant theory, we effectively address the problem of polysemy.

6. Conclusions and future study

In this study, we proposed a novel text representation model named Word2PLTS. Then, by using SVMs, a short text sentiment analysis framework named SACPC was presented. In this framework, we extract the sentiment information of words from SWN and a word frequency distribution dictionary and convert the information into PLTSs. By aggregating the PLTSs of the words, the sentiment information of a short text becomes available. Finally, an SVM is applied to improve classification performance. The evaluation of SACPC shows a noticeable performance improvement. In conclusion, the main advantages of our work are summarized as follows.

(1) The novel text representation model, Word2PLTS, introduces the idea of fuzzy mathematics into sentiment analysis. The model fully considers the fuzziness and uncertainty of human language, effectively addresses the problem of variation in lexical meaning and describes sentiment information for short texts.

(2) SACPC is a technique that combines supervised and unsupervised methods. Compared with unsupervised methods, it takes context into account and achieves a great improvement in performance. Compared with supervised methods, it can extract sentiment information from a lexicon, which is effective in alleviating the problems of data unavailability and data sparsity.

There are also some suggestions for future research and the development of the proposed method.

(1) The operational laws of PLTSs are overly complex, which is why SACPC is only applicable to short text sentiment analysis and not to long texts. Simplifying the arithmetic operation rules is necessary and significant.

(2) SACPC achieved great success in binary (positive/negative) classification. The method may also be suitable for fine-grained sentiment classification; the relevant research is meaningful and promising.

(3) Recently, some flexible linguistic approaches, such as the intuitionistic linguistic approach [56], the grey linguistic approach [57] and the interval linguistic approach [58], have been developed to deal with ambiguity in human language. Therefore, linking these linguistic approaches to sentiment analysis is interesting.

Acknowledgements

We are grateful to the editors and anonymous reviewers for their helpful comments and suggestions. This work was supported by the National Natural Science Foundation of China (Nos. 71571193 and 71871228).

Appendix A

Some basic concepts related to PLTSs are introduced below, including their properties and operators.

Definition 1. Let $S=\{s_{\alpha}\mid \alpha=-\tau,\ldots,-1,0,1,\ldots,\tau\}$ be an LTS. A PLTS is defined as $L(p)=\{\,s_{\alpha}(p_{\alpha})\mid s_{\alpha}\in S,\ p_{\alpha}\geq 0\,\}$, where $s_{\alpha}(p_{\alpha})$ denotes the linguistic term $s_{\alpha}$ associated with the corresponding probability $p_{\alpha}$, and $\#L(p)$ is the number of elements in $L(p)$.

Definition 2. Let $S$ be an LTS, $L(p)$, $L_{1}(p)$ and $L_{2}(p)$ be three PLTSs, $\alpha$ be the subscript of the linguistic term $s_{\alpha}$, and $\lambda$ be a positive real number. The equivalent transformation functions $g$ and $g^{-1}$ between linguistic terms and values in $[0,1]$ are

$$g(s_{\alpha})=\frac{\alpha}{2\tau}+\frac{1}{2}, \qquad g^{-1}(\gamma)=s_{(2\gamma-1)\tau}. \qquad (15)$$

Following Gou et al. [32], the sum $L_{1}(p)\oplus L_{2}(p)$, the scalar multiple $\lambda L(p)$, the product $L_{1}(p)\otimes L_{2}(p)$ and the power $L(p)^{\lambda}$ are defined through $g$ and $g^{-1}$ (Eqs. (16)-(19)), so that the results always remain within the bounds of the given LTS. In our study, the multiplication of a negative number and a PLTS is additionally defined as

$$(-1)\,L(p)=\{\,s_{-\alpha}(p_{\alpha})\mid s_{\alpha}(p_{\alpha})\in L(p)\,\}. \qquad (20)$$

Definition 3. Let $L_{i}(p)\ (i=1,2,\ldots,n)$ be $n$ PLTSs, where $s_{\alpha}^{(i)}$ and $p_{\alpha}^{(i)}$ are the linguistic terms and the corresponding probabilities in $L_{i}(p)$. The probabilistic linguistic averaging (PLA) operator is

$$\mathrm{PLA}\big(L_{1}(p),L_{2}(p),\ldots,L_{n}(p)\big)=\frac{1}{n}\big(L_{1}(p)\oplus L_{2}(p)\oplus\cdots\oplus L_{n}(p)\big). \qquad (21)$$

Definition 4. Let $L_{i}(p)\ (i=1,2,\ldots,n)$ be $n$ PLTSs and $w=(w_{1},w_{2},\ldots,w_{n})^{T}$ be the weight vector of $L_{i}(p)$, where $w_{i}\geq 0$ and $\sum_{i=1}^{n}w_{i}=1$. The probabilistic linguistic weighted averaging (PLWA) operator is

$$\mathrm{PLWA}\big(L_{1}(p),L_{2}(p),\ldots,L_{n}(p)\big)=w_{1}L_{1}(p)\oplus w_{2}L_{2}(p)\oplus\cdots\oplus w_{n}L_{n}(p). \qquad (22)$$

When $w=(1/n,1/n,\ldots,1/n)^{T}$, the PLWA operator reduces to the PLA operator.

Appendix B

[Figure: visualization of the step-by-step processing of the example in Section 3.4.]

References

[1] C.E. Osgood, G.J. Suci, P.H. Tannenbaum, The measurement of meaning, University of Illinois Press, 1957.
[2] J.A. Morente-Molinera, G. Kou, C. Pang, F.J. Cabrerizo, E. Herrera-Viedma, An automatic procedure to create fuzzy ontologies from users' opinions using sentiment analysis procedures and multi-granular fuzzy linguistic modelling methods, Information Sciences, 476 (2019) 222-238.
[3] P. Ji, H.-Y. Zhang, J.-Q. Wang, A fuzzy decision support model with sentiment analysis for items comparison in e-commerce: The case study of PConline.com, IEEE Transactions on Systems, Man, and Cybernetics: Systems, 49(10) (2019) 1993-2004.
[4] L. Wang, X. Wang, J. Peng, J. Wang, The differences in hotel selection among various types of travellers: A comparative analysis with a useful bounded rationality behavioural decision support model, Tourism Management, 76 (2020) 103961.
[5] N. Godbole, M. Srinivasaiah, S. Skiena, Large-scale sentiment analysis for news and blogs, International Conference on Weblogs and Social Media, (2007).
[6] J. Li, S. Fong, Y. Zhuang, R. Khoury, Hierarchical classification in text mining for sentiment analysis of online news, Soft Computing - A Fusion of Foundations, Methodologies and Applications, 20 (2016) 3411-3420.
[7] K. Ioannis, N. Azadeh, S. Matthew, S. Abeed, A. Sophia, G.H. Gonzalez, Analysis of the effect of sentiment analysis on extracting adverse drug reactions from tweets and forum posts, Journal of Biomedical Informatics, 62 (2016) 148-158.
[8] P. Grandin, J.M. Adan, Piegas: A system for sentiment analysis of tweets in Portuguese, IEEE Latin America Transactions, 14 (2016) 3467-3473.
[9] J.A. Morente-Molinera, G. Kou, K. Samuylov, R. Ureña, E. Herrera-Viedma, Carrying out consensual Group Decision Making processes under social networks using sentiment analysis over comparative expressions, Knowledge-Based Systems, 165 (2019) 335-345.
[10] F. Ali, D. Kwak, P. Khan, S. El-Sappagh, A. Ali, S. Ullah, K.H. Kim, K.-S. Kwak, Transportation sentiment analysis using word embedding and ontology-based topic modeling, Knowledge-Based Systems, 174 (2019) 27-42.
[11] M. Schuckert, X. Liu, R. Law, Hospitality and tourism online reviews: Recent trends and future directions, Journal of Travel & Tourism Marketing, 32 (2015) 608-621.
[12] S.O. Proksch, W. Lowe, J. Wäckerle, S. Soroka, Multilingual sentiment analysis: A new approach to measuring conflict in legislative speeches, Legislative Studies Quarterly, 44 (2019) 97-131.
[13] Y. Dong, Y. Wu, H. Zhang, G. Zhang, Multi-granular unbalanced linguistic distribution assessments with interval symbolic proportions, Knowledge-Based Systems, 82 (2015) 139-151.
[14] Z. Wu, J. Xu, Possibility distribution-based approach for MAGDM with hesitant fuzzy linguistic information, IEEE Transactions on Cybernetics, 46 (2016) 694-705.
[15] H. Liu, A fuzzy envelope for hesitant fuzzy linguistic term set and its application to multicriteria decision making, Information Sciences, 258 (2014) 220-238.
[16] J.B. Yang, Rule and utility based evidential reasoning approach for multiattribute decision analysis under uncertainties, European Journal of Operational Research, 131 (2001) 31-61.
[17] P.J. Stone, D.C. Dunphy, M.S. Smith, D.M. Ogilvie, General Inquirer, (1966).
[18] P.D. Turney, M.L. Littman, Measuring praise and criticism: Inference of semantic orientation from association, ACM Transactions on Information Systems, 21 (2003) 315-346.
[19] L. Subirats, N. Reguera, A. Bañón, B. Gómez-Zúñiga, J. Minguillón, M. Armayones, Mining Facebook data of people with rare diseases: A content-based and temporal analysis, International Journal of Environmental Research and Public Health, 15 (2018) 1877.
[20] C.J. Hutto, E. Gilbert, VADER: A parsimonious rule-based model for sentiment analysis of social media text, Eighth International AAAI Conference on Weblogs and Social Media, (2014).
[21] A. Esuli, F. Sebastiani, SentiWordNet: A publicly available lexical resource for opinion mining, (2006) 417-422.
[22] S. Baccianella, A. Esuli, F. Sebastiani, SentiWordNet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining, International Conference on Language Resources and Evaluation, LREC 2010, 17-23 May 2010, Valletta, Malta, (2010) 83-90.
[23] E. Cambria, R. Speer, C. Havasi, A. Hussain, SenticNet: A publicly available semantic resource for opinion mining, AAAI CSK, (2010).
[24] F.H. Khan, U. Qamar, S. Bashir, SentiMI: Introducing point-wise mutual information with SentiWordNet to improve sentiment polarity detection, Applied Soft Computing, 39 (2016) 140-153.
[25] F.H. Khan, U. Qamar, S. Bashir, eSAP: A decision support framework for enhanced sentiment analysis and polarity classification, Information Sciences, 367-368 (2016) 862-873.
[26] X. Xie, S. Ge, F. Hu, M. Xie, N. Jiang, An improved algorithm for sentiment analysis based on maximum entropy, Soft Computing, 23 (2019) 599-611.
[27] H. Turtle, W.B. Croft, Inference networks for document retrieval, (1989) 1-24.
[28] S.H. Walker, D.B. Duncan, Estimation of the probability of an event as a function of several independent variables, Biometrika, 54 (1967) 167-179.
[29] V. Vapnik, SVM method of estimating density, conditional probability, and conditional density, IEEE International Symposium on Circuits and Systems, Proceedings, ISCAS, (2000) 749-752.
[30] B. Pang, L. Lee, Thumbs up? Sentiment classification using machine learning techniques, Empirical Methods in Natural Language Processing, (2002) 79-86.
[31] Q. Pang, H. Wang, Z. Xu, Probabilistic linguistic term sets in multi-attribute group decision making, Information Sciences, 369 (2016) 128-143.
[32] X. Gou, Z. Xu, Novel basic operational laws for linguistic terms, hesitant fuzzy linguistic term sets and probabilistic linguistic term sets, Information Sciences, 372 (2016) 407-427.
[33] X. Gou, H. Liao, Z. Xu, F. Herrera, Double hierarchy hesitant fuzzy linguistic term set and MULTIMOORA method: A case of study to evaluate the implementation status of haze controlling measures, Information Fusion, 38 (2017) 22-34.
[34] Y. Zhang, Z. Xu, H. Wang, H. Liao, Consistency-based risk assessment with probabilistic linguistic preference relation, Applied Soft Computing, 49 (2016) 817-833.
[35] H.G. Peng, H.Y. Zhang, J.Q. Wang, Cloud decision support model for selecting hotels on TripAdvisor.com with probabilistic linguistic information, International Journal of Hospitality Management, 68 (2018) 124-138.
[36] H. Liao, L. Jiang, Z. Xu, J. Xu, F. Herrera, A linear programming method for multiple criteria decision making with probabilistic linguistic information, Information Sciences, 415 (2017).
[37] Z.-P. Tian, R.-X. Nie, J.-Q. Wang, Probabilistic linguistic multi-criteria decision-making based on evidential reasoning and combined ranking methods considering decision-makers' psychological preferences, Journal of the Operational Research Society, (2019) 1-18.
[38] S.-z. Luo, H.-y. Zhang, J.-q. Wang, L. Li, Group decision-making approach for evaluating the sustainability of constructed wetlands with probabilistic linguistic preference relations, Journal of the Operational Research Society, 70 (2019) 2039-2055.
[39] R. Krishankumar, K. Ravichandran, M. Ahmed, S. Kar, S. Tyagi, Probabilistic linguistic preference relation-based decision framework for multi-attribute group decision making, Symmetry, 11 (2019) 2.
[40] X. Wu, H. Liao, A consensus-based probabilistic linguistic gained and lost dominance score method, European Journal of Operational Research, 272 (2019) 1017-1027.
[41] X.-B. Mao, M. Wu, J.-Y. Dong, S.-P. Wan, Z. Jin, A new method for probabilistic linguistic multi-attribute group decision making: Application to the selection of financial technologies, Applied Soft Computing, 77 (2019) 155-175.
[42] R.-x. Nie, J.-q. Wang, Prospect theory-based consistency recovery strategies with multiplicative probabilistic linguistic preference relations in managing group decision making, Arabian Journal for Science and Engineering, (2019) 1-18.
[43] B. Pang, L. Lee, Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales, Proceedings of the 43rd Annual Meeting of the ACL, (2005) 115-124.
[44] A. Go, R. Bhayani, L. Huang, Twitter sentiment classification using distant supervision, CS224N Project Report, Stanford, 1 (2009).
[45] D. Larlus, J. Verbeek, F. Jurie, Category level object segmentation by combining bag-of-words models with Dirichlet processes and random fields, International Journal of Computer Vision, 88 (2010) 238-253.
[46] R. Socher, J. Pennington, E.H. Huang, A.Y. Ng, C.D. Manning, Semi-supervised recursive autoencoders for predicting sentiment distributions, Conference on Empirical Methods in Natural Language Processing, EMNLP 2011, 27-31 July 2011, Edinburgh, UK, (2011) 151-161.
[47] M. Speriosu, N. Sudan, S. Upadhyay, J. Baldridge, Twitter polarity classification with label propagation over lexical links and the follower graph, Proceedings of the First Workshop on Unsupervised Learning in NLP, Association for Computational Linguistics, (2011) 53-63.
[48] O. Appel, F. Chiclana, J. Carter, H. Fujita, A hybrid approach to sentiment analysis, Evolutionary Computation, (2016) 242-254.
[49] R. Socher, A. Perelygin, J. Wu, J. Chuang, C.D. Manning, A. Ng, C. Potts, Recursive deep models for semantic compositionality over a sentiment treebank, Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, (2013) 1631-1642.
[50] Y. Kim, Convolutional neural networks for sentence classification, arXiv preprint arXiv:1408.5882, (2014).
[51] R. Socher, J. Bauer, C.D. Manning, Parsing with compositional vector grammars, Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), (2013) 455-465.
[52] X. Zhang, J. Zhao, Y. LeCun, Character-level convolutional networks for text classification, Advances in Neural Information Processing Systems, (2015) 649-657.
[53] C. Dos Santos, M. Gatti, Deep convolutional neural networks for sentiment analysis of short texts, Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, (2014) 69-78.
[54] J. Wang, Z. Wang, D. Zhang, J. Yan, Combining knowledge with deep convolutional neural networks for short text classification, IJCAI, (2017) 2915-2921.
[55] J. Zhao, X. Gui, Deep convolution neural networks for Twitter sentiment analysis, IEEE Access, PP (2018) 1-1.
[56] J.Q. Wang, Z.Q. Han, H.Y. Zhang, Multi-criteria group decision-making method based on intuitionistic interval fuzzy information, Group Decision & Negotiation, 23 (2014) 715-733.
[57] Z.P. Tian, J. Wang, J.Q. Wang, X.H. Chen, Multicriteria decision-making approach based on gray linguistic weighted Bonferroni mean operator, International Transactions in Operational Research, (2015) 1635-1658.
[58] J.Q. Wang, J.J. Peng, H.Y. Zhang, T. Liu, X.H. Chen, An uncertain linguistic multi-criteria group decision-making method based on a cloud model, Group Decision & Negotiation, 24 (2015) 171-192.

Conflict of interest statement

Declaration of interests: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Chao Song, Xiao-Kang Wang, Peng-fei Cheng, Jian-qiang Wang, Lin Li