Accepted Manuscript
IFS-IBA similarity measure in machine learning algorithms
Pavle Milošević, Bratislav Petrović, Veljko Jeremić
PII: S0957-4174(17)30525-0
DOI: 10.1016/j.eswa.2017.07.048
Reference: ESWA 11464
To appear in: Expert Systems With Applications
Received date: 29 November 2016
Revised date: 17 June 2017
Accepted date: 28 July 2017
Please cite this article as: Pavle Milošević, Bratislav Petrović, Veljko Jeremić, IFS-IBA similarity measure in machine learning algorithms, Expert Systems With Applications (2017), doi: 10.1016/j.eswa.2017.07.048
This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.
Highlights
- A novel similarity measure of intuitionistic fuzzy sets (IFS) is proposed.
- The measure is based on the equivalence relation in the IFS-IBA approach.
- The proposed measure is flexible and easy to interpret.
- Benefits of the measure are shown on pattern recognition and classification problems.
- IFS-IBA similarity is applied for clustering Serbian medium-sized companies.
Point-to-point answers to Reviewer #1.
---------------------------------------------
Reviewer #1: In this manuscript, a new IFS similarity measure is proposed. The introduced measure makes use of Interpolative Boolean Algebra (IBA). The technical presentation and the mathematical justification of the proposed measure are adequate. After a thorough study of the paper and the main references included by the authors, the following
statements/comments can be considered:
1) The main weakness of this work is the application of the proposed measure to only one dataset (clustering problem). More datasets should be used in order to highlight the efficiency of the proposed measure.
AN US
ANSWER: Thank you for this remark, which enabled us to look beyond solely hierarchical clustering as an outlook for our method. We have significantly strengthened this part of the paper by applying the proposed measure to 5 common pattern recognition problems with IF values. For detailed information, please see Section 5 of the manuscript.
Further, we have utilized the IFS-IBA distance measure as part of a k-NN classifier in order to compare it with conventional IF distances in terms of classification accuracy. Classifiers have been applied
on 4 standard datasets taken from the UCI-Machine Learning Repository (Iris, Wine, PIMA and
BUPA datasets). For detailed information, please see Section 6 of the manuscript.
2) The comparison of the proposed measure with other measures from the literature is more qualitative than quantitative. The authors are asked to give more quantitative and concrete comparative results, which justify the utility of the IBA-IFS measure.
ANSWER: In this version of the manuscript, we have added IF pattern recognition and classification problems and therefore the quantitative comparison of the IFS-IBA measure with the measures from the literature is available. First, we have applied the proposed measure to 5 common pattern recognition problems with IF values. The patterns are successfully recognized only by using two of many IF measures. Page 11: ”These test examples were used in (Chen and Chang, 2015) and (Nguyen, 2016), showing that only the measures proposed in these papers overcome the drawbacks of existing similarity measures.”
It is shown that special cases of the generic IFS-IBA similarity successfully deal with all of the presented pattern recognition examples, which represents the added value of our paper. The main results are presented in Table 1. Page 12: ”As shown in Table 1, the IFS-IBA similarity measure with min as GP classifies all samples except the one in E4.
... For example, the IFS-IBA similarity measure with the product as GP clearly classifies the sample in Example 4. Similarities of sample B with patterns A1, A2 and A3 are 0.023, 0.013 and 0.017, respectively. Therefore, B is classified in the first group. Based on that, we can state that special cases of the IFS-IBA similarity successfully deal with all of the presented pattern recognition examples.”
Further, it is shown that the IFS-IBA measure within the k-NN algorithm outperforms standard IF measures on 4 classification problems in terms of classification accuracy. The results are presented in Table 2.
Pages 14-15: ”In a nutshell, the classification accuracy obtained using the k-NN with IFS-IBA distance is distinctly higher on two of four datasets (PIMA and BUPA) compared to a k-NN with other distance functions. On Iris and Wine datasets, the highest classification
accuracies obtained with a classical IF distance and the one obtained with the IFS-IBA distance are in the same rank. Nevertheless, the IFS-IBA classifier achieves slightly better
results on both problems.”
3) The reviewer suggests applying the proposed measure to other problems such as pattern classification, decision support systems, etc. Since any similarity/distance measure is generic, it can be applied to any application where the comparison of some quantities is needed. For example, some
results in classifying patterns could be very constructive in concluding the efficiency of the proposed measure.
ANSWER: Thank you for this remark. We have applied the proposed measure to 5 pattern classification problems, as previously elaborated. Further, we aim to apply this measure as a basis for a stock trading decision support system, which will be the subject of future work. Page 20: ”Furthermore, the IFS-IBA similarity measure may be utilized as a basis of an IF recommender system for stock trading. Due to the manner of stock price representation (as 2-tuple or 4-tuple) and dependencies between samples, it seems that the proposed measure is suitable for discovering patterns in stock price movement.”
The reviewer found this work very interesting. However, the above comments should be addressed in order to improve the manuscript and make it appropriate for publication in this valuable journal. ANSWER: We very much appreciate your comments and suggestions, which greatly reshaped the
outlook of our work and results obtained.
Point-to-point answers to Reviewer #2.
---------------------------------------------
Reviewer #2: The authors in this paper introduce a similarity measure of intuitionistic fuzzy sets (IFSs). The topic is interesting. Even though the article is interesting in its current format, some aspects should be improved for possible publication and for a better understanding by the readers.
1) The authors should give the readers some concrete information to get them excited about their work. The current abstract only describes the general purposes of the article. It should also include the article's main (1) impact and (2) significance on expert and intelligent systems.
ANSWER: Thank you for this remark. We have rewritten the abstract in order to emphasize the benefits of the manuscript. Also, we have altered the title of the paper to better depict the
changes introduced into the revised version of the paper.
2) Please give a frank account of the strengths and weaknesses of the proposed research method. This should include theoretical comparison to other approaches in the field.
ANSWER: In this version of the manuscript, strengths and weaknesses of the IFS-IBA similarity measure are discussed from several points of view: background, generality, simplicity, graphical
interpretation, inclusion of the uncertainty, and the possibility of generating counter-intuitive results. Since the literature review of IF distances/similarities is also written with respect to these
viewpoints, we believe that the theoretical comparison and the meaning of IFS-IBA similarity will be now more comprehensible to the readers.
Page 9: ”IFS-IBA similarity measure may be seen as generic since various similarity measures can be easily derived, i.e. it has different realizations depending on the generalized product. Thus, this measure could describe/model different dependencies in the data.”
Page 10: ” Perceiving similarity of IFSs in this manner is typical of measuring similarity in the IBA framework and it is in accordance with fuzzy similarity modeling presented in (Poledica et al. 2015). In this case, modeling similarity of two IFSs is very straightforward and easy to understand. Furthermore, this measure has a clear-cut meaning and unambiguous graphical interpretation. The similarity between IFSs A and B, presented on the left-hand side of Fig. 1, is equal to the sum of the gray surfaces on the right-hand side of Fig. 1. The gray surfaces represent the minimal level of membership and non-membership for observed sets at each point.”
Page 10: ”Furthermore, the IFS-IBA similarity measure does not include uncertainty in an explicit manner. However, uncertainty is implicitly involved in similarity modeling through the selection of GP.” Page 10: ”The proposed measure with min as GP may be found rigorous since it includes only a sum of minimal levels of membership and non-membership. Therefore, it generates some counter-intuitive examples in the sense of (Li et al. 2007), which may be considered as the main limitation of the study. On the other hand, IFS-IBA similarity gives greater importance to IFSs that are more distinct, i.e. have a small level of uncertainty. Consequently, IFS A = ⟨0, 0⟩ has a maximal level of uncertainty (π_A = 1) and is not similar to any IFS except to itself, since the user does not have any information about it. Hence, this measure compares IFS from a different viewpoint than the standard ones, emphasizing comprehension of information.”
Page 11: ”In addition to the conventional aggregation operators, logic-based aggregations may be used in this approach due to the IBA-based background of the IFS-IBA similarity/distance measure.”
3) In the related work section, a more rigorous investigation on the existing methods, such as comparison of previous approaches in terms of pros and cons, should be given. A summary table can
be used in this regard.
ANSWER: Thank you for this remark. We have amended the related work section to underline the
main characteristics of IF measures: the number of parameters taken into account, the possibility of generating counter-intuitive results, applicability, background, graphical interpretation and
intelligibility. In this manner, theoretical comparison is facilitated. Page 6: ” IF distance and similarity measures are often discussed from the perspective of the
number of parameters taken into account (Szmidt, 2014), the possibility of generating counter-intuitive results (Li et al. 2007, Papakostas et al. 2013), applicability and background (Papakostas et al. 2013), graphical interpretation and intelligibility, etc.” Pages 6-7: ”The first IF similarity measures, e.g. (Chen, 1997, Hong and Kim, 1999), utilize a two-term intuitionistic fuzzy set representation, i.e. similarity is computed by comparing only membership values and non-membership values. In measuring the IF distance using a three-term intuitionistic fuzzy set representation, the level of uncertainty is also explicitly taken into account (Szmidt and Kacprzyk, 2000). From the practical perspective a three-term approach seems to be more justified, although both types are correct from the mathematical point of view (Szmidt, 2014). Even though some authors (Yang and Chiclana, 2012) state that the incorporation of the uncertainty part is mandatory, most novel IF measures usually include only membership and non-membership values in an explicit manner. In (Li et al. 2007), the authors aim to analyze and summarize prominent IF similarities by providing their counter-intuitive examples regarding pattern recognition. Although geometric-based IF similarities/distances are the most widely used ones, they may obtain unreasonable results in some special cases (Liang and Shi, 2003, Julian et al. 2012), so they are unsuitable for some problems. Furthermore, it is shown that some measures share the
same counter-intuitive cases and that they are identical or very similar in nature. Since the problem of obtaining unreasonable results in comparing IFSs is considered as very
important, the conditions for a stronger definition of similarity measures for IFSs are introduced in (Intarapaiboon, 2016).
On the other hand, most novel IF measures (e.g. Hwang et al. 2012, Farhadinia, 2014,
Intarapaiboon, 2016) are complex and often do not have a clear geometrical interpretation
unlike geometrically inspired IF distances. Understanding and selecting appropriate
measures have a significant effect on the results, especially in the case of multi-attribute comparison when an IF similarity measure is used along with some aggregation operator. Therefore, a user often needs to compromise between accuracy and simplicity. As previously noted, most IF distance measures are extensions of traditional distance
functions for comparing IFS. However, there are some IF distances with a logic-based background. These measures are derived using D-implication and tensor-or operator norm
(Hatzimichailidis et al. 2016). The main advantage of these measures is their flexibility and
applicability (Papakostas et al. 2013, Hatzimichailidis et al. 2016).”
4) I would like authors to increase the number of datasets.
ANSWER: In this version of the manuscript, we have applied the proposed measure to 5 common pattern recognition problems with IF values. For detailed information, please see Section 5 of the
manuscript.
Further, we have utilized the IFS-IBA distance measure as part of a k-NN classifier in order to compare it with conventional IF distances in terms of classification accuracy. Classifiers have been applied on 4 standard datasets taken from the UCI Machine Learning Repository (Iris, Wine, PIMA and BUPA datasets). For detailed information, please see Section 6 of the manuscript.
5) Moreover, I believe that it will make this paper stronger if the authors present insightful implications in at least one paragraph based on their experimental outcomes.
ANSWER: Thank you for this remark. We have complemented the clustering results section (Section 7.2) with the following comments: Page 19: ”In general, the clustering results obtained using the IFS-IBA measure intuitively make the most sense because the clusters are consistent without any exception. Companies C8, C13 and C19 are not in appropriate clusters when clustering is performed using IF Hamming and Euclidean distance functions. Company C8 is not in the correct cluster due to a negative value of attribute 3 (Earnings before Interest and Taxes / Total Assets). This
variable shows that C8 was not profitable in the previous accounting period. On the other hand, the values of other attributes support the fact that C8 is not financially endangered in a long term perspective, which is only recognized by the IFS-IBA measure. Company C19 has the largest values of attribute 1 (Working Capital / Total Assets) among all the bankrupt companies, which probably affected the clustering results. Company C13 was not easy to
cluster due to a high value of attribute 4 (Market Value of Equity / Book Value of Total Liabilities) compared to the other companies, which is not a characteristic of a bankrupt
company. In addition to this, active companies C6 and C9 are mutually very similar and always form a separate cluster. These companies have the largest values of attribute 5 (Sales / Total Assets) in the whole dataset, and a rather small value of attribute 1 (Working Capital / Total Assets). Hence, we may assume that C6 and C9 are retailers who should be treated
differently compared to the other companies in the dataset.”
6) What are future endeavors of your study? The authors need to state and discuss several (say 4-5)
useful and insightful future research directions. ANSWER: The main directions and ideas for future work are listed and explained in Section 8.
Page 20: ” When dealing with multi-attribute object comparison, a simple average is used to
aggregate IFS-IBA distances between corresponding attributes in this paper. This aggregation is too simple and it cannot capture the importance and dependencies of certain attributes. Thus, combining the IFS-IBA similarity measure with an expert-given logic-based aggregation function will be the subject of future work. Also, we shall try to analyze the influence of different t-norms utilized as the generalized product on classification/clustering results, and to "learn" GP from the input data afterwards. Furthermore, the IFS-IBA similarity measure may be utilized as a basis of an IF recommender system for stock trading. Due to the manner of stock price representation (as 2-tuple or 4-tuple) and dependencies
between samples, it seems that the proposed measure is suitable for discovering patterns in stock price movement.”
7) Finally, the language and grammar also require some work, and I noted a number of typographical errors. The paper needs a linguistic check, preferably by a native speaker. ANSWER: We fully agree with this observation and have revised the manuscript accordingly. We hope that the language
and the presentation are improved.
If the paper is resubmitted as a significantly reworked piece of work, offering a proper view with clear Point-to-Point responses on what is the novelty and significantly improving the evaluation, then I can imagine a more positive second evaluation.
ANSWER: Authors express gratitude for valuable comments which enhanced the quality of the
paper, paving the way for possible publication and better understanding by the readers.
IFS-IBA similarity measure in machine learning algorithms

Pavle Milošević1 (corresponding author), [email protected], +381113950852
Bratislav Petrović1, [email protected]
Veljko Jeremić1, [email protected]

1 Faculty of Organizational Sciences, University of Belgrade, Jove Ilića 154, Belgrade, 11000, Serbia
Abstract: The purpose of this paper is to introduce a novel similarity measure of intuitionistic fuzzy sets (IFSs). The proposed measure is based on the equivalence relation in the IFS-IBA approach. Due to its logic-based background, this measure compares IFSs from a different viewpoint than the standard measures, emphasizing comprehension of intuitionism. The IFS-IBA similarity measure has a solid mathematical background and can be combined with various IF aggregation operators. Additionally, we define the IFS-IBA distance function as a complement of IFS-IBA similarity. Both IFS-IBA similarity and distance functions may have different realizations that are easy to interpret. Hence, the measures offer great descriptive power and the ability to model various problems. The
benefits of the proposed measure are illustrated on the problems of pattern recognition and classification within the k-NN algorithm. Finally, we show that the proposed measure is appropriate for IF hierarchical clustering on the problem of clustering Serbian medium-sized companies according to
their financial ratios. Results obtained using the IFS-IBA measure are clear-cut and more meaningful
compared to standard IF distances regardless of the I-fuzzification method used.

Keywords: IFS-IBA approach, intuitionistic fuzzy sets, interpolative Boolean algebra, similarity
measure, classification, clustering
1. Introduction
Assessing similarity between objects is a crucial step for many applications such as clustering, pattern recognition, case-based reasoning, decision making, etc. Typically, measuring the similarity between two attributes involves comparison of crisp values. However, there are cases when crisp values are not sufficiently informative to describe the attribute properly. Therefore, different data representation models such as intuitionistic fuzzy sets (IFSs) (Atanassov, 1986, Atanassov, 2012) are introduced.
IFSs are the generalization of traditional fuzzy sets (Zadeh, 1965) in the sense that IFS takes into account both membership and non-membership degree of an element to a particular set. Essentially, the similarities/distances between IFSs are the extensions of conventional similarity/distance functions. In the case of IFS, the notion of classical similarity is adjusted by introducing a degree of similarity between IFSs (Dengfeng and Chuntian, 2002). Although they may produce unreasonable results in some special cases (Liang and Shi, 2003), the most common distance functions between IFSs are based on geometric models: the Hamming distance, the normalized
Hamming distance, the Euclidean distance, and the normalized Euclidean distance (Szmidt and Kacprzyk, 2000). Nevertheless, vast research on various IF similarities is currently in progress
(Szmidt, 2014). IF similarity measures based on the Hausdorff distance were proposed in (Grzegorzewski, 2004, Chen, 2007). Further, Chen and Randyanto (Chen and Randyanto, 2013) used medians of
intervals and the ratio of uncertainty degrees to enhance these measures to overcome some of the
drawbacks regarding unreasonable results. Sugeno integral was used to define a similarity of IFSs and this measure was illustrated on the pattern recognition problem (Hwang et al. 2012). The IF similarity measure based on the convex combination of the endpoints of the interval which restricts the IFS was proposed in (Farhadinia, 2014). This measure does not produce counter-intuitive results and satisfies all required properties. In (Song et al. 2015), the authors introduced a new IF similarity measure with
high capacity for discriminating IFSs. A model for similarity measuring between IFS with a clear-cut interpretation based on the activation detection in medical image analyses was proposed in (Ngan, 2016). The model was illustrated on several examples emphasizing the robustness of the proposed
measure. In (Nguyen, 2016), the author introduced a novel similarity measure based on the concept of a knowledge measure of information conveyed by the IFS. It is shown that the proposed measure
overcomes the drawbacks of existing measures on the problem of pattern recognition. An extensive review of IF distance and similarity measures may be found in (Papakostas et al. 2013).
The main benefit of applying IF values and IF similarity/distance measures to a particular problem is the ability to process not only the value of the object’s membership but also the values of non-membership and hesitation (Szmidt and Kacprzyk, 2000). As a result, more valuable information is included in assessing the similarity between the two objects, which is especially important for solving machine learning problems. The pattern recognition problems with intuitionistic fuzzy information are used as a common benchmark for IF similarity measures (Chen and Chang, 2015, Nguyen, 2016). In (Papakostas et al. 2013), the performance of the analyzed IF measures was compared by using the face recognition problem under several experimental configurations. Also, IF distance functions are applied within various clustering algorithms (Zhang et al. 2007, Chen et al. 2007, Zeshui, 2009, Cuong et al. 2012, Xu, 2013, Wang et al. 2014, Huang et al. 2015).
This paper aims to introduce a novel similarity measure between IFSs based on the IFS-IBA approach (Milošević et al. 2015). The IFS-IBA approach generalizes the conventional IF operations using interpolative Boolean algebra (Radojević, 2000) in such a way that preserves the idea of intuitionism. The equivalence relation in this approach serves as the basis for the proposed IFS-IBA similarity measure. More precisely, the measure is derived as the membership part of the IFS-IBA equivalence. The proposed measure has a solid mathematical background and is easy to interpret. In combination with various traditional and logic-based aggregation functions, it may serve as a foundation of the
framework for measuring the similarity between multi-attribute objects with IF values. The benefits and the limitations of the IFS-IBA similarity measure are presented on the pattern recognition problem. Further, it is used within the k-NN classification algorithm with promising results. Finally, the
proposed measure is used for IF hierarchical clustering of Serbian medium-sized companies according to their financial ratios.
This paper is structured as follows. Section 2 presents a brief overview of intuitionistic fuzzy sets along with methods of I-fuzzification and IF similarity/distance measures. Section 3 provides a basic notion and benefits of the IFS-IBA approach. The similarity measure based on IFS-IBA approach is presented in Section 4. The properties and applicability of the proposed similarity measure are illustrated on several artificial IF pattern recognition problems in Section 5. Furthermore, the IFS-IBA
measure is applied as part of an IF k-NN classifier in Section 6; it is also used for hierarchical clustering of Serbian medium-sized companies in Section 7. The final section summarizes the main findings and
offers guidelines for future work.
2. Intuitionistic fuzzy sets
Intuitionistic fuzzy sets are the generalization of traditional fuzzy sets, introduced by Atanassov (Atanassov, 1986). Atanassov’s intuitionistic fuzzy sets take into account not only the knowledge
about having a certain trait, but also the knowledge about not having it. Formally, an intuitionistic
fuzzy set A in a universe E is defined as an object:

A = \{\langle x, \mu_A(x), \nu_A(x)\rangle \mid x \in E\},   (1)
where the functions \mu_A(x): E \to [0,1] and \nu_A(x): E \to [0,1] define the degree of membership and the degree of non-membership of the element x to the IFS A. For every x \in E, the sum of the degrees of membership and non-membership is less than or equal to 1:

0 \le \mu_A(x) + \nu_A(x) \le 1.   (2)

By definition, an IFS may include some degree of uncertainty (the degree of non-determinacy or hesitation) \pi_A(x) of the membership of the element x \in E to A:

\pi_A(x) = 1 - \mu_A(x) - \nu_A(x).   (3)
Since the uncertainty may be modeled using IFS, it provides a richer semantic description in comparison with classical fuzzy sets. Obviously, if there is no uncertainty (\pi_A(x) = 0, i.e. \mu_A(x) + \nu_A(x) = 1), IFS A is reduced to a traditional fuzzy set. IF propositional calculus was introduced in (Atanassov and Gargov, 1998), and this notation shall be
used further in this paper instead of the set notation.
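To make the three-term representation above concrete, here is a minimal Python sketch (our own illustration; the class name IFValue and its attributes are not from the paper) encoding an IF value as a pair (\mu, \nu) constrained by Eq. (2), with the hesitation degree of Eq. (3) derived from it:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IFValue:
    """An intuitionistic fuzzy value: membership mu and non-membership nu,
    each in [0, 1], with mu + nu <= 1 as required by Eq. (2)."""
    mu: float
    nu: float

    def __post_init__(self):
        # Enforce the IFS definition at construction time.
        if not (0.0 <= self.mu <= 1.0 and 0.0 <= self.nu <= 1.0):
            raise ValueError("mu and nu must lie in [0, 1]")
        if self.mu + self.nu > 1.0 + 1e-9:
            raise ValueError("mu + nu must not exceed 1")

    @property
    def pi(self) -> float:
        """Hesitation (uncertainty) degree pi = 1 - mu - nu, Eq. (3)."""
        return 1.0 - self.mu - self.nu

a = IFValue(0.6, 0.3)   # pi is approximately 0.1
```

When pi = 0 the value degenerates to an ordinary fuzzy membership, matching the reduction to a traditional fuzzy set noted above.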
2.1. Intuitionistic fuzzification
The problem of defining appropriate values for membership and non-membership is one of the critical aspects when dealing with IFS. Although IFSs are close to human perception and reasoning, the
majority of the data is in numerical, crisp format. Intuitionistic fuzzification (I-fuzzification) is a
procedure analogous to classical fuzzification, i.e. it is a mapping of crisp values to IFS. Usually, IFSs are defined by experts, and therefore their knowledge, preferences and uncertainty are included in modeling. On the other hand, there are several strictly mathematical procedures developed for I-fuzzification. In most of them, values are first transformed into the [0,1] interval (Bustince et al. 2000, Vlachos and Sergiadis, 2007), so they include a classical fuzzification step, e.g. min-max
normalization. The obtained fuzzified values that represent the fuzzy membership \mu_A^F are subsequently transformed into IF values.
In (Bustince et al. 2000), the authors proposed intuitionistic fuzzy generators based on fuzzy complements to construct IFS. In this approach the IFS membership function \mu_A and the fuzzy membership function \mu_A^F are identical, while the sum of the uncertainty and the non-membership function is equal to the conventional fuzzy negation of the fuzzy membership. To derive the non-membership function \nu_A, they applied Yager's fuzzy complement

\nu_A = N(\mu_A^F) = \left(1 - (\mu_A^F)^{\lambda}\right)^{1/\lambda}   (4)

and Sugeno's fuzzy complement

\nu_A = N(\mu_A^F) = \frac{1 - \mu_A^F}{1 + \lambda \mu_A^F}   (5)

on the fuzzy membership \mu_A^F. To meet the definition of IFS, Yager's fuzzy complement is an IF generator for \lambda \in (0,1], while Sugeno's is for \lambda \ge 0. This approach is successfully used in practice (e.g. see (Chaira, 2011)). Another prominent I-fuzzification approach is based on the measure of IF entropy (Vlachos and Sergiadis, 2006, Vlachos and Sergiadis, 2007). The first step in this I-fuzzification is to fuzzify
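As a brief illustration of the two generators, the sketch below (the function names are ours) applies the standard Yager and Sugeno fuzzy complements to a fuzzified membership value; for the stated parameter ranges the resulting pair (\mu, \nu) satisfies \mu + \nu \le 1, so the leftover mass is the hesitation degree:

```python
def yager_nu(mu_f: float, lam: float) -> float:
    """Non-membership via Yager's fuzzy complement N(x) = (1 - x**lam)**(1/lam);
    acts as an IF generator for lam in (0, 1]."""
    return (1.0 - mu_f ** lam) ** (1.0 / lam)

def sugeno_nu(mu_f: float, lam: float) -> float:
    """Non-membership via Sugeno's fuzzy complement N(x) = (1 - x) / (1 + lam*x);
    acts as an IF generator for lam >= 0."""
    return (1.0 - mu_f) / (1.0 + lam * mu_f)

mu_f = 0.7                              # fuzzified (e.g. min-max normalized) value
pair_yager = (mu_f, yager_nu(mu_f, 0.5))    # mu + nu <= 1; hesitation fills the gap
pair_sugeno = (mu_f, sugeno_nu(mu_f, 0.5))
```

For lam = 1 both complements reduce to the standard negation 1 - mu_f, i.e. the generated IFS carries no hesitation.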
numerical values using min-max normalization. Further, the fuzzified values \mu_A^F are used to obtain the IFS membership \mu_A and non-membership \nu_A according to a maximum intuitionistic fuzzy entropy principle (Vlachos and Sergiadis, 2006) with \lambda \ge 0 in the following manner:

\mu_A(x) = 1 - (1 - \mu_A^F(x))^{\lambda},
\nu_A(x) = (1 - \mu_A^F(x))^{\lambda(\lambda + 1)}.   (6)

Since the IFS membership \mu_A is not equal to the fuzzy membership \mu_A^F in general, the limit values of \mu_A^F are treated as uncertainty. The value of \lambda may be fixed by an expert, as in the case of IF clustering (Visalakshi Karthikeyani et al. 2014), where the authors suggested \lambda = 0.95. In the case when \lambda = 1, the value of the IFS membership is equal to the value of the fuzzy membership.
Analogous to classical defuzzification, a procedure of deriving a fuzzy value from IFS is called
defuzzification of IFS (de-I-fuzzification). Besides the simplest approach of de-I-fuzzification where
the IF membership becomes the fuzzy membership (\mu_A^F = \mu_A), various de-I-fuzzification procedures are proposed in the literature (e.g. (Ban et al. 2008, Atanassova and Sotirov, 2012)).
2.2. Intuitionistic fuzzy similarity/distance
IF similarities/distances are essentially the extensions of conventional similarity/distance functions and they meet the standard formal definitions.
Definition 1 (Deza and Deza, 2009). Let X be a set. A function s: X \times X \to \mathbb{R} is called a similarity on X if the following hold for all x, y \in X:
1) s(x, y) \ge 0 (non-negativity);
2) s(x, y) = s(y, x) (symmetry);
3) s(x, y) \le s(x, x), with equality if and only if x = y.
This definition is adjusted when dealing with IFS by introducing the degree of similarity between
IFSs (Dengfeng and Chuntian, 2002). The stronger definition of a similarity measure for IFSs is provided in (Intarapaiboon, 2016) to avoid some unreasonable results. In the case of similarities/distances on the unit [0,1] interval, they may be mutually dependent in the following manner: d = 1 - s, d = \sqrt{1 - s}, etc. Since we aim to define the IFS-IBA similarity in this paper, and bearing in mind that normalized similarities and distances are duals, we shall here reflect on the prominent IF distance functions. The geometric distances, such as the Hamming and the Euclidean, are mainly generalized for the purpose of comparing IFSs. The main benefit of the geometric approach for measuring IF distances is a
convenient graphical interpretation and the analogy with traditional metrics. The initial version of the respective distances involves only degrees of membership and non-membership, while later they are enhanced to take into account all the three parameters of IFS (Szmidt and Kacprzyk, 2000). To remain in the unit interval, the normalized Hamming and Euclidean IF distances are developed besides the classical distance functions:
d A, B
1 n A B A B A B 2 n i 1
1 n 2 2 2 A B A B A B 2 n i 1
(7)
CR IP T
d A, B
(8)
On the other hand, the Hausdorff IF distances (Grzegorzewski, 2004, Chen, 2007) may be observed as generalizations of the normalized Hamming and Euclidean IF distance functions:

d(A, B) = \frac{1}{n}\sum_{i=1}^{n}\max\big(|\mu_A(x_i) - \mu_B(x_i)|,\; |\nu_A(x_i) - \nu_B(x_i)|\big)   (9)

d(A, B) = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\max\big((\mu_A(x_i) - \mu_B(x_i))^2,\; (\nu_A(x_i) - \nu_B(x_i))^2\big)}   (10)
These functions are also rather simple and well suited for use with linguistic variables. The mathematical background of the respective measures and their comparative advantages relative to the others are elaborated in (Grzegorzewski, 2004).
IF distance and similarity measures are often discussed from the perspective of the number of parameters taken into account (Szmidt, 2014), the possibility of generating counter-intuitive results (Li et al. 2007, Papakostas et al. 2013), applicability and background (Papakostas et al. 2013),
graphical interpretation and intelligibility, etc. Consequently, there is vast ongoing research on various IF distance and similarity measures (Szmidt, 2014, Dugenci 2016, Intarapaiboon, 2016, Xue et al.
2016).
The first IF similarity measures, e.g. (Chen, 1997, Hong and Kim, 1999), utilize a two-term
intuitionistic fuzzy set representation, i.e. similarity is computed by comparing only the membership and non-membership values. When measuring IF distance using a three-term intuitionistic fuzzy set representation, the level of uncertainty is also explicitly taken into account (Szmidt and Kacprzyk, 2000). From a practical perspective, the three-term approach seems more justified, although both types are correct from the mathematical point of view (Szmidt, 2014). Even though some authors (Yang and Chiclana, 2012) state that the incorporation of the uncertainty part is mandatory, most novel IF measures explicitly include only the membership and non-membership values.
In (Li et al. 2007), the authors analyze and summarize prominent IF similarities by providing counter-intuitive examples regarding pattern recognition. Although geometric-based IF similarities/distances are the most widely used, they may produce unreasonable results in some special cases (Liang and Shi, 2003, Julian et al. 2012), so they are unsuitable for some problems. Furthermore, it is shown that some measures share the same counter-intuitive cases and that they are identical or very similar in nature. Since the problem of obtaining unreasonable results when comparing IFSs is considered very important, the conditions for a stronger definition of similarity measures
for IFSs are introduced in (Intarapaiboon, 2016). On the other hand, most novel IF measures (e.g. Hwang et al. 2012, Farhadinia, 2014, Intarapaiboon, 2016) are complex and often do not have a clear geometrical interpretation, unlike the geometrically
inspired IF distances. Understanding and selecting appropriate measures have a significant effect on results, especially in the case of multi-attribute comparison when IF similarity measure is used along
with some aggregation operator. Therefore, a user often needs to compromise between accuracy and simplicity.
As previously noted, most IF distance measures are extensions of traditional distance functions for comparing IFSs. However, there are some IF distances with a logic-based background. These measures are derived using the D-implication and the tensor-or operator norm (Hatzimichailidis et al. 2013, Hatzimichailidis et al. 2016). The main advantage of these measures is their flexibility and applicability (Papakostas et al. 2016).
3. IFS-IBA approach
The fact that the law of contradiction and the double negation rule are not satisfied in general in IFS theory forms the basis for the terminological and theoretical debate regarding the term "intuitionistic" in IFS. This has led to the emergence of new research concerning various IF negations and generalizations/alterations of the definitions and operations in IFS theory (Deschrijver et al. 2004, Atanassov 2005, Bustince et al. 2008). One of the methods that addresses these issues is the IFS-IBA approach (Milošević et al. 2015). In the IFS-IBA approach, interpolative Boolean algebra (Radojević, 2000) is introduced as an appropriate algebra for intuitionistic fuzzy sets, while IFSs are used in their original form. The mathematical background of the operators in this approach is entirely based on interpolative Boolean algebra. IBA is a Boolean consistent [0,1]-valued algebra in the sense that all Boolean laws are preserved (Radojević, 2000). Previously, IBA was used as a basis for Boolean consistent fuzzy logic (Radojević, 2013) and for logical aggregation, an aggregation procedure applied in different domains (Radojević, 2008).
The logical operations of conjunction, disjunction and negation within the IFS-IBA approach are defined in the following manner:

A \wedge B = \langle \mu_A \otimes \mu_B,\; \nu_A + \nu_B - \nu_A \otimes \nu_B \rangle, \quad A \vee B = \langle \mu_A + \mu_B - \mu_A \otimes \mu_B,\; \nu_A \otimes \nu_B \rangle, \quad \neg A = \langle \nu_A,\; 1 - \nu_A \rangle   (11)

where \otimes is the generalized product (GP), an operator that can be realized as any t-norm that produces results greater than or equal to the Lukasiewicz t-norm and less than or equal to the minimum. In accordance with the IBA transformation rules (Radojević, 2008) and the additional trivial IFS-IBA rule concerning the membership and non-membership of the same set (Milošević et al. 2015),

\mu_A \otimes \nu_A = 0,   (12)

every complex logical expression can be processed and transformed into a corresponding mathematical form.
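The connectives of Eq. (11) can be sketched in Python as follows (the function names and the (μ, ν)-pair encoding are our own illustration, not the authors' code); the generalized product is passed in as a two-argument function:

```python
# Three admissible realizations of the generalized product (t-norms
# between the Lukasiewicz t-norm and the minimum).
def gp_min(a, b):
    return min(a, b)

def gp_product(a, b):
    return a * b

def gp_lukasiewicz(a, b):
    return max(0.0, a + b - 1.0)

# IFS-IBA conjunction, disjunction and negation of Eq. (11);
# an IFS value is a (mu, nu) pair.
def ifs_and(A, B, gp=gp_min):
    (ma, na), (mb, nb) = A, B
    return (gp(ma, mb), na + nb - gp(na, nb))

def ifs_or(A, B, gp=gp_min):
    (ma, na), (mb, nb) = A, B
    return (ma + mb - gp(ma, mb), gp(na, nb))

def ifs_not(A):
    ma, na = A
    return (na, 1.0 - na)
```

With gp_min, ifs_and reduces to the standard IF intersection ⟨min(μ_A, μ_B), max(ν_A, ν_B)⟩, illustrating that the conventional IF calculus is a special case of the IFS-IBA approach.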
The validity of the laws of contradiction and excluded middle is thoroughly examined within the IFS-IBA approach (Milošević et al. 2015). Namely, the law of contradiction is fully satisfied, while the classical law of excluded middle is not (other versions of the law of excluded middle with an IF nature are satisfied). In addition, the double negation rule is not preserved in the IFS-IBA approach. Thus, the IFS-IBA approach is in accordance with the idea of intuitionism. Furthermore, this approach generalizes the conventional IF calculus, i.e. when the minimum function is used as the generalized product, the IF calculus is obtained as a special case of the IFS-IBA approach. The approach may also be seen as flexible, because various t-norm operators may be applied as the generalized product. As a result, numerous situations may be modeled and realized with greater descriptive power compared to the classical IFS.
4. IFS-IBA similarity measure

This paper presents an IF similarity measure based on the IFS-IBA equivalence relation. It is derived from the well-known tautology

A \Leftrightarrow B = (A \wedge B) \vee (\neg A \wedge \neg B).   (13)
In real-valued cases, this tautology suggests that the intensity with which A and B both have a certain property should be treated equally with the intensity with which both do not have it (Poledica et al. 2015). Intuitively, this is in accordance with the notion of the IFS. In the IFS-IBA approach, this relation may be realized in the following manner:
A \Leftrightarrow B = (A \wedge B) \vee (\neg A \wedge \neg B)
= (\langle \mu_A, \nu_A \rangle \wedge \langle \mu_B, \nu_B \rangle) \vee (\neg\langle \mu_A, \nu_A \rangle \wedge \neg\langle \mu_B, \nu_B \rangle)
= \langle \mu_A \otimes \mu_B,\; \nu_A + \nu_B - \nu_A \otimes \nu_B \rangle \vee (\langle \nu_A, 1 - \nu_A \rangle \wedge \langle \nu_B, 1 - \nu_B \rangle)   (14)
= \langle \mu_A \otimes \mu_B,\; \nu_A + \nu_B - \nu_A \otimes \nu_B \rangle \vee \langle \nu_A \otimes \nu_B,\; 1 - \nu_A \otimes \nu_B \rangle
= \langle \mu_A \otimes \mu_B + \nu_A \otimes \nu_B,\; \nu_A + \nu_B - 2\,\nu_A \otimes \nu_B \rangle.
Hence, the IFS-IBA equivalence is realized as an IFS. The degree of non-membership of the IFS-IBA equivalence is actually the IBA distance between the non-memberships. Since it does not include any information about the memberships of A and B, we are interested only in the membership part of the equivalence, i.e. the existence of the equivalence relation. It shall be used as a basis for the IFS-IBA similarity:
s_{IFS\text{-}IBA}(A, B) = \begin{cases} 1, & \mu_A = \mu_B \text{ and } \nu_A = \nu_B \\ \mu_A \otimes \mu_B + \nu_A \otimes \nu_B, & \text{otherwise.} \end{cases}   (15)
Bearing in mind that a fuzzy membership is often I-fuzzified as equal to the IF membership, observing only the membership part of the IFS-IBA equivalence as the IFS-IBA similarity measure is logical and justified. This may be seen as the simplest de-I-fuzzification. In this way, the transparency of the similarity measure is preserved, so more complex de-I-fuzzification procedures are not appropriate.
The IFS-IBA similarity satisfies the properties of a similarity measure given in Definition 1:

1) Non-negativity: since \mu_A \otimes \mu_B \ge 0 and \nu_A \otimes \nu_B \ge 0, it follows that \mu_A \otimes \mu_B + \nu_A \otimes \nu_B \ge 0.

2) Symmetry: since \mu_A \otimes \mu_B + \nu_A \otimes \nu_B = \mu_B \otimes \mu_A + \nu_B \otimes \nu_A, it follows that s_{IFS\text{-}IBA}(A, B) = s_{IFS\text{-}IBA}(B, A).

3) Limited range: since the min function is the pointwise largest t-norm, \mu_A \otimes \mu_B + \nu_A \otimes \nu_B \le \min(\mu_A, \mu_B) + \min(\nu_A, \nu_B). If \mu_A + \nu_A < 1 and \mu_B + \nu_B < 1, then \min(\mu_A, \mu_B) + \min(\nu_A, \nu_B) < \min(\mu_A, \mu_B) + \min(1 - \mu_A, 1 - \mu_B) = \min(\mu_A, \mu_B) + 1 - \max(\mu_A, \mu_B) \le 1, with equality only for \min(\mu_A, \mu_B) = \max(\mu_A, \mu_B), i.e. \mu_A = \mu_B. In the case of \mu_A + \nu_A = 1 or \mu_B + \nu_B = 1, \min(\mu_A, \mu_B) + \min(\nu_A, \nu_B) \le 1.
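The three properties can also be spot-checked numerically. The sketch below (our own test harness, with the algebraic product as an admissible GP) samples random valid IF values and asserts non-negativity, symmetry and the limited range; it is an illustration, not a proof:

```python
import random

# Element-wise IFS-IBA similarity of Eq. (15) with product as GP.
def s_iba(A, B, gp=lambda a, b: a * b):
    if A == B:
        return 1.0
    (ma, na), (mb, nb) = A, B
    return gp(ma, mb) + gp(na, nb)

random.seed(0)
for _ in range(2000):
    ma = random.random(); na = random.uniform(0.0, 1.0 - ma)  # mu + nu <= 1
    mb = random.random(); nb = random.uniform(0.0, 1.0 - mb)
    A, B = (ma, na), (mb, nb)
    assert s_iba(A, B) >= 0.0                          # non-negativity
    assert abs(s_iba(A, B) - s_iba(B, A)) < 1e-12      # symmetry
    assert s_iba(A, B) <= s_iba(A, A) + 1e-12          # limited range
```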
The IFS-IBA similarity measure may be seen as generic, since various similarity measures can easily be derived from it, i.e. it has different realizations depending on the generalized product. Thus, this measure can describe/model different dependencies in the data. For instance, if the min function is used as GP (\otimes := \min), the similarity of two IFSs is modeled as the sum of the minimums of the memberships and non-memberships:
s^{\min}_{IFS\text{-}IBA}(A, B) = \begin{cases} 1, & \mu_A = \mu_B \text{ and } \nu_A = \nu_B \\ \min(\mu_A, \mu_B) + \min(\nu_A, \nu_B), & \text{otherwise.} \end{cases}   (16)
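Eq. (16), together with the equal-weight aggregation over a multi-attribute object, can be sketched as follows (identifiers are ours):

```python
# IFS-IBA similarity with min as GP (Eq. (16)): the sum of the minimal
# membership and the minimal non-membership, or 1 for identical IF values.
def s_iba_min(A, B):
    if A == B:
        return 1.0
    (ma, na), (mb, nb) = A, B
    return min(ma, mb) + min(na, nb)

# Multi-attribute objects are compared element-wise and averaged (w_i = 1/n).
def similarity(P, Q):
    return sum(s_iba_min(a, b) for a, b in zip(P, Q)) / len(P)
```

Per element, the value min(μ_A, μ_B) + min(ν_A, ν_B) is exactly the gray surface of the graphical interpretation in Fig. 1.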
ACCEPTED MANUSCRIPT Perceiving similarity of IFSs in this manner is typical of measuring similarity in the IBA framework and it is in accordance with fuzzy similarity modeling presented in (Poledica et al. 2015). In this case, modeling similarity of two IFSs is very straightforward and easy to understand. Furthermore, this measure has a clear-cut meaning and unambiguous graphical interpretation. The similarity between IFSs A and B, presented on the left-hand side of Fig. 1, is equal to the sum of the gray surfaces on the right-hand side of Fig. 1. The gray surfaces represent the minimal level of membership and non-
AN US
CR IP T
membership for observed sets at each point.
Fig. 1 Graphical interpretation of the IFS-IBA similarity measure with min as GP

However, different realizations of GP imply different perspectives on IF similarity. GP may be selected
by an expert or may be learned from the data. The selection of the appropriate realization of GP for a certain problem is essential, since it may have a significant influence on clustering performance or
classification accuracy when using this measure. Furthermore, the IFS-IBA similarity measure does not include uncertainty in an explicit manner. However, uncertainty is implicitly involved in
similarity modeling through the selection of GP. The proposed measure with min as GP may be considered rigorous, since it includes only the sum of the minimal
level of membership and non-membership. Therefore, it generates some counter-intuitive examples in the sense of (Li et al. 2007), which may be considered the main limitation of the study. On the other hand, the IFS-IBA similarity gives greater importance to IFSs that are more distinct, i.e. that have a small level of uncertainty \pi. Consequently, the IFS A = \langle 0, 0 \rangle has a maximal level of uncertainty (\pi = 1) and is not similar to any IFS except itself, since the user does not have any information about it. Hence, this measure compares IFSs from a different viewpoint than the standard ones, emphasizing comprehension of information. Based on the proposed IFS-IBA similarity, we may compute the distance as its complement:
d_{IFS\text{-}IBA}(A, B) = 1 - s_{IFS\text{-}IBA}(A, B).   (17)
This function, as the dual of the similarity, shall be further used in the illustrative examples regarding classification and clustering. The IFS-IBA distance may have different realizations, just like the IFS-IBA similarity measure. For instance, if the generalized product is realized as the minimum, the measure is as follows:
d^{\min}_{IFS\text{-}IBA}(A, B) = \begin{cases} 0, & \mu_A = \mu_B \text{ and } \nu_A = \nu_B \\ 1 - \min(\mu_A, \mu_B) - \min(\nu_A, \nu_B), & \text{otherwise.} \end{cases}   (18)

This realization of the IFS-IBA distance function produces the smallest values of all the IFS-IBA distances, since the minimum is the pointwise largest t-norm.
In the case of a comparison of two multi-attribute objects, an aggregation operator is needed. In addition to the conventional aggregation operators, logic-based aggregations may be used in this approach because of the IBA-based background of the IFS-IBA similarity/distance measure. For instance, a prerequisite level of object similarity can easily be modeled as a conjunction of attribute similarities. These more sophisticated aggregations and the flexibility of the IFS-IBA measures offer the possibility of modeling certain problems in a more detailed manner. Combining the IFS-IBA similarity with logic-based aggregation functions shall be the subject of further work.
5. Application of the IFS-IBA similarity measure to pattern recognition
The applicability of IF similarity measures is often illustrated on the pattern recognition problem with intuitionistic fuzzy information in the finite universe of discourse X = \{x_1, \ldots, x_n\}. The idea is to classify a sample B = \{\langle x_i, \mu_B(x_i), \nu_B(x_i) \rangle \mid x_i \in X\} in the group represented by the pattern A_j = \{\langle x_i, \mu_{A_j}(x_i), \nu_{A_j}(x_i) \rangle \mid x_i \in X\}, j = 1, \ldots, m, which is most similar to B, assuming that w_i = 1/n is the weight for x_i.
In order to investigate the potential of the IFS-IBA similarity measure in pattern recognition, we consider five artificial problems from the literature (Chen and Chang, 2015). The task is to classify a sample into one of three groups represented by corresponding patterns. In examples E1, E2, E4 and E5 the patterns consist of 3 IFSs, while in E3 they consist of 4 IFSs (see Table 1). These test examples were used in (Chen and Chang, 2015) and (Nguyen, 2016), where it was shown that only the measures proposed in those papers overcome the drawbacks of existing similarity measures. As shown in Table 1, the IFS-IBA similarity measure with min as GP classifies all samples except the one in E4. As previously stated from the theoretical point of view, the IFS-IBA similarity measure with min as GP is not suitable for problems with missing/ambiguous values, e.g. IFSs with
high uncertainty. In such a case a different realization of GP should be used. For example, the IFS-IBA similarity measure with the product as GP clearly classifies the sample in Example 4: the similarities of sample B with patterns A1, A2 and A3 are 0.023, 0.013 and 0.017, respectively, so B is classified in the first group. Based on that, we can state that special cases of the IFS-IBA similarity successfully deal with all of the presented pattern recognition examples.

Table 1 Pattern recognition results obtained using the IFS-IBA similarity measure with min as GP

E1. Patterns: A1 = {⟨1, 0⟩, ⟨0.8, 0⟩, ⟨0.7, 0.1⟩}, A2 = {⟨0.8, 0.1⟩, ⟨1, 0⟩, ⟨0.9, 0⟩}, A3 = {⟨0.6, 0.2⟩, ⟨0.8, 0⟩, ⟨1, 0⟩}.
    Sample: B = {⟨0.5, 0.3⟩, ⟨0.6, 0.2⟩, ⟨0.8, 0.1⟩}. Similarities (A1, A2, A3): 0.633, 0.667, 0.700. Result: A3.
E2. Patterns: A1 = {⟨0.1, 0.1⟩, ⟨0.5, 0.1⟩, ⟨0.1, 0.9⟩}, A2 = {⟨0.5, 0.5⟩, ⟨0.7, 0.3⟩, ⟨0, 0.8⟩}, A3 = {⟨0.7, 0.2⟩, ⟨0.1, 0.8⟩, ⟨0.4, 0.4⟩}.
    Sample: B = {⟨0.4, 0.4⟩, ⟨0.6, 0.2⟩, ⟨0, 0.8⟩}. Similarities: 0.533, 0.867, 0.433. Result: A2.
E3. Patterns: A1 = {⟨0.5, 0.3⟩, ⟨0.7, 0⟩, ⟨0.4, 0.5⟩, ⟨0.7, 0.3⟩}, A2 = {⟨0.5, 0.2⟩, ⟨0.6, 0.1⟩, ⟨0.2, 0.7⟩, ⟨0.7, 0.3⟩}, A3 = {⟨0.5, 0.4⟩, ⟨0.7, 0.1⟩, ⟨0.4, 0.6⟩, ⟨0.7, 0.2⟩}.
    Sample: B = {⟨0.4, 0.3⟩, ⟨0.7, 0.1⟩, ⟨0.3, 0.6⟩, ⟨0.7, 0.3⟩}. Similarities: 0.800, 0.675, 0.725. Result: A1.
E4. Patterns: A1 = {⟨0.5, 0.4⟩, ⟨0.8, 0⟩, ⟨0.3, 0.7⟩}, A2 = {⟨0.6, 0.3⟩, ⟨0.9, 0.1⟩, ⟨0.6, 0.4⟩}, A3 = {⟨0.6, 0.3⟩, ⟨0.9, 0.1⟩, ⟨0.5, 0.5⟩}.
    Sample: B = {⟨0, 0⟩, ⟨0, 0⟩, ⟨0, 0.1⟩}. Similarities: 0.033, 0.033, 0.033. Result: ?
E5. Patterns: A1 = {⟨0.3, 0.3⟩, ⟨0.2, 0.5⟩, ⟨0.2, 0.1⟩}, A2 = {⟨0.4, 0.3⟩, ⟨0.2, 0.5⟩, ⟨0, 0.1⟩}, A3 = {⟨0.3, 0.4⟩, ⟨0.2, 0.5⟩, ⟨0, 0.1⟩}.
    Sample: B = {⟨0.4, 0.3⟩, ⟨0.2, 0.4⟩, ⟨0, 0.1⟩}. Similarities: 0.473, 0.897, 0.763. Result: A2.
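The classification rule used in these examples can be sketched as follows (a minimal hypothetical implementation; names are ours). With the E1 data it reproduces the similarities 0.633, 0.667 and 0.700 and selects A3:

```python
# Element-wise IFS-IBA similarity with min as GP (Eq. (16)).
def s_min(a, b):
    return 1.0 if a == b else min(a[0], b[0]) + min(a[1], b[1])

def similarity(pattern, sample):            # equal weights w_i = 1/n
    return sum(s_min(a, b) for a, b in zip(pattern, sample)) / len(sample)

def classify(patterns, sample):
    scores = {name: similarity(p, sample) for name, p in patterns.items()}
    return max(scores, key=scores.get), scores

# Example E1 from Table 1.
patterns = {
    "A1": [(1, 0), (0.8, 0), (0.7, 0.1)],
    "A2": [(0.8, 0.1), (1, 0), (0.9, 0)],
    "A3": [(0.6, 0.2), (0.8, 0), (1, 0)],
}
B = [(0.5, 0.3), (0.6, 0.2), (0.8, 0.1)]
best, scores = classify(patterns, B)        # best == "A3"
```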
6. IFS-IBA distance in k-NN classification
The problem of classifying an example (instance) into a given set of categories is one of the typical machine learning tasks. In theory, classification of IF values should be more precise, since IF values carry more information than fuzzy or crisp values. The k nearest neighbour (k-NN) algorithm is one
of the most commonly used classification methods, primarily due to its simplicity and long presence in science (Mitchell, 1997). In this paper, we use a traditional k-NN algorithm adapted to classify instances expressed with IF values. The dissimilarity between instances is measured by an IF distance function, while the k-NN decision rule is crisp, i.e. the sample is classified into the closest class. The k-NN algorithm used in this paper should not be confused with several IF k-NN approaches from the literature (Hadjitodorov, 1995, Kuncheva, 1995, Todorova and Vassilev, 2009, Derrac et al.
2014). These approaches utilize conventional distance functions, while the k-NN rule is intuitionistic, i.e. the belonging of an instance to a particular class is expressed with membership and non-membership.
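A minimal sketch of such a crisp k-NN rule over IF-valued instances (our own illustration, not the authors' code), using the min-GP IFS-IBA distance of Eq. (18) averaged over attributes:

```python
from collections import Counter

# IFS-IBA distance with min as GP (Eq. (18)), averaged over attributes;
# an instance is a list of (mu, nu) attribute values.
def iba_min_distance(P, Q):
    def s(a, b):
        return 1.0 if a == b else min(a[0], b[0]) + min(a[1], b[1])
    return 1.0 - sum(s(a, b) for a, b in zip(P, Q)) / len(P)

def knn_predict(train, query, k=3, dist=iba_min_distance):
    # train: list of (instance, label) pairs; crisp majority vote over
    # the k nearest neighbours.
    neighbours = sorted(train, key=lambda t: dist(t[0], query))[:k]
    return Counter(label for _, label in neighbours).most_common(1)[0][0]
```

Any of the IF distance functions of Eqs. (7)-(10) can be plugged in as `dist` instead.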
6.1. Experiment setup

The Hamming and Euclidean IF distances (Eqs. 7 and 8), the Hausdorff generalizations of the normalized Hamming and Euclidean IF distances (Eqs. 9 and 10) and the IFS-IBA distance with min as the generalized product (Eq. 18) are included in the experiment. In the case of the IFS-IBA distance,
the simple average is used to aggregate the similarities of the attributes in order to perform a fair comparison.
Standard min-max normalization, followed by I-fuzzification based on the measure of IF entropy with the suggested parameter value of 0.95, is used to transform crisp values into IF values. The analysis of the
classification accuracy with respect to different values of the parameter is beyond the scope of this research.
The comparison of the IFS-IBA distance with the standard IF distances within the k-NN algorithm is performed on four datasets taken from the UCI Machine Learning Repository (Iris, Wine, PIMA and BUPA). All datasets are complete, i.e. they do not contain missing values. For every dataset, we
apply a 10-fold cross-validation method to ensure low bias. The Iris dataset is balanced, i.e. it contains 50 samples from each of three species of iris flower, described by four attributes. The Wine dataset defines the
problem of classifying three types of wine characterized by 13 attributes. This unbalanced dataset contains 178 instances. Both the Iris and Wine datasets have already been used for comparing the accuracy of various IF similarity measures (Papakostas et al. 2013). PIMA and BUPA are datasets that are also
often used in the literature, e.g. (Luukka and Leppalampi, 2006). Both sets define a binomial classification problem concerning medical diagnosis. The PIMA dataset contains data about female Pima Indians tested
for diabetes. The set consists of 768 instances with eight attributes. The BUPA dataset was collected during a medical study concerning liver disorders. Six attributes represent the blood test results and habits of 345 patients, while the output is whether or not the patient has a liver disorder. Both sets are
unbalanced.
6.2. Results and discussion

Experimental results for the Iris, Wine, PIMA and BUPA datasets and k = 1, 3, 5, 7, 9 are presented in Table 2. The best results in terms of average accuracy are in bold. The Iris and Wine datasets may be perceived as common and "not very hard to classify", e.g. in the Iris dataset one class is even clearly separable. Thus, the obtained accuracies for all classifiers are high
and uniform, which is in line with expectations. However, the k-NN with the IFS-IBA distance slightly outperforms the other classifiers and achieves 96.33% accuracy for k = 5, 7 on Iris and 98.31% accuracy for k = 1, 7, 9 on Wine. The second best result on Iris is 96.00%, obtained with the Hausdorff IF distance with the Hamming and Euclidean metrics. The k-NN algorithm with the Hamming and Euclidean IF distances achieves up to 97.75% prediction accuracy on Wine.

Table 2 Results of k-NN classification with different IF distances

Dataset  k-NN   Hamming    Euclidean   Hausdorff   Hausdorff     IFS-IBA
                IF dist.   IF dist.    (Hamming)   (Euclidean)   distance
Iris     1-NN   93.33%     94.00%      88.67%      88.33%        94.67%
         3-NN   94.67%     95.33%      96.00%      96.00%        94.67%
         5-NN   94.67%     94.67%      96.00%      96.00%        96.33%
         7-NN   94.00%     95.33%      96.00%      95.67%        96.33%
         9-NN   94.67%     95.33%      95.67%      95.67%        96.00%
Wine     1-NN   97.75%     96.07%      69.10%      69.66%        98.31%
         3-NN   96.07%     95.51%      68.54%      68.54%        97.75%
         5-NN   96.63%     97.19%      69.10%      69.66%        97.75%
         7-NN   97.19%     97.19%      67.42%      66.85%        98.31%
         9-NN   97.19%     97.75%      68.54%      66.29%        98.31%
PIMA     1-NN   68.23%     70.18%      59.64%      59.38%        70.05%
         3-NN   71.48%     71.22%      64.58%      65.49%        72.66%
         5-NN   72.27%     70.31%      65.36%      66.02%        73.44%
         7-NN   71.61%     71.35%      65.49%      66.02%        73.70%
         9-NN   72.40%     72.14%      65.10%      65.49%        74.22%
BUPA     1-NN   63.19%     62.61%      42.61%      42.32%        61.45%
         3-NN   65.80%     64.06%      43.77%      43.77%        66.67%
         5-NN   66.09%     62.03%      46.96%      48.41%        68.99%
         7-NN   65.51%     61.45%      47.54%      47.54%        69.57%
         9-NN   64.93%     63.48%      47.83%      47.83%        67.83%
On the PIMA dataset, the k-NN algorithm with the IFS-IBA distance and k = 9 is the most efficient, with 74.22% accuracy. This result is approximately 2% better than the best result obtained by k-
NN with the Hamming or Euclidean distance, and 8% better than k-NN with the Hausdorff distances. It should be noted that the IFS-IBA distance-based classifier outperforms the other classifiers regardless
of the value of k on this dataset. Finally, the classifier with the IFS-IBA distance achieves significantly better results than the other approaches on the BUPA dataset. More precisely, the k-NN with the IFS-IBA distance for k = 7 classifies instances with almost 70% accuracy. A classifier with the Hamming or Euclidean distance achieves 66% (for k = 5) and 64% accuracy (for k = 3), while the accuracies of k-NN with the Hausdorff distances are very low. In a nutshell, the classification accuracy obtained using the k-NN with the IFS-IBA distance is distinctly higher on two of the four datasets (PIMA and BUPA) compared to k-NN with the other distance functions. On the Iris and Wine datasets, the highest classification accuracies obtained with a classical IF distance
and the one obtained with the IFS-IBA distance are of the same rank. Nevertheless, the IFS-IBA classifier achieves slightly better results on both problems.
7. IFS-IBA hierarchical clustering: the case of Serbian medium-sized companies

One of the main tasks in data analysis is to find structure in the data in order to organize it into sensible groups, i.e. clusters. Cluster analysis is the formal study of methods and algorithms for grouping objects according to measured or perceived characteristics (Jain, 2010). Clustering algorithms
uncover the structure of the data, since they are mainly based on the similarities between the objects. The objects to be clustered are usually represented as vectors of crisp values. On the other hand, there are situations when this representation is not sufficiently informative and descriptive. Therefore, different data representations such as fuzzy sets and IFSs, appropriate similarity measures (Szmidt and Kacprzyk, 2000, Dengfeng and Chuntian, 2002, Liang and Shi, 2003, Chen and Randynto, 2013,
Song et al. 2015) and clustering algorithms (Zhang et al. 2007, Chen et al. 2007, Zeshui, 2009, Cuong et al. 2012, Xu, 2013, Wang et al. 2014, Huang et al. 2015) are introduced.
The concepts of the IF similarity degree, the IF similarity matrix, and a procedure for deriving an IF equivalence matrix are essential for IF clustering; they are introduced in (Zhang, 2007). The notions of the association matrix, association coefficients and conventional clustering are generalized when dealing
with IFSs and interval-valued fuzzy sets (Xu et al. 2008). Furthermore, other conventional clustering techniques are generalized in the sense of IFS, e.g. the IF C-means algorithm (Pelekis et al. 2008, Xu and
Wu, 2010), IF minimum spanning tree (Zhao et al. 2012), IF kernel-based fuzzy C-means (Lin, 2014), etc.
The IF hierarchical clustering algorithm used in this paper is based on the traditional hierarchical clustering procedure and the basic distance measures between IFSs together with a certain
(intuitionistic fuzzy) aggregation operator (Zeshui, 2009). The IF hierarchical clustering procedure consists of the following steps:
1) Intuitionistic fuzzification of the input values;
2) Dissimilarities between objects are calculated based on an IF distance measure and an aggregation function;
3) A conventional hierarchical clustering procedure is conducted (Zeshui, 2009).
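The steps above can be sketched in pure Python. The following toy average-linkage agglomeration over precomputed IFS-IBA distances is illustrative only (in practice a library routine such as scipy.cluster.hierarchy.linkage would be used, and the I-fuzzification step is assumed to have been done already):

```python
# IFS-IBA distance with min as GP, averaged over attributes (Eq. (18));
# an object is a list of (mu, nu) attribute values.
def iba_min_distance(P, Q):
    s = sum((1.0 if a == b else min(a[0], b[0]) + min(a[1], b[1]))
            for a, b in zip(P, Q)) / len(P)
    return 1.0 - s

def agglomerate(objects, n_clusters, dist=iba_min_distance):
    """Average-linkage agglomerative clustering; returns index clusters."""
    clusters = [[i] for i in range(len(objects))]
    d = [[dist(a, b) for b in objects] for a in objects]

    def cluster_dist(c1, c2):                    # average linkage
        return sum(d[i][j] for i in c1 for j in c2) / (len(c1) * len(c2))

    while len(clusters) > n_clusters:
        # merge the closest pair of clusters
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda p: cluster_dist(clusters[p[0]], clusters[p[1]]))
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters
```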
7.1. Experiment setup

The Hamming (Eq. 7) and Euclidean (Eq. 8) IF distances, the Hausdorff generalizations of the normalized Hamming and Euclidean IF distances (Eqs. 9 and 10) and the IFS-IBA distance with min as the generalized product
(Eq. 18) are included in the experiment. In order to perform a fair comparison, the simple average is used as the aggregation operator in the case of the IFS-IBA distance. The experiment is conducted on a dataset provided by "Cube Risk Management Solutions". Although the database contains financial data on 1020 medium-sized companies in Serbia, in our experiment we cluster only 22 of them for the sake of a clearer presentation of the results. Our sample is balanced and randomly selected from the database. Companies C1-C11 are active, while
companies C12-C22 are in the process of bankruptcy or liquidation. The dataset includes information from annual financial statements, balance sheets, income statements and statistical annexes for 2014 and 2015. In this experiment, companies are described with five financial ratios which are common indicators for bankruptcy prediction (Altman, 1968):
1) Working Capital / Total Assets;
2) Retained Earnings / Total Assets;
3) Earnings before Interest and Taxes / Total Assets;
4) Market Value of Equity / Book Value of Total Liabilities;
5) Sales / Total Assets.
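For illustration, the five ratios can be computed directly from statement items; the field names and figures below are invented for this sketch and are not taken from the actual dataset:

```python
def altman_ratios(f):
    """Return the five ratio inputs, in the order listed above."""
    ta = f["total_assets"]
    return [
        f["working_capital"] / ta,
        f["retained_earnings"] / ta,
        f["ebit"] / ta,
        f["market_value_of_equity"] / f["total_liabilities"],
        f["sales"] / ta,
    ]

# Hypothetical company figures (thousands of a currency unit).
company = {"total_assets": 1000.0, "working_capital": 150.0,
           "retained_earnings": 200.0, "ebit": 90.0,
           "market_value_of_equity": 400.0, "total_liabilities": 500.0,
           "sales": 1200.0}
ratios = altman_ratios(company)   # [0.15, 0.2, 0.09, 0.8, 1.2]
```

These crisp ratio values are then min-max normalized and I-fuzzified before clustering.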
7.2. Results and discussion
We present the main clustering results below. All results are presented in
corresponding dendrograms, while clusters are marked using the default color threshold. The input values are I-fuzzified using Yager's and Sugeno's IF generators and the approach based
on the measure of IF entropy. All the obtained clustering results are remarkably similar regardless of the I-fuzzification method. Therefore, the I-fuzzification method has no influence on the results, and only the
results obtained using the measure of IF entropy with the suggested parameter value of 0.95 are
presented and discussed in detail.
Fig. 2 Clustering results in the case of the IF Hamming distance
The dendrogram in the case of the IF Hamming distance is presented in Fig. 2. The red cluster
contains companies that are active, with the exception of company C19. Company C13, which has also gone bankrupt, is more similar to the red cluster. Companies C6 and C9 form a separate cluster. Although these companies are active, their cluster is more similar to the green cluster (which contains companies that are bankrupt) than to the red cluster. Company C8 is an exception in the
green cluster since it is active.
Fig. 3 Clustering results in the case of the IF Euclidean distance
The clustering results in the case of the IF Euclidean distance are illustrated in the dendrogram presented in Fig. 3. The red cluster contains mainly active companies. As in the case of the Hamming distance, the company C19 is an exception in the cluster of active companies. Active companies C6 and C9 form a separate cluster which is most similar to the red cluster. The green cluster consists only
of the companies that have gone bankrupt. Companies C13 and C8 are treated as standalone clusters. According to the results, C13 is more similar to the cluster of companies that have good financial health, while C8 is more similar to the bankrupt companies. In both cases, this may lead to misconceptions
in the analysis.
Fig. 4 Clustering results in the case of the IF Hausdorff distance based on the Euclidean metric
The clusters obtained using the IF Hausdorff distance based on the Hamming and Euclidean metrics are identical. Therefore, only the dendrogram obtained using the IF Hausdorff distance based on the Euclidean
metric is presented (see Fig. 4). The blue and green clusters contain companies that are active. The red cluster consists of two smaller clusters: one containing 5 active companies (C3, C4, C5, C8 and C11) and 1 company (C19) that has gone bankrupt, and another consisting of 10 companies that
have gone bankrupt and 1 active company (C10). This may imply that the companies in the green and blue clusters are active companies that operate steadily, while companies C3, C4, C5, C8, C11 and C19
from the red sub-cluster operate with some difficulties. However, companies C10 and C19 may be seen as exceptions in the clustering with this measure.
In the case of the IFS-IBA distance (see Fig. 5), all the bankrupt companies except C13 are in the green cluster, while the companies that have good financial health are in the blue cluster (C1, C2, C3, C4, C7) and the red cluster (C5, C8, C10, C11), respectively. In general, the companies in the blue cluster have better values of the financial ratios than those in the red cluster. Within these clusters there are no exceptions. Again, companies C6 and C9 form a separate cluster which is more similar to the green cluster. This may be an indicator that these companies, although they currently operate without financial difficulties, should be treated with caution.
Fig. 5 Clustering results in the case of the IFS-IBA distance
In general, the clustering results obtained using the IFS-IBA measure intuitively make the most sense, because the clusters are consistent without any exceptions. Companies C8, C13 and C19 are not in the appropriate clusters when clustering is performed using the IF Hamming and Euclidean distance functions. Company C8 is not in the correct cluster due to a negative value of attribute 3 (Earnings before Interest and Taxes / Total Assets). This variable shows that C8 was not profitable in the
previous accounting period. On the other hand, the values of the other attributes support the fact that C8 is not financially endangered in the long term, which is recognized only by the IFS-IBA measure. Company C19 has the largest value of attribute 1 (Working Capital / Total Assets) among
all the bankrupt companies, which probably affected the clustering results. Company C13 was not easy to cluster due to a high value of attribute 4 (Market Value of Equity / Book Value of Total
Liabilities) compared to the other companies, which is not characteristic of a bankrupt company. In addition, the active companies C6 and C9 are mutually very similar and always form a separate
cluster. These companies have the largest values of attribute 5 (Sales / Total Assets) in the whole dataset, and rather small values of attribute 1 (Working Capital / Total Assets). Hence, we may assume that C6 and C9 are retailers, who should be treated differently from the other
companies in the dataset.
8. Conclusions and directions of future research
The similarity between two IFSs is usually assessed using a certain geometrically inspired measure. Despite the many IF similarity measures presented in the literature, most of them are not easy to interpret or do not always produce reasonable results. This paper proposes a novel similarity measure based on the IFS-IBA approach. The proposed measure models similarity as the membership part of the IFS-IBA equivalence relation. In this way it incorporates both the membership and non-membership of
the compared sets and their dependencies. In particular, the proposed measure is flexible, as its realization relies on the choice of the generalized product. Therefore, numerous situations may be modeled with more descriptive power. Furthermore, the IFS-IBA similarity measure may be combined with various aggregation functions to compare multi-attribute objects. Finally, we have applied the proposed measure in the IF hierarchical clustering of Serbian medium-sized companies according to their financial ratios. The clustering results obtained using the IFS-IBA measure with different I-fuzzification methods are compared with those of the IF Hamming, Euclidean and Hausdorff
distance functions. All the results are remarkably similar regardless of the I-fuzzification method; therefore, the I-fuzzification method has no influence on the results. The clustering results obtained using the IFS-IBA measure make the most sense in general, because the obtained clusters are consistent, with no exceptions.
When dealing with multi-attribute object comparison, a simple average is used in this paper to aggregate the IFS-IBA distances between corresponding attributes. This aggregation is too simple, as it cannot capture the importance of and dependencies among particular attributes. Thus, combining the IFS-IBA similarity measure with expert-given logic-based aggregation functions will be the subject of future work. We shall also analyze the influence of different t-norms utilized as the generalized product on the classification/clustering results, and afterwards try to "learn" the GP from the input data.
M
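The simple-average aggregation used here can be sketched as follows. The per-attribute function `sim` stands in for the IFS-IBA similarity between two attribute values; the placeholder similarity, the example data and the optional weighted variant (in the spirit of the future-work direction above) are assumptions for illustration only.

```python
# Aggregating per-attribute similarities of two multi-attribute objects.
# `sim` stands in for the IFS-IBA similarity of two attribute values;
# a placeholder (1 - |a - b|) is used here purely for illustration.

def overall_similarity(x, y, sim, weights=None):
    """Average (optionally weighted) per-attribute similarity of objects x and y."""
    if len(x) != len(y):
        raise ValueError("objects must have the same number of attributes")
    sims = [sim(a, b) for a, b in zip(x, y)]
    if weights is None:                       # simple average, as used in the paper
        return sum(sims) / len(sims)
    return sum(w * s for w, s in zip(weights, sims)) / sum(weights)

placeholder_sim = lambda a, b: 1.0 - abs(a - b)   # stand-in for IFS-IBA similarity

x = [0.8, 0.3, 0.6]   # attribute values of object 1 (illustrative)
y = [0.7, 0.5, 0.6]   # attribute values of object 2 (illustrative)
print(overall_similarity(x, y, placeholder_sim))  # approximately 0.9
```

Replacing the uniform average with expert-given weights, or with a logic-based aggregation function, only changes the final aggregation step, not the per-attribute comparison.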
Furthermore, the IFS-IBA similarity measure may be utilized as the basis of an IF recommender system for stock trading. Due to the manner of stock price representation (as a 2-tuple or 4-tuple) and the dependencies between samples, the proposed measure appears suitable for discovering patterns in stock price movement.
Acknowledgments
The authors gratefully thank the Editor-in-Chief and the anonymous referees for their valuable comments and suggestions, which have improved the paper.