Available online at www.sciencedirect.com

ScienceDirect

Procedia Computer Science 159 (2019) 1027–1034

www.elsevier.com/locate/procedia
23rd International Conference on Knowledge-Based and Intelligent Information & Engineering Systems
Comparison measures and their usage with examples
Kalle Saastamoinen*

Department of Military Technology, National Defence University, P.O. Box 7, FI-00861 Helsinki
Abstract

In this article we review the theory of comparison, which is crucial when decisions have to be made. We present Minkowski distance-based operators, non-metric pseudo-similarity-based operators, and combined t-norm and t-conorm operators. We study these comparison measures against the measures generally used for comparison, namely min, max, Euclidean distance and exponent.

The practical part of this article shows that the presented comparison operators work well in a classification example and in an expert system example. As the classification example we use classification done for the Image Segmentation data, and the expert system example is about defining an athlete's aerobic and anaerobic thresholds. The classification results are better than those given by decision tree, KNN and SVM classifiers for Image Segmentation. The classification accuracy given by the Schweizer & Sklar - Łukasiewicz equivalence was 89.70%, while the best result from the classifiers selected for comparison was given by the medium decision tree classifier with 72.00% accuracy. In the expert system, our comparison measure-based method gives estimations similar to those given by sport medicine experts.

© 2019 The Authors. Published by Elsevier B.V.
This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0/)
Peer-review under responsibility of KES International.

Keywords: similarity; t-norm; t-conorm; generalized mean; classification; expert system
1. Introduction

Automated decision making is becoming more and more important as artificial intelligence is used virtually everywhere a man uses machines. Decisions are always based on comparison, and comparison itself is based on the function chosen to measure similarity. The selected function is in this article referred to as a comparison measure. When we do comparison, we need some ideal values. The fields of problem solving, categorization, data mining, classification, memory retrieval, inductive reasoning, and cognitive processes in general require that the matter of how to assess sameness is understood. In practice, however, methods used for comparison are based on a very intuitive understanding of the theoretical backgrounds of mathematics or a naive idea of coupling the measure of sameness to
* Corresponding author. Tel.: +358407606489.
E-mail address: [email protected]
1877-0509 © 2019 The Authors. Published by Elsevier B.V.
This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0/)
Peer-review under responsibility of KES International.
10.1016/j.procs.2019.09.270
the Euclidean distance. In general, the mathematical machinery used in engineering leans heavily on the use of metrics. It should always be the problem at hand that determines the choice of an axiom set and the similarity measure that is used. This article is based on the theory of comparison measures presented in the thesis [1]. It is shown that these methods give good results.

The article is organized as follows. In the first section, logical comparison measures like (1), the combined comparison measure (CCM) (2), and the theory behind them are presented. In the second section we present a classification example with the Image Segmentation data set and an expert system for finding athletes' aerobic and anaerobic thresholds. In the third section the results achieved are presented and compared to the results achieved with a KNN classifier, a medium tree classifier and an SVM using the Matlab Classification Learner application; results from the threshold automata are also presented. In the fourth section conclusions are drawn and future directions of this research are given.
2. Similarities and Logical Comparison Measures

Basic types of comparison measures are similarities that are based on the use of equivalences. Equivalence is logically a sentence which states that something exists if and only if something else exists. For this reason, it is naturally suitable for the comparison of different objects. Implication means that if something exists then something else will exist, which makes implications suitable for decision-making. Rule-based classifiers have been quite popular in classification processes [2, 3].

Logically, equivalence can be seen as a conjunction of two implications. One way of extending the implication is to start from the classical logic formula x → y ≡ ¬x ∨ y for all x, y ∈ {0, 1}. This is done by interpreting the disjunction as a t-conorm and negation by the use of the standard fuzzy complement (¬x ≡ 1 − x). This results in defining the implication with the formula a → b ≡ S(¬a, b) for all a, b ∈ [0, 1], which gives rise to the family of many-valued implications called S-implications. Equivalences used in this article are of the form a ↔ b ≡ T(a → b, b → a), where T refers to a t-norm. It is noted that logical equivalences created this way are not reflexive. Symmetry and the triangle inequality have also been questioned, for example in [4]. Tversky has shown in particular that measures of similarity that conform to human perception do not satisfy the usual properties of a metric [4, 5].

The second type of comparison measures used in this article is based on a compensated combination of t-norms and t-conorms. Connectives play an important role when we model reality by equations. For example, when linguistic interpretations such as "AND" or "OR" are used for connectives in conjunction and disjunction, this normally does not require or mean crisp connectives. These connectives perform best when they are used to some degree. In such cases connectives called t-norms or t-conorms may be used.
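The construction above can be sketched concretely. As an illustration, the Łukasiewicz t-norm and t-conorm are used as the pair (T, S) below; the article's measures use parameterized families, so this is only a minimal sketch of the S-implication and derived equivalence:

```python
def t_norm(a, b):
    """Lukasiewicz t-norm: T(a, b) = max(0, a + b - 1)."""
    return max(0.0, a + b - 1.0)

def t_conorm(a, b):
    """Lukasiewicz t-conorm: S(a, b) = min(1, a + b)."""
    return min(1.0, a + b)

def s_implication(a, b):
    """S-implication: a -> b == S(not a, b), with not a == 1 - a."""
    return t_conorm(1.0 - a, b)

def equivalence(a, b):
    """Equivalence: a <-> b == T(a -> b, b -> a)."""
    return t_norm(s_implication(a, b), s_implication(b, a))

# For the Lukasiewicz pair this reduces to 1 - |a - b|:
print(equivalence(0.3, 0.8))  # 0.5
print(equivalence(1.0, 0.7))  # neutrality of truth, I(1, b) = b: 0.7
```

Note that the equivalence is 1 only when a = b, which is consistent with the non-reflexivity remark above for other t-norm families.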
The t-norm gives a minimum type of compensation, while the t-conorm gives a maximum type of compensation. In practice t-norms give more weight to the low values, while t-conorms give more weight to the high values in the range in which they are used. In practice, neither of these connectives alone fits the collected data appropriately; a lot of information is left between these two connectives. When dealing with t-norms and t-conorms the question is how to combine them in a meaningful way. Neither of these connectives alone gives a general compensation for the values to which they are applied. For this reason, one should use a measure that compensates for this gap in values between the two norms. Dyckhoff and Pedrycz [6] show how the generalized mean works as the compensative connective between the minimum and maximum connectives. The scope of aggregation operators is demonstrated in Fig. (1).
Fig. 1. Compensation of t-norms and t-conorms
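The compensative role of the generalized mean can be illustrated numerically: as the exponent m grows from large negative to large positive values, the weighted generalized mean sweeps from a min-like (t-norm-like) to a max-like (t-conorm-like) connective. A minimal sketch:

```python
def generalized_mean(values, m, weights=None):
    """Weighted generalized mean: (sum_i w_i * x_i^m)^(1/m).
    With equal weights, m -> -inf approaches min and m -> +inf
    approaches max; intermediate m values fill the gap between them."""
    n = len(values)
    w = weights if weights is not None else [1.0 / n] * n
    return sum(wi * x ** m for wi, x in zip(w, values)) ** (1.0 / m)

xs = [0.2, 0.5, 0.9]
print(generalized_mean(xs, -20))  # close to min(xs) = 0.2
print(generalized_mean(xs, 1))    # arithmetic mean, about 0.533
print(generalized_mean(xs, 20))   # close to max(xs) = 0.9
```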
The first researchers to compensate between t-norms and t-conorms were Zimmermann and Zysno [7]. They used the weighted geometric mean in order to bridge the gap between fuzzy intersections and unions. When one uses the
geometric mean, equal compensation is allocated to all the values, and problems might occur if some of the combined values are relatively very low or high.

We see that if we use equivalences of the form E(a, b) ≡ T(a → b, b → a) in expert systems and define the expert opinion to be optimal, i.e. give the expert opinion the valuation a = 1, we reach E(a, b) ≡ b. This follows from the following: 1. implications have neutrality of truth, I(1, b) = b, and the boundary condition I(a, b) = 1 if and only if a ≤ b; 2. every t-norm must satisfy the boundary condition T(1, y_i) = y_i.

We have parameterized all the comparison measures used and combined them using the generalized mean (GM); this yields, for example, the following measures.

Example 1. Equivalence comparison measure based on Schweizer & Sklar - Łukasiewicz:

E_{SSL}(f_1, f_2) = \left( \sum_{i=1}^{n} w_i \left( 1 - \left| f_1(i)^p - f_2(i)^p \right| \right)^{m/p} \right)^{1/m}    (1)
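Measure (1) can be sketched directly; feature values are assumed to be scaled to [0, 1] and the weights w_i to sum to 1:

```python
def e_ssl(f1, f2, weights, p, m):
    """Schweizer & Sklar - Lukasiewicz equivalence comparison measure (1):
    E = (sum_i w_i * (1 - |f1_i^p - f2_i^p|)^(m/p))^(1/m)."""
    total = sum(w * (1.0 - abs(a ** p - b ** p)) ** (m / p)
                for w, a, b in zip(weights, f1, f2))
    return total ** (1.0 / m)

f = [0.2, 0.7, 0.9]
g = [0.3, 0.6, 0.9]
w = [1 / 3, 1 / 3, 1 / 3]
print(e_ssl(f, f, w, p=1.4, m=1.4))  # identical vectors compare to 1.0
print(e_ssl(f, g, w, p=1.4, m=1.4))  # close vectors score just below 1.0
```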
Example 2. Combined comparison measure (CCM) based on the t-norm and t-conorm with a generalized mean and weights [1]:

C(f_1, f_2) = \left( \sum_{i=1}^{n} \left( w_i\, T_i^p(f_1(i), f_2(i)) + (1 - w_i)\, S_i^p(f_1(i), f_2(i)) \right)^m \right)^{1/m}    (2)
where i = 1, . . . , n, p is a parameter tied to the corresponding class of fuzzy intersections T_i and unions S_i, and w_i are weights. In the case of expert systems, supposing that the values set by the experts are ideal, that is valuation x_i = 1 for all i, one always ends up with the formula:

Example 3.

E(x, y) = \left( \sum_{i=1}^{n} w_i\, y_i^m \right)^{1/m}    (3)
where y_i are the values given to the expert system, m is a mean value, w_i are weights and i = 1, . . . , n.

3. Classification and Defining Aerobic and Anaerobic Thresholds

3.1. Classification

Often, a set of data is already grouped into a number of classes and the task is to predict which class each new data point belongs to. This is referred to as a classification problem. We divide the data randomly into a training set and a test set [8].
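The random split described above can be sketched as follows; the 70/30 fraction is an assumption for illustration, as the paper does not state its split ratio here:

```python
import random

def train_test_split(data, test_fraction=0.3, seed=42):
    """Shuffle the data and split it into training and test sets."""
    rows = list(data)
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * (1.0 - test_fraction))
    return rows[:cut], rows[cut:]

# 2310 instances with 7 class labels, mirroring the data set sizes in [10]:
data = [[i, i % 7] for i in range(2310)]
train, test = train_test_split(data)
print(len(train), len(test))  # 1617 693
```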
Segmentation is the first essential and important part of low-level vision [9]. It means partitioning the measured image data so that the segments describe the image as well as possible. The resulting digital representation is called a pattern vector. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. Here we used the Image Segmentation data from the UC Irvine Machine Learning Repository [10]. This data set has 2310 instances, 19 attributes and 7 classes: brick face, sky, foliage, cement, window, path and grass.

• Image Segmentation data: the task is to determine which of the seven classes the given attributes represent. The instances were drawn randomly from a database of 7 outdoor images. The images were manually segmented to create a classification for every pixel. Each instance is a 3x3 region. The number of instances is 2310, the number of attributes is 19, and there are no missing attribute values.

3.2. Description of the Similarity Based Classifiers

The classification task is described in the flowchart in Fig. (2). The classification procedure uses part of the data (learning) for weight optimization, either using differential evolution or randomized weights depending on the choice. After this the rest of the data (test) is used for classification and the result is saved. When the loop has been run N times, the max, min and mean values are saved and the same classification procedure is repeated for the next parameter value p. After all p-values have been processed, the loop is started again from the next mean value m. When we choose to use randomized weights (RW) instead of differential evolution (DE), we achieve a significant saving in computing time. Evenly distributed randomized weights were used for each p- and m-value a hundred times. The step size was 0.01 and the tested interval was [−8, 8] for both the p- and m-value. We used a sample size of 90, which is big enough for the results to be statistically meaningful [11].
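The randomized-weight loop described above can be sketched as follows. This is a hypothetical skeleton: `classify_with_measure` is a placeholder name for the similarity classification step (the stub below only lets the sketch run), and only the p/m grid with repeated random weight draws follows the paper's description:

```python
import random

def classify_with_measure(train, test, weights, p, m):
    """Placeholder: classify `test` against `train` with comparison measure
    parameters (weights, p, m) and return the classification accuracy."""
    return random.random()  # stand-in accuracy so the sketch runs

def rw_search(train, test, step=0.01, trials=100, lo=-8.0, hi=8.0):
    """For each (p, m) pair on the grid, draw evenly distributed random
    weights `trials` times and record min/mean/max accuracy."""
    results = {}
    p = lo
    while p <= hi:
        m = lo
        while m <= hi:
            accs = []
            for _ in range(trials):
                w = [random.random() for _ in range(len(train[0]) - 1)]
                s = sum(w)
                w = [wi / s for wi in w]  # normalize the random weights
                accs.append(classify_with_measure(train, test, w, p, m))
            results[(round(p, 2), round(m, 2))] = (
                min(accs), sum(accs) / len(accs), max(accs))
            m += step
        p += step
    return results

# Coarse demo grid (the paper uses step 0.01 with 100 random draws):
res = rw_search([[0.1, 0.2, 0], [0.3, 0.4, 1]], [[0.2, 0.3, 0]],
                step=8.0, trials=5)
print(len(res))  # 9 grid cells: p, m in {-8, 0, 8}
```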
3.3. Defining Aerobic and Anaerobic Thresholds

We wanted to create a system which automatically estimates an athlete's aerobic and anaerobic thresholds from data measured in the laboratory. The measurement data sets were from KIHU Research Institute for Olympic Sports. Each file in the set contains an athlete's measurements during an incremental workout. In all we had 154 data files, each including 11 variables. The variables used for defining the aerobic and anaerobic thresholds are the following: 1) content of lactic acid in capillary blood, 2) ventilation, 3) consumption of oxygen, 4) production of carbon dioxide, 5) relative amount of oxygen in respiration air, 6) relative amount of carbon dioxide in respiration air, 7) ventilation equivalent for oxygen, 8) ventilation equivalent for carbon dioxide, 9) respiration quotient, 10) pulse in beats per minute and 11) time in minutes [12].

The aerobic threshold means the point where anaerobic production of energy begins. This can be defined quite easily for people who are in good condition. However, this point may be impossible to define for people whose physical condition is questionable, because their lack of aerobic endurance and lactic acid handling ability causes the lactic acid level to rise above the starting level too early. The criteria deduced from the experts' instructions for defining the aerobic threshold were the following:
1. Pulse is about 40 beats per minute below the maximal pulse.
2. Content of lactic acid in capillary blood begins to rise.
3. Content of lactic acid in capillary blood is about 1.0–2.5 mmol per liter.
4. Ventilation begins to rise from the beginning level.
5. Relative amount of oxygen in respiration air reaches its maximum.
6. Ventilation equivalent for oxygen is at its lowest.
7. Lactic acid divided by consumption of oxygen is at its lowest.
When the load is raised over the aerobic threshold, the muscles begin to work at the aerobic-anaerobic level. If the load is raised enough, the anaerobic production of energy will increase beyond the point where the muscles' ability to remove lactic acid and control acidity is no longer sufficient. This point is the anaerobic threshold. The corresponding criteria for defining the anaerobic threshold were the following:
Fig. 2. Simplified Flow Chart of the Classification Procedure
1. Pulse is about 15 beats per minute below the maximal pulse.
2. Content of lactic acid in capillary blood is about 2.5–4.0 mmol per liter.
3. Content of lactic acid in capillary blood begins to rise radically.
4. Ventilation equivalent for carbon dioxide changes radically.
5. Ventilation equivalent for oxygen begins to rise radically.
6. Relative amount of oxygen in respiration air begins to drop.
This was the data we had at hand when we started to create a system for automated threshold definition. Because the criteria are vague in nature, fuzzy logic is a suitable modeling method for this system.
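A vague criterion such as "lactic acid about 1.0–2.5 mmol per liter" is naturally captured by a fuzzy membership function. The sketch below uses a trapezoidal shape with hypothetical breakpoints; the paper selects its actual membership functions together with sport medicine experts:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside (a, d), 1 on [b, c], linear ramps
    in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical breakpoints for "about 1.0-2.5 mmol/l" (illustration only):
mu_lactate = lambda x: trapezoid(x, 0.5, 1.0, 2.5, 3.5)
print(mu_lactate(1.8))  # 1.0, fully inside the plateau
print(mu_lactate(3.0))  # 0.5, only partially compatible with the criterion
```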
First, we define the measures used for detecting the thresholds:

S_{\omega Aet}(P_{Aet}, P_i) = \left( \sum_{i=1}^{n} \omega_i\, \mu_{Aet_i}(P_i)^{p_i \cdot m} \right)^{1/m}    (4)

for the aerobic threshold and

S_{\omega Ant}(P_{Ant}, P_i) = \left( \sum_{i=1}^{n} \omega_i\, \mu_{Ant_i}(P_i)^{p_i \cdot m} \right)^{1/m}    (5)
for the anaerobic threshold. The next task was to define the weights ω_i, the right mean value m and the fitting factors p_i for the equations above. Making this computational model had the following steps:
1. Interpolation of the measurement data on pulse and ventilation of oxygen.
2. Setting the criteria for the aerobic and anaerobic thresholds in co-operation with sport medicine experts.
3. Selection of the fuzzy membership functions which correspond as well as possible to the criteria set by the experts.
4. Fuzzification of the interpolated data with the membership functions.
5. Computing the partial similarities and combining them into the total similarity.
6. Estimating parameters for the partial similarities with differential evolution and combining the best parameters into the model.
7. The model is ready for action.

We noticed that the model with individual parameter values and the generalized mean gives the best results. The weighted version was better than the non-weighted one.

4. Results and Comparison

The classification procedure described above gave an 89.70% classification result, visualized in Fig. (3). We also tested classification with the combined Yager measure, which is based on the Yager (1980) [13] class of t-norms and t-conorms combined using the generalized mean and weights as shown in [1]. This measure gave 69.78% classification accuracy. A Reichenbach-based similarity measure [14] gave 85.28% accuracy. The Matlab Classification Learner application with the medium decision tree classifier gave 72.00% accuracy, which was the best result from the classifiers selected for comparison; this result is visualized in Fig. (4). We tested all the other classification methods of Matlab Classification Learner, including KNN and SVM, but they gave worse results. Weighted KNN gave 70.9%, which was the closest to the decision tree classification result. As a result of our classification we got the parameter value p, the mean value m and the weights, which can now be used to classify the segment data [10], and the comparison measure based on Schweizer & Sklar - Łukasiewicz (1) gets the form:
E_{SSL}(f_1, f_2) = \left( \sum_{i=1}^{19} w_i \left( 1 - \left| f_1(i)^{1.4} - f_2(i)^{1.4} \right| \right) \right)^{1/1.4}    (6)
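With the fitted values p = m = 1.4 the inner exponent m/p of measure (1) collapses to 1, which is why it does not appear in (6). A self-contained sketch of the fitted measure follows; the 19 optimized weights are not reported in the text, so normalized random weights stand in for them here:

```python
import random

def e_ssl_fitted(f1, f2, weights):
    """The fitted measure (6) with p = m = 1.4 over 19 attributes."""
    total = sum(w * (1.0 - abs(a ** 1.4 - b ** 1.4))
                for w, a, b in zip(weights, f1, f2))
    return total ** (1.0 / 1.4)

random.seed(0)
w = [random.random() for _ in range(19)]
s = sum(w)
w = [x / s for x in w]                   # weights normalized to sum to 1
f = [random.random() for _ in range(19)]  # a feature vector scaled to [0, 1]
print(e_ssl_fitted(f, f, w))             # identical vectors -> 1.0
```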
Fig. 3. Image Segmentation data Classification Results Using Similarity
Fig. 4. Image Segmentation data Confusion matrix Using Medium Tree
For the aerobic threshold, the results did not show any statistical difference at the 95.0% confidence level when we applied weights to the similarity measure. For the anaerobic threshold, the results were better when weights were used, but already the un-weighted version of our system worked so well that there was no statistically significant difference from the expert values at the 95.0% confidence level.
5. Conclusions

In this paper a method was proposed for creating a similarity classifier for image data classification. The classification results were good, and as a result they gave the designed comparison measure (6), which can be used to classify the Image Segmentation data [10]. The results were achieved using a quite coarse step size of 0.01 for the p and m values. The results will likely improve with a smaller step size. Testing with different comparison measures may also lead to better results. One can also try differential evolution [15] for finding the correct weights. As a final conclusion, the method presented in this article can be used to design classifiers when image data is given. In threshold detection our model proved to be effective, provided we can rely on our experts' conclusions. In the future, the method presented here could be used to model non-athletes' thresholds and, more generally, in other systems where experts' opinions are used.
Acknowledgements

We are grateful to the National Defence University, which gave us the opportunity to conduct this research.

References

[1] Saastamoinen K., Many Valued Algebraic Structures as Measures of Comparison, Acta Universitatis Lappeenrantaensis, PhD Thesis, 2008.
[2] Kuncheva L. I., How good are fuzzy if-then classifiers?, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 30(4), pp. 501-509, 2000.
[3] Leski J. M., Fuzzy-Means Clustering and Its Application to a Fuzzy Rule-Based Classifier: Toward Good Generalization and Good Interpretability, IEEE Transactions on Fuzzy Systems, 23(4), pp. 802-812, 2015.
[4] Tversky A., Features of similarity, Psychological Review, 84(4), pp. 327-352, 1977.
[5] Tversky A. and Krantz D.H., The Dimensional Representation and the Metric Structure of Similarity Data, Journal of Mathematical Psychology, pp. 572-596, 1970.
[6] Dyckhoff H. and Pedrycz W., Generalized Means as Model of Compensative Connectives, Fuzzy Sets and Systems, 14, pp. 143-154, 1984.
[7] Zimmermann H.-J. and Zysno P., Latent connectives in human decision making, Fuzzy Sets and Systems, 4, pp. 37-51, 1980.
[8] Hastie T. and Tibshirani R., The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Springer Series in Statistics, Springer, New York, 2001.
[9] Pal N. R. and Pal S. K., A review on image segmentation techniques, Pattern Recognition, 26(9), pp. 1277-1294, 1993.
[10] UC Irvine Machine Learning Repository, http://archive.ics.uci.edu/ml/, [Accessed January 15, 2017].
[11] Corder G. W. and Foreman D. I., Nonparametric Statistics for Non-Statisticians: A Step-by-Step Approach, John Wiley & Sons, 2009.
[12] Saastamoinen K. and Ketola J., Fuzzy Logic and Differential Evolution Based Expert System for Defining Top Athlete's Aerobic and Anaerobic Thresholds, Journal of Advanced Computational Intelligence and Intelligent Informatics, 9(5), pp. 534-539, 2005.
[13] Yager R.R., On a General Class of Fuzzy Connectives, Fuzzy Sets and Systems, 4, pp. 235-242, 1980.
[14] Saastamoinen K., Classification of data with Similarity Classifier, International Work-Conference on Time Series, keynote speech, ITISE 2016 Proceedings, 2016.
[15] Price K.V., Storn R.M. and Lampinen J.A., Differential Evolution - A Practical Approach to Global Optimization, Springer, Natural Computing Series, 2005.