Approximations and uncertainty measures in incomplete information systems




Information Sciences 198 (2012) 62–80



Jianhua Dai a,b,*, Qing Xu a

a College of Computer Science, Zhejiang University, Hangzhou 310027, China
b Center for the Study of Language and Cognition, Zhejiang University, Hangzhou 310028, China

* Corresponding author at: College of Computer Science, Zhejiang University, Hangzhou 310027, China. E-mail address: [email protected] (J. Dai). doi:10.1016/j.ins.2012.02.032

Article history:
Received 13 September 2011
Received in revised form 15 February 2012
Accepted 19 February 2012
Available online 1 March 2012

Keywords:
Rough set theory
Incomplete information systems
Uncertainty measures
Approximations
Accuracy measure

Abstract

There are mainly two methodologies dealing with the uncertainty measurement issue in rough set theory: the pure rough set approach and the information theory approach. The pure rough set approach is based on the concepts of accuracy, roughness and approximation accuracy proposed by Pawlak. The information theory approach is based on Shannon's entropy or its variants. Several authors have extended the information theory approach to incomplete information systems. However, there are few studies on extending the pure rough set approach to incomplete information systems. This paper focuses on constructing uncertainty measures in incomplete information systems by the pure rough set approach. Three types of definitions of lower and upper approximations and the corresponding uncertainty measurement concepts, including accuracy, roughness and approximation accuracy, are investigated. Theoretical analysis indicates that two of the three types can be used to evaluate the uncertainty in incomplete information systems. Experiments on incomplete real-life data sets have been conducted to test the two selected types (the first type and the third type) of uncertainty measures. Results show that the two types of uncertainty measures are effective.

© 2012 Elsevier Inc. All rights reserved.

1. Introduction

Rough set theory [37–41], introduced by Pawlak, is a useful mathematical tool for dealing with vague and uncertain information. Many achievements have been made in rough set theory. For example, Grzymala-Busse [19] developed the system LERS for rule induction, which can handle inconsistencies and induce both certain and possible rules. Polkowski [42] worked on using granular rough mereological structures in the classification of data. Skowron et al. [50] worked on the relation and combination of rough set theory and granular computing [66]. Lin proposed a granular computing model based on binary relations [31]. Yao studied three-way decisions in the probabilistic rough set model [64,65]. Rough set theory has been applied successfully in many fields including machine learning, intelligent data analysis and decision making [2,7,11,13,15,16,21–23,25,28,32,35,36,45,48–50,52–55,58–60,64,65].

Uncertainty measurement is an important issue in rough set theory. There are mainly two methodologies dealing with the uncertainty measurement problem in rough set theory: the pure rough set approach and the information theory approach. In the pure rough set approach, the accuracy measure and the roughness measure are important numerical characterizations that quantify the imprecision of a rough set caused by its boundary region. Pawlak [37] proposed two numerical measures, accuracy and roughness, to evaluate the uncertainty of a rough set. Recently, Yao [63] studied two definitions of approximations and the associated measures based on equivalence relations. In the information theory approach, entropy and its variants have been introduced into rough set theory [4,5,9,12,14,29,34,56].

The classical rough set model is based on an equivalence relation, i.e. a partition. Thus, the corresponding uncertainty measures are not suitable for incomplete information systems. An incomplete information system contains some missing values [8,18,20,24,33,44,51,57]. Several authors have defined uncertainty measures in incomplete information systems by the information theory approach. Liang et al. introduced the concepts of information entropy, rough entropy, knowledge granulation and granularity measure in incomplete information systems [30]. Bianucci et al. [6] explored different (quasi) partial orderings and proposed an entropy and co-entropy approach for the uncertainty measurement of coverings (applicable to incomplete information systems). Qian et al. defined the combination entropy and combination granulation in incomplete information systems and gave some of their properties [43]. Recently, Qian et al. also proposed the conditional combination entropy and mutual information, and defined a variant of combination entropy based on maximal consistent blocks [46]. However, there are few studies on the uncertainty measurement issue in incomplete information systems by the pure rough set approach. In this paper, we mainly focus on extending Pawlak's pure rough set uncertainty measures to incomplete information systems.

The remainder of the paper is organized as follows. Some basic concepts in rough set theory are reviewed in Section 2. Three types of lower and upper approximations and their corresponding uncertainty measures are investigated in Section 3, and some of their properties are studied. In Section 4, the two selected types of uncertainty measures are tested on some real-life data. Section 5 concludes the paper.

2. Preliminaries

In this section, we review some basic concepts in rough set theory, which can be found in [27,37,41,53].

2.1. Incomplete information systems and incomplete decision systems

An information system is a quadruple $IS = \langle U, A, V, f\rangle$, where $U$ is a non-empty finite set of objects, called the universe; $A$ is a non-empty finite set of attributes; $V$ is the union of the attribute domains, $V = \bigcup_{a \in A} V_a$, where $V_a$ is the value set of attribute $a$, called the domain of $a$; and $f: U \times A \to V$ is an information function which assigns particular values from the attribute domains to objects, such that $\forall a \in A,\ x \in U$, $f(a, x) \in V_a$, where $f(a, x)$ denotes the value of attribute $a$ for object $x$.

If there exist $x \in U$ and $a \in A$ such that $f(a, x)$ equals a missing value (a null or unknown value, denoted by $*$), which means that $* \in V_a$ for at least one attribute $a \in A$, then we call the information system an incomplete information system (IIS). Otherwise, the information system is a complete information system (CIS).

A decision system (DS) is a quadruple $DS = \langle U, C \cup D, V, f\rangle$, where $C$ is the set of conditional attributes, $D$ is the set of decision attributes, and $C \cap D = \emptyset$; $V$ is the union of the attribute domains, $V = V_C \cup V_D = \bigcup\{V_a \mid a \in C\} \cup \bigcup\{V_d \mid d \in D\}$. If $* \notin V_D$ but $* \in V_C$, then we call the decision system an incomplete decision system (IDS). If $* \notin V_C$ and $* \notin V_D$, then the decision system is a complete decision system (CDS).

2.2. Approximations in rough set theory

Given a complete information system $CIS = \langle U, A, V, f\rangle$, for any subset of attributes $R \subseteq A$ there is a binary relation $IND(R)$ on $U$, called the indiscernibility relation, defined by

\[ IND(R) = \{(x, y) \mid \forall a \in R,\ f(a, x) = f(a, y)\} \tag{1} \]

Example 1 (Example for the indiscernibility relation). Table 1 shows a complete information system $CIS = \langle U, A, V, f\rangle$, where $U = \{1, 2, 3, 4\}$ and $A = \{a_1, a_2, a_3, a_4\}$. Suppose $R = \{a_1, a_4\} \subseteq A$; then

\[ IND(R) = \{(1, 1), (1, 3), (2, 2), (3, 1), (3, 3), (4, 4)\} \]

Obviously, $IND(R)$ is an equivalence relation, which is reflexive, symmetric and transitive. The family of all equivalence classes of $IND(R)$ will be denoted by $U/IND(R)$, or simply $U/R$; $[x]_R$ is the equivalence class of the $R$-indiscernibility relation containing $x$.

Table 1
An information table.

U    a1   a2   a3   a4
1    1    1    2    3
2    2    3    1    1
3    1    1    2    3
4    2    1    1    2
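As a quick illustration of Example 1, the equivalence classes $U/R$ can be computed by grouping objects with identical value vectors on $R$. The following Python sketch is only illustrative (the table is hard-coded from Table 1; the helper name `equivalence_classes` is ours, not from the paper):

```python
# Minimal sketch: the partition U/R induced by the indiscernibility relation
# IND(R) of Eq. (1), for the complete information table of Table 1.

TABLE1 = {  # object -> {attribute: value}
    1: {'a1': 1, 'a2': 1, 'a3': 2, 'a4': 3},
    2: {'a1': 2, 'a2': 3, 'a3': 1, 'a4': 1},
    3: {'a1': 1, 'a2': 1, 'a3': 2, 'a4': 3},
    4: {'a1': 2, 'a2': 1, 'a3': 1, 'a4': 2},
}

def equivalence_classes(table, R):
    """Group objects that share the same value vector on the attribute subset R."""
    classes = {}
    for x, row in table.items():
        key = tuple(row[a] for a in R)
        classes.setdefault(key, set()).add(x)
    return list(classes.values())

print(equivalence_classes(TABLE1, ['a1', 'a4']))   # [{1, 3}, {2}, {4}], as in Example 1
```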



Let $X$ be a subset of $U$. The lower approximation, the upper approximation and the boundary region are defined, respectively, as follows:

\[ \underline{R}(X) = \bigcup_{x \in U} \{[x]_R : [x]_R \subseteq X\} \tag{2} \]
\[ \overline{R}(X) = \bigcup_{x \in U} \{[x]_R : [x]_R \cap X \neq \emptyset\} \tag{3} \]
\[ BN_R(X) = \overline{R}(X) - \underline{R}(X) \tag{4} \]

From the definition, we can conclude that approximations are expressed in terms of granules of knowledge. The lower approximation of a set is the union of all granules which are entirely included in the set. The upper approximation of a set is the union of all granules which have a non-empty intersection with the set. The boundary region of a set is the difference between the upper approximation and the lower approximation.

The lower approximation of a set $X$ with respect to $IND(R)$ is the set of all objects which certainly belong to $X$ with respect to $IND(R)$. The upper approximation of a set $X$ with respect to $IND(R)$ is the set of all objects which possibly belong to $X$ with respect to $IND(R)$. The boundary region of a set $X$ with respect to $IND(R)$ is the set of all objects which belong with certainty neither to $X$ nor to $X^c$ with respect to $IND(R)$, where $X^c$ denotes the complement of $X$ in $U$.
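Continuing the sketch above, the lower and upper approximations of Eqs. (2)-(4) can be obtained by one pass over the equivalence classes. This is again an illustrative fragment, not code from the paper:

```python
# Minimal sketch of Eqs. (2)-(4): Pawlak lower/upper approximations and the
# boundary region of X, given the family of equivalence classes U/R.

def approximations(classes, X):
    X = set(X)
    lower = set().union(*[c for c in classes if c <= X] or [set()])   # Eq. (2)
    upper = set().union(*[c for c in classes if c & X] or [set()])    # Eq. (3)
    return lower, upper, upper - lower                                # Eq. (4)

U_R = [{1, 3}, {2}, {4}]            # U/R for Table 1 with R = {a1, a4}
print(approximations(U_R, {1, 2}))  # ({2}, {1, 2, 3}, {1, 3})
```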

Approximations have the following properties:

(L1) $\underline{R}(X) = [\overline{R}(X^c)]^c$, where $X^c$ denotes the complement of $X$ in $U$
(L2) $\underline{R}(U) = U$
(L3) $\underline{R}(X \cap Y) = \underline{R}(X) \cap \underline{R}(Y)$
(L4) $\underline{R}(X \cup Y) \supseteq \underline{R}(X) \cup \underline{R}(Y)$
(L5) $X \subseteq Y \Rightarrow \underline{R}(X) \subseteq \underline{R}(Y)$
(L6) $\underline{R}(\emptyset) = \emptyset$
(L7) $\underline{R}(X) \subseteq X$
(L8) $X \subseteq \underline{R}(\overline{R}(X))$
(L9) $\underline{R}(X) = \underline{R}(\underline{R}(X))$
(L10) $\overline{R}(X) = \underline{R}(\overline{R}(X))$
(U1) $\overline{R}(X) = [\underline{R}(X^c)]^c$
(U2) $\overline{R}(\emptyset) = \emptyset$
(U3) $\overline{R}(X \cup Y) = \overline{R}(X) \cup \overline{R}(Y)$
(U4) $\overline{R}(X \cap Y) \subseteq \overline{R}(X) \cap \overline{R}(Y)$
(U5) $X \subseteq Y \Rightarrow \overline{R}(X) \subseteq \overline{R}(Y)$
(U6) $\overline{R}(U) = U$
(U7) $X \subseteq \overline{R}(X)$
(U8) $X \supseteq \overline{R}(\underline{R}(X))$
(U9) $\overline{R}(X) = \overline{R}(\overline{R}(X))$
(U10) $\underline{R}(X) = \overline{R}(\underline{R}(X))$
(CO) $\underline{R}(X^c \cup Y) \subseteq [\underline{R}(X)]^c \cup \underline{R}(Y)$
(LU) $\underline{R}(X) \subseteq \overline{R}(X)$

Properties (L1) and (U1) state that the two approximations are dual to each other. Properties (L5) and (U5) state that the approximation operators are monotonic with respect to set inclusion.

Let $CDS = \langle U, C \cup D, V, f\rangle$ be a complete decision system, $U/D = \{Y_1, Y_2, \ldots, Y_m\}$ be the set of decision classes of $CDS$, and $R \subseteq C$ be an attribute set. Then, the $R$-lower and $R$-upper approximations of $U/D$ are defined as

\[ \underline{R}(U/D) = \underline{R}Y_1 \cup \underline{R}Y_2 \cup \cdots \cup \underline{R}Y_m \tag{5} \]
\[ \overline{R}(U/D) = \overline{R}Y_1 \cup \overline{R}Y_2 \cup \cdots \cup \overline{R}Y_m \tag{6} \]

The $R$-lower approximation of $U/D$ is also well known as the $R$-positive region $POS_R(D)$.

2.3. Uncertainty measures in rough set theory

Rough sets can also be characterized numerically by the accuracy measure and the roughness measure, which can be used for evaluating the uncertainty of a set. Besides, the approximation accuracy can be used to evaluate the uncertainty of a rough classification [37]. The definitions of these uncertainty measures are as follows.

Definition 1 [37]. Let $CIS = \langle U, A, V, f\rangle$ be an information system, $X \subseteq U$ be a subset of the universe, and $R \subseteq A$ be an attribute subset. The accuracy of the set $X$ with respect to $R$ is

\[ \alpha_R(X) = \frac{|\underline{R}(X)|}{|\overline{R}(X)|} \tag{7} \]



The roughness of set X with respect to R is

\[ \rho_R(X) = 1 - \alpha_R(X) = 1 - \frac{|\underline{R}(X)|}{|\overline{R}(X)|} \tag{8} \]

It should be noticed that the roughness measure is in fact the well-known Marczewski-Steinhaus distance between the lower and upper approximations, according to Yao [62].

Definition 2 [37]. Let $CDS = \langle U, C \cup D, V, f\rangle$ be a complete decision system, $U/D = \{Y_1, Y_2, \ldots, Y_m\}$ be a classification of the universe $U$, and $R \subseteq C$ be an attribute set. The approximation accuracy of $U/D$ by $R$ is defined as

\[ \alpha_R(U/D) = \frac{|\underline{R}(U/D)|}{|\overline{R}(U/D)|} = \frac{\sum_{Y_i \in U/D} |\underline{R}Y_i|}{\sum_{Y_i \in U/D} |\overline{R}Y_i|} \tag{9} \]

According to the definitions of these measures, the accuracy measure equals the degree of completeness of the knowledge about the given object set $X$, while the roughness measure represents the incompleteness of that knowledge. Meanwhile, the approximation accuracy provides the percentage of possibly correct decisions when classifying objects by employing the attribute set $R$.

There is a partial order $\preceq$ on the set of all classifications of $U$. Let $CIS = \langle U, A, V, f\rangle$ be a complete information system and $P, Q \subseteq A$. One can define

\[ P \preceq Q \iff \forall x \in U,\ [x]_P \subseteq [x]_Q \tag{10} \]

If $P \preceq Q$, then knowledge $Q$ is said to be coarser than knowledge $P$ (or knowledge $P$ is finer than knowledge $Q$). If $P \preceq Q$ and $P \neq Q$, $Q$ is said to be strictly coarser than $P$ (or $P$ is strictly finer than $Q$), denoted by $P \prec Q$. Obviously, $\preceq$ can represent the granularity of knowledge.

If the accuracy measure, the roughness measure and the approximation accuracy measure are reasonable, they should have the following properties:

(Monotonicity - accuracy) Let $CIS = \langle U, A, V, f\rangle$ be an information system, $P, Q \subseteq A$. If $P \preceq Q$, then $\alpha_P(X) \geq \alpha_Q(X)$.
(Monotonicity - roughness) Let $CIS = \langle U, A, V, f\rangle$ be an information system, $P, Q \subseteq A$. If $P \preceq Q$, then $\rho_P(X) \leq \rho_Q(X)$.
(Monotonicity - classification) Let $CDS = \langle U, C \cup D, V, f\rangle$ be a complete decision system, $P, Q \subseteq C$. If $P \preceq Q$, then $\alpha_P(U/D) \geq \alpha_Q(U/D)$.

Since $R$ is an equivalence relation in the Pawlak rough set model, the accuracy measure, roughness measure and approximation accuracy measure are monotonic with respect to the granularity of knowledge, as shown in the following two theorems [37]. Hence, these measures are reasonable uncertainty measures in classical rough set theory.

Theorem 1. Let $CIS = \langle U, A, V, f\rangle$ be an information system, $P, Q \subseteq A$. If $P \preceq Q$, then $\alpha_P(X) \geq \alpha_Q(X)$ and $\rho_P(X) \leq \rho_Q(X)$.

Theorem 2. Let $CDS = \langle U, C \cup D, V, f\rangle$ be a complete decision system, $P, Q \subseteq C$. If $P \preceq Q$, then $\alpha_P(U/D) \geq \alpha_Q(U/D)$.

2.4. Tolerance relation and knowledge granularity in incomplete information systems

Definition 3 ([26,27]). Given an incomplete information system $IIS = \langle U, A, V, f\rangle$ with $* \in V = \bigcup_{a \in A} V_a$, for any subset of attributes $B \subseteq A$, let $T^{IIS}(B)$ denote the binary tolerance relation between objects that are possibly indiscernible in terms of the values of the attributes in $B$. $T^{IIS}(B)$ is defined as

\[ T^{IIS}(B) = \{(x, y) \mid \forall a \in B,\ f(a, x) = f(a, y) \vee f(a, x) = * \vee f(a, y) = *\} \tag{11} \]

$T^{IIS}(B)$ is reflexive and symmetric, but not necessarily transitive.

Definition 4. The tolerance class of an object $x$ with respect to an attribute set $B$ is defined by

\[ T^{IIS}_B(x) = \{y \mid (x, y) \in T^{IIS}(B)\} = \{y \mid \forall a \in B,\ f(a, x) = f(a, y) \vee f(a, x) = * \vee f(a, y) = *\} \tag{12} \]
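A tolerance class of Eq. (12) can be computed by a pairwise check in which a missing value matches any value. The following sketch is illustrative only (the names are ours; '*' encodes the missing value; the data are those of Table 2 used in Example 2 below):

```python
# Minimal sketch of Eqs. (11)-(12): tolerance classes in an incomplete
# information system, with '*' as the missing value (data from Table 2).

MISSING = '*'

TABLE2 = {  # object -> {attribute: value}
    1: {'a1': MISSING, 'a2': 1, 'a3': 2, 'a4': 3},
    2: {'a1': 2, 'a2': 3, 'a3': 1, 'a4': 1},
    3: {'a1': 1, 'a2': MISSING, 'a3': 2, 'a4': 3},
    4: {'a1': 2, 'a2': 1, 'a3': MISSING, 'a4': 2},
}

def tolerant(table, x, y, B):
    """(x, y) is in T_IIS(B): on every attribute of B the values agree or one is missing."""
    return all(table[x][a] == table[y][a]
               or MISSING in (table[x][a], table[y][a])
               for a in B)

def tolerance_class(table, x, B):
    """T_B(x) = {y | (x, y) in T_IIS(B)}."""
    return {y for y in table if tolerant(table, x, y, B)}

print(tolerance_class(TABLE2, 1, ['a1', 'a2']))   # {1, 3, 4}, as in Example 2
```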

Liang et al. [30] extended the partial order $\preceq$ to incomplete information systems based on the above tolerance relation.

Definition 5 [30]. Let $IIS = \langle U, A, V, f\rangle$ be an incomplete information system, $P, Q \subseteq A$. We say that $Q$ is coarser than $P$ (or $P$ is finer than $Q$), denoted by $P \preceq Q$, if and only if $\forall x \in U,\ T^{IIS}_P(x) \subseteq T^{IIS}_Q(x)$. If $P \preceq Q$ and $P \neq Q$, $Q$ is said to be strictly coarser than $P$ (or $P$ is strictly finer than $Q$), denoted by $P \prec Q$.



Note that if $P \supseteq Q$, then $P \preceq Q$.

Example 2. Table 2 shows an incomplete information table $IIS = \langle U, A, V, f\rangle$, where $U = \{1, 2, 3, 4\}$ is the universe and $A = \{a_1, a_2, a_3, a_4\}$ is the attribute set. Let $P, Q \subseteq A$, $P = \{a_1, a_2\}$ and $Q = \{a_1\}$. It is obvious that $P \supseteq Q$. The tolerance classes of each object with respect to $P$ and $Q$ are:

$T^{IIS}_P(1) = \{1, 3, 4\}$    $T^{IIS}_Q(1) = \{1, 2, 3, 4\}$
$T^{IIS}_P(2) = \{2\}$    $T^{IIS}_Q(2) = \{1, 2, 4\}$
$T^{IIS}_P(3) = \{1, 3\}$    $T^{IIS}_Q(3) = \{1, 3\}$
$T^{IIS}_P(4) = \{1, 4\}$    $T^{IIS}_Q(4) = \{1, 2, 4\}$

The result shows that $T^{IIS}_P(1) \subseteq T^{IIS}_Q(1)$, $T^{IIS}_P(2) \subseteq T^{IIS}_Q(2)$, $T^{IIS}_P(3) \subseteq T^{IIS}_Q(3)$ and $T^{IIS}_P(4) \subseteq T^{IIS}_Q(4)$. From Definition 5, we have $P \preceq Q$.

Cattaneo and Ciucci [8] defined a partial order $\preceq'$ between different incomplete information systems in the dynamic rough set context [10].

Definition 6 ([8,10]). Let $IIS_1 = \langle U, A, V, f_1\rangle$ and $IIS_2 = \langle U, A, V, f_2\rangle$ be two incomplete information systems that have the same object set, attribute set and attribute domains, but may have different attribute values on some objects. If $\forall a \in A,\ x \in U$, $f_1(a, x) = * \Rightarrow f_2(a, x) = *$ and $f_2(a, x) \neq * \Rightarrow f_2(a, x) = f_1(a, x)$, we say that $IIS_2$ is coarser than $IIS_1$ (or $IIS_1$ is finer than $IIS_2$), denoted by $IIS_1 \preceq' IIS_2$.

Note that if $IIS_1 \prec' IIS_2$ strictly, then $\exists a \in A,\ x \in U$ such that $f_1(a, x) \neq *$ but $f_2(a, x) = *$. It should be pointed out that this paper does not consider dynamic rough set problems. We use the partial order $\preceq'$ to evaluate the uncertainty measures of incomplete decision systems. For example, when some of the condition attribute values change to unknown values, the uncertainty of the decision system should increase if an uncertainty measure is reasonable.

If $IIS_1 \preceq' IIS_2$, then there are two cases for the values of $f_1(a, x)$ and $f_2(a, x)$:

(a) $\forall a \in A,\ x \in U$, $f_2(a, x) = f_1(a, x)$;
(b) $\exists a \in A,\ x \in U$ such that $f_2(a, x) \neq f_1(a, x)$, in which case $f_2(a, x) = *$ and $f_1(a, x) \neq *$.

Actually, if some known values in $IIS_1$ become unknown (because of mistakes in acquiring the values), we obtain $IIS_2$. Conversely, if some unknown values in $IIS_2$ become known (more information available), we obtain $IIS_1$. Thus $\preceq'$ can be used to evaluate the granularity (finer or coarser) of knowledge with respect to the degree of missing values.

Theorem 3. Let $IIS_1 = \langle U, A, V, f_1\rangle$ and $IIS_2 = \langle U, A, V, f_2\rangle$ be two incomplete information systems, $B \subseteq A$. If $IIS_1 \preceq' IIS_2$, then $\forall x \in U,\ T^{IIS_1}_B(x) \subseteq T^{IIS_2}_B(x)$.

Proof. Let $x \in U$ and $y \in T^{IIS_1}_B(x)$; consider each attribute $a \in B$.

(a) If $f_2(a, x) = f_1(a, x)$ and $f_2(a, y) = f_1(a, y)$, the tolerance condition on $a$ holds in $IIS_2$ because it holds in $IIS_1$.
(b) If a value has become missing, i.e. $f_2(a, y) = *$ while $f_1(a, y) \neq *$ (or analogously for $x$), the tolerance condition on $a$ holds in $IIS_2$ trivially.

Hence $y \in T^{IIS_2}_B(x)$ in either case, and therefore $T^{IIS_1}_B(x) \subseteq T^{IIS_2}_B(x)$. This completes the proof. □
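The coarser-than relation of Definition 6 is easy to check mechanically. A minimal, illustrative sketch (the function name is ours; '*' encodes the missing value); combined with the tolerance-class sketch of Section 2.4, it can be used to verify Theorem 3 on small examples:

```python
# Minimal sketch of Definition 6: is IIS2 coarser than IIS1 (IIS1 <=' IIS2)?
# Missing values may only appear (never disappear), and values that remain
# known in IIS2 must agree with IIS1.

MISSING = '*'

def coarser_or_equal(iis1, iis2):
    """Return True iff IIS1 <=' IIS2, for tables given as {object: {attribute: value}}."""
    for x, row in iis1.items():
        for a, v1 in row.items():
            v2 = iis2[x][a]
            if v1 == MISSING and v2 != MISSING:
                return False      # a missing value may not become known again
            if v2 != MISSING and v2 != v1:
                return False      # values that stay known must be unchanged
    return True
```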

3. Approximations and uncertainty measures in incomplete information systems

In this section, we investigate three types of definitions of the lower approximation and upper approximation for incomplete information systems. We focus on whether these definitions are appropriate for the uncertainty measures (accuracy, roughness and approximation accuracy). Actually, $\{T^{IIS}_B(x) \mid x \in U\}$ forms a covering of $U$ based on the tolerance relation discussed in the last section. Thus, one can obtain various definitions of lower and upper approximations based on coverings [47].

Table 2
Incomplete information table A.

U    a1   a2   a3   a4
1    *    1    2    3
2    2    3    1    1
3    1    *    2    3
4    2    1    *    2



The reason why we use these three types of definitions is that they are natural or direct extensions of the Pawlak rough set model, obtained just by replacing the equivalence relation with the tolerance relation, while some other definitions can be viewed as indirect extensions of the Pawlak rough set model. For example, some new concepts such as minimum description, friend or neighbour need to be constructed before defining the approximations. We leave the study of such other definitions to future papers. Actually, the three types of definitions of lower approximation and upper approximation were studied by Yao in [61]. Yao defined the three types of approximation operators based on an arbitrary relation, while in this paper the relation is confined to the tolerance relation defined in the last section.

3.1. The first type of approximations and corresponding measures

The first type of lower and upper approximations is the element-based type, which is defined as follows.

Definition 7. Let $IIS = \langle U, A, V, f\rangle$ be an incomplete information system, $B \subseteq A$, $X \subseteq U$. The first type of lower approximation and upper approximation of $X$ with respect to $B$ are defined as follows:

\[ \underline{apr}^1_{B,IIS}(X) = \{x \mid T^{IIS}_B(x) \subseteq X\} \]
\[ \overline{apr}^1_{B,IIS}(X) = \{x \mid T^{IIS}_B(x) \cap X \neq \emptyset\} \tag{13} \]

Based on the definition of lower and upper approximation, we can define the accuracy and roughness as:

\[ Accuracy^1_{B,IIS}(X) = \frac{|\underline{apr}^1_{B,IIS}(X)|}{|\overline{apr}^1_{B,IIS}(X)|} \]
\[ Roughness^1_{B,IIS}(X) = 1 - Accuracy^1_{B,IIS}(X) = 1 - \frac{|\underline{apr}^1_{B,IIS}(X)|}{|\overline{apr}^1_{B,IIS}(X)|} \tag{14} \]

For an incomplete decision system $IDS = \langle U, C \cup D, V, f\rangle$ and $B \subseteq C$, the approximation accuracy of $U/D$ with respect to $B$ can be defined as:

\[ AppAccuracy^1_{B,IIS}(U/D) = \frac{\sum_{Y_i \in U/D} |\underline{apr}^1_{B,IIS}(Y_i)|}{\sum_{Y_i \in U/D} |\overline{apr}^1_{B,IIS}(Y_i)|} \tag{15} \]
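Definition 7 and Eqs. (13)-(15) translate directly into code. The sketch below is illustrative (the names are ours; '*' is the missing value) and repeats the tolerance-class computation of Section 2.4 in compact form so that it stands alone:

```python
# Minimal sketch of Definition 7 and Eqs. (13)-(15): element-based (first-type)
# approximations and the derived accuracy / roughness / approximation accuracy.

MISSING = '*'

def tol_class(table, x, B):
    """Tolerance class T_B(x), cf. Eq. (12)."""
    return {y for y in table
            if all(table[x][a] == table[y][a]
                   or MISSING in (table[x][a], table[y][a]) for a in B)}

def apr1(table, B, X):
    """First-type lower and upper approximations of X (Eq. (13))."""
    X = set(X)
    lower = {x for x in table if tol_class(table, x, B) <= X}
    upper = {x for x in table if tol_class(table, x, B) & X}
    return lower, upper

def accuracy1(table, B, X):
    """Eq. (14); the roughness is 1 - accuracy."""
    lower, upper = apr1(table, B, X)
    return len(lower) / len(upper)

def app_accuracy1(table, B, decision_classes):
    """Approximation accuracy of U/D (Eq. (15))."""
    pairs = [apr1(table, B, Y) for Y in decision_classes]
    return sum(len(lo) for lo, _ in pairs) / sum(len(up) for _, up in pairs)
```

Applied to Table 4 of Section 3.5 with $B = \{c_1, c_2, c_3\}$ and $X = \{2, 4, 5, 6\}$, these functions reproduce the value $Accuracy^1_{B,IIS}(X) = 2/6 \approx 0.3333$ obtained in Example 6.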

Abu-Donia and Abo-Tabl discussed properties (L1)-(L10), (U1)-(U10), (CO) and (LU) based on Definition 7 in [1,3], respectively. We investigate some new properties which are important for deciding whether the corresponding uncertainty measurement concepts, including accuracy, roughness and approximation accuracy, are appropriate as uncertainty measures or not.

Theorem 4. For an incomplete information system IIS and any $X \subseteq U$, we have

(L11) $P \preceq Q \Rightarrow \underline{apr}^1_P(X) \supseteq \underline{apr}^1_Q(X)$;
(L12) $IIS_1 \preceq' IIS_2 \Rightarrow \underline{apr}^1_{B,IIS_1}(X) \supseteq \underline{apr}^1_{B,IIS_2}(X)$;
(U11) $P \preceq Q \Rightarrow \overline{apr}^1_P(X) \subseteq \overline{apr}^1_Q(X)$;
(U12) $IIS_1 \preceq' IIS_2 \Rightarrow \overline{apr}^1_{B,IIS_1}(X) \subseteq \overline{apr}^1_{B,IIS_2}(X)$.

Proof. (L11) Let $P \preceq Q$. According to the definition of $\preceq$, we have $T^{IIS}_P(x) \subseteq T^{IIS}_Q(x)$. For every $x \in \underline{apr}^1_{Q,IIS}(X)$ we have $T^{IIS}_Q(x) \subseteq X$, so $T^{IIS}_P(x) \subseteq X$, and thus $x \in \underline{apr}^1_{P,IIS}(X)$. Therefore, $\underline{apr}^1_{P,IIS}(X) \supseteq \underline{apr}^1_{Q,IIS}(X)$.

(U11) For every $x \in \overline{apr}^1_{P,IIS}(X)$ we have $T^{IIS}_P(x) \cap X \neq \emptyset$. Since $T^{IIS}_P(x) \subseteq T^{IIS}_Q(x)$, it follows that $T^{IIS}_Q(x) \cap X \neq \emptyset$, so $x \in \overline{apr}^1_{Q,IIS}(X)$. It follows that $\overline{apr}^1_{P,IIS}(X) \subseteq \overline{apr}^1_{Q,IIS}(X)$.

(L12) and (U12) can be proved similarly to (L11) and (U11) by Theorem 3. □

Based on the properties (L11), (L12), (U11) and (U12), we can easily obtain the following theorem.

Theorem 5. Let $P, Q, B \subseteq A$. The following properties hold for $Accuracy^1_{B,IIS}(X)$, $Roughness^1_{B,IIS}(X)$ and $AppAccuracy^1_{B,IIS}(U/D)$:

(Accuracy1) $P \preceq Q \Rightarrow Accuracy^1_{P,IIS}(X) \geq Accuracy^1_{Q,IIS}(X)$
(Accuracy2) $IIS_1 \preceq' IIS_2 \Rightarrow Accuracy^1_{B,IIS_1}(X) \geq Accuracy^1_{B,IIS_2}(X)$
(Roughness1) $P \preceq Q \Rightarrow Roughness^1_{P,IIS}(X) \leq Roughness^1_{Q,IIS}(X)$
(Roughness2) $IIS_1 \preceq' IIS_2 \Rightarrow Roughness^1_{B,IIS_1}(X) \leq Roughness^1_{B,IIS_2}(X)$
(AppAccuracy1) $P \preceq Q \Rightarrow AppAccuracy^1_{P,IIS}(U/D) \geq AppAccuracy^1_{Q,IIS}(U/D)$
(AppAccuracy2) $IIS_1 \preceq' IIS_2 \Rightarrow AppAccuracy^1_{B,IIS_1}(U/D) \geq AppAccuracy^1_{B,IIS_2}(U/D)$



Proof. (Accuracy1) From (L11) and (U11), we have

\[ Accuracy^1_{P,IIS}(X) = \frac{|\underline{apr}^1_{P,IIS}(X)|}{|\overline{apr}^1_{P,IIS}(X)|} \geq \frac{|\underline{apr}^1_{Q,IIS}(X)|}{|\overline{apr}^1_{P,IIS}(X)|} \geq \frac{|\underline{apr}^1_{Q,IIS}(X)|}{|\overline{apr}^1_{Q,IIS}(X)|} = Accuracy^1_{Q,IIS}(X) \]

(Accuracy2) By (L12) and (U12), we have

\[ Accuracy^1_{B,IIS_1}(X) = \frac{|\underline{apr}^1_{B,IIS_1}(X)|}{|\overline{apr}^1_{B,IIS_1}(X)|} \geq \frac{|\underline{apr}^1_{B,IIS_2}(X)|}{|\overline{apr}^1_{B,IIS_1}(X)|} \geq \frac{|\underline{apr}^1_{B,IIS_2}(X)|}{|\overline{apr}^1_{B,IIS_2}(X)|} = Accuracy^1_{B,IIS_2}(X) \]

(Roughness1) It is straightforward from (Accuracy1).
(Roughness2) It is straightforward from (Accuracy2).
(AppAccuracy1) From (L11) and (U11), we have that if $P \preceq Q$, then for all $Y_i \in U/D$,

\[ |\underline{apr}^1_{P,IIS}(Y_i)| \geq |\underline{apr}^1_{Q,IIS}(Y_i)|, \qquad |\overline{apr}^1_{P,IIS}(Y_i)| \leq |\overline{apr}^1_{Q,IIS}(Y_i)| \]

Thus,

\[ \sum_{Y_i \in U/D} |\underline{apr}^1_{P,IIS}(Y_i)| \geq \sum_{Y_i \in U/D} |\underline{apr}^1_{Q,IIS}(Y_i)|, \qquad \sum_{Y_i \in U/D} |\overline{apr}^1_{P,IIS}(Y_i)| \leq \sum_{Y_i \in U/D} |\overline{apr}^1_{Q,IIS}(Y_i)| \]

According to the definition of $AppAccuracy^1_{B,IIS}(U/D)$, we have $AppAccuracy^1_{P,IIS}(U/D) \geq AppAccuracy^1_{Q,IIS}(U/D)$.

(AppAccuracy2) It can be proved similarly to (AppAccuracy1) by (L12) and (U12). □

Theorem 5 shows that the accuracy, roughness and approximation accuracy measures of Definition 7 are monotonic with respect to the granularity of knowledge. Therefore, $Accuracy^1_{B,IIS}(X)$, $Roughness^1_{B,IIS}(X)$ and $AppAccuracy^1_{B,IIS}(U/D)$ can be used to measure the uncertainty of knowledge in incomplete information systems.

3.2. The second type of approximations and corresponding measures

The second type of lower approximation is the granule-based lower approximation, and the upper approximation is defined from the lower approximation by duality [61].

Definition 8. Let $IIS = \langle U, A, V, f\rangle$ be an incomplete information system, $B \subseteq A$, $X \subseteq U$. The second type of lower approximation and upper approximation of $X$ with respect to $B$ are defined as follows:

\[ \underline{apr}^2_{B,IIS}(X) = \bigcup \{T^{IIS}_B(x) \mid T^{IIS}_B(x) \subseteq X\} \]
\[ \overline{apr}^2_{B,IIS}(X) = \left[\underline{apr}^2_{B,IIS}(X^c)\right]^c \tag{16} \]

Based on the definition of lower and upper approximation, we can define the accuracy and roughness as:

\[ Accuracy^2_{B,IIS}(X) = \frac{|\underline{apr}^2_{B,IIS}(X)|}{|\overline{apr}^2_{B,IIS}(X)|} \]
\[ Roughness^2_{B,IIS}(X) = 1 - Accuracy^2_{B,IIS}(X) = 1 - \frac{|\underline{apr}^2_{B,IIS}(X)|}{|\overline{apr}^2_{B,IIS}(X)|} \tag{17} \]

For an incomplete decision system $IDS = \langle U, C \cup D, V, f\rangle$ and $B \subseteq C$, the approximation accuracy of $U/D$ with respect to $B$ can be defined as:

\[ AppAccuracy^2_{B,IIS}(U/D) = \frac{\sum_{Y_i \in U/D} |\underline{apr}^2_{B,IIS}(Y_i)|}{\sum_{Y_i \in U/D} |\overline{apr}^2_{B,IIS}(Y_i)|} \tag{18} \]
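For comparison, Definition 8 and Eq. (16) can be sketched as follows (illustrative code, with the same hypothetical `tol_class` helper repeated so the fragment stands alone; the upper approximation is obtained from the lower approximation of the complement, exactly as in the definition):

```python
# Minimal sketch of Definition 8 / Eq. (16): granule-based (second-type)
# lower approximation, with the upper approximation defined by duality.

MISSING = '*'

def tol_class(table, x, B):
    return {y for y in table
            if all(table[x][a] == table[y][a]
                   or MISSING in (table[x][a], table[y][a]) for a in B)}

def apr2(table, B, X):
    X, U = set(X), set(table)
    granules = [tol_class(table, x, B) for x in table]
    lower = set().union(*[g for g in granules if g <= X] or [set()])
    # Duality: upper(X) = complement of lower(X^c).
    lower_of_complement = set().union(*[g for g in granules if g <= U - X] or [set()])
    return lower, U - lower_of_complement
```

Running this sketch on Table 3 with $P = \{a_1, a_2, a_3, a_4\}$, $Q = \{a_1, a_2, a_3\}$ and $X = \{1, 2, 3, 4, 5\}$ reproduces the counter-example of Example 3 below.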

Abo-Tabl and Abu-Donia discussed properties (L1)-(L10), (U1)-(U10), (CO) and (LU) based on Definition 8 in [1,3], respectively. We investigate the properties (L11), (L12), (U11) and (U12) here.

Remark 1. None of (L11), (L12), (U11) and (U12) holds based on Definition 8.

Example 3 (Counter-example for (L11)). Table 3 shows an incomplete information table $IIS = \langle U, A, V, f\rangle$, where $U = \{1, 2, 3, 4, 5, 6, 7\}$ is the universe and $A = \{a_1, a_2, a_3, a_4\}$ is the attribute set. Let $P, Q \subseteq A$, where $P = \{a_1, a_2, a_3, a_4\}$ and $Q = \{a_1, a_2, a_3\}$. As we can see, $P \supseteq Q$, which means $P \preceq Q$. The tolerance classes of each object with respect to $P$ and $Q$ are


Table 3
Incomplete information table B.

U    a1   a2   a3   a4
1    *    1    2    3
2    2    3    1    1
3    1    *    2    3
4    2    1    *    2
5    3    *    2    2
6    2    *    1    2
7    *    2    2    *

$T^{IIS}_P(1) = \{1, 3\}$    $T^{IIS}_Q(1) = \{1, 3, 4, 5\}$
$T^{IIS}_P(2) = \{2\}$    $T^{IIS}_Q(2) = \{2, 6\}$
$T^{IIS}_P(3) = \{1, 3, 7\}$    $T^{IIS}_Q(3) = \{1, 3, 7\}$
$T^{IIS}_P(4) = \{4, 6\}$    $T^{IIS}_Q(4) = \{1, 4, 6\}$
$T^{IIS}_P(5) = \{5, 7\}$    $T^{IIS}_Q(5) = \{1, 5, 7\}$
$T^{IIS}_P(6) = \{4, 6\}$    $T^{IIS}_Q(6) = \{2, 4, 6\}$
$T^{IIS}_P(7) = \{3, 5, 7\}$    $T^{IIS}_Q(7) = \{3, 5, 7\}$

Let $X = \{1, 2, 3, 4, 5\}$; then

\[ \underline{apr}^2_{P,IIS}(X) = T^{IIS}_P(1) \cup T^{IIS}_P(2) = \{1, 2, 3\} \]

\[ \underline{apr}^2_{Q,IIS}(X) = T^{IIS}_Q(1) = \{1, 3, 4, 5\} \]

Note that $\underline{apr}^2_{P,IIS}(X) \supseteq \underline{apr}^2_{Q,IIS}(X)$ does not hold, although $P \preceq Q$ holds.

The upper approximation is intimately connected with the lower approximation, i.e. $\overline{apr}^2_{B,IIS}(X) = [\underline{apr}^2_{B,IIS}(X^c)]^c$. Since monotonicity does not hold for the lower approximation, the monotonicity property (U11) does not hold for the upper approximation either.

Example 4 (Counter-example for (U11), continued from Example 3). Let $Y = X^c = \{6, 7\}$; then

\[ \overline{apr}^2_{P,IIS}(Y) = \left[\underline{apr}^2_{P,IIS}(Y^c)\right]^c = \{4, 5, 6, 7\} \]
\[ \overline{apr}^2_{Q,IIS}(Y) = \left[\underline{apr}^2_{Q,IIS}(Y^c)\right]^c = \{2, 6, 7\} \]

Note that $\overline{apr}^2_{P,IIS}(Y) \subseteq \overline{apr}^2_{Q,IIS}(Y)$ does not hold, although $P \preceq Q$ holds. Since $\underline{apr}^2_{B,IIS}(X)$ and $\overline{apr}^2_{B,IIS}(X)$ do not satisfy monotonicity with respect to the attribute set, we can easily see that (L12) and (U12) do not hold either.

Remark 2. Since none of (L11), (L12), (U11) and (U12) holds for this type of definition of approximations, the corresponding properties of accuracy and roughness as in Theorem 5, namely (Accuracy1), (Accuracy2), (Roughness1), (Roughness2), (AppAccuracy1) and (AppAccuracy2), do not hold.

Example 5 (Counter-example for the uncertainty measures, continued from Example 3). Let $X = \{1, 2, 3, 4, 5\}$; then

\[ \underline{apr}^2_{P,IIS}(X) = T^{IIS}_P(1) \cup T^{IIS}_P(2) = \{1, 2, 3\} \]
\[ \overline{apr}^2_{P,IIS}(X) = \left[\underline{apr}^2_{P,IIS}(X^c)\right]^c = \emptyset^c = U \]

\[ \underline{apr}^2_{Q,IIS}(X) = T^{IIS}_Q(1) = \{1, 3, 4, 5\} \]
\[ \overline{apr}^2_{Q,IIS}(X) = \left[\underline{apr}^2_{Q,IIS}(X^c)\right]^c = \emptyset^c = U \]

Thus,

\[ Accuracy^2_{P,IIS}(X) = \frac{|\underline{apr}^2_{P,IIS}(X)|}{|\overline{apr}^2_{P,IIS}(X)|} = \frac{3}{7} \]
\[ Accuracy^2_{Q,IIS}(X) = \frac{|\underline{apr}^2_{Q,IIS}(X)|}{|\overline{apr}^2_{Q,IIS}(X)|} = \frac{4}{7} \]

Note that $Accuracy^2_{P,IIS}(X) \geq Accuracy^2_{Q,IIS}(X)$ does not hold, even though $P \preceq Q$.



Since the accuracy, roughness and approximation accuracy measures of Definition 8 are not monotonic with respect to the granularity of knowledge, we cannot use $Accuracy^2_{B,IIS}(X)$, $Roughness^2_{B,IIS}(X)$ or $AppAccuracy^2_{B,IIS}(U/D)$ to evaluate the uncertainty in incomplete information systems.

3.3. The third type of approximations and corresponding measures

The third type of upper approximation is the granule-based upper approximation, and the lower approximation is defined by duality [61].

Definition 9. Let $IIS = \langle U, A, V, f\rangle$ be an incomplete information system, $B \subseteq A$, $X \subseteq U$. The third type of lower approximation and upper approximation of $X$ with respect to $B$ are defined as follows:

\[ \underline{apr}^3_{B,IIS}(X) = \left[\overline{apr}^3_{B,IIS}(X^c)\right]^c \]
\[ \overline{apr}^3_{B,IIS}(X) = \bigcup \{T^{IIS}_B(x) \mid T^{IIS}_B(x) \cap X \neq \emptyset\} \tag{19} \]

Based on the definition of lower and upper approximation, we can define the accuracy and roughness as:

\[ Accuracy^3_{B,IIS}(X) = \frac{|\underline{apr}^3_{B,IIS}(X)|}{|\overline{apr}^3_{B,IIS}(X)|} \]
\[ Roughness^3_{B,IIS}(X) = 1 - Accuracy^3_{B,IIS}(X) = 1 - \frac{|\underline{apr}^3_{B,IIS}(X)|}{|\overline{apr}^3_{B,IIS}(X)|} \tag{20} \]

For an incomplete decision system $IDS = \langle U, C \cup D, V, f\rangle$ and $B \subseteq C$, the approximation accuracy of $U/D$ with respect to $B$ can be defined as:

\[ AppAccuracy^3_{B,IIS}(U/D) = \frac{\sum_{Y_i \in U/D} |\underline{apr}^3_{B,IIS}(Y_i)|}{\sum_{Y_i \in U/D} |\overline{apr}^3_{B,IIS}(Y_i)|} \tag{21} \]
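Definition 9 / Eq. (19) mirrors the previous sketch with the roles of the two approximations swapped. Again an illustrative fragment with assumed names, not code from the paper:

```python
# Minimal sketch of Definition 9 / Eq. (19): granule-based (third-type)
# upper approximation, with the lower approximation defined by duality.

MISSING = '*'

def tol_class(table, x, B):
    return {y for y in table
            if all(table[x][a] == table[y][a]
                   or MISSING in (table[x][a], table[y][a]) for a in B)}

def apr3(table, B, X):
    X, U = set(X), set(table)
    granules = [tol_class(table, x, B) for x in table]
    upper = set().union(*[g for g in granules if g & X] or [set()])
    # Duality: lower(X) = complement of upper(X^c).
    upper_of_complement = set().union(*[g for g in granules if g & (U - X)] or [set()])
    return U - upper_of_complement, upper
```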

As we can see, the second and the third types of definitions of the lower and upper approximations look similar. However, for the second type one first calculates the lower approximation and obtains the upper approximation by property (U1), while for the third type one first calculates the upper approximation and obtains the lower approximation using property (L1).

Abo-Tabl and Abu-Donia discussed properties (L1)-(L10), (U1)-(U10), (CO) and (LU) according to Definition 9 in [1,3], respectively. We investigate the properties (L11), (L12), (U11) and (U12) here.

Theorem 6. For an incomplete information system IIS and any $X \subseteq U$, we have

(L11) $P \preceq Q \Rightarrow \underline{apr}^3_P(X) \supseteq \underline{apr}^3_Q(X)$;
(L12) $IIS_1 \preceq' IIS_2 \Rightarrow \underline{apr}^3_{B,IIS_1}(X) \supseteq \underline{apr}^3_{B,IIS_2}(X)$;
(U11) $P \preceq Q \Rightarrow \overline{apr}^3_P(X) \subseteq \overline{apr}^3_Q(X)$;
(U12) $IIS_1 \preceq' IIS_2 \Rightarrow \overline{apr}^3_{B,IIS_1}(X) \subseteq \overline{apr}^3_{B,IIS_2}(X)$.

Proof. We first prove (U11) and (L11); then (L12) and (U12) can be proved easily.

(U11) $P \preceq Q$ implies $T^{IIS}_P(x) \subseteq T^{IIS}_Q(x)$ for all $x \in U$. For every granule $T^{IIS}_P(x)$ with $T^{IIS}_P(x) \cap X \neq \emptyset$, we also have $T^{IIS}_Q(x) \cap X \neq \emptyset$, which means $T^{IIS}_Q(x) \subseteq \overline{apr}^3_{Q,IIS}(X)$ and hence $T^{IIS}_P(x) \subseteq T^{IIS}_Q(x) \subseteq \overline{apr}^3_{Q,IIS}(X)$. Taking the union over all such granules, it follows that $\overline{apr}^3_{P,IIS}(X) \subseteq \overline{apr}^3_{Q,IIS}(X)$.

(L11) According to (U11), we have $\overline{apr}^3_{P,IIS}(X^c) \subseteq \overline{apr}^3_{Q,IIS}(X^c)$. Therefore, $\underline{apr}^3_{P,IIS}(X) = [\overline{apr}^3_{P,IIS}(X^c)]^c \supseteq [\overline{apr}^3_{Q,IIS}(X^c)]^c = \underline{apr}^3_{Q,IIS}(X)$.

(U12) and (L12) can be proved similarly to (U11) and (L11). This completes the proof. □

Since (L11), (L12), (U11) and (U12) hold based on Definition 9, we have the following theorem.

Theorem 7. Let $P, Q, B \subseteq A$. The following properties hold for $Accuracy^3_{B,IIS}(X)$, $Roughness^3_{B,IIS}(X)$ and $AppAccuracy^3_{B,IIS}(U/D)$:

(Accuracy1) $P \preceq Q \Rightarrow Accuracy^3_{P,IIS}(X) \geq Accuracy^3_{Q,IIS}(X)$
(Accuracy2) $IIS_1 \preceq' IIS_2 \Rightarrow Accuracy^3_{B,IIS_1}(X) \geq Accuracy^3_{B,IIS_2}(X)$
(Roughness1) $P \preceq Q \Rightarrow Roughness^3_{P,IIS}(X) \leq Roughness^3_{Q,IIS}(X)$
(Roughness2) $IIS_1 \preceq' IIS_2 \Rightarrow Roughness^3_{B,IIS_1}(X) \leq Roughness^3_{B,IIS_2}(X)$
(AppAccuracy1) $P \preceq Q \Rightarrow AppAccuracy^3_{P,IIS}(U/D) \geq AppAccuracy^3_{Q,IIS}(U/D)$
(AppAccuracy2) $IIS_1 \preceq' IIS_2 \Rightarrow AppAccuracy^3_{B,IIS_1}(U/D) \geq AppAccuracy^3_{B,IIS_2}(U/D)$



Proof. The proof is similar to the proof of Theorem 5. □

Similar to the first type of approximations and corresponding measures defined in Section 3.1, the accuracy, roughness and approximation accuracy of the third type can also be used to evaluate the uncertainty of knowledge in incomplete information systems.

3.4. Relationship among these three types of approximations

Theorem 8. The approximation operators have the following property:

\[ \underline{apr}^3_{B,IIS}(X) \subseteq \underline{apr}^1_{B,IIS}(X) \subseteq \underline{apr}^2_{B,IIS}(X) \subseteq X \subseteq \overline{apr}^2_{B,IIS}(X) \subseteq \overline{apr}^1_{B,IIS}(X) \subseteq \overline{apr}^3_{B,IIS}(X) \tag{22} \]

Proof. We prove the theorem as follows:

(a) Proof of $\underline{apr}^3_{B,IIS}(X) \subseteq \underline{apr}^1_{B,IIS}(X)$. Suppose $x \in \underline{apr}^3_{B,IIS}(X) = [\overline{apr}^3_{B,IIS}(X^c)]^c$; then $x \notin \overline{apr}^3_{B,IIS}(X^c)$. Obviously, $T^{IIS}_B(x)$ is not a subset of $\overline{apr}^3_{B,IIS}(X^c)$ (since $x \in T^{IIS}_B(x)$); thus, by the definition of $\overline{apr}^3_{B,IIS}(X^c)$, we have $T^{IIS}_B(x) \cap X^c = \emptyset$. It follows that $T^{IIS}_B(x) \subseteq X$. From the definition of $\underline{apr}^1_{B,IIS}(X)$, we have $x \in \underline{apr}^1_{B,IIS}(X)$. Therefore, $\underline{apr}^3_{B,IIS}(X) \subseteq \underline{apr}^1_{B,IIS}(X)$.

(b) Proof of $\underline{apr}^1_{B,IIS}(X) \subseteq \underline{apr}^2_{B,IIS}(X)$. Suppose $x \in \underline{apr}^1_{B,IIS}(X)$; then $T^{IIS}_B(x) \subseteq X$. Hence $T^{IIS}_B(x) \subseteq \underline{apr}^2_{B,IIS}(X)$, and obviously $x \in \underline{apr}^2_{B,IIS}(X)$. Therefore, $\underline{apr}^1_{B,IIS}(X) \subseteq \underline{apr}^2_{B,IIS}(X)$.

(c) It is straightforward that $\underline{apr}^2_{B,IIS}(X) \subseteq X \subseteq \overline{apr}^2_{B,IIS}(X)$.

(d) Proof of $\overline{apr}^2_{B,IIS}(X) \subseteq \overline{apr}^1_{B,IIS}(X)$. Suppose $x \in \overline{apr}^2_{B,IIS}(X)$; then $x \notin \underline{apr}^2_{B,IIS}(X^c)$. It follows that $T^{IIS}_B(x)$ is not a subset of $X^c$, so $T^{IIS}_B(x) \cap X \neq \emptyset$. Thus $x \in \overline{apr}^1_{B,IIS}(X)$. It follows that $\overline{apr}^2_{B,IIS}(X) \subseteq \overline{apr}^1_{B,IIS}(X)$.

(e) Proof of $\overline{apr}^1_{B,IIS}(X) \subseteq \overline{apr}^3_{B,IIS}(X)$. Suppose $x \in \overline{apr}^1_{B,IIS}(X)$; then $T^{IIS}_B(x) \cap X \neq \emptyset$. Hence $T^{IIS}_B(x) \subseteq \overline{apr}^3_{B,IIS}(X)$, and thus $x \in \overline{apr}^3_{B,IIS}(X)$. Therefore, $\overline{apr}^1_{B,IIS}(X) \subseteq \overline{apr}^3_{B,IIS}(X)$. □

Since only the first and the third types of definitions of accuracy, roughness and approximation accuracy can be used to measure the uncertainty of knowledge in incomplete information systems, here we just study the relationship between the first type of measures and the third type of measures.

Theorem 9. The uncertainty measures have the following properties:

\[
\begin{aligned}
Accuracy^1_{B,IIS}(X) &\geq Accuracy^3_{B,IIS}(X) \\
Roughness^1_{B,IIS}(X) &\leq Roughness^3_{B,IIS}(X) \\
AppAccuracy^1_{B,IIS}(U/D) &\geq AppAccuracy^3_{B,IIS}(U/D)
\end{aligned} \tag{23}
\]

Table 4
An incomplete decision table.

U    c1   c2   c3   c4   d
1    *    1    2    3    0
2    2    3    1    1    1
3    1    *    2    3    0
4    2    1    *    2    1
5    3    *    2    2    1
6    2    *    1    2    1
7    *    2    2    *    0

Table 5
Characteristics of testing data sets.

Data set                             Abbreviation   # Objects   # Attributes   # Classes
Balance scale                        Balance        635         5              3
Monk_1                               Monk_1         124         7              2
Monk_2                               Monk_2         169         7              2
Monk_3                               Monk_3         122         7              2
Car evaluation                       Car            1728        7              4
Tic_tac_toe endgame                  Tic_tac_toe    958         10             2
Zoo                                  Zoo            101         17             7
Dermatology                          Dermatology    366         34             6
Chess (King-Rook vs. King-Pawn)      Kr_vs_kp       3196        37             2



Proof. From Theorem 8 we have $\underline{apr}^3_{B,IIS}(X) \subseteq \underline{apr}^1_{B,IIS}(X)$ and $\overline{apr}^1_{B,IIS}(X) \subseteq \overline{apr}^3_{B,IIS}(X)$. Hence, we have

\[ Accuracy^1_{B,IIS}(X) = \frac{|\underline{apr}^1_{B,IIS}(X)|}{|\overline{apr}^1_{B,IIS}(X)|} \geq \frac{|\underline{apr}^3_{B,IIS}(X)|}{|\overline{apr}^1_{B,IIS}(X)|} \geq \frac{|\underline{apr}^3_{B,IIS}(X)|}{|\overline{apr}^3_{B,IIS}(X)|} = Accuracy^3_{B,IIS}(X) \]

Consequently, we have $Roughness^1_{B,IIS}(X) \leq Roughness^3_{B,IIS}(X)$. Similarly, we can prove $AppAccuracy^1_{B,IIS}(U/D) \geq AppAccuracy^3_{B,IIS}(U/D)$. □

3.5. Illustrative example

We have given three different types of definitions of the approximations and the corresponding uncertainty measures. In this subsection, we give an illustrative example that calculates the approximations and uncertainty measures step by step.

[Fig. 1 (panels (a)-(i)). The values of accuracy measures of the first 85% of all objects with respect to different sizes of attribute sets.]



Example 6. Table 4 shows an incomplete decision table $IDS = \langle U, C \cup D, V, f\rangle$, where $U = \{1, 2, 3, 4, 5, 6, 7\}$ is the universe, $C = \{c_1, c_2, c_3, c_4\}$ is the conditional attribute set, and $D = \{d\}$ is the decision attribute set. Let $X = \{2, 4, 5, 6\} \subseteq U$ and $B = \{c_1, c_2, c_3\} \subseteq C$. Then we can calculate the lower and upper approximations, the accuracy and roughness of $X$, and the approximation accuracy with respect to $B$ under the three kinds of definitions. First of all, we calculate the tolerance class of each object with respect to the attribute set $B$:

$T^{IIS}_B(1) = \{1, 3, 4, 5\}$
$T^{IIS}_B(2) = \{2, 6\}$
$T^{IIS}_B(3) = \{1, 3, 7\}$
$T^{IIS}_B(4) = \{1, 4, 6\}$
$T^{IIS}_B(5) = \{1, 5, 7\}$

[Fig. 2 (panels (a)-(i)). The values of roughness measure of the first 85% of all objects with respect to different sizes of attribute sets.]



$T^{IIS}_B(6) = \{2, 4, 6\}$
$T^{IIS}_B(7) = \{3, 5, 7\}$

$X = \{2, 4, 5, 6\}$, so $X^c = \{1, 3, 7\}$. The lower and upper approximations under the three kinds of definitions are calculated as follows:

(a) For the first kind of definition:

\[ \underline{apr}^1_{B,IIS}(X) = \{2, 6\}, \qquad \overline{apr}^1_{B,IIS}(X) = \{1, 2, 4, 5, 6, 7\} \]

[Fig. 3 (panels (a)-(i)). The values of approximation accuracy of U/D in 0.05IDS with respect to different sizes of attribute sets.]



(b) For the second kind of definition:

\[ \underline{apr}^2_{B,IIS}(X) = T^{IIS}_B(2) \cup T^{IIS}_B(6) = \{2, 4, 6\} \]
\[ \overline{apr}^2_{B,IIS}(X) = \left[\underline{apr}^2_{B,IIS}(X^c)\right]^c = \left[T^{IIS}_B(3)\right]^c = \{1, 3, 7\}^c = \{2, 4, 5, 6\} \]

(c) For the third kind of definition:

\[ \overline{apr}^3_{B,IIS}(X) = T^{IIS}_B(1) \cup T^{IIS}_B(2) \cup T^{IIS}_B(4) \cup T^{IIS}_B(5) \cup T^{IIS}_B(6) \cup T^{IIS}_B(7) = \{1, 2, 3, 4, 5, 6, 7\} \]
\[ \underline{apr}^3_{B,IIS}(X) = \left[\overline{apr}^3_{B,IIS}(X^c)\right]^c = \left[T^{IIS}_B(1) \cup T^{IIS}_B(3) \cup T^{IIS}_B(4) \cup T^{IIS}_B(5) \cup T^{IIS}_B(7)\right]^c = \{1, 3, 4, 5, 6, 7\}^c = \{2\} \]

From the results, we find that $\underline{apr}^3_{B,IIS}(X) \subseteq \underline{apr}^1_{B,IIS}(X) \subseteq \underline{apr}^2_{B,IIS}(X) \subseteq X \subseteq \overline{apr}^2_{B,IIS}(X) \subseteq \overline{apr}^1_{B,IIS}(X) \subseteq \overline{apr}^3_{B,IIS}(X)$, which coincides with Theorem 8. After obtaining the lower and upper approximations, we can calculate the accuracy and roughness easily:

[Fig. 4 (panels (a)-(i)). The values of accuracy measures of the first 85% of all objects with respect to all attributes at different rates of missing values.]


\[ Accuracy^1_{B,IIS}(X) = \frac{|\underline{apr}^1_{B,IIS}(X)|}{|\overline{apr}^1_{B,IIS}(X)|} = \frac{2}{6} = 0.3333 \]
\[ Accuracy^2_{B,IIS}(X) = \frac{|\underline{apr}^2_{B,IIS}(X)|}{|\overline{apr}^2_{B,IIS}(X)|} = \frac{3}{4} = 0.75 \]
\[ Accuracy^3_{B,IIS}(X) = \frac{|\underline{apr}^3_{B,IIS}(X)|}{|\overline{apr}^3_{B,IIS}(X)|} = \frac{1}{7} = 0.1429 \]
\[ Roughness^1_{B,IIS}(X) = 1 - Accuracy^1_{B,IIS}(X) = 0.6667 \]
\[ Roughness^2_{B,IIS}(X) = 1 - Accuracy^2_{B,IIS}(X) = 0.25 \]
\[ Roughness^3_{B,IIS}(X) = 1 - Accuracy^3_{B,IIS}(X) = 0.8571 \]

$U/D = \{\{1, 3, 7\}, \{2, 4, 5, 6\}\}$. Therefore, in order to calculate the approximation accuracy, we need to calculate the lower and upper approximations of the sets $Y_1 = \{1, 3, 7\}$ and $Y_2 = \{2, 4, 5, 6\}$. The lower and upper approximations of $Y_1 = \{1, 3, 7\}$ with respect to $B$ are:

[Fig. 5 (panels (a)-(i)). The values of roughness measure of the first 85% of all objects with respect to all attributes at different rates of missing values.]



\[ \underline{apr}^1_{B,IIS}(Y_1) = \{3\}, \qquad \overline{apr}^1_{B,IIS}(Y_1) = \{1, 3, 4, 5, 7\} \]
\[ \underline{apr}^2_{B,IIS}(Y_1) = \{1, 3, 7\}, \qquad \overline{apr}^2_{B,IIS}(Y_1) = \{1, 3, 5, 7\} \]
\[ \underline{apr}^3_{B,IIS}(Y_1) = \emptyset, \qquad \overline{apr}^3_{B,IIS}(Y_1) = \{1, 3, 4, 5, 6, 7\} \]

Thus, the approximation accuracies of the three types are:

\[ AppAccuracy^1_{B,IIS}(U/D) = \frac{|\underline{apr}^1_{B,IIS}(Y_1)| + |\underline{apr}^1_{B,IIS}(Y_2)|}{|\overline{apr}^1_{B,IIS}(Y_1)| + |\overline{apr}^1_{B,IIS}(Y_2)|} = \frac{1 + 2}{5 + 6} = 0.2727 \]
\[ AppAccuracy^2_{B,IIS}(U/D) = \frac{|\underline{apr}^2_{B,IIS}(Y_1)| + |\underline{apr}^2_{B,IIS}(Y_2)|}{|\overline{apr}^2_{B,IIS}(Y_1)| + |\overline{apr}^2_{B,IIS}(Y_2)|} = \frac{3 + 3}{4 + 4} = 0.75 \]
\[ AppAccuracy^3_{B,IIS}(U/D) = \frac{|\underline{apr}^3_{B,IIS}(Y_1)| + |\underline{apr}^3_{B,IIS}(Y_2)|}{|\overline{apr}^3_{B,IIS}(Y_1)| + |\overline{apr}^3_{B,IIS}(Y_2)|} = \frac{0 + 1}{6 + 7} = 0.0769 \]
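The whole of Example 6 can be replayed with a few lines of code. The fragment below is a self-contained, illustrative re-implementation of the sketches given in Section 3, restricted to $B = \{c_1, c_2, c_3\}$ of Table 4; it prints the three pairs of approximations and the accuracies 0.3333, 0.75 and 0.1429 computed above:

```python
# Illustrative check of Example 6 on Table 4 (attributes c1, c2, c3 only).

M = '*'
T4 = {1: (M, 1, 2), 2: (2, 3, 1), 3: (1, M, 2), 4: (2, 1, M),
      5: (3, M, 2), 6: (2, M, 1), 7: (M, 2, 2)}
U = set(T4)

def tc(x):  # tolerance class of x with respect to B = {c1, c2, c3}
    return {y for y in U if all(T4[x][i] == T4[y][i] or M in (T4[x][i], T4[y][i])
                                for i in range(3))}

gran = {x: tc(x) for x in U}

def apr1(X): return ({x for x in U if gran[x] <= X}, {x for x in U if gran[x] & X})
def apr2(X):
    lo = set().union(*[g for g in gran.values() if g <= X] or [set()])
    up = U - set().union(*[g for g in gran.values() if g <= U - X] or [set()])
    return lo, up
def apr3(X):
    up = set().union(*[g for g in gran.values() if g & X] or [set()])
    lo = U - set().union(*[g for g in gran.values() if g & (U - X)] or [set()])
    return lo, up

X = {2, 4, 5, 6}
for name, f in (('first', apr1), ('second', apr2), ('third', apr3)):
    lo, up = f(X)
    print(name, sorted(lo), sorted(up), round(len(lo) / len(up), 4))
```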

[Fig. 6 (panels (a)-(i)). The values of approximation accuracy of U/D with respect to all attributes at different rates of missing values.]



4. Empirical experiments

In Section 3, we found that, from the theoretical point of view, only the first and the third type can be used to evaluate the uncertainty of knowledge in incomplete information systems. In this section, we test these two types of measures (the first and the third type) on some real-life data sets. Nine real-life data sets available from the UCI Repository of Machine Learning Databases at the University of California [17] are used. The characteristics of the data sets are summarized in Table 5.

In the experiments, we randomly take away some known attribute values from the original data sets to create incomplete information systems. Here we define a measure called the incomplete rate as follows:

\[ inrate = \frac{\text{Unknown Values}}{|U| \times |C|} \]
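The masking procedure described in the next paragraph (creating 0.01IDS, ..., 0.1IDS) can be sketched as follows. This is illustrative code, not the authors' original scripts; each step hides a further 1% of the originally known conditional attribute values at random:

```python
# Illustrative sketch of the masking procedure used to build the 0.01IDS ...
# 0.1IDS series: each step replaces a further 1% of the |U| x |C| conditional
# attribute values (chosen at random among the still-known ones) by '*'.

import random

MISSING = '*'

def make_series(table, cond_attrs, steps=10, seed=0):
    rng = random.Random(seed)
    step_size = round(len(table) * len(cond_attrs) * 0.01)
    current = {x: dict(row) for x, row in table.items()}
    series = []
    for _ in range(steps):
        known = [(x, a) for x, row in current.items()
                 for a in cond_attrs if row[a] != MISSING]
        for x, a in rng.sample(known, step_size):
            current[x][a] = MISSING
        series.append({x: dict(row) for x, row in current.items()})
    return series   # series[k] has an incomplete rate of roughly (k + 1)%
```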

For a complete decision system $CDS = \langle U, C \cup D, V, f\rangle$, there are $|U|$ objects and $|C|$ conditional attributes, and hence $|U| \times |C|$ known conditional attribute values. Among all these values, we first take away 1% of the attribute values randomly and call the created data "0.01IDS", whose inrate = 1%. Then, based on "0.01IDS", we take away another 1% of the attribute values from the set of still-known attribute values and create the data "0.02IDS"; "0.03IDS" is created by taking away 1% of the attribute values from the known values of "0.02IDS"; and so on, until, taking away 1% of the attribute values from the known values of "0.09IDS", we get "0.1IDS". It is obvious that 0.01IDS $\preceq'$ 0.02IDS $\preceq'$ 0.03IDS $\preceq' \cdots \preceq'$ 0.1IDS.

Fig. 1a-i show the accuracy values of the first 85% of all objects with respect to different sizes of attribute sets in 0.05IDS. Fig. 2a-i show the values of the roughness measure of the first 85% of all objects with respect to different sizes of attribute sets in 0.05IDS. The X-axis represents the size of the attribute set, from one attribute to all attributes ($C$). The Y-axis represents the value of the measures. The values of the approximation accuracy of $U/D$ are described in Fig. 3a-i. Each figure has two lines. In Fig. 1, "Accuracy1" represents $Accuracy^1_{B,IIS}(X)$ and "Accuracy3" represents $Accuracy^3_{B,IIS}(X)$. In Fig. 2, "Roughness1" represents $Roughness^1_{B,IIS}(X)$ and "Roughness3" represents $Roughness^3_{B,IIS}(X)$. In Fig. 3, "Approximation Accuracy1" represents $AppAccuracy^1_{B,IIS}(U/D)$ while "Approximation Accuracy3" represents $AppAccuracy^3_{B,IIS}(U/D)$.

Fig. 4a-i and Fig. 5a-i show the values of the accuracy measure and the roughness measure of the first 85% of all objects with respect to all attributes, respectively. The X-axis represents the incomplete information system (0.01 represents 0.01IDS and so on), from 0.01IDS to 0.1IDS. The Y-axis represents the value of the measures. The values of the approximation accuracy of $U/D$ are described in Fig. 6a-i. Each figure has two lines, and each line has the same meaning as in Figs. 1-3.

From the figures, we can see that the accuracy and the approximation accuracy measures of both types get larger, and the roughness measure gets smaller, as the attribute set gets bigger. Meanwhile, the coarser the incomplete information system is, the smaller the accuracy measure and the approximation accuracy measure are, and the larger the roughness measure is. Moreover, the accuracy and approximation accuracy measures of the first type are larger than those of the third type, while the roughness measure of the first type is smaller than that of the third type. These results verify the properties (Accuracy1), (Accuracy2), (Roughness1), (Roughness2), (AppAccuracy1), (AppAccuracy2) and Theorem 9.

5. Conclusion

Uncertainty measurement is an important issue in rough set theory. In this paper, we investigate three types of lower and upper approximations and the corresponding accuracy, roughness and approximation accuracy in incomplete information systems. Mainly, the monotonicity of the approximations and the measures with respect to the granularity of the available knowledge is studied. We find that the first and the third type have the property of monotonicity while the second type does not. It follows that the first type of measures and the third type of measures can be used to evaluate the uncertainty in incomplete information systems, while it is not reasonable to apply the second type to the evaluation of uncertainty in incomplete information systems.
Furthermore, the relations between the first type and the third type of uncertainty measures are discussed. Finally, the first and the third types of measures are tested on some real-life data sets. The results show that these two kinds of measures can be used as uncertainty measures.

In this paper, we focus on different types of definitions of uncertainty measures and their properties by the pure rough set theory approach. In the future, we will consider the application of the presented uncertainty measures, especially in attribute reduction or rule generation in incomplete information systems.

Acknowledgement

The work is supported by the National Natural Science Foundation of China (Nos. 61070074 and 60703038).

References

[1] E.A. Abo-Tabl, A comparison of two kinds of definitions of rough approximations based on a similarity relation, Information Sciences 181 (2011) 2587–2596.
[2] H. Abu-Donia, Multi knowledge based rough approximations and applications, Knowledge-Based Systems 26 (2012) 20–29.



[3] H.M. Abu-Donia, Comparison between different kinds of approximations by using a family of binary relations, Knowledge-Based Systems 21 (2008) 911–919.
[4] T. Beaubouef, F.E. Petry, G. Arora, Information-theoretic measures of uncertainty for rough sets and rough relational databases, Information Sciences 109 (1998) 185–195.
[5] D. Bianucci, G. Cattaneo, Information entropy and granulation co-entropy of partitions and coverings: a summary, Transactions on Rough Sets 10 (2009) 15–66.
[6] D. Bianucci, G. Cattaneo, D. Ciucci, Entropies and co-entropies of coverings with application to incomplete information systems, Fundamenta Informaticae 75 (2007) 77–105.
[7] J. Blaszczynski, R. Slowinski, M. Szelag, Sequential covering rule induction algorithm for variable consistency rough set approaches, Information Sciences 181 (2011) 987–1002.
[8] G. Cattaneo, D. Ciucci, Investigation about time monotonicity of similarity and preclusive rough approximations in incomplete information systems, Lecture Notes on Artificial Intelligence 3066 (2004) 38–48.
[9] G. Cattaneo, D. Ciucci, D. Bianucci, Entropy and co-entropy of partitions and coverings with applications to roughness theory, Studies in Fuzziness and Soft Computing 224 (2008) 55–77.
[10] D. Ciucci, Classification of dynamics in rough sets, Lecture Notes on Artificial Intelligence 6086 (2010) 257–266.
[11] J.H. Dai, Rough 3-valued algebras, Information Sciences 178 (2008) 1986–1996.
[12] J.H. Dai, W. Wang, Q. Xu, H. Tian, Uncertainty measurement for interval-valued decision systems based on extended conditional entropy, Knowledge-Based Systems 27 (2012) 443–450.
[13] J. Derrac, C. Cornelis, S. Garcia, F. Herrera, Enhancing evolutionary instance selection algorithms by means of fuzzy rough set based feature selection, Information Sciences 181 (2012) 73–92.
[14] I. Duentsch, G. Gediga, Uncertainty measures of rough set prediction, Artificial Intelligence 106 (1998) 109–137.
[15] L. Feng, T. Li, D. Ruan, S. Gou, A vague-rough set approach for uncertain knowledge acquisition, Knowledge-Based Systems 24 (2011) 837–843.
[16] A. Formica, Semantic web search based on rough sets and fuzzy formal concept analysis, Knowledge-Based Systems 26 (2012) 40–47.
[17] A. Frank, A. Asuncion, UCI Machine Learning Repository, 2010.
[18] S. Greco, B. Matarazzo, R. Slowinski, Handling missing values in rough set analysis of multi-attribute and multi-criteria decision problems, Lecture Notes in Artificial Intelligence 1711 (1999) 146–157.
[19] J. Grzymala-Busse, A new version of the rule induction system LERS, Fundamenta Informaticae 31 (1997) 27–39.
[20] J. Grzymala-Busse, W. Rzasa, Local and global approximations for incomplete data, Lecture Notes in Computer Science 4259 (2006) 244–253.
[21] T. Herawan, M.M. Deris, J.H. Abawajy, A rough set approach for selecting clustering attribute, Knowledge-Based Systems 23 (2010) 220–231.
[22] Q. Hu, S. An, D. Yu, Soft fuzzy rough sets for robust feature evaluation and selection, Information Sciences 180 (2010) 4384–4400.
[23] J. Jelonek, K. Krawiec, R. Slowinski, Rough set reduction of attributes and their domains for neural networks, Computational Intelligence 11 (1995) 339–347.
[24] R. Jensen, Q. Shen, Interval-valued fuzzy-rough feature selection in datasets with missing values, in: Proceedings of 2009 IEEE International Conference on Fuzzy Systems, pp. 610–615.
[25] D. Kim, S. Bang, A handwritten numeral character classification using tolerant rough set, IEEE Transactions on Pattern Analysis and Machine Intelligence 22 (2002) 923–937.
[26] M. Kryszkiewicz, Rough set approach to incomplete information systems, Information Sciences 112 (1998) 39–49.
[27] M. Kryszkiewicz, Rules in incomplete information systems, Information Sciences 113 (1999) 271–292.
[28] Y.L. Li, J.F. Tang, K.S. Chin, Y. Han, X.G. Luo, A rough set approach for estimating correlation measures in quality function deployment, Information Sciences 189 (2012) 126–142.
[29] J. Liang, K.S. Chin, C. Dang, R.C.M. Yam, A new method for measuring uncertainty and fuzziness in rough set theory, International Journal of General Systems 31 (2002) 331–342.
[30] J. Liang, Z. Shi, D. Li, M.J. Wierman, Information entropy, rough entropy and knowledge granulation in incomplete information systems, International Journal of General Systems 35 (2006) 641–654.
[31] T.Y. Lin, Granular computing on binary relations II: rough set representations and belief functions, Rough Sets in Knowledge Discovery (1998) 121–140.
[32] N. Mac Parthalain, Q. Shen, Exploring the boundary region of tolerance rough sets for feature selection, Pattern Recognition 42 (2009) 655–667.
[33] Z. Meng, Z. Shi, A fast approach to attribute reduction in incomplete decision systems with tolerance relation-based rough sets, Information Sciences 179 (2009) 2774–2793.
[34] J.S. Mi, Y. Leung, W.Z. Wu, An uncertainty measure in partition-based fuzzy rough sets, International Journal of General Systems 34 (2005) 77–90.
[35] J.S. Mi, W.Z. Wu, W.X. Zhang, Approaches to knowledge reduction based on variable precision rough set model, Information Sciences 159 (2004) 255–272.
[36] F. Min, H. He, Y. Qian, W. Zhu, Test-cost-sensitive attribute reduction, Information Sciences 181 (2011) 4928–4942.
[37] Z. Pawlak, Rough Sets: Theoretical Aspects of Reasoning about Data, Kluwer Academic Publishers, Dordrecht, 1991.
[38] Z. Pawlak, Vagueness and uncertainty: a rough set perspective, Computational Intelligence 11 (1995) 227–232.
[39] Z. Pawlak, Rough set approach to knowledge-based decision support, European Journal of Operational Research 99 (1997) 48–57.
[40] Z. Pawlak, Rough set theory and its applications to data analysis, Cybernetics and Systems: An International Journal 29 (1998) 661–688.
[41] Z. Pawlak, A. Skowron, Rudiments of rough sets, Information Sciences 177 (2007) 3–27.
[42] L. Polkowski, Rough Sets: Mathematical Foundations, Springer, 2002.
[43] Y. Qian, J. Liang, Combination entropy and combination granulation in incomplete information system, Lecture Notes in Computer Science 4062 (2006) 184–190.
[44] Y. Qian, J. Liang, D. Li, F. Wang, N. Ma, Approximation reduction in inconsistent incomplete decision tables, Knowledge-Based Systems 23 (2010) 427–433.
[45] Y. Qian, J. Liang, W. Pedrycz, C. Dang, Positive approximation: an accelerator for attribute reduction in rough set theory, Artificial Intelligence 174 (2010) 597–618.
[46] Y. Qian, J. Liang, F. Wang, A new method for measuring the uncertainty in incomplete information systems, International Journal of Uncertainty Fuzziness and Knowledge-Based Systems 17 (2009) 855–880.
[47] P. Samanta, M.K. Chakraborty, Covering based approaches to rough sets and implication lattices, Lecture Notes on Artificial Intelligence 5908 (2009) 127–134.
[48] Q. Shen, R. Jensen, Selecting informative features with fuzzy-rough sets and its application for complex systems monitoring, Pattern Recognition 37 (2004) 1351–1363.
[49] J.Y. Shyng, H.M. Shieh, G.H. Tzeng, An integration method combining rough set theory with formal concept analysis for personal investment portfolios, Knowledge-Based Systems 23 (2010) 586–597.
[50] A. Skowron, J. Stepaniuk, R. Swiniarski, Modeling rough granular computing based on approximation spaces, Information Sciences 184 (2012) 20–43.
[51] J. Stefanowski, A. Tsoukias, Incomplete information tables and rough classification, Computational Intelligence 17 (2001) 545–566.
[52] J.H. Su, B.W. Wang, C.Y. Hsiao, V.S. Tseng, Personalized rough-set-based recommendation by integrating multiple contents and collaborative information, Information Sciences 180 (2010) 113–131.
[53] R. Swiniarski, A. Skowron, Rough set methods in feature selection and recognition, Pattern Recognition Letters 24 (2003) 833–849.



[54] S. Tsumoto, Automated extraction of medical expert system rules from clinical databases based on rough set theory, Information Sciences 112 (1998) 67–84.
[55] H. Wang, S. Wang, Discovering patterns of missing data in survey databases: an application of rough sets, Expert Systems with Applications 36 (2009) 6256–6260.
[56] M.J. Wierman, Measuring uncertainty in rough set theory, International Journal of General Systems 28 (1999) 283–297.
[57] W. Wu, W. Zhang, H. Li, Knowledge acquisition in incomplete fuzzy information systems via the rough set approach, Expert Systems 20 (2003) 280–286.
[58] W.Z. Wu, Y. Leung, Theory and applications of granular labelled partitions in multi-scale decision tables, Information Sciences 181 (2011) 3878–3897.
[59] W. Xu, Y. Li, X. Liao, Approaches to attribute reductions based on rough set and matrix computation in inconsistent ordered information systems, Knowledge-Based Systems 27 (2012) 78–91.
[60] J. Yao, J. Herbert, Financial time-series analysis with rough sets, Applied Soft Computing 9 (2009) 1000–1007.
[61] Y.Y. Yao, Relational interpretations of neighborhood operators and rough set approximation operators, Information Sciences 111 (1998) 239–259.
[62] Y.Y. Yao, Information granulation and rough set approximation, International Journal of Intelligent Systems 16 (2001) 87–104.
[63] Y.Y. Yao, Notes on rough set approximations and associated measures, Journal of Zhejiang Ocean University (Natural Science) 29 (2010) 399–410.
[64] Y.Y. Yao, Three-way decisions with probabilistic rough sets, Information Sciences 180 (2010) 341–353.
[65] Y.Y. Yao, The superiority of three-way decisions in probabilistic rough set models, Information Sciences 181 (2011) 1080–1096.
[66] L. Zadeh, Fuzzy sets and information granularity, Advances in Fuzzy Set Theory and Applications 11 (1979) 3–18.