Accuracy of approximation operators during covering evolutions

International Journal of Approximate Reasoning 117 (2020) 1–14


Zuoming Yu*, Dongqiang Wang (Department of Mathematics and Physics, Suzhou Vocational University, Suzhou 215000, PR China)

Article history: Received 29 March 2019; received in revised form 2 August 2019; accepted 28 October 2019; available online 31 October 2019.

Keywords: Covering evolution; Covering-based rough set; Neighborhood operator; Core system; Accuracy of approximation operator

Abstract

We address two issues in this paper: one is which properties are foundational for the multitudinous covering-based rough models regarded as generalizations of Pawlak's rough sets, i.e., which properties of covering-based rough sets guarantee their consistency with Pawlak's over partitions; the other is how to utilize this equivalence to simplify diverse covering-based approximation operators. We demonstrate that covering-based rough models are equivalent to Pawlak's on partitions if they satisfy granule selection principles, which are weaker than the combination of contraction, extension, monotonicity, addition and granularity. In order to take advantage of this equivalence, we illustrate a method named "covering evolution" to change granules from given coverings to corresponding partitions for covering-based approximation operators. The evolutions can be divided into three steps: at the beginning, coverings are transformed into 1-neighborhood systems based on some quintessential neighborhood operators; then, in the middle, more-refined 1-neighborhood systems are built by using "cores" from general topology; and at last, core systems (in fact, partitions) are extracted from these more-refined 1-neighborhood systems. We lay strong emphasis on the variations in accuracy of six representative covering-based approximation operators during three evolutions, two of which preserve the accuracy of the approximation operators. The investigation of covering evolutions helps us to establish the corresponding mapping relationship from covering spaces to partition spaces directly, and therefore provides a convenient method for choosing covering-based approximation operators that are consistent with the classical Pawlak rough sets over partitions. © 2019 Elsevier Inc. All rights reserved.

1. Introduction

The pioneering work on rough set theory can be traced to Pawlak in 1982 [18]. Two definable subsets, namely the lower and upper approximations, are assigned to each subset of a universe. Based on these crucial sets, more of the knowledge hidden in information systems can be expressed as decision rules [1,5,20,29]. As an effective tool to describe and extract useful information, rough set theory has been successfully applied in data mining, rule extraction, granular computing, process control, conflict analysis, medical diagnosis and other fields [13,15,17,19,28,38,41].
Pawlak's rough sets have been extended to covering-based rough sets due to the limitation of equivalence relations [2,3,6–8,16,21,23,25]. Covering-based approximation operators were first presented by Zakowski [39]. Since then,

* Corresponding author. E-mail addresses: [email protected] (Z. Yu), [email protected] (D. Wang).

https://doi.org/10.1016/j.ijar.2019.10.012 0888-613X/© 2019 Elsevier Inc. All rights reserved.


an increasing number of covering-based rough set models have appeared [40,42]. The dual approximation operators of Zakowski were analyzed by Pomykala. Twenty pairs of covering-based approximation operators were investigated and reduced to sixteen pairs in [22,32]. Moreover, the equivalence between covering-based rough sets and relation-based rough sets was discussed in [24,34]. Tsang et al. studied approximations and reducts with covering generalized rough sets [26]. A new rough set model was introduced by combining covering-based rough sets, fuzzy rough sets, and multi-granulation rough sets [37].
Neighborhood systems were introduced by Lin from the viewpoint of interior and closure in topology [9]. Actually, reflexive neighborhood systems are special coverings of universes. Some rough set models can be interpreted as models established on the basis of different neighborhoods. Lin explored the applications of neighborhood systems and related approximations in knowledge bases as well as relational databases [11,12]. Yao studied approximation retrieval models based on neighborhood systems [33]. Wu and Zhang derived six kinds of k-step neighborhood systems in [30]. Recently, D'eer et al. verified equalities among twenty-four neighborhood operators and reduced them to thirteen different ones [4].
As we have seen, a growing number of covering-based approximation operators have been defined and investigated. Observing that different covering-based approximation operators have different properties, a natural question arises: as generalizations of Pawlak's rough sets, which properties are basic attributes of covering-based approximation operators, i.e., which properties guarantee the equivalence between covering-based approximation operators and Pawlak's over partitions? Additionally, it should be noted that very little work has been carried out on simplifying them in some way.
In consideration of the identity of Pawlak's rough sets and some representative covering-based rough models on partitions, it is worth transforming coverings into partitions for the latter. In this work, we make a first attempt to establish the corresponding relationship between coverings and partitions on universes by means of covering evolutions. Of course, we hope that some "nice" properties are preserved during the evolutions.
The remainder of this paper proceeds as follows: Section 2 describes a slice of related definitions. Section 3 discusses conditions under which covering-based rough models are equivalent to Pawlak's on partitions. Section 4 demonstrates the covering evolutions and checks the accuracy of six pairs of approximation operators step by step. In Subsection 4.1, we introduce three neighborhood operators with transitivity or symmetry to turn common coverings into 1-neighborhood systems. In Subsection 4.2, by means of cores adopted from topology, we refine the 1-neighborhood systems constructed in Subsection 4.1. In Subsection 4.3, we study the properties of the cores of the refined 1-neighborhood systems generated in Subsection 4.2. The key point of our research in Section 4 is to reveal the variation of approximation operator accuracy over the covering evolutions. The results of this paper and our future work are summarized in Section 5.

2. Definitions

In this paper, the universe X is assumed to be a finite set. Unless otherwise stated, the symbol U means a covering of X. We denote a pair of approximation operators by (L, H), where L refers to the lower approximation operator and H to the upper approximation operator.
Minimal description: For each x ∈ X, we define a collection named the minimal description of x as follows [40]:
Md_U(x) = {U : x ∈ U, U ∈ U ∧ (∀V ∈ U ∧ x ∈ V ∧ V ⊆ U ⇒ U = V)}.
Minimal inclusion: For each x ∈ X, we call the following subset of X the minimal inclusion of x [40]:
Mi_U(x) = ∩{U : x ∈ U, U ∈ U}.
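The two granule families above are easy to compute directly. The following is a minimal Python sketch (our own illustration, not code from the paper), representing a covering as a list of frozensets:

```python
def minimal_description(cover, x):
    # Md_U(x): the subset-minimal members of the covering that contain x
    containing = [U for U in cover if x in U]
    return [U for U in containing if not any(V < U for V in containing)]

def minimal_inclusion(cover, x):
    # Mi_U(x): the intersection of all members of the covering that contain x
    containing = [U for U in cover if x in U]
    out = set(containing[0])
    for U in containing[1:]:
        out &= U
    return out

cover = [frozenset({1, 2}), frozenset({1, 2, 3, 4}), frozenset({3, 4}), frozenset({4})]
print(minimal_description(cover, 4))  # [frozenset({4})]
print(minimal_inclusion(cover, 3))    # {3, 4}
```

The covering chosen here is the one used later in Example 4.1, so the values can be checked against that example.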
For each element x in X, one associates with it a subset n(x) of X named a neighborhood of x. A neighborhood system NS(x) is a nonempty family of neighborhoods of x. A neighborhood system of X is the collection of NS(x) for all x ∈ X. A neighborhood system is a 1-neighborhood system (1-NS for short) if each element of X has exactly one neighborhood [31]. The 1-NSs described below have been considered in [31]:
reflexive 1-NS: x ∈ n(x) for all x ∈ X;
symmetric 1-NS: x ∈ n(y) ⇒ y ∈ n(x) for all x, y ∈ X;
transitive 1-NS: [y ∈ n(x), z ∈ n(y)] ⇒ z ∈ n(x) for all x, y, z ∈ X.
Given a 1-NS {n(x) : x ∈ X} on X, for each x ∈ X, the core of n(x) refers to cn(x) = {y ∈ X : n(y) = n(x)}. The family {cn(x) : x ∈ X} is called the core system of {n(x) : x ∈ X}.
A neighborhood operator is a mapping N : X → P(X), where P(X) is the collection of subsets of X [32]. A neighborhood operator N on X is reflexive (symmetric, transitive, respectively) if the 1-NS N(X) = {n(x) : x ∈ X} is reflexive (symmetric, transitive, respectively).
The following is a list of some properties of a reflexive 1-NS {n(x) : x ∈ X}:
Property A. x ∈ cn(x) for each x ∈ X.
Property B. cn(x) ⊆ n(x) for each x ∈ X.
Property C. cn(x) = n(x) for each x ∈ X if {n(x) : x ∈ X} is a partition.
Property D. cn(x) ⊆ n(y) for x, y ∈ X whenever cn(x) ∩ n(y) ≠ ∅ and {n(x) : x ∈ X} is symmetric or transitive (Lemma 3.10 in [35]).
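Cores are equally mechanical to compute. A small sketch (our own illustration, storing a 1-NS as a dict from points to frozensets):

```python
def core(ns, x):
    # cn(x) = {y : n(y) == n(x)}
    return frozenset(y for y in ns if ns[y] == ns[x])

def core_system(ns):
    return {x: core(ns, x) for x in ns}

# A reflexive 1-NS on X = {1, 2, 3}; points 1 and 2 share a neighborhood
ns = {1: frozenset({1, 2}), 2: frozenset({1, 2}), 3: frozenset({2, 3})}
cs = core_system(ns)
print(cs[1], cs[3])  # frozenset({1, 2}) frozenset({3})
# Property A (x in cn(x)) and Property B (cn(x) ⊆ n(x)) hold here:
assert all(x in cs[x] and cs[x] <= ns[x] for x in ns)
```

Note that the core system {frozenset({1, 2}), frozenset({3})} is a partition of X even though the original neighborhoods overlap; this is exactly why core systems appear in the third step of the covering evolutions below.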


Two points x, y in X are said to be U-inseparable if {U ∈ U : x ∈ U} = {V ∈ U : y ∈ V}. Otherwise, x, y are U-separable. If there are U, V ∈ U such that {x, y} ∩ U = {x} and {x, y} ∩ V = {y}, then x, y are U-T1 separable.

The accuracy of a pair of approximation operators (L₁, H₁) on A ⊆ X is given by |L₁(A)|/|H₁(A)|. We say that (L₁, H₁) is more accurate than (L₂, H₂) if |L₂(A)|/|H₂(A)| ≤ |L₁(A)|/|H₁(A)| for each A ⊆ X; we denote this by (L₂, H₂) ≼ (L₁, H₁). We write L₁ ≼ L₂ (H₁ ≼ H₂) if L₁(A) ⊆ L₂(A) (H₁(A) ⊆ H₂(A)) for each A ⊆ X.

Let R be an equivalence relation on X, and write R̲ and R̄ for the induced lower and upper approximation operators. Then for any Y, Z ⊆ X, the following properties hold for the approximation operators defined by Pawlak [10,42]:
(1L) R̲(X) = X (co-normality); (1H) R̄(X) = X (co-normality);
(2L) R̲(∅) = ∅ (normality); (2H) R̄(∅) = ∅ (normality);
(3L) R̲(Y) ⊆ Y (contraction); (3H) Y ⊆ R̄(Y) (extension);
(4L) R̲(Y ∩ Z) = R̲(Y) ∩ R̲(Z) (multiplication); (4H) R̄(Y ∪ Z) = R̄(Y) ∪ R̄(Z) (addition);
(5L) R̲(R̲(Y)) = R̲(Y) (idempotency); (5H) R̄(R̄(Y)) = R̄(Y) (idempotency);
(6L) Y ⊆ Z ⇒ R̲(Y) ⊆ R̲(Z) (monotonicity); (6H) Y ⊆ Z ⇒ R̄(Y) ⊆ R̄(Z) (monotonicity);
(7L) R̲(−R̲(Y)) = −R̲(Y) (lower-complement relation); (7H) R̄(−R̄(Y)) = −R̄(Y) (upper-complement relation);
(8LH) R̲(−Y) = −R̄(Y) (duality);
(9L) ∀K ∈ X/R, R̲(K) = K (granularity); (9H) ∀K ∈ X/R, R̄(K) = K (granularity).

3. Granule selection principle of lower and higher approximation operators

The aim of this section is to dig out the essential characters of covering-based approximation operators that imply the equivalence to Pawlak's rough set model. We start from some shared properties of approximation operators, such as contraction, extension, monotonicity, addition and granularity.
For a subset Y ⊆ X and a pair (L, H) on X, we denote the family of granules assigned to the lower approximation of Y (the upper approximation of Y) by 𝓛(Y) (𝓗(Y)), so that L(Y) = ∪𝓛(Y) (H(Y) = ∪𝓗(Y)).

Proposition 3.1. Let (L, H) be a pair of covering-based approximation operators on X satisfying contraction, extension, monotonicity, addition and granularity.
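For concreteness, here is a brief Python sketch (our own illustration) of Pawlak's operators over a partition, together with the accuracy |L(A)|/|H(A)| just defined:

```python
def pawlak_lower(partition, Y):
    # union of the blocks entirely inside Y
    return set().union(*[B for B in partition if B <= Y] or [set()])

def pawlak_upper(partition, Y):
    # union of the blocks meeting Y
    return set().union(*[B for B in partition if B & Y] or [set()])

partition = [frozenset({1, 2}), frozenset({3}), frozenset({4})]
Y = {2, 3}
lo, hi = pawlak_lower(partition, Y), pawlak_upper(partition, Y)
accuracy = len(lo) / len(hi)  # |{3}| / |{1, 2, 3}| = 1/3
```

The set Y = {2, 3} is chosen so that the lower and upper approximations differ, giving an accuracy strictly below 1.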
Then (L, H) equals Pawlak's approximation operators on partitions.

Proof. Suppose that {R_i : i = 1, 2, ..., n} is a partition of X and Y ⊆ X. If R_i ⊆ Y for some i, then R_i = L(R_i) ⊆ L(Y) by monotonicity and granularity. So R_i ∈ 𝓛(Y), the family of granules assigned to L(Y). Moreover, if R_j − Y ≠ ∅, then R_j ∉ 𝓛(Y); otherwise L(Y) − Y ≠ ∅, which contradicts the contraction of L. Therefore L(Y) = ∪{R_i : R_i ⊆ Y}.
Given R_i with R_i ∩ Y ≠ ∅, pick a point y ∈ R_i ∩ Y. By extension, monotonicity and granularity, {y} ⊆ H({y}) and H({y}) ⊆ H(R_i) = R_i. Noticing that R_i is the only granule containing y, we know that R_i ∈ 𝓗(Y), the family of granules assigned to H(Y). Furthermore, by addition, monotonicity and granularity, H(Y) ⊆ H(∪{R_i : R_i ∩ Y ≠ ∅}) = ∪{H(R_i) : R_i ∩ Y ≠ ∅} = ∪{R_i : R_i ∩ Y ≠ ∅}. It shows that R_j ∉ 𝓗(Y) if R_j ∩ Y = ∅. Hence H(Y) = ∪{R_i : R_i ∩ Y ≠ ∅}. □

All the properties referred to in Proposition 3.1 are essential attributes for covering-based approximation operators except granularity. We will discuss the granularity of the six pairs of approximation operators from [27]:

L¹_U(Y) = ∪{U : U ⊆ Y, U ∈ U},
H¹_U(Y) = ∪{U : U ∩ Y ≠ ∅, U ∈ U};

L²_U(Y) = L¹_U(Y),
H²_U(Y) = L²_U(Y) ∪ (∪{∪Md_U(x) : x ∈ Y − L²_U(Y)});

L³_U(Y) = L¹_U(Y),
H³_U(Y) = ∪{∪Md_U(x) : x ∈ Y};

L⁴_U(Y) = L¹_U(Y),
H⁴_U(Y) = L⁴_U(Y) ∪ (∪{U : U ∩ (Y − L⁴_U(Y)) ≠ ∅, U ∈ U});

L⁵_U(Y) = L¹_U(Y),
H⁵_U(Y) = L⁵_U(Y) ∪ (∪{Mi_U(x) : x ∈ Y − L⁵_U(Y)});

L⁶_U(Y) = {x : Mi_U(x) ⊆ Y},
H⁶_U(Y) = {x : Mi_U(x) ∩ Y ≠ ∅}.
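These definitions translate almost verbatim into code. The sketch below (our own Python illustration, not code from [27], covering the first, third and sixth pairs) evaluates them on the covering later used in Example 4.1:

```python
def union(sets):
    out = set()
    for s in sets:
        out |= set(s)
    return out

def md(cover, x):  # minimal description Md_U(x)
    cx = [U for U in cover if x in U]
    return [U for U in cx if not any(V < U for V in cx)]

def mi(cover, x):  # minimal inclusion Mi_U(x)
    cx = [U for U in cover if x in U]
    out = set(cx[0])
    for U in cx[1:]:
        out &= U
    return out

def L1(cover, Y):  # lower: union of cover sets inside Y
    return union(U for U in cover if U <= Y)

def H1(cover, Y):  # first upper: union of cover sets meeting Y
    return union(U for U in cover if U & Y)

def H3(cover, Y):  # third upper: union of minimal descriptions over Y
    return union(union(md(cover, x)) for x in Y)

def L6(cover, X, Y):
    return {x for x in X if mi(cover, x) <= Y}

def H6(cover, X, Y):
    return {x for x in X if mi(cover, x) & Y}

X = {1, 2, 3, 4}
cover = [frozenset({1, 2}), frozenset({1, 2, 3, 4}), frozenset({3, 4}), frozenset({4})]
Y = {1, 2, 4}
print(L1(cover, Y), H1(cover, Y), H3(cover, Y))  # {1, 2, 4} {1, 2, 3, 4} {1, 2, 4}
print(L6(cover, X, Y), H6(cover, X, Y))          # {1, 2, 4} {1, 2, 3, 4}
```

On this covering H³_U is strictly more accurate than H¹_U for Y = {1, 2, 4}, which illustrates why the choice of granules matters.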

Clarification. Here, granules mean the objects processed by approximation operators directly. As an example, the granules for H³_U are {∪Md_U(x) : x ∈ X}. In this sense, (L¹_U, H¹_U) is the same pair as Pawlak's approximation operators, while the two induce different rough set models when operating on different granules.

Proposition 3.2. Granularity holds for L¹_U.
Proof. The proof is trivial according to the definition of L¹_U. □

Proposition 3.3. Granularity holds for H¹_U if and only if U is a partition of X.

Proof. The sufficiency is obvious.
Necessity: Suppose that U is not a partition. Then there exist W, V ∈ U with W ∩ V ≠ ∅ and W − V ≠ ∅. It is easy to see that W ⊆ H¹_U(V), which contradicts H¹_U(V) = V. □

The following proposition is proved in [40]:

Proposition 3.4. Granularity holds for H²_U.

Proposition 3.5. Granularity holds for H³_U if and only if {∪Md_U(x) : x ∈ X} is a transitive 1-NS on X.

Proof. Necessity: For each x ∈ X, we claim that |Md_U(x)| = 1. Otherwise, there is some point x₀ ∈ X with |Md_U(x₀)| > 1. Let W₀, V₀ ∈ Md_U(x₀) with W₀ ≠ V₀. Without loss of generality, assume that W₀ − V₀ ≠ ∅. Then W₀ ⊆ H³_U(V₀), which means that H³_U(V₀) ≠ V₀, contradicting granularity. As a result, we can denote {∪Md_U(x) : x ∈ X} by {n(x) : x ∈ X}, where n(x) = ∪Md_U(x). For each y ∈ n(x), n(y) ⊆ H³_U(n(x)) = n(x). Hence {n(x) : x ∈ X} is transitive.
Sufficiency: Put {∪Md_U(x) : x ∈ X} = {n(x) : x ∈ X} with transitivity. For each x ∈ X, H³_U(n(x)) = ∪{∪Md_U(y) : y ∈ n(x)} = ∪{n(y) : y ∈ n(x)} = n(x), since n(y) ⊆ n(x) for each y ∈ n(x). □

Proposition 3.6. Granularity holds for H⁴_U.
Proof. According to the definition of L⁴_U, L⁴_U(V) = V for each V ∈ U. Hence H⁴_U(V) = L⁴_U(V) = V. □

Proposition 3.7. Granularity holds for H⁵_U.
Proof. For each V ∈ U, V − L⁵_U(V) = ∅ since V = L⁵_U(V). Therefore H⁵_U(V) = L⁵_U(V) = V.
For each x ∈ X, on the one hand, it is not hard to see that Mi_U(x) ⊆ L⁵_U(Mi_U(x)) ∪ (∪{Mi_U(y) : y ∈ Mi_U(x) − L⁵_U(Mi_U(x))}) by the reflexivity of {Mi_U(x) : x ∈ X}. On the other hand, L⁵_U(Mi_U(x)) ⊆ Mi_U(x), and Mi_U(y) ⊆ Mi_U(x) if y ∈ Mi_U(x) by the transitivity of {Mi_U(x) : x ∈ X}, which implies that ∪{Mi_U(y) : y ∈ Mi_U(x) − L⁵_U(Mi_U(x))} ⊆ Mi_U(x). Consequently, H⁵_U(Mi_U(x)) = Mi_U(x). □
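The necessity direction of Proposition 3.3 is easy to see concretely. A quick check (our own sketch) on an overlapping, non-partition covering:

```python
def H1(cov, Y):
    # first upper approximation: union of cover sets meeting Y
    return set().union(*[U for U in cov if U & Y] or [set()])

cover = [frozenset({1, 2}), frozenset({2, 3})]  # overlaps at 2: not a partition
print(H1(cover, {1, 2}))  # {1, 2, 3}: granularity H1(V) == V fails for V = {1, 2}

partition = [frozenset({1, 2}), frozenset({3})]
assert all(H1(partition, set(B)) == set(B) for B in partition)  # holds on a partition
```

The overlap at the point 2 is exactly what pulls the second block into H¹_U({1, 2}).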

Proposition 3.8. The following are equivalent:
(a) L⁶_U(cMi_U(x)) = cMi_U(x) for each x ∈ X;
(b) {Mi_U(x) : x ∈ X} is a partition of X;
(c) each pair of points x, y in X is U-inseparable or U-T1 separable.

Proof. (a) ⇒ (b): Suppose not. Then there is some point x such that Mi_U(x) − cMi_U(x) ≠ ∅, as cMi_U(x) ⊆ Mi_U(x) by Properties B and C. It shows that x ∉ L⁶_U(cMi_U(x)) by the definition of L⁶_U, which is a contradiction.
(b) ⇒ (a): The proof is trivial, due to cMi_U(x) = Mi_U(x) for each x ∈ X by assumption.
(b) ⇒ (c): Choose two different points x, y ∈ X.
Case 1: If x, y ∈ Mi_U(z) for some z ∈ X, then Mi_U(x) = Mi_U(y) = Mi_U(z), since {Mi_U(x) : x ∈ X} is a partition of X. Therefore x is contained in each element of {U ∈ U : y ∈ U} and vice versa. It means that {x, y} is U-inseparable.
Case 2: x ∈ Mi_U(a), y ∈ Mi_U(b) and Mi_U(a) ≠ Mi_U(b). By assumption, Mi_U(x) = Mi_U(a), Mi_U(y) = Mi_U(b) and Mi_U(a) ∩ Mi_U(b) = ∅. Suppose that {x, y} is not U-T1 separable. Then y ∈ ∩{U ∈ U : x ∈ U} or x ∈ ∩{V ∈ U : y ∈ V}. Accordingly, y ∈ Mi_U(x) or x ∈ Mi_U(y), which is a contradiction.
(c) ⇒ (b): Suppose that {Mi_U(x) : x ∈ X} is not a partition of X. Then {Mi_U(x) : x ∈ X} is not symmetric, because it is reflexive and transitive. Pick two points x, y ∈ X with x ∈ Mi_U(y) while y ∉ Mi_U(x). The fact x ∈ Mi_U(y) means that {x, y} is not U-T1 separable, while y ∉ Mi_U(x) shows that {x, y} is not U-inseparable. This leads to a contradiction. □

Proposition 3.9. The following are equivalent:
(a) H⁶_U(cMi_U(x)) = cMi_U(x) for each x ∈ X;
(b) {Mi_U(x) : x ∈ X} is a partition of X;
(c) each pair of points x, y in X is U-inseparable or U-T1 separable.

Proof. (a) ⇒ (b): Suppose not. Then there is some point x such that Mi_U(x) − cMi_U(x) ≠ ∅, owing to cMi_U(x) ⊆ Mi_U(x) and the fact that {cMi_U(x) : x ∈ X} is a partition of X. Let y ∈ Mi_U(x) − cMi_U(x). Then Mi_U(y) ⊆ Mi_U(x) by transitivity. According to the assumption, x ∈ H⁶_U(cMi_U(y)) = cMi_U(y), which is a contradiction by transitivity.
(b) ⇒ (a): The proof is trivial, since cMi_U(x) = Mi_U(x) for each x ∈ X under this condition.
(b) ⇔ (c): Similar to the proof of Proposition 3.8. □

The following example shows that "U-T1 separable" cannot be weakened to "U-separable" in Proposition 3.8 and Proposition 3.9.

Example 3.10. Let X = {a, b} and U = {{a, b}, {b}}. It is obvious that {U ∈ U : a ∈ U} = {{a, b}} and {U ∈ U : b ∈ U} = {{a, b}, {b}}. It shows that a, b are U-separable while {Mi_U(x) : x ∈ X} = {{a, b}, {b}} is not a partition of X.

The results above show that not all covering-based approximation operators satisfy the granularity property.
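Example 3.10 can be verified mechanically; a small sketch (our own check, with the same two-point universe):

```python
cover = [frozenset({'a', 'b'}), frozenset({'b'})]

def around(x):  # the family {U in cover : x in U}
    return [U for U in cover if x in U]

def mi(x):      # minimal inclusion Mi_U(x)
    out = set(around(x)[0])
    for U in around(x)[1:]:
        out &= U
    return out

# a and b are U-separable: their families of covering sets differ
assert around('a') != around('b')
# ...but not U-T1 separable: no member of U contains a while avoiding b
assert not any('a' in U and 'b' not in U for U in cover)
# and {Mi_U(x)} = {{a, b}, {b}} is not a partition: the two sets overlap
print(mi('a'), mi('b'))
```

So separability alone does not force {Mi_U(x)} to be a partition, exactly as the example claims.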
For a subset Y ⊆ X and a pair of approximation operators (L, H) on X, we introduce the following properties for L and H respectively (stated in terms of the granule families 𝓛(Y), 𝓗(Y) with L(Y) = ∪𝓛(Y) and H(Y) = ∪𝓗(Y)), which will be proved to hold for all (Lⁱ, Hⁱ) (i = 1, ..., 6):

Granule Selection Principle of the Lower Approximation Operator (GSPL for short):
(1) 𝓛(Y) ⊆ {U ∈ U : U ⊆ Y};
(2) for each y ∈ Y, if {U : y ∈ U, U ⊆ Y} ≠ ∅, then there is at least one element U_y ∈ {U : y ∈ U, U ⊆ Y} such that U_y ∈ 𝓛(Y).

Granule Selection Principle of the Higher Approximation Operator (GSPH for short):
(1) 𝓗(Y) ⊆ {U ∈ U : U ∩ Y ≠ ∅};
(2) for each y ∈ Y, there is at least one element U_y ∈ {U ∈ U : y ∈ U} such that U_y ∈ 𝓗(Y).

Proposition 3.11. Let (L, H) be a pair of covering-based approximation operators with contraction, extension, monotonicity, addition and granularity. Then L satisfies GSPL and H satisfies GSPH.

Proof. Let Y ⊆ X. If there is some U ∈ U such that U ∈ 𝓛(Y) and U − Y ≠ ∅, then L(Y) − Y ≠ ∅, which conflicts with the contraction of L. Therefore 𝓛(Y) ⊆ {U ∈ U : U ⊆ Y}.
For each y ∈ Y, suppose that {U : y ∈ U, U ⊆ Y} ≠ ∅. Choose U_y ∈ {U : y ∈ U, U ⊆ Y}. Then y ∈ U_y = L(U_y) ⊆ L(Y) by granularity and monotonicity. If {U : y ∈ U, U ⊆ Y} ∩ 𝓛(Y) = ∅, then y ∉ ∪𝓛(Y) = L(Y) by condition (1) of GSPL, a contradiction. So L satisfies condition (2) of GSPL.
According to the granularity, addition and monotonicity of H, ∪𝓗(Y) = H(Y) ⊆ H(∪{U ∈ U : U ∩ Y ≠ ∅}) = ∪{H(U) : U ∈ U, U ∩ Y ≠ ∅} = ∪{U ∈ U : U ∩ Y ≠ ∅}.
Let y ∈ Y. If {U ∈ U : y ∈ U} ∩ 𝓗(Y) = ∅, then y ∉ ∪𝓗(Y) = H(Y). But {y} ⊆ H({y}) ⊆ H(Y) by the extension and monotonicity of H, which leads to a contradiction. □

Proposition 3.12. L¹_U satisfies GSPL.
Proof. The proof is trivial from the definition of L¹_U. □

Proposition 3.13. L⁶_U satisfies GSPL.
Proof. Noticing that the granules dealt with by L⁶_U are {Mi_U(x) : x ∈ X}, we can check the conditions of GSPL one by one. □

Proposition 3.14. H¹_U satisfies GSPH.


Proof. H¹_U satisfies the first condition of GSPH by its definition.
For Y ⊆ X and y ∈ Y, there exists at least one element U_y of U such that y ∈ U_y and U_y ∩ Y ≠ ∅, since U is a cover of X. So U_y is selected as a granule of H¹_U(Y) according to its definition. Therefore the second condition of GSPH is satisfied. □

Proposition 3.15. H²_U satisfies GSPH.
Proof. Directly from its definition, H²_U satisfies the first condition of GSPH.
Suppose that Y ⊆ X. Notice that H²_U operates on U ∪ {Md_U(x) : x ∈ X}. For y ∈ L²_U(Y): there is some U_y ∈ U such that y ∈ U_y ⊆ Y by the definition of L²_U; for this reason, U_y ∩ Y ≠ ∅. For y ∈ Y − L²_U(Y): as {∪Md_U(x) : x ∈ X} is also a covering of X, there exists at least one element V_y ∈ Md_U(y) such that y ∈ V_y; hence y ∈ V_y ∩ Y. The discussion above shows that H²_U satisfies the second condition of GSPH. □

Proposition 3.16. H³_U satisfies GSPH.
Proof. Seeing that {∪Md_U(x) : x ∈ X} is a covering of X, the proof is similar to Proposition 3.14. □

Similar to the proof of Proposition 3.15, we have the following two results:

Proposition 3.17. H⁴_U satisfies GSPH.

Proposition 3.18. H⁵_U satisfies GSPH.

Proposition 3.19. H⁶_U satisfies GSPH.
Proof. The proof is similar to Proposition 3.14, since {Mi_U(x) : x ∈ X} is also a covering of X. □

Theorem 3.20. Approximation operators are the same as Pawlak's on partitions whenever they satisfy GSPL and GSPH.

Proof. Let (L, H) be a pair of approximation operators with GSPL and GSPH on a partition D of X. Suppose that Y ⊆ X. We have L(Y) ⊆ ∪{D ∈ D : D ⊆ Y} by the first item of GSPL. For each D ∈ D with D ⊆ Y, choose y ∈ D. The set D is the only element of D containing y, so D ⊆ L(Y). Consequently, L(Y) = ∪{D ∈ D : D ⊆ Y}.
It is obvious that H(Y) ⊆ ∪{D ∈ D : D ∩ Y ≠ ∅}. Furthermore, for each D ∈ D with D ∩ Y ≠ ∅, pick y ∈ D ∩ Y. Because D is a partition, {V ∈ D : y ∈ V} = {D}. According to the second condition of GSPH, D ⊆ H(Y). Thus H(Y) = ∪{D ∈ D : D ∩ Y ≠ ∅}. □

4. On three kinds of covering evolutions

The chief aim of this section is to outline the framework of covering evolutions. The emphasis of the study lies in the accuracy variations of covering-based approximation operators over covering evolutions. We define three kinds of covering evolutions to turn general coverings of a universe into partitions. The process of each covering evolution can be divided into three steps. The first step is demonstrated in Subsection 4.1: we introduce three neighborhood operators with transitivity or symmetry to turn common coverings into 1-neighborhood systems. During the second step, in Subsection 4.2, by means of cores adopted from topology, we refine the 1-neighborhood systems constructed in Subsection 4.1 to get finer granules. Partitions become available at the third step by extracting the core systems from the finer 1-neighborhood systems of the previous step. With the granule variations in each step, the accuracy of the six covering-based approximation operators is analyzed.
For a covering U on a universe X, we present four neighborhood operators as follows:
n1 : x → n1(x) = ∩{U : x ∈ U ∈ U}, N1 = {n1(x) : x ∈ X};
n2 : x → n2(x) = ∪{U : x ∈ U ∈ U}, N2 = {n2(x) : x ∈ X};
n3 : x → n3(x) = ∩{U : x ∈ U ∈ U} \ ∪{V : x ∉ V, V ∈ U}, N3 = {n3(x) : x ∈ X}.
Suppose that N = {n(x) : x ∈ X} is a 1-NS on X. Put
n4 : x → n4(x) = cn(x) − ∪{n(y) ∈ N : x ∉ n(y)}, N4 = {n4(x) : x ∈ X},
and n4^i(x) = cn_i(x) − ∪{n_i(y) ∈ Ni : x ∉ n_i(y)}, N4^i = {n4^i(x) : x ∈ X}, for each i ∈ {1, 2, 3}.
Combining the road map of covering evolutions and the neighborhood operators above, Fig. 1 gives the specific process of the covering evolutions studied in the remainder of this section. We give the following example to explain the covering evolutionary process more intuitively.

Fig. 1. The process of a covering evolution: from coverings to partitions. (The covering U is sent by n1, n2, n3 to the 1-NSs N1, N2, N3, respectively; each Ni is sent by n4 to N4^i; and each N4^i yields its core system CN4^i.)
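The n3-branch of this process can be sketched in a few lines of Python (our own illustration; `n3`, `n4` and the core construction follow the definitions just given, applied to the covering of Example 4.1 below):

```python
X = {1, 2, 3, 4}
cover = [frozenset(s) for s in ({1, 2}, {1, 2, 3, 4}, {3, 4}, {4})]

def big_union(sets):
    out = set()
    for s in sets:
        out |= set(s)
    return out

def n3(x):
    # intersection of the cover sets containing x, minus the union of those missing x
    inter = set(X)
    for U in cover:
        if x in U:
            inter &= U
    return frozenset(inter - big_union(V for V in cover if x not in V))

N3 = {x: n3(x) for x in X}                                       # first step

def n4(ns, x):
    # core of n(x), minus every neighborhood that does not contain x
    cn = {y for y in ns if ns[y] == ns[x]}
    return frozenset(cn - big_union(ns[y] for y in ns if x not in ns[y]))

N4 = {x: n4(N3, x) for x in X}                                   # second step
CN4 = {x: frozenset(y for y in X if N4[y] == N4[x]) for x in X}  # core system
print(N3[1], N3[3], N3[4])  # frozenset({1, 2}) frozenset({3}) frozenset({4})
```

The resulting core system {{1, 2}, {3}, {4}} is a partition of X, matching the outcome of Example 4.1.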

Example 4.1. Let X = {1, 2, 3, 4} and U = {{1, 2}, {1, 2, 3, 4}, {3, 4}, {4}}.
U → N3 (applying n3): n3(1) = {1, 2}, n3(2) = {1, 2}, n3(3) = {3}, n3(4) = {4};
N3 → N4^3 (applying n4): n4^3(1) = {1, 2}, n4^3(2) = {1, 2}, n4^3(3) = {3}, n4^3(4) = {4};
N4^3 → CN4^3 (taking cores): cn4^3(1) = {1, 2}, cn4^3(2) = {1, 2}, cn4^3(3) = {3}, cn4^3(4) = {4}.

Remark 4.2. (1) The process of covering evolutions incorporates concepts and methods for constructing symmetric neighborhood assignments in general topology. On one hand, the notion of "core" posed in [36] plays an important role in the study of the "D-property" and "symmetric neighborhood assignments" in set-theoretic topology. On the other hand, "n3" arises as one particular kind of symmetric neighborhood assignment in [14], which is used to characterize metrizable or generalized metrizable spaces in general topology.
(2) It is not difficult to check that N1 is transitive while N2 and N3 are symmetric. We will see that the transitivity or symmetry of these neighborhood operators plays an important role in the following proofs.
(3) For each x ∈ X, n1(x) is the same as Mi_U(x).

4.1. Variations in accuracy of approximation operators in the first step of covering evolutions

Proposition 4.3. L¹_U ≼ L¹_N1.

Proof. Suppose that Y ⊆ X. For each x ∈ L¹_U(Y), there is some U ∈ U such that x ∈ U and U ⊆ Y. By the definition, we have n1(x) ⊆ U ⊆ Y. As a consequence, x ∈ L¹_N1(Y), and L¹_U(Y) ⊆ L¹_N1(Y). □

Proposition 4.4. L⁶_N1 = L⁶_U.
Proof. The proposition follows from n1(x) = Mi_U(x) for each x ∈ X. □

Proposition 4.5. H¹_N1 ≼ H¹_U.
Proof. Choose x ∈ H¹_N1(Y) for Y ⊆ X. There is some y ∈ X such that x ∈ n1(y) and n1(y) ∩ Y ≠ ∅. So we can pick U ∈ U containing x and y. By the definition of H¹_U(Y), x ∈ H¹_U(Y). Thus H¹_N1(Y) ⊆ H¹_U(Y). □
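Both inclusions can be confirmed by brute force over every subset of a small universe; a quick verification sketch (our own, using the covering of Example 4.1):

```python
from itertools import chain, combinations

X = [1, 2, 3, 4]
cover = [frozenset(s) for s in ({1, 2}, {1, 2, 3, 4}, {3, 4}, {4})]

def mi(cov, x):  # n1(x) = Mi_U(x)
    out = set(X)
    for U in cov:
        if x in U:
            out &= U
    return frozenset(out)

N1 = [mi(cover, x) for x in X]  # the 1-NS produced by n1, viewed as a covering

def L1(cov, Y):
    return set().union(*[U for U in cov if U <= Y] or [set()])

def H1(cov, Y):
    return set().union(*[U for U in cov if U & Y] or [set()])

subsets = [set(c) for c in chain.from_iterable(combinations(X, r) for r in range(5))]
assert all(L1(cover, Y) <= L1(N1, Y) for Y in subsets)  # Proposition 4.3
assert all(H1(N1, Y) <= H1(cover, Y) for Y in subsets)  # Proposition 4.5
print("both inclusions hold on all", len(subsets), "subsets")
```

For instance, for Y = {3} the covering gives H¹_U(Y) = {1, 2, 3, 4} while N1 gives only {3, 4}, so the first evolution step can strictly improve accuracy.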

Lemma 4.6. For each x ∈ X, ∪Md_N1(x) ⊆ ∪Md_U(x).

Proof. For each x ∈ X, Md_N1(x) = {n1(x)}, since {n1(x) : x ∈ X} is transitive. For each element U_x of Md_U(x), n1(x) ⊆ U_x by the definition of N1. Therefore ∪Md_N1(x) ⊆ ∪Md_U(x). □

Proposition 4.7. H²_N1 ≼ H²_U.

Proof. Let Y ⊆ X. By Proposition 4.3, L²_U(Y) = L¹_U(Y) ⊆ L¹_N1(Y) = L²_N1(Y). Pick y ∈ L²_N1(Y) − L²_U(Y) if L²_N1(Y) − L²_U(Y) ≠ ∅. We can see that y ∈ Y − L²_U(Y), as L²_N1(Y) ⊆ Y. It follows that y ∈ ∪Md_U(y) ⊆ H²_U(Y) according to the definition of H²_U(Y). Besides, ∪{∪Md_N1(x) : x ∈ Y − L²_N1(Y)} ⊆ ∪{∪Md_U(x) : x ∈ Y − L²_U(Y)}, due to ∪Md_N1(x) ⊆ ∪Md_U(x) by Lemma 4.6 and Y − L²_N1(Y) ⊆ Y − L²_U(Y). □

Proposition 4.8. H³_N1 ≼ H³_U.

Proof. For each Y ⊆ X and x ∈ Y, ∪Md_N1(x) ⊆ ∪Md_U(x) by Lemma 4.6. For this reason, H³_N1(Y) ⊆ H³_U(Y) by their definitions. □

Proposition 4.9. H⁴_N1 ≼ H⁴_U.

Proof. Suppose that Y ⊆ X and choose x ∈ H⁴_N1(Y). By Proposition 4.3, L⁴_U(Y) = L¹_U(Y) ⊆ L¹_N1(Y) = L⁴_N1(Y). Pick y ∈ L⁴_N1(Y) − L⁴_U(Y) if L⁴_N1(Y) − L⁴_U(Y) ≠ ∅. Then y ∈ Y − L⁴_U(Y), since L⁴_N1(Y) ⊆ Y. For each U ∈ U with y ∈ U, y ∈ U ⊆ ∪{U : U ∩ (Y − L⁴_U(Y)) ≠ ∅, U ∈ U} ⊆ H⁴_U(Y). Furthermore, if there is some point z ∈ ∪{n1(x) : n1(x) ∩ (Y − L⁴_N1(Y)) ≠ ∅, n1(x) ∈ N1}, then there must be some n1(x₀) such that z ∈ n1(x₀) and n1(x₀) ∩ (Y − L⁴_N1(Y)) ≠ ∅. As a result, n1(x₀) ∩ (Y − L⁴_U(Y)) ≠ ∅. Consequently, z ∈ ∪{U : U ∩ (Y − L⁴_U(Y)) ≠ ∅, U ∈ U} ⊆ H⁴_U(Y) by the definition of n1(x₀). □

Proposition 4.10. H⁵_N1 ≼ H⁵_U.

Proof. Assume that Y ⊆ X. By Proposition 4.3, L⁵_U(Y) = L¹_U(Y) ⊆ L¹_N1(Y) = L⁵_N1(Y). Pick y ∈ L⁵_N1(Y) − L⁵_U(Y) if L⁵_N1(Y) − L⁵_U(Y) ≠ ∅. Then y ∈ Y − L⁵_U(Y), since L⁵_N1(Y) ⊆ Y. So y ∈ Mi_U(y) ⊆ ∪{Mi_U(x) : x ∈ Y − L⁵_U(Y)}. For any z ∈ Y − L⁵_N1(Y), z ∈ Mi_N1(z) = Mi_U(z) ⊆ ∪{Mi_U(x) : x ∈ Y − L⁵_U(Y)}. Therefore H⁵_N1(Y) ⊆ H⁵_U(Y). □

Proposition 4.11. H⁶_U = H⁶_N1.

Proof. The proposition follows from n1(x) = Mi_U(x) for each x ∈ X. □

Proposition 4.12. L¹_N2 ≼ L¹_U.

Proof. Let Y ⊆ X. For each y ∈ L¹_N2(Y), there is some U ∈ U such that y ∈ U ⊆ Y by the definition of N2. Thus y ∈ L¹_U(Y) according to its definition. □

Proposition 4.13. L⁶_N2 ≼ L⁶_U.

Proof. Let Y ⊆ X. It is clear that Mi_U(x) ⊆ Mi_N2(x) for each x ∈ X. Hence x ∈ L⁶_U(Y) whenever x ∈ L⁶_N2(Y). □

Proposition 4.14. H¹_U ≼ H¹_N2.

Proof. Given Y ⊆ X and U ∈ U with U ∩ Y ≠ ∅, choose x ∈ U ∩ Y. We can see that x ∈ U ⊆ n2(x) ⊆ H¹_N2(Y). Accordingly, H¹_U(Y) ⊆ H¹_N2(Y). □

Lemma 4.15. For each x ∈ X, ∪Md_U(x) ⊆ ∪Md_N2(x).

Proof. For each x ∈ X, put Md_N2(x) = {n2(z) : z ∈ A}. We will prove n2(x) ⊆ ∪Md_N2(x). Suppose not, and pick y ∈ n2(x) − ∪Md_N2(x). We know that x ∈ n2(y) by the symmetry of {n2(x) : x ∈ X}. According to the definition of Md_N2(x), there is some z ∈ A such that x ∈ n2(z) and n2(z) ⊆ n2(y), since n2(y) ∉ Md_N2(x). It follows that y ∈ n2(z), because z ∈ n2(z) ⊆ n2(y) and {n2(x) : x ∈ X} is symmetric; this contradicts y ∈ n2(x) − ∪Md_N2(x).
It is not difficult to see that ∪Md_U(x) ⊆ n2(x). Accordingly, ∪Md_U(x) ⊆ ∪Md_N2(x). □

Proposition 4.16. H²_U ≼ H²_N2.

Proof. Let Y ⊆ X. By Proposition 4.12, L²_N2(Y) = L¹_N2(Y) ⊆ L¹_U(Y) = L²_U(Y). Pick y ∈ L²_U(Y) − L²_N2(Y) if L²_U(Y) − L²_N2(Y) ≠ ∅. Then y ∈ Y − L²_N2(Y), since L²_U(Y) ⊆ Y. As a consequence, y ∈ H²_N2(Y). Moreover, for each z ∈ Y − L²_U(Y), z ∈ Y − L²_N2(Y) by Proposition 4.12. Furthermore, ∪Md_U(z) ⊆ ∪Md_N2(z) by Lemma 4.15, which means that ∪{∪Md_U(x) : x ∈ Y − L²_U(Y)} ⊆ ∪{∪Md_N2(x) : x ∈ Y − L²_N2(Y)}. □

Proposition 4.17. H³_U ≼ H³_N2.

Proof. It follows from Lemma 4.15 and the definitions of H³_U and H³_N2. □

Proposition 4.18. H⁴_U ≼ H⁴_N2.

Proof. Suppose that Y ⊆ X. By Proposition 4.12, L⁴_N2(Y) = L¹_N2(Y) ⊆ L¹_U(Y) = L⁴_U(Y). Pick y ∈ L⁴_U(Y) − L⁴_N2(Y) if L⁴_U(Y) − L⁴_N2(Y) ≠ ∅. Consequently, y ∈ Y − L⁴_N2(Y), since L⁴_U(Y) ⊆ Y. As a result, y ∈ n2(y) ⊆ H⁴_N2(Y). Suppose that z ∈ ∪{U : U ∩ (Y − L⁴_U(Y)) ≠ ∅}. Then there is some V ∈ U such that z ∈ V and V ∩ (Y − L⁴_U(Y)) ≠ ∅. Therefore V ∩ (Y − L⁴_N2(Y)) ≠ ∅, as L⁴_N2(Y) ⊆ L⁴_U(Y). So we have z ∈ V ⊆ n2(z), which shows that n2(z) ∩ (Y − L⁴_N2(Y)) ≠ ∅. Hence z ∈ H⁴_N2(Y). □

Proposition 4.19. H⁵_U ≼ H⁵_N2.

Proof. Let Y ⊆ X. By Proposition 4.12, L⁵_N2(Y) = L¹_N2(Y) ⊆ L¹_U(Y) = L⁵_U(Y). Pick y ∈ L⁵_U(Y) − L⁵_N2(Y) if L⁵_U(Y) − L⁵_N2(Y) ≠ ∅. Then y ∈ Y − L⁵_N2(Y), due to L⁵_U(Y) ⊆ Y. Thus y ∈ Mi_N2(y) ⊆ n2(y) ⊆ H⁵_N2(Y). Additionally, for each z ∈ Y − L⁵_U(Y), z ∈ Y − L⁵_N2(Y) by Proposition 4.12. Furthermore, Mi_U(z) ⊆ Mi_N2(z), because each W ∈ N2 containing z contains some U ∈ U with z ∈ U; this means that ∪{Mi_U(x) : x ∈ Y − L⁵_U(Y)} ⊆ ∪{Mi_N2(x) : x ∈ Y − L⁵_N2(Y)}. □

Proposition 4.20. H⁶_U ≼ H⁶_N2.

Proof. For each Y ⊆ X and y ∈ Y, Mi_U(y) ⊆ Mi_N2(y), as each W ∈ N2 containing y contains some U ∈ U with y ∈ U. Accordingly, H⁶_U(Y) ⊆ H⁶_N2(Y). □

Proposition 4.21. L 1U L 1N . 3 Proof. Assume that Y ⊆ X . If there exists z ∈ X such that z ∈ L 1U (Y ), then there is some U ∈ U with z ∈ U and U ⊆ Y . We can see that z ∈ n3 ( z) and n3 ( z) ⊆ U by its definition. Therefore, n3 ( z) ⊆ Y and then z ∈ L 1N (Y ). 2 3

Proposition 4.22. L⁶_U ⊑ L⁶_{N₃}.

Proof. Suppose that Y ⊆ X and z ∈ L⁶_U(Y). Then Mi_U(z) ⊆ Y. We can see that z ∈ n₃(z) and n₃(z) ⊆ Mi_U(z) by its definition. For this reason, z ∈ Mi_{N₃}(z) ⊆ n₃(z) ⊆ Y and z ∈ L⁶_{N₃}(Y). Thus, L⁶_U(Y) ⊆ L⁶_{N₃}(Y). □

Proposition 4.23. H¹_{N₃} ⊑ H¹_U.

Proof. For each Y ⊆ X and x ∈ X, we have n₃(x) ⊆ n₁(x). Consequently, ⋃{n₃(z) : n₃(z) ∩ Y ≠ ∅} ⊆ ⋃{n₁(z) : n₁(z) ∩ Y ≠ ∅}. It follows that H¹_{N₃}(Y) ⊆ H¹_U(Y) by Proposition 4.5. □

Proposition 4.24. H²_{N₃} ⊑ H²_U.

Proof. Given that x ∈ X and Y ⊆ X, n₃(x) ⊆ n₁(x) by the definitions of n₃(x) and n₁(x). Suppose that x ∈ n₃(y) for some y ∈ X. Then y ∈ n₃(x) since {n₃(x) : x ∈ X} is symmetric. It means that y ∈ U for each U ∈ U with x ∈ U. Thus, n₃(y) ⊆ n₁(x) by its definition. Therefore, ⋃Md_{N₃}(x) ⊆ n₁(x) = ⋃Md_{N₁}(x) ⊆ ⋃Md_U(x) by Lemma 4.6. By Proposition 4.21, L²_U(Y) = L¹_U(Y) ⊆ L¹_{N₃}(Y) = L²_{N₃}(Y). Pick y ∈ L²_{N₃}(Y) − L²_U(Y) if L²_{N₃}(Y) − L²_U(Y) ≠ ∅. We can see that y ∈ Y − L²_U(Y) since L²_{N₃}(Y) ⊆ Y. It follows that y ∈ ⋃Md_U(y) ⊆ H²_U(Y). Due to ⋃Md_{N₃}(x) ⊆ ⋃Md_U(x) and Y − L²_{N₃}(Y) ⊆ Y − L²_U(Y), ⋃{Md_{N₃}(x) : x ∈ Y − L²_{N₃}(Y)} ⊆ ⋃{Md_U(x) : x ∈ Y − L²_U(Y)}. □

Proposition 4.25. H³_{N₃} ⊑ H³_U.

Proof. For each Y ⊆ X, as we proved in Proposition 4.24, ⋃Md_{N₃}(x) ⊆ ⋃Md_U(x) for each x ∈ X, and therefore H³_{N₃}(Y) ⊆ H³_U(Y) by their definitions. □

Proposition 4.26. H⁴_{N₃} ⊑ H⁴_U.

Proof. By Proposition 4.21, L⁴_U(Y) = L¹_U(Y) ⊆ L¹_{N₃}(Y) = L⁴_{N₃}(Y) for each Y ⊆ X. Pick y ∈ L⁴_{N₃}(Y) − L⁴_U(Y) if L⁴_{N₃}(Y) − L⁴_U(Y) ≠ ∅. Then y ∈ Y − L⁴_U(Y) as L⁴_{N₃}(Y) ⊆ Y. For each U ∈ U with y ∈ U, y ∈ U ⊆ ⋃{U : U ∩ (Y − L⁴_U(Y)) ≠ ∅, U ∈ U} ⊆ H⁴_U(Y). Furthermore, if there is some point z ∈ ⋃{n₃(x) : n₃(x) ∩ (Y − L⁴_{N₃}(Y)) ≠ ∅, n₃(x) ∈ N₃(X)}, then there must be some n₃(x₀) such that z ∈ n₃(x₀) and n₃(x₀) ∩ (Y − L⁴_{N₃}(Y)) ≠ ∅. Accordingly, n₃(x₀) ∩ (Y − L⁴_U(Y)) ≠ ∅. Thus, z ∈ ⋃{U : U ∩ (Y − L⁴_U(Y)) ≠ ∅, U ∈ U} ⊆ H⁴_U(Y) since n₃(x₀) is contained in the intersection of the elements of {U ∈ U : x₀ ∈ U}. □

Proposition 4.27. H⁵_{N₃} ⊑ H⁵_U.

Proof. Suppose that Y ⊆ X. By Proposition 4.21, L⁵_U(Y) = L¹_U(Y) ⊆ L¹_{N₃}(Y) = L⁵_{N₃}(Y). Pick y ∈ L⁵_{N₃}(Y) − L⁵_U(Y) if L⁵_{N₃}(Y) − L⁵_U(Y) ≠ ∅. Then y ∈ Y − L⁵_U(Y) due to L⁵_{N₃}(Y) ⊆ Y. Consequently, y ∈ Mi_U(y) ⊆ ⋃{Mi_U(x) : x ∈ Y − L⁵_U(Y)}. For any z ∈ Y − L⁵_{N₃}(Y), z ∈ Mi_{N₃}(z) ⊆ n₃(z) ⊆ n₁(z) = Mi_U(z) ⊆ ⋃{Mi_U(x) : x ∈ Y − L⁵_U(Y)}. Therefore, H⁵_{N₃}(Y) ⊆ H⁵_U(Y). □

Proposition 4.28. H⁶_{N₃} ⊑ H⁶_U.

Proof. For any Y ⊆ X and z ∈ Y, we have that Mi_{N₃}(z) ⊆ n₃(z) ⊆ n₁(z) = Mi_U(z) ⊆ ⋃{Mi_U(z) : z ∈ Y} by the construction of Mi_{N₃}(z). As a result, H⁶_{N₃}(Y) ⊆ H⁶_U(Y). □

Below is a summary of the order relations for approximation operators in the first step of covering evolutions.

Theorem 4.29. L^i_U ⊑ L^i_{N₁} and H^i_{N₁} ⊑ H^i_U for each i = 1, 2, ..., 6.

Theorem 4.30. L^i_U ⊑ L^i_{N₃} and H^i_{N₃} ⊑ H^i_U for each i = 1, 2, ..., 6.

4.2. Variations in accuracy of approximation operators in the second step of covering evolutions

Example 4.31. N₄ may not be a partition. Let X = {1, 2, 3}, n(1) = {1, 2, 3}, n(2) = {1, 2, 3} and n(3) = {2, 3}. Then cn(1) = cn(2) = {1, 2} and cn(3) = {3}. We have that n₄(1) = {1, 2} − n(3) = {1}, n₄(2) = {1, 2} − ∅ = {1, 2}, and n₄(3) = {3} − ∅ = {3}.

Proposition 4.32. Let N = {n(x) : x ∈ X} be a 1-NS on X. Then N₄ is transitive.

Proof. Suppose that x ∈ n₄(y) for x, y ∈ X. Then cn(x) = cn(y) by the definition of n₄(y). According to the construction of n₄(y), we can see that {n(z) ∈ N : y ∉ n(z)} ⊆ {n(z) ∈ N : x ∉ n(z)}. As a result, n₄(x) = cn(x) − ⋃{n(z) ∈ N : x ∉ n(z)} ⊆ cn(y) − ⋃{n(z) ∈ N : y ∉ n(z)} = n₄(y). □

Proposition 4.33. If N = {n(x) : x ∈ X} is a symmetric 1-NS on X, then N₄ is also symmetric.

Proof. Suppose that x ∈ n₄(y). Then x ∈ cn(y) by the definition of n₄(y). Therefore, y ∈ cn(y) = cn(x). For any n(z) ∈ N with x ∉ n(z), we claim that y ∉ n(z). Otherwise, z ∈ n(y) = n(x) and x ∈ n(z) by the symmetry of N, which leads to a contradiction. Thus, y ∈ n₄(x) by the construction of n₄(x). It follows that N₄ is a symmetric 1-NS. □

Based on Proposition 4.32 and Proposition 4.33, we have

Corollary 4.34. If N = {n(x) : x ∈ X} is a symmetric 1-NS on X, then N₄ is a partition on X.

Property E. n₄^i(x) ⊆ n_i(x) for each x ∈ X and i = 1, 2, 3.

Similar to the proofs of Proposition 4.3, Proposition 4.4 and Proposition 4.5, we can prove the following propositions based on Property E:

Proposition 4.35. L^j_{N_i} ⊑ L^j_{N₄^i} for each i = 1, 2, 3 and j = 1, 2, ..., 6.

Proposition 4.36. H¹_{N₄^i} ⊑ H¹_{N_i} for each i = 1, 2, 3.
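The constructions in Example 4.31 are easy to replay mechanically. The sketch below is only an illustration under assumed definitions consistent with the numbers in the example, namely cn(x) = {y ∈ X : n(y) = n(x)} and n₄(x) = cn(x) − ⋃{n(z) : x ∉ n(z)}; it confirms that the resulting N₄ fails to be a partition.

```python
# Replaying Example 4.31: the system N4 built from a 1-NS need not be a partition.
# Assumed constructions: cn(x) = {y : n(y) = n(x)},
# n4(x) = cn(x) - union of all n(z) with x not in n(z).

X = {1, 2, 3}
n = {1: {1, 2, 3}, 2: {1, 2, 3}, 3: {2, 3}}

cn = {x: {y for y in X if n[y] == n[x]} for x in X}
n4 = {x: cn[x] - {w for z in X if x not in n[z] for w in n[z]} for x in X}

print(cn)  # cn(1) = cn(2) = {1, 2}, cn(3) = {3}
print(n4)  # n4(1) = {1}, n4(2) = {1, 2}, n4(3) = {3}

# {1} and {1, 2} overlap without coinciding, so N4 is not a partition here.
blocks = [frozenset(s) for s in n4.values()]
is_partition = all(a == b or not (a & b) for a in blocks for b in blocks)
print(is_partition)  # False
```

The computed neighborhoods agree with the values listed in the example.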

Lemma 4.37. Suppose that N = {n(x) : x ∈ X} is a transitive 1-NS on X. Then ⋃Md_{N₄}(x) ⊆ ⋃Md_N(x) for each x ∈ X.

Proof. By the definition of Md_N(x), we know that Md_N(x) = {n(x)}. For each n₄(y) with x ∈ n₄(y), x ∈ cn(y) by the definition of n₄(y). Consequently, y ∈ cn(x) = cn(y). It means that n(y) = n(x). Now we can see that ⋃Md_{N₄}(x) ⊆ ⋃{n₄(z) : x ∈ n₄(z), z ∈ X} ⊆ n(x) = ⋃Md_N(x) since n₄(z) ⊆ n(z) by Property E. □

Lemma 4.38. Suppose that N = {n(x) : x ∈ X} is a symmetric 1-NS on X. Then ⋃Md_{N₄}(x) ⊆ ⋃Md_N(x) for each x ∈ X.

Proof. Suppose that n₄(y) ∈ Md_{N₄}(x). Then y ∈ cn(x) = cn(y) due to x ∈ n₄(y). We will prove that n₄(y) ⊆ n(z) for each n(z) containing x in N. In fact, x ∈ n(z) implies that n₄(y) ⊆ cn(y) = cn(x) ⊆ n(z) by Property D. Hence, ⋃Md_{N₄}(x) ⊆ ⋃Md_N(x). □

Based on Lemma 4.37, Lemma 4.38 and Property E, we can prove the following two propositions in ways similar to those used in Subsection 4.1:

Proposition 4.39. H^j_{N₄^i} ⊑ H^j_{N_i} for each i = 2, 3 and j = 2, 3.

Proposition 4.40. H⁴_{N₄^i} ⊑ H⁴_{N_i} for each i = 1, 2, 3.

Lemma 4.41. Suppose that N = {n(x) : x ∈ X} is a 1-NS on X. Then Mi_{N₄}(x) ⊆ Mi_N(x) for each x ∈ X.

Proof. It is obvious from Property E. □

Proposition 4.42. H^j_{N₄^i} ⊑ H^j_{N_i} for each i = 1, 2, 3 and j = 5, 6.

Proof. It is trivial from Lemma 4.41. □
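The structural claims of this subsection (Proposition 4.32, Proposition 4.33 and Corollary 4.34) also lend themselves to brute-force verification. The sketch below is an illustration under the same assumed constructions for cn and n₄ as above; it enumerates every reflexive 1-NS on a three-point universe (every assignment with x ∈ n(x), as in the examples of this paper) and checks that N₄ is always transitive, and is a partition whenever N is symmetric.

```python
# Brute-force check of Proposition 4.32 (N4 is transitive) and Corollary 4.34
# (N symmetric => N4 is a partition) over all reflexive 1-NSs on {1, 2, 3}.
# cn and n4 are the assumed constructions cn(x) = {y : n(y) = n(x)} and
# n4(x) = cn(x) - union of n(z) with x not in n(z).
from itertools import combinations, product

X = (1, 2, 3)

def core(n):
    return {x: frozenset(y for y in X if n[y] == n[x]) for x in X}

def n4(n):
    cn = core(n)
    return {x: frozenset(cn[x] - {w for z in X if x not in n[z] for w in n[z]})
            for x in X}

def neighborhoods_of(x):       # all subsets of X that contain x
    rest = [y for y in X if y != x]
    return [frozenset({x}) | frozenset(c)
            for r in range(len(rest) + 1) for c in combinations(rest, r)]

def is_transitive(m):          # x in m(y) implies m(x) is a subset of m(y)
    return all(m[x] <= m[y] for y in X for x in m[y])

def is_symmetric(m):
    return all((x in m[y]) == (y in m[x]) for x in X for y in X)

def is_partition(m):
    blocks = set(m.values())
    return (set().union(*blocks) == set(X)
            and all(a == b or not (a & b) for a in blocks for b in blocks))

transitive_ok, partition_ok = True, True
for choice in product(*(neighborhoods_of(x) for x in X)):
    n = dict(zip(X, choice))
    m = n4(n)
    transitive_ok = transitive_ok and is_transitive(m)
    if is_symmetric(n):
        partition_ok = partition_ok and is_partition(m)
print(transitive_ok, partition_ok)  # True True
```

The enumeration covers 64 systems; the check agrees with both statements on this universe.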

The order relations of the related approximation operators are described in the following theorems:

Theorem 4.43. L^i_{N₁} ⊑ L^i_{N₄¹} and H^i_{N₄¹} ⊑ H^i_{N₁} for each i = 1, 2, ..., 6.

Theorem 4.44. L^i_{N₃} ⊑ L^i_{N₄³} and H^i_{N₄³} ⊑ H^i_{N₃} for each i = 1, 2, ..., 6.

4.3. Variations in accuracy of approximation operators in the third step of covering evolutions and the applications

In this subsection, we consider the core systems induced by N₄ acting on N_i (i = 1, 2, 3) obtained from a covering on X. Denote the 1-NS obtained after N_i and then N₄ operate on U by N₄^i = {n₄^i(x) : x ∈ X}, i = 1, 2, 3. Put CN₄^i = {cn₄^i(x) : x ∈ X}, i = 1, 2, 3.

Proposition 4.45. L^j_{N₄^i} ⊑ L^j_{CN₄^i} for each i = 1, 2, 3 and j = 1, 2, ..., 6.

Proof. Based on the definitions of L^j_{N₄^i} and L^j_{CN₄^i}, the proof is not formidable according to the fact that cn₄^i(x) ⊆ n₄^i(x) for each x ∈ X, i = 1, 2, 3 and j = 1, 2, ..., 6. □

Proposition 4.46. H^j_{CN₄¹} ⊑ H^j_{N₄¹} for each j = 1, 2, ..., 6.

Proof. H¹_{CN₄¹} ⊑ H¹_{N₄¹}: For each Y ⊆ X, if cn₄¹(x) ∩ Y ≠ ∅, then n₄¹(x) ∩ Y ≠ ∅. So, H¹_{CN₄¹}(Y) ⊆ H¹_{N₄¹}(Y) due to cn₄¹(x) ⊆ n₄¹(x) for each x ∈ X.

H²_{CN₄¹} ⊑ H²_{N₄¹}: On one side, cn₄¹(x) ⊆ n₄¹(x) = ⋃Md_{N₄¹}(x) for each x ∈ X since N₄¹ is transitive. On the other side, Md_{CN₄¹}(x) = {cn₄¹(x)} as CN₄¹ is a partition on X. It is not difficult to see that H²_{CN₄¹}(Y) ⊆ H²_{N₄¹}(Y) for each Y ⊆ X by Proposition 4.45.

H³_{CN₄¹} ⊑ H³_{N₄¹}: It is similar to the proof of H²_{CN₄¹} ⊑ H²_{N₄¹}.

H⁴_{CN₄¹} ⊑ H⁴_{N₄¹}: It follows from the fact that cn₄¹(x) ⊆ n₄¹(x) for each x ∈ X and Proposition 4.45.

H⁵_{CN₄¹} ⊑ H⁵_{N₄¹}: It is obvious that cn₄¹(x) ⊆ n₄¹(x) = Mi_{N₄¹}(x). Noticing that CN₄¹ is a partition on X, it is trivial to prove that Mi_{CN₄¹}(x) = cn₄¹(x). Accordingly, H⁵_{CN₄¹}(Y) ⊆ H⁵_{N₄¹}(Y) by Proposition 4.45.

H⁶_{CN₄¹} ⊑ H⁶_{N₄¹}: It is similar to the proof of H⁵_{CN₄¹} ⊑ H⁵_{N₄¹}. □

Proposition 4.47. H^j_{CN₄²} ⊑ H^j_{N₄²} for each j = 1, 2, ..., 6.

Proof. H¹_{CN₄²} ⊑ H¹_{N₄²}: For each Y ⊆ X, if cn₄²(x) ∩ Y ≠ ∅, then n₄²(x) ∩ Y ≠ ∅. As a result, H¹_{CN₄²}(Y) ⊆ H¹_{N₄²}(Y) since cn₄²(x) ⊆ n₄²(x) for each x ∈ X.

H²_{CN₄²} ⊑ H²_{N₄²}: We claim that cn₄²(x) ⊆ ⋃Md_{N₄²}(x) for each x ∈ X. Pick n₄²(y) ∈ Md_{N₄²}(x). We have that x ∈ n₄²(y) by the definition of Md_{N₄²}(x). Therefore, cn₄²(x) ⊆ n₄²(y) by Property D since N₄² is symmetric. Moreover, Md_{CN₄²}(x) = {cn₄²(x)} as CN₄² is a partition on X. It is not difficult to prove that H²_{CN₄²}(Y) ⊆ H²_{N₄²}(Y) for each Y ⊆ X by Proposition 4.45.

H³_{CN₄²} ⊑ H³_{N₄²}: It is similar to the proof of H²_{CN₄²} ⊑ H²_{N₄²}.

H⁴_{CN₄²} ⊑ H⁴_{N₄²}: It follows from the fact that cn₄²(x) ⊆ n₄²(x) for each x ∈ X and Proposition 4.45.

H⁵_{CN₄²} ⊑ H⁵_{N₄²}: For each n₄²(y) containing x, we have cn₄²(x) ⊆ n₄²(y) by Property D. It means that cn₄²(x) ⊆ Mi_{N₄²}(x). As CN₄² is a partition on X, Mi_{CN₄²}(x) = cn₄²(x). Thus, H⁵_{CN₄²}(Y) ⊆ H⁵_{N₄²}(Y) by Proposition 4.45.

H⁶_{CN₄²} ⊑ H⁶_{N₄²}: It is similar to the proof of H⁵_{CN₄²} ⊑ H⁵_{N₄²}. □

Proposition 4.48. H^j_{CN₄³} ⊑ H^j_{N₄³} for each j = 1, 2, ..., 6.

Proof. The proof is similar to that of Proposition 4.47 since N₄³ is also symmetric. □

By combining the discussion above, some conclusions are presented herein. For the covering evolution U → N₁ → N₄¹ → CN₄¹, we have

Theorem 4.49. L^i_U ⊑ L^i_{N₁} ⊑ L^i_{N₄¹} ⊑ L^i_{CN₄¹} and H^i_{CN₄¹} ⊑ H^i_{N₄¹} ⊑ H^i_{N₁} ⊑ H^i_U for each i = 1, 2, ..., 6.

Corollary 4.50. (L^i_U, H^i_U) ⊑ (L^i_{N₄¹}, H^i_{N₄¹}) ⊑ (L^i_{CN₄¹}, H^i_{CN₄¹}) for each i = 1, 2, ..., 6.

For the covering evolution U → N₃ → N₄³ → CN₄³, we have

Theorem 4.51. L^i_U ⊑ L^i_{N₃} ⊑ L^i_{N₄³} ⊑ L^i_{CN₄³} and H^i_{CN₄³} ⊑ H^i_{N₄³} ⊑ H^i_{N₃} ⊑ H^i_U for each i = 1, 2, ..., 6.

Corollary 4.52. (L^i_U, H^i_U) ⊑ (L^i_{N₄³}, H^i_{N₄³}) ⊑ (L^i_{CN₄³}, H^i_{CN₄³}) for each i = 1, 2, ..., 6.
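The preserved chains of Theorem 4.49 can be spot-checked exhaustively on a small covering. The sketch below is only an illustration for i = 1, under assumed definitions consistent with the examples in this paper: n₁(x) = ⋂{U ∈ U : x ∈ U}, cn(x) = {y : n(y) = n(x)}, n₄(x) = cn(x) − ⋃{n(z) : x ∉ n(z)}, and the granule-based operators L¹(Y) = ⋃{g : g ⊆ Y} and H¹(Y) = ⋃{g : g ∩ Y ≠ ∅}; the covering itself is an arbitrary toy example.

```python
# Exhaustive check of the i = 1 case of Theorem 4.49 on a toy covering:
# L1_U <= L1_N1 <= L1_N4^1 <= L1_CN4^1 and the reversed chain for H1.
# All operator definitions here are assumptions consistent with the paper's
# examples, not quotations of its formal definitions.
from itertools import combinations

def L1(granules, Y):           # lower approximation: union of granules inside Y
    return set().union(*[g for g in granules if g <= Y])

def H1(granules, Y):           # upper approximation: union of granules meeting Y
    return set().union(*[g for g in granules if g & Y])

def n1(cov, X):                # N1: intersection of all covering blocks through x
    return {x: frozenset(set.intersection(*[set(g) for g in cov if x in g]))
            for x in X}

def core(n, X):                # cn(x): points with exactly the same neighborhood
    return {x: frozenset(y for y in X if n[y] == n[x]) for x in X}

def n4(n, X):                  # n4(x) = cn(x) - union of n(z) not containing x
    cn = core(n, X)
    return {x: frozenset(cn[x] - {w for z in X if x not in n[z] for w in n[z]})
            for x in X}

def powerset(X):
    xs = sorted(X)
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

X = {1, 2, 3, 4}
cov = [frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4}), frozenset(X)]

N1 = n1(cov, X)
N41 = n4(N1, X)                # N4 applied after N1
CN41 = core(N41, X)            # the core system CN4^1 (a partition)

stages = [cov, list(N1.values()), list(N41.values()), list(CN41.values())]
lower_ok = all(L1(stages[k], Y) <= L1(stages[k + 1], Y)
               for Y in powerset(X) for k in range(3))
upper_ok = all(H1(stages[k + 1], Y) <= H1(stages[k], Y)
               for Y in powerset(X) for k in range(3))
print(lower_ok, upper_ok)  # True True
```

Every subset Y of X is tested at every consecutive pair of stages, so the two monotone chains hold for this covering.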

The following example shows that the accuracy is not preserved for the covering evolution U → N₂ → N₄² → CN₄².

Example 4.53. Let X = {1, 2, 3}, and consider the two systems Na = {na(1) = {1, 2, 3}, na(2) = {2, 3}, na(3) = {3}} and Nb = {nb(1) = {1, 2}, nb(2) = {2, 3}, nb(3) = {3}}. The two evolutions run as follows.

Starting from Na:
N₂: n(1) = {1, 2, 3}, n(2) = {1, 2, 3}, n(3) = {1, 2, 3};
CN₂: cn(1) = {1, 2, 3}, cn(2) = {1, 2, 3}, cn(3) = {1, 2, 3};
N₄²: n(1) = {1, 2, 3}, n(2) = {1, 2, 3}, n(3) = {1, 2, 3};
CN₄²: cn(1) = {1, 2, 3}, cn(2) = {1, 2, 3}, cn(3) = {1, 2, 3}.

Starting from Nb:
N₂: n(1) = {1, 2}, n(2) = {1, 2, 3}, n(3) = {2, 3};
CN₂: cn(1) = {1}, cn(2) = {2}, cn(3) = {3};
N₄²: n(1) = {1}, n(2) = {2}, n(3) = {3};
CN₄²: cn(1) = {1}, cn(2) = {2}, cn(3) = {3}.

Put A = {1, 3}. For Na,

L¹_{Na}(A) = {3}, H¹_{Na}(A) = {1, 2, 3};
L¹_{N₂}(A) = ∅, H¹_{N₂}(A) = {1, 2, 3};
L¹_{N₄²}(A) = ∅, H¹_{N₄²}(A) = {1, 2, 3};
L¹_{CN₄²}(A) = ∅, H¹_{CN₄²}(A) = {1, 2, 3};

while for Nb,

L¹_{Nb}(A) = {3}, H¹_{Nb}(A) = {1, 2, 3};
L¹_{N₂}(A) = ∅, H¹_{N₂}(A) = {1, 2, 3};
L¹_{N₄²}(A) = {1, 3}, H¹_{N₄²}(A) = {1, 3};
L¹_{CN₄²}(A) = {1, 3}, H¹_{CN₄²}(A) = {1, 3}.

That is to say,

(|L¹_{Na}(A)|/|H¹_{Na}(A)|, |L¹_{N₂}(A)|/|H¹_{N₂}(A)|, |L¹_{N₄²}(A)|/|H¹_{N₄²}(A)|, |L¹_{CN₄²}(A)|/|H¹_{CN₄²}(A)|) = (1/3, 0, 0, 0),

while

(|L¹_{Nb}(A)|/|H¹_{Nb}(A)|, |L¹_{N₂}(A)|/|H¹_{N₂}(A)|, |L¹_{N₄²}(A)|/|H¹_{N₄²}(A)|, |L¹_{CN₄²}(A)|/|H¹_{CN₄²}(A)|) = (1/3, 0, 1, 1).
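Example 4.53 can be replayed mechanically. The sketch below is an illustration under assumed definitions consistent with the computations of the example: n₂(x) = ⋃{g : x ∈ g}, cn(x) = {y : n(y) = n(x)}, n₄(x) = cn(x) − ⋃{n(z) : x ∉ n(z)}, the granule-based pair L¹(Y) = ⋃{g : g ⊆ Y} and H¹(Y) = ⋃{g : g ∩ Y ≠ ∅}, and accuracy |L¹(A)|/|H¹(A)|.

```python
# Replaying Example 4.53: under the evolution U -> N2 -> N4^2 -> CN4^2,
# the accuracy |L1(A)| / |H1(A)| drops to 0 for Na but rises to 1 for Nb,
# so this evolution does not preserve accuracy.
# All definitions below are assumptions consistent with the paper's examples.
from fractions import Fraction

X = frozenset({1, 2, 3})
A = frozenset({1, 3})

def n2(granules):              # union of all granules through x
    return {x: frozenset().union(*[g for g in granules if x in g]) for x in X}

def core(n):                   # cn(x): points with the same neighborhood as x
    return {x: frozenset(y for y in X if n[y] == n[x]) for x in X}

def n4(n):                     # n4(x) = cn(x) - union of n(z) with x not in n(z)
    cn = core(n)
    return {x: frozenset(cn[x] - {w for z in X if x not in n[z] for w in n[z]})
            for x in X}

def L1(granules, Y):
    return set().union(*[g for g in granules if g <= Y])

def H1(granules, Y):
    return set().union(*[g for g in granules if g & Y])

def accuracy(granules, Y):
    return Fraction(len(L1(granules, Y)), len(H1(granules, Y)))

Na = [frozenset({1, 2, 3}), frozenset({2, 3}), frozenset({3})]
Nb = [frozenset({1, 2}), frozenset({2, 3}), frozenset({3})]

results = {}
for name, start in (("Na", Na), ("Nb", Nb)):
    m2 = n2(start)
    m4 = n4(m2)
    c4 = core(m4)
    stages = [start, list(m2.values()), list(m4.values()), list(c4.values())]
    results[name] = [accuracy(s, A) for s in stages]

print(results["Na"])
print(results["Nb"])
```

The computed tuples confirm the qualitative conclusion of the example: the evolution starting from Na loses all accuracy, while the one starting from Nb becomes exact.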

5. Conclusion

The research we have done suggests that GSPL and GSPH are the fundamental properties of covering-based approximation operators viewed as generalizations of the classical Pawlak rough sets. Considering the growing number of covering-based approximation operators and their consistency on partitions, we propose three covering evolutions to transform arbitrary coverings into partitions. From a topological standpoint on neighborhood operators and the internal structure of 1-neighborhood systems, we refine coverings by constructing symmetric or transitive neighborhood operators until we obtain partitions for the approximation operators. The investigation carried out reveals that the accuracies of the approximation operators are preserved by two of the covering evolutions. This work is the first attempt to build mappings with good properties from covering spaces to partition spaces from the aspect of granules. All our preliminary results throw light on the interaction between rough set theory and general topology. In the future, we will further study the properties of these mappings, for example, the effect of reduction on covering evolutions. Moreover, covering-based multi-granulation rough sets will be taken into consideration. We hope to categorize covering-based approximation operators by this kind of mappings.

Declaration of competing interest

We wish to confirm that there are no known conflicts of interest associated with this publication and there has been no significant financial support for this work that could have influenced its outcome.

Acknowledgements

This work is partly supported by the National Science Foundation of China (Nos. 11401262, 61472469) and the National Science Foundation of Jiangsu Province under Grant No. BK20140503.