Rough multiple objective programming

T. E. M. Atteya, Department of Engineering Physics and Mathematics, Faculty of Engineering, Tanta University, Tanta, Egypt

Article history: Received 13 December 2013; accepted 30 June 2015.
Keywords: Multiple objective programming; Rough sets; Rough programming; Rough efficient solution

Abstract. In this paper, we focus on characterizing and solving multiple objective programming problems whose formulation involves imprecision of a vague nature. Rough set theory is used only to model the vague data in such problems, and our contribution to the data mining process is confined to the post-processing stage. These new problems are called rough multiple objective programming (RMOP) problems and are classified into three classes according to the place of the roughness in the problem. New concepts and theorems are introduced along the lines of their crisp counterparts, e.g. rough complete solution, rough efficient set, rough weak efficient set, rough Pareto front and weighted-sum problem. To avoid prolonging the paper, only the 1st-class, where the decision set is a rough set and all the objectives are crisp functions, is investigated and discussed in detail. Furthermore, a flowchart for solving 1st-class RMOP problems is presented. © 2015 Elsevier B.V. and Association of European Operational Research Societies (EURO) within the International Federation of Operational Research Societies (IFORS). All rights reserved.

1. Introduction

Decision making is a very important and much studied application of mathematical methods in various fields of human activity. In real-world situations, decisions are nearly always made on the basis of information which, at least in part, is vague in nature. In some cases (e.g. zooming out, granular computing and system-complexity reduction), vague information is used as an approximation to more precise information; this form of approximation is convenient and sufficient for making good enough decisions. In other cases (e.g. image processing and pattern recognition), and due to the limited precision of the data acquisition phase, vague information is the only form of information available to the decision maker. Since it was pioneered by Pawlak, rough set theory (RST) (Pawlak, 1982, 1996) has become a topic of great interest in several fields. Its capability of handling vagueness and imprecision in real-life problems has attracted researchers to apply RST in many areas, one of which is optimization. Most real-life problems involve (1) optimizing simultaneously a collection of conflicting and competing objectives, i.e. a multiple objective programming (MOP) process, and (2) vague or imprecise descriptions of some parts of the problem. Therefore, we usually need a suitable framework for handling this hybridization of MOP and vagueness.




For the conventional MOP problem (Ehrgott, 2005; Hwang & Masud, 1979), the aim is to maximize or minimize a set of objectives over a certain decision set, both of which are precisely defined. But in many realistic situations the available data suffer from vagueness and inexactness, and the decision maker may only be able to specify the objectives and/or the decision set imprecisely, in a 'rough sense', using RST. Youness (2006) was the first to apply RST to the single-objective programming (SOP) problem; he proposed a new optimization problem with a rough decision set and a crisp objective function, called the "rough single-objective programming" (RSOP) problem, and defined two concepts of optimal solutions, namely "surely optimal" and "possibly optimal". Thereafter, several attempts were made to develop the concept of rough mathematical programming; for more details see Xu and Yao (2009a, 2009b), Osman et al. (2011), Lu, Huang, and He (2011), Tao and Xu (2012) and Zhang, Shi, and Gao (2009). Hence, for the sake of acquiring more realistic models and results for real-life MOP problems, we present a new extension of the RSOP models of Osman et al. (2011) to the case of rough multiple objective programming (RMOP). A new framework for modeling and solving the RMOP problem is proposed without requiring any additional data.

2. Rough set theory (Pawlak, 1982, 1996; Yao, 2008; Zhang & Wu, 2001)

RST was proposed by Pawlak in the early 1980s and presents a new mathematical approach to imperfect (vague/imprecise) knowledge. The problem of imperfect knowledge has been tackled for a long time by philosophers, logicians and mathematicians.




Recently, RST has proven to be an excellent mathematical tool for dealing with vague and imprecise descriptions of objects. It has become a crucial tool for artificial intelligence and the cognitive sciences, especially in the areas of machine learning, knowledge acquisition, decision analysis, knowledge discovery from databases, expert systems, inductive reasoning and pattern recognition. RST expresses imprecision by employing a boundary region of the vague object (e.g. a set, number, interval or function). If the boundary region of an object is empty, the object is crisp (exact); otherwise the object is rough (inexact). A nonempty boundary region means that our knowledge about the object is not sufficient to define it precisely; the bigger the boundary, the higher the imprecision of the knowledge we have about the object.

Let U be a non-empty finite set of objects, called the universal set, and let E ⊆ U × U be an equivalence relation on U. The ordered pair A = (U, E) is called the approximation space generated by E on U. E generates the partition U/E = {Y_1, Y_2, . . . , Y_m}, where Y_1, Y_2, . . . , Y_m are the equivalence classes of the approximation space A. In RST, any subset M ⊆ U is described by its lower and upper approximations in terms of the equivalence classes of A, as follows:

E_*(M) = ∪{Y_i ∈ U/E | Y_i ⊆ M},
E^*(M) = ∪{Y_i ∈ U/E | Y_i ∩ M ≠ ∅}.

The sets E_*(M) and E^*(M) (or simply M_* and M^*) are called the lower and the upper approximations of M, respectively, in the approximation space A. Therefore, M_* ⊆ M ⊆ M^*. The difference between the upper and the lower approximations is called the boundary of M and is denoted by BN_E(M) = M^* − M_* (or simply M_BN). The set M is crisp (exact) in A iff M_BN = ∅; otherwise M is rough (inexact) in A. In RST, each element x ∈ U is classified as 'surely' inside M iff x ∈ M_*, or as 'maybe' inside M (its membership cannot be decided) iff x ∈ M_BN; otherwise x is surely outside M. Furthermore, an element x ∈ U is said to be 'probably inside M' iff x ∈ M^*. On the other hand, each equivalence class Y ∈ U/E is classified as 'completely' included in M iff Y ⊆ M_*, or as 'partially' included in M iff Y ⊆ M_BN; otherwise Y is completely not included in M. Furthermore, an equivalence class Y ∈ U/E is said to be 'possibly included in M' iff Y ⊆ M^*.
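For a finite universe with an explicitly listed partition, these approximation operators can be computed by direct enumeration. The following minimal sketch (in Python; the universe, partition and target set are invented here purely for illustration and are not taken from the paper) builds E_*(M), E^*(M) and the boundary of a set M.

```python
# Minimal sketch: lower/upper approximations and boundary of a set M in A = (U, E),
# with the partition U/E listed explicitly.  Toy data for illustration only.

def approximations(partition, M):
    """Return (M_lower, M_upper, M_boundary) for a subset M of the universe."""
    M = set(M)
    lower, upper = set(), set()
    for Y in partition:
        if Y <= M:          # class entirely contained in M -> contributes to E_*(M)
            lower |= Y
        if Y & M:           # class meets M                 -> contributes to E^*(M)
            upper |= Y
    return lower, upper, upper - lower

partition = [{1, 2}, {3, 4}, {5, 6}, {7, 8}]      # U/E for U = {1,...,8}
M_low, M_up, M_bn = approximations(partition, {1, 2, 3, 5})
print(M_low)   # {1, 2}             -> elements surely inside M
print(M_up)    # {1, 2, 3, 4, 5, 6} -> elements probably inside M
print(M_bn)    # {3, 4, 5, 6}       -> nonempty boundary, so M is rough in A
```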







3. Rough single-objective programming (Osman et al., 2011)

Consider the following crisp SOP problem:

max g(x), x ∈ M        (1)

where g(x) is the objective function and M is the feasible set of the problem. In the conventional mathematical programming problem it is assumed that all the parts (i.e. g(x) and M) are defined in a crisp sense and that "max" is a strict imperative. However, in many practical situations it may not be reasonable to require that the feasible set or the objective function be specified in precise crisp terms. In such situations it is desirable to use some type of modeling that is capable of handling vagueness and imprecision in the problem. This led to the hybridization of SOP and RST into the concept of "rough single-objective programming". RSOP problems are broadly classified according to the place of the roughness into three classes, as follows:

1st-Class: problems with a rough feasible set and a crisp objective function.
2nd-Class: problems with a crisp feasible set and a rough objective function.
3rd-Class: problems with a rough feasible set and a rough objective function.

Unlike the crisp case (where the optimal value is a single crisp value), the optimal value in RSOP, denoted by ḡ, is defined by its lower and upper bounds ḡ_* and ḡ^*, such that ḡ_* ≤ ḡ ≤ ḡ^*. Therefore, in RSOP we can say that:

• a solution x is surely-optimal if g(x) = ḡ^*,
• a solution x is probably-optimal if g(x) ≥ ḡ_*,
• a solution x is surely-not optimal if g(x) < ḡ_*.

Also, in the 1st and 3rd classes of RSOP (where the feasible set is a rough set), it is remarkable that:

• a solution x is surely-feasible iff it belongs to the lower approximation of the feasible set,
• a solution x is probably-feasible iff it belongs to the upper approximation of the feasible set,
• a solution x is surely-not feasible iff it does not belong to the upper approximation of the feasible set.

Furthermore, in RSOP the single optimal set is replaced by four optimal sets (see Fig. 1) covering all the possible degrees of feasibility and optimality of the solutions, as follows:

• The set of all surely-feasible, surely-optimal solutions, denoted by FO_ss.
• The set of all surely-feasible, probably-optimal solutions, denoted by FO_sp.
• The set of all probably-feasible, surely-optimal solutions, denoted by FO_ps.
• The set of all probably-feasible, probably-optimal solutions, denoted by FO_pp.

Therefore, we have FO_ss ⊆ FO_sp ⊆ FO_pp, FO_ss ⊆ FO_ps ⊆ FO_pp and FO_ss = FO_sp ∩ FO_ps.

Fig. 1. The optimal sets of the RSOP problem.

3.1. The 1st-class of RSOP problems (Osman et al., 2011)

Suppose that A = (U, E) is an approximation space generated by an equivalence relation E on the universe U, and U/E = {Y_1, Y_2, . . . , Y_m} is the partition generated by E on U. An RSOP problem of the 1st-class takes the following form:

max g(x), x ∈ M
s.t. M_* ⊂ M ⊂ M^*, M_*, M^* ⊆ U/E        (2)

where g : U → R is a crisp objective function and M ⊂ U is a rough set in the approximation space A, representing the feasible set of the problem. M is given only by its lower and upper approximations, M_* and M^*, respectively, and the nonempty boundary region (M_BN = M^* − M_* ≠ ∅) of the feasible set conveys the notion of 'rough feasibility' in problem (2). The lower and upper bounds of the optimal objective value ḡ in problem (2) are given by

ḡ_* = max{a, b},    ḡ^* = max{a, c}

where (assuming the following crisp problems have optimal solutions)

a = max{g(x) | x ∈ M_*},
b = max{ min{g(x) | x ∈ Y} | Y ∈ U/E, Y ⊆ M_BN },
c = max{g(x) | x ∈ M_BN}.

Therefore, the optimal sets of problem (2) are given as follows:

FO_ss = {x ∈ M_* | g(x) = ḡ^*}


FO_sp = {x ∈ M_* | g(x) ≥ ḡ_*}
FO_ps = {x ∈ M^* | g(x) = ḡ^*}
FO_pp = {x ∈ M^* | g(x) ≥ ḡ_*}
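On a finite universe these formulas can be evaluated by direct enumeration of the equivalence classes. The sketch below is a minimal illustration of the bounds ḡ_*, ḡ^* and the four optimal sets of problem (2); the helper name rsop_optimal_sets, the toy partition and the objective are assumptions made here for demonstration, not part of the paper.

```python
# Sketch: bounds and optimal sets of a 1st-class RSOP problem on a finite universe.

def rsop_optimal_sets(partition, lower_idx, boundary_idx, g):
    """partition: list of equivalence classes (sets of points);
    lower_idx / boundary_idx: indices of the classes forming M_* and M_BN."""
    M_low = set().union(*(partition[i] for i in lower_idx))
    M_bn  = set().union(*(partition[i] for i in boundary_idx))
    M_up  = M_low | M_bn

    a = max(g(x) for x in M_low)                                    # a = max over M_*
    b = max(min(g(x) for x in partition[i]) for i in boundary_idx)  # b = best guaranteed value from a boundary class
    c = max(g(x) for x in M_bn)                                     # c = max over M_BN
    g_lo, g_up = max(a, b), max(a, c)                               # bounds on the optimal value

    FO_ss = {x for x in M_low if g(x) == g_up}
    FO_sp = {x for x in M_low if g(x) >= g_lo}
    FO_ps = {x for x in M_up  if g(x) == g_up}
    FO_pp = {x for x in M_up  if g(x) >= g_lo}
    return (g_lo, g_up), (FO_ss, FO_sp, FO_ps, FO_pp)

# Toy instance: U = {0,...,9}, classes of size 2, M_* = {0,...,5}, M_BN = {6,...,9}
partition = [{0, 1}, {2, 3}, {4, 5}, {6, 7}, {8, 9}]
print(rsop_optimal_sets(partition, [0, 1, 2], [3, 4], lambda x: -(x - 6) ** 2))
# ((-1, 0), (set(), {5}, {6}, {5, 6, 7}))
```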

4. Rough multiple objective programming

In this section, we present a new proposal to extend the concept of "rough programming" to the case of multiple objective programming. This proposal is called rough multiple objective programming (RMOP) and represents the hybridization of MOP and RST. Generally, the crisp MOP problem is defined as follows:

max F(x) = (f_1(x), . . . , f_m(x))^T, x ∈ M
s.t. M ⊆ U, m ≥ 2        (3)

where U is a non-empty finite set of objects, called the universal set, M is the decision set and x is the decision variable. F : U → R^m is the objective vector, composed of m scalar objective functions f_i : U → R, i = 1, . . . , m. The sets U and R^m are known as the 'decision variable space' and the 'objective function space', respectively. In contrast to SOP, a solution to a MOP problem is more of a concept than a definition. Typically there is no single global optimal solution in MOP, and it is often necessary to determine a set of points that all fit a predetermined definition of an optimum. One property commonly considered necessary for any candidate solution of the multiple objective problem is that it is not dominated by any other solution in the feasible set. Non-domination (also called best compromise, efficiency or Pareto optimality) is the central concept in MOP precisely because an optimal solution for one objective function is not necessarily optimal for the others. The predominant concept in defining an optimal point is that of Pareto optimality (Ehrgott, 2005; Hwang & Masud, 1979).

Remark 1. For any two vectors F_1, F_2 ∈ R^m, if at least one component of F_1 is greater than its corresponding component of F_2, then we write F_1 ≰ F_2. Also, if at least one component of F_1 is greater than or equal to its corresponding component of F_2, then we write F_1 ≮ F_2.

Remark 2. In the MOP problem (3), a solution x̂ is said to be 'not dominated' by any member of the set M if and only if, ∀x ∈ M, either F(x̂) = F(x) or F(x̂) ≰ F(x).

In the conventional crisp scenario it is assumed that all of the objectives f_i(x), i = 1, 2, . . . , m, and the decision set M of problem (3) are defined precisely. However, in practice this is not always the case. Sometimes, in multiple objective programming, one or more of the problem components may only be defined through vague descriptions, and using rough set theory this can be handled easily. The roughness may appear in the decision set and/or the objectives of the MOP problem, and the problem is then called a rough multiple objective programming (RMOP) problem. According to the place of the roughness, RMOP problems can be broadly classified into three classes as follows:

1st-Class: problems with a rough feasible set and crisp objective functions.
2nd-Class: problems with a crisp feasible set and at least one rough objective function.
3rd-Class: problems with a rough feasible set and at least one rough objective function.

Definition 1. In problem (3), the objective vector F̄, which is composed of the maximum values of the objectives over the feasible set M, is called the ideal objective vector, i.e. F̄ = (f̄_1, . . . , f̄_m) with f̄_i = max{f_i(x) | x ∈ M} ∀i = 1, . . . , m.

In RMOP, due to the roughness wherever it exists in the problem, the ideal objective vector F̄ is defined by its lower and upper bounds F̄_* = (f̄_1*, . . . , f̄_m*) and F̄^* = (f̄_1^*, . . . , f̄_m^*), respectively, where F̄_* ≤ F̄ ≤ F̄^* and f̄_i* ≤ f̄_i ≤ f̄_i^* ∀i = 1, . . . , m. Here f̄_i* and f̄_i^* are called the lower and upper bounds of f̄_i, respectively. Furthermore, only in the 1st and 3rd classes of RMOP problems (where the feasible set is a rough set) do we have solutions with different degrees of feasibility (e.g. surely and probably feasible); in the 2nd class, all feasible solutions are surely-feasible.

4.1. The 1st-class of RMOP problems

In this section we define and discuss in detail the RMOP problem in which the decision set is a rough set and all of the objectives are crisp functions. The roughness of the decision set appears when the feasible solutions have different degrees of feasibility (e.g. surely and probably feasible). This case mostly arises in approximation processes such as zooming out and granulation (e.g. in pattern recognition and image processing). Also, owing to insufficient and limited precision in the data mining and information acquisition phases, we may face vagueness in data classifications, and this causes rough feasibility too. Suppose that A = (U, E) is an approximation space generated by an equivalence relation E on a universal set U, and U/E = {Y_1, Y_2, . . . , Y_m} is the partition generated by E on U. Then the general 1st-class RMOP problem is expressed as follows:

max F(x) = (f_1(x), . . . , f_m(x))^T, x ∈ M
s.t. M_* ⊂ M ⊂ M^*
     M_*, M^* ⊆ U/E
     m ≥ 2        (4)

where M ⊂ U is a rough set in the approximation space A, given only by its lower and upper approximations M_* and M^*, respectively; x is the decision variable; and F : U → R^m is the objective vector, composed of m scalar crisp objective functions f_i : U → R, i = 1, 2, . . . , m. Characterizing the optimality conditions, efficient solutions, weak efficient solutions, Pareto fronts and optimal sets of problem (4) is the core of the following discussion. The key idea in the following definitions is to use the notions of the smallest and largest probable feasible sets in order to characterize the 'probably' and 'surely' states of the different types of solutions (e.g. efficient and weak-efficient solutions). A smallest probable feasible set is any set consisting of all the elements of the lower approximation M_* together with exactly one element from each equivalence class Y ⊆ M_BN (recall that Y ∩ M ≠ ∅ for every class Y ⊆ M_BN). Hence the smallest probable feasible set is not unique, while the largest probable feasible set is always the upper approximation M^*. Now, for problem (4), we have the following definitions and propositions.

Definition 2. The ideal objective vector F̄ = (f̄_1, . . . , f̄_m) is defined by its lower and upper bounds F̄_* = (f̄_1*, . . . , f̄_m*) and F̄^* = (f̄_1^*, . . . , f̄_m^*), respectively, where:

f̄_i* = max{a_i, b_i} and f̄_i^* = max{a_i, c_i}, i = 1, . . . , m

where (assuming the following crisp problems have optimal solutions)

a_i = max{f_i(x) | x ∈ M_*},
b_i = max{ min{f_i(x) | x ∈ Y} | Y ∈ U/E, Y ⊆ M_BN },
c_i = max{f_i(x) | x ∈ M_BN},    ∀i = 1, . . . , m.
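Computationally, Definition 2 just applies the single-objective bounds of Section 3.1 to each f_i separately. A minimal sketch follows; it reuses the rsop_optimal_sets helper from the earlier sketch, and that dependency, like the toy data, is an assumption of this illustration.

```python
# Sketch of Definition 2: the bounds of the ideal objective vector are the per-objective
# RSOP bounds.  Assumes rsop_optimal_sets from the Section 3.1 sketch is available.

def ideal_vector_bounds(partition, lower_idx, boundary_idx, objectives):
    per_obj = [rsop_optimal_sets(partition, lower_idx, boundary_idx, f)[0] for f in objectives]
    F_lower = tuple(lo for lo, up in per_obj)      # F_* = (f_1*, ..., f_m*)
    F_upper = tuple(up for lo, up in per_obj)      # F^* = (f_1^*, ..., f_m^*)
    return F_lower, F_upper

# Toy usage with two objectives on the earlier partition of U = {0,...,9}
partition = [{0, 1}, {2, 3}, {4, 5}, {6, 7}, {8, 9}]
print(ideal_vector_bounds(partition, [0, 1, 2], [3, 4],
                          [lambda x: -(x - 6) ** 2, lambda x: -x]))
# ((-1, 0), (0, 0)), i.e. F_* = (-1, 0) and F^* = (0, 0)
```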


Definition 3. An objective vector F̿ = (f̿_1, . . . , f̿_m) is said to be a surely-utopian objective vector if and only if its components are marginally greater than those of the upper bound of the ideal objective vector, i.e. f̿_i = f̄_i^* + ε_i with ε_i > 0, ∀i = 1, . . . , m.

Definition 4. An objective vector F̿ = (f̿_1, . . . , f̿_m) is said to be a probably-utopian objective vector if and only if its components are marginally greater than those of the lower bound of the ideal objective vector, i.e. f̿_i = f̄_i* + ε_i with ε_i > 0, ∀i = 1, 2, . . . , m.

Definition 5. A point x̂ ∈ M^* is said to be a surely-complete optimal solution if and only if F(x̂) = F̄^*.

Definition 6. A point x̂ ∈ M^* is said to be a probably-complete optimal solution if and only if F(x̂) ≥ F̄_*.

In problem (4), a point x̂ ∈ M^* is said to be a surely-efficient (or surely-Pareto optimal) solution if and only if it is not dominated by the points of the largest probable feasible set, i.e. M^*. Also, a point x̂ ∈ M^* is said to be a probably-efficient (probably-Pareto optimal) solution if and only if it is not dominated by the points of at least one of the smallest probable feasible sets.

Definition 7. A point x̂ ∈ M^* is said to be a surely-efficient solution if and only if, ∀x ∈ M^*, either F(x̂) = F(x) or F(x̂) ≰ F(x).

Definition 8. A point x̂ ∈ M^* is said to be a probably-efficient solution if and only if: (1) ∀x ∈ M_*, either F(x̂) = F(x) or F(x̂) ≰ F(x); and (2) for every class Y ⊆ M_BN there exists at least one point x ∈ Y such that either F(x̂) = F(x) or F(x̂) ≰ F(x).

Definition 9. The set of all surely-efficient solutions is called the surely-efficient set and denoted by P_s, while the set of all probably-efficient solutions is called the probably-efficient set and denoted by P_p.

Similarly, the surely- and probably-weak efficient solutions are defined as follows.

Definition 10. A point x̂ ∈ M^* is said to be a surely-weak efficient solution if and only if F(x̂) ≮ F(x) ∀x ∈ M^*.

Definition 11. A point x̂ ∈ M^* is said to be a probably-weak efficient solution if and only if: (1) F(x̂) ≮ F(x) ∀x ∈ M_*; and (2) for every class Y ⊆ M_BN there exists at least one point x ∈ Y such that F(x̂) ≮ F(x).

Definition 12. The set of all surely-weak efficient solutions is called the surely-weak efficient set and denoted by W_s, while the set of all probably-weak efficient solutions is called the probably-weak efficient set and denoted by W_p.

Proposition 1. P_s ⊆ P_p ⊆ W_p and P_s ⊆ W_s ⊆ W_p (see Fig. 2).

Fig. 2. The optimal sets of the 1st-class RMOP problem.
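For a finite universe, Definitions 7, 8, 10 and 11 reduce to elementary dominance checks: a surely-efficient point is screened against every point of M^*, while a probably-efficient point needs, within each boundary class, only one point that does not dominate it. A sketch of these tests follows (illustrative only; F is assumed to return a tuple of objective values, and the boundary classes are assumed to be listed explicitly).

```python
# Sketch of the dominance checks behind Definitions 7, 8, 10 and 11 (maximization).

def dominates(u, v):
    """u dominates v: u >= v componentwise with at least one strict improvement."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def strictly_dominates(u, v):
    """u > v in every component (the relation behind weak efficiency)."""
    return all(a > b for a, b in zip(u, v))

def surely_efficient(xh, M_up, F):                                  # Definition 7
    return not any(dominates(F(x), F(xh)) for x in M_up)

def probably_efficient(xh, M_low, boundary_classes, F):             # Definition 8
    return (not any(dominates(F(x), F(xh)) for x in M_low)
            and all(any(not dominates(F(x), F(xh)) for x in Y) for Y in boundary_classes))

def surely_weak_efficient(xh, M_up, F):                             # Definition 10
    return not any(strictly_dominates(F(x), F(xh)) for x in M_up)

def probably_weak_efficient(xh, M_low, boundary_classes, F):        # Definition 11
    return (not any(strictly_dominates(F(x), F(xh)) for x in M_low)
            and all(any(not strictly_dominates(F(x), F(xh)) for x in Y) for Y in boundary_classes))
```

Filtering M^* with these predicates yields P_s, P_p, W_s and W_p, from which the twelve sets listed next follow by intersection with M_* or M^*.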

According to the above notions of optimality and feasibility, we can easily get that:

• The set of all surely-feasible, surely-complete optimal solutions of problem (4) is defined as F_sO_sc = {x ∈ M_* | F(x) = F̄^*}.
• The set of all surely-feasible, probably-complete optimal solutions of problem (4) is defined as F_sO_pc = {x ∈ M_* | F(x) ≥ F̄_*}.
• The set of all probably-feasible, surely-complete optimal solutions of problem (4) is defined as F_pO_sc = {x ∈ M^* | F(x) = F̄^*}.
• The set of all probably-feasible, probably-complete optimal solutions of problem (4) is defined as F_pO_pc = {x ∈ M^* | F(x) ≥ F̄_*}.
• The set of all surely-feasible, surely-efficient solutions is defined as F_sO_se = M_* ∩ P_s.
• The set of all surely-feasible, probably-efficient solutions is defined as F_sO_pe = M_* ∩ P_p.
• The set of all probably-feasible, surely-efficient solutions is defined as F_pO_se = M^* ∩ P_s.
• The set of all probably-feasible, probably-efficient solutions is defined as F_pO_pe = M^* ∩ P_p.
• The set of all surely-feasible, surely-weak efficient solutions is defined as F_sO_sw = M_* ∩ W_s.
• The set of all surely-feasible, probably-weak efficient solutions is defined as F_sO_pw = M_* ∩ W_p.
• The set of all probably-feasible, surely-weak efficient solutions is defined as F_pO_sw = M^* ∩ W_s.
• The set of all probably-feasible, probably-weak efficient solutions is defined as F_pO_pw = M^* ∩ W_p.

Generally, in the MOP problem (3), the image of the efficient set in the objective vector space is called the 'Pareto front set', and the objective vector composed of the minimum values of the objectives over the efficient set is called the 'nadir objective vector', denoted by F^nad. Therefore, in problem (4) we can define the Pareto front sets and the nadir objective vectors as follows.

Definition 13.
• The Pareto front for the surely-feasible, surely-efficient set F_sO_se is defined as FR_ss = {F(x) | x ∈ F_sO_se}.
• The Pareto front for the surely-feasible, probably-efficient set F_sO_pe is defined as FR_sp = {F(x) | x ∈ F_sO_pe}.
• The Pareto front for the probably-feasible, surely-efficient set F_pO_se is defined as FR_ps = {F(x) | x ∈ F_pO_se}.
• The Pareto front for the probably-feasible, probably-efficient set F_pO_pe is defined as FR_pp = {F(x) | x ∈ F_pO_pe}.

Proposition 2. FR_ss ⊆ FR_sp ⊆ FR_pp, FR_ss ⊆ FR_ps ⊆ FR_pp and FR_ss = FR_sp ∩ FR_ps.

Definition 14.
• The nadir objective vector for the surely-feasible, surely-efficient set F_sO_se is denoted by F_ss^nad and defined by F_ss^nad = (f_ss,1^nad, . . . , f_ss,m^nad), where f_ss,i^nad = min{f_i(x) | x ∈ F_sO_se}, ∀i = 1, 2, . . . , m.
• The nadir objective vector for the surely-feasible, probably-efficient set F_sO_pe is denoted by F_sp^nad and defined by F_sp^nad = (f_sp,1^nad, . . . , f_sp,m^nad), where f_sp,i^nad = min{f_i(x) | x ∈ F_sO_pe}, ∀i = 1, 2, . . . , m.
• The nadir objective vector for the probably-feasible, surely-efficient set F_pO_se is denoted by F_ps^nad and defined by F_ps^nad = (f_ps,1^nad, . . . , f_ps,m^nad), where f_ps,i^nad = min{f_i(x) | x ∈ F_pO_se}, ∀i = 1, 2, . . . , m.


• The nadir objective vector for the probably-feasible, probably-efficient set F_pO_pe is denoted by F_pp^nad and defined by F_pp^nad = (f_pp,1^nad, . . . , f_pp,m^nad), where f_pp,i^nad = min{f_i(x) | x ∈ F_pO_pe}, ∀i = 1, 2, . . . , m.

Proposition 3. F_pp^nad ≤ F_ps^nad ≤ F_ss^nad and F_pp^nad ≤ F_sp^nad ≤ F_ss^nad.

Theorem 1. Recall from problem (4) that F(x) = (f_1(x), . . . , f_m(x)). If, for each i = 1, 2, . . . , m, the sets FO^i_ss, FO^i_sp, FO^i_ps and FO^i_pp represent the four optimal sets of the following RSOP problem

max f_i(x), x ∈ M
s.t. M_* ⊂ M ⊂ M^*        (5)

then:

• the set of all surely-feasible, surely-complete optimal solutions of problem (4) is F_sO_sc = {x ∈ M_* | F(x) = F̄^*} = ∩_{i=1}^m FO^i_ss;
• the set of all surely-feasible, probably-complete optimal solutions of problem (4) is F_sO_pc = {x ∈ M_* | F(x) ≥ F̄_*} = ∩_{i=1}^m FO^i_sp;
• the set of all probably-feasible, surely-complete optimal solutions of problem (4) is F_pO_sc = {x ∈ M^* | F(x) = F̄^*} = ∩_{i=1}^m FO^i_ps;
• the set of all probably-feasible, probably-complete optimal solutions of problem (4) is F_pO_pc = {x ∈ M^* | F(x) ≥ F̄_*} = ∩_{i=1}^m FO^i_pp.

Proof. The proof is straightforward. □
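In computational terms, Theorem 1 says that once the four optimal sets of problem (5) are available for every objective, the complete-optimal sets of problem (4) are plain set intersections. A toy sketch of this step (the per-objective sets below are invented purely for illustration):

```python
# Sketch of Theorem 1: the complete-optimal sets of problem (4) are intersections of the
# per-objective RSOP optimal sets.

from functools import reduce

def complete_optimal_sets(per_objective_sets):
    """per_objective_sets[i] = (FO_ss, FO_sp, FO_ps, FO_pp) for objective f_i."""
    return tuple(reduce(set.intersection, (fo[k] for fo in per_objective_sets))
                 for k in range(4))     # -> (Fs_Osc, Fs_Opc, Fp_Osc, Fp_Opc)

FO = [(set(), {5}, {6}, {5, 6, 7}),     # optimal sets of objective f_1 (toy data)
      ({5},  {5}, {5, 6}, {5, 6})]      # optimal sets of objective f_2 (toy data)
print(complete_optimal_sets(FO))        # (set(), {5}, {6}, {5, 6})
```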

Theorem 2. If FO_ss, FO_sp, FO_ps and FO_pp represent the four optimal sets of the following weighted-sum RSOP problem

max Σ_{i=1}^m w_i f_i(x), x ∈ M
s.t. M_* ⊂ M ⊂ M^*
     w_i > 0, ∀i = 1, . . . , m        (6)

then:

• x̂ ∈ FO_ss is a surely-feasible, surely-efficient solution of problem (4);
• x̂ ∈ FO_sp is a surely-feasible, probably-efficient solution of problem (4);
• x̂ ∈ FO_ps is a probably-feasible, surely-efficient solution of problem (4);
• x̂ ∈ FO_pp is a probably-feasible, probably-efficient solution of problem (4).

Proof. The proof of this theorem in the crisp MOP form is well known. In the RSOP problem (6), the feasibility and optimality characteristics of a solution (e.g. the surely and probably degrees) are transferred directly to feasibility and efficiency in problem (4). Therefore, a surely (or probably) feasible solution of problem (6) is a surely (or probably) feasible solution of problem (4), and a surely (or probably) optimal solution of problem (6) is a surely (or probably) efficient solution of problem (4). □

Theorem 3. If FO_ss, FO_sp, FO_ps and FO_pp represent the four optimal sets of the following weighted-sum RSOP problem

max Σ_{i=1}^m w_i f_i(x), x ∈ M
s.t. M_* ⊂ M ⊂ M^*
     w_i ≥ 0, ∀i = 1, . . . , m        (7)

then:

• x̂ ∈ FO_ss is a surely-feasible, surely-weak efficient solution of problem (4);
• x̂ ∈ FO_sp is a surely-feasible, probably-weak efficient solution of problem (4);
• x̂ ∈ FO_ps is a probably-feasible, surely-weak efficient solution of problem (4);
• x̂ ∈ FO_pp is a probably-feasible, probably-weak efficient solution of problem (4).

Proof. The proof is straightforward, like that of Theorem 2. □
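Theorems 2 and 3 turn the search for (weak-)efficient solutions of problem (4) into a family of single-objective RSOP problems, one per weight vector. A minimal sketch of this reduction follows; it reuses the rsop_optimal_sets helper sketched in Section 3.1, and that dependency, like the data layout, is an assumption of the illustration.

```python
# Sketch of the weighted-sum reduction behind Theorems 2 and 3.
# Assumes rsop_optimal_sets from the Section 3.1 sketch is available.

def weighted_sum_rsop(partition, lower_idx, boundary_idx, objectives, w):
    g_w = lambda x: sum(wi * f(x) for wi, f in zip(w, objectives))      # scalarized objective
    return rsop_optimal_sets(partition, lower_idx, boundary_idx, g_w)   # -> (bounds, (FO_ss, FO_sp, FO_ps, FO_pp))

# With every w_i > 0 (Theorem 2) the returned points are surely/probably efficient for
# problem (4); with w_i >= 0 and some weights equal to zero (Theorem 3) they are only
# guaranteed to be surely/probably weak efficient.
```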

The proposed approach and methodology for solving the 1st-class of RMOP problems can be summarized in the flowchart of Fig. 3.

Fig. 3. Flowchart of the proposed approach for solving the 1st-class RMOP problem.

Numerical example. Let U be a universal set defined by

U = {x ∈ R^2 | x_1^2 + x_2^2 ≤ 16},

where x = (x_1, x_2), and let K be the polytope generated by the following closed halfplanes:

h_1: x_2 − x_1 − 2 ≤ 0,
h_2: x_1 + x_2 + 2 ≥ 0,
h_3: x_1 + x_2 − 2 ≤ 0,
h_4: x_2 − x_1 + 2 ≥ 0.

Suppose that E is an equivalence relation on U such that U/E = {E_1, E_2, E_3}, where

E_1 = {x ∈ U | x is an interior point of the polytope K},
E_2 = {x ∈ U | x is a boundary point of the polytope K},
E_3 = {x ∈ U | x is an exterior point of the polytope K},

and M_* = E_1 ∪ E_2, M^* = E_1 ∪ E_2 ∪ E_3. Consider the following 1st-class RMOP problem:

max F(x) = (f_1(x), f_2(x))^T, x ∈ M
s.t. M_* ⊂ M ⊂ M^*

with f_1(x) = −(x_1 − 3)^2 − x_2^2 and f_2(x) = −x_1^2 − x_2^2.

Solution. Step 1: finding the complete optimal solutions (referring to Theorem 1):

a_1 = max{f_1(x) | x ∈ M_*} = −1,  b_1 = min{f_1(x) | x ∈ E_3} = −49  and  c_1 = max{f_1(x) | x ∈ E_3} = 0,

f̄_1* = max{a_1, b_1} = −1  and  f̄_1^* = max{a_1, c_1} = 0,

FO^1_ss = {x ∈ M_* | f_1(x) = 0} = ∅,
FO^1_sp = {x ∈ M_* | f_1(x) ≥ −1} = {(2, 0)},
FO^1_ps = {x ∈ M^* | f_1(x) = 0} = {(3, 0)},
FO^1_pp = {x ∈ M^* | f_1(x) ≥ −1} = {x ∈ R^2 | (x_1 − 3)^2 + x_2^2 ≤ 1}.

a_2 = max{f_2(x) | x ∈ M_*} = 0,  b_2 = min{f_2(x) | x ∈ E_3} = −16  and  c_2 = max{f_2(x) | x ∈ E_3} = (−2)^−, where (−2)^− = −2 − ε, ε > 0, ε ≈ 0,

f̄_2* = max{a_2, b_2} = 0  and  f̄_2^* = max{a_2, c_2} = 0,

FO^2_ss = {x ∈ M_* | f_2(x) = 0} = {(0, 0)},
FO^2_sp = {x ∈ M_* | f_2(x) ≥ 0} = {(0, 0)},


FO^2_ps = {x ∈ M^* | f_2(x) = 0} = {(0, 0)},
FO^2_pp = {x ∈ M^* | f_2(x) ≥ 0} = {(0, 0)}.







• The set of all surely-feasible, surely-complete optimal solutions is given by F_sO_sc = FO^1_ss ∩ FO^2_ss = ∅.
• The set of all surely-feasible, probably-complete optimal solutions is given by F_sO_pc = FO^1_sp ∩ FO^2_sp = ∅.
• The set of all probably-feasible, surely-complete optimal solutions is given by F_pO_sc = FO^1_ps ∩ FO^2_ps = ∅.
• The set of all probably-feasible, probably-complete optimal solutions is given by F_pO_pc = FO^1_pp ∩ FO^2_pp = ∅.

Step 2: finding the efficient solutions (referring to Theorem 2):

We can get efficient solutions by solving the following weighted-sum RSOP problem for different weight vectors:

max Σ_{i=1}^m w_i f_i(x), x ∈ M
s.t. M_* ⊂ M ⊂ M^*
     w_i > 0, ∀i = 1, . . . , m

w_1 = 0.5, w_2 = 0.5:  FO_ss = {(1.5, 0)},  FO_sp = {(1.5, 0)},  FO_ps = {(1.5, 0)},  FO_pp = {(1.5, 0)}
w_1 = 0.7, w_2 = 0.3:  FO_ss = ∅,  FO_sp = {(2, 0)},  FO_ps = {(2.1, 0)},  FO_pp = {(x_1, x_2) | (x_1 − 2.1)^2 + x_2^2 ≤ 0.01}
w_1 = 0.8, w_2 = 0.2:  FO_ss = ∅,  FO_sp = {(2, 0)},  FO_ps = {(2.4, 0)},  FO_pp = {(x_1, x_2) | (x_1 − 2.4)^2 + x_2^2 ≤ 0.16}








• The surely-feasible, surely-efficient solutions are {(1.5, 0)} ⊂ F_sO_se.
• The surely-feasible, probably-efficient solutions are {(1.5, 0), (2, 0)} ⊂ F_sO_pe.
• The probably-feasible, surely-efficient solutions are {(1.5, 0), (2.1, 0), (2.4, 0)} ⊂ F_pO_se.
• The probably-feasible, probably-efficient solutions are {(1.5, 0)} ∪ {(x_1, x_2) | (x_1 − 2.1)^2 + x_2^2 ≤ 0.01} ∪ {(x_1, x_2) | (x_1 − 2.4)^2 + x_2^2 ≤ 0.16} ⊂ F_pO_pe.

Step 3: finding the weak-efficient solutions (referring to Theorem 3): we can get weak-efficient solutions by solving the following weighted-sum RSOP problem:

max Σ_{i=1}^m w_i f_i(x), x ∈ M
s.t. M_* ⊂ M ⊂ M^*
     w_i ≥ 0, ∀i = 1, . . . , m

w_1 = 1, w_2 = 0:  FO_ss = ∅,  FO_sp = {(2, 0)},  FO_ps = {(3, 0)},  FO_pp = {(x_1, x_2) | (x_1 − 3)^2 + x_2^2 ≤ 1}
w_1 = 0, w_2 = 1:  FO_ss = {(0, 0)},  FO_sp = {(0, 0)},  FO_ps = {(0, 0)},  FO_pp = {(0, 0)}

• The surely-feasible, surely-weak efficient solutions are {(0, 0)} ⊂ F_sO_sw.
• The surely-feasible, probably-weak efficient solutions are {(0, 0), (2, 0)} ⊂ F_sO_pw.
• The probably-feasible, surely-weak efficient solutions are {(0, 0), (3, 0)} ⊂ F_pO_sw.
• The probably-feasible, probably-weak efficient solutions are {(0, 0)} ∪ {(x_1, x_2) | (x_1 − 3)^2 + x_2^2 ≤ 1} ⊂ F_pO_pw.
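The bounds computed in Step 1 can be checked numerically by replacing U with a fine grid; this discretization is an assumption introduced here only as a sanity check and is not part of the paper's construction. At grid step 0.1 the script below reproduces f̄_1* = −1, f̄_1^* = 0 and f̄_2* = f̄_2^* = 0, with c_2 slightly below −2, in line with the paper's (−2)^−.

```python
# Numerical sanity check of Step 1 on a grid discretization of U (the grid is an
# assumption introduced for illustration; the paper treats U as a continuous set).

f1 = lambda p: -(p[0] - 3) ** 2 - p[1] ** 2
f2 = lambda p: -p[0] ** 2 - p[1] ** 2

E1, E2, E3 = [], [], []                       # interior / boundary / exterior of K, inside U
for i in range(-40, 41):                      # grid step 0.1 on the disk of radius 4
    for j in range(-40, 41):
        if i * i + j * j > 1600:              # outside U
            continue
        s = abs(i) + abs(j)                   # K = {|x1| + |x2| <= 2}  <=>  s <= 20 on this grid
        (E1 if s < 20 else E2 if s == 20 else E3).append((i / 10, j / 10))

M_low, M_bn = E1 + E2, E3                     # M_* = E1 ∪ E2, and E3 is the only boundary class
for name, f in (("f1", f1), ("f2", f2)):
    a = max(map(f, M_low))                    # a_i = max over M_*
    b = min(map(f, M_bn))                     # one boundary class only, so b_i = min over E3
    c = max(map(f, M_bn))                     # c_i = max over E3
    print(name, "bounds:", max(a, b) + 0.0, max(a, c) + 0.0)   # + 0.0 normalizes signed zero
# Prints: f1 bounds: -1.0 0.0   and   f2 bounds: 0.0 0.0
# (on this grid c_2 is about -2.2, i.e. below -2, consistent with c_2 = (-2)^-).
```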

5. Conclusion

In this paper we presented the concept of "rough multiple objective programming" from a new point of view, as multiple objective programming in a rough environment, and classified its problems into three classes. The basic model and the necessary concepts for the 1st-class were defined and presented to characterize the solutions of MOP problems when the roughness exists only in the decision set. We also introduced the weighted-sum SOP problem that is equivalent to the 1st-class RMOP problem and clarified the meaning of its solutions. Furthermore, we presented a flowchart of the proposed approach for solving 1st-class problems.

Acknowledgement

The author would like to express his deep thanks to the reviewers and is highly grateful for their inestimable comments and suggestions.

References

Ehrgott, M. (2005). Multicriteria optimization. Berlin: Springer.
Hwang, C. L., & Masud, A. S. (1979). Multiple objective decision making: Methods and applications. Berlin: Springer-Verlag.
Lu, H. W., Huang, G. H., & He, L. (2011). An inexact rough-interval fuzzy linear programming method for generating conjunctive water-allocation strategies to agricultural irrigation systems. Applied Mathematical Modelling, 35(9), 4330–4340.
Osman, M. S., Lashein, E. F., Youness, E. A., & Atteya, T. E. M. (2011). Mathematical programming in rough environment. Optimization, 60(5), 603–611.
Pawlak, Z. (1982). Rough sets. International Journal of Computer and Information Sciences, 11(5), 341–356.
Pawlak, Z. (1996). Rough sets, rough relations and rough functions. Fundamenta Informaticae, 27, 103–108.
Tao, Z. M., & Xu, J. P. (2012). A class of rough multiple objective programming and its application to solid transportation problem. Information Sciences, 188(2), 215–235.
Xu, J. P., & Yao, L. M. (2009a). A class of expected value multiple objective programming problems with random rough coefficients. Mathematical and Computer Modelling, 50(1–2), 141–158.
Xu, J. P., & Yao, L. M. (2009b). A class of multiobjective linear programming models with random rough coefficients. Mathematical and Computer Modelling, 49(1–2), 189–206.
Yao, Y. (2008). Probabilistic rough set approximations. International Journal of Approximate Reasoning, 49(2), 255–271.
Youness, E. A. (2006). Characterizing solutions of rough programming problems. European Journal of Operational Research, 168(3), 1019–1029.
Zhang, W. X., & Wu, W. Z. (2001). Theory and method of rough sets. Beijing: Science Press.
Zhang, Z. W., Shi, Y., & Gao, G. X. (2009). A rough set-based multiple criteria linear programming approach for the medical diagnosis and prognosis. Expert Systems with Applications, 36(5), 8932–8937.
