An evolving neural network approach in unit commitment solution

Microprocessors and Microsystems 24 (2000) 251–262 www.elsevier.nl/locate/micpro

An evolving neural network approach in unit commitment solution M.H. Wong, T.S. Chung, Y.K. Wong* Department of Electrical Engineering, The Hong Kong Polytechnic University, Kowloon, Hong Kong, People’s Republic of China Received 22 July 1999; revised 10 March 2000; accepted 18 May 2000

Abstract

In this paper, the Genetic Algorithm (GA) is used to evolve the weights and the interconnections of a neural network to solve the Unit Commitment problem. We emphasize the determination of appropriate GA parameters for evolving the neural network, i.e. the population size and the probabilities of crossover and mutation, and the method used for selection between generations, such as Tournament selection, Roulette Wheel selection and Ranking selection. Performance comparisons are conducted to analyze the learning curves under different parameters, to find out which has a dominant influence on the effectiveness of the algorithm. © 2000 Elsevier Science B.V. All rights reserved.

Keywords: Genetic algorithm; Unit commitment problem; Neural network

1. Introduction

The Unit Commitment (UC) problem involves determining a start-up and shut-down schedule for the generating units over a period of time to meet the forecasted load demand at minimum cost. The commitment schedule must satisfy other constraints such as those on reserve, fuel, and individual units. Mathematically, the UC problem is a mixed-integer one, typically with thousands of 0–1 variables and a large and complex set of constraints. The exact solution of such a problem can only be obtained by exhaustive computation, usually at the cost of a prohibitively long computation time. Intensive research efforts [1–14] have been made over the past years to achieve a near-optimal solution of the UC problem in an efficient manner. The literature displays a wide range of UC methods with various degrees of accuracy and efficiency in handling difficult constraints and heuristics. In this paper, the GA is used to evolve the weights and the interconnections of a neural network. We emphasize the determination of appropriate GA parameters for evolving the neural network, i.e. the population size and the probabilities of crossover and mutation, and the method used for selection between generations, such as Tournament selection, Roulette Wheel selection and Ranking selection. Performance comparisons are conducted to analyze the learning curves under different parameters, to

* Corresponding author. Tel.: +852-276-6140; fax: +852-2330-1544. E-mail address: [email protected] (Y.K. Wong).

find out which has a dominant influence on the effectiveness of the algorithm. The paper is organized as follows: Section 1 is a brief introduction to the Unit Commitment problem. The theory of the Genetic Algorithm, the Neural Network and the Evolving Neural Network is introduced in Section 2. The test system and results are discussed in Section 3. The conclusion is given in Section 4.

2. Theory of algorithm

2.1. Genetic algorithm

The GA is a population search method, in which a population of strings is kept in each generation. The next generation is produced by simulating the natural processes of reproduction: selection, crossover and mutation.

Selection: Selection models nature's survival-of-the-fittest principle. The aim is to ensure that fitter strings (solutions) receive a higher number of offspring, thereby getting a higher chance of surviving into the new generation. Selection strategies include Roulette Wheel selection, Tournament selection and Ranking selection. The Roulette Wheel selection scheme allocates offspring based on the ratio of the fitness value of a string to the average fitness value of the population. Each string is allocated a slot of a roulette wheel (sector angle = 2π × fitness / average fitness). A string is allocated an offspring if a random number between 0 and 2π falls in the sector corresponding to the string. It is intuitive and easy to implement. In Ranking selection, it

0141-9331/00/$ - see front matter © 2000 Elsevier Science B.V. All rights reserved. PII: S0141-9331(00)00076-4


M.H. Wong et al. / Microprocessors and Microsystems 24 (2000) 251–262

Fig. 1. Weight option with Roulette Wheel selection, Pc = 0.7, Pm = 0.01.

Fig. 2. Weight option with Roulette Wheel selection, pop size = 31, Pc = 0.7.

ranks members by their position in the population rather than by the magnitude of their fitness. The method first sorts all the elements of the population according to their objective function values, then assigns a maximum number of offspring to the best element of the population and a linearly decreasing count of offspring to the rest of the population. The use of ranking allows the algorithm to control the allocation of offspring to the members of the population; as a result, the loss of diversity, which usually leads to premature convergence, can be substantially reduced. Tournament selection is deterministic. Pairs of individuals are picked at random from the population, and the one with the higher fitness is copied into the mating pool. Larger tournaments are obtained by picking n individuals instead of two (a pair); this increases the selection pressure.

Mutation: Every string in the mating pool may be mutated with the given mutation probability. For each string undergoing mutation, a number is randomly drawn from a uniform [0,1] distribution; if this number is less than the mutation probability, a bit in the string is changed from 1 to 0, or vice versa. The mutation operator produces a new string.

Crossover: The strings in the mating pool are grouped into couples. Each couple of strings swaps bits according to the crossover probability. Uniform crossover is used here, which gives every bit in the string the same chance to undergo crossover and is therefore better than one-point or two-point crossover. The crossover operation happens if a number, also randomly drawn from a uniform [0,1] distribution, is less than the given crossover probability. A

Fig. 3. Weight option with Roulette Wheel selection, pop size = 31, Pm = 0.01. (Error against generations for crossover rates 0.1, 0.3, 0.5, 0.7 and 0.9.)

Fig. 4. Weight option with Tournament selection, Pc = 0.7, Pm = 0.01. (Error against generations for population sizes 11, 21, 31, 41 and 51.)

mask string is set up with randomly selected bits. The bits of the couple's strings that correspond to the 1 bits of the mask string are swapped, while the others stay. After all strings finish mutation and crossover, a new generation has been produced. Each string is decoded back to the control variables to compute its fitness score. The statistics routine then computes the new maximum fitness, minimum fitness, sum of fitness and average fitness, and the convergence condition is checked. The program stops if the condition is met; otherwise, a new cycle of reproduction, mutation and crossover starts.
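For illustration, the three selection schemes and the two variation operators described above can be sketched as follows (a minimal sketch in Python, assuming a bit-string encoding and a caller-supplied fitness function; all identifiers are ours, not from the paper):

```python
import random

def roulette_wheel_select(pop, fitness):
    # Slot size proportional to fitness / average fitness, i.e. one spin of
    # a wheel whose sector angles sum to 2*pi.
    scores = [fitness(s) for s in pop]
    spin = random.uniform(0, sum(scores))
    acc = 0.0
    for s, f in zip(pop, scores):
        acc += f
        if spin <= acc:
            return s
    return pop[-1]

def tournament_select(pop, fitness, n=2):
    # Pick n individuals at random; the fittest one enters the mating pool.
    return max(random.sample(pop, n), key=fitness)

def ranking_select(pop, fitness):
    # Offspring counts depend on rank, not fitness magnitude: the best
    # element gets the largest weight, linearly decreasing down the ranking.
    ranked = sorted(pop, key=fitness)          # worst ... best
    weights = range(1, len(pop) + 1)           # linearly increasing with rank
    return random.choices(ranked, weights=weights, k=1)[0]

def mutate(string, pm):
    # Flip each bit with probability pm.
    return [b ^ 1 if random.random() < pm else b for b in string]

def uniform_crossover(a, b, pc):
    # With probability pc, swap the bits at positions where a random
    # mask string has a 1; other positions stay.
    if random.random() >= pc:
        return a[:], b[:]
    mask = [random.randint(0, 1) for _ in a]
    child1 = [y if m else x for x, y, m in zip(a, b, mask)]
    child2 = [x if m else y for x, y, m in zip(a, b, mask)]
    return child1, child2
```

One generation then consists of filling a mating pool with a chosen selection scheme, pairing the strings for crossover, mutating the results, and re-computing the fitness statistics to test convergence.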

2.2. Artificial neural network

An Artificial Neural Network is modeled as a massively parallel interconnected network of elementary processors, or neurons. UC is performed by mapping typical load curves to the corresponding UC schedules, by learning a set of weights and neuron thresholds from training examples. The back-error-propagation algorithm is applied to train the neural networks. Training is a process of inferring from repeated experiences; in neural networks, training implies a modification of the network's structure and parameters based on repeated exposure to the environment in which it is being used.

2.3. Evolving neural network

An Evolving Neural Network (ENN) uses a GA to train the NN. Basically it has two options, namely connections and

Fig. 5. Weight option with Tournament selection, pop size = 31, Pc = 0.7. (Error against generations for mutation rates 0.1, 0.05, 0.01 and 0.001.)

Fig. 6. Weight option with Tournament selection, pop size = 31, Pm = 0.01. (Error against generations for crossover rates 0.1, 0.3, 0.5, 0.7 and 0.9.)

weights. The connections option uses the GA to evolve the interconnection structure of the NN, while the architecture of the network is fixed; after the connections of the network have been decided, back-propagation is used to train the weights. The weight option uses the GA for weight training, to find a good initial set of weights for the NN, and then uses back-propagation to fine-tune the weights of the network.
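As a rough illustration of the weight option, the following toy sketch evolves a flattened weight vector with a GA and then fine-tunes it by gradient descent (standing in for back-propagation). The single linear neuron, the truncation selection and all the names are our simplifications for the sake of a runnable example, not the paper's actual 24-20-48 network:

```python
import random

def predict(w, x):
    # Linear neuron: a stand-in for a fixed-architecture feed-forward net.
    return sum(wi * xi for wi, xi in zip(w, x))

def mse(w, data):
    return sum((predict(w, x) - t) ** 2 for x, t in data) / len(data)

def ga_initial_weights(data, dim, pop_size=20, generations=50):
    # GA phase: evolve real-valued weight vectors; lower error = fitter.
    pop = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: mse(w, data))
        parents = pop[:pop_size // 2]        # truncation selection, for brevity
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            children.append([(x + y) / 2 + random.gauss(0, 0.1)  # blend + mutate
                             for x, y in zip(a, b)])
        pop = parents + children
    return min(pop, key=lambda w: mse(w, data))

def fine_tune(w, data, lr=0.1, epochs=200):
    # Back-propagation phase: plain gradient descent on the squared error.
    for _ in range(epochs):
        for x, t in data:
            err = predict(w, x) - t
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

random.seed(0)
# Toy target: t = 2*x0 - x1
data = [((1.0, 0.0), 2.0), ((0.0, 1.0), -1.0), ((1.0, 1.0), 1.0)]
w0 = ga_initial_weights(data, dim=2)   # GA finds a good starting point
w = fine_tune(w0, data)                # back-propagation fine-tunes it
```

The connections option would instead encode the presence or absence of each link as a bit string, evolve that structure, and only then train the surviving weights by back-propagation.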

3. Test results

The behavior of a GA is strongly related to the parameters involved in the algorithm, that is, the population size, the probability of crossover and the probability of mutation. Moreover, the method used for selection has a dominant influence on the effectiveness of the algorithm. An investigation of selection methods is conducted in

Fig. 7. Weight option with Ranking selection, Pc = 0.7, Pm = 0.01. (Error against generations for population sizes 11, 21, 31, 41 and 51.)

Fig. 8. Weight option with Ranking selection, pop size = 41, Pc = 0.7. (Error against generations for mutation rates 0.1, 0.05, 0.01 and 0.001.)

order to determine their effect on the performance of the GA in terms of the relevant parameters. Three selection methods (Roulette Wheel, Tournament and Ranking) and two options (Weight and Connection) are combined to give six approaches to running the GA, listed as follows:

Approach 1: Roulette Wheel selection with weight option;
Approach 2: Tournament selection with weight option;
Approach 3: Ranking selection with weight option;

Approach 4: Roulette Wheel selection with connection option;
Approach 5: Tournament selection with connection option;
Approach 6: Ranking selection with connection option.

3.1. Weight option

3.1.1. Roulette wheel selection

Figs. 1–3 show the behavior of an ENN with Roulette

Fig. 9. Weight option with Ranking selection, pop size = 41, Pm = 0.01. (Error against generations for crossover rates 0.1, 0.3, 0.5, 0.7 and 0.9.)

Fig. 10. Connection option with Roulette Wheel selection, Pc = 0.7, Pm = 0.01. (Error against generations for population sizes 11, 21, 31, 41 and 51.)

Wheel selection and the weight option. The result shown in Fig. 1 is obtained with different population sizes, a crossover probability Pc = 0.7 and a mutation probability Pm = 0.01. A population size of 31 gives the best result. A variation of the same approach is presented in Figs. 2 and 3. In Fig. 2, the population size and the crossover probability are kept constant (pop size = 31, Pc = 0.7) whilst different mutation probabilities are considered. In Fig. 3, the population size and the mutation probability are kept constant (pop size = 31, Pm = 0.01) whilst different crossover probabilities are considered.

3.1.2. Tournament selection

Figs. 4–6 show the behavior of an ENN with Tournament selection and the weight option. The result shown in Fig. 4 is obtained with different population sizes, a crossover probability Pc = 0.7 and a mutation probability Pm =

Fig. 11. Connection option with Roulette Wheel selection, pop size = 31, Pc = 0.7. (Error against generations for mutation rates 0.1, 0.05, 0.01 and 0.001.)

Fig. 12. Connection option with Roulette Wheel selection, pop size = 31, Pm = 0.01. (Error against generations for crossover rates 0.1, 0.3, 0.5, 0.7 and 0.9.)

0.01. A population size of 31 gives the best result. A variation of the same approach is presented in Figs. 5 and 6. In Fig. 5, the population size and the crossover probability are kept constant (pop size = 31, Pc = 0.7) whilst different mutation probabilities are considered. In Fig. 6, the population size and the mutation probability are kept constant (pop size = 31, Pm = 0.01) whilst different crossover probabilities are considered.

3.1.3. Ranking selection

Figs. 7–9 show the behavior of an ENN with Ranking selection and the weight option. The result shown in Fig. 7 is obtained with different population sizes, a crossover probability Pc = 0.7 and a mutation probability Pm = 0.01. A population size of 41 gives the best result. A variation of the same approach is presented in Figs. 8 and 9. In Fig. 8, the size of population and the probability of

Fig. 13. Connection option with Tournament selection, Pc = 0.7, Pm = 0.01. (Error against generations for population sizes 11, 21, 31, 41 and 51.)

Fig. 14. Connection option with Tournament selection, pop size = 21, Pc = 0.7. (Error against generations for mutation rates 0.1, 0.05, 0.01 and 0.001.)

crossover are kept constant (pop size = 41, Pc = 0.7) whilst different mutation probabilities are considered. In Fig. 9, the population size and the mutation probability are kept constant (pop size = 41, Pm = 0.01) whilst different crossover probabilities are considered.

3.2. Connections option

3.2.1. Roulette wheel selection

Figs. 10–12 show the behavior of an ENN with Roulette Wheel selection and the connection option. The result shown in

Fig. 10 is obtained with different population sizes, a crossover probability Pc = 0.7 and a mutation probability Pm = 0.01. A population size of 41 gives the best result. A variation of the same approach is presented in Figs. 11 and 12. In Fig. 11, the population size and the crossover probability are kept constant (pop size = 31, Pc = 0.7) whilst different mutation probabilities are considered. In Fig. 12, the population size and the mutation probability are kept constant (pop size = 31, Pm = 0.01) whilst different crossover probabilities are considered.

Fig. 15. Connection option with Tournament selection, pop size = 21, Pm = 0.01. (Error against generations for crossover rates 0.1, 0.3, 0.5, 0.7 and 0.9.)

Fig. 16. Connection option with Ranking selection, Pc = 0.7, Pm = 0.01. (Error against generations for population sizes 11, 21, 31, 41 and 51.)

3.2.2. Tournament selection

Figs. 13–15 show the behavior of an ENN with Tournament selection and the connection option. The result shown in Fig. 13 is obtained with different population sizes, a crossover probability Pc = 0.7 and a mutation probability Pm = 0.01. A population size of 41 gives the best result. A variation of the same approach is presented in Figs. 14 and 15. In Fig. 14, the population size and the crossover probability are kept constant (pop size = 21, Pc = 0.7) whilst different mutation probabilities are considered. In Fig. 15, the population size and the mutation probability are kept constant (pop size = 21, Pm = 0.01) whilst different crossover probabilities are considered.

3.2.3. Ranking selection

Figs. 16–18 show the behavior of an ENN with Ranking selection and the connection option. The result shown in Fig. 16

Fig. 17. Connection option with Ranking selection, pop size = 41, Pc = 0.7. (Error against generations for mutation rates 0.1, 0.05, 0.01 and 0.001.)

is obtained with different population sizes, a crossover probability Pc = 0.7 and a mutation probability Pm = 0.01. A population size of 41 gives the best result.

Fig. 18. Connection option with Ranking selection, pop size = 41, Pm = 0.1. (Error against generations for crossover rates 0.1, 0.3, 0.5, 0.7 and 0.9.)

Table 1
Best combination of parameters for connection option

    R-wheel (Approach 4)   Tournament (Approach 5)   Ranking (Approach 6)
P   31                     31                        41
M   0.01                   0.01                      0.01
X   0.1                    0.5                       0.3

Table 2
Best combination of parameters for weight option

    R-wheel (Approach 1)   Tournament (Approach 2)   Ranking (Approach 3)
P   31                     21                        41
M   0.01                   0.01                      0.1
X   0.7                    0.9                       0.3

A variation of the same approach is presented in Figs. 17 and 18. In Fig. 17, the population size and the crossover probability are kept constant (pop size = 41, Pc = 0.7) whilst different mutation probabilities are considered.

Fig. 19. Learning curves with the topology option, error values from 0 to 600. (Error against generations for the NN and for Tournament, R-wheel and Ranking selection by topology.)

Fig. 20. Learning curves with the topology option, error values from 13 to 15. (Error against generations for the NN and for Tournament, R-wheel and Ranking selection by topology.)

In Fig. 18, the population size and the mutation probability are kept constant (pop size = 41, Pm = 0.1) whilst different crossover probabilities are considered.

3.3. Best parameters


From the above results, we conclude that the best combinations of the probabilities of mutation (M) and crossover (X), together with the population size (P), for the three generation-selection methods under the weight and connection options are as tabulated in Tables 1 and 2. Using these parameters, the learning curves of the ENN under the two options, i.e. weight and topology, are obtained as shown in Figs. 19–22, respectively. The learning curve of the plain NN is also shown, to compare its learning ability with that of the Evolving NN.

In addition, the following data show the other parameters of the Evolving NN:

Genetic algorithm
  Number of generations = 100

Neural network
  Input neurons = 24
  Hidden neurons = 20
  Output neurons = 48
  Learning rate = 0.1
  Training patterns = 30
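Gathered into a single configuration record (the numeric values are those listed above; the dictionary layout and key names are our own illustration, not from the paper):

```python
# Hypothetical configuration record for the experiments described above;
# only the numeric values come from the paper's parameter listing.
ENN_CONFIG = {
    "genetic_algorithm": {
        "generations": 100,
    },
    "neural_network": {
        "input_neurons": 24,
        "hidden_neurons": 20,
        "output_neurons": 48,
        "learning_rate": 0.1,
        "training_patterns": 30,
    },
}
```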

4. Conclusion Performance analysis based on ENN with three different

Fig. 21. Learning curves with the weight option, error values from 0 to 800. (Error against generations for the NN and for Tournament, R-wheel and Ranking selection by weight.)

Fig. 22. Learning curves with the weight option, error values from 17 to 19. (Error against generations for the NN and for Tournament, R-wheel and Ranking selection by weight.)

methods of selection between generations applied to Unit Commitment has been presented. The ENN provides a better learning curve than the plain Neural Network in solving the problem. Of the three selection methods, Roulette Wheel has the best performance. Of the two learning options, the one using topology performs better than the one using weights; this may be due to the smaller number of connections in the network, which results in a smaller error value than with the weight option. In addition, the Evolving NN prevents the network from being trapped in an undesirable local minimum, as can happen with the Back Propagation algorithm. The full ability of the two options will be further investigated in ongoing research.

Acknowledgements

The authors would like to acknowledge the financial support provided by The Hong Kong Polytechnic University's Central Research Funding A/C No. 0305-746.

References

[1] H.H. Happ, R.C. Johnson, W.J. Wright, Large scale hydrothermal unit commitment method and results, IEEE Trans. Power Appar. Syst. PAS-90 (1971) 1373–1383.
[2] C.K. Pang, G.B. Sheble, F. Albuyeh, Evaluation of dynamic programming based methods and multiple area representation for thermal unit commitment, IEEE Trans. Power Appar. Syst. PAS-100 (1981) 1212–1218.
[3] P.P.J. Van den Bosch, G. Honderd, Solution of the unit commitment problem via decomposition and dynamic programming, IEEE Trans. Power Appar. Syst. PAS-104 (1985) 1684–1690.
[4] A.I. Cohen, M. Yoshimura, Branch and bound algorithm for unit commitment, IEEE Trans. Power Appar. Syst. PAS-102 (1983) 444–451.
[5] S.K. Tong, S.M. Shahidehpour, Combination of Lagrangian relaxation and linear-programming approaches for fuel constrained unit commitment problems, IEE Proc. C 136 (1989) 162–174.
[6] S. Virmani, E.C. Anderian, K. Imhof, S. Mukherjee, Implementation of a Lagrangian relaxation based unit commitment problem, IEEE Trans. Power Syst. 4 (4) (1989) 1373–1379.
[7] M. Shahidehpour, Multi stage generation scheduling by neural networks, Proc. of AANNM, Clemson, SC, April 1990, pp. 66–70.
[8] M.H. Sendaula, Application of artificial neural networks to unit commitment, Proc. of ANNPS'91, Seattle, WA, July 1991, pp. 256–260.
[9] P. Ronne-Hansen, J. Ronne-Hansen, Neural networks as a tool for unit commitment, Proc. of ANNPS'91, Seattle, WA, July 1991, pp. 266–270.
[10] J.H. Holland, Outline for a logical theory of adaptive systems, J. ACM 3 (1962) 297–314.
[11] X. Ma, A.A. El-Keib, R.E. Smith, H. Ma, A genetic algorithm based approach to thermal unit commitment of electric power systems, Electr. Power Res. 34 (1995) 29–36.
[12] H.-T. Yang, P.-C. Yang, C.-L. Huang, Applications of the genetic algorithm to the unit commitment problem in power generation industry, IEEE Trans. Power Syst. 10 (1) (1995) 267–274.
[13] S. Kazarlis, A.G. Bakirtzis, V. Petridis, A genetic algorithm solution to the unit commitment problem, IEEE Trans. Power Syst. 11 (1) (1996) 83–92.
[14] G.B. Sheble, T.T. Maifeld, K. Brittig, G. Fahd, S. Fukurozaki-Coppinger, Unit commitment by genetic algorithm with penalty methods and a comparison of Lagrangian search and genetic algorithm-economic dispatch example, Electr. Power Energy Syst. 18 (6) (1996) 339–346.