Copyright © IFAC Power Generation Distribution and Protection, Pretoria, South Africa, 1980.
DISTRIBUTED PROCESSING IN POWER SYSTEM OPERATION
Chairman: Prof. M. Thoma, Germany
The aim of the discussion was to give prospective users some insight into the following aspects of distributed processing systems, with special reference to System Control and Data Acquisition (SCADA) and large energy management systems:
(a) The successful implementation (if any) of distributed processing systems.
(b) Operational problems experienced with distributed processing systems.
(c) Current and future trends in software and hardware configurations of distributed processing systems.
(d) Problem areas in the design of the systems.
(e) Performance, availability, reliability, fault tolerance and error recovery of such systems.
(f) Cost effectiveness of distributed processing systems.
A distributed processing system was defined as a collection of processing elements which are interconnected both logically and physically, with decentralised system-wide control of resources for the co-operative execution of application programmes.
System Operation was defined to include only the processing associated with power system control centres and distribution stations, i.e. SCADA and other energy management functions. It excludes control in power stations, except for Automatic Generation Control (AGC/LFC).
Prof Thoma explained why distributed systems were desirable. Increased reliability is achieved with a distributed system, because the failure of a single processor does not result in total system failure. With single-processor systems the tendency is to concentrate as many software functions as possible in the one machine in order to justify the cost of such a system, and this leads to undesirable overload conditions. A further advantage of decoupling a large system into several sub-systems is the better intuitive understanding of the system that results. The control engineer wishes to have a physical understanding of the application of higher control strategies rather than a purely theoretical analysis. The increased flexibility and reliability of a distributed system consisting of micro-processors spread around the plant are factors in favour of its application.
Prof Thoma raised a number of important points in connection with decentralised or distributed control. Firstly, low-cost hardware and memory are available, and it is no longer necessary to optimise in terms of memory as before. Secondly, a serious need exists for a truly portable real-time language which is independent of the computer. Thirdly, more theoretical methods are required in the design of distributed structured systems, which would be more efficient than the present heuristic methods. Fourthly, an effective man-machine interconnection is required. The man-machine interface should present the operator with essential information only and not complicate his task.
The qualities of distributed systems were then compared with those of non-distributed systems. The following five qualities were discussed:
Performance - do distributed systems improve performance, e.g. the time-response of the display subsystem?
Reliability - how is the reliability of the total distributed system affected when different sub-systems operate in different environments?
Availability - how is the availability of a geographically distributed system affected?
Error recovery - what are the implications of communication errors between processors in a distributed system?
Maintenance - of specific concern is the software maintenance of a distributed system. It is not foreseen that the maintenance of a geographically distributed data base will be easier than for a centralised system.
In the discussion manufacturers of computer systems reported various degrees of success. The systems discussed varied from distributed processing by means of micro-processor based remote terminal units on the one hand to the application of multiple processors in the
master station at the central site on the other. The following were some of the functions reported as having been implemented on the micro-processor based remote station:
- Data reduction and computations
- Dynamic update of system parameters by the scanner
- Sub-master scanning of other remotes
- Sequence-of-events recording
- Local closed-loop control
- Load management
Implementing these functions at the remote site results in a certain degree of unloading of the centralised computer, allowing it more time to execute the security and economic functions so important in modern-day system operation. Data reduction, which is just another term for change-only processing, contributes most to the unloading of the central processor. The other features listed mostly offer new facilities previously not available and as such do not contribute much to unloading the central processor.
Two aspects concerning the distribution of the processors were considered. One was the distribution of the processors over a geographical area such as the regional sites (e.g. distributing state estimation, load flow and economic dispatch). The other was the distribution of these functions at one location into different processors. It was generally considered that the communications problem involved in distributing these functions geographically would be tremendous and not worth considering. The other possibility, namely that of distributing these functions among various computers at the same site, offers potential and has been implemented in a form. The so-called partner bus scheme, which consists of several processors sharing a common data bus, was described. The processors on this data bus are all equal partners and there is no master. When one processor speaks the others listen, and in this way a failed processor can be detected. This scheme is event oriented and also error tolerant.
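The change-only processing mentioned above can be illustrated with a short sketch. This is a hypothetical modern rendering, not code from the proceedings: the class name, point identifiers and the fixed deadband policy are all illustrative assumptions. The remote station forwards a measurement to the master station only when it has moved outside a deadband around the last value reported, which is what unloads the central processor.

```python
# Hypothetical sketch of "change-only" processing (report by exception).
# The remote terminal unit reports a point to the master station only when
# its value has changed by more than a deadband since the last report.

class ChangeOnlyReporter:
    def __init__(self, deadband):
        self.deadband = deadband
        self.last_reported = {}  # point id -> last value sent to the master

    def scan(self, readings):
        """Return only the points whose value changed significantly."""
        updates = {}
        for point, value in readings.items():
            last = self.last_reported.get(point)
            if last is None or abs(value - last) > self.deadband:
                self.last_reported[point] = value
                updates[point] = value
        return updates

rtu = ChangeOnlyReporter(deadband=0.5)
first = rtu.scan({"bus_kv": 132.0, "line_mw": 80.0})   # all points are new, all reported
second = rtu.scan({"bus_kv": 132.2, "line_mw": 85.0})  # only line_mw exceeds the deadband
```

On the second scan only the 5 MW line-flow change is forwarded; the 0.2 kV voltage movement stays within the deadband and never reaches the central computer.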
The processors arranged around such a common bus could be the front-end processors communicating with the remote stations and the background processors performing the number-crunching tasks. This scheme has the advantages of being modular and of offering error protection, in that the failed unit is switched out without having to switch in a back-up unit. The scheme offers hot standby as well as load sharing.
Most of the pessimistic views that were expressed concerned the software. Two aspects specifically mentioned were the lack of truly distributed real-time operating systems and of real-time distributed data base management systems. Without much more progress in these fields, it was considered doubtful whether distributed systems would be realised. The idea of a universal real-time language also received much attention. This too was not considered likely in the near future. While many large institutions are devoting large sums of money and effort towards such a goal, the diverse approaches that have been adopted do not inspire confidence in early success. Progress has, however, been made on PEARL, a real-time language being developed in Germany. In practice it has been found to slow down access times, and memory requirements are nearly doubled, but with increasing computing power access times can be reduced. ADA, being developed in America, is expected to be available in four years' time. Concern was also expressed that these universal real-time languages have large memory requirements, which contra-indicates their application to micro-processors. It was also mentioned that computer manufacturers should be persuaded to standardise hardware instructions, the justification being the continuously decreasing cost of hardware as opposed to the increasing cost of software.
In summarising, Prof Thoma pointed out that most of the discussion concerned problems in the software and that very little was said about hardware, except for a suggestion that hardware should be designed to support existing software. From the various discussions it appeared that there was no unified approach to the problem of operating distributed or hierarchical systems.
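The partner bus scheme described earlier can be sketched in a few lines. This is a minimal illustrative model, not the scheme as implemented: the tick-based heartbeat, the timeout value and all identifiers are assumptions introduced here. The essential idea it captures is that every partner hears every message on the shared bus, so a processor that falls silent is detected by its peers without any master.

```python
# Minimal sketch of the "partner bus" idea: equal-partner processors share a
# common data bus, every broadcast is heard by all, and a partner that stays
# silent for too long is presumed failed and switched out by the others.

class PartnerBus:
    def __init__(self, timeout_ticks):
        self.timeout = timeout_ticks
        self.last_heard = {}   # processor id -> tick of its last broadcast

    def speak(self, processor, tick):
        """A partner broadcasts on the bus; all other partners hear it."""
        self.last_heard[processor] = tick

    def failed_partners(self, tick):
        """Partners silent for longer than the timeout are presumed failed."""
        return {p for p, t in self.last_heard.items() if tick - t > self.timeout}

bus = PartnerBus(timeout_ticks=3)
bus.speak("front_end_1", tick=0)
bus.speak("background_1", tick=1)
bus.speak("front_end_1", tick=4)   # front_end_1 keeps talking
# by tick 5, background_1 has been silent for 4 ticks (> 3): switched out
```

Because detection is a side effect of normal traffic on the bus, no back-up unit has to be switched in; the surviving partners simply stop expecting the failed one, which is what gives the scheme its hot-standby and load-sharing character.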
Although the partner bus scheme was mentioned by several speakers, it does not appear that a classical solution has been found at this stage. The desire to move to distributed processing was nevertheless real, because of the flexibility and reliability it promises.