Bad data detection and identification

H-J Koglin and Th Neisius
Universität des Saarlandes, Saarbrücken, West Germany

G Beißler
Siemens AG, Erlangen, West Germany

K D Schmitt
Asea Brown Boveri AG, Ladenburg, West Germany
In the last 20 years the treatment of bad measurement, topological and parameter data in the static state estimation of high-voltage power systems has reached a certain level of completeness, robustness and on-line applicability. This paper shows the historical development of bad data treatment, gives the state of the art and points out some problems not yet successfully solved.

Keywords: Network analysis, state estimation, security assessment
I. Introduction
At the time when Schweppe and his co-authors published their fundamental research works concerning static state estimation1, their ideas seemed to be hardly accepted by the system control engineers of those days. This might be concluded from the interesting acknowledgement to be found in the above mentioned paper: 'These engineers may not agree with the results, but the paper would not have been possible without their help.' Today, just 20 years later, static state estimation forms the heart of nearly all modern network control systems, providing a complete, consistent and reliable database for subsequent network analysis functions such as contingency evaluation, optimal power flow and on-line load flow studies (Figure 1)2. Soon after the presentation of the basic algorithm of state estimation, which was able to produce an estimate of the actual network status in the presence of Gaussian noise in the measurements, the need for an effective means to treat bad data in parameters, topology and measurements became evident. The first step in this direction was presented by Merill and Schweppe3. Following on from this, a large number of different algorithms to treat these errors were presented.
Received: December 1989
The historical development of each kind of error treatment mentioned above will be shown. The major 'key' ideas of the last 20 years will be presented. Some interesting algorithms will be briefly discussed. In Section V some unsolved problems and possible future developments of bad data analysis within static state estimation, most of them originating from practical implementations, are summarized.
II. Notation
α      risk of false alarm of a statistical test
β      power of a statistical test
λ      decision limit for a statistical test
f      degree of freedom; f = m - n
ν      iteration counter (superscript)
s      number of selected measurements
J_R    objective of the remaining network
J_M    objective of the mini-network
J_var  sum of the objectives of a switching variant: J_var = J_R + J_M
M      measurement to branch incidence matrix
R      measurement covariance matrix
W      residual sensitivity matrix (symmetrical form): W = I - R^{-1/2} H (H^T R^{-1} H)^{-1} H^T R^{-1/2}
K      matrix of residual correlation coefficients: K = diag(W)^{-1/2} W diag(W)^{-1/2}
t      true value (superscript)
III. Brief review of static state estimation

III.1 Equations
This subsection briefly reviews the well known equations used for WLS state estimation. Assuming that the model parameters (i.e. the impedance values of the π-equivalents, the transformer data including tap changer positions, and the measurement covariance matrix R) and the network topology are exact, the relationship between the measurements z and the state vector x is given by

z = h(x) + v    (1)
[Figure 1. Network analysis functions of modern energy management systems2: block diagram comprising sequence control and common routines (mode selection, activation of programs), network status processor, bus scheduler (determination of topology, pseudo-measurements for SE, allocation of measured and calculated values), state estimator (fast decoupled, basic least squares, Newton-Raphson), penalty factor calculation, on-line load flow, network reduction, contingency evaluation, on-line short-circuit and optimal power flow]

The error vector v is unknown, but owing to its normal distribution,
E{v} = 0    (2)

and

E{v v^T} = R = diag(σ_1^2, ..., σ_m^2)    (3)

are valid. Minimizing the quadratic gain function (objective)

J(x) = r^T R^{-1} r    (4)

with

r = z - h(x)    (5)

gives

∂J/∂x = -2 H^T R^{-1} [z - h(x)]    (6)

By introducing the approximation

h(x) ≈ h(x^ν) + H Δx    (7)

the weighted least squares estimate x̂ can be computed iteratively from

H^T R^{-1} H Δx^ν = H^T R^{-1} [z - h(x^ν)]    (8)

The new 'corrected' state vector is given by

x^{ν+1} = x^ν + Δx^ν    (9)

and the optimal state vector x̂ is found when suitable convergence criteria (e.g. |Δx_i^ν| below a certain threshold for all i) are fulfilled. The covariance matrix of the estimation error x^t - x̂ is

E{(x^t - x̂)(x^t - x̂)^T} = (H^T R^{-1} H)^{-1}    (10)
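As an illustration of the iteration defined by equations (7)-(9), a minimal Python sketch (with an invented linear measurement model and arbitrary numerical values, purely for demonstration) might look as follows:

```python
import numpy as np

def wls_state_estimate(z, h, H_of, R, x0, tol=1e-6, max_iter=20):
    """Gauss-Newton WLS iteration of equations (7)-(9).

    z    : measurement vector
    h    : function x -> h(x), the measurement model
    H_of : function x -> Jacobian H = dh/dx evaluated at x
    R    : measurement covariance matrix
    x0   : starting point for the state vector
    """
    R_inv = np.linalg.inv(R)
    x = x0.copy()
    for _ in range(max_iter):
        H = H_of(x)
        r = z - h(x)                              # residual, equation (5)
        G = H.T @ R_inv @ H                       # gain matrix H^T R^-1 H
        dx = np.linalg.solve(G, H.T @ R_inv @ r)  # normal equations, equation (8)
        x += dx                                   # equation (9)
        if np.max(np.abs(dx)) < tol:              # convergence criterion |dx_i| below threshold
            break
    return x, r

# Toy linear example (h(x) = H x); all values are invented for illustration only
H_fix = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, -1.0], [1.0, 1.0]])
x_true = np.array([1.02, 0.97])
R = np.diag([1e-4] * 4)
z = H_fix @ x_true + np.random.default_rng(0).normal(0.0, 1e-2, size=4)

x_hat, r = wls_state_estimate(z, lambda x: H_fix @ x, lambda x: H_fix, R, np.ones(2))
print(x_hat)
```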
Similarly, the covariance matrix of z^t - ẑ is

E{(z^t - ẑ)(z^t - ẑ)^T} = W R    (11)

where W is the so-called residual sensitivity matrix

W = I - R^{-1/2} H (H^T R^{-1} H)^{-1} H^T R^{-1/2}    (12)

The measurement residual vector (equation (5)) can also be expressed by

r = W v    (13)

and in the absence of gross measurement errors it is distributed N(0, WR). Finally, the expectation value of the gain function is given by

E{J(x̂)} = m - n    (14)

The equations above describe the 'basic least squares SE', which is often used today in special cases (e.g. for medium-voltage networks with high R/X ratios and only few P, Q measurements, so that current magnitudes also have to be used). Most of today's implementations prefer the so-called 'fast decoupled SE' (FDSE)4,5, although there is a trend to avoid model and algorithmic simplifications. The equations of the FDSE are very similar to those valid for the 'basic least squares' algorithm. In most of today's implementations, detection of gross measurement errors is performed by means of the χ²-test and residual tests (weighted or normalized), identification is done by inspecting the normalized residuals of suspicious measurements, and the gross measurement errors are eliminated by deleting or correcting the identified measurements.

III.2 Definitions
Some important definitions concerning static state estimation and bad data analysis are summarized below.

'Measurements' to be processed by SE are:
• active and reactive branch flows
• active and reactive node injections/loads
• bus voltage magnitudes
• branch current magnitudes ('basic least squares SE')

They can be classified into:
• telemetered measurements
• highly accurate zero injections in passive (transit) nodes
• pseudo-measurements with relatively low accuracy (e.g. injections/loads of neighbouring systems, forecast values, manually entered values etc.)

Normally, only telemetered measurements are subject to bad data analysis. If topology errors can be excluded, zero injections may be treated as equality constraints6 or handled by substituting the voltages of the passive nodes instead of treating them as injection measurements P = 0 and Q = 0.

'Bad data' can be caused by:
• Parameter errors such as false tap changer positions, incorrect transformer models or incorrect branch impedances
• Incorrect network topology, possibly due to manually updated switching positions or an incorrect topology description
• Erroneous measuring devices or teletransmission errors

'Measurement errors' can belong to one of the following groups8:
• extreme error: |measurement - true value| > 20σ
• gross error: |measurement - true value| = (5 ... 20)σ
• normal noise: |measurement - true value| < 5σ

Although errors belonging to the first group can often (if local redundancy allows) be filtered out by means of plausibility checks before the SE iterations are applied, the errors of the second group are dangerous because they may lead to unreliable results. Thus this error group is the main subject of bad measurement data analysis.

'Gross measurement errors' can be separated into9:
• single gross measurement errors
• multiple non-interacting gross measurement errors
• multiple interacting gross measurement errors

'Bad data analysis' normally consists of three steps:
• a detection procedure to determine whether bad data are present
• an identification procedure to determine which measurements/parameters/switch states are bad
• an elimination procedure to eliminate the influence of bad data on the state estimate

'Redundancy' is required for bad data analysis, where

global redundancy  η = m/n - 1    (15)

and local redundancy are to be distinguished. In Reference 9 a first definition of local redundancy was given, which stated: 'Redundancy for each bus counting only measurements and unknowns at bus k plus all busses up to two switchyards away'. Most recently a measurement oriented definition of local redundancy has been published10,11, based on the matrix of residual correlation coefficients

K = diag(W)^{-1/2} W diag(W)^{-1/2}    (16)

with elements

k_ij = w_ij/(w_ii w_jj)^{1/2}    (17)

According to this definition a measurement is:
• not redundant = critical measurement (w_ii = 0)
• singly redundant = member of a set of critical measurements (|k_ij| ≈ 1/√2 ... 1)
• multiply redundant = non-critical measurement (|k_ij| < 1/√2)
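These definitions can be evaluated directly from the Jacobian H and the covariance matrix R. The following Python sketch (the Jacobian is invented for illustration, and the simple max-|k_ij| thresholding is only one possible reading of the definitions above) builds W according to equation (12) and the correlation coefficients of equations (16)-(17) and labels each measurement:

```python
import numpy as np

def classify_redundancy(H, R, tol=1e-8):
    """Label each measurement as not/singly/multiply redundant using W (eq. 12) and k_ij (eq. 17)."""
    R_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(R)))
    Hw = R_inv_sqrt @ H                                  # weighted Jacobian R^-1/2 H
    G_inv = np.linalg.inv(Hw.T @ Hw)                     # (H^T R^-1 H)^-1
    W = np.eye(H.shape[0]) - Hw @ G_inv @ Hw.T           # residual sensitivity matrix, eq. (12)
    labels = []
    for i in range(H.shape[0]):
        if W[i, i] < tol:
            labels.append('not redundant (critical)')
            continue
        # largest |k_ij| over the other measurements, eq. (17)
        k = [abs(W[i, j]) / np.sqrt(W[i, i] * W[j, j])
             for j in range(H.shape[0]) if j != i and W[j, j] > tol]
        k_max = max(k) if k else 0.0
        if k_max >= 1.0 / np.sqrt(2.0):
            labels.append('singly redundant (member of a critical set)')
        else:
            labels.append('multiply redundant')
    return W, labels

# Example with an arbitrary 4x2 Jacobian, purely for illustration
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, -1.0], [2.0, 0.0]])
W, labels = classify_redundancy(H, np.diag([1e-4] * 4))
print(labels)
```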
'Observability' of the network is required as a minimum for a complete set of not redundant measurements. Observability is a prerequisite of state estimation, but it is not sufficient for bad data analysis. 'Detectability' of gross measurement errors requires a complete set of singly or multiply redundant measurements, i.e. detectability includes observability. 'Identifiability' of gross measurement errors requires a complete set of multiply redundant measurements, i.e. identifiability includes detectability.

The following example clarifies the definitions of detectability and identifiability10,11. Both measurement topologies shown in Figure 2 have a global redundancy of η = 27/13 - 1 = 107.7% and a mean w_ii = 0.52.

[Figure 2. 7-node network with different measurement topologies A and B but the same number of P/Q measurements: (a) A: identifiable; (b) B: detectable]

The probability of detecting an arbitrary single gross power measurement error in measurement i is equal for both examples. Figure 3 compares the probabilities of identification β_Li depending on the magnitude of the introduced error a'_i = a_i/σ_i: β_Li of any gross measurement error in measurement i of case A reaches 100% for a'_i > 10, whereas β_Li of any gross measurement error in measurement i of case B remains constant at a level β_Li ≈ 50%, i.e. bad measuring data in measurement topology B cannot be considered identifiable!

[Figure 3. Comparison of the probabilities of identification of a single measurement error in measurement topologies A and B]

Comparing the correlation coefficients of the given example, it can be observed that in measurement topology A no power measurement i with |k_ij| > 1/√2 ∀ j ≠ i exists, whereas in measurement topology B each measurement i has a |k_ij| ≈ 1. Compared to the practice of just presenting some selected examples, the comparison of the probabilities of detection and identification of gross measurement errors is a reliable means for estimating the quality of a given method.

IV. State of the art and historical overview

IV.1 Development of bad measurement data treatment
After Schweppe et al.1 had presented the basic algorithm of state estimation, the problem of getting reliable estimates in the presence of gross measurement errors (i.e. non-Gaussian-distributed errors) became evident. Here we show the evolution of gross measurement error treatment in WLS steady state estimation, which took place in several steps. Figure 4 gives an overview of the historical bad data development tree.

One of the first techniques used was gross measurement error suppression by means of changing the measurement weights during the iteration. The next idea was to differentiate between detection: 'Are there gross measurement errors present in the set of measurements, with a certain probability (risk) α of false alarm?' and identification: 'Which measurements actually do contain gross measurement errors, with a certain probability (risk) 1 - β of no success?' These ideas led to the use of the χ²_f test, f = m - n (f is the degree of freedom), for detection and a test based on the weighted

r_wi = (z_i - h_i(x̂))/σ_i    (18)

or normalized

r_Ni = r_wi/√(w_ii),   w_ii = diag_ii(W)    (19)

residuals for identification. Later on, weighted or even normalized residuals were also used for detection because of their sharper response when gross measurement errors are present. These procedures behave well in the case of single or multiple non-interacting gross measurement error(s), i.e. when, for gross measurement errors in measurements i and j, the magnitude of the absolute value of a residual r_Ni is not affected or is only little affected by the magnitude of the error in measurement j.
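A compact sketch of this detection/identification cycle for a linearized model is given below; the false-alarm probability, the Jacobian and the injected error are illustrative assumptions, and a real implementation would re-estimate the full non-linear model after each elimination:

```python
import numpy as np
from scipy.stats import chi2

def detect_and_identify(z, H, R, alpha=0.01):
    """Chi-square detection (eq. 14) plus largest-normalized-residual identification (eq. 19)."""
    removed = []
    idx = np.arange(len(z))
    while True:
        Rw = np.diag(1.0 / np.sqrt(np.diag(R)))
        Hw = Rw @ H
        G_inv = np.linalg.inv(Hw.T @ Hw)
        x_hat = G_inv @ Hw.T @ (Rw @ z)             # linear WLS estimate
        r_w = Rw @ (z - H @ x_hat)                  # weighted residuals, eq. (18)
        J = float(r_w @ r_w)                        # objective; E{J} = m - n without bad data
        f = len(z) - H.shape[1]                     # degrees of freedom
        if f <= 0 or J <= chi2.ppf(1.0 - alpha, f):
            return removed                          # detection test passed: no (more) bad data
        W = np.eye(len(z)) - Hw @ G_inv @ Hw.T      # residual sensitivity matrix, eq. (12)
        r_N = r_w / np.sqrt(np.clip(np.diag(W), 1e-12, None))   # normalized residuals, eq. (19)
        worst = int(np.argmax(np.abs(r_N)))
        removed.append(int(idx[worst]))             # identify and eliminate the worst measurement
        keep = np.ones(len(z), dtype=bool)
        keep[worst] = False
        z, H, R, idx = z[keep], H[keep], R[np.ix_(keep, keep)], idx[keep]

# Toy demo: 2-state model with a gross error injected into measurement 2 (values invented)
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0], [2.0, 1.0]])
R = np.diag([1e-4] * 5)
z = H @ np.array([1.0, 0.9]) + np.random.default_rng(1).normal(0, 1e-2, 5)
z[2] += 0.15                                        # roughly a 15-sigma gross error
print(detect_and_identify(z, H, R))                 # expected to flag index 2
```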
[Figure 4. Historical development of bad measurement data treatment: tree diagram tracing the path from the basic WLS estimator of Schweppe et al.1 (1970) through suppression (non-quadratic criteria: Merill and Schweppe3, Handschin12, Müller13, Falcao et al.14), bias techniques (Aboytes and Cory17), elimination/correction based on weighted and normalized residuals (Handschin et al.9, Garcia et al.4, Monticelli and Garcia18), orthogonal Givens transformations (Quintana and Simoes-Costa27, Simoes-Costa and Salgado28, Beißler and Schmitt29, Schmitt11) and methods for multiple interacting bad data (Xian Nian-de et al.19-21, Mili et al.15,16,22,23, Yen24, Monticelli et al.25, Van Cutsem et al.26), together with work on the influence of the measurement topology (Bongers8, Clements et al.30, Clements and Davis31, Ayres and Haley32, Koglin et al.10)]
An even better definition is

|k_ij| ≤ λ_K,   k_ij = w_ij/(w_ii w_jj)^{1/2},   ∀ j ≠ i    (20)

If, however, the above condition does not hold, the gross measurement errors are multiply interacting. In this case the magnitude of the absolute value of a residual r_Ni is greatly affected by the magnitude of the error in measurement j, and these procedures lose their benefits. The next generation of gross measurement error handling procedures relied on the fact that, although the matrix W is not invertible as a whole (rank{W} = m - n), a selected part of it can be inverted if the s ≤ m - n selected measurements are all multiply redundant.
IV.1.1 Suppression. One of the first remedies was to suppress suspected gross measurement errors by changing the measurement weights during the iterations by means of non-quadratic criteria3,12-14. Nevertheless, speed of convergence was often heavily decreased. Mili et al.15,16 compared 'hypothesis testing identification' and 'normalized residuals' with the above and reported rather bad behaviour in the quality of the estimates and an increase in the number of necessary iterations from three up to 24.

IV.1.2 Bias techniques. Aboytes and Cory17 introduced another type of gross error treatment capable of handling parameter, topology and gross measurement errors. They used 'bias terms' and tested the hypothesis against the objective J and Student's t-distribution. However, these tests were too smooth to result in a reliable error treatment for gross measurement errors in the range (5 ... 20)σ.

IV.1.3 Normalized residuals. Almost in parallel to the above, Handschin et al.9 introduced 'normalized residuals'. This implied a standardized normal distribution N(0, 1) for the residuals in the case of Gaussian errors. By means of the 'residual sensitivity matrix' W, normalized residuals can be computed with w_ii = diag_ii{W} as

r_Ni^2 = r_wi^2/w_ii = (r_i/σ_i)^2/w_ii    (21)

Testing these residuals with a given limit λ with respect
to a certain probability of false alarm led to reliable estimates in the case of single or multiple non-interacting gross measurement error(s). This procedure suffered from the great computational burden involved in the computation of the residual sensitivity matrix and in the deletion and re-estimation procedure. A considerable increase in speed was later achieved using the sparse inverse of the gain matrix33,34 and the correction of measurements instead of deleting them, thus virtually keeping the measurement topology and the matrix W constant.

An even greater speed-up was later achieved by means of orthogonal transformations. Based on the research work of Quintana et al.27 and Simoes-Costa and Salgado28, who introduced state estimation with bad data analysis by means of orthogonal Givens transformations, Beißler and Schmitt29 developed a very fast gross measuring data identification and elimination scheme using linearization around the actual solution vector19-21 and avoiding any re-estimations, which resulted in a method as reliable as the normalized residual method. Adding a measurement z_i to the already processed measurement set z_1 to z_{i-1} by applying orthogonal Givens transformations, the new gain function becomes

J_i = J_{i-1} + ΔJ^{(i)}    (22)

The term ΔJ^{(i)} is not identical to the contribution r_wi^2 of measurement z_i to the objective, but represents instead the total increase of the gain function, i.e.

ΔJ^{(i)} ≥ r_Ni^2 ≥ r_wi^2    (23)

is valid. Vice versa, when eliminating a measurement z_i by means of Givens transformations (analogous to adding it, but with negative weight), the resulting term ΔJ^{(-i)} represents approximately (as long as there are, at most, multiple non-interacting bad data) the decrease in the gain function and is thus best suited to serve as a test criterion for bad data identification. On the one hand ΔJ^{(-i)} can be computed very time-efficiently; on the other hand the number of measurements to be tested can be kept relatively small by means of a simple preselection based on weighted residuals, whereby an efficient algorithm for the identification of gross measurement errors has been found. As can be seen from the explanations above, this method needs no time-consuming computation of residual sensitivity matrix elements w_jj. At almost the same time, Monticelli and Garcia18 presented their 'b-test', derived from normalized residuals, which achieved a better response in the case of w_ii less than 0.5.

In the case of multiple interacting gross measurement errors, however, all the procedures mentioned above lose their reliability. For this reason Monticelli et al.25 used normalized residuals and a branch-and-bound method known from decision theory, but taking into account the reliability of the measurements. They showed that with this extension of the test on normalized residuals it is possible to treat the multiple gross measurement error problem combinatorially.
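The role of ΔJ^{(-i)} as a test quantity can be made tangible without implementing the Givens rotations themselves: for a linear model, the decrease of the objective caused by removing measurement i can be obtained by brute-force re-solution, and it then coincides with the squared normalized residual r_Ni^2. The following Python sketch with invented data is meant only to illustrate the relation behind equation (23); it is not the fast orthogonal implementation of Reference 29:

```python
import numpy as np

def objective(z, H, R):
    """WLS objective J = r^T R^-1 r at the optimum of a linear model."""
    R_inv = np.linalg.inv(R)
    x_hat = np.linalg.solve(H.T @ R_inv @ H, H.T @ R_inv @ z)
    r = z - H @ x_hat
    return float(r @ R_inv @ r)

def delta_J_removal(z, H, R):
    """Decrease of the objective when each measurement is removed in turn (Delta J^(-i))."""
    J_full = objective(z, H, R)
    out = []
    for i in range(len(z)):
        keep = np.ones(len(z), dtype=bool)
        keep[i] = False
        out.append(J_full - objective(z[keep], H[keep], R[np.ix_(keep, keep)]))
    return np.array(out)

# Illustrative data with one gross error injected into measurement 3
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0], [2.0, 1.0]])
R = np.diag([1e-4] * 5)
z = H @ np.array([1.0, 0.9]) + np.random.default_rng(2).normal(0, 1e-2, 5)
z[3] += 0.1
dJ = delta_J_removal(z, H, R)
print(np.argmax(dJ))   # the measurement whose removal reduces J the most is declared suspicious
```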
The branch-and-bound method, however, obviously suffers from the computational burden of updating the matrix W after a measurement declared as bad has been deleted, and from the large number of combinations to investigate.

Xian Nian-de et al.19,20 had the idea of computing the inverse of a reduced residual sensitivity matrix W_ss. Since rank{W} = m - n, the inverse of W_ss only exists if a maximum of s ≤ m - n suspicious measurements are selected, and none of these measurements is (definition of local redundancy due to Schmitt11):

(a) a critical (i.e. not redundant) measurement (w_ii = 0); i ∈ S_0
(b) a member of a critical set of (i.e. singly redundant) measurements S_1 (|k_ij| = |w_ij/(w_ii w_jj)^{1/2}| ≈ 1 ... 1/√2; i, j ∈ S_1; ∀ j ≠ i)

i.e. all selected measurements have to be multiply redundant. Case (a), however, should never happen, since no residual unequal to zero can exist, whatever the actual magnitude of the error in this measurement is. Case (b) would lead to linearly dependent rows/columns of W_ss.

Yen24 used the inverse of W_ss and proposed the use of a combinatorial technique together with the multivariate normal distribution to compute a set of 2^s variants, out of which the most probable one could be chosen. It is worth noting that he introduced a penalty matrix able to deal with the probability of metering/transmission failure, the cost of a false alarm etc.

Mili et al.16,22,23 and Van Cutsem et al.26 took the same starting point as Xian and developed the so-called 'hypothesis testing identification' based upon error estimates calculated with the inverse of W_ss. Since they could prove that these error estimates were uncorrelated, a total decoupling between the gross measurement errors could be achieved. Further, they used two strategies, with bounded α and 1 - β risk. The 'hypothesis testing identification' method comprises three main steps22:

(i) After detection of gross measurement errors, the suspicious measurements are arranged in decreasing order of their normalized residuals. A list s of selected measurements out of the suspicious ones is drawn up and an estimate v̂_s of the measurement error vector v_s is computed via

v̂_s = W_ss^{-1} r_s    (24)

The objective after error correction is calculated linearly by means of

J(x̂_c) = J(x̂) - r_s^T R_s^{-1} v̂_s    (25)

where the index c denotes the state vector x after correction. J(x̂_c) is subject to a new detection test to indicate whether all bad measuring data have been selected.

(ii) Under a fixed risk 1 - β (e.g. 1%) of declaring a valid measurement false, a threshold λ_i is computed for each measurement i ∈ s via

λ_i = u + N_{1-β} √((W_ss^{-1})_ii - 1)    (26)

with u as an upper limit (e.g. u = 20) above which
a weighted gross measurement error is required to be definitely identified, and N_{1-β} is the quantile of the standardized normal distribution function N(0, 1).

(iii) The identification test is performed by comparing the error estimate |v̂_si| with the threshold λ_i:

|v̂_si| < λ_i:  valid measurement!    (27a)
|v̂_si| ≥ λ_i:  false measurement!    (27b)

All measurements declared valid are removed from the list of suspicious measurements s, and the procedure (i)-(iii) is repeated until no more valid measurements can be found. The computational burden of this method is comparable to that of the method of Xian19. Fortunately, the matrix W_ss can be updated after each deletion by means of fast forward substitutions, matrix refactorizations or a direct matrix update. The computing times given for real-sized networks are encouraging, showing that this method of treating gross measurement errors needs only about 10% of a full estimation23.

Handschin and Bongers8 showed theoretically the effect of the measurement topology on the detectability of a single gross measurement error. Clements et al.30 introduced 'critical measurements', i.e. non-redundant measurements, and showed the effect of the 'residual spread area'. Monticelli and Garcia18 used simulation techniques to prove the effectiveness of their 'b-test'. Clements and Davis31 gave a geometric interpretation of normalized residuals. Ayres and Haley32 introduced 'bad data groups', i.e. groups of singly redundant measurements as stated above. Koglin et al.10 and Schmitt11 gave a measurement oriented definition of local redundancy (non-, singly and multiply redundant measurements) and proved detection and identification by means of simulation techniques. Schmitt11 continued this work and showed analytically and by simulations the existence of a pessimal upper limit λ_K = 1/√2 for the correlation coefficient |k_ij| (which is actually the same as cos Θ of Clements and Davis31) needed to ensure identifiability of a gross measurement error. Figure 5 proves the above statement, showing the effect of an increase of |k_ij| on the probability of identifying a single gross measurement error in measurement i; the error introduced in i was chosen such that β_Li = 1 at |k_ij| ≈ 0. Furthermore, Schmitt11 showed that only the use of all types of measurements (flows, injections, voltages and currents), i.e. exploiting as many physical laws as possible in the SE (see Koglin et al.10), brings an evened-out numerical form of the matrix W, thus ensuring low values of |k_ij|, i.e. ensuring identifiability.

[Figure 5. Probability of identification β_Li as a function of |k_ij| (plot over |k_ij| from 0.0 to 0.8, with the limit λ_K = 1/√2 marked)]
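A single pass of the hypothesis testing identification steps (i)-(iii) described above can be sketched compactly as follows; the fragment assumes that the weighted residuals r_w and the residual sensitivity matrix W of the current estimate are already available, and the per-measurement thresholds are supplied by the caller (for instance from equation (26)):

```python
import numpy as np

def hti_error_estimates(r_w, W, selected):
    """Step (i): weighted error estimates of the selected measurements, eq. (24)."""
    s = np.asarray(selected)
    W_ss = W[np.ix_(s, s)]                    # reduced residual sensitivity matrix
    return np.linalg.solve(W_ss, r_w[s])      # v_hat_s = W_ss^-1 r_s; W_ss is only invertible
                                              # if all selected measurements are multiply redundant

def hti_test(v_hat, thresholds):
    """Step (iii): True where a selected measurement is declared false, eqs. (27a/b)."""
    return np.abs(v_hat) >= thresholds

# Usage idea (r_w, W from the current estimate; lam e.g. from eq. (26)):
# v_hat = hti_error_estimates(r_w, W, selected)
# bad = [m for m, flag in zip(selected, hti_test(v_hat, lam)) if flag]
```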
IV.2 Development of parameter error treatment
When Schweppe presented his idea of state estimation1, he mentioned the problem of unknown or even erroneous model parameters such as non-telemetered transformer tap positions. At this early stage, however, no errors in the network model (structure, parameters) were allowed. Soon after that, Stuart and Herget35 gave a detailed analysis of the influence of parameter errors on state estimation. Clements et al.36 analysed the effects of non-simultaneous measurement, bias and parameter uncertainty. Merill and Schweppe37 showed how an on-line correction of model parameters could be achieved. Allam and Laughton38 showed how model parameters could be estimated if enough redundancy in the measurement set were provided. Debs39 developed an off-line algorithm using logged data samples to estimate model parameters for on-line state estimation by means of a least squares filtering technique. In the following year, Aboytes and Cory17 and Handschin et al.9 presented some basic ideas for handling parameter, topology and gross measurement errors. The former treated parameter errors by means of bias terms and hypothesis tests based on the objective function J; the latter showed how these errors are reflected as gross measurement errors.

Further work on parameter error treatment mainly concentrated on the estimation of falsely measured or non-telemetered transformer tap positions. Fletcher and Stadlin40 and Mukherjee et al.41 presented methods for an on-line estimation of tap positions, the first based on reactive power flows, the second based on voltage measurements. In 1985, Van Cutsem and Quintana42 presented a method for estimating network parameters and, as a special case, transformer tap positions. Their algorithm was based on a linking of the measurement residuals to the parameter errors. By means of a linearization they were able to compute an optimal linear estimate of the parameter error and, with this, to correct the parameter values in doubt. They reported simulations showing very satisfactory results, such as the need for only one or two transformer ratio estimations even in the case of multiple transformer tap position errors.

The main disadvantage with parameter estimation is often the lack of sufficient measurement redundancy to calculate these quantities. It seems more suitable to restrict parameter estimation to the calculation of transformer tap positions, but it is not seldom that these transformers feed a distribution network where often only measurements on the high-voltage side are available. This leads to a conflict between gross measurement error treatment and the estimation of tap positions. Static parameters of lines, shunts, etc. should be calculated (or better, measured) off-line beforehand.
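To illustrate the general idea of linking residuals linearly to a parameter error (this is only a generic sketch under simplifying assumptions, not the specific algorithm of Reference 42), consider a single suspect parameter p with measurement sensitivity s_p = ∂h/∂p:

```python
import numpy as np

def parameter_correction(r, H, s_p, R):
    """Linear estimate of a single parameter error from the WLS residuals.

    r   : residual vector z - h(x_hat) of the ordinary state estimate
    H   : measurement Jacobian dh/dx
    s_p : sensitivity of the measurements to the suspect parameter, dh/dp
    R   : measurement covariance matrix
    """
    R_inv = np.linalg.inv(R)
    G_inv = np.linalg.inv(H.T @ R_inv @ H)
    # the residuals respond to a parameter error dp approximately as W_u s_p dp,
    # with W_u the (unweighted) residual sensitivity matrix
    W_u = np.eye(len(r)) - H @ G_inv @ H.T @ R_inv
    a = W_u @ s_p
    denom = float(s_p @ R_inv @ a)
    # denom close to zero: the parameter effect cannot be separated from a state change
    return float(s_p @ R_inv @ r) / denom
```

A (numerically) vanishing denominator means that the parameter's effect is indistinguishable from a change of the state variables, which is exactly the redundancy conflict mentioned above.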
[Figure 6. Objectives J_R of all remaining networks of a real 24-node network (bar chart of J_R over the suspicious substations 2 to 24)]

IV.3 Development of topology error treatment

IV.3.1 Models for topological errors. Topological errors arise if the topology of the network has changed but the model used by an estimator has not changed or has changed erroneously. There are two kinds of topological errors:
• A branch topology error occurs when the switching state of at least one branch is not telemetered or manually updated correctly. The branch is assumed to be in (out of) service but in reality it is out of (in) service.
• A substation topology error mostly occurs if a bus coupler is opened or closed and this switching action is not telemetered or manually updated correctly. Even the number of nodes might be changed. In this case the assignment of the branches to the busbars is not known.
IV.3.2 Detection and identification of topology errors. Topological errors are detectable if there are large inconsistencies between the measured and estimated values. These inconsistencies can be used for detection and identification of topological errors. Lugtu et al.44, Koglin et al.48, Wu and Liu50 and Clements and Davis49 took the results of a WLS estimator in order to detect topological errors. Most authors assume that there are no errors in the measurement set. All inconsistencies, such as the largest absolute value of the weighted residual r_wi,max or the value of the objective J, can be affected by topological errors. Clements and Davis49 proved that the residual vector r_N is collinear with the column of the matrix product WM which corresponds to the erroneous branch. W is the residual sensitivity matrix and M the 'measurement to branch incidence matrix'. For a topology error in the ith branch, the following equation can be given:

r_N = c W m_i    (28)

where c is an unknown constant and m_i is the ith column of M. If more than one branch state is erroneous, the residual vector is collinear with a combination of several unknown columns of the matrix product WM. Wu and Liu50 pointed out that it is a difficult numerical problem to find the columns with which r_N is collinear.

Using a measurement error treatment while, or after, state estimation is performed, valid measurements near the topological error are considered to be invalid and are eliminated from the measurement set. The inconsistencies disappear only when there is no more local redundancy among the measurements. Koglin et al.48 proposed using the eliminated measurements for determining the suspicious location of a topological error. All nodes and branches connected with eliminated measurements are marked bad. Thus a priority list of all suspicious nodes and branches to be investigated in detail is built up. If all branches of the suspicious substation are made irrelevant by eliminating all injection measurements at their ends and all flow measurements on the branches themselves, as proposed by Koglin et al.48, the topological error does not affect the remaining network. Some of the suspicious substations whose topology is correct can be considered to be free of error. Figure 6 shows the values J_R of all remaining networks. The topological error is located in substation 19. Only two of all the objectives have a small value (substations 19 and 24). If the other substations are eliminated, the estimations of the remaining networks corresponding to these substations show that all inconsistencies are still in the data set. In these cases it is not necessary to investigate all topological variants of that substation in detail. By making suspicious parts of the network unobservable, topological errors can be located in most cases using an ordinary WLS estimator.

The algorithm proposed by Koglin et al.48 makes use of all measurements eliminated from the data set after the estimation of the entire network. The complex voltages at the neighbouring stations are also known after estimation of the remaining network. Thus it is possible to estimate the feasible switching variants in a very small network (mini-network) which consists only of the suspicious substation, the neighbouring substations and the branches in between. In Reference 48 all feasible switching variants of a substation with two busbars and four branches are shown. The topology variant which yields the smallest value of the total objective J_var = J_R + J_M is assumed to be the most probable variant.

It seems that the treatment of topological errors has become of great interest and has reached a certain standard, but it has not been solved satisfactorily and is not yet in common use.
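The collinearity check of equation (28) might be sketched as follows; the construction of the incidence matrix M and the treatment of several simultaneous branch errors are deliberately left out, so this is only an illustration of the basic idea:

```python
import numpy as np

def suspect_branch(r, W, M):
    """Rank branches by the collinearity of the residual vector with the columns of W M (eq. 28)."""
    WM = W @ M
    scores = []
    for i in range(M.shape[1]):
        col = WM[:, i]
        denom = np.linalg.norm(r) * np.linalg.norm(col)
        # |cos| of the angle between r and the i-th column; 1.0 means perfectly collinear
        scores.append(abs(r @ col) / denom if denom > 0 else 0.0)
    order = np.argsort(scores)[::-1]
    return order, np.array(scores)

# Usage idea: order, scores = suspect_branch(r, W, M)
# Branches whose scores are close to 1 head the priority list of suspicious branches.
```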
V. Unsolved problems and future developments
The historical review of the previous section shows that so much research work concerning static state estimation has been carried out, by so many authors, that it is almost impossible to create a complete reference list. State estimation and the related problems have reached a considerably high standard. Nevertheless, there are enough problems left which are only partly solved and still need to be investigated.

As already mentioned, the treatment of parameter errors can be restricted to transformer tap positions for on-line applications. Hence there is still an unsolved conflict between the estimation of tap positions and the analysis of bad measurement data. Algorithms which treat teletransmitted tap positions as measurements and extend the vector of state variables with tap-position-dependent voltage ratios (and, in the case of phase-shifting transformers, also phase angles) have the problem of extending the observability check and of finding appropriate weighting factors for the 'tap' measurements with respect to the 'normal' measurements V, P, Q and I.

It is known that the proper weighting of the measurements plays an important role in the analysis of bad measurement data, but it is not totally clear how the standard deviations σ of the measurements are to be chosen to achieve the best possible estimates. In many on-line implementations the residuals of subsequent SE runs (without change in the measurement topology) are used to compute statistical characteristics (mean value, standard deviation, etc.) in order to obtain valuable limits for the actual measurement weights. A future aim might be to develop a kind of adaptive algorithm to compute the σ values.

With respect to topology errors, most of the proposed solutions suffer from a high computational burden. The methods available today should be refined and made suitable for on-line application. Since the treatment of multiple interacting bad measurement data has made great progress during recent years, the handling of gross measurement errors seems to be nearly completely researched.

Most of the authors mentioned in this paper assumed that only one type of bad data source (i.e. parameter, topological or bad measurement data errors) is present at any one time and presented their solution for that case. In reality, however, the sources of errors are mixed, which complicates the bad data problem enormously. Especially during the commissioning phase of a state estimator, errors of all three types mostly exist in parallel, leading to long commissioning times and high costs. Since such cases produce an unnaturally high number of identified and eliminated measurements, a rough idea for solving this problem might be to pass the results of state estimation to a kind of diagnostic expert system which attempts to sort out the bad data sources with the help of its knowledge base.

The last problem to be mentioned is that many power utilities require network analysis functions to be able to handle extremely large network sizes; 1000 to over 2000 nodes are not uncommon today. Often these large network sizes result from the idea of estimating underlying mid-level distribution networks with a low degree of measurement redundancy. Furthermore, the cycle time of the SE is reduced to 5 or even 2 min. These figures make it evident that very fast algorithms and real-time databases51 (which often account for up to 50% of the running time of SE and bad data analysis) are an absolute necessity for today's requirements.
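The adaptive weighting idea mentioned above, i.e. deriving the σ values from the residuals of subsequent SE runs, could for instance take the form of an exponentially weighted running estimate per measurement; the smoothing factor and the lower bound in this sketch are arbitrary illustrative choices:

```python
import numpy as np

def update_sigma(sigma, residual, alpha=0.05, sigma_min=1e-3):
    """Exponentially weighted update of the per-measurement standard deviations
    from the residuals of consecutive SE runs (measurement topology unchanged)."""
    var = (1.0 - alpha) * sigma**2 + alpha * residual**2
    return np.maximum(np.sqrt(var), sigma_min)

# After each SE run, with residual vector r of the telemetered measurements:
# sigma = update_sigma(sigma, r)
# R = np.diag(sigma**2)      # weights for the next run
```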
VI. Conclusions
Since 1970, when Schweppe introduced the idea and algorithm of state estimation, one of the most challenging problems, the treatment of bad data and in particular of gross measurement errors, has reached a high level of robustness and computational effectiveness. A deep insight has been gained into the workings of error treatment, such as the effect of linearizations, the decoupling of measurements, the influence of the measurement topology and the mapping of errors onto residuals. Very helpful in this respect was the use of stochastic simulation methods, which are able to show the statistical properties of a method under consideration.

The methods presented so far for the treatment of parameter, topological and measurement errors seem to be suitable, in theory, for on-line applications. What is lacking is the reporting of experience from practical applications; only some authors report on running implementations in control centres. Although there is a considerable time delay between the presentation of a theoretical algorithm and its practical usage, state estimation has become one of the central and undeniable algorithms of modern network control centres.
VII. References
1 Schweppe, F C, Wildes, J and Rom, D B 'Power system static-state estimation, Parts I/II/III' IEEE Trans. Power Appar. & Syst. Vol PAS-89 No 1 (1970) pp 120-135
2 Beißler, G and Schellstede, G 'A software package for security assessment functions' Proc. IFAC Symp., Beijing (1986) pp 223-230
3 Merill, H M and Schweppe, F C 'Bad data suppression in power system static state estimation' IEEE Trans. Power Appar. & Syst. Vol PAS-90 No 6 (1971) pp 2718-2725
4 Garcia, A, Monticelli, A and Abreu, C 'Fast decoupled state estimation and bad data processing' IEEE Trans. Power Appar. & Syst. Vol PAS-98 No 5 (1979) pp 1645-1652
5 Allemong, J J, Radu, L and Sasson, A M 'A fast and reliable state estimation algorithm for AEP's new control center' IEEE Trans. Power Appar. & Syst. Vol PAS-101 No 4 (1982) pp 933-944
6 Aschmoneit, F C, Peterson, N M and Adrian, E C 'State estimation with equality constraints' Proc. PICA, Toronto (1977)
7 Beißler, G 'Fast state estimation' PhD Thesis, TH Darmstadt (1982) (in German)
8 Bongers, C 'Optimal measuring systems for reliable control of electric power systems' PhD Thesis, Universität Dortmund (1979) (in German)
9 Handschin, E, Schweppe, F C, Kohlas, J and Fiechter, A 'Bad data analysis for power system state estimation' IEEE Trans. Power Appar. & Syst. Vol PAS-94 No 2 (1975) pp 329-337
10 Koglin, H-J, Oeding, D and Schmitt, K D 'Local redundancy and identification of bad measuring data' Proc. 9th PSCC, Cascais (1987) pp 528-534
11 Schmitt, K D 'On the influence of measurement topology on state estimation' PhD Thesis, TH Darmstadt (1989) (in German)
12 Handschin, E 'Real-time data processing using state estimation in electric power systems' in Real-time Control of Electric Power Systems (ed. E Handschin) Elsevier, Amsterdam (1972)
13 Müller, H 'An approach to suppression of unexpected large measurement errors in power systems state estimation' Proc. 5th PSCC, Cambridge (1975) Paper 2.3/5
14 Falcao, D M, Karaki, S H and Brameller, A 'Nonquadratic state estimation: a comparison of methods' Proc. 7th PSCC, Lausanne (1981) pp 1002-1006
15 Mili, L, Van Cutsem, Th and Ribbens-Pavella, M 'Bad data identification methods in power system state estimation: a comparative study' IEEE Trans. Power Appar. & Syst. Vol PAS-104 No 11 (1985) pp 3037-3049
16 Mili, L, Van Cutsem, Th and Ribbens-Pavella, M 'Decision theory applied to bad data identification in power system state estimation' Proc. IFAC Symp., York (1985)
17 Aboytes, F and Cory, B J 'Identification of measurement, parameter and configuration errors in static state estimation' Proc. 9th PICA, New Orleans (1975) pp 298-302
18 Monticelli, A and Garcia, A 'Reliable bad data processing for real-time state estimation' IEEE Trans. Power Appar. & Syst. Vol PAS-102 No 3 (1983) pp 1126-1139
19 Xian Nian-de and Wang Shi-ying 'Estimation and identification of multiple bad data in power system state estimation' Proc. 7th PSCC, Lausanne (1981) pp 1061-1065
20 Xian Nian-de, Wang Shi-ying and Yu Er-keng 'A new approach for detection and identification of multiple bad data in power system state estimation' IEEE Trans. Power Appar. & Syst. Vol PAS-101 No 2 (1982) pp 454-462
21 Xian Nian-de, Wang Shi-ying and Yu Er-keng 'An application of estimation-identification approach of multiple bad data in power system state estimation' IEEE Trans. Power Appar. & Syst. Vol PAS-103 (1984) pp 225-233
22 Mili, L, Van Cutsem, Th and Ribbens-Pavella, M 'Hypothesis testing identification: a new method for bad data analysis in power system state estimation' IEEE Trans. Power Appar. & Syst. Vol PAS-103 No 11 (1984) pp 3239-3252
23 Mili, L and Van Cutsem, Th 'Implementation of the hypothesis testing identification in power system state estimation' IEEE Trans. Power Syst. Vol PWRS-3 No 3 (1987) pp 887-893
24 Yen, M 'Decision theory approach to bad data removal for power system state estimation' Proc. IFAC Symp., Beijing (1986) pp 360-365
25 Monticelli, A, Wu, F F and Yen, M 'Multiple bad data identification for state estimation by combinatorial optimization' IEEE Trans. Power Delivery Vol PWRD-1 No 3 (1986) pp 361-369
26 Van Cutsem, Th, Mili, L and Vandeloise, Ph 'Design and implementation of an advanced state estimation software' Proc. IFAC Symp., Beijing (1986) pp 354-359
27 Quintana, V H, Simoes-Costa, A and Mier, M 'Bad data detection and identification techniques using estimation orthogonal methods' IEEE Trans. Power Appar. & Syst. Vol PAS-101 (1982) pp 3356-3364
28 Simoes-Costa, A and Salgado, R 'Bad data recovery for orthogonal row processing state estimators' CIGRE Symp., Florence (1983) Paper 101-01
29 Beißler, G and Schmitt, K 'Localization of bad measuring data by means of orthogonal transformations in state estimation' etzArchiv No 6 (1984) pp 341-346 (in German)
30 Clements, K A, Krumpholz, G R and Davis, P W 'Power system state estimation: an algorithm using network topology' IEEE Trans. Power Appar. & Syst. Vol PAS-100 No 4 (1981) pp 1779-1787
31 Clements, K A and Davis, P W 'Multiple bad data detectability and identifiability: a geometric approach' IEEE Trans. Power Delivery Vol PWRD-1 No 3 (1986) pp 355-360
32 Ayres, M and Haley, P H 'Bad data groups in power system state estimation' IEEE Trans. Power Syst. Vol PWRS-1 No 3 (1986)
33 Takahashi, K, Fagan, J and Chen, M 'Formation of a sparse bus impedance matrix and its application to short circuit study' Proc. 8th PICA, Minneapolis (1973)
34 Broussolle, F 'State estimation in power systems: detecting bad data through the sparse inverse matrix method' IEEE Trans. Power Appar. & Syst. Vol PAS-97 No 3 (1978) pp 678-682
35 Stuart, T A and Herget, C J 'A sensitivity analysis of weighted least-squares state estimation for power systems' IEEE Winter Power Meeting (1973) Paper T73 085-8
36 Clements, K A 'The effects of measurement non-simultaneity, bias and parameter uncertainty on power system state estimation' Proc. PICA (1973)
37 Merill, H M and Schweppe, F C 'On-line system model error correction' IEEE Winter Power Meeting (1973) Paper C73 106-2
38 Allam, M F and Laughton, M A 'A general algorithm for estimating power system variables and network parameters' IEEE Summer Power Meeting, Anaheim (1974)
39 Debs, A S 'Estimation of steady-state power system model parameters' IEEE Trans. Power Appar. & Syst. Vol PAS-93 No 5 (1974) pp 1260-1268
40 Fletcher, D L and Stadlin, W O 'Transformer tap position estimation' IEEE Trans. Power Appar. & Syst. Vol PAS-102 No 11 (1983) pp 3680-3686
41 Mukherjee, B K, Hanson, S O, Fuerst, G R and Monroe, C A 'Transformer tap position estimation: field experience' IEEE Trans. Power Appar. & Syst. Vol PAS-103 No 6 (1984) pp 1454-1458
42 Van Cutsem, Th and Quintana, V H 'Network parameter estimation using online data with application to transformer tap position estimation' IEE Proc. Pt C Vol 135 No 1 (1988) pp 31-40
43 Harris, J H, Kellie, G H, Prewett, J N and Jervis, P 'Two implementations of state estimators for power systems. Part 1: The national control logical state estimator' CIGRE (1976) Paper 32-06
44 Lugtu, R L, Hackett, D F, Liu, K C and Might, D D 'Power system state estimation: detection of topological errors' IEEE Trans. Power Appar. & Syst. Vol PAS-99 No 6 (1980) pp 2406-2411
45 Allam, M F and Rashed, A M 'Power system topological error detection and identification' Electr. Power & Energy Syst. Vol 2 No 4 (1980) pp 201-205
46 Borkowska, B 'How to deal with errors in network data' Proc. 7th PSCC, Lausanne (1981)
47 Bonanomi, P 'Data validation by network search techniques for power system monitoring and control' PhD Thesis, ETH Zürich (1982)
48 Koglin, H-J, Oeding, D and Schmitt, K D 'Identification of topological errors in state estimation' IEE Second Int. Conf. on Power System Monitoring and Control, Durham (1986) pp 140-144
49 Clements, K A and Davis, P W 'Detection and identification of topological errors in electric power systems' IEEE Winter Power Meeting (1988) Paper 155-4
50 Wu, F F and Liu, W-H 'Detection of topological errors by state estimation' IEEE Winter Meeting (1988) Paper 216-4
51 Singer, M 'The application of a database management system in an energy management system' Proc. 9th PSCC, Cascais (1987) pp 359-365