
Adaptive Data Link Configuration for WAMC Applications Using a Stateful Data Delivery Service Platform

Yiming Wu, Lars Nordström
Department of Electric Power and Energy Systems, KTH Royal Institute of Technology, Sweden
Corresponding author: Yiming Wu, [email protected], https://www.kth.se/profile/yimingw/

Abstract

Wide Area Monitoring and Control (WAMC) applications provide, among other things, stability control to power systems by using global measurements. However, these measurements suffer unavoidable Quality of Service (QoS) degradation, such as delay, during transmission through a wide area communication network. Such delays can lead to failure of the control application. To overcome these effects, several researchers have proposed mechanisms that make control applications robust to QoS problems, as well as configurations of communication networks that optimise QoS performance. To optimise utilisation of the communication infrastructure and to adapt to variations in QoS, applications can benefit from an interface to request and obtain QoS information from a managed communication infrastructure. To meet this need, a novel Stateful Data Delivery Service (SDDS) for WAMC applications is proposed in this paper. The SDDS monitors QoS performance online and provides feedback to one or more WAMC applications. To increase the reliability of the applications, the SDDS also provides adaptive data link configuration services. By using online QoS measurements and multiple data sources, the SDDS platform can provide a WAMC application with data fulfilling its requirements. To prove the concept, a case study has been performed; its results show that an application using the SDDS can keep a benchmark power system stable even when the QoS performance of the original data link does not meet the application's requirements. The paper also presents the SDDS prototype platform under development as part of the ongoing research.

Keywords: Adaptive Data Link Configuration, Stateful Data Delivery Service, WAMC Applications, QoS, POD

1. Introduction


One of the main tasks of Wide Area Monitoring and Control Systems (WAMCS) is to provide stability control to the power system by using global measurements [1]. Since measurement data is collected across a wide area communication network, these data may suffer Quality of Service (QoS) problems such as latency, packet loss, and packet jitter [2]. Such degradation of the communication is unavoidable in many cases, since the network resources are shared with other traffic such as Voice over IP (VoIP) and video streaming from telephones and video cameras installed in substations [3][4][5]. These QoS problems potentially affect the performance of Wide Area Monitoring and Control (WAMC) applications [6][7].

Two research and development tracks can be identified to manage these challenges. One focuses on increasing the tolerance of WAMC applications to QoS problems, see for instance [6][7][8][9][10][11][12][13]. The other track addresses reducing QoS problems through communication network management [14][15]. To further increase the reliability of WAMC applications, a combination of these two approaches can be explored to make WAMC applications more tolerant of failures in the communication network.

To provide such features, a novel Stateful Data Delivery Service (SDDS) is presented in this paper. The SDDS provides online QoS monitoring and feedback to WAMC applications as well as adaptive data link configuration. In short, the SDDS provides the following benefits for the design of WAMC applications: 1) not all QoS requirements need to be considered when designing an application algorithm; 2) adaptive data link configuration increases tolerance to data link or data source failures. To ensure QoS performance, the SDDS also applies a time-out buffer and down-sampling to received data before forwarding it to the applications.


Because the time-out buffer is applied, the latency of the data obtained by the application has low deviation, and the down-sampling ensures that the application's data rate requirement is met. Therefore, QoS metrics such as latency and report rate can be regarded as deterministic inputs in the design of future WAMC applications.

The rest of the paper is organized into the following sections. In Section 2, related work in the area of communication system support for WAMC applications is presented. Section 3 describes the proposed SDDS in detail. Thereafter, in Section 4 the proposed adaptive data link configuration and a case study are presented. Finally, Section 5 presents the prototype of the SDDS platform. The paper is concluded with a summary of the contribution.

2. Related Work


This section presents related work on the impact that the QoS provided by data delivery systems has on WAMC application performance. First, future power system data delivery architectures proposed by other researchers are reviewed. Thereafter, the design of WAMC applications that are robust to QoS problems is discussed. In addition, as solutions on the data delivery infrastructure management side, related work on QoS assurance methods is discussed. Finally, the contribution of this paper is positioned in relation to this related work.

2.1. Future Power System Data Delivery System Architecture

Modern power systems produce huge amounts of raw data from measurement devices. To handle this data efficiently, power system data delivery systems must meet requirements such as flexibility, interoperability, scalability, and QoS assurance [16]. To meet these emerging requirements, several data delivery architectures have been proposed. The North American SynchroPhasor Initiative (NASPI) aims to create a secure, high-speed data delivery infrastructure for time-synchronized data in bulk power systems [17]. To avoid scalability issues, a tiered architecture for the NASPI network (NASPInet) has been proposed, in which hubs are deployed to distribute the Data Bus management tasks [18]. In a similar effort, GridStat proposes to provide flexible, robust, secure, and QoS-assured power system data delivery [15]. A third approach is a peer-to-peer architecture, proposed for solving data delivery issues in power systems at both transmission and distribution level using information-centric networking [19][20].


In common, all of these approaches apply the publish-subscribe model to reduce the complexity of data link re-configuration, which creates a flexible data delivery structure [17][21][22].

2.2. Robust WAMC Applications and QoS Assurance

On the data user side, WAMC applications are designed to improve their robustness against QoS performance variations. Several researchers have proposed taking QoS performance into account at the design stage of WAMC applications. The effect of end-to-end latency, for example, can be eliminated or reduced by different means such as trajectory extrapolation compensation [11], H∞ compensation [9][12], fuzzy algorithms [13], and phase-lead compensation [10]. However, WAMC applications designed according to these methods are robust only within a specific QoS performance range [9]. If the latency falls outside the design range, the application performance deteriorates. As a way to mitigate this problem, an adaptive mechanism has been proposed that switches between different compensators based on real-time QoS performance feedback [10]. However, two challenges remain. First, although these compensators together cover a wider QoS performance range, the WAMC applications still deteriorate when the data link QoS performance is outside the range of all compensators. When a data source or data link experiences a failure that takes it offline, QoS performance compensation methods cannot provide sufficient robustness to the applications. To enhance the reliability of WAMC applications, this paper proposes that an alternative data source input can be a solution; therefore, an adaptive data link configuration mechanism is designed and proposed in this paper. The second challenge for these latency compensation methods is latency monitoring and feedback, which is a crucial supporting function to ensure that the methods work properly. For WAMC applications to obtain latency information, it must be provided either by detection functions integrated into the application algorithms or by the data delivery system. The SDDS described in this paper provides such information to WAMC applications, which helps simplify their design.

In addition to making WAMC application algorithms robust to QoS problems at design time, assuring high QoS levels from the data delivery system is another approach to increase WAMC application reliability. The first step of QoS assurance is to identify the QoS requirements of the applications.


Generally, QoS requirements are classified into groups with specific ranges of latency, bandwidth, priority, and other QoS metrics [14][23][24][25][26]. The classification can be based on functionality (e.g. monitoring, protection, control [25]) or on the way of control (e.g. feedback control, feed-forward control, open-loop control [24]). To assure QoS performance, the conventional way is to apply priority control to different data links according to the application classification. Protocols such as Integrated Services (IntServ), Differentiated Services (DiffServ), Multi-Protocol Label Switching (MPLS), and the Resource Reservation Protocol (RSVP) can be used to provide priority control [26]. In NASPInet, the QoS classification of each application is pre-defined before it is mapped into the resource management system to obtain the necessary network resources [17]. However, QoS-classification-based mechanisms lead to challenges such as resource allocation competition among WAMC applications within the same QoS class. In addition, other research results indicate that QoS requirements for the same function differ when different algorithms are applied [13][27][28]. Therefore, application-oriented QoS assurance solutions have been proposed by other research groups. For instance, in GridStat, a concept referred to as QoS+ is applied in the management plane to guarantee QoS performance in multicast communication. Latency, reporting rate, data link redundancy, security, and scalability are all stressed in the QoS+ concept. Each application, acting as a subscriber, sends data and QoS requirement requests to the management plane. In response, the Forwarding Engines set up the data path and apply a rate filter for each application separately [21].

2.3. Contribution in relation to related work

In comparison to the related work presented above, the Stateful Data Delivery Service proposed in this paper aims to increase the flexibility of data delivery using adaptive data link configuration, to enhance the availability of existing data using data source lookup, and to provide application-oriented QoS assurance via online QoS performance monitoring and feedback to the WAMC applications. Before data is forwarded to the applications, the proposed SDDS applies down-sampling and implements a time-out buffer for received data. The input data to the applications therefore has deterministic QoS characteristics: the time-out buffer ensures that the application receives data packets with a constant latency, without jitter, and at a constant data rate. This helps to simplify the design of WAMC applications.
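To make the time-out buffer and down-sampling behaviour concrete, the following short Python sketch shows one possible way such a stage could operate; the function names, the fixed buffer delay, and the every-n-th-sample decimation are assumptions made for illustration and are not the prototype's actual interfaces.

```python
def release_time(sample_timestamp: float, t_buffer: float) -> float:
    # A sample time-tagged at sample_timestamp is forwarded exactly t_buffer
    # seconds later, so the application always sees the same latency;
    # samples arriving after their release time would simply be dropped.
    return sample_timestamp + t_buffer

def downsample(timestamps, source_rate_hz: float, app_rate_hz: float):
    # Keep every n-th sample so the forwarded stream matches the rate the
    # application registered for.
    step = max(1, round(source_rate_hz / app_rate_hz))
    return timestamps[::step]

# Example: one second of a 50 samples/s stream, buffered by 100 ms and
# forwarded to an application that asked for 10 samples/s.
stream = [i / 50.0 for i in range(50)]
forwarded = downsample(stream, 50.0, 10.0)
deadlines = [release_time(ts, 0.100) for ts in forwarded]
print(len(forwarded), deadlines[:3])   # 10 samples, released at about 0.1, 0.2, 0.3 s
```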


3. Stateful Data Delivery Service


In this section, the proposed Stateful Data Delivery Service is described in detail, covering both its overall architecture and its algorithm design.

3.1. Stateful Data Delivery Service architecture and functionality

The functionality enabling the Stateful Data Delivery Service is implemented in a communication node, here termed the SDDS Provider, or SP for short. For the purpose of this study, the SP is regarded as a single dedicated device. In practical implementations, the functionality of the SP can also be distributed across several existing devices in a substation, such as a substation router, switch, or IED. In the text below, complementary technologies and concepts that each partially contribute to the functionality of an SP are pointed out. The architecture of the SDDS is illustrated in Figure 1.

Figure 1: Stateful Data Delivery Service overall architecture (a. Register, b. Lookup, c. Subscribe, d. Publish, e. Data Forward).

At the top of the architecture, the SP nodes form an SDDS layer. If an SP interacts with measurement devices, as on the right-hand side of the figure, it collects measurement information for data source lookup and receives real-time measurement data for publishing to data subscriber SP(s).


On the other side, if an SP interacts with WAMC applications, as on the left-hand side of the figure, it collects registration information from WAMC applications, looks up data sources for the applications, subscribes to real-time data from other SP(s), and forwards real-time data to the applications. The applications can use the real-time data to control actuators. In some cases, an SP may interact with both WAMC applications and measurement devices. The functions of an SP can therefore be summarized as follows.

Application registration: this function allows applications to register their data requirements in the form of required input data groups, the priority of the data groups, and the QoS requirements of each data source in these data groups.

Data source lookup: the SP takes responsibility for locating data sources among the other SPs, according to the requirements of the applications.

Data link establishment: SPs set up the requested data links according to the priority and availability of data sources. The data link configuration uses the publish-subscribe concept [14].

QoS management: this function provides online QoS performance monitoring for the established data links. It is responsible for data sorting, the data time-out buffer, and down-sampling according to the requirements of the applications. Here there is an obvious complementarity with other concepts such as the PhasorGateway [29].

State Awareness Notification: the SP forwards the data and the data link status to the application. In case the QoS performance does not meet the requirements of the application, the SP switches to an alternative data source that fulfils the requirements and sends a data source switching notification to the application.

An example work flow of an SP at the application side is illustrated in Figure 1 by the marked arrows; it assumes that the measurement information collection has already been done at the measurement side. First, a WAMC application registers itself at an SP. The SP performs the data source lookup for the application after obtaining the data group list provided in the registration information. Based on the availability of data sources from the lookup result and the priority of data groups from the application registration information, the SP decides which data group to use and subscribes to the corresponding data sources, as illustrated by the sketch below. Real-time data received from the publishing SP is time-aligned and down-sampled according to the QoS requirements before it is forwarded to the application. Based on the time tags of the received data, the SP obtains the real-time QoS performance of the data link.
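The selection step can be pictured with the following minimal Python sketch, which subscribes to the highest-priority data group whose sources were all found by the lookup. The data structures, names, and example signals are assumptions made for the illustration, not the prototype's code.

```python
def choose_data_group(data_groups_by_priority, available_sources):
    # data_groups_by_priority: list of lists of data source names, ordered
    # from highest to lowest priority (as registered by the application).
    # available_sources: set of data source names found by the lookup.
    for number, group in enumerate(data_groups_by_priority, start=1):
        if all(source in available_sources for source in group):
            return number, group          # data group number to subscribe to
    return None, None                     # no group can currently be served

# Example: the second group is chosen because PB8 was not found by the lookup.
groups = [["PB8"], ["PB7"], ["PB9"]]
print(choose_data_group(groups, {"PB7", "PB9"}))   # -> (2, ['PB7'])
```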

To estimate the performance of a data link that is not online, the data source SP uses test packets to obtain the QoS performance. These test packets are sent from the alternative data source SPs at a rate determined by

TestRate_i = max( 1 / Q_i^latency , 1 / Q_online^latency )        (1)

where TestRate_i is the report rate of the test packets of alternative data source i, Q_i^latency is the application's latency requirement on alternative data source i, and Q_online^latency is the latency requirement on the data currently feeding the application, keeping in mind that the latency requirements vary between data sources. Based on this, the application-side SP keeps the latency performance of the alternative data sources updated. Equation (1) ensures that failing data links are detected quickly enough for the data source with the strictest latency requirement.

Recall the latency calculation

T_latency = T_trans + T_prop + T_queue

where T_trans is the transmission delay, T_prop the propagation delay, and T_queue the queueing delay. For a known data link, T_trans and T_prop are fixed, whereas T_queue depends strongly on the available throughput along the path. This means that, if the throughput is large enough, a latency test at a lower rate than the main data stream provides the same prediction as a test at a higher report rate. Therefore, the criterion for selecting an alternative data source is that it fulfils the application's latency requirement and that there is enough available throughput along the data link path. The required throughput is

L_required = Q_i^ReportRate × L_i        (2)

where Q_i^ReportRate is the report rate required by the application for data source i and L_i is the packet size of data source i. The available throughput is

L_available = min_{j∈J} ( L_j − Σ_{a∈A} L_a − Σ_{b∈B} L_b^t )        (3)

where L_j is the throughput of router j, J is the set of routers along the data path of an alternative data source, L_a and L_b^t are the throughput occupied at router j by online data traffic from data source a and by test traffic from data source b, and A and B are the sets of data sources whose data traffic and test traffic pass through router j.
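A hedged sketch of how equations (1)-(3) could be evaluated is given below; units are assumed consistent (seconds for latency requirements, bytes for packet sizes, bytes per second for throughput), and the function and variable names are illustrative rather than part of the SDDS implementation.

```python
def test_rate(q_latency_i: float, q_latency_online: float) -> float:
    # Equation (1): report rate of the test packets for alternative source i.
    return max(1.0 / q_latency_i, 1.0 / q_latency_online)

def required_throughput(q_report_rate_i: float, packet_size_i: float) -> float:
    # Equation (2): throughput the application needs from data source i.
    return q_report_rate_i * packet_size_i

def available_throughput(routers) -> float:
    # Equation (3): 'routers' is a list of (L_j, online_loads, test_loads)
    # tuples for the routers along the alternative data path.
    return min(l_j - sum(online) - sum(test) for l_j, online, test in routers)

# Example: one 2 MB/s edge router carrying 400 B/s of online data and 64 B/s
# of test traffic easily accommodates an 80-byte stream reported at 50 pkt/s.
path = [(2_000_000.0, [400.0], [64.0])]
assert required_throughput(50.0, 80.0) <= available_throughput(path)
```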


To obtain the available throughput of an alternative data link, different approaches are available [30][31][32]. For the purpose of this paper, it is assumed that all routers in the infrastructure are Simple Network Management Protocol (SNMP) compliant, so the available throughput of a data link can be known to the SPs. The latency effect of the latency test traffic on other traffic can be quantified for the worst-case scenario, still under the assumption that the throughput is large enough for both the latency test traffic and the existing data links, as

Δt_latency = Σ_{j∈J} Σ_{b∈B} ( L_b^t × TestRate_b ) / L_j        (4)

The same notation as in equation (3) is used. In this worst-case scenario, all latency test traffic and the affected data link traffic arrive at the router concurrently. Identifying the QoS performance of alternative data sources introduces additional processing time at the SP, which can be neglected. The payload of a test packet from the data source SP to the application SP contains a time tag only. Since time synchronization mechanisms (i.e. IEEE 1588, the Network Time Protocol, or a GPS time clock) are implemented in each SP, the application SP can determine the QoS performance of an alternative data link by subtracting the time of the time tag in the test packet from the current time. These QoS performances are used to make the adaptive data link configuration decision. To achieve high accuracy of time synchronization, a GPS time clock is implemented in each SP. Assuming shared use of the GPS time clock within the substation, network-based time synchronization mechanisms such as IEEE 1588 and the Network Time Protocol (NTP) are potential alternative approaches within the substation; in such instances, the master clock (for IEEE 1588) or the NTP server is synchronized directly by the GPS clock.

3.2. Functional architecture, notation and message types

The class diagram illustrating the functional decomposition of the SP is shown in Figure 2.

Figure 2: Class diagram illustrating the functional decomposition of the SP.

Each SP contains five lists to maintain the following information:

AppList: application list, stores one or more registered application entries
    App_n: an entry for a registered application, with the following fields
        AppName: application name
        DGList: data group list, includes one or more data group entries
            DG_n: a data group, includes one or more data source entries
                DS_n: a data source in a data group, with the following fields
                    DSName: data source name
                    Q_requirement: data source QoS requirements
        DGNumIU: number of the data group in use
        DGNumBackup: number of the backup data group
LinkList: data link list, keeps the QoS performance and status of each link updated
LDSList: local data source list, contains the data sources available in the substation
SPList: SP peer list, provides the addresses of SP peers for data source lookup
DSList: data subscriber list, supports publish-subscribe communication

The functionality of the SP depends on a set of algorithms, which are further described in Section 3.3. The message types used in these algorithms are given in Table 1.

Table 1: Message Types Used in the Algorithms

Message Type                              Note
ARM = {AppName, DGList}                   Application Registration Message
DLRq = {DSName, SPLocal}                  Data Source Lookup Request
DLRp = {DSName, DLResult, SPLocal}        Data Source Lookup Response
DLResult = {'True' || 'False'}            Data Source Lookup Result
LERq = {DSName, SPLocal}                  Data Link Establish Request
LERp = {DSName, LEResult, SPLocal}        Data Link Establish Response
LEResult = {'True' || 'False'}            Data Link Establish Result
DGNN = {DGN}                              Data Group Number Notification
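Purely to make the notation of Table 1 concrete, the message types can be pictured as simple records, as in the Python sketch below; the prototype described in Section 5 is written in Java, so these class and field names are illustrative assumptions only.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ARM:                       # Application Registration Message
    app_name: str
    dg_list: List[List[str]]     # data groups (lists of data source names) in priority order

@dataclass
class DLRq:                      # Data Source Lookup Request
    ds_name: str
    sp_local: str                # address of the requesting SP

@dataclass
class DLRp:                      # Data Source Lookup Response
    ds_name: str
    dl_result: bool              # 'True'/'False' lookup result
    sp_local: str

@dataclass
class LERq:                      # Data Link Establish Request
    ds_name: str
    sp_local: str

@dataclass
class LERp:                      # Data Link Establish Response
    ds_name: str
    le_result: bool              # 'True'/'False' establish result
    sp_local: str

@dataclass
class DGNN:                      # Data Group Number Notification
    dg_n: int
```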


ARM and DGNN are used between applications and SPs. ARM is used for application registration; the ARM message contains the application name and the data groups in order of priority. DGNN is sent by an SP to notify an application which data group is being forwarded. The remaining message types are used between SPs. DLRq is used for data source lookup, and DLRp contains the data source lookup results. An SP sends LERq to establish a data link from the message receiver to the sender, and LERp is used as a confirmation of the LERq. The acronyms given above are used in the algorithms described in the following section.

3.3. SP Algorithm

The SP algorithm starts with an initialization procedure which imports the local data source description file and the information about peer SPs. After initialization, the following four parallel threads are invoked to handle all tasks.

ServiceListener: receives service messages, such as ARM, DLRq, DLRp, LERq, and LERp, and provides the corresponding handling methods.
DataSender: sends data to the data subscriber SP(s).
DataReceiver: receives data from the data sender SP(s).
DataForwarder: forwards data to the application(s).

Depending on which type of message an SP has received, the corresponding handling method is used by ServiceListener, as shown in Algorithm 1. If the received message m is an ARM, ARMHandling is invoked to parse m and store the application's data groups and their priority order in the SP's AppList. As shown in Algorithm 2, all data sources in these data groups are then located by the SP by sending a DLRq message to all peer SPs. After receiving a DLRq, the receiving SPs check their own LDSList and send a DLRp back to the requesting SP. Based on the lookup results in the received DLRp messages, the SP can update the data source availability in its LinkList. After the data source lookup, QoSManager is invoked to monitor the QoS performance of the data links for the registered application. Based on the performance, QoSManager sets the QoS status of the data group that is in use; this procedure is called the QoS performance check and is shown in Algorithm 3.

Each data group has three possible QoS states: Normal, Alarm, and Trip. Based on the state of the data group in use, QoSManager performs the corresponding actions as described in Algorithm 4. The meanings of the states are as follows.

Algorithm 1: ServiceListener
Input: AppList, LinkList, LDSList, SPList, DSList
while SP is initialized do
    if message m is received then
        mType ← identify type of m
        switch mType do
            case ARM:
                ARMHandling(m, AppList, LinkList, SPList)
                QoSManager(AppName, AppList, LinkList, SPList)
            case DLRq: DLRequestHandling(m, LDSList)
            case DLRp: DLResponseHandling(m, LinkList)
            case LERq: LERequestHandling(m, DSList)
            case LERp: LEResponseHandling(m, LinkList)

Algorithm 2: ARMHandling
Input: m, AppList, LinkList, SPList
AppName, DGList ← m
create App_n ← {AppName, DGList, ∅, ∅} and add it to AppList
for i ← 1 to DGList.size do
    DG_n ← DGList.get(i)
    for j ← 1 to DG_n.size do
        DSName ← DG_n.get(j).DSName
        if DSName does not exist in LinkList then
            create Link_n ← {DSName, ∅, ∅, ∅, ∅} and add it to LinkList
            send DLRq = {DSName, SPLocal} to all SPs in SPList
QoSManager(App_n, AppName, AppList, LinkList, SPList)

Algorithm 3: QoSPCheck
Input: DGNumIU, DGNumBackup, DGList, LinkList
QoSFlag ← 'Normal'; DG_n ← DGList.get(DGNumIU)
for i ← 1 to DG_n.size do
    DS_n ← DG_n.get(i); DSName ← DS_n.DSName
    find Link_n with name DSName in LinkList
    Latency ← Link_n.LinkPerformance.P_Latency
    if Latency > DS_n.Q_Requirement.Q_Latency then
        QoSFlag ← 'Trip'
    else if Latency > DS_n.Q_Requirement.Q_LatencyAlarm then
        QoSFlag ← 'Alarm'
return QoSFlag

Normal: the QoS performance fulfils the requirements of the application. In this case, no additional action is required.

Alarm: the QoS performance is close to violating the QoS requirements of the application. In this case, the SP checks the availability of a backup data group. If a backup data group is available, the SP sends an LERq to the SP(s) providing these data to set up the alternative data source link(s). If no backup data group is available, the SP continually tries to find suitable data groups. Establishing the backup data link(s) before the QoS performance of the data link(s) in use fails to meet the requirements helps reduce the data group switching time.

Trip: the QoS performance violates the requirements. In this case, the SP checks the data link status of each data source in the backup data group. If all data links of the backup data group are connected, the SP switches to the backup data group and sends a notification to the application. If the data links are available but not yet connected, the SP sends LERq messages to the SPs corresponding to the data sources in this data group. In the worst case, when no backup data group is available, the SP sends a notification to the application and continues to search for a backup data group until it finds one that fulfils the requirements.

The worst-case switching time from data source DS1 to alternative data source DS2 for an application is quantifiable, provided an alternative data source is available. In this scenario, data source DS1 goes down suddenly, and the switching time from DS1 to DS2 is

T_switch = D_alarm × LR_DS1 + 2 × L_DS2        (5)


where 0 ≤ D_alarm ≤ 1 is the latency alarm level, which can be set by the power engineer, LR_DS1 is the application's latency requirement on data source DS1, and L_DS2 is the latency from DS2 to the application-side SP. Since DS1 goes down suddenly, the application-side SP needs D_alarm × LR_DS1 to identify that DS1 is down, and then establishes the data link to DS2 within 2 × L_DS2. If D_alarm is set to 0, the SP achieves the fastest switching time, since the alternative data source DS2 is then always online; the drawback of such a setting is the cost in network resources.

Algorithm 4: QoSManager
Input: App_n, AppName, AppList, LinkList, SPList
DGList ← App_n.DGList; QoSFlag ← 'Normal'
while AppName exists in AppList do
    N_InUsing ← App_n.DGNumIU; N_Backup ← App_n.DGNumBackup
    if N_InUsing is ∅ then
        N_InUsing ← FindInUsingDG(LinkList, DGList)
    else
        QoSFlag ← QoSPCheck(N_InUsing, N_Backup, DGList, LinkList)
        switch QoSFlag do
            case 'Alarm':
                if N_Backup is ∅ then
                    N_Backup ← FindBackupDG()
            case 'Trip':
                if N_Backup is ∅ then
                    N_Backup ← FindBackupDG()
                send DGNN = {DGN ← N_Backup} to the application
                N_InUsing ← N_Backup; N_Backup ← NA
            case 'Normal': (no action)
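As a compact, non-authoritative illustration of how Algorithms 3 and 4 interact, the following Python sketch performs the per-group QoS check and the Alarm/Trip handling; the alarm threshold, the data structures, and the helper names are assumptions made for the sketch.

```python
def qos_check(group, link_latency, latency_req, alarm_level=0.8):
    # Algorithm 3 in miniature: 'group' is a list of data source names,
    # 'link_latency' and 'latency_req' map source names to the measured
    # latency and to the application's latency requirement, respectively.
    flag = "Normal"
    for ds in group:
        if link_latency[ds] > latency_req[ds]:
            return "Trip"
        if link_latency[ds] > alarm_level * latency_req[ds]:
            flag = "Alarm"
    return flag

def find_backup_group(groups, in_use, link_latency, latency_req):
    # Accept any other group that still meets the hard latency requirements.
    for n, group in enumerate(groups, start=1):
        if n != in_use and qos_check(group, link_latency, latency_req) != "Trip":
            return n
    return None

def qos_manager_step(state, groups, link_latency, latency_req, notify):
    # One pass of Algorithm 4: pre-select a backup group on Alarm and switch
    # to it (sending a DGNN via 'notify') on Trip.
    flag = qos_check(groups[state["in_use"] - 1], link_latency, latency_req)
    if flag in ("Alarm", "Trip") and state["backup"] is None:
        state["backup"] = find_backup_group(groups, state["in_use"], link_latency, latency_req)
    if flag == "Trip" and state["backup"] is not None:
        notify(state["backup"])
        state["in_use"], state["backup"] = state["backup"], None
    return flag

# Example: group 1 (PB8) trips, so the manager notifies group 2 (PB7) and switches to it.
groups = [["PB8"], ["PB7"], ["PB9"]]
state = {"in_use": 1, "backup": None}
latency = {"PB8": 0.5, "PB7": 0.4, "PB9": 0.4}
requirement = {"PB8": 0.25, "PB7": 0.46, "PB9": 0.43}
print(qos_manager_step(state, groups, latency, requirement, print), state)
```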

3.4. State Aware Applications

In order to use the SDDS, a state awareness function must be integrated in the WAMC applications. The state awareness function enables the application to register at an SP and to receive DGNN messages from this SP.


WAMC applications are normally designed using the best option among the available signals [9]. However, additional available signals can also be used as input to the application. For example, for Power Oscillation Damping (POD) controllers, any signal that contains the oscillation mode can be regarded as a POD controller input. For applications that can only use one specific signal, or that need all signals, the QoS performance monitoring provided by the SDDS can still indicate the application's control quality.

4. Adaptive Data Link Configuration Case Study

To test the functionality of the SDDS, a case study of a typical WAMC application has been performed. The study involves a Static VAR Compensator (SVC) based POD controller. The POD controller has been modified so that it can switch its input between different signals and use the SDDS. The SDDS described in Section 3 is configured to provide the data group number and input data to the application according to the real-time latency QoS performance of the data links. To prove the concept of adaptive data link configuration, QoS is modeled in the simulation; however, latency is the only QoS metric used by the SP to evaluate the QoS performance of each data link. Other QoS metrics such as packet jitter and packet loss can be represented by different latency behaviour: for example, a time-out buffer at the data receiving side converts the effect of packet jitter into additional latency, and packet loss can be regarded as infinite end-to-end latency.

4.1. Model Used for the Case Study

The power system under study is the two-area four-machine system shown in Figure 3; the detailed system parameters can be found in [33]. An SVC is deployed at bus 7. Instead of using a Power System Stabilizer (PSS), a POD controller is used to damp the system oscillation. The control signal from the POD is input to the voltage control function of the SVC, as shown in Figure 4. The studied scenario starts with a three-phase-to-ground fault occurring on bus 8 at t = 100 s, which is cleared after 0.2 s. The power oscillation due to the fault needs to be damped to keep the system stable.

By linearizing the system at the normal operating point, the power system plant model can be represented in the standard state-space form:

ẋ(t) = A x(t) + B u(t)
y(t) = C x(t)        (6)

Figure 3: Two Area Four Machine Model.

Figure 4: SVC voltage regulator diagram.


where x(t) ∈ R^m is the system state vector, u(t) ∈ R^n is the system input vector, and y(t) ∈ R^p is the system output vector; m, n, and p are the dimensions of these vectors. A ∈ R^(m×m), B ∈ R^(m×n), and C ∈ R^(p×m) are the state, input, and output matrices, respectively. In the case study, the MATLAB linearisation command linmod is used to obtain the power system state-space model. The eigenvalues of the obtained state matrix A are calculated from

det(A − λI) = 0        (7)

Let λ_i = σ_i ± jω_i be the i-th eigenvalue of the state matrix A. The damping ratio ξ_i and the oscillation frequency f_i are calculated by equations (8) and (9), respectively.


ξ_i = −σ_i / √(σ_i² + ω_i²)        (8)

f_i = ω_i / (2π)        (9)

The corresponding oscillation modes can therefore be determined; they are presented in Table 2. As can be seen, mode No. 1 has a negative damping ratio and a low oscillation frequency, which means that the system is unstable due to the inter-area oscillation.

The POD controller is therefore applied to control this oscillation.

Table 2: Dominant Inter-area Oscillation Modes

Mode No.   Eigenvalue           Frequency (Hz)   Damping ratio
1          0.0592 ± 4.1014i     0.6528           −0.0144
2          −0.2478 ± 0.5074i    0.0808           0.4388
3          −0.5568 ± 7.0769i    1.1263           0.0784
4          −0.5775 ± 7.2993i    1.1617           0.0789
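The modal quantities in equations (7)-(9) can be computed numerically as in the short sketch below, which uses NumPy in place of the MATLAB linmod/eigenvalue workflow; the 2x2 matrix is a toy example constructed to reproduce mode No. 1 of Table 2, not the actual state matrix of the two-area system.

```python
import numpy as np

def modes(A):
    # Eigenvalues of the state matrix A (equation (7)); for each complex pair
    # sigma +/- j*omega, compute the damping ratio (8) and frequency (9).
    results = []
    for lam in np.linalg.eigvals(A):
        sigma, omega = lam.real, lam.imag
        if omega <= 0:                       # keep one of each conjugate pair
            continue
        zeta = -sigma / np.sqrt(sigma**2 + omega**2)
        freq = omega / (2 * np.pi)
        results.append((lam, freq, zeta))
    return results

# Toy state matrix with eigenvalues 0.0592 +/- 4.1014i: the printed frequency
# (0.65 Hz) and damping ratio (-0.0144) match mode No. 1 in Table 2.
A = np.array([[0.0592, 4.1014],
              [-4.1014, 0.0592]])
print(modes(A))
```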

4.2. SVC based POD Controller Design and Latency Effects

The POD controller is designed by applying the method from [34]. As shown in Figure 5, the controller includes a gain block, a washout block, lead-lag block(s), and a limitation block.

Figure 5: POD controller block diagram.


Any signal that is rich in the inter-area oscillation mode can be regarded as a candidate input signal for the controller; however, selecting the best signal is out of the scope of this paper. Therefore, the active power measurements on buses 7, 8, and 9 and the voltage measurements on buses 6 and 10 are chosen as candidate input signals. The system can be regarded as a single-input single-output (SISO) system, since only one signal is used as POD controller input at a time. For each eigenvalue, the right eigenvector φ_i satisfying Aφ_i = λ_iφ_i can be obtained, and the observability analysis is given by

O = CΦ        (10)

where Φ is the right eigenvector matrix. The i-th mode is observable in the j-th output if C_j φ_i ≠ 0.


In this case study, the inter-area oscillation mode is observable in all the candidate signals. The results of the observability analysis are given in Table 3; they show that the active powers measured on buses 7, 8, and 9 provide better observability than the voltages measured on buses 6 and 10.

Table 3: Observability of Different Measurements

Signal Name   Observability
PB7           0.0327
PB8           0.0323
PB9           0.0319
VBus6         0.0029
VBus10        6.5664 × 10−4

The parameters of the POD controller for each input signal are tuned based on the residue R_i of the signal, which is given by

R_i = C Φ(:, i) Ψ(i, :) B        (11)

where Ψ is the left eigenvector matrix. Based on the residue, the compensation angle is calculated as

ϕ = π − arg(R_i)        (12)

If the compensation angle is large, more than one lead-lag block can be applied. In this case study, the lead-lag time constants are calculated as follows. When ϕ < 85°:

α = (1 − sin ϕ) / (1 + sin ϕ)        (13)

T_1p = 1 / (ω_i √α),   T_2p = 0
T_1z = α T_1p,         T_2z = 0        (14)

When ϕ ≥ 85°:

α = (1 − sin(ϕ/2)) / (1 + sin(ϕ/2))        (15)

T_1p = T_2p = 1 / (ω_i √α)
T_1z = T_2z = α T_1p        (16)
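The residue-based tuning of equations (11)-(16) can be sketched numerically as below; the left eigenvectors are obtained by inverting the right eigenvector matrix, and the inputs A, B, C and the mode index are placeholders, so this is an illustrative sketch rather than the authors' MATLAB tooling.

```python
import numpy as np

def lead_lag_parameters(A, B, C, mode_index):
    # Residue of the selected mode (equation (11)) and the compensation angle
    # (12), followed by the alpha and time constants of (13)-(16).
    # B is assumed to be an n-by-1 column vector and C a 1-by-n row vector,
    # and the compensation angle is assumed to lie in [0, pi).
    eigvals, right = np.linalg.eig(A)
    left = np.linalg.inv(right)              # rows are the left eigenvectors
    omega = abs(eigvals[mode_index].imag)
    residue = (C @ right[:, [mode_index]] @ left[[mode_index], :] @ B).item()
    phi = np.pi - np.angle(residue)
    if phi < np.deg2rad(85):                 # one lead-lag block, eqs. (13)-(14)
        alpha = (1 - np.sin(phi)) / (1 + np.sin(phi))
    else:                                    # two identical blocks, eqs. (15)-(16)
        alpha = (1 - np.sin(phi / 2)) / (1 + np.sin(phi / 2))
    T_pole = 1.0 / (omega * np.sqrt(alpha))
    T_zero = alpha * T_pole
    return alpha, T_zero, T_pole
```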

The parameters of the POD controller for each input signal are calculated and tuned as listed in Table 4. Parameter optimization has not been performed, since it is out of the scope of this paper.

Table 4: POD Controller Parameters

Input Signal   KPOD          K1z      K2z      K1p      K2p
PB7            −1.2 × 10−4   0.8052   0.0738   0.8052   0.0738
PB8            −6 × 10−4     0.8061   0.0737   0        0
PB9            −1 × 10−4     0.8070   0.0737   0.8070   0.8070
VBus6          0.08          1.7426   0.0341   1.7426   0.0341
VBus10         0.7           1.2759   0.0466   1.2759   0.0466

Table 5: Inter-Area Oscillation Modes When POD is Applied

Input Signal   Eigenvalue           Damping ratio
PB7            −0.2031 ± 3.7572i    0.0738
PB8            −0.2491 ± 0.5065i    0.4431
PB9            −0.1512 ± 3.8281i    0.0395
VBus6          −0.0712 ± 3.9200i    0.0182
VBus10         −0.0430 ± 3.9901i    0.0108

The system analysis has been performed again, now including the POD controller, using each signal from Table 3 with the corresponding parameters from Table 4. The results are given in Table 5 and show that the POD controller can keep the system stable using any one of the candidate input signals, since all damping ratios listed in Table 5 are positive. These damping ratios also indicate the input signal priority for the POD controller, since a higher damping ratio gives better control performance, as shown in Figure 6. The priority order of the input signals to the POD controller is therefore (from high to low priority): PB8, PB7, PB9, VBus6, VBus10.

To determine the effect of latency on the control performance, a transmission delay is added between the measured signal and the POD signal input port.

Figure 6: POD performance for each possible input signal.


The latency requirement of the controller is determined for each input signal by iterative latency testing, following the work flow shown in Figure 7. The latency increment of each test loop is 0.01 s, and the test covers latencies from 0.00 s to 1.00 s. The effect of latency on the controller performance for the different candidate input signals is illustrated in Figure 8. In this case study, the required damping ratio is set to 0.05; the resulting latency requirement of each input signal is given in Table 6. When the latency of an input signal exceeds its requirement, the POD controller is considered unable to control the oscillation.

Table 6: Latency Requirements for Different Input Signals

Signal Name   Latency Requirement (s)
PB7           0.46
PB8           0.25
PB9           0.43
VBus6         0.01
VBus10        0.01

As shown in Table 6, the voltage signals have the strictest latency requirements on the data delivery, to such a degree that they are difficult to meet with state-of-the-art data delivery systems. Moreover, compared with the power flow measurements, the voltage signals do not provide better control performance, as shown in Figure 6.

Figure 7: Latency requirement identification work flow.

Therefore, only the three power measurement signals are chosen as candidate input signals to the POD controller.
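The iterative procedure of Figure 7 can be sketched as a simple sweep, as below; evaluate_damping_ratio stands in for the linearisation and modal analysis carried out in MATLAB/Simulink and is a hypothetical callable, as is the example curve used to exercise it.

```python
def latency_requirement(evaluate_damping_ratio, required_ratio=0.05,
                        step=0.01, max_latency=1.0):
    # Sweep the added transmission delay in 0.01 s steps, stopping when the
    # damping ratio turns negative or the delay exceeds 1 s, and return the
    # largest tested latency that still met the required damping ratio.
    latency, last_ok = 0.0, None
    while latency <= max_latency:
        dr = evaluate_damping_ratio(latency)
        if dr < 0:
            break
        if dr >= required_ratio:
            last_ok = latency
        latency += step
    return last_ok

# Example with a hypothetical, linearly decreasing damping-ratio curve.
print(latency_requirement(lambda delay: 0.074 - 0.1 * delay))   # about 0.23
```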


4.3. State Awareness Function Integration into the POD Controller

To allow the POD controller to use the SDDS, the state awareness function mentioned in Section 3 needs to be integrated into the POD controller. Therefore, a State Awareness POD (SAPOD) controller has been designed. It contains three POD blocks (one for each candidate input signal) and two selectors (one for the input and one for the output), as shown in Figure 9. The three POD blocks implement the same structure as in Figure 5, and their parameters are set to the PB7, PB8, and PB9 values in Table 4. The two selectors are controlled by the SelectedSignalNumber input block, which obtains the data group number from the SP. The output signal of the active POD block is sent directly to the SVC controller.


Figure 8: Latency effect on damping ratio of each input signal.

Figure 9: State aware POD controller.


4.4. SP and SAPOD Simulation Model

Both the SP and the SAPOD are modelled in Simulink, as shown in Figure 10. The SP is implemented in the Stateful Data Delivery Service Provider block. It obtains the latency of each signal and applies the QoSManager algorithm to determine which data group to use for the SAPOD. The SwitchLatency input of this block represents the delay of data source switching. The latency of each signal is modelled in the Input Signal Latency block, where each signal is delayed according to its latency pattern.

Figure 10: Stateful Data Delivery Service modeling.


The SAPOD is modelled in the SAPOD block. The test traffic for each data source is obtained from equation (1). In this case study, the online data traffic is only affected by the latency test traffic from the other alternative data sources at the edge router close to the application SP, and the latency effect can be calculated with equation (4). Both the test traffic report rates and their latency effects on the online data source traffic are given in Table 7. It is assumed that a latency test packet contains only a time stamp, set by the data source SP in units of nanoseconds; the packet size is 64 bits (8 bytes) and the router throughput capability is 2 MBytes/s [3]. As shown in Table 7, the latency introduced by the latency test packets from the alternative data sources is negligible compared with the latency requirements in Table 6.

Table 7: Test Packet Report Rates and Their Latency Effects on the Online Data Sources

Online Data Source   Test Report Rate (pkt/sec)        Latency Effect
                     PB7      PB8      PB9
PB7                  N/A      4        3                0.056 ms
PB8                  4        N/A      4                0.064 ms
PB9                  3        4        N/A              0.056 ms
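As a quick cross-check of the report-rate entries in Table 7, equation (1) can be evaluated from the Table 6 latency requirements; rounding the resulting rate up to a whole packet per second is an assumption made for this sketch.

```python
import math

latency_req = {"PB7": 0.46, "PB8": 0.25, "PB9": 0.43}   # seconds, from Table 6

def table7_rate(online, alternative):
    # Equation (1) with the rate rounded up to whole packets per second.
    return math.ceil(max(1 / latency_req[alternative], 1 / latency_req[online]))

# First row of Table 7: online source PB7, test traffic from PB8 and PB9.
print(table7_rate("PB7", "PB8"), table7_rate("PB7", "PB9"))   # -> 4 3
```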

4.5. Result

As a reference scenario, the system without the SDDS is demonstrated. Only one signal is used: PB8, which was determined in Section 4.2 to provide the best control performance. The performance of the POD controller for two different latency cases is shown in Figure 11.


Figure 11: Active power measured at bus 8 when the POD input latency is 0.2 s and 0.5 s (only the active power measured at bus 8 is used as POD input).


In the reference scenario, illustrated in Figure 11, the power system is stable after the fault when the input signal latency is 0.2 s. However, when the input signal has a larger latency, such as 0.5 s, the system is unstable and the simulation is stopped at 116.3 s because the generators lose synchronism. In this scenario, the POD without the SDDS is not able to keep the system stable, since the input signal latency violates the requirement of the application.

In the second scenario, the SDDS is applied. The latency of signal PB8 stays at 0.2 s until t = 102 s and then increases to 0.5 s. The latency of signal PB7 increases from 0.4 s to 0.6 s at t = 104 s, while the latency of signal PB9 is constant at 0.4 s. As shown in Figure 12, the SAPOD keeps the power system stable by switching data sources according to Algorithm 3 and Algorithm 4.

As shown in Table 6, the latency of signal PB8 does not fulfil the application requirement after t = 102 s.

Figure 12: Result of applying the SAPOD and the Stateful Data Delivery Service.


Therefore, the SP switches the input signal of the SAPOD from PB8 (signal number 2) to PB7 (signal number 1) at t = 103 s, including a 1 s switching process delay. At t = 104 s, the performance of input signal PB7 violates the requirement as well. Consequently, the SP switches the input signal of the SAPOD from PB7 to PB9 (signal number 3), since neither PB7 nor PB8 can fulfil the QoS requirement of the controller. The power measured at bus 9 shows that the system is well damped by the SAPOD, instead of becoming unstable as in the scenario without the SDDS.


5. SDDS Testing Platform and Future Work


In order to verify the SDDS concept in a situation as close as possible to a real one, a prototype of the SDDS is being developed. A real-time co-simulation platform concept is being developed to create an SDDS test platform, as shown in Figure 13. Both the power system and the communication network are simulated in real-time simulators to enable testing with the SPs as hardware in the loop. The SDDS described in Section 3 is therefore being implemented in standalone controllers. The SAPOD is being implemented as two separate parts: an interface part and an application part. The interface part handles the communication between the SAPOD and an SP for application registration and data reception, and forwards the received data to the SAPOD application part. The SAPOD application part is implemented in the real-time power system simulator; it generates control signals using the received data and controls the SVC directly in the power system simulator.

Figure 13: SDDS Testing Platform.


The Raspberry Pi Model B (processor: ARM1176JZF-S at 700 MHz; RAM: 512 MB; computational power: 0.041 GFLOPS) is used to prototype the SDDS. This model was chosen because its limited computational capability reflects a realistic implementation in substation devices such as RTUs, IEDs, PMUs, or substation gateways. The SDDS software is implemented by the authors in Java SE (Version 8 Update 31) using the standard socket library. To achieve time synchronization among the SPs, NTP is implemented in each SP; it is assumed that the NTP server in each substation is synchronized using a GPS signal.


The proposed SDDS is planned to be demonstrated on the IEEE 14-bus system. This system has been studied to identify possible communication issues in the power system; the results show that the data link latency can violate the requirements of applications when events occur [35]. The proposed adaptive data link configuration can therefore provide additional robustness to such applications. As a further step, the authors will validate the proposed SDDS in larger-scale power systems such as the IEEE 30-bus, IEEE 118-bus, and IEEE 300-bus systems. With an increased number of SPs deployed in a large-scale power system, the additional background traffic introduced by the SPs can be studied to derive requirements on the communication system. Another further study can focus on an optimal test packet report rate that minimizes the latency effect of the latency test packets on the other existing data links in a large-scale power system.

6. Conclusion


This paper presented a Stateful Data Delivery Service for WAMC applications. The SDDS uses a data source lookup mechanism among SPs to increase the availability of existing data in the power system. The SDDS monitors the performance of data links in real time and returns feedback to the applications. Another contribution of the SDDS is the adaptive data link configuration: based on the QoS performance, the SDDS determines which data source(s) the application should use. In a case study, the functionality of the SDDS was demonstrated. When the data link QoS performance violates an application's requirements, the SDDS increases the reliability thanks to the adaptive data link configuration it provides. At present, the SDDS is being implemented in standalone controllers for prototyping, and a real-time co-simulation based testing platform will be used for further research. In summary, adaptive data link configuration using the SDDS can be seen as a promising solution to increase the reliability of future WAMC applications, combining improvements in controller robustness with communication system QoS management.

References


[1] A. Phadke, J. Thorp, Synchronized Phasor Measurements and Their Applications, Springer, 2008.


[2] K. Zhu, L. Nordstrom, A. Al-Hammouri, Examination of data delay and packet loss for wide-area monitoring and control systems, in: 2012 IEEE International Energy Conference and Exhibition, 2012, pp. 927–934.


[3] K. Zhu, M. Chenine, L. Nordstrom, S. Holmstrom, G. Ericsson, Design requirements of wide-area damping systems - using empirical data from a utility IP network, IEEE Transactions on Smart Grid 5 (2) (2014) 829-838.

[4] J. G. Deshpande, E. Kim, M. Thottan, Differentiated services QoS in smart grid communication networks, Bell Labs Technical Journal 16 (3) (2011) 61-81.

[5] The Smart Grid Interoperability Panel Cyber Security Working Group, Guidelines for smart grid cyber security: Vol. 3, supportive analyses and references, NIST Report 3 (2010) 1-219.


[6] R. Preece, J. Milanovic, A. Almutairi, O. Marjanovic, Damping of inter-area oscillations in mixed AC/DC networks using WAMS-based supplementary controller, IEEE Transactions on Power Systems 28 (2) (2013) 1160-1169.

[7] M. Ritwik, G. B., G. V., A. M., Closed loop simulation of communication and power network in a zone based system, Electric Power Systems Research 95 (2013) 247-256.


[8] S. Ray, G. Venayagamoorthy, Real-time implementation of a measurement-based adaptive wide-area control system considering communication delays, IET Generation, Transmission Distribution 2 (1) (2008) 62–70. [9] B. Chaudhuri, R. Majumder, B. Pal, Wide-area measurement-based stabilizing control of power system considering signal transmission delay, IEEE Transactions on Power Systems 19 (4) (2004) 1971–1979. [10] J. H. Chow, S. G. Ghiocel, An Adaptive Wide-Area Power System Controller using Synchrophasor Data, Springer, 2012.


[11] S. Wang, W. Gao, J. Wang, J. Lin, Synchronized sampling technology-based compensation for network effects in WAMS communication, IEEE Transactions on Smart Grid 3 (2) (2012) 837-845.

[12] H. Wu, K. Tsakalis, G. Heydt, Evaluation of time delay effects to wide-area power system stabilizer design, IEEE Transactions on Power Systems 19 (4) (2004) 1935-1941.


[13] M. Mokhtari, F. Aminifar, D. Nazarpour, S. Golshannavaz, Wide-area power oscillation damping with a fuzzy controller compensating the continuous communication delays, IEEE Transactions on Power Systems 28 (2) (2013) 1997-2005.

[14] D. Bakken, A. Bose, C. Hauser, D. Whitehead, G. Zweigle, Smart generation and transmission with coherent, real-time data, Proceedings of the IEEE 99 (6) (2011) 928-951.

[15] H. Gjermundrod, D. Bakken, C. Hauser, A. Bose, GridStat: A flexible QoS-managed data dissemination framework for the power grid, IEEE Transactions on Power Delivery 24 (1) (2009) 136-143.


[16] A. Bose, Smart transmission grid applications and their supporting infrastructure, IEEE Transactions on Smart Grid 1 (1) (2010) 11-19.

[17] Y. Hu, Phasor gateway technical specifications for North American SynchroPhasor Initiative network (NASPInet) (May 2009). URL https://www.naspi.org/File.aspx?fileID=590


[18] R. Bobba, E. Heine, H. Khurana, T. Yardley, Exploring a tiered architecture for NASPInet, in: 2010 Innovative Smart Grid Technologies, 2010, pp. 1-8.

[19] D. Germanus, I. Dionysiou, H. Gjermundrod, A. Khelil, N. Suri, D. Bakken, C. Hauser, Leveraging the next-generation power grid: Data sharing and associated partnerships, in: 2010 IEEE Innovative Smart Grid Technologies Conference Europe, 2010, pp. 1-8.

[20] K. Eger, C. Gerdes, S. Oztunali, Towards P2P technologies for the control of electrical power systems, in: Eighth International Conference on Peer-to-Peer Computing (P2P '08), 2008, pp. 180-181.


[21] D. Anderson, C. Zhao, C. Hauser, V. Venkatasubramanian, D. Bakken, A. Bose, "Intelligent design" real-time simulation for smart grid control and communications design, IEEE Power and Energy Magazine 10 (1) (2012) 49-57.


[22] K. Katsaros, K. C.W., P. G., B. H., Information-centric networking for machine-to-machine data delivery: a case study in smart grid applications, IEEE Network 28 (3) (2014) 58 – 64. [23] P. Kansal, A. Bose, Bandwidth and latency requirements for smart transmission grid applications, IEEE Transactions on Smart Grid 3 (3) (2012) 1344–1352.


[24] Y. Hu, M. Donnelly, T. Helmer, H. Tram, K. Martin, M. Govindarasu, R. Uluski, M. Cioni, NASPInet specification - an important step toward its implementation, in: 2010 43rd Hawaii International Conference on System Sciences, 2010, pp. 1-9.

[25] A. Phadke, J. Thorp, Communication needs for wide area measurement applications, in: 2010 5th International Conference on Critical Infrastructure, 2010, pp. 1-7.

[26] J. G. Deshpande, E. Kim, M. Thottan, Differentiated services QoS in smart grid communication networks, Bell Labs Technical Journal 16 (3) (2011) 61-81.


[27] K. Zhu, M. Chenine, L. Nordstrom, ICT architecture impact on wide area monitoring and control systems' reliability, IEEE Transactions on Power Delivery 26 (4) (2011) 2801-2808.

[28] J. Stahlhut, T. Browne, G. Heydt, V. Vittal, Latency viewed as a stochastic process and its impact on wide area power system control signals, IEEE Transactions on Power Systems 23 (1) (2008) 84-91.

[29] Open Phasor Gateway Releases, 2nd Edition (Mar. 2013). URL http://openpg.codeplex.com/releases/view/97278


[30] M. Goyal, R. Guerin, R. Rajan, Predicting TCP throughput from non-invasive network sampling, in: INFOCOM 2002, Twenty-First Annual Joint Conference of the IEEE Computer and Communications Societies, Vol. 1, 2002, pp. 180-189.

[31] S. Ubik, D. Antoniades, A. Oslebo, ABW - short-timescale passive bandwidth monitoring, in: Sixth International Conference on Networking (ICN '07), 2007, pp. 49-49.


[32] D. Guo, X. Wang, Bayesian inference of network loss and delay characteristics with applications to TCP performance prediction, IEEE Transactions on Signal Processing 51 (8) (2003) 2205-2218.

[33] P. Kundur, Power System Stability and Control, The EPRI Power System Engineering Series, McGraw-Hill, 1993.

[34] M. Aboul-Ela, A. Sallam, J. McCalley, A. Fouad, Damping controller design for power system oscillations using global signals, IEEE Transactions on Power Systems 11 (2) (1996) 767-773.


[35] W. Yiming, N. Lars, E. David, Effects of bursty event traffic on synchrophasor delays in IEEE C37.118, IEC 61850, and IEC 60870, in: 2015 IEEE Smart Grid Communication, 2015, pp. 1-7.
