Control Engineering Practice 84 (2019) 208–217
Performance evaluation of industrial Ethernet protocols for networked control application Xuepei Wu, Lihua Xie ∗ School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
ARTICLE INFO
Keywords: Industrial Ethernet Performance evaluation Industrial Internet of Things Networked control systems Real-time communication
ABSTRACT Boosted by trends such as Industrial IoT and Industry 4.0, industrial Ethernet networks (EtherNet/IP, EtherCAT and PROFINET IRT as top-3 players) have been commonly deployed in industrial automation. The aim of this paper is to provide a practical guideline for selecting the right protocol in industrial networked control systems (NCSs) by comparing the performance of these protocols. Communication delays are discussed, formulated and evaluated with a study of a typical NCS. Simulation results demonstrate that with different frame packing, media access scheme, etc., EtherNet/IP, EtherCAT and PROFINET IRT offer different levels of real-time capabilities which lead to different control performance.
1. Introduction

Control loops in industrial networked control systems (NCSs) are closed over communication backbones which are shared between controllers, sensors and actuators. The networked control scheme has been widely applied in industrial systems due to its advantages over traditional digital control in terms of modularity, cost-saving, self-reconfiguration, distributed intelligence, integrated diagnostics, etc. (Ding, Zhang, Yin, & Ding, 2013). The performance of a NCS, or the quality of control (QoC), can be highly influenced by the network quality of service (QoS) (Canovas & Cugnasca, 2010). Fast and deterministic data communication between controllers, sensors and actuators is preferred and, in certain cases, represents a critical criterion for industrial applications, especially in NCSs. Ethernet-based Industrial Internet of Things (IIoT) protocols have been a fast-growing networking solution for control systems over the last decade because of their advantages over conventional fieldbus standards, e.g., high speed, flexible topology, and ease of integration and maintenance (Galloway & Hancke, 2013; Sauter, 2010). Among the protocols which are already commercially available, EtherNet/IP, EtherCAT and PROFINET IO have consistently been the top three in the market over the last several years (HMS Industrial Networks, 2017, 2018). These three protocols are studied not only because of their strong market presence but also because of their technological diversity, as they represent different types of industrial Ethernet. EtherNet/IP, developed by Rockwell, is built on top of standard IEEE 802.3 and the entire TCP/UDP/IP suite. EtherNet/IP utilizes UDP/IP for cyclic and acyclic transmissions of implicit I/O messages. EtherCAT and PROFINET IO are developed and maintained by the EtherCAT Technology Group and PROFIBUS International, respectively.
Both protocols are state-of-the-art in modified Ethernet, where data-link services are adapted from the original IEEE 802.3 in order to minimize end-to-end latency and frame loss. EtherCAT and PROFINET IO RT_Class_3 (PROFINET IRT) adopt very different approaches for enhancing data-link performance. EtherCAT is a summation-frame protocol (SFP). In an EtherCAT network, a single frame is issued from the network master for ''on-the-fly'' data exchange. On the contrary, PROFINET IRT is primarily known as an individual frame protocol (IFP), i.e., a sequence of individual frames is scheduled and transmitted using static TDMA in a PROFINET IRT network. SFP is also supported by PROFINET via dynamic frame packing, which however only works with a bus topology (Rostan, 2014). Due to this limitation in practice, this feature is not considered in this paper. All three protocols can be practically used in NCSs. Representative applications of EtherCAT, PROFINET IRT and EtherNet/IP in industrial NCSs are the eXtended Transport System (XTS) from Beckhoff (Rostan, 2012), the Siemens motion control system (SIMOTION) (Siemens, 2018), and the ODVA CIP Motion control system (ODVA, 2018), respectively. The communication protocol plays a critical role in satisfying control criteria in the industrial NCS. Generally speaking, the protocol should be selected based on certain guidelines during the design phase, before the deployment of controllers, sensors and actuators in the project. This avoids system-level rework, which may be required when the control criteria cannot be met due to end-to-end delays and/or jitters. Consequently, this design phase can be quite important, especially when the system scale is large. However, existing research works are normally limited to the delay analysis of a particular technology. There is no literature, to the best of the authors' knowledge, providing a fair performance comparison among the top three protocols at the moment,
∗ Corresponding author. E-mail addresses: [email protected] (X. Wu), [email protected] (L. Xie).
https://doi.org/10.1016/j.conengprac.2018.11.022 Received 4 August 2018; Received in revised form 25 October 2018; Accepted 28 November 2018 Available online xxxx 0967-0661/© 2018 Elsevier Ltd. All rights reserved.
because it is practically difficult to implement the communication stacks on the same platform. The objective of this paper is to suggest a general guideline for choosing a suitable candidate for a particular industrial NCS. The contributions of the paper can be summarized as follows. First, a generic task-based modelling approach is introduced to evaluate the QoS of the protocols, and a probability model of communication delays is established to carry out control performance simulation. Second, the applicability of deploying industrial Ethernet protocols in NCSs is assessed by investigating the impact of communication QoS on the control loop performance via a case study. Last, it is observed via the case study that the studied protocols offer different real-time capabilities and consequently different levels of control performance. The results of the case study could help system engineers select a suitable protocol according to the control requirements and the project budget. The remainder of the paper is organized as follows: Section 2 describes previous works on EtherNet/IP, EtherCAT and PROFINET IRT. Section 3 gives a system overview and elaborates the varieties of delay elements which contribute to the delay summation. Section 4 formulates and evaluates both sensor-to-controller (S–C) and controller-to-actuator (C–A) delays for the protocols. The results of a case study of a typical NCS are presented and discussed in Section 5. Finally, Section 6 concludes the paper and suggests directions for future work.
Fig. 1. System components in a real-time NCS.
2. Related work

IEC standard 61784-2 provided basic mathematical descriptions for the communication delays (or delivery time, as defined in the standard) of IP-based protocols including EtherNet/IP (IEC, 2014). The communication delay for EtherNet/IP was formulated as a summation of stack delay, cable delay, transmission time and switch latency. The round-trip time of an EtherNet/IP-enabled network was experimentally measured in Kalappa et al. (2006). Unfortunately, that work did not provide any mathematical analysis which could be used to verify the formulation defined in the standard (IEC, 2014). The research results from several references on industrial NCSs are derived based on only standard computer protocols such as UDP (Bibinagar & Won-jong, 2013; Canovas & Cugnasca, 2010; Subha & ping Liu, 2015) and HTTP (Jestratjew & Kwiecien, 2013), which might not be applicable to hard real-time applications. The IEC standard (IEC, 2014) also presented formulations of communication delays over modified Ethernet networks including EtherCAT and PROFINET. However, the calculations in the standard were not entirely consistent. Stack latency is one such instance: it was not mentioned in the formulation of EtherCAT communication delays, as the calculation might be based on the assumption of an ASIC-based implementation in which the stack delays could be neglected. On the other hand, this was improved for PROFINET in the corresponding section of the standard by considering the stack latency in the communication delays of a PROFINET network. Another example concerns the application-layer latency. The standard (IEC, 2014) defined the application cycles of sender and receiver as part of the total communication delays for PROFINET, whereas the application delays were not considered in the formulation of EtherCAT delays. Sung et al. and Wu et al. investigated the end-to-end delays of an EtherCAT network for both frame-driven and clock-driven configurations in Sung, Kim, and Kim (2013) and Wu and Xie (2017), respectively. The probability model of the end-to-end delay was not provided in either paper, and therefore the results cannot be directly used to evaluate the control performance in a NCS. Höme et al. assessed the impact of PROFINET communication delays by deriving the probability distributions of network delays with both synchronous and asynchronous processes (Höme, Palis, & Diedrich, 2014). However, the results were limited to control systems with only Siemens products integrated. Networked control applications based on EtherCAT or PROFINET were discussed in Ikram et al. (2014), Rostan, Stubbs, and Dzilno (2010), Velagic, Kaknjo, Osmic, and Dzananovic (2011), and Zheng, Ma, and He (2012).

3. System overview

The overall communication delays normally consist of hardware latencies, stack processing time and, optionally, application processing time (IEC, 2014; Kaliwoda & Rehtanz, 2015).
3.1. System components

In order to formulate network bus cycles and end-to-end delays, it is important to understand the system down to the component level. Fig. 1 presents a generalized structure of an industrial NCS, where the controller, sensor, actuator and the communication network are denoted as 𝐽𝑐, 𝐽𝑠, 𝐽𝑎 and 𝐽𝑒, respectively.

In terms of hardware, each component in a networked system is typically formed by an application CPU, a network controller and transceiver circuits, e.g., an Ethernet physical layer (PHY). Firstly, the transceiver circuits are used to transform data bits into physical and electrical signals that are compliant with the Ethernet standard. Next, a communication chip such as a field-programmable gate array (FPGA) or application-specific integrated circuit (ASIC) performs data encoding/decoding, error checking, memory management, and other functionalities of the data-link layer. It can generate alarm and sync signals to the application CPU for exception handling and clock synchronization, and it usually integrates a dual-port RAM (DPRAM) for cyclic and acyclic data exchange between the CPU and the network controller. It should be noted that the system can also be built on standard Ethernet infrastructure with IP-based protocols. Finally, application-specific programs such as sensing, actuating and proportional–integral–derivative (PID) control are implemented on the application CPU.

3.2. System configurations

In this paper, only the scenario where the system is made up of time-triggered sensors and event-triggered controllers/actuators is considered. This type of configuration is quite commonly used in industrial NCSs (Cuenca, Salt, Sala, & Piza, 2011; Ulusoy, Gurbuz, & Onat, 2011). Clock synchronization enables networked components to share the same system time, and its use by the sensors is recommended in practice due to deterministic sampling.
Ethernet-based protocols have similar concepts but different realizations in terms of clock synchronization. EtherNet/IP CIP Sync relies on the first version of PTP (IEEE 1588-2002), while PROFINET IO is based on the precision transparent clock protocol (PTCP), a modified version of IEEE 1588-2008 (Cena, Bertolotti, Scanzio, Valenzano, & Zunino, 2013a, 2013b). Due to its logical ring topology, EtherCAT implements a clock synchronization mechanism simpler than the precision time protocol (PTP), known as distributed clock (DC) (Cena, Bertolotti, Scanzio, Valenzano, & Zunino, 2012). PTP, PTCP and DC are all able to achieve sub-μs accuracy, which is sufficient for industrial NCSs.
Fig. 2. Gantt diagram of end-to-end delays.
3.3. Task model

To analyse communication delays in a NCS, all in-scope processes, including the controllers, sensors, actuators, and typically one network, can be seen as tasks or jobs. Data communications can then be treated as data passing between tasks, i.e., inter-task communications. Let us consider a typical NCS consisting of one controller, 𝑁 sensors and 𝑁 actuators. The controller, sensor and actuator tasks are denoted as 𝐽𝑐, 𝐽𝑠𝑛 and 𝐽𝑎𝑛, respectively, where 𝑛 denotes the index of a sensor or an actuator. Additionally, the component sets for sensors and actuators are defined as S = {1, 2, …, 𝑁} and A = {1, 2, …, 𝑁}, respectively. The data network is essentially a global DPRAM placed between the controllers, actuators and sensors, and can also be described as a task 𝐽𝑒 = (𝐽𝑒_𝑠𝑐, 𝐽𝑒_𝑐𝑎), where 𝐽𝑒_𝑠𝑐 and 𝐽𝑒_𝑐𝑎 represent the network tasks serving the S–C and C–A links, respectively.

Each task can be described mathematically by 𝐽𝑥 = (𝑆𝑥, 𝐷𝑥, 𝑃𝑥) | ∀𝑥, 𝐷𝑥 ≤ 𝑃𝑥, where 𝑆𝑥, 𝐷𝑥 and 𝑃𝑥 are the start time, execution time, and deadline of task 𝐽𝑥. If 𝐽𝑥 is periodic, 𝑃𝑥 specifies the period of the task. When 𝐽𝑥 is event-driven, 𝑃𝑥 represents the minimum interval between task executions. 𝐽𝑥 can be made time-variant by defining 𝐽𝑥(𝑘), 𝑘 ∈ ℕ+, as task 𝐽𝑥 in the 𝑘th control cycle. Additionally, let T_w^{J_x→J_y} denote the context switching delay from 𝐽𝑥 to 𝐽𝑦. The data flow of the 𝑘th control cycle for the controlled plant indexed 𝑛 can be described as 𝐽𝑠𝑛(𝑘) → 𝐽𝑒_𝑠𝑐(𝑘) → 𝐽𝑐(𝑘) → 𝐽𝑒_𝑐𝑎(𝑘) → 𝐽𝑎𝑛(𝑘). The task model is presented in Fig. 2.

Time-driven sensors are considered, as the sampling is done cyclically by the sensors without any request from the network. The sensor tasks can be described as 𝐽𝑠 = (𝑆𝑠, 𝐷𝑠, 𝜙𝑠), where 𝜙𝑠 is the sampling period of the sensors. On the other hand, actuators and controllers act when data are received over the network and can therefore be described as event-driven tasks 𝐽𝑎 = (𝑆𝑎, 𝐷𝑎, 𝑃𝑎) and 𝐽𝑐 = (𝑆𝑐, 𝐷𝑐, 𝑃𝑐).
For the actuators, 𝑃𝑎 can be assumed to be equal to 𝐷𝑎, as the actuators are ready for the next event once the current one is processed. The controller period 𝑃𝑐 can be longer than the execution time of the computation tasks 𝐷𝑐 when an idle phase is scheduled after the computation phase. The network task 𝐽𝑒 is executed twice, namely as 𝐽𝑒_𝑠𝑐 and 𝐽𝑒_𝑐𝑎, in each control cycle. 𝐽𝑒 for the S–C and C–A links are both scheduled periodically based on 𝜙𝑠, i.e., 𝐽𝑒_𝑠𝑐 = (𝑆𝑒_𝑠𝑐, 𝐷𝑒_𝑠𝑐, 𝜙𝑠) and 𝐽𝑒_𝑐𝑎 = (𝑆𝑒_𝑐𝑎, 𝐷𝑒_𝑐𝑎, 𝜙𝑠).

In order to simplify the mathematical modelling without losing generality of the final results, it is assumed that each controlled plant is equipped with one sensor and one actuator connected in a line topology, as shown in Fig. 3. The paper concentrates on assessing different protocols and their applicability in networked control applications. Consequently, evaluating how various topologies may impact the network delays for each protocol is beyond the scope of the paper.
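The task tuple and cycle indexing above can be sketched as a small data structure. This is an illustrative reading of the model; the names `Task` and `start_in_cycle` are ours, not the paper's, and the numeric values are placeholders.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    """A task J_x = (S_x, D_x, P_x) with D_x <= P_x (all times in seconds)."""
    start: float      # S_x: start time of the first instance
    exec_time: float  # D_x: execution time
    period: float     # P_x: period (periodic task) or minimum inter-arrival time (event-driven task)

    def __post_init__(self):
        # The model requires D_x <= P_x for every task.
        assert self.exec_time <= self.period, "model requires D_x <= P_x"

def start_in_cycle(task: Task, k: int) -> float:
    """Start of a periodic task in the k-th control cycle: S_x(k) = S_x(0) + k * P_x."""
    return task.start + k * task.period

# Data flow of one control cycle for plant n:
#   J_s^n(k) -> J_e_sc(k) -> J_c(k) -> J_e_ca(k) -> J_a^n(k)
# e.g., a time-driven sensor with D_s = 0.2 ms sampled every phi_s = 1 ms:
sensor = Task(start=0.0, exec_time=0.2e-3, period=1.0e-3)
print(start_in_cycle(sensor, 3))
```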
Fig. 3. Space–time diagram for communication using (a) SFP; (b) IFP (with slipstream).
3.4. Network task

Fig. 3 illustrates the traffic flow within a bus cycle for different frame types. The summation-frame concept used by EtherCAT originated from INTERBUS, where a single frame travels serially through the network for data exchange with all sensors and actuators. The space–time diagram is illustrated in Fig. 3(a). When the frame reaches the last station, the loop-back function in that station is enabled and the frame is forwarded back to the controller. Data exchange is carried out ''on-the-fly''. Since this single frame packs all process data inside, the data encoding efficiency is usually high. The network interface on the controller may only be required to issue one summation frame per bus cycle. This generally leads to a shorter bus cycle as compared with protocols using an individual frame scheme (Prytz, 2008). In Fig. 3(a), 𝐽𝑒_𝑠𝑐(𝑘) and 𝐽𝑒_𝑐𝑎(𝑘) belong to two different but consecutive bus cycles. The inbound task 𝐽𝑒_𝑠𝑐(𝑘) collects the sensor measurements and sends them to the controller. The network chip on the controller then issues the next EtherCAT frame and transmits it via 𝐽𝑒_𝑐𝑎(𝑘).
X. Wu and L. Xie
Control Engineering Practice 84 (2019) 208–217
For IFPs, such as PROFINET IRT and EtherNet/IP, the process data for each node are contained individually in each frame. Data frames are transmitted based on a pre-configured time schedule (e.g., static TDMA). A simple line topology is illustrated in Fig. 3 as an example. The controller can address the nodes in a sequential order according to their physical positions in the network (Wisniewski, Schumacher, Jasperneite, & Schriegel, 2012). Communications in the S–C and C–A links can be performed simultaneously due to full-duplex operation. The slipstreaming scheme shown in Fig. 3(b) is applied to reduce the cycle time in networks structured in a line topology (Schumacher, Jasperneite, & Weber, 2008). Both 𝐽𝑒_𝑠𝑐(𝑘) and 𝐽𝑒_𝑐𝑎(𝑘) are made up of a collection of inbound and outbound frame transmissions.
4. End-to-end delay analysis

The total end-to-end delays in the S–C or C–A link are composed of latencies from all communication layers, and can be categorized into network, stack and application delays accordingly and calculated as

{τ_sc^n(k), τ_ca^n(k)} = τ_network^n(k) + τ_stack^n(k) + τ_appl^n(k).  (1)

4.1. Network delays on S–C link

Network delays are induced by networking components such as the cable, PHY, network controller, etc. Consequently, the network delay is made up of transmission delay, propagation delay and possibly waiting delay. Transmission and propagation delays are the static part of the total network delays and are usually constant and predictable once the network is designed and deployed. Waiting delay refers to the queueing time of a frame in the sending or receiving buffer on the DPRAM, which may or may not be applicable. Network delays generally depend on several network parameters, e.g., the number of nodes, media access method, length of cable, network traffic load, etc.

Sensor data are generated in each system sampling period φ_s. The data are processed by the sensor CPU in J_s and then placed in the shared DPRAM, awaiting the opportunity to be collected by the network task J_e_sc. The start time of sensor task n at the kth control cycle can be obtained as

S_s^n(k) = S_s^n(0) + kφ_s,  (2)

where S_s^n(0) is the initial time of J_s^n. Here system integrators have the opportunity to use a global clock to synchronize the data sampling of the sensors. With clock synchronization, the sensors can be configured so that S_s^n(k) are identical ∀n ∈ S. The sync signal generated by the network controller (e.g., FPGA) can be used to raise an interrupt for the sensor application via the IRQ pin. The cycle of the sync signal can be identical to the bus cycle. An input shift delay can be inserted after the sync event; the input latch for inbound data is performed after the delay expires. Of course, the input shift delay should be defined such that the time needed for sensor data processing and data-copy to the DPRAM of the FPGA is reserved, so that the sensor value is available before the arrival of the S–C frames. In this case, the waiting time for the sensor data in the DPRAM can be minimal. Since J_e_sc is supposed to transmit all periodic sensor data to the controller, a full bus cycle should be carried out by J_e_sc cyclically according to φ_s. If clock synchronization is not applied, the sensor data are stored in the DPRAM of the network slave controllers but not communicated until the next bus cycle. This leads to the possibility of an additional waiting delay T_w^{J_s^n→J_e_sc}(k), which falls in the range [0, φ_s). With clock synchronization, the system behaviour is predictable and deterministic. Therefore, it would be preferred to utilize clock synchronization to minimize the latency between J_s and J_e_sc. Considering both scenarios (with or without clock synchronization), the start time of the bus cycle can be written as

S_e_sc(k) = S_s^n(k) + D_s + (1 − λ_sync) T_w^{J_s^n→J_e_sc}(k),  (3)

where λ_sync is a boolean parameter indicating the usage of clock synchronization; λ_sync = 1 when clock synchronization is applied. In both cases, no local queue is required on the sensors because of the cyclic communication pattern. Once a communication cycle is finished, all sensor data are collected in the DPRAM of the network master controller. The network master controller triggers an interrupt to the controller CPU to read the sensor input data and start computing the actuators' set-points using a control algorithm such as PID. The start time of J_c is

S_c(k) = S_e_sc(k) + D_e_sc^n + T_w^{J_e_sc→J_c}(k),  (4)

where D_e_sc^n is the network delay for the data sampled at sensor n to reach the controller and T_w^{J_e_sc→J_c} is the data access time needed by the controller's CPU. The data access time is used by the controller CPU to copy data from the DPRAM (data-link layer) to its executable program (running on the application layer). D_e_sc^n = 𝑒, ∀n ∈ S, where 𝑒 is protocol-specific, representing the bus cycle time. Generally, 𝑒 depends on a number of parameters including the frame transmission time T_f and the forwarding delays on sensors and actuators, denoted respectively by T_s and T_a. The bus cycle is not the focus of this paper, as the calculation of 𝑒 is available in Prytz (2008), Robert, Georges, Rondeau, and Divoux (2012), and Wu, Xie, and Lim (2014).

The S–C network delays τ_sc,network^n(k) can be obtained by measuring the time interval from the end of J_s^n(k) to the start of J_c(k), i.e.,

τ_sc,network^n(k) = S_c(k) − (S_s^n(k) + D_s)
                 = D_e_sc^n + (1 − λ_sync) T_w^{J_s^n→J_e_sc}(k) + T_w^{J_e_sc→J_c}(k).  (5)

4.2. Network delays on C–A link

The task J_e_ca can be started as soon as the controller finishes its computation and the outbound process data are copied to the DPRAM of the network master controller:

S_e_ca(k) = S_c(k) + P_c + T_w^{J_c→J_e_ca}(k),  (6)

where P_c covers the controller delay τ_c,appl (elaborated in Section 4.4) and T_w^{J_c→J_e_ca} is the data access time from the RAM of the controller's CPU to the DPRAM of the network master controller. The task of actuator n is activated when the output data are delivered from the network to actuator n via J_e_ca, i.e.,

S_a^n(k) = S_e_ca(k) + D_e_ca^n,  (7)

where D_e_ca^n is the communication delay from the start of the bus cycle to the reception of the output data at actuator n. It normally takes less than a full bus cycle for the actuators to receive data over the network after a bus cycle is started (see Fig. 3(b)); it can therefore be known that D_e_ca^n ≤ 𝑒, ∀n ∈ A. Each actuator experiences D_e_ca^n differently, depending on the node index n. This is different from the case of clock-driven S–C delays, where D_e_sc^n is identical for all sensors.

A single EtherCAT summation frame travels along the C–A and thereafter the S–C links in serial. Therefore, the transmission time of the summation frame, denoted as T_f^{S∪A}, applies to all actuators. On the other hand, the network master of PROFINET IRT or EtherNet/IP issues N individual frames in a bus cycle. D_e_ca^n can be obtained as

D_e_ca^n = Σ_{m∈S̃} T_s^m + Σ_{m∈Ã} T_a^m + T_f^{S∪A},                        λ_frame = SFP,
D_e_ca^n = Σ_{m∈S̃} T_s^m + Σ_{m∈Ã} T_a^m + T_f^n + Σ_{m∈(S̃^∁ ∪ Ã^∁)} T_f^m,  λ_frame = IFP,  (8)

where S̃ = {1, 2, …, n} is a subset of S, and similarly Ã is a subset of A. Obviously, the results differ by λ_frame, which indicates whether SFP (EtherCAT) or IFP (PROFINET IRT or EtherNet/IP) is used.

The actuators start their execution once their network slave controllers receive data from the bus, and actuations take place in D_a. By
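As a sketch, the C–A delay formulations of eqs. (8) and (9) can be prototyped in a few lines. All function names and timing values below are illustrative placeholders, not measurements from the paper.

```python
def d_e_ca(n, T_s, T_a, T_f, T_f_sum, frame_scheme):
    """Eq. (8): delay from bus-cycle start until actuator n receives its output data.

    T_s, T_a:  forwarding delays of sensors/actuators 1..N (Python lists, 0-indexed internally)
    T_f:       per-node individual-frame transmission times (IFP case)
    T_f_sum:   transmission time of the single summation frame (SFP case)
    """
    # Forwarding delays accumulated over the subsets S~ = {1..n} and A~ = {1..n}.
    forwarding = sum(T_s[:n]) + sum(T_a[:n])
    if frame_scheme == "SFP":
        # EtherCAT: one summation frame carries the process data for all nodes.
        return forwarding + T_f_sum
    # IFP (PROFINET IRT / EtherNet/IP): own frame plus the frames of the
    # complement sets S~^c = A~^c = {n+1..N}, as in eq. (8).
    return forwarding + T_f[n - 1] + sum(T_f[n:])

def tau_ca_network(n, T_s, T_a, T_f, T_f_sum, frame_scheme, T_w_c_to_e_ca):
    """Eq. (9): C-A network delay = D_e_ca^n plus the controller-side data access time."""
    return d_e_ca(n, T_s, T_a, T_f, T_f_sum, frame_scheme) + T_w_c_to_e_ca

# Placeholder timings: 5 nodes, 1 us forwarding per hop, 2 us per individual
# frame, 8 us for the summation frame (illustrative values only).
N = 5
T_s = [1e-6] * N
T_a = [1e-6] * N
T_f = [2e-6] * N
print(d_e_ca(3, T_s, T_a, T_f, T_f_sum=8e-6, frame_scheme="SFP"))
print(d_e_ca(3, T_s, T_a, T_f, T_f_sum=8e-6, frame_scheme="IFP"))
```

With these placeholder numbers the SFP variant pays the full summation-frame time once, while the IFP variant pays only for the frames not yet delivered, which is why the two schemes diverge as N and the payload sizes change.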
neglecting the data access time on the actuators, the network delays on the C–A link can be obtained as

τ_ca,network^n(k) = S_a^n(k) − (S_c(k) + P_c) = D_e_ca^n + T_w^{J_c→J_e_ca}(k).  (9)
4.3. Stack delays

The stack delay 𝜏stack is also known as the communication stack traversal time from the data-link layer to the application. Unlike 𝜏network, 𝜏stack is independent of the network parameters. Its measurement can therefore be carried out at the device level. The communication stack can be realized on an FPGA or ASIC (Felser, Felser, & Kaghazchi, 2012; Ganz & Dermot Doran, 2014). The resulting delay depends on the hardware platform on which the stack is executed, and is normally obtained by experiments and offered by the stack owner. Stack delays for EtherCAT, PROFINET IRT and EtherNet/IP were experimentally verified on the same platform in Iwanitz (2011). As the network layer and transport layer are also involved in UDP-based communication, a larger stack traversal time is expected for EtherNet/IP.
Table 1. Simulation parameters.

Parameter                              Value
Number of controlled plants N          5
Number of sensors or actuators         5
Cyclic data length                     10 Bytes per sensor/actuator
Acyclic data length                    0 Bytes
Topology                               Line
Controller execution time τ_c,appl     1 ms
Actuator execution time D_a            0.3 ms
Sensor execution time D_s              0.2 ms
Number of bins V                       19
Simulation duration φ_sim              60 s
4.4. Controller delays

Controller delays are considered as delays on the application layer. It is assumed that the controller operates in a non-preemptive single-tasking way at a fixed CPU capacity, i.e., the controller task for the next controlled plant has to be queued if the computation task for the current plant is being handled by the controller. It is noted that preemptive scheduling on the CPU may be advantageous in certain cases, but the delay modelling can then hardly be generalized. P_c, also known as the scan time of an industrial controller such as a PLC, can be calculated by adding the computational times for all controlled plants during the computation phase and the additional idle time reserved:

P_c = Σ_{n=1}^{N} D_c^n + D_c_idle,  (10)
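Eq. (10), the CPU-load expression and the sampling-period bound can be sketched directly; the per-plant compute times and the bus cycle time used in the example are placeholders for illustration, not the paper's measurements.

```python
def scan_time(D_c, D_idle=0.0):
    """Eq. (10): non-preemptive controller scan time P_c = sum_n D_c^n + D_c_idle."""
    return sum(D_c) + D_idle

def cpu_load(D_c, phi_s):
    """U = (sum_n D_c^n) / phi_s; the schedule is feasible only if U < 1."""
    return sum(D_c) / phi_s

def min_sampling_period(D_c, bus_cycle):
    """Lower bound on phi_s imposed by the network and controller: phi_s >= e + sum_n D_c^n."""
    return bus_cycle + sum(D_c)

# Illustrative numbers: 5 plants at 0.2 ms compute each, 106.26 us bus cycle.
D_c = [0.2e-3] * 5
P_c = scan_time(D_c)                           # about 1.0 ms total scan time
phi_min = min_sampling_period(D_c, 106.26e-6)  # smallest feasible sampling period
U = cpu_load(D_c, phi_s=2e-3)                  # about 50% CPU load at phi_s = 2 ms
```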
Fig. 4. Simulation set-up.
5. Case study 5.1. System set-up The case study considers an industrial NCS with 5 controlled plants as in Cervin et al. (2003). The system can be illustrated with Fig. 4 and the system parameters are listed in Table 1. The NCS, controlled by an industrial PC, is deployed with 5 sensors and 5 actuators connected onto the network backbone in a line topology. Additional assumptions are made here to simplify the performance evaluation without losing any generality. Firstly, node latencies (processing delay caused by the network chip) are constant and identical for all sensors or all actuators. Secondly, cable propagation delay of 5 ns per metre is ignored. Lastly, all sensors and actuators have the equal payload size (10 Bytes) in the inbound and outbound frame, respectively.
where D_c_idle is the time reserved for the idle phase. P_c is deterministic in each control cycle.

The CPU load on the controller can be calculated as U = Σ_{n=1}^{N} D_c^n / φ_s. Apparently, U ≤ Σ_{n=1}^{N} D_c^n / (𝑒 + Σ_{n=1}^{N} D_c^n) < 1 must hold, since the minimum sampling time achievable by the network satisfies φ_s ≥ 𝑒 + Σ_{n=1}^{N} D_c^n.

4.5. Delay model
The delay model is used to formulate the characteristics of the end-to-end delays. With the network delays, stack delays and controller delays obtained, the end-to-end delays τ can be calculated using formula (1). The total delay τ is time-variant due to the variable delay elements such as the stack delays and, optionally, the waiting delays. Since all delay elements are bounded, τ varies only within its minimum and maximum limits, denoted as τ_min and τ_max, respectively. The statistics of τ can be consolidated with a histogram as suggested in Cervin, Henriksson, Lincoln, Eker, and Arzen (2003). In other words, the delay statistics can be summarized by the following discrete-time probability density function:

Pr_τ = [Pr_τ(0)  Pr_τ(1)  Pr_τ(2)  ⋯  Pr_τ(V)],  (11)

where Pr_τ(m) is the probability that τ ∈ [τ_min + mδ, τ_min + (m+1)δ) and V is the number of bins in the histogram. δ is the constant time-grain, δ = (τ_max − τ_min)/(V + 1). The larger the value of V, the more accurate the delay model. With the delay model, the delay results obtained from a standalone network analysis can be summarized as a QoS database which can thereafter be applied to the control performance verification via a system simulation. This is elaborated further in the case study.

5.2. Evaluation strategy

There is already a MATLAB toolbox named TrueTime developed to simulate the communication and control behaviour of network-based real-time control systems (Cervin et al., 2003). The toolbox currently supports PROFINET. However, EtherCAT and EtherNet/IP cannot be directly simulated using the toolbox. Additionally, the toolbox does not take the stack delays into account in its delay modelling, and correspondingly the performance evaluation on both QoS and QoC may not be accurate. Therefore, this toolbox is used only for the control part of the evaluation in the case study.

The strategy is to evaluate the communication and control performance individually, as presented in Fig. 5. The network evaluation is performed using a piece of MATLAB code which implements the formulations given in the previous section. The purpose of the network evaluation is to form a database which contains the delay model described in formula (11). This database is used off-line as the input to the control system simulation performed by the TrueTime toolbox. As EtherNet/IP, EtherCAT, and PROFINET I/O are supported by different companies or organizations, the controllers, actuators and sensors available on the market may be implemented differently in terms of network chips. The network performance, especially the stack delays, cannot be comprehensively compared if the stacks are realized on different network chips. To make the comparison fair and valid, it
Fig. 5. Evaluation process on QoS and QoC.
is required to find a common solution for both the network master and slave devices. In practice, it is very hard to design and implement such a common experimental platform which assesses the network performance in an unbiased way. The network delays 𝜏network rely heavily on the system set-up and the network parameters. Therefore, the network delays are obtained with the mathematical formulations described in Section 4. On the other hand, the stack delay 𝜏stack can be assessed at the device level, as it is independent of the system. It cannot easily be modelled mathematically or simulated, as it is highly dependent on the computational complexity of the stack and the hardware performance. Fortunately, an experiment was carried out on an Altera Nios II soft processor running the real-time operating system eCos (Iwanitz, 2011). The results in Iwanitz (2011) demonstrate that 𝜏stack is a bounded variable for EtherNet/IP, EtherCAT, and PROFINET I/O. However, the probability distribution of 𝜏stack is not given in the reference. It is therefore assumed that 𝜏stack is a random variable following the PERT distribution according to the minimum, average and maximum stack delays provided in Iwanitz (2011). The network evaluation results obtained with formula (1) can be consolidated and structured with the delay model Pr_τ described by formula (11) and stored as a database (e.g., a MATLAB .mat file). Once the database is prepared, the control part of the study can be started. The control performance of the system is evaluated via computer simulation carried out in TrueTime (Version 2.0 Beta 8). The TrueTime models use the QoS database as one of the system inputs.
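The two modelling steps just described, drawing PERT-distributed stack delays and consolidating delay samples into the discrete model of formula (11), can be sketched as below. The min/mode/max bounds are placeholder values, and treating the reported average as the PERT mode is our simplification for illustration.

```python
import random

def pert_sample(lo, mode, hi, lam=4.0):
    """One draw from a PERT(lo, mode, hi) distribution, i.e. a scaled Beta with
    alpha = 1 + lam*(mode-lo)/(hi-lo) and beta = 1 + lam*(hi-mode)/(hi-lo)."""
    alpha = 1.0 + lam * (mode - lo) / (hi - lo)
    beta = 1.0 + lam * (hi - mode) / (hi - lo)
    return lo + (hi - lo) * random.betavariate(alpha, beta)

def delay_model(samples, V):
    """Formula (11): probabilities Pr_tau(0..V) over V+1 bins of width
    delta = (tau_max - tau_min) / (V + 1). Assumes max(samples) > min(samples)."""
    tau_min, tau_max = min(samples), max(samples)
    delta = (tau_max - tau_min) / (V + 1)
    counts = [0] * (V + 1)
    for t in samples:
        m = min(int((t - tau_min) / delta), V)  # clamp tau_max into the last bin
        counts[m] += 1
    return [c / len(samples) for c in counts], tau_min, delta

# Placeholder stack-delay bounds (seconds): min 20 us, mode 35 us, max 80 us.
random.seed(0)
tau_stack = [pert_sample(20e-6, 35e-6, 80e-6) for _ in range(10_000)]
pr, tau_min, delta = delay_model(tau_stack, V=19)  # V = 19 as in Table 1
```

The resulting vector `pr` plays the role of one entry in the QoS database that the control simulation later samples from.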
Fig. 6. Probability distribution of communication delays over EtherCAT: (a) 𝜏𝑠𝑐 ; (b) 𝜏𝑐𝑎 .
5.3. Communication performance

The performance of the communication networks for the third controlled plant (𝑛 = 3) is illustrated in Figs. 6 to 9 and Table 2, including bus cycle time, end-to-end delays (minimum, average and maximum values) and the standard deviation (SD) of the end-to-end delays. The discrete probability distribution of the communication delays is presented in Fig. 9 with a dataset of 20 possible values. It can be observed from the figures that all delays are bounded and that the gaps between the average delays delivered by EtherCAT, PROFINET IRT and EtherNet/IP are distinct.

EtherCAT offers the best QoS among the three protocols. It achieves the fastest bus cycle time 𝑒 at 106.26 μs, and control data transfers over the EtherCAT bus with average delays of 0.403 ms in the S–C link and 0.318 ms in the C–A link. Firstly, this is because the summation-frame scheme outperforms the individual-frame scheme in typical industrial NCSs, where cyclic and fast communication with small payloads exchanged between the nodes is typically required (Jasperneite, Schumacher, & Weber, 2007; Prytz, 2008; Wang, Li, & Wang, 2017). This leads to shorter 𝜏network in EtherCAT as compared to EtherNet/IP and PROFINET IRT. Secondly, EtherCAT has lower stack delay 𝜏stack and jitter than the other two protocols. The PROFINET IRT network induces 1.356 ms inbound delays and 1.261 ms outbound delays, which are considerably lower than those of EtherNet/IP. This is mainly due to the lower stack delays in PROFINET, as PROFINET frames travel directly to the application layer, bypassing the TCP/IP stack. EtherNet/IP delivers the worst performance among the evaluated candidates, with average delays of more than 4 ms in both the S–C and C–A directions.
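The advantage of the summation-frame scheme for small payloads can be illustrated with back-of-the-envelope wire-time arithmetic on 100 Mbit/s Fast Ethernet. The framing overheads are the standard Ethernet costs; the node count and per-node payload are illustrative assumptions, not the exact parameters of the evaluated networks.

```python
# Rough wire-time comparison at 100 Mbit/s; payload sizes are assumptions.
BITS_PER_US = 100                    # 100 Mbit/s -> 100 bits per microsecond
OVERHEAD = 7 + 1 + 14 + 4 + 12      # preamble+SFD, MAC header, FCS, interframe gap (bytes)
MIN_PAYLOAD = 46                    # minimum Ethernet payload (bytes)

def frame_time_us(payload_bytes):
    """Transmission time of one Ethernet frame carrying `payload_bytes`."""
    payload = max(payload_bytes, MIN_PAYLOAD)   # short frames are padded
    return (OVERHEAD + payload) * 8 / BITS_PER_US

nodes, data_per_node = 10, 8        # assumed: 10 slaves, 8 bytes of I/O each

individual = nodes * frame_time_us(data_per_node)   # one frame per node
summation = frame_time_us(nodes * data_per_node)    # one shared frame
```

With these assumptions the ten individual frames occupy 67.2 μs of wire time (each padded to the minimum frame size), while a single summation frame carrying all 80 bytes needs only 9.44 μs, since the per-frame overhead and padding are paid once rather than per node.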
Fig. 7. Probability distribution of communication delays over PROFINET IRT: (a) 𝜏𝑠𝑐 ; (b) 𝜏𝑐𝑎 .

Fig. 8. Probability distribution of communication delays over EtherNet/IP: (a) 𝜏𝑠𝑐 ; (b) 𝜏𝑐𝑎 .

Table 2
Comparison on QoS results.

Results        EtherCAT          PROFINET          EtherNet/IP
               S–C      C–A      S–C      C–A      S–C      C–A
𝑒 (μs)         106.26            149.85            252.05
𝜏min (ms)      0.237    0.185    1.120    1.124    2.616    2.606
Average (ms)   0.403    0.318    1.356    1.261    4.197    4.111
𝜏max (ms)      0.567    0.453    1.692    1.518    6.153    6.015
SD (ms)        0.070    0.066    0.110    0.094    0.869    0.868

5.4. Networked controller

With the network performance evaluated and summarized according to the delay model, the next step is to use it as an input to the control performance assessment. In order to simulate the impact of network-induced delay on the control performance, four networking configurations are considered: direct connection, and networked control over EtherNet/IP, PROFINET IRT and EtherCAT. The second-order plant model of a DC servo process given in reference Cervin et al. (2003) can be re-written as

𝐺(𝑠) = 1000∕(𝑠(𝑠 + 1)). (12)

A PD controller is developed according to

𝑃 (𝑘) = 𝐾(𝑟(𝑘) − 𝑦(𝑘)),
𝐷(𝑘) = 𝛼𝐷(𝑘 − 1) + 𝛽(𝑦(𝑘 − 1) − 𝑦(𝑘)), (13)
𝑢(𝑘) = 𝑃 (𝑘) + 𝐷(𝑘),

where 𝑟(𝑘), 𝑢(𝑘) and 𝑦(𝑘) are the set-point, plant input and plant output, respectively, for the 𝑘th control cycle, and 𝛼, 𝛽 and 𝐾 are the PD controller parameters (same values as those in Cervin et al., 2003). Under cyclic communication mode, the next effective sensor sampling can take place as soon as the current sampled data is consumed by the controller. Therefore, the minimum effective sampling time is 𝜙𝑠,min = 𝑒 + 𝜏𝑐,appl . The actual sampling time 𝜙𝑠 = 10 ms is selected according to Cervin et al. (2003), even though EtherCAT, PROFINET IRT and EtherNet/IP can enable faster sampling. Maximum overshoot and Integral Absolute Error (IAE) are considered as control performance criteria.
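Controller (13) maps directly onto a small stateful update, sketched below in Python. The gains 𝐾, 𝛼 and 𝛽 used here are placeholders; the paper reuses the values of Cervin et al. (2003), which are not reproduced in this section.

```python
class PDController:
    """Discrete PD controller implementing Eq. (13).

    K, alpha, beta are placeholder gains, not the values of
    Cervin et al. (2003).
    """
    def __init__(self, K, alpha, beta):
        self.K, self.alpha, self.beta = K, alpha, beta
        self.d_prev = 0.0      # D(k-1)
        self.y_prev = 0.0      # y(k-1)

    def update(self, r, y):
        p = self.K * (r - y)                                          # P(k)
        d = self.alpha * self.d_prev + self.beta * (self.y_prev - y)  # D(k)
        self.d_prev, self.y_prev = d, y
        return p + d                                                  # u(k)

pd = PDController(K=1.5, alpha=0.5, beta=2.0)   # illustrative gains
u0 = pd.update(r=1.0, y=0.0)   # first cycle: D(0) = 0, so u = K*r = 1.5
```

In the networked configurations, `update` would be invoked once per control cycle after the sensor sample arrives over the S–C link, with the computed 𝑢(𝑘) then subject to the C–A delay.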
IAE is given by

IAE = ∑_{𝑘=0}^{𝑘𝑓} |𝑒(𝑘)|, (14)

where 𝑒(𝑘) is the error between the set-point and the actual response at time instance 𝑘 and 𝑘𝑓 = 𝜙sim ∕𝜙𝑠 .

5.5. Control performance

The control performance can be found in Fig. 10 (only the first 2 s out of 𝜙sim are shown) and Table 3. For local control, the maximum overshoot is 2.19%, with a 2% settling time of 122 ms. The results demonstrate that EtherCAT offers excellent networked control performance, as its IAE almost equals that of local control. The maximum overshoot for EtherCAT-based control is 2.67%, which again is only slightly higher than that of the direct connection. PROFINET IRT is also capable of delivering acceptable control performance, although the maximum overshoot is nearly doubled. On the other hand, the end-to-end delay and its standard deviation (jitter) of EtherNet/IP are much higher than those of EtherCAT and PROFINET IRT. The average total delay of 9.31 ms in EtherNet/IP is already comparable to the sampling period 𝜙𝑠 . Correspondingly, the system is unable to settle its response within the 500 ms control period, and the overshoot is increased to about 7 times that of local control. From the case study, it can be concluded that communication delay is a key factor for control loop performance. EtherCAT and PROFINET IRT are generally capable of faster and more deterministic data communication, and hence more satisfactory control performance can be achieved than with EtherNet/IP.

Table 3
Comparison on QoC results.

Connectivity     Max. overshoot    IAE
Local            2.19%             17.85
EtherCAT         2.67%             17.95
PROFINET IRT     4.73%             19.02
EtherNet/IP      16.94%            34.00

Fig. 9. Cumulative distribution function of end-to-end delays: (a) 𝜏𝑠𝑐 ; (b) 𝜏𝑐𝑎 .

6. Conclusion

This paper presented a piece of applied research which explores the performance of industrial NCSs achievable with the most popular industrial Ethernet protocols, i.e., EtherNet/IP, EtherCAT and PROFINET IRT. End-to-end delays for these protocols were presented, analysed and modelled, and eventually evaluated via a case study. The corresponding control loop performance over the different networks was compared and discussed. Based on the simulation results on QoS and QoC of a typical NCS, it was concluded that EtherNet/IP and possibly other IP-based technologies should be able to offer satisfactory results for slow dynamic control applications. The advantage of EtherNet/IP is the lower hardware cost as compared to EtherCAT and PROFINET IRT due to the usage of standard Ethernet components on the data-link layer. On the other hand, not only fast but also deterministic communication can be achieved by modified-Ethernet protocols such as EtherCAT and PROFINET IRT. In general, both protocols offer better QoS and QoC than EtherNet/IP in the NCS. Additionally, EtherCAT is capable of data delivery with shorter delays and jitter than PROFINET IRT in typical NCSs, where the payload per node is normally small and cyclic data exchange between the nodes is required. As a result, EtherCAT offers better closed-loop control performance over the network as compared to PROFINET IRT. EtherCAT is hence ideal for real-time networked control applications such as motion control. However, it should be highlighted that this paper does not discuss the scenario of large payload per node, as this is generally not valid in industrial NCSs.

One potential future study would be continuous improvement of the protocols for better flexibility and ease of integration. It is noted that there are already a few references investigating enhancement possibilities in data-link protocols and services (Lo Bello, Bini, & Patti, 2015; Schlesinger, Springer, & Sauter, 2015). Practical improvement of the stack latency is also an option (Felser et al., 2012; Maruyama & Yamada, 2015). Another important topic is a comparative study among emerging Ethernet-enabled standards including TT-Ethernet, IEEE 802.1 TSN, etc., and an investigation of their applicability to different NCS applications.
Acknowledgments

The authors would like to thank the editor and anonymous reviewers for their valuable comments on the manuscript. This work was supported by Singapore Economic Development Board (EDB) [grant number S11-1669-IPP].
Fig. 10. Control performance of the DC servo.
References
Bibinagar, N., & Won-jong, K. (2013). Switched Ethernet-based real-time networked control system with multiple-client-server architecture. IEEE/ASME Transactions on Mechatronics, 18(1), 104–112. http://dx.doi.org/10.1109/TMECH.2011.2163316.

Canovas, S., & Cugnasca, C. (2010). Implementation of a control loop experiment in a network-based control system with LonWorks technology and IP networks. IEEE Transactions on Industrial Electronics, 57(11), 3857–3867. http://dx.doi.org/10.1109/TIE.2010.2040562.

Cena, G., Bertolotti, I., Scanzio, S., Valenzano, A., & Zunino, C. (2012). Evaluation of EtherCAT distributed clock performance. IEEE Transactions on Industrial Informatics, 8(1), 20–29. http://dx.doi.org/10.1109/TII.2011.2172434.

Cena, G., Bertolotti, I. C., Scanzio, S., Valenzano, A., & Zunino, C. (2013a). Synchronize your watches: Part I: General-purpose solutions for distributed real-time control. IEEE Industrial Electronics Magazine, 7(1), 18–29. http://dx.doi.org/10.1109/MIE.2012.2232354.

Cena, G., Bertolotti, I. C., Scanzio, S., Valenzano, A., & Zunino, C. (2013b). Synchronize your watches: Part II: Special-purpose solutions for distributed real-time control. IEEE Industrial Electronics Magazine, 7(2), 27–39. http://dx.doi.org/10.1109/MIE.2013.2248431.

Cervin, A., Henriksson, D., Lincoln, B., Eker, J., & Arzen, K. (2003). How does control timing affect performance? Analysis and simulation of timing using Jitterbug and TrueTime. IEEE Control Systems, 23(3), 16–30. http://dx.doi.org/10.1109/MCS.2003.1200240.

Cuenca, A., Salt, J., Sala, A., & Piza, R. (2011). A delay-dependent dual-rate PID controller over an Ethernet network. IEEE Transactions on Industrial Informatics, 7(1), 18–29. http://dx.doi.org/10.1109/TII.2010.2085007.

Ding, S., Zhang, P., Yin, S., & Ding, E. (2013). An integrated design framework of fault-tolerant wireless networked control systems for industrial automatic control applications. IEEE Transactions on Industrial Informatics, 9(1), 462–471. http://dx.doi.org/10.1109/TII.2012.2214390.

Felser, C., Felser, M., & Kaghazchi, H. (2012). Improved architecture for PROFINET IRT devices. In Emerging technologies & factory automation, 2012 IEEE international conference on (pp. 1–8). http://dx.doi.org/10.1109/ETFA.2012.6489562.

Galloway, B., & Hancke, G. (2013). Introduction to industrial control networks. IEEE Communications Surveys & Tutorials, 15(2), 860–880. http://dx.doi.org/10.1109/SURV.2012.071812.00124.

Ganz, D., & Dermot Doran, H. (2014). Extending summation-frame communication systems for high performance and complex automation applications. In Factory communication systems, 2014 10th IEEE workshop on (pp. 1–8). http://dx.doi.org/10.1109/WFCS.2014.6837596.

HMS Industrial Networks (2017). Network trends + technologies vision. URL http://ethercat.org/forms/italy2017/files/14_HMS_Trends_Technologies_Vision.pdf (Accessed on 02.08.18).

HMS Industrial Networks (2018). Industrial Ethernet is now bigger than fieldbuses. URL https://www.hms-networks.com/press/2018/02/27/industrial-ethernet-is-now-bigger-than-fieldbuses (Accessed on 02.08.18).

Höme, S., Palis, S., & Diedrich, C. (2014). Design of communication systems for networked control system running on PROFINET. In Factory communication systems, 2014 10th IEEE workshop on (pp. 1–8). http://dx.doi.org/10.1109/WFCS.2014.6837578.

IEC (2014). IEC 61784-2-14: Industrial communication networks – Profiles – Part 2: Additional fieldbus profiles for real-time networks based on ISO/IEC 8802-3.

Ikram, W., Jansson, N., Harvei, T., Aakvaag, N., Halvorsen, I., & Petersen, S. (2014). Wireless communication in process control loop: Requirements analysis, industry practices and experimental evaluation. In Proceedings of the 2014 IEEE emerging technology and factory automation (pp. 1–8). http://dx.doi.org/10.1109/ETFA.2014.7005231.

Iwanitz, F. (2011). Flexible real-time-Ethernet Anschaltung mit FPGA. Tech. rep., messtec drives Automation - special real-time Ethernet.

Jasperneite, J., Schumacher, M., & Weber, K. (2007). Limits of increasing the performance of industrial Ethernet protocols. In Emerging technologies and factory automation, 2007 IEEE international conference on (pp. 17–24). http://dx.doi.org/10.1109/EFTA.2007.4416748.

Jestratjew, A., & Kwiecien, A. (2013). Performance of HTTP protocol in networked control systems. IEEE Transactions on Industrial Informatics, 9(1), 271–276. http://dx.doi.org/10.1109/TII.2012.2183138.

Kalappa, N., Acton, K., Antolovic, M., Mantri, S., Parrott, J., & Luntz, J. (2006). Experimental determination of real time peer to peer communication characteristics of EtherNet/IP. In 2006 IEEE conference on emerging technologies and factory automation (pp. 1061–1064). http://dx.doi.org/10.1109/ETFA.2006.355194.

Kaliwoda, M., & Rehtanz, C. (2015). Measuring delays of Ethernet communication for distributed real-time applications using carrier sense. In Industrial informatics, 2015 IEEE 13th international conference on (pp. 174–178). http://dx.doi.org/10.1109/INDIN.2015.7281730.

Lo Bello, L., Bini, E., & Patti, G. (2015). Priority-driven swapping-based scheduling of aperiodic real-time messages over EtherCAT networks. IEEE Transactions on Industrial Informatics, 11(3), 741–751. http://dx.doi.org/10.1109/TII.2014.2350832.

Maruyama, T., & Yamada, T. (2015). Communication architecture of EtherCAT master for high-speed and IT-enabled real-time systems. In Emerging technologies & factory automation, 2015 IEEE 20th conference on (pp. 1–8). http://dx.doi.org/10.1109/ETFA.2015.7301421.

ODVA (2018). CIP Motion. URL https://www.odva.org/Technology-Standards/Common-Industrial-Protocol-CIP/CIP-Motion (Accessed on 02.08.18).

Prytz, G. (2008). A performance analysis of EtherCAT and PROFINET IRT. In Emerging technologies and factory automation, 2008 IEEE international conference on (pp. 408–415). http://dx.doi.org/10.1109/ETFA.2008.4638425.

Robert, J., Georges, J. P., Rondeau, E., & Divoux, T. (2012). Minimum cycle time analysis of Ethernet-based real-time protocols. International Journal of Computers, Communications and Control, 7(4), 743–757. http://dx.doi.org/10.15837/ijccc.2012.4.1372.

Rostan, M. (2012). XTS revolutionary drive technology enabled by EtherCAT. URL https://www.pc-control.net/pdf/032012/products/pcc_0312_xts_ethercat_e.pdf (Accessed on 02.08.18).

Rostan, M. (2014). Industrial Ethernet technologies. URL https://www.ethercat.org/download/documents/Industrial_Ethernet_Technologies.pdf (Accessed on 09.06.18).

Rostan, M., Stubbs, J., & Dzilno, D. (2010). EtherCAT enabled advanced control architecture. In Advanced semiconductor manufacturing conference, 2010 IEEE/SEMI (pp. 39–44). http://dx.doi.org/10.1109/ASMC.2010.5551414.

Sauter, T. (2010). The three generations of field-level networks - evolution and compatibility issues. IEEE Transactions on Industrial Electronics, 57(11), 3585–3595.

Schlesinger, R., Springer, A., & Sauter, T. (2015). Automatic packing mechanism for simplification of the scheduling in PROFINET IRT. IEEE Transactions on Industrial Informatics, PP(99), 1–1. http://dx.doi.org/10.1109/TII.2015.2509450.

Schumacher, M., Jasperneite, J., & Weber, K. (2008). A new approach for increasing the performance of the industrial Ethernet system PROFINET. In Factory communication systems, 2008 IEEE international workshop on (pp. 159–167). http://dx.doi.org/10.1109/WFCS.2008.4638725.

Siemens (2018). SIMOTION - the high-end motion control system. URL https://www.siemens.com/global/en/home/products/automation/systems/motion-control.html (Accessed on 02.08.18).

Subha, N., & Liu, G. P. (2015). Design and practical implementation of external consensus protocol for networked multiagent systems with communication delays. IEEE Transactions on Control Systems Technology, 23(2), 619–631. http://dx.doi.org/10.1109/TCST.2014.2341617.

Sung, M., Kim, I., & Kim, T. (2013). Toward a holistic delay analysis of EtherCAT synchronized control processes. International Journal of Computers Communications & Control, 8(4), 608–621. http://dx.doi.org/10.15837/ijccc.2013.4.384.

Ulusoy, A., Gurbuz, O., & Onat, A. (2011). Wireless model-based predictive networked control system over cooperative wireless network. IEEE Transactions on Industrial Informatics, 7(1), 41–51. http://dx.doi.org/10.1109/TII.2010.2089059.

Velagic, J., Kaknjo, A., Osmic, N., & Dzananovic, T. (2011). Networked based control and supervision of induction motor using OPC server and PLC. In Proceedings ELMAR-2011 (pp. 251–255).

Wang, K., Li, X., & Wang, X. (2017). Analysis of real-time Ethernet communication technologies of summation frame and individual frames. In 2017 IEEE 3rd information technology and mechatronics engineering conference (pp. 23–26). http://dx.doi.org/10.1109/ITOEC.2017.8122315.

Wisniewski, L., Schumacher, M., Jasperneite, J., & Schriegel, S. (2012). Fast and simple scheduling algorithm for PROFINET IRT networks. In Factory communication systems, 2012 9th IEEE international workshop on (pp. 141–144). http://dx.doi.org/10.1109/WFCS.2012.6242556.

Wu, X., & Xie, L. (2017). End-to-end delay evaluation of industrial automation systems based on EtherCAT. In 2017 IEEE 42nd conference on local computer networks (pp. 70–77). http://dx.doi.org/10.1109/LCN.2017.14.

Wu, X., Xie, L., & Lim, F. (2014). Network delay analysis of EtherCAT and PROFINET IRT protocols. In Industrial electronics society, IECON 2014 - 40th annual conference of the IEEE (pp. 2597–2603). http://dx.doi.org/10.1109/IECON.2014.7048872.

Zheng, W., Ma, H., & He, X. (2012). Modeling, analysis, and implementation of real time network controlled parallel multi-inverter systems. In Power electronics and motion control conference, 2012 7th international, Vol. 2 (pp. 1125–1130). http://dx.doi.org/10.1109/IPEMC.2012.6258978.