Performance analysis among different acquisition systems for process control


Practice article


Maria Auxiliadora Muanis Persechini*, Luiz Themystokliz Sanctos Mendes
Department of Electronics Engineering, Federal University of Minas Gerais, Av. Antonio Carlos 6627, Belo Horizonte, MG, 31.270-901, Brazil
* Corresponding author. E-mail address: [email protected] (M.A.M. Persechini).

Highlights

• A performance index is proposed to assess the sampling time accuracy.
• The performance index can capture the effects of outliers, bias and jitter.
• The sampling time behavior is affected by the operating system's timer services.
• The best values of the index are obtained with a real-time operating system.
• Systems with a data acquisition board show a better index than those with a PLC.

Article info

Article history: Received 30 August 2018; Received in revised form 30 April 2019; Accepted 1 August 2019; Available online xxxx.

Keywords: Data acquisition; Key performance index; Automation system architecture

Abstract

Data acquisition systems are crucial when implementing control and automation strategies correctly and on time. The knowledge of the sampling time is one of the relevant features to ensure that the sampled data properly represent the process variable. This paper describes the development of a performance index to measure continuously and in real-time the sampling time behavior of typical control and automation systems. To test this index two different architectures are used. The first one uses a data acquisition board plugged directly into the computer bus, and the second one uses a Programmable Logic Controller communicating with a computer through either Industrial Ethernet with OPC or the serial Modbus RTU protocol. In all cases, a custom software application is implemented to calculate the performance index when the sampling time is provided by the Operating System's timer interrupt service. The analysis of the experimental results shows that the proposed performance index can be useful to assess the sampling time behavior or to assist in decision-making about hardware and software choices. © 2019 ISA. Published by Elsevier Ltd. All rights reserved.

1. Introduction

Industrial control and automation systems are essential for achieving the desired quality and performance targets. There are several metrics to quantify these targets, depending on the type of process, the type of product, and other business features. For example, Control Performance Assessment (CPA) is crucial to maintain a highly efficient operational performance of automation systems [1]. Regardless of the metric's purpose, data acquisition of process variables is required at different stages of the industrial process to provide correct and timely information. Beyond that, different architectures for data acquisition systems are used both in industry and in experimental apparatus for research and academic purposes to access these process variables [2–7].

There is a huge amount of information that must be exchanged among the different levels of the hierarchical architecture of process and automation systems. Typical industrial applications such as complex control algorithms and supervisory, management, and optimization functions run on computers responsible for setting the sampling time, and the corresponding data acquisition is typically provided by an Industrial Ethernet network with OPC (OLE for Process Control) communication. In addition, computer data acquisition boards allow software applications to set the sampling time to read or write process variables, and they are commonly found in research and academic projects. Whatever the data acquisition system, knowledge of the sampling time is one of the relevant features to ensure that the sampled data properly represent the process variable. Thus, the objective of this paper is to introduce a performance index that indicates the quality of the data acquisition system by analyzing, in real time, the variations of a sampling time set by software applications running on computers. To test and validate this index, different hardware architectures are used and a custom software application is developed in which the sampling time is set by means of timer interrupt services provided by the Operating System.

2. Sample time performance index

Evaluating and comparing the performance of computers, Operating Systems (OS), and software applications through benchmarks has been a widespread practice for decades [8,9]. Several metrics that measure performance and determinism, such as execution time, jitter, throughput, and latency, are used to assist in decision-making about hardware and software choices. As previously mentioned, CPA is commonly used to evaluate control loops. However, many other software applications for control and automation systems must run at a fixed time interval for sampling process variables in order to perform functions such as data storage, operation, supervision, and management. In these applications, the sampling time is set by timer interrupt services provided by the OS, and the sampling time interval δtm can be measured as

$$ \delta t_m(k) = T(k) - T(k-1), \quad k = 1, 2, 3, \ldots \tag{1} $$

where k is the sequential number of the sample and T(k) is the timestamp provided by the OS for sample k. From Eq. (1), δtm(k) is expected to show only small variations, due to intrinsic features of the hardware, the OS, and the CPU load. Therefore, jitter (defined as the difference between the desired sampling time interval δtd and the measured interval δtm) is an important metric to assist in the proper selection of a control and automation architecture that performs data acquisition. The jitter can be evaluated as

$$ j(k) = \frac{|\delta t_m(k) - \delta t_d|}{\delta t_d} \times 100 \tag{2} $$

where j(k) represents the percentage difference between the desired and the measured sampling time intervals. Instead of applying statistical analysis to a batch of data computed with Eq. (2), as benchmarks usually do, our idea is to implement an index that can be continuously computed as part of a software application and monitored online. To develop this index as an average over a data window of N samples, we borrow the asymptotic sample length from the adaptive control literature [10]. The data window width is defined by specifying a forgetting factor β, and the asymptotic sample length is then given by 1/(1 − β). Therefore, an average of N values of Eq. (2) with forgetting factor β is given by

$$ J = \frac{\sum_{k=1}^{N} \beta^{N-k}\, j(k)}{\sum_{k=1}^{N} \beta^{N-k}} \tag{3} $$

where 0 < β ≤ 1. If β = 1 all data samples are weighted equally; otherwise, recent data have more importance than older data. To run indefinitely, Eq. (3) can be implemented recursively as

$$ J(k) = \frac{\beta D(k-1)\, J(k-1) + j(k)}{D(k)} \tag{4} $$

where

$$ D(k) = 1 + \beta D(k-1) = \sum_{i=1}^{k} \beta^{k-i}. \tag{5} $$
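Note that D(k) in Eq. (5) is a partial geometric sum, so it converges to the asymptotic sample length mentioned above:

$$ \lim_{k \to \infty} D(k) = \frac{1}{1-\beta}. $$

For example, with the value β = 0.998 used in the experiments of Section 4, the effective data window is 1/(1 − 0.998) = 500 samples.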

The information given by Eq. (4) is important, but it does not reveal whether the jitter introduces a random or a systematic error. Assuming that random errors follow a Gaussian (normal) distribution, the systematic error U can be estimated as

$$ U(k) = \frac{\left|\overline{\delta t}_m(k) - \delta t_d\right|}{\delta t_d} \times 100 \tag{6} $$

where δ̄tm(k) is the average of the k sampling time intervals measured by Eq. (1), which can also be implemented recursively with forgetting factor β as

$$ \overline{\delta t}_m(k) = \frac{\beta D(k-1)\, \overline{\delta t}_m(k-1) + \delta t_m(k)}{D(k)}. \tag{7} $$

However, to compute a performance index based on mean values it is important to detect outliers, whose values are considerably different from the expected value and would otherwise distort the results of Eqs. (4) and (6). Outliers can be detected as

$$ s(k) = \begin{cases} 0, & |\delta t_m(k) - \delta t_d| < \gamma \\ \dfrac{|\delta t_m(k) - \delta t_d|}{\delta t_d}, & |\delta t_m(k) - \delta t_d| \geq \gamma \end{cases} \tag{8} $$

and

$$ \sigma(k) = \begin{cases} \sigma(k-1), & |\delta t_m(k) - \delta t_d| < \gamma \\ k, & |\delta t_m(k) - \delta t_d| \geq \gamma \end{cases} \tag{9} $$

where s(k) > 0 represents the occurrence of δtm values outside the expected range, σ(k) records the sample at which the most recent outlier occurred, and γ is adjusted according to the application. The effect of outliers spreading over time is accounted for as

$$ S(k) = \big(S(k-1) + s(k)\big)\, e^{-(\sigma(k)-k)^2}. \tag{10} $$
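For instance, if a single outlier of magnitude s(k₀) occurs at sample k₀ and no further outliers follow, then s(k) = 0 and σ(k) = k₀ for k > k₀, and Eq. (10) reduces to

$$ S(k) = S(k-1)\, e^{-(k - k_0)^2}, $$

so the contribution of the outlier to the index dies out within a few samples once δtm returns to the range defined by γ.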

Finally, considering these different sources of sampling time variation, the sampling time performance index P(k) can be calculated as

$$ P(k) = \sqrt{J(k)^2 + U(k)^2 + S(k)}. \tag{11} $$
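To make the computation concrete, the sketch below shows how Eqs. (1)–(11) can be evaluated recursively at each new sample. It is only an illustration written in C, the language of the applications described in Section 3; the type, function and variable names are illustrative and do not come from the actual application, and the values in main() are arbitrary.

```c
#include <math.h>
#include <stdio.h>

/* Recursive evaluation of the sampling time performance index, Eqs. (1)-(11). */
typedef struct {
    double beta;     /* forgetting factor, 0 < beta <= 1                    */
    double gamma;    /* outlier threshold of Eqs. (8)-(9)                   */
    double dt_d;     /* desired sampling interval (same unit as timestamps) */
    double D;        /* D(k), Eq. (5)                                       */
    double J;        /* filtered jitter J(k), Eq. (4)                       */
    double dt_avg;   /* recursive average of dt_m, Eq. (7)                  */
    double S;        /* outlier term S(k), Eq. (10)                         */
    double t_prev;   /* previous timestamp T(k-1)                           */
    long   k;        /* sample counter                                      */
    long   sigma;    /* sample index of the most recent outlier, Eq. (9)    */
} sti_state;

void sti_init(sti_state *st, double beta, double gamma, double dt_d, double t0)
{
    st->beta = beta; st->gamma = gamma; st->dt_d = dt_d;
    st->D = 0.0; st->J = 0.0; st->dt_avg = dt_d; st->S = 0.0;
    st->t_prev = t0; st->k = 0; st->sigma = 0;
}

/* Feed the timestamp T(k) of the new sample and return P(k). */
double sti_update(sti_state *st, double t_now)
{
    double dt_m = t_now - st->t_prev;                      /* Eq. (1) */
    double dev  = fabs(dt_m - st->dt_d);
    double j    = dev / st->dt_d * 100.0;                  /* Eq. (2) */
    double s_k  = 0.0, U, diff;

    st->t_prev = t_now;
    st->k++;
    st->D = 1.0 + st->beta * st->D;                        /* Eq. (5) */
    /* beta*D(k-1) equals D(k)-1, so the recursions (4) and (7) become: */
    st->J      = ((st->D - 1.0) * st->J      + j)    / st->D;   /* Eq. (4) */
    st->dt_avg = ((st->D - 1.0) * st->dt_avg + dt_m) / st->D;   /* Eq. (7) */
    U = fabs(st->dt_avg - st->dt_d) / st->dt_d * 100.0;         /* Eq. (6) */

    if (dev >= st->gamma) {           /* Eq. (8): outlier magnitude          */
        s_k = dev / st->dt_d;
        st->sigma = st->k;            /* Eq. (9): remember where it happened */
    }
    diff  = (double)(st->sigma - st->k);
    st->S = (st->S + s_k) * exp(-diff * diff);                  /* Eq. (10) */

    return sqrt(st->J * st->J + U * U + st->S);                 /* Eq. (11) */
}

int main(void)
{
    /* Toy run: desired interval 200 ms, beta = 0.998 and gamma = 0.1 as in
     * Section 4 (units here simply follow the timestamps, i.e. seconds).   */
    sti_state st;
    double t = 0.0;
    sti_init(&st, 0.998, 0.1, 0.200, t);
    for (int k = 1; k <= 5; k++) {
        t += 0.200 + (k == 3 ? 0.002 : 0.0);   /* inject a 2 ms deviation */
        printf("P(%d) = %.4f\n", k, sti_update(&st, t));
    }
    return 0;
}
```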

This performance index can be monitored online and used for decision-making about data acquisition systems. Since P(k) assesses the variations of the sampling time, its ideal value is zero, so that the lower the index, the better the data acquisition system.

3. Experimental apparatus

Two different control and automation architectures were chosen to test and analyze the sampling time behavior according to the index defined by Eq. (11). The first one uses a data acquisition board (DAQ) plugged directly into the computer bus, and the second one uses a PLC communicating with a computer through either Industrial Ethernet with OPC or the serial Modbus RTU protocol. In all cases, the sampling time is defined by a custom computer software application.

3.1. DAQ case

The hardware configuration is composed of a National Instruments PCI-6221 DAQ card and a personal computer with a 2.4 GHz Pentium 4 processor. A custom multithreaded software application was developed in the C language to implement and record the performance index P for the following OSes: Microsoft Windows XP, Linux, and Linux with the RTAI real-time extension [11] (from now on referred to as Linux/RTAI). This application is composed of three threads in order to simulate a typical control application: the first one deals with user input data such as the sampling time interval and file names; the second one is responsible for recording the timestamp provided by the OS, the analog input measurement, and the index P into disk files; and the third one runs periodically at strict time intervals according to the user's chosen sampling time.

At each time interval, this periodic thread performs an analog input (AI) measurement by means of an input read method provided by the DAQ's API (Application Programming Interface), obtains the timestamp, calculates P(k), and performs an analog output (AO) operation through an output write method also provided by the DAQ's API. The analog output value switches between 1 V and 5 V at each sampling time in order to generate a square wave. The API provided by the DAQ's manufacturer was used under Windows, while the API provided by the Comedi project [12] was used under Linux and Linux/RTAI. To perform the test, the AO channel is wired to the AI channel and an oscilloscope is connected to the AO channel. The oscilloscope waveform is compared to the AI data record to ensure that the application is running correctly.

Table 1. Test summary of the DAQ case.

δtd      P(k)            J(k)            U(k)            OS           Outliers
200 ms   13.70 × 10⁻²    97.09 × 10⁻³    96.70 × 10⁻³    Windows XP   0%
100 ms   82.87 × 10⁻²    58.64 × 10⁻²    58.56 × 10⁻²    Windows XP   0%
80 ms    14.90 × 10⁻²    10.97 × 10⁻²    10.09 × 10⁻²    Windows XP   0.06%
60 ms    12.94 × 10⁻¹    91.55 × 10⁻²    91.49 × 10⁻²    Windows XP   0.09%
200 ms   56.96 × 10⁻⁴    56.96 × 10⁻⁴    57.01 × 10⁻⁷    Linux        0%
100 ms   90.05 × 10⁻⁵    90.05 × 10⁻⁵    90.14 × 10⁻⁸    Linux        0%
80 ms    69.37 × 10⁻³    69.37 × 10⁻³    69.44 × 10⁻⁷    Linux        0%
60 ms    17.34 × 10⁻²    17.34 × 10⁻²    17.35 × 10⁻⁵    Linux        0.98%
200 ms   25.68 × 10⁻⁴    25.67 × 10⁻⁴    12.63 × 10⁻⁶    Linux/RTAI   0%
100 ms   44.24 × 10⁻⁴    44.24 × 10⁻⁴    87.22 × 10⁻⁷    Linux/RTAI   0%
80 ms    60.00 × 10⁻⁴    60.00 × 10⁻⁴    64.13 × 10⁻⁷    Linux/RTAI   0%
60 ms    14.29 × 10⁻³    14.29 × 10⁻³    71.75 × 10⁻⁶    Linux/RTAI   0%
200 µs   37.06 × 10⁻²    37.06 × 10⁻²    64.40 × 10⁻³    Linux/RTAI   0%
100 µs   80.00 × 10⁻²    80.00 × 10⁻²    14.32 × 10⁻⁴    Linux/RTAI   0%
80 µs    90.06 × 10⁻²    90.06 × 10⁻²    15.87 × 10⁻⁴    Linux/RTAI   0.02%
60 µs    12.63 × 10⁻¹    12.63 × 10⁻¹    24.86 × 10⁻⁴    Linux/RTAI   0.05%

3.2. PLC case

Two PLCs from different manufacturers are used to analyze the sampling time behavior based on the "classic" OPC client/server communication model: a Siemens S7-300 and a Rockwell CompactLogix 1769-L32E. An Industrial Ethernet network connects one PLC at a time to the same PC used in the DAQ case. A free trial version of the TopServer OPC server [13] was used for communicating with both PLCs; we decided to use the same third-party OPC server for both PLCs instead of the native OPC server provided by each PLC manufacturer. A custom OPC client was developed in the C language as a multithreaded software application similar to the one used in the DAQ case, but it was deployed only on Microsoft Windows XP, since the "classic" OPC communication model is tied to the Microsoft COM (Component Object Model). As in the DAQ case, the first thread deals with user input data such as the sampling time interval, the OPC server name, and the OPC item names. The second thread is responsible for recording the timestamp provided by the operating system, the OPC data values read from the PLC, and the index P into disk files. Finally, the third thread is quite analogous to its DAQ counterpart, except that at each time interval it reads one data item from the OPC server, obtains the timestamp, calculates P(k), and requests the OPC server to write another data item to the PLC. The data item written to the PLC uses the integer format and, as in the DAQ case, has its value switched between 1 and 5 at each sampling time. It is worth mentioning that in the "classic" OPC communication model every data item is composed of the data value itself, a timestamp, and a quality indication. Therefore, a data item read from the OPC server already includes a timestamp, but we use our own timestamp computed by the software application in order to properly compare it with the DAQ case. In any case, our tests indicate that the difference between the OPC timestamp and the application-computed timestamp is negligible.
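Both custom applications (the DAQ version and the OPC client) share the periodic-thread structure described above: wait for the next deadline, read the process variable, timestamp the sample, update P(k), and write the toggled output. The sketch below illustrates that structure under plain Linux only; it uses POSIX clock_nanosleep with an absolute deadline, and the I/O helpers are hypothetical placeholders for the Comedi/NI-DAQ, OPC or Modbus calls used by the actual applications, which rely on each OS's native timer services.

```c
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <time.h>

/* Placeholders for the board/driver specific calls (hypothetical names). */
static double read_analog_input(void)       { return 0.0; }
static void   write_analog_output(double v) { (void)v;    }

static void timespec_add_ns(struct timespec *t, long ns)
{
    t->tv_nsec += ns;
    while (t->tv_nsec >= 1000000000L) { t->tv_nsec -= 1000000000L; t->tv_sec++; }
}

int main(void)
{
    const long period_ns = 200L * 1000000L;   /* desired delta t_d = 200 ms   */
    struct timespec next, now;
    double out = 1.0;                         /* square wave: 1 V <-> 5 V     */

    clock_gettime(CLOCK_MONOTONIC, &next);
    for (int k = 0; k < 10; k++) {
        timespec_add_ns(&next, period_ns);
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);

        clock_gettime(CLOCK_MONOTONIC, &now);  /* timestamp T(k)              */
        double ai = read_analog_input();       /* AI (or OPC/Modbus) read     */
        /* ... update the performance index P(k) here (see Section 2) ...     */
        out = (out < 3.0) ? 5.0 : 1.0;         /* toggle the output value     */
        write_analog_output(out);

        printf("k=%d t=%ld.%09ld ai=%g\n", k, (long)now.tv_sec, now.tv_nsec, ai);
    }
    return 0;
}
```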

In addition, the Rockwell PLC was also connected to the computer through an RS-232-C serial line using the Modbus RTU protocol at 19 200 bps. A custom communication driver was written to implement the Modbus RTU protocol using a third-party open source library [14], and the periodic thread of the application developed for the DAQ case was then modified to read and write Modbus holding registers from/to the PLC using the communication driver commands. A simple program was developed on both PLCs using the FBD (Function Block Diagram) language. At each scan, the PLC program moves the data item written either by the OPC server or by the Modbus communication driver to the memory address from which the OPC server or the Modbus communication driver subsequently reads the data.

4. Experimental results

Several tests were performed using the experimental apparatus described in Section 3, and typical results are presented in the following sections.

4.1. DAQ case

A sequence of tests was performed by reducing δtd until the occurrence of outliers. Each test calculated the performance index P(k) over a period of 10 min using β = 0.998 and γ = 0.1. Table 1 shows a summary of these tests performed with different δtd, including the final values of P(k), J(k) and U(k). The last column shows the percentage of samples in which outliers occurred.

By analyzing Table 1 it is possible to observe that the final P(k) value is different for each test. For both Linux OSes, the jitter J(k) has greater influence than the systematic error U(k) over the index P(k), whereas for the Windows OS the jitter J(k) and the systematic error U(k) have almost the same influence over the index P(k). Comparing the final value of P(k) and the outlier occurrence for the same δtd, the Linux OSes offer better results than the Windows OS, except for the outliers when δtd = 60 ms. The use of a real-time extension in the Linux OS allows δtd to reach values of the order of microseconds, while without it δtd can only reach the range of milliseconds. Also, as expected, outliers begin to appear as δtd decreases. In this regard, it is important to note that the DAQ board offers a maximum sampling rate of 250 kS/s (kilosamples per second) per channel, corresponding to 4 µs as the minimum sampling time interval, but additional tests showed that the software application's periodic thread takes approximately 33 µs to execute. Finally, the minimum δtd reached without outliers is approximately 100 µs, and only when using Linux/RTAI.
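As a consistency check of Eq. (11), consider the Windows XP row of Table 1 with δtd = 200 ms: since no outliers occurred, S(k) ≈ 0 and

$$ P(k) \approx \sqrt{(97.09 \times 10^{-3})^2 + (96.70 \times 10^{-3})^2} \approx 13.70 \times 10^{-2}, $$

which matches the tabulated value.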


Table 2. Test summary of the PLC case.

δtd (ms)   P(k)     J(k)     U(k)      PLC        Protocol   Outliers
200        2.961    2.959    0.0961    Siemens    OPC        0%
100        7.663    7.640    0.5923    Siemens    OPC        0.15%
80         8.167    8.023    0.0815    Siemens    OPC        33.2%
60         10.24    10.11    0.9212    Siemens    OPC        28.7%
200        3.262    3.261    0.1041    Rockwell   OPC        0%
100        7.768    7.746    0.0582    Rockwell   OPC        0.81%
80         7.761    7.697    0.1224    Rockwell   OPC        33.3%
60         10.94    10.49    0.9197    Rockwell   OPC        31.2%
200        2.299    2.298    0.002     Rockwell   Modbus     0%
100        8.447    8.258    1.780     Rockwell   Modbus     11.8%

Table 3. Test summary of validation.

           Linux/RTAI   Windows XP
P(k)       0.0192       10.494
J(k)       0.0192       7.4216
U(k)       0            7.4197
IAEpi      19.593       20.441
TVpi       19.463       19.641
IAEδt      0.0630       26.386
TVδt       0.1221       0.7211

Fig. 1. Performance index for δtd = 200 ms using Windows XP, Linux and Linux/RTAI.

Fig. 1 shows the complete evolution of the P(k) performance index for δtd = 200 ms. In these tests, for each OS tested, the P(k) values vary slightly around the same value. However, for δtd = 60 ms, as can be seen in Fig. 2(a), P(k) increases when s(k) ≠ 0, that is, when outliers occur. These outliers are highlighted by dotted lines in Fig. 2(b), which shows the waveform read by the application. It is important to notice that when an outlier occurs the performance index increases instantly and then decreases, as defined by Eq. (8), as the sampling time returns to the range denoted by the γ parameter.

Fig. 2. (a) Performance index for δtd = 60 ms and (b) waveform when s(k) ≠ 0.

4.2. PLC case

Table 2 shows a summary of the tests performed with the PLCs through OPC communications. As in the DAQ case, each test calculated the performance index P(k) over a period of 10 min using β = 0.998 and γ = 0.1. By analyzing the OPC results in Table 2 it is possible to observe similar behavior for both PLCs, even though they come from different manufacturers and use OPC servers that probably differ only in the communication protocol of each PLC. The index P(k) increases as δtd decreases, and the outlier occurrence grows significantly for δtd smaller than 100 ms. The same behavior can be seen in Fig. 3, which shows the evolution of the P(k) performance index during the last minute of the test for both PLCs. From that figure it can be seen that for δtd ≥ 100 ms the P(k) values show little variation; for δtd ≤ 80 ms, in contrast, the P(k) values vary significantly due to the occurrence of outliers. Additional tests show that the software application's periodic thread takes approximately 23.4 ms to run for both PLCs.

Fig. 3. (a) Performance index for different δtd for the Siemens PLC and (b) for the Rockwell PLC.

Regarding the use of the Modbus protocol, the software application's periodic thread takes almost 100.0 ms to run. Because of that, tests were executed using only δtd = 200 ms and δtd = 100 ms. Table 2 shows that the P(k) values obtained with the Modbus protocol are similar to those obtained in the OPC tests.

4.3. Validation

To validate the usefulness of the P(k) index, we set up a simple experimental apparatus in which a custom computer software application runs a PI controller algorithm to control the speed of a servo motor in a pilot plant. To do so, the software application used in Section 3.1 was modified to include the implementation of the PI controller in its periodic thread, and the computer's DAQ card was connected to the pilot plant. During the experiments, the AO channel is wired to the motor drive velocity command as the manipulated variable (MV), the AI channel is wired to the velocity sensor measurement as the process variable (PV), and the setpoint (SP) is provided by the software application. The PI controller was then executed under Linux/RTAI and under Windows, taking care to apply the same SP changes at the same time intervals for both OSes. Fig. 4(a) and (c) show the PI controller response to SP changes under the Linux/RTAI and Windows OSes, while Fig. 4(b) and (d) show the time evolution of the corresponding P(k) index. Table 3 shows the final values of P(k), J(k) and U(k).

By just visually inspecting Fig. 4, one could think that the PI controller performances are virtually the same, despite the corresponding P(k) values being better for Linux/RTAI. Therefore, for a deeper quantitative analysis, we computed two other commonly used metrics to evaluate the PI controller performance: the integral of the absolute value of the error (IAE) and the total variation (TV) [15]. The IAE is defined in its discrete form as

$$ IAE_{pi} = \sum_{k=1}^{N} |sp(k) - pv(k)|\, \delta t_m(k), \tag{12} $$

where sp(k) and pv(k) are, respectively, the setpoint (SP) and the process variable (PV) at each sampling time k. The TV, in turn, measures the required control effort and is calculated as

$$ TV_{pi} = \sum_{k=1}^{N} |mv(k+1) - mv(k)|, \tag{13} $$

where mv(k) is the manipulated variable (MV) at each sampling time k. As is customary, before applying Eqs. (12) and (13) the PV, MV and SP values were normalized to the range [0, 1]. The results are shown in Table 3. One can see that IAEpi and TVpi are, respectively, about 4% and 1% better for Linux/RTAI than for Windows, indicating a better control system performance for smaller values of P(k).

An additional way of validating the P(k) index would be to compare it with other equivalent performance indexes but, to the best of our knowledge, no other index similar to P(k), considering the continuous assessment of the sampling rate accuracy, has been discussed in the literature. So, we apply Eq. (12) to the difference between the desired and the measured sampling time intervals and, similarly, Eq. (13) to the measured sampling time interval:

$$ IAE_{\delta t} = \sum_{k=1}^{N} |\delta t_d(k) - \delta t_m(k)|\, \delta t_m(k) \tag{14} $$

and

$$ TV_{\delta t} = \sum_{k=1}^{N} |\delta t_m(k+1) - \delta t_m(k)|. \tag{15} $$
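Computed offline over the logged samples, Eqs. (12)–(15) reduce to two simple accumulations. The helper functions below are an illustrative sketch with hypothetical names: iae(sp, pv, dt_m, n) and tv(mv, n) give IAEpi and TVpi, while iae(dt_d_arr, dt_m, dt_m, n) and tv(dt_m, n) give IAEδt and TVδt, with the PV, MV and SP arrays assumed already normalized to [0, 1] as described above.

```c
#include <math.h>
#include <stddef.h>

/* Eq. (12)/(14): integral of the absolute error, weighted by dt_m(k). */
double iae(const double *ref, const double *meas, const double *dt_m, size_t n)
{
    double acc = 0.0;
    for (size_t k = 0; k < n; k++)
        acc += fabs(ref[k] - meas[k]) * dt_m[k];
    return acc;
}

/* Eq. (13)/(15): total variation of a signal (control effort / smoothness). */
double tv(const double *x, size_t n)
{
    double acc = 0.0;
    for (size_t k = 0; k + 1 < n; k++)
        acc += fabs(x[k + 1] - x[k]);
    return acc;
}
```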

Therefore, Eq. (14) corresponds to a kind of integral of the absolute error of the sampling rate, and Eq. (15) to the smoothness of the sampling time variation. In both cases the ideal result is zero, and their actual values are shown in Table 3.

4.4. Discussion

From Tables 1 and 3 and from Figs. 1, 2 and 4 it is clear that all metrics are better for Linux/RTAI, since in all cases the lower the performance index values, the better the performance. This is an expected result, since a real-time OS or an OS with a hard real-time extension, such as Linux/RTAI, usually has much more accurate timer interrupt services than those of a general-purpose OS and, consequently, offers a more accurate sampling time. In this regard, the timer services of the Windows OS are limited to millisecond resolution and, in addition, show deviations of up to 15.6 ms [16], so that the resulting jitter and outlier effects are significantly higher for this OS than for Linux and Linux/RTAI. On the other hand, the timing services of Linux/RTAI are based on the computer's 8254 timer or on the APIC timer if present, which, in either case, provides microsecond resolution. As a final remark, it is worth noting that the Linux/RTAI performance has been benchmarked against traditional RTOSes such as VxWorks and QNX with excellent results [17,18]. As a particular example, in Table 3 both indexes P(k) and IAEδt for Linux/RTAI are approximately 0.2% of the corresponding Windows values. So, our tests not only validate the proposed metric but also confirm that the data acquisition system is much better when using a real-time OS or an OS with a hard real-time extension.

The experimental tests also showed that, for the same δtd, P(k) reached lower values for the DAQ case than for the PLC case, as indicated in Tables 1 and 2. This is because, regardless of the OS used, a DAQ board plugged directly into the computer bus requires less time to process a data acquisition request, as there is no network traffic in this case. In addition, specifically in the DAQ board case, the overall performance index is better for Linux than for Windows, even though Linux presents a higher number of outliers when the sampling time is set to 60 ms.


Fig. 4. (a) Controller results using Linux/RTAI. (b) Performance results for Linux/RTAI. (c) Controller results using Windows. (d) Performance results for Windows.

Finally, it should be noted that the apparently small improvement in control system performance indicated by the IAEpi and TVpi indexes, when compared to the IAEδt and TVδt indexes associated with the sampling time, is due to the robustness of the PI controller. Also, during most of the test time pv(k) stays at sp(k), requiring a smaller control effort, since the pilot plant has a stable open-loop response.

5. Conclusions

In this paper we developed a sampling time performance index P(k) that is capable of quantifying the sampling time behavior of a data acquisition system, considering its typical sources of error such as jitter, bias, and outliers. The index behavior is such that the lower its value, the better. An experimental apparatus corresponding to two common control and automation architectures, namely the use of a DAQ board in the computer and communication with a networked PLC through either OPC or serial Modbus RTU communications, was set up in order to test and validate the performance index. The results obtained showed that P(k) worsens when the sampling time decreases and that it reaches its lowest values with the use of an OS with a hard real-time extension. Therefore, the timer interrupt services provided by the OS are one of the key features to be considered in data acquisition systems, and the P(k) index can point out precisely the influence of those services on the sampling time. Furthermore, the P(k) index can also show exactly the times at which outliers occur.

Therefore, this index can be a useful tool not only to track the real-time behavior of the sampling time but also to assist the decision-making process when comparing different control and automation architectures.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

[1] Jelali M. An overview of control performance assessment technology and industrial applications. Control Eng Pract 2006;14(5):441–66.
[2] Samad T, McLaughlin P, Lu J. System architecture for process automation: Review and trends. J Process Control 2007;17(3):191–201.
[3] Mahmoud MS, Sabih M, Elshafei M. Using OPC technology to support the study of advanced process control. ISA Trans 2015;55:156–67.
[4] Frederik J, Kröger L, Gülker G, van Wingerden J-W. Data-driven repetitive control: Wind tunnel experiments under turbulent conditions. Control Eng Pract 2018;80:105115.
[5] Tahir F, Mercer E, Lowdon I, Lovett D. Advanced process control and monitoring of a continuous flow micro-reactor. Control Eng Pract 2018;77:225–34.
[6] Laware A, Talange D, Bandal V. Evolutionary optimization of sliding mode controller for level control system. ISA Trans 2018;83:199–213.
[7] Lorenzini C, Bazanella AS, Pereira LFA, da Silva GR. The generalized forced oscillation method for tuning PID controllers. ISA Trans 2018. http://dx.doi.org/10.1016/j.isatra.2018.11.014.
[8] Marieska MD, Kistijantoro AI, Subair M. Analysis and benchmarking performance of real time patch Linux and Xenomai in serving a real time application. In: International conference on electrical engineering and informatics. 2011.
[9] Chen T, Guo Q, Temam O, Wu Y, Bao Y, Xu Z, Yunji C. Statistical performance comparisons of computers. IEEE Trans Comput 2015;64(5):1442–55.
[10] Clarke DW. Self-tuning control. In: Levine WS, editor. The control handbook. IEEE Press; 1996, p. 827–46.
[11] RTAI Real Time Application Interface (2016). URL https://www.rtai.org.
[12] COMEDI the Linux control and measurement device interface (2016). URL http://www.comedi.org/hardware.html.
[13] TopServer Software Toolbox (2017). URL https://www.softwaretoolbox.com/topserver/.
[14] libmodbus: A Modbus library for Linux, Mac OS X, FreeBSD, QNX and Win32 (2019). URL https://libmodbus.org.
[15] Chen D, Seborg DE. PI/PID controller design based on direct synthesis and disturbance rejection. Ind Eng Chem Res 2002;41:4807–22.
[16] Grobler J, Kourie D. Design of a high resolution soft real-time timer under a Win32 operating system. In: Proceedings of the annual research conference of the South African institute of computer scientists and information technologists on IT research in developing countries. 2005. URL https://dl.acm.org/citation.cfm?id=1145700.
[17] Aroca RV, Caurini G. A real time operating systems (RTOS) comparison. In: Proceedings of the XXIX Brazilian computing society congress. 2009. URL http://csbc2009.inf.ufrgs.br/anais/pdf/wso/st04_03.pdf.
[18] Barbalace A, Luchetta A, Manduchi G, Moro M, Soppelsa A, Taliercio C. Performance comparison of VxWorks, Linux, RTAI, and Xenomai in a hard real-time application. IEEE Trans Nucl Sci 2008;55:435–9.