Optimal Sensor Selection for Active Fault Diagnosis using Test Information Criteria ⁎

12th IFAC Symposium on Dynamics and Control of Process Systems, including Biosystems
Florianópolis - SC, Brazil, April 23-26, 2019

Available online at www.sciencedirect.com

ScienceDirect

IFAC PapersOnLine 52-1 (2019) 382–387
Optimal Sensor Selection for Active Fault Diagnosis using Test Information Criteria ⁎

Palmer, Kyle A. ∗  Bollas, George M. ∗

∗ Department of Chemical and Biomolecular Engineering, University of Connecticut, 191 Auditorium Road, Unit 3222, Storrs, CT, 06269-3222, USA (e-mails: [email protected], [email protected])

⁎ This work was sponsored by the United Technologies Corporation Institute for Advanced Systems Engineering (UTC-IASE) of the University of Connecticut. Any opinions expressed herein are those of the authors and do not represent those of the sponsor.

Abstract: A method for the optimal selection of sensors in active fault detection and isolation tests is presented. The usefulness of test information is improved using parametric sensitivities derived by a digital twin of the system. In this model-based approach, inputs are manipulated to maximize output sensitivities with respect to faults and minimize the joint confidence region between faults and other uncertain parameters. Two criteria are used to select optimal sensors based on information theory. The optimal selection is dependent on measurement precision, as shown in a case study of a virtual three-tank system subject to multiple faults.

2405-8963 © 2019, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.

Keywords: Active Fault Diagnosis, Fault Isolation, Classification, Optimal Experiment Design, Information Criterion

1. INTRODUCTION

Active fault diagnosis configures input settings to increase the usefulness of system information to enhance the detection and isolation of faults (Isermann, 2005). System uncertainty, measurement noise, and unknown environmental factors can impact the reliability of fault diagnosis. Therefore, it is imperative to select active test designs that can maximize the evidence of faults on system outputs while accounting for the effects of uncertainty. In most approaches, the optimization criteria for active FDI test design take into account detection accuracy and robustness against uncertainty. In Mesbah et al. (2014), probabilistic uncertainties were considered in system input designs using polynomial chaos expansion. In Davoodi et al. (2014), simultaneous fault detection and control is implemented using mixed H∞/H− to maximize the sensitivity of the fault detection while minimizing the effect of disturbances. This linear matrix inequality method is an effective on-line approach, albeit restricted by model size and linearity. Time and energy costs can also be important in determining an effective test design for active FDI. Real-time optimization often requires a balance between on-site computational capacity and the demand by the FDI technique for robust detection (Šimandl et al., 2005), which is especially challenging for nonlinear systems. Therefore, in complex, nonlinear systems it may be more appropriate to generate optimal test designs off-line, with increased computational capability, allowing for an optimal selection of test conditions to identify faults in the presence of system uncertainty.

In Palmer et al. (2016) and Palmer et al. (2018), an off-line method was presented that selects optimal inputs for fault diagnosis to improve FDI reliability, wherein sensitivities of outputs with respect to system faults and uncertain system inputs or parameters are optimized. Output sensitivities contain test information that reflects the confidence in relevant parameter estimates. This information is referred to as Fisher Information, often used in methods of information extraction and optimization for parameter identification (Han et al., 2016a,b). Here, we start with the implementation of an FDI design criterion based on the work of Palmer and Bollas (2018b) that improves fault identifiability while accounting for system uncertainty. This method is extended to include the selection of sensors that optimize FDI test precision.

The selection of sensors and sensor locations for system identification has been studied by Joshi and Boyd (2009). In Maul et al. (2008), a subset of sensors was chosen from the available sensors that maximize fault diagnosis accuracy within cost constraints. However, efforts to implement sensor selection and active test design selection simultaneously in fault diagnosis have been sparse (Patan and Ucinski, 2010). In this work, we use an integrated approach to generate robust FDI tests to ensure that the optimally selected inputs and outputs result in consistent and successful diagnoses of anticipated system faults. Sensor selection is viewed as a model selection problem where the number and type of outputs observed are selected to increase precision and reduce information redundancy. In active FDI, deviations between the expected and observed outputs need to be minimized at anticipated fault scenarios. Well-known measures have been used to represent such deviation, including Kullback-Leibler Divergence (KLD) (Kullback and Leibler, 1951) and Fisher Information Distance (FID) (Costa et al., 2015). In this work, we propose a sensor selection process that takes into account a variable number of sensors and their subsets. The optimal test design and sensor set are calculated for fault diagnosis using Fisher Information for each number of sensors considered, and the sensors that result in the best goodness-of-fit

Copyright © 2019 IFAC. Peer review under responsibility of International Federation of Automatic Control. 10.1016/j.ifacol.2019.06.092


according to selected information criteria are determined. The proposed test designs and sensor sets are verified using correct classification via principal component analysis (PCA) and the k-nearest neighbor algorithm (k-NN), as presented in Najjar et al. (2016). The method is validated against a virtual three-tank system that is commonly used in fault detection and control design (Mesbah et al., 2014). Case studies using this system compare the accuracy of the fault diagnosis conducted at nominal and optimal test designs using different sensor sets.

2. METHOD

Active FDI tests are designed using models written as a set of differential algebraic equations, denoted as f in (1), where x(t) is the Nx × 1 vector of system states, u(t) is the Nu × 1 vector of inputs, θ is the Nθ × 1 vector of model parameters, and t is the time:

    f(ẋ(t), x(t), u(t), θ, t) = 0.    (1)

The estimated outputs of the system that are selected for fault diagnosis, ŷ(t), are compiled into an Ny × 1 vector:

    ŷ(t) = h(x(t), u(t), θ).    (2)

The initial conditions, y0, are expressed as:

    y0 = { f(ẋ(t0), x(t0), u(t0), θ, t0) = 0,
           ŷ(t0) = h(x(t0), u(t0), θ).    (3)
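As a concrete, entirely illustrative instance of the model form in (1)–(3), a three-tank system of the kind used in the case study can be sketched in simulation form. The geometry, valve coefficients, and flow rates below are hypothetical placeholders, not values from the paper, and a forward-Euler loop stands in for a proper DAE solver:

```python
import numpy as np

# Illustrative three-tank model in the form f(x_dot, x, u, theta, t) = 0,
# solved here explicitly as x_dot = g(x, u, theta). All numeric values
# are hypothetical placeholders, not the paper's parameters.
A = 0.0154      # tank cross-sectional area [m^2] (assumed)
G = 9.81        # gravitational acceleration [m/s^2]

def x_dot(x, u, theta):
    """Level dynamics of three coupled tanks (Torricelli outflow)."""
    h1, h2, h3 = np.maximum(x, 0.0)      # guard against negative levels
    q_in1, q_in2 = u                     # controllable inlet flows [m^3/s]
    c12, c23, c0 = theta                 # valve coefficients; faults would shrink these
    q12 = c12 * np.sign(h1 - h3) * np.sqrt(2 * G * abs(h1 - h3))
    q23 = c23 * np.sign(h3 - h2) * np.sqrt(2 * G * abs(h3 - h2))
    q_out = c0 * np.sqrt(2 * G * h2)
    return np.array([(q_in1 - q12) / A,
                     (q_in2 + q23 - q_out) / A,
                     (q12 - q23) / A])

def h_out(x, sensor_set):
    """Output map y = h(x, ...): read only the levels in the sensed subset."""
    return x[list(sensor_set)]

def simulate(x0, u, theta, dt=1.0, n_steps=600):
    """Forward-Euler integration as a stand-in for a DAE solver."""
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(n_steps):
        x = x + dt * x_dot(x, u, theta)
        traj.append(x.copy())
    return np.array(traj)

traj = simulate(x0=[0.4, 0.2, 0.3], u=(1e-4, 1e-4), theta=(3e-5, 3e-5, 4e-5))
y = h_out(traj[-1], sensor_set=(0, 2))   # e.g. level sensors on tanks 1 and 3
```

Here `h_out` plays the role of h(·) in (2) restricted to one candidate sensor set; swapping `sensor_set` changes the model structure seen by the selection criteria described later.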

Let vectors u and θ have known and unknown elements, where the known parameters and inputs are predetermined system characteristics that may be variables subject to control. The remaining elements are divided into uncertain parameters and inputs that represent anticipated faults or system uncertainty. The parameter vector, θ, is partitioned into three subvectors that represent faults, θf, system uncertainty not caused by faults, θu, and the parameters relating to known system characteristics, θp. The input trajectories are also split into subvectors, representing the inputs that are uncertain, uu, and the system inputs that depend on system design and can be adjusted to improve the FDI test design, up. The system uncertainty is assumed to have a random distribution that is known, bounded and not dependent on the FDI test design. The subvectors representing the faults and the system uncertainty not caused by faults are grouped together into a Nξ × 1 vector, ξ:

    ξ = [ξf, ξu] = [θf, θu, uu].    (4)
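Normalized output sensitivities with respect to the elements of ξ are what the Fisher Information Matrix below is built from. As a simple stand-in for forward sensitivity analysis, they can be approximated by finite differences; the two-output map `y_hat` here is a hypothetical placeholder for the simulated model outputs, not the paper's model:

```python
import numpy as np

def y_hat(xi):
    # Hypothetical 2-output map standing in for simulated outputs y_hat(xi).
    return np.array([xi[0] * np.sqrt(xi[1]), xi[0] + xi[1] ** 2])

def sensitivities(y_fun, xi_tilde, rel_step=1e-6):
    """Normalized sensitivities Q[i, k] = (xi_k / y_i) * d y_i / d xi_k,
    approximated at the anticipated values xi_tilde by forward differences
    (a simple surrogate for forward sensitivity analysis)."""
    xi_tilde = np.asarray(xi_tilde, dtype=float)
    y0 = y_fun(xi_tilde)
    Q = np.zeros((y0.size, xi_tilde.size))
    for k, xk in enumerate(xi_tilde):
        dx = rel_step * max(abs(xk), 1.0)
        xp = xi_tilde.copy()
        xp[k] += dx
        Q[:, k] = (y_fun(xp) - y0) / dx * xk / y0   # normalized (scale-free)
    return Q

Q = sensitivities(y_hat, xi_tilde=[2.0, 4.0])
```

For a dynamic test, each output would contribute one such row per time sample, giving the Nsp × Nξ matrix described in the text.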

A predefined (obtained prior to test design selection) set of anticipated fault and uncertainty values, ξ̃, provides best estimates of these parameters in each scenario. The method used to calculate the optimal test design, ϕ∗, is based on the Fisher Information Matrix (FIM) in the neighborhood of ξ̃ for a particular test design, denoted as Hξ. Hξ is used in statistics to determine the criteria for the selection of experimental settings. The FIM is defined as:

    Hξ = Σ_{i=1}^{Ny} Σ_{j=1}^{Ny} σij⁻² Qiᵀ Qj,    (5)

where Qi is the sensitivity matrix of the i-th output, and σij² is the measurement variance of the i-th and j-th outputs. It is assumed that the outputs are subject to


measurement noise with uncorrelated, random distribution that is zero-mean and Gaussian with known variance, σ 2 . The sensitivities of the outputs with respect to the faults and uncertain parameters and inputs are obtained through normalized partial derivatives that are approximated at ξ˜ using forward sensitivity analysis. The sensitivities obtained from a dynamic test are compiled into a Nsp × Nξ matrix, where Nsp is the number of time samples and Nξ is the number of elements in ξ. The process inputs adjustable for FDI, up , are treated as a piece-wise constant series of discrete inputs that change at predetermined time points. These inputs are adjusted to determine the optimal test design. The overall time span, τ , number of tests, Ntest , and initial system conditions, y0 are generally considered for test design optimization and are therefore compiled into the test design vector, ϕ, which is constrained by the test design space, Φ: (6) ϕ = [up , tsp , Ntest , y0 , τ ] ∈ Φ, where up is a Nup × Ntest matrix and Φ contains the lower and upper bounds of each variable in the test design vector. A scalar measure of the Fisher Information Matrix is used as the objective of the optimization problem. The Ds optimal criterion from Parker and Gennings (2008) aims to minimize the volume of a subset of the parameter confidence region. Specifically, it maximizes the sensitivities of outputs with respect to selected parameters of interest i.e. faults, while minimizing the joint confidence region between parameters of interest and other uncertain parameters deemed to be nuisance parameters i.e. system uncertainty. The Ds -optimal design was chosen for its performance in FDI compared to other criteria for uncertain systems, as shown in Palmer and Bollas (2018a). The Ds optimal criterion (7) is written as:   ˜ ϕ) ϕ∗ ∈ arg max ΨDs Hξ (ξ, ϕ∈Φ   Hξ  , = arg max  ϕ∈Φ Huu  s.t. 
˜ t) = 0, ˙ f (x(t), x(t), up (t), θp , ξ, (7) ˜ t), ˆ (t) = h(x(t), up (t), θp , ξ, y  ˜ t0 ) = 0, ˙ 0 ), x(t0 ), up (t0 ), θp , ξ, f (x(t y0 = ˜ t0 ), ˆ (t0 ) = h(x(t0 ), up (t0 ), θp , ξ, y U uL p ≤ up (t) ≤ up , L

U

x ≤ x(t) ≤ x ,

∀t ∈ [0, τ ],

∀t ∈ [0, τ ],
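As a concrete illustration, the assembly of the FIM from output sensitivities and the evaluation of the Ds criterion in (7) can be sketched as below. The dimensions, the random sensitivity matrix, and the split into fault and nuisance parameters are hypothetical stand-ins for the quantities defined above, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensitivity matrix Q (N_sp time samples x N_xi parameters);
# the first n_f columns correspond to faults, the rest to nuisance parameters.
N_sp, n_f, n_u = 1500, 3, 3
Q = rng.standard_normal((N_sp, n_f + n_u))
sigma2 = 0.01 ** 2                      # known measurement variance

# FIM for one output under i.i.d. zero-mean Gaussian noise: H = Q'Q / sigma^2
H = Q.T @ Q / sigma2

# Ds criterion of (7): |H| / |H_uu|, with H_uu the nuisance-parameter block
H_uu = H[n_f:, n_f:]
psi_Ds = np.linalg.det(H) / np.linalg.det(H_uu)
```

In the paper this scalar is evaluated over the admissible input sequences in Φ and extremized; here it is computed once for a fixed sensitivity matrix.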

where Huu is the submatrix of the FIM that corresponds to the joint confidence between parameters and inputs representing system uncertainty. After the optimal test designs have been computed, a sensor selection criterion needs to be chosen. Each sensor set considered for FDI generates a unique set of output trajectories, ŷ. The system model structure depends on the sensors chosen for data collection. Thus, the optimal sensor set problem can be cast as a model selection problem, in which each model structure generates different probabilities and selection criteria scores that reflect the relative quality of information from each sensor. Common model selection criteria require comparisons of predicted


and measured data for all sensors. In this work, the measured data are generated by (8), using the system model with noise in the uncertain parameters and measurements:

y^m_j,i = ŷ^m(ξf,j, ξu,i) + wy,i,   j = 0, . . . , Nf,  i = 1, . . . , Nsp,     (8)

where y^m_j,i is the synthetic data from model output ŷ^m at fault scenario j for the i-th sampling point, wy,i is the measurement noise at the i-th sampling point, ξf,j is the vector of parameters and inputs representing fault scenario j, and ξu,i is the vector of parameters and inputs representing the uncertainty of the i-th sampling point. ξf,j and ξu,i are assumed to have a random distribution that is bounded and known for all fault scenarios. Each model structure m with Ns sensed outputs is affected differently by the uncertainty of the system, generating a different model-system mismatch (y^m − ŷ^m). The output distributions of the system and the corresponding model are compared, taking into account system uncertainty and measurement noise. The optimal sensor set is considered to be the one with the least divergence between output distributions. Two criteria were selected based on well-known measures to achieve this optimality. The chosen criteria in this document are normalized forms of the KLD and FID, preferred for their ability to compare models comprehensively using observed and expected system outputs at different sample sizes (Rigollet, 2012).
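The synthetic data generation of (8) can be sketched as follows. The steady-state output function, the numerical values, and the distributions are hypothetical placeholders; only the structure (model output at a fault scenario, plus parameter uncertainty and measurement noise) follows the text:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical steady-state model output for illustration:
# y_hat(xi_f, xi_u), with xi_f a leak radius and xi_u a flow coefficient.
def y_hat(xi_f, xi_u):
    return 0.5 - 40.0 * xi_f**2 + 0.1 * (xi_u - 1.0)

N_sp = 1500
sigma_y = 0.01                        # measurement noise std (m)
xi_f = 2e-3                           # fault scenario: 2 mm leak radius
xi_u = rng.normal(0.95, 0.05, N_sp)   # uncertain flow coefficient per sample
w_y = rng.normal(0.0, sigma_y, N_sp)  # measurement noise per sample

# Eq. (8): synthetic measurement = model output under fault + uncertainty + noise
y_m = y_hat(xi_f, xi_u) + w_y
```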

The KLD, denoted as DKL, is an asymmetric measure reported in Kullback and Leibler (1951) that is strongly related to Fisher information. It describes the divergence between two distributions, P1 and P0, as shown in (9) for models M0 and M1. In this work, M0 represents the system model, or the "null model" for the purpose of optimal sensor selection, and M1 represents the observed data from the virtual system.

DKL(P1||P0) = Σ_{i=1}^{Nsp} P1(ξi) ln [ P1(ξi) / P0(ξi) ]     (9)

When the data of both models are considered to have Gaussian distributions, the KLD can be simplified to (10), where the measurements from M0 have mean vector µ0 with covariance matrix Σ0, and M1 has mean vector µ1 with covariance matrix Σ1:

DKL(P1||P0) = ½ [ tr(Σ1⁻¹ Σ0) + (µ1 − µ0)ᵀ Σ1⁻¹ (µ1 − µ0) − Ns + ln(det Σ1 / det Σ0) ].     (10)

It is assumed in this work that the mean vectors corresponding to M1 and M0 are equal, denoted as µ. The KLD is additive; therefore, comparing divergences based on different sampling sizes is meaningless unless the KLD is normalized (Rigollet, 2012):

DKL,N = (1/Ns) DKL.     (11)

The smallest DKL,N value corresponds to the best feasible sensor combination, and is used to determine the optimal number of sensors and the corresponding sensor set.

The second selection criterion is the FID, which represents the distance along the geodesic curve between two distributions. It is a measure closely related to the KLD that approaches √DKL as the distributions from M1 resemble those from M0 (Costa et al., 2015), although the FID is a symmetric measure, unlike the KLD. The FID, denoted as DF, is calculated from the densities ρ1 and ρ0 representing M1 and M0, respectively, as shown in (12) (Walker, 2016):

DF(ρ1, ρ0) = [ ∫Ξ ρ1(ξ) ( (∂ρ1(ξ)/∂ξ)/ρ1(ξ) − (∂ρ0(ξ)/∂ξ)/ρ0(ξ) )² dξ ]^(1/2).     (12)

Assuming the data have Gaussian distributions, (12) can be reduced to (13) (Costa et al., 2015):

DF(ρ1, ρ0) = √( ½ Σ_{i=1}^{m} (ln λi)² ),   λ = eig(Σ1⁻¹ Σ0).     (13)
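Under the Gaussian assumptions, (10), (11), and (13) reduce to a few lines of linear algebra. The sketch below uses hypothetical covariance matrices for three sensed outputs with equal means, and also illustrates the symmetry of the FID, in contrast to the asymmetric KLD:

```python
import numpy as np

def kld_gaussian(mu0, Sigma0, mu1, Sigma1):
    """Eq. (10): KLD between Gaussian distributions P1 and P0."""
    Ns = len(mu0)
    S1_inv = np.linalg.inv(Sigma1)
    dmu = np.asarray(mu1) - np.asarray(mu0)
    return 0.5 * (np.trace(S1_inv @ Sigma0) + dmu @ S1_inv @ dmu - Ns
                  + np.log(np.linalg.det(Sigma1) / np.linalg.det(Sigma0)))

def fid_gaussian(Sigma1, Sigma0):
    """Eq. (13): FID from the eigenvalues of Sigma1^{-1} Sigma0."""
    lam = np.linalg.eigvals(np.linalg.inv(Sigma1) @ Sigma0).real
    return np.sqrt(0.5 * np.sum(np.log(lam) ** 2))

# Hypothetical output covariances for Ns = 3 sensors with equal means
mu = np.zeros(3)
S0 = np.diag([1.0, 1.0, 1.0])
S1 = np.diag([1.2, 0.9, 1.1])

# Normalized criteria, eqs. (11) and (14)
d_kl_n = kld_gaussian(mu, S0, mu, S1) / 3
d_f_n = fid_gaussian(S1, S0) / np.sqrt(3)
```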

A normalization similar to (11) is applied to the FID for a fair comparison of data from sensor sets of different size:

DF,N = (1/√Ns) DF.     (14)

Equations (1)-(14) are used to determine which sensor set is most useful through Algorithm 1. In Algorithm 1, a number of sensors, Ns, which cannot be greater than the number of available outputs, is assigned to the test design step. Optimal test designs are then calculated for the Ncomb feasible combinations of Ns sensors, compiled into a binary array, a. The calculated test design, ϕ*, is used to generate the expected outputs using the system model. These outputs are then used to compute a criterion value, ICm, based on DKL,N or DF,N for each fault scenario, so that excessive or noisy sensors are rejected. The results of this procedure are the sensor sets and test design that satisfy the information criteria.

Algorithm 1 Sensor selection using iterative test design optimization
 1: procedure Sensor Select(f, Φ)
 2:   m ← 1;
 3:   while m ≤ Ny do
 4:     Ns ← m;                               ▷ for model structure ym
 5:     Assign sensor set array, a, size Ncomb × Ny;
 6:     for i = 1, . . . , Ncomb do
 7:       Calc ϕ*i, ΨDs,i, see (7);
 8:     k = arg min_{i=1,...,Ncomb} ΨDs,i;
 9:     Calc ICm(ak, ϕ*k, y1, ŷ(ξ1), . . . , yNf, ŷ(ξNf));
10:     if m = 1 || ICm
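The enumeration of feasible sensor combinations in Algorithm 1 can be sketched with the standard library. The `optimal_test_design` function below is a hypothetical placeholder for the Ds-optimal design step of (7), returning random values purely for illustration:

```python
import numpy as np
from itertools import combinations

sensors = ['h1', 'h2', 'h3']
rng = np.random.default_rng(2)

def optimal_test_design(sensor_set):
    """Placeholder for the Ds-optimal design step in (7); returns a
    hypothetical design and criterion score for illustration only."""
    phi = rng.uniform(0.1, 1.0, size=2)   # two pump flow rates
    psi = rng.uniform(-0.5, 3.0)          # stand-in for ln Psi_Ds
    return phi, psi

# Enumerate every feasible combination of Ns = 1..3 sensors (7 in total)
results = {}
for Ns in range(1, len(sensors) + 1):
    for combo in combinations(sensors, Ns):
        phi, psi = optimal_test_design(combo)
        results[combo] = (phi, psi)

# Pick the combination with the smallest criterion score
best = min(results, key=lambda c: results[c][1])
```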

performed to randomly generate values assigned to the uncertain parameters and measurement noise for the virtual system training data. After the data are collected, classifiers are used to classify future test data to diagnose system faults. The data are consolidated by extracting the most informative features using PCA, as described by Najjar et al. (2016), for all anticipated fault scenarios. The transformed and compiled training data with known classes assigned to each set is denoted as X. The rows of X with the smallest distance, d, to the transformed output data, z, are compiled into the nearest-neighbors matrix, X*. The number of neighbors considered for classification, k, must be an odd integer and can be optimized for the highest rate of correct classification using the virtual system training data (Najjar et al., 2016). Algorithm 2 was used to determine which training data are the closest match to the test data.

Algorithm 2 Find k-Nearest Neighbors
1: procedure kNN Dist(Xi, zi, k)          ▷ k = number of neighbors
2:   for k' = 1 : k do
3:     if k' = 1 then
4:       q1 = arg min_{1≤l≤NMC} d(Xi,l, zi)
5:     else
6:       qk' = arg min_{1≤l≤NMC, l≠q1,...,qk'−1} d(Xi,l, zi)
7:   return [q1, q2, . . . , qk]          ▷ elements corresponding to the k nearest neighbors

The predicted class is determined according to which scenario has the greatest likelihood of being present based on the available data. This is achieved using the majority-vote approach:

ĉMV = arg max_{c∈{c0, c1, ..., cNf}} P(c).     (15)
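Algorithm 2 and the majority vote of (15) amount to a standard k-nearest-neighbors classifier. A minimal sketch, with toy two-dimensional features (e.g., after a PCA projection) and two hypothetical fault classes:

```python
import numpy as np
from collections import Counter

def knn_classify(X, classes, z, k=3):
    """Algorithm 2 + eq. (15): find the k nearest rows of training data X
    to the test point z and return the majority-vote class."""
    d = np.linalg.norm(X - z, axis=1)   # distances to every training row
    nearest = np.argsort(d)[:k]         # indices of the k nearest neighbors
    votes = Counter(classes[i] for i in nearest)
    return votes.most_common(1)[0][0]   # majority vote, eq. (15)

# Toy training data: two fault classes in a 2-D feature space
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [1.0, 1.0], [0.9, 1.0], [1.0, 0.9]])
classes = np.array([0, 0, 0, 1, 1, 1])
c_hat = knn_classify(X, classes, np.array([0.05, 0.05]), k=3)
```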

The rate of successful fault detection and isolation is computed to determine the success rate of the proposed FDI test design in each fault scenario. The correct classification rate is calculated to this effect, as shown in (16):

CCRj = (1/N^j_MC) Σ_{l=1}^{N^j_MC} { 1, if z(ξ^l_j) → ĉ = cj;  0, otherwise },     (16)

where N^j_MC is the total number of Monte Carlo runs for fault scenario j. As a single FDI quality measure, the total classification accuracy, ACC, was used to compare the results between the active FDI test designs at nominal and optimal test designs for different sensor sets. The ACC is the fraction of correct classifications out of all the Monte Carlo simulations performed for all fault scenarios.
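The CCR of (16) and the ACC can be computed directly from the per-scenario classification results. The sketch below uses hypothetical classifier outputs for four scenarios with five Monte Carlo runs each:

```python
import numpy as np

def ccr(predicted, true_class):
    """Eq. (16): fraction of Monte Carlo runs classified as the true class."""
    return np.mean(np.asarray(predicted) == true_class)

# Hypothetical predictions: scenario j -> list of predicted classes per run
preds = {0: [0, 0, 1, 0, 0],
         1: [1, 1, 1, 2, 1],
         2: [2, 2, 2, 2, 2],
         3: [3, 0, 3, 3, 3]}

ccrs = {j: ccr(p, j) for j, p in preds.items()}

# ACC: total correct classifications over all runs in all scenarios
n_correct = sum(int(np.sum(np.asarray(p) == j)) for j, p in preds.items())
n_total = sum(len(p) for p in preds.values())
acc = n_correct / n_total
```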

3. CASE STUDY: THREE-TANK BENCHMARK SYSTEM

A case study was performed to explore the effectiveness of the proposed methodology for fault diagnostics. The three-tank system has been used in various applications of control and fault detection studies (Mesbah et al., 2014). Figure 1 shows the general system architecture that was used. Three tanks with identical cross-section areas, A, are connected to each other and to pumps 1 and 2, with volumetric flow rates u1 and u2, respectively.

Fig. 1. Benchmark Three-Tank System. Legend: inputs (1, 2): pump 1 and 2 volumetric flow rate set-points; outputs (3-5): tank 1, 2, and 3 levels; faults (6-8): tank 1, 2, and 3 leak radii; uncertainty (9-11): tank 1, 2, and 3 flow coefficients.

The states and measured outputs of the system are the liquid levels in tanks 1, 2, and 3, denoted as h1, h2, and h3, respectively. The tank levels must remain between 0 and 0.75 m. Three fault scenarios were considered in this case study. In each scenario, one of the tanks contains a leak. The leak is assumed to be caused by a circular hole with a radius of rf1, rf2, and rf3 for tanks 1, 2, and 3, respectively. The pipes connecting the tanks have an identical cross-sectional area, Sp, with uncertain flow coefficients C1, C2, and C3. The leak radii and flow coefficients were considered to be uncertain; therefore, they were compiled into the faults and uncertain parameter vector, ξ, for test optimization. According to Mesbah et al. (2014), the average radius of a leak, when present in a tank, was 2 mm, and the flow coefficients, C1−3, were set at or close to 1. Thus, the anticipated fault and uncertain parameter vector, ξ̃, was set to [2 mm, 2 mm, 2 mm, 0.95, 0.80, 0.95] for the purpose of test design optimization. The lower and upper bounds of each fault and uncertain parameter were set to ±3σ, with the standard deviation of each leak radius at 0.5 mm and the standard deviation of each flow coefficient at 0.05. For the algorithms performed, data were collected when the system reached steady state. The number of sampling points for each fault scenario, Nsp, was set to 1500. The three-tank model was formulated using the following mass balance equations:

A dh1/dt = u1 − C1 Sp sign(h1 − h3) √(2g|h1 − h3|) − C1 π r²f1 √(2g h1),
A dh2/dt = u2 + C3 Sp sign(h3 − h2) √(2g|h3 − h2|) − C2 Sp √(2g h2) − C2 π r²f2 √(2g h2),     (17)
A dh3/dt = C1 Sp sign(h1 − h3) √(2g|h1 − h3|) − C3 Sp sign(h3 − h2) √(2g|h3 − h2|) − C3 π r²f3 √(2g h3).
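The mass balances of (17) can be integrated numerically. A minimal sketch using forward Euler is shown below; the tank area A and pipe cross-section Sp are assumed values (the paper does not list them in this section), and the paper's own solver and time discretization may differ:

```python
import numpy as np

# Assumed geometric parameters (hypothetical magnitudes); C and rf follow the text
A = 0.0154              # tank cross-section area, m^2 (assumed value)
Sp = 5e-5               # connecting pipe cross-section, m^2 (assumed value)
g = 9.81
C = [0.95, 0.80, 0.95]  # flow coefficients
rf = [2e-3, 0.0, 0.0]   # fault scenario: 2 mm leak in tank 1 only

def derivs(h, u1, u2):
    """Right-hand sides of the mass balances (17)."""
    h1, h2, h3 = h
    q13 = C[0] * Sp * np.sign(h1 - h3) * np.sqrt(2 * g * abs(h1 - h3))
    q32 = C[2] * Sp * np.sign(h3 - h2) * np.sqrt(2 * g * abs(h3 - h2))
    q2o = C[1] * Sp * np.sqrt(2 * g * h2)   # tank 2 outflow
    leak = [C[i] * np.pi * rf[i] ** 2 * np.sqrt(2 * g * hi)
            for i, hi in enumerate((h1, h2, h3))]
    return np.array([(u1 - q13 - leak[0]) / A,
                     (u2 + q32 - q2o - leak[1]) / A,
                     (q13 - q32 - leak[2]) / A])

# Forward-Euler integration at the nominal design u = [0.55, 0.55] x 1e-4 m^3/s
h = np.array([0.3, 0.3, 0.3])
dt = 0.5
for _ in range(20000):
    h = h + dt * derivs(h, 0.55e-4, 0.55e-4)
    h = np.clip(h, 0.0, 0.75)   # levels constrained to [0, 0.75] m
```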


The admissible range of each pump is 10⁻⁵ ≤ uj ≤ 10⁻⁴ m³/s, j = 1, 2. The anticipated fault and uncertain parameter vector, ξ̃, was injected into the virtual three-tank system. The virtual system is identical to the system model, with the exception that measurement noise and uncertainty were injected. The degree of measurement noise in the level sensors was varied in the case study to determine its impact on the optimal test design and sensor set selection. Two cases were studied:


(1) All three sensors contain normally distributed measurement noise with a standard deviation of 0.01 m.
(2) The sensors for tanks 1 and 3 are identical to those of case 1, but the tank 2 level sensor has noise with a standard deviation of 0.05 m.

By increasing the measurement variance in one of the sensors, as in case 2, it is shown that the inclusion of that particular sensor in the FDI test is sub-optimal, and that the application of the proposed methodology can indicate unnecessary sensors for active FDI.

4. RESULTS AND DISCUSSION

The methodology detailed in Section 2 was applied to the system described in Section 3. Following the procedure in Algorithm 1, the optimal test designs were calculated for all feasible sensor combinations. The three-tank system was allowed to have as few as one sensor or as many as three sensors during fault diagnosis; thus, seven sensor combinations were feasible for test design optimization. For this problem (with only two inputs), the test design optimization formulated in (7) was validated using the FIM plots of Fig. 2, to ensure that the calculated test designs led to global solutions of (7). In Fig. 2, the logarithm of ΨDs is a function of the two input flow rates, u1 and u2, which varied from 0.1 to 1.0 ×10⁻⁴ m³/s. Essentially, the objective of (7) is to determine the minimum function output, or the maximum of −ln ΨDs, within the test design space that does not violate the output constraints, shown as the dark regions of the surface plots. Figure 2 shows that adding sensors to the test increases the FDI information throughout the design space, and that the location of the optimal solution can change with the additional information.

Table 1. Nominal and optimal FDI test designs for the three-tank system with level sensors subject to measurement noise (σ1 = 0.01 m, σ2 = 0.01 m, σ3 = 0.01 m).
Test Design              ϕ* (×10⁴)      ln ΨDs   DKL,N   DF,N
Nom (y = [h1, h2, h3])   [0.55, 0.55]   -0.25    3.85    6.36
Opt (y = h1)             [0.62, 0.10]    2.31    4.21    7.16
Opt (y = h2)             [0.77, 0.10]    2.83    4.24    6.76
Opt (y = h3)             [0.66, 0.10]    2.39    4.22    6.93
Opt (y = [h1, h2])       [0.19, 1.00]    0.74    4.08    6.73
Opt (y = [h1, h3])       [0.77, 0.10]    0.20    4.04    6.70
Opt (y = [h2, h3])       [0.20, 1.00]    0.74    4.07    6.54
Opt (y = [h1, h2, h3])   [0.77, 0.10]   -0.45    3.83    6.28
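The brute-force validation over the admissible input grid, as described for Fig. 2, can be sketched as follows. The criterion function here is a smooth hypothetical stand-in for −ln ΨDs (centered near one of the reported optima), and the constraint mask is a hypothetical substitute for the output-constraint check:

```python
import numpy as np

# Grid of admissible pump flow rates, 0.1-1.0 x 1e-4 m^3/s
u1 = np.linspace(0.1, 1.0, 50)
u2 = np.linspace(0.1, 1.0, 50)
U1, U2 = np.meshgrid(u1, u2)

def neg_ln_psi(U1, U2):
    """Hypothetical smooth stand-in for -ln Psi_Ds over the design space."""
    return -((U1 - 0.77) ** 2 + (U2 - 0.10) ** 2)

J = neg_ln_psi(U1, U2)
feasible = U1 + U2 < 1.9            # hypothetical output-constraint mask
J = np.where(feasible, J, -np.inf)  # exclude infeasible input settings

# Best admissible design on the grid
i, j = np.unravel_index(np.argmax(J), J.shape)
phi_star = (U1[i, j], U2[i, j])
```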

The k-NN approach described in Algorithm 2 and (15)-(16) was used to determine the correct classification rates of each fault scenario with the test designs, listed in Table 2. Although the ΨDs value of the nominal test design is lower than that of the optimal designs with one or two sensors,

Fig. 2. Solutions of the Ds-optimal criterion function for the three-tank system FDI test over a range of admissible input flow rates, with single input settings in pumps 1 and 2 (in ×10⁻⁴ m³/s), for one sensor (a, y = [h1], ϕ* = [u1 = 0.62, u2 = 0.10]), two sensors (b, y = [h1, h3], ϕ* = [u1 = 0.77, u2 = 0.10]), or three sensors (c, y = [h1, h2, h3], ϕ* = [u1 = 0.77, u2 = 0.10]). Dark regions indicate input settings that violate output constraints.

Table 2. FDI correct classification rate for fault scenarios with proposed test designs and sensor sets (σ1 = 0.01 m, σ2 = 0.01 m, σ3 = 0.01 m).

Test Design              CCR0    CCR1    CCR2    CCR3    ACC
Nom (y = [h1, h2, h3])   0.869   0.781   0.852   0.825   0.832
Opt (y = h1)             0.677   0.750   0.597   0.883   0.727
Opt (y = h2)             0.658   0.753   0.641   0.888   0.735
Opt (y = h3)             0.662   0.744   0.631   0.879   0.729
Opt (y = [h1, h2])       0.805   0.775   0.631   0.909   0.780
Opt (y = [h1, h3])       0.925   0.831   0.871   0.928   0.889
Opt (y = [h2, h3])       0.805   0.797   0.666   0.925   0.798
Opt (y = [h1, h2, h3])   0.968   0.865   0.948   0.935   0.929

the CCR of each fault scenario was higher after the test design optimization was performed, regardless of sensor set. The overall highest CCR occurred when y = [h1, h2, h3], as indicated by the normalized KLD and FID values of Table 1. It was also observed in Table 2 that the sensor sets corresponding to the highest ACC value at constant Ns generated the smallest ΨDs value. Because each sensor in this case has the same noise, the best FDI test design collects data from all three sensors in the set. The second case study was then explored, wherein greater measurement error was added to the second sensor. Table 3 shows the updated test designs and criteria outputs for each step of the methodology. The variance of the sensors is factored into the test design optimization; thus, in case 2 the optimal test should not collect information from sensor 2. The DKL,N and DF,N values are lower at y = [h1, h3] than at y = [h1, h2, h3]. Following the procedure of Algorithm 1, the best test design should use only the first and third sensors. The correct classifications of each fault scenario were determined to verify the findings of Table 3. Table 4


Table 3. Nominal and optimal FDI test designs for the three-tank system with level sensors subject to measurement noise (σ1 = 0.01 m, σ2 = 0.05 m, σ3 = 0.01 m).

Test Design              ϕ* (×10⁴)      ln ΨDs   DKL,N   DF,N
Nom (y = [h1, h2, h3])   [0.55, 0.55]    0.39    4.27    6.92
Opt (y = h1)             [0.62, 0.10]    4.58    2.31    7.16
Opt (y = h2)             [0.77, 0.10]    2.39    4.21    7.46
Opt (y = h3)             [0.66, 0.10]    1.17    4.76    6.93
Opt (y = [h1, h2])       [0.62, 0.10]    4.22    4.56    7.28
Opt (y = [h1, h3])       [0.77, 0.10]    0.20    4.04    6.70
Opt (y = [h2, h3])       [0.66, 0.10]    1.35    4.71    7.31
Opt (y = [h1, h2, h3])   [0.77, 0.10]    0.07    4.22    6.86

Table 4. FDI correct classification rate for fault scenarios with proposed test designs and sensor sets (σ1 = 0.01 m, σ2 = 0.05 m, σ3 = 0.01 m).

Test Design              CCR0    CCR1    CCR2    CCR3    ACC
Nom (y = [h1, h2, h3])   0.500   0.623   0.311   0.915   0.587
Opt (y = h1)             0.677   0.750   0.597   0.883   0.727
Opt (y = h2)             0.661   0.720   0.594   0.859   0.709
Opt (y = h3)             0.662   0.744   0.631   0.879   0.729
Opt (y = [h1, h2])       0.653   0.710   0.489   0.893   0.686
Opt (y = [h1, h3])       0.925   0.831   0.871   0.928   0.889
Opt (y = [h2, h3])       0.619   0.706   0.505   0.877   0.677
Opt (y = [h1, h2, h3])   0.769   0.805   0.727   0.912   0.803

presents the CCRs and ACC of each test design. The ACC for the test design with y = h2 is lower with increased measurement variance, as expected. The use of three sensors does not improve the overall success rate of fault diagnosis. The sensor set that results in the highest ACC value is y = [h1, h3], which is in agreement with the normalized KLD and FID results shown in Table 3.

In summary, the novelty of the proposed approach is the simultaneous test design and sensor selection for FDI. Ds-optimal tests were calculated for all sensor combinations, and their KLD and FID evaluation metrics were in agreement in terms of correct fault classification accuracy.

5. ACKNOWLEDGEMENTS

This work was sponsored by the UTC Institute for Advanced Systems Engineering (UTC-IASE) of the University of Connecticut and the United Technologies Corporation. Any opinions expressed herein are those of the authors and do not represent those of the sponsor.

REFERENCES

Costa, S.I., Santos, S.A., and Strapasson, J.E. (2015). Fisher information distance: A geometrical reading. Discrete Applied Mathematics, 197, 59-69.
Davoodi, M.R., Meskin, N., and Khorasani, K. (2014). Simultaneous fault detection and control design for a network of multi-agent systems. In 2014 European Control Conference (ECC), 575-581. IEEE.
Han, L., Zhou, Z., and Bollas, G.M. (2016a). Model-based analysis of chemical-looping combustion experiments. Part I: Structural identifiability of kinetic models for NiO reduction. AIChE Journal, 62(7), 2419-2431.
Han, L., Zhou, Z., and Bollas, G.M. (2016b). Model-based analysis of chemical-looping combustion experiments. Part II: Optimal design of CH4-NiO reduction experiments. AIChE Journal, 62(7), 2432-2446.
Isermann, R. (2005). Model-based fault-detection and diagnosis - Status and applications. Annual Reviews in Control, 29, 71-85.
Joshi, S. and Boyd, S. (2009). Sensor selection via convex optimization. IEEE Transactions on Signal Processing, 57(2), 451-462.
Kullback, S. and Leibler, R.A. (1951). On information and sufficiency. The Annals of Mathematical Statistics, 22(1), 79-86.
Ma, J.Z. and Ackerman, E. (1993). Parameter sensitivity of a model of viral epidemics simulated with Monte Carlo techniques. II. Durations and peaks. International Journal of Bio-Medical Computing, 32(3-4), 255-268.
Maul, W.A., Kopasakis, G., Santi, L.M., Sowers, T.S., and Chicatelli, A. (2008). Sensor selection and optimization for health assessment of aerospace systems. Journal of Aerospace Computing, Information, and Communication, 5(1), 16-34.
Mesbah, A., Streif, S., Findeisen, R., and Braatz, R.D. (2014). Active fault diagnosis for nonlinear systems with probabilistic uncertainties. IFAC Proceedings Volumes, 47(3), 7079-7084.
Najjar, N., Gupta, S., Hare, J., Kandil, S., and Walthall, R. (2016). Optimal sensor selection and fusion for heat exchanger fouling diagnosis in aerospace systems. IEEE Sensors Journal, 16(12), 4866-4881.
Palmer, K.A., Hale, W.T., and Bollas, G.M. (2018). Active fault identification by optimization of test designs. IEEE Transactions on Control Systems Technology, 1-15. doi:10.1109/TCST.2018.2867996.
Palmer, K.A. and Bollas, G.M. (2018a). Active fault diagnosis for uncertain systems using optimal test designs and detection through classification. ISA Transactions, In Review, unpublished.
Palmer, K.A. and Bollas, G.M. (2018b). Analysis of transient data in test designs for active fault detection and identification. Computers & Chemical Engineering. doi:10.1016/j.compchemeng.2018.06.020.
Palmer, K.A., Hale, W.T., Such, K.D., Shea, B.R., and Bollas, G.M. (2016). Optimal design of tests for heat exchanger fouling identification. Applied Thermal Engineering, 95, 382-393.
Parker, S.M. and Gennings, C. (2008). Penalized locally optimal experimental designs for nonlinear models. Journal of Agricultural, Biological, and Environmental Statistics, 13(3), 334-354.
Patan, M. and Ucinski, D. (2010). Sensor scheduling with selection of input experimental conditions for identification of distributed systems. In 2010 15th International Conference on Methods and Models in Automation and Robotics, 148-153. IEEE.
Rigollet, P. (2012). Kullback-Leibler aggregation and misspecified generalized linear models. The Annals of Statistics, 40(2), 639-665.
Šimandl, M., Punčochář, I., and Herejt, P. (2005). Optimal input and decision in multiple model fault detection. IFAC Proceedings Volumes, 38(1), 454-459.
Walker, S.G. (2016). Bayesian information in an experiment and the Fisher information distance. Statistics & Probability Letters, 112, 5-9.
Optimal design of tests for heat exchanger fouling identification. Applied Thermal Engineering, 95, 382–393. Parker, S.M. and Gennings, C. (2008). Penalized locally optimal experimental designs for nonlinear models. Journal of Agricultural, Biological, and Environmental Statistics, 13(3), 334–354. Patan, M. and Ucinski, D. (2010). Sensor scheduling with selection of input experimental conditions for identification of distributed systems. In 2010 15th International Conference on Methods and Models in Automation and Robotics, 148–153. IEEE. Rigollet, P. (2012). Kullback–Leibler aggregation and misspecified generalized linear models. The Annals of Statistics, 40(2), 639–665. ˇ Simandl, M., Punˆcoch´aˇr, I., and Herejt, P. (2005). Optimal input and decision in multiple model fault detection. IFAC Proceedings Volumes, 38(1), 454–459. Walker, S.G. (2016). Bayesian information in an experiment and the Fisher information distance. Statistics & Probability Letters, 112, 5–9.