Real-time adaptive input design for the determination of competitive adsorption isotherms in liquid chromatography

Accepted Manuscript

Authors: Tilman Barz, Diana C. López C., Mariano Nicolás Cruz Bournazou, Stefan Körkel, Sebastian F. Walter

PII: S0098-1354(16)30229-0
DOI: http://dx.doi.org/doi:10.1016/j.compchemeng.2016.07.009
Reference: CACE 5510

To appear in: Computers and Chemical Engineering

Received date: 10-4-2016
Revised date: 5-6-2016
Accepted date: 3-7-2016

Please cite this article as: Tilman Barz, Diana C. López C., Mariano Nicolás Cruz Bournazou, Stefan Körkel, Sebastian F. Walter, Real-time adaptive input design for the determination of competitive adsorption isotherms in liquid chromatography, (2016), http://dx.doi.org/10.1016/j.compchemeng.2016.07.009

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

Real-time adaptive input design for the determination of competitive adsorption isotherms in liquid chromatography

Tilman Barz(a), Diana C. López C.(b), Mariano Nicolás Cruz Bournazou(c), Stefan Körkel(d), Sebastian F. Walter(e)

a AIT Austrian Institute of Technology GmbH, Giefinggasse 2, 1210 Wien, Austria
b Technische Universität Berlin, Chair of Process Dynamics and Operation, Str. des 17. Juni 135, 10623 Berlin, Germany
c Technische Universität Berlin, Institute of Biotechnology, Department of Bioprocess Engineering, Ackerstr. 71-76, D-13355 Berlin, Germany
d University of Mannheim, School of Business Informatics and Mathematics, B6, 26, 68131 Mannheim, Germany
e Universität Heidelberg, Interdisciplinary Center for Scientific Computing, Im Neuenheimer Feld 368, 69120 Heidelberg, Germany

1. Abstract

The adaptive input design (also called online redesign of experiments) for parameter estimation is very effective for the compensation of uncertainties in nonlinear processes. Moreover, it enables substantial savings in experimental effort and greater reliability in modeling. We present theoretical details and experimental results from the real-time adaptive optimal input design for parameter estimation. The case study considers the separation of three benzoates by reversed-phase liquid chromatography. Following a receding horizon scheme, adaptive D-optimal input designs are generated for a precise determination of competitive adsorption isotherm parameters. Moreover, numerical techniques for the regularization of arising ill-posed problems, e.g. due to scarce measurements, lack of prior information about parameters, low sensitivities and parameter correlations, are discussed. The estimated parameter values are successfully validated by Frontal Analysis, and the benefits of optimal input designs are highlighted when compared to various standard/heuristic input designs in terms of parameter accuracy and precision.

2. Introduction

Model-based Optimal Experimental Design (OED) aims to maximize the information content of experiments for model development purposes. It has been shown that optimally designed experiments reduce the time and experimental effort compared to conventional or heuristically designed ones [? ]. The OED for parameter precision improvement seeks the experiment that minimizes the uncertainty of the model parameters recovered from a Parameter Estimation (PE) problem. The optimization problem in OED minimizes one of the alphabetic optimality criteria first proposed by Kiefer and Wolfowitz (1959) [? ]. OED for parameter precision has been used in a wide range of applications (and corresponding models). Reports on both small- and large-scale nonlinear algebraic models can be found, e.g., in thermodynamics for the characterization of phase equilibria [? ? ? ], in medical imaging (impedance tomography image recovery of the human head [? ]) and in geophysical imaging (electromagnetic borehole tomography [? ]). Dynamic problems were first discussed in the control community in the 1970s, e.g. OED-based methods for the identification of linear time-invariant dynamical systems [? ], [? ]. The OED for general nonlinear dynamic models in differential equations was probably first treated for a continuous yeast fermenter by Espie and Macchietto, 1989 [? ]. In the following years, further applications to a wide range of general dynamic models were reported, including, amongst others, Escherichia coli growth models [? ], robot dynamic models [? ? ], pharmacokinetic population models [? ? ] and chemical reaction kinetic models [? ? ].

∗ Corresponding author. Email address: [email protected] (Tilman Barz)

Preprint submitted to Elsevier, July 29, 2016

Nonlinear models. For models with a nonlinear relation in the parameters, the experiment design is sensitive to parametric uncertainties. Moreover, as true parameter values are often unknown, an initial parameter guess is used instead. Thus, the quality of a computed optimal design depends on the quality of this parameter guess [? ]. A widely used approach to cope with uncertainties in the parameter estimates is the so-called sequential design. This is an indirect approach based on the iterative refinement of the experimental design whenever new measurements and parameter estimates are available. This means that experiments are designed, executed and analyzed in a sequence [? ? ]. This stepwise reduction of the parameter uncertainty leads to more reliable model predictions and designs that are closer to the true optimal experiment. The sequential approach can provide reliable estimates with a drastically reduced experimental effort in the presence of uncertainties.


Online strategy. The idea to extend the sequential approach to dynamic systems was already discussed in the early 1970s by Mehra, 1974 [? ]. Here, a re-design of the experiment is calculated every time a new measurement is available. Thus, the update of the parameter estimates and of the optimal inputs is conducted iteratively online. By this, it is possible to exploit new measurement information as soon as it is generated by the running experiment, minimizing the mismatch between calculated and real outputs. However, the real-time computation constitutes a big challenge, and the idea was considered only for simple impulse response models. Only recently has the idea to compute an optimal input design in tandem with a recursive online parameter update been reconsidered. Stigter et al., 2006 presented the online adaptive optimal input design for a nonlinear bioreactor model and showed the benefits when compared to a naive input design defined by an independent binary random sequence [? ]. In the following years, the framework was applied for the identification of linear dynamic control systems, i.e. ARX systems [? ? ? ], where optimal inputs were obtained by solution of a model predictive control (MPC) optimization problem in open and closed loop. Applications of the online adaptive optimal input design to nonlinear differential equation models have been based on MPC and nonlinear observer techniques, e.g. for a powder coating curing process [? ], a biomass fermentation and urethane reaction process [? ], a fuel cell [? ], an electrical circuit system and continuous reaction process [? ] and a rolling delta wing system [? ]. Moreover, the online model selection for the identification of a mitogen-activated protein (MAP) kinase model [? ] and the identification of kinetic models of methanol oxidation over a silver catalyst [? ] were also studied. However, all listed applications have been implemented 'in silico'. Only very few reports on the demonstration with experimental implementation have been presented so far, i.e. parameter identification of a chromatography system [? ], a temperature controlled tank [? ] and a dynamic model of the Baxter Research Robot from Rethink Robotics [? ].


Identifiability and regularization. A critical issue for the implementation of the adaptive experiment design is the fact that, especially at the beginning of an experiment, experimental data is scarce. In addition, parameter sensitivities might be low and parameters might be correlated. As a consequence, parameters may not be identifiable, and the corresponding PE and OED problems are then ill-posed. In those cases, the sensitivity matrix, and thus also the Fisher information matrix (FIM), is ill-conditioned. This is critical when applying iterative, gradient-based algorithms. Practical problems arise, such as instabilities and poor convergence in the parameter estimation, and ineffective and/or meaningless experiment designs [? ? ]. Therefore, adequate actions need to be taken to ensure the robustness and reliability of the algorithms for PE and OED. In the available literature on the adaptive redesign of experiments, ill-posed problems, e.g. due to ill-conditioned matrices, had not been directly addressed. Instead, preliminary studies were carried out to test for parameter identifiability and, if necessary, to reduce the number of parameters to be estimated, in order to guarantee parameter identifiability for all scenarios [? ? ]. Only in Barz et al. 2013 were ill-posed PE and the corresponding OED explicitly addressed, by an automated procedure based on identifiable parameter subset selection [? ]. However, a variety of so-called regularization techniques exist to compute robust solutions of ill-conditioned problems, see e.g. [? ? ]. The regularization controls the variance at the expense of a bias in the estimates. For ill-posed parameter estimations the following techniques have been studied: regularization based on the imposition of constraints on the parameter values [? ], ridge regression or Tikhonov regularization [? ], the truncated singular value decomposition [? ? ? ? ], parameter subset selection [? ? ? ? ? ] and principal component analysis [? ]. In the context of optimal experimental design, theory and applications to ill-posed problems together with Tikhonov regularization can be found in [? ? ? ? ].

Parameter determination in chromatographic purification. Liquid chromatography plays an important role in downstream bioprocess development [? ]. However, it is also an expensive separation technique and requires a careful optimization of the system design and operating conditions. Therefore, for commercial viability, the acceleration of process development is deemed important. Model-based analysis is considered essential for the optimization of chromatographic separation processes; this approach is probably best exemplified by the model-based optimization and control of simulated moving bed processes [? ? ]. The development of chromatographic process models requires a large number of experimental data sets, which need to be created efficiently, i.e. with as little sample material as possible in a limited time frame. Automation is certainly a valuable approach to speed up process development. It has been an important issue in chromatographic hardware development for some while, i.e. feedback control of valves and pumps, the mixing of concentration gradients, and online concentration analysis [? ]. More recently, high-throughput screening methods have also been considered, i.e. the use of robotic liquid handling stations for the dynamic chromatographic operation of MiniColumns [? ? ]. Central to the modeling of chromatographic systems are the underlying thermodynamic functions, i.e. the adsorption isotherms. Although several experimental methods to determine adsorption isotherms are available, their application is still far from being a routine job. Moreover, to achieve reliable model predictions, there are high requirements on the accuracy and precision of the estimated isotherms [? ? ]. This contribution presents a fully automated approach for the determination of multicomponent adsorption isotherm parameters.

3. Problem formulation

We consider a nonlinear dynamic model in the differential form given in Eq. (1):

$$\dot{x}(t) = g\left(x(t), u(t), \theta\right), \qquad y(t) = A\,x(t), \qquad x(t_0) = x_0 \tag{1}$$

where t ∈ [t_0, t_f] ⊆ R is the independent variable time, x(t) ∈ R^{N_x} is the vector of dependent state variables, u(t) ∈ R^{N_u} is the time-varying input or experimental design variable vector, θ ∈ R^{N_p} is the unknown parameter vector, and the initial conditions are given by x_0. y(t) ∈ R^{N_y} is the vector of predicted response variables, which correspond to the measurements and are defined by a linear function. A ∈ R^{N_y × N_x} is a constant selection matrix with N_y ≤ N_x.
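As a toy illustration of the model class in Eq. (1), the following sketch simulates a two-state system and extracts the response y(t) = A x(t). The right-hand side g, the matrix A and all numerical values are illustrative placeholders, not the chromatography model developed later in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy instance of Eq. (1): two states, one piecewise-constant input segment,
# two parameters. g, A, x0 and theta are illustrative stand-ins.
def g(t, x, u, theta):
    th1, th2 = theta
    return np.array([-th1 * x[0] + u,
                     th1 * x[0] - th2 * x[1]])

A = np.array([[0.0, 1.0]])          # selection matrix: y is the second state
x0 = np.array([0.0, 0.0])           # initial conditions x(t0) = x0
theta = np.array([1.5, 0.7])        # unknown parameter vector
u = 1.0                             # input held constant over the horizon

sol = solve_ivp(g, (0.0, 10.0), x0, args=(u, theta), rtol=1e-8)
y = A @ sol.y                       # predicted responses y(t) = A x(t)
```

In the real-time scheme below, integrations of this form are repeated at every sampling instant for the current parameter estimate.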


Figure 1: Discretization grids and time horizons used in the real-time algorithm, taken from [? ].

Finite time horizon schemes. The real-time implementation is based on a finite time horizon scheme, see Figure 1. We consider a repeated update of the current parameter estimates (solution of the PE problem) and of the planned input actions (solution of the optimal ED problem). Measurements are taken at equally spaced discrete time instances t_k. The time-varying inputs are realized as finite piecewise-constant input signals which match the time grid defined by the measurement time instances. During the experiment, the number of currently available measurements increases (past horizon, see Figure 1; corresponding variables use superscript '−'). At time t_k, all available measurements are collected in the vector:

$$\Upsilon_k^- = [\upsilon_1, \upsilon_2, \cdots, \upsilon_{k-1}, \upsilon_k]^T \in \mathbb{R}^{N_y \cdot k} \tag{2}$$

The corresponding predicted response variables and discrete input actions are(1):

$$Y_k^-(\cdot) = [y_{1|k}, y_{2|k}, \cdots, y_{k|k}]^T \in \mathbb{R}^{N_y \cdot k}, \qquad U_k^- = [u_{0|k}, u_{1|k}, \cdots, u_{k-1|k}]^T \in \mathbb{R}^{N_u \cdot k} \tag{3}$$

(1) The notation y_{i|j} indicates the value of the predicted variable vector at sampling instant i which is calculated at instant j.


with Y_k^-(·) being a function of the prior input actions U_k^- and the current parameter values θ_k, i.e. (·) means (U_k^-, θ_k). At any sampling instant t_k, the current parameter estimates are updated by solution of the unconstrained PE problem [? ]:

$$\hat{\theta}_k := \arg\min_{\theta_k} \Phi_k^{PE}(\cdot), \qquad \Phi_k^{PE}(\cdot) := \frac{1}{2}\left(Y_k^-(\cdot) - \Upsilon_k^-\right)^T \left(\Sigma_k^-\right)^{-1} \left(Y_k^-(\cdot) - \Upsilon_k^-\right) \tag{4}$$

where θ̂_k ∈ R^{N_p} contains the best available estimate and Φ_k^{PE}(·) denotes the weighted nonlinear least-squares criterion. The weighting matrix Σ_k^- ∈ R^{N_y·k × N_y·k} is the variance-covariance matrix of the measurement errors in the data. Σ_k^- is assumed to be a diagonal matrix with the variance σ_i² of each measurement i in its diagonal entries. Parameter estimation, identifiability analysis and optimal input design are based on the analysis of the scaled sensitivity matrix S_k^-(·) ∈ R^{N_y·k × N_p}:

$$S_k^-(\cdot) = \left(Z_k^-\right)^{-1} \nabla_{\theta_k} Y_k^-(\cdot) \tag{5}$$

where (·) means (U_k^-, θ_k) and the weighting matrix Z_k^- ∈ R^{N_y·k × N_y·k} is the measurement standard deviation matrix, with Σ_k^- = Z_k^- Z_k^-.

The update of the parameter estimates and the redesign of the planned inputs need to be computed within the current time interval [t_k, t_{k+1}]. It is referred to here as current time in Figure 1; corresponding variables use superscript '0'. The corresponding predicted response variables and discrete input actions are:

$$Y_k^0(\cdot) = y_{k+1|k} \in \mathbb{R}^{N_y}, \qquad U_k^0 = u_{k|k} \in \mathbb{R}^{N_u} \tag{6}$$

with Y_k^0(·) being a function of the current input actions U_k^0 and parameter values θ_k, i.e. (·) means (U_k^0, θ_k)(2). In the real-time implementation, Y_k^0(·) and U_k^0 are not considered for an update of the parameter estimate or the re-design of the experiment (discussed below). The corresponding scaled sensitivity matrix S_k^0(·) ∈ R^{N_y × N_p} reads:

$$S_k^0(\cdot) = \left(Z_k^0\right)^{-1} \nabla_{\theta_k} Y_k^0(\cdot) \tag{7}$$

where (·) means (U_k^0, θ_k) and the weighting matrix Z_k^0 ∈ R^{N_y × N_y} is the measurement standard deviation matrix. At any sampling instant t_k, after the update of the parameter values, the experiment is redesigned. This redesign means an update of the future discrete input

(2) It has to be noted that, according to the implementation, for each time horizon (past, current, future in Figure 1) the influence of its corresponding discrete input actions is analyzed. As a result, current responses Y_k^0(·) do depend on current inputs U_k^0, but do not depend on past inputs U_k^-.
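The weighted PE problem in Eq. (4) can be sketched by passing the scaled residual Z^{-1}(Y(θ) − Υ) to a standard least-squares solver. The response model below is a toy curve with two parameters, not the chromatography model; the noise level and parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

# Minimal sketch of Eq. (4): minimize 1/2 (Y(θ)-Υ)^T Σ^{-1} (Y(θ)-Υ) by
# handing the residual scaled with the standard-deviation matrix Z to a
# least-squares solver.
t = np.linspace(0.5, 10.0, 20)
sigma = 0.01 * np.ones_like(t)                # diagonal of Z (std. deviations)

def Y(theta):                                 # predicted responses Y_k^-(·)
    return theta[0] * (1.0 - np.exp(-theta[1] * t))

rng = np.random.default_rng(0)
upsilon = Y([2.0, 0.5]) + rng.normal(0.0, sigma)   # synthetic measurements Υ

res = least_squares(lambda th: (Y(th) - upsilon) / sigma, x0=[1.0, 1.0])
theta_hat = res.x                             # current estimate θ̂_k
```

The same scaled residual Jacobian is exactly the scaled sensitivity matrix of Eq. (5).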

actions within a limited future horizon [t_{k+1}, t_{k+h}] (see Figure 1). The corresponding predicted future responses and discrete input actions are:

$$Y_k^+(\cdot) = [y_{k+2|k}, y_{k+3|k}, \cdots, y_{k+h|k}]^T \in \mathbb{R}^{N_y \cdot (h-1)}, \qquad U_k^+ = [u_{k+1|k}, u_{k+2|k}, \cdots, u_{k+h-1|k}]^T \in \mathbb{R}^{N_u \cdot (h-1)} \tag{8}$$

with Y_k^+(·) being a function of the future input actions U_k^+ and the current parameter values θ_k, i.e. (·) means (U_k^+, θ_k). The corresponding scaled sensitivity matrix S_k^+(·) ∈ R^{N_y·(h−1) × N_p} reads:

$$S_k^+(\cdot) = \left(Z_k^+\right)^{-1} \nabla_{\theta_k} Y_k^+(\cdot) \tag{9}$$

where (·) means (U_k^+, θ_k) and the weighting matrix Z_k^+ ∈ R^{N_y·(h−1) × N_y·(h−1)} is the measurement standard deviation matrix. The discrete formulation of the corresponding ED problem reads:

$$U_k^{+*} := \arg\min_{U_k^+} \Phi^{ED}(C_k(\cdot)), \qquad \Phi^{ED}(C_k(\cdot)) := \left|F_k(\cdot)^{-1}\right|^{1/N_p} \tag{10}$$

where the D-optimal criterion (determinant criterion) [? ] is evaluated for the inverse of the Fisher Information Matrix (FIM), F_k(·) ∈ R^{N_p × N_p}. This inverse is the so-called Cramér-Rao lower bound of the covariance matrix of the current parameter estimate, C_k(·) ∈ R^{N_p × N_p} [? ]:

$$C_k(\cdot) \geq \left(F_k(\cdot)\right)^{-1} \tag{11}$$

For unconstrained problems the FIM can be calculated directly using the scaled sensitivity matrix S_k(·) ∈ R^{N_y·(k+h) × N_p} evaluated at the current parameter estimate θ̂_k,

$$F_k(\cdot) = S_k(\cdot)^T S_k(\cdot), \qquad \text{with} \quad S_k(\cdot) = \begin{bmatrix} S_k^-(U_k^-, \hat{\theta}_k) \\ S_k^0(U_k^0, \hat{\theta}_k) \\ S_k^+(U_k^+, \hat{\theta}_k) \end{bmatrix} \tag{12}$$

or, more efficiently, by summation over the individual contributions to the FIM:

$$F_k(\cdot) := F_k^-(U_k^-, \hat{\theta}_k) + F_k^0(U_k^0, \hat{\theta}_k) + F_k^+(U_k^+, \hat{\theta}_k) \tag{13}$$

where F_k^+(·) depends on the future input actions U_k^+ and needs to be updated during the solution of the ED problem in Eq. (10). In contrast, F_k^0(·) and F_k^-(·) are constant contributions which depend on the currently implemented and past input actions, respectively, and have to be updated only once, at each sampling time instant t_k, for the current parameter estimate θ̂_k.
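The FIM bookkeeping of Eqs. (12)-(13) and the D-criterion of Eq. (10) can be sketched as follows. The sensitivity blocks are random placeholders for the scaled matrices S_k^-, S_k^0, S_k^+; the dimensions are illustrative.

```python
import numpy as np

# Eq. (13): the FIM as a sum of past, current and future contributions,
# which equals the Gram matrix of the stacked sensitivities in Eq. (12).
rng = np.random.default_rng(1)
Np = 3
S_past = rng.normal(size=(40, Np))     # placeholder for S_k^-(U_k^-, θ̂_k)
S_curr = rng.normal(size=(2, Np))      # placeholder for S_k^0(U_k^0, θ̂_k)
S_fut = rng.normal(size=(18, Np))      # placeholder for S_k^+(U_k^+, θ̂_k)

def fim(S):
    return S.T @ S

F = fim(S_past) + fim(S_curr) + fim(S_fut)            # Eq. (13)
F_stacked = fim(np.vstack([S_past, S_curr, S_fut]))   # Eq. (12)

# Eq. (10): D-optimal criterion on the Cramér-Rao bound F^{-1}
phi_D = np.linalg.det(np.linalg.inv(F)) ** (1.0 / Np)
```

During the redesign only the future block changes, so F_past and F_curr can be cached, which is the efficiency argument behind Eq. (13).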

3.1. Numerical regularization

For most applications, especially at the beginning of the experiment, parameter identifiability cannot be guaranteed. That may be because of scarce informative experimental data, in quantity as well as in quality. The corresponding ill-posed problems are highly critical as they might destabilize the solution and affect the reliability of the estimates. Furthermore, in the presence of ill-posed estimations, a redesign of optimal inputs (for parameter precision improvement) leads to ineffective and/or meaningless designs [? ]. Ill-posed problems can be detected by analysis of the ill-conditioning of the sensitivity matrix S_k^- in Eq. (5). If S_k^- is ill-conditioned then it is not of full rank, which promotes identifiability problems. Numerical regularization techniques aim at the transformation of the original ill-posed problem into a new well-posed problem by introducing constraints (penalties) on the parameter space [? ? ? ]. In the same way, the original ill-conditioned sensitivity matrix S_k^- is transformed into a regularized, well-conditioned sensitivity matrix S_k^{-,Reg} [? ? ]. The regularization must be tuned by appropriate selection of tuning parameters according to the gravity of the ill-conditioning. In the adaptive framework for optimal input re-design for parameter estimation, two different local regularization techniques are considered, which are repeatedly applied at every time instant t_k.

Subset selection (SsS) is based on an identifiability analysis and a subsequent reduction of the parameter space to an identifiable subset. The regularized problem is transformed into a well-posed problem; non-identifiable parameters are held constant and are not considered for optimization. The underlying algorithm is based on orthogonal projections and is one of the most widely used among several approaches for parameter subset selection [? ? ]. Parameter identifiability is analyzed by the rank-revealing singular value decomposition (SVD), with ς_i being the i-th singular value of S_k^- (ordered according to magnitude as ς_1 ≥ ς_2 ≥ · · · ≥ ς_{N_p} ≥ 0). The number of linearly independent parameters corresponds to the numerical ε-rank r_ε of S_k^-. r_ε is defined by the maximum number of ς_i with i = 1, · · · , N_p for which the sub-condition number κ_i = ς_1/ς_i and the sub-collinearity index γ_i = 1/ς_i are below a critical threshold. The corresponding upper bounds, namely the maximum condition number and the maximum collinearity index, are defined by empirical values, for instance κ^max = 1000 and γ^max = 10−15, respectively [? ]. Note that γ^max should be scaled by the corresponding standard deviation of the measurement errors [? ]. The vector θ_k is reordered according to the linear independence of the columns of S_k^- by applying a QRP decomposition (via Householder transformations). The decomposition reads S_k^- Π = QR, with Π being the permutation matrix, which is then used for ordering the elements of θ_k such that the reordered parameter vector is θ̃_k = Π^T θ_k. The first r_ε elements of θ̃_k constitute the identifiable parameter subset. The reordered and partitioned (identifiable/non-identifiable) parameter vector and sensitivity matrix read:

$$\tilde{\theta}_k = \begin{bmatrix} \tilde{\theta}_k^{(r_\varepsilon)} \\ \tilde{\theta}_k^{(N_p - r_\varepsilon)} \end{bmatrix}, \qquad \tilde{S}_k^- = \left[\tilde{S}_k^{-,(r_\varepsilon)} \;\; \tilde{S}_k^{-,(N_p - r_\varepsilon)}\right] \tag{14}$$

with the regularized sensitivity matrix S_k^{-,SsS} = \tilde{S}_k^{-,(r_\varepsilon)} ∈ R^{N_y·k × r_ε}.
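The subset-selection steps above can be sketched with a synthetic sensitivity matrix that contains one nearly dependent column (parameter). The threshold value follows the empirical κ^max = 1000 mentioned in the text; the matrix itself is a random placeholder, not a real sensitivity matrix.

```python
import numpy as np
from scipy.linalg import qr

# Sketch of the SsS regularization: ε-rank from the singular values via the
# sub-condition numbers κ_i = ς_1/ς_i, then QR with column pivoting (QRP)
# to order the parameters by linear independence.
rng = np.random.default_rng(2)
S = rng.normal(size=(50, 4))
S[:, 3] = S[:, 0] + 1e-8 * rng.normal(size=50)   # column 4 ~ column 1

sv = np.linalg.svd(S, compute_uv=False)          # ς_1 ≥ ... ≥ ς_Np
kappa_max = 1000.0                               # empirical threshold
r_eps = int(np.sum(sv[0] / sv <= kappa_max))     # numerical ε-rank

# S Π = QR; the permutation orders the columns (parameters)
_, _, perm = qr(S, pivoting=True)
identifiable = perm[:r_eps]                      # identifiable parameter subset
S_reg = S[:, identifiable]                       # regularized sensitivity matrix
```

The non-identifiable parameters (the trailing entries of the permutation) would be held constant at their current estimates.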

Tikhonov regularization (Tikh) modifies the identification problem in Eq. (4) by incorporating a priori information about the size and smoothness of the desired solution [? ? ? ? ]. This a priori information is considered as an additional penalty term which attracts the unidentifiable parameters toward their corresponding values in a predefined vector (in this work the last parameter estimate θ̂_{k−1}):

$$\Phi_k^{PE,Tikh}(U_k^-, \theta_k) := \Phi_k^{PE}(U_k^-, \theta_k) + \frac{1}{2}\,\lambda^2 \left\| L\left(\theta_k - \hat{\theta}_{k-1}\right) \right\|_2^2 \tag{15}$$

The scalar parameter λ² controls the regularization. The matrix L ∈ R^{N_p × N_p} is a diagonal matrix, typically the identity matrix I_{N_p × N_p}. Alternatively, an individual weighting according to the precision of the parameter estimates can be applied. This weighting can be approximated by the inverse of the parameter variance, i.e. the Hessian can be incorporated in L [? ]. In the Tikhonov regularization, the model can be improved without modifying the model structure, which means that the parameter space remains unchanged. The regularized sensitivity matrix S_k^{-,Tikh} ∈ R^{(N_y·k+N_p) × N_p} is a row-augmented matrix whose last rows are formed by a user-defined full-row-rank matrix [? ? ? ? ]:

$$S_k^{-,Tikh} = \begin{bmatrix} S_k^- \\ \lambda L \end{bmatrix} \tag{16}$$
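The row augmentation of Eq. (16) can be sketched directly: appending λL below the sensitivity matrix restores full column rank even when the sensitivity matrix itself is rank deficient. The matrix S below is a synthetic rank-deficient placeholder, and λ and L = I are illustrative choices.

```python
import numpy as np

# Sketch of Eq. (16): Tikhonov row-augmented sensitivity matrix.
rng = np.random.default_rng(3)
Np = 4
S = rng.normal(size=(30, Np))
S[:, 3] = 2.0 * S[:, 1]                 # exactly dependent column -> rank 3

lam = 0.1                               # regularization parameter λ
L = np.eye(Np)                          # weighting matrix, here the identity
S_tikh = np.vstack([S, lam * L])        # Eq. (16)

rank_plain = np.linalg.matrix_rank(S)
rank_tikh = np.linalg.matrix_rank(S_tikh)
```

In a Gauss-Newton setting, using S_tikh together with the penalized residual of Eq. (15) is equivalent to solving the regularized PE problem.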

4. Experimental setup

4.1. Chromatography system

The experimental setup of the chromatography system is shown in Figure 2. A schematic flow sheet is given in Figure 3. The Smartline Manager and Pump constitute a computer-controlled gradient system which can deliver step concentration gradients from the feeds A, B, C and D. The Vertex Plus Column is a preparative chromatography column for reversed-phase applications in the analytical as well as the preparative range. Stepwise concentration changes are delivered at a constant flow rate to the column. The concentration values (hereafter also referred to as input actions) are generated in the Matlab programming environment (MathWorks, Natick, MA). At the column outlet, the Smartline ultraviolet (UV) Detector continuously measures the sum of all concentrations. These concentrations are processed in Matlab.


Figure 2: Experimental setup of the chromatography system.

Figure 3: Schematic flow sheet of the chromatography system.


All instruments and the column are from KNAUER Wissenschaftliche Geräte GmbH, Berlin, Germany. The following list gives names and specifications:

• Smartline Manager 5050 and Pump 1050 with low-pressure gradient (LPG) module

• Vertex Plus Column 125 x 4.6 mm Eurospher II 100-5 C18, with a length of 125 mm, an inner diameter of 4.6 mm, a total porosity of 0.68, a pore size of 100 Å and a particle size of 5 µm [? ]

• Smartline UV Detector 2500 equipped with a deuterium lamp and operated at a wavelength of 284 nm; the analog output signal is scaled to a maximum value of 1 AU

The communication between the Smartline Pump and Manager and Matlab is established over a standard RS-232 serial port using Matlab's instrument control toolbox. The continuous measurement from the Smartline UV Detector is first processed in the Freelance Controller AC 700F using the detector's analog output signal and the Analog Input/Output Module AX 722F (ABB, Zurich, Switzerland). The communication between the Freelance controller and the Matlab software is established using the Open Process Control (OPC) software interface. The detector's output signal s_UV is given in arbitrary units (AU) and transformed to concentration values (measurements υ, see Eq. (2)) given in mol/l using the calibration curve in Eq. (17):

$$\upsilon = a \exp\left[-\left(\frac{\tilde{s}_{UV} - b}{c}\right)^2\right] - a \exp\left[-\left(\frac{b}{c}\right)^2\right], \qquad \tilde{s}_{UV} = s_{UV} + d \tag{17}$$

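The signal-to-concentration transformation of Eq. (17) can be sketched as a small function, using the average calibration parameters a, b, c, d reported in the text.

```python
import numpy as np

# Sketch of the calibration curve, Eq. (17): detector output s_UV (in AU)
# mapped to a sum concentration υ (in mol/l). Parameter values are the
# averages stated in the text.
a, b, c, d = 8.0917e-1, 1.2277, 5.5536e-1, 3.47e-2

def uv_to_concentration(s_uv):
    s = s_uv + d                                      # shifted signal s̃_UV
    return a * np.exp(-((s - b) / c) ** 2) - a * np.exp(-(b / c) ** 2)

signal = np.linspace(0.0, 1.0, 5)                     # detector range 0..1 AU
conc = uv_to_concentration(signal)
```

Over the detector range of 0 to 1 AU the curve is monotonically increasing, so the mapping is invertible in practice.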

Note that υ is the sum concentration of all components. Thus, a measurement of individual components is not possible, and in Eq. (17) an average set of calibration parameters a, b, c, d was used, with a = 8.0917 × 10⁻¹, b = 1.2277, c = 5.5536 × 10⁻¹, d = 3.47 × 10⁻². The errors of the output signal are assumed to be Gaussian and uncorrelated with a standard deviation σ = 0.01 mol/l. The measurement standard deviations σ_i, with each i representing an individual diagonal element in the measurement standard deviation matrices Z_k^-, Z_k^0, Z_k^+, are set accordingly, see section 3.

4.2. Operation


All experiments were carried out at a constant temperature of 23 °C and a feed flow rate of 1.5 mL/min. Two different feeding strategies (FS-1 and FS-2) have been considered, see Table 1. They differ in the way the feeds A, B, C, D (see Figures 2 and 3) were prepared. For both FS-1 and FS-2, feed A contains eluent only. For FS-1, feed B contains eluent and a mixture of all three benzoates. For FS-2, the benzoates are individually prepared in feeds B, C and D.


Table 1: Different feeding strategies (FS) for the chromatography system. Abbreviations: ethyB = ethyl benzoate, propB = propyl benzoate, butyB = butyl benzoate.

feeding strategy | channel (ratio %) | eluent | ethyB [mol/l] | propB [mol/l] | butyB [mol/l] | sum concentration range [mol/l]
FS-1 | A (0···100) | yes | – | – | – | 0.0···0.636
     | B (0···100) | yes | 0.212 | 0.212 | 0.212 |
     | C | – | – | – | – |
     | D | – | – | – | – |
FS-2 | A (0···100) | yes | – | – | – | 0.0···0.636
     | B (0···33.3) | yes | 0.636 | – | – |
     | C (0···33.3) | yes | – | 0.636 | – |
     | D (0···33.3) | yes | – | – | 0.636 |

It has to be noted that in FS-2 the maximal individual feed concentration delivered by the manager and pump (i.e. the concentration of ethyB, propB, butyB) was restricted to 0.212 mol/L. Accordingly, the maximal sum concentration is 0.636 mol/L, the same value as for FS-1. This value has been found critical for the miscibility of the benzoates in the eluent. The specific chemicals used in this case study are:

• eluent: prepared from 80% methanol CH4O (CAS registry number: 67-56-1) and 20% demineralized water H2O,


• ethyB: ethyl benzoate C9H10O2 (CAS registry number: 93-89-0),


• propB: propyl benzoate C10H12O2 (CAS registry number: 2315-68-6),

• butyB: butyl benzoate C11H14O2 (CAS registry number: 136-60-7).

5. Mathematical model of the chromatography system


The units of the process model are shown in Figure 4.

5.1. Manager and pump

The experimental data which has been used for the identification of the model for the manager and pump is shown in Figure 5. A PT2-T0 step response model (second-order lag plus dead time; transfer function in the Laplace domain) was selected, see Eq. (18), which describes the dynamic response of the chromatography column inlet concentration c_i^{in} to step changes in the feed concentrations c_i^{feed}, see Figure 4.

$$G_i(s) = \frac{K\,e^{-T_0 s}}{(1 + T_1 s)(1 + T_2 s)} \qquad \forall\; i \in \{ethyB,\, propB,\, butyB\} \tag{18}$$

For all components i the same coefficients have been assumed. The implementation in the time domain yields a sparse system of 180 ordinary differential equations, see Appendix A.

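The time-domain realization of Eq. (18) can be sketched as two first-order lags in series plus a dead time. The coefficient values below are illustrative placeholders, not the identified pump parameters.

```python
import numpy as np

# Step response of the PT2-T0 model in Eq. (18), integrated with a simple
# explicit Euler scheme. K, T0, T1, T2 are illustrative values (seconds).
K, T0, T1, T2 = 1.0, 5.0, 8.0, 4.0      # gain, dead time, lag time constants
dt, t_end = 0.01, 120.0
t = np.arange(0.0, t_end, dt)

x1 = np.zeros_like(t)                   # first lag state
x2 = np.zeros_like(t)                   # second lag state = inlet concentration
for i in range(1, t.size):
    u_delayed = 1.0 if t[i] - T0 >= 0.0 else 0.0   # delayed unit step input
    x1[i] = x1[i-1] + dt * (K * u_delayed - x1[i-1]) / T1
    x2[i] = x2[i-1] + dt * (x1[i-1] - x2[i-1]) / T2
```

The response stays at zero for the dead time T0 and then rises monotonically to the gain K, which is the qualitative behavior seen in the pump step experiments of Figure 5.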


Figure 4: Units of the process model, input/output variables and unknown model parameters.

5.2. Chromatography column

The equilibrium dispersive model is well established in liquid chromatography [? ? ]. This model yields convection-diffusion partial differential equations (PDEs) with dominating convective terms. The corresponding differential mass balances for each component i read:


$$\frac{\partial c_i}{\partial t} + \frac{1-\varepsilon}{\varepsilon}\,\frac{\partial q_i(c)}{\partial t} + u\,\frac{\partial c_i}{\partial z} = D_{ax,i}\,\frac{\partial^2 c_i}{\partial z^2} \qquad \forall\; i \in \{ethyB,\, propB,\, butyB\} \tag{19}$$

where c and q are the concentrations in the liquid and solid phase, respectively, u is the interstitial velocity of the liquid phase in the column, z is the spatial coordinate in the axial direction and ε is the total porosity. D_{ax,i} is the axial dispersion coefficient, which is assumed to be equal for all components. An equilibrium relationship between the liquid and solid phase is assumed and defined by a nonlinear isotherm of the competitive Langmuir type:

$$q_i(c) = \frac{H_i\, c_i}{1 + \sum_j K_j\, c_j} \tag{20}$$

with H_i being the Henry constants and K_i the thermodynamic coefficients. The following initial conditions assume a clean column at the beginning of each experiment:

$$c_i(t,z)\big|_{t=0} = 0 \tag{21}$$
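The competitive Langmuir isotherm of Eq. (20) can be evaluated directly. The parameter values below are the 'true' estimates reported later in Table 2, in the ordering of the parameter vector of Eq. (24).

```python
import numpy as np

# Eq. (20): competitive Langmuir isotherm for the three benzoates.
# H and K taken from the 'true values' row of Table 2.
H = np.array([5.9950, 2.9082, 0.4685])      # Henry constants (ethyB, propB, butyB)
K = np.array([4.0995, 0.6396, 0.4819])      # thermodynamic coefficients

def langmuir(c):
    """Solid-phase loadings q_i(c) for liquid concentrations c (mol/l)."""
    return H * c / (1.0 + np.dot(K, c))
```

Note the competitive effect: the loading of each component is reduced by the presence of the others through the shared denominator.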


Figure 5: Responses of the manager and pump outlet concentrations (equal to the column inlet concentrations c_i^{in}) for arbitrary steps in the feed concentrations c_i^{feed}. The flow is kept constant at 1.5 ml/min.

The Danckwerts inflow and outflow boundary conditions are given as:

$$D_{ax,i}\,\frac{\partial c_i}{\partial z}\bigg|_{z=0} = u\left(c_i|_{z=0} - c_i^{in}\right), \qquad D_{ax,i}\,\frac{\partial c_i}{\partial z}\bigg|_{z=L} = 0 \tag{22}$$
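Eqs. (19), (21) and (22) admit a compact method-of-lines sketch. To keep it short, a single component with a linear isotherm q = H c replaces the competitive Langmuir isotherm of Eq. (20), and all numbers are illustrative, dimensionless placeholders, not the identified column parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Method-of-lines discretization of Eq. (19) with Danckwerts BCs, Eq. (22),
# and a clean column at t = 0, Eq. (21). Single component, linear isotherm.
N, Lz = 50, 1.0                      # grid cells, column length
dz = Lz / N
u, Dax, eps, H = 1.0, 0.01, 0.68, 3.0
ret = 1.0 + (1.0 - eps) / eps * H    # retention factor from dq/dt = H dc/dt

def rhs(t, c, c_in):
    # inflow ghost cell from Dax (c[0]-c_ghost)/dz = u (c[0]-c_in)
    c_ghost = c[0] - dz * u * (c[0] - c_in) / Dax
    left = np.concatenate(([c_ghost], c[:-1]))   # c_{i-1}
    right = np.concatenate((c[1:], [c[-1]]))     # c_{i+1}, zero outlet gradient
    conv = u * (c - left) / dz                   # first-order upwind
    diff = Dax * (left - 2.0 * c + right) / dz**2
    return (diff - conv) / ret

c0 = np.zeros(N)                     # Eq. (21): clean column
sol = solve_ivp(rhs, (0.0, 6.0), c0, args=(1.0,), method="BDF")
outlet = sol.y[-1]                   # concentration history at z = L
```

For a step at the inlet, the outlet concentration breaks through after roughly the retarded residence time and then saturates, the qualitative behavior exploited in the experiments.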

where L is the column length. The discretization in space yields another sparse system of 180 ordinary differential equations, see Appendix A.

5.3. Assignment of variables

In the following, the variables of the mathematical model of the chromatography system in section 5 and Figure 4 are assigned to the general problem formulation in Eq. (1). The predicted response variables are:

$$y := \sum_i c_i\big|_{z=L}, \qquad i \in \{ethyB,\, propB,\, butyB\} \tag{23}$$


In Eq. (23), y is the sum of all individual concentrations at the column outlet. It corresponds to the measured concentrations υ in Eq. (17). The unknown model parameters θ are taken from Eq. (20):

$$\theta := \left[H_{ethyB},\, K_{ethyB},\, H_{propB},\, K_{propB},\, H_{butyB},\, K_{butyB}\right]^T \tag{24}$$

The experiment design variables/input actions u are the desired feed concentrations which are delivered by the manager and pump, see Figure 4:

$$u := \left[c_{ethyB}^{feed},\; c_{propB}^{feed},\; c_{butyB}^{feed}\right]^T \tag{25}$$

6. Results

The input design is defined by a temporal sequence of variable feed concentrations, see section 4.2. In FS-1 the ratio of the three species concentrations is kept constant. In FS-2 the three species concentrations are defined individually; thus, the number of degrees of freedom is three times that of FS-1. As discussed in section 4.1, only the sum concentration of all species is measured at the column outlet. The parameters of the finite time horizon scheme (see section 3) are as follows:


• measurement sampling times: 5 sec,
• prediction horizon: 10 min,


• control grid: 20 sec,

• control horizon: 3 min.
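The finite-horizon scheme can be illustrated with a toy receding-horizon loop on a deliberately simple two-parameter model. This is a hypothetical sketch, not the chromatography model: the candidate-input grid, the noise-free measurements and the single damped Gauss-Newton update per sample are all simplifying assumptions.

```python
import numpy as np

# Toy illustration of the receding-horizon idea: alternate between a
# D-optimal choice of the next input and one damped Gauss-Newton update
# of the parameters. Model, inputs and data are hypothetical.
true_theta = np.array([2.0, -0.5])
theta = np.array([1.0, 1.0])               # poor initial guess
candidates = np.linspace(0.1, 1.0, 10)     # admissible input levels
fim = 1e-6 * np.eye(2)                     # accumulated Fisher information
lam = 0.02                                 # damping / Tikhonov weight
J_rows, y_meas = [], []

def sens(u):
    # sensitivities dy/dtheta of the toy model y = th1*u + th2*u**2
    return np.array([u, u**2])

for k in range(15):
    # experiment (re)design: next input maximizing det of the predicted FIM
    u = max(candidates,
            key=lambda v: np.linalg.det(fim + np.outer(sens(v), sens(v))))
    s = sens(u)
    fim += np.outer(s, s)
    y_meas.append(true_theta @ s)          # noise-free 'measurement'
    J_rows.append(s)
    # parameter estimation: a single damped Gauss-Newton iteration,
    # warm-started at the previous estimate (as in the text)
    J = np.array(J_rows)
    r = np.array(y_meas) - J @ theta
    theta = theta + np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)

print(theta)  # approaches true_theta as information accumulates
```

The design step automatically alternates between input levels that excite complementary parameter directions; with only one iteration per sample, the estimate still converges because information accumulates over the horizon.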

The control horizon was defined to reduce the number of future input actions to be optimized and thereby the computation times. This results in 9 and 27 control variables for FS-1 and FS-2, respectively. In order to assess the performance of the adaptive optimal input design in terms of the achieved parameter accuracy and precision, different standard/heuristic (non-optimal) input designs, namely:

• a sum of sinusoids (sine),
• rectangular pulses (pulse),
• uniformly distributed random signals (uniform),
• random binary signals (RBS),
• pseudorandom binary signals (PRBS),

as well as an offline optimal input design (non-adaptive, full horizon) were considered, namely:

• offline optimal input design (VPLAN optimal).

These input designs were generated for FS-1 and FS-2 using the same control grid and concentration range as for the adaptive optimal input design. The VPLAN optimal design is computed offline using a full horizon scheme (30 min). The number of control variables is then 90 and 270 for FS-1 and FS-2, respectively. For the parameter estimation, the initial guess and 'true' parameter values are given in Table 2. Note that the 'true' values were obtained from offline estimations and correspond to the best available estimates with the smallest residual value.

Table 2: Initial guess and 'true' values of the parameter estimation problem. 'True' values refers to the best estimates obtained from offline estimation.

parameter        θ1      θ2      θ3      θ4      θ5      θ6
initial guess    1.0000  1.0000  1.0000  1.0000  1.0000  1.0000
'true' values    5.9950  4.0995  2.9082  0.6396  0.4685  0.4819


Numerical solution. All computations were carried out on an Intel(R) Core(TM)2 (CPU 6600 @ 2.40 GHz) computer with 4 GB RAM. Parallel programming was not used. The PE and ED problems were solved by single shooting using Matlab's Optimization Toolbox solvers lsqnonlin/trust-region-reflective and fmincon/sqp, respectively. To limit the computational effort, in the real-time optimization the maximum number of iterations of both solvers was restricted to one, and the solvers were restarted at each sampling time point using the last results as initial values. Model and parameter sensitivity equations were integrated using CVODES with the sparse direct solver from the SundialsTB Toolbox [? ]. The computed parameter sensitivities were used to calculate the gradients of the PE problem and the Fisher Information Matrix (FIM). The gradients of the ED were supplied using finite differences. The Jacobian and directional derivatives of the model equations were generated using Tapenade [? ]. The offline optimal experimental designs "VPLAN optimal" were computed using VPLAN on a PC with an Intel(R) Core(TM) i7 CPU 970 processor clocked at 3.20 GHz and 24 GB RAM. Because of the large number of control variables, the adjoint optimization approach from [? ] was chosen. To solve the spatially discretized PDE, the BDF integrator DAESOL-II [? ] was employed. The occurring large sparse linear systems in the corrector iteration of DAESOL-II were solved by the direct sparse method UMFPACK 5.2. Tapenade 3.6 generated the required mixed forward and adjoint derivatives of the Fortran 77 model functions.


6.1. Comparison of regularization strategies using the example of a fixed input design


The usefulness of different regularization strategies was studied in order to increase the robustness of the real-time parameter estimation against wrong initial parameter estimates and scarce measurement information. The aim is to provide a smooth transition from the initial parameter guess at the beginning of the experiment to the final estimate at the end. Thus, the relative error in the parameter estimates should decrease monotonically, with fast convergence to the true parameter values. It has to be noted that deviations from the true parameter values, as well as stability problems in the course of the parameter estimation, also affect the experiment redesign and can yield ineffective or meaningless designs. The robustness of the real-time parameter estimation was tested 'in silico' for various standard/heuristic input designs (pulse and uniform) and the Tikhonov and subset selection (SsS) based regularizations. The corresponding regularization parameters γmax (SsS) and λ (Tikh) were carefully tuned. Finally, the following options were considered:


• No regularization (NONE)
• Subset Selection (SsS) with γmax = 50 · σi and κmax = 1000
• Tikhonov regularization (Tikh) with λ = 0.02
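As a sketch of the Tikhonov option, the following toy estimation augments each Gauss-Newton step with a λ-weighted penalty toward the previous iterate; the two-parameter exponential model, the noise-free data and the step-halving safeguard are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical two-parameter model y = th1 * exp(-th2 * t), noise-free data.
t = np.linspace(0.0, 5.0, 50)
y_meas = 2.0 * np.exp(-0.5 * t)
theta = np.array([1.0, 1.0])    # poor initial guess
lam = 0.02                      # Tikhonov weight, as quoted in the text

def cost(th):
    return np.sum((y_meas - th[0] * np.exp(-th[1] * t)) ** 2)

for _ in range(100):
    e = np.exp(-theta[1] * t)
    J = np.column_stack([e, -theta[0] * t * e])      # dy/dtheta
    r = y_meas - theta[0] * e
    # Gauss-Newton step of ||r||^2 + lam*||theta - theta_prev||^2; with
    # theta_prev equal to the current iterate the penalty gradient vanishes
    # and lam acts as a Levenberg-Marquardt style damping of the step:
    step = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)
    while cost(theta + step) > cost(theta) and np.linalg.norm(step) > 1e-12:
        step *= 0.5                                  # step-halving safeguard
    theta = theta + step
```

In an `lsqnonlin`-style interface the same effect can be obtained by appending the weighted residuals sqrt(λ)·(θ − θ_prev) to the measurement residual vector.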


For these three options, the performance of the real-time parameter estimation is assessed in terms of the parameter accuracy. The initial parameter guess values were set to one, which corresponds to relatively large deviations from the true parameter values, see Table 2. Figure 6 shows the course of the relative error for the FS-1 adaptive optimal experiment. Table 3 shows the corresponding values at the end of the experiment. The relative error is calculated as the relative deviation from the 'true' parameter values. If no regularization is used, instabilities can lead to large variances in the course of the parameter estimates, see Figure 6 (a). This is especially true at the beginning of the experiment, when the experimental data is scarce, and if a dead time exists between the measured responses and the input design variables. For this case study the dead time is about 3-4 min. In Figure 6 (a), between minutes 3 and 5, the error in the estimates increases up to 10,000%. This large error persists for about 15 min, during which the estimation does not recover. As a result, the estimation after 30 min is poor, see also the values in Table 3. For the subset selection (SsS) based regularization the maximal error is much smaller. Still, it increases along the experiment run and sometimes exceeds the initial error. This can be seen in Figure 6 (b) around minutes 5 and 18. Moreover, the available measurement does not provide sufficient information for the identifiability of the whole parameter space, see the subfigure 'parameter activity'. However, it can also be seen that the real-time estimation recovers and the final estimates show only a small error (Table 3).
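One simple subset-selection variant can be sketched as follows: parameter columns of the sensitivity matrix are activated greedily (largest norm first) as long as the conditioning of the selected submatrix stays below κmax. The synthetic matrix below and the greedy ordering are illustrative; the γmax magnitude test from the text is omitted for brevity.

```python
import numpy as np

def select_subset(S, kappa_max=1000.0):
    """Greedy subset selection on a sensitivity matrix S
    (rows: samples, columns: parameters)."""
    order = np.argsort(-np.linalg.norm(S, axis=0))   # by column norm, descending
    active = []
    for j in order:
        trial = active + [j]
        # keep the parameter only if the selected columns stay well conditioned
        if np.linalg.cond(S[:, trial]) < kappa_max:
            active.append(j)
    return sorted(active)

# Synthetic example: the third column is a linear combination of the first two,
# so the corresponding parameter is not identifiable and stays inactive.
S = np.array([[3.0, 0.0, 1.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
print(select_subset(S))  # -> [0, 1]
```

The inactive parameters are then frozen at their current values during the estimation step, which is exactly why a poor initial guess for them cannot be improved, as discussed in section 6.5.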


Table 3: Relative error at the end of the experiment (@ t=30 min) for the real-time parameter estimation in Figure 6 using different regularization strategies.

regularization   θ1       θ2        θ3      θ4       θ5       θ6
NONE             50.71%   137.10%   3.40%   43.30%   73.83%   62.42%
SsS              0.03%    0.03%     0.00%   0.49%    0.90%    2.22%
Tikh             0.02%    0.06%     0.02%   0.56%    1.11%    2.82%

Finally, Tikhonov regularization showed the smoothest convergence of the parameter estimation, with relatively small variations in the parameter values, see Figure 6 (c). It can be seen that the error never exceeds the initial error and is continuously reduced for most parameters. Moreover, the final estimates are close to the 'true' parameter values, see the values in Table 3.


[Figure: resultsParEst_Reg_02dDC.pdf]

Figure 6: Relative error during real-time parameter estimation using different regularization strategies. The subfigure named 'parameter activity' shows the results of the identifiability analysis in the SsS for parameters θ1, …, θ6. If a parameter is identifiable and selected by the subset selection (SsS) algorithm, it is active and its activity is indicated by a dot. If a dot is missing, the parameter is not active and not identifiable. For these test scenarios the input design is fixed (non-adaptive) and taken from Figure 7 (c) - FS-1 adaptive optimal.

6.2. Adaptive optimal input designs


The results of the optimal input design are presented for FS-1 and FS-2 in Figures 7 and 8, respectively. The initial parameter guess values were set to one. It turned out that the Tikhonov regularization was able to realize a relatively smooth transition to the final parameter values, indicating robustness and stable convergence. The same applies to the repeated solution of the ED problem, where only the identifiable parameter subset is considered in the problem formulation.


[Figure: plotDiffDesigns_D_zus.pdf]

Figure 7: FS-1 adaptive optimal∗; adaptive optimal input design for feeding strategy FS-1. Subfigures a, b show the measured sum and the predicted individual outlet concentrations. Subfigure c shows the input design, i.e. the inlet concentrations; real experiment.


[Figure: plotDiffDesigns_D_single.pdf]

Figure 8: FS-2 adaptive optimal∗; adaptive optimal input design for feeding strategy FS-2. Subfigures a, b show the measured sum and the predicted individual outlet concentrations. Subfigure c shows the input design, i.e. the inlet concentrations; real experiment.

6.3. Comparison of adaptive optimal input designs with standard/heuristic and offline optimal input designs


In the following, the adaptive optimal input designs are compared to standard/heuristic and offline optimal input designs in terms of the achieved parameter accuracy (Table 4) and precision (Table 5). The offline computed results from VPLAN are, of course, not directly comparable to the adaptive approaches, since the underlying parameter estimate θk changes over time in adaptive schemes, whereas it does not in a fixed, offline design. The VPLAN results are included to provide a reference for what is achievable by an experimental design optimization in terms of parameter precision.


Table 4: Parameter accuracy (relative error @ t=30 min) for different input designs and feeding strategies FS-1 and FS-2. All parameter estimations were realized experimentally in real-time.

Table 5: Parameter precision (D-optimal criterion and standard deviation @ t=30 min), evaluated for the 'true' parameter values in Table 2, for different input designs and feeding strategies FS-1 and FS-2. Input designs marked by a star were realized experimentally in real-time; all others are 'in silico' experiments.
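The precision measures reported in Table 5 can be computed from the parameter sensitivities. A minimal sketch, assuming i.i.d. measurement noise and one common det-based normalization of the D-criterion (the exact criterion definition used in the paper may differ), is:

```python
import numpy as np

def precision_measures(S, sigma):
    """Precision measures from a sensitivity matrix S
    (rows: sampling instants, columns: dy/dtheta_j)."""
    n_par = S.shape[1]
    fim = S.T @ S / sigma**2                    # Fisher Information Matrix
    C = np.linalg.inv(fim)                      # covariance approximation
    d_crit = np.linalg.det(C) ** (1.0 / n_par)  # D-criterion (to be minimized)
    std = np.sqrt(np.diag(C))                   # parameter standard deviations
    return d_crit, std

# Illustrative check with an orthogonal sensitivity matrix:
d_crit, std = precision_measures(2.0 * np.eye(3), sigma=1.0)
print(d_crit, std)  # 0.25, [0.5 0.5 0.5]
```

Maximizing the information content of an input design thus amounts to shaping S, via the excitation, so that det(FIM) grows and the diagonal of C shrinks.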

As expected, compared to most standard/heuristic input designs, the adaptive optimal designs can significantly improve both parameter accuracy and precision. Moreover, the individual concentration feeding FS-2 is superior to feeding with an equal concentration ratio (FS-1), as FS-2 provides more informative measurement data. Comparing the results for FS-2 adaptive optimal inputs with the widely used FS-1 pulse inputs, a large reduction in both parameter standard deviation and relative error is achieved: the maximum standard deviation is reduced from 11.95% to 3.51% and the maximum relative error from 61.09% to 1.64%. However, for both feeding strategies, the standard/heuristic RBS and PRBS inputs and the VPLAN optimal inputs outperform the adaptive optimal designs, which indicates the potential for further improvements in the implementation of the adaptive optimal strategy. These come from restrictions in the numerical solution, i.e. the optimum is not found due to the limit on the number of iterations and poor initial guesses. Figures 9, 10, 11 and 12 show the results for different input designs for FS-2. The figures are ordered according to the achieved parameter precision. The most informative experiment is shown in Figure 12, which also presents the most exciting trajectory of inputs (subfigure c) and the corresponding largest excitations in the outlet concentrations (subfigures a, b).
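The RBS/PRBS signals used in the comparison are standard identification inputs; a maximum-length PRBS can be generated with a linear feedback shift register. The sketch below uses a 4-bit register (period 2^4 − 1 = 15) and maps the binary sequence to two hypothetical concentration levels.

```python
import numpy as np

def prbs(n_steps, levels=(0.0, 1.0)):
    """Pseudorandom binary signal from a 4-bit Fibonacci LFSR
    (feedback polynomial x^4 + x^3 + 1, maximum-length period 15)."""
    lfsr = 0b0001
    out = []
    for _ in range(n_steps):
        out.append(levels[lfsr & 1])           # output the least significant bit
        fb = ((lfsr >> 3) ^ (lfsr >> 2)) & 1   # feedback from taps 4 and 3
        lfsr = ((lfsr << 1) & 0b1111) | fb
    return np.array(out)

u = prbs(30)
```

In an experiment each binary value would be held for one control-grid interval (20 s in this study); longer registers give longer periods and a flatter excitation spectrum.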

[Figure: plotDiffDesigns_sine_single.pdf]

Figure 9: FS-2 sine; input design (subfigure c) and outlet concentrations (subfigures a, b) for feeding strategy FS-2 and a standard/heuristic input design generated by a sum of sinusoids; 'in silico' experiment.


[Figure: plotDiffDesigns_uniform_single.pdf]

Figure 10: FS-2 uniform; input design (subfigure c) and outlet concentrations (subfigures a, b) for feeding strategy FS-2 and a standard/heuristic input design generated by uniform sampling; real experiment.


[Figure: plotDiffDesigns_prbs_single.pdf]

Figure 11: FS-2 PRBS; input design (subfigure c) and outlet concentrations (subfigures a, b) for feeding strategy FS-2 and a standard/heuristic input design generated by a PRBS; 'in silico' experiment.


[Figure: plotDiffDesigns_1opt_VPLAN.pdf]

Figure 12: FS-2 VPLAN optimal; input design (subfigure c) and outlet concentrations (subfigures a, b) for feeding strategy FS-2 and an offline optimal input design generated by VPLAN; 'in silico' experiment.


6.4. Validation of the parameter estimates by Frontal Analysis


The presented determination of adsorption isotherm parameters by parameter estimation is usually referred to as the inverse or indirect method. In the inverse method, model predictions are fitted to experimental data to determine the parameters of a preselected isotherm model. An alternative, well-established method is the so-called frontal analysis (FA), a direct chromatographic method for the determination of adsorption isotherms [? ]. In the following, FA (staircase increment method) is used to validate the estimated Langmuir isotherm model parameters. In contrast to the inverse method, FA is model independent, i.e. the shape of the determined isotherm is not predefined by a selected isotherm model structure. However, for the determination of multi-component adsorption isotherms (especially for more than two components), FA also requires a large amount of experimental data. To limit the corresponding laboratory work, the validation is carried out for single-component adsorption only, i.e. the competitive isotherm behavior is not considered. In FA, a solution of the studied component, at a known, constant concentration, is percolated through the column. Successive step changes of increasing concentration are performed at the column inlet and the breakthrough curves (transient concentration profiles) at the column outlet are recorded. For each new inlet concentration, the mass of solute adsorbed at equilibrium is determined from the integral of the breakthrough curve [? ]. In doing so, the adsorption isotherms are directly determined from the available measurement data, a mass balance equation and geometric data of the column. The results of the validation are shown in Figure 13 for both feeding strategies, FS-1 and FS-2. It can be seen that butyl benzoate shows the strongest adsorption, while propyl and ethyl benzoate are adsorbed more weakly. Moreover, the model predictions for the parameter estimates obtained using adaptive optimal designs are close to the calculated equilibrium points from FA. Larger deviations exist for the parameter estimates obtained from standard/heuristic designs (pulse and uniform, repeated execution). These results are confirmed by the analysis of the accuracy and precision of the estimated parameter values shown above.
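The FA mass balance for a single concentration step can be sketched as follows. The column data, step height and idealized breakthrough curve below are hypothetical (real curves are dispersed), but the bookkeeping is the same: integrate the retained mass, subtract the solute held in the liquid phase, and divide by the solid-phase volume.

```python
import numpy as np

# Hypothetical column and step data
F, V_col, eps = 1.5, 3.0, 0.7         # flow [ml/min], column volume [ml], total porosity
c_in = 2.0                             # inlet step height [g/l]

t = np.linspace(0.0, 20.0, 2001)       # time grid [min]
c_out = np.where(t < 6.0, 0.0, c_in)   # idealized breakthrough at t_R = 6 min

# mass retained in the column = flow times the integral of (c_in - c_out)
f = c_in - c_out
m_ret = F * np.sum(0.5 * (f[:-1] + f[1:]) * np.diff(t))   # trapezoidal rule

# subtract the solute held in the liquid (void) phase; divide by solid volume
q_star = (m_ret - eps * V_col * c_in) / ((1.0 - eps) * V_col)
print(q_star)
```

Repeating this for a staircase of inlet concentrations yields one equilibrium point (c_in, q_star) per step, which are exactly the points plotted against the fitted Langmuir isotherms in Figure 13.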


[Figure: results_FA_invMethod.pdf]

Figure 13: Validation of the parameter estimates for single-component adsorption. Adsorption isotherms obtained by FA are shown as calculated equilibrium points. Predicted adsorption isotherms using the Langmuir model are shown as lines. Predictions are made using parameter estimates from FS-1 and FS-2 adaptive optimal designs and from standard/heuristic input designs (uniform and pulse, repeated execution).


6.5. Discussion

We presented the experimental realization of an adaptive optimal input design for parameter determination in liquid chromatography. Competitive Langmuir isotherm parameters of a three-component mixture were identified by fitting model predictions to measured concentrations at the column outlet for a given experiment. The accurate and precise determination of the parameters was confirmed by comparison of the predicted isotherms with results from frontal analysis (FA) and by a statistical analysis. The proposed approach may be useful to optimize high-throughput screening techniques for the acceleration of downstream bioprocess development. In liquid chromatography the fitting, i.e. the estimation of parameters of a specific isotherm model by solution of a nonlinear least squares problem, is known as the inverse method. Using an optimal adaptive experimental input design, the information in the measurement data, and thus the precision of the parameter estimates, is maximized. As a result, compared to other established methods, see e.g. frontal analysis [? ], this approach can reduce the experimental effort significantly. This is especially true if mixtures with more than two components need to be analyzed. Moreover, for the presented case study, all adsorption parameters could be identified using a non-selective UV concentration detector which only provided the sum concentration of all components. This is a big advantage,

as conventional established methods require individual concentration measurements, which are normally not available, see e.g. [? ].

The proposed approach is based on continuous feeding, which tends to consume a relatively large amount of feed. Therefore, it may not be the first choice in early development stages, especially in the pharmaceutical industry. An alternative approach is pulse response tests (batch tests), where only small amounts of feed are supplied as pulse inputs, followed by elution with desorbent [? ]. However, the determination of isotherm parameters at higher concentrations may then become difficult. Further work should be directed to the inclusion of restrictions on the availability of the feed mixture in the design problem formulation.

It has been discussed that numerical regularization is crucial in order to stabilize the parameter estimation, i.e. to minimize the bias and variations in the parameter estimates during real-time estimation, and to compute meaningful experiment design criteria. This is especially true at the beginning of an experiment, where measurement information is scarce, no prior information about the parameters is available (e.g. from preliminary screening experiments), and the sensitivity matrix is ill-conditioned. However, a careful tuning of the corresponding regularization parameters is required. In this case study the Tikhonov regularization outperformed the parameter subset selection based regularization. We believe that this is because Tikhonov regularization is based on a smooth weighting between the initial parameter guess and the best available estimate (the result from the last iteration), as well as between identifiable and non-identifiable parameters. In contrast, subset selection completely excludes non-identifiable parameters from the problem, so these parameters cannot be improved. Accordingly, if the initial guess of these non-identifiable parameters is far from the true value, it may contribute more strongly to a bias in the estimates of the remaining active parameters than in Tikhonov regularization. However, parameter selection methods such as subset selection are attractive from a practical viewpoint, as they provide useful information for the monitoring of adaptive design strategies, such as the importance of individual parameters and the correlations between them.

In the receding horizon implementation scheme, the number of experiment design variables and nonlinear iterations is reduced in order to compute all nonlinear solutions within each sampling instant. Despite an improvement of the experiment design compared to the initial design, the result is not as good as with standard/heuristic PRBS signals. Thus, a more efficient numerical implementation is desirable, based on exact first-order gradient information of the design objective, i.e. by adjoint based optimization, see [? ], and using an increased prediction horizon. In this case study, the measurement sampling and the frequency of the excitation signal, i.e. the switching instants of the input variables, were chosen based on basic knowledge of the process dynamics. Further work should be directed to considering both the sampling and the excitation over different frequency ranges, e.g. by parameterization of the input signals by harmonic functions, see [? ? ? ].


Appendix A. Transformation into ordinary differential equations

The model equations for the manager and pump in Eq. (18) are implemented as follows:

dcP,i,j/dt = (cP,i,j−1 − cP,i,j)/τj,   ∀ i ∈ {ethyB, propB, butyB}, j ∈ {1, …, Nn}   (A.1)

with cP,i,0 = ci^feed defined by the feed concentration. The outlet concentration cP,i,Nn equals the inlet feed concentration of the chromatography column ci^in. The dead time e^(−T0 s) in Eq. (18) has been approximated by a series of Nn − 2 first-order low-pass filters. For a volumetric flow of 1.5 ml/min the following parameter values have been used: τj = 4.72551 × 10^(−3) min for j = 1, …, Nn − 2; τNn−1 = 1.0481 × 10^(−1) min; τNn = 1.0635 × 10^(−3) min and Nn = 60. The PDEs in Eq. (19) were discretized by a high-resolution finite volume scheme with flux limitation [? ]. To capture the discontinuous concentration shock fronts in the axial direction, NEls = 60 elements were chosen, which has proved to give sound results. For 3 components the complete equation system (Eqs. (18), (19)) consists of Neq = (Nn + NEls) · 3 = 360 differential equations.
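The dead-time approximation by a low-pass cascade can be checked with a short simulation; the dead time, filter count and explicit-Euler integration below are illustrative, not the fitted values from the text.

```python
import numpy as np

# A pure transport delay T0 is approximated by N identical first-order
# low-pass filters in series with tau = T0/N (cf. Eq. (A.1)).
T0, N = 0.3, 60                 # dead time [min], number of filters
tau = T0 / N
dt = 1e-4                       # explicit Euler step [min]
n_steps = 10000                 # simulate 1 min

c = np.zeros(N)                 # filter states
out = np.empty(n_steps)
for k in range(n_steps):
    inputs = np.concatenate(([1.0], c[:-1]))   # unit step feeds the chain
    c = c + dt * (inputs - c) / tau            # Euler update of all filters
    out[k] = c[-1]                             # chain outlet
```

With N = 60 the step response of the chain stays near zero for roughly T0 and then rises steeply; increasing N sharpens the approximation of the pure delay, at the cost of more states in the ODE system.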
