A V-curve criterion for the parameter optimization of the Tikhonov regularization inversion algorithm for particle sizing


Optics & Laser Technology 44 (2012) 1–5


Wei Liu*, Xianming Sun, Jin Shen

School of Electrical and Electronic Engineering, Shandong University of Technology, Zibo 255049, China

* Corresponding author. E-mail address: [email protected] (W. Liu).

Article history: Received 13 December 2010; Received in revised form 16 March 2011; Accepted 21 April 2011; Available online 8 June 2011

Keywords: Dynamic light scattering; Inversion algorithm; Optimum regularization parameter

Abstract

The regularization parameter plays an important role in applying the Tikhonov regularization method to recover the particle size distribution from dynamic light scattering experiments. The so-called V-curve, a plot of the product of the residual norm and the norm of the recovered distribution versus all valid regularization parameters, can be used to evaluate the result of inversion. Numerical simulation demonstrates that the V-curve can be applied to optimize the regularization parameter: the optimum parameter corresponds to the minimum of the V-curve. Simulation and experimental results show that stable distributions can be retrieved by Tikhonov regularization with the optimum parameter for unimodal particle size distributions.

1. Introduction

Dynamic light scattering (DLS) is a well-known method for measuring the size of submicron particles [1]. In a DLS experiment, fluctuating scattered light from particles suspended in a liquid is acquired using a photon counter, and the scattering intensity autocorrelation function is calculated with a digital correlator. The autocorrelation curve has an exponential form that contains information on the particle size distribution, and extracting that distribution from the autocorrelation data is known as inversion. The interpretation of data from monodisperse samples is relatively easy, but data from polydisperse samples are considerably more difficult to interpret, since inverting the light intensity autocorrelation function involves solving a Fredholm integral equation of the first kind. An exact solution exists only when there is no noise in the data and no rounding error during the inversion, so seeking a solution from noisy correlation data in practical measurements is an ill-posed inverse problem. For an ill-posed problem there is a whole set of solutions within a given error range; each member of the set can be regarded as an equally plausible estimate of the true distribution, yet some of them may differ greatly from one another. In order to obtain a best fit to the true distribution that is stable with respect to noise, the particle size distribution is often constrained and some related information is presumed. Based on these principles, many inverse methods have been developed, including the method of

cumulants [2] and its modified versions [3–5], the exponential sampling method [6], the regularization method [7], the non-negative least squares (NNLS) algorithm [8], truncated singular value decomposition (SVD) [9], and a more recently developed neural network approach [10]. The most common and well-known form of regularization is Tikhonov regularization [11]. The critical issue of the method is determining the optimum regularization parameter. Since it is often difficult in practice to fix the parameter prior to the calculation, an a posteriori (verification) approach is usually preferred, in which the regularization parameter is chosen according to some evaluation criterion during the calculation of the solution, such as the generalized cross-validation (GCV) criterion [12] or the L-curve criterion [13]. However, these criteria are complicated to apply. The purpose of this paper is to demonstrate an easy method for finding the optimum regularization parameter.

2. Theory

2.1. The Tikhonov regularization algorithm

In the DLS experiment, the intensity autocorrelation function g^{(2)}(t) can be measured and expressed in terms of the field autocorrelation function g^{(1)}(t) as

g^{(2)}(t) = 1 + \beta [ g^{(1)}(t) ]^2    (1)

where β is an instrumental constant. For a polydisperse sample, the field autocorrelation function can be represented by an integral over a


particle size distribution:

g^{(1)}(t) = \int_0^{\infty} f(R) \exp(-kt/R) \, dR    (2)

where R is the hydrodynamic radius of a particle, and k is given by

k = \frac{k_B T}{6\pi\eta} \left( \frac{4\pi n}{\lambda} \sin\frac{\theta}{2} \right)^2    (3)

where k_B is Boltzmann's constant, T is the absolute temperature, η is the dynamic viscosity, λ is the laser wavelength in vacuum, n is the refractive index, and θ is the scattering angle. The discrete form of g^{(1)}(t) may be written as

g^{(1)}(t_m) = \sum_{n=1}^{N} \exp(-k t_m / R_n) f(R_n)    (4)

which can be represented in matrix form as

g^{(1)}_m = H_{m,n} \cdot f_n    (5)

where g^{(1)}_m is the field autocorrelation datum corresponding to delay time t_m, f_n is the unknown particle size distribution for particle size R_n, scaled by the instrumental constant β, and H_{m,n} is a transfer matrix defined by

H_{m,n} = \exp(-k t_m / R_n)    (6)
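For concreteness, the following minimal Python sketch (an illustration, not the authors' code) builds the decay constant of Eq. (3) and the transfer matrix of Eq. (6); the default temperature, viscosity, refractive index, wavelength, and scattering angle are assumptions chosen to resemble a typical aqueous DLS setup.

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant (J/K)

def decay_constant(T=298.15, eta=0.89e-3, n_med=1.33,
                   wavelength=632.8e-9, theta_deg=90.0):
    """Decay constant k of Eq. (3); the defaults are illustrative assumptions
    (water at 25 degC, He-Ne laser, 90 degree scattering angle)."""
    q = 4.0 * np.pi * n_med / wavelength * np.sin(np.deg2rad(theta_deg) / 2.0)
    return K_B * T / (6.0 * np.pi * eta) * q**2   # units: m/s

def kernel_matrix(tau, radii, k):
    """Transfer matrix of Eq. (6): H[m, n] = exp(-k * tau_m / R_n)."""
    return np.exp(-k * np.outer(tau, 1.0 / np.asarray(radii)))
```

With H in hand, Eq. (5) is simply the matrix–vector product H @ f.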

Eq. (5) can be solved by the NNLS algorithm with the constraint f ≥ 0:

\min \| H f - g^{(1)} \|    (7)

However, the particle size distribution obtained in this way is very sensitive to noise in the autocorrelation data. To overcome this weakness, the Tikhonov regularization method is used to obtain a stable, best-fit size distribution from Eq. (5). In this method an identity matrix L and a regularization parameter α are introduced to modify Eq. (7) as

\min \left\{ \| H f - g^{(1)} \|^2 + \alpha \| L f \|^2 \right\}    (8)

Eq. (8) can be rewritten as

\min \left\| \begin{bmatrix} H \\ \sqrt{\alpha}\, L \end{bmatrix} f - \begin{bmatrix} g^{(1)} \\ 0 \end{bmatrix} \right\|^2    (9)

This constrained, linear, least-squares problem can be solved by a numerical analysis method [14].
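As an illustration of one way to solve the stacked system of Eq. (9) under the constraint f ≥ 0, here is a minimal sketch; note that the paper's reference [14] is a reflective Newton method, whereas SciPy's non-negative least squares is substituted here purely for demonstration.

```python
import numpy as np
from scipy.optimize import nnls

def tikhonov_nnls(H, g1, alpha):
    """Solve Eq. (9): minimize ||[H; sqrt(alpha)*L] f - [g1; 0]||^2 with f >= 0,
    where L is the identity matrix, using non-negative least squares."""
    m, n = H.shape
    A = np.vstack([H, np.sqrt(alpha) * np.eye(n)])  # stacked operator of Eq. (9)
    b = np.concatenate([g1, np.zeros(n)])           # stacked right-hand side
    f, _ = nnls(A, b)
    return f
```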

2.2. The V-curve criterion

Usually, two criteria can be used to evaluate the result of inversion. The first is the residual error between the original correlation data and the correlation function regenerated from the recovered size distribution, defined by

\sigma = \| \hat{g} - g \| = \sqrt{ \sum_{i=1}^{N} | \hat{g}_i - g_i |^2 }    (10)

where N is the number of data points, g is the original correlation data, and ĝ is the regenerated data according to

\hat{g} = 1 + ( H_{m,n} \hat{f}_n )^2    (11)

where f̂ is the recovered distribution. The residual error calculated according to Eq. (10) is shown in Fig. 1. The residual error is small and stable when the regularization parameter is small, which means that the measured and regenerated correlation curves coincide, but it increases suddenly once the parameter exceeds a particular value; there is clearly an inflection point in Fig. 1.

Fig. 1. Residual errors between the simulated and fitted correlation data (semi-logarithmic plot); the regularization parameter varies from 0.001 to 100.

The second criterion is the norm of the recovered distribution, which describes the shape of the retrieved particle size distribution and is defined by

e = \| \hat{f} \| = \sqrt{ \sum_{i=1}^{N} | \hat{f}_i |^2 }    (12)

The Tikhonov regularization method can be applied to recover the particle size distribution using different regularization parameters. The retrieved and input particle size distributions are shown in Fig. 2: the recovered distribution initially approaches the input one as the parameter is increased, but then gradually departs from it with further increase of the parameter.

Fig. 2. The recovered particle size distribution for noisy simulated data versus the input distribution. The dotted curve is the retrieved distribution and the continuous one is the input distribution with lognormal form; the mean particle size is 100 nm, the standard deviation is 20, and the noise factor is 1e4. The regularization parameter is continually increased.

The norm of the recovered distribution is shown in Fig. 3. The shape of the recovered particle size distribution is steep, and the


result of the inversion is unstable and sensitive to the correlation noise when the regularization parameter is small. The recovered distribution gradually becomes more robust to noise as the regularization parameter is increased, but its shape then becomes too flat and departs greatly from the input one. A balance point between these two evaluation criteria therefore needs to be found. A new criterion can be created by multiplying the two criteria together, namely ||ĝ − g||² · ||f̂||², as shown in Fig. 4. When plotted, it almost always has a characteristic V-shaped appearance, the so-called V-curve. The optimum regularization parameter corresponds to the minimum of the V-curve. Using the optimum parameter, the recovered distribution is robust and has the greatest overlap with the real one.

2.3. Steps of the algorithm

1. Recover the particle size distribution using different regularization parameters, changing from small to large, and draw the V-curve.
2. Find the optimum regularization parameter at the lowest point of the V-curve.
3. Obtain the particle size distribution corresponding to the optimum parameter.
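These steps can be sketched roughly as follows (an illustration, not the published code). The solver argument is any routine that recovers f̂ for a given α, such as the non-negative least-squares sketch above; extracting the field correlation from the intensity correlation as sqrt(g − 1) is an assumption of this sketch.

```python
import numpy as np

def v_curve_select(H, g, alphas, solver):
    """Scan regularization parameters and return the one at the minimum of the
    V-curve, i.e. of ||g_hat - g||^2 * ||f_hat||^2 (Eqs. (10)-(12))."""
    g1 = np.sqrt(np.clip(g - 1.0, 0.0, None))    # field ACF; beta absorbed into f
    scores, solutions = [], []
    for alpha in alphas:
        f_hat = solver(H, g1, alpha)             # step 1: recover f for this alpha
        g_hat = 1.0 + (H @ f_hat)**2             # regenerated intensity ACF, Eq. (11)
        scores.append(np.sum((g_hat - g)**2) * np.sum(f_hat**2))  # V-curve value
        solutions.append(f_hat)
    i = int(np.argmin(scores))                   # step 2: lowest point of the V-curve
    return alphas[i], solutions[i], np.asarray(scores)  # step 3: distribution at optimum
```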

Fig. 3. The norm of the recovered particle size distribution (semi-logarithmic plot).

Fig. 4. The evaluation criterion ||ĝ − g||² · ||f̂||² for the recovered particle size distribution, which looks like a V-curve. The optimum regularization parameter is found at the lowest point of the V-curve.

Fig. 5. The simulated and fitted autocorrelation data for a particle size distribution with lognormal form. The noise factor is 1e4. The points show the noisy correlation data, and the continuous curve through the points is the fitted data.

Fig. 6. Optimum regularization parameters for different simulation conditions. 'x' indicates a noise factor of 5e4, '+' of 1e4, and 'o' of 5e3. The x-coordinate is the standard deviation of the input distribution, and the y-coordinate is the optimum regularization parameter. (a) Lognormal input distribution, mean particle size 100 nm; (b) lognormal input distribution, mean size 250 nm; (c) normal input distribution, mean size 100 nm; (d) normal input distribution, mean size 250 nm.

Fig. 7. Particle size distribution by spherical radius for simulated data; the curves labelled REG and Input are the retrieved and the input distributions, respectively. (a) Lognormal input distribution, mean particle size 100 nm, standard deviation 20; the optimum regularization parameter is 0.1. (b) Normal input distribution, mean particle size 250 nm, standard deviation 40; the optimum regularization parameter is 0.2.

3. Simulation results and discussion

In this section, a series of simulated experiments is performed to find the optimum parameter using the V-curve algorithm developed above.

The simulated autocorrelation data can be generated by inserting a particular particle size distribution, such as a lognormal or a Gaussian one, into Eq. (13):

g_m = 1 + ( H_{m,n} \cdot f_n )^2 + n_m    (13)

where g_m is the mth light intensity autocorrelation datum corresponding to delay time t_m, f_n is the input particle size distribution, and n_m represents the noise in the mth correlation channel.

The Tikhonov regularization approach was applied to these data with the aim of recovering the particle size distribution. We assume that the input particle size distribution follows a lognormal distribution with a mean particle size of 100 nm and a standard deviation of 20, and that the noise factor is 1e4. The autocorrelation function calculated from Eq. (13) is shown in Fig. 5; the points are the simulated correlation data, and the continuous curve is the fitted correlation data according to Eq. (11).

Correlation data can be generated for different input particle size distributions and noise factors. For each input distribution and noise factor, the optimum regularization parameter can be obtained using the V-curve criterion described above. The results are shown in Fig. 6, which illustrates that the regularization parameter is sensitive to the size of the input distribution but relatively stable as the noise factor varies.

The results of a series of experiments to evaluate the inversion ability of the Tikhonov regularization method with optimum parameters are shown in Fig. 7. They were obtained from simulated autocorrelation data for lognormal and Gaussian distributions, respectively, with a noise factor of 1e4.
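For illustration, a minimal end-to-end sketch of this kind of simulation, reusing the helper functions sketched in Section 2; the delay-time grid, radius grid, distribution width, and noise amplitude below are assumptions for the sketch, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed grids: 64 delay times up to 5 ms (cf. Fig. 5), radii from 10 to 500 nm.
tau = np.linspace(1e-4, 5e-3, 64)            # delay times (s)
radii = np.linspace(10e-9, 500e-9, 60)       # particle radii (m)
H = kernel_matrix(tau, radii, decay_constant())

# Lognormal input distribution centred at 100 nm, normalized to unit sum.
f_true = np.exp(-0.5 * ((np.log(radii) - np.log(100e-9)) / 0.2)**2)
f_true /= f_true.sum()

# Eq. (13): intensity autocorrelation plus additive channel noise (assumed level).
g_sim = 1.0 + (H @ f_true)**2 + 1e-4 * rng.standard_normal(tau.size)

# V-curve selection of the regularization parameter and the recovered distribution.
alpha_opt, f_rec, scores = v_curve_select(H, g_sim, np.logspace(-3, 2, 40),
                                          tikhonov_nnls)
```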

Fig. 8. Particle size distribution retrieved by the Tikhonov regularization method from experimental data for unimodal polystyrene particles. The average retrieved particle radius is 101.3 nm, while the certified mean radius is 97 ± 3 nm and the standard deviation is 4.5 nm.

4. Experimental results and discussion

Polymer microspheres were obtained from Duke Scientific Corporation. A sample was diluted with water and transferred into a cylindrical glass cell. The DLS experimental setup comprised a He–Ne laser, a photomultiplier tube, and a digital correlator to obtain the autocorrelation data. The laser wavelength was 632.8 nm and the scattering angle was 90°. For the nanometer particle experiment, the sampling interval was 20 ms and the number of correlation channels was 64. The unimodal particles had a certified mean radius of 97 ± 3 nm with a standard deviation of 4.5 nm according to the manufacturer.

The particle size distribution recovered from the experimental data using the Tikhonov regularization method with the optimum parameter α = 0.1 is shown in Fig. 8. A Gaussian distribution with the mean size and standard deviation from the manufacturer's data sheet is plotted for comparison. It can be seen that the Tikhonov regularization method provides a good fit to the true distribution. The mean radius obtained from the experimental data is 101.3 nm, which represents a 4.4% error relative to the certified mean radius.

5. Conclusion

The V-curve criterion is introduced for determining the optimum regularization parameter, which is a critical step in data inversion by the Tikhonov regularization algorithm. The performance of the criterion has been tested through simulations for different particle size distributions and a number of noise levels. The results show that the regularization parameter is insensitive to the noise but somewhat sensitive to the size of the input particle size distribution. Measurement of standard polystyrene particles shows that the V-curve criterion is a good choice for finding the optimum regularization parameter.

Acknowledgments

The authors gratefully acknowledge assistance and fruitful discussions from Professor John Walker and Dr. Rui Chen from the University of Nottingham, and also thank the National


Science Foundation of Shandong Province (Grant no. ZR2009AQ013) and Ministry of Shandong Education (J09LG17), which provided part of the research funding.

References

[1] Berne BJ, Pecora R. Dynamic light scattering. New York: Wiley; 1976.
[2] Koppel DE. Analysis of macromolecular polydispersity in intensity correlation spectroscopy: the method of cumulants. J Chem Phys 1972;57:4814.
[3] Frisken BJ. Revisiting the method of cumulants for the analysis of dynamic light-scattering data. Appl Opt 2001;40:4087–91.
[4] Liu W, Shen J. Fitting of light autocorrelation function based on unconstrained nonlinear optimization algorithm. Acta Opt Sin 2008;28(s2):289–91.
[5] Hassan PA, Kulshreshtha SK. Modification to the cumulant analysis of polydispersity in quasi-elastic light scattering data. J Colloid Interface Sci 2006;300:744–8.
[6] Ostrowsky N, Sornette D, Parker P, Pike ER. Exponential sampling method for light scattering polydispersity analysis. J Mod Opt 1981;28:1059–70.


[7] Provencher S. CONTIN: a general purpose constrained regularization program for inverting noisy linear algebraic and integral equations. Comput Phys Commun 1982;27(3):229–42.
[8] Lawson CL, Hanson RJ. Solving least squares problems. Prentice-Hall; 1974.
[9] Bertero M, Brianzi P, Pike ER. On the recovery and resolution of exponential relaxation rates from experimental data: a singular-value analysis of the Laplace. Proc R Soc Lond A 1982;383:15–29.
[10] Gugliotta LM, Stegmayer GS, Clementi LA, Gonzalez VDG, Minari RJ, Leiza JR, et al. A neural network model for estimating the particle size distribution of dilute latex from multiangle dynamic light scattering measurements. Part Part Syst Charact 2009;26:41–52.
[11] Tikhonov AN, Arsenin VY. Solutions of ill-posed problems. Washington, DC: V.H. Winston; 1977.
[12] Eliçabe GE, García-Rubio LH. Latex particle size distribution from turbidimetry using inversion techniques. J Colloid Interface Sci 1989;129(1):192–200.
[13] Hansen PC. Regularization tools version 4.0 for Matlab 7.3. Numer Algorithms 2007;46:189–94.
[14] Coleman TF, Li Y. A reflective Newton method for minimizing a quadratic function subject to bounds on some of the variables. SIAM J Optim 1996;6(4):1040–58.