Inversion methods to determine two-dimensional aerosol mass-mobility distributions: A critical comparison of established methods


Graphical abstract: mobility classification (DMA) and mass-to-charge classification (CPMA/APM) feed a two-dimensional inversion (least-squares, Twomey, Twomey-Markowski, Tikhonov) that determines the mass-mobility distribution.

Inversion methods to determine two-dimensional aerosol mass-mobility distributions: A critical comparison of established methods

T. A. Sipkens*,a, J. S. Olfertb, S. N. Rogaka

a Department of Mechanical Engineering, University of British Columbia, Vancouver, Canada
b Department of Mechanical Engineering, University of Alberta, Edmonton, Canada

* Corresponding author. Department of Mechanical Engineering, University of British Columbia, 2329 West Mall, Vancouver, BC V6T 1Z4. E-mail address: [email protected] (T. A. Sipkens)

ARTICLE INFO

Keywords: Tandem measurements; CPMA; DMA; Particle size distribution; Data inversion; Mass-mobility distribution

ABSTRACT

This paper provides a critical review of methods used to invert tandem measurements to determine the two-dimensional distribution of particle mass and mobility. We consider the performance of weighted least-squares analysis, Twomey-type approaches, a maximum entropy method, Tikhonov regularization (over a range of regularization parameters), and statistical inversion. A detailed analysis is performed on a bimodal phantom to demonstrate the typical characteristics of reconstructions resulting from the different inversion techniques, before the Euclidean error between the phantom and reconstructions is evaluated for a wider range of phantoms. It is found that 1st order Tikhonov regularization generally outperforms the other established inversion methods, even for narrow phantoms, where the finite representation of the mass-mobility distribution becomes a larger contributor to reconstruction accuracy. Twomey-type approaches, while not as robust, are shown to be an acceptable alternative.

1. Introduction

Proper characterization of aerosols is critical to several fields, including environmental science, where materials like soot contribute significantly to the anthropogenic radiative forcing responsible for climate change (Bond, et al., 2013; Myhre, et al., 2013), and nanotechnology, where aerosolized nanoparticle synthesis is one of the most economical routes to mass production (e.g. Pratsinis, 1998). The impact or role of aerosolized particles in these fields depends significantly on the particle size and aggregate morphology. To this end, several devices have been developed to measure the size distributions of aerosols, including impactors, mobility sizers, and mass spectrometers. Recently, aerosol scientists have used several of these instruments in tandem to yield increasingly detailed information about the particles (Park, et al., 2008). Of note, the mass and mobility of the particles can be characterized by pairing a differential mobility analyzer (DMA) (Liu & Pui, 1973; Knutson & Whitby, 1975) with either an aerosol particle mass analyzer (APM) (McMurry, et al., 2002; Park, et al., 2003; Scheckman, et al., 2009; Shin, et al., 2009; Beranek, et al., 2012; Kuwata & Kondo, 2009; Charvet, et al., 2014) or a centrifugal particle mass analyzer (CPMA) (Olfert, et al., 2007; Graves, et al., 2017; Olfert, et al., 2017; Quiros, et al., 2015; Johnson, et al., 2014; Ghazi, et al., 2013; Afroughi, et al., 2019). These measurements are most often used to estimate summary parameters such as the effective density (Park, et al., 2003; Kuwata & Kondo, 2009; Olfert, et al., 2007; Schmid, et al., 2007; Olfert, et al., 2017; Johnson, et al., 2014), dynamic shape factor (Scheckman, et al., 2009; Shin, et al., 2009; Beranek, et al., 2012; Kuwata & Kondo, 2009), or mass-mobility exponent (Shapiro, et al., 2012). However, such summary parameters are limiting in that they only describe the average particle mass-mobility relationship and fail to describe the distribution of these properties. A more complete understanding of the relationship between these parameters can be derived by examining the two-dimensional mass-mobility distribution. Rawat et al. (2016) were the first to use tandem DMA-APM measurements to derive the two-dimensional mass-mobility distribution for an aerosol, which allowed the authors to identify externally mixed particles of differing density and morphology. This approach was later applied to aggregated nanoparticles generated using plasma synthesis (Chen, et al., 2018). Broda et al. (2018) also used a similar methodology to derive two-dimensional distributions of particle mass and black carbon mass using a tandem arrangement consisting of a CPMA and a single-particle soot photometer (SP2).

In all cases, the true distribution of particle properties measured by these devices is masked by a convolution with various instrument functions (e.g. Kuwata, 2015), including the effect of multiple charging of the particles. Doing the inverse, that is, finding the distribution of particle properties from the data, involves a deconvolution, which presents several challenges. To start, these problems are often rank deficient, a result of determining the particle size distribution at more positions than at which measurements are taken, i.e. there are more unknowns than equations. Even in cases where the problem is full rank, however, the convolution represents a Fredholm integral equation of the first kind (IFK) in which one performs a smoothing operation over the true distribution. These problems are classically ill-posed in that (i) there exist several solutions that can nearly equally describe the data and (ii) inversion tends to magnify measurement and discretization errors. As a consequence, proper care must be taken to determine the mass-mobility distribution, even if the problem is full rank.

Techniques to invert for the one-dimensional particle size distribution are ubiquitous in the aerosol literature (see existing reviews: Voutilainen, et al., 2001; Carfora, et al., 1998; Kandlikar & Ramachandran, 1999). However, the extension of these techniques to derive two-dimensional distributions of aerosol characteristics is an emerging field that would benefit from a review. Accordingly, this paper aims to complement existing reviews of aerosol inversion and critically examines the methods available to simultaneously invert data for the two-dimensional mass-mobility distribution from a combination of mass and mobility classification. In this regard, we consider established methods or those currently in use in the field of aerosol science, specifically those categories in the abovementioned reviews. This work starts with a definition of the problem, including some discussion of the various instrument transfer functions that make up the problem. We continue by generating synthetic mass-mobility distributions (i.e. phantoms) and discuss inversion methods of the following categories: (i) unregularized least-squares analysis, (ii) Twomey-type approaches (including the existing extension to two dimensions by Rawat, Buckley, and coworkers (Rawat, et al., 2016; Buckley, et al., 2017)), (iii) maximum entropy regularization, (iv) Tikhonov regularization, and (v) statistical or Bayesian inversion.

2. Problem definition

The performance of the paired devices is characterized by a series of transfer functions that define the effectiveness with which particles matching a desired character are selected. These form a kernel defined as the product of the device transfer functions,

K(r_i^*, m, d) = \sum_{k=1}^{\infty} f(z_k, d) \, \Omega(d_i^*, z_k, d) \, \Lambda(m_i^*, z_k, m, d),    (1)

where m is the particle mass; d is the mobility diameter of the particles (with the traditional 'm' subscript on the mobility diameter dropped for ease of notation); ri* = [mi*, di*]^T are the instrument setpoints, including the mobility diameter setpoint of the DMA, di*, and the mass setpoint of the CPMA (assuming a singly charged particle), mi*, arranged as a vector; f(zk, d) is the fraction of particles in an integer charge state of zk; Ω is the transfer function of the DMA; and Λ is the transfer function of the particle mass analyzer (the APM (Ehara, et al., 1996) or CPMA (Olfert & Collings, 2005)). It is noted that additional functions could be included to incorporate the efficiencies of the trailing condensation particle counter (CPC) that is used to count the particles that exit the other devices and losses through the system tubing. Although these inefficiencies are not considered here, the inversion methods are directly extendable to the case of including these additional transfer functions.

2.1 Device transfer functions

2.1.1 Particle charging

The particle charging function, f(zk, d), is traditionally evaluated following Wiedensohler (1988), who provided a correlation derived from Fuchs' theory for charge states ranging from -2 to +2 and a Gaussian distribution for the higher charge states (Gunn & Woessner, 1956):

f(z_k, d) = \frac{1}{\left[ 2\pi \sigma_z(d) \right]^{1/2}} \exp\left[ -\frac{\left( z_k - \sigma_z(d) \ln Z_z \right)^2}{2 \sigma_z(d)} \right],    (2)

where Zz ≈ 0.875 is the ion mobility ratio (Wiedensohler, et al., 1986),

\sigma_z(d) = \frac{2\pi \varepsilon_0 d k_{\rm B} T}{e^2},    (3)

ε0 is the dielectric constant, kB is the Boltzmann constant, T is the system temperature, and e is the elementary electron charge. More recently, Gopalakrishnan et al. (2013) improved on the charge distribution for the integer charge states ranging from -2 to +2 and particles below 100 nm, using Brownian dynamics calculations to provide an updated correlation of the form

f(z_k, d) = \exp\left[ \sum_{i=1}^{3} a_i\!\left( z_k, \frac{A_{\rm p}}{\pi R_{\rm s}^2} \right) (\ln d)^i \right],    (4)

where ai are correlation coefficients, Ap is the particle's orientationally-averaged projected area, and Rs is the Smoluchowski radius or capacity. The current work employs Eq. (4) for integer charge states ranging from -2 to +2, taking the values for ai as those corresponding to conducting particles from Gopalakrishnan et al. (2013), and Eq. (2) for the higher integer charge states. We consider integer charge states up to six, though improvements in the kernel by including integer charge states above two are generally minor for particles below a micrometer.

2.1.2 DMA transfer function

The geometry of a cylindrical DMA is shown in Figure 1a. The transfer function for the DMA, considering a diffusing flow through a cylindrical DMA, is given by (Stolzenburg, 2018)

\Omega(d_i^*, z_k, d) = \frac{\sigma}{\sqrt{2}\,\beta(1-\delta_Q)} \left[ f_\varepsilon\!\left( \frac{\tilde{Z}_{\rm p} - 1 + \beta}{\sqrt{2}\sigma} \right) + f_\varepsilon\!\left( \frac{\tilde{Z}_{\rm p} - 1 - \beta}{\sqrt{2}\sigma} \right) - f_\varepsilon\!\left( \frac{\tilde{Z}_{\rm p} - 1 + \beta\delta_Q}{\sqrt{2}\sigma} \right) - f_\varepsilon\!\left( \frac{\tilde{Z}_{\rm p} - 1 - \beta\delta_Q}{\sqrt{2}\sigma} \right) \right],    (5)

where σ is a non-dimensional standard deviation representing diffusion across the DMA; Z̃p = Zp/Zp* is the normalized particle mobility, where Zp is the particle mobility and Zp* is the centroid mobility of particles leaving the DMA, a function of the setpoint, di*; β = (Qs + Qa)/(Qm + Qc) is the ratio of the sum of the aerosol (Qa) and sample (Qs) flow rates to the sum of the sheath (Qc) and exhaust (Qm) flow rates; δQ = (Qs − Qa)/(Qs + Qa); fε(x) = x·erf(x) + exp(−x²)/π^{1/2}; and erf(x) denotes the error function. The non-dimensional standard deviation for the mobility distribution is given by

\sigma^2 = \frac{G_{\rm DMA} k_{\rm B} T}{e V_i} \tilde{Z}_{\rm p},    (6)

where GDMA is a non-dimensional geometry factor and Vi is the setpoint voltage of the ith measurement, which depends on di*. The mobility is defined as

Z_{\rm p} = \frac{q}{3\pi\mu d} C_{\rm c}(d),    (7)

where q = zke is the particle charge; µ is the dynamic viscosity of the gas;

C_{\rm c}(d) = 1 + {\rm Kn}\left[ A_1 + A_2 \exp\left( -\frac{A_3}{\rm Kn} \right) \right]    (8)

is the Cunningham slip correction factor; Kn = 2λg/d is the Knudsen number; λg is the mean free path of the gas; and A1, A2, and A3 are experimentally-determined factors. Commonly, equal aerosol flows are used such that δQ = 0, Qm = Qc, and Qa = Qs (e.g. Buckley, et al., 2017; Kuwata, 2015). The centroid mobility, Zp*, can be equally defined using Eq. (7) with d = di* and by equating the forces on the particles and solving for Zp:

Z_{\rm p}^* = -\left( \frac{Q_{\rm m} + Q_{\rm c}}{2} \right) \frac{\ln(R_2/R_1)}{2\pi L V_i},    (9)

where R1 and R2 are the inner and outer radii of the DMA chamber, and L is the length of the DMA. This allows one to derive a Vi corresponding to a given di* for use in Eq. (6) and vice versa.
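To make the preceding relations concrete, the following MATLAB sketch evaluates the transfer function of Eq. (5) for a single DMA setpoint using the flows and geometry of Table 1. The gas properties (mu, lambda_g), the geometry factor G_DMA, and the 100 nm setpoint are illustrative assumptions rather than values prescribed by this work.

% Sketch: diffusing DMA transfer function, Eq. (5), at one setpoint.
kB = 1.380649e-23;                   % Boltzmann constant [J/K]
e  = 1.602177e-19;                   % elementary charge [C]
T  = 293;                            % temperature [K], Table 1
mu = 1.82e-5;                        % gas viscosity [Pa s] (assumed)
lambda_g = 66.5e-9;                  % gas mean free path [m] (assumed)
R1 = 9.37e-3;  R2 = 19.61e-3;  L = 0.4437;      % geometry [m], Table 1
Qs = 5e-6;  Qa = 5e-6;  Qc = 5e-5;  Qm = 5e-5;  % flows [m^3/s], Table 1
G_DMA = 3.0;                         % geometry factor (illustrative)

d_star = 100e-9;                     % mobility diameter setpoint [m]
q = 1 * e;                           % singly charged particle

% Mobility, Eq. (7), with the slip correction of Eq. (8) (Kim et al. values).
Cc = @(d) 1 + (2*lambda_g./d) .* (1.165 + 0.483*exp(-0.997./(2*lambda_g./d)));
Zp = @(d) q * Cc(d) ./ (3*pi*mu*d);

% Setpoint voltage from the centroid mobility, Eq. (9) (magnitude used).
Zp_star = Zp(d_star);
Vi = (Qm + Qc)/2 * log(R2/R1) / (2*pi*L*Zp_star);

% Transfer function, Eq. (5), over a range of mobility diameters.
d = linspace(80, 125, 200)' * 1e-9;
Zt = Zp(d) ./ Zp_star;               % normalized mobility
beta = (Qs + Qa) / (Qm + Qc);
dQ = (Qs - Qa) / (Qs + Qa);
sig = sqrt(G_DMA * kB * T / (e*Vi) .* Zt);      % Eq. (6)
f_eps = @(x) x .* erf(x) + exp(-x.^2) / sqrt(pi);
Omega = sig ./ (sqrt(2)*beta*(1 - dQ)) .* ( ...
    f_eps((Zt - 1 + beta)    ./ (sqrt(2)*sig)) + ...
    f_eps((Zt - 1 - beta)    ./ (sqrt(2)*sig)) - ...
    f_eps((Zt - 1 + beta*dQ) ./ (sqrt(2)*sig)) - ...
    f_eps((Zt - 1 - beta*dQ) ./ (sqrt(2)*sig)));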


Figure 1. Simple schematics demonstrating the operating principles and relevant geometric features of the (a) cylindrical DMA, (b) Couette CPMA, and (c) CPC.


Implementations vary subtly in how the geometry factor and the Cunningham slip correction factor are treated. In the present work, we adopt A1 = 1.165, A2 = 0.483, and A3 = 0.997 for the Cunningham slip correction factor from Kim et al. (2005) over the alternatives used in the literature, such as that by Davies (1945) or Allen and Raabe (1976). The geometry factor is taken as that corresponding to a fully-developed flow from Stolzenburg (2018).

2.1.3 CPMA transfer function

The geometry of the Couette CPMA is identified in Figure 1b and differs from the APM primarily in that the electrodes can rotate at different speeds. Including diffusion, the trajectory of any particle is governed by the steady-state, convective-diffusion equation (Olfert, et al., 2006; Olfert & Collings, 2005)

\mathbf{v} \cdot \nabla n_i = D \nabla^2 n_i - \nabla \cdot (\mathbf{c} n_i),    (10)

where ni is the number concentration of particles at the ith mass setpoint (assuming a singly charged particle), mi*, in the domain; v = [vr, vθ, vz]^T is the velocity field, where vr, vθ, and vz are the velocity components in the radial, angular, and axial directions, respectively;

D = \frac{Z_{\rm p} k_{\rm B} T}{q}    (11)

is the diffusion coefficient; and c = [cr, cθ, cz]^T is the particle migration velocity, with components assigned analogous to v. We note that, by construction, cz = 0, cθ = 0, and vr = 0. Here, plug flow is assumed, such that vz = v̄, where v̄ is the average flow velocity, which can be derived from the aerosol flow rate and is reasonable under CPMA conditions (Sipkens, et al., 2019). Considering the possibility of different angular speeds for the internal and external electrodes, the flow velocity distribution in the azimuthal direction is (Olfert & Collings, 2005; Taylor, 1923)

v_\theta(r) = \alpha r + \frac{\beta}{r},    (12)

where α = ω₁[(r₁/r₂)² − ω̂]/[(r₁/r₂)² − 1]; β = ω₁r₁²(ω̂ − 1)/[(r₁/r₂)² − 1]; ω1 and ω2 are the angular speeds of the inner and outer electrodes, respectively; r1 and r2 are the inner and outer radii of the particle mass analyzer gap, respectively; rc = (r1 + r2)/2 is the gap center; and ω̂ = ω2/ω1. In the radial direction, the force on the particles is the balance of centrifugal,

F_{\rm c} = m \frac{v_\theta^2(r)}{r},    (13)

and electrostatic forces,

F_{\rm e} = \frac{q V_i}{r \ln(r_2/r_1)},    (14)

where, in this case, Vi refers to the setpoint voltage for the CPMA or APM. The particle migration velocity, that is, the velocity of the particle relative to the gas, is then the product of the net force on the particles and the mechanical mobility, B = τ/m, such that

c_{\rm r} = B(F_{\rm c} - F_{\rm e}) = \frac{\tau}{m}(F_{\rm c} - F_{\rm e}),    (15)

where τ = Zpm/q is the particle relaxation time. Therefore,

c_{\rm r} = \frac{\tau v_\theta^2}{r} - \frac{\tau q V_i}{m r \ln(r_2/r_1)} = \tau\alpha^2 r + \frac{2\tau\alpha\beta}{r} + \frac{\tau\beta^2}{r^3} - \frac{\tau q V_i}{m r \ln(r_2/r_1)}.    (16)

Unfortunately, Eq. (10) cannot generally be solved analytically, and finite difference solutions can be computationally expensive to compute (Olfert, et al., 2006; Olfert & Collings, 2005). Accordingly, we use the expressions derived from a particle tracking method following Sipkens et al. (2019). In that work, the diffusing transfer function of a CPMA for the plug flow condition is approximated by

\Lambda(m_i^*, z_k, m, d) = \frac{-1}{4\delta} \left[ K(G_0(r_2), r_2) - K(G_0(r_1), r_2) - K(G_0(r_2), r_1) + K(G_0(r_1), r_1) \right],    (17)

where δ = (r2 − r1)/2 is the gap half width;

K(G_0(r_{g1}), r_{g2}) = \left[ G_0(r_{g1}) - r_{g2} \right] {\rm erf}\!\left( \frac{G_0(r_{g1}) - r_{g2}}{2^{1/2}\sigma} \right) + \sigma \left( \frac{2}{\pi} \right)^{1/2} \exp\!\left[ -\frac{\left( G_0(r_{g1}) - r_{g2} \right)^2}{2\sigma^2} \right];    (18)

and

\sigma = \left( \frac{2 D L}{\bar{v}} \right)^{1/2}    (19)


Table 1. DMA parameters used in the current work.

Parameter    Value
R1 [mm]      9.37
R2 [mm]      19.61
L [m]        0.4437
Qs [m3/s]    5.0×10−6
Qa [m3/s]    5.0×10−6
Qc [m3/s]    5.0×10−5
Qm [m3/s]    5.0×10−5
T [K]        293
p [atm]      1.0

Table 2. CPMA parameters used in the current work.

Parameter    Value
r1 [mm]      60
r2 [mm]      61
Q [m3/s]     5.0×10−6
L [m]        0.2
ω̂            0.9697
T [K]        293
p [atm]      1.0

represents the spreading of the transfer function due to diffusion, which is a function of the diffusion coefficient and thereby also a function of the mobility diameter. In Eq. (18), G0(r) is a function mapping the final radial position of the particle to the initial radial position of the particle. In the current work, the radial component of the particle migration velocity, cr, is approximated using a 1st order Taylor-series expansion about rc, corresponding to Case B in Sipkens et al. (2019), such that

G_0(r) = r_{\rm c} + \left( r - r_{\rm c} + \frac{C_3}{C_4} \right) \exp\!\left( -\frac{C_4 L}{\bar{v}} \right) - \frac{C_3}{C_4},    (20)

where

C_3 = \tau \left[ \alpha^2 r_{\rm c} + \frac{2\alpha\beta}{r_{\rm c}} + \frac{\beta^2}{r_{\rm c}^3} - \frac{q V_i}{m r_{\rm c} \ln(r_2/r_1)} \right]    (21)

and

C_4 = \tau \left[ \alpha^2 - \frac{2\alpha\beta}{r_{\rm c}^2} - \frac{3\beta^2}{r_{\rm c}^4} + \frac{q V_i}{m r_{\rm c}^2 \ln(r_2/r_1)} \right].    (22)

The resultant expression is a function of both the mass-to-charge ratio and the mobility diameter, with the latter quantity entering the calculation by way of the diffusion coefficient and particle relaxation time.
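By way of illustration, the following MATLAB sketch evaluates Eqs. (17)-(22) for a single mass setpoint using the Table 2 geometry. The electrode speed, particle mass, and mobility are illustrative assumptions; the setpoint voltage is chosen here to balance Eqs. (13) and (14) at the gap center, so that the transfer function is evaluated at its peak.

% Sketch: diffusing CPMA transfer function, Eqs. (17)-(22), at one setpoint.
kB = 1.380649e-23;  e = 1.602177e-19;  T = 293;
r1 = 0.060;  r2 = 0.061;  Lc = 0.2;          % geometry [m], Table 2
Q  = 5e-6;                                   % aerosol flow [m^3/s], Table 2
v_bar = Q / (pi*(r2^2 - r1^2));              % mean axial (plug) flow speed
omega_hat = 0.9697;                          % speed ratio, Table 2
omega1 = 2*pi * 4000/60;                     % inner electrode speed [rad/s] (assumed)

m  = 0.5e-18;                                % particle mass [kg], i.e. 0.5 fg (assumed)
Zp = 1e-8;                                   % particle mobility [m^2/(V s)] (assumed)
q  = e;                                      % singly charged
tau = Zp * m / q;                            % particle relaxation time
D   = Zp * kB * T / q;                       % diffusion coefficient, Eq. (11)

% Couette flow parameters for Eq. (12).
alpha = omega1 * ((r1/r2)^2 - omega_hat) / ((r1/r2)^2 - 1);
beta  = omega1 * r1^2 * (omega_hat - 1) / ((r1/r2)^2 - 1);

rc  = (r1 + r2)/2;  del = (r2 - r1)/2;       % gap center and half width
Vi  = m * (alpha*rc + beta/rc)^2 * log(r2/r1) / q;  % balances Eqs. (13)-(14) at rc

% Taylor expansion coefficients, Eqs. (21) and (22).
C3 = tau * (alpha^2*rc + 2*alpha*beta/rc + beta^2/rc^3 - q*Vi/(m*rc*log(r2/r1)));
C4 = tau * (alpha^2 - 2*alpha*beta/rc^2 - 3*beta^2/rc^4 + q*Vi/(m*rc^2*log(r2/r1)));

% Radial mapping, Eq. (20), diffusional spread, Eq. (19), and kernel, Eq. (18).
G0 = @(r) rc + (r - rc + C3/C4) .* exp(-C4*Lc/v_bar) - C3/C4;
sig = sqrt(2*D*Lc/v_bar);
Kfun = @(G, r) (G - r) .* erf((G - r)/(sqrt(2)*sig)) + ...
    sig * sqrt(2/pi) * exp(-(G - r).^2 / (2*sig^2));

% Transfer function, Eq. (17); close to unity at the setpoint chosen here.
Lambda = -1/(4*del) * (Kfun(G0(r2), r2) - Kfun(G0(r1), r2) - ...
    Kfun(G0(r2), r1) + Kfun(G0(r1), r1));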

2.2 The forward problem

These transfer functions form the kernel of a convolution with the true mass-mobility distribution (Rawat, et al., 2016; Buckley, et al., 2017):

N_i = \int_0^\infty \!\! \int_0^\infty K(r_i^*, m, d) \cdot \frac{\partial^2 n}{\partial \log_{10} m \, \partial \log_{10} d} \cdot {\rm d}\log_{10} d \cdot {\rm d}\log_{10} m + \epsilon_i,    (23)

where Ni is the number of particles observed at a given measurement setpoint; ∂²n/(∂log₁₀m·∂log₁₀d) is the number-based two-dimensional mass-mobility distribution and is phrased in [log₁₀m, log₁₀d]^T space; and εi is the error in the ith measurement. From a mathematical perspective, both sides of this expression can be scaled by the total number of particles, ntot, to achieve a scaled version of the problem,

b_i = N_i / n_{\rm tot} = \int_0^\infty \!\! \int_0^\infty K(r_i^*, m, d) \cdot p(\log m, \log d) \cdot {\rm d}\log d \cdot {\rm d}\log m + \epsilon_i,    (24)

where bi is the fraction of particles measured at the setpoint ri*, p(logm, logd) is the mass-mobility probability density function (pdf), and εi would then represent a modified error term. The latter expression is advantageous in that it provides a better scaled problem and provides stability when implementing iterative solvers. The solution can then easily be rescaled after analysis.

In practice, the kernel is computed numerically, requiring the definition of a finite representation for the mass-mobility distribution. The most common representation is to simply discretize the space by defining Dx elements in two-dimensional mass-mobility space in which the value of the kernel and mass-mobility distribution are said to be constant. These elements can be defined on a grid with Dx,1 by Dx,2 elements in mass and mobility space, respectively. These elements are defined about their center and vectorized such that

x_j = p\left( (\log_{10} m)_j, (\log_{10} d)_j \right), \quad j \in \{1, 2, \ldots, D_x\}.    (25)

Eq. (24) can then be approximated with finite difference methods, in which case

b_i \approx \sum_{j=1}^{D_x} K(r_i^*, m_j, d_j) \cdot x_j \cdot \Delta(\log_{10} m)_j \cdot \Delta(\log_{10} d)_j + \epsilon_i.    (26)

This forms a system of Db equations that can be arranged into a matrix to give the linear system

b = Ax + \epsilon,    (27)

where

A_{ij} = K(r_i^*, m_j, d_j) \cdot \Delta(\log_{10} m)_j \cdot \Delta(\log_{10} d)_j    (28)

and b ∈ ℝ^{Db}, A ∈ ℝ^{Db×Dx}, x ∈ ℝ^{Dx}, and ε ∈ ℝ^{Db}. Note that, in this formulation, each row of A corresponds to evaluating the transfer functions for a single setpoint, that is, for a unique value of ri*. While finite element methods (Carfora, et al., 1998; Amato, et al., 1995) may provide improvements to reconstructions, particularly as the dimension increases, the present work will proceed with this simpler and more common approach.
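The following MATLAB sketch illustrates the assembly of Eqs. (26)-(28), abstracting the kernel of Eq. (1) behind a function handle. The Gaussian placeholder kernel, grids, and setpoints are purely illustrative; in practice, K would be the product of the charging, DMA, and CPMA transfer functions described in Section 2.1.

% Sketch: assemble the linear system b = A*x + noise of Eqs. (26)-(28).
K = @(m_star, d_star, m, d) ...              % hypothetical kernel placeholder
    exp(-((log10(d) - log10(d_star)).^2 + ...
          (log10(m) - log10(m_star)).^2) / 0.01);

% Reconstruction grid (element centers) and element widths.
log_m = linspace(log10(0.01), log10(100), 64);   % mass in fg
log_d = linspace(log10(10), log10(1000), 64);    % diameter in nm
[LM, LD] = meshgrid(log_m, log_d);
mj = 10.^LM(:);  dj = 10.^LD(:);                 % vectorized, Eq. (25)
d_logm = log_m(2) - log_m(1);
d_logd = log_d(2) - log_d(1);

% Instrument setpoints, r_i* = [m_i*, d_i*].
m_star = logspace(-2, 2, 20);                    % [fg]
d_star = logspace(1, 3, 25);                     % [nm]
[MS, DS] = meshgrid(m_star, d_star);
ms = MS(:);  ds = DS(:);

% Fill A row-by-row, Eq. (28): one row per setpoint.
A = zeros(numel(ms), numel(mj));
for ii = 1:numel(ms)
    A(ii, :) = K(ms(ii), ds(ii), mj, dj)' * d_logm * d_logd;
end
% Given a vectorized distribution x, the data follow b = A*x + noise, Eq. (27).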

3. Mathematical form of test phantoms and synthetic data

3.1 Mass-mobility phantoms

Phantoms refer to simulated distributions that act as input to the forward model to generate synthetic data. In this work, test phantoms for the joint mass-mobility distribution are evaluated using the definition of conditional probability,

p(m, d) = p(d) \, p(m \,|\, d),    (29)

where p(m,d) is the joint mass-mobility distribution, which is the distribution of interest here; p(d) is the distribution for the mobility diameter; and p(m|d) is the conditional distribution of mass, that is, the distribution of the particle mass for a given mobility diameter. Distributions are then taken to share a similar form to that given by Buckley et al. (2017). There, the mobility diameter for a given mode is assumed to be lognormally distributed, that is, having a pdf given by

p(d) = \frac{1}{(2\pi)^{1/2} d \ln\sigma_{\rm g}} \exp\!\left[ -\frac{(\ln d - \ln d_{\rm g})^2}{2 (\ln\sigma_{\rm g})^2} \right],    (30)

where dg and σg are the geometric mean mobility diameter and geometric standard deviation. For the conditional mass distribution, Buckley et al. (2017) assumed a Gaussian form. Here, we instead consider a lognormal form,

p_l(m \,|\, d; \sigma_{\rm m}, m_{\rm g}) = \frac{1}{(2\pi)^{1/2} m \ln\sigma_{\rm m}} \exp\!\left[ -\frac{\left( \ln m - \ln(m_{\rm g}|d) \right)^2}{2 (\ln\sigma_{\rm m})^2} \right],    (31)

where mg|d is the geometric mean mass conditional on a given mobility diameter and σm is the geometric standard deviation. The geometric mean can be determined from the empirically-defined mass-mobility equation (Shapiro, et al., 2012; Olfert & Rogak, 2019),

m_{\rm g}|d = k d^{D_{\rm m}},    (32)

where k and Dm are the mass-mobility pre-factor and exponent, respectively. This indicates that the mass-mobility distribution in log-log space is clustered about a line of constant slope, where the slope is Dm. In the current work, Dm is used to partially constrain the phantoms. Distributions can also be phrased in terms of the effective density of the particles (McMurry, et al., 2002; DeCarlo, et al., 2004), defined as

\rho_{\rm eff} \equiv \frac{m}{(\pi/6) d^3}.    (33)

Rearranging, the mass of a given aggregate is then

m = \rho_{\rm eff} \left( \frac{\pi}{6} d^3 \right).    (34)

Here, the mass, effective density, and mobility diameter can be considered random variables in that they vary randomly between aggregates. The center of the conditional mass distribution can then be phrased in terms of the geometric mean effective density,


Figure 2. (a) Demonstration phantom used in Section 4, representative of the mixing of two distinct aggregate populations, where one is much heavier than the other. The phantom demonstrates a case where there are distinct peaks with respect to mobility diameter but overlapping peaks with respect to mass. The color scale shown is identical to that used in subsequent reconstructions of this phantom. (b) Synthetic data, b, generated by evaluating the forward model for the demonstration phantom and corrupting with Poisson-Gaussian noise.

m_{\rm g}|d = \frac{\pi}{6} d^3 \cdot (\rho_{\rm eff,g}|d).    (35)

According to this expression, when the mean effective density is constant with respect to (i.e. independent of) the mobility diameter, the mass-mobility distribution will be clustered about a line in log-log space having a slope of three, which corresponds to Dm = 3 and k = π/6·(ρeff,g|d). More generally, the mean effective density of a particle with a given mobility diameter can be rephrased as

\rho_{\rm eff,g}|d = \frac{6}{\pi} k d^{D_{\rm m} - 3}.    (36)

For many aerosol particles, including soot aggregates, Dm < 3, indicating that the density of the particles decreases as the mobility diameter increases. In the present work, the mean effective density of the particles at the geometric mean mobility diameter, that is ρeff,g|dg, is combined with a value of Dm to fully define mg|d. Finally, the mass-mobility distribution is given by

p(m, d) = \frac{1}{2\pi d m \ln\sigma_{\rm g} \ln\sigma_{\rm m}} \exp\!\left[ -\frac{(\ln d - \ln d_{\rm g})^2}{2 (\ln\sigma_{\rm g})^2} - \frac{\left( \ln m - \ln(m_{\rm g}|d) \right)^2}{2 (\ln\sigma_{\rm m})^2} \right].    (37)

Multimodal distributions are generated by weighting the constituent modes. For example, bimodal distributions are formed by p(m,d) = w₁p₁(m,d) + w₂p₂(m,d), where w₁ + w₂ = 1 are weights for each of the modes. These distributions can be converted from [m,d]^T to [log₁₀m, log₁₀d]^T space by multiplying the distributions by d·m·[ln(10)]².

To demonstrate, consider a hypothetical bimodal phantom with distinct peaks representative of materials with different densities. The parameter set for a distribution with these characteristics is given in Table 3. Figure 2 shows this phantom, where the denser mode is representative of a heavy material, akin to metal particles, and the lighter mode is representative of soot-like aggregates. Also indicated in Figure 2 are the marginal mass and mobility distributions, evaluated by integrating over all but one of the variables, e.g. the marginal mobility distribution is

p_d(\log_{10} d) = \int_0^\infty p(\log_{10} m, \log_{10} d) \, {\rm d}\log_{10} m.    (38)

This phantom corresponds to a scenario in which distinct modes are visible in the marginal mobility distribution, but not in the marginal mass distribution. It is also worth noting that the value of Dm for soot typically varies from 2.2 to 2.6, depending on the type of combustion (Olfert & Rogak, 2019). For non-premixed combustion, Dm ~ 2.5 is more typical. In this work, the phantom’s second mode is generated with Dm = 2.3 only to provide a clearer separation from the first mode. Such a phantom provides an idealized distribution for visualizing the difference between the reconstruction methods and will be used as such in Section 4. Several other phantoms, representative of a wider range of aerosols, are considered in Section 5.
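For concreteness, the following MATLAB sketch evaluates the bimodal phantom of Eq. (37) using the Table 3 parameters, with masses in femtograms and mobility diameters in nanometers. The 10⁻⁹ scaling applied to k is an assumption chosen so that mg|dg reproduces the final column of Table 3 in these units.

% Sketch: bimodal demonstration phantom, Eq. (37), with Table 3 parameters.
dg = [50, 200];         % geometric mean mobility diameter [nm]
sg = [1.4, 1.4];        % geometric standard deviation (mobility)
sm = [1.3, 1.3];        % geometric standard deviation (mass)
Dm = [3, 2.3];          % mass-mobility exponents
k  = [6283, 10683];     % mass-mobility pre-factors, Table 3
w  = [0.5, 0.5];        % mode weights, w1 + w2 = 1

[d, m] = meshgrid(logspace(1, 3, 240), logspace(-2, 2, 240)); % d [nm], m [fg]
p = zeros(size(d));
for l = 1:2
    mg = k(l) * 1e-9 * d.^Dm(l);    % conditional mean mass, Eq. (32) [fg]
    p = p + w(l) ./ (2*pi*m.*d*log(sg(l))*log(sm(l))) .* ...
        exp(-log(d/dg(l)).^2 / (2*log(sg(l))^2) ...
            -log(m./mg).^2   / (2*log(sm(l))^2));
end
p_log = p .* m .* d * log(10)^2;    % convert to [log10 m, log10 d] space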

3.2 Synthetic data generation

Data, b, can be generated by convolving the phantom with the kernel. In the discrete case, this is done by evaluating Eq. (27), including the addition of characteristic noise. As is characteristic of other optical devices, this would typically involve Poisson-Gaussian noise, with the possibility of added fluctuations based on variation in the aerosol over the measurement duration (Sipkens, et al., 2017). If the latter source of noise varies randomly between each setpoint, this source will contribute analogous to other sources of Gaussian noise. Moreover, if there are sufficient counts, the Poisson noise can also be approximated as Gaussian, such that

\epsilon \sim N(0, \Sigma_{\rm b}),    (39)

where ~ denotes 'distributed as' and N(v; µ, Σ) indicates a multivariate Gaussian distribution with mean µ and covariance Σ,

N(v; \mu, \Sigma) = |2\pi\Sigma|^{-1/2} \exp\!\left[ -\frac{1}{2} (v - \mu)^{\rm T} \Sigma^{-1} (v - \mu) \right].    (40)

In this case, the covariance is given by

\Sigma_{\rm b} = \theta^2 \,{\rm diag}(b) + \gamma^2 I,    (41)

where diag(v) indicates a matrix with vector v on the diagonal, θ indicates a scaling of the data and therefore scaling of the Poisson noise (in this case, θ = 1/ntot), and γ indicates the level of Gaussian noise, including fluctuations in the aerosol. For a majority of this work, noise is representative of a total number of incident particles of ntot = 10⁵, corresponding to θ = 10⁻⁵ (equivalent to particle counts measured by the CPC on the order of 8,000 over the measurement duration, where most of the particles are considered lost in the devices). The Gaussian noise is assumed to be nearly negligible and is assigned a value many orders of magnitude smaller than the maximum of the Poisson noise (specifically, γ = 10⁻¹⁰·max{b}).

For the purposes of synthetic data generation, the kernel is evaluated on a higher resolution mesh (specifically a 540×550 element grid) than that used in reconstructions. This, combined with corruption of the data with noise, is an attempt to minimize the effect of inverse crime (Colton & Kress, 2012), a phenomenon associated with considering identical forward and inverse models that often leads to unrealistically optimistic outcomes. Data generated according to this procedure are shown as DMA scans of CPMA-selected particles in Figure 2b for the demonstration phantom.
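A minimal sketch of this noise model follows, assuming that b0 = A*x is the noiseless data vector produced by the forward model of Section 2.2 (with x the vectorized phantom); since the covariance of Eq. (41) is diagonal, the noise can be sampled elementwise.

% Sketch: corrupt noiseless data with the Poisson-Gaussian model, Eqs. (39)-(41).
b0 = A * x;                          % noiseless data (forward model assumed)
ntot  = 1e5;                         % total number of incident particles
theta = 1 / ntot;                    % Poisson scaling, theta = 1/ntot
gam   = 1e-10 * max(b0);             % small Gaussian component

rng(0);                              % for reproducibility
b = b0 + sqrt(theta^2 * b0 + gam^2) .* randn(size(b0));  % diagonal Sigma_b
b = max(b, 0);                       % counts cannot be negative

% Inverse-standard-deviation weights for later use, cf. Eq. (44).
Lb = diag(1 ./ sqrt(theta^2 * b0 + gam^2));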

4. Discussion of established reconstruction methods

Consider now inversion of the synthetic data shown in Figure 2b using the categories of methods identified in previous reviews (Voutilainen, et al., 2001; Carfora, et al., 1998; Kandlikar & Ramachandran, 1999).

4.1 Unregularized least-squares

In cases where the problem is rank deficient, the simplest route one could consider to a solution is ordinary least-squares. In this case, one seeks to minimize the square of the residual,

x^{\rm LSQ} = \arg\min_x \left\{ \sum_{i=1}^{N_{\rm b}} \left[ \left( \sum_{j=1}^{N_x} A_{ij} x_j \right) - b_i \right]^2 \right\} = \arg\min_x \left\{ \| Ax - b \|_2^2 \right\},    (42)

where ||·||₂ denotes the 2-norm. Such an approach ignores the fact that some measurements are inherently more uncertain than others. Incorporating this uncertainty lends itself to a weighted least-squares approach, in which

x^{\rm WLSQ} = \arg\min_x \left\{ \sum_{i=1}^{N_{\rm b}} \left[ \frac{1}{\sigma_i} \left( \sum_{j=1}^{N_x} A_{ij} x_j - b_i \right) \right]^2 \right\} = \arg\min_x \left\{ \| L^{\rm b} (Ax - b) \|_2^2 \right\},    (43)

where σi is the standard deviation of each data point, bi. Here, L^b is a diagonal matrix containing the inverse standard deviation of each data point, that is,

L^{\rm b} = {\rm diag}\!\left( \left[ \frac{1}{\sigma_1}, \frac{1}{\sigma_2}, \ldots, \frac{1}{\sigma_{N_{\rm b}}} \right]^{\rm T} \right).    (44)

It is worth noting that the matrix form of Eq. (43) can easily be extended to include covariance in the data, in which the error is correlated between different entries in b. This is done by adding the appropriate off-diagonal terms such that (L^b)^T = chol[(Σb)^{−1}] is the Cholesky factorization of the inverse covariance matrix. The weighted least-squares solution can be computed explicitly using the normal equations:

x^{\rm WLSQ} = \left[ (L^{\rm b} A)^{\rm T} L^{\rm b} A \right]^{-1} (L^{\rm b} A)^{\rm T} L^{\rm b} b.    (45)

In cases where non-negativity constraints are to be applied to the problem, as is appropriate to the present inversion, either Lagrange's method or iterative solvers are applied to solve Eq. (43) rather than implementing the normal equations directly. While least-squares is sufficient to derive a solution in spite of the rank deficiency of the problem, it is well-known to suffer from error amplification. As a result, the precise solution often varies significantly depending on the implemented solver and contains considerable fluctuations. In the present work, two non-negative, weighted least-squares algorithms were considered: (i) a Lagrange multiplier approach, specifically the lsqnonneg function from MATLAB (Lawson & Hanson, 1974), and (ii) a constrained interior point algorithm (Altman & Gondzio, 1993). For both algorithms, the weighted least-squares reconstructions of the data in Figure 2 fail to resemble the original phantom and exhibit erratic behaviour, particularly for the interior point algorithm, where the values occasionally vary several orders of magnitude between adjacent pixels. These characteristics make it difficult to even visualize the solution, serving as a warning to practitioners. This is reinforced by Figure 3, which indicates that the Euclidean distance between the reconstruction and the exact phantom (often referred to in this work as the Euclidean error) is significantly larger than for any of the other methods.

The solution to the above problem is to regularize the problem by adding supplemental information. This can often be understood as choosing, from the family of solutions that adequately minimize Lb(Ax − b), one that also satisfies some other information known a priori about x, such as smoothness.
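In MATLAB, the non-negative weighted least-squares solution described above reduces to a single call to lsqnonneg, assuming A, b, and Lb from the preceding sketches:

% Sketch: non-negative weighted least squares, cf. Eqs. (43) and (45).
x_wlsq = lsqnonneg(Lb * A, Lb * b);   % min ||Lb(Ax - b)||_2 subject to x >= 0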

4.2 Twomey-type methods

Iterative regularization methods are common in the aerosol literature. The most common scheme is that of Twomey (1975). This represents the current state-of-the-art in the 2D aerosol inversion problem, where Rawat, Buckley, and coworkers (Rawat, et al., 2016; Buckley, et al., 2017) and Chen et al. (2018) have adopted the technique to derive two-dimensional mass-mobility distributions and Broda et al. (2018) have applied it to tandem CPMA-SP2 measurements.

4.2.1 Twomey's original method

Twomey's original method constitutes a multiplicative technique with an updating rule of (Twomey, 1975)

x_j^{k+1} = \left[ 1 + \left( \frac{b_i}{a_i x^k} - 1 \right) A_{ij} \right] x_j^k,    (46)

where ai is the ith row of A. This is to be repeated for i ∈ [1, Db] to form a single pass. Multiple passes of the algorithm repeat this procedure another Db times to further update x. In this regard, i = (k modulo Db) + 1. As with any other iterative method, this does require an adequate initial guess, without which the algorithm will not converge to the desired solution. The algorithm also requires that the problem be scaled such that A has a maximum value of unity (Twomey, 1975), that is |Aij| ≤ 1, without which negative values may be returned by the algorithm. This also makes it challenging to incorporate Lb into this procedure; instead, each row is simply scaled to satisfy |Aij| ≤ 1.
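A minimal MATLAB sketch of a single Twomey pass follows, with the row scaling described above; x0 is an initial guess (e.g., interpolated from the data, as described below) and rows of A are assumed to be non-empty.

% Sketch: one pass of Twomey's updating rule, Eq. (46).
sc = 1 ./ max(A, [], 2);        % per-row scaling so that max_j |A_ij| = 1
As = sc .* A;                   % implicit expansion (R2016b or later)
bs = sc .* b;

x = x0;                         % initial guess (must be positive)
for ii = 1:size(As, 1)          % i = 1, ..., Db forms a single pass
    yi = As(ii, :) * x;         % a_i * x^k
    if yi > 0                   % guard against division by zero
        x = (1 + (bs(ii)/yi - 1) * As(ii, :)') .* x;   % Eq. (46)
    end
end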

Table 3. Parameters used in the bimodal demonstration phantom, shown in Figure 2 and used in Section 4. Modes are specified using dg, σg, ρeff,g|dg, σm, and Dm; the remaining quantities, specifically k and mg|dg, are evaluated from the former values. Note that the second mode has an effective density representative of soot-like aggregates (Olfert & Rogak, 2019), but with Dm = 2.3 to better distinguish the modes.

Mode, l   wl    dg [nm]   σg    ρeff,g|dg [kg/m3]   Dm    σm    k        mg|dg [fg]
1         0.5   50        1.4   12,000              3     1.3   6,283    0.785
2         0.5   200       1.4   500                 2.3   1.3   10,683   2.09


Consider the application of Twomey's method to the demonstration phantom, given an initial value determined by interpolating from the quantity bi/(ai·1), where 1 is a vector of ones, to the basis for x, consistent with the treatment of Buckley et al. (2017). Figure 4a shows this initial guess. Figure 4b shows the refined solution having applied Twomey's method. The plot reveals a distribution having a similar magnitude and shape as the original phantom. Figure 3 affirms this observation, indicating a significant reduction in the Euclidean error relative to the unregularized, least-squares approach. However, the solution exhibits ridges along the line corresponding to the mass-mobility equation, Eq. (32). This is not an artifact of plotting (where contours of sparse data can cause similar features), but rather a consequence of the problem not being full rank (the data is sparse relative to the reconstruction grid), resulting in preferential refinement of the distribution about the data points. This effect worsens as the data becomes sparser. The reconstructions also tend to be speckled, occasionally varying significantly between adjacent pixels. This is a known problem with Twomey's approach, such that several variants of the technique have been proposed.

4.2.2 Twomey-Markowski and the approach of Buckley and co-workers

The most popular modification of Twomey's approach is that initially proposed by Markowski (1987). This method resolves the speckle in reconstructions by routinely introducing a smoothing step between Twomey passes. Markowski initially implemented this approach for one-dimensional size distributions, smoothing using a locally-weighted average of the form

x_j^{k+1} = \begin{cases} x_j^k/2 + x_{j-1}^k/4 + x_{j+1}^k/4 & 1 < j < N_x \\ 3x_j^k/4 + x_{j+1}^k/4 & j = 1 \\ 3x_j^k/4 + x_{j-1}^k/4 & j = N_x \end{cases}    (47)

The natural extension to the two-dimensional case is to also include additional neighbours and reweight the neighbours according to the number of nodes. However, simple reweightings, such as assigning ½ to the central node and splitting the remaining ½ between the neighbouring nodes, tend to produce overly smooth results. Accordingly, Buckley et al. (2017) adapted the method by adding a parameter Sf to control the weight given to the non-central nodes. For the interior nodes, this corresponds to

x_j^{k+1} = \frac{ x_j^k/2 + S_{\rm f} \left( x_{j-1}^k + x_{j+1}^k + x_{j-N_{\rm d}}^k + x_{j-N_{\rm d}-1}^k + x_{j-N_{\rm d}+1}^k + x_{j+N_{\rm d}}^k + x_{j+N_{\rm d}-1}^k + x_{j+N_{\rm d}+1}^k \right) }{ 1/2 + 8 S_{\rm f} },    (48)

where Nd is the number of elements discretizing the mobility diameter alone on the rectangular grid. However, while such a scheme does prevent the oversmoothing characteristic of simple extensions of the Twomey-Markowski approach to two dimensions, this approach also introduces an additional regularization parameter in the form of Sf that must be specified. Hysteresis between the smoothing and Twomey steps (that is, allowing for an optimal change in x before switching between the smoothing and Twomey steps) can be challenging to achieve, particularly in the two-dimensional case considered here. Figure 5 also indicates that this approach is the slowest of the reconstruction techniques considered in this work, though the precise computation time varies widely depending on the hysteresis between the recursive Twomey and smoothing steps.

Despite these pitfalls, Figure 4c shows a slight improvement in the reconstruction quality, with the fluctuations between adjacent pixels being reduced relative to the standard Twomey approach. This slight improvement can be seen more clearly in the marginal mass distribution, shown in Figure 6b, where the largest fluctuations are visibly diminished. Figure 3 indicates that this also corresponds to a reduction in the Euclidean error of around 4% for the demonstration phantom. It is worth noting that the value of Sf was systematically varied to minimize the Euclidean error relative to the true phantom, representing an optimistic degree of accuracy. Such reconstruction quality is harder to achieve in practice. The remaining regularization parameters in this scheme, responsible for controlling hysteresis between the Twomey and smoothing steps, were initially varied but were held constant once a sufficient amount of hysteresis was observed. While it is possible that better hysteresis could be achieved between the smoothing and Twomey steps, the difficulty in finding the optimal parameters, even when the true solution is known, represents a limit on this technique.

While the Twomey step was considered last in the current work, as per Markowski (1987), Buckley et al. (2017) considered the smoothing step last. Such a scheme was observed to occasionally improve the reconstructions. In general, however, the difference between the traditional and smoothing-last schemes amounted to <1% of the Euclidean error. This observation speaks to the convergence of the algorithm, where the Twomey steps do little to refine the solution in the latter iterations of the Twomey-Markowski algorithm.

Figure 3. Euclidean error for various phantoms and the set of reconstruction techniques discussed in Section 4. The first set of errors corresponds to the demonstration phantom from Section 4, while the remaining sets indicate those corresponding to the phantoms considered in Section 5. Note that the errors for Phantom 4 greatly exceed the other cases, such that the bars are given on a separate scale. For Phantoms 1, 2, and 3, Euclidean errors for the least-squares and Tikhonov approaches correspond to using the Lagrange multiplier approach (the interior point algorithm gives a similar error for these phantoms). For the Tikhonov reconstructions of Phantom 4, the darker, lower bars correspond to the interior point algorithm and the lighter, higher bars correspond to the Lagrange multiplier method.

Figure 4. (a) Initial guess for the iterative methods and the regularized solutions found using (b) Twomey's method and (c) the Twomey-Markowski method using the two-dimensional smoothing function proposed by Buckley et al. (2017).

Figure 5. CPU times for various phantoms and the set of reconstruction techniques discussed in Section 4. The CPU times are those for the MATLAB files provided in the supplemental material, are to be considered in relative terms only, and are the average of 20 repeats. Exact CPU times will vary depending on the processor and reconstruction resolution. CPU times are on a logarithmic scale.
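The smoothing step of Eq. (48) translates compactly to a two-dimensional convolution, as in the following MATLAB sketch. Replicating the edge values to handle boundary nodes, and the grid size Nd, are assumptions for illustration, standing in for the one-dimensional boundary cases of Eq. (47).

% Sketch: Twomey-Markowski smoothing step, Eq. (48), applied between passes.
Sf = 1/4;                                 % smoothing parameter (illustrative)
Nd = 64;                                  % elements in mobility diameter (assumed)
X  = reshape(x, Nd, []);                  % vectorized solution back onto its grid
Xp = X([1, 1:end, end], [1, 1:end, end]); % replicate edges (assumption)
kern = Sf * ones(3);  kern(2,2) = 1/2;    % central node and eight neighbours
Xs = conv2(Xp, kern, 'valid') / (1/2 + 8*Sf);   % Eq. (48)
x  = Xs(:);                               % return to vectorized form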

4.3 Maximum entropy regularization

Maximum entropy approaches are less common in the aerosol literature but have been considered, e.g. by Yee (1989), Amato et al. (1995), and Wang (2008). The methods themselves aim to regularize the problem by maximizing the information entropy of the solution,

H = -\int_0^\infty \!\! \int_0^\infty p(m, d) \ln[p(m, d)] \cdot {\rm d}\ln d \cdot {\rm d}\ln m,    (49)

or, in its discrete form,

H(x) = -\sum_{j=1}^{N_x} x_j \ln(x_j) \cdot (\Delta \ln d)_j \cdot (\Delta \ln m)_j.    (50)

Figure 6. Marginal (a) mobility and (b) mass distributions found through various reconstruction techniques applied to the demonstration phantom. The initial guess used to initialize the MART and Twomey iterative approaches is also indicated. The marginal distributions for MART are indistinguishable from those produced using the traditional Twomey approach and are excluded for clarity. Upper panels show the difference between the marginal distribution of the reconstruction and the true phantom projected into the lower resolution space (x − xex).

There are multiple ways by which this information can be incorporated into the inversion. Perhaps the simplest is to add an entropy term using the generalized Tikhonov scheme (discussed further in Section 4.4), where the penalty term is the information entropy, Φ(x) = −H(x):

x^{\rm ME} = \arg\min_x \left\{ \| L^{\rm b} (Ax - b) \|_2^2 + \lambda^2 \sum_{j=1}^{N_x} x_j \ln(x_j) \cdot (\Delta \ln d)_j \cdot (\Delta \ln m)_j \right\},    (51)

where λ is a regularization parameter. Minimizing this expression requires the use of non-linear solvers, through which constraints can be applied. However, as the convergence of non-linear solvers can be difficult and slow, the literature suggests that this approach be avoided (Carfora, et al., 1998; Amato, et al., 1995). These challenges are amplified for the higher-dimensional case here, such that this approach is not given further consideration in this work. One can, however, also consider the problem as a constrained optimization problem, in which

x^{\rm ME} = \arg\max_x \{ H(x) \} \quad {\rm s.t.} \quad Ax = b, \; x \geq 0.    (52)

Not only does this avoid the use of a regularization parameter, but such an approach also yields a problem that can be solved using the multiplicative algebraic reconstruction technique (MART) (Lent, 1976; Elfving, 1980), originally proposed by Gordon et al. (1970). In its normal implementation, the MART scheme updates x according to

x_j^{k+1} = \exp\!\left( w A_{ij} d_i^k \right) x_j^k = \left( \frac{b_i}{a_i x^k} \right)^{w A_{ij}} x_j^k,    (53)

where w is a weight or relaxation parameter, here chosen to be unity (corresponding to the value for which convergence to the maximum entropy result has been demonstrated (Lent, 1976)), and d_i^k is the fidelity term, here taken as

d_i^k = \ln\!\left( \frac{b_i}{a_i x^k} \right)    (54)

to yield the latter form of Eq. (53). Iterations proceed in an analogous manner to Twomey's approach, first iterating over each entry in b and then repeating, such that i = (k modulo Db) + 1. This can be sped up by considering the block implementation of MART (Willis, 2000),

x_j^{k+1} = \exp\!\left( \sum_i w A_{ij} d_i^k \right) x_j^k = \exp\!\left( w \, a_j^{\rm T} d^k \right) x_j^k,    (55)

where a_j^T is the jth column of A, or further vectorized as

x^{k+1} = \exp\!\left( w A^{\rm T} d^k \right) \odot x^k,    (56)

where ⊙ is the Hadamard or elementwise product. While the result of each iteration differs from the normal implementation, the block implementation will converge to the same result overall. (In fact, it has been shown that this latter form solves the same problem using a different scheme (Elfving, 1980).) Robust results are expected if 0 ≤ Aij ≤ 1.
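The block update of Eq. (56) translates directly to MATLAB, as in the following sketch; entries of A are assumed pre-scaled to [0, 1], x0 is a positive initial guess, and zero entries of the data are guarded when evaluating the logarithm.

% Sketch: vectorized (block) MART iterations, Eqs. (54) and (56), with w = 1.
w = 1;
x = x0;
for iter = 1:20                     % fixed number of block iterations
    y  = A * x;                     % forward projection, a_i * x^k for all i
    dk = log(b ./ max(y, eps));     % fidelity term, Eq. (54)
    dk(b <= 0) = 0;                 % skip empty data entries
    x  = exp(w * (A' * dk)) .* x;   % block update, Eq. (56)
end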


Figure 7 shows the MART solution for the demonstration phantom. Solutions are found in a shorter amount of time than Twomey reconstructions, requiring fewer iterations and completing a single iteration at a faster rate. Results share many of the same characteristics that are observed in the Twomey results, including considerable speckle and ridges around the locations where the data are located. Accordingly, the Euclidean error for the MART reconstruction is very similar to that of the Twomey reconstruction, though being achieved in less time. It is worth noting that, as the solution contains similar fluctuations to the Twomey reconstruction, one could consider modifications in which the solution is smoothed and the MART procedure is repeated, analogous to Twomey-Markowski. It was also noted that, as the data deviated further from being square, the MART reconstructions occasionally had difficulties reconstructing the larger particle peak and became unstable. Thus, while speeding computation, more care must be taken to ensure that the solution is reasonable.

4.4 Tikhonov regularization

4.4.1 Overview

Tikhonov regularization, also known as Phillips-Twomey regularization, represents one of the most common forms of regularization for inverse problems and has been used to supplement aerosol particle size inversion for several decades (Wolfenbarger & Seinfeld, 1990; Lesnic, et al., 1995; Crump & Seinfeld, 1981; Bashurova, et al., 1991; Wang, et al., 2006; Talukdar & Swihart, 2003). In its discrete form, Tikhonov regularization introduces an additional regularization term to the least-squares approach, such that

x^{\rm Tk} = \arg\min_x \left\{ \| L^{\rm b} (Ax - b) \|_2^2 + \lambda^2 \Phi(x) \right\},    (57)

where Φ(x) is a penalty term and λ is the regularization parameter indicating the significance of the penalty term. Often, Tikhonov regularization is taken to refer specifically to the case where the regularization term is designed to minimize some order of derivative, resulting in

x^{\rm Tk} = \arg\min_x \left\{ \| L^{\rm b} (Ax - b) \|_2^2 + \| L^{\rm Tk} x \|_2^2 \right\},    (58)

where L^Tk is called the Tikhonov matrix. This can be equally phrased in terms of a least-squares problem of the form,

\begin{bmatrix} L^{\rm b} A \\ L^{\rm Tk} \end{bmatrix} x = \begin{bmatrix} L^{\rm b} b \\ 0 \end{bmatrix},    (59)

or as introducing an additional set of Dx equations,

L^{\rm Tk} x = 0.    (60)

The structure of L^Tk then depends on the order of Tikhonov regularization being applied. For example, 0th order Tikhonov acts to promote small solutions by introducing equations of the form

\lambda x_j = 0,    (61)

such that Eq. (60) becomes L^Tk = λI, where I is the identity matrix. First order Tikhonov seeks to minimize the differences between adjacent nodes, thereby promoting smooth solutions. For the two-dimensional case, this is equivalent to introducing equations of the form

\lambda \left( x_j - \frac{1}{2} x_{j+1} - \frac{1}{2} x_{j+N_{\rm m}} \right) = 0.    (62)

This can be extended indefinitely, with 2nd order Tikhonov regularization acting to minimize second differences, and so on. In any case, solving these equations amounts to a weighted least-squares problem. As before, a modification to the normal equations can be used to solve this system algebraically,

x^{\rm Tk} = \left( \begin{bmatrix} L^{\rm b} A \\ L^{\rm Tk} \end{bmatrix}^{\rm T} \begin{bmatrix} L^{\rm b} A \\ L^{\rm Tk} \end{bmatrix} \right)^{-1} \begin{bmatrix} L^{\rm b} A \\ L^{\rm Tk} \end{bmatrix}^{\rm T} \begin{bmatrix} L^{\rm b} b \\ 0 \end{bmatrix} = \left[ (L^{\rm b} A)^{\rm T} L^{\rm b} A + (L^{\rm Tk})^{\rm T} L^{\rm Tk} \right]^{-1} (L^{\rm b} A)^{\rm T} L^{\rm b} b.    (63)
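The following MATLAB sketch assembles a 1st order Tikhonov matrix per Eq. (62) and solves the stacked system of Eq. (59) with a non-negativity constraint via lsqnonneg. The grid dimension Nd and the convention of simply truncating the difference equations at the grid boundary are assumptions for illustration.

% Sketch: 1st order Tikhonov regularization via the stacked system, Eq. (59).
lambda = 1;                          % regularization parameter
Dx = size(A, 2);                     % number of elements in the grid
Nd = 64;                             % elements in the first grid dimension (assumed)

o = ones(Dx, 1);
LTk = lambda * spdiags([o, -0.5*o, -0.5*o], [0, 1, Nd], Dx, Dx);  % Eq. (62)

xTk = lsqnonneg([Lb * A; full(LTk)], [Lb * b; zeros(Dx, 1)]);     % Eq. (59)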

Figure 7. (a) Initial guess for the iterative methods, matching that used to initialize the Twomey approach, and (b) the maximum entropy solution generated using the MART algorithm.

Figure 8. The true error in x versus the regularization parameter, λ, for 0th, 1st, and 2nd order Tikhonov. Also indicated are the values of the regularization parameter selected by the L-curve approach (diamonds) and the Hanke-Raus rule (squares). The authors note that the interior point algorithm was used in place of a Lagrange multiplier approach due to increased stability in the under-regularized cases.

However, in practice, Lagrange's method or iterative solvers should be used so as to apply non-negativity constraints.

4.4.2 Regularization parameter

It is well-established that Tikhonov regularized solutions will vary widely depending on the value of the regularization parameter, λ, necessitating a brief note. Here, where the true distribution is known, one can determine λ by some form of error metric, e.g. the Euclidean error. In real applications, the choice of the regularization parameter must often be established by heuristic methods. For example, the L-curve approach (Hansen, 2001) takes the optimal regularization parameter as that existing at the cusp of the L-shaped curve formed by plotting ||Ax − b||₂² as a function of ||x||₂². While visual inspection of these curves to determine the optimal regularization parameter is straightforward, determining the location of this cusp by automated methods in the presence of noise can be challenging. This is particularly true given that the unregularized problem is rank deficient, such that noise in the under-regularized cases will make consideration of the second derivative and spline fitting (Talukdar & Swihart, 2003) challenging. The Hanke-Raus rule (Hanke & Raus, 1996), in contrast, can be easily automated and defines the optimal regularization parameter based on the residual as (Xu, et al., 2019)

\lambda_{\rm HR} = \arg\min_\lambda \left\{ \frac{1}{\lambda} \left\| A x^{\rm Tk}(\lambda) - b \right\|_2 \right\},    (64)

where x^Tk(λ) is the regularized solution vector as a function of λ.

Figure 8 examines the Euclidean distance between the reconstruction and the true distribution as a function of the regularization parameter for the various Tikhonov approaches. Also indicated are the regularization parameters identified as optimal by the L-curve and Hanke-Raus parameter selection rules. The heuristic rules generally underestimate the amount of regularization required to obtain the most accurate solution and result in suboptimal solutions. Likely causes of the discrepancy are the non-negativity constraint placed on the system and the iterative solvers used to enforce this constraint. Visual inspection is generally more robust, where the optimal solution removes most but not all of the artifacts from the reconstructions. Future work could focus on alternative methods of regularization parameter selection.
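A sketch of automating Eq. (64) follows, scanning a logarithmic grid of candidate regularization parameters and reusing the stacked Tikhonov solve from the previous sketch.

% Sketch: regularization parameter selection by the Hanke-Raus rule, Eq. (64).
lambdas = logspace(-2, 2, 25);       % candidate regularization parameters
res = zeros(size(lambdas));
for ll = 1:numel(lambdas)
    LTk_l = (lambdas(ll) / lambda) * LTk;   % rescale the matrix from above
    x_l = lsqnonneg([Lb * A; full(LTk_l)], [Lb * b; zeros(Dx, 1)]);
    res(ll) = norm(A * x_l - b) / lambdas(ll);
end
[~, imin] = min(res);
lambda_HR = lambdas(imin);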

4.4.3 Reconstructions


Figure 9 shows the Tikhonov regularized solutions for the demonstration phantom using an iterative non-negative, weighted least-squares approach and regularization parameters that minimized the Euclidean error. As with the least-squares solution in Section 4.1, both Lagrange multiplier (Lawson & Hanson, 1974) and constrained interior point (Altman & Gondzio, 1993) approaches were considered to solve the system of equations. Generally, the retrieved reconstructions were very similar between the algorithms (the difference amounts to <10% of the Euclidean error). The authors note that the interior point algorithm performed more consistently over a large range of values for the regularization parameter but underperformed in areas where the reconstruction was expected to be zero. The Lagrange multiplier approach was found to have a larger amount of variability in computation time.

Characteristics of the retrieved solutions reflect the order of regularization. The 0th order case produces jagged solutions, a consequence of not enforcing smoothness. However, by attempting to minimize the solution norm, 0th order Tikhonov regularization overcomes the large fluctuations in the unregularized least-squares solution with a similar computational effort to the standard Twomey approach (cf. Figure 3). The 1st and 2nd order cases greatly improve on 0th order Tikhonov, resulting in only very minor artifacts in the vicinity of regions that contain particles. This is in large part due to the fact that higher order Tikhonov reconstructions can fill in gaps between data by deriving information from adjacent pixels, a characteristic that is poorly represented by the other inversion methods considered here. This is reinforced by Figure 6, where the marginal distributions show a significant reduction in the fluctuations between adjacent pixels relative to the weighted least-squares and Twomey-type cases. The result is that the corresponding Euclidean error, shown in Figure 3, is significantly smaller than for any of the other reconstruction techniques, reducing the error ~37% relative to the Twomey-Markowski approach for Phantom 1. The 1st order approach edges out the 2nd order approach slightly, in that minimizing the first derivative better represents the physics (we expect the mass-mobility distribution in adjacent pixels to have similar values rather than similar slopes).

4.5 Statistical or Bayesian approaches to inversion

The methods discussed up to this point are deterministic. This is to say, for a given data set, the methods map to a single, albeit imperfect, solution. The Bayesian or statistical approach to regularization instead uses statistics to formally incorporate the data and prior information to find solutions and their uncertainties. In the Bayesian framework, the various quantities in the problem are treated as random variables, that is, quantities that follow some form of distribution from which they take random values. For example, the likelihood describes the errors that exist between the model and the data, namely the distribution of ε = Ax − b. Prior information can then be introduced by Bayes' equation,

p_{\rm po}(x \,|\, b) = \frac{ p_{\rm li}(b \,|\, x) \, p_{\rm pr}(x) }{ p(b) } \propto p_{\rm li}(b \,|\, x) \, p_{\rm pr}(x),    (65)

where ppo(x|b) is the posterior distribution; pli(b|x) is the likelihood; ppr(x) is the prior; and p(b) is the evidence. When combined with the likelihood, the optimal solution will maximize the posterior distribution (that is, the maximum a posteriori estimate or MAP),

x^{\rm MAP} = \arg\max_x \{ p_{\rm po}(x \,|\, b) \} = \arg\max_x \{ p_{\rm li}(b \,|\, x) \, p_{\rm pr}(x) \}.    (66)

In the Bayesian framework, this represents an optimal, statistically regularized solution. Priors vary depending on the application. The most prominent prior in aerosol inversion is the Bayesian interpretation of Tikhonov regularization (Voutilainen, et al., 2001; Ramachandran & Kandlikar, 1996), which will return the same reconstructions as those in Section 4.4 (though one can easily use a Bayesian analysis to additionally estimate uncertainties). Work to follow will focus on these methods, including the development of novel forms of prior information for the aerosol inversion problem.

Figure 9. Constrained (a) 0th, (b) 1st, and (c) 2nd order Tikhonov regularized solutions for the mass-mobility distribution. The regularization parameter was optimized to give the lowest residual relative to the exact solution projected onto the reconstruction grid. Reconstructions correspond to the Lagrange multiplier approach, due to its better performance in regions of lower number concentrations.

Figure 10. (Top panels) Simulated phantoms, including marginalized distributions, and (bottom panels) the corresponding 1st order Tikhonov reconstructions at the optimal regularization parameter. Tikhonov reconstructions for Phantoms 2 and 3 correspond to the Lagrange multiplier approach, due to its better performance in regions of lower number concentrations. The reconstruction for Phantom 4 corresponds to the interior point algorithm due to its improved accuracy for the narrow distribution, noting that some non-zero entries remain at low mobility diameter and high particle mass (visible in the marginal mobility distribution).
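As an illustration of the Gaussian special case, the sketch below computes the MAP estimate of Eq. (66) assuming independent Gaussian measurement noise and a 1st order Tikhonov (smoothness) prior, under which maximizing the posterior reduces to an unconstrained, regularized least-squares solve; the same machinery then yields pixel-wise uncertainties from the posterior covariance. All quantities here are illustrative placeholders rather than the kernels or noise models used in this work.

```matlab
% Minimal sketch: MAP estimate (Eq. 66) for b ~ N(A*x, diag(sigma.^2)) with a
% 1st order Tikhonov prior on x. Maximizing the posterior is equivalent to
% minimizing ||Lb*(A*x - b)||^2 + lambda^2*||L1*x||^2.
n = 50;
A = rand(40, n);                          % placeholder kernel
x_true = exp(-((1:n)' - 25).^2 / 20);
sigma = 0.01 * ones(40, 1);               % assumed noise standard deviations
b = A * x_true + sigma .* randn(40, 1);

Lb = diag(1 ./ sigma);                    % matrix square root of the inverse noise covariance
lambda = 1;                               % prior (regularization) parameter
L1 = diff(eye(n));                        % 1st order Tikhonov prior matrix

x_map = [Lb * A; lambda * L1] \ [Lb * b; zeros(n - 1, 1)];   % MAP estimate

% For the Gaussian case, the posterior covariance gives uncertainty estimates:
G_po = inv(A' * (Lb' * Lb) * A + lambda^2 * (L1' * L1));
x_std = sqrt(diag(G_po));                 % pixel-wise posterior standard deviations
```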

5. Quantitative comparison of established methods for other phantoms

Consider now a quantitative comparison of these methods over a wider set of phantoms. Three additional phantoms, with the parameterizations in Table 4 and numbered in ascending order from 2 to 4, are considered here. The phantoms are shown in Figure 10, and the Euclidean error for each reconstruction technique across these phantoms is shown alongside those for the demonstration phantom in Figure 3. The marginal mobility distributions for a select range of reconstruction techniques are also included in Figure 11. In general, 1st order Tikhonov reconstructions, also shown in Figure 10 (lower panels), produce the lowest Euclidean error.

Phantom 2 is chosen to reflect a hypothetical aerosol that shares features with experimental data collected on soot from a lab-scale flare (Olfert & Rogak, 2019). This distribution is more diffuse than the other cases considered here, having geometric standard deviations for the mobility diameter and mass of 1.6 and 1.5, respectively. As a result, reconstructions generally have lower Euclidean errors than for the other phantoms considered here. The error produced by Tikhonov regularization is particularly small, with the reconstruction shown in Figure 10 being nearly identical to the original phantom except for discretization error and some small artifacts around the edges of the distribution. Reconstructions from the other techniques generally share many of the same features as those observed for the demonstration phantom. For example, the MART and Twomey reconstructions contain a considerable amount of speckle, while 2nd order Tikhonov regularization tends to produce smoother solutions. A notable difference from the demonstration phantom is that the Twomey-Markowski approach tends to produce better reconstructions, which is also reflected in lower Euclidean errors. This suggests that, under the correct circumstances, the Twomey-Markowski approach can significantly improve on the traditional Twomey technique.


Figure 11. (Bottom panels) Marginal mobility distributions, pd(log10 d), predicted by select reconstruction techniques for, from left to right, Phantom 2 (soot surrogate), Phantom 3 (Buckley et al.), and Phantom 4 (narrow distribution). (Top panels) The difference, Δpd, between the marginal distribution of the reconstruction and the true phantom. In cases where the MART reconstruction is not visible, it is indistinguishable from the Twomey reconstruction.

Phantom 3 is that of Buckley et al. (2017). Note that this requires a deviation from the form of phantom used throughout the rest of this work, in that the conditional distribution for mass is taken as Gaussian,

$$p_l(m \,|\, d;\, m_{\text{g}}, \sigma_m) = \frac{1}{(2\pi)^{1/2}\, m_{\text{g}} \sigma_m} \exp\left[ -\frac{(m - m_{\text{g}})^2}{2 (m_{\text{g}} \sigma_m)^2} \right], \tag{67}$$

where mg is defined with the mass-mobility relation, similar to the lognormal case, but σm instead acts to scale mg in the denominator and will be of order unity. The resultant phantom is much narrower than both the demonstration phantom and Phantom 2, which results in an increase in the Euclidean error across most of the techniques. A notable exception is 0th order Tikhonov regularization, which performs better in that the technique naturally allows for sharper gradients or large fluctuations between pixels, which is more reasonable for sharper distributions. This improvement does not, however, reduce the Euclidean error for 0th order Tikhonov regularization below that associated with the Twomey-type approaches, MART, or the other orders of Tikhonov regularization, indicating a persistent preference for the latter techniques.

Phantom 4 is chosen to demonstrate the ability of the various techniques to reconstruct even narrower distributions than Phantom 3. While the mobility diameter is assigned a modest distribution width of σg = 1.5, the conditional mass distributions are considered narrow, having a width of only σm = 1.05. This is representative of compact, nearly spherical aggregates that follow a lognormal size distribution. The errors in the reconstructions for this phantom greatly exceed those for the other cases, in large part because the finite grid used to represent the reconstructions poorly resolves the distribution width. The discretization error is particularly evident in the marginalized distributions in Figure 11, where it manifests as a regular oscillation that is similar in all of the reconstructions. It is also notable that the Euclidean error is more consistent between the different inversion methods in this case. This is an extension of the observations above for Phantom 3, where the large gradients of the distribution are better suited to the prior information imposed by 0th order Tikhonov regularization and, to a lesser extent, the traditional Twomey approach. As a consequence, 0th order Tikhonov outperforms all of the other techniques, though the improvement over 1st order Tikhonov is minimal, such that the latter technique remains more effective in a general sense. It is also worth noting from Figure 3c that there is a sizable difference in the Euclidean error depending on the method used in the optimization, with the interior point algorithm outperforming the Lagrange multiplier method for this one phantom.
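For reference, the sketch below evaluates the Gaussian conditional mass distribution of Eq. (67) at a single mobility diameter, using the Table 4 values for mode 1 of Phantom 3 and the mass-mobility relation m_g = k·d^Dm (in SI units, such that m_g = 41.9 fg at d = 200 nm). The mass grid bounds are illustrative.

```matlab
% Minimal sketch: evaluating Eq. (67) over a mass grid at a fixed mobility
% diameter, using Table 4 (Phantom 3, mode 1) for k, Dm, and sigma_m.
k = 5236; Dm = 3; sigma_m = 0.15;    % Table 4, Phantom 3, mode 1 (SI units for k)
d = 200e-9;                          % mobility diameter [m]
m_g = k * d^Dm;                      % geometric mean mass [kg]; 4.19e-17 kg = 41.9 fg

m = linspace(0.5, 1.5, 200)' * m_g;  % mass grid about m_g [kg]
p = exp(-(m - m_g).^2 ./ (2 * (m_g * sigma_m)^2)) ...
    ./ (sqrt(2 * pi) * m_g * sigma_m);    % Eq. (67), Gaussian in m
```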


Table 4. The parameterization of the three additional phantoms considered in Section 5. As before, the distributions are specified using dg, σg, ρeff,g|dg, σm, and Dm. The remaining quantities, k and mg|dg, are evaluated from the former values.

Distribution number and description    Mode, l    wl     dg [nm]    σg      ρeff,g|dg [kg/m3]    σm      Dm      k        mg|dg [fg]
2 - Soot surrogate                     1          -      127        1.72    626                  1.46    2.34    8,018    0.671
3 - Buckley et al. (2017)*             1          0.5    200        1.5     10,000               0.15    3       5,236    41.9
                                       2          0.5    300        2.2     1,000                0.15    3       524      14.1
4 - Narrow distr.                      1          -      125        1.5     1,000                1.05    3       524      1.02

*Differs from all of the other considered phantoms in that the conditional mass distribution is Gaussian.
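As a concrete example of how Table 4 maps onto a discretized phantom, the sketch below builds Phantom 4 on a coarse grid, assuming the bivariate lognormal form used for all phantoms other than Phantom 3: a lognormal mobility distribution multiplied by a conditional lognormal mass distribution centered on the mass-mobility relation. The grid bounds are illustrative, and the implicit expansion used here requires MATLAB R2016b or later.

```matlab
% Minimal sketch: Phantom 4 ("narrow distribution") from Table 4, assuming a
% lognormal marginal in mobility diameter and a conditional lognormal in mass.
dg = 125e-9; sig_g = 1.5;                % geometric mean/std. dev. of mobility diameter
k = 524; Dm = 3; sig_m = 1.05;           % mass-mobility relation and conditional mass width

d = logspace(-8, -6, 64);                % mobility diameter grid [m], 10-1000 nm (row)
m = logspace(-21, -16, 64)';             % mass grid [kg] (column)

gauss = @(z, mu, s) exp(-(z - mu).^2 ./ (2 * s.^2)) ./ (sqrt(2*pi) * s);

p_d   = gauss(log10(d), log10(dg), log10(sig_g));     % marginal, p(log10 d)
m_g   = k * d.^Dm;                                    % conditional median mass [kg]
p_m_d = gauss(log10(m), log10(m_g), log10(sig_m));    % conditional, p(log10 m | d)
p2d   = p_m_d .* p_d;                                 % 2D phantom, p(log10 m, log10 d)
```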


6. A short note on the effect of the data size and magnitude

Consider briefly the effect that the data has on the reconstruction accuracy, specifically the effect of the total particle counts and the number of DMA set points. As noted in Section 3.2, lower total number concentrations will have the effect of increasing the amount of noise in the signals, and vice versa. Figure 12a shows the effect of changing the total number of particles incident on the devices, with an estimate of the resultant peak number concentrations indicated on the upper axis for a 90 s scan of the DMA. The Euclidean error for the range of techniques considered here generally decreases as ntot increases, that is, as the noise level improves. The preferences between the techniques remain largely unchanged, with the notable exception that the Twomey, Twomey-Markowski, and MART approaches deteriorate more rapidly as the peak number concentration drops below 100 cm-3.

Now, instead consider adjusting the number of DMA setpoints or bins, Db,2. The reconstruction error generally decreases as the number of bins increases and more detailed information is available to the inversion. As with changes in the particle count, the relative preferences between the techniques remain largely unchanged. For the MART, Twomey, and Twomey-Markowski algorithms, this trend is disrupted in the upper limit as Db,2 approaches and exceeds the resolution of the reconstruction grid in mobility space (in this case, the quantity is held constant at Dx,2 = 64). When Db,2 > Dx,2, the problem can become overdetermined, such that the Twomey algorithm performs differently, introducing increased amounts of speckle in the reconstructions. As the Twomey step occurs last in the Twomey-Markowski algorithm in the present work, this speckle also causes an increase in the Euclidean error for this reconstruction technique, though to a lesser extent. Accordingly, under these conditions, there may be an increased benefit to finishing with the smoothing step instead of the Twomey refinement step. The Tikhonov reconstructions are relatively independent of Db,2, except in the very lower limit (that is, for Db,2 < 35 in the current study). This stems from the way prior information is encoded in Tikhonov regularization, specifically that the inversion attempts to minimize the nth order derivative independent of the size of the data. This independence increases the benefit of using Tikhonov regularization over other inversion approaches, as the practitioner has to worry less about the precise number of data bins.

It is worth noting that the authors only considered the effect of increasing the resolution in mobility space, rather than in both mass and mobility space. While the present results are expected to indicate the general trends in the higher-dimensional scenario, this does limit the conclusions of the present discussion. A more complete study of the relationship between the size of the data and reconstruction accuracy should be considered in the future, employing techniques such as Bayesian model selection to consider the broader implications for aerosol inversion.
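To illustrate the counting-statistics argument, the following sketch simulates how the relative noise in binned particle counts scales with ntot, here using a Gaussian approximation to Poisson counting noise (an assumption for illustration only; the precise noise model is that referenced in Section 3.2). The signal shape is a placeholder.

```matlab
% Minimal sketch: relative noise in binned counts falls as ntot increases.
ntot = 1e5;                              % total incident particles (cf. Figure 12a)
s = exp(-((1:40)' - 20).^2 / 30);        % illustrative noiseless signal shape
s = s / sum(s);                          % normalize: expected fraction per bin

counts = ntot * s + sqrt(ntot * s) .* randn(40, 1);   % Gaussian approx. to Poisson
b = counts / ntot;                       % simulated data

% Peak signal-to-noise ratio scales as sqrt(ntot*s), so halving ntot degrades
% the data everywhere, but most severely in sparsely populated bins.
snr_peak = max(ntot * s) / sqrt(max(ntot * s));
```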

7. Conclusions

This paper has compared several of the established methods for inverting tandem measurements to find aerosol particle size distributions, focusing on the two-dimensional mass-mobility distribution. Unsurprisingly, the unregularized least-squares approach is found to be entirely inadequate at retrieving the mass-mobility distribution. The current state-of-the-art, namely the Twomey-Markowski approach as implemented by Buckley et al. (2017), improves on this considerably, generating reconstructions that closely resemble the original phantom and greatly reduce the Euclidean error. This is true across a range of phantoms of varying widths.


Figure 12. The Euclidean error of the various techniques for Phantom 1 (i.e. the demonstration phantom) as a function of (a) particle count (using the number of incident particles, ntot, as a surrogate) and (b) the number of bins or setpoints for the DMA, Db,2. Errors correspond to the optimal cases for the Tikhonov and Twomey-Markowski approaches, where λ and Sf, respectively, are chosen to minimize the Euclidean error. In both cases, the errors for the Twomey scheme are nearly identical to the MART approach throughout. For (a), the upper axis indicates the peak number concentration in the measurement assuming a scan time of 90 s.


It is also noted that 1st order Tikhonov regularization appears to consistently outperform the other inversion techniques (with the notable exception of exceedingly narrow phantoms, where 0th order Tikhonov provides a modest improvement), reducing the Euclidean error in the range of 20-50% relative to the Twomey-Markowski approach. The authors note that this corresponds to the best scenario for both of the inversion approaches, which may be difficult to achieve in practice. This advantage is expected to become more significant as the data becomes increasingly sparse, a result of Tikhonov regularization better reconstructing the mass-mobility distribution in regions away from DMA and CPMA setpoints. Thus, practitioners should generally consider using 1st order Tikhonov regularization before implementing other techniques.

This study also noted that, while the MART algorithm is generally more computationally efficient than the Twomey-type approaches and often produces reconstructions of a similar quality without any added heuristics, the method is not always stable and occasionally diverges from the expected mass-mobility distribution. This can even be the case when the standard convergence criteria of the algorithm are met. Thus, practitioners must take care when applying the approach.

It is worth noting that the presented methods are generally extendable to different kinds of tandem measurements, including but not limited to combining particle mass or mobility classification with the aerodynamic aerosol classifier (AAC) or the single-particle soot photometer (SP2), or even tandem DMA measurements. In most of these cases, this involves implementing a new kernel and reconstructing in a different space (e.g. aerodynamic diameter and mobility diameter space for tandem AAC-DMA measurements). As the characteristics of the distributions remain similar (e.g. the distributions are normally smooth and are approximately normally- or lognormally-distributed) and the kernels have similar effects (often still accounting for some form of spreading in the data relative to the true distribution), the methods are expected to perform similarly. Other tandem measurements may benefit from specialized treatments to improve reconstruction accuracy. For CPMA-SP2 inversion, for example, the refractory black carbon mass cannot exceed the total particle mass (Broda, et al., 2018). This should be encoded in the inversion procedure (e.g. by restricting the reconstruction to the physical region of the grid, as sketched below), where a blanket 1st order Tikhonov procedure will underperform, smoothing the distribution over this cut-off. It is also worth noting that, as with the inversion procedures presented here, the quality of the reconstructions will be significantly impacted by the quality of the kernel, which can vary depending on the devices considered.

Work to follow will focus on introducing advanced methods of regularizing the two-dimensional aerosol particle size distribution, particularly examining Bayesian-based approaches for robustly encoding relevant prior information. Future work should also consider different finite representations of the mass-mobility distribution, including implementation of the finite element method and possibly the consideration of a triangular mesh. In fact, structure in the data itself could be used to refine non-rectangular meshes in regions where particles are measured. This is particularly true in the case of narrow distributions, where discretization errors dominate.
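As a rough illustration of the constraint just described (an assumption-laden sketch, not the method of Broda et al. (2018)), infeasible grid elements, where the refractory black carbon mass would exceed the total particle mass, can simply be excluded from the inversion by dropping the corresponding kernel columns:

```matlab
% Minimal sketch: enforcing m_rBC <= m_total in a hypothetical CPMA-SP2
% inversion by restricting the reconstruction to the physical region.
m_rbc = logspace(-19, -16, 32);          % rBC mass grid [kg] (illustrative)
m_tot = logspace(-19, -16, 32);          % total particle mass grid [kg]
[M_rbc, M_tot] = ndgrid(m_rbc, m_tot);
feasible = M_rbc(:) <= M_tot(:);         % rBC mass cannot exceed total mass

A = rand(50, numel(feasible));           % placeholder kernel over the full 2D grid
b = rand(50, 1);                         % placeholder data

A_c = A(:, feasible);                    % drop columns for unphysical pixels, so
                                         % no smoothing can cross the cut-off
x = zeros(numel(feasible), 1);
x(feasible) = lsqnonneg(A_c, b);         % non-negative solve on the feasible set
```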

Data availability

MATLAB code associated with this work is provided in a GitHub repository (https://github.com/tsipkens/mat-2d-aerosol-inversion) and is archived with Zenodo (doi: 10.5281/zenodo.3520880). This code is designed to replicate the results of this paper. This includes implementations of the Tikhonov, Twomey, Twomey-Markowski, and MART approaches to reconstructing the mass-mobility distribution and a copy of the code for evaluating the CPMA transfer function, which was distributed with the work of Sipkens et al. (2019). The functions to evaluate the Twomey-Markowski approach, among several other aspects of the code, are taken from the code provided by Buckley et al. (2017). More information is provided with the README in the repository.

Acknowledgements

This research has been funded by NSERC (2015-05905, PDF-516743-2018) and the Killam Foundation (Postdoctoral Fellowship). The authors would also like to acknowledge Samuel Grauer for useful discussions, particularly in reference to the implementation of the MART algorithm.

ORCID

Timothy A. Sipkens · https://orcid.org/0000-0003-1719-7105
Jason S. Olfert · https://orcid.org/0000-0002-9681-5428

References

Abed-Navandi, M., Berner, A. & Preining, O., 1976. In: B. Y. H. Liu, ed. Fine Particles. New York: Academic Press, p. 447.


Afroughi, M. J., Falahati, F., Kostiuk, L. W. & Olfert, J. S., 2019. Properties of carbon black produced by the thermal decomposition of methane in the products of premixed flames. J. Aerosol Sci., Volume 131, pp. 13-27.
Altman, A. & Gondzio, J., 1993. Symmetric indefinite systems for interior point methods. Math. Program., 58(1-3), pp. 1-32.
Amato, U., Carfora, M. F., Cuomo, V. & Serio, C., 1995. Objective algorithms for the aerosol problem. Appl. Opt., 34(24), pp. 5442-5452.
Bashurova, V. S., Koutzenogil, K. P., Pusep, A. Y. & Shokhirev, N. V., 1991. Determination of atmospheric aerosol size distribution functions from screen diffusion battery data: mathematical aspects. J. Aerosol Sci., 22(3), pp. 373-388.
Beranek, J., Imre, D. & Zelenyuk, A., 2012. Real-time shape-based particle separation and detailed in situ particle shape characterization. Anal. Chem., 84(3), pp. 1459-1465.
Bond, T. C. et al., 2013. Bounding the role of black carbon in the climate system: A scientific assessment. J. Geophys. Res. Atmos., 118(11), pp. 5380-5552.
Broda, K. N. et al., 2018. A novel inversion method to determine the mass distribution of non-refractory coatings on refractory black carbon using a centrifugal particle mass analyzer and single particle soot photometer. Aerosol Sci. Technol., 52(5), pp. 567-578.
Buckley, D. T. et al., 2017. Technical note: A corrected two dimensional data inversion routine for tandem mobility-mass measurements. J. Aerosol Sci., Volume 114, pp. 157-168.
Carfora, M. F., Esposito, F. & Serio, C., 1998. Numerical methods for retrieving aerosol size distributions from optical measurements of solar radiation. J. Aerosol Sci., 29(10), pp. 1225-1236.
Charvet, A. et al., 2014. Characterizing the effective density and primary particle diameter of airborne nanoparticles produced by spark discharge using mobility and mass measurements (tandem DMA/APM). J. Nanopart. Res., 16(5), pp. 2418-2428.
Chen, X. et al., 2018. Characterization of the state of nanoparticle aggregation in non-equilibrium plasma synthesis systems. J. Phys. D: Appl. Phys., 51(33), p. 335203.
Colton, D. & Kress, R., 2012. Inverse acoustic and electromagnetic scattering theory. New York: Springer Science & Business Media.
Crump, J. G. & Seinfeld, J. H., 1981. A new algorithm for inversion of aerosol size distribution data. Aerosol Sci. Technol., 1(1), pp. 15-34.
Davies, C. N., 1945. Definitive equations for the fluid resistance of spheres. Proc. Phys. Soc. London, 57(4), pp. 259-270.
DeCarlo, P. F. et al., 2004. Particle morphology and density characterization by combined mobility and aerodynamic diameter measurements. Part 1: Theory. Aerosol Sci. Technol., 38(12), pp. 1185-1205.
Ehara, K., Hagwood, C. & Coakley, K. J., 1996. Novel method to classify aerosol particles according to their mass-to-charge ratio—aerosol particle mass analyser. J. Aerosol Sci., 27(2), pp. 217-234.
Elfving, T., 1980. On some methods for entropy maximization and matrix scaling. Linear Algebra Appl., Volume 34, pp. 321-339.
Ghazi, R. et al., 2013. Mass, mobility, volatility, and morphology of soot particles generated by a McKenna and inverted burner. Aerosol Sci. Technol., 47(4), pp. 395-405.
Gopalakrishnan, R., Meredith, M. J., Larriba-Andaluz, C. & Hogan Jr, C. J., 2013. Brownian dynamics determination of the bipolar steady state charge distribution on spheres and non-spheres in the transition regime. J. Aerosol Sci., Volume 63, pp. 126-145.
Gordon, R., Bender, R. & Herman, G. T., 1970. Algebraic reconstruction techniques (ART) for three-dimensional electron microscopy and X-ray photography. J. Theor. Biol., 29(3), pp. 471-481.
Graves, B. M., Koch, C. R. & Olfert, J. S., 2017. Morphology and volatility of particulate matter emitted from a gasoline direct injection engine fuelled on gasoline and ethanol blends. J. Aerosol Sci., Volume 105, pp. 166-178.
Gunn, R. & Woessner, R. H., 1956. Measurements of the systematic electrification of aerosols. J. Colloid Sci., 11(3), pp. 254-259.
Hanke, M. & Raus, T., 1996. A general heuristic for choosing the regularization parameter in ill-posed problems. SIAM J. Sci. Comput., 17(4), pp. 956-972.
Hansen, P. C., 2001. The L-curve and its use in the numerical treatment of inverse problems. In: Computational Inverse Problems in Electrocardiology. s.l.: WIT Press, pp. 119-142.
Johnson, T. J. et al., 2014. Steady-state measurement of the effective particle density of cigarette smoke. J. Aerosol Sci., Volume 75, pp. 9-16.
Kandlikar, M. & Ramachandran, G., 1999. Inverse methods for analysing aerosol spectrometer measurements: a critical review. J. Aerosol Sci., 30(4), pp. 413-437.
Kim, J. H., Mulholland, G. W., Kukuck, S. R. & Pui, D. Y., 2005. Slip correction measurements of certified PSL nanoparticles using a nanometer differential mobility analyzer (Nano-DMA) for Knudsen number from 0.5 to 83. J. Res. Nat. Inst. Stand. Technol., 110(1), pp. 31-54.
Knutson, E. O. & Whitby, K. T., 1975. Aerosol classification by electric mobility: apparatus, theory, and applications. J. Aerosol Sci., 6(6), pp. 443-451.
Kuwata, M., 2015. Particle classification by the tandem differential mobility analyzer–particle mass analyzer system. Aerosol Sci. Technol., 49(7), pp. 508-520.
Kuwata, M. & Kondo, Y., 2009. Measurements of particle masses of inorganic salt particles for calibration of cloud condensation nuclei counters. Atmos. Chem. Phys., 9(16), pp. 5921-5932.
Lawson, C. L. & Hanson, R. J., 1974. Chapter 23. In: Solving Least-Squares Problems. Upper Saddle River, NJ: Prentice Hall, p. 161.
Lent, A., 1976. Maximum entropy and multiplicative ART. Toronto, s.n.
Lesnic, D., Elliott, L. & Ingham, D. B., 1995. An inversion method for the determination of the particle size distribution from diffusion battery measurements. J. Aerosol Sci., 26(5), pp. 797-812.
Liu, B. Y. & Pui, D. Y., 1973. A submicron aerosol standard and the primary, absolute calibration of the condensation nuclei counter. J. Colloid Interface Sci., 47(1), pp. 155-171.
Markowski, G. R., 1987. Improving Twomey's algorithm for inversion of aerosol measurement data. Aerosol Sci. Technol., 7(2), pp. 127-141.
McMurry, P. H., Wang, X., Park, K. & Ehara, K., 2002. The relationship between mass and mobility for atmospheric particles: A new technique for measuring particle density. Aerosol Sci. Technol., 36(2), pp. 227-238.
Myhre, G. et al., 2013. Anthropogenic and natural radiative forcing. In: T. F. Stocker, et al. eds. Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge: Cambridge University Press.
Olfert, J. et al., 2017. Effective density and volatility of particles sampled from a helicopter gas turbine engine. Aerosol Sci. Technol., 51(6), pp. 704-714.
Olfert, J. & Rogak, S., 2019. Universal relations between soot effective density and primary particle size for common combustion sources. Aerosol Sci. Technol., 53(5), pp. 485-492.
Olfert, J. S. & Collings, N., 2005. New method for particle mass classification—the Couette centrifugal particle mass analyzer. J. Aerosol Sci., 36(11), pp. 1338-1352.


Olfert, J. S., Reavell, K. S., Rushton, M. G. & Collings, N., 2006. The experimental transfer function of the Couette centrifugal particle mass analyzer. J. Aerosol Sci., 37(12), pp. 1840-1852.
Olfert, J. S., Symonds, J. P. R. & Collings, N., 2007. The effective density and fractal dimension of particles emitted from a light-duty diesel vehicle with a diesel oxidation catalyst. J. Aerosol Sci., 38(1), pp. 69-82.
Park, K., Cao, F., Kittelson, D. B. & McMurry, P. H., 2003. Relationship between particle mass and mobility for diesel exhaust particles. Environ. Sci. Technol., 37(3), pp. 577-583.
Park, K. et al., 2008. Tandem measurements of aerosol properties—A review of mobility techniques with extensions. Aerosol Sci. Technol., 42(10), pp. 801-816.
Park, K., Kittelson, D. B. & McMurry, P. H., 2003. A closure study of aerosol mass concentration measurements: comparison of values obtained with filters and by direct measurements of mass distributions. Atmos. Environ., 37(9-10), pp. 1223-1230.
Pratsinis, S. E., 1998. Flame aerosol synthesis of ceramic powders. Prog. Energy Combust. Sci., 24(3), pp. 197-219.
Quiros, D. C. et al., 2015. Particle effective density and mass during steady-state operation of GDI, PFI, and diesel passenger cars. J. Aerosol Sci., Volume 83, pp. 39-54.
Ramachandran, G. & Kandlikar, M., 1996. Bayesian analysis for inversion of aerosol size distribution data. J. Aerosol Sci., 27(7), pp. 1099-1112.
Rawat, V. K. et al., 2016. Two dimensional size–mass distribution function inversion from differential mobility analyzer–aerosol particle mass analyzer (DMA–APM) measurements. J. Aerosol Sci., Volume 92, pp. 70-82.
Scheckman, J. H., McMurry, P. H. & Pratsinis, S. E., 2009. Rapid characterization of agglomerate aerosols by in situ mass–mobility measurements. Langmuir, 25(14), pp. 8248-8254.
Schmid, O. et al., 2007. On the effective density of non-spherical particles as derived from combined measurements of aerodynamic and mobility equivalent size. J. Aerosol Sci., 38(4), pp. 431-443.
Shapiro, M. et al., 2012. Characterization of agglomerates by simultaneous measurement of mobility, vacuum aerodynamic diameter and mass. J. Aerosol Sci., Volume 44, pp. 24-45.
Shin, W. G. et al., 2009. Friction coefficient and mass of silver agglomerates in the transition regime. J. Aerosol Sci., 40(7), pp. 573-587.
Sipkens, T. A., Hadwin, P. J., Grauer, S. J. & Daun, K. J., 2017. General error model for analysis of laser-induced incandescence signals. Appl. Opt., 56(30), pp. 8436-8445.
Sipkens, T. A., Olfert, J. S. & Rogak, S. N., 2019. New approaches to calculate the transfer function of particle mass analyzers. Aerosol Sci. Technol., DOI: 10.1080/02786826.2019.1680794.
Stolzenburg, M. R., 2018. A review of transfer theory and characterization of measured performance for differential mobility analyzers. Aerosol Sci. Technol., 52(10), pp. 1194-1218.
Talukdar, S. S. & Swihart, M. T., 2003. An improved data inversion program for obtaining aerosol size distributions from scanning differential mobility analyzer data. Aerosol Sci. Technol., 37(2), pp. 145-161.
Taylor, G. I., 1923. VIII. Stability of a viscous liquid contained between two rotating cylinders. Philos. Trans. R. Soc. London, Ser. A, 223(605-615), pp. 289-343.
Twomey, S., 1975. Comparison of constrained linear inversion and an iterative nonlinear algorithm applied to the indirect estimation of particle size distributions. J. Comput. Phys., 18(2), pp. 188-200.
Voutilainen, A., Kolehmainen, V. & Kaipio, J. P., 2001. Statistical inversion of aerosol size measurement data. Inverse Prob. Eng., 9(1), pp. 67-94.
Voutilainen, A., Kolehmainen, V., Stratmann, F. & Kaipio, J. P., 2001. Computational methods for the estimation of the aerosol size distributions. In: Mathematical Modeling: Problems, Methods, Applications. New York: Springer Science & Business Media, pp. 219-230.
Wang, Y., 2008. An efficient gradient method for maximum entropy regularizing retrieval of atmospheric aerosol particle size distribution function. J. Aerosol Sci., 39(4), pp. 305-322.
Wang, Y. et al., 2006. Regularized inversion method for retrieval of aerosol particle size distribution function in W1,2 space. Appl. Opt., 45(28), pp. 7456-7467.
Wiedensohler, A., 1988. An approximation of the bipolar charge distribution for particles in the submicron size range. J. Aerosol Sci., 19(3), pp. 387-389.
Wiedensohler, A., Lütkemeier, E., Feldpausch, M. & Helsper, C., 1986. Investigation of the bipolar charge distribution at various gas conditions. J. Aerosol Sci., 17(3), pp. 413-416.
Willis, M., 2000. Algebraic reconstruction algorithms for remote sensing image enhancement, s.l.: Brigham Young University.
Wolfenbarger, J. K. & Seinfeld, J. H., 1990. Inversion of aerosol size distribution data. J. Aerosol Sci., 21(2), pp. 227-247.
Xu, L., Li, L., Wang, W. & Gao, Y., 2019. CT image reconstruction algorithms based on the Hanke Raus parameter choice rule. Inverse Prob. Sci. Eng., pp. 1-17.
Yee, E., 1989. On the interpretation of diffusion battery data. J. Aerosol Sci., 20(7), pp. 797-811.


Highlights

1. 2D representations of aerosol distributions contain large amounts of information
2. Twomey-type approaches can adequately invert tandem measurements
3. Tikhonov regularization, particularly of the first order, outperforms Twomey
4. Maximum entropy methods, such as MART, generally underperform
5. Discretization errors are significant in reconstructions of sharp phantoms