Journal Pre-proof

Fully Resolved Scalar Transport for High Prandtl Number Flows using Adaptive Mesh Refinement

A. Panda, E.A.J.F. Peters, M.W. Baltussen, J.A.M. Kuipers

PII: S2590-1400(19)30054-1
DOI: https://doi.org/10.1016/j.cesx.2019.100047
Reference: CESX 100047
To appear in: Chemical Engineering Science: X
Received Date: 2 August 2019
Revised Date: 9 October 2019
Accepted Date: 14 October 2019

Please cite this article as: A. Panda, E.A.J.F. Peters, M.W. Baltussen, J.A.M. Kuipers, Fully Resolved Scalar Transport for High Prandtl Number Flows using Adaptive Mesh Refinement, Chemical Engineering Science: X (2019), doi: https://doi.org/10.1016/j.cesx.2019.100047
© 2019 Published by Elsevier Ltd.
Fully Resolved Scalar Transport for High Prandtl Number Flows using Adaptive Mesh Refinement

A. Panda, E.A.J.F. Peters*, M.W. Baltussen, J.A.M. Kuipers

Multiphase Reactors Group, Department of Chemical Engineering and Chemistry, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands.

Abstract

In multiphase systems, boundary layers occur at fluid-fluid or fluid-solid interfaces. In a direct numerical simulation, the grid requirements are often dictated by the thickness of these boundary layers. Systems that are characterized by a high Prandtl (or Schmidt) number exhibit temperature (or mass) boundary layers that are much thinner than the momentum boundary layers. In this paper, a hybrid computational approach is presented that uses a fixed Cartesian grid for the Navier-Stokes and continuity equations and an adaptive mesh for scalar transport, thus reducing the memory and CPU requirements tremendously while resolving all boundary layers. We describe the key aspects that need to be addressed in this hybrid approach, related to discretization, grid mapping and velocity interpolation, along with detailed verification tests. Finally, the robustness and accuracy of our hybrid methodology is demonstrated for forced-convection heat transfer over stationary spherical particles at high Prandtl numbers.

Keywords: adaptive mesh refinement, high Prandtl number, high Schmidt number, boundary layers
1. Introduction

Numerous examples exist in nature of fluid flow systems with associated scalar transport, such as dilution of pollutants in air (Zhong et al., 2016), dissolution of nutrients in estuaries (Andričević and Galešić, 2018), tracer studies for reactor characterization (Dixon et al., 2006), mantle convection (Stadler et al., 2010), and heat transfer and dispersion in porous media (Xu et al., 2019). Scalars can be classified as active or passive depending on whether or not they alter the flow field. Numerical simulations of such systems are ubiquitous and of paramount importance to gain fundamental understanding of the underlying phenomena. Very often, the scalar quantity of interest is the concentration of a certain species, or the temperature.

* Corresponding author. Email address: [email protected] (E.A.J.F. Peters)

Preprint submitted to Chemical Engineering Science, October 18, 2019.
Direct Numerical Simulation (DNS) constitutes an important and powerful tool to study complex multiphase flow systems. In DNS, the Navier-Stokes equations are solved to obtain the velocity distribution of the fluid flow, whereas an associated convection-diffusion equation is solved to obtain the scalar field. To properly resolve the boundary layers numerically, an important requirement for DNS is that the mesh size must be smaller than the smallest length scale, which in turn depends on the molecular transport coefficients of the quantities that are being solved. It follows from boundary layer theory that, in the laminar case, the boundary layer thicknesses for momentum (δ_mom), temperature (δ_T) and mass (δ_m) are related as δ_T = Pr^-0.5 δ_mom and δ_m = Sc^-0.5 δ_mom, where Pr and Sc are the Prandtl and Schmidt numbers, respectively. Both numbers are a ratio of diffusivities. When the scalar of interest is the temperature, the Prandtl number is defined as Pr = ν/α, where ν is the momentum diffusivity (i.e. kinematic viscosity) and α is the thermal diffusivity. For many fluids, e.g. water, oils, polymer melts and the earth mantle, Pr > 1 and the thermal boundary layer thickness is much smaller than the momentum boundary layer thickness. Ostilla-Monico et al. (2015) demonstrated that even for Pr ≈ 1 the practical grid requirement for scalar transport is higher than that for momentum transport. For mass transfer the Schmidt number is defined as Sc = ν/D, with D the diffusion coefficient. In liquids, momentum can diffuse via intermolecular collisions, while for mass diffusion, molecules need to physically move in a crowded environment. This difference in diffusion mechanisms gives Sc ≫ 1.
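As a quick numerical illustration of these scalings (a minimal sketch; the boundary-layer values are illustrative and not taken from the simulations in this paper):

```python
# Laminar boundary-layer scaling: delta_T = Pr**-0.5 * delta_mom
# and delta_m = Sc**-0.5 * delta_mom.

def thermal_bl_thickness(delta_mom, Pr):
    """Thermal boundary layer thickness from the momentum one."""
    return Pr ** -0.5 * delta_mom

def mass_bl_thickness(delta_mom, Sc):
    """Mass boundary layer thickness from the momentum one."""
    return Sc ** -0.5 * delta_mom

# Illustrative values: a 1 mm momentum boundary layer at Pr = 100, Sc = 1000.
delta_mom = 1.0e-3
print(thermal_bl_thickness(delta_mom, 100.0))   # 10x thinner than delta_mom
print(mass_bl_thickness(delta_mom, 1000.0))     # ~32x thinner than delta_mom
```

The factor-of-ten (or more) thinner scalar layer is exactly what drives the grid-resolution mismatch discussed next.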
Choosing a grid fine enough to resolve the scalar field would over-resolve the momentum field. The situation is aggravated for two reasons: (a) solving for momentum transport requires a vector field (3 variables) and an associated pressure field, in comparison to a single variable needed for scalar transport; (b) in many applications, for a given resolution, most of the CPU time (≈ 90%) is spent on solving the momentum transport. Given the above arguments, it is evident that over-resolving the momentum field leads to higher memory and CPU demands. To circumvent these problems, two approaches are prevalent in the literature: 1) solving the scalar equations on a coarse grid, but using a sub-grid model for the unresolved scales; 2) fully resolving the finest scales only in regions where this is needed, by way of multiple resolution grids.
The former approach involves using a subgrid scale model with a filter length in the viscous-convective scale, similar to the approach used in large eddy simulations of turbulence, which rely on the following principle: resolve the large eddies and model the small eddies. Notable work in this direction has been carried out for high Schmidt number flows in the case of mass transfer from gas bubbles, where the boundary layer solution is approximated by an analytical profile and then added as a source term in the neighboring coarse grid beyond a certain cutoff distance from the interface (Weiner and Bothe, 2017; Aboulhasanzadeh et al., 2012). Verma and Blanquart (2013) adopted a subgrid scale model that uses a filter length to include source terms accounting for the under-resolved scalar turbulence. Though these methods are relatively simple to implement, they are sensitive to the choice of analytical function used to model the boundary layer. We therefore propose a computational approach in which different spatial resolutions for the momentum and scalar fields are used.
In the literature, several studies have been reported adopting such a hybrid computational approach. Gotoh et al. (2012) used a pseudospectral method to solve the Navier-Stokes equations on a coarse mesh and a finite difference discretization of the scalar convection-diffusion equation on a (substantially) finer mesh. They report a reduction of CPU time by 26% for Sc = 1 and 76% for Sc = 50. While interpolating the velocity field from the coarser mesh to the finer mesh, one of the key constraints that needs to be obeyed is the divergence-free condition. Ostilla-Monico et al. (2015) and Chong et al. (2018) proposed a similar approach using a coarser grid for the Navier-Stokes equations and a finer grid for active/passive scalar transport, with a finite difference and a finite volume approach respectively, also employing divergence-free/solenoidal interpolations. For interface capturing in multiphase flows, Gada and Sharma (2011) used a dual-grid method where a finer grid was used to resolve the level set and the temperature field while the coarser grid was used to resolve the momentum/velocity field.
In the works cited above, the authors have used a uniformly refined grid for scalar transport in the multiple resolution techniques or dual grid methods. The uniform refinement method, though not very efficient, is useful at low to moderate Schmidt/Prandtl numbers, but suffers from memory and CPU requirements that grow exponentially with each level of refinement beyond the coarse grid used to solve the Navier-Stokes equations. Moreover, at higher values of Pr and Sc the solution fronts become sharper, since the molecular transport coefficient of the scalar quantity decreases, requiring a higher refinement. Considering the targeted study of forced convection heat transfer over solid particles at high Prandtl numbers, the actual location of the sharp thermal boundary layer is known a priori, i.e. at the solid surface.
Therefore, in the novel dual grid methodology proposed in this paper, an adaptive mesh is used instead of a uniformly refined Cartesian mesh for the scalar transport. Adaptive mesh refinement (AMR) has its own set of computational challenges specific to the different methods of mesh refinement. Among the many AMR strategies, notable ones are block-structured AMR, overset or Chimera grids, and tree-based AMR. A tree-based adaptive mesh framework is chosen here, in which each grid cell is a structured square/cubic cell called a quadrant (octant in 3D). This simplifies the coarse-fine grid interpolation procedures. A tree-based adaptive grid also provides the flexibility to adapt to any available features in the solution domain, e.g. walls, deformable interfaces (fluid-fluid interfaces), non-deformable interfaces (fluid-solid interfaces) or gradients present in the computed fields.
In this work, we describe the methodology and implementation of the dual grid method along with the divergence-free velocity interpolations. The methodology is subsequently verified for different theoretical problems and validated with empirical heat transfer correlations for forced convection over single and multiple solid particles.
2. Model Description

2.1. Governing Equations

For the solution of a system with an associated passive scalar transport, the equations for conservation of mass and momentum need to be solved along with the scalar convection-diffusion equation. The continuity and Navier-Stokes equations describing the velocity field, u = (u, v, w), and pressure, p, can be written as:

ρ ∂u/∂t + ρ∇·(uu) = −∇p + ρg + ∇·(μ(∇u + (∇u)^T))   (1)

∇·u = 0   (2)

where ρ and μ are the local density and viscosity, and g is the acceleration due to gravity. A regular staggered Cartesian grid is used for the numerical solution of the Navier-Stokes equations. The curved fluid-solid interface of immersed objects such as particles does not conform to the Cartesian grid, and to accurately incorporate the solid-fluid coupling for momentum, an immersed boundary method (IBM) is used. This results in a flow field that satisfies the no-slip boundary condition at the solid-fluid interface. Details addressing the implementation, verification and validation of the current IBM method can be found in the papers of Das et al. (2017b) and Deen et al. (2012).

The generalized transport equation for a passive scalar, φ, is

∂φ/∂t + ∇·(uφ) = α∇²φ + S_φ,   (3)
where α is the diffusivity: the thermal diffusivity, k/(ρC_p), in case of heat transfer (k and C_p represent the thermal conductivity and specific heat capacity, respectively) and the molecular diffusivity, D, in case of mass transfer.

Although the Navier-Stokes equations are solved on a regular Cartesian grid, the scalar transport equation is solved on a grid that can be adaptively refined as the solution proceeds in time. For the purpose of the work presented here, we restrict ourselves to the problem of heat transfer, i.e. φ is the temperature. For the representation of the velocity field on this refined grid, divergence-free interpolation of the velocity field defined on the coarse 'hydrodynamic' grid is required. For the heat transfer we restrict ourselves to the case of a constant solid temperature for the immersed particle, which is represented on the adaptive grid with a high enough resolution to have a sufficiently smooth staircase representation of the particle interface. In the subsequent subsections we focus on the discretization of the scalar convection-diffusion equation on the adaptive grid, the velocity update on the adaptive grid from the Cartesian grid, and the grid adaptivity.
2.2. Spatial Discretization

In the current dual grid method the base hydrodynamic/momentum grid is static, whereas the passive scalar grid is dynamic, featuring grid adaptation. The underlying hydrodynamic grid is comprised of a staggered Cartesian grid stored in arrays of fixed sizes, each grid point identified by a triad of indices i, j, k for the x, y, z-direction respectively. For the adaptive grid, the computational domain is represented in the form of control volumes which are square in 2D (or cubic in 3D). These elementary square (or cubic) control volumes are adapted using a quadtree (or octree) data structure (Popinet, 2003). Figure 1(a) depicts a representation of such a discretization for a 2D tree, where an adaptive grid is initiated as one root cell uniformly refined twice to give a 4 × 4 Cartesian grid. Each control volume (shown in green) is defined as a cell and can have 4 descendants (8 in case of 3D) defined as children, in which case the cell is defined as the parent cell of these children. A cell which has no parent cell is defined as the root cell (black cell in figure 1(d)), while cells that do not have any children are termed leaf cells (all numbered cells in figure 1(c,d)). The length of any cell edge is denoted by h. The level of a cell is defined with the root cell as reference level 0, children being one level higher than their parents. The actual data structure used is a so-called graded/balanced quadtree (or octree), which means that adjacent cell levels are not allowed to differ by more than one, i.e. a maximum of one hanging node is possible at any cell face. This 2:1 balance constraint is beneficial for the spatial discretization of partial differential equations. Figure 1(b) shows a quadtree that violates this criterion, since there exists a face with two hanging nodes (shown as yellow dots). The constraint restricts the number of possible cases, which is very convenient in devising efficient numerical schemes for such a spatial representation. Each cell in the adaptive tree is identified by a unique index called the Morton number, derived by bit-interleaving the i, j, k indices of the associated cell on a uniform grid at that level. For example, the quadrant corresponding to the 2D Cartesian control volume (2, 2) has a Morton index (Zid) of 3 in figure 1(a). It is worth mentioning that the Morton indexing (z-order) of the constituent control volumes of an adaptive grid leads to a non-banded pattern of matrix sparsity, even when all the cells are at the same level of refinement. The benefit of such an encoding, though, is that cells lying close to each other in physical space also lie close to each other in the z-order space.
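The bit-interleaving that generates a Morton (z-order) index can be sketched as follows (a minimal 2D sketch; the actual p4est encoding also carries the refinement level and tree identifier):

```python
def morton2d(i, j, level):
    """Interleave the bits of (i, j) into a single z-order (Morton) index.

    The x-bit of each pair is placed in the lower position; cells that are
    close in physical space then tend to be close in z-order as well.
    """
    z = 0
    for b in range(level):
        z |= ((i >> b) & 1) << (2 * b)      # x-bit
        z |= ((j >> b) & 1) << (2 * b + 1)  # y-bit
    return z

# z-order traversal of a 4x4 grid (level 2): sorting by Morton index
# visits the cells quadrant by quadrant.
order = sorted((morton2d(i, j, 2), (i, j)) for i in range(4) for j in range(4))
```

Sorting leaf cells by this key is what produces the z-order traversal, and it is also why the assembled matrices lose their banded structure.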
The base framework used for the implementation of the developed methodology is an open source library called p4est, which provides an efficient parallel tree data structure and functions to perform operations on this data structure (Burstedde et al., 2011; Isaac et al., 2015). The chronological order of operations for the methodology explained here is described by the flow diagram given in figure 2. The key building blocks are described in the next subsections.
2.3. Convection

To discuss the treatment of convection in the scalar transport equation, we focus on the following reduced equation:

∂φ/∂t + ∇·(uφ) = 0   (4)

which can be discretized using a finite volume technique, to obtain:

φ^(n+1) = φ^n + (Δt/ΔV) Σ_faces (u_f^(n+1) φ_f^n A_f)   (5)
where Δt is the time step, and ΔV and A_f respectively represent the cell volume and face area. The velocity at the cell face, u_f, is either directly available or is obtained from the coarse Cartesian grid after interpolation. The convective flux at any cell face can be computed after estimating the value of the scalar at the cell face, φ_f. Figure 3(a) shows a schematic representation of the cell configuration in the case of same-size cells, i.e. in the absence of adaptivity. The face considered for the calculation of the convective flux is shown with a dashed line. Assuming that the velocity at this face is positive, we adhere to the nomenclature defined in figure 3, where the cell being emptied is termed the donor cell (D), the cell being filled is termed the acceptor cell (A) and the cell upstream of the donor cell is termed the upwind cell (U). The respective cell sizes are h_D, h_A and h_U. The face scalar, i.e. the value of the scalar used at the face to compute the flux through that face (depicted with a green arrow in the figure), can be calculated as a function of the scalar values φ_U, φ_D and φ_A. Two schemes have been implemented here, viz. the first order upwind scheme and a total variation diminishing (TVD) min-mod scheme, also referred to as the Barton scheme (Centrella and Wilson, 1984).
Three different estimates of the face scalar are computed, using linear upwinding (φ_f^LU), linear interpolation/central differencing (φ_f^LI) and first order upwinding (φ_f^FOU). For defining these scalar values at the cell face, the cell sizes or the associated distances are used. The notation d_Af denotes the distance of the cell centre of the acceptor cell from the face under consideration, whereas d_AD denotes the distance between the cell centres of the acceptor and donor cells. The face scalar values are defined as follows:

Linear upwinding:

φ_f^LU = (1 + 0.5 h_D/d_UD) φ_D − 0.5 (h_D/d_UD) φ_U   (6)

Linear interpolation:

φ_f^LI = d_Df/(d_Df + d_Af) φ_A + d_Af/(d_Df + d_Af) φ_D   (7)

First order upwind:

φ_f^FOU = φ_D   (8)

For the first order upwind scheme the face scalar, φ_f, is simply the value φ_f^FOU, but for the total variation diminishing scheme a unique value of φ_f is determined by comparing the three distinct values, which for an outward velocity at the cell face is given as follows:

φ_f = MIN(φ_f^FOU, MAX(φ_f^LU, φ_f^LI))   if φ_A ≤ φ_D
φ_f = MAX(φ_f^FOU, MIN(φ_f^LU, φ_f^LI))   if φ_A > φ_D   (9)
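Equations (6)-(9) can be condensed into a single face-value routine; the sketch below (hypothetical helper name, assuming a positive, outward face velocity) shows how the limiter falls back to the bounded first order value near extrema:

```python
def face_scalar_barton(phi_U, phi_D, phi_A, h_D, d_UD, d_Df, d_Af):
    """Face value for the min-mod (Barton) TVD scheme, eqs. (6)-(9).

    phi_U, phi_D, phi_A : upwind, donor and acceptor cell values
    h_D        : donor cell size
    d_UD       : distance between upwind and donor cell centres
    d_Df, d_Af : distances of the donor/acceptor centres from the face
    """
    phi_LU = (1.0 + 0.5 * h_D / d_UD) * phi_D - 0.5 * (h_D / d_UD) * phi_U
    phi_LI = (d_Df * phi_A + d_Af * phi_D) / (d_Df + d_Af)
    phi_FOU = phi_D
    if phi_A <= phi_D:
        return min(phi_FOU, max(phi_LU, phi_LI))
    return max(phi_FOU, min(phi_LU, phi_LI))
```

On smooth monotone data the limiter returns the higher-order estimate; at a local extremum it reverts to φ_D, which is what keeps the scheme total variation diminishing.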
For adaptive grids, as shown in figure 3(b), the large dots for the acceptor and upwind cells correspond to the values required in the above discretization, which may not always coincide with the node centre values (shown with small dots). In that case the required values are computed using interpolation (in the direction perpendicular to the face normal) from the neighboring cells. These interpolations are also used for the Laplacian operator, which will be presented in the subsequent subsection. For the case of same-size cells neighboring a face, the convective flux u_f φ_f A_f,D is subtracted from the donor cell(s) and u_f φ_f A_f,A is added to the acceptor cell(s). Here, A_f,D and A_f,A correspond to the face area of the donor cell(s) and acceptor cell(s), respectively. In the case of a hanging node at the face, there will be multiple donor/acceptor cells with A_f,D ≠ A_f,A. Considering the case of an outward convective flux at the cell face, (Δt/ΔV_D) φ_f u_f A_f,D is subtracted from the donor cell and correspondingly added to the acceptor cell. If there are multiple acceptor cells, the flux is divided equally among the half-size cells across the face. As evident from equation 5, the term u_f φ_f A_f is multiplied by Δt/ΔV as a result of the explicit first order Euler forward treatment of the temporal term. Hence, the time step Δt is chosen such that the Courant-Friedrichs-Lewy (CFL) number (u_max Δt/Δx_min) is less than 0.33, where Δx_min is the cell size of the smallest quadrant/cell in the domain and u_max is the maximum velocity in the domain.
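The resulting time-step restriction can be sketched as follows (hypothetical helper names; the factor 0.33 is the CFL limit quoted above, and the smallest cell size follows from the base cell size and the deepest refinement level):

```python
def smallest_cell_size(h_base, max_level, base_level):
    """Cell size of the most refined quadrant/octant on the tree."""
    return h_base / 2 ** (max_level - base_level)

def cfl_timestep(u_max, dx_min, cfl_target=0.33):
    """Largest dt keeping the CFL number u_max*dt/dx_min below the target."""
    return cfl_target * dx_min / u_max
```

Note that because Δx_min shrinks by a factor of two per refinement level, each extra level halves the admissible explicit time step.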
2.4. Diffusion

To discuss the treatment of diffusion in the scalar transport equation, we focus on the following reduced equation:

∂φ/∂t = ∇·(α∇φ) + S_φ   (10)

where S_φ represents the volumetric source term. Discretization of equation 10 with a finite volume technique and a semi-implicit scheme leads to the following representation:

(φ^(n+1) − φ^n)/Δt = (1 − β)∇·(α^n ∇φ^n) + β∇·(α^n ∇φ^(n+1)) + S_φ^n   (11)
where n + 1 and n correspond to values at the new and old time step, respectively. To calculate the diffusive flux at each face, a numerical representation of the Laplacian operator ∇·(∇φ) or ∇²φ is required, which in its discrete form is given as Σ_faces ∇_f φ A_f, where ∇_f φ is the gradient of φ defined at the cell face. For a conventional Cartesian grid, this is normally evaluated by central differencing, which is second order accurate for cells of the same size. However, when this scheme is applied to a cell face with a hanging node, it is only first order accurate (Losasso et al., 2006). This type of differencing would result in schemes with an inconsistent order of convergence across the whole domain. To circumvent this problem, one needs to specifically adopt schemes that are overall second order accurate. In this paper, two different Laplacian operators, viz. ∇₁² and ∇₂², have been implemented that are both overall second order accurate but result in different global matrix representations.
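To illustrate the β-weighted scheme of equation (11), the sketch below applies it to a uniform 1D grid with fixed end values and solves the implicit part with the Thomas algorithm; this is a simplified stand-in (hypothetical helper, constant α, no adaptivity), not the authors' adaptive-grid implementation:

```python
def diffuse_step(phi, alpha, dx, dt, beta=0.5, source=0.0):
    """One semi-implicit step of d(phi)/dt = alpha*d2(phi)/dx2 + S, cf. eq. (11).

    beta = 0 is fully explicit, beta = 1 fully implicit, beta = 0.5 the
    Crank-Nicolson average. End values are held fixed (Dirichlet).
    """
    n = len(phi)
    r = alpha * dt / dx ** 2
    rhs = phi[:]                              # explicit part of the RHS
    for i in range(1, n - 1):
        rhs[i] += (1 - beta) * r * (phi[i - 1] - 2 * phi[i] + phi[i + 1]) \
                  + dt * source
    a = [-beta * r] * n                       # sub-diagonal
    b = [1 + 2 * beta * r] * n                # diagonal
    c = [-beta * r] * n                       # super-diagonal
    b[0] = b[-1] = 1.0                        # boundary rows keep phi fixed
    a[0] = c[0] = a[-1] = c[-1] = 0.0
    rhs[0], rhs[-1] = phi[0], phi[-1]
    for i in range(1, n):                     # Thomas: forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        rhs[i] -= m * rhs[i - 1]
    x = [0.0] * n                             # back substitution
    x[-1] = rhs[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (rhs[i] - c[i] * x[i + 1]) / b[i]
    return x
```

A steady linear profile is an exact solution of the discrete scheme, so it is preserved by the step; this is a convenient sanity check for any implementation of equation (11).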
While the computation of the gradient at a cell face with second order accuracy is simple on a regular Cartesian grid, it is more difficult within the adaptive framework. In practice, only two configurations are possible at a given cell face for the calculation of face-centered gradients (figure 4):

(I) Both sides of the face are at the same level.
(II) One side of the face is at one level lower than the other.

In case (I), the stencil reduces to a configuration observed in a conventional Cartesian grid and thus one can use the standard second order accurate central difference approximation. In case (II), the approaches for ∇₁² and ∇₂² differ, and these will be discussed subsequently.
2.4.1. Laplacian Operator 1

This Laplacian operator was first given by Popinet (2003) and forms an ingredient of the open source CFD code named Gerris Flow Solver. For Laplacian operator 1, the face of interest in the direction d is the one shared between the cell (C) and the neighbor (N_b), as shown in figure 5. In case (II) as described above, a second order polynomial is fitted passing through the cell C, the neighbor N_b and the opposite cell OC. The gradient at the face center can thus be calculated using the derivative of the polynomial evaluated at the examined face. Based on the configuration of the opposite cell, only two subcases can arise due to the 2:1 restriction: an undivided leaf cell at the same level as cell C, or a parent cell with 4 leaf cells (8 in 3D). If the opposite cell is divided, which happens to be the case shown in figure 5, the interpolated value φ6 is created from the 2 (or 4 in 3D) children across the opposite face by linear averaging. Similarly, the value φ6 in the double-sized neighbor N_b is interpolated using a second order polynomial fitted through its neighbors in the direction perpendicular to d. The configuration of the neighbors of the neighbor across the lower perpendicular (N⊥L) and the upper perpendicular (N⊥U) faces can again give rise to four different combinations, of which one sample configuration has been sketched in figure 5. With the definitions of φ6 and φ7 from figure 5, the expressions for an undivided and a divided opposite cell are respectively:

h ∇_f^d φ = −(1/3) φ_C − (1/5) φ_OC + (8/15) φ6   (12)

h ∇_f^d φ = −(2/9) φ_C − (8/27) φ7 + (14/27) φ6   (13)

where h is the size of the reference cell C and ∇_f^d φ is the gradient at face f in the direction d. The four different combinations possible across the perpendicular faces of the neighbor N_b give:

φ6 = (15/16) φ_Nb + (5/32) φ_N⊥L − (3/32) φ_N⊥U   if N⊥L & N⊥U are undivided
φ6 = (5/6) φ_Nb + (5/21) φ3 − (1/14) φ_N⊥U        if only N⊥U is undivided
φ6 = φ_Nb + (1/7) φ_N⊥L − (1/7) φ4                if only N⊥L is undivided
φ6 = (8/9) φ_Nb + (2/9) φ3 − (1/9) φ4             if N⊥L & N⊥U are divided   (14)

where φ3 and φ4 are constructed using linear averaging of the children values across the lower and upper perpendicular faces, respectively. Additional issues arise when extending this discretization to 3D, since there are two directions perpendicular to the direction d that need to be traversed to obtain the value of φ6. Moreover, these perpendiculars are non-intersecting, unlike in 2D, which means neither of the two interpolated values (φ6,1 and φ6,2) lies on the axis passing through the cells C and OC. In this case a unique value φ6 is computed using the relation φ6 = φ6,1 + φ6,2 − φ_Nb. The gradient for the left face of cell N_b is evaluated as minus the average of the gradients of the coinciding smaller faces (Popinet, 2003).
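The four-case interpolation of equation (14) can be sketched as a single routine (hypothetical function name and argument encoding; φ3 and φ4 are passed in as the pre-averaged children values):

```python
def interpolate_phi6(phi_Nb, lower, upper):
    """Value of phi6 inside the coarse neighbour Nb, cf. eq. (14).

    lower / upper describe the perpendicular neighbours N_perp_L / N_perp_U:
    either ('leaf', value) for an undivided cell, or ('divided', value),
    where the value (phi3 or phi4) is the linear average of the children
    adjacent to the shared face.
    """
    lo_kind, lo = lower
    up_kind, up = upper
    if lo_kind == 'leaf' and up_kind == 'leaf':
        return 15.0 / 16.0 * phi_Nb + 5.0 / 32.0 * lo - 3.0 / 32.0 * up
    if lo_kind == 'divided' and up_kind == 'leaf':
        return 5.0 / 6.0 * phi_Nb + 5.0 / 21.0 * lo - 1.0 / 14.0 * up
    if lo_kind == 'leaf' and up_kind == 'divided':
        return phi_Nb + 1.0 / 7.0 * lo - 1.0 / 7.0 * up
    return 8.0 / 9.0 * phi_Nb + 2.0 / 9.0 * lo - 1.0 / 9.0 * up
```

A quick consistency check: in each branch the coefficients sum to one, so a constant field is reproduced exactly, and a field varying linearly along the perpendicular direction is interpolated exactly as well.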
2.4.2. Laplacian Operator 2

For Laplacian operator 2, the face of interest has a hanging node, as shown in figure 4(II); the face has been sketched in a detached manner for the purpose of the discussion. Any face with a hanging node is shared by 2 (4 in 3D) small cells on one side of the face, denoted in the figure as cells S1 and S2, and one large cell, denoted as cell L. This operator was first proposed by Losasso et al. (2006), where central differencing is used to define a gradient ∇_f^d φ = (φ_L − φ_S1)/(0.75 h_L), with h_L the size of the large cell. This gradient is evaluated at the midpoint of the line joining φ_S1 and φ_L. On the basis of the assumption that O(Δx) perturbations in the scalar value location still yield consistent approximations, as elucidated in Gibou et al. (2002), one can conclude that setting ∇_f^d φ_S1 = (φ_L − φ_S1)/(0.75 h_L) still leads to convergent solutions. These authors report that the discretization is found to be convergent and of first order. More importantly, this discretization suffers from an inconsistent gradient approximation at the three coincident faces, since the gradient at the large face is given by ∇_f^d φ_L = (A_s/A_L) ∇_f^d φ_S1 + (A_s/A_L) ∇_f^d φ_S2, where A_s is the area of a smaller face and A_L is the area of the larger face. Thus the diffusive flux at the larger face in the direction d can be written as ∇_f^d φ_L A_L, and those at the smaller faces corresponding to S1 and S2 as ∇_f^d φ_S1 A_s and ∇_f^d φ_S2 A_s, respectively. A possible way to circumvent the inconsistent gradient evaluation is to impose that the gradients at the small faces must be equal to that at the large face, i.e. ∇_f^d φ_S1 = ∇_f^d φ_L and ∇_f^d φ_S2 = ∇_f^d φ_L. Care has to be taken with respect to the sign of ∇_f^d, as for the small faces the normal is in the opposite direction to the normal defined at the large face. Although the discretization is formally first order accurate, Losasso et al. (2006) report second order convergence, supported by arguments from Lipnikov et al. (2004).
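The consistent flux construction for a 2D hanging-node face can be sketched as follows (hypothetical helper; the small-face gradients are combined into the area-weighted large-face gradient, which is then imposed on all three coincident faces so that the fluxes balance):

```python
def hanging_face_fluxes(phi_L, phi_S1, phi_S2, h_L):
    """Consistent diffusive fluxes at a 2D hanging-node face.

    phi_L          : value in the large cell
    phi_S1, phi_S2 : values in the two half-size cells across the face
    h_L            : size (and face area, per unit depth) of the large cell
    Returns (flux_L, flux_S1, flux_S2), signed with the large-face normal.
    """
    A_s, A_L = 0.5 * h_L, h_L                 # small face is half the large one
    g_S1 = (phi_L - phi_S1) / (0.75 * h_L)    # Losasso et al. (2006) gradients
    g_S2 = (phi_L - phi_S2) / (0.75 * h_L)
    g_L = (A_s / A_L) * (g_S1 + g_S2)         # area-weighted average gradient
    # Impose g_L on the small faces as well, so the three fluxes are consistent.
    return g_L * A_L, g_L * A_s, g_L * A_s
```

By construction the large-face flux equals the sum of the two small-face fluxes, which is the discrete conservation property the consistency fix is meant to restore.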
2.5. Boundary Conditions

At the domain boundaries, both Dirichlet and Neumann boundary conditions are applied with second order accuracy by use of ghost cell values. For the boundary cells, a fictitious neighbor of the same size is constructed and the earlier described discretization methods are applied. In the case of Laplacian operator 1, the boundary conditions are factored in while interpolating the value φ6 (see figure 5), since the neighbors across the lower perpendicular N⊥L and the upper perpendicular N⊥U faces can also be boundary cells.
2.6. Linear Solver

When solving for both diffusion and convection using equation 3, the resulting set of simultaneous linear equations can be written as:

a_c φ_c + Σ_nb a_nb φ_nb = b_c   (15)

where the subscript 'c' corresponds to the cell center and the subscript 'nb' corresponds to the neighboring cells (neighbors of neighbors as well, in case of the Laplacian operator described in section 2.4.1), arising from the state of the quadtree/octree. The left hand side of equation 15 contains the coefficients and the unknowns φ, which arise from the implicit treatment (unknown values at the current time step) of the diffusive flux, while the right hand side contains the terms arising from the explicit treatment (using the known values from the previous time step) of the convective flux. The resulting set of simultaneous linear equations is solved using robust matrix solvers available from the open source library HYPRE (Falgout et al., 2002). Laplacian operator 1 leads to non-symmetric matrices, which are solved using the algebraic multigrid (AMG) solver boomerAMG. On the contrary, Laplacian operator 2 results in symmetric matrices, which can be efficiently solved using the PCG/BiCGSTAB/FlexGMRES class of solvers with preconditioning by boomerAMG. It is worth mentioning that the matrix resulting from Laplacian operator 2 is less dense than that obtained from Laplacian operator 1 and is easier to invert, requiring fewer iterations. Also, Laplacian operator 1 is only valid for a tree which is 2:1 corner balanced (corner cells' levels do not differ by more than one), whereas Laplacian operator 2 has no 2:1 restrictions. In both cases, the solver scales to 10³ processors based on the hybrid OpenMP+MPI model and uses parallel graph partitioning methods for load balancing.
2.7. Staggered Velocity Field and Communicator
354
An essential element of this novel dual grid methodology is the velocity
355
update on the adaptive grid from the underlying fixed Cartesian grid. The ve-
356
locity needed to calculate the convective fluxes of the scalar is available on the 14
357
hydrodynamics grid and needs to be transferred to the adaptive grid at each
358
time step. Hence, the update process needs to be time efficient. Therefore, a
359
communicator is implemented such that the velocity matrices are allocated in a
360
memory address space, shared between the hydrodynamics code and the scalar
361
transport code. This memory can be modified by the hydrodynamics code for
362
updating the velocity field and subsequently read by the scalar transport code.
363
The read-write synchronization between the two codes for this memory block
364
is facilitated by POSIX semaphores. These syncronisation semaphores can be
365
considered as locks depicted in figure 2 that lock & unlock either the hydrody-
366
namics code or the scalar transport code for operations. Since, the semaphores
367
and the shared memory are available on the physical memory employed by both
368
parts of the numerical code, it provides instantaneous communication between
369
the two grids for velocity matrix access. Another consideration in the numeri-
370
cal implementation is the choice of velocity field discretization on the adaptive
371
grids. Since, the underlying Cartesian grid contains staggered velocity field en-
372
coding, the same staggering is maintained for the adaptive grid. In the current
373
implementation, this choice leads to duplicate storage of face values among cells
374
sharing a common face but it simplifies the velocity update process.
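The lock/unlock handshake described above follows a classic producer/consumer pattern. A minimal in-process sketch is given below; the actual implementation shares memory between two separate codes via POSIX shared memory and semaphores, whereas here two threads and Python semaphores play the roles of the two codes:

```python
import threading
import numpy as np

u = np.zeros((4, 4))                  # stand-in for the shared staggered velocity array
may_write = threading.Semaphore(1)    # hydrodynamics may update the field
may_read = threading.Semaphore(0)     # scalar transport may read the field
result = []

def hydrodynamics_step():
    may_write.acquire()               # wait until the previous field was consumed
    u[:] = 1.0                        # write the new velocity field in place
    may_read.release()                # unlock the scalar transport side

def scalar_transport_step():
    may_read.acquire()                # wait for a fresh velocity field
    result.append(float(u.sum()))     # use the velocities (here: just sum them)
    may_write.release()               # allow the next velocity update

t = threading.Thread(target=hydrodynamics_step)
t.start()
scalar_transport_step()
t.join()
```

Because both sides touch the same physical memory, no copy of the velocity field is ever made; the semaphores only order the accesses.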
The velocity update on the adaptive grid has two building blocks: (a) mapping of the adaptive grid to the underlying Cartesian grid at the beginning of the simulation and (b) velocity update for cells smaller than the base Cartesian grid by employing interpolations. For the first building block, a 'forest' (Burstedde et al., 2011) consisting of one root cell (also called a tree) or multiple trees is initiated and then refined uniformly such that the resulting mesh maps exactly onto the Cartesian grid used for the flow field computation. This uniform level of refinement will be referred to as the base level. Next, a two-index identifier (three in case of 3D) is attached to each cell on the adaptive grid, which stores its corresponding i and j index (and also k in case of 3D) in the Cartesian grid. This index, once initiated, is kept intact throughout the adaptation process, which means that all children of a parent originating from grid refinement beyond the base level inherit the i and j index as is. In the current implementation, coarsening is restricted to the base level with which the simulation was started. Therefore, it is not possible to have a cell size larger than the cell size of the hydrodynamics grid. For cells at the base level, the face velocities are updated directly using the index [i, j]. For cells smaller than the base level, interpolations are used such that the resulting velocity adheres to the divergence-free criterion (equation 2). To ensure this, a piecewise linear interpolation is used to interpolate a velocity component in its flow direction, while staircase interpolation is used in the perpendicular directions. This interpolation is visualized in figure 6(b). For example, the −x face velocity of the scalar cell (shown in red), u_{x−}, is computed using the relative distances from the bounding hydrodynamic cell's (shown in blue) faces in the x−direction, i.e. d₀ and d₁, with their respective velocities U_{x−} and U_{x+} as:

u_{x−} = d₁/(d₀ + d₁) U_{x−} + d₀/(d₀ + d₁) U_{x+}
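The interpolation formula above transcribes directly; a minimal sketch (the function name is illustrative, not taken from the code described in the paper):

```python
def face_velocity(d0, d1, U_minus, U_plus):
    """Piecewise linear interpolation of a child face velocity in its flow
    direction: d0 and d1 are the distances from the child face to the parent
    cell's -x and +x faces, which carry velocities U_minus and U_plus."""
    return (d1 * U_minus + d0 * U_plus) / (d0 + d1)
```

Note that the weighting recovers the parent face values exactly when the child face coincides with a parent face (d₀ = 0 or d₁ = 0).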
2.8. Grid Adaptation

One of the most important aspects in the solution process is the grid adaptation. Grid adaptation involves three steps, in the following order:

• Refinement: Flag cells for refinement and replace each flagged parent with its children.

• Coarsening: Flag cells for coarsening and replace the flagged children with their parent.

• Balancing: After the above two operations, refine relevant cells to ensure that the forest meets the 2:1 balance criterion.
The first two steps have two user inputs that depend on the problem at hand: which cells to refine/coarsen and how to update values in the new cells upon refinement/coarsening. The latter forms the only user input for the third step, i.e. balancing. The decision on which cells need to be refined and coarsened depends on problem features such as the presence of an interface, which can be used as an indicator to flag cells in its vicinity. Another flagging criterion used in the current simulations is an error estimate, proportional to the sum of the squares of the gradients in each cell:

Σ_{i=1}^{3} (h ∇_i^c φ)² ΔV,

where h and ΔV are the cell size and cell volume, respectively, whereas ∇_i^c φ is the cell-centered
gradient in the direction i. The cell-centered gradients are computed using a min-mod approach: for a cardinal direction, the cell-centered gradient is the minimum (in magnitude) of the two bounding face-centered gradients, or, if the two face-centered gradients have opposite signs, the cell-centered gradient is taken to be 0. For example, the gradient in the x−direction is bounded by the face-centered gradients calculated at the −x and +x faces, respectively. For refinement, those cells are flagged for which the error estimate exceeds a set value. Conversely, for coarsening, if all children have their respective error estimates lower than the critical value, then the family of cells is marked for coarsening. When refinement occurs, the scalar value of each child is updated as

φ_child = φ_parent ± (h/4) ∇_x^c φ_parent ± (h/4) ∇_y^c φ_parent    (16)

where ∇_x^c φ_parent is the cell-centered gradient in the x direction and h the size of
the parent cell. The critical value of the error estimate for each cell is taken to be 10⁻⁵ ΔV in all simulations where on-the-fly grid adaptivity is enabled. During refinement the velocity is updated using the divergence-free interpolation method. Referring to figure 6(a), for a cell under refinement, the face velocities (shown in red) are computed from the parent cell's face velocities (shown in brown), which for the x face velocities can be written as

u_{i−1/2, j−1/4} = u_{i−1/2, j+1/4} = u_{i−1/2, j}    (17)

u_{i, j−1/4} = u_{i, j+1/4} = (1/2) u_{i−1/2, j} + (1/2) u_{i+1/2, j}    (18)

u_{i+1/2, j−1/4} = u_{i+1/2, j+1/4} = u_{i+1/2, j}    (19)
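Equations (17)-(19) can be checked with a small sketch: refining one 2D cell with this rule (outer child faces inherit the parent face value, the new internal face takes the flow-direction midpoint) reproduces the parent's discrete divergence in every child. Helper names here are illustrative:

```python
def refine_cell_2d(u_w, u_e, v_s, v_n):
    """Split one 2D cell into 4 children per equations (17)-(19):
    outer child faces inherit the parent face value (staircase in the
    transverse direction); internal faces take the midpoint value."""
    u_mid = 0.5 * (u_w + u_e)
    v_mid = 0.5 * (v_s + v_n)
    children = []
    for cw, ce in ((u_w, u_mid), (u_mid, u_e)):      # west column, east column
        for cs, cn in ((v_s, v_mid), (v_mid, v_n)):  # south row, north row
            children.append((cw, ce, cs, cn))
    return children

def divergence(faces, h):
    """Discrete divergence of one cell of size h from its face velocities."""
    u_w, u_e, v_s, v_n = faces
    return ((u_e - u_w) + (v_n - v_s)) / h
```

If the parent cell is divergence-free, every child produced this way is divergence-free as well, which is exactly the property required by equation 2.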
During coarsening, the scalar value of the parent cell is computed as the average of the children's scalar values, which for a 2D case reads

φ_parent = (1/4) Σ_{all children} φ_child,i    (20)

While updating the velocities during coarsening, the cell face velocities on the outer edges/faces of the children are retained.
3. Verification

3.1. Convection

To test the implementation of the convection of a scalar, a step profile is initiated inside a square domain. For this, two test cases are considered: (a) a uniformly refined grid and (b) an adaptive grid. The uniform grids considered here are generated from one root cell uniformly refined to levels 7 and 8, resulting in grids of 128 × 128 and 256 × 256 cells, respectively. The adaptive grids, on the other hand, have been generated by first taking the uniform grid and then refining near the discontinuity of the step profile, with the maximum refinement restricted to levels 8, 9, 11 and 13. A constant velocity u = 1 m/s is prescribed over the whole domain in only one cardinal direction. The step profile is initiated at a non-dimensional distance of 0.0390625 from the origin (−x boundary of the domain), which corresponds to a length of 5 grid cells and 10 grid cells for level 7 and level 8 refinement, respectively. The time step is chosen such that the Courant–Friedrichs–Lewy (CFL) number with the minimum grid size (corresponding to refinement level 13) is equal to 0.3. A step profile advected with a constant velocity should theoretically remain a step profile, having travelled a distance u(t₂ − t₁), where t₂ and t₁ correspond to different time instances. It is, however, well known that all numerical convection schemes exhibit a certain degree of numerical diffusion, thus reducing the sharpness of the step profile. Figure 7(a) shows the performance of the two schemes explained in section 2.3 from the obtained non-dimensional scalar (c̃) profiles at 0.4 s, where one can see that the Barton scheme on a uniform grid is less diffusive than the first order upwind scheme. Figure 7(b) shows the results obtained from both schemes on an adaptive grid. In this test case the refinement and coarsening are performed after each solution time step based on the gradient of the solution. As the maximum allowed refinement at the discontinuity is increased, the solution approaches the analytical solution, which is shown as a black dashed line. Comparison of the results of the adaptive grid with those obtained from the uniform grid at a refinement level of 7 reveals that the numerical diffusion on an adaptive grid becomes less prominent as the refinement near the discontinuity is increased.
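The numerical diffusion probed by this test can be reproduced in a few lines of 1D first-order upwind advection; a didactic sketch (not the solver described in section 2.3), using the same CFL of 0.3 and a step a few cells from the inflow boundary:

```python
import numpy as np

def upwind_advect(c, cfl, nsteps):
    """First-order upwind update for u > 0: c_i <- c_i - CFL (c_i - c_{i-1});
    the inflow cell c_0 is held fixed at its boundary value."""
    c = c.copy()
    for _ in range(nsteps):
        c[1:] = c[1:] - cfl * (c[1:] - c[:-1])
    return c

n = 128
c0 = np.where(np.arange(n) < 5, 1.0, 0.0)  # step 5 cells from the -x boundary
c = upwind_advect(c0, cfl=0.3, nsteps=100)
# the front travels ~30 cells downstream, but the discontinuity is smeared
# over several cells: this smearing is the numerical diffusion
```

The scheme is monotone for CFL ≤ 1, so no over/undershoots appear; only the sharpness of the front is lost, which is exactly what finer refinement at the discontinuity mitigates.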
Since in the aforementioned test case the prescribed velocity is in only one cardinal direction, another test was performed where a circular blob of diameter D = 1 mm is advected with a velocity field inclined at 45° to the control volume faces, ux = uy = 1 m/s. Here, the accuracy of the method is demonstrated by comparing the results with those obtained from the open source Gerris flow solver, which uses a Godunov method for the evaluation of the convective fluxes (Popinet, 2003). The circular blob, with its center located at (0.25, 0.25), is initiated in a domain of edge length 1. The grid is initially at a uniform refinement of level 7 and refinement is restricted to a maximum level of 10 at the blob surface. Refinement and coarsening are carried out using the error estimate mentioned in section 2.8. Figure 8(a) shows the scalar profile and the associated grid at the start of the simulation. Figure 8(b) shows the solution and the grid obtained after 0.5 s from the Gerris flow solver, whereas figure 8(c) shows the solution and the grid obtained at the same time using the methodology presented in this paper. The image of the grid shows the solution marked at a scalar contour of 0.5 (yellow line), whereas the numerical diffusion is visualized by the contour of 0.001 (blue line). It can be observed that the convection scheme presented here exhibits a similar degree of numerical diffusion as the Gerris flow solver.
3.2. Grid convergence of Laplace operators

In section 2.4, the implementation of two schemes for the Laplace operator, needed in the convection-diffusion equation, was discussed. To compare the order of convergence of the two operators without the influence of errors arising from the discretization of the time derivative, a convergence test is performed on Cartesian as well as adaptive grids by solving a pressure Poisson equation on a unit square domain centered around the origin. The divergence of the intermediate/projected velocity field is given by

∇ · u*(x, y) = −π²(k² + l²) sin(πkx) sin(πly)    (21)

where u*(x, y) is the projected velocity, which is non-solenoidal. For a pressure field p, the pressure Poisson equation is of the form ∇²p = ∇ · u*. With k = l = 3 and Neumann boundary conditions on all sides, the analytical solution is given by

p(x, y) = sin(πkx) sin(πly) + κ    (22)

where κ is an arbitrary constant. Numerically, the system is not fully specified with the Neumann boundary condition and hence an approach suggested by Popinet (2003) is used, whereby the boundaries are prescribed a pressure value p = sin(3πx) sin(3πy) as a Dirichlet boundary condition. Under these conditions, κ reduces to the average value of the computed pressure over the entire domain, which is zero. The initial guess for the pressure field is a constant value p₀.
Since Laplace operators 1 and 2 both simplify to the same conventional central differencing scheme on Cartesian grids, only one of the operators is tested on a Cartesian grid, while both are tested on adaptive grids. To study the convergence for Cartesian grids, the grid is initiated from one root cell of unit length refined uniformly to a level L (thus giving 2^L cells along one direction), whereas for the adaptive grid an additional refinement operation is performed, such that the cells within a circular patch of radius 0.25 are refined to level L + 2 (see figure 9). The 2:1 balance constraint thus creates a small buffer zone of cells at level L + 1 as well. The grid convergence results are reported in terms of the L₁ and L₂ error norms. A volume-averaged pth norm for the error can be defined as:

L_p = ( Σᵢ |eᵢ|^p vᵢ ) / ( Σᵢ vᵢ )    (23)

where the summation is performed over all the cells in the domain, eᵢ is the difference between the analytical value and the value obtained from the simulations, and vᵢ is the volume of cell i. The order of convergence can then be computed from the slope of the error norms. Figure 10 shows the evolution of the error when, successively, all cells are refined once. It can be clearly observed that the order of convergence of both Laplacian operators is indeed 2, as denoted by the guiding dashed lines in the graphs.
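Equation (23) and the resulting convergence order can be transcribed as follows (hypothetical helper names, with the order demonstrated for the L₁ norm):

```python
import numpy as np

def lp_norm(err, vol, p):
    """Volume-averaged error norm of equation (23): sum(|e_i|^p v_i) / sum(v_i)."""
    err, vol = np.asarray(err), np.asarray(vol)
    return np.sum(np.abs(err) ** p * vol) / np.sum(vol)

def observed_order(e_coarse, e_fine):
    """Convergence order when every cell is refined once (h -> h/2)."""
    return np.log2(e_coarse / e_fine)
```

For a second-order scheme, halving the cell size reduces the pointwise error by a factor of 4, so `observed_order` applied to the L₁ norms of two successive refinements should return approximately 2.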
3.3. Staggered Velocity Interpolation

To test the divergence-free (solenoidal) velocity interpolation described in section 2.7, an initial velocity distribution is prescribed in a unit cube on a staggered (coarse) hydrodynamic grid. Correspondingly, for the adaptive grid, a unit cube root cell is uniformly refined to a level L such that the control volumes on the adaptive grid coincide with the control volumes on the hydrodynamics grid. The velocity values at the control volume faces follow from a solenoidal analytical flow field defined as:

u(x, y, z) = (0, −2 sin²(πy) sin(πz) cos(πz), 2 sin²(πz) sin(πy) cos(πy))    (24)

All cells on the adaptive grid are flagged to refine recursively three times, using the solenoidal velocity interpolation to define the values on the faces of the newly created children. The errors arising from the difference between the interpolated face velocities and those given by equation 24 are then quantified. Figure 11 shows the error norms with respect to the analytical solution after the interpolation of the velocity field. The L∞ norm signifies the maximum error in the domain. Since the initial level of refinement influences the error in the interpolated solution, the figure shows the error with respect to the level L at which the original solution/grid was created. The figure also shows the maximum divergence encountered in the interpolation step, which is essentially zero considering the machine precision of a double precision floating point number. Figure 12 shows the flow field obtained using interpolation for a grid at the 10th level of refinement derived from a grid initiated at the 8th level of refinement. It can be inferred that if the hydrodynamics grid is fine enough to resolve the momentum boundary layers and the recirculation zones, then the adaptive grid preserves these flow features when using the solenoidal interpolation developed in section 2.7.
4. Results

4.1. Heat conduction from a hot stationary spherical particle kept at a fixed temperature

To test the implementation of transient scalar diffusion, the heat conduction from a stationary sphere in a stationary fluid is simulated. The spherical particle of radius Rp at constant temperature Ts is positioned at the center of a cubical domain containing a fluid with initial temperature Tf, where Ts > Tf. The simulation results will be compared with the analytical solution of the unsteady state heat conduction equation in an infinite domain, where radial symmetry can be assumed. The governing PDE for the temperature T as a function of time t and the radial distance r from the center of the sphere is:

∂T/∂t = (α/r²) ∂/∂r (r² ∂T/∂r)    (25)

subject to the initial and boundary conditions:

at t = 0:  T = Tf  for r > Rp    (26)
           T = Ts  for r ≤ Rp    (27)

at t > 0:  T = Ts  for r ≤ Rp    (28)
           T = Tf  for r = ∞    (29)

with the solution for the temperature distribution and the Nusselt number respectively given by

Θ(r, t) = (T(r, t) − Tf)/(Ts − Tf) = (Rp/r) [1 − erf((r − Rp)/√(4αt))]    (30)

Nu(t) = 2 + 2/(√π √Fo)  with  Fo = αt/Rp²    (31)
The left hand side of equation (30) signifies the non-dimensional temperature Θ. In equation (31), Fo represents the Fourier number. To make boundary effects negligible, the domain length is kept at 16 Rp, with a particle radius of 0.5 mm. The thermal diffusivity α was set to 10⁻⁸ m²/s, whereas the time step Δt was fixed at 10⁻⁵ s. The grid was initiated with a root cell uniformly refined to level 6 (base level, BL), whereas at the sphere surface refinement levels 9 (BL + 3) to 11 (BL + 5) were used. Laplace operator 2 (section 2.4.2) was used in this case. A Neumann condition was prescribed at the domain boundaries. For the case of the sphere surface at refinement level 10, figure 13(a) shows the profile of the non-dimensional temperature at 0.5 s and figure 13(b) shows the corresponding grid with the particle surface shown as a yellow line. Figure 14 shows the radial temperature profiles for three different time instances corresponding to 0.1 s, 0.25 s and 0.5 s. The lines signify the analytical temperature profile obtained from equation 30, whereas the markers signify the simulation results. Table 1 shows the Nusselt number obtained from three different simulations, with the maximum refinement in the vicinity of the sphere surface set to BL + 3, BL + 4 and BL + 5 respectively, together with the associated percentage error with respect to the analytical value of equation 31.
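For reference, the analytical benchmark of equations (30) and (31) transcribes directly into code (variable names follow the text: Rp is the particle radius, alpha the thermal diffusivity):

```python
import math

def theta(r, t, Rp, alpha):
    """Non-dimensional temperature of equation (30), valid for r >= Rp."""
    return (Rp / r) * (1.0 - math.erf((r - Rp) / math.sqrt(4.0 * alpha * t)))

def nusselt(t, Rp, alpha):
    """Transient Nusselt number of equation (31)."""
    Fo = alpha * t / Rp ** 2
    return 2.0 + 2.0 / (math.sqrt(math.pi) * math.sqrt(Fo))
```

With the simulation parameters of this section (Rp = 0.5 mm, α = 10⁻⁸ m²/s), t = 0.5 s corresponds to Fo = 0.02, and the Nusselt number approaches its steady-state limit of 2 as Fo grows.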
4.2. Forced convection heat transfer over an in-line array of three spherical particles at low Prandtl number

In this section, additional computations are performed to validate the numerical model for systems involving both convection and diffusion at moderate Prandtl numbers. Specifically, we focus on the convective heat transfer for an in-line array of three spheres. Figure 15 shows the schematic representation of the computational domain for three spherical particles of equal diameter Dp that are separated from each other by a distance s. The parameters used for the simulation are provided in table 2. The entry and exit regions are characterized by distances r₁ and r₂, respectively. Simulations are performed for two values of the center-to-center spacing between the spheres (s/Dp = 2 and 4) and for three different Reynolds numbers (Re = 1, 10 and 50). The hydrodynamics are solved on a Cartesian grid with 256 × 128 × 128 cells in the x, y and z directions, respectively. Hence, the corresponding adaptive grid for scalar transport is initiated with two trees or root cells in the x−direction uniformly refined at level 7 (2⁷ cells in each direction along the edge of one root cell/tree).

A constant particle temperature Ts is enforced using a staircase representation by initially refining the grid in the vicinity of the particle surface to a maximum level of 10. These simulations are performed with the Laplace operator presented in section 2.4.2. On-the-fly grid adaptivity is thus constrained by level 7 for coarsening and level 10 for refinement. The initially refined cells are also exempted from any coarsening throughout the simulation. The simulation domain is bounded by free slip walls with zero heat flux in the y and z directions. The fluid enters the domain with a constant inlet velocity U∞ and a constant inlet temperature T∞, whereas at the outlet zero heat flux is prescribed. At the surface of the sphere a no slip boundary condition is imposed using the 2nd order accurate IBM. Earlier grid independence tests (Deen et al., 2012; Das et al., 2017a) using the same IBM technique have revealed that the results are essentially grid independent for Dp/Δx = 15 and above, where Δx is the grid size. Hence, a grid resolution of Dp/Δx = 20 is used for the results presented here. The physical properties of the fluid are chosen such that the resulting Prandtl number is 0.74. This allows the results to be compared with values reported in the literature, quantified by the Nusselt number defined as Nu = hf Dp/kf. Maheshwari et al. (2006) performed similar simulations with a body fitted grid method using FLUENT, and our results are found to be closer to the body fitted grid results (1 − 5%) than those reported by Tavassoli et al. (2013) (5 − 7%) (see table 3). Tavassoli et al. (2013) found the results obtained for the 1st and the 3rd sphere to be sensitive (variations up to 15%) to the choice of entrance/exit lengths and the grid resolution needed to resolve the thin boundary layers, especially present near the first sphere.
4.3. Forced convection heat transfer for a stationary sphere at high Prandtl numbers

Based on the previous results, it can be concluded that the proposed methodology produces accurate results at low/moderate Prandtl numbers. In this subsection, the method's potential for resolving the heat transfer with thin boundary layers in high Prandtl number flows is explored. For this, flow over a hot isothermal sphere is considered, as shown in figure 17. Simulations are performed for three different Reynolds numbers, Re = 20, 100 and 500, defined on the basis of the inlet velocity U∞ for a particle diameter Dp = 1 mm. The Prandtl numbers considered for each of the aforementioned particle Reynolds numbers are 1, 10, 100 and 500. The change in Reynolds number is achieved by changing the inlet velocity, whereas the change in Prandtl number is achieved by changing the thermal diffusivity. Since the target is to obtain the Nusselt number for this system without confinement effects, a large domain is needed such that the boundary effects are negligible. The boundary conditions in this problem are similar to those for the in-line array of spheres, where the walls in the y and z directions are set to free slip with zero heat flux. At the −x boundary the inlet velocity U∞ and inlet temperature T∞ are prescribed, whereas the +x boundary is defined as the outlet with zero normal derivatives for velocity and temperature. Guardo et al. (2006) studied a similar problem with different inlet sizes with respect to the particle diameter and concluded that wall effects on the velocity profile are present up to a domain inlet size of 4Dp × 4Dp, whereas wall effects on the temperature profile are observed up to a domain inlet size of 2Dp × 2Dp. These inlet sizes were deduced for Re ≈ 300 and subsequently used for obtaining results for 0.33 < Re < 3300. The particle Reynolds numbers in this study lie well within this range; hence, for the simulations performed in this subsection a domain size of 6Dp × 6Dp × 18Dp is taken.
Thus, the hydrodynamics is solved on a Cartesian grid with 128 × 128 × 384 grid cells in the y, z and x directions, respectively. Hence, the corresponding adaptive grid has a configuration of three root cells/trees sewed together in the x−direction, each with a uniform refinement of level 7. The initial adaptive grid is subsequently refined in the vicinity of the surface of the solid particle such that a zone of thickness δ = Dp/2 is at the finest level, which helps in resolving the boundary layers forming at the particle surface. Similar to the previous case, a grid resolution of Dp/Δx = 20 is taken for the Cartesian grid. With the base level on the adaptive grid fixed at level 7, the scalar transport is solved for two different maximum levels of refinement at the particle surface, thus resulting in two cases: (a) coarsest level 7 and finest level 9 and (b) coarsest level 7 and finest level 11. For each of the two cases and the sub-cases therein, simulations are performed using both Laplace operators developed in section 2.4.
For the case of forced convection, well-established empirical correlations exist in the literature quantifying the heat transfer in terms of Nu as a function of Re and Pr. Notable among these are the ones proposed by Ranz and Marshall (1952) and Whitaker (1972) for a stationary spherical particle, given respectively as

Nu = 2.0 + 0.6 Re^0.5 Pr^0.33  for  10 < Re < 10⁴, 0.6 < Pr < 380    (32)

Nu = 2.0 + (0.4 Re^0.5 + 0.06 Re^{2/3}) Pr^0.4  for  3.5 < Re < 7.6 × 10⁴, 0.7 < Pr < 380    (33)
Figure 18 shows the results obtained for the two different operators in comparison with the Ranz and Marshall (1952) correlation. The points corresponding to a Prandtl number of 500 are not included in the graph as they fall outside the range for which the correlations were devised. Table 4 shows in detail the obtained Nusselt numbers for the two grids and the two Laplace operators for various combinations of Reynolds and Prandtl numbers. It can be inferred that the results are in good agreement with the correlation(s). It is worth mentioning that the results presented in table 4 should not be interpreted as a grid convergence study, since only the maximum refinement level is increased while the base level refinement is kept fixed. In a true grid convergence analysis for adaptive grids all cells must be flagged for refinement, thus keeping the same refinement level difference among neighbors, as done in section 3.2.
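Correlations (32) and (33) used for comparison transcribe as follows (the Whitaker viscosity-ratio correction factor is omitted, consistent with the form quoted in the text):

```python
def nu_ranz_marshall(Re, Pr):
    """Equation (32); stated range 10 < Re < 1e4, 0.6 < Pr < 380."""
    return 2.0 + 0.6 * Re ** 0.5 * Pr ** 0.33

def nu_whitaker(Re, Pr):
    """Equation (33); stated range 3.5 < Re < 7.6e4, 0.7 < Pr < 380."""
    return 2.0 + (0.4 * Re ** 0.5 + 0.06 * Re ** (2.0 / 3.0)) * Pr ** 0.4
```

Both correlations reduce to the pure-conduction limit Nu = 2 as Re → 0 and grow with Pr, which is why resolving the thinner thermal boundary layer at high Pr demands the local refinement described above.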
5. Conclusions

In this paper a novel computational strategy is presented to resolve scalar transport using a dual grid/multiple resolution approach that minimizes the computational and memory overhead required to accurately determine heat and mass transfer coefficients in fluid-particle systems at high Prandtl or Schmidt numbers. Unlike earlier studies (Ostilla-Monico et al., 2015; Gotoh et al., 2012; Chong et al., 2018), our method uses an adaptive grid for the scalar transport. A quadtree (octree in 3D) distributed data structure is used to discretize the domain for scalar transport, while a base Cartesian mesh (coarser in size) is used to solve the hydrodynamics. Different discretizations for the convective and diffusive fluxes have been presented in this work, as they differ from those for a conventional Cartesian mesh with respect to the local neighborhood of a face. A verification test for the convective scheme, along with an estimate of the numerical diffusion, is presented. Subsequently, convergence tests for the two Laplace operators are presented, where one of the operators results in symmetric matrices and the other in asymmetric matrices. The corresponding matrix solvers that can be used to solve such systems of simultaneous linear equations are described, and it was found that the symmetric Laplace operator produces linear systems that are faster and easier to solve than the asymmetric one.
Since in the current methodology the solution of the momentum equations on the adaptive grid is avoided, the details of the initial mapping of the dual grids and its subsequent use in the staggered velocity interpolation under the divergence-free constraint are presented. The shared memory communicator facilitates the update of the velocity from the hydrodynamics grid to the scalar transport grid with close to zero latency. A verification case for an analytical solenoidal field was performed, from which it can be concluded that the key flow features captured by the hydrodynamic grid are maintained by the interpolation on the refined scalar grid, with the divergence remaining close to machine precision. Additionally, the adaptation criteria and the update of the children data from the parent data in case of refinement, or vice versa in case of coarsening, were also discussed.
With the proposed methodology, validation cases were performed for heat conduction from a hot stationary sphere kept at a fixed temperature, where the obtained radial temperature profiles and Nusselt numbers were compared with the analytical solutions available for this system. Subsequently, validations were also performed for the study of blockage effects in forced convection heat transfer for an in-line array of three spheres at a low Prandtl number of 0.74. The Nusselt numbers obtained from the simulations were compared with those published in the literature using an immersed boundary method on a Cartesian grid and a body-fitted unstructured grid. It was observed that the results obtained from the current methodology were closer to the results published using the body fitted grid than to those published with the IBM. Subsequently, simulations were performed for forced convection heat transfer for a single stationary sphere at high Prandtl numbers. The Nusselt numbers obtained from these simulations were compared with well-established empirical correlations and excellent agreement was found.
Finally, we conclude with some comments on the limitations of the methodology and the scope for further improvements. Any distributed dynamic data structure, such as tree-based approaches, suffers from limitations related to suboptimal cache use, inefficient vectorization, complex load balancing with graph partitioning methods and the performance of matrix solvers. Some of these limitations have been alleviated by using state-of-the-art, well documented open-source packages providing optimal capabilities with respect to the aforementioned aspects.

A second drawback arises from the complexity of devising higher order discretization schemes, which can be tackled to some extent by using aggressive refinement. Quadtree based grid adaptation falls under the category of isotropic grid refinement, i.e. when a cell is marked for refinement it is refined in all directions. Hence, it is not the optimal method to resolve boundary layers which develop near solid boundaries aligned with any of the cardinal directions, such as walls which coincide with the computational domain boundaries. Also, the non-deformable and fixed particles presented in this work do not pose an ideal test case, because competing methods such as body-fitted grids and overset grids benefit from the static nature of the immersed body and can produce excellent grids as a one-time affair. Vreman (2016) and Vreman and Kuerten (2018) propose a methodology using an overset grid for fluid turbulence over static and moving non-deformable spherical particles. An interesting problem class where this method will show its full potential involves multiphase flows with moving and deformable interfaces, specifically gas-liquid systems (drops and bubbles) where the Schmidt numbers are high. A quadtree based grid adaptation would provide cheaper and more efficient remeshing, along with properly resolving the extremely thin mass boundary layers occurring at such interfaces in multicomponent chemical systems.
Acknowledgments

This work is part of the Industrial Partnership Programme i36 Dense Bubbly Flows that is carried out under an agreement between Akzo Nobel Chemicals International B.V., DSM Innovation Center B.V., Sabic Global Technologies B.V., Shell Global Solutions B.V., Tata Steel Nederland Technology B.V. and the Netherlands Organisation for Scientific Research (NWO).

This work was carried out on the Dutch national e-infrastructure with the support of SURF Cooperative.
756
References

Aboulhasanzadeh, B., Thomas, S., Taeibi-Rahni, M., Tryggvason, G., 2012. Multiscale computations of mass transfer from buoyant bubbles. Chemical Engineering Science 75, 456–467.

Andričević, R., Galešić, M., 2018. Contaminant dilution measure for the solute transport in an estuary. Advances in Water Resources 117, 65–74.

Burstedde, C., Wilcox, L. C., Ghattas, O., 2011. p4est: Scalable algorithms for parallel adaptive mesh refinement on forests of octrees. SIAM Journal on Scientific Computing 33 (3), 1103–1133.

Centrella, J., Wilson, J. R., 1984. Planar numerical cosmology. II - The difference equations and numerical tests. Astrophysical Journal Supplement Series 54, 229–249.

Chong, K. L., Ding, G., Xia, K., 2018. Multiple-resolution scheme in finite-volume code for active or passive scalar turbulence. Journal of Computational Physics 375, 1045–1058.

Das, S., Deen, N. G., Kuipers, J. A. M., 2017a. A DNS study of flow and heat transfer through slender fixed-bed reactors randomly packed with spherical particles. Chemical Engineering Science 160, 1–19.

Das, S., Deen, N. G., Kuipers, J. A. M., 2017b. Immersed boundary method (IBM) based direct numerical simulation of open-cell solid foams: Hydrodynamics. AIChE Journal 63 (3), 1152–1173.

Deen, N. G., Kriebitzsch, S. H., van der Hoef, M. A., Kuipers, J. A. M., 2012. Direct numerical simulation of flow and heat transfer in dense fluid-particle systems. Chemical Engineering Science 81, 329–344.

Dixon, A. G., Nijemeisland, M., Stitt, E. H., 2006. Packed tubular reactor modeling and catalyst design using computational fluid dynamics. In: Marin, G. B. (Ed.), Computational Fluid Dynamics. Vol. 31 of Advances in Chemical Engineering. Academic Press, pp. 307–389.

Falgout, R. D., Yang, U. M., 2002. hypre: A library of high performance preconditioners. In: Sloot, P. M. A., Hoekstra, A. G., Tan, C. J. K., Dongarra, J. J. (Eds.), Computational Science — ICCS 2002. Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 632–641.

Gada, V. H., Sharma, A., 2011. On a novel dual-grid level-set method for two-phase flow simulation. Numerical Heat Transfer, Part B: Fundamentals 59 (1), 26–57.

Gibou, F., Fedkiw, R. P., Cheng, L., Kang, M., 2002. A second-order-accurate symmetric discretization of the Poisson equation on irregular domains. Journal of Computational Physics 176 (1), 205–227.

Gotoh, T., Hatanaka, S., Miura, H., 2012. Spectral compact difference hybrid computation of passive scalar in isotropic turbulence. Journal of Computational Physics 231 (21), 7398–7414.

Guardo, A., Coussirat, M., Recasens, F., Larrayoz, M., Escaler, X., 2006. CFD study on particle-to-fluid heat transfer in fixed bed reactors: Convective heat transfer at low and high pressure. Chemical Engineering Science 61 (13), 4341–4353.

Isaac, T., Burstedde, C., Wilcox, L. C., Ghattas, O., 2015. Recursive algorithms for distributed forests of octrees. SIAM Journal on Scientific Computing 37 (5), C497–C531.

Lipnikov, K., Morel, J., Shashkov, M., 2004. Mimetic finite difference methods for diffusion equations on non-orthogonal non-conformal meshes. Journal of Computational Physics 199 (2), 589–597.

Losasso, F., Fedkiw, R., Osher, S., 2006. Spatially adaptive techniques for level set methods and incompressible flow. Computers & Fluids 35 (10), 995–1010.

Maheshwari, A., Chhabra, R., Biswas, G., 2006. Effect of blockage on drag and heat transfer from a single sphere and an in-line array of three spheres. Powder Technology 168 (2), 74–83.

Ostilla-Monico, R., Yang, Y., van der Poel, E., Lohse, D., Verzicco, R., 2015. A multiple resolution strategy for direct numerical simulation of scalar turbulence. Journal of Computational Physics 301, 308–321.

Popinet, S., 2003. Gerris: a tree-based adaptive solver for the incompressible Euler equations in complex geometries. Journal of Computational Physics 190 (2), 572–600.

Ranz, W. E., Marshall, W. R., 1952. Evaporation from drops. Chemical Engineering Progress 48 (3), 141–146.

Stadler, G., Gurnis, M., Burstedde, C., Wilcox, L. C., Alisic, L., Ghattas, O., 2010. The dynamics of plate tectonics and mantle flow: From local to global scales. Science 329, 1033–1038.

Tavassoli, H., Kriebitzsch, S., van der Hoef, M., Peters, E., Kuipers, J., 2013. Direct numerical simulation of particulate flow with heat transfer. International Journal of Multiphase Flow 57, 29–37.

Verma, S., Blanquart, G., 2013. On filtering in the viscous-convective subrange for turbulent mixing of high Schmidt number passive scalars. Physics of Fluids 25 (5), 055104.

Vreman, A. W., 2016. Particle-resolved direct numerical simulation of homogeneous isotropic turbulence modified by small fixed spheres. Journal of Fluid Mechanics 796, 40–85.

Vreman, A. W., Kuerten, J. G., 2018. Turbulent channel flow past a moving array of spheres. Journal of Fluid Mechanics 856, 580–632.

Weiner, A., Bothe, D., 2017. Advanced subgrid-scale modeling for convection-dominated species transport at fluid interfaces with application to mass transfer from rising bubbles. Journal of Computational Physics 347, 261–289.

Whitaker, S., 1972. Forced convection heat transfer correlations for flow in pipes, past flat plates, single cylinders, single spheres, and for flow in packed beds and tube bundles. AIChE Journal 18 (2), 361–371.

Xu, H., Xing, Z., Wang, F., Cheng, Z., 2019. Review on heat conduction, heat convection, thermal radiation and phase change heat transfer of nanofluids in porous media: Fundamentals and applications. Chemical Engineering Science 195, 462–483.

Zhong, J., Cai, X.-M., Bloss, W. J., 2016. Coupling dynamics and chemistry in the air pollution modelling of street canyons: A review. Environmental Pollution 214, 690–704.
Table 1: Computed instantaneous Nusselt numbers for the case of heat conduction from a stationary sphere for three spatial resolutions, with the coarsest level of refinement (base level, BL) equal to 6 and the finest level of refinement in the vicinity of the sphere surface equal to 9 (BL + 3), 10 (BL + 4) and 11 (BL + 5).

t (s)   Fo       Nu (BL + 3)      Nu (BL + 4)      Nu (BL + 5)      Nu analytical
0.01    0.0004   64.23 (+9.96%)   59.89 (+2.53%)   58.78 (+0.65%)   58.41
0.02    0.0008   44.58 (+6.42%)   42.46 (+1.36%)   42.04 (+0.38%)   41.89
0.03    0.0012   35.87 (+3.76%)   35.00 (+0.69%)   34.62 (+0.16%)   34.57
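The analytical column of Table 1 is consistent with the transient conduction solution for a sphere held at constant surface temperature, Nu = 2 + 2/√(π Fo); the inference that the Fourier number here is based on the sphere radius is ours, made from the tabulated values (the defining equation appears earlier in the paper). A quick numerical check:

```python
import math

def nu_conduction(fo):
    """Instantaneous Nusselt number for transient conduction from a
    sphere at constant surface temperature; Fo assumed radius-based."""
    return 2.0 + 2.0 / math.sqrt(math.pi * fo)

# Reproduce the 'Nu analytical' column of Table 1 (agreement is within
# the rounding of the listed Fo values)
for fo in (0.0004, 0.0008, 0.0012):
    print(f"Fo = {fo:.4f}: Nu = {nu_conduction(fo):.2f}")
# Fo = 0.0004: Nu = 58.42
# Fo = 0.0008: Nu = 41.89
# Fo = 0.0012: Nu = 34.57
```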
Table 2: Parameters used in the simulation of heat transfer from an inline array of three spheres kept at a constant temperature.

s/Dp   nx × ny × nz      Dp (m)   Dt/Dp   r1/Dp   r2/Dp   Pr
2      256 × 128 × 128   10^-4    16      5       5       0.74
4      256 × 128 × 128   10^-4    16      5       5       0.74
Table 3: Nusselt number obtained for the case of forced convection over an in-line array of three spheres.
Table 4: Nusselt number obtained from simulations for forced convection heat transfer from a stationary single sphere at different combinations of Reynolds and Prandtl numbers, compared with the empirical correlations derived from experiments. L denotes the coarsest level of refinement (level 7), corresponding to the Cartesian grid on which the hydrodynamics is solved, whereas L + 2 and L + 4 denote the finest level of refinement in the vicinity of the particle surface.

                      Obtained Nusselt number, Nu
             Laplace operator 1        Laplace operator 2        Correlations
Pr     Re    L+2        L+4           L+2        L+4            Nu (eq. 32)   Nu (eq. 33)
             (∇²(1,2))  (∇²(1,4))     (∇²(2,2))  (∇²(2,4))
1      20    4.77       4.76          4.75       4.73           4.68          4.23
       100   8.16       8.15          8.17       8.16           8.00          7.29
       500   15.89      15.87         15.91      15.84          15.41         14.72
10     20    8.19       7.89          8.18       7.89           7.73          7.60
       100   15.75      15.14         15.64      15.13          14.82         15.29
       500   32.86      31.55         32.81      31.48          30.68         33.96
100    20    15.45      14.74         15.42      14.68          14.26         16.07
       100   32.10      30.42         32.15      30.36          29.42         35.39
       500   69.34      65.52         69.41      65.47          63.32         82.28
500*   20    25.12      23.66         25.09      23.64          22.86         28.79
       100   53.56      50.37         53.59      50.33          48.64         65.57
       500   118.26     110.94        117.97     110.86         106.30        154.83

* outside the limit of the correlations
Figure 1: Schematic diagram of spatial discretization (a) Dual grid mapping: background Cartesian grid with ghost cells and overlapping uniformly refined quadtree grid at initial time step (b) Tree violating 2:1 balance criteria (c) Balanced tree with quadrant numbers (d) Hierarchical layout of the tree showing the quadrant numbers and the corresponding levels. The dashed zig-zag line in (a) and (c) show the sequence in the cell numbering provided by the z-ordering.
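The z-ordering referred to in the Figure 1 caption is a Morton space-filling curve: a quadrant's index is obtained by interleaving the bits of its integer cell coordinates, which keeps spatially neighbouring cells close in memory. A minimal 2D sketch (illustrative only; in the paper the ordering is provided by the p4est library):

```python
def morton2d(x, y):
    """Interleave the bits of integer coordinates (x, y) into a
    z-order (Morton) index; x supplies the even bits, y the odd bits."""
    code = 0
    for bit in range(16):  # sufficient for 16-bit coordinates
        code |= ((x >> bit) & 1) << (2 * bit)
        code |= ((y >> bit) & 1) << (2 * bit + 1)
    return code

# The four quadrants of a 2x2 grid are visited in a 'z' pattern,
# matching the dashed zig-zag line in Figure 1:
print([morton2d(x, y) for y in (0, 1) for x in (0, 1)])  # [0, 1, 2, 3]
```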
Figure 2: Flowsheet showing the processing sequence and the memory map related to the dual grid method.
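The memory map in Figure 2 couples a hydrodynamics writer and a scalar-transport reader through shared velocity arrays guarded by locks: while the scalar code reads, hydrodynamics cannot write, and vice versa. A hypothetical sketch of that pattern with a single mutex (the names and data layout below are ours, not the paper's):

```python
import threading

# Hypothetical sketch of the dual-grid memory map: the hydrodynamics
# solver writes the shared velocity field under a lock, while the
# scalar-transport solver takes a consistent snapshot under the same
# lock before interpolating onto the quadtree grid.
velocity_lock = threading.Lock()
shared_velocity = {"ux": [0.0], "uy": [0.0], "uz": [0.0]}

def hydrodynamics_update(ux, uy, uz):
    with velocity_lock:  # scalar solver cannot read while this holds
        shared_velocity["ux"] = list(ux)
        shared_velocity["uy"] = list(uy)
        shared_velocity["uz"] = list(uz)

def scalar_read_velocity():
    with velocity_lock:  # hydrodynamics cannot write while this holds
        return {k: list(v) for k, v in shared_velocity.items()}

hydrodynamics_update([1.0], [2.0], [3.0])
snapshot = scalar_read_velocity()
print(snapshot["uy"])  # [2.0]
```

Copying under the lock gives the scalar solver a self-consistent velocity field even though the two solvers advance (and subcycle) asynchronously.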
Figure 3: Computing the face value of scalar for (a) uniform Cartesian grid (b) adaptive quadtree grid.
Figure 4: Schematic showing the two face morphologies that arise in the discretization of the gradient at a face: (a) a face shared by cells of the same size; (b) a face shared by cells of different sizes, thus having a hanging node at the face.
Figure 5: Sample stencil for the face-centered gradient calculation in the case where the opposite cell (OC), the lower perpendicular cell (N⊥L) and the upper perpendicular cell (N⊥U) are not leaf cells.
Figure 6: Divergence free velocity interpolation during (a) Grid adaptation (b) Velocity update from hydrodynamics grid.
Figure 7: Convection of a step profile using the First Order Upwind (green) and Barton (red) schemes on (a) a uniform grid at refinement levels 7 and 8, and (b) an adaptive grid with coarsest level 7 and finest levels 8, 9, 10 and 11 at the discontinuity. The exact solution is shown as a black dash-dotted line.
Figure 8: Convection of a circular blob in an inclined flow field: (a) initial scalar distribution and the associated grid. The scalar distribution and the associated grid, with contours for dimensionless scalar values of 0.5 (yellow) and 0.001 (blue), obtained at 0.5 s from (b) the Gerris flow solver and (c) the current implementation.
Figure 9: Domain setup for the convergence test of the Laplace operators for the pressure Poisson problem, along with the solution (pressure) field: (a) a Cartesian grid of level L = 7 comprising 2^7 × 2^7 cells; (b) a coarse-fine grid obtained by successively refining a circular zone twice, giving a grid with maximum refinement level L = 9 and minimum refinement level L = 7.
Figure 10: Order of convergence of the Laplacian operator for (a) operator 1 on an adaptive grid, (b) operator 2 on an adaptive grid and (c) the central differencing scheme on uniform grids. The dashed line represents second-order convergence.
Figure 11: Error norms and divergence for the staggered grid interpolation corresponding to different levels of refinement (the dashed lines indicate second-order and first-order convergence).
Figure 12: Velocity vector plot at the 10th level of refinement obtained by staggered grid interpolation of the analytical flow field at the 8th level of refinement.
Figure 13: (a) Non-dimensional temperature (Θ) profile of heat conduction from a static particle at 0.5 s and (b) the corresponding adaptive grid, with the particle surface shown as a yellow line.
Figure 14: Non-dimensional temperature (Θ) profile in the vicinity of a stationary particle along the x-axis for three different times (inset graph provides a zoomed view). Lines denote the exact solution whereas markers denote the numerical result.
Figure 15: Schematic of case setup for forced convection heat transfer over an inline array of spheres.
Figure 16: Non-dimensional fluid temperature around the spheres with a constant surface temperature for the case of Re = 10 and (a) s/Dp = 2, (b) s/Dp = 4; (c) the associated adaptive grid for the case of s/Dp = 2.
Figure 17: Schematic of case setup for forced convection heat transfer over a solid stationary sphere.
Figure 18: Nusselt number obtained from single particle simulations for various cases benchmarked with the Ranz-Marshall (Ranz and Marshall, 1952) correlation for forced convection over a heated sphere.
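The Ranz-Marshall correlation used as the benchmark in Figure 18 has the standard form Nu = 2 + 0.6 Re^(1/2) Pr^(1/3); that this is also the "eq. 32" correlation column of Table 4, and the exact Pr exponent used in the paper, are our assumptions. A quick sketch:

```python
def ranz_marshall(re, pr):
    """Ranz-Marshall correlation for forced convection over a sphere
    (standard form; assumed to match eq. 32 of the paper)."""
    return 2.0 + 0.6 * re**0.5 * pr**(1.0 / 3.0)

# Re = 20, Pr = 1 reproduces the corresponding correlation value
# listed in Table 4
print(round(ranz_marshall(20, 1), 2))   # 4.68
print(round(ranz_marshall(100, 1), 2))  # 8.0
```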
Fully Resolved Scalar Transport for High Prandtl Number Flows using Adaptive Mesh Refinement

Highlights

- A novel multiple-resolution technique is proposed to resolve scalar boundary layers
- Comparison of second-order Laplacian operators for quadtrees
- Solenoidal velocity interpolations and a Z-order curve for dual-grid communications
- Accurate prediction of the Nusselt number is obtained for single/multiple particles