Annals of Nuclear Energy 133 (2019) 84–95
Development of the neutronics transport code based on the MOC method and adjoint-forward Coarse Mesh Finite Difference technique: PerMOC code for lattice and whole core calculations

K.H. Hosseinipour (a), F. Faghihi (a,b,c,*)

(a) Department of Nuclear Engineering, School of Mechanical Eng., Shiraz University, 71936-16548 Shiraz, Iran
(b) Radiation Research Center, Shiraz University, Shiraz, Iran
(c) Ionizing and Non-Ionizing Research Center, Shiraz University of Medical Science, Shiraz, Iran
(*) Corresponding author (F. Faghihi): [email protected]
Article info

Article history: Received 3 February 2019; Received in revised form 5 April 2019; Accepted 27 April 2019; Available online 16 May 2019.

Keywords: PerMOC; Forward and adjoint MOC; Forward and adjoint Coarse Mesh Finite Difference; Parallel sweeping; Transport theory; Point kinetics calculations.
Abstract

In this research, the Coarse Mesh Finite Difference (CMFD) acceleration technique, in both adjoint and forward mode, is applied to an MOC kernel to develop a new code called PerMOC. A new approach in the adjoint MOC kernel, called the thermal-up-scattering-like iteration scheme, reduces the total number of adjoint iteration sweeps. In addition, further speed-up is achieved using parallel transport sweeping in the MOC kernel. Validation tests of the PerMOC C-Sharp code are carried out against four distinct criticality benchmarks, one of which is C5G7. For the C5G7 benchmark, the best calculated multiplication factor differs by 51 pcm from the reference value. Furthermore, the newly proposed adjoint solver reduces the number of adjoint transport sweeps by about 30% in comparison with the normal adjoint solver. The present article describes the PerMOC methodologies and illustrates its ability to model large problems with acceptable accuracy.

© 2019 Elsevier Ltd. All rights reserved.
1. Introduction

In recent years, progress in computing systems has provided the opportunity for direct whole-core transport calculations. The Method of Characteristics (MOC) is one of the popular approaches widely used for solving the transport equation; it was originally developed by (Askew, 1972). During the past two decades, the MOC has become one of the main transport methods for 2D assembly calculations and for 3D or 2D/1D reactor whole-core analysis. Various studies have been carried out on different aspects of the MOC method. A basic review of the method with comprehensive details can be found in (Cacuci, 2010). Quadrature sets for the implementation of the solid angles were discussed in (Leonard, 1995) and (Yamamoto et al., 2007). In addition, acceleration techniques and convergence improvements were reviewed in (Li et al., 2014) and (Santandrea and Sanchez, 2002). Parallelism of the MOC kernel is another field of interest.
From this point of view, different parallel algorithms have been proposed in the literature, such as (Boyd III, 2014) and (Kochunas, 2013). Recently, the adjoint form of the MOC calculation was investigated by (Zhu et al., 2015).

Generally, the term MOC refers to any method in which the associated differential equation is solved only along predetermined distinct paths, called characteristic paths. These paths are selected in such a way as to simplify the solution of the principal coupled differential equations, and the final results are tallied by averaging the path-wise calculated parameters over the geometry domain. The applicability of the method of characteristics is not limited to the nuclear engineering field; for example, it has long been used to model and predict water-hammer phenomena in hydraulic systems (Larock et al., 1999). In the neutron transport equation, the characteristic paths are determined by discretizing the solid angle over its domain. Several well-known lattice codes use an MOC kernel in their neutron transport solver, including the CACTUS module of WIMS (Halsall, 1980), AEGIS (Yamamoto et al., 2010), APOLLO2 (Sanchez et al., 2010), CASMO-5 (Rhodes et al., 2006), DRAGON5 (Hébert, 2013) and OpenMOC (Boyd et al., 2014).

In this research, the specification of the PerMOC code is presented.
This code solves the neutron transport equation using parallel MOC in Cartesian geometry. In addition, an efficient combinational-geometry method is taken into account. It should be emphasized that both adjoint and forward Coarse Mesh Finite Difference (CMFD) acceleration schemes are implemented in the developed code in order to enhance its capability for point kinetics parameter calculations and/or perturbation theory. Following a brief presentation of the MOC method in the next section, we explain the methodology of implementing the adjoint and forward CMFD acceleration algorithms in the PerMOC code. A separate section discusses the simple parallel sweeping method embedded in the PerMOC MOC kernel. After that, the point kinetics parameters and their relations are addressed. Finally, PerMOC is verified and validated against some well-known benchmarks.
2. The PerMOC methodologies

2.1. Forward MOC implementation

The steady-state form of the Boltzmann transport equation is:

$$\Omega\cdot\nabla\Psi(r,E,\Omega)+\Sigma_t(r,E)\,\Psi(r,E,\Omega)=Q(r,E,\Omega), \qquad (1)$$

where Q is the total angular source of neutrons at location r, energy E, and direction Ω; Ψ is the angular flux and Σ_t is the total macroscopic cross section. In the moving-neutron coordinate system, the streaming term reduces to a total derivative along the neutron path of motion and Eq. (1) becomes:

$$\frac{d\Psi(r,E,\Omega)}{ds}+\Sigma_t(r,E)\,\Psi(r,E,\Omega)=Q(r,E,\Omega), \qquad (2)$$

where ds is the differential neutron path at position r (relative to a fixed reference point). Eq. (2) is the characteristic form of the transport equation and, as stated in the textbooks, describes neutron motion as viewed through the eyes of the neutron. Considering the following assumptions:

- All sources (scattering, fission and fixed) are isotropic.
- Anisotropic effects are dealt with through transport-corrected cross sections.
- A flat source and a flat flux are assumed in each sub-domain mesh.

Eq. (2) can be rewritten in the following form:

$$\frac{d\Psi}{ds}+\Sigma_{tr}\,\Psi=Q, \qquad (3)$$

where Σ_tr is the corrected transport cross section. In order to simplify the characteristic neutron equation, the arguments of each variable have been omitted. Eq. (3) has a straightforward solution:

$$\Psi(s)=\Psi(0)\exp\left(-\Sigma_{tr}\,s\right)+\tilde q\left[1-\exp\left(-\Sigma_{tr}\,s\right)\right], \qquad (4)$$

where s is the coordinate along the characteristic neutron path. The source term in Eq. (4) is defined as:

$$\tilde q^{\,g}=\frac{Q^g}{4\pi\,\Sigma_{tr}}, \qquad (5)$$

in which

$$Q^g=Q_s^g+Q_f^g=\sum_{g'}\left(\Sigma_s^{g'\to g}+\frac{\Sigma_p^{g'\to g}}{k_{eff}}\right)\phi^{g'}, \qquad (6)$$

so that the production kernel in the above equation is defined as:

$$\Sigma_p^{g'\to g}=\chi^g\,\nu\Sigma_f^{g'}. \qquad (7)$$

By decomposing the total domain into flat source regions, the scalar flux can be calculated from the average angular flux across each mesh n as:

$$\phi_n^g=\sum_m \Phi_{m,n}^g\,\omega_m, \qquad (8)$$

where φ_n^g is the scalar flux in the gth energy group, ω_m is the associated solid-angle weight and Φ_{m,n}^g is the averaged angular flux of the gth energy group through mesh n in direction m. By defining Δ_{m,n,k} as the angular flux change in direction m across mesh n for characteristic ray k, one can derive from Eq. (4):

$$\Delta_{m,n,k}=\Psi_{m,n,k}^{in}-\Psi_{m,n,k}^{out}=\left[\Psi_{m,n,k}^{in}-\tilde q_{m,n}\right]\left[1-\exp\left(-\Sigma_{tr}\,s_{m,n,k}\right)\right], \qquad (9)$$

where q̃_{m,n} is the source term of mesh n in direction m, calculated using Eq. (5). The scalar flux can then be written in the following form:

$$\phi_n^g=\tilde q_n^{\,g}+\frac{1}{\Sigma_{tr,n}^g\,A_n}\sum_m\left\{l_m\,\omega_m\sum_k\Delta_{m,n,k}\right\}, \qquad (10)$$

where l_m is the track spacing for solid angle m and A_n is the area of mesh n. The summation of Δ_{m,n,k} over the index k gives the angular flux change of all ray segments in mesh n and direction m, and the solid-angle weight ω_m is the product of the polar and azimuthal angle weights. Eq. (10) is the final MOC form of the neutron transport equation used in the present PerMOC code.

The sequence of the necessary data and the approach in the PerMOC code is as follows. Basic geometrical information and material properties, including the core configuration, assembly types, cell types, and group-wise cross sections, are loaded from the input file. Azimuthal and polar quadrature sets are selected prior to the ray tracing. The azimuthal angles are evenly spaced in the xy-plane: for N azimuthal directions of motion, the directions are split evenly such that the boundaries of motion occur at equal intervals of 2π/N, and each azimuthal direction of motion passes through the center of two neighboring boundaries. The azimuthal angles are then adjusted so as to produce an integer number of sub-divisions on both sides of each pin cell; this adjustment is necessary in order to handle reflecting boundaries. The weight associated with each azimuthal angle is proportional to the fraction of azimuthal angular space owned by that angle. The polar angles can be discretized using the equal-weight method (Cacuci, 2010) or by the quadrature sets proposed by (Yamamoto et al., 2007).

The basic principles of the ray tracing routine are illustrated in Fig. 1. The represented cell contains an embedded cylindrical fuel rod and its surrounding coolant. These two regions are divided into sixteen fine-mesh subdivisions in total by the four red mesh lines. The entrance point into the cell is known either from the neighboring cell or from the origin of the characteristic on one of the outer problem boundaries. For instance, the sample green characteristic ray is subdivided into seven segments by these mesh subdivisions, and the angular flux change in each segment is calculated from Eq. (9). The aforementioned ray tracing is set up with respect to the user-defined track spacing at the level of each distinct pin cell.
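To make the segment-wise update of Eqs. (9) and (10) concrete, a minimal C# sketch is given below. It is not the PerMOC implementation; all names (SegmentDelta, ScalarFlux, qTilde, ...) are hypothetical and only the algebra of the two equations is reproduced.

```csharp
// Sketch (under stated assumptions) of the per-segment MOC update.
using System;

public static class MocSweepSketch
{
    // Eq. (9): angular-flux change over one segment; psi is attenuated in place
    // so it becomes the incoming angular flux of the next segment.
    public static double SegmentDelta(ref double psi, double qTilde, double sigmaTr, double length)
    {
        double delta = (psi - qTilde) * (1.0 - Math.Exp(-sigmaTr * length));
        psi -= delta;          // Psi_out = Psi_in - Delta
        return delta;          // tallied into the segment's flat source region
    }

    // Eq. (10): scalar flux of region n from its source term and the weighted delta sum
    // (weightedDeltaSum = sum over m of l_m * w_m * sum over k of Delta_{m,n,k}).
    public static double ScalarFlux(double qTilde, double sigmaTr, double area, double weightedDeltaSum)
    {
        return qTilde + weightedDeltaSum / (sigmaTr * area);
    }
}
```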
The ray tracing is performed only once for each pin-cell type, and a separate matrix preserves the identification numbers of the rays at the cell and assembly interfaces; therefore, no information is lost when a track sweep passes from one pin cell to its adjacent cell.

After determination of the quadrature sets and completion of the ray tracing module, the inner and outer iterations are initiated. The outer iteration starts with the calculation of the region-wise fission sources in all energy groups, using the scalar flux of the previous step. The inner iterations start with the calculation of the total source using Eq. (5); the main task of the inner iteration is to evaluate Eq. (10) efficiently. During each inner iteration, and for each energy group, a complete ray sweep is performed over the total domain for all pre-produced tracks. The starting point of this sweep is one side of the problem boundary, whose input angular flux is known from the initial guess or from the previous iteration. An auxiliary array saves the Δ_{m,n,k} values calculated with Eq. (9). After completion of the track sweep, these contributions are used to calculate the region-wise scalar flux from Eq. (10), and the new fluxes are used to calculate new sources from Eqs. (5) and (6). These repetitive loops are stopped when the flux and current are converged with the desired precision. After completion of the inner iterations for all energy groups, the total fission source is calculated and the effective multiplication factor is determined. The outer iteration is finished when the predefined convergence criteria on the multiplication factor as well as on the scalar flux are satisfied. A schematic flow chart of the above algorithm is depicted in Fig. 2.

Fig. 1. Schematic of one ray and its seven segments.

2.2. Forward 2D Coarse Mesh Finite Difference

Since MOC is based on the flat-source-region assumption, its precision depends on the selection of fine flat-source meshes. In order to obtain acceptable results, especially in high flux-gradient regions, one needs fine meshes with a relatively small ray separation distance. Considering these effects, an acceleration technique should be applied in order to reduce the running time. Coarse Mesh Finite Difference, or CMFD (Smith, 1983), is a well-known acceleration method which has been used in many transport-theory codes; for example, its implementation in CASMO-4 showed speed-ups larger than 100 on real LWR problems (Smith, 2002). The CMFD method places an overlay mesh on top of the problem domain. Each overlay mesh consists of more than one mesh of the main problem; usually the overlay mesh has the size of one pin cell, and the cross sections and properties of each overlay mesh are derived from the homogenized properties of its associated pin cell. The main problem domain is called the high-order flat-source-region system, while the CMFD overlay mesh is called the low-order coarse-mesh system. By balancing the net leakage term around the low-order system (overlay meshes) and the high-order system (main problem meshes), we obtain two equivalent systems with a unique multiplication factor. The low-order coarse-mesh system converges faster than the main high-order problem and therefore accelerates the convergence of the main problem. As an example, Fig. 3 shows one pin cell with 16 meshes in the main high-order problem; this corresponds to one coarse mesh (with index N) of the related low-order system. The CMFD neutron balance for the low-order system is:

$$\left(J_N^{+x}+J_N^{-x}+J_N^{+y}+J_N^{-y}\right)/h+\Sigma_{tr,N}\,\phi_N=Q_N, \qquad (11)$$

where the group index g is omitted for simplicity and h is the lattice pitch. In this equation, φ_N is the average coarse-mesh flux; J_N^{+x} and J_N^{-x} are the net currents across the +x and -x faces of the Nth coarse mesh, respectively, and J_N^{+y} and J_N^{-y} have the same meaning for the +y and -y faces. The term "net current" means the algebraic summation of in-currents and out-currents on each face; in this sense, the positive direction in Eq. (11) is the cell outward direction (c.f. Fig. 3). The calculation of these net currents is the main part of any CMFD method, and they are basically derived from a corrected form of the diffusion equation. For instance, in the two-node problem approach (Smith, 1983), the net current on each surface of any coarse mesh is defined as:

$$J_N^{d}=-\hat D_N^{d}\left(\phi_{N-1}-\phi_N\right)-\tilde D_N^{d}\left(\phi_{N-1}+\phi_N\right), \qquad (12)$$

where N is the coarse-mesh index and d refers to the -x, +x, -y and +y cell surfaces. The first term on the right-hand side of Eq. (12) is the traditional diffusion current. In this equation, D̂ is the finite-difference diffusion coefficient, calculated by imposing continuity of the interface currents between two adjacent cells in the classical diffusion equation (Chao, 2000):

$$\hat D_N^{d}=\frac{2D_N D_{N-1}}{D_{N-1}h_N+D_N h_{N-1}}, \qquad (13)$$

where h_N is the width or height of coarse mesh N with respect to the direction d, and D_N is the average diffusion coefficient. This average diffusion coefficient is calculated by flux-weighted averaging of the fine-mesh diffusion coefficients inside each coarse mesh:

$$D_N^g=\sum_{n\in N}D_n^g\,\phi_n^g\,A_n\Big/\sum_{n\in N}\phi_n^g\,A_n. \qquad (14)$$

Briefly, there are four diffusion coefficients D in the CMFD calculation:

1) D_n^g is the ordinary diffusion coefficient of region n in group g (right-hand side of Eq. (14)).
2) D_N^g is the average flux-weighted diffusion coefficient on the left-hand side of Eq. (14); it is the average diffusion coefficient of coarse mesh N and is calculated from D_n^g (item 1).
3) D̂_N^d is the finite-difference diffusion coefficient defined by Eq. (13); it is the average diffusion coefficient on surface d of coarse mesh N and is calculated from D_N^g (item 2).
4) D̃ is the non-linear coupling coefficient appearing in the second term on the right-hand side of Eq. (12). Unlike the previous three items, this corrective parameter is not a real diffusion coefficient; the method of its calculation is described later.
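As an illustration of Eqs. (12) and (13), the following minimal C# sketch (hypothetical names, not the PerMOC source) evaluates the finite-difference coupling coefficient and the corrected surface current for one coarse-mesh surface between mesh N-1 and mesh N.

```csharp
// Sketch of the CMFD surface coupling of Eqs. (12)-(13), under the stated sign convention.
public static class CmfdCouplingSketch
{
    // Eq. (13): finite-difference diffusion coefficient on the shared surface.
    public static double DHat(double dPrev, double dCur, double hPrev, double hCur)
    {
        return 2.0 * dCur * dPrev / (dPrev * hCur + dCur * hPrev);
    }

    // Eq. (12): corrected net current across that surface (dTilde is the non-linear term).
    public static double SurfaceCurrent(double dHat, double dTilde, double phiPrev, double phiCur)
    {
        return -dHat * (phiPrev - phiCur) - dTilde * (phiPrev + phiCur);
    }
}
```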
Fig. 2. Inner-outer iteration scheme of the MOC solver in the PerMOC code:
1) Input processing: Cartesian core, fuel assembly and pin cell data; boundary and initial conditions; material properties; number of solid angles, track spacing, etc.
2) Modular ray tracing: ray tracing for each unique pin cell; creation of a junction matrix (to connect the tracks of adjacent pins to each other); ray normalization based on the true region area.
3) g = 0.
4) g = g + 1.
5) Calculate the fission source (Q_f) from the flux of the previous outer iteration.
6) Calculate the scattering source (Q_s) from the flux of the previous inner iteration.
7) Calculate the total source; Eq. (6).
8) Calculate the angular flux change; Eq. (9).
9) Calculate the scalar flux; Eq. (10).
10) If the flux is converged, go to step 11; else go to step 6.
11) If g is the last group, go to step 12; else go to step 4.
12) Do the thermal up-scattering iteration.
13) Calculate the multiplication factor.
14) If the multiplication factor is converged then END; else go to step 15.
15) Do CMFD.
16) Go to step 3.
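A condensed C# sketch of the control flow of Fig. 2 is given below. The interfaces and method names are hypothetical placeholders (they do not belong to PerMOC); only the loop structure mirrors the flowchart.

```csharp
// Sketch of the inner-outer iteration driver, assuming solver objects with the listed operations.
using System;

public static class IterationDriverSketch
{
    public interface IMocSolver
    {
        void UpdateFissionSource(int g, double keff);   // step 5
        void UpdateScatteringSource(int g);             // step 6
        void TotalSourceAndSweep(int g);                // steps 7-9: Eqs. (6), (9), (10)
        bool FluxConverged(int g);                      // step 10
        void ThermalUpscatterIteration();               // step 12
        double MultiplicationFactor();                  // step 13
    }

    public interface ICmfdSolver { void Accelerate(IMocSolver moc); } // step 15

    public static double Solve(IMocSolver moc, ICmfdSolver cmfd, int groups, double tol, int maxOuter)
    {
        double keff = 1.0;
        for (int outer = 0; outer < maxOuter; outer++)
        {
            for (int g = 0; g < groups; g++)                          // steps 3-11
            {
                moc.UpdateFissionSource(g, keff);
                do
                {
                    moc.UpdateScatteringSource(g);
                    moc.TotalSourceAndSweep(g);
                } while (!moc.FluxConverged(g));
            }
            moc.ThermalUpscatterIteration();
            double keffOld = keff;
            keff = moc.MultiplicationFactor();
            if (Math.Abs(keff - keffOld) < tol) break;                // step 14
            cmfd.Accelerate(moc);                                     // steps 15-16
        }
        return keff;
    }
}
```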
The second term on the right-hand side of Eq. (12) is a corrective term: it compensates for the deficiency of the interface currents predicted by the diffusion equation and thereby recovers the true transport current. After completing an outer iteration, PerMOC starts the CMFD acceleration by calculating the net currents. The net current on each coarse-mesh surface is calculated by summing the partial currents of the high-order problem; here, the partial currents are the normal components of the angular fluxes that pass through each surface. The net current is therefore tallied over each surface of coarse mesh N as:

$$J_N^{d}=\sum_m \Phi_{m,N}^{d}\,\omega_m\left(\hat n_m\cdot\hat n_A\right), \qquad (15)$$

where Φ_{m,N}^d is the angular flux on surface d in direction m, ω_m is the solid-angle weight, n̂_A is the unit vector normal to cell surface d, and n̂_m is the unit vector in direction m. The partial currents normal to each surface are summed with respect to their solid-angle weights as well as their directions: in Eq. (15), the cell output angular flux is positive while the cell input angular flux is negative, according to the positive direction selected in Fig. 3 and Eq. (11).

Fig. 3. Scheme of a pin cell including one coarse mesh.
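A minimal C# sketch of the surface tally of Eq. (15), with hypothetical array names: each boundary angular flux is weighted by its solid-angle weight and by the cosine between its direction and the outward surface normal (positive for outgoing, negative for incoming directions).

```csharp
// Sketch of the net-current tally of Eq. (15) over one coarse-mesh surface.
public static class NetCurrentSketch
{
    public static double Tally(double[] angularFlux, double[] weight, double[] cosineToNormal)
    {
        double j = 0.0;
        for (int m = 0; m < angularFlux.Length; m++)
            j += angularFlux[m] * weight[m] * cosineToNormal[m]; // n_m . n_A > 0 for outgoing directions
        return j;
    }
}
```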
The second step of the CMFD calculation is to derive the average properties of the low-order coarse mesh from the high-order system. The average flux is written as:

$$\phi_N^g=\frac{\sum_{n\in N}\phi_n^g\,A_n}{\sum_{n\in N}A_n}. \qquad (16)$$

The average transport-corrected cross section can be derived from:

$$\Sigma_{tr,N}^g=\frac{\sum_{n\in N}\Sigma_{tr,n}^g\,\phi_n^g\,A_n}{\sum_{n\in N}\phi_n^g\,A_n}, \qquad (17)$$
and the average production kernel can be derived as follows:

$$\Sigma_{p,N}^{g'\to g}=\sum_{n\in N}\Sigma_{p,n}^{g'\to g}\,\phi_n^{g'}\,A_n\Big/\sum_{n\in N}\phi_n^{g'}\,A_n. \qquad (18)$$
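The homogenizations of Eqs. (14) and (16)-(18) all follow one pattern (flux-and-area weighting over the fine meshes inside a coarse cell); a minimal C# sketch of that pattern, with hypothetical names, is:

```csharp
// Sketch of the flux-volume (here flux-area) weighted collapse used in Eqs. (14), (16)-(18).
public static class HomogenizationSketch
{
    // Returns the coarse-mesh average flux and a flux-weighted coarse-mesh quantity
    // (cross section or diffusion coefficient) for one group.
    public static (double phiBar, double sigmaBar) Collapse(double[] phi, double[] sigma, double[] area)
    {
        double sumA = 0.0, sumPhiA = 0.0, sumSigPhiA = 0.0;
        for (int n = 0; n < phi.Length; n++)
        {
            sumA       += area[n];
            sumPhiA    += phi[n] * area[n];
            sumSigPhiA += sigma[n] * phi[n] * area[n];
        }
        return (sumPhiA / sumA, sumSigPhiA / sumPhiA); // Eq. (16) and the Eq. (17)/(18) pattern
    }
}
```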
Finally, the average flux-weighted diffusion coefficient is derived from Eq. (14).

Eq. (12) is used twice in the CMFD calculation. First, it is applied to find the non-linear coupling coefficients from the known interface partial currents:

$$\tilde D_N^{d}=-\frac{J_N^{d}+\hat D_N^{d}\left(\phi_{N-1}-\phi_N\right)}{\phi_{N-1}+\phi_N}. \qquad (19)$$

The currents calculated from Eq. (15) are used to compute the non-linear coupling coefficients through Eq. (19). After these coefficients are determined, Eq. (12) is applied again to calculate the surface currents, but now using the average coarse-mesh flux. In the PerMOC code, the CMFD coarse-mesh flux is obtained by iterating between Eqs. (11) and (12): the flux calculated from Eq. (11) is used to compute new currents from Eq. (12), and these currents are used again to update the flux from Eq. (11). The iterations continue until convergence is achieved. After CMFD convergence, the MOC flux is updated using the CMFD flux:

$$\phi_n^g(new)=\phi_n^g(old)\,\frac{\phi_N^g(last)}{\phi_N^g(init)}. \qquad (20)$$

In the above equation, φ_N^g(init) is the initial average flux obtained from Eq. (16), φ_N^g(last) is the final converged CMFD flux, φ_n^g(old) is the converged MOC flux and φ_n^g(new) is the MOC flux updated by the CMFD solver.

In general, CMFD is not guaranteed to remain stable and convergent. The common remedy is to apply a damping factor to the non-linear diffusion-coefficient terms (Li, 2013), but there is no method to determine the optimal damping factor for a practical reactor problem prior to the calculation. (Li et al., 2015) proposed two techniques for stabilizing calculations that would otherwise diverge with un-damped CMFD. The first is to perform additional energy sweeps over the up-scattering group range during the high-order MOC calculation, in order to pass more accurate information to the CMFD calculation. The second is to extend the traditional scalar-flux prolongation so that it provides the spatial variation inside each acceleration cell. The details of these methods are beyond the scope of the current paper. PerMOC only uses a strict thermal up-scattering iteration in its solver to improve the convergence rate and stability of the CMFD method: it automatically checks the scattering matrices of the problem materials and detects whether there is any up-scattering in the problem domain. Before the CMFD acceleration module is started, the main MOC transport solver applies additional iterations on the groups with thermal up-scattering cross sections. This up-scattering calculation reduces the MOC flux error, and this error reduction gives better CMFD average properties. Fig. 4 shows the calculational steps of the CMFD solver in PerMOC.

Fig. 4. Calculational steps in the CMFD solver of the PerMOC code: 1) Start; 2) calculate the net currents from the partial currents, Eq. (15); 3) calculate the coarse-mesh average properties, Eqs. (16) through (18); 4) calculate the non-linear coupling coefficients, Eq. (19); 5) calculate the CMFD flux by iteration between Eq. (11) and Eq. (12); 6) calculate the new CMFD multiplication factor; 7) update the flux of the main transport problem using Eq. (20); 8) End.

2.3. The adjoint MOC implementation

An adjoint form of the neutron transport equation is obtained by permuting the primary and secondary group indices. The adjoint neutron transport equation can be considered as an eigenvalue problem, and its eigenvalue is the same as that found by the main MOC transport equation. Specifically, the adjoint flux, which is also called the importance function, is derived from the following equation:
$$\frac{d\Psi^*}{ds}+\Sigma_{tr}\,\Psi^*=Q^*, \qquad (21)$$
where Ψ* is the adjoint angular flux and Q* is the adjoint source term. Similar to the forward equation, the following solution for the adjoint angular flux can be assumed:
$$\Psi^*(s)=\Psi^*(0)\exp\left(-\Sigma_{tr}\,s\right)+\tilde q^{\,*}\left[1-\exp\left(-\Sigma_{tr}\,s\right)\right], \qquad (22)$$

$$\left(\tilde q^{\,*}\right)^g=\frac{(Q^*)^g}{4\pi\,\Sigma_{tr}}, \qquad (23)$$
where s is the coordinate along the backward neutron path. The adjoint source term is given by:
$$(Q^*)^g=\sum_{g'}\left(\Sigma_s^{g\to g'}+\frac{\Sigma_p^{g\to g'}}{k_{eff}}\right)(\phi^*)^{g'}, \qquad (24)$$
and similarly, the adjoint production kernel in the above equation is defined as:
$$\Sigma_p^{g\to g'}=\chi^{g'}\,\nu\Sigma_f^{g}. \qquad (25)$$
All notations are similar to those of Eqs. (5) through (7). The MOC forward solver can simply be used to solve the adjoint equation; the only exception is the adjoint source term, which is calculated using Eqs. (23) through (25). The PerMOC adjoint solver has two specific features. First, it uses a "thermal-up-scattering-like" inner iteration over the thermal energy-group range; as will be seen below, this feature is effective whenever there is thermal up-scattering in the scattering matrix. Second, the adjoint inner iterations are carried out in the reverse order of the energy groups. These features are clarified by the following example. Fig. 5 shows the seven-group scattering matrix of the UO2 fuel assemblies of the C5G7 benchmark (Lewis et al., 2001); there is thermal up-scattering among groups 5 through 7. The inner iteration of the forward calculation starts from group 1 and ends with group 7 (horizontal sweep pass). At the end of the forward inner iterations, the thermal up-scattering iterations (gray boxes) are started to reduce the error of the forward flux; this is necessary for the CMFD convergence improvement, as stated in the last paragraph of Section 2.2. The normal adjoint calculation is carried out along the vertical sweep pass, because the adjoint calculation uses the transposed scattering cross-section matrices. The energy-group sweeps of one outer iteration are compared for the forward and adjoint solvers, with and without up-scattering, in Table 1. As mentioned earlier, PerMOC executes the adjoint calculation in reverse energy-group order. At the beginning of each adjoint inner iteration, it performs a few additional inner iterations in the thermal energy range (gray boxes), which reduces the total number of adjoint transport sweeps and gives an adjoint multiplication factor relatively closer to the forward multiplication factor.
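As a small illustration of the group orders compared in Table 1, the sweep sequences could be generated as follows for a seven-group problem with thermal up-scattering in groups 5-7 (as in the C5G7 example); the class and method names are hypothetical, not PerMOC routines.

```csharp
// Sketch of the forward and PerMOC-adjoint group-sweep orders of Table 1 (groups numbered 1..7).
using System.Collections.Generic;
using System.Linq;

public static class SweepOrderSketch
{
    // Forward pass: groups 1 through 7 in increasing order.
    public static int[] ForwardOrder() => Enumerable.Range(1, 7).ToArray();

    // PerMOC adjoint pass: thermal-up-scattering-like block first, then the full reverse sweep.
    public static int[] PerMocAdjointOrder()
    {
        var order = new List<int> { 7, 6, 5 };                    // extra thermal-range iterations
        order.AddRange(Enumerable.Range(1, 7).Reverse());         // (7,6,5,4,3,2,1)
        return order.ToArray();
    }
}
```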
Fig. 5. Iteration pass with respect to the scattering matrix (seven-group example, groups 1-7; "Inner" marks within-group scattering, "Down" down-scattering and "Up" thermal up-scattering entries). The forward inner-iteration sweep pass runs horizontally from group 1 to group 7, the normal adjoint inner-iteration pass runs vertically, and the PerMOC adjoint inner-iteration pass runs in reverse group order.
Table 1. Group order in the outer iteration for the forward and adjoint methods.

Method | Solver | Outer-iteration group order | Sweeps (#)
Forward | Normal | (1,2,3,4,5,6,7) | 7
Forward | PerMOC (up-scattering) | (1,2,3,4,5,6,7), (4,5,6,7), (5,6,7), (6,7) | 16
Adjoint | Normal | (1,2,3,4,5,6,7) | 7
Adjoint | PerMOC (up-scattering-like) | (7,6,5), (7,6,5,4,3,2,1) | 10

Fig. 6. Parallelism method of inner iterations in the MOC forward and adjoint solvers.

2.4. Adjoint 2D Coarse Mesh Finite Difference

Similar to the forward calculation, the adjoint CMFD algorithm is applied in order to accelerate the convergence rate of the adjoint equation. Conceptually, the adjoint CMFD acceleration algorithm is identical to the forward CMFD method, with a minor exception in the definition of the adjoint coarse-mesh properties: they are derived from Eqs. (16) through (18), but with the adjoint flux as the weighting function. The coarse-mesh source term (the right-hand side of Eq. (11)) is also derived from the adjoint flux. The general idea of iterating between Eqs. (11) and (12) in order to converge the adjoint CMFD flux remains unchanged.

2.5. Parallel sweeping

In parallel operations, tasks are partitioned in such a way that multiple threads can concurrently operate on different segments (Toub, 2010). In the C-Sharp (C#) programming language, the "Parallel" class provides method-based parallel implementations of the "for" and "foreach" loops. Both parallel methods contain several overloads that make it possible to stop or break execution of the loop, monitor the state of the loop on other threads, maintain thread-local state, finalize thread-local objects, control the degree of concurrency, and so on. PerMOC is written in C# and inherits all of its parallel features. The parallel "for" method is used in the inner iteration of both the MOC forward and adjoint solvers: in each inner iteration, a complete sweep is performed over the domain for all solid angles and all segments of each track. Fig. 6 illustrates this parallelism more precisely.

2.6. The kinetics parameters

The adjoint CMFD capability gives PerMOC the opportunity to be applied to point kinetics calculations and/or perturbation theory; in the current section we deal with the kinetics parameters. To describe the reactor kinetics (Oka and Suzuki, 2013), both the time-dependent prompt neutrons and the delayed neutron precursors must be considered. The point-reactor approximation is used when the spatial shape changes slowly; this condition is valid in most transients related to the normal reactor control system. The starting point of a reactor point kinetics calculation is the evaluation of the point kinetics parameters. The emphasis here is on the calculation of the inverse velocity (1/v), the effective delayed neutron fraction β_eff and the mean neutron generation time Λ (which is proportional to the mean neutron lifetime). The neutron velocity is computed from the neutron mass M and its kinetic energy E using

$$v=\sqrt{2E/M}. \qquad (26)$$

By inserting M = 1.675 × 10⁻²⁷ kg, Eq. (26) numerically becomes:

$$\frac{1}{v}=\frac{1}{\sqrt{1.9129\times10^{8}\,E_g}}, \qquad (27)$$

where E_g is the mean energy of the neutrons in the selected energy group g. Based on group-theory concepts, however, the mean neutron inverse velocity in the gth group is obtained by:

$$\overline{\left(\frac{1}{v}\right)}_g=\frac{\int_g \frac{1}{v(E)}\,\phi(E)\,dE}{\int_g \phi(E)\,dE}. \qquad (28)$$
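To illustrate the group collapse of Eq. (28), the following C# sketch (hypothetical names, not the PerMOC implementation) evaluates the mean inverse velocity of one group with the trapezoidal rule; it assumes a tabulated within-group spectrum and uses the numerical inverse velocity of Eq. (27) with the energy expressed in eV.

```csharp
// Sketch of the group-averaged inverse velocity of Eqs. (27)-(28), under the stated assumptions.
using System;

public static class InverseVelocitySketch
{
    // Eq. (27): 1/v for a neutron of energy E (assumed in eV), giving seconds per metre.
    public static double InverseVelocity(double energyEv) => 1.0 / Math.Sqrt(1.9129e8 * energyEv);

    // Eq. (28): group-averaged 1/v from tabulated (E, phi(E)) points inside the group.
    public static double GroupInverseVelocity(double[] energy, double[] flux)
    {
        double num = 0.0, den = 0.0;
        for (int i = 0; i < energy.Length - 1; i++)
        {
            double dE = energy[i + 1] - energy[i];
            num += 0.5 * dE * (InverseVelocity(energy[i]) * flux[i] + InverseVelocity(energy[i + 1]) * flux[i + 1]);
            den += 0.5 * dE * (flux[i] + flux[i + 1]);
        }
        return num / den;
    }
}
```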
Fig. 7. PWR-like pin cell.
Table 2. Macroscopic cross sections (cm⁻¹) of the PWR-like pin cell.

Cross section | Fuel, Group 1 | Fuel, Group 2 | Clad, Group 1 | Clad, Group 2 | Moderator, Group 1 | Moderator, Group 2
Total | 0.392175 | 0.622581 | 0.276383 | 0.278610 | 0.439812 | 1.35565
Absorption | 0.029567 | 0.26285 | 0.00159 | 0.004029 | 0.006534 | 0.017808
Nu-fission | 0.22141 | 0.496970 | 0.0 | 0.0 | 0.0 | 0.0
Fission | 0.00918713 | 0.206211618 | 0.0 | 0.0 | 0.0 | 0.0
Scattering (1 → g) | 0.361893 | 0.000715 | 0.274505 | 0.000288 | 0.411998 | 0.02128
Scattering (2 → g) | 0.001451 | 0.358282 | 0.000774 | 0.273807 | 0.002672 | 1.33517
The effective fraction of delayed neutrons is the fraction of delayed neutrons weighted (over space, energy and angle) by the adjoint neutron flux. For a mixture of fissionable isotopes, the effective fraction of delayed neutrons of the ith precursor group is calculated by:

$$\beta_{eff,i}=\frac{\int dA\int dE\,\Psi^*(r,E,\Omega)\,\chi_{d,i}(E)\,\beta_i\int \nu\Sigma_f(r,E')\,\Phi(r,E')\,dE'}{\int dA\int dE\,\Psi^*(r,E,\Omega)\,\chi_p(E)\int \nu\Sigma_f(r,E')\,\Phi(r,E')\,dE'}. \qquad (29)$$

The discretized form of this equation is:

$$\beta_{eff,i}=\frac{\sum_n A_n\sum_g(\phi^*)_n^g\,\chi_{d,i}^g\,\beta_i\sum_{g'}(\nu\Sigma_f)_n^{g'}\,\phi_n^{g'}}{\sum_n A_n\sum_g(\phi^*)_n^g\,\chi_p^g\sum_{g'}(\nu\Sigma_f)_n^{g'}\,\phi_n^{g'}}, \qquad (30)$$

where all subscripts have the same definitions as in the previous sections, β_i is the delayed neutron fraction of the ith group, χ_p^g is the prompt neutron spectrum in group g and χ_{d,i}^g is the delayed neutron spectrum in group g for the ith delayed neutron group. The mean generation time of any multiplying medium is equal to the average time between two generations of prompt neutrons at the critical state. This time is known as the prompt neutron generation time and is derived from the following formula:

$$\Lambda=\frac{\int dA\int dE\,\Psi^*(r,E,\Omega)\,\frac{1}{v}\,\Phi(r,E,\Omega)}{\int dA\int dE\,\Psi^*(r,E,\Omega)\,\chi_p(E)\int \nu\Sigma_f(r,E')\,\Phi(r,E')\,dE'}. \qquad (31)$$

The discretized form of Eq. (31) is:

$$\Lambda=\frac{\sum_n A_n\sum_g(\phi^*)_n^g\,\overline{\left(\frac{1}{v}\right)}^{\,g}\,\phi_n^g}{\sum_n A_n\sum_g(\phi^*)_n^g\,\chi_p^g\sum_{g'}(\nu\Sigma_f)_n^{g'}\,\phi_n^{g'}}. \qquad (32)$$

The delayed neutron spectrum χ_d must be determined for the calculation of the delayed neutron fraction and the mean generation time. Under the assumption of a Maxwellian distribution, the spectrum can be calculated from the following formula:

$$\chi_{d,i}^g=\int_g 2.07296489\,\frac{\sqrt{E}}{\bar E^{\,1.5}}\,\exp\!\left(-\frac{1.5E}{\bar E}\right)dE, \qquad (33)$$

where Ē is the mean energy of the ith delayed group and the integration is carried out over the neutron energy group g. In brief, the point kinetics parameters are calculated in the PerMOC code based on Eqs. (27) through (33). It should be emphasized that the calculation of these parameters requires the 3D flux distribution, while PerMOC is a 2D code, so the calculated point kinetics parameters suffer from this approximation.

Fig. 8. Configuration of the BWR benchmark with gadolinium pins.

3. Benchmarking
To validate the developed PerMOC code, standard benchmarking should be carried out. In the current section, such benchmarks are applied to the PerMOC code; in each case, the PerMOC results are compared with previously verified results and with other well-known, widely accepted codes.

3.1. Few-region, two-group pin cell

This case is taken from (Chen et al., 2008) and represents a simple PWR-like pin cell geometry, as shown in Fig. 7. The pin cell has three regions corresponding to the fuel, clad and moderator materials, and is an example of a few-region, two-group problem. The two-group cross sections of each region are given in Table 2.
Table 3. Macroscopic cross sections (cm⁻¹) of the BWR benchmark.

Material | Group | Σt | νΣf | Σs g→1 | Σs g→2
Fuel | 1 | 3.62022E−1 | 1.86278E−2 | 3.33748E−1 | 6.64881E−4
Fuel | 2 | 5.72155E−1 | 3.44137E−1 | 0.0 | 3.80898E−1
Clad | 1 | 2.74144E−1 | 0.0 | 2.72377E−1 | 1.90838E−4
Clad | 2 | 2.80890E−1 | 0.0 | 0.0 | 2.77230E−1
Gadolinium pin | 1 | 3.71785E−1 | 1.79336E−2 | 3.38096E−1 | 6.92807E−4
Gadolinium pin | 2 | 1.75 | 1.57929E−1 | 0.0 | 3.83204E−1
Moderator | 1 | 6.40711E−1 | 0.0 | 6.07382E−1 | 3.31316E−2
Moderator | 2 | 1.69131 | 0.0 | 0.0 | 1.68428
Fig. 9. Core configuration of C5G7 benchmark.
The infinite multiplication factor is compared with an MCNP5 result as the reference value. The MCNP5 calculation was run with 40,000 neutrons per cycle; after skipping 50 cycles, 250 active cycles were run. The calculated MCNP5 multiplication factor is 1.06405 (±0.00043). The fuel radius of this pin cell is 0.41 cm, its clad thickness is 0.06 cm, and the fuel pitch is 1.2647 cm.

3.2. BWR benchmark problem with gadolinium pins

This benchmark problem is a 4 × 4 BWR lattice containing 14 fuel pins and 2 gadolinium pins (Hong and Cho, 1998). The fuel pins, of radius 0.5 cm, are made of 3 wt% UO2, while the gadolinium pins are a mixture of 3 wt% Gd2O3 and 3 wt% UO2. The clad material of all pins is zircaloy-2, 0.1 cm thick. The moderator in this benchmark is light water, and the lattice problem is shown in Fig. 8. Cross sections of UO2, (UO2 + Gd2O3), zircaloy-2 and light water are given in Table 3. The reference multiplication factor of this configuration is 0.986561, calculated with the DRAGON code.

3.3. C5G7 benchmark

The 2D C5G7 problem (Lewis et al., 2001) was proposed by the OECD/NEA to assess the efficiency of deterministic codes for heterogeneous media. The problem contains four 17 × 17 pin-cell assemblies with vacuum boundary conditions on the right and bottom and reflective boundary conditions on the left and top boundaries. Each pin has a 0.54 cm radius and a pitch of 1.26 cm. The bundles on the top left and bottom right contain UO2 fuel, while the ones on the opposite two corners are MOX assemblies, as depicted in Fig. 9. Each assembly contains a pattern of fuel pin cells of varying enrichments, guide tubes and fission chambers.

Fig. 10. LRA benchmark core map.

3.4. LRA benchmark

The 2D LRA benchmark (Aviles, 1993) is a two-group, quarter-core transient BWR problem. The geometry has 78 fuel assemblies with a size of 15 cm × 15 cm. The fuel assemblies are homogenized and surrounded by water cells to fill the 165 cm × 165 cm geometry depicted in Fig. 10. The LRA problem was originally proposed as a diffusion problem and has been extensively solved with coarse-mesh and nodal diffusion methods, but currently this benchmark is also used for the validation of transport codes, with a fine mesh overlaid on its geometry (Table 4).
Table 4. Cross sections (cm⁻¹) of the transport version of the LRA benchmark.

Material | Group | D (cm) | Σtr | Σa | νΣf | Σs g→1 | Σs g→2
1 | 1 | 1.255 | 0.265604 | 0.008252 | 0.004602 | 0.232022 | 0.02533
1 | 2 | 0.211 | 1.579779 | 0.1003 | 0.1091 | 0 | 1.479479
2 | 1 | 1.268 | 0.262881 | 0.007181 | 0.004609 | 0.22803 | 0.02767
2 | 2 | 0.1902 | 1.752541 | 0.07047 | 0.08675 | 0 | 1.682071
3 | 1 | 1.259 | 0.26476 | 0.008002 | 0.004663 | 0.230588 | 0.02617
3 | 2 | 0.2091 | 1.594134 | 0.08344 | 0.1021 | 0 | 1.510694
4 | 1 | 1.259 | 0.26476 | 0.008002 | 0.004663 | 0.230588 | 0.02617
4 | 2 | 0.2091 | 1.594134 | 0.073324 | 0.1021 | 0 | 1.52081
5 | 1 | 1.257 | 0.265182 | 0.000603 | 0 | 0.217039 | 0.04754
5 | 2 | 0.1592 | 2.093802 | 0.01911 | 0 | 0 | 2.074692
Table 5. PWR pin cell.

Case No. | Azimuthal angles | Polar angles | Iteration sweeps (#) | Keff | Difference (pcm)
1 | 64 | 6 | 86 | 1.06539 | 134
2 | 64 | 12 | 95 | 1.06480 | 72
3 | 128 | 12 | 77 | 1.06443 | 38

Reference Keff = 1.06405 (±0.00043) from MCNP5.
Table 6. BWR 4 × 4 lattice.

Case No. | Azimuthal angles | Polar angles | Iteration sweeps (#) | Keff | Difference (pcm)
1 | 64 | 2 | 64 | 0.984636 | 193
2 | 64 | 4 | 65 | 0.985483 | 108
3 | 64 | 6 | 74 | 0.985291 | 127

Reference Keff = 0.986561 from DRAGON4.
Fig. 11. Illustration of one pin cell's sub-divisions: reference case (right) and present work (left).
4. Results and discussion

In this section, the validation results of the PerMOC code are presented. The following assumptions are used for all test cases unless explicitly stated otherwise. The polar angles of all benchmark cases are discretized with the equal-weight quadrature set. The track spacing is set to 0.01 cm in all cases. The maximum number of inner iterations is set to 5 per energy group, and the forward thermal up-scattering iteration is set to 2 iterations per energy group. The inner iteration is terminated when the scalar flux converges to 1E−7 or when the maximum number of inner iterations is reached, whichever comes first. The outer iteration terminates when both the multiplication factor and the scalar flux converge to 1E−7. The results of each benchmark of the previous section are presented in the following. All cases were run on a system with an Intel Core i7 @ 2.2 GHz and 4 GB of installed memory.

The PWR pin cell is modeled with four rings and eight angular divisions, which leads to a total of 40 flat source regions in the pin cell. The results in Table 5, obtained with various azimuthal and polar angle counts, are in good agreement with the MCNP5 calculation. The BWR problem with gadolinium pins is modeled with a fixed 64 azimuthal angles and various polar angles; each pin is modeled with three rings and eight angular divisions, which leads to a total of 32 flat source regions in every pin cell. The results in Table 6 are comparable with the reference DRAGON calculation; the reference multiplication factor of this configuration is taken from (Mazumdar and Degweker, 2015).

The above two test cases verify the MOC solver of the PerMOC code. In order to check and verify the parallel capability, the adjoint feature and the forward-adjoint CMFD acceleration technique of PerMOC, a UO2 lattice problem consisting of a single assembly with 17 by 17 pin cells is analyzed. Each pin cell has a 1.26 cm pitch and a pin radius of 0.54 cm.
Table 7. Forward results of the UO2 lattice of C5G7.

Case | Method | Transport sweeps (#) | Keff | Running time (s)
a | MOC | 1245 | 1.335081 | 1155
b | MOC + Upscat | 1177 | 1.335081 | 1122
c | MOC + Upscat + Parallel | 1251 | 1.335081 | 346
d | MOC + Upscat + Parallel + CMFD | 259 | 1.335081 | 116
Fig. 12. UO2 convergence behavior with respect to the number of transport sweeps.

Fig. 13. UO2 convergence behavior with respect to the total running time.

Table 10. Point kinetics parameters of the C5G7 benchmark.

Parameter | PerMOC | Reference | Difference (%)
β_eff | 4.9752E−03 | 5.8680E−03 | 15.2
Λ (s) | 1.5738E−05 | 1.5022E−05 | 4.8

Table 8. Adjoint results of the UO2 lattice of C5G7.

Case | Method | Transport sweeps (#), Method A | Transport sweeps (#), Method B | Keff
a | MOC | 1250 | 2224 | 1.335081
b | MOC + Upscat | 1236 | 2224 | 1.335081
c | MOC + Upscat + Parallel | 1412 | 2675 | 1.335081
d | MOC + Upscat + Parallel + CMFD | 349 | 371 | 1.335082

Table 9. C5G7 benchmark results.

Case No. | Azimuthal angles | Polar angles | Transport sweeps (#) | Keff (Forward) | Keff (Adjoint) | Error (pcm)
1 | 4 | 6 | 842 | 1.18376 | 1.18376 | 279
2 | 8 | 6 | 744 | 1.18433 | 1.18433 | 222
3 | 16 | 6 | 561 | 1.18572 | 1.18572 | 83
4 | 32 | 6 | 519 | 1.18594 | 1.18594 | 61
5 | 64 | 6 | 520 | 1.18604 | 1.18603 | 51
6 | 128 | 6 | 701 | 1.18602 | 1.18602 | 53

Reference Keff = 1.18655.

It is the UO2 assembly from the C5G7 benchmark problem, with UO2 fuel pins, guide-tube pins and water cells (Lewis et al., 2001). This UO2 assembly was modeled by (Li et al., 2014) using the OpenMOC code, and the reported multiplication factor is 1.335059 under the following assumptions: a track spacing of 0.05 cm, 64 azimuthal angles, 6 polar angles, and reflective boundary conditions on all sides of the UO2 assembly. PerMOC uses 18 flat source regions per pin cell, while the original work uses 12; this is illustrated in Fig. 11. Another difference lies in the polar-angle quadrature sets: PerMOC uses the equal-weight quadrature set, whereas the reference assembly is modeled with the quadrature sets of (Yamamoto et al., 2007). Four different cases, each representing one functionality of the PerMOC code, are examined:

a) the MOC kernel without up-scattering iterations,
b) MOC with up-scattering iterations,
c) MOC with up-scattering iterations and parallel sweeping, and
d) MOC with up-scattering iterations and parallel sweeping plus CMFD acceleration.

The results of the forward calculations are compared in Table 7. The convergence behaviors of the four cases are plotted in Fig. 12 with respect to the number of transport sweeps, and in Fig. 13 with respect to the total running time of each case. As expected, the up-scattering case needs slightly fewer transport sweeps than the first case. Parallelization and CMFD reduce the execution time considerably: CMFD does so by reducing the number of iteration sweeps, while parallelization reduces the total running time through multi-tasking. As can be seen from Table 7, the number of parallel transport sweeps increases in comparison with the non-parallel case (1251 parallel versus 1177 serial transport sweeps). The reason is as follows: on the reflective surface boundaries, the output angular flux of azimuthal angle φ becomes the input angular flux of the (π − φ) angle. In parallel execution the azimuthal angles are dispatched in a non-deterministic order by the Parallel.For loop, while in the non-parallel case the azimuthal angle index increases uniformly. This causes a small disturbance in the boundary angular fluxes of the parallel case, and consequently the code needs a few more iterations to reach the desired convergence.
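As a minimal illustration of the Parallel.For-based inner-iteration sweep of Section 2.5, and of the non-deterministic azimuthal ordering discussed above, consider the following C# sketch; the track bookkeeping and the SweepTrack routine are hypothetical placeholders rather than PerMOC code.

```csharp
// Sketch of a parallel sweep over azimuthal angles using the C# Parallel class.
using System.Threading.Tasks;

public static class ParallelSweepSketch
{
    public static void SweepAllAngles(int azimuthalAngles, int tracksPerAngle)
    {
        // Each loop body handles every track of one azimuthal angle; the angles are dispatched
        // to worker threads concurrently, so their processing order is not deterministic.
        Parallel.For(0, azimuthalAngles, m =>
        {
            for (int t = 0; t < tracksPerAngle; t++)
                SweepTrack(m, t); // apply the Eq. (9) segment updates along track t of angle m
        });
    }

    private static void SweepTrack(int angle, int track)
    {
        // placeholder for the segment-wise attenuation and tally of Eqs. (9)-(10)
    }
}
```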
Table 11. LRA benchmark results.

Case No. | Azimuthal angles | Polar angles | Track spacing (cm) | Transport sweeps (#) | Keff (Forward) | Keff (Adjoint) | Error (pcm)
1 | 64 | 4 | 0.10 | 322 | 0.99266 | 0.99265 | 371
2 | 64 | 4 | 0.05 | 267 | 0.99379 | 0.99380 | 258
3 | 32 | 6 | 0.01 | 196 | 0.99397 | 0.99397 | 240
4 | 64 | 6 | 0.01 | 153 | 0.99375 | 0.99376 | 262

Reference Keff = 0.99637.
Please note that, as expected, all cases finally reach identical multiplication factor values. The results of the adjoint calculations for the four aforementioned cases are summarized in Table 8. In this table, "Method A" is the proposed adjoint method of the PerMOC code, while "Method B" is the conventional (normal) adjoint method (see Fig. 5 and Table 1). As can be seen from this table, the new adjoint method reduces the total number of transport sweeps without affecting the adjoint multiplication factor value.

The C5G7 core benchmark is modeled with 40 flat source regions per pin cell. The flux gradient at the fuel-moderator interface must be captured precisely in order to obtain good results for this benchmark, so each moderator pin cell in the vicinity of the fuel region is split into an additional 56 flat source regions. The computed results of this benchmark are summarized in Tables 9 and 10. The reference kinetics parameters are taken from (Wu et al., 2018); note that the reference kinetics values are based on the 3D core, while in the present work they are calculated for the 2D C5G7. The best multiplication factor for the 2D C5G7 is obtained with 64 azimuthal and 6 polar angles. As reported in Table 9, the forward calculation converges with 520 transport sweeps in this case. From the viewpoint of the adjoint calculation, convergence of the same case is attained with 329 transport sweeps for the proposed PerMOC adjoint solver, while the normal adjoint method needs 464 transport sweeps. These results again confirm the effectiveness of the PerMOC adjoint solver.

As stated in the previous section, the LRA benchmark was originally proposed for the validation of nodal methods; the large size of its homogenized fuel assemblies makes it a good candidate for nodal codes. From the CMFD and MOC point of view, however, the size of the flat source regions should be on the order of 1-2 cm. As stated in (Larsen, 2003) and (OpenMOC Documentation, 2014), when the fine-mesh cell size becomes significantly larger than the neutron mean free path in that cell, the step characteristics no longer preserve the reaction and leakage rates within the cells. This is why the LRA fuel assemblies must be refined until the eigenvalue converges. By dividing each fuel assembly of the LRA benchmark into 25 identical squares, converged results are achieved. The computed results of the LRA benchmark are summarized in Table 11.

5. Conclusions

The current article describes the first development stage of the PerMOC code, which is under development at Shiraz University based on the MOC method and the adjoint-forward CMFD technique. PerMOC is written in the C-Sharp programming language and benefits from its fast and simple parallel methods. Currently, PerMOC is capable of modeling Cartesian geometry consisting of a combination of circles and rectangles. Comparison of the calculated multiplication factors for the selected standard benchmarks shows, in general, an acceptable agreement with the reference values for both the adjoint and the forward solver. The implementation of both adjoint and forward CMFD acceleration, together with parallel sweeping, gives a reasonable speed-up to the code.
Inverse energy-group sweeping and its initial thermal-up-scattering-like iterations are the two unique features of the adjoint solver of the PerMOC code. The point kinetics parameter calculation is another capability of the PerMOC code based on its adjoint solver, although the reliability of its results still needs more verification and benchmarking. Beyond the above notes, PerMOC is in its initial development stage. It is planned to enhance its capabilities with a built-in group-wise cross-section generator in order to make it a more powerful code. Specifically, the following features are under development: problem-dependent group-wise cross-section generation with resonance self-shielding, extension from 2D to 3D simulation, and a user-friendly, interactive input-output capability.

Appendix A. Supplementary data

Supplementary data to this article can be found online at https://doi.org/10.1016/j.anucene.2019.04.050.

References

Askew, J., 1972. A Characteristics Formulation of the Neutron Transport Equation in Complicated Geometries. United Kingdom Atomic Energy Authority.
Aviles, B., 1993. Development of a variable time-step transient NEM code: SPANDEX. Trans. Am. Nucl. Soc. 68, 425-427.
Boyd III, W.R.D., 2014. Massively Parallel Algorithms for Method of Characteristics Neutral Particle Transport on Shared Memory Computer Architectures. Massachusetts Institute of Technology.
Boyd, W. et al., 2014. The OpenMOC method of characteristics neutral particle transport code. Ann. Nucl. Energy 68, 43-52.
Cacuci, D.G., 2010. Handbook of Nuclear Engineering. Springer Science & Business Media.
Chao, Y., 2000. Coarse mesh finite difference methods and applications. Proc. PHYSOR 2000.
Chen, Q. et al., 2008. AutoMOC - a 2D neutron transport code for arbitrary geometry based on the method of characteristics and customization of AutoCAD. Nucl. Eng. Des. 238 (10), 2828-2833.
Halsall, M., 1980. CACTUS, a characteristics solution to the neutron transport equations in complicated geometries. AEEW-R-1291, UKAEA Atomic Energy Establishment.
Hébert, A., 2013. DRAGON5: designing computational schemes dedicated to fission nuclear reactors for space. Proc. Nuclear and Emerging Technologies for Space, 25-28.
Hong, S.G., Cho, N.Z., 1998. CRX: a code for rectangular and hexagonal lattices based on the method of characteristics. Ann. Nucl. Energy 25 (8), 547-565.
Kochunas, B.M., 2013. A Hybrid Parallel Algorithm for the 3-D Method of Characteristics Solution of the Boltzmann Transport Equation on High Performance Compute Clusters. PhD diss., University of Michigan.
Larock, B.E. et al., 1999. Hydraulics of Pipeline Systems. CRC Press.
Larsen, E.W., 2003. Infinite-medium solutions of the transport equation, SN discretization schemes, and the diffusion approximation. Transp. Theory Stat. Phys. 32 (5-7), 623-643.
Leonard, A., 1995. Optimal polar angles and weights. Trans. Am. Nucl. Soc. 73, 171.
Lewis, E. et al., 2001. Benchmark specification for deterministic 2-D/3-D MOX fuel assembly transport calculations without spatial homogenization (C5G7 MOX). NEA/NSC 280.
Li, L., 2013. A Low Order Acceleration Scheme for Solving the Neutron Transport Equation. Massachusetts Institute of Technology.
Li, L. et al., 2015. Techniques for stabilizing coarse-mesh finite difference (CMFD) in methods of characteristics (MOC).
Li, L. et al., 2014. A Low Order Nonlinear Transport Acceleration Scheme for the Method of Characteristics. PHYSOR 2014, Kyoto, Japan.
Mazumdar, T., Degweker, S., 2015. Solution of neutron transport equation by Method of Characteristics. Ann. Nucl. Energy 77, 522-535.
Oka, Y., Suzuki, K., 2013. Nuclear Reactor Kinetics and Plant Control. Springer.
OpenMOC Documentation, 2014. Massachusetts Institute of Technology. Retrieved 15 Jan 2019 from https://mit-crpg.github.io/OpenMOC/.
Rhodes, J. et al., 2006. CASMO-5 development and applications. Proc. ANS Topical Meeting on Reactor Physics (PHYSOR-2006).
Sanchez, R. et al., 2010. APOLLO2 year 2010. Nucl. Eng. Technol. 42 (5), 474-499.
Santandrea, S., Sanchez, R., 2002. Acceleration techniques for the characteristic method in unstructured meshes. Ann. Nucl. Energy 29 (4), 323-352.
Smith, K., 1983. Nodal method storage reduction by nonlinear iteration. Trans. Am. Nucl. Soc. 44, 265-266.
Smith, K.S., 2002. Full-core, 2-D, LWR core calculations with CASMO-4E. PHYSOR 2002, Seoul, Korea.
Toub, S., 2010. Patterns of Parallel Programming. Microsoft Corporation.
Wu, Q. et al., 2018. Whole-core forward-adjoint neutron transport solutions with coupled 2-D MOC and 1-D SN and kinetics parameter calculation. Prog. Nucl. Energy 108, 310-318.
Yamamoto, A. et al., 2010. AEGIS: an advanced lattice physics code for light water reactor analyses. Nucl. Eng. Technol. 42 (5), 500-519.
Yamamoto, A. et al., 2007. Derivation of optimum polar angle quadrature set for the method of characteristics based on approximation error for the Bickley function. J. Nucl. Sci. Technol. 44 (2), 129-136.
Zhu, A. et al., 2015. The implementation and analysis of the MOC and CMFD adjoint capabilities in the 2D-1D code MPACT. Proc. M&C 2015, 19-23.