Double precision rational approximation algorithm for the inverse standard normal first order loss function

Applied Mathematics and Computation 219 (2012) 1375–1382


Steven K. De Schrijver a,b,*, El-Houssaine Aghezzaf a, Hendrik Vanmaele a,b

a Department of Industrial Management, Faculty of Engineering, Ghent University, Technologiepark 903, B-9052 Zwijnaarde, Belgium
b MÖBIUS Business Redesign, Kortrijksesteenweg 152, B-9830 Sint-Martens-Latem, Belgium


Abstract

We present a double precision algorithm based upon rational approximations for the inverse standard normal first order loss function. This function is used frequently in inventory management. No direct approximation or closed formulation exists for the inverse standard normal first order loss function. Calculations are currently based on root-finding methods and intermediate computations of the cumulative normal distribution or tabulations. Results then depend on the accuracy and valid range of that underlying function. We deal with these issues and present a direct, double precision accurate algorithm valid in the full range of double precision floating point numbers. © 2012 Elsevier Inc. All rights reserved.

Keywords: Normal distribution; Repeated integrals; Normal integral; Inventory system; Rational approximation; Loss function

1. Introduction and motivation

Let φ be the standard normal probability density function, Φ the standard normal cumulative distribution and Φ⁰ its complementary function, as respectively defined by (1), (2) and (3a). Then Φ¹ is the standard normal first order loss function, see (4a). The inverse of the standard normal cumulative distribution, Φ⁰inv, is defined by (3b). Wichura [1] provides an approximation algorithm for Φ⁰inv. The focus of this article is on the inverse standard normal first order loss function Φ¹inv, see (4b).

φ(z) = exp(-z²/2) / √(2π),   (1)

Φ(z) = ∫_{-∞}^{z} φ(x) dx,   (2)

Φ⁰(z_{p0}) = ∫_{z_{p0}}^{∞} φ(x) dx = p0,   (3a)

Φ⁰inv(p0) = z_{p0}.   (3b)

* Corresponding author at: Department of Industrial Management, Faculty of Engineering, Ghent University, Technologiepark 903, B-9052 Zwijnaarde, Belgium.
E-mail addresses: [email protected] (S.K. De Schrijver), [email protected] (E.-H. Aghezzaf), [email protected] (H. Vanmaele).
0096-3003/$ - see front matter © 2012 Elsevier Inc. All rights reserved. http://dx.doi.org/10.1016/j.amc.2012.07.011


Φ¹(z_{p1}) = ∫_{z_{p1}}^{∞} (x - z_{p1}) φ(x) dx = ∫_{z_{p1}}^{∞} Φ⁰(x) dx = p1,   (4a)

Φ¹inv(p1) = z_{p1}.   (4b)

1.1. A brief literature review

The cumulative normal distribution and its approximation have been studied in several papers. Waissi and Rossin [2] presented a simple sigmoid function for the approximation of the cumulative standard normal probabilities for -8 ≤ z ≤ 8. Bryc [3] presented two simple formulas for the approximation of the standard normal right tail probabilities. Kiani et al. [4] worked out a formula and series for approximating the normal distribution. This formula has a maximum absolute error of 6.5e-9 and the series have a very high accuracy over the whole range. Linhart [5] compared three C functions to compute the logarithm of the cumulative standard normal distribution, based upon existing algorithms.

In Fig. 1 the functions Φ⁰inv and Φ¹inv are plotted over the p range [0.001, 0.999]. A closed formulation for computing Φ¹ does not exist. Shore [6] provides a simple but very rough approximation for Φ¹ and calls it the loss integral of the normal distribution. Calculating the values of the standard normal first order loss function can be done using the cumulative normal distribution, see (5); see also (1.12) of Withers and McGavin [7]. Although Gallego et al. [8] state that all standard software packages, including MS Excel, have a built-in function for Φ, this holds risks because the Φ accuracy can be poor and the valid range limited, as discussed in West [9]. So currently two intermediate steps are needed to calculate Φ¹inv. First, Φ¹ needs to be calculated, and the accuracy of the result depends on the accuracy and valid range of the underlying cumulative normal function. Second, a root-finding method is necessary on top of this to calculate Φ¹inv. With the approximations developed in this paper we deal with these issues.

Φ¹(z) = φ(z) - z·Φ⁰(z).   (5)
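The two-step computation described above, evaluating Φ¹ through the complementary cumulative normal as in (5) and then inverting by root finding, can be sketched as follows. This is a reference sketch only, not the paper's algorithm; `math.erfc` stands in for a cumulative normal routine, and the bisection bounds are illustrative:

```python
import math

def phi(z):
    """Standard normal pdf, Eq. (1)."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def Phi0(z):
    """Complementary standard normal cdf, Eq. (3a), via erfc."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def Phi1(z):
    """First order loss function, Eq. (5)."""
    return phi(z) - z * Phi0(z)

def Phi1_inv(p, lo=-40.0, hi=40.0):
    """Invert Phi1 by bisection; Phi1 is strictly decreasing in z."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Phi1(mid) > p:
            lo = mid  # root lies to the right of mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The accuracy and valid range of such a reference inversion are limited by the underlying erfc and by the root finder, which is precisely the dependency the direct algorithm of this paper removes.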

1.2. Inventory performance measures

In inventory management a set of performance measures is used to express the quality of the policy. A well known performance measure is the stockout frequency A (the percentage of time there is no stock). In case of normal demand this measure can be expressed using the normal cumulative distribution and its loss function. The stockout frequency expression may vary depending on the replenishment policy. We refer here to the (r, Q) policy, in which an order of size Q is placed as soon as the inventory position falls to or below the reorder point r. In Section 4.9 of Hadley and Whitin [10] the normal demand approximation is already discussed; they make use of (5) to express the considered performance measure, and an additional root-finding method is assumed to compute the inverse function. Zipkin [11] expresses in Section 6.4.3 the stockout frequency as (6), which can be simplified in practice for most cases to (7). So computing the optimal reorder point r requires the function Φ¹inv.

A = (σ/Q) [Φ¹(z(r)) - Φ¹(z(r+Q))],   (6)

A = (σ/Q) Φ¹(z(r)).   (7)
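Solving (7) for the reorder point can be sketched as below. The demand parameters mu and sigma, the helper name `reorder_point`, and the bisection-based inverse are all illustrative assumptions for this sketch; the paper replaces exactly that inner inversion with a direct approximation:

```python
import math

def Phi1(z):
    """First order loss function via the complementary normal cdf, Eq. (5)."""
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return pdf - z * 0.5 * math.erfc(z / math.sqrt(2.0))

def reorder_point(mu, sigma, Q, A):
    """Reorder point r for a target stockout frequency A under the (r, Q)
    policy, using the simplified expression (7): A = (sigma/Q) * Phi1(z(r)).
    The inverse Phi1inv is taken here by bisection (reference only)."""
    target = A * Q / sigma            # Phi1(z) must equal this value
    lo, hi = -40.0, 40.0              # Phi1 is strictly decreasing
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if Phi1(mid) > target else (lo, mid)
    z = 0.5 * (lo + hi)
    return mu + sigma * z             # since z(r) = (r - mu) / sigma
```

For example, with lead-time demand mean 100, standard deviation 20, order quantity 150 and target stockout frequency 5%, the resulting r reproduces the target when substituted back into (7).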

Fig. 1. Φ⁰inv and Φ¹inv over range [0.001, 0.999].


In Axsäter [12], Section 5.3.4, the same stockout frequency performance measure is derived making use of Φ¹, where Φ¹ is denoted by G(x). In Appendix B of [12] he gives a table with Φ¹ values for z ∈ [0, 3]. Silver et al. [13] describe in Section 7.6 Φ¹ as 'a special function of the unit normal variable' and give some graphical aid to determine (6). In Appendix B of [13] a table is given with values for Φ and Φ¹ for z ∈ [0, 4]. As no closed-form expressions exist, simple approximations are sought after to solve the inventory models. Ghalebsaz-Jeddi et al. [14] use UNLLI(k), the unit normal linear loss integral, for Φ¹. In order to solve the investigated multi-item problem they make use of a piecewise quadratic approximation. Axsäter [15] develops a table and polynomial approximations to easily find the reorder point r and the order quantity Q in case of a fill rate S limitation, where S = 1 - A.

Within inventory management, Φ¹inv computations are needed on a large scale, e.g. in inventory problems with a huge number of items. So one can appreciate the direct benefit of having a highly efficient and effective approximation within the range and precision of double precision floating point numbers. Other statistical applications where these and other repeated integrals can be used are listed in Fisher [16]:

- calculation of moments of the truncated normal distribution;
- expression of the non-central t density;
- the posterior distribution of a Poisson variate with chi-squared prior for the squared mean parameter of the Poisson variate.

2. Situating the problem

We first describe a Taylor series function for Φ¹ with managed precision, used as the foundation for the approximation of Φ¹inv. Next we describe the approximation method used.

2.1. Repeated integrals of the univariate normal distribution

Withers and Nadarajah [17] extended the divergent expansion for Mills' ratio to repeated integrals of the univariate normal distribution; Taylor series are also presented. It is shown that the error in approximating by the first k terms of its Laplace-type approximation is bounded in magnitude by the kth term. Φ¹ can be expressed as the following Taylor series:

Φ¹(z) = φ(z) Σ_{j=0}^{∞} (-z)^j t_{j+1} / j!,   (8)

where

t0 = √(π/2),   (9a)

t1 = 1,   (9b)

t_k = 1·3·…·(k-3)(k-1)·t0,  for k even,   (9c)

t_k = 2·4·…·(k-3)(k-1),  for k odd.   (9d)

Eq. (8) is used to calculate very high precision Φ¹ values, with a relative error magnitude of less than 1e-32. Validation sets are created for 10,000, 100,000 and 1,000,000 random z values with this accuracy, to be used to test our approximation algorithm. The necessary number of terms in (8) is limited to k, defined for Φ¹ by condition (10):

abs[ ((-z)^{k+1} t_{k+2} / (k+1)!) / ( Σ_{j=0}^{k} (-z)^j t_{j+1} / j! ) ] ≤ 1e-32.   (10)
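A double precision sketch of series (8) with the coefficients (9a)-(9d) follows; the coefficients are generated by the equivalent recurrence t_k = (k-1)·t_{k-2}, and the stopping rule is in the spirit of (10) but with a tolerance near machine epsilon, since the paper's 1e-32 target requires arbitrary precision arithmetic:

```python
import math

def phi1_taylor(z, rtol=1e-17, max_terms=1000):
    """Phi1(z) from the Taylor series (8), with t_k as in (9a)-(9d)
    generated via the recurrence t_k = (k - 1) * t_{k-2}."""
    t = [math.sqrt(math.pi / 2.0), 1.0]   # t0, t1
    s = 0.0
    coeff = 1.0                            # holds (-z)^j / j!
    for j in range(max_terms):
        if len(t) <= j + 1:                # extend t up to t_{j+1}
            k = len(t)
            t.append((k - 1) * t[k - 2])
        term = coeff * t[j + 1]
        s += term
        coeff *= -z / (j + 1)
        # truncation rule analogous to condition (10), double precision
        if j > 2 and abs(term) <= rtol * abs(s):
            break
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return pdf * s
```

Note that for z < 0 every term of the series is positive, so there is no cancellation; for moderate positive z the terms alternate but still decay, since the ratio of consecutive same-parity terms is z²/(j+1).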

2.2. Rational approximations

Based upon Eq. (8) a rational approximation is generated, making use of the Remez minimax C++ implementation, see Boost [18]. Fraser and Hart [19] already indicated that for computer programs requiring approximations that are as accurate as possible, the Chebyshev approximations are better than others. The Remez algorithm has been developed into a stable method for finding the best polynomial approximations. Barrar and Loeb [20] gave proof of the convergence of the classic Remez algorithm when applied to certain non-linear approximating families. Cody [21] provided a survey of methods for generating rational or polynomial approximations to continuous functions. Dunham [22] investigated the convergence of the Fraser–Hart variant of the Remez algorithm, used to determine the best rational Chebyshev approximation to a continuous function. Litvinov [23] describes several construction methods for rational approximations to functions of one real variable; he focuses on error autocorrection, so that significant errors in the coefficients do not affect the accuracy of the approximation.


Elbarbary et al. [24] construct a restrictive type of Chebyshev rational approximation to approximate the exponential function; this approximation yields more accurate results and exact values at selected points.

3. Approximation, range and accuracy

3.1. Approximation range parameters

Here we list the range parameters used in the approximation algorithm.

r0 = 2.2250738585072014e-308
r1 = 7.96582630953042053
r2 = 1.1875
r3 = 0.5234375
r4 = 0.6640625
r5 = 0.398942280401432702
r6 = 0.398942280401432647
r7 = 0.367692280401432647
r8 = 0.102067280401432647
r9 = 0.296875
r10 = 1.7976931348623157e+308

3.2. Range of p

As we want to cover the range of double precision floating point values of the IEEE 754 standard, see Goldberg [25], the smallest value that can be represented to full precision is 2.2250738585072014e-308 = r0. Lower values are subnormal and thus cannot be represented to full precision. The maximum double value is r10 = 1.7976931348623157e+308. The considered range for our approximation of Φ¹inv is given by (11):

Φ¹inv(p) : p ∈ [r0, r10].   (11)
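These endpoints are exactly the smallest positive normalized double and the largest finite double of IEEE 754, which can be checked directly (Python sketch):

```python
import sys

# r0: smallest positive normalized double; r10: largest finite double
r0 = 2.2250738585072014e-308
r10 = 1.7976931348623157e+308

assert r0 == sys.float_info.min
assert r10 == sys.float_info.max
```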

3.3. Higher precision intermediate rational approximation

First we created sets of rational approximations for Φ¹inv with a very high accuracy, based upon (8) and a bisection root-finding method. The magnitude of the relative error of these intermediate approximations is smaller than 1e-28. This cannot be achieved with double precision, so we used an arbitrary precision floating point library [26].

3.4. Φ⁰inv and Φ¹inv approximation differences

In comparison with the approximation of Φ⁰inv, see [1], Φ¹inv presents two additional difficulties. First, the Φ¹inv root (12a) is not an exact double value, whereas for Φ⁰inv the root is an exact double: 0.5. The Φ¹inv root value is expressed with 32 digits in (12b). The most accurate double representation is (12c), which is greater than the actual root value. The two values with an exact double representation closest to the root are r5 (12c) and r6 (12d), respectively to the right and to the left of the root. These values will be used in ranges close to the root, as we do not want the algorithm to depend on the representation of p, and its accuracy, within the used programming language.

Φ¹(0) = 1/√(2π)  ⇒  Φ¹inv(1/√(2π)) = 0,   (12a)

1/√(2π) = 0.398942280401432677939946059934…,   (12b)

  < 0.398942280401432702 = r5,   (12c)

  > 0.398942280401432647 = r6.   (12d)

The second difference is that Φ⁰inv has a closed formula relation for the range left and right of its root, see (13). Although there is a closed formulation relation for positive and negative values of Φ¹, see (14), this cannot be converted into a closed formula expression for Φ¹inv values left and right of its root. So the range covered by the Φ¹inv approximation should not only cover from r0 to the root, but from r0 to r10.

Φ⁰inv(1 - p) = -Φ⁰inv(p),  p ∈ [0, 1],   (13)

Φ¹(-z_p) = Φ¹(z_p) + z_p.   (14)

Although (14) does not give a closed formulation for values to the right of the root for Φ¹inv, it enables an approximation for p values larger than r1 = 7.96582630953042053. As Φ¹(r1) ≈ 1e-16, (14) can be approximated by (15) in case of double precision calculations. As such, Φ¹inv can be approximated by (16) in double precision for p values larger than r1:

Φ¹(-z_p) = z_p,  when z_p > r1,   (15)

Φ¹inv(p) = -p,  when p > r1.   (16)
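The effect of (15) and (16) can be checked numerically: for z_p above r1, the term Φ¹(z_p) in (14) falls below the relative precision of a double, so Φ¹(-z_p) and z_p agree to machine accuracy. A sketch using the erfc-based reference of Eq. (5):

```python
import math

def Phi1(z):
    """First order loss function, Eq. (5), via erfc."""
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return pdf - z * 0.5 * math.erfc(z / math.sqrt(2.0))

r1 = 7.96582630953042053

for zp in (8.0, 10.0, 20.0):
    # Phi1(-zp) = Phi1(zp) + zp, with Phi1(zp) below 1 ulp of zp here
    assert abs(Phi1(-zp) - zp) / zp < 1e-14
```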


3.5. Algorithm ISN1OLF: Φ¹inv double precision approximation

Here we describe algorithm ISN1OLF: given a value p, it computes z = Φ¹inv(p) with double precision in the range [r0, r10]. The acronym ISN1OLF stands for 'inverse standard normal first order loss function'. The valid range is divided into 8 sub-ranges, 4 to the left of the root and 4 to the right of the root. For p values close to zero we first perform a conversion based upon the square root and the natural logarithm of p.

Algorithm ISN1OLF
1. If p > r1: set z = -p, go to step 10
2. If p ≥ r2: with x = (p - r2), set z = (A.1), go to step 10
3. If p ≥ r3: with x = (p - r3)/r4, set z = (A.2), go to step 10
4. If p ≥ r5: with x = (p - r5)·8, set z = (A.3), go to step 10
5. If p ≥ r7: with x = (r6 - p)·32, set z = (A.4), go to step 10
6. If p ≥ r8: with y = √(r6 - p), x = (r9 - y), set z = (A.5), go to step 10
7. Set y = √(-ln(p))
8. If y ≤ 6: with x = (y - 1.5), set z = (A.6), go to step 10
9. Else: with x = (y - 6), set z = (A.7), go to step 10
10. Φ¹inv(p) = z
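The branch structure of steps 1-9 can be sketched as below. The rational evaluations (A.1)-(A.7) are left as placeholders, returning only the branch label and its transformed argument, since the full coefficient set belongs to the complete implementation:

```python
import math

# range parameters from Section 3.1
r1 = 7.96582630953042053
r2 = 1.1875
r3 = 0.5234375
r4 = 0.6640625
r5 = 0.398942280401432702
r6 = 0.398942280401432647
r7 = 0.367692280401432647
r8 = 0.102067280401432647
r9 = 0.296875

def isn1olf_branch(p):
    """Return the (A.k) branch ISN1OLF would evaluate for p, together
    with its transformed argument (dispatch sketch of steps 1-9 only)."""
    if p > r1:
        return ("direct", -p)              # step 1: z = -p
    if p >= r2:
        return ("A.1", p - r2)             # step 2
    if p >= r3:
        return ("A.2", (p - r3) / r4)      # step 3
    if p >= r5:
        return ("A.3", (p - r5) * 8.0)     # step 4
    if p >= r7:
        return ("A.4", (r6 - p) * 32.0)    # step 5
    if p >= r8:
        y = math.sqrt(r6 - p)
        return ("A.5", r9 - y)             # step 6
    y = math.sqrt(-math.log(p))            # step 7
    if y <= 6.0:
        return ("A.6", y - 1.5)            # step 8
    return ("A.7", y - 6.0)                # step 9
```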

3.6. Algorithm validation

We used four validation sets with respectively 1000, 10,000, 100,000 and 1,000,000 random p points uniformly distributed in the range [1e-100, 8]. These Φ¹inv validation values have a relative error (17) of less than 1e-32. For each of the sets we give the maximum magnitude (18) of the relative error of algorithm ISN1OLF and the root mean square (RMS) (19) of the relative error, see Table 1.

ε_rel = (Φ¹inv(p) - ISN1OLF(p)) / |Φ¹inv(p)|,   (17)

Max(|ε_rel|) = Max(|ε_rel,1|, |ε_rel,2|, …, |ε_rel,n|),   (18)

RMS(ε_rel) = √( (1/n)(ε²_rel,1 + ε²_rel,2 + … + ε²_rel,n) ).   (19)
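The validation metrics (17)-(19) are straightforward to compute; a sketch, where in practice the reference values would come from the high precision validation sets:

```python
import math

def rel_errors(reference, approximation):
    """Relative errors, Eq. (17), between reference and approximation."""
    return [(r - a) / abs(r) for r, a in zip(reference, approximation)]

def max_abs(errors):
    """Maximum error magnitude, Eq. (18)."""
    return max(abs(e) for e in errors)

def rms(errors):
    """Root mean square of the errors, Eq. (19)."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))
```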

In Table 2 we test the values close to p = 0. Again we used four validation sets with respectively 1000, 10,000, 100,000 and 1,000,000 random p points and Φ¹inv validation values that have an error of less than 1e-32, but now the random p values are uniformly distributed on a logarithmic scale within the considered range. We again make sure that the z values can be exactly represented by a double precision number.

3.7. Test data for algorithm

Table 3 gives values that may be used to check whether the algorithm has been correctly implemented.

4. Concluding remarks

We have developed the ISN1OLF algorithm with rational approximations for the inverse standard normal first order loss function. This ISN1OLF function approximation algorithm has double precision accuracy within the full range of double precision floating point numbers. As further research we see the inverse of the second order loss function and also the inverse of the combined function (20). Within inventory optimizations these inverse functions are often used.

[Φ¹(z) - Φ¹(z + z_c)] - c = 0.   (20)

Table 1
Magnitude relative error.

Range          Test points   Max(|ε_rel|)   RMS(ε_rel)
[1e-100, 8]    1000          3.95e-16       6.08e-17
[1e-100, 8]    10,000        4.62e-16       6.05e-17
[1e-100, 8]    100,000       5.77e-16       6.02e-17
[1e-100, 8]    1,000,000     6.33e-16       6.05e-17


Table 2
Magnitude relative error ISN1OLF: p close to 0 on a logarithmic scale.

Range       Test points   Max(|ε_rel|)   RMS(ε_rel)
[r0, r8]    1000          6.30e-16       1.86e-16
[r0, r8]    10,000        7.16e-16       1.85e-16
[r0, r8]    100,000       7.99e-16       1.80e-16
[r0, r8]    1,000,000     9.47e-16       1.86e-16

Table 3
Test data for checking an implementation.

p          z = ISN1OLF(p)
1e-100     21.12967328021652
0.25       0.3448674639990244
1          -0.8994715612537435

Appendix A. Rational functions

A.1. Rational function expressions

Here we list the rational functions used in Algorithm ISN1OLF.

z = -p + (exp(-0.5p²)/p²) · ((((((((((m10·x + m9)·x + m8)·x + m7)·x + m6)·x + m5)·x + m4)·x + m3)·x + m2)·x + m1)·x + m0) / ((((((((((n10·x + n9)·x + n8)·x + n7)·x + n6)·x + n5)·x + n4)·x + n3)·x + n2)·x + n1)·x + 1),   (A.1)

z = -p + (exp(-0.5p²)/√p) · (((((((k7·x + k6)·x + k5)·x + k4)·x + k3)·x + k2)·x + k1)·x + k0) / (((((((l7·x + l6)·x + l5)·x + l4)·x + l3)·x + l2)·x + l1)·x + 1),   (A.2)

z = -((((((i6·x + i5)·x + i4)·x + i3)·x + i2)·x + i1)·x + i0) / (((((j5·x + j4)·x + j3)·x + j2)·x + j1)·x + 1),   (A.3)

z = ((((((g5·x + g4)·x + g3)·x + g2)·x + g1)·x + g0)) / ((((h4·x + h3)·x + h2)·x + h1)·x + 1),   (A.4)

z = y · ((((((((e8·x + e7)·x + e6)·x + e5)·x + e4)·x + e3)·x + e2)·x + e1)·x + e0) / ((((((((f8·x + f7)·x + f6)·x + f5)·x + f4)·x + f3)·x + f2)·x + f1)·x + 1),   (A.5)

z = y · ((((((((c8·x + c7)·x + c6)·x + c5)·x + c4)·x + c3)·x + c2)·x + c1)·x + c0) / ((((((((d8·x + d7)·x + d6)·x + d5)·x + d4)·x + d3)·x + d2)·x + d1)·x + 1),   (A.6)

z = y · ((((((((a8·x + a7)·x + a6)·x + a5)·x + a4)·x + a3)·x + a2)·x + a1)·x + a0) / ((((((((b8·x + b7)·x + b6)·x + b5)·x + b4)·x + b3)·x + b2)·x + b1)·x + 1).   (A.7)
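Each of (A.1)-(A.7) is a ratio of two polynomials evaluated in Horner form, with the denominator's constant term normalized to 1. A generic evaluator for this shape, shown with illustrative coefficients rather than the tabulated ones:

```python
def horner(coeffs, x):
    """Evaluate a polynomial with coefficients ordered from the highest
    degree down to the constant term, as written in (A.1)-(A.7)."""
    acc = 0.0
    for c in coeffs:
        acc = acc * x + c
    return acc

def eval_rational(num, den_tail, x):
    """num: numerator coefficients (highest degree first);
    den_tail: denominator coefficients without the trailing constant 1,
    so the denominator is horner(den_tail, x) * x + 1."""
    return horner(num, x) / (horner(den_tail, x) * x + 1.0)
```

With the tabulated parameters below (including their signs), e.g. (A.4) becomes eval_rational([g5, g4, g3, g2, g1, g0], [h4, h3, h2, h1], x).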

A.2. Rational function parameters

Here we provide the list of parameters used in the rational functions.

a0 = 1.310444679913372, a1 = 1.0496342982777808, a2 = 0.34297377276691108,
a3 = 0.058806303584142083, a4 = 0.0056707052143933359, a5 = 0.00030646561311550538,
a6 = 0.87299508529791606e-5, a7 = 0.11359029676757079e-6, a8 = 0.47715414930209652e-9

b1 = 0.77935982713146056, b2 = 0.24949458983424002, b3 = 0.04218517946986447,
b4 = 0.0040342579952242503, b5 = 0.00021712896462726042, b6 = 0.61753952795739417e-5,
b7 = 0.8032129209966964e-7, b8 = 0.33739809465660943e-9

c0 = 0.58234615354415903, c1 = 2.3954910046771972, c2 = 3.7644259282127282,
c3 = 3.1822730248163338, c4 = 1.5804886582031085, c5 = 0.45927351465091796,
c6 = 0.071752521699595843, c7 = 0.0051073685232919929, c8 = 0.00011544405570833181

d1 = 2.8868777130708606, d2 = 3.7190165916935734, d3 = 2.7459319764867306,
d4 = 1.2389548635486364, d5 = 0.33776812350821943, d6 = 0.051177121304169432,
d7 = 0.0036125038564975782, d8 = 0.81625833768166845e-4

e0 = 3.0018269039209338, e1 = 87.774236056726657, e2 = 989.7575959597692,
e3 = 5425.7139678862877, e4 = 15009.178138029023, e5 = 19803.67505848015,
e6 = 10861.978269562645, e7 = 2360.7711979594225, e8 = 394.32281881668326

f1 = 31.891192048993958, f2 = 399.96189801714692, f3 = 2505.8535715458245,
f4 = 8237.9586584846684, f5 = 13666.751461068495, f6 = 10067.57909732112,
f7 = 2583.8131935382122, f8 = 292.57249970310298

g0 = 0.61175758417034084e-16, g1 = 0.062499999999999994, g2 = 0.0072754785571224347,
g3 = 0.00020727510842806252, g4 = 0.82125690103447695e-7, g5 = 0.21022601430286476e-7

h1 = 0.14134154943904849, h2 = 0.0055971987449862891,
h3 = 0.31892520349077855e-4, h4 = 0.8898381877233207e-6

i0 = 0.49846544058069863e-16, i1 = 0.25000000000000002, i2 = 0.15242142496267809,
i3 = 0.030445417368412181, i4 = 0.0023029650958645681, i5 = 0.68276543115638487e-4,
i6 = 0.12757147423548785e-5

j1 = 0.70942126995107004, j2 = 0.17264183638706689, j3 = 0.016757888840674101,
j4 = 0.00058599116642757624, j5 = 0.76994691214814959e-5

k0 = 0.24488851831921104, k1 = 0.48029713677686631, k2 = 0.18301189125291796,
k3 = 0.026160178329977429, k4 = 0.021184219173857513, k5 = 0.0029987754713519703,
k6 = 0.00072233393146827096, k7 = 0.44262578270966203e-4

l1 = 2.5410054607988286, l2 = 2.0418856609447443, l3 = 0.5529951895300247,
l4 = 0.055678606510953158, l5 = 0.017367680496451343, l6 = 0.0062574676815150778,
l7 = 0.00039215993062993774

m0 = 0.18756233208103297, m1 = 0.31676099852893919, m2 = 0.18050826111441451,
m3 = 0.075146244143085796, m4 = 0.036805415000645036, m5 = 0.0093534945621974929,
m6 = 0.0034409217608655729, m7 = 0.00063248029913634361, m8 = 0.00019426953835475993,
m9 = 0.16897829742340167e-4, m10 = 0.80030048496041983e-5

n1 = 1.1106105568550152, n2 = 0.52698392742028986, n3 = 0.23366691487529792,
n4 = 0.09983456392019021, n5 = 0.031404493662692849, n6 = 0.007561621875344939,
n7 = 0.0020956017866871155, n8 = 0.00047749988216206908, n9 = 0.47477680185174772e-4,
n10 = 0.19898093244659233e-4

References

[1] M.J. Wichura, Algorithm AS 241: the percentage points of the normal distribution, Journal of the Royal Statistical Society Series C (Applied Statistics) 37 (1988) 477–484.
[2] G.R. Waissi, D.F. Rossin, A sigmoid approximation of the standard normal integral, Applied Mathematics and Computation 77 (1996) 91–95.
[3] W. Bryc, A uniform approximation to the right normal tail, Applied Mathematics and Computation 127 (2002) 365–374.
[4] M. Kiani, J. Panaretos, S. Psarakis, M. Saleem, Approximations to the normal distribution function and an extended table for the mean range of the normal variables, Journal of Iranian Statistical Society 7 (2008) 57–72.
[5] J.M. Linhart, Algorithm 885: computing the logarithm of the normal distribution, ACM Transactions on Mathematical Software 35 (2008).
[6] H. Shore, Simple approximations for the inverse cumulative function, the density function and the loss integral of the normal distribution, Journal of the Royal Statistical Society Series C (Applied Statistics) 31 (1982) 108–114.
[7] C.S. Withers, P.N. McGavin, Expressions for the normal distribution and repeated normal integrals, Statistics and Probability Letters 76 (2006) 479–487.
[8] G. Gallego, O. Özer, P. Zipkin, Bounds, heuristics, and approximations for distribution systems, Operations Research 55 (2007) 503–517.
[9] G. West, Better approximations to cumulative normal functions, Wilmott Magazine 9 (2005) 70–76.
[10] G. Hadley, T.M. Whitin, Analysis of Inventory Systems, Prentice Hall, New Jersey, 1963.
[11] P.H. Zipkin, Foundations of Inventory Management, McGraw-Hill, Singapore, 2000.
[12] S. Axsäter, Inventory Control (International Series in Operations Research & Management Science), 2nd ed., Springer, Berlin, 2006.
[13] E. Silver, D. Pyke, R. Peterson, Inventory Management and Production Planning and Scheduling, Wiley, 1998.
[14] B. Ghalebsaz-Jeddi, B.C. Shultes, R. Haji, A multi-product continuous review inventory system with stochastic demand, backorders, and a budget constraint, European Journal of Operational Research 158 (2004) 456–469.
[15] S. Axsäter, A simple procedure for determining order quantities under a fill rate constraint and normally distributed lead-time demand, European Journal of Operational Research 174 (2006) 480–491.
[16] R. Fisher, Introduction to table of Hh functions of Airey, Mathematical Tables 1 (1931) xxvi–xxxvii.
[17] C.S. Withers, S. Nadarajah, Repeated integrals of the univariate normal as a finite series with the remainder in terms of Moran's functions, Statistics (2010) 1–10.
[18] Boost, Minimax approximations and the Remez algorithm, 2011.
[19] W. Fraser, J.F. Hart, On the computation of rational approximations to continuous functions, Communications of the ACM 5 (1962) 401–403.
[20] R.B. Barrar, H.L. Loeb, On the Remez algorithm for non-linear families, Numerische Mathematik 15 (1970) 382–391.
[21] W.J. Cody, A survey of practical rational and polynomial approximation of functions, SIAM Review 12 (1970) 400–423.
[22] C.B. Dunham, Convergence of the Fraser–Hart algorithm for rational Chebyshev approximation, Mathematics of Computation 29 (1975) 1078–1082.
[23] G.L. Litvinov, Approximate construction of rational approximations and the effect of error autocorrection, Russian Journal of Mathematical Physics 1 (1993) 313–352.
[24] E.M. Elbarbary, S.M. Elsayed, I.K. Youssef, A.M.M. Khodier, Restrictive Chebyshev rational approximation and applications to heat-conduction problems, Applied Mathematics and Computation 136 (2003) 395–403.
[25] D. Goldberg, What every computer scientist should know about floating-point arithmetic, ACM Computing Surveys 23 (1991) 5–48.
[26] V. Shoup, NTL: A library for doing number theory, 2011.