A Koksma–Hlawka inequality for general discrepancy systems

A Koksma–Hlawka inequality for general discrepancy systems

Journal of Complexity 31 (2015) 773–797 Contents lists available at ScienceDirect Journal of Complexity journal homepage: www.elsevier.com/locate/jc...

479KB Sizes 1 Downloads 67 Views

Journal of Complexity 31 (2015) 773–797

Contents lists available at ScienceDirect

Journal of Complexity journal homepage: www.elsevier.com/locate/jco

A Koksma–Hlawka inequality for general discrepancy systems✩ Florian Pausinger a,∗ , Anne Marie Svane b a

IST Austria (Institute of Science and Technology Austria), Am Campus 1, 3400 Klosterneuburg, Austria

b

Aarhus University, Aarhus, Denmark

article

info

Article history: Received 26 November 2014 Accepted 1 June 2015 Available online 16 June 2015 Keywords: Harman variation Hardy–Krause variation Koksma–Hlawka theorem Integral geometry

abstract Motivated by recent ideas of Harman (Unif. Distrib. Theory, 2010) we develop a new concept of variation of multivariate functions on a compact Hausdorff space with respect to a collection D of subsets. We prove a general version of the Koksma–Hlawka theorem that holds for this notion of variation and discrepancy with respect to D . As special cases, we obtain Koksma–Hlawka inequalities for classical notions, such as extreme or isotropic discrepancy. For extreme discrepancy, our result coincides with the usual Koksma–Hlawka theorem. We show that the space of functions of bounded D -variation contains important discontinuous functions and is closed under natural algebraic operations. Finally, we illustrate the results on concrete integration problems from integral geometry and stereology. © 2015 Elsevier Inc. All rights reserved.

1. Introduction Let f be a real-valued function on [0, 1]s and let x1 , x2 , . . . , xN be a set of N points in [0, 1]s . The classical Koksma–Hlawka Theorem introduces a general principle to quantify and bound the

✩ F.P. is supported by the Graduate School of IST Austria, A.M.S is supported by the Centre for Stochastic Geometry and Advanced Bioimaging funded by a grant from the Villum Foundation. ∗ Corresponding author. E-mail addresses: [email protected] (F. Pausinger), [email protected] (A.M. Svane).

http://dx.doi.org/10.1016/j.jco.2015.06.002 0885-064X/© 2015 Elsevier Inc. All rights reserved.

774

F. Pausinger, A.M. Svane / Journal of Complexity 31 (2015) 773–797

approximation error

   N 1     f (xj ) − f (x) dx   N j =1  s [0,1]

(1)

by the product of two independent factors: one factor depends only on the function f (the variation of f ) and the other factor depends only on the discrete point set (the discrepancy of x1 , x2 , . . . , xN ). The discrepancy measures the irregularity of the distribution of the first N elements of an infinite sequence. The Koksma–Hlawka inequality is used for numerical integration. If f has bounded variation and X = (xj )j≥1 is an infinite sequence such that the discrepancy of the first N elements goes to zero when N → ∞, then the approximation error (1) goes to zero when N → 0. Fast convergence can be obtained by choosing a sequence with low discrepancy. The inequality can also be used to quantify the integration error explicitly. However, if the integrand f does not have bounded variation, then the Koksma–Hlawka inequality is vacuous. Thus it becomes an important problem to identify classes of functions with bounded variation. For s = 1, the total variation of f is the natural definition of variation. This leads to Koksma’s inequality; see [15] and [17, Chapter 2, Theorem 5.1]. However, a suitable analogue for multivariate functions in the context of numerical integration is not obvious. There are several classical and well-studied concepts of variation of multivariate functions; see [1,7,21]. The multivariate case of the approximation theorem was originally shown by Hlawka [14] for the notions of Hardy–Krause variation and extreme discrepancy. Recently, Brandolini, Colzani, Gigante and Travaglini [5,6] generalised the classical Koksma–Hlawka Theorem. They replaced the integration domain [0, 1]s by an arbitrary bounded Borel subset of Rs and proved the inequality for piecewise smooth integrands. However, a shortcoming of these different Koksma–Hlawka inequalities is that functions of practical interest might have unbounded Hardy–Krause variation (e.g., the indicator function of a ball or a tilted box). Harman [13] recently proved a Koksma–Hlawka inequality involving a different type of discrepancy and a new notion of variation. Harman’s variation remains finite for certain discontinuous functions with unbounded variation in the sense of Hardy and Krause. This result was applied to an imaging problem and led to the approximation of certain Crofton-type integrals in R3 ; [10]. To the best of our knowledge, nothing is known about the space of functions of bounded Harman variation. We present some examples illustrating the lack of natural algebraic properties such as additive and multiplicative closure. Motivated by applications in stereology, where discontinuous step-functions with infinite Hardy–Krause variation are abundant, we build on the ideas of Harman to define a more general notion of variation for which a Koksma–Hlawka type inequality holds. The price for this is that the discrepancy may be large. In detail, we (i) develop a general framework (Definition 3.2) of variations extending the notion of Harman variation; (ii) prove closure properties for the induced spaces of functions of bounded variation (Section 3.3); (iii) study properties of functions with bounded variation and relate their variation to Harman’s notion of variation (Sections 3.5 and 3.6); (iv) use this framework to obtain a general Koksma–Hlawka type result (Theorem 4.3); (v) study the relation of our new family of variations to classical notions (Section 5). 
More precisely, we show that our definition generalises the classical Hardy–Krause variation (Theorem 5.6); (vi) apply the results to integral-geometric integrals that are of interest in stereology (Section 6). The applications in Section 6 can be seen as an affirmative answer to the question of Harman at the very end of [13]; namely, whether his Koksma–Hlawka inequality has any practical significance. 2. Harman variation For the reader’s convenience, we first recall (and slightly extend) Harman’s original definition of variation. We list properties and discuss shortcomings.

F. Pausinger, A.M. Svane / Journal of Complexity 31 (2015) 773–797

775

2.1. Harman’s version of the Koksma–Hlawka inequality Let K be the space of convex subsets of [0, 1]s and let 1A denote the characteristic function of a set A. Furthermore, let λ be the usual Lebesgue measure on [0, 1]s . For an infinite point sequence X = (xj )j≥1 in [0, 1]s , we define the isotropic discrepancy by

  N   1    DiscK (X , N ) = sup λ(A) − 1A (xj ) .   N A∈K j =1

(2)

Definition 2.1. A set A ⊆ [0, 1]s is an algebraic sum of convex sets if there exist A1 , . . . , Am ∈ K and 1 ≤ n ≤ m such that 1A =

n 

m 

1A i −

i =1

1A i .

i=n+1

Let A be the collection of algebraic sums of convex sets. Definition 2.2. The Harman complexity h(A) of a set A ∈ A with A ̸= [0, 1]s and A ̸= ∅ is the minimal number m such that there exist A1 , . . . , Am and 1 ≤ n ≤ m satisfying 1A =

n 

m 

1A i −

i =1

1A i ,

(3)

i=n+1

where either Ai ∈ K or [0, 1]s \Ai ∈ K . Moreover, we define h([0, 1]s ) = h(∅) = 0. Let f : [0, 1]s → R be a bounded, measurable function such that all superlevel sets f −1 [α, ∞) are algebraic sums of convex sets. Write hf (α) = h(f −1 [α, ∞)). If the function α → hf (α) is Riemann integrable over [inf f , sup f ], then the Harman variation of f is defined as H (f ) =



sup f

hf (α)dα =





hf (α)dα.

(4)

−∞

inf f

Otherwise we set H (f ) = ∞. The set of functions with finite Harman variation is denoted by H . Using this terminology, Harman [13] obtains the following version of the Koksma–Hlawka inequality: Theorem 2.3 (Harman). For f ∈ H and X = (xj )j≥1 a point sequence in [0, 1]s ,

   

[0,1]s

f (x)dx −

N 1 

N j =1

  f (xj ) ≤ H (f )DiscK (X , N ).

Remark 2.4. In Harman’s original definition of h, all the sets Ai in (3) are supposed to belong to K . However, on observing that

    N N      1  s λ(A) − 1    s 1 ( x ) = λ([ 0 , 1 ] \ A ) − 1 ( x ) A i  [0,1] \A i ,   N i=1

N i=1

Harman’s proof of the Koksma–Hlawka inequality carries over without changing a word. Our definition will, in general, give rise to a smaller variation. Moreover, it treats convex and concave functions the same way. In both cases, H (f ) = sup f − inf f , while this only holds for f concave with Harman’s original definition.

776

F. Pausinger, A.M. Svane / Journal of Complexity 31 (2015) 773–797

2.2. Properties The advantage of Harman variation is that it sometimes provides a Koksma–Hlawka inequality for functions with infinite Hardy–Krause variation. For instance, all characteristic functions 1A with A ∈ K have Harman variation 1, while the Hardy–Krause variation is only finite if A is an axis-parallel box; see Section 5. We say that a function f is quasi-convex if all sublevel-sets f −1 (−∞, α] are convex. If −f is quasi-convex, or equivalently, if all superlevel sets are convex, then f is said to be quasi-concave. In particular, a convex (concave) function is quasi-convex (-concave). The following is immediate from the definition: Proposition 2.5. If f is quasi-convex or -concave, then H (f ) = sup f − inf f . Now, let S ⊆ H be the vector space of simple functions f =

m 

αi 1Ai ,

i=1

where αi ∈ R, Ai ∈ K , and m ∈ N. The following property of functions with bounded Harman variation follows from a standard measure theoretical argument. Lemma 2.6. If f ∈ H , then there exists a uniformly converging sequence of simple functions fn ∈ H such that f = limn→∞ fn and H (f ) = limn→∞ H (fn ). Proof. One may take fn = inf f +

n  δ j =1

n

1f −1 [inf f +j δ ,∞) , n

where δ = sup f − inf f . Then |fn − f |∞ ≤ nδ , where | · |∞ denotes the supremum norm. Moreover, H (fn ) =

n δ

n j =1

 hf

inf f + j

δ n



,

which is a Riemann right sum for H (f ).



2.3. Shortcomings Intuitively, we would expect from a measure of variation that the space of functions with bounded variation is closed under addition and multiplication. While this is true for simple functions of bounded Harman variation, the following examples show that it does not hold in general. Example 2.7. Choose two monotonically decreasing sequences an and bn that converge to 0 such that an > bn for odd n and an < bn for even n. Define the functions f , g : [0, 1] → R with f (0) = 0 and f (x) = an for x ∈ (2−n−1 , 2−n ], and g (0) = 0 and g (x) = −bn for x ∈ (2−n−1 , 2−n ]. All superlevel sets of f and g are intervals and hence both functions have finite Harman variation. However, (f + g )−1 [0, ∞) cannot be written as an algebraic sum of finitely many intervals, so H (f + g ) is not finite. Example 2.8. Turning to multiplicativity, we take the exponentials F = ef and G = eg of the functions in Example 2.7. These functions again have bounded Harman variation, while (F · G)−1 [1, ∞) is not the algebraic sum of finitely many intervals. Furthermore, the inverse of Lemma 2.6 does not hold in general. That is, the limit of a uniformly converging sequence of simple functions with bounded Harman variation might not have bounded Harman variation.

F. Pausinger, A.M. Svane / Journal of Complexity 31 (2015) 773–797

777

Example 2.9. Look again at the function (f + g ) from Example 2.7. Setting fn = (f + g )1[2−n−1 ,1] , we obtain a uniformly convergent sequence of simple functions with bounded Harman variation. However, the limit is (f + g ), which is a bounded function with infinite Harman variation. 3. A new family of variations Recent applications [10] show that the notion of Harman variation is interesting and useful. Unfortunately, it is not very flexible, i.e. the space of functions with bounded Harman variation is not closed under simple operations and it only applies to functions whose superlevel sets are algebraic sums of convex sets. For this reason we propose a new concept of variation. Our notion is not as intuitive as Harman’s original idea. However, its properties and its generality may compensate for that. We begin by defining the variation of simple functions and turn then to general functions. Next, we study the closure properties of the corresponding function spaces. This is followed by bounds on twice differentiable functions. We close this section by relating our concept to a straightforward generalisation of Harman variation. 3.1. Simple functions Generalising the notation of the previous section, let D denote an arbitrary family of measurable subsets of [0, 1]s with ∅, [0, 1]s ∈ D . Let S (D ) denote the corresponding vector space of simple functions f =

m 

αi 1Ai ,

i =1

where αi ∈ R, Ai ∈ D , and m ∈ N. Naturally, the representation of f in this way is not unique. Furthermore, we define algebraic sums of sets in D in the same way as in the convex case and denote the collection of algebraic sums of sets in D as A(D ). With this, the Harman complexity hD (A) of an algebraic sum A ∈ A(D ) is defined as in the convex case and satisfies hD (A) = hD ([0, 1]s \A). In particular, for A ∈ D , the Harman complexity is simply given by hD (A) =



0 1

A = ∅ or A = [0, 1]s , otherwise.

If D is closed under finite intersections, then A(D ) is closed under finite unions and intersections; i.e. A(D ) is a set algebra. In this case, if A, B ∈ A(D )\{[0, 1]s }, then hD (A ∩ B) ≤ 3hD (A)hD (B).

(5)

Examples of families D of (convex) sets of particular interest are: the set R of all axis-aligned hyperrectangles; the set R∗ of all axis-aligned hyperrectangles anchored at the origin; the set B of all s-dimensional balls in [0, 1]s ; and of course K . We define discrepancy with respect to D as in (2) but with the supremum taken over all A ∈ D rather than K . We have the following Koksma–Hlawka like inequality: Proposition 3.1. For A ∈ D and an arbitrary point sequence X = (xj )j≥1 in [0, 1]s ,

   

[0,1]s

1A (x)dx −

N 1 

N j =1

 

1A (xj ) ≤ hD (A)DiscD (X , N ).

Proof. This is immediate since the left hand side is simply

  N    λ(A) − 1 1A (xj ).  N j =1

In particular, this vanishes for A = ∅ and A = [0, 1]s .



778

F. Pausinger, A.M. Svane / Journal of Complexity 31 (2015) 773–797

Next, for f ∈ S (D ), we define the following preliminary version of D -variation:

 m

VS ,D (f ) := inf

i =1

  m   |αi |hD (Ai )f = αi 1Ai , αi ∈ R, Ai ∈ D . i=1

The reason for this definition is that, by the triangle inequality, the following Koksma–Hlawka inequality holds for f ∈ S (D ):

   

[0,1]s

f (x)dx −

N 1 

N j =1

  f (xj ) ≤ VS ,D (f )DiscD (X , N ).

(6)

We list the following properties: (i) The inequality, sup f − inf f ≤ VS ,D (f ), holds for all f ∈ S (D ). (ii) VS ,D has the properties of a semi-norm: VS ,D (f1 + f2 ) ≤ VS ,D (f1 ) + VS ,D (f2 ), VS ,D (α f ) = |α|VS ,D (f ), VS ,D (f ) = 0

(7)

if and only if f is constant.

(iii) For D = K and A ∈ K , hK (A) = VS ,D (1A ) = H (1A ) = VS ,D (1[0,1]s \A ) = H (1[0,1]s \A ). (iv) If f ∈ S (K ), then VS ,K (f ) ≤ H (f ) < ∞. Note that the first inequality in (iv) can be strict! For instance, if f = 1B2 + 1B2 +e1 , where B2 is the unit

ball in R2 and e1 = (1, 0), then H (f ) = 3 and VS ,K (f ) = 2. 3.2. General functions

Let V∞ (D ) be the collection of all measurable functions f : [0, 1]s → R for which there exists a sequence of fi ∈ S (D ) that converges uniformly to f . Definition 3.2. We define the D -variation of f ∈ V∞ (D ) as VD (f ) = inf { lim inf VS ,D (fi ) | fi ∈ S (D ), lim |f − fi |∞ = 0 }. i

i

We define VD (f ) = ∞ when f ̸∈ V∞ (D ). The space of functions of bounded D -variation is denoted by

V (D ) = {f ∈ V∞ (D ) | VD (f ) < ∞}. We note for later that there always exists a sequence of fi ∈ S (D ) converging uniformly to f with VD (f ) = limi→∞ VS ,D (fi ). Proposition 3.3. All functions in V∞ (D ) are bounded and sup f − inf f ≤ VD (f ). Proof. The first statement follows because for all f ∈ V∞ (D ), there is an f0 ∈ S (D ) such that |f0 − f |∞ < 1 and f0 is bounded. The second statement follows from (i) since |f − fi |∞ ≤ ε implies | sup f − sup fi |∞ ≤ ε .  Example 3.4. To illustrate Definition 3.2, we recall Example 2.7 in which it was shown that the set of functions with bounded Harman variation is not closed under addition. We approximate f and g by fi =

i  (an − an+1 )1[2−n−1 ,1] , n =0

gi =

i  (bn − bn+1 )1[2−n−1 ,1] , n=0

respectively. Then VS ,K (fi ) = a0 − ai+1 < H (fi ) = a0 and VS ,K (gi ) = b0 − bi+1 < H (gi ) = b0 . As f + g is approximated by fi + gi , the relations (7) show that VK (f + g ) ≤ lim infi VS ,K (fi + gi ) ≤ a0 + b0 . So, f + g has finite K -variation.

F. Pausinger, A.M. Svane / Journal of Complexity 31 (2015) 773–797

779

3.3. Algebraic structure of V∞ (D ) and V (D ) Having defined a new notion of variation, we turn to study various properties of the function spaces

V∞ (D ) and V (D ). The following two propositions follow from the corresponding properties in S (D ). Proposition 3.5. V∞ (D ) and V (D ) are vector spaces with VD (f + g ) ≤ VD (f ) + VD (g ), VD (λf ) = |λ|VD (f ), VD (f ) = 0 if and only if f is constant . In particular, VD defines a semi-norm on V (D ). Proposition 3.6. V∞ (D ) is closed under limits in the supremum-norm. We have the following lower semi-continuity: if |f − fi |∞ → 0, then VD (f ) ≤ lim infi VD (fi ). Theorem 3.7. If D is closed under intersection, then V (D ) is closed under multiplication. In fact, VD (fg ) ≤ 3VD (f )VD (g ) + inf |f |VD (g ) + inf |g |VD (f ). Proof. Let fν ∈ S (D ) for ν = 1, 2 be two simple functions with m(ν)

fν =



αiν 1Aνi + cν 1[0,1]s ,

i =1

  m (ν)    ν  VS ,D (fν ) − |αi | < ε,  i=1

ν

where Ai ̸= ∅, [0, 1]s . Note that m(ν)

|cν | ≤



|αiν | + inf |fν |.

i=1

We have that f1 f2 =

m(1)  m(2)  i=1 j=1

αi1 αj2 1A1 ∩A2 + c1 i

j

m(2) 

αj2 1A2 + c2

j=1

j

m(1)  i=1

αi1 1A1 + c1 c2 1[0,1]s . i

Hence, VS ,D (f1 f2 ) ≤ (VS ,D (f1 ) + ε)(VS ,D (f2 ) + ε) + |c1 |(VS ,D (f2 ) + ε) + |c2 |(VS ,D (f1 ) + ε). Letting ε → 0 and recalling (5), we find that VS ,D (f1 f2 ) ≤ 3VS ,D (f1 )VS ,D (f2 ) + inf |f1 |VS ,D (f2 ) + inf |f2 |VS ,D (f1 ).

(8)

Given f , g ∈ V (D ), we may choose fi , gi ∈ S (D ) with fi → f , gi → g such that VS ,D (fi ) → VD (f ) and VS ,D (gi ) → VD (g ). Then (8) yields VD (fg ) ≤ lim inf VS ,D (fi gi ) ≤ 3VD (f )VD (g ) + inf |f |VD (g ) + inf |g |VD (f ).  i

Proposition 3.5 shows that VD defines a norm on the quotient V (D ) = V (D )/C where C ∼ = R is the space of constant functions. In fact, we have: Proposition 3.8. V (D ) equipped with the norm VD is a Banach space.

780

F. Pausinger, A.M. Svane / Journal of Complexity 31 (2015) 773–797

Proof. Let ([fi ])i≥1 be a Cauchy sequence in V (D ). Then VD ([fi − fj ]) < ε implies that sup(fi − fj ) − inf(fi − fj ) < ε. Therefore, there is a representative fji of [fj ] such that |fi − fji |∞ < ε . Choose a sequence in such that VD ([fin − fin +k ]) <

1 2n

for all k ≥ 0. Choose a representative for f0 .

Then choose representatives for fin inductively such that |fin−1 − fin |∞ < 2n1−1 . Then (fin )n≥1 is a Cauchy sequence with respect to the supremum norm and hence converges uniformly to some f ∈ V∞ (D ). The semi-continuity yields that VD ([f − fi ]) ≤ lim infn VD ([fin − fi ]), which is smaller than some given ε > 0 if only i is sufficiently large. In particular, f ∈ V (D ) and [fi ] → [f ] in the VD -norm.  3.4. Functions of interest contained in V∞ (D ) One drawback of D -variation when compared to Hardy–Krause variation is that it only has a nontrivial definition for functions in V∞ (D ). First, we give a condition to ensure that V∞ (D ) contains all continuous functions. This applies to R, R∗ , and K . Proposition 3.9. If D is closed under intersections and for every δ > 0 and every x ∈ [0, 1]s there is a neighbourhood Dx,δ ∈ A(D ) of x having diameter less than δ , then V∞ (D ) contains all continuous functions on [0, 1]s . Proof. Let f be a continuous function on [0, 1]s and let ε > 0 be given. Moreover, let f1 = inf f +

ε n

2

1f −1 [inf f +n ε ,∞) . 2

Then |f − f1 |∞ ≤ 2ε . For each n, choose δn such that 0 < δn < d(f −1 [inf f + nε/2, ∞), f −1 (inf f + (n − 1)ε/2)), where d is the minimum distance between two disjoint compact sets. Choose a finite cover U1,n , . . . , UNn ,n of f −1 [inf f + n 2ε , ∞) where all Ui,n belong to A(D ) and have diameter less than δn . Then, for Un =

Nn

i=1

Ui,n , the function

f2 = inf f +

ε n

2

1Un

is a simple function with |f − f2 |∞ ≤ ε .



Unfortunately, V∞ (D ) only contains very few characteristic functions: Proposition 3.10. If 1A ∈ V∞ (D ) and D is stable under intersection, then A ∈ A(D ). Proof. If 1A ∈ V∞ (D ), then there is a sequence of fi ∈ S (D ) with fi → 1A uniformly. Choosing fi with |fi − 1A | < 21 shows that A = fi−1 [ 12 , ∞), which is an element of A(D ).  3.5. Some classes of functions of bounded K -variation It is difficult to compute the D -variation of a given function explicitly. Therefore, we provide bounds on the variation for various classes of functions in the special case D = K . The following is immediate from Proposition 2.5: Proposition 3.11. All bounded quasi-convex and quasi-concave functions belong to V (K ). Such functions satisfy VK (f ) = H (f ) = sup f − inf f .

F. Pausinger, A.M. Svane / Journal of Complexity 31 (2015) 773–797

781

Theorem 3.12. All C 2 functions belong to V (K ). We have the following bound: VK (f ) ≤ sup f − inf f +

s M 16

(f ),

where M (f ) = sup{∥ Hess(f , x)∥ | x ∈ [0, 1]s }, Hess(f , x) is the Hessian matrix of f at x, and ∥ · ∥ is the operator norm. Proof. If f is C 2 , then it can be written as a difference of two convex functions f = f1 − f2 ; namely, one may take f1 (x) = f2 (x) =

1 2 1 2

(f (x) + 12 M (f )|x − c |2 ), (−f (x) + 12 M (f )|x − c |2 ),

where c = ( 21 , . . . , 12 ). Since Hess(f1 , x) = Hess(f2 , x) =

1 2 1 2

(Hess(f , x) + M (f )I ), (− Hess(f , x) + M (f )I )

are positive semi-definite, it follows that f1 and f2 are convex. Moreover, VK (f ) ≤ VK (f1 ) + VK (f2 ) ≤ sup f − inf f + by Proposition 3.11.

s M 16

(f )



Proposition 3.13. If f ∈ V∞ (K ), then the set of discontinuity points of f is at most (s − 1)-dimensional. Proof. If fi → f uniformly and f is discontinuous at x, then also fi is discontinuous at x for i sufficiently   i j j large. Since fi = j,i ∂ Ai . The claim follows because each ∂ Ai j αj 1Ai , we see that x must belong to j

has dimension at most s − 1.



3.6. Generalised Harman variation In this section, we generalise the notion of Harman variation from Section 2 and study its relation to D -variation. Let f : [0, 1]s → R be a bounded, measurable function. Define hD ,f (α) =

hD (f −1 [α, ∞))





if f −1 [α, ∞) ∈ A(D ), otherwise.

If this function is Riemann integrable over [inf f , sup f ], then we define the Harman D -variation as in (4) for the convex case. However, as we have seen, this definition only applies to a very small class of function. Therefore we extend it and define the generalised Harman D -variation to be HD (f ) =



sup f

hD ,f (α) dα, inf f

where



is the lower Darboux integral (for a definition see e.g. [22]). It is defined for any function,

but it may well be infinite. This definition agrees with the above Harman D -variation whenever the latter is defined. Let H (D ) be the set of functions with finite generalised Harman D -variation. The generalised definition is more flexible. For instance, it may still be finite if hD ,f (α) is only undefined in a single point: Example 3.14. Consider the function with f (x) = (−2)−n for x ∈ (2−n−1 , 2−n ]. Then HK (f ) =  ∞ −n < ∞, while the ordinary Harman K -variation was infinite. n=0 2 The generalised Harman D -variation can be used to bound the D -variation:

782

F. Pausinger, A.M. Svane / Journal of Complexity 31 (2015) 773–797

Theorem 3.15. For any f ∈ V∞ (D ), VD (f ) ≤ HD (f ). In particular, H (D ) ⊆ V (D ). Proof. If HD (f ) is infinite, then the theorem is trivial. Let inf f = t0 < · · · < tN = sup f be a partition with |ti − ti+1 | < 1n . Let mi = inf{hf (α) | α ∈ [ti , ti+1 ]}. Note that hD ,f (α) is integer valued (or infinite), so there is an αi ∈ [ti , ti+1 ] with hD ,f (αi ) = mi . Let fn = inf f +

N −1  (ti+1 − ti )1f −1 [αi ,∞) . i=0

Then |fn − f |∞ ≤ VS ,D (fn ) ≤

2 . n

If, moreover, all mi are finite, then fn ∈ S (D ) with

N −1 

mi (ti+1 − ti ).

i=0

Choosing partitions such that the right hand side converges to



hD ,f (α)dα yields that

VD (f ) ≤ lim inf VS ,D (fn ) ≤ HD (f ).  n

In a next step, we state results about composition of functions in HD yielding a useful corollary. Theorem 3.16. (i) Let f ∈ H (D ). Let g : [inf f , sup f ] → R be strictly increasing and C 1 . Then g ◦ f ∈ H (D ) with inf g ′ HD (f ) ≤ HD (g ◦ f ) ≤ sup g ′ HD (f ). (ii) Let f ∈ H (D ) and let g : [inf f , sup f ] → R be C 1 . Then g ◦ f ∈ V (D ) with VD (g ◦ f ) ≤ (0 ∨ sup g ′ + 0 ∨ sup(−g ′ ))HD (f ). (iii) Let f ∈ H (D ) and let g : [inf f , sup f ] → R be Lipschitz with Lipschitz constant L. Then g ◦ f ∈ V (D ) and VD (g ◦ f ) ≤ 2L HD (f ). Proof. To prove the first claim, observe that g ◦ f (x) ≥ α if and only if f (x) ≥ g −1 (α). Hence, HD (g ◦ f ) =



g (sup f )

hD ,f (g −1 (α))dα

g (inf f )

= sup

N  (ti+1 − ti ) i =1

inf

α∈[ti ,ti+1 ]

N  = sup (g (αi+1 ) − g (αi )) i =1

≤ sup g ′



hD ,f (g −1 (α))

inf

α∈[αi ,αi+1 ]

hD ,f (α)

sup f

hD ,f (α)dα. inf f

Here, ti = g (αi ) and the supremum is over all partitions t0 < · · · < tN of [g (inf f ), g (sup f )] or, equivalently, all partitions α0 < · · · < αN of [inf f , sup f ]. The last inequality uses the mean value theorem. The other inequality is similar.

F. Pausinger, A.M. Svane / Journal of Complexity 31 (2015) 773–797

783

The second statement follows by writing g = g1 + g2 , where g1 =



1 2

g ( x) +

x





|g (t )|dt + ε x, ′

0

g2 =

1 2



g (x) −

x





|g (t )|dt − ε x. ′

0

The last statement comes from approximating g by a sequence of C 1 functions with derivatives bounded by L.  Corollary 3.17. If f ∈ H (D ), then ef ∈ H (D ). If, in addition, inf f > 0 and r ∈ R, then f r , log f ∈ H (D ). In particular, 1/f ∈ H (D ). 4. A general Koksma–Hlawka theorem The main motivation for the definition of V∞ (D ) is the general Koksma–Hlawka inequality that we present in this section. Its corollaries show how to incorporate different well-studied notions of discrepancy in a natural way into our framework. 4.1. Integration over the unit cube We state a first version of the theorem for functions defined over the unit cube [0, 1]s . Theorem 4.1. For f ∈ V∞ (D ) and X = (xj )j≥1 a point sequence in [0, 1]s ,

   

[0,1]s

f (x)dx −

N 1 

N j =1

 

f (xj ) ≤ VD (f )DiscD (X , N ).

Proof. Let fi → f uniformly and fi ∈ S (D ). Then Eq. (6) yields

   

[0,1]s

f (x)dx −

  ≤ 

[0,1]s

N 1 

N j =1

  f (xj )

(f − fi )(x)dx −

N 1 

N j =1

  (f − fi )(xj ) + VS ,D (fi )DiscD (X , N ).

Letting i → ∞, we find that

   

[0,1]s

f (x)dx −

N 1 

N j =1

 

f (xj ) ≤ lim inf VS ,D (fi )DiscD (X , N ). i



The theorem holds, of course, for any family D of measurable sets for which we can define the D -variation and for any sequence of points X = (xj )j≥1 in [0, 1]s . However, in the context of numerical integration, it is only natural to require X to be a uniformly distributed sequence, i.e. lim

N →∞

N 1 

N j =1

f (xj ) =

 [0,1]s

f (x) dx

holds for all real-valued continuous functions on [0, 1]s , and to require D to be a discrepancy system, i.e. lim DiscD (X , N ) = 0

N →∞

holds if and only if the sequence X is uniformly distributed; see [9]. The families K , R, R∗ , and B are all known to be discrepancy systems and are thus of particular interest.

784

F. Pausinger, A.M. Svane / Journal of Complexity 31 (2015) 773–797

The discrepancy system R of axis aligned boxes is especially well-studied. A uniformly distributed sequence is called a low discrepancy sequence if its discrepancy with respect to R is of order O ((log N )s /N ). For examples of such sequences we refer to the monographs [8,9]. The order O ((log N )s /N ) is conjectured to be best possible. This notion gives the following corollary. Corollary 4.2. Let (xj )j≥1 be a low discrepancy sequence of points in [0, 1]s . Let f ∈ V∞ (R). Then

   

[0,1]s

f (x)dx −

N 1 

 

f (xj ) ≤ c

N j =1

(log N )s VR (f ) N

for some constant c independent of N (and f ). An analogous result holds for R∗ . To turn to the isotropic discrepancy, it is important to note that Schmidt [23] proved the lower bound DiscK (X , N ) ≥ c

1 N 2/(s+1)

,

where the constant c > 0 depends only on the dimension s. The proof of this result is elegant and short, and the bound is optimal despite a logarithmic factor in the numerator. Moreover, Stute [26] showed that for s = 3, almost all sequences satisfy DiscK (X , N ) ≤ cN −1/2 (log N )3/2 and for s ≥ 4, almost all sequences satisfy DiscK (X , N ) ≤ cN −2/(s+1) (log N )2/(s+1) . Later, Beck [4] also settled the two-dimensional case by showing that there exists a sequence with DiscK (X , N ) ≤ cN −2/3 (log N )4 . For an overview of discrepancy systems and corresponding upper bounds on the discrepancy, we refer to the book of Matou˘sek [20]. 4.2. Integration over general compact spaces There is nothing special about the integration domain [0, 1]s in the definition of D -variation and the proof of the Koksma–Hlawka inequality. The concepts of uniform distribution theory can be extended to compact spaces. Therefore, let X be a compact Hausdorff space and µ a positive regular normalised Borel measure on X. A sequence X = (xj )j≥1 is called uniformly distributed with respect to µ if lim

N →∞

N 1 

N j=1

f ( xj ) =



f dµ X

holds for all continuous functions f : X → R. A Borel set D ⊆ X is called a µ-continuity set if µ(∂ D) = 0, where ∂ D denotes the boundary of D. Applying [17, Chapter 3, Theorem 1.2] leads to the following definition. A system D of µ-continuity sets of X is a (general) discrepancy system if

  N   1    1D (xj ) = 0 lim sup µ(D) − N →∞ D∈D   N j =1 holds if and only if X is µ-uniformly distributed. For a discrepancy system D , the supremum

  N   1    DiscD (X , N , X) = sup µ(D) − 1D (xj )   N D∈D j=1 is the discrepancy of X with respect to D . The definition of D -variation can be carried over verbatim. To emphasise in which space X we are working, we write V∞ (D , X) instead of V∞ (D ) and obtain the following Koksma–Hlawka inequality: Theorem 4.3. For f ∈ V∞ (D , X),

  N     f (x)µ(dx) − 1 f (xj ) ≤ VD (f )DiscD (X , N , X).  N X

j =1

F. Pausinger, A.M. Svane / Journal of Complexity 31 (2015) 773–797

785

All the properties of D -variation shown in Sections 3.1–3.4 and 3.6 hold in this general set-up except Proposition 3.9, where X must be a metric space. In order to apply the Koksma–Hlawka inequality, one needs finite D -variation. The following lemma shows that it may be necessary to choose a large D in order to obtain finite D -variation. The price for this is a larger discrepancy. Lemma 4.4. Let D1 ⊆ D2 be two collections of subsets of X. Then VD2 (f ) ≤ VD1 (f ), DiscD2 (X , N , X) ≥ DiscD1 (X , N , X). The next lemma describes the variation of composite functions: Lemma 4.5. Let X1 , X2 be two spaces and D1 and D2 be collections of subsets of X1 and X2 , respectively. Let f : X1 → X2 be a map satisfying f −1 (D2 ) ⊆ D1 . If g ∈ V∞ (D2 ), then g ◦ f ∈ V∞ (D1 ) and VD1 (g ◦ f ) ≤ VD2 (g ). The proofs of both lemmas are immediate from the construction of D -variation. Remark 4.6. When speaking about discrepancy with respect to a non-normalised finite measure µ, we shall always mean discrepancy of its normalised version µ0 . A Koksma–Hlawka inequality for integration with respect to µ is then obtained from the inequality for µ0 by multiplying with the total measure of µ on both sides. 4.3. Integration over the sphere In general, not much is known about discrepancies in abstract spaces. One exception is the unit sphere Ss−1 = {x ∈ Rs | ∥x∥ = 1} ⊆ Rs . Since we consider integrals over spheres in Section 6, we introduce the most important discrepancy system for Ss−1 . We consider the normalised surface area measure σs−1 on Ss−1 . For x ∈ Ss−1 , we let C (x, r ) = {z ∈ Ss−1 | x · z ≥ r } be a spherical cap. It follows from [9, Proposition 2.5 and 2.6] that the set of all spherical caps

SC = {C (x, r ) | x ∈ Ss−1 , −1 ≤ r ≤ 1} is a discrepancy system for Ss−1 . By [9, Theorem 2.22], we have the following bounds for the discrepancy with respect to SC c1 ·

1

≤ DiscSC (X , N , Ss−1 ) ≤ c2 ·

(log N )1/2

.

N 1/2+1/(2s) N 1/2+1/(2s) While the lower bound holds for any sequence X = (xj )j≥1 of points on Ss−1 , it is only shown that for every N > 1 there exists a sequence of N points for which the upper bound holds. The constants depend only on the dimension s, but not on N. These results bound the optimal possible order of convergence one can expect for discrepancy on spheres, thus coining the term low discrepancy sequence on Ss−1 . Furthermore, as shown in [2, Theorem 10], the typical discrepancy of a random set of N i.i.d. uniformly distributed points on Ss−1 is of order N −1/2 . Interestingly, there are no known explicit constructions of low discrepancy sequences on the sphere. In [18,19], an elegant construction was given of a sequence on S2 with DiscSC (X , N , S2 ) ≤ c · N −1/3 (log N )2/3 , whose numerics suggest an order of N −1/2 . Unfortunately, this construction is limited to the case S2 . Recently, these results were improved using a different construction. In [2], points with small isotropic discrepancy in the plane were lifted to the sphere yielding



DiscSC (X , N , S2 ) ≤ 44 s N −1/2 . Again, numerical results suggest an even smaller order, which appears to be close to the before mentioned non-constructive upper bound for s = 2. Using Theorem 4.3, these results immediately imply bounds on the approximation error similar to Corollary 4.2.

786

F. Pausinger, A.M. Svane / Journal of Complexity 31 (2015) 773–797

5. Relation to Hardy–Krause variation The Koksma–Hlawka inequality was originally shown for Hardy–Krause variation and discrepancy with respect to R∗ . In this section, we investigate the relation between D -variation and the classical Hardy–Krause variation. In particular, we show that VR∗ (f ) coincides with the Hardy–Krause variation of f whenever f ∈ V∞ (R∗ ). 5.1. The Vitali and Hardy–Krause variation In the following, we use the notation of Owen [21]. For a ∈ Rs , we write a = (a1 , . . . , as ) for the coordinates. If a, b ∈ Rs and all ai ≤ bi (ai < bi ), then we write a ≤ b (a < b). In this case, [a, b] denotes the hyperrectangle consisting of those x with all ai ≤ xi ≤ bi . For u ⊆ {1, . . . , s}, we denote by au : b−u the point with ith coordinate equal to ai if i ∈ u and equal to bi otherwise. The set −u is the set complement of u in {1, . . . , s}. In dimension s = 1, a ladder Y on the interval [a, b] is a partitionof [a, b], i.e. a sequence s j s a = y0 < · · · < yk < b. A ladder in [0, 1]s is a set of the form Y = j=1 Y ⊆ [0, 1] , where each Yj is a one-dimensional ladder. j

j

j

j

j

j

Suppose Yj = {y1 < · · · < ykj }. Define the successor (yi )+ of yi to be yi+1 if i < kj and (ykj )+ = bj .

If y = (y1i1 , . . . , ysis ) ∈ Y, then we define its successor to be y+ = ((y1i1 )+ , . . . , (ysis )+ ). Define



∆(f ; a, b) =

(−1)|u| f (au : b−u ).

u⊆{1,...,s}

For a ladder Y, we have by [21, Proposition 2]

∆(f ; a, b) =



∆(f ; y, y+ ).

y∈Y

Define the variation over Y by VY (f ; a, b) =



|∆(f ; y, y+ )|.

y∈Y

Let Y be the set of all ladders on [a, b]. Then the Vitali variation of f over [a, b] is defined by V (f ; a, b) = sup VY (f ; a, b). Y∈Y

For u ⊆ {1, . . . , s}, we define

∆u (f ; a, b) =

 (−1)|v| f (av : b−v ). v⊆u

u −u Given a ladder Y, let Yu = {yu : b−u | y ∈ Y}. For a point y0 = yu : b−u ∈ Yu , we let y+ . 0 = (y+ ) : b This is independent of the chosen y. Again,

∆u (f ; a, b) =



∆u (f ; y, y+ ).

y∈Yu

Define VYu (f ; a, b) =



|∆u (f ; y, y+ )|.

y∈Yu

This is the variation over the ladder Yu of the restriction of f to the face of [a, b] consisting of points of the form xu : b−u .

F. Pausinger, A.M. Svane / Journal of Complexity 31 (2015) 773–797

787

Definition 5.1. The Hardy–Krause variation is defined as HK (f ; a, b) =



sup VYu (f ; a, b).

∅̸=u⊆{1,...,s} Y∈Y

H K denotes the class of functions with bounded Hardy–Krause variation. In words, the Hardy–Krause variation is the sum of the Vitali variations of the restrictions of f to all faces of [a, b]s . We note for later that by [21, Proposition 4], we may take a sequence of ladders Yn with |y − y+ | ≤ n−1 for all y ∈ Yn and



HK (f ; a, b) = lim

n→∞

V(Yn )u (f ; a, b).

∅̸=u⊆{1,...,s}

Moreover, we need [21, Proposition 6]: If Y is a ladder on [a, b], then f (a) = f (b) +



(−1)|u| ∆u (f ; a, b)

(9)

∅̸=u⊆{1,...,s}

= f (b) +



(−1)|u|

∅̸=u⊆{1,...,s}



∆u (f ; y, y+ ).

y∈Yu

5.2. Hardy–Krause and R∗ -variation Let [a, b]u denote the box

[a, b]u = {x ∈ Rs | ∀i ∈ u : ai ≤ xi ≤ bi , ∀i ̸∈ u : ai ≤ xi < bi }. If ai = bi and i ̸∈ u, then [a, b]u should be interpreted as the empty set. We define (a, b]u similarly. We define R∗ as the collection of all [0, a]u with u ⊆ {1, . . . , s} and a ∈ [0, 1]s . In the following, we show that

H K ∩ V∞ (R∗ ) = V (R∗ ) and that the two notions of variation agree on this set. In dimension s = 1, any function in H K is the difference of two bounded monotone functions. Such functions always belong to V∞ (R∗ ), so H K = V (R∗ ) in this case. We do not know whether this holds in higher dimensions. Theorem 5.2. Let f ∈ V∞ (R∗ ) and let c = (1, . . . , 1) ∈ Rs with [0, 1]s = [0, c ]. Then VR∗ (f ) ≤ HK (f ; 0, c ). Proof. Let fn be a sequence of simple functions converging uniformly to f . Then we may assume |f − fn | < 1n and (by adding zero terms) that there is a ladder Yn such that fn (x) = f (c ) +





(−1)|u|

∅̸=u⊆{1,...,s}



αu,y,v 1[0,y]v (x).

y∈(Yn )u v⊆{1,...,s}

The idea of this proof is to construct another sequence f˜n of simple functions converging to f and having VS ,R∗ (f˜n ) ≤ HK (f ; 0, c ). The function f˜n will be a linear combination of the same indicator  ˜ n such that u⊆{1,...,s} (Y˜ n )u functions as fn to ensure convergence. We first introduce a new ladder Y v is in bijection with the collection of boxes {[0, y] | y ∈ (Yn )u , u, v ⊆ {1, . . . , s}} in such a way that each grid point paired with the smallest box containing it. is s s i i i i ˜ ˜i If Yn = i=1 Yn with Yn = {0 = y1 < · · · < yki }, then we let Yn = i=1 Yn , where

Y˜ ni = {0 = yi1 < ai1 < yi2 · · · yiki < aiki } for arbitrarily chosen aij < 1. If y = (y1j1 , . . . , ysjs )u : b−u ∈ (Yn )u ,

788

F. Pausinger, A.M. Svane / Journal of Complexity 31 (2015) 773–797

then a(y) will denote the point with ith coordinate aiji −1 if i ∈ u (with the convention ai0 = 0) and aki

otherwise. The box [0, y]v is then paired with the point yv : a(y)−v . ˜ Let p = yv : a(y)−v and let p∼ + indicate that the successor is taken with respect to the ladder (Yn )u . Then



f˜n (x) := f (c ) +





(−1)|u∪(−v)| ∆u∪(−v) (f ; p, p∼ + ) 1[0,y]v (x).

∅̸=u⊆{1,...,s} y∈(Yn )u v⊆{1,...,s}

We now show that |f − f˜n |∞ ≤ 2n−1 . If x ∈ (0, c ], then there is a unique z ∈ Yn and a minimal w such that x ∈ (z , z+ ]w . Then w 1[0,y]v (x) = 1[0,y]v (z+ : a(z+ )−w ) := 1[0,y]v (q)

for all y ∈ (Yn )u . Thus,



f˜n (x) = f (c ) +





(−1)|u∪(−v)| ∆u∪(−v) (f ; p, p∼ + ) 1[0,y]v (q)

∅̸=u⊆{1,...,s} y∈(Yn )u v⊆{1,...,s}



= f (c ) +



(−1)|u| ∆u (f ; y, y∼ + ) 1[0,y] (q)

∅̸=u⊆{1,...,s} y∈(Y˜ n )u

= f (q), where the last equality follows from (9) since

{y ∈ Y˜ n | q ≤ y} = {y ∈ Y˜ n | z+w : a(z+ )−w ≤ y} is a ladder on [q, c ].  Since both fn and f˜n are constant on (z , z+ ]w \ v⊂w (z , z+ ]v ,

|f (x) − f˜n (x)| = |f (x) − f (q)| ≤ |f (x) − fn (x)| + |fn (q) − f (q)| ≤ 2n−1 . v w Similarly, if v ̸= ∅ and x = xv : 0−v , where xv ∈ (z v , z+ ] for w ⊆ v minimal and z ∈ Yn , then ˜fn (x) = f (qv : 0−v ). As above, we find that |f (x) − f˜n (x)| < 2n−1 . Hence, |f − f˜n |∞ ≤ 2n−1 . On the other hand, f˜n ∈ S (R∗ ) with    VS ,R∗ (f˜n ) ≤ |∆u∪(−v) (f ; p, p∼ + )| ∅̸=u⊆{1,...,s} y∈(Yn )u v⊆{1,...,s}



=



|∆u (f ; y, y∼ + )|

∅̸=u⊆{1,...,s} y∈(Y˜ n )u



=

∅̸=u⊆{1,...,s}

V(Y˜ n )u (f ; 0, c )

≤ HK (f ; 0, c ); so by Proposition 3.6, VR∗ (f ) ≤ lim inf VS ,R∗ (f˜n ) ≤ HK (f ; 0, c ).  n

To show the reverse inequality, we first prove several lemmas. Lemma 5.3. Let f = 1[0,a]u . Then

∆(f ; y1 , y2 ) =

 (−1)s 0 0

u if a = yu1 : y− 2 , if a = yv1 : y−v 2 and u ̸= v, if a ̸∈ [y1 , y2 ].

F. Pausinger, A.M. Svane / Journal of Complexity 31 (2015) 773–797

789

u Proof. If a = yv1 : y−v 2 , then the vertices of [y1 , y2 ] contained in [0, a] are exactly those of the form −w w y1 : y2 where v ∪ (−u) ⊆ w and v ⊆ u. Thus,



∆(f ; y1 , y2 ) = 1{v⊆u}

(−1)

|w|

 =

v∪(−u)⊆w

(−1)s , 0,

v = u, v ̸= u.

If a ̸∈ [y1 , y2 ], then ai ̸∈ [(y1 )i , (y2 )i ] for some i. Hence,



∆(f ; y1 , y2 ) =

u∪{i}

u |u| (−1)|u| f (yu1 : y− 2 ) − (−1) f (y1

−u∪{i}

: y2

) = 0. 

u⊆{1,...,s}\{i}

Lemma 5.4. Any f ∈ S (R∗ ) has a unique representation of the form f =

 i ,k

αi,k 1[0,ai ]vi,k + α 1[0,c ] ,

where all [0, ai ]vi,k ̸= ∅, [0, c ] are distinct and all αi,k ̸= 0. In particular, VS ,R∗ (f ) =



|αi,k | = HK (f ; 0, c ).

i ,k

Proof. We begin by showing the first claim for f = 0. Assume that a non-trivial representation is given. Clearly, α = 0 because of f (c ) = 0. We may choose ai with ai1 + · · · + ais maximal and then a vi,k not contained in any other vi,k′ . Consequently, there is an x ∈ [0, ai ]vi,k not contained in any other [0, aj ]vj,l . But then f (x) = αi,k ̸= 0 in contradiction to f = 0. Let f be a general function. We assume that there exist two different representations of f . Their difference would be a non-trivial representation of 0. This is not possible according to our earlier argument. Thus, any f ∈ S (R∗ ) has a unique representation. To compute HK (f ; 0, c ), we choose a ladder containing all ai as a vertex of a subrectangle and no subrectangles containing more than one distinct ai . Note that

∆w (f ; y, y+) =

 i ,k

αi,k ∆w (1[0,ai ]vi,k , y, y+).

For fixed y and w , Lemma 5.3 implies that there can be at most one pair (ai , vi,k ) with ∆w (1[0,ai ]vi,k , y, y+) ̸= 0, so

|∆w (f ; y, y+)| =

 i,k

|αi,k ||∆w (1[0,ai ]vi,k , y, y+)|.

It follows from Lemma 5.3 that HK (1[0,ai ]vi,k ; 0, c ) = 1, so HK (f ; 0, c ) =

 i,k

|αi,k |HK (1[0,ai ]vi,k ; 0, c ) =



|αi,k |. 

i ,k

Lemma 5.5. HK is lower semi-continuous with respect to the supremum norm. Proof. Let fi → f in the supremum norm. Given ε > 0, choose a ladder Y such that HK (f ; a, b) ≤



V Y u ( f ; a, b ) +

∅̸=u⊆{1,...,s}

ε

2

=





ε |∆u (f ; y, y+ )| + .

∅̸=u⊆{1,...,s} y∈Yu

2

Assume that the number of terms in the latter sum is N. We choose fi such that |f − fi |∞ <

 ∅̸=u⊆{1,...,s}

VYu (f ; a, b) ≤



ε

2N2d

. Then

(VYu (fi ; a, b) + VYu (f − fi ; a, b))

∅̸=u⊆{1,...,s}

ε ≤ HK (fi ; a, b) + ; 2

i.e. HK (f ; a, b) ≤ HK (fi ; a, b) + ε for all i sufficiently large. The case HK (f ) = ∞ is similar.



790

F. Pausinger, A.M. Svane / Journal of Complexity 31 (2015) 773–797

Theorem 5.6. For any f ∈ V∞ (R∗ ), HK (f ; 0, c ) = VR∗ (f ). Proof. It suffices to show the reverse inequality in Theorem 5.2. Take a sequence fi ∈ S (R∗ ) converging uniformly to f such that VS ,R∗ (fi ) → VR∗ (f ). Then by semi-continuity of HK , HK (f ; 0, c ) ≤ lim inf HK (fi ; 0, c ) = lim inf VS ,R∗ (fi ) = VR∗ (f ).  i

i

Remark 5.7. In [28] the following version of the Koksma–Hlawka inequality is proved: for f ∈ H K and K ∈ K ,

  N     f (x)dx − 1 f (xi )1K (xi ) ≤ (HK (f ; 0, c ) + |f (c )|)DiscK (X , N ).  N K

(10)

1=1

For f ∈ V∞ (R∗ ), this also follows from the proof of Theorem 5.2 since f˜n 1K converges uniformly to f 1K and therefore, VK (f 1K ) ≤ lim inf VS ,K (f˜n 1K ) ≤ HK (f ; 0, c ) + |f (c )|. n

(11)

Note that strict inequality may hold in (11); e.g., for f (x) = x and K = [0, 21 ], so in general Theorem 4.3 yields a stronger inequality. 5.3. Hardy–Krause and K -variation By Theorem 5.2, VK (f ) is bounded by the Hardy–Krause variation if f ∈ V∞ (R∗ ). This applies for instance to all continuous functions. In dimension one, we can do a little better for continuous functions by using the Banach indicatrix theorem [3]. Note that in dimension one, K is the set of all intervals. Theorem 5.8. Let f : [0, 1] → R be continuous. Then VK (f ) ≤

1 2

(HK (f ; 0, 1) + |f (1) − f (0)|) ≤ HK (f ; 0, 1).

Proof. Suppose HK (f ; 0, 1) is finite. Let N (α) be the cardinality of f −1 (α). Then by [3, Section 2, Theorem 3], HK (f ; 0, 1) =





N (α)dα, −∞

where the right hand side is the Lebesgue integral. Now, if N (α) is finite, then f −1 [α, ∞) is a finite union of closed intervals. The value of f at the endpoints is α (except possibly at 0 and 1) by continuity. If α > f (0), f (1), then all intervals are contained in (0, 1) and hence 2hK ,f (α) ≤ N (α). If α < f (0), f (1), then their complement is contained in (0, 1) and again 2hK ,f (α) ≤ N (α). In the remaining cases, either 0 or 1 belongs to f −1 [α, ∞) and we can only be sure that 2hK ,f (α) ≤ N (α) + 1. The claim now follows by Theorem 3.15 and the fact that the Lebesgue integral bounds the lower Darboux integral from above.  Remark 5.9. The function f (x) = x shows that the term |f (1) − f (0)| is necessary in the theorem. The following example shows that V (K ) is generally larger than H K ∩ V∞ (K ) Example 5.10. Take the step function f : [0, 1]2 → R defined by f (x1 , x2 ) = 1{x1 +x2 ≥1} . Any box of the form [a1 , a2 ] × [1 − a2 , 1 − a1 ] has ∆(f ) = 1. We can decompose [0, 1]2 into arbitrarily many such boxes implying that the Vitali variation of this function is not bounded. On the other hand, it is easy to see that VK (f ) = 1 and hence that f belongs to V (K ).

F. Pausinger, A.M. Svane / Journal of Complexity 31 (2015) 773–797

791

Finally, we mention the following corollary to Theorem 5.6 and Lemma 4.5: Corollary 5.11. Let g : [0, 1]s → [0, 1] be a convex function and f : [0, 1] → R. Then VK (f ◦ g ) ≤ VR∗ (f ) = HK (f ; 0, 1). The corollary applies, for instance, to functions of the form f (|x|). 6. Application to integral geometry The initial motivation for this paper came from stereology. Here the problem is to estimate the intrinsic volumes Vi , i = 0, . . . , s, of a real world object K ⊆ Rs given only its intersection with finitely many lower dimensional subspaces (for instance slices of a tissue). The most prominent intrinsic volumes are the volume Vs , the surface area 2Vs−1 , and the Euler characteristic V0 . Many stereological procedures are based on the Crofton formula [24, Theorem 4.5.5]: If K ⊆ Rs is a polyconvex set, then the kth intrinsic volume is given by



Vk (K ) = ck

A(s,s−k)

V0 (K ∩ P ) µs−k (dP ),

(12)

where A(s, s − k) is the affine Grassmannian consisting of all affine (s − k)-dimensional affine subspaces in Rs equipped with the normalised Haar measure (see [25, Chapter 13] for a definition) and ck is an explicitly known constant. Moreover, V0 (K ∩ P ) is the Euler characteristic of the intersection of K with the (s − k)-dimensional hyperplane P. Typically, only finitely many of the intersections K ∩ P can be measured. If K is known to lie in a window W , then the planes P are chosen uniformly from the compact set A = {P ∈ A(s, s − k) | P ∩ W ̸= ∅}. Then Vk (K ) is estimated by

Vk (K ) ≈ ck

N µs−k (A) 

N

V0 (K ∩ Pi ).

(13)

i=1

If the Pi are chosen independently and uniformly at random from A, then the Crofton formula shows that this estimator is unbiased. Below, we bound the integration error in (13) in the cases k = 1 and k = d − 1 using the theory of this paper. 6.1. Estimation of V1 We start with considering the first intrinsic volume V1 . For convex sets, V1 has an interpretation as the mean width. In dimension s = 2, it is half the boundary length and for s = 3, it is proportional to the integrated mean curvature. The latter interpretation was used for estimating the length of root systems in [10]. Using [25, Theorem 13.2.12], we can rewrite (12) as

V1 (K ) = =

c1 2 c1





S

2

s−1

Ss−1



V0 (K ∩ (P (u) + α u)) dα σs−1 (du)

(14)

−∞

GK (u) σs−1 (du),

where P (u) denotes the hyperplane with normal direction u ∈ Ss−1 and GK is the inner integral GK (u) =





V0 (K ∩ (P (u) + α u)) dα. −∞

A technique for computing GK (u) for a given direction u is developed in [11] for general bodies K that satisfy a certain reach condition (including convex sets satisfying this additional smoothness

792

F. Pausinger, A.M. Svane / Journal of Complexity 31 (2015) 773–797

assumption). In stereology, GK (u) is estimated by an average of V0 (K ∩ (P (u) + α u)) for finitely many

α . When K is convex,

GK (u) = h(K , u) + h(K , −u) = h(K ⊕ Kˇ , u),

(15)

where h(K , ·) is the support function (see [25, Chapter 14]) h(K , u) = sup{⟨x, u⟩ | x ∈ K } and K ⊕ Kˇ = {x − y | x, y ∈ K }. This is used in some applications [12,27] to measure GK (u) directly. Taking finitely many directions u1 , . . . , uN , we approximate (14) by N c1 1 

2 N j=1

GK (uj ).

It is not obvious how to bound the occurring integration error using the classical Koksma–Hlawka inequality. We illustrate how Theorem 4.3 can be applied in two examples. Example 1: Simplicial complexes. We start with a preliminary lemma. For a collection D of sets from X, we let Dp denote the collection of all intersections of at most p sets each being either an element of D or the complement of such a set. We obtain the following bound on the corresponding variation: Lemma 6.1. Let D be an intersection stable discrepancy system on X. For f ∈ V∞ (Dp ), VD (f ) ≤ 2p VDp (f ). Proof. For A ∈ Dp , we may write 1A =

k 

p 

1Ai

i=1

k

i =1

Ai ∩

p

j=k+1

X\Aj , where A1 , . . . , Ap ∈ D . Thus,

(1 − 1Aj ) ∈ S (D )

(16)

j=k+1

by expanding the product. Hence S (D ) = S (Dp ), so it suffices to show the inequality for simple functions. This again follows by expanding the product in (16).  For the particular collection of sets SC s+1 that consists of intersections of at most (s + 1) spherical caps of Ss−1 , the corresponding variation VSC s+1 (GK ) can be bounded. Proposition6.2. Let K be a simplicial complex and let S be the collection of relatively open simplices of K ; i.e. K = τ ∈S τ . Then VSC s+1 (GK ) ≤ 4(s + 1)|S | diam(K ). Proof. For τ ∈ S and a hyperplane P, note that P ∩ τ is either empty, τ , or a relatively open (dim(τ ) − 1)-dimensional convex polytope. Thus,  V0 (K ∩ P ) = (−1)dim(τ ∩P ) 1{τ ∩P ̸=∅} . τ ∈S

The situation P ∩ τ = τ happens exactly if τ ⊆ P, which can be the case for at most one of the planes P (u) + α u for fixed u. Therefore, GK (u) =





V0 (K ∩ (P (u) + α u))dα  ∞  dim(τ )−1 1{τ ∩(P (u)+α u)̸=∅} dα = (−1) −∞

τ ∈S

=

 τ ∈S

−∞

(−1)

dim(τ )−1

(h(τ , u) + h(τ , −u)).

F. Pausinger, A.M. Svane / Journal of Complexity 31 (2015) 773–797

793

For τ ∈ S, we let v1τ , . . . , vkτ(τ ) be its vertices, where k(τ ) = dim(τ ) + 1 ≤ s + 1. For i = 1, . . . , k(τ ), we define s−1 W (τ )+ | ∀j > i : ⟨viτ , u⟩ ≥ ⟨vjτ , u⟩, ∀j < i : ⟨viτ , u⟩ < ⟨vjτ , u⟩}, i = {u ∈ S s−1 W (τ )− | ∀j > i : ⟨viτ , u⟩ ≤ ⟨vjτ , u⟩, ∀j < i : ⟨viτ , u⟩ > ⟨vjτ , u⟩}. i = {u ∈ S s−1 Then the W (τ )+ and so do the W (τ )− i define a disjoint decomposition of S i . Moreover,

h(τ , u) + h(τ , −u) =

k(τ ) 

(⟨viτ , u⟩1W (τ )+ (u) − ⟨viτ , u⟩1W (τ )− (u))

=

k(τ ) 

(17)

i

i

i=1

(⟨viτ − v τ , u⟩1W (τ )+ (u) − ⟨viτ − v τ , u⟩1W (τ )− (u)), i

i=1

i

where v τ ∈ τ is some fixed point. Note that W (τ )± i ∈ SC s . All superlevel sets of the function u → ⟨w, u⟩ are spherical caps. Hence VSC s+1 (GK ) ≤

k(τ ) 

τ ∈S i=1

HSC s+1 (⟨viτ − v τ , ·⟩1W (τ )+ ) + HSC s+1 (⟨viτ − v τ , ·⟩1W (τ )− ) i

i

≤ 4(s + 1)|S | diam(K ) τ

where sup(⟨vi − v τ , ·⟩1W (τ )+ ) − inf(⟨viτ − v τ , ·⟩1W (τ )+ ) is bounded by 2 diam(K ). i

i



For our next step, let Ss+−1 be the lower hemisphere. We parametrise it by inverse stereographic projection from the North Pole φ : Bs−1 → Ss+−1 . Since GK (u) is even, we may rewrite the integral as 1 2

 Ss−1

GK (u)σs−1 (du) =

 [−1,1]s−1

GK (φ(x))Jφ (x)1Bs−1 (x)dx,

where Jφ is a smooth Jacobian. The parametrisation allows us to choose the point sequence in [−1, 1]s−1 and bound the integration error using the tools of Section 3. Proposition 6.3. Suppose K is a simplicial complex. Then for any point sequence X = (xj )j≥1 in [−1, 1]s−1 ,

  1  2

Ss−1

GK (u)σs−1 (du) −

s−1

≤2

N 2s−1 

N

j =1

 

GK (φ(xj ))Jφ (xj )1Bs (xj )

VK ((GK ◦ φ)Jφ 1Bs−1 )DiscK (X , N , [−1, 1]s−1 ),

where VK ((GK ◦ φ)Jφ 1Bs−1 ) < ∞. Note that the integral is now approximated by a weighted average. This is to make up for the curvature of the sphere. In the case s = 3, it is actually possible to find a parametrisation of Ss−1 with Jacobian 1. This was exploited in [10]. Proof. Only the last claim requires an argument. Clearly, VK (1Bs−1 ) = 1 since the ball is convex, and VK (Jφ ) < ∞ by Theorem 3.12 since Jφ is smooth. The stereographic projection φ −1 takes a spherical cap to either a ball or its complement, so from Proposition 6.2, Lemma 4.5, and Lemma 6.1 , it follows that VK (GK ◦ φ) ≤ 2s+1 VKs+1 (GK ◦ φ) ≤ 2s+1 VSC s+1 (GK ) < ∞. Finally, by Theorem 3.7 the product of three functions from V (K ) is again in V (K ).



794

F. Pausinger, A.M. Svane / Journal of Complexity 31 (2015) 773–797

Remark 6.4. We think this example illustrates the strength of VK . Neither Harman nor Hardy–Krause variation alone could provide a Koksma–Hlawka inequality for (GK ◦φ)Jφ 1Bs−1 , but K -variation allows us to combine the two notions. The integral could in principle also be bounded by repeated use of Zaremba’s bound (10) since Eq. (17) expresses GK ◦ φ as a sum of restrictions of functions with bounded Hardy–Krause variation. However, this is tedious since the φ −1 (Wi± (τ )) are not convex, but only elements of A(K ). The required work is in some sense already incorporated in VK . Example 2: Polyconvex sets. For a general polyconvex set we have the following Koksma–Hlawka inequality:

M

Proposition 6.5. Suppose K = i=1 Ki is the union of the finitely many compact convex sets Ki and let X = (xj )j≥1 be a point sequence in [−1, 1]s . Then

  N s    V1 (K ) − c1 (s + 1)2 s (xj ) | x | G ( x /| x |) 1 j K j j B   sκ s N j =1 ≤

c1 (s + 1)2M +s

diam(K )DiscK (X , N , [−1, 1]s ),

sκs

where κs is the volume of the unit ball in Rs . The idea is to choose a point sequence in Bs and apply GK to the lines spanned by the points. As in the previous example, the average must be weighted, this time by the norm of the points. Proof. First assume that K is convex. As noted in (15), GK (·) = h(K ⊕ Kˇ , ·) in this case. The support function extends naturally to h(K ⊕ Kˇ , ·) : Rs → R. This is a non-negative convex function that is positive homogeneous of degree 1, i.e. h(K ⊕ Kˇ , λu) = λh(K ⊕ Kˇ , u) for all λ ≥ 0. Thus we have the formula

V1 (K ) = c1

 Ss−1

GK (u)σs−1 (du) = c1

(s + 1) sκs

 Bs

h(K ⊕ Kˇ , x)dx.

Since 0 ≤ h(K ⊕ Kˇ , ·) ≤ diam(K ) and h(K ⊕ Kˇ , ·) is convex, VK (h(K ⊕ Kˇ , ·)1Bs ) ≤ HK (h(K ⊕ Kˇ , ·)1Bs ) ≤ 2 diam(K ).

M

Now consider the general case K = i=1 Ki . The map K → GK is additive (GK1 ∪K2+ G K1 ∩K2 = GK1+ GK2 ) because the Euler characteristic K → V0 (K ∩ P ) has this property. Thus, setting KI = i∈I Ki , it follows from the inclusion–exclusion principle that



GK (u) =

(−1)|I | GKI (u).

∅̸=I ⊆{1,...,M }

˜ K : Rs → R by homogeneity: G˜ K (x) = |x|GK (x/|x|). Using additivity of VK and the Extend GK to G ˜ K = h(K ⊕ Kˇ , ·), we obtain convex case where G 

˜ K 1Bs ) ≤ VK (G

˜ KI 1Bs ) ≤ 2M diam(K ), VK ( G

∅̸=I ⊆{1,...,M }

where diam(K ) is a rough bound on diam(KI ) = diam( i∈I Ki ). The claim now follows by applying the Koksma–Hlawka inequality to the identity



V1 (K ) = c1

 Ss−1

GK (u)σs−1 (du) = c1

(s + 1) sκs

 Bs

˜ K (x)dx.  G

F. Pausinger, A.M. Svane / Journal of Complexity 31 (2015) 773–797

795

In this example, the points are generated in an s-dimensional space though the integration space is only (s − 1)-dimensional. The bound on DiscK (X , N , [−1, 1]s ) for a low discrepancy sequence given in Section 4 is larger than the one for DiscK X , N , [−1, 1]s−1 . However, in the first example, the variation of GK goes to infinity with the number of vertices in K . The larger discrepancy may simply be the price we have to pay for allowing general convex sets. 6.2. Estimation of surface area Now we turn to the estimation of the surface area 2Vs−1 of an object from its intersection with finitely many lines. Using [25, Theorem 13.2.12], the Crofton formula becomes

$$V_{s-1}(K)=c_{s-1}\int_{A(s,1)}V_0(K\cap L)\,dL=c_{s-1}\int_{S^{s-1}}\int_{u^{\perp}}V_0\big(K\cap(L_u+y)\big)\,dy\,\sigma_{s-1}(du),$$

where $L_u$ is the line spanned by $u$. Denote the inner integral by
$$F_K(u)=\int_{u^{\perp}}V_0\big(K\cap(L_u+y)\big)\,dy.$$

For $K$ convex,
$$F_K(u)=\lambda_{s-1}(\pi_u(K)),\tag{18}$$
where $\pi_u:\mathbb{R}^{s}\to u^{\perp}$ is the orthogonal projection and $\lambda_{s-1}$ is the $(s-1)$-dimensional Lebesgue measure on $u^{\perp}$. In many applications, e.g. in tomography [12], $K$ is convex and the only pieces of information available about $K$ are finitely many projections $\pi_u(K)$. Hence, $F_K(u)$ can be computed for finitely many values of $u$. For this we have the following Koksma–Hlawka inequality:



Proposition 6.6. Suppose $K$ is the union of $M$ compact convex sets. Let $X=(x_j)_{j\ge1}$ be a point sequence in $[-1,1]^{s}$. Then
$$\left|V_{s-1}(K)-c_{s-1}\frac{(s+1)2^{s}}{s\kappa_s}\,\frac{1}{N}\sum_{j=1}^{N}\mathbf{1}_{B^{s}}(x_j)\,|x_j|\,F_K(x_j/|x_j|)\right|\le c_{s-1}\frac{(s+1)2^{M+s}}{s\kappa_s}\,\kappa_{s-1}\operatorname{diam}(K)^{s-1}\,\mathrm{Disc}_K(X,N,[-1,1]^{s}).$$

Proof. Suppose $K$ is convex. According to [24, Section 5.3], there exists another convex body, the projection body $\Pi K$, such that $F_K(u)=h(\Pi K,u)$. As in Example 2 of Section 6.1, we thus have

$$V_{s-1}(K)=c_{s-1}\frac{(s+1)}{s\kappa_s}\int_{B^{s}}h(\Pi K,x)\,dx.$$

Again, $V_K\big(\mathbf{1}_{B^{s}}h(\Pi K,\cdot)\big)\le 2\operatorname{diam}(K)^{s-1}\kappa_{s-1}$, where $\operatorname{diam}(K)^{s-1}\kappa_{s-1}$ is a bound for $\lambda_{s-1}(\pi_u(K))$. For $K=\bigcup_{i=1}^{M}K_i$ polyconvex, let $\tilde F_K(x)=|x|\,F_K(x/|x|)$ and note that $K\mapsto\tilde F_K$ is additive. The rest of the argument follows from the inclusion–exclusion principle as in the proof of Proposition 6.5. $\square$

In the stereological setting, where only the intersections with finitely many lines can be measured, it is a common technique to first fix finitely many line directions $u$. For each direction, the Euler characteristic of the intersection with $K$ is measured for finitely many translations of $L_u$, and $F_K(u)$ is estimated by the average value; see [16]. The sketch below illustrates this measurement step; Proposition 6.7 then provides a Koksma–Hlawka inequality for the resulting estimate of $F_K(u)$.
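To make the measurement step concrete, the sketch below computes $V_0(K\cap(L_u+y))$ when $K$ is modelled as a union of balls, a simple polyconvex phantom chosen purely for illustration (the paper does not prescribe any particular model). Along a fixed line each ball contributes a parameter interval, so the Euler characteristic is the number of connected components of the union of these intervals; the second routine recomputes it via the inclusion–exclusion principle used in the proofs above. All function names are hypothetical.

```python
import numpy as np
from itertools import combinations

def parameter_interval(center, radius, u, y):
    # {t : y + t*u lies in the ball} as an interval, or None if the line misses the ball.
    d = center - y
    t0 = np.dot(d, u)
    perp = d - t0 * u
    disc = radius ** 2 - np.dot(perp, perp)
    if disc < 0:
        return None
    h = np.sqrt(disc)
    return (t0 - h, t0 + h)

def euler_direct(intervals):
    # V_0(K ∩ line) = number of connected components of the union of the intervals.
    ivs = sorted(iv for iv in intervals if iv is not None)
    count, right = 0, -np.inf
    for a, b in ivs:
        if a > right:
            count += 1
        right = max(right, b)
    return count

def euler_incl_excl(intervals):
    # Same quantity via inclusion-exclusion: sum over nonempty I of (-1)^(|I|+1) * V_0(K_I ∩ line),
    # where K_I ∩ line is nonempty exactly when the intervals indexed by I share a point.
    total = 0
    for k in range(1, len(intervals) + 1):
        for I in combinations(range(len(intervals)), k):
            if any(intervals[i] is None for i in I):
                continue
            lo = max(intervals[i][0] for i in I)
            hi = min(intervals[i][1] for i in I)
            if lo <= hi:
                total += (-1) ** (k + 1)
    return total

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    balls = [(np.array([0.3, 0.0, 0.0]), 0.35),
             (np.array([-0.2, 0.1, 0.0]), 0.40),
             (np.array([0.0, -0.4, 0.2]), 0.30)]
    for _ in range(1000):
        u = rng.normal(size=3); u /= np.linalg.norm(u)
        y = rng.uniform(-1, 1, size=3); y -= np.dot(y, u) * u   # a point in the hyperplane u^perp
        intervals = [parameter_interval(c, r, u, y) for c, r in balls]
        assert euler_direct(intervals) == euler_incl_excl(intervals)
```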


Proposition 6.7. Let $K\subseteq B^{s}$ be a union of $M$ convex sets. Let $u\in S^{s-1}$ be given and let $Y=(y_j)_{j\ge1}$ be a point sequence in $u^{\perp}$ obtained from a point sequence $X$ in $[-1,1]^{s-1}$ by an isometry $\mathbb{R}^{s-1}\to u^{\perp}$. Then
$$\left|F_K(u)-\frac{2^{s-1}}{N}\sum_{j=1}^{N}V_0\big(K\cap(L_u+y_j)\big)\right|\le 2^{M+s-1}\,\mathrm{Disc}_K(Y,N,[-1,1]^{s-1}).$$

Proof. This is trivial when $K$ is convex by (18), because $V_0(K\cap(L_u+y))=\mathbf{1}_{\pi_u(K)}(y)$ in this case. For polyconvex sets, the claim follows from the inclusion–exclusion principle. $\square$
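A minimal sketch of the estimator in Proposition 6.7 for the simplest test case, $K$ a single ball contained in $B^{3}$ (our choice, not the paper's): the points of $[-1,1]^{s-1}$ are mapped into $u^{\perp}$ by a linear isometry, each image point is tested for whether the line through it in direction $u$ meets the ball, and the weighted average is compared with the exact value $F_K(u)=\lambda_{s-1}(\pi_u(K))=\kappa_{s-1}r^{s-1}$ from (18). Plain Monte Carlo points again stand in for a low-discrepancy sequence, and the helper names are ours.

```python
import numpy as np
from math import gamma, pi

def kappa(d):
    # Volume of the d-dimensional unit ball.
    return pi ** (d / 2) / gamma(d / 2 + 1)

def isometry_onto_complement(u):
    # Columns form an orthonormal basis of u^perp, i.e. a linear isometry R^(s-1) -> u^perp.
    _, _, vt = np.linalg.svd(u.reshape(1, -1))
    return vt[1:].T

def estimate_FK(u, X, center, radius):
    # (2^(s-1) / N) * sum_j V_0(K ∩ (L_u + y_j)), with y_j the image of x_j under the isometry.
    s = len(u)
    B = isometry_onto_complement(u)
    hits = 0
    for x in X:
        y = B @ x
        perp = (center - y) - np.dot(center - y, u) * u   # equals pi_u(center) - y since y is in u^perp
        hits += int(np.linalg.norm(perp) <= radius)
    return 2 ** (s - 1) * hits / len(X)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    s, N = 3, 100_000
    u = np.array([1.0, 2.0, -1.0]); u /= np.linalg.norm(u)
    center, radius = np.array([0.2, 0.0, 0.1]), 0.6       # ball contained in B^3
    X = rng.uniform(-1.0, 1.0, size=(N, s - 1))
    print(estimate_FK(u, X, center, radius), kappa(s - 1) * radius ** (s - 1))
```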

To put things together, we take a sequence of points $X=(x_j)_{j\ge1}$ in $[-1,1]^{s}$ and a sequence $Y=(y_l)_{l\ge1}$ in $[-1,1]^{s-1}$. We let $(y_l^{j})_{l\ge1}$ be the sequence obtained from $Y$ by mapping it to $x_j^{\perp}$ by a linear isometry. We then measure the finitely many quantities $V_0(K\cap L_{j,l})$, where $L_{j,l}=L_{x_j/|x_j|}+y_l^{j}$ is the line parallel to $x_j$ translated by $y_l^{j}$. The triangle inequality and Propositions 6.6 and 6.7 yield the Koksma–Hlawka inequality
$$\left|V_{s-1}(K)-c_{s-1}\frac{(s+1)2^{2s-1}}{s\kappa_s}\,\frac{1}{NL}\sum_{j=1}^{N}\sum_{l=1}^{L}\mathbf{1}_{B^{s}}(x_j)\,|x_j|\,V_0(K\cap L_{j,l})\right|\le c_{s-1}\frac{(s+1)2^{M+s}}{s\kappa_s}\Big(\kappa_{s-1}\operatorname{diam}(K)^{s-1}\,\mathrm{Disc}_K(X,N,[-1,1]^{s})+2^{s-1}\,\mathrm{Disc}_K(Y,L,[-1,1]^{s-1})\Big).$$
It is interesting that the largest error comes from the line directions. This is probably due to the way they are generated from an $s$-dimensional space rather than from $S^{s-1}$.

Remark 6.8. A similar argument applies in Section 6.1 if $G_K(u)$ is estimated from the intersection of $K$ with finitely many parallel hyperplanes.

7. Concluding remarks

We introduced a new concept of variation of multivariate functions for which we obtain Koksma–Hlawka type inequalities. The framework relies on measure-theoretic arguments and unifies the notions of variation in the sense of Hardy and Krause and in the sense of Harman. This general point of view, and the machinery developed in this paper, enables us to bound the integration error for functions for which no such bound was previously available using only Harman or only Hardy–Krause variation.

The space $V_\infty(D)$ of functions with a non-trivial definition of $D$-variation is unfortunately rather limited. One could replace uniform convergence with pointwise and $L^1$-convergence in Definition 3.2. Under the conditions of Proposition 3.9, this would yield $\mathbf{1}_U\in V_\infty(D)$ for any open set $U$. The proof of the Koksma–Hlawka inequality given in Section 4 still works with this definition. We do not know whether this approach leads to more functions of bounded variation. Studying the corresponding spaces of functions of bounded variation seems to require new ideas and is therefore left as a problem for future research.

The main motivation for this paper was the application of the Koksma–Hlawka inequality to integrals appearing in integral geometry. Since $K$-variation is adapted to convexity, we believe that our inequality can be useful in this context in many situations going beyond the scope of our examples in Section 6. We hope that the applications in this paper will motivate future research on discrepancy systems in compact spaces.

References

[1] C.R. Adams, J.A. Clarkson, Properties of functions f(x, y) of bounded variation, Trans. Amer. Math. Soc. 36 (1934) 711–730.
[2] C. Aistleitner, J.S. Brauchart, J. Dick, Point sets on the sphere S^2 with small spherical cap discrepancy, Discrete Comput. Geom. 48 (2012) 990–1024.
[3] S. Banach, Sur les lignes rectifiables et les surfaces dont l'aire est finie, Fund. Math. 7 (1925) 225–236.
[4] J. Beck, On the discrepancy of convex plane sets, Monatsh. Math. 105 (1988) 91–106.
[5] L. Brandolini, L. Colzani, G. Gigante, G. Travaglini, On the Koksma–Hlawka inequality, J. Complexity 29 (2013) 158–172.


[6] L. Brandolini, L. Colzani, G. Gigante, G. Travaglini, A Koksma–Hlawka inequality for simplices, in: Trends in Harmonic Analysis, in: Springer INdAM Ser., vol. 3, Springer, Milan, 2013, pp. 33–46.
[7] J.A. Clarkson, C.R. Adams, On definitions of bounded variation for functions of two variables, Trans. Amer. Math. Soc. 35 (1933) 824–854.
[8] J. Dick, F. Pillichshammer, Digital Nets and Sequences, Cambridge University Press, Cambridge, 2010.
[9] M. Drmota, R. Tichy, Sequences, Discrepancies and Applications, in: Lecture Notes in Mathematics, vol. 1651, Springer, Berlin, 1997.
[10] H. Edelsbrunner, F. Pausinger, Stable length estimates for tube-like shapes, J. Math. Imaging Vis. 50 (2014) 164–177.
[11] H. Edelsbrunner, F. Pausinger, Convergence and approximation of the intrinsic volume, 2014, submitted for publication.
[12] R.J. Gardner, Geometric Tomography, Cambridge University Press, Cambridge, 2006.
[13] G. Harman, Variations on the Koksma–Hlawka inequality, Unif. Distrib. Theory 5 (2010) 65–78.
[14] E. Hlawka, Funktionen von beschränkter Variation in der Theorie der Gleichverteilung, Ann. Mat. Pura Appl. 54 (1961) 325–333.
[15] J.F. Koksma, A general theorem from the theory of uniform distribution modulo 1, Mathematica B (Zutphen) 11 (1942) 7–11.
[16] L. Kubínová, J. Janáček, Estimating surface area by the isotropic fakir method from thick slices cut in an arbitrary direction, J. Microsc. 191 (1998) 201–211.
[17] L. Kuipers, H. Niederreiter, Uniform Distribution of Sequences, Wiley, New York, 1974.
[18] A. Lubotzky, R. Phillips, P. Sarnak, Hecke operators and distributing points on the sphere, Comm. Pure Appl. Math. 39 (1986) 149–186.
[19] A. Lubotzky, R. Phillips, P. Sarnak, Hecke operators and distributing points on S^2, Comm. Pure Appl. Math. 40 (1987) 401–420.
[20] J. Matoušek, Geometric Discrepancy, in: Algorithms and Combinatorics, vol. 18, Springer-Verlag, Berlin, 1999.
[21] A.B. Owen, Multidimensional variation for quasi-Monte Carlo, in: Contemporary Multivariate Analysis and Design of Experiments, in: Ser. Biostat., vol. 2, World Sci. Publ., Hackensack, NJ, 2005, pp. 49–74.
[22] M.H. Protter, C.B. Morrey, A First Course in Real Analysis, Springer, New York, 1977.
[23] W.M. Schmidt, Irregularities of distribution IX, Acta Arith. 27 (1975) 385–396.
[24] R. Schneider, Convex Bodies: The Brunn–Minkowski Theory, second ed., Cambridge Univ. Press, Cambridge, 2014.
[25] R. Schneider, W. Weil, Stochastic and Integral Geometry, Springer, Heidelberg, 2008.
[26] W. Stute, Convergence rates for the isotrope discrepancy, Ann. Probab. 5 (1977) 707–723.
[27] D. Wulfsohn, H.J.G. Gundersen, E.B. Vedel Jensen, J.R. Nyengaard, Volume estimation from projections, J. Microsc. 215 (2004) 111–120.
[28] S.K. Zaremba, La discrépance isotrope et l'intégration numérique, Ann. Mat. Pura Appl. (4) 87 (1970) 125–135.