Sets of random variables with a given uncorrelation structure


Statistics & Probability Letters 55 (2001) 359–366

Sofiya Ostrovska

a) Department of Mathematics, Faculty of Art and Science, Atilim University, 06836 Incek, Ankara, Turkey
b) G. Skovoroda State University, Kharkov, Ukraine

Received July 2000; received in revised form March 2001

Abstract

Let $\xi_1,\dots,\xi_n$ be random variables having finite expectations. Denote
$$
i_k := \#\Big\{(j_1,\dots,j_k)\colon 1 \le j_1 < \dots < j_k \le n \ \text{ and } \ E\prod_{l=1}^{k}\xi_{j_l} = \prod_{l=1}^{k} E\xi_{j_l}\Big\}, \qquad k = 2,\dots,n.
$$
The finite sequence $(i_2,\dots,i_n)$ is called the uncorrelation structure of $\xi_1,\dots,\xi_n$. It is proved that for any given sequence of nonnegative integers $(i_2,\dots,i_n)$ satisfying $0 \le i_k \le \binom{n}{k}$ and any given nondegenerate probability distributions $P_1,\dots,P_n$ there exist random variables $\xi_1,\dots,\xi_n$ with respective distributions $P_1,\dots,P_n$ such that $(i_2,\dots,i_n)$ is their uncorrelation structure. © 2001 Elsevier Science B.V. All rights reserved.

MSC: 60E05

Keywords: Independence; Independence structure; Uncorrelation structure

1. Introduction

Let $(\Omega, \mathcal{F}, P)$ be a probability space, and let $A_1,\dots,A_n \in \mathcal{F}$ be nontrivial random events. The random events are called independent (see e.g. Feller, 1968, Vol. 1, Chapter V.3) if the following $2^n - n - 1$ equalities hold:
$$
P(A_{j_1} \cap \dots \cap A_{j_k}) = P(A_{j_1}) \cdots P(A_{j_k}) \quad \text{for all } 1 \le j_1 < \dots < j_k \le n, \ k = 2,\dots,n. \tag{1}
$$
If at least one of equalities (1) fails to be true, the random events are called dependent. In this case different notions of partial independence can be considered. Note that probability spaces with certain sets of partially independent events are used in computer science (see e.g. Karloff and Mansour, 1997). Following Wang et al. (1993) we say that nontrivial random events are independent at the level $k$ ($2 \le k \le n$) if the following $\binom{n}{k}$ equalities hold:
$$
P(A_{j_1} \cap \dots \cap A_{j_k}) = P(A_{j_1}) \cdots P(A_{j_k}) \quad \text{for all } 1 \le j_1 < \dots < j_k \le n. \tag{2}
$$
Random events independent at the level 2 are usually called pairwise independent.


Now, let $A_1,\dots,A_n \in \mathcal{F}$ be nontrivial random events. Denote by $i_k$ ($2 \le k \le n$) the number of equalities (2) that hold at the level $k$, that is,
$$
i_k := \#\Big\{(j_1,\dots,j_k)\colon 1 \le j_1 < \dots < j_k \le n \ \text{ and } \ P\Big(\bigcap_{l=1}^{k} A_{j_l}\Big) = \prod_{l=1}^{k} P(A_{j_l})\Big\}.
$$
The finite sequence $(i_2,\dots,i_n)$ is said to be the independence structure for the collection $A_1,\dots,A_n$ of random events (see Stoyanov, 1995). If $i_k = \binom{n}{k}$ for all $k = 2,\dots,n$, the random events are mutually independent; otherwise they are dependent. If $i_k = 0$ for all $k = 2,\dots,n$, the random events are called totally dependent.
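To illustrate, recall the classical example usually attributed to S. Bernstein. Let $\Omega = \{1,2,3,4\}$ carry the uniform probability and put $A_1 = \{1,2\}$, $A_2 = \{1,3\}$, $A_3 = \{1,4\}$. Then $P(A_i) = 1/2$ for each $i$, and $P(A_i \cap A_j) = P(\{1\}) = 1/4 = P(A_i)P(A_j)$ for all $i \ne j$, while $P(A_1 \cap A_2 \cap A_3) = 1/4 \ne 1/8$. These events are independent at the level 2 but not at the level 3, and their independence structure is $(i_2, i_3) = (3, 0)$.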

The notion of independence structure was studied by Stoyanov (1995, 1998a, b), who considered the following problem, called the Italian Problem.

Given a sequence of nonnegative integers $(i_2,\dots,i_n)$ satisfying the inequalities $0 \le i_k \le \binom{n}{k}$, does there exist a probability space $(\Omega, \mathcal{F}, P)$ and a collection of random events $A_1,\dots,A_n \in \mathcal{F}$ such that $(i_2,\dots,i_n)$ is the independence structure for the events $A_1,\dots,A_n$?

In Stoyanov (1998b) the affirmative answer to this question is obtained. Actually, Stoyanov proved a more general statement. To formulate his result we introduce the following notation. For random events $A_1,\dots,A_n$ denote
$$
J_k := \Big\{(j_1,\dots,j_k)\colon 1 \le j_1 < \dots < j_k \le n \ \text{ and } \ P\Big(\bigcap_{l=1}^{k} A_{j_l}\Big) = \prod_{l=1}^{k} P(A_{j_l})\Big\}, \qquad k = 2,\dots,n.
$$
Obviously $i_k = |J_k|$ ($k = 2,\dots,n$). We call the sequence $C = C(A_1,\dots,A_n) = (J_2,\dots,J_n)$ the independence characteristic of the set of random events $A_1,\dots,A_n$.

Theorem (Stoyanov, 1998b). Let $C = (J_2,\dots,J_n)$ be given, where
$$
J_k \subseteq \{(j_1,\dots,j_k)\colon 1 \le j_1 < \dots < j_k \le n\}, \qquad k = 2,\dots,n.
$$
Then there exists a probability space $(\Omega, \mathcal{F}, P)$ and a collection of random events $A_1,\dots,A_n \in \mathcal{F}$ such that $C$ is the independence characteristic of $A_1,\dots,A_n$.

That is, we may prescribe not only the number of $k$-tuples ($k = 2,\dots,n$) for which equalities (2) hold, but also the $k$-tuples themselves. In this paper we present an extension of the above result. We deal with random variables and study their correlation properties.

2. Statement of results

Definition 1. Let $\xi_1,\dots,\xi_n$ be random variables having finite expectations. Denote
$$
J_k := \Big\{(j_1,\dots,j_k)\colon 1 \le j_1 < \dots < j_k \le n \ \text{ and } \ E\prod_{l=1}^{k}\xi_{j_l} = \prod_{l=1}^{k} E\xi_{j_l}\Big\}, \qquad k = 2,\dots,n.
$$
The sequence $C = C(\xi_1,\dots,\xi_n) = (J_2,\dots,J_n)$ is called the uncorrelation characteristic of the collection $\xi_1,\dots,\xi_n$ of random variables.

Definition 2. The sequence $(i_2,\dots,i_n)$, where $i_k = |J_k|$ ($k = 2,\dots,n$), is called the uncorrelation structure for the collection $\xi_1,\dots,\xi_n$ of random variables.
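For instance, if $(\xi_1, \xi_2)$ are the coordinates of a point distributed uniformly on the unit circle, then $E\xi_1 = E\xi_2 = E\xi_1\xi_2 = 0$, so $J_2 = \{(1,2)\}$ and the uncorrelation structure is $(i_2) = (1)$, even though $\xi_1$ and $\xi_2$ are clearly dependent ($\xi_1^2 + \xi_2^2 = 1$).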


If $I_{A_1},\dots,I_{A_n}$ are indicators of random events $A_1,\dots,A_n$, then the uncorrelation characteristic $C = C(I_{A_1},\dots,I_{A_n})$ of these random variables coincides with the independence characteristic of the random events, and the uncorrelation structure of $I_{A_1},\dots,I_{A_n}$ coincides with the independence structure of $A_1,\dots,A_n$.

In the sequel we use the following notation. Denote by $U_0$ the subset of the Cartesian product $\{0,1\}^n$ consisting of all elements with $n-1$ or $n$ zero coordinates, that is,
$$
U_0 = \{(0,0,\dots,0),\ (1,0,\dots,0),\ \dots,\ (0,\dots,0,1)\}.
$$
Let $K_n = \{0,1\}^n \setminus U_0$. Clearly, $K_n$ consists of all elements of $\{0,1\}^n$ with at least two coordinates equal to 1; for instance, $K_3 = \{(1,1,0), (1,0,1), (0,1,1), (1,1,1)\}$. To describe the uncorrelation characteristic we introduce the following definitions.

Definition 3. Let $\xi_1,\dots,\xi_n$ be random variables and let $U$ be a subset of $K_n$. We say that the random variables are $U$-uncorrelated if
$$
E\prod_{k=1}^{n}\xi_k^{\varepsilon_k} = \prod_{k=1}^{n} E(\xi_k^{\varepsilon_k}) \quad \text{for all } (\varepsilon_1,\dots,\varepsilon_n) \in U.
$$
We say that the random variables are $U$-correlated if
$$
E\prod_{k=1}^{n}\xi_k^{\varepsilon_k} \ne \prod_{k=1}^{n} E(\xi_k^{\varepsilon_k}) \quad \text{for all } (\varepsilon_1,\dots,\varepsilon_n) \in U.
$$

Definition 4. A set $U \subseteq K_n$ is called the uncorrelation region of random variables $\xi_1,\dots,\xi_n$ if they are $U$-uncorrelated and $K_n \setminus U$-correlated. We denote the uncorrelation region of $\xi_1,\dots,\xi_n$ by $R(\xi_1,\dots,\xi_n)$.

The uncorrelation region of $\xi_1,\dots,\xi_n$ uniquely determines their uncorrelation characteristic and therefore their uncorrelation structure. Indeed, the uncorrelation structure of $\xi_1,\dots,\xi_n$ can be found as
$$
i_k = \#\Big\{(\varepsilon_1,\dots,\varepsilon_n) \in R(\xi_1,\dots,\xi_n)\colon \sum_{i=1}^{n}\varepsilon_i = k\Big\}, \qquad k = 2,\dots,n.
$$
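The bookkeeping between regions and structures is purely combinatorial. The following Python sketch makes it explicit for $n = 3$; the region `U` is a hypothetical choice used only for illustration:

```python
from itertools import product

def uncorrelation_structure(region, n):
    """i_k = number of tuples in the region with exactly k ones, k = 2..n."""
    return tuple(sum(1 for e in region if sum(e) == k) for k in range(2, n + 1))

n = 3
# K_n: all 0/1 tuples of length n with at least two coordinates equal to 1.
K_n = {e for e in product((0, 1), repeat=n) if sum(e) >= 2}

U = {(1, 1, 0), (1, 1, 1)}             # a hypothetical uncorrelation region
assert U <= K_n                         # a region must be a subset of K_n
print(uncorrelation_structure(U, n))    # -> (1, 1), i.e. i_2 = 1, i_3 = 1
```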

The purpose of the present paper is to prove that any subset of $K_n$ is an uncorrelation region for some collection of $n$ random variables with prescribed nondegenerate distributions. It is worth mentioning that Wang (1990) studied the dependence properties of sequences of random variables with given marginal distributions. Recent developments on multidimensional distributions with different kinds of dependence and specified marginals can be found in, e.g., Dall'Aglio et al. (1991) and Benes and Stepan (1997).

Theorem. Let $\eta_1,\dots,\eta_n$ ($n \ge 2$) be random variables having finite expectations and nondegenerate distributions, and let $U$ be a subset of $K_n$. Then there exist random variables $\xi_1,\dots,\xi_n$ satisfying the following conditions:
(a) $\xi_1 \stackrel{d}{=} \eta_1,\ \dots,\ \xi_n \stackrel{d}{=} \eta_n$;
(b) $R(\xi_1,\dots,\xi_n) = U$.

Corollary. For any sequence of nonnegative integers $(i_2,\dots,i_n)$ satisfying $0 \le i_k \le \binom{n}{k}$ and any numbers $p_j \in (0,1)$, $j = 1,\dots,n$, there exists a probability space $(\Omega, \mathcal{F}, P)$ and a collection of random events $A_1,\dots,A_n \in \mathcal{F}$ such that
(a) $P(A_j) = p_j$, $j = 1,\dots,n$;
(b) $(i_2,\dots,i_n)$ is the independence structure for $A_1,\dots,A_n$.

Indeed, it suffices to apply the theorem to independent Bernoulli variables $\eta_j$ with $P(\eta_j = 1) = p_j$ and to any region $U \subseteq K_n$ containing exactly $i_k$ elements with $k$ coordinates equal to 1, and then to set $A_j := \{\xi_j = 1\}$: the independence structure of these events coincides with the uncorrelation structure of their indicators.


3. Proof of the theorem

Let $\eta_1,\dots,\eta_n$ be random variables with nondegenerate distributions $P_1,\dots,P_n$. Without loss of generality, we may assume that the random variables are independent. Let
$$
\tau\colon \{0,1\}^n \to \{1,2,3,\dots,2^n\} \tag{3}
$$
be a one-to-one function (an enumeration of the set $\{0,1\}^n$). If $\tau(\varepsilon_1,\dots,\varepsilon_n) = s$, denote
$$
\omega_s(x_1,\dots,x_n) := x_1^{\varepsilon_1} \cdots x_n^{\varepsilon_n}. \tag{4}
$$
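For instance, when $n = 2$ the system (4) consists of the four monomials $\omega_{\tau(0,0)} = 1$, $\omega_{\tau(1,0)} = x_1$, $\omega_{\tau(0,1)} = x_2$ and $\omega_{\tau(1,1)} = x_1 x_2$.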

Define the following measures on the Borel sets of the real line:
$$
\mu_k(E) := \int_E e^{-x^2}\, P_k(dx), \qquad k = 1,\dots,n, \tag{5}
$$
and consider their Cartesian product $\mu := \mu_1 \times \dots \times \mu_n$. First, we will prove the following statement.

Lemma 1. The system of functions $\{\omega_s(x_1,\dots,x_n)\}_{s=1}^{2^n}$ is linearly independent in $L^2_\mu(\mathbb{R}^n)$.

Proof. To establish the linear independence of the system $\{\omega_s(x_1,\dots,x_n)\}_{s=1}^{2^n}$ we apply induction on $n$. For $n = 1$ the system consists of the two functions $\omega_1 = 1$ and $\omega_2 = x_1$. Assume that $\alpha_1 + \alpha_2 x_1 = 0$ almost everywhere (a.e.) in $L^2_{\mu_1}(\mathbb{R})$. Since the distribution $P_1$ is nondegenerate, that is, the support of $P_1$ consists of at least two points, it follows that the first-degree polynomial $\alpha_1 + \alpha_2 x_1$ has at least two distinct roots and hence $\alpha_1 = \alpha_2 = 0$.

Now, suppose that linear independence is established for $\{\omega_s(x_1,\dots,x_{n-1})\}_{s=1}^{2^{n-1}}$ in $L^2_{\mu_1 \times \dots \times \mu_{n-1}}(\mathbb{R}^{n-1})$. We are to prove the linear independence of $\{\omega_s(x_1,\dots,x_n)\}_{s=1}^{2^n}$ in $L^2_\mu(\mathbb{R}^n)$. Without loss of generality we may assume that the function $\tau$ in (3) maps $\{0,1\}^{n-1} \times \{0\}$ onto $\{1,2,3,\dots,2^{n-1}\}$. Suppose
$$
\sum_{s=1}^{2^n} \alpha_s\, \omega_s(x_1,\dots,x_n) = 0 \tag{6}
$$
in $L^2_\mu(\mathbb{R}^n)$. Equality (6) can be written in the following way:
$$
\sum_{s=1}^{2^{n-1}} \alpha_s\, \omega_s(x_1,\dots,x_{n-1}) + x_n \sum_{s=1}^{2^{n-1}} \beta_s\, \omega_s(x_1,\dots,x_{n-1}) = 0
$$
a.e. in $L^2_\mu(\mathbb{R}^n)$, where the numbers $\beta_s$, $s = 1,\dots,2^{n-1}$, form a permutation of the numbers $\alpha_s$, $s = 2^{n-1}+1,\dots,2^n$. Denote
$$
S(x_1,\dots,x_{n-1}) := \sum_{s=1}^{2^{n-1}} \alpha_s\, \omega_s(x_1,\dots,x_{n-1}), \qquad
T(x_1,\dots,x_{n-1}) := \sum_{s=1}^{2^{n-1}} \beta_s\, \omega_s(x_1,\dots,x_{n-1}).
$$
Consider the set $M \subseteq \mathbb{R}^{n-1}_{(x_1,\dots,x_{n-1})}$ defined as
$$
M := \{(x_1,\dots,x_{n-1})\colon\ S(x_1,\dots,x_{n-1}) + x_n T(x_1,\dots,x_{n-1}) = 0 \ \text{ a.e. in } L^2_{\mu_n}(\mathbb{R}) \text{ w.r.t. the variable } x_n\}.
$$


We are going to prove that $\bar M := \mathbb{R}^{n-1} \setminus M$ satisfies $\mu_1 \times \dots \times \mu_{n-1}(\bar M) = 0$. Assume the contrary, that is, $\mu_1 \times \dots \times \mu_{n-1}(\bar M) > 0$. For each point $(x_1,\dots,x_{n-1}) \in \bar M$ consider the set
$$
N(x_1,\dots,x_{n-1}) := \{x_n\colon S(x_1,\dots,x_{n-1}) + x_n T(x_1,\dots,x_{n-1}) \ne 0\}.
$$
By the definition of $\bar M$ we get that $\mu_n(N(x_1,\dots,x_{n-1})) > 0$. Finally, consider the set $F$ in $\mathbb{R}^n$ defined as follows:
$$
F := \{(x_1,\dots,x_n)\colon (x_1,\dots,x_{n-1}) \in \bar M,\ x_n \in N(x_1,\dots,x_{n-1})\}.
$$
Clearly $S(x_1,\dots,x_{n-1}) + x_n T(x_1,\dots,x_{n-1}) \ne 0$ on $F$. Since $\mu_1 \times \dots \times \mu_{n-1}(\bar M) > 0$ and $\mu_n(N(x_1,\dots,x_{n-1})) > 0$, it follows that $\mu(F) > 0$. This is impossible because $S(x_1,\dots,x_{n-1}) + x_n T(x_1,\dots,x_{n-1}) = 0$ a.e. in $L^2_\mu(\mathbb{R}^n)$. Hence both $S(x_1,\dots,x_{n-1})$ and $T(x_1,\dots,x_{n-1})$ vanish a.e. in $L^2_{\mu_1 \times \dots \times \mu_{n-1}}(\mathbb{R}^{n-1})$. By the induction assumption it follows that $\alpha_1 = \dots = \alpha_{2^{n-1}} = \beta_1 = \dots = \beta_{2^{n-1}} = 0$. Since $\beta_s$, $s = 1,\dots,2^{n-1}$, is a permutation of $\alpha_s$, $s = 2^{n-1}+1,\dots,2^n$, we conclude that $\alpha_s = 0$ for all $s = 1,\dots,2^n$. $\square$

Since the system $\{\omega_s(x_1,\dots,x_n)\}_{s=1}^{2^n}$ is linearly independent in $L^2_\mu(\mathbb{R}^n)$, we may apply to it the Gram–Schmidt orthogonalization process. We obtain a system of orthogonal polynomials $\{H_s(x_1,\dots,x_n)\}_{s=1}^{2^n}$ such that
$$
(H_s, \omega_k) = 0 \quad \text{for } k < s \tag{7}
$$
and
$$
(H_s, \omega_s) \ne 0 \quad \text{for } s = 1,\dots,2^n. \tag{8}
$$
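For numerical experiments with discrete marginals, conditions (7) and (8) amount to classical unnormalized Gram–Schmidt with respect to a weighted inner product. A minimal numpy sketch, assuming `Omega` holds the values of $\omega_1,\dots,\omega_{2^n}$ on a finite grid (one row per function) and `w` the $\mu$-weights of the grid points (both names are hypothetical):

```python
import numpy as np

def gram_schmidt(Omega, w):
    """Orthogonalize the rows of Omega with respect to the weighted inner
    product (f, g) = sum(f * g * w).  Assumes the rows are linearly
    independent (Lemma 1), so no divisor below vanishes."""
    H = []
    for row in Omega:
        h = np.asarray(row, dtype=float).copy()
        for t in H:
            h -= (np.sum(h * t * w) / np.sum(t * t * w)) * t
        H.append(h)
    return np.array(H)
```

Each $H_s$ is $\omega_s$ minus its projection onto the span of the preceding functions, which is exactly what (7) and (8) assert.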

Lemma 2. For any numbers $b_1,\dots,b_{2^n}$ the system of linear equations in the unknowns $c_1,\dots,c_{2^n}$
$$
\sum_{k=1}^{2^n} c_k (H_k, \omega_s) = b_s, \qquad s = 1,\dots,2^n, \tag{9}
$$
has a unique solution.

Proof. The statement of the lemma follows from the fact that, by (7) and (8), the matrix of system (9) is triangular with nonzero diagonal, so its determinant equals $(H_1, \omega_1) \cdots (H_{2^n}, \omega_{2^n}) \ne 0$. $\square$

Let a subset $U$ of $K_n$ be given. Denote by $U^*$ the set $U \cup U_0$. We put
$$
b_s = \begin{cases} 0 & \text{if } \tau^{-1}(s) \in U^*, \\ 1 & \text{if } \tau^{-1}(s) \notin U^*, \end{cases} \qquad s = 1,\dots,2^n, \tag{10}
$$
where $\tau$ is the function (3), and find the coefficients $c_1,\dots,c_{2^n}$ from the linear system (9).


Then the linear combination
$$
Q(x_1,\dots,x_n) := \sum_{k=1}^{2^n} c_k H_k(x_1,\dots,x_n)
$$
satisfies the conditions
$$
(Q, \omega_s) = b_s, \qquad s = 1,\dots,2^n. \tag{11}
$$
Note that
$$
(Q, 1) = (Q, x_i) = 0 \quad \text{for all } i = 1,\dots,n. \tag{12}
$$
Indeed, $\omega_s(x_1,\dots,x_n) = 1$ for $s = \tau(0,\dots,0)$ and $\omega_s(x_1,\dots,x_n) = x_i$ for $s = \tau(0,\dots,0,1,0,\dots,0)$ (with 1 in the $i$th position); since $U^*$ contains $U_0$, condition (12) follows from (10).

Consider the set function $\lambda$ defined on the Borel sets of $\mathbb{R}^n$ as follows:
$$
\lambda(E) := \int_E \big[1 + \delta\, Q(x_1,\dots,x_n)\, e^{-x_1^2-\dots-x_n^2}\big]\, P_1 \times \dots \times P_n (dx_1 \cdots dx_n).
$$

Clearly, $\lambda$ is a signed measure on the $\sigma$-algebra of Borel sets of $\mathbb{R}^n$ for any $\delta$. We choose $\delta$ in such a way that the expression in the brackets is nonnegative. Such a choice is possible because the function $Q(x_1,\dots,x_n)\, e^{-x_1^2-\dots-x_n^2}$ is bounded on $\mathbb{R}^n$; hence $\lambda$ is a nonnegative measure on the $\sigma$-algebra of Borel sets of $\mathbb{R}^n$ for $\delta$ sufficiently small. Moreover, since $(Q, 1) = 0$ by (12), we have $\lambda(\mathbb{R}^n) = 1$. Therefore, $\lambda$ is a probability measure on $\mathbb{R}^n$ for sufficiently small $\delta$.

Next, we show that the projection of $\lambda$ on the coordinate axis $x_i$ coincides with the given distribution $P_i$ for each $i = 1,\dots,n$. We have
$$
\lambda_i((-\infty, x_i]) = P_i((-\infty, x_i]) + \delta \int_{(-\infty, x_i]} d\mu_i \int_{\mathbb{R}^{n-1}} Q(x_1,\dots,x_n)\, d\mu_1 \times \dots \times d\mu_{i-1} \times d\mu_{i+1} \times \dots \times d\mu_n.
$$

Lemma 3. For all $x_i \in \mathbb{R}$ ($i = 1,\dots,n$) the following equality holds:
$$
\int_{\mathbb{R}^{n-1}} Q(x_1,\dots,x_n)\, d\mu_1 \times \dots \times d\mu_{i-1} \times d\mu_{i+1} \times \dots \times d\mu_n = 0.
$$

Proof. Write $Q(x_1,\dots,x_n)$ in the form
$$
Q(x_1,\dots,x_n) = S(x_1,\dots,x_{i-1},x_{i+1},\dots,x_n) + x_i\, T(x_1,\dots,x_{i-1},x_{i+1},\dots,x_n),
$$
which is possible because every monomial $\omega_s$ is of degree at most 1 in each variable. Denote
$$
A := \int_{\mathbb{R}^{n-1}} S(x_1,\dots,x_{i-1},x_{i+1},\dots,x_n)\, d\mu_1 \times \dots \times d\mu_{i-1} \times d\mu_{i+1} \times \dots \times d\mu_n,
$$
$$
B := \int_{\mathbb{R}^{n-1}} T(x_1,\dots,x_{i-1},x_{i+1},\dots,x_n)\, d\mu_1 \times \dots \times d\mu_{i-1} \times d\mu_{i+1} \times \dots \times d\mu_n.
$$
It follows from (12) that $A$ and $B$ satisfy the linear system
$$
\begin{aligned}
A\,(1, 1)_i + B\,(x_i, 1)_i &= 0,\\
A\,(1, x_i)_i + B\,(x_i, x_i)_i &= 0,
\end{aligned} \tag{13}
$$
where $(\cdot,\cdot)_i$ denotes the inner product in $L^2_{\mu_i}(\mathbb{R})$. The determinant of (13) is the Gram determinant of the system $\{1, x_i\}$. It is different from 0 since 1 and $x_i$ are linearly independent in $L^2_{\mu_i}(\mathbb{R})$. Hence $A = B = 0$, and the integral in question equals $A + x_i B = 0$. $\square$


Using Lemma 3 we obtain
$$
\lambda_i((-\infty, x_i]) = P_i((-\infty, x_i]) \quad \text{for all } x_i \in \mathbb{R},
$$
that is, the projections of the measure $\lambda$ on the coordinate axes coincide with the given probability distributions. Now, let $\xi = (\xi_1,\dots,\xi_n)$ be a random vector with the distribution $\lambda$. We will show that $\xi_1,\dots,\xi_n$ are random variables with the required properties. It was already proved that $P_{\xi_1} = P_1,\ \dots,\ P_{\xi_n} = P_n$, that is,
$$
\xi_1 \stackrel{d}{=} \eta_1,\ \dots,\ \xi_n \stackrel{d}{=} \eta_n. \tag{14}
$$

We have to prove that $U$ is the uncorrelation region for $\xi_1,\dots,\xi_n$. Indeed, for any $(\varepsilon_1,\dots,\varepsilon_n) \in \{0,1\}^n$ we have
$$
E(\xi_1^{\varepsilon_1} \cdots \xi_n^{\varepsilon_n}) = E(\eta_1^{\varepsilon_1} \cdots \eta_n^{\varepsilon_n}) + \delta\,(\omega_{\tau(\varepsilon_1,\dots,\varepsilon_n)}, Q).
$$
Using (10) and (11) we conclude that $(\omega_{\tau(\varepsilon_1,\dots,\varepsilon_n)}, Q) = 0$ if and only if $(\varepsilon_1,\dots,\varepsilon_n) \in U^*$. Hence
$$
E(\xi_1^{\varepsilon_1} \cdots \xi_n^{\varepsilon_n}) = E(\eta_1^{\varepsilon_1} \cdots \eta_n^{\varepsilon_n}) \quad \text{if and only if} \quad (\varepsilon_1,\dots,\varepsilon_n) \in U^*. \tag{15}
$$
Since $\eta_1,\dots,\eta_n$ are independent, we have $E(\eta_1^{\varepsilon_1} \cdots \eta_n^{\varepsilon_n}) = E(\eta_1^{\varepsilon_1}) \cdots E(\eta_n^{\varepsilon_n})$, and (14) yields
$$
E(\eta_k^{\varepsilon_k}) = E(\xi_k^{\varepsilon_k}) \quad \text{for all } k = 1,\dots,n.
$$
Now, let $(\varepsilon_1,\dots,\varepsilon_n) \in K_n$. In this case (15) implies
$$
E(\xi_1^{\varepsilon_1} \cdots \xi_n^{\varepsilon_n}) = E(\xi_1^{\varepsilon_1}) \cdots E(\xi_n^{\varepsilon_n}) \quad \text{if } (\varepsilon_1,\dots,\varepsilon_n) \in U
$$
and
$$
E(\xi_1^{\varepsilon_1} \cdots \xi_n^{\varepsilon_n}) \ne E(\xi_1^{\varepsilon_1}) \cdots E(\xi_n^{\varepsilon_n}) \quad \text{if } (\varepsilon_1,\dots,\varepsilon_n) \in K_n \setminus U.
$$
Thus, $U$ is the uncorrelation region for $\xi_1,\dots,\xi_n$. $\square$
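The proof is constructive, and for discrete marginals every step can be carried out numerically. The following Python sketch (the two-point marginals and the region `U` are hypothetical choices) runs the whole construction for $n = 3$: it evaluates the monomials (4) and the $\mu$-weights (5), finds $Q$ satisfying (11) — here by solving the Gram system of the $\omega_s$ directly, which by Lemmas 1 and 2 is equivalent to the triangular system (9) — perturbs the product measure as in the definition of $\lambda$, and checks conclusions (a) and (b) of the theorem.

```python
import itertools
import numpy as np

n = 3
# Hypothetical nondegenerate two-point marginals P_1, P_2, P_3.
supports = [np.array([0.0, 1.0]), np.array([-1.0, 1.0]), np.array([0.5, 1.5])]
probs    = [np.array([0.5, 0.5]), np.array([0.3, 0.7]), np.array([0.6, 0.4])]

# Product grid (the law of the independent vector (eta_1, ..., eta_n)).
idx  = list(itertools.product(range(2), repeat=n))
grid = np.array([[supports[i][j] for i, j in enumerate(t)] for t in idx])
pw   = np.array([np.prod([probs[i][j] for i, j in enumerate(t)]) for t in idx])

# mu-weights as in (5): the factor e^{-x^2} attached coordinatewise.
w = pw * np.exp(-np.sum(grid**2, axis=1))

# The 2^n monomials omega_s of (4), evaluated on the grid (one row each).
eps_list = list(itertools.product((0, 1), repeat=n))
Omega = np.array([np.prod(grid**np.array(e), axis=1) for e in eps_list])

# Right-hand side (10): b_s = 0 iff the exponent tuple lies in U* = U + U_0.
U0 = {e for e in eps_list if sum(e) <= 1}
U  = {(1, 1, 0), (1, 1, 1)}                   # prescribed region, a subset of K_n
b  = np.array([0.0 if (e in U or e in U0) else 1.0 for e in eps_list])

# Find Q with (Q, omega_s) = b_s: solve the Gram system of the omegas,
# invertible by Lemma 1 (and equivalent to (9) by Lemma 2).
G = (Omega * w) @ Omega.T
Q = np.linalg.solve(G, b) @ Omega             # values of Q on the grid

# The perturbed pmf lambda = (1 + delta * Q * e^{-sum x_i^2}) * product pmf.
g = Q * np.exp(-np.sum(grid**2, axis=1))
delta = 0.9 / np.max(np.abs(g))               # small enough to keep the bracket > 0
p_new = (1.0 + delta * g) * pw
assert np.all(p_new > 0) and np.isclose(p_new.sum(), 1.0)

# (a) The marginals are untouched (Lemma 3).
for i in range(n):
    for j, x in enumerate(supports[i]):
        assert np.isclose(p_new[grid[:, i] == x].sum(), probs[i][j])

# (b) The uncorrelation region is exactly U.
for e in eps_list:
    if sum(e) >= 2:
        m     = np.prod(grid**np.array(e), axis=1)
        joint = np.sum(m * p_new)
        prod  = np.prod([np.sum(supports[i]**e[i] * probs[i]) for i in range(n)])
        print(e, "uncorrelated" if np.isclose(joint, prod) else "correlated")
```

With the region chosen above, the output reports $(1,1,0)$ and $(1,1,1)$ as uncorrelated and the remaining two tuples of $K_3$ as correlated, in accordance with (b).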

Acknowledgements

The author is grateful to Prof. Vladimir Azarin and Prof. Alexander Il'inskii for their thorough reading of the manuscript and valuable suggestions. The author would also like to express her gratitude to the anonymous referee for his/her comments, which improved the presentation of the paper.

References

Benes, V., Stepan, J. (Eds.), 1997. Distributions with Given Marginals and Moment Problems. Kluwer Academic Publishers, Dordrecht.

Dall'Aglio, G., Kotz, S., Salinetti, G. (Eds.), 1991. Advances in Probability Distributions with Given Marginals. Kluwer Academic Publishers, Dordrecht.

Feller, W., 1968. An Introduction to Probability Theory and Its Applications, 3rd Edition, Vol. 1. Wiley, New York.

Karloff, H., Mansour, Y., 1997. On the construction of k-wise independent random variables. Combinatorica 17, 91–107.

Stoyanov, J., 1995. Dependency measure for sets of random events or random variables. Statist. Probab. Lett. 23, 12–20.

Stoyanov, J., 1998a. Global dependency measure for sets of random elements: "The Italian Problem" and some consequences. In: Karatzas, I., Rajput, B.S., Taqqu, M.S. (Eds.), Stochastic Processes and Related Topics. Birkhäuser, Boston, pp. 357–375.

Stoyanov, J., 1998b. Probability spaces with prescribed independence/dependence structure. Technical Report N 113, Federal Univ. Rio de Janeiro.

Wang, Y., 1990. Dependent random variables with independent subsets—II. Canad. Math. Bull. 33 (1), 24–28.

Wang, Y.H., Stoyanov, J., Shao, Q.M., 1993. On independence and dependence properties of a set of random events. The American Statistician 47 (2), 112–115.