Annals of Discrete Mathematics 17 (1983) 81-90 © North-Holland Publishing Company

IDEAL AND EXTERIOR WEIGHT ENUMERATORS FOR LINEAR CODES: EXAMPLES AND CONJECTURES

Kenneth P. BOGART
Dartmouth College, Hanover, NH 03755, U.S.A.

1. Introduction

Several years ago Gordon [4] and the author began exploring the use of certain algebraic structures related to codes in the hope that elementary invariants of these structures could be used to distinguish between inequivalent codes. We found examples that led us to believe that exterior (or Grassmann) algebra provided a natural context for the study of codes. At this stage of the research major theorems are elusive, but suggestive examples abound. The purpose of this paper is to present recent examples of the author on weight enumerators of ideals and subalgebras of exterior algebras generated by codes, and the conjectures these examples suggest.

In this paper, an (n, k) code C over a q-element field F is a k-dimensional subspace of the vector space F^n of n-tuples. The weight w(x) of a vector x in F^n is the number of nonzero entries of the vector, and the weight enumerator of a code C is the polynomial (over the integers)

f_C(y, z) = Σ_{i=0}^{n} (number of vectors of weight i in C) y^{n−i} z^{i}.
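For a small code this polynomial can be tabulated by brute force. The following sketch (Python; the function name and the example matrix are illustrative only, and it assumes the binary case q = 2) enumerates all 2^k codewords of a code given by a generator matrix and counts them by weight.

```python
from itertools import product

def weight_enumerator(gen_rows, n):
    """Return A[0..n], where A[i] is the number of codewords of weight i in the
    binary code spanned over GF(2) by the rows of gen_rows."""
    k = len(gen_rows)
    counts = [0] * (n + 1)
    for coeffs in product([0, 1], repeat=k):        # all 2^k messages
        word = [0] * n
        for c, row in zip(coeffs, gen_rows):
            if c:
                word = [(w + r) % 2 for w, r in zip(word, row)]
        counts[sum(word)] += 1                       # weight = number of nonzero entries
    return counts

# Example: a (6, 3) binary code.
G = [[1, 1, 0, 0, 0, 0],
     [0, 0, 1, 1, 0, 0],
     [0, 0, 0, 0, 1, 1]]
A = weight_enumerator(G, 6)
print(A)   # [1, 0, 3, 0, 3, 0, 1]: f_C(y, z) = y^6 + 3y^4 z^2 + 3y^2 z^4 + z^6 = (y^2 + z^2)^3
```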

We say two codes are equivalent if there is a weight-preserving linear transformation between them. It is a result of MacWilliams that this means that one code can be obtained from the other by permuting the standard basis for F^n and perhaps multiplying the standard basis elements by nonzero scalars [1]. Section 2 contains a brief summary of the relevant parts of exterior algebra, developed in an elementary way. Most textbook treatments of exterior algebra assume that the relevant field has characteristic not equal to 2, or in places that it has characteristic 0. Since our fields will often have characteristic 2, we give here


a brief elementary development of the relevant aspects of exterior algebra. With the exception of Theorem 13, the results of Section 2 appear in, or are direct consequences of, the treatment in Bourbaki [2]; the author believes the present development is more elementary. Since working out this approach, the author has found equivalent theorems spread among the books by Chevalley [3], Greub [5], Marcus [8] and Bourbaki [2]; however, only Bourbaki's treatment is independent of characteristic. The interested reader should have little difficulty in translating the results given in Section 2 (with the exception of Theorem 13) into corresponding results in Bourbaki. The author's intent in presenting this expository section is to make the methods of exterior algebra accessible to a wider audience.

2. Notation and concepts in exterior algebra

Given an n-dimensional vector space V over a field F with basis {x_i}, identify V with the homogeneous polynomials of degree 1 in the algebra F[x_1, x_2, ..., x_n] of polynomials in the non-commuting variables x_1, x_2, ..., x_n. By the exterior algebra of V associated with our basis we mean the algebra F[x_1, x_2, ..., x_n]/I, where I is the ideal generated by the squares of the polynomials of degree 1. The algebras we get from different bases are isomorphic; we assume our basis has been fixed and use the symbol E(V) to stand for the exterior algebra (without referring explicitly to the chosen basis in our notation). Since x_i^2, x_j^2 and (x_i + x_j)^2 are all zero, x_i x_j + x_j x_i = 0, so x_i x_j = −x_j x_i. Note that if F has characteristic 2, then

E(V) = F[x_1, x_2, ..., x_n]/(x_1^2, x_2^2, ..., x_n^2), the ordinary ring of polynomials modulo the ideal generated by the squares of the indeterminates. As a vector space over F, E(V) has as its basis the (cosets of the) monomials x_{i_1} x_{i_2} ⋯ x_{i_s} with i_1 < i_2 < ⋯ < i_s. If S = {i_1, i_2, ..., i_s} we denote x_{i_1} x_{i_2} ⋯ x_{i_s} by x_S. The algebra E(V) is graded; the subspace spanned by the monomials x_S in which S has size m, denoted by E_m(V), consists of the elements of degree m.

Theorem 1. If v_i = Σ_j a_{ij} x_j for i = 1, 2, ..., m, then the coefficient of x_S in the product v_1 v_2 ⋯ v_m is 0 unless the size of S is m, and otherwise is det(M_S), where M_S is the matrix whose columns are the columns of the matrix A = (a_{ij}) indexed by the elements of S.


Proof. Expanding the product,

v_1 v_2 ⋯ v_m = Σ_{j_1, j_2, ..., j_m} a_{1 j_1} a_{2 j_2} ⋯ a_{m j_m} x_{j_1} x_{j_2} ⋯ x_{j_m},

and every term in which two of the j's coincide is zero, so only monomials x_S with S of size m appear. For a fixed m-element set S, rewriting each surviving monomial x_{j_1} x_{j_2} ⋯ x_{j_m} with {j_1, ..., j_m} = S as ±x_S (the sign being that of the permutation sorting the indices) shows that the coefficient of x_S is Σ_σ sgn(σ) a_{1 σ(1)} a_{2 σ(2)} ⋯ a_{m σ(m)}, the sum running over the bijections σ from {1, 2, ..., m} to S, and this sum is det(M_S); this formula proves the theorem. □
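Over a field of characteristic 2 the signs disappear, and Theorem 1 can be checked mechanically: multiply v_1, ..., v_m in E(V), using x_S x_T = x_{S∪T} when S and T are disjoint and 0 otherwise, and compare each coefficient with the corresponding determinant mod 2. A minimal sketch (Python, with helper names of my own choosing), for one particular matrix over GF(2):

```python
from itertools import combinations, permutations

def ext_multiply(u, v):
    """Product in E(V) over GF(2); elements are dicts {frozenset S: 1}."""
    out = {}
    for S in u:
        for T in v:
            if S & T:
                continue                      # x_S x_T = 0 when S and T overlap
            U = S | T
            out[U] = out.get(U, 0) ^ 1        # coefficients live in GF(2)
    return {S: c for S, c in out.items() if c}

def det_mod2(rows):
    """Determinant of a square 0/1 matrix, reduced mod 2."""
    m = len(rows)
    total = 0
    for perm in permutations(range(m)):
        term = 1
        for i in range(m):
            term &= rows[i][perm[i]]
        total ^= term
    return total

# v_i = sum_j a_ij x_j, with the a_ij taken from the rows of A over GF(2)
A = [[1, 0, 0, 1],
     [0, 1, 0, 1],
     [0, 0, 1, 1]]
m, n = len(A), len(A[0])

prod = {frozenset(): 1}                       # compute v_1 v_2 v_3 in E(V)
for row in A:
    prod = ext_multiply(prod, {frozenset([j]): 1 for j in range(n) if row[j]})

for S in combinations(range(n), m):           # compare with det(M_S) for every m-set S
    M_S = [[row[j] for j in S] for row in A]
    assert prod.get(frozenset(S), 0) == det_mod2(M_S)
print("coefficients of v_1 v_2 v_3 match the determinants det(M_S) mod 2")
```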

Corollary 2. A set of vectors in V is independent if and only if its product in E(V) is nonzero.

Corollary 3. A vector in V is in the annihilator of v_1 v_2 ⋯ v_m (≠ 0) if and only if it is in the subspace spanned by the v_i's.

Proof. The matrix whose rows are the independent vectors v_i as well as a vector v has rank m if and only if v is a linear combination of the v_i's. □

Lemma 4. If the independent vectors v_1, v_2, ..., v_k are in the annihilator of z ∈ E(V), then there is an element y ∈ E(V) such that z = v_1 v_2 ⋯ v_k y.

Proof. Extend v_1, v_2, ..., v_k to a basis by adding vectors v_{k+1} through v_n. Let

z = Σ_S c_S v_S,  where v_S = v_{i_1} v_{i_2} ⋯ v_{i_s} if S = {i_1, i_2, ..., i_s}.

Since z v_i = 0 for i ≤ k, each i less than or equal to k is in S for each S with c_S ≠ 0. Thus each S contains {1, 2, ..., k}, and so v_1 v_2 ⋯ v_k is a factor of z, while the other factor is

y = Σ_S c_S v_{S − {1, 2, ..., k}}.  □

Given a k-dimensional subspace C of V, its vectors generate a subalgebra of E(V). Since this subalgebra has dimension at least 2^k and since the squares of all elements of C must be zero, this subalgebra is isomorphic to E(C) and thus is a graded algebra in which the space of elements of degree k is one-dimensional. This yields

Theorem 5. For any bases B_1 and B_2 of C, the products ∏_{v∈B_1} v and ∏_{v∈B_2} v are scalar multiples of each other.


Thus the vector of coordinates of one of these basis products in the space E_k(V) of all elements of degree k in E(V) is determined up to a nonzero multiple, i.e., is a 'projective coordinate vector'. Such a coordinate vector is called a Plücker coordinate vector for C. We write

∏_{v∈B} v = Σ_S P_S x_S

for some basis B of C, so that P_S stands for the "Plücker coordinate of C relative to x_S". We let ε(R, S) = 0 if R ∩ S ≠ ∅, and otherwise we let ε(R, S) equal the sign of the permutation that rearranges the list consisting of R in increasing order followed by S in increasing order into a single list in increasing order. It follows that for arbitrary elements u and w of E(V), if u = Σ_S a_S x_S and w = Σ_T b_T x_T, then

u w = Σ_{S, T} ε(S, T) a_S b_T x_{S∪T}.

The following lemma, which appears for characteristic zero in Van der Waerden [9], is useful in removing the standard assumption that the characteristic of F is not 2.

Lemma 6. If C has dimension k and Plücker coordinates P_S, then for each (k − 1)-element set R, the vector

Π_R = Σ_{i=1}^{n} ε(R, {i}) P_{R∪{i}} x_i

is in C.

Proof. Let u_1, u_2, ..., u_k be a basis for C and let M be the matrix whose rows are the coordinate vectors of the u_j's relative to the basis x_1, x_2, ..., x_n of V. If i ∉ R, P_{R∪{i}} is the determinant of the matrix M_{R∪{i}} consisting of the columns of M indexed by R ∪ {i}. We expand this determinant along the ith column of M_{R∪{i}}, letting M_j denote the matrix obtained by deleting row j and column i from M_{R∪{i}}, and letting u_j(i) denote the ith coordinate of u_j. We obtain

ε(R, {i}) P_{R∪{i}} = Σ_{j=1}^{k} (−1)^{k+j} u_j(i) det(M_j).    (1)

If i ∈ R, let M_j be the matrix obtained by deleting row j from the submatrix of M whose columns are indexed by R. In this case, both sides of equation (1) above are 0, the right-hand side being an expansion of the determinant of a matrix with two equal columns.


Thus

Π_R = Σ_{i=1}^{n} ε(R, {i}) P_{R∪{i}} x_i = Σ_{j=1}^{k} (−1)^{k+j} det(M_j) Σ_{i=1}^{n} u_j(i) x_i = Σ_{j=1}^{k} (−1)^{k+j} det(M_j) u_j,

so we have realized Π_R as an explicit linear combination of our basis vectors for C. □

We now assume there is a bilinear form B on V for which x_1, x_2, ..., x_n is an orthogonal basis; in the application to coding theory these vectors will form an orthonormal basis. We let a_i = B(x_i, x_i), let a_S = ∏_{i∈S} a_i with a_∅ = 1, and let a_N = ∏_{i=1}^{n} a_i. For each u = Σ_S b_S x_S in E(V) we define

u* = Σ_S ε(S, S') a_S b_S x_{S'},

using S' to denote the complement of S in N. This 'star' operator was apparently introduced by Hodge in the case of fields of characteristic 0 (see [6] or [8]), was reintroduced as a(u) by Chevalley [3] for fields of characteristic not 2, and is intimately related to the map φ(u) used by Bourbaki in III, 11.1 [2]; in this last case φ actually relates elements in the exterior algebras of V and its linear dual. There does not appear to be an explicit treatment of the star operator for characteristic 2, though implicitly the results that follow can be derived using Bourbaki's φ.

Lemma 7. For any u ∈ E(V), u** = a_N u.

Lemma 8. The map x → x* is a weight-preserving linear isomorphism of E_k(V) onto E_{n−k}(V).

Lemma 9. ε(R, (R ∪ {i})') = ε(R, R') ε(R, {i}).


Lemma 10. Let x ∈ V be orthogonal to v_1, v_2, ..., v_k ∈ V. Then x(v_1 v_2 ⋯ v_k)* = 0.

Proof. Suppose v_1 v_2 ⋯ v_k = Σ_S P_S x_S. For each subset R of N of size k − 1, the vector

Π_R = Σ_{i=1}^{n} ε(R, {i}) P_{R∪{i}} x_i

is in the space spanned by the v_i's, so it is orthogonal to x. Thus if x = Σ_{i=1}^{n} c_i x_i, using angle brackets to denote the inner product,

⟨x, Π_R⟩ = Σ_{i=1}^{n} c_i a_i ε(R, {i}) P_{R∪{i}} = 0.


However, collecting the coefficient of each x_{R'} in x(v_1 v_2 ⋯ v_k)* and applying Lemma 9 to the signs shows that each such coefficient is a multiple of one of the inner products ⟨x, Π_R⟩; hence

x(v_1 v_2 ⋯ v_k)* = 0.  □

A result of major interest in coding theory is our next theorem.

Theorem 11. If {w_1, w_2, ..., w_k} is a basis for the subspace C of V, then (w_1 w_2 ⋯ w_k)* is equal to v_1 v_2 ⋯ v_{n−k}, where {v_1, v_2, ..., v_{n−k}} is a basis for the orthogonal complement C^⊥ of C.

Proof. By Lemma 10, u(w_1 w_2 ⋯ w_k)* is 0 for all u ∈ C^⊥. Thus C^⊥ is a subspace of the annihilator in E(V) of (w_1 w_2 ⋯ w_k)*. By Lemma 4, if v_1, v_2, ..., v_{n−k} is a basis for C^⊥, then for some y,

v_1 v_2 ⋯ v_{n−k} y = (w_1 w_2 ⋯ w_k)*.

Since both v_1 v_2 ⋯ v_{n−k} and (w_1 w_2 ⋯ w_k)* have degree n − k, y must be a scalar, and so may be "absorbed" into one of the v_i's. □

Theorem 12. The mapping ⟨x, y⟩ = (x y*)*/a_N is an inner product on E_i(V) for which the monomials x_S with S of size i form an orthogonal basis. Further, this extends by linearity to an inner product on E(V) which agrees with the original inner product on V, and in which E_i and E_j are orthogonal if i ≠ j.

Proof. Note that if S ≠ T but both have size i, then S ∩ T' is nonempty, so x_S (x_T)* = 0. Thus

(Σ_S P_S x_S)(Σ_S Q_S x_S)* = (Σ_S P_S Q_S ε(S, S') a_S) x_N.

This is clearly a symmetric bilinear map from E_i(V) to E_n(V); applying the * map again gives a scalar, and dividing by a_N normalizes the value so that the inner product agrees with the original one on V. □


Theorem 13. The orthogonal complement of E(C) in E(V) is the ideal generated by C^⊥, the orthogonal complement of C in V.

Proof. Let C have {v_1, v_2, ..., v_k} as a basis. By Lemma 10, a vector u is in C^⊥ if and only if u(v_1 v_2 ⋯ v_k)* = 0. But then u is orthogonal to any subset of {v_1, v_2, ..., v_k}, so u(v_{i_1} v_{i_2} ⋯ v_{i_m})* = 0. Thus if u_1, u_2, ..., u_J ∈ C^⊥ and z = z_1 u_1 + z_2 u_2 + ⋯ + z_J u_J (with z_1, ..., z_J ∈ E(V)) has degree m, then

z(v_{i_1} v_{i_2} ⋯ v_{i_m})* = 0

because each u_j(v_{i_1} v_{i_2} ⋯ v_{i_m})* = 0. Therefore each homogeneous element of the ideal generated by C^⊥ is orthogonal in E(V) to all elements of E(C) of the same degree, and it is orthogonal by definition to the elements of E(C) of different degree. Therefore each element of the ideal generated by C^⊥ is orthogonal to each element of E(C).

Now suppose {u_1, u_2, ..., u_{n−k}} is a basis for C^⊥ and extend it to a basis {u_1, u_2, ..., u_n} of V. The intersection of the ideal generated by C^⊥ with E_i(V) will contain each monomial of degree i in the vectors u_j except for the monomials that contain none of u_1, u_2, ..., u_{n−k}. There are (k choose i) such monomials, so the dimension of I(C^⊥) ∩ E_i(V) is at least

(n choose i) − (k choose i).

Therefore the dimension of I(C^⊥) is at least

Σ_{i=0}^{n} [ (n choose i) − (k choose i) ] = 2^n − 2^k,

and since this is the dimension of E(C)^⊥, I(C^⊥) must equal E(C)^⊥. □
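Theorem 13 can be verified numerically for a small binary code. The sketch below (Python; the helper names are mine, and it assumes the orthonormal case a_i = 1, in which the inner product of Theorem 12 reduces to the ordinary dot product on monomial coordinates) spans E(C) and the ideal generated by C^⊥ inside E(V) over GF(2) and checks that the two spans are orthogonal and of complementary dimension.

```python
from itertools import combinations

def rank_mod2(rows):
    """Rank over GF(2) of a list of 0/1 vectors (Gaussian elimination)."""
    rows = [list(r) for r in rows]
    rank = 0
    for col in range(len(rows[0])):
        piv = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][col]:
                rows[r] = [(a + b) % 2 for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank

def monomials(n):
    """All subsets of {0,...,n-1}: the monomial basis of E(V)."""
    return [frozenset(S) for i in range(n + 1) for S in combinations(range(n), i)]

def coords(elem, basis):
    return [elem.get(S, 0) for S in basis]

def ext_product(vectors, n):
    """Product of vectors of V in E(V) over GF(2), as a dict {subset: 1}."""
    prod = {frozenset(): 1}
    for v in vectors:
        new = {}
        for S, c in prod.items():
            for j in range(n):
                if v[j] and j not in S:
                    U = S | frozenset([j])
                    new[U] = new.get(U, 0) ^ c
        prod = {T: c for T, c in new.items() if c}
    return prod

n = 4
G      = [[1, 0, 1, 1], [0, 1, 0, 1]]   # basis of C
G_perp = [[1, 0, 1, 0], [1, 1, 0, 1]]   # basis of C-perp (all dot products with G vanish mod 2)
basis = monomials(n)

# E(C): spanned by the products of all subsets of the basis of C
EC = [coords(ext_product([G[i] for i in T], n), basis)
      for r in range(len(G) + 1) for T in combinations(range(len(G)), r)]

# I(C-perp): spanned by x_S * u for every monomial x_S and every u in the basis of C-perp
ideal = []
for S in basis:
    for u in G_perp:
        elem = {}
        for j in range(n):
            if u[j] and j not in S:
                U = S | frozenset([j])
                elem[U] = elem.get(U, 0) ^ 1
        ideal.append(coords(elem, basis))

# Theorem 13 check: the spans are orthogonal and of complementary dimension 2^n.
assert all(sum(a * b for a, b in zip(x, y)) % 2 == 0 for x in EC for y in ideal)
assert rank_mod2(EC) + rank_mod2(ideal) == 2 ** n
print("I(C-perp) is the orthogonal complement of E(C) for this example")
```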

3. Applications to coding theory

We regard an (n, k) code as a k-dimensional subspace C of the vector space V = F^n with standard basis x_1, x_2, ..., x_n, and as before use E(C) to stand for the subalgebra of E(V) generated by C and I(C) to stand for the ideal of E(V) generated by C. We define the exterior weight enumerator f_{E(C)}(y, z) to be the weight enumerator of E(C) regarded as a code in E(V) with the monomials in the x_i's as a basis. The ideal weight enumerator of C is the weight enumerator of the ideal generated by C. In light of Theorem 13 and the MacWilliams theorem [7], studying ideal weight enumerators is essentially the same as studying exterior weight enumerators of dual codes, so we restrict our attention to exterior weight enumerators.

Theorem 14. The exterior weight enumerator of C is the product of the weight enumerators of the graded pieces E_i(C): f_{E(C)}(y, z) = ∏_{i=0}^{k} f_{E_i(C)}(y, z).

Proof. The weight enumerator of a code direct sum of codes is the product of their weight enumerators, and the representation of E(C) as the sum of the E_i(C) is a code direct sum. □

Note that E_0(C) is the field F, so its weight enumerator is y + (q − 1)z, and E_1(C) is C itself, so its weight enumerator is the ordinary weight enumerator of C. One other factor of f_{E(C)} is easily interpreted, namely f_{E_k(C)}(y, z). Since E_k(C) must be the one-dimensional space spanned by a basis product, it also has q − 1 nonzero vectors in it, all having the same weight. In light of Theorem 1, this weight is the number of k-element sets of columns of a generator matrix for C that have a nonzero determinant; thus this weight is the number of information sets of C. Since this is the weight of a Plücker coordinate vector for C, we call it the Plücker weight of C, denoted by PW(C). Summarizing: the factors f_{E_0(C)}, f_{E_1(C)} = f_C, and f_{E_k(C)} of f_{E(C)} are determined by q, the ordinary weight enumerator of C, and the Plücker weight PW(C).
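For a binary code the Plücker weight can therefore be computed directly from a generator matrix, by counting the k-element sets of columns whose k × k submatrix is nonsingular mod 2. A short sketch (Python; the function names are illustrative):

```python
from itertools import combinations

def det_mod2(M):
    """Determinant mod 2 of a square 0/1 matrix, by elimination."""
    M = [row[:] for row in M]
    m = len(M)
    for i in range(m):
        piv = next((r for r in range(i, m) if M[r][i]), None)
        if piv is None:
            return 0
        M[i], M[piv] = M[piv], M[i]
        for r in range(i + 1, m):
            if M[r][i]:
                M[r] = [(a + b) % 2 for a, b in zip(M[r], M[i])]
    return 1

def plucker_weight(G):
    """Number of information sets of the binary code with generator matrix G."""
    k, n = len(G), len(G[0])
    return sum(det_mod2([[row[j] for j in S] for row in G])
               for S in combinations(range(n), k))

G = [[1, 1, 0, 0, 0, 0],
     [0, 0, 1, 1, 0, 0],
     [0, 0, 0, 0, 1, 1]]
print(plucker_weight(G))   # 8 information sets for this (6, 3) code
```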

Already this information allows us to distinguish between some codes with the same ordinary weight enumerator. For example, the two codes given by the generator matrices below have weight enumerator (z² + y²)³:

G1 = [ 1 1 0 0 0 0 ]        G2 = [ 1 1 0 0 0 0 ]
     [ 0 0 1 1 0 0 ]             [ 0 1 1 0 0 0 ]
     [ 0 0 0 0 1 1 ]             [ 0 0 1 1 1 1 ]
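Taking G1 and G2 as printed above, the exterior weight enumerators themselves can be compared by brute force over GF(2): span E(C) inside the 2^6-dimensional space E(V) by the products of all subsets of the generator rows, and tally the weights of its 2^8 elements. A rough sketch (Python; function names are mine):

```python
from itertools import combinations, product

def ext_product(vectors, n):
    """Product of vectors of GF(2)^n inside E(V), returned as the set of
    monomials (subsets of coordinate positions) having coefficient 1."""
    prod = {frozenset()}
    for v in vectors:
        new = set()
        for S in prod:
            for j in range(n):
                if v[j] and j not in S:
                    new ^= {S | frozenset([j])}   # symmetric difference = addition mod 2
        prod = new
    return prod

def exterior_weight_enumerator(G, n):
    """Weight distribution of E(C), viewed as a binary code of length 2^n."""
    k = len(G)
    monos = [frozenset(S) for i in range(n + 1) for S in combinations(range(n), i)]
    # a basis of E(C): the products of all subsets of the rows of G
    basis = []
    for r in range(k + 1):
        for T in combinations(range(k), r):
            supp = ext_product([G[i] for i in T], n)
            basis.append([1 if S in supp else 0 for S in monos])
    counts = [0] * (2 ** n + 1)
    for coeffs in product([0, 1], repeat=len(basis)):   # all 2^(2^k) elements of E(C)
        word = [0] * len(monos)
        for c, b in zip(coeffs, basis):
            if c:
                word = [(w + x) % 2 for w, x in zip(word, b)]
        counts[sum(word)] += 1
    return counts

G1 = [[1, 1, 0, 0, 0, 0], [0, 0, 1, 1, 0, 0], [0, 0, 0, 0, 1, 1]]
G2 = [[1, 1, 0, 0, 0, 0], [0, 1, 1, 0, 0, 0], [0, 0, 1, 1, 1, 1]]
# The top basis products alone have weights 8 and 10, and the full distributions differ:
print(exterior_weight_enumerator(G1, 6) == exterior_weight_enumerator(G2, 6))   # False
```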

However, the code generated by G1 has Plücker weight 8 and the code generated by G2 has Plücker weight 10, so the codes have different exterior weight enumerators and are therefore not equivalent. Similarly, it is possible to give examples of codes with different weight enumerators and the same Plücker weight; once again such codes will have different exterior weight enumerators. There are no known examples of inequivalent binary codes with the same exterior weight enumerators.

Both the ordinary weight enumerator and the Plücker weight are examples of matroidal invariants of codes, namely invariants that depend only on which sets of coordinate positions are information sets. For arbitrary fields, it is not the case that two codes with the same information sets are equivalent, though two such binary codes are identical.

Theorem 16. Two binary codes with the same information sets are identical.

Proof. For two binary codes to have the same information sets, they must have


the same Plücker coordinate vector. However, Corollary 3 shows that two subspaces with the same Plücker coordinate vector are identical. □

The matrices G_3 and G_4 below are generator matrices for MDS codes over an extension field of GF(4) in which α is a cube root of λ and λ² + λ + 1 = 0. Note that 1, α, and α² are linearly independent over GF(4).

[Generator matrix G_3, with entries in GF(4) = {0, 1, λ, λ + 1}, and generator matrix G_4, whose entries also involve α²]

Generator matrices for the 'second exterior powers' E_2(C_3) and E_2(C_4) are

[Generator matrices G_3' and G_4' for E_2(C_3) and E_2(C_4)]

The sum of the first two rows of one of these matrices has weight 13, but no vector in the code spanned by the other has weight 13, so the codes C_3 and C_4 spanned by G_3 and G_4 cannot be equivalent. This example might lead us to conjecture that exterior weight enumerators distinguish between inequivalent codes over all fields. However, G_5 and G_6 below are generator matrices for two MDS codes C_5 and C_6 over the same extension field of GF(4) as above.

[Generator matrices G_5 and G_6]

By direct computation, it is possible to check that E(C_5) and E(C_6) have the same weight enumerators. The matrices G_5' and G_6' are generator matrices for the codes E_2(C_5) and E_2(C_6):

[Generator matrices G_5' and G_6' for E_2(C_5) and E_2(C_6)]

Now the generator matrices G_5'' and G_6'' for E_2(E_2(C_5)) and E_2(E_2(C_6)) will each have 90 columns, so they are not listed here! Of these columns they will share 50,


while the 40 columns that involve 2 × 2 determinants of submatrices of G_5' or G_6' using columns 4, 7, 9 or 10 will be different. In fact, any column of G_5'' derived from two columns of {4, 7, 9, 10} turns out to be a multiple of (1, λ, λ + 1)^T, while any column of G_6'' derived from two columns of {4, 7, 9, 10} turns out to be a multiple of (1, α, α²)^T. Thus when we compute the Plücker coordinate vector of E_2(E_2(C_5)) the determinant

[3 × 3 determinant with entries in GF(4)] = 0

arises often, and the corresponding Plücker coordinate of E_2(E_2(C_6)) is the determinant

[3 × 3 determinant involving α and α²] = 1 + α + α² ≠ 0.

However, whenever a Plücker coordinate of E_2(E_2(C_6)) is zero, the corresponding Plücker coordinate of E_2(E_2(C_5)) is zero. Thus the Plücker weight of E_2(E_2(C_6)) is higher than that of E_2(E_2(C_5)), so that E(C_5) and E(C_6) have different exterior weight enumerators, and therefore C_5 and C_6 are not equivalent.

The examples given here suggest that we may well be able to distinguish between inequivalent codes by computing weight enumerators (or perhaps Plücker weights?) of iterated exterior algebras of the codes. In the case of binary codes, the author has been able neither to prove nor to give a counterexample to the conjecture that the exterior weight enumerator of a code determines the code up to equivalence.
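The second-exterior-power construction used in these examples can be mechanized: by Theorem 1, when the rows of G are independent, a generator matrix for E_2(C) has rows indexed by pairs of rows of G, columns indexed by pairs of columns of G, and entries equal to the corresponding 2 × 2 minors. The sketch below (Python, names mine) carries this out over a prime field as a stand-in; the paper's own examples require arithmetic in an extension field of GF(4).

```python
from itertools import combinations

def second_exterior_power(G, p=2):
    """Generator matrix of E_2(C) from a generator matrix G over GF(p):
    rows <-> pairs of rows of G, columns <-> pairs of columns of G,
    entries = the corresponding 2 x 2 minors mod p (Theorem 1)."""
    k, n = len(G), len(G[0])
    row_pairs = list(combinations(range(k), 2))
    col_pairs = list(combinations(range(n), 2))
    return [[(G[i][a] * G[j][b] - G[i][b] * G[j][a]) % p
             for (a, b) in col_pairs]
            for (i, j) in row_pairs]

# Iterating the construction gives E_2(E_2(C)), as in the discussion above.
G2 = [[1, 1, 0, 0, 0, 0], [0, 1, 1, 0, 0, 0], [0, 0, 1, 1, 1, 1]]
E2 = second_exterior_power(G2)
E2E2 = second_exterior_power(E2)
print(len(E2), len(E2[0]), len(E2E2), len(E2E2[0]))   # 3 15 3 105
```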

References

[1] K. Bogart, D. Goldberg and J. Gordon, An elementary proof of the MacWilliams theorem on equivalence of codes, Inform. Control 37(1) (1978) 19.
[2] N. Bourbaki, Elements of Mathematics, Algebra, Part 1 (Hermann, Paris, 1974) Chapter III.
[3] C. Chevalley, Algebraic Theory of Spinors (Columbia University Press, New York, 1954).
[4] J. Gordon, Application of the exterior algebra to graphs and codes, Ph.D. Dissertation, Dartmouth College, 1977.
[5] W. Greub, Multilinear Algebra (Springer-Verlag, Berlin/Heidelberg/New York, 1967).
[6] W.V.D. Hodge and D. Pedoe, Methods of Algebraic Geometry (Cambridge University Press, London, 1947).
[7] F.J. MacWilliams and N.J.A. Sloane, The Theory of Error-Correcting Codes, Part I, North-Holland Mathematical Library Vol. 16 (North-Holland, Amsterdam, 1977) Chapter 5.
[8] M. Marcus, Finite Dimensional Multilinear Algebra, Part 2 (Marcel Dekker, New York, 1975).
[9] B.L. van der Waerden, Einführung in die algebraische Geometrie (Dover, New York, 1945).