Cyclic distances of idempotent convolutional codes


José Gómez-Torrecillas (a), F.J. Lobillo (a), Gabriel Navarro (b)

(a) Department of Algebra and CITIC, University of Granada, Spain. (b) Department of Computer Science and Artificial Intelligence, and CITIC, University of Granada, Spain.

✩ Research partially supported by grant MTM2016-78364-P from Agencia Estatal de Investigación (AEI) and from Fondo Europeo de Desarrollo Regional (FEDER). E-mail addresses: [email protected] (J. Gómez-Torrecillas), [email protected] (F.J. Lobillo), [email protected] (G. Navarro).

Article history: Received 13 November 2018; accepted 4 July 2019. Keywords: cyclic convolutional code; free distance; column and row distances; Brouwer–Zimmermann algorithm.

Abstract. We show that, for convolutional codes endowed with a cyclic structure, it is possible to define and compute two sequences of positive integers, called cyclic column and row distances, which present a more regular behavior than the classical column and row distance sequences. We then design an algorithm for the computation of the free distance based on the calculation of the cyclic column distance sequence.

1. Introduction

The free distance $d_{\mathrm{free}}$ of a convolutional code was introduced as a useful tool for sequential decoding by Costello (1969), and it became one of its most relevant parameters (Lin and Costello, 2004; Johannesson and Zigangirov, 1999). Its computation is not, in general, an easy task. Even the calculation of the Hamming distance of a binary linear block code is an NP-hard problem, see Vardy (1997). A method to calculate the free distance is based on the computation of terms of the sequences of column distances and row distances until they meet (see Johannesson and Zigangirov, 1999, Ch. 3). More precisely, from a given basic generator matrix of the code, one computes sequences of positive integers $\{d_l^c\}_{l\geq 0}$ and $\{d_l^r\}_{l\geq 0}$ such that

$d_l^c \leq d_{l+1}^c \leq d_{\mathrm{free}} \leq d_{l+1}^r \leq d_l^r$



for all $l \geq 0$, and $\lim_{l\to\infty} d_l^c = d_{\mathrm{free}} = \lim_{l\to\infty} d_l^r$. Thus, there must be some index $l$ such that $d_l^c = d_l^r = d_{\mathrm{free}}$. These column and row distance sequences seem to have few further regularities for a general convolutional code, although for some particular families of convolutional codes, like those constructed in Costello (1969), the column distance sequence is better controlled. In this paper we prove that, for convolutional codes endowed with suitable cyclic structures, it is possible to compute a sequence of positive integers $\{\delta_l^c\}_{l\geq 0}$ such that

$\delta_l^c \leq \delta_{l+1}^c \leq d_{\mathrm{free}}$

for all $l \geq 0$, and $\lim_{l\to\infty} \delta_l^c = d_{\mathrm{free}}$. In addition, adapted as it is to the cyclic structure of the code, this sequence presents a more regular behavior than the classical column distance. In order to describe this regularity, let us say that the role of the polynomial generator matrix of the code will be played in our setting by a non-commutative polynomial in one variable $z$, representing the delay operator, with coefficients in some finite (commutative or not) algebra $A$ over a finite field $F$. Details of the construction, which has its roots in Piret (1976) and Roos (1979), are given in Section 3. The degree $m$ of such a generator polynomial leads to a remarkable property of the column distance sequence: we prove (Theorem 18) that, if $l$ is an index such that $\delta_l^c = \delta_{l+m}^c$, then $\delta_l^c = d_{\mathrm{free}}$. We present an algorithm based on this property (see Algorithm 4). It computes terms of the sequence of cyclic column distances, and it stops when it gets $m$ repeated terms. These results were presented in the ISSAC'18 contribution (Gómez-Torrecillas et al., 2018), although some remarks on isometries have been included here and we present a more conceptual proof of Proposition 10. The aforementioned non-commutative generator polynomial also allows the computation of the classical row distance sequence, see Proposition 14. We also show that it is possible to know a priori an index at which the cyclic row and column distances meet and are therefore equal to the free distance (Theorem 22 and Corollary 28). These results were not included in Gómez-Torrecillas et al. (2018) and constitute Section 4. Each term of the column distance sequence is computed as the minimum weight of a block code presented as the set-theoretical difference $V \setminus W$, where $V$ is a linear code and $W$ is a nonzero vector subspace of $V$. To compute them, in Section 5, we discuss in detail the algorithms sketched in Gómez-Torrecillas et al. (2018). The first one (Algorithm 1) is an adaptation of the general procedure for computing the minimum Hamming distance of a linear code from a parity check matrix, and it should be used if $F$ is large. The second algorithm for this task adjusts the Brouwer–Zimmermann algorithm to this setting (see Algorithm 2).

2. Convolutional codes and their distances: an outline

We are going to freely use some results and terminology from Johannesson and Zigangirov (1999) on convolutional codes, it being understood that they remain valid when the binary field used there is replaced by any finite field $F$. Indeed, convolutional codes are modeled over an arbitrary finite field in fundamental references like Forney (1970); Piret (1976); Roos (1979); Rosenthal and Smarandache (1999). Let $k \leq n$ be positive integers. A rate $k/n$ convolutional transducer $G$ transforms information sequences $u = (u_i)_{i\in\mathbb{Z}}$ ($u_i \in F^k$) into code sequences $v = (v_i)_{i\in\mathbb{Z}}$ ($v_i \in F^n$), see Johannesson and Zigangirov (1999, Chapter 2). The map $G$ is subject to some requirements. First, both the information sequence and the code sequence start at some finite time. This allows us to represent them as $u = \sum_{i=i_0}^{\infty} u_i z^i \in F^k((z))$ and $v = \sum_{j=j_0}^{\infty} v_j z^j \in F^n((z))$, for some $i_0, j_0 \in \mathbb{Z}$. Here, $F^k((z))$ denotes the set of all Laurent series in the variable $z$ with coefficients in $F^k$. The obvious bijection $F^k((z)) \cong F((z))^k$ shows how to consider $F^k((z))$ as a vector space of dimension $k$ over the field of Laurent series $F((z))$. Obviously, the same considerations apply for the code sequences. Then it is assumed that

v = uG , where G is a k × n full rank matrix with entries in the rational function field F( z), which is a subfield of F(( z)). The rational functions are used because they can be realized, via rational transfer functions, as linear shift registers with feedback (see e.g. Johannesson and Zigangirov, 1999, Figure 2.1).


A rate $k/n$ convolutional code over $F$ is then defined as the image of a convolutional transducer, that is, a $k$-dimensional vector subspace $D$ of $F^n((z))$ which has a basis whose vectors belong to $F(z)^n$. The matrix $G$ is known as a generator matrix of the code whenever its rational entries belong to the formal power series ring $F[[z]]$, that is, their denominators, in an irreducible representation as fractions of polynomials, are delay free, in the terminology of Johannesson and Zigangirov (1999, Chapter 2). The convolutional code $D$ is uniquely determined by the row space of $G$ in $F(z)^n$. In other words, a rate $k/n$ convolutional code could have been equivalently defined as a $k$-dimensional vector subspace of $F(z)^n$. Taking this remark, besides some basic facts on finitely generated modules over principal ideal domains, into account, we get the following module-theoretical and coordinate-free refined version of Forney (1970, Theorem 3).

Proposition 1. Let $k \leq n$. The map $D \mapsto D \cap F^n[z]$ establishes a bijection between the set of all convolutional codes of rate $k/n$ and the set of all $F[z]$-submodules of $F^n[z]$ of rank $k$ that are direct summands of $F^n[z]$.

By stacking the (row) vectors of a basis of the $F[z]$-module $D \cap F^n[z]$, we get a matrix with polynomial entries called a basic generator matrix of the code. Therefore, $D \cap F^n[z]$ is a delay free and observable code in the sense of Rosenthal et al. (1996). As a consequence, any convolutional code $D$ determines (and is determined by) a unique delay free observable code. Let $C = D \cap F^n[z]$, where $D$ is a rate $k/n$ convolutional code. The Hamming weight $w_H$ on $F^n$ can be extended to any polynomial with vector coefficients $f = \sum_i z^i f_i \in F^n[z]$ as

$w_H(f) = \sum_i w_H(f_i).$

The free distance of $C$ is defined as

$d_{\mathrm{free}}(C) = \min\{w_H(f) : f \in C,\ f \neq 0\},$

and it coincides with the free distance of $D$, as defined in Johannesson and Zigangirov (1999, Chapter 3), since $C$ is a direct summand of $F^n[z]$ (see also Costello, 1974, last paragraph of page 357). It follows as well (see e.g. Gluesing-Luerssen and Schmale, 2004, Proposition 2.2) that there exists $f \in C$ with $f_0 \neq 0$ such that $d_{\mathrm{free}}(C) = w_H(f)$. One way to compute the free distance of a convolutional code consists in the calculation of the classical column and row distance sequences until they coincide. We recall the definition and some properties of these sequences. For each $f = \sum_i z^i f_i \in F^n[z]$, the truncated polynomial at degree $j$ is

$f_{[0,j]} = \sum_{i=0}^{j} z^i f_i.$

The $j$th column distance of $C$ is defined as

$d_j^c = \min\{w_H(f_{[0,j]}) : f \in C,\ f_0 \neq 0\}.$

In fact, as observed in the proof of Johannesson and Zigangirov (1999, Theorem 3.1), this definition matches the column distance of any (rational) generator matrix $G$ of $D$ such that $G(0)$ has full rank (e.g. when $G$ is a basic generator matrix). The $j$th row distance of $C$ (or of $D$) with respect to a basic generator matrix $G = \sum_{i=0}^{m} z^i G_i$ of degree $m$ is defined as

$d_j^r = \min\{w_H(f) : f \in C,\ f \neq 0,\ \deg(f) \leq j+m\}.$

This definition depends on the degree $m$ of $G$ as a polynomial in $z$ with matrix coefficients. See Johannesson and Zigangirov (1999, p. 114) or Huffman and Pless (2010, Theorem 14.4.6) for more details.


Row and column distances help to compute the free distance, since, for every index j,

$d_j^c \leq d_{j+1}^c \leq d_{\mathrm{free}}(C) \leq d_{j+1}^r \leq d_j^r,$

and $d_s^c = d_{\mathrm{free}}(C) = d_s^r$ for $s$ big enough (see Johannesson and Zigangirov, 1999, Ch. 3). The degree $m$ of $G$ should play some role in the row and column distance sequences. In fact, each vector in the information sequence interacts only with the $m+1$ coefficients of $G$. This leads to the following natural question: Does the equality $d_j^c = d_{j+m}^c$ for some $j \geq 0$ imply $d_j^c = d_{\mathrm{free}}(C)$? The answer is no, as the following example shows.

Example 2. Let $C$ be the rate $2/4$ code generated by the basic matrix

$G = \begin{pmatrix} z^4+z^2+1 & z^3+z^2+z+1 & z^4+z^3 & z^3+z^2+z \\ z^4+z^3+1 & z^3 & z^3+z+1 & 1 \end{pmatrix},$

so $m = 4$. With the aid of the computer software SageMath (The Sage Developers, 2017), we have computed the column and row distances, whose values are written in the following table:

j     | 0 1 2 3 4 5 6 7 8 9 10 11 12
d_j^c | 2 3 4 5 5 6 6 6 6 6  6  7  8
d_j^r | 8 8 8 8 8 8 8 8 8 8  8  8  8

So $d_5^c = d_{10}^c = 6$, but $d_{\mathrm{free}}(C) = 8$. Observe that, in this example, we have $d_5^c = d_{5+m+1}^c < d_{\mathrm{free}}(C)$ since $m = 4$.
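The first terms of such a table can be checked directly from the definition. The following minimal brute-force sketch (plain Python over $F_2$, written here for illustration and not part of the paper's own SageMath code) enumerates the truncated information words $u_0,\dots,u_j$ with $u_0 \neq 0$ for the generator matrix of Example 2; it is only feasible for small $j$, since the search space grows as $4^{j+1}$.

```python
from itertools import product

# Coefficient matrices G_0, ..., G_4 of Example 2, so that G(z) = sum_i z^i G_i (over F_2).
G = [
    [[1, 1, 0, 0], [1, 0, 1, 1]],  # G_0
    [[0, 1, 0, 1], [0, 0, 1, 0]],  # G_1
    [[1, 1, 0, 1], [0, 0, 0, 0]],  # G_2
    [[0, 1, 1, 1], [1, 1, 1, 0]],  # G_3
    [[1, 0, 1, 0], [1, 0, 0, 0]],  # G_4
]
k, n, m = 2, 4, len(G) - 1

def column_distance(j):
    """Brute-force d_j^c: minimum weight of v_[0,j] over information words with u_0 != 0."""
    best = None
    for u in product(product(range(2), repeat=k), repeat=j + 1):
        if not any(u[0]):          # the column distance requires u_0 (hence f_0) nonzero
            continue
        weight = 0
        for t in range(j + 1):     # v_t = sum_{i=0}^{min(t,m)} u_{t-i} G_i
            v = [0] * n
            for i in range(min(t, m) + 1):
                for a in range(k):
                    for b in range(n):
                        v[b] ^= u[t - i][a] & G[i][a][b]
            weight += sum(v)
        best = weight if best is None else min(best, weight)
    return best

print([column_distance(j) for j in range(4)])  # first terms of the table: 2, 3, 4, 5
```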

The same question can be asked for the row distance sequence: Does the equality $d_j^r = d_{j+m}^r$ for some $j \geq 0$ imply $d_j^r = d_{\mathrm{free}}(C)$? The answer is no again, as Example 3 shows.

Example 3. Let $C$ be the rate $2/3$ code generated by the basic matrix

$G = \begin{pmatrix} z^2+1 & z^2+z & 1 \\ z^2+z & z^2+z+1 & z^2 \end{pmatrix}.$

With the aid of the computer software SageMath (The Sage Developers, 2017), we have computed the row and column distances, whose values are written in the following table:

j     | 0 1 2 3 4 5 6 7
d_j^c | 1 2 2 2 3 3 3 4
d_j^r | 5 5 5 4 4 4 4 4

So $d_0^r = d_2^r = 5$, but $d_{\mathrm{free}}(C) = 4$.

3. Row and column distances adapted to cyclic structures on convolutional codes

Let $F[z]$ denote the polynomial ring in the variable $z$ with coefficients in a finite field $F$. It follows from Proposition 1, and the subsequent discussion on free, row and column distances in Section 2, that we may safely define convolutional codes as follows.


Definition 4. Let $k \leq n$ be positive integers. A rate $k/n$ convolutional code over $F$ is an $F[z]$-submodule $C$ of $F^n[z]$ of rank $k$ such that $F^n[z] = C \oplus C'$ for some $F[z]$-submodule $C' \subseteq F^n[z]$.

Various cyclic structures on convolutional codes have been modeled by using a skew polynomial ring with coefficients in a finite algebra as sentence-ambient algebra, see Piret (1976); Roos (1979); Gluesing-Luerssen and Schmale (2004); Estrada et al. (2008); López-Permouth and Szabo (2013); Gómez-Torrecillas et al. (2016, 2017b). We briefly recall this construction. Let $A$ be an $F$-algebra (associative with unit) of finite dimension $n$ over $F$, and $\sigma : A \to A$ an $F$-algebra automorphism. The right skew polynomial ring $A[z;\sigma]$ consists of all polynomials in $z$ with coefficients in $A$ written on the right, whose multiplication is skewed according to the rule $az = z\sigma(a)$ for all $a \in A$. Clearly, since $F$ is a subalgebra of $A$, $F[z]$ is a commutative subring of the non-commutative ring $A[z;\sigma]$. However, $F[z]$ is not contained in the center of $A[z;\sigma]$ unless $\sigma$ is the identity map. Consider $A[z;\sigma]$ as an $F[z]$-module by defining the action of $f(z) \in F[z]$ on $g(z) \in A[z;\sigma]$ as the non-commutative product $f(z)g(z)$. Every $F$-basis $\{b_0,\dots,b_{n-1}\}$ of $A$ becomes a basis of $A[z;\sigma]$ as an $F[z]$-module, and the corresponding isomorphism of vector spaces

$v : A \to F^n$

extends to an isomorphism of $F[z]$-modules

$v : A[z;\sigma] \to F^n[z]$

with inverse

$p : F^n[z] \to A[z;\sigma].$

At this level of generality, cyclic structures on convolutional codes were introduced in López-Permouth and Szabo (2013) under the name of left ideal convolutional codes. Thus, a convolutional code $C \subseteq F^n[z]$ is said to be a left ideal convolutional code if $p(C)$ is a left ideal of $A[z;\sigma]$. Left ideal convolutional codes are often generated, as left ideals, by idempotent elements of $A[z;\sigma]$. This is the case when $A$ is a semisimple commutative algebra (see López-Permouth and Szabo, 2013, Theorem 3.5) and, more generally, when the ring extension $F[z] \subseteq A[z;\sigma]$ is a separable ring extension (see Gómez-Torrecillas et al., 2017b, Corollary 7). Moreover, the idempotent generator of each left ideal convolutional code can be, in this case, explicitly computed (see Gómez-Torrecillas et al., 2017b, Algorithm 1). We are interested in the consequences for a cyclic convolutional code of being generated by an idempotent, so we find it useful to introduce the following definition.

Definition 5. Let $R = A[z;\sigma]$, and fix a basis $\{b_0, b_1, \dots, b_{n-1}\}$ of $A$ over $F$. A convolutional code $C \subseteq F^n[z]$ is said to be an idempotent convolutional code (ICC) if there exists an idempotent $\varepsilon = \varepsilon^2 \in A[z;\sigma]$ such that $p(C) = R\varepsilon$. By a slight abuse of language, we will simply say that $C$ is generated by $\varepsilon$, and we write $C = R\varepsilon$.

For cyclic convolutional codes in the sense of Piret, generating idempotents play a relevant role in the construction of canonical minimal encoders, see Roos (1979). Many examples of idempotent convolutional codes, as well as algorithms for their construction, are given in Gómez-Torrecillas et al. (2016, 2017b,a, 2015). The name ICC was first used in our contribution to ISSAC 2018 (Gómez-Torrecillas et al., 2018); in previous papers different names have been used, for instance they are called split ideal codes in Gómez-Torrecillas et al. (2016, Definition 2.15). Next, we will introduce the free distance of a convolutional code. Using an abstract weight function will not entail additional difficulties, while the corresponding general framework is of potential interest. A weight function on $A$ is a map $w : A \to \mathbb{N}$ such that

1. $w(a+b) \leq w(a) + w(b)$,
2. $w(-a) = w(a)$,


3. $w(a) = 0$ if and only if $a = 0$.

Such a weight function allows us to define a distance function $d : A \times A \to \mathbb{N}$ by $d(a,b) = w(a-b)$. We extend the weight function to any cartesian product $A^l$ by defining $w : A^l \to \mathbb{N}$ as

$w(a_0, \dots, a_{l-1}) = w(a_0) + \cdots + w(a_{l-1}).$

The minimum weight of a subset $S \subseteq A^l$ is defined as usual, namely,

$d(S) = \min\{w(u) : 0 \neq u \in S\},$

whenever $S$ contains some nonzero vector. The weight function is also extended to $w : R \to \mathbb{N}$ by setting

$w\left(\sum_i z^i a_i\right) = \sum_i w(a_i),$

which leads to the definition of the free distance of a convolutional code $C$ as

$d_{\mathrm{free}}(C) = \min\{w(f) : f \in C,\ f \neq 0\}. \qquad (1)$

We say that the algebra automorphism $\sigma$ of $A$ is an isometry if $w(\sigma(a)) = w(a)$ for all $a \in A$.

Example 6. If $A$ is a full matrix algebra over $F$, then every automorphism $\sigma$ is an isometry with respect to the rank on $A$ used as metric.

Remark 7. If we consider on $A$ the weight function $w$ that comes from the Hamming weight $w_H$ in $F^n$, namely,

$w : A \to \mathbb{N}, \qquad a \mapsto w(a) = w_H(v(a)),$

then (1) is the usual free distance of the convolutional code $C$. Obviously, $w$ depends on the choice of the basis $B = \{b_0, b_1, \dots, b_{n-1}\}$ of $A$ over $F$. Proposition 8 below shows that $\sigma$ will be an isometry whenever such a basis is suitably chosen.

Proposition 8. The automorphism $\sigma$ is an isometry if and only if there exist a permutation $\pi \in S_n$ and a vector $(\alpha_0, \dots, \alpha_{n-1}) \in F^n$ such that $\sigma(b_i) = \alpha_i b_{\pi(i)}$ for all $0 \leq i \leq n-1$.

Proof. It follows from the fact that $w_H(\beta_0, \dots, \beta_{n-1}) = 1$ if and only if $(\beta_0, \dots, \beta_{n-1})$ is a scalar multiple of an element of the canonical basis of $F^n$. □

Remark 9. The permutations and the scalar multiples in Proposition 8 cannot be freely chosen, since $\sigma$ is an $F$-algebra morphism. For instance, if $b_0 = 1$ then it follows that $\alpha_0 = 1$ and $\pi(0) = 0$. The algebra structure gives more restrictions on the vector of coefficients and the permutations. An interesting question could be: given an $F$-automorphism $\sigma : A \to A$, is there a basis $B$ such that $\sigma$ is an isometry when we consider the Hamming weight function from Remark 7?

After this digression on isometries, we resume our idempotent convolutional codes. Let $C = R\varepsilon$ be an ICC generated by an idempotent $\varepsilon \neq 0, 1$ of $R$. Let $m$ be the degree of $\varepsilon$ in $z$. Write

$e = 1 - \varepsilon = \sum_{i=0}^{m} z^i e_i,$

which is also a non-trivial idempotent of degree $m$ of $R$. The map


$\rho_e : R \to R, \qquad r \mapsto \rho_e(r) = re \qquad (2)$

is a left $R$-module morphism. Obviously,

$C = \ker(\rho_e) = \{g \in R : ge = 0\}. \qquad (3)$
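Because all the later computations reduce to the rule $az = z\sigma(a)$, it may help to see it implemented once. The following plain Python sketch (ours, for illustration only) represents elements of $A = \mathbb{F}_2[x]/(x^7-1)$ as bit lists, represents elements of $R = A[z;\sigma]$ with $\sigma(x) = x^3$ as lists of coefficients written on the right, and multiplies two skew polynomials; with the generating idempotent and parity check idempotent of Example 33 below, the product $\varepsilon e$ comes out zero, illustrating (3).

```python
N = 7  # A = F_2[x]/(x^7 - 1); an element is a list of 7 bits (coefficient of x^s at index s)

def a_mul(a, b):
    """Product in A: cyclic convolution modulo 2."""
    c = [0] * N
    for s in range(N):
        if a[s]:
            for t in range(N):
                if b[t]:
                    c[(s + t) % N] ^= 1
    return c

def sigma_j(a, j):
    """sigma(x) = x^3 applied j times: sends x^s to x^(3^j * s mod 7)."""
    c = [0] * N
    for s in range(N):
        if a[s]:
            c[(pow(3, j, N) * s) % N] ^= 1
    return c

def skew_mul(f, g):
    """Product in R = A[z; sigma], coefficients on the right: z^i a * z^j b = z^(i+j) sigma^j(a) b."""
    h = [[0] * N for _ in range(len(f) + len(g) - 1)]
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            term = a_mul(sigma_j(a, j), b)
            h[i + j] = [u ^ v for u, v in zip(h[i + j], term)]
    return h

def poly(*exps):
    """Element of A with 1's at the given exponents of x."""
    a = [0] * N
    for s in exps:
        a[s] ^= 1
    return a

# idempotent generator and parity check idempotent of Example 33 (coefficients of z^0, ..., z^3)
eps = [poly(4, 2, 1, 0), poly(6, 4, 1, 0), poly(), poly(6, 3, 2, 1)]
e   = [poly(4, 2, 1),    poly(6, 4, 1, 0), poly(), poly(6, 3, 2, 1)]

print(all(all(c == 0 for c in coeff) for coeff in skew_mul(eps, e)))  # True: eps * e = 0, cf. (3)
```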

The idempotent $\varepsilon$ is called an idempotent generator of $C = R\varepsilon$, and $e = 1 - \varepsilon$ is called a parity check idempotent. The inclusion $A \subseteq R$ makes $R$ an $A$-bimodule. By construction, $\{1, z, z^2, \dots\}$ is a basis of $R$ as a right $A$-module. Since $\sigma$ is an automorphism,

$\sum_{i \geq 0} z^i a_i = \sum_{i \geq 0} \sigma^{-i}(a_i) z^i,$

which implies that $\{1, z, z^2, \dots\}$ is a basis of $R$ as a left $A$-module. In the sequel, the coordinates of $f \in R$ as a left $A$-module with respect to the basis $\{1, z, z^2, \dots\}$ are called left coordinates of $f$. A straightforward argument shows that the (infinite) matrix of the left $A$-module endomorphism $\rho_e$ with respect to this basis is

$E = \left(\sigma^{-j}(e_{j-i})\right)_{0 \leq i,j} = \begin{pmatrix} e_0 & \sigma^{-1}(e_1) & \sigma^{-2}(e_2) & \cdots & & \\ & \sigma^{-1}(e_0) & \sigma^{-2}(e_1) & \sigma^{-3}(e_2) & \cdots & \\ & & \sigma^{-2}(e_0) & \sigma^{-3}(e_1) & \sigma^{-4}(e_2) & \cdots \\ & & & \ddots & \ddots & \ddots \end{pmatrix},$

where we adopt the convention that $e_i = 0$ for $i < 0$ and $i > m$. Since $e$ is idempotent, it follows that $E$ is idempotent too. Next, we introduce and study two sequences of distances which reach the free distance in a somewhat more controllable way than the classical sequences of row and column distances. Let $E_l^c$ be the square submatrix of $E$ consisting of the first $l+1$ rows and columns, that is

$E_l^c = \left(\sigma^{-j}(e_{j-i})\right)_{0 \leq i,j \leq l}, \qquad (4)$

and let $E_l^r$ be the submatrix of $E$ consisting of the first $l+1$ rows and $m+l+1$ columns, that is

$E_l^r = \left(\sigma^{-j}(e_{j-i})\right)_{0 \leq i \leq l,\ 0 \leq j \leq m+l}. \qquad (5)$
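The index pattern in (4) and (5) is easy to get wrong when programming, so the following small Python sketch (ours, not taken from the paper) builds $E_l^c$ and $E_l^r$ symbolically: entries of $A$ are represented by placeholder strings and $\sigma^{-j}$ is only recorded, not evaluated, which is enough to check the shape of the matrices.

```python
def build_E(e_degree, l, kind="c"):
    """Return E_l^c (kind='c') or E_l^r (kind='r') with symbolic entries sigma^-j(e_{j-i})."""
    m = e_degree
    cols = l + 1 if kind == "c" else m + l + 1
    def entry(i, j):
        k = j - i
        if k < 0 or k > m:          # convention: e_k = 0 outside 0..m
            return "0"
        return f"s^-{j}(e{k})"      # stands for sigma^{-j}(e_{j-i})
    return [[entry(i, j) for j in range(cols)] for i in range(l + 1)]

for row in build_E(e_degree=2, l=3, kind="r"):
    print(row)
```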

The following proposition will be useful in the sequel.

Proposition 10. For all $l \geq 0$, $E_l^c$ is an idempotent matrix.

Proof. Since $E^2 = E$, the block decomposition

$E = \begin{pmatrix} E_l^c & * \\ 0 & * \end{pmatrix}$

implies that $(E_l^c)^2 = E_l^c$. □

For all $l$, let

$K_l = \{(a_0, \dots, a_l) \in \ker(\cdot E_l^c) : a_0 \neq 0\} \subseteq A^{l+1} \qquad (6)$

and

$K_l^r = \ker(\cdot E_l^r) \subseteq A^{l+1}. \qquad (7)$


Definition 11. Let $C = \ker(\rho_e)$ for some non-trivial idempotent $e \in R$ and let $l \geq 0$. The $l$-th cyclic column distance of $C$ is defined as

$\delta_l^c = d(K_l) = \min\{w(a_0, \dots, a_l) : (a_0, \dots, a_l) \in K_l\}.$

The $l$-th cyclic row distance of $C$ is defined as

$\delta_l^r = d(K_l^r) = \min\{w(a) : a \in K_l^r,\ a \neq 0\}.$

The matrices $E_l^c$ and $E_l^r$ follow the patterns described below:

$E_{l+1}^c = \begin{pmatrix} E_l^c & \begin{matrix}\sigma^{-(l+1)}(e_{l+1}) \\ \vdots \\ \sigma^{-(l+1)}(e_1)\end{matrix} \\ 0 & \sigma^{-(l+1)}(e_0) \end{pmatrix}, \qquad (8)$

$E_{l+1}^r = \begin{pmatrix} E_l^r & 0 \\ 0 \ \cdots \ \sigma^{-(l+1)}(e_0) \ \cdots \ \sigma^{-(l+m)}(e_{m-1}) & \sigma^{-(l+m+1)}(e_m) \end{pmatrix}, \qquad (9)$

$E_l^r = \begin{pmatrix} E_l^c & \ast_l \end{pmatrix} \qquad (10)$

and

$E_{l+m}^c = \begin{pmatrix} E_l^r \\ 0 \ \ \ast_u \end{pmatrix}, \qquad (11)$

where $\ast_l$ and $\ast_u$ are suitable lower and upper triangular blocks, respectively.

Lemma 12. Let h =

l

i =0

zi h i ∈ R. Then h ∈ C if and only if (σ −0 (h0 ), . . . , σ −l (hl )) ∈ K lr .

Proof. The left coordinates of h are (σ −0 (h0 ), . . . , σ −l (hl ), 0, . . . ). Since



E lc

⎜ E =⎝ 0 .. .

l u

.. .

0



··· ⎟ ⎠, .. .

it follows from (10) that he = 0 if and only if (σ −0 (h0 ), . . . , σ −l (hl )) E lr = 0. Proposition 13. For each l ≥ 0, δlr ≥ δlr+1 . Moreover, if there is j ≥ 0 such that δ rj = dfree (C ).

2

σ is an isometry, then δlr ≥ dfree (C ) for all l ≥ 0 and

Proof. Since (a0 , . . . , al ) ∈ K lr implies (a0 , . . . , al , 0) ∈ K lr+1 , it follows that δlr ≥ δlr+1 . If l try, then, for any g = i =0 zi g i , we have

w( g ) =

l  i =0

w( g i ) =

l 

w(σ −i ( g i )) = w(σ −0 ( g 0 ), . . . , σ −l ( gl )).

i =0

This implies, in view of Lemma 12, that

δlr = min{w( g ) : 0 = g ∈ C , deg( g ) ≤ l}. Hence, δlr ≥ dfree (C ), and the free distance is reached by some δ rj .

2

σ is an isome-

JID:YJSCO

AID:1972 /FLA

[m1G; v1.261; Prn:24/10/2019; 14:03] P.9 (1-26)

J. Gómez-Torrecillas et al. / Journal of Symbolic Computation ••• (••••) •••–•••

9

The cyclic row distance sequence is therefore the decreasing sequence we were looking for. In fact, it boils down to the classical row distance sequence when the Hamming weight is used, as the following result shows. Proposition 14. Assume σ is an isometry and w is defined as in Remark 7. If l ≥ m, dlr−m = δlr . Proof. Let g =

w( g ) =

l

i =0

l 

zi g i ∈ R. Since

σ is an isometry,

w(σ −i ( g i )).

i =0

Hence, the result follows from Lemma 12.

2

As a consequence of (8), we have that ( f 0 , . . . , f l , f l+1 ) ∈ K l+1 implies ( f 0 , . . . , f l ) ∈ K l . There is a somehow reciprocal result. Lemma 15. Let ( f 0 , . . . , f l ) ∈ K l and let

f l +1 = −

l 

f i σ −(l+1) (el+1−i ).

i =0

Then

( f 0 , . . . , f l , f l +1 ) ∈ K l +1 . As a consequence, K l = ∅ for all l ≥ 0. Proof. By (8) and Proposition 10, we have

⎛ ⎝

σ −(l+1) (el+1 )

.. .

σ −(l+1) (e





⎠ = Ec ⎝

σ −(l+1) (el+1 )

.. .

l

σ −(l+1) (e

1)





⎠+⎝

σ −(l+1) (el+1 )

.. .

σ −(l+1) (e

1)

⎞ ⎠ σ −(l+1) (e 0 ).

1)

Hence, since ( f 0 , . . . , f l ) ∈ K l , we have



( f 0, . . . , fl ) ⎝

σ −(l+1) (el+1 )

.. .



⎠ (1 − σ −(l+1) (e 0 ))

σ −(l+1) (e1 )



= ( f 0 , . . . , f l ) E lc ⎝ ⎛ + ( f 0, . . . , fl ) ⎝

σ −(l+1) (el+1 )

.. .

σ −(l+1) (e1 ) σ −(l+1) (el+1 )

.. .

⎞ ⎠ (1 − σ −(l+1) (e 0 )) ⎞ ⎠ σ −(l+1) (e 0 )(1 − σ −(l+1) (e 0 ))

σ −(l+1) (e1 )

= 0, because

σ −(l+1) (e0 ) is idempotent. So ⎞ ⎛ −(l+1) σ

( f 0, . . . , fl ) ⎝

(el+1 )

.. .



⎠ = ( f 0, . . . , fl ) ⎝

σ −(l+1) (e1 )

σ −(l+1) (el+1 )

.. .

⎞ ⎠ σ −(l+1) (e 0 )

σ −(l+1) (e1 )

=

l  i =0

f i σ −(l+1) (el+1−i )σ −(l+1) (e 0 ),

JID:YJSCO

AID:1972 /FLA

[m1G; v1.261; Prn:24/10/2019; 14:03] P.10 (1-26)

J. Gómez-Torrecillas et al. / Journal of Symbolic Computation ••• (••••) •••–•••

10

that is,



⎛ σ −(l+1) (e

 Therefore,

⎜

l

f 0, . . . , fl | −

f i σ −(l+1) (el+1−i ) ⎜ ⎝

i =0

f 0, . . . , fl , −

l



i =0

l +1 )

.. .

σ −(l+1) (e

1)

⎞ ⎟ ⎟ = 0. ⎠

σ −(l+1) (e0 )

f i σ −(l+1) (el+1−i ) ∈ K l+1 by (8).

To prove that K l = ∅ for every l ≥ 0, it suffices to argue that K 0 = ∅, that is, e 0 = 1. Since e is idempotent, it follows that

ek =



σ j (e i )e j =

k 

σ j (ek− j )e j .

(12)

j =0

i + j =k

So, if e 0 = 1 then

e 1 = σ −1 (e 0 )e 1 + e 1 e 0 = e 1 + e 1 , so e 1 = 0. Iterating this process we get e i = 0 for all i > 0, i.e. e = 1. But we have assumed e = 1, so e 0 = 1. 2 Proposition 16. For all l ≥ 0, δlc ≤ δlc+1 . Moreover, if σ is an isometry, then δlc ≤ dfree (C ). Proof. Let ( f 0 , . . . , f l , f l+1 ) ∈ K l+1 such that

δlc+1 = w( f 0 , . . . , f l , f l+1 ). By (8), it follows that ( f 0 , . . . , f l ) ∈ K l , therefore

δlc ≤ w( f 0 , . . . , f l ) ≤ w( f 0 , . . . , f l , f l+1 ) = δlc+1 . t

Let h = i =0 zi h i ∈ C with h0 = 0 such that dfree (C ) = w(h). We may take t such that t + m ≥ l, by adding suitable zero monomials. Since h ∈ C , it follows that

(σ −0 (h0 ), . . . , σ −t (ht )) E tr = 0 by Lemma 12. Hence

(σ −0 (h0 ), . . . , σ −t (ht ), 0, .m. ., 0) E tc+m = 0 because



E tc+m

=



E tr 0

u

by (11). Therefore,

δlc ≤ δtc+m ≤ w(σ −0 (h0 ), . . . , σ −t (ht ), 0, .m. ., 0) = w(σ −0 (h0 ), . . . , σ −t (ht )) = w(h) = dfree (C ), where, in the penultimate equality, we used that

σ is an isometry. 2

In order to prove the main result of this section, we need the following lemma. Lemma 17. Let l, j ≥ 0 such that δlc = δlc+ j . If ( f 0 , . . . , f l+ j ) ∈ K l+ j is such that δlc+ j = w( f 0 , . . . , f l+ j ), then ( f 0 , . . . , f l ) ∈ K l , ( f l+1 , . . . , f l+ j ) = (0, . . . , 0) and δlc = w( f 0 , . . . , f l ).


Proof. Since

E lc+ j



=

11



E lc

l

0

u

,

it follows that ( f 0 , . . . , f l ) ∈ K l . Then

δlc+ j = δlc ≤ w( f 0 , . . . , f l ) ≤ w( f 0 , . . . , f l , f l+1 , . . . , f l+ j ) = δlc+ j , so we conclude that

δlc = w( f 0 , . . . , f l ) and ( f l+1 , . . . , f l+ j ) = (0, . . . , 0) because w( f 0 , . . . , f l , f l+1 , . . . , f l+ j ) = w( f 0 , . . . , f l ) + w( f l+1 , . . . , f l+ j ).

2

Theorem 18. Assume that σ is an isometry. If δlc = δlc+m for some l, then δlc = dfree (C ). Proof. By Lemma 17, there exists ( f 0 , . . . , f l ) ∈ K l such that ( f 0 , . . . , f l , 0, . . . , 0) ∈ K l+m and δlc =  w( f 0 , . . . , f l ) = δlc+m . Let h i = σ i ( f i ) for all 0 ≤ i ≤ l, and h = i zi h i . Then

( f 0 , . . . , f l , 0, . . . , 0) E lc+m = 0, which implies, by (11), that ( f 0 , . . . , f l ) E lr = 0. Hence, h ∈ C , by Lemma 12, because f i = σ −i (h i ) for all 0 ≤ i ≤ l. Therefore

dfree (C ) ≤ w(h) = w( f 0 , . . . , f l ) = δlc , and, by Proposition 16, dfree (C ) = δlc .

2

Corollary 19. If σ is an isometry, then the free distance is reached by the sequence {δlc }l≥0 . By Example 3 and Proposition 14, a similar result to Theorem 18 can not be proved for the cyclic row distances sequence. Nevertheless, there is still a relation between cyclic column distances and the classical column distances when σ is an isometry and w is defined from the Hamming weight as in Remark 7. Proposition 20. Assume σ is an isometry and w is defined as in Remark 7. For all j ≥ 0, δ cj ≤ dcj . Proof. Let j ≥ 0. Let h = needed. By Lemma 12,

l

i =0

zi h i ∈ C with h0 = 0. We may assume that l ≥ j by adding zeros if

(σ −0 (h0 ), . . . , σ −l (hl )) E lr = 0. By (9), (11) and (10)



E lr

=



E rj u

 =

l

E cj

l

0

0

u

l

.

Hence

(σ −0 (h0 ), . . . , σ − j (h j )) E cj = 0, and there is an injective map





h[0, j ] : h ∈ C , h0 = 0 → K j

j

zi h i → (σ −0 (h0 ), . . . , σ − j (h j )) j Since σ is an isometry, w(h[0, j ] ) = i =0 w(σ −i (h i )), hence δ cj ≤ dcj . 2 i =0


12

4. Column and row distances interplay

Our next aim is to show that the column and row distance sequences defined in Definition 11 seem to be more tightly linked than the classical ones. Thus, in Theorem 22, we prove that if $\delta_{l(m+1)-1}^c = d_{\mathrm{free}}$ for some $l \geq 1$, then $\delta_{l(m+1)-1}^r = d_{\mathrm{free}}$. We keep the notation of the previous section. In particular, we have an idempotent convolutional code $C$ generated, as a left ideal of $R = A[z;\sigma]$, by a non-trivial idempotent $\varepsilon \in R$ of degree $m$. The idempotent $e = 1 - \varepsilon \in R$ is such that $g \in C$ if and only if $ge = 0$. If $M$ is any matrix with coefficients in $A$, and $\tau : A \to A$ is an automorphism, then $\tau(M)$ represents the matrix obtained from $M$ by applying $\tau$ componentwise. Recall that

e = e 0 + ze 1 + · · · + zm em ,

$(e_i \in A,\ e_0 \neq 0, 1\ \text{and}\ e_m \neq 0).$





Set

σ −1 (e1 ) · · · σ −m (em ) σ −1 (e0 ) · · · σ −m (em−1 ) ⎟ ⎟

e0 ⎜0 ⎜

c Em = Em =⎜ . .

.. .

⎝ .

0

.. .

···

0

⎟, ⎠

σ −m (e0 )

and



Tm

··· ··· σ −1 (em ) · · · .. . σ −1 (e2 ) · · ·

0 ⎜ em ⎜ ⎜ = e m −1

⎜ ⎜ ⎝

0 0

.. .

e1

0 0 0

.. .



0 0⎟ ⎟ 0⎟ ,

⎟ .. ⎟ .⎠

σ −(m−1) (em ) 0

both m + 1 square matrices over A. Let

     Tm = ( K = ker · u , v ) ∈ ( A m+1 )2 | uT m + v E m = 0 . E m

Lemma 21. Let s ≥ 2, u 0 = 0 ∈ A m+1 and (u 1 , u 2 , . . . , u s ) ∈ ( A m+1 )s . Assume that the first component in A of u 1 ∈ A m+1 is nonzero. Then (u 1 , u 2 , . . . , u s ) ∈ K s(m+1)−1 if and only if (σ (k−1)(m+1) (uk−1 ), σ (k−1)(m+1)) (uk )) ∈ K for all k = 1, . . . , s. Proof. Straightforward from (13) and (6).

2

For s ≥ 2 we define

⎛ ⎜ Em

⎜ ⎜ ⎜ ⎜ c E s(m+1)−1 = ⎜ ⎜ ⎜ ⎜ ⎜ ⎝

0

.. . 0 0



σ −(m+1) ( T m )

..

.

σ −(m+1) ( E

..

.

.

..

.

0

..

.

0

..

.

..

m)

0

⎟ ⎟ ⎟ 0 ⎟ ⎟ .. ⎟. . ⎟ ⎟ ⎟ σ (1−s)(m+1) ( T m ) ⎟ ⎠ σ (1−s)(m+1) ( E m )

(13)


By deleting the last block row from E cs(m+1)−1 , we get the matrix



..

σ −(m+1) ( T

13



. 0 m) ⎟ ⎜ Em ⎟ ⎜ . ⎟ ⎜ .. −(m+1) ( E ) 0 σ 0 ⎟ ⎜ m r  E (s−1)(m+1)−1 = ⎜ ⎟ .. .. .. ⎟ ⎜ .. . . . ⎟ ⎜ . ⎠ ⎝ .. ( 1−s)(m+1) . 0 0 σ (T m ) or, also,





0

.. .

⎜ c  E r(s−1)(m+1)−1 = ⎜ ⎝ E (s−1)(m+1)−1

⎟ ⎟ ⎠

0

σ (1−s)(m+1) ( T m )

Note that  E r(s−1)(m+1)−1 is also obtained by adding a last zero column to E r(s−1)(m+1)−1 , so the left kernel of both matrices is the same, that is





K (rs−1)(m+1)−1 = ker · E r(s−1)(m+1)−1 .

(14)

Theorem 22. Assume that σ is an isometry. The following conditions are equivalent for our idempotent convolutional code C , and a given s ≥ 2. 1. 2. 3.

δsc(m+1)−1 = δ(cs−1)(m+1)−1 ; δ(rs−1)(m+1)−1 = δ(cs−1)(m+1)−1 ; dfree (C ) = δ(cs−1)(m+1)−1 .

Proof. 1 ⇒ 2. Let (u 1 , . . . , u s ) ∈ K s(m+1)−1 such that δsc(m+1)−1 = w(u 1 , . . . , u s ). Now,

w(u 1 , . . . , u s ) = w(u 1 , . . . , u s−1 ) + w(u s ) ≥ δ(cs−1)(m+1)−1 + w(u s ), since (u 1 , . . . , u s−1 ) ∈ K (s−1)(m+1)−1 by Lemma 21. The equality δsc(m+1)−1 = δ(cs−1)(m+1)−1 implies that u s = 0. Therefore, (u 1 , . . . , u s−1 ) belongs to the left kernel of  E r(s−1)(m+1)−1 and hence to the left kernel of E r(s−1)(m+1)−1 . Thus, (u 1 , . . . , u s−1 ) ∈ K (rs−1)(m+1)−1 and we get, by Propositions 13 and 16,

δ(cs−1)(m+1)−1 = w(u 1 , . . . , u s−1 ) ≥ δ(rs−1)(m+1)−1 ≥ dfree (C ) ≥ δ(cs−1)(m+1)−1 . 2 ⇒ 3. It follows from the inequalities δ(rs−1)(m+1)−1 ≥ dfree (C ) ≥ δ(cs−1)(m+1)−1 in Propositions 13 and 16. 3 ⇒ 1. It follows from the inequalities dfree (C ) ≥ δsc(m+1)−1 ≥ δ(cs−1)(m+1)−1 . 2 Remark 23. Suppose that K s(m+1)−1 has been computed for some s ≥ 1 and, hence, we know the value of δsc(m+1)−1 . Since



K sr(m+1)−1

= K s(m+1)−1 ∩ ker

0

σ −s(m+1) ( T m )



,

(15)

we should also know δsr(m+1)−1 and, thus, decide whether dfree (C ) = δsc(m+1)−1 by virtue of Theorem 22. Theorems 18 and 22 show that the column distance sequence plays a relevant role in the computation of the free distance. From this perspective it would be interesting to get some information on its rate growth. To this end we define


14

K ∗ = {(u , v ) ∈ K | uT m = 0} . Obviously, d( K ∗ ) ≥ 2, whenever K ∗ = ∅. Lemma 24. Suppose that σ is an isometry. If δ2c (m+1)−1 < dfree (C ), then

(σ m+1 (u 1 ), σ m+1 (u 2 )) ∈ K ∗ for every (u 1 , u 2 ) ∈ K 2(m+1)−1 such that δ2c (m+1)−1 = w(u 1 , u 2 ). In particular K ∗ = ∅. Proof. Since (u 1 , u 2 ) ∈ K 2(m+1)−1 , u 1 ∈ K m . By Lemma 21, (σ m+1 (u 1 ), σ m+1 (u 2 )) ∈ K . Assume σ m+1 (u 1 ) T m = 0. Then u 1 σ −(m+1) ( T m ) = 0 and u 1 ∈ K mr by (15). Therefore r δ2c (m+1)−1 = w(u 1 , u 2 ) ≥ w(u 1 ) ≥ δm ≥ δ2r (m+1)−1

by Proposition 13. Hence dfree (C ) = δ2c (m+1)−1 by Theorem 22. Therefore



m+1

(u 1 ), σ

m+1

(u 2 )) ∈

K ∗.

σ m+1 (u 1 ) T m = 0 and

2

Remark 25. It follows from Lemma 24 that, if K ∗ = ∅, then dfree (C ) = δ2c (m+1)−1 . Theorem 26. Assume that δ(cs+1)(m+1)−1 < dfree (C ) for some s ≥ 2, and that δ(cs+1)(m+1)−1 − δ(cs−1)(m+1)−1 ≥ d( K ∗ ).

σ is an isometry. Then

Proof. Let (u 1 , . . . , u s+1 ) ∈ K (s+1)(m+1)−1 such that δ(cs+1)(m+1)−1 = w(u 1 , . . . , u s+1 ). By equation (8),

(u 1 , . . . , u s ) ∈ K s(m+1)−1 . We claim that u s σ −s(m+1) ( T m ) = 0. Indeed, if u s σ −s(m+1) ( T m ) = 0, since (u 1 , . . . , u s−1 , u s ) ∈ K s(m+1)−1 , it follows that (u 1 , . . . , u s−1 , u s ) ∈ K sr(m+1)−1 by (15). Therefore,

δ(cs+1)(m+1)−1 = w(u 1 , . . . , u s−1 , u s , u s+1 ) ≥ w(u 1 , . . . , u s−1 , u s ) ≥ δsr(m+1)−1 . Hence, δ(cs+1)(m+1)−1 = dfree (C ), which is contrary to our hypothesis. Now,

δ(cs+1)(m+1)−1 = w(u 1 , . . . , u s+1 ) = w(u 1 , . . . , u s−1 ) + w(u s , u s+1 ) ≥ δ(cs−1)(m+1)−1 + w(u s , u s+1 ) = δ(cs−1)(m+1)−1 + w(σ s(m+1) (u s ), σ s(m+1) (u s+1 )) ≥ δ(cs−1)(m+1)−1 + d( K ∗ ). In the last inequality we used that (σ s(m+1) (u s ), σ s(m+1) (u s+1 )) ∈ K ∗ according to Lemma 21 and the previous claim, and in the second equality we used that σ is an isometry. 2 Corollary 27. Assume that s · d( K ∗ ).

σ is an isometry. Let s ≥ 1 such that δ(cs+1)(m+1)−1 < dfree (C ). Then dfree (C ) >

Proof. We argue inductively on s ≥ 1. If s = 1, the result follows from Lemma 24. Now, if s ≥ 2, then, by Theorem 26,

dfree (C ) > δ(cs+1)(m+1)−1

≥ δ(cs−1)(m+1)−1 + d( K ∗ ) ≥ (s − 1) d( K ∗ ) + d( K ∗ ) = s · d( K ∗ ), which completes the induction.

2


15

r Corollary 28. Assume that σ is an isometry. Let d ≥ dfree (C ) (e.g. d = δm ). If s ≥ d/ d( K ∗ ), then δ(cs+1)(m+1)−1 = dfree (C ) = δ(rs+1)(m+1)−1 .

Remark 29. Corollary 28 can by used to calculate dfree (C ) by computing a sequence of row distances, since, once δsr(m+1)−1 is known for some s, if s ≥ δsr(m+1)−1 / d( K ∗ ), then dfree (C ) = δ(rs+1)(m+1)−1 . 5. Computing cyclic distances Theorem 18 and Corollary 28 open the possibility of designing an algorithm for the computation of the free distance of an idempotent convolutional code C based in the calculation of some cyclic row and column distances. Each term δlc of this sequence is the minimum weight of K l ⊆ Al+1 , which is of the form

K l = Nl \ P l ,

(16)

where

N l = ker(· E lc ) and P l = {(a0 , a1 , . . . , al ) ∈ N l : a0 = 0}.

(17)

δlc

We will discuss how to compute when the weight function d on A comes from the Hamming weight according to Remark 7. So, we fix a basis B = {b0 , . . . , bn−1 } of our F -algebra A, and the corresponding coordinate isomorphism of vector spaces

v : A → Fn. The right regular representation of A maps each b ∈ A onto the F -linear endomorphism

ρb : A → A , a → ρb (a) = ab. Taking coordinates with respect to B, we get an injective homomorphism of F -algebras

m : A → Mn (F), where Mn (F) denotes the ring of square matrices of order n over F and the matrix m(b) is determined, for each b ∈ A, by the condition

v(ρb (a)) = v(ab) = v(a)m(b), for all a ∈ A . These maps extend component wise to an F -linear isomorphism

v : Al+1 → F (l+1)n ( f 0 , . . . , f l ) → v( f 0 , . . . , f l ) = (v( f 0 ), . . . , v( f l )) and an injective homomorphism of F -algebras

m : Ml+1×l+1 ( A ) → M(l+1)n×(l+1)n (F)     M = mi j → m( M ) = m(mi j ) , in such a way that

v(( f 0 , . . . , f l ) M ) = v( f 0 , . . . , f l )m( M ). Note that, for every S ⊆ F l+1 ,

d( S ) = d(v( S )). In particular, it follows from (18), that

(18)


16

$\delta_l^c = d(\{v \in \ker(\cdot\, m(E_l^c)) : v_{[0,n-1]} \neq 0\}). \qquad (19)$

Here, in analogy with our notation for polynomials, we denote v [0,n] = ( v 0 , . . . , v n ) ∈ F n+1 for any vector v ∈ F N with n + 1 ≤ N. It follows from (19) that cyclic column distances are distances of some non linear block codes over F . For linear codes, a widely used method for small fields is the Brouwer–Zimmermann algorithm (see e.g. Betten et al., 2006, §1.8). For large fields, the well known characterization of the minimum distance in terms of the linear dependence of the rows of a parity check matrix, (see e.g. Huffman and Pless, 2010, Corollary 1.4.14), should work better. However there is not a general efficient deterministic way to compute the minimal weight of a non linear code, all the known methods use some exhaustive search. Even in the linear block case, the computation of the minimum distance is an NP-hard problem, see Vardy (1997). 5.1. Computation of the cyclic column distances from parity check matrices We are going to give two ways to compute the cyclic column distances. The first one, Algorithm 1, is an adaptation of the computation of the Hamming distance via parity check matrices. In order to describe it, we need some notation. For each pair of integers a ≤ b, we denote [a, b] = {c ∈ Z : a ≤ c ≤ b}. Given a matrix M ∈ Ms×t (F) and a subset J ⊆ [0, s − 1], we denote by M [ J ] the submatrix of M formed by the rows labeled by J . Given a singleton set { j } ⊆ [0, s − 1], we denote H [{ j }] simply by H [ j ]. The cardinality of J is denoted by | J |. For each vector v ∈ F s , its support is supp( v ) = {i ∈ [0, s − 1] : v i = 0}. For a given n ≤ s, if I = [0, n − 1], then we write I c = [n, s − 1]. The rank of a matrix M is denoted by rk M. Let H be a matrix with s rows and n ≤ s. Let

C = { v ∈ ker(· H ) : v [0,n−1] = 0} ⊆ F s and

J = { J ⊆ [0, s − 1] : rk H [ J ] < | J | and rk H [ J ∩ I c ] = | J ∩ I c |}. Lemma 30. For each v ∈ C there exists v ∈ C such that supp( v ) ⊆ supp( v ) and supp( v ) ∈ J . Reciprocally, for each J ∈ J , there exists v ∈ C such that supp( v ) ⊆ J . Proof. Let v ∈ C and J = supp( v ). If J ∈ J , set v = v and we are done. Assume then J ∈ / J . Since  c c v ∈ ker(· H ), v H [ j ] = 0, so rk H [ j ] < | J | . Then rk H [ J ∩ I ] < | J ∩ I | so there exists j0 ∈ J ∩ I c j j∈ H  such that H [ j 0 ] = k∈ J ∩ I c , k = j 0 ak H [k] for some ak ∈ F . Then



0=

v j H [ j] +

j∈ J ∩ I



( v j + v j0 a j ) H [ j ] +

j ∈ J ∩ I c , j = j 0



v j H [ j ].

j∈ J ∩ I c

Set v ∈ F s given by, for any j ∈ [0, s],

v j =

⎧ ⎨ vj

v j + v j0 a j ⎩ 0

if j ∈ J ∩ I , if j ∈ J ∩ I c and j = j 0 , otherwise.

Clearly, v ∈ C and J = supp( v ) ⊂ supp( v ) = J . If J ∈ J we are done. Otherwise, set v = v and J = J , and repeat the reasoning. Since J is finite, this loop must end after a finite number of steps. The converse result is obvious. 2 The correctness of Algorithm 1 follows from the following Proposition 31. With the previous notation, assume C is non empty. Then {| J | : J ∈ J } = [d(C ), n + rk H [ I c ]].


17

Algorithm 1 Minimum distance from parity check.
Input: A matrix H with s rows and n ≤ s.
Output: d(C), where C = {v ∈ ker(·H) : v_[0,n−1] ≠ 0}.
1: d ← n + rk(H[I^c])
2: continue ← true
3: while continue do
4:   continue ← false
5:   for all J ⊆ {0, ..., s−1} with |J| = d do
6:     if rk(H[J]) < d and rk(H[J ∩ I^c]) = |J ∩ I^c| then
7:       d ← d − 1
8:       continue ← true
9: return d + 1
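A direct transcription of Algorithm 1 is easy to test on small instances. The sketch below is ours; it assumes a SageMath session (it uses Sage's matrix constructor and rank over a finite field), and it adds an early break once a witness subset of the current size is found, which is equivalent to the pseudocode but avoids repeated decrements within one sweep.

```python
from itertools import combinations

def min_distance_from_parity_check(H, n):
    """d(C) for C = {v in the left kernel of H : v_[0,n-1] != 0}, following Algorithm 1."""
    s = H.nrows()
    I = set(range(n))                        # the first n positions
    Ic = [i for i in range(s) if i not in I]
    d = n + H.matrix_from_rows(Ic).rank()
    keep_going = True
    while keep_going:
        keep_going = False
        for J in combinations(range(s), d):
            Jc = [j for j in J if j not in I]
            if (H.matrix_from_rows(list(J)).rank() < d
                    and H.matrix_from_rows(Jc).rank() == len(Jc)):
                d -= 1
                keep_going = True
                break                        # one witness at size d is enough to decrease d
    return d + 1

# toy usage example over GF(2) with a hypothetical parity check matrix; prints 2
H = matrix(GF(2), [[1, 0], [0, 1], [1, 1], [0, 1]])
print(min_distance_from_parity_check(H, 2))
```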

Proof. Since C is non empty, {| J | : J ∈ J } = ∅ by Lemma 30. Let J 0 ∈ J such that | J 0 | = min{| J | : J ∈ J }. By Lemma 30, there exists u ∈ C such that supp(u ) ⊆ J 0 , so d(C ) ≤ w(u ) ≤ | J 0 |. Let v ∈ C such that w( v ) = d(C ). By Lemma 30, supp( v ) ∈ J . Therefore | J 0 | ≤ | supp( v )| = w( v ) = d(C ). Thus d(C ) = min{| J | : J ∈ J }. Let us prove that if d ∈ {| J | : J ∈ J } with d < n + rk H [ I c ], then d + 1 ∈ {| J | : J ∈ J }. Let d ∈ {| J | : J ∈ J } and J ∈ J such that d = | J |. If rk H [ J ∩ I c ] < rk H [ I c ], there exists j ∈ I c \ J such that rk H [( J ∪ { j }) ∩ I c ] = |( J ∪ { j }) ∩ I c |, so J ∪ { j } ∈ J , and then d + 1 ∈ {| J | : J ∈ J }. If rk H [ J ∩ I c ] = rk H [ I c ], since d < n + rk H [ I c ], | J ∩ I | < n and there exists i ∈ I \ J , which implies that J ∪ {i } ∈ J . We have proved that d + 1 ∈ {| J | : J ∈ J }. 2 5.2. Computation of cyclic column distances by Brower–Zimmermann algorithm Alternatively, following a suggestion of Prof. Alfred Wassermann, we may adapt the well known Brouwer–Zimmermann algorithm, in order to calculate the cyclic column distances. Let V ⊆ W ⊆ F s be two vector subspaces and let C = W \ V . Our purpose is to compute d(C ), which covers the computation of the cyclic column distances. Let k = dim W and r = dim V . Without loss of generality we can assume that W is generated by the rows of a matrix



G=

G1



G2

where G 1 ∈ M(k−r )×s (F), G 2 ∈ Mr ×s (F) and the rows of G 2 generate V . Hence

C = { v G : v [0,k−r −1] = 0}. We may apply now the first part of the Brouwer–Zimmermann algorithm, as presented in Betten et al. (2006, §1.8). For clarity and self-completeness, we detail here the procedure. We also point out that any permutation of the coordinates of an arbitrary block code preserves the Hamming distance. Since the linear code W is equivalent to a systematic one, there exists a k × k non-singular matrix B 1 and an s × s permutation matrix M 1 such that



  1 = B 1 G M 1T = I k1

L1 ,

where k1 = k. Now, if L 1 is neither empty nor zero, again by Gaussian elimination on the rows and a proper permutation of the columns of L 1 , we may find k2 unit columns in the last s − k1 columns of  1 , so that,



 2 = B 2 1 M 2T =

L

2

I k2

L2

0

0



,

where 0 < k2 ≤ k, L 2 is a k2 × (s − k1 − k2 ) matrix, B 2 is a k × k non-singular matrix and M 2 is an s × s permutation matrix which only permutes the last s − k1 columns. Obviously, the row operations modify the first k1 columns of  1 yielding the matrix L 2 .


18

This process can be iterated until L t is empty or zero for some t ≥ 1, so we may find matrices

 1 ,  2 , . . . ,  t such that



 j = A jGN j = T

L

j

Ik j

Lj

0

0

,

where A j = B j B j −1 · · · B 1 is a k × k non-singular matrix and N = M j M j −1 · · · M 1 is an s × s permutafor  d and tion matrix. Observe that the columns where the identity matrices are placed are disjoint   d with d = d . In particular, the maximum Hamming weight for a vector in W is tj =1 k j .

1 ,  2 , . . .  t are isometric, but not equal, to W . For this reason, we set The codes generated by   j = j N j = A j G, for j = 1, . . . , t, which are generator matrices of W . Observe that, by construction, the sets of unit columns of each matrix j are also pairwise disjoint for j = 1, . . . , t. Let us then define the subsets Ci =

t 

{ v j ∈ C : w( v ) ≤ i }.

j =1

Since i = A i G,

Ci =

t 

{ v j ∈ W : w( v ) ≤ i and ( v A j )[0,k−r −1] = 0}.

j =1

Hence,

C 1 ⊆ C 2 ⊆ · · · ⊆ C k −1 ⊆ C k = C . If we set di = d(C i ), we find then that

d1 ≥ d2 ≥ · · · ≥ dk−1 ≥ dk = d(C ). In order to compute an increasing sequence of lower bounds of the distance of C , we may also make use of the one provided by the Brouwer–Zimmermann algorithm. Indeed, we may recall it by its application to W . Let then

 Ci =

t 

{ v j ∈ W : w( v ) ≤ i },

j =1

and  di = d( C i ). Let c ∈ W \ C i , then, for each j = 1, . . . , t, there exists a vector v j such that v j j = c with w( v j ) ≥ i + 1. In order to estimate the weight of c, we may provide a lower bound of the weight of the coordinates corresponding to each set of k j unit columns of the matrix j , for j = 1, . . . , t. Concretely, each of these entries contributes at least with (i + 1) − (k − k j ). Since these sets of coordinates are pairwise disjoint,

w(c ) ≥

t  (i + 1) − (k − k j ). j =1

t

 We may define di = j =1 (i + 1) − (k − k j ) and then d( W \C i ) ≥ di for any i = 1, . . . , k. Clearly, these form an increasing sequence 2 ≤ d1 ≤ d2 ≤ · · · ≤ dk = t +

t  j =1

k j.


19

Algorithm 2 Brouwer–Zimmermann for a difference of vector spaces (BZD). 

Input: A k × s matrix G =

G1



, where the rows of G form a basis of a vector space W , and the rows of G 2 form a basis of G2 a subvector space V ⊂ W . A lower bound d of d( W \ V ). Output: The minimum Hamming weight d( W \ V ). 1: Compute the matrices 1 , . . . , t following the procedure explained above, where i = A i G with A i non-singular for any i = 1, . . . , t. 2: C 0 ← 0 3: i ← 0, d ← 0, d ← ∞ 4: while d < d and d = d do 5: i←i+1 6:

C i ← C i −1 ∪

t 

{ v j ∈ W | w( v ) = i, ( v A i )[0,k−r −1] = 0}

j =1

7:

d ← d(C i )

8:

d←

t 

(i + 1) − (k − k j )

j =1 k−k j ≤i

9: return d

The classic Brouwer–Zimmermann algorithm finishes by asserting that, since the maximum weight of t    a vector in W is j =1 k j , there exists a smallest index i 0 such that d(C i 0 ) = d i 0 ≤ d i 0 ≤ d( W \C i 0 ), so it follows that d( W ) =  di 0 . Now,

W \ C i ⊇ C \ C i = C \(C i \ V ) = C \(C i ∩ V c )

= C ∩ (C i ∩ V c )c = C ∩ (C ic ∪ V ) = (C ∩ C ic ) ∪ (C ∩ V ) = C ∩ C ic = C \C i , so d(C \C i ) ≥ d( W \ C i ) ≥ di for any i = 1, . . . , k. Similarly to above, the maximum weight for a vector t in C ⊂ W is j =1 k j , so there exists a minimal index j 0 such that

d(C j 0 ) = d j 0 ≤ d j 0 ≤ d(C \C j 0 ), and thus d(C ) = d j 0 . The discussion above ensures the correction of Algorithm 2. There, a slight refinement is considered. If we know a lower bound of the distance d(C ), say d, hence we may check if d i equals d in each iteration of the while loop. A positive answer of this comparison ensures that d i = d(C ), despite holding di < di . This improvement speeds up the execution of the iterations of the while loop of Algorithm 4 for which δ cj = δ cj−1 , since δ cj−1 acts as a known lower bound when computing δ cj . Remark 32. The executions of both Algorithm 1 and Algorithm 2 require, at least, an exponential number of operations, in the worst case, with respect to the size of the matrix (that is, the length by the dimension of the code). Nevertheless, the efficiency of Algorithm 1 grows polynomially with respect to the bit-size of the elements of the base field (i.e. the dimension of the base field over its prime subfield). This is so since Algorithm 1 is based on the calculation of the rank of some submatrices. Contrary to that, Algorithm 2 needs an exhaustive search over the elements of the base field, so its efficiency also grows exponentially with respect to the bit-size. This suggests that, for large finite fields, it should be more convenient to compute the distance by using Algorithm 1. For small finite fields, it should be better to use Algorithm 2 since, in most cases, we do not need to compute all the values di . In order to implement this version of the Brouwer–Zimmermann algorithm to the computation of

δlc , it is convenient to present the set K l , expressed according to (16) and (17), via a generator matrix. To this end, write F lc = I − E lc , which is an idempotent matrix. Here, I denotes a suitable identity


20

Algorithm 3 Cyclic column distance calculation using BZD (CCD-BZD).
Input: The matrix E_j^c, where j ≥ 1, n = dim_F A, and a lower bound d of δ_j^c.
Output: The cyclic column distance δ_j^c.
1: F_{j−1}^c ← I − E_{j−1}^c
2: F_j^c ← I − E_j^c
3: a_1, ..., a_r ← basis of Row(m((0 | σ^{−1}(F_{j−1}^c))))
4: b_1, ..., b_{k−r} ← extension to a basis of Row(m(F_j^c))
5: G ← matrix with rows b_1, ..., b_{k−r}, a_1, ..., a_r
6: return BZD(G, d)

Algorithm 4 Free distance calculation of an ICC.
Input: A parity check idempotent e = Σ_{i=0}^{m} z^i e_i of an idempotent convolutional code C.
Output: d_free(C).
1: rep ← 0, j ← 0, δ_0^c ← 0
2: while rep < m do
3:   Compute the matrix E_j^c
4:   δ_j^c ← CCD-BZD(E_j^c, δ_{j−1}^c)
5:   if j > 0 and δ_j^c = δ_{j−1}^c then
6:     rep ← rep + 1
7:   else
8:     rep ← 0
9:   j ← j + 1
10: return δ_{j−1}^c
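The control flow of Algorithm 4 is independent of how each cyclic column distance is obtained, so it can be isolated as a small driver. The following Python skeleton is ours; `cyclic_column_distance` is a placeholder for CCD-BZD or for any other routine computing δ_j^c. It stops as soon as m consecutive repetitions are observed, which is exactly the criterion justified by Theorem 18.

```python
def free_distance(cyclic_column_distance, m):
    """Return d_free once the cyclic column distance repeats m times in a row (Theorem 18)."""
    rep, j = 0, 0
    previous = 0          # plays the role of delta_{j-1}^c, also reused as a lower bound
    while rep < m:
        current = cyclic_column_distance(j, lower_bound=previous)
        if j > 0 and current == previous:
            rep += 1
        else:
            rep = 0
        previous = current
        j += 1
    return previous
```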

matrix. Therefore, Nl = im(· F lc ). Indeed, if we pick a ∈ A l+1 , then a ∈ Nl if, and only if, aF lc = a. On the other hand,



E lc

=

e0

e >0

0

σ −1 ( E lc−1 )



,

where e >0 = (σ −1 (e 1 ), σ −2 (e 2 ), . . . ). Setting



F lc =

0 0

>0 − 1 σ ( F lc−1 )



0 = 1 − e0 and >0 = −e>0 , we get

.

Therefore, if we write a = (a0 |a>0 ) ∈ A × Al , then a ∈ Nl if and only if a0 = a0 0 and a>0 = a0 >0 + a>0 σ −1 ( F lc−1 ). We thus get that a ∈ P l if, and only if, a = (0|a>0 ) with a>0 = a>0 σ −1 ( F lc−1 ). Hence, P l = {(0|b) ∈ A × Al : b ∈ im(·σ −1 ( F lc−1 )}. Algorithm 3 comprises these comments. By Row( M ) we mean the row space of a matrix M. Algorithm 4 computes the free distance of an ICC by means of the sequence of cyclic column distances. Its correctness is ensured by virtue of Theorem 18. In order to illustrate the execution of these algorithms, we provide some examples of idempotent convolutional codes. 5.3. Examples We will give some examples of construction of ICC codes and computation of their distances. The first non trivial cyclic structures on convolutional codes were introduced in Piret (1976). These primal idempotent ideal codes are covered by our theory. Let A = F[x]/(xn − 1), where n is coprime with the characteristic of F . Piret’s cyclic convolutional codes are left ideal codes of A [ z; σ ], where σ (x) = xr for some r relatively prime with n, see Roos (1979). According to Gómez-Torrecillas et al. (2017b, Proposition 14) or López-Permouth and Szabo (2013, Theorem 3.5), they are idempotent convolutional codes. The canonical basis of A is B = {1, x, . . . , xn−1 }. Since σ (xs ) = xrs mod n and (r , n) = 1, it follows that σ induces a permutation on B, hence σ is an isometry.


Example 33. Let A = F2 [x]/(x7 − 1) and polynomial





21

σ : A → A defined by σ (x) = x3 . Let R = A [z; σ ]. The skew







 = z3 x6 + x3 + x2 + x + z x6 + x4 + x + 1 + x4 + x2 + x + 1



is idempotent and generates an idempotent convolutional code C = R  whose parity check idempotent is













e = 1 −  = z3 x6 + x3 + x2 + x + z x6 + x4 + x + 1 + x4 + x2 + x , so m = deg(e ) = 3. From this parity check idempotent we may calculate the matrices E lc needed to compute the sequence {δlc }l≥0 . For instance, E 3c is the matrix



x4 + x2 + x ⎜ 0 ⎜ ⎝ 0 0

x6 + x5 + x2 + 1 x6 + x5 + x3 0 0



0 x4 + x3 + x2 + 1 x4 + x2 + x 0

x6 + x5 + x4 + x ⎟ 0 ⎟. 6 3 x +x +x+1 ⎠ x6 + x5 + x3

The sequence {δlc }l≥0 has to be computed until m + 1 = 4 consecutive values are equal. Concretely

l

δlc

0

1

2

3

4

5

6

7

8

9

10

4 6 8 8 10 10 10 12 12 12 12

Hence dfree (C ) = 12 by Theorem 18. As shown in Theorem 22, we have δ7c = δ7r with s = 2. So, if we use both the row and column distance sequences, we obtain

l

0

1

2

3

4

5

6

7

δlc

4

6

8

8

10

10

10

12

δlr

− − − 12 12 12 12 12

and 7 is the first index in which row and column distances coincide. Hence its free distance can also be alternatively computed by means of Propositions 13 and 16. Finally, Corollary 28 allows us to know a specific index l such that δlc = dfree (C ). In this example, E 3 = E 3c and



0

⎜x + x + x + x

T3 = ⎜ ⎝

6

3

0 x +x +x+1 6

0 0

2

4

x5 + x3 + x2 + x 0

0 0 0



0 0⎟ ⎟. 0⎠ x5 + x4 + x3 + x 0

Let K be the left kernel of



x6 + x3 + x2 + x ⎜ 0 ⎜ 6 ⎜ x + x4 + x + 1

⎜ ⎜ ⎜ ⎜ ⎜ ⎝

x4 + x2 + x 0 0 0

0

0 0

0 0 0



⎟ ⎟ ⎟ ⎟ 6 5 2 6 5 4 x +x +x +1 0 x + x + x + x⎟ ⎟, ⎟ 6 5 3 4 3 2 x +x +x +1 0 x +x +x ⎟ x6 + x3 + x + 1 ⎠ 0 x4 + x2 + x 0 0 x6 + x5 + x3 x +x +x +x 0 5

3

2

x5 + x4 + x3 + x

then d( K ∗ ) ≥ 1 + d( K ) = 1 + 3 = 4. Hence Corollary 28 says that dfree (C ) = δ(c3+1)(4+1)−1 = δ(r3+1)(4+1)−1 .
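The computations in Example 33 can be reproduced with a few lines of SageMath. The sketch below is ours (it assumes a Sage session and uses `**` for powers so that it is also plain Python syntax); it models A = F_2[x]/(x^7 − 1) with σ(x) = x^3, builds m(E_l^c) over F_2 through the right regular representation of Section 5, and evaluates δ_l^c by enumerating the left kernel, which is only practical while that kernel stays small.

```python
F = GF(2)
P = PolynomialRing(F, 'x'); x = P.gen()
n = 7
A = P.quotient(x**n - 1, 'xb'); xb = A.gen()      # A = F_2[x]/(x^7 - 1)

def sigma_inv(a, j):
    # sigma(x) = x^3, hence sigma^{-j}(x) = x^(5^j mod 7) because 3 * 5 = 15 = 1 (mod 7)
    return a.lift()(xb**pow(5, j, n))

def reg(a):
    # matrix of right multiplication by a in the basis 1, x, ..., x^6 (the map m of Section 5)
    return matrix(F, [(A(x**i) * a).lift().padded_list(n) for i in range(n)])

# parity check idempotent of Example 33: e = e_0 + z e_1 + z^2 e_2 + z^3 e_3
e = [xb**4 + xb**2 + xb, xb**6 + xb**4 + xb + 1, A(0), xb**6 + xb**3 + xb**2 + xb]
m = len(e) - 1

def E_c(l):
    zero = matrix(F, n, n)
    blocks = [[reg(sigma_inv(e[j - i], j)) if 0 <= j - i <= m else zero
               for j in range(l + 1)] for i in range(l + 1)]
    return block_matrix(F, blocks, subdivide=False)

def delta_c(l):
    M = E_c(l)
    weights = [v.hamming_weight() for v in M.left_kernel()
               if not v[:n].is_zero()]            # first block (the coordinate a_0) nonzero
    return min(weights)

print([delta_c(l) for l in range(3)])             # should reproduce the first entries 4, 6, 8
```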


22

Example 34. Let F = F2 and A = F[x]/x7 − 1. Let σ : A → A defined by σ (x) = x3 . It is easy to check that σ is indeed an algebra map, with inverse σ −1 (x) = x5 . Let R = A [ z; σ ],









 = z5 x4 + x3 + x2 + 1 + z x5 + x2 + x + 1 + x4 + x2 + x + 1 and









e = 1 −  = z5 x4 + x3 + x2 + 1 + z x5 + x2 + x + 1 + x4 + x2 + x, which are the idempotent generator and the parity check idempotent of the ICC C = R  . We have applied Algorithm 4 getting the following sequence of cyclic distances:

j

0 1 2 3

δ cj

4

5

6

7

8

9

10

11

4 6 8 8 8 10 12 12 12 12 12 12

c Hence dfree (C ) = δ6c = δ11 = 12.

Our next examples explore the idea of considering as word-ambient algebra a factor of a skew polynomial algebra. Example 35. Let F = F2 (a) be the field with 4 elements. Consider the left skew polynomial ring1 F[x; τ ], where τ is the Frobenius automorphism, and the quotient algebra A = F[x; τ ]/x2 − 1. Let R = A [ z] be the ring of standard polynomials over the algebra A, and consider e = e 0 + ze 1 , where e 0 = ax + a and e 1 = x + 1. It is easy to check that e is idempotent in R, so C = v( R (1 − e )) is an ICC in the sense of Definition 5. Observe that



m( E 0c ) =

a

a



,

a+1 a+1

so, clearly, δ0c = 2. In order to compute δ1c , following Algorithm 3, F 1c is extended to



a+1 ⎜a+1

m( F 1c ) = ⎜ ⎝

0 0

a a



1 1

1 1⎟

⎟.

a⎠ a

0 a+1 0 a+1

The last two rows generate the vector space P 1 described above, hence, computing its row reduced echelon form, we get a basis of P 1 . Concretely, this basis is {(0, 0, 1, a + 1)}. Now, we extend it to a basis of N 1 by adding the first (or the second) row. So we construct



G=

a+1 a 1 1 0 0 1 a+1



.

Applying Algorithm 2 we find that 1 = A 1 G and 2 = A 2 G, where



1 = 

2 =

1 a+1 0 a+1 0 0 1 a+1 a 1 a 0 0 a

0 1







,

A1 =

,

A2 =



a 0

a+1 1 0 a



a 1



,

.

Moreover, k1 = 2 and k2 = 2. An exhaustive search provides that d1 = 3 whilst d1 = 4. Therefore δ1c = 3.

1

i.e. the coefficients are written on the left and the multiplication is based on the rule xa = τ (a)x.


23

Similarly, we may compute a basis of N_2 by considering

[a+1, a, 1, 1, 0, 0; 0, 0, 1, a+1, 0, a+1; 0, 0, 0, 0, 1, a+1].

In this case, Γ_1 = A_1 G and Γ_2 = A_2 G, where

Γ_1 = [1, a+1, 0, a+1, 0, 1; 0, 0, 1, a+1, 0, a+1; 0, 0, 0, 0, 1, a+1],      A_1 = [a, a, 0; 0, 1, 0; 0, 0, 1],
Γ_2 = [a, 1, a, 0, 1, 0; 0, 0, a, 1, a, 0; 0, 0, 0, 0, a, 1],               A_2 = [a+1, 1, 1; 0, a, a; 0, 0, a],

which yields that δ_2^c = 4. Finally, Algorithm 3 outputs that δ_3^c = 4, so, by Theorem 18, the free distance is d_free(C) = 4.

Example 36. Let F = F_2(a) be the field with 2^5 elements, where a^5 + a^2 + 1 = 0. Consider the left skew polynomial ring F[x; τ], where τ is the Frobenius automorphism, and the quotient algebra A = F[x; τ]/(x^5 − 1). Let R = A[z] be the ring of standard polynomials over the algebra A, and consider e = e_0 + z e_1, where

e_0 = a^28 + a^21 x + a^11 x^2 + a^22 x^3 + a^13 x^4,
e_1 = a^14 + a^28 x + a^25 x^2 + a^19 x^3 + a^7 x^4.

It is easy to check that e is idempotent in R, so C = v(R(1 − e)) is an ICC in the sense of Definition 5. With the aid of SageMath (The Sage Developers, 2017), we have applied Algorithm 4, getting the following sequence of cyclic distances:

j        0   1    2    3
δ_j^c    5   9   10   10

Hence d_free(C) = δ_2^c = δ_3^c = 10. Observe that, in order to compute δ_3^c, one needs to work with E_3^c, a 20 × 20 matrix over F_{2^5}.

A third kind of word-ambient algebra A is that of the matrices over F. Let A = M_m(F) be the matrix algebra of order m over F. Consider the weight function on A obtained from the Hamming weight associated to the canonical basis of matrix units {E_ij | 0 ≤ i, j ≤ m − 1} according to Remark 7. By the Skolem–Noether Theorem, all F-automorphisms of A are inner, that is, for any automorphism σ of A there exists a nonsingular matrix U ∈ A such that σ(x) = U x U^{-1}. In order to obtain an isometry, U must be a permutation matrix. If the automorphism σ is separable in the sense of Gómez-Torrecillas et al. (2016, Definition 3.1), then every ideal code over R = A[z; σ] is an idempotent convolutional code. An efficient algorithm to decide whether σ is separable is designed in Gómez-Torrecillas et al. (2016, Section 4). In particular, taking σ to be the identity automorphism leads to a separable isometry with respect to the Hamming distance in A.
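As a quick sanity check of the last remarks, the following SageMath sketch conjugates every matrix of M_3(F_2) by a permutation matrix and verifies that the Hamming weight of the entries is preserved. The particular permutation chosen below, and the names U and wt, are ours and serve only as an illustration.

    # SageMath sketch: conjugation by a permutation matrix is a Hamming isometry of M_3(F_2)
    F = GF(2)
    A = MatrixSpace(F, 3)
    U = A([[0, 1, 0], [0, 0, 1], [1, 0, 0]])           # permutation matrix of a 3-cycle
    Uinv = U.inverse()
    wt = lambda X: sum(1 for entry in X.list() if entry != 0)
    assert all(wt(U * X * Uinv) == wt(X) for X in A)   # all 2^9 matrices of M_3(F_2)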



Example 37. Let A = M_3(F_2) and σ_U the inner automorphism defined by the matrix

U = [1, 0, 0; 0, 0, 1; 0, 1, 0].

This inner automorphism is a permutation of the positions of the matrix, so it is an isometry with respect to the Hamming distance. Let R = A[z; σ_U] and f the idempotent

f = z^5 [1, 0, 1; 1, 0, 1; 1, 0, 1] + z^4 [1, 1, 0; 1, 1, 0; 1, 1, 0] + z^3 [1, 0, 1; 1, 0, 1; 1, 0, 1] + z^2 [0, 1, 1; 0, 1, 1; 0, 1, 1] + z [1, 1, 0; 1, 1, 0; 1, 1, 0] + [1, 1, 1; 1, 1, 1; 1, 1, 1]

and C = Rf, whose parity check skew polynomial is e = 1 − f, i.e.

e = z^5 [1, 0, 1; 1, 0, 1; 1, 0, 1] + z^4 [1, 1, 0; 1, 1, 0; 1, 1, 0] + z^3 [1, 0, 1; 1, 0, 1; 1, 0, 1] + z^2 [0, 1, 1; 0, 1, 1; 0, 1, 1] + z [1, 1, 0; 1, 1, 0; 1, 1, 0] + [0, 1, 1; 1, 0, 1; 1, 1, 0]

and m = deg(e) = 5. As in Example 33, the next step consists in computing the matrices E_l^c and their corresponding δ_l^c. For example, E_5^c is



an 18 × 18 matrix over F_2,

and the first terms of the sequence {δ_l^c}_{l≥0} are

l        0   1   2   3   4   5   6   7   8   9   10   11   12   13   14   15   16   17
δ_l^c    3   4   5   5   5   6   7   7   8   9   10   10   11   11   11   11   11   11

Therefore d_free(C) = 11 by Theorem 18. Theorem 22 guarantees that δ_17^c = δ_17^r with s = 2. The computation of both sequences is

l        0   1   2   3   4    5    6    7    8    9   10   11   12
δ_l^c    3   4   5   5   5    6    7    7    8    9   10   10   11
δ_l^r    −   −   −   −   −   13   13   11   11   11   11   11   11

So d_free(C) can also be computed by means of Propositions 13 and 16, with d_free(C) = δ_12^c = δ_12^r = 11. Observe that in this example the row and column distance sequences meet before the index computed in Theorem 22. We could use Corollary 28 again to compute an index l such that δ_l^c = d_free(C). In this example, E_5 = E_5^c and




T_5 is an 18 × 18 matrix over F_2 whose first block row is zero.

An easy computation shows that d(K*) ≥ 1 + d(K), where K is the left kernel of the matrix obtained by stacking T_5' on top of E_5, being T_5' the matrix obtained from T_5 by deleting the first zero block row. With the aid of SageMath (The Sage Developers, 2017), d(K) = 2, hence d(K*) ≥ 3. Corollary 28 says that d_free(C) = δ_35^c = δ_35^r.

Example 38. Let A = M_3(F_2) and σ_U be the inner automorphism associated to the matrix

U = [1, 0, 1; 0, 1, 1; 0, 1, 0].

This is not an isometry with respect to the Hamming distance, but it is with respect to the rank distance.
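The following SageMath check illustrates the previous sentence with this U: conjugation can change the Hamming weight of a matrix, while its rank, and hence its rank weight, is always preserved. The test matrix X is an arbitrary choice made for the illustration.

    # SageMath check for Example 38: conjugation by U is not a Hamming isometry,
    # but it preserves the rank of every matrix; X is an arbitrary test matrix
    F = GF(2)
    U = matrix(F, [[1, 0, 1], [0, 1, 1], [0, 1, 0]])
    X = matrix(F, [[1, 0, 0], [0, 0, 0], [0, 0, 0]])
    Y = U * X * U.inverse()
    wt = lambda M: sum(1 for entry in M.list() if entry != 0)
    print(wt(X), X.rank())   # 1 1
    print(wt(Y), Y.rank())   # 3 1 -- the Hamming weight changes, the rank does not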



Let R = A[z; σ_U]. The idempotent f, a skew polynomial of degree 5 in z with coefficients in M_3(F_2) whose nonzero coefficients sit in degrees 0, 1, 4 and 5, generates an idempotent convolutional code C = Rf whose parity check idempotent skew polynomial is e = 1 − f.

The degree of e is m = 5. The first terms of the sequence of column distances {δ_l^c}_{l≥0} are

l        0   1   2   3   4   5   6   7   8   9   10   11   12   13
δ_l^c    1   2   2   2   3   3   3   3   4   4    4    4    4    4

so the free distance with respect to the rank distance is d_free(C) = 4 by Theorem 18. This distance could also be computed using Propositions 13 and 16, or Corollary 28, as we have done in Examples 33 and 37.


Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgement

The authors would like to thank Alfred Wassermann for his suggestion of adapting the Brouwer–Zimmermann algorithm in order to compute the distance of a difference set of vector spaces.

References

Betten, A., Braun, M., Fripertinger, H., Kerber, A., Kohnert, A., Wassermann, A., 2006. Error-Correcting Linear Codes. Algorithms and Computation in Mathematics, vol. 18. Springer.
Costello Jr., D.J., 1969. A construction technique for random-error-correcting convolutional codes. IEEE Trans. Inf. Theory 15 (5), 631–636.
Costello Jr., D.J., 1974. Free distance bounds for convolutional codes. IEEE Trans. Inf. Theory 20 (3), 356–365.
Estrada, S., García-Rozas, J.R., Peralta, J., Sánchez-García, E., 2008. Group convolutional codes. Adv. Math. Commun. 2 (1), 83–94.
Forney Jr., G.D., 1970. Convolutional codes I: algebraic structure. IEEE Trans. Inf. Theory 16 (6), 720–738.
Gluesing-Luerssen, H., Schmale, W., 2004. On cyclic convolutional codes. Acta Appl. Math. 82 (2), 183–237.
Gómez-Torrecillas, J., Lobillo, F.J., Navarro, G., 2015. Separable automorphisms on matrix algebras over finite field extensions: applications to ideal codes. In: Proceedings of the 2015 ACM on International Symposium on Symbolic and Algebraic Computation. ISSAC '15. ACM, New York, NY, USA, pp. 189–195.
Gómez-Torrecillas, J., Lobillo, F.J., Navarro, G., 2016. Convolutional codes with a matrix-algebra word ambient. Adv. Math. Commun. 10 (1), 29–43.
Gómez-Torrecillas, J., Lobillo, F.J., Navarro, G., 2017a. Computing separability elements for the sentence-ambient algebra of split ideal codes. J. Symb. Comput. 83, 211–227.
Gómez-Torrecillas, J., Lobillo, F.J., Navarro, G., 2017b. Ideal codes over separable ring extensions. IEEE Trans. Inf. Theory 63 (5), 2796–2813.
Gómez-Torrecillas, J., Lobillo, F.J., Navarro, G., 2018. Computing free distances of idempotent convolutional codes. In: Proceedings of the 2018 ACM International Symposium on Symbolic and Algebraic Computation. ISSAC '18. ACM, New York, NY, USA, pp. 175–182. URL http://doi.acm.org/10.1145/3208976.3208985.
Huffman, W.C., Pless, V., 2010. Fundamentals of Error-Correcting Codes. Cambridge University Press.
Johannesson, R., Zigangirov, K.S., 1999. Fundamentals of Convolutional Coding. Wiley-IEEE Press. URL http://eu.wiley.com/WileyCDA/WileyTitle/productCd-0780334833,miniSiteCd-IEEE2.html.
Lin, S., Costello Jr., D.J., 2004. Error Control Coding, second edition. Prentice-Hall, Inc., Upper Saddle River, NJ, USA.
López-Permouth, S.R., Szabo, S., 2013. Convolutional codes with additional algebraic structure. J. Pure Appl. Algebra 217 (5), 958–972.
Piret, P., 1976. Structure and constructions of cyclic convolutional codes. IEEE Trans. Inf. Theory 22 (2), 147–155. URL http://doi.ieeecomputersociety.org/10.1109/TIT.1976.1055531.
Roos, C., 1979. On the structure of convolutional and cyclic convolutional codes. IEEE Trans. Inf. Theory IT-25 (6), 676–683.
Rosenthal, J., Schumacher, J.M., York, E.V., 1996. On behaviors and convolutional codes. IEEE Trans. Inf. Theory 42 (6), 1881–1891.
Rosenthal, J., Smarandache, R., 1999. Maximum distance separable convolutional codes. Appl. Algebra Eng. Commun. Comput. 10 (1), 15–32.
The Sage Developers, 2017. SageMath, the Sage Mathematics Software System (Version 7.4). URL http://www.sagemath.org/.
Vardy, A., 1997. The intractability of computing the minimum distance of a code. IEEE Trans. Inf. Theory 43 (6), 1757–1766.