Revista de la Academia Colombiana de Ciencias Exactas, Físicas y Naturales. 43 (169): 764-770, Oct-Dec 2019. ISSN 0370-3908. DOI: 10.18257/raccefyn.928

Mathematics

Characteristic-Dependent Linear Rank Inequalities in 21 Variables

Victor Peña-Macias*, Humberto Sarria-Zapata

Departamento de Matemáticas, Facultad de Ciencias, Universidad Nacional de Colombia, Bogotá, Colombia

*Corresponding author: Victor Peña-Macias, vbpenam@unal.edu.co

Received: June 5, 2019. Accepted: September 24, 2019.

This is an open-access article distributed under the terms of the Creative Commons Attribution License.

Abstract

In linear algebra over finite fields, a characteristic-dependent linear rank inequality is a linear inequality that holds for the dimensions of sums of vector subspaces of a finite-dimensional vector space over a finite field of a given characteristic, and does not in general hold over fields of other characteristics. This paper shows a preliminary result in the production of these inequalities. We produce three new inequalities in 21 variables using as a guide a particular binary matrix whose rank, over a finite field, is 8 when the characteristic is 2; 9 when the characteristic is 3; and 10 when the characteristic is neither 2 nor 3. The first inequality is true over fields of characteristic 2; the second is true over fields of characteristic 2 or 3; the third is true over fields whose characteristic is neither 2 nor 3.


Key words: Entropy; Linear rank inequality; Binary matrix; Direct sum of vector spaces.
Introduction

In linear algebra over finite fields, a linear rank inequality is a linear inequality that is always satisfied by ranks (dimensions) of subspaces of a vector space over any field. Information inequalities form a subclass of the linear rank inequalities (A. Shen, et al., 2000). The Ingleton inequality is an example of a linear rank inequality which is not an information inequality (Ingleton, 1969); other inequalities have been presented in (Kinser, 2011), among others. A characteristic-dependent linear rank inequality is like a linear rank inequality, but it is satisfied only by vector spaces over fields of certain characteristics and does not in general hold over fields of other characteristics. In information theory, especially in linear network coding, all these inequalities are useful for calculating the linear rates of communication networks (Dougherty, et al., 2013). It is remarkable that the linear rate of a network depends on the characteristic of the scalar field of the vector space used by the network codes (Dougherty, et al., 2005; Dougherty, et al., 2013). Therefore, when we study linear rates over specific fields, characteristic-dependent linear rank inequalities are more useful than ordinary linear rank inequalities.

Characteristic-dependent linear rank inequalities have been presented in (Blasiak, et al., 2011; Dougherty, et al., 2013; Freiling, 2014). The technique of Dougherty et al. uses as a guide the network flow of certain matroidal networks to obtain restrictions on linear solvability; these restrictions imply the inequalities. This technique has produced many inequalities (Freiling, 2014). It differs from the technique of Blasiak et al., which derives inequalities directly from the dependency relations of the Fano and non-Fano matroids; that technique has produced only two inequalities. So we ask ourselves: can more inequalities be produced from other suitable representable matroids, following the ideas of Blasiak et al.?

In this paper, we answer affirmatively. Working through a particular case, we show a method for producing characteristic-dependent linear rank inequalities using a suitable binary matrix as a guide; we use the dependency relationships of its columns, which are naturally associated with matroid representations. The rank of the chosen matrix is 8 if its entries lie in a field of characteristic 2; 9 if the characteristic is 3; and 10 if the characteristic is neither 2 nor 3. We "convert" this property into three inequalities: the first is true over fields of characteristic 2; the second over fields of characteristic 2 or 3; the third over fields whose characteristic is neither 2 nor 3. The inequalities do not in general hold over fields of other characteristics. We hope that the techniques presented in this paper can be applied to other matrices whose rank behaves similarly to that of the described matrix.

The paper is organized as follows. We first introduce some mathematical tools from information theory. We then state the theorem that produces the described inequalities; before presenting the proof, we give some propositions and lemmas that will be helpful. Finally, we give the proof and some conclusions.

Entropy and inequalities in Linear Algebra

In the following, we introduce the concepts needed to understand this paper. Let A, B, A_1, ..., A_n be vector subspaces of a finite-dimensional vector space V over a finite field F. For I ⊆ [n] := {1, ..., n}, let A_I := Σ_{i∈I} A_i be the span of the A_i, i ∈ I. There is a correspondence between inequalities satisfied by dimensions of sums of vector subspaces and inequalities satisfied by entropies of a certain class of random variables induced by vector subspaces (A. Shen, et al., 2000, Theorem 2). We explain it: let f be chosen uniformly at random from the set of linear functions from V to F. For A_1, ..., A_n, define the random variables X_1 = f|_{A_1}, ..., X_n = f|_{A_n}. For I ⊆ [n], we have

H(X_i : i ∈ I) = log|F| · dim(Σ_{i∈I} A_i).

The random variables X_1, ..., X_n are called linear random variables over F. For simplicity, we identify the entropy of linear random variables with the dimension of the associated subspaces; i.e., H(A_i : i ∈ I), the entropy of A_i, i ∈ I, is dim(Σ_{i∈I} A_i). With this notation, the mutual information of A and B is given by I(A; B) = dim(A ∩ B). The codimension of A in V is given by codim_V(A) = dim(V) - dim(A). We have H(A | B) = codim_A(A ∩ B) = dim(A) - dim(A ∩ B). Conditional mutual information is expressed in a similar way.
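This dictionary between entropies and dimensions can be checked computationally. The sketch below is our own illustration (helper names are ours, not the paper's): a subspace is given by a list of generators over GF(p), dimensions are matrix ranks, and dim(A ∩ B) is recovered from dim A + dim B - dim(A + B).

```python
def rank_mod_p(rows, p):
    """Rank of an integer matrix over GF(p), p prime, by Gaussian elimination."""
    m = [[x % p for x in row] for row in rows]
    rank = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((r for r in range(rank, len(m)) if m[r][col]), None)
        if piv is None:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        inv = pow(m[rank][col], p - 2, p)   # inverse via Fermat's little theorem
        m[rank] = [x * inv % p for x in m[rank]]
        for r in range(len(m)):
            if r != rank and m[r][col]:
                f = m[r][col]
                m[r] = [(a - f * b) % p for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

def dim(gens, p):            # H(A) = dim A = rank of the generators
    return rank_mod_p(gens, p)

def dim_sum(A, B, p):        # H(A, B) = dim(A + B): rank of stacked generators
    return rank_mod_p(A + B, p)

def mutual_info(A, B, p):    # I(A;B) = dim(A ∩ B) = dim A + dim B - dim(A + B)
    return dim(A, p) + dim(B, p) - dim_sum(A, B, p)

# two planes in GF(2)^3
A = [[1, 0, 0], [0, 1, 0]]
B = [[0, 1, 0], [0, 0, 1]]
print(dim_sum(A, B, 2))      # 3: A + B is all of GF(2)^3
print(mutual_info(A, B, 2))  # 1: A ∩ B is the line spanned by e_2
```

H(A | B) is then dim(A) - mutual_info(A, B, p), matching the codimension formula above.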

We give the following definition in order to fix ideas about inequalities.

Definition 1. Let P be a proper subset of the primes, and let I_1, ..., I_k be subsets of [n]. Let α_i ∈ ℝ for 1 ≤ i ≤ k. A linear inequality of the form

Σ_{i=1}^{k} α_i H(X_{I_i}) ≥ 0

- is called a characteristic-dependent linear rank inequality if it holds for all jointly distributed linear random variables X_1, ..., X_n over finite fields with characteristic in P, and does not in general hold over fields with other characteristics.

- is called a linear rank inequality if it holds for all jointly distributed linear random variables over every finite field.

- is called an information inequality if it holds for all jointly distributed random variables.

By the definition of linear random variables, we note that any information inequality is also satisfied by dimensions of sums of vector subspaces. The following inequality was the first linear rank inequality shown not to be an information inequality.

Example 2. (Ingleton, 1969) For any subspaces A_1, A_2, A_3, A_4 of a finite-dimensional vector space,

I(A_1; A_2) ≤ I(A_1; A_2 | A_3) + I(A_1; A_2 | A_4) + I(A_3; A_4).
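Since the Ingleton inequality holds for every tuple of subspaces over any field, it can be stress-tested numerically. The sketch below is our own illustration (not from the paper): it evaluates both sides of the conditional form I(A_1;A_2) ≤ I(A_1;A_2|A_3) + I(A_1;A_2|A_4) + I(A_3;A_4) on random subspaces of GF(2)^4, using the linear-rank formula I(A;B|C) = dim(A+C) + dim(B+C) - dim(A+B+C) - dim(C).

```python
import random

def rank_mod_p(rows, p):
    """Rank of an integer matrix over GF(p), p prime, by Gaussian elimination."""
    m = [[x % p for x in row] for row in rows]
    rank = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((r for r in range(rank, len(m)) if m[r][col]), None)
        if piv is None:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        inv = pow(m[rank][col], p - 2, p)
        m[rank] = [x * inv % p for x in m[rank]]
        for r in range(len(m)):
            if r != rank and m[r][col]:
                f = m[r][col]
                m[r] = [(a - f * b) % p for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

def d(gens):
    # dimension of the span of `gens` inside GF(2)^4
    return rank_mod_p(gens, 2) if gens else 0

def cond_mi(A, B, C):
    # I(A;B|C) = dim(A+C) + dim(B+C) - dim(A+B+C) - dim(C); C = [] gives I(A;B)
    return d(A + C) + d(B + C) - d(A + B + C) - d(C)

random.seed(1)
def random_subspace(n=4):
    return [[random.randrange(2) for _ in range(n)]
            for _ in range(random.randrange(n + 1))]

violations = 0
for _ in range(300):
    A1, A2, A3, A4 = (random_subspace() for _ in range(4))
    lhs = cond_mi(A1, A2, [])
    rhs = cond_mi(A1, A2, A3) + cond_mi(A1, A2, A4) + cond_mi(A3, A4, [])
    violations += lhs > rhs
print(violations)  # 0: Ingleton holds for subspaces over any field
```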

We can think of a characteristic-dependent linear rank inequality as a linear rank inequality that is true only over some fields.

We say that A + B is a direct sum, denoted by A ⊕ B, if A ∩ B = O. In case V = A ⊕ B, the members of this sum are called (mutually) complementary subspaces in V. More generally, A_1, ..., A_n are mutually complementary subspaces in V if every vector of V has a unique representation as a sum of elements of A_1, ..., A_n. In this case, π_I denotes the I-projection function given by

π_I(a_1 + ... + a_n) = Σ_{i∈I} a_i, where a_i ∈ A_i.

The inequalities of the following lemmas, which we will use later, are valid for linear random variables satisfying some additional conditions.

We remark that we use the following notation for intervals: [j, k] := {i ∈ ℕ : j ≤ i ≤ k}, [k] := [1, k]. The sum A_j + ... + A_k is denoted by A_[j,k], and A_0 := A_∅ := O.

Lemma 3. For any subspaces A_1, ..., A_n, A'_1, ..., A'_n of a finite-dimensional vector space V such that A'_i ≤ A_i, we have

H(A_[n]) - H(A'_[n]) ≤ Σ_{i=1}^{n} [H(A_i) - H(A'_i)],

with equality if and only if A_{i+1} ∩ A_[i] = A'_{i+1} ∩ A'_[i] for all i.

Proof. By induction on n. In the case n = 2, we have

H(A_[2]) - H(A'_[2]) = [H(A_1) - H(A'_1)] + [H(A_2) - H(A'_2)] - [I(A_1; A_2) - I(A'_1; A'_2)] ≤ [H(A_1) - H(A'_1)] + [H(A_2) - H(A'_2)],

since I(A'_1; A'_2) ≤ I(A_1; A_2). The equality holds if and only if I(A_1; A_2) = I(A'_1; A'_2); in other words, A_1 ∩ A_2 = A'_1 ∩ A'_2, because A'_i ≤ A_i. Now suppose the case n - 1 is true. We have

H(A_[n]) - H(A'_[n]) = [H(A_[n-1]) - H(A'_[n-1])] + [H(A_n) - H(A'_n)] - [I(A_n; A_[n-1]) - I(A'_n; A'_[n-1])] ≤ Σ_{i=1}^{n} [H(A_i) - H(A'_i)],

by the induction hypothesis and I(A'_n; A'_[n-1]) ≤ I(A_n; A_[n-1]). The equality holds if and only if I(A_{i+1}; A_[i]) = I(A'_{i+1}; A'_[i]) for all i. Since A'_i ≤ A_i for all i, this means A_{i+1} ∩ A_[i] = A'_{i+1} ∩ A'_[i] for all i.

Lemma 4. For any subspaces A, B, C of a finite-dimensional vector space V such that B ≤ A, we have

H(B | C) ≤ H(A | C),

with equality if and only if A + C = B + C.

Proof. We have

H(B | C) = H(B, C) - H(C) ≤ H(A, C) - H(C) = H(A | C).

The equality holds if and only if H(A, C) = H(B, C). Since B ≤ A, this is equivalent to A + C = B + C.

Producing inequalities

Let B be the following 10 × 10 binary matrix:

We calculate the rank of the matrix B over different fields to find: rank(B) = 8 if the characteristic of F is 2; rank(B) = 9 if the characteristic is 3; and rank(B) = 10 if the characteristic is neither 2 nor 3.

For a column b_i of B, the set {j : b_{ji} = 1} ⊆ [10] is denoted by b̄_i; if there is no confusion, by abuse of notation, we identify b̄_i and b_i. The notation is the same for row and column vectors. We define:

where S is the set of the three pairs (1, 3), (4, 6), (7, 10).

The theorem of this paper shows three characteristic-dependent linear rank inequalities. The proof is guided by the matrix B; we chose this matrix because it is the smallest binary matrix we found whose rank differs over at least three different finite fields. We hope that the arguments presented in the proof can be used to produce other inequalities by taking matrices with properties similar to those of this matrix.

Theorem 5. Let A_1, ..., A_10, B_1, ..., B_10, C be vector subspaces of a finite-dimensional vector space V over a finite field F. The following inequalities are characteristic-dependent linear rank inequalities:

- If the characteristic of F is 2, then

- If the characteristic of F is 2 or 3, then

- If the characteristic of F is neither 2 nor 3, then

We remark that these inequalities do not in general hold over fields whose characteristic differs from the described characteristic. A counterexample can be given in V = GF(p)^10. Take the vector subspaces A_i = ⟨e_i⟩, the span of each vector of the canonical basis of V; B_i = ⟨b_i⟩, the span of each column of the matrix B; and C = ⟨(1, ..., 1)⟩, the span of the vector with 1 in all entries. Then, if p ≠ 2, the first inequality does not hold; if p ≠ 2, 3, the second inequality does not hold; and if p = 2 or 3, the third inequality does not hold.

The proof is given at the end of the section. First, we introduce some propositions and lemmas that will help in its development.

Let V = A_1 ⊕ ... ⊕ A_n, and take a vector subspace C of V such that A_1 + ... + A_{i-1} + C + A_{i+1} + ... + A_n is a direct sum for all i. We say that (A_1, ..., A_n, C) is a tuple of complementary vector subspaces in V.

Proposition 6. A subspace C as described above satisfies H(π_I(C)) = H(C) ≤ H(A_I), for all ∅ ≠ I ⊆ [n].

Proof. Suppose that π_I(x) = O for some x ∈ C. Then x ∈ Σ_{i∉I} A_i. By the property of the tuple of complementary vector subspaces, x = O. In other words, π_I(C) and C are isomorphic, so they have the same dimension. Moreover, π_I(C) ≤ A_I, hence H(C) = H(π_I(C)) ≤ H(A_I).

As a consequence, a non-zero element of C can be written uniquely as a sum of non-zero elements of A_1, ..., A_n.
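Proposition 6 can be seen concretely when the A_i are the coordinate axes of GF(p)^n and C is spanned by the all-ones vector; the I-projection then simply zeroes the coordinates outside I. The following is our own illustration, not part of the paper:

```python
import itertools

def rank_mod_p(rows, p):
    """Rank of an integer matrix over GF(p), p prime, by Gaussian elimination."""
    m = [[x % p for x in row] for row in rows]
    rank = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((r for r in range(rank, len(m)) if m[r][col]), None)
        if piv is None:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        inv = pow(m[rank][col], p - 2, p)
        m[rank] = [x * inv % p for x in m[rank]]
        for r in range(len(m)):
            if r != rank and m[r][col]:
                f = m[r][col]
                m[r] = [(a - f * b) % p for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

n, p = 3, 5
C = [[1, 1, 1]]   # C = <(1,1,1)> inside GF(5)^3 = <e1> ⊕ <e2> ⊕ <e3>

def project(gens, I):
    # I-projection when the A_i are the coordinate axes: zero coordinates not in I
    return [[v[j] if j in I else 0 for j in range(n)] for v in gens]

for size in (1, 2, 3):
    for I in itertools.combinations(range(n), size):
        # H(pi_I(C)) = H(C) = 1, and 1 <= H(A_I) = |I|
        assert rank_mod_p(project(C, set(I)), p) == rank_mod_p(C, p) == 1
print("H(pi_I(C)) = H(C) for every nonempty I")
```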

Consider any n × m binary matrix B', with columns denoted by b_i. Let π_{b_i} be the b̄_i-projection of V = A_1 ⊕ ... ⊕ A_n. For each i, take the vector b_{I_i} whose j-th entry is 1 if e_j ∈ A_k for some k such that the k-th entry of b_i is 1, and 0 otherwise, where (e_j)_j is the canonical basis of V.

We have the following proposition.

Proposition 7. Let V = A_1 ⊕ ... ⊕ A_n, with A_i ≠ O, and let B' be an n × m binary matrix. For all i and I, we have b_i = Σ_{j∈I} b_j if and only if b_{I_i} = Σ_{j∈I} b_{I_j}.

Proof. We note if and only if .

Now, for x ∈ V,

The other implication is obtained from

Example 8. One can check that b_1 = b_2 + b_3 and b_{I_1} = b_{I_2} + b_{I_3}.

As a consequence of Proposition 6, if each A_i and C are one-dimensional vector spaces, then C is isomorphic to the span of (1, ..., 1), and each π_{b_i}(C) is isomorphic to ⟨b_i⟩. From this and the last proposition, we obtain the following corollary, a stronger version of this fact:

Corollary 9. Let (A_1, ..., A_n, C) be a tuple of complementary vector subspaces in V over F with C ≠ O, and let B' be an n × m binary matrix with columns denoted by b_i. Then {b_i}_{i∈I} is a linearly independent set if and only if Σ_{i∈I} π_{b_i}(C) is a direct sum.
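Corollary 9 ties column independence to the direct-sum condition. A sketch under the simplest assumptions (A_i = ⟨e_i⟩, C = ⟨(1, ..., 1)⟩, and J_4 - I_4 standing in for B'; all our own choices, not the paper's matrix): each π_{b_i}(C) is spanned by the 0/1 indicator vector of b̄_i, i.e. by the column b_i itself, so the sum of the projections is direct exactly when the columns are linearly independent.

```python
def rank_mod_p(rows, p):
    """Rank of an integer matrix over GF(p), p prime, by Gaussian elimination."""
    m = [[x % p for x in row] for row in rows]
    rank = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((r for r in range(rank, len(m)) if m[r][col]), None)
        if piv is None:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        inv = pow(m[rank][col], p - 2, p)
        m[rank] = [x * inv % p for x in m[rank]]
        for r in range(len(m)):
            if r != rank and m[r][col]:
                f = m[r][col]
                m[r] = [(a - f * b) % p for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

# columns of B' = J_4 - I_4; each column's support has size 3
cols = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]

# With A_i = <e_i> and C = <(1,1,1,1)>, pi_{b_i}(C) = <b_i>, so the sum of all
# four projections is direct iff the four columns are linearly independent.
for p in (2, 3, 5):
    print(p, rank_mod_p(cols, p) == len(cols))
# characteristic 3 is the exception: the four columns sum to (3,3,3,3) = 0
```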

If we take B' := B, the 10 × 10 binary matrix defined above, in the last corollary, we obtain:

Corollary 10. If (A_1, ..., A_10, C) is a tuple of complementary vector subspaces of V, then

In the rest of the paper, we work only with the matrix B. Corollary 10 is used next to find three inequalities which are true over vector spaces satisfying certain conditions. We note that the desired conditions are described by the dependency relationships of the columns of B.

Proposition 11. Let A_1, ..., A_10, B_1, ..., B_10, C be vector subspaces of a finite-dimensional vector space V over a finite field F such that (A_1, ..., A_10, C) is a tuple of complementary vector subspaces. Consider the following conditions:

We have the following implications:

1. If conditions (i) and (ii) hold, and the characteristic of F is 2, then

2. If conditions (i) and (ii) hold, and the characteristic of F is 3, then

3. If conditions (i) and (iii) hold, and the characteristic of F is neither 2 nor 3, then

Proof. To prove 1 and 2, we can use

Therefore, using conditions (i) and (ii), we have B_i ≤ π_{b_i}(C); then we apply the last corollary. To prove 3, using conditions (i) and (iii), we can derive π_{b_i}(C) ≤ B_i; then we apply the last corollary.

Define Ā_1 := A_1 and, for i > 1, take a vector subspace of A_i which is complementary to A_{[i-1]} ∩ A_i; denote this subspace by Ā_i. We have

Let C(0) := V' ∩ C. Recursively, for each i, denote by C(i) a subspace of C(i-1) which is a complementary subspace to Ā_i ∩ C(i-1) in C(i-1).

Let Ċ denote the subspace C(10). The tuple (Ā_1, ..., Ā_10, Ċ) is not unique, but from now on we fix one of these tuples.

Lemma 12. (Ā_1, ..., Ā_10, Ċ) is a tuple of complementary vector subspaces in V' that satisfies the following inequalities:

Proof. The given tuple is a tuple of complementary vector subspaces in V' by definition. Furthermore, using Lemma 3, we have the following inequalities:

One can use all these inequalities along with the definitions of B(C), B(A), ∇_B(A) (given previously) to obtain the described inequalities.

Lemma 13. For each i, let

These subspaces satisfy conditions (i) and (ii) of Proposition 11, and the following inequality holds:

Proof. The subspaces satisfy conditions (i) and (ii) of Proposition 11 by definition. Also, we have

Then we apply the inequalities from Lemma 12.

Lemma 14. For each i, let

These subspaces satisfy conditions (i) and (iii) of Proposition 11, and the following inequality holds:

Proof. The subspaces satisfy conditions (i) and (iii) of Proposition 11 by definition. Also, we have

Hence,

Then we apply the inequalities from Lemma 12 and the inequality presented at the beginning of the proof.

Proof of Theorem 5

To prove inequality 1, note from Lemma 13 that the vector subspaces satisfy Proposition 11 with condition 1 when the characteristic of the field F is 2. We have

One can note H(C) ≤ I(A_[10]; C), and the inequality given in Lemma 13 can be written as

Using all these inequalities, we obtain the desired inequality.

Inequality 2 is obtained in a similar way; and from inequality 1, it is easy to see that inequality 2 also holds over fields whose characteristic is 2.

To prove inequality 3, note from Lemma 14 that the vector subspaces satisfy Proposition 11 with condition 3 when the characteristic of the field F is neither 2 nor 3. We have

One can note , and the inequality given in Lemma 14 can be written as

Using all these inequalities, we obtain the desired inequality.

Conclusions

In this paper, we produced three characteristic-dependent linear rank inequalities in 21 variables using as a guide a suitable binary matrix whose rank differs over at least three different fields (specifically, the rank depends on the characteristic of the field). The first inequality is true over fields of characteristic 2; the second over fields of characteristic 2 or 3; and the third over fields whose characteristic is neither 2 nor 3. We hope that the technique presented here can be used to produce other inequalities by choosing other suitable matrices. In future work, the independence or dependence of these inequalities and their possible applications to network coding should be studied.

Acknowledgments

The first author thanks the support provided by COLCIENCIAS, and the second author thanks the support provided by Universidad Nacional de Colombia.

References

Blasiak A., Kleinberg R., Lubetzky E. (2011). Lexicographic products and the power of non-linear network coding. Foundations of Computer Science (FOCS), 2011 IEEE 52nd Annual Symposium on. 609-618.

Dougherty R., Freiling C., Zeger K. (2005). Insufficiency of linear coding in network information flow. IEEE Transactions on Information Theory. 51 (8): 2745-2759.

Dougherty R., Freiling C., Zeger K. (2013). Achievable rate regions for network coding. IEEE Transactions on Information Theory. 61 (5): 2488-2509.

Freiling E.F. (2014). Characteristic dependent linear rank inequalities and applications to network coding. Ph.D. thesis. San Diego, The United States: University of California.

Ingleton W. (1969). Representation of matroids. Combinatorial Mathematics and its Applications. Oxford. 149-167.

Kinser R. (2011). New inequalities for subspace arrangements. Journal of Combinatorial Theory, Series A. 118 (1): 152-161.

Shen A., Hammer D., Romashchenko A.E., Vereshchagin N.K. (2000). Inequalities for Shannon entropy and Kolmogorov complexity. Journal of Computer and Systems Sciences. 60: 442-464.


The first author created the central idea of the manuscript and both authors contributed to its development and writing.

The authors declare that they have no conflict of interest.