The Journal of China Universities of Posts and Telecommunications
June 2010, 17(3): 91–96
www.sciencedirect.com/science/journal/10058885
http://www.jcupt.com
Variable-rate convolutional network coding

MA Song-ya1,2,3,4, CHEN Xiu-bo1,2,3, LUO Ming-xing1,2,3, YANG Yi-xian1,2,3

1. Information Security Center, State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China
2. Key Laboratory of Network and Information Attack and Defense Technology, Ministry of Education, Beijing University of Posts and Telecommunications, Beijing 100876, China
3. National Engineering Laboratory for Disaster Backup and Recovery, Beijing University of Posts and Telecommunications, Beijing 100876, China
4. School of Mathematics and Information Sciences, Henan University, Kaifeng 475004, China
Abstract

Four types of variable-rate convolutional network codes are investigated over a single-source finite cyclic network. It is found that variable-rate convolutional generic, dispersion and broadcast can be implemented on the same network without changing the local encoding kernels of the non-source nodes. This efficient implementation has the advantage that each non-source node only needs to store one copy of the local encoding kernel within a session. However, it is also shown by an example that variable-rate convolutional multicast may not always be implemented under the above condition.

Keywords variable-rate, convolutional generic, convolutional dispersion, convolutional broadcast, convolutional multicast
1 Introduction
Ahlswede et al. [1] proved that the multicast rate can be increased by network coding. Li et al. [2] proved that linear network coding can achieve the max-flow bound of multicast. Furthermore, Jaggi et al. [3] devised a polynomial-time algorithm to construct capacity-achieving linear network codes. Yeung et al. [4] defined four types of linear network codes for single-source finite acyclic networks: multicast, broadcast, dispersion and generic. These four types of linear network codes possess properties of increasing strength. Tan et al. [5] formulated these four types under a unified framework by means of regular independent sets. This unified framework gives a more accurate description of the linear independence structure of linear network codes and is very useful for understanding the four types. Over an acyclic network, the upstream-to-downstream partial order among nodes is easily constructed. The encoding and transmission of each message generated by the source are independent, so each individual message, instead of a whole stream, can be processed separately in linear network coding. In this way, propagation delay can be disregarded and the transmission medium is purely in the space domain.

Over a cyclic network, sequential messages may convolve together through transmission. For this reason, feedback is allowed in the encoder at each node. Propagation delay is an inseparable issue, and the transmission medium is the combined space-time domain. Yeung et al. [4] described single-source unit-delay convolutional network coding to deal with the propagation of a message pipeline over cyclic networks. Erez and Feder [6] first constructed network codes for cyclic networks. Li et al. [7] gave the ring-theoretic foundation of convolutional network coding, which formulates a general abstract theory of convolutional network coding. Four types of optimal convolutional network codes at various levels of strength are also introduced and constructed in Ref. [7] for delivering the maximal possible data rates.

Various properties of network codes with a fixed rate have been investigated. Meanwhile, there are concerns with variable-rate network coding over acyclic networks. Fong et al. [8] analyzed the linkage among linear broadcasts of variable rate, without changing the local encoding kernels of the non-source nodes, on a single-source finite acyclic network. Vieira et al. [9] investigated the interaction between network coding and link-layer transmission rate diversity in multi-hop wireless networks, and presented a linear programming model to compute the maximal throughput that a multicast application can achieve with network coding in a rate-diverse wireless network. Goseling et al. [10] constructed a single code that enables the source to control the throughput while always achieving the minimum possible cost per transmitted symbol. Lakshminarayana et al. [11] discussed practical multi-rate multicasting strategies without a given subgraph; their scheme identifies the optimal routes and provides the utility-maximizing rate allocation and coding solution. Xia et al. [12] discussed multi-rate linear network coding for minimum cost in wireless mesh networks and proposed a scheme for efficient implementation. Wang et al. [13] investigated the minimum transmission time encoding problem in multi-rate wireless networks; it was formulated as a minimum weighted clique partition of a graph and proven to be NP-complete. Based on the work of Ref. [8], Ma et al. [14] gave a unified result for variable-rate linear network coding. However, few results focus on variable-rate network coding over cyclic networks.

This article mainly investigates variable-rate network codes on a single-source finite cyclic network. The four types of variable-rate convolutional network codes are investigated. It is found that variable-rate convolutional generic, dispersion and broadcast can be implemented without changing the local encoding kernels of the non-source nodes, while variable-rate convolutional multicast may not always be implemented under this condition, as shown by an example in a later section.

The remainder of this article is organized as follows. Sect. 2 presents the preliminaries, including notations, basic algebra, and a brief description of the four types of optimal convolutional network codes. Sect. 3 presents the main results, which supply efficient implementations of variable-rate convolutional network codes for generic, dispersion and broadcast. Sect. 4 gives an example to illustrate the technique developed in Sect. 3. Sect. 5 concludes this article.

Received date: 09-07-2009
Corresponding author: CHEN Xiu-bo, E-mail: [email protected]
DOI: 10.1016/S1005-8885(09)60462-3
2 Preliminaries

2.1 Notations
Throughout this article, R refers to a commutative ring with identity, and D refers to a discrete valuation ring (DVR). The symbol alphabet is a finite field denoted by F. F[z] refers to the polynomial ring in z over F. F[(z)] denotes the set of rational power series, which is a principal ideal domain (PID). An element of F[(z)] is of the form p(z)/[1 + z q(z)], where p(z) and q(z) are polynomials in F[z]; it is called a rational power series because it can be uniquely expanded as a power series. F(z) refers to the field of rational functions in z over F. An element of F(z) is of the form p(z)/q(z), where p(z) and q(z) ≠ 0 are polynomials in F[z].

2.2 Modules over a principal ideal domain
The submodule spanned by a subset W of an R-module M is the set of all linear combinations of elements of W:
⟨W⟩ = {r_1 v_1 + … + r_n v_n | r_i ∈ R, v_i ∈ W}   (1)
A subset W ⊆ M is said to span M if M = ⟨W⟩. A subset W of an R-module M is a basis if it is R-linearly independent and spans M. In fact, not all modules have a basis. If an R-module M has a basis W, then W is both a minimal spanning set and a maximal linearly independent set. An R-module M is said to be free if it has a basis. For example, F[(z)]^ω is a free module consisting of all the ω-dimensional column vectors over F[(z)], and ξ_1 = [1, 0, …, 0]^T, …, ξ_ω = [0, 0, …, 1]^T is the standard basis for F[(z)]^ω. If M is a free module over a PID R, then any two bases of M have the same cardinality. Define rank(M) to be the cardinality of any basis for M. Any submodule N of M is also free. Moreover, rank(N) ≤ rank(M).

2.3 Four types of optimal convolutional network codes over a cyclic network

A cyclic network is represented by a finite directed cyclic graph G = (V, E), where V is the set of nodes and E is the set of edges. Each edge in E represents a noiseless communication channel with unit capacity. The unique source node is denoted by S. The sets of incoming and outgoing edges of a node t are denoted In(t) and Out(t), respectively. Denote tail(e) = t if edge e is an outgoing edge of node t, and head(e) = t if e is an incoming edge of node t. When the source node S transmits information at rate ω, a set of ω imaginary incoming edges is installed at S. A pair of edges (d, e) is called an adjacent pair when there exists a node t with d ∈ In(t) and e ∈ Out(t). A sequence of edges e_1, …, e_n, where e_1 may be an imaginary channel, forms a
path if (e_i, e_{i+1}) is an adjacent pair for all 1 ≤ i ≤ n−1. Two paths are edge-disjoint if they do not have any edge in common. A set of edges is called an independent set if each edge is on a path originating from an imaginary channel and these paths are edge-disjoint.

Over a single-source cyclic network, the message stream generated by the source becomes a row vector over the integral domain F[[z]] of power series, where z symbolizes a unit-time delay. Yeung et al. [4] described unit-delay convolutional network coding, in which the global encoding kernels are restricted to F[(z)]. On the one hand, the global encoding kernels are restricted to F[(z)] instead of F[[z]] so that they can be implemented with finite memory. On the other hand, if the coding kernels were all polynomials, the corresponding decoding kernels could be non-polynomials. Li et al. [7] showed that the essential algebraic structure that makes F[(z)] a proper domain for coding/decoding kernels is that it is a DVR; a general abstract theory is formulated over a DVR and does not confine convolutional network coding to the combined space-time domain. In this article, for convenience, the authors adopt the unit-delay network of Ref. [4] as the model. The global description of a convolutional network code is as follows:

Definition 1 [4] Let ω be a positive integer. An ω-dimensional F-valued convolutional network code on a unit-delay cyclic network consists of an element k_{d,e}(z) ∈ F[(z)] for every adjacent pair (d, e) in the network, as well as an ω-dimensional column vector f_e(z) over F[(z)] for every edge e in the network, such that:
1) f_e(z) = z Σ_{d∈In(t)} k_{d,e}(z) f_d(z), where e ∈ Out(t).
2) The vectors f_e(z) for the ω imaginary channels e ∈ In(S) form the standard basis of the vector space F^ω.

The vector f_e(z) is called the global encoding kernel for edge e; the local encoding kernel at node t refers to the |In(t)| × |Out(t)| matrix K_t(z) = (k_{d,e}(z))_{d∈In(t), e∈Out(t)}. Subject to the recursion and the boundary condition, there is a unique way to assign an ω-dimensional column vector over F[(z)] as the global encoding kernel for each edge e. An ω-dimensional F-valued convolutional network code can be considered to transmit messages at a rate of ω data units from the source per unit time. Let the source generate a message x(z) in the form of an ω-dimensional row vector over F[[z]]. A node t receives the symbols x(z) f_d(z), d ∈ In(t), from which it calculates the symbol x(z) f_e(z) for sending onto each edge e ∈ Out(t) via the linear formula
x(z) f_e(z) = z Σ_{d∈In(t)} k_{d,e}(z) (x(z) f_d(z))   (2)
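The unique global encoding kernels of Definition 1 can be computed coefficient by coefficient: every pass through the recursion multiplies by z, so iterating N times fixes the first N power-series coefficients. A minimal sketch in Python over GF(2), assuming a hypothetical toy unit-delay network (one source edge a into node t, and a directed cycle b: t→u, c: u→t, with all local encoding kernels equal to 1):

```python
# Truncated power-series model over GF(2): a series is a list of N
# coefficients [a0, a1, ...], and multiplying by z shifts it right.
N = 8

def shift(s):               # multiply a series by z (one unit of delay)
    return [0] + s[:-1]

def add(s, t):              # coefficient-wise addition over GF(2)
    return [(a + b) % 2 for a, b in zip(s, t)]

# Toy network, omega = 1: the recursion of Definition 1 reads
#   f_a = z * 1,  f_b = z (f_a + f_c),  f_c = z f_b.
f_a = shift([1] + [0] * (N - 1))        # kernel of the source edge a
f_b = [0] * N
f_c = [0] * N

# Each pass multiplies by z, so after N passes the first N
# coefficients of the unique fixed point have converged.
for _ in range(N):
    f_b = shift(add(f_a, f_c))
    f_c = shift(f_b)

print(f_b)   # z^2/(1 - z^2) = z^2 + z^4 + ... -> [0, 0, 1, 0, 1, 0, 1, 0]
```

For this toy cycle the fixed point is f_b(z) = z²/(1 − z²), a rational power series in F[(z)], as Sect. 2.1 requires.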
Denote by V_t the F[(z)]-module generated by {f_e(z): e ∈ In(t)}, and by V_℘ the F[(z)]-module generated by {V_t : t ∈ ℘}. These are submodules of the free module F[(z)]^ω and hence are free modules. For a set of edges ξ, denote its corresponding global encoding kernels by K(ξ) = {f_e(z), e ∈ ξ}. The data rate delivered from the source to any set of non-source nodes is bounded by the maximum flow, as in network flow theory. The focus is on convolutional network codes that achieve this intrinsic bound. Li et al. [7] described four types of optimal convolutional network codes and proved their existence.

Definition 2 [7] A D-convolutional network code is said to qualify as a D-convolutional multicast, D-convolutional broadcast, D-convolutional dispersion or D-convolutional generic, respectively, if the corresponding statement holds:
1) rank(V_t) = ω for every non-source node t with maxflow(t) ≥ ω.
2) rank(V_t) = min{maxflow(t), ω} for every non-source node t.
3) rank(V_℘) = min{maxflow(℘), ω} for every collection ℘ of non-source nodes.
4) For any independent set ξ with ω edges, the global encoding kernels K(ξ) are linearly independent.

Li et al. [7] proved the consistency of this convolutional multicast with the definition in Ref. [4], which is defined via the existence of a decoding matrix.
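The rank conditions of Definition 2 come down to testing linear independence of global encoding kernels over F[(z)]; since F[(z)] is an integral domain, ω kernels are independent exactly when the determinant of the matrix they form is non-zero. A small sketch in Python over GF(2)[z], with hypothetical kernels for ω = 2 (not taken from any particular network):

```python
# Polynomials over GF(2) as coefficient lists; independence of omega
# kernel vectors over F[(z)] is certified by a non-zero determinant.
def poly_mul(a, b):                      # convolution with coefficients mod 2
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] ^= x & y
    return out

def poly_add(a, b):                      # over GF(2), subtraction = addition
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return [x ^ y for x, y in zip(a, b)]

def det2(m):                             # determinant of a 2x2 polynomial matrix
    return poly_add(poly_mul(m[0][0], m[1][1]), poly_mul(m[0][1], m[1][0]))

# Hypothetical global kernels for omega = 2: f1 = [z, 0]^T, f2 = [z^2, z]^T.
z = [0, 1]
f1 = [z, [0]]
f2 = [poly_mul(z, z), z]
d = det2([[f1[0], f2[0]], [f1[1], f2[1]]])
print(any(d))   # True: det = z^2 != 0, so the two kernels are independent
```

A non-zero determinant here would certify rank(V_t) = 2 for a node t whose two incoming kernels are f_1 and f_2.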
3 Main results
For an ω-dimensional convolutional network code, the authors only consider the case |Out(S)| ≥ ω; otherwise the problem degenerates, because no node in the network can receive the message generated at the source node.

Lemma 1 An ω-dimensional F[(z)]-convolutional network code is given on a single-source finite directed cyclic network. Denote by I_{ω−1} the (ω−1) × (ω−1) identity matrix. Let f_e(z) be the global encoding kernel for edge e ∈ E, and let b be an arbitrary (ω−1)-dimensional column vector over F[(z)]. Let
f_e^{ω−1}(z) = [I_{ω−1}  b] f_e(z)   (3)
for all the non-imaginary channels e. Then f_e^{ω−1}(z), e ∈ E,
constitute the global encoding kernels of an (ω−1)-dimensional convolutional network code. In particular, the local encoding kernel of this (ω−1)-dimensional convolutional network code at every non-source node is the same as that of the original ω-dimensional convolutional network code.

Proof With an approach similar to that of Lemma 1 in Ref. [8], one can obtain the lemma.

Lemma 2 Let L_1, …, L_s be non-trivial submodules of F[(z)]^ω with rank(L_i) = k_i ≤ ω−1, 1 ≤ i ≤ s, where s is a positive integer. Then there exists some α = [α_1, …, α_{ω−1}, −1]^T ∈ F[(z)]^ω such that α ∉ L_i, 1 ≤ i ≤ s.

Proof Denote by F(z)^ω the ω-dimensional linear space consisting of all the ω-dimensional column vectors over F(z), where F(z) is the field of rational functions in z over F. From F[(z)] ⊂ F(z), one easily obtains F[(z)]^ω ⊂ F(z)^ω. Assume ξ_{i1}, …, ξ_{ik_i} to be a basis of L_i. Let
V_i = {Σ_{h=1}^{k_i} g_{ih} ξ_{ih} | g_{ih} ∈ F(z)}; 1 ≤ i ≤ s   (4)
Obviously, V_1, …, V_s are non-trivial subspaces of F(z)^ω. Since L_i ⊆ V_i, 1 ≤ i ≤ s, to prove Lemma 2 it suffices to show that there exists some α = [α_1, …, α_{ω−1}, −1]^T ∈ F[(z)]^ω such that α ∉ V_i, 1 ≤ i ≤ s.

The proof is completed by induction on s. For s = 1, suppose
{[γ_1, …, γ_{ω−1}, −1]^T | γ_1, …, γ_{ω−1} ∈ F[(z)]} ⊆ V_1   (5)
specially, [0, …, 0, −1]^T ∈ V_1. Since V_1 is a subspace, it then contains all differences of the vectors in Eq. (5):
{[γ_1, …, γ_{ω−1}, 0]^T | γ_1, …, γ_{ω−1} ∈ F[(z)]} ⊆ V_1   (6)
Together with [0, …, 0, 1]^T ∈ V_1, one can obtain F[(z)]^ω ⊆ V_1 and, furthermore, F(z)^ω ⊆ V_1. This contradicts the fact that V_1 is a non-trivial subspace of F(z)^ω. Therefore, there exists some α = [α_1, …, α_{ω−1}, −1]^T ∈ F[(z)]^ω such that α ∉ V_1.

Assume that for s = u−1 there exists some α = [α_1, …, α_{ω−1}, −1]^T ∈ F[(z)]^ω such that α ∉ V_i, 1 ≤ i ≤ u−1. Now consider the case s = u. It is discussed in two cases.
1) α ∉ V_u. Then α is the element we want to find.
2) α ∈ V_u. Since V_u is a non-trivial subspace of F(z)^ω, then
{[γ_1, …, γ_{ω−1}, 0]^T | γ_1, …, γ_{ω−1} ∈ F(z)} ⊄ V_u   (7)
Choose a non-zero vector η = [p_1(z)/q_1(z), …, p_{ω−1}(z)/q_{ω−1}(z), 0]^T ∉ V_u; then for any non-zero element r ∈ F(z), rη ∉ V_u. Specially, [Π_{i=1}^{ω−1} q_i(z)] η ∉ V_u, but [Π_{i=1}^{ω−1} q_i(z)] η ∈ F[(z)]^ω. Thus, one can find some non-zero element β = [β_1, …, β_{ω−1}, 0]^T ∈ F[(z)]^ω satisfying β ∉ V_u. Let k_p = z^p ∈ F[(z)]. Then any two elements in {α + k_p β ∈ F[(z)]^ω | p = 0, 1, …} are mutually different, and α + k_p β ∉ V_u. Moreover, any two elements α + k_l β and α + k_j β cannot lie in the same V_i, 1 ≤ i ≤ u−1; otherwise (k_l − k_j)β ∈ V_i, hence β ∈ V_i, which leads to α ∈ V_i. This is a contradiction. Therefore, one can find some α* = α + k_q β ∈ F[(z)]^ω whose last component equals −1 and α* ∉ V_i, 1 ≤ i ≤ u. This completes the induction.

Lemma 3 Let ω and m be positive integers such that ω ≥ 2 and m ≤ ω−1. Let c_1, …, c_m ∈ F[(z)]^ω be m linearly independent vectors, and
d_i = [I_{ω−1}  b] c_i; 1 ≤ i ≤ m   (8)
where b ∈ F[(z)]^{ω−1}. Then d_1, …, d_m are F[(z)]-linearly independent if and only if
[b^T, −1]^T ∉ ⟨c_1, …, c_m⟩   (9)

Proof To facilitate the discussion, denote
c_i = [h_i^T, k_i]^T; 1 ≤ i ≤ m   (10)
where h_i ∈ F[(z)]^{ω−1} and k_i ∈ F[(z)]. With regard to the PID F[(z)], d_1, …, d_m are F[(z)]-linearly independent if and only if [d_1^T, 0]^T, …, [d_m^T, 0]^T, [b^T, −1]^T are F[(z)]-linearly independent. Since d_i = h_i + k_i b, i.e.,
c_i = [d_i^T, 0]^T − k_i [b^T, −1]^T; 1 ≤ i ≤ m   (11)
therefore [d_1^T, 0]^T, …, [d_m^T, 0]^T, [b^T, −1]^T are F[(z)]-linearly independent if and only if c_1, …, c_m, [b^T, −1]^T are F[(z)]-linearly independent. Note that c_1, …, c_m are F[(z)]-linearly independent; one may obtain that c_1, …, c_m, [b^T, −1]^T are F[(z)]-linearly independent if and only if [b^T, −1]^T ∉ ⟨c_1, …, c_m⟩.
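This equivalence (the reduced vectors d_1, …, d_m stay independent exactly when [b^T, −1]^T lies outside ⟨c_1, …, c_m⟩) can be checked on a small example. A sketch in Python over the rationals as a stand-in for the fraction field of F[(z)], since only the linear algebra is exercised; the vectors c_1, c_2 and both choices of b are hypothetical:

```python
from fractions import Fraction as Fr

# Rank of a small rational matrix via Gaussian elimination.
def rank(rows):
    rows = [list(map(Fr, r)) for r in rows]
    r = 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                f = rows[i][c] / rows[r][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def reduce_kernel(c, b):
    # d = [I_{w-1}  b] c: fold the last coordinate into the first w-1 via b
    *h, k = c
    return [hi + k * bi for hi, bi in zip(h, b)]

# Hypothetical independent vectors for omega = 3, m = 2.
c1, c2 = [1, 0, 1], [0, 1, 1]

# [1, -2, -1]^T = c1 - 2 c2 lies in <c1, c2>; [0, 0, -1]^T does not.
bad_b, good_b = [1, -2], [0, 0]
d_bad = [reduce_kernel(c, bad_b) for c in (c1, c2)]
d_good = [reduce_kernel(c, good_b) for c in (c1, c2)]
print(rank(d_bad), rank(d_good))     # 1 2
```

As Lemma 3 predicts, the bad choice of b collapses the rank of the reduced vectors, while the good choice preserves independence.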
Since rank⟨c_1, …, c_m⟩ = m ≤ ω−1, from Lemma 2 one can find some [b^T, −1]^T satisfying [b^T, −1]^T ∉ ⟨c_1, …, c_m⟩, and it follows that c_1, …, c_m, [b^T, −1]^T are F[(z)]-linearly independent. It yields that [d_1^T, 0]^T, …, [d_m^T, 0]^T, [b^T, −1]^T are linearly independent. Therefore, there exists some b ∈ F[(z)]^{ω−1} such that d_1, …, d_m are F[(z)]-linearly independent.

Definition 3 For a given ω(≥2)-dimensional F[(z)]-convolutional generic on a single-source finite directed cyclic network, let b ∈ F[(z)]^{ω−1} be an arbitrary (ω−1)-dimensional column vector, and let f_e^{ω−1}(z) = [I_{ω−1}  b] f_e(z) for all non-imaginary channels e. Then b is called a convolutional generic reduction vector for the given generic if f_e^{ω−1}(z), e ∈ E, specify an (ω−1)-dimensional F[(z)]-convolutional generic.

Lemma 4 For any ω(≥2)-dimensional F[(z)]-convolutional generic, a convolutional generic reduction vector always exists.

Proof The existence of a convolutional generic reduction vector is shown by properly choosing such a vector. For any independent set P_j = {e_{j1}, …, e_{j,ω−1}} with ω−1 edges, the global encoding kernels K(P_j) are F[(z)]-linearly independent, since the given network code is an ω-dimensional convolutional generic. Let
L_j = ⟨f_{e_{j1}}, …, f_{e_{j,ω−1}}⟩   (12)
From Lemma 2, one can find some [b_1, …, b_{ω−1}, −1]^T ∈ F[(z)]^ω \ ∪_{j=1}^H L_j, where H is the number of independent sets consisting of ω−1 edges. Let
f_{e_{ji}}^{ω−1}(z) = [I_{ω−1}  b] f_{e_{ji}}(z); 1 ≤ i ≤ ω−1   (13)
where b = [b_1, …, b_{ω−1}]^T. From Lemma 3, one can get that f_{e_{j1}}^{ω−1}(z), …, f_{e_{j,ω−1}}^{ω−1}(z) are F[(z)]-linearly independent. Therefore, [b_1, …, b_{ω−1}]^T is a convolutional generic reduction vector for the given convolutional generic.

Theorem 1 An ω(≥2)-dimensional F[(z)]-convolutional generic is given on a single-source finite directed cyclic network. Then, for every h = 1, …, ω−1, an h-dimensional F[(z)]-convolutional generic can be constructed such that these convolutional generics have the same local encoding kernels at all the non-source nodes.
Proof From Lemma 4, a convolutional generic reduction vector for the given convolutional generic can be found, and an (ω−1)-dimensional convolutional generic is obtained. From Lemma 1, the local encoding kernel of this (ω−1)-dimensional convolutional generic at every non-source node is the same as that of the original ω-dimensional generic. Repeating this procedure, each time one can reduce the dimension of the generic by one. Then the desired set of convolutional generics can be obtained.

Remark The proof of Theorem 1 also renders an efficient implementation of convolutional generics of different dimensions on the same network.

Similar to the proof of Theorem 1, the following theorems can be obtained.

Theorem 2 An ω(≥2)-dimensional F[(z)]-convolutional dispersion is given on a single-source finite directed cyclic network. Then, for every h = 1, …, ω−1, an h-dimensional F[(z)]-convolutional dispersion can be constructed such that these convolutional dispersions have the same local encoding kernels at all the non-source nodes.

Theorem 3 An ω(≥2)-dimensional F[(z)]-convolutional broadcast is given on a single-source finite directed cyclic network. Then, for every h = 1, …, ω−1, an h-dimensional F[(z)]-convolutional broadcast can be constructed such that these convolutional broadcasts have the same local encoding kernels at all the non-source nodes.

Remark From Definition 2, the main difference among the convolutional dispersion, convolutional broadcast and convolutional generic comes from the assumptions on the independent sets. Based on this observation, for variable-rate convolutional dispersion or broadcast there is no need to change the proofs of Lemmas 1–3; making changes only to the selection of independent sets in the proof of Lemma 4, one can easily prove Theorems 2 and 3.

However, there is no similar result for F[(z)]-convolutional multicast. An example, illustrated in Fig. 1, is given to show this. The corresponding local encoding kernels at the nodes in Fig. 1 are:
K_S = I_3 (the 3 × 3 identity matrix), K_Y = [1, 1, 1]^T, K_X = K_Z = [1, 0]^T, K_W = [1, 0]
The global encoding kernels for all the edges are labeled in Fig. 1.

It can be found that the network code in Fig. 1 is in fact a 3-dimensional convolutional multicast. However, it cannot be a 2-dimensional multicast if the local encoding kernels at all the non-source nodes keep unchanged, since under this condition,
f_{WZ}(z) = z k_{YW,WZ}(z) f_{YW}(z) = 0   (14)
so that V_Z = ⟨f_{SZ}(z), f_{WZ}(z)⟩ = ⟨f_{SZ}(z)⟩ has rank 1, and node Z with maxflow(Z) ≥ 2 cannot receive two messages sent from the source node.

Fig. 1 A 3-dimensional convolutional multicast

4 Example

In this section, an example which illustrates the efficient implementation of variable-rate convolutional broadcast is presented, to explain the technique developed above. Assume a 3-dimensional convolutional broadcast in Fig. 2 with the prescription of the local encoding kernel at every node:
K_S = I_3 (the 3 × 3 identity matrix), K_Y = [1, 1, 1]^T, K_X = K_Z = [1, 0]^T, K_W = [1, 1]

Fig. 2 A 3-dimensional convolutional broadcast

The global encoding kernel for every edge is labeled in Fig. 2. From Theorem 3, a 2-dimensional broadcast can be constructed such that the local encoding kernels at all the non-source nodes keep unchanged. Now an explicit implementation is presented. For each non-source node t, choose m_t = min{2, maxflow(t)} F[(z)]-linearly independent vectors f_e(z) from the set of incoming edges e ∈ In(t); this can be done since the given network code is a 3-dimensional broadcast. Select some [b_1, b_2, −1]^T ∈ F[(z)]^3 outside the modules spanned by the chosen vectors, such as ⟨f_{SX}(z), f_{WX}(z)⟩ and ⟨f_{SY}(z), f_{XY}(z)⟩, which is possible by Lemma 2. For example, let b = [b_1, b_2]^T = [1, 1]^T; then the network code with the global encoding kernels f_e^2(z) = [I_2  b] f_e(z), e ∈ E, shown in Fig. 3, is a 2-dimensional convolutional broadcast.

Fig. 3 A 2-dimensional convolutional broadcast

5 Conclusions

In this article, the authors mainly study variable-rate convolutional network coding on finite cyclic networks. It has been proved that variable-rate generic, dispersion and broadcast can be implemented without changing the local encoding kernels of the non-source nodes. This means that convolutional generics, dispersions and broadcasts of different dimensions can be implemented on the same network, while each non-source node is required to store only one copy of the local encoding kernel within a session. Moreover, an example is given to show that variable-rate convolutional multicast may not always be implemented under the above condition.

Acknowledgements

This work was supported by the National Natural Science Foundation of China and the Research Grants Council of Hong Kong Joint Research Scheme (60731160626), the National Natural Science Foundation of China (60821001), the Fundamental Research Funds for the Central Universities (BUPT2009RC0220), and the 111 Project (B08004).
References
1. Ahlswede R, Cai N, Li S Y R, et al. Network information flow. IEEE Transactions on Information Theory, 2000, 46(4): 1204–1216
2. Li S Y R, Yeung R W, Cai N. Linear network coding. IEEE Transactions on Information Theory, 2003, 49(2): 371–381
3. Jaggi S, Sanders P, Chou P A, et al. Polynomial time algorithms for multicast network code construction. IEEE Transactions on Information Theory, 2005, 51(6): 1973–1982
4. Yeung R W, Li S Y R, Cai N, et al. Network coding theory. Foundations and Trends in Communications and Information Theory, 2005, 2(4/5): 241–381
5. Tan M, Yeung R W, Ho S T. A unified framework for linear network codes. Proceedings of the 4th Workshop on Network Coding, Theory and Applications (NetCod'08), Jan 3–4, 2008, Hong Kong, China. 2008: 132–136
6. Erez E, Feder M. Efficient network codes for cyclic networks. Proceedings of the 2005 IEEE International Symposium on Information Theory (ISIT'05), Sep 4–9, 2005, Adelaide, Australia. Piscataway, NJ, USA: IEEE, 2005: 1982–1986
7. Li S Y R, Ho S T. Ring-theoretic foundation of convolutional network coding. Proceedings of the 4th Workshop on Network Coding, Theory and Applications (NetCod'08), Jan 3–4, 2008, Hong Kong, China. 2008: 56–61
8. Fong S L, Yeung R W. Variable-rate linear network coding. Proceedings of the 2006 IEEE Information Theory Workshop (ITW'06), Oct 22–26, 2006, Chengdu, China. Piscataway, NJ, USA: IEEE, 2006: 409–412
9. Vieira L F M, Misra A, Gerla M. Performance of network-coding in multi-rate wireless environments for multicast applications. Proceedings of IEEE Military Communications Conference (Milcom'07), Oct 29–31,
2007, Orlando, FL, USA. Piscataway, NJ, USA: IEEE, 2007
10. Goseling J, Weber J H. Multi-rate network coding for minimum-cost multicasting. Proceedings of IEEE International Symposium on Information Theory (ISIT'08), Jul 6–11, 2008, Toronto, Canada. Piscataway, NJ, USA: IEEE, 2008: 36–40
11. Lakshminarayana S, Eryilmaz A. Multi-rate multicasting with network coding. Proceedings of the 4th International Conference on Wireless Communications, Networking and Mobile Computing (WiCOM'08), Nov 17–19, 2008, Dalian, China. Piscataway, NJ, USA: IEEE, 2008
12. Xia Z Q, Chen Z G, Zhao M, et al. An efficient scheme for multi-rate network coding in wireless mesh networks. Proceedings of the 2nd IEEE International Conference on Computer Science and Information Technology (ICCSIT'09), Aug 8–11, 2009, Beijing, China. Piscataway, NJ, USA: IEEE, 2009: 505–509
13. Wang Q S, Wang Q W, Xu Y L, et al. A minimum transmission time encoding algorithm in multi-rate wireless networks. Computer Communications, 2010, 33(2): 222–226
14. Ma S Y, Zhuo X J, Guo Q, et al. Journal of Harbin Institute of Technology, in press
(Editor: WANG Xu-ying)