RANDOM NETWORK ERROR CORRECTION CODES
by
Huseyin Balli
A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(Electrical Engineering)
December 2008
Copyright 2008 Huseyin Balli
Dedication
Dedicated to my parents
Kadri and Ayse Balli
Acknowledgements
First of all I would like to thank University of Southern California provost C. L. Max Nikias. I am grateful for his continuous and generous support. I would also like to thank Associate Dean for Graduate Affairs Margery Berti for her guidance from my first day at USC to the last. Finally, last but not least, I would like to thank my advisor, Professor Zhen Zhang. It was a privilege to be his student. I am indebted to him for his guidance and inspiration throughout my Ph.D.
Table of Contents
Dedication
Acknowledgements
List of Figures
Abstract

Chapter 1: Introduction
  1.1 Network Coding and Information Flow
    1.1.1 Butterfly Network
  1.2 History of Network Coding
  1.3 Network Coding
    1.3.1 Notations and Terminology
  1.4 Linear Network Coding
    1.4.1 Local description of a linear network code
    1.4.2 Global description of a linear network code
  1.5 An Algebraic Approach to Network Coding
    1.5.1 System Transfer Matrix of Network Codes
  1.6 Random Linear Coding for Multicast
    1.6.1 Reported Bounds on the Failure Probability of Random Linear Network Codes for Single Source Multicast

Chapter 2: Upper Bounds on the Failure Probability of Random Linear Network Codes for Multicast
  2.1 Preliminary Results
  2.2 Cut Decomposition of Network
    2.2.1 Partition of CUTs
  2.3 Failure Probability of Random Linear Network Codes
    2.3.1 Lower bound on the success probability of Random linear network codes
  2.4 Comparison of the Bound with the Reported Bounds in the Literature
    2.4.1 Comparison of the bounds over a network with |J| = 1

Chapter 3: Upper Bounds on the Failure Probability of Random Linear Network Codes with Redundancy for Multicast
  3.1 Definitions and Terminologies
    3.1.1 Redundancy distribution in CUT_{k,t}
  3.2 Preliminary Results
    3.2.1 Performance Analysis of Random Linear Network Codes over a Network with |J| = 0
    3.2.2 The Role of P^ω_{0,δ} in analyzing the failure probability of Random Linear Network Codes
  3.3 Failure Probability of Random Linear Network Codes with Redundancy
    3.3.1 Upper Bound on the failure probability of Random Linear Network Codes with Redundancy
  3.4 Comparison of the Bound with the Reported Bounds in the Literature
    3.4.1 Comparison of the bounds over a network with |J| = 0
    3.4.2 Comparison of the bounds over Block Codes

Chapter 4: Limiting Behavior of the Failure Probability as the Field Size Goes to Infinity
  4.1 Characterization of the worst case limiting behavior of the Random Linear Network Codes
  4.2 Characterization of the best case limiting behavior of the Random Linear Network Codes

Chapter 5: Random Network Error Correction Codes
  5.1 Network Error Correction Theory
    5.1.1 Introduction
    5.1.2 Minimum distance of Network Error Correction Codes
  5.2 Random Network Error Correction Codes
    5.2.1 Error Correction Capability of Random Network Error Correction Codes
    5.2.2 Required Field Size for the Existence of Network Error Correction Codes with Code Degradation

Chapter 6: Conclusion and Future Work
  6.1 Future Work
  6.2 Conjecture: Anti-Chain Bound

Appendix A
Bibliography
List of Figures
1.1 Butterfly Network
1.2 Routing vs Network Coding
1.3 Butterfly network with global encoding kernels
2.1 Cuts of Paths
2.2 Partition of a cut
5.1 (a) $e$ represents the erroneous channel (b) the erroneous channel is replaced by the imaginary node $i_e$ and the imaginary channels $e_0$, $e_1$ and $e_2$
A.1 Second observation: (a) dotted arrows represent the channels in $N$ (b) the channels in $N$ are merged at a new sink $t_{new}$
Abstract
Network communications has been established on the principle of routing, in which packets received at intermediate nodes are replicated and forwarded to the outgoing channels. This scheme has been well accepted primarily for its simplicity of implementation. However, the rapid development and availability of advanced microprocessors has made it possible to process and re-encode the received information at the intermediate nodes of the network. In that respect, network coding studies the design and analysis of codes for networks. The primary benefits of network coding over routing are its increased throughput and its error correction capability. However, the design of network codes that achieve the network capacity and the error correction capability relies on knowledge of the network topology. While these predesigned network codes possess all of the desired properties, their centralized nature makes them highly intolerant to network changes. Hence, arbitrary network topologies and highly dynamic traffic make it impossible to use these predesigned codes. Under these circumstances decentralized codes present a more viable option. The main decentralized network coding approach, which is the topic of this thesis, is random coding. While random coding provides a decentralized coding operation of the network, it may not necessarily achieve the best possible performance of a code. This makes the analysis of random coding vital both theoretically and for its applications. This thesis mainly provides a theoretical basis for random coding and its error correction capabilities through the analysis of the failure probability of random linear network codes. First we provide a general upper bound on the failure probability of random linear network codes that applies to all single source multicast transmissions over acyclic networks as long as the transmission is feasible (i.e., the source rate is at most the minimum cut capacity). Then we study the limiting behavior of random linear network codes as the finite field size $|F|$ goes to infinity. The study of the limiting behavior of random network codes has led us to a new tight bound on the failure probability of random network codes with redundancy. Finally, the error correction capability of random linear network codes is analyzed; the primary tool facilitating this study is the bounds we have derived on the failure probability of random linear network codes. We give a definition of the minimum distance of a network error correction code which plays exactly the same role as it does in classical coding theory. As expected, the minimum distance of a random network code is a random variable taking nonnegative integer values that satisfies the Singleton bound. The error correction capability of random network error correction codes is presented by giving a bound on the probability mass function of its minimum distance. Moreover, we define the code degradation as the difference between the Singleton bound and the actual minimum distance of the code. We have derived bounds on the required field size for the existence of network error correction codes with a given maximum degradation, which show that the required field size for a network error correction code decays exponentially as a function of code degradation. This suggests that even a minor sacrifice in terms of code degradation causes a significant reduction in the required field size for the existence of a network error correction code.
Chapter 1
Introduction
Network flows were initially studied in graph theory. The main result of graph theory regarding network flows is the celebrated max-flow min-cut theorem. The theorem states that the maximum flow from the source node to the sink node through a network is characterized by the minimum cut of the network. From the graph theory perspective, the problem of maximum flow in networks was inspired by the commodity flow problem. The maximum flow problem in this setting involves the transportation of physical entities from a given source to a given sink through a transportation network.
1.1 Network Coding and Information Flow
In the case when it is information that has to be delivered from a given source to a given sink, it is the flow of information that we wish to maximize through a communication network.

Unlike a physical entity, information has two distinctive properties which separate the commodity flow problem and the information flow problem from each other.
1) The received data can be replicated at the intermediate nodes to be forwarded to the outgoing channels, while the replication of physical entities at the intermediate nodes of a transportation network is out of the question.

2) The received data at an intermediate node can be processed by a lossless transformation and still keep its information content intact. This gives the ability to manipulate the received data without losing the information carried, while the physical entities in a transportation network are not subject to any manipulations such as transformations or processing at all.
Network communications has been established on the principle of routing, in which packets received at intermediate nodes are replicated and forwarded to the outgoing channels. This scheme has been well accepted primarily for its simplicity of implementation.

While the ability to replicate packets at the intermediate nodes during transmission realizes the distinguished property 1) mentioned above, the routing operation still suffers from the flaws of treating information as a physical entity due to the lack of property 2): the received information is kept in its original format without any processing.

The implications of lacking property 2) reveal themselves in terms of the throughput of the network, as the following example shows.
1.1.1 Butterfly Network

Example 1 (Butterfly Network) Suppose we would like to transmit two information bits $b_1, b_2 \in GF(2)$ from the source node to two receivers in a multi-hop fashion such that each channel in the network can be used at most once in one time unit. Define $S$ to be the source node and $T_1$ and $T_2$ to be the receiver nodes. It can be seen from Figure 1.1 that there exist two disjoint source-receiver paths for each receiver. The max-flow min-cut theorem of graph theory asserts that two symbols can be transmitted from the source node to any one of the receiver nodes in one time unit.
Figure 1.1: Butterfly Network
The question in the butterfly network is whether simultaneous transmission of the two source symbols $b_1, b_2 \in GF(2)$ to the sink nodes $T_1$ and $T_2$ is achievable or not.
Suppose all the intermediate nodes in the network are restricted to routing. The restriction to routing at the nodes $Z$ and $U$ has no degrading effect from the coding perspective, as they each receive only one data unit. But this is no longer true for the node $W$. As depicted in Figure 1.2(a), when the coding operation is restricted to routing, the channel $(W,X)$ acts as a bottleneck of the network: we have to pick either $b_1$ or $b_2$ to transmit. Suppose we pick $b_W$ to be $b_1$ or $b_2$ for transmission over the channel $(W,X)$. Either way, one of the sink nodes will receive two copies of the same information bit, and thus that particular sink node will not be able to decode both of the source bits. This clarifies that simultaneous transmission of two bits to the sink nodes $T_1$ and $T_2$ is not possible as long as the coding is restricted to routing.
Figure 1.2: Routing vs Network Coding ((a) Routing, (b) Network Coding)
Instead of routing, suppose we allow the received information to be coded at the intermediate nodes of the network. In this case the received bits $b_1$ and $b_2$ can be coded as $b_1 + b_2$ for transmission over the link $(W,X)$. In effect, the "+" operation in $GF(2)$ acts as an exclusive-OR operation on $b_1$ and $b_2$. Receiver $T_1$ receives $b_1$ and can recover $b_2$ from $b_1 + b_2$, and similarly $T_2$ receives $b_2$ and can recover $b_1$ from $b_1 + b_2$, as shown in Figure 1.2(b). Therefore the butterfly network clearly shows the need for coding at the intermediate nodes of the network in order to increase the throughput of the network.

This example shows that routing is insufficient in terms of achieving network capacity, which demonstrates the prominent role of network coding for an efficient operation of network communications.
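To make the example concrete, the following is a minimal sketch (illustrative only, not taken from the thesis) that simulates the butterfly network over $GF(2)$ and checks that both sinks recover both source bits in one time unit.

```python
# A minimal sketch (not from the thesis) simulating the butterfly network
# over GF(2): node W forwards the XOR b1 ^ b2, and each sink recovers
# both source bits from its two received symbols.

def butterfly(b1: int, b2: int):
    # Channels out of S carry the raw bits; W codes its two inputs.
    w_out = b1 ^ b2          # symbol on channel (W, X), relayed to both sinks
    t1_rx = (b1, w_out)      # T1 hears b1 directly and b1+b2 via X
    t2_rx = (b2, w_out)      # T2 hears b2 directly and b1+b2 via X
    # Each sink decodes the missing bit with one more XOR.
    t1_decoded = (t1_rx[0], t1_rx[0] ^ t1_rx[1])
    t2_decoded = (t2_rx[0] ^ t2_rx[1], t2_rx[0])
    return t1_decoded, t2_decoded

for b1 in (0, 1):
    for b2 in (0, 1):
        assert butterfly(b1, b2) == ((b1, b2), (b1, b2))
print("both sinks decode (b1, b2) in one time unit")
```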
Throughout the thesis we will be focusing on single source multicast transmission, which is defined as the simultaneous transmission of a set of source messages originating at the same source to a select set of sink nodes.
1.2 History of Network Coding
The fundamental concept of network coding was initially introduced for satellite communications in a paper by R. W. Yeung and Z. Zhang [47]. The concept was further developed by Ahlswede, Cai, Li and Yeung in a subsequent paper named "Network Information Flow" in 2000. It was this paper that officially started the network coding field, and in it the term "network coding" was coined for the first time [1]. The paper showed that information cannot be treated as a physical entity, but rather should be re-encoded at the nodes in order to save bandwidth and achieve optimality in terms of throughput. The benefits of network coding over routing were introduced for the first time by means of the butterfly network example in that paper. This eventually made the butterfly network the trademark of network coding.

Indeed, Ahlswede et al. [1] proved that the max-flow min-cut bound characterizes the capacity of an acyclic network for single source multicast transmission. By this result, the max-flow min-cut theorem of graph theory is generalized to the flow of information in communication networks.

Later on, Li, Cai and Yeung [31] showed by construction that linear coding achieves the multicast capacity of single source acyclic networks. This result indicates that there is no need for nonlinear codes to achieve network capacity for single source multicast. It is worth mentioning that the assertion that linear codes achieve network capacity is not true in general for transmission scenarios other than multicast: there exist networks and transmission scenarios that strictly require nonlinear codes to achieve network capacity [10]. In this thesis we focus on single source multicast transmission, which makes linear codes sufficient for our purposes.

Through a system transfer matrix, an algebraic formulation of linear network coding was presented by R. Koetter and M. Medard [28]. This formulation allowed the linear network coding problem to be studied from an algebraic perspective through the determinant polynomial of the system transfer matrix. The results in their paper parallel the results of Li, Cai and Yeung in terms of the sufficiency of linear network codes for achieving the single source multicast capacity.

However, the construction of linear network codes achieving network capacity relies heavily on knowledge of the topology of the network, which makes such codes very fragile for implementation in practice due to the highly dynamic nature of network communications, such as highly volatile traffic patterns as well as network changes and link failures.

This has led to the search for decentralized algorithms for network coding. Ho et al. [22] proposed the random coding approach for pushing network coding towards a decentralized operation. So far, random coding is the most efficient encoding technique to utilize the benefits of network coding in a distributed way. Of course, as the name "random" suggests, this comes at the cost of a failure probability: the source information may not be decodable at some of the receiver nodes. Following the research line of Koetter and Medard's algebraic approach to network coding, Ho et al. studied this problem and provided an analysis of random coding in terms of bounds on the failure probability for the cases with and without redundant source-receiver paths [22].

In this thesis we apply a fresh method to analyze random coding, which leads to new improved bounds on the failure probability of random coding for multicast over acyclic networks. Aside from analyzing the decodability of random coding, our proposed bounds are strong enough to provide the basis for the analysis of random network error correction codes as well.
1.3 Network Coding
1.3.1 Notations and Terminology
Single source multicast is defined as the transmission of a message $x$ from the source node $S$ to a select set of sink nodes $T = \{T_1, \ldots, T_n\}$ through a communication network. The multicast is achieved if and only if each sink node $T_i$, $\forall i \in \{1, \ldots, n\}$, can successfully decode the source message $x$.

A communication network is a directed graph $G = (V,E)$ where the vertex set $V$ denotes the set of nodes and the edge set $E$ denotes the set of communication channels. The vertex set $V$ is divided into three disjoint subsets: the source node $\{S\}$, the set of sink nodes $T = \{T_1, \ldots, T_n\}$, and the set of internal nodes $J = V - \{S\} - T$. Hence $\{S\}$, $T$ and $J$ partition the vertex set $V$.

A communication network is said to be acyclic if it contains no directed cycles. When the communication network is acyclic, each message is individually encoded and transmitted from the upstream nodes to the downstream nodes in a sequential order. This makes the network coding problem independent of both the transmission and processing delays at the nodes. Throughout this thesis we will assume the communication network $G = (V,E)$ is acyclic.

The source message $x$ consists of $\omega$ data units and is represented by an $\omega$-dimensional row vector $x \in F^{\omega}$. The source node $S$ generates the message $x$ and sends it out by transmitting a symbol over every outgoing channel. Message propagation in the network is then achieved by the transmission of a symbol $U_e \in F$ over every channel $e \in E$ in the network.

Each directed edge in the network leading from node $i$ to node $j$ is represented as $e = (i,j) \in E$. We call node $i$ the tail and node $j$ the head of $e$. Furthermore, the channel $e$ is called an outgoing channel of node $i$ and an incoming channel of node $j$.

Definition 1 For a node $i$, we define the outgoing links of node $i$ as
$$Out(i) = \{e \in E : e \text{ is an outgoing channel of } i\} \qquad (1.1)$$
and similarly we define the incoming links of node $i$ as
$$In(i) = \{e \in E : e \text{ is an incoming channel of } i\} \qquad (1.2)$$

Associated with each channel $e \in E$ there is a positive number $R_e$ called the capacity of the channel. In our formulation we allow multiple channels between two nodes and assume that all channels have unit capacity. Each data unit is represented as an element of a certain base field $F$, and one data unit can be transmitted over a channel in each time unit.
1.4 Linear Network Coding
A pair of channels $(d,e)$ is called an adjacent pair if there exists a node $v \in V$ such that $d \in In(v)$ and $e \in Out(v)$. For each adjacent pair of channels we define the scalar $k_{d,e} \in F$ as the local encoding coefficient for the adjacent channel pair $(d,e)$. The message transmitted over channel $e = (i,j)$, denoted by $U_e$, is calculated inductively by the formula
$$U_e = \sum_{d \in In(i)} k_{d,e} U_d$$
This formulates linear network coding as a linear mapping of the symbols received at a node to a symbol for each outgoing channel. Through this formulation the local encoding coefficients completely characterize a linear network code.

By the above formulation, $U_e$ can also be represented as an inner product of the local encoding coefficients and the ensemble of symbols received at the node.

For a particular channel $e \in Out(v)$, let $In(v) = \{d_1, \cdots, d_n\}$ be the incoming channels of node $v$. The received symbols at node $v$ can be put in a row vector $y = [U_{d_1}, \cdots, U_{d_n}]$, and similarly the local encoding coefficients of channel $e$ can be put in a column vector $k_e = [k_{d_1,e}, \cdots, k_{d_n,e}]^{t}$. The vector $k_e$ is defined as the local encoding kernel for channel $e$. The dimension of both $y$ and $k_e$ is $|In(v)|$, and the symbol transmitted over channel $e$ is given by the inner product
$$U_e = \langle y \cdot k_e \rangle$$
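As a small illustration of this inductive formula, here is a minimal sketch (the tiny network, coefficient values, and variable names are my own assumptions, not from the thesis) that evaluates $U_e$ channel by channel over a prime field $GF(p)$.

```python
# A minimal sketch (assumed notation): evaluating
# U_e = sum_{d in In(tail(e))} k_{d,e} * U_d over GF(p) in ancestral order.

p = 7  # illustrative prime field size

# Hypothetical 3-channel line network: s0 (imaginary source channel)
# feeds e1, and e1 feeds e2.
In_tail = {"e1": ["s0"], "e2": ["e1"]}          # In(tail(e)) per channel
k = {("s0", "e1"): 3, ("e1", "e2"): 5}          # local encoding coefficients

U = {"s0": 2}                                    # source symbol x = 2
for e in ["e1", "e2"]:                           # ancestral order
    U[e] = sum(k[(d, e)] * U[d] for d in In_tail[e]) % p

print(U)  # {'s0': 2, 'e1': 6, 'e2': 2}  since 3*2 = 6 and 5*6 = 30 = 2 mod 7
```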
Based on this formulation we have the next definition.
1.4.1 Local description of a linear network code
Definition 2 (Local description of a linear network code on an acyclic network)
A linear network code is defined as a set of local encoding functions
$$k_e : F^{|In(v)|} \to F, \qquad \forall e \in Out(v)$$
that linearly map the ensemble of symbols received from the incoming channels of each node $v$ to a symbol for each outgoing channel $e \in Out(v)$.

Each $k_e$ corresponds to an $|In(v)|$-dimensional column vector such that the symbol transmitted over channel $e$ is given by
$$U_e = \langle y \cdot k_e \rangle$$
where $y \in F^{|In(v)|}$ is the row vector representing the symbols received at node $v$.

Note that at a given node $v \in V$, the symbol to be transmitted over each outgoing channel $e$ of node $v$ is the inner product of $k_e$ with the same row vector $y$ representing the received symbols at $v$. Therefore the column vectors $k_e$, $\forall e \in Out(v)$, can be concatenated in a row to form a matrix called the local encoding kernel of the node.
Definition 3 (Local encoding kernels)
Let $F$ be a finite field and $\omega$ be the source rate. An $\omega$-dimensional $F$-valued linear network code on an acyclic communication network consists of a scalar $k_{d,e}$, called the local encoding coefficient, for every adjacent channel pair. In accordance, the local encoding kernel at a node $v$ is defined as the $|In(v)| \times |Out(v)|$ matrix
$$K_v = [k_{d,e}]_{d \in In(v),\, e \in Out(v)}$$
Note that the matrix structure assumes an implicit ordering among the channels.

The characterization of linear network codes by the local description does not explicitly give the values of $U_e$ transmitted over the channels $e \in E$. This makes the mathematical properties of linear network codes somewhat subtle to analyze. For that reason we are going to define an equivalent characterization of linear network codes that will allow us to analyze the fundamental properties of network coding in a network-wide context.
1.4.2 Global description of a linear network code
Definition 4 The global encoding function $f_e$ is defined as the mapping
$$f_e : F^{\omega} \to F, \qquad \forall e \in E$$
When the local encoding functions are linear, so are the global encoding functions [31]. Therefore the global encoding function $f_e$ of a linear network code corresponds to an $\omega$-dimensional column vector such that
$$U_e = \langle x \cdot f_e \rangle, \qquad \forall e \in E$$
where the $\omega$-dimensional row vector $x \in F^{\omega}$ is the original source message.

The advantage of the global encoding functions $f_e$ is that they explicitly specify the coding at every channel $e \in E$. The global encoding representation unifies the coding operations of the individual channels so that they can be analyzed from a network-wide perspective.

In order to be consistent with our formulation we append $\omega$ imaginary channels to the source node $S$. The source message $x \in F^{\omega}$ is transmitted to the source node $S$ by these imaginary channels $e \in In(S)$. The global encoding functions of $e \in In(S)$ are the projections of $F^{\omega}$ onto the $\omega$ different coordinates respectively. The remaining global encoding functions $f_e$ are obtained inductively through the local encoding coefficients by the formula
$$f_e = \sum_{d \in In(v)} k_{d,e} f_d$$
for each node $v \in V - T$ and every channel $e \in Out(v)$.

In this way all the global encoding functions $f_e$, for all channels $e \in E$, can be defined recursively through the local encoding coefficients $k_{d,e}$.
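Since the global kernels follow the same recursion, they can be computed channel by channel in ancestral order. Below is a minimal sketch (the dictionary encoding of the topology and the routing choice at $S$ are my own assumptions) that reproduces the global kernels of the butterfly network of Example 1 over $GF(2)$.

```python
# A minimal sketch (assumed data structures): computing global kernels
# f_e = sum_{d in In(v)} k_{d,e} f_d recursively over GF(2) for the
# butterfly network of Example 1. Kernels are 2-tuples over GF(2).

In_v = {  # In(tail(e)) per real channel e; i1, i2 are imaginary channels
    ("S","Z"): ["i1","i2"], ("S","U"): ["i1","i2"],
    ("Z","W"): [("S","Z")], ("Z","T1"): [("S","Z")],
    ("U","W"): [("S","U")], ("U","T2"): [("S","U")],
    ("W","X"): [("Z","W"), ("U","W")],
    ("X","T1"): [("W","X")], ("X","T2"): [("W","X")],
}
k = {(d, e): 1 for e in In_v for d in In_v[e]}   # all-ones local kernels ...
k[("i2", ("S","Z"))] = 0                          # ... except S routes b1 to Z
k[("i1", ("S","U"))] = 0                          # ... and b2 to U

f = {"i1": (1, 0), "i2": (0, 1)}                  # natural basis at the source
for e in In_v:                                    # insertion order is ancestral
    f[e] = tuple(sum(k[(d, e)] * f[d][i] for d in In_v[e]) % 2 for i in (0, 1))

print(f[("W","X")])   # (1, 1): the channel (W, X) carries b1 + b2
```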
Definition 5 (Global description of a linear network code on an acyclic network)
Let $F$ be a finite field and $\omega$ be the source rate. An $\omega$-dimensional $F$-valued linear network code on an acyclic communication network consists of a scalar $k_{d,e}$ for each adjacent pair as well as an $\omega$-dimensional column vector $f_e$ for every channel $e = (i,j)$ such that
$$f_e = \sum_{d \in In(i)} k_{d,e} f_d$$
and the symbol transmitted over channel $e$ is given by
$$U_e = \langle x \cdot f_e \rangle$$
The vector $f_e$ is called the global encoding function for channel $e$, while for the imaginary channels $\{d_1, \cdots, d_{\omega}\} \in In(S)$ the corresponding global encoding functions $\{f_{d_1}, \cdots, f_{d_{\omega}}\}$ form the natural basis of the vector space $F^{\omega}$.

Therefore, given the local encoding coefficients $k_{d,e}$, the global encoding kernels $f_e$ and the messages $U_e$ transmitted over the channels $e \in E$ can be calculated recursively in an upstream-to-downstream order by the formulas in Definition 5.
On each channel $e \in E$ the transmission takes place in the form of packets, where each packet records both the message $U_e$ and the global kernel vector $f_e$. Therefore, at sink $t \in T$ we receive a message vector $A_t = (U_e : e \in In(t))$ and a matrix $F_t = (f_e : e \in In(t))$. When the channels are error-free, the decoding equation is given by
$$x F_t = A_t$$
where $x$ is the $\omega$-dimensional row vector representing the original source message, $F_t$ is the $\omega \times |In(t)|$ matrix of concatenated global encoding kernels, and $A_t$ is the $|In(t)|$-dimensional row vector representing the ensemble of symbols received at the sink node $t$. Note that $F_t$ and $A_t$ assume an implicit ordering on the incoming edges of the sink $t$.

The source message is decodable at $t$ if and only if the decoding equation has a unique solution, or equivalently the rank of $F_t$ is $\omega$, so that there exist $\omega$ linearly independent vectors in $\{f_d : d \in In(t)\}$. The decodability of linear network codes will be used as the main criterion to analyze the mathematical properties of linear network codes. We state it as another definition.
Definition 6 The source message $x$ is decodable at $t$ if and only if one of the following statements holds:
• $Rank(F_t) = \omega$
• There exist $\omega$ linearly independent global kernel vectors in $\{f_d : d \in In(t)\}$
• $\dim(L(In(t))) = \omega$
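A minimal sketch (hypothetical helper names, not the thesis's algorithm) of checking decodability and solving the decoding equation $x F_t = A_t$ over a prime field by Gaussian elimination: the elimination finds a pivot in every one of the $\omega$ columns exactly when $Rank(F_t) = \omega$.

```python
# A minimal sketch (hypothetical helper, not from the thesis): solving
# x * F_t = A_t over GF(p) by Gaussian elimination on the equations
# sum_i x_i * Ft[i][j] = At[j], one per received symbol.

p = 5

def decode(Ft, At):
    """Ft: omega x n matrix (row i = i-th coordinate of each global kernel),
    At: the n received symbols. Returns the source message x."""
    omega, n = len(Ft), len(Ft[0])
    rows = [[Ft[i][j] % p for i in range(omega)] + [At[j] % p] for j in range(n)]
    r = 0
    for c in range(omega):
        piv = next((i for i in range(r, n) if rows[i][c] % p), None)
        if piv is None:
            raise ValueError("rank(F_t) < omega: message not decodable")
        rows[r], rows[piv] = rows[piv], rows[r]
        inv = pow(rows[r][c], p - 2, p)          # field inverse (p prime)
        rows[r] = [v * inv % p for v in rows[r]]
        for i in range(n):
            if i != r and rows[i][c]:
                rows[i] = [(a - rows[i][c] * b) % p
                           for a, b in zip(rows[i], rows[r])]
        r += 1
    return [rows[i][omega] for i in range(omega)]

# Usage: omega = 2, three incoming channels with kernels (1,0), (0,1), (1,1).
Ft = [[1, 0, 1],
      [0, 1, 1]]
x = (2, 3)
At = [(x[0] * Ft[0][j] + x[1] * Ft[1][j]) % p for j in range(3)]
print(decode(Ft, At))   # [2, 3]: the source message is recovered
```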
Example 2 (Butterfly network with global encoding kernels)
As an example we demonstrate these concepts on the butterfly network of Example 1.
Figure 1.3: Butterfly network with global encoding kernels
The source message is $x = [\,b_1 \ b_2\,]$. The local encoding kernels at the nodes are given by
$$K_S = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad K_Z = K_X = K_U = \begin{bmatrix} 1 & 1 \end{bmatrix}, \qquad K_W = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$$
The corresponding global encoding kernels are given by
$$f_e = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \quad \forall e \in \{(I_1,S), (S,Z), (Z,W), (Z,T_1)\}$$
$$f_e = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \quad \forall e \in \{(I_2,S), (S,U), (U,W), (U,T_2)\}$$
$$f_e = \begin{bmatrix} 1 \\ 1 \end{bmatrix} \quad \forall e \in \{(W,X), (X,T_1), (X,T_2)\}$$
where $(I_1,S)$ and $(I_2,S)$ represent the two imaginary channels providing $b_1$ and $b_2$ respectively to the source node $S$. We have
$$f_e(x) = \langle x, f_e \rangle = b_1 \quad \forall e \in \{(I_1,S), (S,Z), (Z,W), (Z,T_1)\}$$
$$f_e(x) = \langle x, f_e \rangle = b_2 \quad \forall e \in \{(I_2,S), (S,U), (U,W), (U,T_2)\}$$
$$f_e(x) = \langle x, f_e \rangle = b_1 + b_2 \quad \forall e \in \{(W,X), (X,T_1), (X,T_2)\}$$
1.5 An Algebraic Approach to Network Coding
When the network is acyclic, the upstream-to-downstream ordering of the nodes allows the edges of the network to be ordered according to an ancestral ordering. In this context a system transfer matrix can be defined.
1.5.1 System Transfer Matrix of Network Codes
Definition 7 The adjacency matrix $F = (k_{d,e})_{d \in E,\, e \in E}$ is a $|E| \times |E|$ matrix where $k_{d,e}$ is the local kernel value if $\exists i \in V$ such that $head(d) = tail(e) = i$, and is $0$ otherwise.

Similarly, $A = (k_{d,e})_{d \in In(S),\, e \in E}$ is an $\omega \times |E|$ matrix where $k_{d,e}$ is the local kernel value when $e \in Out(S)$, and is $0$ otherwise.
Note that the adjacency matrix $F$ is a strictly upper triangular matrix due to the ancestral ordering of the edges. All the gains for each channel $e \in E$ in the network are accounted for in the series $I + F + F^2 + F^3 + \cdots$. Since the matrix $F$ is strictly upper triangular, it is nilpotent, so there is an $N$ such that $F^N$ is the all-zero matrix. Therefore $I + F + F^2 + F^3 + \cdots$ converges and can be written as $(I - F)^{-1} = I + F + F^2 + F^3 + \cdots$. The system transfer matrix of the network is then given by
$$A(I - F)^{-1}$$
which is an $\omega \times |E|$ matrix whose columns are the global kernel vectors $f_e$ of the channels $e \in E$ in ancestral ordering.
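A minimal sketch (dependency-free, with my own helper names) of this construction: since $F$ is nilpotent, the geometric series terminates after at most $|E|$ terms, and multiplying by $A$ stacks the global kernels.

```python
# A minimal sketch (assumed setup): for a nilpotent adjacency matrix F over
# GF(p), the series I + F + F^2 + ... terminates, and A times (I - F)^{-1}
# stacks the global kernels. Plain nested lists keep it dependency-free.

p = 7

def mat_mul(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) % p
             for col in zip(*Y)] for row in X]

def transfer_matrix(A, F):
    n = len(F)
    inv = [[int(i == j) for j in range(n)] for i in range(n)]  # starts at I
    term = [row[:] for row in inv]
    for _ in range(n):                 # F^n = 0 for an n x n nilpotent F
        term = mat_mul(term, F)
        inv = [[(x + y) % p for x, y in zip(r1, r2)] for r1, r2 in zip(inv, term)]
    return mat_mul(A, inv)             # omega x |E| system transfer matrix

# Tiny line network: edge 0 feeds edge 1 with local coefficient 4.
F = [[0, 4], [0, 0]]                   # strictly upper triangular, nilpotent
A = [[1, 0]]                           # source injects onto edge 0 only
print(transfer_matrix(A, F))           # [[1, 4]]: f_{e0} = (1), f_{e1} = (4)
```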
When the received information at the sink node $t$ is decodable, the $\omega$ source symbols can be decoded by multiplying the system transfer matrix by another matrix $B_t$ as
$$A(I - F)^{-1} B_t^{T}$$
where the matrix $B_t$ represents the decoding operation and is defined as $B_t = (b_{d,e})_{d \in In(t)}$, an $\omega \times |E|$ matrix. Therefore the received information at sink node $t$ is decodable if and only if $A(I - F)^{-1} B_t^{T}$ is nonsingular.

Therefore a set of local encoding coefficients $\{k_{d,e}\}$ for the adjacent channel pairs $(d,e)$ constitutes a linear network code for single source multicast over the network $G = (V,E)$ if and only if
$$A(I - F)^{-1} B_t^{T} \text{ is nonsingular } \forall t \in T$$
1.6 Random Linear Coding for Multicast
So far, the known constructions of network codes achieving the multicast capacity rely on global knowledge of the network topology: the network code has to be specifically tailored to each network topology and all the nodes have to be instructed about their linear mappings. This implies that a new global network code has to be constructed each time in order to adapt to changing traffic patterns as well as to combat link failures. This makes such global designs of network coding practically impossible to implement in real time for large scale networks, and especially for wireless networks.

In order to implement network coding in practice, the nodes of the network have to be able to perform in a decentralized way. As mentioned earlier, the foremost method to achieve this is random coding, where the intermediate nodes apply random linear transformations from the inputs to the outputs, with coefficients independently and uniformly distributed over the finite field $F$. The decentralized random coding approach inherits the desired robustness to changing network conditions due to the fact that it involves no coordination among nodes.
Ho et al. [22] studied this problem in the algebraic framework through the system transfer matrix
$$A(I - F)^{-1} B_t^{T}$$
As mentioned, the received information at sink node $t$ is decodable if and only if $A(I - F)^{-1} B_t^{T}$ is nonsingular. It has been proved that $A(I - F)^{-1} B_t^{T}$ is nonsingular if and only if the corresponding Edmonds matrix
$$\begin{bmatrix} A & 0 \\ I - F & B_t^{T} \end{bmatrix}$$
is nonsingular.
In this setting a random linear network code is given by $(A, F)$, in which the local encoding coefficients $k_{d,e}$ for each adjacent channel pair are independently and uniformly distributed random variables over the finite field $F$. The matrix $B_t$ represents the decoding operation at the decoder and is determined based on the received information at the sink node $t$. Thus the values of $B_t$ are not regarded as part of the random network code.
The determinant of
$$\begin{bmatrix} A & 0 \\ I - F & B_t^{T} \end{bmatrix}$$
can be written as a polynomial $P_t$ in the random variables $\{k_{d,e}\}$. Only the first $|E|$ columns of the above matrix contain the random variables $\{k_{d,e}\}$, and the determinant polynomial $P_t$ is a sum of products of $\omega + |E|$ entries, one from each row and column. Therefore each such product is linear in each variable $k_{d,e}$, and the determinant polynomial has degree at most $|E|$.
Therefore the product $\prod_{t \in T} P_t$ is, accordingly, a polynomial in $\{k_{d,e}\}$ of total degree at most $|T||E|$, in which the largest exponent of each of these variables is at most $|T|$. For all $|T|$ receivers to be able to decode the source message, $\prod_{t \in T} P_t$ must be nonzero for the given set of $\{k_{d,e}\}$, where the $\{k_{d,e}\}$ are independently and uniformly distributed random variables over the finite field $F$.
A generalized Schwartz-Zippel bound shows that for a nonzero polynomial $P$ in $\{k_{d,e}\}$ of degree less than or equal to $|T||E|$, in which the largest exponent of each of these variables is at most $|T|$, the probability that the polynomial $P$ equals zero is at most
$$1 - \left(1 - \frac{|T|}{|F|}\right)^{|E|}$$
when the $\{k_{d,e}\}$ are independently and uniformly distributed random variables over the finite field $F$.
For single source multicast, the probability that the source message can be decoded at all of the $|T|$ receivers by using a random linear network code can therefore be lower bounded as a consequence of this generalized Schwartz-Zippel bound.

Based on this algebraic approach, the following lower bounds on the success probability of random linear network codes for single source multicast transmission over acyclic networks were reported in [22].
1.6.1 Reported Bounds on the Failure Probability of Random Linear Network Codes for Single Source Multicast
Proposition 1 Let $G = (V,E)$ be a communication network with source $S$ and sinks $T = \{t_1, \ldots, t_L\}$. For a feasible multicast connection problem in which the code coefficients are chosen independently and uniformly over a finite field $F$, the probability that all the receivers can decode the source information is at least
$$\left(1 - \frac{|T|}{|F|}\right)^{|E|} \qquad (1.3)$$
Proposition 2 Consider a multicast connection problem on an acyclic network $G = (V,E)$ with source rate $\omega$. Let $r$ be the redundancy of the network, i.e., the original connection requirements are feasible on a network obtained by deleting any $r$ links in $G = (V,E)$. The probability that a random linear network code in $F$ is valid for a particular receiver is at least
$$\sum_{x=\omega}^{r+\omega} \binom{r+\omega}{x} \left(1 - \frac{1}{|F|}\right)^{Lx} \left(1 - \left(1 - \frac{1}{|F|}\right)^{L}\right)^{\omega+r-x} \qquad (1.4)$$
where $L$ is the length of the longest source-receiver path in the network.
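For a feel for these quantities, the following minimal sketch (all parameter values are illustrative assumptions) evaluates the two reported lower bounds numerically.

```python
# A minimal sketch (my own helper names): numerically evaluating the two
# success-probability lower bounds (1.3) and (1.4) reported in [22].
from math import comb

def bound_1_3(T, E, q):
    return (1 - T / q) ** E

def bound_1_4(omega, r, L, q):
    p_link = (1 - 1 / q) ** L
    return sum(comb(r + omega, x) * p_link ** x * (1 - p_link) ** (omega + r - x)
               for x in range(omega, r + omega + 1))

# Illustrative numbers: 2 sinks, 20 edges, field GF(256), rate 2, redundancy 1.
print(bound_1_3(T=2, E=20, q=256))          # ~0.855
print(bound_1_4(omega=2, r=1, L=5, q=256))  # ~0.999, close to 1 for large q
```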
In this thesis we develop a novel approach to studying the performance of random linear network codes for single source multicast transmission. The bounds on the performance of random linear network codes derived in this thesis are much stronger than the results reported in [22].
Chapter 2
Upper Bounds on the Failure Probability of Random Linear Network Codes for Multicast
In this chapter a general bound on the failure probability of random linear network codes for single source multicast will be presented. The bound derived here applies to all single source multicast transmissions as long as the multicast is feasible. In fact, the upper bound on the failure probability of random linear network codes is tight as the field size $|F|$ goes to infinity, in particular when the network does not have redundant source-receiver paths. A generalization of this result to the case with redundant source-receiver paths is presented in Theorem 2, which is already stronger than the reported bounds on this problem. However, we are going to derive a stronger bound on the failure probability of random linear network codes in the next chapter, which is eventually tight as the field size $|F|$ goes to infinity. For that reason the proof of Theorem 2 is presented in the appendix as a reference.
2.1 Preliminary Results
The lower bound on the probability that all receivers can decode the source information reported in [22] was
$$\left(1 - \frac{|T|}{|F|}\right)^{|E|}$$
The bound derived in [22] perceives random coding from the perspective of each individual channel independently. In other words, the bound is expressed in the form of a failure due to random coding at a particular channel, and hence has the number of edges $|E|$ as the exponent of the bound.
We have conjectured that instead of the number of edges $|E|$, the exponent of the bound on the success probability should be $|V - T| = |J| + 1$, which is the number of nodes performing the random coding. We will show in the next theorem that our conjecture is indeed true.

In order to be able to reduce the exponent to the number of nodes performing the random coding, we need to study the characteristics of random coding from the perspective of each individual node instead of the individual edges. This brings up the following question:
What is the probability of failure due to random coding at a particular node?

Before trying to answer this question and stating the theorem, we need some preliminary results on random linear network codes.

By the definition of random coding, the local kernels $k_{d,e}$ are independently and identically distributed in $F$ with uniform distribution.
Lemma 1 For a node $v \in V - T$, let the input linear space of node $v$ be defined as $I_v = \langle\{f_d : d \in In(v)\}\rangle$. Then the vectors in $\{f_e : e \in Out(v)\}$ are independently and identically distributed in the linear space $I_v$ with uniform distribution.
Proof of Lemma 1: Let $In(v) = \{d_1, \cdots, d_m\}$.

Case 1: Suppose $\dim(I_v) = |In(v)|$, that is, the case when the global encoding kernels of all the incoming edges are linearly independent. The global encoding kernels of $e \in Out(v)$ are given by
$$f_e = \sum_{d \in In(v)} k_{d,e} f_d = F_v k_e$$
By the assumption $\dim(I_v) = |In(v)|$, all the columns of $F_v$ are linearly independent. Therefore the mapping $k_e \mapsto F_v k_e = f_e$ is a bijection. The local encoding kernel $k_e$ is an $|In(v)|$-dimensional random vector with each component independently and uniformly distributed in $F$. Therefore each realization of $k_e$ has probability $\frac{1}{|F|^{|In(v)|}}$, and hence $f_e$ is independently and uniformly distributed in the linear space $I_v$.
Case 2: Suppose $\dim(I_v) < |In(v)|$. In this case there exist $\dim(I_v)$ incoming channels with linearly independent global encoding kernels. Let the index set of those channels be $In^{*}(v)$ and the index set of the remaining channels be $In^{\diamond}(v)$. The local encoding kernel can be decomposed into two components as
$$k_e = k_e^{*} + k_e^{\diamond}$$
where $k_e^{*}$ is the projection of $k_e$ onto the index set $In^{*}(v)$ and $k_e^{\diamond}$ is the projection of $k_e$ onto the index set $In^{\diamond}(v)$. Therefore the global encoding kernels of $e \in Out(v)$ are given by
$$f_e = \sum_{d \in In(v)} k_{d,e} f_d = F_v k_e = F_v k_e^{*} + F_v k_e^{\diamond}$$
Due to the independence of the underlying coding coefficients, $k_e^{*}$ and $k_e^{\diamond}$ are independent. Therefore the realization of $k_e^{\diamond}$ has no effect on $k_e^{*}$. Let
$$b = F_v k_e^{\diamond}$$
Then, given $k_e^{\diamond}$, the global encoding kernel $f_e$ is given by
$$f_e = b + F_v k_e^{*}$$
By the proof of Case 1, the mapping $k_e^{*} \mapsto F_v k_e^{*} = f_e^{*}$ is a bijection between the random encoding kernels $k_e^{*}$ and the linear space $I_v$, and $f_e^{*}$ is independently and uniformly distributed in the linear space $I_v$.

The linear space $I_v$ is a group [23, 34] under ordinary vector addition over the finite field $F$. For a fixed $b \in I_v$, the mapping $g : I_v \to I_v$ given by
$$f_e = g(f_e^{*}) = b + f_e^{*}$$
is a bijection. Therefore $f_e$ is independently and uniformly distributed in the linear space $I_v$. □
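Lemma 1 can also be checked empirically. Below is a minimal sketch (a single made-up node over $GF(5)$; all names and values are my own assumptions) that samples random local kernels and verifies that the induced global kernel is close to uniform on $I_v$.

```python
# A minimal sketch (a single made-up node over GF(5); all names are my own):
# empirical check of Lemma 1. With In(v) = {d1, d2}, f_d1 = (1,0) and
# f_d2 = (1,1), each sampled f_e = k1*f_d1 + k2*f_d2 should be (close to)
# uniform on the span I_v, here all of F^2.
import random
from collections import Counter

p = 5
f_d = [(1, 0), (1, 1)]        # linearly independent, so dim(I_v) = 2
counts = Counter()
random.seed(0)
for _ in range(100_000):
    k1, k2 = random.randrange(p), random.randrange(p)
    fe = tuple((k1 * a + k2 * b) % p for a, b in zip(*f_d))
    counts[fe] += 1

# 25 vectors in I_v, each expected ~4000 times out of 100,000 samples.
print(len(counts), min(counts.values()), max(counts.values()))
```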
2.2 Cut Decomposition of Network
Fix $\omega$ channel-disjoint source-receiver paths for each sink $t \in T$ and denote the set of channels on all these paths by $E_r^{t}$. Let $E_r = \bigcup_{t \in T} E_r^{t}$. The subgraph formed by the channels in $E_r$ and the nodes adjacent to the channels in $E_r$ is denoted by $G_r = (V_r, E_r)$. The performance of random coding is going to be analyzed through this subgraph $G_r$.
Lemma 2 There exists a natural partial order among the nodes $V$ implied by the flow of information. For $i, j \in V$, we define $i \le j$ if and only if there exists a path from node $j$ to node $i$. Then the following conditions hold:
• $a \le a$, $\forall a \in V$
• $a \le b$ and $b \le a \Rightarrow a = b$
• $a \le b$ and $b \le c \Rightarrow a \le c$
Therefore the relation "$\le$" defines a partial order among the nodes $V$.

The partial order given by the relation $\ge$ can be extended to a linear order
$$S = v_0 > v_1 > \cdots > v_m$$
where $m \le |J|$, and the relation "$>$" defines an upstream-to-downstream ordering of the nodes. Note that once the linear order is defined on the nodes of the network, that linear order is valid for all $t \in T$.
Based on the linear ordering of the nodes we associate a cut $CUT_k$ with each node $v_k \in V - T$. The cut associated with the source node $S$ is defined as $CUT_0 = E_r \cap Out(S)$, which consists of all the outgoing links of the source node $S$ in the edge set $E_r$. Once $CUT_k$ is defined, $CUT_{k+1}$ is formed from $CUT_k$ by replacing the channels in $In(v_{k+1}) \cap CUT_k$ by the channels in $Out(v_{k+1}) \cap E_r$. In this manner all cuts $CUT_k$ for $k = 0, \cdots, m$ are defined by induction.

In a similar fashion, the cut $CUT_{k,t}$ for a given sink node $t$ is defined as
$$CUT_{k,t} = CUT_k \cap E_r^{t}$$
For an edge set $X \subseteq E$ denote
$$L(X) = \langle\{f_d : d \in X\}\rangle$$
Figure 2.1: Cuts of Paths
Definition 8 We declare a failure at $CUT_{k,t}$ for sink node $t$ if the dimension of the linear space $\langle\{f_e : e \in CUT_{k,t}\}\rangle$ is less than $\omega$, i.e., $\dim(L(CUT_{k,t})) < \omega$.
2.2.1 Partition of CUTs
Definition 9 We define the following edge sets for the CUTs of the network:
1) the inside channels: $CUT^{i}_{k,t} = CUT_{k,t} \cap In(v_{k+1})$
2) the outside channels: $CUT^{o}_{k,t} = CUT_{k,t} - CUT^{i}_{k,t}$
3) the replacement channels: $CUT^{*}_{k+1,t} = CUT_{k+1,t} \cap Out(v_{k+1})$
Figure 2.2: Partition of a cut
The inside channels and the outside channels partition the cut $CUT_{k,t}$, while the replacement channels replace the inside channels in the next cut. That is, $|CUT^{*}_{k+1,t}| = |CUT^{i}_{k,t}|$ and $CUT_{k+1,t}$ is obtained from $CUT_{k,t}$ by replacing $CUT^{i}_{k,t}$ with $CUT^{*}_{k+1,t}$.
In accordance with the partitioning of $CUT_{k,t}$ we define the following linear spaces for $CUT_{k,t}$:
• $O = \langle\{f_e : e \in CUT^{o}_{k,t}\}\rangle$
• $I = \langle\{f_e : e \in In(v_{k+1})\}\rangle$
• $I^{*} = \langle\{f_e : e \in CUT^{i}_{k,t}\}\rangle$
2.3 Failure Probability of Random Linear Network Codes
The information rate of the source $S$ is $\omega$. By assumption the multicast is feasible on the communication network $G$. Therefore there exist $\omega$ channel-disjoint source-receiver paths for each $t \in T$, and the minimum cut capacity $C$ of the network is at least $\omega$. The results in this section are based on the existence of $\omega$ channel-disjoint source-receiver paths for each $t \in T$, and hence they hold for all networks with $\omega \le C$.

The results of this chapter rely on the cut decomposition of the network. By Definition 8, $CUT_{k,t}$ is successful if and only if $\dim(L(CUT_{k,t})) = \omega$. Due to random coding, the success or failure of $CUT_{k,t}$ is a random phenomenon. We define $\Gamma_{k,t}$ to be the event that no failure occurs at $CUT_{k,t}$, and similarly the event that a failure occurs at $CUT_{k,t}$ is denoted by $\Gamma^{c}_{k,t}$.
Lemma 3 Suppose there is no failure at $CUT_{k,t}$ for the sink node $t$, so that $\dim(L(CUT_{k,t})) = \omega$. Then
$$\Pr(\Gamma^{c}_{k+1,t} \mid \Gamma_{k,t}) \le \frac{1}{|F| - 1}$$
Proof of Lemma 3: By assumption there is no failure at $CUT_{k,t}$, which means $\dim(L(CUT_{k,t})) = \omega$. We also know $|CUT_{k,t}| = \omega$, that is, $CUT_{k,t}$ is composed of exactly $\omega$ channels for each sink $t$. Since $CUT_{k,t}$ is successful, all the global coding vectors in $\{f_e : e \in CUT_{k,t}\}$ are linearly independent. Let $\{f_e : e \in CUT_{k,t}\}$ be a basis of $L(CUT_{k,t}) = \langle\{f_e : e \in CUT_{k,t}\}\rangle$, the $\omega$-dimensional linear space of $CUT_{k,t}$.
We decompose each global coding vector $f_e$, $e \in In(v_{k+1})$, as
$$f_e = f_e^{o} + f_e^{i} \quad \text{such that } f_e^{o} \in O,\ f_e^{i} \in I^{*}$$
This decomposition is unique since $\{f_e : e \in CUT_{k,t}\}$ is a basis of $L(CUT_{k,t})$. Then for $d \in CUT^{*}_{k+1,t}$ we have
$$f_d = \sum_{e \in In(v_{k+1})} k_{e,d} f_e = \sum_{e \in In(v_{k+1})} k_{e,d}(f_e^{o} + f_e^{i}) = \sum_{e \in In(v_{k+1})} k_{e,d} f_e^{o} + \sum_{e \in In(v_{k+1})} k_{e,d} f_e^{i}$$
Therefore each $f_d$, $d \in CUT^{*}_{k+1,t}$, can be decomposed as
$$f_d = f_d^{o} + f_d^{i} \quad \text{such that } f_d^{o} \in O,\ f_d^{i} \in I^{*}$$
where $f_d^{o}$ and $f_d^{i}$ are given by
$$f_d^{o} = \sum_{e \in In(v_{k+1})} k_{e,d} f_e^{o}, \qquad f_d^{i} = \sum_{e \in In(v_{k+1})} k_{e,d} f_e^{i}$$
Again this decomposition of $f_d$ is unique, for $\{f_e : e \in CUT_{k,t}\}$ forms a basis of the message space $I_S$, so that each point in the message space can be uniquely decomposed into an element of the linear space $O$ and an element of the linear space $I^{*}$.

Note that the channels $\{e \in E : e \in CUT^{o}_{k,t}\}$ are common to both $CUT_{k,t}$ and $CUT_{k+1,t}$, and therefore, given $\Gamma_{k,t}$, the success of $CUT_{k+1,t}$ depends on the channels in $CUT^{*}_{k+1,t}$ through the random coding at the node $v_{k+1}$.
Definition 10 Given $\Gamma_{k,t}$, that $CUT_{k,t}$ is successful: if $CUT_{k+1,t}$ is successful, then the random coding at node $v_{k+1}$ is said to be successful for sink node $t$.

Let $h = \dim(I^{*})$. Since all the global coding vectors in $CUT_{k,t}$ are linearly independent, we have $h = |CUT^{i}_{k}| = |CUT^{*}_{k}|$.
Proposition 3 Given that there is no failure at $CUT_{k,t}$, the following two statements are equivalent:
1) $CUT_{k+1,t}$ is successful if and only if
$$I^{*} = \langle\{f^{i}_{d} : d \in CUT^{*}_{k+1,t}\}\rangle$$
2) $CUT_{k+1,t}$ is successful if and only if the $h \times h$ submatrix $K^{*}_{v}$ of the local encoding kernel $K_v$ of node $v$ is nonsingular:
$$\det\left(K^{*}_{v} = [k_{e,d}]_{e \in CUT^{i}_{k,t},\ d \in CUT^{*}_{k+1,t}}\right) \neq 0$$
By Lemma 1 the vectors in $\{f^{i}_{d} : d \in CUT^{*}_{k}\}$ are independently and identically distributed in the linear space $I^{*}$ with uniform distribution. Therefore
$$\Pr(\Gamma_{k+1} \mid \Gamma_{k}) = \left(1 - \frac{1}{|F|^{h}}\right)\left(1 - \frac{1}{|F|^{h-1}}\right)\cdots\left(1 - \frac{1}{|F|}\right) \ge \left(1 - \frac{1}{|F|^{\omega}}\right)\left(1 - \frac{1}{|F|^{\omega-1}}\right)\cdots\left(1 - \frac{1}{|F|}\right) \ge 1 - \sum_{j=1}^{\omega}\left(\frac{1}{|F|}\right)^{j} \ge 1 - \frac{1}{|F|-1}$$
and we get the desired result
$$\Pr(\Gamma^{c}_{k+1,t} \mid \Gamma_{k,t}) \le \frac{1}{|F|-1}$$
and Lemma 3 is proved. □
Lemma 4 The probability of failure of a random linear network code due to random coding at a particular node is upper bounded by
$$\frac{|T|}{|F| - 1}$$
Proof of Lemma 4: By Definition 10 we see that
$$\Gamma^{c}_{k+1,t} \mid \Gamma_{k,t} \iff \text{random coding at node } v_{k+1} \text{ fails for sink node } t \qquad (2.1)$$
Therefore Lemma 3 bounds the probability of a failure due to random coding at a particular node for one sink only. Since the receiver set has cardinality $|T|$, this bound generalizes to $\frac{|T|}{|F|-1}$ by the union bound. We can deduce from equation (2.1) that the probability of failure of a linear network code at a particular node due to random coding is upper bounded by $\frac{|T|}{|F|-1}$, and Lemma 4 is proved. □

Now we are ready to state and prove the main result of this section.
2.3.1 Lower bound on the success probability of Random linear network codes
Theorem 1 Consider single source multicast over an acyclic network $G = (V,E)$ in which the multicast is feasible with information rate $\omega$ symbols per unit time. If the local coding coefficients $k_{d,e}$ are independent, uniformly distributed random variables in the base field $F$, then the probability that the messages can be decoded correctly at all the sink nodes $t \in T$ is at least
$$\left(1 - \frac{|T|}{|F| - 1}\right)^{|J| + 1}$$
Remark 1: If the field size $|F|$ is large enough, a deterministic code can be used on the outgoing links of the source node such that the global kernels of any $\omega$ outgoing channels of the source node have rank $\omega$. Using this deterministic code at the source node would reduce the exponent of the lower bound from $|J|+1$ to $|J|$.

Remark 2: At some internal nodes it is unnecessary to perform random coding, e.g., for nodes $v \in V - T$ such that $|In(v)| = 1$. In that case the exponent of the bound reduces to the number of nodes performing the random coding.
Proof of Theorem 1: The overall network code meets the multicast requirement if all of the $|J| + 1$ nodes are successful. By Lemma 4 the probability of failure of the linear network code due to random coding at a particular node is upper bounded by $\frac{|T|}{|F|-1}$. There are at most $|J| + 1$ nodes in the network performing random coding. Therefore the success probability of random coding on the communication network is lower bounded by
$$\left(1 - \frac{|T|}{|F| - 1}\right)^{|J| + 1}$$
and Theorem 1 is proved. □
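The following minimal sketch (network sizes are illustrative assumptions) compares the new node-based bound of Theorem 1 with the edge-based bound (1.3) of [22].

```python
# A minimal sketch (network sizes are illustrative assumptions): comparing
# the node-based bound of Theorem 1 with the edge-based bound (1.3) of [22]
# for |J| = 5 internal nodes, |E| = 40 edges, |T| = 3 sinks.

def node_bound(T, J, q):    # Theorem 1: exponent counts the coding nodes
    return (1 - T / (q - 1)) ** (J + 1)

def edge_bound(T, E, q):    # Proposition 1: exponent counts the edges
    return (1 - T / q) ** E

for q in (16, 64, 256):
    print(q, round(node_bound(3, 5, q), 4), round(edge_bound(3, 40, q), 4))
# For each field size the node-based bound is markedly larger (tighter) here.
```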
Theorem 2 Consider single source multicast over an acyclic network $G = (V,E)$. Let the minimum cut capacity for each sink node $t \in T$ be $C_t$, let the information rate be $\omega$ symbols per unit time, and let the redundancy for sink node $t$ be $\delta_t = C_t - \omega$. If the local coding coefficients $k_{d,e}$ are independent, uniformly distributed random variables in the base field $F$, then the failure probability $P^{t}_{e}$, i.e., the probability that the messages cannot be decoded correctly at sink node $t \in T$, is upper bounded by
$$P^{t}_{e} \le \frac{\binom{\delta_t + |J| + 1}{|J|}}{(|F| - 1)^{\delta_t + 1}}.$$

Proof of Theorem 2: See Appendix.
Corollary 1 Consider single source multicast over an acyclic network $G = (V,E)$. Let the minimum cut capacity for each sink node $t \in T$ be $C_t$, let the information rate be $\omega$ symbols per unit time, and let the redundancy for sink node $t$ be $\delta_t = C_t - \omega$. If the local coding coefficients $k_{d,e}$ are independent, uniformly distributed random variables in the base field $F$, then the probability that the messages can be decoded correctly at sink node $t \in T$ is lower bounded by
$$\prod_{k=0}^{|J|}\left(1 - \frac{\binom{\delta_t + k}{k}}{(|F| - 1)^{\delta_t + 1}}\right)$$
The connection between this corollary and Theorem 2 can be established by using the following combinatorial identity:
$$\sum_{i=0}^{k}\binom{n+i}{n} = \binom{k+n+1}{n+1}.$$
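A minimal sketch (illustrative parameters) that verifies the identity for small values and evaluates the Theorem 2 bound, showing its geometric decay in the redundancy $\delta_t$:

```python
# A minimal sketch (illustrative parameters): checking the combinatorial
# identity above for small n, k and evaluating the Theorem 2 upper bound.
from math import comb

# Hockey-stick identity: sum_{i=0}^{k} C(n+i, n) == C(k+n+1, n+1)
assert all(sum(comb(n + i, n) for i in range(k + 1)) == comb(k + n + 1, n + 1)
           for n in range(8) for k in range(8))

def theorem2_bound(delta_t, J, q):
    # P_e^t <= C(delta_t + J + 1, J) / (q - 1)^(delta_t + 1)
    return comb(delta_t + J + 1, J) / (q - 1) ** (delta_t + 1)

# With |J| = 5 internal nodes and GF(64), the bound decays geometrically
# in the redundancy delta_t.
for delta in range(4):
    print(delta, theorem2_bound(delta, 5, 64))
```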
2.4 Comparison of the Bound with the Reported Bounds in the Literature
When the field size is large, the bound derived in this chapter improves the bounds in [22] roughly by replacing the number of channels $|E|$ in equation (2.2) (Proposition 1) by the number of internal nodes $|J|$. However, the bound
$$\left(1 - \frac{|T|}{|F|}\right)^{|E|} \qquad (2.2)$$
reported in [22] is quite misleading. This quantity behaves completely opposite to what one would expect from a bound. The problem arises due to the exponent $|E|$. To be more specific: based on Lemma 1 we can deduce that when an extra edge is added to a network, the success probability of a random linear network code on the new network is at least the success probability on the network without the extra edge. However, the bound in (2.2) implicitly argues in a contradictory way that adding extra edges to an existing network (and hence increasing the value of $|E|$) worsens the success probability of random linear network coding.

In the light of this viewpoint, our bound
$$\left(1 - \frac{|T|}{|F| - 1}\right)^{|J| + 1}$$
offers more than an improvement in the exponent. As an example we are going to construct a network that emphasizes how both of these bounds relate to the actual success probability of random linear network codes. As a matter of fact, this example shows how the bound reported in [22] can literally collapse for certain network topologies.
2.4.1 Comparison of the bounds over a network with |J| = 1
Example 3 Consider single source multicast with source rate $\omega$ over a network $G$ which has only one intermediate node $v_1$. There are $\omega$ edges $E_1$ between the source node $S$ and $v_1$, and $\omega$ edges $E_2$ between $v_1$ and the sink node $t$. Therefore $E = E_1 \cup E_2$ and the network has minimum cut capacity $C = \omega$. Hence single source multicast is feasible and the theorems apply. Denote the success probability of random linear network codes on this network by $P_s$; then the two bounds give
$$P_s \ge \left(1 - \frac{|T|}{|F|-1}\right)^{2} > \left(1 - \frac{|T|}{|F|}\right)^{2\omega} \qquad (2.3)$$
which shows in an obvious way that our bound is exceptionally stronger.

But the quantity reported as a bound in [22] is deficient beyond that. To show this, suppose we enlarge the edge set of the network by introducing an additional $\lambda$ new edges between the source node $S$ and $v_1$. Let this enlarged edge set be called $E'$. Enlarging the edge set in this manner does not affect the minimum cut capacity of the network (i.e., $C = \omega$ after the edge set is enlarged).
The exact success probability for this network can be shown to be
$$P_s = \prod_{i=0}^{\omega-1}\left(1 - \frac{1}{|F|^{\omega+\lambda-i}}\right) \times \prod_{j=0}^{\omega-1}\left(1 - \frac{1}{|F|^{\omega-j}}\right)$$
It is easy to see that $P_s$ is a strictly increasing function of the parameter $\lambda$. This translates into saying that the success probability of random linear network codes strictly improves as we incorporate more and more edges between $S$ and $v_1$.
Under these circumstances equation (2.3) becomes
$$P_s \ge \left(1 - \frac{|T|}{|F|-1}\right)^{2} > \left(1 - \frac{|T|}{|F|}\right)^{2\omega+\lambda}$$
Suppose we let $\lambda \to \infty$; then
$$\lim_{\lambda \to \infty} P_s = \prod_{j=0}^{\omega-1}\left(1 - \frac{1}{|F|^{\omega-j}}\right) > \lim_{\lambda \to \infty}\left(1 - \frac{|T|}{|F|-1}\right)^{2} > \lim_{\lambda \to \infty}\left(1 - \frac{|T|}{|F|}\right)^{2\omega+\lambda} = 0$$
It is obvious that enlarging the edge set by incorporating additional edges between $S$ and $v_1$ strictly improves the success probability of random linear network coding, while the lower bound on the success probability in equation (2.2) decays to zero. This very simple example shows how drastically irrelevant the bound reported in [22] can be with respect to the performance of random linear network codes, and shows that our bound is immune to such shortcomings.
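The behavior described in Example 3 is easy to reproduce numerically; the following is a minimal sketch (with $\omega = 4$, $|F| = 16$ and $|T| = 1$ chosen purely for illustration).

```python
# A minimal sketch: evaluating Example 3 numerically. Exact success
# probability P_s of the two-hop network against the two lower bounds,
# as lambda extra parallel edges are added between S and v_1.

def exact_ps(omega, lam, q):
    ps = 1.0
    for i in range(omega):
        ps *= (1 - q ** -(omega + lam - i)) * (1 - q ** -(omega - i))
    return ps

def node_bound(T, q):             # (1 - |T|/(|F|-1))^2, independent of lambda
    return (1 - T / (q - 1)) ** 2

def edge_bound(T, omega, lam, q): # (1 - |T|/|F|)^(2*omega + lambda)
    return (1 - T / q) ** (2 * omega + lam)

omega, q = 4, 16
for lam in (0, 8, 32, 128):
    print(lam, round(exact_ps(omega, lam, q), 4),
          round(node_bound(1, q), 4), round(edge_bound(1, omega, lam, q), 4))
# P_s increases with lambda while the edge-based bound decays toward zero.
```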
Chapter 3
Upper Bounds on the Failure Probability of Random Linear Network Codes with Redundancy for Multicast
Let $G = (V,E)$ be a network with $|J|$ internal nodes. We consider single source multicast on this network with source rate $\omega$. The minimum cut capacity of the network may differ from sink to sink, so we define $C_t$ as the minimum cut capacity of the network for sink node $t \in T$. The minimum cut capacity of the network is then defined as
$$C = \min\{C_t : t \in T\}$$
The redundancy of the linear network code is defined as the difference between the minimum cut capacity and the source rate. Since the minimum cut capacity for each sink node might not be the same, the redundancy of the linear network code for each sink is defined as
$$\delta_t = C_t - \omega$$
3.1 Definitions and Terminologies
We need additional definitions in this section in order to extend our analysis of random coding to the case with redundancy. As we mentioned, there exist $\delta_t = C_t - \omega$ redundant source-receiver paths for each sink $t \in T$. By construction there are $C_t$ channels in each $CUT_{k,t}$, where $\delta_t = C_t - \omega$ of them are redundant channels.
3.1.1 Redundancy distribution in CUT_{k,t}
Definition 11 Given $\Gamma_{k,t}$, that there is no failure at $CUT_{k,t}$, the outside channel redundancy of $CUT_{k,t}$ is defined as
$$\delta^{o}_{k,t} = |CUT^{o}_{k,t}| - Rank((f_e : e \in CUT^{o}_{k,t}))$$
$\delta^{o}_{k,t}$ is the redundancy that has already been consumed by the outside channels.
Definition 12 Given $\Gamma_{k,t}$, that there is no failure at $CUT_{k,t}$, the inside channel redundancy of $CUT_{k,t}$ is defined as
$$\delta^{i}_{k,t} = \delta_{k,t} - \delta^{o}_{k,t}$$
As a matter of fact, $\delta^{i}_{k,t}$ is the leftover redundancy that can be used by the node $v_{k+1}$. Given that $CUT_{k,t}$ is successful, the success of $CUT_{k+1,t}$ is determined by the random coding at the node $v_{k+1}$.

Ideally we would like to allocate as much redundancy as we can to the node $v_{k+1}$ to improve its success, as the success of $CUT_{k+1,t}$ depends solely on the random coding at node $v_{k+1}$. Indeed, we will show that the failure probability at a particular node decays exponentially with the redundancy allocated to it. However, due to random coding, the redundancy allocated to each node is a random variable taking values in $\{0, \cdots, \delta_t\}$. The probability distribution of the redundancy at a given node actually depends on the position of the node in the upstream-to-downstream ordering of the nodes. We are going to derive and use bounds on the probability mass function of the redundancy of each node to facilitate the performance analysis of random coding.
3.2 Preliminary Results
3.2.1 Performance Analysis of Random Linear Network Codes over a Network with |J| = 0
First we are going to develop some preliminary results that will serve as the building blocks of the main result of this section. We start off with a special network in which all the outgoing links of the source node are connected directly to the sink nodes, i.e., $|J| = 0$.

Definition 13 Consider single source multicast with rate $\omega$ on a network with no internal nodes, i.e., $|J| = 0$, where all the outgoing links of the source node are connected directly to the sink node. Suppose this special network has redundancy $\delta$. Then the failure probability of a random linear network code for this type of network is denoted by
$$P^{\omega}_{0,\delta}$$
Lemma 5 Consider single source multicast with rate $\omega$ on a network with no internal nodes, i.e., $|J| = 0$, where all the outgoing links of the source node are connected directly to the sink node. Suppose this special network has redundancy $\delta$. Then the exact failure probability is given by
$$P^{\omega}_{0,\delta} = 1 - \prod_{i=0}^{\omega-1}\left(1 - \frac{1}{|F|^{C-i}}\right)$$
Proof of Lemma 5: The random linear network code is successful for this network if and only if the local encoding kernel $K_0$ of the source node has rank equal to $\omega$. The local encoding kernel $K_0$ is an $\omega \times C$ random matrix with coefficients independently and uniformly drawn from the finite field $F$. The probability that this random matrix has rank $\omega$ is given by
$$\prod_{i=0}^{\omega-1}\left(1 - \frac{1}{|F|^{C-i}}\right)$$
and the result follows. □
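Lemma 5 can be sanity-checked by simulation. A minimal sketch (parameters chosen purely for illustration; `rank_mod_p` is a hypothetical helper of my own):

```python
# A minimal sketch (illustrative parameters): Monte Carlo check of Lemma 5
# for omega = 2, C = 3 (so delta = 1) over GF(5).
import random

p, omega, C = 5, 2, 3

def rank_mod_p(M):
    M, r = [row[:] for row in M], 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] % p), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)
        for i in range(len(M)):
            if i != r:
                f = M[i][c] * inv % p
                M[i] = [(a - f * b) % p for a, b in zip(M[i], M[r])]
        r += 1
    return r

random.seed(1)
trials = 100_000
full = sum(rank_mod_p([[random.randrange(p) for _ in range(C)]
                       for _ in range(omega)]) == omega for _ in range(trials))
exact = 1.0
for i in range(omega):
    exact *= 1 - p ** -(C - i)
# Success probability ~0.952, i.e. failure ~0.048, which lies between the
# bounds of Lemma 6 below: 1/q^(delta+1) = 0.04 and 1/((q-1)q^delta) = 0.05.
print(full / trials, exact)
```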
Lemma 6 Consider single source multicast with rate $\omega$ on a network with no internal nodes, i.e., $|J| = 0$, where all the outgoing links of the source node are connected directly to the sink node $t$. Suppose this special network has redundancy $\delta_t$. Then the failure probability $P^{\omega}_{0,\delta_t}$ is bounded from above and below by
$$\frac{1}{(|F|-1)|F|^{\delta_t}} \ge P^{\omega}_{0,\delta_t} \ge \frac{1}{|F|^{\delta_t+1}}$$
Proof of Lemma 6: In the proof of Lemma 5, the probability of the random matrix having rank ω can be bounded by

1 − 1/|F|^{δ_t+1} ≥ ∏_{i=0}^{ω−1} (1 − 1/|F|^{C_t−i}) ≥ 1 − Σ_{i=δ_t+1}^{∞} 1/|F|^i,

and the result follows. □
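A quick numerical sanity check of these bounds (a minimal sketch reusing exact_success from the snippet above):

    def lemma6_bounds(omega, C_t, q):
        # Returns (lower bound, exact failure probability, upper bound) of Lemma 6.
        delta = C_t - omega
        p_fail = 1.0 - exact_success(omega, C_t, q)
        return 1.0 / q ** (delta + 1), p_fail, 1.0 / ((q - 1) * q ** delta)

    for delta in range(4):
        lo, p, up = lemma6_bounds(3, 3 + delta, 5)
        assert lo <= p <= up
        print(delta, lo, p, up)     # failure probability decays like |F|^(-delta)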
3.2.2 The Role of P^ω_{0,δ} in Analyzing the Failure Probability of Random Linear Network Codes
Lemma 7 Given that there is no failure at CUT_{k,t} and that the outside redundancy is δ^o_{k,t} = l, the probability of failure at CUT_{k+1,t} due to random coding at node v_{k+1} is given by

Pr(Γ^c_{k+1,t} | Γ_{k,t}, δ^o_{k,t} = l) = P^{ω−dim(O)}_{0,δ_t−l}.
Proof of Lemma 7: Define

O = ⟨{f_e : e ∈ CUT^o_{k,t}}⟩.

There exist dim(O) linearly independent global encoding vectors in CUT^o_{k,t} which form a basis for the linear space O. Let

{f^o_1, ..., f^o_{dim(O)}}

be a basis for the linear space O. By the assumption Γ_{k,t}, CUT_{k,t} is successful; therefore there exist ω − dim(O) linearly independent global encoding vectors in CUT^i_{k,t},

{f^i_1, ..., f^i_{ω−dim(O)}} ⊆ {f_e : e ∈ CUT^i_{k,t}},

such that

dim(⟨{f^o_1, ..., f^o_{dim(O)}} ∪ {f^i_1, ..., f^i_{ω−dim(O)}}⟩) = ω.

Let the linear space spanned by these inside vectors be denoted by

I* = ⟨{f^i_1, ..., f^i_{ω−dim(O)}}⟩.

Then

(Γ^c_{k+1,t} | Γ_{k,t}, δ^o_{k,t} = l) ⟺ I* ⊄ L(CUT*_{k+1,t}).
We decompose each global encoding vector f_e, e ∈ In(v_{k+1}), as

f_e = f^o_e + f^i_e, where f^o_e ∈ O and f^i_e ∈ I*;

this decomposition is unique since {f^o_1, ..., f^o_{dim(O)}} ∪ {f^i_1, ..., f^i_{ω−dim(O)}} is a basis of L(CUT_{k,t}). Then for d ∈ CUT*_{k+1,t} we have

f_d = Σ_{e∈In(v_{k+1})} k_{ed} f_e = Σ_{e∈In(v_{k+1})} k_{ed} (f^o_e + f^i_e) = Σ_{e∈In(v_{k+1})} k_{ed} f^o_e + Σ_{e∈In(v_{k+1})} k_{ed} f^i_e.

Therefore {f_d : d ∈ CUT*_{k+1,t}} can be decomposed as

f_d = f^o_d + f^i_d, where f^o_d ∈ O and f^i_d ∈ I*,

with f^o_d and f^i_d given by

f^o_d = Σ_{e∈In(v_{k+1})} k_{ed} f^o_e,   f^i_d = Σ_{e∈In(v_{k+1})} k_{ed} f^i_e.

Therefore each global encoding vector can be uniquely decomposed into an element of the linear space O and an element of the linear space I*.
Based on the condition (Γ^c_{k+1,t} | Γ_{k,t}, δ^o_{k,t} = l) ⟺ I* ⊄ L(CUT*_{k+1,t}), the components f^o_e ∈ O (which lie outside I*) of all f_e, e ∈ CUT^i_{k,t}, can be dropped. Therefore

I* ⊄ L(CUT*_{k+1,t}) ⟺ I* ≠ ⟨{f^i_d : d ∈ CUT*_{k+1,t}}⟩.

The condition I* ≠ ⟨{f^i_d : d ∈ CUT*_{k+1,t}}⟩ is equivalent to a failure of multicast with source rate ω − dim(O) and redundancy δ_t − l on a network with no internal nodes where all the outgoing edges of the source node are connected directly to the sink node t. Indeed, the vectors {f^i_1, ..., f^i_{ω−dim(O)}} can be viewed as the imaginary links providing the message to the source node. In Definition 13 we defined this failure probability to be P^{ω−dim(O)}_{0,δ_t−l}; therefore we have

Pr(Γ^c_{k+1,t} | Γ_{k,t}, δ^o_{k,t} = l) = Pr(I* ⊄ L(CUT*_{k+1,t})) = Pr(I* ≠ ⟨{f^i_d : d ∈ CUT*_{k+1,t}}⟩) = P^{ω−dim(O)}_{0,δ_t−l},

and the lemma is proved. □
Definition 14 The rank reduction at the source node S is defined as

R_0 = ω − Rank(K_0).
For the special type of network in which |J| = 0, the random linear network code fails if dim(CUT_{0,t}) < ω. By Lemma 6 we know that a rank reduction of 1 or more at CUT_{0,t} is already a failure, and we have

1/((|F|−1)|F|^{δ_t}) ≥ Pr(R_0 ≥ 1).

We are also interested in failures due to rank reduction by 2 or more for this special type of network with |J| = 0. To bound this probability we develop an alternative approach.
Lemma 8 Consider a single source multicast with rate ω on a network with no internal nodes, i.e. |J| = 0, where all the outgoing links of the source node are connected directly to the sink node t. Suppose this special network has redundancy δ_t. Define the probability of failure with rank reduction R_0 ≥ k as P^{ω,k}_{0,δ_t}. Then P^{ω,k}_{0,δ_t} is upper bounded by

P^{ω,k}_{0,δ_t} ≤ (1/(|F|^{k−1}(|F|−1)))^{δ_t+k}.
Proof of Lemma 8: By Lemma 1 the vectors in {f_e : e ∈ CUT_{0,t}} are independently and identically distributed in the linear space I_S with uniform distribution, due to the assumption that the coding coefficients k_{de} are independently and identically distributed in F with uniform distribution.

The number of channels in CUT_{0,t} is C_t = ω + δ_t; label these channels as CUT_{0,t} = {d_1, ..., d_{C_t}}. Define the linear space O_i as

O_i = ⟨{f_{d_j} : 1 ≤ j ≤ i}⟩.

We consider the sequence Z_i = dim(O_i) − dim(O_{i−1}), where O_0 = {0}. Z_i takes values either 0 or 1. The event {R_0 ≥ k} corresponds to the set of sequences Z = (Z_1, ..., Z_{C_t}) of weight at most ω − k. Therefore

P^{ω,k}_{0,δ_t} = Σ_{Z∈{0,1}^{C_t} : wt(Z)≤ω−k} Pr(Z),   (3.1)
where wt(·) denotes the weight of a binary sequence. We have

Pr(Z) = ∏_{i=1}^{C_t} Pr(Z_i | Z_1, ..., Z_{i−1}),   (3.2)

where

Pr(Z_i = 0 | Z_1, ..., Z_{i−1}) = 1/|F|^{ω−wt(Z_1,...,Z_{i−1})}   (3.3)

and

Pr(Z_i = 1 | Z_1, ..., Z_{i−1}) = 1 − 1/|F|^{ω−wt(Z_1,...,Z_{i−1})} ≤ 1.   (3.4)
Equation (3.3) is proved as follows. The random variable Z_i takes value 0 if and only if f_{d_i} is in O_{i−1}, the linear space spanned by the vectors in {f_{d_j} : 1 ≤ j ≤ i−1}. We have dim(I_S) − dim(O_{i−1}) = ω − wt(Z_1, ..., Z_{i−1}). Since f_{d_i} is uniformly distributed in the linear space I_S, f_{d_i} falls in O_{i−1} with probability

Pr(Z_i = 0 | Z_1, ..., Z_{i−1}) = |O_{i−1}|/|I_S| = 1/|F|^{ω−wt(Z_1,...,Z_{i−1})}.

From this formula, we can see that Pr(Z) depends not only on the number of 1's in Z but also on the locations of the 1's and 0's in the sequence. We therefore use a different method to characterize a binary sequence Z. We consider the location of the i-th 0 in the sequence, which can be characterized by the number of 1's before the i-th 0. Let this be t_Z(i). Obviously t_Z = (t_Z(i) : i = 1, ..., C_t − wt(Z)) is a non-decreasing sequence with maximum entry value t_Z(i) ≤ wt(Z) and length C_t − wt(Z).
By using this sequence, we proceed as follows:

P^{ω,k}_{0,δ_t} = Σ_{Z : wt(Z)≤ω−k} Pr(Z)
            = Σ_{a=0}^{ω−k} Σ_{Z : wt(Z)=a} Pr(Z)
            =^{(*)} Σ_{a=0}^{ω−k} Σ_{t_Z ∈ T_{C_t,a}} Pr(Z)
            ≤^{(**)} Σ_{a=0}^{ω−k} Σ_{t_Z ∈ T_{C_t,a}} ∏_{i=1}^{C_t−a} 1/|F|^{ω−t_Z(i)}
            ≤^{(***)} Σ_{a=0}^{ω−k} Σ_{t_Z ∈ T'_{C_t,a}} ∏_{i=1}^{δ_t+k} 1/|F|^{ω−t_Z(i)}.

• In step (*), T_{C_t,a} consists of all t_Z sequences corresponding to Z-sequences of length C_t and weight a.

• In step (**), we use (3.3) and the upper bound 1 in (3.4).

• In step (***), first we upper bound ∏_{i=1}^{C_t−a} 1/|F|^{ω−t_Z(i)} by ∏_{i=1}^{δ_t+k} 1/|F|^{ω−t_Z(i)}, because δ_t + k ≤ C_t − a for all a and 1/|F|^{ω−t_Z(i)} ≤ 1. Secondly, we introduce a set T'_{C_t,a} consisting of all non-decreasing sequences of length δ_t + k with entries taking values in {0, 1, ..., a}.
Replace ∪_{a=0}^{ω−k} T'_{C_t,a} by a bigger set T*_{C_t,ω−k} which consists of all sequences t of length δ_t + k with maximum entry value ω − k, without the non-decreasing monotonicity requirement. That is, t ∈ T*_{C_t,ω−k} satisfies two conditions:

1. the length of t is δ_t + k, i.e. t = (t_1, ..., t_{δ_t+k});

2. for each i : 1 ≤ i ≤ δ_t + k, t_i ∈ {0, ..., ω − k}.

Using this set we obtain the upper bound

P^{ω,k}_{0,δ_t} ≤ Σ_{t∈T*_{C_t,ω−k}} ∏_{i=1}^{δ_t+k} 1/|F|^{ω−t(i)}
            = (Σ_{t=0}^{ω−k} 1/|F|^{ω−t})^{δ_t+k}
            ≤ (Σ_{n=k}^{∞} 1/|F|^n)^{δ_t+k}
            ≤ (1/(|F|^{k−1}(|F|−1)))^{δ_t+k}.

The lemma is proved. □
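The exponential decay in δ_t + k predicted by Lemma 8 can be observed empirically. The sketch below (reusing rank_gf and random from the earlier snippet; the sample size is only indicative) estimates Pr(R_0 ≥ k) by simulation and compares it with the bound.

    def rank_reduction_tail(omega, delta, q, k, trials=50000):
        # Monte Carlo estimate of Pr(R_0 >= k) for an omega x (omega+delta) uniform matrix.
        C = omega + delta
        bad = sum(omega - rank_gf([[random.randrange(q) for _ in range(C)]
                                   for _ in range(omega)], q) >= k
                  for _ in range(trials))
        return bad / trials

    q, omega, delta, k = 3, 4, 1, 2
    bound = (1.0 / (q ** (k - 1) * (q - 1))) ** (delta + k)
    print(rank_reduction_tail(omega, delta, q, k), "<=", bound)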
Lemma 9 Consider a single source multicast with rate ω on a network with no internal nodes, i.e. |J| = 0, where all the outgoing links of the source node are connected directly to the sink node. Suppose this special network has redundancy δ + 1. Then the failure probability is upper bounded by

P^ω_{0,δ+1} ≤ P^ω_{0,δ} / (|F|−1).
Proof of Lemma 9: This lemma is one of the key components in proving the next theorem. P^ω_{0,δ_t+1} can be written as

P^ω_{0,δ_t+1} = (P^ω_{0,δ_t} − P^{ω,2}_{0,δ_t})/|F| + P^{ω,2}_{0,δ_t}
           ≤ (P^ω_{0,δ_t} − (1/(|F|(|F|−1)))^{δ_t+2})/|F| + (1/(|F|(|F|−1)))^{δ_t+2}
           ≤ P^ω_{0,δ_t}/|F| + 1/(|F|^{δ_t+3}(|F|−1)^{δ_t+1})
           ≤ P^ω_{0,δ_t}/|F| + 1/(|F|^{δ_t+2}(|F|−1))
           ≤^{(*)} P^ω_{0,δ_t}/|F| + P^ω_{0,δ_t}/(|F|(|F|−1))
           ≤ Σ_{i=1}^{∞} P^ω_{0,δ_t}/|F|^i
           ≤ P^ω_{0,δ_t}/(|F|−1).

In step (*) Lemma 6 is invoked, i.e. P^ω_{0,δ_t} ≥ 1/|F|^{δ_t+1}. □
3.3 Failure Probability of Random Linear Network Codes with Redundancy

3.3.1 Upper Bound on the Failure Probability of Random Linear Network Codes with Redundancy
Theorem 3 For single source multicast over an acyclic network G = {V, E}, let the minimum cut capacity for sink node t ∈ T be C_t, let the information rate be ω symbols per unit time, and let the redundancy for sink node t be δ_t = C_t − ω. If the local coding coefficients k_{d,e} are independent, uniformly distributed random variables in the base field F, the failure probability P^t_e that the messages cannot be decoded correctly at sink node t ∈ T is upper bounded by

P^t_e ≤ (binom(|J|, 0) + binom(|J|, 1) + ... + binom(|J|, δ_t+1)) / (|F|−1)^{δ_t+1}.
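The bound is straightforward to evaluate. The following sketch (plain Python; the parameter values are arbitrary) tabulates it for a few field sizes, showing the (|F|−1)^{−(δ_t+1)} decay.

    from math import comb

    def theorem3_bound(J, delta_t, q):
        # Sum of binom(|J|, i), i = 0..delta_t+1, divided by (|F|-1)^(delta_t+1).
        return sum(comb(J, i) for i in range(delta_t + 2)) / (q - 1) ** (delta_t + 1)

    for q in (4, 16, 256):
        print(q, [theorem3_bound(J=10, delta_t=d, q=q) for d in range(4)])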
Proof of Theorem 3: The proof of the theorem is by induction on the number of internal nodes |J|. For |J| = 0 the result follows directly from Lemma 6:

P^ω_{0,δ_t} ≤ 1/(|F|−1)^{δ_t+1}.

We assume a linear order among the internal nodes of the network,

S = v_0 > v_1 > ... > v_m, where m ≤ |J|.
Now suppose the result holds for all |J| ≤ k, and consider the case |J| = k + 1. Define the probability of failure at CUT_{k,t} as p^k_e; then

p^{k+1}_e ≤ Pr(Γ^c_{k,t}) + Pr(Γ^c_{k+1,t}, Γ_{k,t})
       ≤ p^k_e + Σ_{i=0}^{δ_t} Pr(δ^o_{k,t} = i, Γ^c_{k+1,t}, Γ_{k,t})
       ≤ p^k_e + Σ_{i=0}^{δ_t} Pr(δ^o_{k,t} = i, Γ_{k,t}) Pr(Γ^c_{k+1,t} | δ^o_{k,t} = i, Γ_{k,t})
       ≤ p^k_e + Σ_{i=0}^{δ_t} Pr(δ^o_{k,t} = i, Γ_{k,t}) P^{ω−|CUT^o_{k,t}|+i}_{0,δ_t−i}
       ≤ p^k_e + Σ_{i=0}^{δ_t} Pr(δ^o_{k,t} = i) P^{ω−|CUT^o_{k,t}|+i}_{0,δ_t−i}.
Lemma 10

Σ_{i=0}^{δ_t} Pr(δ^o_{k,t} = i) P^{ω−|CUT^o_{k,t}|+i}_{0,δ_t−i−1} ≤ (binom(k, 0) + binom(k, 1) + ... + binom(k, δ_t)) / (|F|−1)^{δ_t}.
Proof of Lemma 10: CUT_{k,t} is comprised of C_t = ω + δ_t channels. The cardinality of the set of inside channels is at least one, i.e. |CUT^i_{k,t}| ≥ 1. Therefore

|CUT^o_{k,t}| ≤ C_t − 1 = ω + δ_t − 1.

Define CUT^Π_{k,t} as a subset of CUT_{k,t} such that |CUT^Π_{k,t}| = C_t − 1 and

CUT^o_{k,t} ⊆ CUT^Π_{k,t} ⊆ CUT_{k,t}.

Denote the success at CUT^Π_{k,t} by Γ^Π_{k,t} and the failure by Γ^{Π,c}_{k,t}. CUT^Π_{k,t} can be viewed as the set of incoming channels of an imaginary sink node. Under these circumstances the random linear network code has redundancy δ_t − 1 for the imaginary sink node.
Therefore, by the induction hypothesis, the failure probability at CUT^Π_{k,t} is upper bounded by

Pr(Γ^{Π,c}_{k,t}) ≤ (binom(k, 0) + binom(k, 1) + ... + binom(k, δ_t)) / (|F|−1)^{δ_t},

and hence

Σ_{i=0}^{δ_t} Pr(δ^o_{k,t} = i, Γ^{Π,c}_{k,t}) ≤ Σ_{i=0}^{δ_t} Pr(δ^o_{k,t} = i) Pr(Γ^{Π,c}_{k,t} | δ^o_{k,t} = i) ≤ (binom(k, 0) + binom(k, 1) + ... + binom(k, δ_t)) / (|F|−1)^{δ_t}.

By using Lemma 11 we obtain the inequality

Σ_{i=0}^{δ_t} Pr(δ^o_{k,t} = i) P^{ω−|CUT^o_{k,t}|+i}_{0,δ_t−i−1} ≤ Σ_{i=0}^{δ_t} Pr(δ^o_{k,t} = i) Pr(Γ^{Π,c}_{k,t} | δ^o_{k,t} = i) ≤ (binom(k, 0) + binom(k, 1) + ... + binom(k, δ_t)) / (|F|−1)^{δ_t},

and Lemma 10 is proved. □
Lemma 11

P^{ω−|CUT^o_{k,t}|+i}_{0,δ_t−i−1} ≤ Pr(Γ^{Π,c}_{k,t} | δ^o_{k,t} = i).
Proof of Lemma 11: We know that |CUT^Π_{k,t}| = C_t − 1 and δ^o_{k,t} = i. Define

CUT^{Π,i}_{k,t} = CUT^Π_{k,t} − CUT^o_{k,t}.

Given δ^o_{k,t} = i we have dim(L(CUT^o_{k,t})) = |CUT^o_{k,t}| − i. Define a linear order among the channels in CUT^{Π,i}_{k,t}. Let

O_0 = ⟨{f_e : e ∈ CUT^o_{k,t}}⟩;

then for e_j ∈ CUT^{Π,i}_{k,t} the linear space O_j is defined as

O_j = ⟨{f_{e_m} : 1 ≤ m ≤ j} ∪ {f_e : e ∈ CUT^o_{k,t}}⟩.

With each e_j ∈ CUT^{Π,i}_{k,t} we associate an indicator function Z_j given by

Z_j = dim(O_j) − dim(O_{j−1}).

If f_{e_j} ∈ O_{j−1} then Z_j = 0 and we declare a failure for the channel e_j. Since coding is random at channel e_j, the indicator function Z_j is a binary random variable. By Lemma 1, f_{e_j} is independently and uniformly distributed in the linear space ⟨{f_d : d ∈ In(tail(e_j))}⟩. Let I_j = ⟨{f_d : d ∈ In(tail(e_j))}⟩; then

Pr(Z_j = 0) = |F|^{dim(I_j ∩ O_{j−1})} / |F|^{dim(I_j)} = 1/|F|^{dim(I_j)−dim(I_j∩O_{j−1})};

therefore Pr(Z_j = 0) is of the form 1/|F|^{h_j}, where h_j = dim(I_j) − dim(I_j ∩ O_{j−1}). Note that h_j is an integer which satisfies

0 ≤ h_j ≤ ω − dim(O_{j−1}).
Then Pr(Z_j = 0) is lower bounded by

1/|F|^{ω−dim(O_{j−1})} ≤ Pr(Z_j = 0).

Remark 3: if dim(I_j) = ω then the above lower bound on Pr(Z_j = 0) is always attained.
Let I_S = ⟨{f_d : d ∈ In(S)}⟩ be the message space. Let the set B_o = {f^o_1, ..., f^o_{dim(O_0)}} be a basis of the linear space O_0. The set B_o can be enlarged by a set of linearly independent vectors B* = {f*_1, ..., f*_{ω−dim(O_0)}} such that B_o ∪ B* is a basis for the linear space I_S.

Let I* = ⟨B*⟩; I* is a linear space with dim(I*) = ω − dim(O_0). Then

(Γ^{Π,c}_{k,t} | δ^o_{k,t} = i) ⟺ I* ⊄ L(CUT^{Π,i}_{k,t}).

Define the condition A as

A ⟺ {dim(I_j) = ω, ∀ e_j ∈ CUT^{Π,i}_{k,t}}.
Remark 3 is the key observation in proving this lemma. It states that the probability of coding failure at each channel e_j ∈ CUT^{Π,i}_{k,t} is minimized when dim(I_j) = ω. Therefore

Pr(I* ⊄ L(CUT^{Π,i}_{k,t}) | A) ≤ Pr(I* ⊄ L(CUT^{Π,i}_{k,t})).

Under the assumption of condition A, and by using a decomposition of global encoding vectors as in Lemma 3, we have

P^{ω−|CUT^o_{k,t}|+i}_{0,δ_t−i−1} = Pr(I* ⊄ L(CUT^{Π,i}_{k,t}) | A)
                         ≤ Pr(I* ⊄ L(CUT^{Π,i}_{k,t}))
                         ≤ Pr(Γ^{Π,c}_{k,t} | δ^o_{k,t} = i),

and Lemma 11 is proved. □
Now we can go back to the proof of Theorem 3. We already showed that

p^{k+1}_e ≤ p^k_e + Σ_{i=0}^{δ_t} Pr(δ^o_{k,t} = i) P^{ω−|CUT^o_{k,t}|+i}_{0,δ_t−i}.
The rest of the proof builds on Lemma 9 and Lemma 10. By Lemma 10 we showed that

Σ_{i=0}^{δ_t} Pr(δ^o_{k,t} = i) P^{ω−|CUT^o_{k,t}|+i}_{0,δ_t−i−1} ≤ (binom(k, 0) + binom(k, 1) + ... + binom(k, δ_t)) / (|F|−1)^{δ_t}.

At the same time Lemma 9 indicates that

P^{ω−|CUT^o_{k,t}|+i}_{0,δ_t−i} ≤ P^{ω−|CUT^o_{k,t}|+i}_{0,δ_t−i−1} / (|F|−1).
Combining these results we have

Σ_{i=0}^{δ_t} Pr(δ^o_{k,t} = i) P^{ω−|CUT^o_{k,t}|+i}_{0,δ_t−i}
   ≤ Σ_{i=0}^{δ_t} Pr(δ^o_{k,t} = i) P^{ω−|CUT^o_{k,t}|+i}_{0,δ_t−i−1} / (|F|−1)
   ≤ (1/(|F|−1)) · (binom(k, 0) + binom(k, 1) + ... + binom(k, δ_t)) / (|F|−1)^{δ_t}
   ≤ (binom(k, 0) + binom(k, 1) + ... + binom(k, δ_t)) / (|F|−1)^{δ_t+1}.
By the induction hypothesis, p^k_e is bounded by

p^k_e ≤ (binom(k, 0) + binom(k, 1) + ... + binom(k, δ_t+1)) / (|F|−1)^{δ_t+1}.
Combining these two inequalities we have

p^{k+1}_e ≤ p^k_e + Σ_{i=0}^{δ_t} Pr(δ^o_{k,t} = i) P^{ω−|CUT^o_{k,t}|+i}_{0,δ_t−i}
        ≤ (binom(k, 0) + ... + binom(k, δ_t+1)) / (|F|−1)^{δ_t+1} + (binom(k, 0) + ... + binom(k, δ_t)) / (|F|−1)^{δ_t+1}
        = ([binom(k, 0) + 0] + [binom(k, 1) + binom(k, 0)] + [binom(k, 2) + binom(k, 1)] + ... + [binom(k, δ_t+1) + binom(k, δ_t)]) / (|F|−1)^{δ_t+1}
        = (binom(k+1, 0) + binom(k+1, 1) + binom(k+1, 2) + ... + binom(k+1, δ_t+1)) / (|F|−1)^{δ_t+1},

and Theorem 3 is proved. □
Corollary 2 For single source multicast over an acyclic network G = {V, E}, let the minimum cut capacity for sink node t ∈ T be C_t, let the information rate be ω symbols per unit time, and let the redundancy for sink node t be δ_t = C_t − ω. If the local coding coefficients k_{d,e} are independent, uniformly distributed random variables in the base field F, the probability that the messages can be decoded correctly at sink node t ∈ T is lower bounded by

∏_{k=0}^{|J|} (1 − Σ_{i=0}^{min{k,δ_t}} binom(k, i) / (|F|−1)^{δ_t+1}).
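The product form of the corollary is equally easy to compute (a small sketch; comb is from the standard library):

    from math import comb

    def success_lower_bound(J, delta_t, q):
        # Product-form lower bound of Corollary 2 on the decoding-success probability.
        prob = 1.0
        for k in range(J + 1):
            prob *= 1.0 - (sum(comb(k, i) for i in range(min(k, delta_t) + 1))
                           / (q - 1) ** (delta_t + 1))
        return prob

    print(success_lower_bound(J=10, delta_t=2, q=16))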
3.4 Comparison of the Bound with the Reported Bounds in the Literature

3.4.1 Comparison of the bounds over a network with |J| = 0
Example 4 In this example we compare the bound derived in this chapter with the bound reported in [22]. We consider a very simple network consisting of a source node S and a sink node t such that all the outgoing edges of the source node are connected directly to the sink node t. Suppose there are K = |E| = C_t channels between S and t. The exact failure probability P^t_e for this network is given in Lemma 5 and is bounded by Lemma 6 as

1/((|F|−1)|F|^{δ_t}) ≥ P^t_e ≥ 1/|F|^{δ_t+1}.
Therefore the bound specifically derived for networks with |J| = 0 coincides with our bound. However, when the field size |F| is large, the bound of [22] with L = 1 and C_t = K gives

P^t_e ≤ 1 − Σ_{x=0}^{δ_t} binom(K, x) (1 − 1/|F|)^{K−x} (1/|F|)^x
     = Σ_{x=δ_t+1}^{K} binom(K, x) (1 − 1/|F|)^{K−x} (1/|F|)^x
     ≈ binom(K, δ_t+1) (1 − 1/|F|)^{ω−1} (1/|F|)^{δ_t+1}
     ≈ binom(K, δ_t+1) (1/|F|)^{δ_t+1},

which is larger than the bound we have derived.
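The gap is easy to see numerically. The sketch below evaluates both expressions for this |J| = 0 example; the [22] expression used here is the finite binomial tail, before the approximations above.

    from math import comb

    def ho_bound(K, delta_t, q):
        # Bound of [22] for the single-hop example: tail of a Binomial(K, 1/q).
        return sum(comb(K, x) * (1 - 1 / q) ** (K - x) * (1 / q) ** x
                   for x in range(delta_t + 1, K + 1))

    def our_bound(delta_t, q):
        # Lemma 6 upper bound, to which Theorem 3 reduces when |J| = 0.
        return 1.0 / ((q - 1) * q ** delta_t)

    K, delta_t = 8, 2
    for q in (4, 16, 256):
        print(q, our_bound(delta_t, q), ho_bound(K, delta_t, q))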
3.4.2 Comparison of the bounds over Block Codes

The advantage of the new bounds is maximized for the so-called block codes we examine in the next example. For a block code, the number of internal nodes is fixed while the number of channels goes to infinity as the block length goes to infinity.
For a block code of block length n, the source messages are ω random vectors X = (X_i : i = 1, ..., ω), where X_i ∈ F^n for all i. They are transmitted to the source node S through ω imaginary channels in In(s). At each node i ∈ V − T, there is a local kernel

K_i = {k_{de} : d ∈ In(i), e ∈ Out(i)},   (3.5)

where the k_{de} are n × n F-valued matrices. At the source node s, we assume that the message transmitted over the i-th imaginary channel d_i is the source message U_{d_i} = X_i. The message transmitted over channel e = (i, j), denoted by U_e, which is an F-valued column vector of length n, is calculated by the formula

U_e = Σ_{d∈In(i)} k_{de} U_d.   (3.6)
Block codes can alternatively be defined by using an extended network G^{(n)} = {V, E^{(n)}}, in which E^{(n)} = ∪_{i=1}^{n} E_i and E_i : i = 1, ..., n are n identical copies of E. In other words, the extended network is obtained from the original network by enlarging the channel set, replacing each channel by n identical copies. It is easily seen that a linear block code is an ordinary linear code for the extended network. For a block code of block length n, let ω^{(n)} be the number of symbols in the source message. Then the minimum cut capacity for the extended network is nC_t and the redundancy of the block code is nC_t − ω^{(n)}. Let ω = ω^{(n)}/n be the source message data rate of the block code, which may not be an integer as in the scalar case. Let δ_t = C_t − ω be the redundancy rate of the code.
We examine the new bound and the bound from [22] for block codes with block length n. In the upper bound (1.4) from [22], the main term is

binom(δ_t+ω, δ_t+1) (1 − 1/|F|)^{L(ω−1)} (1 − (1 − 1/|F|)^L)^{δ_t+1}.
When this is applied to block codes with a fixed redundancy rate δ_t, the bound becomes

binom(n(δ_t+ω), nδ_t+1) (1 − 1/|F|)^{L(nω−1)} (1 − (1 − 1/|F|)^L)^{nδ_t+1}
   ≅ binom(n(δ_t+ω), nδ_t+1) e^{−nLω/|F|} (L/|F|)^{nδ_t+1}
   = binom(n(δ_t+ω), nδ_t+1) e^{−nLω/|F|} L^{nδ_t+1} / |F|^{nδ_t+1}
   ≅ binom(nLC_t, nδ_t+1) e^{−nLω/|F|} (1/|F|)^{nδ_t+1}.
The new bound (Theorem 3) gives

(binom(|J|, 0) + binom(|J|, 1) + ... + binom(|J|, nδ_t+1)) / (|F|−1)^{nδ_t+1}.
Since binom(nLC_t, nδ_t+1) e^{−nLω/|F|} grows exponentially with n, while binom(|J|, 0) + binom(|J|, 1) + ... + binom(|J|, nδ_t+1) ≤ 2^{|J|} is a fixed constant, our bound decays exponentially faster than theirs over a large field F as the block length n goes to infinity.
Chapter 4

Limiting Behavior of the Failure Probability as the Field Size Goes to Infinity
To get a deeper understanding of the tightness of the new bound, we study the limiting behavior of the failure probability as the field size goes to infinity. This allows us to ignore some complicated minor terms during the derivation. For a single source multicast network coding problem N = {G = {V, E}, s, T, ω} with redundancy δ_t = C_t − ω, define

λ_t(N) = limsup_{|F|→∞} |F|^{δ_t+1} P^{(t)}_e,

which characterizes the limiting behavior of the failure probability as the field size goes to infinity. Let N(m, l) be the set of all single source multicast problems N with a fixed number of internal nodes |J| = m and a fixed redundancy δ_t = l. Define

λ^+_{m,l} = max_{N∈N(m,l)} λ_t(N),

which characterizes the worst-case limiting behavior of the failure probability, and

λ^-_{m,l} = min_{N∈N(m,l)} λ_t(N),

which characterizes the best-case limiting behavior of the failure probability.
4.1 Characterization of λ^+_{m,l}, the Worst-Case Limiting Behavior of Random Linear Network Codes

Theorem 4 For single source multicast random linear network coding, we have

λ^+_{m,l} = Σ_{i=0}^{min{l+1,m}} binom(m, i).   (4.1)
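Evaluating (4.1) is immediate; for intuition, the short sketch below tabulates λ^+_{m,l} and shows the saturation at 2^m once l + 1 ≥ m.

    from math import comb

    def lam_plus(m, l):
        # Worst-case limit from Theorem 4: sum of binom(m, i), i = 0..min(l+1, m).
        return sum(comb(m, i) for i in range(min(l + 1, m) + 1))

    for m in (3, 6):
        print(m, [lam_plus(m, l) for l in range(8)])    # each row saturates at 2**m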
Proof of Theorem 4: The upper bound

Σ_{i=0}^{min{l+1,m}} binom(m, i) ≥ λ^+_{m,l}   (4.2)

is a straightforward consequence of Theorem 3. To prove the converse,

λ^+_{m,l} ≥ Σ_{i=0}^{min{l+1,m}} binom(m, i),

we analyze the following single source multicast network.
Network that achieves λ^+_{m,l}.

For given |J| = m and δ_t = l, the network is constructed as follows. The vertex set of the network is V = {s, i_1, ..., i_m, t}. A channel in the edge set E is either of the form e = (s, i_j) or of the form e = (i_j, t) for an index j : 1 ≤ j ≤ m. That is, E = ∪_{j=1}^{m} (In(i_j) ∪ Out(i_j)). We assume that |In(i_j)| = |Out(i_j)| = b > l/m for all j. That is, ω = mb − l > 0, |E| = 2mb, |J| = m, C_t = mb and δ_t = l. We have In(t) = ∪_{j=1}^{m} Out(i_j).
For each j : 1 ≤ j ≤ m, consider the b × b matrix

K_{i_j} = (k_{de} : d ∈ In(i_j), e ∈ Out(i_j)).

Since the matrix K_{i_j} is random, Rank(K_{i_j}) is also random. The rank reduction at node i_j is defined as

R_j = b − Rank(K_{i_j}).
It is obvious that R_j ≤ b. We have

Lemma 12 For the single source multicast network defined above and the random network code with independently, uniformly distributed local coding coefficients, we have:

1. Pr(R_j = r_j) = |F|^{−r_j²} (1 + O(1/|F|)).   (4.3)

2. When r > |J| = m,

Pr(Σ_{j=1}^{m} R_j = r) = o(|F|^{−r}).   (4.4)

3. And when r ≤ |J| = m,

Pr(Σ_{j=1}^{m} R_j = r) = binom(|J|, r) |F|^{−r} (1 + O(|F|^{−1})).   (4.5)
Proof of Lemma 12:

1. The estimate (4.3) is a well-known result in random matrix theory; it also follows from Lemma 8.

2. The estimate (4.3) shows that whenever there is a j such that R_j = r_j > 1, Pr(R_j = r_j) = o(|F|^{−r_j}). When r > m, there exists at least one j such that R_j > 1. This, together with the observation that Pr(R_j = 1) = |F|^{−1}(1 + O(|F|^{−1})), implies (4.4).

3. By (4.3), for a realization {r_j : 1 ≤ j ≤ m} to satisfy Pr(R_j = r_j : 1 ≤ j ≤ m) = O(|F|^{−r}), where r = Σ_{j=1}^{m} r_j, we must have r_j ∈ {0, 1} for all j. The total number of such realizations is binom(|J|, r). Each such realization has probability |F|^{−r}(1 + O(|F|^{−1})). The estimate (4.5) is proved. □
Next, we study the conditional probability

Pr(Rank(F_t) < ω | Σ_{j=1}^{m} R_j = r).

This conditional probability is clearly 1 when r ≥ δ_t + 1. Otherwise, we will need a sequence of auxiliary results.
First, we study the joint distribution of the output spaces L(Out(i_j)) for j = 1, ..., m. Under the condition that the rank reductions are R_j = r_j : 1 ≤ j ≤ m, the output space at node i_j, L(Out(i_j)) = ⟨{f_e : e ∈ Out(i_j)}⟩, has dimension at most b − r_j for j : 1 ≤ j ≤ m. Under the condition that the output space at a node i_j is of dimension b − r'_j ≥ 0, where r'_j ≥ r_j, this output space has a uniform conditional distribution over the set of all subspaces of dimension b − r'_j. This is because

• the vectors f_e : e ∈ In(i_j) are independently and uniformly distributed in the full space;
• this implies that, given the dimension l of the input space, the input space has a uniform conditional distribution in the set of all l-dimensional subspaces of the full space;

• under the conditions that the output space has a given dimension d and the input space is a given space L, the output space has a uniform conditional distribution in the set of all d-dimensional subspaces of the input space L;

• this implies that its distribution in the collection of d-dimensional subspaces of the full space is also uniform.

We know that the output spaces are subspaces of their respective input spaces L(In(i_j)) = ⟨{f_e : e ∈ In(i_j)}⟩ : 1 ≤ j ≤ m. Given their respective dimensions, these output spaces are independent, because the input spaces of the nodes are independent and the matrices K_{i_j} are independently selected.

The next result concerns the counting of subspaces of a given linear space, which is used to estimate probabilities related to random subspaces.
Lemma 13 The number of subspaces of dimension k of an n-dimensional linear space L over the field F that contain a fixed l-dimensional subspace L_1 of L, where k ≥ l, is

∏_{i=0}^{k−l−1} (|F|^n − |F|^{i+l}) / (|F|^k − |F|^{i+l}).   (4.6)
Proof of Lemma 13: Let {a_1, ..., a_l} be a basis of L_1. We expand it to a linearly independent set of k vectors {a_1, ..., a_k}. When selecting a_{l+1}, there are |F|^n − |F|^l available vectors to choose from. When selecting a_{l+2}, there are |F|^n − |F|^{l+1} available vectors to choose from. Therefore, the total number of ways to form the linearly independent set of vectors {a_1, ..., a_k} is

∏_{i=0}^{k−l−1} (|F|^n − |F|^{i+l}).

For a fixed subspace L_2 of dimension k that contains L_1, we count the number of sets of linearly independent vectors {a_1, ..., a_k} that can serve as a basis of L_2. By the same method as above, we can show that this number is

∏_{i=0}^{k−l−1} (|F|^k − |F|^{i+l}).

Therefore, the number of subspaces of dimension k that contain L_1 is

∏_{i=0}^{k−l−1} (|F|^n − |F|^{i+l}) / (|F|^k − |F|^{i+l}).

Lemma 13 is proved. □
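Formula (4.6) can be verified by brute force for tiny parameters. The sketch below (our own check, restricted to the l = 0, q = 2 case so that subspaces can be enumerated directly) counts k-dimensional subspaces of F_2^n two ways.

    from itertools import combinations, product

    def formula(n, k, l, q):
        # Count from (4.6): k-dim subspaces of an n-dim space containing a fixed l-dim one.
        num = den = 1
        for i in range(k - l):
            num *= q ** n - q ** (i + l)
            den *= q ** k - q ** (i + l)
        return num // den

    def brute(n, k, q=2):
        # Direct count for l = 0 over F_2: enumerate spans of k-tuples, deduplicate.
        vecs = list(product(range(q), repeat=n))[1:]          # nonzero vectors
        spans = set()
        for basis in combinations(vecs, k):
            span = {tuple([0] * n)}
            for v in basis:                                    # closure under addition (q = 2)
                span |= {tuple((a + b) % q for a, b in zip(v, w)) for w in span}
            if len(span) == q ** k:                            # keep only independent tuples
                spans.add(frozenset(span))
        return len(spans)

    for n, k in [(3, 1), (3, 2), (4, 2)]:
        assert formula(n, k, 0, 2) == brute(n, k)
    print("formula (4.6) matches brute-force subspace counts for l = 0, q = 2")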
Equation (4.6) implies the following lemma in an obvious way.

Lemma 14 The probability that a uniformly distributed random k-dimensional subspace of an n-dimensional linear space is a subspace of a fixed l-dimensional subspace, where l ≥ k, is

∏_{i=0}^{k−1} (|F|^l − |F|^i) / (|F|^n − |F|^i) = |F|^{−k(n−l)} (1 + O(1/|F|)).   (4.7)
We also need the following lemma.

Lemma 15 Let A_1, ..., A_M be M independent random subspaces, of respective dimensions dim(A_j), of an n-dimensional linear space over the field F. Suppose that these random subspaces are uniformly distributed in the collections of subspaces of their respective dimensions of the full space. Then, under the condition Σ_j dim(A_j) ≥ n,

Pr(dim(⟨∪_{j=1}^{M} A_j⟩) < n) ≥ |F|^{n−1−Σ_{j=1}^{M} dim(A_j)} (1 + O(|F|^{−1})).
Proof of Lemma 15: Let {B_j : j ∈ J_{n−1}} be the collection of all (n−1)-dimensional subspaces, where

|J_{n−1}| = (|F|^n − 1)/(|F| − 1).

Let {C_j : j ∈ J_{n−2}} be the collection of all (n−2)-dimensional subspaces, where

|J_{n−2}| = (|F|^n − 1)(|F|^{n−1} − 1) / ((|F|^2 − 1)(|F| − 1)).
Let G_j be the event that A_k ⊆ B_j for all k, and let H_j be the event that A_k ⊆ C_j for all k. We have

Pr(dim(⟨∪_{j=1}^{M} A_j⟩) < n) = Pr(∪_{j∈J_{n−1}} G_j)
   ≥ Σ_{j∈J_{n−1}} Pr(G_j) − Σ_{j∈J_{n−2}} Pr(H_j)
   =^{(*)} |J_{n−1}| |F|^{−Σ_{l=1}^{M} dim(A_l)} (1 + O(|F|^{−1})) − |J_{n−2}| |F|^{−2Σ_{l=1}^{M} dim(A_l)} (1 + O(|F|^{−1}))
   =^{(**)} |F|^{n−1−Σ_{j=1}^{M} dim(A_j)} (1 + O(|F|^{−1})) − |F|^{2n−4−2Σ_{j=1}^{M} dim(A_j)} (1 + O(|F|^{−1}))
   = |F|^{n−1−Σ_{j=1}^{M} dim(A_j)} (1 + O(|F|^{−1})).

In step (*), (4.7) is employed, and in step (**) the cardinality results for |J_{n−1}| and |J_{n−2}| are invoked. □
These lemmas prepare us to study the conditional probability

Pr(Rank(F_t) < ω | E),

where E is the event that R_j = r_j : 1 ≤ j ≤ m. In the following, we assume that the event E is given.
We have, writing L_j = L(Out(i_j)),

Pr(Rank(F_t) < ω | E)
   = Σ_{{r'_j} : r'_j ≥ r_j, ∀j} Pr(dim(L_j) = b − r'_j : 1 ≤ j ≤ m | E) × Pr(Rank(F_t) < ω | dim(L_j) = b − r'_j : 1 ≤ j ≤ m)
   =^{(*)} Σ_{{r'_j} : r'_j ≥ r_j, ∀j} Pr(dim(L_j) = b − r'_j : 1 ≤ j ≤ m | E) × |F|^{Σ_{j=1}^{m} r'_j − δ_t − 1} (1 + O(|F|^{−1}))
   ≥ |F|^{Σ_{j=1}^{m} r_j − δ_t − 1} (1 + O(|F|^{−1}))
   = |F|^{r−δ_t−1} (1 + O(|F|^{−1})).   (4.8)

In this derivation, step (*) is valid because of Lemma 15.
The inequality (4.8) and Lemma 12 imply the following lower bound for the failure probability:

Pr(Rank(F_t) < ω) ≥ Σ_{r=0}^{min{m,l+1}} Pr(Σ_{j=1}^{m} R_j = r) × Pr(Rank(F_t) < ω | Σ_{j=1}^{m} R_j = r)
   = Σ_{r=0}^{min{m,l+1}} binom(m, r) |F|^{−r} |F|^{r−l−1} (1 + O(|F|^{−1}))
   = Σ_{r=0}^{min{m,l+1}} binom(m, r) |F|^{−l−1} (1 + O(1/|F|)).

This shows that the network achieves the upper bound in (4.2). Therefore

λ^+_{m,l} = Σ_{r=0}^{min{m,l+1}} binom(m, r),

and Theorem 4 is proved. □
4.2 Characterization of λ^-_{m,l}, the Best-Case Limiting Behavior of Random Linear Network Codes

Theorem 5 For single source multicast random linear network coding, we have

λ^-_{m,l} = 1.   (4.9)
Proof of Theorem 5: First we prove the lower bound

λ^-_{m,l} ≥ 1.   (4.10)

Define the minimum cut of a network as CUT_min; then

P_e ≥ Pr(dim(L(CUT_min)) < ω),

and a slight modification of the proof of Lemma 11 yields that

Pr(dim(L(CUT_min)) < ω) ≥ P^ω_{0,δ_t}.

Therefore we have λ^-_{m,l} ≥ 1.
To prove the converse,

1 ≥ λ^-_{m,l},

we analyze the following single source multicast network.

Network that achieves λ^-_{m,l}.

For given |J| = m and δ_t = l, the network is constructed as follows. The vertex set of the network is V = {s = i_0, i_1, ..., i_m, i_{m+1} = t}. A channel in the edge set E is of the form e = (i_j, i_{j+1}) for an index j : 0 ≤ j ≤ m. That is, E = ∪_{j=1}^{m+1} In(i_j). Let the cardinalities of In(i_j) be |In(i_j)| = ω + δ_t + 1 for j : 1 ≤ j ≤ m, and |In(t)| = ω + δ_t. The failure probability of a random linear network code on this network is upper bounded by

P_e ≤ m/(|F|−1)^{δ_t+2} + 1/(|F|−1)^{δ_t+1}.

This shows that the network achieves the lower bound in (4.10). Therefore

λ^-_{m,l} = 1,

and Theorem 5 is proved. □
Chapter 5

Random Network Error Correction Codes

5.1 Network Error Correction Theory

5.1.1 Introduction
When the network is noisy, besides the basic formulation of the network coding problem we need some further concepts to study the error correction capability of random linear network error correction codes. If there is an error in channel e, the output of the channel is Ũ_e = U_e + Z_e, where Z_e ∈ F is the channel error in channel e. The error Z_e can be treated as a message, called the error message. For each channel e, we introduce an imaginary channel e' which is connected to the tail of e to provide the error message Z_e. A linear network code for the original network can be amended to a code for the network with these added imaginary channels by letting k_{e'e} = 1 and k_{e'd} = 0 for all other channels. By introducing an error message for each channel e ∈ E in the network, the global kernels f̃_e : e ∈ E for this network are of dimension ω + |E| and are the column vectors of Ã(I − F)^{−1}, where

Ã = ( A )
    ( I ),

with I being an |E| × |E| identity matrix corresponding to the local kernel values k_{e'e} = 1 and k_{e'd} = 0 for d ≠ e. We call f̃_e the extended global kernel for the original network. The entries of f̃_e can be indexed by the elements of In(s) ∪ E.
Let Z = {Z_e : e ∈ E} be an |E|-dimensional vector, where Z_e ∈ F for all e ∈ E. This is called the error message vector. An error pattern is a set of channels in the original network in which channel errors occur. For an error pattern ρ, we have Z_e = 0 for e ∉ ρ. Let 1_e be the (ω + |E|)-dimensional indicator vector of {e}, with entries indexed by the set In(s) ∪ E. If there is no error in channel e, the source message part of the packet transmitted over channel e should be U_e = (X, Z)(f̃_e − 1_e) = (X, Z)f̃_e. If there is an error Z_e in the channel, the real message part is

Ũ_e = (X, Z)f̃_e = (X, Z)(f̃_e − 1_e) + Z_e = U_e + Z_e.
5.1.2 Minimum Distance of Network Error Correction Codes

In order to define the minimum distance of network error correction codes we introduce the following concepts.

Definition 1 The matrix

F̃_t = (f̃_e : e ∈ In(t))   (5.1)

is called the decoding matrix at sink t.

Let the row vectors of F̃_t be row_t(d) : d ∈ In(s) ∪ E, each of size |In(t)|.

Definition 2 Define

Δ(t, ρ) = ⟨{row_t(e) : e ∈ ρ}⟩,   (5.2)
Φ(t) = ⟨{row_t(e) : e ∈ In(s)}⟩.   (5.3)

We call Δ(t, ρ) the error space of the error pattern ρ, and Φ(t) the message space.
Definition 3 We say that an error pattern ρ_1 is dominated by another error pattern ρ_2 with respect to a sink t if Δ(t, ρ_1) ⊆ Δ(t, ρ_2) for any linear code. This relation is denoted by ρ_1 ≺_t ρ_2.

We use |ρ| to denote the number of channels in an error pattern ρ.

Definition 4 The rank of an error pattern with respect to a sink t is defined by

rank_t(ρ) = min{|ρ̃| : ρ ≺_t ρ̃}.   (5.4)
Definition 5 A code is called regular if

dim(Φ(t)) = ω.

For regular codes, the minimum distance at sink node t ∈ T is defined by

d^t_min = min{rank_t(ρ) : Φ(t) ∩ Δ(t, ρ) ≠ {0}}.   (5.5)

In [54], it was proved that the error correction capabilities of network error correction codes for several kinds of errors are all characterized in terms of minimum distance. This concept plays exactly the same role as it does in classical coding theory.

Like all other parameters of the random linear network code, the minimum distance of the code at each sink node t ∈ T is a function of the random local kernels. Therefore, we use D^t_min to denote the minimum distance of the random linear network code at sink node t ∈ T, where the capital letter D indicates that it is a random variable, while the lower case letter d is used for deterministic codes. The Singleton bound tells us that D^t_min takes values in {0, ..., δ_t + 1}, where δ_t is the redundancy at sink t. For a code having minimum distance d^t_min at t, we call δ_t + 1 − d^t_min the degradation of the code at t. We are interested in the probability mass function of D^t_min.
5.2 Random Network Error Correction Codes

In this section we analyze the error correction capability of random linear network codes. Due to the random coding, the minimum distance at each sink t ∈ T is a random variable. The generalized Singleton bound for network error correction codes states that the minimum distance D^t_min is bounded above by

D^t_min ≤ δ_t + 1.

When the minimum distance D^t_min reaches its upper bound, the code is called maximum distance separable (MDS). For the cases when the minimum distance is less than δ_t + 1 we introduce the following concept of code degradation.

Definition 15 The code degradation d is defined as the difference between the Singleton bound and the minimum distance D^t_min of the code:

d = δ_t + 1 − D^t_min.

Note that when the network code is maximum distance separable (MDS), the code degradation d is zero.
The aim of this section is to determine the error correction capability of random linear network codes through the probability mass function of the minimum distance D^t_min at each sink t ∈ T. We have the following result.

5.2.1 Error Correction Capability of Random Network Error Correction Codes
Theorem 6 For single source multicast over an acyclic network G = {V, E}, let the minimum cut capacity for sink node t ∈ T be C_t, let the information rate be ω symbols per unit time, and let δ_t = C_t − ω be the redundancy of the code. For a given degradation d ≥ 0, the random network code defined above satisfies

Pr(D^t_min < δ_t + 1 − d) ≤ binom(|E|, δ_t−d) Σ_{i=0}^{min{d+1,|J|}} binom(|J|, i) / (|F|−1)^{d+1}.   (5.6)

The inequality (5.6) implies

Pr(∃ t ∈ T such that D^t_min < δ_t + 1 − d) ≤ Σ_{t∈T} binom(|E|, δ_t−d) Σ_{i=0}^{min{d+1,|J|}} binom(|J|, i) / (|F|−1)^{d+1}.   (5.7)

If this probability is strictly less than 1, the probability that for all sinks t ∈ T the degradation is at most d is strictly positive, which implies the existence of such a code. This proves the following corollary.
5.2.2 Required Field Size for the Existence of Network Error Correction Codes with Code Degradation

Corollary 3 If the field size satisfies the condition

|F| > 1 + (Σ_{t∈T} binom(|E|, δ_t−d) Σ_{i=0}^{min{d+1,|J|}} binom(|J|, i))^{1/(d+1)},

then there exists a code having degradation at most d at all sinks t ∈ T.

In [54], it is proved that if |F| > Σ_{t∈T} binom(|E|, δ_t), then there exists a network MDS code. The corollary says that the field size required for a code with degradation is much smaller than that required for the existence of an MDS code. As an example, the field size required for degradation d = 1 is almost the square root of the field size required for the MDS code, and the field size required for a network code with degradation d = 2 is almost the cubic root of the field size required for the MDS code.
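The field-size comparison is concrete enough to compute. The sketch below evaluates the threshold of Corollary 3 for a toy parameter set (single sink; values chosen only for illustration), next to the MDS threshold of [54].

    from math import comb

    def degradation_threshold(E, J, deltas, d):
        # Corollary 3: |F| must exceed this value for degradation <= d at all sinks.
        total = sum(comb(E, dt - d) * sum(comb(J, i) for i in range(min(d + 1, J) + 1))
                    for dt in deltas)
        return 1 + total ** (1.0 / (d + 1))

    E, J, deltas = 40, 10, [4]
    print("MDS threshold [54]:", sum(comb(E, dt) for dt in deltas))
    for d in (1, 2):
        print("degradation", d, "threshold:", degradation_threshold(E, J, deltas, d))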
Proof of Theorem 6: In order to prove Theorem 6 we need the following lemmas from [54].

Lemma 16 If for any error pattern ρ satisfying rank_t(ρ) = δ, the linear space ⟨{row_t(e) : e ∈ In(s) ∪ ρ}⟩ has dimension ω + δ, then

d^t_min ≥ δ + 1.
In the lemma, the statement that the linear space ⟨{row_t(e) : e ∈ In(s) ∪ ρ}⟩ has dimension ω + δ is equivalent to any of the following statements:

• The matrix (f̃^ρ_e : e ∈ In(t)) has rank ω + rank_t(ρ), where f̃^ρ_e is an (ω + |ρ|)-dimensional column vector obtained from f̃_e by removing all entries f̃_e(d) for d ∉ In(s) ∪ ρ. This is called the global kernel vector for channel e restricted to the error pattern ρ.

• There exist ω + rank_t(ρ) linearly independent global kernel vectors restricted to the error pattern ρ among the channels in In(t).

• If rank_t(ρ) = |ρ| = δ, and the error pattern is known at the sink t, then both the source messages and the error messages from channels in ρ can be decoded at t.

An erasure error for sink t is an error with error pattern known by the decoder at sink t. Lemma 16 implies that if a code has erasure correction capability δ at t, then the minimum distance of the code at t is at least δ + 1.
For each error pattern ρ, there exists an error pattern ρ' such that 1) ρ ≺_t ρ', and 2) rank_t(ρ) = rank_t(ρ') = |ρ'|. If the source messages and the error messages from the imaginary channels for channels in ρ can be decoded at t for any error pattern with rank_t(ρ) ≤ |ρ| ≤ δ when the error pattern is known at the decoder, then d^t_min ≥ δ + 1. This becomes almost the same problem as in the case without errors.

Suppose that the bound derived for the failure probability without channel errors can be applied to this case; then we can derive a bound for the probability in the theorem.
We proceed as follows. We treat these error messages as source messages generated at the source node S and injected into the tails of the erroneous channels. Therefore, the redundancy is no longer δ_t; it becomes d = δ_t − rank_t(ρ), because we add |ρ| = rank_t(ρ) error messages. For fixed d ≥ 0, consider all error patterns with rank_t(ρ) = |ρ| = δ_t − d. For each such error pattern, the probability that ⟨{row_t(e) : e ∈ In(s) ∪ ρ}⟩ has dimension lower than ω + δ_t − d is upper bounded by

Σ_{i=0}^{min{d+1,|J|}} binom(|J|, i) / (|F|−1)^{d+1}.

Then the probability that there exists an error pattern ρ satisfying rank_t(ρ) = |ρ| = δ_t − d for which ⟨{row_t(e) : e ∈ In(s) ∪ ρ}⟩ has dimension lower than ω + δ_t − d is at most

binom(|E|, δ_t−d) Σ_{i=0}^{min{d+1,|J|}} binom(|J|, i) / (|F|−1)^{d+1}.
This implies that

Pr(D^t_min < δ_t + 1 − d) ≤ binom(|E|, δ_t−d) Σ_{i=0}^{min{d+1,|J|}} binom(|J|, i) / (|F|−1)^{d+1},

and the theorem follows.

We now prove that Theorem 3 can indeed be applied in this case. We introduce the following lemma from [54].
Lemma 17 For any error pattern ρ, there exist at least C_t channel-disjoint paths from either s or {tail(e) : e ∈ ρ} to t having the properties that 1) there are exactly rank_t(ρ) paths from {tail(e) : e ∈ ρ} to t, and 2) each of these rank_t(ρ) paths from {tail(e) : e ∈ ρ} to t starts with an erroneous channel in ρ.
We consider only error patterns satisfying rank_t(ρ) = |ρ|, as discussed above. The set of first channels of the paths that start with channels in ρ is exactly ρ. Define an imaginary node i_e for each e ∈ ρ. Use two channels e_1 = (i, i_e) and e_2 = (i_e, j) to replace channel e, and define a channel e' = (s, i_e) for each e ∈ ρ. The channels e' : e ∈ ρ provide the error messages. Then this is a single source multicast problem of transmitting ω + |ρ| message symbols from s to t. For any code with local kernels k_{de}, amend the code by letting k_{de_1} = k_{de}, k_{e_1e_2} = 1 and k_{e'e_2} = 1. Then the extended global kernel vector f̃^ρ_e is exactly the global kernel vector for this new network.
To apply Theorem 3 to this case, the things we need to consider include:

• The encoding at i_e is no longer random; it is deterministic.

• The channels e' : e ∈ ρ transmit error messages to the nodes i_e : e ∈ ρ. The coding for these channels is deterministic, too.

• In Theorem 3, the number of intermediate nodes |J| occurs in the bound. If Theorem 3 can be applied, we must check whether this number is still the same.

In Theorem 3, we assumed that the coding at all nodes is random. In this case, the coding at some nodes is deterministic.
Figure 5.1: (a) e represents the erroneous channel; (b) the erroneous channel is replaced by the imaginary node i_e and the imaginary channels e', e_1 and e_2.
We take the cuts for the paths in Lemma 17, with the erroneous channels in ρ replaced by the channels e' = (S, i_e). The first cut CUT_0 includes all channels (S, i_e), whose global kernel vectors are the projection vectors of the channels in ρ. Therefore, as long as there exist ω linearly independent global kernel vectors for the other channels in CUT_0, there exist ω + |ρ| linearly independent global kernel vectors for the channels in CUT_0. This is because the values of the global kernel vectors f̃^ρ_e : e ∈ CUT_0 − {(S, i_e) : e ∈ ρ} at positions d ∈ ρ are all zero. Then at S, the problem is the same as in Theorem 3. This takes care of our second concern above.
At node v_{k+1} = i_e, given Γ_k, in going from CUT_k to CUT_{k+1} the only thing we do is to replace e' = (S, i_e) by e_2 = (i_e, j) (supposing that e = (i, j)). The global kernel vector for e' is the only vector among all global kernel vectors for channels in CUT_k which has a non-zero entry at channel e. Since k_{e'e_2} = 1, we have f̃_{e_2}(e) ≠ 0. From the discussion above, this is the only global kernel vector with a non-zero entry at e among all such vectors for channels in CUT_{k+1}. It is apparent that Γ_k implies Γ_{k+1} if v_{k+1} = i_e. This implies that Theorem 3 can be applied to this case, and that the nodes i_e : e ∈ ρ should not be counted in the set of intermediate nodes J in which we do random coding. The theorem is proved. □
Chapter 6

Conclusion and Future Work

Although the random linear network coding problem was originally studied with algebraic tools and methods, our results in this thesis reveal that the core of the random linear network coding problem is unmistakably combinatorial in nature. This should guide ongoing research on this subject toward a better understanding of the mathematical properties of random linear network codes.
6.1 Future Work

Our current results suggest some new open problems. The bound in Theorem 3 applies to all networks regardless of their topological structure. One open problem is to generalize the bound in Theorem 3 to networks with particular topological structures. As shown before, when the network is acyclic the vertex set V is a partially ordered set. Certain topological parameters enforce a structure on the network which affects the performance of random linear network codes. Our preliminary findings reveal that the anti-chains of the vertex set have an enormous impact on the performance of random linear network codes.
Definition 16 Let P be a partially ordered set. We say two elements a and b of a partially ordered set are comparable if a ≤ b or b ≤ a.

A chain is a subset C of a partially ordered set in which each pair of elements is comparable (i.e. C is totally ordered).

An anti-chain is a subset of a partially ordered set in which any two elements are incomparable.

Definition 17 A maximal anti-chain is an anti-chain that is not a proper subset of any other anti-chain. A maximum anti-chain is an anti-chain with cardinality at least as large as that of every other anti-chain. The width of a partially ordered set is the cardinality of a maximum anti-chain.

Definition 18 Based on the above definitions, we define the width of the network as the cardinality of a maximum anti-chain in the partially ordered set of vertices V − T.
6.2 Conjecture: Anti-Chain Bound

Based on our preliminary results we give the following conjecture.

Conjecture 1 (Anti-Chain Bound) For single source multicast over an acyclic network G = (V, E), let the information rate be ω symbols per unit time, and let the redundancy for sink node t be δ_t = C_t − ω. If the local coding coefficients k_{d,e} are independent, uniformly distributed random variables in the base field F, the failure probability P^t_e that the messages cannot be decoded correctly at sink node t ∈ T is upper bounded by

P^t_e ≤ (binom(|J|, 0) + binom(|J|, 1) + ... + binom(|J|, m)) / (|F|−1)^{δ_t+1},

where m = min{width, δ_t + 1}.
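The width in Definition 18 is computable for small networks. The sketch below (our own helper; brute force is adequate for toy instances, while larger networks would call for Dilworth-type matching methods) builds the reachability order of an acyclic network and finds a maximum anti-chain among the vertices in V − T.

    from itertools import combinations

    def network_width(vertices, edges, sinks):
        # Width of the poset on V - T induced by reachability in an acyclic graph.
        adj = {v: [] for v in vertices}
        for u, v in edges:
            adj[u].append(v)

        def reachable(u):
            seen, stack = set(), [u]
            while stack:
                for y in adj[stack.pop()]:
                    if y not in seen:
                        seen.add(y)
                        stack.append(y)
            return seen

        reach = {v: reachable(v) for v in vertices}
        nodes = [v for v in vertices if v not in sinks]
        for size in range(len(nodes), 0, -1):      # brute force over candidate anti-chains
            for cand in combinations(nodes, size):
                if all(b not in reach[a] and a not in reach[b]
                       for a, b in combinations(cand, 2)):
                    return size
        return 0

    V = ['s', 1, 2, 3, 4, 't1', 't2']
    E = [('s', 1), ('s', 2), (1, 3), (2, 3), (3, 4),
         (1, 't1'), (2, 't2'), (4, 't1'), (4, 't2')]
    print(network_width(V, E, {'t1', 't2'}))   # 2, e.g. {1, 2} is a maximum anti-chain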
Appendix A

Proof of Theorem 2

In order to prove Theorem 2 we need a lemma which states that the failure probability at a particular node decays exponentially with the redundancy allocated to it.
Lemma 18 Assume there is no failure at CUT_k, and let the outside channel redundancy be r^o_k = |CUT^o_k| − Rank((f_e : e ∈ CUT^o_k)) = l. Then the probability of failure at CUT_{k+1} due to random coding at the node v_{k+1} satisfies

Pr(Γ^c_{k+1} | Γ_k, r^o_k = l) ≤ 1/(|F|−1)^{δ_t+1−l},

where δ_t − l = r^i_k.
Proof of Lemma 18: If Rank((f_e : e ∈ CUT^o_k)) = ω, then Pr(Γ^c_{k+1} | Γ_k, r^o_k = l) = 0 and the bound is valid. Therefore, we consider only the case Rank((f_e : e ∈ CUT^o_k)) < ω. Define the linear spaces
• O = ⟨{f_e : e ∈ CUT^o_k}⟩,

• I = ⟨{f_e : e ∈ In(v_{k+1})}⟩.

Under the condition Γ_k, (f_e : e ∈ CUT_k) has rank ω; therefore {f_e : e ∈ CUT_k} spans the whole ω-dimensional space. This implies that the dimension of I is at least ω − Rank((f_e : e ∈ CUT^o_k)). By Lemma 1, the vectors in {f_e : e ∈ CUT*_k} are independently and identically distributed in the linear space I with uniform distribution, due to the assumption that the coding coefficients k_{de} are independently and identically distributed in F with uniform distribution.
Let h = dim(I) − dim(I ∩ O) = ω − Rank((f_e : e ∈ CUT^o_k)) > 0, where dim(·) stands for the dimension of a linear space. The number of channels in CUT*_k is g = C_t − |CUT^o_k| = δ_t + h − l; label these channels as CUT*_k = {d_1, ..., d_g}. Define the linear space O_i as

O_i = ⟨{f_{d_j} : 1 ≤ j ≤ i} ∪ {f_e : e ∈ CUT^o_k}⟩.

We consider the sequence Z_i = dim(O_i) − dim(O_{i−1}), where O_0 = O. Z_i takes values either 0 or 1. The event Γ^c_{k+1} corresponds to the set of sequences Z = (Z_1, ..., Z_g) of weight at most h − 1. Therefore

Pr(Γ^c_{k+1} | Γ_k, r^o_k = l) = Σ_{Z∈{0,1}^g : wt(Z)≤h−1} Pr(Z),   (A.1)
where wt(·) denotes the weight of a binary sequence. We have

Pr(Z) = ∏_{i=1}^{g} Pr(Z_i | Z_1, ..., Z_{i−1}),   (A.2)

where

Pr(Z_i = 0 | Z_1, ..., Z_{i−1}) = 1/|F|^{h−wt(Z_1,...,Z_{i−1})}   (A.3)

and

Pr(Z_i = 1 | Z_1, ..., Z_{i−1}) = 1 − 1/|F|^{h−wt(Z_1,...,Z_{i−1})} ≤ 1.   (A.4)
Equation (A.3) is proved as follows. The random variable Z_i takes value 0 if and only if f_{d_i} is in O_{i−1}, the linear space spanned by the vectors in {f_{d_j} : 1 ≤ j ≤ i−1} ∪ {f_e : e ∈ CUT^o_k}. We have dim(I) − dim(I ∩ O_{i−1}) = h − wt(Z_1, ..., Z_{i−1}). Since f_{d_i} is uniformly distributed in I, it falls in I ∩ O_{i−1} with probability

|I ∩ O_{i−1}|/|I| = 1/|F|^{h−wt(Z_1,...,Z_{i−1})}.

From this formula, we can see that Pr(Z) depends not only on the number of 1's in Z but also on the locations of the 1's and 0's in the sequence. We use a different method to characterize a binary sequence Z. We consider the location of the i-th 0 in the sequence, which can be characterized by the number of 1's before the i-th 0. Let this be t_Z(i). Obviously t_Z = (t_Z(i) : i = 1, ..., g − wt(Z)) is a non-decreasing sequence with maximum entry value t_Z(i) ≤ wt(Z) and length g − wt(Z). By using this sequence, we proceed as follows:
Pr(Γ^c_{k+1} | Γ_k, r^o_k = l) = Σ_{Z : wt(Z)≤h−1} Pr(Z)
   = Σ_{w=0}^{h−1} Σ_{Z : wt(Z)=w} Pr(Z)
   =^{(*)} Σ_{w=0}^{h−1} Σ_{t_Z ∈ T_{g,w}} Pr(Z)
   ≤^{(**)} Σ_{w=0}^{h−1} Σ_{t_Z ∈ T_{g,w}} ∏_{i=1}^{g−w} 1/|F|^{h−t_Z(i)}
   ≤^{(***)} Σ_{w=0}^{h−1} Σ_{t_Z ∈ T'_{g,w}} ∏_{i=1}^{g−h+1} 1/|F|^{h−t_Z(i)}.   (A.5)
• In step (*), T_{g,w} consists of all t_Z sequences corresponding to Z-sequences of length g and weight w.

• In step (**), we use (A.3) and the upper bound 1 in (A.4).

• In step (***), first we upper bound ∏_{i=1}^{g−w} 1/|F|^{h−t_Z(i)} by ∏_{i=1}^{g−h+1} 1/|F|^{h−t_Z(i)}, because g − h + 1 ≤ g − w for all w and 1/|F|^{h−t_Z(i)} ≤ 1. Secondly, we introduce a set T'_{g,w} consisting of all non-decreasing sequences of length g − h + 1 with entries taking values in {0, 1, ..., w}.
Replace ∪_{w=0}^{h−1} T'_{g,w} by a bigger set T*_{g,h−1} which consists of all sequences t of length g − h + 1 with maximum entry value h − 1, without the non-decreasing monotonicity requirement. That is, t ∈ T*_{g,h−1} satisfies two conditions: 1) the length of t is g − h + 1, i.e. t = (t_1, ..., t_{g−h+1}); 2) for each i : 1 ≤ i ≤ g − h + 1, t_i ∈ {0, ..., h − 1}. Using this set we obtain the upper bound

Pr(Γ^c_{k+1} | Γ_k, r^o_k = l) ≤ Σ_{t∈T*_{g,h−1}} ∏_{i=1}^{g−h+1} 1/|F|^{h−t(i)}
   = (Σ_{t=0}^{h−1} 1/|F|^{h−t})^{g−h+1}
   ≤ 1/(|F|−1)^{g−h+1}
   = 1/(|F|−1)^{δ_t−l+1}.   (A.6)

The lemma is proved. □
Proof of Theorem 2: We prove this theorem by induction on k, the index of CUT_k in a network. Let p^{(k)}_e be the probability that a failure occurs at CUT_k. Recall that S = v_0 < v_1 < ... < v_m < t, where m ≤ |J|, is an upstream-to-downstream order of the nodes. The decoding failure probability at sink t is at most p^{(m)}_e. When k = 0, the network has no intermediate nodes, so that the source node S is the only node employing random coding. For this case Lemma 18 above gives the desired result

p^{(0)}_e ≤ 1/(|F|−1)^{δ_t+1}.
Assume that the result of the theorem is proved for k = 0, ..., k_0 and for all acyclic networks; we now prove it for k = k_0 + 1. We have

p^{(k_0+1)}_e ≤ Pr(Γ^c_{k_0}) + Pr(Γ^c_{k_0+1}, Γ_{k_0})
          = p^{(k_0)}_e + Σ_{l=0}^{δ_t} Pr(r_{k_0} = l, Γ_{k_0}) Pr(Γ^c_{k_0+1} | Γ_{k_0}, r_{k_0} = l).   (A.7)
We distinguish two cases.

• In the first case, Rank(f_e : e ∈ CUT^o_{k_0}) = ω. This implies Pr(Γ^c_{k_0+1} | Γ_{k_0}, r_{k_0} = l) = 0 for all l. In this case, the bound in Theorem 2 is obvious. Therefore, we concentrate on the second case formulated below.

• In the second case, Rank(f_e : e ∈ CUT^o_{k_0}) < ω. In this case, we have the following two observations:

  – The first observation is that, depending on the random code, r^o_{k_0} can take values from 0 to δ_t. This observation is obvious.

  – The second observation is more involved. It is the upper bound stated below. The induction hypothesis, which says that for k ≤ k_0 the upper bound in Theorem 2 is valid for any finite acyclic network, implies

    Pr(r_{k_0} = l, Γ_{k_0}) ≤ binom(l+k_0, k_0) / (|F|−1)^l.
Proof of the second observation: Consider a subset N of CUT_{k_0} which includes all channels in CUT^o_{k_0} and ω + l − 1 − |CUT^o_{k_0}| channels from CUT^i_{k_0}. Remove all channels in CUT^i_{k_0} − N from the network. Remove all nodes from v_{k_0+1} to v_{m+1} = t and all channels affected by these nodes. Introduce a new node t_new. The channels in N were all removed in this step because their heads are in the set of removed nodes v_{k_0+1}, ..., v_{m+1} = t. We add the channels in N back to the network by replacing their heads by t_new. In this way we obtain a new network. This network has minimum cut capacity |N| = ω + l − 1. We use the subscript 'new' to distinguish quantities for this network from the same quantities for the original network; for instance, the failure probability at CUT_{k,new} is denoted by p^{(k)}_{e,new}. By the induction assumption the bound holds for all networks with |J| ≤ k_0; therefore we can see that for this network

Pr(r_{k_0} = l, Γ_{k_0}) ≤ p^{(k_0)}_{e,new} ≤ binom(l+k_0, k_0) / (|F|−1)^l.
Figure A.1: Second observation. (a) The dotted arrows represent the channels in N; (b) the channels in N are merged at the new sink t_new.
Using these two observations, we proceed as follows:

p^{(k_0+1)}_e ≤ p^{(k_0)}_e + Σ_{l=0}^{δ_t} Pr(r_{k_0} = l, Γ_{k_0}) Pr(Γ^c_{k_0+1} | Γ_{k_0}, r_{k_0} = l)
          ≤^{(*)} binom(δ_t+k_0+1, k_0) / (|F|−1)^{δ_t+1} + Σ_{l=0}^{δ_t} (binom(l+k_0, k_0) / (|F|−1)^l) · (1/(|F|−1)^{δ_t−l+1})
          = (Σ_{l=0}^{δ_t+1} binom(l+k_0, k_0)) / (|F|−1)^{δ_t+1}
          =^{(**)} binom(δ_t+k_0+2, k_0+1) / (|F|−1)^{δ_t+1}.   (A.8)

This is the desired result. In step (*), we use
• the induction hypothesis that the result is valid for k = k_0, that is,

p^{(k_0)}_e ≤ binom(δ_t+k_0+1, k_0) / (|F|−1)^{δ_t+1};

• Lemma 18, which gives

Pr(Γ^c_{k_0+1} | Γ_{k_0}, r_{k_0} = l) ≤ 1/(|F|−1)^{δ_t−l+1};

• the second observation proved above, which implies

Pr(r_{k_0} = l, Γ_{k_0}) ≤ binom(l+k_0, k_0) / (|F|−1)^l.
k
X
i=0
µ
n+i
n
¶
=
µ
k+n+1
n+1
¶
: (A.9)
By induction the theorem is proved.2
111
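Identity (A.9) is the hockey-stick identity; a two-line numerical check:

    from math import comb

    assert all(sum(comb(n + i, n) for i in range(k + 1)) == comb(k + n + 1, n + 1)
               for n in range(8) for k in range(8))    # identity (A.9) for small n, k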
A restatement of Theorem 2 can be made in terms of a lower bound on the success probability, as presented in the following corollary.

Corollary 4 For single source multicast over an acyclic network G = {V, E}, let the minimum cut capacity for sink node t ∈ T be C_t, let the information rate be ω symbols per unit time, and let the redundancy for sink node t be δ_t = C_t − ω. If the local coding coefficients k_{d,e} are independent, uniformly distributed random variables in the base field F, the probability that the messages can be decoded correctly at sink node t ∈ T is lower bounded by

∏_{k=0}^{|J|} (1 − binom(δ_t+k, k) / (|F|−1)^{δ_t+1}).

The connection between this corollary and Theorem 2 can be seen by using formula (A.9).
112
Bibliography
[1] R. Ahlswede, N. Cai, S.-Y. R. Li, and R. W. Yeung, "Network information
°ow," IEEE Transactions on Information Theory, vol. 46, no. 7, pp. 1204{
1216, July 2000.
[2] Huseyin Balli, Xijin Yan, Zhen Zhang. "On Randomized Linear Network Code
and Its Error Correction Capability" submitted to IEEE Transactions on In-
formation Theory.
[3] H.Balli,X.Yan,andZ.Zhang,\Errorcorrectioncapabilityofrandomnetwork
error correction codes," 2007 IEEE International Symposium on Information
Theory, Nice, France, Jun. 24-29, 2007.
[4] Huseyin Balli, Zhen Zhang. "A note on the failure probability of Random
Linear Network Codes" Preprint, December 2008
[5] N. Cai and R. W. Yeung, "Network error correction, Part II: Lower bounds,"
Communications in Information and Systems , vol. 6, no. 1, pp. 37 -54, 2006.
[6] Ning Cai and Raymond W. Yeung, "Network Coding and Error Correction",
ITW2002 Bangalore
[7] N. Cai and R. W. Yeung, \Secure network coding," in Proc. IEEE Int. Symp.
Inform. Theory, Lausanne, Switzerland,, June 2002.
[8] Ning Cai and Raymond. W. Yeung, "Secure Network Coding," ISIT 2002.
[9] T. M. Cover and J. A. Thomas, Elements of information theory, New York:
Wiley, 1991.
[10] R. Dougherty, C. Freiling, and K. Zeger, \Insu±ciency of linear coding in
network information °ow," IEEE Trans. Inform. Theory, vol. 51, no. 8, pp.
2745{2759, Aug. 2005.
113
[11] M. E®ros, M. Medard, T. Ho, S. Ray, D. Karger, R. Koetter, "Linear Net-
work Codes: A Uni¯ed Framework for Source Channel, and Network Coding",
DIMACS workshop on network information theory.
[12] P. Elias, A. Feinstein, and C. E. Shannon, \A note on the maximum °ow
through a network," IRE Trans. Inform. Theory, no. 2, pp. 117{119, 1956.
[13] E.Erez and M. Feder, On Codes for Network Multicast, 41st Annual Allerton
Conference on Communication Control and Computing, Oct. 2003
[14] MeirFeder,DanaRon,AmiTavory: BoundsonLinearCodesforNetworkMul-
ticast. Electronic Colloquium on Computational Complexity (ECCC) 10(033):
(2003)
[15] J. L. K. Ford and D. K. Fulkerson, Flows in Networks. Princeton, New Jersy:
Princeton Univ. Press, 1962.
[16] C. Fraguoli, E. Soljanin and A. Sokrollahi " Network Coding as a Color-
ing Problem" IEEE Annual Conference on Information Sciences and Systems
(CISS 2004), Princeton, NJ, USA, March 2004.
[17] C. Fragouli, E. Soljanin, Decentralized Network Coding, 2004 IEEE Informa-
tion Theory Workshop, San Antonio, Oct 25-29, 2004.
[18] C. Fragouli and E. Soljanin, Required Alphabet Size for Linear Network Cod-
ing, IEEE International Symposium on Information Theory (ISIT 2004), June
27th-July 2nd, Chicago.
[19] T. Ho, M. Mdard, R. Koetter, An information theoretic view of network man-
agement, INFOCOM 2003
[20] T. Ho, D. Karger, M. Medard and R. Koetter, "Network Coding from a Net-
work Flow Perspective", ISIT 2003
[21] T. Ho, M. Mdard, J. Shi, M. E®ros and D. R. Karger," On Randomized Net-
work Coding " , 41st Annual Allerton Conference on Communication Control
and Computing, Oct. 2003.
[22] T. Ho, R. Koetter, M. Mdard, M. E®ros, J. Shi, and D. Karger, "A Ran-
dom Linear Network Coding Approach to Multicast", IEEE Transactions on
Information Theory, 52 (10). pp. 4413-4430, October 2006.
[23] Thomas W. Hungerford " Algebra " , Springer edition , Springer , 2000
114
[24] S. Jaggi, M. Langberg, S. Katti, T. Ho, D. Katabi, M. Medard "Resilient
Network Coding in the Presence of Byzantine Adversaries," Infocom 2007.
[25] S.Jaggi,P.Sanders,P.A.Chou,M.E®ros,S.Egner,K.Jain,andL.Tolhuizen,
Polynomial time algorithms for multicast network code construction,IEEE
Transactions on Information Theory. Submitted July 2003.
[26] R. Koetter, M. Mdard, "Beyond Routing: An Algebraic Approach to Network
Coding", INFOCOM, 2002.
[27] R. Koetter and F. Kischischang, "Coding for errors and erasures in random
networkcoding,"2007IEEEInternationalSymposiumonInformationTheory,
Nice, France, Jun. 24-29, 2007.
[28] R. Koetter and M. Medard, "An algebraic approach to network coding,"
IEEE/ACM Transactions on Networking, 11: 782-795, 2003.
[29] S.-Y.R.Li and R.W.Yeung " Network information Flow - multiple sources"
Proc. 2001 IEEE Int.Symp.Information Theory , p.102
[30] S.-Y.R.Li and R.W.Yeung " Network multicast °ow via linear coding" Proc.
2001 IEEE Int.Symp.Operations Research and its applications 1998 , pp.197-
211
[31] S.-Y. R. Li, R. W. Yeung, and N. Cai. "Linear network coding". IEEE Trans-
actions on Information Theory , Februray, 2003.
[32] J.H. van Lint and R.M. Wilson " A Course In Combinatorics " , 2nd edition ,
Cambridge University Press , 2001
[33] R. Matsumoto, "Construction algorithm for network error -correcting codes
attaining the Singleton bound," arXiv:cs.IT/0610121, Oct. 2006.
[34] Joseph J. Rotman " An Introduction to the Theory of Groups " , 4th edition
, Springer, 1999
[35] D. Silva, F. R. Kschischang, R. Koetter, "A Rank Metric Appeoach to Er-
ror Control in Random Network Coding," submitted to IEEE Trans. on Inf.
Theory, 2007.
[36] N. Sloane and F.J. Mac Williams "Error Correcting Codes" North Holland
1998.
115
[37] L. Song and R. W. Yeung "Zero-error network coding for acyclic network",
IEEE Transactions on Information Theory, 49 (12). pp. 3129-3139
[38] L. Song, R. W. Yeung, and N. Cai, Zero-error network coding for acyclic net-
works, to appear in IEEE Trans. on Information Theory
[39] L. Song, R. W. Yeung and N. Cai, A separation theorem for single-source
network coding, to appear in IEEE Trans. on Information Theory
[40] X. Yan, H. Balli and Z. Zhang, \Decoding Netwrok Error Correction Codes
beyond Error Correction Capability," preprint, 2007.
[41] X.YanandZ.ZhangandJ.Yang"ExplicitInnerandOuterBoundsforMulti-
source Multi-sink Network Coding",ISIT 06, Seattle, jul, 2006.
[42] X.YanandJ.YangandZ.Zhang"AnImprovedOuterBoundforMulti-source
Multi-sink Network Coding" First Workshop on Network Coding, Theory, and
Applications (NETCOD), Riva del Garda, Italy, apr, 2005.
[43] S. Yang, R. W. Yeung and Z. Zhang, "Weight Properties of Network Codes,"
European Transactions on Telecommunications, 2008.
[44] S. Yang and R. W. Yeung "Characterizations of Network Error Correc-
tion/DetectionandErasureCorrection", ThirdWorkshoponNetworkCoding,
Theory, and Applications (NETCOD), San Diego, feb, 2007.
[45] R. W. Yeung and N. Cai and S.-Y. R. Li and Z. Zhang "Network Coding
Theory",FoundationsandTrendsinCommunicationsandInformationTheory
2 (4-5), 2005.
[46] R. W. Yeung, S.-Y. R. Li, N. Cai, and Z. Zhang, "Network Coding Theory," Foundations and Trends in Communications and Information Theory, vol. 2, nos. 4 and 5, pp. 241-381, 2005.
[47] R. W. Yeung and Z. Zhang, "Distributed Source Coding for Satellite Communications," IEEE Transactions on Information Theory, vol. 45, pp. 1111-1120, 1999.
[48] R. W. Yeung and N. Cai, "Network error correction, Part I: Basic concepts and upper bounds," Communications in Information and Systems, vol. 6, no. 1, pp. 19-36, 2006.
[49] R. W. Yeung, S.-Y. R. Li, N. Cai, and Z. Zhang, "Network coding theory," Foundations and Trends in Communications and Information Theory, vol. 2, nos. 4 and 5, pp. 241-381, 2005.
[50] R. W. Yeung and Z. Zhang, "Distributed source coding for satellite communications," IEEE Transactions on Information Theory, vol. 45, no. 5, pp. 1111-1120, 1999.
[51] R. W. Yeung, "A First Course in Information Theory," New York: Kluwer/Plenum, 2002.
[52] Z. Zhang, "Linear Network Error Correction Codes," submitted to IEEE Transactions on Information Theory.
[53] Z. Zhang, X. Yan, and H. Balli, "Some Key Problems in Network Error Correction Coding Theory," ITW 2007, July 1-5, 2007, Bergen, Norway.
[54] Z. Zhang, "Linear Network Error Correction Codes in Packet Networks," IEEE Transactions on Information Theory, vol. 54, no. 1, pp. 209-218, Jan. 2008.
Abstract
Network communication has been established on the principle of routing, in which packets received at intermediate nodes are replicated and forwarded to the outgoing channels.