USC Computer Science Technical Reports, no. 854 (2005)
A Study of TCP Fairness in High-Speed
Networks
Junsoo Lee
Dept. of Computer Science
Univ. of Southern California, CA 90089
Email: junsool@cs.usc.edu
Stephan Bohacek
Dept. of Electrical and Computer Engineering
Univ. of Delaware, Newark, DE 19716
Email: bohacek@eecis.udel.edu
João P. Hespanha
Dept. of Electrical and Computer Engineering
Univ. of California Santa Barbara, CA 93106-9560
Email: hespanha@ece.ucsb.edu
Katia Obraczka
Computer Engineering Department
University of California Santa Cruz, CA 95064
Email: katia@cse.ucsc.edu
Abstract— Under the TCP congestion control
regime, heterogeneous flows, i.e., flows with differ-
ent round-trip times (RTTs), that share the same
bottleneck link will not attain equal portions of the
available bandwidth. In fact, according to the TCP
friendly formula [1], the throughput ratio of two
flows is inversely proportional to the ratio of their
RTTs. It has also been shown that TCP’s unfairness
to flows with longer RTTs is accentuated under loss
synchronization. Well-known mechanisms to avoid
synchronization are based on injecting randomness
into the network, e.g., by introducing background
traffic or by using random drop (as opposed to
drop-tail) queuing.
In this paper, we show that, in high-speed networks,
injecting bursty background traffic may actually
lead to synchronization and result in unfairness to
foreground TCP flows with longer RTTs. We observe
that unfairness is especially severe in high-speed
variants of TCP such as Scalable TCP (S-TCP) and
HighSpeed TCP (HSTCP). We propose three dif-
ferent metrics to characterize traffic burstiness and
show that these metrics are reliable predictors of TCP
unfairness. Finally, we show that TCP unfairness
(including TCP SACK, S-TCP, and HSTCP) in high-
speed networks due to bursty background traffic can
be mitigated through the use of random drop queuing
disciplines (such as RED) at bottleneck routers.
I. INTRODUCTION
It is well known that when several TCP flows
with different round-trip times (RTTs) share the
same bottleneck link, they will not gain access to
an equal share of the available bandwidth [2], [3],
[4], [5]. This could be concluded, e.g., from the
TCP friendly equation [1]:
$T = \frac{C}{RTT\,\sqrt{p}}$          (1)

which relates a flow's throughput $T$, the probability
$p$ of a packet from that flow being dropped, and the
flow's RTT. In this formula, $C$ denotes a fixed
constant [1]. According to Equation (1), one would
expect that when two flows $a$ and $b$ share the
same bottleneck and therefore are subject to the same
drop probability $p$, the ratio between their
throughputs $T_a / T_b$ should be inversely
proportional to the ratio of their RTTs, i.e.,

$F := \frac{T_a}{T_b} = \frac{RTT_b}{RTT_a}$          (2)

where $F$ stands for the fairness ratio of the two
flows. This "unfairness" is caused by TCP's AIMD
(Additive Increase Multiplicative Decrease) scheme
in the congestion avoidance stage. The AIMD al-
gorithm in TCP increases the congestion window
by one for each RTT and decreases the window by
half when a drop is detected [6]. Flows with smaller
RTT increase their congestion window more rapidly
and are therefore able to reach larger sending rates
before congestion occurs. Flows with smaller RTTs
will thus achieve larger average sending rates.
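The inverse-RTT proportionality of Equation (2) can be checked numerically. Below is a minimal sketch; the constant $C$ and the parameter values are illustrative only and not taken from the paper.

```python
# Sketch of the TCP friendly formula T = C / (RTT * sqrt(p)) of [1].
# C and the RTT / drop-probability values below are illustrative.
import math

def tcp_friendly_throughput(rtt, p, c=1.22):
    """Throughput of a TCP flow with round-trip time `rtt` (sec)
    and drop probability `p`, per Equation (1)."""
    return c / (rtt * math.sqrt(p))

# Two flows sharing a bottleneck see the same drop probability p,
# so their throughput ratio is the inverse RTT ratio (Equation (2)).
p = 0.01
t_a = tcp_friendly_throughput(0.050, p)   # RTT_a = 50 ms
t_b = tcp_friendly_throughput(0.100, p)   # RTT_b = 100 ms
print(t_a / t_b)                          # approx RTT_b / RTT_a = 2
```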
It has been shown that TCP’s unfairness under
heterogeneous RTTs can be significantly worse than
what the TCP friendly equation (Equation (1))
suggests [2], [3], [4], [5]. This generally occurs
when competing TCP flows become synchronized
in the sense that they suffer drops roughly at
the same time. Synchronization further penalizes
flows with larger RTTs because these flows suffer
the multiplicative decrease even when most of the
congestion is caused by flows with smaller RTT.
We will see in Section II that, using a very simple
argument, one can conclude that the fairness ratio
of two perfectly synchronized flows is given by

$F = \left(\frac{RTT_b}{RTT_a}\right)^2$,

which can be significantly worse than what Equation (2)
suggests when the ratio between the RTTs is far from
one. As observed by [4], in practice, perfect
synchronization is rare and the fairness ratio is more
accurately modeled by $F = (RTT_b/RTT_a)^{\alpha}$ for
some $\alpha$ between 1 and 2.
Flow synchronization in TCP has been reported
in previous studies. [7], [8] showed that TCP con-
nections with the same propagation delay can easily
become synchronized. [4] reported synchronized
drops even when connections have different propa-
gation delays.
In order to avoid synchronization between TCP
connections, [9] suggested injecting randomness
into the network to avoid synchronization of the
window increase and decrease cycles. [10] pro-
posed to randomly select which packets to drop
during periods of congestion (as opposed to drop-
tail queuing). Introducing random processing time
when sending packets or RED was suggested in
[11] as a way of removing synchronization when a
large number of TCP connections share the bottle-
neck. However, this mechanism does not appear to
suffice when the number of connections decreases.
RED and its variations are also known to remove
global synchronization [12]. Indeed, the avoidance
of global synchronization was one of the original
objectives of RED, in addition to the reduction of
queuing delay [12]. Drop synchronization is particularly
common in network simulation studies, which of-
ten require the introduction of background traffic,
random delays, and reverse traffic to reduce or
remove it. For instance, [8] introduced 10% of on-
off exponential or Pareto background traffic in a
5Mbps bottleneck to remove synchronization.
Recently, a number of efforts have focused on de-
veloping transport protocols for high-speed, lightly
utilized networks such as ESNet and Internet2 (e.g.,
[13], [14], [15]). Much of this work has addressed
the well known problem that current implementa-
tions of TCP increase the sending rate too slowly
to efficiently utilize the available bandwidth on
high-speed networks. However, another inherent
problem with TCP over high-speed networks is
that flows will synchronize and result in excessive
unfairness (i.e., less fair than predicted by (2))
[5]. In this paper, this synchronization problem is
investigated and related to various characteristics
of the background traffic. For example, it will be
shown that in high-speed networks, global syn-
chronization occurs under a wide variety of back-
ground traffic conditions, including, for example,
when there are hundreds of randomized background
flows. By analyzing traffic and resulting packet loss
patterns, we observe relationships between certain
traffic metrics and synchronization. These metrics
not only provide insight into how this type of
synchronization occurs, but also are a set of tools
for easily detecting synchronization at a router (i.e.,
without end-to-end information such as RTTs or
flow bit rates).
One implication of this prevalence of syn-
chronization in high-speed networks is that de-
signer/administrators of such networks should tune
their networks to alleviate synchronization and the
related unfairness. It will be shown that while
randomized background traffic has little impact,
randomized dropping of packets will eliminate this
type of synchronization. It is interesting to note
that there have been several studies devoted to the
impact of RED and other randomized drop AQM
techniques on actual networks (e.g., [16], [17]). In
terms of various metrics, these studies do not make
a strong case in favor of the deployment of RED.
Consequently, according to folklore, while most
routers can implement RED, due to lack of hard
evidence supporting its use, network administrators
disable it [16]. However, these studies have been
[Figure 1 here]
Fig. 1. Queue size, congestion window, drop events, and packet losses for four (foreground) TCP flows subject to different
types of background traffic: (a) bursty background traffic; (b) smooth background traffic.
focused on today’s low/moderate-speed networks.
In this paper, we show that in future high-speed
networks, synchronization and the resulting unfair-
ness will likely occur. While one solution is to
deploy transport-layer protocols that alleviate syn-
chronization (e.g., [5]), an alternative is to simply
enable RED on routers where synchronization is
occurring. The metrics here presented for detecting
whether synchronization is occurring due to traffic
conditions at a router can also be used to determine
where RED should be enabled.
As shown in Section V, synchronization is es-
pecially problematic for some TCP variants that
have been proposed for high-speed networks. More
specifically, HighSpeed TCP (HSTCP) [13] and
Scalable TCP (S-TCP) [14], which were developed
to overcome TCP’s poor performance in these net-
works, are extremely sensitive to drop synchroniza-
tion. [5] showed that under perfect synchronization,
HSTCP results in a fairness ratio of
$F = (RTT_b / RTT_a)^{5.56}$,
and S-TCP does even worse by completely starving
the flow with the larger RTT.
In summary, the main contributions of this paper
are as follows:
- We show that, in high-speed networks, injecting
bursty background traffic, which has been used as a
way to remove synchronization in "traditional"
networks, may actually lead to synchronization and
result in unfairness to foreground TCP flows with
longer RTTs. We observe that unfairness is especially
severe in high-speed variants of TCP such as Scalable
TCP (S-TCP) and HighSpeed TCP (HSTCP).
- We propose three different metrics to characterize
traffic burstiness and show that these metrics are
reliable predictors of TCP unfairness.
- Finally, we show that TCP unfairness in high-speed
networks can be mitigated through the use of random
drop queuing disciplines (such as RED) at bottleneck
routers. This is shown to be true for TCP-Sack,
HSTCP, and S-TCP.
The remainder of the paper is organized as fol-
lows. We develop a simple analytical model that
characterizes fairness under drop synchronization
in Section II. Section III proposes three metrics to
measure traffic burstiness and shows that they are
good predictors of fairness. We validate our metrics
through ns-2 simulations with different types of
background traffic. Section IV shows that burstiness
can accentuate unfairness of variants of TCP that
have been proposed for high-speed networks and
Section V presents Active Queuing mechanisms
such as RED as a way to improve fairness of TCP
and its high-speed variants. We describe related
work in Section VI. Finally, in Section VII we
present our concluding remarks and directions for
future work.
II. SYNCHRONIZED PACKET LOSS AND
FAIRNESS
Figure 1 shows the effect of background traffic on
drop synchronization. These results were obtained
from two ns-2 [18] simulations of four TCP flows
competing for the same 500Mbps bottleneck link,
but with different propagation delays. In both simu-
lations, the four (foreground) TCP flows shared the
bottleneck with 50Mbps (10%) of ON-OFF UDP
background traffic. The durations of the ON
and OFF periods are both Pareto distributed with
mean .5sec and shape parameter equal to 1.5.
The two simulations differ in that the one cor-
responding to Figure 1(a) used only 10 (high-
rate) UDP sources to generate background traffic
whereas the one corresponding to Figure 1(b) used
1000 (low-rate) UDP sources to generate the same
amount of background traffic. The former resulted in
considerably burstier traffic than the latter.
Consequently, Figure 1(a) shows a larger variance in
the queue size.
We can see in the bottom plot of Figure 1(a)
that the congestion event taking place around
6947.55sec results in roughly 240 packet losses.
This is typical of bursty background traffic and
occurs whenever a burst in the sending rate of the
background traffic takes place while the bottleneck
queue is almost full. This results in a very large
number of drops before any of the foreground TCP
flows has time to react. This is confirmed by the
observation that in the non-bursty simulation in
Figure 1(b), the queue remains above 97.5% full for
roughly 1.5sec per congestion event, whereas in
the bursty simulation it only remains above 97.5%
for 0.23sec per congestion event. In the simulation
shown in Figure 1(a), on average 3.8 out of the 4
TCP connections experience synchronized packet
loss on each congestion event. This should be con-
trasted with the simulation in Figure 1(b) in which
on average only 2.6 out of the 4 TCP connections
experience synchronized losses. On the other hand,
while 3.8 out of 4 flows receiving at least one drop
in each drop event will surely cause synchronization,
2.6 out of 4 flows receiving drops will also cause
partial synchronization. Indeed, the fairness ratio
in the case of fewer background flows is 2.5, while
it is 2.0 when there are 1000 background flows; both
fairness ratios are far larger than the 1.53 predicted
by the ratio of RTTs.
The reason that such synchronization occurs
in high-speed networks is that while bursts in
low/moderate-speed networks and high-speed net-
works may be of the same size in proportion (e.g.,
in terms of the fraction of bandwidth), bursts in
high-speed networks are much larger in magnitude.
Thus, while a burst and the resulting congestion event
may last only a short time, a large number of packets
will attempt to traverse the congested router during
the event; hence many drops will occur, and they will
be spread across several flows. In lower-speed
networks, drop events may last equally long, but fewer
packets traverse the router during the event, hence
fewer flows receive drops.
To understand how drop synchronization affects
fairness, we consider the topology in Figure 2,
whose sources always have packets to send. We
assume that the drops of all flows are perfectly
synchronized. This extreme form of synchroniza-
tion was observed in [8], [5], [7] under droptail
queuing.
[Figure 3 here]
Fig. 3. Congestion window sizes of 2 synchronized TCP
flows.
Figure 3 shows the evolution of the congestion
windows of two synchronized TCP flows. For
simplicity, slow-start was not considered. Since the
congestion window increases by one packet per
RTT, the flow with smaller RTT exhibits a steeper
increase in its congestion window size.
[Figure 2 here]
Fig. 2. Topology with two TCP flows with different propagation delays.

We denote by $w_i(t)$ the window of the $i$th flow at
time $t$. Suppose that in steady state, synchronized
losses occur with a period of $\tau$. In this period,
the congestion window increases until it reaches its
maximum $w_i^{\max}$; it is then reduced by a factor
of two and becomes $w_i^{\max}/2$. This happens when
all drops are detected in one fast retransmit, which
is likely under the Sack version of TCP. Since the
rate of increase of $w_i$ is one packet per $RTT_i$,
after a period of $\tau$ the window increases by
$\tau / RTT_i$ and reaches $w_i^{\max}$ again, which
can be expressed by

$w_i^{\max} = \frac{w_i^{\max}}{2} + \frac{\tau}{RTT_i}$, thus $w_i^{\max} = \frac{2\tau}{RTT_i}$.

The average window size is given by the average
between its maximum value $w_i^{\max}$ and its
minimum value $w_i^{\max}/2$:

$w_i^{avg} = \frac{1}{2}\left(w_i^{\max} + \frac{w_i^{\max}}{2}\right) = \frac{3\tau}{2\,RTT_i}$.

Since $w_i$ packets are sent every $RTT_i$, the
corresponding average throughput is given by

$T_i = \frac{w_i^{avg}}{RTT_i} = \frac{3\tau}{2\,RTT_i^2}$.

Hence, the fairness ratio between two perfectly
synchronized flows is given by

$F = \frac{T_a}{T_b} = \left(\frac{RTT_b}{RTT_a}\right)^2$.
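This derivation can be sanity-checked with a small simulation. The sketch below is not the authors' simulator: the fluid AIMD model, link capacity, and RTT values are our own illustrative assumptions. Two flows grow their windows by one packet per RTT and halve them simultaneously whenever the link saturates; the resulting throughput ratio should approach $(RTT_b / RTT_a)^2$.

```python
# Minimal sketch: two AIMD flows with perfectly synchronized drops.
# Windows grow by one packet per RTT; when the aggregate rate exceeds
# the link capacity, BOTH flows halve their windows.  Capacity and
# RTTs are illustrative values, not taken from the paper.

def synchronized_aimd(rtt_a=0.05, rtt_b=0.10, capacity=1000.0,
                      dt=0.001, horizon=200.0, warmup=50.0):
    w = [0.0, 0.0]                  # congestion windows (packets)
    rtts = [rtt_a, rtt_b]
    totals = [0.0, 0.0]             # integrated throughput after warmup
    t = 0.0
    while t < horizon:
        # additive increase: one packet per RTT
        for i in range(2):
            w[i] += dt / rtts[i]
        # synchronized multiplicative decrease when the link saturates
        if sum(w[i] / rtts[i] for i in range(2)) > capacity:
            w = [x / 2.0 for x in w]
        if t >= warmup:
            for i in range(2):
                totals[i] += (w[i] / rtts[i]) * dt
        t += dt
    return totals[0] / totals[1]    # empirical fairness ratio

ratio = synchronized_aimd()
print(ratio)   # close to (0.10 / 0.05)**2 = 4, as the derivation predicts
```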
III. BURSTINESS METRICS
This section presents results from extensive ns-
2 [18] simulations. These simulations demonstrate
the relationship between burstiness, degree of syn-
chronization, and fairness ratio. For these sim-
ulations, we used the y-shape topology of Fig-
ure 4. Four TCP connections are simulated with
propagation delays of 45ms, 90ms, 135ms, and 180ms.
To generate various degrees of burstiness in the
background traffic, different types of background
traffic are injected into a router.
Three metrics to measure burstiness of back-
ground traffic are investigated. First, we define
Burst Packet Drop Density (BPDD), which mea-
sures the average number of packet losses per drop
event. This number includes packet losses both in
foreground and background traffic.
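For concreteness, BPDD could be computed from a router's drop-timestamp trace roughly as follows. The event-grouping gap threshold below is our own assumption; the paper does not specify how drops are grouped into events.

```python
# Hypothetical BPDD computation from a trace of drop timestamps.
# Drops closer together than `gap` seconds are treated as one drop
# event -- this grouping threshold is an assumption, not from the paper.

def bpdd(drop_times, gap=0.1):
    """Average number of packet losses per drop event."""
    if not drop_times:
        return 0.0
    drop_times = sorted(drop_times)
    events = 1
    for prev, cur in zip(drop_times, drop_times[1:]):
        if cur - prev > gap:
            events += 1
    return len(drop_times) / events

# Three drops in one burst plus one isolated drop: 4 drops, 2 events.
print(bpdd([1.00, 1.01, 1.02, 5.00]))   # 2.0
```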
[Figure 5 here]
Fig. 5. Burst Packet Drop Density (BPDD) vs. Fairness Ratio.
Figure 5 shows the relationship between BPDD and
the fairness ratio between two of the foreground TCP
flows. Each point represents the fairness ratio
between the two flows under a particular type of
background traffic.
FTP, Exponential, Pareto, and HTTP background
traffic are shown in this figure. Parameters used for
FTP background traffic are specified in Table I.
Each FTP instance sends the number of packets shown
in this table and has a start time that is
exponentially distributed with the inter-FTP start
time parameter in this table.
[Figure 4 here]
Fig. 4. Y-shape multi-queue topology with 4 different propagation delays.

For the exponential background traffic in Figure 5,
we use UDP sources with exponentially distributed
ON-OFF times with a 500ms mean. The sending rates of
the sources range from 25Kbps to 5Mbps each, and the
aggregate background traffic is 100Mbps. If the
background traffic is composed of 5Mbps UDP sources,
each source generates large bursts of traffic and the
BPDD is quite large (around 95 in this example).
However, if the sources send at 25Kbps, then the BPDD
is less than 20.
Figure 5 also shows ON-OFF Pareto distributed
background traffic where the flows’ sending rates
range from 25Kbps to 5Mbps. Another type of
background traffic in this experiment is HTTP
traffic. Parameters of HTTP background traffic used
in these simulations are obtained from [19], [5] and
are shown in Table II.
This figure also shows the RTT ratio between the two
flows. If synchronization does not occur, the
fairness ratio is expected to stay close to this RTT
ratio. The figure shows that BPDD has a strong
relationship with the fairness ratio: if the
background traffic causes burstier packet losses at
the bottleneck queue, then the fairness ratio between
the two flows becomes larger.
As a second metric to measure the degree of
synchronization, we compute the average percentage of
flows experiencing packet loss per loss event (P). In
other words, P is the average fraction of flows that
experience at least one packet loss in each loss
event.
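A sketch of how P could be computed from per-event loss records; the representation of a loss event as a set of flow ids is hypothetical, chosen only to make the definition concrete.

```python
# Hypothetical computation of the metric P: the average percentage of
# flows that see at least one packet loss per loss event.  Each event
# is represented as the set of flow ids that lost packets in it.

def percent_synchronized(events, n_flows):
    """Average percentage of the n_flows flows with >= 1 loss per event."""
    if not events:
        return 0.0
    return 100.0 * sum(len(set(e)) for e in events) / (n_flows * len(events))

# Two events over 4 flows: one hits 3 flows, the other hits 2 flows.
print(percent_synchronized([{1, 2, 3}, {2, 4}], 4))   # 62.5
```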
As Figure 6 shows, the fairness ratio increases as
the percentage of flows that experience packet loss
(P) increases. This conclusion is obtained from
simulations of TCP under FTP, HTTP, and varying
numbers of UDP background flows, and it again
indicates a strong relationship between P and the
fairness ratio.
[Figure 6 here]
Fig. 6. Percentage of flows that experience synchronized packet loss per drop event vs. Fairness Ratio.

Another metric considered is the coefficient of
variation (CoV), which is the ratio of the standard
deviation to the mean of the observed number of
packets arriving at a bottleneck router. The CoV was
also used to quantify TCP's burstiness in [20];
however, their use was to compare different flavors
of TCP, such as Reno and Vegas, in terms of
burstiness. Here, the CoV is used to quantify the
burstiness of the FTP background traffic.
If the CoV is small, the number of packets arriving
at the router in each time interval concentrates
around the average, i.e., the background traffic is
smooth. If the CoV is large, the aggregate background
traffic is burstier.
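The CoV metric itself is straightforward to compute from per-interval arrival counts. A minimal sketch with illustrative counts (the interval length would match the 5ms used in the experiments below):

```python
# Sketch of the CoV burstiness metric: standard deviation over mean of
# per-interval packet counts observed at the bottleneck router.
# The sample counts are illustrative, not measured data.
import statistics

def cov(packet_counts):
    """Coefficient of variation of the observed per-interval counts."""
    mean = statistics.mean(packet_counts)
    return statistics.pstdev(packet_counts) / mean

smooth = [100, 101, 99, 100, 100]   # arrivals concentrated near the mean
bursty = [0, 0, 400, 0, 100]        # same total arrivals, bursty pattern
print(cov(smooth) < cov(bursty))    # True: a larger CoV means burstier traffic
```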
[Figure 7 here]
Fig. 7. Coefficient of Variation of FTP background traffic vs. Fairness Ratio.

TABLE I
PARAMETERS FOR FTP TRAFFIC.

       Inter-FTP start time (sec)   Number of packets
FTP1   0.01                         63
FTP2   0.05                         3135
FTP3   0.2                          1250
FTP4   0.5                          6250
FTP5   3                            18750
FTP6   5                            31250
FTP7   10                           62500

Figure 7 shows the CoV of the aggregate FTP
background traffic. Each source generates FTP traffic
as described in Table I. These results were obtained
with a 5ms time interval. In our experiments,
background traffic with many short flows incurs more
burstiness than traffic composed of long flows,
especially over short time intervals. One reason
short flows are bursty at short time scales is that
they send back-to-back packets during slow-start. We
believe that short-term burstiness causes
synchronization of foreground flows in general. This
figure shows
that when the short-term CoV increases, the fairness
ratio also increases. Self-similar modeling might be
a better way to model burstiness at larger time
scales; this is one of the topics of our future work.
IV. FAIRNESS OF HIGH-SPEED TCP AND STCP
During the congestion avoidance phase, TCP increases
its congestion window by one for every RTT in which
there is no packet loss. Because this increase in the
congestion window is too slow, TCP under-utilizes the
network bandwidth in high-speed networks [13], [14].
Recent deployment of
high-speed networks, such as ESNet and Abilene,
provisioned with high bandwidth links (ranging
from 1 to 10 Gbps) has motivated researchers to
develop new protocols.
Several promising protocols for high-speed net-
works have been introduced. Among these pro-
tocols, High-speed TCP (HSTCP) [13] and Scal-
able TCP (STCP) [14] are studied in this paper.
HSTCP is known to be compatible with TCP when
congestion window size is small but its increase
parameter becomes larger as window size grows.
Scalable TCP [14] also adaptively adjusts the in-
crease rate based on the current window size. While
these protocols utilize the available bandwidth more
efficiently, it is unknown how they behave when
synchronization occurs due to background traffic.
[Figure 8 here]
Fig. 8. Fairness Ratio of TCP variants with Drop-tail queue.
Figure 8 presents the fairness and RTT ratios of TCP,
HSTCP and STCP, where the sending rate of each UDP
background flow ranges from 25Kbps to 5Mbps. If each
UDP background source generates bursty traffic (5Mbps
per source), HSTCP's fairness ratio becomes around
7.5 and STCP's around 6.9. If the UDP sources
generate smoother traffic (25Kbps each), both HSTCP
and STCP show a fairness ratio of 5.5. The RTT ratios
are around 1.55. This shows that fairness is poor for
HSTCP and STCP in high-speed networks, where
synchronization is more frequent.
V. ACTIVE QUEUE MANAGEMENT (AQM)

Active Queue Management (AQM) algorithms [12], [21],
[22] were proposed to provide lower-delay service and
to introduce randomness at the router. However, the
impact of AQM on synchronization in high-speed
networks due to bursty background traffic has not
been studied.
RED [12] is one of the most well known AQM
methods. The RED algorithm controls a queue by
dropping packets with increasing probability as the
average queue size increases. A RED router computes
an average queue size, denoted avg. If avg is smaller
than the minimum queue threshold, the router accepts
all packets. If avg lies between the minimum and
maximum thresholds, the router drops each arriving
packet with a probability that grows with avg; if avg
exceeds the maximum threshold, all arriving packets
are dropped.
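The basic RED decision just described can be sketched as follows; the threshold and weight values are illustrative defaults, not taken from the paper, and the count-based adjustment of the full RED algorithm of [12] is omitted for brevity.

```python
# Sketch of the basic RED decision of [12]: an EWMA of the queue size
# and a drop probability that rises linearly between two thresholds.
# Parameter values are illustrative defaults, not from the paper.
import random

def red_update_avg(avg, q, w_q=0.002):
    """Exponentially weighted moving average of the queue size."""
    return (1.0 - w_q) * avg + w_q * q

def red_drop_prob(avg, min_th=5, max_th=15, max_p=0.1):
    """Drop probability as a function of the average queue size."""
    if avg < min_th:
        return 0.0                     # accept all packets
    if avg >= max_th:
        return 1.0                     # drop all arriving packets
    return max_p * (avg - min_th) / (max_th - min_th)

def red_should_drop(avg):
    """Randomized drop decision for one arriving packet."""
    return random.random() < red_drop_prob(avg)

print(red_drop_prob(4))     # 0.0  (below min_th)
print(red_drop_prob(10))    # 0.05 (halfway between the thresholds)
print(red_drop_prob(20))    # 1.0  (above max_th)
```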
TABLE II
PARAMETERS FOR HTTP TRAFFIC.

        Session Number   Page Number   Inter Page   Page Size   Inter Obj   Obj Size   Number of servers
HTTP1   30               1000          4            300         0.01        10         10
HTTP2   30               1000          4            300         0.01        10         100
HTTP3   300              1000          4            30          0.01        10         10
HTTP4   30               1000          4            1000        0.01        3          10
While RED is well known, it has not been
deployed widely because it is sensitive to traffic
load and to its parameters, especially in high-speed
networks [17]. Also, RED exhibits nonlinear phe-
nomena such as oscillations or even chaos [23]. To
improve on this instability of RED, Adaptive RED [22]
was proposed. This is a dynamic version of RED, which
updates the loss probability so that the average
queue size stays close to a target queue size. Since
it solves some of the problems that RED has, we use
an Adaptive RED queue to show how it mitigates
unfairness in high-speed networks.
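The Adaptive RED update can be sketched as an AIMD adjustment of RED's max_p toward a target queue range. The constants below follow commonly cited defaults for the algorithm of [22] and should be treated as illustrative, not as the exact parameters used in these simulations.

```python
# Sketch of the Adaptive RED idea of [22]: periodically nudge RED's
# max_p so the average queue size tracks a target range between the
# two thresholds.  Constants are commonly cited defaults; illustrative.

def adapt_max_p(avg, max_p, min_th=5, max_th=15, beta=0.9):
    """One periodic adaptation step of RED's max_p parameter."""
    target_lo = min_th + 0.4 * (max_th - min_th)
    target_hi = min_th + 0.6 * (max_th - min_th)
    if avg > target_hi:
        alpha = min(0.01, max_p / 4.0)
        max_p = max_p + alpha          # additive increase: drop more
    elif avg < target_lo:
        max_p = max_p * beta           # multiplicative decrease: drop less
    return min(0.5, max(0.01, max_p))  # keep max_p within sane bounds

p = 0.1
print(adapt_max_p(14, p))   # queue above the target range: max_p rises
print(adapt_max_p(6, p))    # queue below the target range: max_p decays
```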
[Figure 9 here]
Fig. 9. Fairness Ratio of variants of TCP with RED queue.
Figure 9 shows simulation results of HSTCP,
STCP and TCP over a high-speed network using an
adaptive RED queue. Unfairness due to background
traffic is eliminated; each connection's throughput
becomes inversely proportional to its RTT. We
conclude that AQM schemes such as Adaptive RED, and
perhaps other schemes that randomly drop packets, can
dramatically improve fairness in high-speed networks
where synchronization is apt to occur.
VI. RELATED WORK
One of the earliest studies on RTT unfairness can
be found in [24]. They studied the effect of random
drop at the gateway and showed that random drop
improves the fairness of connections with different
RTTs that share the same bottleneck.
TCP’s performance under multiple congested
gateways was studied by [2], showing that TCP’s
throughput decreases rapidly as the number of
congested gateways increases. A heuristic analysis
showed that the throughput of a connection is
inversely proportional to its round-trip time. It was
concluded that throughput would improve with RED
because both Random Drop and Drop Tail gateways are
biased against bursty foreground traffic.
[4] studied the performance of TCP in high
bandwidth-delay product networks with random
losses. This study shows that TCP is grossly unfair
toward connections with higher propagation delays.
Under FIFO queuing, TCP's throughput is inversely
proportional to $RTT^{\alpha}$, where $\alpha$
depends on the queuing delay. This work focused on
TCP Tahoe and Reno over ATM networks, not on
TCP-Sack.
More recently, RTT unfairness in high-bandwidth
networks was studied in [5]. It is shown through
simulation that whenever a flow suffers a loss
event in a drop tail router, at least half of the
remaining flows will also experience a drop with
70% probability. AQM routers such as RED do not
result in as many synchronized losses, but some
amount of synchronization still exists, since the
probability that more than a quarter of the total
flows have synchronized loss events is around 30%.
This result shows how RED performs in high-speed
networks where burstiness exists due to foreground
TCP traffic.
[25] proposed a method to measure the buffer
requirement at the router. They concluded that the
buffer requirement decreases with the square root
of the number N of flows that are active at the
link. Their key insight is that when the number of
competing flows is sufficiently large, they become
non-synchronized. However, this is disputed in [26]:
flows in the backbone network are not synchronized
because the backbone is generally not a bottleneck
link, and when the queue is too small it may
experience partial loss synchronization.
[11] examines how the aggregate throughput,
goodput and loss rate vary with different underlying
topologies. By varying the bandwidth of the
bottleneck they observed global synchronization. To
break synchronization they add random processing time
or use RED gateways. However, this delay is sometimes
not sufficient when the number of connections
decreases and flows do not experience timeouts. Thus
it is often necessary to inject background traffic or
randomness to remove synchronization.
VII. CONCLUSIONS
TCP synchronization and fairness over high-
speed networks was studied. It was observed that
synchronization persisted even when there was a large
number of randomized background flows; in some cases,
synchronization persisted even with over 1000
randomized background flows present. This
synchronization causes unfairness, i.e., flows with
small RTTs obtained substantially higher throughput
than flows with long RTTs. Several
metrics of traffic and drop patterns were used to
investigate how traffic impacts synchronization and
fairness. It was also observed that variants of TCP
that are specialized for high-speed networks suffer
from extreme unfairness. However, in all cases it was
found that randomized drops, such as those of RED,
eliminate synchronization and the associated
unfairness.
REFERENCES
[1] J. Padhye, V. Firoiu, D. Towsley, and J. Kurose, “Mod-
eling TCP throughput: a simple model and its empirical
validation,” in Proc. of the ACM SIGCOMM, Sept. 1998.
[2] S. Floyd, “Connections with multiple congested gateways
in packet-switched networks part1: One-way traffic,”
ACM Comput. Comm. Review, vol. 21, no. 5, pp. 30–47,
Oct. 1991.
[3] T. H. Henderson, E. Sahouria, S. McCanne, and R. H.
Katz, “On improving the fairness of TCP congestion
avoidance,” in Proc. of IEEE GLOBECOM, Nov. 1998.
[4] T. V . Lakshman and U. Madhow, “The performance of
TCP/IP for networks with high bandwidth-delay products
and random loss,” IEEE/ACM Trans. on Networking,
vol. 5, no. 3, pp. 336–350, July 1997.
[5] L. Xu, K. Harfoush, and I. Rhee, “Binary increase
congestion control for fast, long-distance networks,” in
Proc. of the IEEE INFOCOM, Mar. 2004.
[6] D. Chiu and R. Jain, “Analysis of the increase and
decrease algorithms for congestion avoidance in com-
puter networks,” Computer Networks and ISDN Systems,
vol. 17, pp. 1–14, 1989.
[7] S. Shenker, L. Zhang, and D. Clark, “Some observations
on the dynamics of a congestion control algorithm,” ACM
Comput. Comm. Review, pp. 30–39, Oct. 1990.
[8] S. Bohacek, J. P. Hespanha, J. Lee, and K. Obraczka, “A
hybrid systems modeling framework for fast and accurate
simulation of data communication networks,” in ACM
SIGMETRICS, 2003.
[9] L. Zhang and D. D. Clark, “Oscillating behavior of net-
work traffic: A case study simulation,” Internetworking:
Research and Experience, vol. 1, pp. 101–112, 1990.
[10] S. Floyd and V. Jacobson, “On traffic phase effects
in packet-switched gateways,” Internetworking: Research
and Experience, vol. 3, no. 3, pp. 115–116, Sept. 1992.
[11] L. Qiu, Y. Zhang, and S. Keshav, “Understanding the
performance of many TCP flows,” Computer Networks,
vol. 37, pp. 277–306, Nov. 2001.
[12] S. Floyd and V. Jacobson, “Random early detection gate-
ways for congestion avoidance,” IEEE/ACM Trans. on
Networking, vol. 1, no. 4, pp. 397–413, Aug. 1993.
[13] S. Floyd, “Highspeed TCP for large congestion win-
dows,” RFC 3649, 2003.
[14] T. Kelly, “Scalable tcp: improving performance in high-
speed wide area networks,” SIGCOMM Comput. Com-
mun. Rev., vol. 33, no. 2, pp. 83–91, 2003.
[15] C. Jin, D. X. Wei, and S. H. Low, “Fast tcp: motivation,
architecture, algorithms, performance,” in Proc. of the
IEEE INFOCOM, Mar. 2004.
[16] M. May, J. Bolot, C. Diot, and B. Lyles, “Reasons not to
deploy RED,” in IWQoS’99, 1999.
[17] M. Christiansen, K. Jaffay, D. Ott, and F. D.
Smith, “Tuning RED for web traffic,” in
SIGCOMM, 2000, pp. 139–150. [Online]. Available:
citeseer.ist.psu.edu/christiansen00tuning.html
[18] The ns Manual (formerly ns Notes and Documentation),
The VINT Project, a collaboration between researchers
at UC Berkeley, LBL, USC/ISI, and Xerox PARC,
Oct. 2000, available at http://www.isi.edu/nsnam/ns/ns-
documentation.html.
[19] A. Feldmann, A. C. Gilbert, P. Huang, and W. Willinger,
“Dynamics of IP traffic: A study of the role of variability
and the impact of control,” in Proc. of the ACM SIG-
COMM, Sept. 1999.
[20] P. Tinnakornsrisuphap, W. Feng, and I. Philp, “On the
burstiness of the TCP congestion-control mechanism in
a distributed computing system,” in Proc. of the 20th
International Conference on Distributed Computing
Systems, Apr. 2000.
[21] W. Feng, D. Kandlur, D. Saha, and K. Shin, “A self-
configuring RED gateway,” in Proc. of the IEEE INFO-
COM, Mar. 1999.
[22] S. Floyd, R. Gummadi, and S. Shenker, “Adaptive RED:
An algorithm for increasing the robustness of RED’s
active queue management,” ACIRI, Tech. Rep., 2001.
[23] P. Ranjan, E. H. Abed, and R. J. La, “Nonlinear instabil-
ities in TCP-RED,” in Proc. of the IEEE INFOCOM, Mar.
2002.
[24] A. Mankin, “Random drop congestion control,” in
Proc. of the ACM SIGCOMM, Sept. 1990, pp. 1–7.
[25] G. Appenzeller, I. Keslassy, and N. McKeown, “Sizing
router buffers,” in Proc. of the ACM SIGCOMM, Aug.
2004, pp. 281–292.
[26] A. Dhamdhere, H. Jiang, and C. Dovrolis, “Buffer siz-
ing for congested internet links,” in Proc. of the IEEE
INFOCOM, Mar. 2005.
Junsoo Lee, Dr. Stephan Bohacek, Dr. Joao Hespanha, Dr. Katia Obraczka. "A study of TCP fairness in high-speed networks." Computer Science Technical Reports (Los Angeles, California, USA: University of Southern California. Department of Computer Science) no. 854 (2005).