Fairness Evaluation Experiments for Multicast Congestion Control Protocols
Karim Seada, Ahmed Helmy
Electrical Engineering-Systems Department
University of Southern California, Los Angeles, CA 90089
{seada,helmy}@usc.edu
Abstract
Fairness to current Internet traffic, particularly TCP, is an
important requirement for new protocols to be safely deployed
in the Internet. This applies especially to multicast protocols,
which must be deployed with great care. In this paper we provide
a set of experiments that can be used as a benchmark to evaluate
the fairness of multicast congestion control mechanisms when
running alongside competing TCP flows. We carefully select the
experiments to target specific congestion control mechanisms and
to reveal the differences between TCP and the proposed multicast
protocol. This gives us a better understanding of the proposed
protocol's behavior and lets us evaluate its fairness and when
violations can happen. To illustrate our experiments we carry
them out on a single-rate case study protocol, pgmcc, using NS-2
simulations. Our analysis shows the strengths and potential
problems of the protocol and points to possible improvements.
The experiments target several congestion control mechanisms,
such as timeouts, response to ACKs and losses, and the effect of
independent and congestion losses. In addition, we evaluate
multicast mechanisms such as the effect of multiple receivers,
group representative selection, and feedback suppression when
there is network support.
Keywords
Multicast, congestion control, fairness, TCP, simulation,
benchmark.
1 Introduction
Congestion control is a major requirement for multicast
transport protocols in order to be safely deployed in the
Internet [11]. It is believed that the lack of good, deployable,
well-tested multicast congestion control mechanisms is one
of the factors inhibiting the usage of IP multicast [19]. These
mechanisms have to be fair to current Internet traffic,
especially TCP.
In this paper we consider evaluating the fairness of
multicast congestion control protocols, by providing a set of
selected experiments and scenarios that target specific
congestion control mechanisms. This facilitates a better
understanding of multicast congestion control protocols in
order to assess their safety and their effects on TCP. Fairness
analysis is important for any kind of protocol that competes
with TCP.
Our approach attempts to relate overall protocol behavior
to individual protocol mechanisms by evaluating carefully
selected scenarios. In our experience this often points to
possible mechanistic modifications to improve the protocol
performance. In this paper, we arrive at these scenarios based
on our intuition and understanding of the protocol
mechanisms. Ultimately, however, we aim to develop a
methodology, based on the STRESS framework [9][10], to
systematize this scenario selection process. In our ongoing
and future work, we address this issue for the classes of
multicast congestion control described here.
To clarify our experiments we have chosen, as a case
study, a multicast congestion control scheme called pgmcc.
pgmcc [15] is a single-rate multicast congestion control
scheme that is designed to be fair with TCP. It is an
interesting scheme that captures many of the phenomena of
representative-based multicast congestion control protocols.
We apply the experiments using NS-2 simulations. Our
analysis shows the strengths and potential problems of the
protocol and points to possible improvements.
The rest of this paper is outlined as follows. In Section 2
we provide an overview of multicast congestion control and
pgmcc. In Section 3 we explain our experiments and the
motivation behind them. In Section 4 we show the simulation
results and analysis of the case study. Conclusions and future
work are presented in Section 5.
2 Related Work
In this section we start with a general background about
multicast congestion control and then provide a brief
description for our case study scheme, pgmcc.
2.1 Multicast Congestion Control
Reliable multicast has been an area of extensive research due
to the increasing number of emerging applications that rely
on it. Most reliable multicast protocols were designed as
general solutions for group communication and do not implement
any explicit form of congestion control [6][12]. One of the most
important criteria that the IETF [11] requires reliable multicast
protocols to satisfy is to perform congestion control. The
protocol should be safe to deploy in the current Internet and
should not starve other competing flows. Since most of the
traffic on the Internet is TCP, the protocol should be
TCP-friendly and should behave like TCP by decreasing its rate
in the case of congestion. TCP-friendliness means that the
protocol's sessions should have the same effect on competing TCP
flows as if they themselves were TCP sessions.
[Figure 1: A sample pgmcc scenario, showing the sender, the
receivers, and the acker, with Data, NACK, and ACK messages.
(1) The sender sends a packet. (2) The packet is lost by one
receiver. (3) That receiver sends a NACK and the acker sends an
ACK. (4) If the NACK is from a receiver worse than the acker,
that receiver is designated as the new acker.]
The design of a MCC (Multicast Congestion Control)
protocol that provides high performance, scalability, and
TCP-friendliness is a difficult task that attracts a lot of
research effort. Multicast congestion control can be classified
into two main categories: single-rate and multi-rate. Single-rate
has limited scalability because all receivers must receive data
at the same (slowest receiver's) rate. It also suffers from the
feedback implosion problem and the drop-to-zero problem [2]
(where the rate degrades significantly due to independent losses
at a large number of receivers). Multi-rate, where
different receivers can receive at different rates, is more
scalable but has other concerns such as the complex encoding
of data, possible multiple paths in the layered approach, and
the effects of receivers joining and leaving layers.
TCP-friendly multicast congestion control can also be
classified into window-based and rate-based. Window-based
protocols use a congestion window control similar to TCP's, while
rate-based protocols depend on the TCP throughput equation [13]
to adjust the transmission rate. Rate-based protocols have
smoother rate transitions, and their fairness with TCP holds over
the long term. They also require accurate RTT computations, which
is difficult in multicast. For more details about these issues
see [8][18].
Another possible classification for single-rate protocols
is whether they are representative-based or not. Non-
representative-based protocols solve the scalability problems
using some aggregation hierarchy. This requires complex
building of the hierarchy and may need network support. The
performance is still limited, and [3] shows that even without
losses, small variations in delay can cause rapid performance
degradation as the number of receivers increases.
Representative-based protocols provide a promising
emerging approach to solve the scalability problems, where a
small dynamic set of receivers is responsible for providing
the feedback [4][5]. The main challenge is the dynamic
selection of a good set of representatives in a scalable and
efficient manner with appropriate reaction to changes in
representatives. This still needs further investigation.
Examples of single-rate representative-based protocols are
pgmcc (window-based) [15] and TFMCC (rate-based) [19].
We will use pgmcc as our case study example, and in the rest
of this section we will provide a more detailed description of
pgmcc.
2.2 pgmcc
pgmcc [15][16] is a single-rate multicast congestion control
scheme that is designed to be TCP-friendly. It supports both
scalability and fast response. To achieve fast response while
retaining scalability, a group representative called the acker is
selected and a tight control loop is run between it and the
sender. It is called the acker because it is the receiver that
sends the ACKs. Other receivers can send NACKs when they
lose packets, if a reliable transport protocol is used. (In [16]
the authors state that pgmcc can be used with both reliable and
non-reliable transport protocols; non-reliable protocols will
also need to send NACKs from time to time for congestion control
purposes.) pgmcc has been used to implement congestion control in
the PGM protocol [17]. A sample scenario for pgmcc is shown in
Figure 1.
The acker is the representative of the group. It is chosen
as the receiver with the worst throughput to ensure that the
protocol will be TCP-friendly. A window-based TCP-like
controller based on positive ACKs is run between the sender
and the acker. The feedback in pgmcc is provided in receiver
reports that are used by the sender to estimate the throughput.
They are embedded into the NACKs and ACKs and contain
the loss rate and information for computing an estimate for
the round trip time (RTT) of the sending receiver. The RTT
can be evaluated by using timestamps or sequence numbers.
The loss rate is computed using a low pass filter to avoid
oscillations.
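For illustration, a low pass filter of this kind can be as simple
as an exponentially weighted moving average of per-packet loss
indications (a sketch in Python; the filter gain is illustrative,
since [15] does not fix a specific constant here):

GAIN = 0.1  # illustrative filter weight

class LossRateEstimator:
    """Low-pass-filtered loss rate, as reported by a receiver in
    its feedback: each packet outcome nudges the estimate, so
    isolated losses do not cause oscillations in throughput
    comparisons."""
    def __init__(self):
        self.p = 0.0
    def on_packet(self, lost):
        sample = 1.0 if lost else 0.0
        self.p = (1.0 - GAIN) * self.p + GAIN * sample
        return self.p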
The most critical operation of pgmcc is the acker
election and tracking. As mentioned, the receiver with the
worst throughput is selected as the acker. When another
receiver with worse throughput sends a NACK, an acker
change may occur. A receiver that does not send NACKs is
assumed to have no congestion problems, and will not be
considered at all. Computation of throughputs uses
information sent by receivers and the TCP-like formula
T = α / (RTT · √p), where T is the throughput, RTT is the round
trip time estimate, p is the loss rate, and α is a constant [13].
An acker switch
occurs from receiver j to receiver i if T(i) < c T(j), where T(n)
is the throughput computed by the sender for receiver n. c is a
constant (0< c ≤1) that is used during throughput comparison
to dampen oscillations of acker selection when the difference
between the current acker and the worst receiver is not large.
There is a 32-bit field in the ACK called the bitmask, which
indicates the receive status of the most recent 32 packets.
This is included to help the sender deal with lost and out-of-
order ACKs.
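To make the election concrete, the following Python sketch shows
how a sender could compare throughput estimates and decide on an
acker switch using this formula and the constant c (our own
illustrative rendering, not the pgmcc implementation; the
constant α cancels in the comparison and is omitted):

import math

C = 2.0 / 3  # acker switching factor (0 < c <= 1), dampens oscillations

def throughput_estimate(rtt, loss_rate):
    """TCP-like estimate T = 1 / (RTT * sqrt(p)); the constant
    factor cancels out when two receivers are compared."""
    if loss_rate <= 0:
        return float('inf')  # a receiver reporting no losses is never worse
    return 1.0 / (rtt * math.sqrt(loss_rate))

def should_switch_acker(acker_rtt, acker_loss, nack_rtt, nack_loss):
    """Switch from the current acker j to receiver i if T(i) < c * T(j)."""
    return (throughput_estimate(nack_rtt, nack_loss)
            < C * throughput_estimate(acker_rtt, acker_loss))

# Example: a NACK from a clearly worse receiver triggers a switch.
print(should_switch_acker(0.05, 0.004, 0.20, 0.016))  # True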
A window-based congestion control scheme similar to
that used by TCP is run between the sender and the acker.
The parameters used are a window W and a token count T. W
is the number of packets that are allowed to be in flight and
has the same role as the TCP window, while T is used to
regulate the sending of data packets by decrementing T for
every packet sent and incrementing it for every ACK
received. On packet loss, W is cut by half and T is adjusted
accordingly. A packet is assumed lost when it has not been
acked in a number (normally three) of subsequent ACKs. W
and T are initialized to 1 when the session starts or after a
stall when ACKs stop arriving and a timeout occurs. The
congestion control here is more conservative than that used
by TCP. For example the exponential opening of the window
(slow start phase) is limited to a small window size. The
reason is that the exponential opening performed by TCP is
considered very aggressive and there are still fears of
deploying such mechanisms for multicast protocols over the
Internet. pgmcc has a different flow/reliability window than
that used for congestion to decouple congestion control from
retransmissions, so it can be used with both reliable and non-
reliable protocols.
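The W/T controller described above can be summarized in a short
sketch (a simplified illustration, not the actual pgmcc code; the
cap on the exponential-opening phase is our own placeholder for
the "small window size" limit):

class WindowTokenController:
    """Simplified pgmcc-style controller: W packets may be in
    flight, and tokens T regulate sending (one consumed per packet
    sent, one gained per ACK received)."""
    SLOW_START_CAP = 4  # assumed small cap on exponential opening

    def __init__(self):
        self.W = 1.0
        self.T = 1.0

    def can_send(self):
        return self.T >= 1.0

    def on_packet_sent(self):
        self.T -= 1.0

    def on_ack(self):
        # Exponential opening only while W is small, then linear growth.
        self.W += 1.0 if self.W < self.SLOW_START_CAP else 1.0 / self.W
        self.T += 1.0

    def on_loss(self):
        # Packet not acked in (normally) three subsequent ACKs:
        # cut W by half and adjust T accordingly.
        self.W = max(1.0, self.W / 2.0)
        self.T = min(self.T, self.W)

    def on_timeout(self):
        # Stall: ACKs stopped arriving and the timeout expired.
        self.W = 1.0
        self.T = 1.0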
Some multicast transport protocols depend on router
support for feedback aggregation. For example, in PGM
routers, the first instance of a NACK for a given data segment
is forwarded to the source, and subsequent NACKs are
suppressed. As we will show, this can have a significant
effect on the operation of pgmcc.
3 Experiment Details and Motivation
In this section we discuss the details of the experiments and the
motivation behind them. Each subsection contains a set of
related experiments.
3.1 Experiment Set 1: Window and Timeouts
The first set of experiments contains simple topologies to
compare the MCC (Multicast Congestion Control) protocol to
different flavors of TCP in simple cases. The flavors are
Reno, New-Reno, and SACK [7]. This comparison helps us
understand the behavior of the protocol and the subtle
differences between it and TCP. Two congestion control
issues are targeted by this comparison: (1) reaction to losses
and ACKs with its effect on the window size, (2)
retransmission timeouts. TCP Reno is still the most widely
deployed flavor in the Internet, but recent statistics show that
TCP New-Reno and TCP SACK deployment is increasing
[14]. New-Reno and SACK solve performance problems of
TCP in case of multiple-packet loss in a window and they
reduce the number of timeouts. When multiple packets are
lost from a single window of data, New-Reno and SACK can
recover without a retransmission timeout. With Reno and
New-Reno at most one dropped packet is retransmitted per
round-trip time, while SACK does not have this limitation
[7]. This response to losses and ACKs has a major impact on
the window size, and consequently on the fairness. According
to [13] timeouts also have a significant impact on the
performance of TCP Reno and they constitute a considerable
fraction of the total number of loss indications.
Measurements have shown that in many cases the majority of
window decreases are due to timeouts, rather than fast
retransmits. This experiment therefore highlights the MCC
protocol's policy in handling ACKs and timeouts, and which TCP
flavor it is closest to.
3.2 Experiment Set 2: Diverse Losses and Delay
This set of experiments addresses the effect of having
multiple receivers with different losses and delay. We
consider both independent and correlated (due to congestion)
losses. The throughput of the MCC protocol when the
receivers have different combinations of delay and loss rates
(e.g. high loss, low delay vs. low loss, high delay) is
compared to that of the competing TCP flows. There are several
objectives behind this comparison. First, to better understand
the effect of losses, especially with multiple receivers, and the
effect of retransmissions. Second, many MCC protocols use a TCP
throughput equation to model TCP behavior, and this set of
experiments evaluates the accuracy of the equation used. Third,
the difference between the reactions to independent and
congestion losses can reveal some of the protocol's
characteristics.
3.3 Experiment Set 3: Feedback Suppression
Most MCC protocols depend on the receivers’ feedback in
making decisions. Some multicast transport protocols have
network support (e.g. by routers) to improve their
performance. This support is normally in the form of
feedback aggregation or suppression to avoid problems such as
ACK and NACK implosion. In this part we consider
experiments to test the effect of feedback suppression on
fairness. The experiments consist of topologies with critical
receivers having their feedback suppressed. The definition of
critical receivers depends on the protocol as will be shown in
the next section. Feedback suppression affects the accuracy
of decisions based on feedback. These experiments
investigate the tradeoff between the amount of feedback and
the correctness of decisions or computations.
3.4 Experiment Set 4: Group Representatives
Several MCC protocols use the idea of representatives to
achieve scalability. Feedback is normally provided only by
these special receivers. An important task is the selection and
changing of the representatives. The experiments here target
this operation by having configurations containing multiple
sets of receivers that can be selected as representatives and
having scenarios that trigger the changing between them. The
aim of the experiments is to study the effect of these changes
on the overall protocol operation and on its fairness to TCP.
4 Case Study: pgmcc
In this section we apply our experiments to a case study
congestion control scheme ‘pgmcc’. We show the results
obtained using the NS-2 simulator [1] to evaluate pgmcc and
analyze its fairness with regard to TCP. Both TCP Reno and
SACK are examined and they provide close results and
similar conclusions (except for the first experiment, as will be
shown). For each experiment performed, we show the results
obtained, and the interpretation and analysis of these results.
The source models used in the simulation are FTP sources with a
packet size of 1400 bytes. The links have a propagation delay of
1ms and a bandwidth of 10Mb/s, unless otherwise specified. The
queues have a drop-tail discard policy (RED was also examined and
the results are similar) and a FIFO service policy, with capacity
to hold 30 packets.
In the topologies we use MS and MR to refer to MCC
(Multicast Congestion Control) sender and MCC receiver,
respectively. In the graphs we show the sequence numbers sent by
the sender vs. time, which has the same effect as showing the
throughput. We provide a fairness metric f as the ratio between
TCP and pgmcc throughput. (The final sequence numbers in the
graphs represent the aggregate throughput, so their ratio can be
considered the ratio between the average instantaneous
throughputs.)
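For concreteness, the metric can be computed directly from the
final sequence numbers (a trivial helper of our own; the numbers
in the example are hypothetical, chosen only to show the form of
the computation):

def fairness(tcp_final_seq, mcc_final_seq):
    """f = TCP throughput / pgmcc throughput, using final sequence
    numbers as aggregate throughput (equal packet sizes assumed)."""
    return tcp_final_seq / mcc_final_seq

print(f"f = {fairness(37000, 50000):.0%}")  # hypothetical values: f = 74%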
4.1 Experiment Set 1: Window and Timeouts
In the first experiment we use the topology shown in Figure 2
to test the fairness of pgmcc with the different TCP flavors in
a very simple case, where we have only a single TCP session
competing with a pgmcc session over a bottleneck link
(500Kb/s, 50ms). pgmcc has a number of identical receivers, so
any one of them could be the acker and their number should make
no difference.
Starting with TCP Reno and comparing the throughput of
the TCP sender with the pgmcc sender we find in Figure 3
that the two are very close to each other. At short intervals one
of them can get a higher throughput and its window grows larger,
but they soon converge again, and on average they are identical.
Similar behavior can also be observed between different TCP
sessions. This is the same topology and results obtained in [15].
Acker changes and the order in which
sessions start have no effect in this topology. The simulation
run in Figure 3 is 300 seconds long. However, by extending
the same experiment for a longer period (3000 seconds) we
get the results in Figure 4. As we see in Figure 4, pgmcc is
not fair to TCP Reno and it gets higher throughput, which
becomes clear as the period gets longer.
The reason for this behavior can be understood if we look more
closely at how pgmcc works in comparison to TCP Reno. pgmcc times
out after a stall, when the ACKs stop coming in and a long
timeout expires. But there is no specific information about the
exact timeout value for pgmcc and how it is determined. Without a
timeout, pgmcc reacts to
congestion by cutting the window in half, similar to fast
recovery in TCP.

[Figure 2: TCP session competing with an MCC session over a
bottleneck link. Topology: TCP sender (TS) and receiver (TR), MCC
sender (MS) and receivers (MR1-MR3), sharing the congested link.]
[Figure 3: Throughput of pgmcc vs. TCP Reno over a short period.]
[Figure 4: Throughput of pgmcc vs. TCP Reno over a longer period
(f=74%).]

TCP, on the other hand, adjusts its timeout
value depending on the measured RTT and the standard
deviation of the measured RTT values. In addition, ACKs in pgmcc
are per-packet, as in SACK, while in Reno ACKs are cumulative
only, so for Reno to acknowledge a packet, all preceding packets
have to be received. This has a large
effect when multiple packets are dropped from a window.
We attribute the unfairness observed over long periods to these
differences in handling timeouts and responding to ACKs and
losses. In this simulation the timeout of pgmcc is fixed to a
relatively high value (4 seconds). In the first 300 seconds,
shown in Figure 3, no timeouts happen to TCP, which is the reason
for the equivalent throughput. But during the longer period
several timeouts happen, and since the timeout value of pgmcc is
fixed to a relatively high value, it incurs fewer timeouts and
slow starts in comparison to TCP. Furthermore, TCP depends solely
on ACKs for responding to the receiver, changing the window,
entering fast recovery, and entering slow start. If no more ACKs
are coming due to a large number of losses (e.g. loss bursts),
TCP is prevented from entering fast recovery and has to time out.
pgmcc, on the other hand, has another stimulus for sending
retransmissions: the NACK.
By observing the window size changes in both protocols we found
that our explanation is correct: the pgmcc window is larger most
of the time, and pgmcc does not enter the slow start phase. We
can see this in Figure 5, where the pgmcc window is higher than
the TCP window most of the time. Figure 6 gives a closer look
over a shorter period; we see how the TCP window size goes to 1
several times while the pgmcc window does not.
We have also conducted several other experiments varying the
timeout value, and we found that the results depend heavily on
this value. For example, if the timeout is set to a relatively
small value, TCP can get a much higher throughput. The
appropriate timeout value for achieving fairness depends on
dynamic network conditions that change over time.
Next we try the same experiments with New-Reno and SACK, which
reduce timeouts and solve the performance problems when multiple
packets are dropped from a window. Figure 7 shows that pgmcc is
fairer with SACK (New-Reno has similar results to SACK).
To further clarify the effect of the timeout and window size, we
run the same TCP Reno experiment with an adaptive timeout
mechanism added to pgmcc. In this experiment pgmcc uses an
adaptive timeout similar to TCP's, and the resetting of the timer
is controlled to be as close as possible to TCP Reno: it is reset
only if there are no packets missing in the received bitmask
(i.e. all packets are acked).
Because of differences in RTT between different ackers, after
a switch a fixed timeout is used until the adaptive timeout for
the new acker is computed. Figure 8 shows the result of
pgmcc compared to TCP Reno after adding the adaptive
timeout. The modified pgmcc is friendly to TCP Reno.
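A minimal sketch of such an adaptive timeout, following the
standard TCP computation of a smoothed RTT plus a multiple of its
deviation (the EWMA constants are the usual TCP ones; the fixed
fallback value after an acker switch is an assumption on our
part):

class AdaptiveTimeout:
    """TCP-style retransmission timeout driven by RTT samples from
    the current acker's ACKs."""
    FALLBACK_RTO = 4.0  # fixed timeout used right after an acker switch

    def __init__(self):
        self.srtt = None
        self.rttvar = None

    def on_rtt_sample(self, r):
        if self.srtt is None:  # first sample for this acker
            self.srtt, self.rttvar = r, r / 2.0
        else:                  # standard EWMA updates
            self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - r)
            self.srtt = 0.875 * self.srtt + 0.125 * r

    def on_acker_switch(self):
        # RTTs of different ackers differ; restart the estimation.
        self.srtt = self.rttvar = None

    def rto(self):
        if self.srtt is None:
            return self.FALLBACK_RTO
        return self.srtt + 4.0 * self.rttvar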
These experiments clarify some of the characteristics and
design choices of pgmcc. It is similar to TCP SACK in
handling ACKs and losses, and it avoids timeouts as much as
possible. Since the deployment of SACK is permitted and it
is currently increasing, there is no requirement to degrade pgmcc
to TCP Reno, and these design choices seem to be correct.

[Figure 5: Window size comparison of pgmcc and TCP.]
[Figure 6: Closer window size comparison.]
[Figure 7: Throughput of pgmcc vs. TCP SACK (f=92%).]
[Figure 8: Throughput of pgmcc with the adaptive timeout vs. TCP
Reno (f=96%).]
4.2 Experiment Set 2: Diverse Losses and Delay
This experiment evaluates the effect of different combinations of
RTT and loss rate on the protocol behavior, and shows how
accurate the equation used for computing the throughput is. In
Figure 9 we have two pgmcc receivers, one with high RTT (400ms)
and low loss rate (.4% or 2%) and the other with lower RTT
(200ms) and higher loss rate (1.6% or 8%). Losses in this
experiment are independent (e.g. over wireless links), not
correlated. This enables us to control the parameters accurately
so that both links have equal throughput, and to evaluate the
results in this case. The next experiment will consider the more
complex case of congested links.
In Figure 10 we see that pgmcc and the two TCP sessions
have close throughput. (We set the RTT and loss rate parameters
to let the two TCP sessions get the same throughput, according to
the TCP equation.) The loss rates here are .4% for the low loss
link and 1.6% for the high loss link. In Figure 11 we use a loss
rate of 2% for the low loss link and 8% for the high loss link,
which causes pgmcc to be unfair to the high loss rate TCP
session. The reason is mainly the difference in reacting to
packet losses. In
pgmcc the reliability window is separated from the
congestion window and the handling of acknowledgements is
different. Unlike TCP, the pgmcc sender keeps sending new
packets even if previously lost packets have not been received
yet. This separation between reliability and congestion seems to
be unavoidable in order to achieve acceptable performance in
reliable multicast. (In [16] the authors mention that this
separation is done so that the scheme can work with both reliable
and unreliable protocols, while in [15] the reason given is that
in PGM repair packets can be sent with a significant delay after
loss detection.) In addition, according to [13], the simplified
equation used for computing the throughput covers fast
retransmission only and does not take timeouts into account. It
also overestimates the throughput for losses above 5% (as also
pointed out in [15]), so it is suitable only when loss rates are
below 5% and no timeouts happen.
This experiment shows that at high loss rates pgmcc can be unfair
to TCP, due to congestion control ignoring previously lost
packets and due to the inaccuracy of the throughput equation.
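The overestimate at high loss rates can be illustrated by
comparing the simplified formula with the full model of [13],
which includes the timeout term (a numerical sketch; the packet
size, RTT, and retransmission timeout values are illustrative):

import math

def t_simple(s, rtt, p):
    """Simplified throughput model: fast retransmission only."""
    return s / (rtt * math.sqrt(2.0 * p / 3.0))

def t_full(s, rtt, p, t_rto):
    """Full approximation from [13], including the timeout term."""
    denom = (rtt * math.sqrt(2.0 * p / 3.0)
             + t_rto * min(1.0, 3.0 * math.sqrt(3.0 * p / 8.0))
             * p * (1.0 + 32.0 * p * p))
    return s / denom

s, rtt, t_rto = 1400, 0.2, 1.0  # bytes, seconds (illustrative)
for p in (0.004, 0.016, 0.02, 0.08):
    ratio = t_simple(s, rtt, p) / t_full(s, rtt, p, t_rto)
    print(f"p={p:.3f}: simplified/full = {ratio:.2f}")
# The ratio grows with p: at 8% loss the simplified formula
# predicts roughly twice the throughput of the full model.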
In the next experiment we have the more complex case in
Figure 12, where we use high and low BW (1Mb/s and .5Mb/s,
respectively) instead of low and high losses. The
delays are 200ms for the low BW link and 800ms for the high
BW link. Losses in this topology are due to congestion, so it
is very difficult to control the parameters and the results
obtained are sometimes very surprising.

[Figure 9: MCC session with receivers having different delays and
loss rates (high loss and low loss links; TS1/TR1, TS2/TR2)
competing with TCP sessions.]
[Figure 10: Throughput of pgmcc vs. the two TCP sessions with low
loss rate (f1=82%, f2=85%).]
[Figure 11: Throughput of pgmcc vs. the two TCP sessions with
high loss rate (f1=78%, f2=54%).]
[Figure 12: MCC session with receivers having different delays
and loss rates (due to congestion; high BW and low BW links)
competing with TCP sessions.]

An interesting result, which is actually not related to
congestion losses, is shown in Figure 13: the pgmcc session is
unfair to one of
the TCP sessions. At first look one may think that the reason for
this is the same as in the last experiment. But a closer look
reveals a major difference: here there are no acker changes,
which means that pgmcc followed the high throughput session. The
reason is the following:
Although the high BW, high RTT side has lower throughput,
due to the high BW no packets get dropped and no NACKs
get sent. So the sender does not receive anything from this
side and gets only the NACKs of the low BW side.
This result leads us to the general question of what the
definition of TCP-friendliness actually is, and whether it is the
best way to achieve fairness with TCP. For example, in this case
there is a TCP session with a lower throughput, but it seems to
suffer no major problems due to pgmcc. Of course its packets are
somewhat delayed by the higher throughput of the pgmcc session,
but this delay may be negligible in comparison to the high
propagation delay. (We also ran the same experiment with a TCP
session on the high BW path instead of the pgmcc session, and the
throughput seen by the TCP session on the high BW path was the
same in both cases.) So does pgmcc
really need to slow down to this receiver, even if it is not
suffering?
4.3 Experiment Set 3: Feedback Suppression
In this experiment we are testing the use of feedback
suppression in the routers and its effect on congestion control.
In PGM, if feedback aggregation is used, the first instance of
a NACK for a given data segment is forwarded to the source
and subsequent NACKs are suppressed. This can cause some
problems, because a worse receiver can have its NACKs
suppressed. For example, using the topology in Figure 14 we
find that feedback aggregation will cause pgmcc to be unfair
to TCP, because the worse receiver MR3 will always have its
NACKs suppressed (the link leading to the MR3 router has a 50 ms
delay). The throughput of pgmcc and TCP without
network support is similar to Figure 4. In Figure 15 we see
that with network support, pgmcc gets much higher
throughput than TCP. In Figure 15 there are no acker changes, and
the acker remains one of the closer receivers.
This experiment shows that feedback suppression can
cause pgmcc to be unfair to TCP. Accordingly we
recommend that some changes are needed in the way
feedback aggregation is performed with pgmcc. In [15] the
authors propose a possible approach to reduce the effect of
suppression, which is to store the loss ratio for each NACK
forwarded and not perform suppression if a higher loss ratio
NACK for the same sequence number occurs. But as we have seen,
the loss ratio alone is not enough, since both NACKs may have the
same loss ratio while the RTT is the decisive factor. A solution
is to store both the loss ratio and the RTT for each NACK and to
compare throughputs using these values. The RTT in this case
cannot be the exact RTT, but it can be some proxy such as the
sequence number used for computing the RTT or the time of NACK
arrival at the network element; this is acceptable since the
value is only used for comparison. This solution would solve the
problem, but it increases storage and computation overhead in the
routers. We propose a low-overhead solution to this problem:
random suppression of NACKs in the router. The router suppresses
NACKs only with some probability, which gives the worst
receiver's NACKs some chance to reach the sender. There will be a
tradeoff here between the amount of feedback passed and the
accuracy of acker selection.
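A minimal sketch of the random suppression idea (our illustrative
rendering; the suppression probability is a tunable parameter,
not a value prescribed in this paper):

import random

P_SUPPRESS = 0.8  # illustrative: drop ~80% of duplicate NACKs

class SuppressingNetworkElement:
    """PGM-style element: the first NACK per data segment is
    always forwarded; duplicates are suppressed only with
    probability P_SUPPRESS, so the worst receiver's NACKs
    occasionally get through to the sender."""
    def __init__(self):
        self.forwarded = set()  # segments with a NACK already forwarded

    def on_nack(self, seqno):
        if seqno not in self.forwarded:
            self.forwarded.add(seqno)
            return True                      # first NACK: forward
        return random.random() > P_SUPPRESS  # duplicate: sometimes forward

    def on_retransmission(self, seqno):
        # Retransmissions delete the NACK state from the element.
        self.forwarded.discard(seqno)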
4.4 Experiment Set 4: Group Representatives
This experiment shows the effect of using group
representatives and changing them. In pgmcc we evaluate the
effect of acker switching using the topology shown in Figure
16, which is the same as that in Figure 14 except that the link
leading to the MR3 router has a higher delay (200ms). No
suppression will happen in this case, because the retransmissions
will reach MR1 and MR2 before the NACK of MR3 reaches the MR1
router. In PGM, retransmissions are directed by the router only
onto those links that sent NACKs, and these retransmissions
delete the NACK state from routers. As shown in Figure 17, the
throughput of pgmcc becomes very low, and the TCP throughput is
much higher. This does not constitute a fairness problem, but a
performance degradation problem for pgmcc.
[Figure 13: Throughput of pgmcc vs. the two TCP sessions (f1=56%,
f2=96%).]
[Figure 14: MCC session with receivers having almost the same
loss rate, but different delays, sharing a congested link (TS/TR,
MS, MR1-MR3).]
The reason for this bad performance under the given
topology is the acker changing between a high RTT receiver
and a low RTT receiver. By looking at acker switches in
detail, we found that two switches happen in succession, too
close to each other: the first NACK, from the closer receiver,
causes an acker change, then the other NACK causes another
change, to the far one, when it arrives. This pattern repeats
with packet losses, even though the acker switching factor c has
a value of 2/3 to dampen acker oscillation.
It is interesting to look at why acker changing causes this
bad performance (in other experiments it has no effect). By
taking a more detailed look in Figure 18 we see how the
change from the far receiver to the close receiver, then to the
far receiver again (a vertical line means an acker switch) has
caused the rate of the sender to decrease (the window is cut
several times). In Figure 19 we get a closer look at what happens
between the two changes: after the change to the close receiver,
new ACKs arrive before old ACKs. The old ACKs that arrive at
1226.5 do not cause new packets to be sent, which means that they
do not generate new tokens. After that, when new ACKs arrive, the
window starts at a slow rate, which means that it has been cut
several times. Figure 20 shows how the window is cut at 1226.5.
[Figure 15: Throughput of pgmcc vs. TCP with NACK suppression
(f=23%).]
[Figure 16: MCC session with receivers having a large difference
in delay (high-delay link to MR3 plus a congested link).]
[Figure 17: Throughput of pgmcc vs. TCP in the topology of
Figure 16 (f=164%).]
[Figure 18: Detailed sequence of pgmcc packets during an acker
change.]
[Figure 19: More detailed sequence of pgmcc packets during an
acker change.]
[Figure 20: Window size changes of the pgmcc session during an
acker change.]

The reason is the out-of-order ACK delivery and the reactions the
sender takes accordingly. False loss detections can occur,
because ACKs for
old packets have not arrived yet. Also, on a loss detection the
sender tries to realign the window to the actual number of
packets in flight, which will not be computed correctly after the
switch, because there are still packets and ACKs in flight
to and from the old acker. In order to solve this problem the
sender needs to keep track of the recent history of acker
changing and the ACKs sent by each acker. In addition the
bitmask provides information about the recent packets
received by the acker. Accordingly the sender can adjust its
window and avoid these problems.
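One way to realize this bookkeeping is sketched below (our own
construction, not part of the pgmcc specification): the sender
tags each outgoing packet with an acker epoch and ignores ACKs
for packets sent before the most recent switch, relying on the
bitmask to learn the new acker's receive state.

class AckerHistory:
    """Sender-side tracking of acker changes, so that ACKs still
    in flight to/from an old acker are not mistaken for losses or
    used to realign the window."""
    def __init__(self):
        self.epoch = 0        # incremented on every acker switch
        self.acker_id = None

    def on_acker_switch(self, new_acker_id):
        self.epoch += 1
        self.acker_id = new_acker_id

    def stamp(self):
        # Recorded in the sender's per-packet state at send time.
        return self.epoch

    def ack_is_current(self, packet_epoch):
        # Apply window/loss logic only for ACKs of packets sent
        # since the last switch; older ACKs are discarded, and the
        # bitmask is consulted instead when realigning the window.
        return packet_epoch == self.epoch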
This experiment shows that acker switching between receivers with
a large difference in delay degrades the performance of pgmcc.
This problem will be more common at larger scales.
5 Conclusions and Future Work
We presented a set of carefully designed experiments to
evaluate multicast congestion control protocols. These
experiments clarify the operational details of the protocol by
targeting specific mechanisms. They also show the
differences with TCP and the related fairness issues. We carried
out the experiments on a case study protocol, pgmcc. Some
problems were found due to high losses, feedback suppression, and
group representative switches. Improvements are proposed to cope
with some of the problems, such as random suppression of NACKs,
better sender response after representative switches, and an
adaptive timeout in case fairness to TCP Reno is required.
We recommend that researchers consider our scenarios
during the evaluation of multicast congestion control
protocols. The scenarios presented here are picked manually
based on our intuition and understanding of multicast
congestion control mechanisms. However, we are currently
developing a more systematic and general methodology for
generating scenarios that lead to violation of fairness or
performance degradation. We plan to adopt and extend the
STRESS [9][10] framework and hope that this provides a
valuable tool to expedite the development and standardization
of such protocols.
Acknowledgements
We would like to thank Luigi Rizzo and Gianluca Iannaccone
for providing the NS code of pgmcc.
References
[1] L. Breslau, D. Estrin, K. Fall, S. Floyd, J. Heidemann, A.
Helmy, P. Huang, S. McCanne, K. Varadhan, Y. Xu, and H.
Yu. Advances in Network Simulation. IEEE Computer, vol. 33,
no. 5, pp. 59-67, May 2000.
[2] S. Bhattacharyya, D. Towsley, and J. Kurose. The Loss Path
Multiplicity Problem for Multicast Congestion Control.
Proceedings of the IEEE Infocom’99, New York, March 1999.
[3] A. Chaintreau, F. Baccelli, C. Diot. Impact of Network Delay
Variation on Multicast Session Performance with TCP-like
Congestion Control. Proceedings of the IEEE Infocom 2001,
Anchorage, Alaska, April 2001.
[4] D. DeLucia and K. Obraczka. Multicast feedback suppression
using representatives. Proceedings of the IEEE Infocom’97,
Kobe, Japan, April 1997.
[5] D. DeLucia and K. Obraczka. A Multicast Congestion Control
Mechanism for Reliable Multicast. Proceedings of the IEEE
ISCC’98, Athens, Greece, June 1998.
[6] C. Diot, W. Dabbous, and J. Crowcroft. Multipoint
Communications: A survey of protocols, functions, and
mechanisms. IEEE Journal of Selected Areas in
Communications, April 1997.
[7] K. Fall and S. Floyd. Simulation-based Comparison of Tahoe,
Reno, and SACK TCP. Computer Communication Review, vol. 26,
pp. 5-21, July 1996.
[8] S. J. Golestani and K. K. Sabnani. Fundamental Observations
on Multicast Congestion Control in the Internet. Proceedings of
the IEEE Infocom’99, New York, March 1999.
[9] A. Helmy, D. Estrin, and S. Gupta. Systematic Testing of
Multicast Routing Protocols: Analysis of Forward and
Backward Search Techniques. The 9th International Conference on
Computer Communications and Networks (IEEE ICCCN 2000), Las
Vegas, Nevada, October 2000.
[10] A. Helmy, S. Gupta, D. Estrin, A. Cerpa, and Y. Yu.
Systematic Performance Evaluation of Multipoint Protocols.
Proceedings of FORTE/PSTV, IFIP, Kluwer Academic
Publication, Pisa, Italy, October 2000.
[11] A. Mankin, A. Romanow, S. Bradner, and V. Paxson. IETF
Criteria for Evaluating Reliable Multicast Transport and
Application Protocols. RFC 2357, June 1998.
[12] K. Obraczka. Multicast Transport Mechanisms: A Survey and
Taxonomy. IEEE Communications Magazine, January 1998.
[13] J. Padhye, V. Firoiu, D. Towsley, and J. Kurose. Modeling
TCP throughput: A Simple Model and its Empirical Validation.
ACM SIGCOMM 1998, Vancouver, BC, Canada, September
1998.
[14] J. Padhye and S. Floyd. On Inferring TCP Behavior. ACM
SIGCOMM 2001, San Diego, California, August 2001.
[15] L. Rizzo. pgmcc: A TCP-friendly Single-Rate Multicast
Congestion Control Scheme. ACM SIGCOMM 2000,
Stockholm, Sweden, August 2000.
[16] L. Rizzo, G. Iannaccone, L. Vicisano, and M. Handley.
PGMCC single rate multicast congestion control: Protocol
Specification. Internet-Draft, draft-ietf-rmt-bb-pgmcc-00.txt, 23
February 2001.
[17] T. Speakman, D. Farinacci, J. Crowcroft, J. Gemmell, S. Lin,
A. Tweedly, D. Leshchiner, M. Luby, N. Bhaskar, R.
Edmonstone, K. M. Johnson, T. Montgomery, L. Rizzo, R.
Sumanasekera, and L. Vicisano. PGM Reliable Transport
Protocol Specification. Internet-Draft, draft-speakman-pgm-
spec-05.txt, 24 November 2000.
[18] J. Widmer, R. Denda, and M. Mauve. A Survey on TCP-Friendly
Congestion Control. Special Issue of the IEEE Network
Magazine, May 2001.
[19] J. Widmer and M. Handley. Extending Equation-based Congestion
Control to Multicast Applications. ACM SIGCOMM 2001, San
Diego, California, August 2001.