USC Computer Science Technical Reports, no. 801 (2003)
A Framework for Systematic Evaluation of Multicast Congestion Control Protocols
Karim Seada, Ahmed Helmy, Sandeep Gupta
Electrical Engineering-Systems Department
University of Southern California, Los Angeles, CA 90089
seada@usc.edu, helmy@usc.edu, sandeep@poisson.usc.edu
Keywords: Systematic protocol testing, protocol modeling,
simulation, congestion control, multicast, TCP-friendliness,
STRESS
ABSTRACT
Congestion control is a major requirement for multicast to be
deployed in the current Internet. Due to the complexity and
conflicting tradeoffs, the design and testing of a successful
multicast congestion control protocol is difficult. In this paper we
present a novel framework for systematic testing of multicast
congestion control protocols based on the STRESS methodology.
We extend STRESS to study multicast congestion control, so as to
tackle its new semantics and the high complexity of its
verification. In our framework, we start by designing an
appropriate model for the studied protocol based on the protocol
specifications and correctness conditions, and then we develop an
automated search engine to generate all possible error scenarios
and filter these errors to arrive at a selected set of scenarios that
we evaluate in more detailed simulations. We apply our
methodology to a single-rate case study protocol and we are able to
generate scenarios that can be used for testing multicast congestion
control building blocks. Some of the interesting results are the
effect on the throughput of receivers joining and leaving, the effect
of feedback suppression on congestion control, and the effect of
changes in the special receivers representing the group. Due to the
importance of fairness to TCP, we generate additional scenarios,
and use them in a simulation study to evaluate the fairness of
multicast congestion control mechanisms when running with
competing TCP flows. We design the experiments to reveal the
differences between TCP and the proposed multicasting protocol
by targeting specific congestion control mechanisms such as
timeouts, response to ACKs and losses, the effect of independent
and correlated losses, in addition to multicast mechanisms such as
the effect of multiple receivers, group representative selection, and
feedback suppression when there is network support. Our analysis
shows the strengths and potential problems of the protocol and
points to possible improvements. We hope that this will provide a
valuable tool to expedite the development and standardization of
such protocols.
1 INTRODUCTION
The growth of the Internet and its increased heterogeneity have
increased the complexity of network protocol design and
validation. The need for a systematic method to study and evaluate
network protocols is becoming more important. Such methods aim
to expedite protocol development and improve protocol robustness
and performance. The STRESS (Systematic Testing of Robustness
by Evaluation of Synthesized Scenarios) methodology [11][12][13]
provides a framework for systematic design and testing of network
protocols. STRESS uses a finite state machine model of the
protocol and search techniques to generate protocol scenarios that
exercise the protocols in manners that expose errors or worst-case
performance.
In this paper we present a new framework for testing multicast
congestion control protocols based on the STRESS methodology.
The goal of this framework is the generation of scenarios to
evaluate protocol correctness, performance, and fairness. The
evaluation targets specific protocol mechanisms that are
incorporated in current and proposed protocols. Our aim is to have
an abstract evaluation, applicable to a class of protocols. For
illustration, we have chosen, as a case study, a multicast
congestion control scheme called pgmcc [18], a single-rate
scheme designed to be fair to TCP. However, we evaluate the
protocol behavior based
on protocol mechanisms and building blocks, so that our results
can be generalized to other protocols that use similar mechanisms.
The motivation and contribution of this work lie mainly in two
areas. The first is extending the STRESS methodology by applying it to
a complex problem that has multiple dimensions: multicast,
reliability, and congestion control (with the related issue of TCP-
friendliness). STRESS has been applied earlier to multicast routing
and transport protocols, but congestion control is a new application
that necessitates incorporating new semantics into the
methodology, such as modeling of sequence numbers, traffic, and
congestion window. This problem is also characterized by a high
degree of interleaving between events and long-term effects of
individual events. New challenges are faced due to the large size of
the search space, and are handled by developing new types of
equivalences and modifications to the search strategy to reduce its
complexity. A mechanism for error filtering is developed to cope
with the large number of error scenarios generated.
The second motivation and contribution is to provide a tool to
aid in the design of multicast congestion control protocols.
Multicast transport has been an area of extensive research and
many protocols have been proposed [7][15]. One of the most
important criteria that the IETF [14] requires a multicast protocol to
satisfy is to perform congestion control in order to safely deploy
the protocol in the Internet. It is believed that the lack of good,
deployable, well-tested multicast congestion control mechanisms is
one of the factors inhibiting the usage of IP multicast [22]. By
providing a systematic framework for evaluating these protocols
we aim to expedite the development and standardization in this
area. This tool is also targeted to application designers, so that if it
is impossible to eliminate all problems potentially caused by a
protocol, then there should be at least a systematic way to
enumerate them and study their effects on the applications that use
the protocol, so that the application can be designed to work
around these problems.
In addition, we provide a set of experiments, based on the
scenarios generated, that can be used as a benchmark to evaluate
the fairness of multicast congestion control mechanisms when
running with competing TCP flows. We select the experiments to
target specific congestion control mechanisms and to reveal the
differences between TCP and the proposed multicasting protocol.
Several congestion control mechanisms are targeted by the
experiments such as timeouts, response to ACKs and losses, and
the effect of independent and congestion losses. In addition, we
evaluate multicast mechanisms such as the effect of multiple
receivers, group representative selection, and feedback suppression
when there is network support. We analyze pgmcc using the
generated scenarios in the ns-2 simulator. Our analysis shows
some strengths and potential problems of the protocol and points to
possible improvements. Some scenarios reveal TCP unfriendly
behavior, due to high losses or feedback suppression. Also, poor
performance due to group representative switches has been
observed.
The rest of this paper is organized as follows. In Section 2 we
provide a background about multicast congestion control and a
brief description of pgmcc. Section 3 gives a brief background of
STRESS and a general overview of our methodology. In Section 4
we present the protocol model that we use and the errors that we
examine. In Section 5 we show our search technique and its
complexity. Section 6 shows the problems identified. Section 7
discusses some of the challenges faced in error generation and how
they are solved. Section 8 generalizes the error scenarios to
building blocks. Detailed simulations for some of the identified
problems in addition to the fairness experiments are provided in
Section 9. Conclusions and future work are presented in Section
10.
2 MULTICAST CONGESTION CONTROL
The design of a multicast congestion control protocol that
provides high performance, scalability, and TCP-friendliness is a
difficult task that attracts a lot of research effort. Multicast
congestion control can be classified into two main categories:
single-rate and multi-rate. Single-rate has a limited scalability
because all receivers must receive data at the same rate (that of the
slowest receiver). It also suffers from feedback implosion and
drop-to-zero [2] (where the rate degrades significantly due to
independent losses by a large number of receivers) problems.
Multi-rate, where different receivers can receive at different rates,
is more scalable but it has other concerns such as the complex
encoding of data, possible multiple paths in the layered approach,
and the effects of receivers joining and leaving layers.
TCP-friendly multicast congestion control can also be classified
into window-based and rate-based. Window-based protocols have
similar congestion window control as TCP, while rate-based
protocols depend on the TCP throughput equation [16] for
adjusting the transmission rate. Rate-based protocols typically have
smoother rate transitions and are fair with TCP over the long term.
Rate-based protocols also require accurate RTT computation,
which is not a simple task in multicast. For more details about
these issues see [9][21].
Another possible classification for single-rate protocols is
whether or not they are representative-based. Non-representative-
based protocols solve the scalability problems using some
aggregation hierarchy. This requires complex building of the
hierarchy and may need network support. The performance is still
limited and [4] shows that even without losses, small variations in
delay can cause fast performance degradation with the increase in
number of receivers. Representative-based protocols provide a
promising emerging approach to solve the scalability problems,
where a small dynamic set of receivers is responsible for providing
the feedback [6]. The main challenge is the dynamic selection of
representatives in a scalable and efficient manner with appropriate
handling of changes in representatives. This still needs further
investigation. Examples of single-rate representative-based
protocols are pgmcc (window-based) [18] and TFMCC (rate-
based) [22]. We will use pgmcc as our case study example, and in
the rest of this section we will provide a more detailed description
of pgmcc.
pgmcc [18] is a single-rate multicast congestion control scheme
that is designed to be TCP-friendly. It supports both scalability and
fast response. To achieve fast response while retaining scalability,
a group representative called the acker is selected and a tight
control loop is run between it and the sender. It is called the acker
because it is the receiver that sends the ACKs. Other receivers can
send NACKs when they lose packets, if a reliable transport
protocol is used¹. pgmcc has been used to implement congestion
control in the PGM protocol [19]. A sample scenario for pgmcc is
shown in Figure 1.

¹ pgmcc can be used with both reliable and non-reliable transport
protocols. Non-reliable protocols will also need to send NACKs
from time to time for congestion control purposes.
The acker is the representative of the group. It is chosen as the
receiver with the worst throughput to ensure that the protocol will
be TCP-friendly. A window-based TCP-like controller based on
positive ACKs is run between the sender and the acker. The
feedback in pgmcc is provided in receiver reports that are used by
the sender to estimate the throughput. They are embedded into the
NACKs and ACKs and contain the loss rate and information for
computing an estimate for the round trip time (RTT) of the sending
receiver. The RTT can be evaluated by using timestamps or
sequence numbers. The loss rate is computed using a low pass
filter to avoid oscillations.
The most critical operation of pgmcc is the acker selection and
tracking. As mentioned, the receiver with the worst throughput is
selected as the acker. When another receiver with worse
throughput sends a NACK, an acker change may occur. A receiver
that does not send NACKs is assumed to have no congestion
problems, and will not be considered at all. Computation of
throughputs uses information sent by receivers and the TCP-like
formula T ∝ 1/(RTT·√p), where T is the throughput, RTT is the
round trip time estimate, and p is the loss rate [16].

Figure 1: A sample pgmcc scenario. (1) The sender sends a packet.
(2) The packet is lost by one receiver. (3) That receiver sends a
NACK and the old acker sends an ACK. (4) If the NACK is from a
receiver worse than the acker, it is designated as the new acker.

An acker switch occurs from receiver j to receiver i if
T(i) < c·T(j), where T(n) is the
throughput computed by the sender for receiver n. c is a constant
(0< c ≤1) that is used during throughput comparison to dampen
oscillations of acker selection when the difference between the
current acker and the worst receiver is not large. There is a 32-bit
field in the ACK called the bitmask, which indicates the receive
status of the most recent 32 packets. This is included to help the
sender deal with lost and out-of-order ACKs.
A window-based congestion control scheme similar to that used
by TCP is run between the sender and the acker. The parameters
used are a window W and a token count T. W is the number of
packets that are allowed to be in flight and has the same role as the
TCP window, while T is used to regulate the sending of data
packets by decrementing T for every packet sent and incrementing
it for every ACK received. On packet loss, W is cut by half and T
is adjusted accordingly. A packet is assumed lost when it has not
been acked in a number (normally three) of subsequent ACKs. W
and T are initialized to 1 when the session starts or after a stall
when ACKs stop arriving and a timeout occurs. The congestion
control here is more conservative than that used by TCP. For
example the exponential opening of the window (slow start phase)
is limited to a small window size. The reason is that the
exponential opening performed by TCP is considered very
aggressive and there are still fears of deploying such mechanisms
for multicast protocols over the Internet. pgmcc has a different
flow/reliability window than that used for congestion to decouple
congestion control from retransmissions, so it can be used with
both reliable and non-reliable protocols.
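As a concrete illustration, the window/token rules just described can be sketched in a few lines. This is a minimal sketch based only on the rules quoted above; the class and method names are ours, and the point at which the ignored-ACK count is computed (before or after the cut) is an assumption, since the text leaves it implicit.

```python
# Minimal sketch of the pgmcc-style window/token controller described
# above; names are ours, and details not fixed by the text are assumptions.

class WindowTokenController:
    def __init__(self):
        self.W = 1.0      # window: packets allowed in flight (starts at 1)
        self.T = 1.0      # token count: regulates sending
        self.ignore = 0   # ACKs to ignore while adjusting T after a cut

    def can_send(self):
        return self.T >= 1.0

    def on_send(self):
        self.T -= 1.0                    # one token spent per packet sent

    def on_ack(self):
        if self.ignore > 0:              # token adjustment after a window cut
            self.ignore -= 1
            return
        self.W += 1.0 / self.W           # W = W + 1/W
        self.T += 1.0 + 1.0 / self.W     # T = T + 1 + 1/W

    def on_loss(self):                   # packet unacked in ~3 subsequent ACKs
        self.W = max(1.0, self.W / 2.0)  # cut window in half
        self.ignore = int(self.W)        # assumption: ignore next W/2 ACKs
                                         # (W has already been halved here)

    def on_timeout(self):                # stall: ACKs stopped arriving
        self.W = 1.0
        self.T = 1.0
```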
Some multicast transport protocols depend on router support for
feedback aggregation. For example, in PGM routers the first
instance of a NACK for a given data segment is forwarded to the
source, and subsequent NACKs are suppressed. As will be
illustrated by the scenarios we generate, this can have a significant
effect on the operation of pgmcc.
During the study of the protocol there are different variations to
consider. The main two variations are
(i) reliable vs. non-reliable transport, and
(ii) with feedback aggregation (router support) vs. without
feedback aggregation.
We are not aware of any related work in developing automated
modules for systematic testing of multicast congestion control or
similar protocols.
3 METHODOLOGY FRAMEWORK
STRESS (Systematic Testing of Robustness by Evaluation of
Synthesized Scenarios) provides a framework for systematic
design and testing of network protocols. STRESS uses a finite state
machine (FSM) model of the protocol and search techniques to
generate protocol tests. The generated tests include a set of
network scenarios each of which leads to violation of protocol
correctness and behavioral requirements. In [10][11] STRESS has
been used to verify multicast routing protocols using forward and
backward search techniques. In [12][13] it is used for the
performance evaluation of timer suppression mechanisms in
multipoint protocols. In [1] Mobile IP and multicast over ATM are
studied. Currently STRESS is also being used for the verification
of wireless MAC protocols. In this paper we extend STRESS to
study multicast congestion control. Congestion control is a new
application that necessitates incorporating new semantics into the
STRESS methodology, such as modeling of sequence numbers,
traffic, and congestion window. Multicast congestion control is
characterized by a high degree of interleaving between events and
long-term effects of individual events. New challenges are faced
due to the large size of the search space, and are handled by
developing new types of equivalences and modifications to the
search strategy to reduce its complexity. A mechanism for error
filtering is developed to cope with the large number of error
scenarios generated. In the remainder of this section we give a
high-level description for our methodology and how different
blocks described throughout the paper fit within.
Figure 2 shows the main framework. The protocol designer or
tester inputs the protocol specifications and correctness conditions
in the form of a state model (Section 4.1), a transition table
(Section 4.2), and an error model (Section 4.3). The search engine
(Section 5) explores the search space for errors and generates a set
of error scenarios. These error scenarios are fed into the error filter
(Section 7.1) which classifies the scenarios and outputs a compact
set of error scenarios that can be used for testing the protocol.
These scenarios can be used for more detailed simulation to
confirm their ability to disclose errors/low-performance and to
ensure that no crucial details are missing. The feedback from the
simulations can be used by the designer to adjust the input and the
filtering approach. Currently the search engine and error filtering
have been automated.

Figure 2: Block diagram of our methodology. The protocol designer
supplies the protocol specification and correctness conditions; the
search engine produces all error scenarios; error filtering reduces
them to the selected scenarios used in detailed simulation, whose
feedback is used to adjust the model and the generated scenarios.

Figure 3: Topology Model: The path between the sender and a receiver
is represented by a virtual end-to-end channel. The topological details
(delays, losses, suppression, and routing effects) are captured by taking
all permutations of loss and packet reception events. A queue manager
on each channel models losses, reordering, and suppression.
4 PROTOCOL MODEL
In order to process the protocol and apply the STRESS
methodology we need to have a suitable model that represents the
protocol and the system. The model is built according to the
protocol specification and additional assumptions based on our
understanding of the protocol behavior and its context. The model
consists of
(i) a representation of the protocol states,
(ii) a state transition table defining the possible events
and their effects on states,
(iii) a set of errors that represents our correctness criteria.
Critical to our methodology is how the topology is modeled. A
multicast transport session topology consists of a sender, a group
of receivers, links, and routers. This detailed topology can be very
complex for our model and search. Also, many of the details are
unimportant. For a multicast congestion control protocol operation,
the topological details are reflected mainly on which packets are
received by the sender and receivers, and when these packets are
received. Our goal here is to simplify the topology and still capture
characteristics such as delays, reordering, selective and correlated
losses, and feedback suppression.
Figure 3 shows the simplified topology. A virtual end-to-end
channel exists between the sender and each receiver. We are able
to capture all the required characteristics with this topology by
using all permutations of events (this will become clearer when we
explain the search) including losses, reception of packets in
different orders, and suppression. The order in which packets are
received or dropped in this topology emulates the delays, routing
effects and feedback aggregation in larger topologies. This
simplification may hide some details, but we believe that such
details do not affect our protocol evaluation. In addition, to further
verify the validity and effect of our generated scenarios, we
performed simulations with more detailed topologies (see Section
9).
4.1 States
The global state is an aggregation of local states of a sender and
a number of receivers, with a separate end-to-end channel between
the sender and each receiver, as shown in the topology in Figure 3.
Each entity has a number of state parameters that define its local
state.
• Sender state: packets sent, ACKs received, sequence number
of next packet to send, sequence number of next ACK
expected, current acker, current acker throughput, window
size, number of tokens, duplicate count (to count the duplicate
ACKs), ignore count (for adjusting the token count after a
window cut).
• Receiver state: receiver ID, packets received, sequence
number of next packet expected, loss ratio, member or not.
• A Packet channel between sender and each receiver, such that
each packet in the channel has the following parameters:
• Packet: type (data, ACK, or NACK), sequence number,
acker (if it is a data packet), packets received by acker (if
it is an ACK), loss ratio (if it is an ACK or NACK),
missing packet (if it is a NACK)
The channel holds the packets between the sender and the
receiver with data going in one direction and feedback (ACK or
NACK) going in the other. The underlying network characteristics,
such as packet losses, packet reordering (e.g., due to route
changes), and network delays, are handled by the channel.
Multicast group membership is represented by defining at a state
whether each receiver is a group member or not. It is assumed that
an underlying multicast routing protocol exists and delivers the
packets to current members only. In addition to the combination of
these local states we have also added a global loss estimator to our
global state, in order to estimate the loss rates on different channels
independent of local computations by receivers. As will be shown
later, this is required only for protocol evaluation.
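To make the state definition concrete, the following is a minimal sketch of these state parameters as Python dataclasses; the field names paraphrase the lists above and are not taken from any pgmcc implementation.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass(frozen=True)
class Packet:
    kind: str                        # 'DATA', 'ACK', or 'NACK'
    seq: int
    acker: Optional[int] = None      # DATA: current acker
    acked: Tuple[int, ...] = ()      # ACK: packets received by the acker
    loss_ratio: float = 0.0          # ACK/NACK: receiver's loss ratio
    missing: Optional[int] = None    # NACK: the missing packet

@dataclass
class SenderState:
    sent: List[int]                  # packets sent
    acks: List[int]                  # ACKs received
    next_seq: int = 0                # sequence of next packet to send
    next_ack: int = 0                # sequence of next ACK expected
    acker: Optional[int] = None      # current acker
    acker_tput: float = 0.0          # current acker throughput
    window: float = 1.0
    tokens: float = 1.0
    dup_count: int = 0               # duplicate-ACK counter
    ignore_count: int = 0            # token adjustment after a window cut

@dataclass
class ReceiverState:
    rid: int                         # receiver ID
    received: List[int]              # packets received
    next_expected: int = 0           # sequence of next packet expected
    loss_ratio: float = 0.0
    member: bool = True              # group member or not

@dataclass
class GlobalState:
    sender: SenderState
    receivers: List[ReceiverState]
    channels: List[List[Packet]]     # one end-to-end channel per receiver
    global_loss: List[float]         # global loss estimator per channel
```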
4.2 Transition Table
A simplified version of this protocol’s transition table is shown in
Table 1. The transition table describes, for each possible event, the
conditions for its occurrence, and the effects of that event. For
example, Send is the event of sending a packet, its condition is that
the number of tokens is at least 1, and its effects are putting the
packet in the group member receivers’ channels and adjusting the
sender parameters. During the search process the conditions of
these events are checked at every state, and if satisfied the
corresponding event is triggered and a new state is generated. Slow
start is ignored in our model, since it is a transient phase and we
are more interested in testing the steady state behavior. Some
changes can be made to the transition table to study variations. For
example the retransmission of packets can be omitted to consider
non-reliable transport. NACK suppression is emulated by NACK
dropping.
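As an example of how one row of the transition table turns into code, here is a minimal sketch of the Send event, reusing the state classes sketched in Section 4.1; during the search, such functions are applied to copies of the current state so that each event produces a new branch.

```python
import copy

def send_enabled(state: "GlobalState") -> bool:
    # Precondition from Table 1: Tokens >= 1
    return state.sender.tokens >= 1

def apply_send(state: "GlobalState") -> "GlobalState":
    new = copy.deepcopy(state)             # branch on a copy of the state
    s = new.sender
    s.tokens -= 1                          # decrement number of tokens
    s.sent.append(s.next_seq)              # add packet to sent list
    pkt = Packet(kind='DATA', seq=s.next_seq, acker=s.acker)
    s.next_seq += 1                        # increment next sequence to send
    for r, chan in zip(new.receivers, new.channels):
        if r.member:                       # members' channels only
            chan.append(pkt)
    return new
```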
4.3 Errors Model
Table 2 shows the errors we check for in our model. These errors
are not necessarily errors in the protocol itself, since some of them
may be compensated for by other layers (e.g., by an application).
Nevertheless, they are potential problems that should be
considered during the implementation of the protocol or by the
applications using it. Some of these errors violate the correctness
such as ‘gaps in sequence space’, which means that, for some
reason, the sender has not sent all the packets in sequence. Most of
the other errors mainly cause performance degradation such as
‘unnecessary starvation’, which means that the sender has packets
to send but it is stalled, although the group receivers are ready.
Duplicate packets means that a receiver received the same packet
more than once, which is a waste of the network bandwidth.
Wrong congestion notification can lead to an unnecessary window
cut and reduction in sending rate. An error that leads to fairness
violation is ‘wrong acker selection’, which means that a receiver
other than the worst receiver is chosen as acker. Since in this
protocol we consider fairness (TCP-friendliness) as being
synonymous to choosing the worst receiver as acker, ‘wrong acker
selection’ models the fairness violation.
During the search we look for erroneous states by checking for
these error conditions. The point at which an error is checked is
important, since this has a significant impact on the search time
and on the number of erroneous states generated. It also affects the
number of false errors. We want to reduce the error checking as
much as possible by considering only the minimum set of states
required to discover the error. For example whenever possible we
check for errors only at the end of the search path (leaves of search
tree). This is possible for errors that become permanent once they
appear. Whether an error is permanent or not depends on the
events that may change its conditions. For example ‘unnecessary
starvation’ is a permanent error and can be checked at the end,
since it leads to a stall, but if we include timeouts in our model,
then timeout will be the event that can eliminate the starvation
condition and we will need to check for that error at timeouts.
5 SEARCH PROBLEM
The problem of test synthesis can be viewed as a search problem.
By searching the possible sequence of events over possible
network topologies and checking for errors at specific points, we
can construct the test scenarios that stress the protocol. Forward
search is used to investigate the protocol state space. As in
reachability analysis, forward search starts from initial states and
applies the stimuli (events) repeatedly to produce the reachable
state space (or part thereof). Conventionally, an exhaustive search
is conducted to explore the state space. In the exhaustive approach
all reachable states are expanded until the reachable state space is
exhausted. Of course in our case we cannot expand forever, so we
limit our search to a maximum sequence number, which is the
number of packets to be sent by the sender. The search process is
completely automated.
At each state during the search, the preconditions for events are
checked, and transitions are applied to generate new states. A list
of all visited (already expanded) and working (still to be expanded)
states is kept, so that new states can be checked against it to avoid
redundant search. Error checking is performed at specific states
during the search according to the type of error. An example of a
part of the search tree is shown in Figure 4. Notice that this leads
to state space explosion and techniques must be used to reduce the
complexity of the space to be searched.
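The search loop itself is a standard bounded forward search; a minimal sketch follows, assuming the state classes sketched in Section 4.1. Here enabled_events, apply_event, and check_errors stand in for the transition table (Table 1) and error model (Table 2), and descriptor stands in for the equivalence-aware state key discussed in Section 5.3.1.

```python
from collections import deque

def forward_search(initial, enabled_events, apply_event, check_errors,
                   max_seq, descriptor=repr):
    visited = {descriptor(initial)}   # already-expanded states
    working = deque([initial])        # states still to be expanded
    errors = []
    while working:
        state = working.popleft()
        for event in enabled_events(state):      # preconditions satisfied
            new = apply_event(state, event)      # trigger the transition
            if new.sender.next_seq > max_seq:    # bound: max sequence number
                continue
            key = descriptor(new)
            if key in visited:                   # avoid redundant search
                continue
            visited.add(key)
            errors.extend(check_errors(new))     # error model checks
            working.append(new)
    return errors
```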
5.1 Search Complexity
To get a measure of the complexity we are facing and to
understand the source of this complexity, we show in Table 3 the
number of visited states in a topology of 2 receivers and increasing
sequence number (the limit of the search). By using all the events
(complete interleaving) we find that the number of visited states
increases at a very high rate with the increase in sequence
numbers, so that search cannot be completed even for 4 sequence
numbers (we are able to reach it after performing some
optimizations as will be shown later). The next three columns
show the reduction in the number of visited states by performing
the search without packet reordering, packet losses, and member
joins and leaves, respectively. (Packet reordering means that the
channel delivers packets in all possible orders, so that at each state
there is an event triggered for each packet to be received
independent of its location in channel. Removing reordering means
that packets in the same channel and same direction arrive in order,
but interleaving still exists between packets in different channels or
different directions.) In the next column we show that without all
these three types of events, we can reach 20 sequence numbers
with a number of visited states about half of those reached with 3
sequence numbers and complete interleaving. These three events
are considered as undesirable events in our search. (Another reason
for considering them undesirable is that they are the main cause of
errors.) Tables 4 and 6 show the total number of
comparisons and the number of transitions, and their large increase
with the number of receivers and sequence numbers.
5.2 Comparison Time
In order to avoid redundancy in state generation, a list of all
visited (already expanded) and working (still to be expanded)
states is kept, against which new states can be checked. The
comparison time is the most time consuming process during the
search, since the comparison time for a state is proportional to the
number of visited and working states at that time. As the number
of states increases, the comparison time becomes extremely large,
which limits the number of sequence numbers that can be covered.
Some statistics are shown in Table 4. To reduce the comparison
time and expand our search space, we use hashing. A hash table is
used for storing all the states (visited and working); the working
list is still used for the search process only. Use of hashing reduced
the search time by several orders of magnitude and allowed us to
cover 4 sequence numbers instead of 3.
Table 1: A simplified transition table for the protocol

Event        | Precondition           | Effect (Transition)
Send         | Tokens >= 1            | Decrement number of tokens (T=T-1); add packet to sent list and increment the next sequence to send; add packet to all members' channels
Join         | Receiver not member    | Receiver becomes member
Leave        | Receiver member        | Receiver becomes non-member
Loss         | Packet in channel      | Remove packet from channel; if it is a data packet, update global loss estimation for that channel
Receive Data | Data packet in channel | Remove data packet from channel and update global loss estimation; if that receiver is a member, update its receive list, next sequence to receive, and loss ratio; if lost packets are discovered, generate and send NACKs and update loss ratio; if that receiver is the acker, send ACK
Receive ACK  | ACK in channel         | Remove ACK from channel; if ACK is from current acker, update its throughput using its loss ratio and RTT; update ACK list and sequence of next ACK expected; increment window and token (W=W+1/W, T=T+1+1/W); if packet losses are discovered (duplicate ACKs), cut window (W=W/2) and adjust token (ignore next W/2 ACKs)
Receive NACK | NACK in channel        | Remove NACK from channel; compare that receiver's throughput with the acker's throughput, and if the NACK is from a receiver worse than the acker, choose it as the new acker; retransmit
Table 2: The errors checked in our model

Error                         | When to check                             | How to check
Unnecessary starvation        | At the end of search or at timeout        | Tokens < 1 and no more data or ACKs in channels
Duplicate packets             | At data reception by receiver             | Check receive list for whether the packet already exists
Gaps in sequence space        | At the end of search                      | Check sent list of the sender for gaps
Wrong congestion notification | At congestion (when packet loss detected) | Check whether packets are really lost (using global loss estimator)
Wrong acker selection         | At sending (or at the end of search)      | Compare acker and receivers' throughput (by global loss estimators)
5.3 Equivalences
Exhaustive search has exponential complexity and the number of
states generated is very large. To reduce this complexity we use the
notion of equivalence. The notion of equivalence implies that by
investigating an equivalent subspace we can test for protocol
correctness. That is, if the equivalent subspace is verified to be
correct then the protocol is correct, and if there is an error in the
protocol then it must exist in the equivalent subspace. We use three
types of equivalence: state equivalence, path equivalence, and error
scenarios equivalence. Error scenarios equivalence is presented
later when we describe the error filtering approach used to reduce
the number of generated erroneous scenarios. State equivalence
means that two states are not exactly equal, but they are equivalent
from our viewpoint. Path equivalence is to find paths that lead to
the same states and prune them; this can cause a major reduction in
complexity as will be seen. Table 5 shows the number of visited
states when equivalences and hashing are used. This can be
compared to the direct search results in Table 4. Table 7 shows the
reduction in number of transitions due to equivalences in
comparison to Table 6.
5.3.1 State Equivalence
Examples of state equivalences used in our case study are
channel equivalence and receiver equivalence. Channel
equivalence is when two global states have similar local states and
similar packets in their channels but in different order. They are
considered equivalent, since we consider all orderings of packet
arrival. Receiver equivalence occurs when several receivers have
identical local states. In this case it is sufficient to apply the
transitions for only one of them, since the same search sub-trees
will be generated for others. For example, if we have two receivers
R1 and R2 with the same parameters, we do not need to apply the
same events for both of them, since this will just create identical
states with R1 and R2 swapped. One way to implement that is to
use descriptors to represent channels and receiver states where the
packet order and receiver ID are ignored in the descriptor
representation. A more general approach is to use a global
descriptor for representing the whole state taking all the state
equivalences into account.
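A minimal sketch of such a global descriptor follows, assuming the state classes of Section 4.1: sorting the channel contents discards packet order (channel equivalence), and sorting the per-receiver entries discards receiver identity (receiver equivalence), so equivalent global states map to the same key.

```python
def descriptor(state: "GlobalState"):
    entries = []
    for r, chan in zip(state.receivers, state.channels):
        local = (tuple(sorted(r.received)), r.next_expected,
                 r.loss_ratio, r.member)
        # channel as a multiset: packet order within a channel is ignored
        chan_key = tuple(sorted((p.kind, p.seq) for p in chan))
        entries.append((local, chan_key))
    s = state.sender
    sender_key = (tuple(s.sent), tuple(s.acks), s.next_seq, s.next_ack,
                  s.acker, s.window, s.tokens)
    # receiver IDs are ignored: identical receivers collapse under sorting
    return (sender_key, tuple(sorted(entries)))
```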
5.3.2 Path Equivalence
Path equivalences are based on the notion that if two events do
not affect each other, then it is not necessary to consider
interleaving between them. Detecting path equivalences can be
very complex, but it leads to large reductions in search space. One
way to identify path equivalences is to look at the different events
in the transition table and see how they affect each other, so that
we need not consider interleaving between events that do not affect
each other directly. An example is shown in Figure 5, which is the
search tree for one receiver and one sequence number. Notice that
there are a lot of redundant paths. Consider for example the leave
and join events. After a leave, you need to join only if there is a
change (send or receive) in your channel. Also, after a join, you
need to leave only if there is a change in your channel. A loss of a
packet is affected only by the receiving of that packet, which
means that repeating it on all paths is redundant.
Taking these equivalences into account we see in Figure 6 that
the complexity is reduced, and with higher number of receivers
and sequence numbers the reduction is greater. In Figure 6 the join
and leave events of a receiver are associated with sending and
receiving of packets on the channel of that receiver and the loss
event is triggered only once under the same send sub-tree. So we
have two major events (send and receive) and other events are
internally associated with them. Larger reduction can be obtained
by restricting the interleaving between packets received by
different receivers. A receiver is affected only by changes
occurring on its channel, and not by other receivers’ channels
(although the effect of that on the sender is taken into account
during the sending of new packets). So instead of taking the
permutations of packet arrival of all packets in all channels at
certain state, we need to take only the permutations of each
individual channel with the combinations between receivers. This
reduces the number of different packet arrival orders from
Σ_{k=0}^{l} C(l,k)·k!, where l = Σ_{i=1}^{m} n_i is the total number of
packets in the m channels (m is the number of receivers, and n_i is
the number of packets in channel i), to Π_{i=1}^{m} Σ_{k=0}^{n_i} C(n_i,k)·k!.
For example, in the case of two receivers (m=2), with n_1 = 3 and
n_2 = 2 packets in their channels, the number of different packet
arrival orders is reduced from 326 to 80.

Figure 4: An example of search space expansion (the search tree
after Send D0, with receive, loss, join, and leave events interleaved
for two receivers; Ri = receiver i, Di = data with sequence i,
Ai = ACK for sequence i).

Table 3: Number of visited states with 2 receivers. The last column
shows the number of visited states when equivalences (see Section
5.3) are exploited.

Seq | Complete | Without reorder | Without losses | Without join & leave | Without all | Complete with equivalences
1   | 64       | 64              | 52             | 16                   | 7           | 16
2   | 384      | 264             | 260            | 96                   | 16          | 96
3   | 28264    | 3820            | 11372          | 7066                 | 40          | 6082
... |          |                 |                |                      |             |
20  |          |                 |                |                      | 14520       |

Table 4: Number of comparisons without equivalences and hashing
(rows: receivers, columns: sequence numbers)

rcvr\seq | 1      | 2      | 3
1        | 109    | 311    | 17891
2        | 4650   | 1.48e5 | 1.50e8
3        | 2.00e5 | 1.16e9 |

Table 5: Number of comparisons with equivalences and hashing

rcvr\seq | 1   | 2     | 3
1        | 2   | 4     | 57
2        | 16  | 71    | 2887
3        | 108 | 18632 |

Table 6: Number of transitions without equivalences

rcvr\seq | 1    | 2     | 3
1        | 22   | 42    | 423
2        | 220  | 1568  | 6.7e4
3        | 1880 | 1.8e5 |

Table 7: Number of transitions with equivalences

rcvr\seq | 1  | 2     | 3
1        | 7  | 14    | 101
2        | 25 | 147   | 1931
3        | 81 | 10514 |
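The two arrival-order counts quoted above are easy to verify numerically; the following sketch evaluates both expressions for the m=2 example (the function names are ours).

```python
from math import comb, factorial

def arrivals_interleaved(l):
    # sum over k = 0..l of C(l, k) * k!: orderings of any k of l packets
    return sum(comb(l, k) * factorial(k) for k in range(l + 1))

def arrivals_per_channel(ns):
    # product over channels of the per-channel count (no cross-channel
    # reordering, only combinations between receivers)
    total = 1
    for n in ns:
        total *= arrivals_interleaved(n)
    return total

print(arrivals_interleaved(3 + 2))   # 326: full interleaving, l = 5
print(arrivals_per_channel([3, 2]))  # 80: per-channel permutations
```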
6 DETECTED PROBLEMS IN PGMCC
Using our methodology we are able to generate scenarios that
lead to the following problems. We want to emphasize that these
problems are not necessarily errors in the protocol itself. The reason
is that they may be handled by other layers, by the application, or
during the implementation. However these problems are important
and should be taken into account by the protocol designer or by
applications that use the protocol. In Table 9 we summarize all the
problems detected. Some of these problems are obvious and
known, but here they are detected in a systematic way using an
automated tool. In the following subsections we will explain the
non-obvious and important problems.
6.1 Unnecessary Starvation
We define unnecessary starvation as a scenario where the sender
has packets but it is not able to send, although the group receivers
are ready to receive more packets. In practice, the state may remain
stalled until the sender starts sending again, e.g. after a timeout.
This leads to a reduction in throughput and longer delays
experienced by receivers. There are two types of scenarios that can
lead to unnecessary starvation. The first type of scenarios is when
all the data to acker or the ACKs from acker are lost, since the
sender depends on the ACKs for generating new packets. This is
an expected problem and, according to the protocol definition, in
this case the rate should follow the worst receiver. The other
scenario that can lead to unnecessary starvation is when the group
representative (the acker in our case study) leaves the group
without notification (or crashes) and there is no more feedback to
the sender. The sender will wait for some time (until a timeout)
before it starts sending again and switches to another representative.
6.2 Duplicate Packets
In these scenarios a receiver receives the same packet more than
once, which wastes the network bandwidth. A scenario leading to
duplicate packets is when data packets arrive out-of-order at the
receiver causing it to send NACKs for packets still on their way.
Adding some delays in sending the NACKs by receiver and the
retransmissions by sender can reduce this effect, but there will be a
tradeoff between the delay value and the responsiveness in case of
packet losses. Another possible reason for duplicate packets is
sender or receiver crashes ending in a non-deterministic state.
6.3 Gaps in Sequence Space
Gaps in sequence space means that the sender is not sending
packets in sequence, which can also occur due to crashes leading to
non-deterministic states. Sender crashes may be considered
unlikely, and if they occur the session can be restarted. However,
the probability of a receiver crash increases with the number of
receivers. Hence we need to make sure that malicious behavior by a
single receiver will not affect the entire session. For example, if a
receiver sends an ACK or NACK with a sequence number greater
than the highest sequence number sent by the sender, how will the
sender respond? Such malicious behavior can be due to
a non-deterministic state caused by a receiver crash or may be an
intentional denial of service attack. We believe that such cases
should be considered if the protocol is to be scaled to a large number
of receivers.
6.4 Wrong Congestion Notification
Wrong congestion notification means that the sender receives a
congestion notification (duplicate ACKs) and cuts its window,
although there is no congestion and this reduction in sending rate is
not necessary. This can occur due to out-of-order data arrival at the
receiver causing wrong interpretation of loss. Another scenario that
leads to wrong congestion notification is when an acker switch
occurs between receivers with large difference in delay. This leads
to ACKs arriving out-of-order, which can cause severe
performance degradation if not handled properly by the sender as
will be shown by the simulations.
Figure 5: Search tree for one receiver and one sequence
number without path equivalence (number of visited states:
12, number of transitions: 22, number of comparisons: 109)
Figure 6: Search tree for one receiver and one sequence
number with path equivalence (number of visited states: 6,
number of transitions: 7, number of comparisons: 19)
6.5 Wrong Acker Selection
Wrong acker selection means that a receiver other than the worst
receiver is selected as the acker. This leads to fairness violation
and affects the TCP-friendliness of the protocol. Several scenarios
can lead to wrong acker selection. If there is a very bad receiver
such that it is not even able to send feedback, another receiver will
be selected as the acker. The other receiver may have a higher rate,
which can make things worse on the bad receiver path. This is
complementary to the unnecessary starvation case. In unnecessary
starvation, the problem is on the multicast session itself, while here
we are concerned about competing traffic. Simulations in Section 9
will show some interesting tradeoffs between these two cases.
Out-of-order packets causing false NACKs can also lead to
wrong acker selection. For example, if data packets arrive at a
receiver out-of-order (e.g. due to route change), the receiver sends
NACKs for the missing packets with a high loss ratio. This loss
ratio may cause the sender to designate it as the acker, although it
is a good receiver. The effect of this scenario will depend on how
loss ratio is computed in the case of out-of-order packets, how long
the receiver waits before sending the NACK, and what happens to
the loss ratio after the receiver discovers that the packets were not
lost. In many cases such specific details are not specified or even
articulated during protocol design.
Some multicast transport protocols depend on router support for
suppressing NACKs to avoid the NACK implosion problem.
NACK suppression can cause the NACKs of worse receivers to be
suppressed, especially if they are far away, which leads to wrong
acker selection. In the simulation section, we show how this
problem affects TCP-friendliness.
Another type of scenarios that can lead to wrong acker selection
is where loss ratio is estimated incorrectly by receivers leaving and
joining the group. If a receiver leaves the group and then joins
again after a while, the loss ratio estimate by that receiver will not
be an accurate estimate of the current channel situation. Moreover,
it may consider packets not received as lost in computing the loss
rate, although they were not lost. This problem can have a large
effect on the operation of the protocol, if receivers are not stable or
if there are problems in the underlying multicast routing and group
membership protocol. Since the throughput computation depends
on the local loss ratio computed by receivers, inaccuracy in that
value can significantly affect the protocol behavior. In pgmcc, the
throughput is used only for acker selection, but in rate-based
protocols that depend on throughput computations for determining
the sending rate, this problem can have a larger impact.
7 CHALLENGES
Several challenges are faced in the design of the framework and
the application of our methodology. Some of these challenges are
already discussed, such as the reduction of complexity and
comparison time mentioned in Section 5. In this section we
consider other challenges and how we manage to solve them.
7.1 Error Filtering
One of the problems we faced is the large number of generated
error scenarios². The reason is that, in our approach, all error states
are stored. These error states may actually represent a small set of
distinct error scenarios, but due to the large number of parameters
in our state definition, a large number of states represent the same
error. Consider as an example out-of-order data packets causing
wrong congestion notification. A large number of scenarios with
different orderings of data packets and interleavings of the other
events are generated, although all these scenarios represent the
same problem. Our solution to tackle this issue is to filter the error
states to keep only those scenarios that are really distinct. This
provides a way to remove similar error scenarios and classify
detected problems (each error scenario represents a class of
problems). We provide two heuristic methods for error filtering:
the critical parameters method and the critical transition method.

² An error scenario is a sequence of states (and events) that ends
with an error state based on our error model.
7.1.1 Critical Parameters Method
The idea of this method is choosing a set of parameters
depending on the kind of error, and keeping scenarios leading to
those error states that are different only on those parameters.

Table 8: Protocol mechanisms in our model
• Data send by sender
• ACK send by Acker
• NACK send by receiver (& Acker)
• Data receive by receiver (& Acker)
• ACK receive from current Acker (by sender)
• ACK receive from old Acker
• NACK receive by sender
• Acker changing (sender)
• Window & token increase (sender)
• Window cut (sender)
• Loss ratio calculation (receiver)
• Throughput estimation (sender)
• Throughput comparison (sender)
• Token adjustment after cut (sender)
• Duplicate ACKs (sender)
• Receiver leaving & joining
• Acker leaving & joining
• Channel: data reordering (to Acker, to receiver); ACK reordering
  (from new Acker, from old Acker); NACK reordering (from Acker,
  from receiver); data loss (to Acker, to receiver); ACK loss (from
  new Acker, from old Acker); NACK loss (from Acker, from
  receiver); data removal due to a non-member receiver

Table 9: Summary of detected problems
• Unnecessary starvation: all data to Acker lost; all ACKs lost;
  some data and some ACKs lost; Acker leaves (crashes)
• Duplicate packets: false NACKs due to out-of-order (OOO) data
  causing retransmissions; sender or receiver crashes
  (non-deterministic)
• Wrong congestion notification (wrong window cut): OOO data
  packets; Acker switch causing OOO ACKs
• Wrong Acker selection: very bad Acker (all data or ACKs lost),
  which can lead to wrong Acker selection after timeout; very bad
  receiver (data lost, ACKs or NACKs lost); NACK suppression;
  false NACKs; wrong loss ratio estimate by receiver due to
  leaving and joining
• Gaps in sequence space: sender or receiver crashes
  (non-deterministic)

The
parameters are chosen in a way so as to differentiate between error
states based on the cause of errors and ignoring parameters that
have no effect on the generation of the error, such that their
different combinations are not included. A more formal algorithm
is provided for automation:
For each error do the following:
• From the error table identify the error condition parameters (the
parameters used for detecting the error).
• For each of these parameters, check the transition table for the
events that can cause the error condition of the parameter to be
satisfied.
• For each error parameter that is affected by more than one event,
identify parameters that change due to the transitions caused by
these events.
• Use the parameters identified in the above step for classification,
and store error states that differ only on those parameters.
Example: Identify the critical parameters of the ‘unnecessary
starvation’ error.
• From the error table ( Table 2) the condition parameters for
unnecessary starvation are tokens and channels.
• According to the transition table (Table 1), the tokens decrease
due to the send event only and the packets are removed from the
channels due to the loss and receive events.
• Since channels can get empty due to different events, namely
loss or receive (data or ACK), other parameters that change due to
these transitions are (see Table 1) the receive list and ACK list.
These two parameters can differentiate between losses, data
receive and ACK receive.
By using these two parameters for classification, the number of
error scenarios generated using 2 receivers and 3 sequence
numbers reduced from 120 to 7.
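A minimal sketch of this filter follows, assuming error states are exposed as dictionaries of parameter values; only the first error state of each critical-parameter class is kept.

```python
def filter_by_critical_params(error_states, critical_params):
    seen = set()
    kept = []
    for state in error_states:
        # states identical on the critical parameters fall in one class
        key = tuple(state[p] for p in critical_params)
        if key not in seen:
            seen.add(key)
            kept.append(state)
    return kept

# E.g., for the 'unnecessary starvation' error the derived critical
# parameters are the receive list and the ACK list:
# kept = filter_by_critical_params(errors, ('receive_list', 'ack_list'))
```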
7.1.2 Critical Transition Method
In this method, classification is done based on the transitions that
lead to the error. We identify the different critical transitions in the
detected error scenarios and keep scenarios that differ only on
those transitions. We argue that the critical transitions detected by
the following algorithm are adequate for classifying error
scenarios.
Starting with a set of error scenarios and valid scenarios:
• For each error scenario find the closest valid scenario and
identify the first different transition as the critical transition (by
closest we mean the scenario that has the largest number of
identical subsequent states starting from the initial state).
• Classify error scenarios according to these error-triggering
transitions.
For example, in Figure 7 we show a simple error scenario
leading to unnecessary starvation and one of its closest valid
scenarios. The loss of data is the critical transition here.
Example: Identify the critical transitions of the ‘unnecessary
starvation’ error.
Using the set of generated error scenarios and valid scenarios, the
critical transitions are the loss of data, loss of ACK, and receive of
data. The last critical transition is due to the case when data arrives
at the acker while it is not a member, which we call the acker leave
(or crash) problem. This reduces the number of error scenarios
generated using 2 receivers and 3 sequence numbers from 120 to 3.
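A minimal sketch of this classification follows, assuming each scenario is a sequence of transitions (events); closeness is measured by the length of the common prefix, and the first differing transition is taken as the critical one.

```python
def common_prefix_len(a, b):
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def critical_transition(error_scenario, valid_scenarios):
    # closest valid scenario: largest number of identical initial states
    closest = max(valid_scenarios,
                  key=lambda v: common_prefix_len(error_scenario, v))
    i = common_prefix_len(error_scenario, closest)
    return error_scenario[i]          # first transition that diverges

def classify_errors(error_scenarios, valid_scenarios):
    classes = {}
    for s in error_scenarios:
        t = critical_transition(s, valid_scenarios)
        classes.setdefault(t, []).append(s)
    return classes                    # one class per critical transition
```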
7.1.3 Comparison
Each of these methods has its advantages and drawbacks. The
critical transition method is more general and can be easily adapted
for different applications. It is also more accurate and compact (in
the number of filtered scenarios). For example, a packet loss
causing a certain error will be the same whether it happens at
sequence number 5 or at sequence number 50. The disadvantage of
this method is the high computational overhead for identifying the
closest scenario of each erroneous scenario. The critical parameters
method has lower overhead and the identification of the critical
parameters requires only the scanning of the transition and errors
tables and can be done offline. At runtime it needs only to compare
error states at those parameters. But the critical parameters method
is less accurate and generates a larger number of error scenarios,
especially as the search space becomes larger. The number of error
scenarios is fixed in the critical transition method, since it depends
only on the number of transitions. So with higher sequence
numbers we have a fixed number of error scenarios in the critical
transition method versus a fixed parameter identification time in the
critical parameters method.
We can combine the advantages of both methods by applying
them in sequence, starting with the critical parameters method
followed by the critical transition method. This gives lower
computational overhead and more compact scenarios.
7.2 Error Hiding
This problem is the inverse of equivalent error scenarios. Here
the same error state can be reached by different paths. Consider the
case in Figure 8, the error in this case is unnecessary starvation,
which can happen due to data loss or acker leaving. Since both
scenarios reach the same state and the state is kept only once, the
second scenario will not be shown.
Keeping all paths is a very expensive solution, which will
significantly increase the memory and comparison time. Since we
are suffering already from a search space explosion problem as
was shown by the number of visited states, this option is not
considered. Other more practical solutions are the use of some
global estimators to differentiate between such scenarios. For
example, by using the global loss estimator the two error states in
Figure 8 are not similar, since there is a packet loss in one path and
not in the other. But this solution can also greatly increase the
number of states, since the states implicitly carry some path
information. Another simple solution is the use of different
combinations of events to trigger different error scenarios. For
example in Figure 8, if we disable the packet loss event, errors
generated will be due to the leave and join.
Figure 7: An example of an error scenario (Send D0 followed by
D0 lost, ending in the error state) and its closest valid scenario
(Send D0 followed by D0 received). The last state in the error
scenario is an error and the thick line is the critical transition.

Figure 8: Different paths leading to the same state: data sent and
then lost, versus the receiver leaving, the data arriving while it is
not a member, and the receiver rejoining.
8 GENERALIZATION OF RESULTS TO OTHER
PROTOCOLS
In this section we describe how our scenarios can be used to test
other protocols that use similar mechanisms. In order to identify
the protocols that can be tested by these results, we look at how
they can be decomposed into specific building blocks. IETF
provides a framework for reliable multicast transport building
blocks [20] in order to support the development and
standardization of these protocols. The protocols can be composed
of these building blocks, and the building blocks can be reused
across multiple protocols. This building block approach is also
useful in supporting the testing of protocols, by simplifying it to
testing of individual building blocks. Table 10 shows some of the
components presented in [20] that can be tested by our scenarios.
Here we describe those scenarios generated by our framework for
the specific case study protocol and how they can be used to test
various abstract components:
• All data to acker³ or ACKs lost: This can be used to test the
congestion feedback component and how it reacts when there
is no feedback. It also tests the receiver controls component,
since a single receiver here can starve the whole multicast
session.
• Acker leaves or crashes: This tests the membership
management component and how it deals with special
members, such as ackers. This is also important for other
protocols having group representatives [6] that are responsible
for providing the feedback to sender. For example, in TFMCC
[22], a special receiver called CLR provides the feedback for
increasing the transmission rate.
• Out-of-order data or ACKs: This tests the congestion
feedback and how feedback is provided and handled. It tests
also the rate regulation and how the rate is adjusted due to
feedback.
• Acker switch: Also tests the congestion feedback and rate
regulation in case of change in the feedback provider.
• Very bad receiver: This tests the receiver controls since we
want to stop that receiver from starving the whole session.
Some protocols (e.g. [5]) prune bad paths from the multicast
tree or force some bad receivers to leave, in order to have an
acceptable rate. It also tests rate regulation, since we do not
want to make it worse for that receiver.
• Sender or receiver crashes: This tests how the group
membership is managed.
• False NACKs: This tests several components such as loss detection,
notification, and recovery, for example how losses are handled and
whether timers are set before sending notifications. Congestion
feedback is affected by false NACKs, since NACKs carry the feedback.
• Receiver leaves and joins: This tests the membership
management and also the congestion feedback if the receiver
performs computations for things like the loss rate and RTT.
In pgmcc the receiver computes the loss rate. In other
protocols such as TFMCC, it computes the loss rate and also
the RTT.
• NACK suppression: This tests the congestion feedback and its
interaction with router support.
Another building block that can be considered here is security.
Many of the problems presented may not only be caused by the network;
they can also be due to intentional denial-of-service attacks. This
matters when applications serve a wide range of unknown receivers.
The denial of service can target the multicast session itself or
other competing (e.g., TCP) sessions. Examples of such scenarios are
receivers sending wrong feedback, false NACKs, or no feedback at all,
which may lead to performance or fairness problems.
9 SIMULATION OF GENERATED SCENARIOS
To show the utility of our methodology, we have conducted a set of
detailed simulations for the generated error scenarios. These
simulations demonstrate how the low-level errors detected lead to
high-level performance problems and long-range effects. We also
provide a set of experiments targeted specifically at evaluating the
fairness of a multicast congestion control protocol when running with
competing TCP flows. Some of the fairness experiments are based on
generated scenarios and some are based on differences with TCP that
are not captured by our model.
We use ns-2 [3] for performing the experiments. The source
models used in the simulation are FTP sources with packet size of
1400 bytes. The links have propagation delay of 1ms, and
bandwidth of 10Mb/s, unless otherwise specified. The queues have
a drop-tail discard policy (RED is also examined and the results
are similar) and FIFO service policy, with capacity to hold 30
packets. For TCP sessions, both Reno and SACK are examined and they
support similar conclusions (except for the first experiment, as will
be shown). In the graphs we show the sequence numbers sent by the
sender vs. time, which conveys the same information as throughput. We
define a fairness metric f as the ratio between TCP and pgmcc
throughput. (The final sequence numbers in the graphs represent the
aggregate throughput, so their ratio can be taken as the ratio of the
average instantaneous throughputs.)
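As a minimal sketch (ours, with illustrative numbers rather than
measured values), the metric reduces to a ratio of final sequence
numbers:

def fairness(tcp_final_seq: int, mcc_final_seq: int) -> float:
    # f is the ratio of TCP to pgmcc throughput; for runs of equal
    # length, final sequence numbers are proportional to throughput
    return tcp_final_seq / mcc_final_seq

# illustrative values only: near-equal curves give f close to 1
print(f"f = {fairness(30000, 27800):.0%}")  # -> f = 108%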
9.1 Unnecessary Starvation
A generated scenario that leads to unnecessary starvation is that
of the acker leaving the group (or crashing), as was shown in
Section 6.1. Guided by that scenario we build the topology in
Figure 9, where we have a pgmcc session with a sender and two
receivers. The path to the receiver PR2 has a congested link
(1Mb/s), hence PR2 is the worst receiver and becomes the acker. A
TCP session is added to generate extra traffic over that path. The
experiment runs for 300 seconds. Every 10 seconds the acker PR2
leaves the group for 1 second. During this period the sender receives
no ACKs and cannot send any packets until it times out. This leads to
a drastic reduction in throughput of about 75%, as shown in Figure 10.
Table 10: Multicast transport building blocks
• Data reliability
  - Loss detection/notification
  - Loss recovery
• Group membership
  - Membership notification
  - Membership management
• Congestion control
  - Congestion feedback
  - Rate regulation
  - Receiver controls
• Security
9.2 Wrong Acker Selection
Incorrect loss-ratio estimation by receivers, due to leaving and
joining the group, can lead to wrong acker selection, as detected by
our methodology (Section 6.5). We again use the topology in Figure 9,
but this time it is the other receiver, PR1, that leaves the group
for 1 second every 10 seconds. After rejoining, PR1 can hold an
incorrect estimate of the loss ratio because of the packets it missed
while away, for which it starts sending NACKs. The sender may then
select PR1 as the acker instead of PR2. A longer delay (10ms) is
added to the link leading to PR1 to increase the likelihood of that
switch. It is interesting to study the effect of that wrong acker
switch on both the pgmcc and the competing TCP sessions.
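The following sketch (ours; pgmcc's actual implementation differs in
detail) captures the selection rule at issue: the sender picks as
acker the receiver with the lowest TCP-like throughput estimate, so
an inflated loss report after a rejoin can wrongly win.

import math

# Acker selection in the spirit of pgmcc [18]: the sender tracks, per
# receiver, the loss rate p reported in NACKs and an RTT estimate, and
# selects the receiver with the lowest T(p, rtt) ~ 1 / (rtt * sqrt(p)).
def tcp_like_throughput(p: float, rtt: float) -> float:
    return 1.0 / (rtt * math.sqrt(p)) if p > 0 else float("inf")

def select_acker(receivers: dict) -> str:
    # receivers maps name -> (loss_rate, rtt_in_seconds)
    return min(receivers, key=lambda r: tcp_like_throughput(*receivers[r]))

# PR1's loss rate is inflated by the packets missed while it was away,
# so it wins over PR2 despite PR2 sitting behind the congested link.
print(select_acker({"PR2": (0.02, 0.012), "PR1": (0.05, 0.022)}))  # PR1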
Figure 11 shows the throughput of pgmcc and TCP without the
receiver leaving and joining (no acker switching). Figure 12 shows
the effect of receiver leaving and joining on the throughput. In that
figure the rate of pgmcc increases sharply during the periods of
wrong acker selection (when PR1 becomes the acker), which is seen in
the high slope of the throughput. The low slope represents the
periods where the correct acker (PR2) is chosen, and it is close to
the TCP slope. Although the sender's throughput is high, these
packets are received only by PR1; PR2's throughput is much lower.
During the high-rate periods the congested link cannot carry all
packets to PR2, so they are lost, and in a reliable session they have
to be retransmitted.
9.3 Window and Timeouts
This set of experiments uses simple topologies to compare the MCC
protocol to different flavors of TCP in basic cases. The flavors are
Reno, New-Reno, and SACK [8]. This comparison helps
us understand the behavior of the protocol and the subtle differences
between it and TCP. Two congestion control issues are targeted by
this comparison: (1) reaction to losses and ACKs with its effect on
the window size, (2) retransmission timeouts. TCP Reno is still the
most widely deployed flavor in the Internet, but recent statistics
show that TCP New-Reno and TCP SACK deployment is
increasing [17]. New-Reno and SACK solve performance problems
of TCP in case of multiple-packet loss in a window and they reduce
the number of timeouts. When multiple packets are lost from a
single window of data, New-Reno and SACK can recover without a
retransmission timeout. With Reno and New-Reno at most one
dropped packet is retransmitted per round-trip time, while SACK
does not have this limitation [8]. This response to losses and ACKs
has a major impact on the window size, and consequently on the
fairness. According to [16] timeouts also have a significant impact
on the performance of TCP Reno and they constitute a considerable
fraction of the total number of loss indications. Measurements have
shown that in many cases the majority of window decreases are due
to timeouts, rather than fast retransmits. This experiment highlights
the protocol policy in handling ACKs and timeouts, and which
flavor it is closer to.
We use the topology shown in Figure 13 to test the fairness of
pgmcc with the different TCP flavors in a very simple case, where
we have only a single TCP session competing with a pgmcc session
over a bottleneck link (500Kb/s, 50ms). pgmcc has a number of
identical receivers, so any one of them could be the acker.
Starting with TCP Reno and comparing the throughput of the TCP sender
with the pgmcc sender, we find in Figure 14 that pgmcc is not fair to
TCP Reno. (This experiment runs for 3000 seconds; when we ran it for
300 seconds, as in [15], the unfairness shown here was not apparent.)
The reason for this behavior can be understood if we look more
closely at how pgmcc works in comparison to TCP Reno. pgmcc times out
only after a stall, when the ACKs stop coming in and a long timeout
expires, but no specific information is given about the exact timeout
value for pgmcc and how it is determined. Without a timeout, pgmcc
reacts to congestion by cutting the window in half, similar to fast
recovery in TCP. TCP, on the other hand, adjusts its timeout value
depending on the measured RTT and the variance of the measured RTT
values. In addition, ACKs in pgmcc are per-packet as in SACK, while
Reno's ACKs are cumulative only, so for Reno to send an ACK for a
packet, all packets in between have to be received. This has a large
effect when multiple packets are dropped from a window.
Figure 9: A MCC session with two receivers, PR1 and PR2, where PR2 is
the worst receiver (and accordingly the acker) due to the congested
link. A TCP session also competes for the congested link. (Legend:
TS: TCP sender; TR: TCP receiver; MS: MCC sender; MR: MCC receiver.)
Figure 10: Throughput of pgmcc without and with acker leaving
Figure 11: Throughput of pgmcc and TCP without acker switch (f=108%)
Figure 12: Throughput of pgmcc and TCP with wrong acker switch (f=39%)
We attribute the unfairness observed over long periods to these
differences in handling timeouts and responding to ACKs and losses.
By observing the window-size changes in both protocols (Figure 15),
we found that the pgmcc window is larger most of the time and that it
does not enter the slow-start phase. We have also conducted several
other experiments with different timeout values and found that the
results depend heavily on this value. For example, if the timeout is
set to a relatively small value, TCP can obtain a much higher
throughput. The timeout value that achieves fairness depends on
dynamic network conditions that change over time.
Next we try the same experiments with New-Reno and SACK. New-Reno and
SACK reduce the number of timeouts and solve the performance problems
when multiple packets are dropped from a window. Simulation results
show that pgmcc is fairer (f=92%) with SACK and New-Reno.
To further clarify the effect of timeout and window size, we repeat
the TCP Reno experiment with an adaptive timeout mechanism added to
pgmcc. In this experiment pgmcc uses an adaptive timeout similar to
that used in TCP, and the resetting of the timer is controlled to
match TCP Reno as closely as possible: it is reset only if there are
no packets missing in the received bitmask (i.e., all packets are
acked). Because different ackers have different RTTs, a fixed timeout
is used after a switch until the adaptive timeout for the new acker
has been computed. Fig. 16 shows the result of pgmcc compared to TCP
Reno after adding the adaptive timeout. The modified pgmcc is
friendly to TCP Reno.
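A minimal sketch of this modification, assuming Jacobson/Karels-style
RTT estimators (the constants and method names here are ours, not
from the pgmcc code):

class AdaptiveTimeout:
    FIXED_RTO = 1.0  # seconds, used until the new acker's RTT is sampled

    def __init__(self):
        self.srtt = None    # smoothed RTT of the current acker
        self.rttvar = 0.0   # RTT variance estimate

    def on_acker_switch(self):
        # discard the old acker's estimate; fall back to the fixed RTO
        self.srtt = None

    def on_rtt_sample(self, rtt: float):
        if self.srtt is None:
            self.srtt, self.rttvar = rtt, rtt / 2
        else:
            self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - rtt)
            self.srtt = 0.875 * self.srtt + 0.125 * rtt

    def rto(self) -> float:
        if self.srtt is None:
            return self.FIXED_RTO
        return self.srtt + 4 * self.rttvar

    def should_reset_timer(self, bitmask_has_gaps: bool) -> bool:
        # reset only when all packets are acked, mirroring TCP Reno
        return not bitmask_has_gaps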
These experiments clarify some of the characteristics and design
choices of pgmcc. It is similar to TCP SACK in handling ACKs and
losses, and it avoids timeouts. Since SACK deployment is permitted
and is currently increasing, there is no need to degrade pgmcc to the
level of TCP Reno, and these design choices seem justified.
9.4 Diverse Losses and Delay
This set of experiments addresses the effect of having multiple
receivers with different losses and delay. We consider both
independent and correlated (due to congestion) losses. The
throughput of the MCC protocol when the receivers have different
combinations of delay and loss rate (e.g., high loss/low delay vs.
low loss/high delay) is compared to that of the competing TCP flows.
There are several objectives behind this comparison: First, better
understanding of the effect of losses, retransmissions, and delays
with multiple receivers. Second, many MCC protocols use a TCP
throughput equation to model the TCP behavior. This set of
experiments evaluates the accuracy of the equation used. Third, the
difference in reaction to independent and congestion losses can
reveal some of the protocol's characteristics.
In Fig. 17 we have two pgmcc receivers, one with high RTT (400ms) and
low loss rate (.4% or 2%) and the other with lower RTT (200ms) and
higher loss rate (1.6% or 8%). Losses in this experiment are
independent, not correlated, which lets us control the parameters
accurately so that both links yield equal throughputs and evaluate
the results in this case.
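These parameter pairs balance under the simplified TCP model
T \propto 1/(\mathrm{RTT}\sqrt{p}) (our arithmetic check; the exact
equation may include constant factors, but they cancel here):

\[
\frac{1}{0.4\,\sqrt{0.004}} \approx \frac{1}{0.2\,\sqrt{0.016}} \approx 39.5,
\qquad
\frac{1}{0.4\,\sqrt{0.02}} \approx \frac{1}{0.2\,\sqrt{0.08}} \approx 17.7
\]

Halving the RTT while quadrupling the loss rate leaves the product
RTT * sqrt(p), and hence the modeled throughput, unchanged.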
In Fig. 18 we see that pgmcc and the two TCP sessions achieve close
throughput. (We set the RTT and loss-rate parameters so that the two
TCP sessions get the same throughput according to the TCP equation.)
The loss rates here are .4% for the low-loss link and 1.6% for the
high-loss link. In Fig. 19 we use a loss rate of 2%
for the low loss link and 8% for the high loss link, which causes
pgmcc to be unfair to the high loss rate TCP session. This is mainly
because of the difference in how these protocols react to packet
losses. In pgmcc the reliability window is separated from the
congestion window and the handling of acknowledgements is
different. Unlike TCP, in pgmcc the sender keeps sending new
packets, even if previously lost packets have not been received. This
separation between reliability and congestion seems to be
unavoidable in order to achieve an acceptable performance in
reliable multicast. In addition, according to [16] the simplified
equation used for computing the throughput is for fast
retransmission only and it does not take timeouts into account. It
also overestimates the throughput for losses above 5% and so it is
suitable only when loss rates are below 5% and no timeouts happen.
This experiment shows that at high loss rates pgmcc can be unfair
to TCP due to the ignoring of previously lost packets by congestion
control, and due to the inaccuracy in the throughput equation.
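To illustrate the gap, the following sketch compares the simplified
model with the full model of [16], which includes the timeout term
(our transcription of the published formulas, with one ACK per packet
and t_rto as the retransmission timeout):

import math

def simplified(p: float, rtt: float) -> float:
    # fast-retransmit-only model: T = 1 / (RTT * sqrt(2p/3))
    return 1.0 / (rtt * math.sqrt(2 * p / 3))

def full(p: float, rtt: float, t_rto: float = 1.0) -> float:
    # full model of [16], adding the retransmission-timeout term
    retrans = rtt * math.sqrt(2 * p / 3)
    timeout = t_rto * min(1.0, 3 * math.sqrt(3 * p / 8)) * p * (1 + 32 * p ** 2)
    return 1.0 / (retrans + timeout)

# the overestimation factor of the simplified model grows with p
for p in (0.004, 0.02, 0.08):
    print(f"p={p:.3f}: simplified/full = {simplified(p, 0.2) / full(p, 0.2):.2f}")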
9.5 Feedback Suppression
Most MCC protocols depend on the receivers’ feedback in making
decisions. Some multicast transport protocols have network support
(e.g., by routers) to improve their performance. This support is
normally in the form of feedback aggregation or suppression to
avoid problems such as ACK and NACK implosion. In this part we
consider experiments to test the effect of feedback suppression on
fairness. The experiments consist of topologies with critical
receivers having their feedback suppressed. The definition of
critical receivers depends on the protocol as will be shown in the
next section. Feedback suppression affects the accuracy of
decisions based on feedback. These experiments investigate the
tradeoff between the amount of feedback and the correctness of
decisions or computations.
In PGM [19], if feedback aggregation is used, the first instance of
a NACK for a given data segment is forwarded to the source and
subsequent NACKs may be suppressed. Using the topology in Fig. 20, we
find that feedback aggregation causes pgmcc to be unfair to TCP,
because the worst receiver, MR3, will always have its NACKs
suppressed (the link leading to MR3's router has 50 ms delay). The
throughput of pgmcc and TCP without network support
is similar to experiment 1. In Fig. 21 we see that with network
support, pgmcc gets much higher throughput than TCP.
Figure 13: TCP session competing with a MCC session over a bottleneck
link. (Legend: TS: TCP sender; TR: TCP receiver; MS: MCC sender;
MR1-MR3: MCC receivers.)
Figure 14: Throughput of pgmcc vs. TCP Reno (f=74%)
Figure 15: Window size comparison of pgmcc and TCP Reno
This experiment shows that feedback suppression can cause pgmcc to be
unfair to TCP. Accordingly, some changes are needed in the way
feedback aggregation is performed with pgmcc. One solution is to
store both the loss ratio and the RTT for each NACK and to suppress
based on a combination of these values, but this imposes high storage
and computation overhead on the routers and is not general enough to
deploy. We propose a low-overhead solution: random suppression of
NACKs at the router. The router suppresses NACKs only with some
probability, which gives the worst receiver's NACKs a chance to reach
the sender. There is a tradeoff here between the amount of feedback
suppressed and the accuracy of acker selection, and hence fairness
with respect to TCP.
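A minimal sketch of the proposed probabilistic suppression (the
suppression probability q and the per-segment bookkeeping are
illustrative choices, not values from the paper):

import random

def forward_nack(seen_segments: set, segment: int, q: float = 0.9) -> bool:
    # the first NACK for a segment is always forwarded, as in PGM [19];
    # duplicates escape suppression with probability 1 - q
    if segment not in seen_segments:
        seen_segments.add(segment)
        return True
    return random.random() > q

state: set = set()
forwarded = sum(forward_nack(state, 7) for _ in range(1000))
print(f"{forwarded} of 1000 NACKs for segment 7 forwarded")  # first + ~10%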
9.6 Group Representatives
Several MCC protocols use the idea of representatives to achieve
scalability. Feedback is normally provided only by these special
receivers. An important task is the selection and changing of the
representatives. The experiments here target this operation by
having configurations containing multiple sets of receivers that can
be selected as representatives and having scenarios that trigger the
changing between them. The aim of these experiments is to study
the effect of these changes on the overall protocol operation and on
its fairness to TCP.
In pgmcc we evaluate the effect of acker switching, again using the
topology of Fig. 20, but with a higher delay (200ms) on the link
leading to the MR3 router. No suppression happens in this case,
because the retransmissions reach the MR1 and MR2 router before the
NACK of MR3 arrives there. In PGM, retransmissions are directed by
the router only onto those links that sent NACKs, and these
retransmissions delete the NACK state from the routers. As shown in
Fig. 22, the throughput of pgmcc becomes very low, while the TCP
throughput is much higher. This is not a fairness problem but a
performance-degradation problem for pgmcc.
The reason for this poor performance under the given topology is the
acker switching between a high-RTT receiver and a low-RTT receiver.
Fig. 16: Throughput of pgmcc with the adaptive timeout vs. TCP Reno (f=96%)
Fig. 17: MCC session with receivers having different delays and loss
rates competing with TCP sessions
Fig. 18: Throughput of pgmcc vs. the two TCP sessions with low loss
rate (f1=82%, f2=85%)
Fig. 19: Throughput of pgmcc vs. the two TCP sessions with high loss
rate (f1=78%, f2=54%)
Fig. 20: MCC session with receivers having the same loss rate, but
different delays
Fig. 21: Throughput of pgmcc vs. TCP with NACK suppression (f=23%)
Fig. 22: Throughput of pgmcc vs. TCP due to acker switches (f=164%)
Fig. 23: Detailed sequence of pgmcc packets during acker change
Fig. 24: Window size changes of pgmcc session during an acker change
By looking at acker switches in detail, we found that two switches
happen in quick succession: the first NACK, from the closer receiver,
causes an acker change, then the other NACK (upon its arrival) causes
another change back to the far receiver. This pattern repeats with
packet losses. It is interesting to examine why acker changing causes
this poor performance here (in other experiments it has no effect).
Fig. 23 shows more details and illustrates what happens between two
acker switches (each vertical line indicates an acker switch). We
find that after the switch to the close receiver, new ACKs arrive
before old ACKs. The old ACKs that arrive at 1226.5 do not cause new
packets to be sent, i.e., they do not generate new tokens. Later,
when new ACKs arrive, the window starts at a slow rate, which means
it has been cut. Fig. 24 shows how the window is cut at 1226.5. This
is due to the out-of-order ACK delivery and the consequent decisions
taken by the sender. Losses can be wrongly inferred, because ACKs for
old packets have not arrived yet. Also, on a loss detection the
sender tries to realign the window to the actual number of packets in
flight, which cannot be computed correctly right after the switch,
because there are still packets and ACKs in flight to and from the
old acker.
To solve this problem the sender needs to keep track of the
recent history of acker changes and the ACKs sent by each acker.
In addition, the bitmask provides information about the recent
packets received by the acker. Accordingly the sender can adjust
its window and avoid these problems.
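A sketch of this bookkeeping (the data layout and method names are
ours, purely illustrative): the sender remembers recent acker changes
and which acker each in-flight packet was sent under, so a late ACK
from the old acker yields a send token without triggering loss
detection or window realignment.

from collections import deque

class SwitchHistory:
    def __init__(self, depth: int = 4):
        # each entry records the last sequence sent before a switch
        # and the acker that was replaced at that point
        self.changes = deque(maxlen=depth)
        self.current = None

    def on_switch(self, last_seq_sent: int, old_acker: str, new_acker: str):
        self.changes.append((last_seq_sent, old_acker))
        self.current = new_acker

    def ack_is_stale(self, ack_seq: int, acker_id: str) -> bool:
        # a stale ACK comes from a previous acker for a pre-switch
        # packet; it should yield a token but no loss/realign decisions
        return acker_id != self.current and any(
            a == acker_id and ack_seq <= s for s, a in self.changes)

history = SwitchHistory()
history.on_switch(100, old_acker="MR3", new_acker="MR1")
print(history.ack_is_stale(98, "MR3"))  # True: token only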
This experiment shows that acker switching between receivers with a
large difference in delay degrades the performance of pgmcc. This
problem will be more common in larger network topologies and group
sizes.
10 CONCLUSIONS
We have described a framework for the systematic evaluation of
multicast congestion control protocols based on the STRESS
methodology. STRESS is extended to cover this type of protocol. The
application to congestion control adds new semantics to STRESS, such
as the modeling of sequence numbers, channel traffic, and the
congestion window. New challenges arise from the exponential increase
in the search space, the high level of interleaving between events,
and the long-term effects of individual events. These challenges are
handled by introducing new kinds of state and path equivalences.
Several modifications are also made to the search mechanism to
provide more efficient coverage of the search space, and two
algorithms are presented for error filtering to reduce the number of
error scenarios generated.
Several problems have been discovered in our case study, and a set of
scenarios was generated that can be used for testing multicast
congestion control building blocks. Some of the interesting problems
identified include the effect of receivers joining and leaving on the
throughput measurements, the effect of NACK suppression on feedback,
and the effect of having special receivers (such as the acker) that
can leave or change.
Identifying these problems in a systematic way is helpful for the
protocol designer and for applications using that protocol.
We have validated our results through detailed packet-level
simulations and presented a set of simulation experiments for
fairness evaluation. Improvements are proposed to cope with some
of the problems observed, such as random suppression of NACKs,
sender response after representative switches, and the adaptive
timeout in case fairness to TCP Reno is required.
The main limitation of our methodology is its high complexity, which
constrains the search space covered and limits us to the detection of
low-level errors. However, if not handled properly, these low-level
errors can lead to high-level performance problems and long-range
effects. Furthermore, low-level errors are more likely to occur in
larger topologies and real-life networks. As shown by the
simulations, these microscopic problems have a large effect on
performance, and they can be used to generate higher-level scenarios
and to guide simulations of larger scale in time and space.
REFERENCES
[1] S. Begum, M. Sharma, A. Helmy, S. Gupta. “Systematic Testing of
Protocol Robustness: Case Studies on Mobile IP and MARS.”
Proceedings of the 25th Annual IEEE Conference on Local Computer
Networks (LCN), Florida, November 2000.
[2] S. Bhattacharyya, D. Towsley, and J. Kurose. “The Loss Path
Multiplicity Problem for Multicast Congestion Control.” Proceedings
of the IEEE Infocom'99, New York, March 1999.
[3] L. Breslau, D. Estrin, K. Fall, S. Floyd, J. Heidemann, A. Helmy, P.
Huang, S. McCanne, K. Varadhan, Y. Xu, and H. Yu. “Advances in
Network Simulation.” IEEE Computer, vol. 33, No. 5, p. 59-67, May
2000.
[4] A. Chaintreau, F. Baccelli, C. Diot. “Impact of Network Delay
Variation on Multicast Session Performance with TCP-like
Congestion Control.” Proceedings of the IEEE Infocom 2001,
Anchorage, Alaska, April 2001.
[5] D. M. Chiu, M. Kadansky, J. Provino, J. Wesley and H. Zhu.
“Pruning Algorithms for Multicast Flow Control.” NGC2000,
Stanford, California, November 2000.
[6] D. DeLucia and K. Obraczka. “Multicast Feedback Suppression using
Representatives.” Proceedings of the IEEE Infocom'97, Kobe, Japan,
April 1997.
[7] C. Diot, W. Dabbous, and J. Crowcroft. “Multipoint
Communications: A Survey of Protocols, Functions, and
Mechanisms.” IEEE Journal of Selected Areas in Communications,
April 1997.
[8] K. Fall and S. Floyd. “Simulation-based Comparison of Tahoe, Reno,
and SACK TCP.” Computer Communication Review, July 1996.
[9] S. J. Golestani and K. K. Sabnani. “Fundamental Observations on
Multicast Congestion Control in the Internet.” Proceedings of the
IEEE Infocom'99, New York, March 1999.
[10] A. Helmy, D. Estrin, S. Gupta. “Fault-oriented Test Generation for
Multicast Routing Protocol Design.” Proceedings of Formal
Description Techniques & Protocol Specification, Testing, and
Verification (FORTE/PSTV), IFIP, Kluwer Academic Publication,
Paris, France, November 1998.
[11] A. Helmy, D. Estrin, S. Gupta. “Systematic Testing of Multicast
Routing Protocols: Analysis of Forward and Backward Search
Techniques.” The 9th International Conference on Computer
Communications and Networks (IEEE ICCCN), Las Vegas, Nevada,
October 2000.
[12] A. Helmy, S. Gupta, D. Estrin. “The STRESS Method for
Performance Evaluation of End-to-end Multicast Protocols”.
IEEE/ACM Transactions on Networking (ToN). To appear April
2004.
[13] A. Helmy, S. Gupta, D. Estrin, A. Cerpa, Y. Yu. “Systematic
Performance Evaluation of Multipoint Protocols.” Proceedings of
FORTE/PSTV, IFIP, Kluwer Academic Publication, Pisa, Italy,
October 2000.
[14] A. Mankin, A. Romanow, S. Bradner, and V. Paxson. “IETF Criteria
for Evaluating Reliable Multicast Transport and Application
Protocols.” RFC 2357, June 1998.
[15] K. Obraczka. “Multicast Transport Mechanisms: A Survey and
Taxonomy.” IEEE Communications Magazine, January 1998.
[16] J. Padhye, V. Firoiu, D. Towsley, and J. Kurose. “Modeling TCP
Throughput: A Simple Model and its Empirical Validation.” ACM
SIGCOMM 1998, Vancouver, BC, Canada, September 1998.
[17] J. Padhye and S. Floyd. “On Inferring TCP Behavior.” ACM
SIGCOMM, August 2001.
[18] L. Rizzo. “pgmcc: A TCP-friendly Single-Rate Multicast Congestion
Control Scheme.” ACM SIGCOMM 2000, Stockholm, Sweden,
August 2000.
[19] T. Speakman, J. Crowcroft, J. Gemmell, D. Farinacci, S. Lin, D.
Leshchiner, M. Luby, T. Montgomery, L. Rizzo, A. Tweedly, N.
Bhaskar, R. Edmonstone, R. Sumanasekera, and L. Vicisano. “PGM
Reliable Transport Protocol Specification.” RFC 3208, December
2001.
[20] B. Whetten, L. Vicisano, R. Kermode, M. Handley, S. Floyd, M.
Luby. “Reliable Multicast Transport Building Blocks for One-to-Many
Bulk-Data Transfer.” RFC 3048, January 2001.
[21] J. Widmer, R. Denda, and M. Mauve. “A Survey on TCP-Friendly
Congestion Control.” Special Issue of the IEEE Network Magazine, May
2001.
[22] J. Widmer, M. Handley. “Extending Equation-based Congestion
Control to Multicast Applications.” ACM SIGCOMM 2001, San
Diego, California, August 2001.