USC Computer Science Technical Reports, no. 700 (1999)
Quality Adaptation for Congestion Controlled Video Playback over the
Internet
Reza Rejaie, Deborah Estrin
University of Southern California
Department of Computer Science
Information Sciences Institute
Marina Del Rey, CA 90292
{reza,estrin}@isi.edu
Mark Handley
AT&T Center for Internet Research
The International Computer Science Institute
Berkeley, CA 94704-1198
mjh@aciri.org
February 15, 1999
Abstract
Streaming audio and video applications are becoming increasingly
popular on the Internet, and the lack of effective congestion con-
trol in such applications is now a cause for significant concern.
The problem is one of adapting the compression without requiring video servers to re-encode the data, and fitting the resulting
stream into the rapidly varying available bandwidth. At the same
time, rapid fluctuations in quality will be disturbing to the users
and so should be avoided.
In this paper we present a mechanism for using layered video in
the context of unicast congestion control. This quality adaptation
mechanism adds and drops layers of the video stream to perform
long-term coarse-grain adaptation, while using a TCP-friendly
congestion control mechanism to react to congestion on very short
timescales. The mismatches between the two timescales are ab-
sorbed using buffering at the receiver. We present a piecewise-
optimal scheme for the distribution of buffering among the active
layers in order to maximize perceptual quality while minimizing
rapid, disturbing changes in the quality. We discuss the issues in-
volved in implementing and tuning such a mechanism, and present
our simulation and experimental results.
Keywords: Quality Adaptive Video, Layered Coding
1 Introduction
The Internet has been experiencing explosive growth of au-
dio and video streaming. Most current applications involve
web-based audio and video playback[6, 14] where a stored
video is streamed from the server to a client upon request.
This growth is expected to continue, and such semi-realtime traffic will form a higher portion of the Internet load. Thus the overall behavior of these applications has a large impact on Internet traffic.

* This work was supported by DARPA under contract No. DABT63-95-C0095 and DABT63-96-C-0054 as part of the SPT and VINT projects.
Since the Internet is a shared environment and does not
currently micro-manage utilization of its resources, end sys-
tems are expected to be cooperative by reacting to conges-
tion properly and promptly[5, 9]. Deploying end-to-end
congestion control results in higher overall utilization of
the network and improves inter-protocol fairness. A con-
gestion control mechanism determines the available band-
width based on the state of the network, and the application
should then use this bandwidth efficiently to maximize the
quality of the delivered service to the user.
Currently, many of the commercial streaming applica-
tions do not perform end-to-end congestion control. This
is mainly because stored video has an intrinsic transmis-
sion rate. These rate-based applications either transmit data
with a near-constant rate or loosely adjust their transmis-
sion rate on long timescales since the required rate adap-
tation for congestion control is not compatible with their
nature. Large scale deployment of these applications could
result in severe inter-protocol unfairness and possibly even
congestion collapse.
This paper is not about congestion control mechanisms,
but about a complementary mechanism to adapt the quality
of streaming video playback while performing congestion
control. However, to design an effective quality adaptation
scheme, we need to know the properties of the deployed
congestion control mechanism. Our main assumption is
that the congestion control mechanism employs an additive
increase and multiplicative decrease (AIMD) algorithm.
The simplest TCP-friendly congestion control mecha-
nism we know of is the Rate Adaptation Protocol (RAP)[?].
RAP is a rate-based congestion control mechanism and em-
ploys an AIMD algorithm in a manner similar to TCP.
There are two variants of RAP, with and without a fine-grain congestion avoidance mechanism. This paper only discusses the RAP variant without fine-grain adaptation because its properties are much simpler to predict. However, the proposed mechanisms can be adapted to any congestion control scheme that deploys an AIMD algorithm.
Figure 1: Transmission rate of a single RAP flow (a RAP source without fine-grain adaptation, shown against the link bandwidth)
Figure 1 shows the transmission rate of a RAP source
over time. Similar to TCP, it hunts around for a fair share of the bandwidth. However, unlike TCP, RAP is not ACK-clocked, and the variation of its transmission rate has a more regular sawtooth shape. Bandwidth increases linearly for a period of time, then a packet is lost, an exponential backoff occurs, and the cycle repeats.
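To make this sawtooth concrete, the following minimal sketch (Python; the fixed update interval, step size, and the loss-on-link-overflow rule are our simplifying assumptions, not RAP's actual per-packet mechanism) reproduces the kind of trace shown in figure 1:

# Minimal sketch of AIMD rate evolution. The loss signal is a
# hypothetical placeholder: a packet is assumed lost whenever the
# rate exceeds the link bandwidth.
def aimd_trace(link_bw, s=1.0, rtt=0.04, duration=20.0):
    """Return (time, rate) samples for one AIMD flow."""
    t, rate, samples = 0.0, s, []
    while t < duration:
        samples.append((t, rate))
        rate += s              # additive increase: one step S per RTT
        if rate > link_bw:     # assumed loss once the link is exceeded
            rate /= 2.0        # multiplicative decrease (backoff)
        t += rtt
    return samples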
1.1 Target Environment
Our target environment is a video server that simultane-
ously plays back different video streams on demand for
many heterogeneous clients. As with current Internet video
streaming, we expect the length of such streams to range from 30 second clips to full-length movies. The server and clients are connected through the Internet, where the dominant competing traffic is TCP-based. Clients have heterogeneous network capacity and processing power. Users expect startup playback latency to be low, especially for shorter clips played back as part of web surfing. Thus pre-fetching the entire stream before starting its playback is not an option. We believe that this scenario reasonably represents many of today's Internet streaming applications.
1.2 Motivation
If video for playback is stored at a single lowest-common-
denominator encoding data rate on the server, high-
bandwidth clients will receive poor quality despite avail-
ability of a large amount of bandwidth. However, if the
video is stored at a higher quality encoding (and hence data
rate) on the server, then there will be many low-bandwidth
clients that cannot play back this stream. In the past, we
have often seen RealVideo streams available at 14.4 Kb/s
and 28.8 Kb/s, where the user can choose their connection
speed. However, with the advent of ISDN, ADSL, and ca-
ble modems to the home and faster access rates to busi-
nesses, the Internet is becoming much more heterogeneous.
Customers with higher speed connections feel frustrated to
be restricted to modem-speed playback. The network bot-
tleneck is less likely to be the final hop to the end points;
instead congestion in the backbone, often at provider inter-
connects or links to the server itself, will increasingly dom-
inate. As the user cannot know the congestion level, con-
gestion control mechanisms for streaming video playback
will become increasingly critical.
Clearly then there is a need for the server to be able to
adjust the quality of the stream it plays back so that the per-
ceived quality is as high as the available network bandwidth
will permit. We term this quality adaptation.
1.3 Quality Adaptation Mechanisms
There are several ways to adjust the quality of a
pre-encoded stored stream, including adaptive encoding,
switching between multiple pre-encoded versions, and hi-
erarchical encoding.
One may re-quantize stored encodings on-the-fly
based on network feedback[2, 15, 18]. However, since en-
coding is CPU-intensive, servers are unlikely to be able to
do this for large numbers of clients. Furthermore, once the
original data has been stored compressed, the output rate of
most encoders cannot be changed over a wide range.
In an alternative approach, the server keeps several ver-
sions of each stream with different qualities. As available
bandwidth changes, the server switches playback between
streams of higher or lower quality as appropriate.
With hierarchical encoding[8, 10, 12, 19], the server
maintains a layered encoded version of each stream. As
more bandwidth becomes available, more layers of the en-
coding are delivered. If the average bandwidth decreases,
the server may then drop some of the layers being trans-
mitted. Layered approaches usually have the decoding con-
straint that a particular enhancement layer can only be de-
coded if all the lower quality layers have been received.
There is a duality between adding or dropping of lay-
ers in the layered approach and switching streams in the
multiply-encoded approach. However the layered approach
is more suitable for caching by a proxy for heterogeneous
clients[20]. In addition, it requires less storage at the server,
and it provides an opportunity for selective retransmission
of the more important information. The design of a lay-
ered approach for quality adaptation primarily concerns the
design of an efficient add and drop mechanism that max-
imizes quality while minimizing the probability of base-
layer buffer underflow.
The rest of this paper is organized as follows: first we
provide an overview of the layered approach to quality
adaptation and then explain coarse-grain adding and drop-
ping mechanisms in section 2. Also we discuss fine-grain
inter-layer bandwidth allocation for a single backoff sce-
nario. Section 3 motivates the need for smoothing in the
presence of real loss patterns and discusses two possible
approaches. In section 4, we sketch a near-optimal filling
and draining mechanism that not only achieves smoothing
but is also able to cope efficiently with various patterns of
losses. We evaluate our mechanism through simulation and
experiments in section 5. Section 6 briefly reviews related
work. Finally, section 7 concludes the paper and addresses
some of our future plans.
2 Layered Quality Adaptation
Hierarchical encoding provides an effective way for a video playback server to coarsely adjust the quality of a video stream without transcoding the stored data. However, it does not provide fine-grained control over bandwidth; bandwidth changes with the granularity of a layer. Furthermore, there needs to be a quality adaptation mechanism to smoothly adjust the quality (i.e., the number of layers) as bandwidth changes. Users will tolerate poor quality video, but
rapid variations in quality are disturbing.
Hierarchical encoding allows video quality adjustment
over long periods of time, whereas congestion control
changes the transmission rate rapidly over short time inter-
vals (several round-trip times (RTTs)). The mismatch be-
tween the two timescales is made up for by buffering data
at the receiver to smooth the rapid variations in available
bandwidth and allow a near constant number of layers to be
played.
Figure 2 graphs a simple simulation of a quality adapta-
tion mechanism in action. The top graph shows the avail-
able network bandwidth and the consumption rate at the re-
ceiver with no layers being consumed at startup, then one
layer, and finally two layers. During the simulation, two
packets are dropped and cause congestion control backoffs,
where the transmission rate goes below the consumption
rate for a period of time. The lower graph shows the playout
sequence numbers of the actual packets against time. The
horizontal lines show the period between arrival time and
playout time of a packet. Thus it indicates the total amount
of buffering for each layer. This simulation shows more
buffered data for Layer 0 (the base layer) than for Layer 1
(the enhancement layer). After the first backoff, the length
of these lines decreases, indicating that buffered data from Layer 0 is being used to compensate for the lack of available bandwidth. At the time of the second backoff, a little data has been buffered for Layer 1 in addition to the large amount for Layer 0. Thus data is drawn from both buffers to compensate for the lack of available bandwidth.

Figure 2: Layered Encoding with Receiver Buffering
The congestion control mechanism dictates the available bandwidth¹. We cannot send more than this amount, and do not wish to send less². In a real network even the average bandwidth of a congestion controlled flow constantly changes over the session lifetime. Thus a quality adaptation mechanism must continuously evaluate the available bandwidth and adjust the number of active layers accordingly.

¹ Available bandwidth and transmission rate are used interchangeably throughout this paper.
² For simplicity we ignore flow control issues in this paper, but implementations should not. However, our final solutions generally require so little receiver buffering that this is not often an issue.
In this analysis we assume that the layers are linearly
spaced - that is each layer has the same bandwidth. This
simplifies the analysis, but is not a requirement. In addition,
we assume each layer has a constant consumption rate over
time. In practice this is unlikely in a real codec, but to a first
approximation it is reasonable. It can be ignored by slightly
increasing the amount of receiver buffering for all layers to
absorb variations in consumption rate.
Figure 3: Filling and draining phase

Figure 3 shows a single cycle of the congestion control mechanism. The sawtooth waveform is the instantaneous transmission rate. There are n_a layers, each of which has a consumption rate of C. On the left hand side of the figure,
the transmission rate is higher than the consumption rate,
and this data will be temporarily stored in the receiver’s
buffer. The total amount of stored data is equal to the area
of triangle abc. Such a period of time is known as a filling
phase. Then, at time t
b
, a packet is lost and the transmit
rate is reduced multiplicatively. To continue playing out
n
a
layers when the transmission rate drops below the con-
sumption rate, some data must be drawn from the receiver
buffer until the transmission rate reaches the consumption
rate again. The amount of data drawn from the buffer is
shown in this figure as triangle cde. Such a period of time
is known as a draining phase.
Note that the quality adaptation mechanism can only ad-
just the number of active layers and their bandwidth share.
This paper attempts to derive near-optimal solutions for
these two key mechanisms:
A coarse-grain mechanism for adding and dropping
layers. By changing the number of active layers, the
server can perform coarse-grain adjustment on the to-
tal amount of receiver buffered data.
A fine-grain inter-layer bandwidth allocation mecha-
nism among the active layers. If there is receiver-
buffered data available for a layer, we can temporarily
allocate less bandwidth than is being consumed while
taking the remainder from the buffer. This smoothes
out reductions in the available bandwidth. When spare
bandwidth is available, we can send data for a layer at
a rate higher than its consumption rate, and increase
the data buffered for that layer at the receiver.
In the next section, we present coarse-grain adding and
dropping mechanisms as well as their relations with the
fine-grain bandwidth allocation. Then we discuss the fine-
grin bandwidth allocation in the subsequent sections.
2.1 Adding a Layer
A new layer can be added as soon as the instantaneous avail-
able bandwidth exceeds the consumption rate (in the de-
coder) of the existing layers. The excess bandwidth could
then be used to start buffering a new layer. However, this
would be problematic as without knowing future available
bandwidth we cannot decide when it will first be possible
to start decoding the layer. As the new layer’s playout is
decided by the inter-layer timing dependency between its
data and that in the base layer, this means we cannot make
a reasoned decision about which data from the new layer to
actually send³.
A more practical approach is to start sending a new layer
when the instantaneous bandwidth exceeds the consump-
tion rate of the existing layers plus the new layer. In this
approach the layer can start to play out immediately. In
this case there is some excess bandwidth from the time the
available bandwidth exceeds the consumption rate of the
existing layers until the new layer is added. This excess
bandwidth can be used to buffer data for existing layers at
the receiver.
In practice, this bandwidth constraint for adding is still
not conservative enough, as it may result in several layers
being added and dropped with each cycle of the congestion control sawtooth. Such rapid-cycling changes in quality
would be disconcerting for the viewer. One way to prevent
rapid changes in quality is to add a buffering condition such
that adding a new layer does not endanger existing layers.
Thus, the server may add a new layer when:
1. The instantaneous available bandwidth is greater than
the consumption rate of the existing layers plus the
new layer, and,
2. There is sufficient total buffering at the receiver to sur-
vive an immediate backoff and continue playing all the
existing layers plus the new layer.
To satisfy the second condition we assume (for now) that
no additional backoff will occur during the draining phase,
and the slope of linear increase can be properly estimated.
These are the minimal criteria for adding a new layer. If these conditions hold, a new layer can be kept for a reasonable period of time during the normal congestion con-
trol cycles. We shall show later that we normally want to
be more conservative than this. Clearly we need to have
sufficient buffering at the receiver to smooth the available
bandwidth signal so that the number of active layers does not
change due to the normal hunting behavior of the conges-
tion control mechanism.
³ Note that once the inter-layer timing for a new layer is adjusted, it is maintained as long as the buffer does not dry out.
Expressing the adding conditions more precisely:
Condition 1:  R >= (n_a + 1) C

Condition 2:  sum_i buf_i >= ((n_a + 1) C - R/2)^2 / (2S)

where R is the current transmission rate, n_a is the number of currently active layers, buf_i is the amount of buffered data for layer i, and S is the rate of linear increase in bandwidth (typically one packet per RTT).

* For derivation of this equation refer to A.1.
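As a sketch, the two conditions translate directly into code (Python; the names are ours, and the required buffering is the single-backoff triangle cde of figure 3):

# Sketch of the two adding conditions. R: current transmission rate,
# C: per-layer consumption rate, S: slope of linear increase,
# bufs: buffered data for each of the n_a active layers.
def can_add_layer(R, C, S, bufs):
    n_a = len(bufs)
    cond1 = R >= (n_a + 1) * C                   # condition 1
    deficit = (n_a + 1) * C - R / 2.0            # height of triangle cde
    required = max(0.0, deficit) ** 2 / (2 * S)  # area of triangle cde
    cond2 = sum(bufs) >= required                # condition 2
    return cond1 and cond2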
2.2 Dropping a Layer
Once a backoff occurs, if the total amount of buffering
at the receiver is less than the estimated required buffering for recovery (i.e., the area of triangle cde in figure
3), the correct course of action is to immediately drop the
highest layer. This reduces the consumption rate (n_a C)
and hence reduces the buffer requirement for recovery
(i.e. area of triangle cde). If the buffering is still in-
sufficient, the server should iteratively drop the highest
layer until the amount of buffering is sufficient. This rule
clearly doesn’t apply to the base layer which is always sent.
The dropping mechanism, more precisely:

WHILE  n_a C > R/2 + sqrt(2 S sum_i buf_i)
DO     n_a = n_a - 1

* For derivation of this equation refer to A.2.
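The same rule in executable form, as a sketch (Python, our naming); note that a dropped layer's buffer no longer counts toward recovery, so the sum is recomputed over the remaining layers on each iteration:

import math

# Sketch of the dropping mechanism. R is the rate before the backoff
# (the post-backoff rate is R/2); bufs[0] is the base layer's buffer.
def drop_layers(n_a, R, C, S, bufs):
    while n_a > 1:                     # the base layer is always sent
        total_buf = sum(bufs[:n_a])    # only remaining layers' buffers help
        if n_a * C <= R / 2.0 + math.sqrt(2 * S * total_buf):
            break                      # buffering suffices for recovery
        n_a -= 1                       # drop the highest layer
    return n_a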
This mechanism provides a coarse-grain criterion for dropping a layer. However, it may be insufficient to pre-
vent buffer underflow during the draining phase for one of
several reasons:
We may suffer a further backoff before the current
draining phase completes.
Our estimate of the slope of linear increase may be
incorrect if the network RTT changes substantially.
There may be sufficient total data buffered, but it may
be allocated among the different layers in a manner
that precludes its use to aid recovery.
The first two situations are due to incorrect prediction of
the amount of buffered data needed to recover, and we term
such an event a critical situation. In such events, the only
appropriate course of action is to drop additional layers as
soon as the critical situation is discovered.
The third situation is more problematic, and relates to the
fine-grain bandwidth allocation among active layers during
both filling and draining phases. We devote much of the
rest of this paper to deriving and evaluating a near-optimal
solution to this situation.
2.3 Inter-layer Buffer Allocation
Because of the decoding constraint in hierarchical coding,
each additional layer depends on all the lower layers, and
correspondingly is of decreasing value. Thus a buffer al-
location mechanism should provide higher protection for
lower layers by allocating a higher share of buffering for
them.
The problem of inter-layer buffer allocation is to ensure
the total amount of buffering is sufficient, and it is prop-
erly distributed among active layers to effectively absorb
the short-term reductions in bandwidth that might occur.
The following two examples illustrate ways that improper allocation of buffered data might fail to compensate for the lack of available bandwidth.
Dropping layers with buffered data
A simple buffer allocation scheme might allocate an equal
share of buffer to each layer. However, if the highest layer
is dropped after a backoff, its buffered data is no longer
able to assist the remaining layers in the recovery. The top
layer’s data will still be played out, but it is not providing
buffering functionality. This implies that it is more benefi-
cial to buffer data for lower layers.
Insufficient distribution of buffered data
An equally simple buffer allocation scheme might allocate
all the buffering to the base layer. Consider an example
when three layers are playing, where a total consumption rate of 3C must be supplied to the receiver's decoder. If the transmission rate drops to C, the base layer (L0) can be played from its buffer. Since neither L1 nor L2 has any buffering, they require transmission from the source. However, the available bandwidth is only sufficient to feed one layer. Thus L2 must be dropped even if the total buffering were sufficient for recovery.
Efficiency
In these examples, although buffering is available, it cannot
be used to prevent the dropping of layers. This is inefficient
use of the buffering. In general, we are striving for a dis-
tribution of buffering that is most efficient in the sense that
it provides maximal protection against dropping layers for
any pattern of short-term reduction in available bandwidth
we are likely to encounter.
These examples reveal the following tradeoffs for inter-
layer buffer allocations:
Allocating more buffering for the lower layers not only improves their protection but also increases the efficiency of buffering.
There is a minimum number of buffering layers that
are required to absorb short-term reductions in avail-
able bandwidth for successful recovery. This mini-
mum is directly determined by the reduction in band-
width that we intend to absorb by buffering.
2.4 Optimal Inter-layer Buffer Allocation
Given a draining phase following a single backoff, we
can derive the optimal inter-layer allocation that maxi-
mizes buffering efficiency. Figure 4 illustrates an optimal
buffer allocation and its corresponding draining pattern for
a draining phase. Here we assume that the total amount of
buffering at the receiver at time t_b is precisely sufficient for recovery (i.e., the area of triangle afg), with no spare buffering available at the end of the draining phase.

Figure 4: The optimal inter-layer buffer distribution
To justify the optimality of this buffer allocation, con-
sider that the consumption rate of a layer must be supplied
either from the network or from the buffer or a combination
of the two. If it is supplied entirely from the buffer, that
layer’s buffer is draining at consumption rate C. The area of
quadrilateral defg in figure 4 shows the maximum amount of buffer that can be drained from a single layer during this draining phase. If the draining phase ends as predicted, there is no preference for buffer distribution among layers as long as no layer has more than defg worth of buffered data. However, if the situation becomes critical due to further backoffs, layers must be dropped. Thus allocating area defg of buffering to the base layer would ensure that the maximum amount of the buffered data is still usable for recovery, and maximizes buffering efficiency.
By similar reasoning, the next largest amount an addi-
tional layer’s buffer can contribute is quadrilateral bcde,
and this portion of buffered data should be allocated to L1, the first enhancement layer, and so on. This approach minimizes the amount of buffered data allocated to higher layers that might be dropped in a critical situation, and consequently maximizes buffering efficiency. Expressing this more precisely:
n_b = ceil(n_a - R / (2C))

where n_b is the minimum number of buffering layers and R is the transmission rate (before a backoff).

The optimal amount of buffering for layer i is:

Buf_i^opt = (C/S) (n_a C - R/2 - (i + 1/2) C),           0 <= i < n_b - 1

Buf_{n_b-1}^opt = (n_a C - R/2 - (n_b - 1) C)^2 / (2S)

* For derivation of this equation refer to A.3.
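Under this reconstruction, the allocation amounts to slicing the deficit triangle of figure 4 into horizontal strips of height C, the bottom (longest) strip going to the base layer; a sketch (Python, our naming), assuming linearly spaced layers:

import math

# Sketch of the optimal single-backoff allocation. D is the height of
# the deficit triangle; layer i takes the strip between levels i*C and
# (i+1)*C, so shares[0] (the base layer) is the largest.
def optimal_allocation(n_a, R, C, S):
    D = n_a * C - R / 2.0               # deficit height after a backoff
    if D <= 0:
        return []                       # no buffering needed
    n_b = math.ceil(D / C)              # minimum number of buffering layers
    shares = []
    for i in range(n_b):
        top = D - i * C                       # strip upper edge
        bottom = max(0.0, D - (i + 1) * C)    # strip lower edge (0 if partial)
        shares.append((top ** 2 - bottom ** 2) / (2 * S))
    return shares

The shares sum to D^2 / (2S), the area of the whole deficit triangle.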
Although we can calculate the optimal allocation of
buffered data for the active layers, a backoff may occur at
a random time. To tackle this problem, during the filling
phase, we incrementally adjust the allocation of buffered
data so that the buffer state is always as close to an optimal
state as possible.
Figure 5: Optimal buffer sharing
Toward that goal, we assume that a single backoff will
occur immediately, and ask the question: “if we keep only
the base layer, is there sufficient buffering to survive?”. If
there is not sufficient buffering, then we fill up the base
layer’s buffer until there is enough buffering to survive.
Then we ask the question: “if we keep only two layers,
is there enough buffering to survive with those buffers having optimal allocation?". If there is not enough base layer
data, we fill the base layer’s buffer up to the optimal level.
Then we start sending L1 data until both layers have the
optimal amount of buffering to survive. We repeat this pro-
cess and increase the number of expected surviving layers
until all the buffering layers are filled up to an optimal level
such that all active layers can survive from a single backoff.
This approach results in a sequential filling pattern among
buffering layers.
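A sketch of this sequential filling decision (Python, our naming, reusing optimal_allocation from the sketch above; assumes bufs has one entry per active layer): find the first survival state not yet satisfied and fill the lowest layer that is below its target share for that state.

# Sketch of sequential filling for the single-backoff case: work through
# the states "survive with j layers intact", j = 1..n_a, and fill the
# lowest layer that is below its optimal share in the first unmet state.
def next_layer_to_fill(n_a, R, C, S, bufs):
    for j in range(1, n_a + 1):
        targets = optimal_allocation(j, R, C, S)
        for layer, target in enumerate(targets):
            if bufs[layer] < target:
                return layer            # send the next packet to this layer
    return None                         # all states met; adding may proceed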
Figure 5 illustrates the optimal filling and draining
scheme. If a backoff occurs exactly at time t_b, all layers can survive the backoff. Occurrence of a backoff earlier than t_b results in dropping one or more active layers. How-
ever the buffer state is always as close as possible to the
optimal state without those layers. If no backoff occurs un-
til adding conditions (section 2.1) are satisfied, a new layer
is added and we repeat the sequential filling mechanism.
It is worth mentioning that the server can control the fill-
ing and draining pattern by proper fine-grain bandwidth al-
location among active layers. Figure 5 illustrates that max-
imally efficient buffering results in the upper layers being
supplied from the network during the draining phase while
the lower layers are supplied from their buffers. For ex-
ample, just after the backoff, layer 2 is supplied entirely
from the buffer, but the amount supplied from the buffer
decreases to zero as data supplied from the network takes
over. Layers 1 and 0 are supplied from the buffer for a longer period.
3 Smoothness Constraints
In the previous section, we derived an optimal filling and
draining scheme based on the assumption that we only
buffer to survive a single backoff with all the layers in-
tact. However, examination of Internet traffic indicates that
real networks exhibit near-random[3] loss patterns with fre-
quent additional backoffs during a draining phase. Thus,
aiming to survive only a single backoff is too aggressive
and results in frequent adding and dropping of layers.
3.1 Smoothing
To achieve reasonable smoothing of the add and drop rate,
an obvious approach is to refine our adding conditions (in
section 2.1) to be more conservative. We have considered
the following two mechanisms to achieve smoothing:
We may add a new layer if the average available band-
width is greater than the consumption rate of the exist-
ing layers plus the new layer.
We may add a new layer if we have a sufficient amount of buffered data to survive K_max backoffs with the existing layers, where K_max is a smoothing factor with a value greater than one.
Although each of these mechanisms results in smoothing, the latter not only allows us to directly relate the adding decision to an appropriate buffer state, but it can also utilize limited-bandwidth links effectively. For exam-
ple, if there is sufficient bandwidth across a modem link to
receive 2.9 layers, the average bandwidth would never be-
come high enough to add the third layer. In contrast, the
latter mechanism would send 3 layers for 90% of the time, which is more desirable. For the rest of this paper we assume that the only condition for adding a new layer is availability of an optimal buffer allocation for recovery from K_max backoffs.
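In sketch form, the adding test then reduces to a buffer-state comparison (Python; total_buf_required is a stand-in for the scenario formulas given later in section 4.1):

# Sketch: with smoothing, a new layer is added only once the receiver
# holds enough buffering to ride out K_max backoffs with the existing
# layers. This checks only the total; the distribution across layers
# must also be optimal, as discussed in section 4.
def should_add_layer(R, C, S, bufs, k_max, total_buf_required):
    n_a = len(bufs)
    return sum(bufs) >= total_buf_required(R, C, S, n_a, k_max)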
Changing K_max allows us to tune the balance between maximizing the short-term quality and minimizing the changes in quality. An obvious question is "What degree of smoothing is appropriate?" In the absence of a specific layered codec and user evaluation, K_max cannot be analytically derived. Instead it should be set based on real-world user perception experiments to determine the appropriate degree of smoothing needed to not be disturbing to the user. In practice, we probably also want to base K_max on the average bandwidth and RTT, since these determine the duration of a draining phase.
3.2 Buffering Revisited
If we delay adding a new layer to achieve smoothing, this
affects the way we fill and drain the buffers. Figure 6
demonstrates this issue.
Figure 6: Revised Draining Phase Algorithm
Up until time t_2, this is the same as figure 5. The second filling phase starts at time t_2, and at t_3 there is sufficient buffering to survive a backoff. However, for smoothing purposes, a new layer is not added at this point, and we continue buffering data until a backoff occurs at t_4. Note that as the available bandwidth increases, the total amount of buffering increases but the required buffering for recovery from a single backoff decreases. At time t_4 we have more buffering than we need to survive a single backoff, but insufficient buffering to survive a second backoff before the end of the draining phase. We need to specify how we allocate the extra buffering after time t_3, and how we drain these buffers after t_4 while maintaining efficiency.
Conceptually, during the filling phase, the server sequentially examines the following steps:

Step 1: enough buffer for one backoff with L0 intact.
Step 2: enough buffer for one backoff with L0 and L1.
...
Step n_a: enough buffer for one backoff with L0 through L_{n_a-1} intact.
Step n_a + 1: enough buffer for one backoff with L0 through L_{n_a-1} intact and two backoffs with L0 intact.
At any time in the filling phase we have satisfied one step
and are working towards the next step.
When a backoff occurs between steps, in this case between steps n_a and n_a + 1, we essentially reverse the filling
, we essentially reverse the filling
process. First we calculate between which two steps we’re
currently located. Then we traverse through the steps in the
reverse order to determine which layers must be drained
and by how much. In essence, during consecutive filling
and draining phases, we traverse this sequence of steps (i.e.
optimal buffer states) back and forth such that at any point
of time the buffer state is as close to optimal as possible. In
the next section, we describe this mechanism in more detail.
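The traversal order itself can be pictured by enumerating the candidate states and sorting them by total required buffering, as figure 9 later does; a sketch (Python; buf_total(scenario, k) stands in for the formulas of section 4.1):

# Sketch: order the (scenario, k) states by total required buffering.
# The filling phase walks this list forward; a draining phase walks
# it backward.
def ordered_states(k_max, buf_total):
    states = [(s, k) for k in range(1, k_max + 1) for s in (1, 2)]
    return sorted(states, key=lambda sk: buf_total(*sk))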
4 Buffer Allocation with Smoothing
To design optimal filling and draining mechanisms in the
presence of smoothing, we need to know the optimal buffer
allocation among layers and the corresponding maximally
efficient filling and draining patterns for multiple-backoff
scenarios.
The optimal buffer allocation for a scenario with multi-
ple backoffs is not unique because it depends on the time
when the additional backoffs occur during the draining
phase. If we have knowledge of future loss distribution
patterns it might, in principle, be possible to calculate the
optimal buffer allocation. In practice such a solution would
be excessively complex for the problem it is trying to solve,
and rapidly becomes intractable as number of backoffs in-
creases. Let us first assume that only one additional backoff
occurs during the draining phase. The possible scenarios
are shown in figure 7. This figure illustrates that the opti-
mal buffer allocation for each scenario depends on the time of the second backoff, the value of the consumption rate (n_a C), and the transmission rate before the first backoff.
We can extend the idea of optimal buffer allocation for
a single backoff (section 2.4) to each individual scenario.
However, an added complexity arises from the fact that dif-
ferent scenarios require different buffer allocations. More
specifically, for an equal amount of total buffering needed for recovery, scenarios 1 and 2 are two extreme cases in the sense that they need the maximum and minimum number of buffering layers, respectively. Thus addressing these two extreme scenarios will cover all the intermediate scenarios (e.g., scenario 3) as well.

Figure 7: Possible Double-backoff Scenarios
We need to decide which scenario to consider during the
filling phase. We make a key observation here. If the total amount of buffering for scenarios 1 and 2 is equal, having the optimal buffer distribution for scenario 1 is sufficient for recovery from scenario 2, although it is not maximally efficient. However, the converse does not hold. The higher
flexibility in scenario 1 comes from the fact that buffered
data for a higher layer can always compensate for lack of
buffer in a lower layer, but not vice-versa.
This suggests that during the filling phase for two back-
off scenarios, first we consider the optimal buffer allocation
for scenario 1 and fill up the buffers in a step-by-step sequential fashion as described in section 3.2. Once this is
achieved, then we move on to consider scenario 2.
4.1 Filling Phase with Smoothing
To extend this idea to scenarios of k backoffs, we need to
examine the optimal buffer allocation for scenarios 1 and 2
for each successive value of k. Figure 8 illustrates the opti-
mal buffer state, including the total buffer requirement and
its optimal inter-layer allocation in scenarios 1 and 2, for dif-
ferent values of k. Ideally, we would like to fill the buffers
during the filling phase such that we traverse through these
buffer states in turn. Once k exceeds K_max (the smoothing factor), we add a new layer and start the process again with the new set of optimal buffer states.
Toward this goal, we order these different buffer states
in increasing value of total amount of required buffering in
figure 9. Thus by traversing this sequence of buffer states,
we always work towards the next optimal state that requires
more buffering.
Figure 8: Buffer distributions for k backoffs

Figure 9: Distributions in increasing order of buffering

Unfortunately this requires us to occasionally drain an existing buffer in order to reach the next state⁴. Two examples of this phenomenon are visible in figure 9:

Moving from the {scenario 2, k=2} case to the {scenario 1, k=2} case involves draining one layer's buffer.

Moving from the {scenario 1, k=4} case to the {scenario 2, k=3} case involves draining one layer's buffer.

⁴ In other words, the order of these states based on increasing value of total required buffering is different from their order based on increasing value of per-layer buffering.
We must not drain any layer’s buffer during the filling
phase because that buffering provides protection for a pre-
vious scenario that we have already passed. Thus we must
find the maximally efficient sequence of buffer states that
is consistent with the existing buffering. To achieve this
not only the total amount of required buffering but also per
layer buffer requirement must be monotonically increasing
as we go to the next buffer state.
The key observation that we mentioned earlier allows us
to calculate such a sequence. We recall that higher layer
buffers can substitute for lower layer buffers (they’re just
not as efficient) but not vice-versa. Given this flexibility,
the solution is to constrain the per-layer buffer allocation in each scenario 2 state to be no less than in the previous scenario 1 state, and no more than in the next scenario 1 state in
the sequence of states in figure 9. Figure 10 depicts a sequence of maximally efficient buffer states after applying the above constraints, where each step in the filling process is numbered. By enforcing this constraint, we can traverse through the buffer states such that the buffer allocation for each state satisfies the buffer requirements of all the previous states. This implies that both the total amount of buffering and the amount of per-layer buffering monotonically increase. Thus the per-layer buffering can always be used to aid recovery.
Once we have sufficient buffering for recovery from K_max backoffs in both scenarios, a new layer will be added.
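As a sketch (Python), using the total_buf_required function reconstructed in section 4.1 below, this adding decision checks both extreme scenarios at k = K_max; note it compares totals only, while the per-layer distribution must also have reached the corresponding optimal state:

# Sketch: add a new layer once total buffering covers K_max backoffs in
# both extreme scenarios (totals only; per-layer targets checked elsewhere).
def ready_to_add(R, C, S, bufs, k_max):
    n_a = len(bufs)
    need1 = total_buf_required(R, C, S, n_a, k_max, scenario=1)
    need2 = total_buf_required(R, C, S, n_a, k_max, scenario=2)
    return sum(bufs) >= max(need1, need2)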
Figure 10: Step-by-step buffer filling
The following pseudocode expresses our per-packet algorithm for ensuring the buffer state remains maximally efficient during the filling phase⁵:

FUNCTION SendPacket
  S1Backoffs = 0; S2Backoffs = 0
  BufReq1 = 0; BufReq2 = 0
  WHILE (BufReq1 <= TotBufAvailable) AND (S1Backoffs < Kmax)
    INCREMENT S1Backoffs
    BufReq1 = TotalBufRequired(CurrentRate, Scenario=1,
                               S1Backoffs, ActiveLayers)
  WHILE (BufReq2 <= TotBufAvailable)
    INCREMENT S2Backoffs
    BufReq2 = TotalBufRequired(CurrentRate, Scenario=2,
                               S2Backoffs, ActiveLayers)
  FOR Layer = 1 TO ActiveLayers
    LayerBuf1 = BufRequired(CurrentRate, Scenario=1,
                            S1Backoffs, Layer, ActiveLayers)
    LayerBuf2 = BufRequired(CurrentRate, Scenario=2,
                            S2Backoffs, Layer, ActiveLayers)
    IF (BufReq1 <= BufReq2) AND (S1Backoffs <= Kmax)
      # We're working toward a scenario 1 state
      IF (LayerBuf1 > BufAvailable(Layer))
        SendPacketFromLayer(Layer)
        RETURN
    ELSE  # We're working toward a scenario 2 state
      IF (LayerBuf2 > BufAvailable(Layer)) AND
         ((S1Backoffs > Kmax) OR (LayerBuf1 > BufAvailable(Layer)))
        SendPacketFromLayer(Layer)
        RETURN

⁵ The algorithm performs fine-grain bandwidth allocation by assigning the next packet to be transmitted to a particular layer.
K_max is the smoothing factor, giving the number of
backoffs for which we buffer data before adding a new
layer.
The function TotalBufRequired returns the total amount
of required buffering for all layers in the scenario in
question, given the current send rate, the number of
active layers, and the number of backoffs being considered.
Scenario 1:

Buf_total = 0,                            k <= log2(R / (n_a C))
Buf_total = (n_a C - R/2^k)^2 / (2S),     k >  log2(R / (n_a C))

where k is the number of backoffs being considered.

Scenario 2:

Buf_total = 0,                            k <= log2(R / (n_a C))
Buf_total = [(n_a C - R/2^(k1+1))^2
             + (k - k1 - 1)(n_a C / 2)^2] / (2S),   k > log2(R / (n_a C))

where k1 = floor(log2(R / (n_a C))).

* For derivation of this equation refer to A.4.
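A sketch of TotalBufRequired under these formulas (Python; the scenario 2 branch assumes the k1-based decomposition shown above):

import math

# Sketch of TotalBufRequired. R: rate before the first backoff,
# C: per-layer consumption rate, S: slope of linear increase,
# n_a: number of active layers, k: number of backoffs considered.
def total_buf_required(R, C, S, n_a, k, scenario):
    cons = n_a * C
    if k <= math.log2(R / cons):
        return 0.0                    # rate never falls below consumption
    if scenario == 1:                 # k back-to-back backoffs
        return (cons - R / 2 ** k) ** 2 / (2 * S)
    # Scenario 2: k1 backoffs are absorbed while the rate is still above
    # the consumption rate, one backoff creates the first deficit
    # triangle, and each later backoff starts from the consumption rate.
    k1 = max(0, math.floor(math.log2(R / cons)))
    first = (cons - R / 2 ** (k1 + 1)) ** 2
    rest = (k - k1 - 1) * (cons / 2) ** 2
    return (first + rest) / (2 * S)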
The function BufRequired returns the maximally efficient
amount of required buffering for a particular layer in the
scenario of the state we are currently working towards. The
input parameters for this function are: the layer number,
the current sending rate, the number of active layers, and
the number of backoffs being considered.
Scenario 1:

Buf_i^opt = 0,                                    k <= log2(R / (n_a C))
Buf_i^opt = (C/S) (C (n_a - i) - C/2 - R/2^k),    k > log2(R / (n_a C)),  i < n_b

Scenario 2:

Buf_i^opt = 0,                                    k <= log2(R / (n_a C))
Buf_i^opt = (C/S) [(C (n_a - i) - C/2 - R/2^(k1+1))
            + (k - k1 - 1)(C (n_a - i) - C/2 - n_a C / 2)],
                                                  k > log2(R / (n_a C)),  i < n_b

where k1 = floor(log2(R / (n_a C))).

* For derivation of this equation refer to A.5.
4.2 Draining Phase with Smoothing
As we traverse through the maximally efficient states, one
or more backoffs eventually move us into a draining phase.
Given that we incrementally traverse along the maximally
efficient path through buffer state during the filling phase,
we would like to traverse along the same path but in the
reverse direction during the draining phase. This approach
guarantees that the highest layers' buffers are not drained until they are no longer required, and the lowest layers' buffers are not drained too early, which would make the resulting buffer distribution inefficient.
At the start of each step we have the optimal amount
of protective buffering for one particular state, and regres-
sively work toward the previous optimal buffer state along
the maximally efficient path. However, there is an addi-
tional constraint that we cannot drain a layer’s buffer faster
than the layer consumption rate, C.
To achieve such a draining pattern, we periodically cal-
culate the draining pattern for a short period of time, dur-
ing which we expect to drain a certain number of packets.
This number is based on the current estimate of slope of
linear increase and the current value of consumption rate.
We then calculate (using an algorithm similar to the above
pseudocode) the previous optimal state along the maximally
efficient path that we can survive with the current amount
of buffering. Conceptually, we then consider draining data from each layer in turn, starting from the highest layer and
working downwards, such that each layer’s buffering does
not drop below its buffer share at the previous optimal step
we are draining towards. An added constraint is that we
must limit the amount of drained data from a layer to the
maximum amount that can be consumed during this period.
If the buffer state reaches the previous optimal state being considered before we have allocated all the packets that must be drained in this period, then we move on to consider the previous state along the maximally efficient path, and so on. We repeat this process until a sufficient number of packets for draining during this period has been identified.
Then we allocate the bandwidth during the period such that
each active layer receives the total amount of data that it
must consume during this period minus those packets we
just allocated to drain during the period.
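One period of this computation, as a sketch (Python, our naming): drain from the highest layer downward toward the per-layer targets of the previous optimal state, capping each layer at what it can consume in the period.

# Sketch of per-period drain planning. bufs: current per-layer buffers,
# prev_targets: per-layer shares at the previous optimal state,
# to_drain: amount that must come from buffers this period.
def plan_drain(bufs, prev_targets, to_drain, C, period):
    drained = [0.0] * len(bufs)
    cap = C * period                   # a layer drains at most at rate C
    for layer in reversed(range(len(bufs))):
        target = prev_targets[layer] if layer < len(prev_targets) else 0.0
        amount = min(max(0.0, bufs[layer] - target), cap, to_drain)
        drained[layer] = amount
        to_drain -= amount
        if to_drain <= 0:
            break                      # else continue with the prior state
    return drained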
5 Simulation
We have evaluated our quality adaptation mechanism
through simulation using bandwidth traces obtained from
RAP in the ns2 [] simulator and real Internet experiments.
Figure 11 provides a detailed overview of the mecha-
nisms in action. It shows a 40 second trace where the
quality-adaptive RAP flow co-exists with 10 Sack-TCP
flows and 9 additional RAP flows through an 800 Kb/s bot-
tleneck with 40ms RTT. The smoothing factor was set to 2
so that it provides enough receiver buffering for two backoffs before adding a new layer (K_max = 2).
Figure 11 shows the following parameters:
The total transmission rate, illustrating the saw-tooth
output of RAP. We have also overlaid the consumption
rate of the active layers over the transmission rate to
demonstrate the add and drop mechanism.
The transmission rate broken down into bandwidth per
layer. This shows that most of the variation in avail-
able bandwidth is absorbed by changing the rate of the
lowest layers (shown with the darkest shading).
The individual bandwidth share per layer. Periods
when a layer is being streamed above its consumption
rate to build up receiver buffering are clearly visible as
spikes in the bandwidth.
The buffer drain rate per layer. Clearly visible are
points where the buffers are used for playout because
the bandwidth share is temporarily less than the layer
consumption rate.
The accumulated buffering at the receiver for each ac-
tive layer.
Graphs in figure 11 clearly demonstrate that the short-term variations in bandwidth caused by the congestion control mechanism can be effectively absorbed by receiver buffering. Furthermore, playback quality is maximized without
risking complete dropouts in the playback due to buffer
underflow.
Smoothing Factor
To examine the impact of the smoothing factor on the behavior, we repeated the previous simulation with different values of K_max. Figure 12 shows the number of active layers and the buffer allocation across active layers for K_max = 2, K_max = 3, and K_max = 4. Clearly, higher values of K_max reduce the number of changes in quality at the expense of increasing the time it takes to first observe the best short-term quality.
These graphs also reveal that this manifests itself in two ways as K_max increases: firstly, the total amount of buffering is increased, and secondly, more of the buffering is allocated for higher layers to cope with the larger variations in available bandwidth resulting from successive backoffs.
Responsiveness
We have also explored the responsiveness of the quality
adaptation mechanism to large step changes in available
bandwidth. Figure 13 depicts a RAP trace with the same
parameters as figure 11 but a CBR source with a rate
equal to half the bottleneck bandwidth is started at t=30s
and stopped at t=60s and K
max
=4. The RAP congestion
control mechanism rapidly responds to these changes
by reducing the average transmission rate. The quality
adaptation mechanism closely follows the changes in
bandwidth. L
and then L
are dropped when bandwidth
reduces and then L is added when bandwidth becomes
available again. Notice that every layer’s buffer is involved
in this process, but the reception of the base layer is never
jeopardized. Thus we have satisfied our original design
goal of providing smoothing of quality while providing
protection to the most critical layers.
Efficiency
The optimality of our algorithms can be examined through the efficiency of the buffer allocation at the time a layer is dropped. The inter-layer buffer allocation is maximally efficient if the following conditions are both satisfied: (i) no data is buffered for a layer that is dropped, and (ii) the layer is only dropped because the total amount of buffering is insufficient. To quantify the efficiency of our scheme, we have calculated the percentage of remaining buffer for each dropped layer as follows:

e = (buf_total - buf_drop) / buf_total

where buf_total and buf_drop denote the total buffering
and the buffer share of the dropping layer, respectively. Then we averaged the value of e across all drop events during the simulation and used that as an evaluation metric for efficiency.

Figure 11: First 40 seconds of K_max = 2 trace

Figure 12: Effect of K_max on buffering and quality

Figure 13: Effect of long-term changes in bandwidth
Table 1 shows these efficiency values for different values of K_max during two tests, T1 and T2. T1 is the 10 RAP, 10 TCP test depicted in figure 11, whereas T2 is the 10 RAP, 10 TCP test with a large CBR burst shown in figure 13. These results show that our scheme is very efficient: very little buffered data is still available in a layer that is dropped.
        K_max=2   K_max=3   K_max=4   K_max=5   K_max=8
T1      99.77%    99.97%    99.84%    99.85%    99.99%
T2      99.15%    99.81%    99.92%    99.80%    96.07%

Table 1: Buffering efficiency for different values of K_max
Table 2 shows the percentage of drops due to poor buffer distribution in tests T1 and T2. These are drops that would not have happened if the amount of buffered data at the receiver had been distributed differently. Our mechanism is completely optimal in this respect for the T1 tests, and performs fairly well for the T2 case. Clearly the mechanism becomes less optimal as K_max increases. The higher the value of K_max, the more buffering is allocated for higher layers. Hence there is a higher probability of dropping the highest layer while it still has some buffering, especially after sudden drops in available bandwidth such as when the CBR source appears. In essence, conservative buffering (i.e., higher K_max) enables the server to cope with wider variations in bandwidth. However, sudden drops in bandwidth in these situations result in lower efficiency.
        K_max=2   K_max=3   K_max=4   K_max=5   K_max=8
T1      0%        0%        0%        0%        0%
T2      2.4%      0%        4.8%      11%       -

Table 2: Percentage of layer drops due to poor buffer distribution
6 Related Work
Receiver-based layered transmission has been discussed
in the context of multicast video[1, 11, 21] to accommo-
date heterogeneity while performing coarse-grain conges-
tion control. This differs from our approach that allows
fine-grain congestion control for unicast delivery with no
step-function changes in transmission rate.
Merz et al. [13] present an iterative approach for send-
ing high bandwidth video through a low bandwidth channel.
They suggest segmentation methods that provide the flexi-
bility to playback a high quality stream over several itera-
tions, allowing the client to trade startup latency for quality.
Work in [7, 16, 17] discusses congestion control for streaming applications, focusing on rate adaptation.
However, variations of transmission rate in a long-lived
session could result in client buffer overflow or underflow.
Quality adaptation is complementary to these schemes because it prevents buffer underflow or overflow while effectively utilizing the available bandwidth.
Feng et al. [4] propose an adaptive smoothing mech-
anism combining bandwidth smoothing with rate adapta-
tion. The send rate is shaped by dropping low-priority
frames based on prior knowledge of the video stream. This
is meant to limit quality degradation caused by dropped
frames but the quality variation cannot be predicted.
Unfortunately, technical information for evaluation of
popular applications such as RealVideo G2 [14] is unavail-
able.
7 Conclusions and Future Work
We have presented a quality adaptation mechanism to
bridge the gap between short-term changes in transmission
rate caused by congestion control and the need for stable
quality in streaming applications. We exploit the flexibility
of layered encoding to adapt the quality along with long-
term variation in available bandwidth. The key issue is ap-
propriate buffer distribution among the active layers. We
have described a near-optimal mechanism that dynamically
adjusts the buffer distribution as the available bandwidth
changes by careful allocation of the bandwidth among the
active layers. Furthermore, we introduced a smoothing pa-
rameter that allows the server to trade short-term improve-
ment for long-term smoothing of quality. The strength of
our approach comes from the fact that we did not make
any assumptions about loss patterns or available bandwidth.
The server adaptively changes the receiver’s buffer state
to incrementally improve its protection against short-term
drops in bandwidth in an efficient fashion. Our simulation
and experimental results reveal that with a small amount
of buffering the mechanism can efficiently cope with short-
term changes in bandwidth due to AIMD congestion con-
trol. The mechanism can rapidly adjust the quality of the
delivered stream to utilize the available bandwidth while
preventing buffer overflow or underflow. Furthermore, by
increasing the smoothing factor, the frequency of quality
variation is effectively limited.
Given that buffer requirements for quality adaptation are
not large, we believe that these mechanisms can also be de-
ployed for non-interactive live sessions where the client can
tolerate a short delay in delivery.
We plan to extend the idea of quality adaptation to other
congestion control schemes that employ AIMD algorithms
and investigate the implications of the details of rate adaptation on our mechanism. We will also study quality adap-
tation with a non-linear distribution of bandwidth among
layers.
Finally, quality adaptation provides a perfect opportunity
for proxy caching of multimedia streams which we plan to
examine. The proxy would cache each stream and missing
pieces that are likely to be needed would be pre-fetched in
a demand-driven fashion.
References
[1] X. Li, M. Ammar, and S. Paul. Layered video multicast with retransmission (LVMR): Evaluation of hierarchical rate control. Proc. IEEE Infocom, March 1998.
[2] J. Bolot and T. Turletti. A rate control mechanism for packet
video in the internet. Proc. IEEE Infocom, pages 1216–1223, June 1994. http://www.tns.lcs.mit.edu/~turletti/.
[3] J. C. Bolot. Characterizing end-to-end packet delay and loss
in the internet. Journal of High Speed Networks, 2(3):289–
298, September 1993.
[4] W. Feng, M. Liu, B. Krishnaswami, and A. Prabhudev. A
priority-based technique for the best-effort delivery of stored
video. Proc. of Multimedia Computing and Networking, Jan-
uary 1999.
[5] S. Floyd and K. Fall. Promoting the use of end-
to-end congestion control in the internet. Un-
der submission, February 1998. http://www-
nrg.ee.lbl.gov/floyd/papers.html/end2end-paper.html.
[6] Microsoft Inc. NetShow services, streaming media for business. http://www.microsoft.com/NTServer/Basics/NetShowServices.
[7] S. Jacobs and A. Eleftheriadis. Real-time dynamic rate shap-
ing and control for internet video applications. Workshop on
Multimedia Signal Processing, pages 23–25, June 1997.
[8] Jae-Yong Lee, Tae-Hyun Kim, and Sung-Jea Ko. Motion prediction based on temporal layering for layered video coding. Proc. ITC-CSCC, 1, 1998.
[9] C. Lefelhocz, B. Lyles, S. Shenker, and L. Zhang. Congestion control for best-effort service: why we need a new paradigm. IEEE Network, 10(1), January 1996.
[10] S. McCanne. Scalable compression and transmission of in-
ternet multicast video. Ph.D. thesis, University of California
Berkeley, UCB/CSD-96-928, 1996.
[11] S. McCanne, V. Jacobson, and M. Vetterli. Receiver-driven layered multicast. ACM SIGCOMM, August 1996.
[12] S. McCanne and M. Vetterli. Joint source/channel coding
for multicast packet video. Proc. IEEE Intl. Conf. Image
Processing, 1995.
[13] M. Merz, K. Froitzheim, P. Schulthess, and H. Wolf. It-
erative transmission of media streams. ACM Multimedia,
November 1997.
[14] RealNetworks. HTTP versus RealAudio client-server streaming. http://www.realaudio.com/help/content/http-vs-ra.html.
[15] A. Ortega and M. Khansari. Rate control for video coding
over variable bit rate channels with applications to wireless
transmission. Proceedings of the 2nd IEEE International
Conference on Image Processing (ICIP), October 1995.
[16] J. Padhye, J. Kurose, D. Towsley, and R. Koodli. TCP-friendly rate adjustment protocol for continuous media flows over best effort networks. Technical Report 98-11, UMass CMPSCI, 1998.
[17] D. Sisalem and H. Schulzrinne. The loss-delay based adjust-
ment algorithm: A TCP-friendly adaptation scheme. Work-
shop on Network and Operating System Support for Digital
Audio and Video, 1998.
[18] W. Tan and A. Zakhor. Error resilient packet video for the
internet. Proceedings of the IEEE International Conference
on Image Processing (ICIP), October 1998.
[19] M. Vishwanath and P. Chou. An efficient algorithm for hierarchical compression of video. Proc. IEEE Intl. Conf. Image Processing, 1994.
[20] Reference was removed for anonymity.
[21] L. Wu, R. Sharma, and B. Smith. Thin streams: An architec-
ture for multicasting layered video. Workshop on Network
and Operating System Support for Digital Audio and Video,
May 1997.
A Derivation of formulae
A.1 Adding Condition 2
The left-hand side of this equation is the total amount of buffering for all layers, and the right-hand side is the area of triangle cde, assuming that the consumption rate is $n_a C$ (figure 3 shows a situation where the consumption rate is $n_a C$). Given that $L_{ij}$ is the length in figure 3 from $i$ to $j$, the area $A$ of the triangle is:

$$A = \frac{L_{cd} \, L_{ec}}{2}, \qquad \text{Slope } S = \frac{L_{ce}}{L_{cd}} \;\Rightarrow\; L_{cd} = \frac{L_{ce}}{S}, \qquad \text{hence } A = \frac{L_{ce}^{2}}{2S}$$

In this case $L_{ce} = R - n_a C$ because we are in the filling phase.
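As a concrete check on this algebra, the following minimal Python sketch computes the triangle area under the reconstruction above (a filling triangle of height $R - n_a C$ and rate-increase slope $S$); the function name and signature are illustrative, not anything defined in the paper.

    def filling_triangle_area(R, C, S, n_a):
        """Area of triangle cde in figure 3: the excess data accumulated
        while the transmission rate climbs from n_a*C to R with linear
        slope S.  Reconstructed form: A = L_ce**2 / (2*S),
        with L_ce = R - n_a*C."""
        L_ce = R - n_a * C                  # height of the filling triangle
        assert L_ce >= 0, "filling phase requires R >= n_a * C"
        return L_ce ** 2 / (2 * S)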
A.2 The Dropping Mechanism
The dropping mechanism simply calculates the area of triangle cde in figure 3 based on equation (1). We solve that equation to find the value of $n_a$ that requires buffering less than or equal to the aggregate amount of buffering. Note that during the draining phase, the transmission rate is always less than the consumption rate ($R < n_a C$).
[Figure 14: $Buf_{total}$ for scenario 2. Transmission rate $R$ versus time, marking the first backoff, the $(k-1)$-th and $k$-th backoffs, and the $(k - k_1)$ sequential backoffs.]
A.3 Optimal Inter-layer Buffer Allocation
The optimal buffer share for each layer in a single backoff scenario (figure 4) can be viewed as the area between two triangles. For example, the area of quadrilateral bcde is the difference between the areas of triangles ade and acb. The area of each triangle can be calculated using equation (1). Note that $L_{cd} = L_{dg} = C$ but $L_{ac} \leq C$. Thus we have an exceptional case for the optimal buffer share of the highest buffering layer ($n_b$).
A.4 Derivation of $Buf_{total}$

Note that there is a minimum number of backoffs needed to drop the transmission rate below the consumption rate; otherwise the total required buffering is zero, because we do not experience a draining phase. The value of $Buf_{total}$ for scenario 1 can be calculated similarly to a single backoff scenario (figure 3) using equation (1), except that the transmission rate after $k$ backoffs is $\frac{R}{2^k}$.

Figure 14 illustrates the calculation of $Buf_{total}$ for scenario 2. Here $k_1$ is the minimum number of backoffs needed to drop the transmission rate below the consumption rate; the remaining $(k - k_1)$ backoffs then occur in sequential fashion. $Buf_{total}$ is equal to the shaded area in figure 14. Using equation (1), we can calculate the area of the first triangle and the area of one of the subsequent triangles, since the subsequent triangles all have the same size.
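On this reading, $Buf_{total}$ for scenario 2 is a sum of triangle areas: one left after the first $k_1$ backoffs plus $(k - k_1)$ identical ones. The sketch below takes the two deficit heights as inputs rather than deriving them, since their exact geometry depends on figure 14; all names are illustrative.

    def buf_total_scenario2(first_gap, later_gap, k, k1, S):
        """Shaded area of figure 14: the draining triangle left after
        the first k1 backoffs plus (k - k1) identical triangles from
        the sequential backoffs.  Each area follows equation (1)."""
        def area(gap):
            return gap ** 2 / (2 * S)
        return area(first_gap) + (k - k1) * area(later_gap)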
A.5 Derivation of $Buf_{i_{opt}}$

The value of $Buf_{i_{opt}}$ can be calculated by extending the idea of optimal buffer allocation for a single backoff (section 2.4) to scenarios 1 and 2. For scenario 1, we only need to replace $R$ with $\frac{R}{2^{k-1}}$, because we simply have a bigger triangle.

Considering figure 14 for scenario 2, we can calculate the optimal buffer share of layer $i$ for each individual triangle using the optimal buffer allocation for a single backoff, and use the cumulative buffer share as the optimal allocation. Note that for the last $(k - k_1)$ backoffs, we can calculate the optimal share for one triangle and simply multiply it by $(k - k_1)$.