INFORMATION TO USERS
This manuscript has been reproduced from the microfilm master. UMI
films the text directly from the original or copy submitted. Thus, some
thesis and dissertation copies are in typewriter face, while others may be
from any type of computer printer.
The quality of this reproduction is dependent upon the quality of the
copy submitted. Broken or indistinct print, colored or poor quality
illustrations and photographs, print bleedthrough, substandard margins,
and improper alignment can adversely affect reproduction.
In the unlikely event that the author did not send UMI a complete
manuscript and there are missing pages, these will be noted. Also, if
unauthorized copyright material had to be removed, a note will indicate
the deletion.
Oversize materials (e.g., maps, drawings, charts) are reproduced by
sectioning the original, beginning at the upper left-hand corner and
continuing from left to right in equal sections with small overlaps. Each
original is also photographed in one exposure and is included in reduced
form at the back of the book.
Photographs included in the original manuscript have been reproduced
xerographically in this copy. Higher quality 6” x 9” black and white
photographic prints are available for any photographs or illustrations
appearing in this copy for an additional charge. Contact UMI directly to
order.
UMI
A Bell & Howell Information Company
300 North Zeeb Road, Ann Arbor MI 48106-1346 USA
313/761-4700 800/521-0600
VIRTUAL PATH ROUTING AND
CAPACITY DESIGN
IN AN ATM NETWORK
by
Tien-Shun Gary Yang
A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(Electrical Engineering)
March 1995
Copyright 1995 Tien-Shun Gary Yang
UMI Number: 9621654
UMI Microform 9621654
Copyright 1996, by UMI Company. All rights reserved.
This microform edition is protected against unauthorized
copying under Title 17, United States Code.
UMI
300 North Zeeb Road
Ann Arbor, MI 48103
UNIVERSITY OF SOUTHERN CALIFORNIA
THE GRADUATE SCHOOL
UNIVERSITY PARK
LOS ANGELES, CALIFORNIA 90007
This dissertation, written by
Tien-Shun Gary Yang
under the direction of his Dissertation
Committee, and approved by all its members,
has been presented to and accepted by The
Graduate School, in partial fulfillment of
requirements for the degree of
DOCTOR OF PHILOSOPHY
Dean of Graduate Studies
Date
DISSERTATION COMMITTEE
Chairperson
Dedication

This work is dedicated to my dear parents, who have supported me throughout my
academic career and always nourished my hope, and to my beloved wife Wei-Ling,
who shared my self-imposed pressure and bore the frustration. It is also a milestone
set by the Lord whom I serve.
Acknowledgements

My sincere thanks to Dr. Victor O. K. Li, who is more than a great academic
advisor. Thanks to Professor John Silvester and Professor Douglas Ierardi for their
instruction. Thanks to my academic group members at USC for their constructive
criticism and help, and thanks to Milly Montenegro for her administrative
help. Last but not least, thanks to my PacBell colleagues, especially Sherrie
B. Littlejohn, my advisor at Pacific Bell.
Contents

Dedication
Acknowledgements
List of Figures
List of Tables
List of Notations
Abstract

1 Introduction
  1.1 ATM Basics
  1.2 Fast Packet Technology

2 System Model
  2.1 Asynchronous Multiplexer and ATM Switch
  2.2 Bandwidth Assigned and Quality of Service
  2.3 ON/OFF Traffic and QoS Measurement
  2.4 Virtual Path Capacity and Routing Design

3 Bandwidth Allocation and CAC
  3.1 Network Management
  3.2 Bandwidth Allocation
    3.2.1 Assumptions
    3.2.2 Bandwidth Queue
    3.2.3 One class of traffic
    3.2.4 Two/Multiple classes of traffic
  3.3 Path Cost

4 Virtual Path Routing and Capacity Design
  4.1 The Virtual Path Concept and the Virtual Path Layout
  4.2 Constrained non-linear optimization problem
  4.3 The Reduced Gradient Algorithm
  4.4 Joint Routing and Capacity Design in a Virtual Path Network
  4.5 Example
  4.6 Solving an Optimization Problem

5 Bandwidth Allocation with Multiplexing Gain
  5.1 Iterative Approach
  5.2 Convergence
  5.3 Numerical Result

6 Conclusions and Future Research Directions

Appendix A: Dynamic VP Capacity Design
Appendix B: Ergodicity

Bibliography
List of Figures

2.1 System Model
2.2 Storage probability
2.3 User cost calculation
2.4 An example of the convex characteristics
3.1 Decision making on the bandwidth queue
3.2 Decision for a delay (jitter) sensitive call under different loads
3.3 Decision for a loss sensitive call under different loads
3.4 Decisions for both calls under medium load
3.5 Markov chain is completed after decisions are made
3.6 Steady state probabilities of three Markov chains
3.7 Flow chart of finding the average cost of a virtual path
3.8 An example of the convex characteristics
3.9 An example of the convex characteristics
4.1 Network example
5.1 Iteration approach
5.2 Converged p.m.f.
5.3 Steady state probability
5.4 Primary Bandwidth Assignment
5.5 Multiplexing Gain
5.6 State cost
5.7 State cost
5.8 Steady state probability
5.9 Primary Bandwidth Assignment
5.10 Multiplexing Gain
List of Tables

A.1 QoS failure under bandwidth control
List of Notations

ADM - Add Drop Multiplexer
ATM - Asynchronous Transfer Mode
BISDN - Broadband Integrated Services Digital Network
CAC - Connection Admission Control
CIR - Committed Information Rate
CO - Central Office
CPE - Customer Premises Equipment
DCS - Digital Cross Connect
DSU - Digital Service Unit
DVP - Deterministic Virtual Path
FOT - Fiber Optic Terminal
FRS - Frame Relay Service
FRAD - Frame Relay Assembler/Disassembler
MG - Multiplexing Gain
MMPP - Markov Modulated Poisson Process
MTU - Maximum Transmission Unit
PVC - Permanent Virtual Connection
SAR - Segmentation And Reassembly
SMDS - Switched Multimegabit Data Service
SONET - Synchronous Optical NETwork
SVP - Statistical Virtual Path
SVC - Switched Virtual Connection
UPC - Usage Parameter Control
VC - Virtual Channel
VCI - Virtual Channel Identifier
VP - Virtual Path
VPI - Virtual Path Identifier
vc - virtual circuit
Abstract

Asynchronous Transfer Mode (ATM) networks are expected to be the core of
Broadband ISDN (Integrated Services Digital Network). ATM networks will
handle voice, data, and video in an efficient manner. A customer obtains "bandwidth
on demand" by transmitting cells into the system within a certain rate
range derived from the QoS (Quality of Service) requirement; therefore cell delay
and cell loss are controllable, and bandwidth is shared statistically. In legacy
systems, voice fits into DS0 (64 Kbps) and lightly compressed video fits into DS3
(45 Mbps); they are circuit switched. Files are processed through a fixed-size pipe
(DS0, DS1 or higher level) associated with current X.25 packet switching networks.
Basically they are handled by independent channels and are multiplexed
hierarchically rather than statistically.

The advent of the information superhighway will present an unprecedented demand
for bandwidth. Statistical multiplexing will become an important technique
for next-generation networks. In a hierarchically multiplexed string of bits on
a physical path, information is identified by its position in the bit stream, in
other words, by its time slot in the TDM frames. This transport mode is called
STM (Synchronous Transfer Mode). In contrast, in an ATM network a specific
connection is identified only by the VPI/VCI (Virtual Path Identifier/Virtual
Channel Identifier) in its header; it does not depend on the position of time
slots. Such an "asynchronous" feature is valuable because the network capacity
becomes dynamic, allowing the bandwidth to be efficiently utilized. In a BISDN,
bandwidth is assigned so as to satisfy different QoS requirements for variable
bit rates and diversified traffic characteristics. Since bandwidth must be assigned
statistically to meet the fluctuating demands of multimedia traffic, ATM is
expected to be the solution.

Statistical multiplexing is the key feature of ATM: service quotas are assigned
to end users dynamically according to their needs. Therefore, an ATM network
can be oversubscribed to a certain degree to increase efficiency. But first of all,
the ATM multiplexer firmware must be able to perform "idle detection", i.e., to
know which input traffic is currently in its "OFF" period. The next step is to
assign the corresponding free quota to a connection with a large bandwidth demand.
The first-generation ATM multiplexer does not have the "idle detection"
ability; it is only a fast cell switching machine, not a true ATM machine. In
anticipation of the "idle detection" capability, a bandwidth allocation algorithm
is studied for the next-generation ATM, which includes QoS (Quality of Service)
measurement, admission control and congestion control. Also, since SVC
(Switched Virtual Connection) will be included in the next-generation ATM besides
the current PVC (Permanent Virtual Connection), VP (Virtual Path) routing
and capacity design will become crucial in the near future. We link the VP
routing and capacity design together with the bandwidth assignment algorithm.
In order to construct a planning tool for the coming "true" ATM networks, we
formulate a constrained optimization scheme for the whole network, with the cost
function derived from the bandwidth assignment algorithm. The cost includes
delay jitter, cell loss, rejection cost and the system cost. The optimal VP capacities
and routing ratios are the decision variables, which can be determined by
applying the Gradient Projection method to the optimization problem formulated.

The major contribution of this work is to study asynchronous multiplexing
from the network element layer (the hardware level) up to the network management
layer; everything is designed based on hardware feasibility and in a bottom-up
fashion. No current ATM research attempts to build a model based on the
asynchronous multiplexing ability; current academic work either bypasses the
element layer ([7, 12, 13, 16, 21, 18, 31, 55, 64, 65], etc.) or applies queuing models
to analyse the asynchronous multiplexing ([17, 26, 35, 57, 58], etc.). However,
the queuing behavior lies beyond the asynchronous multiplexing layer, since
cells do not enter the queue unless multiplexed. The queuing behavior is therefore
a result of asynchronous multiplexing, and such models are fundamentally
incorrect.

The other major contribution is to design a platform for the joint VP routing
and capacity assignment problem, which will be crucial in the near future. No
similar research effort has been attempted for ATM networks. Last but not
least, ATM issues such as traffic characteristics, QoS measurement, admission
control, congestion control, bandwidth assignment, VP capacity and VP routing
have so far been studied individually. This work integrates all fundamental ATM
issues by providing the necessary links, so that the problems are unified, and a
way to find the global optimal solution for this integrated problem is reported.
Chapter 1

Introduction

1.1 ATM Basics
In STM networks, customers are synchronously multiplexed (in a round-robin
fashion) and demultiplexed hierarchically through different levels. Therefore the
network capacity (the multiplexer's "quota" per unit time) distributed to each
user is fixed, and the cost of multi-stage multiplexing and demultiplexing is
high. In ATM networks, customers are asynchronously multiplexed and direct
multiplexing/demultiplexing is possible; therefore delay and loss become controllable,
and the hardware cost is reduced. To support a wide variety of customers with
diversified bandwidth demands, STM turns out to be inefficient and costly; thus
ATM is recognized as the solution for BISDN.

Multimedia traffic generated by users is segmented into cells at the user-network
interface and reassembled at the network-user interface. This can be
done by a SAR (Segmentation And Reassembly) chip. A cell is composed of a
5-byte header and a 48-byte payload; it is a fixed-size packet, so multiplexing
and switching are easier and can be independent of applications. It is also a small
packet, whose benefits are: 1) small latency: voice will suffer intolerable jitter
if chopped into long packets; 2) "bandwidth on demand" requires manipulating
bandwidth in units of packets (cells), so the smaller the cell, the
more efficient the bandwidth management.
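As a concrete illustration of the fixed cell format just described, the minimal sketch below models a 53-byte cell. The header field widths follow the standard ATM UNI layout (GFC, VPI, VCI, payload type, CLP, HEC), which the text does not spell out; they are included here only for orientation.

    from dataclasses import dataclass

    @dataclass
    class AtmCell:
        """Fixed-size ATM cell: 5-byte header plus 48-byte payload (53 bytes)."""
        gfc: int        # 4-bit Generic Flow Control (UNI header format)
        vpi: int        # 8-bit Virtual Path Identifier, used for routing
        vci: int        # 16-bit Virtual Channel Identifier, demultiplexes channels in a VP
        pt: int         # 3-bit payload type
        clp: int        # 1-bit Cell Loss Priority
        hec: int        # 8-bit Header Error Control
        payload: bytes  # always exactly 48 bytes

        def __post_init__(self):
            if len(self.payload) != 48:
                raise ValueError("ATM payload is always 48 bytes")

Because the cell is fixed-size, a switch can make its forwarding decision from the VPI/VCI alone, independent of the application generating the payload.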
There are two types of virtual connections in ATM networks: PVC (Permanent
Virtual Connection) and SVC (Switched Virtual Connection). A PVC is
analogous to a leased line dedicated to a specific subscriber. However, it is
not a physically dedicated line, but is shared with other connections. For example,
a physical link is usually over-booked by several PVCs; as long as those PVCs
are not carrying traffic all the time, the QoS of each connection can
be satisfied through the asynchronous multiplexing ability. An SVC is similar to a
telephone call, whereby a connection is initiated by a call-setup message from
the origin, and it shares the network resources with other connections through
the asynchronous multiplexing ability. The bandwidth for both PVCs and SVCs
is not deterministic; that is why we call them "virtual connections". PVC
service is offered currently. Customers, such as banks, universities and hospitals,
subscribe to PVCs from the B-ISDN provider, who configures their PVCs from switch
to switch manually. To change a PVC, the customer must call the provider, and
reconfiguration takes days. SVC is the next phase; a route is picked in the SVC
setup stage.

There are two types of routes: VPC (Virtual Path Circuit) and vcc (virtual
circuit connection; we use lower case "vc" for virtual circuit and upper case "VC"
for Virtual Channel). A VPC travels along a single VP. The VPI is used for routing,
and the VCI is used to demultiplex different virtual channels within the same VP,
i.e., different connections with the same O-D pair on the same route. A vcc
takes a route concatenated from more than one VP, therefore the VPI and part of
the VCI are used for routing. Since the routing decision is made in order to minimize
the network cost, the VP capacity and routing decisions become extremely important
for SVC, which is studied in our work.
Conceptually, a route is pre-set in the call setup stage: ATM cells are packet-switched
using virtual circuit switching, as in an X.25 network, rather than datagram
switching. However, there are two major differences between ATM and
virtual circuit switching: 1) virtual paths (VPs), which are disjoint bandwidths
abstractly configured for different routes, are implemented in ATM, and cells
(packets) are multiplexed within VPs instead of within a link; packets in an X.25
network with a virtual circuit switching framework are multiplexed within physical
links. 2) Routing in virtual circuit switching can be very flexible, but candidate routes
for SVC are determined according to the VP layout, so routes are quite
limited compared to the virtual circuit switching of X.25. The VP topology
is fixed for a permanent VP design and changed weekly for a semi-permanent
VP design. The VP capacity can be either permanent or dynamic. The latter was
first proposed by Sato [47] (called "bandwidth control"), whereby the bandwidth
utilization is increased. However, this conclusion is drawn under simple assumptions
and neglects the negative effects. We suggest deterministic VP capacity assignment
instead of statistical assignment; this is explained in Appendix A.

High-speed input traffic in ATM networks makes the ATM switch buffering
problem acute. In addition, a simple FIFO (First In First Out) queuing discipline
does not fit different QoS requirements, thus further degrading the switch performance.
The Newbridge ATM switch currently adopted by Pacific Bell has
more than 300 cells of buffering per input link, but the performance is still
not quite satisfactory, and more storage is expected. However, the buffer size is
limited by the current switching card design, which is too costly to redesign;
thus larger buffers are considered for future generations. Buffers are used to
resolve the blocking problem: internal and/or output blocking, which causes either
cell loss or intolerable delay jitter. Internal blocking can be solved by properly
rearranging traffic through (3log2(N) - 4) stages in a Banyan-based switch [62], or
by sorting input traffic with a Batcher circuit [2, 39, 49], rather than by buffering.
However, the only solution for output blocking is to buffer collided cells. From
the academic point of view, enlarging the buffer size under a FIFO discipline
will never pay off for ATM networks: the increased delay may exceed the QoS
requirement for one thing, and a high-speed network favors buffering outside the
network for another. In fact, it is recommended not to use buffers to solve the
blocking problem, since an ATM network of PVCs with well-designed VP capacities
can avoid output blocking; as for internal blocking, alternatives have
been mentioned. This idea is explained later. This design reduces the buffer cost
and simplifies switching operations compared to a non-FIFO queuing scheme
proposed to satisfy different QoS requirements.

The customer end of a switch link can be directly connected to a single end
user or to a multiplexer in which different low-speed users are multiplexed. Usually,
an ATM switch input link will carry multiplexed cells with different QoS requirements
from different end users, and they are asynchronously multiplexed
within the virtual paths within that link. The number of cells of a
particular connection carried by this link per unit time is considered the bandwidth
obtained, which is determined by the QoS requirements and traffic characteristics
of all co-existing and potential connections within the same VP
of a link. Users with low bandwidth demands (less than 45 Mbps) are multiplexed
asynchronously up to a high signal level (usually 155 Mbps) in a multiplexer. CAC
and bandwidth allocation are implemented inside the multiplexer before the high-speed
cell stream is sent to the ATM switch. Users with high bandwidth demands usually
send their signal directly to the ATM switch, and management functions are
implemented in the input port of the switch. No matter where the management
functions are performed, the multiplexing gain from asynchronously multiplexed
cells, the supreme feature of ATM, is desired. At this stage, ATM products
are not popular with lower-speed users, thus commercial multiplexers are not as
popular (ADC Cantronic is the only company known to make asynchronous multiplexers)
as ATM switches (Newbridge, AT&T, ATM, GD, etc.). However, ATM
multiplexers will be widely used in the near future, the fundamental control
and management enforced in the multiplexer becomes crucial, and a model with
ATM multiplexers is proposed in this thesis. Voice, data, video and other media
are expected to be asynchronously multiplexed onto a single BISDN link through a
multiplexer, and this digital link is further multiplexed asynchronously into high-speed
links which enter the DCS (digital cross-connect) system in a CO (central
office). The DCS interfaces to an FOT (Fiber Optic Terminal), which transforms
electrical signals to optical signals for interfacing to an ATM switch.

Queuing models have been widely used to evaluate the ATM multiplexer and
switch performance. Since we focus on a system with multiplexers where CAC
and bandwidth allocation are performed, the multiplexer queuing is studied. Infinite
multiplexing ability is implicitly assumed in the queuing models currently
proposed. Thus, source traffic is assumed to enter an ATM network in real time.
Realistically, a multiplexer's capacity is restricted by the device scale and speed
limit. A practical model stores input traffic in "input buffers"; cells in
different input buffers then contend for the multiplexer bus. In an ideal ATM multiplexer,
the bus arbitrator performs asynchronous multiplexing by first detecting
idle periods on all input buffers and then polling each link in a weighted round-robin
fashion. The "weight" is decided according to the bandwidth allocation scheme,
whereby the minimum QoS requirement of each user is satisfied. However,
performing queuing analysis for such a system is difficult. Also, in those models
overflow cells are stored inside the network. Overflow cells occur when the burst
arrival rate is higher than the equivalent bandwidth (the primary bandwidth
plus the multiplexing gain through statistical multiplexing) obtained. Buffering
is preferably performed outside a high-speed network. Therefore, an ideal
ATM multiplexer confines the overflow cells outside the network and queues them
in the user's buffer. Besides its congestion control advantage, the long-term statistically
averaged QoS, the quality of service (loss and delay jitter) measured
through the proposed local queuing model, will be much closer to the quality of
service (QoS) perceived by customers. A sketch of such an arbitrator is given below.
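The following minimal sketch (hypothetical names; an illustration, not the dissertation's implementation) shows how a bus arbitrator might skip idle input buffers and poll active ones in proportion to their allocated weights.

    def weighted_round_robin(buffers, weights, slots):
        """Poll non-idle input buffers in proportion to their weights.

        buffers: dict port -> list of queued cells (a port is idle if empty)
        weights: dict port -> integer quota per polling cycle
        slots:   number of bus slots to schedule
        Returns the sequence of (port, cell) transmissions.
        """
        # One polling cycle visits each port weights[p] times.
        cycle = [p for p in buffers for _ in range(weights[p])]
        served, i = [], 0
        for _ in range(slots):
            # Scan at most one full cycle looking for a non-idle port.
            for k in range(len(cycle)):
                p = cycle[(i + k) % len(cycle)]
                if buffers[p]:                    # idle detection: skip empty buffers
                    served.append((p, buffers[p].pop(0)))
                    i = (i + k + 1) % len(cycle)  # resume after the served port
                    break
            else:
                break                             # every buffer idle: nothing to send
        return served

    # Example: port A has weight 2 (two slots per cycle), port B weight 1.
    print(weighted_round_robin({"A": ["a1", "a2", "a3"], "B": ["b1"]},
                               {"A": 2, "B": 1}, slots=4))
    # [('A', 'a1'), ('A', 'a2'), ('B', 'b1'), ('A', 'a3')]

In a real multiplexer the weights would come from the bandwidth allocation scheme of Chapter 3, so that each user's minimum QoS requirement is met.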
ATM is recognized not only as a multiplexing but also as a transport technique,
characterized by the VP (Virtual Path) concept, which is proposed as a
promising solution to support BISDN in ATM networks [8, 24, 39, 50, 54, 55, 60].
The goal of virtual paths is to minimize network management and control costs,
and it appears to be the best approach to enhance ATM network performance
[28, 48, 54, 61]. Path capacity assignment, admission control, and bandwidth
allocation are carried out at the originating VP terminator (an edge node), with
no further processing necessary (except switching) at nodes along the path (internal
nodes); this is desirable for high-speed networks. Also, the routing decisions
are made at the call set-up phase, and cells belonging to a virtual channel (an
accepted call is assigned certain bandwidth within a VP, called a virtual channel)
follow the same virtual path, thereby requiring a smaller reassembly delay and a
smaller destination buffer.

A VP is a logical path; its capacity can be measured by counting the number
of cells with a particular VPI (virtual path identifier) per unit time, which is
determined by the connection admission control (CAC) and bandwidth allocation
schemes (including traffic characteristics and QoS requirements) implemented
at the edge node originating this VP. The multiplexing effect exists between
virtual channels within a VP, but not between VPs within a link.
This is different from the virtual circuit switching technique in traditional packet
switching networks, whereby packets are multiplexed over the whole link instead of
part of a link. Even though it sacrifices some multiplexing gain compared to
virtual circuit switching, the VP concept is economical for links supporting a wide
range of QoS requirements. The conservation of logical path capacities reduces
the output blocking probability of ATM cross-connect systems.

VP capacities are designed based on the CAC, bandwidth allocation, traffic
characteristics and QoS requirements, and are controlled at edge nodes. To
exploit the statistical multiplexing effects, Sato [47] suggests assigning dynamic
VP capacities. With dynamic VPs, the VP capacities are not hierarchical
and can be independent of the routing scheme. Furthermore, a dynamic network
reconfiguration ability is provided. However, all these benefits are obtained at
the expense of more complex operations, especially the coordination between up-
and down-stream links of a VP. This cost is not taken into account by Sato,
since he calculated the operation cost for a VP traversing only one link. If a VP
capacity changes according to demand without proper coordination along
the links it travels, then the virtual path capacity on a down-stream link may
not be able to expand with the up-stream one. Therefore the QoS guaranteed
by the up-stream link VP may not be satisfied on down-stream links, and vice
versa. Such a potential QoS failure is basically not permitted and will become
a severe problem for a virtual circuit connection which travels on more than
one VP. Even if the coordination cost is tolerable and a dynamic VP capacity
assignment scheme is worth the coordination, the initial VP capacity assignments
are still required, obtained by solving a joint optimal VP capacity and routing design
problem, since the optimal solution will be the operating point of the given network,
and dynamic adjustments are implemented around that point. In the dynamic
scheme, the VP capacities fluctuate around the optimal VP assignments with a
long-term average movement of zero. This is another motivation of this work.
We assume that the network topology, link capacities and VP layout are given,
and that all traffic characteristics, QoS requirements and rejection costs are known;
we then try to design the best VP capacity and routing assignments. Actually,
another advantage of the VP concept is to simplify the network architecture.
Through this research, the steady state probabilities of the VP occupations are
known as well as the total path cost, therefore an ATM network topology and
VP layout design can also be studied based on this work.

Massive numbers of connections with different QoS requirements are expected to be
satisfied by the multiplexing gain. The equivalent bandwidth obtained for each
connection is the index for QoS; however, a calculation model is needed, and it
is related to the bandwidth allocation as well as the admission control scheme.
We link all fundamental ATM issues and develop a handy tool which parcels out
this complex relationship and untangles a joint CAC and bandwidth allocation
problem in the optimal sense. The final target is to plan a joint virtual path routing
and capacity design by solving a constrained non-linear optimization problem
whose cost function is derived from that tool. The purpose of the VP routing
and capacity design is to distribute the network resources optimally, so
that the aggregate network cost (including cell loss, delay jitter, rejection and
priority costs of all customers) is minimized.
1.2 Fast Packet Technology
Fast packet services switch packets over high speed digital facilities (typically
fiber; very low error rate medium) shared by users, whose speed is much higher
since error-correction is moved to users’ CPE (Customer Premise Equipment)
instead of switches. Consequently, the network throughput is greatly increased.
Frame relay, SMDS (Switched M ultimegabit D ata Service) and ATM are the
current fast packet service products. Frame relay and SMDS are commercially
available while ATM is still in the development stage.
F ra m e R e la y Frame relay is a file transfer service with DSO (64 Kbps) or
DSl (1.544 Mbps) access rates over a pre-subscribed path between two frame
relay end-users. A data file is converted into frames by FRAD (frame relay
assembler/dissembler), which may be an internal card of a host or a PC. A
frame includes a variable sized data section and the m axim um transmission unit
(MTU) is 8192 octects. D ata files will not be segmented unless they exceed the
9
MTU. The MTU affects the switch memory allocations. Frames are passed from the
FRAD to a CSU/DSU (Channel Service Unit/Data Service Unit), which provides
the correct electrical interface to a frame relay access port via either 56 Kbps or
1.544 Mbps access lines. Frame relay port speed is shared by different customers:
50 permanent virtual connections per DS0 port, 250 connections per
DS1 port (a virtual connection is not dedicated and will be torn down
afterwards). Each connection is assigned a CIR (Committed Information Rate).
Data may be sent at a rate higher than the CIR if access capacity is available.
Thus, connections are statistically multiplexed within an access link. Such a
policing function can be implemented in a CPE, a FRAD or the switch.
Current frame relay service executes this function inside the switch. Since the
switching operation speed is much higher than the frame relay data rate, not
much processing cost is incurred.
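The text does not specify how the CIR policing is realized; a token bucket is one common mechanism, sketched below under that assumption (all names are hypothetical).

    def cir_policer(cir_bps, burst_bytes):
        """Token-bucket sketch of CIR policing: tokens accrue at the committed
        rate; a frame is forwarded only if enough tokens are available."""
        state = {"tokens": burst_bytes, "last": 0.0}

        def admit(frame_bytes, now):
            # Refill tokens for the elapsed time, capped at the burst size.
            state["tokens"] = min(burst_bytes,
                                  state["tokens"] + (now - state["last"]) * cir_bps / 8)
            state["last"] = now
            if state["tokens"] >= frame_bytes:
                state["tokens"] -= frame_bytes
                return True        # within the committed rate: forward
            return False           # excess traffic: mark or drop

        return admit

    # Example: a DS0 connection with CIR 64 Kbps and one MTU of burst credit.
    admit = cir_policer(cir_bps=64_000, burst_bytes=8192)
    print(admit(1500, now=0.1))   # True while within the committed rate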
SMDS. SMDS is a fixed-size (53-byte) packet-switched connectionless network
with DS1 (1.544 Mbps) or DS3 (45 Mbps) access rates. It is also a cell
relay-based technology that follows the IEEE 802.6 LAN standard for the Metropolitan
Area Network (MAN), a public packet switching network with a high access port
rate. Its lowest throughput is 1.17 Mbps, and it is mainly used for graphics. Connectionless
cells are screened by the SMDS network so that only authorized SMDS cells
are accepted.
Comparing with ATM. There is no similarity between ATM and SMDS except
that they are both cell-based. Both frame relay services and ATM employ statistical
multiplexing, but ATM is cell-based and usually works at 155 Mbps
or above. Besides, an integrated voice, data file and video service is supported by
ATM, while only data service is provided by FRS (Frame Relay Service). Also,
statistical multiplexing in ATM exists between virtual channels within a virtual
path, not between virtual paths within a link. However, there is no virtual path
or virtual channel concept in frame relay services.
Virtual channels with different primary bandwidths are multiplexed within
a VP; therefore the equivalent bandwidth of each virtual channel (each call) is
larger than the primary assignment. In Chapter 3, a prototype bandwidth queue
model for primary bandwidth assignments is introduced. Assignments are revised
according to the available bandwidth of a VP, while taking both the QoS
and future rejection costs into account. In Chapter 5, a modified model which
calculates the future cost probability together with the multiplexing gain with
greater accuracy is discussed. It is a batch arrival and batch departure queuing
model; the arrival batch size (bandwidth assigned) is determined optimally
based on the departure batch size, while the bulk departure process is a
direct result of the bulk arrival process. The CAC is enforced implicitly by the
bandwidth allocation model. In the same chapter, the path cost is derived through
the QoS measurement and bandwidth allocation schemes proposed. In Chapter
4, an optimization problem is formulated for the joint VP capacity and routing
design of an ATM network, with the sum of all path costs as the objective function,
derived from the bandwidth allocation, CAC and QoS measurement
schemes proposed. Chapter 6 concludes this work and proposes future research
directions. The "QoS failure" problem for a dynamic virtual path capacity
design is brought up in Appendix A, where a trade-off table is given. Ergodicity of
the "bandwidth queue" is discussed in Appendix B.
Chapter 2

System Model

2.1 Asynchronous Multiplexer and ATM Switch
Currently, ATM multiplexers are not able to perform idle detection and asynchronous
polling; therefore high-capacity buffers are required at input ports to
store cells first, and then cells from different ports compete for access to the multiplexer
bus. The bus arbitrator usually collects cells from input ports in a round-robin
fashion. The bus speed is much higher than the traffic rate on each link, but
it is moderately slower than the sum of all input rates; otherwise, asynchronous
multiplexing gains nothing. Since the bus arbitrator spends very little
time on idle ports (simply "checking"), it spends more time on ports accumulating
bursty cells, and thus statistical multiplexing is achieved. However,
if idle detection can be performed and the bus arbitrator can schedule the polling
asynchronously according to the bandwidth required on each input port, instead
of in a pure round-robin fashion, then the multiplexing gain can be increased to
a great extent. The asynchronous scheduling function of an ATM multiplexer is
transparent if we assume an infinite multiplexing speed. Simulation and simple
analytic models have been developed based on this assumption, resulting in queuing
analysis inside an ATM network. Such internal queuing is outperformed by
local buffering outside the network with a practical multiplexer, since: 1) local
congestion is more easily resolved than internal congestion; 2) bursty sources
incur bursty loss, which deviates from the statistical analysis of a queuing model;
3) real-time cells with intolerable delay jitter should be dropped before entering
the network; 4) service-dependent congestion control which drops less significant
cells is easily implemented by end users. However, the ATM CPE buffer becomes
an additional capital cost in a local buffering approach, and the debate between local
and central buffering continues. Our proposed model is built on local buffering and
idle detection; it exploits the advantages of statistical multiplexing.
An example of a multiplexer with finite multiplexing capacity is shown in
Fig. 2.1. A queue is not necessary inside the multiplexer, but a scheduling
function which collects cells from different users asynchronously is required. The
primary bandwidth assigned to each call is the primary quota assigned by the
scheduling function. When the multiplexer detects a period of idle traffic for a
particular connection, the scheduling function puts the corresponding quotas
into a "quota pool" shared by all connections. In other words, a call is served
with an "equivalent bandwidth" which is higher than the primary rate, and the
difference is defined as the "multiplexing gain", which depends on how many calls
are accepted in the system, how much primary bandwidth is assigned to each call,
and how the pool quotas are distributed, as sketched below.
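A minimal sketch of this bookkeeping follows (hypothetical names; the optimal division of the pool is the subject of Chapters 3 and 5, and an even split is used here only for illustration).

    def equivalent_bandwidths(primary, active):
        """primary: dict call -> primary bandwidth quota
        active:  set of calls currently in an ON period
        Idle calls contribute their quotas to a shared pool, which is divided
        evenly among the active calls as multiplexing gain."""
        pool = sum(bw for call, bw in primary.items() if call not in active)
        gain = pool / len(active) if active else 0.0
        return {call: primary[call] + gain for call in active}

    # Example: call "c" is idle, so its quota is shared by "a" and "b".
    print(equivalent_bandwidths({"a": 10, "b": 5, "c": 6}, active={"a", "b"}))
    # {'a': 13.0, 'b': 8.0}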
[Figure 2.1: System Model. Ethernet and FDDI sources enter an asynchronous scheduling function for multiplexing/demultiplexing at an edge node of the ATM network; inside the network, VP1 and VP2 (cell interarrival time 1/C each) share Link 1 of an ATM switch, which serves one cell every 1/(2C).]
2.2 Bandwidth Assigned and Quality of Service
The relationship between the "virtual channel capacity" and the "user cost" is
difficult to determine. Simulation methods are adopted in [12, 13, 18, 64], and
analytic approximation models are suggested in [7, 16, 21, 31]. This relationship
is also affected by the congestion control scheme adopted. CAC (connection
admission control) and UPC (Usage Parameter Control) are proposed for congestion
control by CCITT I.311 [23, 24, 30]. UPC and the bandwidth allocation
algorithms are generally associated [65]. CAC plus UPC is suggested by CCITT
to perform the traffic control and the resource management (bandwidth allocation)
for each O-D pair [24, 30], and this recommendation is adopted by us. The
CAC function accepts a new connection if the required QoS for the connection
can be guaranteed without deteriorating the QoS of the connections within the
same path, and without degrading the QoS of other paths which share the same
links. The UPC function polices the traffic of the admitted calls to ensure they
follow the established parameters. In other words, each call is served with a rate
upper-bounded by the peak rate and lower-bounded by the quota (bandwidth)
assigned. Bandwidth assigned to a call may be unused during a "low cell arrival
rate period", and can be used to serve other cells of the same path if necessary.
Thus, cells are multiplexed within a virtual path. Although both CAC and
UPC are implemented on each VP or VC connection [65], the QoS of a call may
still drop below the requirement temporarily due to the performance of an ATM
multiplexer [46]. Queuing models for an ATM network are introduced in many
interesting papers [17, 26, 35, 57, 58]. We assume that the scheduling function
of each ATM multiplexer has assigned a proper quota to each virtual path it
connects, so the multiplexing effect exists between virtual channels within a virtual
path but not between virtual paths within a link. Furthermore, we assume
advanced ATM switches [15, 36, 41, 43, 46, 53] with a global VP capacity layout
design are available. Therefore there is no internal blocking, and the queueing
delay in an ATM switch is negligible. For example, in Fig. 2.1, the first output
link of the ATM switch carries only two VPs, VP1 and VP2. Since the output
link capacity is designed to be equal to the sum of both VPs, no serious queuing
is incurred. If cells of the two VPs compete for output port 1 in the same slot, the cell
being queued will be processed before the next slot. Therefore, queuing delay and
loss are negligible. But the "jitter delay" and the "cell loss due to overflow traffic",
which depend on the bandwidth assigned, cannot be ignored. Some papers
focus on the delay and the loss at an ATM multiplexer. Due to the complexity of
real traffic, simulation is usually used to suggest a bandwidth assignment which
is able to support the QoS of a call. For example, [13, 18] present a "CRR"
(Class Related Rule) simulation method, and [12, 64] show the relationship between
the loss and the assigned bandwidth by simulations. Some papers simplify
the queuing model and derive analytic ways to measure the required "equivalent
bandwidth" or "effective bandwidth" ([7, 16, 21, 31]); however, "queuing delay"
instead of "jitter" is measured. The reasons why we measure the "jitter" are:
1) in a high-speed network, the end-to-end delay is likely to be dominated by
propagation delay [44]; 2) the "jitter" reflects the QoS a user perceives, which is
much more important than the statistical average delay. Section 2.3 describes
the proposed measurement model in detail.
2.3 ON/OFF Traffic and QoS Measurement
Usually two types of source models are used for ATM traffic: the fluid flow approximation
and the MMPP (Markov Modulated Poisson Process) [3]. [25] assumes
an exponential ON/OFF duration distribution for the fluid flow approximation and
[33] works on a general ON/OFF duration distribution. [26] presents a 2-phase
MMPP traffic model, and a phase-type Markov renewal process is reported by
[40]. The MMPP suffers from computational complexity for multiple classes of
traffic [58], thus the fluid flow approximation is used as the source traffic model
in this proposal.

The "user cost" (delay jitter and loss) depends on the queuing model. Usually
the bandwidth assigned and the buffer size are the two parameters used to determine
the corresponding loss probability in an analytic model. The buffer size also indicates
the maximum delay. For given delay and loss constraints, the corresponding
minimum bandwidth requirement and buffer size can be found mathematically.
[21] provides a simple closed form based on an MMPP source model,
[16] suggests an analytic method which makes the required bandwidth equal
to the largest eigenvalue of a particular matrix, and [31] assumes a Gaussian
source model. The disadvantage of these models is that the delay measured is
not reasonable: a cell belonging to a delay-sensitive call does nothing but jam
the network if its waiting time is too long, and priority control can only
partly solve this problem. Thus, for a bandwidth allocator (or the capacity allocation
controller), offering shared memory for all connections may be insufficient.
Therefore, [7] proposes a rather complicated system with a hierarchical control
structure (a queue is allocated to each class of traffic in the capacity allocation
controller).
To resolve this issue, a capacity allocation controller structure similar to [7] is
suggested, but a queue is allocated to each VC/VP connection instead of to each
class of traffic. The overflow cells of a VP/VC connection which arrive in
an "ON" period are kept in the queue (since the arrival rate is sometimes higher
than the virtual channel capacity assigned, if we do not assign the peak rate), and
they will be processed during the "OFF" period.
[Figure 2.2: Storage probability. Panels (a) and (b) show the ON/OFF arrival rate over one unit cycle, with peak rate P, mean rate B and normalized burst length L; the shaded areas are the overflow traffic, stored in the buffer and served only in the next idle period. Panels (c) and (d) show the storage probability as a function of the equivalent bandwidth: linear from 1 down to 1-L for assignments between 0 and B, then an exponential decay from 1-L toward 0 between B and P.]
We adopt the ON/OFF source model (fluid flow model) instead of MMPP.
In Fig. 2.2 (a), P is the peak rate, B is the mean rate and L is the normalized
burst length (the cycle time is 1 unit). Assume that only a small portion of the
active period is at a rate lower than the mean rate, so that the two shaded areas
are of the same size (Fig. 2.2 (b)); the relationship between the equivalent
bandwidth and the overflow cells is then built through heuristics. If the equivalent
bandwidth equals the mean rate B, then the shaded area in Fig. 2.2 (b) stands
for the overflow cells, and they are stored in the user buffer, so the storage probability
is just 1-L. If the equivalent bandwidth is 0, then the storage probability
is 1. Any bandwidth between 0 and B gives a storage probability between 1 and
1-L; a linear curve is used to approximate the corresponding storage probability
in between. Similarly, any bandwidth between B and P gives a storage probability
between 1-L and 0, but an exponential curve is used to approximate the
corresponding storage probability in between (see Fig. 2.2 (c)).
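A minimal sketch of this piecewise approximation follows; the exact exponential form between B and P is not given in the text, so the decay constant k below is an assumption chosen to keep the curve continuous at B and nearly zero at P.

    import math

    def storage_probability(e, P, B, L, k=5.0):
        """Storage probability for an ON/OFF fluid source with peak rate P,
        mean rate B and normalized burst length L (cycle time = 1 unit).

        e: equivalent bandwidth assigned to the call.
        Linear from 1 at e=0 down to 1-L at e=B; exponential decay from 1-L
        toward 0 between B and P (k is an assumed decay constant).
        """
        if e <= 0:
            return 1.0
        if e <= B:
            return 1.0 - L * (e / B)        # linear segment: 1 down to 1-L
        if e >= P:
            return 0.0
        # Exponential segment: approaches (but does not exactly reach) 0 at P.
        return (1.0 - L) * math.exp(-k * (e - B) / (P - B))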
In order to avoid unnecessary transmissions of real-time cells with large delay
jitter, and to reduce the queuing delay and the local storage space, overflow cells
stored in the user buffer are served only in the consecutive OFF period (Fig. 2.2
(d)). This also reduces the storage space required, since no more than B x R cells
are stored, where R (cells/second/bandwidth) is the service rate per unit bandwidth.
For loss-sensitive cells, a large primary bandwidth assignment is recommended: the
equivalent bandwidth is then large, so the storage probability is small and the
stored cells can be sent during the consecutive OFF period. For example, if
j primary bandwidths are assigned in Fig. 2.3 (a), the total service time
T_finish required for the stored cells is the total number of cells stored divided
by the service rate:

\[
T_{finish} = \frac{B \times P_{store}(j+MG) \times R \times L}{(j+MG) \times R}
\]

where P_store is the storage probability, MG is the multiplexing gain obtained
from the quota pool, and j + MG is the equivalent bandwidth. T_finish <= 1 - L
means no cell loss.
Assuming that the occurrences of overflow cells are uniformly distributed in the
busy period, and that their service time slots are also uniformly distributed within
T_finish, the corresponding average delay jitter per connection is:

\[
\mathrm{DelayJitter}_{call}(j+MG) = \frac{T_{finish}}{2} \times B \times P_{store}(j+MG) \times R \times T
\]

where T is the average connection time of this call.

If j + MG < B, then T_finish = 1 - L, and B x P_store(j+MG) x R x L - (j+MG) x R x (1-L)
cells are dropped. The loss probability per connection is:

\[
\mathrm{Loss}_{call}(j+MG) = \frac{B \times P_{store}(j+MG) \times R \times L - (j+MG) \times R \times (1-L)}{B \times R}
\]
We define U(j+MG), a weighted sum of DelayJitter(j+MG) and Loss(j+MG),
to be the user cost of a primary assignment j, which reflects the QoS provided.
The weights depend on the traffic types and the service types offered by the
BISDN provider. Since any service quality lower than the QoS requirement is
not allowed, the curves of jitter vs. equivalent bandwidth (Fig. 2.3 (b)) and loss
vs. equivalent bandwidth (Fig. 2.3 (c)) are shaped by the QoS requirement
line into Fig. 2.3 (d) and (e). Therefore, any equivalent bandwidth lower than
either the minimum delay jitter requirement or the minimum loss requirement
will cause infinite user cost, so that a call is rejected rather than assigned an
insufficient equivalent bandwidth; CAC is thus achieved. Also, no loss cost
is incurred if the equivalent bandwidth is larger than B.
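A compact sketch of this user cost follows (hypothetical weights and function names); the QoS cut-off is realized by returning an infinite cost below the minimum acceptable equivalent bandwidth, which is how the allocator rejects calls.

    import math

    def user_cost(e, jitter, loss, w_jitter, w_loss, e_min_jitter, e_min_loss):
        """User cost U(e) for equivalent bandwidth e = j + MG.

        jitter(e), loss(e): per-connection delay jitter and loss as functions
        of the equivalent bandwidth (e.g. built from the formulas above).
        An e below either minimum QoS bandwidth yields infinite cost, so the
        call is rejected instead of being given an insufficient assignment.
        """
        if e < max(e_min_jitter, e_min_loss):
            return math.inf      # QoS requirement unreachable: reject (CAC)
        return w_jitter * jitter(e) + w_loss * loss(e)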
Delay jitter is the variation of cell delays. If overflow cells waited for the
next quota instead of being forwarded to the overflow buffer, the scheme would work as
a stop-and-go queuing scheme, and delay jitter would disappear at the price of an
extra bounded end-to-end delay. However, for real-time service, a stop-and-go
type discipline causes frequency distortion for voice and frame
synchronization problems for video.

The multiplexing gain (MG) of a call is obtained from the "quota pool", whose
size is the sum of "primary bandwidth assignment minus mean rate" over all accepted
calls; therefore the MG depends on the number of existing calls and their corresponding
bandwidth assignments. The former is a function of the available bandwidth
within a VP, and the latter, unfortunately, is a function of MG itself.
An iterative way to find the MG and the primary assignment is proposed in Chapter
5.
[Figure 2.3: User cost calculation. Panel (a): with a primary assignment j and multiplexing gain MG, the overflow traffic is small and the stored cells are served within T_finish of the 1.0 unit cycle. Panels (b) and (c): delay jitter and loss as functions of the assignment, between the mean rate B and the peak rate P. Panels (d) and (e): the resulting user cost curves after applying the minimum QoS requirements for delay jitter and loss; the smallest equivalent bandwidth satisfying each requirement marks the cut-off below which the cost is infinite.]
2.4 Virtual Path Capacity and Routing Design
An ATM switch is a "self-routing" (digital routing) multi-stage slotted system
(usually a Banyan network; Fig. 2.1) with an input port interface speed matching
the multiplexer speed, and a slot size equal to the cell transmission time.
The internal operation is usually a little faster than the input port interface speed;
therefore internal blocking will occur when cells from different input links access the
same internal stage at the same time. Internal blocking can be resolved by extending
the switch to 3log2(N) - 4 stages without providing buffers [62], but output
blocking, whereby cells from different inputs are destined to the same output link,
cannot be resolved unless buffering is used. Different queuing models have been proposed
for output buffering; however, if overflow cells are controlled at the end users and
VP capacities are coordinated on both input and output links of a switch, then
an almost zero loss probability is achievable and the delay jitter is controllable
inside an ATM switch. For example, assume VP1 and VP2 are VPs of the same
capacity C, and they represent all possible traffic destined to output link 1 in
Fig. 2.1, whose capacity is 2C. Both VP1 and VP2 cells obtain C quotas per
second. These are distributed by the asynchronous scheduling function as uniformly
as possible along the time axis. Therefore the minimum interarrival time of
the worst traffic on both VPs is very close to 1/C. Since the output link processes
one cell every 1/(2C) seconds, no queue size greater than 1 is required. Also, if a
VP2 cell is forced to wait one more slot in the output buffer while there is no
congestion, and is always served last when contending with a VP1 cell, then there
is no delay jitter. In other words, ATM switches can be transparent to users in
an ATM network, which is another advantage of VPs. But it is another disadvantage
for a dynamic VP scheme without coordination, since additional ATM switch
cost is also incurred besides the QoS failure cost. A real asynchronous multiplexer
guarantees a specific quota per unit time but cannot guarantee keeping
the interarrival time between input cells exactly the same as in the source traffic.
However, a fluid flow model assumes that cells are continuous in the ON period,
therefore a cell enters the network almost immediately after obtaining a quota,
and the delay jitter incurred is negligible.
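The arithmetic behind this transparency claim can be restated in symbols, using the Fig. 2.1 example:

\[
\mathrm{cap}(VP1)=\mathrm{cap}(VP2)=C,\quad \mathrm{cap}(\text{link }1)=2C
\;\Rightarrow\;
\min \text{interarrival per VP} \approx \frac{1}{C},\quad \text{service time} = \frac{1}{2C}.
\]

In the worst case both VPs deliver a cell in the same slot; one cell waits exactly 1/(2C) and departs before either VP can deliver again, so the output queue never holds more than one cell and the waiting time is deterministic (no jitter).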
Based on the congestion control and performance evaluation models suggested
above, we develop a virtual channel capacity allocation (bandwidth allocation)
scheme which takes both the "user cost" and the "system cost" (future
cost) into account, and in which the relationship between the "virtual channel capacity"
(bandwidth) and the "user cost" is explicit. The sum of the "user cost" and
the "system cost" is called the "trunk cost". Different virtual channel capacities
assigned at the current state of a virtual path incur different "trunk costs",
and the one incurring the minimum "trunk cost" is chosen in this scheme
as the virtual channel capacity (or bandwidth; see Section 3.2.3). The minimum "trunk
cost" is defined as the cost of that state. Through this allocation scheme, the
best trunk (virtual channel) capacity assignment for each state of a virtual path
is found, so that the average cost of this virtual path is also found (see Chapter
3). The average cost of a virtual path is called the "path cost"; it is composed
of all "trunk costs", and is a weighted average of loss, delay jitter, rejection
and future costs. Different virtual path capacities assigned to a VP derive
different "path costs", and we found that the higher the path capacity, the
lower the corresponding "path cost" (see Fig. 2.4). Since the sum of all virtual
path capacities within a link cannot exceed the link capacity, it is desirable to
find the best path capacity distribution on a given network (the best resource
distribution pattern), so that the "total path cost" (the sum of all "path costs"
in the network) is minimized without violating the link capacity constraints.

A "recursive algorithm" is used on a queuing model to find the optimal virtual
channel capacity (bandwidth) assignment and the corresponding "virtual channel
costs" (or "trunk costs"). Through this allocation scheme, the "path cost"
(which is composed of all "trunk costs") of each virtual path is obtained. Basically,
this decision problem, a bandwidth assignment problem for each state
(a state is defined as the available bandwidth of a VP), is broken down into a
sequence of single decisions made one after the other; in other words, we solve it
by the "dynamic programming" technique [6, 56]. Then, we form a constrained
optimization problem with the "total path cost" as the objective function, the
"path capacities" as decision variables, and the link capacities as constraints, as
formalized below.
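In symbols, a sketch of the resulting formulation (the routing variables and the exact form of the objective are developed in Chapter 4):

\[
\min_{\{C_p\}} \; \sum_{p} \mathrm{PathCost}_p(C_p)
\qquad \text{subject to} \qquad
\sum_{p\,:\,\ell \in p} C_p \;\le\; C_\ell \;\; \forall\ \text{links } \ell,
\qquad C_p \ge 0,
\]

where \(C_p\) is the capacity assigned to virtual path \(p\) and \(C_\ell\) is the capacity of link \(\ell\).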
Reference [55] recommends an approach for the virtual path capacity design.
The differences between [55] and this work are: 1) in this approach, all virtual
path capacities of a network are determined by solving a global optimization
problem, while [55] suggests the capacity of only a single virtual path by measuring
the "multiplexing effect"; 2) in this approach, the congestion control model (CAC
and UPC) and the "virtual channel capacity" assignment (bandwidth
allocation) scheme are strongly related to the virtual path capacity design, while
this is not very clear in [55]; 3) we design the virtual path capacities for a
network with fixed link capacities, while the link capacities are not constraints in
[55]; 4) virtual path routing is solved together with the capacity design problem
in this work. [50] also suggests virtual path capacities, but by simulation
(trial and error).
The best routing algorithm distributes traffic flows onto candidate routes so as to minimize the global cost. Gavish and Neuman solved a joint routing and capacity design problem in 1989 [19], which differs from this work as follows: 1) The capacity is chosen from a set of candidate capacities in Gavish's work. 2) The cost function in [19] assumes simple exponential functions derived from an M/M/1 model, while our cost function is derived from the CAC and bandwidth allocation scheme on an ATM network. 3) No facility cost is involved in the optimization problem of [19]. 4) We present a probabilistic routing function instead of a deterministic routing scheme. In [19], a route for a particular O-D pair is suggested by solving an optimization problem, and it is the only route connecting that O-D pair. In this work, a long term flow distribution λ_opt (a vector, since a flow is distributed over all possible routes which connect the same O-D pair) is suggested for each O-D pair by solving an optimization problem, and all candidate routes are used according to the optimal rate distribution. We make a special dice for each O-D pair connection: the number of faces of the dice equals the number of candidate routes for that connection, and a specific face-up probability equals "the rate on the specific route divided by the total arrival rate". When a call arrives, a route is assigned by a "dice tossing" simulator. With VC connections, since significant exchange of information and processing make it difficult to implement, at most two VPs are suggested for a route [22]. Due to the small amount of VC traffic [8], the probability of an "insufficient available bandwidth of the down-stream path" is very small and can be neglected, so the routing decision is also made based on the dice whose "face-up" probability is designed according to the rate distributed on all candidate routes.
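The "dice tossing" itself is trivial to implement; the following is a minimal Python sketch (the function name and the weighted-choice helper are ours, not the dissertation's):

    import random

    def toss_dice(routes, route_rates):
        """'Dice tossing' routing: each face is a candidate route, and its
        face-up probability is the optimal rate distributed on that route
        divided by the total arrival rate of the O-D pair."""
        total = sum(route_rates)
        weights = [r / total for r in route_rates]
        return random.choices(routes, weights=weights, k=1)[0]

    # e.g. toss_dice(["path3", "path4"], [0.82, 1.18]) picks path 4
    # with probability 1.18/2 (rates taken from the example of Sec. 4.5)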
Figure 2.4: An example of the convex characteristics: the "path cost" (×10^5) as a function of the assigned virtual path capacity, from 5 to 100 units.
Chapter 3

Bandwidth Allocation and CAC

3.1 Network Management
Network management is implemented using a software package on a hardware platform over an administrative link. Network management functions include provisioning, commissioning, connection (routing), fault management, bandwidth management, trouble ticketing, performance monitoring, security control, account management, billing and the user interface. They are performed by individual but well-coordinated modules (or building blocks). The QoS measurement building block works closely with the connection management building block, which decides the admittance of new arrivals according to the available network ability to satisfy their QoS requirements. The bandwidth management building block then decides the resource allocation if the call is admitted. This depends on the ATM asynchronous multiplexing scheme adopted, i.e., it follows the asynchronous scheduling functionality in the ATM multiplexer. The average bandwidth consumed by each user is estimated and sent to the billing building block for pricing. The connection building block does routing besides admission control, thus it is strongly related to fault management. Building blocks communicate with each other through the control network, usually a combination of Ethernet as the internal LAN, an X.25 packet switched MAN, and routers between the LAN and MAN. Equipment such as the ATM multiplexer, switch and SONET ADM (Add Drop Multiplexer) are intelligent elements, so more functions can be expected in the element layer, and the asynchronous scheduling model proposed is believed to be feasible. Thus, a long term average of the volume of cells with a certain VPI/VCI handled by the network, i.e., the "equivalent bandwidth" (or virtual channel capacity), becomes quantizable through the proposed model, so that the necessary links among building blocks are provided.
3.2 Bandwidth Allocation
The "equivalent bandwidth" of a call is equal to the sum of the primary bandwidth assignment and the multiplexing gain (MG). In this chapter, MG is assumed to be zero, and an optimal primary bandwidth assignment scheme is proposed, which is combined with the MG in Chapter 5. An assignment is optimal in the sense that an aggregate cost (assignment cost), which is a weighted sum of the "user cost" and the "system cost", is minimized. The user cost is the QoS obtained from such an assignment, shown in Fig. 2.3 (d) and (e), and the system cost is the potential future rejection responsibility an assignment brings about. There is a trade-off between these two costs: a better QoS corresponds to a higher system cost, while a modest QoS corresponds to lower future rejection rates. The system cost is basically the product of the rejection cost and the rejection probability caused by an assignment. Compared to a general bandwidth allocation scheme, the major advantage of this scenario is the flexibility to reject a current call, even though it is acceptable, since a more important call may come next. The cost of rejecting a highly profitable call becomes noticeable in BISDN because network providers offer different service types to increase their revenue. Such a cost is evaluated together with the user cost for all possible assignments, and the one which incurs the smallest assignment cost (trunk cost) will be the optimal assignment. The system cost is very small while the available bandwidth is abundant, therefore a high QoS is recommended by assigning more than the minimum requirement needs. However, it increases dramatically when little bandwidth is available, so conservative assignments which barely satisfy the QoS are made while the available bandwidths are small.

A table-look-up bandwidth assignment scheme is developed. Since the minimum QoS requirements are included in the user cost table in Fig. 2.3, the CAC suggested by the CCITT is implemented implicitly. A bandwidth assignment specifies the number of cells with a particular VPI and a particular VCI entering the network per second; the VPI is for routing, and the VCI is for demultiplexing. It is also called the virtual channel, since a call is viewed as a logical channel along a VP. The "assignment cost" corresponding to an optimal assignment is also called the "channel cost" or "trunk cost", which varies inversely with the "state" (the available bandwidth) of a VP. A VP is modeled as a "bandwidth queue" in Section 3.2.2, based on the assumptions in Section 3.2.1.
3.2.1 Assumptions
• Given a virtual path of K units of bandwidth, there are two classes of traffic: delay (jitter) sensitive and loss sensitive. Each class is characterized by the peak rate P, mean rate B, and the normalized average burst length L (L ≤ 1; for non-bursty, constant bit rate sources, L = 1 and P = B). For a delay sensitive call (which arrives as a Poisson process with rate λ_d and traffic parameters P_d, B_d, L_d), the total cost is (delay cost) + (loss cost) × n1, where n1 is the weighting factor for a delay sensitive call, used to make the loss cost less significant (n1 < 1). If there is more than one O-D pair in a virtual path, then they are treated as multiple classes of traffic. The proposed scheme is applicable to multiple classes of traffic.

For a loss sensitive call (a Poisson process with rate λ_L and traffic parameters P_L, B_L, L_L), the total cost is (delay cost) × n2 + (loss cost), where n2 is the weighting factor for a loss sensitive call, used to make the delay cost less significant (n2 < 1).

• If a call is rejected (assigned zero bandwidth), then it will wait for a random amount of time which follows the exponential distribution and then retry. The waiting time can be calculated as the "rejection cost", which is assumed to be finite and known.

• The QoS of a call is a function of the assigned bandwidth and the buffering behavior. We assume a virtual path transport architecture [12, 48, 54, 55] and high performance switches without internal blocking [2, 9, 27, 37, 38], so that the reassembly delay, output blocking and internal blocking delay of switches are negligible.
• We first assume that the bandwidth units assigned to each call are released independently, and that the service time is exponential with rate μ for all traffic classes. A model without the independence assumption is developed in Chapter 5. To simplify our analysis, a single traffic class is discussed in Chapter 5. In fact, the proposed bandwidth allocation scheme is applicable to multi-class traffic with different service times.
3.2.2 Bandwidth Queue
An M/M/K/K queue with bulk arrival and bulk departure is modeled for a VP of K units of bandwidth. A state is defined as the available bandwidth of the VP. We first assume the departure process shown in Fig. 3.1, and the future cost (system cost) probability is calculated according to this model.
For example, consider a call which arrives to find m bandwidth units left (state m). If we assign m units to that call, then we go to state 0. Since any call arriving during state 0 must be rejected, and the expected rejection cost (also the "assignment cost" of state 0) is known, we multiply this cost by the probability that the next call arrives before we depart state 0; the product becomes the "system cost (or future cost) in state 0 due to assignment m", or "the system cost with assignment m for state m". If no call arrives in state 0 and one unit of bandwidth is released, then we move to state 1. The "system cost" in state 1 due to such an assignment is found by multiplying the probability that the next call arrives in state 1 by the "assignment cost of state 1". The system cost (future cost) in states 2, 3, ..., m due to assignment "m" can also be calculated as the product of the "assignment cost of a state" and the probability that the next call comes in that state. The sum of the "system costs in states 2, 3, ..., m (every possible state) due to assignment m" becomes the "system cost" for "assignment m", which is added to the "user cost" of "assignment m" to get the "assignment cost of state m due to assignment m". The "assignment cost of state m due to assignment k (k = 0, 1, ..., m − 1)" can be calculated in the same way, and the assignment causing the least "assignment cost" is chosen as the optimal assignment for state m. Also, the corresponding cost is the "state cost" of state m. The assignment cost of state i is found before we look for the assignment cost of state j where i < j, so the optimal assignment and the corresponding assignment cost of every state can be found recursively.
Section 3.2.3 explains the recursive algorithm in detail, based on the theorems introduced below.
Figure 3.1: Decision making on the bandwidth queue. (a) The decisions of all states before state m are known (solid arrows); "j" units of bandwidth are assigned to a call arriving in state "m" (user cost U(j)); the "vulnerable states" are the states with a possible future cost C(m − j) corresponding to assignment "j", each paid with a certain probability, and not paid at all if no call arrives along the path back to state "m". (b) The optimal decision in state m is found, and the Markov chain is completed up to state m.

Theorem 1  $C^*$ is the solution of $C = \min(C^0, C^1)$, $C^0 = x + a \times C$, $C^1 = y + b \times C$, iff $C^*$ is the solution of $C = \min(C^{0'}, C^{1'})$, $C^{0'} = x + a \times C^{0'}$, $C^{1'} = y + b \times C^{1'}$; $1 > a > 0$, $1 > b > 0$.

proof:

(i) ⇐: If $C^*$ is the solution of $C = \min(C^{0'}, C^{1'})$, $C^{0'} = x + a \times C^{0'}$, $C^{1'} = y + b \times C^{1'}$, and, without loss of generality, $C^* = C^{0'}$, then
$$\frac{x}{1-a} \le \frac{y}{1-b}.$$
Now, if $C^*$, which is $\frac{x}{1-a}$, is not the solution of the set $C = \min(C^0, C^1)$, $C^0 = x + a \times C$, $C^1 = y + b \times C$, then the solution must be $C^1$; otherwise $C^0$ would be the solution and the set would reduce to $C^0 = x + a \times C^0$, $C^1 = y + b \times C^0$, with solution $C = C^0 = \frac{x}{1-a}$, a contradiction. But if $C^1$ is the solution, then
$$C^1 = y + b \times C^1, \ \text{or} \ C^1 = \frac{y}{1-b},$$
$$C^0 = x + a \times C^1, \ \text{or} \ C^0 = x + \frac{ay}{1-b},$$
$$C^1 - C^0 = \frac{y}{1-b} - x - \frac{ay}{1-b} = \frac{(1-a)y}{1-b} - x < 0,$$
i.e. $\frac{y}{1-b} < \frac{x}{1-a}$, another contradiction. So $C^*$ is the solution of both sets.

(ii) ⇒: If $C^*$ is the solution of $C = \min(C^0, C^1)$, $C^0 = x + a \times C$, $C^1 = y + b \times C$, and, without loss of generality, $C^* = C^0$, then
$$C^* = C^0 \le C^1,$$
$$C^0 = x + a \times C^0, \ \text{or} \ C^0 = \frac{x}{1-a},$$
$$C^1 = y + b \times C^0, \ \text{or} \ C^1 = y + \frac{bx}{1-a},$$
so $\frac{x}{1-a} \le y + \frac{bx}{1-a}$, i.e. $\frac{x}{1-a} \le \frac{y}{1-b}$. Now, if $C^0$, which is $\frac{x}{1-a}$, is not the solution of the set $C = \min(C^{0'}, C^{1'})$, $C^{0'} = x + a \times C^{0'}$ (or $C^{0'} = \frac{x}{1-a}$), $C^{1'} = y + b \times C^{1'}$ (or $C^{1'} = \frac{y}{1-b}$), then the solution is $C^{1'}$, in which case $\frac{y}{1-b} < \frac{x}{1-a}$, a contradiction. So $C^*$ is the solution of both sets.
Theorem 2  $C^*$ is the solution of $C = \min(C^0, C^1, \dots, C^j)$, $C^i = x_i + a_i \times C$, $i = 0, \dots, j$, iff $C^*$ is the solution of $C = \min(C^{0'}, C^{1'}, \dots, C^{j'})$, $C^{i'} = x_i + a_i \times C^{i'}$, $i = 0, \dots, j$; $1 > a_i > 0$.

proof:

(i) ⇐: Without loss of generality, let $C^* = C^{0'}$ be the solution of $C = \min(C^{0'}, C^{1'}, \dots, C^{j'}) = \min(\min(C^{0'}, C^{1'}), \min(C^{0'}, C^{2'}), \dots, \min(C^{0'}, C^{j'}))$, $C^{i'} = x_i + a_i \times C^{i'}$, $i = 0, \dots, j$. By Theorem 1, since $C^*$ is the solution of each pairwise set $C = \min(C^{0'}, C^{k'}) = C^{0'}$, $C^{0'} = x_0 + a_0 \times C^{0'}$, $C^{k'} = x_k + a_k \times C^{k'}$, $k = 1, \dots, j$, it is also the solution of the set $C = \min(C^0, C^k) = C^0$, $C^0 = x_0 + a_0 \times C$, $C^k = x_k + a_k \times C$, $k = 1, \dots, j$. So
$$\min(C^0, C^1, \dots, C^j) = \min(\min(C^0, C^1), \min(C^0, C^2), \dots, \min(C^0, C^j)) = \min(C^0, C^0, \dots, C^0) = C^0 = C^*.$$

(ii) ⇒: If $C^* = C^0$ is the solution of $\min(C^0, C^1, \dots, C^j) = \min(\min(C^0, C^1), \min(C^0, C^2), \dots, \min(C^0, C^j)) = \min(C^0, C^0, \dots, C^0) = C^0$, $C^k = x_k + a_k \times C$, $k = 1, \dots, j$, then by Theorem 1, since $C^*$ is the solution of each $C = \min(C^0, C^k) = C^0$, it is also the solution of $C = \min(C^{0'}, C^{k'}) = C^{0'}$, $C^{0'} = x_0 + a_0 \times C^{0'}$, $C^{k'} = x_k + a_k \times C^{k'}$, $k = 1, \dots, j$. So
$$\min(C^{0'}, C^{1'}, \dots, C^{j'}) = \min(\min(C^{0'}, C^{1'}), \min(C^{0'}, C^{2'}), \dots, \min(C^{0'}, C^{j'})) = \min(C^{0'}, C^{0'}, \dots, C^{0'}) = C^{0'} = C^*.$$
3.2.3 One Class of Traffic
We consider a single class of traffic first, and derive the algorithm for multiple classes of traffic later. If only one class of traffic arrives at the edge node, and there are m units of bandwidth available within a path of capacity K which can be used for this O-D pair connection, how much bandwidth should be assigned to this call when both user and system costs are considered? A recursive algorithm is introduced to answer this question.

Recursive Algorithm  Consider a call arriving at state "m", and suppose "j" units of bandwidth are assigned to this call. A different primary assignment j gives a different "assignment cost with assignment j for state m", $C^j(m)$, which is a weighted sum of the "user cost" $U(j + MG) = U(j)$ (MG is assumed to be 0) and the "system cost". We choose the assignment whose corresponding "assignment cost" is the smallest to be $C(m)$, the "state cost" of state "m": $C(m) = \min_j C^j(m)$, i.e., the smallest sum of the incurred "user cost" and "system cost" when a call arrives in state m.
The "system cost" is actually a weighted sum of the state costs of the responsible states. If we are in state m and j units of bandwidth are assigned to an incoming call, then we go to state m − j immediately. One of the following two events will happen: 1) another call arrives while we are still in state m − j; 2) another call arrives while we are no longer in state m − j. Event 1 occurs with probability $\frac{\lambda}{\lambda + (K-m+j)\mu}$ and incurs a cost $C(m-j)$, the "state cost" of state m − j. $C(0)$ is known (the rejection cost), and $C(m)$, $m > 0$, is found recursively based on $C(0)$. Event 2 occurs with probability $\frac{(K-m+j)\mu}{\lambda + (K-m+j)\mu}$.

Event 2 is composed of two exclusive events: 1') the next call comes in state m − j + 1; 2') we are no longer in state m − j + 1 when the next call arrives. Event 1' occurs with probability $\frac{(K-m+j)\mu}{\lambda + (K-m+j)\mu} \times \frac{\lambda}{\lambda + (K-m+j-1)\mu}$ and incurs a cost $C(m-j+1)$. Event 2' occurs with probability $\frac{(K-m+j)\mu}{\lambda + (K-m+j)\mu} \times \frac{(K-m+j-1)\mu}{\lambda + (K-m+j-1)\mu}$.

Event 2' is divided into two events again, and so on, until we are back in state m (see Fig. 3.1 (a)).

States from (m − j) to (m) are the "vulnerable states" corresponding to the assignment "j" in state "m", since if the next incoming call arrives at any of these states a future cost will be incurred, and the weighted sum of the "state costs" of those vulnerable states becomes the "system cost" for assignment "j" (see equation (3.2)).
If m ≤ P (P is the peak rate), then "j" (the possible assignment) ranges from 0 to m. If m > P, since there is no reason to assign more bandwidth than the peak rate, j ranges from 0 to P.

We choose the j which gives the least "assignment cost" as the optimal assignment, and the corresponding "state cost" is C(m):
$$C(m) = \min_j C^j(m), \quad j = 0, \dots, \min(m, P). \tag{3.1}$$
If the C(m)'s are found in increasing order, i.e., C(0), C(1), ..., C(m), ..., C(K), then C(0), C(1), ..., C(m − 1) are known when we are looking for C(m). $C^j(m)$ can be written as a function of C(m − j), C(m − j + 1), ..., C(m − 1) (see equation (3.2)) and solved. So all C(m), m = 0, ..., K, can be found in such a recursive way.

Let U(j + MG) = U(j) be the user cost of being assigned j units of bandwidth; it is independent of the state at which the call arrives.
$$\begin{aligned}
C^j(m) = U(j) &+ \frac{\lambda}{\lambda + (K-m+j)\mu} \times C(m-j) \\
&+ \frac{(K-m+j)\mu}{\lambda + (K-m+j)\mu} \times \frac{\lambda}{\lambda + (K-m+j-1)\mu} \times C(m-j+1) \\
&+ \frac{(K-m+j)\mu}{\lambda + (K-m+j)\mu} \times \frac{(K-m+j-1)\mu}{\lambda + (K-m+j-1)\mu} \times \frac{\lambda}{\lambda + (K-m+j-2)\mu} \times C(m-j+2) + \cdots \\
&+ \frac{(K-m+j)\mu}{\lambda + (K-m+j)\mu} \times \frac{(K-m+j-1)\mu}{\lambda + (K-m+j-1)\mu} \times \cdots \times \frac{(K-m+2)\mu}{\lambda + (K-m+2)\mu} \times \frac{\lambda}{\lambda + (K-m+1)\mu} \times C(m-1) \\
&+ \frac{(K-m+j)\mu}{\lambda + (K-m+j)\mu} \times \frac{(K-m+j-1)\mu}{\lambda + (K-m+j-1)\mu} \times \cdots \times \frac{(K-m+1)\mu}{\lambda + (K-m+1)\mu} \times \frac{\lambda}{\lambda + (K-m)\mu} \times C^j(m),
\end{aligned} \tag{3.2}$$

$$j = 0, \dots, \min(m, P). \tag{3.3}$$
The first term of the above equation is the "user cost"; the sum of all other terms is the "system cost". Let us take a look at the last term of the "system cost", which can be divided into two parts. The first part is the probability that no call arrives from when the assignment j in state m is made until m units of subchannel are available again. The second part is $C^j(m)$, but actually it should be $C(m)$, since the "state cost" of state m is $C(m)$. However, $C(m) = \min_{j=0,\dots,\min(m,P)} C^j(m)$, and $C^j(m)$ is what we are looking for in equation (3.2), so it is not directly solvable. If we assume $C(m) = C^{j^*}(m)$, where $j^*$ gives the minimum $C^j(m)$ over all possible j, then the second part should be replaced by $C^{j^*}(m)$. But, by Theorem 2 above, we can use $C^j(m)$ instead of $C^{j^*}(m)$, and $C^j(m)$ is therefore solved for all possible j; the "j" incurring the least $C^j(m)$ is chosen as the virtual channel capacity assigned to the call which arrives at state m. The "state cost" at state m, $C(m)$, is thus obtained, and it will be used later to calculate the "virtual channel capacity" and the "trunk cost" for calls arriving at states higher than m.
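For concreteness, the following is a minimal Python sketch of this recursion under the stated assumptions. The user cost table U (with any weighting already folded in) and the rejection cost C(0) are taken as given inputs, and all names are ours:

    def state_costs(K, lam, mu, P, U, C0):
        """Recursive algorithm of Section 3.2.3, single traffic class.

        K   : virtual path capacity in bandwidth units
        lam : Poisson call arrival rate (lambda)
        mu  : release rate per bandwidth unit
        P   : peak rate (no reason to assign more than P units)
        U   : U[j] = user cost of being assigned j units, j = 0..P
        C0  : rejection cost, i.e. the state cost C(0)
        Returns the state costs C(m) and the optimal assignments.
        """
        C = [0.0] * (K + 1)
        decision = [0] * (K + 1)
        C[0] = C0
        for m in range(1, K + 1):
            best, best_j = float("inf"), 0
            for j in range(0, min(m, P) + 1):
                # walk the vulnerable states m-j, ..., m-1 of eq. (3.2);
                # "survive" is the probability that no call arrived yet
                survive, cost = 1.0, U[j]
                for s in range(m - j, m):
                    busy = K - s                 # units in use in state s
                    cost += survive * lam / (lam + busy * mu) * C[s]
                    survive *= busy * mu / (lam + busy * mu)
                # self-referential last term: the next call arrives back
                # in state m; by Theorem 2 we may write C^j(m) there and
                # solve the linear equation C^j(m) = cost + q * C^j(m)
                q = survive * lam / (lam + (K - m) * mu)
                cj = float("inf") if q >= 1.0 else cost / (1.0 - q)
                if cj < best:
                    best, best_j = cj, j
            C[m], decision[m] = best, best_j
        return C, decision

The guard against q = 1 covers the degenerate case j = 0, m = K, where no departure can occur and the corresponding assignment cost is unbounded; decision[m] = 0 marks rejection, matching the table-look-up CAC.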
3.2.4 Two/Multiple Classes of Traffic
We have described how to find the optimal virtual channel capacity for each state of a virtual path which transmits a single class of traffic. If there are two different classes of traffic on the same path, say a delay sensitive traffic and a loss sensitive traffic, then we have to modify the recursive equations to accommodate both delay and loss sensitive calls.
$$\lambda = \lambda_d + \lambda_L \tag{3.4}$$

$$C_d(m) = \min_j C_d^j(m), \quad j = 0, \dots, \min(m, P_d) \tag{3.5}$$

$$\begin{aligned}
C_d^j(m) = U_d(j) &+ \frac{\lambda}{\lambda + (K-m+j)\mu} \times C(m-j) \\
&+ \frac{(K-m+j)\mu}{\lambda + (K-m+j)\mu} \times \frac{\lambda}{\lambda + (K-m+j-1)\mu} \times C(m-j+1) \\
&+ \frac{(K-m+j)\mu}{\lambda + (K-m+j)\mu} \times \frac{(K-m+j-1)\mu}{\lambda + (K-m+j-1)\mu} \times \frac{\lambda}{\lambda + (K-m+j-2)\mu} \times C(m-j+2) + \cdots \\
&+ \frac{(K-m+j)\mu}{\lambda + (K-m+j)\mu} \times \cdots \times \frac{(K-m+2)\mu}{\lambda + (K-m+2)\mu} \times \frac{\lambda}{\lambda + (K-m+1)\mu} \times C(m-1) \\
&+ \frac{(K-m+j)\mu}{\lambda + (K-m+j)\mu} \times \frac{(K-m+j-1)\mu}{\lambda + (K-m+j-1)\mu} \times \cdots \times \frac{(K-m+1)\mu}{\lambda + (K-m+1)\mu} \times \frac{\lambda}{\lambda + (K-m)\mu} \times \left(\frac{\lambda_d}{\lambda} \times C_d^j(m) + \frac{\lambda_L}{\lambda} \times C_L(m)\right)
\end{aligned} \tag{3.6}$$

$$C_L(m) = \min_j C_L^j(m), \quad j = 0, \dots, \min(m, P_L) \tag{3.7}$$

$$\begin{aligned}
C_L^j(m) = U_L(j) &+ \frac{\lambda}{\lambda + (K-m+j)\mu} \times C(m-j) \\
&+ \frac{(K-m+j)\mu}{\lambda + (K-m+j)\mu} \times \frac{\lambda}{\lambda + (K-m+j-1)\mu} \times C(m-j+1) \\
&+ \frac{(K-m+j)\mu}{\lambda + (K-m+j)\mu} \times \frac{(K-m+j-1)\mu}{\lambda + (K-m+j-1)\mu} \times \frac{\lambda}{\lambda + (K-m+j-2)\mu} \times C(m-j+2) + \cdots \\
&+ \frac{(K-m+j)\mu}{\lambda + (K-m+j)\mu} \times \cdots \times \frac{(K-m+2)\mu}{\lambda + (K-m+2)\mu} \times \frac{\lambda}{\lambda + (K-m+1)\mu} \times C(m-1) \\
&+ \frac{(K-m+j)\mu}{\lambda + (K-m+j)\mu} \times \frac{(K-m+j-1)\mu}{\lambda + (K-m+j-1)\mu} \times \cdots \times \frac{(K-m+1)\mu}{\lambda + (K-m+1)\mu} \times \frac{\lambda}{\lambda + (K-m)\mu} \times \left(\frac{\lambda_L}{\lambda} \times C_L^j(m) + \frac{\lambda_d}{\lambda} \times C_d(m)\right)
\end{aligned} \tag{3.8}$$

$$C(m) = \frac{\lambda_d}{\lambda} \times C_d(m) + \frac{\lambda_L}{\lambda} \times C_L(m) \tag{3.9}$$
$C_d(m)$ and $C_L(m)$ are unknown, but each is required in the other's recursive equation, so we are stuck. However, an iterative way to approach the solution is offered (a sketch of this alternation follows the convergence remark below):

• Step 1: Neglect the term $C_L(m)$ in the $C_d(m)$ recursive equation (3.6) and find the approximate $C_d(m)$.

• Step 2: Put the $C_d(m)$ obtained in Step 1 back into the $C_L^j(m)$ recursive equation (3.8) to get a more accurate approximation of $C_L(m)$.

• Step 3: Put the $C_L(m)$ obtained in Step 2 back into the $C_d^j(m)$ recursive equation (3.6) to get a more accurate approximation of $C_d(m)$. Stop if both $C_L(m)$ and $C_d(m)$ converge to within a predetermined tolerance; otherwise, go to Step 2.

If all user costs are measured by positive values, then the exact values of $C_d(m)$ and $C_L(m)$ are upper bounds of this iteration method. Since $C_L(m)$ and $C_d(m)$ always increase in Step 2 and Step 3 respectively, convergence is guaranteed.
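As referenced above, a minimal Python sketch of the alternation follows; solve_Cd and solve_CL are hypothetical callbacks standing for recursions (3.6) and (3.8), each taking the other class's current cost table to supply the coupling term:

    def alternate(solve_Cd, solve_CL, K, tol=1e-5, max_iter=200):
        """Alternating iteration of Section 3.2.4 (a sketch)."""
        def close(a, b):
            return all(abs(x - y) <= tol * max(abs(x), 1e-12)
                       for x, y in zip(a, b))

        CL = [0.0] * (K + 1)           # Step 1: neglect C_L(m) in (3.6)
        Cd = solve_Cd(CL)
        for _ in range(max_iter):
            CL_new = solve_CL(Cd)      # Step 2: refine C_L with current C_d
            Cd_new = solve_Cd(CL_new)  # Step 3: refine C_d with new C_L
            if close(Cd_new, Cd) and close(CL_new, CL):
                return Cd_new, CL_new  # both tables converged
            Cd, CL = Cd_new, CL_new
        raise RuntimeError("did not converge")

With the termination test of the numerical example below (an increment below 0.001 percent), tol would be 1e-5.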
Since BISDN supports multi-media communications, there may be more than two classes of traffic multiplexed on a virtual path. In addition, for VC connections, different O-D pairs share the same path, and they can be treated as different classes of connections. However, we can perform similar modifications on this algorithm to find the optimal decisions for multiple classes of connections within a virtual path.
Numerical Example  We next illustrate the "table-look-up" scheme mentioned above (for the CAC function and the bandwidth allocation) by showing some numerical results for a virtual path which carries two classes of traffic with the following parameters:

• K = 100, μ = 1.0.

• n1 = n2 = 20.0.

• P_d = 20.0, B_d = 12.0, L_d = 0.95.

• P_L = 7.0, B_L = 3.0, L_L = 0.6.

• QoS for the delay sensitive call: delay < 0.008 second/call; loss < 0.08.

• QoS for the loss sensitive call: delay < 0.1 second/call; loss < 0.001.
The above iteration is terminated when the "increment percentage" is less than 0.001 percent; i.e., if the new $C_d(m)$ minus the previous one ($C_d'(m)$) is less than $0.00001 \times C_d(m)$, and the new $C_L(m)$ minus the previous one ($C_L'(m)$) is less than $0.00001 \times C_L(m)$, then stop.
The CAC function and the bandwidth allocation for delay sensitive calls are achieved through the table-look-up procedure of Fig. 3.2. This table is obtained by solving equations (3.1)-(3.9) for each state (m) of a virtual path under different loads. A delay sensitive call is rejected if the state it arrives at is smaller than 14 (the admission control), and more bandwidth is allocated if it arrives at higher states. Obviously, decisions tend to be more conservative for traffic of higher arrival rates, since we reserve more resources if the system is heavily loaded. A delay sensitive call is either rejected or assigned a bandwidth of at least 14 units, which is the bandwidth required to support its QoS; this is not affected by the arrival rate.
Fig. 3.3 shows a similar behavior to Fig. 3.2: a loss sensitive call is rejected if the state the call arrives at is smaller than 4, and more bandwidth is allocated if it arrives at higher states. But the peak rate is assigned more often (the peak rate of the loss sensitive call is 7), since the cost of a loss sensitive call can be significantly reduced by assigning more bandwidth to it. A loss sensitive call is either rejected or assigned a bandwidth of at least 4 units, which is the bandwidth required to support its QoS; again, this does not change under different loads.

Fig. 3.2 and 3.3 have illustrated the first improvement: assigning more than the minimum QoS requirement when it is worthwhile. To illustrate the second improvement, we make the delay sensitive call a very important call, so that if it is rejected, we have to pay 20 times the rejection cost of a loss sensitive call (7200 cents/second is used). In other words, even though we have enough bandwidth to support the QoS of a loss sensitive call, which is 4, we may still have to reject it in anticipation of future calls. Fig. 3.4 shows this behavior: a loss sensitive call will not be accepted until there are 16 channels available; the important call (the delay sensitive call), however, will be accepted immediately as long as there are 14 channels available. Compared to the decisions for a loss sensitive call alone, in state 21, for example, 4 channels are assigned in Fig. 3.4 while 7 channels are assigned in Fig. 3.3, since we are inclined to reserve some network resources for important calls.
Figure 3.2: Decision for a delay (jitter) sensitive call under different loads. The optimal bandwidth assignment is plotted against the available bandwidth when the call arrives (0 to 100). Virtual path capacity: 100; μ = 1.0; heavy load: λ_d = 6.0, λ_L = 3.0; medium load: λ_d = 3.0, λ_L = 1.5; light load: λ_d = 1.0, λ_L = 0.5. QoS of the delay sensitive call: delay 0.008 second/call, loss 0.08; QoS of the loss sensitive call: delay 0.1 second/call, loss 0.001.
Figure 3.3: Decision for a loss sensitive call under different loads. The optimal bandwidth assignment is plotted against the available bandwidth when the call arrives (0 to 100). Virtual path capacity: 100; μ = 1.0; heavy load: λ_d = 6.0, λ_L = 3.0; medium load: λ_d = 3.0, λ_L = 1.5; light load: λ_d = 1.0, λ_L = 0.5. QoS as in Fig. 3.2.
Figure 3.4: Decisions for both calls under medium load. Assignments for both calls when the delay sensitive call is important. Virtual path capacity: 100; μ = 1.0; load: λ_d = 3.0, λ_L = 1.5. QoS as in Fig. 3.2.
Figure 3.5: The Markov chain is completed after decisions are made (the two arrow types mark the state transitions of the loss sensitive and the delay (jitter) sensitive calls).
3.3 Path Cost
We have proposed a virtual channel capacity allocation scheme which measures the weighted, integrated costs of delay, loss and rejection (throughput) with VP connections only. Through our scheme, the virtual channel capacity assignment within a virtual path of a given capacity is found, so that the assignment behavior of each state in our bandwidth queue (Fig. 3.1 (b)) is known; thus the Markov chain is complete. Also, the "trunk cost" of each state in the bandwidth queue is found.
Fig. 3.5 is the Markov chain corresponding to its decision table (Fig. 3.4). The CAC function and the bandwidth allocation are performed by this table. For example, in state 16, both the delay and the loss sensitive calls are accepted, with virtual channel capacity assignments 14 and 4 respectively, so the light arrow starting from state 16 ends at state 2 and the dark arrow starting from state 16 ends at state 12 (Fig. 3.5). No dark arrow is shown below state 16, since a loss sensitive call arriving at any state below 16 will be rejected according to the decision table. Appendix A proves that such a Markov chain is ergodic, so that the steady state probability can be obtained by solving the balance equations; some examples are shown in Fig. 3.6. The starred line indicates the steady state probability of Fig. 3.5. If the loading decreases, there is a greater chance for an incoming call to find abundant available bandwidth (the solid line), and if the load increases, the probability mass function shifts left (the dotted line). The decision tables for the heavy and light loads are not shown. However, we provide some analytic results here: under heavy load, a loss sensitive call is not accepted until there are 30 channels available, since the arrival rate of the important call (the delay sensitive call) is 6.0. On the other hand, a loss sensitive call is accepted immediately under light load, as soon as there are 4 channels available, since the arrival rate of the important call is only 1.0.
Since both the "trunk cost" and the "steady state probability" of each state in a virtual path can be found, we define the "path cost" as the inner product of the "trunk cost" and "steady state probability" vectors. Fig. 3.7 illustrates the flow chart of finding a "path cost". Computer results show that the "path cost" decreases as a convex function as the path capacity increases (see Fig. 2.4), and the "path cost" increases exponentially (also as a convex function) with the arrival rate of a delay or a loss sensitive call (see Fig. 3.8 and 3.9); the parameters used are the same as in the heavy load example above.
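The computation itself is a one-liner; a minimal Python sketch (the function name is ours), following the flow chart of Fig. 3.7, which scales the inner product by the arrival rate:

    def path_cost(trunk_cost, steady_prob, arrival_rate):
        """Average "path cost" of a VP (Fig. 3.7): the inner product of
        the per-state trunk costs and the steady-state p.m.f., times the
        call arrival rate (dollars/second)."""
        return arrival_rate * sum(c * p for c, p in zip(trunk_cost, steady_prob))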
Figure 3.6: Steady state probabilities of three Markov chains. Steady state probability mass functions (up to about 0.03) over the state (available bandwidth, 0 to 100) for different loads. Virtual path capacity: 100; μ = 1.0; heavy load: λ_d = 6.0, λ_L = 3.0; medium load: λ_d = 3.0, λ_L = 1.5; light load: λ_d = 1.0, λ_L = 0.5. QoS as in Fig. 3.2.
This is quite reasonable: the marginal effect of increasing virtual path capacity goes down as the VP capacity increases, and the marginal effect of increasing traffic in a VP goes down as the arrival rate increases. We can assume the objective function is a convex function (then it is also a pseudoconvex function) and that its gradient is known (numerically or in an approximate closed form), so that the reduced gradient method can be applied to solve the optimization problem.
The "path cost" with VC connections is different from the above, since those paths which are part of a VC are dependent through the admission control scheme. The decision table obtained in Section 3.2 considers only one path; for VC connections it will be affected by the decision tables of its down-stream paths. Such a dependency will increase the "path costs", and the calculation is very complicated (by a multi-dimensional Markov chain model or by an iterative method). We claim that the "path cost" can still be approximated as independent for the following reasons: 1) A VC request needs significant exchange of information between VP terminators, and its subsequent processing is complicated; this makes the admission control difficult, if not impractical, to implement, so routes consisting of only a single VP (VP connections) or two VPs (with VC connections) are preferred [22]. That is to say, the dependency is greatly reduced. 2) A VC request consumes a small number of bandwidth units, usually one unit; otherwise, a VP connection is suggested (if the client requires multiple logical channels, a VP connection is preferred [8]). Thus, a situation whereby the down-stream path rejects a call while the up-stream path accepts it occurs with a very small probability, and the dependency is negligible. Therefore, for a network with both VP and VC connections, the admission control and bandwidth allocation scheme can be applied just as in a network with only VP connections, using the same (independent) path cost function, which is actually a lower bound of the true value, although they are quite close.
Figure 3.7: Flow chart of finding the average cost of a virtual path. Input: call arrival rates, service rate, peak rates, mean rates, normalized burst lengths, capacity (x_i). Proposed algorithm: queueing model + dynamic programming. Output 1: the cost and decision of each state (the Markov chain is completed now). Output 2: the steady state probability (the ergodicity of the Markov chain can be proved). Output 3: the average cost = the inner product of "cost" and "steady state probability" times the arrival rate (unit: dollars/second).
Figure 3.8: An example of the convex characteristics. The path cost (×10^4) for different arrival rates of the delay sensitive call (1 to 5 calls/sec), λ_d = 0-6.0; loss sensitive call: λ_L = 1.5; virtual path capacity: 100; μ = 1.0. QoS as in Fig. 3.2.
Figure 3.9: An example of the convex characteristics. The path cost (×10^4) for different arrival rates of the loss sensitive call, λ_L = 0-3.0; delay sensitive call: λ_d = 3.0; virtual path capacity: 100. QoS as in Fig. 3.2.
Chapter 4

Virtual Path Routing and Capacity Design

4.1 The Virtual Path Concept and the Virtual Path Layout
A virtual path is designed to serve one or more O-D pair (end-to-end) connections (the connectionless service [1, 11, 20, 29, 32, 34, 45, 59] is beyond our scope) of multiple classes of traffic. A call of a particular O-D pair can be transmitted on different predefined virtual routes (the virtual path layout is an important issue, and should be studied together with the virtual path capacity design problem; at present, we simply suggest an easy way for VP layout). A route could be a virtual path (only VP connections) or be composed of several virtual paths (with VC connections). Each route is associated with the CAC and the UPC functions. If the call in question may be served by more than one route, then only one is selected to route the call, and the other routes can be backup routes in case of link failures. Since we do not want to lose the simplicity advantage of the virtual path transport technique [48], the routing function (selecting one candidate route) and the CAC, UPC and bandwidth allocation cannot be too complicated. Thus, we suggest a dice-tossing scheme to implement the routing and a table-look-up scheme to implement the CAC function and the bandwidth allocation. We also recommend the T-X method [66] for the UPC function (the most commonly used Leaky Bucket algorithm is shown to be inadequate for the ATM UPC function; see [65]).
In Chapter 3.3, we first assume VP connections only (a route is a single VP) and develop a way to construct the "bandwidth allocation table", which assigns the optimal virtual channel capacity for each state of the path and finds the corresponding cost (trunk cost). The CAC function is performed by simply looking up the decision for the current state in the decision table of the path where a call arrives (see 3.2). If it is "0", we reject the call (the admission control). The general situation whereby both VP and VC connections exist in a network is quite similar and is also discussed in Chapter 3.3. Chapter 4 concludes both Chapter 2 and Chapter 3 by solving a joint virtual path capacity and virtual path routing problem.

There are three multiplexing and demultiplexing methods in a virtual path network: 1) use of per-frame identifiers; 2) use of per-cell identifiers; 3) use of multiple connections [8]. The first applies only for a small cell intermixing probability. The third will cause heavy network overhead and inefficient bandwidth utilization. So the second is assumed in this proposal.
The virtual path is proposed to achieve point-to-point, point-to-many, many-to-point and many-to-many connections [4, 8]. Different connection types have different physical topologies of virtual paths. For example, a virtual path of the point-to-point connection type is a straightforward path, but a virtual path of the point-to-many connection type has a tree topology. The point-to-point connection is discussed in this chapter, and an algorithm to find the physical topology of each path is suggested. This work can be applied to other connection types as well with small modifications.

If there are only VP connections for an O-D pair, then each route connecting that O-D pair is actually a virtual path. To find the physical topology (virtual path layout) of all candidate routes for an O-D pair with only VP connections and point-to-point type communication, we first assume each O-D pair in a network can have "D" routes (virtual paths), and then suggest a simple-minded layout scheme: looking for the first "D" shortest routes (paths) for each O-D pair. Also, all virtual paths of an O-D pair are disjoint (to enhance reliability). Dijkstra's algorithm [10, 14], introduced below, is suggested to find the shortest-hop path for an O-D pair; we then remove the links used by this shortest path from the network, so that a new network topology is generated, and continue to find the next shortest virtual path connecting the same O-D pair, and so on, until all "D" paths are found or there is no path left to connect that O-D pair.

The physical topology of virtual paths with VC connections can be found in a similar way. The major difference is that we have to define the virtual path terminators which connect paths into a route; thus an O-D pair connection is accomplished through several virtual paths concatenated in series. Also, we have to combine traffic of different O-D pairs within the same link into a path between two virtual path terminators, according to the traffic of the O-D pairs.
Assume that the network topology is expressed as a directed graph G, and the origin node is assigned as node "1"; then the algorithm to establish "D" disjoint routes for each O-D pair is:

Step 1. d = 1, G(1) = G.

Step 2. Execute Dijkstra's algorithm to find the shortest-hop path connecting this O-D pair in G(d).

Step 3. If no path connecting this O-D pair is found, STOP; else, assign the path which connects this O-D pair as its "d"-th virtual path, and then remove the links of that path from G(d) to generate a new topology G(d+1).

Step 4. If d = D, STOP; else d = d + 1, go to Step 2.
Dijkstra's Algorithm  Let S be a set of nodes and V the set of all nodes. Let
$$d_{ij} = \begin{cases} 1 & \text{if } i \text{ and } j \text{ are adjacent,} \\ \infty & \text{otherwise,} \end{cases}$$
and let $D_j$ be the distance between the origin (node "1") and the j-th node.

1. $S = \{1\}$, $D_1 = 0$, and $D_j = d_{1j}$ for $j \ne 1$.

2. Find $i \notin S$ s.t. $D_i = \min_{j \notin S} D_j$; $S = S \cup \{i\}$. If $S = V$, STOP.

3. For all $j \notin S$, $D_j = \min[D_j, D_i + d_{ij}]$; go to 2.
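A minimal Python sketch of the layout procedure, combining a unit-weight Dijkstra with the link-peeling loop of Steps 1-4 (the function names and the adjacency-dictionary representation are ours):

    import heapq

    def shortest_hop_path(adj, src, dst):
        """Dijkstra with unit link weights: a min-hop path or None."""
        dist, prev = {src: 0}, {}
        heap = [(0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == dst:
                break
            if d > dist.get(u, float("inf")):
                continue
            for v in adj.get(u, ()):
                if d + 1 < dist.get(v, float("inf")):
                    dist[v], prev[v] = d + 1, u
                    heapq.heappush(heap, (d + 1, v))
        if dst not in dist:
            return None
        path, u = [dst], dst
        while u != src:
            u = prev[u]
            path.append(u)
        return path[::-1]

    def disjoint_routes(adj, src, dst, D):
        """Steps 1-4: peel off up to D link-disjoint min-hop VPs."""
        g = {u: set(vs) for u, vs in adj.items()}   # G(1) = G
        routes = []
        for _ in range(D):
            p = shortest_hop_path(g, src, dst)
            if p is None:
                break                                # no path left: STOP
            routes.append(p)
            for u, v in zip(p, p[1:]):               # remove used links
                g[u].discard(v)
        return routes

For example, on the network of Fig. 4.1 with adj = {'A': {'B', 'C'}, 'B': {'C'}, 'C': set()}, disjoint_routes(adj, 'A', 'C', 2) returns the one-hop route A-C and the two-hop route A-B-C.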
Based on the physical topology of VPs established above and the congestion control and performance evaluation model introduced (Section 4.1), we try to solve the virtual path capacity design problem for a network which uses virtual path transport technology to realize point-to-point connections and uses cell identifiers to realize the multiplexing and demultiplexing [8]. We show how to design the capacities of all virtual paths in a network by solving a constrained non-linear optimization problem. Actually, the technique we developed is not restricted to point-to-point connections; as long as the physical topology of each path is found, the constrained non-linear optimization problem can be formed and solved as well. For example, [4] presents heuristic solutions to discover the optimal physical routes for each virtual path of the point-to-many connection type. After they are found, we sum up all "path costs" to form the objective function, and each link still gives a constraint as it does in the point-to-point connection problem. The "path cost" with VC connections can be derived in a similar way as with only VP connections (Chapter 2.4). Without loss of generality, we consider the point-to-point type in our proposal.
4.2 Constrained Non-linear Optimization Problem
Fig. 4.1 is a simple unidirectional network (since optical fiber cables are unidirectional, a bidirectional network simply means there is another link between every two nodes in Fig. 4.1) consisting of three O-D pairs: A-B, B-C and A-C. If two virtual paths are allowed for each O-D pair (D = 2 in the algorithm introduced in the last section), only one virtual path (path 1 with capacity x1) can be found and assigned to A-B; only one virtual path (path 2 with capacity x2) can be found and assigned to B-C; and two virtual paths (path 3 and path 4 with capacities x3 and x4 respectively) are found and assigned to O-D pair A-C. Link capacities are given: link A-B has capacity $K_{AB}$, link B-C has capacity $K_{BC}$ and link A-C has capacity $K_{AC}$. If the "path cost" of path i is $f_i$, then the constrained non-linear optimization problem is

$$\text{Min. } f(x1, x2, x3, x4) = f_1(x1) + f_2(x2) + f_3(x3) + f_4(x4) \tag{4.1}$$
$$\text{s.t. } x1 + x3 = K_{AB} \tag{4.2}$$
$$x2 + x3 = K_{BC} \tag{4.3}$$
$$x4 = K_{AC} \tag{4.4}$$

The last constraint is redundant, so $f_4(x4)$ is a constant and can be eliminated from the objective function:

$$\text{Min. } f(x1, x2, x3) = f_1(x1) + f_2(x2) + f_3(x3) \tag{4.5}$$
$$\text{s.t. } x1 + x3 = K_{AB} \tag{4.6}$$
$$x2 + x3 = K_{BC} \tag{4.7}$$
This is a three-dimensional non-linear optimization problem with linear equality constraints, and it can be solved by the Gradient Projection Method if the objective function is pseudoconvex and its gradient is known [5, 51, 52]. We suggest one type of gradient projection method, the Reduced Gradient Method [5, 63], whose complexity is comparatively small.
4.3 The Reduced Gradient Algorithm
$$\text{Min. } f(X) \tag{4.8}$$
$$\text{s.t. } A \cdot X = K; \quad x_i \ge 0 \tag{4.9}$$

A is an $m \times n$ constraint matrix, $m < n$. In the above example, $m = 2$, $n = 3$, and
$$A = \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \end{bmatrix}.$$

Assume $f(X) = f(x1, x2, x3)$ is a pseudoconvex function (if $\nabla f(X_a) \cdot (X_b - X_a) \ge 0$, then $f(X_b) \ge f(X_a)$); this is required to apply the reduced gradient algorithm.

The details of the reduced gradient algorithm are in [5, 63]; we only describe the algorithm briefly:

Step 0: Choose an initial feasible solution $X_1$ and set $j = 1$. Go to Step 1.

Step 1: Let $J_j$ = index set of the m largest components of $X_j$. Let $B = \{A(*, l) : l \in J_j\}$; B is invertible. Let $N = \{A(*, l) : l \notin J_j\}$. Columns in B are basic columns, columns in N are non-basic columns (the reason for selecting the index set of the m largest components of $X_j$ to be basic columns is to ensure that this algorithm converges to an optimal solution; see [42] for details). Let
$$R_N^T = \nabla_N f(X_j)^T - \nabla_B f(X_j)^T B^{-1} N.$$
Define $d_N$ and $d_B$ as follows: for each $l \notin J_j$,
$$d_l = -r_l \quad \text{if } r_l > 0, \ x_l \ne 0 \tag{4.11}$$
$$d_l = 0 \quad \text{if } r_l > 0, \ x_l = 0 \tag{4.12}$$
$$d_l = -r_l \quad \text{if } r_l \le 0 \tag{4.13}$$
where $r_l$ is a component of $R_N$ and $d_l$ is a component of $d_N$; also, $d_B = -B^{-1} N d_N$ and $d_j^T = (d_1, d_2, \dots, d_n)^T = (d_B^T, d_N^T)$. If $d_j = 0$, stop; $X_j$ is a K-T (Kuhn-Tucker) point; else go to Step 2.

Step 2: Let $u_j$ be an optimal solution to the following line search problem:
$$\text{Min. } f(X_j + u \times d_j) \tag{4.14}$$
$$\text{s.t. } 0 \le u \le u_{max} \tag{4.15}$$
where
$$u_{max} = \infty \quad \text{if } d_j \ge 0 \tag{4.16}$$
$$u_{max} = \min\{-(X_j)_l / (d_j)_l : (d_j)_l < 0, \ 1 \le l \le n\} \quad \text{otherwise.} \tag{4.17}$$
Let $X_{j+1} = X_j + u_j \times d_j$. Replace j by j + 1 and go to Step 1.
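The direction-finding step translates directly into a few lines of NumPy. The following sketch (all names ours) takes the basic index set as an argument; the text picks the m largest components of $X_j$, block by block when the constraints are block-structured, so that B stays invertible:

    import numpy as np

    def rg_direction(A, x, grad, basic):
        """Direction-finding step of the reduced gradient method.

        A     : m-by-n equality-constraint matrix (A x = K)
        basic : m column indices forming an invertible basis B
        Returns the feasible improving direction d (zero at a K-T point).
        """
        A, x, grad = map(np.asarray, (A, x, grad))
        m, n = A.shape
        nonbasic = np.array([i for i in range(n) if i not in set(basic)])
        B, N = A[:, basic], A[:, nonbasic]
        # reduced gradient: r = grad_N - (B^{-1} N)^T grad_B
        r = grad[nonbasic] - N.T @ np.linalg.solve(B.T, grad[basic])
        # rules (4.11)-(4.13): freeze a variable only at the boundary
        dN = np.where((r > 0) & (x[nonbasic] == 0), 0.0, -r)
        dB = -np.linalg.solve(B, N @ dN)
        d = np.zeros(n)
        d[basic], d[nonbasic] = dB, dN
        return d

    # Worked example of Section 4.5 below, with basis {x1, x2, lambda4}
    # and the gradient of f evaluated at X^0 (values computed by us):
    A = [[1, 0, 1, 0, 0], [0, 1, 1, 0, 0], [0, 0, 0, 1, 1]]
    x = [10, 10, 10, 0.5, 1.5]
    grad = [-1.234, -1.234, -0.749, 0.749, 2.035]
    print(rg_direction(A, x, grad, basic=[0, 1, 4]))
    # -> approximately [ 1.719  1.719 -1.719  1.286 -1.286 ]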
4.4 Joint Routing and Capacity Design in a Virtual Path Network
If routing is also considered, the objective function depends not only on the virtual path capacities ($x_i$), but also on the distribution of arrival rates ($\lambda_i$). For example, the O-D pair A-C in Fig. 4.1 has two virtual paths: 3 and 4. Assume calls of O-D pair A-C arrive as a Poisson process with arrival rate $\lambda_{AC}$; then $\lambda_{AC}$ should be split onto path 3 and path 4, say $\lambda_3$ and $\lambda_4$ respectively. Thus, the constrained non-linear optimization problem becomes

$$\text{Min. } f(x1, x2, x3, \lambda_3, \lambda_4) = f_1(x1, \lambda_1) + f_2(x2, \lambda_2) + f_3(x3, \lambda_3) + f_4(x4, \lambda_4) \tag{4.18}$$
$$\text{s.t. } x1 + x3 = K_{AB} \tag{4.20}$$
$$x2 + x3 = K_{BC} \tag{4.21}$$
$$\lambda_3 + \lambda_4 = \lambda_{AC} \tag{4.22}$$

It is noted that $f_4$ is included in the optimization problem now, and there are 8 variables in the objective function, 5 of which are decision variables ($\lambda_1$, $\lambda_2$ and x4 are constants, since only one virtual route is assigned to O-D pair A-B, only one route to O-D pair B-C, and link A-C contains only one virtual path). Also, a rate conservation constraint is added (equation 4.22).

In general, if there are n virtual paths in a network, and each virtual path carries y types of connections, then the objective function is of dimension at
Figure 4.1: Network example. Link A-B (capacity $K_{AB}$) carries path AB (path 1, capacity x1) and path AC1 (path 3, capacity x3, for O-D pair A-C through B); link B-C (capacity $K_{BC}$) carries path BC (path 2, capacity x2) and path AC1; link A-C carries path AC2 (path 4, capacity $x4 = K_{AC}$).
most $n + y \times n$. In the above example, since only one class of traffic is included, y = 1. If we still assume that the objective function is pseudoconvex (numerical results supporting this assumption are shown in Fig. 2.4, 3.8 and 3.9; see Chapter 3 for details) and that its gradient (a vector with $x_i$ and $\lambda_i$ as its components) can be found, then the Reduced Gradient Method is still applicable to find both the optimal virtual path capacities and the optimal arrival rate distributions, and the routing is then enforced in accordance with these rates.
4.5 Example
$$\text{Min. } f(x1, x2, x3, \lambda_3, \lambda_4) = (e^{-x1+1} + e^{-x2+1} + e^{-x3+\lambda_3} + e^{-x4+\lambda_4}) \times 10000 \tag{4.23}$$
$$\text{s.t. } x1 + x3 = 20 \tag{4.25}$$
$$x2 + x3 = 20 \tag{4.26}$$
$$\lambda_3 + \lambda_4 = 2 \tag{4.27}$$

Assume the cost function on each link is the same, i.e., $f_i(x_i, \lambda_i) = A e^{-x_i + \lambda_i}$, where A is a constant. Let $x4 = 10$, $\lambda_1 = \lambda_2 = 1$, $A = 10000$. The initial feasible point is $X^0 = (x1, x2, x3, \lambda_3, \lambda_4) = (10, 10, 10, 0.5, 1.5)$, and the corresponding value is $f(10, 10, 10, 0.5, 1.5) = 5.25$.

Since (x1, x2, x3) and ($\lambda_3$, $\lambda_4$) are two independent constraint sets, the overall constraint matrix is of the Jordan form. The basic columns of the overall constraint matrix are chosen by verifying the basic variables of both the link capacity constraint matrix (in this case, since x1 = x2 = x3 = 10, we just choose two of them randomly, say x1 and x2) and the flow conservation constraint matrix (in this case, since $\lambda_4 = 1.5 > \lambda_3 = 0.5$, $\lambda_4$ is chosen). So $X_B^T = (x1, x2, \lambda_4)$, $X_N^T = (x3, \lambda_3)$, and B is invertible. Then $R_N^T = (1.719, -1.286)$ and $d_N^T = (-1.719, 1.286)$, $d_B = -B^{-1} N d_N = (1.719, 1.719, -1.286)$, and the normalized improving direction is $d = (0.48, 0.48, -0.36, -0.48, 0.36) = (d_B^T, d_N^T)$; thus the line search problem (Step 2 in Chapter 4.3) becomes:

$$\text{Min. } f = (e^{-(10+0.48u)+1} + e^{-(10+0.48u)+1} + e^{-(10-0.48u)+(0.5+0.36u)} + e^{-10+(1.5-0.36u)}) \times 10000 \tag{4.28}$$
$$\text{s.t. } 0 \le u \le u_{max}. \tag{4.30}$$

The approximate solution is $f(X^1) = 4.67 < f(X^0) = 5.25$ with $u_{opt} = 0.88$, and the next feasible point is $X^1 = (x1, x2, x3, \lambda_3, \lambda_4) = (10.4, 10.4, 9.6, 0.82, 1.18)$. From the first iteration, we observe that we can reduce the total path cost by loading more traffic on path 3 (decrease x3 and increase $\lambda_3$). Compared to the intuitive optimal solution $X_{opt} = (10, 10, 10, 1, 1)$, this movement is correct.
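The numbers above are easy to check numerically. Here is a small Python sketch (our own, not from the dissertation) that evaluates the objective along the quoted direction and performs the line search by ternary search, since f is convex in u:

    import math

    def f(u):
        """Objective of Section 4.5 along the normalized direction d."""
        x1 = x2 = 10 + 0.48 * u
        x3, lam3, lam4 = 10 - 0.48 * u, 0.5 + 0.36 * u, 1.5 - 0.36 * u
        return 10000 * (math.exp(-x1 + 1) + math.exp(-x2 + 1)
                        + math.exp(-x3 + lam3) + math.exp(-10 + lam4))

    # ternary search for the line-search minimum on [0, u_max];
    # u_max follows from (4.17): the bound lambda4 >= 0 binds first
    lo, hi = 0.0, 1.5 / 0.36
    for _ in range(200):
        a, b = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        lo, hi = (lo, b) if f(a) < f(b) else (a, hi)
    u_opt = (lo + hi) / 2
    print(round(f(0.0), 2), round(u_opt, 2), round(f(u_opt), 2))
    # -> 5.25 0.88 4.67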
4.6 Solving an Optimization Problem
According to the QoS measurement introduced above, the user cost and the future cost for each assignment are obtained, so that an optimal bandwidth allocation can be found for each state, and the path cost is determined (Chapter 3). We solve a joint virtual path capacity and virtual path routing problem for a given network by formulating an optimization problem with the sum of all path costs as the objective function. For a network of m physical links and n virtual paths (n > m), the link capacity constraints have to be satisfied ($L_{m \times n} \times X_n = K_m$), as well as the rate conservation constraints ($R_{t \times q} \times \Lambda_q = f_t$, $\Lambda_q^T = (\lambda_1^T, \lambda_2^T, \dots, \lambda_n^T)$). Assume there are in total t rate conservation constraints, and they are composed of q non-zero arrival rates distributed on all routes (t ≤ q); $q = \sum_{i=1,\dots,n} \dim(\lambda_i)$. Let the decision variable be $Z_{(n+q)}^T = (X_n^T, \Lambda_q^T)$ and the constraint vector be $b_{(m+t)}^T = (K_m^T, f_t^T)$. Since the capacity variables ($z_1, \dots, z_n$) and the rate variables ($z_{n+1}, \dots, z_{n+q}$) are independent in the feasible plane, the overall constraint matrix is a Jordan form matrix, as shown in the example of Chapter 4.3, and we can find the feasible and improving directions for the capacity vector $X_n$ and for the rate vector $\Lambda_q$ separately. The "path cost" of the i-th path is $f_i(x_i, \lambda_i)$, where the dimension of $\lambda_i$ is determined by the number of classes of traffic on the i-th path (with VC connections, clients of different O-D pairs are multiplexed on a VP and are treated as different classes of traffic).

The optimization problem is formed:

$$\text{min. } f(x_1, x_2, \dots, x_n, \lambda_1, \lambda_2, \dots, \lambda_n) = f(X_n, \Lambda_q) = \sum_{i=1,\dots,n} f_i(x_i, \lambda_i) \tag{4.31}$$
$$\text{s.t. } L_{m \times n} \times X_n = K_m \tag{4.32}$$
$$\text{s.t. } R_{t \times q} \times \Lambda_q = f_t \tag{4.33}$$

This is actually a standard form of an (n + q)-dimensional nonlinear optimization problem with m + t linear equality constraints (m link capacity constraints and t rate conservation constraints), which can be solved by the Gradient Projection Method introduced above. However, two ways to implement it are suggested: the approximation of the closed form and the numerical methods.
The approximation of the cost function $f_i(x_i, \lambda_i)$  We assume $f_i(x_i, \bar\lambda_i)$ is a convex function (Fig. 2.4, 3.8 and 3.9 are examples supporting this assumption; the traffic characteristics and the QoS requirements of these examples are the same as in the heavy load traffic example in Chapter 3) and can be approximated as $f_i(x_i, \bar\lambda_i) = a_i + b_i \times e^{-c_i x_i + d_i + \bar e_i^T \bar\lambda_i} + f_i$ (in Fig. 2.4, 3.8 and 3.9, $\bar\lambda_i = (\lambda_d, \lambda_L)$). For a given feasible point X, $x_i$ and $\bar\lambda_i$ are known, and the approximation form has $u = 5 + \dim(\bar e_i)$ unknown parameters ($a_i$, $b_i$, $c_i$, $d_i$, $\bar e_i$, $f_i$), so we need u equations, which can be obtained by assigning u sets of $x_i$ and $\bar\lambda_i$ and finding their corresponding values by calculating the inner product mentioned above. After those parameters are found, $f_i(x_i, \bar\lambda_i)$ can be explicitly expressed as a convex function and the gradients can be found. Of course, we can use more parameters in the approximation form to make it more accurate.

For a network of n virtual paths, the objective function is therefore approximated by the sum of all "path costs": $f(\cdot) = \sum_{i=1,\dots,n} f_i(x_i, \bar\lambda_i)$.
Chapter 5

Bandwidth Allocation with Multiplexing Gain

5.1 Iterative Approach
An optimal bandwidth allocation scheme is essentially a batch arrival and batch departure process whereby the arrival batch size is deterministic and varies with states. The bandwidth units assigned to a call are released at the same time, so the steady state departure process is approximated based on the probability mass function (p.m.f.) of bandwidth assignments. Such a p.m.f. is derived from knowledge of the steady state probability and the state decisions. Chapter 3 assumes a non-batch departure process, as shown in Fig. 3.5. According to the decision table in Fig. 3.4, a deterministic batch arrival is represented by the upper-half links in Fig. 3.5. Since the chain is ergodic (see Appendix A for a proof of ergodicity), the steady state probability exists and is obtained by solving the local balance equations. The starred line in Fig. 3.6 is the corresponding steady state probability of Fig. 3.4. This curve shifts left as the load increases (dotted line) and shifts right for light traffic (solid line). The corresponding p.m.f. of bandwidth assignments is a simple combination of the curve in Fig. 3.6 and the decision table in Fig. 3.4, and the resulting departure process is very different from the assumed departure process (the lower half) in Fig. 3.5, whereby each state has multiple outgoing links of different batches associated with different probabilities.
Fig. 3.5 is a bandwidth-unit-independent model whereby bandwidth units of the same call are released independently; therefore the arrival and departure processes are not related. Realistically, these two processes are closely related, not only because the departure (bandwidth release) is a result of the arrival (bandwidth assignment), but also for the following reason: both the MG and the future cost probability, which affect an optimal arrival batch size, are determined by the average departure batch size. Thus, the optimal bandwidth allocation and the bandwidth release pattern are interdependent in an intricate way. Besides the steady state probability, the bandwidth allocation scheme which accounts for MG and future cost probability also makes the correlation intricate and further entangles their relationship. For a given bandwidth allocation scheme, there may be more than one pair of arrival and departure processes which fit each other; they are called "matched sets". Any of them will be considered an optimal bandwidth allocation scheme ("optimal" is defined in the sense that each state selects the assignment causing the least "assignment cost", a weighted sum of "user cost" and "future cost"). But there may be no such arrival/departure pair.

Even without the bandwidth allocation scheme, simply finding two sets of processes (arrival and departure) whereby the average batch departure results from the batch arrival through the p.m.f. calculation is already a difficult problem. In general, the existence of a pair of "matched sets" of a Markov chain evaluated by a particular "black box" is an NP-hard problem, since the "black box" can be mapped into the "combinational circuit" of a Circuit-Satisfiability problem, which is NP-hard, and any "combinational circuit" can be a "black box". But for certain "black boxes", such as the calculation through the p.m.f. in this study, the "matched sets" generally exist and can be found through an iterative approach.
Iterative approach for a fixed assignment model  Fig. 5.1 (a) is a bandwidth queue of length 4, assuming λ = μ = 1.0 and an independent departure process as the initial departure set. In this model, the bandwidth assignment is fixed (0, 1, 1, 2 for states 0, 1, 2 and 3), and {0, 1, 2} is the decision set. The initial departure set obviously does not match the arrival set, therefore an iterative approach must be used to find the matched set.

Fig. 5.1 (b) is the corresponding p.m.f. of decisions, and (c) is the resulting departure process suggested by the p.m.f. For example, let y1 and y2 be the rates of the two outgoing links of state 0 in (c). Then y1 + 2 × y2 = 3, since all 3 channels are occupied in state 0, and y1 = 2 × y2, which is suggested by (b). Therefore, y1 = 1.5 and y2 = 0.75. New departure processes for the other states are calculated in a similar way. We solve the steady state probability again to find a new p.m.f. (Fig. 5.1 (d)). If (d) and (b) are the same, then the matched sets are found to be the departure process in (c) and the fixed assignment set. If not, keep iterating. The departure process suggested by each iteration corresponds to an improving direction toward the "matched sets". We have to move along the improving direction very slowly to guarantee convergence (see Chapter 5.2 for convergence). So instead of using table (d), a linear combination of (b) and (d) with a smaller fraction of (d) is used to update the departure process. Fig. 5.1 (e) is a linear combination of Fig. 5.1 (b) and (d) with fractions of 0.9 and 0.1 respectively. We keep iterating until the updated p.m.f. is the same as the previous one; then any linear combination will be the same p.m.f., i.e., the departure process, which is the result of the arrival process, does not have to be updated again since a pair of "matched sets" has been reached. Fig. 5.1 (g) is the matched set found by iteration, and (h) is its corresponding p.m.f. of decisions.
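In Python, the damped update loop is a few lines; the following sketch (all names ours) treats the whole "solve the balance equations, re-tabulate the p.m.f." procedure as a callback:

    import numpy as np

    def matched_sets(black_box, pmf0, frac=0.1, tol=1e-9, max_iter=10000):
        """Damped fixed-point iteration of Section 5.1 (a sketch).

        black_box(pmf) -> new p.m.f. of decisions: builds departure
        rates from the current p.m.f., solves the balance equations,
        and re-tabulates the decision p.m.f. (panels (b) -> (c) -> (d)
        in Fig. 5.1).  The update is a linear combination with a small
        fraction of the new table (0.9/0.1 in the text) to move slowly
        along the improving direction.
        """
        pmf = np.asarray(pmf0, dtype=float)
        for _ in range(max_iter):
            new = np.asarray(black_box(pmf), dtype=float)
            if np.max(np.abs(new - pmf)) < tol:
                return pmf                      # a pair of "matched sets"
            pmf = (1 - frac) * pmf + frac * new
        raise RuntimeError("no matched sets found within max_iter")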
Figure 5.1: Iteration approach. Panels (a)-(h): the bandwidth queue with fixed assignments; the p.m.f. of decisions; the departure process suggested by the p.m.f. (y1 = 1.5, y2 = 0.75); the updated p.m.f.s produced by successive damped iterations; and the matched set found by iteration, with its corresponding p.m.f. of decisions.
Iterative approach for the optimal assignment model  In this model, an optimal bandwidth assignment which incurs the smallest assignment cost is found recursively for each state. The assignments are affected by both the MG and the future cost probability, which are calculated through the p.m.f. of bandwidth assignments. Since the p.m.f. is derived partly from the steady state probability, and this probability is solved based on both the arrival and departure rates, an updated set of departure rates will go back and change the optimal decision set. In an optimal assignment model, the two sets affect each other sequentially, and not every newly generated set signifies an improving direction, unless one set is always less sensitive to the change than the other. The sensitivity depends on the "black box", which is a procedure of the following steps: 1) calculate the MG for each state; 2) calculate the future cost probability for each state; 3) find the optimal assignment for each state; 4) solve the local balance equations; 5) make the p.m.f. table for decisions; 6) suggest new departure rates. Steps 2) to 6) are either similar to Chapter 3.2 or straightforward, except that more complicated calculations are required for step 2).

Step 1) is explained as follows: if a call arrives at state "m" and "j" primary units of bandwidth are assigned, then "j" units are primarily reserved for that call (so that the future cost probability is calculated based on primary assignments). If the cell rate is actually less than "j", other calls can be served. On the other hand, if the cell rate is higher than "j", available quotas from other calls can be used to serve it. The expected additional bandwidth obtainable for a call is called the MG (multiplexing gain), which is used in the recursive equation (see equation 3.2).
The MG comes from the available quota of existing calls. Let N_h be the average number of calls with assignment "h" within the "K - m" assigned channels. N_h is a function of "m" and is obtained from the p.m.f. of assignments. For example, in state 0 of Fig. 3.5 (a), the 3 occupied channels are composed of N_1 "one unit" decisions and N_2 "two unit" decisions, so N_1 + 2 x N_2 = 3. From Fig. 3.5 (b), N_1 = 2 x N_2; hence N_1 = 1.5 and N_2 = 0.75, which are also the outgoing rates y1, y2 in Fig. 3.5 (c) since λ = μ = 1.0. Then

  Supply = (j - B) + Σ_h N_h x (h - B)

(B is the mean rate), which is the total expected number of idle channels when a call arrives at state "m" with "j" units of primary bandwidth assigned. Since all existing calls and the call in question will compete for this additional resource to serve overflow cells (the cell rate is sometimes higher than the assigned primary bandwidth), and multiplexing without priority means pure competition, the "Supply" is distributed to each call according to its "Demand"; i.e.,

  MG = Supply x P_store(j) / ( P_store(j) + Σ_h N_h x P_store(h) ),

where P_store(j) is the expected number of overflow cells corresponding to a primary assignment "j" (see Fig. 2.2). The MG is added into the iteration scheme above by first assuming MG = 0 in the first iteration; MG is then calculated in subsequent steps.
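A minimal sketch of this Supply/Demand split follows, assuming the averages N_h and the overflow-cell table P_store are already available (the function and argument names are ours; the formula is as reconstructed above):

```python
def multiplexing_gain(j, B, N, P_store):
    # j: primary assignment of the arriving call; B: mean cell rate
    # N: dict, assignment h -> average number of existing calls with h
    # P_store: dict, assignment -> expected number of overflow cells
    supply = (j - B) + sum(n * (h - B) for h, n in N.items())
    demand = P_store[j] + sum(n * P_store[h] for h, n in N.items())
    # The Supply is shared in proportion to each call's Demand.
    return supply * P_store[j] / demand
```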
5.2 Convergence
The multiplexing gain calculation introduced in Chapter 5.1 illustrates the interdependent relationship between the bandwidth assignment and the bandwidth release processes; they are now correlated not only by the future cost probability, but also by the steady state probability from which the MG is derived. Thus, it is almost impossible to verify the convergence (or, to claim that a pair of "matched sets" can be found) mathematically. However, we can find such a vector by the proposed iterative method. The p.m.f. of decisions is viewed as a vector in a hyperspace; thus, the distance between the vector before an iteration and the vector after the iteration signifies the distance to convergence. It is difficult to find an iteration with both vectors (before and after) the same. What we propose is to start with an iteration (two vectors are associated); usually the resulting vector distance is large. We then try to shorten the distance in the next trial (iteration). Let the vector before the current trial be V11, and the one after be V12. Also, let the vector before the next trial be V21, and the one after be V22. A bold assumption is raised: if V21 is chosen by perturbing V11 a small amount toward V12, then the resulting V22 will deviate from V12 by an even smaller amount. Thus, |V12 - V11| > |V22 - V21| > ... > |Vn2 - Vn1|; Vn1 converges to Vn2 if n is large enough. This assumption was verified by computer on 48 examples, each with a different virtual path capacity (100, 50, 30 and 10), traffic load (heavy, medium and light), rejection cost (100 and 4000) and traffic class (loss sensitive and delay sensitive). The maximum number of iterations was 55. Even though we cannot prove the assumption mathematically, the reasoning behind it is quite convincing. If we move V11 toward V12 on a small scale, the departure batch size of each state is changed slightly, thus the MG and the future cost probability are adjusted. But the optimal bandwidth allocation may still be the same, since it is chosen from candidate assignments, all of which are affected by the same change. Since the decision change is less sensitive than the vector deviation, the resulting steady state probability and p.m.f. are less sensitive as well; consequently, |V22 - V21| is smaller than the deviation |V12 - V11|. If V21 is still between V11 and V12 but |V21 - V11| is large, for example V21 = V12, the iterations still converge in most cases; however, oscillations have been found in 5 out of 50 examples.
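A minimal sketch of this damped iteration, with the "black box" of Chapter 5.1 abstracted as a callable, might look as follows (the step size, tolerance, and names are illustrative assumptions, not the thesis's notation):

```python
import numpy as np

def find_matched_sets(black_box, v0, step=0.1, tol=1e-9, max_iter=200):
    # v: current p.m.f.-of-decisions vector (V_n1); black_box(v) runs
    # steps (1)-(6) of Chapter 5.1 and returns the suggested vector (V_n2).
    v = np.asarray(v0, dtype=float)
    for _ in range(max_iter):
        v_suggested = black_box(v)
        if np.linalg.norm(v_suggested - v) < tol:
            return v                       # a pair of "matched sets" reached
        # Perturb only a small amount toward the suggestion (fraction 0.1),
        # as with Fig. 5.1 (e); a full step (step = 1.0) may oscillate.
        v = (1.0 - step) * v + step * v_suggested
    raise RuntimeError("did not converge within max_iter trials")
```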
5.3 Numerical Result

An example illustrating the convergence is shown in Fig. 5.2. To simplify the analysis, a single class of traffic (the loss sensitive call used in Chapter 3) under heavy load (λ = 3.0) is assumed in a VP of capacity 100. In the steady state, only 1, 2 and 7 (the peak rate) are assigned as the primary bandwidth; therefore the departure from most states can be either a short hop (of batch size 1 or 2) or a long hop of batch size 7, which results in the wave-like steady state probability in Fig. 5.3. The average service rate is derived from the steady state probability, and a utilization factor of 0.8 is found. From Fig. 5.4, a loss sensitive call is not rejected, thanks to the multiplexing gain, even when the available bandwidth cannot support the minimum QoS requirement (which is 4); likewise, the peak rate is not assigned to it, due to the high rejection probability in a heavily loaded system, even when the available bandwidth exceeds 7. Since existing calls are statistically multiplexed within a VP, they compete for the quota from the quota pool according to their needs, in other words, proportionally to the storage probability. This probability increases exponentially as the primary bandwidth assignment decreases. Therefore, while only a limited amount of bandwidth is available, a smaller primary assignment turns out to be most profitable, since a relatively larger MG can be obtained from the quota pool. Thus, extreme primary assignments are preferred. The multiplexing gain obtained by a call arriving at different states is shown in Fig. 5.5, and the equivalent bandwidth is the sum of the MG table in Fig. 5.5 and the primary decision table in Fig. 5.4. A call has an equivalent bandwidth of 6 when 12 or fewer units of bandwidth are available, and an equivalent bandwidth of 7 when 13 or more are available. Less available bandwidth indicates more existing calls, i.e., a larger MG; therefore the service quality is not very responsive to the state, in contrast with Figs. 5 and 6, where no statistical multiplexing is provided.
[Figure: transitions of the probability mass function of decisions, with curves for successive iteration counts (5, 20, and >= 40 shown). Parameters: VP capacity 100; utilization 0.8; peak rate 7.0; mean rate 3.0; normalized burst length 0.6; QoS: average delay jitter < 150 ms, loss probability < 0.000001.]
Figure 5.2: Converged p.m.f.
[Figure: steady state probability versus state (available bandwidth when a call arrives), 0-100, showing the wave-like pattern. Parameters: VP capacity 100; utilization 0.8; peak rate 7.0; mean rate 3.0; normalized burst length 0.6; QoS: average delay jitter < 150 ms, loss probability < 0.000001.]
Figure 5.3: Steady state probability
[Figure: primary bandwidth decision table versus state (available bandwidth when a call arrives), 0-100. Parameters as in Fig. 5.2; decisions converge to this table after 40 iterations.]
Figure 5.4: Primary Bandwidth Assignment
[Figure: multiplexing gain (0 to 4.5 units) versus state (available bandwidth when a call arrives), 0-100. Parameters: VP capacity 100; utilization 0.8; peak rate 7.0; mean rate 3.0; normalized burst length 0.6; QoS: average delay jitter < 150 ms, loss probability < 0.000001.]
Figure 5.5: Multiplexing Gain
Fig. 5.6 shows the state cost (the assignment cost incurred by the optimal decision of each state), which drops as the available bandwidth increases. For a call with a strict rejection probability requirement, a high rejection cost is recommended. Rejection is defined as no primary bandwidth being assigned; the multiplexer simply overlooks the customer initiating the call. The average rejection probability of the above example is 0.0025, which is obtained from Fig. 5.3 and Fig. 5.4. If it is larger than required, we can raise the rejection cost, which is currently set to 4000. Fig. 5.7 shows the state cost of a medium load (λ = 1.5) example whose utilization is 0.33. The rejection cost is still 4000, but the cost drops faster than in the heavy load example. The corresponding steady state probability inherits the same wave-like pattern (Fig. 5.8), and the corresponding primary decision table inherits the extreme assignment nature (Fig. 5.9); it is similar to Fig. 5.4, with an equivalent bandwidth of 7 for all states (Fig. 5.9 + Fig. 5.10). However, the primary decision table (Fig. 5.9) becomes conservative compared with the heavy load system (Fig. 5.4), which differs from the observation of Fig. 3.4. This is because the steady state probability of the medium load is concentrated in those states with primary decision 7; therefore more MG can be offered, so smaller primary decisions suffice. The MG of state 0 in both examples is in fact sufficient to support the minimum QoS requirement, but since there is no primary assignment in state 0, according to the definition of rejection, any call arriving at state 0 is rejected. Such a definition simplifies the scheduling complexity of the multiplexer.
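The average rejection probability quoted above can be computed directly from the two tables; a minimal sketch (names are ours, assuming Poisson arrivals so that an arriving call sees the steady state distribution):

```python
def avg_rejection_probability(pi, decision):
    # pi: state -> steady state probability (as in Fig. 5.3)
    # decision: state -> primary bandwidth assignment (as in Fig. 5.4)
    # A call is rejected exactly when it arrives at a state whose
    # optimal primary assignment is 0.
    return sum(p for state, p in pi.items() if decision[state] == 0)
```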
[Figure: cost of each state versus state (available bandwidth when a call arrives) for the heavy load example; parameters as in Fig. 5.2; QoS: average delay jitter < 150 ms, loss probability < 0.000001.]
Figure 5.6: State cost
[Figure: cost of each state versus state (available bandwidth when a call arrives) for the medium load example; VP capacity 100; utilization 0.33; peak rate 7.0; mean rate 3.0; normalized burst length 0.6; QoS: average delay jitter < 150 ms, loss probability < 0.000001.]
Figure 5.7: State cost
[Figure: steady state probability versus state (available bandwidth when a call arrives) for the medium load example; VP capacity 100; utilization 0.33; peak rate 7.0; mean rate 3.0; normalized burst length 0.6; QoS: average delay jitter < 150 ms, loss probability < 0.000001.]
Figure 5.8: Steady state probability
[Figure: primary bandwidth decision table versus state (available bandwidth when a call arrives) for the medium load example; VP capacity 100; utilization 0.33; peak rate 7.0; mean rate 3.0; normalized burst length 0.6; QoS: average delay jitter < 150 ms, loss probability < 0.000001. Decisions converge to this table after 32 iterations.]
Figure 5.9: Primary Bandwidth Assignment
[Figure: multiplexing gain versus state (available bandwidth when a call arrives) for the medium load example; VP capacity 100; utilization 0.33; peak rate 7.0; mean rate 3.0; normalized burst length 0.6; QoS: average delay jitter < 150 ms, loss probability < 0.000001.]
Figure 5.10: Multiplexing Gain
Chapter 6
Conclusions and Future Research Directions
Without statistical multiplexing, an ATM network is simply a fast packet switching network. It is faster than the current X.25 packet switching network since switching (routing) is implemented in the element layer. In the hierarchical multiplexing world, a DSU (Data Service Unit) is the device that multiplexes fractional T1 signals onto a T1 line, and the DCS (Digital Cross-Connect) system integrates hierarchical signals at higher rates. Similarly, in the asynchronous multiplexing world, asynchronous multiplexers are needed, and ATM cells are statistically multiplexed through them. Queueing models are widely used to study statistical multiplexing ([17, 26, 35, 57, 58], etc.). However, they assume that cells of different connections enter a common buffer instantaneously, i.e., there is no statistical multiplexing, since the multiplexing function in the element layer is ignored. Numerous ATM studies are not realistic since they do not consider statistical multiplexing ([7, 12, 13, 16, 18, 21, 31, 47, 55, 64, 65], etc.). ATM manufacturers claim that they have ATM products now, but in fact current products are hierarchy-based. For most companies, true ATM products will not be released until 1996. Since the implementation of asynchronous multiplexing is not known for now, perfect idle detection and full quota reallocation capability are assumed in this thesis, and statistical multiplexing is then studied within the proposed bandwidth allocation algorithm.
Existing ATM research examines different pieces of the ATM puzzle. The relationships among these pieces are ambiguous, and the approaches are in some cases ambivalent. For example, routing parameters are not considered in [55], since the VP capacity is the target; conversely, the VP capacity is ignored in [22], since routing is the target. Furthermore, [21] studies the bandwidth allocation without taking either VP capacity or VP routing into account, while the bandwidth allocation is assumed to be known in [55] and [22]. This thesis studies the interactions among the different ATM pieces, as well as the tools to solve the whole puzzle.
The bandwidth allocation scheme proposed is the kernel of this research; it is based on a bulk-arrival/bulk-departure Bandwidth Queue model. Since the future rejection cost is considered while assigning bandwidth, congestion control is explicitly implemented. Also, since the QoS cost is set to infinity if the bandwidth assigned cannot satisfy the minimum QoS requirement (see Fig. 2.3 (d) and (e)), admission control is implemented implicitly as well. Additionally, the multiplexing gain is clearly defined. Once the network management software receives a customer request, a bandwidth allocation table and a multiplexing gain table are produced through the proposed algorithm. The corresponding bandwidth is allocated by looking up the decision table. The transition of asynchronous multiplexing from the element layer to the management layer is therefore provided, which is the other contribution of this work.
Since the steady state probability of bandwidth occupancy is known, and the bandwidth assigned and the multiplexing gain for each customer are known, as well as the aggregate cost, we have enough knowledge to design our system. VP layout design is one of the most promising future research directions. Right now, there is no engineering tool for VP layout design, and heuristics are applied. Since the current ATM network topology is simple and carries only PVC traffic, heuristics are adequate. But VP layout design will become a real problem for future ATM deployments.
The convergence between the bandwidth and the multiplexing gain mentioned in Chapter 5 also needs further study. In addition, we assume that the SVC traffic arrives as a Poisson process, but for certain real-life SVC traffic this may not be true; the future rejection cost probability would then have to be changed accordingly.
In this thesis, a framework is built to solve an ATM problem in which traffic characteristics, QoS measurement, admission control, congestion control, bandwidth allocation, VP routing and VP capacity design are fully integrated. The proposed framework takes maximum advantage of asynchronous multiplexing. The network is therefore designed to fully utilize the bandwidth, increase QoS, and reduce the aggregate network cost.
Appendix A
Dynamic VP Capacity Design
Network resources are divided into virtual paths, and traffic between an O-D pair is carried by one or more virtual paths. When the virtual path bandwidth is fixed (a deterministic virtual path), there is no bandwidth control, and thus statistical multiplexing occurs between virtual channels within paths but not between paths within links. Therefore, the bandwidths of some virtual paths in a link are often used up while others in the same link are underutilized; "bandwidth control" has thus been proposed to improve this [47]. Of course, there is a tradeoff between efficiency and operational complexity. [47] shows that a simple three-value bandwidth control algorithm can improve the efficiency considerably with a slight increase in processing. In the model proposed here, bandwidth (VP capacity) can be adjusted according to the bandwidth shortage, which can be learned from the decision table. The advantage of reduced node processing in a virtual path network must not be lost.
For example, link AB in Fig. 4.1 contains two virtual paths; the traffic on these two paths is assumed to be identical, and the same traffic parameters as in Chapter 3 are used (λ_L = 3.0, λ_D = 1.5, and a loss sensitive call is 20 times as important as a delay sensitive call). Link AB has a capacity of 200 channels; we then form an optimization problem with the two virtual path capacities as decision variables. The solution is 100 for each virtual path (it can also be found intuitively), and thus Fig. 3.4 becomes the decision table for both traffic streams. In other words, unless there are at least 16 (14) channels available, a loss (delay) sensitive call is rejected. If path 1 (path AC) is in state 10 when a loss sensitive call destined to C in Fig. 4.1 arrives, then in order to achieve multiplexing we do not reject it but look at path 2 (path AB) first. If path 2 is in a state greater than 6, say 30, then 6 of its channels are moved to path 1, so that the call in question is accepted with assignment 16, bringing path 1 to state 0 and path 2 to state 24.
If there are z virtual paths within a link, then path 1 could obtain the 6 channels fairly from different paths (perhaps in proportion to their path capacities). However, to keep it simple, path 1 gets all 6 channels from the next path found in a state greater than 6; i.e., the path in need polls cyclically. In this fashion, the additional operation for node S (the ATM switch) is to search a buffer of size z and update it; compared with the per-cell VPI/VCI verification, the complexity of this additional processing is tolerable.
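As an illustration only, the cyclic poll might be realized as follows; the function name, the list representation of the buffer, and the return convention are our assumptions, not part of the thesis:

```python
def borrow_channels(states, needy, need):
    # states: list of length z, states[i] = available bandwidth of path i
    # needy: index of the path in need; need: number of channels required
    z = len(states)
    for k in range(1, z):
        donor = (needy + k) % z          # poll the next paths cyclically
        if states[donor] > need:         # first path that can supply
            states[donor] -= need
            states[needy] += need
            return donor                 # index of the donating path
    return None                          # no path can supply: QoS failure
```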
Since we know the steady state probability distribution, the throughput and utilization of link SC can be calculated. The throughput is

  1 - Σ_{i=0..15} P(i) x [ (λ_L/(λ_L + λ_D)) x P_rejectL(i) + (λ_D/(λ_L + λ_D)) x P_rejectD(i) ],

where P(i) is the probability that a call comes to path 1 at state i, P_rejectL(i) is the probability that path 2 is in a state smaller than 16 - i, and P_rejectD(i) is the probability that it is in a state smaller than 14 - i. The previous example has i = 10, and from Fig. 8, P(10) = 0.002617, P_rejectL(10) = 0.005065 and P_rejectD(10) = 0.002541.
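A sketch of this computation, with the state distributions passed in as arrays (our reconstruction of the weighting by arrival-rate fractions; all names are ours):

```python
def link_throughput(P, reject_L, reject_D, lam_L=3.0, lam_D=1.5):
    # P[i]: probability a call arrives to path 1 in state i
    # reject_L[i] / reject_D[i]: probability path 2 cannot donate the
    # channels missing for a loss / delay sensitive call at state i
    w_L = lam_L / (lam_L + lam_D)
    w_D = lam_D / (lam_L + lam_D)
    lost = sum(P[i] * (w_L * reject_L[i] + w_D * reject_D[i])
               for i in range(16))
    return 1.0 - lost
```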
The utilization can be calculated similarly, in terms of E(path), the expected state of the virtual path at which the call arrives, P(i) as defined above, and the acceptance probabilities P_acceptL(i) = 1 - P_rejectL(i) and P_acceptD(i) = 1 - P_rejectD(i).
Table A.1 shows the throughput and utilization under different loads. They are considerably improved when the system is heavily loaded, since the blocking probability under heavy load is large, so bandwidth control is highly recommended there. Suppose path 1 traverses w links, and all links have the same capacity constraint of 200. Assume that each link contains two virtual paths: path 1 and path i + 1, i = 1, ..., w; path i + 1 travels only on the i-th link (e.g., path BC in Fig. 4.1), with the same traffic parameters as path 1. The paths i + 1 are therefore independent, and some of them may be unable to provide channels to path 1 while path 2 is able to; the bandwidth multiplexing is then incomplete and the efficiency is degraded. The probabilities of multiplexing failure for w = 8 under different loads are shown in Table A.1. Obviously, bandwidth multiplexing between paths does not perform well for an 8-hop connection under heavy load: the QoS of 68.9 percent of the calls requiring bandwidth control cannot be guaranteed. This is the disadvantage of the SVP (statistical virtual path) scheme: a tradeoff between utilization and QoS. If we investigated the states of all paths i + 1 before the bandwidth control, QoS could be preserved; however, a major benefit of the virtual path is to reduce the call setup operation, so this alternative is not acceptable. If we apply the SVP scheme, QoS failure is inevitable, and this does not meet the general ATM requirement. The solution we suggest is to set a tolerable QoS failure probability for each path; the path capacity is then designed to keep the QoS failures within that tolerance. For example, if we set the threshold of path 1 to be 0.001, then we have to assign a large enough capacity to this path to make it lightly loaded. Also, since we have to pay for the QoS failures, the above optimization solution is not truly optimal. For example, path 1 should have been assigned a capacity larger than 100, since it traverses many links while the paths i + 1 do not. But we would prefer to adjust the link capacity rather than the virtual path capacities in order to avoid the QoS failures. However, if the link capacity is quite limited and long-hop paths are abundant, then no matter how we distribute the virtual paths, the QoS failure probabilities of some paths are always intolerable. If we are authorized to design the link capacity and the network topology, then we form an optimization problem which includes the QoS failure penalty, so that the QoS failure probability of each path is within the tolerable range.
We conclude that under heavy loads, the bandwidth control of a virtual path is very sensitive to the number of links it traverses, and direct paths are preferred. Also, in terms of congestion control, the virtual path idea works as a congestion avoidance scheme which prevents network congestion by inducing local congestion (jobs accumulating on the customers' desks), while the bandwidth control scheme works as congestion control at the user ends.
                                           Light          Medium        Heavy
  Probability of performing
    bandwidth control                      0.000166       0.0235        0.3458
  Throughput (without
    bandwidth control)                     0.999834       0.9765        0.6542
  Throughput (with
    bandwidth control)                     0.999999       0.9989        0.9586
  Throughput improved                      0.000165       0.0224        0.3044
  Utilization (without
    bandwidth control)                     22.39%         52.33%        72.83%
  Utilization (with
    bandwidth control)                     22.932%        52.7%         79.24%
  Utilization improved                     0.002%         0.37%         6.41%
  Probability of QoS failure
    (incomplete BW control),               1-(0.99999)^w  1-(0.995)^w   1-(0.864)^w
    w = 8                                  = 0.00008      = 0.03931     = 0.68947

Table A.1: QoS failure under bandwidth control
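The last row follows from the independence of the paths i + 1: a w-hop bandwidth-control attempt succeeds only if every link's companion path can donate. A two-line check of the heavy-load entry (the per-link success probabilities are read off the table):

```python
def qos_failure_prob(p_link_ok, w=8):
    # p_link_ok: probability one link's companion path can donate channels
    return 1.0 - p_link_ok ** w

# qos_failure_prob(0.864) -> 0.6894..., the 68.9% quoted in the text
```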
Appendix B
Ergodicity
If a Markov chain is irreducible, aperiodic and positive recurrent, then it is ergodic and the steady state probability exists. For a finite Markov chain, as long as irreducibility and aperiodicity hold, positive recurrence holds automatically, so only irreducibility and aperiodicity need to be checked.

A simple rule of thumb to verify aperiodicity is to check for the existence of a "self loop" in the state transition graph. In other words, if the probability of staying in the same state is nonzero for each state, then aperiodicity is guaranteed, and this is exactly our case: the probability that no server completes its service before the next call is nonzero.

Irreducibility means connectivity between the states, and in general this holds. A simple rule of thumb is that as long as a decision of "one bandwidth unit" exists, all states are reachable from the others through the departure links of bulk size one. If "assignment one" does not exist, further inspection is needed for connectivity. Those states assigning zero bandwidth (rejection) to incoming calls form another class which cannot be accessed from the higher class of states that assign at least one unit of bandwidth; but since the initial state of the Markov chain is K, we never reach those states, the chain remains a single-class chain, and irreducibility holds. Thus, the steady state probability exists and can be solved.
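For a finite chain these two checks are mechanical; a small sketch follows (numpy assumed; the self-loop test is the sufficient rule of thumb above, and the names are ours):

```python
import numpy as np

def is_ergodic(P):
    # P: finite stochastic matrix, P[i][j] = transition probability i -> j
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    # Aperiodicity via the rule of thumb: every state has a self loop.
    aperiodic = bool((np.diag(P) > 0).all())
    # Irreducibility: the transitive closure of the transition graph is
    # full, computed by repeated boolean squaring of the adjacency matrix.
    reach = (P > 0) | np.eye(n, dtype=bool)
    for _ in range(int(np.ceil(np.log2(max(n, 2))))):
        reach = (reach.astype(np.int64) @ reach.astype(np.int64)) > 0
    irreducible = bool(reach.all())
    return aperiodic and irreducible
```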
References

[1] CCITT I.327. BISDN network functional architecture. 1990.
[2] H. Ahmadi and W. E. Denzel. A survey of modern high-performance switching techniques. IEEE Journal on Selected Areas in Commun., SAC-7(7):1091-1103, September 1989.
[3] M. Akiyama and S. Sato. Teletraffic studies in Japan. IEICE, E75-B(12):1237-1244, December 1992.
[4] M. H. Ammar, S. Y. Cheung, and C. M. Scoglio. Routing multipoint connections using virtual paths in an ATM network. In INFOCOM, pages 98-105, 1993.
[5] M. S. Bazaraa and C. M. Shetty. Nonlinear programming theory and algorithms. John Wiley and Sons (Sea) Pte Ltd, 1990.
[6] D. Bertsekas. Dynamic programming. Prentice Hall, 1987.
[7] R. Bolla, F. Davoli, A. Lombardo, S. Palazzo, and D. Panno. Adaptive bandwidth allocation by hierarchical control of multiple ATM traffic classes. In INFOCOM, pages 30-38, 1992.
[8] R. Bubenik, M. Gaddis, and J. DeHart. Communicating with virtual paths and virtual channels. In INFOCOM, pages 1035-1042, 1992.
[9] C. Clos. A study of non-blocking switching networks. Bell System Technical Journal, 32:406-424, March 1953.
[10] T. H. Cormen, C. E. Leiserson, and R. L. Rivest. Introduction to algorithms. Prentice Hall, 1989.
[11] P. Crocetti, G. Gallassi, and M. Gerla. Bandwidth advertising for MAN/ATM connectionless internetting. In INFOCOM, pages 1145-1150, 1991.
[12] M. Decina and T. Toniatti. On bandwidth allocation to bursty virtual connections in ATM networks. In ICC, pages 844-851, 1990.
[13] M. Decina, T. Toniatti, P. Vacari, and L. Verri. Bandwidth assignment and virtual call blocking in ATM networks. In INFOCOM '90, pages 881-888, 1990.
[14] E. W. Dijkstra. A note on two problems in connexion with graphs. Numerische Mathematik, 1:269-271, 1959.
[15] S. Elby and A. S. Acampora. Wavelength-based cell switching in ATM multihop optical networks. In INFOCOM, pages 953-963, 1993.
[16] A. I. Elwalid. Effective bandwidth of general Markovian traffic sources and admission control of high speed networks. In INFOCOM, pages 256-265, 1993.
[17] A. I. Elwalid and D. Mitra. Fluid models for the analysis and design of statistical multiplexing with loss priority on multiple classes of bursty traffic. In INFOCOM, pages 415-425, 1992.
[18] G. Gallassi, G. Rigolio, and L. Fratta. ATM: Bandwidth assignment and bandwidth enforcement policies. In GLOBECOM '89, pages 1788-1793, 1989.
[19] B. Gavish and I. Neuman. A system for routing and capacity assignment in computer communication networks. IEEE Trans. on Comm., 37(4):360-366, April 1989.
[20] M. Gerla, T. Y. Tai, J. S. Monteiro, and G. Gallassi. Interconnecting LANs and MANs to ATM. In LCN Conference Proceedings, Minneapolis, October 1991.
[21] R. Guerin, H. Ahmadi, and M. Naghshineh. Equivalent capacity and its application to bandwidth allocation in high-speed networks. IEEE JSAC, 9(7):968-981, September 1991.
[22] S. Gupta and K. W. Ross. Routing in virtual path based ATM networks. In GLOBECOM, pages 571-575, 1992.
[23] Z. Haas and J. H. Winters. Congestion control by adaptive admission. In INFOCOM, pages 560-569, 1991.
[24] V. F. Hartanto and H. R. Sirisena. User-network policer: a new approach for ATM congestion control. In INFOCOM, pages 376-383, 1993.
[25] O. Hashida and M. Fujiki. Queueing models for buffer memory in store-and-forward systems. In 7th ITC, 1973.
[26] H. Heffes and D. M. Lucantoni. A Markov modulated characterization of packetized voice and data traffic and related statistical multiplexer performance. IEEE Journal on Selected Areas in Commun., SAC-4(6):856-868, September 1986.
[27] H. S. Hinton. Photonic switching fabrics. IEEE Commun. Magazine, pages 71-89, April 1990.
[28] H. Horigome and H. Uose. Considerations on cost-efficiency of ATM network. IEICE Trans. Comm., E75-B(7), July 1992.
[29] CCITT I.221. BISDN service aspects. 1990.
[30] CCITT I.311. BISDN general network aspects. 1990.
[31] P. Joos and W. Verbiest. A statistical bandwidth allocation and usage monitoring algorithm for ATM networks. In ICC, pages 415-422, 1989.
[32] T. Kawasaki, S. Kuroyanagi, and R. Takechi. A study of high speed data communication system using ATM. In GLOBECOM, page 59.5, December 1991.
[33] I. Kino and M. Miyazawa. The stationary work in system of a G/G/1 general input queue. Journal of Applied Probability, 30(1), 1993.
[34] T. V. Landegem and R. Peschi. Managing a connectionless virtual overlay network on top of ATM. In ICC Proceedings, Denver, June 1991.
[35] J. W. Lee and B. G. Lee. Performance analysis of ATM cell multiplexer with MMPP input. IEICE, E75-B(8):709-714, August 1992.
[36] K. C. Lee and V. O. K. Li. Routing and switching in a wavelength convertible optical network. In INFOCOM, pages 578-584, 1993.
[37] M. J. Lee and S. Q. Li. Performance of a non-blocking space-division packet switch in a time variant non-uniform traffic environment. In ICC, pages 502-508, 1990.
[38] S. Q. Li. Nonuniform traffic analysis on a nonblocking space-division packet switch. IEEE Trans. on Commun., COM-38(7):1085-1096, July 1990.
[39] V. O. K. Li, J. F. Chang, K. C. Lee, and T. S. G. Yang. A survey of research and standards in high-speed networks. International Journal of Digital and Analog Communication Systems, 4:269-309, 1991.
[40] I. Machihara. Completion time of service unit interrupted by PH-Markov renewal customers and its application. In 12th ITC, 1988.
[41] C. Magzine. Object-oriented design of ISDN call-processing software. IEICE, 31(4):40-45, April 1993.
[42] O. L. Mangasarian, R. R. Mayer, and S. M. Robinson. Nonlinear programming 2. Academic Press, pages 29-54, 1991.
[43] K. Maruyama. Object-oriented switching software technology. IEICE, pages 957-968, October 1993.
[44] G. Meempat, G. Ramamurthy, and B. Sengupta. A new performance measure for statistical multiplexing: perspective of the individual source. In INFOCOM, pages 531-537, 1993.
[45] K. Murakami, K. Hajikano, and A. A. Yujikato. Service and media control using ATM. IEICE, pages 772-779, April 1991.
[46] C. Oh, M. Murata, and H. Miyahara. Priority control for ATM switching system. IEICE, pages 894-905, September 1992.
[47] S. Ohta, K. I. Sato, and I. Tokizawa. A dynamically controllable ATM transport network based on the virtual path concept. In GLOBECOM, pages 1272-1276, 1988.
[48] S. Ohta, K. I. Sato, and I. Tokizawa. Broad-band ATM network architecture based on virtual paths. IEEE Trans. on Commun., pages 1212-1222, August 1990.
[49] Y. Oie, T. Suda, M. Murata, D. Kolson, and H. Miyahara. Survey of switching techniques in high-speed networks and their performance. In INFOCOM, pages 1242-1251, 1990.
[50] R. O. Onvural and Y. C. Liu. On the amount of bandwidth allocated to virtual paths in ATM networks. In GLOBECOM, pages 1461-1464, 1993.
[51] J. B. Rosen and J. Kreuser. A gradient projection algorithm for nonlinear programming with linear constraints. In Numerical Methods for Nonlinear Optimization, 1972.
[52] J. B. Rosen. The gradient projection method for nonlinear programming, part I: linear constraints. SIAM J. Applied Mathematics, 9:514-553, 1961.
[53] F. Sato, M. Hoshi, and Y. Inoue. Functional element for switching software based on object-oriented paradigm with UPT as an example. IEICE, pages 1052-1060, October 1993.
[54] K. I. Sato and I. Tokizawa. Flexible Asynchronous Transfer Mode networks utilizing virtual paths. In ICC, pages 831-838, 1990.
[55] Y. Sato and K. I. Sato. Virtual path and link capacity design for ATM. IEEE Trans. on Comm., 9(1):104-111, January 1991.
[56] D. K. Smith. Dynamic programming. Ellis Horwood, 1991.
[57] K. Sriram and W. Whitt. Characterizing superposition arrival processes in packet multiplexers for voice and data. IEEE JSAC, 4(6):833-846, September 1986.
[58] H. Suzuki and S. Sato. Temporal cell loss behavior in an ATM multiplexer with heterogeneous burst input. IEICE, E75-B(12), December 1992.
[59] A. Takase, J. Yanagi, Y. Sakurai, and Y. Miyamori. An experimental B-ISDN system for MAN application study. In GLOBECOM, page 59.4, December 1991.
[60] T. E. Tedijanto and L. Gun. Effectiveness of dynamic bandwidth management mechanisms in ATM networks. In INFOCOM, pages 358-367, 1993.
[61] I. Tokizawa, T. Kanada, and K. I. Sato. A new transport mode technique. Proc. ISSLS, 11-16:11.2.1-11.2.5, September 1988.
[62] A. Varma and C. S. Raghavendra. Rearrangeability of multistage shuffle/exchange networks. IEEE Trans. on Commun., COM-36(10):1138-1147, October 1988.
[63] P. Wolfe and R. L. Graves. Methods of nonlinear programming. In Recent Advances in Mathematical Programming, 1963.
[64] G. M. Woodruff and R. Kositpaiboon. Multimedia traffic management principles for guaranteed ATM network performance. IEEE Journal on Selected Areas in Commun., SAC-8(3):437-446, April 1990.
[65] N. Yamanaka, Y. Sato, and K. I. Sato. Performance limitation of Leaky Bucket algorithm for usage parameter control and bandwidth allocation methods. IEICE, E75-B(2):82-86, February 1992.
[66] N. Yamanaka, Y. Sato, and K. I. Sato. Precise UPC scheme suitable for ATM networks characterized by widely ranging traffic parameter values. IEICE, E75-B(12):1367-1372, December 1992.