Towards Systematic Roadmaps for Networked Systems

Bin Liu, Hsunwei Hsiung‡, Da Cheng‡, Ramesh Govindan, Sandeep Gupta‡

Computer Science Department, ‡Electrical Engineering Department
University of Southern California, Los Angeles, CA, USA
{binliu, hsunweih, dacheng, ramesh, sandeep}@usc.edu
ABSTRACT
Networked systems have benefited from unprecedented
growth in hardware capabilities, but, as we move closer
to the end of the Moore’s law era, future networked sys-
tems are likely to be more constrained by hardware ca-
pabilities than they have been in the past. We take the
position that the networking community should, in re-
sponse to this development, proactively and systemat-
ically develop networking roadmaps, which attempt to
predict how trends in hardware capabilities will impact
networked systems. In this paper, we discuss a possible
methodology for developing networking roadmaps, and
present two case studies that illustrate the methodology
and reveal how increasing hardware unreliability can af-
fect the performance of routing and transport protocols.
Categories and Subject Descriptors
C.2.2 [Computer-Communication Networks]: Network
Protocols; B.0 [Hardware]: General
General Terms
Design, Documentation
1. INTRODUCTION
Networked systems have benefited from unprecedented
growth in hardware capabilities. High speed switching
fabrics, data centers, networked sensing, and wireless
and mobile computing (to name a few) would not have
been possible without improvements in the design of un-
derlying circuits and devices, in terms of speed, reliabil-
ity, power, cost, etc. Because the evolution of networked
systems is strongly determined by hardware advances, it
is instructive to examine projected trends in hardware.

(This material is based upon work supported by the National Science
Foundation under Grant No. CNS-1117049. Any opinions, findings, and
conclusions or recommendations expressed in this material are those of
the author(s) and do not necessarily reflect the views of the National
Science Foundation.)
The hardware community invests heavily in the de-
velopment of its roadmaps, of which the ITRS roadmap
for chips [4] is the best known (Section 2). These semi-
conductor roadmaps project future directions for hard-
ware (chips and systems) in terms of a wide range of
important metrics, particularly computational and stor-
age capacities, performance, power, cost, yield, relia-
bility (lifetime), and resilience (to internal and external
noise).
Recent versions of semiconductor roadmaps show that
technology improvements are generally slowing down.
In particular, improvements are continuing in some di-
mensions (e.g., cost-per-transistor), slowing in others
(e.g., speed), and regressing in others (e.g., power, re-
liability, and resilience). This slowdown will continue
as we move closer to the end of the Moore’s law era and be-
yond. Due to this change in technology trends, we ex-
pect future hardware capabilities to more strongly con-
strain the development of networking software than the
networking community has been accustomed to, since
application requirements will start to exceed the capa-
bilities of the underlying hardware systems.
Given these trends, we take the position that the net-
working community should devote some of its resources
to systematically develop a roadmap for networked sys-
tems. Such a roadmap would (Section 3): a) project
how future application demands or other considerations
would drive system requirements in one or more dimen-
sions (e.g., data center applications driving requirements
for switch traffic speeds, server processing capabilities,
the degree of parallelism required, the availability of
servers and networking components, or overall system
power); b) match these requirements with hardware road-
maps to understand what constraints future hardware
will impose on networked systems.
These projected constraints will inform research ef-
forts to develop alternative hardware and software tech-
niques to meet application requirements when possible.
For example, if hardware roadmaps project increasing
memory unreliability, the corresponding networking road-
map may be able to determine which networked systems
are affected, when, to what extent, and at what levels
of unreliability (Section 4). Using this information, re-
searchers can examine which of many failure masking
techniques (e.g., via redundancy in space, through cod-
ing, or in time, through retransmissions) would be nec-
essary and when. This examination would also be able
to estimate the storage, network or computation cost of
these techniques.
While the networking community has been reactive
to developments in hardware capabilities (Section 5), a
networking roadmap requires a more proactive and sys-
tematic look at the constraints or “cliffs” we are likely
to face under different projections of the future of hard-
ware components. A complete research agenda for a
networking roadmap should answer the following ques-
tions: What is a networking roadmap? How can one sys-
tematically develop a networking roadmap? How (if at all
possible) can one build systems that evolve according
to roadmap projections? In this paper, we take a first
step towards addressing some of these questions in or-
der to raise community awareness of the need to develop
networking roadmaps.
2. SEMICONDUCTOR ROADMAPS
In the early years of integrated circuits, Gordon Moore
captured [11] the trends in, and projected, the rate of
growth of the number of transistors in a chip and the
corresponding decrease in price per transistor. Because
these have proven to have predictive power, projections
of future trends (or roadmaps) have since been a critical
activity in the semiconductor sector.
ITRS Roadmaps. By now, the process of preparing
roadmaps has become formalized and these roadmaps
have become extensive. For example, the International
Technology Roadmap for Semiconductors (ITRS) [4]
is sponsored by the largest industry organizations in five
leading chip manufacturing regions in the world and is
compiled with significant inputs from industry and aca-
demic experts. This roadmap covers the entire range
of activities in this sector, including research in mate-
rials and devices; all aspects of manufacturing, includ-
ing lithography, metrology, chip assembly and packag-
ing, test equipment, factory integration, and environment,
safety, and health; all types of chips and components,
including radio frequency, analog, mixed-signal, digi-
tal, and micro-electro-mechanical; all aspects of tools
and methods, including modeling and simulation, de-
sign, and yield enhancement; as well as the primary
system drivers, i.e., new applications that challenge the
semiconductor industry and fuel its growth. Each chap-
ter of the ITRS roadmap is an extensive catalog of the
trends and projections of virtually every important factor
that may affect some aspect of the semiconductor sector.
In addition to serving the semiconductor industry, ITRS
also projects, for major users of chips (computing, con-
sumer electronics, health and wellness, and network-
ing), important metrics governing the next generation
hardware chips and systems, particularly computational
and storage capacities, performance, power, cost, relia-
bility (lifetime), and resilience (to external and internal
noise).
Roadmap Example: Static RAMs (SRAMs). SRAMs
are central to many aspects of networking, since they
may be used to buffer packets or parts thereof, or store
control information such as routing tables.
The ITRS roadmap for SRAMs is extensive, so we fo-
cus on one part: consumer-relevant parameters for SRAMs.
The roadmap covers SRAMs for many types of appli-
cations, such as portable devices, game consoles, and
computing; for example, within computing, it covers
SRAMs for cost-performance (CP) microprocessors (for
desktops), and high-performance (HP) microprocessors
(for servers). For both these types of microprocessors,
the roadmap projects that the amount of SRAM available
on a constant die area (140 mm^2 for CP, 160 mm^2 for
HP) will double with each successive technology gen-
eration. (Interestingly, the 2011 roadmap does not pre-
dict Moore’s law, i.e., that successive technology gener-
ations will become available in a particular time dura-
tion, say, every 18 months.)
In addition, the roadmap also provides projections for
many other SRAM characteristics of interest to chip de-
signers, computer architects, and those who use proces-
sors as building blocks of their hardware or hardware-
software system. Tables 1 and 2 summarize the projec-
tions for power supply voltage (Vdd), power consump-
tion, read and write delays, and read and write failure
rates for each generation of CMOS technology. (The to-
tal power is estimated using SRAM size, access time
(read/write time), and the per-cell static and dynamic
power.) Note the increasing SRAM read-write failure
rates and increasing power requirements: with increas-
ing technology density, SRAM needs more power due to
higher leakage, and becomes more vulnerable to manu-
facturing process variations.
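To make the total-power estimate concrete, the following is our reconstruction of the accounting (the roadmap gives only the inputs, so the per-cell interpretation of the static and dynamic figures, and the access frequency f = 1/t_access, are assumptions on our part):

```latex
P_{\mathrm{total}} \;\approx\; N_{\mathrm{cells}}\left(P_{\mathrm{static}} + c_{\mathrm{dyn}}\, f\right),
\qquad f = \frac{1}{t_{\mathrm{access}}}.
```

At 65 nm, with N_cells = 4 MB × 8 ≈ 3.36×10^7, P_static = 3×10^-4 mW, c_dyn = 6×10^-7 mW/MHz, and f = 1/1.5 ns ≈ 667 MHz, this gives P_total ≈ 23.5 W, within a few percent of the 22.4 W in Table 1, so it appears to approximate the roadmap's accounting.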
Table 1 assumes existing circuit technology and stan-
dard testing approaches. Circuit and architecture inno-
vations can decrease power consumption. New testing
approaches will eliminate a large proportion of chips
with SRAMs that are failure-prone due to manufactur-
ing variations and reduce the failure rates for chips sold
to customers, at the expense of lower yields and higher
costs. To combat this, other techniques (e.g., fault mask-
ing in software) may be needed to control costs.
Table 1—SRAM Roadmap for Consumer-relevant Parameters [3]

Technology (nm)          65     45     35     25     18     13
Size (MB)                4      8      16     32     64     128
Vdd (V)                  1/1.1  1      0.9/1  0.7    0.7    0.7
Static power (mW)        3E-04  5E-04  1E-03  2E-03  3E-03  5E-03
Dynamic power (mW/MHz)   6E-07  5E-07  4E-07  4E-07  3E-07  2E-07
Write/Read time (ns)     1.5    1.2    0.8    0.5    0.3    0.3
Total power (W)          22.4   58.67  200    716.8  2048   5803
Off-roadmap Excursions and Near-threshold Computing (NTC). ITRS roadmaps play an important role in
making many crucial decisions, e.g., developing speci-
fications for the next generation computing systems and
making decisions about R&D investments. Their in-
fluence is so widespread that occasionally some have
wondered whether these roadmaps sometimes become a
self-fulfilling prophecy and limit innovation! This is not
typically true, since the exponential improvements in in-
tegrated circuits over five decades have been enabled by
many disruptive changes introduced by engineers and sci-
entists pursuing off-roadmap excursions. The roadmaps
are continually updated as a result of these excursions.
One such recent off-roadmap excursion is an idea called
near-threshold-voltage computing (NTC [8]), which is
motivated by the increased total power consumption of
SRAMs (Table 1) and logic circuits despite lower power
supply voltage (Vdd) in future technology generations.
NTC posits that many components can be operated at
supply voltages well below the 2011 projections, result-
ing in potentially significant power savings.
3. NETWORKING ROADMAPS
With the projected slowdowns in hardware technol-
ogy growth, we take the position that the networking
community should develop systematic networking road-
maps to understand how future generations of hardware
will constrain networked systems. A networking road-
map attempts to project the properties (e.g., performance,
reliability, availability) of important networking and dis-
tributed software subsystems (e.g., TCP, routing, dis-
tributed storage) using projections of hardware obtained
from semiconductor roadmaps. We envision an analy-
sis methodology to develop a networking roadmap that
takes the following generic form:

1. Projecting the demands of networked applications to
quantify desired critical properties of networked sys-
tems;

2. Deriving the hardware component(s) that may present
a barrier to achieving one or more of these critical prop-
erties, and using the hardware roadmaps to understand
how the properties of these components are likely to
evolve over time;

3. Roadmapping the application behavior on the derived
hardware systems in order to assess the end-to-end im-
pact of hardware on applications.

Table 2—SRAM failure rates [2]

Technology (nm)      65   45     32     22     16     12
Read failure rate    -    3E-07  2E-04  1E-02  6E-02  2E-01
Write failure rate   -    3E-03  1E-02  4E-02  1E-01  2E-01
This methodology is inspired by the ITRS roadmap method-
ology, but differs in some respects, as discussed below.
We point out three important aspects of this analy-
sis methodology. First, many of the steps necessarily
involve the judgment of experts in the field; for example,
quantifying robustness, or determining the hardware
components that present a barrier to robustness, requires
judgment and may not yield unique answers. In this
case, the steps above might be repeated for different val-
ues of these “parameters”, or by selecting best-case and
worst-case parameter sets. Second, roadmapping is nec-
essarily approximate given the margin of error in the
hardware roadmaps as well as the errors in judgment,
so this analysis should focus on understanding qualita-
tive trends and when inflection points are likely to set
in; this is discussed in detail below. Third, we expect
this process to be iterative: when new networking ap-
plications (or application classes) emerge or hardware
roadmaps change, this analysis methodology should be
re-executed.
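To make this iterative loop concrete, the following Python skeleton is our own schematic rendering of the three steps, not a tool the paper provides; all type and function names are illustrative:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class CriticalProperties:
    """Step 1 output: projected requirements for one class of network."""
    network_class: str            # e.g., "data center"
    year: int                     # the time point being projected
    targets: Dict[str, float]     # e.g., {"availability": 0.99999}

@dataclass
class HardwareConfig:
    """Step 2 output: one canonical hardware configuration."""
    components: Dict[str, str]    # e.g., {"buffer": "SRAM", "link": "400G"}
    properties: Dict[str, float]  # achieved values, from hardware roadmaps

def derive_configs(req: CriticalProperties,
                   hw_roadmap: Callable[[int], List[HardwareConfig]]
                   ) -> List[HardwareConfig]:
    """Step 2: keep configurations that (approximately) meet every target.
    (A real design tool would also handle properties where lower is
    better, cost-neutrality, and infeasibility/relaxation loops.)"""
    return [cfg for cfg in hw_roadmap(req.year)
            if all(cfg.properties.get(name, 0.0) >= target
                   for name, target in req.targets.items())]

def build_roadmap(reqs: List[CriticalProperties],
                  hw_roadmap: Callable[[int], List[HardwareConfig]],
                  evaluate: Callable[[HardwareConfig], Dict[str, float]]
                  ) -> Dict[int, List[Dict[str, float]]]:
    """Step 3: evaluate application behavior (by model or simulation) on
    each feasible configuration, per time point; the result is the roadmap.
    Re-run whenever hardware roadmaps or application classes change."""
    return {req.year: [evaluate(cfg) for cfg in derive_configs(req, hw_roadmap)]
            for req in reqs}
```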
Projecting Requirements. The first step in our analysis
methodology is to project application needs into require-
ments for the underlying platform or system that would
be used to realize these applications.
There are two challenges in doing this. First, net-
working is a broad and rapidly evolving field, and the
requirements for two different networking applications
can vary significantly in one or more dimensions, e.g.,
data-parallel computations in data centers vs. content
delivery in delay-tolerant networks. Thus, any attempt
to derive a generic set of networking system require-
ments will likely degenerate into a least common de-
nominator specification that may not result in a useful
networking roadmap. For this reason, we recommend
defining separate roadmaps for different classes of net-
works (data centers, the Internet, static wireless meshes,
mobile networks, delay-tolerant networks). We discuss
below how to distinguish different classes of networks.
The second challenge is to find the right granularity
at which to specify system requirements. At one end,
a designer may specify a detailed design that describes,
for example, for a projected set of data center applica-
tions, the number of nodes, the precise interconnect, the
switching capabilities and link speeds at different tiers,
and server processing and storage capabilities. At the
other end of the spectrum, one may characterize system
requirements using estimates for a single property (e.g.,
bisection bandwidth, or end-to-end latency).
In general, more detailed specifications can increase
the predictive power of roadmaps. However, such spec-
ifications are harder to predict correctly, which can re-
sult either in inaccurate or infeasible roadmaps. On the
other hand, specifying a single requirement (e.g., bisec-
tion bandwidth) leaves other potentially important re-
quirements (e.g., end-to-end latency) unspecified, and
can also result in optimistic roadmaps.
We now discuss two principles that address this ten-
sion. First, to construct a roadmap, a designer specifies
requirements by quantifying a small number of desired
system properties. For example, one might believe that
future networked systems may need 5-nines availabil-
ity, or that (wired) link speeds of 500 Gbps will become
common in a few years. As discussed above, these pre-
dictions should be tailored to specific kinds of network
systems (data centers in our example). The principle
guiding the choice of system properties for a given class
of networked system is that the designer should select
critical properties, mainly those that represent funda-
mental constraints for applications.
Different classes of networked systems are distinguished
by different sets of requirements: for example, to a first
approximation, availability and latency are primary ca-
pabilities in data-center systems, robustness and reach-
ability in Internet-scale systems, energy in wireless sen-
sor networks, or cellular bandwidth usage and energy
in mobile computing systems. These different classes
also have a common requirement (the scale of the net-
work), but may differ by an order of magnitude in that
dimension (e.g., data centers vs. Internet vs. networked
sensors). Hence, our prescription for deciding whether a
network system qualifies for a separate roadmap is that
it should have different sets of critical properties from
other networked systems, or, if its set of critical proper-
ties matches that of another, should differ in one or more
critical properties by an order of magnitude.
Second, since projection is inherently erroneous, con-
clusions drawn from network roadmaps should be qual-
itative. That is, in using these projections, the designer
should look for trends and changes in trends (inflec-
tion points). To take a simple example, suppose that
a hardware roadmap predicts increasing unreliability of
a particular hardware component as a function of semi-
conductor manufacturing technology. Using our road-
map methodology, the designer may find that TCP’s per-
formance degrades slowly until, at a certain technol-
ogy feature size, the protocol is practically unusable.
Our principle suggests that the designer should pay lit-
tle attention to the slope of the degradation (a quanti-
tative concern) but focus on the fact that, at some in-
flection point, the hardware component becomes unre-
liable enough for the application. Before these inflec-
tion points hit, the community should be prepared with
mitigation strategies for extending the usability of the
networking subsystem (TCP in our example).
Deriving Hardware Components. Given these criti-
cal properties, the next step in the process is to gen-
erate one or more canonical hardware configurations.
A canonical hardware configuration instantiates a net-
worked subsystem with the appropriate hardware com-
ponents (nodes/devices, memories, storage systems, links,
controllers etc.) such that the resulting system has (ap-
proximately) the desired critical properties.
There may be more than one canonical hardware con-
figuration for a given set of critical properties. This is
because, by definition, the critical properties only spec-
ify a subset of the properties of the system. For exam-
ple, if power is not one of the critical properties, differ-
ent canonical configurations may have different power
requirements. Similarly, different canonical configura-
tions may have different costs (hardware roadmaps also
project component costs); some may be cost-neutral
(i.e., have the same inflation-adjusted cost as a similar
component today), while others may not.
Finally, some sets of critical properties may be in-
feasible in that there may not exist a canonical hard-
ware configuration, or all possible canonical configu-
rations may be prohibitively expensive. In this case,
the designer: (a) has learned about the infeasibility of
a hardware configuration, and (b) can iteratively explore
relaxations of one or more critical properties to deter-
mine feasible canonical configurations. This exercise
will help identify that critical property whose relaxation
can ensure feasible hardware designs, and can help the
designer explore whether this relaxation can be compen-
sated for in software.
Ideally, generating a canonical hardware configura-
tion should be done by a design tool. Designing such
a tool itself is a major intellectual challenge. Hence,
we expect this process to be initially manual, involving
a collaboration between network system designers and
hardware architects, until enough experience has been
gained to devise a design tool.
Roadmapping. The final step in the process is roadmap-
ping: understanding how application behavior evolves
over time, and when inflection points may develop in
some aspect of application performance. Mechanisti-
cally, roadmapping proceeds as follows. A designer first
selects a class of networked systems, and, within that
class, a software “application” whose behavior is to be
studied. We use the term application loosely: from a
hardware perspective, any software subsystem (e.g., re-
liable transport protocols, consensus sub-systems, key-
value stores, etc.) running on top of hardware would
qualify. For this choice of application and networked
system, the designer projects critical properties for dif-
ferent points in time (e.g., one set for 2015, another for
2018). For each set, she generates, using the design tool,
one or more canonical hardware configurations.
Finally, she evaluates the application behavior on the
canonical hardware configurations to understand how
application functionality and performance are affected
by different hardware configurations. This step is nec-
essary because, although a canonical hardware configu-
ration may satisfy critical properties, it may be that spe-
cific choices of hardware components may affect end-
to-end performance in unforeseen ways. This evalua-
tion step “closes the loop”, and helps us approximately
quantify application behavior as a function of time; this
output is the roadmap for the given application. The
roadmap can then be used to identify trends and inflec-
tion points.
Roadmapping can be done using mathematical
modeling, simulation, or some combination thereof.
Modeling and simulation using network simulators can
provide coarse roadmapping, while a hybrid simulator
that integrates circuit simulators with network simula-
tors may provide more precise roadmaps. Developing
these simulation and modeling techniques is the research
challenge in roadmapping.
Once a roadmap has been developed, and inflection
points have been identified, it will be necessary to ex-
plore mitigation strategies. Such strategies explicitly
counter the adverse effects of projected hardware trends
by performing appropriate trade-offs: increase reliabil-
ity in hardware or software by fault-masking or replica-
tion, hide latency degradations by caching or prefetch-
ing, and so forth.
4. CASE STUDIES
To illustrate some of the methodology for, and the
benefits of, developing networking roadmaps, we dis-
cuss two preliminary case studies that focus on one ma-
jor limitation expected of all future technologies: de-
creasing levels of resilience to internal and external noise.
Case Study 1: Impact of SRAM Unreliability on Transport Protocols. As Table 2 shows, SRAM failure rates
are projected to increase with CMOS technology scal-
ing. SRAM failures result in erroneous values read from
or written into memory cells. A primary cause of SRAM
failures is the mismatch in the strength of transistors in
individual memory cells [12]. CMOS technology scal-
ing aggravates manufacturing process variations, and these
process variations intensify the mismatch between the
fabricated transistors. Transistor mismatches can be de-
tected by testing after fabrication; as process variations
increase, lower yields may result (i.e., chips may be
erroneous at manufacture), increasing overall memory
costs. A secondary cause of failures, which we ignore,
is bit-flips during memory operations resulting from
strikes by alpha particles, cosmic rays, and so on [13].
The other trend from the roadmap (Table 1) is that
SRAM power usage is predicted to increase dramati-
cally. To counter this, some researchers have proposed
to use lower supply voltages from the nominal 1.2V down
to “near threshold” voltages (400-500mV) to operate elec-
tronic components; these voltages represent the limits
of operation of these components. This off-roadmap
excursion on Near-threshold Computing (NTC) [7, 8]
holds the promise of factors of 2 or 3 reduction in power;
while it seems a promising technique for tackling, say,
datacenter energy consumption, NTC can also increase
bit failure rates (BFR) for SRAMs (Figure 1(a)).
Qualitatively, these trends spell trouble for compo-
nents such as switches and routers which use SRAMs,
either for packet buffers (less frequently, since SRAM
is expensive) or for packet headers (in on-chip mem-
ory). To understand more precisely how NTC affects
networked systems, consider the following roadmapping
methodology, modeled on the discussion in Section 3.
Network Class: Data center
Projection: Critical properties are bisection bandwidth and
end-to-end packet latency
Derivation: A chain topology of routers with the sender at
one end and the receiver at the other; two canonical hardware
configurations, one where whole packets are buffered
in SRAM (Whole-Packet), another with only headers in SRAM
(Headers-Only)
Roadmapping: Evaluate, analytically and in simulation, end-to-end
packet drop rate and TCP flow completion times
We have conducted a complete road-mapping exer-
cise by building an analytical model of end-to-end packet
loss rates in this chain topology, and validating it with
ns-2 simulations. Our model takes into account error-
correcting capabilities of memories (most SRAMs use
ECC), as well as IP header checksums, MAC-layer CRCs,
and the TCP checksum. We omit the details of the an-
alytical model for brevity, but the results are intriguing:
as Figure 1(b) shows, end-to-end packet delivery proba-
bilities show an inflection point at about 0.94V with the
Whole-Packet configuration and 0.88V with the Header-
Only configuration. These voltages are quite far from
the CMOS threshold voltage of 0.4V. Well above these
voltages, TCP flow completion times (Figure 1(c)) show
an inflection (at 1V and 0.94V, respectively). These pre-
liminary results suggest that NTC, without additional
software changes, is unlikely to reduce data center power
consumption without significantly affecting performance.
We emphasize that our conclusions do not indicate that
NTC is inherently a bad idea; for example, DRAM reli-
ability is relatively unaffected by NTC, so systems that
use DRAMs can exploit the full potential of NTC.
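As a rough illustration of the kind of model involved, the following Python sketch is our own, not the authors' analytical model (which the paper omits); the SRAM word size, single-error-correcting ECC, packet/header sizes, and failure rates are all illustrative assumptions:

```python
# Hedged sketch: end-to-end packet delivery over a chain of routers whose
# SRAM buffers corrupt each bit independently with probability p. Assumes
# each buffered word carries single-error-correcting ECC, so a word
# survives iff it has at most one bit error; a packet is dropped at a hop
# if any buffered word is uncorrectable (checksums/CRCs then catch the
# corruption and the packet is discarded rather than delivered corrupted).
def word_ok(p: float, word_bits: int = 64) -> float:
    """P(a buffered word is correct or correctable): 0 or 1 bit errors."""
    return (1 - p) ** word_bits + word_bits * p * (1 - p) ** (word_bits - 1)

def hop_ok(p: float, buffered_bits: int, word_bits: int = 64) -> float:
    """P(packet survives one hop) = every buffered word is correctable."""
    return word_ok(p, word_bits) ** (buffered_bits // word_bits)

def end_to_end(p: float, hops: int, buffered_bits: int) -> float:
    """P(packet traverses the whole chain intact)."""
    return hop_ok(p, buffered_bits) ** hops

# Whole-Packet buffers the full packet (say 1500 B); Headers-Only buffers
# just the header (say 40 B). Both sizes are illustrative, not the paper's.
for p in (1e-8, 1e-6, 1e-4):
    whole = end_to_end(p, hops=10, buffered_bits=1500 * 8)
    hdr = end_to_end(p, hops=10, buffered_bits=40 * 8)
    print(f"p={p:.0e}: whole-packet {whole:.4f}, headers-only {hdr:.4f}")
```

The qualitative behavior matches the trend the paper describes: delivery probability stays near 1 until the per-bit failure rate crosses a threshold set by the number of buffered bits, then collapses quickly, and the Headers-Only configuration tolerates a higher failure rate than Whole-Packet.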
Figure 1—Effects of SRAM Unreliability on a Transport Protocol. [Panels: (a) SRAM bit-failure rate (BFR) vs. SRAM operation voltage; (b) packet delivery probabilities, model and simulation, for 5- and 10-hop chains under the Whole-Packet and Header-Only configurations; (c) TCP file transfer time (seconds) and average CWND (packets) under both configurations.]

Case Study 2: Low-voltage TCAMs. TCAMs are be-
coming indispensable components of networked systems
and will likely see widespread deployment in routers
and switches as OpenFlow-style programmable network-
ing becomes common. TCAMs enable prioritized rule
processing based on packet header contents. A TCAM
array consists of multiple rows of TCAM cells, where
each TCAM cell is, in effect, constructed from two SRAM
cells. However, TCAM usage is qualitatively different
from SRAM usage: TCAMs are written infrequently
(whenever the rule base is updated) and are never read.
(TCAM’s match operations, such as packet header match-
ing, are performed using circuitry that is far less likely to
disturb the charge stored in the TCAM circuit than the cir-
cuitry used for read operations in SRAM.) Thus, we had
anticipated that TCAMs would be far less susceptible to
low-voltage operation than SRAMs.
Interestingly, we discovered that TCAMs can be un-
reliable at low voltages. TCAM cells have three states:
a 0 ternary symbol is stored in the two SRAM cells as
10, a 1 symbol as 01, and a don’t-care symbol as 00
(hence the name ternary CAM). Using a circuit simula-
tor called SPECTRE [1], with industry-supplied models
of circuits and process variations, we found that writing
a 0 or a 1 ternary value can cause the 1-valued equiv-
alent SRAM cell to flip, with finite, but low, probabil-
ity. For example, at 0.9V, the probability of an incorrect
cell is 5.8×10^-3. However, for a TCAM, it suffices
for one TCAM cell to be incorrect in order to (possi-
bly) violate the semantics of rule matching; consider-
ing all possible combinations of packet header patterns,
rule patterns, and the priority ordering of the rules stored
in TCAM, the combinatorics resulting from rule match-
ing semantics work against TCAMs. Using probabilistic
models, we were able to derive the likelihood of at least
one cell being incorrect in a TCAM array where every
cell was written exactly once, as a function of supply
voltage (Figure 2). These probabilities, which represent
the asymptotic probability that some packet will be in-
correctly matched, are somewhat pessimistic, with error
probabilities under NTC approaching 1 even at 1V for
fairly small TCAMs.
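A simplified version of this calculation, in the Python sketch below, is our own: the paper's probabilistic model is more detailed (it accounts for header patterns, rule patterns, and rule priorities), while this captures only the probability that any written cell is wrong, assuming independent per-cell write errors:

```python
# Hedged sketch: probability that a TCAM array contains at least one
# incorrectly written cell, assuming independent per-cell write errors
# with probability q at a given supply voltage. This is only the
# "at least one bad cell" term; the paper's full model also derives
# whether a bad cell actually changes the matching outcome.
def tcam_any_cell_wrong(q: float, rules: int, cells_per_rule: int) -> float:
    n_cells = rules * cells_per_rule
    return 1.0 - (1.0 - q) ** n_cells

# At 0.9V the paper reports q = 5.8e-3; for a 128-rule array with 104
# non-wildcard (i.e., actually written 0/1) cells per rule:
print(tcam_any_cell_wrong(5.8e-3, rules=128, cells_per_rule=104))  # ~1.0
```

With 128 × 104 ≈ 13,000 written cells, even a per-cell error probability of a few tenths of a percent drives this probability to essentially 1, which is consistent with the pessimistic curves the paper reports for fairly small TCAMs.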
Figure 2—Error Probabilities for a 128-rule TCAM. [Match failure probability vs. operation voltage (V), for rules with 8, 16, 32, 64, and 104 non-wildcard cells.]

The discussion above illustrates the methodology by
which a semiconductor roadmap can be derived for TCAMs.
We have not yet derived the corresponding networking
roadmap, which would require using projections to derive
a canonical topology and simulating various rulesets
to determine the effects of incorrect matches on the se-
mantics of the routing system. When we do, we may
find that, because of redundancy, an incorrect match at
one node in the topology may be rectified at another
node so that the correct action on the packet is even-
tually taken, albeit at the cost of additional resources.
We have left this to future work.
Implications. We emphasize that these results are pre-
liminary, and merely illustrate the kinds of conclusions
one can draw from a systematic attempt to match hard-
ware roadmaps with networked subsystems’ end-to-end
performance. Many of the inflection points illustrated
above can be mitigated using a variety of (not mutually
exclusive) techniques, each of which has different cost
implications: more extensive chip testing, new hardware
approaches to improve resilience, node-level software
fault masking, end-to-end error detection and correc-
tion, and so forth. More generally, systematic roadmap-
ping can open up new research directions to address
hardware constraints using novel software techniques.
5. RELATED WORK
The networking community has had a good history
of recognizing hardware trends and pursuing research
agendas to address limitations imposed by hardware. Ex-
amples include developing fast packet forwarding algo-
rithms to deal with memory latency [14] or re-arranging
packet processing rules to match power management ca-
pabilities of TCAMs [10], designing novel data center
interconnects to achieve high bisection bandwidth while
leveraging switch commoditization [5], devising soft-
ware radios to leverage processing power improvements [6],
and exploring data aggregation mechanisms to reduce
the energy cost of communication in networked sens-
ing [9]. Our paper makes the case for a concerted effort
to use available semiconductor roadmaps to proactively
develop networking roadmaps.
The semiconductor sector has been actively roadmap-
ping future technologies periodically for some time [4].
Off-roadmap excursions, like NTC [8], illustrate attempts
to balance different dimensions of future hardware trends
(power and reliability). Our proposed methodology for
networking roadmaps is inspired by the methodologies
adopted by the semiconductor sector, but differs cru-
cially in its reliance on qualitative assessments.
6. CONCLUSIONS & DISCUSSIONS
As we approach the end of an era of Moore’s law fu-
eled hardware improvements, the networking commu-
nity needs to develop systematic roadmaps for networked
systems modeled after, and informed by, semiconduc-
tor roadmaps. Such an exercise can better direct re-
search resources to relevant networking problems iden-
tified by roadmap inflection points, at which innovative
solutions may be needed to address constraints imposed
by future generations of hardware. Our case studies on
the reliability of memory technology illustrate both the
methodology of developing roadmaps and the kinds of
insights that may be obtained from roadmapping. In the
future, we expect to work out details of the roadmap-
ping methodology, develop design tools for the deriva-
tion step, and modeling methods for roadmapping.
7. REFERENCES
[1] Cadence Virtuoso Spectre circuit simulator.
http://www.cadence.com/products/cic/spectre_circuit/pages/default.aspx.
[2] International technology roadmap for semiconductors 2011:
Design. http://www.itrs.net/Links/2011ITRS/2011Chapters/2011Design.pdf.
[3] International technology roadmap for semiconductors 2011:
System drivers. http://www.itrs.net/Links/2011ITRS/2011Chapters/2011SysDrivers.pdf.
[4] International technology roadmap for semiconductors. 2011.
[5] M. Al-Fares, A. Loukissas, and A. Vahdat. A scalable,
commodity data center network architecture. In Proc. ACM
SIGCOMM, 2008.
[6] V. Bose. Design and implementation of software radios using a
general purpose processor. PhD thesis, MIT, 1999.
[7] R. Dreslinski, M. Wieckowski, D. Blaauw, D. Sylvester, and
T. Mudge. Near threshold computing: Overcoming performance
degradation from aggressive voltage scaling. In Proc. ISCA
Workshop on Energy-Efficient Design, 2009.
[8] R. Dreslinski, M. Wieckowski, D. Blaauw, D. Sylvester, and
T. Mudge. Near-threshold computing: Reclaiming Moore’s law
through energy efficient integrated circuits. Proceedings of the
IEEE, 98(2):253–266, 2010.
[9] C. Intanagonwiwat, R. Govindan, D. Estrin, J. Heidemann, and
F. Silva. Directed diffusion for wireless sensor networking.
IEEE/ACM Trans. Networking, 11(1):2–16, 2003.
[10] Y. Ma and S. Banerjee. A smart pre-classifier to reduce power
consumption of TCAMs for multi-dimensional packet
classification. In Proc. ACM SIGCOMM, 2012.
[11] G. Moore. Cramming more components onto integrated
circuits. Electronics, 38(8), 1965.
[12] S. Mukhopadhyay, H. Mahmoodi, and K. Roy. Modeling of
failure probability and statistical design of SRAM array for yield
enhancement in nanoscaled CMOS. IEEE Trans. Computer-Aided
Design of Integrated Circuits and Systems, 24(12):1859–1880,
2005.
[13] T. Semiconductor. Soft errors in electronic memory: a white
paper. 2004.
[14] V . Srinivasan, S. Suri, and G. Varghese. Packet classification
using tuple space search. In Proc. ACM SIGCOMM, 1999.