Making Effective Use of HTTP/2 Server Push in
Content Delivery Networks
Kyriakos Zarifis†*, Mark Holland*, Manish Jain*, Ethan Katz-Bassett†, Ramesh Govindan†
† University of Southern California
* Akamai Technologies
Technical Report 17-971, University of Southern California
Abstract
Server Push, one of the most promising features of HTTP/2, allows web servers
to speculatively send unsolicited web resources to clients. While the mechanism
for Push is well defined, policies for effectively utilizing push to maximize page
load performance are poorly understood. In this work we investigate the factors
that should be considered when implementing a Push policy and combine them
to propose an efficient Push policy geared for Content Delivery Networks. We test
this policy on a set of real web pages and evaluate the impact of Push on their
load times. Our results indicate that recommended Push configurations yield
performance benefits of up to 27% for the pages we tested and avoid negative
impact in all cases, whereas Push can have little or negative impact on page
load speeds if used overly aggressively. We analyze the reasons that can lead to
suboptimal Push performance and discuss ways to maximize its efficiency.
1 Introduction
Web page load times have repeatedly been associated with business revenue [23,
3]. They are particularly important to e-commerce organizations like Amazon,
which reportedly suffers a 1% decrease in sales for every additional 100ms delay in
page loads [13]. As such, content publishers often utilize CDNs to speed up page
delivery by caching their content geographically close to their customers.
In a typical page download, a client requests an HTML file from a server, parses
it, and requests from that and other servers more files that are needed to display
the page. HTML files usually reference inter-dependent CSS and JavaScript files [4,
14]. Unless specified otherwise by special HTML tags, processing JavaScript files
blocks HTML parsing because executing them can update the Document Object
Model (DOM), a logical representation of the page structure that is created as
the HTML is parsed. Since JavaScript can reference CSS files, their execution
is in turn blocked by fetching and processing CSS files. For that reason, CSS
and Javascript files are, generally, on the Critical Render Path (CRP) of a page,
since their download and processing time can determine when the first pixels are
rendered. The delivery and render time of media files and images also affects user
experience. Since the client cannot request any of those objects before it receives
and starts parsing the HTML, at least two round-trips between the client and
the server are required before they are delivered to the client.
Fig.1: Page download through a CDN without Push (left) and with Push (right)
A CDN optimizes web page downloads by utilizing a proxy server that is close
to the client. The client requests the HTML file from that nearby CDN proxy,
which typically forwards the request to the remote origin server and returns the
reply to the client. When the client now parses the HTML and sends subsequent
requests for embedded objects, the CDN proxy serves those objects from its
cache, without requiring more trips to the remote origin web server (Fig. 1).
HTTP/2, which is being adopted [21] as the successor to the almost 20-year-old
HTTP/1.1, comes with features that are expected to improve web page
delivery speed. HTTP/2 Push, one of the most promising new features, allows a
web server to send objects to a client without the client having requested them.
For example, a web server could send embedded objects for a page along with
the HTML file, avoiding extra round trips and increasing page load speed.
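To make the mechanism concrete, the following is a minimal sketch of server-side Push using the Python hyper-h2 library; the asset names and connection handling are illustrative assumptions, not the CDN's actual implementation.

```python
# Sketch: push /style.css alongside an HTML response using hyper-h2.
# Assumes `conn` is an h2.connection.H2Connection for an established
# server connection and `stream_id` is the client's request stream;
# the caller must still transmit conn.data_to_send() over the socket.

def respond_with_push(conn, stream_id, html_body):
    # Reserve a new server-initiated stream and announce the push with
    # a PUSH_PROMISE frame before the HTML response itself.
    push_id = conn.get_next_available_stream_id()
    conn.push_stream(
        stream_id=stream_id,
        promised_stream_id=push_id,
        request_headers=[
            (":method", "GET"),
            (":path", "/style.css"),          # hypothetical pushed asset
            (":scheme", "https"),
            (":authority", "www.example.com"),
        ],
    )
    # Send the pushed resource on the promised stream...
    conn.send_headers(push_id, [(":status", "200"),
                                ("content-type", "text/css")])
    conn.send_data(push_id, b"body { margin: 0 }", end_stream=True)
    # ...then the HTML on the original request stream.
    conn.send_headers(stream_id, [(":status", "200"),
                                  ("content-type", "text/html")])
    conn.send_data(stream_id, html_body, end_stream=True)
```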
While Push could be implemented at the origin server, the CDN proxy is in
an ideal position to push objects to the client: since it is usually much closer to
the client than the origin server is, and the client’s initial HTML request needs
to be forwarded to the origin server, there is usually a long network idle time
before the HTML is available to the CDN proxy to be sent to the client. The
proxy can utilize this idle time to push to the client objects that it will need after
parsing the HTML (Fig. 1). When a browser requests a resource that has already
been pushed to it, it can fetch it from local memory instead of sending a network request.
However, while Push is well defined as a mechanism, there are no known best
practices on how the feature should be used [6], nor on how to implement a push
policy, i.e., deciding which objects to push and when to push them. This work aims
at better understanding the performance of different push policies on real web
pages and developing an effective push policy for a CDN.
In this work, we explore the factors that can affect the impact of Push, and
design best practices for implementing Push on a CDN, utilizing the proximity
to clients. We then evaluate the impact of Push on a representative set of live
web pages. We show that if used aggressively, Push can have adverse effects on
performance, but deciding what to Push can yield improvements of up to 27%
on user-perceived load times. We then highlight the factors that can hinder the
impact of Push and discuss ways to tune them to maximize its efficiency.
2 Factors that can impact Push
This section describes the factors that affect Push policies, which include deciding
when to push, which objects can be pushed, which of those should be pushed
and in what order, and how much can be pushed given network characteristics.
Push can be used during idle network times to maximize network resource
utilization. This can happen: a) Between the moment a client requests a web page
and the moment it receives the HTML file (pre-HTML); b) While the browser
is in the process of fetching all the resources it needs to render the page as it
parses the HTML, but network activity is blocked on object parsing/execution
(post-HTML); and c) After the client has completely downloaded and rendered a
page, and before the user initiates the download of another page (post-OnLoad).
In this work we focus on pre-HTML push for CDNs, to optimize the delivery
of the currently accessed page. In CDNs, there is usually a long idle time between
when a client requests an HTML file and when the CDN can forward it to the
client (400ms in the median case), because the file often has to be retrieved from the origin
where it is generated dynamically. This idle time can be used to speculatively
push objects referenced by the HTML page that are replicated on the CDN.
What can we push? For a CDN proxy server to push an object, the object
must satisfy 3 conditions. First, the object must be served through the CDN.
Objects delivered through the CDN can be pushed through the same HTTP/2
channel even if they originate in different domains (e.g., either external domains
or origin subdomains) as long as the domain names are under the same certificate
[11]. However, cross-origin resources delivered outside the CDN cannot be pushed.
Second, the object must be cacheable at the CDN. Web developers control this by
setting the cache-control HTTP header or encoding object versions or timestamps
to a URL. Finally, the object needs to be available on the CDN. For large CDNs,
cache hit rates are very high, so this condition is easily satisfied. Challenge 1:
We cannot push every object embedded in any web page.
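Summarizing the three conditions, a proxy-side pushability check might look like the sketch below; the object fields are hypothetical, and real cache-busting detection is more involved than this simple URL pattern test.

```python
import re
from dataclasses import dataclass

@dataclass
class EmbeddedObject:
    """Hypothetical record for one embedded object, as the proxy sees it."""
    url: str
    served_by_cdn: bool   # delivered through the CDN (same certificate)
    cacheable: bool       # cache-control headers permit caching
    in_cdn_cache: bool    # currently present on this proxy

# Crude heuristic for version/timestamp query strings (cache busting).
CACHE_BUSTING = re.compile(r"[?&](v|ver|ts|timestamp)=", re.I)

def is_pushable(obj: EmbeddedObject) -> bool:
    """The three conditions from Section 2."""
    if not obj.served_by_cdn:                 # cross-origin, outside the CDN
        return False
    if not obj.cacheable or CACHE_BUSTING.search(obj.url):
        return False
    return obj.in_cdn_cache                   # cache miss: nothing to push
```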
What should we consider pushing? The CDN proxy should push objects
that the client will end up using. However, even the client will not know the
objects required to load a page until it receives and parses the HTML file. The
CDN proxy itself only knows, when it receives a client request, the URL of the
page that the client wants to access. The proxy must somehow determine the
objects needed to load the page. This mapping needs to be accurate, to avoid
sending objects that the client will end up discarding, which would introduce
unnecessary overhead and waste network resources, a concern especially important
for mobile clients with paid data plans or bandwidth quotas. Challenge 2: We
do not know in advance what objects the client will require to load the page.
How much should we push? Once we have determined the set of objects a
client will request once it parses the HTML (we call these the Push Candidates),
we should decide how many of those we should push. Pushing past the moment
the HTML is ready to be served could delay the Base-Page Time-To-First-Byte
(BPTTFB) on the client. This means that the Critical Render Path (CRP) would
start later, so all other metrics could be delayed. Challenge 3: Push must not
increase Base Page Time-To-First-Byte.
How should we prioritize pushing? Rendering a page involves fetching
and processing many objects, of different types, that are inter-dependent. The
critical path of loading a page is not known in advance, so it is not clear which of
those objects are more important for the browser to have first. Challenge 4: We
don’t know which types of objects the browser would benefit from having pushed.
Which objects do not need to be pushed? Pushing objects that are
already in the client’s browser cache would waste network resources, induce
unnecessary overhead on the browser, and may even worsen page load performance.
Challenge 5: We do not know what objects the browser has cached.
3 Designing efficient policies
In order to design an efficient Push policy, we want to measure the performance
impact of the factors described in the previous section. This section describes
how the challenge tied to each factor can be overcome.
Push Candidate list. The first step towards implementing a Push policy is
to identify the objects that the client will need, before it requests them. This can be
predicted by looking at historic data of downloads of a page. Real User Monitoring
(RUM) technologies are commonly used to monitor end-to-end user interactions
with websites, ensure that quality of service is achieved, and identify performance
bottlenecks. The Navigation Timing [15] and Resource Timing [18] specifications
expose detailed information of application layer events raised by browsers of real
page downloads. These datasets can provide valuable historical information of
page downloads at very large scale. Specifically, they can be analyzed to identify,
for any web page, the objects that are almost always requested by clients as they
load that page. Using that information, the RUM backend can provide an initial
set of Push Candidates, thereby addressing challenges 1 and 2.
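As a minimal sketch of this aggregation, assuming RUM samples arrive as lists of resource URLs per page load (the 95% threshold is an illustrative choice, not a production setting):

```python
from collections import Counter

def push_candidates(rum_samples, threshold=0.95):
    """rum_samples: one list of resource URLs (from Resource Timing)
    per observed load of a given page. Returns the objects requested
    in almost every load of that page."""
    counts = Counter()
    for resources in rum_samples:
        counts.update(set(resources))      # count each object once per load
    n = len(rum_samples)
    return {url for url, c in counts.items() if c / n >= threshold}
```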
Many pages, especially in e-commerce, are highly dynamic and change often.
Short-term changes occur, for example, due to objects like pictures of daily deals.
Objects that are part of the structure of a website, like CSS and Javascript files,
are expected to change less regularly. If RUM data collection happens at a global
scale, there are enough samples to run the Push Candidate list calculation often
enough to capture both long and short-term changes.
Push payload quota. We want to stop pushing and send the HTML to the
client the moment it is available on the proxy. A practical impediment to achieving
this is that a server cannot cancel pushed data which has been written to the
SSL socket. The HTML file can queue up behind this data, thereby increasing
BPTTFB. In order to avoid this, the proxy should know the payload that can be
pushed safely (the push quota). This can be estimated using historical data of
HTML response times from the origin servers. We use the following formula to
calculate how much can safely be pushed:
$$Q = \min(\mathrm{TotalPushable},\; Q_0), \quad \text{where} \quad Q_0 = \sum_{n=1}^{R} \mathrm{cwnd}(n) \quad \text{and} \quad R = \mathrm{BPTTFB} / \mathrm{RTT}_{\mathrm{proxy,client}}$$
That is, the payload we can push depends on how many proxy-to-client
RTTs fit in one proxy-to-origin RTT. This addresses challenge 3.
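A sketch of this computation, assuming the congestion window starts at 10 segments and doubles every round trip (slow start); a deployed proxy could substitute measured cwnd values:

```python
def push_quota(bpttfb_ms, rtt_ms, total_pushable_bytes,
               init_cwnd_segments=10, mss=1460):
    """Estimate Q = min(TotalPushable, Q0) from the formula above."""
    rounds = int(bpttfb_ms // rtt_ms)              # R = BPTTFB / RTT(proxy,client)
    q0 = sum(init_cwnd_segments * (2 ** n) * mss   # cwnd doubles each RTT
             for n in range(rounds))               # bytes sendable before the HTML
    return min(total_pushable_bytes, q0)

# e.g. 400ms of origin response time over a 20ms client RTT gives R = 20
# round trips, so the quota is capped by the total pushable payload:
print(push_quota(400, 20, 500_000))  # -> 500000
```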
Push prioritization. A simple, and generally good prioritization order is:
CSS, Javascript, Images. This is also the default prioritization implemented
by browsers for normally requested objects. For visual user experience metrics,
we want to prioritize objects that are on the CRP. Enforcing this default order
generally achieves that goal, but there are cases of CSS files that are not necessarily
on the CRP, and certainly cases of Javascript that do not block rendering,
so a more meticulous policy could identify and exclude those.
While not examined in this work, analyzing waterfalls and critical paths could
help inform further prioritization by strictly prioritizing objects on the CRP [16].
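A sketch of the default ordering; the MIME-type prefixes are illustrative:

```python
# Default push priority: CSS, then Javascript, then images, mirroring
# how browsers prioritize normally requested objects.
PRIORITY = {"text/css": 0, "application/javascript": 1, "image/": 2}

def priority_of(content_type):
    for prefix, rank in PRIORITY.items():
        if content_type.startswith(prefix):
            return rank
    return 3  # everything else last

def order_for_push(candidates):
    """candidates: iterable of (url, content_type) pairs."""
    return sorted(candidates, key=lambda c: priority_of(c[1]))
```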
Browser cache state. To avoid pushing objects from the Push Candidate
list that a client has available in its browser cache, the client can send an encoded
list of objects that it already has in its cache from this domain piggybacked on
the basepage request [5]. The CDN proxy can then refine the Push Candidates
list calculated by the RUM backend by removing from it any objects signaled as
cached on the browser. The remaining set is considered for pushing. This can
resolve challenge 5. However, modern browsers don’t support this capability yet,
so we have left an exploration of this factor to future work.
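For illustration, the refinement step reduces to a set difference, assuming the digest has been decoded into a set of URLs (real cache digests [5] are compact probabilistic encodings rather than plain sets):

```python
def refine_candidates(candidates, client_digest):
    """Drop Push Candidates the client reports as already cached.
    client_digest: URLs decoded from the digest piggybacked on the
    base-page request (hypothetical, since browsers lack support)."""
    return [url for url in candidates if url not in client_digest]
```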
4 Evaluation of Push policies
4.1 Experiment setup
In order to evaluate the efficacy of pushing, we test the design described above
on live web pages. We set up an Akamai CDN proxy and 2 clients in a lab in WA
and we configure the proxy server to serve 25 real websites via HTTP/2, with
and without Push configured.
Target web sites. Our 25 target web sites are among the most popular ones
in the CDN. The evaluation is limited to 25 targets for a few reasons: a) Setting
up HTTP/2 Push policies for a target involves considerable manual configuration
and b) Since we want to collect and analyze many samples, the time required
to issue all active measurements for each target is long: for all targets and
experiments, we issue around 3,000 downloads per day, which, including the
setup time for each download, amounts to around 23 hours per daily run; and c)
We wanted a manageable number of targets to be able to manually inspect each
page structure and understand how it impacts Push performance.
Clients, Proxy Server and Network. We use WebPageTest [24] which
automates page downloads using full browsers and exports various metrics. We
use a private instance of WebPageTest and two clients that split the workload.
Both clients are dedicated 64-bit machines with a 2.5GHz CPU and 16GB
memory running Windows 8. The CDN proxy is a server with specifications and
configurations identical to deployed proxy servers. We set the latency between
the client and the CDN proxy to 20ms, which is a realistic representative value
for the latency between real clients and their closest CDN proxy.
4.2 Dataset validation
To verify that our targets are representative of a large set of popular websites, we
compare the distributions of characteristics that can impact Push, as described in
the previous section, across our targets and the top-1000 Alexa pages: 1) object
“pushability” (whether an object is both cacheable and served from CDN), 2)
object types, 3) time to push. The first two factors are directly related to the
page structure. “Time to push” is not related to the page structure itself, but
it is related to the target, since part of it is the origin server’s think time (the
other part is the network latency between client and server). We omit cache status, which
is relevant to a specific download, not to the page structure.
For the first two metrics, we use web page downloads from HTTPArchive
(top 1K Alexa pages) [9] to get their distributions. We find that our targets are
distributionally consistent with Alexa sites, for payloads of object types (Fig. 2)
as well as BPTTFBs (Fig. 3). The distributions of number of objects per type
are also consistent (not shown here for space, but available at [10]).
Fig.2: Page structure comparison, Alexa top 1K (solid) vs. target pages (dashed)
Fig.3: BPTTFBs of Alexa top 1K (solid) vs. target pages (dashed)
4.3 Performance metrics
Several metrics have been suggested to measure page delivery speed and user-
perceived speed. These quantities are different: If network/browser activity ends
faster it does not necessarily mean that the user perceives a better browsing
experience, or the other way around [1]. We aim primarily at optimizing user-
perceived page load times and focus on the metrics that are most relevant to it:
BPTTFB. Base Page Time-To-First-Byte (BPTTFB) is the time taken to
receive the first byte of a base page at the browser. While not a direct user
experience indicator, it is a very relevant metric since it can determine other
metrics, because the client will not initiate further requests before parsing the
HTML. Certainly no visual content can render before this moment.
SpeedIndex. The Speed Index [20] captures the visual progress of the visible
page loading and computes an overall score for how quickly the content painted.
This is done by calculating how visually complete the above-the-fold content is
over time. Capturing visual progress is the most indicative way to describe user
experience, so we focus on this metric to present results in this work.
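Concretely, per the WebPageTest definition [20], Speed Index integrates visual incompleteness over the page load, so lower values are better:

$$\mathrm{SpeedIndex} = \int_{0}^{t_{\mathrm{end}}} \left(1 - \frac{VC(t)}{100}\right) dt$$

where $VC(t)$ is the percentage of above-the-fold content that is visually complete at time $t$.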
We have also experimented with other metrics [10]. These include FirstPaint
and Page Load Time (PLT). FirstPaint is the time the browser reported painting
the first pixel, making it a good estimator of user perception, but we focus on
SpeedIndex since it additionally captures visual progress. PLT is often used as
an indication of when all page-loading activity has stopped. It is a complex
metric that can be misleading to look at alone, since it is often delayed by
factors not relevant to user experience, such as non-rendering third-party
content (e.g., analytics scripts) or content well below the fold, so it does not
capture user experience faithfully.
In order to measure performance change, we are interested in how the dis-
tribution of SpeedIndex values over many runs changes when using Push. To
capture this change, we use the difference of its mean value with and without
Push over many runs, after ignoring the top 5% outliers (e.g. time-outs) [7].
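A minimal sketch of this comparison, assuming SpeedIndex samples in milliseconds for the no-Push and Push configurations:

```python
import numpy as np

def percent_improvement(baseline_ms, push_ms, trim=0.05):
    """Mean SpeedIndex change from Push after dropping the top 5% of
    samples (e.g. time-outs) from each distribution."""
    def trimmed_mean(xs):
        xs = np.sort(np.asarray(xs, dtype=float))
        keep = int(len(xs) * (1 - trim))     # discard the slowest 5%
        return xs[:keep].mean()
    base, push = trimmed_mean(baseline_ms), trimmed_mean(push_ms)
    return 100.0 * (base - push) / base      # positive: Push made the page faster
```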
Pushing an object is similar to having it in the browser cache, in the sense
that it is available at the client before it requests it. In the evaluation, we also
use the performance of loading pages with a warm cache as a point of reference,
since that would be the theoretical maximum benefit that Push could provide.
4.4 Results
We first examine characteristics of the target pages that are relevant to the
impact of Push. In Fig. 4, the first bar for each target shows the total payload of
objects in its Push Candidate list while the second bar shows how much time
there is available to push objects (10th percentile of BPTTFBs).
Why not push everything? The purple bars in Fig. 5 show the page load
performance change when we try to push all the objects in each target’s Push
Candidate list. For reference, the green bars show the respective change in
SpeedIndex when the browser cache is warmed up (repeat view of the page).
The purple bars highlight that pushing everything that we can is not an ideal
approach. In almost half of the pages, performance degrades because the server
socket is busy sending objects that the client cannot use until it receives and
parses the HTML. This delays sending the HTML, which delays the whole page
loading process. These are generally the pages that have a bad combination of
(large) pushable payload and (low) BPTTFB. Not all targets see damage, and
some see improvement: for those targets the pushable payload is small, so even if
we push all objects, the HTML is not delayed significantly.
Fig.4: Total pushable payload and BPTTFBs for each target
Fig.5: Impact of aggressive Push on SpeedIndex
The benefit of a push quota The yellow bars in Fig. 6 show the impact
of Push if we limit the pushed payload according to each target’s BPTTFB
(the idle network time before HTML arrival), using the push payload quota
formula described earlier. We use the 10th percentile of each target’s BPTTFB
to determine its push quota. This conservative approach ensures that
the HTML is not delayed by pushed objects. The damage is now avoided in all
cases and the benefit is increased, ranging from 3% to 27%.
Fig.6: Impact of Push on SpeedIndex for different scenarios
Achieving better Push performance Applying a Push quota avoids damage
and increases the benefit, but can we do better? The green bars in Fig. 6 show
the performance change when all cacheable objects are cached on the browser. If
we had time to push all the Push Candidates, this is the maximum theoretical
benefit Push could provide. In some cases even a fully warm cache does not provide
huge benefit (because the bottleneck is either on browser processing or due to
uncacheable objects), so the Push benefit is also limited by that.
There are two factors that constrain the potential of Push, leading to the
difference between the yellow and green bars in Fig. 6:
Unpushable objects: Objects served from 3rd-party domains and not through
the CDN cannot be pushed by the CDN proxy. The same holds for objects that
use cache-busting mechanisms (i.e. timestamps or file versions in the URLs). In
order to measure what the impact of Push would be if those objects were also
available to push from the CDN, we emulate this scenario by identifying them and
pre-warming them in the browser cache before issuing a measurement. Since this
is meant to emulate pushing those objects, their size counts towards the push quota.
The impact is illustrated by the red bars in Fig. 6. Some web pages see significant
additional performance boost (more than 10%), while others have little to no
performance change. By manually examining the objects that were pre-warmed,
we observe that for the pages that didn’t observe additional gains, those extra
“pushed” objects were non-render-blocking. These were mostly analytics scripts,
tags used for marketing optimization, and other objects that are not critical to
displaying a page, and are loaded after the onLoad event. Pushing such objects is
not expected to provide additional benefit to user-perceived load times. For pages
that observed additional boost, the respective objects included CSS, JavaScript
or fonts downloaded from 3rd party domains or included cache-busting strings
in their URLs. In two of those pages, Google fonts loading synchronously from
cross-origin servers blocked the first render. It is a best practice to load such
non-critical resources asynchronously, which removes them from the CRP by deferring
their request until after the onLoad event. Not doing so can impact performance,
and Push cannot break that bottleneck if the object is not served through the
CDN. In another example, the culprit was a version of jQuery referenced from a
3rd party domain not served by the CDN. In order to maximize the potential
of Push, render-blocking objects should be cached at the CDN and served by the
proxy. This means that cache-busting mechanisms should be avoided, and 3rd
party providers should collaborate with CDNs to ensure that all traffic critical
to rendering a page is served through the CDN.
Fig.7: Push impact for default and increased BPTTFB/Quota values
Fig.8: Push impact across all targets
Available time to push: The page structure analysis shows that the imposed
Quota is usually enough to push render blocking types (CSS and Javascript).
However, there are cases for which the combination of TTFB and total payload
of critical objects does not allow pushing all of them. Additionally, images, which
can also affect the SpeedIndex if they are above-the-fold, cannot all be pushed.
We specifically focus on targets for which the Quota was determined to be the
bottleneck. Fig. 7 compares the observed benefit of Push (in blue bars) to what
the benefit would be for those targets if the TTFB were larger. For this experiment,
we artificially increased the RTT between CDN proxy and origin by 400ms, and
the Quota was set accordingly. The red bars show that the benefit from Push
would be higher for similar pages whose origin server was farther from the CDN
proxy or whose HTML took longer to generate.
Fig. 8 summarizes the results, showing the impact of Push on the load
performance of the 25 target pages for 3 different scenarios, and how far off
they are from the warm-cache scenario. When pushing aggressively, half of the
pages see performance degradation. Applying a push quota avoids damage and
increases the benefit in all cases, providing a performance boost of 13.4% in
the median case, and up to 27%. Emulating the ability to push all theoretically
pushable objects while still applying the same quota increases the benefit for
most of the websites, by up to an additional 5.7%.
Fig.9: Estimated Pushability vs Push impact
Fig.10: Pushability for Alexa 1K sites
Generalizing conclusions to a larger dataset To understand how many
websites would benefit significantly from Push if they were CDN-hosted, we use
the following methodology. First, we use our target sites to develop a simple
binary classifier to predict whether a page is expected to observe significant
benefit from push, based on its structure. We define “significant” as at least 15%
improvement, since that is the median in the results. We then apply this classifier
to websites listed in the Alexa top-1000.
Our classifier uses four features: the fraction of payload of CSS, Javascript,
fonts, and images pushed. We derived this list of features based on a manual
inspection of the structure of sites that perform well and those that do not
perform well. We then use our target websites to train a linear classifier based
on these features. Interestingly, for the target websites that we have, the best-
performing linear classifier prioritizes these features in the order in which browsers
prioritize their processing. Thus, our linear classifier assigns the highest weight
to CSS, then to Javascript, and so forth. Fig. 9 depicts the performance of this
classifier visually: the x-axis is the value of the classifier and the y-axis is the
Push performance gain. Most sites that see a larger than 15% gain in our dataset
have classifier “scores” greater than 50.
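A sketch of the resulting score; the weights are illustrative placeholders that only respect the reported ordering (CSS highest, then Javascript, fonts, and images), since the trained coefficients are not given here:

```python
# Fraction of each type's payload that is pushable, weighted by the
# browser-processing order the trained classifier converged to.
WEIGHTS = {"css": 50.0, "js": 30.0, "font": 15.0, "img": 10.0}  # assumed values

def pushability_score(frac_pushed):
    """frac_pushed: dict mapping object type to the fraction (0..1) of
    that type's payload that can be pushed."""
    return sum(w * frac_pushed.get(t, 0.0) for t, w in WEIGHTS.items())

# Pages scoring above roughly 50 were the ones that saw >15%
# SpeedIndex gains in the paper's dataset (Fig. 9).
```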
To apply this classifier to the Alexa top-1000 pages, we obtained their page
structure from HTTPArchive [9] and, assuming a 200ms TTFB (a conservative
20th percentile in the global distribution of BPTTFBs for the CDN’s users), we find
that 28% of the web pages are expected to see a performance boost higher than
15%, and the rest would see an improvement of up to 15% (Fig. 10). This fraction is a
conservative underestimation because classification of a URL as 3rd party is based
on simple pattern matching. However, there are cases where 1st party domains
don’t match the base hostname (e.g. “ytimg.com” is a 1st-party domain of
“youtube.com”). The orange line in Fig. 10 shows the distribution of ranks when
we assume that we can additionally push all theoretically pushable content (this
also captures 1st party domains that do not match the base hostname). In this
case, 37% of the web pages are expected to see at least 15% improvement. Our
conclusions are obtained by extrapolating from our 25 target sites: future work
should expand the set of websites to arrive at more accurate estimates.
5 Related Work
Previous work has utilized intermediate proxies to speed up page load times, either
by compressing data [2] or executing part of the page load path on behalf of the
client in order to reduce its overhead [23]. Other approaches focus on prioritizing
web page content to reduce load time by analyzing web dependencies either
offline [4] or on the fly [16]. These techniques can complement Push to provide
additional benefit. Several optimizations have been proposed on the network layer
to improve page load times, like HTTP/2’s predecessor, SPDY [12], and QUIC
[17]. These are also orthogonal to Push. Metapush [8] focuses mainly on reducing
load times of subpages that a client might visit, by sending a list of URLs that the
client can request immediately if the user visits a subpage instead of waiting to
parse the HTML of that page, assuming those objects are not cached. Our work
focuses on optimizing the currently fetched page, by utilizing idle network time
to send full objects. Push impact has previously been studied either by replaying
pages in a controlled environment [22, 19], which can be unrealistic since a large
fraction of 3rd party transfers cannot be replayed faithfully, or by using a model [25].
In contrast, our evaluation uses active measurements of live pages, exposing all
of their realistic characteristics.
6 Conclusion
In this work we examined the factors that can influence the effectiveness of
HTTP/2 Server Push, and designed a policy for maximizing its benefit. We
measured the impact of Push through active measurements on a set of real,
popular web pages, showing that it can provide up to 27% benefit. We identified
characteristics that can mask the potential of Push, like 3rd party objects or
cache-busting mechanisms that make objects unpushable. Lastly, we extrapolated
the conclusions from our set of target pages to the Alexa top 1000 pages, showing
that based on their page structure, 28% of them would see at least a 15%
performance boost by using Push.
Bibliography
[1] Above the Fold Time: Measuring Web Page Performance Visually. http://conferences.oreilly.com/velocity/velocity-mar2011/public/schedule/detail/18692.
[2] Victor Agababov, Michael Buettner, Victor Chudnovsky, Mark Cogan, Ben
Greenstein, Shane McDaniel, Michael Piatek, Colin Scott, Matt Welsh, and
Bolian Yin. “Flywheel: Google’s Data Compression Proxy for the Mobile
Web”. In: Proceedings of the 12th USENIX Symposium on Networked
Systems Design and Implementation (NSDI 2015).
[3] Anna Bouch, Allan Kuchinsky, and Nina T. Bhatti. “Quality is in the
eye of the beholder: meeting users’ requirements for Internet quality of
service”. In: Proceedings of the CHI 2000 Conference on Human factors in
computing systems, The Hague, The Netherlands, April 1-6, 2000.
[4] Michael Butkiewicz, Daimeng Wang, Zhe Wu, Harsha V. Madhyastha,
and Vyas Sekar. “Klotski: Reprioritizing Web Content to Improve User
Experience on Mobile Devices”. In: 12th USENIX Symposium on Networked
Systems Design and Implementation (NSDI 15).
[5] Cache Digests for HTTP/2. https://tools.ietf.org/html/draft-ietf-httpbis-cache-digest-00.
[6] CaddyServer: Implementing HTTP/2 Isn’t Trivial. https://caddyserver.com/blog/implementing-http2-isnt-trivial.
[7] Bruce Hajek. In: Random Processes for Engineers. 2015. Chap. 1, pp. 19–20.
[8] Bo Han, Shuai Hao, and Feng Qian. “MetaPush: Cellular-Friendly Server
Push For HTTP/2”. In: Proceedings of the 5th Workshop on All Things
Cellular: Operations, Applications and Challenges.
[9] HTTP Archive. http://httparchive.org/.
[10] HTTP/2 Push dashboard. https://nsl.cs.usc.edu/Projects/http2push.
[11] HTTP2 RFC. https://tools.ietf.org/html/rfc7540.
[12] SPDY. http://dev.chromium.org/spdy.
[13] Latency Is Everywhere And It Costs You Sales. http://highscalability.com/latency-everywhere-and-it-costs-you-sales-how-crush-it.
[14] Zhichun Li, Ming Zhang, Zhaosheng Zhu, Yan Chen, Albert G. Greenberg,
and Yi-Min Wang. “WebProphet: Automating Performance Prediction
for Web Services”. In: Proceedings of the 7th USENIX Symposium on
Networked Systems Design and Implementation, NSDI 2010, April 28-30,
2010, San Jose, CA, USA.
[15] Navigation Timing. https://www.w3.org/TR/navigation-timing.
[16] Ravi Netravali, James Mickens, and Hari Balakrishnan. “Polaris: Faster
Page Loads Using Fine-grained Dependency Tracking”. In: 13th USENIX
Symposium on Networked Systems Design and Implementation (NSDI 16).
[17] QUIC, a multiplexed transport over UDP. https://www.chromium.org/quic.
[18] Resource Timing. https://www.w3.org/TR/resource-timing.
[19] Sanae Rosen, Bo Han, Shuai Hao, Z Morley Mao, and Feng Qian. “Push or
Request: An Investigation of HTTP/2 Server Push for Improving Mobile
Performance”.
[20] Speed Index. https://sites.google.com/a/webpagetest.org/docs/using-webpagetest/metrics/speed-index.
[21] Matteo Varvello, Kyle Schomp, David Naylor, Jeremy Blackburn, Alessan-
dro Finamore, and Konstantina Papagiannaki. “Is the Web HTTP/2 Yet?”
In: Passive and Active Measurement, 2016.
[22] Xiao Sophia Wang, Aruna Balasubramanian, Arvind Krishnamurthy, and
David Wetherall. “How Speedy is SPDY?” In: Proceedings of the 11th
USENIX Symposium on Networked Systems Design and Implementation,
NSDI 2014, Seattle, WA, USA.
[23] Xiao Sophia Wang, Arvind Krishnamurthy, and David Wetherall. “Speeding
up Web Page Loads with Shandian”. In: 13th USENIX Symposium on
Networked Systems Design and Implementation (NSDI 16).
[24] WebPageTest. http://www.webpagetest.org/.
[25] Kyriakos Zarifis, Mark Holland, Manish Jain, Ethan Katz-Bassett, and
Ramesh Govindan. “Modeling HTTP/2 Speed from HTTP/1 Traces”. In:
Passive and Active Measurement - 17th International Conference, PAM
2016, Heraklion, Greece, March 31 - April 1, 2016. Proceedings.