Content Distribution Networks CPE 401 / 601 Computer Network Systems Content Distribution Networks Modified from Ravi Sundaram, Janardhan R. Iyengar, and others
Content and Internet Traffic
Shifts seismically (email → FTP → Web → P2P → video)
Has many small/unpopular and few large/popular flows
Zipf popularity distribution: popularity of the k-th most popular item is proportional to 1/k
Shows up as a straight line on a log-log plot
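A quick numeric sketch of the Zipf claim (a minimal, self-contained example; the catalog size n and the normalization are illustrative):

```python
import math

def zipf_popularity(k, n=1000):
    # Popularity of the k-th most popular object: proportional to 1/k,
    # normalized so that the n popularities sum to 1
    norm = sum(1.0 / i for i in range(1, n + 1))
    return (1.0 / k) / norm

# On a log-log plot, log p(k) = -log k + const: a straight line of slope -1
p1, p100 = zipf_popularity(1), zipf_popularity(100)
slope = (math.log(p100) - math.log(p1)) / (math.log(100) - math.log(1))
print(round(slope, 3))  # -1.0
```

The slope of exactly -1 is what makes Zipf popularity appear as a straight line on a log-log plot, as the slide notes.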
Server Farms and Web Proxies (1)
Server farms enable large-scale Web servers:
Front-end load-balances requests over servers
Servers access the same backend database
Server Farms and Web Proxies (2)
Proxy caches help organizations to scale the Web
Caches server content near clients for performance
Implements organization policies (e.g., access control)
HTTP Caching
Clients often cache documents
When should the origin be checked for changes? Every time? Every session? By date?
HTTP includes caching information in headers
HTTP 1.0 used: “Expires: <date>”; “Pragma: no-cache”
HTTP/1.1 has “Cache-Control”: “No-Cache”, “Private”, “Max-age: <seconds>”
“ETag: <opaque value>”
If not expired, use the cached copy
If expired, use a conditional GET request to the origin:
“If-Modified-Since: <date>”, “If-None-Match: <etag>”
304 (“Not Modified”) or 200 (“OK”) response
Ask: What’s the problem with just using the date for If-Modified-Since queries?
Web servers may generate content per-client. Examples: CGI scripts, customized home pages (e.g., Google’s); responses vary with cookies, client IP, referrer, accept-encoding, etc.
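The freshness/revalidation logic above can be sketched as a small decision function (the cache-entry layout is a hypothetical simplification; real caches parse the actual Expires and Cache-Control headers):

```python
import time
from email.utils import formatdate

def request_headers(entry, now=None):
    """Decide how to reuse a cached response.

    entry is a hypothetical cache record: {'stored_at': epoch secs,
    'max_age': secs, 'etag': str or None, 'last_modified': epoch secs or None}.
    Returns None if the cached copy is still fresh (use it directly),
    otherwise a dict of conditional-GET headers for revalidation.
    """
    now = now or time.time()
    if now - entry['stored_at'] < entry['max_age']:
        return None  # fresh: serve from cache, no request needed
    headers = {}
    if entry.get('etag'):
        headers['If-None-Match'] = entry['etag']  # preferred validator
    if entry.get('last_modified'):
        headers['If-Modified-Since'] = formatdate(entry['last_modified'], usegmt=True)
    return headers  # origin answers 304 (reuse cache) or 200 (new copy)

entry = {'stored_at': 1000.0, 'max_age': 60, 'etag': '"abc123"', 'last_modified': 900.0}
print(request_headers(entry, now=1030.0))  # None -> still fresh
print(request_headers(entry, now=1200.0))  # conditional-GET headers
```

Preferring the ETag over the date sidesteps the per-client-content problem the slide asks about: an opaque validator identifies the exact representation rather than a modification time.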
Web Proxy Caches
User configures browser: Web accesses go via the cache
Browser sends all HTTP requests to the cache
If the object is in the cache: the cache returns the object
Else: the cache requests the object from the origin server, then returns it to the client
[Diagram: clients send HTTP requests to the proxy server; on a miss, the proxy forwards the request to the origin server and relays the response]
Caching Example (1)
Assumptions:
Average object size = 100 kbits
Avg. request rate from browsers to origin servers = 20/sec
Delay from institutional router to any origin server and back = 2 sec
Consequences:
Utilization on LAN = 2 Mbps / 10 Mbps = 20%
Utilization on access link = 2 Mbps / 1.5 Mbps > 100% (saturated)
Total delay = Internet delay + access delay + LAN delay = 2 sec + minutes + milliseconds
[Setup: origin servers on the public Internet, 1.5 Mbps access link, 10 Mbps institutional LAN]
Caching Example (2)
Possible solution: increase bandwidth of the access link to, say, 10 Mbps; often a costly upgrade
Consequences:
Utilization on LAN = 20%
Utilization on access link = 20%
Total delay = Internet delay + access delay + LAN delay = 2 sec + milliseconds
[Setup: origin servers on the public Internet, 10 Mbps access link, 10 Mbps institutional LAN]
Caching Example (3)
Install a cache; suppose the hit rate is 60%
Consequences:
60% of requests satisfied almost immediately (say 10 msec)
40% of requests satisfied by the origin
Utilization of the access link drops to 53%, yielding negligible delays
Weighted average of delays = 0.6 × 10 ms + 0.4 × 2 s ≈ 0.81 s
[Setup: origin servers on the public Internet, 1.5 Mbps access link, 10 Mbps institutional LAN, institutional cache]
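The cache-hit arithmetic in this example can be checked directly (numbers taken from the slides; the 10 ms hit delay and 2 s miss delay are the stated assumptions):

```python
# Slide parameters: 20 req/s, 100 kbit objects, 1.5 Mbps access link,
# 60% cache hit rate, ~10 ms delay on a hit, ~2 s delay on a miss.
req_rate = 20            # requests/sec from browsers
obj_size = 100_000       # bits per object
access_bw = 1_500_000    # bits/sec on the access link
hit_rate = 0.6

# Only misses cross the access link
miss_traffic = (1 - hit_rate) * req_rate * obj_size   # bits/sec
utilization = miss_traffic / access_bw
avg_delay = hit_rate * 0.010 + (1 - hit_rate) * 2.0   # weighted average, secs

print(f"access link utilization = {utilization:.0%}")  # 53%
print(f"average delay = {avg_delay:.3f} s")            # 0.806 s
```

So a 60% hit rate cuts the access link below saturation and brings the average delay to roughly 0.81 s, cheaper than the 10 Mbps link upgrade of the previous slide.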
When a Single Cache Isn’t Enough
What if the working set is larger than the proxy’s disk? Cooperation!
A static hierarchy:
Check local cache
If miss, check siblings
If miss, fetch through the parent
Internet Cache Protocol (ICP): ICPv2 in RFC 2186 (& 2187); UDP-based, short timeout
[Diagram: child caches below a parent web cache on the public Internet]
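The local / sibling / parent hierarchy can be sketched as follows (a toy in-memory model, not the actual ICP wire protocol, which probes siblings over UDP with a short timeout):

```python
def lookup(url, local, siblings, fetch_via_parent):
    """Hierarchical cache lookup in the spirit of ICP (hypothetical API):
    check the local cache, then ask siblings, else fetch through the parent."""
    if url in local:
        return local[url], 'local'
    for sib in siblings:            # real ICP probes siblings over UDP
        if url in sib:
            obj = sib[url]
            local[url] = obj        # cache the sibling's copy locally
            return obj, 'sibling'
    obj = fetch_via_parent(url)     # the parent fetches from origin on a miss
    local[url] = obj
    return obj, 'parent'

local, sibling = {'/a': 'A'}, {'/b': 'B'}
fetch = lambda url: 'from-origin'
print(lookup('/a', local, [sibling], fetch))  # ('A', 'local')
print(lookup('/b', local, [sibling], fetch))  # ('B', 'sibling')
print(lookup('/c', local, [sibling], fetch))  # ('from-origin', 'parent')
```

The short ICP timeout matters because a slow sibling must not add more latency than simply going through the parent.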
Problems with the Discussed Approaches: Server Farms and Caching Proxies
Server farms do nothing about network congestion, nor do they improve latency caused by the network
Caching proxies serve only their own clients, not all users on the Internet
Content providers (say, Web servers) cannot rely on the existence and correct implementation of caching proxies
Accounting issues with caching proxies: for instance, www.cnn.com needs to know the number of hits to the webpage for the advertisements displayed on it
Problems
A significant fraction (>50%) of HTTP objects are un-cacheable
Sources of dynamism:
Dynamic data: stock prices, scores, web cams
CGI scripts: results based on passed parameters
Cookies: results may be based on passed data
SSL: encrypted data is not cacheable
Advertising / analytics: the owner wants to measure # hits; random strings in content to ensure unique counting
But most dynamic content is small, while static content is large (images, video, .js, .css, etc.)
Content Distribution Networks (CDNs)
Content providers are CDN customers
Content replication:
CDN company installs thousands of servers throughout the Internet, in large datacenters or close to users
CDN replicates customers’ content
When the provider updates content, the CDN updates its servers
[Diagram: origin server in North America feeds a CDN distribution node, which pushes content to CDN servers in S. America, Asia, and Europe]
The Web: Simple on the Outside…
[Diagram: content providers connected to end users through the Internet]
…But Problematic on the Inside
[Diagram: content providers reach end users across multiple network providers (e.g., UUNet, Qwest, AOL) interconnected at peering points / NAPs]
Why Does My Click Not Work?
Latency: browser takes a long time to load the page
Packet loss: browser hangs, user needs to hit refresh
Jitter: streams are jerky
Server load: browser connects but does not fully load the page
Broken/missing content
(Presenter note: in a nutshell, this is the problem being solved; the following slides cover the ideas that go into the solution)
The Akamai Solution
[Diagram: content providers’ content is served from servers at the network edge, beyond the NAPs, close to end users]
Benefits of CDNs
Improved end-user experience: reduced latency, loss, and jitter
Reduced network congestion: traffic is moved to the edge
Increased scalability: with more servers, more requests can be served
Improved fault-tolerance: no single point of failure in a distributed system
Reduced vulnerability: denial-of-service attacks are diffused
Reduced costs: economies of scale, multiplexing, and buying in bulk
CDN vs. Caching Proxies
Caches are used by ISPs to reduce bandwidth consumption; CDNs are used by content providers to improve quality of service to end users
Caches are reactive; CDNs are proactive
Caching proxies cater to their users (web clients) and not to content providers (web servers); CDNs cater to both the content providers (web servers) and the clients
CDNs give control over the content to the content providers; caching proxies do not
CDNs – Content Delivery Networks (1) CDNs scale Web servers by having clients get content from a nearby CDN node (cache)
Content Delivery Networks (2) Directing clients to nearby CDN nodes with DNS: Client query returns local CDN node as response Local CDN node caches content for nearby clients and reduces load on the origin server
Content Delivery Networks (3)
Origin server rewrites pages to serve content via the CDN
[Figures: a traditional Web page on the server vs. a page that distributes content via the CDN]
CDN Architecture
[Diagram: the client’s request passes through the Request Routing Infrastructure to a surrogate; the origin server feeds the surrogate via the Distribution and Accounting Infrastructure]
CDN Components
Content Delivery Infrastructure: delivering content to clients from surrogates
Request Routing Infrastructure: steering or directing a content request from a client to a suitable surrogate
Distribution Infrastructure: moving or replicating content from the content source (origin server, content provider) to surrogates
Accounting Infrastructure: logging and reporting of distribution and delivery activities
Server Interaction with CDN
1. Origin server pushes new content to the CDN, or the CDN pulls content from the origin server (Distribution Infrastructure)
2. Origin server requests logs and other accounting info from the CDN, or the CDN provides logs and other accounting info to the origin server (Accounting Infrastructure)
Client Interaction with CDN
Surrogates: delaware.unr.akamai.com (DE) and california.unr.akamai.com (CA)
1. Hi! I need www.unr.edu/cse
2. Go to surrogate california.unr.akamai.com (Request Routing Infrastructure)
3. Hi! I need content /cse
Q: How did the CDN choose the California surrogate over the Delaware surrogate?
Content Distribution Networks & Server Selection
Replicate content on many servers
Challenges:
How to replicate content
Where to replicate content
How to find replicated content
How to choose among known replicas
How to direct clients towards replicas
Server Selection
Which server?
Lowest load: to balance load on servers
Best performance: to improve client performance; based on geography? RTT? throughput? load?
Any alive node: to provide fault tolerance
How to direct clients to a particular server?
As part of routing: anycast, cluster load balancing
As part of the application: HTTP redirect
As part of naming: DNS
Ask: What things might you want to pick the server on? How do you direct clients to a server? Anyone know how Akamai works?
Trade-offs Between Approaches
Routing based (IP anycast)
Pros: transparent to clients, circumvents many routing issues
Cons: little control, complex, poor scalability
Application based (HTTP redirects)
Pros: application-level, fine-grained control
Cons: additional load and RTTs, hard to cache
Naming based (DNS selection)
Pros: well-suited to caching, reduces RTTs
Cons: request comes from the resolver, not the client; request is for a domain, not a URL; hidden load factor of the resolver’s population
Much of this data can be estimated “over time”
DNS Based Request-Routing
Common due to the ubiquity of DNS as a directory service
A specialized DNS server is inserted into the DNS resolution process
This DNS server is capable of returning a different set of A, NS or CNAME records based on policies/metrics
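A minimal sketch of policy-based resolution: the CDN's DNS server picks an A record from a hypothetical table mapping resolver prefixes to surrogates (addresses reused from the example slides; real systems use measurements, not static tables):

```python
import ipaddress

# Hypothetical policy table: resolver prefix -> surrogate A record
SURROGATES = {
    ipaddress.ip_network('128.4.0.0/16'): '145.155.10.15',  # e.g. California cluster
    ipaddress.ip_network('10.0.0.0/8'):   '58.15.100.152',  # e.g. Delaware cluster
}
DEFAULT = '145.155.10.15'

def resolve(resolver_ip):
    """Return the A record the CDN's DNS would hand this resolver.

    Note the key limitation from the previous slide: the decision is based
    on the resolver's address, not the end client's.
    """
    addr = ipaddress.ip_address(resolver_ip)
    for net, surrogate in SURROGATES.items():
        if addr in net:
            return surrogate
    return DEFAULT

print(resolve('128.4.4.12'))  # resolver near the California cluster
```

The same mechanism could return NS or CNAME records instead of A records, delegating the final choice to a lower-level nameserver, as Akamai's hierarchy does.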
DNS Based Request-Routing
How does the Akamai DNS know which surrogate is closest?
[Diagram: client cis.ucdavis.edu (128.4.30.15) sends a DNS query for www.unr.edu to its local DNS server louie.ucdavis.edu (128.4.4.12); the Akamai DNS responds with A 145.155.10.15, the California surrogate (california.unr.akamai.com, 145.155.10.15), rather than the Delaware surrogate (delaware.unr.akamai.com, 58.15.100.152)]
DNS Based Request-Routing
[Diagram: the Akamai DNS measures the path to the client’s local DNS server (louie.ucdavis.edu, 128.4.4.12) and uses the measurement results to answer its query for www.cnn.com with a nearby surrogate]
What Is the Mapping Problem?
The problem of directing requests to servers so as to optimize the end-user experience: reduce latency, loss, and jitter
Assumption: the servers themselves are fine
Applicable to 2 mirrors or to 1000s of Akamai locations
Server Selection Metrics
Network proximity (surrogate to client): network hops (traceroute), Internet mapping services (NetGeo, IDMaps), …
Surrogate load: number of active TCP connections, HTTP request arrival rate, other OS metrics, bandwidth availability
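One illustrative way to combine such metrics into a single score (the weights, field names, and the linear form are assumptions for the sketch, not any CDN's actual formula):

```python
def score(surrogate, w_rtt=0.7, w_load=0.3):
    """Toy combined selection metric: lower is better.

    Mixes a proximity metric (RTT in ms) with a load metric (active TCP
    connections); the weights are illustrative.
    """
    return w_rtt * surrogate['rtt_ms'] + w_load * surrogate['active_conns']

surrogates = [
    {'name': 'california', 'rtt_ms': 20, 'active_conns': 80},
    {'name': 'delaware',   'rtt_ms': 90, 'active_conns': 10},
]
best = min(surrogates, key=score)
print(best['name'])  # california: 0.7*20 + 0.3*80 = 38 vs 0.7*90 + 0.3*10 = 66
```

Tuning the weights trades proximity against load balancing: with w_load large enough, the lightly loaded Delaware surrogate would win instead.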
Attempt
Measure which is closer; measure frequently, since closeness changes over time
Problems: measuring bothers people, and there are too many targets to measure
Why not use DNS queries for measurement? 15–20% of nameservers are closed to the world, their state tends to be variable, different-size packets cannot be used, and sysadmins tend to be sensitive about repeated queries
(Presenter note: a naive attempt at the impossible; experts said it could not be done, but we learned important things. Never eliminate the obvious without trying it, also known as the open design principle: we need all the help we can get)
Scalability is the key problem: ~500,000 unique nameservers on any given day, with 10 sec per measurement cycle set as the challenge
Idea
Topology is relatively static: it changes on BGP time scales, on the order of hours if not days
Congestion is dynamic: it changes round-trip times on the order of milliseconds
(Presenter note: the naive approach taught us a lot; to some extent we were theoreticians, and by doing we learned)
Set Cover
Let sets represent proxy points
Let elements represent nameservers
Find the minimum collection of proxy points covering all nameservers
[Diagram: proxy point X covers nameservers 1, 2, 3, and 4]
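Minimum set cover is NP-hard; the standard greedy heuristic (repeatedly pick the set covering the most uncovered elements, a ln n approximation) looks like this:

```python
def greedy_set_cover(universe, sets):
    """Greedy set-cover approximation.

    universe: the elements to cover (here, nameservers).
    sets: dict mapping a proxy-point name to the set of elements it covers.
    Returns the list of chosen proxy points.
    """
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Pick the proxy point covering the most still-uncovered nameservers
        best = max(sets, key=lambda name: len(sets[name] & uncovered))
        if not sets[best] & uncovered:
            break  # remaining elements cannot be covered by any set
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

# Toy instance mirroring the slide: proxy point X covers nameservers 1-4
proxies = {'X': {1, 2, 3, 4}, 'Y': {1, 2}, 'Z': {3}}
print(greedy_set_cover({1, 2, 3, 4}, proxies))  # ['X']
```

This is the flavor of computation behind reducing 500,000 nameservers to far fewer proxy points, as the following slides describe.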
Topology Discovery
At each mirror, maintain a list of partial paths to nameservers
At each epoch, extend the paths by 1 hop, in randomized fashion, and exchange them with the other mirror
If the two (partial) paths to a nameserver have intersected, declare that nameserver done
If a path has reached a forbidden IP, wait
Use a pair of proxies in case of failure
Topology Discovery (results)
500,000 nameservers reduced to 90,000 proxy points (clusters); fewer than 1% (about 5,000) require proxy pairs
Problem: still too many measurements to do; 90,000 measurements every 10 s with 32 B packets requires a few Mbps per mirror
Solution: importance-based sampling; 7,000 proxy points account for 95% of end-user load!
Maps built every 10 s
CDN Types (Skeletal)
CDNs: Hosting CDN, Relaying CDN
Content delivery: Partial Site Content Delivery, Full Site Content Delivery
Request Routing Techniques: DNS based, URL Rewriting
Value of a CDN
Scale: aggregate infrastructure size
Reach: diversity of content locations, diverse placement of surrogates
Request routing efficiency and delivery techniques
How Akamai Works
Clients fetch the HTML document from the primary server, e.g. fetch index.html from unr.edu
URLs for replicated content are rewritten in the HTML, e.g. <img src=“http://unr.edu/af/x.gif”> is replaced with <img src=“http://a73.g.akamai.net/7/23/unr.edu/af/x.gif”>
Or, use cache.unr.edu, and UNR adds a CNAME (alias): cache.unr.edu → a73.g.akamai.net
The client resolves the aXYZ.g.akamai.net hostname, which maps to a server in one of Akamai’s clusters
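The URL rewriting step can be sketched with a regular expression (the serial number a73 and customer path 7/23 are illustrative values, as in the slide):

```python
import re

def akamaize(html, origin='unr.edu', serial='a73', cp='7/23'):
    """Rewrite embedded object URLs pointing at the origin so they are
    fetched via the CDN hostname instead (a sketch, not Akamai's tooling)."""
    pattern = re.compile(r'(src=")https?://%s/([^"]+)(")' % re.escape(origin))
    repl = r'\1http://%s.g.akamai.net/%s/%s/\2\3' % (serial, cp, origin)
    return pattern.sub(repl, html)

page = '<img src="http://unr.edu/af/x.gif">'
print(akamaize(page))
# <img src="http://a73.g.akamai.net/7/23/unr.edu/af/x.gif">
```

Note that only embedded objects are rewritten; the HTML page itself still comes from the primary server, so the provider keeps its hit counts for the page.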
How Akamai Works
Akamai only replicates static content (at least in the simple version; Akamai also lets sites write code that runs on its servers, but that’s a pretty different beast)
The modified name contains the original file name
An Akamai server asked for content:
Checks its local cache
Checks other servers in the local cluster
Otherwise, requests the file from the primary server and caches it
A CDN is a large-scale, distributed network: Akamai has ~25K servers spread over ~1K clusters world-wide
Ask: Why do you want servers in many different locations? Why might video distribution architectures be different?
How Akamai Works
Root server gives the NS record for akamai.net
That nameserver returns the NS record for g.akamai.net; this nameserver is chosen to be in the region of the client’s name server; TTL is large
The g.akamai.net nameserver chooses a server in the region; it should try to choose a server that has the file in cache, using the aXYZ name and a hash; TTL is small
Small modification to before:
cache.unr.edu CNAME cache.unr.edu.akamaidns.net
cache.unr.edu.akamaidns.net CNAME a73.g.akamai.net
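The “choose a server using the aXYZ name and a hash” step can be illustrated like this (a plain modular hash for clarity; this is not Akamai’s actual algorithm, which is closer to consistent hashing so that cluster changes remap as few names as possible):

```python
import hashlib

def pick_server(hostname, cluster):
    """Low-level DNS sketch: hash the aXYZ hostname onto the region's
    cluster, so the same name keeps landing on the same server, the one
    most likely to already have the file cached."""
    digest = hashlib.sha1(hostname.encode()).digest()
    idx = int.from_bytes(digest[:4], 'big') % len(cluster)
    return cluster[idx]

cluster = ['10.0.0.1', '10.0.0.2', '10.0.0.3']  # illustrative addresses
print(pick_server('a73.g.akamai.net', cluster))
```

Because the mapping is deterministic per aXYZ name, different names spread across the cluster while repeated requests for one name hit the same server’s cache; the small TTL lets the nameserver re-route quickly when load shifts.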
How Akamai Works – Already Cached
[Diagram: the end-user issues GET index.html to unr.edu (steps 1–2); the DNS root and Akamai high-level DNS answers are already cached, so the client queries only the Akamai low-level DNS server (steps 7–8), then issues GET /unr.edu/foo.jpg to the nearby hash-chosen Akamai server (steps 9–10)]
How Akamai Works
[Diagram: the end-user issues GET index.html to unr.edu (steps 1–2), resolves the rewritten hostname via the DNS root server (3–4), the Akamai high-level DNS server (5–6), and the Akamai low-level DNS server (7–8), then issues GET /unr.edu/foo.jpg to the nearby hash-chosen Akamai server (9–10); on a miss, the Akamai server fetches foo.jpg from the content provider (11–12)]