1 Prefetching the Means for Document Transfer: A New Approach for Reducing Web Latency
1. Introduction
2. Data Analysis
3. Pre-transfer Solutions
4. Performance Evaluation
5. Conclusions
2 Introduction (1)
Typical communication between Web clients and servers:
1) Client request
2) DNS lookup (get the server's IP address)
3) TCP connection establishment
4) HTTP request
5) Server processing
6) Server response and data transmission
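The steps above can be sketched and timed with a few lines of Python (an illustrative sketch, not the paper's measurement tool; the host and path are placeholders):

```python
import socket
import time

def timed_request(host, path="/", port=80):
    """Time the client-side phases of a Web request: DNS lookup,
    TCP connection establishment, and HTTP request-response
    (server processing plus data transmission)."""
    t0 = time.monotonic()
    ip = socket.gethostbyname(host)           # DNS lookup
    t1 = time.monotonic()
    s = socket.create_connection((ip, port))  # TCP three-way handshake
    t2 = time.monotonic()
    s.sendall(f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n".encode())
    while s.recv(4096):                       # response + data transmission
        pass
    t3 = time.monotonic()
    s.close()
    return {"dns": t1 - t0, "connect": t2 - t1, "request_response": t3 - t2}
```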
3 Introduction (2)
Factors of latency:
1) name-to-address resolution (DNS lookup) time,
2) TCP connection-establishment time,
3) HTTP request-response time,
4) server processing time,
5) transmission time.
(3) HTTP/1.1
Re-uses a single long-lived TCP connection for multiple HTTP requests.
HTTP/1.1 reduces the latency of subsequent requests to a server by utilizing an existing connection, but long latency still occurs when a request requires establishing a new connection.
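HTTP/1.1 connection reuse can be demonstrated with Python's standard `http.client` (a minimal sketch; the host, port, and paths are placeholders):

```python
import http.client

def fetch_two(host, port=80):
    """Fetch two URLs over one persistent HTTP/1.1 connection.
    Only the first request pays the connection-establishment cost;
    the second reuses the already-open socket."""
    conn = http.client.HTTPConnection(host, port)
    conn.request("GET", "/")
    first = conn.getresponse().read()
    reused = conn.sock                 # socket opened by the first request
    conn.request("GET", "/about")      # no new handshake: same connection
    assert conn.sock is reused
    second = conn.getresponse().read()
    conn.close()
    return first, second
```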
4 Introduction (cont)
(4) Other techniques for latency reduction
Caching:
1) Caching documents at browsers and proxies can reduce latency, but is applicable to only 30%-50% of requests.
2) Limited by copyright issues.
3) Caching eliminates transmission time, but when pre-validation is used it still incurs considerable latency.
Document prefetching:
1) Reduces latency by predicting requests and initiating document transfer prior to an actual request.
2) Its effectiveness is limited by the accuracy of predictions and the availability of lead time and bandwidth.
3) The use of document prefetching is controversial due to its extensive overhead on network bandwidth.
5 Data Analysis
(1) Data source
1) The data is from a log of the AT&T Research proxy server.
2) Data format: for each HTTP request, the time of the request, the user's IP address, the requested URL and Web server, and the referring URL.
3) 1.1 million requests (8/11 - 25/11, 1996), 463 users (IP addresses), 17 thousand different Web servers, and 521 thousand different URLs.
4) We extract users' request sequences and lists of servers.
(2) Data simulation
1) A simple "Web browser" program built on a BSD-based socket interface replayed logged requests and timed latencies.
2) Measurements were done from 3 locations: AT&T Shannon Lab, Stanford Univ., Tel Aviv Univ.
3) Requests referred by search engines or Web portals: AltaVista and Yahoo!
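The extraction of per-user request sequences and server lists can be sketched as follows (a hypothetical sketch: the tab-separated field layout is assumed, not taken from the paper's log format):

```python
from collections import defaultdict

def build_sequences(log_lines):
    """Group logged requests into per-user request sequences and
    collect the set of distinct servers. Each line is assumed
    (hypothetically) to be tab-separated:
    time, user IP, URL, server, referring URL."""
    sequences = defaultdict(list)   # user IP -> ordered request list
    servers = set()
    for line in log_lines:
        ts, user_ip, url, server, referrer = line.rstrip("\n").split("\t")
        sequences[user_ip].append((float(ts), url, server, referrer))
        servers.add(server)
    for seq in sequences.values():
        seq.sort()                  # order each user's requests by time
    return sequences, servers
```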
6 Data Analysis (cont)
(3) Study results of latency factors
DNS lookup time:
1) Query times can be long: more than 3 seconds for about 10% of servers.
2) Most DNS times are around 1 second.
7 Data Analysis (cont)
(3) Study results of latency factors
DNS lookup time:
3) Query times tend to increase as the time from a previous identical query increases.
8 Data Analysis (cont)
(3) Study results of latency factors
TCP connection-establishment time:
1) More than 0.5 seconds for about 10% of servers.
2) The effect of route warming is visible but small for short time frames.
9 Data Analysis (cont)
(3) Study results of latency factors
TCP connection-establishment time:
3) Significant, especially in busy hours.
10 Data Analysis (cont)
(3) Study results of latency factors
Cold- and warm-server request-response time:
1) HTTP request-response time is the major part of latency.
2) The HTTP request-response time of a cold server is longer than that of a warm server.
* Probe pattern: 4 consecutive TCP connection establishments (SYN1-SYN4), 3 consecutive HTTP HEAD requests (HEAD1-HEAD3), followed by 3 consecutive GET requests (GET1-GET3).
* SYN2-SYN4, HEAD2-HEAD3, and GET1-GET3 show similar times.
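The probe pattern above can be replayed with a short sketch (an illustration of the pattern only, not the paper's probe tool; each request uses a fresh connection for simplicity):

```python
import socket
import time

def _timed_http(ip, host, method, port):
    """One HTTP request on a fresh connection; returns the
    request-response time (connect time excluded)."""
    s = socket.create_connection((ip, port))
    t = time.monotonic()
    s.sendall(f"{method} / HTTP/1.0\r\nHost: {host}\r\n\r\n".encode())
    while s.recv(4096):        # drain the response
        pass
    dt = time.monotonic() - t
    s.close()
    return dt

def probe_server(ip, host, port=80):
    """Replay the probe pattern: four consecutive TCP connection
    establishments (SYN1-SYN4), three HEAD requests (HEAD1-HEAD3),
    then three GET requests (GET1-GET3), timing each step."""
    syn = []
    for _ in range(4):
        t = time.monotonic()
        socket.create_connection((ip, port)).close()
        syn.append(time.monotonic() - t)
    head = [_timed_http(ip, host, "HEAD", port) for _ in range(3)]
    get = [_timed_http(ip, host, "GET", port) for _ in range(3)]
    return {"syn": syn, "head": head, "get": get}
```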
11 Data Analysis (cont)
(3) Study results of latency factors
Summary:
1) DNS time and cold server state are the major causes of long latencies.
2) TCP connection-establishment time is significant with respect to HTTP request-response time.
12 Pre-transfer Solutions
(1) Pre-transfer prefetching
Pre-resolving: perform the DNS query prior to the user request.
The browser or proxy performs the DNS lookup before a request to the server.
Examples: pre-resolve server names appearing in hyperlinks; DNS renewal.
Pre-connecting: establish a TCP connection prior to a user request.
The browser or proxy establishes a TCP connection to a server prior to the user's request. When an IP address is not yet available, pre-connecting requires pre-resolving.
Pre-warming: send a "dummy" HTTP request prior to an actual request.
The browser or proxy sends a "dummy" HTTP HEAD request to the server prior to the actual request. It requires connection establishment, and also a DNS lookup when the IP address is not available.
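The three techniques can each be sketched in a few lines of Python (minimal illustrative helpers, not the paper's implementation):

```python
import socket

def pre_resolve(host):
    """Pre-resolving: perform the DNS query before the user request,
    so the answer is cached when the real request arrives."""
    return socket.gethostbyname(host)

def pre_connect(host, port=80):
    """Pre-connecting: establish the TCP connection ahead of time.
    Passing a host name triggers pre-resolving implicitly."""
    return socket.create_connection((host, port))

def pre_warm(host, port=80):
    """Pre-warming: send a dummy HEAD request so the server (and
    the route) are warm when the actual GET arrives."""
    s = pre_connect(host, port)
    s.sendall(f"HEAD / HTTP/1.0\r\nHost: {host}\r\n\r\n".encode())
    reply = s.recv(4096)
    s.close()
    return reply
```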
13 Pre-transfer Solutions (cont)
(2) Overhead considerations
Pre-resolving increases the number of DNS queries; pre-connecting increases the number of connection establishments; pre-warming results in additional work for the HTTP application.
Proxies, routers, DNS servers, and Web servers can reduce the extra overhead:
--> Proxies can multiplex many users' traffic on the same TCP connection to the server, so pre-connected connections can be shared. Similarly, the benefit of a single pre-resolve or pre-warm can be applicable to several users.
--> Routers may tune themselves for the change in the balance of total bandwidth and the number of different destinations. For example, they can cache route-related parameters only after at least a few packets are routed, rather than after just the first one. This strategy would reduce the use of valuable router cache space on pre-connect or pre-warm traffic.
--> More name servers can be used.
--> Web servers bear the highest overhead when pre-transfer techniques are used, so we use these techniques only, instead of document prefetching.
14 Pre-transfer Solutions (cont)
(3) Deployment guidelines
1) Proxies should share connections and pre-warms across their clients;
2) Browsers and proxies should terminate prefetched connections as soon as the likelihood of access drops;
3) A prefetching algorithm should include a prediction scheme, to predict which servers are likely to be accessed, and a pricing scheme, to evaluate overheads and set a threshold on the likelihood of access;
4) Additional information can be added to the DNS;
5) A new access request can be added to HTTP, given low priority in times of congestion.
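Guideline 3 can be illustrated with a toy policy (a hypothetical sketch: the frequency-based predictor and the cost/budget pricing rule are placeholder assumptions, not the paper's schemes):

```python
from collections import Counter

class PrefetchPolicy:
    """Toy prefetching policy: a prediction scheme estimates the
    likelihood that a server is accessed next from observed
    frequencies, and a pricing scheme turns an overhead budget
    into a likelihood threshold."""

    def __init__(self, cost_per_prefetch, budget_per_request):
        self.history = Counter()   # server -> observed access count
        self.total = 0
        # Pricing: prefetch only when the access likelihood covers
        # the per-prefetch cost within the overhead budget.
        self.threshold = min(1.0, cost_per_prefetch / budget_per_request)

    def record(self, server):
        self.history[server] += 1
        self.total += 1

    def likelihood(self, server):
        return self.history[server] / self.total if self.total else 0.0

    def should_prefetch(self, server):
        return self.likelihood(server) >= self.threshold
```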
15 Performance Evaluation
1) Timed factors of latency:
(1) DNS lookup,
(2) TCP connection establishment,
(3) HTTP GET request-response,
(4) a consecutive TCP connection establishment (warm route),
(5) a consecutive HTTP GET request (warm server),
(6) a consecutive HTTP HEAD request (warm server, minimal transmission time).
2) Calculated the latencies under the different pre-transfer techniques:
with neither technique: (1)+(2)+(3);
with a pre-resolved DNS query: (2)+(3);
with a pre-resolved DNS query and a pre-warmed route: (3)+(4);
with pre-connecting: (3);
with a pre-resolved DNS query, warm route, and warm server: (4)+(5);
with pre-connecting and a warm server: (5).
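The combinations above reduce to simple sums over the six timed components (a direct sketch of the slide's arithmetic; the technique names are shorthand labels):

```python
def latency_with(technique, t):
    """Combine the six timed components (keys 1..6, as listed
    above) into the end-to-end latency for each technique."""
    combos = {
        "none":                                 [1, 2, 3],
        "pre-resolved":                         [2, 3],
        "pre-resolved, warm route":             [3, 4],
        "pre-connected":                        [3],
        "pre-resolved, warm route and server":  [4, 5],
        "pre-connected, warm server":           [5],
    }
    return sum(t[i] for i in combos[technique])
```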
16 Performance Evaluation
17 Performance Evaluation
For requests referred by AltaVista:

                                      no technique  pre-resolving  pre-connecting  pre-warming
Requests with latency > 1 second:         22%           10%             6%             3%
Requests with latency > 5 seconds:         8%            3%             2.5%           1%
18 Experiment and Results (cont)
For all requests in the log:

                                      no technique  pre-resolving  pre-connecting  pre-warming
Requests with latency > 1 second:         14%           10%             6%             4%
Requests with latency > 4 seconds:         7%            4.5%           2%             1%

Cold servers and long DNS query times are prevalent, so pre-resolving and pre-warming are particularly effective.
19 Conclusions
There are 3 main factors in long latency:
1) DNS lookup time
2) TCP connection-establishment time
3) HTTP request-response time (cold vs. warm server state)
We can use pre-transfer solutions to reduce long latency:
1) Pre-resolving --> eliminates DNS lookup time
2) Pre-connecting --> eliminates TCP connection-establishment time
3) Pre-warming --> reduces HTTP request-response time