
15-744: Computer Networking L-20 Applications

Next Lecture: Application Networking
- HTTP
- Adaptive applications
- Assigned reading
  - [BSR99] An Integrated Congestion Management Architecture for Internet Hosts
  - [CT90] Architectural Considerations for a New Generation of Protocols

Overview: HTTP Basics, HTTP Fixes, ALF, Congestion Manager

HTTP Basics
- HTTP is layered over a bidirectional byte stream
  - Almost always TCP
- Interaction
  - Client sends request to server, followed by response from server to client
  - Requests/responses are encoded in text
- How to mark end of message?
  - Size of message: Content-Length
    - Must know size of transfer in advance
  - Delimiter: MIME-style Content-Type
    - Server must "byte-stuff"
  - Close connection
    - Only server can do this
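Because requests and responses are just text over a TCP byte stream, the exchange is easy to reproduce by hand. Below is a minimal Python sketch that sends a GET and reads until the server closes the connection (the "close connection" way of marking the end of the message); the host name is a placeholder, not something from the slides.

```python
import socket

HOST = "example.com"   # placeholder host, not from the slides

# Open the bidirectional byte stream (TCP) that HTTP is layered over.
sock = socket.create_connection((HOST, 80))

# Requests are plain text: a request line, headers, then a blank line.
request = (
    f"GET / HTTP/1.0\r\n"
    f"Host: {HOST}\r\n"
    "Connection: close\r\n"
    "\r\n"
)
sock.sendall(request.encode("ascii"))

# With "Connection: close" the end of the response is marked by the
# server closing the connection, so read until EOF.
response = b""
while True:
    chunk = sock.recv(4096)
    if not chunk:
        break
    response += chunk
sock.close()

headers, _, body = response.partition(b"\r\n\r\n")
print(headers.decode("ascii", errors="replace"))
```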

HTTP Request
- Request line
  - Method
    - GET – return URI
    - HEAD – return headers only of GET response
    - POST – send data to the server (forms, etc.)
  - URI
    - Absolute URL (http://...) when a proxy is used
    - Path only (e.g. /index.html) if no proxy
  - HTTP version

HTTP Request
- Request headers
  - Authorization – authentication info
  - Acceptable document types/encodings
  - From – user
  - If-Modified-Since
  - Referer – what caused this page to be requested
  - User-Agent – client software
- Blank line
- Body

HTTP Request Example

GET / HTTP/1.1
Accept: */*
Accept-Language: en-us
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0)
Host:
Connection: Keep-Alive

HTTP Response
- Status line
  - HTTP version
  - 3-digit response code
    - 1XX – informational
    - 2XX – success
    - 3XX – redirection
    - 4XX – client error
    - 5XX – server error
  - Reason phrase

HTTP Response
- Headers
  - Location – for redirection
  - Server – server software
  - WWW-Authenticate – request for authentication
  - Allow – list of methods supported (GET, HEAD, etc.)
  - Content-Encoding – e.g. x-gzip
  - Content-Length
  - Content-Type
  - Expires
  - Last-Modified
- Blank line
- Body

HTTP Response Example

HTTP/1.1 200 OK
Date: Tue, 27 Mar :49:38 GMT
Server: Apache/ (Unix) (Red-Hat/Linux) mod_ssl/2.7.1 OpenSSL/0.9.5a DAV/1.0.2 PHP/4.0.1pl2 mod_perl/1.24
Last-Modified: Mon, 29 Jan :54:18 GMT
ETag: "7a11f-10ed-3a75ae4a"
Accept-Ranges: bytes
Content-Length: 4333
Keep-Alive: timeout=15, max=100
Connection: Keep-Alive
Content-Type: text/html
.....

Typical Workload
- Multiple (typically small) objects per page
- Request sizes
  - In this paper: median 1946 bytes, mean bytes
  - Why such a difference? Heavy-tailed distribution
  - Pareto: p(x) = a k^a x^-(a+1)
- File sizes
  - Why different than request sizes?
  - Also heavy-tailed: Pareto distribution for the tail, lognormal for the body of the distribution
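The median/mean gap is exactly what a heavy tail produces. The sketch below draws sizes from a Pareto distribution (p(x) = a k^a x^-(a+1)) with made-up parameters, just to show the mean being pulled far above the median by rare, very large transfers.

```python
import random
import statistics

# Illustrative parameters, not the paper's fitted values.
a, k = 1.2, 1000          # shape and minimum size (bytes)

# random.paretovariate(a) samples from a Pareto with minimum 1;
# scaling by k gives p(x) = a * k**a * x**-(a+1) for x >= k.
sizes = [k * random.paretovariate(a) for _ in range(100_000)]

# With a heavy tail the mean is dominated by a few enormous samples,
# so it sits far above the median, as in the measured request sizes.
print("median:", round(statistics.median(sizes)))
print("mean:  ", round(statistics.fmean(sizes)))
```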

Typical Workload
- Popularity
  - Zipf distribution (P = k r^-1)
  - Surprisingly common
- Embedded references
  - Number of embedded objects is Pareto-distributed
- Temporal locality
  - Modeled as distance into a push-down stack
  - Lognormal distribution of stack distances
- Request interarrival
  - Bursty request patterns
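A quick sketch of what Zipf popularity (P = k r^-1) implies in practice: a handful of the most popular documents absorbs a large fraction of all requests, which is why caching pays off. The document count below is arbitrary.

```python
# Zipf popularity: request probability proportional to 1/rank.
N = 10_000                                   # arbitrary number of distinct documents
weights = [1.0 / r for r in range(1, N + 1)]
total = sum(weights)

# Share of requests captured by the 100 most popular documents.
top100 = sum(weights[:100]) / total
print(f"top 100 of {N} documents receive about {top100:.0%} of requests")
```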

HTTP Caching
- Clients often cache documents
  - Challenge: update of documents
  - If-Modified-Since requests to check
    - HTTP 0.9/1.0 used just the date
    - HTTP 1.1 has a file signature (ETag) as well
- When/how often should the original be checked for changes?
  - Check every time? Check each session? Day? Etc.?
  - Use Expires header
  - If no Expires, often use Last-Modified as an estimate
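Here is a sketch of the freshness decision a cache might make, assuming it stored the Expires and Last-Modified values along with the document. The 10%-of-age heuristic used when Expires is absent is a common rule of thumb, not something prescribed by the slides.

```python
from datetime import datetime, timedelta, timezone

def is_fresh(now, fetched_at, expires=None, last_modified=None):
    """Decide whether a cached copy can be reused without revalidation.

    Uses Expires when the server supplied it; otherwise falls back to a
    heuristic based on Last-Modified (here 10% of the document's age at
    fetch time, a rule of thumb rather than anything from the slides).
    """
    if expires is not None:
        return now < expires
    if last_modified is not None:
        freshness = (fetched_at - last_modified) * 0.1
        return now < fetched_at + freshness
    return False   # no information: revalidate with If-Modified-Since

# Example: fetched an hour ago, last modified ten days before that.
now = datetime.now(timezone.utc)
fetched = now - timedelta(hours=1)
modified = fetched - timedelta(days=10)
print(is_fresh(now, fetched, last_modified=modified))   # True: 10% of 10 days >> 1 hour
```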

Example Cache Check Request

GET / HTTP/1.1
Accept: */*
Accept-Language: en-us
Accept-Encoding: gzip, deflate
If-Modified-Since: Mon, 29 Jan :54:18 GMT
If-None-Match: "7a11f-10ed-3a75ae4a"
User-Agent: Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0)
Host:
Connection: Keep-Alive

Example Cache Check Response

HTTP/1.1 304 Not Modified
Date: Tue, 27 Mar :50:51 GMT
Server: Apache/ (Unix) (Red-Hat/Linux) mod_ssl/2.7.1 OpenSSL/0.9.5a DAV/1.0.2 PHP/4.0.1pl2 mod_perl/1.24
Connection: Keep-Alive
Keep-Alive: timeout=15, max=100
ETag: "7a11f-10ed-3a75ae4a"

HTTP 0.9/1.0
- One request/response per TCP connection
  - Simple to implement
- Disadvantages
  - Multiple connection setups: three-way handshake each time
  - Several extra round trips added to the transfer
  - Multiple slow starts

Single Transfer Example
[Timeline: 0 RTT – client opens the TCP connection (SYN/ACK exchange); 1 RTT – client sends the HTTP request for the HTML, server reads from disk; 2 RTT – HTML begins to arrive, client parses it and opens a second TCP connection for the image; 3 RTT – client sends the HTTP request for the image, server reads from disk; 4 RTT – image begins to arrive. Each connection also ends with a FIN exchange.]

More Problems
- Short transfers are hard on TCP
  - Stuck in slow start
  - Loss recovery is poor when windows are small
- Lots of extra connections
  - Increases server state/processing
- Server also forced to keep TIME_WAIT connection state
  - Why must the server keep these?
  - Tends to be an order of magnitude greater than the number of active connections. Why?

Overview: HTTP Basics, HTTP Fixes, ALF, Congestion Manager

Netscape Solution
- Use multiple concurrent connections to improve response time
  - Different parts of the Web page arrive independently
  - Can grab more of the network bandwidth than other users
- Doesn't necessarily improve response time
  - TCP loss recovery ends up being timeout-dominated because windows are small

Persistent Connection Solution
- Multiplex multiple transfers onto one TCP connection
- Serialize transfers: client makes the next request only after the previous response
- How to demultiplex requests/responses?
  - Content-Length and delimiters: same problems as before
  - Block-based transmission: send in multiple length-delimited blocks
  - Store-and-forward: wait for the entire response and then use Content-Length
  - PM95 solution: use existing methods and close the connection otherwise
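One way to see the demultiplexing issue: with persistent connections the client must find the boundary between back-to-back responses itself. The sketch below does this with Content-Length only (the simplest of the options above) and reads from canned bytes so it runs without a network.

```python
import io

def read_response(stream):
    """Read one HTTP response from a persistent connection, using
    Content-Length to find where it ends (the other delimiting schemes
    from the slide are not handled in this sketch)."""
    status = stream.readline()
    headers = {}
    while True:
        line = stream.readline().rstrip(b"\r\n")
        if not line:                       # blank line ends the headers
            break
        name, _, value = line.partition(b":")
        headers[name.strip().lower()] = value.strip()
    body = stream.read(int(headers[b"content-length"]))
    return status, body

# Two back-to-back responses on one "connection" (canned bytes for the demo).
wire = io.BytesIO(
    b"HTTP/1.1 200 OK\r\nContent-Length: 5\r\n\r\nhello"
    b"HTTP/1.1 200 OK\r\nContent-Length: 5\r\n\r\nworld"
)
print(read_response(wire)[1])   # b'hello'
print(read_response(wire)[1])   # b'world'
```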

Persistent Connection Example
[Timeline over one persistent connection: 0 RTT – client sends the HTTP request for the HTML, server reads from disk; 1 RTT – HTML begins to arrive, client parses it and sends the HTTP request for the image on the same connection, server reads from disk; 2 RTT – image begins to arrive.]

Persistent Connection Solution
- Serialized requests do not improve response time
- Pipelining requests
  - GETALL – request HTML document and all embeds
    - Requires server to parse HTML files
    - Doesn't consider client-cached documents
  - GETLIST – request a set of documents
    - Implemented as a simple set of GETs
- Prefetching
  - Must carefully balance the impact of unused data transfers
  - Not widely used due to poor hit rates
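A sketch of pipelining over a single persistent connection: all requests go out back to back, then the responses are read in order. The host and object names are placeholders; a real client would delimit each response as in the previous sketch, so here the last request simply asks the server to close the connection.

```python
import socket

HOST = "example.com"                       # placeholder host
PATHS = ["/", "/a.gif", "/b.gif"]          # hypothetical embedded objects

sock = socket.create_connection((HOST, 80))

# Pipelining: issue every request up front instead of waiting for each
# response before sending the next (roughly one RTT instead of one per object).
for i, path in enumerate(PATHS):
    headers = f"GET {path} HTTP/1.1\r\nHost: {HOST}\r\n"
    if i == len(PATHS) - 1:
        headers += "Connection: close\r\n"   # let the final read stop at EOF
    sock.sendall((headers + "\r\n").encode("ascii"))

# Responses come back in order on the same connection; a real client would
# delimit each one (e.g. with Content-Length) as in the previous sketch.
data = sock.makefile("rb").read()
sock.close()
print(len(data), "bytes of pipelined responses")
```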

Persistent Connection Performance
- Benefits greatest for small objects
  - Up to 2x improvement in response time
- Server resource utilization reduced due to fewer connection establishments and fewer active connections
- TCP behavior improved
  - Longer connections help adaptation to available bandwidth
  - Larger congestion window improves loss recovery

Remaining Problems
- Application-specific solution to transport protocol problems
  - Stall in the transfer of one object prevents delivery of others
  - Serialized transmission: much of the useful information is in the first few bytes
  - Can "packetize" the transfer over TCP
    - HTTP 1.1 recommends using range requests
    - MUX protocol provides a similar generic solution
- Solve the problem at the transport layer
  - TCP-Int / CM / TCP control block interdependence

Overview: HTTP Basics, HTTP Fixes, ALF, Congestion Manager

Integrated Layer Processing (ILP)
- Layering is convenient for architecture but not for implementations
- Combining data manipulation operations across layers provides gains
  - E.g. copy and checksum combined provide 90 Mbps vs. 60 Mbps separated
- Protocol design must be done carefully to enable ILP
- Presentation overhead and application-specific processing >> other processing
  - Target for ILP optimization
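The ILP gain comes from touching each byte once instead of once per layer. The toy sketch below contrasts a two-pass "copy, then checksum" structure with a single combined pass; the checksum is a simple byte sum rather than the real Internet checksum, and in Python only the structure (not the speed difference) is meaningful.

```python
def copy_then_checksum(src, dst):
    # Layered version: one pass to copy, a second pass over the copy to checksum.
    dst[:] = src
    return sum(dst) & 0xFFFF

def copy_and_checksum(src, dst):
    # ILP version: copy and checksum in a single traversal of the data,
    # so each byte only has to be brought into the cache once.
    total = 0
    for i, b in enumerate(src):
        dst[i] = b
        total += b
    return total & 0xFFFF

src = bytearray(b"some application data" * 100)
dst = bytearray(len(src))
assert copy_then_checksum(src, dst) == copy_and_checksum(src, dst)
```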

Application Level Framing (ALF)
- Objective: enable the application to process data ASAP
- Application response to loss
  - Retransmit (TCP applications)
  - Ignore (UDP applications)
  - Recompute/send new data (clever application)
- Expose the unit of application processing (ADU) to the protocol stack
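A sketch of what exposing ADUs might look like: each unit carries its own object id and offset, so the receiver can use it the moment it arrives, whatever the order. The header layout and field names below are invented for illustration.

```python
import struct

# Hypothetical ADU header: object id, byte offset within the object, length.
# The fields are illustrative; the point is that every ADU names its own
# position, so the receiver can process it regardless of arrival order.
ADU_HDR = struct.Struct("!IIH")

def make_adu(obj_id, offset, payload):
    return ADU_HDR.pack(obj_id, offset, len(payload)) + payload

def handle_adu(adu, objects):
    obj_id, offset, length = ADU_HDR.unpack_from(adu)
    payload = adu[ADU_HDR.size:ADU_HDR.size + length]
    objects.setdefault(obj_id, {})[offset] = payload   # no need to wait for earlier data

# ADUs arriving out of order are still immediately usable.
objects = {}
handle_adu(make_adu(7, 100, b"late part"), objects)
handle_adu(make_adu(7, 0, b"early part"), objects)
print(sorted(objects[7]))   # [0, 100]
```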

Application Data Units
- Requirements
  - ADUs can be processed in any order
  - Naming of ADUs should help identify their position in the stream
- Size
  - Enough to process independently
  - Impact on loss recovery: what if the size is too large?

Overview: HTTP Basics, HTTP Fixes, ALF, Congestion Manager

Socket API
- Connection establishment
  - connect – active connect
  - bind, listen, accept – passive connect
- Transmission/reception
  - send & sendto, recv & recvfrom
  - Pass a large buffer to the protocol stack
  - Protocol transmits at either congestion-controlled or line rate
- Configuration/status
  - setsockopt, getsockopt
  - E.g. advertised window, Nagle, multicast membership, etc.
- Very little information is really passed to or controlled by the application
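For contrast with the client-side sketch earlier, here is the passive-open side of the same API; note how little beyond buffers and a few setsockopt knobs the application actually controls. The port number is arbitrary.

```python
import socket

# Passive open: bind, listen, accept (the server side of the calls above).
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)  # one of the few exposed knobs
srv.bind(("", 8080))     # arbitrary port
srv.listen(5)

conn, addr = srv.accept()            # blocks until a client connects
request = conn.recv(4096)            # the stack, not the app, decides pacing and windows
conn.sendall(b"HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok")
conn.close()
srv.close()
```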

CM Requirements
- Determining the correct rate of transmission = congestion control
- What is wrong with congestion control today?
  - "Multimedia Transmissions Drive Net Toward Gridlock", New York Times, 8/23/99
  - Tightly integrated with TCP
- Congestion control is vital to network stability

Problems with Concurrent Flows
[Figure: flows f1, f2, ..., f(n) from a server to a client across the Internet.]
- Concurrent flows compete for resources
  - N "slow starts" = aggressive
  - No shared learning = inefficient
- Goal: abstract all congestion-related information into one location

Problems Adapting to Network State
[Figure: a single flow f1 from a server to a client across the Internet, with the network state unknown to the application.]
- TCP hides network state
- New applications may not use TCP
  - Often do not adapt to congestion
- Need a system that helps applications learn and adapt to congestion

API: High-Level View
[Figure: HTTP flows over TCP1 and TCP2, and audio/video flows over UDP, all sit above the Congestion Manager, which sits above IP and maintains per-macroflow statistics (cwnd, rtt, etc.).]
- All congestion management tasks are performed in CM
- Applications learn and adapt using the API

Design Requirements
- How does CM control when and whose transmissions occur?
  - Keep the application in control of what to send
- How does CM discover network state?
  - What information is shared?
  - What is the granularity of sharing?
- Key issues: API and information sharing

The CM Architecture
[Figure: on the sender, the transmitting application (TCP, conferencing app, etc.) uses the CM API; the sender-side CM contains a Congestion Controller, Scheduler, and Prober. On the receiver, the receiving application sits above a Responder and Congestion Detector. The two sides communicate using the CM protocol.]

Congestion Controller
- Responsible for deciding when to send a packet
- Window-based AIMD with traffic shaping
  - Exponential aging when feedback is low: halve the window every RTT (minimum)
- Other algorithms can be plugged in
  - Selected on a "macro-flow" granularity
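A minimal sketch of a window-based AIMD controller of the kind described here; the constants and method names are illustrative, not CM's actual implementation.

```python
class AIMDController:
    """Window-based AIMD sketch: additive increase per round trip of acks,
    multiplicative decrease on loss."""

    def __init__(self):
        self.cwnd = 1.0                 # congestion window, in packets

    def on_ack(self):
        # Additive increase: about +1 packet per window's worth of acks.
        self.cwnd += 1.0 / self.cwnd

    def on_loss(self):
        # Multiplicative decrease: halve the window.
        self.cwnd = max(1.0, self.cwnd / 2)

    def on_silence(self):
        # "Exponential aging": with no feedback for an RTT, halve the window.
        self.cwnd = max(1.0, self.cwnd / 2)

    def can_send(self, outstanding):
        # The macro-flow may transmit while fewer than cwnd packets are outstanding.
        return outstanding < self.cwnd
```

A flow would check can_send() before each transmission and report acks, losses, or silence back in; the scheduler then decides which flow within the macro-flow gets the slot.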

Scheduler
- Responsible for deciding who should send a packet
- Hierarchical round robin
  - Hints from the application or receiver used to prioritize flows
- Other algorithms can be plugged in
  - Selected on a "macro-flow" granularity
  - Prioritization interface may be different
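A sketch of the scheduler's role, reduced to flat round robin; CM's scheduler is hierarchical and takes priority hints, which this toy version omits.

```python
from collections import deque

class RoundRobinScheduler:
    """Round robin over the flows of a macro-flow: whenever the congestion
    controller permits a transmission, the next flow in the ring is asked
    to send. Flat and unweighted, unlike CM's hierarchical version."""

    def __init__(self):
        self.flows = deque()

    def add_flow(self, flow):
        self.flows.append(flow)

    def next_sender(self):
        if not self.flows:
            return None
        flow = self.flows.popleft()
        self.flows.append(flow)     # rotate so flows take turns
        return flow
```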

Transmission API
- Buffered send: cm_send(data, length)
- Request/callback-based send: the application calls cm_request(); CM calls back cmapp_send() when the flow may transmit; the application then send()s the packet toward IP, and cm_notify(nsent) tells CM what was actually transmitted

Transmission API (cont.)
- The request API suits asynchronous sources:
  wait for (some_events) { get_data( ); send( ); }
- Synchronous sources instead run:
  do_every_t_ms { get_data( ); send( ); }
- Solution for synchronous sources: a cmapp_update(rate, srtt) callback
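Putting the two styles together, here is a sketch of how a source might sit on the callback API. cm_request, cmapp_send, and cmapp_update are the names used in these slides; the class structure, queue, and send_to_network() are invented for illustration.

```python
class AsyncSender:
    """Sketch of an asynchronous source on the request/callback API.
    cm_request, cmapp_send, and cmapp_update are the slide's API names;
    everything else here is an assumption made for illustration."""

    def __init__(self, cm):
        self.cm = cm
        self.queue = []                  # data waiting for permission to send

    def have_data(self, data):
        self.queue.append(data)
        self.cm.cm_request(self)         # ask CM for a transmission slot

    def cmapp_send(self):
        # CM calls back when this flow may put one packet on the wire.
        if self.queue:
            send_to_network(self.queue.pop(0))   # hypothetical transmit routine

    def cmapp_update(self, rate, srtt):
        # A synchronous (rate-based) source would adapt to this callback
        # instead, e.g. by choosing an encoding that fits within `rate`.
        pass
```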

Feedback about Network State
- Monitoring successes and losses
- Application hints
- Probing system
- Notification API (application hints)
  - Application calls cm_update(nsent, nrecd, congestion indicator, rtt)

Probing System
- Receiver modifications necessary
  - Support for a separate CM header (inserted between the IP header and the IP payload)
    - Uses a sequence number to detect losses
  - Sender can request a count of packets received
- Receiver modifications detected/negotiated via handshake
  - Enables incremental deployment
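A sketch of the per-packet CM header idea: a sequence number inserted ahead of the payload lets the receiver detect losses and answer the sender's "how many did you get?" query. The exact header layout below is an assumption; the slide only says it carries a sequence number.

```python
import struct

# Hypothetical CM header layout: just a 32-bit sequence number placed
# between the IP header and the original payload.
CM_HDR = struct.Struct("!I")

def add_cm_header(seq, payload):
    return CM_HDR.pack(seq) + payload

def count_received(packets):
    """Receiver side: track which sequence numbers arrived so the sender
    can later request a count of packets received."""
    seen = set()
    for pkt in packets:
        (seq,) = CM_HDR.unpack_from(pkt)
        seen.add(seq)
    return seen

arrived = count_received(add_cm_header(s, b"data") for s in (0, 1, 3))
print(sorted(arrived))   # [0, 1, 3] -> sequence number 2 was lost
```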

How will applications use CM?
- TCP
  - Asynchronous API
- Congestion-controlled UDP
  - Buffered API
- Conferencing applications
  - Synchronous API for real-time streams
  - Combination of the others for other streams

TCP/CM
[Call sequence among the application, TCP/CM, and IP: 1. connect; 2. cm_open; 3. write; 4. cm_request; 5. cmapp_send; 6. transmit; 7. cm_notify / cm_request; 8. recv ACK; 9. cm_update; 10. close; 11. cm_close.]

CM Web Performance
[Graph: sequence number vs. time for Web transfers with CM and with TCP NewReno.]
- CM greatly improves predictability and consistency

Layered Streaming Audio
[Graph: sequence number vs. time for a competing TCP flow, TCP/CM, and Audio/CM.]
- Audio adapts to available bandwidth
- The combination of flows competes equally with normal TCP

CM Summary
- Enables proper and stable congestion behavior
- Provides a simple API that lets applications learn and adapt to network state
- Improves the consistency and predictability of network transfers
- Provides benefit even when deployed at senders alone

Next Lecture: Caching & CDNs
- Web caching hierarchies
- Content distribution networks
- Peer-to-peer networks
- Assigned reading
  - [FCAB98] Summary Cache: A Scalable Wide-Area Cache Sharing Protocol
  - [K+99] Web Caching with Consistent Hashing