Network Applications: HTTP/1


Network Applications: HTTP/1.1/2; High-Performance Server Design (Per Thread)
Y. Richard Yang
http://zoo.cs.yale.edu/classes/cs433/
9/28/2017

Outline
- Admin and recap
- HTTP “acceleration”
- Network server design

Admin
- HTTP server assignment (part 1) to be posted later today

Recap: HTTP
- Wide use of HTTP for Web applications; example: RESTful APIs
- RESTful design:
  http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm
  http://docs.oracle.com/javaee/6/tutorial/doc/giepu.html
[Figure: protocol stack — WebMail, WebApp, … over HTTP, over TCP/UDP, over Ethernet/Cable/DSL/Wireless]

Recap: HTTP
- Client-server app serving Web pages
- Message format: request/response line, header lines, entity body; simple methods, rich headers
- Message flow: stateless server, thus state such as cookies and authentication must be carried in each message

Recap: Basic HTTP/1.0 Server
- create ServerSocket(6789)
- connSocket = accept()
- read request from connSocket
- map URL to file (it does not have to be a static file)
- read from file / write to connSocket
- close connSocket
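The steps above can be sketched in Java. This is a minimal illustration, not the course's actual server code: the port 6789 comes from the slide, while `mapUrlToFile`, the document root `/var/www`, and the default page `/index.html` are hypothetical choices.

```java
import java.io.*;
import java.net.*;

// Sketch of the basic (sequential) HTTP/1.0 server loop from the slide.
public class BasicHttpServer {
    // Hypothetical "map URL to file" step: pull the path out of the
    // request line and resolve it against a document root.
    static String mapUrlToFile(String requestLine, String docRoot) {
        String[] parts = requestLine.split(" "); // e.g. "GET /index.html HTTP/1.0"
        String path = (parts.length > 1) ? parts[1] : "/";
        if (path.equals("/")) path = "/index.html";
        return docRoot + path;
    }

    // The sequential accept loop: each step may block, and only one
    // connection is handled at a time.
    static void serve() throws IOException {
        ServerSocket welcome = new ServerSocket(6789);
        while (true) {
            Socket conn = welcome.accept();                 // block for a connection
            BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()));
            String requestLine = in.readLine();             // read request
            String file = mapUrlToFile(requestLine, "/var/www");
            // ... read the file and write the response to conn.getOutputStream() ...
            conn.close();
        }
    }
}
```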

Recap: Latency of Basic HTTP/1.0
>= 2 RTTs per object:
- TCP handshake: 1 RTT
- client request and server response: at least 1 RTT (if the object fits in one packet)
[Figure: message sequence — TCP SYN; SYN/ACK + HTTP GET; ACK + base page; then the same exchange again for image 1]
Typical approaches to reduce latency:
- parallel TCP connections
- persistent HTTP
- cache and conditional GET
Discussion: how to reduce HTTP latency?
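Under the slide's assumptions (one handshake plus one request/response round trip per object, objects fetched serially on fresh connections, transmission time ignored), the page-load estimate is a simple product. A small sketch of that arithmetic:

```java
// Rough latency model for basic HTTP/1.0: each object costs at least
// 2 RTTs (TCP handshake + request/response), fetched serially.
public class Http10Latency {
    static double pageLoadRtts(int numObjects) {
        return 2.0 * numObjects;
    }
}
```

For a base page plus 10 images (11 objects), this gives at least 22 RTTs, which motivates the acceleration techniques that follow.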

Outline
- Admin and recap
- HTTP “acceleration”

Substantial Efforts to Speed Up Basic HTTP/1.0
- Reduce the number of objects fetched [browser cache]
- Reduce data volume [compression of data]
- Header compression [HTTP/2]
- Reduce the latency to the server to fetch the content [proxy cache]
- Remove the extra RTTs to fetch an object [persistent HTTP, aka HTTP/1.1]
- Increase concurrency [multiple TCP connections]
- Asynchronous fetch (multiple streams) using a single TCP connection [HTTP/2]
- Server push [HTTP/2]

Browser Cache and Conditional GET
Goal: don't send the object if the client has an up-to-date cached copy.
- client: specify the date of the cached copy in the HTTP request: If-modified-since: <date>
- server: response contains no object if the cached copy is up to date: HTTP/1.0 304 Not Modified
Two cases:
- object not modified: request carries If-modified-since: <date>; response is HTTP/1.0 304 Not Modified
- object modified: request carries If-modified-since: <date>; response is HTTP/1.1 200 OK … <data>

Web Caches (Proxy)
Goal: satisfy the client request without involving the origin server.
[Figure: client ↔ proxy server ↔ origin server, with http request/response on each hop]

Two Types of Proxies
- Forward proxy: typically in the same network as the client
- Reverse proxy: typically in the same network as the server
http://www.celinio.net/techblog/?p=1027

Benefits of Forward Proxy
Assume: the cache is “close” to the client (e.g., in the same network)
- smaller response time: cache is “closer” to the client
- decreased traffic to distant servers: the link out of the institutional/local ISP network is often the bottleneck
[Figure: institutional network with 10 Mbps LAN and institutional cache, 1.5 Mbps access link to the public Internet and origin servers]
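The access-link benefit can be quantified: with cache hit rate h, only the misses cross the access link, so link load drops by a factor (1 − h). A hypothetical back-of-envelope helper using the slide's 1.5 Mbps link:

```java
public class CacheBenefit {
    // Fraction of the access link consumed, given total client demand
    // (Mbps), link capacity (Mbps), and cache hit rate in [0, 1].
    // Only cache misses cross the access link.
    static double accessLinkUtilization(double demandMbps,
                                        double capacityMbps,
                                        double hitRate) {
        return demandMbps * (1.0 - hitRate) / capacityMbps;
    }
}
```

For example, 1.5 Mbps of demand on a 1.5 Mbps link saturates it (utilization 1.0); a 40% hit rate brings utilization down to 0.6.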

No Free Lunch: Problems of Web Caching
The major issue of web caching is how to maintain consistency. Two ways:
- pull: web caches periodically poll the web server to see if a document has been modified
- push: whenever a server gives a copy of a web page to a web cache, they sign a lease with an expiration time; if the web page is modified before the lease expires, the server notifies the cache
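On the cache side, the push (lease-based) scheme reduces to a freshness test: a copy is usable until its lease expires, unless the server has sent an invalidation. A minimal sketch; the class and field names are assumptions for illustration:

```java
public class LeasedCopy {
    final long leaseExpiresAtMillis;        // end of the lease signed with the server
    boolean invalidatedByServer = false;    // set when a server notification arrives

    LeasedCopy(long leaseExpiresAtMillis) {
        this.leaseExpiresAtMillis = leaseExpiresAtMillis;
    }

    // The cache may serve this copy only while the lease holds
    // and no invalidation has been received.
    boolean isUsable(long nowMillis) {
        return !invalidatedByServer && nowMillis < leaseExpiresAtMillis;
    }
}
```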

HTTP/1.1: Persistent (Keep-Alive/Pipelining) HTTP
- HTTP/1.0 allows only a single outstanding request, while HTTP/1.1 allows request pipelining
- On the same TCP connection: server parses request, responds, parses new request, …
- Client sends requests for all referenced objects as soon as it receives the base HTML
Benefit: fewer RTTs (see Joshua Graessley's WWDC 2012 talk: 3x within iTunes)
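The RTT savings can be made concrete under the same simplified model as before (transmission time ignored). Non-persistent HTTP pays the handshake per object; persistent HTTP pays it once; pipelining additionally overlaps the requests for referenced objects. An illustrative count, not an exact protocol model:

```java
public class PersistenceSavings {
    // Basic HTTP/1.0: 2 RTTs per object (handshake + request/response).
    static double nonPersistentRtts(int n) { return 2.0 * n; }

    // Persistent, no pipelining: one handshake, then 1 RTT per object.
    static double persistentRtts(int n) { return 1.0 + n; }

    // Persistent with pipelining: handshake (1 RTT), base page (1 RTT),
    // then all referenced objects requested back-to-back (~1 more RTT).
    static double pipelinedRtts(int n) { return (n > 1) ? 3.0 : 2.0; }
}
```

For the 11-object page: roughly 22 RTTs non-persistent, 12 persistent, and about 3 with pipelining.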

HTTP/1.0, Keep-Alive, Pipelining
Is this the best we can do?
https://httpd.apache.org/docs/2.2/howto/ssi.html
Source: http://chimera.labs.oreilly.com/books/1230000000545/ch11.html

HTTP/2 Basic Idea: Remove Head-of-Line Blocking in HTTP/1.1
Data flow changes from sequential to parallel: the two requests proceed concurrently.
https://httpd.apache.org/docs/2.2/howto/ssi.html
Demo: https://http2.akamai.com/demo
Source: http://chimera.labs.oreilly.com/books/1230000000545/ch11.html

Observing HTTP/2
- export SSLKEYLOGFILE=/tmp/keylog.txt
- Start Chrome, e.g., /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
- Visit HTTP/2 pages, such as https://http2.akamai.com
- See chrome://net-internals/#http2

HTTP/2 Design: Multi-Streams; Binary Framing
https://tools.ietf.org/html/rfc7540

HTTP/2 Header Compression

HTTP/2 Stream Dependency and Weights

HTTP/2 Server Push

Outline
- Admin and recap
- HTTP “acceleration”
- Network server design

WebServer Implementation
- create ServerSocket(6789)
- connSocket = accept()
- read request from connSocket
- read local file
- write file to connSocket
- close connSocket
[Figure: TCP socket space — a listening socket {*.6789, *.*} with its completed connection queue; an established socket {128.36.232.5:6789, 198.69.10.10:1500} with sendbuf/recvbuf; another listening socket {*.25, *.*}]
Discussion: what does each step do, and how long does it take?

Demo
- Try TCPServer
- Start two TCPClients
- Client 1 starts early but stops
- Client 2 starts later but inputs first

Server Processing Steps
- Accept client connection
- Read request
- Find file
- Send response header
- Read file
- Send data
Some steps may block waiting on disk I/O; others may block waiting on the network.

Writing High-Performance Servers: Major Issues
Many socket and I/O operations can cause a process to block, e.g.:
- accept: waiting for a new connection
- read on a socket: waiting for data or close
- write on a socket: waiting for buffer space
- disk I/O: waiting for a read/write to finish

Goal: Limited Only by the Resource Bottleneck
[Figure: CPU/DISK/NET utilization before vs. after — after, the bottleneck resource is fully utilized]

Outline
- Admin and recap
- HTTP “acceleration”
- Network server design
  - Overview
  - Multi-threaded network servers

Multi-Threaded Servers
- Motivation: avoid blocking the whole program (so that we can reach bottleneck throughput)
- Idea: introduce threads
  - A thread is a sequence of instructions that may execute in parallel with other threads
  - When a blocking operation happens, only the thread performing the operation is blocked

Background: Java Thread Model
- Every Java application has at least one thread: the “main” thread, started by the JVM to run the application's main() method
- Most JVMs use POSIX threads to implement Java threads
- main() can create other threads:
  - explicitly, using the Thread class
  - implicitly, by calling libraries that create threads as a consequence (RMI, AWT/Swing, Applets, etc.)

Thread vs Process

Creating a Java Thread
Two ways to implement a Java thread:
- Extend the Thread class and override its run() method
- Create a class C implementing the Runnable interface, create an object of type C, and wrap it in a Thread object
A thread starts execution after its start() method is called, which begins executing the thread's (or the Runnable object's) run() method. A thread terminates when run() returns.
http://java.sun.com/javase/6/docs/api/java/lang/Thread.html

Option 1: Extending Java Thread

class PrimeThread extends Thread {
    long minPrime;
    PrimeThread(long minPrime) {
        this.minPrime = minPrime;
    }
    public void run() {
        // compute primes larger than minPrime
        . . .
    }
}

PrimeThread p = new PrimeThread(143);
p.start();

Option 1: Extending Java Thread

class RequestHandler extends Thread {
    RequestHandler(Socket connSocket) {
        // …
    }
    public void run() {
        // process request
    }
    …
}

Thread t = new RequestHandler(connSocket);
t.start();

Option 2: Implementing the Runnable Interface

class PrimeRun implements Runnable {
    long minPrime;
    PrimeRun(long minPrime) {
        this.minPrime = minPrime;
    }
    public void run() {
        // compute primes larger than minPrime
        . . .
    }
}

PrimeRun p = new PrimeRun(143);
new Thread(p).start();

Example: a Multi-threaded TCPServer Turn TCPServer into a multithreaded server by creating a thread for each accepted request

Per-Request Thread Server

main() {
    ServerSocket s = new ServerSocket(port);
    while (true) {
        Socket conSocket = s.accept();
        RequestHandler rh = new RequestHandler(conSocket);
        Thread t = new Thread(rh);
        t.start();
    }
}

class RequestHandler implements Runnable {
    RequestHandler(Socket connSocket) { … }
    public void run() {
        // …
    }
}

The main thread stays in the accept loop; each handler thread starts, processes one request, and ends.
Try the per-request-thread TCP server: TCPServerMT.java

Summary: Implementing Threads

class RequestHandler extends Thread {
    RequestHandler(Socket connSocket) { … }
    public void run() {
        // process request
    }
    …
}

Thread t = new RequestHandler(connSocket);
t.start();

class RequestHandler implements Runnable {
    RequestHandler(Socket connSocket) { … }
    public void run() {
        // process request
    }
    …
}

RequestHandler rh = new RequestHandler(connSocket);
Thread t = new Thread(rh);
t.start();

Modeling Per-Request Thread Server: Theory
[Figure: birth-death Markov chain with states 0, 1, …, k, k+1, …, N and stationary probabilities p0, p1, …, pk, pk+1, …, pN; requests arrive at the welcome socket queue]

Problem of Per-Request Thread: Reality
- High thread creation/deletion overhead
- Too many threads → resource overuse → throughput meltdown → response-time explosion
Q: given the average response time and connection arrival rate, how many threads are active on average?

Background: Little's Law (1961)
For any system with no (or low) loss: assume mean arrival rate λ, mean time in system R, and mean number of requests in system Q.
Then the relationship between Q, λ, and R is:
Q = λ R
Proof of Little's Law: see the arrival diagram.
Example: Yale College admits 1500 students each year, and the mean time a student stays is 4 years. How many students are enrolled? Q = 1500 × 4 = 6000.
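Little's law answers both the enrollment example and the earlier thread question with the same one-line computation. A sketch:

```java
public class LittlesLaw {
    // Q = lambda * R: mean number in system equals mean arrival rate
    // times mean time spent in the system.
    static double meanInSystem(double arrivalRate, double meanTime) {
        return arrivalRate * meanTime;
    }
}
```

Enrollment: 1500 students/year × 4 years = 6000 students. Likewise, a per-request-thread server receiving 1000 connections/s with a 0.5 s mean response time would have about 500 threads active on average.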

Little's Law
[Figure: proof sketch — cumulative arrivals A(t) counted 1, 2, 3, … over time t]

Discussion: How to Address the Issue