1 Network Applications: HTTP; High-Perf HTTP Server 2/2/2012

2 Outline
- Admin and recap
- HTTP
- High-performance HTTP server

3 Recap: TCP Socket
[Figure: TCP socket table]
- state: listening; address: {*.6789, *:*}; completed connection queue; sendbuf; recvbuf
- state: listening; address: {*.25, *:*}; completed connection queue; sendbuf; recvbuf
- state: established; address: {128.36.232.5:6789, 198.69.10.10:1500}; sendbuf; recvbuf

4 Recap: FTP: Client-Server with Separate Control and Data Connections
- Two parallel TCP connections are opened:
  - control: exchange commands and responses between client and server ("out-of-band" control); TCP control connection to port 21 at the server
  - data: file data to/from the server; TCP data connection from server:20 to clientip:cport
[Figure: FTP client/server exchange — PORT clientip:cport, then RETR file.dat]

5 Recap: the HTTP Protocol
[Figure: clients (a PC running Explorer, a Linux machine running Firefox) exchange HTTP requests/responses with a server running the Apache Web server]
Example request:
    GET /somedir/page.html HTTP/1.0
    Host: www.someschool.edu
    Connection: close
    User-agent: Mozilla/4.0
    Accept: text/html, image/gif
    Accept-language: en
    (extra carriage return, line feed)

6 Design Exercise
- Work out the workflow of an HTTP server processing a GET request:
    GET /somedir/page.html HTTP/1.0
    Host: www.someschool.edu

7 Simple HTTP Server
[Figure: TCP socket space on hosts 128.36.232.5 and 128.36.230.2 — listening sockets {*.6789, *.*} and {*.25, *.*}; established socket {128.36.232.5:6789, 198.69.10.10:1500}]
Server workflow:
    Create ServerSocket(6789)
    connSocket = accept()
    read request from connSocket
    map URL to file
    read from file / write to connSocket
    close connSocket
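A minimal Java sketch of this workflow (an illustration only: the port, the Content-Type, and the file-serving path are assumptions, and error handling, header parsing, and path sanitization are omitted):

    import java.io.*;
    import java.net.*;
    import java.nio.file.*;

    public class SimpleHttpServer {
        public static void main(String[] args) throws IOException {
            ServerSocket serverSocket = new ServerSocket(6789);        // Create ServerSocket(6789)
            while (true) {
                Socket connSocket = serverSocket.accept();             // connSocket = accept()
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(connSocket.getInputStream()));
                String requestLine = in.readLine();                    // e.g., "GET /somedir/page.html HTTP/1.0"
                String path = requestLine.split(" ")[1];               // map URL to file (no sanitization here)
                byte[] body = Files.readAllBytes(Paths.get("." + path));
                OutputStream out = connSocket.getOutputStream();
                out.write("HTTP/1.0 200 OK\r\nContent-Type: text/html\r\n\r\n".getBytes());
                out.write(body);                                       // read from file / write to connSocket
                connSocket.close();                                    // close connSocket
            }
        }
    }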

8 Dynamic Content Pages
- There are multiple approaches to generating dynamic web pages:
  - Embed code into pages: the HTTP server includes an interpreter for that type of page
    (e.g., http://www.cs.yale.edu/index.shtml)
  - Invoke external programs
    (e.g., http://www.cs.yale.edu/cgi-bin/ureserve.pl, http://www.google.com/search?q=Yale&sourceid=chrome)
    Q: how to integrate an external program's output?

9 Invoking External Programs
- Two issues:
  - Pass HTTP request parameters to the external program
  - Redirect the external program's output to the socket

10 Example: CGI
- Configuration indicates that a mapped file is executable
  - The web server sets up environment variables
    (see http://httpd.apache.org/docs/2.2/env.html; CGI standard: http://www.ietf.org/rfc/rfc3875)
  - Starts the executable as a child process
  - Redirects input/output of the child process to the socket

11 Example: CGI
- Example: GET /search?q=Yale&sourceid=chrome HTTP/1.0
  - The mapped file search is an executable
  - Set up environment variables, in particular $QUERY_STRING=q=Yale&sourceid=chrome
  - Start search and redirect its input/output
(see http://docs.oracle.com/javase/1.5.0/docs/api/java/lang/ProcessBuilder.html and http://docs.oracle.com/javase/1.5.0/docs/api/java/lang/Process.html)
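Using the ProcessBuilder/Process APIs linked above, a hedged sketch of these steps might look as follows (the helper class, the executable name "./search", and the already-parsed query string are assumptions; a full CGI implementation sets many more variables than QUERY_STRING):

    import java.io.*;
    import java.net.Socket;

    class CgiRunner {
        // Hypothetical helper: runs the executable and copies its stdout to the client socket.
        static void runCgi(Socket connSocket, String executable, String queryString) throws IOException {
            ProcessBuilder pb = new ProcessBuilder(executable);
            pb.environment().put("QUERY_STRING", queryString);   // pass request parameters via the environment
            pb.redirectErrorStream(true);
            Process child = pb.start();                          // start the child process

            InputStream fromChild = child.getInputStream();      // the child's standard output
            OutputStream toClient = connSocket.getOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = fromChild.read(buf)) != -1) {            // redirect the output to the socket
                toClient.write(buf, 0, n);
            }
        }
    }

For the example request, the server would call something like runCgi(connSocket, "./search", "q=Yale&sourceid=chrome").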

12 Client Dynamic Pages
- There is no need to change the server
- See ajax.html for a client code example

13 HTTP Message Extension: POST
- If an HTML page contains forms, or the parameters are too large, they are sent using POST and encoded in the message body
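For concreteness, a hypothetical form submission with the parameters encoded in the message body might look like this (the path and field names are made up):

    POST /cgi-bin/ureserve.pl HTTP/1.0
    Host: www.cs.yale.edu
    Content-Type: application/x-www-form-urlencoded
    Content-Length: 21

    name=Alice&course=433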

14 HTTP Message Flow Extensions: Keeping State
- Why do we need to keep state?
- How does FTP keep state (e.g., the current directory), and why does HTTP not use that approach?

15 User-Server Interaction: Cookies
Goal: no explicit application-level session
- Server sends a "cookie" to the client in the response message: Set-cookie: 1678453
- Client presents the cookie in later requests: Cookie: 1678453
- Server matches the presented cookie with server-stored info
  - authentication
  - remembering user preferences, previous choices
[Figure: message flow — usual request/response, with the first response carrying Set-cookie: # and later requests carrying Cookie: #, triggering cookie-specific actions at the server]

16 User-Server Interaction: Authentication
Authentication goal: control access to server documents
- Stateless: the client must present authorization in each request
- Authorization: typically a name and password
  - Authorization: header line in the request
  - If no authorization is presented, the server refuses access and sends a WWW-Authenticate: header line in the response
[Figure: message flow — request without credentials; 401: authorization req. + WWW-Authenticate:; later requests carry an Authorization: line and receive the usual responses]
The browser caches the name & password so that the user does not have to re-enter them repeatedly.
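The slide does not name a scheme; assuming HTTP Basic authentication (a common name/password case), the Authorization: line can be built as in this hedged sketch (Java 8+ java.util.Base64; the credentials are made up):

    import java.util.Base64;

    public class BasicAuthHeader {
        public static void main(String[] args) {
            String credentials = "alice:secret";   // hypothetical name:password pair
            String header = "Authorization: Basic "
                    + Base64.getEncoder().encodeToString(credentials.getBytes());
            System.out.println(header);            // must be sent in every request (stateless server)
        }
    }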

17 HTTP/1.0 Delay
- For each object:
  - TCP handshake: 1 RTT
  - Client request and server response: at least 1 RTT (if the object fits in one packet)
- Discussion: how to reduce delay?
[Figure: timeline — TCP SYN; TCP ACK + HTTP GET; base page; then the same exchange repeated for image 1]

18 HTTP Message Flow: Persistent HTTP
- Default for HTTP/1.1
- On the same TCP connection: the server parses a request, responds, parses the next request, ...
- The client sends requests for all referenced objects as soon as it receives the base HTML
- Fewer RTTs
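As a rough back-of-the-envelope comparison (assumptions: a base page referencing N small objects, each fitting in one packet, transmission time ignored):

    T_{\mathrm{HTTP/1.0,\ non\text{-}persistent}} \approx 2\,\mathrm{RTT}\cdot(N+1)

    T_{\mathrm{persistent\ +\ pipelined}} \approx \underbrace{1\,\mathrm{RTT}}_{\text{handshake}} + \underbrace{1\,\mathrm{RTT}}_{\text{base page}} + \underbrace{1\,\mathrm{RTT}}_{\text{all }N\text{ objects}} = 3\,\mathrm{RTT}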

19 Browser Cache and Conditional GET
- Goal: don't send the object if the client has an up-to-date stored (cached) version
- Client: specifies the date of its cached copy in the HTTP request: If-Modified-Since: <date>
- Server: the response contains no object if the cached copy is up to date: HTTP/1.0 304 Not Modified
[Figure: message flow — request with If-Modified-Since:; response HTTP/1.0 304 Not Modified if the object is unmodified; response HTTP/1.1 200 OK with the object if it has been modified]
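A minimal client-side sketch of a conditional GET using java.net.HttpURLConnection (the URL and the cached-copy timestamp are assumptions; a real browser cache stores both per object):

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ConditionalGet {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://www.example.com/somedir/page.html");
            long cachedTimestamp = System.currentTimeMillis() - 3600_000; // when our copy was fetched
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setIfModifiedSince(cachedTimestamp);   // adds the If-Modified-Since: header
            int code = conn.getResponseCode();
            if (code == HttpURLConnection.HTTP_NOT_MODIFIED) {
                System.out.println("304: use the cached copy");
            } else {
                System.out.println(code + ": read the new object from conn.getInputStream()");
            }
        }
    }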

20 Summary: HTTP
- HTTP message format
  - ASCII (human-readable): request line, header lines, entity body; response status line
- HTTP message flow
  - Stateless server: each request is self-contained; thus cookies and authentication are needed in each message
  - Reducing latency:
    - persistent HTTP (the extra RTTs are a problem introduced by layering!)
    - conditional GET reduces server/network workload and latency
    - caches and proxies reduce traffic and latency
- Is the application extensible, scalable, robust, secure?

21 WebServer Implementation
[Figure: TCP socket space as on slide 7 — listening sockets {*.6789, *.*} and {*.25, *.*}; established socket {128.36.232.5:6789, 198.69.10.10:1500}]
Server workflow:
    Create ServerSocket(6789)
    connSocket = accept()
    read request from connSocket
    read local file
    write file to connSocket
    close connSocket
Discussion: what does each step do and how long does it take?

22 Test
- telnet localhost 6789
- Start a browser and connect to the server
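For example, after connecting with telnet you can type a minimal request by hand (the path /index.html is an assumption; an empty line ends the request headers, after which the server's response appears on the same connection):

    $ telnet localhost 6789
    GET /index.html HTTP/1.0
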

23 Writing High-Performance Servers: Major Issues
- Many socket/IO operations can cause a process to block, e.g.:
  - accept: waiting for a new connection
  - read on a socket: waiting for data or close
  - write on a socket: waiting for buffer space
  - I/O read/write: waiting for the disk to finish
- Thus a crucial aspect of network server design is concurrency design (non-blocking)
  - for high performance
  - to avoid denial of service
- Concurrency is also important for clients!

24 Using Multiple Threads for Servers
- A thread is a sequence of instructions that may execute in parallel with other threads
- A multi-threaded server is a concurrent program, since it has multiple threads active at the same time, e.g.:
  - we can have one thread for each client connection
  - thus, only the flow (thread) processing a particular request is blocked

25 Multi-Threaded Server
- A multithreaded server might run on one CPU
  - The CPU alternates between running different threads
  - The scheduler takes care of the details
  - Switching between threads might happen at any time
- It might also run in parallel on a multiprocessor machine

26 Java Thread Model
- Every Java application has at least one thread
  - The "main" thread, started by the JVM to run the application's main() method
  - Most JVMs use POSIX threads to implement Java threads
- main() can create other threads
  - Explicitly, using the Thread class
  - Implicitly, by calling libraries that create threads as a consequence (RMI, AWT/Swing, Applets, etc.)

27 Thread vs. Process
[Figure: thread vs. process comparison]

28 Java Thread Class
- Concurrency is introduced through objects of the class Thread
  - Provides a 'handle' to an underlying thread of control
- Threads are organized into thread groups
  - A thread group represents a set of threads (e.g., activeGroupCount())
  - A thread group can also include other thread groups, forming a tree
  - Why thread groups?
(see http://java.sun.com/javase/6/docs/api/java/lang/ThreadGroup.html)
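One hedged answer to "Why thread groups?" (the group name and the empty handler body below are assumptions): a group lets the server treat related threads as one unit, e.g., to count or enumerate them, using the Thread(ThreadGroup, Runnable) constructor listed on the next slide:

    public class GroupedHandlers {
        public static void main(String[] args) {
            ThreadGroup workers = new ThreadGroup("request-handlers");  // one group for all handler threads
            Runnable handler = () -> {                                   // stand-in for a request handler
                try { Thread.sleep(1000); } catch (InterruptedException e) { }
            };
            new Thread(workers, handler).start();
            new Thread(workers, handler).start();
            System.out.println(workers.activeCount() + " active threads, "
                    + workers.activeGroupCount() + " active subgroups");
        }
    }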

29 Some Main Java Thread Methods
- Thread(Runnable target): allocates a new Thread object
- Thread(String name): allocates a new Thread object
- Thread(ThreadGroup group, Runnable target): allocates a new Thread object
- start(): starts the processing of a thread; the JVM calls the run() method

30 Creating a Java Thread
- Two ways to implement a Java thread:
  - Extend the Thread class and override its run() method
  - Create a class C implementing the Runnable interface, create an object of type C, then use a Thread object to wrap it
- A thread starts execution after its start() method is called, which begins executing the thread's (or the Runnable object's) run() method
- A thread terminates when the run() method returns
(see http://java.sun.com/javase/6/docs/api/java/lang/Thread.html)

31 Option 1: Extending Java Thread

    class PrimeThread extends Thread {
        long minPrime;
        PrimeThread(long minPrime) {
            this.minPrime = minPrime;
        }
        public void run() {
            // compute primes larger than minPrime...
        }
    }

    PrimeThread p = new PrimeThread(143);
    p.start();

32 Option 1: Extending Java Thread

    class RequestHandler extends Thread {
        RequestHandler(Socket connSocket) {
            // …
        }
        public void run() {
            // process request
        }
        // …
    }

    Thread t = new RequestHandler(connSocket);
    t.start();

33 Option 2: Implement the Runnable Interface

    class PrimeRun implements Runnable {
        long minPrime;
        PrimeRun(long minPrime) {
            this.minPrime = minPrime;
        }
        public void run() {
            // compute primes larger than minPrime...
        }
    }

    PrimeRun p = new PrimeRun(143);
    new Thread(p).start();

34 Option 2: Implement the Runnable Interface

    class RequestHandler implements Runnable {
        RequestHandler(Socket connSocket) {
            // …
        }
        public void run() {
            // process request
        }
        // …
    }

    RequestHandler rh = new RequestHandler(connSocket);
    Thread t = new Thread(rh);
    t.start();

35 Example: a Multi-Threaded TCPServer
- The program creates a thread for each request

36 Per-Request Thread Server

    main() {
        ServerSocket s = new ServerSocket(port);
        while (true) {
            Socket conSocket = s.accept();
            Thread t = new RequestHandler(conSocket);
            t.start();
        }
    }

Try the per-request-thread TCP server: TCPServerMT.java
[Figure: the main thread loops on accept(); a handler thread starts for each request and ends when the request is done]

37 Summary: Scaling Servers
- Many server socket/IO operations can cause a process to block, e.g.:
  - accept: waiting for a new connection
  - read on a socket: waiting for data or close
  - write on a socket: waiting for buffer space
  - I/O read/write: waiting for the disk to finish
- Thus a crucial aspect of large-scale server design is concurrency design (non-blocking)
- To support concurrency:
  - Multiplexing: a single server program at a given port can handle multiple connections simultaneously
  - Threads or other non-blocking execution techniques
[Figure: server loop — Create ServerSocket(6789); connSocket = accept(); read request from connSocket; process request; close connSocket]

38 Modeling a Per-Request Thread Server
[Figure: queueing model — states 0, 1, ..., k, k+1, ..., N (number of active handler threads) with probabilities p_0, p_1, ..., p_k, p_{k+1}, ..., p_N; arriving requests wait in the welcome socket queue]

39 Problems of a Per-Request Thread Server
- High thread creation/deletion overhead
- Too many threads → resource overuse → throughput meltdown → response time explosion
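A common remedy, sketched here only as an illustration (the slides do not prescribe it), is to reuse a small fixed number of threads via a pool, so no thread is created or destroyed per request and the thread count is bounded. Assumptions: port 6789, a pool of 32 workers, and the Runnable RequestHandler from slide 34.

    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ThreadPoolServer {
        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(32);  // reuse 32 worker threads
            ServerSocket s = new ServerSocket(6789);
            while (true) {
                Socket connSocket = s.accept();
                pool.execute(new RequestHandler(connSocket));         // no per-request thread creation
            }
        }
    }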

40 Backup Slides

41 Web Caches (Proxy)
Goal: satisfy the client request without involving the origin server
- User sets browser: Web accesses go via the web cache
- Client sends all HTTP requests to the web cache
  - if the object is at the web cache, the cache immediately returns the object in an HTTP response
  - else the cache requests the object from the origin server, then returns the HTTP response to the client
[Figure: clients send HTTP requests to a proxy server, which forwards misses to the origin servers]

42 Benefits of Web Caching
Assume: the cache is "close" to the client (e.g., in the same network)
- Smaller response time: the cache is "closer" to the client
- Decreased traffic to distant servers
  - the link out of the institutional/local ISP network is often the bottleneck
[Figure: institutional network with a 10 Mbps LAN and an institutional cache, connected to origin servers on the public Internet over a 1.5 Mbps access link]

43 Proxy Caches
[Figure: users connect through a regional network of proxy caches to the rest of the Internet; the access link is the bottleneck]
Cache Sharing: Internet Cache Protocol (ICP)

44 Cache Sharing via ICP
- When one proxy has a cache miss, it sends queries to all siblings (and parents): "do you have the URL?"
- Whichever cache responds first with "Yes" is sent a request to fetch the file
- If there is no "Yes" response within a certain time limit, the request is sent to the Web server
[Figure: sibling caches with an optional parent cache]
Discussion: where is the performance bottleneck of ICP?

45 Summary Cache
- Basic idea:
  - let each proxy keep a directory of which URLs are cached at every other proxy, and use the directory as a filter to reduce the number of queries
- Problem: storage requirement
  - solution: compress the directory → an imprecise, but inclusive, directory

46 The Problem
[Figure: Proxy A stores a list of cached URLs (abc.com/index.html, xyz.edu/..., ...); what compact representation of this list can Proxy B keep?]

47 Bloom Filters
- Support a membership test for a set of keys
- Setup: for each URL u in the set, k hash functions H_1(u), ..., H_k(u) map u to positions in an m-bit vector V_B, and those bits are set to 1
- To check whether URL x is at B, compute H_1(x), H_2(x), H_3(x), H_4(x) and check the corresponding bits of V_B
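A minimal Bloom-filter sketch in Java (illustrative assumptions: m bits, k hash positions derived by seeding the URL's hashCode; a real filter would use stronger, independent hash functions):

    import java.util.BitSet;

    public class BloomFilter {
        private final BitSet bits;
        private final int m;   // number of bits in the vector V_B
        private final int k;   // number of hash functions

        public BloomFilter(int m, int k) {
            this.bits = new BitSet(m);
            this.m = m;
            this.k = k;
        }

        private int hash(String url, int seed) {
            // simple seeded hash; positions must be non-negative and < m
            return Math.floorMod(url.hashCode() * 31 + seed * 0x9e3779b9, m);
        }

        public void add(String url) {                 // set k bits for this URL
            for (int i = 0; i < k; i++) bits.set(hash(url, i));
        }

        public boolean mightContain(String url) {
            for (int i = 0; i < k; i++) {
                if (!bits.get(hash(url, i))) return false;  // any 0 bit => definitely not present
            }
            return true;                                    // may be a false positive
        }
    }

With, say, new BloomFilter(1 << 20, 4), proxy A can ship just the bit vector to proxy B; B then answers "possibly cached at A" or "definitely not at A" locally, which matches the imprecise-but-inclusive directory idea on slide 45.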

48 No Free Lunch: Problems of Web Caching
- The major issue of web caching is how to maintain consistency
- Two ways:
  - pull: web caches periodically poll the web server to see if a document has been modified
  - push: whenever a server gives a copy of a web page to a web cache, they sign a lease with an expiration time; if the web page is modified before the lease expires, the server notifies the cache

