1 Web Server Support for Tiered Services Telecommunication Management Lab M.G. Choi

2 Contents 1. Introduction 2. Servers and QoS 3. Architecture for Server-Based QoS 4. Prototype 5. Results 6. Summary

3 Introduction • Network QoS has so far focused on performance • Non-isochronous applications such as web pages also benefit from QoS • Example: the growth of e-commerce traffic in North America • The necessity of server QoS: mechanisms and policies are needed for establishing and supporting QoS, and network QoS alone is not sufficient to support end-to-end QoS • The hypothesis is that servers become an important element in delivering QoS

4 • The research questions to be answered: the impact of Internet workloads on servers; the impact of server latency on end-to-end latency; server mechanisms to improve quality of service; how to protect servers from overload; how servers can support tiered user service levels with distinct performance attributes • The importance of requirements on both servers and networks • Show the increasing role of servers in providing end-to-end QoS and the potential of tiered services Introduction (Cont'd)

5 Servers and QoS • Empirical study: instrument and monitor one of the large ISPs in North America in 1997; quantify the delay components for web, news and mail servers • The network topology [Figure 1] • A mixture of active and passive measurement techniques • The NNTP server response time [Figure 2] • The coast-to-coast network response time [Figure 3]

6 (figure-only slide)

7 Trends Affecting Complex E-commerce Applications • Trends improving network performance: decreasing network latency due to the increasing capacity of network backbones; network latency guaranteed by the ISP; caches becoming more pervasive • Trends increasing server latency: flash crowds; new application technologies [Java, SSL, databases, middleware]; much richer, larger media content with more images [audio, voice, video]

8 Overloaded Servers Cause Poor End-to-End QoS • Measurements of busy web sites • The response rate grows linearly until the server nears its maximum capacity in terms of HTTP requests [Figure 4] • An HTTP request vs. a user transaction [Figure 5]

9 Over-Provisioning Servers • The evolution of web applications produces very steep growth in the client demand curve • Internet applications now have an unpredictable client population • No reasonable amount of hardware can guarantee predictable performance for flash crowds • Over-provisioning servers cannot provide tiered services or applications • Network QoS cannot solve scheduling or bottleneck problems at the server, where FIFO request handling ignores priorities • A server QoS mechanism is needed to support tiered service and to provide overload protection

10 • An architecture for servers consisting of multiple nodes: web, application and database • The philosophy: create a low-overhead, scalable infrastructure that is transparent to applications and web servers • Two goals supporting two key capabilities: manage peaks in client HTTP request rates effectively; support tiered service levels that enable preferential treatment of users or services (to improve the performance of premium tiers) • The architecture is presented in [Figure 6] • The request class is introduced for tiered service (see the sketch below) • The architecture supports integration with network QoS mechanisms and management systems Architecture for Server-Based QoS
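
As a rough illustration of the request-class idea, the sketch below models a class as a small descriptor that the classifier, admission controller and scheduler could all consult. The class names, fields and values are assumptions for illustration, not taken from the paper.

```c
/* Sketch of a request-class descriptor, assuming two tiers (premium,
 * basic); field names and values are illustrative only. */
#include <stdio.h>

typedef struct {
    const char *name;       /* class label, e.g. "premium" */
    int priority;           /* scheduling priority (lower = served first) */
    int max_queue_len;      /* admission-control trigger for this class */
    int weight;             /* share used by weighted scheduling policies */
} request_class_t;

static const request_class_t classes[] = {
    { "premium", 0, 256, 3 },   /* preferential treatment */
    { "basic",   1, 128, 1 },   /* best-effort tier */
};

int main(void)
{
    for (unsigned i = 0; i < sizeof classes / sizeof classes[0]; i++)
        printf("%-8s prio=%d max_q=%d weight=%d\n",
               classes[i].name, classes[i].priority,
               classes[i].max_queue_len, classes[i].weight);
    return 0;
}
```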

11 (figure-only slide)

12 Related Work and Prototype • Related work: operating-system control mechanisms to ensure class-based performance in web servers; scheduling of web server worker processes with the same priority; research on intelligent switches or routers • Prototype: modifies the FIFO servicing model of Apache 1.2.4; identical worker processes listen on a UNIX socket for HTTP connections and serve requests; the components are the connection manager, request classifier, admission controller, request scheduler and resource scheduler

13 Connection Manager • A new, unique acceptor process that intercepts all requests • Classifies each request and places it on the appropriate tier queue (see the sketch below) • The connection manager must run frequently enough to keep the request queues full; otherwise worker processes may execute requests from lower tiers, or premium requests may be prevented from establishing a TCP connection and thus be dropped
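
A minimal sketch of the acceptor idea, assuming a single process that accepts every TCP connection, classifies it and hands it to a per-tier queue. The port number, the stub classifier and tier_enqueue() are illustrative placeholders, not the WebQoS sources:

```c
#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

enum { TIER_PREMIUM, TIER_BASIC };

/* Stub: the classification rules are sketched on the later slides. */
static int classify(const struct sockaddr_in *peer)
{
    (void)peer;
    return TIER_BASIC;
}

/* Stub: a real implementation would place fd on a shared per-tier
 * queue that the Apache worker processes drain. */
static void tier_enqueue(int tier, int fd)
{
    printf("queued fd=%d on tier %d\n", fd, tier);
    close(fd);
}

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port   = htons(8080),
                                .sin_addr.s_addr = htonl(INADDR_ANY) };
    if (lfd < 0 || bind(lfd, (struct sockaddr *)&addr, sizeof addr) < 0 ||
        listen(lfd, 64) < 0)
        return 1;

    for (;;) {                               /* single acceptor loop */
        struct sockaddr_in peer;
        socklen_t len = sizeof peer;
        int fd = accept(lfd, (struct sockaddr *)&peer, &len);
        if (fd < 0)
            continue;                        /* transient accept error */
        tier_enqueue(classify(&peer), fd);
    }
}
```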

14 (figure-only slide)

15 Request Classification • Identifies and classifies each incoming request into a class • The classification mechanisms are user-class based or target-class based (see the sketch below) • User-class based (source of the request): client IP address; HTTP cookie; browser plug-ins • Target based: URL; request type or file name path; destination IP address
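
A sketch of the two classification styles named above, assuming one user-class rule keyed on the client IP address and one target-based rule keyed on the URL path; the subnet and path prefix are made-up examples:

```c
#include <stdio.h>
#include <string.h>

enum { TIER_PREMIUM, TIER_BASIC };

/* User-class based: e.g. requests from a known subscriber subnet. */
static int classify_by_client(const char *client_ip)
{
    return strncmp(client_ip, "192.168.1.", 10) == 0 ? TIER_PREMIUM : TIER_BASIC;
}

/* Target based: e.g. requests whose URL falls under a premium path. */
static int classify_by_target(const char *url)
{
    return strncmp(url, "/premium/", 9) == 0 ? TIER_PREMIUM : TIER_BASIC;
}

int main(void)
{
    printf("client rule -> tier %d\n", classify_by_client("192.168.1.77"));
    printf("target rule -> tier %d\n", classify_by_target("/basic/index.html"));
    return 0;
}
```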

16 Admission Control • When the server is saturated, the admission control rule favours premium requests over basic requests • Two admission-control trigger parameters (see the sketch below): total requests queued; number of premium requests queued • Rejection is done by simply closing the connection
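
A sketch of how the two trigger parameters could drive the admission decision: basic requests are rejected once total queueing crosses a threshold, and premium requests are rejected only when premium queueing also crosses its own threshold. The threshold values are illustrative assumptions:

```c
#include <stdbool.h>
#include <stdio.h>

enum { TIER_PREMIUM, TIER_BASIC };

#define TOTAL_QUEUED_LIMIT    200   /* trigger 1: all classes combined */
#define PREMIUM_QUEUED_LIMIT  150   /* trigger 2: premium class alone  */

static bool admit(int tier, int total_queued, int premium_queued)
{
    if (total_queued < TOTAL_QUEUED_LIMIT)
        return true;                          /* server not saturated */
    if (tier == TIER_PREMIUM)
        return premium_queued < PREMIUM_QUEUED_LIMIT;
    return false;                             /* basic rejected first */
}

int main(void)
{
    /* Rejection itself is just closing the connection (not shown). */
    printf("basic under load:   %s\n", admit(TIER_BASIC, 250, 40) ? "admit" : "reject");
    printf("premium under load: %s\n", admit(TIER_PREMIUM, 250, 40) ? "admit" : "reject");
    return 0;
}
```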

17 Request and Resource Scheduling • To process requests, the next request is selected according to the scheduling policy • The scheduling policy may offer many options for processing • Several potential policies (two are sketched below): strict priority; weighted priority; shared capacity; fixed capacity; earliest deadline first • Resource scheduling gives more resources to premium requests and fewer resources to basic requests
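
A sketch of two of the listed policies, strict priority and weighted priority, expressed as the decision of which tier to serve next; the queue depths and the 3:1 weighting are illustrative assumptions:

```c
#include <stdio.h>

enum { TIER_PREMIUM, TIER_BASIC, TIER_COUNT };

/* Strict priority: always drain premium before touching basic. */
static int pick_strict(const int qlen[TIER_COUNT])
{
    if (qlen[TIER_PREMIUM] > 0) return TIER_PREMIUM;
    if (qlen[TIER_BASIC]   > 0) return TIER_BASIC;
    return -1;                                   /* nothing queued */
}

/* Weighted priority: prefer premium three times for every basic pick. */
static int pick_weighted(const int qlen[TIER_COUNT], unsigned *round)
{
    int prefer = (*round)++ % 4 < 3 ? TIER_PREMIUM : TIER_BASIC;
    if (qlen[prefer] > 0) return prefer;
    return pick_strict(qlen);                    /* fall back if empty */
}

int main(void)
{
    int qlen[TIER_COUNT] = { 5, 5 };             /* static demo depths */
    unsigned round = 0;
    for (int i = 0; i < 8; i++)
        printf("%d ", pick_weighted(qlen, &round));
    printf("\n");                                /* prints 0 0 0 1 ... */
    return 0;
}
```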

18 Apache Source Modification • The number of Apache code changes is minimal • http_main.c modifications: start the connection manager process and set up the queues; change the child Apache processes to accept requests from the connection manager rather than from the HTTP socket • An additional connection_mgr.c is linked in: the classification policy, enqueue mechanism, dequeue policy and connection manager process code • Additional shared memory and semaphores (sketched below): the state of the queues, each class's queue length, the number of requests executing in each class, the last class to have a request dequeued, and the total count of waiting requests across classes; access to shared memory is synchronised through semaphores
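
A sketch of the shared state implied by that slide: per-class queue lengths and executing counts, the last class dequeued and a running total, kept in a shared mapping and guarded by a process-shared semaphore. The field names and the mmap/POSIX-semaphore choice are assumptions; Apache 1.2.4 had its own shared-memory and locking machinery:

```c
/* build: cc -pthread qos_state.c */
#include <semaphore.h>
#include <stdio.h>
#include <sys/mman.h>

enum { TIER_COUNT = 2 };

typedef struct {
    sem_t lock;                    /* serialises access from all processes */
    int   queue_len[TIER_COUNT];   /* requests waiting, per class */
    int   executing[TIER_COUNT];   /* requests currently being served */
    int   last_dequeued_class;     /* used by weighted/shared policies */
    int   total_waiting;           /* sum of queue_len[] */
} qos_state_t;

int main(void)
{
    /* Anonymous shared mapping: visible to child processes after fork(). */
    qos_state_t *st = mmap(NULL, sizeof *st, PROT_READ | PROT_WRITE,
                           MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (st == MAP_FAILED)
        return 1;
    sem_init(&st->lock, /*pshared=*/1, 1);

    sem_wait(&st->lock);           /* e.g. the connection manager enqueues */
    st->queue_len[0]++;
    st->total_waiting++;
    sem_post(&st->lock);

    printf("waiting=%d\n", st->total_waiting);
    return 0;
}
```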

19 Results • Comparison of response time, throughput and error rate for premium and basic clients under priority scheduling • Comparison of premium and basic client performance in two cases: the premium request rate is fixed; the premium request rate is identical to the basic clients' rate • In both cases the quality of service for premium clients is better than for basic clients • Setup: four clients, one server, a 100Base-T network • The httperf tool is used by the four clients to generate load

20 (figure-only slide)

21 (figure-only slide)

22 Summary • Contributions: motivates the need for server QoS to support tiered user service levels; protects servers from client demand overload; develops the WebQoS architecture; shows the benefits of the architecture through experiments • Unsolved problems: tighter integration of server and network QoS, and the ability to communicate QoS attributes across the network; more flexible admission control mechanisms; lightweight signalling mechanisms for high-priority traffic; what benefits can be obtained from end-to-end QoS

23 Critique • Strong points: shows that the web bottleneck is on the server side, not in the network; presents an architecture that performs differentiated service and verifies that the architecture is viable; the architecture can be combined with other differentiated-service approaches • Weak points: the architecture depends heavily on the state of the connection manager, which may itself become a bottleneck; the experiments are not close enough to a real environment to sufficiently demonstrate the effectiveness of the proposed architecture

