Web Server Support for Tiered Services Telecommunication Management Lab M.G. Choi.


Contents 1. Introduction 2. Servers and QoS 3. Architecture for Server-Based QoS 4. Prototype 5. Results 6. Summary

Introduction  Network QoS has so far focused on performance  Non-isochronous applications such as web pages can also benefit from QoS  Example: the rapid growth of North America's e-commerce traffic  The necessity of server QoS: mechanisms and policies are needed for establishing and supporting QoS, and network QoS alone is not sufficient to provide end-to-end QoS  The hypothesis: servers are becoming an important element in delivering QoS

 The research questions to be answered: the impact of Internet workloads on servers; the impact of server latency on end-to-end latency; server mechanisms to improve quality of service; ways to protect the server from overload; ways for the server to support tiered user service levels with distinct performance attributes  The importance of placing requirements on both servers and networks  Shows the increasing role of servers in providing end-to-end QoS and the potential of tiered services Introduction (Cont'd)

Servers and QoS  Empirical study Instrumented and monitored one of the large ISPs in North America in 1997 Quantified the delay components for web, news, and mail servers The network topology [figure 1] A mixture of active and passive measurement techniques The NNTP server response time [figure 2] The coast-to-coast network response time [figure 3]

Trends Affecting Complex E-commerce Applications  Trends improving network performance: Decreasing network latency due to increasing backbone capacity Network latency guarantees offered by ISPs Caches becoming more pervasive  Trends increasing server latency: Flash crowds New application technologies (Java, SSL, databases, middleware) Much richer, larger media content (images, audio, video)

Overloaded Servers Cause Poor End-to-End QoS  Measurements of busy web sites  The response rate grows linearly until the server nears its maximum capacity in terms of HTTP requests [figure 4]  HTTP and a user transaction [figure 5]

Over-Provisioning Servers  The evolution of web applications produces very steep client demand curves  Internet applications now face unpredictable client populations  No reasonable amount of hardware can guarantee predictable performance for flash crowds  Over-provisioning servers cannot provide tiered services or applications  Network QoS cannot solve scheduling or bottleneck problems at the server; its priorities are ignored by the server's FIFO processing  A server QoS mechanism supports tiered service and provides overload protection

 An architecture for servers consisting of multiple nodes: web, application, and database  Philosophy: create a low-overhead, scalable infrastructure that is transparent to applications and web servers  Two goals, supporting two key capabilities: Effectively manage peaks in client HTTP request rates Support tiered service levels that enable preferential treatment of users or services (to improve the performance of premium tiers)  The architecture is presented in [Figure 6]  The notion of a request class is introduced for tiered service  The architecture supports integration with network QoS mechanisms and management systems Architecture for Server-Based QoS

Related Work and Prototype  Related work: Operating-system control mechanisms to ensure class-based performance in web servers Scheduling of web server worker processes with the same priority Research on intelligent switches and routers  Prototype: Modifies the FIFO servicing model of Apache Identical worker processes listen on a UNIX socket for HTTP connections and serve requests Components: connection manager, request classifier, admission controller, request scheduler, and resource scheduler

Connection Manager  A new, single acceptor process that intercepts all requests  Classifies each request and places it on the appropriate tier queue  The connection manager must run frequently enough to keep the request queues full; otherwise worker processes may execute requests from lower tiers, and premium requests may be prevented from establishing a TCP connection and thus dropped
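The acceptor-and-tier-queue behavior described above can be sketched roughly as follows. This is an illustrative Python model, not the actual Apache prototype; the names (`ConnectionManager`, `classify_stub`, `TIERS`) and the cookie convention are invented for the sketch.

```python
from collections import deque

# Tiers in priority order; the prototype's actual class names are not given.
TIERS = ("premium", "basic")

def classify_stub(request):
    # Placeholder tier decision; a real classifier inspects IP, cookies, or URL.
    return "premium" if request.get("cookie") == "tier=premium" else "basic"

class ConnectionManager:
    """Single acceptor: intercepts requests and fills per-tier queues."""
    def __init__(self):
        self.queues = {tier: deque() for tier in TIERS}

    def accept(self, request):
        # Classify the request and place it on the appropriate tier queue.
        self.queues[classify_stub(request)].append(request)

    def dequeue_for_worker(self):
        # Workers drain higher tiers first, falling back to lower tiers.
        for tier in TIERS:
            if self.queues[tier]:
                return self.queues[tier].popleft()
        return None
```

In this model, the acceptor's only job is classification and enqueueing; actual request processing stays in the worker processes, which matches the low-overhead, transparent design goal stated earlier.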

Request Classification  Identifies and classifies incoming requests into classes  Classification mechanisms are user-class based or target-class based  User-class based (source of request): Client IP address HTTP cookie Browser plug-ins  Target-class based: URL Request type or file name path Destination IP address
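A minimal sketch of such a classifier, combining one user-class rule (client IP), one cookie rule, and one target-class rule (URL path). The specific prefixes and cookie value are invented examples, not values from the paper.

```python
# Example rule values (assumptions, for illustration only).
PREMIUM_IP_PREFIX = "10.1."       # user-class rule: client IP address
PREMIUM_COOKIE = "svc=premium"    # user-class rule: HTTP cookie
PREMIUM_URL_PREFIX = "/store/"    # target-class rule: URL / file name path

def classify(client_ip, cookie, url):
    """Return the request class, "premium" or "basic"."""
    if client_ip.startswith(PREMIUM_IP_PREFIX):
        return "premium"
    if cookie == PREMIUM_COOKIE:
        return "premium"
    if url.startswith(PREMIUM_URL_PREFIX):
        return "premium"
    return "basic"
```

Any one matching rule is enough to promote a request here; a deployment could just as well require user-class and target-class rules to agree.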

Admission Control  When the server is saturated, the admission control rule favors premium over basic (Premium > Basic)  Two admission control trigger parameters: Total requests queued Number of premium requests queued  Rejection is done by simply closing the connection
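The two-trigger admission test might look like the following sketch. The threshold constants are invented (the slides do not give the prototype's values), and returning `False` stands in for "close the connection".

```python
# Invented thresholds for illustration; tune to server capacity in practice.
MAX_TOTAL_QUEUED = 100     # trigger 1: total requests queued
MAX_PREMIUM_QUEUED = 80    # trigger 2: number of premium requests queued

def admit(request_class, total_queued, premium_queued):
    """Premium > Basic: under saturation, basic requests are rejected first.

    Basic requests are gated on the total queue depth, so they are the first
    to be turned away; premium requests are admitted until their own queue
    threshold is reached. False means "reject by closing the connection".
    """
    if request_class == "premium":
        return premium_queued < MAX_PREMIUM_QUEUED
    return total_queued < MAX_TOTAL_QUEUED
```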

Request and Resource Scheduling  To process requests, selection is based on the scheduling policy  The scheduling policy may have many options for processing  Several potential policies: Strict priority Weighted priority Shared capacity Fixed capacity Earliest deadline first  Resource scheduling provides more resources to premium requests and fewer to basic requests
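Two of the listed policies, strict priority and weighted priority, can be sketched as follows. This is an illustrative model; the tier names and the 3:1 weighting are example values, not figures from the paper.

```python
from collections import deque

def strict_priority(queues):
    """Always serve the highest tier that has a waiting request."""
    for tier in ("premium", "basic"):
        if queues[tier]:
            return queues[tier].popleft()
    return None

def weighted_schedule(weights):
    """Yield tiers in weighted round-robin order, e.g. 3 premium : 1 basic."""
    while True:
        for tier, w in weights.items():
            for _ in range(w):
                yield tier

def weighted_priority(queues, schedule):
    """Serve the next scheduled tier, falling back to any non-empty queue
    so capacity is not wasted when the scheduled tier is idle."""
    tier = next(schedule)
    for t in (tier, *queues):
        if queues[t]:
            return queues[t].popleft()
    return None
```

Strict priority maximizes premium performance but can starve basic traffic; weighted priority trades a little premium latency for a guaranteed share of service for basic requests.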

Apache Source Modification  The number of Apache code changes is minimal  http_main.c modifications: Start the connection manager process and set up the queues Change the child Apache processes to accept requests from the connection manager instead of the HTTP socket  An additional file, connection_mgr.c, is linked in: the classification policy, enqueue mechanism, dequeue policy, and connection manager process code  Additional shared memory and semaphores: the state of the queues, each class's queue length, the number of requests executing per class, the last class to have a request dequeued, and the total count of waiting requests across classes Access to shared memory is synchronized through semaphores

Results  Comparison of response time, throughput, and error rate for premium and basic clients under priority scheduling  Performance of premium and basic clients compared in two cases: With the premium request rate fixed With the premium request rate identical to the basic clients' rate  In both cases, the quality of service for premium clients is better than for basic clients  Setup: four clients, one server, a 100BaseT network  The httperf tool is used by the four clients

Summary  Contributions: Motivates the need for server QoS to support tiered user service levels Protects servers from client demand overload Develops the WebQoS architecture Shows the benefits of the architecture through experiments  Unsolved problems: Tighter integration of server and network QoS, and the ability to communicate QoS attributes across the network More flexible admission control mechanisms Lightweight signalling mechanisms for high-priority traffic What benefits can be obtained from end-to-end QoS

Critique  Strong points: Shows that the Web bottleneck is on the server side, not in the network Presents an architecture that performs differentiated service and verifies its viability Shows an architecture that can be combined with other differentiated-service approaches  Weak points: The architecture depends heavily on the state of the connection manager, which may become a bottleneck The experiments are not close enough to a real environment to demonstrate the architecture's effectiveness convincingly