Dynamic Process Allocation in Apache Server
Yu Cai

Introduction
Motivation: degrading DDoS attacks. We want a web server system that can provide clients with differentiated services with proportional response times. There are two ways:
–A) Admission control and traffic classification: classify the incoming traffic into different queues with different QoS requirements.
–B) Dynamic resource management (our focus): dynamically control the number of processes assigned to each class.

Problem Formulation
We use an M/M/1 queue to model the web server traffic. The goal is proportional delay (response time) between the two classes, 1 and 2. Assumption on the processing rate:
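The equations on this slide did not survive transcription. The following is a minimal reconstruction of the standard model under our own notation, which is an assumption rather than the slide's original formulas: lambda_i is the class arrival rate, mu_i the class service rate, n_i the number of processes assigned to class i, k the target delay ratio, and c the per-process service rate.

    % M/M/1 mean response time of class i (assumed notation)
    T_i = \frac{1}{\mu_i - \lambda_i}, \qquad i = 1, 2

    % Proportional-delay goal between classes 1 and 2
    \frac{T_1}{T_2} = k

    % Assumed processing-rate model: the service rate of a class grows
    % linearly with the number of Apache processes assigned to it
    \mu_i = c \, n_i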

Problem Formulation
We want to achieve this resource allocation through process allocation in Apache. The ratio of the process allocation can be calculated by:
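The formula itself is missing from the transcript. Under the assumed model on the previous slide, one plausible derivation of the process allocation is the following; it is our reconstruction, not necessarily the authors' exact expression.

    % Impose T_1 / T_2 = k with T_i = 1 / (c n_i - \lambda_i):
    \frac{1}{c\,n_1 - \lambda_1} = \frac{k}{c\,n_2 - \lambda_2}
    \;\Rightarrow\; c\,n_2 - \lambda_2 = k\,(c\,n_1 - \lambda_1)

    % With a fixed total number of processes N = n_1 + n_2:
    n_1 = \frac{cN - \lambda_2 + k\,\lambda_1}{c\,(1 + k)}, \qquad n_2 = N - n_1

    % The process allocation ratio is then n_1 : n_2.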

Implementation in Apache
We modify Apache to listen on two ports (80 and 8000). Class 1 traffic is routed to port 80 and class 2 traffic to port 8000 by admission control. We dynamically control the number of processes allocated to each port while maintaining the process ratio specified in a ratio file. This is achieved by modifying the child_main() function in the http_main.c file of Apache. Process forking and killing is still handled by Apache itself.
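For concreteness, a generic BSD-sockets sketch of the two-port setup is shown below. It is not Apache's actual listener code (Apache sets this up from its own configuration); the helper name is ours.

    /* Minimal sketch of the two listening sockets the modified server needs
     * (plain BSD sockets; not Apache's real listener setup). */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>

    static int open_listener(unsigned short port)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(port);

        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
            listen(fd, 128) < 0)
            return -1;
        return fd;
    }

    /* Class 1 requests arrive on port 80, class 2 requests on port 8000:
     *     int fd_class1 = open_listener(80);
     *     int fd_class2 = open_listener(8000);
     */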

Implementation in Apache
The process allocation ratio is kept in a local text file (the ratio file). Sample ratio file:

ratio multiplier   port1:ratio1   port2:ratio2   total processes
1                  80:3           8000:1         3+1=4
2                  80:3           8000:1         6+2=8

A resource management program dynamically calculates the process ratio and updates the ratio file by setting the ratio multiplier (usually 1), ratio1 and ratio2. The ratio file is read and written by the Apache processes.
–If all processes are busy, we increase the ratio multiplier by 1; Apache is then allowed to fork more processes, so the total number of processes increases accordingly.
–If any process is not busy, we decrease the multiplier by 1 and stop listening for requests on the surplus server sockets; Apache then kills some idle processes, so the total number of processes decreases accordingly.
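A sketch of this ratio-file handling is given below. The exact file layout, struct fields, and function names are our assumptions, not the actual implementation.

    /* Hypothetical sketch of the ratio-file handling described above;
     * the layout and names are assumptions, not the paper's code. */
    #include <stdio.h>

    struct ratio_file {
        int multiplier;      /* scales both ratios; total = multiplier * (ratio1 + ratio2) */
        int port1, ratio1;   /* e.g. 80:3   */
        int port2, ratio2;   /* e.g. 8000:1 */
    };

    /* Read "multiplier port1:ratio1 port2:ratio2" from the ratio file. */
    static int read_ratio_file(const char *path, struct ratio_file *rf)
    {
        FILE *fp = fopen(path, "r");
        if (!fp)
            return -1;
        int ok = fscanf(fp, "%d %d:%d %d:%d", &rf->multiplier, &rf->port1,
                        &rf->ratio1, &rf->port2, &rf->ratio2) == 5;
        fclose(fp);
        return ok ? 0 : -1;
    }

    /* Adjust the multiplier as the slide describes: grow it when every
     * process is busy, shrink it when some process is idle, then rewrite
     * the file so the Apache children pick up the new targets. */
    static void update_ratio_file(const char *path, struct ratio_file *rf,
                                  int all_processes_busy)
    {
        if (all_processes_busy)
            rf->multiplier += 1;        /* Apache may fork more children */
        else if (rf->multiplier > 1)
            rf->multiplier -= 1;        /* idle children will be killed  */

        FILE *fp = fopen(path, "w");
        if (fp) {
            fprintf(fp, "%d %d:%d %d:%d\n", rf->multiplier, rf->port1,
                    rf->ratio1, rf->port2, rf->ratio2);
            fclose(fp);
        }
    }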

Dynamic Process Allocation Algorithm
1. Read the Apache scoreboard to get the number of active processes currently allocated to each port.
2. Read the process ratio file to get the ratio multiplier, ratio1 and ratio2. Number of processes for a port = multiplier * ratio; total number of processes = multiplier * (ratio1 + ratio2).
3. If the number of active processes across all ports >= the total number of processes given in the ratio file, increase the multiplier by 1 and update the ratio file; otherwise, decrease the multiplier by 1 and update the ratio file.
4. If the number of active processes for a port >= the number of processes given in the ratio file for that port (too many active processes on that port), do not listen for requests on that server socket, which allows Apache to kill idle processes.
5. Exclusively wait for requests on all available server sockets.
6. Handle the request.
7. Update the scoreboard.
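A self-contained sketch of one iteration of this loop is shown below. Scoreboard access, the accept mutex, and request handling are stubbed out, and all identifiers are illustrative rather than Apache's real internals (the real change lives in child_main() in http_main.c).

    /* Sketch of one allocation iteration; names and structs are ours. */
    #include <sys/select.h>

    struct port_state {
        int listen_fd;   /* listening socket for this port (80 or 8000)        */
        int active;      /* step 1: active processes on this port (scoreboard) */
        int allowed;     /* step 2: multiplier * ratio from the ratio file     */
    };

    static void allocation_iteration(struct port_state ports[2])
    {
        /* Step 3 (multiplier update) is handled by the resource management
         * program shown in the previous sketch. */
        fd_set readfds;
        FD_ZERO(&readfds);
        int maxfd = -1;

        for (int i = 0; i < 2; i++) {
            /* Step 4: a port that already has too many active processes is
             * left out of the select set, so Apache can kill idle children. */
            if (ports[i].active >= ports[i].allowed)
                continue;
            FD_SET(ports[i].listen_fd, &readfds);
            if (ports[i].listen_fd > maxfd)
                maxfd = ports[i].listen_fd;
        }
        if (maxfd < 0)
            return;   /* this child is not allowed to listen on any port */

        /* Step 5: wait for a request on the remaining sockets (real Apache
         * serializes this with an accept mutex). */
        if (select(maxfd + 1, &readfds, NULL, NULL, NULL) > 0) {
            /* Steps 6-7: accept() the connection, handle the request,
             * then update this child's scoreboard entry. */
        }
    }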

System Architecture

Performance Evaluation
We use two HP PCs (PIII 1GHz, 512M RAM, Red Hat 9, 100M Ethernet connection) as the router and the web server under test. We use four HP PCs (PIII 233MHz, 96M RAM, Red Hat 9, 100M Ethernet connection) to generate HTTP requests. We use Httperf and Webbench as the request generators; they can generate HTTP requests with a Poisson distribution.
The Apache we used is version
In the experiments, the system first warms up for 100 time units. We collect the response times during the next 1000 time units and average the results over 50 runs. Then we change the system load and repeat the above process.

Without Dynamic Process Adaptation

We set the process ratio to 3:1, aiming for a response time ratio of 3:1. The results show that it is not possible to achieve proportional response times by setting a fixed process ratio.

With Dynamic Process Adaptation

With dynamic process adaptation, we can achieve proportional response times in the range of 40% to 80% system load. If the load is too light or too heavy, it is hard to control.

With Dynamic Process Adaptation

Class 1 Fixed and Class 2 Changing

Class 1 Changing and Class 2 Fixed

System Capacity vs. Total Number of Processes

Future Work
Find a better dynamic process allocation algorithm. Combine admission control and traffic classification. Add feedback and notification mechanisms to the system.