

1 Scalable Server Load Balancing Inside Data Centers Dana Butnariu Princeton University Computer Science Department July – September 2010 Joint work with Richard Wang and Professor Jennifer Rexford

2 Getting started
[Diagram: six clients (Client1–Client6) connect through the Internet to a switch, which connects to four servers (Server1–Server4).]

3 Getting started
- Client – any device requesting a web service
- Server – device which handles a client request and provides the web service
- Data center – location containing a group of servers
- Server load – number of client requests a server must handle
- Load balancing – directing a client request to a particular server, managing server loads according to a certain algorithm
- Switch – device which enables clients and servers to communicate
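The definitions above can be made concrete with a minimal sketch of load balancing as "directing a client request to a particular server". The names and the least-loaded policy below are illustrative, not taken from the presentation:

```python
# Illustrative server-load table: server name -> number of requests it handles.
server_loads = {"Server1": 3, "Server2": 1, "Server3": 4, "Server4": 2}

def pick_server(loads):
    """Return the server currently handling the fewest client requests."""
    return min(loads, key=loads.get)

def handle_request(client, loads):
    """Direct one client request to a server and update that server's load."""
    server = pick_server(loads)
    loads[server] += 1
    return server
```

With the table above, the first request goes to Server2, since it carries the lightest load.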

4 Energy and Load Balancing
Servers:
- Are located in data centers in different areas of the world.
- Energy cost and availability vary from one location to another.
- Energy cost and consumption depend on client–server distance.
Load balancing:
- Tries to lower energy cost and usage without affecting user-perceived performance.
- Can achieve this goal by selecting nearby data centers.
- Can achieve this goal by using only certain servers and powering down the rest.

5 Load balancing today
Old approach – use a separate device, the load balancer:
- Costly device
- Consumes energy
- Hard to program
- Crashes easily
New approach – implement load balancing in devices already present in the network:
- Existing device, so no additional cost
- Easy to program and customize
- Stable and reliable

6 What, why and how?
What: scalable server load balancing without sacrificing user-perceived performance.
Why: to save energy and lower the cost of the energy used to process client requests.
How: using OpenFlow, an emerging technology that makes switches programmable.

7 Project Steps
1. Establish the network design.
2. Design the load balancing application.
3. Implement the load balancing application.
4. Test the load balancing application.

8 Network Design
Establish the network design:
- How many clients, servers, and switches are there?
- How are they connected?
- What knowledge do they have of one another?
There is a "Brain":
- It is just another computer.
- It controls switch behavior.
- It installs rules in the switch.
- Rules tell the switch which server handles a client request.
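The Brain/switch split above can be sketched in a few lines. This is a hedged illustration only: the class and method names are invented, and a real deployment would speak the OpenFlow protocol rather than make direct method calls.

```python
class Switch:
    """Forwards client requests according to rules installed by the Brain."""
    def __init__(self):
        self.rules = {}  # client address -> server

    def install_rule(self, client, server):
        self.rules[client] = server

    def forward(self, client):
        # Forward according to an installed rule; None means no rule yet,
        # so the switch would have to ask the Brain.
        return self.rules.get(client)

class Brain:
    """Controller that decides which server handles which client request."""
    def __init__(self, switch, servers):
        self.switch = switch
        self.servers = servers
        self.next = 0

    def on_new_client(self, client):
        # Simple round-robin stand-in for the real design algorithm.
        server = self.servers[self.next % len(self.servers)]
        self.next += 1
        self.switch.install_rule(client, server)
        return server
```

The key design point the slide makes is that once a rule is installed, the switch forwards on its own; the Brain is only consulted for new clients.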

9 Network Design
[Diagram: clients send requests to the data center switch; the Brain, implementing the design algorithm, installs load-balancing rules in the switch, which forwards each request to one of Servers 1–4.]

10 How it works
1. A client sends a request for a web service.
2. The request arrives at the switch.
3. The switch decides which server handles the request. The decision is based on:
   - the closest server, so that less energy is used for transport;
   - the cheapest path in terms of energy cost;
   - server usage, so that lightly used servers can be powered down.
4. The server sends a reply back to the client.
5. The client is provided with the web service.
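The three decision criteria above can be combined into a single score. The field names and weights here are assumptions for illustration; the presentation does not specify how the criteria are weighted:

```python
def server_score(server, w_dist=1.0, w_cost=1.0, w_load=1.0):
    """Combine distance, path energy cost, and current load; lower is better."""
    return (w_dist * server["distance"]
            + w_cost * server["energy_cost"]
            + w_load * server["load"])

def choose_server(servers):
    """Pick the server with the lowest combined score."""
    return min(servers, key=server_score)
```

Tuning the weights shifts the policy: a large w_load favors concentrating requests so idle servers can be powered down, while a large w_dist favors nearby servers.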

11 Load Balancing Components
[Diagram: the load-balancing application consists of three classes around the switch. The Partitioning class installs rules in the switch; the Transitioning class decides which servers to power down, keeping old client requests on the old server while new servers handle new requests; the Statistics class monitors server usage.]

12 Application Components
Partitioning:
- Responsible for implementing the load balancing algorithm.
- Decides which server handles which client request.
Transitioning:
- Ensures that when a server is powered down, all new client requests are handled by another server.
- Ensures that all old client requests are still answered by the old server.
Statistics:
- Provides statistics on server usage.
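The Transitioning behavior above (old flows stay on the old server, new flows avoid it) can be sketched as follows. The class name matches the slide, but the internals are an illustrative guess, not the prototype's actual code:

```python
class Transitioning:
    """Drain a server gracefully: keep old flows, redirect new ones."""
    def __init__(self, servers):
        self.active = set(servers)   # servers accepting new requests
        self.flows = {}              # client -> server (established flows)

    def power_down(self, server):
        # Stop sending new requests here; existing flows are untouched.
        self.active.discard(server)

    def assign(self, client):
        if client in self.flows:
            # Old client request: the old server still answers it.
            return self.flows[client]
        # New client request: handled by a server that is still active.
        server = sorted(self.active)[0]
        self.flows[client] = server
        return server
```

Once every old flow to a drained server finishes, that server carries no traffic and can actually be switched off.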

13 Prototype Implementation and Future Steps
The first prototype for this project was created using:
- OpenFlow programmable switches.
- OpenFlow, NOX, and Mininet to program the switches.
- Applications written in Python.
- Applications tested on Debian Linux running in a VMware virtual machine.
Future steps:
- Run the application on a web server and on a real network.
- Design a more accurate partitioning component.
- Adapt the partitioning component according to the statistics component.

14 Conclusions
A new solution which saves energy by:
- Being implemented in already existing hardware, the switches.
- Finding the path which uses the least energy at the lowest cost.
- Powering down servers which are handling a small number of client requests.
The solution offers:
- Flexibility, due to the software component – the load balancing algorithm can be easily modified.
- Speed, due to the underlying hardware component – the switch which applies the rules.

15 Acknowledgements
Thank you:
- Professor Jennifer Rexford
- Richard Wang – team member
- Rob Harrison and David Shue – graduate students
- Nate Foster – postdoctoral research fellow
Questions?

