Scalable Load-Distance Balancing

Presentation transcript:

Scalable Load-Distance Balancing
Edward Bortnikov, Israel Cidon, Idit Keidar
EE Department, The Technion, Haifa, Israel

Service Assignment
Assign (many) users to (a few) servers.
Applications:
- Content/game servers
- Internet gateways in a wireless mesh network (WMN)
Across many technologies, the increased demand for real-time access leads to multiple service points (servers). This gives rise to the service assignment problem: associating each user with a server such that QoS is improved.

Load-Distance Balancing (LDB)
Two sources of service delay:
- Network delay, due to user-server distance (e.g., depends on the number of network hops)
- Congestion delay, due to server load (a general monotonic non-decreasing function)
Total delay = network delay + congestion delay.
The Load-Distance Balancing (LDB) problem: minimize the maximum total delay (the cost). The problem is NP-complete (a 2-approximation exists).
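To make the objective concrete, here is a minimal sketch of how the cost of a given assignment would be evaluated. The hop-count distances and the linear congestion function are illustrative assumptions; the slides only require the congestion function to be monotonic non-decreasing.

    # Minimal sketch (not the authors' code) of the LDB cost of an assignment.
    def total_delay(user, server, assignment, network_delay, congestion_delay):
        """Total delay = network delay + congestion delay of the chosen server."""
        load = sum(1 for s in assignment.values() if s == server)
        return network_delay(user, server) + congestion_delay(load)

    def ldb_cost(assignment, network_delay, congestion_delay):
        """LDB cost: the maximum total delay over all users (to be minimized)."""
        return max(total_delay(u, s, assignment, network_delay, congestion_delay)
                   for u, s in assignment.items())

    # Hypothetical example: distances in hops, linear congestion delay.
    assignment = {"u1": "s1", "u2": "s1", "u3": "s2"}
    hops = {("u1", "s1"): 1, ("u2", "s1"): 2, ("u3", "s2"): 1}
    print(ldb_cost(assignment,
                   network_delay=lambda u, s: hops[(u, s)],
                   congestion_delay=lambda load: 3 * load))  # monotonic non-decreasing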

LDB versus LB
(Figure comparing three assignments: one with network distance OK but congestion high; one with network distance OK and congestion OK; one with network distance high but congestion OK.)

Distributed LDB
Distributed assignment computation:
- Initially, every user reports its location to, and is assigned to, the nearest server
- Servers then communicate and may change the assignment
- Synchronous, failure-free communication model
Requirements (must eventually hold):
- Eventual quiescence: inter-server communication stops
- Eventual stability: the assignment stops changing
- A constant α-approximation of the optimal cost is computed, where α ≥ 2 is a parameter (trading communication/time for cost)

What About Locality?
In addition to cost, in the distributed case we are interested in locality.
Extreme global solution:
- Collect all data and compute the assignment centrally
- Guarantees a 2-approximation of the optimal cost
- Excessive communication and network latency
Extreme local solution:
- Nearest-server assignment
- No communication
- No approximation guarantee (can't handle crowds)

Workload-Sensitive Locality
The cost function is distance-sensitive: most assignments can go to nearby servers, except where congestion peaks must be dissipated.
Distributed solution structure:
- Start from the nearest-server assignment (sketched below)
- Offload congestion to nearby servers
Workload-sensitive locality: go as far as needed, and no further, to achieve the desired approximation α.
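A minimal sketch of the nearest-server starting point, assuming illustrative user/server positions and a distance metric not given in the slides:

    # Nearest-server assignment: the purely local initial step (illustrative sketch).
    def nearest_server_assignment(users, servers, distance):
        """Assign every user to its closest server."""
        return {u: min(servers, key=lambda s: distance(u, s)) for u in users}

    # Hypothetical 1-D example: positions on a line.
    users = {"u1": 0.0, "u2": 1.0, "u3": 9.0}
    servers = {"s1": 0.0, "s2": 10.0}
    assign = nearest_server_assignment(users, servers,
                                       distance=lambda u, s: abs(users[u] - servers[s]))
    print(assign)  # {'u1': 's1', 'u2': 's1', 'u3': 's2'}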

Example: Light Uniform Workload (figure)

Example: Peaky Workload (figure showing per-cluster LDB-approximation)

Iterative Clustering
- Partition the servers into clusters; assign each user within its cluster
- Choose one leader per cluster; the leader collects local data and computes the within-cluster assignment
- Clusters may merge: a cluster keeps trying to merge as long as its cost is ε-improvable, i.e., it could be decreased by a factor of at least 1+ε if all servers were harnessed (ε is a slack factor)
- When no cluster is ε-improvable, the desired approximation is achieved: α = 2(1+ε)
An ε-improvability check is sketched below.
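The following is a minimal sketch of the ε-improvability test that drives the merging. The arguments `cluster_cost` and `all_servers_cost` are illustrative placeholders for the cluster's current cost and the (lower-bound) cost achievable if every server were harnessed.

    # Epsilon-improvability test (illustrative sketch, not the authors' code).
    def epsilon_improvable(cluster_cost, all_servers_cost, eps):
        """A cluster keeps merging while its cost could drop by a factor >= 1+eps."""
        return cluster_cost >= (1 + eps) * all_servers_cost

    # Example: cost 12 inside the cluster vs. a bound of 5 with all servers, eps = 0.5.
    print(epsilon_improvable(12.0, 5.0, eps=0.5))  # True: 12 >= 1.5 * 5, so keep merging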

Tree (Structured) Clustering
- Maintain a hierarchy of clusters of size 2^i
- While some cluster is ε-improvable, double it (merge it with its hierarchy neighbor)
- Simple, with O(log N) convergence
- But it requires hierarchy maintenance, may not adapt well, and misses cross-boundary optimization
A simplified sketch of this doubling schedule follows.
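The slides give only the schedule above; below is a deliberately simplified, centralized sketch of it. In the real distributed algorithm only ε-improvable clusters are doubled and cluster leaders coordinate the merges; here all clusters double in lockstep while any of them remains improvable, and `improvable` and `solve_within` are illustrative placeholders.

    # Simplified Tree clustering schedule (centralized sketch, not the authors' code).
    def tree_clustering(servers, improvable, solve_within):
        clusters = [[s] for s in servers]  # level 0: clusters of size 2^0
        while len(clusters) > 1 and any(improvable(c) for c in clusters):
            # Merge each cluster with its fixed hierarchy neighbor: sizes double.
            clusters = [clusters[i] + clusters[i + 1] if i + 1 < len(clusters)
                        else clusters[i]
                        for i in range(0, len(clusters), 2)]
        # One leader per cluster computes the within-cluster assignment.
        return [solve_within(c) for c in clusters]

With N servers the loop performs at most about log2(N) doubling rounds, matching the O(log N) convergence claim.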

Ripple (Unstructured) Clustering
- Define a linear order among the servers
- While a cluster is improvable, merge it with its smaller-cost left/right neighbor
- Adaptive merging; conflicts are possible (e.g., adjacent merges A+B and B+C proposed simultaneously) and are resolved by randomized tie-breaking
- Many race conditions (we love it!)
A sequential sketch of one merge step follows.
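Here is a minimal, sequential sketch of one Ripple-style merge decision. Clusters are kept in their linear order in a Python list, `cost` and `improvable` are illustrative placeholders, and a random choice among equally cheap neighbors stands in for the randomized tie-breaking the distributed protocol uses on concurrent proposals.

    # One Ripple-style merge attempt (sequential sketch, not the authors' code).
    import random

    def ripple_step(clusters, cost, improvable):
        """Try one merge; return True if two neighboring clusters were merged."""
        for i in range(len(clusters)):
            if not improvable(clusters[i]):
                continue
            neighbors = [j for j in (i - 1, i + 1) if 0 <= j < len(clusters)]
            if not neighbors:
                continue
            best = min(cost(clusters[j]) for j in neighbors)
            # Randomized tie-breaking between equally cheap left/right neighbors.
            j = random.choice([k for k in neighbors if cost(clusters[k]) == best])
            lo, hi = sorted((i, j))
            clusters[lo:hi + 1] = [clusters[lo] + clusters[hi]]  # merge adjacent clusters
            return True
        return False

    # Usage: run until no improvable cluster has a neighbor left to merge with.
    # while ripple_step(clusters, cost, improvable): pass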

Ripple Example: Merging Without Conflicts
(Figure: a high-cost, improvable cluster proposes a merge; its low-cost neighbor accepts the proposal.)

Example: Ripple Conflict Resolution
(Figure: clusters A, B, C; two proposals target the same cluster, and only one is accepted.)

Ripple's Properties
- Near-optimal cost: an α-approximation of the optimal cost
- Convergence: communication quiescence and stability of the assignment; at most N rounds in the worst case (despite coin-tossing), much faster in practice
- Locality: for isolated load peaks, the final clusters are at most twice as large as the minimum required to obtain the cost bound

Sensitivity to Required Approximation: Cost
Setup: urban WMN, 64 servers (Internet gateways) on a grid, 12,800 users, mixed distribution (50% uniform / 50% peaky); a 2-approximation algorithm is applied within each cluster.
(Plot: cost vs. α = 2(1+ε) for Nearest Server, the theoretical worst-case bound, Tree, and Ripple. Tree and Ripple outperform Nearest Server, which goes beyond the upper bound; Ripple is better.)

Sensitivity to Required Approximation: Locality
Setup: urban WMN, 64 servers (Internet gateways), 12,800 users, mixed distribution (50% uniform / 50% peaky).
(Plot: Ripple's maximum and average cluster sizes vs. α. Most clusters built by Ripple are smaller than the maximum.)

Scalability with Network Size: Cost
Setup: urban WMN, 64 to 1024 servers, 12,800 to 204,800 users.
(Plot: cost vs. network size for Nearest Server, Tree, and Ripple.)

Scalability with Network Size: Locality
Setup: urban WMN, 64 to 1024 servers, 12,800 to 204,800 users.
(Plot: cluster size vs. network size for Tree and Ripple.)

Summary
- Load-Distance Balancing: a novel optimization problem
- Distributed, workload-sensitive solutions: the Tree and Ripple algorithms
- Ripple is more complex but performs better: it needs no infrastructure, scales better in practice, and achieves lower costs in practice

Thank you