Asynchronous ASMs Ring Load Balance Egon Börger Dipartimento di Informatica, Universita di Pisa

Presentation transcript:

Asynchronous ASMs Ring Load Balance Egon Börger Dipartimento di Informatica, Universita di Pisa

© Egon Börger: Load Balance 2 Ring Load Balance: problem statement
Goal: Design a distributed algorithm for reaching workload balance among the agents of a ring (using only communication between right/left neighbors)
– keeping the message passing mechanism and the effective task transfer abstract, and assuming a fixed number of agents and a fixed total workload
Algorithmic Idea: every agent (ring node)
– alternately sends a workload info message to its right neighbor and a task transfer message to its left neighbor, transferring a task to balance the workload with the left neighbor
– updates its workload to balance it with its right neighbor
so that eventually the difference between the workloads of two nodes becomes ≤ 1
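A reading aid, not part of the original slides: a minimal Python sketch of the assumed ring topology, with agents indexed 0..n-1 and the external functions leftNeighb/rightNeighb modelled by index arithmetic modulo n (the helper names are illustrative only).

# Hypothetical helpers: a ring of n agents, neighbours computed modulo n.
def right_neighb(i: int, n: int) -> int:
    return (i + 1) % n

def left_neighb(i: int, n: int) -> int:
    return (i - 1) % n

# In a ring of 4 agents, agent 0's right neighbour is 1 and its left neighbour is 3.
assert right_neighb(0, 4) == 1 and left_neighb(0, 4) == 3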

© Egon Börger: Load Balance 3 Ring Load Balance ASM: Agent Signature
Agent: a ring of agents, each equipped with:
– leftNeighb, rightNeighb: Agent (external functions)
– workLoad: the current workload
– neighbLoad: the current workload of leftNeighb (as last reported)
– transferLoad: the workload transferred by rightNeighb
– ctl_state: {informRightNeighb, checkWithLeftNeighb, checkWithRightNeighb}
Initially
– ctl_state = informRightNeighb
– neighbLoad = transferLoad = undef
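A reading aid, not part of the original slides: the agent signature rendered as a Python record, with undef modelled as None and the initial values taken from the slide (field names mirror the slide's, in snake_case).

from dataclasses import dataclass
from typing import Optional

@dataclass
class Agent:
    work_load: int                          # workLoad: the current workload
    neighb_load: Optional[int] = None       # neighbLoad, initially undef
    transfer_load: Optional[int] = None     # transferLoad, initially undef
    ctl_state: str = "informRightNeighb"    # initial control state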

© Egon Börger: Load Balance 4 Ring Load Balance ASM
Control state diagram: informRightNeighb --Send(workLoad)--> checkWithLeftNeighb --BalanceWithLeftNeighb--> checkWithRightNeighb --BalanceWithRightNeighb, Send(workLoad)--> checkWithLeftNeighb

Send(workLoad) ≡ rightNeighb.neighbLoad := workLoad

BalanceWithLeftNeighb ≡
  if arrived(neighbLoad) then transfer task to leftNeighb

BalanceWithRightNeighb ≡
  if arrived(transferLoad) then accept task from rightNeighb

transfer task to leftNeighb ≡
  let transfer = (workLoad > neighbLoad) in
    leftNeighb.transferLoad := transfer
    workLoad := workLoad − transfer
    neighbLoad := undef
  (the Boolean value workLoad > neighbLoad counts as 1 if true, 0 if false)

accept task from rightNeighb ≡
  workLoad := workLoad + transferLoad
  transferLoad := undef

arrived(l) ≡ l ≠ undef
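A reading aid, not part of the original slides: a hedged Python simulation of these rules. Message passing is modelled by writing directly into the neighbour's neighbLoad/transferLoad locations, undef is modelled as None, the value of workLoad > neighbLoad counts as 1 or 0, and a round-robin scheduler stands in for a fair distributed run; the Agent record is repeated from the previous sketch so this block runs on its own.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Agent:
    work_load: int
    neighb_load: Optional[int] = None     # workload last reported by the left neighbour
    transfer_load: Optional[int] = None   # tasks handed over by the right neighbour
    ctl_state: str = "informRightNeighb"

def step(ring: list[Agent], i: int) -> None:
    """Fire agent i's rule for its current control state, if the guard holds."""
    n = len(ring)
    me, left, right = ring[i], ring[(i - 1) % n], ring[(i + 1) % n]
    if me.ctl_state == "informRightNeighb":
        right.neighb_load = me.work_load              # Send(workLoad)
        me.ctl_state = "checkWithLeftNeighb"
    elif me.ctl_state == "checkWithLeftNeighb":
        if me.neighb_load is not None:                # arrived(neighbLoad)
            transfer = 1 if me.work_load > me.neighb_load else 0
            left.transfer_load = transfer             # transfer task to leftNeighb
            me.work_load -= transfer
            me.neighb_load = None
            me.ctl_state = "checkWithRightNeighb"
    elif me.ctl_state == "checkWithRightNeighb":
        if me.transfer_load is not None:              # arrived(transferLoad)
            me.work_load += me.transfer_load          # accept task from rightNeighb
            me.transfer_load = None
            right.neighb_load = me.work_load          # Send(workLoad) again
            me.ctl_state = "checkWithLeftNeighb"

def run(loads: list[int], rounds: int = 200) -> list[int]:
    """Round-robin scheduling of the agents as a simple stand-in for a fair run."""
    ring = [Agent(w) for w in loads]
    for _ in range(rounds):
        for i in range(len(ring)):
            step(ring, i)
    return [agent.work_load for agent in ring]

# Example: a total load of 12 on 4 agents ends up evenly spread (difference <= 1).
final = run([9, 1, 2, 0])
assert max(final) - min(final) <= 1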

© Egon Börger: Load Balance 5 Ring Load Balance ASM: Correctness property
Proposition: In every distributed run of agents equipped with the ring load balance ASM, eventually the workload difference between any two neighboring nodes becomes ≤ 1.
Proof (assuming that every enabled agent will eventually make a move): induction on the weight of the workload differences in the run. Let w = total workLoad of all nodes, a = |Agent|.
– Case 1: a divides w. Then eventually workLoad(n) = w/a for every node n.
– Case 2: otherwise. Then eventually every node's workLoad is ⌊w/a⌋ or ⌈w/a⌉, so the workLoad of neighboring nodes differs by at most 1.
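A reading aid, not part of the original slides: a small self-contained check of the case split in the proof, using a hypothetical helper balanced_loads that computes the stable load distribution (w // a tasks per node, plus one extra task on w % a of the nodes).

def balanced_loads(w: int, a: int) -> list[int]:
    q, r = divmod(w, a)
    return [q + 1] * r + [q] * (a - r)

assert balanced_loads(12, 4) == [3, 3, 3, 3]   # Case 1: a divides w, every node gets w/a
assert balanced_loads(13, 4) == [4, 3, 3, 3]   # Case 2: otherwise, loads differ by at most 1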

© Egon Börger: Load Balance 6 References
W. Reisig: Elements of Distributed Algorithms. Springer-Verlag, 1998.
– See Section 37 (in particular Fig. 37.1) and Section 82 for a correctness proof.
E. Börger, R. Stärk: Abstract State Machines. A Method for High-Level System Design and Analysis. Springer-Verlag, 2003.
– See Chapter 6.1.