DIMACS Workshop on Perspectives and Future Directions in Systems and Control Theory
Rutgers, May 25, 2011

Deterministic Gossiping (Deterministic Distributed Averaging)

A. S. Morse, Yale University

Hello! Good Day!

Dedicated to Eduardo Sontag

Fenghua He, Ming Cao, Brian Anderson, Ji Liu, Oren Mangoubi, Changbin (Brad) Yu, Jie (Archer) Lin, Ali Jadbabaie, Shaoshuai Mou

Prior work by Boyd, Ghosh, Prabhakar, and Shah; Cao, Spielman, and Yeh; Muthukrishnan, Ghosh, and Schultz; Olshevsky and Tsitsiklis; Liu and Anderson; Mehyar, Spanos, Pongsajapan, Low, and Murray; Benezit, Blondel, Thiran, Tsitsiklis, and Vetterli; and many others.

ROADMAP
1. Consensus and averaging
2. Linear iterations
3. Gossiping
4. Double linear iterations

Craig Reynolds' Boids; The Lion King

Consensus Process

Consider a group of n agents labeled 1 to n. Each agent i controls a real, scalar-valued, time-dependent quantity x_i called an agreement variable. The group's neighbor graph N is an undirected, connected graph with vertices labeled 1, 2, ..., n; the neighbors of agent i correspond to those vertices which are adjacent to vertex i. The goal of a consensus process is for all n agents to ultimately reach a consensus by adjusting their individual agreement variables toward a common value. This is to be accomplished over time by sharing information among neighbors in a distributed manner.

Consensus Process

A consensus process is a recursive process which evolves with respect to a discrete time scale. In a standard consensus process, agent i sets the value of its agreement variable at time t+1 equal to the average of the current value of its own agreement variable and the current values of its neighbors' agreement variables:

    x_i(t+1) = (1 / (1 + d_i)) * ( x_i(t) + Σ_{j ∈ N_i} x_j(t) )

where N_i is the set of indices of agent i's neighbors and d_i is the number of indices in N_i.
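The update above can be sketched in a few lines of Python. The 4-cycle graph and the initial values here are purely illustrative choices, not from the slides:

```python
# One step of a standard consensus process: each agent replaces x_i with
# the average of its own value and its neighbors' values.
# neighbors[i] is the set N_i; the graph is an illustrative 4-cycle.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}

def consensus_step(x, neighbors):
    # x_i(t+1) = (x_i(t) + sum_{j in N_i} x_j(t)) / (1 + d_i)
    return [
        (x[i] + sum(x[j] for j in neighbors[i])) / (1 + len(neighbors[i]))
        for i in range(len(x))
    ]

x = [1.0, 2.0, 3.0, 4.0]
for _ in range(100):
    x = consensus_step(x, neighbors)
# all agreement variables approach a common value
```

Because the 4-cycle happens to be a regular graph, the common limit here equals the average of the initial values; as the later slides note, a standard consensus process does not guarantee this in general.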

Averaging Process

An averaging process is a consensus process in which the common value to which each agreement variable is supposed to converge is the average of the initial values of all agreement variables:

    x_avg = (1/n) Σ_{i=1}^{n} x_i(0)

Application: distributed temperature calculation.

Generalizations:
- Time-varying case: N depends on time.
- Integer-valued case: x_i(t) is to be integer-valued.
- Asynchronous case: each agent has its own clock.

Implementation issues: How much network information does each agent need? To what extent is the protocol robust?

General approach: probabilistic or deterministic.

Standing assumption: N is a connected graph.

Performance metrics: convergence rate; number of transmissions needed.

ROADMAP
1. Consensus and averaging
2. Linear iterations
3. Gossiping
4. Double linear iterations

Linear Iteration

    x_i(t+1) = w_ii x_i(t) + Σ_{j ∈ N_i} w_ij x_j(t)

where N_i is the set of indices of agent i's neighbors and the w_ij are suitably defined weights; in matrix form, x(t+1) = W x(t). We want x(t) → x_avg 1.

If A is a real n × n matrix, then A^t converges to a rank-one matrix of the form q p' if and only if A has exactly one eigenvalue at value 1 and all remaining n−1 eigenvalues are strictly smaller than 1 in magnitude. If A so converges, then Aq = q, A'p = p, and p'q = 1. Thus if W is such a matrix with q = 1 and p = (1/n)1, then

    x(t) = W^t x(0) → 1 (1/n) 1' x(0) = x_avg 1.

Linear Iteration with Nonnegative Weights

x(t) → x_avg 1 if and only if W1 = 1, W'1 = 1, and all n−1 eigenvalues of W, other than its single eigenvalue at value 1, have magnitudes less than 1.

A square matrix S is stochastic if it has only nonnegative entries and its row sums all equal 1 (S1 = 1). A square matrix S is doubly stochastic if it has only nonnegative entries and its row and column sums all equal 1 (S1 = 1 and S'1 = 1). For a stochastic matrix, ||S||_∞ = 1, so the spectrum of S is contained in the closed unit disk.

For the nonnegative-weight case, x(t) converges to x_avg 1 if and only if W is doubly stochastic and its single eigenvalue at 1 has multiplicity 1. How does one choose the w_ij ≥ 0 so that W has these properties?

One solution: pick g > max{d_1, d_2, ..., d_n} and set

    W = I − (1/g) L

where L = D − A is the Laplacian of N, A is the adjacency matrix of N (a matrix of ones and zeros with a_ij = 1 if N has an edge between vertices i and j), and D is the diagonal matrix of vertex degrees. Since L1 = 0 and L is symmetric, W is doubly stochastic. Because N is connected, the eigenvalue of L at 0 has multiplicity 1, so W's single eigenvalue at 1 has multiplicity 1. Each agent needs to know max{d_1, d_2, ..., d_n} to implement this.
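The Laplacian-based weight choice can be sketched as follows; the 4-vertex path graph and the initial values are illustrative assumptions, not from the slides:

```python
# Sketch of the Laplacian weight choice W = I - (1/g) L with g > max degree.
# L = D - A, so W is symmetric doubly stochastic and x(t+1) = W x(t)
# converges to the average of the initial values.
edges = [(0, 1), (1, 2), (2, 3)]  # an illustrative path graph
n = 4
A = [[0] * n for _ in range(n)]
for i, j in edges:
    A[i][j] = A[j][i] = 1
deg = [sum(row) for row in A]
g = max(deg) + 1  # any g strictly greater than the max degree works

# W = I - (1/g)(D - A): diagonal 1 - d_i/g, off-diagonal a_ij/g
W = [[(1 - deg[i] / g) if i == j else A[i][j] / g for j in range(n)]
     for i in range(n)]

x = [4.0, 8.0, 0.0, 0.0]  # average is 3
for _ in range(200):
    x = [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]
# x approaches [3.0, 3.0, 3.0, 3.0]
```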

A Better Solution: the Metropolis Algorithm {Boyd et al.}

    w_ij = 1 / (1 + max{d_i, d_j}) for j ∈ N_i,    w_ii = 1 − Σ_{j ∈ N_i} w_ij

Each agent needs to know only the number of neighbors of each of its neighbors. Equivalently, W = I − Q Λ Q', where L = Q Q', Q is a {−1, 0, 1} matrix with rows indexed by the vertex labels of N and columns indexed by the edge labels such that q_ij = ±1 if edge j is incident on vertex i (+1 at one endpoint, −1 at the other) and 0 otherwise, and Λ is a diagonal matrix of edge weights. Total number of transmissions per iteration: n d_avg.
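As a sketch, the Metropolis weights w_ij = 1/(1 + max{d_i, d_j}) can be built directly from local degree information. The star graph here is an illustrative assumption:

```python
# Metropolis weights on an illustrative star graph: w_ij depends only on
# the degrees of the two endpoints, and w_ii makes each row sum to 1.
# The resulting W is symmetric with unit row sums, hence doubly stochastic.
edges = [(0, 1), (0, 2), (0, 3)]
n = 4
nbrs = {i: [] for i in range(n)}
for i, j in edges:
    nbrs[i].append(j)
    nbrs[j].append(i)
deg = [len(nbrs[i]) for i in range(n)]

W = [[0.0] * n for _ in range(n)]
for i, j in edges:
    W[i][j] = W[j][i] = 1.0 / (1 + max(deg[i], deg[j]))
for i in range(n):
    W[i][i] = 1.0 - sum(W[i][j] for j in nbrs[i])
```

Note that agent i can compute its row of W knowing only its own degree and its neighbors' degrees, which is the point of the algorithm.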

Modification

Agent i's queue is a list q_i(t) of agent i's neighbors' labels. Agent i's preferred neighbor at time t is the agent whose label is at the front of q_i(t). Between times t and t+1 the following steps are carried out in order:

1. Agent i transmits its label i and x_i(t) to its current preferred neighbor. At the same time, agent i receives the labels and agreement-variable values of those agents for whom agent i is their current preferred neighbor. [n transmissions]
2. Let M_i(t) be the set consisting of the label of agent i's preferred neighbor together with the labels of all neighbors who sent agent i their agreement variables at time t, and let m_i(t) be the number of labels in M_i(t). Agent i transmits m_i(t) and its current agreement-variable value to each neighbor with a label in M_i(t). [at most 2n transmissions]
3. Agent i then moves the labels in M_i(t) to the end of its queue, maintaining their relative order, and updates its agreement variable using the received values.

The standard Metropolis iteration requires n d_avg transmissions per iteration; the modification requires at most 3n, so it is advantageous when d_avg > 3.

Metropolis Algorithm vs. Modified Metropolis Algorithm

Randomly chosen graphs with 200 vertices; 10 random graphs for each average degree.

ROADMAP
1. Consensus and averaging
2. Linear iterations
3. Gossiping
4. Double linear iterations

Gossip Process

A gossip process is a consensus process in which, at each clock time, each agent is allowed to average its agreement variable with the agreement variable of at most one of its neighbors. Such a pairwise interaction is called a gossip and is denoted by (i, j); if agent i gossips with neighbor j at time t, then agent j must gossip with agent i at time t. In the most commonly studied version of gossiping, the specific sequence of gossips which occurs during a gossiping process is determined probabilistically. In a deterministic gossiping process, the sequence of gossips which occurs is determined by a pre-specified protocol.

Gossip Process

A gossip process is a consensus process in which, at each clock time, each agent is allowed to average its agreement variable with the agreement variable of at most one of its neighbors.

1. The sum total of all agreement variables remains constant at all clock steps.
2. Thus if a consensus is reached, in that all agreement variables reach the same value, then this value must be the average of the initial values of all agreement variables.
3. This is not the case for a standard consensus process.

State-Space Model

A gossip (i, j) at time t induces the update x(t+1) = P_ij x(t), where P_ij is a primitive gossip matrix: the n × n identity matrix except that the entries (i,i), (i,j), (j,i), and (j,j) all equal 1/2. Each P_ij is a doubly stochastic matrix. Example: n = 7, i = 2, j = 5.
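The primitive gossip matrix for the slide's example (n = 7, gossip pair (2, 5), 1-indexed) can be sketched directly; the test vector x is an illustrative assumption:

```python
# Primitive gossip matrix P_ij for n = 7, gossip (2, 5), converted to
# 0-indexed coordinates for Python. P is the identity except that the
# four entries (i,i), (i,j), (j,i), (j,j) all equal 1/2.
n, i, j = 7, 2 - 1, 5 - 1
P = [[1.0 if r == c else 0.0 for c in range(n)] for r in range(n)]
P[i][i] = P[i][j] = 0.5
P[j][i] = P[j][j] = 0.5

# Applying P averages entries i and j of x and leaves the rest unchanged.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
y = [sum(P[r][c] * x[c] for c in range(n)) for r in range(n)]
```

Each row and each column of P sums to 1, which is the doubly stochastic property the slide points out.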

Periodic Gossiping

Example: the gossip sequence (5,6)(3,2)(3,5)(3,4)(1,2)(5,7), repeated with period T = 6. The convergence rate is as fast as λ^t → 0, where λ is the second largest eigenvalue (in magnitude) of the product of the gossip matrices over one period. Convergence can be shown because the subgraph consisting of the edges gossiped over during one period is a connected spanning subgraph of N.
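One period of such a sequence can be simulated directly. The sketch below assumes the 7-vertex tree whose edges match the slide's gossip pairs, with illustrative initial values 1 through 7:

```python
# Periodic gossiping on the 7-vertex tree with edges
# {1,2},{2,3},{3,4},{3,5},{5,6},{5,7} (1-indexed, matching the slide's
# gossip pairs). Repeating the period drives all agreement variables to
# the average of the initial values, since the period's edges span N.
def gossip(x, i, j):
    # one gossip (i, j): both agents adopt the average of their values
    avg = (x[i] + x[j]) / 2.0
    x[i] = x[j] = avg

period = [(5, 6), (3, 2), (3, 5), (3, 4), (1, 2), (5, 7)]  # T = 6
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
for _ in range(1000):             # repeat the period many times
    for i, j in period:
        gossip(x, i - 1, j - 1)   # convert labels to 0-indexed
# x approaches [4.0] * 7, the average of 1..7; the sum is conserved
# at every step, as noted on the earlier slide
```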

Different orderings of the same six gossips, e.g. (5,7)(5,6)(3,2)(3,5)(3,4)(1,2) versus (3,4)(1,2)(3,2)(5,6)(3,5)(5,7), yield different period-T gossip matrices. How are their second largest eigenvalues (in magnitude) related? If the neighbor graph N is a tree, then the spectra of all possible minimally complete gossip matrices determined by N are the same!

Modified Gossip Rule

Suppose agents i and j are to gossip at time t.

Standard update rule: x_i(t+1) = x_j(t+1) = (x_i(t) + x_j(t)) / 2.

Modified gossip rule: a weighted exchange with mixing parameter 0 < α < 1, with α = 1/2 recovering the standard rule.
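The exact formula for the modified rule did not survive extraction, so the sketch below assumes one natural sum-preserving form: each agent moves a fraction α toward the other's value, which reduces to the standard averaging update at α = 1/2.

```python
# A sum-preserving weighted gossip with mixing parameter 0 < alpha < 1.
# NOTE: this particular form is an assumption; the slide's own formula
# was lost. alpha = 1/2 recovers the standard gossip update.
def modified_gossip(x, i, j, alpha):
    xi, xj = x[i], x[j]
    x[i] = (1 - alpha) * xi + alpha * xj
    x[j] = alpha * xi + (1 - alpha) * xj
    return x

x = modified_gossip([1.0, 5.0], 0, 1, 0.25)
# the sum x_i + x_j is preserved for any alpha, so the averaging
# property of gossiping is retained
```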

Plot: |λ_2| versus α.

ROADMAP
1. Consensus and averaging
2. Linear iterations
3. Gossiping
4. Double linear iterations

Double Linear Iteration {Benezit, Blondel, Thiran, Tsitsiklis, Vetterli}

    y(t+1) = S(t) y(t),    z(t+1) = S(t) z(t),    x_i(t) = y_i(t) / z_i(t)

Here y_i is an unscaled agreement variable, z_i is a scaling variable, and each S(t) is left stochastic (i.e., S'(t) is stochastic). If each S(t) has positive diagonals, then z(t) > 0 for all t < ∞; if in addition z(t) → nq with q > 0, then z(t) > 0 for all t ≤ ∞, so the ratio x_i(t) = y_i(t)/z_i(t) is well-defined.

Broadcast-Based Double Linear Iteration

Initialization: y_i(0) = x_i(0), z_i(0) = 1.
Transmission: agent i broadcasts the pair {y_i(t), z_i(t)} to each of its neighbors.
Update: y(t+1) = S y(t), z(t+1) = S z(t), with

    S = (I + A)(I + D)^{-1}

where A is the adjacency matrix of N and D is the degree matrix of N; S is left stochastic with positive diagonals. Agents require the same network information as the Metropolis algorithm; n transmissions per iteration; it works even if N depends on t. Why does it work?

With y(0) = x(0) and z(0) = 1, S is left stochastic with positive diagonals. Then z(t) > 0 for all t < ∞ because S has positive diagonals, and z(t) > 0 for all t ≤ ∞ because z(∞) = nq with q > 0 (N being connected makes S primitive). So the ratio x_i(t) = y_i(t)/z_i(t) is well-defined, and x_i(t) → x_avg.
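The broadcast-based double linear iteration can be sketched end to end. The 3-vertex path graph and the initial values are illustrative assumptions:

```python
# Broadcast-based double linear iteration with S = (I + A)(I + D)^{-1}
# on an illustrative 3-vertex path. S is left (column) stochastic with
# positive diagonals; y and z are iterated with the same S and the
# ratio y_i / z_i converges to the average of the initial values.
n = 3
A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
deg = [sum(row) for row in A]
# S[i][j] = (I + A)[i][j] / (1 + d_j), so every column sums to 1
S = [[(1 if i == j else A[i][j]) / (1 + deg[j]) for j in range(n)]
     for i in range(n)]

x0 = [3.0, 6.0, 0.0]            # average is 3
y, z = x0[:], [1.0] * n         # y(0) = x(0), z(0) = 1
for _ in range(200):
    y = [sum(S[i][j] * y[j] for j in range(n)) for i in range(n)]
    z = [sum(S[i][j] * z[j] for j in range(n)) for i in range(n)]
ratios = [y[i] / z[i] for i in range(n)]  # each ratio approaches x_avg
```

The column sums of S being 1 means both 1'y(t) and 1'z(t) are conserved, which is why the ratio recovers the average even though S itself is not doubly stochastic.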

Metropolis Iteration vs. Double Linear Iteration

30-vertex random graphs. (Plot legend: Metropolis; Double Linear.)

Round-Robin-Based Double Linear Iteration

Initialization: y_i(0) = x_i(0), z_i(0) = 1.
Transmission: agent i transmits the pair {y_i(t), z_i(t)} to its preferred neighbor. At the same time, agent i receives the values from the agents j_1, j_2, ..., j_k who have chosen agent i as their current preferred neighbor.
Update: agent i then moves the label of its current preferred neighbor to the end of its queue and updates y_i and z_i using the received values.

No network information is required; n transmissions per iteration. Why does it work?

The sequence S(0), S(1), ... is periodic with period T = lcm{d_1, d_2, ..., d_n}, and each S(t) is left stochastic with positive diagonals. The product P_τ over one period satisfies P_τ^k > 0 for some k > 0, i.e., P_τ is primitive. By Perron-Frobenius, P_τ has a single eigenvalue at 1, of multiplicity 1, with Perron vector q_τ > 0. Then z(t) > 0 for all t < ∞ because each S(t) has positive diagonals, and z(t) > 0 for all t ≤ ∞ because z(t) → {nq_1, nq_2, ..., nq_T} with each q_τ > 0.

Happy Birthday Eduardo!