Local Search Algorithms & Optimization Problems


Local Search Algorithms & Optimization Problems. Based on: Stuart J. Russell, Peter Norvig, Artificial Intelligence: A Modern Approach, 2nd Ed., pp. 110–116.

Why are search algorithms problematic? The solution is usually a path, so we have to keep one or more paths in memory. The state space may be VERY large (it grows exponentially). And what if there is no solution and the state space is INFINITE?

But what if the path to the goal is irrelevant? In the 8-queens problem, the final configuration is what matters, NOT the path! Other examples: VLSI design, factory-floor layout, network topology optimization…

If no path to the goal matters… let's consider a different approach: forget about the path. Generally: iteratively move from one state to a neighboring state until reaching the goal, without retaining the path at all.

2 key advantages: A light-memory search method (usually a constant amount): no search tree; only the current state is represented! OFTEN finds a reasonable solution in large/infinite state spaces!!! Only applicable to problems where the path is irrelevant (e.g., 8-queens, optimization problems).

But how do we find the “best” neighbor??? What we need is some kind of “sense of smell”: a HEURISTIC FUNCTION (also called an objective function) evaluating the “fitness” of all possible neighbor states. We seek its MINIMUM or MAXIMUM. When do we stop? Which problems arise?

State-space landscape (here: one-dimensional). [Figure: the objective function plotted over the state space, with a shoulder, a global maximum, a local maximum, and a flat local maximum marked.]

Hill Climbing (steepest ascent)

    current ← MakeANode(InitialState(problem))
    Repeat:
      neighbor ← highest-valued successor of current
      if Value(neighbor) ≤ Value(current) then return State(current)
      current ← neighbor

Like trying to find the top of Mount Everest while suffering from amnesia. IS THIS COMPLETE???
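To make the loop concrete, here is a minimal runnable sketch of steepest-ascent hill climbing in Python. The successors and value functions are assumed to be supplied by the problem; the names here are ours, not from the slides:

    def hill_climbing(initial_state, successors, value):
        """Steepest-ascent hill climbing: repeatedly move to the best
        neighbor; stop when no neighbor improves on the current state."""
        current = initial_state
        while True:
            neighbors = successors(current)
            if not neighbors:
                return current
            best = max(neighbors, key=value)
            if value(best) <= value(current):
                return current  # a (possibly only local) maximum
            current = best

It returns as soon as no strictly better neighbor exists, which is exactly why it can stop on a local maximum or at the edge of a plateau.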

Application: 8-queens (complete-state formulation), h = number of pairs of queens attacking each other.

    Repeat n times:
      Pick an initial state S at random, with one queen in each column
      Repeat k times:
        S' ← pick a column and move its queen to one of the best spots
        If no better state exists, return S

[Figure: a board S with h = 17; each square shows the h-value of the successor S' obtained by moving that column's queen there (the best successors have h = 12); a solution is reached in 5 steps.]
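As an illustration, one possible encoding of the 8-queens heuristic in Python: a state is a tuple where state[c] is the row of the queen in column c, and h counts attacking pairs. This encoding and the function names are our own:

    from itertools import combinations
    import random

    def h(state):
        """Number of pairs of queens attacking each other."""
        pairs = 0
        for (c1, r1), (c2, r2) in combinations(enumerate(state), 2):
            if r1 == r2 or abs(r1 - r2) == abs(c1 - c2):
                pairs += 1  # same row or same diagonal
        return pairs

    def best_neighbor(state):
        """Move one queen within its column to a best (lowest-h) spot."""
        n = len(state)
        candidates = [state[:c] + (r,) + state[c + 1:]
                      for c in range(n) for r in range(n) if r != state[c]]
        return min(candidates, key=h)

    random.seed(0)
    s = tuple(random.randrange(8) for _ in range(8))
    print(h(s), h(best_neighbor(s)))  # h should drop after one move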

This is GREEDY! We do not try to think ahead (hill climbing is also called greedy local search). GREED is one of the 7 deadly sins!! But it usually works quite well and is VERY RAPID. Why? It is in general easy to improve a bad state, and good states are well distributed over the state space.

Can we fail? Sure. [Figure: the state-space landscape again; hill climbing can get stuck on a local maximum, a flat local maximum (plateau), or a shoulder.]

How bad is it???? Very bad… Starting from a random state, hill climbing on 8-queens gets stuck 86% of the time. But runs are very quick!! Only 4 steps on average to a solution (3 steps to getting stuck).

Can we do better? Overcome plateaus (how?): consider going sideways (but for how long?). By allowing up to 100 consecutive sideways moves we solve 8-queens in 94% of trials!! But success comes at a cost: 21 steps on average for a success and 64 for a failure…

We still strive for completeness!! Random-restart hill climbing – a series of hill climbs from random initial states; complete with probability approaching 1 (a sketch follows below). First-choice hill climbing – generates successors at random until one better than the current state is found; useful when a state has many successors. Stochastic hill climbing – chooses among all uphill moves with probability varying with the steepness of the move; converges more slowly but may find better solutions.
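A random-restart wrapper is just a loop over independent hill climbs. A sketch, reusing the hill_climbing function from earlier; random_state and is_goal are assumed to be problem-supplied:

    def random_restart(random_state, successors, value, is_goal,
                       max_restarts=100):
        """Run independent hill climbs from random initial states until
        one reaches a goal (or the restart budget is exhausted)."""
        best = None
        for _ in range(max_restarts):
            result = hill_climbing(random_state(), successors, value)
            if is_goal(result):
                return result
            if best is None or value(result) > value(best):
                best = result
        return best  # best local maximum found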

What about this kind of state space??? NP-hard problems typically have an exponential number of local maxima to get stuck on.

Simulated annealing

Simulated annealing. Simple hill climbing – NOT COMPLETE. A random walk – COMPLETE but hopelessly inefficient. Let’s try to combine the two. “Try to get a ping-pong ball into the deepest narrow crack by shaking the surface just enough.”

Simulated annealing

    S ← initial state
    Repeat forever:
      T ← schedule(time)        // T is called the "temperature"
      if T = 0 then return S
      S' ← a successor of S picked at random
      Δh ← h(S') − h(S)
      if Δh ≥ 0 then S ← S'
      else S ← S' with probability e^(Δh/T)

Simulated annealing starts with a large T and slowly lowers it over the iterations, so “bad” moves are more likely to be allowed at the start. Proven: if T decreases slowly enough, it will find a global maximum with probability approaching 1.
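A runnable version of the pseudocode; the exponential cooling schedule and its constants below are our own choice (any schedule that lowers T slowly enough behaves the same way):

    import math
    import random

    def simulated_annealing(initial_state, random_successor, h,
                            t0=1.0, cooling=0.995, t_min=1e-4):
        """Maximize h: always accept uphill moves; accept a downhill
        move with probability exp(dh / T), which shrinks as T cools."""
        current = initial_state
        t = t0
        while t > t_min:
            candidate = random_successor(current)
            dh = h(candidate) - h(current)
            if dh >= 0 or random.random() < math.exp(dh / t):
                current = candidate
            t *= cooling  # lower the temperature
        return current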

Questions???

K-clustering in Wireless Ad Hoc Networks using local search. Rachel Ben-Eliyahu-Zohary, Software Engineering Department, Jerusalem College of Engineering. Joint work with Ran Giladi (BGU), Stuart Shieber and Philip Hendrix (Harvard). Builds on K-clustering in mobile ad hoc networks by Yaacov Fernandess and Dahlia Malkhi (Hebrew University of Jerusalem).

Minimum k-clustering. Problem statement: given a graph G = (V, E) and a positive integer k, find the smallest value of ℓ such that there is a partition of V into ℓ disjoint subsets V1, …, Vℓ with diam(G[Vi]) ≤ k for i = 1, …, ℓ. K-clustering is known to be NP-complete for simple undirected graphs.
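To make the definition concrete, here is a small BFS-based checker (plain Python; the names are ours) that verifies a partition is a valid k-clustering. Note that diam(G[Vi]) ≤ k implicitly requires each part’s induced subgraph to be connected:

    from collections import deque

    def induced_diameter(adj, part):
        """Diameter of the subgraph induced by part (inf if disconnected)."""
        part = set(part)
        diam = 0
        for source in part:
            dist = {source: 0}
            queue = deque([source])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if v in part and v not in dist:
                        dist[v] = dist[u] + 1
                        queue.append(v)
            if len(dist) < len(part):
                return float('inf')  # induced subgraph is disconnected
            diam = max(diam, max(dist.values()))
        return diam

    def is_k_clustering(adj, parts, k):
        """parts must partition the nodes of adj, each part inducing
        a subgraph of diameter at most k."""
        covered = sorted(v for part in parts for v in part)
        return (covered == sorted(adj)
                and all(induced_diameter(adj, p) <= k for p in parts))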

K-clustering example, k = 3. [Figure: a k-clustering with k = 3 on a given graph; edge labels show weights of 1 and 2.]

MOTIVATION!!!

Cluster-based Routing Protocol (a hybrid routing protocol). The network is divided into non-overlapping sub-networks (clusters) with bounded diameter. Intra-cluster routing: proactively maintain state information for links within the cluster. Inter-cluster routing: use a reactive route-discovery protocol for determining routes; route requests are propagated via peripheral (border) nodes.

Cluster-based Routing Protocol. Clusters limit the amount of routing information stored and maintained at individual hosts, and they are manageable: node mobility events are handled locally within the cluster, so the far-reaching effects of topological changes are minimized. To cope with node mobility, cluster size can be adjusted according to network stability.

System Model. Two general assumptions about the state of the network’s communication links and topology: (1) The network may be modeled as a unit disk graph. (This is an idealization: real connectivity is a dynamic function of factors such as interference and physical obstacles.) (2) The network topology remains unchanged throughout the execution of the algorithm, i.e., we ignore topological changes during execution.

Unit Disk Graph. [Figure: nodes A–S placed in the plane; the distance between adjacent nodes is ≤ 2, and the distance between non-adjacent nodes is > 2.] An ad hoc network can be viewed as a unit disk graph by viewing every transmitter/receiver in the broadcast network as a point in the plane and representing its effective broadcast range as a unit disk. In the intersection model, n unit disks are represented by an n-node graph with a node per disk and an edge between two nodes iff the corresponding unit disks intersect or are tangent.

Contribution of Fernandess and Malkhi. A two-phase distributed asynchronous polynomial approximation for k-clustering, for k > 1, with a worst-case competitive ratio of O(k): First phase – constructs a spanning tree of the network. Second phase – partitions the spanning tree into sub-trees with bounded diameter. The algorithm achieves flexibility by choosing k as a dynamic parameter of the network.

Second Phase: K-sub-tree. Given a tree T = (V, E), the algorithm finds a minimal sub-tree whose diameter exceeds k, where minimal means that no child sub-tree already has a diameter larger than k. It then detaches the highest child of that sub-tree and repeats on the reduced tree. [Figure: a tree with the root of the offending sub-tree marked; its highest child sub-tree is detached.] A code sketch follows the converge-cast example below.

K-sub-tree converge-cast (k = 4). Each node v sends its height. The height of a tree is the length of the longest path from the root to a leaf, so height(v) is the height of the sub-tree rooted at v. [Figure: a spanning tree with each node labeled by its height; MCDS spanning-tree edges are marked. The tree rooted at one node exceeds k, so its highest child is detached.]

K-sub-tree converge-cast (k = 4), continued. [Figure: the reduced tree; the tree rooted at another node still exceeds k, so its highest child is detached as well.]

K-sub-tree converge-cast (k = 4), continued. [Figure: the remaining tree after the detachments; every sub-tree now has diameter ≤ k.]
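Here is a centralized, recursive sketch of the k-sub-tree rule in Python. The actual algorithm runs distributedly via the converge-cast of heights shown above; this sketch and all names in it are ours. Children’s sub-trees are resolved first (post-order), so each detached sub-tree already has diameter ≤ k:

    def k_subtree_partition(children, root, k):
        """Partition a rooted tree (children: node -> list of children)
        into sub-trees of diameter <= k. Returns the cluster roots."""
        cluster_roots = {root}

        def height(v):
            # Heights of the child sub-trees still attached to v, deepest first.
            hs = sorted(((height(c), c) for c in children.get(v, [])),
                        key=lambda hc: hc[0], reverse=True)
            while hs:
                if len(hs) >= 2:
                    diam = hs[0][0] + hs[1][0] + 2  # path through v, two branches
                else:
                    diam = hs[0][0] + 1             # single branch ending at v
                if diam <= k:
                    break
                _, c = hs.pop(0)       # diameter exceeds k: detach the
                cluster_roots.add(c)   # highest child as its own cluster
            return hs[0][0] + 1 if hs else 0

        height(root)
        return cluster_roots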

Random Descent – reminder

    RANDOM_DESCENT(problem, terminate) returns a solution state
      inputs: problem, a problem
              terminate, a condition for stopping
      local variables: current, a solution state
                       next, a solution state
      current ← InitialState(problem)
      while (not terminate):
        next ← a selected neighbor of current
        ΔE ← Value(next) − Value(current)
        if ΔE < 0 then current ← next
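The same routine in runnable Python; the termination condition is given here simply as a step budget:

    import random

    def random_descent(initial_state, neighbors, value, max_steps=10000):
        """Minimize value: repeatedly pick a random neighbor and move
        to it only if it is strictly better than the current state."""
        current = initial_state
        for _ in range(max_steps):
            nxt = random.choice(neighbors(current))
            if value(nxt) - value(current) < 0:  # ΔE < 0
                current = nxt
        return current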

Initial State (k=2)

K is even (e.g. 2)

K = 2 (cont.)

K = 2 (cont.)

K = 2 (cont.)

K = 2 (cont.)

K = 2 Total: 8 clusters

A better State

Building the neighbor

Building the neighbor

Building the neighbor

Building the neighbor

Experimental Evaluation Randomly Generated Graphs (Unit disk graph model) Grid Graphs

Randomly Generated Graphs. Parameters: n – number of nodes; l – length of a unit. Graph generation: n points are placed randomly on a 1×1 square; two vertices are connected iff the distance between them is less than l.
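This generation procedure is easy to reproduce; a minimal sketch (names ours):

    import random
    from itertools import combinations

    def random_unit_disk_graph(n, l, seed=None):
        """Place n points uniformly at random on the 1x1 square and
        connect two vertices iff their distance is less than l."""
        rng = random.Random(seed)
        pts = [(rng.random(), rng.random()) for _ in range(n)]
        adj = {i: [] for i in range(n)}
        for i, j in combinations(range(n), 2):
            (x1, y1), (x2, y2) = pts[i], pts[j]
            if (x1 - x2) ** 2 + (y1 - y2) ** 2 < l ** 2:
                adj[i].append(j)
                adj[j].append(i)
        return pts, adj

    pts, adj = random_unit_disk_graph(400, 0.1, seed=1)  # e.g., 400 nodes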

400 nodes, k=5

Experiments on Grids

In general, the number of nodes in a maximal cluster on the grid is k²/2 + k + 1 if k is even (e.g., 13 if k = 4), and (k + 1)²/2 if k is odd (e.g., 8 if k = 3).
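A quick sanity check of these counts, under the assumption (ours, inferred from the diamond figures that follow) that a maximal cluster is the set of grid points within L1 distance k/2 of a lattice point when k is even, and the union of two such balls around adjacent points when k is odd:

    def l1_ball(cx, cy, r):
        """Grid points within L1 distance r of (cx, cy)."""
        return {(cx + dx, cy + dy)
                for dx in range(-r, r + 1)
                for dy in range(-r, r + 1)
                if abs(dx) + abs(dy) <= r}

    def max_cluster_size(k):
        if k % 2 == 0:
            return len(l1_ball(0, 0, k // 2))
        r = (k - 1) // 2
        return len(l1_ball(0, 0, r) | l1_ball(1, 0, r))

    print(max_cluster_size(4), max_cluster_size(3))  # 13 8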

A maximal cluster on the grid. [Figure: a diamond-shaped cluster bounded by the lines x + y = r, x + y = r + k, x − y = s, and x − y = s − k.]


A maximal cluster – k is even (k = 4). [Figure: the diamond bounded by x + y = r, x + y = r + 4, x − y = s, and x − y = s − 4.]

A maximal cluster – k is odd (k = 3). [Figure: the diamond bounded by x + y = r, x + y = r + 3, x − y = s, and x − y = s − 3.]

Optimal Clustering for k=4

Optimal Clustering for k=3

Related Work. Local search techniques have been used for network partitioning: simulated annealing and genetic algorithms were tested, but only on very limited network sizes (20–60 nodes). We present solid criteria for evaluating the local search.

Conclusions. A new local search algorithm for k-clustering was introduced. It outperforms the existing distributed algorithm for large k and dense networks. On grids, an optimal clustering can be constructed directly; the local search’s clustering on grids still needs improvement.

Future Work. Improve the algorithm, perhaps by using other methods such as simulated annealing and genetic algorithms. Change the local search algorithm itself. Find an efficient way to repair a solution, e.g., by merging small clusters. Apply local search to other optimization problems in networking.