
1 EECS 349 Machine Learning
Lecture 4: Greedy Local Search (Hill Climbing)
Adapted by Doug Downey from Bryan Pardo, Fall 2007

2 Local search algorithms
We've discussed ways to select a hypothesis h that performs well on training examples, e.g.:
– Candidate-Elimination
– Decision Trees
Another technique that is quite general:
– Start with some (perhaps random) hypothesis h
– Incrementally improve h
Known as local search

3 Example: n-queens
Put n queens on an n × n board with no two queens on the same row, column, or diagonal.

4 Hill-climbing search
"Like climbing Everest in thick fog with amnesia"

h = initialState
loop:
    h' = highest-valued Successor(h)
    if Value(h) >= Value(h')
        return h
    else
        h = h'
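A minimal runnable sketch of this loop in Python; the function and argument names (hill_climb, successors, value) are illustrative, not from the slide:

def hill_climb(initial_state, successors, value):
    # Greedy local search: repeatedly move to the best-valued neighbor
    # and stop as soon as no neighbor improves on the current state.
    h = initial_state
    while True:
        neighbors = successors(h)
        if not neighbors:
            return h
        best = max(neighbors, key=value)
        if value(best) <= value(h):  # local maximum (or plateau): stop
            return h
        h = best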

5 Hill-climbing search
Problem: depending on initial state, can get stuck in local maxima.

6 Underfitting
Overfitting: performance on test examples is much lower than on training examples.
Underfitting: performance on training examples is low.
Two leading causes:
– Hypothesis space is too small/simple
– Training algorithm (i.e., hypothesis search algorithm) stuck in local maxima

7 Hill-climbing search: 8-queens problem
v = number of pairs of queens that are attacking each other, either directly or indirectly
v = 17
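One way to compute v, assuming the board is stored with one queen per row and board[i] giving that queen's column (a sketch; the slide does not fix a representation):

def attacking_pairs(board):
    # board[i] = column of the queen in row i (one queen per row).
    # Count pairs sharing a column or a diagonal; pairs with another
    # queen between them ("indirect" attacks) are still counted.
    n = len(board)
    v = 0
    for i in range(n):
        for j in range(i + 1, n):
            same_column = board[i] == board[j]
            same_diagonal = abs(board[i] - board[j]) == j - i
            if same_column or same_diagonal:
                v += 1
    return v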

8 Hill-climbing search: 8-queens problem
A local minimum with v = 1

9 Simulated annealing search
Idea: escape local maxima by allowing some "bad" moves, but gradually decrease their frequency.

h = initialState
T = initialTemperature
loop:
    h' = random Successor(h)
    ΔV = Value(h') - Value(h)
    if ΔV > 0
        h = h'
    else
        h = h' with probability e^(ΔV/T)
    decrease T; if T == 0, return h
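A Python sketch of the same loop; the geometric cooling schedule and the specific parameter values are illustrative choices, since the slide does not specify how T decreases:

import math
import random

def simulated_annealing(initial_state, successors, value,
                        initial_temperature=1.0, cooling=0.995,
                        min_temperature=1e-3):
    # Accept every improving move; accept a worsening move with
    # probability exp(delta_v / T), which shrinks as T cools.
    h = initial_state
    T = initial_temperature
    while T > min_temperature:
        h_prime = random.choice(successors(h))
        delta_v = value(h_prime) - value(h)
        if delta_v > 0 or random.random() < math.exp(delta_v / T):
            h = h_prime
        T *= cooling  # one common cooling schedule among many
    return h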

10 Properties of simulated annealing
One can prove: if T decreases slowly enough, then simulated annealing search will find a global optimum with probability approaching 1.
Widely used in VLSI layout, airline scheduling, etc.

11 Local beam search
Keep track of k states rather than just one.
Start with k randomly generated states.
At each iteration, all the successors of all k states are generated.
If any one is a goal state, stop; else select the k best successors from the complete list and repeat.
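A Python sketch under assumed helper functions (random_state, successors, value, is_goal are illustrative names, not from the slide):

def local_beam_search(random_state, successors, value, is_goal,
                      k=10, max_iterations=1000):
    # Keep the k best states found so far; expand all of them each round.
    states = [random_state() for _ in range(k)]
    for _ in range(max_iterations):
        candidates = [s for state in states for s in successors(state)]
        if not candidates:
            break
        goals = [s for s in candidates if is_goal(s)]
        if goals:
            return goals[0]
        states = sorted(candidates, key=value, reverse=True)[:k]
    return max(states, key=value)  # best state found if no goal was reached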

12 Gradient Descent
Hill Climbing and Simulated Annealing are "generate and test" algorithms:
– Successor function generates candidates, Value function helps select
In some cases, we can do much better:
– Define: Error(training data D, hypothesis h)
– If h is represented by parameters w1, …, wn and dError/dwi is known, we can compute the error gradient, and descend in the direction that is (locally) steepest
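A small Python sketch of that update; the learning rate, step count, and the example error function are illustrative choices, not from the slide:

def gradient_descent(w, grad_error, learning_rate=0.01, n_steps=1000):
    # w: list of parameters; grad_error(w) returns dError/dw_i for each i.
    # Each step moves w a small amount against the gradient (steepest descent).
    for _ in range(n_steps):
        gradient = grad_error(w)
        w = [w_i - learning_rate * g_i for w_i, g_i in zip(w, gradient)]
    return w

# Example: Error(w) = (w0 - 3)^2 + (w1 + 1)^2, whose gradient is (2(w0 - 3), 2(w1 + 1)).
w_fit = gradient_descent([0.0, 0.0], lambda w: [2 * (w[0] - 3), 2 * (w[1] + 1)])
# w_fit approaches [3, -1], the minimizer of this error.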

13

14 About distance…
Clustering requires distance measures.
Local methods require a measure of "locality".
Search engines require a measure of similarity.
So… when are two things close to each other?

15 Euclidean Distance
What people intuitively think of as "distance"
Dimension 1: x; Dimension 2: y
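For two points a and b in this two-dimensional space, the standard formula is

$$d(\mathbf{a}, \mathbf{b}) = \sqrt{(a_x - b_x)^2 + (a_y - b_y)^2}$$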

16 Generalized Euclidean Distance
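In n dimensions this becomes the standard generalization (the slide's formula itself is not in the transcript):

$$d(\mathbf{a}, \mathbf{b}) = \sqrt{\sum_{i=1}^{n} (a_i - b_i)^2}$$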

17 Weighting Dimensions
Apparent clusters at one scaling of X are not so apparent at another scaling.

18 Weighted Euclidean Distance
You can, of course, compensate by weighting your dimensions…
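With a weight w_i on each dimension, the standard weighted form is

$$d_w(\mathbf{a}, \mathbf{b}) = \sqrt{\sum_{i=1}^{n} w_i (a_i - b_i)^2}$$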

19 More Generalization: Minkowski metric
My three favorites are special cases of this:
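The standard Minkowski (order-p) distance is

$$d_p(\mathbf{a}, \mathbf{b}) = \left( \sum_{i=1}^{n} |a_i - b_i|^p \right)^{1/p}$$

Common special cases are p = 1 (Manhattan distance), p = 2 (Euclidean distance), and p → ∞ (max, or Chebyshev, distance).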

20 What is a "metric"?
A metric has these four qualities.
…otherwise, call it a "measure"
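The four qualities, presumably the standard axioms for a distance function d:

$$\begin{aligned}
& d(a,b) \ge 0 && \text{(non-negativity)} \\
& d(a,b) = 0 \iff a = b && \text{(identity of indiscernibles)} \\
& d(a,b) = d(b,a) && \text{(symmetry)} \\
& d(a,c) \le d(a,b) + d(b,c) && \text{(triangle inequality)}
\end{aligned}$$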

21 Metric, or not?
Driving distance with 1-way streets
Categorical stuff:
– Is distance Jazz -> Blues -> Rock no less than distance Jazz -> Rock?

22 What about categorical variables?
Consider feature vectors for genre & vocals:
– Genre: {Blues, Jazz, Rock, Zydeco}
– Vocals: {vocals, no vocals}
s1 = {rock, vocals}
s2 = {jazz, no vocals}
s3 = {rock, no vocals}
Which two songs are more similar?

23 Binary Features + Hamming distance
s1 = {rock, vocals}
s2 = {jazz, no vocals}
s3 = {rock, no vocals}

        Blues  Jazz  Zydeco  Rock  Vocals
s1        0     0      0      1      1
s2        0     1      0      0      0
s3        0     0      0      1      0

Hamming Distance = number of bits that differ between two binary vectors
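A small Python sketch using the binary vectors above (the list encoding is just one way to write them):

def hamming_distance(a, b):
    # Number of positions at which two equal-length binary vectors differ.
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

# Columns: Blues, Jazz, Zydeco, Rock, Vocals
s1 = [0, 0, 0, 1, 1]  # {rock, vocals}
s2 = [0, 1, 0, 0, 0]  # {jazz, no vocals}
s3 = [0, 0, 0, 1, 0]  # {rock, no vocals}
print(hamming_distance(s1, s3))  # 1 -- s1 and s3 differ only in the Vocals bit
print(hamming_distance(s1, s2))  # 3
print(hamming_distance(s2, s3))  # 2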

24 Hamming Distance
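The standard definition, for binary vectors a and b of length n:

$$d_H(\mathbf{a}, \mathbf{b}) = \sum_{i=1}^{n} \mathbf{1}[a_i \neq b_i]$$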

25 Other approaches…
Define your own distance: f(a, b)

Quote Frequency
            Beethoven  Beatles  Liz Phair
Beethoven       7         0        0
Beatles         4         5        0
Liz Phair       ?         1        2

26 Missing data
What if, for some category, on some examples, there is no value given?
Approaches:
– Discard all examples missing the category
– Fill in the blanks with the mean value
– Only use a category in the distance measure if both examples give a value
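A sketch of the third approach in Python, assuming missing values are stored as None (the slide does not fix a representation):

def distance_ignoring_missing(x, y):
    # Squared-difference distance over only the categories both examples supply.
    shared = [(a, b) for a, b in zip(x, y) if a is not None and b is not None]
    if not shared:
        return float("inf")  # no overlap: treat the pair as maximally far apart
    # Average so examples with different amounts of missing data stay comparable.
    return sum((a - b) ** 2 for a, b in shared) / len(shared)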

27 Dealing with missing data

