
1 Exploration Strategies for Learned Probabilities in Smart Terrain Dr. John R. Sullins Youngstown State University

2 Problem Definition
Agent given a map of the world
Map gives locations where goals may possibly be
Different categories of locations have different probabilities

3 Learned Probabilities
Problem: Agent does not know these probabilities
Agent must learn them from examples [a_i, b_i] of each category C_i
a_i = number of past examples of category C_i where the goal has been present
b_i = number of past examples of category C_i where the goal has not been present

4 Learning with Costs
Agent must physically move to a target to know whether it meets the goal
Cost usually proportional to distance traveled

5 Learning with Costs
Tradeoff: knowledge gained by exploring a target vs. cost of exploring that target
Requires a rational strategy for exploration

6 Outline
Learning as reducing future costs
Beta functions and probabilistic smart terrain
Defining an information gain function
– Estimating extra distances traveled due to errors
– Factoring in category prevalence
Creating an influence map for agent movement
Benchmark and empirical testing

7 Exploration Strategy
Main idea: Exploration now reduces travel time in the future
– t_1 is an instance of category C_1 with prior knowledge [a_1, b_1]
– t_2 is an instance of category C_2 with prior knowledge [a_2, b_2]
[Diagram: agent equidistant, at distance d, from targets t_1 and t_2]

8 Value of Information
Rational action: Move to the target in the more probable category first
Problem: Agent must estimate probabilities from examples
Fewer examples → greater likelihood the estimate is wrong
[Diagram: agent equidistant, at distance d, from targets t_1 and t_2]

9 Value of Information
Probabilities estimated from limited data: p_1 estimate = 0.15, p_2 estimate = 0.2
– Agent will move towards t_2
Suppose the actual probabilities are different: p_1 actual = 0.25, p_2 actual = 0.1
Would have been better to move to t_1 first
[Diagram: agent between targets t_1 and t_2]

10 Value of Information
Agent will have to backtrack to t_1 if the goal is not met by t_2
Expected distance traveled will be greater than if the agent had moved towards t_1 first
Better estimates of probabilities → less travel time
[Diagram: agent between targets t_1 and t_2]

11 Outline
Learning as reducing future costs
Beta functions and probabilistic smart terrain
Defining an information gain function
– Estimating extra distances traveled due to errors
– Factoring in category prevalence
Creating an influence map for agent movement
Benchmark and empirical testing

12 Beta Distribution
Estimate of the probability a category meets the goal, given examples [a, b] of that category:
beta[a, b](θ) = α θ^(a−1) (1 − θ)^(b−1)
"Likelihood" that the actual probability is θ, given [a, b]
Best estimate of the actual probability = Exp(beta[a, b](θ))
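As a concrete illustration, here is a minimal Python sketch (not from the presentation) of the [a, b] pseudo-count representation and the beta-mean point estimate; the CategoryKnowledge class name is an assumption for illustration.

```python
# Minimal sketch (assumed helper, not from the slides): pseudo-counts [a, b]
# for a category and the beta-mean point estimate of its probability.
from dataclasses import dataclass

@dataclass
class CategoryKnowledge:
    a: float  # past examples of this category where the goal was present
    b: float  # past examples of this category where the goal was absent

    def expected_probability(self) -> float:
        # Mean of beta[a, b](theta): best point estimate of the true probability
        return self.a / (self.a + self.b)

print(CategoryKnowledge(2, 6).expected_probability())  # 0.25
print(CategoryKnowledge(4, 4).expected_probability())  # 0.5
```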

13 Beta Distribution
"Narrows" as more examples are explored
More examples → less error in the estimate of θ

14 Probabilistic Smart Terrain
Agent movement in worlds where targets have a probability of meeting the goal
– p_i: probability target i meets the goal
– d_i: distance (in moves) from agent to target i
– Based on targets within d_max moves
For each adjacent tile, computes the expected distance to some target that meets the goal

15 Probabilistic Smart Terrain
Expected number of moves the character must travel from x to a target that meets the goal:
Dist(x) = Σ_{d ≤ d_max} ∏_{d_i < d} (1 − p_i)
– ∏_{d_i < d} (1 − p_i): probability no target within d moves of x meets the goal (assumption of conditional independence)
– Summed over all distances d up to some maximum d_max (otherwise the sum could be infinite)
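A minimal sketch of this computation in Python, assuming the targets near a tile are given as (p_i, d_i) pairs and that the sum runs over d = 1..d_max; both assumptions are illustrative rather than taken from the slides.

```python
# Minimal sketch: expected distance from a tile to a goal-satisfying target,
# following the Dist(x) formula above. Input layout is an assumption.
def expected_distance(targets, d_max):
    """targets: list of (p_i, d_i) pairs for targets near this tile."""
    total = 0.0
    for d in range(1, d_max + 1):
        p_none = 1.0   # probability no target within d moves meets the goal
        for p_i, d_i in targets:
            if d_i < d:
                p_none *= (1.0 - p_i)
        total += p_none
    return total

# One target 3 moves away (p = 0.5), one 6 moves away (p = 0.25)
print(expected_distance([(0.5, 3), (0.25, 6)], d_max=50))
```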

16 Probabilistic Smart Terrain
Compute the expected distance Dist(x) for all tiles x
Agent moves to the adjacent tile with the lowest Dist(x)

17 Outline
Learning as reducing future costs
Beta functions and probabilistic smart terrain
Defining an information gain function
– Estimating extra distances traveled due to errors
– Factoring in category prevalence
Creating an influence map for agent movement
Benchmark and empirical testing

18 Simple Two-target Case
Simple case where the agent must "choose" between two targets to explore
– t_i is an instance of category C_i with prior knowledge [a_i, b_i]
– t_j is an instance of category C_j with prior knowledge [a_j, b_j]
Targets equidistant at distance d
d is the average distance between targets in the world
[Diagram: agent equidistant, at distance d, from targets t_i and t_j]

19 Estimating Distance Traveled
Assume t_i has the higher estimated probability: Exp(beta[a_i, b_i](θ_i)) > Exp(beta[a_j, b_j](θ_j))
Expected distance traveled:
Dist(θ_i, θ_j) = d + 2d(1 − θ_i) + (d_max − 3d)(1 − θ_i)(1 − θ_j)
– d: move to t_i
– 2d(1 − θ_i): backtrack to t_j if t_i does not meet the goal
– (d_max − 3d)(1 − θ_i)(1 − θ_j): case where neither t_i nor t_j meets the goal
[Diagram: agent equidistant, at distance d, from t_i and t_j]
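A one-function sketch of this formula; the numeric values of θ, d, and d_max below are illustrative assumptions.

```python
# Minimal sketch of the two-target expected-distance formula above.
def two_target_distance(theta_i, theta_j, d, d_max):
    return (d                                                    # move to t_i
            + 2 * d * (1 - theta_i)                              # backtrack to t_j if t_i fails
            + (d_max - 3 * d) * (1 - theta_i) * (1 - theta_j))   # neither target meets the goal

print(two_target_distance(theta_i=0.25, theta_j=0.10, d=10, d_max=50))
```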

20 Defining an Error Function
θ_i, θ_j may take on many values
– Likelihood of a particular θ defined by beta[a, b](θ)
Moving to t_i first is an error in cases where θ_i < θ_j
[Diagram: overlapping beta distributions for C_i and C_j, with their expected values marked]

21 Defining an Error Function
Amount of error for a given (θ_i, θ_j) defined as
Err_Dist(θ_i, θ_j) = Dist(θ_i, θ_j) − Dist(θ_j, θ_i)
                   = 2d(θ_j − θ_i) if θ_j > θ_i, 0 otherwise
– Dist(θ_i, θ_j): expected distance if the agent moves to t_i first
– Dist(θ_j, θ_i): expected distance if the agent moves to t_j first

22 Defining an Error Function
Error weighted by the likelihood of θ_i, θ_j (as defined by the beta functions)
Err_Pair([a_i, b_i], [a_j, b_j]) = ∫_0^1 ∫_0^1 Err_Dist(θ_i, θ_j) beta[a_i, b_i](θ_i) beta[a_j, b_j](θ_j) dθ_i dθ_j
– Total error possible given these examples of C_i and C_j
– Summed over all possible combinations of θ_i, θ_j, weighted by their likelihoods
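A sketch of a numerical approximation of Err_Pair on a grid; the presentation does not say how the integral is evaluated, so the grid approach, the grid size, and the use of scipy.stats.beta are assumptions.

```python
# Minimal sketch (assumed grid approximation) of Err_Pair: expected extra
# travel from choosing t_i first, averaged over both beta distributions.
import numpy as np
from scipy.stats import beta

def err_pair(a_i, b_i, a_j, b_j, d, n=200):
    thetas = np.linspace(0.0, 1.0, n + 1)
    w_i = beta.pdf(thetas, a_i, b_i)                  # likelihood of each theta_i
    w_j = beta.pdf(thetas, a_j, b_j)                  # likelihood of each theta_j
    ti, tj = np.meshgrid(thetas, thetas, indexing="ij")
    err = np.where(tj > ti, 2 * d * (tj - ti), 0.0)   # Err_Dist(theta_i, theta_j)
    dtheta = 1.0 / n
    return float((err * np.outer(w_i, w_j)).sum() * dtheta * dtheta)

print(err_pair(2, 6, 4, 4, d=10))
```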

23 Value of Information
Additional values of [a, b] narrow the beta distributions
Narrow distributions allow less error
P(θ_i < θ_j) much smaller
[Diagram: narrowed beta distributions for C_i and C_j]

24 Value of Information
Categories with similar [a, b] may still overlap
However, θ_i and θ_j will likely be similar even if θ_i < θ_j
Err_Dist(θ_i, θ_j) will be very small
[Diagram: overlapping beta distributions for C_i and C_j with similar expected values]

25 Outline
Learning as reducing future costs
Beta functions and probabilistic smart terrain
Defining an information gain function
– Estimating extra distances traveled due to errors
– Factoring in category prevalence
Creating an influence map for agent movement
Benchmark and empirical testing

26 Category Prevalence
Prioritize instances of more prevalent categories
– t_i ∈ category C_i with |C_i| instances in the world
– t_j ∈ category C_j with |C_j| instances in the world
– |C_i| >> |C_j| (many more instances of C_i)
More benefit to be gained by exploring t_i
[Diagram: agent between targets t_i and t_j]

27 Category Pair Likelihood
Agent is between two targets in different categories
What is the likelihood those categories are C_i and C_j?
L(C_i, C_j) = |C_i||C_j| / (|C_total| (|C_total| − |C_i|)) + |C_i||C_j| / (|C_total| (|C_total| − |C_j|))
C_total = total number of targets in all categories
[Diagram: agent equidistant, at distance d, from targets t_i and t_j]
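A small sketch of this likelihood; the reconstruction of the two terms (covering either category being drawn first) is an assumption read off the flattened formula above.

```python
# Minimal sketch of the category-pair likelihood L(C_i, C_j).
# n_i, n_j: number of targets in each category; n_total: targets in all categories.
def pair_likelihood(n_i, n_j, n_total):
    return (n_i * n_j / (n_total * (n_total - n_i))
            + n_i * n_j / (n_total * (n_total - n_j)))

print(pair_likelihood(n_i=8, n_j=4, n_total=30))
```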

28 Category Error Measure
Total error measure for category C_i based on its relationship to all other categories C_j:
– Error Err_Pair([a_i, b_i], [a_j, b_j]) relative to that category (based on the overlap of their beta functions)
– Likelihood L(C_i, C_j) that the agent must choose between two targets in those categories
Err_Cat(C_i, [a_i, b_i]) = Σ_{j ≠ i} Err_Pair([a_i, b_i], [a_j, b_j]) L(C_i, C_j)
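Continuing the earlier sketches, Err_Cat can be computed as below; it reuses the err_pair and pair_likelihood functions sketched above, and the dict layout for category data is an assumption.

```python
# Minimal sketch of Err_Cat: pairwise error against every other category,
# weighted by how likely the agent is to face that category pair.
# categories: assumed dict mapping name -> (a, b, prevalence count).
def err_cat(cat_i, categories, d, n_total):
    a_i, b_i, n_i = categories[cat_i]
    total = 0.0
    for cat_j, (a_j, b_j, n_j) in categories.items():
        if cat_j == cat_i:
            continue
        total += err_pair(a_i, b_i, a_j, b_j, d) * pair_likelihood(n_i, n_j, n_total)
    return total
```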

29 Defining Information Gain
Information gain from exploring an instance of C_i: how incrementing [a_i, b_i] would decrease Err_Cat(C_i, [a_i, b_i]) by narrowing the beta function
Gain(C_i, [a_i, b_i]) = Err_Cat(C_i, [a_i, b_i]) − Err_Cat(C_i, [a_i′, b_i′])
– Err_Cat(C_i, [a_i, b_i]): current error before the target is explored
– Err_Cat(C_i, [a_i′, b_i′]): estimated error if the target were explored

30 Defining Information Gain
Problem: Do not know whether a given target meets the goal until it is explored
– Do not know whether it would increment a_i or b_i
Solution: Estimate from the current expected value Exp(beta[a_i, b_i](θ_i))
[a_i′, b_i′] = [a_i + Exp(beta[a_i, b_i](θ_i)), b_i + (1 − Exp(beta[a_i, b_i](θ_i)))]
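Putting the last two slides together, a sketch of the gain computation, again reusing the err_cat sketch above; the data layout is an assumption.

```python
# Minimal sketch of Gain(C_i, [a_i, b_i]): update the pseudo-counts by the
# expected outcome, then compare the category error before and after.
def information_gain(cat_i, categories, d, n_total):
    a_i, b_i, n_i = categories[cat_i]
    p = a_i / (a_i + b_i)                              # Exp(beta[a_i, b_i](theta_i))
    before = err_cat(cat_i, categories, d, n_total)
    updated = dict(categories)
    updated[cat_i] = (a_i + p, b_i + (1 - p), n_i)     # [a_i', b_i']
    after = err_cat(cat_i, updated, d, n_total)
    return before - after
```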

31 Example of Information Gain
Example: Information gain for categories with prior knowledge [2, 6] and [4, 4]
– Same prevalence, average distance d = 10

New examples | Category [4, 4] | Category [2, 6]
1            | 1.941           | 2.119
2            | 1.597           | 1.727
3            | 1.336           | 1.435
4            | 1.133           | 1.212
5            | 0.972           | 1.038

32 Prior Category Knowledge
More existing examples → less valuable future examples become
Preference given to categories about which less is known

33 Outline
Learning as reducing future travel costs
Beta functions and probabilistic smart terrain
Defining an information gain function
– Estimating extra distances traveled due to errors
– Factoring in category prevalence
Creating an influence map for agent movement
Benchmark and empirical testing

34 Influence Maps
Targets influence nearby agents
– Influence = information gain of the target's category
Influence decreases with distance from the target
Agent moves in the direction of increasing influence

35 Falloff Function
Inverse function used to decrease influence over distance:
Influence(t) = Gain(C_i, [a_i, b_i]) / (1 + t / d)
– t = distance in tiles
– d = average distance between targets
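As a quick sketch of the falloff (the gain, distance, and d values are illustrative):

```python
# Minimal sketch of the inverse falloff above.
def influence(gain, t, d):
    return gain / (1.0 + t / d)

print(influence(gain=2.119, t=5, d=10))   # influence 5 tiles from the target
```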

36 Combining Influences
Question: How should influences from multiple targets be combined?
Goal: Prioritize exploring groups of targets
– |C_i| ≈ |C_j| ≈ |C_k| (similar prevalence)
– |[a_i, b_i]| ≈ |[a_j, b_j]| ≈ |[a_k, b_k]| (similar amounts of prior information)
Can quickly explore both t_i and t_k by moving left
[Diagram: agent with targets t_i and t_k to its left and t_j to its right; prior information and prevalence similar]

37 Additive Combined Influences
Influences from targets in different categories are added to compute the total influence at a tile
Inverse falloff function chosen to minimize the possibility of local maxima in the influence map

38 Influences in Single Category
Information gain decreases for each target explored in the same category
Decrease must be factored into the influence map
[Diagram: agent with three same-category targets t_i1, t_i2, t_i3 and successive gains 2.119, 1.727, 1.435]

39 Computing Total Influence
Influence at tile t from all targets in category C_i:
TotalInfluence(t, i) = Σ_k Gain(C_i, [a_i, b_i], k) / (1 + t_k / d)
– t_k = distance to the k-th nearest target
– Gain(C_i, [a_i, b_i], k) = expected information gain from the k-th example
Influence at tile t from targets in all categories:
TotalInfluence(t) = Σ_i Σ_k Gain(C_i, [a_i, b_i], k) / (1 + t_k / d)
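A sketch of the combined influence at a single tile; the input layout (per-category gain sequences and nearest-first target distances) is an assumption for illustration.

```python
# Minimal sketch of TotalInfluence(t). gains_by_category[c][k] is the expected
# gain from the (k+1)-th target of category c explored; distances_by_category[c]
# lists distances from this tile to that category's targets.
def total_influence(gains_by_category, distances_by_category, d_avg):
    total = 0.0
    for cat, distances in distances_by_category.items():
        gains = gains_by_category[cat]
        for k, t_k in enumerate(sorted(distances)):   # nearest target gets the first gain
            if k >= len(gains):
                break
            total += gains[k] / (1.0 + t_k / d_avg)   # inverse falloff per target
    return total

print(total_influence({"A": [2.119, 1.727, 1.435]}, {"A": [4, 9, 15]}, d_avg=10))
```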

40 Updating the Influence Map
Influence map computed for all tiles in the area of the agent
Agent moves in the direction of increasing influence until some target t_i is reached
Agent determines whether the target meets the goal, and increments either a_i or b_i for its category C_i
Information gain recomputed for all categories
Influence map recomputed (with t_i removed)

41 Updating the Influence Map

42 Outline
Learning as reducing future travel costs
Beta functions and probabilistic smart terrain
Defining an information gain function
– Estimating extra distances traveled due to errors
– Factoring in category prevalence
Creating an influence map for agent movement
Benchmark and empirical testing

43 Prior Knowledge Benchmark
Instance of a category with knowledge [1, 2]
Instance of a category with knowledge [2, 4]
– Category prevalence similar
Agent should move towards the instance of the category with less knowledge

44 Category Prevalence Benchmark
Instance of a category with two instances
Instance of a category with a single instance
– Prior knowledge of both = [1, 2]
Agent should move towards the instance of the category with greater prevalence

45 Much Closer Distance Benchmark
Knowledge = [10, 15] and prevalence = 7
Knowledge = [8, 12] and prevalence = 8
Even though the farther target has better information gain and prevalence, the agent should move towards the significantly closer target

46 Large-scale Testing
30 x 20 world (with obstacles)
4 categories of targets
Targets placed randomly for each trial
Probability a tile contains a target = 0.05

47 Category Data

Category | Prevalence | Actual probability | Prior knowledge [a, b]
A        | 0.2        | 0.1                | [10, 90]
B        | 0.2        | 0.1                | [1, 3]
C        | 0.3        | 0.25               | [1, 5]
D        | 0.3        | 0.25               | [25, 75]

B: high priority due to information gain
C: somewhat high priority due to category prevalence

48 Importance of Learning
Limited category data can cause errors in estimated probabilities
This can lead to incorrect decisions about which target to move to next

Category | Actual p | Prior knowledge
A        | 0.1      | [10, 90]
B        | 0.1      | [1, 3]
C        | 0.25     | [1, 5]
D        | 0.25     | [25, 75]

Overestimates probability of B (estimate 0.25 vs. actual 0.1) – moves towards its instances too often
Underestimates probability of C (estimate ~0.17 vs. actual 0.25) – ignores its instances too often

49 Does the Learning Strategy Work?
100 trials with targets randomly placed
For each trial, agent given 50 moves for learning
– Influence map generated
– Agent followed influence map to a target
– Actual probabilities used to update [a, b] for that category
– Information gains updated and map recomputed
Question: Which categories were explored most?

50 Does the Learning Strategy Work?
Average number of each category explored per trial:

Category | Average explored per trial
A        | 1.17
B        | 2.52
C        | 3.66
D        | 2.10

B and C explored most: greater information gain (C also has higher prevalence)

51 Is the Learning Strategy Useful?
Does the information gain strategy reduce future search time for targets that meet goals?
Comparison of results to a simpler "naïve" strategy
– During the learning phase, simply move to the closest target instead of computing information gains

52 Training and Testing
Training phase:
– Learning strategy (information gain or naïve) used to move the agent for 50 moves
– Each time a target in category C_i is reached, update its [a_i, b_i] based on the actual category probabilities
– Product of learning: estimated probability p_i for each category, computed as Exp(beta[a_i, b_i](θ_i))

53 Training and Testing
Testing phase:
– Agent placed at every location in the world (536 non-wall tiles)
– Existing probabilistic smart terrain algorithm used to search for a target that meets the goal from that point
– Based on the estimated probabilities from the training phase
Question: How many moves were required on average to find a goal?

54 Results of Testing
100 trials using both naïve and information gain learning
Information gain learning focused on categories about which less was known (B and C)
– More accurate estimated probabilities
– Less travel time due to moving to wrong targets

Strategy         | Average tiles explored until goal found
Information gain | 5.294
Naïve            | 6.473

55 Ongoing Work
Learning while acting to meet goals
– Agent must meet current needs (which presumably have some urgency)
– Agent must also explore to learn knowledge to better meet future needs
Tradeoff:
– Costs of not meeting current needs while exploring
– Costs of extra travel in the future if exploration is not done now

56 Ongoing Work
Learning in hierarchical worlds
– Agent does not know the exact location of all targets
– Agent only knows the expected number in a given region
– Will not know what a region actually contains until moving to it
[Diagram: unexplored regions with expected target counts, e.g., Exp(C_1) = 3.2, Exp(C_2) = 2.4 in one region and Exp(C_1) = 1.7, Exp(C_2) = 4.5 in another]

57 Exploration Strategies for Learned Probabilities in Smart Terrain Dr. John R. Sullins Youngstown State University

