Announcements
- Project 1 is due Tuesday, October 16. Send me the name of your Konane bot.
- The midterm is Thursday, October 18. Bring one 8.5 x 11 sheet of paper with anything you want written on it.
- The book review is due Thursday, October 25.
- Andrew presents the current event. Any volunteer for Tuesday?
Introduction to Machine Learning (Lecture 14)
Effects of Programs that Learn
Application areas:
- Learning from medical records which treatments are most effective for new diseases
- Houses that learn from experience to optimize energy costs based on the usage patterns of their occupants
- Personal software assistants that learn the evolving interests of their users in order to highlight especially relevant stories from the online morning newspaper
Effective Applications of Learning
- Speech recognition: learning methods outperform all other approaches that have been attempted to date
- Data mining: learning algorithms are used to discover valuable knowledge in large commercial databases, e.g. to detect fraudulent use of credit cards
- Game playing: programs play backgammon at levels approaching the performance of human world champions
Learning Programs
A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E.
Examples:
- A checkers learning problem:
  - Task T: playing checkers
  - Performance measure P: percent of games won against opponents
  - Training experience E: playing practice games against itself
- A handwriting recognition learning problem:
  - Task T: recognizing and classifying handwritten words within images
  - Performance measure P: percent of words correctly classified
  - Training experience E: a database of handwritten words with given classifications
Designing a Learning System
Consider designing a program to learn to play checkers, with the goal of entering it in the world checkers tournament. This requires the following design choices:
- Choosing the training experience
- Choosing the target function
- Choosing a representation for the target function
- Choosing a function approximation algorithm
Choosing the Training Experience (1)
Will the training experience provide direct or indirect feedback?
- Direct feedback: the system learns from examples of individual checkers board states and the correct move for each
- Indirect feedback: move sequences and final outcomes of various games played
  - Credit assignment problem: the value of early states must be inferred from the outcome
To what degree does the learner control the sequence of training examples?
- The teacher selects informative board states and gives the correct move
- The learner proposes board states that it finds particularly confusing, and the teacher provides the correct moves
- The learner controls the board states and the (indirect) training classifications
Choosing the Training Experience (2)
How well does the training experience represent the distribution of examples over which the final system performance P will be measured?
- If the checkers program trains only on games played against itself, it may never encounter crucial board states that are likely to be played by the human checkers champion
- Most of the theory of machine learning rests on the assumption that the distribution of training examples is identical to the distribution of test examples
Partial Design of the Checkers Learning Program
A checkers learning problem:
- Task T: playing checkers
- Performance measure P: percent of games won in the world tournament
- Training experience E: games played against itself
Remaining choices:
- The exact type of knowledge to be learned
- A representation for this target knowledge
- A learning mechanism
Choosing the Target Function (1)
- Assume the program can already determine the legal moves; it needs to learn the best move from among them
- This defines a large search space that is known a priori
- Target function: ChooseMove : B → M
  - ChooseMove is difficult to learn given only indirect training
- Alternative target function: an evaluation function that assigns a numerical score to any given board state
  - V : B → ℝ (where ℝ is the set of real numbers)
- V(b) for an arbitrary board state b in B:
  - if b is a final board state that is won, then V(b) = 100
  - if b is a final board state that is lost, then V(b) = -100
  - if b is a final board state that is drawn, then V(b) = 0
  - if b is not a final state, then V(b) = V(b'), where b' is the best final board state that can be achieved starting from b and playing optimally until the end of the game
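As a rough illustration only (the `game` rule interface is an assumption, not part of the slides), the last clause amounts to a full game-tree search, which is exactly why this definition is not operational:

```python
def V(board, game, our_turn=True):
    """Recursive definition of V from the slide, written as a sketch.
    `game` is an assumed object providing the checkers rules:
    game.is_final(b), game.won(b), game.lost(b), game.successors(b)."""
    if game.is_final(board):
        return 100 if game.won(board) else (-100 if game.lost(board) else 0)
    values = [V(b, game, not our_turn) for b in game.successors(board)]
    # "Playing optimally until the end of the game" means searching every
    # line of play down to a final board (a minimax search), which is intractable.
    return max(values) if our_turn else min(values)
```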
Choosing the Target Function (2)
- V(b) gives a recursive definition for board state b
- It is not usable because it is not efficient to compute except in the first three trivial cases; it is a nonoperational definition
- The goal of learning is to discover an operational description of V
- Learning the target function is therefore often referred to as function approximation
Choosing a Representation for the Target Function
- The choice of representation involves trade-offs
  - A very expressive representation allows a close approximation to the ideal target function V
  - The more expressive the representation, the more training data is required to choose among the alternative hypotheses it can express
- Use a linear combination of the following board features (the formula appears after the list):
  - x1: the number of black pieces on the board
  - x2: the number of red pieces on the board
  - x3: the number of black kings on the board
  - x4: the number of red kings on the board
  - x5: the number of black pieces threatened by red (i.e. which can be captured on red's next turn)
  - x6: the number of red pieces threatened by black
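The linear combination itself is not spelled out above; presumably it takes the standard form, with the weights w0 through w6 being the values the program must learn:

```latex
\hat{V}(b) = w_0 + w_1 x_1 + w_2 x_2 + w_3 x_3 + w_4 x_4 + w_5 x_5 + w_6 x_6
```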
Partial Design of the Checkers Learning Program (continued)
A checkers learning problem:
- Task T: playing checkers
- Performance measure P: percent of games won in the world tournament
- Training experience E: games played against itself
- Target function: V : Board → ℝ
- Target function representation: V̂(b) as a linear combination of the six board features, with weights to be learned
Choosing a Function Approximation Algorithm
- To learn V̂ we require a set of training examples, each describing a board state b together with a training value V_train(b)
- Each training example is an ordered pair ⟨b, V_train(b)⟩
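For illustration (this particular pair is not given above), a final board that black has won, with three black pieces including one king remaining and no red pieces left, could appear as the training example:

⟨⟨x1 = 3, x2 = 0, x3 = 1, x4 = 0, x5 = 0, x6 = 0⟩, +100⟩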
Estimating Training Values
- We need to assign specific scores to intermediate board states
- Approximate the training value of an intermediate board state b using the learner's current approximation of the value of the board state that follows b
- This simple approach works well in practice
- The estimates are more accurate for states closer to the end of the game
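In symbols, the rule just described is (Successor(b) denotes the next board state following b at which it is again the program's turn to move):

```latex
V_{train}(b) \leftarrow \hat{V}(\mathit{Successor}(b))
```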
Adjusting the Weights
- Choose the weights w_i to best fit the set of training examples
- Minimize the squared error E between the training values and the values predicted by the hypothesis (written out below)
- We require an algorithm that
  - will incrementally refine the weights as new training examples become available, and
  - will be robust to errors in these estimated training values
- Least Mean Squares (LMS) is one such algorithm
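The squared error being minimized, summed over the training examples, takes the usual form:

```latex
E \equiv \sum_{\langle b,\ V_{train}(b)\rangle \in \text{training examples}} \left(V_{train}(b) - \hat{V}(b)\right)^{2}
```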
LMS Weight Update Rule
For each training example ⟨b, V_train(b)⟩:
- Use the current weights to calculate V̂(b)
- For each weight w_i, update it as
  w_i ← w_i + η (V_train(b) − V̂(b)) x_i
  where η is a small constant (e.g. 0.1) that moderates the size of the update
A small code sketch follows.
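A minimal sketch of one LMS step (the list-based weight and feature representation, with x[0] = 1 standing in for the constant term w0, is an assumption for illustration):

```python
def v_hat(w, x):
    """Current approximation V-hat(b): a linear combination of the board features."""
    return sum(wi * xi for wi, xi in zip(w, x))

def lms_update(w, x, v_train, eta=0.1):
    """One LMS step: move each weight in proportion to the error and to x_i."""
    error = v_train - v_hat(w, x)
    return [wi + eta * error * xi for wi, xi in zip(w, x)]

# Example: start from zero weights and apply one update for a won final board.
w = [0.0] * 7                      # w0 .. w6
x = [1, 3, 0, 1, 0, 0, 0]          # constant term 1, then features x1 .. x6
w = lms_update(w, x, v_train=100)
print(w)
```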
Final Design
The final design combines four modules in a loop:
- Experiment Generator: proposes a new problem (an initial game board)
- Performance System: plays the game, producing a solution trace (the game history)
- Critic: turns the solution trace into training examples
- Generalizer: produces the hypothesis (the learned evaluation function) used to play the next game
Summary of Design Choices
- Determine the type of training experience: games against itself, a table of correct moves, games against experts, ...
- Determine the target function: Board → value, Board → move, ...
- Determine the representation of the learned function: a linear function of six features, a polynomial, an artificial neural network, ...
- Determine the learning algorithm: gradient descent, linear programming, ...
- Complete the design
Training Classification Problems
- Many learning problems involve classifying inputs into a discrete set of possible categories
- Learning is only possible if there is a relationship between the data and the classifications
- Training involves providing the system with data which has been manually classified
- Learning systems use the training data to learn to classify unseen data
Rote Learning
- A very simple learning method: it simply memorizes the classifications of the training data
- It can only classify previously seen data; unseen data cannot be classified by a rote learner
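A minimal sketch of a rote learner (the class name and interface are assumptions for illustration):

```python
class RoteLearner:
    """Memorizes training pairs; can only classify instances it has already seen."""

    def __init__(self):
        self.memory = {}

    def train(self, instance, label):
        self.memory[instance] = label

    def classify(self, instance):
        # Unseen instances simply cannot be classified.
        return self.memory.get(instance, None)

learner = RoteLearner()
learner.train(("Sunny", "Warm"), "Yes")
print(learner.classify(("Sunny", "Warm")))   # Yes
print(learner.classify(("Rainy", "Cold")))   # None: never seen before
```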
Concept Learning
- Concept learning involves determining a mapping from a set of input variables to a Boolean value
- Such methods are known as inductive learning methods
- If a function can be found which maps the training data to correct classifications, then it will hopefully also work well for unseen data
- This process is known as generalization
Example Learning Task
Learn the "days on which my friend Aldo enjoys his favorite water sport":

Example | Sky   | AirTemp | Humidity | Wind   | Water | Forecast | EnjoySport
1       | Sunny | Warm    | Normal   | Strong | Warm  | Same     | Yes
2       | Sunny | Warm    | High     | Strong | Warm  | Same     | Yes
3       | Rainy | Cold    | High     | Strong | Warm  | Change   | No
4       | Sunny | Warm    | High     | Strong | Cool  | Change   | Yes
Hypotheses
A hypothesis is a vector of constraints, one for each attribute. Each constraint can:
- indicate with a "?" that any value is acceptable for this attribute
- specify a single required value for the attribute
- indicate with a "Ø" that no value is acceptable
If some instance x satisfies all the constraints of hypothesis h, then h classifies x as a positive example (h(x) = 1). An example hypothesis for EnjoySport is sketched below.
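A minimal sketch of how such a hypothesis classifies an instance (the tuple encoding and the particular hypothesis are illustrative assumptions, not from the slides):

```python
def satisfies(instance, hypothesis):
    """True if every attribute constraint of the hypothesis accepts the instance."""
    return all(c != "Ø" and (c == "?" or c == a)
               for a, c in zip(instance, hypothesis))

def classify(instance, hypothesis):
    return 1 if satisfies(instance, hypothesis) else 0

# Illustrative hypothesis: enjoy sport on sunny days with strong wind,
# regardless of the other attributes (attribute order as in the table above).
h = ("Sunny", "?", "?", "Strong", "?", "?")
x = ("Sunny", "Warm", "Normal", "Strong", "Warm", "Same")
print(classify(x, h))   # 1
```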
EnjoySport Concept Learning Task
Given:
- Instances X: possible days, each described by the attributes
  - Sky (with possible values Sunny, Cloudy, and Rainy)
  - AirTemp (with values Warm and Cold)
  - Humidity (with values Normal and High)
  - Wind (with values Strong and Weak)
  - Water (with values Warm and Cool)
  - Forecast (with values Same and Change)
- Hypotheses H: each hypothesis is described by a conjunction of constraints on the attributes; each constraint may be "?", "Ø", or a specific value
- Target concept c: EnjoySport : X → {0, 1}
- Training examples D: positive and negative examples of the target function
Determine:
- A hypothesis h in H such that h(x) = c(x) for all x in X
Inductive Learning Hypothesis
- Ideally, we would like to determine a hypothesis h identical to the target concept c over the entire set of instances X
- However, the only information available about c is its values over the training examples
- Inductive learning therefore at best guarantees that the output hypothesis fits the target concept over the training data
- The fundamental assumption of inductive learning is the inductive learning hypothesis: any hypothesis found to approximate the target function well over a sufficiently large set of training examples will also approximate the target function well over other unobserved examples
Concept Learning as Search
- Concept learning can be viewed as a search through a large space of hypotheses implicitly defined by the hypothesis representation
- The goal is to find the hypothesis that best fits the training examples
- How big is the hypothesis space? In EnjoySport there are six attributes: Sky has 3 values and the rest have 2
  - How many distinct instances are there?
  - How many distinct hypotheses are there?
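Working these counts out from the attribute values listed above:
- Distinct instances: 3 · 2 · 2 · 2 · 2 · 2 = 96
- Syntactically distinct hypotheses: 5 · 4 · 4 · 4 · 4 · 4 = 5120, since each attribute constraint may also be "?" or "Ø"
- Semantically distinct hypotheses: 1 + (4 · 3 · 3 · 3 · 3 · 3) = 973, since every hypothesis containing a "Ø" classifies all instances as negative, so all such hypotheses are equivalent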
General to Specific Ordering
- The most general hypothesis represents the idea that every day is a positive example:
  h_g = ⟨?, ?, ?, ?, ?, ?⟩
- The most specific hypothesis says that no day is a positive example:
  h_s = ⟨Ø, Ø, Ø, Ø, Ø, Ø⟩
- We can define a partial order over the set of hypotheses: h1 >_g h2 states that h1 is more general than h2
- Given hypotheses h_j and h_k, h_j is more_general_than_or_equal_to h_k if and only if any instance that satisfies h_k also satisfies h_j (a sketch of this check follows)
- One learning method is to determine the most specific hypothesis that matches all the training data
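A minimal sketch of the more_general_than_or_equal_to check for conjunctive hypotheses like these (the tuple encoding is an assumption for illustration):

```python
def more_general_or_equal(hj, hk):
    """True iff every instance that satisfies hk also satisfies hj."""
    if any(c == "Ø" for c in hk):
        return True          # hk accepts no instance at all, so anything covers it
    # Otherwise compare attribute by attribute: "?" covers any constraint,
    # while a specific value covers only the same specific value.
    return all(cj == "?" or cj == ck for cj, ck in zip(hj, hk))

# The hypothesis that drops the Wind constraint is the more general one.
print(more_general_or_equal(("Sunny", "?", "?", "?", "?", "?"),
                            ("Sunny", "?", "?", "Strong", "?", "?")))   # True
print(more_general_or_equal(("Sunny", "?", "?", "Strong", "?", "?"),
                            ("Sunny", "?", "?", "?", "?", "?")))        # False
```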
Partial Ordering
[Figure: instances X (x1, x2) and hypotheses H (h1, h2, h3) arranged from specific to general, illustrating the partial ordering between hypotheses.]
Find-S: Finding a Maximally Specific Hypothesis
1. Initialize h to the most specific hypothesis in H
2. For each positive training instance x:
   - For each attribute constraint a_i in h:
     - If the constraint a_i is satisfied by x, then do nothing
     - Else replace a_i in h by the next more general constraint that is satisfied by x
3. Output hypothesis h
Begin with h ← ⟨Ø, Ø, Ø, Ø, Ø, Ø⟩ and trace the algorithm on the EnjoySport training examples in the table above; a runnable sketch follows.
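A minimal runnable sketch of Find-S on the EnjoySport data (the tuple encoding of examples and hypotheses is an assumption for illustration):

```python
examples = [
    (("Sunny", "Warm", "Normal", "Strong", "Warm", "Same"),   "Yes"),
    (("Sunny", "Warm", "High",   "Strong", "Warm", "Same"),   "Yes"),
    (("Rainy", "Cold", "High",   "Strong", "Warm", "Change"), "No"),
    (("Sunny", "Warm", "High",   "Strong", "Cool", "Change"), "Yes"),
]

def find_s(examples, n_attributes=6):
    h = ["Ø"] * n_attributes            # start with the most specific hypothesis
    for x, label in examples:
        if label != "Yes":              # Find-S ignores negative examples
            continue
        for i, (ai, xi) in enumerate(zip(h, x)):
            if ai == "Ø":
                h[i] = xi               # first positive example: copy its value
            elif ai != "?" and ai != xi:
                h[i] = "?"              # generalize: any value becomes acceptable
    return h

print(find_s(examples))   # ['Sunny', 'Warm', '?', 'Strong', '?', '?']
```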