Kansas State University Department of Computing and Information Sciences CIS 830: Advanced Topics in Artificial Intelligence Wednesday, January 26, 2000.

Presentation transcript:

Kansas State University Department of Computing and Information Sciences
CIS 830: Advanced Topics in Artificial Intelligence
Wednesday, January 26, 2000 (Given Friday, January 28, 2000)
Steve Gustafson, Department of Computing and Information Sciences, KSU
Readings: "Iterated Phantom Induction: A Little Knowledge Can Go a Long Way", Brodie and DeJong
Analytical Learning Presentation (2 of 4): Iterated Phantom Induction
Lecture 4

Presentation Overview

Paper
– "Iterated Phantom Induction: A Little Knowledge Can Go a Long Way"
– Authors: Mark Brodie and Gerald DeJong, Beckman Institute, University of Illinois at Urbana-Champaign

Overview
– Learning in failure domains by using phantom induction
– Goals: no need to rely on positive examples, or on as many examples as conventional learning methods require
– Phantom induction
  Knowledge representation: a collection of points manipulated by convolution, linear regression, Fourier methods, or neural networks
  Idea: perturb failures into successes, then train the decision function on those "phantom" successes

Issues
– Can phantom points be used to learn effectively?
– Key strengths: robust learning method; convergence seems inevitable
– Key weakness: domain knowledge for other applications?

Outline

Learning in Failure Domains
– An example: the basketball "bank shot"
– Conventional methods versus phantom induction
– Process figure from the paper

The Domain
– Air-hockey environment

Domain Knowledge
– Incorporating prior knowledge to explain world events
– Using prior knowledge to direct learning

The Algorithm
– The iterated phantom induction algorithm
– Fitness measure, inductive algorithm, and methods

Interpretation
– Results
– Interpretation graphic: explaining a phenomenon

Summary

Learning in Failure Domains

Example: learning to make a "bank shot" in basketball - we must fail to succeed.

[Figure: basketball bank shot - ball, backboard, and the shot's distance, velocity, and angle]

Learning in Failure Domains

Conventional learning methods
– Using conventional learning methods in failure domains can require many, many examples before a good approximation to the target function is learned
– Failure domains may require prior domain knowledge, which can be hard to encode in conventional methods such as neural networks and genetic algorithms

Phantom decision method (sketched below)
– Propose a problem, generate a solution, observe the outcome, explain it, and derive a "fix" (this assumes the solution resulted in a failure)
– The "fix" added to the previous solution creates a "phantom" solution, which should carry the original problem to the goal
– Domain knowledge is used to explain the solution's results; only perfect domain knowledge leads to a perfect phantom solution
– After collecting phantom points, an INDUCTIVE algorithm is used to develop a new decision strategy
– Another problem is then proposed: a new solution is generated and observed, a phantom decision is found, and the decision strategy is updated again
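As a minimal illustration of the "fix" step described above, one failed trial can be turned into a phantom training example. All names here (explain_fix in particular, standing in for the domain-knowledge error function) are hypothetical, not taken from the paper.

    def phantom_example(problem, solution, observed_outcome, explain_fix):
        fix = explain_fix(observed_outcome)   # how far off was the attempt?
        phantom_solution = solution + fix     # what "should" have been done
        return (problem, phantom_solution)    # train on this pair as a success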

Learning in Failure Domains

[Figure 1: Interactive Learning - block diagram with components Problem, World, Decision Strategy, Hypothesized Solution, Observed World Behavior, Behavior Evaluation, Learning Module, and Revised Strategy. Recreated from "Iterated Phantom Induction: A Little Knowledge Can Go a Long Way", Brodie and DeJong, AAAI, 1998]

The Domain

Air-hockey table
– Everything is fixed except the angle at which the puck is released
– The paddle is moved to direct the puck to the goal
– Highly non-linear relationship between the puck's release angle and the paddle's offset (is this due to the effort to simulate the real world?)

[Figure 2: Air-Hockey - puck, paddle, goal, paddle offset, angles a and b, and observed error d. Modified from "Iterated Phantom Induction: A Little Knowledge Can Go a Long Way", Brodie and DeJong, AAAI, 1998]

Domain Knowledge

– f* is the ideal function that produces the paddle offset which puts the puck in the goal, determined from the puck's angle a
– The learning problem is to approximate f*
– e* is the ideal function that produces the correct offset correction from the error d observed after applying f(a)
– e*(d, a) + f(a) should place the puck in the goal
– Both f* and e* are highly non-linear and would require perfect domain knowledge
– So the system needs to approximate e* well enough that it can adequately approximate f*
– What domain knowledge is needed to approximate e*?
  As angle b increases, error d increases
  As offset increases, b increases
– System inference: a positive error means decrease the offset in proportion to the size of the error (see the sketch below)
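One way to encode the inference on this slide as an error-correction function is a simple proportional rule: a positive observed error d reduces the offset in proportion to d. The gain k is an illustrative assumption; the paper's experiments vary how well such an e approximates the ideal e*.

    def make_error_function(k=0.5):
        def e(d):
            return -k * d    # decrease the offset in proportion to the observed error
        return e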

The Algorithm

1. f_0 = 0
2. j = 0
3. for i = 1 to n
   i.   generate a_i                     [puck angle]
   ii.  o_i = f_j(a_i)                   [apply current strategy to get offset]
   iii. find d_i                         [observe error d_i between puck and goal]
   iv.  find e(d_i)                      [decision error, using error function e]
   v.   find o_i + e(d_i)                [phantom offset that should put the puck with angle a_i in the goal]
   vi.  add (a_i, o_i + e(d_i)) to the training points   [phantom point]
4. j = j + 1
5. Find a new f_j from the training points   [use the inductive algorithm]
6. Apply the fitness function to f_j
7. If f_j is "fit", exit; otherwise go to step 3
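A minimal runnable sketch of this loop, assuming a black-box simulator observe_error(angle, offset) that returns the signed miss distance, an error-correction function e, and an inductive learner fit(points) that returns a new strategy. The angle range, number of rounds, batch size, and tolerance are illustrative assumptions, not values from the paper.

    import random

    def iterated_phantom_induction(observe_error, e, fit,
                                   n=50, max_rounds=30, tol=1e-3):
        f = lambda a: 0.0                        # step 1: f_0 = 0
        points = []                              # accumulated phantom points
        for _ in range(max_rounds):              # steps 2-7, repeated
            for _ in range(n):                   # step 3
                a = random.uniform(-1.0, 1.0)    # i: generate puck angle a_i
                o = f(a)                         # ii: offset from current strategy
                d = observe_error(a, o)          # iii: observed miss distance d_i
                points.append((a, o + e(d)))     # iv-vi: store phantom point
            f = fit(points)                      # steps 4-5: induce new strategy f_j
            # steps 6-7: fitness = mean-squared error on fresh random problems
            fresh = [random.uniform(-1.0, 1.0) for _ in range(100)]
            mse = sum(observe_error(a, f(a)) ** 2 for a in fresh) / len(fresh)
            if mse < tol:                        # strategy is "fit" enough: stop
                break
        return f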

The Algorithm

Performance measure
– 100 randomly generated points, with no learning or phantoms produced; mean-squared error

Inductive algorithm
– Instance-based: a convolution of the phantom points
– Place a Gaussian centered at the puck angle
– The paddle offset is the weighted average of the phantom points, where the weights come from the values of the Gaussian

Other algorithms
– Linear regression, Fourier methods, and neural networks
– All yielded similar results
– Initial divergence, but eventual convergence
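A sketch of that instance-based induction step: the offset for a query angle is the Gaussian-weighted average of the stored phantom offsets (kernel regression). The bandwidth sigma is an assumed tuning parameter, not a value from the paper; a function built this way could serve as the fit argument in the loop sketched earlier.

    import math

    def fit_gaussian_convolution(points, sigma=0.1):
        """points: list of (puck_angle, phantom_offset) pairs."""
        def f(a):
            weights = [math.exp(-((a - ai) ** 2) / (2 * sigma ** 2))
                       for ai, _ in points]
            total = sum(weights)
            if total == 0.0:            # no phantom points yet, or all weights underflow
                return 0.0
            return sum(w * oi for w, (_, oi) in zip(weights, points)) / total
        return f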

The Experiments

Experiment 1 - Best linear error function
– Performance similar to that of e*; errors remain due to the complexity of the target function

Experiment 2 - Underestimating error function
– Slower convergence rate

Experiment 3 - Overestimating error function
– Fails to oscillate as expected; converges after an initial divergence

Experiment 4 - Random error function
– How can it fail? The error function sometimes underestimates the error, sometimes overestimates it

Interpretation
– The successive strategies are computed by convolving the previous phantom points, so each following strategy passes through their average
– Hence, even large errors result in convergence (see the toy illustration below)
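The averaging argument on this slide can be checked with a toy one-dimensional calculation: for a single fixed problem with true offset 1.0, an error function that overestimates every correction by 80% (gain 1.8 instead of 1.0) still converges once each new strategy is taken as the average of all accumulated phantom offsets. All numbers are illustrative, not from the paper.

    def toy_convergence(target=1.0, gain=1.8, rounds=8):
        phantoms, offset = [], 0.0
        for _ in range(rounds):
            d = offset - target                       # observed signed error
            phantoms.append(offset - gain * d)        # overestimated phantom offset
            offset = sum(phantoms) / len(phantoms)    # new strategy = average of phantoms
            print(round(offset, 4))

    toy_convergence()    # prints 1.8, 1.08, 1.032, 1.0176, ... settling near 1.0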

Interpretation Example

[Figure: air-hockey example - puck, paddle, goal, observed errors d1 and d2, and overestimated phantom points 1 and 2]

Summary Points

Content critique
– Key contribution: "iterated phantom induction converges quickly to a good decision strategy." A straightforward learning method that models the real world.
– Strengths
  Robust - when doesn't this thing diverge!
  Interesting possibilities for applications (failure domains)
– Weaknesses
  Domain knowledge is crucial; unclear how to determine sufficient domain knowledge for a given problem
  No comparison to other learning methods

Presentation critique
– Audience: artificial intelligence enthusiasts - robot, game, and medical applications
– Positive points
  Good introduction, level of abstraction, and explanations
  Understandable examples and results
– Negative points
  Some places could use more detail - the inductive algorithm and the fitness measure