CS 4100 Artificial Intelligence
Prof. C. Hafner
Class Notes, Jan 10, 2012
Goals of the artificial intelligence (AI) field
1. Artificial systems with a humanlike ability to understand and reason (cf. cognitive science)
2. Solve problems that are too large to find the best answer algorithmically, using heuristic (incomplete) methods
3. Solve problems that are not well understood
Artificial systems with humanlike ability to understand and reason
Main techniques: ontology, automated reasoning, formal logic, state-space search
Uses: problem solving/planning, natural language processing, intelligent HCI
Applications: game characters, search engines, recommender systems
Artificial systems with humanlike ability to understand and reason (cont.)
Main techniques: evidential logics (probability, fuzzy logic, ...), Bayesian inference nets, Markov models
Uses: problem solving under uncertainty, decision support systems (“expert systems”)
Application: medical diagnosis and advice
Solve problems that are too large to find the best answer algorithmically, so require incomplete methods
Main techniques: heuristic search; dependency-directed backtracking
Uses: scheduling, resource allocation
Application: factory production
Solve problems that are not well understood
Main techniques: weighted linear models; Bayesian inference nets; statistical induction and machine learning in general
Uses: finance, computational science (discovery), data mining
Application: oil exploration
Go over syllabus
History of AI
Initial optimism, 1960s:
- Samuel’s checker player – early ML
- The General Problem Solver (Simon & Newell), which employed means-ends analysis (a precursor of the backward chaining now used in many systems)
[Flowchart – Goal: transform situation A into situation B. Match A to B to find difference D; if none, succeed. Otherwise, subgoal: reduce D, producing A’ (if that fails, fail), then transform A’ into B (if that fails, fail; otherwise succeed).]
Means-ends analysis (cont.)
[Flowchart – Goal: reduce difference D between situations A and B. Search for an operator Q relevant to reducing D; if none, fail. Otherwise, subgoal: apply Q to A, producing A’.
Goal: apply operator Q to A. Match A to the conditions of Q, finding difference F; if none, apply Q, succeeding with A’. Otherwise, subgoal: reduce F, producing A’’ (if that fails, fail), then apply Q to A’’ (if that fails, fail; otherwise succeed).]
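A minimal Python sketch of this recursion may help make the two goals concrete. It is an illustration under simplifying assumptions (STRIPS-style operators with add/delete lists, and a depth bound to force failure), not Newell & Simon’s actual encoding:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Operator:
        # Illustrative STRIPS-style operator: an assumption, not the GPS encoding.
        name: str
        preconds: frozenset  # facts that must hold before Q can apply
        adds: frozenset      # facts Q makes true
        deletes: frozenset   # facts Q makes false

    def apply_op(state, op):
        return (state - op.deletes) | op.adds

    def transform(state, goal, ops, depth=5):
        """Goal: transform situation `state` into one satisfying `goal`."""
        diff = goal - state                   # match A to B, find difference D
        if not diff:
            return []                         # no difference: success
        if depth == 0:
            return None                       # give up: fail
        d = next(iter(diff))                  # pick one difference to reduce
        for op in ops:                        # search for an operator Q relevant to D
            if d in op.adds:
                # Subgoal: apply Q to A, first reducing the difference F
                # between A and Q's conditions.
                prefix = transform(state, op.preconds, ops, depth - 1)
                if prefix is None:
                    continue
                s = state
                for step in prefix:
                    s = apply_op(s, step)
                s = apply_op(s, op)           # ... producing A'
                rest = transform(s, goal, ops, depth - 1)  # transform A' into B
                if rest is not None:
                    return prefix + [op] + rest
        return None                           # fail

A made-up usage example (the operators are hypothetical):

    buy_beans = Operator('buy-beans', frozenset(), frozenset({'have-beans'}), frozenset())
    make_coffee = Operator('make-coffee', frozenset({'have-beans'}),
                           frozenset({'have-coffee'}), frozenset({'have-beans'}))
    plan = transform(frozenset(), frozenset({'have-coffee'}), [make_coffee, buy_beans])
    print([op.name for op in plan])   # -> ['buy-beans', 'make-coffee']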
History of AI (cont.)
Knowledge-based systems, 1970s – mid 80s:
- “Micro-world” experiments: SHRDLU (Terry Winograd)
- Rule-based “expert” systems: Mycin, Dendral (Ed Feigenbaum)
- Acceptance by industry – huge oversell
- The knowledge acquisition bottleneck
Late 80s – mid 90s – AI winter:
- Hopes pinned on neural nets/ML to overcome the KA bottleneck
Late 90s to present – more computing power:
- Rise of probabilistic approaches
- Lexical tagging breakthrough in NLP
- More rigorous experiments/evaluation methods
2000s – influence of the Web revives AI:
- Massive text corpora and the need for better web browsers inspire NLP
- Hardware advances inspire robotics
- Intelligent agents/web bots – applications to e-commerce
Agents and environments
Black box vs. glass box approach
[Diagram: input (stimulus i) → f → output (response j)]
Black-box techniques: weighted linear functions, Bayesian models, neural nets, support vector machines
Black box vs. glass box approach (cont.)
Glass-box contents:
- Goals
- Strategies, plans
- General knowledge: concepts/categories, relationships, schemas/scripts, rules of behavior
- The current environment (situation)
- Models of other agents
Example – Goal: tell Sam the details of a social event
Designing an Intelligent Agent
What can the agent do? (range of possible actions)
What about the environment? Inputs to the program are called percepts:
- Symbolic input from keyboard, files, networks
- Sensor data from the physical world
- Often must be interpreted into meaningful concepts
What can the agent know?
- History of its own previous inputs and actions
- Properties of the environment + world knowledge
- Knowledge of its own goals, preferences, etc.
- Strategies for its behavior
Vacuum-cleaner world
Percepts: location and contents, e.g., [A, Dirty]
Actions: Left, Right, Suck, NoOp
Types of Agents
Reflex agent: no “state” or memory
- Reacts to the current input according to its program (condition → action rules)
Knowledge-based agent:
- Uses an explicit knowledge base
- Exhibits “understanding” of its input by relating it to prior knowledge
- Reacts according to rules, but the conditions may be complex and require inference to evaluate
Types of Agents (cont.)
Planning agents:
- Explicitly represent their own goals and/or preferences (“utilities”) and can reason about them (i.e., planning)
- Exhibit a kind of autonomy – actions do not follow directly from a table lookup
Learning agents:
- Learn from positive and negative examples – “supervised learning”
- Learn from experience to improve their outcomes – “reinforcement learning”
How do we judge whether we have succeeded?
- Getting the “right answer”?
- Having a good outcome? (using some “utility” function)
- The Turing Test? (and modified versions)
- We know it when we see it?
Types of Agents (cont.)
Reflex agent: behavior depends only on the current input (no history or model of the overall environment)
Example: the vacuum agent
- Percepts: a 2-tuple: A or B, Clean or Dirty
- Actions: Left, Right, Suck, NoOp
Vacuum agent’s behavior
The vacuum agent might follow this simple strategy:
- If the current location is dirty, clean it (Suck)
- If the current location is clean, move to the other location
Knowledge Representation: Two Approaches
Procedural representation – the program’s statements directly encode the knowledge (for example, of the strategy to follow)
Declarative representation – a data structure encodes the knowledge and the program’s statements act like an interpreter
Implementing the Vacuum Agent
Approach 1 (procedural):
    if status = Dirty then return Suck
    else if location = A then return Right
    else if location = B then return Left
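A runnable Python rendering of the procedural approach, for concreteness (the function name and the percept encoding as a tuple are assumptions, not part of the original slides):

    def reflex_vacuum_agent(percept):
        # percept is assumed to be a (location, status) pair, e.g. ('A', 'Dirty')
        location, status = percept
        if status == 'Dirty':
            return 'Suck'
        elif location == 'A':
            return 'Right'
        else:                     # location == 'B'
            return 'Left'

    print(reflex_vacuum_agent(('A', 'Dirty')))   # -> Suck
    print(reflex_vacuum_agent(('B', 'Clean')))   # -> Left

Note that the strategy lives in the if/else statements themselves; changing the strategy means rewriting the program.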
Implementing the Vacuum Agent
Approach 2 (declarative):
    state ← perceive()
    rule ← rule-match(state, rule base)
    action ← RHS(rule)
    return action

Rule Base:
    Percept       Action
    [A, Clean]    Right
    [A, Dirty]    Suck
    …
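The same behavior in declarative style, again as a sketch: here the rule base is a plain Python dictionary and rule-match is an exact lookup, a deliberate simplification of general condition matching:

    # The knowledge is data; the program is a generic interpreter over it.
    RULE_BASE = {
        ('A', 'Clean'): 'Right',
        ('A', 'Dirty'): 'Suck',
        ('B', 'Clean'): 'Left',
        ('B', 'Dirty'): 'Suck',
    }

    def declarative_vacuum_agent(percept):
        state = percept                          # state <- perceive()
        action = RULE_BASE.get(state, 'NoOp')    # rule-match, then RHS(rule)
        return action

    print(declarative_vacuum_agent(('A', 'Dirty')))   # -> Suck

Changing the strategy now means editing the data, not the interpreter.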
Simple reflex agents
Production Rule Systems
Behavior is expressed as a set of production rules (called “table-driven” by R&N):
1. Condition → Action
2. Condition → Action
...
The condition is called the left-hand side (LHS); the action is called the right-hand side (RHS).
In what sense is an agent that uses declarative knowledge more intelligent?
Production rule systems
Drawbacks:
- Huge rule base (time-consuming to build by hand)
- What if more than one condition is satisfied?
- Inflexible (no adaptation or learning)
Analyzing Agent Performance
Discussion of “rationality”:
- Must define a performance measure, e.g., how clean the squares are kept (but a penalty for extra work?)
- Rationality maximizes the expected value of the measure
- Depends on knowledge of the environment: can a clean square get dirty again? At what rate?
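One way to make such a measure concrete is to simulate the two-square world and score an agent: a point per clean square per time step, minus a penalty per move. The dirt probability, penalty, and step count below are arbitrary assumptions for illustration, and the agent argument is the reflex agent sketched earlier:

    import random

    def simulate(agent, steps=1000, dirt_prob=0.05, move_penalty=0.5, seed=0):
        rng = random.Random(seed)
        status = {'A': 'Dirty', 'B': 'Dirty'}
        loc, score = 'A', 0.0
        for _ in range(steps):
            score += sum(1 for s in status.values() if s == 'Clean')  # reward cleanliness
            action = agent((loc, status[loc]))
            if action == 'Suck':
                status[loc] = 'Clean'
            elif action in ('Left', 'Right'):
                loc = 'A' if action == 'Left' else 'B'
                score -= move_penalty                                 # penalty for extra work
            # a clean square can get dirty again, at rate dirt_prob per step
            for sq in status:
                if rng.random() < dirt_prob:
                    status[sq] = 'Dirty'
        return score

    print(simulate(reflex_vacuum_agent))

Averaging such scores over many random seeds estimates the expected value that a rational agent design should maximize.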
Knowledgeable agents
Knowledge bases
Knowledge base = set of sentences in a formal language
Declarative approach to building an agent (or other system):
- Tell it what it needs to know (percepts pre-interpreted)
- Then it can Ask itself what to do – answers should follow from the KB
Agents can be viewed at the knowledge level – what they know, regardless of how implemented – or at the implementation level – the data structures and algorithms that manipulate them
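A skeletal Tell/Ask loop in Python, following the generic knowledge-based-agent pattern: the sentence encoding (plain tuples) and the toy VacuumKB are assumptions standing in for a real logical language and inference procedure:

    class VacuumKB:
        # Toy stand-in for a logical KB: it stores sentences and "infers"
        # an action from the most recent percept. Purely illustrative.
        def __init__(self):
            self.sentences = []
        def tell(self, sentence):
            self.sentences.append(sentence)
        def ask(self, query):
            _, (loc, status), _ = next(s for s in reversed(self.sentences)
                                       if s[0] == 'percept')
            if status == 'Dirty':
                return 'Suck'
            return 'Right' if loc == 'A' else 'Left'

    class KBAgent:
        # Generic loop: Tell the KB what was perceived, Ask what to do,
        # then Tell it what was done. Sentences here are tuples by assumption.
        def __init__(self, kb):
            self.kb, self.t = kb, 0
        def __call__(self, percept):
            self.kb.tell(('percept', percept, self.t))
            action = self.kb.ask(('action?', self.t))
            self.kb.tell(('action', action, self.t))
            self.t += 1
            return action

    agent = KBAgent(VacuumKB())
    print(agent(('A', 'Dirty')))   # -> Suck

The KBAgent loop is the knowledge-level view; VacuumKB is one (trivial) implementation-level choice that could be swapped out without changing the loop.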
Q: What formal language(s) can we use to represent:
- Current facts about the state of the world
- General facts about how the world behaves
- General facts about the effects of actions that the agent can perform
- Condition → action rules that specify how the agent is to behave
A: Formal logic
- Syntax and semantics are well understood
- Computational tractability is known for important subsets (Horn clause logic)
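Horn clause logic earns its mention because entailment for propositional Horn clauses can be decided in time linear in the size of the KB. Below is a naive (quadratic but correct) forward-chaining sketch of the idea; the rules and symbols are made up for illustration:

    def forward_chain(clauses, facts, query):
        """clauses: (premises, conclusion) pairs; facts: atoms known true.
        Returns True iff query follows from the Horn knowledge base."""
        inferred = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in clauses:
                if conclusion not in inferred and all(p in inferred for p in premises):
                    inferred.add(conclusion)   # fire the rule
                    changed = True
        return query in inferred

    # Hypothetical vacuum-world rules, for illustration only.
    clauses = [({'dirty'}, 'do_suck'),
               ({'clean', 'at_A'}, 'go_right'),
               ({'clean', 'at_B'}, 'go_left')]
    print(forward_chain(clauses, {'clean', 'at_A'}, 'go_right'))   # -> True
    print(forward_chain(clauses, {'clean', 'at_A'}, 'do_suck'))    # -> False

The linear-time version keeps a count of unsatisfied premises per clause instead of rescanning all clauses each pass.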
Introduce Python Assignment 1
http://relationalagents.com/demos/index.html
Relational Agent Systems
“How are you feeling today?”