Unit 1
Introduction
- What is AI?
- The foundations of AI
- A brief history of AI
- The state of the art
- Introductory problems
What is AI?
Intelligence: “ability to learn, understand and think” (Oxford dictionary)
What is AI?
Thinking humanly    Thinking rationally
Acting humanly      Acting rationally
Acting Humanly: The Turing Test
Alan Turing (1912-1954), “Computing Machinery and Intelligence” (1950).
[Imitation game: a human interrogator converses with a hidden human and a hidden AI system, and must tell which is which.]
Acting Humanly: The Turing Test
- Predicted that by 2000, a machine might have a 30% chance of fooling a lay person for 5 minutes.
- Anticipated all major arguments against AI in the following 50 years.
- Suggested major components of AI: knowledge, reasoning, language understanding, learning.
Thinking Humanly: Cognitive Modelling
- Not content with a program that solves a problem correctly; more concerned with comparing its reasoning steps to traces of human subjects solving the same problem.
- Requires testable theories of the workings of the human mind: cognitive science.
Thinking Rationally: Laws of Thought
- Aristotle was one of the first to attempt to codify “right thinking”, i.e., irrefutable reasoning processes.
- Formal logic provides a precise notation and rules for representing and reasoning about all kinds of things in the world.
- Obstacles: stating informal knowledge formally; computational complexity and limited resources.
Acting Rationally
- Acting so as to achieve one’s goals, given one’s beliefs.
- Does not necessarily involve thinking.
- Advantages: more general than the “laws of thought” approach; more amenable to scientific development than human-based approaches.
The Foundations of AI
Philosophy (423 BC - present): logic, methods of reasoning; mind as a physical system; foundations of learning, language, and rationality.
Mathematics (c. 800 - present): formal representation and proof; algorithms, computation, decidability, tractability; probability.
The Foundations of AI
Psychology (1879 - present): adaptation; phenomena of perception and motor control; experimental techniques.
Linguistics (1957 - present): knowledge representation; grammar.
A Brief History of AI
The gestation of AI (1943-1956):
- 1943: McCulloch & Pitts: Boolean circuit model of the brain.
- 1950: Turing’s “Computing Machinery and Intelligence”.
- 1956: McCarthy’s name “Artificial Intelligence” adopted.
Early enthusiasm, great expectations (1952-1969):
- Early successful AI programs: Samuel’s checkers, Newell & Simon’s Logic Theorist, Gelernter’s Geometry Theorem Prover.
- Robinson’s complete algorithm for logical reasoning.
A Brief History of AI
A dose of reality (1966-1974):
- AI discovered computational complexity.
- Neural network research almost disappeared after Minsky & Papert’s book in 1969.
Knowledge-based systems (1969-1979):
- 1969: DENDRAL by Buchanan et al.
- 1976: MYCIN by Shortliffe.
- 1979: PROSPECTOR by Duda et al.
A Brief History of AI
AI becomes an industry (1980-1988):
- Expert systems industry booms.
- 1981: Japan’s 10-year Fifth Generation project.
The return of NNs and novel AI (1986-present):
- Mid-80s: back-propagation learning algorithm reinvented.
- Expert systems industry busts.
- 1988: Resurgence of probability.
- 1988: Novel AI (ALife, GAs, Soft Computing, …).
- 1995: Agents everywhere.
- 2003: Human-level AI back on the agenda.
The State of the Art
- Computer beats human in a chess game.
- Computer-human conversation using speech recognition.
- Expert system controls a spacecraft.
- Robot can walk on stairs and hold a cup of water.
- Language translation for webpages.
- Home appliances use fuzzy logic.
- …
Introductory Problem: Tic-Tac-Toe
[Board: a game in progress with two X’s and an O.]
Introductory Problem: Tic-Tac-Toe
Program 1: represent the board as a nine-element vector, one element per square (0 = blank, 1 = X, 2 = O).
1. View the vector as a ternary number. Convert it to a decimal number.
2. Use the computed number as an index into Move-Table and access the vector stored there.
3. Set the new board to that vector.
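A minimal sketch of Program 1, assuming the 0/1/2 square encoding above (the names are illustrative; a real Move-Table needs an entry for each of the 3^9 = 19683 board vectors):

def board_to_index(board):
    # Treat the 9-element vector (0 = blank, 1 = X, 2 = O) as a
    # ternary numeral; e.g. the empty board maps to index 0.
    index = 0
    for square in board:
        index = 3 * index + square
    return index

def next_board(board, move_table):
    # Program 1 is a single table lookup: the stored vector is the
    # new board position.
    return move_table[board_to_index(board)]

Everything the program “knows” lives in move_table, which is why the comments on the next slide focus on its size and the effort of filling it in.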
Introductory Problem: Tic-Tac-Toe
Comments:
1. A lot of space to store the Move-Table.
2. A lot of work to specify all the entries in the Move-Table.
3. Difficult to extend.
Introductory Problem: Tic-Tac-Toe
The squares are numbered:
1 2 3
4 5 6
7 8 9
Introductory Problem: Tic-Tac-Toe
Program 2:
Turn = 1: Go(1)
Turn = 2: If Board[5] is blank, Go(5), else Go(1)
Turn = 3: If Board[9] is blank, Go(9), else Go(3)
Turn = 4: If Posswin(X) ≠ 0, then Go(Posswin(X)) …
Introductory Problem: Tic-Tac-Toe
Comments:
1. Not efficient in time, as it has to check several conditions before making each move.
2. Easier to understand the program’s strategy.
3. Hard to generalize.
Introductory Problem: Tic-Tac-Toe
Number the squares with a magic square:
8 3 4
1 5 9
6 7 2
Every row, column and diagonal sums to 15, so when two squares of a line are occupied, 15 - (their sum) gives the square that completes the line, e.g. 15 - (8 + 5) = 2.
Introductory Problem: Tic-Tac-Toe
Comments:
1. Checking for a possible win is quicker.
2. Humans find the row-scan approach easier, while computers find the number-counting approach more efficient.
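A sketch of the number-counting check under the magic-square numbering above (the function and variable names are illustrative):

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals
MAGIC = [8, 3, 4, 1, 5, 9, 6, 7, 2]         # magic-square values

def posswin(board, player):
    # Return the square (1-9) with which `player` completes a line
    # on the next move, or 0 if none exists. `board` holds 'X', 'O' or ''.
    for line in LINES:
        owned = [i for i in line if board[i] == player]
        blank = [i for i in line if board[i] == '']
        if len(owned) == 2 and len(blank) == 1:
            # 15 - (sum of the two occupied values) is the magic
            # value of the completing square.
            assert MAGIC[blank[0]] == 15 - sum(MAGIC[i] for i in owned)
            return blank[0] + 1
    return 0

Program 2 can use the same routine both to win (with its own letter) and to block the opponent.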
Introductory Problem: Tic-Tac-Toe
Program 3:
1. If it is a win, give it the highest rating.
2. Otherwise, consider all the moves the opponent could make next. Assume the opponent will make the move that is worst for us. Assign the rating of that move to the current node.
3. The best node is then the one with the highest rating.
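This anticipates minimax. A sketch under assumed game utilities (winner, moves, apply_move and other are hypothetical helpers, not named on the slide):

def rate(board, to_move, me):
    # Rate `board` for player `me` when it is `to_move`'s turn:
    # +1 a win for me, -1 a loss, 0 a draw, otherwise recurse.
    w = winner(board)
    if w is not None:
        return 1 if w == me else -1
    replies = moves(board)
    if not replies:
        return 0                                    # draw
    scores = [rate(apply_move(board, m, to_move), other(to_move), me)
              for m in replies]
    # Our turn: take our best; opponent's turn: assume the move
    # that is worst for us, as the slide prescribes.
    return max(scores) if to_move == me else min(scores)

def best_move(board, me):
    # The best node is the one with the highest rating.
    return max(moves(board),
               key=lambda m: rate(apply_move(board, m, me), other(me), me))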
Introductory Problem: Tic-Tac-Toe
Comments:
1. Requires much more time, as it considers all possible moves.
2. Could be extended to handle more complicated games.
Introductory Problem: Question Answering
“Mary went shopping for a new coat. She found a red one she really liked. When she got it home, she discovered that it went perfectly with her favourite dress.”
Q1: What did Mary go shopping for?
Q2: What did Mary find that she liked?
Q3: Did Mary buy anything?
Introductory Problem: Question Answering
Program 1:
1. Match predefined templates to questions to generate text patterns.
2. Match text patterns to input texts to get answers.
Example: the template “What did X Y” matches “What did Mary go shopping for?”, giving the text pattern “Mary go shopping for Z”; matched against the story, Z = a new coat.
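A toy version of the idea (the patterns are illustrative, not the original program's template set, and it glosses over verb tense exactly as the slide's example does):

import re

def answer(question, text):
    m = re.match(r"What did (\w+) (\w+) (.*)\?", question)
    if not m:
        return None
    who, _verb, rest = m.groups()           # Mary / go / shopping for
    # Match loosely so "go" on the question side matches "went" in
    # the story text.
    hit = re.search(rf"{who} \w+ {re.escape(rest)} (.+?)[.\"]", text)
    return hit.group(1) if hit else None

story = "Mary went shopping for a new coat."
print(answer("What did Mary go shopping for?", story))   # a new coat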
Introductory Problem: Question Answering
Program 2: structured representation of sentences:
Event2: instance: Finding; tense: Past; agent: Mary; object: Thing1
Thing1: instance: Coat; colour: Red
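In code, such slot-and-filler structures can be as plain as nested dictionaries (a sketch; the slot names follow the slide):

thing1 = {"instance": "Coat", "colour": "Red"}
event2 = {"instance": "Finding", "tense": "Past",
          "agent": "Mary", "object": thing1}

# Q2 ("What did Mary find that she liked?") becomes slot lookups:
if event2["instance"] == "Finding" and event2["agent"] == "Mary":
    print(event2["object"]["instance"])     # Coat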
Introductory Problem: Question Answering
Program 3: background world knowledge, e.g. about shopping:
C finds M, C leaves L ⇒ C buys M
C buys M, C leaves L ⇒ C takes M
Such knowledge lets the program answer Q3: Mary took the coat home, so she bought it.
What is AI?
Not about what human beings can do! About how to instruct a computer to do what human beings can do!
Problems and Search
Outline
- State space search
- Search strategies
- Problem characteristics
- Design of search programs
State Space Search
Problem solving = searching for a goal state.
State Space Search: Playing Chess
- Each position can be described by an 8-by-8 array.
- The initial position is the game opening position.
- A goal position is any position in which the opponent does not have a legal move and his or her king is under attack.
- Legal moves can be described by a set of rules: left sides are matched against the current state; right sides describe the new resulting state.
State Space Search: Playing Chess
The state space is the set of legal positions:
- starting at the initial state,
- using the set of rules to move from one state to another,
- attempting to end up in a goal state.
State Space Search: Water Jug Problem
“You are given two jugs, a 4-litre one and a 3-litre one. Neither has any measuring markers on it. There is a pump that can be used to fill the jugs with water. How can you get exactly 2 litres of water into the 4-litre jug?”
State Space Search: Water Jug Problem
State: (x, y), where x = 0, 1, 2, 3 or 4 is the amount in the 4-litre jug and y = 0, 1, 2 or 3 the amount in the 3-litre jug.
Start state: (0, 0).
Goal state: (2, n) for any n.
State Space Search: Water Jug Problem
1. (x, y) → (4, y) if x < 4 (fill the 4-litre jug)
2. (x, y) → (x, 3) if y < 3 (fill the 3-litre jug)
3. (x, y) → (x - d, y) if x > 0 (pour some water out of the 4-litre jug)
4. (x, y) → (x, y - d) if y > 0 (pour some water out of the 3-litre jug)
5. (x, y) → (0, y) if x > 0 (empty the 4-litre jug)
6. (x, y) → (x, 0) if y > 0 (empty the 3-litre jug)
7. (x, y) → (4, y - (4 - x)) if x + y ≥ 4, y > 0 (pour from the 3-litre jug until the 4-litre jug is full)
8. (x, y) → (x - (3 - y), 3) if x + y ≥ 3, x > 0 (pour from the 4-litre jug until the 3-litre jug is full)
9. (x, y) → (x + y, 0) if x + y ≤ 4, y > 0 (pour all of the 3-litre jug into the 4-litre jug)
10. (x, y) → (0, x + y) if x + y ≤ 3, x > 0 (pour all of the 4-litre jug into the 3-litre jug)
11. (0, 2) → (2, 0)
12. (2, y) → (0, y)
State Space Search: Water Jug Problem
1. current state = (0, 0)
2. Loop until reaching the goal state (2, 0):
- apply a rule whose left side matches the current state;
- set the new current state to be the resulting state.
One solution: (0, 0) → (0, 3) → (3, 0) → (3, 3) → (4, 2) → (0, 2) → (2, 0)
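A runnable sketch of this search using breadth-first exploration of the state space; it applies the fill, empty and pour-between rules (1, 2, 5-10) and omits the partial-pour rules 3-4, which never help here:

from collections import deque

def successors(x, y):
    states = {(4, y), (x, 3), (0, y), (x, 0)}   # fill or empty a jug
    pour = min(x, 3 - y)                        # pour 4l jug into 3l jug
    states.add((x - pour, y + pour))
    pour = min(y, 4 - x)                        # pour 3l jug into 4l jug
    states.add((x + pour, y - pour))
    states.discard((x, y))
    return states

def solve(start=(0, 0)):
    parent, frontier = {start: None}, deque([start])
    while frontier:
        state = frontier.popleft()
        if state[0] == 2:                       # goal (2, n)
            path = []
            while state is not None:            # rebuild via parents
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in successors(*state):
            if nxt not in parent:
                parent[nxt] = state
                frontier.append(nxt)

print(solve())   # a shortest (6-move) solution, e.g. the trace above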
State Space Search: Water Jug Problem
The role of the condition in the left side of a rule: it restricts the application of the rule, making the search more efficient, e.g.
1. (x, y) → (4, y) if x < 4
2. (x, y) → (x, 3) if y < 3
State Space Search: Water Jug Problem
Special-purpose rules capture special-case knowledge that can be used at some stage in solving a problem:
11. (0, 2) → (2, 0)
12. (2, y) → (0, y)
State Space Search: Summary
1. Define a state space that contains all the possible configurations of the relevant objects.
2. Specify the initial states.
3. Specify the goal states.
4. Specify a set of rules: What are the unstated assumptions? How general should the rules be? How much knowledge for solutions should be in the rules?
Search Strategies
Requirements of a good search strategy:
1. It causes motion: otherwise, it will never lead to a solution.
2. It is systematic: otherwise, it may use more steps than necessary.
3. It is efficient: it finds a good, but not necessarily the best, answer.
Search Strategies
1. Uninformed search (blind search): no information about the number of steps from the current state to the goal.
2. Informed search (heuristic search): more efficient than uninformed search.
Search Strategies
Part of the search tree for the water jug problem:
(0, 0)
├─ (4, 0) → (1, 3), (0, 0), (4, 3)
└─ (0, 3) → (3, 0), (0, 0), (4, 3)
Search Strategies: Blind Search
- Breadth-first search: expand all the nodes of one level first.
- Depth-first search: expand one of the nodes at the deepest level.
Search Strategies: Blind Search

Criterion   Breadth-First  Depth-First
Time        b^d            b^m
Space       b^d            bm
Optimal?    Yes            No
Complete?   Yes            No

b: branching factor; d: solution depth; m: maximum depth
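The two strategies differ only in their frontier discipline, which is where the table's costs come from; a generic sketch (goal_test and successors are assumed to be supplied by the problem):

from collections import deque

def blind_search(start, goal_test, successors, depth_first=False):
    # FIFO queue = breadth-first; LIFO stack = depth-first.
    frontier, seen = deque([start]), {start}
    while frontier:
        node = frontier.pop() if depth_first else frontier.popleft()
        if goal_test(node):
            return node
        for child in successors(node):
            if child not in seen:        # avoid regenerating states
                seen.add(child)
                frontier.append(child)
    return None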
Search Strategies: Heuristic Search
Heuristic: involving or serving as an aid to learning, discovery, or problem-solving by experimental and especially trial-and-error methods. (Merriam-Webster’s dictionary)
A heuristic technique improves the efficiency of a search process, possibly by sacrificing claims of completeness or optimality.
Search Strategies: Heuristic Search
- Heuristics are a response to combinatorial explosion.
- Optimal solutions are rarely needed.
Search Strategies: Heuristic Search
The Travelling Salesman Problem: “A salesman has a list of cities, each of which he must visit exactly once. There are direct roads between each pair of cities on the list. Find the route the salesman should follow for the shortest possible round trip that both starts and finishes at any one of the cities.”
[Figure: five cities A-E with road lengths between each pair.]
Search Strategies: Heuristic Search
Nearest neighbour heuristic:
1. Select a starting city.
2. Select the city closest to the current city.
3. Repeat step 2 until all cities have been visited.
O(n^2) vs. O(n!) for examining all tours.
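A sketch of the heuristic (dist is assumed to be a symmetric table of road lengths, e.g. nested dicts):

def nearest_neighbour_tour(cities, dist, start):
    # Greedily hop to the closest unvisited city: O(n^2) work,
    # against O(n!) for trying every tour.
    tour, unvisited = [start], set(cities) - {start}
    while unvisited:
        nearest = min(unvisited, key=lambda c: dist[tour[-1]][c])
        tour.append(nearest)
        unvisited.remove(nearest)
    return tour + [start]                # close the round trip

The tour it returns is usually good but can be far from shortest, which is exactly the completeness/optimality trade the previous slides describe.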
Search Strategies: Heuristic Search
Heuristic function: maps state descriptions to measures of desirability.
Problem Characteristics
To choose an appropriate method for a particular problem:
- Is the problem decomposable?
- Can solution steps be ignored or undone?
- Is the universe predictable?
- Is a good solution absolute or relative?
- Is the solution a state or a path?
- What is the role of knowledge?
- Does the task require human interaction?
Is the problem decomposable?
Can the problem be broken down into smaller problems to be solved independently? A decomposable problem can be solved easily.
Is the problem decomposable?
∫(x² + 3x + sin²x·cos²x) dx
= ∫x² dx + ∫3x dx + ∫sin²x·cos²x dx
= ∫x² dx + ∫3x dx + ∫(1 - cos²x)·cos²x dx
= ∫x² dx + ∫3x dx + ∫cos²x dx - ∫cos⁴x dx
Is the problem decomposable?
Blocks World
Operators: CLEAR(x) → ON(x, Table); CLEAR(x) and CLEAR(y) → ON(x, y).
Start: C is on A; B is on the table. Goal: A on B, B on C.
Is the problem decomposable?
The goal ON(B, C) and ON(A, B) decomposes into ON(B, C) and ON(A, B); ON(A, B) in turn requires CLEAR(A) first. The subproblems interact: solving ON(B, C) first buries A under the stack, so the subgoals cannot be solved independently.
Can solution steps be ignored or undone?
Theorem proving: a lemma that has been proved can be ignored for the next steps. Ignorable!
Can solution steps be ignored or undone?
The 8-Puzzle: moves can be undone and backtracked. Recoverable!
Start:   Goal:
2 8 3    1 2 3
1 6 4    8 _ 4
7 _ 5    7 6 5
Can solution steps be ignored or undone?
Playing chess: moves cannot be retracted. Irrecoverable!
Can solution steps be ignored or undone?
- Ignorable problems can be solved using a simple control structure that never backtracks.
- Recoverable problems can be solved using backtracking.
- Irrecoverable problems can be solved by recoverable-style methods via planning.
Is the universe predictable?
The 8-Puzzle: every time we make a move, we know exactly what will happen. Certain outcome!
Is the universe predictable?
Playing bridge: we cannot know exactly where all the cards are or what the other players will do on their turns. Uncertain outcome!
Is the universe predictable?
- For certain-outcome problems, planning can be used to generate a sequence of operators that is guaranteed to lead to a solution.
- For uncertain-outcome problems, a sequence of generated operators can only have a good probability of leading to a solution. The plan is revised as it is carried out and the necessary feedback is provided.
Is a good solution absolute or relative?
1. Marcus was a man.
2. Marcus was a Pompeian.
3. Marcus was born in 40 A.D.
4. All men are mortal.
5. All Pompeians died when the volcano erupted in 79 A.D.
6. No mortal lives longer than 150 years.
7. It is now 2004 A.D.
Is Marcus alive?
Different reasoning paths lead to the answer (e.g. facts 2, 5 and 7, or facts 3, 4, 6 and 7). It does not matter which path we follow.
Is a good solution absolute or relative?
The Travelling Salesman Problem: we have to try all paths to find the shortest one.
Is a good solution absolute or relative?
- Any-path problems can be solved using heuristics that suggest good paths to explore.
- For best-path problems, a much more exhaustive search is needed.
Is the solution a state or a path?
Finding a consistent interpretation: “The bank president ate a dish of pasta salad with the fork.”
- Does “bank” refer to a financial institution or to the side of a river?
- Was the “dish” or the “pasta salad” eaten?
- Does “pasta salad” contain pasta? (“dog food” does not contain dog.)
- Which part of the sentence does “with the fork” modify? What if it were “with vegetables”?
No record of the processing is necessary: the solution is the final interpretation, a state.
Is the solution a state or a path?
The Water Jug Problem: the path that leads to the goal must be reported.
Is the solution a state or a path?
A path-solution problem can be reformulated as a state-solution problem by describing a state as a partial path to a solution. The question is whether that is natural.
What is the role of knowledge?
Playing chess: knowledge is important only to constrain the search for a solution.
Reading a newspaper: knowledge is required even to be able to recognize a solution.
Does the task require human interaction?
- Solitary problems, in which there is no intermediate communication and no demand for an explanation of the reasoning process.
- Conversational problems, in which intermediate communication provides either additional assistance to the computer or additional information to the user.
Problem Classification
There is a variety of problem-solving methods, but no single way of solving all problems. Not every new problem should be considered totally new; solutions of similar problems can be exploited.
Heuristic Search
Outline
- Generate-and-test
- Hill climbing
- Best-first search
- Problem reduction
- Constraint satisfaction
- Means-ends analysis
Generate-and-Test
Algorithm:
1. Generate a possible solution.
2. Test to see if this is actually a solution.
3. Quit if a solution has been found. Otherwise, return to step 1.
Generate-and-Test
- Acceptable for simple problems.
- Inefficient for problems with large search spaces.
Generate-and-Test
- Exhaustive generate-and-test.
- Heuristic generate-and-test: do not consider paths that seem unlikely to lead to a solution.
- Plan generate-test: create a list of candidates, then apply generate-and-test to that list.
Generate-and-Test
Example: coloured blocks. “Arrange four 6-sided cubes in a row, with each side of each cube painted one of four colours, such that on all four sides of the row one block face of each colour is showing.”
Generate-and-Test
Example: coloured blocks. Heuristic: if there are more red faces than faces of other colours, then, when placing a block with several red faces, use as few of them as possible as outside faces.
Hill Climbing
Searching for a goal state = climbing to the top of a hill.
Hill Climbing
- Generate-and-test + a direction in which to move.
- A heuristic function estimates how close a given state is to a goal state.
Simple Hill Climbing
Algorithm:
1. Evaluate the initial state.
2. Loop until a solution is found or there are no new operators left to be applied:
- select and apply a new operator;
- evaluate the new state: goal → quit; better than current state → new current state.
Simple Hill Climbing
The evaluation function is a way to inject task-specific knowledge into the control process.
Simple Hill Climbing
Example: coloured blocks. Heuristic function: the sum of the number of different colours on each of the four sides (solution = 16).
Steepest-Ascent Hill Climbing (Gradient Search)
- Considers all the moves from the current state.
- Selects the best one as the next state.
Steepest-Ascent Hill Climbing (Gradient Search)
Algorithm:
1. Evaluate the initial state.
2. Loop until a solution is found or a complete iteration produces no change to the current state:
- let SUCC be a state such that any possible successor of the current state will be better than SUCC (the worst state);
- for each operator that applies to the current state, evaluate the new state: goal → quit; better than SUCC → set SUCC to this state;
- if SUCC is better than the current state → set the current state to SUCC.
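A sketch of the steepest-ascent loop (successors and value are assumed problem-specific functions; simple hill climbing would instead take the first successor better than the current state):

def hill_climb(state, successors, value):
    while True:
        # SUCC in the slide's terms: the best of all successors.
        best = max(successors(state), key=value, default=state)
        if value(best) <= value(state):
            return state        # goal, local maximum, or plateau
        state = best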
Hill Climbing: Disadvantages
Local maximum: a state that is better than all of its neighbours, but not better than some other states farther away.
Hill Climbing: Disadvantages
Plateau: a flat area of the search space in which all neighbouring states have the same value.
Hill Climbing: Disadvantages
Ridge: the orientation of the high region, compared to the set of available moves, makes it impossible to climb up. However, two moves executed serially may increase the height.
Hill Climbing: Disadvantages
Ways out:
- Backtrack to some earlier node and try going in a different direction.
- Make a big jump to try to get to a new section of the space.
- Move in several directions at once.
Hill Climbing: Disadvantages
Hill climbing is a local method: it decides what to do next by looking only at the “immediate” consequences of its choices. Global information might be encoded in heuristic functions.
Hill Climbing: Disadvantages
Blocks World. Start: a single stack, top to bottom A, D, C, B. Goal: a single stack, top to bottom D, C, B, A.
Hill Climbing: Disadvantages
Local heuristic: +1 for each block that is resting on the thing it is supposed to be resting on; -1 for each block that is resting on a wrong thing. The start state scores 0; the goal scores 4.
Hill Climbing: Disadvantages
Moving A onto the table yields a state that scores 2 (only B now rests on the wrong thing).
Hill Climbing: Disadvantages
From that state (score 2) there are three moves: D onto the table, D onto A, or A back onto D. All three successors score 0, so hill climbing is stuck at a local maximum.
Hill Climbing: Disadvantages
Global heuristic: for each block that has a correct support structure, +1 for every block in the support structure; for each block that has a wrong support structure, -1 for every block in the support structure. The start state scores -6 (A: -3, D: -2, C: -1).
Hill Climbing: Disadvantages
Under the global heuristic the three successors score -6 (A back onto D), -2 (D onto A) and -1 (D onto the table), against -3 for the current state, so the search now moves on instead of stalling.
Hill Climbing: Conclusion
- Can be very inefficient in a large, rough problem space.
- A global heuristic may have to pay for its power with computational complexity.
- Often useful when combined with other methods that get it started in the right general neighbourhood.
Simulated Annealing
A variation of hill climbing in which, at the beginning of the process, some downhill moves may be made. The aim is to do enough exploration of the whole space early on that the final solution is relatively insensitive to the starting state, lowering the chances of getting caught at a local maximum, a plateau, or a ridge.
Simulated Annealing
Physical annealing:
- Physical substances are melted and then gradually cooled until some solid state is reached.
- The goal is to produce a minimal-energy state.
- Annealing schedule: if the temperature is lowered sufficiently slowly, then the goal will be attained.
- Nevertheless, there is some probability of a transition to a higher-energy state: e^(-ΔE/kT).
Simulated Annealing
Algorithm:
1. Evaluate the initial state.
2. Loop until a solution is found or there are no new operators left to be applied:
- set T according to an annealing schedule;
- select and apply a new operator;
- evaluate the new state: goal → quit;
  ΔE = Val(current state) - Val(new state);
  ΔE < 0 → new current state;
  else → new current state with probability e^(-ΔE/kT).
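A sketch of the loop (successors, value and schedule are assumed problem-specific; the constant k is folded into the temperature):

import math, random

def anneal(state, successors, value, schedule, steps=10000):
    best = state
    for t in range(steps):
        T = schedule(t)                  # annealing schedule
        if T <= 0:
            break
        nxt = random.choice(successors(state))
        delta_e = value(state) - value(nxt)      # ΔE as on the slide
        # Accept improvements outright, worsenings with p = e^(-ΔE/T).
        if delta_e < 0 or random.random() < math.exp(-delta_e / T):
            state = nxt
        if value(state) > value(best):
            best = state
    return best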
Best-First Search
- Depth-first search: good because not all competing branches have to be expanded.
- Breadth-first search: good because it does not get trapped on dead-end paths.
- Combining the two: follow a single path at a time, but switch paths whenever some competing path looks more promising than the current one.
Best-First Search
[Figure: a search tree grown step by step from node A; at each step the open node with the lowest cost estimate is expanded, so the search jumps between branches as the estimates change.]
Best-First Search
- OPEN: nodes that have been generated but have not been examined, organized as a priority queue.
- CLOSED: nodes that have already been examined; whenever a new node is generated, we check whether it has been generated before.
Best-First Search
Algorithm:
1. OPEN = {initial state}.
2. Loop until a goal is found or there are no nodes left in OPEN:
- pick the best node in OPEN;
- generate its successors;
- for each successor: new → evaluate it, add it to OPEN, record its parent; generated before → change the parent, update the successors.
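A sketch of the loop with a priority queue for OPEN (it simplifies the slide's bookkeeping by never re-parenting a node, and assumes the evaluation f can be computed from a node alone):

import heapq, itertools

def best_first(start, goal_test, successors, f):
    tie = itertools.count()      # breaks ties without comparing states
    open_heap = [(f(start), next(tie), start)]
    parent, closed = {start: None}, set()
    while open_heap:
        _, _, node = heapq.heappop(open_heap)    # best node in OPEN
        if goal_test(node):
            path = []
            while node is not None:              # rebuild via parents
                path.append(node)
                node = parent[node]
            return path[::-1]
        closed.add(node)
        for child in successors(node):
            if child not in closed and child not in parent:
                parent[child] = node
                heapq.heappush(open_heap, (f(child), next(tie), child))
    return None

With f = h this behaves as greedy search (next slides); with f = g + h it is A*.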
Best-First Search
- Greedy search: h(n) = estimated cost of the cheapest path from node n to a goal state. Neither optimal nor complete.
- Uniform-cost search: g(n) = cost of the cheapest path from the initial state to node n. Optimal and complete, but very inefficient.
Best-First Search
Algorithm A* (Hart et al., 1968): f(n) = g(n) + h(n), where
- h(n) = cost of the cheapest path from node n to a goal state;
- g(n) = cost of the cheapest path from the initial state to node n.
Best-First Search
Algorithm A*: f*(n) = g*(n) + h*(n), where
- h*(n) (heuristic factor) = an estimate of h(n);
- g*(n) (depth factor) = the approximation of g(n) found by A* so far.
Problem Reduction
AND-OR graphs. Goal: Acquire TV set. Achieved either by the single goal Steal TV set, or by the conjoined goals Earn some money AND Buy TV set.
Algorithm AO* (Martelli & Montanari 1973, Nilsson 1980).
Problem Reduction: AO*
[Figure: AO* expands an AND-OR graph, revising the cost estimates of nodes as successors are generated and choosing the cheapest solution graph.]

Problem Reduction: AO*
[Figure: a revised estimate deep in the graph changes which arcs are cheapest for its ancestors.] Necessary backward propagation.
Constraint Satisfaction
Many AI problems can be viewed as problems of constraint satisfaction.
Cryptarithmetic puzzle: SEND + MORE = MONEY.
Constraint Satisfaction
As compared with a straightforward search procedure, viewing a problem as one of constraint satisfaction can substantially reduce the amount of search.
Constraint Satisfaction
- Operates in a space of constraint sets.
- The initial state contains the original constraints given in the problem.
- A goal state is any state that has been constrained “enough”.
Constraint Satisfaction
Two-step process:
1. Constraints are discovered and propagated as far as possible.
2. If there is still not a solution, then search begins, adding new constraints.
Initial state:
- No two letters have the same value.
- The sum of the digits must be as shown: SEND + MORE = MONEY.
Propagating constraints (C1, C2, … are the carries): M = 1; S = 8 or 9; O = 0; N = E + 1; C2 = 1; N + R > 8; E ≠ 9.
Guess E = 2: then N = 3; R = 8 or 9; and 2 + D = Y or 2 + D = 10 + Y.
- C1 = 0 (2 + D = Y): N + R = 10 + E gives R = 9, S = 8.
- C1 = 1 (2 + D = 10 + Y): D = 8 + Y, so D = 8 or 9; D = 8 → Y = 0; D = 9 → Y = 1.
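For contrast, a brute-force generate-and-test over the same puzzle; the propagation trace above prunes almost all of the 1.8 million assignments this loop grinds through:

from itertools import permutations

def solve():
    # Assign distinct digits to S, E, N, D, M, O, R, Y.
    for s, e, n, d, m, o, r, y in permutations(range(10), 8):
        if s == 0 or m == 0:             # no leading zeros
            continue
        send  = 1000*s + 100*e + 10*n + d
        more  = 1000*m + 100*o + 10*r + e
        money = 10000*m + 1000*o + 100*n + 10*e + y
        if send + more == money:
            return dict(S=s, E=e, N=n, D=d, M=m, O=o, R=r, Y=y)

print(solve())   # 9567 + 1085 = 10652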
Constraint Satisfaction
Two kinds of rules:
1. Rules that define valid constraint propagation.
2. Rules that suggest guesses when necessary.