Introduction to Artificial Intelligence
Lecture 13: Neural Network Basics
November 7, 2012


Slide 1: Note about Resolution Refutation

You have a set of hypotheses h₁, h₂, …, hₙ and a conclusion c. Your argument is that whenever all of h₁, h₂, …, hₙ are true, c is true as well. In other words, whenever all of h₁, h₂, …, hₙ are true, ¬c is false. The argument is valid if and only if the conjunction h₁ ∧ h₂ ∧ … ∧ hₙ ∧ ¬c is unsatisfiable: either (at least) one of h₁, h₂, …, hₙ is false, or, if they are all true, ¬c is false. Therefore, if resolution derives the empty clause (false) from this conjunction, we have shown that the argument is valid.
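To make the refutation procedure concrete, here is a minimal propositional resolution sketch in Python. This is our own illustration of the idea, not code from the lecture; all names are invented. Clauses are sets of string literals, with a leading "~" marking negation.

```python
# Minimal propositional resolution refutation (illustrative sketch).
# Clauses are frozensets of literals; a literal is a string like "p" or "~p".

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """Yield all resolvents of clauses c1 and c2."""
    for lit in c1:
        if negate(lit) in c2:
            yield frozenset((c1 - {lit}) | (c2 - {negate(lit)}))

def refutes(clauses):
    """Return True if resolution derives the empty clause,
    i.e. the clause set is unsatisfiable."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                if c1 == c2:
                    continue
                for r in resolve(c1, c2):
                    if not r:        # empty clause: contradiction found
                        return True
                    new.add(r)
        if new <= clauses:           # no new clauses: no refutation exists
            return False
        clauses |= new

# Argument: h1 = p, h2 = p -> q, conclusion c = q.
# Clause form of h1 AND h2 AND NOT c:
kb = [frozenset({"p"}), frozenset({"~p", "q"}), frozenset({"~q"})]
print(refutes(kb))  # True: the empty clause is derivable, so the argument is valid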

Slide 2: Propositional Calculus

You have seen that resolution, including resolution refutation, is a suitable tool for automated reasoning in the propositional calculus. If we build a machine that represents its knowledge as propositions, we can use these mechanisms to enable the machine to deduce new knowledge from existing knowledge and verify hypotheses about the world. However, propositional calculus has some serious restrictions in its capability to represent knowledge.

Slide 3: Propositional Calculus

In propositional calculus, atoms have no internal structure; we cannot reuse the same proposition for a different object, because each proposition always refers to the same object. For example, in the toy block world, the propositions ON_A_B and ON_A_C are completely unrelated to each other; we might as well call them PETER and BOB. So if we want to express rules that apply to a whole class of objects, in propositional calculus we would have to define separate rules for every single object of that class, as the sketch below illustrates.
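A small sketch of this blow-up (our illustration; the atom and rule names are invented):

```python
# Illustrative sketch: in propositional calculus, "x is on y" needs a
# separate, structureless atom for every pair of blocks -- there is no
# shared internal structure that a general rule could exploit.
blocks = ["A", "B", "C"]

# One distinct proposition symbol per ordered pair, e.g. "ON_A_B".
on_atoms = [f"ON_{x}_{y}" for x in blocks for y in blocks if x != y]
print(on_atoms)   # ['ON_A_B', 'ON_A_C', 'ON_B_A', ...]

# A rule like "if x is on y, then y is not clear" must be written out
# once per pair -- with n blocks, that is n*(n-1) separate rules:
rules = [f"ON_{x}_{y} -> ~CLEAR_{y}" for x in blocks for y in blocks if x != y]
for r in rules:
    print(r)
```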

Slide 4: Predicate Calculus

So it is a better idea to use predicates instead of propositions. This leads us to predicate calculus. Predicate calculus has symbols called object constants, relation constants, and function constants. These symbols are used to refer to objects in the world and to propositions about the world.

Slide 5: Quantification

Introducing the universal quantifier ∀ and the existential quantifier ∃ facilitates the translation of world knowledge into predicate calculus. Examples:

Paul beats up all professors who fail him.
∀x (Professor(x) ∧ Fails(x, Paul) → BeatsUp(Paul, x))

There is at least one intelligent UMB professor.
∃x (UMBProf(x) ∧ Intelligent(x))
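Over a finite domain, the two quantifiers behave exactly like Python's all() and any(). The following sketch evaluates the two slide formulas in a small made-up model (everything here — the domain, who is a professor, who fails Paul — is an invented illustration):

```python
# Illustrative sketch: evaluating the two slide formulas in a small,
# made-up finite model. all() plays the role of the universal
# quantifier, any() the role of the existential quantifier.
domain = {"Smith", "Jones", "Lee"}

professor   = {"Smith", "Jones"}
umb_prof    = {"Smith", "Jones"}
intelligent = {"Jones"}
fails_paul  = {"Smith"}            # professors who fail Paul
beats_up    = {("Paul", "Smith")}  # pairs (beater, beaten)

# Paul beats up all professors who fail him:
# forall x (Professor(x) AND Fails(x, Paul) -> BeatsUp(Paul, x))
f1 = all(not (x in professor and x in fails_paul) or ("Paul", x) in beats_up
         for x in domain)

# There is at least one intelligent UMB professor:
# exists x (UMBProf(x) AND Intelligent(x))
f2 = any(x in umb_prof and x in intelligent for x in domain)

print(f1, f2)  # True True
```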

Slide 6: Knowledge Representation

a) There are no crazy UMB students.
¬∃x (UMBStudent(x) ∧ Crazy(x))

b) All computer scientists are either rich or crazy, but not both.
∀x (CS(x) → [Rich(x) ∧ ¬Crazy(x)] ∨ [¬Rich(x) ∧ Crazy(x)])

c) All UMB students except one are intelligent.
∃x (UMBStudent(x) ∧ ¬Intelligent(x)) ∧ ¬∃x,y (UMBStudent(x) ∧ UMBStudent(y) ∧ ¬Identical(x, y) ∧ ¬Intelligent(x) ∧ ¬Intelligent(y))

d) Jerry and Betty have the same friends.
∀x ([Friends(Betty, x) → Friends(Jerry, x)] ∧ [Friends(Jerry, x) → Friends(Betty, x)])

e) No mouse is bigger than an elephant.
¬∃x,y (Mouse(x) ∧ Elephant(y) ∧ BiggerThan(x, y))

Slide 7: But now, finally…

…let us move on to… Artificial Neural Networks

Slide 8: Computers vs. Neural Networks

"Standard" Computers    | Neural Networks
------------------------|----------------------------
one CPU                 | highly parallel processing
fast processing units   | slow processing units
reliable units          | unreliable units
static infrastructure   | dynamic infrastructure

Slide 9: Why Artificial Neural Networks?

There are two basic reasons why we are interested in building artificial neural networks (ANNs):

– Technical viewpoint: Some problems, such as character recognition or the prediction of future states of a system, require massively parallel and adaptive processing.
– Biological viewpoint: ANNs can be used to replicate and simulate components of the human (or animal) brain, thereby giving us insight into natural information processing.

Slide 10: Why Artificial Neural Networks?

Why do we need a paradigm other than symbolic AI for building "intelligent" machines?

– Symbolic AI is well suited for representing explicit knowledge that can be appropriately formalized.
– However, learning in biological systems is mostly implicit – it is an adaptation process based on uncertain information and reasoning.
– ANNs are inherently parallel and work extremely efficiently if implemented in parallel hardware.

Slide 11: How do NNs and ANNs work?

The "building blocks" of neural networks are the neurons. In technical systems, we also refer to them as units or nodes. Basically, each neuron
– receives input from many other neurons,
– changes its internal state (activation) based on the current input,
– sends one output signal to many other neurons, possibly including its input neurons (recurrent network).
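A minimal sketch of one such unit follows. This is our own illustration: the weights, inputs, and the choice of a sigmoid activation are assumptions, not values fixed by the lecture.

```python
import math

# Minimal sketch of one artificial unit: it sums its weighted inputs
# and passes the result through an activation function (a sigmoid here,
# chosen purely for illustration).

def unit_output(inputs, weights, bias=0.0):
    """Weighted sum of inputs, passed through a sigmoid activation."""
    net = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-net))   # activation in (0, 1)

# A unit receiving three input signals from other units:
print(unit_output([0.5, -1.0, 2.0], [0.8, 0.2, 0.4]))
```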

Slide 12: How do NNs and ANNs work?

Information is transmitted as a series of electric impulses, so-called spikes. The frequency and phase of these spikes encode the information. In biological systems, one neuron can be connected to as many as 10,000 other neurons.
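As a toy illustration of the frequency (rate-coding) idea — entirely our own sketch, not the lecture's model of a neuron — an activation level can be turned into a spike train whose density grows with the activation:

```python
import random

# Toy rate-coding sketch: a stronger activation produces more spikes
# per time window. Purely illustrative assumptions throughout.
random.seed(0)

def spike_train(activation, steps=20):
    """1 = spike, 0 = no spike; spike probability per step = activation."""
    return [1 if random.random() < activation else 0 for _ in range(steps)]

print(spike_train(0.2))  # sparse spikes
print(spike_train(0.9))  # dense spikes
```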

Slide 13: "Data Flow Diagram" of Visual Areas in Macaque Brain

[Figure: connectivity diagram of visual areas in the macaque brain. Blue: motion perception pathway. Green: object recognition pathway.]

