1 Artificial Neural Network Simulation
Authored By: Rachit Kr. Rastogi, Computer Sc. & Engineering Deptt., College Of Technology, G.B.P.U.A.T. Pantnagar, India
email: getrachit@yahoo.com, comper_rachit@rediff.com
Web: www.geocities.com/getrachit

2 Key Points:
• Computer expert systems aim to move from crisp binary conventional control towards the wooly way in which humans think.
• An Artificial Neural Network is a system loosely modeled on the human brain.
• The first attempt to build an operational model of the neuron used a simple binary comparator (known as the binary decision neuron).
• One major disadvantage is that training is required, and the amount of training data can be large.
• As our aim is to mimic the operation of the human brain to some extent, it amounts to building Artificial Intelligence.

3 The field goes by many names, such as connectionism, parallel distributed processing, neuro-computing, natural intelligent systems, machine learning algorithms, and artificial neural networks.
Artificial Neural Networks:
• An artificial neural network (ANN) is an information-processing system that is based on generalizations of human cognition or neural biology.
• It consists of multiple layers of simple processing elements called neurons.
• Signals are passed between neurons over connection links.
• Each connection link has an associated weight, which, in a typical neural net, multiplies the signal transmitted.
• Learning is accomplished by adjusting the strengths of the connections between neighboring neurons so that the overall network outputs appropriate results.

4 A Neural Network (NN) is characterized by its particular:
• Architecture: its pattern of connections between the neurons.
• Learning Algorithm: its method of determining the weights on the connections.
• Activation Function: the function that determines its output.
• Each neuron has an internal state, called its activation or activity level, which is a function of the inputs it has received.
• A neuron sends its activation as a signal to several other neurons.
• A neuron can send only one signal at a time, although that signal may be broadcast to several other neurons.

5 Analogy to the Brain:
• Neural networks have a strong similarity to the biological brain, and therefore a great deal of the terminology is borrowed from neuroscience.
The Biological Neuron:
• The most basic element of the human brain is a specific type of cell known as the neuron; each neuron can connect with up to 200,000 other neurons.
• The power of the brain comes from the sheer number of these cells, which give us the ability to remember, think, and apply previous experiences to our every action, and from the multiple connections between them.
• All natural neurons have four basic components: dendrites, soma, axon, and synapses.

6 A Biological Neuron

7 • An artificial neuron simulates the four basic functions of a natural neuron.
• Artificial neurons are much simpler than biological neurons.
• The various inputs to the network are represented by the mathematical symbol x(n).
• Each of these inputs is multiplied by a connection weight; the weights are represented by w(n).
• In the simplest case, these products are summed, fed through a transfer function to generate a result, and then output.

8 Basic Block of an Artificial Neuron.
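The basic block above, a weighted sum fed through a transfer function, can be sketched in a few lines. This is a minimal illustration: the function name `neuron` and the choice of a sigmoid transfer function are assumptions, not part of the slides.

```python
import math

def neuron(inputs, weights, transfer=None):
    """A single artificial neuron: multiply each input x(n) by its
    connection weight w(n), sum the products, and pass the sum
    through a transfer function."""
    if transfer is None:
        transfer = lambda s: 1.0 / (1.0 + math.exp(-s))  # sigmoid (assumed)
    s = sum(x * w for x, w in zip(inputs, weights))      # weighted sum
    return transfer(s)

out = neuron([1.0, 1.0], [0.5, 0.5])  # two inputs, both weighted 0.5
```

With the sigmoid transfer, a zero weighted sum gives an output of exactly 0.5, and larger sums push the output towards 1.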

9 The complex design issues consist of:
Layers:
• Arranging neurons in various layers.
• Deciding the type of connections among neurons of different layers, as well as among the neurons within a layer.
• Deciding the way a neuron receives input and produces output.
• Determining the strength of connections within the network by allowing the network to learn the appropriate values of the connection weights from a training data set.
• A layer of “input” units is connected to a layer of “hidden” units, which is connected to a layer of “output” units.
• The activity of each hidden unit is determined by the activities of the input units and the weights on the connections between the input and hidden units.
• The behavior of the output units depends on the activity of the hidden units and the weights between the hidden and output units.
• Biologically, neural networks are constructed in a 3D way from microscopic components.

10 • The input layer consists of neurons that receive input from the external environment.
• The output layer consists of neurons that communicate the output of the system to the user or external environment.
• There are usually a number of hidden layers between these two layers. The input layer receives the input; its neurons produce output, which becomes input to the other layers of the system. The process continues until a certain condition is satisfied. For determining the number of hidden neurons, one is often left with the method of trial and error.

11 Neural Network Architecture:
Feed forward networks: Feed forward ANNs allow signals to travel one way only, from input to output.
Feedback networks: Feedback networks can have signals traveling in both directions by introducing loops into the network.
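A feed forward pass, with signals traveling one way only through successive layers, can be sketched as repeated weighted sums. The layer sizes and weight values below are illustrative assumptions:

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def feed_forward(x, layers):
    """Propagate an input vector one way through successive layers,
    each layer given as a list of per-neuron weight rows."""
    for weight_rows in layers:
        x = [sigmoid(sum(w * v for w, v in zip(row, x)))
             for row in weight_rows]
    return x

# 2 inputs -> 2 hidden neurons -> 1 output neuron (weights illustrative)
hidden = [[0.5, -0.5], [0.3, 0.8]]
output = [[1.0, -1.0]]
y = feed_forward([1.0, 0.0], [hidden, output])
```

A feedback network would differ by routing some outputs back as inputs on the next step; this sketch covers only the one-way case.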

12 Communication and types of connections:
• Neurons are connected via a network of paths carrying the output of one neuron as input to another neuron.
• Paths are unidirectional.
• A neuron receives input from many neurons, but produces a single output, which is communicated to other neurons.
• Neurons in a layer may communicate with each other, or they may not have any connections.
• The neurons of one layer are always connected to the neurons of at least one other layer.

13 Inter-layer connections: There are different types of connections used between layers; these connections between layers are called inter-layer connections. They are of the following types:
• Fully connected
• Partially connected
• Bi-directional
• Resonance
• Feed forward
• Hierarchical

14 Intra-layer connections:
• In more complex structures the neurons communicate among themselves within a layer; these are known as intra-layer connections. They are of two types:
• Recurrent: The neurons within a layer are fully or partially connected to one another. They communicate their outputs to one another a number of times before they are allowed to send their outputs to another layer.
• On-center/off-surround: A neuron within a layer has excitatory connections to itself and its immediate neighbors, and inhibitory connections to the other neurons. One can think of the neighborhoods as gangs: each gang excites itself and its gang members and inhibits all members of other gangs. After a few rounds of signal interchange, the neuron with the most active output value wins and is allowed to update its weights and its gang members' weights.
There are two types of connections between two neurons: excitatory and inhibitory. In an excitatory connection, the output of one neuron increases the action potential of the neuron to which it is connected. In an inhibitory connection, the output of the sending neuron reduces the activity or action potential of the receiving neuron.
• Excitatory connections cause the summing mechanism of the next neuron to add, while inhibitory connections cause it to subtract.

15 Learning:
• Neural networks are sometimes called machine-learning algorithms.
• The strength of the connection between two neurons is stored as a weight value for that specific connection.
• The system learns new knowledge by adjusting these connection weights.
Supervised Learning: It incorporates an external teacher, so that each output unit is told what its desired response to input signals ought to be. During the learning process, global information may be required.
Unsupervised Learning: It uses no external teacher and is based only upon local information. It is also referred to as self-organization, in the sense that the network self-organizes the data presented to it and detects their emergent collective properties.

16 Back propagation: This method has proven highly successful in the training of multilayered neural nets. The network is not just given reinforcement for how it is doing on a task; information about errors is also filtered back through the system and is used to adjust the connections between the layers, thus improving performance. It is a form of supervised learning.
Reinforcement learning: This method works on reinforcement from the outside. The connections among the neurons in the hidden layer are randomly arranged, then reshuffled as the network is told how close it is to solving the problem.

17 Learning Methods:
• Off-line: In off-line learning methods, once the system enters into operation mode, its weights are fixed and do not change any more. Most networks are of the off-line learning type.
• On-line: In on-line or real-time learning, when the system is in operating mode (recall), it continues to learn while being used as a decision tool. This type of learning has a more complex design structure.

18 Learning Laws: These laws are mathematical algorithms used to update the connection weights.
• Hebb's Rule: If a neuron receives an input from another neuron, and if both are highly active (mathematically, have the same sign), the weight between the neurons should be strengthened.
• Hopfield Law: It specifies the magnitude of the strengthening or weakening. It states, "if the desired output and the input are both active or both inactive, increment the connection weight by the learning rate; otherwise decrement the weight by the learning rate."
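Hebb's Rule as stated above can be written as a one-line weight update. The function name and the learning rate value of 0.1 are illustrative assumptions:

```python
def hebb_update(w, x, y, lr=0.1):
    """Hebb's rule: when input x and output y are active together
    (same sign), the product x*y is positive and the weight grows;
    opposite signs shrink it."""
    return w + lr * x * y

stronger = hebb_update(0.2, 1.0, 1.0)   # both active: weight increases
weaker = hebb_update(0.2, 1.0, -1.0)    # opposite signs: weight decreases
```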

19 • The Delta Rule: The Delta Rule is a further variation of Hebb's Rule, and it is one of the most commonly used. It is based on the idea of continuously modifying the strengths of the input connections to reduce the difference (the delta) between the desired output value and the actual output of a neuron.
• Activation functions: Various algorithms can be tried with several choices of activation function. Some examples of activation functions are: sine, cosine, linear, and hyperbolic tangent.
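The Delta Rule's error-driven update can be sketched as follows. The function name and the learning rate are illustrative assumptions:

```python
def delta_update(weights, inputs, target, output, lr=0.1):
    """Delta rule: adjust each weight in proportion to the error
    (the delta between desired and actual output) times its input."""
    error = target - output
    return [w + lr * error * x for w, x in zip(weights, inputs)]

# The output 0.6 fell short of the target 1.0, so both weights grow
new_w = delta_update([0.5, 0.5], [1.0, 1.0], target=1.0, output=0.6)
```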

20 Problem analysis in Neural Networks:
XOR/Parity bit problem: This is a standard problem. The network used had a 2-2-1 architecture.
Architecture for XOR problem
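A 2-2-1 network like the one above can be trained on XOR with the back propagation method of slide 16. This is a minimal sketch under stated assumptions: bias inputs appended at each layer, sigmoid activations, a learning rate of 0.5, and random initial weights; convergence is not guaranteed from every initialization.

```python
import math, random

random.seed(0)

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

# 2-2-1 architecture; a constant bias input of 1.0 is appended per layer
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_out = [random.uniform(-1, 1) for _ in range(3)]

xor_data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
            ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]

def forward(x):
    xb = x + [1.0]
    h = [sigmoid(sum(w * v for w, v in zip(row, xb))) for row in w_hidden]
    o = sigmoid(sum(w * v for w, v in zip(w_out, h + [1.0])))
    return h, o

def train_epoch(lr=0.5):
    """One pass over the data: errors are filtered back through the
    network and used to adjust the connection weights."""
    sq_err = 0.0
    for x, t in xor_data:
        h, o = forward(x)
        sq_err += (t - o) ** 2
        d_out = (t - o) * o * (1 - o)                   # output-layer delta
        d_hid = [d_out * w_out[i] * h[i] * (1 - h[i])   # hidden-layer deltas
                 for i in range(2)]
        for i, v in enumerate(h + [1.0]):
            w_out[i] += lr * d_out * v
        for i in range(2):
            for j, v in enumerate(x + [1.0]):
                w_hidden[i][j] += lr * d_hid[i] * v
    return sq_err

first = train_epoch()
for _ in range(5000):
    last = train_epoch()
```

The squared error typically falls over the epochs; a different seed or learning rate may converge faster, slower, or settle in a local minimum.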

21 The 10-5-10 encoder-decoder problem: This architecture contains 10 inputs, 5 neurons in the hidden layer, and 10 outputs.
Architecture for 10-5-10 problem

22 • 8-input, 3-output classification problem: The 8 inputs are 8 different symptoms of diseases. The network was trained to diagnose the disease, which was coded as one of the 8 possible binary combinations of the three outputs. The network used had an 8-6-3 architecture.
Architecture for 8 inputs & 3-output problem

23 Where are Neural Networks being used?
• Pattern recognition training: Automated recognition of handwritten text, spoken words, facial/fingerprint identification, and moving targets on a static background have all been successfully implemented.
• Speech production: This involves a neural network connected to a speech synthesizer. ANN-based algorithms are used to discover rules for themselves. A most remarkable example of this is the program Net-Talk.
• Image processing and pattern recognition form an important area of neural networks.
• Character recognition and handwriting recognition.
• AI expert systems are today used in applications where the underlying knowledge base does not significantly change with time (e.g. medical diagnostic systems).
• ANNs are more suitable when the input dataset can evolve with time (e.g. real-time control systems).

24 Conclusion:
• Artificial neural networks offer the ability to perform tasks outside the scope of traditional processors.
• Neural networks learn; they are not programmed. It is for that reason that neural networks are finding themselves in applications where humans, too, are unable to always be right.
• Neural networks need faster hardware. It is then that these systems will be able to hear speech, read handwriting, and formulate actions. They will be able to become the intelligence behind robots that never tire nor become distracted. It is then that they will become the leading edge in an age of "intelligent machines".

25 Thank You.

