General Aspects of Learning

General Aspects of Learning

Different forms of learning, distinguished by the feedback the learning agent receives with respect to its actions (e.g. from a teacher):

- Supervised learning: feedback is received with respect to all possible actions of the agent.
- Reinforcement learning: feedback is received only with respect to the action the agent actually took.
- Unsupervised learning: there is no hint at all about the correct action.

Inductive learning is a form of supervised learning that centers on learning a function from sets of training examples. Popular techniques include decision trees, neural networks, nearest-neighbor approaches, discriminant analysis, and regression. The performance of an inductive learning system is usually evaluated using n-fold cross-validation (sketched below).

The last technology I would like to introduce in today's presentation is shared ontologies. Shared ontologies are important for standardizing communication and for gathering information from different information sources. Ontologies play an important role in agent-based systems. Ontologies basically describe...
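The slide mentions n-fold cross-validation only by name. As a rough illustration, here is a minimal Python sketch of the procedure; `train` and `accuracy` are hypothetical placeholders for whatever inductive learner is being evaluated, not functions from the course material.

```python
def n_fold_cross_validation(examples, train, accuracy, n=10):
    """Estimate a learner's performance by n-fold cross-validation:
    split the examples into n folds, train on n-1 folds, test on the
    held-out fold, and rotate so every fold is tested once.
    Assumes n <= len(examples)."""
    folds = [examples[i::n] for i in range(n)]  # n roughly equal folds
    scores = []
    for i in range(n):
        test_set = folds[i]
        training_set = [ex for j, fold in enumerate(folds) if j != i
                        for ex in fold]
        model = train(training_set)              # hypothetical learner
        scores.append(accuracy(model, test_set))
    return sum(scores) / n                       # average held-out accuracy
```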

Neural Network Terminology

- A neural network is composed of a number of units (nodes) that are connected by links. Each link has a weight associated with it.
- Each unit has an activation level and a means to compute the activation level at the next step in time.
- Most units decompose into a linear component, called the input function, and a non-linear component, called the activation function. Popular activation functions include the step function, the sign function, and the sigmoid function.
- The architecture of a neural network determines how units are connected and which activation functions are used for the network computations. Architectures are subdivided into feed-forward and recurrent networks. Moreover, single-layer and multi-layer neural networks (the latter contain hidden units) are distinguished.
- Learning in the context of neural networks mostly centers on finding "good" weights for a given architecture, so that the error in performing a particular task is minimized. Most approaches center on learning a function from a set of training examples and use hill-climbing and steepest-descent hill-climbing approaches to find the best values for the weights.
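To make these terms concrete, here is a small Python sketch (the function names are mine, not from the slides) of the three activation functions and of a unit whose activation is the activation function applied to the linear input function, i.e. the weighted sum of its inputs:

```python
import math

def step(x, threshold=0.0):
    """Step function: 1 if the input reaches the threshold, else 0."""
    return 1 if x >= threshold else 0

def sign(x):
    """Sign function: +1 for non-negative input, -1 otherwise."""
    return 1 if x >= 0 else -1

def sigmoid(x):
    """Sigmoid function: smooth squashing of the input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def unit_activation(weights, inputs, g=sigmoid):
    """One unit: the non-linear activation function g applied to the
    linear input function (the weighted sum over the incoming links)."""
    weighted_sum = sum(w * a for w, a in zip(weights, inputs))
    return g(weighted_sum)

# A unit with a bias input fixed at 1 and two ordinary inputs:
print(unit_activation([0.5, -0.3, 0.8], [1, 0.2, 0.7]))  # sigmoid output
```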

Perceptron Learning Example

Learn y = x1 AND x2 from the examples (x1, x2, y) = (0,0,0), (0,1,0), (1,0,0), (1,1,1), with learning rate a = 0.5 and initial weights w0 = 1, w1 = w2 = 0.8; step0 is used as the activation function.

- First example: w0 is set to 0.5; nothing else changes.
- Second example: w0 is set to 0; w2 is set to 0.3.
- Third example: w0 is set to -0.5; w1 is set to 0.3.
- No more errors occur with these weights on the four examples.

[Diagram: a bias input fixed at 1 (weight w0) and inputs x1 (weight w1) and x2 (weight w2) feed a step0 unit whose output is y.]

Perceptron learning rule: wj := wj + a*aj*(T-O), where aj is the j-th input, T the target output, and O the actual output.
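The trace above is easy to reproduce. Below is a minimal Python sketch of the perceptron learning rule as stated on the slide; the variable names are my own. Running it performs exactly the three updates listed above and then makes an error-free pass over all four examples.

```python
def step0(x):
    """step0 activation: output 1 if the weighted sum is >= 0, else 0."""
    return 1 if x >= 0 else 0

# Training examples (x1, x2, target) and parameters from the slide.
examples = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
alpha = 0.5                      # learning rate a
w = [1.0, 0.8, 0.8]              # initial weights w0, w1, w2

changed = True
while changed:                   # repeat until an error-free pass
    changed = False
    for x1, x2, target in examples:
        a = [1, x1, x2]          # a[0] is the fixed bias input
        out = step0(sum(wj * aj for wj, aj in zip(w, a)))
        if out != target:
            # Perceptron learning rule: wj := wj + a*aj*(T-O)
            w = [wj + alpha * aj * (target - out) for wj, aj in zip(w, a)]
            changed = True

print(w)  # -> w0 = -0.5, w1 = w2 = 0.3 (up to float rounding), as in the trace
```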