Genome 559: Introduction to Statistical and Computational Genomics
Elhanan Borenstein

Artificial Neural Networks
(Some slides adapted from Geoffrey Hinton and Igor Aizenberg)

A quick review

Ab initio gene prediction
Parameters:
- Splice donor sequence model
- Splice acceptor sequence model
- Intron and exon length distribution
- Open reading frame
- More …

Markov chain:
- States
- Transition probabilities
Hidden Markov Model (HMM)
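To make the Markov-chain ingredients concrete, here is a minimal Python sketch with illustrative states and made-up transition probabilities (not the course's actual gene-prediction model):

```python
# Toy Markov chain for gene prediction: a set of states plus a
# transition-probability table (all numbers are illustrative only).
states = ["exon", "intron", "intergenic"]

# transitions[s1][s2] = P(next state is s2 | current state is s1);
# each row sums to 1.
transitions = {
    "exon":       {"exon": 0.85, "intron": 0.10, "intergenic": 0.05},
    "intron":     {"exon": 0.10, "intron": 0.90, "intergenic": 0.00},
    "intergenic": {"exon": 0.05, "intron": 0.00, "intergenic": 0.95},
}
```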

Machine learning

"A field of study that gives computers the ability to learn without being explicitly programmed."
Arthur Samuel (1959)

Tasks best solved by learning algorithms

Recognizing patterns:
- Facial identities or facial expressions
- Handwritten or spoken words
Recognizing anomalies:
- Unusual sequences of credit card transactions
Prediction:
- Future stock prices
- Predicting phenotype based on markers (genetic association, diagnosis, etc.)

Why machine learning?

It is very hard to write programs that solve problems like recognizing a face:
- We don't know what program to write.
- Even if we had a good idea of how to do it, the program might be horrendously complicated.
Instead of writing a program by hand, we collect lots of examples for which we know the correct output.
A machine learning algorithm then takes these examples, trains, and produces a program that does the job.
If we do it right, the program works for new cases as well as the ones we trained it on.

Why neural networks?

- One of those things you always hear about but never know exactly what they actually mean…
- A good example of a machine learning framework
- In and out of fashion…
- An important part of machine learning history
- A powerful framework

The goals of neural computation

1. To understand how the brain actually works
   - Neuroscience is hard!
2. To develop a new style of computation
   - Inspired by neurons and their adaptive connections
   - A very different style from sequential computation
3. To solve practical problems by developing novel learning algorithms
   - Learning algorithms can be very useful even if they have nothing to do with how the brain works

How the brain works (sort of)

- Each neuron receives inputs from many other neurons.
- Cortical neurons use spikes to communicate.
- Neurons spike once they aggregate enough stimuli through input spikes.
- The effect of each input spike on the neuron is controlled by a synaptic weight. Weights can be positive or negative.
- Synaptic weights adapt so that the whole network learns to perform useful computations.
- A huge number of weights can affect the computation in a very short time: much better bandwidth than a computer.

A typical cortical neuron

Physical structure:
- There is one axon that branches
- There is a dendritic tree that collects input from other neurons
- Axons typically contact dendritic trees at synapses
- A spike of activity in the axon causes a charge to be injected into the post-synaptic neuron
[Figure: neuron diagram labeling the axon, cell body, and dendritic tree]

Idealized neuron

[Figure: inputs x1, x2, x3 with weights w1, w2, w3 feeding a summation unit Σ that outputs y]
Basically, a weighted sum:
  y = Σ_i w_i x_i

Adding bias

[Figure: the same neuron with a bias term b added to the summation]
  y = Σ_i w_i x_i + b
The function no longer has to pass through the origin.

Adding an activation function

[Figure: the weighted sum passing through an activation function φ before the output]
The field of the neuron, z = Σ_i w_i x_i + b, goes through an activation function φ:
  y = φ(z)

Common activation functions

[Figure: plots of the four activation functions against z]
- Linear activation: φ(z) = z
- Threshold activation: φ(z) = 1 if z > 0, 0 otherwise
- Hyperbolic tangent activation: φ(z) = tanh(z)
- Logistic activation: φ(z) = 1 / (1 + e^(-z))
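As a minimal sketch (function and variable names are mine, not from the slides), the idealized neuron with a bias and a pluggable activation function can be written as:

```python
import math

def neuron(x, w, b, phi):
    """Compute the neuron's field z = w.x + b, then apply activation phi."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return phi(z)

# The four activation functions from the slide.
def linear(z):    return z
def threshold(z): return 1 if z > 0 else 0
def tanh(z):      return math.tanh(z)
def logistic(z):  return 1 / (1 + math.exp(-z))

# Example (arbitrary weights): a 3-input logistic neuron.
print(neuron([1, 0, 1], w=[0.5, -0.2, 0.3], b=-0.4, phi=logistic))
```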

McCulloch-Pitts neurons

- Introduced in 1943 (and influenced von Neumann!)
- Threshold activation function, with the bias b acting as a firing threshold:
    y = 1 if Σ_i w_i x_i > b, 0 otherwise
- Restricted to binary inputs and outputs

Two example gates:
  X1 AND X2: w1 = 1, w2 = 1, b = 1.5
  X1 OR X2:  w1 = 1, w2 = 1, b = 0.5

  x1 x2 | AND | OR
   0  0 |  0  |  0
   0  1 |  0  |  1
   1  0 |  0  |  1
   1  1 |  1  |  1
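A small sketch verifying the two gates above, assuming the convention that the bias b acts as a firing threshold (the unit fires when the weighted sum exceeds b):

```python
from itertools import product

def mp_neuron(x, w, b):
    """McCulloch-Pitts unit: binary output; fires when w.x exceeds b."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) - b > 0 else 0

for x1, x2 in product([0, 1], repeat=2):
    and_out = mp_neuron([x1, x2], w=[1, 1], b=1.5)  # AND gate
    or_out  = mp_neuron([x1, x2], w=[1, 1], b=0.5)  # OR gate
    print(x1, x2, "AND:", and_out, "OR:", or_out)
```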

Beyond binary neurons

The same AND and OR units, viewed geometrically: each unit defines a line in the (x1, x2) plane that separates the inputs (0,0), (0,1), (1,0), (1,1) into those mapped to 0 and those mapped to 1.
[Figure: the four input points plotted with the AND and OR decision lines]

Beyond binary neurons

A neuron is a general classifier:
- The weights determine the slope of the decision boundary
- The bias determines its distance from the origin
But … how would we know how to set the weights and the bias?
(Note: the bias can be represented as an additional input, as sketched below.)
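A tiny illustration of that note (values are arbitrary): appending a constant input of 1 whose weight is the bias leaves the neuron's field unchanged.

```python
x = [0.7, 0.2]      # original inputs
w = [1.0, -0.5]     # original weights
b = 0.3             # bias

x_aug = [1.0] + x   # constant input 1 prepended
w_aug = [b] + w     # the bias becomes just another weight

# Same field as sum(w_i * x_i) + b:
z = sum(wi * xi for wi, xi in zip(w_aug, x_aug))
```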

Perceptron learning

Use a training set and let the perceptron learn from its mistakes.
- Training set: a set of input data for which we know the correct answer/classification!
- Learning principle: whenever the perceptron is wrong, make a small correction to the weights in the right direction.
Note:
- Supervised learning
- Training set vs. testing set

Perceptron learning

1. Initialize weights and threshold (e.g., use small random values).
2. Take an input x and its desired output d from the training set.
3. Calculate the actual output, y.
4. Adapt the weights: w_i(t+1) = w_i(t) + α(d - y)x_i for all weights, where α is the learning rate (don't overshoot).
Repeat steps 2-4, cycling through the training set, until the error (d - y) is smaller than a user-specified error threshold, or a predetermined number of iterations has been completed.
If a solution exists, the algorithm is guaranteed to converge!
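A minimal sketch of this procedure in Python (the names and stopping-rule details are mine; the update is exactly the rule in step 4, with the bias learned as the weight of a constant +1 input):

```python
import random

def train_perceptron(data, alpha=0.1, max_iters=100):
    """data is a list of (inputs, desired_output) pairs with 0/1 outputs."""
    n = len(data[0][0])
    w = [random.uniform(-0.5, 0.5) for _ in range(n + 1)]  # last entry = bias
    for _ in range(max_iters):
        errors = 0
        for x, d in data:
            x = list(x) + [1.0]                        # constant bias input
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            if y != d:                                 # wrong: correct weights
                errors += 1
                w = [wi + alpha * (d - y) * xi for wi, xi in zip(w, x)]
        if errors == 0:                                # converged on training set
            break
    return w

# Example: learning OR from its truth table.
or_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
print(train_perceptron(or_data))
```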

Linear separability

What about the XOR function? Or other classification problems that are not linearly separable, such as:
[Figure: examples of point sets that no single line can separate]

Multi-layer feed-forward networks

We can connect several neurons, where the output of some is the input of others.

Solving the XOR problem

Only 3 neurons are required!
[Figure: a two-layer network of threshold units computing XOR; one visible bias is b = 1.5]
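The figure's weights are only partially legible in the transcript, so here is one standard 3-neuron assignment, reusing the AND (b = 1.5) and OR (b = 0.5) units from the earlier slide: XOR fires exactly when OR fires and AND does not.

```python
def tlu(x, w, b):
    """Threshold unit: fires when the weighted input sum exceeds b."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) - b > 0 else 0

def xor(x1, x2):
    h_or  = tlu([x1, x2], w=[1, 1], b=0.5)       # hidden unit 1: OR
    h_and = tlu([x1, x2], w=[1, 1], b=1.5)       # hidden unit 2: AND
    return tlu([h_or, h_and], w=[1, -1], b=0.5)  # OR and not AND

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", xor(x1, x2))  # prints the XOR truth table
```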

In fact …

With one hidden layer (given enough neurons in it) you can solve ANY classification task!
But … how do you find the right set of weights?
(Note: we only have an error delta for the output neuron.)
This problem caused this framework to fall out of favor … until …

Back-propagation

Main idea:
- First, propagate a training input data point forward to get the calculated output
- Compare the calculated output with the desired output to get the error (delta)
- Now, propagate the error back through the network to get an error estimate for each neuron
- Update the weights accordingly
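A minimal sketch of one back-propagation step, assuming a single hidden layer of logistic (sigmoid) units, one sigmoid output, and squared error; all names and structure choices are mine:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def backprop_step(x, d, W1, b1, W2, b2, alpha=0.5):
    """One forward/backward pass: W1/b1 are hidden-layer weights (one row
    per hidden unit); W2/b2 belong to the single output unit."""
    # Forward pass: propagate the input to get the calculated output.
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    y = sigmoid(sum(w * hi for w, hi in zip(W2, h)) + b2)

    # Output delta: error times the sigmoid derivative y * (1 - y).
    delta_out = (d - y) * y * (1 - y)

    # Propagate the error back: each hidden unit's delta is its share of
    # the output delta, again scaled by the sigmoid derivative.
    delta_h = [delta_out * W2[j] * h[j] * (1 - h[j]) for j in range(len(h))]

    # Update weights in proportion to each unit's delta.
    W2 = [w + alpha * delta_out * hi for w, hi in zip(W2, h)]
    b2 = b2 + alpha * delta_out
    W1 = [[w + alpha * dj * xi for w, xi in zip(row, x)]
          for row, dj in zip(W1, delta_h)]
    b1 = [b + alpha * dj for b, dj in zip(b1, delta_h)]
    return W1, b1, W2, b2, y

# Example: one update for a 2-input, 2-hidden-unit network.
W1, b1, W2, b2 = [[0.1, 0.2], [0.3, -0.1]], [0.0, 0.0], [0.2, -0.3], 0.0
W1, b1, W2, b2, y = backprop_step([1, 0], 1, W1, b1, W2, b2)
```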

Types of connectivity

Feed-forward networks:
- Compute a series of transformations
- Typically, the first layer is the input and the last layer is the output
Recurrent networks:
- Include directed cycles in their connection graph
- Complicated dynamics; memory
- More biologically realistic?
[Figure: a feed-forward network drawn as layers of input units, hidden units, and output units]

Computational representation of networks

Which is the most useful representation?

Connectivity matrix (rows = source node, columns = target node):
     A B C D
  A  0 0 1 0
  B  0 0 0 0
  C  0 1 0 0
  D  0 1 1 0

List of edges: (ordered) pairs of nodes
  [ (A,C), (C,B), (D,B), (D,C) ]

Object-oriented: each node object stores its name and pointers to its neighbors
  A -> [C],  B -> [],  C -> [B],  D -> [B, C]
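The same three representations sketched in Python for this four-node network (class and variable names are mine):

```python
nodes = ["A", "B", "C", "D"]
edges = [("A", "C"), ("C", "B"), ("D", "B"), ("D", "C")]  # list of edges

# Connectivity (adjacency) matrix: matrix[i][j] = 1 iff nodes[i] -> nodes[j].
idx = {n: i for i, n in enumerate(nodes)}
matrix = [[0] * len(nodes) for _ in nodes]
for src, dst in edges:
    matrix[idx[src]][idx[dst]] = 1

# Object-oriented: each node holds references to its neighbor nodes.
class Node:
    def __init__(self, name):
        self.name = name
        self.neighbors = []

obj = {n: Node(n) for n in nodes}
for src, dst in edges:
    obj[src].neighbors.append(obj[dst])

print(matrix)
print({n: [m.name for m in obj[n].neighbors] for n in nodes})
```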