Today’s Lecture Project notes Recurrent nets Unsupervised learning

Presentation transcript:

Today’s Lecture (not in text)
- Project notes
- Recurrent nets
  - NETtalk (continued)
  - Glovetalk video
- Unsupervised learning
  - Kohonen nets
CS-424 Gregory Dudek

NETtalk details
- Mapped English text to phonemes; a separate engine then converted the phonemes to sound.
- 3-layer, 203-80-26 MLP (multi-layer perceptron) trained using backprop.
- 203 input units: encode 7 consecutive characters, a window in time before and after the current sound. The alphabet of 26 letters plus some punctuation gives 29 input symbols in total (7 × 29 = 203).
- 80 hidden units.
- 26 output units, each representing one “articulatory feature” related to mouth positions/actions (typically 3 per phoneme).
Reference: Sejnowski, T. J., & Rosenberg, C. R. (1987). Parallel networks that learn to pronounce English text. Complex Systems, 1, 145–168.
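The 203-unit input layer follows directly from the windowed encoding: 7 character positions, each one-hot over 29 symbols. The sketch below illustrates that encoding and a forward pass through a 203-80-26 network. The exact punctuation set and the random weights are assumptions (the real NETtalk weights came from backprop training); this only shows the shapes.

```python
import math
import random

# Hypothetical 29-symbol alphabet: 26 letters plus space, comma and period.
# (The exact punctuation set used by NETtalk is an assumption here.)
SYMBOLS = "abcdefghijklmnopqrstuvwxyz ,."
WINDOW = 7                        # 7 consecutive characters
N_IN = WINDOW * len(SYMBOLS)      # 7 * 29 = 203 input units
N_HID = 80                        # hidden units
N_OUT = 26                        # one unit per articulatory feature

def encode_window(text, center):
    """One-hot encode the 7-character window centred at index `center`."""
    x = [0.0] * N_IN
    for k in range(WINDOW):
        pos = center - WINDOW // 2 + k
        ch = text[pos] if 0 <= pos < len(text) else " "  # pad with spaces
        x[k * len(SYMBOLS) + SYMBOLS.index(ch)] = 1.0
    return x

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(weights, inputs):
    """Fully connected layer with sigmoid activations."""
    return [sigmoid(sum(w * v for w, v in zip(row, inputs))) for row in weights]

# Random weights stand in for the ones backprop would learn.
random.seed(0)
W1 = [[random.gauss(0, 0.1) for _ in range(N_IN)] for _ in range(N_HID)]
W2 = [[random.gauss(0, 0.1) for _ in range(N_HID)] for _ in range(N_OUT)]

x = encode_window("hello world", center=4)   # window centred on 'o'
features = layer(W2, layer(W1, x))           # 26 feature activations
```

Each input vector has exactly 7 active units (one per window position), and the output is 26 activations in (0, 1), one per articulatory feature.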

Example: NETtalk
Learns to read English aloud: the output is audio speech.
Demo: first-grade text before and after training, then dictionary text, getting progressively better.

Example: Glovetalk
- Input: gestures; output: speech.
- Akin to reading sign language, but highly simplified.
- Input is encoded as electrical signals from, essentially, a Nintendo Power Glove: various joint angles as a function of time.
(See video.)
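Since the glove reports joint angles as a function of time, one natural network input is a short sliding window of angle frames flattened into a single vector, analogous to NETtalk’s character window. The sensor count and window length below are illustrative assumptions, not Glovetalk’s actual configuration.

```python
import random

# Assumed sizes for illustration only; Glovetalk's real setup differed.
N_JOINTS = 10   # joint-angle sensors reported by the glove per time step
FRAMES = 5      # time steps per input window

def flatten_window(frames):
    """Concatenate a window of joint-angle frames into one input vector."""
    assert len(frames) == FRAMES and all(len(f) == N_JOINTS for f in frames)
    return [angle for frame in frames for angle in frame]

# Simulated glove readings: FRAMES frames of N_JOINTS angles (degrees).
random.seed(1)
window = [[random.uniform(0.0, 90.0) for _ in range(N_JOINTS)]
          for _ in range(FRAMES)]
x = flatten_window(window)   # 5 * 10 = 50-dimensional network input
```

The flattened vector would then feed a feedforward network just as the one-hot text window does in NETtalk.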