Sparse Coding: A Deep Learning using Unlabeled Data for High-Level Representation
Dr. G.M. Nasira, R. Vidya, R.P. Jaia Priyankka


Sparse coding is an algorithm mainly used for unsupervised learning on unlabeled data. This paper tries to find a high-level representation from unlabeled data, which clearly gives way to deep learning. Unlabeled data is easy to acquire compared with labeled data because it does not require class labels. Sparse…

The main problem with sparse coding is that it uses a quadratic loss function and a Gaussian noise model, so its performance is very poor when binary, integer-valued, or other non-Gaussian data is applied. We therefore first propose an algorithm for solving the L1-regularized convex optimization problem, to allow a high-level representation of unlabeled data. Through this we derive an optimal solution and describe an approach to a deep learning algorithm using sparse codes.
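Concretely, the class of problems referred to here takes the standard form below (a textbook formulation, not copied from the slides): a convex loss \ell(a), which is quadratic under the Gaussian noise model, plus an L1 penalty that induces sparsity in the code a:

\min_{a \in \mathbb{R}^n} \; \ell(a) + \beta \, \|a\|_1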

Blueprint: Ground Rule, Sparse Coding, L1-Regularized, Neural Network Classifier

Supervised… Supervised learning can basically be described as learning by example. In this task, a system learns to recognize certain trends by comparing a sample data set against known and labeled examples. In essence, the platform is told to extract features and patterns from the known examples and apply them when examining new, unlabeled data. For instance, let’s say you wanted to teach a system to recognize George Clooney’s face. Using supervised learning, you would provide a data set containing many images. You’d label some of these images as George. From there, the idea is that the system’s supervised learning algorithm would adapt to the data and pick out the right faces.
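As a concrete illustration of learning from labeled examples (a minimal sketch, not from the slides; the data and names here are hypothetical stand-ins for labeled face images):

# Minimal supervised learning sketch: fit a classifier to labeled examples.
# Illustrative only; X stands in for image features, y for "George"/not labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))            # hypothetical 64-dim image features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy labels: 1 = "George", 0 = other

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)  # adapt to the labeled data
print("accuracy on new, unseen examples:", clf.score(X_test, y_test))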

Unsupervised… Here, the goal is to find hidden patterns and structures, which is accomplished by mining and reducing the complexity of data in a multitude of ways. Essentially, unsupervised learning is used to discover trends in data sets when we don’t exactly know what to look for. Sticking with George Clooney, rather than tracking his face let’s say we wanted to track his usage across platforms instead. Multi screen tracking is a problem that the entire data analytics industry is currently struggling to solve. Short of attaching unique IDs to users, it can’t be done with complete accuracy. However, an unsupervised learning system could potentially accomplish this feat with great precision.
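By contrast, an unsupervised method receives no labels at all. A minimal clustering sketch (illustrative only, with hypothetical session features standing in for real multi-screen data):

# Minimal unsupervised learning sketch: discover structure without labels.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
sessions = rng.normal(size=(300, 8))   # hypothetical per-session behaviour features

# KMeans groups similar sessions; clusters might correspond to the same user
# appearing across different screens, without any unique IDs attached.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(sessions)
print(kmeans.labels_[:10])             # cluster assignment for the first sessions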


Supervised Learning Model [diagram slide]


Classification: The deep learning problem arises when we have limited labeled data for the classification task, together with a large amount of available unlabeled data that is only mildly related to the task.

Contact us  The main concept here in Supervised learning is we give our inputs that is to be observed. Only after processing we know whether the input observations are right or wrong. if it is right then it is output. If it is wrong then another set of input under observation will be feed inside. Hence we cannot name it as output. Both the inputs and outputs are only observations.  In unsupervised learning we do not feed input we feed the latent variable (which is half result from the input) and in the result we receive observations, which is again feed as latent variable. This process continuous until we receive output. Process Supervised Learning Unsupervised Learning Observations (inputs) Latent Variable

For more clarification, consider this diagram of where the latent variable comes from. [Diagram: observations (inputs) → latent variables → observations (output)]

Starting from the ground rule, to put it in simple sentences: we follow a non-Gaussian method, implemented with a light modification of the Gaussian and regularized algorithms. Our rules state that we can use any number of inputs for any number of observations: 5 inputs for the first observation, 7 for the second, 3 for the third, and so on; no limit or minimum number of inputs per observation is imposed. Next, we can take an observation in positive or negative form, and it can even be made neutral for verification.

Sparse Coding, Gaussian: [two equation slides; the formulas were images and do not survive in the transcript]
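For reference, the standard Gaussian sparse coding objective (a textbook formulation under the usual assumptions, with basis vectors b_j, activations a^{(i)}, and inputs x^{(i)}; not copied from the slides):

\min_{\{b_j\},\,\{a^{(i)}\}} \;\; \sum_{i=1}^{m} \frac{1}{2\sigma^2} \Big\| x^{(i)} - \sum_{j=1}^{n} a^{(i)}_j b_j \Big\|^2 \;+\; \beta \sum_{i=1}^{m} \big\| a^{(i)} \big\|_1 \qquad \text{subject to } \|b_j\|^2 \le c,\; j = 1,\dots,n

The quadratic reconstruction term is exactly the Gaussian noise model the abstract refers to, and the L1 term is the regularizer that makes the codes sparse.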

L1-Regularized Convex Optimization Algorithm
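The transcript does not reproduce the algorithm itself. As a hedged sketch, iterative soft-thresholding (ISTA) is one standard way to solve the L1-regularized least-squares problem above; everything here is illustrative rather than the authors' exact method:

# ISTA sketch for min_a ||x - B a||^2 / 2 + beta * ||a||_1 (illustrative).
import numpy as np

def ista(B, x, beta, n_iters=500):
    """Iterative soft-thresholding for L1-regularized least squares."""
    L = np.linalg.norm(B, 2) ** 2            # Lipschitz constant of the gradient
    a = np.zeros(B.shape[1])
    for _ in range(n_iters):
        grad = B.T @ (B @ a - x)             # gradient of the quadratic loss
        z = a - grad / L                     # plain gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - beta / L, 0.0)  # soft-threshold
    return a

rng = np.random.default_rng(0)
B = rng.normal(size=(50, 100))               # overcomplete basis
a_true = np.zeros(100); a_true[:5] = 1.0     # sparse ground-truth code
x = B @ a_true
a_hat = ista(B, x, beta=0.1)
print("active units:", np.count_nonzero(np.abs(a_hat) > 1e-3))  # stays sparse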


Classifier… Our goal is to use sparse coding to find a high-level representation of unlabeled data in deep learning. The task is to find the best classifier for the implementation; this approach is designed mainly for the sparse algorithm used in deep learning.
A neural network classifier consists of neurons, i.e. units, arranged in layers that convert the input vector into an output. Each unit takes an input, applies some function to it, and passes the result on to the next level. Such networks are said to be feed-forward: each level's output is fed forward as the next level's input, and no feedback is given to the previous level. A weighting is also applied to each unit's signal as it passes from one level to another; these weights are what tune the neural network to adapt to the particular problem. A minimal sketch of this forward pass follows below.
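A minimal feed-forward pass in Python (a sketch of the description above, assuming two dense layers with illustrative sizes and a ReLU unit function, none of which the slides specify):

# Minimal feed-forward classifier sketch: input -> hidden -> output.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(100, 32)), np.zeros(32)  # weights into the hidden layer
W2, b2 = rng.normal(size=(32, 2)), np.zeros(2)     # weights into the output layer

def forward(x):
    h = np.maximum(x @ W1 + b1, 0.0)  # each unit applies a function (ReLU here)
    o = h @ W2 + b2                   # output fed forward, never back
    e = np.exp(o - o.max())
    return e / e.sum()                # softmax: probability per class

x = rng.normal(size=100)              # e.g. a sparse code as the input vector
print(forward(x))                     # class probabilities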

Conclusion & Discussion… In this paper we presented a general method for using sparse coding for unsupervised learning on unlabeled data, mainly for high-level representation. Since standard sparse coding does not support binary or other non-Gaussian data, we derived an algorithm for the optimization, which came out as the L1-regularized convex optimization algorithm. With its help we derived a deep learning algorithm for higher-level representation. Overall, our results suggest that the L1-regularized convex optimization algorithm can help build a higher-level representation of documents from unlabeled data, and that this knowledge is useful in classification problems. We believe this model could be applied generally to other problems where large amounts of unlabeled data are available.

Future Work… The model presented in this paper consists of two algorithms, which directly or indirectly relate Gaussian sparse coding to deep learning. We have implemented the model only on images; we would like to extend it to text, audio, and video data. This extension seems challenging, yet once fully applied to all types of multimedia content, it could be used in robotics, mainly for perception tasks.

Thank You… Queries…