Slide 1: Overview of deep learning
Gregery T. Buzzard
Slide 2: Varieties of AI/ML
AI is the most general category. There are many types of ML algorithms:
- Nearest Neighbor
- Naive Bayes
- Decision Trees
- Linear Regression
- Support Vector Machines (SVM)
- Neural Networks
Deep learning is associated with neural networks.
Slide 3: Machine learning
Machine learning changes the paradigm: instead of programming explicit rules to produce answers, we program the system to discover the rules (how to transform new data into new answers) from examples.
Slide 4: Shallow vs deep
Shallow learning uses only 1 or 2 layers, which limits the ability to combine simple features into more sophisticated ones (e.g., the concept of an eye). Deep learning allows a model to learn all layers of a representation jointly rather than sequentially, which works better than stacking shallow models. Two key points:
- Increasingly complex representations are developed layer by layer.
- These representations are learned jointly.
Slide 5: NN depth
Deep learning uses multiple layers of processing. This promotes reuse of rules across multiple inputs (identifying common features rather than global properties) and promotes increasing levels of abstraction with depth.
Slide 6: NN depth
Example activations in a simple NN: layers farther from the input are more abstract.
Slide 7: NN overview
Simple layers apply a linear transformation (Ax + b) followed by an activation function (e.g., ReLU). The weights (parameters) A and b determine the layer's behavior, so we need to learn them.
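A minimal sketch of such a layer in plain Python (the matrix A, bias b, and input x below are made-up illustrative values):

```python
def relu(x):
    # ReLU activation: max(0, x)
    return max(0.0, x)

def dense_layer(x, A, b):
    # y_i = relu(sum_j A[i][j] * x[j] + b[i]) -- linear map Ax + b, then ReLU
    return [relu(sum(a_ij * x_j for a_ij, x_j in zip(row, x)) + b_i)
            for row, b_i in zip(A, b)]

# Toy 2-input, 2-output layer with hand-picked weights
A = [[1.0, 2.0], [0.5, -1.0]]
b = [0.5, 1.0]
x = [1.0, 1.0]
print(dense_layer(x, A, b))  # [3.5, 0.5]
```

In practice these operations are vectorized (NumPy, PyTorch, TensorFlow), but the arithmetic is exactly this.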
Slide 8: NN overview
Compare predictions with training targets using a loss function (also called an error function), often cross-entropy (CE) or mean squared error (MSE).
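Both losses are one-liners; here is a sketch of each (the prediction and target values are arbitrary examples):

```python
import math

def mse(preds, targets):
    # Mean squared error: average of squared differences
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

def cross_entropy(probs, target_index):
    # Negative log-probability the model assigned to the true class
    return -math.log(probs[target_index])

print(mse([0.9, 0.1], [1.0, 0.0]))        # 0.01 (up to float rounding)
print(cross_entropy([0.7, 0.2, 0.1], 0))  # about 0.357
```

MSE is typical for regression; cross-entropy for classification, where it heavily penalizes confident wrong answers.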
Slide 9: NN overview
Use the loss function with an optimizer, e.g., gradient descent, to update the weights.
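A single gradient-descent update is just a step opposite the gradient; a toy sketch minimizing f(w) = w^2 (learning rate and iteration count are arbitrary choices):

```python
def gradient_descent_step(w, grad, lr=0.1):
    # Move each weight a small step opposite its gradient
    return [wi - lr * gi for wi, gi in zip(w, grad)]

# Minimize f(w) = w^2, whose gradient is 2w
w = [4.0]
for _ in range(50):
    w = gradient_descent_step(w, [2 * w[0]])
print(w[0])  # very close to 0
```

In a real network the gradient of the loss with respect to every weight is computed by backpropagation, and variants such as SGD with momentum or Adam refine the basic step.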
Slide 10: Driving forces
Three technical forces are driving advances in machine learning:
- Hardware
- Datasets and benchmarks
- Algorithmic advances (and software platforms)
A fourth factor: the belief that it works.
Slide 11: Neural networks – hype is not new!
The perceptron was developed by Frank Rosenblatt in 1957 at Cornell under a grant from the Office of Naval Research. The New York Times reported the perceptron to be "the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence."
Slide 12: Neural networks – roadblocks
In 1969, Minsky and Papert of MIT argued that Rosenblatt's predictions had been grossly exaggerated. Several booms and busts later, lots of data, faster computers, and good programming infrastructure led to many of the early predictions coming true. What next?
Slide 13: Example code on Google Colaboratory