Exposition on Cyber Infrastructure and Big Data
Mehdi Ghayoumi, Computer Science Department, Kent State University, Summer 2015
mghayoum@kent.edu
Machine Learning
Science is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about nature and the universe.
Why “learn”? Machine learning is programming computers to optimize a performance criterion using example data or past experience. There is no need to “learn” to calculate payroll. Learning is used when:
– Human expertise does not exist (navigating on Mars)
– Humans are unable to explain their expertise (speech recognition)
– The solution changes over time (routing on a computer network)
– The solution needs to be adapted to particular cases (user biometrics)
What is Machine Learning? Optimizing a performance criterion using example data or past experience.
Role of statistics: inference from a sample.
Role of computer science: efficient algorithms to
– solve the optimization problem
– represent and evaluate the model for inference
Apply a prediction function to a feature representation of the image to get the desired output:
f([image of an apple]) = “apple”
f([image of a tomato]) = “tomato”
f([image of a cow]) = “cow”
y = f(x), where y is the output (e.g., a class label), f is the prediction function, and x is the input (e.g., image features).
[Diagram: training — Training Images and Training Labels are turned into Image Features, from which a model is learned; testing — features of a new image go through the Learned model to produce a Prediction.]
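To make the pipeline concrete, here is a minimal sketch of the train/predict pattern in Python. The classifier choice (k-nearest neighbor via scikit-learn) and the toy feature data are illustrative assumptions, not part of the original slides.

```python
# Minimal sketch of the supervised pipeline: features -> learned model -> prediction.
from sklearn.neighbors import KNeighborsClassifier

# Training image features (one row per image) and their labels (toy values).
train_features = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
train_labels = ["apple", "apple", "tomato", "tomato"]

model = KNeighborsClassifier(n_neighbors=1)
model.fit(train_features, train_labels)      # learn f from (x, y) pairs

test_features = [[0.85, 0.15]]
print(model.predict(test_features))          # y = f(x) -> ['apple']
```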
– Unsupervised
– “Weakly” supervised
– Fully supervised
– SVM
– Neural networks
– Naïve Bayes
– Logistic regression
– Decision trees
– K-nearest neighbor
– RBMs
– Etc.
Definition of Learning (due to Tom Mitchell): A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.
Design a Learning System: we shall use handwritten character recognition as an example to illustrate the design issues and approaches.
Step 0: Let’s treat the learning system as a black box.
[Diagram: a black box labeled “Learning System” producing an output Z]
Step 1: Collect training examples (experience). Without examples, our system will not learn.
[Images: samples of handwritten digits 2, 3, 6, 7, 8, 9]
Step 2: Representing the experience. Choose a representation scheme for the experience/examples. The sensor input is represented by an n-dimensional vector, called the feature vector, X = (x_1, x_2, x_3, …, x_n). For example, two digit images might be represented as:
(1,1,0,1,1,1,1,1,1,1,0,0,0,0,1,1,1,1,1,0, …, 1)
(1,1,1,1,1,1,1,1,1,1,0,0,1,1,1,1,1,1,1,0, …, 1)
To represent the experience, we need to know what X is, so we need a corresponding vector D that records our knowledge (experience) about X. The experience E is then a pair of vectors: E = (X, D).
– Assuming our system is to recognise 10 digits only, D can be a 10-dimensional binary vector, D = (d_0, d_1, d_2, d_3, d_4, d_5, d_6, d_7, d_8, d_9), with one component per digit:
– if X is the digit 5, then d_5 = 1 and all others are 0
– if X is the digit 9, then d_9 = 1 and all others are 0
X = (1,1,0,1,1,1,1,1,1,1,0,0,0,0,1,1,1,1,1,0, …, 1); D = (0,0,0,0,0,1,0,0,0,0)
X = (1,1,1,1,1,1,1,1,1,1,0,0,1,1,1,1,1,1,1,0, …, 1); D = (0,0,0,0,0,0,0,0,1,0)
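A small sketch of this encoding in Python; the 20-pixel image is a made-up stand-in, but the one-hot layout of D follows the slide’s scheme:

```python
# Build an experience pair E = (X, D) for a digit image.
# X: binary pixel feature vector; D: 10-d one-hot target.
def make_experience(pixels, digit):
    X = list(pixels)            # n-d feature vector
    D = [0] * 10                # d_0 ... d_9
    D[digit] = 1                # if X is digit k, then d_k = 1, all others 0
    return (X, D)

# Hypothetical 20-pixel image of the digit 5:
E = make_experience([1,1,0,1,1,1,1,1,1,1,0,0,0,0,1,1,1,1,1,0], 5)
print(E[1])   # -> [0, 0, 0, 0, 0, 1, 0, 0, 0, 0]
```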
Step 3: Choose a representation for the black box. We need to choose a function F to approximate the black box; for a given X, the value of F gives the classification of X. There is considerable flexibility in choosing F.
[Diagram: input X → Learning System F → output F(X)]
– F will be a function of some adjustable parameters, or weights, W = (w_1, w_2, w_3, …, w_N), which the learning algorithm can modify or learn.
[Diagram: input X → Learning System F(W) → output F(W, X)]
Step 4: Learning/adjusting the weights. We need a learning algorithm to adjust the weights so that the experience/prior knowledge from the training data is learned into the system: for each training pair E = (X, D), we want F(W, X) = D.
[Diagram: the training loop — input X feeds the Learning System F(W), which outputs F(W, X); the output is compared with the target D from E = (X, D), the error D − F(W, X) is computed, and W is adjusted to reduce it.]
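A minimal sketch of this loop in Python, assuming a linear model F(W, X) = W·X and a simple error-driven (delta-rule style) update; the learning rate, epoch count, and toy data are illustrative assumptions:

```python
# Sketch of the Step-4 loop: adjust W so that F(W, X) approaches D.
def F(W, X):
    return sum(w * x for w, x in zip(W, X))   # linear model W . X

def train(examples, n_features, lr=0.1, epochs=50):
    W = [0.0] * n_features
    for _ in range(epochs):
        for X, D in examples:                 # each experience E = (X, D)
            error = D - F(W, X)               # Error = D - F(W, X)
            for i in range(n_features):       # adjust W in proportion to error
                W[i] += lr * error * X[i]
    return W

# Toy 2-feature examples with scalar targets (illustrative only):
examples = [([1.0, 0.0], 1.0), ([0.0, 1.0], 0.0)]
W = train(examples, n_features=2)
print(round(F(W, [1.0, 0.0]), 2))   # close to the target 1.0 after training
```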
Step 5: Use/test the system. Once learning is completed, all parameters are fixed. When an unknown input X is presented, the system computes its answer according to F(W, X).
[Diagram: input X → Learning System F(W) → Answer F(W, X)]
Bayes Rule: Thomas Bayes (c. 1701 – 7 April 1761) was an English statistician, philosopher, and Presbyterian minister, known for having formulated a specific case of the theorem that bears his name: Bayes’ theorem. Bayes never published what would eventually become his most famous accomplishment; his notes were edited and published after his death by Richard Price.
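The slide’s formula did not survive extraction; for reference, the standard statement of Bayes’ theorem is:

```latex
% Bayes' theorem: posterior = likelihood x prior / evidence
P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}
```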
Maximum Likelihood (ML)
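The slide’s equations are not in the transcript; the standard definition, for i.i.d. samples x_1, …, x_N and parameter θ, is:

```latex
% Maximum likelihood estimate: the parameter value that maximizes the
% probability of the observed samples (equivalently, the log-likelihood).
\hat{\theta}_{\mathrm{ML}}
  = \arg\max_{\theta} \prod_{i=1}^{N} p(x_i \mid \theta)
  = \arg\max_{\theta} \sum_{i=1}^{N} \log p(x_i \mid \theta)
```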
The Euclidean Distance Classifier
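The slide’s derivation is not in the transcript. As a sketch, the Euclidean distance classifier assigns an input to the class whose mean is nearest in Euclidean distance; the helper names and toy data below are assumptions for illustration:

```python
# Euclidean distance classifier: assign x to the class with the nearest mean.
import math

def class_means(examples):
    # examples: list of (feature_vector, label) pairs
    sums, counts = {}, {}
    for x, y in examples:
        counts[y] = counts.get(y, 0) + 1
        s = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def classify(x, means):
    dist = lambda m: math.sqrt(sum((a - b) ** 2 for a, b in zip(x, m)))
    return min(means, key=lambda y: dist(means[y]))

# Toy 2-d data (illustrative):
train = [([0.0, 0.0], "a"), ([1.0, 0.0], "a"), ([5.0, 5.0], "b"), ([6.0, 5.0], "b")]
print(classify([0.5, 0.5], class_means(train)))  # -> 'a'
```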
Cell structures:
– Cell body
– Dendrites
– Axon
– Synaptic terminals
Entropy (disorder, impurity) of a set of examples S, relative to a binary classification, is
Entropy(S) = −p_1 log_2(p_1) − p_0 log_2(p_0),
where p_1 is the fraction of positive examples in S and p_0 is the fraction of negatives.
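A one-function sketch in Python (the example counts are illustrative):

```python
# Entropy of a binary-labeled set, from positive/negative counts.
import math

def entropy(n_pos, n_neg):
    total = n_pos + n_neg
    e = 0.0
    for count in (n_pos, n_neg):
        p = count / total
        if p > 0:                  # 0 * log2(0) is taken as 0
            e -= p * math.log2(p)
    return e

print(entropy(9, 5))    # ~0.940 (mixed set)
print(entropy(7, 7))    # 1.0 -> maximally impure
print(entropy(14, 0))   # 0.0 -> pure set
```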
An SVM is an abstract learning machine which will learn from a training data set and attempt to generalize and make correct predictions on novel data.
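As a sketch of that train-then-generalize behavior, here is a minimal example using scikit-learn’s SVC; the library choice, kernel, and toy data are assumptions, not from the slides:

```python
# Train an SVM on labeled data, then predict on novel data.
from sklearn.svm import SVC

X_train = [[0, 0], [1, 1], [4, 5], [5, 4]]   # toy 2-d training features
y_train = [0, 0, 1, 1]                       # class labels

clf = SVC(kernel="linear")     # linear-kernel support vector classifier
clf.fit(X_train, y_train)      # learn from the training set

print(clf.predict([[0.5, 0.5], [5, 5]]))     # novel inputs -> [0 1]
```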
Clustering: partition unlabeled examples into disjoint subsets (clusters), such that:
– Examples within a cluster are similar
– Examples in different clusters are different
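The slide does not name an algorithm; as a sketch, k-means (here via scikit-learn, an assumed dependency) produces exactly such a disjoint partition:

```python
# Partition unlabeled examples into k disjoint clusters with k-means.
from sklearn.cluster import KMeans

X = [[1, 1], [1.5, 2], [8, 8], [8, 9], [0.5, 1.5], [9, 8]]  # unlabeled points

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)            # cluster index assigned to each example
print(kmeans.cluster_centers_)   # one center per cluster
```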
[Plot: original data]
(1) Centering & whitening process
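A sketch of centering and whitening with NumPy (an assumed dependency): whitening rotates and rescales the centered data so its covariance becomes the identity matrix.

```python
# Center the data, then whiten it via the eigendecomposition of its covariance.
import numpy as np

def center_and_whiten(X):
    # X: (n_samples, n_features) data matrix
    Xc = X - X.mean(axis=0)                  # (1) centering: zero-mean columns
    cov = np.cov(Xc, rowvar=False)           # feature covariance
    eigvals, eigvecs = np.linalg.eigh(cov)   # cov = E diag(d) E^T
    # Whitening transform E diag(d^-1/2) E^T makes the result's covariance I.
    W = eigvecs @ np.diag(eigvals ** -0.5) @ eigvecs.T
    return Xc @ W

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)) @ np.array([[2.0, 1.0], [0.0, 1.0]])
Xw = center_and_whiten(X)
print(np.round(np.cov(Xw, rowvar=False), 2))  # ~ identity matrix
```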
(2) FastICA algorithm
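The slides show FastICA results only as plots; here is a minimal sketch using scikit-learn’s FastICA (an assumption — the slides do not name an implementation), recovering independent sources from linear mixtures:

```python
# Recover independent components from mixed signals with FastICA.
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)                      # source 1: sinusoid
s2 = np.sign(np.cos(3 * t))             # source 2: square wave
S = np.column_stack([s1, s2])

A = np.array([[1.0, 0.5], [0.5, 1.0]])  # mixing matrix (illustrative)
X = S @ A.T                             # observed mixed signals

ica = FastICA(n_components=2, whiten="unit-variance", random_state=0)
S_est = ica.fit_transform(X)            # estimated independent components
print(S_est.shape)                      # (2000, 2)
```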
https://www.youtube.com/watch?v=t4kyRyKyOpo
Thank you!