CMS 165 Lecture 18 Summing up...

Most surprising/interesting things learnt in this class
- Bias-variance trade-off in the non-classical setting (the classical decomposition is recalled right after this list)
- The over-parametrized regime, where the traditional bias-variance trade-off no longer behaves the same way
- Robustness as a min-max game (data augmentation + adversarial training); see the training sketch below
- Decoupling the uncertainties in errors (active learning, fairness)
- Symmetries in non-convex optimization can be exploited
- Non-isolated optimal points (the optima live on a manifold)
- Generalization can be improved with robust training
- Data collection, and the importance of having a good train/test split
- The roles of local regularization and global regularization
- Sample complexity results, and why they are not straightforward in deep learning
- Stability: the effect of a single data point in the dataset
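
For reference, the classical statement behind the first two items; the notation (y = f(x) + ε with noise variance σ², estimator f̂_D fit on a dataset D) is assumed here, not taken from the slides:

\mathbb{E}_{D,\varepsilon}\big[(y - \hat{f}_D(x))^2\big]
  = \underbrace{\big(\mathbb{E}_D[\hat{f}_D(x)] - f(x)\big)^2}_{\text{bias}^2}
  + \underbrace{\mathrm{Var}_D\big(\hat{f}_D(x)\big)}_{\text{variance}}
  + \underbrace{\sigma^2}_{\text{noise}}

In the over-parametrized regime, test risk can decrease again past the interpolation threshold (the "double descent" picture), which is why this classical trade-off no longer tells the whole story.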

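A minimal sketch of the min-max robustness objective from the list above, min_θ max_{‖δ‖∞ ≤ ε} L(f_θ(x + δ), y), with PGD for the inner maximization. The choice of PyTorch and all hyperparameters are illustrative assumptions, not from the lecture:

import torch

def pgd_attack(model, loss_fn, x, y, eps=0.03, alpha=0.01, steps=10):
    # Inner maximization: find a perturbation delta with ||delta||_inf <= eps
    # that (approximately) maximizes the loss around x.
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # gradient *ascent* step
            delta.clamp_(-eps, eps)             # project back into the eps-ball
        delta.grad.zero_()
    return delta.detach()

def robust_training_step(model, loss_fn, optimizer, x, y):
    # Outer minimization: update the weights on the worst-case inputs.
    delta = pgd_attack(model, loss_fn, x, y)
    optimizer.zero_grad()  # also clears stale grads accumulated by the attack
    loss = loss_fn(model(x + delta), y)
    loss.backward()
    optimizer.step()
    return loss.item()

In practice one would also clamp x + delta back to the valid input range (e.g. [0, 1] for images); that detail is omitted here.
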
Missing pieces (not covered in class)
- Interpretability
- The applications discussed were limited to standard ones; not many deep learning or recent applications of the established works
- Reinforcement learning / lifelong learning
- Likelihood at test time after training GANs
- Negative results, and the tricks needed to achieve good results
- An explicit discussion/categorization of the types of learning algorithms
- Decision trees and other classical ML techniques
- Recent properties of neural network architectures, and how computation scales
- Sequence models, autoencoders, and embeddings
- Causal inference (besides Bayesian networks?)

Other comments
- Peer grading: not much learning comes from peer grading (maybe keep it only for the open-ended questions?)
- Maybe different application questions
- "Paper to code" style questions
- More applications/discussions of tensors
- Send the good homeworks to students, instead of sending them out randomly
- More recitations, on both theory and applications