CMS 165 Lecture 18: Summing up
Most surprising/interesting things learnt in this class
- Bias-variance trade-off in a non-classical setting: in the over-parametrized regime, the traditional bias and variance curves no longer behave the same way (see the decomposition after this list)
- Robustness as a min-max game (data augmentation + adversarial training; a sketch follows this list)
- Decoupling the sources of uncertainty in the error (active learning, fairness)
- Symmetries in non-convex optimization can be exploited
- Optimal points need not be isolated; they can lie on a manifold
- Generalization can be improved with robust training
- Data collection and the importance of a good train/test split
- The roles of local regularization and global regularization
- Sample-complexity results, and why they are not straightforward in deep learning
- Stability: the effect of a single data point in the dataset
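Two of the items above can be made concrete. First, the over-parametrized regime is surprising precisely because it breaks the classical picture; as a reference point, the standard bias-variance decomposition of the expected squared error (with f the true function, \hat{f} the learned estimator, and \sigma^2 the label noise; a standard textbook identity, not taken from the slides) is

\mathbb{E}\big[(y - \hat{f}(x))^2\big] = \underbrace{\sigma^2}_{\text{noise}} + \underbrace{\big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2}_{\text{bias}^2} + \underbrace{\mathrm{Var}\big[\hat{f}(x)\big]}_{\text{variance}}

Past the interpolation threshold, test error can descend again even though this trade-off alone would predict that it keeps rising.

Second, the robustness-as-min-max item refers to objectives of the form

\min_{\theta} \; \mathbb{E}_{(x,y)} \Big[ \max_{\|\delta\|_{\infty} \le \epsilon} \ell\big(f_{\theta}(x + \delta),\, y\big) \Big]

Below is a minimal sketch of one adversarial-training step for logistic regression, assuming an FGSM-style single-step inner maximization; the function names (fgsm_perturb, adv_train_step) and the hyperparameters are illustrative, not from the lecture.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_loss_wrt_x(w, x, y):
    # Gradient of the logistic loss w.r.t. the input x: (p - y) * w
    return (sigmoid(x @ w) - y) * w

def grad_loss_wrt_w(w, x, y):
    # Gradient of the logistic loss w.r.t. the weights w: (p - y) * x
    return (sigmoid(x @ w) - y) * x

def fgsm_perturb(w, x, y, eps):
    # Inner max: one ascent step on the loss, within an L-infinity ball of radius eps.
    return x + eps * np.sign(grad_loss_wrt_x(w, x, y))

def adv_train_step(w, x, y, eps=0.1, lr=0.01):
    # Outer min: ordinary gradient descent on the loss at the worst-case input.
    x_adv = fgsm_perturb(w, x, y, eps)
    return w - lr * grad_loss_wrt_w(w, x_adv, y)

# Usage on a toy example:
rng = np.random.default_rng(0)
w = rng.normal(size=3)
x, y = rng.normal(size=3), 1.0
w = adv_train_step(w, x, y)

Iterating the inner ascent step (with projection back into the epsilon-ball) gives PGD-style adversarial training; the single-step version here only approximates the inner max.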
Missing pieces (not covered in class)
- Interpretability
- The applications discussed were limited to standard ones; few deep-learning or recent applications of the established results
- Reinforcement learning / lifelong learning
- Evaluating test likelihood after training GANs
- Negative results, and the tricks needed to achieve good results
- An explicit discussion/categorization of the types of learning algorithms
- Decision trees and other classical ML techniques
- Recent properties of neural-network architectures and how their computation scales
- Sequence models, autoencoders, and embeddings
- Causal inference (beyond Bayesian networks?)
Other comments
- Peer grading: not much learning came from peer grading (maybe keep it for the open-ended questions?)
- Maybe different application questions
- "Paper to code"-style questions
- More applications/discussions of tensors
- Send the good homeworks to students, instead of sending them at random
- More recitations, on both theory and applications