IEM Fall Seminar Series: Raghu Pasupathy


Wednesday 11/15/17, EN 107, 3:30-5:00 PM
Sponsored by IEM and the OSU INFORMS Student Chapter. Light refreshments provided.

The Complexity of Adaptive Sampling Line Search and Trust Region Algorithms for Stochastic Optimization

Guest Speaker: Raghu Pasupathy, Purdue University

Abstract: The question of stochastic optimization, that is, optimizing an objective function f that is observable only through an inexact oracle such as Monte Carlo (MC) or quasi-Monte Carlo (QMC), has recently gained special prominence due to its relevance in machine learning and big data contexts. For example, as the rise of stochastic gradient descent (SGD) attests, virtually all classification, regression, and estimation in the presence of a large dataset relies on the ability to efficiently solve a stochastic optimization problem. In this talk, we ask two broad questions: (i) How can the well-established (deterministic) Line Search and Trust Region optimization techniques be suitably adapted to solve stochastic optimization problems? (ii) What can we say about the (work) complexity of the resulting algorithms?

In answering (i), we propose the Adaptive Sampling Gradient Method (ASGM) and the Adaptive Sampling Trust Region Optimization (ASTRO) algorithms, in which the structure of Line Search and Trust Region optimization is combined with ideas from sequential sampling. The salient feature of both ASGM and ASTRO is the adaptive manner in which stochastic sampling is performed: more oracle effort is exerted when algorithm iterates are close to a stationary point and less when they are far away, in an attempt to keep the accompanying errors in lock-step.

In answering (ii), treating derivative and derivative-free settings in a unified manner, we characterize the work complexity of ASGM and ASTRO in terms of the oracle decay rate. We show, for instance, that when f is twice differentiable with a Lipschitz first derivative, the work complexity of ASGM and ASTRO is arbitrarily close to O(ε^{-2 - 1/μ(α)}), where μ(α) is the error decay rate of the gradient estimate, however constructed, and α is the error decay rate of the inexact oracle. Other complexities resulting from different smoothness and structural conditions on f can readily be deduced and compared against established convergence rates for SGD. We illustrate the calculation of α and μ(α) for common choices such as MC or QMC with forward- or central-difference approximation.

Bio: Raghu Pasupathy is an Associate Professor of Statistics at Purdue University in West Lafayette, IN. He is interested in questions related to Monte Carlo sampling and (statistical) efficiency in the context of stochastic simulation, optimization, machine learning, and big data. A focus of his research has been investigating the nature of, and developing methods for, stochastic optimization. His other recent work includes efficient rare-event probability computation, specialized stochastic optimization for mathematical finance, and random variate generation. His teaching interests include probability, Monte Carlo methods, and optimization. Raghu is active on the editorial board of the Winter Simulation Conference; he is the current Vice President/President-Elect of the INFORMS Simulation Society and serves as an associate editor for Operations Research and INFORMS Journal on Computing, and as the Area Editor for the simulation desk at IIE Transactions.
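To make the two ideas in the abstract concrete, here is a minimal Python sketch. It is not the speaker's ASGM or ASTRO; it is a generic illustration, on a hypothetical test problem f(x) = ||x||^2 / 2 observed through a Monte Carlo oracle (canonical error decay rate α = 1/2), of (a) building a gradient estimate from an inexact oracle via central differences and (b) adaptively growing the sample size so that sampling error stays in lock-step with the gradient norm. All names, constants, and the doubling rule are illustrative assumptions, not the algorithms presented in the talk.

    # Illustrative sketch only: adaptive-sampling gradient descent driven by
    # a central-difference gradient estimate from an inexact MC oracle.
    # The test problem and all constants are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)

    def oracle(x, n):
        # Inexact Monte Carlo oracle: the average of n noisy evaluations of
        # f(x) = ||x||^2 / 2. Its error decays at the canonical MC rate
        # n^(-1/2), i.e., alpha = 1/2 in the abstract's notation.
        return 0.5 * np.dot(x, x) + rng.standard_normal(n).mean()

    def grad_estimate(x, n, h):
        # Central-difference gradient estimate built from the inexact oracle.
        # Its error combines the O(h^2) difference bias with O(1/(h sqrt(n)))
        # sampling error; balancing the two gives the gradient decay rate
        # mu(alpha) that enters the O(eps^{-2 - 1/mu(alpha)}) complexity.
        g = np.zeros_like(x)
        for i in range(len(x)):
            e = np.zeros_like(x)
            e[i] = h
            g[i] = (oracle(x + e, n) - oracle(x - e, n)) / (2.0 * h)
        return g

    def adaptive_sampling_gd(x0, step=0.2, iters=30, n0=4, h=1e-2, n_max=2**16):
        # Adaptive sampling: at each iterate, double the sample size until the
        # gradient estimate's sampling error (roughly 1/(h sqrt(n)) for this
        # oracle) is a fixed fraction of the estimated gradient norm, so that
        # oracle effort grows automatically as iterates near a stationary point.
        x = np.asarray(x0, dtype=float)
        n = n0
        for _ in range(iters):
            g = grad_estimate(x, n, h)
            while 1.0 / (h * np.sqrt(n)) > 0.5 * np.linalg.norm(g) and n < n_max:
                n *= 2
                g = grad_estimate(x, n, h)
            x = x - step * g
        return x, n

    x_final, n_final = adaptive_sampling_gd(np.ones(3))
    print(x_final, n_final)  # iterate near the optimum; n has grown large

By construction, n ramps up as ||x|| (and hence the gradient norm) shrinks, which is the lock-step behavior described above; swapping QMC for MC, or forward for central differences, changes α and μ(α) and hence the resulting work complexity.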
More information, including downloadable publications, a teaching profile, and software, can be obtained through the website web.ics.purdue.edu/~pasupath.

School of Industrial Engineering and Management
322 Engineering North
Oklahoma State University
Stillwater, OK 74078

Stay connected and follow us: @okstateIEM (Oklahoma State IEM)