Marshall Wang Dept. of Statistics, NC State University


Sufficient Markov Decision Processes with Alternating Deep Neural Networks
Marshall Wang (lwang31@ncsu.edu), Dept. of Statistics, NC State University
Advisor: Eric Laber. Aug 4, 2017
Reassure audience that the talk is self-contained. 30 sec.

Motivation
Want to apply a mobile intervention to students with heavy drinking/smoking behavior.
Hard to identify an optimal strategy when the data are high-dimensional and noisy.
Need a dimension reduction that retains the useful information.
30 sec.

Contributions
Provided a criterion to measure the quality of a dimension reduction.
Designed a deep learning model to produce a dimension reduction with no information loss.
Demonstrated the method on a mobile intervention study.
30 sec.

Outline
Sufficient Markov Decision Process
Alternating Deep Neural Network
Simulation Study
Application on Mobile Health
20 sec.

Sufficient Markov Decision Process Alternating Deep Neural Network Simulation Study Application on Mobile Health

Markov Decision Process 1 min.

Markov Decision Process 2 min.
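The MDP definitions themselves are not transcribed on these slides, so as hedged background (a toy of my own, not the talk's example): a finite MDP is given by states, actions, a transition kernel, and rewards, and value iteration recovers an optimal policy.

```python
import numpy as np

# Toy 2-state, 2-action MDP (illustrative numbers, not from the talk).
# P[s, a, s2] = transition probability, R[s, a] = expected reward.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.7, 0.3], [0.05, 0.95]]])
R = np.array([[1.0, 0.0],
              [0.5, 2.0]])
gamma = 0.9  # discount factor

# Value iteration: V(s) <- max_a [ R(s, a) + gamma * sum_s2 P(s, a, s2) V(s2) ]
V = np.zeros(2)
for _ in range(500):
    V = np.max(R + gamma * (P @ V), axis=1)

# Greedy policy with respect to the converged values.
policy = np.argmax(R + gamma * (P @ V), axis=1)
```

With these numbers, the greedy policy steers the chain toward the high-reward second state.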

Sufficient Markov Decision Process Mention why PCA is bad in this case. 2 min.
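The notes say to mention why PCA is bad here. A toy illustration (my own numbers, not the talk's): PCA ranks directions by variance, but the reward may depend on a low-variance coordinate that PCA throws away.

```python
import numpy as np

# Two-dimensional state: a low-variance coordinate that drives the reward
# and a high-variance nuisance coordinate. One-component PCA keeps the
# nuisance and discards the reward-relevant direction.
rng = np.random.default_rng(0)
n = 2000
signal = rng.normal(0.0, 0.1, n)     # low-variance but drives the reward
nuisance = rng.normal(0.0, 10.0, n)  # high-variance, reward-irrelevant
X = np.column_stack([signal, nuisance])
reward = 3.0 * signal

# One-component PCA via SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]  # scores on the first principal component

# The top component tracks the nuisance direction, so it carries almost
# no information about the reward.
corr = abs(np.corrcoef(pc1, reward)[0, 1])
```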

Sufficient Markov Decision Process Alternating Deep Neural Network Simulation Study Application on Mobile Health

Deep Neural Networks (DNN) 30 sec.
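The slide itself is only a header; as generic background (not the talk's architecture, which is not transcribed), a feed-forward DNN composes affine layers with elementwise nonlinearities:

```python
import numpy as np

# Generic feed-forward network sketch (background only; layer sizes are
# arbitrary and not taken from the talk).
def forward(x, weights, biases):
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(x @ W + b, 0.0)   # hidden layers with ReLU
    return x @ weights[-1] + biases[-1]  # linear output layer

rng = np.random.default_rng(4)
shapes = [(15, 32), (32, 32), (32, 1)]   # 15 inputs -> two hidden -> 1 output
Ws = [rng.normal(scale=0.1, size=s) for s in shapes]
bs = [np.zeros(s[1]) for s in shapes]
y = forward(rng.normal(size=(8, 15)), Ws, bs)  # batch of 8 inputs
```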

Naive Dimension Reduction with DNN Recall our criterion for a sufficient dimension reduction: 1.5 min.
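The slide does not show the naive model; a common reading, offered here only as an assumption, is an autoencoder whose bottleneck serves as the reduced state. A minimal linear sketch:

```python
import numpy as np

# Minimal linear autoencoder (my assumption of the "naive" DNN reduction;
# the talk's actual model is not transcribed). Rank-3 data embedded in 10-d,
# compressed through a 3-d bottleneck by gradient descent on the
# reconstruction error.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 10))

W1 = rng.normal(scale=0.1, size=(10, 3))  # encoder: 10-d state -> 3-d code
W2 = rng.normal(scale=0.1, size=(3, 10))  # decoder: 3-d code -> 10-d
lr = 0.01
loss0 = np.mean((X @ W1 @ W2 - X) ** 2)   # reconstruction error at init
for _ in range(2000):
    Z = X @ W1                            # bottleneck representation
    G = 2.0 * (Z @ W2 - X) / X.size       # gradient of the mean squared error
    W1 -= lr * (X.T @ (G @ W2.T))
    W2 -= lr * (Z.T @ G)
loss = np.mean((X @ W1 @ W2 - X) ** 2)    # reconstruction error after training
```

The bottleneck `Z` is the candidate reduced state; the slide's point, presumably, is that good reconstruction alone does not guarantee the sufficiency criterion.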

Alternating Deep Neural Networks Recall our criterion for a sufficient dimension reduction: Remember to mention variable selection. 2 min.
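The reference list cites Szekely & Rizzo's Brownian distance covariance and Wang et al.'s conditional distance correlation, so the sufficiency criterion is presumably built on such dependence measures. A sample distance covariance in a few lines (my own sketch; the talk's criterion uses a conditional version):

```python
import numpy as np

# Sample (squared) distance covariance, after Szekely & Rizzo: zero in the
# population exactly when the two variables are independent, which is what
# makes it usable as a no-information-loss criterion.
def dcov2(x, y):
    a = np.abs(x[:, None] - x[None, :])  # pairwise distances in x
    b = np.abs(y[:, None] - y[None, :])  # pairwise distances in y
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    return (A * B).mean()                # V_n^2(x, y)

rng = np.random.default_rng(2)
x = rng.normal(size=1000)
d_ind = dcov2(x, rng.normal(size=1000))  # independent pair: near zero
d_dep = dcov2(x, x ** 2)                 # dependent but uncorrelated: large
```

Unlike Pearson correlation, `dcov2` detects the nonlinear dependence between `x` and `x ** 2`.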

Sufficient Markov Decision Process Alternating Deep Neural Network Simulation Study Application on Mobile Health

Setup
Additionally, add 200 noise variables, including constants, white noise, and dependent noise.
1 min.
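The slide mentions appending 200 noise variables of three kinds; a sketch of such an augmentation (the counts per kind, state dimension, and generators are my assumptions):

```python
import numpy as np

# Augment a true state with 200 nuisance columns: constants, white noise,
# and noise that depends on the state (the 50/100/50 split is my assumption).
rng = np.random.default_rng(3)
n, p_true = 1000, 5
S = rng.normal(size=(n, p_true))          # true state variables

const = np.ones((n, 50))                  # constant columns
white = rng.normal(size=(n, 100))         # independent white noise
dependent = (S[:, :2] @ rng.normal(size=(2, 50))
             + 0.1 * rng.normal(size=(n, 50)))  # correlated with the state

X = np.column_stack([S, const, white, dependent])  # observed 205-d state
```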

Results 1 min.

Sufficient Markov Decision Process Alternating Deep Neural Network Simulation Study Application on Mobile Health

Data
BASICS-Mobile is a mobile intervention targeting heavy drinking and smoking among college students.
Enrolled 30 students and lasted for 14 days.
On each afternoon and evening, the student is asked to complete a list of self-report questions; then either an informational module or a treatment module is provided.
15 variables are collected, including baseline information, answers to the self-report questions, a weekend indicator, and so on.
We focus on smoking: find an optimal intervention strategy to minimize the number of cigarettes smoked.
1 min.

Dimension Reduction with ADNN 30 sec.

Dimension Reduction with ADNN 30 sec.

Questions? Thank you! lwang31@ncsu.edu
Main References:
Learning deep architectures for AI, Y. Bengio, 2009.
Reducing the dimensionality of data with neural networks, G. E. Hinton & R. Salakhutdinov, 2006.
Neural networks and deep learning, M. A. Nielsen, 2015.
Reinforcement learning: an introduction, R. Sutton & A. Barto.
Brownian distance covariance, G. Szekely & M. Rizzo.
Stacked denoising autoencoders: learning useful representations in a deep network with a local denoising criterion, P. Vincent et al.
Conditional distance correlation, X. Wang et al.
Development and evaluation of a mobile intervention for heavy drinking and smoking among college students, K. Witkiewitz et al.