Modeling Medical Records of Diabetes using Markov Decision Processes


Modeling Medical Records of Diabetes using Markov Decision Processes
H. Asoh¹, M. Shiro¹, S. Akaho¹, T. Kamishima¹, K. Hasida¹, E. Aramaki², T. Kohro³
¹National Institute of Advanced Industrial Science and Technology; ²Design School, Kyoto University; ³The University of Tokyo Hospital
Proceedings of the ICML 2013 Workshop on Role of Machine Learning in Transforming Healthcare

Introduction
Statement of the problem: Analyzing long-term medical records of patients suffering from chronic diseases is beginning to be recognized as an important issue in medical data analysis.
Objective of the study: To obtain the optimal policy for the treatment of diabetes and to compare it with the average policy of the doctors.
Method: They modeled the diabetes data using a Markov decision process (MDP).

Data
Raw data: Medical records of heart disease patients treated at the University of Tokyo Hospital; over 10,000 patients since 1987. The data include attributes of patients, examination results, prescriptions of medicines, and surgical operations.
Data used: They preprocessed the data, keeping only patients who periodically attended the hospital and underwent examinations and treatment. They used the data after January 1, 2000, and focused on the data related to diabetes, namely the value of hemoglobin A1c (HbA1c).

MDP Model
An MDP is a tuple $(S, A, T, R)$, where $S$ is the set of states, $A$ is the set of actions, $T: S \times A \times S \to [0,1]$ gives the state transition probabilities, and $R: S \times A \times S \times \mathcal{R} \to [0,1]$ gives the probability of receiving an immediate reward.
A policy is a mapping $\pi: S \times A \to [0,1]$.
The expectation of the cumulative reward under a policy $\pi$, with discount factor $\gamma \in [0,1]$, is
$$V^{\pi}(s) = E\!\left[\sum_{t=0}^{\infty} \gamma^{t} r_{t} \,\middle|\, s_{0}=s, \pi\right].$$
The value of an action $a$ at state $s$ under the policy $\pi$ is defined as
$$Q^{\pi}(s,a) = E\!\left[\sum_{t=0}^{\infty} \gamma^{t} r_{t} \,\middle|\, s_{0}=s, a_{0}=a, \pi\right].$$

MDP Model
The optimal policy $\pi^{*}$ satisfies $V^{\pi^{*}}(s) \ge V^{\pi}(s)$ for every state $s \in S$ and every policy $\pi$.
The state values of the optimal policy satisfy the Bellman optimality equation
$$V^{\pi^{*}}(s) = \max_{a} \sum_{s'} T(s,a,s')\left[r(s,a,s') + \gamma V^{\pi^{*}}(s')\right],$$
where $r(s,a,s')$ denotes the expected immediate reward.
Given an MDP and a policy, they evaluate the state values $V^{\pi}(s)$ and the action values $Q^{\pi}(s,a)$.
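For concreteness, $V^{\pi}$ can be computed by iterative policy evaluation. The sketch below is not the authors' R toolbox (the transcript does not show it); the inputs T, R_exp, and policy are hypothetical stand-ins for the estimated transition array, expected rewards, and policy matrix.

```r
# Iterative policy evaluation: a minimal sketch, assuming
#   T      : nS x nA x nS array, T[s, a, s2] = transition probability
#   R_exp  : nS x nA matrix of expected immediate rewards
#   policy : nS x nA matrix, policy[s, a] = probability of action a in state s
policy_evaluation <- function(T, R_exp, policy, gamma = 0.95, tol = 1e-8) {
  nS <- dim(T)[1]
  V <- rep(0, nS)
  repeat {
    V_new <- numeric(nS)
    for (s in 1:nS) {
      # V(s) = sum_a policy(s,a) * [ R(s,a) + gamma * sum_s2 T(s,a,s2) V(s2) ]
      V_new[s] <- sum(policy[s, ] * (R_exp[s, ] + gamma * (T[s, , ] %*% V)))
    }
    if (max(abs(V_new - V)) < tol) break
    V <- V_new
  }
  V
}
```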

State & Action
State: the value of hemoglobin A1c (HbA1c), discretized into three levels (Normal, Medium, Severe).
Action: pharmaceutical treatment. They grouped the drugs according to their functions and identified the patterns of drug-group combinations prescribed at a time; 38 such combination patterns appeared in the data.
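In code, the state discretization is a simple binning of the HbA1c value. A minimal sketch; the cut points below are hypothetical placeholders, since the transcript does not give the thresholds used in the paper.

```r
# Discretize HbA1c (%) into three states; the thresholds are hypothetical.
discretize_hba1c <- function(hba1c) {
  cut(hba1c,
      breaks = c(-Inf, 6.5, 8.0, Inf),           # hypothetical cut points
      labels = c("Normal", "Medium", "Severe"))
}

discretize_hba1c(c(5.8, 7.2, 9.5))  # -> Normal Medium Severe
```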

Experiments with the data
To model and analyze the medical records using an MDP, they developed an MDP toolbox in R that can: handle multiple episodes easily, estimate the parameters of an MDP, evaluate state and action values, and compute the optimal policy.
From the records they estimated the MDP state transition probabilities $T$ and the policy $\pi$ of the doctors. For the reward, they set state-dependent values according to the opinion of a doctor.
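Estimating $T$ and $\pi$ from the records amounts to counting and normalizing transitions. A minimal sketch under assumptions (their toolbox is not shown): `episodes` is a hypothetical data frame with integer-coded columns s, a, and s_next.

```r
# Maximum-likelihood estimates of T and the doctors' policy by counting;
# a sketch, not the authors' toolbox.
estimate_mdp <- function(episodes, nS, nA) {
  T_counts  <- array(0, dim = c(nS, nA, nS))
  pi_counts <- matrix(0, nS, nA)
  for (i in seq_len(nrow(episodes))) {
    s  <- episodes$s[i]; a <- episodes$a[i]; s2 <- episodes$s_next[i]
    T_counts[s, a, s2] <- T_counts[s, a, s2] + 1
    pi_counts[s, a]    <- pi_counts[s, a] + 1
  }
  # Normalize counts into probabilities; unseen (s, a) rows stay zero.
  T_hat <- T_counts
  for (s in 1:nS) for (a in 1:nA) {
    n <- sum(T_counts[s, a, ])
    if (n > 0) T_hat[s, a, ] <- T_counts[s, a, ] / n
  }
  pi_hat <- pi_counts / pmax(rowSums(pi_counts), 1)
  list(T = T_hat, pi = pi_hat)
}
```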

State & action values under the estimated MDP
Based on the estimated transition probabilities $T$ and the doctors' policy $\pi$, they evaluated the patients' state values $V^{\pi}(s)$ and the doctors' action values $Q^{\pi}(s,a)$ (see the appendix for all combinations).

State & action values under the "optimal policy"
They obtained the optimal policy $\pi^{*}$ by value iteration on the MDP. The optimal action for each state coincides with the top action in Table 4. The state values under the optimal policy are larger than those under the doctors' policy; they noted that this does not mean the optimal policy would perform better for actual patients.
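Value iteration repeatedly applies the Bellman optimality update until the state values converge; the greedy actions of the converged values form $\pi^{*}$. A minimal sketch in the same hypothetical setup as above (array T, matrix R_exp):

```r
# Value iteration: a sketch, not the authors' toolbox.
value_iteration <- function(T, R_exp, gamma = 0.95, tol = 1e-8) {
  nS <- dim(T)[1]; nA <- dim(T)[2]
  V <- rep(0, nS)
  repeat {
    Q <- matrix(0, nS, nA)
    for (s in 1:nS) {
      # Q(s,a) = R(s,a) + gamma * sum_s2 T(s,a,s2) * V(s2)
      Q[s, ] <- R_exp[s, ] + gamma * (T[s, , ] %*% V)
    }
    V_new <- apply(Q, 1, max)        # Bellman optimality backup
    if (max(abs(V_new - V)) < tol) break
    V <- V_new
  }
  list(V = V, policy = apply(Q, 1, which.max))  # greedy optimal actions
}
```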

Evaluation of the goodness of the modeling
Evaluation of one-step prediction of the patients' state: They divided the data into training data (90%) and test data (10%). Using the training data, they estimated the probabilities of the MDP. For each state transition in the test episodes, they evaluated the log-likelihood of the transition and averaged the values,
$$L = \frac{1}{\sum_{e} N_{e}} \sum_{e} \sum_{t=1}^{N_{e}} \log T(s_{t}, a_{t}, s_{t+1}),$$
where $N_{e}$ is the number of action steps in episode $e$. The state prediction achieves an average log-likelihood of -1.09.
Evaluation of prediction of the doctors' actions: They evaluated the average log-likelihood of the actions in the test episodes in the same way; the number of candidate actions was 38. The action prediction achieves an average log-likelihood of -3.63.
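The metric is the mean log-probability the estimated model assigns to each held-out transition. A sketch reusing the hypothetical `episodes` layout and an estimated array T_hat from above:

```r
# Average per-step log-likelihood of held-out transitions; a sketch.
# `test` is a hypothetical data frame with columns s, a, s_next;
# T_hat is the transition array estimated on the training data.
avg_loglik <- function(test, T_hat) {
  p <- mapply(function(s, a, s2) T_hat[s, a, s2],
              test$s, test$a, test$s_next)
  mean(log(p))  # -Inf if the model gives a held-out transition zero mass
}
```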

Conclusion
In this paper, they used a Markov decision process to model the long-term process of disease treatment. They estimated the parameters of the model from data extracted from the patients' medical records. Using the model, they predicted the progression of the patients' state and evaluated the value of treatments.

APPENDIX

Doctors' action values
Figure: action values for the "normal" (left) and "medium" (right) states.