
1 Evaluating Health Information Technology: Putting Theory Into Practice
Eric Poon, MD MPH, Clinical Informatics Research and Development, Partners Information Systems
David F. Lobach, MD, PhD, MS, Division of Clinical Informatics, Department of Community and Family Medicine, Duke University Medical Center, Durham, North Carolina
AHRQ's National Resource Center for Health Information Technology Annual Meeting, June 2005

2 Outline
- Overview of Evaluating HIT
  - Why evaluate?
- General Approach to Evaluation
- Choosing Evaluation Measures
- Study Design Types
- Analytical issues in HIT evaluations
- Evaluation in the 'real world'
  - Duke University Medical Center

3 Why Measure Impact of HIT?
- Impact of HIT is often hard to predict
  - Many "slam dunks" go awry
- You can't manage or improve what isn't measured
- Understand how to clear barriers to effective implementation
- Understand what works and what doesn't
  - Invent the wheel only once
- Justify enormous investments
  - Return on investment
  - Allow other institutions to make tradeoffs intelligently
  - Use results to win over late adopters

4 General Approach to Evaluating HIT
- Understand your intervention
- Formulate questions to answer
- Select and define measures
- Pick the study design
- Data analysis

5 Getting Started: Get to Know Your Intervention
- What problem(s) is it trying to solve?
- Think about intermediate processes
- Identify potential barriers to successful implementation:
  - Managerial barriers
  - End-user behavioral barriers
- Understand how your peers around the country are addressing (or not addressing) the same issues

6 Formulating Questions
- Likely questions:
  - Does the HIT work?
  - What would have made it work better?
  - What would the next set of designers/implementers like to know?
- Has this question been fully answered before?
  - Don't reinvent the wheel! (not a big concern)
- What impact would the answer have?
  - Peers
  - Policy makers

7 Array of Measures
- Quality and Safety
  - Clinical outcomes
  - Clinical processes
- Knowledge
  - Patient
  - Provider
- Satisfaction & Attitudes
  - Patient
  - Provider
- Resource utilization
  - Costs and charges
  - Length of stay (LOS)
  - Employee time/workflow
- Lessons learned

8 Choosing Study Measures
- Clinical vs. process measures
  - Clinical outcomes (e.g. mortality) are desirable
  - It is justifiable to measure process outcomes (e.g. door-to-antibiotic time) if the relationship between outcome and process has already been demonstrated
- Will outcomes be impacted by the intervention?
- Will the impact on outcomes be detectable during the study period? (see the sample-size sketch below)
  - Rare events, e.g. adverse outcomes?
  - Colon cancer screening?
- What resources do you have?
  - Don't bite off more than you can chew.
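To make the detectability question concrete, here is a minimal sketch (not from the slides) of a two-proportion sample-size calculation using the standard normal-approximation formula; the baseline and target event rates are hypothetical.

```python
# A minimal sample-size sketch for detecting a change in an event rate,
# using the normal-approximation formula for comparing two proportions.
# The rates below are illustrative assumptions, not figures from the slides.
from scipy.stats import norm

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Patients per arm needed to detect a change from rate p1 to rate p2."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return numerator / (p1 - p2) ** 2

# Halving a rare 2% adverse-event rate takes far more patients than moving a
# common process measure from 60% to 70% adherence.
print(round(n_per_arm(0.02, 0.01)))   # rare clinical outcome
print(round(n_per_arm(0.60, 0.70)))   # common process measure
```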

9 Selecting Study Types
- Commonly used study types:
  - Optimal design: randomized controlled trials
  - Factorial design
  - Before-and-after time-series trials
- Main study design issues:
  - Secular trend: can a simultaneous control group be established?
  - Confounding: can you randomly assign individuals to study groups?
- Study design is often influenced by the implementation plan
  - Need to respect operational needs, but there is often room for creative designs

10 Randomization Nuts and Bolts
- A control arm (usual care) is justifiable as long as benefit has not already been demonstrated
- Choose a truly random variable
  - Not day of the week
- Consideration: stratified randomization (see the sketch below)
  - Ensures that the intervention and control groups are similar on important characteristics (e.g. baseline computer literacy)
- Use the strongest possible intervention
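As one way of putting the stratification idea into practice, here is a minimal sketch of stratified random assignment; the clinic identifiers and the "computer literacy" stratum labels are hypothetical.

```python
# A minimal sketch of stratified randomization: units are split by stratum,
# then randomized to arms within each stratum so the arms stay balanced.
import random
from collections import defaultdict

def stratified_randomization(units, stratum_of, seed=42):
    """Assign units to 'intervention' or 'control', balancing within strata.

    units      -- list of unit identifiers (e.g. clinic or physician IDs)
    stratum_of -- dict mapping each unit to its stratum label
                  (e.g. baseline computer literacy: 'low' or 'high')
    """
    rng = random.Random(seed)
    strata = defaultdict(list)
    for u in units:
        strata[stratum_of[u]].append(u)

    assignment = {}
    for members in strata.values():
        rng.shuffle(members)
        half = len(members) // 2
        # First half of each shuffled stratum goes to intervention, rest to control
        for u in members[:half]:
            assignment[u] = "intervention"
        for u in members[half:]:
            assignment[u] = "control"
    return assignment

# Example: six hypothetical clinics stratified by baseline computer literacy
clinics = ["A", "B", "C", "D", "E", "F"]
literacy = {"A": "high", "B": "high", "C": "high", "D": "low", "E": "low", "F": "low"}
print(stratified_randomization(clinics, literacy))
```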

11 Randomization Unit: How to Decide?
- Small units (patients) vs. large units (practices, wards)
- Contamination across randomization units
  - If the risk of contamination is significant, consider larger units
  - Contamination dilutes the effect and can lead to underestimating the impact
  - However, if you still see a difference, an impact is present
- Randomization by patient is generally undesirable
  - Contamination
  - Ethical concerns

12 Randomization Schemes: Simple RCT
- Burn-in period
  - Gives the target population time to get used to the new intervention
  - Data from the burn-in period are not used in the final analysis
[Diagram: study timeline. During the baseline period all clinics contribute baseline data with no intervention; the intervention is then deployed to the intervention arm with a 3-month burn-in period; data collection for the RCT occurs during the intervention period; the control arm receives the intervention in the post-intervention period.]

13 Randomization Schemes: Factorial Design
- May be used to concurrently evaluate more than one intervention (see the sketch below):
  - Assess interventions independently and in combination
  - Loss of statistical power
  - Usually not practical for more than 2 interventions
- Four arms for two interventions: control (no interventions), A only, B only, and A+B
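A minimal sketch of what a 2x2 factorial assignment looks like in data terms; the clinic IDs and intervention labels are hypothetical. The two indicator columns later allow a regression such as outcome ~ A * B to estimate each intervention's effect and their interaction.

```python
# A minimal sketch of assigning units to the four arms of a 2x2 factorial
# design: control, A only, B only, and A+B. Unit IDs are hypothetical.
import random

ARMS = [(0, 0), (1, 0), (0, 1), (1, 1)]   # (gets_A, gets_B)

def factorial_assignment(units, seed=7):
    rng = random.Random(seed)
    shuffled = units[:]
    rng.shuffle(shuffled)
    # Cycle through the four arms so group sizes stay balanced
    return {u: ARMS[i % 4] for i, u in enumerate(shuffled)}

clinics = [f"clinic_{i}" for i in range(8)]
for clinic, (gets_a, gets_b) in factorial_assignment(clinics).items():
    print(clinic, "A" if gets_a else "-", "B" if gets_b else "-")
```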

14 Randomization Schemes: Staggered Deployment
- Advantages of staggering
  - Easier for user education and training
  - Can fix IT problems up front
- Need to account for secular trend and baseline differences
  - Include a time variable in the regression analysis
  - Control for practice characteristics
[Diagram: practices cross over from the control group to the intervention group in successive waves over time.]

15 Inherent Limitations of RCTs in Informatics
- Blinding is seldom possible
  - Effect on documentation vs. clinical action
- People always question generalizability
  - Success is highly implementation dependent
  - Efficacy-effectiveness gap: the 'invented here' effect

16 Mitigating the Limitations of Before-and-After Study Designs
- Before-and-after trials are common in informatics because concurrent randomization is hard
- Don't lose the opportunity to collect baseline data!
- Keep the time gap between the before and after periods relatively short
- Look for a secular trend in the statistical analysis and adjust for it if present (see the sketch below)
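One common way to adjust for a secular trend in a before-and-after design is segmented regression (interrupted time series). The sketch below is a minimal illustration with simulated monthly data; the variable names, effect sizes, and go-live timing are all hypothetical.

```python
# A minimal interrupted-time-series (segmented regression) sketch for a
# before-and-after evaluation, using simulated monthly aggregates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
months = np.arange(24)                      # 12 months before, 12 after
post = (months >= 12).astype(int)           # 1 after the HIT go-live
# Simulated outcome: a mild secular trend plus a true level change at go-live
rate = 60 + 0.4 * months + 8 * post + rng.normal(0, 2, size=24)

df = pd.DataFrame({
    "month": months,
    "post": post,
    "months_after": np.where(post == 1, months - 12, 0),
    "rate": rate,
})

# 'month' captures the underlying secular trend, 'post' the level change at
# go-live, and 'months_after' any change in slope after go-live.
model = smf.ols("rate ~ month + post + months_after", data=df).fit()
print(model.params)
```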

17 Common Pitfalls with Data Collection
- Measures you define and collect on your own
  - Pilot the data collection and refine definitions early
  - Ask yourself early whether the data you collect measure what you intended to measure
- Measures others defined but you collect on your own
  - Do you need to adapt other people's instruments?
- Measures others define and collect for you
  - Understand the nuances and limitations, particularly with administrative data

18 Electronic Data Abstraction: There's No Free Lunch!
- Convenient and time-saving, but...
  - Some (selected) chart review is still needed to get information not available electronically
  - Get ready for surprises
  - Documentation effect of EMRs

19 Data Collection Issue: Baseline Differences
- Randomization schemes often lead to imbalance between the intervention and control arms
  - Need to collect baseline data and adjust for baseline differences
  - In the regression analysis, the interaction term (Time * Allocation Arm) gives the effect of the intervention (see the sketch below)
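A minimal sketch of the Time * Allocation Arm interaction in practice, in the spirit of a difference-in-differences model. The data are simulated, and the variable names and effect sizes are hypothetical; the coefficient on the interaction term estimates the intervention effect net of baseline differences.

```python
# A minimal sketch of adjusting for baseline imbalance with an interaction
# term: outcome ~ period + arm + period:arm, where period:arm is the effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
arm = rng.integers(0, 2, n)                 # 1 = intervention, 0 = control
period = rng.integers(0, 2, n)              # 0 = baseline, 1 = post-deployment
# Control arm starts higher at baseline (imbalance), but only the intervention
# arm improves after deployment (true effect = 10).
score = 50 + 5 * (1 - arm) + 10 * arm * period + rng.normal(0, 4, n)

df = pd.DataFrame({"score": score, "arm": arm, "period": period})
model = smf.ols("score ~ period + arm + period:arm", data=df).fit()
print(model.params)   # 'period:arm' approximates the intervention effect
```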

20 Data Collection Issue: Completeness of Follow-up
- The higher, the better:
  - Over 90%
  - 80-90%
  - Less than 80%
- Intention-to-treat analysis
  - In an RCT, outcomes should be analyzed according to the original randomization assignment

21 A Common Analytical Issue: The Clustering Effect
- Occurs when your observations are not independent
  - Example: each physician treats multiple patients
- May need to increase the sample size to account for the loss of power (see the sketch below)
[Diagram: physicians in the intervention and control groups each treat multiple patients, and outcomes are assessed at the patient level.]
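A minimal sketch of the usual sample-size inflation for clustering, using the design effect DEFF = 1 + (m - 1) * ICC, where m is the average cluster size (patients per physician) and ICC is the intracluster correlation. The numbers below are illustrative assumptions, not values from the slides.

```python
# Inflate a sample size computed under independence to account for clustering.
import math

def clustered_sample_size(n_independent, avg_cluster_size, icc):
    """Return the sample size needed once clustering is taken into account."""
    design_effect = 1 + (avg_cluster_size - 1) * icc
    return math.ceil(n_independent * design_effect)

# Example: a plan needing 300 patients if observations were independent,
# with ~20 patients per physician and an assumed ICC of 0.05.
print(clustered_sample_size(300, avg_cluster_size=20, icc=0.05))  # -> 585
```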

22 Looking at Usage Data
- A great way to tell how well the intervention is going
  - Target your troubleshooting efforts
- In terms of evaluating HIT:
  - Correlate usage with the implementation/training strategy
  - Correlate usage with stakeholder characteristics
  - Correlate usage with improved outcomes

23 Studies on Workflow and Usability
- How to make observations?
  - Direct observations
  - Stimulated observations
  - Random paging method (subjects must be motivated and cooperative)
  - Usability lab
- What to look for?
  - Time to accomplish specific tasks
    - Need to pre-classify activities
    - Handheld/tablet PC tools may be very helpful
  - Workflow analysis
  - Asking users to 'think aloud'
  - Unintended consequences of HIT

24 Cost-Benefit Analysis
- Do the benefits of the technology justify the costs?
  - Monetary benefits minus monetary costs
  - Important in the policy realm
- Need to specify the perspective
  - Organizational
  - Societal
- Cost analysis is more straightforward
  - Prospective data collection preferred
  - Discounting: a dollar spent today is worth more than a dollar spent 10 years from now (see the sketch below)
- Benefits analysis is more controversial
  - Cost of illness averted: medical costs, productivity losses for the patient
  - What is the cost of suffering due to preventable adverse events?
  - What is the cost of a life?
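A minimal sketch of discounting future costs and benefits to present value, as mentioned on the slide. The project, cash flows, and the 3% discount rate are illustrative assumptions, not figures from the presentation.

```python
# Discount yearly costs and benefits to present value and compute net benefit.
def present_value(amounts_by_year, rate=0.03):
    """Discount a list of yearly amounts (year 0 first) to present value."""
    return sum(amount / (1 + rate) ** year
               for year, amount in enumerate(amounts_by_year))

# Hypothetical HIT project: large upfront cost, then yearly maintenance costs
# and yearly benefits from averted adverse events.
costs = [500_000, 100_000, 100_000, 100_000, 100_000]
benefits = [0, 250_000, 250_000, 250_000, 250_000]

net_present_value = present_value(benefits) - present_value(costs)
print(round(net_present_value))
```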

25 Using Surveys – Stay Tuned!
- Surveys of user beliefs, attitudes, and behaviors
- Response rate and responder bias: aim for a response rate above 50-60%
- Keep the survey concise
- Pilot the survey for readability and clarity
- Formal validation is needed if you plan to develop a scale or summary score

26 Qualitative Methodologies – Don't Touch That Dial!
- Major techniques
  - Direct observations
  - Semi-structured interviews
  - Focus groups
- Adds richness to the evaluation
  - Explains successes and failures; generates lessons learned
  - Captures the unexpected
  - Great for forming hypotheses
  - People love to hear stories
- Data analysis
  - The goal is to make sense of your observations
  - Iterative & interactive

27 Concluding Remarks
- Don't bite off more than you can chew
  - Pick a few study outcomes and study them well
- It's a practical world
  - Balancing operational and research needs is always a challenge
- Life (data collection) is like a box of chocolates...
  - You don't know what you're going to get until you look, so look early!

28 Thank You
Eric Poon, MD MPH

Acknowledgements
- Davis Bu, MD MA (CITL, Partners Healthcare)
- David Bates, MD MSc (Chief, Division of General Medicine, Brigham and Women's Hospital)