A multi-state comparison of public health preparedness assessment using a common standardized tool. APHA Meeting, 2007.

Presentation transcript:

A multi-state comparison of public health preparedness assessment using a common standardized tool
APHA Meeting, 2007
Gianfranco Pezzino, MD, MPH, Kansas Health Institute
Mark Edgar, MPH, Illinois Public Health Institute
Mary Z. Thompson, MBA, Michigan Public Health Institute
Correspondence: gpezzino@khi.org

Outline
- Discuss the difficulty of identifying preparedness standards and indicators
- Describe one project that compared different states' preparedness assessments
- How to address P.H. preparedness in an environment full of competing needs and diversity

The need to measure public health preparedness
- Accountability: more than $7 billion has been provided to states by the federal government over 5 years; there is a need to show results
- Program planning and management: officials need to know where the gaps are to make sound plans and decide budget allocations
Note: Funding runs at $1.3-1.6 billion/year for bioterrorism preparedness vs. $5.9 billion/year for all public health programs. Preparedness is defined as governmental preparations for bioterrorism, new epidemics, chemical and radiologic emergencies, mass casualties, natural disasters, and extreme weather. (Lipsman J. Disaster Preparedness: Ending the Exceptionalism. http://www.medscape.com/viewarticle/544741. Posted 10/03/2006.)

The challenges of measuring preparedness
- NO STANDARDS! There is no consensus on what constitutes “ideal preparedness”: “How prepared should we be?”
- NO MEASURABLE INDICATORS! Project objectives are set up that are not easily measurable, and there are no universally accepted standardized assessment tools
Note: You need to know where you are going AND where you are now to see if you are heading in the right direction. How can we show that we achieved the desired objectives if we do not know where we are going and cannot measure the progress?

The PHPPO Capacity Inventory assessment instrument
- Developed by the CDC in 2002
- “Rapid” self-assessment of 6 preparedness focus areas
- Over 70 questions, 700 specific items
- Widely disseminated: used by 22 states and more than 800 local health departments
Note: The instrument aimed to provide a quick assessment of preparedness through over 70 questions grouped into focus areas and critical capacities that mirrored the cooperative agreement with the states. A mid-2003 evaluation was published in 2004: Costich JF, Scutchfield FD. Public Health Preparedness and Response Capacity Inventory Validity Study. Journal of Public Health Management & Practice. 2004;10(3):225-233.
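The inventory's nested structure (focus areas containing questions, questions containing items) can be pictured with a minimal data model. The sketch below is purely illustrative; all field names and sample text are assumptions, not the instrument's actual wording.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Item:
    text: str
    present: Optional[bool] = None  # self-reported yes/no, None if unanswered

@dataclass
class Question:
    prompt: str
    items: list[Item] = field(default_factory=list)

@dataclass
class FocusArea:
    name: str  # e.g., "A" through "G"
    questions: list[Question] = field(default_factory=list)

# Hypothetical fragment; the real instrument has 6 focus areas,
# over 70 questions, and about 700 specific items.
inventory = [
    FocusArea("A", [
        Question("Does the jurisdiction have a written response plan?",
                 [Item("Plan reviewed in the last 12 months")]),
    ]),
]
```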

Study Questions
1) How comparable are assessments done by states independently, with no prior shared methodology?
2) How helpful is a scoring system?
3) How much does the choice of scoring system affect the comparison?

Methods
- Three states (IL, KS, MI) performed local assessments using the Capacity Inventory before this project started (2003-2004)
- Local assessments were analyzed using two scoring algorithms
- Preparedness indexes from both algorithms were compared

Preparedness indexes by focus area and state using the Illinois and the Kansas scoring systems
[Table: preparedness index values for focus areas A, B, C, F, and G in states 1, 2, and 3, under both the IL and the KS scoring systems]
Note: Shown quickly, just to give an idea of what type of results we got.

Preparedness Indexes by Focus Area – State 1
Note two things: 1) the differences in scores by focus area; 2) the relative consistency of results between the two scoring systems.

Average local preparedness indexes by population density group, three states combined
Note: Obtained by pooling data; no single state alone could show this trend. BOTH algorithms showed a clear linear trend of increasing preparedness with increasing population density.
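A minimal sketch of what such a pooled analysis could look like. The values are mock numbers and the column names are assumptions; the project's actual data layout is not shown in the slides.

```python
import pandas as pd

# Mock records standing in for the three states' local assessments;
# column names and values are illustrative assumptions.
pooled = pd.DataFrame({
    "state":         ["IL", "IL", "KS", "KS", "MI", "MI"],
    "density_group": ["rural", "urban", "rural", "urban", "rural", "urban"],
    "index_il":      [38, 61, 35, 58, 40, 63],  # IL-scoring index
    "index_ks":      [33, 57, 30, 55, 36, 60],  # KS-scoring index
})

# Average preparedness index by population-density group, all states combined.
trend = pooled.groupby("density_group")[["index_il", "index_ks"]].mean()
print(trend)  # both indexes rise with density in this mock data
```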

The Answers
1) How comparable are assessments done by states independently, with no prior shared methodology?
A. Despite differences in methods, the data had an acceptable level of consistency and reliability.

The Answers
2) How helpful is a scoring system?
A. Using scoring systems:
- Allowed comparisons across jurisdictions and analysis of pooled data
- Could help identify reasonable benchmarks and performance indicators
- Could help identify trends not recognizable in any one jurisdiction

The Answers
3) How much does the choice of scoring system affect the comparison?
A. Despite their different approaches, the two scoring systems produced very similar trends. Using the best tools available NOW could be more helpful than developing perfect tools for use LATER.
Note: The trends revealed by either scoring system were consistent, despite variations in absolute score values. Within each state, both systems ranked the same focus areas highest, and both ranked Focus Area C the lowest in all three states. After pooling the information from all three states, a clear trend could be observed of increasing preparedness with increasing population density. This analysis would have been difficult to do for individual states, due to small sample sizes in some categories.

Why is it not so easy?

Barriers
- Lack of benchmarks and evidence
  - BT and P.H. preparedness are new concepts
  - The best outcome in P.H. is when nothing happens…
- Fragmentation of the system
  - Federal, state, local government: NOT a hierarchy
  - 2005: 500 discrete activities to account for in KS
Note: Compare with other sectors: in public works, you build a road or a bridge. It is hard to see the forest when you have so many trees.

How would it help?

Capacity assessment in LHDs in KS – Key Findings
- Capacity improved by 27% on average
- Substantial room for improvement remained: average capacity score after 1 project year = 43 on a scale of 100
- Wide variability in preparedness by counties, regions, and critical capacity areas: highest-to-lowest-score ratio = 4:1
- Preparedness levels tend to be lower in rural than in urban areas
Note: EVERYBODY can understand this!!

Why is it important?

A resource dilemma
- We must decide the risk that we are willing to take
- Without clear standards and indicators we do not know:
  - Our final destination (we could be traveling forever!)
  - If we are on the right path
Note: With pandemic influenza, we cannot realistically prepare for a catastrophe the magnitude of which cannot be known. The potential scope of an influenza pandemic has been estimated to range from a size somewhat greater than current seasonal influenza outbreaks to one affecting up to 40% of the population. A problem of such uncertain parameters is a sink for never-ending resource utilization. The best response to such uncertainty is not to continue to spend disproportionately large sums of money with no clear end in sight, particularly when other extant public health problems are so pressing. Some observers even argue that “bioterrorism preparedness programs have been a disaster for public health” because of their unnecessary, harmful consequences.1
1) Cohen HW, Gould RM, Sidel VW. The pitfalls of bioterrorism preparedness: the anthrax and smallpox experiences. Am J Public Health. 2004;94:1667-1671.

What do people die from?
- 50% = behavioral choices (e.g., smoking, lifestyle)1

Root cause                                  Approximate # (%) of deaths
Smoking, obesity, or physical inactivity    800,000 (30%)
Alcohol consumption                          85,000 (4%)
Infections (excl. HIV)                       75,000 (3%)
P.H. emergencies (2001-2005)                  5,000 (<1%)

- P.H. preparedness yearly budget = $1.3 billion (6% to 8% of the total federal-state P.H. budget)2
1) McGinnis JM, Foege WH. Actual causes of death in the United States. JAMA. 1993;270:2207-2212; and Mokdad AH, Marks JS, Stroup DF, Gerberding JL. Actual causes of death in the United States, 2000. JAMA. 2004;291:1238-1245.
2) Lipsman J. Disaster Preparedness: Ending the Exceptionalism. www.medscape.com. Posted 10/03/2006.

Do you agree with this statement?
“Everyone, no matter where they live, should reasonably expect the local health department to meet certain standards”
(National Association of County and City Health Officials, NACCHO)
Note: The full statement, from the NACCHO operational definition: “Regardless of its governance or structure, regardless of where specific authorities are vested or where particular services are delivered, everyone, no matter where they live, should reasonably expect the local health department to meet certain standards.”

Same standards for this…
Ness County: Pop. 3,454
Two hospitals
Grisell Memorial: 12 beds, 0.25 physicians

…and for this?
Sedgwick County: Pop. 452,869
7 hospitals
Wesley Medical Center: 500 beds, 700 physicians, 3,000 employees

Next Steps – Develop a policy for standardized performance assessment
- Reach consensus on national performance standards (goals) for preparedness
  - NACCHO Operational Definition of a LHD
  - Accreditation movement
  - CDC's target capability list (37 areas!)
- Select a few national performance indicators to monitor progress. Indicators should:
  - Be quantifiable (i.e., measured and counted)
  - Be linked to the goals
  - Be understandable to policy makers and the public
  - Allow monitoring of trends
  - Allow comparisons
  - Be monitorable without excessive burden, using available data and information systems when possible
- Develop standardized assessment tools
  - Maintain the link with past efforts (PHPPO Capacity Inventory)
  - Use a scoring system
Note: This slide is busy, but each step is introduced one at a time. We must know where we are going to know if we are on the right path. Measure what matters. Defensible and logical. I HAD THIS SLIDE FOUR YEARS AGO.

Acknowledgments
Kansas Health Institute: Marc Velasco, Kim Kimminau
Illinois Public Health Institute: Elissa J. Bassler
Michigan Public Health Institute: Angela Martin
National Network of Public Health Institutes: Chad Brown, Sarah Gillen
Kansas Association of Local Health Departments
Robert Wood Johnson Foundation

Extra Slides

The two algorithms
- CDC/IL scoring system
  - Produced by the CDC, used (with modifications) in IL
  - Assigns points and a weighted value to each question based on relative importance
- KS scoring system
  - Developed and used in KS
  - Uses pre-determined criteria to classify each question as “successful/not successful”
Each system produces preparedness indexes.
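As a rough illustration, the two approaches could be sketched as follows. The weights, success threshold, and 0-100 scaling are illustrative assumptions, not the actual CDC/IL or KS parameters.

```python
def il_style_index(answers, weights):
    """CDC/IL style (sketch): each question earns weighted points based on
    relative importance; the index is the share of possible points earned,
    scaled to 0-100."""
    earned = sum(w * a for w, a in zip(weights, answers))
    return 100 * earned / sum(weights)

def ks_style_index(answers, threshold=0.75):
    """KS style (sketch): each question is classified successful/not
    successful against a pre-determined criterion; the index is the
    percentage of questions classified successful."""
    successes = sum(1 for a in answers if a >= threshold)
    return 100 * successes / len(answers)

# answers: per-question completion fractions for one jurisdiction (mock)
answers = [1.0, 0.8, 0.5, 0.0, 0.9]
weights = [3, 2, 1, 2, 2]  # hypothetical relative-importance weights

print(il_style_index(answers, weights))  # 69.0
print(ks_style_index(answers))           # 60.0
```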

Ranking of focus area scores by state using the Illinois and the Kansas scoring systems
[Table: ranks 1-5 of focus areas A, B, C, F, and G for states 1, 2, and 3, under both the IL and the KS scoring systems]
Note: Within each state, the two scoring systems produced very similar ranking orders of focus area indexes. In state 2, both scoring systems identified focus area A as the strongest, followed in order by focus areas B, G, F, and C. For states 1 and 3, the only ranking discrepancy was between ranks one and two: in both states the Kansas scoring system gave a higher score to focus area A, while the Illinois system gave a higher score to focus area G in state 1 and to focus area B in state 3 (both of which were ranked number two by the Kansas system). It should be noted that in these cases the differences between the two focus area scores (reported in the previous table) were relatively small, ranging from 1 to 6 points.
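Such a ranking comparison could be reproduced along these lines; the index values below are mock numbers, not the study's actual results.

```python
# Mock state-level focus-area indexes under each scoring system.
il_index = {"A": 67, "B": 56, "C": 38, "F": 49, "G": 58}
ks_index = {"A": 60, "B": 59, "C": 25, "F": 46, "G": 55}

def ranking(index):
    """Focus areas ordered from highest to lowest index."""
    return sorted(index, key=index.get, reverse=True)

print(ranking(il_index))  # ['A', 'G', 'B', 'F', 'C']
print(ranking(ks_index))  # ['A', 'B', 'G', 'F', 'C']
# Both orderings place focus area C last and differ only by one
# adjacent swap, mirroring the kind of agreement described above.
```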

What do we want to measure?
- Capacity: resources, equipment, staffing, etc.
- Capability: ability to perform certain tasks – “know-how”
- Performance: quality and quantity of services provided
- Outcomes: did it make a difference? And what are “outcomes” for P.H. preparedness, anyway…?