Person-centred Care & Patient Activation
Richard Owen, NHS England
Dr Natalie Armstrong, University of Leicester
Why measure?
- In the NHS, what gets measured gets done. Measurement allows us to learn, do things differently and improve.
- Do we currently measure in a way that supports person-centred care?
- But measurement is only one element of implementing system-wide change.
What is Patient Activation?
The Patient Activation Measure (PAM) is a measurement scale for the knowledge, skill and confidence a patient has in managing their health and care. The PAM score is based on patients' responses to 13 questions, which include measures of individuals':
- knowledge (e.g. "I understand the nature and causes of my health condition")
- beliefs (e.g. "When all is said and done, I am the person who is responsible for managing my health condition")
- confidence in interacting with healthcare professionals (e.g. "I am confident I can tell my health care provider concerns I have even when he or she does not ask")
- self-efficacy (e.g. "I am confident that I can maintain lifestyle changes like diet and exercise even during times of stress")
[Figure: The PAM scale. Source: J. Hibbard, University of Oregon]
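To make the idea of a single activation score concrete, the sketch below shows one way 13 Likert-style item responses could be rescaled into a 0-100 score and banded into four levels. This is a minimal illustration only, not the PAM itself: the actual PAM is a licensed instrument whose scoring was developed with Rasch analysis, so the response format, rescaling and level thresholds here are all assumptions made for demonstration.

```python
# Illustrative sketch only. The real PAM is a licensed instrument whose 0-100
# score comes from a Rasch-calibrated algorithm with its own level cut-points;
# the simple rescaling and evenly spaced level bands below are placeholders,
# not the actual scoring rules.

from typing import List

NUM_ITEMS = 13                       # the PAM asks 13 questions
MIN_RESPONSE, MAX_RESPONSE = 1, 4    # assume a 4-point agreement scale per item


def pam_style_score(responses: List[int]) -> float:
    """Rescale 13 Likert-style responses to a 0-100 'activation' score."""
    if len(responses) != NUM_ITEMS:
        raise ValueError(f"expected {NUM_ITEMS} responses, got {len(responses)}")
    if any(not MIN_RESPONSE <= r <= MAX_RESPONSE for r in responses):
        raise ValueError("each response must be on the 1-4 agreement scale")
    raw = sum(responses)
    lowest = NUM_ITEMS * MIN_RESPONSE
    highest = NUM_ITEMS * MAX_RESPONSE
    return 100.0 * (raw - lowest) / (highest - lowest)


def activation_level(score: float) -> int:
    """Map a 0-100 score to a level 1-4 using illustrative, evenly spaced bands."""
    for level, upper in enumerate((25.0, 50.0, 75.0, 100.0), start=1):
        if score <= upper:
            return level
    return 4


if __name__ == "__main__":
    example = [3, 2, 3, 3, 2, 4, 3, 2, 3, 3, 2, 3, 3]   # one respondent's answers
    score = pam_style_score(example)
    print(f"score = {score:.1f}, level = {activation_level(score)}")
```

In the example run, the hypothetical respondent's answers rescale to a score of roughly 59, which the placeholder bands classify as level 3 (higher levels indicating greater activation).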
What does the evidence say?
PAM has been extensively tested, with reviewed findings from over 100 studies that quantified activation. The King's Fund has published "Supporting people to manage their health: an introduction to patient activation", which introduces the measure, the evidence and its application. The evidence shows that more highly activated patients:
- experience better health
- have better outcomes and test results across a number of conditions
- engage in healthier behaviours (including those related to smoking and obesity)
- have fewer episodes of emergency care
The PAM learning set
- NHS Horsham and Mid-Sussex CCG & NHS Crawley CCG
- NHS Islington CCG
- NHS Sheffield CCG
- NHS Somerset CCG
- NHS Tower Hamlets CCG
- UK Renal Registry
Why evaluate the PAM?
- There is a lot of evidence about its use in the USA, but little information about how it can improve care and commissioning in the UK.
- We need to find out how to optimise use of the PAM.
- We want to learn from experiences: how can it be used, what value does it have, and what are the challenges?
How will we evaluate?
- Qualitative research: University of Leicester
- Quantitative research: Health Foundation
The evaluation: aims
- Understand how the PAM is being used in practice and how its use develops over time
- Determine the impact of using the PAM in participating organisations
- Explicate the mechanisms of change and the contextual influences on the use of the PAM
- Provide formative feedback to the PAM learning set
- Produce practical evidence for the future; share knowledge and learning
The evaluation: approach
- Located within the broad tradition of theory-based evaluation
- Draws on diverse forms of evidence using multiple stages of data collection
- The approach permits flexibility, ensuring it remains responsive to changing experiences and fit for purpose
- Data collection methods include observation, interviews and documentary analysis
The evaluation: two work packages
WP1: Surfacing programme theory and understanding the logic of change as it evolves through time
- Focus on 'core teams' over the project duration
- What are they doing, how, why, and how is it going?
WP2: Understanding implementation and experiences at the frontline
- Focus on six purposively sampled projects
- Explore the understanding and use of the PAM (and the wider concept of patient activation) in practice
The evaluation: progress to date
- WP1: first round of interviews completed, plus 26 hours of observation at learning set and project-specific events
- WP2: sample selected and governance procedures underway
- 'Early Learning Report' to be published later this summer
Challenges with this type of evaluation
- The learning set members are all doing a lot of quite different things
- Plans and activities are sometimes changing quite quickly
- Timescales are different in all cases
- There are differences in local context, priorities, etc.
- The evaluation approach needs to be able to work with, and capture, all of this
Timeline for evaluation
- Early Learning Report to be published in the summer
- Interim report by November 2015
- First draft of the final report by November 2016
- Final draft of the final report by February 2017