1
PCEPS Person Centred and Experiential Psychotherapy Scale
Field trial results and next steps Robert Elliott, Beth Freire and Graham Westwell PCE 2012, Antwerp, 9 July 2012
2
Hold theory lightly “Person-centred theory gives us a sense of direction in what we do as counsellors, but no theory is ever adequate to explain the complexity of human relationships. In a counselling context, theory should be held lightly. It is always inadequate in that it reduces complexity to a series of simple statements.” Tony Merry (2003, p. 174)
3
Overview of the PCEPS The PCEPS operationalizes widely-held competences for humanistic psychotherapy and counselling, and represents an extended effort to create a dialogue between classical person-centred and experiential "tribes" within the humanistic approaches. It was developed to support randomised clinical trials of person-centred-experiential psychotherapy/counselling but could also be used as an outcome measure in training studies. It has many potential uses in professional training, ranging from initial counselling skill practice to professional accreditation and continuing professional development.
4
Why develop the PCEPS? There is an increasing need to develop the evidence base of person-centred and experiential therapies. Assessment of ‘treatment integrity’ is an essential component of psychotherapy trials:
Therapist adherence to the therapy manual
Therapy being performed competently
“Competence presupposes adherence, but adherence does not necessarily imply competence” (Waltz, Addis, Koerner, & Jacobson, 1993).
5
PCEPS subscales Two differentiated subscales:
Person-centred processes: 10 items, focused on the therapist’s ‘person-centred relationship attitudes’
Experiential processes: 5 items, focused on process facilitation of emotional exploration and differentiation
6
PCEPS design features A behaviourally anchored rating scale.
A six-point anchor scale is the common structure throughout the instrument (see the sketch after this slide):
1 is always total absence of the quality/skill
4 is always adequate presence of the quality/skill
6 is always excellent presence of the quality/skill
High degree of specificity and differentiation within the instrument.
Highly descriptive, giving examples of poor practice and best practice.
Differentiated subscales accommodate the two theoretical modalities and allow comparisons between them.
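As a minimal illustration, the shared anchor structure could be represented as a lookup table. This sketch is not part of the PCEPS itself; only levels 1, 4 and 6 are defined above, so the intermediate labels here are hypothetical interpolations.

```python
# Sketch of the PCEPS's shared 6-point anchor structure as a lookup table.
# Levels 2, 3 and 5 are hypothetical interpolations for illustration only.
ANCHORS = {
    1: "total absence of the quality/skill",
    2: "minimal presence (hypothetical intermediate anchor)",
    3: "some, but less than adequate, presence (hypothetical)",
    4: "adequate presence of the quality/skill",
    5: "more than adequate presence (hypothetical)",
    6: "excellent presence of the quality/skill",
}

def describe(rating: int) -> str:
    """Map a 1-6 PCEPS item rating to its anchor description."""
    if rating not in ANCHORS:
        raise ValueError("PCEPS item ratings run from 1 to 6")
    return ANCHORS[rating]

print(describe(4))  # -> adequate presence of the quality/skill
```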
7
PCEPS Person Centred and Experiential Psychotherapy Scale
Inter-rater reliability study
Generalizability study
Convergent validity study
8
How consistent are ratings and raters on the PCEPS?
Inter-rater Reliability and Item Structure Elizabeth Freire, Robert Elliott & Graham Westwell
9
Aim To assess the reliability of the Person-Centred and Experiential Psychotherapy Scale (PCEPS)
10
Design of the PCEPS study
11
Material 60 therapy sessions in total, sampled from the archive of the Strathclyde Counselling and Psychotherapy Research Clinic:
20 clients
10 therapists (2 clients per therapist)
3 sessions per client (first, middle, and last third of therapy)
2 segments from each session (first and second half of the session)
12
Study Design 10 therapists sampled across two protocols:
Social Anxiety protocol: 6 therapists (3 EFT therapists, 3 PCA therapists)
Practice-Based protocol: 4 therapists (4 PCA therapists)
Within each protocol: 2 clients sampled per therapist (20 in total); 3 sessions sampled per client, at the beginning (session 3 or 5), middle (session 10) and end (session 15 or 20) of therapy (60 in total); 2 segments sampled per session, the first and second half (120 in total).
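The sampling frame multiplies out to 10 x 2 x 3 x 2 = 120 rated segments. A tiny sketch of the nesting (identifiers are schematic, not the study's actual labels):

```python
from itertools import product

therapists = [f"T{i}" for i in range(1, 11)]      # 10 therapists
clients = ["client1", "client2"]                  # 2 clients per therapist
sessions = ["beginning", "middle", "ending"]      # 3 sessions per client
segments = ["first_half", "second_half"]          # 2 segments per session

frame = list(product(therapists, clients, sessions, segments))
print(len(frame))  # -> 120 rated segments in total
```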
13
Therapists 3 EFT therapists
7 PCT therapists:
2 experienced therapists
5 counsellors in training
14
Clients 10 Social Anxiety protocol
10 Practice-based protocol (with both experienced therapists and therapists in training)
Clients were sampled if they met the following criteria:
Had signed consent forms for their material to be used
Had completed sufficient sessions
Audio recordings of the relevant sessions were present
Relational data (necessary for the convergent validity study) were present
15
Length of segments Half of the segments were 10 min long;
the other half were 15 min long.
16
Raters 6 raters in 2 teams of 3 raters each:
3 qualified and experienced person-centred therapists
3 counselling trainees in their first year of training
Group A: 2 qualified therapists, 1 trainee
Group B: 1 qualified therapist, 2 trainees
17
Rater training Initial 12-hour training on the use of the PCEPS
Fortnightly 2-hour monitoring meetings for supervision and feedback on ratings
18
Procedure Each rater rated 60 audio-recorded segments
(one from each of the 60 sessions). The two groups of raters listened to different segments from the same sessions. Raters were not told which audio-recordings were of which type of therapy. Raters knew some of the therapists being rated, including two of the investigators.
19
Inter-rater reliabilities
Mean inter-rater reliabilities (Cronbach’s alpha) for individual items varied from .68 to .86
Average inter-rater reliability across the 15 items was .78
Inter-rater reliability of the 15 items when averaged together was .87
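For illustration, a minimal sketch of how a Cronbach's alpha of this kind can be computed, assuming a segments-by-raters matrix of scores (the data below are made up, not the study's):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (cases x judges) score matrix.

    For inter-rater reliability, rows are rated segments and columns are
    raters; the same formula gives inter-item reliability when the columns
    are items instead.
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of raters (or items)
    judge_vars = scores.var(axis=0, ddof=1)      # each rater's variance
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - judge_vars.sum() / total_var)

# Made-up example: 6 segments rated by 3 raters on the 1-6 PCEPS scale.
ratings = np.array([
    [4, 4, 5],
    [3, 3, 3],
    [5, 6, 5],
    [2, 3, 2],
    [4, 5, 5],
    [3, 4, 3],
])
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```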
20
Inter-item reliability
Inter-item reliability (Cronbach’s alpha) for the total scale (item scores averaged across raters) was .98, indicating a high degree of internal consistency for the instrument. Is this alpha too high? (See Robert’s generalizability study results.)
21
Factor analysis Exploratory factor analyses revealed:
A 12-item ‘facilitative relationship’ factor that cut across both Person-Centred and Experiential subscales (alpha = .98). Items: (PC1) Client Frame of Reference/Track; (PC2) Core Meaning; (PC3) Client Flow; (PC4) Warmth; (PC7) Accepting Presence; (PC8) Genuineness; (PC9) Psychological Holding; (E1) Collaboration; (E2) Experiential Specificity; (E3) Emotion Focus; (E4) Client Self-Development; (E5) Emotion Regulation Sensitivity.
A 3-item ‘non-facilitative directiveness’ factor (alpha = .89). Items: (PC5) Clarity of Language; (PC6) Content Directiveness; (PC10) Dominant or Overpowering Presence.
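A sketch of how an exploratory factor analysis like this might be run in Python. The matrix of item scores is a random placeholder, and sklearn's FactorAnalysis with varimax rotation (scikit-learn >= 0.24) is just one possible tool; the original analysis may have used different software and rotation.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Placeholder: a (60 sessions x 15 items) matrix of PCEPS item scores,
# averaged across raters. Random data stands in for the real archive.
rng = np.random.default_rng(0)
item_scores = rng.normal(loc=3.7, scale=0.8, size=(60, 15))

item_names = [
    "PC1", "PC2", "PC3", "PC4", "PC5", "PC6", "PC7", "PC8", "PC9", "PC10",
    "E1", "E2", "E3", "E4", "E5",
]

# Two-factor solution, mirroring the reported 'facilitative relationship'
# vs 'non-facilitative directiveness' split.
fa = FactorAnalysis(n_components=2, rotation="varimax")
fa.fit(item_scores)

# components_ has shape (n_factors, n_items); transpose to read per item.
for name, loadings in zip(item_names, fa.components_.T):
    print(f"{name}: factor1 = {loadings[0]:+.2f}, factor2 = {loadings[1]:+.2f}")
```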
22
What Affects Ratings on the PCEPS? A Generalisability Theory Analysis
Robert Elliott, Graham Westwell & Beth Freire, University of Strathclyde
23
Aims Carry out a Generalisability Theory/components of variance study of the PCEPS in order to inform decisions about how best to sample psychotherapy/counselling sessions for adherence/competence evaluations.
24
The PCEPS study - Method
Two teams of three raters used the PCEPS to independently rate therapy sessions carefully selected from the Strathclyde Therapy Research Centre archive, in a complex generalisability study of method factors that might affect ratings.
25
Generalisability Study Design: 12 facets of observation:
1. Items
2. Person-centred vs experiential subscales
3. Raters within teams
4. Rating teams
5. More vs less therapeutically experienced raters
6. Early vs late segments within sessions
7. 10 vs 15 min segments
8. Early vs middle vs late sessions within clients
9. Therapists
10. Clients within therapists
11. Student vs professional level therapists
12. Person-centred vs emotion-focused professional level therapists
26
1. Items Facet See reliability study (Beth)
Overall inter-item reliability (scores averaged across 6 raters): alpha = .98
Implication: the PCEPS has over-sampled the item facet and the number of items within the scale needs to be reduced.
Our recommendation: reduce the PCEPS from 15 items to a 10-item short form (7 PC, 3 E items).
27
2. Subscale Facet: Person-Centred vs. Experiential - 1
See reliability study (Beth)
Overall inter-subscale reliability: session-level scores averaged across 6 raters (n = 60): alpha = .93 (r = .92)
Univariate test for differences (within-participants):
PC subscale: m = 3.81 (sd = .73), greater than
E subscale: m = 3.40 (sd = 1.03)
t = 6.92 (p < .001); d = .45 (medium effect size)
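A minimal sketch of the paired comparison reported above, using scipy. The arrays are placeholders for session-level subscale means, and Cohen's d is computed against the pooled standard deviation of the two conditions (one common convention; the slide does not state which was used):

```python
import numpy as np
from scipy import stats

# Placeholder session-level subscale means (n = 60 sessions).
rng = np.random.default_rng(1)
pc = rng.normal(3.81, 0.73, size=60)  # Person-Centred subscale
ex = rng.normal(3.40, 1.03, size=60)  # Experiential subscale

t, p = stats.ttest_rel(pc, ex)        # within-participants (paired) t-test

# Cohen's d from the pooled SD of the two conditions; d could also be
# defined on the SD of the difference scores.
pooled_sd = np.sqrt((pc.var(ddof=1) + ex.var(ddof=1)) / 2)
d = (pc.mean() - ex.mean()) / pooled_sd
print(f"t = {t:.2f}, p = {p:.4f}, d = {d:.2f}")
```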
28
2. Subscale Facet: Person-Centred vs. Experiential - 2
Implication: the Person-Centred and Experiential subscales are highly overlapping, but PC scores are higher than Experiential scores. This reflects domain sampling/content validity, not an empirically-based distinction.
Recommendation: generate a main score from a 10-item single index:
Person-Centred Experiential subscale: Person-centred (4 items) + Experiential (3 items)
Directiveness subscale (3 items)
29
3. Rater Facet (within Teams)
See reliability study (Beth)
Overall inter-rater reliability, scores averaged across 15 items:
Session-level alpha (6 raters x 2 segments; n = 60): .91
Segment-level alpha (3 raters; n = 60 x 2 segments): .88 (Team A = Team B)
Smaller for individual items (mean alpha = .78)
Implications: average across items to increase inter-rater reliability. Three raters is ideal, as a high standard of reliability is needed for such high-stakes testing.
The Spearman-Brown predicted values for fewer raters are: 2 raters: alpha = .83; 1 rater: alpha = .71. Using fewer raters is a risky strategy given the high-stakes testing.
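The Spearman-Brown predictions quoted above follow from the standard prophecy formula; a small sketch that reproduces them:

```python
def spearman_brown(alpha_k: float, k_from: int, k_to: int) -> float:
    """Step a reliability estimate from k_from raters to k_to raters."""
    # Recover the implied single-rater reliability...
    r1 = alpha_k / (k_from - (k_from - 1) * alpha_k)
    # ...then project it to the new number of raters.
    return k_to * r1 / (1 + (k_to - 1) * r1)

# Starting from the 3-rater segment-level alpha of .88:
print(round(spearman_brown(0.88, 3, 2), 2))  # -> 0.83 with 2 raters
print(round(spearman_brown(0.88, 3, 1), 2))  # -> 0.71 with 1 rater
```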
30
4. Rating Team Facet Compared rating teams between segments,
scores averaged across 15 items and across the 3 raters per team. Note that differences between teams and segments are confounded.
Overall inter-team reliability (session-level analyses, n = 60): alpha = .85 (correlation = .77)
No difference in inter-rater reliabilities across teams: Team A = Team B (alpha = .88)
Univariate test for differences (dependent t-test):
Team A: m = 3.55 (sd = 1.0), less than
Team B: m = 3.80 (sd = .73)
t = (p < .01); d = -.29 (small effect)
31
4. Rating Team facet - 2 Conclusion: excellent consistency across rater teams, in spite of their having rated different segments within sessions, and identical levels of reliability. But one team gave higher ratings than the other, which is problematic for a scale on which absolute levels are important. Possible sources: rater team culture; confounded factors.
Recommendation: different rating teams produce comparable scores in relative terms, but perhaps not in absolute terms. We need to explore sources of difference in rated level, such as the confounded factors (e.g., rater experience level, segment differences); see the following slides.
32
5. Rater Therapeutic Experience Facet: More vs Less - 1
Overall inter-group reliability: 3 more experienced raters vs 3 less experienced raters, scores averaged across 15 items
Session-level rater-group alpha = .92 (correlation = .85)
Inter-rater reliability (across segments/rater teams):
Experienced raters: alpha = .81
Inexperienced raters: alpha = .84
Univariate test for differences (dependent t-test):
Inexperienced raters: m = 3.70 (sd = .86)
Experienced raters: m = 3.64 (sd = .83)
t = 1.0 (NS); d = .07 (less than a small effect)
33
5. Rater Therapeutic Experience Facet: More vs less - 2
Implications: PCEPS ratings are highly consistent across rater therapeutic experience levels, with comparable levels and reliability. Contra Carkhuff: PCE competence raters don’t need to be highly clinically experienced.
Recommendation: either inexperienced or experienced raters can be used.
34
6. Early vs Late segments within sessions - 1
Overall inter-segment reliability: early vs late segments, scores averaged across 15 items
Session-level rater-group alpha = .82 (correlation = .71)
Univariate test for differences (dependent t-test):
Early segments: m = 3.64 (sd = .96)
Late segments: m = 3.70 (sd = .80)
t = -.73 (NS); d = -.07 (less than a small effect)
35
6. Early vs late segments within sessions - 2
Conclusion: PCEPS ratings are highly consistent across segments, with comparable levels.
Recommendation: either early or late segments can be used.
Try: the segment ending 5 min before the end of the session (cf. Herrmann & Imke, in press).
36
7. 10 vs. 15 min segments within sessions - 1
Overall inter-segment reliability (15 items, 6 raters, 2 segments/session):
Short segments: alpha = .90
Long segments: alpha = .92
Univariate test for differences (dependent t-test):
Short segments: m = 3.63 (sd = .83)
Long segments: m = 3.71 (sd = .82)
t = -.37 (NS); d = -.10 (small effect)
37
7. 10 vs 15 min segments within sessions - 2
Conclusion: across segment lengths, PCEPS ratings show comparable levels of reliability and mean scores.
Recommendation: either 10 or 15 min segments can be used.
Try a ‘working segment’: a 15-20 min segment ending 5 min before the end of the session.
38
8. Early vs Middle vs Late sessions within clients - 1
One-way ANOVA for mean differences in PCEPS ratings:
Early sessions (n = 21): m = 3.69 (sd = .78)
Middle sessions (n = 20): m = 3.65 (sd = .86)
Late sessions (n = 19): m = 3.68 (sd = .85)
F = .01 (NS); eta-squared = .00 (zero per cent of variance accounted for)
Overall inter-segment reliability (15 items, 6 raters, 2 segments/session):
Early sessions: alpha = .88
Middle sessions: alpha = .92
End sessions: alpha = .92
39
8. Early vs middle vs late sessions within clients- 2
Conclusion: across sessions, PCEPS ratings show comparable levels of reliability and mean scores.
Recommendation: there is no need to sample from throughout therapy.
Try: two sessions (e.g., early and middle).
40
9. Therapists 10. Clients within therapists
One-way ANOVAs for mean differences in PCEPS ratings across:
A: Clients x therapists together: eta-squared = .92 (F = 23.6; p < .001)
B: Therapists (n = 10): eta-squared = .88 (F = 40.7; p < .001)
C = A - B: Clients within therapists (n = 20): eta-squared = .04
Conclusion: therapist differences overwhelm all other effects, including differences between clients. This supports the construct validity of the measure.
Recommendation: for the PCEPS, it is enough to sample one client per therapist.
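A sketch of the variance-components logic above: eta-squared is the between-groups sum of squares over the total sum of squares, computed here for both grouping factors on a hypothetical long-format table (the column names therapist, client and pceps are assumptions, not the study's):

```python
import pandas as pd

def eta_squared(df: pd.DataFrame, group_col: str, score_col: str) -> float:
    """Proportion of score variance accounted for by a grouping factor."""
    grand_mean = df[score_col].mean()
    ss_total = ((df[score_col] - grand_mean) ** 2).sum()
    ss_between = sum(
        len(g) * (g[score_col].mean() - grand_mean) ** 2
        for _, g in df.groupby(group_col)
    )
    return ss_between / ss_total

# Hypothetical long-format data: one PCEPS mean per session, tagged with
# therapist and client identifiers (clients are nested within therapists).
df = pd.DataFrame({
    "therapist": ["T1"] * 6 + ["T2"] * 6,
    "client": ["C1"] * 3 + ["C2"] * 3 + ["C3"] * 3 + ["C4"] * 3,
    "pceps": [4.1, 4.3, 4.0, 3.9, 4.2, 4.1, 2.9, 3.1, 3.0, 3.2, 3.0, 3.1],
})

eta_a = eta_squared(df, "client", "pceps")     # A: clients x therapists together
eta_b = eta_squared(df, "therapist", "pceps")  # B: therapists
print(f"C = A - B (clients within therapists): {eta_a - eta_b:.2f}")
```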
41
11. Student vs Professional level therapists - 1
Overall inter-rater reliability (15 items, 6 raters):
Student therapists: alpha = .61
Professional therapists: alpha = .85
Univariate test for differences (independent t-test):
Student therapists: m = 3.08 (sd = .33), less than
Professional therapists: m = 4.27 (sd = .72)
t = 8.27 (p < .001); d = 2.12 (extremely large effect)
42
11. Student vs Professional level therapists - 2
Implications: poor reliability for student therapists, even with 6 raters. Is this an order/practice effect (student therapists were rated earlier in the PCEPS study) or recognisability/rater bias?
Recommendations: use raters who don’t know the therapists; more raters may be needed when rating inexperienced therapists.
43
12. Person-centred vs Emotion-focused professional level therapists
Univariate test for differences (independent t-test):
PC therapists (n = 12 sessions): m = 4.38 (sd = .55)
EFT therapists (n = 18 sessions): m = 4.20 (sd = .81)
t = .66 (NS); d = .27 (small effect)
Implications: little if any difference between PC and EFT therapists.
Recommendations: the PCEPS is useful for both PCT and EFT; differences between PCT and EFT may be exaggerated.
44
Beyond ideology, Or: Back to the Process Itself
Is it worth continuing to argue at an ideological level over nondirectivity and process guiding? Like psychology in general, we have been neglecting the study of concrete behaviour in favour of the ease of self-report data, both quantitative questionnaires and qualitative interviews. The PCEPS study illustrates the value of following the example of the early Carl Rogers and colleagues: we need to return to the study of the therapy process itself.
45
How well do PCEPS raters agree with client and therapist perceptions of the relationship?
A convergent validity study Graham Westwell, Robert Elliott and Beth Freire
46
The facts are friendly!
47
Aim The aim of this study is to assess the convergent validity of the PCEPS by measuring how it correlates with similar client self-report instruments. “Validity is a more difficult concept to understand and to assess than reliability. The classical definition of validity is ‘whether the measure measures what it is supposed to measure.’” Barker, Pistrang and Elliott (2002, p. 65)
48
Method Audio segments rated using the PCEPS were specifically chosen from the Research Centre archive to correspond with available ‘relational assessment’ data:
Working Alliance Inventory (Short Revised version)
Therapeutic Relationship Scale (Client)
Therapeutic Relationship Scale (Therapist)
Revised Session Reactions Scale
After inter-rater reliability analysis, rater scores were correlated with the ‘relational assessment’ data using Pearson’s r.
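A sketch of the correlational step, assuming session-level mean scores aligned across instruments (the arrays are placeholders; scipy.stats.pearsonr returns both r and a two-tailed p-value):

```python
import numpy as np
from scipy.stats import pearsonr

# Placeholder session-level means, aligned so that element i of each
# array refers to the same session.
rng = np.random.default_rng(2)
pceps_mean = rng.normal(3.7, 0.8, size=54)
wai_sr_mean = rng.normal(3.9, 0.6, size=54)

r, p = pearsonr(pceps_mean, wai_sr_mean)
print(f"PCEPS vs WAI-SR: r = {r:.2f}, p = {p:.3f} (2-tailed)")
```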
49
Working Alliance Inventory – Short version Revised
The WAI-SR is a 12-item, self-report, short version of the 36-item Working Alliance Inventory (Horvath & Greenberg, 1989). This 12-item version is also based on Bordin’s (1979) model of the working alliance. It consists of three 4-item subscales, based on a negotiated, collaborative relationship:
The quality of the interpersonal bond between client and therapist
Agreement between client and therapist on the goals of therapy
Agreement between client and therapist that the tasks of therapy will address the problems the client brings
50
Therapeutic Relationship Scale (TRS)
The Therapeutic Relationship Scale aims to measure the client’s perception of the quality of the therapeutic relationship (Sanders, Freire and Elliott, 2007) from a specifically person-centred perspective. The TRS is a 27-item scale with 6 domains: Empathy (5 items), Positive Regard (3 items), Acceptance (3 items), Genuineness (4 items), Collaboration (3 items) and Deference (9 items). The TRS has a 4-point rating scale (‘Not at all’, ‘A little’, ‘Quite a lot’ and ‘A great deal’), scored 0-3; 13 of the items are reversed, so ‘Not at all’ = 3 and ‘A great deal’ = 0. There are two versions (with corresponding items):
Therapeutic Relationship Scale, Client (TRS-C)
Therapeutic Relationship Scale, Therapist (TRS-T)
51
Revised Session Reactions Scale (RSRS)
Session-level client-report measure of clients’ reactions to a therapy session. It categorises therapeutic reactions into two broad groups:
1) Helpful (with 2 sub-groups): (a) helpful task reactions (e.g., self-awareness); (b) helpful relationship reactions (e.g., feeling closer to the therapist)
2) Hindering: hindering task reactions relate to clients’ negative experiences within therapy (e.g., feeling misunderstood)
52
Expected results Judging from previous findings in the literature, we expected correlations of about .4 between the PCEPS ratings and client and therapist ratings on the Therapeutic Relationship Scale, given the conceptual overlap between the PCEPS and the TRS. We expected similar correlations of about .4 between the PCEPS and the Revised Session Reactions Scale (RSRS), again on account of the conceptual overlap between the two. We expected correlations with the WAI-12 to be somewhat lower, at around .3. In other words, we expected clients, therapists and raters to rate the therapy sessions in agreement with one another.
53
Research limitations We were aware of particular research limitations at the outset of the study: the relatively small sample of 54 sessions meant that statistical power to detect correlations smaller than .4 would be limited; the Therapeutic Relationship Scale is itself a new instrument of unknown reliability and validity; and the WAI-SR appears to lack fit with person-centred therapy.
54
Convergent validity results - overview
Correlations of all relational instruments (mean values) with the PCEPS:
WAI-SR: -.35**
TRS-Client: -.45**
TRS-Therapist: .04
RSRS-Helpful-Relationship: -.29
RSRS-Helpful-Task: -.07
RSRS-Hindering: .02
** p < .01 (2-tailed)
55
Convergent validity results - analysis
The negative correlations between the PCEPS and the WAI-12, and the TRS-C in particular, were unpredicted and unexpected: raters using the PCEPS seem to disagree significantly with clients’ WAI-12 and TRS-C measurements of their therapeutic experience. There is no statistically significant correlation between the PCEPS and the TRS-T or the RSRS; the ‘Helpful Relationship’ factor within the RSRS is negatively correlated, but not statistically significantly so.
56
The facts are friendly!
57
Try using scatterplots!
58
PCEPS/WAI-SR scatterplot showing ‘therapist effect’
59
PCEPS/WAI-SR scatterplot showing ‘therapist effect’
60
Therapist effect (1) There seems to be a ‘leniency effect’ in which therapists T6 and T7 are consistently rated higher than other therapists by the raters. These are the two investigators known to the raters, so this likely indicates rater bias towards them (the raters had formed a global impression of these 2 therapists), creating artificially high PCEPS scores. This probable bias toward T6 and T7 is also observable in the TRS-C scatterplot.
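A sketch of the kind of scatterplot used here: session-level PCEPS means against WAI-SR means, with points grouped by therapist so that a leniency effect shows up as one therapist's cluster sitting above the rest. The data and column names are hypothetical:

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Hypothetical session-level data; in the real study these came from the
# rated archive sessions.
rng = np.random.default_rng(3)
df = pd.DataFrame({
    "therapist": np.repeat(["T5", "T6", "T7", "T8"], 6),
    "wai_sr": rng.normal(3.9, 0.5, size=24),
    "pceps": np.concatenate([
        rng.normal(3.4, 0.4, size=6),  # T5
        rng.normal(4.6, 0.3, size=6),  # T6: leniently rated investigator
        rng.normal(4.5, 0.3, size=6),  # T7: leniently rated investigator
        rng.normal(3.3, 0.4, size=6),  # T8
    ]),
})

fig, ax = plt.subplots()
for therapist, group in df.groupby("therapist"):
    ax.scatter(group["wai_sr"], group["pceps"], label=therapist)
ax.set_xlabel("WAI-SR (client-rated alliance)")
ax.set_ylabel("PCEPS (observer-rated)")
ax.legend(title="Therapist")
plt.show()
```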
61
PCEPS/TRS-C scatterplot showing ‘Therapist effect’
62
Effects of protocol 5 clients were sampled from the Practice-Based (PB) protocol and 5 clients from the Social Anxiety (SA) protocol. When we look at the scatterplots by protocol, we see a very clear division between the two populations.
63
Sample showing Protocol effect.
64
Protocol effect results
The PB protocol is clustered in the bottom right-hand corner, indicating high WAI-SR and low PCEPS scores. In other words, the clients rated these therapists highly, but the raters scored them very low: the opposite of what we expected. The raters seemed to display a ‘severity’ effect towards the PB therapists. The results imply that these therapists are competent (in the view of the clients) but not adherent (the raters don’t think they are). Is ‘being helpful’ in the relationship the same as being competent? Note, though, that competence is not the same as helpfulness or a good relationship, so this inference does not follow directly.
65
Correlations separated by protocol
Table 2. Correlations of PCEPS mean with WAI-12 and TRS-C
              Both protocols   SA protocol   PB protocol
WAI-12 mean       -.35*            .10          -.30
TRS-C mean        -.45**          -.34          -.22
** p < .01, * p < .05 (two-tailed)
66
Therapist effect 2 (within protocol effect)
The therapists within the PB protocol are all student therapists, and the PCEPS shows poor reliability in scoring students (see Robert’s study): overall inter-rater reliability (15 items, 6 raters) was alpha = .61 for student therapists vs alpha = .85 for professional therapists. Some of the therapists in this protocol were also known to some of the raters: a possible global impression may explain the ‘severity’ effect. Also, one of the EFT professional-level therapists was clearly not as skilled as the others, making it easier to discriminate between the professional-level therapists.
67
Preliminary qualitative analysis (1)
Qualitative analysis of these sessions was needed to clarify what was happening. We made use of Helpful Aspects of Therapy (HAT) forms completed by clients after the corresponding sessions, published HSCED studies involving some of these clients, and a return to the recordings. We looked at sessions with high PCEPS but low WAI/TRS-C scores: the low WAI and TRS-C scores seem to contradict the HAT scores and narratives for the same sessions, which were strongly and generally positive. E.g., for T7, clients gave low WAI and TRS-C scores but reported high satisfaction with therapy on the HAT.
68
Preliminary qualitative analysis (2)
4 possible underlying factors for high PCEPS vs low WAI/TRS-C scores:
1. The clients’ qualitative and quantitative self-reports seemed to lack consistency, providing evidence for multiple perspectives on the therapeutic alliance.
2. Verbose therapists, who were very active within the relationship and made frequent empathy-oriented responses, were easier for the PCEPS raters to judge; the PCEPS is perhaps focused too specifically on empathic responding.
3. Observer and client ratings of the quality of the therapeutic relationship can be quite discrepant, even in opposition to each other.
4. The PCEPS cannot account for outside-of-therapy factors which might affect the client’s perspective on the therapeutic relationship.
69
Preliminary qualitative analysis (3)
3 possible underlying factors for low PCEPS vs high WAI/TRS-C scores:
1. Quiet therapists with limited verbal response are hard for PCEPS raters to judge; the client, however, knows the context of the therapy and rates the therapist well.
2. Clients whose therapists were deemed non-adherent by the PCEPS raters seemed to do well and found the therapeutic space useful (cf. Bohart, 2000). It is not clear whether this change can be attributed to the therapy or the therapist, although HAT forms indicate the impact of the therapist.
3. The PCEPS might struggle with idiosyncrasies of practice and deem them non-adherent and therefore non-competent.
70
Summary of convergent validity findings (1)
First: (a) the PCEPS raters were positively biased towards the therapists they knew in the SA protocol, which skewed the data; (b) at the same time, the SA protocol clients rated therapy more negatively than clients in the PB protocol; (c) the PCEPS raters judged that the PB therapists were non-adherent to a PCE therapeutic mode and were practising poor therapy; (d) however, the PB clients rated their therapists highly. This resulted in negative correlations between the PCEPS and the TRS-C and WAI-SR.
71
Summary of convergent validity findings (2)
Second: (e) the fact that there were no correlations between the PCEPS and the TRS-T is evidence of multiple perspectives on and understandings of the therapeutic alliance. Third: (f) the PCEPS lacks the nuance to pick up the therapeutic alliance (as defined by the WAI and TRS); (g) equally, the WAI and TRS are not very good at picking up a PCE way of working.
72
Summary of convergent validity findings (3)
Fourth: (h) an alternative interpretation is that the PCEPS, WAI and TRS all yield true scores, and that the instruments simply measure very different constructs.
73
Recommendations for further study
Complete further, in-depth qualitative analysis of high-PCEPS/low-WAI and high-PCEPS/low-TRS-C clients, particularly their phenomenological experience of the alliance.
Get raters to complete the WAI-O and TRS-O for the same PCEPS data sample and measure convergent validity with the WAI-SR and TRS-C.
Replicate the PCEPS study to re-test reliability and validity, making sure that client populations and therapist skill levels are not mixed and confounded as they were here.
Create a client version of the PCEPS and test its convergent validity against the PCEPS.
Test PCEPS scores against outcome measures: does a PCEPS score predict therapeutic outcome as measured by the client?
74
Discussion (1) The results are inconclusive because there were too many confounding variables and rater biases. Arguably, the Experiential subscale did not tap into the differences between person-centred and experiential practices because its items were too broad and not specific to experiential interventions (tasks). However, the non-facilitative directiveness factor might have tapped into these differences.
75
Discussion (2) The PCEPS is vulnerable to rater bias, and the rating process is still very subjective despite the descriptive, ‘objective’ anchor points. The raters’ experiences and understandings of what empathy, acceptance and directiveness mean will inevitably have an impact on their ratings.
76
PCEPS Person Centred and Experiential Psychotherapy Scale
Beth Freire Robert Elliott Graham Westwell