
1 Water Framework Directive
Directive 2000/60/EC of the European Parliament and of the Council of 23 October 2000 establishing a framework for Community action in the field of water policy
IC Guidance Annex V: Definition of comparability criteria for setting class boundaries
Wendy Bonne, JRC
Guidance authors: Nigel Willby, Sebastian Birk & Wendy Bonne

2 Actions after ECOSTAT April 2010
Theoretical and practical evaluation of the proposed adaptations by NL, DE, CZ
–Scientific reports on theoretical evaluation, proposed adapted approach and benchmarking by Nigel Willby & Sebastian Birk
Testing by different GIG IC groups during summer
Conclusions and formulation in guidance text version 4 discussed at drafting group meeting on 25-26 August: participation of NL, FR, DE, UK, ES, CZ, SE, AT
Adapted version distributed on 3 Sept. – comments until 10 Sept. from NL, FR, DE, SE, UK, CZ
Adapted version 5 distributed on 14 Sept. to ECOSTAT – comments from ES, FR, DE, SE

3 Updated IC Guidance Annex V on comparability criteria
WFD background (Annex V, 1.4.1, Comparability of biological monitoring results): "The Commission shall facilitate this intercalibration exercise in order to ensure that these class boundaries are established consistent with the normative definitions [...] and are comparable between Member States."
What is meant by "comparable between Member States"?

4 Introduction - Why is improvement needed?
In IC-Phase 1, each BQE group elaborated its own definition, steered by the favoured Intercalibration Option:

IC Option  | 2                      | 3
Procedure  | "Harmonisation band"   | "Average class difference"
Threshold  | band width of +/- 0.05 | half a class (misinterpreted as equating to +/- one quarter of a class)

In this proposal we made sure it is one quarter of a class!
Need for a harmonised concept at the beginning of the analytical process of IC-Phase 2

5 Role of Comparability Criteria in the overall IC process
Preconditions: Compliance check - Feasibility check - Datasets and Intercalibration options - Benchmarking - Boundary setting - Boundary comparison and harmonization
Prerequisites:
1. Good status boundaries have been set by Member States.
2. IC process has passed "feasibility checks".
3. IC Benchmarks have been applied in line with harmonised requirements (IC Refcon Group).

6 General principles
Concept applicable to all BQEs/water categories
In line with the requirements of the new Intercalibration Guidance
Considering the approaches applied in IC-Phase 1 – it is an extension and harmonisation of them
Considering only upper class boundaries (H/G and G/M) – classifications are aggregated into classes above or below that boundary

7 General principles
Comparability is always checked by analysing 2 components, boundary bias and class agreement, using EQRs
Boundary harmonization = upper boundaries do not differ by more than 0.5 class between MSs (i.e. the maximum deviation of each national boundary above or below the global boundary is a quarter of a class)
Class agreement is calculated after boundary harmonization to show the performance of the methods after the required adjustments are made to the boundaries

8 How is comparability explained in this Annex V?
Boundary bias = the deviation in the positioning of a class boundary of one national method relative to the common view of the MSs (i.e. defined by the common metric or by the global mean of all the methods = pseudo-common metric)
Class agreement = the confidence that two or more national methods will report the same class for a given site, as calculated by 3 class agreement metrics

9 How is comparability explained in this Annex V?
Comparability is dependent on:
–Positioning of class boundaries
–Scatter in EQR values (limits class agreement)
–Uncertainty of methods (not analyzed in intercalibration)
This scatter should be relatively low. Methods must be significantly correlated with each other or with a common metric; if this is not the case, tests of comparability are not feasible.

10 How is comparability explained in this Annex V?
What to compare: the comparability analysis can be split into 3 questions:
1. How closely are the methods related over the whole ecological quality gradient? (Clearly assess the relatedness of methods before starting to plot boundaries!)
2. How comparable is the definition of Good Ecological Status, i.e. the H/G and G/M boundaries? (Analysis of boundary bias)
3. Do the EQR results of the methods report the same class? (Analysis of class agreement)

11 How is comparability explained in this Annex V?
1. How closely are the methods related over the whole ecological quality gradient? (Clearly assess the relatedness of methods before starting to plot boundaries!)
Use regression of national classification EQRs against the common metric or the average view of the other MSs (pseudo-common metric) - or, exceptionally, non-parametric correlation (Option 3b)
[Figure: EQR of Member State A plotted against the scale of the common metric (Option 2) or pseudo-common metric (Option 3)]

12 How is comparability explained in this Annex V?
2. Boundary bias - indirect: regression to plot all national boundary EQRs on one common metric scale or against the average view of the other MSs
[Figure: plot of the H/G boundaries of Member States A, B and C on the (pseudo-)common metric scale]
3. Class agreement - direct pairwise comparisons of EQRs to check the level of class agreement: do we all report the same class?
[Figure: pairwise comparisons between Member States A, B and C; exceptional pairwise comparison]

13 How to compare? Analytical options dependent on the available dataset
Q1. Is intercalibration performed based on commonly assessed sites?
–No: Option 2 - indirect comparison through regression
–Yes: Option 3 - continue with Q2
Q2. Is the gradient of ecological quality sufficiently covered by the existing data?
–No: Option 3b - direct comparison without regression
–Yes: Option 3a - direct comparison combined with regression, continue with Q3
Q3. How many methods are participating in the exercise? In case of 3 methods, is 1 method very different from the other 2 methods?
Use of pseudo-common metric and/or common metric (without (P)CM in case of only 2 methods)

14 Table 1 for judging the acceptability of comparisons based on the relatedness between methods

1. Relatedness of methods | Acceptability levels in analyses
Option 2: parametric regression against common metric | Test the model quality + Pearson's correlation coefficient r ≥ 0.5
Option 3a: parametric regression against pseudo-common metric (and/or common metric) | Test the model quality + Pearson's correlation coefficient r ≥ 0.5
Option 3b: non-parametric correlation of EQR outcomes of methods | Spearman's rank correlation coefficient ≥ 0.5

15 2. Boundary bias (1 evaluation per MS) (1), all options (2):
–≤ 0.25 classes: result accepted
–> 0.25 classes (3): result not accepted
3. Class agreement over all MSs (mean average absolute class difference), all options:
–< 1.0: result accepted
–≥ 1.0 (4): result questionable
(1) Refers to the deviation from the global mean or median boundary for all national boundaries individually
(2) For Option 3b calculated as average class difference (a global mean or median boundary cannot be defined in this case)
(3) In exceptional cases a justification can be given for a boundary bias that slightly exceeds 0.25 class, but it must never be larger than 0.5 class.
(4) In exceptional cases a justification can be given for a mean average absolute class difference that reaches or exceeds 1 class.

16 Step 1: Benchmarking
Objective: find a common starting point as the upper limit of the ecological gradient
Use a population of sites screened against agreed abiotic criteria relevant to pressures
No reference sites: benchmark at a lower threshold for which all countries can provide data
Process MUST be independent of national classifications (high status sites cannot just be accepted; their status has to be illustrated with abiotic data and common agreement)

17 Step 1: Benchmarking
The reference of EQR 1 is not the same across Member States!
[Figure: upper limits of Member States A, B and C on the EQR scale; an alternative benchmark (e.g. 0.7), such as a common high-good or good-moderate boundary, can serve as the common starting point]

18 Step 2: Benchmark standardization
Objective: to focus the intercalibration strictly on the relative positioning of class boundaries and to minimise biogeographical and sub-typological differences that can cause incomparability within a common dataset; this avoids larger adjustments to methods than needed
Calculation:
Options 1-3: each MS must apply its method to the benchmark dataset of every other MS, to see if the benchmark of the other MS is higher or lower on the national scale; each value must be divided by its corresponding median benchmark value to get the same scale
Option 2: apply the common metric to the benchmark dataset of each MS, take the median of the benchmark sites, and divide the common metric values of each MS by this median

19 Steps 1 & 2: Benchmarking and standardization
Step 1: Benchmarking - same definition of "absence of pressure"
Step 2: Benchmark standardization is needed if the benchmark values of MS 1 (or subtype 1) differ significantly from those of MS 2 (or subtype 2)
Current Option 2 example: median of MS 1 benchmark values = 0.78 and median of MS 2 benchmark values = 0.68 on the common metric scale
[Figure: EQR distributions of MS 1 and MS 2; dividing MS 1 values by 0.78 and MS 2 values by 0.68 scales both benchmarks to 1]
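
The division-by-median logic of Steps 1-2 can be illustrated with a minimal Python sketch. All arrays and helper names here are hypothetical, except the medians 0.78 and 0.68 taken from the example above.

    import numpy as np

    # Hypothetical common-metric EQRs at the benchmark sites of two MSs,
    # chosen so their medians match the example above (0.78 and 0.68).
    benchmark_ms1 = np.array([0.74, 0.76, 0.78, 0.80, 0.82])
    benchmark_ms2 = np.array([0.64, 0.66, 0.68, 0.70, 0.72])

    def benchmark_standardize(values, benchmark_values):
        # Divide every value by the median benchmark value, so the
        # benchmark of each Member State maps to 1.0 on the common scale.
        return values / np.median(benchmark_values)

    ms1_std = benchmark_standardize(np.array([0.55, 0.70, 0.78]), benchmark_ms1)  # /0.78
    ms2_std = benchmark_standardize(np.array([0.50, 0.60, 0.68]), benchmark_ms2)  # /0.68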

20 Step 3: Construction of an ordinary least squares regression
Option 2: relate the EQR classifications of a MS to the benchmark-standardized, independent common metric
Construct a parametric regression
[Figure: national EQR of Member State A plotted against the benchmark-standardized common metric]
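
As a minimal sketch of this step, the regression can be fitted with scipy. The paired values below are hypothetical; following the formula used in Step 5 (CM = m × EQR + c), the national EQR is treated as the predictor.

    import numpy as np
    from scipy.stats import linregress

    common_metric = np.array([0.35, 0.50, 0.62, 0.75, 0.88, 0.95])  # benchmark-standardized CM
    national_eqr = np.array([0.30, 0.48, 0.60, 0.72, 0.85, 0.97])   # MS A classifications

    fit = linregress(national_eqr, common_metric)  # fits CM = m * EQR + c
    print(f"m = {fit.slope:.3f}, c = {fit.intercept:.3f}, "
          f"r = {fit.rvalue:.3f}, p = {fit.pvalue:.4f}")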

21 Step 3: Construction of an ordinary least squares regression
Options 1-3: the benchmark-standardized EQR of a Member State is plotted against the average EQR of each combination of the independent MSs = pseudo-common metric
Construct a parametric regression
[Figure: benchmark-standardized national EQR of MS A plotted against the benchmark-standardized pseudo-common metric]
This makes it possible to apply the regression approach to Option 3 and to establish a harmonization band to optimize the placement of class boundaries
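
A sketch of how a pseudo-common metric could be computed for one Member State, assuming a hypothetical matrix of benchmark-standardized EQRs on commonly assessed sites:

    import numpy as np

    # Rows = commonly assessed sites, columns = MS A, MS B, MS C.
    eqrs = np.array([
        [0.82, 0.78, 0.80],
        [0.61, 0.65, 0.58],
        [0.45, 0.40, 0.43],
    ])

    ms_a = 0
    pcm_for_a = np.delete(eqrs, ms_a, axis=1).mean(axis=1)  # average view of the other MSs
    # MS A's EQRs (eqrs[:, 0]) are then regressed against pcm_for_a as in Option 2.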

22 Step 4: Assessing the relation between methods = check of regressions before continuing
Options 2 & 3: any Member State not significantly correlated with the average view of the MSs must be excluded from the process or improve its method.
Parametric regression (Options 2 and 3a):
–the relationship must be significant (p ≤ 0.05 to p ≤ 0.001)
–assumptions of normally distributed error and constant variance (homoscedasticity) of the model residuals must hold
–the (pseudo-)common metric must adequately represent all methods: Pearson's correlation coefficient r > 0.5
–the observed minimum r² must be at least half of the observed maximum r²
–the slope of the regression should be significantly different from 0 and should lie between 0.5 and 1.5
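
These per-regression checks can be bundled into a small helper; a sketch under the assumption that `fit` is a scipy linregress result as in the Step 3 sketch (the residual normality and homoscedasticity checks are omitted here for brevity):

    def regression_acceptable(fit, alpha=0.05):
        # Step 4 criteria for one parametric regression (Options 2 and 3a).
        return {
            "relationship significant": fit.pvalue <= alpha,
            "Pearson r > 0.5": fit.rvalue > 0.5,
            "slope within [0.5, 1.5]": 0.5 <= fit.slope <= 1.5,
        }

    def r2_spread_acceptable(r_values):
        # Cross-method criterion: minimum r^2 at least half the maximum r^2.
        r2 = [r * r for r in r_values]
        return min(r2) >= 0.5 * max(r2)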

23 Step 4: Assessing the relation between methods = check of regressions before continuing
Option 3b, when regression is not possible: Spearman's rank correlation coefficient > 0.5
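
For Option 3b the check reduces to a rank correlation; a sketch with hypothetical paired EQRs:

    from scipy.stats import spearmanr

    rho, p = spearmanr([0.82, 0.61, 0.45, 0.70], [0.78, 0.65, 0.40, 0.74])
    acceptable = rho >= 0.5  # acceptability level from Table 1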

24 Step 5: Boundary translation using the regression (Options 2 and 3a)
Use the regression formula CM (or PCM) = m × (EQR country) + c to calculate the H/G and G/M boundaries for all MSs
[Figure: H/G and G/M boundaries translated from the benchmark-standardized national EQRs of Member State A onto the scale of the common metric (Option 2) or pseudo-common metric (Option 3)]
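
A sketch of the translation, reusing the slope m and intercept c from a fitted regression (values hypothetical):

    def boundary_on_common_scale(national_boundary, m, c):
        # Step 5: CM (or PCM) = m * EQR_country + c
        return m * national_boundary + c

    m, c = 1.05, -0.02                                 # hypothetical fit for MS A
    hg_cm = boundary_on_common_scale(0.80, m, c)       # H/G boundary on (P)CM scale
    gm_cm = boundary_on_common_scale(0.60, m, c)       # G/M boundary on (P)CM scale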

25 Step 6: Assessing boundary bias
Global mean or median boundary = mean or median of all predicted MS boundary EQRs
Define the difference of each MS boundary EQR from the global mean or median
Convert this difference into a class equivalent for each MS
If the difference > 0.25 class, the boundary is not OK
Maximum permitted boundary deviation = 0.25 class equivalents, specified for each MS separately
[Figure: H/G boundaries of national methods A-E on the common metric EQR scale (0.50-0.70), compared with the global mean or median boundary]
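
A sketch of the bias check with hypothetical translated boundaries. The class width used to convert a deviation into class equivalents is taken here as the distance between each MS's own H/G and G/M boundaries on the common scale, which is an assumption for illustration:

    import numpy as np

    hg = {"A": 0.70, "B": 0.65, "C": 0.60, "D": 0.55, "E": 0.50}  # H/G on (P)CM scale
    gm = {"A": 0.50, "B": 0.45, "C": 0.42, "D": 0.35, "E": 0.33}  # G/M on (P)CM scale

    global_hg = np.median(list(hg.values()))
    for ms in hg:
        class_width = hg[ms] - gm[ms]              # class equivalent for this MS
        bias = (hg[ms] - global_hg) / class_width  # deviation in class units
        print(f"MS {ms}: bias = {bias:+.2f} classes, accepted = {abs(bias) <= 0.25}")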

26 Step 6: Assessing boundary bias
Option 3b:
–Calculate the average class difference between all pairs of values across all participating MSs
–If the difference > 0.25 class, the boundary is not OK
–For this, all EQRs must be normalized to a common scale (H/G = 0.8, G/M = 0.6)

27 Step 7: Boundary adjustment
Objective: to reset national classifications where necessary
Define the lower (and upper) acceptable class boundary by subtracting (adding) the permitted boundary deviation of 0.25 class, in the class equivalents of the Member State, from (to) the global mean or median boundary
Invert the previously established regression formula: EQR of MS for the adjusted boundary = (y_harmonised - c) / m
Translate the lowest permitted boundary for H/G and G/M
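
A sketch of the inversion; m and c are hypothetical, and the permitted deviation of 0.25 class is expressed on the common scale via a hypothetical class width of 0.2:

    def adjusted_boundary_on_national_scale(global_boundary, m, c, class_width=0.2):
        # Lowest acceptable boundary on the common scale, then invert
        # the regression: EQR = (y_harmonised - c) / m.
        y_harmonised = global_boundary - 0.25 * class_width
        return (y_harmonised - c) / m

    m, c = 1.05, -0.02
    lowest_hg_national = adjusted_boundary_on_national_scale(0.60, m, c)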

28 Step 8: Provide EQR classifications for direct comparison
Objective: to check class agreement and report on it
For the G/M boundary: classification in good
For this, all EQRs must be transformed to a common scale to make all classes equal to 0.2 (H/G = 0.8, G/M = 0.6)
[Figure: piecewise linear transformation of the class boundaries of Member State A (1, 0.85, 0.55, 0.35, 0.20, 0) onto the common scale (1, 0.80, 0.60, 0.40, 0.20, 0)]
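
The piecewise linear transformation can be sketched with np.interp; the national boundary values are those shown for Member State A on the slide:

    import numpy as np

    national_bounds = [0.0, 0.20, 0.35, 0.55, 0.85, 1.0]  # MS A class boundaries
    common_bounds = [0.0, 0.20, 0.40, 0.60, 0.80, 1.0]    # common scale, classes = 0.2

    def to_common_scale(eqr):
        # Linear within each class: national boundaries map exactly onto the
        # common boundaries, values in between are interpolated.
        return np.interp(eqr, national_bounds, common_bounds)

    print(to_common_scale(0.55))  # national G/M boundary -> 0.6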

29 Step 8: Provide EQR classifications for direct comparison
Option 2: generate a set of synthetic national EQR values for each country across 300 sites, using the relationship between the common metric and the national EQR of each MS, with an error distribution at each predicted point: normally distributed, mean = 0, sd = regression prediction error
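
A sketch of the simulation with hypothetical slope, intercept and prediction error; the inverse of the fitted CM = m × EQR + c relation is used to predict national EQRs from the common metric:

    import numpy as np

    rng = np.random.default_rng(42)
    common_metric = rng.uniform(0.2, 1.0, size=300)  # gradient across 300 sites

    m, c, prediction_error = 0.95, 0.03, 0.06        # hypothetical national fit
    synthetic_eqr = (common_metric - c) / m + rng.normal(0.0, prediction_error, size=300)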

30 Step 9: Assessing levels of agreement
Options 2, 3a and 3b:
–Calculate the average absolute class difference between all pairs of values across all participating MSs
–Calculate the proportion of classifications differing by half a class or more (< 70%, ideally < 50%)
–Obtain the multi-rater kappa coefficient (> 0.4, ideally > 0.6)
Mean average absolute class difference, all options: < 1.0 = result accepted; ≥ 1.0 = result questionable
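
A sketch of the three agreement metrics for EQRs already transformed to the common scale (hypothetical data; classes are 0.2 wide). Fleiss' kappa from statsmodels stands in here for the multi-rater kappa named on the slide, which is an assumption about the intended variant:

    import numpy as np
    from itertools import combinations
    from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

    eqrs = np.array([          # rows = sites, columns = Member States
        [0.82, 0.78, 0.85],
        [0.61, 0.55, 0.58],
        [0.45, 0.52, 0.43],
        [0.30, 0.35, 0.25],
    ])

    diffs = np.concatenate([np.abs(eqrs[:, i] - eqrs[:, j]) / 0.2
                            for i, j in combinations(range(eqrs.shape[1]), 2)])

    avg_abs_class_diff = diffs.mean()                    # accepted if < 1.0
    share_half_class = (diffs >= 0.5).mean()             # < 70%, ideally < 50%
    classes = np.clip((eqrs // 0.2).astype(int), 0, 4)   # 5 status classes
    kappa = fleiss_kappa(aggregate_raters(classes)[0])   # > 0.4, ideally > 0.6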

31 Step 10: Translation of benchmark-standardized EQRs
Objective: to determine which value on the original national EQR scale the benchmark-standardized EQR value corresponds to after harmonization
Multiply the national class boundaries by the median of the population of benchmark sites (previously used for normalisation by division)

32 Step 11: Ecological characterization of class boundaries
The presentation of the harmonized upper class boundaries must be supported by an ecological characterization that effectively maps the boundaries onto an ecological gradient defined by changes in relevant structural and functional attributes of the quality element

33 2. Data considerations
Objective: to reduce the effect of dataset characteristics on comparability
Need for EQRs (not truncated at 1)
Minimum requirements for meaningful quantitative comparisons:
–Size of datasets (sample to water body level: repackage samples to increase data availability; aggregation leads to a reduction in the gradient of ecological quality) - quantitative comparison is difficult with fewer than 20-25 discrete cases classified by every Member State
–Widest possible gradient in ecological quality
–Bilateral comparisons are preferably avoided

34 Training and evaluation workshops
Training workshops:
–Rivers: autumn 2010
–Lakes: 4 November 2010
–COAST: November 2010 in Cyprus, back to back with the general COAST meeting (COAST meeting 9 & 10 November, workshop 11 November) - crucial to be ready for it!
Validation workshops in spring 2011

35 Discussion: response to additional written comments
Comment FR, ES, SE, DE: many GIG groups need more flexibility because they cannot fully apply Annex V
–Why? Feasibility criteria are not met: data requirements or others.
–Solution: already given in Key Principle 9 of the Intercalibration Guidance, which can be repeated and extended in Annex V: In case the assessment methods developed by a Member State differ so much that the data cannot be compared, the assessment method cannot be intercalibrated by one of the options provided in this guidance. The MS (in collaboration with the GIG) will need to find an alternative intercalibration approach (see Key Principle 9 in the Intercalibration Guidance). In case of not meeting other feasibility criteria, such as data requirements for benchmarking or for statistical robustness of the comparison, some technical variations to the provided options can be proposed. This necessity, and the preservation of conformity with the content and sense of the comparability criteria guidance, have to be shown to ECOSTAT by the GIG or the MS. The alternative or adapted approach will need to be approved by WG ECOSTAT.

36 Discussion: response to additional written comments
NL, FR, SE: organizational issues - a "cook book" and calculation sheets will be provided for the workshops
FR: omission of the modeled dataset or not? This can be done at central level by JRC for consistent reporting on class agreement
SE: editorial comments
–Reformulation necessary of the 2nd question, from "How comparable is the methodological definition of the good ecological status, so how comparable are the boundaries H/G and G/M (= assessment of boundary bias)?" to "How comparable are the national definitions of the good ecological status, so how comparable are the boundaries H/G and G/M (= assessment of boundary bias)?"

37 Discussion
1. Introduction
2. General principles
2.1 Further explanation of comparability
2.2 Comparison options and comparability criteria
3. Steps of the procedure
Editorial comments:
–Improvement of figures 1a-1b
–Position of figures 5, 6, 7, 8, 9

