1
Results of Implementation and Testing: Soils and Riparian – What Did We Learn?
2
Purpose of Routine Evaluations Unclear
It is our understanding that routine evaluations were driven by the need to assess results under results-based legislation, i.e., FRPA.
The underlying purpose common to most evaluations of forest resources is to learn about forest practices.
3
Uses of Indicators
Compliance and effectiveness audits
Inspections
Long-term research
Effectiveness monitoring
Etc.
4
What Did We Do? Compliance and Effectiveness Audits
Testing the indicators was not the objective of the audits.
We used the indicators as the basis for a set of commonly accepted indicators of effective soil conservation and stream management.
We can't discuss the audit results, as they are not yet reported.
We can discuss the process and the use of the indicators in the audits.
5
Use of Indicators in Audits
The audit relates to the forest practices (versus the streams or soils themselves).
Indicators were used to establish a common understanding of the effectiveness of forest practices affecting soils and streams.
This required some adjustments to the draft indicators to ensure that their use was informative about the underlying forest practices. (handout)
6
Use of Indicators (Cont.)
We also developed audit programs that incorporated the indicators to allow audit analysis. This provides the basis for determining whether or not an indicator is achieved, and why.
This is an important point: the technical design of an evaluation is critical to proper application of the indicators.
Development teams should consider the type, purpose, and objectives of "routine evaluations" when developing indicators.
7
What Did We Learn? (Key Messages)
The development teams developed both indicators and methodologies; the audit teams used the indicators, not the methodologies.
It would have been beneficial if the development teams had included people with the varied expertise of those who will be using the indicators.
8
What Did We Learn? (Cont.)
Conducting the compliance audit assessment at the same time as the effectiveness assessment works well; it provides linkage to the forest practices.
Implementation teams need to incorporate the necessary expertise; we found substantial benefit in having one of the development team members on each audit team.
Site-level field assessments were necessary to use the indicators (for most assessments).
Professional judgement is inherent to using the indicators:
–When to apply and take detailed measurements
–Interpreting results
9
What Did We Learn? (Cont.)
Indicators need to be general enough to facilitate the use of professional judgement.
The indicators, as developed, were "commonly accepted" indicators.
Each indicator should be supported by a short rationale.
10
What Did We Learn? (Cont.)
We recommend that the indicators be tested using different routine evaluations, e.g., compliance and enforcement inspections.
As part of training, having indicator developers, scientists, and implementers in the field together prior to starting the audits was very beneficial.
Achievement of indicators cannot be evaluated without the collection of data (checklists were developed); different evaluations require different forms of data.
11
What Did We Learn? (Cont.)
The indicators we used were "accepted" as common indicators (no argument from auditees, though we recognize that the audits are not complete).
The combination of scientists, foresters, experts, auditors, etc. works very well.
Licensees and stewardship groups should be involved in indicator development as well.
12
Lessons Learned
The model of MOF developing the indicators and Board auditors developing the audit program worked adequately:
–A stronger relationship between the indicators and the forest practices is needed
–The indicators were essentially premised on best management practices
13
END