Evaluation
Mary Rowlatt, MDR Partners

Definition of project evaluation
Evaluation focuses on whether the project was effective, achieved its objectives, and whether the outcomes had an impact. This and the following slides are taken from the JISC (Joint Information Systems Committee) website – ctmanagement/planning/evaluation.aspx

Types of evaluation
Formative evaluation – performed during the project/programme to improve the work in progress and the likelihood that it will be successful.
Summative evaluation – performed near the end of the project/programme to provide evidence of achievements and success.

Aims of the formative evaluation might be to:
Assess how effectively the project is meeting its aims
Gather and disseminate best practice
Identify gaps and issues
Raise awareness of the project and stimulate discussion within the community
Ensure project outputs are meeting stakeholder needs
Ensure the project can respond flexibly to changes in the technical and political environment and that it isn't overtaken by events

Aims of the summative evaluation might be to:
Assess whether the project achieved its aims and objectives
Assess the impacts, benefits, and value of the project in the broader context
Identify achievements and stimulate discussion with the community
Synthesise knowledge from the project and lessons learned
Identify areas for future development work

Factors to evaluate might include:
Achievements against aims and objectives
Stakeholder engagement
Outcomes and impacts
Benefits
Learning
Effectiveness of the project

Questions to address
List the specific questions the evaluation will answer. Focus on questions that really need to be answered to demonstrate success. Think about what stakeholders want to know. Make sure that the questions can be answered unambiguously. Avoid questions where the answer is likely to be 'maybe'.

Typical questions – formative
Have milestones been met on schedule? What is holding up progress? What should we do to correct this?
Is project management effective?
Are stakeholders on board? Do they agree with interim findings?
Is our dissemination effective?
What lessons have we learned? Do we need to change the plan?

Typical questions – summative
Have objectives been met? Have outcomes been achieved?
What are the key findings? What impact did the project have?
What benefits are there for stakeholders?
Was our approach effective?
What lessons have we learned? What would we do differently?

Evaluation methods – quantitative
Questionnaires – Questionnaires are used to gather opinions from a particular group in a systematic way using closed and open-ended questions. They are a common, versatile, and relatively cheap way of collecting data. They can be sent by email, posted on the web, or even sent by snail mail. Care needs to be taken in selecting the sample, phrasing the questions, and analysing the results in order to draw valid conclusions.
SERVQUAL – This instrument measures the quality of a service in terms of five parameters: reliability, responsiveness, assurance, empathy, and tangibles. It is a survey instrument that measures the gap between users' expectations of excellence and their perception of the actual service delivered.
Usage logs – Usage logs record what each user does during a session, and these can be analysed using various tools and techniques. They allow you to measure what content is used, how often, using what methods (e.g. searching), and sometimes by whom (e.g. by department). Analysis can reveal trends and patterns (e.g. in searching or navigation).
Web server logs – These can tell you a little about how your website is used (e.g. the most-used pages, whether usage is increasing, and times of peak use). They don't tell you who is using the site, why, or whether they like it. But they can identify problems to look into (e.g. navigation, if important pages aren't being used). Many software tools are available to analyse server logs.
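To make the server-log idea concrete, here is a minimal Python sketch (not from the original slides) that counts the most-requested pages in an access log. It assumes an Apache/Nginx-style common log format, and the file name access.log is a placeholder; adjust both to your own server.

```python
import re
from collections import Counter

# Pattern for a common/combined web server log format
# (an assumption; adjust to match your server's configuration).
LOG_PATTERN = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+'
)

def top_pages(log_path, n=10):
    """Count successfully served requests per URL path; return the n most used."""
    counts = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as f:
        for line in f:
            m = LOG_PATTERN.match(line)
            # Keep only 2xx responses so errors don't inflate page counts.
            if m and m.group("status").startswith("2"):
                counts[m.group("path")] += 1
    return counts.most_common(n)

if __name__ == "__main__":
    # "access.log" is a hypothetical filename for illustration.
    for path, hits in top_pages("access.log"):
        print(f"{hits:8d}  {path}")
```

A listing like this directly supports the slide's point: it shows which pages are most used and, by omission, which important pages are not being reached.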

Evaluation methods – qualitative
Interviews – These are conversations, typically with one person. They may be structured, semi-structured, or unstructured, and conducted in person or by phone. They are useful for exploring opinions and issues in depth on a one-to-one basis.
Focus groups – These are interviews conducted with a small group of people (e.g. 8–10). They allow you to get a range of views on an issue (not a consensus) and to explore how strongly views are held or how they change as the issue is discussed. They are often used after a survey to help explain the results or clarify issues. However, they are time-consuming to set up, and some skill is needed to guide and moderate the discussion.
Observation – Observation is just that: observing what people do. It is a technique often used by developers of commercial software to find out how users use their product. If the results aren't what they envisaged, they may change the design. Observation can be applied to other areas as well (e.g. how a process or content is used).
Peer review – In some areas an expert opinion is needed. A pedagogical expert might evaluate learning objects and say whether they meet learning objectives. An expert in a discipline might evaluate the quality or relevance of a collection of content in that area.

Measuring success
For project outputs, performance indicators may relate to: user demand, user satisfaction, efficiency, effectiveness, take-up, etc.
For the project as a whole, they will relate to achieving your objectives. By using SMART objectives (specific, measurable, achievable, realistic, timed), you can demonstrate that they have been achieved.
Also consider how you will measure success with stakeholders, i.e. understand success from their point of view.

For example:
1,000 users per day will visit the website
Usage of the portal will increase by 200% from year 2 to year 3
80% of users questioned will express satisfaction with the service
Student examination marks will improve by 10% in two years
90% of users questioned will say the process/method saved them time
4 out of 5 institutions approached say they will adopt the guidelines
The portal will achieve a benchmark score of X in usability studies.
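As an illustration (not part of the original slides), targets like these can be checked mechanically once the figures are collected. The following Python sketch uses hypothetical indicator names and numbers; real projects would substitute their own.

```python
# Hypothetical sketch: checking measured results against success targets
# of the kind listed above. All names and figures are illustrative.

TARGETS = {
    "daily_visitors": 1000,    # users per day visiting the website
    "satisfaction_pct": 80,    # % of surveyed users expressing satisfaction
    "time_saved_pct": 90,      # % of surveyed users saying time was saved
}

# Figures gathered during the evaluation (invented for this example).
measured = {
    "daily_visitors": 1240,
    "satisfaction_pct": 76,
    "time_saved_pct": 91,
}

for indicator, target in TARGETS.items():
    value = measured[indicator]
    status = "MET" if value >= target else "NOT MET"
    print(f"{indicator}: target {target}, measured {value} -> {status}")
```

The point of the sketch is the design, not the code: each SMART objective becomes an indicator with a numeric target, so "success" reduces to an unambiguous comparison rather than a judgement call.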

Using evaluation results
Formative evaluation will improve the project and its outputs. It lets you reflect on what you've done so far, what's going well (or not so well), and what you could do to change or improve things.
Summative evaluation will demonstrate that you've achieved your aims and objectives, that the work was useful, and that there are benefits for the community.

Evaluation in AccessIT+
What do we want to evaluate?
– The courses? The take-up? User views? The content?
– The digital libraries? The amount of content? The quality of content? Usefulness? The technology?
– The impact?
– Cost-effectiveness?
– Benefits to users/to libraries?
– Etc.

Evaluation in AccessIT+
How do we want to do it?
– Questionnaires
– Interviews
– Sampling
– Counting
– Expert opinion
– Focus groups
– Observation
– Etc.