Proposed plan for the summative evaluation of a Technology Enhanced Learning Project
Dean Petters
– Statement of the research question
– Participants
– Method
– Artefacts and data collection instruments
– Timetable
– Dependencies and risks
– Conclusion
Statement of the research question
A plan for forming a research question from three objectives:
– Adoption of effective practice: mould, sharpen, focus, guide
– Experiment with new tools and approaches: extend, broaden, experiment, innovate
– Critically reflect on practice: reflect, deliberate, be explicit in how and why
Tensions and synergies within the research question
Participants
– Novice users versus committed practitioners who may have become stakeholders
– Experienced lecturers versus participants new to HE teaching, including individuals in training
– Trainers versus users
Research method
Issues to clarify:
– Evaluation of the software versus evaluation of the processes the software is trying to facilitate
– Controls in the design, contamination between conditions, artificiality of the task
An experimental within-subjects design with counterbalancing (a sketch of the assignment and analysis follows below):
– Half of the participants use the application first and the controlled condition second
– Half use the controlled condition first and the application second
Data:
– Quantitative, from data logging and usability analysis
– Qualitative, from structured interviews, focus groups and questionnaires
Analysis:
– Statistical analysis of differences and correlations
– Thematic analysis of interviews and focus groups, with diagrammatic representation of themes
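As an illustration only, a minimal sketch in Python of how the counterbalanced assignment and the paired within-subjects comparison could be operationalised; the participant IDs, scores and function names here are hypothetical, not part of the project's actual tooling:

```python
import random
from scipy import stats

def assign_orders(participant_ids, seed=1):
    """Split participants into the two counterbalanced orders:
    half use the application first, half the controlled condition first."""
    ids = list(participant_ids)
    random.Random(seed).shuffle(ids)
    half = len(ids) // 2
    return {"application_first": ids[:half],
            "control_first": ids[half:]}

def compare_conditions(app_scores, control_scores):
    """Within-subjects comparison: each participant contributes one
    score per condition, so a paired t-test is appropriate."""
    return stats.ttest_rel(app_scores, control_scores)

orders = assign_orders([f"P{i:02d}" for i in range(1, 21)])
print(orders)

# Hypothetical per-participant scores, paired by participant:
app = [7.2, 6.8, 8.1, 7.5, 6.9]
ctrl = [6.5, 6.7, 7.0, 7.1, 6.2]
print(compare_conditions(app, ctrl))
```

Counterbalancing the presentation order in this way is also what limits the contamination between conditions noted later under dependencies and risks.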
Artefacts and data collection instruments
– A working prototype of the software, with data-logging adaptations to take measurements for the evaluation (or a usability testing environment in which to video users); a minimal logging sketch follows this list
– Questionnaires and interview schedules
– The material for users to work on and tasks to accomplish, with an optimum balance between control over these materials and the realism of the tasks
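A minimal sketch, assuming the prototype can call a logging hook on each user action, of the kind of data logging the evaluation would rely on; the file path, event names and record fields below are hypothetical:

```python
import json
import time

LOG_PATH = "evaluation_log.jsonl"  # hypothetical location for the session log

def log_event(participant_id, event, detail=None):
    """Append one timestamped user event as a JSON line, so that
    sessions can be reconstructed and measured during analysis."""
    record = {"timestamp": time.time(),
              "participant": participant_id,
              "event": event,
              "detail": detail}
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example calls the prototype might make during a session:
log_event("P01", "session_start")
log_event("P01", "tool_used", {"tool": "annotation"})
log_event("P01", "session_end")
```

If these adaptations cannot be built in time, the usability video capture mentioned above is the fallback route to the same quantitative evidence.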
Timetable
Before the six-month evaluation period:
– Get appropriate data logging incorporated into the application, or design usability testing without these capabilities
– Design, calibrate and pilot the questionnaires and structured interviews
January to February – first five-week design period
End of February – debrief, including post-experience interviews
March to April – second five-week design period
End of April – debrief, including post-experience interviews
May – data collation and analysis, and any short follow-up data collection
June – write-up and dissemination
Dependencies and risks
– The key dependency and risk is getting everything ready for the cohort of users in January
– Collection of results depends on working software, but the dependency on data logging may be limited by usability video capture
– Contamination between conditions is limited by counterbalancing
Conclusion
– Overview of the research question: how to combine the different elements?
– Alternatives to Activity Theory
– Evaluating the software and evaluating a way of promoting a pattern of behaviour
– Planning for future research: if the application is effective, how and why?