Instructional Design JMA 503: Courseware Evaluation

By: Edith Leticia Cerda

Instructional Design JMA 503

Overview and Purpose: Courseware evaluation; DB connectivity; Menu, etc.; Morae

Phase III Develop & Implement Phase I Analysis Phase II Design Evaluate & Revise Start Models

Why evaluate? A primary function of evaluation is to determine the extent to which the expected outcomes have been realized. Courseware objectives are treated as precise statements of performance expectancy.

Evaluation Evaluation can be defined as a systematic procedure used to determine the extent to which program objectives have been attained.

Why evaluate? “…evaluation as an ongoing process is used to determine – whether program objectives have been met, and – to identify those portions of a lesson where modifications are required.” (Hannafin & Peck, 1988, p. 299)

Levels of evaluation Multiple levels of evaluation:
– Evaluation of learner performance: tells you whether the learner has learned something.
– Evaluation of the instructional materials (formative evaluation).
– Evaluation to determine effectiveness and to provide decision makers with information about adoption (summative evaluation).

Phases of formative evaluation
– Design reviews: the output of each stage of the design/development process is reviewed and revised.
– Expert reviews: an expert reviews the materials.
– One-to-one evaluation: the developer tries out the materials on one or more individuals.
– Small group: conducted when the program is almost finished, using more formal techniques.

Phases of formative evaluation Dick & Carey, three stages of formative evaluation: – One-to-one – Small group evaluation – Field testing

One-to-One Evaluation Courseware Evaluation

One-to-one Determine and rectify major problems. Conducted extensively during initial lesson development. Informal. Attempts to answer:
– Do learners understand the courseware?
– Do learners know what to do during practice and tests?
– Can learners interpret the graphics in the text?
– Can learners read all the textual materials?

One-to-one One-to-one evaluation can yield valuable information about a lesson before effort is expended unnecessarily on full development.

One-to-one Research indicates that teachers/trainers are NOT the best sources of information for predicting whether materials will be effective. Learners/users are the best source of information.

One-to-one Materials that have been tried out with only a few representative learners, and then revised based on the information gained, are substantially more effective than the original instruction. Testing with as few as five users typically uncovers most of the problems (see the sketch below).
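
The diminishing returns of adding more test users are often illustrated with the problem-discovery model described by Nielsen and Landauer. The sketch below assumes their commonly cited average discovery rate of about 31% per user; that figure is an assumption for illustration, not something stated in these slides.

```python
# Problem-discovery model commonly attributed to Nielsen & Landauer:
# share of problems found by n users = 1 - (1 - L)**n, where L is the
# probability that a single user exposes a given problem.
# L = 0.31 is the commonly cited average; it is an assumed value here.

def problems_found(n_users: int, discovery_rate: float = 0.31) -> float:
    """Estimated share of usability problems found by n_users testers."""
    return 1 - (1 - discovery_rate) ** n_users

if __name__ == "__main__":
    for n in range(1, 9):
        print(f"{n} users: {problems_found(n):.0%} of problems found")
```

With these assumptions, five users already surface roughly 85% of the problems, which is why one-to-one testing with a handful of learners is so cost-effective.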

Small Group Evaluation Courseware Evaluation

Small group Often conducted when courseware is nearing completion. These are usually more formal evaluations. Helps to determine:
– Lesson effectiveness
– Acceptability of the lesson
– Appropriateness of the materials

Field Test Courseware Evaluation

Field test Conducted in the actual setting in which the courseware will be implemented. Field tests are conducted when the courseware is at final-draft quality.

Evaluation and CBT Computer software is tested by programmers or developers, often referred to as alpha testing. Software is also given to users (target audience) to use and report problems – beta testing.

Levels of Evaluation Courseware Evaluation

Levels of evaluation
– Instructional Adequacy
– Navigation/Information Adequacy
– Visual Adequacy
– Program Adequacy

Instructional Adequacy
– Are the directions for the courseware clearly stated?
– Are goals and objectives stated?
– Is the courseware consistent with the outcomes specified in the objectives?
– Is the organization of topics easy to follow?
– Is the courseware free from vague and ambiguous text?
– Is the basic design sensible?
– Does the courseware provide opportunities for meaningful interaction between the learner and the lesson content?
– Does the courseware personalize instruction?
– Will the courseware motivate learners or attract their interest?
– Are record-keeping capabilities available in the courseware?

Navigation & Information Adequacy Do learners know how to get around? Do they get lost? Is navigation consistent? Are labels meaningful? Does navigation answer : Where am I? What can I do? and what is here? Is navigation hidden? Are links/buttons explicitly describe. Are items group to reflect user expectation Is there a hierarchy to information structure? Is the information sequenced properly?

Visual Adequacy
– Is the screen space used effectively (e.g., is there too much text)?
– Is the information presented free of crowding and cramming?
– Is the courseware free from typographical errors?
– Do colors add to the quality of the courseware?
– Do graphics add to the quality of the courseware?
– Does animation or video support learning?

Program Adequacy
– Does the courseware run as intended?
– Is the courseware free from conceptual or programming loops (e.g., getting caught in a section and being unable to go anywhere else)?
– Does the courseware minimize the disk-management requirements for the learner (e.g., how easy is it for the learner to run/use the courseware)?
– Does the courseware run efficiently?
– Does the courseware display information accurately?
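
One way to make the four adequacy checklists above operational is to store the questions as data and tally yes/no answers per category. A minimal sketch follows; the category names come from the slides, but the abbreviated question lists, the scoring scheme, and the function names are illustrative assumptions.

```python
# Hypothetical courseware review checklist built from the four adequacy
# categories above (questions abbreviated). The yes/no scoring convention
# is an assumption, not a scheme prescribed by the slides.
CHECKLIST = {
    "Instructional Adequacy": [
        "Are the directions for the courseware clearly stated?",
        "Are goals and objectives stated?",
        "Is the courseware consistent with the stated objectives?",
    ],
    "Navigation/Information Adequacy": [
        "Is navigation consistent?",
        "Are labels meaningful?",
        "Is the information sequenced properly?",
    ],
    "Visual Adequacy": [
        "Is screen space used effectively?",
        "Is the courseware free from typographical errors?",
    ],
    "Program Adequacy": [
        "Does the courseware run as intended?",
        "Does the courseware display information accurately?",
    ],
}

def score_review(answers: dict) -> dict:
    """Return the share of 'yes' answers per adequacy category."""
    return {cat: sum(vals) / len(vals) for cat, vals in answers.items() if vals}

# Example: a reviewer answering 'yes' to every question.
answers = {cat: [True] * len(qs) for cat, qs in CHECKLIST.items()}
print(score_review(answers))
```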

Usability Conducting tests

Usability What makes a problem severe?
– Frequency: How many users will encounter the problem?
– Impact: How much trouble does the problem cause the users who encounter it?
– Persistence: Is the problem a one-time impediment, or does it cause trouble repeatedly?
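
Frequency, impact, and persistence can be combined into a single severity rating so that problems can be ranked and the worst ones fixed first. The 1-5 scales and the simple averaging below are assumed conventions for illustration, not a scale defined in the slides.

```python
from dataclasses import dataclass

@dataclass
class UsabilityProblem:
    description: str
    frequency: int    # 1 (few users affected) .. 5 (most users affected)
    impact: int       # 1 (minor annoyance)    .. 5 (blocks the task)
    persistence: int  # 1 (one-time)           .. 5 (recurs every time)

    @property
    def severity(self) -> float:
        # Simple average of the three ratings; an assumed weighting.
        return (self.frequency + self.impact + self.persistence) / 3

problems = [
    UsabilityProblem("Menu label misunderstood", frequency=4, impact=2, persistence=3),
    UsabilityProblem("Quiz results not saved", frequency=2, impact=5, persistence=5),
]
# Rank problems from most to least severe.
for p in sorted(problems, key=lambda p: p.severity, reverse=True):
    print(f"{p.severity:.1f}  {p.description}")
```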

Usability Usability tests are structured interviews/meetings focused on specific features of an interface prototype. The heart of the interview/meeting is a series of tasks performed by an evaluator (a person who matches the product's ideal audience). Tapes and notes taken during the interview are later analyzed for the evaluator's successes, misunderstandings, mistakes, and opinions. After a number of these tests have been performed, the observations are compared and the most common issues are addressed.

Usability A solid usability testing program will include:
– iterative usability testing of every major feature
– tests scheduled throughout the development process
– reinforcing and deepening knowledge about user behavior, ensuring that designs become more effective as they develop.

Usability You should start preparing for a usability testing cycle at least three weeks before you expect to need the results.

Usability Common quantitative measurements include:
– Speed with which someone completes a task
– How many errors were made
– How often the user recovered from errors
– How many people complete the task successfully
– Scores on tests/evaluations
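
These measurements can be tallied from per-participant session records. A minimal sketch follows; the field names and sample numbers are invented for illustration and do not come from any actual study.

```python
from statistics import mean

# One record per participant per task; field names are illustrative.
sessions = [
    {"task": "enroll", "seconds": 95,  "errors": 1, "recovered": 1, "completed": True,  "quiz_score": 8},
    {"task": "enroll", "seconds": 140, "errors": 3, "recovered": 2, "completed": False, "quiz_score": 5},
    {"task": "enroll", "seconds": 80,  "errors": 0, "recovered": 0, "completed": True,  "quiz_score": 9},
]

print(f"Mean time on task : {mean(s['seconds'] for s in sessions):.0f} s")
print(f"Mean errors       : {mean(s['errors'] for s in sessions):.1f}")
print(f"Mean recoveries   : {mean(s['recovered'] for s in sessions):.1f}")
print(f"Completion rate   : {mean(s['completed'] for s in sessions):.0%}")
print(f"Mean quiz score   : {mean(s['quiz_score'] for s in sessions):.1f}")
```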

Usability Look at features that are:
– Used often
– New
– Highly publicized
– Considered troublesome, based on feedback from earlier versions
– Potentially dangerous, or have bad side effects if used incorrectly
– Considered important by users

Usability Evaluate your project:
– Identify the features to examine.
– For every feature, write at least one task.
– List the major tasks/concepts the learner/user must perform with your program.
– Give each user the tasks to perform with your program.
– For an eLearning program, the tasks should also relate to the program's content.
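
The feature-to-task planning step can be captured in a small table so that every feature under test has at least one concrete task. The features and tasks below are invented examples for an eLearning program; none of them are taken from the slides.

```python
# Hypothetical test plan: each feature to examine maps to one or more
# concrete tasks the learner/user will be asked to perform.
test_plan = {
    "Glossary lookup": ["Find the definition of 'formative evaluation'."],
    "Module quiz":     ["Complete the Module 2 quiz and view your score."],
    "Bookmarking":     ["Bookmark a page, exit the program, and return to it."],
}

# Sanity check: every feature has at least one task.
missing = [feature for feature, tasks in test_plan.items() if not tasks]
assert not missing, f"Features without tasks: {missing}"

for feature, tasks in test_plan.items():
    for task in tasks:
        print(f"[{feature}] {task}")
```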

Usability After the user performs the tasks, ask him/her to browse your program and give overall reactions: instructional, visual, informational/navigational, and program adequacy.

Conducting usability tests
1. Introduce the user to the testing site and the recording equipment.
2. Inform the user about the purpose of the testing.
3. Ask the user to perform each of your program's major tasks while thinking aloud.
4. Prompt the user to think aloud if he/she does not do so.
5. Ask the user to browse your program and provide overall reactions.
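
The five-step protocol above can be turned into a simple moderator script that records observer notes per step. A minimal command-line sketch, assuming interactive use; the prompts and output file name are illustrative choices, not part of the slides.

```python
import json

# Hypothetical moderator script for the five-step protocol above.
STEPS = [
    "Introduce the user to the testing site and the recording equipment",
    "Inform the user about the purpose of the testing",
    "Ask the user to perform each major task while thinking aloud",
    "Prompt the user to think aloud if he/she does not do so",
    "Ask the user to browse the program and provide overall reactions",
]

def run_session(participant: str) -> None:
    notes = {}
    for i, step in enumerate(STEPS, start=1):
        print(f"Step {i}: {step}")
        notes[step] = input("Observer notes (press Enter to continue): ")
    # Output file name is an illustrative choice.
    with open(f"session_{participant}.json", "w") as fh:
        json.dump(notes, fh, indent=2)

if __name__ == "__main__":
    run_session("learner-01")
```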

Conducting usability tests
1. Do not lead the user.
2. Ask open-ended questions.
3. Be silent.
4. Let the user try to figure it out.
5. Things you say and do will influence the user.
6. Compare "What colors do you think are a problem on screen?" versus "What do you like and dislike about the screen design?"

Evaluation Samples: Eye tracking, usability, our approaches…

Evaluation (Advantages)
– Rebuild the actions taken by users.
– Identify errors in training aids that might otherwise go unnoticed.
– Monitor users' usage, time on task, navigation of the landscape, options selected, and their reactions.
– Users are able to respond immediately as problems occur and to provide feedback about them.
– Ability to review and validate the observations of other reviewers.
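
Rebuilding a user's actions and measuring time on task presupposes that events are recorded as they happen; tools such as Morae capture this automatically. The minimal, hypothetical event logger below only illustrates the idea and is not how any particular tool works.

```python
import time

class SessionLog:
    """Hypothetical event log for reconstructing a learner's session."""

    def __init__(self, user_id: str):
        self.user_id = user_id
        self.events = []  # list of (timestamp, action, target) tuples

    def record(self, action: str, target: str) -> None:
        self.events.append((time.time(), action, target))

    def time_on_task(self) -> float:
        """Seconds between the first and last recorded event."""
        if len(self.events) < 2:
            return 0.0
        return self.events[-1][0] - self.events[0][0]

log = SessionLog("learner-01")
log.record("open", "Module 2")
log.record("click", "Practice quiz")
log.record("submit", "Quiz answers")
print(f"{len(log.events)} events, {log.time_on_task():.1f} s on task")
```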

Evaluation (Disadvantages) The amount of data obtained, while beneficial, requires a steep investment in time for analysis.