
1 Instructional Design JMA 503

2 Overview and Purpose Courseware evaluation DB connectivity Menu, etc. Morae

3 Models [diagram of the design cycle]: Start → Phase I Analysis → Phase II Design → Phase III Develop & Implement → Evaluate & Revise

4 Why evaluate? A primary function of evaluation is to determine the extent to which the expected outcomes have been realized. Courseware objectives are treated as precise statements of performance expectancy.

5 Evaluation Evaluation can be defined as a systematic procedure used to determine the extent to which program objectives have been attained.

6 Why evaluate? “…evaluation as an ongoing process is used to determine – whether program objectives have been met, and – to identify those portions of a lesson where modifications are required.” (Hannafin & Peck, 1988, p. 299)

7 Levels of evaluation Multiple levels of evaluation – Evaluation of learner performance – tells you whether the learner has learned something. – Evaluation of the instructional materials – formative evaluation. – Evaluation to determine effectiveness and to provide decision makers with information about adoption – summative evaluation.

8 Phases of formative evaluation Design reviews: output of each stage of the design/development process is reviewed & revised. Expert Reviews: an expert reviews the materials. One-to-one evaluation: developer tries out the materials on one or more individuals. Small group: conducted when the program is almost finished. Use more formal techniques.

9 Phases of formative evaluation Dick & Carey, three stages of formative evaluation: – One-to-one – Small group evaluation – Field testing

10 One-to-One Evaluation Courseware Evaluation

11 One-to-one Determine and rectify major problems. Conducted extensively during initial lesson development. Informal. Attempts to answer: – Do learners understand the courseware? – Do learners know what to do during practice and test? – Can learners interpret graphics in the text? – Can learners read all the textual materials?

12 One-to-one One-to-one evaluation can yield valuable information about a lesson before energy is expended unnecessarily on developing it in full.

13 One-to-one Research indicates that teachers/trainers are NOT the best sources of information for predicting whether materials will be effective. Learners/users are the best source of information.

14 One-to-one Materials that have been tried out with only a few representative learners and then revised based on the information gained are substantially more effective than the original instruction. Testing with five users: http://www.useit.com/alertbox/20000319.html
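The five-user claim in the linked article rests on a diminishing-returns model of problem discovery. A minimal sketch of that calculation, assuming the article's typical single-user discovery rate of about 31% (the function name here is ours, not from the course materials):

    def problems_found(n_users, single_user_rate=0.31):
        # Expected share of usability problems uncovered by n users,
        # using the 1 - (1 - L)^n model from the linked article
        # (L = share of problems a single test user reveals).
        return 1 - (1 - single_user_rate) ** n_users

    for n in (1, 3, 5, 15):
        print(f"{n:2d} users -> {problems_found(n):.0%} of problems found")
    # With 5 users the model already predicts roughly 85% of problems found.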

15 Small Group Evaluation Courseware Evaluation

16 Small group Often conducted when courseware is nearing completion. Helps to determine: – Lesson effectiveness – Acceptability of the lesson – Appropriateness of materials These are often more formal forms of evaluation.

17 Field Test Courseware Evaluation

18 Field test Conducted in the actual setting in which the courseware will be implemented. Field tests are conducted when courseware is at final-draft quality.

19 Evaluation and CBT Computer software is tested by programmers or developers, often referred to as alpha testing. Software is also given to users (target audience) to use and report problems – beta testing.

20 Levels of Evaluation Courseware Evaluation

21 Levels of evaluation Instructional Adequacy Navigation/information Adequacy Visual Adequacy Program Adequacy

22 Instructional Adequacy Are the directions for the courseware clearly stated? Are goals and objectives stated? Is the courseware consistent with the outcomes specified in the objectives? Is the organization of topics easy to follow? Is the courseware free from vague and ambiguous text? Is the basic design sensible? Does the courseware provide opportunities for meaningful interaction between the learner and lesson content? Does the courseware personalize instruction? Will the courseware motivate learners or attract their interest? Are record-keeping capabilities available in the courseware?

23 Navigation & Information Adequacy Do learners know how to get around? Do they get lost? Is navigation consistent? Are labels meaningful? Does navigation answer: Where am I? What can I do? What is here? Is navigation hidden? Are links/buttons explicitly described? Are items grouped to reflect user expectations? Is there a hierarchy to the information structure? Is the information sequenced properly?

24 Visual Adequacy Is the screen space used effectively (e.g., is there too much text)? Is the information presented free of crowding and cramming? Is the courseware free from typographical errors? Do colors add to the quality of the courseware? Do graphics add to the quality of the courseware? Does animation or video support learning?

25 Program Adequacy Does the courseware run as intended? Is the courseware free from conceptual or programming loops (e.g., getting caught in a section with no way to navigate anywhere else)? Does the courseware minimize the disk-management requirements for the learner (e.g., how easy is it for the learner to run/use the courseware)? Does the courseware run efficiently? Does the courseware display information accurately?

26 Usability Conducting test

27 Usability What makes a problem severe? Frequency: How many users will encounter the problem? Impact: How much trouble does the problem cause to those users who encounter it? Persistence: Is the problem a one-time impediment to users or does it cause trouble repeatedly?
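The slide does not prescribe a scoring scheme, but one common way to turn these three factors into a single severity rating is to rate each on a small scale and combine them. A hypothetical sketch, with made-up problems and a simple additive score:

    from dataclasses import dataclass

    @dataclass
    class UsabilityProblem:
        description: str
        frequency: int    # 1 (few users hit it) .. 3 (most users hit it)
        impact: int       # 1 (minor annoyance) .. 3 (blocks the task)
        persistence: int  # 1 (one-time) .. 3 (recurs every time)

        def severity(self) -> int:
            # Hypothetical combination: simple sum, range 3..9; higher = fix first.
            return self.frequency + self.impact + self.persistence

    problems = [
        UsabilityProblem("Menu label misleading", frequency=3, impact=2, persistence=1),
        UsabilityProblem("Quiz score not saved", frequency=1, impact=3, persistence=3),
    ]
    for p in sorted(problems, key=lambda p: p.severity(), reverse=True):
        print(p.severity(), p.description)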

28 Usability Usability tests are structured interviews/meetings focused on specific features of an interface prototype. The heart of the interview/meeting is a series of tasks performed by an evaluator (a person who matches the product's ideal audience). Tapes and notes taken by the interviewer are later analyzed for the evaluator's successes, misunderstandings, mistakes, and opinions. After a number of these tests have been performed, the observations are compared and the most common issues are addressed.

29 Usability A solid usability testing program will include: iterative usability testing of every major feature; tests scheduled throughout the development process; reinforcing and deepening knowledge about user behavior and ensuring that designs become more effective as they develop.

30 Usability You should start preparing for a usability testing cycle at least three weeks before you expect to need the results.

31 Usability Common quantitative measurements include: Speed with which someone completes a task How many errors were made How often users recovered from errors How many people complete the task successfully Scores on a test/evaluation
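A minimal sketch of how those measurements might be tallied from logged test sessions (the record fields and data below are hypothetical, not part of the course materials):

    # Hypothetical session log: one record per user per task.
    sessions = [
        {"user": "A", "task": "find quiz", "seconds": 42, "errors": 1, "completed": True},
        {"user": "B", "task": "find quiz", "seconds": 95, "errors": 3, "completed": False},
        {"user": "C", "task": "find quiz", "seconds": 51, "errors": 0, "completed": True},
    ]

    completed = [s for s in sessions if s["completed"]]
    completion_rate = len(completed) / len(sessions)                   # how many finish the task
    mean_time = sum(s["seconds"] for s in completed) / len(completed)  # speed of completion
    total_errors = sum(s["errors"] for s in sessions)                  # how many errors were made

    print(f"Completion rate: {completion_rate:.0%}")
    print(f"Mean time (completed runs): {mean_time:.0f} s")
    print(f"Errors observed: {total_errors}")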

32 Usability Look at features that are: used often; new; highly publicized; considered troublesome, based on feedback from earlier versions; potentially dangerous or prone to bad side effects if used incorrectly; considered important by users.

33 Usability Evaluate your project Identify features to examine. For every feature, write at least one task. List the major tasks/concepts the learner/user must perform with your program. Give each user the tasks to perform with your program. – For an eLearning program, tasks should also relate to the program's content.

34 Usability After the user performs the tasks, ask him/her to browse your program and give you his/her overall reactions to it: instructional, visual, informational/navigational, and program adequacy.

35 Conducting usability tests 1. Introduce the user to the testing site and the recording equipment 2. Inform the user about the purpose of testing 3. Ask the user to perform each of your program's major tasks while thinking aloud 4. Prompt the user to think aloud if he/she does not do so 5. Ask the user to browse your program and provide overall reactions

36 Conducting usability tests 1. Do not lead the user 2. Ask open-ended questions 3. Be silent 4. Let the user try to figure it out 5. Remember that things you say and do will influence the user 6. Ask "What do you like and dislike about the screen design?" rather than "What colors do you think are a problem on screen?"

37 Evaluation Samples Eye tracking Usability Our approaches…

38 Evaluation (Advantages) Reconstruct the actions taken by users. Identify errors in training aids that may otherwise go unnoticed. Monitor users' use, time on task, navigation of the landscape, options selected, and their reactions. Users are able to respond immediately as problems occur and to provide feedback about them. Ability to review and validate the observations of other reviewers.

39 Evaluation (Disadvantages) The amount of data obtained, while beneficial, requires a steep investment in time for analysis.

