
Slide 1: L643: Evaluation of Information Systems, Week 5: February 4, 2008

Slide 2: Team Formation for Group Assignments
- Teams will be announced next week
- Teammate evaluation will be conducted at the end of the semester

Slide 3: Individual Assignment: Measurement Memo
- A summary of the selected category
- A description of an information system and its objectives (select a different IS for each memo)
- The purpose of the evaluation (measurement)
- What to measure; how to measure
- Limitations of the proposed measurement
- References (APA style; see the Resources links on the course website's Assignment page)

Slide 4: Individual Assignment: Measurement Memo
- Write no more than one page (excluding the references)
- Use the MS Word default margins, i.e., 1 inch top/bottom and 1.25 inches on both sides
- Use Times New Roman no smaller than 10 points

Slide 5: Individual Assignment: Measurement Memo
- Memos are due at the beginning of class
- Assignment samples are available
- Pick two weeks to write two measurement memos

Slide 6: Evaluating Sources
- Be careful with wording
- Example: "The inequality of available resources between the research and teaching universities is an important issue for the center."

Slide 7: Evaluating Sources

Slide 8: (no text content)

Slide 9: (no text content)

Slide 10: Writing
- The art of revising
- Take advantage of the Writing Tutorial Services: http://www.indiana.edu/~wts/
- Do me a favor: have someone read your writing
- At the very least, write it the night before the due date, read it the next day, and revise it

Slide 11: Determining Importance (Davidson, 2005)
- Dimensional evaluation: e.g., DeLone & McLean's IS Effectiveness Model
- Component evaluation: e.g., policies, programs (e.g., a teen program in a library)
- Holistic evaluation: e.g., personnel, a product, a service

Slide 12: Reporting of Evaluation
I. Executive Summary
II. Preface
III. Methodology
1. Background & Context
2. Descriptions & Definitions
3. Consumers
4. Resources
5. Values
6. Process Evaluation
7. Outcome Evaluation
8 & 9. Comparative Cost-Effectiveness
10. Exportability
11. Overall Significance
12. Recommendations & Explanations
13. Responsibilities
14. Reporting & Follow-up
15. Meta-evaluation

Slide 13: Exercise: Survey Instrument
- Read the survey instrument for "Executive Involvement and Participation in the Management of Information Technology"
- How many criteria do they use to evaluate executive involvement?
- What are the strengths and weaknesses of the survey instrument?
- How would you improve or modify the survey instrument?

Slide 14: The Basics of Research

Slide 15: Hypothesis Testing (flowchart)
1. Come up with a research question
2. State the null hypothesis
3. Determine the probability
4. Retain or reject the null hypothesis

Slide 16: The Normal Curve Divided into Different Sections (Salkind, 2007)
[figure: normal curve with raw scores 70-130 on the x-axis, mean = 100, standard deviation = 10, marked from -3 to +3 standard deviations]

Slide 17: Distribution of Cases Under the Normal Curve (Salkind, 2007)
[figure: same curve; the areas between successive standard deviations are 34.13%, 13.59%, 2.15%, and 0.13% on each side of the mean]

Slide 18: Distribution of Cases Under the Normal Curve (Salkind, 2007)
[figure: as above, highlighting that about 95.44% of cases fall within +/- 2 standard deviations of the mean (a quick numerical check follows below)]
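
The percentages quoted on slides 17 and 18 can be reproduced from the normal CDF. A minimal sketch using SciPy, with the mean of 100 and standard deviation of 10 taken from the slides' raw-score axis:

```python
from scipy.stats import norm

# Scale from the slides: mean = 100, standard deviation = 10
dist = norm(loc=100, scale=10)

# Area between the mean and +1 SD (about 34.13%)
print(dist.cdf(110) - dist.cdf(100))

# Area between +1 SD and +2 SD (about 13.59%)
print(dist.cdf(120) - dist.cdf(110))

# Area between +2 SD and +3 SD (about 2.15%)
print(dist.cdf(130) - dist.cdf(120))

# Area within +/- 2 SD of the mean (about 95.44%)
print(dist.cdf(120) - dist.cdf(80))
```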

Slide 19: Significant Differences (Salkind, 2007)
- The research hypothesis specifies the predicted outcome of a study
- The null hypothesis most commonly used specifies that there is no relationship in the population
- E.g., there is no difference between the population mean of users who used a unix-based system to process students' grades and the population mean of users who used a web-based system (i.e., the difference between the two population means is zero)

Slide 20: Significant Differences (Salkind, 2007)
- The researcher then proceeds to test the null hypothesis
- The basic assumption is that the sampling distribution is normal
- Instead of using the obtained sample value (the sample mean) as the mean of the sampling distribution, we use zero (i.e., a z score)

Slide 21: Significant Differences (Salkind, 2007)
- Determine the probability of getting a particular sample value (e.g., an obtained difference between sample means) by seeing where that value falls on the sampling distribution (see the sketch below)
- If the probability is small, the null hypothesis is rejected, providing support for the research hypothesis
- The results are then said to be statistically significant
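
A hedged sketch of the logic on slides 20 and 21, not the course's own data: place an obtained difference between sample means on a sampling distribution centred at zero, convert it to a z score, and read off the probability. The difference and standard error below are invented for illustration.

```python
from scipy.stats import norm

# Hypothetical numbers for illustration only
obtained_diff = 2.4    # difference between the two sample means
standard_error = 1.0   # standard error of the difference (assumed known)

# Under the null hypothesis the sampling distribution is centred at 0
z = (obtained_diff - 0) / standard_error

# Two-tailed probability of a difference at least this extreme
p = 2 * norm.sf(abs(z))

print(f"z = {z:.2f}, p = {p:.4f}")
if p <= 0.05:  # "small", in the conventional sense introduced on slide 23
    print("Reject the null hypothesis (statistically significant)")
else:
    print("Retain the null hypothesis")
```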

Slide 22: Different Types of Errors (Salkind, 2007)
- Null hypothesis really true, null accepted: correct decision (accepted the null when there is no difference between groups)
- Null hypothesis really true, null rejected: Type I error (rejected the null when there is no difference between groups)
- Null hypothesis really false, null accepted: Type II error (accepted a false null hypothesis)
- Null hypothesis really false, null rejected: correct decision (rejected the null when there are differences)

Slide 23: Significant Differences (Salkind, 2007)
- What counts as "small"? What constitutes an unlikely outcome?
- It is customary in social science research to treat as unlikely any outcome that has a probability of .05 (p = .05) or less
- This is referred to as the .05 level of significance
- When we reject a null hypothesis at the .05 level, we are saying that the probability of obtaining such an outcome is only 5 in 100 (i.e., 5%) or less (a small simulation below illustrates this)
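
One way to see what the .05 level, and a Type I error, means in practice is a small simulation: draw many pairs of samples from the same population, so the null hypothesis is true by construction, and count how often a t-test rejects it anyway. This sketch is not part of the original slides; the sample sizes and number of trials are arbitrary.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(643)
trials, rejections = 5000, 0

for _ in range(trials):
    # Both samples come from the same population, so the null is true
    a = rng.normal(loc=100, scale=10, size=30)
    b = rng.normal(loc=100, scale=10, size=30)
    result = ttest_ind(a, b)
    if result.pvalue <= 0.05:
        rejections += 1  # a Type I error: rejecting a true null

# The rejection rate should hover around 0.05, i.e., about 5 in 100 tests
print(rejections / trials)
```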

Slide 24: Different Types of Stats (Salkind, 2004)
- See the chart in Figure 8.1, p. 186

Slide 25: Practicality of the Significance (Salkind, 2007)
- Look at the raw scores: web interface (75.6) vs. unix-based interface (75.7)
- Sample size = 10,000
- A t-test shows the difference between the two means is statistically significant at the .01 level (reproduced in the sketch below)
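
The slide's point can be reproduced from summary statistics: with very large samples, even a 0.1-point difference in means comes out statistically significant, while an effect-size measure such as Cohen's d shows it is practically negligible. The standard deviation of 2.0 is assumed, and the slide's single "10,000" is treated here as 10,000 per group; neither is stated on the slide.

```python
from scipy.stats import ttest_ind_from_stats

# Means come from the slide; the SD and per-group n are assumptions
res = ttest_ind_from_stats(mean1=75.6, std1=2.0, nobs1=10_000,
                           mean2=75.7, std2=2.0, nobs2=10_000)

cohens_d = (75.7 - 75.6) / 2.0  # standardized effect size

print(f"t = {res.statistic:.2f}, p = {res.pvalue:.6f}")  # significant at .01
print(f"Cohen's d = {cohens_d:.2f}")  # 0.05: practically negligible
```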

Slide 26: Linear Regression: Y = f(X)
[figure: scatter plot of crime rate (Y) against population (X), both axes running 0-4; a fitted example follows]
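
A minimal sketch of fitting the line implied by the slide's scatter plot, using scipy.stats.linregress; the (population, crime rate) data points are invented for illustration.

```python
from scipy.stats import linregress

# Hypothetical (population, crime rate) pairs, roughly linear
population = [1.0, 2.0, 3.0, 4.0]
crime_rate = [1.2, 1.9, 3.1, 3.8]

fit = linregress(population, crime_rate)
print(f"crime_rate = {fit.slope:.2f} * population + {fit.intercept:.2f}")
print(f"R^2 = {fit.rvalue ** 2:.3f}")
```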

Slide 27: Path Analysis
[figure: path diagram in the style of DeLone & McLean's IS success model, linking system quality, information quality, use, user satisfaction, individual impact, and organizational impact within the InfoSys, user environment, and organizational environment; reported path coefficients include 0.37, 0.47, 0.37, 0.36, and 0.43; a simplified sketch follows]
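
For a recursive path model, path coefficients can be estimated as standardized regression coefficients of each endogenous variable on its assumed direct causes. The toy model and simulated data below are illustrative only and are not DeLone & McLean's model or estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

def z(v):
    """Standardize a variable to mean 0, SD 1."""
    return (v - v.mean()) / v.std()

# Simulated variables for a toy recursive model (hypothetical names)
system_quality = rng.normal(size=n)
info_quality = rng.normal(size=n)
use = 0.4 * system_quality + 0.3 * info_quality + rng.normal(scale=0.8, size=n)
satisfaction = 0.5 * use + rng.normal(scale=0.8, size=n)

# Path coefficients = standardized regression coefficients of each
# endogenous variable on its assumed direct causes
X = np.column_stack([z(system_quality), z(info_quality)])
paths_into_use, *_ = np.linalg.lstsq(X, z(use), rcond=None)
print("system/info quality -> use:", paths_into_use.round(2))

path_use_sat, *_ = np.linalg.lstsq(z(use).reshape(-1, 1), z(satisfaction),
                                   rcond=None)
print("use -> satisfaction:", path_use_sat.round(2))
```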

Slide 28: Factor Analysis
- Identifies the general dimensions represented by a collection of variables (see the sketch below)
- E.g., DeLone & McLean (1992), Table 7
- Cf. Babbie, p. 475
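
A hedged sketch of the idea: several observed items that share underlying dimensions are reduced to a small number of factors. The data are simulated survey-style items, not DeLone & McLean's table, and scikit-learn's FactorAnalysis stands in for whatever software the reading uses.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
n = 300

# Two latent dimensions drive six observed survey-style items
ease, usefulness = rng.normal(size=(2, n))
items = np.column_stack([
    ease + rng.normal(scale=0.5, size=n),
    ease + rng.normal(scale=0.5, size=n),
    ease + rng.normal(scale=0.5, size=n),
    usefulness + rng.normal(scale=0.5, size=n),
    usefulness + rng.normal(scale=0.5, size=n),
    usefulness + rng.normal(scale=0.5, size=n),
])

fa = FactorAnalysis(n_components=2, random_state=0).fit(items)
print(fa.components_.round(2))  # loadings: each row is one factor
```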

Slide 29: One-Way Analysis of Variance (ANOVA)
- Determines the extent to which the groups differ from one another on a dependent variable (see the sketch below)
- E.g., Figure 16-9 (Babbie, p. 477)
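
A minimal sketch of a one-way ANOVA with scipy.stats.f_oneway; the three groups and their scores are invented, standing in for the kind of group comparison shown in Babbie's Figure 16-9.

```python
from scipy.stats import f_oneway

# Hypothetical satisfaction scores for users of three interfaces
group_a = [72, 75, 78, 74, 71]
group_b = [80, 83, 79, 85, 82]
group_c = [68, 70, 66, 71, 69]

res = f_oneway(group_a, group_b, group_c)
print(f"F = {res.statistic:.2f}, p = {res.pvalue:.4f}")
if res.pvalue <= 0.05:
    print("At least one group mean differs significantly from the others")
```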

