Grade Distribution and Its Impact on CIS Faculty Evaluations 1992-2002 David McDonald, Ph.D. Roy D. Johnson, Ph.D.


Grade Distribution and Its Impact on CIS Faculty Evaluations David McDonald, Ph.D. Roy D. Johnson, Ph.D. Georgia State University, Computer Information Systems Dept., Atlanta, GA

Introduction
Over half of Harvard undergraduates receive "A"s; 91% graduate with honors. Compared with Yale (51%) and Princeton (44%), one might conclude grade inflation exists at one of the nation's top institutions (Healy 2001).

Review of the Literature
Most research to date is broad in scope and conducted within the education discipline.
There is ample evidence that grade inflation exists in higher education (Nagle 1998; Carey 1993; Young 1993; Crumbley & Fliedner 1995).
Students also manipulate faculty behavior through performance reviews (Stone 1995; Damron 1996).
Student evaluations shift the focus from a consumerist approach of measuring and improving a course to a mercantilist approach of faculty "pleasing the customer" (Renner 1981; Goldman 1993; Bonetti 1994).

Problem
Scarcity of longitudinal studies on the effects of grade inflation within the IS discipline.
An "A" should indicate superior performance within a peer group, not relative to the general population of students in the U.S.
Faculty face increasing pressure to give higher grades:
Many faculty equate higher grades with better student evaluations.
Parents want perceived "value" for their investment.
College administrators use GPA and student evaluations as measures of quality across departments.

Research Questions
Has grade inflation occurred within the degree programs of a CIS department at a large, southeastern university over the past ten years?
What is the impact of the grade a student expects to receive on that student's performance evaluation of faculty?

Hypotheses
H1: There has been a steady increase in the grades given to students in the CIS Department by full-time faculty over the past decade.
H1a: The percentage of "A"s given has steadily increased.
H1b: The percentage of "B"s given has steadily increased.
H1c: The percentage of "C"s given has steadily decreased.
H1d: The percentage of "D"s given has steadily decreased.
H1e: The percentage of "F"s given has steadily decreased.

Hypotheses (cont'd)
H2: Students will give higher evaluations to full-time faculty if they expect a high grade.
H2a: Students expecting an "A" will give faculty higher evaluations.
H2b: Students expecting a "B" will give faculty higher evaluations.
H2c: Students expecting a "C" will give faculty lower evaluations.
H2d: Students expecting a "D" will give faculty lower evaluations.
H2e: Students expecting an "F" will give faculty lower evaluations.

Research Design
Ten-year period (Fall 1992 to Fall 2002).
Only permanent, faculty-of-rank instructors included.
Criteria-based testing methodology courses were eliminated from the dataset.
Percentage of grades within each course was used rather than raw numbers.
Usable data consisted of 36,147 grades assigned by faculty-of-rank in 1,382 courses (the original dataset included 58,315 grades assigned in 1,931 courses).
Student Evaluation of Instructor Performance (SEIP) standardized form used to measure faculty performance.
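The design analyzes the percentage of each letter grade within a course rather than raw counts, which normalizes for course size. A minimal sketch of that normalization step (illustrative only; this is not the authors' code, and the example course is invented):

```python
from collections import Counter

def grade_percentages(grades):
    """Return the share of each letter grade in one course,
    normalizing raw counts to percentages of the course total."""
    counts = Counter(grades)
    total = len(grades)
    return {letter: 100.0 * counts.get(letter, 0) / total
            for letter in "ABCDF"}

# A hypothetical 10-student course
print(grade_percentages(list("AAABBBBCCD")))
# → {'A': 30.0, 'B': 40.0, 'C': 20.0, 'D': 10.0, 'F': 0.0}
```

Using percentages this way lets a 15-student seminar and a 200-student lecture contribute comparably to the trend analysis.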

Results – Grade Inflation
Significant increase in the percentage of "A"s, with a concurrent decrease in "C"s and "F"s (p < .001).
When the analysis is broken down into undergraduate (20,708 grades in 780 courses) vs. graduate (15,295 grades in 593 courses), the results indicate the problem arises primarily in the undergraduate program.
However, the percentage of "C"s given to graduate students has also decreased over the decade.
Support for hypotheses H1a, H1c, and H1e.
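The H1 trend tests regress each grade's per-course percentage on time; the core computation is an ordinary least-squares slope. A self-contained sketch (the semester series and percentages below are invented for illustration, not the study's data):

```python
def ols_slope(x, y):
    """Least-squares slope of y on x:
    sum((x - mean_x)*(y - mean_y)) / sum((x - mean_x)**2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

# Hypothetical semester index vs. percent "A"s (illustrative values only)
semesters = list(range(10))
pct_a = [30, 31, 33, 32, 35, 36, 38, 37, 40, 41]
print(ols_slope(semesters, pct_a))  # a positive slope is consistent with H1a
```

In the study itself this slope corresponds to the standardized beta coefficients reported in Tables 1 through 3, with significance assessed by an F test.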

Results – The effect of students' expected grades on their evaluations of faculty
Giving "A"s did not significantly increase overall positive evaluations of faculty.
Giving "A"s to undergraduates marginally affected undergraduate evaluations of faculty (p ≤ .05).
Giving "C"s to undergraduate students significantly affected faculty evaluations (p < .001).
Giving "D"s to undergraduates significantly affected faculty evaluations (p ≤ .01).
Fairly good support for hypotheses H2c and H2d.
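A simple way to see the H2c pattern is the correlation between the share of students expecting a "C" and the course's mean evaluation score. The sketch below uses invented numbers purely to illustrate the direction of the relationship; they are not the study's data:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

# Hypothetical: percent of students expecting a "C" vs. mean SEIP score per course
pct_expect_c = [5, 10, 20, 30, 40]
mean_seip = [4.6, 4.4, 4.1, 3.8, 3.5]
print(pearson_r(pct_expect_c, mean_seip))  # strongly negative, mirroring H2c
```

The study's actual analysis is a regression (standardized betas in Tables 4 through 6), but the sign of the simple correlation conveys the same directional story.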

Limitations
Use of secondary data limits the strength of the relationships found; i.e., no data on possible mediating variables or covariates (e.g., graduate students are older and more mature, with better work experience).
Variation in the testing and grading philosophies of faculty; e.g., some faculty may use the same exams even though the quality of the CIS major has increased over the decade. Similarly, exam quality and validity differ from faculty member to faculty member.
Graduate students are placed on academic probation if they earn a "C"; faculty may therefore be reluctant to give lower grades to graduate students. It is difficult to show significance when "A"s and "B"s are the primary grades distributed.
Generalizability of the results may not hold.
Students' expected grades were used as the independent variable for H2. Pairing each student's actual grade with his or her evaluation would add more credence to the results; however, since the SEIPs were administered anonymously, this was not possible.
It is not known whether SEIP evaluations had an impact on how faculty graded future courses (further study needed).

Conclusions
Undergraduates experience the greatest levels of grade inflation.
In part, grade inflation exists because of pressures on faculty to achieve high teaching evaluations. A de-coupling of grading from the existing student evaluation methodology is needed, yet there is little support from college and university administrators to change the status quo. Faculty committees need to place a higher priority on adequate teaching evaluation of faculty with college and university administration.
Undergraduates expecting a "C" will negatively evaluate a faculty member's performance, possibly because faculty treat undergraduate students differently than their graduate counterparts and are willing to give "C"s. Less maturity in age and job experience among undergraduate students may result in blaming the faculty member rather than accepting responsibility.
Faculty should provide students with a more realistic expectation and explanation of their grading system.

Future Research
Other relationships in the existing dataset, e.g., the SEIP items "Well-prepared?", "Explains clearly?", "Knows if the class is understanding him/her or not", "Motivates me to do my best work", and "Considers the course worthwhile".
Popularity of the class (using overflow requests).

Results of Data Analysis

TABLES

Table 1. Grade Distribution: Graduate and Undergraduate Programs (1992-2002)
Columns: Dependent Variable | Standardized Beta Coefficient | Adjusted R² | F | Significance
(numeric cell values were not preserved in this transcript)
Percent "A"s *
Percent "B"s
Percent "C"s *
Percent "D"s
Percent "F"s *
* indicates p-value < .001

Table 2. Grade Distribution: Undergraduate Program (1992-2002)
Columns: Dependent Variable | Standardized Beta Coefficient | Adjusted R² | F | Significance
(numeric cell values were not preserved in this transcript)
Percent "A"s *
Percent "B"s
Percent "C"s *
Percent "D"s
Percent "F"s *
* indicates p-value < .001

Table 3. Grade Distribution: Graduate Program (1992-2002)
Columns: Dependent Variable | Standardized Beta Coefficient | Adjusted R² | F | Significance
(numeric cell values were not preserved in this transcript)
Percent "A"s
Percent "B"s
Percent "C"s **
Percent "D"s
Percent "F"s
** indicates p-value < .05

Table 4. The Effect of the Expected Grade Students Would Receive on Their Perception of Faculty Effectiveness: Graduate and Undergraduate Programs (1992-2002)
Columns: Independent Variable | Standardized Beta Coefficient | Adjusted R² | F | Significance
(numeric cell values were not preserved in this transcript)
Number of "A"s **
Number of "B"s
Number of "C"s *
Number of "D"s **
Number of "F"s **
* indicates p-value < .001
** indicates p-value < .05

Table 5. The Effect of the Expected Grade Students Would Receive on Their Perception of Faculty Effectiveness: Undergraduate Program (1992-2002)
Columns: Independent Variable | Standardized Beta Coefficient | Adjusted R² | F | Significance
(numeric cell values were not preserved in this transcript)
Number of "A"s
Number of "B"s
Number of "C"s *
Number of "D"s **
Number of "F"s
* indicates p-value < .001
** indicates p-value < .05

Table 6. The Effect of the Expected Grade Students Would Receive on Their Perception of Faculty Effectiveness: Graduate Program (1992-2002)
Columns: Independent Variable | Standardized Beta Coefficient | Adjusted R² | F | Significance
(numeric cell values were not preserved in this transcript)
Number of "A"s
Number of "B"s
Number of "C"s
Number of "D"s
Number of "F"s

Student Evaluation of Instructor Performance (SEIP) form