Final Presentations Class 6

Agenda Today
- APA format
- DePaul University IRB
- Reliability of Likert data
- SEM & confidence limits/intervals
- A past study as an example
- Your presentations

APA Format
- Remember the title page and page numbers.
- Headers: chapter title = Level 1; others = Level 2 (flush left).
- Running head: 50 characters or fewer, placed in the header, flush left.
- Research question goes after the purpose statement and before the need for the study (consider a header).
- Commas: use the serial comma, e.g., "...apples, oranges, and grapes."
- Citation goes before the period and after the closing quotation mark. No page number is needed unless you are citing a direct quote.
- Two spaces between sentences; one space between reference elements.
- Spacing: double space, paragraph spacing set to 0, no additional space added.

DePaul University IRB (orp@depaul.edu; https://offices.depaul)
- Complete the necessary training.
- Developing your research: Jackie as faculty sponsor.
- Review the Levels of Review page to determine which review level applies to your research: exempt, expedited, or full board.
- If you believe your research is non-reviewable (i.e., does not involve research or human subjects), the IRB requires that you submit an email summarizing your planned project so the IRB can make an official non-reviewable determination.
- Send a hard copy or scanned PDF of the signed IRB application form in addition to the electronic version.
- Send all consent documents, information sheets, and recruitment materials in Word, so that the IRB can edit them when necessary.

Reliability of Survey
- What broad single dimension is being studied? (e.g., attitudes toward elementary music; preference for Western art music)
- Internal consistency means responses pattern together: "People who answered (a) on #3 answered (c) on #5."
- Use Cronbach's alpha, a measure of internal consistency: the extent to which responses on individual items correspond to each other.
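
Cronbach's alpha can be computed directly from a respondents-by-items matrix using the standard formula, alpha = k/(k-1) x (1 - sum of item variances / variance of total scores). Below is a minimal Python sketch; the sample responses are invented purely for illustration:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents-by-items matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of survey items
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 respondents x 4 Likert items (scored 1-5)
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```

Values at or above roughly .80 suggest the items are tapping the same underlying dimension.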

Confidence Limits/Interval (http://graphpad.com/quickcalcs/CImean1.cfm)
- Attempts to define the range of the true population mean based on the standard error estimate.
- Confidence level: 95% chance vs. 99% chance.
- Confidence limits: the two numbers that define the top and bottom of the range.
- Confidence interval (margin of error): expressed in units or %; equivalently, the range between the confidence limits.
- The confidence interval (also called margin of error) is the plus-or-minus figure usually reported in newspaper or television opinion poll results. For example, with a margin of error of 4 points and 47% of your sample picking an answer, you can be "sure" that between 43% (47 - 4) and 51% (47 + 4) of the entire relevant population would have picked that answer.
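
The poll example can be checked with the standard large-sample formula for a proportion's margin of error, z x sqrt(p(1-p)/n). That formula is not stated on the slide, and the sample size below is assumed purely for illustration:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Large-sample margin of error for a proportion (z = 1.96 for 95% confidence)."""
    return z * math.sqrt(p * (1 - p) / n)

# 47% of a hypothetical sample of 600 respondents pick an answer
p, n = 0.47, 600
me = margin_of_error(p, n)
print(f"95% CI: {100 * (p - me):.1f}% to {100 * (p + me):.1f}% "
      f"(margin of error = {100 * me:.1f} points)")
# -> 95% CI: 43.0% to 51.0% (margin of error = 4.0 points)
```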

Standard Error of the Mean
- Estimate of the average SD for any number of same-size samples taken from the population.
- SEM = SD / square root of N
- Example: I tested 30 students on music theory (test scored 0-100); Mean = 75, SD = 10.
  - Square root of N = 5.477
  - SEM = 10 / 5.477 = 1.826
- 95% of the area under a normal curve lies within roughly 1.96 SD units above or below the mean (rounded to +/- 2).
- 95% CI = M +/- (SEM x 1.96); here SEM x 1.96 = 3.579, so 95% CI = 71.421 to 78.579.
- 99% CI = M +/- (SEM x 2.576); here SEM x 2.576 = 4.703, so 99% CI = 70.297 to 79.703.
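
The worked example can be reproduced in a few lines of Python (variable names are illustrative):

```python
import math

def confidence_interval(mean, sd, n, z):
    """CI for a mean: M +/- z * SEM, where SEM = SD / sqrt(N)."""
    sem = sd / math.sqrt(n)
    return mean - z * sem, mean + z * sem

# Slide example: N = 30 students, Mean = 75, SD = 10
print(f"SEM = {10 / math.sqrt(30):.3f}")    # 1.826
lo, hi = confidence_interval(75, 10, 30, z=1.96)
print(f"95% CI = {lo:.2f} to {hi:.2f}")     # 71.42 to 78.58
lo, hi = confidence_interval(75, 10, 30, z=2.576)
print(f"99% CI = {lo:.2f} to {hi:.2f}")     # 70.30 to 79.70
```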

Normal Curve/Distribution (+/- 1 SD = 68.27%, +/- 2 SD = 95.45%, +/- 3 SD = 99.73%; exactly 95% falls within +/- 1.96 SD)
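
These areas can be verified against the standard normal distribution, for example with SciPy:

```python
from scipy.stats import norm

for k in (1, 2, 3):
    area = norm.cdf(k) - norm.cdf(-k)       # area within +/- k SD of the mean
    print(f"+/-{k} SD: {100 * area:.2f}%")  # 68.27%, 95.45%, 99.73%
print(f"+/-1.96 SD: {100 * (norm.cdf(1.96) - norm.cdf(-1.96)):.2f}%")  # 95.00%
```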

Confidence
- Calculate at a 95% confidence level (by hand) for the ACT Explore scores:
  - Confidence limits
  - Confidence interval
- How closely do your scores represent the true population mean?

Development and Validation of a Music Self-Concept Inventory for College Students
Phillip M. Hash, Calvin College, Grand Rapids, Michigan (pmh3@calvin.edu)

Abstract
The purpose of this study was to develop a music self-concept inventory (MSCI) for college students that is easy to administer and reflects the global nature of this construct. Students (N = 237) at a private college in the Midwest United States completed the initial survey, which contained 15 items rated on a five-point Likert scale. Three subscales determined by previous research included (a) support or recognition from others, (b) personal interest or desire, and (c) self-perception of music ability. Factor analysis indicated that two items did not fit the model and were therefore deleted. The final version of the MSCI contains 13 items and demonstrates strong internal consistency for the total scale (α = .94) and subscales (α = .83 to .92). A second factor analysis supported the model and explained 63.6% of the variance. Validity was demonstrated through correlations between the MSCI and another measure of music self-perception (r = .94), between MSCI scores and years of participation in music activities (r = .64), and among the factors themselves (interfactor r = .71 to .75). This instrument will be useful to researchers examining music self-concept and to college instructors who want to measure this construct among their students.

Definitions
- Self-concept: "Beliefs, hypotheses, and assumptions that the individual has about himself. It is the person's view of himself as conceived and organized from his inner vantage…[and] includes the person's ideas of the kind of person he is, the characteristics that he possesses, and his most important and striking traits" (Coopersmith & Feldman, 1974, p. 199). [Global: "Am I musical?"]
- Self-efficacy: One's confidence in one's ability to accomplish a task or succeed at a particular activity (Pajares & Schunk, 2001). [Specific: "Can I play the trumpet?"]
- Self-esteem: The positive or negative feelings a person holds in relation to their self-concept (Pajares & Schunk, 2001). [Evaluative: "Am I a good musician compared to my peers or a professional in a major symphony orchestra?"]

Self-Concept
Self-concept is hierarchical. Individuals hold both global self-perceptions and more specific perceptions about different areas or domains in their lives. The hierarchy can progressively narrow from beliefs regarding academic, social, emotional, or physical facets into more discrete types of self-concept, such as those related to specific academic areas (e.g., language arts, music), social relationships (e.g., family, peers), or physical traits (e.g., height, complexion) (Pajares & Schunk, 2001). Self-concept can influence achievement, motivation, self-regulation, perseverance, and participation (Reynolds, 1992).

Three-Factor Model of Music Self-Concept (Schmitt, 1979; Austin, 1990)
Music self-concept comprises three interrelated factors: (a) support/recognition from others, (b) personal interest/desire, and (c) perception of music ability.

Purpose
The purpose of this study was to develop a brief music self-concept inventory for college students that is easy to administer, tests the three-factor model defined by Austin (1990), and demonstrates acceptable internal reliability of α ≥ .80 (e.g., Carmines & Zeller, 1979; Krippendorff, 2004). The inventory will be useful to researchers and college professors who want to measure music self-concept among their students.

Initial Draft
- Fifteen statements in three equal subscales related to the three-factor model.
- Five-point Likert scale (1 = strongly disagree, 5 = strongly agree).
- Reliability: total scale α = .95; subscales α = .84 to .92.
- Factor analysis explained 63.1% of the variance.
- Bartlett's test (χ2 = 2552.22, p < .001) and the KMO measure (.94) indicated adequate sample size. Subject-to-variable ratio = 15.8:1.
- Items intended to constitute each factor loaded highest under their respective columns. "I am a capable singer or instrumentalist" and "I can be creative with music" were deleted due to cross-loadings. (A code sketch of this type of analysis follows below.)
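
For readers who want to replicate this kind of analysis, here is a minimal sketch using the third-party Python package factor_analyzer. The data file and column labels are hypothetical, and this is not the author's actual procedure:

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

# Hypothetical file: one row per student, one column per Likert item
items = pd.read_csv("msci_responses.csv")

# Sampling adequacy: Bartlett's test of sphericity and the KMO measure
chi2, p = calculate_bartlett_sphericity(items)
kmo_per_item, kmo_total = calculate_kmo(items)
print(f"Bartlett chi2 = {chi2:.2f}, p = {p:.4f}; KMO = {kmo_total:.2f}")

# Principal-factor extraction with oblique (promax) rotation, three factors
fa = FactorAnalyzer(n_factors=3, rotation="promax", method="principal")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns,
                        columns=["Others", "Interest", "Abilities"])
print(loadings.round(2))  # inspect for items that cross-load on two factors
```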

Final Version
- Thirteen items.
- Reliability: total scale α = .94; subscales α = .83 to .92.
- Factor analysis explained 63.6% of the variance: influence of others (54.1%), interest (5.7%), and ability (3.7%).
- All but one of the rotated factor loadings exceeded .50.
- Eigenvalues = 7.37 (others), 1.17 (interest), and 0.84 (abilities).
- Bartlett's test (χ2 = 2077.78, p < .001) and the KMO measure (.94) indicated adequate sample size. Subject-to-variable ratio = 18.2:1.

Pattern Matrix for Principal Factor Analysis with Promax Rotation of the MSCI (final version)

Item                                                              I: Others  II: Interest  III: Abilities
My family encouraged me to participate in music.                     .95
I have received praise or recognition for my musical abilities.      .92
Teachers have told me I have musical potential.                      .85
My friends think I have musical talent.                              .60         .32
Other people like to make music with me.                             .52         .39
I like to sing or play music for my own enjoyment.
Music is an important part of my life.                                           .74
I want to improve my musical skills.                                             .71
I enjoy singing or playing music in a group.                                     .62
I like to sing or play music for other people.                                   .56
I can hear subtle differences or changes in musical sounds.                                    .84
I have a good sense of rhythm.                                                                 .76
Learning new musical skills would be easy for me.                                              .45

Conclusions
- The MSCI is an effective measure of music self-concept as described by (a) support or recognition from others, (b) personal interest or desire, and (c) perception of music ability (Austin, 1990).
- Use the MSCI to (a) assess change or development in music self-concept, (b) identify differences in various aspects of music self-concept, (c) compare music self-concept among different populations, or (d) examine relationships between music self-concept and other variables.
- Instructors of music courses for elementary classroom teachers or the general college population might use MSCI scores to (a) assess students' attitudes toward their musical potential and accomplishment, (b) select leaders for small-group work, or (c) identify students who might require extra support.

Conclusions (continued)
- The MSCI might be useful for students below the college level; the structure of music self-concept likely does not change between junior high school and college (Vispoel, 2003).
- Flesch-Kincaid reading ease score of 67.3; average grade-level reading score of 7.0 (Readability-Score.com). The MSCI might therefore be effective with students as young as middle school.
- Future studies should test the MSCI among people of varying ages and backgrounds, and examine additional variables, models, and theories that might explain this construct (e.g., Schnare, MacIntyre, & Doucett, 2012).

The Ratings and Reliability of the 2010 VBODA Concert Festivals
Phillip M. Hash, Calvin College, Grand Rapids, Michigan (pmh3@calvin.edu)

Abstract
The purpose of this study was to analyze the ratings and interrater reliability of concert festivals sponsored by the Virginia Band and Orchestra Directors Association (VBODA) in 2010. Research questions examined the distribution, reliability, and group differences of ratings by ensemble type (band vs. orchestra), age level (middle school vs. high school), and classification (1-6). The average final rating was 1.58 (SD = .66), and 91.5% (n = 901) of ensembles (N = 985) earned either a I/Superior or II/Excellent out of five possible ratings. Data indicated a high level of interrater reliability regardless of contest site, ensemble type, age level, or classification. Although final ratings differed significantly by age (middle school bands vs. high school bands), ensemble type (middle school bands vs. middle school orchestras), and classification (lower vs. higher), these results were probably due to performance quality rather than adjudicator bias, since interrater reliability remained consistent regardless of these variables. Findings from this study suggested a number of opportunities for increasing participation and revising contest procedures in festivals sponsored by the VBODA and other organizations.

Factors Influencing Contest Ratings / Interrater Reliability
- Adjudicator experience (e.g., Brakel, 2006)
- Familiarity with repertoire (e.g., Kinney, 2009)
- Adjudication form (e.g., Norris & Borst, 2007)
- Length of contest day (e.g., Barnes & McCashin, 2005)
- Performance time (e.g., Bergee & McWhirter, 2007)
- Size of judging panel (e.g., Bergee, 2007)
- Difficulty of repertoire (e.g., Baker, 2004)
- Size of ensemble (e.g., Killian, 1998, 1999, 2000)
- Adjudicator bias:
  - Special circumstances (Cassidy & Sims, 1991)
  - Conductor race (VanWeelden & McGee, 2007)
  - Conductor expressivity (Morrison, Price, Geiger, & Cornacchio, 2009)
  - Ensemble label (Silvey, 2009)
- Grade inflation (Boeckman, 2002)
- Event type: concert performance vs. sight-reading (Hash, in press)
- Limitation of prior research: conducted in laboratory settings rather than actual large-group contest adjudication, often using taped excerpts and undergraduate evaluators rather than contest judges

Method - Participants
- 985 ensembles: middle school (n = 498) and high school (n = 487); bands (n = 596) and orchestras (n = 389)
- 13 VBODA districts / 36 contest sites
- 144 judges: 108 concert-performance judges in 36 panels, plus 36 sight-reading judges

Method - Analysis
- Descriptive statistics (M, SD, frequency counts) by contest site, ensemble type (band vs. orchestra), classification (I-VI), and age group (MS vs. HS)
- Interrater reliability (.80 = good reliability): percent agreement (pairwise and combined) and Cronbach's alpha (α)
- Group differences (Kruskal-Wallis ANOVA and post hoc Mann-Whitney U tests) by age level (MS vs. HS), classification (I-VI), and ensemble type (band vs. orchestra); see the sketch below
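
As referenced above, a minimal SciPy sketch of these nonparametric group comparisons; the ratings below are invented for illustration and are not the study's data:

```python
from scipy.stats import kruskal, mannwhitneyu

# Hypothetical final ratings (1 = Superior ... 5 = Poor) grouped by classification
ratings_by_class = {
    "I":  [1, 2, 2, 3, 2, 2],
    "II": [1, 2, 2, 2, 1, 3],
    "VI": [1, 1, 1, 2, 1, 1],
}

# Omnibus test: do ratings differ across classifications?
h, p = kruskal(*ratings_by_class.values())
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.3f}")

# Post hoc pairwise comparison (repeat for each pair of interest,
# ideally with a correction for multiple comparisons)
u, p = mannwhitneyu(ratings_by_class["I"], ratings_by_class["VI"])
print(f"Mann-Whitney U (Class I vs. Class VI): U = {u:.1f}, p = {p:.3f}")
```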

Results: Distribution & Reliability
- Average final rating = 1.58 (SD = .66)
- Final ratings distribution: I/Superior = 50.6% (n = 498); II/Excellent = 40.9% (n = 403); III/Good = 8.0% (n = 79); IV/Fair = 0.5% (n = 5); V/Poor = 0.0% (n = 0)
- Interrater agreement (pairwise) = .76; interrater agreement (combined) = .87; Cronbach's alpha = .87

Results: Significant Group Differences
- Age & ensemble type: MS orchestras > MS bands; HS bands > MS bands
- Classification (bands): Class 6 > Classes 5, 4, 3, 2, 1; Class 5 > Classes 3, 2, 1; Class 4 > Class 2
- Classification (orchestras): Class 6 > Classes 5, 4, 2, 1
- Sight-reading: HS sight-reading ratings > MS sight-reading ratings; sight-reading ratings > concert-performance ratings
- Weak correlation between grade level and rating (rs = -.16 to -.32)

Discussion
- The preponderance of I and II ratings may not accurately differentiate between ensembles of different quality.
- Consider a new rating system, e.g., spread current I and II ratings over three levels such as gold, silver, and bronze (Brakel, 2006).
- Award only I, II, and III in order to increase the validity of ratings.
- Use a rubric with descriptors (e.g., Norris & Borst, 2007).
- Consider adding a second sight-reading judge to compensate for possible grade inflation.

Discussion (continued)
- Did only the best bands participate in contests? Consider promoting a "comments only" option.
- Perhaps some directors/administrators do not feel large-group festival participation is affordable and/or worthwhile; create support for novice directors or those working in challenging situations.
- Overall, the 2010 VBODA contests demonstrated good statistical reliability, regardless of contest site, ensemble type, age level, or classification.

Future Research
Additional studies should examine:
- Correlation of grades given in individual categories with final ratings
- Effect of various evaluation forms on reliability
- Quality and consistency of written comments
- Attitudes of directors, students, and parents toward festivals
- Extent to which contests support instrumental programs in light of current education reform and changing attitudes toward the arts
- Effectiveness of current rating and adjudication systems, to determine whether revisions in contest procedures are necessary

All organizations sponsoring contests should analyze ratings and reliability to ensure consistent, fair results. Music organizations and researchers should work together throughout this process to ensure that school music festivals serve teachers and students as much as possible.