
1 Final Presentations Class 6

2 Agenda Today
APA format
DePaul Univ IRB
Reliability of Likert Data
SEM & Confidence Limits/Intervals
A past study as an example
Your Presentations

3 APA Format
Headers: chapter title = Level 1; others = Level 2 (flush left)
Remember title page, page #s
Running head ≤ 50 characters total; goes in the header, flush left
Research question after purpose statement & before need for study (header?)
Commas = …apples, oranges, and grapes.
Citation before the period and after the end quotation mark. No page number needed unless you are citing a quote.
Two spaces b/w sentences; 1 space b/w reference elements!
Spacing: double space, paragraph spacing set to 0, no additional space added.

4 DePaul University IRB - orp@depaul.edu - https://offices.depaul
Complete the necessary training
Developing your research: Jackie as faculty sponsor
Review the Levels of Review page to determine which review level applies to your research: exempt, expedited, or full board.
If you believe your research is non-reviewable (i.e., does not involve research or human subjects), the IRB requires that you submit an email containing a summary of your planned project to the IRB in order for the IRB to make an official non-reviewable determination. (??)
Send a hard copy or scanned PDF version of just the signed IRB application form in addition to the electronic version.
Please send all consent documents, information sheets, and recruitment materials in Word, so that the IRB can manipulate or edit them when necessary.

5 Reliability of Survey
What broad single dimension is being studied?
e.g., attitudes towards elementary music; preference for Western art music
"People who answered a on #3 answered c on #5"
Use Cronbach's alpha
Measure of internal consistency
Extent to which responses on individual items correspond to each other
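To make the alpha calculation concrete, here is a minimal Python sketch (not from the slides); the formula is the standard one, and the response matrix is invented for illustration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 respondents x 4 five-point Likert items.
responses = np.array([
    [5, 4, 5, 4],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
    [2, 1, 2, 2],
    [5, 5, 4, 4],
    [1, 2, 1, 2],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")  # near 1 = high internal consistency
```

Items that rise and fall together across respondents push alpha toward 1; unrelated items pull it toward 0.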

6 Confidence Limits/Interval (http://graphpad.com/quickcalcs/CImean1.cfm)
Attempts to define the range of the true population mean based on the standard error estimate.
Confidence level: 95% chance vs. 99% chance
Confidence limits: 2 numbers that define the top & bottom of the range
Confidence interval (margin of error): the range b/w the confidence limits, expressed in units or %
The confidence interval (also called margin of error) is the plus-or-minus figure usually reported in newspaper or television opinion poll results. For example, if you use a confidence interval of 4 and 47% of your sample picks an answer, you can be "sure" that if you had asked the question of the entire relevant population, between 43% (47 - 4) and 51% (47 + 4) would have picked that answer.

7 Standard Error of the Mean
Estimate of the average SD for any number of samples of the same size taken from the population.
SEM = SD / √N
Example: If I tested 30 students on music theory
Test scored 0-100; Mean = 75; SD = 10
√N = √30 = 5.477
SEM = 10 / 5.477 = 1.826
95% Confidence Interval
95% of the area under a normal curve lies within roughly 1.96 SD units above or below the mean (rounded to ±2).
95% CI = M ± (SEM × 1.96); SEM × 1.96 = 3.579; 95% CI = 71.42 to 78.58
99% CI = M ± (SEM × 2.576); SEM × 2.576 = 4.703; 99% CI = 70.30 to 79.70
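As a check on the hand calculation above, the same worked example (N = 30, M = 75, SD = 10) in a few lines of Python:

```python
import math

n, mean, sd = 30, 75.0, 10.0
sem = sd / math.sqrt(n)                          # 10 / 5.477 = 1.826

ci95 = (mean - 1.96 * sem, mean + 1.96 * sem)    # 71.42 to 78.58
ci99 = (mean - 2.576 * sem, mean + 2.576 * sem)  # 70.30 to 79.70

print(f"SEM = {sem:.3f}")
print(f"95% CI: {ci95[0]:.2f} to {ci95[1]:.2f}")
print(f"99% CI: {ci99[0]:.2f} to {ci99[1]:.2f}")
```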

8 Normal Curve/Distribution (±1 SD = 68.27%, ±2 SD = 95.45%, ±3 SD = 99.73%)
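These coverage figures, and the 1.96 and 2.576 multipliers used above, can be verified with scipy's standard normal distribution (assuming scipy is installed):

```python
from scipy.stats import norm

print(norm.ppf(0.975))             # 1.9600 -> ±1.96 SD brackets the middle 95%
print(norm.ppf(0.995))             # 2.5758 -> ±2.576 SD brackets the middle 99%
print(norm.cdf(1) - norm.cdf(-1))  # 0.6827 -> ±1 SD covers 68.27%
print(norm.cdf(2) - norm.cdf(-2))  # 0.9545 -> ±2 SD covers 95.45%
print(norm.cdf(3) - norm.cdf(-3))  # 0.9973 -> ±3 SD covers 99.73%
```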

9 Confidence
Calculate at a 95% confidence level (by hand)
ACT Explore scores
Confidence limits & interval
How closely do your scores represent the true population?

10 Development and Validation of a Music Self-Concept Inventory for College Students
Phillip M. Hash, Calvin College, Grand Rapids, Michigan, pmh3@calvin.edu

11 Abstract The purpose of this study was to develop a music self-concept inventory (MSCI) for college students that is easy to administer and reflects the global nature of this construct. Students (N = 237) at a private college in the Midwest United States completed the initial survey, which contained 15 items rated on a five-point Likert scale. Three subscales determined by previous research included (a) support or recognition from others, (b) personal interest or desire, and (c) self-perception of music ability. Factor analysis indicated that two items did not fit the model and, therefore, were deleted. The final version of the MSCI contains 13 items and demonstrates strong internal consistency for the total scale (α = .94) and subscales (α = ). A second factor analysis supported the model and explained 63.6% of the variance. Validity was demonstrated through correlation between the MSCI and another measure of music self-perception (r = .94), MSCI scores and years of participation in music activities (r = .64), and interfactor correlations (r = ). This instrument will be useful to researchers examining music self-concept and to college instructors who want to measure this construct among their students.

12 Definitions
Self-concept: "Beliefs, hypotheses, and assumptions that the individual has about himself. It is the person's view of himself as conceived and organized from his inner vantage…[and] includes the person's ideas of the kind of person he is, the characteristics that he possesses, and his most important and striking traits" (Coopersmith & Feldman, 1974, p. 199). [Global – "Am I musical?"]
Self-efficacy: One's confidence in one's ability to accomplish a task or succeed at a particular activity (Pajares & Schunk, 2001). [Specific – "Can I play the trumpet?"]
Self-esteem: Positive or negative feelings a person holds in relation to their self-concept (Pajares & Schunk, 2001). [Evaluative – "Am I a good musician compared to my peers or a professional in a major symphony orchestra?"]

13 Self Concept
Self-concept is hierarchical. Individuals hold both global self-perceptions and more specific perceptions about different areas or domains in their lives. The hierarchy can progressively narrow from beliefs regarding academic, social, emotional, or physical facets into more discrete types of self-concepts such as those related to specific academic areas (e.g., language arts, music), social relationships (e.g., family, peers), or physical traits (e.g., height, complexion) (Pajares & Schunk, 2001).
Self-concept can influence achievement, motivation, self-regulation, perseverance, and participation (Reynolds, 1992).

14 Three-Factor Model of Music Self-Concept (Schmitt, 1979; Austin, 1990)
[Diagram: MUSIC SELF-CONCEPT at the center, linked to three factors: Support/Recognition from Others, Personal Interest/Desire, and Perception of Music Ability]

15 Purpose
The purpose of this study was to develop a brief music self-concept inventory for college students that is easy to administer, tests the three-factor model defined by Austin (1990), and demonstrates acceptable internal reliability of α ≥ .80 (e.g., Carmines & Zeller, 1979; Krippendorff, 2004).
The inventory will be useful to researchers and college professors who want to measure music self-concept among their students.

16 Initial Draft
Fifteen statements in three equal subscales related to the three-factor model.
Five-point Likert scale (1 = strongly disagree, 5 = strongly agree).
Reliability - Total scale: α = .95; subscales: α =
Factor analysis explained 63.1% of the variance.
Bartlett's test (χ² = , p < .001) and the KMO measure (.94) indicated adequate sample size. Subject-to-variable ratio = 15.8:1.
Items intended to constitute each factor loaded highest under their respective columns.
"I am a capable singer or instrumentalist" & "I can be creative with music" were deleted due to cross-loadings.

17 Final Version
Thirteen items.
Reliability - Total scale: α = .94; subscales: α =
Factor analysis explained 63.6% of the variance: influence of others (54.1%), interest (5.7%), and ability (3.7%).
All but one of the rotated factor loadings exceeded .50.
Eigenvalues = 7.37 (others), 1.17 (interest), and 0.84 (abilities).
Bartlett's test (χ² = , p < .001) and the KMO measure (.94) indicated adequate sample size. Subject-to-variable ratio equaled 18.2:1.
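For anyone who wants to reproduce this kind of analysis, here is a hedged sketch of the Bartlett/KMO checks and a promax-rotated factor analysis using the open-source factor_analyzer package. The response matrix below is synthetic (three planted factors), not the study's data, and the item names are illustrative only.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

# Synthetic stand-in for the 237 x 13 response matrix: three latent
# factors each driving a block of observed items, plus noise.
rng = np.random.default_rng(0)
n = 237
latent = rng.normal(size=(n, 3))
load = np.zeros((13, 3))
load[0:5, 0] = 0.8    # items 1-5   -> "others"
load[5:10, 1] = 0.8   # items 6-10  -> "interest"
load[10:13, 2] = 0.8  # items 11-13 -> "abilities"
items = latent @ load.T + rng.normal(scale=0.5, size=(n, 13))
df = pd.DataFrame(items, columns=[f"item{i + 1}" for i in range(13)])

# Sampling-adequacy checks of the kind reported on the slides.
chi2, p = calculate_bartlett_sphericity(df)
_, kmo = calculate_kmo(df)
print(f"Bartlett chi2 = {chi2:.1f}, p = {p:.4g}; KMO = {kmo:.2f}")

# Three-factor model with promax (oblique) rotation.
fa = FactorAnalyzer(n_factors=3, rotation="promax")
fa.fit(df)
print(pd.DataFrame(fa.loadings_, index=df.columns).round(2))  # pattern matrix
print("cumulative variance explained:", fa.get_factor_variance()[2].round(3))
```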

18 Pattern Matrix for Principal Factor Analysis with Promax Rotation of the MSCI (final version)
Item                                                             I (Others)  II (Interest)  III (Abilities)
My family encouraged me to participate in music.                    .95
I have received praise or recognition for my musical abilities.     .92
Teachers have told me I have musical potential.                     .85
My friends think I have musical talent.                             .60          .32
Other people like to make music with me.                            .52          .39
I like to sing or play music for my own enjoyment.
Music is an important part of my life.                                           .74
I want to improve my musical skills.                                             .71
I enjoy singing or playing music in a group.                                     .62
I like to sing or play music for other people.                                   .56
I can hear subtle differences or changes in musical sounds.                                      .84
I have a good sense of rhythm.                                                                   .76
Learning new musical skills would be easy for me.                                                .45

19 Conclusions
The MSCI is an effective measure of music self-concept as described by (a) support or recognition from others, (b) personal interest or desire, and (c) perception of music ability (Austin, 1990).
Use the MSCI to (a) assess change or development in music self-concept, (b) identify differences in various aspects of music self-concept, (c) compare music self-concept among different populations, or (d) examine relationships between music self-concept and other variables.
Instructors of music courses for elementary classroom teachers or the general college population might use MSCI scores to (a) assess students' attitudes towards their musical potential and accomplishment, (b) select leaders for small group work, or (c) identify students who might require extra support.

20 Conclusions
The MSCI might be useful for students below the college level. The structure of music self-concept likely does not change between junior high school and college (Vispoel, 2003).
Flesch Reading Ease score of 67.3; average grade-level reading score of 7.0 (Readability-Score.com). The MSCI might be effective with students as young as middle school.
Future studies should test the MSCI among people of varying ages and backgrounds, and examine additional variables, models, and theories that might explain this construct (e.g., Schnare, MacIntyre, & Doucett, 2012).

21 The Ratings and Reliability of the 2010 VBODA Concert Festivals
Phillip M. Hash, Calvin College, Grand Rapids, Michigan

22 Abstract
The purpose of this study was to analyze the ratings and interrater reliability of concert festivals sponsored by the Virginia Band and Orchestra Directors Association (VBODA) in 2010. Research questions examined the distribution, reliability, and group differences of ratings by ensemble type (band vs. orchestra), age level (middle school vs. high school), and classification (1-6). The average final rating was 1.58 (SD = .66), and 91.5% (n = 901) of ensembles (N = 985) earned either a I/Superior or II/Excellent out of five possible ratings. Data indicated a high level of interrater reliability regardless of contest site, ensemble type, age level, or classification. Although final ratings differed significantly by age (middle school bands vs. high school bands), ensemble type (middle school bands vs. middle school orchestras), and classification (lower vs. higher), these results were probably due to performance quality rather than adjudicator bias, since interrater reliability remained consistent regardless of these variables. Findings from this study suggested a number of opportunities for increasing participation and revising contest procedures in festivals sponsored by the VBODA and other organizations.

23 Factors Influencing Contest Ratings / Interrater Reliability
Adjudicator experience (e.g., Brakel, 2006)
Familiarity w/ repertoire (e.g., Kinney, 2009)
Adjudication form (e.g., Norris & Borst, 2007)
Length of contest day (e.g., Barnes & McCashin, 2005)
Performance time (e.g., Bergee & McWhirter, 2007)
Size of judging panel (e.g., Bergee, 2007)
Difficulty of repertoire (e.g., Baker, 2004)
Size of ensemble (e.g., Killian, 1998, 1999, 2000)
Adjudicator bias: special circumstances (Cassidy & Sims, 1991); conductor race (VanWeelden & McGee, 2007); conductor expressivity (Morrison, Price, Geiger, & Cornacchio, 2009); ensemble label (Silvey, 2009)
Grade inflation (Boeckman, 2002)
Event type: concert performance vs. sight-reading (Hash, in press)
Limitations of prior research: laboratory settings rather than actual large-group contest adjudication; taped excerpts; undergraduate evaluators rather than contest judges

24 Method - Participants
985 ensembles: middle school (n = 498) and high school (n = 487); bands (n = 596) and orchestras (n = 389)
13 VBODA districts / 36 contest sites
144 judges: 108 concert-performance judges in 36 panels; 36 sight-reading judges

25 Method - Analysis
M, SD, frequency counts by contest site, ensemble type (band vs. orchestra), classification (1-6), and age group (MS vs. HS)
Interrater reliability (.80 = good reliability): % agreement (pairwise & combined); Cronbach's alpha (α)
Group differences: Kruskal-Wallis ANOVA & post hoc Mann-Whitney U tests (sketched below) by age level (MS vs. HS), classification (1-6), and ensemble type (band vs. orchestra)
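A minimal sketch of the group-difference tests listed above, using scipy.stats; the ratings here are hypothetical, not the VBODA data:

```python
from scipy.stats import kruskal, mannwhitneyu

# Hypothetical final ratings (1 = Superior ... 5 = Poor) by classification.
class1 = [2, 2, 3, 1, 2, 3]
class3 = [1, 2, 2, 1, 2, 1]
class6 = [1, 1, 2, 1, 1, 1]

# Omnibus test across the three groups (Kruskal-Wallis ANOVA on ranks).
h, p = kruskal(class1, class3, class6)
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

# Post hoc pairwise comparison; with many pairs, apply a
# multiple-comparison correction (e.g., Bonferroni).
u, p = mannwhitneyu(class1, class6, alternative="two-sided")
print(f"Mann-Whitney U (class 1 vs. class 6): U = {u:.1f}, p = {p:.4f}")
```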

26 Results
Distribution & Reliability
Average final rating = 1.58 (SD = .66)
Final ratings distribution: I/Superior = 50.6% (n = 498); II/Excellent = 40.9% (n = 403); III/Good = 8.0% (n = 79); IV/Fair = 0.5% (n = 5); V/Poor = 0.0% (n = 0)
Interrater agreement (pairwise) = .76; interrater agreement (combined) = .87; Cronbach's alpha = .87
Significant Group Differences
Age & ensemble type: MS orchestras > MS bands; HS bands > MS bands
Classification: Class 6 bands > 5, 4, 3, 2, 1; Class 5 bands > 3, 2, 1; Class 4 bands > 2; Class 6 orchestras > 5, 4, 2, 1
Sight-reading ratings: HS sight-reading > MS sight-reading; sight-reading > concert performance
Weak correlation b/w grade level & rating (rs = -.16 to -.32)

27

28

29 Discussion
Preponderance of I & II ratings may not accurately differentiate b/w ensemble quality
Consider a new rating system: e.g., spread current I & II ratings over three levels such as gold, silver, or bronze (Brakel, 2006); only award I, II, III in order to increase validity of ratings; use a rubric w/ descriptors (e.g., Norris & Borst, 2007)
Consider adding a second sight-reading judge to compensate for possible grade inflation.

30 Discussion
Did only the best bands participate in contests? Consider promoting a "comments only" option.
Perhaps some directors/administrators do not feel the large-group festival is affordable and/or worthwhile; create support for novice directors or those working in challenging situations.
Overall, the 2010 VBODA contests demonstrated good statistical reliability, regardless of contest site, ensemble type, age level, or classification.

31 Future Research
Additional studies should examine:
Correlation of grades given in individual categories and final ratings
Effect of various evaluation forms on reliability
Quality and consistency of written comments
Attitudes of directors, students, and parents toward festivals
Extent to which contests are supporting instrumental programs in light of current education reform and changing attitudes towards the arts
Effectiveness of current rating and adjudication systems, to determine if revisions in contest procedures are necessary
All organizations sponsoring contests should analyze ratings and reliability to ensure consistent, fair results.
Music organizations and researchers should work together throughout this process to ensure that school music festivals serve teachers and students as much as possible.

