The impact of teacher practice on student outcomes in Te Kotahitanga
James G Ladwig, Newcastle Institute for Research in Education

Main points of presentation

First task: exploring the observational data to:
- See whether the data confirm the theoretical ideas about how pedagogy ‘works’, initially asking ‘what does the data say about how the various elements of pedagogy fit together?’
- See whether we can use these data as a measure of pedagogy, to examine the effects of high-quality pedagogy on student outcomes

Second task: exploring the link between pedagogy and outcomes
- Here we are reliant on existing testing data (warts and all)
- Here we are also limited in the options we have for making that link match the reality of what students experience (in different classes and subjects, with different teachers, etc.)

Third task: offer some interpretive suggestions about the implications for Te Kotahitanga schools and teachers

Initial elements

Learning relations:
- Engagement: based on counts (thus a direct measure of a representative selection of students in a given lesson)
- Cognitive level: a global indicator, a single item on the observational instrument
- Discursive interactions: based on counts and categories (of the same selected representative students)

Caring relations (each a global rating, aka ‘the second page’):
- Culturally Appropriate
- Culturally Responsive
- Caring
- Well Managed
- High Expectations for Performance
- High Expectations for Behaviour

Modelling the relations

Since Engagement and ‘Discursive’ are counts, and the ‘Cognitive’ indicator is a single item, the ‘Learning relations’ indicators are best taken as direct measures (you see them directly).

The Caring relations indicators, however, could be taken as individual measures of something bigger but less directly observed; that is, they could be indicators of a ‘latent construct’.

All six are significantly correlated with each other, so we wanted to see whether modelling them as a latent construct was worth the effort. If so, they can be combined into one measure of ‘Caring’ (a sketch of such a model follows).
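The talk does not name its modelling toolchain, so as a hedged illustration only, here is how a one-factor measurement model of the kind described above might be specified in Python with the open-source semopy package. The file and column names are hypothetical stand-ins for the six ‘second page’ ratings.

```python
# A minimal sketch of a one-factor ('Caring') measurement model, assuming a
# CSV with one row per observed lesson and the six global ratings as columns.
import pandas as pd
from semopy import Model, calc_stats

df = pd.read_csv("observations.csv")  # hypothetical file name

# lavaan-style description: one latent 'Caring' factor with six indicators
desc = """
Caring =~ cult_appropriate + cult_responsive + caring +
          well_managed + high_exp_performance + high_exp_behaviour
"""

model = Model(desc)
model.fit(df)

print(model.inspect())    # factor loadings and error variances
print(calc_stats(model))  # fit indices (CFI, RMSEA, etc.)
```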

So, is ‘Caring Relations’ a good construct?

Yes, with very good ‘fit’ measures after slight modification:
a) the ‘well managed’ item isn’t needed for this exercise, and
b) good reliability (h = .84 for the technically inclined).
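The ‘h’ here is presumably Hancock and Mueller’s coefficient H, a construct reliability computed from the standardized factor loadings. A minimal sketch of that calculation, using made-up loadings chosen only to land near the reported .84:

```python
import numpy as np

def coefficient_h(loadings: np.ndarray) -> float:
    """Hancock & Mueller's coefficient H from standardized loadings:
    H = 1 / (1 + 1 / sum(l^2 / (1 - l^2)))."""
    ratio = loadings**2 / (1.0 - loadings**2)
    return 1.0 / (1.0 + 1.0 / ratio.sum())

# Illustrative loadings for the five retained indicators (not real estimates)
lam = np.array([0.78, 0.74, 0.71, 0.68, 0.65])
print(round(coefficient_h(lam), 2))  # -> 0.84
```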

So what can this show us?

In this case, consider how Caring relations relate to the amount of ‘Discursive’ interaction:
- We don’t see high levels of discursive interaction without high levels of Caring relations, but
- Having high levels of Caring relations does not guarantee high degrees of discursive interaction.
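One cheap way to see this kind of asymmetry in observation data is a simple cross-tabulation of ‘high’ versus ‘not high’ on the two measures; if the pattern above holds, lessons high on discursive interaction but low on Caring should be (near) absent. The thresholds, file and column names below are hypothetical.

```python
import pandas as pd

df = pd.read_csv("observations.csv")  # hypothetical file name

# Hypothetical cut-points: median split for Caring, top quartile for counts
df["caring_high"] = df["caring_score"] >= df["caring_score"].median()
df["discursive_high"] = (df["discursive_count"]
                         >= df["discursive_count"].quantile(0.75))

# Expect a (near) empty cell at caring_high=False, discursive_high=True
print(pd.crosstab(df["caring_high"], df["discursive_high"]))
```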

The reality is, of course, more complex

Modelling all of the indicators can be done, but gets pretty complicated. Importantly, though, note the direct effects from specific indicators of Caring relations, and the feedback loops.

Making the link to outcomes…

Where is the difference in student gains? (in Te Kotahitanga schools)

Maths gains (using available data from 5 schools):
- 90.5% of the variance is within schools
- 9.5% is between schools

And what of the variance of pedagogy?

Taking ‘Discursive’ interaction as an example (2009 observational data):
- 16.2% of the variance is between schools
- 83.8% of the variance is between teachers

A sketch of this kind of decomposition follows.
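Neither of the last two slides names a method, but within/between shares of this kind are usually read off a null (intercept-only) multilevel model. A minimal sketch with statsmodels, again with hypothetical file and column names:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("gains.csv")  # one row per student: maths_gain, school

# Intercept-only model with a random intercept for each school
mdf = smf.mixedlm("maths_gain ~ 1", df, groups=df["school"]).fit()

between = float(mdf.cov_re.iloc[0, 0])  # school-level variance
within = mdf.scale                      # residual (within-school) variance
icc = between / (between + within)
print(f"between schools: {icc:.1%}, within schools: {1 - icc:.1%}")
```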

What do we make of these?

First, keep in mind that the biggest differences, in both student learning gains AND in the quality of pedagogy, are not between different schools. In fact, most of these school differences are not statistically significant, meaning we can’t tell that they are really very different at all…

BUT… the differences in gains scores and in the quality of pedagogy within schools ARE quite important and are, in my view, the place to start any school reform initiative (i.e. the main focus of Te Kotahitanga is very much on target). This is all the more true if the model of pedagogy employed in Te Kotahitanga is linked to improved gains.

Some caveats before the punchline

First, keep in mind that the observation data were not gathered for this analysis; they were gathered for PD purposes. Also keep in mind that not all schools have comparable data (nor have they all tested on comparable time schedules).

This means the data have not been sampled in a manner that is readily linked to outcomes (observations are not directly tracked to the specific classes of particular groups of students). Also keep in mind that students change classes, subjects and teachers over the course of a school year…

So how did I do it?

I have had to come at this from the ground up, by taking data from whatever classes the students were in that were also observed. This means there is a fair bit of looseness in that matching.

Luckily, for most students a lot of classes were observed (around 9 over the year, on average), so we can take the average across all the observed classes as an estimate of each student’s pedagogical experience (see the sketch below).

Given this… what link can we find at this point?
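A sketch of that ground-up matching, assuming a hypothetical enrolment table (student by class) and observation table (class by pedagogy score): join students to whichever of their classes were observed, then average per student.

```python
import pandas as pd

enrol = pd.read_csv("enrolments.csv")   # columns: student_id, class_id
obs = pd.read_csv("observations.csv")   # columns: class_id, pedagogy_score

# Keep only the classes that were actually observed, then average per student
matched = enrol.merge(obs, on="class_id", how="inner")
exposure = (matched.groupby("student_id")["pedagogy_score"]
                   .agg(avg_pedagogy="mean", n_observed="count"))

print(exposure["n_observed"].mean())  # slide reports ~9 observed classes
```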

Who experienced what quality of pedagogy?

And the punchline: Maths Gains by Pedagogy
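The chart itself is not reproduced in this transcript. For the mechanics, here is a hedged sketch of the simplest gains-by-pedagogy relation one could fit to the matched data above (hypothetical file and column names, not the author’s actual model):

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("matched_gains.csv")  # student_id, maths_gain, avg_pedagogy

fit = smf.ols("maths_gain ~ avg_pedagogy", df).fit()
print(fit.summary().tables[1])  # slope = estimated gain per unit of pedagogy
```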

So far so good

I should note this is a preliminary analysis and there is much more to do, including:
- Examining the link with gains in Reading
- Examining ways to better match the pedagogy data with students and classes

For now, though, it seems very clear that the focus on improving the quality of students’ pedagogical experiences is rightfully at the core of our work.
- For teachers, this suggests the large effort to improve from good to very good is well worth it.
- For schools, this suggests the effort to provide students with more consistently good-quality pedagogy (getting less variance within schools by moving more classes ‘up’) is well worth it.