Measuring & Supporting English Language Learning in Schools: Challenges for Test Makers
CRESST Conference, Los Angeles, September 2002
Mari Pearlman
The Context

Demographics
- Projections indicate that by 2015, up to 55% of all students in K-12 public schools in the US will not speak English as their first language

Teacher preparation
- Striking absence of attention to learning to teach ELLs
- Little or no in-service support and assistance for regular content teachers
- Warfare among the proponents of various approaches to teaching ELLs
Enter NCLB
- The law implies that test use alone can effect changes in learning
- Under Title III, all states must have testing provisions for English language learners
- Currently available language proficiency tests have many of the same deficits as other standardized K-12 tests:
  - Not aligned with existing instructional programs (if any)
  - No instructionally useful feedback (even when results arrive while the teacher of record is still teaching the tested cohort)
Toward Responsible Tests
- Large-scale standardized tests have historically taken technically sound status reports as their chief deliverable
- Validity claims have rested largely on content validation studies; the knotty problem of defining the learning domain has been bypassed
- How one might change the reported status has not been the purview of test makers
Tests as Learning Tools
- In language proficiency testing, how might tests themselves support learning?
- Broaden the definition of validity: tests not only report the current status of the learner’s proficiency, but also suggest instructional next steps, or how that status might be changed
- This requires grappling with the domain of learning, which is inevitably much larger and more complex than the domain of testing
Implications
- The domain of language proficiency is vast, ill-structured, and dynamic
- Tests of language proficiency must engage in a constant struggle with issues of construct representation and construct-irrelevant variance
- Linking assessments of students’ language proficiency with instructional goals is a critical bridge to amplifying the definition of validity of these instruments
Lessons from Butler & Bachman
Butler
- Distinctions between general language proficiency and academic language proficiency
- Implications for test design
- Links to instructional design
- Benefits for all students, monolingual as well as English language learners
- Embedding language learning goals in all instruction
Bachman
- The topical content and language of presentation used in language proficiency performance tasks cannot be separated from the construct behind the assessment
- Language ability and content knowledge are intertwined
- The generalizability of inferences based on any task set is thus dependent on a precise definition of the particular domain
- Not just “speaking,” but to whom, about what, and in what particular circumstances
Practical Consequences
- Simply obtaining a score on a standardized measure of language proficiency is, practically speaking, useless to a teacher
- Tests for students must result in score profiles linked with possible hypotheses about student learning, hypotheses the teacher can systematically explore in instruction
Practical Consequences (cont.)
- Tests need to be easy to use in the classroom: brief and easy to administer
- Score “reports,” especially for young children, must be immediately informative to the teacher
- Language proficiency tests need to be accompanied by instructionally useful scaffolding