Rating Oral Language
PAULA WINKE
Assessment of oral language is useful for many purposes, from estimating proficiency growth in communicative language‐learning environments, to deciding whether a person's speaking ability is high enough to warrant entrance into a university program. A critical component in the process of oral assessment is the rating of the speech samples. Traditionally, oral language is rated by highly proficient speakers of the targeted language who have been trained according to rating criteria established by the test designers. Because the score on an oral assessment is intended to reflect the test taker's ability to speak in real‐life contexts (McNamara, 2001), raters are expected to provide an unbiased implementation of the rating criteria and thus remain invisible to the scoring process. Yet research has firmly established that raters' personal beliefs or backgrounds may affect their rating processes and the resulting scores. Rating oral language is more complicated than rating written language (essays) because more of the test takers' nonlinguistic characteristics (such as voice quality, accent, and, in the case of face‐to‐face or video‐based testing, gestures, expressions, and general appearance) are revealed and may inadvertently influence raters' score assignments. To rate accurately and reliably, raters of oral language need to be guided, trained, and monitored to ensure that their personal opinions, beliefs, and backgrounds minimally affect ...