Assessing the “Rothstein Test”: Does It Really Show Teacher Value-Added Models Are Biased?
In a provocative and influential paper, Jesse Rothstein (2010) finds that standard value-added models (VAMs) suggest implausible effects of future teachers on past student achievement, a finding that clearly cannot be causal. This finding is the basis of a falsification test (the Rothstein falsification test) that appears to indicate bias in VAM estimates of current teachers' contributions to student learning. More precisely, the falsification test is designed to identify whether students are effectively randomly assigned to teachers, conditional on the covariates included in the model. Rothstein's finding is significant because there is considerable interest in using VAM teacher-effect estimates for high-stakes teacher personnel policies, and the results of the Rothstein test cast considerable doubt on the notion that VAMs can be used fairly for this purpose. In this paper, however, we illustrate, both theoretically and through simulations, plausible conditions under which the Rothstein falsification test rejects VAMs even when students are randomly assigned conditional on the covariates in the model, and even when there is no bias in estimated teacher effects.
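The logic of the falsification test can be sketched with a toy simulation (a minimal sketch; all parameter values and variable names here are illustrative, not taken from the paper): if students are randomly assigned to next year's teachers, any apparent "effect" of a future teacher on past achievement should be pure sampling noise, with a variance near sigma^2 / class size.

```python
import numpy as np

# Illustrative sketch of the Rothstein-style falsification idea under
# random assignment; parameters (50 teachers, class size 20) are hypothetical.
rng = np.random.default_rng(42)

n_teachers, class_size = 50, 20
n_students = n_teachers * class_size

# Grade-4 (past) scores, drawn before any grade-5 assignment exists.
past_scores = rng.standard_normal(n_students)

# Randomly assign students to future (grade-5) teachers.
future_teacher = rng.permutation(np.repeat(np.arange(n_teachers), class_size))

# Naive "future teacher effect" on past scores: classroom mean of past scores.
effects = np.array([past_scores[future_teacher == j].mean()
                    for j in range(n_teachers)])

# Under random assignment, the dispersion of these classroom means should be
# close to var(past_scores) / class_size -- sampling noise, not a real effect.
expected_noise_var = past_scores.var() / class_size
print(effects.var(), expected_noise_var)
```

In this clean random-assignment setting the estimated "effects" have roughly the noise variance; the paper's point is that plausible data-generating processes can make a test built on this logic reject even when assignment really is random conditional on the model's covariates.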
Keywords: Value-added, Simulation, Teacher Effectiveness
Citation: Dan Goldhaber and Duncan Chaplin (2012). Assessing the “Rothstein Test”: Does It Really Show Teacher Value-Added Models Are Biased? CALDER Working Paper No. 71.