Assessing the “Rothstein Test”: Does It Really Show Teacher Value-Added Models Are Biased?
In a provocative and influential paper, Jesse Rothstein (2010) finds that standard value-added models (VAMs) imply implausible effects of students' future teachers on their past achievement, effects that obviously cannot be causal. This finding is the basis of a falsification test (the Rothstein falsification test) that appears to indicate bias in VAM estimates of current teacher contributions to student learning. More precisely, the falsification test is designed to identify whether students are effectively randomly assigned to teachers, conditional on the covariates included in the model. Rothstein's finding is significant because there is considerable interest in using VAM teacher effect estimates for high-stakes teacher personnel policies, and the results of his test cast considerable doubt on the notion that VAMs can be used fairly for this purpose. In this paper, however, we illustrate, both theoretically and through simulations, plausible conditions under which the Rothstein falsification test rejects VAMs even when students are randomly assigned conditional on the covariates in the model, and even when there is no bias in the estimated teacher effects.
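One stylized scenario in the spirit of the abstract's claim can be sketched in a small simulation. The data-generating process, parameter values, and variable names below are our own illustrative assumptions, not taken from the paper: observed scores follow an AR(1), so only the most recent score predicts the next one, and future teachers are assigned by tracking students on their past achievement gain, a variable that is irrelevant for future scores once the lagged score in the model is controlled for. A Rothstein-style regression then "finds" strong future-teacher effects on past gains, while the VAM's estimates of the true teacher effects remain unbiased.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 5000, 50          # students; grade-2 classrooms (illustrative sizes)
rho = 0.8                # AR(1) persistence of observed scores (assumed)

# Grade-0 and grade-1 scores: an AR(1), so y1 is a sufficient predictor of y2.
y0 = rng.normal(size=N)
y1 = rho * y0 + rng.normal(size=N)

# Grade-2 teachers are assigned by tracking on the PAST GAIN (y1 - y0),
# which carries no information about y2 once y1 is controlled for.
order = np.argsort(y1 - y0)
teacher = np.empty(N, dtype=int)
teacher[order] = np.arange(N) // (N // K)

tau = rng.normal(scale=0.2, size=K)          # true grade-2 teacher effects
y2 = rho * y1 + tau[teacher] + rng.normal(size=N)

D = np.eye(K)[teacher]                       # teacher indicator matrix

# (a) Rothstein-style falsification regression: past gains on FUTURE
# teacher indicators. Tracking makes the dummies highly predictive.
gain = y1 - y0
X = np.column_stack([np.ones(N), D[:, 1:]])
beta, *_ = np.linalg.lstsq(X, gain, rcond=None)
resid = gain - X @ beta
r2_falsification = 1 - (resid ** 2).sum() / ((gain - gain.mean()) ** 2).sum()

# (b) The VAM itself: y2 on the lagged score plus teacher dummies.
Xv = np.column_stack([y1, D])
bv, *_ = np.linalg.lstsq(Xv, y2, rcond=None)
tau_hat = bv[1:]

print(f"falsification R^2 (future teachers 'explain' past gains): {r2_falsification:.2f}")
print(f"mean bias of estimated teacher effects: {(tau_hat - tau).mean():+.3f}")
print(f"corr(estimated, true teacher effects): {np.corrcoef(tau_hat, tau)[0, 1]:.2f}")
```

In this simulation the falsification regression rejects decisively (future-teacher dummies explain most of the variance in past gains), yet the estimated teacher effects track the true effects with essentially no bias, which is the pattern the abstract describes.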