University of Washington - Statistics
A central goal of the education literature is to demonstrate that specific educational interventions have a treatment effect on student test performance. Researchers often have access to test scores for students in the treatment and control groups both before and after the intervention, but usually must estimate the treatment effect from observational data in which the intervention has not been randomly assigned to units.

This talk begins with a discussion of the assumptions that underlie common approaches to estimating a treatment effect from observational data. To facilitate the discussion, I use Single World Intervention Templates (SWITs; Richardson and Robins, 2013), a unification of the graphical and counterfactual approaches to causality. I demonstrate that specific treatment assignment mechanisms can lead to a version of the "paradox" described by Lord (1967): within both the treatment and control groups, the mean of the pre-test score equals the mean of the post-test score and the variance of the pre-test score equals the variance of the post-test score, yet common statistical approaches lead to quite different estimates of the treatment effect.

I then discuss how an instrumental variables (IV) approach can relax the assumptions placed on the treatment assignment mechanism while still producing an unbiased estimate of the treatment effect, and I apply this approach to estimate the effect of special education services on student test scores in Washington state public schools. Specifically, I exploit the state's special education funding formula, which cuts off special education funding to a school district once 12.7% of the district's students are enrolled in special education, and use an indicator of whether a student attends a district beyond this funding threshold as the instrument in an IV model.
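The disagreement described above can be reproduced in a few lines of simulation. The sketch below is illustrative only (the group means, correlation, and sample size are assumptions, not figures from the talk): within each group, pre- and post-test scores are drawn with identical means and variances, so the gain-score (difference-in-differences style) estimator returns roughly zero, while an ANCOVA-style regression of the post-test on the pre-test and a group indicator returns a decidedly nonzero "effect" whenever the groups differ at baseline and the pre/post correlation is below one.

```python
import numpy as np

# Hedged sketch of Lord's (1967) "paradox". All numbers are made up for
# illustration: control group centered at 70, treatment group at 60,
# within-group pre/post correlation rho < 1, and no true treatment effect.
rng = np.random.default_rng(0)
n = 50_000          # students per group (large, to keep noise small)
rho = 0.5           # within-group correlation between pre and post scores

def simulate_group(mu):
    # Bivariate normal: pre and post share the same mean mu and unit
    # variance, so within the group mean(pre) = mean(post) and
    # var(pre) = var(post), as in the paradox.
    cov = [[1.0, rho], [rho, 1.0]]
    pre, post = rng.multivariate_normal([mu, mu], cov, size=n).T
    return pre, post

pre_c, post_c = simulate_group(mu=70.0)   # "control" group
pre_t, post_t = simulate_group(mu=60.0)   # "treatment" group, no effect added

# Estimator 1: gain scores (mean change in treatment minus mean change
# in control). Both mean changes are ~0, so the estimate is ~0.
gain = (post_t - pre_t).mean() - (post_c - pre_c).mean()

# Estimator 2: ANCOVA-style OLS of post on pre and a group indicator.
# The fitted pre-test slope is ~rho, so the group coefficient is about
# (1 - rho) * (60 - 70) = -5, despite there being no true effect.
pre = np.concatenate([pre_c, pre_t])
post = np.concatenate([post_c, post_t])
grp = np.concatenate([np.zeros(n), np.ones(n)])
X = np.column_stack([np.ones(2 * n), pre, grp])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)

print(f"gain-score estimate: {gain:.3f}")      # near 0
print(f"ANCOVA estimate:     {beta[2]:.3f}")   # near -5
```

Which estimator is "right" depends on the treatment assignment mechanism, which is exactly the point the talk develops with SWITs before turning to the IV design.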