Test-Based Accountability and Student Achievement: An Investigation of Differential Performance on NAEP and State Assessments
This paper explores the phenomenon referred to as test score inflation, which occurs when achievement gains on "high-stakes" exams outpace improvements on "low-stakes" tests. The first part of the paper documents the extent to which student performance trends on state assessments differ from those on the National Assessment of Educational Progress (NAEP). I find evidence of considerable test score inflation in several states, including states with quite different testing systems. The second part of the paper is a case study of Texas that uses detailed item-level data from the Texas Assessment of Academic Skills (TAAS) and the NAEP to explore why performance trends differed across these exams during the 1990s. The differential improvement on the TAAS cannot be explained by several important differences between the exams (e.g., the NAEP includes open-response items, and many NAEP multiple-choice items require or permit the use of calculators, rulers, protractors, or other manipulatives). I find that skill and format differences across exams explain the disproportionate improvement on the TAAS for fourth graders, although these differences cannot explain the time trends for eighth graders.
I would like to thank Elizabeth Kent and J.D. LaRock for excellent project management and research assistance, and Daniel Koretz for many helpful suggestions. Funding for this project was generously provided by the U.S. Department of Education NAEP Secondary Analysis Grant (#R902B030024). The views expressed in this paper are those of the author, and all errors are, of course, my own.