By Christopher Edley and Robert Hauser, The Washington Post
Washingtonians are accustomed to seeing their elected and appointed leaders attempt to improve public education in the city: Since 1804 there have been 17 different school governance and administrative regimes, the most recent established in 2007. That legislation transferred authority from an elected board to the mayor, called for fundamental changes to management structures, created a public school charter board, and, perhaps most noticeably, allowed the mayor to recruit a new school chancellor. This “jolt” to the system was clearly motivated by widespread dissatisfaction with the quality of teaching and learning, and it came with a requirement that the results of the reforms be independently evaluated.
The National Research Council of the National Academy of Sciences was asked to take on that important challenge and has now released the first of its reports, A Plan for Evaluating the District of Columbia’s Public Schools: From Impressions to Evidence. The prepublication version of the report is freely available at http://www.nap.edu/. Not surprisingly, the report has garnered both respect and criticism (see, for example, the April 12 Washington Times article by the Harvard political scientist Paul Peterson, based on a longer article that will soon appear in a magazine he edits and is already available on its website).
In light of claims made about both the purpose and results of the National Research Council report, it is important to clarify its main findings and recommendations. The report’s authoring committee – which we co-chaired – was asked to develop a plan for evaluating the District’s public schools. The committee included 15 nationally recognized educational researchers and practitioners, a number of whom live in Washington, D.C., and have deep knowledge of the city and its schools, and its report was independently reviewed by 13 more experts before revision and public release.
We recommended a sustained, independent, and scientifically rigorous examination of a wide range of data, using many types of analysis, to provide a comprehensive picture of how the District as a whole is functioning and of its progress over time. Such an examination would help all stakeholders – D.C. officials, school teachers and administrators, parents, students, and other citizens – to improve the D.C. schools.
Although it may be disappointing to people who were hoping for a “thumbs-up or thumbs-down” appraisal of former Chancellor Michelle Rhee’s performance, we were not asked to evaluate her performance or the specific effects of her tenure on the schools, and we did not do so.
Our recommendations were motivated largely by our review of publicly available data, which are incomplete and inadequate to support definitive conclusions about the effects of the reform initiative or the functioning of the school system.
For example, although there is evidence of test score gains, our committee cautioned against premature causal explanations or policy advice based on impressionistic evidence. We explained why test scores alone cannot answer essential questions about the reforms or about how effectively the District is carrying out its many responsibilities. This theme was echoed by the former chancellor, who said in a recent blog post that “rising test scores are a critical measure of school progress, but they aren’t the only metrics we can use….”
Our report is cautious about comparisons between D.C. and other cities, as is appropriate in a scientific analysis of a complex set of issues. Thus, we considered the role of chance in findings from sample data, like those from the National Assessment of Educational Progress (NAEP). When compared with the other school districts assessed by NAEP in 2007 and 2009, D.C.’s gain was reliably larger than the gains of only two districts (Austin and Cleveland) in grade 4 mathematics and of only one district (Cleveland) in grade 4 reading. This finding has no particular ideological or political bent; it is the result of careful and straightforward analysis of the data.
We were also careful to follow an elementary tenet of scientific inquiry: that correlation is not evidence of causation. Thus, although our report notes the encouraging news that truancy has declined in D.C., we did not infer a cause-and-effect relationship between that change and academic achievement.
We believe our report lays the groundwork for a rigorous and credible program of evaluation and research, one that would bring to the problems of education in D.C. the very best available expertise and reasoned judgment. The city’s children deserve nothing less.