
New York

Into crowded field of school data comes a user-friendly report

Insideschools introduced its new school data tool, "Inside Stats," at a panel discussion on school assessment.

When Jacqueline Wayans helped her second daughter pick a high school, they were confident about their choice. After all, Wayans is a savvy parent who had worked for years visiting and reviewing schools for Insideschools, the online guide to city schools. Her older daughter had attended a city school with an arts theme and gotten a good education, and her younger daughter's top pick, Manhattan's High School for Fashion Industries, had gotten an "A" from the Department of Education.

It wasn't until after her daughter enrolled that Wayans learned Fashion Industries offered only three years of math classes. And when the school added a fourth-year math class, she learned too late that her daughter's scores were too low to qualify. Now, when Wayans's daughter starts college this fall, she'll need to take remedial math.

"I just assumed that there was a four-year sequence," Wayans said today during a panel discussion about metrics for assessing high schools that Insideschools hosted. "My older daughter had it at her high school and I just thought it was there."

Wayans isn't alone in trusting a small sliver of information to make the potentially life-changing decision about where to attend high school. Some parents and students choose schools by their names, their sports teams, or their neighborhoods, without digging deep to understand what kind of education the schools offer.

Now entering its second decade, Insideschools (where I also worked from 2005 to 2008) is preparing to launch a tool to help parents like Wayans — and those far less savvy than she is — make better choices. The tool, called "Inside Stats," is a consumer-oriented presentation of public data about high schools that is meant to complement, or perhaps even rival, the information the city distributes.
New York

Improvement in progress report grades: real or random?

Last year, the first round of progress reports attracted anger and ridicule. Perhaps because far fewer schools received low grades, the response this year has been more muted, making room for measured, evidence-based discussion of the DOE's methodology in constructing the reports.

Over at Eduwonkette, Harvard education professor Daniel Koretz offers a lengthy critique of the progress report methodology. He notes that test scores alone are not a legitimate way to evaluate schools; that New York State's tests were not designed to be used in "value-added" analysis like that behind the progress reports; and that the progress reports, like all accountability systems, place pressure on school administrators that likely leads to score inflation. In addition, he writes that the DOE's formula does not take into account "interval scaling," or the reality that different amounts of "value" are required to move students from one proficiency level to the next at different points on the proficiency spectrum. (In June, I wrote about how interval scaling might contribute to the finding that No Child Left Behind has helped high-performing students less than their low-performing peers.)

But those problems are present in many test-based, value-added accountability systems; Koretz writes that New York's progress report system also has errors of its own. The tremendous variation in schools' grades from last year to this year, he writes, probably has less to do with school improvement than with sampling and measurement error.

Here's an illustration of the effect of error. I first calculated the change in each school's grade between last year and this year and then graphed it against the school's enrollment.
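For readers who want to see roughly how such a graph is put together, here is a minimal Python sketch using pandas and matplotlib. It assumes a hypothetical file, progress_reports.csv, with columns for each school's enrollment and its letter grade in the two years; the file name and column names are illustrative, not the DOE's actual data format.

```python
# Minimal sketch: plot year-to-year grade change against school enrollment.
# Assumes a hypothetical CSV with columns: dbn, enrollment, grade_2007, grade_2008.
import pandas as pd
import matplotlib.pyplot as plt

# Convert letter grades to numbers so the year-to-year change can be computed.
GRADE_POINTS = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}

schools = pd.read_csv("progress_reports.csv")
schools["change"] = (
    schools["grade_2008"].map(GRADE_POINTS)
    - schools["grade_2007"].map(GRADE_POINTS)
)

# Scatter the grade change against enrollment.
plt.scatter(schools["enrollment"], schools["change"], alpha=0.4)
plt.xlabel("School enrollment")
plt.ylabel("Change in progress report grade (letter-grade steps)")
plt.title("Grade change vs. enrollment")
plt.show()
```

If the swings in grades reflect mostly sampling and measurement error rather than real improvement, a plot like this should fan out at the left: smaller schools, with fewer tested students, will show the widest grade changes.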