News: K. Chenoweth on Chicago's improving educational outcomes

When a school agency shows improvements, shouldn't the accomplishments be recognized?

This note is not strictly about special education, although I’ll throw in a twist after I recount the main point; I’m posting it in hopes of promoting examination of schools in general.

Writing in the opinion section of the Washington Post, Karin Chenoweth recounted an unusual story about education: Chicago Public Schools appear to have improved outcomes for students.

Under the headline, “In Chicago, public schools are often called a mess. Truth is, they’ve improved — a lot,” Ms. Chenoweth cites data using at least three different indicators to document the improvements.

What happens when a school district improves and hardly anyone notices?

I ask because for the past couple of decades, Chicago Public Schools (CPS) has improved. A lot. And yet, that probably comes as news, even to many who pay attention to education.

Here are a few data points. From 2007 to 2019, high school graduation rates rose from 60 percent to 82 percent. In 2018, 63 percent of high school graduates enrolled immediately in two- or four-year colleges, compared with 50 percent in 2006 — and that rate held fairly steady last year in the face of covid-19. Achievement in reading and math has improved since the early 2000s, as measured by the National Assessment of Educational Progress.

Because Ms. Chenoweth is a long-time observer of schooling, I lend some credence to her observations. Still, let’s take a somewhat deeper look.

Graduation rates may not be the strongest measure of a school’s benefits. They are influenced by many factors, including grading policies and educational fads.

The percentage of graduates who enroll in post-secondary education is a somewhat more solid indicator of success, but those numbers, too, may be affected by factors other than educational success. Still, something like a 25 percent increase in college enrollment makes it sound as though something is going right.

The National Assessment of Educational Progress (NAEP) results are even firmer. Those of us in education know that the NAEP provides pretty solid evidence. The assessments are administered under consistent conditions by a team representing the U.S. Institute of Education Sciences; the students who take the tests are selected using scientific sampling procedures so that they represent their schoolmates in multiple ways (e.g., gender, ethnicity); the instruments are designed and prepared by educators whose very job is to create good tests; and the data are independently analyzed by very competent statisticians. To be sure, educators and the public might lament the results of the NAEP (woefully few students score at the proficient level on the tests), but it’s hard to consider the results inaccurate and dismiss them out of hand as fabricated, false, or unrepresentative.

So, on balance, Ms. Chenoweth’s observations do seem to indicate improvements in Chicago schooling. Yay! And, as an educator, I appreciate her shining a light on an educational success story.

Now, the twist: Where is special education going well? What objective measures do we have that would permit us to shine such a light on successes in our little corner of education? There are NAEP data about students with disabilities.