Editor’s note: One in an occasional series.
They seem almost quaint now, those days when California issued annual scores for each school based on how well its students performed on the state’s annual standardized tests.
Local newspapers scrambled to print the Academic Performance Index scores of every school in their circulation areas. Real estate agents flaunted them to parents looking for new homes, although high scores often reflected the demographics of the surrounding community (how affluent and educated it was) more than the quality of the schools.
The tests themselves were faulty. Some of the questions were downright inane. I recall seeing a pirated copy of a test that asked students what the best title for a reading passage would be. The “correct” answer was the blandest and most obvious, but not the ambitious and creative choice that someone like Steven Spielberg would have picked.
I know kids who got mediocre scores on the tests but went on to do brilliantly in college and graduate school in those same subjects. That tells you something.
The tests never examined real writing skills, so teachers spent less time teaching those and more time teaching what was on the exams. No wonder that, within a few years, colleges were complaining that students couldn’t write worth a damn and their analytical skills stank, too. And aside from outright cheating, there were some curious goings-on. My younger daughter’s 8th grade history teacher did three weeks of intensive review for the exam (time that could have been spent learning new things), and my daughter came out of the test saying it was amazing how almost all of the questions had been directly covered on her review worksheets. Yeah, amazing, maybe.
As silly as the testing frenzy got, however, it had certain value. The scores weren’t good at differentiating among the vast numbers of schools in the middle, but it was clear that students weren’t learning the most basic material in schools with abysmal scores. Finding out just how bad it was provided a necessary jolt of reality. In a large-scale, broad-brush kind of way, the scores gave an approximation of some key skills that students were or weren’t mastering. To some extent, they enabled parents to see which schools were improving and which were wallowing. It was easy to compare schools.
Unfortunately, the same can’t be said about the state’s new color-coded, multi-factor reports that are about to replace the API. I never thought I’d miss the old scores that emphasized the tyranny of a few days of standardized testing over everything else that students had accomplished all year. But California’s colorful attempt to broaden the criteria for judging schools is more a muddle than a rainbow and, strange to say, it’s starting to make the API look pretty decent.
The latest version of the “dashboard,” as the new reports are called, is certainly an improvement over the initial efforts. Those showed a hopelessly confusing grid of kindergarten-bright primary colors symbolizing each school’s performance.
And they covered an overly complicated range of factors. Some were clear and important, such as whether English learners were reaching the point of fluency in reasonable time. Others were mushier, such as “parent engagement.”
Certainly, involved parents are a big help to their kids, but how much does parent engagement at the school level matter when it comes to learning? If students aren’t learning to read and write well, high parent engagement is sweet but useless.
The new version, instead of mixing it all up in one grid, gives a separate sort of pie chart, a la the Trivial Pursuit game, for each factor being measured. But colors still indicate how well a school is doing. Why use two indicators in the same symbol just to say “good” or “bad”? Better yet, why not set a clear scoring benchmark, and then give simple scores for each category to show plainly and unequivocally how close schools come to it?
Worst of all, there is no overall score to show how well a school is doing, which means that school-to-school comparisons are nearly impossible. And the chart displays the different categories as though suspension rates matter as much as actual academic performance.
Gov. Jerry Brown’s original intention in dumping the API was to assess schools in more meaningful ways; the dashboard looks more like a way to obfuscate school performance. Brown talked idealistically about portfolio work, which makes a lot more sense.
That would provide a more meaningful look at the work that students perform throughout the year in order to gauge whether they’re learning. Of course, scoring computer exams is a lot cheaper than human oversight of portfolios, but there might be various ways to manage this by having schools cross-check each other, or performing the reviews every few years rather than annually.
There’s another way: Schools already go through an accreditation process, generally every five years, that takes a broad and deep look at school operations and has firm, clear benchmarks for passing. Why not put schools through those accrediting processes a little more often, say every three years, and include portfolios, tests and other measurements of student success? Those could be translated into useful, clear overall scores for school operations and student achievement.
It’s taken years to develop the messy dashboard, but that doesn’t mean we should have to live with it forever. Too bad Brown didn’t push his original idea more; maybe the next governor will consider doing so.
Karin Klein is a veteran California journalist and commentator who has written extensively on education. She can be contacted at email@example.com. Follow her on Twitter @kklein100.