A college professor friend whose older daughter is at Harvard literally threw up her hands in frustration about the new PSAT, imploring me to explain it after her younger child had just received her scores.
The highest possible score had been ratcheted down a bit; at the same time, the percentiles – how students had scored compared with others – seemed unusually rosy. Rumors were swirling at her daughter’s high school in an educated, upper-middle-class town that the percentiles had been given a “bump” this year.
It’s a sad reminder of how, in the supercharged environment of college admissions, the PSAT is no longer just a little practice exam to check whether high school students are on track for the all-important SAT college admissions test. The business of applying to college is, indeed, an actual business, or rather, many businesses.
And two of those are the ACT and the College Board, purveyor of the SAT, which have been vying for market share. The territory used to belong lopsidedly to the SAT; in recent years, the ACT has surpassed it.
What does all this have to do with the PSAT percentile scoring? After all, the PSAT results aren’t sent to colleges. But in most of the country, where students have a choice between the two tests, the percentiles are seen as a toe in the water, an indicator of whether the student is better off trying the ACT instead.
That choice got clouded after the College Board made several changes in its scoring during this first year of the “new PSAT.” It’s all part of the recasting of the SAT that’s supposed to be more aligned with Common Core standards. For starters, the College Board slightly lowered the maximum possible score, a fair-minded but confusing attempt to show students how they would score on the more difficult SAT. This year, there are also “test scores” and “section scores.”
And if you’re not thoroughly dizzy by now, two changes were made to the closely watched percentile rankings. First, the College Board redefined percentiles to mean the percentage of test-takers who scored at or below a given student’s score, in keeping with how the ACT does it. The effect is to raise students’ percentiles a little.
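The difference between the two definitions is small but real. A rough illustration, using an invented ten-score distribution (the College Board’s actual computation, of course, uses its own data), shows why counting “at or below” rather than strictly “below” nudges every student’s percentile upward:

```python
# Hypothetical example: the same score looks better under the new
# "at or below" convention than under a strictly "below" convention.

scores = [400, 450, 450, 500, 520, 560, 600, 640, 680, 720]

def percentile_below(score, all_scores):
    """Percent of test-takers scoring strictly below the given score."""
    return 100 * sum(s < score for s in all_scores) / len(all_scores)

def percentile_at_or_below(score, all_scores):
    """New convention (matching the ACT): percent scoring at or below."""
    return 100 * sum(s <= score for s in all_scores) / len(all_scores)

my_score = 560
print(percentile_below(my_score, scores))        # 50.0
print(percentile_at_or_below(my_score, scores))  # 60.0
```

In this made-up distribution, a 560 sits at the 50th percentile under the old counting and the 60th under the new one; with millions of real test-takers the bump is smaller, but it is always in the student’s favor.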
Then, because there are no previous years of comparable scores on which to base percentiles, it came up with two different percentiles – the user percentile, based on a sample of students who took the PSAT, and the nationally representative number, which compares scores with high school students nationwide – not just those who are college bound, but also those who might have no interest in college or might be in danger of not receiving a diploma.
Obviously, the scores of college-bound students are going to look better than those of students overall, so guidance counselors have been noticing a big bump in the numbers of top-percentile students. And in a very bad decision, the College Board made those the only percentiles visible on student score reports. The more valid “user” percentile can be found, but only by following the instructions in the fine print.
Why did the College Board do this? It’s not saying, but there’s no getting around the effect: It makes students think they’re doing much better on the PSAT than they really are. And that makes them more likely to stick with the SAT than try the ACT.
If college-educated parents can’t figure this out, it’s painful to imagine how confused first-generation college applicants must be about the ever more complicated and stacked world of testing. The College Board, wittingly or not, just added to the problem.
Karin Klein is a freelance journalist in Orange County who has covered education, science and food policy. She can be contacted at email@example.com.