Despite the provocative headlines, this study really doesn’t tell us anything about how helpful test scores are compared to grades in predicting success in college. That’s because both the design of the study and the interpretation of the findings were flawed.
The focus of the study was to examine the consistency of high school GPA (HSGPA) and ACT scores as indicators of readiness for college across high schools. Unfortunately, that is not the correct approach for making claims about the predictive utility of admission measures. To support such claims, the study should have focused on the validity of test scores for predicting college success within colleges.
Studies taking the latter approach—and there have been many such studies—have reached substantively different conclusions than the University of Chicago study, namely, that test scores are predictive of college success, including college graduation, and contribute uniquely to this prediction above and beyond HSGPA.
For example, a recent report out of California independently concluded that test scores add value to the admission decision process:
“… standardized test scores aid in predicting important aspects of student success, including undergraduate grade point average (UGPA), retention, and completion. At UC, test scores are currently better predictors of first-year GPA than high school grade point average (HSGPA), and about as good at predicting first-year retention, UGPA, and graduation.” (p. 3)
Common Yardstick Hypothesis
The University of Chicago study set out to evaluate the “common yardstick” hypothesis: specifically, that test scores are useful in the college admissions process because they provide a common yardstick of students’ academic readiness across students, schools, districts, and states, while HSGPA may vary in meaning across different schools given different grading standards and school quality.

Their results actually show that ACT scores are a more consistent indicator of academic preparation than HSGPA across schools. In particular, the authors state that “HSGPAs are not equivalent measures of readiness across high schools” and that “school-level variance in college graduation rates is one quarter smaller among students with the same ACT score than students with the same HSGPA” (p. 12). For example, the researchers found that the estimated college graduation rate for students with a HSGPA of 3.75 or higher ranged from below 40 percent at one high school to above 90 percent at another, clearly illustrating that an “A” average does not indicate the same level of readiness across high schools. For test scores, by contrast, they found less variability in graduation rates between schools within a given ACT range.
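To make that variance comparison concrete, here is a minimal sketch, in Python on synthetic data, of how one might compute between-school variability in graduation rates among students who share the same score band. The column names, band cutoffs, and data are all illustrative assumptions, not the study’s data or code; on random data like this the two numbers come out similar, whereas the study reported the ACT-conditioned variance to be about a quarter smaller.

```python
import numpy as np
import pandas as pd

# Synthetic student-level data; all column names and values are hypothetical.
rng = np.random.default_rng(1)
n = 20_000
df = pd.DataFrame({
    "school": rng.integers(0, 40, n),
    "hsgpa_band": rng.choice(["<2.0", "2.0-2.7", "2.7-3.3", "3.3-3.75", ">=3.75"], n),
    "act_band": rng.choice(["1-15", "16-20", "21-25", "26-30", "31-36"], n),
    "graduated": rng.integers(0, 2, n),
})

def between_school_variance(data: pd.DataFrame, band_col: str) -> float:
    """Within each score band, compute each school's graduation rate, take the
    variance of those rates across schools, then average across bands
    (weighted by band size)."""
    variances, weights = [], []
    for _, band in data.groupby(band_col):
        school_rates = band.groupby("school")["graduated"].mean()
        variances.append(school_rates.var())
        weights.append(len(band))
    return float(np.average(variances, weights=weights))

# A smaller value means the measure pins down readiness more consistently
# across high schools: the "common yardstick" property.
print("between-school variance | same HSGPA band:", between_school_variance(df, "hsgpa_band"))
print("between-school variance | same ACT band:  ", between_school_variance(df, "act_band"))
```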
However, when it comes to validity evidence for the use of test scores in college admissions, the analytical approach and sample of the study fall short.
Analytical Approach
Traditional validity studies examine the relationship between admission measures and college outcomes within colleges. This study, in contrast, examines the relationship between admission measures and college outcomes within high schools. As such, the study’s approach is not consistent with how colleges actually use these measures: colleges need to evaluate applicants across high schools, districts, and states. The analytical approach used in this study can make test scores appear less useful than they are, for two reasons (illustrated with a small simulation after the list):
- There is less variability in test scores than in HSGPA within schools, both because of how the measures are designed (HSGPA compares and ranks students within a high school; test scores compare and rank students across high schools) and because students are assigned to high schools based on where they live. This restriction of range artificially lowers the correlation between test scores and college outcomes when it is examined within a single high school.
- Test scores help colleges and universities contextualize high school grades and are often used as a measure of school quality. Examining the relationship within high schools therefore makes HSGPA look more predictive, and test scores less predictive, of college outcomes, because school quality is effectively controlled for in these analyses.
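To illustrate both mechanisms, here is a minimal simulation sketch in Python. The data are synthetic and every parameter value is an illustrative assumption, not an estimate from any real dataset: school quality shifts both preparation and outcomes, the test is normed across schools, and GPA ranks students within a school.

```python
import numpy as np

rng = np.random.default_rng(0)
n_schools, n_per = 50, 200  # illustrative sizes

tests, gpas, outcomes, schools = [], [], [], []
for s in range(n_schools):
    quality = rng.normal(0.0, 1.0)                   # school quality (assumed)
    ability = quality + rng.normal(0.0, 1.0, n_per)  # readiness = school + student
    test = ability + rng.normal(0.0, 0.5, n_per)     # test is normed ACROSS schools
    gpa = ((ability - ability.mean()) / ability.std()
           + rng.normal(0.0, 0.5, n_per))            # GPA ranks students WITHIN the school
    y = ability + rng.normal(0.0, 1.0, n_per)        # college outcome tracks readiness
    tests.append(test); gpas.append(gpa); outcomes.append(y)
    schools.append(np.full(n_per, s))

test, gpa = np.concatenate(tests), np.concatenate(gpas)
y, school = np.concatenate(outcomes), np.concatenate(schools)

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

# Pooled analysis: comparing applicants across schools, as colleges must.
print(f"pooled:        test r = {corr(test, y):.2f}, gpa r = {corr(gpa, y):.2f}")

# Within-school analysis (the study's frame): the test's range is restricted
# and school quality is held constant, so its advantage over GPA disappears.
wt = np.mean([corr(test[school == s], y[school == s]) for s in range(n_schools)])
wg = np.mean([corr(gpa[school == s], y[school == s]) for s in range(n_schools)])
print(f"within-school: test r = {wt:.2f}, gpa r = {wg:.2f}")
```

In this setup the test correlates more strongly with the outcome than GPA when applicants are compared across schools, yet the two look nearly identical within schools, even though nothing about the test’s usefulness to an admissions office has changed.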
The Sample
The sample used in the study is also problematic.

One issue is a lack of representation within a college. Traditional validity studies examine the relationships of HSGPA and test scores with college graduation among students within colleges. Unfortunately, the authors did not analyze the data within colleges due to sample limitations (“students in our sample make up only a small fraction of the students at any college and might differ considerably from typical students at the college they attend,” p. 5). Instead, they used a handful of institutional characteristics to account for differences in the college attended. Whether those characteristics were sufficient and, moreover, appropriate remains an open question.
Another issue with the sample concerns a lack of variability in the high schools included. The sample consisted of students in a single public school district. We would expect HSGPA to be more consistent within a district that has a shared curriculum, learning objectives, policies and practices. If the sample had spanned multiple districts, which would more accurately reflect the applicant data that colleges receive, the finding that “HSGPAs are not equivalent measures of readiness across high schools” would likely be even more pronounced.
Holistic View of College Readiness: The Importance of Multiple Measures
In summary, what this study actually tells us is that ACT scores are a more consistent indicator of academic readiness for college than HSGPA across high schools. What it doesn’t tell us is that HSGPA is better than ACT scores at predicting college success (let alone five times better!).
But, more importantly, that’s not the point. Test scores and HSGPA are not in competition with each other. High-stakes decisions such as admission to college aren’t, and shouldn’t be, based on a single measure. It’s much better to take a holistic view of students and consider multiple factors.
Therefore, the goal shouldn’t be to identify the single best predictor or to pit different measures against each other, but rather to understand the combination of relevant factors needed for college success: factors that are predictive of future performance and that provide unique information about an applicant’s readiness to succeed.
Let’s focus on asking the right research questions, answering them with the right approach, and using what those results tell us to improve the college admission process.