Test Optional Report - Response to Feedback
ACT
October 19, 2016
Category: Industry News | Research

We’ve received a good deal of feedback regarding our recent report on test-optional policies, More Information, More Informed Decisions: Why Test-Optional Policies Do NOT Benefit Institutions or Students. Some of the feedback has been positive, while some has been negative.
I thought it would be helpful to address some of the negative responses we’ve received, in order to clarify the report itself and ACT’s position on this matter.
Some of the critical feedback suggested that the conclusions of this report were self-serving and defensive. We anticipated this type of reaction, of course, and that’s why we included graphs illustrating the research findings on which we based those conclusions. Some of that research was conducted by ACT, but a good deal of it was conducted by independent, external sources.
But then there were those who misinterpreted the graphs. One individual, referencing a figure on page 4 of the ACT report, essentially argued that ACT was unfairly picking on low-performing students by illustrating that students with a low ACT composite score are not terribly successful academically in college even when they have a history of high grades. (The figure shows that if you rely on grades alone to predict academic success, you are missing much of the story—and that is true for students with an ACT score of 30, 20 or even 10.)
Test-based metrics are used widely across educational, employment, and organizational settings, including training programs, certification, licensure, and healthcare. Yet only in college admissions do we regularly hear calls for allowing individuals to choose which information is conveyed to, or withheld from, decision makers.
The graph in our report illustrates that two students with the exact same high school grades have very different probabilities of academic success when their ACT scores differ significantly. Of course the same argument applies to high school grades: Two students with the exact same test score have very different probabilities of academic success when one has a history of low grades in high school and another student has a history of high grades.
The overall message that the critic fails to address is as follows: Research across multiple domains has consistently demonstrated that test scores add significant value above and beyond other predictors, whether one is examining student achievement, job performance, or workplace competencies. Decision accuracy is improved when all valid indicators are considered—grades, course rigor, test scores, background experiences, opportunities, etc.
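The idea of "incremental validity" described above can be illustrated with a small simulation. The sketch below is a hypothetical example only, not ACT's methodology or data: it fits one model on simulated high school grades alone and another on grades plus a simulated test score, then compares how well each predicts a simulated success outcome. All variable names, parameters, and numbers are invented for illustration.

```python
# Hypothetical sketch of incremental validity: does adding a test score to
# high school GPA improve prediction of a college outcome? All data here are
# simulated for illustration only; they are not ACT data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000

hs_gpa = np.clip(rng.normal(3.2, 0.5, n), 0.0, 4.0)               # simulated high school GPA
test = np.clip(rng.normal(21 + 4 * (hs_gpa - 3.2), 4, n), 1, 36)   # correlated simulated test score
# Simulated "success" (e.g., first-year GPA at or above 2.0) depends on both predictors.
logit = -8.0 + 1.5 * hs_gpa + 0.15 * test
success = rng.random(n) < 1 / (1 + np.exp(-logit))

grades_only = LogisticRegression().fit(hs_gpa.reshape(-1, 1), success)
both = LogisticRegression().fit(np.column_stack([hs_gpa, test]), success)

auc_grades = roc_auc_score(success, grades_only.predict_proba(hs_gpa.reshape(-1, 1))[:, 1])
auc_both = roc_auc_score(success, both.predict_proba(np.column_stack([hs_gpa, test]))[:, 1])
print(f"AUC, grades only:        {auc_grades:.3f}")
print(f"AUC, grades + test score: {auc_both:.3f}")  # a higher AUC here is what "incremental validity" means
```

In a simulation like this, the model that sees both predictors discriminates between successful and unsuccessful students better than the grades-only model, which is the pattern the research described above reports for real admissions data.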
When a college makes test scores optional, it suggests that admissions officials must blindly weight those indicators in a mechanistic fashion and cannot make holistic decisions based on the sum and consistency of various sources of evidence and the specific needs of the institution. In addition, forcing students to decide when it is in their best interest to report or withhold their test scores invites gaming and other strategic behavior that may undermine the very students such test-optional policies are meant to help.
The basic question here is whether or not test scores add value to admissions. Quite frankly, if colleges did not see any value in test scores, then they would not be test-optional; they would be test-free. Colleges would not continue to use an instrument that did not offer incremental validity in admissions, placement, retention, diagnostics, and other important functions. Colleges wouldn’t accept test scores from thousands of applicants if the information did not supplement the high school record and provide a common metric to evaluate students from different schools, who completed different courses and were graded by different faculty using different standards.
The research findings are clear: About four out of every 10 college-bound students report an “A” average in high school courses, but their actual college grades tend to run nearly a full point lower (on a 4.0 scale). ACT’s research simply repeats much of what has been found in peer-reviewed scientific research conducted by independent scholars with no affiliation to testing organizations. A good example is this report, in which the authors conclude (p. 13): “Test-optional admissions policies, as a whole, have done little to meet their manifest goals of expanding educational opportunity for low-income and minority students. However, we find evidence that test-optional policies fulfill a latent function of increasing the perceived selectivity and status of these institutions.”
Each institution has the right to establish its own admission policies to meet its own needs, and ACT respects that right. However, claims that ACT scores add little to no validity above high school grades are simply not borne out by the data, and claims that test-optional policies result in greater campus diversity have not been substantiated in independent research.
Assessments contribute valuable information that can inform decisions in admissions, placement, hiring, accountability, certification, licensure, diagnosis, and instruction, to name just a few. Would you accept test-optional policies for certifying a pilot, licensing a pharmacist, or allowing a bank auditor access to your personal financial information? Do the colleges that adopt test-optional policies institute a similar option for course grades? Do they allow students to decide whether their grades are based only on papers, research projects, and class participation or do they require quizzes, tests, and final exams?
Most admissions professionals see the value in admissions tests and understand that, in the large majority of instances in which test scores confirm what high school grades indicate, that confirmation is reassuring, not a waste of time. Most also seek multiple sources of information and attempt to make important decisions based on all sources of data. ACT believes that test scores are a valuable source of data, and the research supports this conclusion.

Where FairTest Gets It Wrong
ACT
October 10, 2016
Category: Industry News | Research
In a recent report by the National Center for Fair & Open Testing (FairTest), “Assessment Matters: Constructing Model State Systems to Replace Testing Overkill,” the authors deem performance assessments as the preferred model for state assessment systems and detail their Principles for Assessment.
High-quality assessment is a critically important issue today, and using assessments to inform and enhance student learning is certainly one of their primary purposes; however, I disagree with many of FairTest’s conclusions.
Performance assessments often provide students with an opportunity to engage with extended, complex problems and situations that can be more authentic than a typical objective test question. As ACT has highlighted in our K–12 Policy Platform, assessment formats should vary according to the type of standards and the construct being measured; typically, a balance of question types provides the basis for a comprehensive evaluation of student achievement.
In advocating for performance assessments, FairTest incorrectly claims that multiple-choice assessments are limited “to facts and procedures and thereby block avenues for deeper learning.” As ACT research shows in “Reviewing Your Options: The Case for Using Multiple-Choice Test Items,” multiple-choice items can test higher-order thinking skills—by requiring students to, for example, apply information they have learned from a given scenario to a new situation, or recognize a pattern and use it to solve a problem—and do so in an efficient and cost-effective manner. Instead of being dogmatic about a particular assessment format, states and schools need to focus on what is being measured and try to balance innovation and sustainability.
The report also ignores some of the limitations of performance tasks:
they require significantly more time to complete, which reduces instructional time;
they sample relatively few skills, which means scores are based on only a very small subset of standards or content;
they are often highly expensive to create and score, which delays score reporting; and
they have lower reliability (and score precision) than multiple-choice tests (see the sketch below).
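One reason performance tasks tend to have lower reliability is simply that a test built from a few long tasks samples fewer independent pieces of evidence than a test built from many short items. The Spearman-Brown prophecy formula, sketched below, shows how estimated reliability grows with the number of comparable tasks; the single-task reliability and task counts used here are hypothetical values chosen for illustration, not figures from the FairTest report or from ACT data.

```python
# Spearman-Brown prophecy formula: estimated reliability of a test built from
# k comparable tasks, given the reliability r1 of a single task.
# The single-task reliability and task counts below are hypothetical.
def spearman_brown(r1: float, k: float) -> float:
    return (k * r1) / (1 + (k - 1) * r1)

single_task_reliability = 0.30  # assumed reliability of one performance task
for k in (2, 5, 20, 60):        # e.g., a few long tasks vs. many short items
    print(f"{k:>3} tasks -> estimated reliability {spearman_brown(single_task_reliability, k):.2f}")
```

Under these assumed numbers, a test made of two or five extended tasks remains well below the reliability of a form built from dozens of shorter items, which is the trade-off the list above describes.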
Related to FairTest’s Principles for Assessment, I disagree that assessment systems should be decentralized and primarily practitioner developed and controlled. To create a fair, valid, and reliable assessment is difficult and time-consuming work. Before a question is placed on the ACT and scored, a number of very extensive and detailed processes need to occur, including multiple reviews by internal and external experts to ensure the item is measuring what it claims to measure and is not introducing irrelevant information that may make it more difficult for some students to access.
For example, at ACT we try to reduce the language load on math items to ensure that they measure math and not a student’s reading ability. Other testing programs may include extensive reading passages and context in presenting a math item, but we need to ask ourselves: Does the heavy reading load disadvantage a student with limited English experience who otherwise is highly proficient in mathematics? The reviews also ensure that all test questions are culturally sensitive and that test forms as a whole include a balance in terms of culture, gender, and life experience.
Further, test forms are created to match particular content and statistical specifications. This helps ensure that the assessments are comparable across time, which is necessary to maintain the longitudinal trends used to monitor achievement gaps or to measure growth within a classroom, across districts, or across schools within a state.
Finally, FairTest includes among its principles that students should exercise significant control where appropriate, for example by deciding whether to include SAT or ACT scores in their college applications. As highlighted in recent ACT research, “More Information, More Informed Decisions,” more sources of student information—not fewer—are needed to better understand a student’s preparedness for college.
In ignoring the realities of cost, both in teacher time and in dollars, that states face in developing their assessment systems, as well as the need for fairness, reliability, and validity in the construction and administration of tests, FairTest inflates some good ideas for innovative item formats into a “system” that many, if not most, states will find difficult to construct or unworkable at scale.
ACT advocates for a holistic view of student learning using multiple sources of valid and reliable information. Performance assessments and teacher-created assessments can be one source of information, but for most states, relying on them exclusively is not feasible due to technical capacity and costs.