ACT Newsroom & Blog

Test Optional Report - Response to Feedback

We’ve received a good deal of feedback regarding our recent report on test-optional policies, More Information, More Informed Decisions: Why Test-Optional Policies Do NOT Benefit Institutions or Students. Some of the feedback has been positive, while some has been negative.

I thought it would be helpful to address some of the negative responses that we’ve received in order to help improve understanding of the report itself and ACT’s position on this matter.

Some of the critical feedback suggested that the conclusions of this report were self-serving and defensive. We anticipated this type of reaction, of course, and that’s why we included graphs illustrating the research findings on which we based those conclusions. Some of that research was conducted by ACT, but a good deal of it was conducted by independent, external sources.

But then there were those who misinterpreted the graphs. One individual, referencing a figure on page 4 of the ACT report, essentially argued that ACT was unfairly picking on low-performing students by illustrating that students with a low ACT composite score are not terribly successful academically in college even when they have a history of high grades. (The figure shows that if you rely on grades alone to predict academic success, you are missing much of the story—and that is true for students with an ACT score of 30, 20 or even 10.)

Test-based metrics are used widely across educational, employment and organizational settings, in training programs, certification, licensure and healthcare. Yet only in college admissions do we regularly hear calls for allowing individuals a choice in determining what information should be conveyed or hidden from decision makers.

The graph in our report illustrates that two students with the exact same high school grades have very different probabilities of academic success when their ACT scores differ significantly. Of course the same argument applies to high school grades: Two students with the exact same test score have very different probabilities of academic success when one has a history of low grades in high school and another student has a history of high grades.

The overall message that the critic fails to address is as follows: Research across multiple domains has consistently demonstrated that test scores add significant value above and beyond other predictors, whether one is examining student achievement, job performance, or workplace competencies. Decision accuracy is improved when all valid indicators are considered—grades, course rigor, test scores, background experiences, opportunities, etc.
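
To make the statistical point concrete, here is a minimal synthetic-data sketch of what "incremental validity" means. Everything in it is invented for illustration (the latent readiness variable, the noise levels, the success cutoff); it is not ACT's data or ACT's models. It simply shows a second, partly independent predictor improving classification beyond grades alone:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic data: every variable below is invented for illustration.
rng = np.random.default_rng(0)
n = 10_000
ability = rng.normal(size=n)                       # latent readiness (never observed directly)
hs_gpa = ability + rng.normal(scale=1.0, size=n)   # grades: one noisy signal of readiness
test = ability + rng.normal(scale=1.0, size=n)     # test score: a second, partly independent signal
success = (ability + rng.normal(scale=0.5, size=n) > 0).astype(int)  # first-year "success"

X_gpa = hs_gpa.reshape(-1, 1)
X_both = np.column_stack([hs_gpa, test])

auc_gpa = roc_auc_score(success, LogisticRegression().fit(X_gpa, success).predict_proba(X_gpa)[:, 1])
auc_both = roc_auc_score(success, LogisticRegression().fit(X_both, success).predict_proba(X_both)[:, 1])
print(f"AUC, grades only:   {auc_gpa:.3f}")
print(f"AUC, grades + test: {auc_both:.3f}")       # consistently higher: the score adds information

Run repeatedly, the two-predictor model posts a consistently higher AUC, which is the same qualitative pattern the report's graphs describe with real applicants.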

When a college makes test scores optional, it implies that admissions officials would otherwise weight those indicators blindly and mechanically, unable to make holistic decisions based on the sum and consistency of various sources of evidence and the specific needs of the institution. In addition, forcing students to determine when it is in their best interest to report or suppress their test scores invites gaming and other strategies that may undermine the very students test-optional policies are supposed to help.

The basic question here is whether or not test scores add value to admissions. Quite frankly, if colleges did not see any value in test scores, then they would not be test-optional; they would be test-free. Colleges would not continue to use an instrument that did not offer incremental validity in admissions, placement, retention, diagnostics, and other important functions. Colleges wouldn’t accept test scores from thousands of applicants if the information did not supplement the high school record and provide a common metric to evaluate students from different schools, who completed different courses and were graded by different faculty using different standards.

The research findings are clear: About four out of every 10 college-bound students report an “A” average in high school courses, but their actual college grades tend to run nearly a full point lower (on a 4.0 scale). ACT’s research simply repeats much of what has been found in peer-reviewed scientific research conducted by independent scholars with no affiliation to testing organizations. A good example is this report, in which the authors conclude (p. 13): “Test-optional admissions policies, as a whole, have done little to meet their manifest goals of expanding educational opportunity for low-income and minority students. However, we find evidence that test-optional policies fulfill a latent function of increasing the perceived selectivity and status of these institutions.”

Each institution has the right to establish its own admission policies to meet its own needs, and ACT respects that right. However, claims that ACT scores add little-to-no validity above high school grades are simply not borne out by the data, and claims that test optional policies result in greater campus diversity have not been substantiated in independent research.

Assessments contribute valuable information that can inform decisions in admissions, placement, hiring, accountability, certification, licensure, diagnosis, and instruction, to name just a few. Would you accept test-optional policies for certifying a pilot, licensing a pharmacist, or allowing a bank auditor access to your personal financial information? Do the colleges that adopt test-optional policies institute a similar option for course grades? Do they allow students to decide whether their grades are based only on papers, research projects, and class participation or do they require quizzes, tests, and final exams?

Most admissions professionals see the value in admissions tests and understand that, in the large majority of instances, when test scores confirm what high school grades indicate, that confirmation is reassuring, not a waste of time. Most also seek multiple sources of information and attempt to make important decisions based on all available data. ACT believes that test scores are a valuable source of data, and the research supports this conclusion.

Where FairTest Gets It Wrong

In a recent report by the National Center for Fair & Open Testing (FairTest), “Assessment Matters: Constructing Model State Systems to Replace Testing Overkill,” the authors identify performance assessments as the preferred model for state assessment systems and detail their Principles for Assessment.

The issue of high-quality assessment is critically important today, and using assessments to inform and enhance student learning is certainly one of their primary purposes; however, I disagree with many of the report’s conclusions.

Performance assessments often provide students with an opportunity to engage in extended, complex problems and situations that can be more authentic than a typical objective test question. As ACT has highlighted in our K–12 Policy Platform, assessment formats should vary according to the type of standards and the construct to be measured; typically, a balance of question types provides the basis for a comprehensive evaluation of student achievement.

In advocating for performance assessments, FairTest incorrectly claims that multiple-choice assessments are limited “to facts and procedures and thereby block avenues for deeper learning.” As ACT research shows in “Reviewing Your Options: The Case for Using Multiple-Choice Test Items,” multiple-choice items can test higher-order thinking skills—by requiring students to, for example, apply information they have learned from a given scenario to a new situation, or recognize a pattern and use it to solve a problem—and do so in an efficient and cost-effective manner. Instead of being dogmatic about a particular assessment format, states and schools need to focus on what is being measured and try to balance innovation and sustainability.

The report also ignores some of the limitations of performance tasks:
they require significantly more time to complete, which reduces instructional time;
they sample relatively few skills, which means scores are based on only a very small subset of standards or content;
they are often highly expensive to create and score, which delays score reporting; and
they have lower reliability (and score precision) than multiple-choice tests.
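
The reliability point in the last bullet follows from basic psychometrics. One standard way to see it (my illustration, not an analysis from FairTest or ACT) is the Spearman-Brown formula for the reliability of a lengthened test:

$\rho_k = \dfrac{k\,\rho_1}{1 + (k-1)\,\rho_1}$

Here $\rho_1$ is the reliability of a test of unit length and $k$ is the length multiplier. If a single extended task yields $\rho_1 = 0.4$, a three-task assessment reaches only about $3(0.4)/(1 + 2 \cdot 0.4) \approx 0.67$, while a multiple-choice test covering the equivalent of fifteen such units reaches roughly $15(0.4)/(1 + 14 \cdot 0.4) \approx 0.91$. Fewer independent samples of student work mean less precise scores.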

Related to FairTest’s Principles for Assessment, I disagree that assessment systems should be decentralized and primarily practitioner developed and controlled. Creating a fair, valid, and reliable assessment is difficult and time-consuming work. Before a question is placed on the ACT and scored, a number of extensive and detailed processes need to occur, including multiple reviews by internal and external experts to ensure the item measures what it claims to measure and does not introduce irrelevant information that may make it more difficult for students to access.

For example, at ACT we try to reduce the language load on math items to ensure that they measure math and not a student’s reading ability. Other testing programs may include extensive reading passages and context in presenting a math item, but we need to ask ourselves: Does the heavy reading load disadvantage a student with limited English experience who otherwise is highly proficient in mathematics? The reviews also ensure that all test questions are culturally sensitive and that test forms as a whole include a balance in terms of culture, gender, and life experience.

Further, test forms are created to match particular content and statistical specifications. This helps ensure that the assessments are comparable across time, which is necessary to maintain the longitudinal trends used to monitor achievement gaps or measure growth within a classroom, across districts, and across schools within a state.

Finally, FairTest includes among its principles that students should exercise significant control where appropriate, for example by deciding whether to include SAT or ACT scores in their college applications. As highlighted in recent ACT research, “More Information, More Informed Decisions,” more sources of student information—not fewer—are needed to better understand a student’s preparedness for college.

In ignoring the realities of cost (both teacher time and money) that states face in developing their assessment systems, along with the need for fairness, reliability, and validity in the construction and administration of tests, FairTest inflates some good ideas for innovative item formats into a “system” that many, if not most, states will find difficult to construct or unworkable at scale.

ACT advocates for a holistic view of student learning using multiple sources of valid and reliable information. Performance assessments and teacher-created assessments can be one source of information, but for most states, relying on them exclusively is not feasible due to technical capacity and costs.

Inclusion and Opportunity

College was once a privilege afforded to the fortunate few. For most high school graduates, a diploma was the end of the educational road. The road may not have been paved with gold, but it did lead to steady employment in stable careers that paid solid middle class wages. That was true in the United States, and perhaps even more the case in Europe, where I was born, raised, and educated.

Today, few of these assumptions endure. Instead of going straight to work, most high school graduates on both sides of the Atlantic enter some form of postsecondary education. Still, even with a college degree or vocational certification, it’s likely the modern millennial will have many jobs and even professions before easing into retirement—whatever that might look like a half century from now.

At ACT, we had the privilege of testing 64 percent of the U.S. high school Class of 2016—nearly 2.1 million students in all. What we found was both encouraging and sobering.

On the sobering side, the average composite score on the ACT declined slightly to 20.8, down from 21.0 last year. Across all graduates, 38 percent met the ACT College Readiness Benchmarks in at least three of the four core subject areas tested (English, math, reading, and science), an achievement that indicates they’re ready for first-year college success.

The flip side is that, based on their scores, 62 percent of graduates are not prepared. Worse, 34 percent met none of the ACT benchmarks, suggesting they will likely struggle with what comes next.

Still, while average scores are down, that doesn’t necessarily imply that the performance of this year’s graduates is worse.

How is that possible?

The reason is that we are testing more students than ever: 64 percent of graduates this year, five percentage points more than a year ago. As a result, our findings include an additional 100,000 students who likely would not have tested in previous years and who tend to score somewhat lower than earlier testers.
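
A back-of-the-envelope calculation (a simplification of mine, assuming the roughly 2.0 million students who would have tested anyway again averaged 21.0) shows how much the expanded pool alone can move the composite:

$\dfrac{2.1\text{M} \times 20.8 \;-\; 2.0\text{M} \times 21.0}{0.1\text{M}} \approx 16.8$

In other words, a national average of 20.8 is exactly what you would expect if returning-population performance held steady and the 100,000 additional testers averaged about 16.8. A lower mean can reflect a broader pool rather than weaker performance by comparable students.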

Broad-based participation in the assessment process is a victory—for our society, which gets a more accurate perspective of America’s academic achievement, and for the nearly 2.1 million graduates who took the ACT, who now have a better understanding of the full range of opportunities available to them.

While 64 percent is a big number, an even bigger number is 84 percent—the percentage of this year’s tested seniors who aspire to postsecondary education.

The opportunities available to these students are considerable. By including them in the assessment process, we also include them in the conversations they are having with their counselors, parents, potential schools and training institutions, and, most importantly, themselves.

By better understanding where they stand, they can better appreciate where they need to go next. While their world may not resemble that experienced by their parents and grandparents, it is also likely to include opportunities few of us can even imagine.

And that, for all the students who will follow in our footsteps, is victory.

Improving Security for International Testing

Piracy is an international crime that accounts for an estimated $300 billion in lost intellectual property (IP) revenues (The Commission on the Theft of American Intellectual Property, 2013). The theft of IP also creates a significant drag on United States gross domestic product and diminishes future innovation by businesses. ACT, like other companies across the globe, is affected by piracy, and while we cannot stop it entirely, we intend to address it head-on.

ACT’s tests are taken by millions of students in the US and around the world every year; they are trusted by parents, accepted by every four-year college and university in the country, and used by scholarship agencies to make high-stakes decisions. ACT knows how important trusted, valid results are to those who take our tests and use our scores, and we are committed to ensuring the validity of our assessments. We regularly refresh our test questions and forms, and we are continually improving our testing processes to ensure a fair experience for our test takers.

While the vast majority of test takers are honest, a small number of individuals—and a growing number of adults and organized fraud rings—are unfortunately seeking to undermine the system for their own financial gain, jeopardizing the hard work of honest test takers.

We realize the importance individuals and institutions place on the scores generated by our tests. We are committed to doing our part to curtail this type of fraudulent behavior by not only monitoring and addressing specific issues as they occur, but also by improving our test development and delivery processes to assure students, institutions, and the public that the scores we report are valid and reliable. We intend to do this while also maintaining the highest degree of access for test takers.

ACT has always sought to regularly improve our testing processes, and, to that end, we are aggressively planning for the development and launch of a Computer Adaptive Test (CAT) version of the ACT® test that will be implemented for international testing in the fall of 2017. More details on ACT’s International CAT will be forthcoming in the next week.

A CAT design allows for quicker scoring and turnaround of results for examinees, yields a shorter assessment, and, because each test is uniquely assembled based on the test taker’s responses, is more secure and less prone to security threats. ACT’s desire has always been to innovate and advance the field of measurement. In doing so, we also hope to make it more secure and, therefore, more reliable.
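
For readers unfamiliar with what "uniquely assembled based on the test taker's responses" looks like in practice, here is a toy sketch of the core adaptive loop. The item pool, the one-parameter response model, and the crude step-size ability update are all invented for illustration; this is not ACT's algorithm or item pool:

import math
import random

random.seed(1)
pool = [random.uniform(-3, 3) for _ in range(200)]    # invented item difficulties

def p_correct(theta, b):
    # Rasch (one-parameter) model: chance that ability theta answers an item of difficulty b correctly
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def run_cat(true_theta, n_items=15):
    items = list(pool)
    theta, step = 0.0, 1.0                            # start the estimate at the population mean
    for _ in range(n_items):
        b = min(items, key=lambda d: abs(d - theta))  # most informative item: difficulty nearest the estimate
        items.remove(b)                               # never reuse an item, so each sequence is unique
        correct = random.random() < p_correct(true_theta, b)
        theta += step if correct else -step           # crude update; real CATs use likelihood-based estimates
        step = max(0.6 * step, 0.2)                   # shrink the step as the estimate settles
    return theta

print(f"estimated ability: {run_cat(true_theta=1.2):.2f}")  # should land in the vicinity of 1.2

Because each response steers which item comes next, two examinees almost never see the same sequence, which is what makes wholesale preknowledge of a fixed form far less useful to cheaters.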

ACT encourages anyone who has concerns about testing irregularities to report them via our anonymous Test Security Hotline.

ACT is also reaching out to National Association for College Admission Counseling (NACAC) leadership and to others in the admissions testing industry to discuss how—together—we can do more to limit the negative impact of cheating in higher education. We look forward to further conversations and to ensuring ongoing confidence in ACT’s results.

Measuring Science Skills

“If you cannot measure it, you cannot improve it,” said the famous physicist Lord Kelvin.

As an organization committed for more than 55 years to helping people achieve education and workplace success, ACT firmly believes that measuring students’ college and career readiness in English, math, reading, and science will help improve their readiness.

In science, improving students’ knowledge and performance is more critical than ever: The U.S. Department of Commerce estimates that jobs in the fields of science, technology, engineering, and math (STEM) will grow 17 percent by 2018 and that more than 1.2 million of these jobs will go unfilled because of a lack of qualified workers.

Several states have enacted laws that explicitly require students’ science skills be tested. In addition, the recently reauthorized Elementary and Secondary Education Act—the Every Student Succeeds Act—upholds the importance of science testing in elementary, middle, and high school.

While other nationally recognized high school tests, such as the SAT, reference science content—if they do so at all—only in the context of assessing reading, writing/language, and mathematics skills, the ACT® test has a full, separate science test with 40 questions devoted to measuring skills and knowledge deemed important for success in first-year college science coursework. The constructs measured on the ACT science test are unique and different from those measured by the ACT math and reading tests.

The inclusion of both a math and a science test allows ACT to offer examinees a STEM score, which represents their overall performance on the two tests. Only through the comprehensive measurement of both math and science skills can this unique score be determined.
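
This post does not give the formula, but as an illustration, if the STEM score is taken to be the rounded average of the math and science scale scores (an assumption of mine, not a statement of ACT's scoring method), a student with a 26 in math and a 25 in science would receive:

$\text{STEM} = \operatorname{round}\!\left(\tfrac{26 + 25}{2}\right) = \operatorname{round}(25.5) = 26$ (rounding half up)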

The ACT test has empirically derived benchmark scores that indicate readiness for success in first-year college courses in each individual subject area measured, including science. And our new STEM benchmark score indicates whether a student is well prepared for the types of first-year college courses required for a college STEM-related major.

The science test on every ACT test form includes at least one passage on each of the science disciplines that are most often offered to students in high school—biology, chemistry, Earth/space science, and physics.

In fact, science educators who participated in the recently released 2016 ACT National Curriculum Survey overwhelmingly prefer a stand-alone science assessment with authentic scientific scenarios. Eighty-six percent of middle school teachers, 89 percent of high school teachers, and 87 percent of college instructors felt that such a test is a better assessment of science knowledge than either science-oriented questions included in a math test or questions on an English or reading test involving science-oriented topics.

Of the 1.9 million graduates who took the ACT in 2015, 49 percent declared an interest in STEM majors and careers. These students need to be prepared for STEM jobs, so why in the world would we cut back on measuring students’ science knowledge and skills?

If we want students who are prepared for the millions of science, technology, engineering, and math jobs of the future, we must invest in teaching them science skills. But we also must assess their performance to measure what they have learned and to identify areas in need of improvement. The ACT is the only nationally recognized high school assessment that does this.

ACT Perspective on Fisher Decision

As is clear to most observers, not every student enjoys the same advantages as they advance through the K-12 educational system. Too often those disparate experiences not only impair their personal academic outcomes, they also limit the opportunities those students might have had to contribute their distinct perspectives to the colleges they might have attended—and, in the longer run, to contribute to the vitality of the communities they represent and to the prosperity and well-being of our country as a whole.

For generations, ACT has advocated that colleges and universities must use admissions criteria that are valid, reliable, holistic, and effective—and embrace the full range of students who could benefit from higher education. We believe the U.S. Supreme Court Fisher decision, to uphold the University of Texas’ efforts to promote diversity and inclusion, is consistent with that holistic perspective.

“A university is in large part defined by those intangible qualities which are incapable of objective measurement but which make for greatness,” Justice Anthony Kennedy wrote, adding “Considerable deference is owed to a university in defining those intangible characteristics, like student body diversity, that are central to its identity and educational mission.”

On June 22, just one day before the Fisher decision was announced, we launched the ACT Center for Equity in Learning, which will advocate for underserved students and young working learners.

In some ways, our timing could not have been more fortuitous.

Building on ACT’s core strengths in the high school to postsecondary years, the Center's initiatives will reflect ACT's interests in both college and career readiness and highlight the use of data, evidence, and thought leadership to close gaps in equity and achievement.

Until the quality of education is uniformly high for every student, the Center—and all of our society—still has work to do. As we strive to reach that ambitious standard of equality of opportunity for every young person, we appreciate and applaud the court’s counsel to use “valuable data about…different approaches to admissions” to “foster diversity” rather than “dilute it.”

ACT Named Winner of Culture of Innovation Award

ACT received the Culture of Innovation Award at the Chief Innovation Officer Summit on May 18, 2016, in San Francisco. The award recognizes the comprehensiveness of ACT’s innovation programs, processes, and platforms.

ACT’s Culture of Innovation Award was one of four Strategy and Innovation Awards announced at the summit. The summit’s Strategy and Innovation Advisory Board, composed of high-level executives working in strategy and innovation at major corporations and organizations, selected the winners for their “exceptional efforts in strengthening business performance and growth.” Learn more about the awards here.

In announcing the award, ACT CEO Marten Roorda congratulated all ACT team members for their engagement as innovators, saying that “developing and driving a culture of innovation involves everyone at ACT.”