The ACT is designed and has been empirically validated to assess student progress toward postsecondary readiness. Research by ACT and others has consistently established the validity of scores on the ACT—the most widely used college entrance examination in the United States—in predicting student success in credit-bearing first-year college coursework and a variety of college outcomes such as persistence to a second year of college at the same institution and ultimate degree attainment.
This is precisely consistent with ESSA’s requirement that state academic standards be aligned to “the entrance requirements for credit-bearing coursework in the system of higher education in the State” (Sec. 1111(b)(1)(D)(i)); the lawmakers expressed the same intent during the ESSA conference negotiations (H.Rept. 114-354, p. 426–427) when they stated: “It is the intent of the Conferees that existing assessments already widely recognized as validly measuring student performance, such as ACT or SAT exams, may, subject to approval described in this subparagraph, be selected and used.” ACT is currently working with states that administer the ACT for accountability purposes to obtain the approvals referenced by the conferees. ESSA thus makes clear that college readiness and mastery of high school academic standards should be one and the same; states are not faced with a binary choice between them.
For their part, most states had already revised or replaced their academic standards before enactment of ESSA to reflect the knowledge and skills students need to be ready for college by the time they graduate from high school. Thus, state academic standards are generally aligned with the knowledge and skills measured by the ACT, allowing it to provide valid and comparable results.
The ACT has advantages over other types of state assessments beyond that of predictive validity: nationally representative empirical data tying results to college readiness, longitudinal trend data, and consequential validity (e.g., increased motivation and effort of test takers, increased college-going rates in states administering the ACT for census testing), to name just three.
Further, use of the ACT is consistent with language in ESSA that seeks to minimize the time students spend taking assessments. The ACT generally requires less than half the time of existing state assessments because it focuses only on the critical and representative skills that students must demonstrate to be academically ready for college—the same knowledge and skills that are evident in the preponderance of state high school standards.
It is both right and proper—and consistent with the framework of ESSA—that states and districts should determine their own needs and priorities and choose an assessment that best suits them. ACT supports the right of states to augment the ACT with content-based questions focusing on unique aspects of their own academic standards. We contend, however, that such augmentation only supplements, rather than establishes, the validity of ACT scores for federal accountability purposes.
The ACT is one of the best available methods by which states and districts can determine the strengths and weaknesses of their educational programs, enabling them to make steady progress under ESSA toward helping more students graduate from high school and succeed afterward.