
ACT Newsroom & Blog


Transforming Test Development

Equity, individualization, and the removal of barriers: all are issues that continue to challenge the field of education.

Suppose you created your own unique profile based on your interests, goals, experiences, and cultural background. Now suppose your educational experience, including the tests you take, were individualized to align with that specific profile.

ACT will use automatic item generation (AIG) to help address these very challenges.

What is AIG? Essentially, it is the use of a computer program to create test questions (or “items”) for an exam. Traditionally, items used by ACT and other testing organizations are created by people. They are then edited, revised, pretested (and revised again), and put through a rigorous review process (and perhaps revised again) before they make it onto an actual test.

AIG replaces the first part of this model—the initial creation of test questions by people—with item creation by a computer program.

How does it work? It’s technical, but it involves a structured template, called a cognitive model, that represents the way humans identify and associate variables related to a variety of topics. A computer then uses that cognitive model to generate test questions of various types. While the technology and terminology may be unfamiliar to most people, AIG is a proven operational technique that has been rigorously tested over the past decade.
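To make the idea concrete, here is a minimal, hypothetical sketch of template-based item generation. It is a toy illustration, not ACT’s actual system: a template carries variable slots, the model supplies the value ranges and the answer rule, and the program fills in every combination.

```python
import itertools

# A toy "cognitive model" for one math item type: a stem template with
# variable slots, plus a rule for computing the correct answer (the key).
# Hypothetical example only, not ACT's production AIG software.
TEMPLATE = "A train travels {speed} miles per hour for {hours} hours. How far does it travel?"

def generate_items(speeds, hours_options):
    """Generate one item per valid combination of variable values."""
    items = []
    for speed, hours in itertools.product(speeds, hours_options):
        stem = TEMPLATE.format(speed=speed, hours=hours)
        key = speed * hours  # answer rule encoded in the cognitive model
        # Distractors are derived from plausible misconceptions, e.g.
        # adding instead of multiplying, or an off-by-one on hours.
        distractors = sorted({speed + hours,
                              speed * (hours + 1),
                              speed * (hours - 1)} - {key})
        items.append({"stem": stem, "key": key, "distractors": distractors})
    return items

items = generate_items(speeds=[40, 55, 60], hours_options=[2, 3])
print(len(items))         # 6 items from a single model
print(items[0]["stem"])
```

A single model like this yields a whole family of parallel items; real cognitive models also encode constraints among the variables so that every generated question is well formed.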

Why would we use AIG? There are many reasons, the first being efficiency. AIG can generate many more test questions in much less time than a person can. It can make test development less time-consuming and help contain costs in the future.

AIG can be used to generate open-ended and multiple-choice questions. It can generate math questions, science questions, grammar questions, questions based on text passages, and more. The possibilities are vast and promising.

You might think test development professionals would see AIG as a threat, but in my experience they see it as an opportunity. Once they begin creating cognitive models, they soon enjoy capturing how individuals identify and assemble variables to create meaningful questions. Content specialists quickly learn that even a small variation within a single variable can distinguish students who have a strong understanding of the content from those who do not. This helps educators diagnose learning gaps, and ultimately helps students learn.

AIG enhances the value of content specialists, extracting their knowledge and putting it to use in a much more efficient way. Engaging content experts upfront in the modeling process results in work that is not only more scalable but also more stimulating, more impactful and, in the words of our content specialists—more fun!

Bottom line: Can a computer create good test questions? I have placed questions generated by AIG next to human-generated questions and asked people to tell me which were which. They couldn’t do it; the items were indistinguishable. The answer is: YES.

As I mentioned at the start of this piece, AIG may someday help us create personalized tests that are aligned with each examinee’s specific interests, goals, experiences, and cultural backgrounds. Someday, test questions could be created while a test is being taken, tailored to each individual’s unique academic needs.

We aren’t there yet, but we are clearly on our way. The possibilities are endless—and exciting!

Read the Press Release