If you’ve applied for a job at any time in the last couple of years, you already know: both sides of the modern hiring equation are using the same type of tool. Hiring companies are using artificial intelligence (AI) to screen and sift through hundreds of resumes in search of the best candidate for the job. Those candidates, meanwhile, are using the same AI tools to compose and tailor their resumes precisely to the position for which they’re applying.
The goal is ostensibly the same, but as Greg Downey, the Evjue-Bascom Professor of Journalism and Mass Communication and the Director of the Information School, discovered, the situation isn’t even remotely new.
“We’ve been here before, even if we didn’t call it AI, or even if it wasn’t computers,” says Downey. “There’s this history of how people think about matching individuals to careers or jobs, and all the tools and algorithms and methods and technologies and machines people have been using to do that over the last 100 years.”
In addition to his classes in his two home departments, Downey spent a decade teaching a career course to undergraduates through SuccessWorks, the College of Letters & Science’s career advising service. It’s one of the things that sparked him to delve into this history, to discover where the push for a technological solution to career selection all began. He spoke about the topic at the 2026 edition of CultureCon, an AI summit held in Madison on April 21-23.
Downey’s research took him all the way back to the 1920s — yes, the 1920s — and to Clark Hull, a professor of psychology who worked at UW–Madison until 1929. Hull, an expert on aptitude testing, believed that educational and industrial psychologists could develop a battery of around 20 different tests to determine which students might be successful in certain types of jobs, from telegraph operator to streetcar driver. He built a rudimentary machine to measure and process the test results.
“There were no computers, there was no AI,” says Downey. “My claim is we were doing the same sort of thing. We were trying to build an algorithm or a system or a machine to automatically predict who’s going to be successful in what job.”
Hull’s idea proved interesting but ultimately impractical. As it turns out, the problem with Hull’s approach wasn’t designing the perfect test; it was the time and expense of administering the test and parsing the results. Each type of job required a different type of formula, further complicating the process.
Hull’s effort was only the beginning. Several decades later, in the 1960s, a different group of psychologists in Pittsburgh convinced around 4,000 schools across the United States to buy into something called Project Talent, a two-day series of student testing designed to match individual aptitudes with potential career paths. By this point, actual computers were around to collect and process the mountains of data. The results could be packaged and sold to school districts as potential career guidance tools. Versions of that test persisted for several more decades, even as the 1960s-era data upon which it was initially based became increasingly outdated.
“The folks who are doing this at any moment in this history are not doing it with bad intentions,” Downey notes. “They’re always doing it with the prejudices of their time, but they have an idea that we want to make things better. We want to help people find their true selves. They’re really looking for the technological fix. It’s still so attractive.”
Even modern technology hasn’t been able to solve the problem. A little more than 10 years ago, Amazon built an AI system to help it find the right employees to expand its engineering and data science workforce. The model was trained using the resumes and job histories of the company’s successful employees. But the results were, to put it mildly, troubling.
“Their AI predictor predicted they should only ever hire men,” says Downey. “They had been hiring overwhelmingly men from certain kinds of tech programs and certain kinds of schools, certain kinds of places. They realized, ‘Oh, maybe we shouldn’t train our AI on the data that we’re claiming represents success,’ when actually it’s a very partial, potentially biased view of what success is and who should be considered successful.”
Downey worries that modern AI tools, which are being marketed and implemented at a remarkably rapid pace, may not in fact deliver cost savings and better outcomes but instead fuel an endless pursuit of the next big thing. He sees parallels to the ’60s and ’70s, when experts predicted that computers would revolutionize every school system in America. Computers did change schools, but nowhere near the degree that had been predicted.
“I think the lesson from history is, whenever we’re talking about technology, we often aim high and we try huge projects, and very rarely do those succeed or become self-sustaining,” says Downey. “But from each one of those big swings, we do learn something, and certain things do get woven into our practices. That’s what we should be paying attention to.”