Rubin is candid about the current crisis of trust around AI-generated content. He describes himself as someone who lives and breathes AI daily yet still struggles to tell real media from fabricated material.
“I feel like I’m the most gullible person because when I read something or my kids send me something, I don’t know if it really happened or not,” he says. “And so now I’m spending my time trying to verify information.”
The flood of low-quality, machine-generated content online—“AI slop”—is significant, but he says it’s solvable. He pointed to ideas like watermarking verified media or blockchain-based content verification, though he noted that solutions will need to work at a global scale, not just a state or federal one.
Closer to home, Rubin says the University is trying to lead by example. When Syracuse builds a new tool, such as Clementine, its AI-powered class search tool, he wants users to see how it works, what it can answer, what it won't, and what guardrails are in place.
“Transparency and responsibility are going to be a big part of this,” Rubin says.