When OpenAI released ChatGPT in 2022, the U.S. clearly led the world in artificial intelligence. Last week, the Chinese startup DeepSeek challenged that lead with its R1 “reasoning” model, rivaling U.S. models at a fraction of the cost.
Billionaire investor and provocateur Marc Andreessen, Silicon Valley’s loudest bullhorn, tweeted that “Deepseek R1 is AI’s Sputnik moment,” invoking the 1957 Soviet satellite launch that spurred U.S. spending in the Space Race. His comment sparked extensive media coverage and debate.
But Andreessen is both wrong and untrustworthy.
R1 is impressive, but it’s not a sign the U.S. is suddenly “losing” the AI race. This is, however, a narrative that best serves Andreessen’s vast portfolio and broader agenda. His $52 billion venture firm, Andreessen Horowitz (a16z), is invested in defense tech startups like Anduril and AI giants like OpenAI and Meta (where Andreessen sits on the board).
Andreessen advises President Trump and is helping to staff the new administration, placing a16z partners in key positions: Scott Kupor as head of the Office of Personnel Management and Sriram Krishnan as senior adviser for artificial intelligence.
A China-fearing frenzy, whipped up by overstated claims like Andreessen’s, could unleash a torrent of government contracts, subsidies, and deregulation, rewarding the AI industry.
The U.S. has no national AI safety regulations, but several states are considering bills to mandate guardrails on powerful models. After successfully lobbying California Gov. Gavin Newsom to veto one such bill in September, Andreessen and the AI industry will likely leverage China fears to push for federal preemption legislation that would nullify these state efforts.
The specter of Chinese AI dominance also fuels lucrative defense partnerships. OpenAI and Anthropic recently aligned with defense tech firms like Anduril and Palantir.
By pumping up the idea that the AI industry just got Sputnik-ed, Andreessen is looking to dupe policymakers into becoming uncompensated partners in his sprawling empire.
Andreessen is a profiteer, not a prophet.
Let’s be clear: Sputnik was a Soviet space victory, but DeepSeek hasn’t leapfrogged American AI. R1 is an impressive implementation of largely U.S.-pioneered advancements. And while R1 is the top open-weight system, OpenAI’s forthcoming o3 model boasts significantly higher benchmark scores, and Google DeepMind’s new free reasoning model tops competitive leaderboards (where R1 is fourth).
DeepSeek’s much-touted “$6 million” price tag also omits substantial development expenses, reflecting only the marginal training cost and obscuring the true investment required. Most of the AI employees I chatted with saw the public response to R1 as an overreaction to results in line with expected algorithmic progress.
There’s a better Cold War analogy than Sputnik: the “missile gap,” a phrase coined by John F. Kennedy in his 1960 presidential campaign. Kennedy warned that the U.S. was falling behind the Soviets in nuclear missiles. Defense hawks amplified this fear, pushing for massive spending. Soviet Premier Nikita Khrushchev played along, bragging that the USSR was turning out missiles “like sausages.”
The reality? By 1961, U.S. intelligence confirmed that the U.S. had dozens of long-range missiles to the Soviets’ four. But by then, the narrative had served its purpose, empowering those who inflated the threat and enriching defense contractors. Between 1940 and 1996, the U.S. spent the equivalent of $11 trillion on nukes.
Seemingly oblivious to the irony, OpenAI’s chief lobbyist recently warned of a looming U.S. “compute gap” with China, even while admitting America’s current advantage. The company’s Economic Blueprint calls for channeling $175 billion into U.S. AI infrastructure, warning that funds will flow to “CCP-backed projects” without fast action.
Today’s frenzy mirrors the “missile gap” moment: Once again, we’re trusting fearmongers, like Andreessen, who stand to gain from panic.
While the U.S.-China AI gap is smaller than the missile gap, American AI advances help China catch up. OpenAI’s o1 “reasoning” model showed the way in September. AI innovations are like the four-minute mile: Once broken, others follow.
Andreessen is just the loudest voice in a chorus of tech hypocrisy. Even AI leaders who were once wary of racing China have shifted. In 2017, Anthropic CEO Dario Amodei cautioned that a U.S.-China AI race could create “the perfect storm” for AI-caused catastrophes.
After DeepSeek’s release, Amodei urged building self-improving AI to outpace China. Similarly, OpenAI CEO Sam Altman pivoted from advocating cooperation with China in 2023 to asserting in July that we face a binary choice between “democratic” and “authoritarian” AI. His hawkish turn is already paying off — Trump is considering fast-tracking OpenAI-led “Stargate,” a $500 billion U.S. data center project, justified by the need to beat China.
This acceleration carries grave risks. The first International AI Safety Report warns that competition could lead developers to cut corners on safety while AI rapidly improves at dangerous capabilities like deception and bioweapon design.
Despite these risks, Andreessen has publicly committed to accelerating at all costs, writing in his 2023 “Techno-Optimist Manifesto,” “We believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing [are] a form of murder.” Stoking China fears provides the ultimate justification for racing forward, juiced by government contracts and unencumbered by guardrails.
Instead of doubling down on the self-defeating approach of advancing AI capabilities we don’t know how to control, the U.S. should invest in an Apollo program for AI alignment and security, ensuring that powerful AI systems serve humanity.
Ensuring the safe and responsible use of this technology is a shared global imperative. Just as nuclear arms control became a necessity during the Cold War, international coordination on AI governance is essential today.
Ultimately, the U.S. and China need to strike a deal over how to govern AI. Let’s learn from the “missile gap” and invest wisely in AI’s future — prioritizing global security over manufactured panic and a self-defeating race to the bottom.
Garrison Lovely (@GarrisonLovely) is a reporter in residence at the Omidyar Network and author of the forthcoming book “Obsolete: Power, Profit, and the Race to Build Machine Superintelligence.” He writes The Obsolete Newsletter, and his writing on AI has appeared in The New York Times, Time, The Guardian, The Verge, The Nation, and elsewhere.