US President Donald Trump has announced a $500bn (£406bn) joint investment initiative called Stargate, focused on building domestic artificial intelligence infrastructure. At the same time, he has axed former president Joe Biden’s executive order on AI safety, signalling a divergence from the regulatory landscape in major powers such as Europe and China.
Speaking at the White House on Tuesday, Trump said Stargate will, with its initial $100bn (£81bn) in funding, create the “physical and virtual infrastructure to power the next generation of AI” and create 100,000 new jobs.
The project is intended to increase the capacity for training and running AI models. The first Stargate data centre is being built in Texas – although this was already under construction prior to the announcement. Japanese investment firm SoftBank is tasked with overseeing the financials of Stargate, while OpenAI will lead the operations of the venture.
Where the full funding will come from remains unclear. Early investors include US cloud provider Oracle and Abu Dhabi state fund MGX. Microsoft, Nvidia and Arm, which was acquired by SoftBank in 2016, will all be technology partners.
Stargate heats up AI arms race
Peter van der Putten, AI researcher at Leiden University, views Stargate as the latest development in the AI arms race between the US and China.
“From a competition point of view, China is very much on the radar for the Trump administration,” says van der Putten, who is also director of the AI lab at software company Pega. “Quite a lot of good, performant AI models have been coming out of Chinese labs, such as Tencent and Alibaba.” China’s most advanced AI model, DeepSeek R1, is now competitive with OpenAI’s models, he says.
The announcement shows an understanding from the Trump administration that “if you don’t have infrastructure then you have nothing for your AI to run on”, van der Putten adds.
This follows OpenAI boss Sam Altman’s complaints to Bloomberg in early January that it’s “wild how difficult it has become to build things [such as power plants and data centres] in the US”.
Altman added that the “most helpful” path for the US to secure its position in AI would be to focus on infrastructure. Last week, OpenAI also published a policy paper that re-emphasised its America-first AI messaging and warned that, if the US didn’t act to attract investment, capital would flow to China-backed projects, “strengthening the Chinese Communist Party’s global influence”.
With backing from SoftBank and UAE-based investor MGX, Stargate also demonstrates the US’s willingness to open the doors to foreign investment in domestic capacity.
“We’ve all seen the pictures of tech CEOs at Trump’s inauguration,” says Pia Hüsch, cyber research fellow at the RUSI think-tank. “The administration clearly wants to position itself as a friend to the private sector and distance itself from the Biden administration.”
However, the support of Silicon Valley’s tech executives comes with its own set of risks. Ruptures are already appearing, with Trump’s “first buddy” Elon Musk raising doubts about Stargate’s funding on X, following the announcement.
Trump, the US and AI safety
Stargate’s announcement followed Trump’s decision to revoke Biden’s executive order on AI safety, which aimed to mitigate the technology’s risks to workers, consumers and national security. In the run-up to the election, Republicans described the order as “dangerous” and anti-innovation, setting the tone for a different approach to AI regulation in Trump’s second term.
Combined with the launch of Stargate, Trump is sending strong signals that he will entrust the future of artificial intelligence to the private sector, according to Hüsch. Revoking Biden’s executive order has a “large symbolic impact”, she says.
What this means for the future of the US AI Safety Institute, which was created a day after Biden’s executive order in 2023, is unclear. But Hüsch expects funding to be significantly reduced, if not revoked completely. The US may also be less likely to cooperate on international issues around AI safety.
Fazl Barez, a research fellow in AI safety at the University of Oxford, describes the revocation of Biden’s executive order as a “huge move for the AI deregulation agenda” and one that puts the US on a diverging path from Europe.
“Unlike the EU where you have a highly regulatory approach, which prioritises finding the right regulations to make sure AI is safe, robust and trustworthy before you develop and deploy, the US is taking the approach that you first deploy and think about regulations and standards later,” Barez says.
Earlier this month, Meta, in an apparent concession to Trump, announced it would cut social media safety features, putting the company at odds with Europe’s regulatory outlook around online harms.
Revoking Biden’s executive order will be “the real test for the EU AI Act and potential regulation that the UK might develop,” says Barez.
AI safety did not feature prominently in the UK’s AI plan either, published last week, which instead focused on investment. “I do expect the UK will hold on to its AI safety efforts to a certain degree,” says Hüsch.
There’s a risk, Hüsch adds, that if the US and UK go quieter on AI safety, a vacuum may be left for China to occupy. “It’ll be interesting to see what China does on AI safety,” she says.
Whatever the outcome, Trump is distancing himself from the Biden administration – and may set the US on a radically different course to its economic peers as a result.