As Trump’s AI deregulation, job cuts sink in, industry gets spooked

In January 2025, President Donald Trump issued Executive Order (EO) 14179, Removing Barriers to American Leadership in Artificial Intelligence. It marked a profound shift in U.S. AI policy, focused on eliminating what the administration described as “ideological bias” and “engineered social agendas” in AI development.

The National Institute of Standards and Technology (NIST) responded by instructing scientists who partner with the U.S. Artificial Intelligence Safety Institute (AISI) to remove all references to “AI safety,” “responsible AI,” and “AI fairness” from their objectives. Biden’s 2023 executive order on AI established AISI under the Department of Commerce to develop testing, evaluations, and guidelines for what it calls “trustworthy” AI.

NIST’s directive is part of an updated cooperative research and development agreement for AISI consortium members that was distributed in early March. The previous version of the agreement strongly encouraged researchers to contribute technical work aimed at identifying and addressing discriminatory behavior in AI models, particularly concerning gender, race, age, and economic inequality – biases that can drastically impact end users by disproportionately affecting minorities and economically disadvantaged people.

The dramatic shift aligns with the Trump administration’s emphasis on reducing ideological bias in AI models. The goal is to strengthen American economic competitiveness and national security by fostering an environment where AI innovation can thrive without regulatory restrictions that the administration sees as unnecessary hindrances. Trump’s EO revoked the AI policies of the Biden administration, which had emphasized AI safety, fairness, and the mitigation of discriminatory behaviors.

The new EO ordered a comprehensive review of all existing AI-related policies to identify and remove those seen as obstacles to innovation. It also established a 180-day timeline for the development of a strategic plan to ensure U.S. leadership in AI, with oversight from key White House officials, including the newly appointed Special Advisor for AI and Crypto, David Sacks.

Biometric Update earlier reported that Trump’s selection of Sacks as “AI Czar” signaled the administration’s intent to move toward reduced regulation of AI, something Trump has championed, as have members of the Republican-controlled Congress.

Sacks’ selection sparked mixed reactions from the tech community and policymakers. Critics have raised concerns about his preference for limited oversight, potential industry bias, and conflicts of interest tied to his private-sector activities and his “special government employee” status, which exempts him from the standard confirmation process and the full financial disclosure required of Senate-confirmed officials. They argue this lack of transparency risks undermining public trust and could enable him to advance policies that align with his professional interests without adequate accountability.

Trump’s EO sparked a strong reaction within the tech community. While some praised the move as a necessary step toward preventing politically motivated constraints on AI research, others criticized it as a dangerous abandonment of ethical and safety considerations.

One of the most vocal critics is Yann LeCun, Meta’s chief AI scientist. LeCun condemned the policy as a “witch hunt in academia.” He compared the administration’s actions to the Red Scare of the Cold War era, warning they could drive American scientists to seek research opportunities abroad. LeCun and other industry leaders have argued that an excessive focus on removing perceived ideological bias could inadvertently lead to a deregulated AI landscape where discriminatory or unsafe AI systems proliferate.

The implications of Trump’s EO are wide-ranging. On the one hand, the White House’s approach could accelerate AI development in the U.S. by reducing regulatory hurdles, potentially giving American companies an edge over global competitors. On the other hand, deprioritizing safety and ethical guidelines carries a substantial risk that AI systems will become more prone to discriminatory outcomes or unintended consequences.

The White House policy shift also puts the U.S. on a divergent path from other global powers, particularly the European Union, which is implementing strict AI regulations emphasizing transparency, accountability, and fairness. This regulatory mismatch will almost certainly pose challenges for American companies operating internationally, as they may be forced to comply with different sets of standards depending on the region.

Domestically, the federal government’s reduced oversight of AI regulation has already led individual states to implement their own AI laws, creating an unwieldy – and more costly – patchwork regulatory landscape. Businesses now face challenges navigating inconsistent rules across jurisdictions, which will only complicate AI development and deployment.

In addition, the administration’s unilateral stance on AI policy will very likely hinder international efforts to establish common safety and ethical standards, potentially reducing the U.S.’s influence in shaping the future of AI governance on a global scale.

Meanwhile, potential funding cuts to AISI are raising concerns within the technology sector, where many fear efforts to develop responsible AI could be jeopardized by Trump’s push to downsize the federal government. Probationary employees at NIST are reportedly bracing for imminent termination. There are fears that AISI is being deliberately dismantled, along with the staff of NIST’s Chips for America program.

According to reports, NIST is preparing to lay off 497 employees, including 74 postdoctoral researchers, 57 percent of the CHIPS staff responsible for incentive programs, and 67 percent of those focused on research and development. These potential job losses have intensified long-standing suspicions that AISI could ultimately face closure under Trump’s administration.

“It feels almost like a Trojan horse. Like, the exterior of the horse is beautiful. It’s big and this message that we want the United States to be the leaders in AI, but the actual actions, the [goal] within, is the dismantling of federal responsibility and federal funding to support that mission,” Jason Corso, a robotics, electrical engineering, and computer science professor at the University of Michigan told The Hill.

The future of NIST remains uncertain. AISI, meanwhile, lost its director earlier this month, and its staff were excluded from an AI summit recently held in Paris. Trump has yet to nominate a new NIST director, but with Commerce Secretary Howard Lutnick officially in charge, further sweeping changes to the department’s various agencies are expected.

While Trump’s executive order seeks to position the U.S. as the dominant force in AI by prioritizing innovation over regulatory constraints, it also raises significant concerns about the long-term implications of this approach. The balance between technological advancement and ethical responsibility remains a critical debate, with critics warning that the absence of guardrails could lead to AI systems that reinforce biases, lack accountability, and create unforeseen societal risks.

Whether Trump’s destabilizing AI policies will strengthen U.S. AI leadership or produce unforeseen consequences remains to be seen, but the shift has undoubtedly set the stage for ongoing controversy and debate in the AI community.
