What International AI Safety report says on jobs, climate, cyberwar and more – The Guardian

The International AI Safety report is a wide-ranging document that acknowledges an array of challenges posed by a technology that is advancing at dizzying speed.

The document, commissioned after the 2023 global AI safety summit, covers numerous threats from deepfakes to aiding cyberattacks and the use of biological weapons, as well as the impact on jobs and the environment.

Here are some of the key points from the report chaired by Yoshua Bengio, a world-leading computer scientist.


  1. Jobs

    In a section on “labour market risks”, the report warns that the impact on jobs will “likely be profound”, particularly if AI agents – tools that can carry out tasks without human intervention – become highly capable.

    “General-purpose AI, especially if it continues to advance rapidly, has the potential to automate a very wide range of tasks, which could have a significant effect on the labour market. This means that many people could lose their current jobs,” says the report.

    The report adds that many economists believe job losses could be offset by the creation of new jobs or demand from sectors not touched by automation.

    According to the International Monetary Fund, about 60% of jobs in advanced economies such as the US and UK are exposed to AI and half of these jobs may be negatively affected. The Tony Blair Institute has said AI could displace up to 3m private-sector jobs in the UK, though the ultimate rise in unemployment will be in the low hundreds of thousands because growth in the technology will create new roles in an AI-transformed economy.

    “These disruptions could be particularly severe if autonomous AI agents become capable of completing longer sequences of tasks without human supervision,” the report says.

    It adds that some experts have pointed to scenarios where work is “largely” eliminated. In 2023, Elon Musk, the world’s richest person, told the former UK prime minister Rishi Sunak that AI could ultimately replace all human jobs. However, the report says such views are controversial and there is “considerable uncertainty” over how AI might affect labour markets.


  2. The environment

    The report describes AI’s impact on the environment as a “moderate but rapidly growing contributor” as datacentres – the central nervous systems of AI models – consume electricity to train and operate the technology.

    Datacentres and data transmission account for about 1% of energy-related greenhouse gas emissions, says the report, with AI constituting up to 28% of datacentre energy consumption.

    It adds that models are using more energy as they become more advanced and warns that a “significant portion” of global model training relies on high-carbon energy sources such as coal or natural gas. Use of renewable energy by AI firms and improvements in efficiency have not kept pace with rising demand for energy, says the report, which also points to tech firms admitting that AI development is harming their ability to meet environmental targets.

    The report also warns that water consumption by AI, used for cooling equipment in datacentres, could pose a “substantial threat to the environment and the human right to water”. However, the report adds that there is a shortage of data about the environmental impact of AI.


  3. Loss of control

    An all-powerful AI system evading human control is the central concern of experts who fear the technology could extinguish humanity. The report acknowledges those fears but says opinion varies “greatly”.

    “Some consider it implausible, some consider it likely to occur, and some see it as a modest-likelihood risk that warrants attention due to its high severity,” it says.

    Bengio told the Guardian that AI agents, which carry out tasks autonomously, are still being developed and so far are unable to carry out the long-term planning necessary for those systems to eradicate jobs wholesale – or evade safety guidelines. “If an AI cannot plan over a long horizon, it’s hardly going to be able to escape our control,” he said.


  4. Bioweapons

    The report states that new models can create step-by-step guides to creating pathogens and toxins that surpass PhD-level expertise. However, it cautions that there is uncertainty over whether they can be used by novices.

    There is evidence of advancement since an interim safety report last year, the experts say, with OpenAI producing a model that could “meaningfully assist experts in the operational planning of reproducing known biological threats”.


  5. Cybersecurity

    A fast-growing AI threat in cyber-espionage is autonomous bots finding vulnerabilities in open-source software – code that is free to download and adapt. However, relative shortcomings in AI agents mean the technology is not yet able to plan and carry out attacks autonomously.


  6. Deepfakes

    The report lists an array of known examples of AI deepfakes being used maliciously, including tricking companies into handing over money and creating pornographic images of people. However, the report says there is not enough data to fully measure the number of deepfake incidents.

    “Reluctance to report may be contributing to these challenges in understanding the full impact of AI-generated content intended to harm individuals,” the report says. “For example, institutions often hesitate to disclose their struggles with AI-powered fraud. Similarly, individuals attacked with AI-generated compromising material about themselves may stay silent out of embarrassment and to avoid further harm.”

    The report also warns that there are “fundamental challenges” to tackling deepfake content, such as the ability to remove digital watermarks that flag AI-generated content.