AI’s Risky Business, MIT Researchers Catalogue Over 750 AI Risks – Forbes


If you thought AI-generated deepfakes or AI bots eliminating jobs were the main risks associated with this emerging tech, think again. Researchers from MIT and the University of Queensland in Australia decided it was high time someone compiled a compendium of AI's version of 8 Million Ways to Die.

After scouring thousands of pages of existing research in this retrospective analysis, the team catalogued more than 750 AI risks in its official AI Risk Repository, the first resource of its kind, which is free and available to the public.

Balancing AI’s Risks And Rewards

Lead researcher Peter Slattery, PhD, of MIT FutureTech, wrote in an email exchange that the risk listing was a necessary addition to the AI ecosystem, one that helps identify gaps and uncertainties in our current understanding of AI.

“If current understanding is fragmented, policymakers, researchers, and industry leaders may believe they have a relatively complete shared understanding of AI risks when they actually don’t. This sort of misconception could lead to critical oversights, inefficient use of resources, and incomplete risk mitigation strategies, which leave us more vulnerable,” Slattery wrote.

To assemble and rank the AI risks, the team relied on systematic searches, support from other experts, and a method called best-fit framework synthesis to create the classifications used to organize the database.

How The AI Risk Repository Is Structured

The researchers distilled all of the AI risks into the following seven broad buckets, or domains.

  1. Discrimination & toxicity
  2. Privacy & security
  3. Misinformation
  4. Malicious actors & misuse
  5. Human-computer interaction
  6. Socioeconomic & environmental harms
  7. AI system safety, failures & limitations

Within these domains sit a total of 23 more specific subdomains that further refine the nature of the AI risks. Examples include “AI system security vulnerabilities and attacks” as well as “loss of human agency and autonomy.”

The team also ran each AI risk through the following series of classification filters.

  1. Entities: Was the primary party responsible for the risk human, AI, or a combination of both?
  2. Intent: Was the primary risk deliberate, accidental, or indeterminate?
  3. Timing: Did the primary risk occur before the AI was deployed, after it was deployed, or at an unclear point?
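The two-level taxonomy plus the three causal filters can be pictured as a simple tagged record. The sketch below is purely illustrative: the class and field names are our own, not the repository's actual schema, and the sample entry is hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative model of the repository's causal taxonomy as described
# in the article: entity, intent, and timing each take one of three
# values, with an "other" bucket for combined/indeterminate/unclear cases.

class Entity(Enum):
    HUMAN = "human"
    AI = "ai"
    OTHER = "other"  # combination of both, or unclear

class Intent(Enum):
    INTENTIONAL = "intentional"
    UNINTENTIONAL = "unintentional"
    OTHER = "other"  # indeterminate

class Timing(Enum):
    PRE_DEPLOYMENT = "pre-deployment"
    POST_DEPLOYMENT = "post-deployment"
    OTHER = "other"  # unclear

@dataclass
class RiskEntry:
    description: str
    domain: str     # one of the seven top-level domains
    subdomain: str  # one of the 23 finer-grained categories
    entity: Entity
    intent: Intent
    timing: Timing

# A hypothetical entry tagged with all three causal filters.
risk = RiskEntry(
    description="Model leaks personal data in generated text",
    domain="Privacy & security",
    subdomain="Compromise of privacy",
    entity=Entity.AI,
    intent=Intent.UNINTENTIONAL,
    timing=Timing.POST_DEPLOYMENT,
)
print(risk.domain, risk.timing.value)
```

Structuring entries this way is what makes the database searchable: any stakeholder can slice the 750+ risks by domain, by responsible entity, or by deployment stage.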

“We’ve extracted and synthesized the risks from many of the major frameworks to reveal the overlaps and gaps and provide a more accessible, searchable and comprehensive overview of the AI risk landscape,” wrote Slattery.

Various Uses For The AI Risk Repository

Beyond the database and the methodology behind it, the website offers ways for various stakeholders to benefit from this research. For instance, it suggests that policymakers might use the information to plan and prioritize AI funding projects, or to inform legislative committees and ensure more complete oversight.

Academics could develop new training and educational materials that incorporate these findings, and use this body of evidence as a foundation for further AI research and advancement within the field.

Industry, meanwhile, could develop new systems and processes to mitigate those risks within its own AI creations. Companies could also deploy organization-wide training to educate and sensitize employees regarding the various risks, proactively protecting against AI pitfalls.

“To me, the findings suggest that we might be overlooking some areas which are quite important. One is how AI might affect our daily lives and sources of knowledge. I think that there is a considerable risk that people will increasingly rely on AI for information, entertainment and social engagement—and that this causes issues,” wrote Slattery.