Mythbusting AI


Review: AI Snake Oil, Arvind Narayanan and Sayash Kapoor, Princeton University Press

There is a lot of hype around AI, both fearmongering and overselling of its capabilities. AI is said to be superintelligent; it can do this, do that; it will create utopia, replace us, enslave us. The authors of AI Snake Oil are keen to dispel the mythology. AI is like other technology, created and controlled by humans: it gives out what you put in. At the same time, it has potential for good and ill, and choices need to be made about how we use it.

The authors point out that we need to be clear about what we mean when we talk about AI, because the term covers various technologies. They discuss three main types: predictive AI, generative AI and AI for content moderation. Predictive AI is used by insurance companies and governments to crunch data and assess risk. These programs are often trained and evaluated on the same data, so claims about their capabilities can be overstated: the AI is given the answers before the test, in other words. There is much hype from the companies selling this software, hype that is often reproduced unquestioningly by journalists. Often this AI is little better than guessing, because human behaviour is too complex: in complex systems there are too many variables, and it is not well understood which ones matter most.
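To make the evaluation problem concrete, here is a minimal sketch of my own, not from the book, using Python and scikit-learn on entirely made-up data. The labels are coin flips, so there is genuinely nothing to predict, yet a model scored on its own training data looks impressively accurate.

```python
# Sketch of "testing on the training data": the labels below are random,
# so honest accuracy should be about 50%, yet the model appears near-perfect
# when it is graded on examples it has already memorised.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))    # 1,000 "people", 20 arbitrary features
y = rng.integers(0, 2, size=1000)  # coin-flip outcomes: nothing to learn

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

print("accuracy on the training data:", model.score(X_train, y_train))  # ~1.0
print("accuracy on held-out data:   ", model.score(X_test, y_test))     # ~0.5
```

Graded honestly on held-out data, the same model does no better than guessing, which is the pattern the authors report in real evaluations of predictive AI.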

Often predictive AI doesn’t remove errors but perpetuates them, so mistakes and biases are imposed on already marginalised people. Simply put, AI trained on biased data will be biased. For example, one program used in American courts assessed how likely a defendant was to re-offend. It was trained on arrest data, but because biases in policing mean Black people are arrested more often, the program judged them more likely to re-offend.
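A deliberately crude sketch, my own illustration rather than the actual courtroom software, shows the mechanism. Below, both groups re-offend at exactly the same rate, but one group’s offences are recorded as arrests more often; a model trained on those arrest labels duly rates that group as higher risk. (Real systems don’t take group membership as a direct input, but correlated proxy variables have the same effect.)

```python
# Biased labels in, biased predictions out: true re-offending is identical
# across groups, but arrests (the training label) are recorded more often
# for group B, so the model learns that group B is "riskier".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000
group = rng.integers(0, 2, size=n)            # 0 = group A, 1 = group B
reoffend = rng.random(n) < 0.30               # same true rate for both groups
p_recorded = np.where(group == 1, 0.9, 0.5)   # policing bias against group B
arrested = reoffend & (rng.random(n) < p_recorded)

model = LogisticRegression().fit(group.reshape(-1, 1), arrested)
for g, name in ((0, "group A"), (1, "group B")):
    risk = model.predict_proba([[g]])[0, 1]
    print(f"{name}: predicted re-offending risk = {risk:.2f}")
# Prints roughly 0.15 for group A and 0.27 for group B, even though the
# true re-offending rate is 0.30 for both: the gap is entirely label bias.
```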

Generative AI takes data and recombines it in new ways. Put like that, it seems rather mundane, but most of us have seen surprising AI-generated imagery. The process by which AI generators learn is fascinating and works, so far, reasonably well. (Look closely, though, and there are often small elements that seem off: odd angles, proportions and the like.) But another problem is that these generators are trained on images already on the internet, so human artists are often not compensated for the use of their original work. The copyright issues are a major challenge, largely ignored by the big tech companies.

Chatbots are ‘immature’ and ‘unreliable’. Sometimes they are downright dangerous, as in cases where AI has encouraged people with mental illness to take their own lives. AI has no common sense, as the authors of the book The Blind Spot put it. Or, as theologian David Bentley Hart, a critic of the mind-as-computer model, says, AI doesn’t judge. Its mindlessness is its danger: it gives the illusion of understanding, but simply responds as it is programmed to. AI is not sentient, because while it can play a game, it doesn’t know it is playing a game. And since we don’t fully understand consciousness, it is hard to predict that AI will become conscious simply through more complexity, more data mining and more training.

Because AI can sort through so much data quickly, there are claims that it is, and will be, good for content moderation. So far this hasn’t been the case, and it is unlikely to be, as context is so important for judging what constitutes offence, vilification and the like. The other side of this is that AI has unthinkingly flagged posts as offensive and had their authors banned from platforms, when in context the content wasn’t offensive at all.
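A toy example of my own devising shows why context-blind moderation misfires. Real moderation models are far more sophisticated than this word-list filter, but the failure mode, matching surface features while ignoring meaning, is the one the authors describe.

```python
# A context-blind moderator: flag any post containing a blocklisted word.
# It catches the genuine threat, but also a harmless medical headline.
BLOCKLIST = {"kill"}

def flag(post: str) -> bool:
    words = {w.strip(".,!?\"'").lower() for w in post.split()}
    return not BLOCKLIST.isdisjoint(words)

posts = [
    "I will kill you",                      # genuine threat: flagged
    "New therapy could kill cancer cells",  # benign news: also flagged
]
for p in posts:
    print("FLAGGED" if flag(p) else "ok", "-", p)
```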

This is worse for non-Western users, say the authors: while content moderation does happen in the West, the Western-centricity of tech companies means that, so far, many languages are not well catered for. Relatedly, content moderation often relies on the online drudgery of low-paid workers in non-Western countries, who must both train the AI and check its moderation decisions.

It is probably not surprising that software, much like hardware that relies on metals mined in slavery-like conditions, is propped up by unfairly treated workers in the countries where the technology itself is least used.

In the West, too, there are threats to workers. While AI may not, and often cannot, replace certain jobs, institutions and corporations use it as an excuse to cut corners and sack staff, all without proof that AI will do the job properly.

Nick Mattiske blogs on books at coburgreviewofbooks.wordpress.com and is the illustrator of Thoughts That Feel So Big.