I've recently found myself back on the job market after my department was made redundant. My new life looks like this: refresh Seek; go on LinkedIn; regret the life choices that made me go on LinkedIn; refresh Seek; try to sound normal in a cover letter; cry; refresh Seek. The usual job-hunting stuff.
Late last week I was minding my own business (refreshing Seek) and I saw a job that seemed perfect for me. A disability services organisation was looking for a writer to create accessible content, which is my particular area of interest. Excited, I penned an application. I struck just the right balance of extremely knowledgeable and easy to get along with. I uploaded my CV. I submitted the form, quietly confident that I might be in with a chance.
A reply email popped up almost instantly. The weekend had definitely already begun, but I was invited to take the next step in the application process and answer some "personality" questions in a sort of pre-interview. Not with a person, but with a chatbot.

Now, having been a socially isolated teenager in the 90s, I'm no stranger to intimate conversations with chatbots. And I love any opportunity to talk about myself. So I clicked the link. Sure, I thought. I can answer personality questions.
They were standard interview fare. Tell us how you overcome unexpected obstacles. Tell us how you work in a team. Tell us why you want this job. I answered them honestly, chucking in a few jokes in case the chatbot was a ruse and a real person was reading. I love being part of a team. I try to be friendly and helpful in the workplace. My biggest weakness is needing to set phone reminders to remember basic tasks. Submit. By now it was after knock-off time.
Another email soon arrived. This one wasn't from the hiring organisation but from a third-party AI platform. The ominous subject line read, "Your personality insights, Anna."

Nothing in the chatbot process had mentioned personality "insights". I hadn't opted into anything extra. And to be honest with you, being underemployed has not been terrific for my self-esteem, so I wasn't exactly craving a Friday-night character assessment by AI.

Six "insights" were listed inside. Some of them were fine. It told me I'm always up for a challenge. I'm a positive, confident and enthusiastic person. Thanks, robot overlords, I thought. But the more I read, the more targeted they felt. The AI platform told me not everyone likes positive, confident and enthusiastic people. Actually, had I considered I might be kind of abrasive? Why did I keep getting defensive when other people made suggestions? And have I ever, ever, tried just listening for a change?

At the bottom of the email, "Coaching Tips" suggested I adapt my working style to be less unnerving for people. By this stage of the process, no human, as far as I could tell, had seen or vetoed anything.
I like to think I'm pretty resilient (don't tell the bot). I've been to enough therapy to mostly be open to criticism, often reflecting and sometimes even acting on it. But I was not prepared to be wiped out by an AI villain deployed by a disability nonprofit on the weekend.

I put down my non-alcoholic spritzer. I opened the company's website. Sure enough, it's an automated platform that uses AI to interview, screen and assess applicants. Many, many companies in Australia use this service; the site lists big brands including supermarkets, airlines, department stores and major sporting governing bodies as some of its clients.

This AI platform is driven, it reckons, by the number one complaint from job hunters, which is never hearing back after applying. Its strategy is to make sure everyone gets a response, even if that response is to tear their delicate heart to shreds. In addition to text and video chats, it wants to make every candidate "feel seen" with "personalised insights".

In the soulless machine's defence, I did feel seen. I'm literally proving its point by writing a defensive op-ed about its suggestion that I might get defensive about suggestions. It had tapped directly into my deepest workplace insecurities and rummaged around. It had flayed me alive and exposed my greatest fears for my career and the future of my industry. The problem wasn't that it didn't see me. It was that it's a robot.

Studies show long-term unemployed people are at least twice as vulnerable to mental illness, with high risk of depression, anxiety and suicide. In this no-longer-hypothetical situation, it seems only a matter of time before an AI platform sends an unsolicited "better than no response" personality assessment to one of these people, with no supports in place if they need them.

I've only been job hunting for a few weeks. I still have hope. But the triple nightmare of a slow market, high cost of living and Centrelink payments below the poverty line can get serious really fast. On what planet is it preferable to be told, via large language models, that the problem might actually be you?

I thought I was only morally opposed to AI because it's destroying the planet and stealing indiscriminately from artists. As it turns out, its impersonation of "asshole boss from my first ever job" is right up there, too.