Lately, I have been experiencing anger, occasionally edging toward rage (depending on my mood), when I open a new document in Microsoft Word and see the ghostly prompt urging me to use its Copilot generative AI tool.
I do not want to use this tool. I especially do not want to use it to start a draft of a document, because writing the first draft under the power of my own thoughts is the key to ultimately producing something someone else might want to read, an outcome on which my living depends. It is also, as far as I'm concerned, the point of all writing, ever, in any context.
I am persuaded by Marc Watkins's framing of "AI is unavoidable, not inevitable," if for no other reason than that the tech companies will not allow us to avoid their generative AI offerings. We can't get away from this stuff even if we want to, and boy, do I really want to.
But just because it is unavoidable and must be acknowledged and, in its way, dealt with does not mean we are required to use or experiment with it. Over the period of writing More Than Words: How to Think About Writing in the Age of AI, and now a month or so of promoting and talking about the book in various venues, I have grown more and more convinced that if this technology is to have utility in helping students learn (and I mean learn, not merely do school), that utility is likely to be specialized and narrow, the product of deep thought, careful exploration and step-by-step iteration.
Instead, we're on the receiving end of a fire hose spraying, This is the future!
Is it, really?
One of the reasons we're being told it's the future is that, at this time, generative AI has no strong business rationale. Don't take my word for it. Listen to Microsoft CEO Satya Nadella, who admitted in a podcast interview that generative AI applications have had no meaningful effect on GDP, suggesting they are not the amazing engines of increased productivity they are sold as.
Tech watcher Ed Zitron has been saying for months that there is no "AI revolution" and that we're heading toward the bursting of a bubble that will at least rival the 2008 downturn caused by the subprime mortgage crisis.
So, while there is reason to believe that we are experiencing a bubble that is inevitably going to burst, as we imagine what our institutional and individual relationships should be with this technology, I think it's useful to see what the people who are, literally, invested in AI envision for our futures. If they are right, and AI is inevitable, what awaits us?
Let's check in with the people directly funding and developing AI technology and see what they foresee for the educators of the United States.
[Image: screenshot of an X post, via @elonmusk/X]
That is the man who is apparently running, and running roughshod over, the United States government, suggesting that AI-assisted education is superior to what teachers deliver. Now, we know this is not true. We know it will never be true; that is, unless what counts as outcomes is defined down to what AI-assisted education can deliver.
At her "Second Breakfast" newsletter, Audrey Watters puts it plainly, and we should be prepared to accept these truths:
"But to be clear, the 'better outcomes' that Silicon Valley shit-posters Palmer Luckey and Elon Musk fantasize about in the image above do not involve the quality of education: of learning or teaching or schooling. (You're not fooled that they do, right?) They aren't talking about improved test scores or stronger college admissions or nicer job prospects for graduates or well-compensated teachers or happier, healthier kids or any such metric. Rather, this is a call for AI to facilitate the destruction of the teaching profession, one that is, at the K-12 level, comprised predominantly of women (and, in the U.S., is the largest union) and at the university level (in their imaginations, at least) is comprised predominantly of 'woke.'"
It is hard to know what to do about a technology that some intend to leverage to destroy your profession and harm the constituents your profession is meant to serve. More Than Words is not a book that argues we must resist this technology at all costs, but again, these people want to destroy me, you, us.
ChatGPT and its ilk haven't even been around for all that long, and we already see the consequences of voluntary deskilling. Futurism reports, "Young coders are using AI for everything, giving 'blank stares' when asked how programs actually work."
Namanyay Goel, a veteran coder who has been observing the AI-wielding coders who can't actually code, says, "The foundational knowledge that used to come from struggling through problems is just … missing." This is output divorced from process, a pattern that is already endemic to our transactional model of schooling, but which AI now supercharges.
There is no role for educational institutions in a world where we allow this sort of thing to substitute for knowledge and learning. And that may be the least of our problems should the full deskilling come to pass. (See the film Idiocracy for that particular flavor of dystopia.)
When Microsoft shoves its AI tools in the face of a student with less time, less freedom, less confidence and more incentive to use them, what are we giving that student to make them want to resist, to commit to their learning, to become something other than a meat puppet plugging syntax into a machine while the machine spews more syntax back out?
At this point, where is the evidence the companies do not wish us harm?