The rapid proliferation of AI large language models is transforming business, consumer and institution-to-institution communication, as well as the Internet itself.
Recent research led by Weixin Liang at Stanford University quantifies this shift, demonstrating that LLM-assisted writing is increasingly widespread across domains, from corporate press releases and job postings to consumer complaints and United Nations reports.
By the end of 2024, AI-assisted writing had become a near-ubiquitous fact of online life, with as much as 24% of corporate press releases, 18% of consumer financial complaints, 10% of job postings and 14% of UN press releases showing some measure of AI writing or editing.
“We developed this method to quantify and compare frequencies of words more and less likely used by AI, tracking their prevalence over time across many types of text,” Liang explained in an email response to questions.
The research is among the largest empirical investigations of AI writing adoption, reviewing more than 300 million online documents and posts between 2022 and 2024.
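Liang’s description points to a corpus-level comparison of word frequencies before and after LLM adoption. The sketch below is a deliberately simplified illustration of that idea, not the study’s actual estimator: it tracks the combined rate of a small, hypothetical set of “AI-associated” marker words in two corpora, where a rising rate would be weak evidence of growing LLM assistance.

```python
import re

# Illustrative only: a tiny stand-in lexicon of words often reported as
# over-represented in LLM-edited text. The actual study estimates usage from
# full word-frequency distributions, not a fixed keyword list.
AI_ASSOCIATED_WORDS = {"delve", "pivotal", "showcase", "realm", "boast"}

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z']+", text.lower())

def ai_marker_rate(documents: list[str]) -> float:
    """Share of all tokens in a corpus that come from the marker lexicon."""
    marker_count, total = 0, 0
    for doc in documents:
        tokens = tokenize(doc)
        total += len(tokens)
        marker_count += sum(1 for t in tokens if t in AI_ASSOCIATED_WORDS)
    return marker_count / total if total else 0.0

# Compare a pre-ChatGPT baseline corpus with a more recent one; a rising
# marker rate over time hints at growing LLM assistance.
baseline_docs = ["We announce quarterly results and outline next steps."]
recent_docs = ["We delve into pivotal results that showcase our growth."]
print(ai_marker_rate(baseline_docs), ai_marker_rate(recent_docs))
```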
The Rapid Expansion Of AI Writing Across The Web
While AI-generated content was once limited to tech-savvy groups, usage rocketed following ChatGPT’s launch in late 2022. Within months, evidence of generative AI content began to spread across the web.
Smaller firms led the way: AI-assisted job postings climbed to roughly 15% at younger, smaller companies, according to Liang’s research. Corporate earnings announcements followed the same trend, peaking at around 24% in late 2023 before leveling off.
“The big takeaway is just how quickly LLM-assisted writing surged across such diverse areas,” Liang noted. “Even high-level international organizations like the United Nations showed roughly 14% LLM usage in its press releases.”
This growing adoption reflects both the promise and the risk of AI writing. On the upside, LLMs bring efficiency, letting professionals produce content more quickly. On the downside, overuse of AI could make communication homogeneous and erode confidence in authenticity.
How Generative AI Content Cuts Two Ways
The explosive growth of LLM-assisted writing brings both opportunities and pitfalls. AI can help non-native speakers express themselves more clearly and broaden access to formal communication.
“For example, someone who claims she wrote one of the consumer financial complaints in our sample said LLMs helped her organize her thoughts better and also better understand her rights and the legislation — and her complaint was successful,” Liang wrote.
But AI-powered writing also presents prickly problems.
“If so many communications are AI-generated, people may become suspicious about authenticity—‘Who really wrote this?’” he added.
The study warns of risks such as generic, template-based writing displacing distinctive voices, making it even harder for companies to make their messaging stand out. Job listings are another problem area: the study found that roughly 10% of LinkedIn job listings showed signs of AI generation, rising to 15% at smaller companies.
“When applying for a job, details about the firm and the authenticity of tasks are really important. If AI is generating job descriptions, applicants may struggle to discern real opportunities from algorithmically optimized postings,” stated Liang.
What Happens When AI Is Trained On Its Own Output?
An open question in AI research is what happens when LLMs are increasingly trained on material that was itself produced by AI. Though Liang’s research did not test this effect directly, he noted that an overreliance on AI-generated content for AI training introduces the possibility of recursive feedback loops.
The more AI-generated material there is online, the more likely future LLMs are to be trained on synthetic rather than human-authored text. That could further entrench issues of bias, disinformation, unreliability and diminished creativity.
A 2024 paper in Nature described a phenomenon called “model collapse,” a degenerative process that occurs when AI models are trained predominantly on AI-generated content rather than on diverse, human-authored data. The study found that a model “fed” only AI content deteriorated over successive generations, losing accuracy, nuance and the capacity to produce useful outputs.
Essentially, the AI starts to feed on its own skewed reflections, generating content that is increasingly unreliable and detached from real data. This trend has serious implications for the long-term usefulness of AI-generated text as training data.
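To make that dynamic concrete, here is a toy caricature of recursive training, which assumes nothing about the Nature study’s actual setup: each “generation” fits a simple Gaussian model to data sampled from the previous generation and, like a generative model trained on finite samples, under-represents rare events. The spread of the data shrinks generation after generation.

```python
import random
import statistics

# Toy caricature of recursive training on synthetic data; not the Nature
# study's experiment, just an illustration of "losing the tails".
random.seed(0)

# Generation 0: "human" data with genuine diversity, including rare extremes.
data = [random.gauss(0.0, 1.0) for _ in range(5000)]

for generation in range(8):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    print(f"gen {generation}: stdev={sigma:.3f}, "
          f"min={min(data):+.2f}, max={max(data):+.2f}")
    # The next "model" fits the current data, then generates the training set
    # for the following generation. Its under-representation of rare events is
    # caricatured here by dropping the most extreme 5% of what it generates.
    generated = sorted(random.gauss(mu, sigma) for _ in range(5000))
    cut = len(generated) // 40  # trim 2.5% from each tail
    data = generated[cut:-cut]
```

Run as written, the standard deviation shrinks steadily with each generation, which is the qualitative pattern behind concerns about training future models on mostly synthetic text.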
If left unchecked, model collapse could render LLMs increasingly useless, producing ever more repetitive and less meaningful content. That would have consequences for industries that already depend on AI-assisted writing, from business communication to journalism and scholarly research.
Looking beyond short-term effects, researchers and policymakers are beginning to grapple with long-term regulation and ethical concerns. These are still largely open questions, even as AI becomes increasingly integrated into routine business and institutional operations.
What’s In Store As AI Writing Keeps Expanding?
As AI-generated content becomes the rule instead of the exception, companies will have to walk a tightrope between efficiency and authenticity. Though LLMs undoubtedly provide productivity benefits, unchecked dependence could undermine creativity and credibility in essential communications, according to Liang’s research.
He and his colleagues are already planning future studies on AI applications in financial communications and on how AI affects knowledge sharing more broadly.
“We have a lot of new research ideas now. We want to investigate how LLMs impact communication and decision-making in high-stakes contexts,” he added.
Despite the uncertainty surrounding the unforeseen consequences of ever-expanding AI-written content, one thing is clear: AI-generated content isn’t a passing trend. It’s a seismic shift in how we communicate at scale.