There has been much fanfare over artificial intelligence lately. Microsoft co-founder Bill Gates recently predicted that humans would no longer be needed for most jobs within the next decade, a prediction that came after Microsoft announced an $80 billion investment in artificial intelligence infrastructure.
The Wharton School at the University of Pennsylvania has also announced a new major and concentration in “Artificial Intelligence for Business.” The International Monetary Fund reported that 40% of jobs worldwide could be at risk due to AI. Even pizza chain Papa John’s announced that it would use AI for pizza ordering.
The world also seems to be experiencing a sharp rise in AI investment. In 2024, the investment bank Goldman Sachs published a report estimating that roughly $1 trillion will be spent on AI in the coming years. The University of Alabama recently announced a $100 million investment in a new AI data center, alongside work on a “BamaGPT.”
Generative AI is a form of machine learning, meaning it operates according to mathematical formulas and artificial neural networks, loosely modeled on the human brain, that identify patterns in data and use them to solve problems. Machine learning itself is nothing new; it is the same kind of technology behind the recommendation algorithms on sites such as YouTube and Netflix.
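To make that idea concrete, here is a minimal, illustrative sketch in Python (assuming the scikit-learn library is available); the tiny dataset and network settings are hypothetical examples, not drawn from any system mentioned in this article. It shows a small artificial neural network learning a simple pattern from examples rather than being explicitly programmed with a rule.

```python
# A toy illustration of machine learning: instead of being given a rule,
# a small artificial neural network infers the pattern from example data.
# (Hypothetical example; requires scikit-learn.)
from sklearn.neural_network import MLPClassifier

# Example inputs and labels: the label is 1 when the two inputs differ.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

# One hidden layer of 8 units; each unit computes a weighted sum passed
# through a simple mathematical function (tanh), loosely mimicking a neuron.
model = MLPClassifier(hidden_layer_sizes=(8,), activation="tanh",
                      solver="lbfgs", max_iter=5000, random_state=0)
model.fit(X, y)

# The network was never told the rule; it learned it from the examples.
print(model.predict([[0, 1], [1, 1]]))  # typically prints [1 0]
```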
The newest generative AI systems, such as the chatbot ChatGPT, are still a form of weak AI, which means they cannot reason or think outside of their programming. They need specific input from humans in order to function and are prone to mistakes because they lack independent intelligence.
Last year, Google’s AI search feature claimed that geologists recommend people eat a rock a day and told users to put glue on pizza to make the cheese stick. Additionally, a recent BBC study found that 51% of the answers AI chatbots gave about the news contained some sort of significant error.
It also doesn’t help that AI has a significant black box problem: Even the researchers who build these systems often cannot explain exactly how they reach a given decision or generate a given response. If an AI makes a mistake, it can be difficult, if not impossible, to pinpoint why it made that mistake and how to correct it. That could be very bad if AI is used to independently make significant decisions, as in self-driving cars, banking or healthcare.
Biases held by programmers or embedded in training data, even minor and subconscious ones, could not only be present in AI systems but amplified by them. This could lead to increased discrimination should AI be placed in charge of administrative jobs or decision-making.
Businesses are already starting to use AI to replace work done by people in order to cut costs. One report even found that jobs in coding, mathematics, law and accounting could be entirely replaced by chatbots such as ChatGPT, and the prospect of AI-written Hollywood scripts helped fuel a nearly five-month writers strike in 2023.
While there is some evidence that AI can enhance human creativity in certain circumstances, its reliance on existing input means it cannot be creative in the way a person can. AI might therefore boost productivity in some jobs and possibly replace less creative ones, but the more creativity a job requires, the worse AI will be at it.
There is also the possibility that the massive investments in AI will not pay off economically. Some economists, such as Nobel Prize-winning MIT economist Daron Acemoglu, estimate that AI may add only about 1% to economic growth over the next 10 years: a meaningful amount, but much smaller than many anticipate.
Another potential problem is that the stock market is growing increasingly dependent on the market value of AI and technology firms such as Apple, Microsoft and Nvidia. This risks inflating an economic bubble, which could burst and cause a crash as investors realize that AI might not be as valuable as advertised.
There is precedent for this: In the late 1990s and early 2000s, rapid stock market growth driven by heavy investment in young internet companies and startups ended in the dot-com crash of 2000-02, as investment outpaced actual value. A more recent glimpse of how unstable the current bubble could be came with the release of the Chinese AI program DeepSeek in January. Its low-cost efficiency sent the S&P 500 down roughly 1.5% and the Nasdaq down more than 3% in a single day, while heightening the U.S.-China AI race.
It’s not just capital that’s being invested: AI data centers are projected to consume as much power as entire cities, requiring a massive investment in energy for something with questionable society-wide benefits.
While the future of AI remains uncertain, what is true is that the technology today has very real limits. AI can, of course, be used for very positive things, such as identifying diseases earlier or aiding scientific discovery by sifting through large amounts of data. But AI as it stands cannot independently reason or make decisions, meaning much of the hype surrounding it may be overblown.