As a philosopher, my interest in AI tends to focus on metaphysics (philosophy of mind), epistemology (the problem of other minds) and ethics rather than on economics. My academic interest goes back to my participation as an undergraduate in a faculty-student debate on AI back in the 1980s, although my interest in science fiction versions arose much earlier. While “intelligence” is difficult to define, the debate focused on whether a machine could be built with a mental capacity analogous to that of a human. We also had some discussion about how AI could be used or misused, and science fiction had already explored the idea of thinking machines taking human jobs. While AI research and philosophical discussion never went away, it was not until recently that AI returned to the headlines, mainly because it was being aggressively pushed as the next big thing after driverless cars fizzled out of the news.
While AI technology has improved dramatically since the 1980s, we do not have the sort of AI we debated about, namely AI on par with (or greater than) a human. As Dr. Emily Bender pointed out, the current text generators are stochastic parrots. While AI has been hyped and made into a thing of terror, it is not really that good at doing its one job. One obvious problem is hallucination, which is a fancy way of saying that the probabilistically generated text fails to match the actual world. A while ago, I tested this out by asking ChatGPT for my biography. While I am not famous, my information is readily available on the internet, and a human could put together an accurate biography in a few minutes using Google. ChatGPT hallucinated a version of me that I would love to meet; that guy is amazing. Much more seriously, AI can do things like make up legal cases when lawyers foolishly rely on it to do their work.
Since I am a professor, you can certainly guess that my main encounters with AI are in the form of students turning in AI-generated papers. When ChatGPT first became freely available, I saw my first AI-generated papers in my Ethics class, and most were papers on the ethics of cheating. Ironically, even before AI, that topic was always the one with the most plagiarized papers. As I told my students, I did not fail a paper just because it was AI-generated; the papers failed themselves by being bad. To be fair to the AI systems, some of this can be attributed to the difficulty of writing good prompts for the AI to use. However, even with some effort at crafting prompts, the limits of the current AI are readily apparent. I have, of course, heard of AI-written works passing exams, getting B grades and so on. But what shows up in my classes is easily detected and fails on its own merits. To be fair once more, perhaps there are exceptional AI papers that are getting past me. However, my experience has been that AI is bad at writing, and so far it has proven easy to address efforts to cheat using it. Since this sort of AI was intended to write, this seems to show the strict limits under which it can perform adequately.
AI was also supposed to revolutionize search, with Microsoft and Google incorporating it into their web searches. As for how well this is working for us as users, you need only try it yourself. Then again, it does seem to be working for Google: the old Google would give you better results, and the new Google is bad in a way that leads you to view more ads as you try to find what you are looking for. But that hardly shows that AI is effective in the context of search.
Microsoft has been a major spender on AI, and they recently rolled Copilot out into Windows and their apps, such as Edge and Word. The tech press has been generally positive about Copilot, and it does seem to have some uses. However, there is the question of whether it is, in fact, something that will be useful and, more importantly, profitable. Out of curiosity, I tried it but failed to find it compelling or useful. But your results might vary.
But there might be useful features, especially since “AI” is defined so broadly that almost any automation seems to count as AI. Which leads to a concern that is both practical and philosophical: what is AI? Back in that 1980s debate we were discussing what would probably be called artificial general intelligence today, as opposed to what used to be called “expert systems.” Somewhat cynically, “AI” seems to have almost lost its meaning and, at the very least, you should wonder what sort of AI (if any) is being referred to when someone talks about AI. This, I think, will help contribute to the possibility of an AI bubble, as so many companies try to jam “AI” into as much as possible without much consideration. Which leads to the issue of whether AI is a bubble that will burst.
I am, of course, not an expert on AI economics. However, Ed Zitron presents a solid analysis and argues that there is an AI bubble that is likely to burst. AI seems to be running into the same problem faced by Twitter, Uber and other tech companies, namely that it burns cash and does not show a profit. On the positive side, it does enrich a few people. While Twitter shows that a tech company can hemorrhage money and keep crawling along, it is reasonable to think there is a limit to how long AI can run at a huge loss before those funding it decide it is time to stop. The fate of driverless cars provides a good example, especially since driverless cars are a limited form of AI specialized in the single task of driving.
An obvious objection is to contend that as AI is improved and the costs of using it are addressed, it will bring about the promised AI revolution and the investments will be handsomely rewarded. That is, the bubble will be avoided and instead a solid structure will have been constructed. This just requires finding ways to run the hardware much more economically and achieving breakthroughs in the AI technology itself.
One obvious reply is that AI is running out of training data (although we humans keep making more every day), and it is reasonable to question whether enough improvement is likely. That is, AI might have hit a plateau and will not get meaningfully better until there is some breakthrough. Another obvious reply is that there is unlikely to be a radical breakthrough in power generation that would enable a significant reduction in the cost of AI. That said, it could be argued that long-term investments in solar, wind and nuclear power could lower the cost of running the hardware.
One final concern is that despite all the hype and some notable exceptions, AI is just not the sort of thing that most people need or want. That is, it is not a killer product like a smartphone or refrigerator. This is not to deny that AI (or expert) systems have some valuable uses, but the hype around AI is just that, and the bubble will burst soon.
Many people are excited about this. Obviously. I guess the main reason I am not is wrapped up in my age and pragmatist outlook. Rorty said it well when talking about things that are more useful, rather than less. I don’t worry so much about the cheating aspect of it. As long as well-schooled and capable professors recognize bad papers, and grade them accordingly, cheating will fail, regardless of creativity or ingenuity. AI has been simmering for a while, as you point out in this post. The sky is not falling yet. Well reasoned and well stated, professor. My thinking is the bubble may not burst. It may only lose inflation as the hot air it contains cools down. I also think the cool-down has begun. Thanks for the wise words.
The plagiarism fear was part of the hype; I must admit to being surprised at how bad AI is at writing philosophy papers and writing in general. It does seem adequate for writing that is, by its nature, based on formulas. So it does an okay job at answering basic essay questions.
It also has kind of a sophist effect going for it. When I read an AI-generated paper, it starts off seeming smooth and slick, with plenty of keywords being thrown around. But it always lacks the depth that requires understanding.
So, AI would make a great CEO or grifter simulation.
Agreed.