You’re in a conversation and someone says, “Artificial intelligence (AI) will soon replace humans in almost everything. It’s smarter and more efficient than us, and we won’t be able to keep up.”
What would you say?
While artificial intelligence has shown remarkable progress, sensational claims about AI replacing humans misunderstand what AI is and underestimate its major shortcomings. Such claims also underestimate the uniqueness of the human mind.
This video explores three things to consider when discussing artificial intelligence.
You’re in a conversation and someone says, “Artificial intelligence will soon replace humans in almost everything. It’s smarter and more efficient than us, and we won’t be able to keep up.” What would you say?
In the last two years, machine learning models and generative AI chatbots have transformed the internet by doing things that previously only humans could do, such as creating images, producing music, and writing. This has led some to speculate that, in a short time, artificial intelligence will overtake human intelligence as a source of thought and decision-making.
For example, Tesla and SpaceX founder Elon Musk recently said that AI will soon render humans mostly obsolete, predicting that “intelligence that is biological will be less than 1 percent.”
While artificial intelligence has shown remarkable progress, sensational claims about AI replacing humans misunderstand what AI is and underestimate its major shortcomings. Such claims also underestimate the uniqueness of the human mind.
So, the next time someone says, “Artificial intelligence will replace humans at almost everything,” here are three things to remember:
1. Artificial intelligence is not as smart as many think.
At its most basic level, “artificial intelligence” isn’t intelligent, at least not as the Oxford Learner’s Dictionary defines it. “Intelligence” is “the ability to learn, understand and think in a logical way.” Except in a metaphorical sense, computers don’t learn, understand, or think. Instead, so-called “machine learning” algorithms are instructions a computer uses to process vast amounts of information, like images and text, in order to build statistical models. With enough input, AI chatbots like ChatGPT can use those models to generate answers to users’ prompts.
At no point in this process, however, do the transistors and electronic storage components learn, understand, or think. Computers do not comprehend the data they analyze, the questions users ask, or even the answers they produce. They merely draw conclusions from statistical patterns in the data, sometimes with humorous results.
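To make that concrete, here is a minimal sketch of the kind of statistics at work (a deliberately tiny, hypothetical model, nothing like the scale of ChatGPT): a program that “writes” by tallying which word followed which in its training text, then sampling from those tallies. It can produce plausible-looking sentences without comprehending a single word.

```python
import random
from collections import defaultdict, Counter

# Toy "language model": count which word follows which in the training
# text, then generate new text by sampling from those counts. Real
# chatbots use vastly larger statistical models, but the principle is
# the same: pattern-matching, not understanding.
training_text = (
    "the cat sat on the mat the dog sat on the rug "
    "the cat chased the dog the dog chased the cat"
).split()

follows = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    follows[current_word][next_word] += 1

def generate(start, length=8):
    words = [start]
    for _ in range(length):
        counts = follows[words[-1]]
        if not counts:
            break
        # Pick the next word in proportion to how often it followed the
        # current one in training: pure statistics, no comprehension.
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the dog sat on the mat the cat chased"
```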
Recently, X’s “Grok” AI tried to summarize pro basketball player Klay Thompson’s poor shooting performance in an NBA play-in game. The typically reliable Golden State Warrior shot an awful 0-for-10 from the field in a loss to the Sacramento Kings. Pulling from online fan content that teased Thompson for “throwing up bricks,” Grok generated a story reporting that Thompson had vandalized several people’s houses with actual bricks!
The reason for the misunderstanding was simple: There was nobody “there” to understand the joke. Even if AI eventually gets better at discerning the subtleties of human humor, it will never laugh at the joke, because a mathematical model isn’t a mind, no matter how well it mimics one.
2. Artificial intelligence cannot create. It imitates and extrapolates.
Because so-called “machine learning” is entirely dependent on the “training data” it receives, it has no ability to check its models against the real world. Everything generative AI appears to create is a result of what designers feed it, or what’s available on the internet. This makes chatbots especially vulnerable to error and attacks.
For instance, programmers are now warning of something called “model collapse,” which occurs when AI is trained on an internet increasingly saturated with AI-generated material. When AI models consume their own output, a feedback loop forms and the quality of their answers declines.
Popular Mechanics described an example of model collapse when a team of researchers trained a machine learning model on its own answers for ten generations. The AI began by writing about Gothic revival architecture in Renaissance cathedrals. After ten cycles, it was babbling about jackrabbits with multicolored tails.
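The mechanism behind model collapse can be illustrated with a toy experiment (a simplified sketch, not the researchers’ actual setup): fit a trivial statistical model to some data, sample from the model, refit it on those samples, and repeat. Sampling noise compounds with each generation, so the model drifts away from the original data and loses its variety.

```python
import random
import statistics

# Toy illustration of "model collapse": a trivial "model" (a normal
# distribution fit to data) is repeatedly retrained on its own samples.
# With no fresh real-world data, sampling noise compounds from one
# generation to the next, so the fitted distribution drifts and loses
# diversity.
random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(50)]  # original "human" data

for generation in range(1, 11):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    print(f"generation {generation}: mean={mu:+.2f}, spread={sigma:.2f}")
    # Retrain on the model's own output instead of real-world data.
    data = [random.gauss(mu, sigma) for _ in range(50)]

# Over many generations the mean wanders and the spread tends to decay:
# the statistical analogue of coherent prose degrading into babble.
```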
Of course, AI is also vulnerable to so-called “data poisoning” by malicious actors. Some warn that strategically placed “poison data” could train chatbots to spread false information about public figures or to steal people’s financial information.
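Data poisoning is also easy to picture with a toy classifier (the model and the brand name “acme” below are hypothetical, invented purely for illustration): a model that learns labels by counting words will absorb a handful of planted, mislabeled examples just as readily as honest ones.

```python
from collections import Counter, defaultdict

# Toy illustration of "data poisoning": a word-counting classifier
# learns labels from training examples. An attacker slips a few
# mislabeled examples into the training set, and the model dutifully
# learns the planted falsehood.
clean_data = [
    ("the product works great", "positive"),
    ("great quality and great price", "positive"),
    ("terrible product broke fast", "negative"),
    ("awful quality terrible support", "negative"),
]
# Attacker-planted examples pairing the (hypothetical) brand "acme"
# with a false label, repeated to outweigh the honest signal.
poison_data = [("acme is a scam", "negative")] * 4

def train(examples):
    model = defaultdict(Counter)
    for text, label in examples:
        for word in text.split():
            model[word][label] += 1
    return model

def classify(model, text):
    votes = Counter()
    for word in text.split():
        votes.update(model.get(word, Counter()))
    return votes.most_common(1)[0][0] if votes else "unknown"

clean_model = train(clean_data)
poisoned_model = train(clean_data + poison_data)

print(classify(clean_model, "acme product great"))     # "positive"
print(classify(poisoned_model, "acme product great"))  # "negative"
```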
3. Human intelligence is not like a computer. It is unique.
For all the apparent advances in artificial intelligence, we have actually made very little progress toward what most people envision when they think of AI: a conscious, adaptive, generally intelligent entity of the type seen in science fiction. This is because, while researchers have had great success producing “narrow AI” like ChatGPT or Google’s Gemini, they are still mostly stumped on how to create “artificial general intelligence,” or AGI.
As John Lennox explained in his book, 2084: Artificial Intelligence and the Future of Humanity, narrow AI can be great at specific, repetitive tasks, like playing chess, constructing sentences, or identifying precancerous tissue on a CAT scan. However, AI is not great at other things that come easily to humans, such as navigating an unfamiliar room or detecting sarcasm. This is because, so far, AI lacks the kind of generalized intelligence that allows us to move from task to task, to think in the abstract, to apply background knowledge, to use common sense, and to understand cause and effect. For all the hype around “machine learning,” AI systems continue to be, at a fundamental level, programs that do what their creators tell them to do.
The human mind, in other words, is unlike anything the world of computing has yet produced. This is partly because, as psychologist Robert Epstein argued several years ago at Aeon, the human brain is not a computer. It “does not process information, retrieve knowledge or store memories” as symbolic data.
These, he explained, are metaphors we have come to use in the digital age, just as people in past ages likened the mind to a steam engine, a mechanical clock, or a set of pipes. We use such technological metaphors because we don’t really understand ourselves. For instance, scientists still do not know where, if anywhere, in the brain human consciousness resides, how it emerges, or even what it is! And the concept of the soul remains beyond the reach of scientific explanation altogether.
The idea that we could ever create a computer that rivals human intelligence, one that is conscious, intentional, and truly creative as we are, is much more far-fetched than our science-fiction-saturated culture would lead you to believe. AI will almost certainly become a major part of life in the coming years, and it will replace humans in many tasks that rely on repetitive, predictable skills. But it is not, in the most meaningful sense, smarter than we are, nor is it likely to render the human mind obsolete.
So, the next time someone says, “Artificial intelligence will replace humans at almost everything,” remember these three things:
1. Artificial intelligence is not as smart as many think.
2. Artificial intelligence cannot create. It imitates and extrapolates.
3. Human intelligence is not like a computer. It is unique.
Sources cited in this video:
Curtis, Charles. “Klay Thompson’s awful night turned into a wild story about a ‘brick-vandalism spree’ thanks to Twitter AI,” For The Win, April 17, 2024, https://ftw.usatoday.com/2024/04/klay-thompson-twitter-grok-ai-brick-vandalism-spree-meme
Duboust, Oceane. “Elon Musk predicts AI will overtake humans to the point that ‘biological intelligence will be 1%,’” Euronews, May 10, 2024, https://www.euronews.com/next/2024/05/10/elon-musk-predicts-ai-will-overtake-humans-to-the-point-that-biological-intelligence-will-
Epstein, Robert. “The empty brain,” Aeon, May 18, 2016, https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer
Fried, Ina; Rosenberg, Scott. “AI could choke on its own exhaust as it fills the web,” Axios, August 28, 2023, https://www.axios.com/2023/08/28/ai-content-flood-model-collapse
Hashemi-Pour, Cameron; Lutkevitch, Ben. “What is artificial general intelligence (AGI)?” TechTarget, Accessed May 27, 2024, https://www.techtarget.com/searchenterpriseai/definition/artificial-general-intelligence-AGI
Lennox, John C. 2084: Artificial Intelligence, the Future of Humanity, and the God Question, Grand Rapids: Zondervan, 2020.
Orf, Darren. “A New Study Says AI Is Eating Its Own Tail,” Popular Mechanics, October 11, 2023, https://www.popularmechanics.com/technology/a44675279/ai-content-model-collapse/
Snow, Jackie. “As Generative AI Takes Off, Researchers Warn of Data Poisoning,” The Wall Street Journal, March 14, 2024, https://www.wsj.com/tech/ai/as-generative-ai-takes-off-researchers-warn-of-data-poisoning-d394385c