Why Human Intelligence and Ingenuity Outshine AI’s Limitations
Gary Marcus rightly highlights, as many have for years, that merely increasing computational power won’t resolve the challenges faced by generative artificial intelligence (AI). When billion-dollar AI systems struggle with tasks that a child can easily handle, it signals a need to reassess the prevailing hype. However, he overlooks a crucial factor: a seven-year-old can tackle the Tower of Hanoi puzzle because humans are embodied beings who navigate the world.
All living creatures are driven to explore, using every sense available to them from the moment they are born. In humans, this exploration builds a model of reality and an understanding of the world around us. We can derive general concepts from very limited experience, something no current AI system can achieve. For instance, teaching a machine-learning model the concept of “cat” requires exposure to thousands of images showing cats in various situations, whether perched in a tree or nestled in a box. Even then, if it encounters a cat engaged in an unusual activity, like playing with a bath plug, it might fail to recognize it as a cat.
In contrast, a human child who has interacted with just two or three cats will be able to identify any cat as a cat for the rest of their life. Furthermore, this inherently embodied intelligence makes us remarkably energy-efficient compared with computers. The systems powering an autonomous vehicle draw over a kilowatt of power, whereas a human driver operates on around twenty watts of renewable energy and doesn’t require external fuel to learn a new route.
In light of the climate crisis, the substantial energy needs of the AI industry could prompt us to recognize and appreciate the remarkable economy, versatility, adaptability, and creativity inherent in human intelligence, qualities we possess simply by being alive.
It’s no surprise that Apple researchers have identified “fundamental limitations” in advanced artificial intelligence models. AI systems such as large reasoning models and large language models (LLMs) are far from capable of genuine reasoning. This can be easily tested by posing a simple question to ChatGPT or a similar platform: “If 9 plus 10 is 18, what is 18 minus 10?” Taken on the question’s own false premise (9 plus 10 is actually 19), the consistent answer is 9, since subtracting the 10 should recover the original 9. Yet the response is often 8, plain arithmetic that ignores the premise, or at times something vague and inconclusive.
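For readers who want to run this probe themselves, here is a minimal sketch, assuming the official `openai` Python package and an OpenAI-compatible endpoint; the model name, prompt wording, and trial count are illustrative choices, not part of the original letter.

```python
# A minimal sketch of the probe described above, assuming the
# `openai` Python package and an API key in OPENAI_API_KEY.
# The model name is an assumption; substitute whatever is available.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = "If 9 plus 10 is 18, what is 18 minus 10?"

def ask(prompt: str, trials: int = 5) -> list[str]:
    """Pose the same question several times and collect the answers."""
    answers = []
    for _ in range(trials):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # hypothetical choice of model
            messages=[{"role": "user", "content": prompt}],
        )
        answers.append(response.choices[0].message.content)
    return answers

if __name__ == "__main__":
    for answer in ask(PROMPT):
        print(answer)
```

Repeating the prompt several times matters because sampling is stochastic; a single response, whatever it says, proves little either way.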
This underscores that current AI doesn’t truly reason; instead, it blends brute force with logical routines designed to refine that brute force. A term that deserves more attention is ANI, or artificial narrow intelligence, which describes systems like ChatGPT that excel at summarizing information and rephrasing sentences but remain far from authentic reasoning.
It’s worth noting, however, that the more often LLMs encounter similar questions, the better their responses become. Yet this, too, isn’t real reasoning; it is a reflection of model training.
Have an opinion on what you’ve read? We welcome your thoughts via email for consideration in our letters section.