Moravec's paradox
Moravec's paradox is the observation in the fields of artificial intelligence and robotics that, contrary to traditional assumptions, reasoning requires very little computation, but sensorimotor and perception skills require enormous computational resources. The principle was articulated in the 1980s by Hans Moravec, Rodney Brooks, Marvin Minsky, Allen Newell, and others. Newell presaged the idea, characterizing it as a myth of the field in a 1983 chapter on the history of artificial intelligence: "But just because of that, a myth grew up that it was relatively easy to automate man's higher reasoning functions but very difficult to automate those functions man shared with the rest of the animal kingdom and performed well automatically, for example, recognition".[1] Moravec wrote in 1988: "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility".[2]
Similarly, Minsky emphasized that the most difficult human skills to reverse engineer are those that are below the level of conscious awareness. "In general, we're least aware of what our minds do best", he wrote, and added: "we're more aware of simple processes that don't work well than of complex ones that work flawlessly".[3] Steven Pinker wrote in 1994 that "the main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard".[4] There is currently no consensus as to which tasks AI tends to excel at.[5]
By the 2020s, in accordance with Moore's law, computers were hundreds of millions of times faster than in the 1970s, and this additional computing power was finally sufficient to begin to handle perception and sensory skills, as Moravec had predicted in 1976.[6] In 2017, leading machine-learning researcher Andrew Ng presented a "highly imperfect rule of thumb", that "almost anything a typical human can do with less than one second of mental thought, we can probably now or in the near future automate using AI".[7] While this statement is broadly consistent with current AI development, truly matching human capability still eludes AI systems in a number of fields. Although some AI classification systems are reasonably accurate at image identification, most still fail to classify or distinguish images at certain levels of the category hierarchy.[8] Deep learning in robotics has been somewhat less successful: even real-world situations that are trivial for a human can generate massively complex physical problems, demanding continuous and multifaceted analysis of a given space. To humans, such analysis is not even conscious, but AI systems struggle with it immensely.[9] These shortcomings further affirm Moravec's paradox, highlighting the extent to which humans are optimized for perception and motor tasks.
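As a rough sanity check on the "hundreds of millions" figure, the sketch below works through the Moore's-law arithmetic in Python. The doubling periods of 18 and 24 months and the 2024 endpoint are illustrative assumptions for this sketch, not figures stated in the sources.

```python
# Back-of-the-envelope check of the "hundreds of millions of times faster"
# claim. The doubling periods and the 2024 endpoint are illustrative
# assumptions, not sourced figures.

def growth_factor(years: float, doubling_period_years: float) -> float:
    """Total speedup after `years` of doubling every `doubling_period_years` years."""
    return 2 ** (years / doubling_period_years)

years_elapsed = 2024 - 1976  # from Moravec's 1976 prediction to the 2020s

for period in (1.5, 2.0):
    # 1.5-year doubling -> 2**32 (about 4e9); 2-year doubling -> 2**24 (about 2e7)
    print(f"doubling every {period} years -> ~{growth_factor(years_elapsed, period):.2e}x")
```

Under these assumptions the two doubling periods bracket a speedup of roughly 10^7 to 10^9, with "hundreds of millions" falling in between.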
The biological basis of human skills
One possible explanation of the paradox, offered by Moravec, is based on evolution. All human skills are implemented biologically, using machinery designed by the process of natural selection. In the course of human evolution, natural selection has tended to preserve design improvements and optimizations. The older a skill is, the more time natural selection has had to improve the design. In the case of sensorimotor capability, humans have been optimized to receive and process extraordinarily large amounts of data, in the form of millions of signals across all of our sensory channels, for even the simplest of tasks.[10] In contrast, abstract thought developed only very recently, and consequently, we should not expect its implementation to be particularly efficient.
As Moravec writes:
Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it. The deliberate process we call reasoning is, I believe, the thinnest veneer of human thought, effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge. We are all prodigious olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it.[11]
Moravec suggests that the oldest human skills are largely unconscious and therefore appear to us to be effortless. If, as he holds, unconscious skills are those that have been optimized over hundreds of millions of years, it follows that these "effortless" skills are actually far harder to reverse-engineer, while skills that require conscious effort are comparatively easy. More generally, the difficulty of reverse-engineering a human skill should be roughly correlated with the length of time that skill has been evolving in animals.
Skills that have evolved in humans over millions of years include recognizing a face, moving around in space, judging people's motivations, catching a ball, recognizing a voice, setting appropriate goals, and paying attention to things one deems interesting; in general, anything to do with perception, attention, visualization, and motor control. Many of these capacities are indeed difficult to replicate in AI systems. Conversely, skills like mathematics, engineering, games, logic, and scientific reasoning developed much more recently, and they seem to be better replicated by AI. On Moravec's account, the recent skills are easier to match precisely because evolution has had little time to optimize humans for them.
Possible implications of Moravec's theory
Moravec's theory of how human evolution shapes the perceived difficulty of tasks carries notable implications. The skills that appeared in recent times, refined over a comparatively minuscule period of a few thousand years, are much more easily matched by computers and AI. Moravec wrote in 2009:
The ability to do mathematical calculations, of course, was irrelevant for survival... we visualize numbers as complex shapes, write them down and perform other such functions, we process digits in a monumentally awkward and inefficient way. We use hundreds of billions of neurons to do in minutes what hundreds of them, specially “rewired” and arranged for calculation, could do in milliseconds.[12]
According to Moravec, humans are vastly inefficient at mathematical computation. Extending the same argument to other "newer" human skills suggests a potential reason why computers and AI so greatly outperform us at these skills but not at sensorimotor ones: AI is not inherently better at "intellectual" tasks than at motor skills; we merely perceive it that way because we are naturally worse at the former and better at the latter. It may be approximately as easy to build superhumanly competent AI for intellectual reasoning as it is to build human-level AI for perception and robotics.
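To make the scale of this claimed inefficiency concrete, the sketch below plugs in rough orders of magnitude read literally from Moravec's quote above. The specific values (10^11 neurons, one minute, 10^2 neurons, one millisecond) are our own literal readings of his phrasing, not figures he states exactly.

```python
# Reading Moravec's figures literally: "hundreds of billions of neurons"
# taking "minutes", versus "hundreds" of purpose-rewired neurons taking
# "milliseconds". All values are rough orders of magnitude inferred from
# the quote, not numbers Moravec states exactly.

brain_neurons = 1e11      # "hundreds of billions of neurons"
brain_time_s = 60.0       # "minutes" -> about one minute
circuit_neurons = 1e2     # "hundreds of them", rewired for calculation
circuit_time_s = 1e-3     # "milliseconds"

# Compare neuron-seconds expended per calculation in each case.
brain_cost = brain_neurons * brain_time_s
circuit_cost = circuit_neurons * circuit_time_s
print(f"inefficiency factor: ~{brain_cost / circuit_cost:.0e}")  # ~6e+13
```

On these assumptions, the brain spends on the order of 10^13 to 10^14 times more neuron-seconds per calculation than the hypothetical dedicated circuit, which is the scale of the gap Moravec's quote gestures at.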
Historical influence on artificial intelligence
In the early days of artificial intelligence research, leading researchers often predicted that they would be able to create thinking machines in just a few decades (see history of artificial intelligence). Their optimism stemmed in part from the fact that they had been successful at writing programs that used logic, solved algebraic and geometrical problems, and played games like checkers and chess. Given that logic and algebra are difficult for most people and are considered a sign of intelligence, many prominent researchers[a] assumed that, having (almost) solved the "hard" problems, the "easy" problems of vision and commonsense reasoning would soon fall into place. They were wrong (see also AI winter), in part because these problems are not easy at all, but immensely difficult. Moreover, having solved problems like logic and algebra was of little help with the "easy" problems, since such formal problems are comparatively easy for machines to solve.[b]
Rodney Brooks explains that, according to early AI research, intelligence was "best characterized as the things that highly educated male scientists found challenging", such as chess, symbolic integration, proving mathematical theorems and solving complicated word algebra problems. "The things that children of four or five years could do effortlessly, such as visually distinguishing between a coffee cup and a chair, or walking around on two legs, or finding their way from their bedroom to the living room were not thought of as activities requiring intelligence. Nor were any aesthetic judgments included in the repertoire of intelligence-based skills."[14]
In the 1980s, this would lead Brooks to pursue a new direction in artificial intelligence and robotics research. He decided to build intelligent machines that had "No cognition. Just sensing and action. That is all I [he] would build and completely leave out what traditionally was thought of as the intelligence of artificial intelligence."[14] He called this new direction "Nouvelle AI".[15] More modern accounts of the nature of intelligence tend to include the skills that early AI research left out, notably embodied intelligence, which expands the traditional definition to encompass, in general, all intentional interchanges of information and energy in a physical system.[16] On such accounts, sensorimotor capabilities count as intelligence alongside other skills (see also intelligence).
Cultural references
Linguist and cognitive scientist Steven Pinker considers this the main lesson uncovered by AI researchers in his 1994 book The Language Instinct.[17]
Notes
[edit]- ^ Anthony Zador wrote in 2019: "Herbert Simon, a pioneer of artificial intelligence (AI), famously predicted in 1965 that "machines will be capable, within twenty years, of doing any work a man can do" — to achieve [human-level] general AI."[13]
- ^ These are not the only reasons that their predictions did not come true: see History of artificial intelligence § Problems.
References
- ^ Newell 1983, p. 199.
- ^ Moravec 1988, p. 15.
- ^ Minsky 1986, p. 2.
- ^ Pinker 2007, p. 190.
- ^ Brynjolfsson & Mitchell 2017.
- ^ Moravec 1976.
- ^ Lee 2017.
- ^ Mueller, Shane T. (2020-07-01). "Cognitive Anthropomorphism of AI: How Humans and Computers Classify Images". Ergonomics in Design. 28 (3): 12–19. arXiv:2002.03024. doi:10.1177/1064804620920870. ISSN 1064-8046.
- ^ Sünderhauf, Niko; Brock, Oliver; Scheirer, Walter; Hadsell, Raia; Fox, Dieter; Leitner, Jürgen; Upcroft, Ben; Abbeel, Pieter; Burgard, Wolfram; Milford, Michael; Corke, Peter (2018-04-01). "The limits and potentials of deep learning for robotics". The International Journal of Robotics Research. 37 (4–5): 405–420. arXiv:1804.06557. doi:10.1177/0278364918770733. ISSN 0278-3649.
- ^ Korteling, J. E. Hans; van de Boer-Visschedijk, G. C.; Blankendaal, R. a. M.; Boonekamp, R. C.; Eikelboom, A. R. (2021). "Human- versus Artificial Intelligence". Frontiers in Artificial Intelligence. 4: 622364. doi:10.3389/frai.2021.622364. ISSN 2624-8212. PMC 8108480. PMID 33981990.
- ^ Moravec 1988, pp. 15–16.
- ^ Moravec, Hans. "Rise of the Robots--The Future of Artificial Intelligence". Scientific American. Retrieved 2025-05-04.
- ^ Zador 2019.
- ^ a b Brooks (2002), quoted in McCorduck (2004, p. 456)
- ^ Brooks 1986.
- ^ Roy, Nicholas; Posner, Ingmar; Barfoot, Tim; Beaudoin, Philippe; Bengio, Yoshua; Bohg, Jeannette; Brock, Oliver; Depatie, Isabelle; Fox, Dieter (2021-10-28), From Machine Learning to Robotics: Challenges and Opportunities for Embodied Intelligence, arXiv:2110.15245, retrieved 2025-05-04
- ^ Pinker 2007, pp. 190–91.
Bibliography
- Brooks, Rodney (1986), Intelligence Without Representation, MIT Artificial Intelligence Laboratory
- Brooks, Rodney (2002), Flesh and Machines, Pantheon Books
- Brynjolfsson, Erik; Mitchell, Tom (22 December 2017). "What can machine learning do? Workforce implications". Science. 358 (6370): 1530–1534. Bibcode:2017Sci...358.1530B. doi:10.1126/science.aap8062. PMID 29269459. Retrieved 7 May 2018.
- Lee, Amanda (14 June 2017). "Will your job still exist in 10 years when the robots arrive?". South China Morning Post. Retrieved 7 May 2018.
- McCorduck, Pamela (2004), Machines Who Think (2nd ed.), Natick, Massachusetts: A. K. Peters, ISBN 1-5688-1205-1, p. 456.
- Minsky, Marvin (1986), The Society of Mind, Simon and Schuster, p. 29
- Moravec, Hans (1976), The Role of Raw Power in Intelligence, archived from the original on 3 March 2016, retrieved 16 October 2008
- Moravec, Hans (1988), Mind Children, Harvard University Press
- Newell, Allen (1983). "Intellectual issues in the history of artificial intelligence". The study of information: interdisciplinary messages (PDF). USA: John Wiley & Sons, Inc. pp. 187–294. ISBN 978-0-471-88717-1. Archived from the original (PDF) on 2023-12-21.
- Pinker, Steven (September 4, 2007) [1994], The Language Instinct, Perennial Modern Classics, Harper, ISBN 978-0-06-133646-1
- Zador, Anthony (2019-08-21). "A critique of pure learning and what artificial neural networks can learn from animal brains". Nature Communications. 10 (1): 3770. Bibcode:2019NatCo..10.3770Z. doi:10.1038/s41467-019-11786-6. PMC 6704116. PMID 31434893.
External links
- Explanation of the XKCD comic about Moravec's paradox