AI Snake Oil
| Author | Arvind Narayanan, Sayash Kapoor |
| --- | --- |
| Language | English |
| Publisher | Princeton University Press |
| Publication date | September 24, 2024 |
| Publication place | United States |
| Pages | 360 |
| ISBN | 9780691249131 |
AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference is a 2024 non-fiction book by scholars Arvind Narayanan and Sayash Kapoor, published by Princeton University Press. The book works to debunk hype surrounding artificial intelligence (AI) and outlines the potential benefits and harms of different modes of the technology. Its chapters explain different types of artificial intelligence and examine why AI hype occurs and where it comes from. Examples of both successes and failures of AI technologies are discussed, including Epic's attempted sepsis prediction model and Be My Eyes, a digital image classification tool designed for the visually impaired.
The text has been positively reviewed by scholars and journals, including Elizabeth Quill of Science News.[1] However, some critics, including Edward Ongweso Jr., highlight its limited discussion of AI beyond the West and its lack of focus on who controls the power surrounding AI technologies.[2] These critiques raise questions about the global impact of artificial intelligence and about hype originating outside the West.
Definition of "snake oil"
The phenomenon of "AI snake oil" appears throughout the text. It refers to the idiom "snake oil", which describes something that is promoted and sold but is ultimately a "fraudulent cure, remedy, or solution".[3] Narayanan and Kapoor use the term to highlight the hype around the technology while also noting its potential for misinformation.
Contents
Publication
Narayanan is a computer science professor at Princeton University; Kapoor is a doctoral candidate at the same university, and both scholars are affiliated with Princeton's Center for Information Technology Policy.[4] In 2023, Narayanan and Kapoor appeared on the TIME100 Artificial Intelligence list, which features influential figures in the field.[5]
The book was published in 2024 by Princeton University Press. AI Snake Oil runs 360 pages across eight chapters, with sections for acknowledgements, references, and an index. The central goal of the text is stated in the introductory chapter: "the goal of this book is to identify AI snake oil – and to distinguish it from AI that can work well if used in the right ways".[6]
Text overview
The text is mainly concerned with exploring different modes of artificial intelligence, such as predictive and generative AI, and identifying the benefits and limitations of each. Chapter one, the introduction, explains the authors' backgrounds and what brought them to write the book together; it also explains the title and identifies the intended audience. Chapter two focuses on predictive artificial intelligence and criticizes the overestimation of the technology's capabilities, providing examples of predictive AI models such as EAB Navigate and COMPAS (Correctional Offender Management Profiling for Alternative Sanctions). Chapter three recounts the history of early computational prediction attempts, with examples from companies like Simulmatics. Chapter four opens with a history of generative technologies, beginning with the perceptron developed by Frank Rosenblatt, and explains technical aspects of generative models, such as network layers and data learning, to assist the reader's understanding. Generative AI technologies include text-to-image generators and chatbots, which have surged in popularity: a study by OpenAI indicates that more than 33% of "college-aged young adults in the US use ChatGPT".[7] Chapter five turns to the Ladder of Generality and Artificial General Intelligence (AGI) to address fears about sentient artificial intelligence and automation. Chapter six explores AI-based content moderation and how it is implemented on social media platforms, like Facebook, to both protect and discriminate against users. Chapter seven shifts from specific modes of artificial intelligence to the hype surrounding the technology as a whole, citing different sources of hype and showing how companies anthropomorphize AI to cut labour costs. The final chapter, chapter eight, turns to the future, where the authors lay out their predictions for how the technology will evolve and be used in the coming years.
Chapter one: Introduction
The first chapter outlines several broad examples to introduce each discussed mode of artificial intelligence. Narayanan and Kapoor argue that many individuals do not yet have the literacy to distinguish functioning AI from potential snake oil, which they identify as "AI that does not and cannot work as advertised".[8] Major examples in this introductory chapter include Allstate's 2013 use of predictive AI and actors' concerns about AI replicating or using their likenesses, for which the authors cite the 2023 Writers Guild strike in Hollywood, California.
The first chapter also explores discrimination, including the false arrests of six Black individuals due to errors with AI facial recognition tools.[9] The authors pose several rhetorical questions throughout the chapter to provoke reflection, such as asking whether technologies prone to such discriminatory error should still be in use.[9] The chapter concludes with a comparison to the Industrial Revolution, where Narayanan and Kapoor highlight the extensive human labour necessary for artificial intelligence technologies to function.
Chapter four: The Long Road to Generative AI
The fourth chapter goes more in-depth into generative AI, which Narayanan and Kapoor describe as "AI technology that is capable of generating text, images, or other media".[10] Examples of generative AI software include ChatGPT, Midjourney, and DALL-E. The chapter begins with a positive example: Be My Eyes, a platform designed to virtually assist blind individuals, which supports a generative text-based feature for when no human volunteers are available.[10] Opening with a positive use of generative AI counters negative biases about the technology and offers the reader an alternative perspective. As the chapter progresses, the authors turn to examples of harm produced by generative AI, including the suicide of a Belgian man after connecting with Chai, a generative chatbot.[11] Issues of deepfakes and the preservation of artistic property are also discussed, including the use of generative AI to create non-consensual pornographic deepfakes of female celebrities. However, there is no mention of male celebrity deepfakes, nor any detailed discussion of revenge pornography and its harmful impact.
The chapter includes a brief history of machines and image classification. Frank Rosenblatt's perceptron is explained by breaking down how it functions, with discussions of output units, support vector machines (SVMs), and machine learning systems.[12] Important developments in the AI field are also mentioned, including Fei-Fei Li's ImageNet, which the authors credit with demonstrating the importance of dataset assemblage.
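A minimal sketch can make the perceptron concrete: the Python example below (illustrative code, not drawn from the book) implements Rosenblatt's learning rule for a single output unit and trains it on the logical AND function, one of the simplest linearly separable tasks.

```python
def predict(weights, bias, x):
    """Output unit: fire (1) if the weighted sum of inputs clears the threshold."""
    total = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if total > 0 else 0

def train(samples, labels, epochs=20, lr=0.1):
    """Rosenblatt's learning rule: nudge weights toward each misclassified example."""
    n = len(samples[0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            error = y - predict(weights, bias, x)  # -1, 0, or +1
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Toy example: learn the logical AND of two binary inputs.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w, b = train(samples, labels)
print([predict(w, b, x) for x in samples])  # expected: [0, 0, 0, 1]
```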
Chapter five: Is Advanced AI an Existential Threat?
The fifth chapter draws attention to AGI, or Artificial General Intelligence, which the authors describe as "AI that can perform most economically relevant tasks as effectively as any human".[13] They note that many contributors to the field of artificial intelligence believe AGI to be an impending threat that demands attention.[14] However, they argue that the perceived threat of AGI would only exist if the technology functioned reliably.[15] They attribute the barriers to functioning AGI to several factors, including insufficient datasets and computers' lack of social understanding. One of the more notable quotes from the text appears in this chapter and stems from concern about AI usage: the authors write that "we should be far more concerned about what people will do with AI than with what AI will do on its own".[16] The text compares weather forecasting to predictive AI, explaining how past data and knowledge of physics make forecasts accurate. Predictive AI, by contrast, does not always have past datasets to work with, nor an underlying scientific theory, such as mathematics or physics, against which to corroborate data and accurately predict future events, such as the probability of sepsis developing.[17]
To better illustrate the hype surrounding AGI, Narayanan and Kapoor use the Ladder of Generality, a visual tool in which "each rung represents a way of computing that is more flexible, and more general, than the previous one".[18] The rungs identified so far are: (0, or floor) special-purpose hardware, (1) programmable computers, (2) stored-program computers, (3) machine learning, (4) deep learning, (5) pretrained models, and, finally, (6) instruction-tuned models.[19] The authors note that we do not yet know what the next rungs might be, or whether the ladder will eventually reach a dead end.
The chapter also discusses the ELIZA effect, which Lawrence Switzky examines in his article "ELIZA Effects". Switzky attributes the coinage to Sherry Turkle, who defined it as "our more general tendency to treat responsive computer programs as more intelligent than they really are".[20]
To conclude the chapter, Narayanan and Kapoor observe that despite major advances in AI technology, including generative AI and deepfakes, there is not yet any proof that AI can successfully predict the future, and no AI-assisted improvements in prediction have yet been demonstrated.
Chapter six: Why Can't AI Fix Social Media?
The sixth chapter focuses on content moderation: why it matters, and how it has been and could be affected by automation. The first issue raised with AI-driven content moderation is the inability of computers to understand context and nuance, which creates the potential for discriminatory moderation and shadow banning. While noting these problems with automation, Narayanan and Kapoor also highlight the psychological toll on human content moderators, whose hidden labour is often outsourced to less developed countries, where workers sort through potentially traumatizing content for pay.[21] The discussion nonetheless focuses more heavily on why automated moderation can be problematic, including discriminatory algorithms and a lack of nuance; to balance the argument, discrimination and bias are also discussed in relation to human content moderators. Two types of AI are used to automate moderation: fingerprint matching and machine learning.[22]
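Fingerprint matching can be sketched as comparing a fingerprint of uploaded content against a database of content already ruled violating. The snippet below is a simplified illustration using an exact cryptographic hash; production systems instead rely on perceptual hashes (such as PhotoDNA), since an exact hash is defeated by any edit to the file.

```python
import hashlib

# Hypothetical database of fingerprints of known violating content.
KNOWN_VIOLATIONS = set()

def fingerprint(data: bytes) -> str:
    """Exact fingerprint via SHA-256 (a stand-in for a perceptual hash)."""
    return hashlib.sha256(data).hexdigest()

def flag_known(data: bytes) -> bool:
    """Fingerprint matching: flag uploads whose fingerprint is already on file."""
    return fingerprint(data) in KNOWN_VIOLATIONS

# Usage: register one item, then test an exact copy and a modified copy.
KNOWN_VIOLATIONS.add(fingerprint(b"banned clip"))
print(flag_known(b"banned clip"))     # True: exact duplicate is caught
print(flag_known(b"banned clip v2"))  # False: any change defeats exact hashing
```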
Chapter seven: Why Do Myths about AI Persist?
The seventh chapter outlines factors that contribute to hype surrounding AI. The authors discuss Epic and its attempt to deploy sepsis-detection AI in hospitals: despite the exposure and initial excitement surrounding the model, its accuracy was revealed to be only 63%, which the authors note is close to random prediction.[23] Narayanan and Kapoor explain how companies often promote new AI models without properly disclosing how a model works or what it learns from. They attribute hype to several groups, including companies, journalists, and researchers. Hype spread by companies, they argue, can be attributed to greed and a desire to grow corporate funds. For journalists, they argue that news media tend to prioritize financial incentives over the validity and quality of their writing,[24] and they point to the regurgitation of company statements in news media, which produces clickbait. Hype from researchers is linked to the lack of reproducibility in studies, as well as to leakage, which occurs when AI models are tested on their own training data.[25]
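Leakage can be made concrete with a minimal sketch: a model scored on the rows it was trained on reports inflated accuracy compared with a held-out test set. The example below, written with scikit-learn for illustration (not drawn from the book), contrasts the two evaluations.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic binary classification data.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Leakage: scoring on the training data overstates performance.
print("on training data:", model.score(X_train, y_train))  # near 1.0
# Honest evaluation: score on data the model never saw.
print("on held-out data:", model.score(X_test, y_test))    # typically lower
```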
The text discusses The Age of AI, a 2021 book by Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher.[26] Narayanan and Kapoor point out each author's former position, including Schmidt's as CEO of Google. Drawing on other scholars, such as Meredith Whittaker and Lucy Suchman's critical review "The Myth of Artificial Intelligence", they explore that book's contributions to AI hype and its lack of attention to understanding and literacy.[27]
The chapter also identifies the phenomenon of criti-hype, which the authors explain as "criticism that ends up portraying technology as all powerful instead of calling out its limitations".[28] The term was coined by Lee Vinsel in his Medium article "You're Doing It Wrong: Notes on Criticism and Technology Hype", in which he discusses the role of social media and film in hype.[29]
Reception
[edit]Positive reception
The book received both praise and critique upon its release in September 2024. The peer-reviewed journal Nature listed AI Snake Oil among its "10 essential reads from the past year", calling it "one of the best on this controversial subject".[30] Elizabeth Quill, reviewing the text in Science News, writes that the authors "squarely achieve their stated goal: to empower people to distinguish AI that works well from AI snake oil".[1] In Practical Ecommerce, Jean Gazis writes that Narayanan and Kapoor separate facts from personal opinion and support their arguments with extensive citation.[31]
Critical reception
Joshua Rothman of The New Yorker writes that "compared with many technologists, Narayanan, Kapoor, and Vallor [Shannon Vallor, University of Edinburgh] are deeply skeptical about today's A.I. technology and what it can achieve. Perhaps they shouldn't be".[32] Following an interview with prominent computer scientist Geoffrey Hinton of the University of Toronto, Rothman argues that AI's potential to replicate complexity is already here and continues to be heavily funded, enhancing the technology's prospective capabilities.[32] He does, however, praise the authors' handling of questions about the existential human experience. Alexya Martinez, in a book review for Journalism & Mass Communication Quarterly, critiques AI Snake Oil for its extensive focus on the West.[33] Martinez writes that Narayanan and Kapoor "do not fully explore how AI impacts other countries" and suggests that more attention to countries outside the United States would strengthen their argument.[33] Another critique comes from a December 2024 issue of Books & the Arts, where Edward Ongweso Jr. states that the authors' "goal of separating AI snake oil from AI that they consider promising, even idealistic, means that they don't engage with some of the greatest problems this technology poses".[2] Ongweso argues that their case would be stronger if they paid more attention to who holds power over the technology, rather than to how and when it works.[2]
References
[edit]- ^ a b Quill, Elizabeth (2024-09-03). "A new book tackles AI hype – and how to spot it". Retrieved 2025-04-02.
- ^ a b c Ongweso Jr., Edward (December 2024). "The Hype Machine: AI is inextricable from scams, propaganda, and deceit". Books & the Arts. 255 (12): 46–51.
- ^ "Snake-oil salesman - Idioms by The Free Dictionary". web.archive.org. 2022-02-13. Retrieved 2025-04-02.
- ^ "Sayash Kapoor". www.cs.princeton.edu. Retrieved 2025-03-28.
- ^ "The 100 Most Influential People in AI 2023". Time. Retrieved 2025-03-28.
- ^ Narayanan, Arvind; Kapoor, Sayash (2024). AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference. Princeton University Press.
- ^ "College students and ChatGPT adoption in the US". openai.com. Retrieved 2025-04-02.
- ^ Narayanan, Arvind; Kapoor, Sayash (2024), pp. 2–3.
- ^ a b Narayanan, Arvind; Kapoor, Sayash (2024), p. 15.
- ^ a b Narayanan, Arvind; Kapoor, Sayash (2024), p. 99.
- ^ Narayanan, Arvind; Kapoor, Sayash (2024), p. 103.
- ^ Narayanan, Arvind; Kapoor, Sayash (2024), pp. 105–107.
- ^ Narayanan, Arvind; Kapoor, Sayash (2024), p. 150.
- ^ Narayanan, Arvind; Kapoor, Sayash (2024), p. 152.
- ^ Narayanan, Arvind; Kapoor, Sayash (2024), pp. 153–154.
- ^ Narayanan, Arvind; Kapoor, Sayash (2024), p. 171.
- ^ Narayanan, Arvind; Kapoor, Sayash (2024), p. 155.
- ^ Narayanan, Arvind; Kapoor, Sayash (2024), p. 159.
- ^ Narayanan, Arvind; Kapoor, Sayash (2024), p. 163.
- ^ Switzky, Lawrence (2020). "ELIZA Effects: Pygmalion and the Early Development of Artificial Intelligence". Shaw. 40 (1): 50–68. doi:10.5325/shaw.40.1.0050.
- ^ Narayanan, Arvind; Kapoor, Sayash (2024), p. 185.
- ^ Narayanan, Arvind; Kapoor, Sayash (2024), p. 194.
- ^ Narayanan, Arvind; Kapoor, Sayash (2024), p. 228.
- ^ Narayanan, Arvind; Kapoor, Sayash (2024), p. 248.
- ^ Narayanan, Arvind; Kapoor, Sayash (2024), p. 244.
- ^ Kissinger, Henry; Schmidt, Eric; Huttenlocher, Daniel P. (2021). The Age of AI: And Our Human Future (First ed.). Little, Brown and Company.
- ^ Whittaker, Meredith; Suchman, Lucy (2021). "The Myth of Artificial Intelligence". The American Prospect. 32 (6): 1–10.
- ^ Narayanan, Arvind; Kapoor, Sayash (2024), p. 253.
- ^ Vinsel, Lee (2021-02-01). "You're Doing It Wrong: Notes on Criticism and Technology Hype". Medium. Archived from the original on 2025-03-28. Retrieved 2025-04-02.
- ^ Robinson, Andrew (2024-12-16). "Thoughtless obedience and the healing power of trees: 2024's best Books in brief". Nature. 636 (8043): 564–566. doi:10.1038/d41586-024-04117-3. ISSN 1476-4687.
- ^ Gazis, Jean (2024-08-15). "'AI Snake Oil' Sorts Promise from Hype". Practical Ecommerce. Retrieved 2025-04-02.
- ^ a b Rothman, Joshua (2024-08-06). "In the Age of A.I., What Makes People Unique?". The New Yorker. ISSN 0028-792X. Retrieved 2025-04-02.
- ^ a b Martinez, Alexya (2025-03-12). "Book Review: AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference, by Arvind Narayanan and Sayash Kapoor". Journalism & Mass Communication Quarterly: 10776990251325876. doi:10.1177/10776990251325876. ISSN 1077-6990.