AI as Artificial Ignorance – The following paper echoes my own findings… AI is bullshit!
Bent Flyvbjerg’s working paper, AI as Artificial Ignorance, argues that artificial intelligence prioritizes persuasion over truth, making it fundamentally similar to Harry Frankfurt’s concept of bullshit. AI often generates misleading or incorrect information while sounding highly convincing, producing what Flyvbjerg calls “artificial ignorance.” Through real-world tests, including AI’s failure to correctly identify project cost overruns and to match paint colors, he demonstrates its inability to reliably distinguish truth from falsehood. AI lacks any mechanism for verifying accuracy, and as a result it produces rhetorically polished but unreliable output. Flyvbjerg warns that the biggest risk is human trust in AI’s faulty conclusions, a concern shared by critics such as Nassim Taleb and industry figures such as Markus Schäfer. He concludes that AI must develop a framework for determining truth rather than simply sounding correct; otherwise, its growing persuasiveness without intelligence could be dangerous, even shaping public discourse in misleading ways.
Here’s a structured breakdown of the paper’s argument:
1. Introduction
- AI is often portrayed as transformative, capable of solving major global challenges.
- Mustafa Suleyman, CEO of Microsoft AI, promotes AI as revolutionary, but his AI-generated prologue reveals the self-reinforcing hype.
- The contrast between AI expectations and reality raises concerns about its actual capabilities.
2. The Hype vs. Reality of AI
- AI has achieved notable success in specific areas like speech transcription and language translation.
- Predictions suggest AI will soon reach human-level performance across many tasks.
- However, many AI claims may be exaggerated, potentially making AI another overhyped technology.
3. A Simple Test of AI’s Reliability
- Flyvbjerg tests ChatGPT by asking for a list of megaprojects with cost overruns.
- ChatGPT mistakenly lists the same project twice under different names, demonstrating a lack of basic fact-checking.
- Perplexity, another AI tool, returns a cost estimate that is even further off the mark, underscoring how inconsistent and unreliable the outputs are (a sketch of this kind of repeat-query test follows below).
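For readers who want to probe this themselves, here is a minimal sketch of a repeat-query test using the OpenAI Python SDK. The model name, the prompt wording, and the comparison logic are my own assumptions for illustration; this is not Flyvbjerg’s exact protocol.

```python
# Minimal repeat-query probe: ask the same factual question twice and
# compare the answers line by line. A system retrieving verified facts
# would answer consistently; a probabilistic text generator often won't.
# Assumes OPENAI_API_KEY is set; "gpt-4o" is a stand-in model choice.
from openai import OpenAI

client = OpenAI()

PROMPT = ("List five megaprojects with documented cost overruns, "
          "one per line, each with the overrun as a percentage.")

def ask() -> list[str]:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": PROMPT}],
    )
    text = resp.choices[0].message.content
    return [line.strip() for line in text.splitlines() if line.strip()]

first, second = ask(), ask()
for a, b in zip(first, second):
    flag = "" if a == b else "  <-- differs"
    print(f"{a[:50]:<52}| {b[:50]}{flag}")
```

Note that even this check cannot catch Flyvbjerg’s duplicate-project error, where one project appears twice under different names; spotting that still takes a human who knows the material.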
4. More Tests and the Verdict
- Flyvbjerg recounts a real-world AI failure: an AI-powered app misidentified paint colors, wasting both time and money.
- Nassim Nicholas Taleb’s experiment with ChatGPT aligns with Flyvbjerg’s findings: AI is only useful if the user already knows the subject well.
5. The Difference Between Generative AI and AGI
- Suleyman speaks about Artificial General Intelligence (AGI), which does not yet exist.
- Large Language Models (LLMs) like ChatGPT operate differently, generating text by sampling probable continuations rather than consulting actual knowledge (see the toy sketch after this list).
- AI lacks logic and factual verification mechanisms, meaning it can generate convincing yet incorrect information.
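To make the “probability, not knowledge” point concrete, here is a toy sketch of next-token sampling. The candidate tokens and their scores are invented for illustration; real models do the same thing over vocabularies of tens of thousands of tokens.

```python
# Toy next-token generation: convert model scores (logits) into
# probabilities with softmax, then sample. Nothing in this process checks
# whether the continuation is true, only whether it is statistically
# likely given the training data. The logits below are invented.
import math
import random

# Hypothetical scores for candidate tokens after the prompt
# "The project's final cost came in ..."
logits = {"under": 0.2, "over": 2.1, "exactly": -1.0, "around": 1.3}

def softmax(scores: dict[str, float]) -> dict[str, float]:
    m = max(scores.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(s - m) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)
token = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs)   # roughly {'under': 0.09, 'over': 0.61, ..., 'around': 0.27}
print(token)   # a fluent-sounding pick, not a verified fact
```

A fluent falsehood can easily outscore an awkward truth here, because the training objective rewards likely-sounding text; that is the mechanism behind the Frankfurt-style bullshit Flyvbjerg describes.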
6. The Risk of Trusting AI
- AI’s biggest threat is not superintelligence but its ability to sound persuasive while being factually wrong.
- Users unaware of AI’s limitations may trust its outputs, leading to potentially disastrous consequences.
- The automotive industry, for instance, warns that inaccurate AI-generated responses could result in product liability issues.
7. AI as a Bullshit Generator
- AI’s mixture of true, false, and ambiguous statements mirrors Harry Frankfurt’s definition of “bullshit.”
- Cambridge professor Alan Blackwell explicitly calls ChatGPT “a bullshit generator.”
- AI is designed to be persuasive, not accurate, making it dangerous when relied upon for critical decision-making.
8. Conclusion
- Current AI is more artificial ignorance than artificial intelligence.
- AI needs clear criteria for truth to be useful in fields like science, technology, and policy.
- A key concern is whether AI advancements will enhance human knowledge or make it less accessible and understandable.
Reference: Bent Flyvbjerg, AI as Artificial Ignorance (working paper).
Peace on your Days
Lance