Can We Really Trust AI?

340 Failings Of AI – According To AI

Artificial intelligence is having its great unearned moment. It speaks smoothly, answers instantly, and carries itself with the confidence of a seasoned expert. And because it sounds certain, people assume it is certain. They trust it. They delegate to it. They let it make decisions it was never designed to make.

That trust is the dangerous part.

The most comprehensive field guide to LLM failure modes – written by breaking two frontier systems and making them tell on each other.

Get 340 Failings Of AI – According To AI

Instant download. No spam.

What's Inside

340 failure modes, organized and scored across six sections:

1. Core Factual & Reasoning Failures (33 items)

Hallucination, confabulation, stale knowledge, probabilistic output, arithmetic errors, and the fundamental ways LLMs produce confident nonsense.

2. Instruction, Formatting & Constraint Failures (77 items)

Instruction drift, negative constraint violations, format corruption, premature truncation, and why AI forgets or ignores what you just told it.

3. Interaction & Dialogue-Level Failures (76 items)

Overconfidence, appeasement bias, memory illusions, tone oscillation, and how AI prioritizes sounding helpful over being accurate.

4. Architecture, Training & Model Design Failures (65 items)

Context window limitations, attention mechanism flaws, tokenization artifacts, catastrophic forgetting, and the structural problems baked into transformer models.

5. Safety, Security & Exploitation Failures (55 items)

Prompt injection, jailbreaking, unsafe code generation, backdoor triggers, and the attack vectors that bypass safety guardrails.

6. Systemic, Governance, Economic & Societal Failures (35 items)

Overreliance by users, regulatory lag, bias amplification, concentration of power, and the institutional problems AI creates at scale.

Sample Failures From The Book

Here are just 4 of the 340 documented failure modes – demonstrated live while building this very download page:

1. Instruction Drift

L10 / S9 / M8

Over multiple turns, AI gradually forgets exact specifications like filename formats, forcing users to repeatedly correct the same mistakes.

2. Pattern Overfitting

L9 / S8 / M7

AI defaults to common patterns (hyphens) instead of following explicit instructions (underscores), prioritizing training data over user directives.

3. Inconsistency Across Turns

L8 / S9 / M8

AI produces the correct output, then contradicts itself in subsequent responses without recognizing the inconsistency or maintaining coherent state.

4. Failure to Verify Output

L9 / S8 / M8

AI generates code without validating syntax, file paths, or variable names against earlier context, shipping errors confidently.

Meta-irony: These failures occurred while building the gate for a book documenting AI failures. See the Postscript for screenshots.
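Failure mode 4 above is also the cheapest to guard against from the outside. A minimal sketch, assuming the generated code is Python: syntax-check it with the standard library before it is allowed to ship. The `generated` snippet below is invented for illustration – it stands in for any AI output that was "shipped confidently, never validated."

```python
import ast

def syntax_ok(source: str) -> bool:
    """Return True if the code at least parses.

    A parse check is the weakest possible verification gate, but it
    already catches the class of errors described in failure mode 4.
    """
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

# An illustrative AI-generated snippet with a missing colon.
generated = "def greet(name)\n    return 'hi ' + name\n"
print(syntax_ok(generated))  # → False: flagged before it ships
```

A parse check says nothing about logic, file paths, or variable names – those still need the human verification loops discussed in the FAQ below – but it costs one line and removes the most embarrassing category of confident nonsense.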

Frequently Asked Questions

1. Can we trust AI?

Not blindly. AI is fluent and confident, but it hallucinates facts, forgets instructions, and prioritizes patterns over truth. Trust it as a productivity tool requiring human verification – not as an autonomous decision-maker.

2. Is AI ready for production use?

Yes, with guardrails. AI works well for supervised tasks like drafting, summarizing, and analysis. It fails when given autonomy, ambiguous instructions, or responsibility for critical decisions without human oversight and verification loops.

3. What should executives know before deploying AI?

AI is not deterministic. It drifts, forgets context, reproduces bias, and generates plausible-sounding errors. Deploy with human checkpoints, clear accountability structures, and acceptance that failures are operational risks – not edge cases.

4. Can AI check its own work?

Yes, but don't trust it. AI can review its own output, but it suffers the same failures twice. Better approach: use multiple AI systems to challenge each other's reasoning, then apply human judgment to resolve conflicts and verify consensus.

5. Do all LLMs make the same sorts of mistakes?

Yes, structurally. All transformer-based models hallucinate, drift across turns, suffer context limits, and prioritize fluency over accuracy. Vendors differ in degree and mitigation – not in fundamental failure modes. The architecture guarantees specific weaknesses.

6. How can organizations use AI safely?

Treat AI as an intern: useful for drafting and research, dangerous for decisions. Implement human review gates, limit autonomy, maintain clear accountability, train staff on failure modes, and never let AI touch critical workflows unsupervised.
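The cross-examination approach from FAQ 4 can be sketched in a few lines. This is a pattern sketch, not a working integration: `ask_model_a` and `ask_model_b` are hypothetical stand-ins for calls to two independent LLM APIs, and the routing logic – agreement passes, disagreement escalates to a human – is the point.

```python
# Cross-check pattern: two independent "models" answer the same question,
# and a human only reviews when they disagree. The ask_model_* functions
# are placeholders; in practice they would call two different LLM APIs.

def ask_model_a(question: str) -> str:
    return "Paris" if "France" in question else "unsure"

def ask_model_b(question: str) -> str:
    return "Paris" if "France" in question else "unknown"

def cross_check(question: str) -> tuple[str, bool]:
    """Return (answer, needs_human_review)."""
    a, b = ask_model_a(question), ask_model_b(question)
    if a == b:
        return a, False            # consensus: lower risk, still spot-check
    return f"{a} vs {b}", True     # disagreement: escalate to a human

print(cross_check("What is the capital of France?"))  # → ('Paris', False)
```

Note what this does and does not buy you: agreement between two models lowers risk but does not prove correctness – both can share the same training-data blind spot – which is why the human judgment step in FAQ 4 remains the final gate.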