NoDelulu

Frequently Asked Questions

Common questions about NoDelulu, AI hallucinations, and how adversarial verification works.

What is NoDelulu?

NoDelulu is an AI hallucination detection platform available at nodelulu.ai. It uses a sequential adversarial pipeline: a sweep model hunts for problems, a review model analyses the text independently and then challenges every one of the sweep model's findings, and groundable findings are verified against live web sources. It detects 8 types of AI hallucination: Factual DeLulu, Number DeLulu, Made Up DeLulu, Self-Contradiction, Logical Leap, Opinion As Fact, Time/Date DeLulu, and Missing Context.

How does NoDelulu detect AI hallucinations?

NoDelulu uses a sequential adversarial verification approach. Your text is analysed twice: first by a sweep model that hunts for problems, then by a review model that analyses the text independently before seeing the sweep model's findings and challenging each one. The two analyses are combined through adversarial scoring: findings confirmed by both models carry stronger weight, findings challenged by the second model carry lower weight, and findings unique to either model are evaluated on their own evidence. Factual and temporal findings are then grounded against live web sources via category-specific search queries. You receive a detailed hallucination report with scored findings, evidence links, and a correction prompt.
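The weighting logic above can be sketched in a few lines. This is a toy illustration only: the function name, the weight values, and the finding format are invented for this example and are not NoDelulu's published internals.

```python
def combine_findings(sweep, review, challenged):
    """Toy adversarial-scoring sketch: weight each finding by whether the
    second model confirmed it, challenged it, or never raised it.
    Weights are illustrative, not NoDelulu's real values."""
    CONFIRMED, CHALLENGED, UNIQUE = 1.0, 0.4, 0.7
    scored = {}
    for finding, base_score in {**sweep, **review}.items():
        if finding in sweep and finding in review:
            weight = CONFIRMED    # both models flagged it independently
        elif finding in challenged:
            weight = CHALLENGED   # the review model disputed it
        else:
            weight = UNIQUE       # one model only: judged on its own evidence
        scored[finding] = round(base_score * weight, 2)
    return scored
```

A confirmed finding keeps its full score, a challenged one is heavily discounted, and a finding seen by only one model lands in between.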

What types of AI hallucination can NoDelulu detect?

NoDelulu classifies findings across 8 distinct hallucination categories: (1) Factual DeLulu — claims that contradict established facts, (2) Number DeLulu — wrong dates, statistics, or quantities, (3) Made Up DeLulu — citations or URLs that don't exist, (4) Self-Contradiction — internal inconsistencies in the text, (5) Logical Leap — conclusions that don't follow from premises, (6) Opinion As Fact — subjective claims presented as objective truth, (7) Time/Date DeLulu — anachronisms or outdated information, (8) Missing Context — critical missing context that changes meaning.

Is NoDelulu free to use?

Your first analysis at nodelulu.ai is free: no sign-up, no email, no account. You receive a full-quality report with adversarial model analysis and live web grounding. After that, pricing is pay-as-you-go. Browse sample reports at nodelulu.ai/samples to see what you get.

Do I need to create an account to use NoDelulu?

No. NoDelulu requires zero sign-up. You can paste or upload your text and receive a complete hallucination report immediately — no email address, no password, no account creation needed.

Which AI models does NoDelulu use for verification?

NoDelulu uses premium frontier AI models in a sequential adversarial setup. Each model first analyses your text without seeing the other's findings, and the review model is deliberately kept web-free and context-free during this initial analysis, ensuring a genuinely independent challenge rather than echo-chamber reinforcement. After the adversarial review, factual and temporal findings are grounded against live web sources.

How is NoDelulu different from asking ChatGPT to check its own work?

When you ask a single AI to review its own output, you are asking the same system — with the same biases, training gaps, and blind spots — to evaluate itself. Research shows this produces a form of "confirmation bias" where the model tends to defend its original output. NoDelulu solves this with independent adversarial AI models, each analysing the text before seeing the other's findings. The second model is structured as an explicit challenger — it is prompted to dispute, not confirm. Combined with live web verification, this adversarial approach catches errors that self-review consistently misses.

How does NoDelulu verify findings against real sources?

After both adversarial models flag potential hallucinations, NoDelulu runs category-specific live web verification. Only groundable finding types are sent to web search: Factual DeLulu, Number DeLulu, Made Up DeLulu, and Time/Date DeLulu. Analytical finding types — Logical Leap, Opinion As Fact, Self-Contradiction, and Missing Context — are verified through the adversarial model debate rather than web search, because those categories are not resolvable by looking something up. This is not RAG (Retrieval-Augmented Generation) — NoDelulu does not inject retrieved content into model prompts during analysis. Live search is a strictly post-analysis grounding step: models form their own views first, then factual findings are confirmed or refuted against live indexed web pages. This gives you clickable evidence for every grounded finding in your report.
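The routing rule described above is simple enough to sketch directly. The category names come from this FAQ; the function name and return labels are invented for illustration.

```python
# Categories the FAQ describes as resolvable by looking something up.
GROUNDABLE = {"Factual DeLulu", "Number DeLulu", "Made Up DeLulu", "Time/Date DeLulu"}

def route_finding(category):
    """Illustrative routing: groundable categories go to live web search,
    analytical categories stay in the adversarial model debate."""
    return "web_search" if category in GROUNDABLE else "model_debate"
```

For example, a wrong statistic (Number DeLulu) is checked against the web, while a Logical Leap is settled between the models, since no web page can confirm whether a conclusion follows from its premises.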

Can NoDelulu check any type of text?

NoDelulu works best on informational, factual, and technical text — articles, essays, reports, documentation, research summaries, and similar content. It accepts pasted text and file uploads (.txt, .md, .docx). It is less effective on purely creative writing, poetry, or fiction where "hallucination" is subjective. The system is strongest on verifiable claims and weakest on highly specialised niche domains where authoritative web sources are scarce.

Does NoDelulu store my text?

No. NoDelulu does not store your text on any server after analysis. Your text is processed in-session for the duration of the analysis and is not retained, logged, or used for training. See our privacy policy at nodelulu.ai/privacy for full details.

What is an AI hallucination?

An AI hallucination occurs when a large language model (LLM) generates text that is factually wrong, internally inconsistent, or entirely fabricated — but presents it with the same confidence as verified truth. This happens because LLMs generate text by predicting the most likely next word based on patterns in training data, not by understanding or verifying facts. The model has no concept of "truth" — it only knows what sounds statistically plausible.

Why do AI models hallucinate?

AI hallucination is a fundamental consequence of how large language models work. LLMs are probabilistic text generators — they predict the next most likely token based on patterns in their training data. They have no internal fact database, no ability to verify claims, and no concept of truth. When the training data is incomplete, ambiguous, or the question requires reasoning beyond pattern matching, the model fills the gap with plausible-sounding but potentially false text. This is not a bug that can be patched — it is inherent to the architecture. Read more at nodelulu.ai/science/why-ai-hallucinates.
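The "plausible-sounding, not verified" point can be made concrete with a toy next-token table. Everything here is invented for illustration: a real LLM learns probabilities over tens of thousands of tokens from training data, but the principle is the same, since the model picks what is statistically likely, not what is true.

```python
# Toy next-token probabilities: the "model" knows only co-occurrence
# statistics, with no notion of whether a continuation is factually true.
NEXT_TOKEN_PROBS = {
    "The capital of France is": [("Paris", 0.9), ("Lyon", 0.1)],
    "The author of that obscure 1950s paper was": [("Smith", 0.4), ("Jones", 0.35), ("Brown", 0.25)],
}

def most_likely(context):
    """Greedy decoding sketch: return the highest-probability continuation.
    Plausibility, not truth, drives the choice."""
    return max(NEXT_TOKEN_PROBS[context], key=lambda pair: pair[1])[0]
```

On well-covered facts the most likely token is usually also correct, but on sparse or ambiguous contexts the model still confidently emits its top candidate, which is exactly a hallucination.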

How accurate is NoDelulu?

The adversarial verification process significantly reduces false positives compared to single-model approaches. A clean report means no issues were found — not that the text is guaranteed error-free. Verification improves reliability; it does not eliminate uncertainty. Full methodology and limitations are published at nodelulu.ai/methodology.

Can I use NoDelulu for academic work?

Yes. NoDelulu is particularly useful for verifying AI-assisted academic writing. It can catch fabricated citations (a common AI hallucination in academic contexts), incorrect dates and statistics, logical inconsistencies, and factual errors — all issues that could undermine academic integrity. However, NoDelulu is a verification aid, not a replacement for proper academic review and source checking.

What output does NoDelulu provide?

NoDelulu provides: (1) A NoDelulu Index (0–100) — a calibrated document reliability score with a plain-English band label (Looking Good, Nearly There, Needs Work, or Foundations First), (2) Individually ranked findings each with a nodeluluScore, severity label, and category, (3) Clickable source links for every grounded finding so you can verify the verification, (4) A downloadable report, and (5) A custom prompt you can paste back into your AI to correct the specific issues found.
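The band labels in item (1) amount to a mapping from the 0-100 index to a plain-English label. The thresholds below are invented for illustration; NoDelulu's real cutoffs are not published in this FAQ.

```python
def band_label(index):
    """Map a 0-100 reliability index to the FAQ's band labels.
    Thresholds are hypothetical, chosen only to show the shape of the mapping."""
    if index >= 85:
        return "Looking Good"
    if index >= 65:
        return "Nearly There"
    if index >= 40:
        return "Needs Work"
    return "Foundations First"
```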