LLM Hallucinations Debate

The cluster centers on debates about the concept of 'hallucinations' in large language models (LLMs), with many arguing that all LLM outputs are hallucinations—some coincidentally accurate—and critiquing the term as misleading or anthropomorphic.

➡️ Stable (1.4x) · AI & Machine Learning
Comments: 4,999
Years Active: 11
Top Authors: 5
Topic ID: #8275

Activity Over Time

2015: 1
2017: 5
2018: 9
2019: 3
2020: 3
2021: 3
2022: 56
2023: 1,384
2024: 1,483
2025: 1,926
2026: 136

Keywords

AI, LLM, IME, HN, towardsdatascience.com, GPT4, philosophersmag.com, NOT, CONSTANTLY, API, hallucination, hallucinations, llm, llms, ai, false, outputs, output, probability, chatgpt

Sample Comments

PLenz Dec 5, 2024

Everything an LLM returns is a hallucination; it's just that some of those hallucinations line up with reality.

wlesieutre Dec 9, 2025

LLM "hallucination" is a pretty bullshit term to begin with

herval Sep 6, 2025

How does one “fix hallucinations” on an LLM? Isn’t hallucinating pretty much all it does?

coldtea Aug 9, 2023

Especially since a lot of LLM output involves hallucinations

Scarblac Jul 6, 2025

LLM hallucinations aren't errors. LLMs generate text based on weights in a model, and some of it happens to be correct statements about the world. That doesn't mean the rest is generated incorrectly.
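
As an illustration of the mechanism this and several later comments describe, here is a minimal sketch of next-token sampling in Python, with a toy vocabulary and invented logits (nothing here comes from a real model; the names and numbers are purely illustrative):

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy vocabulary and invented logits for the token that might follow
    # "The capital of Australia is" (illustrative numbers only).
    vocab = ["Canberra", "Sydney", "Melbourne", "Paris"]
    logits = np.array([2.0, 1.6, 0.4, -2.0])

    def sample_next_token(logits, temperature=1.0):
        # Softmax over temperature-scaled logits, then draw one token index.
        z = logits / temperature
        probs = np.exp(z - z.max())
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs), probs

    idx, probs = sample_next_token(logits)
    print(dict(zip(vocab, probs.round(3))))  # Canberra ~0.53, Sydney ~0.35, ...
    print("sampled:", vocab[idx])

Nothing in that step distinguishes the factual continuation from the plausible-but-wrong one; both are ordinary draws from the same distribution, which is the point this thread keeps returning to.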

dasefx Jul 10, 2025

What? LLMs CONSTANTLY hallucinate stuff just to fit the narrative; read this: https://philosophersmag.com/large-language-models-and-the-co...

evilfred Jul 25, 2024

LLM hallucinations can't be solved, as hallucination is the whole LLM mechanism. All LLM results are hallucinations; it just happens that some results are more true/useful than others.

mrjin Sep 15, 2024

LLMs can neither understand nor hallucinate. All LLMs are just picking tokens based on probability. So it doesn't matter how plausible the outputs look; the reasons leading to the output are absolutely NOT what we expect them to be. But such an ugly fact cannot be admitted, or the party would be stopped.

jgalt212 Jun 14, 2024

So if ChatGPT hallucinates 10% of the time, their model hallucinates only 1% of the time?
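
One hedged reading of the arithmetic behind this quip (the surrounding context isn't included here): if the claim being mocked is that two independent attempts, or a model plus an equally fallible checker, only fail when both hallucinate at once, the numbers multiply out as below, but only under the strong independence assumption the comment seems to be questioning.

    # Assumed, illustrative numbers; not from any cited benchmark.
    p_hallucinate = 0.10               # single-response hallucination rate
    p_both_wrong = p_hallucinate ** 2  # both of two independent attempts wrong
    print(round(p_both_wrong, 3))      # 0.01 -> the "only 1%" figure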

trash_cat Mar 29, 2025

Fun fact: "confabulation", not "hallucination", is the correct term for what LLMs actually do.