LLM Hallucinations Debate
The cluster centers on debates about the concept of 'hallucinations' in large language models (LLMs), with many arguing that all LLM outputs are hallucinations—some coincidentally accurate—and critiquing the term as misleading or anthropomorphic.
Sample Comments
Everything an LLM returns is a hallucination; it's just that some of those hallucinations line up with reality
LLM "hallucination" is a pretty bullshit term to begin with
How does one “fix hallucinations” on an LLM? Isn’t hallucinating pretty much all it does?
Especially since a lot of LLM output involves hallucinations
LLM hallucinations aren't errors. LLMs generate text based on weights in a model, and some of it happens to be correct statements about the world. Doesn't mean the rest is generated incorrectly.
What? LLMs CONSTANTLY hallucinate stuff just to fit the narrative; read this: https://philosophersmag.com/large-language-models-and-the-co...
LLM hallucinations can't be solved, because hallucination is the whole LLM mechanism. All LLM results are hallucinations; it just happens that some results are more true or useful than others.
LLMs can neither understand nor hallucinate. All LLMs are just picking tokens based on probability. So it doesn't matter how plausible the outputs look; the reasons leading to the output are absolutely NOT what we expect them to be. But such an ugly fact cannot be admitted, or the party would be stopped.
So if ChatGPT hallucinates 10% of the time, their model hallucinates only 1% of the time?
Fun fact: "confabulation", not "hallucination", is the correct term for what LLMs actually do.
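Several of the comments above rest on the claim that an LLM's decoding step is just sampling the next token from a probability distribution, with no separate check for factual accuracy. The sketch below is a minimal toy illustration of that idea; the vocabulary, logits, and function names are invented for the example and do not correspond to any real model or library.

```python
# Toy sketch of probabilistic next-token sampling (illustrative assumptions only):
# "correct" and "hallucinated" continuations come from the exact same mechanism.
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw scores (logits) into a probability distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits, temperature=1.0):
    """Pick one token according to its probability; nothing here checks truth."""
    probs = softmax(logits, temperature)
    return random.choices(vocab, weights=probs, k=1)[0]

# Hypothetical vocabulary and made-up logits for the prompt "The capital of France is"
vocab = ["Paris", "Lyon", "London", "Berlin"]
logits = [4.2, 1.1, 0.9, 0.3]

print(sample_next_token(vocab, logits))  # usually "Paris", occasionally not
```

Nothing in the sampling step distinguishes a true completion from a false one; only the learned weights make accurate tokens more probable, which is the point the commenters are making about "hallucination" being the default mode of generation rather than a separable failure case.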