LLM Anthropomorphization Debate
This cluster focuses on debates criticizing the anthropomorphization of large language models: participants argue that LLMs lack human-like agency, understanding, intent, and reasoning, and instead generate output statistically, without genuine knowledge or the ability to admit uncertainty.
Sample Comments
People say nonsense all the time. LLMs also don't have this issue all the time. They are also often right instead of saying things like this. If this reply was meant to be a demonstration of LLMs not having human-level understanding and reasoning, I'm not convinced.
I think it confuses things immensely to anthropomorphize large language models. LLMs don't lie or tell the truth; they just spit out text that is in alignment with their training. Don't give them agency they don't have.
These anthropomorphizations of LLMs don't help people understand what's going on. LLMs aren't "pretending" to do anything; they don't "know" anything. Your AI is nothing but a black box of math, and the inputs you're providing are creating outputs you don't want.
The difference is that a human will tell you things like "I think", "I'm pretty sure", or "I don't know" in order to manage expectations. The LLM will very matter-of-factly tell you something that's not right at all, and if you correct it, the LLM will go and very confidently rattle off another answer based on what you just said, whether you were telling it the truth or not. If a human acted that way more than a few times, we'd stop asking them questions.
Humans are not LLMs, so why does this comparison constantly get brought up?
LLMs hallucinating, lying and doubling down on things that are wrong seem very human.
Asking LLMs to do tasks like this and expecting any useful result is mind-boggling to me. The LLM is going to guess at what a human on the internet may have said in response, nothing more. We haven't solved interpretability and we don't actually know how these things work; stop believing the marketing that they "reason" or are anything comparable to human intelligence.
Because people overstate the LLM's ability in a way they wouldn't for a cat.
No one is claiming LLMs "believe" things. Well, maybe someone is.
But an LLM is not a person. It's a stochastic parrot. This crazy anthropomorphizing has got to stop.