LLM Anthropomorphization Debate

This cluster focuses on debates criticizing the anthropomorphization of large language models. Critics argue that LLMs lack human-like agency, understanding, intent, or reasoning, and instead produce output statistically, without true knowledge or the ability to admit uncertainty.
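
A concrete anchor for the "produce output statistically" claim: the sketch below shows plain next-token sampling, assuming a generic softmax-over-logits decoder. The vocabulary, logits, and temperature are invented for illustration and correspond to no particular model; whether this mechanical loop rules out words like "know" or "reason" is exactly what the comments below dispute.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_next_token(logits: np.ndarray, temperature: float = 1.0) -> int:
    """Turn raw scores over a vocabulary into probabilities and draw one token."""
    scaled = logits / temperature
    scaled -= scaled.max()          # subtract max for numerical stability
    probs = np.exp(scaled)
    probs /= probs.sum()
    # The model's entire "decision" is this random draw from a distribution;
    # no belief or intent is consulted anywhere in the loop.
    return int(rng.choice(len(probs), p=probs))

# Hypothetical 5-token vocabulary and scores from one forward pass.
vocab = ["yes", "no", "maybe", "sure", "unknown"]
logits = np.array([2.0, 1.5, 0.3, 0.2, -1.0])
print(vocab[sample_next_token(logits)])   # prints a sampled token, e.g. "yes"
```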

Trend: ➡️ Stable (1.8x) · Category: AI & Machine Learning

Comments: 5,232
Years Active: 11
Top Authors: 5
Topic ID: #2736

Activity Over Time

2015: 1
2016: 1
2017: 1
2019: 1
2020: 1
2021: 3
2022: 84
2023: 1,339
2024: 1,296
2025: 2,351
2026: 156

Keywords

AI, LLM, LLMs, BS, humans, misinformation, bots, intelligence, nonsense, ycombinator.com

Sample Comments

Zambyte, May 21, 2025

People say nonsense all the time. LLMs also don't have this issue all the time; they are often right instead of saying things like this. If this reply was meant to be a demonstration of LLMs not having human-level understanding and reasoning, I'm not convinced.

gwright, Jul 18, 2023

I think it confuses things immensely to anthropomorphize large language models. LLMs don't lie or tell the truth; they just spit out text that is in alignment with the trained model. Don't give them agency they don't have.

deadbabe, Dec 24, 2024

These anthropomorphizations of LLMs don't help people understand what's going on. LLMs aren't "pretending" to do anything; they don't "know" anything. Your AI is nothing but a black box of math, and the inputs you're providing are creating outputs you don't want.

xienze, Feb 24, 2025

The difference is that a human will tell you things like "I think," "I'm pretty sure," or "I don't know" in order to manage expectations. The LLM will very matter-of-factly tell you something that's not right at all, and if you correct it, the LLM will very confidently rattle off another answer based on what you just said, whether you were telling it the truth or not. If a human acted that way more than a few times, we'd stop asking them questions.
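
An editorial aside on the point above: the uncertainty a human voices with "I think" or "I don't know" does exist inside the model as a probability distribution; it just isn't verbalized by default. The sketch below is purely illustrative and not any vendor's feature: the hedge() helper, threshold, and example distributions are invented, and real calibration is much harder than thresholding entropy.

```python
import numpy as np

def hedge(probs: np.ndarray, answer: str, threshold: float = 1.0) -> str:
    """Prefix the answer with a verbal hedge when the distribution is diffuse."""
    entropy = -np.sum(probs * np.log(probs + 1e-12))  # Shannon entropy in nats
    return f"I'm not sure, but {answer}" if entropy > threshold else answer

# Invented next-token distributions: peaked (confident) vs. diffuse (uncertain).
confident = np.array([0.95, 0.03, 0.02])
diffuse = np.array([0.40, 0.35, 0.25])
print(hedge(confident, "Paris is the capital of France."))  # stated flatly
print(hedge(diffuse, "the answer is 42."))                  # prefixed with a hedge
```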

goatlover, Aug 8, 2025

Humans are not LLMs; why does this constantly get brought up?

sshine, Sep 1, 2024

LLMs hallucinating, lying and doubling down on things that are wrong seem very human.

_heimdall, May 20, 2025

Asking LLMs to do tasks like this and expecting any useful result is mind-boggling to me. The LLM is going to guess at what a human on the internet may have said in response, nothing more. We haven't solved interpretability and we don't actually know how these things work; stop believing the marketing that they "reason" or are anything comparable to human intelligence.

nxor, Nov 4, 2025

Because people overstate an LLM's abilities in a way they wouldn't for a cat.

recursive, Jun 17, 2024

No one is claiming LLMs "believe" things. Well, maybe someone is.

binary132, Jan 11, 2026

But an LLM is not a person. It's a stochastic parrot. This crazy anthropomorphizing has got to stop.