LLMs Thinking Debate

This cluster centers on debates about whether large language models (LLMs) truly think, reason, or understand like humans, or whether they are merely statistical next-token predictors without genuine cognition.

Category: AI & Machine Learning
Trend: ➡️ Stable 1.2x
Comments: 7,212
Years Active: 13
Top Authors: 5
Topic ID: #1651

Activity Over Time

2012: 1
2015: 1
2016: 2
2017: 5
2018: 4
2019: 10
2020: 15
2021: 14
2022: 164
2023: 2,288
2024: 1,679
2025: 2,850
2026: 187

Keywords

AI AGI LLM IMO RL GPT i.e ALL anthropic.com AK llms llm humans model human intelligence token reasoning predict training

Sample Comments

JambalayaJimbo Jul 29, 2025

LLMs do not have brains and there is no evidence as far as I know that they "think" like human beings do.

saulpw Oct 14, 2025

An LLM has no mind! What is your strong theory of mind for an LLM? That it knows the whole internet and can regurgitate it like a mindless zombie?

corethree Nov 1, 2023

No it doesn't. The model we use to create these things is general enough that it can be applied to ALL forms of intelligence. Basically it's a best fit curve in N-dimensional space. The entire human brain can be modeled this way. In practice what we end up doing is using a bunch of math tricks to try to poke and prod at this curve to get some sort of "fit" on the data. There are an infinite number of possible curves that can fit within this data. One of these curves is th
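The "best fit curve" framing in the comment above can be illustrated with a toy example. This is a sketch with invented data points, not how LLMs are actually trained: it only shows that many distinct curves can fit the same finite data exactly while disagreeing everywhere else.

```python
import numpy as np

# Four invented data points (purely illustrative).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 2.0, 5.0])

# One curve that fits the four points exactly: the interpolating cubic.
cubic = np.polyfit(x, y, 3)

# A different curve that also fits them exactly: add a term that
# vanishes at every data point but not in between.
def other_curve(t):
    return np.polyval(cubic, t) + (t - 0) * (t - 1) * (t - 2) * (t - 3)

assert np.allclose(np.polyval(cubic, x), y)   # exact fit
assert np.allclose(other_curve(x), y)         # also an exact fit
# Off the data points, the two curves disagree:
assert not np.isclose(np.polyval(cubic, 0.5), other_curve(0.5))
```

Since any function vanishing at the data points can be added, there are infinitely many such curves, which is the underdetermination the comment points at.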

maxdoop Oct 9, 2023

We know exactly what an LLM does? How does that differ from a brain?

spondylosaurus May 20, 2023

Reasoning how? LLMs are statistical models, not brains.

LoganDark Mar 27, 2025

LLMs don't think, and LLMs don't have strategies. Maybe it could be argued that LLMs have "derived meaning", but all LLMs do is predict the next token. Even RL just tweaks the next-token prediction process, but the math that drives an LLM makes it impossible for there to be anything that could reasonably be called thought.
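The "predict the next token" mechanism the comment above describes can be sketched with a toy greedy decoder. The vocabulary and counts below are invented for illustration; a real LLM uses a neural network conditioned on a long context rather than a bigram table, but the generation loop has the same shape: pick a continuation, append it, repeat.

```python
# Toy bigram "model": for each token, counts of what followed it.
# These counts are invented purely for illustration.
bigram_counts = {
    "the": {"cat": 3, "dog": 1},
    "cat": {"sat": 2, "ran": 1},
    "sat": {"down": 4},
}

def next_token(prev):
    candidates = bigram_counts.get(prev)
    if not candidates:
        return None  # no known continuation: stop generating
    return max(candidates, key=candidates.get)  # greedy: most frequent

# Generation loop: predict one token at a time, append, repeat.
tokens = ["the"]
while (tok := next_token(tokens[-1])) is not None:
    tokens.append(tok)

print(" ".join(tokens))  # → the cat sat down
```

RL fine-tuning, as the comment notes, adjusts which continuation the model prefers; the loop itself stays a next-token predictor.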

grajaganDev Jan 27, 2025

Humans and LLMs are different things. LLMs cannot reason - many people seem to believe that they can.

chki Mar 15, 2023

What is it about humans that makes you think we are more than a large LLM?

marco_craveiro Oct 30, 2024

I think the problem is our traditional notions of "understanding" and "intelligence" fail us. I don't think we understand what we mean by "understanding". Whatever the LLM is doing inside, it's far removed from what a human would do. But on the face of it, from an external perspective, it has many of the same useful properties as if done by a human. And the LLM's outputs seem to be converging closer and closer to what a human would do, even though the

Alchemista Feb 19, 2024

LLMs aren't designed to emulate human cognition, they are a statistical model designed to predict the next word in a sentence. It happens that they seem to exhibit some similarities to human cognition as a side effect, but that does not mean they are on some developmental path to a "full human" like a child. Again it is silly to try and compare the two.