LLMs Thinking Debate
This cluster centers on debates about whether large language models (LLMs) truly think, reason, or understand like humans, or whether they are merely statistical next-token predictors without genuine cognition.
[Dashboard panels: Activity Over Time, Top Contributors, Keywords (chart data not captured)]
Sample Comments
LLMs do not have brains and there is no evidence as far as I know that they "think" like human beings do.
An LLM has no mind! What is your strong theory of mind for an LLM? That it knows the whole internet and can regurgitate it like a mindless zombie?
No it doesn't. The model we use to create these things is general enough that it can be applied to ALL forms of intelligence. Basically it's a best-fit curve in N-dimensional space. The entire human brain can be modeled this way. In practice what we end up doing is using a bunch of math tricks to try to poke and prod at this curve to get some sort of "fit" on the data. There are an infinite number of possible curves that can fit within this data. One of these curves is th…
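The "infinitely many curves fit the same data" point the commenter is making can be shown concretely. A minimal sketch, assuming NumPy (the training points and polynomial degrees are illustrative, not from the thread): two different curves agree exactly on the data yet disagree everywhere else.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 5)           # 5 training inputs
y = np.sin(2 * np.pi * x)              # training targets

# Curve 1: the unique degree-4 polynomial through the 5 points.
p = np.polynomial.Polynomial.fit(x, y, deg=4)

# Curve 2: add any multiple of (t - x0)(t - x1)...(t - x4), which is zero at
# every training point, so this curve ALSO fits the data perfectly.
def q(t):
    return p(t) + 10.0 * np.prod([t - xi for xi in x])

for xi, yi in zip(x, y):               # both curves agree on the training data
    assert abs(p(xi) - yi) < 1e-9 and abs(q(xi) - yi) < 1e-9

print(p(0.37), q(0.37))                # ...but diverge off the data
```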
We know exactly what an LLM does? How does that differ from a brain?
Reasoning how? LLMs are statistical models, not brains.
LLMs don't think, and LLMs don't have strategies. Maybe it could be argued that LLMs have "derived meaning", but all LLMs do is predict the next token. Even RL just tweaks the next-token prediction process, but the math that drives an LLM makes it impossible for there to be anything that could reasonably be called thought.
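A minimal sketch of what "predict the next token" means mechanically, assuming NumPy (the vocabulary and logits below are made up for illustration, not from any real model): the network maps a context to one score per vocabulary item, and the next token is sampled from the softmax over those scores.

```python
import numpy as np

vocab = ["the", "cat", "sat", "mat", "."]
logits = np.array([1.2, 0.3, 2.5, 0.1, -1.0])   # per-token scores from the network

probs = np.exp(logits - logits.max())            # numerically stable softmax
probs /= probs.sum()

rng = np.random.default_rng(0)
next_token = vocab[rng.choice(len(vocab), p=probs)]
print(next_token)

# RL fine-tuning (e.g., RLHF) adjusts the weights that produce `logits`;
# the sampling step above is unchanged, which is the commenter's point.
```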
Humans and LLMs are different things. LLMs cannot reason - many people seem to believe that they can.
What is it about humans that makes you think we are more than a large LLM?
I think the problem is our traditional notions of "understanding" and "intelligence" fail us. I don't think we understand what we mean by "understanding". Whatever the LLM is doing inside, it's far removed from what a human would do. But on the face of it, from an external perspective, it has many of the same useful properties as if done by a human. And the LLM's outputs seem to be converging closer and closer to what a human would do, even though the…
LLMs aren't designed to emulate human cognition; they are statistical models designed to predict the next word in a sentence. It happens that they seem to exhibit some similarities to human cognition as a side effect, but that does not mean they are on some developmental path to a "full human" like a child. Again, it is silly to try and compare the two.
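As a toy illustration of that "predict the next word" framing (the bigram table below is invented for the example and bears no resemblance to a trained model): whole sentences come out of nothing more than running one-step prediction in a loop, each step conditioned on what has been generated so far.

```python
# Hypothetical bigram table: maps the current word to a predicted next word.
next_word = {
    "<s>":  "the",
    "the":  "cat",
    "cat":  "sat",
    "sat":  "down",
    "down": "</s>",
}

tokens, cur = [], "<s>"
while cur != "</s>":
    cur = next_word[cur]          # one next-word prediction step
    if cur != "</s>":
        tokens.append(cur)

print(" ".join(tokens))           # -> "the cat sat down"
```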