LLM Reasoning Debate

Cluster debates whether large language models (LLMs) truly reason or merely pattern-match from training data, featuring arguments against formal reasoning capabilities alongside counterexamples and discussions of chain-of-thought prompting.
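
For readers unfamiliar with the chain-of-thought prompting mentioned above, the sketch below contrasts a direct prompt with one that asks the model to lay out intermediate steps first. It is a minimal sketch, assuming the openai Python package (v1+) and an API key in the environment; the model name is an illustrative placeholder, not a recommendation.

```python
# Minimal chain-of-thought prompting sketch (assumes `openai` v1+ and an API key).
from openai import OpenAI

client = OpenAI()

question = (
    "A bat and a ball cost $1.10 together. The bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
)

# Direct prompt: ask for the answer outright.
direct = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative placeholder
    messages=[{"role": "user", "content": question}],
)

# Chain-of-thought prompt: ask for the intermediate steps before the answer.
cot = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": question + " Think step by step and show your reasoning "
                              "before giving the final answer.",
    }],
)

print(direct.choices[0].message.content)
print(cot.choices[0].message.content)
```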

➡️ Stable (1.3x) · AI & Machine Learning
Comments: 5,935
Years Active: 16
Top Authors: 5
Topic ID: #8168

Activity Over Time

2010: 1
2012: 5
2013: 11
2014: 6
2015: 15
2016: 17
2017: 19
2018: 26
2019: 19
2020: 93
2021: 41
2022: 235
2023: 1,546
2024: 1,541
2025: 2,284
2026: 76

Keywords

AI, LLM, IME, github.io, stanford.edu, X2, OAI, X1, arxiv.org, GPT, reasoning, llms, training, models, llm, reason, gpt, training data, llms don, model

Sample Comments

loki49152 Feb 28, 2025 View on HN

LLMs don't do formal reasoning. Not in any sense. They don't do any kind of reasoning: they replay combinations of the reasoning that was encoded in their training data by "finding" patterns in the relationships between tokens at different scales, then applying those patterns to generate output triggered by the input.
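
The comment above frames generation as replaying token-level patterns from the training data. For a concrete picture of what purely statistical next-token prediction looks like, here is a toy bigram model; it is a classroom-scale sketch with an invented training string, far simpler than a transformer, and it can only continue text with continuations it has literally observed.

```python
# Toy statistical next-token predictor: a bigram model that only "replays"
# token patterns seen in its (tiny, made-up) training text.
import random
from collections import defaultdict

training_text = "the ball costs five cents . the bat costs one dollar five cents ."
tokens = training_text.split()

# Count which token follows which in the training data.
follows = defaultdict(list)
for prev, nxt in zip(tokens, tokens[1:]):
    follows[prev].append(nxt)

def generate(start, length=8):
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))  # sample only from observed continuations
    return " ".join(out)

print(generate("the"))  # e.g. "the bat costs five cents . the ball costs"
```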

tenuousemphasis Aug 20, 2025 View on HN

No, because reasoning models don't actually reason.

freejazz Sep 10, 2025 View on HN

I still don't understand what a "reasoning" LLM is.

boxed Aug 13, 2025 View on HN

LLMs don't learn reasoning. At all. They are statistical language models, nothing else. If they get math right, it's because correct math is more statistically probable given the training data; they can't actually do math. This should be pretty clear from all the "how many Rs are there in strawberry" type examples.
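
The "how many Rs are there in strawberry" failure is commonly attributed to tokenization: the model receives subword token IDs rather than characters, so character-level questions sit outside its native representation. A minimal sketch, assuming the tiktoken package; "cl100k_base" is an encoding used by several OpenAI models.

```python
# Why letter-counting is awkward for LLMs: the model sees subword token IDs,
# not characters. Assumes the `tiktoken` package is installed.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
word = "strawberry"

token_ids = enc.encode(word)
pieces = [enc.decode([t]) for t in token_ids]
print(token_ids, pieces)  # the word arrives as a few opaque subword pieces

# Ordinary code, by contrast, sees the characters directly:
print(word.count("r"))  # 3
```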

imtringued May 30, 2024 View on HN

LLMs reason to the extent they are allowed to. You could say that they are overfitting when it comes to reasoning. They weren't trained to reason to begin with, so the bigger surprise is that they can do it within limits.

eggdaft May 14, 2024 View on HN

What do you mean by “an LLM doesn’t reason”?

ttpphd Apr 6, 2023 View on HN

It is a large language model. It manipulates text based on context and the imprint of its vast training. You are not able to articulate a theory of reasoning. You are just pointing to the output of an algorithm and saying "this must mean something!" There isn't even a working model of reasoning here; it's just a human being impressed that a tool for manipulating symbols is able to manipulate symbols after being trained to manipulate symbols in the specific way you want symbols manipulated.

energy123 Jul 19, 2025 View on HN

This is incredible. We know these questions are not in the training data. How can you still say that LLMs aren't reasoning?

__loam Jul 18, 2023 View on HN

LLMs are not reasoning systems. That's one of the major problems with them.

emorning3 Feb 21, 2025 View on HN

LLMs cannot reason; they can only say things that sound reasonable. There's a difference. Duh.