LLM Progress Plateau

The cluster debates whether Large Language Models (LLMs) are hitting a plateau due to exhausted training data, scaling limits, and diminishing returns, or whether progress will continue through new breakthroughs and further compute scaling.

➡️ Stable 1.5x · AI & Machine Learning
Comments: 4,091
Years Active: 15
Top Authors: 5
Topic ID: #6486

Activity Over Time

2010: 2
2011: 1
2014: 1
2015: 1
2016: 4
2017: 6
2018: 3
2019: 19
2020: 70
2021: 55
2022: 140
2023: 1,045
2024: 996
2025: 1,605
2026: 147

Keywords

CS, LeCun, AI, AGI, LLM, PhDs, SWE, RAG, SDXL, LLMs, models, GPT, progress, training data, diminishing returns, scaling

Sample Comments

gnatolf · Apr 9, 2025

Given the rate of improvement with respect to LLMs, this may not hold true for long.

spencerchubb · Sep 27, 2024

LLMs have been improving exponentially for a few years. Let's at least wait until exponential improvements slow down to make a judgement about their potential.

sirwhinesalot · Oct 15, 2025

Current LLMs are already trained on the entirety of the interwebs, including very likely stuff they really should not have had access to (private github repos and such). GPT-5 and other SoTA models are only slightly better than their predecessors, and not for every problem (while being worse in other metrics). Assuming there is no major architectural breakthrough[1], the trajectory only seems to be slowing down. Not enough new data, new data that is LLM generated (causing a "recompres…

rwmj · Aug 22, 2025

Only thing? Just off the top of my head: That the LLM doesn't learn incrementally from previous encounters. That we appear to have run out of training data. That we seem to have hit a scaling wall (reflected in the performance of GPT5). I predict we'll get a few research breakthroughs in the next few years that will make articles like this seem ridiculous.

jama211 · Sep 25, 2025

Seems LLM progress really is plateauing. I guess that was to be expected.

xmorse · Aug 29, 2025

Good. LLM progress is too stagnant; it was time they started to play seriously.

ilaksh · Feb 9, 2023

You seem doubtful that there will be a lot more progress. It's been well documented how much progress we have been through and how many new paradigms have arisen when we think we can't progress. It's spectacularly short-sighted, in my opinion, to assume that we won't make progress. Also, some of the prompted or fine-tuned LLMs today are actually very close.

eru · Jun 17, 2025

What you say might be true for the current crop of LLMs. But it's rather unlikely their progress will stop here.

marcosdumay · Jan 15, 2026

Well, expectations vary widely. On one hand, recent models seem to be less useful than the previous generation of them, the scale needed for training improved networks seems to be following the expected quadratic curve, and we don't have more data to train larger models. On the other hand, many people claim that tooling integration is the bottleneck, and that the next generation of LLMs is much better than anything we have seen up to now.

biohcacker84 · Jan 28, 2025

That matches my experience too. I wonder how fast they'll improve and if LLMs will hit a wall, as some AI experts think.