AI Compute Constraints

This cluster covers the steep hardware and compute requirements for training and running LLMs and other AI models such as Stable Diffusion, along with predictions about future optimizations, hardware advances, and the physical limits that could slow progress.

➡️ Stable 1.3x · AI & Machine Learning
Comments: 3,145
Years Active: 19
Top Authors: 5
Topic ID: #4768

Activity Over Time

2008: 5 · 2009: 3 · 2010: 2 · 2011: 3 · 2012: 7 · 2013: 10 · 2014: 7 · 2015: 30 · 2016: 70 · 2017: 69 · 2018: 92 · 2019: 96 · 2020: 123 · 2021: 87 · 2022: 176 · 2023: 633 · 2024: 666 · 2025: 988 · 2026: 80

Keywords

RAM CPU LLM ARM LED HDD HBM TPU GPU AI hardware ai compute llms models gpu energy breakthrough algorithm 100x

Sample Comments

nullsense · Apr 7, 2023

Three observations here. Firstly, it has been a really eye-opening experience watching the innovation around Stable Diffusion and locally run LLMs, and seeing that the unoptimized research code that needed such beefy hardware could actually be optimized to run on consumer hardware given sufficient motivation. Secondly, it wasn't obvious that deep learning was going to work as well as it did if you simply threw enough compute at it. Now that this tech has reached critical mass there is a tonn
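
Much of that local-hardware optimization comes down to running the weights at lower precision (quantization), among other tricks. A minimal back-of-envelope sketch, assuming a hypothetical 7B-parameter model and counting only the memory needed to hold the weights:

```python
# Back-of-envelope weight-memory estimate for running an LLM locally.
# Assumes a hypothetical 7B-parameter model; real usage also needs room for
# activations, the KV cache, and framework overhead, which are ignored here.

BYTES_PER_PARAM = {
    "fp32": 4.0,  # full-precision research checkpoint
    "fp16": 2.0,  # common inference default
    "int8": 1.0,  # 8-bit quantization
    "int4": 0.5,  # 4-bit quantization, typical for consumer GPUs
}

def weight_memory_gb(num_params: float, dtype: str) -> float:
    """Gigabytes needed just to hold the model weights."""
    return num_params * BYTES_PER_PARAM[dtype] / 1e9

for dtype in BYTES_PER_PARAM:
    print(f"{dtype}: ~{weight_memory_gb(7e9, dtype):.1f} GB")
# fp32 needs ~28 GB (datacenter territory); int4 needs ~3.5 GB and fits a consumer GPU.
```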

alwillis · Jul 28, 2025

Yes and no. It’s very expensive to create these models and serve them at scale. Eventually the processing power required to create them will come down, but that’s going to be a while. Even if there was a breakthrough GPU technology announced tomorrow, it would take several years before it could be put into production. And pretty much only TSMC can produce cutting-edge chips at scale, and they have their hands full. Between Anthropic, xAI and OpenAI, these companies have raised about $84

cyanydeez · Dec 21, 2023

gonna be a while before sufficiently powerful hardware is GA for LLMs

moonchrome · Aug 24, 2023

> but the rate of progression feels like they will soon

The rate of progression seems to be logarithmic, so we got "something looks plausible", but to get that last 10% it's probably going to cost more in HW than just using humans, unless there are some breakthroughs. Just like self-driving cars. My impressions, at least looking at the developments from a sort of technical perspective: they are hitting all kinds of scaling problems, both in terms of data available, runt
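
That diminishing-returns feel matches the published scaling-law picture, in which loss falls only as a small power of compute, so each fixed quality gain costs a multiplicatively larger budget. A minimal sketch with an illustrative (not fitted) exponent:

```python
# Illustrative power-law scaling: loss(C) = a * C**(-b).
# The constants are made up for illustration; fitted exponents in the
# scaling-law literature are small, which is what makes progress feel slow.
a, b = 10.0, 0.05

def loss(compute_flops: float) -> float:
    return a * compute_flops ** (-b)

c0 = 1e21                    # hypothetical starting compute budget (FLOPs)
target = 0.9 * loss(c0)      # aim for a 10% lower loss

# Solve a * C**(-b) = target  =>  C = (a / target) ** (1 / b)
c_needed = (a / target) ** (1 / b)
print(f"compute multiplier for a 10% loss reduction: ~{c_needed / c0:.0f}x")
# With b = 0.05 that is (1/0.9)**20, about 8x -- and the next 10% costs ~8x again.
```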

__loam · Nov 19, 2023

In terms of trajectory, I'm not really convinced we can do much better than we are now. Moore's law is ending in the next decade as we hit fundamental physical limitations on how many transistors we can pack on a chip. The growth in computational power is going to slow down considerably at a time when AI companies are struggling to get more GPU compute to train new models and run inference. OpenAI themselves supposedly stopped sign-ups because they're running out of computational

stult · Apr 14, 2022

There was just an article from DeepMind on HN about this topic the other day [1], but basically IIRC it argues that all of the LLMs are horrendously compute-inefficient, which means there’s a ton of room to improve them. So those models will be optimized over time just as the consumer hardware will be improved, until eventually one day the two trends will converge. It’s just a question of when that will happen.

[1] https://news.ycombinator.com/item?id=30987885
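
The linked item appears to be the DeepMind compute-optimal ("Chinchilla") result; its rule of thumb is roughly 20 training tokens per parameter, with training compute approximated as C ≈ 6·N·D FLOPs. A rough sketch of that arithmetic, using GPT-3's published figures (175B parameters, ~300B tokens) as the example:

```python
# Compute-optimal scaling arithmetic in the spirit of the DeepMind (Chinchilla)
# paper: training compute C ≈ 6 * N * D FLOPs, and the compute-optimal recipe
# uses roughly D ≈ 20 * N training tokens for N parameters.

def train_flops(n_params: float, n_tokens: float) -> float:
    return 6 * n_params * n_tokens

# GPT-3's published figures as an illustrative starting point.
budget = train_flops(175e9, 300e9)        # ~3.2e23 FLOPs

# For a fixed budget: C = 6 * N * (20 * N) = 120 * N**2  =>  N_opt = sqrt(C / 120)
n_opt = (budget / 120) ** 0.5
d_opt = 20 * n_opt
print(f"same budget, compute-optimal: ~{n_opt / 1e9:.0f}B params on ~{d_opt / 1e12:.1f}T tokens")
# => roughly a 50B-parameter model on ~1T tokens, i.e. GPT-3-class models were
#    trained on far fewer tokens than the compute-optimal recipe would suggest.
```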

sojuz151 · Jul 25, 2025

Compute has been getting cheaper and models more optimised. So if models can do something, it will not be long until they can do it cheaply.

jacquesm · Mar 22, 2023

Absolutely not. Computers used to be extremely centralized, and the decentralization revolution powered a ton of progress in both software and hardware development. You can run many AI applications locally today that would have required a massive investment in hardware not all that long ago. It's just that the bleeding edge is still in that territory. One major optimization avenue is the improvement of the models themselves: they are large because they have large numbers of par

spacetime_cmplx · May 20, 2023

While OP's reply answers your question, it's important to not apply current costs to predict the future of AI. Hardware for LLMs is one step function away from unimaginable capabilities. That breakthrough could be in performance, cost, or more likely, both. Imagine GPT-4 at 1/1000th the cost. That's where we're going. And you can bet your ass Nvidia is working on it as we speak. Or maybe someone else will leapfrog them like ARM did to Intel.
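
How far away "1/1000th the cost" is comes down to compounding. A minimal sketch with hypothetical annual improvement rates in cost per unit of performance, not a forecast for any vendor:

```python
import math

# Years needed for a 1000x cost reduction at a steady yearly improvement rate.
# The rates below are hypothetical illustrations, not a roadmap.
TARGET = 1000.0
for yearly_gain in (1.3, 1.5, 2.0):  # 30%, 50%, 100% better cost/perf per year
    years = math.log(TARGET) / math.log(yearly_gain)
    print(f"{yearly_gain:.1f}x per year -> ~{years:.0f} years to reach 1/1000th the cost")
# Doubling every year gets there in about 10 years (2**10 = 1024);
# 1.3x per year takes roughly 26 years.
```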

seanmcdirmid · Dec 3, 2024

Haven't people been saying that for the last decade? I mean, eventually they will be right, maybe "about" means next year, or maybe a decade later? They just have to stop making huge improvements for a few years and the investment will dry up. I really wasn't interested in computer hardware anymore (they are fast enough!) until I discovered the world of running LLMs and other AI locally. Now I actually care about computer hardware again. It is weird, I wouldn't have ev