Curse of Dimensionality

The cluster discusses the curse of dimensionality in machine learning, focusing on the risk of overfitting when models have more parameters than data points, and on the challenges that high-dimensional spaces pose even when regularization is applied.
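
As a minimal editorial sketch of the core phenomenon (not from the cluster itself; the point counts and dimensions are arbitrary): in high dimensions, distances between random points concentrate, so "near" and "far" neighbors become hard to tell apart.

```python
# Sketch: pairwise distances between uniform random points concentrate as the
# dimension d grows -- the relative spread of distances shrinks toward zero.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
for d in (2, 10, 100, 1000):
    a = rng.uniform(size=(n, d))
    b = rng.uniform(size=(n, d))
    dist = np.linalg.norm(a - b, axis=1)          # n random pairwise distances
    spread = (dist.max() - dist.min()) / dist.mean()
    print(f"d={d:5d}  mean distance={dist.mean():7.2f}  relative spread={spread:.3f}")
```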

📉 Falling 0.5x · AI & Machine Learning
Comments: 1,774
Years Active: 20
Top Authors: 5
Topic ID: #4375

Activity Over Time (comments per year)

2007: 1 · 2008: 4 · 2009: 11 · 2010: 21 · 2011: 14 · 2012: 50 · 2013: 37 · 2014: 53 · 2015: 64 · 2016: 105 · 2017: 117 · 2018: 134 · 2019: 144 · 2020: 155 · 2021: 146 · 2022: 151 · 2023: 213 · 2024: 181 · 2025: 167 · 2026: 6

Keywords

ML, parameters, data, data points, curse, dimensions, dimensional, function, fit, models, number

Sample Comments

vbuwivbiu Nov 4, 2017

please elaborate - are you thinking of the curse of dimensionality ?

dwaltrip May 6, 2023

https://en.m.wikipedia.org/wiki/Curse_of_dimensionality

Sounds like you are both right?

Retric Nov 24, 2018

The problem is not aesthetics; the problem is overfitting the data. Add enough knobs and you can approximate anything, but you're simply encoding data, not finding anything new.
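
To make the "enough knobs" point concrete, here is a small editorial sketch (my example, not the commenter's): a polynomial with as many coefficients as data points passes through every noisy training point exactly, yet is badly wrong between them.

```python
# Sketch: a degree-(n-1) polynomial has n coefficients, one per data point,
# so it interpolates the noisy samples exactly -- it encodes the data rather
# than learning the underlying sin() curve, and errs badly between points.
import numpy as np

rng = np.random.default_rng(1)
n = 10
x = np.linspace(0.0, 1.0, n)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(n)
x_test = np.linspace(0.02, 0.98, 200)           # points between the samples

for degree in (3, n - 1):
    coeffs = np.polyfit(x, y, degree)           # least-squares polynomial fit
    train_err = np.abs(np.polyval(coeffs, x) - y).max()
    test_err = np.abs(np.polyval(coeffs, x_test) - np.sin(2 * np.pi * x_test)).max()
    print(f"degree={degree:2d}  max train err={train_err:.1e}  max test err={test_err:.2f}")
```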

CuriouslyC May 22, 2023

Parameters don't have diminishing returns so much as we don't have enough (distinct) data to train models to use that many parameters efficiently.

fredophile Sep 6, 2025

I'm not an expert on LLMs but my guess would be that this is a result of the curse of dimensionality. As a general rule more dimensions != more better.

In general, it's OK to use even more parameters than data points, provided you regularize properly (with weight decay, for example). Conversely, even significantly fewer parameters than data points can still be wrong.
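
A rough editorial sketch of the regularization claim above, with "weight decay" realized as its closed-form equivalent, ridge regression; the sizes and penalty strength are arbitrary assumptions:

```python
# Sketch: p=200 parameters but only n=50 data points. The L2 penalty (ridge /
# weight decay) keeps the weights small, so the overparameterized model can
# still generalize instead of merely interpolating the training noise.
import numpy as np

rng = np.random.default_rng(2)
n, p = 50, 200
X = rng.standard_normal((n, p))
w_true = np.zeros(p)
w_true[:5] = 1.0                                # only 5 features truly matter
y = X @ w_true + 0.1 * rng.standard_normal(n)

lam = 1.0                                       # penalty strength (assumed)
# closed-form ridge solution: w = (X^T X + lam*I)^(-1) X^T y
w = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

X_test = rng.standard_normal((1000, p))
test_mse = np.mean((X_test @ w - X_test @ w_true) ** 2)
print(f"test MSE with weight decay: {test_mse:.3f}  "
      f"(signal variance ~ {np.var(X_test @ w_true):.2f})")
```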

andrewprock Mar 16, 2021

That's not in any way surprising. When you have more parameters than data, this is trivial.
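
The "trivial" exact fit is easy to reproduce (an editorial sketch with arbitrary sizes): with more parameters than observations, least squares interpolates even pure noise.

```python
# Sketch: 50 parameters, 20 observations, target is pure noise. The
# minimum-norm least-squares solution still drives the training residual to
# numerical zero -- an exact fit in this regime says nothing about the data.
import numpy as np

rng = np.random.default_rng(3)
n, p = 20, 50
X = rng.standard_normal((n, p))                 # full row rank almost surely
y = rng.standard_normal(n)                      # pure noise target

w, *_ = np.linalg.lstsq(X, y, rcond=None)       # minimum-norm solution
print("max train residual:", np.abs(X @ w - y).max())  # ~1e-14
```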

neallindsay Nov 12, 2025

We're all suffering from the curse of dimensionality.

nullc Dec 1, 2013

Because your 400-gazillion-degrees-of-freedom machine learning core was not already ill-conditioned enough for you?

noobermin Sep 5, 2016

I see, so it's the high dimensionality of parameters to fit rather than, say, a small number of parameters for millions of points.