ML Paper Skepticism

Discussions center on the credibility, novelty, reproducibility, and hype surrounding machine learning research papers, including questions about trustworthiness, replication challenges, and comparisons to prior work.

Trend: Stable (0.7x)
Category: AI & Machine Learning
Comments: 3,950
Years Active: 20
Top Authors: 5
Topic ID: #6577

Activity Over Time

2007: 6     2008: 17    2009: 24    2010: 35    2011: 56
2012: 43    2013: 64    2014: 80    2015: 122   2016: 162
2017: 258   2018: 273   2019: 263   2020: 306   2021: 277
2022: 275   2023: 557   2024: 517   2025: 575   2026: 42

Keywords

RTI LLM openreview.net AI GP HN GraphCore chatgpt.html FWIW paper research read paper published ants research paper methods researchers scientists ml

Sample Comments

arthursilva Aug 26, 2016 View on HN

Not a paper, but I liked this related article https://news.ycombinator.com/item?id=11813180

lookingforsome Dec 27, 2019 View on HN

Wasn't able to read the paper, how groundbreaking is this work actually?

fithisux Jul 11, 2019 View on HN

Hype article? Where is the paper and methods?

mkl Aug 29, 2019 View on HN

What are some signs an ML paper is untrustworthy?

corysama Aug 10, 2024 View on HN

I'm guessing this paper is more about "it's neat that this works at all" than about trying to improve on the state of the art.

Ar-Curunir May 1, 2018 View on HN

You do realise this is academic research from top researchers and published at a top conference, right?

usr1106 Dec 19, 2021 View on HN

The paper is 7 months old and has been submitted a dozen times. Most discussion is here: https://news.ycombinator.com/item?id=27378444. The title is promising, but when I read the paper half a year ago I found it disappointing, no practical value. Academic considerations to get a conference paper.

blazespin May 26, 2023 View on HN

To be fair, this paper has been made obsolete in its entirety with recent research. It's not really their fault, but folks need to start publishing faster as posters or something if they want to provide something relevant. A better title, knowing what we know now, might be "To outperform GPT4, do more than imitating"

chinchilla2020 Jun 3, 2025 View on HN

Agreed. The article provides zero measurement, zero examples, zero numbers. It's pure conjecture with no data or experiment to back it up. Unfortunately conjecture rises to the top on hackernews. A well-built study on LLM effectiveness would fall off the front page quickly.

averagewall Aug 24, 2017 View on HN

That's not what the paper is about. The abstract quoted by the GP shows that it has been done before. Probably only very recently so it's going to be a surprise to a lot of readers, but perhaps old news measured in machine learning years.