ML Paper Skepticism
Discussions center on the credibility, novelty, reproducibility, and hype surrounding machine learning research papers, including questions about trustworthiness, replication challenges, and comparisons to prior work.
Activity Over Time
Top Contributors
Keywords
Sample Comments
Not a paper, but I liked this related article https://news.ycombinator.com/item?id=11813180
Wasn't able to read the paper; how groundbreaking is this work, actually?
Hype article? Where are the paper and the methods?
What are some signs an ML paper is untrustworthy?
I'm guessing this paper is more about "it's neat that this works at all" than about trying to improve on the state of the art.
You do realise this is academic research from top researchers and published at a top conference, right?
The paper is 7 months old and has been submitted a dozen times. Most of the discussion is here: https://news.ycombinator.com/item?id=27378444
The title is promising, but when I read the paper half a year ago I found it disappointing: no practical value, just academic considerations to get a conference paper.
To be fair, this paper has been made obsolete in its entirety by recent research. It's not really the authors' fault, but folks need to start publishing faster, as posters or something, if they want to provide something relevant. A better title, knowing what we know now, might be "To outperform GPT4, do more than imitate".
Agreed. The article provides zero measurements, zero examples, zero numbers. It's pure conjecture with no data or experiments to back it up. Unfortunately, conjecture rises to the top on Hacker News; a well-built study on LLM effectiveness would fall off the front page quickly.
That's not what the paper is about. The abstract quoted by the GP shows that it has been done before, probably only very recently, so it's going to be a surprise to a lot of readers, but it's perhaps old news measured in machine learning years.