P-Hacking Criticism
Discussions center on accusations of p-hacking in a scientific study, highlighting issues like multiple testing, false positives, and post-hoc hypothesis testing, often referencing XKCD comics and other explanatory resources.
Sample Comments
No. It's called p-hacking, and it's a sleazy way to get published. https://youtu.be/42QuXLucH3Q
P-Hacking: https://projects.fivethirtyeight.com/p-hacking/
As someone who knows a thing or two about statistics, this looks like a painfully wrong interpretation of the data. Essentially this is p-hacking, although it might be inadvertent by the authors. This XKCD explains it well: https://xkcd.com/882/ So, if you have a lot of data, you don't get to try many things and then report the ones with high significance. I really doubt this result will stand in a
Indeed. What you are describing is p-hacking: https://en.wikipedia.org/wiki/Data_dredging
That sounds like a textbook case of p-value hacking and hypothesis fishing. Is there a solid statistical analysis on the space of hypotheses considered and the size of effects measured?
Does that qualify as "p-hacking?"
Is this p-hacking, physics edition?
They already have a problem with p-hacking
Relevant XKCD: https://xkcd.com/882/ Do 20 experiments with a p<5% criterion, and it's likely that one will be a false positive. Only publish positive results, and someone will eventually publish a false positive result without fraud.
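The arithmetic behind that comment is easy to check. Under a true null hypothesis a p-value is uniform on [0, 1], so across 20 independent tests the chance of at least one spurious p < 0.05 is 1 − 0.95²⁰ ≈ 64%. A minimal sketch (the numbers here are illustrative, not from the study under discussion):

```python
import random

ALPHA = 0.05
N_TESTS = 20

# Analytic family-wise error rate: probability of at least one
# false positive across N_TESTS independent tests of a true null.
analytic = 1 - (1 - ALPHA) ** N_TESTS
print(f"analytic P(>=1 false positive) = {analytic:.3f}")

# Monte Carlo check: under the null, p-values are uniform on [0, 1].
random.seed(0)
trials = 100_000
hits = sum(
    1
    for _ in range(trials)
    if min(random.random() for _ in range(N_TESTS)) < ALPHA
)
print(f"simulated P(>=1 false positive) = {hits / trials:.3f}")
```

Both numbers come out near 0.64, which is why "run 20 experiments, publish the significant one" reliably manufactures false positives without any fraud.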
Such post hoc hypothesis trawling is essentially p-hacking. Try a large enough set of explanations and your data will show something significant. It might be hard to replicate.
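The standard remedy the comments allude to is a multiple-comparisons correction: shrink the significance threshold to account for the size of the hypothesis space that was searched. A minimal sketch using the Bonferroni correction (the p-values below are hypothetical):

```python
def bonferroni_significant(p_values, alpha=0.05):
    """Flag p-values that survive a Bonferroni correction:
    each test is held to alpha / (number of tests)."""
    threshold = alpha / len(p_values)
    return [p < threshold for p in p_values]

# 20 hypotheses tried post hoc; one p-value of 0.048 looks
# "significant" at 0.05 but fails the corrected bar of 0.0025.
pvals = [0.62, 0.31, 0.048] + [0.5] * 17
print(bonferroni_significant(pvals))  # all False

# A genuinely strong result (p = 0.001) still survives.
print(bonferroni_significant([0.001] + [0.5] * 19)[0])  # True
```

Bonferroni is deliberately conservative; less strict alternatives (Holm, Benjamini-Hochberg) exist, but the point stands: a result found by trawling 20 explanations must clear a much higher bar than a preregistered single test.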