P-Value Misinterpretation

Comments focus on common misunderstandings of p-values and statistical significance, such as confusing the probability that there is no real effect with the probability of observing the data under the null hypothesis, as well as critiques of the arbitrary 0.05 threshold.

📉 Falling 0.4x · Science
Comments: 3,430
Years Active: 19
Top Authors: 5
Topic ID: #8462

Activity Over Time

2008: 3
2009: 47
2010: 71
2011: 104
2012: 107
2013: 135
2014: 173
2015: 345
2016: 265
2017: 283
2018: 261
2019: 375
2020: 177
2021: 258
2022: 242
2023: 225
2024: 180
2025: 176
2026: 3

Keywords

p-values · 0.05 threshold · hypothesis · cutoff · effect · results · statistical conclusions · wikipedia.org · tandfonline.com · ycombinator.com

Sample Comments

drc500free • Jun 22, 2025

The wrong statement is saying P(no real effect). The correct statement is saying P(saw these results | no real effect). Consider two extremes, for the same 5% threshold:

1) All of their ideas for experiments are idiotic. Every single experiment is for something that simply would never work in real life. 5% of those experiments pass the threshold and 0% of them are valid ideas.

2) All of their ideas are brilliant. Every single experiment is for something that is a perfect wa
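
The base-rate point in this comment is easy to check by simulation. A minimal sketch, assuming Python with numpy and scipy; the function name and parameter values are illustrative, not from the thread:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def false_discovery_share(frac_true, n_experiments=2000, n=30, effect=0.5, alpha=0.05):
    """Share of 'significant' results that are false positives,
    given the fraction of tested hypotheses that are actually true."""
    false_hits = hits = 0
    for _ in range(n_experiments):
        truth = rng.random() < frac_true
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(effect if truth else 0.0, 1.0, n)
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
            false_hits += not truth
    return false_hits / max(hits, 1)

# Extreme 1: no idea is real -- every "significant" result is a false positive.
print(false_discovery_share(0.0))  # ~1.0
# Extreme 2: every idea is real -- almost no "significant" result is false.
print(false_discovery_share(1.0))  # ~0.0
```

The 5% threshold controls P(significant | no effect) per experiment; what fraction of significant results are real depends entirely on the quality of the hypotheses going in.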

quickthrower2 • Aug 31, 2019

Sounds like the old p=0.05 problem!

vacri • Jul 27, 2017

You're making the same mistake in reverse. The p-value is not a marker of correctness, but of confidence. If something doesn't reach the p-value cutoff, that doesn't mean that the effect doesn't exist, just that it didn't meet the confidence level. A paper could be entirely correct in its hypothesis, yet not meet the confidence level in its experimentation.
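
A short sketch of that failure mode, assuming Python with numpy and scipy: a real effect measured with a small sample frequently misses the 0.05 cutoff.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# A real effect exists (mean shift of 0.3 SD), but the sample is small.
control   = rng.normal(0.0, 1.0, 20)
treatment = rng.normal(0.3, 1.0, 20)

p = stats.ttest_ind(control, treatment).pvalue
print(f"p = {p:.3f}")  # often > 0.05 at this sample size, despite the real effect
```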

Aperocky • Apr 11, 2023

Plaster this: https://en.wikipedia.org/wiki/P-value

myopicgoat • Dec 29, 2018

You may be referring to p-values and the arbitrary 0.05 threshold, but in that case the connection is risky at best and wrong in general (a common misinterpretation of the p-value as the probability of the alternative being false).
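
The distinction can be made concrete with Bayes' rule. A toy calculation in Python; the prior and power figures here are assumed purely for illustration:

```python
# P(no real effect | p < 0.05) is not alpha; it depends on the base rate.
prior_true = 0.10          # assumed: 10% of tested hypotheses are real
alpha, power = 0.05, 0.80  # illustrative significance level and test power

p_significant = power * prior_true + alpha * (1 - prior_true)
p_false_given_sig = alpha * (1 - prior_true) / p_significant
print(f"P(no effect | significant) ~ {p_false_given_sig:.2f}")  # ~0.36, not 0.05
```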

tomjen3 • May 16, 2011

Yes, isn't p usually supposed to be less than 0.05 for it to be useful?

alexfromapex • Dec 30, 2019

What’s wrong with statistical significance?

scythe • Apr 29, 2020

Statistical significance is not a bright-line test. The p value reported is the probability that the observed effects occur if there are actually no effects. Notably, p is always greater than zero. In general, when less data is available, larger p values are considered more interesting, while when more data is available, p must be smaller to be interesting. In particle physics, p
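
To see why larger samples demand smaller p for the same observed effect, here is a one-sided z-test sketch, assuming Python with numpy and scipy; the effect size is illustrative:

```python
import numpy as np
from scipy import stats

# p = P(result at least this extreme | no real effect).
# For a fixed observed effect, the standard error shrinks with sqrt(n),
# so p shrinks too -- but it never reaches zero.
effect, sd = 0.2, 1.0
for n in (25, 100, 400, 1600):
    z = effect / (sd / np.sqrt(n))  # test statistic grows with sqrt(n)
    p = stats.norm.sf(z)            # upper-tail probability under the null
    print(f"n={n:5d}  z={z:4.1f}  p={p:.2e}")
```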

viraptor • Sep 7, 2022

That's a whole thing in statistics: https://en.wikipedia.org/wiki/Power_of_a_test
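
Power can be estimated directly by simulation: the fraction of experiments on a real effect that clear the cutoff. A sketch assuming Python with numpy and scipy; the effect size and trial counts are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def estimated_power(n, effect=0.3, alpha=0.05, trials=2000):
    """Fraction of simulated two-sample t-tests that detect a real effect."""
    hits = sum(
        stats.ttest_ind(rng.normal(0, 1, n), rng.normal(effect, 1, n)).pvalue < alpha
        for _ in range(trials)
    )
    return hits / trials

# Power climbs toward 1 as the per-group sample size grows.
for n in (20, 50, 100, 200):
    print(f"n={n:3d}  power ~ {estimated_power(n):.2f}")
```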

okl • Feb 13, 2022

Related: "The ASA Statement on p-Values: Context, Process, and Purpose"
- https://www.tandfonline.com/doi/full/10.1080/00031305.2016.1...
- https://news.ycombinator.com/item?id=30324223