Big O Notation

Discussions center on common misconceptions, proper usage, and limitations of Big O notation in algorithm analysis, emphasizing its asymptotic nature over practical performance details such as constant factors and small input sizes.

Category: Programming Languages
Trend: ➡️ Stable (0.6x)
Comments: 4,923
Years Active: 20
Top Authors: 5
Topic ID: #3947

Activity Over Time (comments per year)

2007: 2 · 2008: 18 · 2009: 70 · 2010: 120 · 2011: 194 · 2012: 181 · 2013: 339
2014: 220 · 2015: 219 · 2016: 369 · 2017: 225 · 2018: 216 · 2019: 510 · 2020: 333
2021: 401 · 2022: 415 · 2023: 359 · 2024: 304 · 2025: 417 · 2026: 11

Keywords

CS, IT, CPU, L1, TM, e.g., notation, algorithm, big, complexity, input, constant, log, bound, function, size

Sample Comments

pkrs (Sep 2, 2014)

You're thinking about Big O notation, which is not the case here

WanderPanda (Nov 12, 2019)

O(log n) is so precise it ain't even informative :P

beza1e1 (Jul 16, 2010)

Understand the effect of the constant factor hidden within the Big O. Often an O(n) algorithm is faster than an O(1) algorithm, because n is too small. E.g. arrays vs hashmaps. Make sure you know what n is in each case. For example, a graph algorithm that is O(n^2) with n being the number of nodes may actually be O(n) with n being the graph size (number of edges).
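
One way to see the hidden constant is to measure it. A minimal sketch in Python (which side wins at n = 8 depends on interpreter and hardware, so treat the result as something to measure, not assume):

    import timeit

    # Membership test on a tiny collection: the O(n) list scan does
    # cheap work per element, while the O(1) set lookup pays a fixed
    # hashing cost up front. At n = 8 either side can win.
    small_list = list(range(8))
    small_set = set(small_list)

    list_time = timeit.timeit(lambda: 7 in small_list, number=1_000_000)
    set_time = timeit.timeit(lambda: 7 in small_set, number=1_000_000)

    print(f"list scan, n=8 (O(n)): {list_time:.3f}s")
    print(f"set lookup     (O(1)): {set_time:.3f}s")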

chc (Aug 18, 2011)

Big-O tells you the behavior of your running time as the dataset increases. It does not tell you what parts of your program are actually fast or slow. For example, an O(n^n) algorithm is actually fine if it will never be applied to more than six elements and each operation takes two microseconds, and is vastly better for that case than an O(1) algorithm that takes half a second.
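
The arithmetic behind that claim is easy to check with the comment's own figures (2 µs per operation, a flat half-second for the O(1) routine):

    # Cost of n^n operations at 2 microseconds each, versus a flat
    # half-second O(1) call. The crossover sits just past n = 6.
    OP_TIME = 2e-6    # seconds per operation (from the comment)
    CONSTANT = 0.5    # seconds for the O(1) alternative (from the comment)

    for n in range(1, 9):
        brute = (n ** n) * OP_TIME
        winner = "O(n^n)" if brute < CONSTANT else "O(1)"
        print(f"n={n}: n^n cost = {brute:10.4f}s -> {winner} wins")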

tehsauce (Oct 29, 2019)

You're misusing big O notation

dreamcompiler (Oct 5, 2019)

It's too slow unless n is at least on the order of several thousand. Big-O notation doesn't always tell the whole story.
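
This crossover is why production sort routines switch algorithms below a size threshold. A minimal sketch of the pattern (the cutoff of 32 is illustrative; real libraries tune it empirically):

    CUTOFF = 32  # illustrative, not a measured value

    def insertion_sort(a):
        # O(n^2) comparisons, but a tight loop with tiny constants:
        # usually the faster choice for very short inputs.
        a = list(a)
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            while j >= 0 and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key
        return a

    def hybrid_sort(a):
        # O(n log n) merge sort overall, delegating small slices to
        # the asymptotically worse but practically faster routine.
        if len(a) <= CUTOFF:
            return insertion_sort(a)
        mid = len(a) // 2
        left, right = hybrid_sort(a[:mid]), hybrid_sort(a[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        return merged + left[i:] + right[j:]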

faragon (Aug 20, 2015)

Be careful with theoretical asymptotic complexity (big O) related to execution time. E.g. if your algorithm's time complexity is O(1), but it internally calls a higher-complexity function, e.g. a malloc() implemented in O(log n), your algorithm's time complexity would be O(log n) and not O(1). It could be even worse: an on-average or typically constant-time algorithm could in reality be an O(n) one: e.g. the case of hash table reindexation (that's the reason why ma…
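
The hash table point is easy to sketch: each insert is O(1) on average, but the one insert that trips the load factor pays for an O(n) rebuild. A minimal chained table illustrating this (not production code):

    class HashTable:
        def __init__(self):
            self.buckets = [[] for _ in range(8)]
            self.size = 0

        def insert(self, key, value):
            # Usually O(1)... except when this branch fires and one
            # unlucky insert rehashes every existing key: O(n).
            if self.size >= len(self.buckets) * 0.75:
                self._rehash()
            b = self.buckets[hash(key) % len(self.buckets)]
            for i, (k, _) in enumerate(b):
                if k == key:
                    b[i] = (key, value)
                    return
            b.append((key, value))
            self.size += 1

        def _rehash(self):
            old = self.buckets
            self.buckets = [[] for _ in range(len(old) * 2)]
            for bucket in old:
                for k, v in bucket:
                    self.buckets[hash(k) % len(self.buckets)].append((k, v))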

trufflepig (May 28, 2019)

i don’t think big-O cares about a couple orders constant
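
It doesn't, by definition. The standard formulation, stated for reference:

    f(n) ∈ O(g(n))  ⇔  there exist c > 0 and n₀ such that
    |f(n)| ≤ c·|g(n)| for all n ≥ n₀

So f(n) = 100n is still O(n): take c = 100 and n₀ = 1. Any constant factor, even a couple of orders of magnitude, is absorbed into c.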

magicalhippo (Feb 11, 2025)

Your sense of frustration seems to stem from thinking about something which is similar to big-O, but not actually being big-O. All of the points you raise are valid if you want to have a good understanding of actual runtime cost of an algorithm. None of it is relevant if you want to compare algorithms according to big-O, in order to find one which doesn't lead to failure[1] when faced with large amounts of data. Perhaps you should define your own big-R notation for what you're af…

mekoka (Oct 13, 2020)

They're not just annoying, they possibly don't get the point of big O. The n in O(n) means that at the time of algorithm design the size of the data is unknown and considered unbounded. If you already know its upper limit then it becomes O(c) (where c is a known constant). If c is low enough (a discretionary decision) it can possibly be considered O(1). Think of how a theoretical hash map traverses one of its buckets in linear time to find the key's value (if the bucket is implement…
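
The bucket-scan point can be sketched in a few lines (MAX_BUCKET is a hypothetical design-time guarantee, not a real library constant):

    MAX_BUCKET = 8  # hypothetical bound the table's design enforces

    def bucket_lookup(bucket, key, default=None):
        # Linear in len(bucket) -- but with the length capped at a
        # constant, the scan is O(MAX_BUCKET) = O(1), which is
        # exactly the comment's point about known upper limits.
        assert len(bucket) <= MAX_BUCKET
        for k, v in bucket:
            if k == key:
                return v
        return default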