Big O Notation
Discussions center on common misconceptions, proper usage, and limitations of Big O notation in algorithm analysis, emphasizing its asymptotic nature over practical performance details like constants and small inputs.
Sample Comments
You're thinking about Big O notation, which is not the case here
O(log n) is so precise it ain't even informative :P
Understand the effect of the constant factor hidden within the Big O. Often an O(n) algorithm is faster than an O(1) algorithm because n is small, e.g. arrays vs. hashmaps. Also be clear about what n is in each case: for example, a graph algorithm that is O(n^2) in the number of nodes may actually be O(n) in the graph size (number of edges).
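A minimal sketch of the constant-factor point above, timing a linear scan of a tiny list against a hashed lookup with Python's timeit (the container size and repetition count are illustrative assumptions, not measurements from the discussion):

```python
import timeit

# For very small n, an O(n) linear scan over a list can compete with an
# O(1) hashed lookup, because the hashed container pays per-call overhead.
small = list(range(8))          # tiny dataset: n = 8
small_set = set(small)

scan = timeit.timeit(lambda: 7 in small, number=1_000_000)
hashed = timeit.timeit(lambda: 7 in small_set, number=1_000_000)

print(f"linear scan of 8 items: {scan:.3f}s")
print(f"hashed lookup:          {hashed:.3f}s")
# Which wins depends on the interpreter and hardware; the point is only
# that Big O says nothing about which is faster at n = 8.
```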
Big-O tells you the behavior of your running time as the dataset increases. It does not tell you what parts of your program are actually fast or slow. For example, an O(n^n) algorithm is actually fine if it will never be applied to more than six elements and each operation takes two microseconds, and is vastly better for that case than an O(1) algorithm that takes half a second.
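Working out the arithmetic behind that example, as a back-of-the-envelope sketch using the comment's own figures (six elements, two microseconds per operation, a half-second constant):

```python
# O(n^n) with n capped at 6 and 2 microseconds per operation:
n = 6
ops = n ** n                  # 6^6 = 46,656 operations
per_op = 2e-6                 # 2 microseconds each
print(f"{ops} ops -> {ops * per_op:.3f} s")   # ~0.093 s

# versus the hypothetical O(1) routine with a half-second constant:
print("O(1) with 0.5 s constant -> 0.500 s")
```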
You're misusing big O notation
It's too slow unless n is at least on the order of several thousand. Big-O notation doesn't always tell the whole story.
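A sketch of the crossover effect that comment points at: an asymptotically better routine with a heavy constant only overtakes a "worse" one once n is large enough. The two toy cost models below are assumptions chosen for illustration, not the algorithms from the original thread:

```python
import math

# Toy cost models: a simple O(n^2) routine with a tiny constant
# versus an O(n log n) routine with a large constant factor.
def cost_simple(n):          # e.g. insertion-sort-like: cheap inner loop
    return 0.5 * n * n

def cost_fancy(n):           # e.g. heavily layered n log n implementation
    return 150 * n * math.log2(max(n, 2))

for n in (10, 100, 1_000, 10_000, 100_000):
    winner = "simple" if cost_simple(n) < cost_fancy(n) else "fancy"
    print(f"n={n:>7}: simple={cost_simple(n):>12.0f} "
          f"fancy={cost_fancy(n):>12.0f} -> {winner}")
# With these constants the crossover lands around n of a few thousand;
# below that, the asymptotically worse routine is the faster one.
```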
Be careful when relating theoretical asymptotic complexity (big O) to execution time. E.g. if your algorithm's time complexity is O(1) but it internally calls a higher-complexity function, e.g. malloc() implemented in O(log n), then your algorithm's time complexity is O(log n) and not O(1). It can be even worse: an algorithm that is constant time on average or in the typical case could in reality be O(n), e.g. the case of hash table reindexation (that's the reason why ma
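A minimal sketch of that "constant time that occasionally isn't" effect, simulating a growable array whose append is amortized O(1) but pays an O(n) copy whenever it resizes, analogous to a hash table reindexing (the doubling policy and element count are assumptions for illustration):

```python
# Simulate a growable array: most appends copy nothing, but when
# capacity is exhausted every existing element is copied, so a
# single "O(1)" append can cost O(n).
capacity, size = 1, 0
copies_per_append = []

for _ in range(1_000):
    if size == capacity:
        copies_per_append.append(size)   # resize: copy all n elements
        capacity *= 2
    else:
        copies_per_append.append(0)      # ordinary append: no copying
    size += 1

print("worst single append copied", max(copies_per_append), "elements")
print("average copies per append ", sum(copies_per_append) / len(copies_per_append))
# Average stays near 1 (amortized O(1)), but the worst single append is O(n).
```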
i don’t think big-O cares about a constant factor of a couple orders of magnitude
Your sense of frustration seems to stem from thinking about something which is similar to big-O but is not actually big-O. All of the points you raise are valid if you want a good understanding of the actual runtime cost of an algorithm. None of them is relevant if you want to compare algorithms according to big-O, in order to find one which doesn't lead to failure[1] when faced with large amounts of data. Perhaps you should define your own big-R notation for what you're after
They're not just annoying, they possibly don't get the point of big O. The n in O(n) means that at the time of algorithm design the size of the data is unknown and considered unbounded. If you already know its upper limit, then it becomes O(c) (where c is a known constant). If c is low enough (a discretionary decision) it can possibly be considered O(1). Think of how a theoretical hash map traverses one of its buckets in linear time to find the key's value (if the bucket is implement
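A sketch of that bucket-scan idea: lookup walks one bucket linearly, so it is O(b) in the bucket length, and it is only reasonable to call it O(1) if b is bounded by a small constant. The BucketMap class and its fixed bucket count below are illustrative assumptions:

```python
# Minimal bucketed hash map: get() scans one bucket linearly.
class BucketMap:
    def __init__(self, num_buckets=16):         # fixed bucket count
        self.buckets = [[] for _ in range(num_buckets)]

    def put(self, key, value):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)         # overwrite existing key
                return
        bucket.append((key, value))

    def get(self, key):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for k, v in bucket:                      # linear scan of one bucket
            if k == key:
                return v
        raise KeyError(key)

m = BucketMap()
for i in range(100):
    m.put(f"key{i}", i)
# With 100 keys spread over 16 fixed buckets, each get() scans ~6 entries:
# linear in the bucket length, "constant" only if that length stays bounded.
print(m.get("key42"))
```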