Data Structure Performance Tradeoffs

Discussions focus on trade-offs in data structures like arrays and vectors, including memory efficiency, cache locality, insertion speed, iteration, random access, and allocation strategies.

Trend: ➡️ Stable (0.5x)
Category: Programming Languages
Comments: 4,220
Years Active: 20
Top Authors: 5
Topic ID: #101

Activity Over Time

2007: 8
2008: 5
2009: 26
2010: 61
2011: 97
2012: 102
2013: 135
2014: 230
2015: 232
2016: 219
2017: 255
2018: 203
2019: 347
2020: 284
2021: 383
2022: 396
2023: 436
2024: 383
2025: 394
2026: 24

Keywords

e.g, CPU, weblogs.asp, JS, doc.rust, APL, GB, struct.Vec, lang.org, IMO, arrays, array, memory, pointer, sparse, struct, vector, property, locality, bytes

Sample Comments

grashalm Aug 12, 2016

Bad title. Data structures always come with trade-offs. In this case: insertion speed and memory efficiency vs. iteration and random-access speed.

scosman Jan 19, 2015

A small savings in memory, but much faster. The link has benchmarks.

CyberDildonics Jan 28, 2024

What specifically is wrong with vector? There have been a lot of hash maps built on flat memory to minimize allocations and pointer hopping compared to the STL, but vector doesn't have those problems.

v_lisivka Oct 1, 2018

Please don't, because the memory access pattern will be very different.

bob1029 Feb 16, 2022

It is astonishing to me that the most basic of memory hacks are regarded as dark magic these days. Many things take zero extra effort to make orders of magnitude faster if you are simply aware of some basic concepts. For instance, if you know a collection of things is going to be bounded by a reasonable quantity (e.g. less than 1mm) and that you will frequently need to perform a full linear scan of these things, you should consider using an array rather than some b-tree or other pointer-chasing…

maratd Dec 16, 2011

Ugly special-casing requires code that would detect those special cases. This adds overhead in terms of processing cycles and code. Most arrays, the majority of the time, store a tiny amount of information: fewer than a dozen entries. Doing that sort of detection is massive overkill and would actually hurt performance. You want to speed things up for the real world, not artificial benchmarks. How many times in your code are you really going to need 100,000 integers in memory? Probably once in your en…

pjako Apr 20, 2017

All nicely ordered in linear memory for good data locality?

kzrdude Jun 27, 2024

It's a performance optimization for growing data structures. If you can use all the space that was actually allocated, you can (on average) call realloc less often as the container grows dynamically.

posterboy Nov 25, 2017

The time complexity of insert, etc. is superior, and a good reason for an abstraction on top of an array, I would think.

geysersam Dec 18, 2022

Would using a structure-of-arrays (instead of array-of-structures) layout make this less of a problem?