CPU Cache Locality

This cluster discusses CPU cache performance, memory locality, cache misses, and optimization techniques for improving access patterns and reducing latency in low-level programming.
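The classic illustration of these access patterns is traversal order over a 2D array. Below is a minimal C sketch (the function names are my own); both functions compute the same sum over a row-major buffer, but the row-major walk uses every element of each fetched 64-byte cache line, while the column-major walk strides past a whole row between accesses and wastes most of each line.

```c
#include <assert.h>
#include <stddef.h>

/* Row-major walk: the inner loop visits consecutive addresses, so each
   cache line fetched from RAM is used in full before it is evicted. */
long sum_row_major(const int *a, size_t rows, size_t cols) {
    long s = 0;
    for (size_t r = 0; r < rows; r++)
        for (size_t c = 0; c < cols; c++)
            s += a[r * cols + c];
    return s;
}

/* Column-major walk over the same row-major buffer: the inner loop
   strides by cols * sizeof(int) bytes, so once cols is large enough it
   touches a fresh cache line on almost every access. */
long sum_col_major(const int *a, size_t rows, size_t cols) {
    long s = 0;
    for (size_t c = 0; c < cols; c++)
        for (size_t r = 0; r < rows; r++)
            s += a[r * cols + c];
    return s;
}
```

On a matrix too large to fit in cache, the row-major version is typically several times faster on common hardware, even though both loops do the same arithmetic.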

📉 Falling 0.4x · Hardware
Comments: 5,373
Years Active: 20
Top Authors: 5
Topic ID: #7624

Activity Over Time

Year  Comments
2007  2
2008  26
2009  74
2010  90
2011  129
2012  179
2013  199
2014  283
2015  286
2016  377
2017  322
2018  365
2019  391
2020  432
2021  383
2022  490
2023  506
2024  409
2025  394
2026  36

Keywords

RAM, CS, II, CPU, GC, e.g., L1, YMMV, DRAM, intel.com, cache, memory, cpu, locality, access, caches, performance, data, caching, bytes

Sample Comments

wtetzner Jan 27, 2019 View on HN

Why do you think you'd suffer in CPU time and memory locality?

pmontra Dec 25, 2016 View on HN

It also helps with cache locality.

kllrnohj Nov 24, 2021 View on HN

Accessing RAM is very slow and CPU caches don't scale, though. So it's not as simple as you're presenting it.

zurn May 27, 2016 View on HN

Your needles (search terms) need to be quite long if you want to save on memory traffic: cache lines are 64-128 bytes and random access to standard DRAM is slower than sequential access. It can help if your data is in-cache though. (Or if you have an unusual system where memory is faster than what the CPU can keep up with.)
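zurn's point can be made concrete: skipping ahead by less than one cache line still pulls in every line, so short needles save no DRAM traffic. A hedged sketch, assuming a 64-byte line size (typical on x86; `lines_touched` is a hypothetical helper, not a real API):

```c
#include <assert.h>
#include <stddef.h>

#define CACHE_LINE 64  /* assumed line size; 128 bytes on some systems */

/* Count how many distinct cache lines a strided scan over `len` bytes
   touches. A skip smaller than CACHE_LINE still fetches every line,
   so the scan's memory traffic is unchanged. */
size_t lines_touched(size_t len, size_t stride) {
    size_t count = 0;
    size_t last_line = (size_t)-1;
    for (size_t i = 0; i < len; i += stride) {
        size_t line = i / CACHE_LINE;
        if (line != last_line) {
            count++;
            last_line = line;
        }
    }
    return count;
}
```

Only once the stride (e.g. a Boyer-Moore-style skip proportional to needle length) exceeds the line size does the number of lines fetched actually drop.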

Bognar Jul 31, 2015 View on HN

It's likely that the CPU overhead is lower than a cache miss.

renox Nov 14, 2021 View on HN

Interesting, but you must also take care of the CPU caches.

rerdavies Oct 27, 2023 View on HN

Better L1/L2 cache performance, perhaps?

dozzie Dec 8, 2016 View on HN

I'm not that much of a low-level programmer, but my guess would be loading a whole memory segment into the CPU cache at once.
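dozzie's guess is essentially right: the CPU fetches memory in whole cache lines (typically 64 bytes), so everything sharing a line with the data you asked for arrives for free. One common way to exploit this is separating hot fields from cold ones; a sketch under that assumption (struct layout and names are mine):

```c
#include <assert.h>
#include <stddef.h>

/* Array-of-structs: each `hot` value drags 60 bytes of rarely-used
   payload into cache with it, because lines are fetched whole. */
struct record {
    int hot;
    char cold[60];  /* cold data sharing the cache line */
};

long sum_hot_aos(const struct record *r, size_t n) {
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += r[i].hot;
    return s;
}

/* Struct-of-arrays: the hot values are packed 16 per 64-byte line, so
   the same loop fetches roughly 1/16th as many lines from memory. */
long sum_hot_soa(const int *hot, size_t n) {
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += hot[i];
    return s;
}
```

Both loops return the same result; the struct-of-arrays layout simply wastes far less of each fetched line on cold bytes.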

moss2 Nov 18, 2022 View on HN

Geez, you use this tool for quick answers; it's not a manual specification. Do you also hate the CPU cache because it doesn't always point to correct memory?

keepquestioning Aug 14, 2022 View on HN

Is any of this applicable to designing a CPU cache?