Huge Pages and Page Faults
The cluster centers on operating-system memory management, particularly the use of huge pages to mitigate frequent page faults, TLB flushes, and inefficiency in large allocations, along with discussion of Linux kernel behaviors such as lazy allocation and page zeroing.
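The lazy-allocation point is easy to see in code. Below is a minimal sketch, assuming Linux and a 4 KiB base page size: mmap() only reserves address space, and each page is faulted in (and zeroed) on first touch. The minor faults are visible with tools like /usr/bin/time -v or perf stat.

    /* Sketch of lazy allocation on Linux: mmap() reserves address space,
     * but physical pages are only faulted in and zeroed on first touch. */
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void) {
        size_t len = 1UL << 30; /* 1 GiB of address space */

        /* Anonymous mapping: no physical memory is allocated yet. */
        char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) { perror("mmap"); return 1; }

        /* The first write to each 4 KiB page triggers a minor page fault;
         * the kernel hands back a zeroed page then, not at mmap() time. */
        for (size_t off = 0; off < len; off += 4096)
            buf[off] = 1;

        munmap(buf, len);
        return 0;
    }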
Sample Comments
Doesn’t Linux support large page allocations for just this type of situation?
This is almost like paged memory with faulty cache write-back.
How is that possible without page fault exceptions or equivalent?
What if the OS could dedupe pages?
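Linux can, in fact, dedupe pages: KSM (kernel samepage merging) merges identical anonymous pages across mappings that opt in via madvise(). A minimal sketch, assuming a kernel built with CONFIG_KSM and the ksmd daemon enabled (echo 1 > /sys/kernel/mm/ksm/run):

    /* Sketch: opting a mapping into KSM deduplication on Linux.
     * Assumes CONFIG_KSM and a running ksmd. */
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void) {
        size_t len = 64UL << 20; /* 64 MiB */
        char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) { perror("mmap"); return 1; }

        /* Ask ksmd to scan this range and merge identical pages as
         * copy-on-write; a later write un-merges them transparently. */
        if (madvise(buf, len, MADV_MERGEABLE) != 0)
            perror("madvise(MADV_MERGEABLE)");

        munmap(buf, len);
        return 0;
    }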
No, huge pages wouldn't help. They would change what happens when the TLB gets flushed, but the flushes themselves would still be there.
That's unfortunate. I wrote a VMM that tries to back memory with hugepages (even the guests page tables). It's making a difference!
I'm surprised this article doesn't mention one typical solution to the problem of massive numbers of expensive page faults for large allocations: pages bigger than 4K, which are supported on most modern operating systems. On Linux they go by the name "huge pages": https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt
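For reference, the two usual routes to huge pages on Linux are explicit hugetlbfs mappings (MAP_HUGETLB) and transparent huge pages (madvise(MADV_HUGEPAGE)). A minimal sketch of both, assuming 2 MiB huge pages have been reserved for the explicit path (e.g. echo 64 > /proc/sys/vm/nr_hugepages):

    /* Sketch of the two usual routes to huge pages on Linux. */
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void) {
        size_t len = 2UL << 20; /* one 2 MiB huge page */

        /* Route 1: explicit hugetlbfs pages; fails if none are reserved. */
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (p == MAP_FAILED)
            perror("mmap(MAP_HUGETLB)");

        /* Route 2: transparent huge pages; a hint, not a guarantee.
         * The kernel may back the range with 2 MiB pages when it can. */
        char *q = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (q != MAP_FAILED)
            madvise(q, len, MADV_HUGEPAGE);

        return 0;
    }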
The same OS kernel that zeros out pages before handing them back to me?
The hardware and modern OSes support large 2MB pages and huge 1GB pages, which trim one or two levels from that tree, respectively. With huge pages a single node of that tree addresses 512GB of memory, so for most practical applications you're pretty much guaranteed to translate these addresses without TLB cache misses. There are some limitations: on Windows the process needs a special security privilege, and these pages are never swapped to a page file. But still, AFAIK when programmers …
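The Windows privilege this comment refers to is SeLockMemoryPrivilege ("Lock pages in memory"). A minimal sketch of a large-page allocation under the assumption that the privilege is held; VirtualAlloc fails without it, and the memory must be committed up front and is never paged out, matching the "never swapped" point above:

    /* Sketch of the Windows large-page path.
     * Assumes the account holds SeLockMemoryPrivilege. */
    #include <stdio.h>
    #include <windows.h>

    int main(void) {
        SIZE_T large = GetLargePageMinimum(); /* typically 2 MiB on x86-64 */
        if (large == 0) { puts("large pages unsupported"); return 1; }

        /* Large-page allocations must reserve and commit in one call. */
        void *p = VirtualAlloc(NULL, large,
                               MEM_RESERVE | MEM_COMMIT | MEM_LARGE_PAGES,
                               PAGE_READWRITE);
        if (p == NULL) {
            printf("VirtualAlloc failed: %lu\n", GetLastError());
            return 1;
        }

        VirtualFree(p, 0, MEM_RELEASE);
        return 0;
    }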
You don't need to couple memory pages with disk blocks.