Am I correct in assuming that memory obtained via mmap with MAP_HUGETLB|MAP_ANONYMOUS is actually 100% physically contiguous? At least at the granularity of the huge page size, 2 MB or 1 GB.
Otherwise I don't see how it could work / perform well, since the TLB would need more entries...
Best answer
Yes, it is. In fact, as you point out, if it were not, a single huge page would require multiple page table entries, which would defeat the entire purpose of having huge pages.
Here is an excerpt from Documentation/admin-guide/mm/hugetlbpage.rst:
The default for the allowed nodes--when the task has default memory policy--is all on-line nodes with memory. Allowed nodes with insufficient available, contiguous memory for a huge page will be silently skipped when allocating persistent huge pages. See the discussion below <mem_policy_and_hp_alloc> of the interaction of task memory policy, cpusets and per node attributes with the allocation and freeing of persistent huge pages.

The success or failure of huge page allocation depends on the amount of physically contiguous memory that is present in system at the time of the allocation attempt. If the kernel is unable to allocate huge pages from some nodes in a NUMA system, it will attempt to make up the difference by allocating extra pages on other nodes with sufficient available contiguous memory, if any.
See also: How do I allocate a DMA buffer backed by 1GB HugePages in a linux kernel module?