THP (Transparent Huge Pages)


Performance-critical computing applications dealing with large memory working sets often already run on top of libhugetlbfs and, in turn, hugetlbfs. Transparent Huge Pages (THP) let the kernel back process memory with huge pages automatically, without requiring applications to use hugetlbfs explicitly.


THP can be enabled system wide or restricted to certain tasks, or even to memory ranges inside a task's address space. Unless THP is completely disabled, a khugepaged daemon scans memory and collapses sequences of basic pages into huge pages.
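As an illustrative sketch (not part of the kernel documentation), the following Python snippet on Linux reads the current system-wide THP mode from sysfs and then requests huge pages for just one anonymous mapping via madvise(MADV_HUGEPAGE); mmap.madvise() needs Python 3.8 or newer:

 import mmap
 
 # System-wide mode: the bracketed word is the active policy,
 # e.g. "always [madvise] never".
 with open("/sys/kernel/mm/transparent_hugepage/enabled") as f:
     print("THP mode:", f.read().strip())
 
 # Restrict the hint to one memory range inside this task's address space.
 # madvise() may raise OSError if THP is disabled on this system.
 buf = mmap.mmap(-1, 16 * 1024 * 1024,
                 flags=mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS)
 buf.madvise(mmap.MADV_HUGEPAGE)    # hint: back this range with huge pages
 buf[:4096] = b"\0" * 4096          # touching the range lets faults use THP
 buf.close()

With the system-wide mode set to madvise, only ranges hinted this way are eligible for THP; in the other modes the hint is simply redundant or ignored.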

Need of application restart

The transparent_hugepage/enabled values and the tmpfs mount option only affect future behavior, so to make them effective you need to restart any application that could have been using huge pages. This also applies to the regions registered in khugepaged.
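A minimal sketch of changing the policy (root is required to write the sysfs file; the valid values are always, madvise and never):

 # Already-running applications keep their current behavior until restarted.
 with open("/sys/kernel/mm/transparent_hugepage/enabled", "w") as f:
     f.write("madvise")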

Monitoring usage

The following files and fields are useful for monitoring THP usage.

/proc/meminfo:

AnonHugePages: the number of anonymous transparent huge pages currently used by the system. To identify which applications are using anonymous transparent huge pages, it is necessary to read /proc/PID/smaps and count the AnonHugePages fields for each mapping. Note that reading the smaps file is expensive, and reading it frequently will incur overhead.

ShmemPmdMapped: the number of file transparent huge pages mapped to userspace. To identify which applications are mapping file transparent huge pages, it is necessary to read /proc/PID/smaps and count the FileHugeMapped fields for each mapping.
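The per-process accounting described above can be scripted; a minimal illustrative sketch in Python (the helper name is made up, and the same loop works for the file THP field by changing the prefix):

 import sys
 
 def smaps_total_kb(pid, field="AnonHugePages:"):
     """Sum one per-mapping field (in kB) from /proc/PID/smaps.
     Reading smaps is expensive, so avoid polling it frequently."""
     total = 0
     with open(f"/proc/{pid}/smaps") as f:
         for line in f:
             if line.startswith(field):
                 total += int(line.split()[1])   # values are reported in kB
     return total
 
 if __name__ == "__main__":
     pid = sys.argv[1] if len(sys.argv) > 1 else "self"
     print(f"{pid}: AnonHugePages total = {smaps_total_kb(pid)} kB")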

/proc/vmstat contains a number of counters that can be used to monitor how successfully the system is providing huge pages for use:
thp_fault_alloc is incremented every time a huge page is successfully allocated to handle a page fault.
thp_collapse_alloc is incremented by khugepaged when it has found a range of pages to collapse into one huge page and has successfully allocated a new huge page to store the data.
thp_fault_fallback is incremented if a page fault fails to allocate a huge page and instead falls back to using small pages.
thp_fault_fallback_charge is incremented if a page fault fails to charge a huge page and instead falls back to using small pages even though the allocation was successful.
thp_collapse_alloc_failed is incremented if khugepaged found a range of pages that should be collapsed into one huge page but failed the allocation.
thp_file_alloc is incremented every time a file huge page is successfully allocated.
thp_file_fallback is incremented if an attempt to allocate a file huge page fails and small pages are used instead.
thp_file_fallback_charge is incremented if a file huge page cannot be charged and small pages are used instead even though the allocation was successful.
thp_file_mapped is incremented every time a file huge page is mapped into user address space.
thp_split_page is incremented every time a huge page is split into base pages. This can happen for a variety of reasons, but a common reason is that a huge page is old and is being reclaimed. This action implies splitting all PMDs the page is mapped with.
thp_split_page_failed is incremented if the kernel fails to split a huge page. This can happen if the page was pinned by somebody.
thp_deferred_split_page is incremented when a huge page is put onto the split queue. This happens when a huge page is partially unmapped and splitting it would free up some memory. Pages on the split queue are going to be split under memory pressure.
thp_split_pmd is incremented every time a PMD is split into a table of PTEs. This can happen, for instance, when an application calls mprotect() or munmap() on part of a huge page. It doesn't split the huge page, only the page table entry.
thp_zero_page_alloc is incremented every time a huge zero page used for THP is successfully allocated. Note that it doesn't count every map of the huge zero page, only its allocation.
thp_zero_page_alloc_failed is incremented if the kernel fails to allocate a huge zero page and falls back to using small pages.
thp_swpout is incremented every time a huge page is swapped out in one piece without splitting.
thp_swpout_fallback is incremented if a huge page has to be split before swapout, usually because the kernel failed to allocate some contiguous swap space for the huge page.
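These counters can be sampled directly from /proc/vmstat; a small illustrative sketch follows (the helper name is made up, and some counters only exist on newer kernels, hence the .get() defaults):

 def thp_vmstat():
     """Return all thp_* counters from /proc/vmstat as a dict."""
     counters = {}
     with open("/proc/vmstat") as f:
         for line in f:
             name, value = line.split()
             if name.startswith("thp_"):
                 counters[name] = int(value)
     return counters
 
 c = thp_vmstat()
 faults = c.get("thp_fault_alloc", 0) + c.get("thp_fault_fallback", 0)
 if faults:
     fallback = c.get("thp_fault_fallback", 0) / faults
     print(f"THP page-fault fallback rate: {fallback:.1%}")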
As the system ages, allocating huge pages may be expensive as the system uses memory compaction to copy data around memory to free a huge page for use. There are some counters in /proc/vmstat to help monitor this overhead.
compact_stall is incremented every time a process stalls to run memory compaction so that a huge page is free for use.
compact_success is incremented if the system compacted memory and freed a huge page for use.
compact_fail is incremented if the system tries to compact memory but fails.
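The same kind of sampling works for the compaction counters; for example, a rough compaction success ratio (counter names as listed above):

 with open("/proc/vmstat") as f:
     vm = dict(line.split() for line in f)
 
 stalls = int(vm.get("compact_stall", 0))
 success = int(vm.get("compact_success", 0))
 fail = int(vm.get("compact_fail", 0))
 print(f"compact_stall={stalls} compact_success={success} compact_fail={fail}")
 if success + fail:
     print(f"compaction success ratio: {success / (success + fail):.1%}")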
