
Disable transparent hugepages

178 points, by wheresvic3, over 7 years ago

12 comments

markjdb, over 7 years ago
Please be aware that the article describes a problem with a specific implementation of THP. Other operating systems implement it differently and don't suffer from the same caveats (though any implementation will of course have its own disadvantages, since THP support requires making various tradeoffs and policy decisions). FreeBSD's implementation (based on [1]) is more conservative: it opportunistically reserves physically contiguous ranges of memory in a way that allows THP promotion if the application (or kernel) actually makes use of all the pages backed by the large mapping. It's tied into the page allocator in a way that avoids the "leaks" described in the article, and doesn't rely on expensive scans. Moreover, the reservation system enables other optimizations in the memory management subsystem.

[1] https://www.cs.rice.edu/~druschel/publications/superpages.pdf
lorenzhs, over 7 years ago
I've had a really bad run-in with transparent hugepage defragmentation. In a workload consisting of many small-ish reductions, my programme spent over 80% of its total running time in pageblock_pfn_to_page (this was on a 4.4 kernel, https://github.com/torvalds/linux/blob/v4.4/mm/compaction.c#L74-L115) and 97% of the total time in hugepage compaction kernel code. Disabling hugepage defrag with echo never > /sys/kernel/mm/transparent_hugepage/defrag led to an instant 30x performance improvement.

There's been some work to improve performance (e.g. https://github.com/torvalds/linux/commit/7cf91a98e607c2f935dbcc177d70011e95b8faff in 4.6), but I haven't tested whether it fixes my workload.
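For readers who want to check whether compaction is the culprit on their own machine, a minimal diagnostic sketch (standard sysfs/procfs paths on modern Linux; exact counter names vary by kernel version):

    # Current THP and defrag policy (the bracketed value is active)
    cat /sys/kernel/mm/transparent_hugepage/enabled
    cat /sys/kernel/mm/transparent_hugepage/defrag
    # Compaction and THP activity counters
    grep -E 'compact_|thp_' /proc/vmstat
    # CPU time consumed by the background collapse daemon
    ps -o pid,etime,time,comm -C khugepaged
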
xchaotic, over 7 years ago
So glad this is on the front page of HN. A good 30% of the perf problems our clients hit are low-level misconfigurations such as this. For databases: explicit huge pages, good; THP, bad.
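A minimal sketch of reserving explicit huge pages for a database (assuming 2 MiB pages on x86-64; the setting does not persist across reboots):

    # Reserve 512 x 2 MiB = 1 GiB of explicit huge pages
    sudo sysctl vm.nr_hugepages=512
    grep HugePages /proc/meminfo

The database then opts in through its own configuration (for example, huge_pages = on in postgresql.conf).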
reza_n, over 7 years ago
Not to mention that there was a race condition in the implementation which would cause random memory corruption under high memory load. Varnish Cache would consistently hit this. Recently fixed:

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/7.2_release_notes/index#kernel
mnw21cam, over 7 years ago
Agreed. Found this to be a problem and fixed it by switching it off three years ago. It seems to be a bigger problem on large systems than on small ones. We had a 64-core server with 384 GB of RAM, and running too many JVMs made khugepaged go into overdrive and basically cripple the server entirely: unresponsive, getting 1% of the work done, etc.
fps_doug, over 7 years ago
I stumbled upon this feature when some Windows VMs running 3D-accelerated programs exhibited freezes of several seconds every now and then. We quickly discovered that khugepaged would hog the CPU completely during these hangs. Disabling THP solved the performance issues.
mwolff, over 7 years ago
Bad advice... The following article is much better at actually measuring the impact:

https://alexandrnikitin.github.io/blog/transparent-hugepages-measuring-the-performance-impact/

The conclusion in particular is noteworthy:

> Do not blindly follow any recommendation on the Internet, please! Measure, measure and measure again!
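In that spirit, one way to measure both sides of the tradeoff on a real workload (a sketch; ./your_workload is a placeholder, and the generic dTLB perf events depend on CPU and perf support):

    # TLB pressure and faults; run once with THP enabled, once disabled
    perf stat -e dTLB-load-misses,dTLB-store-misses,page-faults ./your_workload
    # How much anonymous memory is currently backed by huge pages
    grep AnonHugePages /proc/meminfo
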
lunixbochs, over 7 years ago
Transparent hugepages cause a massive slowdown on one of my systems. It has 64 GB of RAM, but the kernel allocator seems to fragment under my workload after a couple of days, leaving very few free regions larger than 2 MB (as per /proc/buddyinfo) even with more than 30 GB of free RAM. This slowed down my KVM boots dramatically (10 seconds -> minutes), and perf top showed the allocator spending a lot of cycles repeatedly trying and failing to allocate huge pages.

(I don't want to preallocate hugepages because KVM is only a small part of my workload.)
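The fragmentation check mentioned above reads directly from /proc/buddyinfo: each column counts free blocks of order 0, 1, 2, ... (block size 4 KiB << order), and a 2 MiB huge page on x86-64 needs an order-9 block:

    # Near-zero counts in the order-9+ columns mean almost no
    # physically contiguous 2 MiB regions are left
    cat /proc/buddyinfo
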
phkahler, over 7 years ago
Shouldn't huge pages be used automatically if you malloc() large amounts of memory at once? Wouldn't that cover some of the applications that benefit from them?
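Partly: glibc serves large malloc() requests with mmap(), and in THP's "always" mode such anonymous mappings are candidates for automatic huge page backing, while in "madvise" mode the application must opt in with madvise(MADV_HUGEPAGE). A sketch for checking whether a process actually received huge pages (<pid> is a placeholder):

    # Per-mapping view; any non-zero AnonHugePages entry is THP-backed
    awk '/AnonHugePages/ && $2 > 0' /proc/<pid>/smaps
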
brazzledazzle, over 7 years ago
Brendan Gregg's presentation at re:Invent today reflected this advice. Netflix saw both good and bad perf, so they switched back to madvise.
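For reference, the madvise mode mentioned here can be set at runtime (a sketch; takes effect immediately but does not survive a reboot):

    echo madvise | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
    echo madvise | sudo tee /sys/kernel/mm/transparent_hugepage/defrag
    # Persistent alternative: boot with transparent_hugepage=madvise
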
vectorEQ, over 7 years ago
Good article, though as other posters suggest, only use it if you absolutely must, and measure/test the results for any issues!
hossbeast, over 7 years ago
What's the recommendation for a desktop used for gaming / browsing / compiling, with 32 GB of RAM?