
Fewer mallocs in curl

392 points by dosshell about 8 years ago

15 comments

nnethercote about 8 years ago
There is a little-known Valgrind tool called "DHAT" (short for "Dynamic Heap Analysis Tool") that's designed to help find exactly these sorts of excessive allocations.

Here's an old blog post describing it, by DHAT's author: https://blog.mozilla.org/jseward/2010/12/05/fun-n-games-with-dhat/

Here's another blog post in which I describe how I used it to speed up the Rust compiler significantly: https://blog.mozilla.org/nnethercote/2016/10/14/how-to-speed-up-the-rust-compiler/

And here is the user manual: http://valgrind.org/docs/manual/dh-manual.html
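In Valgrind releases of that era DHAT shipped as an "experimental" tool, so a run looked roughly like this (the program name is a placeholder):

    valgrind --tool=exp-dhat ./myprog

The report ranks allocation points by bytes allocated and by how the blocks were actually used, which is how you spot the many-tiny-short-lived-allocations pattern discussed here.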
tom_mellior about 8 years ago
For the benefit of others who found the description in the blog post unclear and can't or don't want to dig through the code changes themselves: "fixing the hash code and the linked list code to not use mallocs" is a bit misleading. Curl now uses the idiom where the linked list data (prev/next pointers) are inlined in the same struct that also holds the payload. So it's one malloc instead of two per dynamically allocated list element. This explains the "down to 80 allocations from the 115" part.

The larger gain is explained better and comes simply from stack allocation of some structures (which live in a simple array, not a linked list or hash table).
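In other words, a minimal sketch of the idiom (hypothetical names, not curl's actual structs):

    /* Before: two mallocs per element, one for the list node
     * and one for the payload it points at. */
    struct node {
        struct node *prev, *next;
        void *payload;              /* separately malloc'ed */
    };

    /* After: prev/next are inlined in the payload struct itself,
     * so a single malloc covers both node and payload. */
    struct element {
        struct element *prev, *next;
        int data;                   /* payload lives here directly */
    };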
makerbraker about 8 years ago
I think this is fantastic engineering work towards performance, without falling back on the "RAM is cheap" line and doing nothing.

It's not every day that you see an example of someone examining and improving old code that will result in a measurable benefit to direct and indirect users.
vbezhenar about 8 years ago
The underlying problem is that C doesn't have comprehensive standard collections, so many developers reinvent the wheel over and over again, and usually that wheel is far from the best in the world. If curl were written in C++, those optimizations would be applied automatically by using STL collections.
iamalurker about 8 years ago
My problem with excessive allocations is usually what happens in interpreted languages. People think: hey, it's already slow-ass interpreted code, so let's not care about allocation at all.

An example I see all the time: tons of Python libraries that in the end do I/O against a TCP socket. Sometimes the representation between what the user passes to the library and what goes out to the socket can be retained as an array of buffers which are to be sent to the socket.

Instead of iterating over the array and sending each block (if big enough) on its own to the socket, the library author concats them into one buffer and then sends it over the socket.

When dealing with big data, this adds lots of fragmentation and overhead (measurable), yet some library authors don't care...

Even the basic httplib and requests have this issue when sending a large file via POST (they concat it to the header, instead of sending the header and then the large file).
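The same idea in C, the thread's language: POSIX writev() sends an array of buffers in one call without concatenating them first. A minimal sketch (send_request is a made-up helper; a robust version must loop on partial writes):

    #include <string.h>
    #include <sys/types.h>
    #include <sys/uio.h>    /* writev */

    /* Send a header plus a large body without first concatenating
     * them into a freshly allocated buffer. */
    static ssize_t send_request(int fd, const char *header,
                                const char *body, size_t body_len)
    {
        struct iovec iov[2];
        iov[0].iov_base = (void *)header;
        iov[0].iov_len  = strlen(header);
        iov[1].iov_base = (void *)body;
        iov[1].iov_len  = body_len;

        /* One syscall; the kernel gathers both buffers: no malloc, no memcpy. */
        return writev(fd, iov, 2);
    }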
snksnk about 8 years ago
Optimization backed by comparative statistics. These reads are so satisfying. Thank you for submitting.
faragon about 8 years ago
Explicit dynamic memory handling in low-level languages hurts in a similar way garbage collectors do in high-level languages: hidden and often unpredictable execution costs (malloc/realloc/free internally usually implement O(n log n) algorithms, or worse). So the point for performance, whether you work in a low-level or a high-level language, is to use preallocated data structures when possible. That way you get low fragmentation and fast execution, because you avoid calling the allocator/deallocator in the explicit-memory case, and lower garbage-collector pressure for the same reasons in the garbage-collected case.
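A toy illustration of the preallocation advice (hypothetical names; a real pool would add a free list and a growth policy):

    #include <stddef.h>

    struct conn { int fd; /* ... */ };

    /* All storage is reserved once, up front; the hot path never
     * touches the allocator. */
    struct conn_pool {
        struct conn slots[64];   /* capacity fixed at compile time */
        size_t used;
    };

    static struct conn *pool_get(struct conn_pool *p)
    {
        /* O(1), no malloc, no fragmentation */
        return p->used < 64 ? &p->slots[p->used++] : NULL;
    }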
rumcajz about 8 years ago
My rule of thumb is to look at the application's design and only ever use malloc where there is a 1:N (or N:M) relationship between entities. Everything that's 1:1 should be allocated in a single step.
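Applied to C, the 1:1 rule looks something like this (made-up example, not from curl):

    #include <stdlib.h>

    /* 1:1: each parser owns exactly one fixed-size scratch buffer,
     * so the buffer gets no allocation of its own. */
    struct parser {
        size_t pos;
        char scratch[256];   /* instead of: char *scratch = malloc(256); */
    };

    struct parser *parser_new(void)
    {
        return calloc(1, sizeof(struct parser));   /* one allocation, not two */
    }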
21 about 8 years ago
> Doing very small (less than, say, 32 bytes) allocations is also wasteful just due to the very large amount of data, in proportion, that will be used just to keep track of that tiny little memory area (within the malloc system). Not to mention fragmentation of the heap.

That's not necessarily true. Modern allocators tend to use a bunch of fixed-size buckets.

But given that curl runs on lots of platforms, it makes sense to just fix the code.
vertex-four about 8 years ago
Note that this pattern[0] is essentially "copy-on-write", which can be encapsulated safely as such in a reasonably simple type (in a language with generics) and used elsewhere. I use a similar mechanism pervasively in some low-level web server code to use references into the query string, body and JSON objects directly when possible, and allocated strings when not.

[0] https://github.com/curl/curl/commit/5f1163517e1597339d
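Without generics, the same maybe-owned type can still be sketched in C (hypothetical code, not what the curl commit does):

    #include <stdlib.h>
    #include <string.h>

    /* Borrow the caller's bytes when possible; copy and take
     * ownership only when a modification forces it. */
    struct maybe_str {
        const char *ptr;
        size_t len;
        int owned;           /* nonzero if ptr must be freed */
    };

    static int make_owned(struct maybe_str *m)   /* the copy-on-write step */
    {
        char *copy;
        if (m->owned)
            return 0;
        copy = malloc(m->len);
        if (!copy)
            return -1;
        memcpy(copy, m->ptr, m->len);
        m->ptr = copy;
        m->owned = 1;
        return 0;
    }

    static void maybe_str_free(struct maybe_str *m)
    {
        if (m->owned)
            free((void *)m->ptr);
    }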
0xcde4c3db about 8 years ago
> The point is rather that curl now uses less CPU per byte transferred, which leaves more CPU over to the rest of the system to perform whatever it needs to do. Or to save battery if the device is a portable one.

Does anyone have a general sense of how these kinds of efficiencies translate to real-world battery life? I understand that the mechanisms (downclocking/sleeping the CPU) are there; I'm just curious as to how much it actually moves the needle in a real system.
hota_mazi about 8 years ago
> There have been 213 commits in the curl git repo from 7.53.1 till today. There's a chance one or more other commits than just the pure alloc changes have made a performance impact, even if I can't think of any.

"I can't think of any" is not a very scientific way to measure optimizations. Actually, this simple fact casts doubt on whether it was this malloc optimization that led to the speedup, or any of the 200+ commits the OP is working on top of.

Why not eliminate that doubt by applying the malloc optimizations to the previous official release? I'm a bit skeptical about the speedup myself, since I would expect curl to be primarily I/O bound and not CPU bound (much less malloc bound, given how little memory it uses).
ape4 about 8 years ago
> This time the generic linked list functions got converted to become malloc-less (the way linked list functions should behave, really).

I don't see how a linked list can avoid using malloc().
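It can, if the list functions themselves never allocate: the caller embeds the node in its own struct, so the storage arrives together with the payload, whether on the stack, in a static, or in a malloc the caller needed anyway. A sketch with made-up names (not curl's actual API):

    #include <stddef.h>

    struct list_node { struct list_node *prev, *next; };
    struct list { struct list_node *head, *tail; };

    /* Insertion just links pointers; nothing is allocated. */
    static void list_append(struct list *l, struct list_node *n)
    {
        n->prev = l->tail;
        n->next = NULL;
        if (l->tail)
            l->tail->next = n;
        else
            l->head = n;
        l->tail = n;
    }

    struct job {
        int id;
        struct list_node link;   /* node embedded in the payload */
    };

    int main(void)
    {
        struct list l = { NULL, NULL };
        struct job j = { 42, { NULL, NULL } };   /* stack-allocated */
        list_append(&l, &j.link);                /* zero mallocs */
        return 0;
    }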
amenghra about 8 years ago
You would think curl's perf is bound by network latency/bandwidth and that intrusive lists wouldn't make a significant difference.
__s about 8 years ago
> The point here is of course not that it easily can transfer HTTP over 20GB/sec using a single core on my machine

2GB