Let the callers of your API control allocations

130 points by robfig over 5 years ago

19 comments

holy_city over 5 years ago
I read the title as "don't force callers to allocate" not "don't do allocations behind the call", which is what the post is about. Totally agree on that point, but with nuance. Any call that allocates (or blocks) should be obvious in name and API.

For example, a lot of C libs will have an API along these lines:

    typedef void* my_api_t;

    int create_api(my_api_t* api);  // almost certainly allocates

    typedef struct {
        // yada yada yada
    } api_result_t;

    int get_result(my_api_t api, api_result_t* result);  // does this allocate? I have no idea

I think every function that might block/allocate should be stupid obvious in the documentation and function naming convention, especially if you're distributing closed-source libs with opaque types.
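As a rough illustration of that naming point in Go (hypothetical names, not taken from the article or any real library), splitting the allocating and non-allocating paths lets the signature itself tell the caller what happens:

    type Result struct{ Data []byte }

    // NewResult allocates a fresh buffer; the constructor-style name makes that obvious.
    func NewResult(n int) *Result {
        return &Result{Data: make([]byte, n)}
    }

    // FillResult writes into caller-owned memory and performs no allocation of its own.
    func FillResult(dst *Result, src []byte) int {
        return copy(dst.Data, src)
    }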
kragen over 5 years ago
I, too, read the title as meaning the opposite of what he's actually saying. He's saying that if you force your callers to do the allocation like Golang's io.Reader.Read does, rather than doing the allocation yourself and forcing the new allocation on them (the way Python's fileobj.read call does), then they can reuse and pool allocations, improving efficiency substantially under some circumstances.

There are another couple of benefits, though:

1. A call which allocates can necessarily fail, although in garbage-collected languages like Golang, memory allocation failures are normally handled implicitly with something like panic(). (That's because the main benefit of garbage collection is not that you don't have to debug segfaults, but that you can treat arbitrary-sized data structures as if they were atomic integers — with immutability, even when they share structure. This allows you to program at a higher level. You lose this benefit if you have to know which calls allocate and which do not.)

1½. By the same token, dynamic allocation almost never takes a deterministic or bounded amount of runtime, though see below about per-frame heaps.

2. In many cases, you can do the allocation on the stack rather than the heap, with potentially substantial implications for garbage-collection efficiency. (You could see the MLKit's region allocation as an automatic version of this pattern.)

3. In environments like C++, it may make sense to use a different custom allocator, like the per-frame heap commonly used in game engines, or the pool allocator provided by the APR, or an allocator that allocates from a memory-mapped object database like ObjectStore. The alternative is something like the rarely-used allocator template parameter that makes all our STL compile errors so terrible. Even in ordinary C programs, it might make sense to use a static allocation for these things some of the time.

On the other hand, putting the allocation on the caller ossifies the amount of space needed as part of your API, and prevents that space from being dynamically determined at runtime by the algorithm. So there are times when letting the caller allocate all the space is undesirable or even impossible. And, as Cheney points out, it does make your API more of a hassle to invoke — but that's easily fixed with a wrapper function that does the allocation.

A third alternative, used for example by Numpy and realloc(), is the one mentioned in https://news.ycombinator.com/item?id=20888920 (though it erroneously implies that C has optional parameters) — have an optional buffer argument telling where to store the result, but if it's not passed in, dynamically allocate a buffer and return it. Taking advantage of this parameter to avoid extra allocations and cache misses commonly doubles the speed of Numpy code, in my experience, but it really hurts readability. As a general API design technique, this seems like the most constraining to future implementations, and I think it makes more sense to separate the underlying stable non-allocating API from the allocating helper wrapper, as described above.
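A minimal Go sketch of that last suggestion (illustrative names, not code from the article): keep the stable API non-allocating and layer an allocating convenience wrapper on top of it.

    import "io"

    // ReadFrame fills the caller-supplied buf and never allocates.
    func ReadFrame(r io.Reader, buf []byte) (int, error) {
        return io.ReadFull(r, buf)
    }

    // ReadFrameAlloc is the convenience wrapper: it allocates a new buffer on every
    // call, which is easier to use but takes pooling and reuse away from the caller.
    func ReadFrameAlloc(r io.Reader, n int) ([]byte, error) {
        buf := make([]byte, n)
        m, err := ReadFrame(r, buf)
        return buf[:m], err
    }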
dang over 5 years ago
Since multiple commenters have found the title misleading, we've taken a crack at replacing it with something that expresses the article's point. If anyone suggests a better title—i.e. more accurate and neutral, and preferably using a representative phrase from the article (I couldn't find one in this case), we can change it again.
stefan_ over 5 years ago
The ultimate of this is libraries like BearSSL (https://bearssl.org/) that do no dynamic memory allocation at all. This is particularly nice for moderately big embedded systems where you don't want to deal with the impossible-to-contain complexity that comes from having a heap.
apaprocki over 5 years ago
Ultimately this is why Halpern, Lakos, et al. pushed for polymorphic allocators to be added to C++. We make heavy use of them because it gives the caller full control over the allocator used in nearly all situations, when the code is written properly to take and propagate allocators according to the rules.

(A little example posted by Bartlomiej Filipek showing a vector neatly using a stack buffer)

    #include <algorithm>
    #include <iostream>
    #include <memory_resource>
    #include <vector>

    int main() {
        char buffer[64] = {};
        std::fill_n(std::begin(buffer), std::size(buffer) - 1, '_');
        std::cout << buffer << '\n';

        std::pmr::monotonic_buffer_resource pool{
            std::data(buffer), std::size(buffer)};

        std::pmr::vector<char> vec{&pool};
        for (char ch = 'a'; ch <= 'z'; ++ch)
            vec.push_back(ch);

        std::cout << buffer << '\n';
    }
makecheck over 5 years ago
There is a certain elegance to time-tested C APIs that show exactly how data structures are managed. One of the things you can lose in OO implementations is the ability to efficiently store data, or even understand where your memory is going. (And I feel that the varying semantics between languages don’t help, e.g. people that were used to doing "new X()" for everything in Java move to C++ and keep trying to "new" things that don’t need to be sent to "new".)

It is important to have the option to deviate from “objects” in projects that can require significant memory. I would not allow objects to wrap absolutely everything, being passed all around the system, to the point that it’s impossible to control the biggest users of memory.

As a simple example, imagine an object that strictly encapsulates storage of “64-bit int plus bool”. At a small scale, the wasted space is an implementation detail, but for a few *million* objects you have a pretty ridiculous memory footprint. A fancy encapsulated object probably gives no option to avoid this, such as “put your array of million 64-bit values over here, and your buckets for bits over there”. When this happens enough times, you have holes all over the place and it’s almost too late (where do you start to fix it?). That is also the type of design that can consume shockingly more memory when moving from 32-bit to 64-bit, e.g. some class that had moderate holes with 32-bit pointers has bigger holes with 64-bit pointers.
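To make the footprint concrete, here is a hedged Go illustration (sizes assume a typical 64-bit platform); the encapsulated version pays padding per element, while the split layout does not:

    // Array of structs: the bool pads each element out to 16 bytes,
    // so one million entries cost roughly 16 MB.
    type entry struct {
        value int64
        ok    bool
    }
    var packed []entry

    // Struct of arrays: one million values cost about 8 MB, and the flags fit in
    // a ~125 KB bitmap ("your 64-bit values over here, your bits over there").
    var values []int64
    var okBits []uint64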
wadkar over 5 years ago
Isn't this dependency injection, though? As in, the caller must manage the buf() dependency on its own - promoting performance and other qualities like code reuse. For example, the caller might have a library that manages creating these buffers for large bytes or maybe over network or what not. Then such dependency injection will promote code reuse of the library as well.
yen223 over 5 years ago
From experience, I would generalise this advice to, "let the callers of the API control side effects".

The justification is largely the same, i.e. that the caller generally has more context and thus better ability to decide when and how side effects should happen.
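A small Go sketch of that generalisation (hypothetical helper; doRequest in the usage note is made up): instead of sleeping internally, the function takes the side effect as a parameter, so a server can pass time.Sleep and a test can pass a no-op.

    import "time"

    // retry leaves the waiting, a side effect, to a caller-supplied sleep function.
    func retry(attempts int, wait time.Duration, sleep func(time.Duration), op func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            sleep(wait)
        }
        return err
    }

    // Usage: retry(3, time.Second, time.Sleep, doRequest)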
radiospiel over 5 years ago
I don't totally get this... isn't it in the nature of reading via read(2) that it is possible it reads some data and then errors? If this is indeed reflected in the io.Reader.Read interface, then - regardless of who is allocating the buffer - the caller must evaluate both the result and the error.

It would indeed be possible in that scenario to return the buffer, and to return the error in a subsequent call, but this seems to change its behaviour into one that (to me) is surprisingly different.

The same is true for writing; write(2) can write some, but not all, of the passed-in data, and then err. What kind of API would one want to see here?
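On the write side, one common answer (a generic sketch, not something the article prescribes) is to push the short-write retry loop behind the API; Go's io.Writer contract effectively does this by requiring an error whenever fewer than len(p) bytes were written.

    import "io"

    // writeAll keeps writing until all of p is consumed or an error occurs.
    // A conforming io.Writer already guarantees this in a single call; the loop is
    // only a defence against lower-level writers that return short counts silently.
    func writeAll(w io.Writer, p []byte) error {
        for len(p) > 0 {
            n, err := w.Write(p)
            if err != nil {
                return err
            }
            if n == 0 {
                return io.ErrShortWrite // no progress; avoid spinning forever
            }
            p = p[n:]
        }
        return nil
    }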
dreamcompiler over 5 years ago
In Common Lisp the way I usually handle this is with an optional arg. If the caller passes in a buffer[0] that's where I put the result; otherwise I allocate one and return it. The more general approach is to accept a continuation argument. One can even do both at the same time by dispatching a generic function on the type of the arg (but in that case the arg cannot be optional unless you write a wrapper function around the generic function.)

[0] Passing buffers around is much less necessary in Lisp than in C-like languages. This is about the times when it still makes sense to do so.
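The same optional-destination idea maps onto Go roughly like this (a hedged sketch with a made-up function name; a nil slice plays the role of the omitted optional argument):

    import "encoding/hex"

    // HexEncode writes the hex encoding of src into dst when one is supplied
    // (it must be at least hex.EncodedLen(len(src)) bytes); when dst is nil,
    // the function allocates and returns a fresh buffer instead.
    func HexEncode(dst, src []byte) []byte {
        if dst == nil {
            dst = make([]byte, hex.EncodedLen(len(src)))
        }
        hex.Encode(dst, src)
        return dst
    }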
ses1984 over 5 years ago
I'm not that great in golang but I don't understand why you "must" not consult the error first in this situation:

> First they must record the number of bytes read into the buffer, reslice the buffer, process that data, and only then, consult the error.
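The short answer is the io.Reader contract: a single Read call may return both data and an error (including io.EOF), so checking the error first can silently drop the final bytes. The canonical loop looks roughly like this, where r is any io.Reader and process stands in for whatever the caller does with the data:

    buf := make([]byte, 4096)
    for {
        n, err := r.Read(buf)
        if n > 0 {
            process(buf[:n]) // the first n bytes are valid even when err != nil
        }
        if err == io.EOF {
            break // end of stream, not a failure
        }
        if err != nil {
            return err
        }
    }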
trevor-e over 5 years ago
> Because of the high cost of retrofitting a change to an API's signature to address performance concerns, it's worthwhile considering the performance implications of your API's design on its caller.

ObjC is very verbose partly because it went with the "signature encoding" approach, which has its own tradeoffs, but I do appreciate the API consistency.

https://developer.apple.com/library/archive/documentation/CoreFoundation/Conceptual/CFMemoryMgmt/Concepts/Ownership.html#//apple_ref/doc/uid/20001148-CJBEJBHH
cozzyd over 5 years ago
This is why for C/C++, I always like to have something like:

    foo * bar(foo * useme = 0);

where one can optionally pass memory to be used, but if nothing is passed, it will be allocated by the function (placement new can be helpful here!).
gouggoug over 5 years ago
I've always been puzzled by this awkward API and couldn't understand why it was this way. It had the side-effect of making me unsure as to whether or not I should be using it and why, which is never a nice feeling when you're trying to write the best code possible. Now I understand! Code anxiety just decreased by a lot.
doboyy over 5 years ago
Don't do allocations in the call until the API becomes annoying and unpleasant to use. Also weigh this against the domain and performance implications of the particular API.

By nature of being a buffer, it can probably be reused in subsequent calls. You don't need to read everything all in one go.
justicezyx over 5 years ago
I sense that modern CS education has lost a lot of its roots in computing fundamentals...

Findings of this type were pretty much the daily routine of engineers a decade or longer ago...
newtonapple over 5 years ago
For low-level APIs, I would say anything that could be configured on a per-call basis should be parameterized: memory allocations, database & http connections, etc.
lostmsu over 5 years ago
FYI, in recent versions of .NET you could do this by accepting Span<T> instances instead of the usual array + index + length.
aledalgrande over 5 years ago
This is also valid for any library, gem, module or SDK.