In case someone wants the same in C++, the Guidelines Support Library comes with the class "final_action" and the function "finally()".<p>Check the implementation here: <a href="https://github.com/microsoft/GSL/blob/master/include/gsl/gsl_util#L76" rel="nofollow">https://github.com/microsoft/GSL/blob/master/include/gsl/gsl...</a>.<p>Example from <a href="https://docs.microsoft.com/en-us/cpp/code-quality/c26448?view=vs-2019" rel="nofollow">https://docs.microsoft.com/en-us/cpp/code-quality/c26448?vie...</a>:<p><pre><code> void poll(connection_info info)
 {
     connection c = {};
     if (!c.open(info))
         return;
     auto end = gsl::finally([&c] { c.close(); });
     while (c.wait())
     {
         connection::header h{};
         connection::signature s{};
         if (!c.read_header(h))
             return;
         if (!c.read_signature(s))
             return;
         // ...
     }
 }
</code></pre>
I love this pattern; it's a very nice way to get a kind of RAII, but with more control and flexibility.
So much complexity. Just standardize __attribute__((cleanup)), which is already used by a load of software, is already available in GCC and Clang, and does everything anyone wants.
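For anyone unfamiliar with it, here is a minimal sketch of how the attribute is typically used; the free_ptr helper name is made up, but the attribute syntax is the real GCC/Clang extension:<p><pre><code> #include <stdlib.h>

 static void free_ptr(void **p) { free(*p); }

 void demo(void)
 {
     // free_ptr is called with &buf on every exit path from this block,
     // so the allocation is released no matter how we leave.
     __attribute__((cleanup(free_ptr))) void *buf = malloc(25);
     if (buf == NULL)
         return;
     // ... use buf ...
 }
 </code></pre>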
Presumably the __cleanup__ attribute in GCC (and, I believe, Clang) already provides this functionality in most cases?<p>And any platform where Clang and GCC aren't supported is a platform where this style of code shouldn't be used anyway, no?
If I'm not mistaken, the first example is equivalent to the following purely structured code:<p><pre><code> void * const p = malloc(25);
 if (p != NULL) {
     void * const q = malloc(25);
     if (q != NULL) {
         if (mtx_lock(&mut) != thrd_error) {
             mtx_unlock(&mut);
         }
         free(q);
     }
     free(p);
 }
</code></pre>
At least to me, this flow is much easier to understand.
I see this idea posted with some frequency, and the responses are almost always "clang and gcc have compiler intrinsics for this". I'm not a regular C programmer, so this raises the question: why does nobody seem to know about or use them?
I usually hate commenting on this, but please, please, please don't touch letter-spacing if you want people to be able to read your text! Doubly so when these are literally <i>headers</i> set in a fairly ugly, squat font…
Is this a serious proposal for a new C language feature? Or is this just an experiment from someone's masters thesis or something? The paper is titled "Proposal for C2x", but this can't possibly be seriously considered. I have so many questions.<p>In section 1.1, the linearization it gives with goto statements is barely longer than the defer example. They claim defer is better just because of the proximity of the cleanup code? Why not just move the "resources acquired" code to a separate function? You wouldn't even need goto in that case, you could just nest if statements to do the cleanup.<p>The spec claims defer allocates memory. Why? As far as I know __attribute__((cleanup(fn))) doesn't allocate memory. This defer may exhaust memory, and if so, it will immediately terminate execution of the enclosing guard block with a panic() and DEFER_ENOMEM. So like an exception?<p>This says exit() or panic() will clean up all guarded blocks across all function calls of the same thread. So basically stack unwinding? Apparently you can recover somewhere with a call to recover()? This is just exceptions by another name. This stack unwinding can't possibly interoperate with existing code that expects error return values.<p>This claims it's robust because any deferred statement is guaranteed to be executed eventually, and it describes in great detail how it runs defer statements on signals. What if I write an infinite loop, or get a SIGKILL, or yank the power cord? Obviously deferred statements won't be executed.<p>This says defer is implemented with longjmp. Isn't setjmp/longjmp way too slow for exception handling? C++ compilers haven't done exceptions that way for decades. What happens if I longjmp or goto past a defer statement? This says it just doesn't invoke the defer mechanism and may result in memory leaks or other damage. Does that mean it's undefined behaviour? C++ won't compile a goto past constructors for good reason.<p>All POSIX error and signal codes have an equivalent prefixed with DEFER_, e.g. DEFER_ENOMEM, DEFER_HUP. This is just in case the system doesn't already have ENOMEM? Doesn't the standard already require that ENOMEM exist? If not, why not just make this feature require that ENOMEM exist? Why depend so much on errno for new core language features when it's basically an ugly artifact of ancient C library functions?<p>> If C will be extended with lamdas (hopefully in a nearer future)<p>I wouldn't hold my breath.
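For reference, the goto linearization being discussed looks roughly like the following; this is a generic sketch built around the same malloc/mtx_lock example used elsewhere in the thread, not the paper's exact listing:<p><pre><code> #include <stdlib.h>
 #include <threads.h>

 extern mtx_t mut;

 void example(void)
 {
     void *p = malloc(25);
     if (p == NULL)
         return;
     void *q = malloc(25);
     if (q == NULL)
         goto cleanup_p;
     if (mtx_lock(&mut) == thrd_error)
         goto cleanup_q;
     /* ... critical section ... */
     mtx_unlock(&mut);
 cleanup_q:
     free(q);
 cleanup_p:
     free(p);
 }
 </code></pre>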
Why is this better than RAII, with a destructor/drop being called whenever the block is exited? Also, this mechanism is already present in C via __attribute__((cleanup)).
The C language shouldn't need a defer statement keyword, because it's so trivial to implement using an asm() macro that overwrites the return address.<p>Using a macro is more succinct:<p><pre><code> const char *s = gc(xasprintf("%s/%s", dir, name));
</code></pre>
Than what's being proposed:<p><pre><code> char *s = xasprintf("%s/%s", dir, name);
defer free(s);
</code></pre>
See this x86 reference implementation of defer() and gc(): <a href="https://gist.github.com/jart/aed0fd7a7fa68385d19e76a63db687ff" rel="nofollow">https://gist.github.com/jart/aed0fd7a7fa68385d19e76a63db687f...</a>. That should just work with GCC and Clang. That code is originally from the Cosmopolitan C Library (<a href="https://github.com/jart/cosmopolitan" rel="nofollow">https://github.com/jart/cosmopolitan</a>), so check it out if you like the Gist and want more!<p>Please note the macro operates at function call boundaries rather than at block scope. I consider that a feature, since it behaves sort of like a memory pool. Having side effects at block scope would require changing the compilers and the language itself, and it would cause several important GCC optimization passes to be disabled wherever it's used.
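To make the function-boundary behaviour concrete, here's a rough sketch assuming the gc() and xasprintf() from the Gist (use_path() is a made-up placeholder):<p><pre><code> void process(const char *dir, const char **names, int n)
 {
     for (int i = 0; i < n; i++) {
         // gc() schedules the string to be freed when process() returns,
         // not when this loop-body block ends, so all n strings stay
         // live until the function exits -- like a small memory pool.
         const char *path = gc(xasprintf("%s/%s", dir, names[i]));
         use_path(path);
     }
 }  // every gc()'d allocation is released here
 </code></pre>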