The solution isn't to stop using rand(). The solution is to stop using newlib.<p>If you're doing your own custom memory management like this, you shouldn't even <i>have</i> a malloc implementation at all. Even newlib is too bloated for your use case. At this point, chances are you're using a trivial subset of the C library and it'd be easy to roll your own. You can import bits and pieces from other projects (I personally sometimes copy and paste bits from PDClib for this). In such a tight embedded project, chances are you don't even have threads; why even pull in that reentrancy code?<p>Freestanding C code with no standard library isn't scary. If you need an example, look at what we do for m1n1:<p><a href="https://github.com/AsahiLinux/m1n1/tree/main/src" rel="nofollow">https://github.com/AsahiLinux/m1n1/tree/main/src</a><p>In particular, this is the libc subset we use (we do have malloc here, which is dlmalloc, but still not enough of libc to be worth depending on a full one):<p><a href="https://github.com/AsahiLinux/m1n1/tree/main/sysinc" rel="nofollow">https://github.com/AsahiLinux/m1n1/tree/main/sysinc</a>
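To make that concrete, the freestanding helpers you end up needing are tiny. A rough sketch (not m1n1's actual code, just the shape of it) - note that even with -ffreestanding, GCC and Clang may still emit calls to memcpy/memset for struct copies and initializers, so these have to exist somewhere:<p><pre><code> #include <stddef.h>

 /* Byte-wise versions are plenty for early bring-up code. */
 void *memcpy(void *dst, const void *src, size_t n)
 {
   unsigned char *d = dst;
   const unsigned char *s = src;
   while (n--)
     *d++ = *s++;
   return dst;
 }

 void *memset(void *dst, int c, size_t n)
 {
   unsigned char *d = dst;
   while (n--)
     *d++ = (unsigned char)c;
   return dst;
 }

 size_t strlen(const char *s)
 {
   const char *p = s;
   while (*p)
     p++;
   return (size_t)(p - s);
 }
</code></pre>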
So... it wasn't really rand() at fault here, it was the reentrancy machinery inside newlib...<p>For some reason the blog lists the source for rand_r, to which the caller supplies the state rather than using global state.<p>I wonder if they'll run into this with any of the other non-reentrant C functions (strtok, I'm looking at you). It makes their conclusion of "we stopped using rand()" a bit wanting.<p>I mean, it's great that they stopped using rand(); that's a good move, and pragmatic given their issue. It just feels like a really surface-level conclusion.
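For anyone who hasn't used it, rand_r works because the caller owns the state. A minimal usage sketch (roll_die is just an illustrative name):<p><pre><code> #include <stdlib.h>  /* rand_r() is POSIX, not ISO C */

 /* The caller supplies and keeps the state, so there is no hidden
    global for a reentrancy shim to heap-allocate behind your back. */
 int roll_die(unsigned int *seed)
 {
   return rand_r(seed) % 6 + 1;
 }
</code></pre>
Each thread or subsystem just keeps its own unsigned int seed.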
This sort of thing is why I prefer to leave the C standard library out of my microcontroller code altogether. It keeps me from having to worry about what PC-centric assumptions they may be making under the hood, and they're often a bit bloated anyway.<p>Something you do have to watch out for regardless is accidental promotions of floats to doubles. C loves float promotion, and if you're not careful (and don't use the special compiler options) it's easy to turn a one-cycle FPU operation into a hundred-cycle CPU function call.<p>I keep thinking there ought to be a better language for bare-metal microcontroller programming, but the demand for it is so small I'm not sure who would bother supporting it.
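A small illustration of the promotion trap, assuming a part with a single-precision FPU (a Cortex-M4F, say) - the unsuffixed literal quietly drags the whole expression into double, which the hardware can't do:<p><pre><code> /* 0.5 is a double literal, so x is promoted and the multiply becomes
    a soft-float library call (e.g. __aeabi_dmul on ARM). */
 float scale_slow(float x)
 {
   return x * 0.5;   /* float -> double -> float round trip */
 }

 /* The f suffix keeps everything in single precision, on the FPU. */
 float scale_fast(float x)
 {
   return x * 0.5f;
 }
</code></pre>
GCC's -Wdouble-promotion is one of those "special compiler options" worth turning on.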
The article contains a fair bit of fluff that may be aimed at developers without familiarity with embedded systems or, I guess, C. I'm not exactly sure what's with the tone, but the short of it is:<p>- They've got an embedded system with all static allocation (they do mention having a dynamic allocator though)<p>- They recently found a stack corruption crash which was traced back to rand() calling malloc(), which is both unexpected and a problem, since malloc should never be called in this system at all<p>- They traced it back to their usage of newlib configured to add reentrant support for C functions that don't natively support it; this support uses malloc, so calling rand() results in a call to malloc()<p>- A recent tooling update caused them to be using newlib built this way; they previously were using newlib configured without this feature<p>- Their solution is to stop using rand() and use a different PRNG instead, and to write a tool that detects use of malloc() and fails the build
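The article doesn't show the detection tool, but one common way to get a hard failure on any malloc call is GNU ld's --wrap option. This is only a sketch of that technique, not whatever they actually built:<p><pre><code> /* Link with: -Wl,--wrap=malloc
    The linker redirects every reference to malloc() (including ones
    inside static libraries) to __wrap_malloc(), so any unexpected
    allocation traps instead of quietly touching the heap. */
 #include <stddef.h>

 void *__wrap_malloc(size_t size)
 {
   (void)size;
   __builtin_trap();  /* malloc must never be reached in this system */
 }
</code></pre>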
We had a team project at university to write a 3D graphics demo from scratch (no OpenGL, no fancy graphics libraries, just plotting pixels onto the screen).<p>We did, and it was pretty slow despite all the microoptimizations (for example we used fixed-point math). It was around the time hyperthreaded CPUs were introduced, and my friend added support for multithreading - one thread would draw each half of the screen. It broke our code, and we found out rand() wasn't thread-safe.<p>So being the inexperienced dumbasses we were - we added locks around the rand() calls :). Which made the whole multithreading useless, but we didn't realize it then :) What we should have done was implement rand() with local state, it's like 3 lines of code :)
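For what it's worth, the caller-owned-state version really is about three lines. A sketch using the old Numerical Recipes constants:<p><pre><code> #include <stdint.h>

 /* Tiny LCG; each thread keeps its own uint32_t state, so no locks
    and no shared globals. The high bits of an LCG are the good ones. */
 uint32_t my_rand(uint32_t *state)
 {
   *state = *state * 1664525u + 1013904223u;
   return *state >> 16;
 }
</code></pre>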
It’s possible the author of the article is actually a marketeer who wanted to publicise the company and its “careful approach” to IoT. Be careful what you wish for! Now your engineers just look a little inexperienced as a result.
Interesting article, but, man, incredibly annoying writing style. It reads like a LinkedIn post. Use normal sentence/paragraph structure and say what you’re going to say.
<p><pre><code> > The stack memory is somewhat tricky to allocate, because its maximum size is determined at runtime.
>
> There is no silver bullet: we can’t predict it. So we need to measure it.
</code></pre>
When you're extremely memory constrained, you should probably know the max stack depth at different points in your program.<p>GCC has -Wstack-usage=BYTES to warn you if a single function uses more than BYTES of stack space (VLAs and alloca not included), which admittedly isn't too useful if your function calls another...
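A sketch of what that looks like in practice (the 128-byte threshold and the consume() sink are made up for the example):<p><pre><code> /* Compile with: gcc -c -Wstack-usage=128 -fstack-usage demo.c
    -Wstack-usage warns when a single function exceeds the limit;
    -fstack-usage also writes a .su file listing each function's
    static stack footprint, which you can feed into your own
    whole-call-graph accounting. */
 void consume(const char *p);   /* hypothetical external sink */

 void fill_buffer(void)
 {
   char scratch[256];           /* trips the 128-byte warning */
   for (int i = 0; i < 256; i++)
     scratch[i] = (char)i;
   consume(scratch);
 }
</code></pre>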
I know it's offtopic, but adding perf probes to glibc's malloc in a running system was quite revealing:
- qsort calls malloc - and qsort is used a lot in glibc-internal functions such as gethostbyname, so... Yeah.
- snprintf can call malloc too<p>I'm sure I'm not finished discovering fun stuff.
As someone who dabbles with NES programming, 10KB is quite a lot of RAM for me. NES programming is almost always in 6502 asm, and at least when I do it I barely need the stack. Generally people don't use it for storage; they use the stack just for function calls, and generally only use a fraction of the zero page. Instead, every subroutine either gets its own variables somewhere in memory, or you use zero page and have to be careful that the interrupt handlers don't disrupt the parts of zero page that other routines might use. It's a lot harder perhaps, but it does force you to be more intentional with your memory usage. I'll admit I've never coded any crazy demos (yet), but programming this way, I've yet to use anywhere near the full 2KB of standard RAM the NES has.<p>CHR RAM (like graphics RAM, sort of) is a different story perhaps, but at least for the game logic and some sprite manipulation, yeah, 2KB for an 8-bit game is plenty. I think programming in C (which requires a stack, and probably floats, which the 6502 has no native support for and the compiler would have to implement in software) adds a lot of convenience, but the overhead is too much for the NES, although there are libraries and tools for NES C programming out there.
And in Java, Math.cos(double) might allocate as well, during argument reduction for large angles. In Jafama I took care to avoid that by encoding the quadrant in two unused bits of the reduced angle's exponent.
His solution is to use a PCG generator? As far as I know its designer withdrew submission of a journal article after negative reviews and has never resubmitted one. And this doesn't look good:<p><a href="https://pcg.di.unimi.it/pcg.php" rel="nofollow">https://pcg.di.unimi.it/pcg.php</a>
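Whatever one makes of that critique, for readers who haven't seen it, the minimal pcg32 routine that usually gets passed around looks roughly like this (from memory; whether it's the exact variant the article uses is an assumption):<p><pre><code> #include <stdint.h>

 /* Minimal PCG32 (XSH-RR): 64-bit LCG state, 32-bit output formed by
    an xorshift of the old state followed by a data-dependent rotate. */
 typedef struct { uint64_t state; uint64_t inc; } pcg32_t;

 uint32_t pcg32_next(pcg32_t *rng)
 {
   uint64_t old = rng->state;
   rng->state = old * 6364136223846793005ULL + (rng->inc | 1u);
   uint32_t xorshifted = (uint32_t)(((old >> 18u) ^ old) >> 27u);
   uint32_t rot = (uint32_t)(old >> 59u);
   return (xorshifted >> rot) | (xorshifted << ((32u - rot) & 31u));
 }
</code></pre>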
<p><pre><code> int
 rand_r (unsigned int *seed)
 {
   long k;
   long s = (long)(*seed);
   if (s == 0)
     s = 0x12345987;
   k = s / 127773;
   s = 16807 * (s - k * 127773) - 2836 * k;
   if (s < 0)
     s += 2147483647;
   (*seed) = (unsigned int)s;
   return (int)(s & RAND_MAX);
 }
</code></pre>
Who on this green Earth wrote this??<p><pre><code> long s = (long)(*seed);</code></pre>
If you are developing against an assumed implementation rather than the published interface, you are wrong. rand is not the culprit, newlib is not the culprit: you are.<p>You don't want to use malloc? Don't have malloc.