Greg Parker, who works on the Objective-C runtime, has a blog post that goes into more detail: <a href="http://www.sealiesoftware.com/blog/archive/2017/6/5/Objective-C_and_fork_in_macOS_1013.html" rel="nofollow">http://www.sealiesoftware.com/blog/archive/2017/6/5/Objectiv...</a>
This issue has been addressed in ruby-head:<p><a href="https://github.com/ruby/ruby/commit/8b182a7f7d798ab6539518fbfcb51c78549f9733" rel="nofollow">https://github.com/ruby/ruby/commit/8b182a7f7d798ab6539518fb...</a>
> This cryptic error<p>Well... I don't think I've seen many programmer-to-programmer errors that are <i>less</i> cryptic than the one described in the article. It's actually quite amazing how much explanation you sometimes get from Cocoa!
This is so incredibly Apple :)<p>The breakage, I mean. To clarify a bit: for better or for worse, this is what Microsoft does instead, and it's a totally different psychology: <a href="https://blogs.msdn.microsoft.com/oldnewthing/20031223-00/?p=41373" rel="nofollow">https://blogs.msdn.microsoft.com/oldnewthing/20031223-00/?p=...</a>
Interesting discussion. If these were user-space threads, like FreeBSD ~20 years ago, there'd be no problem. When fork() is called, the whole user-space threads package would be forked, along with all the threads.<p>So the obvious question is whether it's <i>fundamental</i> that with kernel threads the fork() system call doesn't clone all the other threads in the process? Yes, that's not how it's done, but could Apple choose to implement a new fork_all() system call? I imagine it wouldn't be easy - you'd need to pause <i>all</i> the running threads while you copied state, but is there a reason it's actually not possible?
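To make the current behavior concrete, here's a minimal sketch using Python's stdlib as a stand-in for the C API (`os.fork` wraps the same syscall): the child inherits a copy of the whole address space, but only the thread that called fork() survives in it.

```python
import os
import threading
import time

def thread_count_in_child():
    """Fork while a worker thread is running and report how many
    threads the child process sees (sent back over a pipe)."""
    started = threading.Event()

    def worker():
        started.set()
        time.sleep(60)          # keep the thread alive in the parent

    threading.Thread(target=worker, daemon=True).start()
    started.wait()

    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:
        # Child: the address space was copied, but only the thread
        # that called fork() exists here -- the worker is gone.
        os.write(w, str(threading.active_count()).encode())
        os._exit(0)
    os.close(w)
    count = int(os.read(r, 16))
    os.close(r)
    os.waitpid(pid, 0)
    return count

print("threads in child after fork:", thread_count_in_child())  # 1
```

A hypothetical fork_all() would have to report 2 here instead, which is exactly the hard part: every other thread would need to be paused at a consistent point before the copy.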
Do developers just call whatever function seems to work, without reading the docs? That doesn't fly for low-level programming.<p><pre><code> ~ man fork
</code></pre>
CAVEATS: There are limits to what you can do in the child process. To be totally safe you should restrict yourself to only executing async-signal safe operations until such time as one of the exec functions is called. All APIs, including global data symbols, in any framework or library should be assumed to be unsafe after a fork() unless explicitly documented...
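In practice that caveat means the child should do nothing but exec. A rough Python sketch of that shape (the real async-signal-safety rules apply at the C level; `os.fork`/`os.execv` here just illustrate the pattern):

```python
import os
import sys

def fork_then_exec():
    """The safe post-fork pattern: do nothing in the child except exec.
    exec replaces the process image, discarding whatever lock or
    runtime state fork() copied in a possibly-inconsistent condition."""
    pid = os.fork()
    if pid == 0:
        # Child: straight to exec -- no allocation, no locking,
        # no framework calls in between.
        os.execv(sys.executable, [sys.executable, "-c", "pass"])
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)

print("child exit code:", fork_then_exec())  # 0
```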
fork() <i>fundamentally does not make sense</i> as the de facto method of starting a new process. Why aren't people using posix_spawn() by default?
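For illustration, Python exposes this directly as os.posix_spawn (Python 3.8+, Unix only); a minimal sketch of spawning a child without ever duplicating the parent's address space:

```python
import os
import sys

def spawn_child():
    """Start a child process with posix_spawn() instead of fork()+exec():
    the kernel/libc sets up a fresh process running a new image, so none
    of the post-fork restrictions on threaded parents apply."""
    pid = os.posix_spawn(
        sys.executable,
        [sys.executable, "-c", "print('hello from the child')"],
        os.environ,
    )
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)

print("child exit code:", spawn_child())  # 0
```

The catch, of course, is that preforking servers like Passenger rely on fork() precisely because the child keeps the parent's warmed-up memory image, which posix_spawn() by design does not give you.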
for other fun low-level High Sierra issues, see the PostgreSQL msync() thread: <a href="https://www.postgresql.org/message-id/flat/13746.1506974083%40sss.pgh.pa.us#13746.1506974083@sss.pgh.pa.us" rel="nofollow">https://www.postgresql.org/message-id/flat/13746.1506974083%...</a>
The discussion on the Ruby core team issue tracker is also very informative: <a href="https://bugs.ruby-lang.org/issues/14009" rel="nofollow">https://bugs.ruby-lang.org/issues/14009</a>
I've hit similar issues with uwsgi in recent memory (though pre-High Sierra), where an OS upgrade caused it to start segfaulting somewhere inside CoreFoundation when using the `requests` lib (though of course that was entirely unrelated to these new forking changes).<p>Maybe this? Though the resolution was to disable uwsgi proxying globally...
<a href="https://stackoverflow.com/questions/35650520/uwsgi-segmentation-fault-when-using-flask-and-python-requests" rel="nofollow">https://stackoverflow.com/questions/35650520/uwsgi-segmentat...</a>
LOL. macOS breaks fork() to avoid state inconsistency in threaded applications. What about pthread_atfork() semantics? But, as usual, Apple heavy-handedly changes userspace and breaks things. Nothing new to see here, move on.
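For what it's worth, Python's os.register_at_fork() gives the same three hook points as pthread_atfork(); a small sketch of those semantics (handlers run before fork, then in the parent and child respectively, which is how libraries are supposed to fix up their own state):

```python
import os

events = []

# os.register_at_fork() mirrors pthread_atfork(): register handlers
# that run before fork() and after it in parent and child, so a
# library can release/reacquire its locks around the fork.
os.register_at_fork(
    before=lambda: events.append("prepare"),
    after_in_parent=lambda: events.append("parent"),
    after_in_child=lambda: events.append("child"),
)

pid = os.fork()
if pid == 0:
    # Child sees the prepare handler's effect, then its own handler.
    os._exit(0 if events == ["prepare", "child"] else 1)
_, status = os.waitpid(pid, 0)
assert os.WEXITSTATUS(status) == 0
print("parent events:", events)  # ['prepare', 'parent']
```

Apple's position, per the linked discussion, is essentially that atfork handlers can't fix arbitrary framework state, hence the hard error instead.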
I agree the bug should be fixed. But why not just use Docker and run Rails as if it were on Ubuntu/Linux, even on your Mac? It's miserable dealing with Windows/Mac/etc.-specific issues.
I love this geek-porn stuff, and the Phusion guys never fail to deliver it ;)<p>But my question is: is this really that important? I mostly use macOS for development, and I don't feel that the preforking model has that much impact on the development cycle.