科技回声 (Tech Echo) — a tech news platform built with Next.js, mirroring global tech news and discussion.


Ask HN: Why are all major operating systems monolithic?

44 points · by achenet · 12 months ago
Why are all major OSes (Windows, macOS, Linux, *BSD) monolithic kernels? Why aren't there more microkernel operating systems, like, for example, Redox or Minix?

17 comments

nullindividual · 12 months ago
The NT kernel and XNU (macOS) are considered hybrid kernels. QNX, a microkernel, is absolutely everywhere (e.g., cars).

Performance of microkernels is, or was, a hotly debated topic back in the '90s; microkernels generally had lower performance than their monolithic counterparts. There was a famous argument comparing Linux (monolithic) to MINIX (micro) [0]. Wikipedia has simple explanations of the differences between hybrid, mono, and micro kernels that serve as a decent primer [1], and a visual that I find helpful [2].

[0] https://en.wikipedia.org/wiki/Tanenbaum%E2%80%93Torvalds_debate

[1] https://en.wikipedia.org/wiki/Kernel_(operating_system)

[2] https://en.wikipedia.org/wiki/Hybrid_kernel#/media/File:OS-structure2.svg
freedomben · 12 months ago
Definitely read up on the Torvalds/Tanenbaum debate, as there are lots of moving pieces and interesting theories.

My opinion, summed up ridiculously concisely, is that the reasons are largely accidents of timing (i.e. there was room in the market for new entrants), mixed with trivial-at-the-time but high-impact choices. Had Tanenbaum been permitted by his publisher to give away copies of MINIX freely, Linux may never have been started at all. Had Tanenbaum chosen raw performance over teachability (the kernel was a teaching tool, after all), it may have made a difference. (Warning: this part is controversial.) There's also a lot of question about whether the GPL licensing kept Linux relevant, or if it was largely inconsequential. My opinion is that the GPL made a big difference. Had the licensing of MINIX been GPL-flavored, it probably wouldn't have been used by companies like Intel for the Management Engine, but there might have been enough "forced sharing" to keep it relevant. Impossible to say for sure, but that's my two cents.
0xbadcafebee · 12 months ago
Linux wouldn't really exist today if it weren't for GNU Hurd being a microkernel. It took so damn long for Hurd to become stable that nobody wanted to use it. But people did want to use the GNU tools. So Linus made the Linux kernel a simple monolith, shipped it quickly, integrated patches quickly, and everyone started using it. By the time Hurd was stable, Linux was popular. So microkernels are hard and monoliths are easier, and the first one to the party tends to stick around the longest.
yjftsjthsd-h · 12 months ago
*Very* oversimplified answer: microkernels still have a performance hit, and end users care more about performance than stability/security (at least at the ratio/margin in question), so we use the faster monolithic kernels for most user-visible systems, with carve-outs where the performance hit isn't *so* bad (hence NT/Darwin really being more hybrid now, and even Linux *can* do some drivers in userspace). As other comments note, this only applies where performance matters; lots of embedded systems do in fact care more about reliability than performance, so the outcome is different.
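The performance hit described above comes from crossing a protection boundary on every service request. A rough illustrative sketch (not a kernel benchmark — Python pipes are far slower than real kernel IPC, but the direct-call-vs-message-passing gap is the point):

```python
import time
from multiprocessing import Pipe, Process

def add_one(x):
    # "Monolithic" path: a plain function call inside one address space.
    return x + 1

def server(conn):
    # "Microkernel" path: a separate process answering requests over IPC.
    while True:
        msg = conn.recv()
        if msg is None:          # shutdown sentinel
            break
        conn.send(msg + 1)

def bench(n=5000):
    # Time n direct calls.
    t0 = time.perf_counter()
    for i in range(n):
        add_one(i)
    direct = time.perf_counter() - t0

    # Time n request/reply round trips to another process.
    parent, child = Pipe()
    proc = Process(target=server, args=(child,))
    proc.start()
    t0 = time.perf_counter()
    for i in range(n):
        parent.send(i)           # request crosses a process boundary...
        parent.recv()            # ...and the reply crosses back
    ipc = time.perf_counter() - t0
    parent.send(None)
    proc.join()
    return direct, ipc

if __name__ == "__main__":
    direct, ipc = bench()
    print(f"direct calls: {direct:.4f}s, IPC round trips: {ipc:.4f}s")
```

On any machine the IPC path is orders of magnitude slower per call, which is why microkernels put so much engineering effort into fast IPC primitives.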
racional · 12 months ago
A good starting point for this question would be this famous discussion:

https://en.wikipedia.org/wiki/Tanenbaum–Torvalds_debate

Which in a nutshell comes down to Linus's view that producing a system that meets current needs (for example by offering a simple API) and is reasonably performant -- even if it ran on but a single architecture such as the i386 -- was more promising than a design offering "theoretical" benefits (such as portability) but which either didn't deliver the goods today or just wasn't reasonably performant.

This is a vast simplification of a much broader set of issues, but if one digs into the notes on why subsequent projects (e.g. Windows NT) moved away from microkernel designs, or the roughly analogous microservices debate of the past decade, one often finds echoes of the same basic considerations.
nimish · 12 months ago
Linux has more microkernel features than you'd think. You can run all of networking, filesystems, HID, and more in user space with BPF/FUSE. What else is left?
pclmulqdq · 12 months ago
New OSes that have come out are pretty much universally microkernels. Fuchsia from Google is on a lot of their devices, and is a microkernel. seL4 and QNX are microkernels for security- and safety-critical devices. I believe that Firecracker from AWS is also sort of microkernel-like (but pretends to be Linux). If you are building the device, the kernel, and the software, chances are that you are using a microkernel in 2024.

Windows and the Unix OSes just have a lot more market penetration, strong network effects, and a big first-mover advantage. That means that when you are buying your OS or your hardware or your software, you're probably stuck with a monolithic kernel.
mike_hearn · 12 months ago
Nobody actually wants microkernels. Remember that the microkernel design is just a pessimistic mitigation. It assumes code will be crashy and full of exploits forever, and that it's worth significantly complicating the design, as well as sacrificing performance, to try and mitigate that. If you're making an OS, you could invest resources into micro-kernelling everything and then trying to resolve the many hard problems and bugs that mitigation itself creates, or you could just... fix the bugs! Kernels aren't infinitely large, and the latter is actually much preferable if you can do it.

Some kinds of code are inherently in the core failure domain, and trying to use restartable processes for them is impossible. For example, if the code configuring your PCI controller crashes halfway through, well, you need to reboot. You can't just restart the driver, because you have no idea what state the hardware is in. A reboot gets you back into a known state; a server restart doesn't. Likewise, if your root filesystem server segfaults, you can't restart it, because your process-start code is going to involve sending RPCs to the filesystem server, so it'd just deadlock.

Finally, it's a spectrum anyway. Microkernel vs. monolithic is not a hard divide, and code moves in and out of those kernels as time and hardware change. The graphics stack on Windows NT started out of kernel, moved into the kernel, then moved back out again. In all OSes, the bulk of the code resides in userspace libraries and in servers connected via IPC.
squarefoot · 12 months ago
Because the microkernel was born in an era when CPU cycles were precious, and the difference in performance, albeit minimal, would be noticeable. Also, back then there were no security reasons to choose one over the other; those were the days of NNTP and unencrypted email, and the web didn't even exist. I can't comment on other OSes, but in the case of Linux, by the time microkernels became stable enough, Linux had already taken over and was running on everything from supercomputers down to embedded systems (even ancient 8088-based hardware, if one also counts ELKS). As of today, in my opinion, the biggest obstacle is device drivers; that's the biggest battleground determining the success or failure of an OS. Linux runs pretty much everywhere, and where hardware manufacturers stubbornly refuse to document their hardware, there are thousands of developers around the world spending time and money to reverse engineer closed drivers so that Linux can support more hardware. Of course, once drivers are written they can be ported to other OSes, but that requires more time and more people devoted to the job.
kazinator · 12 months ago
In the world of Linux, "microkernel-like things" are done when they are beneficial. An example of this is FUSE: filesystems in user space. There is a benefit to being able to run a filesystem in its own address space as a user process, so it is done.

Also, the model of a Unix kernel with services (daemons) in user space is kind of similar to the microkernel model. The graphical desktop is not in the kernel, the mail server isn't in the kernel, crond isn't in the kernel, the web server isn't in the kernel, the database server isn't in the kernel... and maybe that's as close as we need to get to the microkernel.

Speaking of web servers in user space: at least for serving a static page, it's faster for that to be inside the monolithic kernel, where it can access the protocol stack and filesystem code without crossing a protection boundary. Over the history of Linux, we have seen experimentation both with moving kernel things into user space (as with FUSE) and with moving user things into the kernel (the TUX server, now evidently unmaintained).
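To make the FUSE point concrete, here is a hedged sketch using sshfs (a common FUSE filesystem; the host, user, and paths are placeholders): the filesystem "server" is an ordinary user process that can be started, inspected, and stopped without loading a kernel module or rebooting.

```shell
# sshfs is a FUSE filesystem; user@host and the paths below are placeholders.
mkdir -p /tmp/remote
sshfs user@host:/srv/data /tmp/remote   # filesystem server starts as a normal user process
ps -C sshfs -o pid,comm                 # it shows up like any other process
fusermount -u /tmp/remote               # unmount: no kernel module unload, no reboot
```

If the sshfs process crashes, only that mount is affected; the rest of the system keeps running, which is exactly the isolation argument microkernel designs generalize.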
bluGill · 12 months ago
Because once something works, it is hard to start over from scratch, and also hard to do a major refactor. It can be done, but few can dedicate the time needed, and so you stick with what exists even if it's not ideal.
23B1 · 12 months ago
1. There is value in 'central control', for several reasons:

- the people required to work on it
- in a timely fashion
- in a coordinated fashion
- the resources required
- bugs, updates, maintenance

2. The competition for the aforementioned talent

3. The competition for the aforementioned resources

4. Profit motive
gwd · 12 months ago
For one thing, writing things in a microkernel is far more *complex*. In theory, the isolation can mean lower impact from bugs and security issues; but in practice, the complexity means more bugs, which are harder to debug. That's why you don't really see microkernels outside of embedded use cases -- the lower functionality requirement makes the complexity manageable.

You can write a rocket OS as a "proper" microkernel, but there's no way you could write a full-blown Windows OS as a "proper" microkernel.

That said, Windows or Linux can take leaves out of the microkernel playbook and apply them judiciously: KVM's architecture is arguably an example of this, where the core kernel does only the minimum necessary and everything else is pushed to the user-mode component (QEMU, Firecracker, etc.).
fsflover · 12 months ago
According to Qubes OS, they are not realistic today: https://www.qubes-os.org/faq/#what-about-safe-languages-and-formally-verified-microkernels
nottorp · 12 months ago
Because if the difference is still meaningful, the monolithic kernels are good enough?
renonce · 12 months ago
Is Linux really a monolithic kernel? I mean, yeah, for most use cases such as desktops and servers it is, but there's nothing preventing you from disabling most of its features and drivers and keeping a very minimal core (I'm sure lots of embedded Linux does this), and you can, right now, make it act like a microkernel by putting any component in userspace, mounting filesystems and devices via FUSE and networks via TUN, in its current state.

Now the question becomes: why are these components built into the kernel and not userspace? The answer is clear for each individual component. See how the in-tree NTFS driver eventually replaced NTFS-3g. Basically, when an in-tree solution exists, it just gets preferred over userspace solutions because of performance and reliability, etc.
stracer · 12 months ago
Monolithic systems allow their authors much greater control over software that runs on computers.