
My story on “worse is better” (2018)

195 points by rui314 about 3 years ago

20 comments

CipherThrowaway about 3 years ago
IMO it's not that the simplest solutions are the best but that the "better" complex solutions are not actually available upfront. They can only be made with hard-won domain knowledge. The design policy for lld v1, per this article, encoded many assumptions that turned out to be untrue in practice (like the importance of platform independence). If they had been true then the extra complexity might have been worth it. Over time the simpler lld v2 might accrue its own complexity that better reflects learned experience.

Code is a tool for exploring and understanding problems as much as it is about solving them. Sophisticated solutions can't be designed before they are validated.
recursivedoubts about 3 years ago
The worse is better essay is a great read:

https://www.dreamsongs.com/WorseIsBetter.html

Over time, I have come to believe that the problem is overly-aggressive abstraction. It is very tempting for most developers, especially good developers, to reach for abstraction as soon as things get complicated. And this can pay off very, very well in some cases. However, too much abstraction in a system leads to a very, well, abstract code base that becomes hard to get a handle on: there's no there there. You see this in the Java world with AbstractFactoryBuilderLookupServiceBuilders and so forth, and with very elaborate type setups in functional programming languages.

Concretizing the crucial bits of your system, even if that means a few large, complex and gronky methods or classes, often ends up making things more understandable and maintainable and, certainly, debuggable.

John Ousterhout wrote a book that makes this point as well, advocating for "deep" rather than "shallow" classes and methods:

https://www.goodreads.com/en/book/show/39996759-a-philosophy-of-software-design
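The deep/shallow distinction is easier to see in code. The sketch below is hypothetical (the class names and the symbol-table example are mine, not the book's or the article's): the first class is "shallow", a wrapper whose methods merely pass through, while the second keeps the same small interface but absorbs the strong/weak resolution rules so callers never have to know them.

```cpp
#include <cstdint>
#include <optional>
#include <string>
#include <unordered_map>

// Shallow: every method is a one-line pass-through, so callers gain nothing
// from the extra layer.
class SymbolMapWrapper {
public:
    void put(const std::string& name, uint64_t addr) { map_[name] = addr; }
    std::optional<uint64_t> get(const std::string& name) const {
        auto it = map_.find(name);
        return it == map_.end() ? std::nullopt : std::optional<uint64_t>(it->second);
    }
private:
    std::unordered_map<std::string, uint64_t> map_;
};

// Deep: the same small surface, but the strong/weak resolution rule lives
// inside, hidden from every caller.
class SymbolResolver {
public:
    void define(const std::string& name, uint64_t addr, bool weak) {
        auto it = syms_.find(name);
        // First definition wins, except that a strong definition replaces a weak one.
        if (it == syms_.end() || (it->second.weak && !weak))
            syms_[name] = {addr, weak};
    }
    std::optional<uint64_t> resolve(const std::string& name) const {
        auto it = syms_.find(name);
        return it == syms_.end() ? std::nullopt : std::optional<uint64_t>(it->second.addr);
    }
private:
    struct Sym { uint64_t addr; bool weak; };
    std::unordered_map<std::string, Sym> syms_;
};
```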
ncmncm about 3 years ago
Cf: "Architecture Astronaut".

Abstraction always costs. This implies any abstraction you put in that doesn't deliver commensurate benefits makes your code fundamentally worse.

If you have an abstraction with a name that seems to say it does X, but to be sure I have to trace through four other source files plus an unknown number of dead ends just to see if it really does exactly just X, and in the end it could have just been coded in place, it has already cost way more than any benefit it could yield.

Abstraction is an engineering tool, not a moral imperative. You can always add abstraction later.

So, "Worse is Better" is at best misleading. *Better is better*. But the measure of "better" you have been using is likely to be way off. People who think of themselves as smart tend to undervalue simplicity. It is a personal failing.
avgcorrection about 3 years ago
This ain’t “worse is better”. If the end-user has no worse of a user experience with the supposedly worse-is-better interface or program—most notably in this case if the linker never in practice gives a bad user experience on bad input (perhaps because it never happens)—then it's not really “worse”.

The central tenet of worse-is-better is to prioritize implementation simplicity over user experience. But if you simplify the implementation since some features *are never used* and not needed then you haven't even had to make a choice on that spectrum—you have just cut out unneeded cruft.

In fact the user experience has improved since it is faster...
CTmystery about 3 years ago
I have seen "worse is better" trotted out many, many times, used mostly as an appeal to authority instead of a clear instantiation of the argument presented in the essay. I think this is because the argument in the original essay is not crisply presented (at least not to me). So authors take "worse" and "better" on some random dimension that's beneficial to them in the moment, and then appeal to the authority of this well-respected essay to "prove" that their approach is better.

This post is stating that they tried to generalize too much instead of building to narrow use cases first. There is no need to bring "worse is better" into it at all, IMO.
scoutt about 3 years ago
I wouldn't call it "worse is better". Why not "less is better" or "simpler is better"?

My take is that one should program based on what the code should do, and not on what is comfortable for me (as a developer).

Yes, it's a great feeling when your code fits like a jigsaw puzzle, but also more complexity = more code being executed. Behind that RAII, behind that *"operator="* and that *"p = new Struct"*, etc., there might be extra complexity for the sake of the developer's readability and comfort. There is little or no added value for the end user or the purpose of the program itself.

Also the code should be written "for the now", not for "that future feature it would be awesome to have someday, like making it compatible with every other library X, etc."

At the end of the day, even without realizing it, your program is slow.

I remember a developer where I work did a C# implementation of an AT command parser, in which every AT command was a separate DLL. It was very complex, and super slow. But the developer argued "if I need a new AT command, I'll just add a new DLL". It might have been *better* for him as a developer, but it was *worse* for the end user and the system in general. The code died the day that guy left the job.
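For contrast, here is a rough sketch of the simpler alternative the commenter implies, written in C++ rather than the original C# and entirely hypothetical (command names and responses are made up): a plain dispatch table where adding an AT command is one line, with no dynamic loading involved.

```cpp
#include <functional>
#include <iostream>
#include <map>
#include <string>

using Handler = std::function<std::string(const std::string& args)>;

// One table entry per command; adding a command is adding a line here.
static const std::map<std::string, Handler> kAtCommands = {
    {"AT",      [](const std::string&)      { return std::string("OK"); }},
    {"AT+CSQ",  [](const std::string&)      { return std::string("+CSQ: 21,0"); }},
    {"AT+CMGS", [](const std::string& args) { return "sending: " + args; }},
};

std::string dispatch(const std::string& line) {
    auto eq = line.find('=');
    std::string cmd  = line.substr(0, eq);   // whole line if there is no '='
    std::string args = eq == std::string::npos ? "" : line.substr(eq + 1);
    auto it = kAtCommands.find(cmd);
    return it == kAtCommands.end() ? "ERROR" : it->second(args);
}

int main() {
    std::cout << dispatch("AT+CSQ") << "\n";   // +CSQ: 21,0
    std::cout << dispatch("AT+FOO") << "\n";   // ERROR
}
```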
jt2190 about 3 years ago
I’ve been thinking a lot about this kind of problem lately. My thinking was triggered by a description of real-world contracts by Lawrence Lessig, in a talk about crypto. [1] His point was that in most real-world contracts there are many, many undefined contingencies for rare occurrences; the time and effort to nail down what each party should do in those cases is just not worth it, and if any of them do actually occur everyone will go to court for adjudication.

The software industry might be better off if we start making explicit that there is a real trade-off of time and effort between designing a system that can handle all contingencies, and one that needs adjudication occasionally, i.e. throws an error or crashes, needs patching, etc. [2]

Notice that Rui’s solution here was to basically allow lld to just crash under rare, extraordinary cases. This seems reasonable when written here, but in the real world suggesting that a system be allowed to crash, ever, often gets very hard pushback.

[1] “Smart Contracts and DApps: Clarity versus Obscurity” https://youtu.be/JPkgJwJHYSc?t=3512

[2] “Hard-assed bug fixin’”, Joel on Software (2001)
nyanpasu64 about 3 years ago
> Since the linker's input file is created not by humans but by the compiler, it is unlikely that the linker takes a corrupted file as an input. Therefore, the policy did not actually increase a crash rate. The code that trusts input was much simpler than the one that does not trust any byte of an input file.

Interestingly I *have* encountered crashes in Ninja (not lld), caused by corrupted on-disk state I had to delete: https://github.com/ninja-build/ninja/issues/1978. I think I traced it down to a memory indexing or null pointer error, which would've been caught by asserts but they were disabled in release builds.
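For readers unfamiliar with that failure mode: standard `assert()` compiles away when `NDEBUG` is defined, which is typical of release builds, so a corrupted index silently becomes undefined behavior. A generic illustration (not the actual Ninja code):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdio>
#include <vector>

int lookup(const std::vector<int>& table, std::size_t idx) {
    assert(idx < table.size());   // checked in debug builds only; a no-op under NDEBUG
    if (idx >= table.size()) {    // an always-on check like this would have caught
        std::fprintf(stderr, "corrupted index %zu\n", idx);  // the corrupted state
        return -1;                // even in a release build
    }
    return table[idx];
}
```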
hprotagonist about 3 years ago
There's some commonality here between this sort of "worse is better" and the observation that a meticulously neat and tidy, fastidiously clean {desk, notetaking system, editor configuration, ...} is a good indicator that its owner doesn't do anything worthwhile with it.

The real world is messy. Things that come into contact with the real world will acquire a little bit of wear and mess. If it's all still brilliantly clean and tidy, you can't have done anything useful yet!
golergka about 3 years ago
> Here is the rule: if a user can trigger an error condition by using the linker in a wrong way, the linker should print out a proper error message. However, if the binary of an input file is corrupted, it is OK for the linker to simply crash.

One of the most useful ideas I have encountered in developing reasonably complex systems is treating different types of errors differently. There's a place for both panics/exceptions on one hand and result monads/error messages on the other.
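A loose sketch of that rule (hypothetical helper names, not lld's actual API): user-triggerable problems get a proper message, while a violated internal invariant, which here can only mean the compiler handed over corrupt output, is allowed to abort.

```cpp
#include <cstdio>
#include <cstdlib>
#include <string>

// User-triggerable problem: report it clearly and exit with an error code.
[[noreturn]] void userError(const std::string& msg) {
    std::fprintf(stderr, "error: %s\n", msg.c_str());
    std::exit(1);
}

// Internal invariant: if this fires, the input produced by the compiler is
// corrupt, and crashing with a short diagnostic is acceptable.
void internalCheck(bool cond, const char* what) {
    if (!cond) {
        std::fprintf(stderr, "internal error: %s\n", what);
        std::abort();
    }
}

void openInput(const std::string& path, bool exists, unsigned magic) {
    if (!exists)
        userError("cannot open " + path);   // the user passed a bad path
    // 0x464c457f is the ELF magic bytes "\x7fELF" read as a little-endian word.
    internalCheck(magic == 0x464c457fu, "bad ELF magic");
}
```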
jstimpfle about 3 years ago
I've always considered the EINTR example in that article to be a bad one (although I think I agree with the article in general). Having blocking syscalls interrupted in case of an asynchronous signal is the right thing, because it allows the program to act on the signal. Think for example of a terminal user pressing Ctrl+C to interrupt blocking I/O and return to the shell prompt.

The problem is that this doesn't go far enough - decades ago machines were running on a single CPU and OSes were focused on the scheduling of processes. Syscalls were all blocking, so for each individual process there could only ever be one syscall ("request") in flight at a time. Now we're seeing a change (for example with io_uring) towards fully asynchronous I/O exposed to userland, which allows submitting multiple requests to various I/O devices simultaneously, which has the potential to improve throughput a lot.
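A small POSIX sketch of the Ctrl+C point (a generic example, not from the article): the handler only sets a flag, and because the blocking read() is not restarted, the main loop gets a chance to act on that flag instead of staying blocked until more data arrives.

```cpp
#include <cerrno>
#include <csignal>
#include <cstdio>
#include <unistd.h>

static volatile sig_atomic_t g_interrupted = 0;

extern "C" void onSigint(int) { g_interrupted = 1; }  // async-signal-safe: just set a flag

int main() {
    struct sigaction sa = {};
    sa.sa_handler = onSigint;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;                   // no SA_RESTART: read() fails with EINTR on Ctrl+C
    sigaction(SIGINT, &sa, nullptr);

    char buf[4096];
    for (;;) {
        ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
        if (n < 0 && errno == EINTR) {
            if (g_interrupted) { std::puts("interrupted, exiting"); break; }
            continue;                  // interrupted by some other signal: retry
        }
        if (n <= 0) break;             // EOF or a real error
        write(STDOUT_FILENO, buf, static_cast<size_t>(n));
    }
    return 0;
}
```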
hardwaregeek about 3 years ago
What a lot of people seem to not understand is that mistakes or failings are sometimes on purpose. If software is slow, sometimes (not always!) it’s because the software that focused on performance didn’t get traction with users and failed. Too many programmers just see the immediate failings and not the larger failings that the software successfully avoided through a tradeoff.

Likewise many programmers don’t seem to get that the incentives of a programmer are quite different from the incentives of a manager. Programmers think “aw man my stupid manager is making me push out features instead of refactoring, if I were in charge I’d focus on code quality and performance”, not understanding that maybe, just maybe, their manager might have different incentives and a different perspective.

Worse is better is essentially a shorthand for understanding product management and scoping.
eikenberry about 3 years ago
> "It says that lazily-looking code that does not provide a consistent interface is sometimes actually better than neatly layered, consistent one."

I find this an odd take on that paper. To me it has always been about how simplicity is more important than correctness. And while the author's take doesn't conflict with WiB, I do think they miss the point.

> "Simplicity of implementation is very, very important, and you can sometimes sacrifice consistency and completeness for it."

They almost get it in the conclusion. They are starting to see the importance of simplicity, but are still fixated on correctness. Simplicity is not something you sacrifice correctness and completeness for "sometimes"; it always wins if there is a contest. It is always more important (at least in line with WiB).
sam_lowry_ about 3 years ago
This echoes the "premature optimization is the root of all evil" saying.
klysm about 3 years ago
> This may seem like an amateur-level programming mistake, but in reality, it's much easier to write straightforward code for each target than writing unified one that covers all the details and corner cases of all supported targets simultaneously.

This is one of the reasons sum types are so critical, in my opinion. They let you write code in that style, whereas OOP forces you to make everything look kinda the same.
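A brief illustration of that style using std::variant in C++ (the target structs, fields, and numbers are made up for the example, not lld's real code): each target gets its own straightforward case instead of being squeezed into one unified hierarchy that must fit every platform at once.

```cpp
#include <cstdint>
#include <type_traits>
#include <variant>

struct X86_64  { bool isPic; };
struct AArch64 { bool hasBti; };
struct RiscV   { bool relax; };

// The set of supported targets as a sum type.
using Target = std::variant<X86_64, AArch64, RiscV>;

// Hypothetical per-target query: each branch is plain and self-contained.
uint32_t pltEntrySize(const Target& t) {
    return std::visit([](const auto& tgt) -> uint32_t {
        using T = std::decay_t<decltype(tgt)>;
        if constexpr (std::is_same_v<T, X86_64>)       return 16;
        else if constexpr (std::is_same_v<T, AArch64>) return 16;
        else                                           return tgt.relax ? 8u : 16u;
    }, t);
}
```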
0xdeadbeefbabe about 3 years ago
> It is OK to not aim to minimize the amount of code; reducing the complexity is much more important.

Also, it's not OK to aim to increase the amount of code. //TODO include both statements in the creed.
phendrenad2 about 3 years ago
Not a good example of "Worse is Better". This is "worse is faster", which is much less interesting. The power of WIB is that it applies even if the programs are equally fast.
kazinator about 3 years ago
> *First of all, do many people really need a set of library functions and data structures that collectively work as a linker?*

Yes; people who unit test.
chrchang523 about 3 years ago
(2018)
kazinator about 3 years ago
The "PC losering" anecdote in Gabriel's original essay is very dated.

In fact, neither design is the "better" one.

In not-so-modern-anymore POSIX, you can choose whether a system call will be restarted after a signal is handled, or whether it will terminate with an error. Both behaviors are needed.

It is signals themselves that are "worse". But they let you have asynchronous behaviors without using threads.

Sometimes you want a signal handler to just set some flag. This is because you have to be careful what you do in a signal handler, as well as how much you do. And then if you want the program to react to that flag, it behooves you to have it wake up from the interrupted system call and not go back to sleep for another 27 seconds until some network data arrives or whatever.

In addition to sigaction, you can also abort a system call by jumping out of a signal handler; in POSIX you have sigsetjmp and siglongjmp, which save and restore the signal mask. So that would be an alternative to setting a flag and checking it. If you use siglongjmp, the signal itself can be set up in such a way that the system call is restarted. The signal handler can then choose to return (syscall is restarted) or bail via siglongjmp (syscall is abandoned). I wouldn't necessarily want to be forced to use siglongjmp as the only way to get around system calls being always restartable.

Anyway, the Unix design showed itself to be capable of being "worse for now", with space to work toward "better eventually".

In the present story, the monolithic linker design isn't "worse". Let's just look at one aspect: crashing on corrupt inputs. Is it a bad requirement not to require robustness? No; the requirement is justifiable, because a linker isn't required to handle untrusted inputs. It's a tool-chain back-end. The only way it gets a bad input is if the middle parts of the toolchain violate its contract; the assembler puts out a bad object file and such. It can be a wasteful requirement to have careful contract checking between internal components.

Gabriel naturally makes references to Lisp in The Rise of Worse is Better, claiming that Common Lisp is an example of "better". But not everything is robust in Common Lisp. For instance, the way type declarations work is "worse is better": you make promises to the compiler, and then if you violate them, you have undefined behavior. Modifying a literal object is undefined behavior in Common Lisp, pretty much exactly like in ISO C. The Loop macro's clause symbols being compared as strings is worse-is-better; the "correct requirement" would have been to use keywords, or else symbols in the CL package that have to be properly made visible to be used.

I don't think that Gabriel had a well-reasoned and organized point in the essay, and he himself admitted that it was probably flawed (and on top of that, misunderstood).

The essay is about requirements; of course the assumption is that everyone is implementing the requirements right: "worse" doesn't refer to bugs (which would be a strawman interpretation) but to a sort of "taste" in the selection of requirements.

Requirements have so many dimensions that it's very hard to know which directions in that space point toward "better". There are tradeoffs at every corner. Adopt this "better" requirement here, but then you have to concede toward "worse" there.

If we look at one single requirement at a time, it's not difficult to acquire a sense of which direction is better or worse, but the combinations of thousands of requirements are daunting.

If we look for what is the truth, the insight in Gabriel's essay is that adherence to principled absolutes is often easily defeated by flexible reasoning that takes the context into account.

3.1415926 is undeniably a better approximation of pi than 3.14. But if you had to use pencil-and-paper calculations to estimate how many tiles you need for a circular room, it would be worse to be using 3.1415926. You would just do a lot of extra work, for no benefit; the estimate wouldn't be any better. Using the worse 3.14 is better than using 3.1415926; that may be the essence of "worse is better". On the other hand, if you have a calculator with a pi button, it would be worse to punch in 3.14 than just use the button, and the fact that the button gives you pi to 17 digits is moot. A small bit of context like that can change the way in which worse-is-better reasoning is applied.
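For reference, the per-signal choice described in that comment looks roughly like this in POSIX (generic usage, not code from the essay or from lld): sigaction() lets the program decide whether blocking calls are transparently restarted or fail with EINTR so the caller can react.

```cpp
#include <csignal>
#include <cstring>

static volatile sig_atomic_t g_flag = 0;
extern "C" void onSignal(int) { g_flag = 1; }

void installHandler(int signo, bool restartSyscalls) {
    struct sigaction sa;
    std::memset(&sa, 0, sizeof sa);
    sa.sa_handler = onSignal;
    sigemptyset(&sa.sa_mask);
    // SA_RESTART: interrupted system calls resume as if nothing happened.
    // Without it: they return -1 with errno == EINTR, and the caller can
    // check g_flag (or siglongjmp out of the handler instead).
    sa.sa_flags = restartSyscalls ? SA_RESTART : 0;
    sigaction(signo, &sa, nullptr);
}
```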