
Systems that defy detailed understanding

278 points by a7b3fa, about 5 years ago

13 comments

Ididntdothis, about 5 years ago

I often wonder if things would be better if systems were less forgiving. I bet people would pay more attention if the browser stopped rendering on JavaScript errors or malformed HTML/CSS. This forgiveness seems to encourage a culture of sloppiness which tends to spread. I have the displeasure of looking at quite a bit of PHP code. When I point out that they should fix the hundreds of warnings, the usual answer is “why? It works.” My answer usually is “are you sure?”

On the other hand, maybe this forgiveness is what allowed us to build complex systems.
smitty1e, about 5 years ago

Great article.

Recalls Gall's Law[1]: "A complex system that works is invariably found to have evolved from a simple system that worked."

Also, TFA invites a question: if handed a big ball of mud, is it riskier to start from scratch and go for something more triumphant, or to try to evolve the mud gradually?

I favor the former, but am quite often wrong.

[1] https://en.m.wikiquote.org/wiki/John_Gall
mannykannot, about 5 years ago
Big balls of mud result from a process that resembles reinforcement learning, in that modifications are made with a goal in mind and with testing to weed out changes that are not satisfactory, but without any correct, detailed theory about how the changes will achieve the goal without breaking anything.
carapace, about 5 years ago

"Introduction to Cybernetics", W. Ross Ashby

http://pespmc1.vub.ac.be/ASHBBOOK.html

> ... still the only real textbook on cybernetics (and, one might add, system theory). It explains the basic principles with concrete examples, elementary mathematics and exercises for the reader. It does not require any mathematics beyond the basic high school level. Although simple, the book formulates principles at a high level of abstraction.
xyzzy2020, about 5 years ago

I think this is useful even for systems (SW stacks) that are much smaller and "knowable": you start by observing, trying small things, observing more, trying different things, observing more, and slowly building a mental model of what is likely happening and where.

His defining distinction is whether you permanently work around a bug (not knowing it, but knowing _of_ it) versus finding it, knowing it, and fixing it.

Very interesting.
jborichevskiy, about 5 years ago

> If you run an even-moderately-sophisticated web application and install client-side error reporting for Javascript errors, it's a well-known phenomenon that you will receive a deluge of weird and incomprehensible errors from your application, many of which appear to you to be utterly nonsensical or impossible.

...

> These failures are, individually, mostly comprehensible! You can figure out which browser the report comes from, triage which extensions might be implicated, understand the interactions and identify the failure and a specific workaround. Much of the time.

> However, doing that work is, in most cases, just a colossal waste of effort; you'll often see any individual error once or twice, and by the time you track it down and understand it, you'll see three new ones from users in different weird predicaments. The ecosystem is just too heterogenous and fast-changing for deep understanding of individual issues to be worth it as a primary strategy.

Sadly far too accurate.
naringas, about 5 years ago

I firmly believe that *in theory* all computer systems can be understood.

But I agree with him that it has become impractical to do so. I just don't like it personally; I got into computing because it was supposed to be the most explainable thing of all (until I worked with the cloud and it wasn't).

I highly doubt that the original engineers who designed the first microchips and wrote the first compilers relied on 'empirical' tests to understand their systems.

Yet he is absolutely correct: it can no longer be understood, and when I wonder why, I think the economic incentives of the industry might be one of the reasons.

For example, the fact that chasing crashes down the rabbit hole is "always a slow and inconsistent process" will make any managerial decision maker feel rather uneasy. This makes sense.

Imagine if the first microprocessors were made by incrementally and empirically throwing together different logic gates until it all just sort of worked.
woodandsteel, about 5 years ago
From a philosophical perspective, I would say this is an example of the inherent finitudes of human understanding. And I would add that such finitudes are deeply intertwined with many other basic finitudes of human existence.
lucas_membrane, about 5 years ago

I suspect that systems that defy understanding demonstrate something that ought to be a corollary of the halting problem: just as you can't figure out for sure how long an arbitrary system will take to halt, or even whether it will halt at all, neither can you figure out how long it will take to work out what's going on when an arbitrary system reaches an erroneous state, or even whether you can work it out at all.
natmaka, about 5 years ago

Postel's Robustness Principle seems pertinent, along with "The Harmful Consequences of the Robustness Principle": https://tools.ietf.org/id/draft-thomson-postel-was-wrong-03.html
INTPnerd, about 5 years ago

Even if you can reason about the code enough to come to a conclusion that seems like it must be true, that doesn't prove your conclusion is correct. When you figure something out about the code, whether through reason and research or through tinkering and logging/monitoring, you should embed that knowledge into the code, and use releases to production as a way to test whether you were right.

For example, in PHP I often find myself wondering whether a class I am looking at might have subclasses that inherit from it. Since this is PHP and we have a certain amount of technical debt in the code, I cannot 100% rely on a tool to give me the answer. Instead I have to manually search through the code for subclasses and the like. If after such a search I am reasonably sure nothing is extending that class, I will change it to a "final" class in the code itself. Then I rerun our tests and lints. If I am wrong, eventually an error or exception will be thrown, and this will be noticed. But if that doesn't happen, the next programmer who comes along and wonders if anything extends that class (probably me) will immediately find the answer in the code: the class is final. This drastically narrows what is possible, which makes it much easier to examine the code and refactor or make necessary changes.

Another example: often you come across some legacy code that seems like it can no longer run (dead code). But you are not sure, so you leave the code in there for now. In harmony with this article, you might log or in some way monitor whether that path in the code ever gets executed. If, after trying out different scenarios to get it to run down that path, and after leaving the monitoring in place on production for a healthy amount of time, you conclude that the code really is dead, don't just add this to your mental model or some documentation; embed it in the code as an absolute fact by deleting the code. If this manifests as a bug, it will eventually be noticed and you can fix it then.

By taking this approach you are slowly narrowing down what is possible and simplifying the code in a way that makes it an absolute fact, not just a theory or a model or a document. As you slowly remove this technical debt, you will naturally adopt rules like: all new classes start out final, and are only changed to non-final when you actually need to extend them. Eventually you will be in a position to adopt new tools, frameworks, and languages that narrow down the possibilities even more, further embedding the mental model of what is possible directly into the code.
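A minimal sketch of the dead-path monitoring described above, in Python rather than PHP (the `tombstone` helper and the order fields are hypothetical): instrument the suspected-dead branch, ship it, and delete the branch only if the log stays silent for long enough.

```python
import logging

logger = logging.getLogger("deadcode")

def tombstone(label: str) -> None:
    # Record that a suspected-dead path actually ran in production.
    # If a label never shows up after a healthy soak period, the
    # branch it guards can be deleted as an established fact.
    logger.warning("suspected dead code executed: %s", label)

def apply_discount(order: dict) -> float:
    if order.get("legacy_coupon"):  # believed to be unreachable
        tombstone("apply_discount/legacy_coupon")
        return order["total"] * 0.9
    return order["total"]
```

The label identifies the exact branch, so a single log search answers "did this ever run?", the production-scale analogue of changing a class to `final` and waiting for something to break.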
jerzyt, about 5 years ago

Great read. A lot of hard-earned wisdom!
drvortex, about 5 years ago

What a long-winded article on what has been known to scientists for decades as "emergence". Emergent properties are system-level properties that are not obvious or predictable from the properties of individual components. Observing one ant is unlikely to tell you that several of these creatures can build an anthill.