
Understanding Why Resilience Faults in Microservice Applications Occur

48 points by cmeiklejohn about 3 years ago

3 comments

leroman about 3 years ago
> in order to solve this problem I watched 77 presentations from industrial conferences and blog posts on the use of chaos engineering to identify the types of resilience issues that companies experience and how they go about identifying them.

I believe what is observed here is symptoms and not root cause.

My experience in this area tells me that after you go the "Micro Services" route, there is no coherent view of the system, and the holistic design & architecture is derived from the many integration issues instead of from trying to improve the data domain and its inherent business challenges. So basically (over)engineering vs. creating features.

I can't see how an academic could arrive at this conclusion unless he took part first-hand in several organizations going this route and contrasted that with first-hand experience of a more "monolith" approach, or of less emphasis on "micro-servicing-all-the-things".
jameshart about 3 years ago
I *think* the main thesis here is that chaos testing is the only way to detect 'unscalable error handling', but that most 'unscalable error handling' faults could be eliminated by testing for 'missing error handling' and 'unscalable infrastructure', which should be testable with less disruptive techniques than 'chaos'.

I'm not sure I follow the argument, though.

Just because you have demonstrated that a system is scalable, and that it is tolerant of errors, does not imply it is tolerant of errors at scale.

The example given of Expedia's error handling, which they claim could have been verified without chaos testing:

> Expedia tested a simple fallback pattern where, when one dependent service is unavailable and returns an error, another service is contacted instead afterwards. There is no need to run this experiment in production by terminating servers in production: a simple test that mocks the response of the dependent service and returns a failure is sufficient.

When the first service becomes unavailable, does the alternate service have a cold cache? Does that drive increased timeouts and retries? Is there a hidden codependency of that service on the thing which caused the outage of the first service?

Maybe that can all be verified by independent non-chaos scalability testing of that service.

But chaos testing is like the integration testing over the units that individual service load and mock-error tests have verified. Sure, in theory this service fails over to calling a different dependency. And in theory that dependency is scalable.

Running a chaos test confirms that those assumptions are correct: that scalability + error tolerance actually delivers resilience.
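To make the mocking approach from the quoted Expedia passage concrete, here is a minimal sketch of such a test. The names (`get_inventory`, the primary/fallback clients, `UpstreamError`) are illustrative assumptions, not Expedia's actual code.

```python
# Sketch: verify the fallback path by mocking a failing primary dependency,
# without touching production. All names here are hypothetical.
from unittest.mock import Mock


class UpstreamError(Exception):
    """Raised by a dependent-service client when the call fails."""


def get_inventory(primary, fallback, item_id):
    """Fallback pattern: try the primary dependency, then the alternate."""
    try:
        return primary.fetch(item_id)
    except UpstreamError:
        return fallback.fetch(item_id)


def test_fallback_is_used_when_primary_fails():
    primary = Mock()
    primary.fetch.side_effect = UpstreamError("primary unavailable")
    fallback = Mock()
    fallback.fetch.return_value = {"item_id": 42, "in_stock": True}

    result = get_inventory(primary, fallback, 42)

    assert result == {"item_id": 42, "in_stock": True}
    fallback.fetch.assert_called_once_with(42)


if __name__ == "__main__":
    test_fallback_is_used_when_primary_fails()
    print("fallback path verified in isolation")
```

As the comment argues, a test like this covers the missing-error-handling case, but it says nothing about cold caches, retry amplification, or hidden codependencies once the fallback takes real traffic.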
bob1029 about 3 years ago
After seeing the Audible block diagram, I'd add 4th & 5th takeaways:

> Most of this conversation can be obviated by spending time minimizing the number of systems, dependencies, vendors and other 3rd-party items required to satisfy the product objectives. Prefer more "batteries-included" ecosystems when feasible.

> Start with a monolithic binary, SQLite and a single production host. Change this only when measurements and business requirements *actually* force you to. Plan for the possibility that you might have to expand to more than one production host, but don't prioritize it as an inevitability. There is no such thing as an executable that is "too big" when the alternative is sharding your circumstances to the 7 winds.
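For illustration, a minimal sketch of that starting point using only the Python standard library: one process, one embedded SQLite file, one host. The table and endpoint are hypothetical, not anything from the Audible diagram.

```python
# Sketch: a single-binary service with an embedded SQLite database.
# No second service, no network hop between "app" and "data" tiers.
import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

db = sqlite3.connect("app.db")
db.execute("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)")


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Reads hit the embedded database directly.
        rows = db.execute("SELECT id, body FROM notes").fetchall()
        payload = json.dumps([{"id": r[0], "body": r[1]} for r in rows]).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()
```

The point of the sketch is the shape, not the libraries: every dependency in it ships with the runtime, so there is nothing to orchestrate until measurements force a change.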