
“The Right Tool For The Job” Isn't

30 points, by tomblomfield, over 11 years ago

7 comments

Aaronontheweb, over 11 years ago
Some additional thoughts for the author:

- Training / warmup overhead for new employees: the amount of time a new developer needs to warm up when joining a company grows exponentially with the number of different technologies / tools used on the job. It's not just learning Redis - it's learning _your Redis_: all of the configuration details, deployment procedures, setup, integration with other services, data model, etc. specific to your company. This can be a killer for small companies, where everyone needs to know a little bit about everything...

- Future-proofing: the greater the number of different technologies used, the higher the likelihood of future driver / server / runtime / etc. incompatibility issues. A future release of Cassandra has a mission-critical feature for your business, but the CQL drivers that support it all require a later version of Node than your background workers currently run. Unfortunately, if you upgrade to a later version of Node, then your Redis driver won't work because it hasn't been updated to support the past 6 months of Node releases due to breaking changes Node.js introduced in a critical security update, and so on ad nauseam...

There will always be a level of this even in a tightly integrated stack, but you're setting yourself up for more frequent headaches the greater the number of technologies you have to maintain in parallel.

- Operational complexity: beyond the basic stuff like service outages, there's dealing with less catastrophic but more frequent ops concerns such as monitoring and configuration management. While there are a number of great generic solutions for monitoring the health of processes, services, and VMs, there's a level of application-specific monitoring that needs to be deployed for each service too: the query plans and cache hits for Postgres, the JMX metrics for any JVM application, compaction and read/write latency for Cassandra, etc.

Setting up that level of monitoring and _actually using it_ on a day-to-day basis across a large number of different platforms is expensive and cumbersome. If you're Facebook, it may not be a big cost to manage. If you're a 6-person engineering team at a startup, it's a bitch.

Great article!
Touche, over 11 years ago
Yeah, I agree - I wince when someone uses that phrase. The truth is that most technology choices, whether languages or databases, are designed to be general purpose. They are designed to fill as many needs as they can, because no one wants their product to be niche.
tieTYT, over 11 years ago
The author should really change that font contrast. It's almost the same color as the background.
shuzchen, over 11 years ago
Failing together usually only makes sense for the simplest of apps, not for anything that has a lot of moving parts. If my pub-sub message queue falls over, I'd rather the web workers stay up so visitors can still see the site - they'll just be without realtime notifications. If the background workers die, those tasks should stay on the queue, but everything else still runs as normal.

So really, the math works out such that if you fail together, you'll have X amount of downtime. If you fail separately, you'll have X*3 amount of degraded service.
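A rough way to see the tradeoff shuzchen describes (a sketch, not from the thread; the component names and failure rates are made up for illustration): simulate three components with independent outage windows and compare a coupled deployment, where the whole site is down whenever any component is down, against a decoupled one, where only a web-tier outage takes the site fully down and other failures merely degrade it.

```python
import random

# Hypothetical per-hour failure probability for each component (made-up numbers).
COMPONENTS = {"web": 0.001, "queue": 0.001, "workers": 0.001}
HOURS = 24 * 365  # simulate one year, hour by hour

random.seed(42)
full_down_coupled = 0    # coupled: site is down if ANY component is down
full_down_decoupled = 0  # decoupled: site is down only if the web tier is down
degraded_decoupled = 0   # decoupled: something is down, but the web tier is up

for _ in range(HOURS):
    down = {name for name, p in COMPONENTS.items() if random.random() < p}
    if down:
        full_down_coupled += 1
    if "web" in down:
        full_down_decoupled += 1
    elif down:
        degraded_decoupled += 1

print(f"coupled:   {full_down_coupled} hours of full downtime")
print(f"decoupled: {full_down_decoupled} hours of full downtime, "
      f"{degraded_decoupled} hours of degraded service")
```

With these assumed rates, the total number of bad hours is roughly the same either way, but decoupling turns about two-thirds of them from full outages into degraded service - which is the comment's point.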
tomblomfield, over 11 years ago
I've had similar experiences with poorly-architected SOA.

If your services are inter-dependent, your uptime probability suddenly becomes probability^n, for n services.
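To put numbers on that compounding (a quick worked example, not from the thread; the 99.9% per-service figure is assumed): if each of n inter-dependent services is independently up 99.9% of the time, all of them are up simultaneously only 0.999^n of the time.

```python
# Compound uptime for n inter-dependent services, each independently up 99.9%
# of the time (an assumed figure for illustration).
per_service_uptime = 0.999

for n in (1, 3, 5, 10):
    combined = per_service_uptime ** n
    downtime_hours = (1 - combined) * 24 * 365
    print(f"{n:2d} services: {combined:.4%} combined uptime "
          f"(~{downtime_hours:.0f} hours of downtime per year)")
```

Ten services at three nines each already mean roughly 87 hours a year when at least one dependency is unavailable.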
dw5ight, over 11 years ago
Ha, so true. Much like "judgement" is the best technical skill you can hire for - far too many tech wizards spend 6 weeks for a +5% perf gain :(
nottombrown, over 11 years ago
Author here. Added a clarification that services can go down for reasons other than data center issues. Who knew?