
Stop Designing Your Web Application for Millions of Users When You Don't Have 100

120 points | by riz_ | 8 months ago

25 comments

zimpenfish | 8 months ago
At a previous job, there was an argument over a code review where I had done some SQL queries that fixed a problem but were not optimal. The other side were very much "this won't work for 1000 devices! we will not approve it!" whereas my stance was "we have a maximum of 25 devices deployed by our only customer, who is going to leave us next week unless we fix this problem today". One of the most disheartening weeks of my software development life.

(That was also the place where I had to have a multi-day argument over the precise way to define constants in Perl because they varied in performance, except it was a long-running mod_perl server process, the constants were only defined at startup, and it made absolutely zero difference once it had been running for an hour or more.)
ManBeardPc | 8 months ago
People seriously underestimate the number of clients you can serve from a single monolith + SQL database on a VPS or physical hardware. Pretty reliable as well, not many moving parts, simple to understand, fast to set up and keep up to date. Use something like Java, C#, Go or Rust. If you need to scale you can either scale vertically (bigger machines) or horizontally (load-balancer).

The SQL database is probably the hardest part to scale, but depending on your type of app there is a lot of room in optimizing indices or adding caching.

In my last company we could easily develop and provide on-call support for multiple production-critical deployments with only 3 engineers that way. Got so few calls that I had trouble remembering everything and had to look it up.
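
A minimal sketch of the two levers mentioned above (indices and caching), using Python's standard library for brevity; the table, query, and cache policy are illustrative assumptions, not anything from the comment:

```python
# Illustrative only: an index on the hot query column plus a tiny in-process
# cache often buys a long runway before any architectural change is needed.
import sqlite3
from functools import lru_cache

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, device_id TEXT, payload TEXT)")
# Index the column the hot path filters on, so lookups stop scanning the whole table.
conn.execute("CREATE INDEX idx_events_device ON events (device_id)")

@lru_cache(maxsize=1024)
def latest_event(device_id: str):
    # Cached per device; call latest_event.cache_clear() after writes if results
    # must always be fresh.
    row = conn.execute(
        "SELECT payload FROM events WHERE device_id = ? ORDER BY id DESC LIMIT 1",
        (device_id,),
    ).fetchone()
    return row[0] if row else None

print(latest_event("device-42"))  # None until events are inserted
```
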
myprotegeai | 8 months ago
This topic lacks nuance.

I agree with focusing on building things that people want, as well as iterating and shipping fast. But guess what? Shipping fast without breaking things requires *a lot* of infrastructure. Tests are infrastructure. CI and CD are infrastructure. Isolated QA environments are infrastructure. Monitoring and observability are infrastructure. Reproducible builds are infrastructure. Dev environments are infrastructure. If your team is very small, you cannot ship fast, safely, without these things. You will break things for customers, without knowing, and your progress will grind to a halt while you spend days trying to figure out what went wrong and how to fix it, instead of shipping, all while burning good will with your customers. (Source: have joined several startups and seen this first hand.)

There is a middle ground between "designing for millions of users" and "build for the extreme short term." Unfortunately, many non-technical people and inexperienced technical people choose the latter because it aligns with their limited view of what can go wrong in normal growth. The middle ground is orienting the pieces of your infrastructure *in the right direction*, and growing them as needed. All those things that I mentioned as infrastructure above can be implemented relatively simply, but they set the groundwork for future secure growth.

Planning is not the enemy and should not be conflated with premature optimization.
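
As a concrete (and deliberately tiny) example of "tests are infrastructure": a smoke test like the sketch below can run in any CI job against a freshly deployed instance. The SMOKE_BASE_URL variable and /health endpoint are hypothetical stand-ins, not anything from the comment:

```python
# A minimal smoke test: one cheap check that catches "the deploy is completely
# broken" before customers do. Runs under pytest or any test runner.
import os
import urllib.request

# Hypothetical target; point this at whatever environment the CI job deploys to.
BASE_URL = os.environ.get("SMOKE_BASE_URL", "http://localhost:8000")

def test_health_endpoint_responds():
    with urllib.request.urlopen(f"{BASE_URL}/health", timeout=5) as resp:
        assert resp.status == 200
```
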
255kb | 8 months ago
Guilty! I spent so much time recently asking myself these questions and trying to optimize my app and stack:

- what if people upload files that big, what about the bandwidth cost?
- what if they trigger thousands of events X and it costs 0.X$ per thousand?

Fast forward months, and it's a non-issue despite having paying customers. Not only did I grossly exaggerate the individual user's resource consumption, but I also grossly exaggerated the need for top-notch k8s auto-scaling this and that. Turns out you can go a long way with something simpler...
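
The kind of back-of-envelope estimate that defuses these worries takes a few lines; all numbers below are illustrative assumptions, not figures from the comment:

```python
# Rough cost estimate for a metered "per thousand events" price.
users = 200                    # paying customers (assumed)
events_per_user_day = 500      # assumed usage
cost_per_1000_events = 0.10    # dollars, hypothetical metered price

events_per_month = users * events_per_user_day * 30
monthly_cost = events_per_month / 1000 * cost_per_1000_events
print(f"{events_per_month:,} events/month ≈ ${monthly_cost:.2f}")
# 3,000,000 events/month ≈ $300.00 -- worth checking, rarely worth re-architecting for.
```
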
oliwarner | 8 months ago
The ethos described in TFA extends a lot further than this vague idea of "scale". Be honest: we've all built things without a test framework and 99.9% coverage. Many all the way to production and beyond. Many of us skimp on accessibility. Works For Me™ means it Works For Client™ and that's all anyone will budget for. Even client optimisations (bundling/shaking/compressing/etc) get thrown out the window in a hurry.

The problem with accepting this way of thinking is that you never budget for cut corners. After you build the MVP, all your client/boss wants is the next thing. They don't want to pay you again to do the thing you just did, but right.

And if you never get approval for testing, a11y, optimisations, the first time you hear about it is when it has lost you users. When somebody complains. When something breaks. Those numbers really matter when you're small. And it *always* looks bad for the devs. Your boss will dump on you.

So just be careful what corners you're cutting. Try to involve the whole team in the process of *consciously not doing something* so it's not a shock when you need to put things right later on.
michaelteter | 8 months ago
Yes, but...

Still make some effort to build as if this were a professional endeavor; use that proof-of-concept code to test ideas, but rewrite following reasonable code quality and architecture practices so you don't go into production without the ability to make those important scaling changes (for if/when you get lucky and get a lot of attention).

If your code is tightly coupled, functions are 50+ lines long, and objects are mutated everywhere (and in places you don't even realize), then making those important scaling changes will be difficult and slow. Then you might be tempted to say, "We should have built for 1 million users." Instead, you should be saying, "We should have put a little effort into the software architecture."

There are two languages that start with "P" which seem to often end up in production like this.
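
A minimal sketch of the mutation point, with a hypothetical Order type (not from the comment): returning new values instead of mutating shared objects keeps later changes such as caching or parallelism cheap.

```python
# Returning a new object instead of mutating the one passed in means callers
# elsewhere in the codebase can't be surprised by hidden state changes.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Order:
    subtotal: float
    discount: float = 0.0

def apply_discount(order: Order, rate: float) -> Order:
    return replace(order, discount=order.subtotal * rate)

order = Order(subtotal=100.0)
discounted = apply_discount(order, 0.1)
print(order, discounted)  # the original is untouched
```
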
dspillett | 8 months ago
[caveat: not read the text because if you click the partners link on the nag box, it looks like I need to object to each "legitimate interest" separately and I've not got time for that – congratulations darrenhorrocks.co.uk on keeping your user count lower, so you don't have to design for higher!]

The problem often comes from people solving the problem that they *want* to have, not the ones that they currently have. There is a pervasive view that if your site/app goes viral and you can't cope with the load, you lose the advantage of that brief glut of attention and might never get it again; if there is a next time, some competing site/app might get the luck instead. There is some truth in this, so designing in a way that allows for scaling makes some sense, but perhaps many projects give this too much priority.

Also, designing with scaling in mind from the start makes it easier to implement later; if you didn't, you might need a complete rewrite to efficiently scale. Of course, keeping scaling in mind might mean that you *intend* a fairly complete redo at that point, if you consider the current project to be a proof of concept of other elements (i.e. the application's features that are directly useful to the end user), the difference being that in this state you are at least aware of the need rather than it being something you find out when it might already be too late to do a good job.

One thing that a lot of people who overengineer for scale from day 1, with a complex mesh of containers running a service-based design, miss when they say "with a monolith all you can do is throw hardware at the problem" is that scaling your container count *is* essentially throwing (virtual) hardware at the problem, that this is a valid short-term solution in both cases, and that until you need to regularly run at the higher scale day-in, day-out, the simpler monolith will likely be more efficient and reduce running costs.

You need to find the right balance of "designing with scalability in mind", so it can be implemented quickly when you are ready. That is not easy to judge, so people tend to err on the side of just going directly for the massively scalable option despite the potential costs of that.
1GZ0 | 8 months ago
When you have more micro-services than users.
dimitar | 8 months ago
This phenomenon needs a term, how about Premature Architecture?
CM30 | 8 months ago
Or in other words, you're not Google or Facebook. You almost certainly don't need the level of performance and architectural design those companies need for their billion+ user systems.

And it doesn't help that a lot of people seem to drastically underestimate the amount of performance you can get from a simpler setup too. Like, a simple VPS can often do fine with millions of users a day, and a mostly static informational site (like your average blog or news site) could just put it behind Cloudflare and call it a day in most cases.

Alas, the KISS principle seems to have gone the way of the dodo.
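
A rough sanity check of the "millions of users a day on one VPS" claim; the traffic numbers below are illustrative assumptions, not measurements:

```python
# Back-of-envelope: daily visitors to requests per second.
daily_visitors = 2_000_000
pages_per_visit = 5
requests_per_page = 10        # HTML plus a handful of assets/API calls

requests_per_day = daily_visitors * pages_per_visit * requests_per_page
avg_rps = requests_per_day / 86_400
peak_rps = avg_rps * 5        # crude peak-to-average factor

print(f"average ≈ {avg_rps:.0f} req/s, assumed peak ≈ {peak_rps:.0f} req/s")
# average ≈ 1157 req/s, assumed peak ≈ 5787 req/s -- plausible for one tuned box
# when most of the content is static or cached behind a CDN.
```
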
ForHackernews | 8 months ago
I dunno, I've lived the other side of this where people made boneheaded choices early on, the product suddenly got traction, and then we were locked into lousy designs. At my last company, there were loads of engineers dedicated to re-building an entire parallel application stack with a view to an eventual migration.

A relatively small amount of upfront planning could have saved the company millions, but I guess it would have meant less work for engineers, so I suppose I should be glad that firms keep doing this.
anonzzzies | 8 months ago
I agree with this if you are talking fb/google scale; you will very likely not even get close to that, ever. But millions of users, depending on how it hits the servers over time, is really not very much. I run a site with well over 1m active users, but it runs on $50/mo worth of VPSs with an LB, and devving for it doesn't take a millisecond longer than for a non-scaling version.
n_ary | 8 months ago
The counter-argument is that you do not know whether your system will go big next week. If it indeed does, you do not have the Google/Meta/Amazon engineer legion to immediately scale it up without losing users[1]. Also, the gamble of a scalable system initially helps avoid the later death march, when management wants to ship the MVP into global production and then comes howling when users are bouncing because the system was not designed for scale.

[1] Unlike the early days, users are very quick to dismiss or leave immediately if the thing is breaking down, and of course they will go rant at all the social media outlets about the bad experience, further making life hell for "build things that don't scale".
the8472 | 8 months ago
Designing for *low latency* (even if only for a few clients) can be worth it though. Each action taking milliseconds vs. each action taking seconds will lead to vastly different user experiences and will affect how users use the application.
bcye | 8 months ago
No, I don't think I will let you share my personal data with 200 select partners (:
emmanueloga_ | 8 months ago
Focusing on customers and MVPs over complex architecture makes sense, but whether you have 10 or 1 million users, for any real business you need to be ready to recover from outages.

Building for resilience early sets the foundation for scaling later. That's why I'm not a fan of relying on "one big server." No matter how powerful, it can still fail.

By focusing on resilience, you're naturally one step closer to scaling across multiple servers. Sure, it's easy to overcomplicate things, but investing in scalable infrastructure from the start has benefits, even with low traffic; it's all about finding the right balance.
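
One low-cost resilience habit is retrying transient failures with backoff rather than assuming any single server or dependency never fails; a minimal sketch, where fetch_report is a hypothetical stand-in for a flaky downstream call:

```python
# Retry with jittered exponential backoff: 0.2s, 0.4s, 0.8s (plus noise).
import random
import time

def with_retries(fn, attempts=4, base_delay=0.2):
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # give up after the last attempt
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))

def fetch_report():
    # imagine an HTTP call or DB query that occasionally times out
    if random.random() < 0.5:
        raise TimeoutError("transient failure")
    return {"status": "ok"}

print(with_retries(fetch_report))
```
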
jillesvangurp | 8 months ago
The truth is in the middle. I've now had two startups where I came in to fix things where the system would fall over if there were more than 1 user. Literally: no transactional logic and messy interactions with the database, combined with front-end engineers just making a mess of doing a backend.

In one case the database was "mongo realm", which was something our Android guy randomly picked. No transactions, no security, and 100% of the data was synced client side. Also there was no iOS or web UI. Easiest decision ever to scrap that because it was slow, broken, and there wasn't really a lot there to salvage. And I needed those other platforms supported. It's the combination of over- and under-engineering that is problematic. There were some tears, but about six months later we had replaced 100% of the software with something that actually worked.

In both cases, I ended up just junking the backend system and replacing it with something boring but sane. In both cases getting that done was easy and fast. I love simple. I love monoliths. So no Kubernetes or any of that microservices nonsense. Because that's the opposite of simple. Which usually just means more work that doesn't really add any value.

In a small startup you should spend most of your time iterating on the UX and your product. Like really quickly. You shouldn't get too attached to anything you have. The questions that should be in the back of your mind are 1) how much time would it take a competent team to replicate what you have? and 2) would they end up with a better product?

Those questions should lead your decision making. Because if the answers are "not long" and "yes", you should just wipe out the technical debt you have built up and do things properly. Because otherwise somebody else will do it for you if it really is that good of an idea.

I've seen a lot of startups that get hung up on their own tech when it arguably isn't that great. They have the right ideas and vision but can't execute because they are stuck with whatever they have. That's usually when I get involved, actually. The key characteristic of great UX is that things are simple. Which usually also means they are simple to realize if you know what you are doing.

Cumulative effort does not automatically add up to value; often it actually becomes the main obstacle to creating value. Often the most valuable outcome of building software is actually just proving the concept works. Use that to get funding, customer revenue, etc. A valid decision is to then do it properly and get a good team together to do it.
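
A minimal sketch of the missing "transactional logic" mentioned above: grouping related writes in one transaction so a crash cannot leave half of them applied. Table and column names are illustrative, and sqlite3 stands in for whatever database is in use:

```python
# Either the order and all its items are saved, or nothing is.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.execute("CREATE TABLE order_items (order_id INTEGER, sku TEXT, price REAL)")

def create_order(conn, items):
    with conn:  # commits on success, rolls back on any exception
        cur = conn.execute("INSERT INTO orders (total) VALUES (?)",
                           (sum(price for _, price in items),))
        order_id = cur.lastrowid
        conn.executemany(
            "INSERT INTO order_items (order_id, sku, price) VALUES (?, ?, ?)",
            [(order_id, sku, price) for sku, price in items])
    return order_id

print(create_order(conn, [("sku-1", 9.99), ("sku-2", 4.50)]))
```
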
signaru | 8 months ago
This also applies when calculating losses from paying third party services.
pknerd | 8 months ago
Agreed, a simple index.php with jQuery storing data in an SQLite db file is sufficient for earning millions.

No, I'm not trolling. This is exactly what Peter Levis does.
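
The comment names PHP + jQuery + SQLite; as an analogous single-file sketch (not the commenter's actual stack), the same shape in Python's standard library looks roughly like this:

```python
# One process, one file of code, one database file: a tiny JSON notes API.
import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

db = sqlite3.connect("app.db")
db.execute("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)")

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        rows = db.execute("SELECT id, body FROM notes").fetchall()
        payload = json.dumps([{"id": i, "body": b} for i, b in rows]).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        with db:  # one small transaction per write
            db.execute("INSERT INTO notes (body) VALUES (?)", (body.decode(),))
        self.send_response(201)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8000), Handler).serve_forever()
```
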
rkachowski | 8 months ago
Almost 20 years ago the exact same sentiment was expressed in the ground-breaking classic "I'm going to scale my foot up your ass" by Ted Dziuba:

http://widgetsandshit.com/teddziuba/2008/04/im-going-to-scale-my-foot-up-y.html
KronisLV | 8 months ago
I've seen a bunch of things.

Sometimes you have people who try to build a system composed of a bunch of microservices, but the team size means that you have more services than people, which is a recipe for failure because you probably also need to work with Kubernetes clusters, manage shared code libraries between some of the services, and are suddenly dealing with a hard-to-debug distributed system (especially if you don't have the needed tracing and APM).

Other times I've seen people develop a monolithic system for something that *will* need to scale, but develop it in a way where you can only ever have one instance running (some of the system state is stored in memory), and suddenly when you need to introduce a key-value store like Valkey or a message queue like RabbitMQ or scale out horizontally, it's difficult, and you instead deal with HTTP thread exhaustion, DB thread pool exhaustion, and issues where the occasional DB connection hangs for ~50 seconds and stops *everything*, because a lot of the system is developed for sequential execution instead of eventual consistency.

Yet other times you have people who read about SOLID and DRY and make an enterprise architecture where the project itself doesn't have any tools or codegen to make your experience of writing code easier, but has *guidelines*, and if you need to add a DB table and work with the data, suddenly you need: MyDataDto <--> MyDataResource <--> MyDataDtoMapper <--> MyDataResourceService <--> MyDataService <--> MyDataDao <--> MyDataMapper/Repository, with additional logic for auditing and validation, some interfaces in the middle to "make things easier" which break IDE navigation (because it goes to where the method is defined instead of the implementation that you care about), and handlers for cleaning up related data, which might all be useful in some capacity but makes your velocity plummet. Even more so when the codebase is treated as a "platform" with a lot of bespoke logic due to "not invented here" syndrome, instead of just using common validation libraries etc.

Other times people use the service layer pattern above liberally and end up with hundreds of DB calls (the N+1 problem) instead of just selecting what they need from a DB view, because they want the code to be composable. Yet before long you have to figure out how to untangle that structure of nested calls and just throw an in-memory cache in the middle to at least save on the 95% of duplicated calls, so that filling out a table in the UI doesn't take 30 seconds.

At this point I'm just convinced that I'm cursed to run into all sorts of tricky-to-work-with codebases (including numerous issues with DB drivers, DB pooling libraries causing connections to hang, even OpenJDK updates causing a 10x difference in performance, as well as other just plain *weird* technical issues), but on the bright side, at the end of it all I might have a better idea of what to avoid myself.

Damned if you do, damned if you don't.

The sanest collection of vague architectural advice I've found is the 12 Factor App: https://12factor.net/ and maybe choosing the right tools for the job (Valkey, RabbitMQ, instead of just putting everything into your RDBMS; additional negative points for it being Oracle), as well as leaning in the direction of modular monoliths (one codebase initially, feature flags for enabling/disabling your API, scheduled processes, things like sending e-mails etc., which *can* be deployed as separate containers, or all run in the same one locally for development, or on your dev environments), with as many of the dependencies as possible runnable locally.

For the most part, you should optimize for developers, so that they can debug issues easily and change the existing code (loose coupling) while not drowning in a bunch of abstractions, and so that you can *eventually* scale, which in practice might mean adding more RAM to your DB server and adding more parallel API containers. KISS and YAGNI for the things that let you pretend that you're like Google. The most you should go in that direction is having your SPA (if you don't use SSR) and API as separate containers, instead of shipping everything together. That way routing traffic to them also becomes easier, since you can just use Caddy/Nginx/Apache/... for that.
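
A minimal sketch of the N+1 pattern described above versus a single joined query (or a DB view); table names are illustrative and sqlite3 stands in for whatever RDBMS is in use:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'Ann'), (2, 'Bob');
    INSERT INTO posts VALUES (1, 1, 'First'), (2, 1, 'Second'), (3, 2, 'Third');
""")

# N+1: one query for the list, then one more query per row.
authors = conn.execute("SELECT id, name FROM authors").fetchall()
for author_id, name in authors:
    conn.execute("SELECT title FROM posts WHERE author_id = ?", (author_id,)).fetchall()

# Single query: the database does the joining once.
rows = conn.execute("""
    SELECT a.name, p.title
    FROM authors a JOIN posts p ON p.author_id = a.id
""").fetchall()
print(rows)
```
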
Dalewyn | 8 months ago
Attempting to please everyone pleases no one.
bravetraveler | 8 months ago
All infrastructure, please.

I'm currently wrestling a stupid orchestration problem - DNS external from my domain controllers - because the architecture astronaut thought we'd need to innovate on a pillar of the fucking internet.
bschmidt1 | 8 months ago
The main thing is choosing the right technology that will allow scaling if it's part of the plan. It's not that you need to build for Twitter-level traffic from Day 1, but you should of course choose a tech stack and platform that you *can* scale if you need to.

(Twitter famously learned this the hard way.)

If I loved Ruby, why would I choose Rails for a high-traffic social media app? If I loved Python, why choose Django for an API-first service that doesn't have an admin dashboard? Yet I see it all the time: the developer only knows PHP/Laravel, so everything gets built in Laravel. Point is... why lock yourself into a monolithic setup when other options are available even in those languages? You can definitely choose the "wrong stuff", and for bad reasons, even if it works today. "Doing things that don't scale" seems frankly stupid whenever scaling is a big part of the plan for the company, especially when there are plenty of options available to build things in a smart, prepared way.

But go ahead, install your fav monolith and deploy to Heroku for now; it will just be more work later when it has to be dismantled in order to scale parts independently from others (an API gateway that routes high traffic, a serverless PDF renderer, job scripts like notifications, horizontal scaling of instances, etc.).

It's smarter though to just choose a more future-proof language and framework setup. The Node setup on my rpi4 has been on GCP, AWS, Heroku, and Render in various forms (as a monolith, as microservices, in between). Repo-wise, it's a "mono server" of 15+ APIs, apps, and websites that separate businesses in town rely on, yet I can work on it as one piece, easily move it around providers if I want, and horizontally scale as needed with no code changes. Because of essentially the folder structure of the app (I copied Vercel) and how Node require works, any export can be either imported into another file or deployed as a serverless function itself.

There's nothing about the codebase that forces me to choose "not scale" or "scale". I even have platform-agnostic code (libraries) that can run in either a browser or a server, talk about flexibility! No other language is as fast and scalable while also being this flexible.
injidup | 8 months ago
Do you mean, don't consider accessibility, security and privacy, because statistically speaking your 100 customers won't care about those things?