
TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.


Ask HN: How do you scale your service worldwide in 2021?

9 points | by diazc | about 4 years ago
Hi HNers,

I built a service, in the form of an app, during the course of the pandemic for people to watch videos and receive news.

Since the pandemic began, however, I've had a huge surge of people signing up and straining my backend, leaving me frantically trying to keep the servers up and respond to customers at the same time.

Scaling a backend/service to reach as many people as possible worldwide must have been far harder 10 or 20 years ago than it is today, yet I still struggled with it.

I'm curious how one would handle this when faced with a TON of customers, as I had to.

If you're curious about my stack: it's all on Heroku, with Redis, Node, and 2 Postgres servers.

I'd also like to know what you would have done better.

4 comments

sylvain_kerkour | about 4 years ago
Hi, I faced this situation after a Show HN (https://news.ycombinator.com/item?id=20105567). All I had to do was add a CDN (Cloudflare or AWS CloudFront) in front of the servers, so that static assets and pages are served from the CDN and API requests are handled by the servers.

If you are not familiar with CDNs yet, learn how the Cache-Control HTTP header works (https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control) so you can send a different header for API responses than for static-asset responses. Then learn how to purge the CDN after each release.

I would recommend starting with Cloudflare, as it's cheaper and easier to use.

A CDN with correct Cache-Control headers plus Heroku autoscaling should take you very far in terms of the traffic you can handle.
bem | about 4 years ago
If you like GraphQL and don't mind managed services:

- Fauna (http://fauna.com) or Hasura (https://hasura.io) for the backend
- Vercel (http://vercel.com) or Netlify (https://www.netlify.com) for the frontend and functions
kasey_junk | about 4 years ago
Fly.io would be my go-to stack right now (especially for read-heavy workloads). Edge compute, in conjunction with static assets served by a traditional CDN, will go a long way.
tracer4201 | about 4 years ago
I haven't used Heroku, but I'll assume you're running virtual hosts in the cloud as opposed to something like Lambda.

I don't have enough information from your post to give a concrete answer, but here are some things I might consider in your shoes.

How is your system struggling? Do you have metrics that measure CPU, memory, or disk usage on your application servers? Can you see a pattern of how they've regressed over time, in a time-series graph? Did you make a change that caused this? Are you certain the scaling issue you're trying to solve is due to real customer request growth and not some regression?

How many requests per second does your service accept right now?

What's the latency to process each request? Can you figure out everything that contributes to it? Which parts are cheap and which are expensive, or is it all uniform? Have you profiled?

You could measure this at the application level as the end time (just before you send a response) minus the start time (when you received the request).

Is your Redis instance on the same host? Is it a distributed cache? How is load on whichever host it lives on? Redis is pretty dang fast. Is it receiving too many connections? Or is it actually not doing much, with the issue elsewhere?

What's the load on Postgres? Are your data and table schemas optimized for your query patterns? Do you have the right indices set up?

Where do your customers live? Do you track latency from the web page or the mobile client? Is it slower for customers in some parts of the world than others? If so, have you thought about scaling out to other regions? If the component contributing latency sits in a specific part of your web page or frontend, does it even see many eyeballs? Should you drop that functionality entirely, or is it worth the ROI?

Are your Postgres servers writing in a distributed manner (sharding), or is one just a replica? Is one server getting more traffic than the other, so that load isn't evenly balanced?

Is there content on your page that is queried from Postgres but could be moved to a cache like Redis, or read from a static file?

What response patterns does your application have? When a customer signs up, can you just show them a thank-you screen and put any expensive processing into a queue? Once it succeeds or fails, send them an email.

We obviously want to avoid overcomplicating things. Is there a specific business function in your app that is really expensive to do on the server at your current scale? Can you move that business function to a separate server or make it its own web service?

How does load balancing work on your customer-facing page or service? Is it optimally distributing requests? How much of your traffic is real customers rather than someone spamming your API? Are you measuring this?