
Why most clients will not get a scalable server for just $100

6 points by p0larboy almost 9 years ago

2 comments

tracker1 almost 9 years ago
TLDR: Outsourcing development is more expensive than doing it yourself if you have better skills...

In the end, there are still some differences; real-time chat has different needs than other types of applications. Scaling chat applications beyond a few million users has its own complexity... FB spends a *LOT* of money to scale its infrastructure. Twitter had a lot of hard growing up in a similar space.

As a hobby endeavor for a free app, it's bound to be harder still. It's a mixed blessing. Maybe it's time to get a corporate sponsor for the app.
wahern almost 9 years ago
I'm both surprised and not surprised that a mere 600 hits per second is considered crushing, and needs a $4000/month budget to handle.

In a recently failed startup, I was able to sustain a minimum of 5,000 live audio streams with dynamic, per-listener ad spots inserted into the stream server-side (the idea being that ads can't easily be blocked by client-side ad blockers). That's 5,000 unique, dynamically generated audio byte streams; not the same stream with the same ad spot broadcast to all listeners, or with ad spots being fetched out-of-band by special client-side code. And it was doing codec and format transcoding so that the same URL generated whatever was natively supported by the client (MP3, AAC, Vorbis codecs; ADTS, Flash, OGG, RTSP formats).

On top of that, you could dynamically inject ad spots per stream, and during each ad spot each context queried a single-threaded logic controller. So every 30-second interval there were about 10,000 small IPC messages passed over sockets between the front-end thread and the controller thread in under 2 seconds. And the controller thread maintained state for each session, so for each listener you could see various real-time stats, including the list of ad spots delivered.

And all of that was on a simple E3 Haswell. The front-end ad splicer and output format transcoder wasn't yet threaded, so the vast majority of the work occurred on a single core of a single E3 CPU. Linux handled NIC IRQs on the same core, so 5,000 wasn't even close to the limit. All in all, basically a single core using a single thread to deliver 5,000 streams (with 5,000 * N messages being generated at times) simultaneously using non-blocking I/O; much more work than any chat app would ever be asked to handle under such constraints.

(Though, to be clear, because these were live streams the codec transcoding didn't need to be done 5,000 times, and the transcoded ad spots could be cached. Rather, in this case there were only about 5 unique codecs for the incoming live streams. The codec transcoding occurred on a couple of back-end threads, with each thread multiplexing several input streams.)

The lesson I had to learn the hard way (_failed_ startup) is that good design and good architecture aren't worth anything. The vast majority of companies succeed with rather poorly written software, and by the time they hit real scalability problems they've already "succeeded" in the most important sense of the term.

So if you want a successful startup, use the environment you're most productive in and that allows you to churn out features the fastest. Worry about scalability later, because if your competitor is first to market, all the scalability in the world matters for precisely naught.
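[Editor's note] For concreteness, here is a minimal sketch of the pattern wahern describes: a single thread using non-blocking I/O to multiplex many listener byte streams, with each stream reporting small events to one single-threaded logic controller that owns all per-session state. This is an illustration in Python's asyncio, not the startup's actual stack; every name in it (CHUNK, PACE, the port, the in-process queue standing in for the socket-based IPC) is an assumption made for the sketch.

  import asyncio
  import itertools

  CHUNK = b"\x00" * 4096   # stand-in for transcoded audio bytes (assumed)
  PACE = 0.1               # seconds between chunks (illustrative pacing)

  async def main():
      events = asyncio.Queue()   # stands in for the socket IPC channel
      session_ids = itertools.count(1)
      stats = {}                 # per-session state, owned by the controller

      async def controller():
          # Single logic controller: consumes small messages from every
          # stream and maintains per-session state. (In the comment this
          # was a separate thread reached over sockets; here it is a
          # coroutine fed by an in-process queue.)
          while True:
              sid, event = await events.get()
              stats.setdefault(sid, []).append(event)

      async def serve_listener(reader, writer):
          # One coroutine per listener; the event loop multiplexes them
          # all on one thread with non-blocking I/O -- no thread or
          # process per connection.
          sid = next(session_ids)
          await events.put((sid, "connected"))
          try:
              while True:
                  writer.write(CHUNK)      # queued without blocking
                  await writer.drain()     # yields under backpressure
                  await asyncio.sleep(PACE)
          except ConnectionError:
              pass
          finally:
              await events.put((sid, "disconnected"))
              writer.close()

      asyncio.create_task(controller())
      server = await asyncio.start_server(serve_listener, "0.0.0.0", 8000)
      async with server:
          await server.serve_forever()

  if __name__ == "__main__":
      asyncio.run(main())

The design point survives even at this toy scale: every await is a cooperative yield, so one core services all connections, and because the controller alone touches the session state, no locking is needed.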