
Production Twitter on one machine? 100Gbps NICs and NVMe are fast

776 points by trishume over 2 years ago | 63 comments

BeefWellington over 2 years ago
I'm going to preface this criticism by saying that I think exercises like this are fun in an architectural/prototyping code-golf kind of way.

However, I think the author critically under-guesses the sizes of things (even just for storage) by a reasonably substantial amount. E.g.: quote tweets do not count against the size limit of the tweet field at Twitter. Likely they embed a tweet reference in some manner in place of the text of the quoted tweet itself, but regardless, a tweet takes up more than 280 unicode characters.

Also, nowhere in the article are hashtags mentioned. For a system like this to work you need some indexing of hashtags so you aren't doing a full scan of the entire text of every tweet any time someone decides to search for #YOLO. The system as proposed is missing a highly critical feature of the platform it purports to emulate. I have no insider knowledge, but I suspect that index is maybe the second-largest thing on disk on the entire platform, apart from the tweets themselves.
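The hashtag indexing this comment calls for is essentially an inverted index mapping each tag to the tweets containing it. A minimal sketch (illustrative Python; the data layout and names are assumptions, not Twitter's actual design):

```python
import re
from collections import defaultdict

HASHTAG = re.compile(r"#(\w+)")

class HashtagIndex:
    """Toy inverted index: lowercased hashtag -> list of tweet ids."""
    def __init__(self):
        self.postings = defaultdict(list)

    def add(self, tweet_id, text):
        # Index each hashtag occurrence at write time.
        for tag in HASHTAG.findall(text):
            self.postings[tag.lower()].append(tweet_id)

    def search(self, tag):
        # O(1) lookup instead of a full scan over every tweet's text.
        return self.postings.get(tag.lower(), [])

idx = HashtagIndex()
idx.add(1, "living my best life #YOLO")
idx.add(2, "markets up #yolo #stonks")
print(idx.search("YOLO"))  # -> [1, 2]
```

On-disk layout, sharding and ranking are where the real size and complexity come from, which is the comment's point about this index plausibly being the second-largest thing on disk.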
aetimmes over 2 years ago
(Disclaimer: ex-Twitter SRE)

> There's a bunch of other basic features of Twitter like user timelines, DMs, likes and replies to a tweet, which I'm not investigating because I'm guessing they won't be the bottlenecks.

Each of these can, in fact, become its own bottleneck. Likes in particular are tricky because they change the nature of the tweet struct (at least in the manner OP has implemented it) from WORM to write-many, read-many, and once you do that, locking (even with futexes or fast atomics) becomes the constraining performance factor. Even with atomic increment instructions and a multi-threaded process model, many concurrent requests for the same piece of mutable data will begin to resemble serial accesses - and while your threads are waiting for their turn to increment the like counter by 1, traffic is piling up behind them in your network queues, which causes your throughput to plummet and your latency to skyrocket.

OP also overly focuses on throughput in his benchmarks, IMO. I'd be interested to see the p50/p99 latency of the requests graphed against throughput - as you approach the throughput limit of an RPC system, average and tail latency begin to increase sharply. Clients are going to have timeout thresholds, and if you can't serve the vast majority of traffic under that threshold consistently (while accounting for the traffic patterns of viral tweets I mentioned above) then you're going to create your own thundering herd - except you won't have other machines to offload the traffic to.
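One standard mitigation for the hot-counter problem described here is to shard the counter so concurrent writers rarely collide on the same slot. A toy Python illustration of the structure (not how Twitter actually stores likes; Python's GIL hides the real performance effect, so only the shape of the trade-off is shown):

```python
import threading
import random

class ShardedCounter:
    """Split one hot counter into N slots: writers pick a random slot,
    readers sum all slots. Trades read cost for write concurrency."""
    def __init__(self, nshards=16):
        self.shards = [0] * nshards
        self.locks = [threading.Lock() for _ in range(nshards)]

    def incr(self):
        i = random.randrange(len(self.shards))
        with self.locks[i]:       # contention spread across N locks
            self.shards[i] += 1

    def value(self):
        return sum(self.shards)   # reads pay the cost of summing

likes = ShardedCounter()
threads = [threading.Thread(target=lambda: [likes.incr() for _ in range(1000)])
           for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(likes.value())  # -> 8000
```

The design choice is exactly the one the comment describes: turning a single write-many hot spot into many mostly-uncontended ones, at the price of more expensive reads.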
jameshart over 2 years ago
Getting everything onto one machine works great until... it no longer fits on one machine.

You add another feature and it requires a little bit more RAM, and another feature that needs a little bit more, and... eventually it doesn't all fit.

Now you have to go distributed.

And your entire system architecture and all your development approaches are built around assumptions of locality and cache-line optimization, and all of a sudden none of that matters any more.

Or you accept that there's a hard ceiling on what your system will ever be able to do.

This is like building a video game that pushes a specific generation of console hardware to its limit - fantastic! You got it to do realtime shadows and 100 simultaneous NPCs on screen! But when the level designer asks if they can have water in one level, you have to say 'no', there's no room to add screenspace reflections; the console can't handle that as well. And that's just a compromise you have to make, and you ship the game with the best set of features you can cram into that specific hardware.

You certainly *could* build server applications that way. But it feels like there's something fundamental to how service businesses operate that pushes away from that kind of hyperoptimized model.
TacticalCoder over 2 years ago
TFA, to me, touches on something I've wondered about for a very long time: what are the implications of CPU and storage growing at *much* faster rates than the human population?

Back in the 486 days you wouldn't be keeping, in RAM, data about every single human on earth (let's take "every single human on earth" as the maximum number of humans we'll offer our services to with our hypothetical server). Nowadays keeping in RAM, say, the GPS coordinates of every single human on earth (if we had a means to fetch the data) is doable. On my desktop. In RAM.

I still don't know what the implications are.

But I can keep the coordinates of every single human on earth in my desktop's RAM.

Let that sink in.

P.S.: no need to nitpick whether it's actually doable on my desktop *today*. That's not the point. If it's not doable today, it'll be doable tomorrow.
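The claim checks out on the back of an envelope (population figure and precision are assumptions):

```python
# Napkin math: GPS coordinates for every human on earth, held in RAM.
people = 8_000_000_000            # world population, roughly
bytes_per_pair = 2 * 4            # lat + lon as 32-bit floats
total_gb = people * bytes_per_pair / 1e9
print(total_gb)  # -> 64.0 (GB): large, but within reach of a big workstation
```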
sethev over 2 years ago
John Carmack tweeted something that made me noodle on this too:

> It is amusing to consider how much of the world you could serve something like Twitter to from a single beefy server if it really was just shuffling tweet sized buffers to network offload cards. Smart clients instead of web pages could make a very large difference. [1]

Very interesting to see the idea worked out in more detail.

[1] https://twitter.com/id_aa_carmack/status/1350672098029694998
drewg123 over 2 years ago
How much bandwidth does Twitter use for images and videos? Less than 1.4Tb/s globally? If so, we could probably fit that onto a second machine. We can currently serve over 700Gb/s from a dual-socket Milan-based server [1]. I'm still waiting for hardware, but assuming there are no new bottlenecks, that should directly scale up to 1.4Tb/s with Genoa and ConnectX-7, given the IO pathways are all at least twice the bandwidth of the previous generation.

There are storage size issues (like how big is their long tail; quite large, I'd imagine), but it's a fun thing to think about.

[1] https://people.freebsd.org/~gallatin/talks/euro2022.pdf
habibur over 2 years ago
He will be in for a surprise.

HTTP with Connection: keep-alive can serve 100k req/sec. But that's for one client being served repeatedly over one connection. And this is the inflated number that's published in web server benchmark tests.

For a more practical, down-to-earth test, you need to measure performance without keep-alive. Requests per second will drop to 12k/sec then.

And that's for HTTP without encryption or the SSL handshake. Use HTTPS and watch it fall to only 400 req/sec under load test (without Connection: keep-alive).

That's what I observed.
summerlight over 2 years ago
I think many people in this thread are making the mistake of ignoring evolutionary factors in systems engineering. If a system doesn't need to adapt or change, lots of things can be much more efficient, easier and simpler, likely on the order of 10x~100x. But you gotta appreciate that we're all paid because we need to swap wheels on running trains (or even engines on flying airplanes). A large fraction of the demand for redundancy, introspection, abstraction and generalization comes from this.

Why do we want to apply ML at the cost of a significant fleet cost increase? Because it can make the overall system perform consistently against external changes via generalization, so the system can evolve more cheaply. Why do we want to implement a complex logging layer although it doesn't bring direct gains in system performance? Because you need to inspect the system to understand its behavior and find out where it needs to change. The list can go on, and I can give you hundreds of reasons why all these apparently unnecessary complexities and overheads can be important for a system's longevity.

I don't deny the existence of accidental complexities (probably Twitter could become 2~3x simpler and cheaper given sufficient eng resources and time), but in many cases you probably won't be able to confidently say whether some overheads are accidental or essential, since systems engineering is essentially a highly predictive/speculative activity. To make this happen, you gotta have a precise understanding of how the system "currently works" to make a good bet, rather than a re-imagination of the system with your own wish list of how the system "should work". There's a certain value in the latter option, but it's usually more constructive to build an alternative rather than complaining about the existing system.

This post is great since the author actually tried to build something to prove its possibility; this knowledge could turn out to be valuable for other Twitter alternatives later on.
jasonhansel over 2 years ago
If you really wanted to run Twitter on one machine at any cost, wouldn't an IBM mainframe be much more practical?

You can even run Linux on them now. The specs he cites would actually be fairly small for a mainframe, which can reach up to 40TB of memory.

I'm not saying this is a *good* idea, but it seems better than what the OP proposes.
varunkmohan over 2 years ago
Good analysis. Obviously this doesn't handle cases like redundancy, and doesn't handle some of the other critical workloads the company has. However, it does show how much real compute bloat these companies actually have - https://twitter.com/petrillic/status/1593686223717269504 - where they use 24 million vCPUs and spend 300 million a month on cloud.
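Taking the quoted figures at face value, the implied unit cost is easy to sanity-check:

```python
# Figures as quoted in the linked tweet (not independently verified).
vcpus = 24_000_000
monthly_spend_usd = 300_000_000
print(monthly_spend_usd / vcpus)  # -> 12.5 (dollars per vCPU per month)
```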
agilob over 2 years ago
The title reminded me of this: https://www.phoronix.com/news/Netflix-NUMA-FreeBSD-Optimized (2019), and this from two years later: https://papers.freebsd.org/2021/eurobsdcon/gallatin-netflix-freebsd-400gbps/ (2021)
PragmaticPulp over 2 years ago
Very cool exercise. I enjoyed reading it.

I see a lot of comments here assuming that this proves something about Twitter being inefficient. Before you jump to conclusions, take a look at the author's code: https://github.com/trishume/twitterperf

Notably absent are things like *serving HTTP*, not to even mention HTTPS. This was a fun exercise in algorithms, I/O, and benchmarking. It wasn't actually imitating anything that resembles actual Twitter, or even a usable website.
mgaunard over 2 years ago
All web and cloud technologies are inherently inefficient, and most programmers don't know networking, or even how hardware works, sufficiently well to optimize for high throughput and low latency.

There was an article just yesterday about how Jane Street had developed an internal exchange way faster than any actual exchange by building it from the ground up, thinking about how the hardware works and how agents can interact with it.

Modern software like Slack or Twitter is just reinventing what IRC or BBSes did in the past, and those were much leaner, more reliable and snappier than their modern counterparts, even if they didn't run at the same scale.

It wouldn't be surprising at all that you could build something equivalent to Twitter on just one beefy machine, maybe two for redundancy.
samsquire over 2 years ago
I recommend this table of latency figures to any software engineer:

https://gist.github.com/jboner/2841832

Essentially, IO is expensive except within a datacenter - but even in a datacenter, you can do a lot of iterations of a hot loop in the time it takes to ask a server for something.

There is a whitepaper which shows the raw throughput and performance of single-core systems outperforming scalable systems. It should be required reading for those developing distributed systems.

http://www.frankmcsherry.org/assets/COST.pdf - a summary: http://dsrg.pdos.csail.mit.edu/2016/06/26/scalability-cost/
SilverBirch over 2 years ago
I think one of the underestimated interesting points about Twitter as a business is that this is the core. Yes, Twitter is 140 characters; it's got "300m users", which is probably 5m real heavy users. So yes, you could do a lot of "140 characters, a few tweets per person, a few million users" on very little hardware. But that's why Twitter's a shit business!

How much RAM did your advertising network need? Because *that* is what makes Twitter a business! How are you building your advertiser profiles? Where are you accounting for fast rollout of a Snapchat/Instagram/BeReal/TikTok equivalent? Oh look, your 140 characters just turned into a few hundred megs of video that you're going to transcode 16 different ways for QoS. Ruh roh!

How are your 1,000 engineers going to push their code to production *on one machine*?

Almost always, the answer to "do more work" or "buy more machines" is "buy more machines".

All I'm saying is I'd change it to "Toy Twitter on one machine", not Production.
jiggawatts over 2 years ago
Something I've found a lot of modern IT architects seem to ignore is "write amplification", or the equivalent effect for reads.

If you have a 1 KB piece of data that you need to send to a customer, ideally that should require *less* than 1 KB of actual NIC traffic, thanks to HTTP compression.

If processing that 1 KB takes more than 1 KB of total NIC traffic within and out of your data centre, then you have some level of *amplification*.

Now, for writes, this is often unavoidable because redundancy is pretty much mandatory for availability. Whenever there's a transaction, an amplification factor of 2-3x is assumed for replication, mirroring, or whatever.

For reads, good indexing and data structures within a few large boxes (like in the article) can reduce the amplification to just 2-3x as well. The request will likely need to go through a load balancer of some sort, which amplifies it, but that's it.

So if you need to process, say, 10 Gbps of egress traffic, you need a total of something like 30 Gbps at least, but 50 Gbps for availability and handling of peaks.

What happens in places like Twitter is that they go *crazy* with the microservices. Every service, every load balancer, every firewall, proxy, envoy, NAT and gateway adds to the multiplication factor. Typical Kubernetes or similar setups will have a minimum NIC data amplification of 10x *on top of* the 2-3x required for replication.

Now *multiply* that by the crazily inefficient JSON-based protocols, the GraphQL, and the other insanity layered onto "modern" development practices.

This is how you end up serving 10 Gbps of egress traffic with *terabits* of internal communications. This is how Twitter apparently "needs" 24 million vCPUs to host *text chat*.

Oh, sorry... text chat with the occasional postage-stamp-sized, potato-quality static JPG image.
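The comment's arithmetic can be made concrete (all multipliers are the comment's own assumed figures, not measurements):

```python
# Amplification napkin math for a microservice-heavy deployment.
egress_gbps = 10          # traffic actually served to users
replication = 3           # redundancy amplification for availability
hops = 10                 # assumed minimum NIC amplification from service hops

internal_gbps = egress_gbps * replication * hops
print(internal_gbps)  # -> 300 (Gbps moved internally to serve 10 Gbps)
```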
bitbckt over 2 years ago
Regarding tweet distribution: I was one of the folks who built the first scalable solution to this problem at Twitter (called Haplocheirus). We used the Yahoo "Feeding Frenzy" design, pushing tweets through a Redis-backed caching layer.

Feel free to continue using that (historically correct) answer in interviews. :P
justapassenger over 2 years ago
Saying this is production Twitter is like saying that rsync is Dropbox.
Cyph0n over 2 years ago
More like “barebones, in-memory, English-only Twitter clone on one machine”.<p>Edit: Still a nice writeup!
Tepix over 2 years ago
The new EPYC servers can be filled with 6TB of RAM and 96 cores per socket. Fun times.
kissgyorgy over 2 years ago
I feel like people writing posts like this have never worked in a big team at a big company on a big project. It is so obviously impossible to do this, and Twitter has so many more features users will never even see - but sure, re-implement it in a couple hundred lines of Rust and Twitter will be saved...
ricardobeat over 2 years ago
> super high performance tiering RAM+NVMe buffer managers which can access the RAM-cached pages almost as fast as a normal memory access are mostly only detailed and benchmarked in academic papers

Isn't this exactly what modern key-value stores like RocksDB, LMDB etc. are built for?
morphle over 2 years ago
Why not a single FPGA with 100Gbps ethernet or PCIe with NVM attached? Around $5K for the hardware and $5K for the traffic per month. The software would be a bit trickier to write, but you would get 100x the performance for the same price.
keewee7 over 2 years ago
In the coming years we will probably see a lot of complicated microservice architectures be replaced by well-designed and optimized Rust (and modern C++) monoliths that use simple replication to scale horizontally.
gravypod over 2 years ago
As a side note:

> I did all my calculations for this project using Calca (which is great although buggy, laggy and unmaintained. I might switch to Soulver) and I'll be including all calculations as snippets from my calculation notebook.

I've always wanted an {open source, stable, unit-aware} version of something like this which could be run locally or in the browser (with persistence on a server). I have yet to find one. It would be a massive help to anyone who does systems design.
eatonphil over 2 years ago
This is a great exercise in napkin math, even with the constraints you've set for yourself that don't fully approximate Twitter (yet). Thanks!
pengaru over 2 years ago
This post reminds me of an experience I had in ~2005 while at Hostway Chicago.

Unsolicited story time:

Prior to my joining the company, Hostway had transitioned from handling all email in a dispersed fashion across shared-hosting Linux boxes with sendmail et al., to a centralized "cluster" with disparate horizontally-scaled slices of edge-SMTP servers, delivery servers, POP3 servers, IMAP servers, and spam scanners. That seemed to be their scaling plan, anyway.

In the middle of this cluster sat a refrigerator-sized EMC fileserver for storing the Maildirs. I forget the exact model, but it was quite expensive and exotic for the time, especially for an otherwise run-of-the-mill commodity-PC-based hosting company. It was a big shiny expensive black box, and everyone involved seemed to assume it would Just Work and they could keep adding more edge-SMTP/POP/IMAP or delivery servers if those respective services became resource-constrained.

At some point a pile of additional customers were migrated into this cluster, through an acquisition if memory serves, and things started getting slow/unstable. So they added more machines to the cluster, and the situation just got worse.

Eventually it got to where every Monday was known as Monday Morning Mail Madness, because all weekend nobody would read their mail. Then come Monday, there's this big accumulation of new unread messages that now needs to be downloaded and either archived or deleted.

The more servers they added, the more NFS clients they added, and this just increased the ops/sec experienced at the EMC. Instead of improving things they were basically DDoSing their overpriced NFS server by trying to shove more iops down its throat at once.

Furthermore, by executing delivery and POP3+IMAP services on separate machines, they were preventing any sharing of buffer caches across these services - services which are embarrassingly cache-friendly when colocated. When the delivery servers wrote emails through to the EMC, the emails were also hanging around locally in RAM, and these machines had several gigabytes of RAM - only to *never* be read from. Then when customers would check their mail, the POP3/IMAP servers *always* needed to hit the EMC to access new messages - data that was *probably* sitting uselessly in a delivery server's RAM somewhere.

None of this was under my team's purview at the time, but when the castle is burning down every Monday, it becomes an all-hands-on-deck situation.

When I ran the rough numbers on what was actually being performed, in terms of the amount of real data being delivered and retrieved, it was a trivial amount for a moderately beefy PC of the time to handle.

So it seemed like the obvious thing to do was simply colocate the primary services accessing the EMC so they could actually profit from the buffer cache, and shut off most of the cluster. At the time this was POP3 and delivery (smtpd); luckily IMAP hadn't taken off yet.

The main barrier to doing this all with one machine was the amount of RAM required, because all the services were built on classical UNIX-style multi-process implementations (courier-pop and courier-smtp, IIRC). So in essence, the main reason most of this cluster existed was just to have enough RAM for running multiprocess POP and SMTP sessions.

What followed was a kamikaze-style, developed-in-production conversion of courier-pop and courier-smtp to use pthreads instead of processes, by yours truly. After a week or so of sleepless nights we had all the cluster's POP3 and delivery running on a single box with a hot spare. Within a month or so, IIRC, we had powered down most of the cluster, leaving just spam scanning and edge-SMTP for horizontal scaling, since those didn't touch the EMC. Eventually even the EMC was powered down, in favor of drbd+nfs on more commodity Linux boxes with Coraid.

According to my old notes it was a Dell 2850 with 8GB RAM we ended up with for the POP3+delivery server, plus an identical hot spare, replacing *racks* of comparable machines that just had less RAM. >300,000 email accounts.
spullara over 2 years ago
When I was working there, I implemented my patent during a hack week (given a set of follows, return the list of matching tweet ids - very similar to his prototype):

https://patents.google.com/patent/US20120136905A1/en (licensed under the Innovators Patent Agreement, https://github.com/twitter/innovators-patent-agreement)

I could have definitely served all the chronological timeline requests on a normal server with lower latency than the 1.1 home timeline API. A bunch of the numbers in his calculations are off, but not by an order of magnitude. The big issue is that since I left, Twitter has added ML ads, the ML timeline and other features that make current Twitter much harder to fit on a machine than 2013 Twitter.
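The operation described ("given a set of follows, return the list of matching tweet ids") is, at its simplest, a k-way merge of per-author posting lists. A toy Python sketch with made-up data (the patent's actual mechanism is more involved):

```python
import heapq

# Per-author tweet ids sorted newest-first (ids here stand in for timestamps).
tweets_by_author = {
    "alice": [9, 5, 2],
    "bob":   [8, 4],
    "carol": [7, 1],
}

def timeline(follows, limit=5):
    """Merge the followed authors' sorted lists into one
    reverse-chronological timeline of tweet ids."""
    merged = heapq.merge(*(tweets_by_author[u] for u in follows), reverse=True)
    return list(merged)[:limit]

print(timeline({"alice", "bob"}))  # -> [9, 8, 5, 4, 2]
```

`heapq.merge` is lazy, so with a `limit` it only pulls as many elements as the page needs - the property that makes this shape of query cheap to serve.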
KaiserPro over 2 years ago
> A friend points out that IBM Z mainframes have a bunch of the resiliency software and hardware infrastructure I mention

Sure it's expensive, and you have to deal with IBM, who are either domain experts or mouth breathers. Sure it'll cost you $2m, but!

The opex of running a team of 20 engineers is pretty huge. Especially as most of the hard bits of redundant multi-machine scaling are solved for you by the mainframe. Redundancy comes for free (well, not free, because you are paying for it in hardware/software).

Plus, IBM Redbooks are the gold standard of documentation. Just look at this: https://www.redbooks.ibm.com/redbooks/pdfs/sg248254.pdf - it's the Redbook for GPFS (a scalable multi-machine filesystem; think ZFS but with a bunch more hooks).

Once you've read that, you'll know enough to look after a cluster of storage.
viraptor over 2 years ago
This is in no way a criticism of the analysis. But I think a hidden cost of an idea like this (that hasn't been pointed out) is the ability to extend the features. With a tightly integrated system like that, you may want to add a frobnicator as a test - now the whole system would need to change to accommodate it, because all the timeline processing happens more or less in memory. Making things external/network-based adds overhead, but makes plugging in/removing an extra feature much easier. If you count the cost of the work required to make changes, then burning money on "unnecessary" horizontal scaling may not be a bad idea. Wanna add new ads analytics? Just plug into this common firehose/summary endpoint without worrying about the internals. Wanna test a new implementation of some component? Run both in parallel, plugging into the same inputs. Etc.
firstSpeaker over 2 years ago
This is one of the most interesting parts of the whole post for me:

> Through intense digging I found a researcher who left a notebook public including tweet counts from many years of Twitter's 10% sampled "Decahose" API and discovered the surprising fact that tweet rate today is around the same as or lower than 2013! Tweet rate peaked in 2014 and then declined before reaching new peaks in the pandemic. Elon recently tweeted the same 500M/day number which matches the Decahose notebook and 2013 blog post, so this seems to be true! Twitter's active users grew the whole time so I think this reflects a shift from a "posting about your life to your friends" platform to an algorithmic content-consumption platform.

So, the number of writes has been about the same for a good long while.
swellguy over 2 years ago
You have violated the number one rule in Silicon Valley: if it doesn't take at least "N" "engineers" to "solve" a problem who report directly to moi, then how am I relevant? So I agree this is entirely possible, but no one would build this with any funding.
henning over 2 years ago
No rate limiting, API data, quote tweets, view counts, threads, likes, mentions, notifications, ads, video, images, account blocking (permanent or TTL), account muting (permanent or TTL), word filtering (permanent or TTL), moderation/reporting, or user profile storage - and tweets as displayed show more than just the tweet itself. No mention that tweet activity all occurs concurrently, and therefore the loading script is not at all a realistic estimate of real activity.

But sure, go ahead and take this as evidence that 10 people could build Twitter, as I'm sure that's what will happen to this post. If that's true, why haven't they already done so? It should only take a couple of weeks and one beefy machine, right?
siliconc0w over 2 years ago
Enjoyed the write-up. I'd be curious to see Twitter's spend broken down by functionality, given all the extra stuff they do. I imagine it's a non-linear relationship where the company has to burn more and more cash with every new feature (especially things like advertising, which you need once your spend surpasses what a simple subscription can cover), and more scale adds more complexity, bureaucracy and overhead (management, HR & recruiting, legal & accounting, etc.). While there is likely waste (some of it inevitable; see 'overhead' above), a super-barebones Twitter can maybe run on one beefy machine, but a 'real' Twitter ends up needing millions of dollars plus lots of people.
systemvoltage over 2 years ago
It's worth noting Stack Overflow's production architecture: https://stackexchange.com/performance
mcqueenjordan over 2 years ago
Fun thought experiment! I can&#x27;t help but be reminded of the Good Will Hunting quote, though:<p>SEAN: So if I asked you about art you’d probably give me the skinny on every art book ever written. Michelangelo? You know a lot about him. Life’s work, political aspirations, him and the pope, sexual orientation, the whole works, right? But I bet you can’t tell me what it smells like in the Sistine Chapel. You’ve never actually stood there and looked up at that beautiful ceiling. Seen that.
knubie over 2 years ago
> I'm not sure how real Twitter works but I think based on Elon's whiteboard photo and some tweets I've seen by Twitter (ex-)employees it seems to be mostly the first approach using fast custom caches/databases and maybe parallelization to make the merge retrievals fast enough.

I think Twitter does (or at some point did) use a combination of the first and second approaches. The vast majority of tweets used the first approach, but tweets from accounts above a certain follower threshold used the second.
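The hybrid described here (fan-out on write for most accounts, fan-out on read past a follower threshold) can be sketched as follows; the threshold and data are invented for illustration, and a real system would also merge by timestamp:

```python
CELEB_THRESHOLD = 3  # assumed follower cutoff for switching strategies

follower_lists = {"alice": ["bob", "carol"],
                  "celeb": ["bob", "carol", "dave", "erin"]}
inboxes = {}      # fan-out-on-write: precomputed per-user timelines
celeb_posts = {}  # fan-out-on-read: fetched lazily at request time

def post(author, tweet):
    if len(follower_lists[author]) >= CELEB_THRESHOLD:
        celeb_posts.setdefault(author, []).append(tweet)   # write once
    else:
        for f in follower_lists[author]:                   # push to each inbox
            inboxes.setdefault(f, []).append(tweet)

def read_timeline(user, follows):
    tl = list(inboxes.get(user, []))
    for a in follows:
        if a in celeb_posts:
            tl.extend(celeb_posts[a])  # merge celebrity tweets on read
    return tl

post("alice", "a1")
post("celeb", "c1")
print(read_timeline("bob", ["alice", "celeb"]))  # -> ['a1', 'c1']
```

The routing decision is the whole trick: it caps write amplification for high-follower accounts while keeping reads cheap for everyone else.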
fleddr over 2 years ago
"Through intense digging I found a researcher who left a notebook public including tweet counts from many years of Twitter's 10% sampled "Decahose" API and discovered the surprising fact that tweet rate today is around the same as or lower than 2013! Tweet rate peaked in 2014 and then declined before reaching new peaks in the pandemic. Elon recently tweeted the same 500M/day number which matches the Decahose notebook and 2013 blog post, so this seems to be true! Twitter's active users grew the whole time so I think this reflects a shift from a "posting about your life to your friends" platform to an algorithmic content-consumption platform."

I know it's not the core premise of the article, but this is very interesting.

I believe that 90% of tweets per day are retweets, which supports the author's conclusion that Twitter is largely about reading and amplifying others.

That would leave 50 million "original" tweets per day, which you should probably separate into main tweets and reply tweets. Then there are bots and hardcore tweeters tweeting many times per day, and you'll end up with a very sobering number of actual unique tweeters writing original tweets.

I'd say that number would be somewhere in the single-digit millions of people. Most of these tweets get zero engagement. It's easy to verify this yourself. Just open up a bunch of rando profiles in a thread and you'll notice a pattern: a symmetrical number of followers and following, typically in the range of 20-200. Individual tweets get no likes, no retweets, no replies, nothing. Literally tweeting into the void.

If you took away the zero-engagement tweets, you'd arrive at what Twitter really is. A cultural network. Not a social network. Not a network of participation. A network of cultural influencers consisting of journalists, politicians, celebrities, companies and a few witty ones that got lucky. That's all it is: some tens of thousands of people tweeting and the rest leeching and responding to it.

You could argue that is true for every social network, but I just think it's nowhere near this extreme. Twitter is also the only "social" network that failed to grow (exponentially) in a period you might as well consider the golden age of social networks. A spectacular failure.

Musk bought garbage for top dollar. The interesting dynamic is that many Twitter top dogs have an inflated status that cannot be replicated elsewhere. They're kind of stuck. They achieved their status with hot-take dunks on others, but that tactic doesn't really work on any other social network.
thriftwy, over 2 years ago
I remember Stack Overflow running on a single Windows Server box and mocking fellow LAMP developers with their propensity towards having dozens of VMs to the same effect.

That was some time ago, though.
VLM, over 2 years ago
Interesting optimization idea: if 95% of the users are bots, and your ML algorithms are smart enough to figure out who's a bot and who's a human, you could save a lot of traffic by not publishing tweets to bots, since no one is going to read them anyway. Of course, if that traffic included advertisements, you'd also lose 95% of your ad revenue.

The ultimate extension of this "run it all on one machine" meme would be to run the bots on the single machine along with the service.
mizzao, over 2 years ago
While you might not want to do this with actual Twitter, many high-performance computing workloads can run substantially faster on a single optimized machine than in a distributed computing environment.

I learned this the hard way in grad school, when a medium-sized MapReduce job I was running turned out to be over 100x faster when run as a local direct computation with some numerical optimizations.
snotrockets, over 2 years ago
I ask candidates I interview to design a certain service. Most ask about scale, to which I like to direct the question back at them: it's going to be huge. As big as Twitter. How big would that be, do you think?

Most then suggest a scale that the service could handle comfortably on a single not-too-powerful machine, and then go on to design a data-center-spanning distributed service.
mattbillenstein, over 2 years ago
Interesting abstract system-design problem. I think it becomes difficult once you have to shard the data, though, because the assumption that the hot set fits in RAM breaks, and with it all the performance guarantees. Which is, I think, basically what Twitter's existing backend has to deal with.
z3t4, over 2 years ago
A Twitter clone could probably run in a teenager's closet, but not after it has been iterated on by 10,000 monkeys.
fortran77, over 2 years ago
I've thought about this problem too, and blocklists seem like a hard problem to implement efficiently. I have a few thousand users blocked, and several hundred keywords, phrases, and emoji. How are these processed efficiently?
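One plausible answer (a sketch of a standard technique, not how Twitter actually implements it): blocked user ids go in a hash set for O(1) membership tests, and the phrase list is compiled once into a single alternation pattern so each tweet is scanned in one pass rather than once per phrase. For very large phrase lists an Aho-Corasick automaton would scale better than a regex alternation.

```python
import re

class BlockFilter:
    """Illustrative per-user block/mute filter; names and design are assumptions."""

    def __init__(self, blocked_ids, muted_phrases):
        self.blocked = set(blocked_ids)  # O(1) author lookup
        # One compiled alternation: a single scan per tweet instead of
        # one scan per phrase.
        pattern = "|".join(re.escape(p) for p in muted_phrases)
        self.muted = re.compile(pattern, re.IGNORECASE) if pattern else None

    def allows(self, author_id, text):
        if author_id in self.blocked:
            return False
        if self.muted and self.muted.search(text):
            return False
        return True
```

The per-user filter state here is small (a few thousand ids plus one compiled pattern), which is why this check can plausibly run inline on the serving path.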
jeffbee, over 2 years ago
I like this kind of exercise. One thing I am not seeing is analytics, logs, and so forth, which as I understand it make up a significant portion of Twitter's production cost story.
betimsl, over 2 years ago
I'm just curious what kind of motherboard this personal computer is going to have. I ask because of the limit on PCIe bandwidth. A 100 Gbit NIC? How?
sammy2255, over 2 years ago
A bit out of touch to think that the Bandwidth Alliance will let you push 500 TB a month through them for free.
surume, over 2 years ago
Ops: "One of our instances went down." Everyone else: "Gaaaahhh"
irq, over 2 years ago
Excellent article! I wish the font size on mobile was bigger.
truth_seeker, over 2 years ago
How to subdue the cost of abstraction. Very well explained!
Halan, over 2 years ago
Let’s hope Elon doesn’t read this
castratikron, over 2 years ago
Does 4chan fit on one machine?
lightlyused, over 2 years ago
It is all fun running on one machine until a capacitor leaks or something else goes south.
wonnage, over 2 years ago
This doesn't seem to support fetching a specific tweet by id?
sitkack, over 2 years ago
I am both embarrassed and disappointed by the negativity this post has attracted.

A litany of "gotchas", where someone attempts to best the OP. What about x, y, and z? It can't possibly scale. Twitter is so much more than this, etc.

The OP isn't asserting that Twitter should replace their current system with a single large machine.

The whole thread paints a picture of HN as full of half-educated, uncreative, negative brats.

To the people who encouraged a fun discussion, thank you! Great things are not built by people who can only see how something cannot possibly work.
andrewstuart, over 2 years ago
Most projects I encounter these days instantly reach for Kubernetes, containers, and microservices or cloud functions.

I find it much more appealing to just make the whole thing run on one fast machine. When you suggest this, people tend to say "but scaling!", without understanding how much capacity there is in vertical scaling.

The most appealing thing about single-server configs is the simplicity. The simpler a system is, the more reliable and easy to understand it's likely to be.

The software most people are building these days can easily run lock, stock, and barrel on one machine.

I wrote a prototype for an in-memory message queue in Rust, ran it on the fastest EC2 instance I could, and it was able to process nearly 8 million messages a second.

You could be forgiven for believing the only way to write software is as a giant kablooie of containers, microservices, cloud functions, and Kubernetes, because that's what the cloud vendors want you to do, and it's also the primary approach discussed. Every such layer adds complexity: development, devops, maintenance, support, deployment, testing, and (un)reliability. Single-server systems can be dramatically simpler because you can trim them down as close as possible to just the code and the storage.
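The commenter's 8M messages/sec figure came from a tuned Rust prototype, but the general point, that a single process moves messages far faster than people expect, is easy to check for yourself. A toy single-threaded version (Python, so expect far lower numbers than Rust, and no network I/O at all):

```python
import time
from collections import deque

def bench_queue(n=1_000_000):
    """Push and pop n messages through an in-process queue; return ops/sec.

    A deliberately naive benchmark: single thread, no serialization, no
    sockets. It only illustrates that in-memory queue operations are cheap,
    not what a production queue achieves.
    """
    q = deque()
    start = time.perf_counter()
    for i in range(n):
        q.append(i)
    while q:
        q.popleft()
    elapsed = time.perf_counter() - start
    return (2 * n) / elapsed  # enqueues + dequeues per second

if __name__ == "__main__":
    print(f"{bench_queue():,.0f} queue ops/sec")
```

Even interpreted Python typically clears millions of queue operations per second on one core, which makes the vertical-scaling argument above concrete.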
britneybitch, over 2 years ago
> colo cost + total server cost/(3 years) => $18,471/year

Meanwhile the company I just left was spending more than this on dozens of Kubernetes clusters on AWS before signing a single customer. Sometimes I wonder what I'm still doing in this industry.
jacobsenscott, over 2 years ago
Nice. While there may be some impracticalities to actually doing this for Twitter, 99% of the software out there could run on a fraction of a single commodity server. People complain about the carbon burn of crypto, and they are right, but I bet it is dwarfed by the carbon burn of all the shitty over-provisioned and over-architected CRUD apps running interpreted languages. Unfortunately, with universities teaching Python of all things, we'll have (or maybe already have) a whole generation of developers who have no idea how powerful a modern computer is.

I suppose there's a chance AI will get to the point where we can feed it a Ruby/Python/JS/whatever codebase and it can emit functionally equivalent machine code as a single binary (even for a microservices mess).
throwmeup123, over 2 years ago
The title is highly misleading for some theoretical "exploration".
kierank, over 2 years ago
This is as realistic as the moon rocket in my back garden.
twp, over 2 years ago
This post solves all of the easy problems (i.e. making simple stuff go fast) and none of the hard problems (i.e. building a system that still works when other stuff breaks).

This post is perfect-world thinking. We don't live in a perfect world.
kureikain, over 2 years ago
Not to the extreme of fitting everything onto one machine, but I have explored the idea of separating stateless workloads onto their own machines.

The stateless workload can still operate in a read-only manner if the stateful component fails.

I run an email forwarding service [1], and one of the challenges is ensuring that email forwarding still works even if my primary database fails.

I came up with a design where the app boots up, loads the entire routing dataset from my Postgres into an in-memory data structure, and persists it to local storage. So if the Postgres database fails, as long as I have an instance of the app (which I can run as many of as I want), the system continues to work for existing customers.

The app uses LISTEN/NOTIFY to load new data from Postgres into its memory.

Not exactly the same concept as the article, but the idea is that we try to design the system so it can operate fully on a single machine. Another nice thing is that this is easier to test: instead of loading data from Postgres, it can load from config files, so essentially the core business logic is isolated on a single machine.

---

[1] https://mailwip.com
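The pattern described above, serve from an in-memory copy, persist a local snapshot, and fall back to it read-only when the primary is down, can be sketched like this. The `load_fn` callable stands in for a Postgres query, and the NOTIFY handler the commenter mentions would simply call `refresh()` on change events; both of those wirings are assumptions, not the commenter's actual code:

```python
import json
import os

class RoutingCache:
    """Sketch of a read-through cache with a local snapshot fallback.

    In the production setup described above, load_fn would query Postgres
    and a LISTEN/NOTIFY handler would invoke refresh() when routes change.
    """

    def __init__(self, load_fn, snapshot_path):
        self.load_fn = load_fn
        self.snapshot_path = snapshot_path
        self.routes = {}

    def refresh(self):
        try:
            self.routes = self.load_fn()              # hit the primary
            with open(self.snapshot_path, "w") as f:  # persist a local snapshot
                json.dump(self.routes, f)
        except Exception:
            # Primary unreachable: fall back to the last good snapshot,
            # keeping the service working read-only for existing routes.
            if os.path.exists(self.snapshot_path):
                with open(self.snapshot_path) as f:
                    self.routes = json.load(f)

    def route(self, domain):
        return self.routes.get(domain)
```

This also shows why the design is easy to test: swap `load_fn` for a function that reads a config file, or one that raises, and the failure path exercises itself without a database.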