Continuous reinvention: A brief history of block storage at AWS

385 points · posted by riv991 · 9 months ago

18 comments

mjb · 9 months ago
Super cool to see this here. If you're at all interested in big systems, you should read this.

> Compounding this latency, hard drive performance is also variable depending on the other transactions in the queue. Smaller requests that are scattered randomly on the media take longer to find and access than several large requests that are all next to each other. This random performance led to wildly inconsistent behavior.

The effect of this can be huge! Given a reasonably sequential workload, modern magnetic drives can do >100MB/s of reads or writes. Given an entirely random 4kB workload, they can be limited to as little as 400kB/s of reads or writes. Queuing and scheduling can help avoid the truly bad end of this, but real-world performance still varies by over 100x depending on workload. That's really hard for a multi-tenant system to deal with (especially with reads, where you can't do the "just write it somewhere else" trick).

> To know what to fix, we had to know what was broken, and then prioritize those fixes based on effort and rewards.

This was the biggest thing I learned from Marc in my career (so far). He'd spend time working on visualizations of latency (like the histogram time series in this post) which were much richer than any of the telemetry we had, then tell a story using those visualizations, and completely change the team's perspective on the work that needed to be done. Each peak in the histogram came with its own story, and its own work to optimize. Really diving into performance data - and looking at that data in multiple ways - unlocks efficiencies and opportunities that are invisible without that work and investment.

> Armed with this knowledge, and a lot of human effort, over the course of a few months in 2013, EBS was able to put a single SSD into each and every one of those thousands of servers.

This retrofit project is one of my favorite AWS stories.

> The thing that made this possible is that we designed our system from the start with non-disruptive maintenance events in mind. We could retarget EBS volumes to new storage servers, and update software or rebuild the empty servers as needed.

This is a great reminder that building distributed systems isn't just for scale. Here, we see how building the system in a way that can seamlessly tolerate the failure of a server, and move data around without loss, makes large-scale operations (everything from day-to-day software upgrades to a massive hardware retrofit project) possible. A "simpler" architecture would make these operations much harder, to the point of being impossible, making the end-to-end problem we're trying to solve for the customer harder.
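A rough back-of-the-envelope sketch of where that ~400kB/s random-I/O figure comes from (the seek, rotational, and sequential numbers below are my own typical-7200-RPM assumptions, not measurements from the article):

```python
# Back-of-the-envelope HDD throughput: random 4kB I/O vs. sequential streaming.
# Assumed numbers for a typical 7200 RPM drive (illustrative only):
AVG_SEEK_MS = 8.0          # average seek time
AVG_ROTATIONAL_MS = 4.2    # half a revolution at 7200 RPM
SEQUENTIAL_MB_S = 120.0    # sustained sequential transfer rate
REQUEST_KB = 4

service_time_s = (AVG_SEEK_MS + AVG_ROTATIONAL_MS) / 1000  # per random request
random_iops = 1 / service_time_s                            # ~82 IOPS
random_kb_s = random_iops * REQUEST_KB                      # ~330 kB/s

print(f"random 4kB throughput: ~{random_kb_s:.0f} kB/s")
print(f"sequential throughput: {SEQUENTIAL_MB_S * 1000:.0f} kB/s")
print(f"ratio: ~{SEQUENTIAL_MB_S * 1000 / random_kb_s:.0f}x")
```

With those assumptions the gap is a few hundred x, which is consistent with the "over 100x" variation described above.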
jedberg · 9 months ago
Ah, this brings back memories. Reddit was one of the very first users of EBS back in 2008. I thought I was *so* clever when I figured out that I could get more IOPS if I built a software raid out of five EBS volumes.

At the time each volume had very inconsistent performance, so I would launch seven or eight, run some write and read loads against each, take the five best performers, and then put them into a Linux software raid.

In the good case, I got the desired effect -- I did in fact get more IOPS than 5x a single node. But in the bad case, oh boy was it bad.

What I didn't realize was that if you're using a software raid, if one node is slow, the entire raid moves at the speed of the slowest volume. So this would manifest as a database going bad. It took a while to figure out it was the RAID that was the problem. And even then, removing the bad node was hard -- the software raid really didn't want to let go of the bad volume until it could finish writing out to it, which of course was super slow.

And then I would put in a new EBS volume and have to rebuild the array, which of course it was also bad at, because it would be bottlenecked on the IOPS for the new volume.

We moved off of those software raids after a while. We almost never used EBS at Netflix, in part because I would tell everyone who would listen about my folly at reddit, and because they had already standardized on using only local disk before I ever got there.

And as an amusing side note, when AWS had that massive EBS outage, I still worked at reddit and I was actually watching Netflix while I was waiting for the EBS to come back so I could fix all the databases. When I interviewed at Netflix one of the questions I asked them was "how were you still up during the EBS outage?", and they said, "Oh, we just don't use EBS".
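A toy illustration of the slowest-member effect described here (my own sketch, with made-up IOPS figures rather than real EBS numbers): a striped write touches every member volume, so the array only completes a stripe when the slowest member has acknowledged it.

```python
# Toy model of a striped (RAID-0 style) array built from several volumes.
# Each stripe write must land on every member, so the array's effective rate
# is gated by the slowest member. IOPS values are illustrative only.
def effective_stripe_iops(member_iops):
    latencies = [1.0 / iops for iops in member_iops]  # seconds per I/O
    return 1.0 / max(latencies)                       # stripe done when slowest is done

healthy = [900, 950, 1000, 980, 920]   # five well-behaved volumes
degraded = [900, 950, 1000, 980, 50]   # one volume having a bad day

print(effective_stripe_iops(healthy))   # ~900 stripes/s: looks great
print(effective_stripe_iops(degraded))  # ~50 stripes/s: the whole array crawls
```

One inconsistent volume drags the whole array down to its speed, which is why it showed up as "a database going bad" rather than as an obvious disk problem.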
mgdev · 9 months ago
It's cool to read this.

One interesting tidbit is that during the period this author writes about, AWS had a roughly 4-day outage (impacted at least EC2, EBS, and RDS, iirc), caused by EBS, that really shook folks' confidence in AWS.

It resulted in a reorg and much deeper investment in EBS as a standalone service.

It also happened around the time Apple was becoming a customer, and AWS in general was going through hockey-stick growth thanks to startup adoption (Netflix, Zynga, Dropbox, etc.).

It's fun to read about these technical and operational bits, but technical innovation in production is messy, and happens against a backdrop of Real Business Needs.

I wish more of THOSE stories were told as well.
abrookewood · 9 months ago
This is the bit I found curious: "adding a small amount of random latency to requests to storage servers counter-intuitively reduced the average latency and the outliers due to the smoothing effect it has on the network".

Can anyone explain why?
simonebrunozzi · 9 months ago
If you're curious, this is a talk I gave back in 2009 [0] about Amazon S3 internals. It was created from internal assets by the S3 team, and a lot in there influenced how EBS was developed.

[0]: https://vimeo.com/7330740
lysace · 9 months ago
I liked the part about them manually retrofitting an SSD in every EBS unit in 2013. That looks a lot like a Samsung SATA SSD:

https://www.allthingsdistributed.com/images/mo-manual-ssd.png

I think we got SSDs installed in blades from Dell well before that, but I may be misremembering.

I/O performance was a big thing in like 2010/2011/2012. We went from spinning HDs to Flash memory.

I remember experimenting with these raw Flash-based devices, no error/wear-level handling at all. Insanity, but we were all desperate for that insane I/O performance bump from spinning rust to silicon.
rnts08 · 9 months ago
This gives me fond memories of building storage-as-a-service infrastructure back before we had useful open-source stuff; moving away from Sun SAN, Fibre Channel, and Solaris, we landed on GlusterFS on Supermicro storage servers, running Linux and NFS. We peaked at almost 2PB before I moved on in 2007.

Secondly, it reminds me of the time when it simply made sense to ninja-break and rebuild mdraids with SSDs in place of the spinning drives WHILE the servers were running (SATA kind of supported hotswapping the drives). Going from spinning to SSD gave us a 14x increase in IOPS in the most important system of the platform.
0xbadcafebee · 9 months ago
At the very start of my career, I got to work for a large-scale (technically/logistically, not in staff) internet company doing all the systems stuff. The number of lessons I learned in such a short time was crazy. Since leaving them, I learned that most people can go almost their whole careers without running into all those issues, and so don't learn those lessons.

That's one of the reasons why I think we should have a professional license. By requiring an apprenticeship under a master engineer, somebody can pick up incredibly valuable knowledge and skills (that you only learn by experience) in a very short time frame, and then be released out into the world to be much more effective throughout their career. And as someone who also interviews candidates, some proof of their experience and a reference from their mentor would be invaluable.
herodoturtle · 9 months ago
Loved this:

> While the much celebrated ideal of a "full stack engineer" is valuable, in deep and complex systems it's often even more valuable to create cohorts of experts who can collaborate and get really creative across the entire stack and all their individual areas of depth.
tanelpoder · 9 months ago
The first diagram in that article is incorrect/quite outdated. Modern computers have most PCIe lanes going directly into the CPU (IO Hub or "Uncore" area of the processor), not via a separate PCH like in the old days. That's an important development for both I/O throughput and latency.

Otherwise, great article, illustrating that it's queues all the way down!
pbw · 9 months ago
Early on, the cloud's entire point was to use "commodity hardware," but now we have hyper-specialized hardware for individual services. AWS has Graviton, Inferentia, and Trainium chips. Google has TPUs and Titan security cards; Azure uses FPGAs and Sphere for security. This trend will continue.
moralestapia · 9 months ago
Great article.

*"EBS is capable of delivering more IOPS to a single instance today than it could deliver to an entire Availability Zone (AZ) in the early years on top of HDDs."*

Dang!
apitman · 9 months ago
What's the best way to provide a new EC2 instance with a fast ~256GB dataset directory? We're currently using EBS volumes, but it's a pain to do updates to the data because we have to create a separate copy of the volume for each instance. EFS was too slow. Instance storage SSDs are ephemeral. Haven't tried FSx Lustre yet.
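For context, the "separate copy per instance" workflow described in this question typically looks something like the sketch below (my own hedged illustration using boto3; the snapshot ID, availability zone, instance ID, and device name are placeholders, not values from the comment):

```python
import boto3

# Sketch of the per-instance EBS copy workflow: create a fresh volume from a
# snapshot of the dataset and attach it to the new instance. All IDs are
# illustrative placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

SNAPSHOT_ID = "snap-0123456789abcdef0"  # snapshot of the ~256GB dataset volume
AZ = "us-east-1a"                        # must match the target instance's AZ
INSTANCE_ID = "i-0123456789abcdef0"

# 1. Create a fresh volume from the dataset snapshot for this instance.
volume = ec2.create_volume(
    SnapshotId=SNAPSHOT_ID,
    AvailabilityZone=AZ,
    VolumeType="gp3",
)

# 2. Wait until the volume is available, then attach it.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId=INSTANCE_ID,
    Device="/dev/sdf",
)
```

Every data update means cutting a new snapshot and repeating this for each instance, which is the pain point the question is getting at.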
mannyv · 9 months ago
The most surprising thing is that the author had no previous experience in the domain. It's almost impossible to get hired at AWS now without domain expertise, AFAIK.
dasloop · 9 months ago
So true, and valid of almost all software development:

> In retrospect, if we knew at the time how much we didn't know, we may not have even started the project!
Silasdev · 9 months ago
Great read, although a shame that it didn't go any further than adding the write-cache SSD solution, which must have been many years ago. I was hoping for a little more recent info on the EBS architecture.
swozey · 9 months ago
I had no idea Werner Vogels had a systems blog. Awesome read, thanks.
tw04 · 9 months ago
I think the most fascinating thing is watching them relearn every lesson the storage industry already knew about a decade earlier. Feels like most of this could have been solved by either hiring storage industry experts or just acquiring one of the major vendors.