
Ask HN: How would you store 10PB of data for your startup today?

307 points by philippb about 4 years ago
I'm running a startup and we're storing north of 10PB of data and growing. We're currently on AWS and our contract is up for renewal. I'm exploring other storage solutions.

Minimum requirements: AWS S3 One Zone-IA (https://aws.amazon.com/s3/storage-classes/?nc=sn&loc=3)

How would you store >10PB if you were in my shoes? Treat it as a thought experiment, with and without the data transfer cost out of our current S3 buckets. Please also mention what your experience is based on. Ideally you store large amounts of data yourself and can speak from first-hand experience.

Thank you for your support!! I will post a thread once we've decided what we ended up doing.

Update: Should have mentioned earlier, the data needs to be accessible at all times. It's user-generated data that is downloaded in the background to a mobile phone, so super low latency is not important, but less than 1000ms is required.

The data is all images and videos, and no queries need to be performed on the data.

129 comments

pmlnr about 4 years ago
Non-cloud:

HPE sells their Apollo 4000[^1] line, which takes 60x 3.5" drives. With 16TB drives that's 960TB per machine, so one rack of 10 of these is 9PB+, which nearly covers your 10PB need. (We have some racks like this.) They are not cheap. (Note: Quanta makes servers that can take 108x 3.5" drives, but they need special deep racks.)

The problem here would be the "filesystem" (read: the distributed service): I don't have much experience with Ceph, and ZFS across multiple machines is nasty as far as I'm aware, but I could be wrong. HDFS would work, but the latency there can be completely random.

[^1]: https://www.hpe.com/uk/en/storage/apollo-4000.html

So unless you are desperate to save money in the long run, stick to the cloud and let someone else sweat about the filesystem-level issues :)

EDIT: btw, we let the dead drives "rot": replacing them would cost more, and the failure rate is not that bad, so they stay in the machine and we disable them in fstabs, configs, etc.

EDIT2: at 10PB HDFS would be happy; buy 3 racks of those Apollos and you're done. We started struggling at 1000+ nodes first; now, with 2400 nodes, nearly 250PB raw capacity, and literally a billion filesystem objects, we are slow as f*, so plan carefully.
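[Editor's note] A minimal sanity-check of the rack math above, assuming the 16TB drives and 10 chassis per rack quoted in the comment; this is raw capacity only, before any replication or filesystem overhead:

```python
import math

# Rough capacity sketch using the figures quoted above: 60 x 16 TB drives per
# Apollo 4000 chassis and 10 chassis per rack. Raw capacity only; no
# replication, erasure coding, or filesystem overhead is included.
DRIVES_PER_CHASSIS = 60
DRIVE_TB = 16
CHASSIS_PER_RACK = 10
TARGET_PB = 10

raw_tb_per_chassis = DRIVES_PER_CHASSIS * DRIVE_TB               # 960 TB
raw_pb_per_rack = raw_tb_per_chassis * CHASSIS_PER_RACK / 1000   # 9.6 PB
racks_needed = math.ceil(TARGET_PB / raw_pb_per_rack)            # 2 racks raw

print(f"{raw_pb_per_rack} PB raw per rack, {racks_needed} rack(s) for {TARGET_PB} PB")
```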
skynet-9000 about 4 years ago
At that kind of scale, S3 makes zero sense. You should definitely be rolling your own.

10PB costs more than $210,000 per month at S3, or more than $12M after five years.

RackMountPro offers a 4U server with 102 bays, similar to the Backblaze servers, which fully configured with 12TB drives is around $11k total and stores 1.2 PB per server. (https://www.rackmountpro.com/product.php?pid=3154)

That means that you could fit all 15PB (for erasure coding with Minio) in less than two racks for around $150k up-front.

Figure another $5k/mo for monthly opex as well (power, bandwidth, etc.)

Instead of $12M spent after five years, you'd be at less than $500k, including traffic (also far cheaper than AWS). Even if you got AWS to cut their price in half (good luck with that), you'd still be saving more than $5 million.

Getting the data out of AWS won't be cheap, but check out the Snowball options for that: https://aws.amazon.com/snowball/pricing/
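[Editor's note] To make the comparison explicit, a small sketch using only the figures quoted in this comment (the commenter's estimates, not verified vendor pricing):

```python
# Five-year cost comparison using the (approximate) numbers from this comment:
# ~$210k/month for 10 PB on S3 versus ~$150k of hardware up front plus
# ~$5k/month of opex for a self-hosted setup.
MONTHS = 5 * 12

s3_monthly = 210_000          # claimed S3 cost for 10 PB per month
s3_total = s3_monthly * MONTHS

diy_upfront = 150_000         # ~two racks of 102-bay servers, per the comment
diy_monthly_opex = 5_000      # power, bandwidth, etc.
diy_total = diy_upfront + diy_monthly_opex * MONTHS

print(f"S3 over 5 years:  ${s3_total:,}")       # $12,600,000
print(f"DIY over 5 years: ${diy_total:,}")      # $450,000
print(f"Difference:       ${s3_total - diy_total:,}")
```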
user5994461 about 4 years ago
What if you want to move off S3? Let's do the math.

* You want to store 10+ PB of data.

* You need 15 PB of storage (running at 66% capacity).

* You need 30 PB of raw disks (twice that, for redundancy).

You're looking at buying thousands of large disks, on the order of a million dollars upfront. Do you have that sort of money available right now?

Maybe you do. Then, are you ready to receive and handle entire pallets of hardware? That will need to go somewhere with power and networking. It won't show up for another 3-6 months, because that's the lead time for an order like that.

If you talk to Dell/HP/others, they can advise you and sell you large storage appliances. Problem is, the larger appliances will only host 1 or 2 PB. That's nowhere near enough.

There is a sweet spot in moving off the cloud, if you can fit your entire infrastructure into one rack. You're not in that sweet spot.

You're going to be filling multiple racks, which is a pretty serious issue in terms of logistics (space, power, upfront costs, networking).

Then you're going to have to handle "sharding" on top of the storage, because there's no filesystem that can easily address 4 racks of disks. (Ceph/Lustre is another year-long project for half a person.)

The conclusion of this story: S3 is pretty good. Your time would be better spent optimizing the software. What is expensive? The storage, the bandwidth, or both?

* If it's the bandwidth: you need to improve your CDN and caching layer.

* If it's the storage: you should work on better compression for the images and videos, and check whether you can adjust retention.
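[Editor's note] A sizing sketch following this comment's assumptions (66% fill target, 2x redundancy). The drive size and unit price are illustrative assumptions added here, not figures from the comment:

```python
# Sizing sketch: how many raw disks 10 PB turns into under the comment's
# assumptions, plus a rough disks-only cost with an assumed drive price.
DATA_PB = 10
FILL_FACTOR = 0.66            # run the cluster at ~66% capacity
REDUNDANCY = 2                # two copies of everything
DRIVE_TB = 16                 # assumed drive size
DRIVE_PRICE_USD = 350         # assumed street price per drive

usable_pb = DATA_PB / FILL_FACTOR              # ~15 PB of usable storage
raw_pb = usable_pb * REDUNDANCY                # ~30 PB of raw disk
drives = raw_pb * 1000 / DRIVE_TB              # ~1,900 drives
print(f"usable: {usable_pb:.0f} PB, raw: {raw_pb:.0f} PB, "
      f"drives: {drives:.0f}, disks alone: ${drives * DRIVE_PRICE_USD:,.0f}")
```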
epistasis about 4 years ago
If you have good sysadmin/devops types, this is a few racks of storage in a datacenter. Ceph is pretty good at managing something this size, and offers an S3 interface to the data (with a few quirks). We were mostly storing massive keys that were many gigabytes each, so I'm not sure about performance/scaling limits with smaller keys and 10PB. I'd be sure to give your team a few months to build a test cluster, then build and scale the full-size cluster. And a few months to transfer the data...

But you'll need to balance the cost of finding people with that level of knowledge and adaptability against the cost of bundled storage packages. We were running super lean, got great deals on bandwidth and power, and had low performance requirements. When we ran the numbers for all-in costs, it was less than we thought we could get from any other vendor. And if you commit to buying the server racks it will take to fit 10PB, you can probably get somebody like Quanta to talk to you.
maestroia about 4 years ago
There are four hidden costs which not many have touched upon.

1) Staff. You'll need at least one person, maybe two, to build, operate, and maintain any self-hosted solution. A quick peek on Glassdoor and Salary shows the unloaded salary for a Storage Engineer runs $92,000-130,000 US. Multiply by 1.25-1.4 for the loaded cost of an employee (things like FICA, insurance, laptop, facilities, etc). Storage Administrators run lower, but still around $70K US unloaded. Point is, you'll be paying around $100K+/year per storage staff position.

2) Facilities (HVAC, electrical, floor loading, etc). If you host on-site (not in a hosting facility), you'd better make certain your physical facilities can handle it. Can your HVAC handle the cooling, or will you need to upgrade it? What about your electrical? Can you get the increased electrical capacity in your area? How much will your UPS and generator cost? Can the physical structure of the building (floor loading, etc) handle the weight of racks and hundreds of drives, the vibration of mechanical drives, the air cycling?

3) Disaster Recovery / Business Continuity. Since you're using S3 One Zone-IA, you have no multi-zone duplicated redundancy. Its use case is secondary backup storage, not the primary data store for running a startup. When there is an outage/failure (and it will happen), the startup may be toast, and investors none too happy. So this is another expense you're going to have to seriously consider, whether you stick with S3 or roll your own.

4) Cost of money. With rolling your own, you're going to be doing CAPEX and OPEX. How much upfront and ongoing CAPEX can the startup handle? Would the depreciation on storage assets be helpful financially? You really need to talk to the CPA/finance person before this. There may be better tax and financial benefits by staying on S3 (OPEX). Or not.

Good luck.
ktpsns about 4 years ago
I have worked in HPC (academia), where cluster storage has been measured in multiples of PB for a decade. Since latency and bandwidth are killer requirements there, InfiniBand (instead of Ethernet) is the de facto standard for connecting the storage pools to the computing nodes.

Maintaining such a (storage) cluster requires 1-2 people on site who replace a few hard disks every day.

Nevertheless, if I continuously needed massive amounts of data, I would opt to do it myself anytime instead of using cloud services. I just know how well these clusters run, and there is little to no saving in outsourcing it.
jtchang about 4 years ago
I would host in a datacenter of your choice and do a cross connect into AWS: https://aws.amazon.com/directconnect/pricing/

This allows you to read the data into AWS instances at no cost and process it as needed, since there is zero cost for ingress into AWS. I have some experience with this (hosting with Equinix).
staticassertion about 4 years ago
It's going to depend entirely on a number of factors.

How are you storing this data? Is it tons of small objects, or a smaller number of massive objects?

If you can aggregate the small objects into larger ones, can you compress them? Is this 10PB compressed or not? If this is video or photo data, compression won't buy you nearly as much. If you have to access small bits of data, and this data isn't something like Parquet or JSON, S3 won't be a good fit.

Will you access this data for analytics purposes? If so, S3 has querying functionality like Athena and S3 Select. If it's instead for serving small files, S3 may not be a good fit.

Really, at PB scale these questions are all critically important, and any one of them completely changes the answer. There is no easy "store PBs of data" architecture; you're going to need to optimize heavily for your specific use case.
garciasn about 4 years ago
In my opinion, knowing what you're planning to do with the data once it's stored is the important piece for giving you some idea of where to put it.
warrenm about 4 years ago
I can build a 720TB raw SSD storage box for ~$138k.

Or a 648TB raw HDD storage box for ~$53k.

To get that up to 10 PB raw, I need ~$2m for all-SSD, or ~$850k for all-HDD.

Bake in a 2-system safety margin, and that's ~$2.3m all-SSD or ~$960k all-HDD.

Run TrueNAS and ZFS on each of them ... and my overhead becomes a little bit of cross-over sysadmin/storage admin time per year, plus power.

Say that's 1 FTE at $180k ($120k salary + 50% overhead) per year (even though actual admin time is only going to be maybe 10% of their workload - I like rounding up for these types of approximations).

Peak cost, therefore, is ~$2.5m the first year, and ~$200k per year afterwards.

And, of course, we'll want to plan for replacement systems to pop in ... so factor up to $250k per year in overhead (salary, benefits, taxes, power, budget for additional/replacement servers).

Using Wasabi (https://wasabi.com/cloud-storage-pricing/#three-info), 10PB is going to run ~$62k/mo, or ~$744k per year.

It's cheaper to build vs. buy within no more than 5 years ... probably under 3.
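[Editor's note] A small sketch of the break-even point implied by these numbers, taking the all-HDD figures above at face value (they are the commenter's estimates, not quoted pricing):

```python
# Break-even sketch: ~$2.3m of HDD hardware up front (with safety margin),
# ~$250k/year of ongoing overhead, versus Wasabi at roughly $62k/month.
diy_upfront = 2_300_000
diy_per_year = 250_000
wasabi_per_year = 62_000 * 12     # ~$744k/year

for year in range(1, 7):
    diy = diy_upfront + diy_per_year * year
    cloud = wasabi_per_year * year
    marker = "<- build is now cheaper" if diy < cloud else ""
    print(f"year {year}: DIY ${diy:,} vs Wasabi ${cloud:,} {marker}")
```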
nikisweeting about 4 years ago
Backblaze B2: ingress and egress are free through Cloudflare, and it's S3 compatible. My use is peanuts by comparison, but I've been storing ~22TB on there for years and love it.

Wasabi and Glacier would be my 2nd choices.
tw04 about 4 years ago
I should preface this with: I read the question as you wanting something on-premises/in a colo. If you're talking hosted S3 by someone other than Amazon, that's a different story.

It probably depends on whether you are tied at the hip to other AWS services. If you are, then you're kind of stuck. The ingress/egress traffic will kill you doing anything with that data anywhere else.

If you aren't, the major players for on-prem S3 (assuming you want to continue accessing the data that way) would be (in no specific order):

Cloudian

Scality

NetApp StorageGRID

Hitachi Vantara HCP

Dell/EMC ECS

There are pluses and minuses to all of them. At that capacity I would honestly avoid a roll-your-own unless you're on a shoestring budget. Any of the above will be cheaper than Amazon.
babelfish about 4 years ago
I assume you're already making use of most of S3's auto-archive features? [0] Really it seems like this comes down to how quickly any of your data /needs/ to be loaded. I'd probably investigate after how much time a file is only ~1-10% likely to be accessed in the next 30 days, then auto-archive files in S3 to Glacier after that threshold. If you want to be a bit 'smarter' about it, here's an article by Dropbox [1] on how they saved $1.7M/year by determining which file previews actually need to be generated; their strategy seems like it could be applied to your use case. That said, it seems like you are more likely to save money by going colo than by staying in the cloud.

[0] https://aws.amazon.com/blogs/aws/archive-s3-to-glacier/
[1] https://dropbox.tech/machine-learning/cannes--how-ml-saves-us--1-7m-a-year-on-document-previews
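[Editor's note] A minimal sketch of the auto-archive idea, assuming a hypothetical bucket name and an illustrative 90-day threshold; S3 lifecycle rules are a real feature (set here via boto3's put_bucket_lifecycle_configuration), but the threshold should come from your own access data:

```python
import boto3

# Transition objects to Glacier once they pass an age threshold.
s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-media-bucket",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-cold-media",
                "Filter": {"Prefix": ""},   # apply to every object
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 90, "StorageClass": "GLACIER"}  # assumed threshold
                ],
            }
        ]
    },
)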
reacharavindh about 4 years ago
I have done 2 PB of HPC data storage with ZFS. If I may extrapolate, I don't see why it wouldn't work out the same for 10 PB.

A 1U rack server attached to two JBODs (each 4U, containing 60 spinning disks), connected to the server via 4 SAS HD cables. The rack server gets 512GiB of RAM to cache reads, and an Optane drive as a persistent cache for writes. The usable storage depends on your redundancy and spare needs. But, as an example, my setup of (9 * 6-drive RAIDZ2 vdevs + 4 hot spares) nets me about 450 TiB per JBOD, or 900 TiB per rack server with two JBODs.

Repeat the setup 6 times and it would meet your 10 PB need. Throw in a few 10Gbps links per server, have them all linked up by a switch, and you've got your own storage setup. Maybe Minio (I have no experience with it) or something like that would give you an S3 interface over the whole thing.

I bet it would come out much cheaper than AWS. But you've got to get your hands dirty a bit with the systems work, and automate all the things with a tool like Ansible. Having done it, I'd say it is totally worth it at your scale.
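[Editor's note] A rough usable-capacity check for the layout described above. The drive size is an assumption (the comment does not state it), and ZFS overhead is approximated with a flat factor:

```python
# Per JBOD: nine 6-drive RAIDZ2 vdevs (4 data + 2 parity each) plus 4 hot spares.
VDEVS = 9
DATA_DRIVES_PER_VDEV = 4        # 6-drive RAIDZ2 = 4 data + 2 parity
DRIVE_TIB = 14.5                # assumed ~16 TB drives, expressed in TiB
OVERHEAD = 0.85                 # rough allowance for ZFS metadata/slop

usable_tib_per_jbod = VDEVS * DATA_DRIVES_PER_VDEV * DRIVE_TIB * OVERHEAD
per_server = usable_tib_per_jbod * 2     # two JBODs per 1U head node
print(f"~{usable_tib_per_jbod:.0f} TiB per JBOD, ~{per_server:.0f} TiB per server")
```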
plank_time about 4 years ago
Why do you need all 10PB accessible? Have you analyzed your usage patterns to see if you really need that much data accessible? This seems very unlikely, and changing that parameter could solve most of your problems.
Tepix about 4 years ago
It seems to me like you could save a ton of money by using your own hardware. Perhaps buy a bunch of big Synology boxes? At that scale you should also consider looking at technologies such as Ceph.

We've recently switched to a setup with several Synology boxes for around 1PB of net storage.
timr about 4 years ago
At this scale, there's no one perfect answer. You need to consider your usage patterns, business needs, etc.

Is the data cold storage that is rarely accessed? Is it OK to risk losing a percentage of it? Can you identify that percentage? If it's actively utilized, is it all used, or just a subset? Which subset? How much data is added every day? How much is deleted? What are the I/O patterns?

Etc.

I have direct experience moving big cloud datasets to on-site storage (in my case, RAID arrays), but it was a situation where the data had a long-tail usage pattern, and it didn't really matter if some was lost. YMMV.
erulabs about 4 years ago
I'd go with Ceph and dedicated hardware. Something like Hetzner or DataPacket, or build it yourself and go big with something like SoftIron. We've built and maintain a number of these types of clusters, using S3-compatible APIs (CephObjectStore). SoftIron is probably overkill, but good lord is it fun to play with that much throughput!

If you're looking for a partner/consultant to get things going, feel free to reach out! This stuff is sort of our wheelhouse; me and my co-founder were previously Ops at Imgur, so you can imagine the kinds of image hosting problems we've seen :P
creiht about 4 years ago
Late to the party, but one does not simply store 10PB of data :)

The short story is: ignore most of the advice, poach^H^H^H^H^Hhire someone who has done this, and leverage their expertise. There is no armchair-quarterbacking infrastructure at this scale.
msk20 about 4 years ago
I don't really know much about optimizing storage costs, but you could learn from the storage giants.

An example is Backblaze's Storage Pod 6.0. According to them it holds 0.5PB at a cost of $10k, so you would need about 20 * $10k = $200k + maintenance (they also publish failure rates). The schematics and everything are on their website, and according to them they already have a supplier who builds these devices, which you could probably buy from. Note: this was published in 2016; they probably have Pod 7.0 by now, so the cost may be better.

Reference: https://www.backblaze.com/blog/open-source-data-storage-server/
qeternity about 4 years ago
Are you fundamentally a data storage business, or are you another business that happens to store a tremendous amount of data?

If it's the former, then investing in-house might make sense (a la Dropbox's reverse course).
miouge about 4 years ago
Cloud or self-hosted will depend on your in-house expertise. For cloud, others have already mentioned Backblaze and Wasabi, but you can also check Scaleway; they do 0.02 EUR/GB/mo for hot storage and 0.002 EUR/GB/mo for cold storage.

Since we're talking about images and videos, do you already have different qualities of each media item available? Maybe thumbnail, high quality, and full quality. That could allow you to use cold storage for the full-quality media, serving the high-quality version while waiting for retrieval.

If the use case is more of a backup/restore service and a restore typically takes longer than a cold storage retrieval (be it Glacier or a self-hosted tape robot), then keep just enough in S3 to start the restore while you wait for the retrieval of the rest.

If you go the self-hosted route, I like software that is flexible around hardware failures. Something that will rebalance automatically and reduce the total capacity of the cluster, rather than require you to swap the drive ASAP. That way you can batch all the hardware swapping/RMA once per week/month/quarter.
laurensr about 4 years ago
Also have a look at the DataHoarder community [1] on Reddit. Some people there are storing astronomical amounts of data. [1]: https://www.reddit.com/r/DataHoarder/
nknealk about 4 years ago
How firm is your "less than 1000ms" requirement? Could you identify a subset of your images/videos that are very unlikely to ever be accessed, move those to S3 Glacier, and price in that some fractional percentage will require expedited retrieval costs?
ransom1538 about 4 years ago
NetApp. If you are managing it yourself, do not accept alternatives.

https://www.ebay.com/itm/313012077673
amacneil about 4 years ago
At that level of data you should be negotiating with the 3 largest cloud providers and going with whoever gives you the best deal. You can negotiate the storage costs and also egress.
ZeroCool2u about 4 years ago
Take any credits you can get from a provider switch, and then thoroughly map out your access patterns, ingestion, and egress. Do whatever you can to segment data by your needs for availability and modification.

If it's all archival storage then it's pretty straightforward. If you're on GCP you take it all and dump it into archival single-region DRA (Durable Reduced Availability) storage for the lowest costs.

Otherwise, identify your segments and figure out a strategy for "load balancing" between the standard, nearline, coldline, and archive storage classes. If you can figure out a chronological pattern, you can write a small script that uses gsutil's built-in rsync feature to mirror data from a higher-grade storage class to a lower one at the right time (see the sketch below).

The strategy will probably be similar in any of the other big-3 providers as well, but fair warning: some providers' archival-grade storage does not have immediate availability, last I checked.

See: https://cloud.google.com/storage/docs/storage-classes

https://cloud.google.com/storage/docs/gsutil/commands/rsync
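[Editor's note] A minimal sketch of the "small script" idea above, assuming hypothetical bucket names; gsutil and its rsync command are real, but check the flags against the docs linked above before relying on this:

```python
import subprocess

# Mirror aged objects from a standard-class bucket to an archive-class bucket,
# then let a lifecycle rule or a later pass delete the hot copies.
SRC = "gs://example-media-hot/2020/"       # hypothetical hot bucket/prefix
DST = "gs://example-media-archive/2020/"   # hypothetical archive bucket/prefix

subprocess.run(
    ["gsutil", "-m", "rsync", "-r", SRC, DST],  # -m: parallel, -r: recursive
    check=True,
)
```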
giantg2 about 4 years ago
I agree with someone else's comment questioning how the data is ingested and used.

10PB seems like a lot to store in S3 buckets. I assume much of that data is not accessed frequently, or would be used in a big-data scenario. Maybe some other services like Glacier or Redshift (I think) would fit.
ufmace about 4 years ago
10PB is a crazy amount of data. Far more than any normal business would ever have to deal with. Presuming you aren't crazy, you must have an unusual business plan to legitimately need to handle that much data. That means it's tough for us to say much; any assumptions we might have about it could be invalid depending on your actual business needs. You're just going to have to tell us some more about your business case before we can say anything useful about it.
anij about 4 years ago
Disclaimer: *I work for Nutanix*

Consider looking at Nutanix. You can get the hardware from HPE (including Apollo).

Object storage from Nutanix doesn't even break a sweat at 10PB of usable storage.

However, the main reasons to look at Nutanix would be ease of use for day 0 (bootstrapping), day 1 (administration operations, capacity management, fault tolerance), and day n operations (upgrades, security patches, etc).

Nutanix spends considerable time and resources on all this to make the life of our customers easy.
DSingularity about 4 years ago
Amazing how one post will tell you that at your scale S3 is stupid, and another will tell you that at your not-small-enough-and-yet-not-big-enough scale S3 is the only option. I say stick with the cloud. If cost is an issue, go negotiate a better contract; GCP will probably give you a nice discount. Setting up a highly available service at that scale is not a walk in the park. Can you afford the distraction from your primary app while you figure it out?
byteshock about 4 years ago
Wasabi is a good option. They're S3-compatible and don't charge any egress or ingress fees. I've been using them for a few years. Great speeds and customer support.
throwaway823882 about 4 years ago
1. Shrink your data. That's just an absurd amount of data for a start-up. Even large organizations can't quickly work around too much data. Resource growth directly affects system performance and complexity and limits what you will be able to practically do with the data. You already have a million problems as a start-up; don't make another one for yourself by trying to find a clever solution when you can just get rid of the problem.

2. As a general-purpose alternative, I would use Backblaze. It's cheap and they know what they're doing. Here is a comparison of (non-personal) cloud vendor storage prices: https://gist.github.com/peterwwillis/83a4636476f01852dc2b6703b78941e9

3. You need to know how the architecture impacts the storage costs. There are costs for incoming traffic, outgoing traffic, intra-zone traffic, storage, archiving, and 'access' (cost per GET, POST, etc). You may end up paying $500K a month just to serve files smaller than 1KB.

4. You need to match up availability and performance requirements against providers' guarantees, and then measure real-world performance over a month. Some providers enforce rate limits; with others you might be in a shared pool of rate limits.

5. You need to verify the logistics for backup and restore. For 10PB you're going to need an option to mail physical drives/tapes. Ensure that process works if you want to keep the data around.

6. Don't become your own storage provider. Unless you have a ton of time and money and engineering talent to waste and don't want to ship a reliable product soon.
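[Editor's note] A back-of-the-envelope sketch of point 3: with very small objects, per-request charges can dominate the bill. The request price is an assumed, S3-like list price, and the object size and monthly access fraction are illustrative, not figures from the comment:

```python
AVG_OBJECT_KB = 1
TOTAL_DATA_PB = 10
GET_PRICE_PER_1000 = 0.0004          # assumed price per 1,000 GET requests
FRACTION_FETCHED_PER_MONTH = 0.10    # assume 10% of objects are read monthly

objects = TOTAL_DATA_PB * 1e12 / AVG_OBJECT_KB   # 1 PB = 1e12 KB
monthly_gets = objects * FRACTION_FETCHED_PER_MONTH
request_cost = monthly_gets / 1000 * GET_PRICE_PER_1000
print(f"{objects:.2e} objects, ~${request_cost:,.0f}/month in GET charges alone")
```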
mattgair about 4 years ago
SoftIron would love to help with this project. We're in your backyard and could have a POC in your hands in no time at all, and the full 10PB in about 6 weeks. matt@softiron.com
hamburga about 4 years ago
Meta-question: shouldn't there be a website dedicated specifically to reliable, crowd-sourced answers to questions like these? Does it really not exist? I'm thinking something like StackShare, but where you start from "What's the problem I'm trying to solve?", not "What products are big companies using?".
msoad about 4 years ago
Having dealt with a lot of big data, I often came to the realization that we actually did not need most of it.

Try being intentional and smart in front of your data pipeline and purge data that is not useful. Too many times people store data "just in case" and that case never happens, even years later.
lokl about 4 years ago
You wrote, "data needs to be accessible at all time ... less than 1000ms" latency, but this does not tell the whole story about accessibility/latency. Does your use case allow you to do something similar to lazy loading, where you serve reduced-quality images/video at low latency and only offer the full quality on demand, as needed, with greater latency? For example, initially serve a reduced-resolution or reduced-length video instead of the full-res/full-length original, which you keep in colder storage at a reduced cost. Depending on the details of what is permissible and the data characteristics, this approach might save you a lot overall by reducing warm storage costs.
XorNot about 4 years ago
I'm wondering here if this data is currently oversized. If the use case is all mobile, has your product committed to losslessly storing something or not?

While there's definitely a cross-over point where you should roll your own, the overhead costs of running a storage cluster reliably (and all the problems you don't really have to deal with because they're outsourced to AWS) mean it might be a better use of time and effort to see how much you can cut that number down by changing the parameters of your storage. The immediate savings will be much easier to justify.

Keep in mind you've also got a migration problem: getting 10PB off Amazon is not a simple, hands-free project.
zmmmmm about 4 years ago
My only comment is that I have a hard time reconciling these two statements:

> downloaded in the background to a mobile phone

and

> but less than 1000ms required

I'm struggling to think of what kind of application needs data access in the background with latency of less than 1000ms. That would normally be for interactive use of some kind.

Getting to 1 min access time would get you into S3 Glacier territory ... you will obviously have considered this, but I feel like some really hard scrutiny of the requirements could be critical here. With intelligent tiering and smart software you might make a near order-of-magnitude difference in cost and lose almost no user-perceptible functionality.
Dylan16807 about 4 years ago
> Should have mentioned earlier, data needs to be accessible at all time. It's user generated data that is downloaded in the background to a mobile phone, so super low latency is not important, but less than 1000ms required.

> The data is all images and videos, and no queries need to be performed on the data.

Okay, this is a good start, but there are some other important factors.

For every PB of data, how much bandwidth is used in a month, and what percentage of the data is actually accessed?

Annoyingly, the services that have the best warm/"cold" storage offerings also tend to be the services that overcharge the most for bandwidth.
fvv about 4 years ago
We need more details. Maybe a graph (or several graphs) of requests per day for various items (categorized by popularity and size is fine): a curve (I suppose not very hyperbolic) breaking down the popularity of the top requested items vs. the long tail of rarely or almost-never-seen items, which I suppose makes up most of those 10PB, plus the current bandwidth, data size and volume. This is to get an idea of bandwidth, IOPS, the structure of the data, request patterns and requirements, and the caching layer. I think a shared filesystem is probably worse than distributed blob storage here (assuming spinning disks somewhere and not huge caches). Not all days' usage patterns are equal, and your requirements are different from a database's (which is more in line with some of the suggestions here). Plus, data safety is everything for your kind of business, so redundancy is a must, and speed too (don't even think about Filecoin, imho). I would think more about a mix of spinning disks and NVMe as a cache layer, redundant across multiple datacenters, if it's to save costs. If it's to save effort and a bit of cost, look at OVH's blob storage offerings, or contact Backblaze for a custom solution hosted by them?

Plus, here we are not really talking about 10PB but probably 25PB given redundancy, and probably 100PB and more given the assumption that your company is growing. So a solution that costs slightly less today but will only do 2x when you do 10x would still be very interesting imo. There is a lot to talk about ;)
Keverw about 4 years ago
I have a startup idea and want to make sure it scales. I was thinking S3 but don't like vendor lock-in. I'm not that far along yet; I was thinking maybe SeaweedFS, or even going crazy enough to write my own storage system: use a database like CockroachDB or MongoDB to store the metadata, and then replicate pieces of each file to "chunk servers". However, cleaning up deleted files, etc., seems a bit of a pain. I was thinking that instead of top-down, each node could contain a copy of the metadata and scan itself individually, instead of the central database trying to manage each node. Then have a process to handle under-replicated files. However, if you can adjust the number of replicas for, say, a popular file, you'd then need to coordinate which extra copies to remove when scaling down. Maybe a bit optimistic.

I'm kinda disappointed that the file solutions seem more complicated, with nothing as simple to set up as some of the newer databases like CockroachDB or MongoDB. I feel like reinventing the wheel is kinda bad, as I'd rather let people who are more expert in this field handle this stuff, but I hate the idea of vendor lock-in and being forced to use other people's servers. Self-hosting would be nice, from a single node for testing to a cluster spanning multiple datacenters. Maybe there's a solution out there; I've done some searching and just seem to go in circles. I've seen one system, but if you wanted to add or remove nodes in the future, you couldn't just "drain" a chunk server by moving its data.
stlava about 4 years ago
If data storage isn't your startup's job then I would negotiate heavily on the AWS contract.
immnn about 4 years ago
At startup grade, it's fine to stick and grow with an IaaS provider like Amazon, Google, Microsoft, Oracle or whatever you like.

However, you'll get to a point where it's crucial to become profitable. And storing that much data does cost a lot of money with one of the mentioned providers.

So, when you think it's the right time to become "mature", get your own servers up and running using colocation.

What options do you have here (just a quick brainstorm): 1. Set up some servers, put in a lot of hard drives, format them using ZFS and make the storage available over NFS on your network. 2. Get some storage servers. 3. Set up a Ceph cluster.

I used to work as a CTO at a hosting company and evaluated all of these options and more. Every one of these options comes with pros and cons.

Just one last piece of advice: evaluate your options and get some external help on this. Any of these options has pitfalls, and you need experienced consultants to set up and run such an infrastructure.

All in all, it's an investment that will save you a lot of money and give you the freedom and flexibility to grow further.

P.S. We ended up setting up a Ceph cluster. We found a partner who specializes in hosting custom infrastructures. That partner is responsible for all the maintenance, so we could focus on the product itself.
howeyc about 4 years ago
If you want to stick with cloud, then stick with what you're doing or migrate to a cheaper alternative like Wasabi, Backblaze, etc.

If you're not afraid of having a few operations people on staff and running a few racks in multiple data centers, then buy a bunch of drives and servers and install something to expose everything via an S3 interface (Ceph, Minio, ...) so none of your tools have to change.
super3 about 4 years ago
If you put the data on Storj DCS, it would run about $40k/month at list pricing, with global availability and encryption. I'm sure you could get a deal if you asked, though. It has S3 compatibility, so it would be plug and play with whatever you have now. Egress out of AWS would be free.

Way cheaper than AWS, and a lot less headache than trying to run it all yourself.
edoceo about 4 years ago
Is this a case where GlusterFS and ZFS would work? I don't have PBs of data, but many TBs. My Gluster nodes are spread around the globe; I use ZFS for each "brick" and then the Gluster magic gives me distribute/replica.

Surprised I didn't see Gluster already in this thread. Maybe it's not for such big scale?

Edit: Wikipedia says "GlusterFS [can] scale up to several petabytes on commodity hardware".
bonoboTP about 4 years ago
Check whether you really need 10 PB or whether you can make do with several orders of magnitude less. I wouldn't be surprised if it was some sort of perverse-incentive CV-building thing, like engineers building a Kubernetes cluster for every tiny thing. If you really do need 10 PB, then you should still probably check again, because you probably don't need 10 PB.
ignoramous about 4 years ago
In cloud:

Wasabi's Reserved Capacity Storage is likely to be the cheapest: https://wasabi.com/rcs/

If you front it with Cloudflare, egress would be close to free, given both these companies are part of the Bandwidth Alliance: https://www.cloudflare.com/bandwidth-alliance/

Cloudflare has an images product in closed beta, but that is likely unnecessary and probably expensive for your use case: https://blog.cloudflare.com/announcing-cloudflare-images-beta/

--

If you're curious still, take a look at Facebook's f4 (generic blob store) and Haystack (for IO-bound image workloads) designs: https://archive.is/49GUM
jkingsbery about 4 years ago
Besides what others have asked:

What are your access patterns? You say "no queries need to be performed," but are you accessing via key-value look-ups? Or ranged look-ups?

What do customers do with the pictures? Do customers browse through images and videos?

You mention it's "user generated data": how many users (order of magnitude)? How often is new data generated? Does the dataset grow, or can you evict older images/videos (so you have a moving window of data through time)?

Besides your immediate needs, what other needs do you anticipate? (Will you need to do ML/analytics work on the data in the future? Will you want to generate thumbnails from the existing data set?)

What my experience is based on: I was formerly a Senior Software Engineer/Principal Engineer on a team that managed tools for internal reporting of Amazon's retail data. The team I was on provides tools for accessing several years' worth of Amazon.com's order/shipment data.
ecesena about 4 years ago
S3 + Glacier. For data you're accessing via Spark/Presto/Hive, I believe Parquet is a good format. At your scale AWS should probably provide discounts; it's worth connecting with an account rep.

I'd recommend reaching out to some data engineers at the various Bigs; they certainly have clearer numbers. Happy to make an intro if you need, feel free to DM me.
dublin about 4 years ago
Actual answer: There is almost NO company that really needs that much data. This has mostly just become a pissing match. In general, companies (especially startups) are way better off making sure they have a small amount of high-quality, accurate data than a huge pile-o-dung that they think they're going to use magical AI/ML pixie dust to do something with.

That said, if you really think you must, spend effort on good deduping/transcoding (relatively easy with images/video), and consider some far lower-cost storage options than S3, which is pretty pricey no matter what you do. If S3 is a good fit, I hear good things about Wasabi, but haven't used it myself.

If you have the technical ability (non-trivial: you need someone who really understands disk and system I/O, RAID controllers, PCI lane optimization, SAN protocols and network performance (not just IP), etc.) and the wherewithal to invest, then putting this on good hardware with something like, say, ZFS at your site or a good colo will be WAY cheaper and probably offer higher performance than any other option, especially combined with serious deduping. (Look carefully at everything that comes in once and you never have to do it again.) Also, keep in mind that even-numbered RAID levels can make more sense for video streaming, if that's a big part of the mix.

The MAIN thing: keep in mind that understanding your data flows is way more important than just "designing for scale". And really try to not need so much data in the first place.

(Aside: I was cofounder and chief technologist of one of the first onsite storage service providers. We built a screamer of a storage system that was 3-4x as fast as, and scaled 10x larger than, IBM's fastest Shark array, at less than 10% of the cost. The bad news: we were planning to launch the week of 9/11 and, being self-funded, ran out of money before the economy came back. The system kicked ass, though.)
Icer5k about 4 years ago
As others have said, it's a complicated question, but if you have the resources/wherewithal to run Ceph and don't want to deal with colocation, you can get a bunch of storage servers from Hetzner and get a much better grasp on cost than with S3.

For example, at 10PB with every object duplicated twice (so 20 PB raw storage), you'd need ~90 of their SX293 [1] boxes, coming out to around €30k/mo. This doesn't include time to configure/maintain on your end, but it does cover any costs associated with drive replacement on failure.

I've done similar setups for cheap video storage & CDN origin systems before, and it's worked fairly well if you're cost conscious.

[1] https://www.hetzner.com/dedicated-rootserver/sx293/configurator
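[Editor's note] A sketch of the sizing above. The per-box raw capacity is an assumption added to make the arithmetic concrete, and the per-box monthly price is back-derived from the comment's ~90 boxes for ~€30k/mo:

```python
import math

DATA_PB = 10
COPIES = 2                      # every object stored twice
BOX_RAW_TB = 14 * 16            # assumed drive layout per storage box, ~224 TB

raw_tb_needed = DATA_PB * COPIES * 1000
boxes = math.ceil(raw_tb_needed / BOX_RAW_TB)          # ~90 boxes
monthly_eur = boxes * (30_000 / 90)                    # ~€333/box, per the comment
print(f"{boxes} boxes, ~EUR {monthly_eur:,.0f}/month")
```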
itroot about 4 years ago
It's a complex question. I had experience working with a ~60-petabyte-ish system back in 2016, and there are a lot of things to cover (not only storage):

* Network access: do you have data that will be accessed frequently, and with high traffic? You need to cover this skewed access pattern in your solution.

* Data migration from one node to another, etc...

* The ability to restore quickly in case of failure.

I would suggest to:

* Use some open-source solution on top of hosted infrastructure (Hetzner or similar is a good choice).

* Bring in a seasoned expert to analyze your data usage/storage patterns; maybe there are other ways to make storage more cost effective than simply moving out of AWS S3.
rvr_ about 4 years ago
Try https://min.io/ . I would 100% go for it if my company were not a https://www.caringo.com/products/swarm customer.
dpa42 about 4 years ago
I'd like to echo a suggestion I read earlier in this thread: at this scale (i.e. yearly spend), talk to AWS, GCP, Azure or a reseller you trust and get a good deal to compare your other options with.

Disclaimer: I'm working at a consultancy/partner for a competing cloud.
PLenz about 4 years ago
I would consider moving to my own metal and using Hadoop.
tux about 4 years ago
Maybe take a look at Backblaze Storage Pods:

https://www.backblaze.com/blog/open-source-data-storage-server/

Their Storage Pod 6.0 can hold up to 480TB per server.
chrislusf about 4 years ago
I am working on SeaweedFS. It was originally designed to store images per the Facebook Haystack paper, and should be ideal for your use case. See https://github.com/chrislusf/seaweedfs

It already supports the S3 API, plus HTTP, FUSE, WebDAV, Hadoop, etc.

There should be many existing hardware options that are much cheaper than AWS S3.
teitoklien about 4 years ago
I would go for something like Wasabi cloud storage.

Its API is S3 compliant.

I also believe they have minimal costs for transferring data from S3 into Wasabi, so the initial setup cost should be lower too.

It should be relatively cheaper than self-hosting too, once you account for the hidden costs that come with self-hosting: managing additional employees, having protocols in place for recovering from faults, expanding the storage as you go, maintaining existing infrastructure, etc.

You can compare the prices with respect to S3 at https://wasabi.com/cloud-storage-pricing/#cost-estimates
ilc about 4 years ago
Look at the cost of moving out of the cloud carefully.

Can you afford the up-front costs of the hardware needed to run the solutions you may want to run?

Will those solutions have good enough data locality to be useful to you?

It isn't very useful to have all your data on-site and your operations in the cloud. You've introduced many new layers that can fail.

If you go on-prem, the solution to look at is likely Ceph.

Source: Storage software engineer who has spoken at SNIA SDC. I currently maintain a "small" 1PB Ceph cluster at work.

Recommendation: get someone who knows storage and systems engineering to work with you on the project. Even if you decide not to move, understanding why is the most important part.
oneplane about 4 years ago
If I were in your shoes I'd still host it on AWS, unless your shoes have a problem with the AWS bill, but then you run into other problems:

- Paying for physical space and facilities

- Paying people to maintain it

- Paying for DRP/BCP

- Paying periodically, since it doesn't last forever and will need replacements

But if you had to move out of AWS and Azure and GCP weren't options, you could do: Ceph and HDDs. Dual copies of files, so you would have to lose three drives for any specific file to suffer data loss (and only for those files). This does not come with versioning or full IAM-style access control or webservers for static files (which you get 'for free' with S3).

HDDs don't need to be in servers; they can be in drive racks, connected with SAS or iSCSI to servers. This means you only need a few nodes to control many hard disks.

A more integrated option would be (as suggested) Backblaze pod-style enclosures, or Storinator-type top loaders (Supermicro has those too). It's generally a 4U rack unit for 40 to 60 3.5" drives, which again generally comes to about 1PB per 4U. A 48U rack holds 11 units when using side-mounted PDUs, a single top-of-rack switch and no environmental monitoring in the rack (and no electronic access control: no space!).

This means that for redundancy you'd need 3 racks of 10 units. If availability isn't a problem (1 rack down == entire service down) you can do 1 rack. If availability is important enough that you don't want downtime for maintenance, you need at least 2 racks. Cost will be about 510k USD per rack. Lifetime is about 5 to 6 years, but you'll have to replace dead drives almost every day at that volume, which means an additional 2,000 drives over the lifespan; perhaps some RAM will fail too, and maybe one or two HBAs, NICs and a few SFPs. That's about 1,500,000 USD in spare parts over the life of the hardware, not including the racks themselves, and not including power, cooling or the physical facilities to house them.

Note: all of the figures above are 'prosumer' class and semi-DIY. There are vendors that will support you partially, but that is an additional cost.

I'm probably repeating myself (and others) here, but unless you happen to already have most of this (say: the people, skills, experience, knowledge, facilities, money up front and money during its lifecycle), this is a bad idea, and 10PB isn't nearly enough to do it yourself 'for cheaper'. You'd have to get into the 100PB-or-more arena to 'start' with this stuff if you need to cover all of those externalities as well (unless it happens to be your core business, which from the opening post it doesn't seem to be).

A rough S3 IA 1Z calculation shows a worst-case cost of about 150,000 USD monthly, but at that rate you can get a lot of cost savings, and with some smart lifecycle configuration you can get that down as well. This means that in a doing-it-yourself vs. letting-AWS-do-it comparison, AWS can effectively end up half as expensive as that worst case.

Calculation as follows:

DIY: at least 3 racks to match AWS IA OneZone (you'd need 3 racks in 3 different locations, a total of 9 racks, to have 3 zones, but we're not doing that as per your request), which means the initial starting cost is a minimum of 1,530,000, and combined with a lifetime cost of at least 1,500,000 over 5 years, if we're lucky, that's about 606,000 per year, just for the contents of racks you have to already have.

Adding to this, you'd have some average colocation costs, no matter whether you have an entire room, a private cage or a shared corridor. That's at least 160U and in total at least 1400VA per 4U (or roughly 14A at 120V). That amount of power is what a third of a normal rack might use on its own! Roughly, that will boil down to a monthly racking cost of 1,300 USD per 4U if you use one of those colocation facilities. That's another ~45k per month, at the very least.

So no-personnel colocation can be done, but doing all that stuff 'externally' is expensive: about 95,500 USD every month, with no scalability, no real security, no web services or load balancing, etc.

That means below-par features get you a rough saving of 50k monthly, if you don't need any personnel and nothing breaks 'more' than usual. And you'd have to not use any other features in S3 besides storage. And if you use anything outside of the datacenter where you're located (i.e. if you host an app in AWS EC2, ECS or a Lambda or something) and you need a reasonable pipe between your storage and the app, that's a couple of k per month you can add, eating into the perceived savings.
chubot about 4 years ago
Why not downsample everything to 10% of the size, put those online, and use Amazon Glacier for the originals (e.g. for exporting)?

If you're storing images and videos directly from the phone, they can be downsampled drastically without losing quality on any viewing device anyone's likely to have.

It's unlikely that anyone wants to download the full-size copy, and if they do, they can wait a few hours for Glacier.

You could expose this to the customer, e.g. offer direct access to originals at 2x or 5x the price. But 99.9% of people will be OK with immediate access to quality images/video and eventual access to the unmodified originals.
Rafuino about 4 years ago
Perhaps look into VAST Data? They have a TCO calculator [1], but it seems to compare against other on-prem data storage providers (like Isilon...). 10PB in One Zone-IA costs $100,000/mo without discounts, or $1.2M per year, and that's just for storage alone. VAST claims something like $3.5M TCO over 5 years with 10PB of data and no growth assumption. 5 years in your S3 zone with no data growth (or transfer...) is $6M.

[1] https://vastdata.com/tco-calculator/
pkb about 4 years ago
1) For hardware you want cheap, expendable, bare metal. Look up posts about how Google built their own servers for reference. 2) For RAID, go with software-only RAID. You will sidestep the problems caused by hardware RAID controllers each having a custom data format (i.e. non-swappable across model/make). 3) For the filesystem, look at OpenAFS. CERN is using OpenAFS to store petabytes of data from the LHC. 4) For the operating system, look at Debian. Coupled with FAI (fully automated installation), it will enable you to deploy multiple servers in an automated way to host your files.
SergeAx about 4 years ago
With a volume like that you should negotiate with at least three storage+CDN providers and see who will give you the best offer. It could be as much as 50% off street price, and even more if you are ready to sign a 2-3 year contract.

I personally would consider S3 Glacier + CloudFront, a member of the Bandwidth Alliance [0] of your choice + Cloudflare, and whoever serves TikTok now.

[0] https://www.cloudflare.com/en-gb/bandwidth-alliance/
kissgyorgy about 4 years ago
I would buy commodity hardware and build my own storage cluster with ZFS and just put Minio in distributed mode on it. You have full control of redundancy levels, either on the cluster side or on the individual ZFS pool side, and can fine-tune what your business needs. Maybe you don't need to mirror all the data, so you could have RAIDZ2 with just 20-30% extra cost.

Hiring staff to build this would make sense at this point, because if your S3 storage cost is really $200,000/month, you can hire 3 good engineers for $450,000/year, which is the cost of roughly two months of S3 storage.
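[Editor's note] A quick comparison of the numbers in this comment. The $200k/month S3 figure and per-engineer salary are the comment's rough numbers; the 1.3x loaded-cost multiplier is an added assumption:

```python
s3_monthly = 200_000
engineers = 3
salary_each = 150_000
loaded_multiplier = 1.3          # benefits, payroll taxes, equipment (assumed)

team_yearly = engineers * salary_each * loaded_multiplier
s3_yearly = s3_monthly * 12
print(f"team: ${team_yearly:,.0f}/yr vs S3: ${s3_yearly:,.0f}/yr")
print(f"team cost covers about {team_yearly / s3_monthly:.1f} months of the S3 bill")
```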
speedgoose about 4 years ago
I strongly recommend having more than one zone. A datacenter being offline for a while, or burning down entirely, is possible. It did happen a few weeks ago, and a lot of companies learnt the value of multiple zones the hard way.
verdverm about 4 years ago
It definitely depends on how you accumulate the data and on the usage patterns. More clarity is needed there to make recommendations.

As an aside, you can often get nice credits for moving off of AWS to Azure or GCP. I recommend the latter.
peterthehacker about 4 years ago
Can you elaborate on what the >10PB of data is and why it's important to your startup? Is it archived customer data, like backups? Or is it data purchased from vendors for analysis and ML?
joepour about 4 years ago
Hey Philip,

We store north of 2PB with AWS and have just committed to an agreement that will increase that commitment, based on some competitive pricing they've given us.

Give me a shout if you'd like to chat.
up2isomorphism about 4 years ago
I have designed, deployed and supported an S3-compatible storage system with 5PB of capacity for a couple of years, so I have acquired the experience to put the right hardware and software together to build such a system. And the cost reduction compared to a public cloud like AWS is tremendous. If you are interested in building private cloud storage of your own, you can contact me at hackernewsantispam@gmail.com for a more detailed discussion.
bullen about 4 years ago
My high-level view is that if you are storing that much content, most of it is bad, so the solution for me would be to delete it!

As for my own storage, I use 1TB SanDisk SD cards in a Raspberry Pi 2 cluster for write-once (user) data, and 8x 64GB 50nm SATA drives from 2011 on a 2x Atom 8-core for data that changes all the time! Xo

People say that content is king; I think that final technology (systems that don't need rewriting ever) is king, and content has peaked! ;)
gigatexal大约 4 年前
Latency being time to first byte downloaded, I'd still store this in the cloud somewhere so that the really "hot" images/videos could be cached in a CloudFront CDN or something.

Also, this is a startup, no? A million or so in storage so that you need not preoccupy your startup with failing disks, disk provisioning, colocation costs, etc., not to mention the 11 9s of durability you get with S3; to me it just makes the most sense to do this in the cloud.
pmorici大约 4 年前
I'd look at using a Storinator cluster with a scalable network filesystem like Gluster, Lustre, Ceph or something along those lines. A 4U Storinator with 60 18TB drives has about 1PB of raw capacity and costs $43,000. You'd be looking at an upfront cost of around $500k, but if you amortize that over a 5-year period you are looking at $100k per year, plus you are going to need someone who dedicates some amount of time to maintaining it.
offtop5大约 4 年前
If AWS is what you know, I'd stick with it.

Changing that can be very, very difficult for not much gain. Plus, AWS skills are very easy to recruit for vs Google Cloud.
miga大约 4 年前
By moving from AWS to a cheaper backup storage provider like B2 you would cut costs from around $200k to $50k per month.

There is an S3-like interface, so you may just need to change the access key and region host: https://www.backblaze.com/b2/docs/s3_compatible_api.html
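As a concrete illustration of that claim, here is a minimal boto3 sketch pointed at B2's S3-compatible endpoint; the region in the URL, the bucket name, and the key IDs are placeholders, not real values.

```python
# Minimal sketch of the "just change the endpoint" idea, using boto3 against
# Backblaze B2's S3-compatible API.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.us-west-004.backblazeb2.com",  # your B2 region here
    aws_access_key_id="YOUR_B2_KEY_ID",
    aws_secret_access_key="YOUR_B2_APPLICATION_KEY",
)

# Same calls your existing S3 code already makes:
s3.upload_file("photo.jpg", "my-bucket", "users/123/photo.jpg")
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "users/123/photo.jpg"},
    ExpiresIn=3600,
)
print(url)
```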
davgoldin大约 4 年前
My previous startup (~2014) had a similar problem: PBs of data, with millions of mixed clients accessing it at close to real-time speeds. The biggest difference is that we needed to do real-time processing before delivering the content, so we needed storage capacity balanced with CPU and RAM.

We ended up buying lots of Supermicro's ultra-dense servers [1]. That's a 3U box containing 24 servers that are interconnected with internal switches (think: 1 box is a self-contained mini cloud). Each server has (in the cheap config) 1 CPU with 4 Xeon cores, 32GB RAM, and a 4TB disk.

Those were bought & hosted in China, and IIRC the price tag was around $20k USD per box. That's 96TB per 3U, or >1.2PB and ~$200k per rack. We had a lot of racks in multiple datacenters. These days capacity can be much larger, e.g. with 6TB disks, 144TB per 3U and >1.8PB per rack.

We tried Ceph, GlusterFS, HDFS, even early versions of Citus, and pretty much everything that existed and was maintained at that time. We eventually settled on Cassandra. It required 2 people to maintain the software, and 1 for the hardware.

Today, I would have done the same hardware setup, mainly because I haven't had one Supermicro component fail on me since I first bought them in the early 2000s. Cassandra would've been replaced by FoundationDB. I've been using FoundationDB for a while now, and it just works: zero maintenance, incredible speeds, multi-datacenter replication, etc.

Alternatively, if I needed storage without processing, but with fast access, I'd probably go with Supermicro's 4U 90-bay pods [2]. That'd be 90*16TB, 1.4PB in 4U, or ~14PB per rack. And FoundationDB, no doubt.

As a fun aside: back then, we also tried Kinetic Ethernet Attached Storage [3]. Great idea, but what a pain in the rear it was. We did however have a very early access device. No idea if it's still in production or not.

[1] https://www.supermicro.com/en/products/system/3U/5038/SYS-5038MD-H24TRF.cfm

[2] https://www.supermicro.com/en/products/system/4U/6048/SSG-6048R-E1CR90L.cfm

[3] https://www.supermicro.com/products/nfo/files/storage/d_SSG-K1048-RT.pdf
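For readers curious what "blobs in FoundationDB" might look like in practice, here is a minimal, hypothetical sketch of chunking an image/video into FDB keys with the official Python bindings; the key layout, chunk size, and api_version are assumptions, and FDB's ~100 KB value and ~10 MB transaction limits mean large videos would really be split across multiple transactions.

```python
# Hypothetical sketch: store a media blob as many small chunk keys in FDB.
import fdb

fdb.api_version(630)
db = fdb.open()
blobs = fdb.Subspace(("blobs",))

CHUNK = 90_000  # stay under the 100 KB value limit

@fdb.transactional
def put_blob(tr, object_id, data):
    # One key per chunk: ("blobs", object_id, chunk_index) -> bytes
    for i in range(0, len(data), CHUNK):
        tr[blobs.pack((object_id, i // CHUNK))] = data[i:i + CHUNK]

@fdb.transactional
def get_blob(tr, object_id):
    r = blobs.range((object_id,))
    return b"".join(v for _, v in tr.get_range(r.start, r.stop))

put_blob(db, "user123/clip.mp4", b"\x00" * 250_000)  # toy payload
assert len(get_blob(db, "user123/clip.mp4")) == 250_000
```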
jcalabro大约 4 年前
I've used Wasabi a ton in the past and it's been excellent. It's already been talked about a lot in this thread, but I haven't seen their marketing video[0] linked, and it's pretty funny so I thought I'd leave it here!

[0] https://www.youtube.com/watch?v=P7OzyTG4fCM
paulmd大约 4 年前
Tape, if it fits your storage needs. You won't beat the cost of tape if you are doing cold storage.

For online or nearline storage, you should look at what Backblaze did. Either buy hardware that is similar to what they did (basically disk shelves; you can cram ~100 drives into a 4U chassis) or, if you are at that scale, you can probably build your own just like they did.
ForHackernews大约 4 年前
Have you considered deleting most of it?

Chances are you don't need all of it. Every company today thinks they need "Big Data" to do their theoretical magic machine learning, but most of them are wrong. Hoarding petabytes of worthless data doesn't make you Facebook.

To be a little less glib, I'd start by auditing how much of that 10PB actually matters to anyone.
helge9210大约 4 年前
For on-premises storage (without managing storage racks and Ceph yourself) you can look at Infinibox (https://www.infinidat.com/en/products-technology/infinibox).

(I'm not working there anymore, posting this just to help)
philippb大约 4 年前
I just wanted to thank everyone for taking the time to reply. This has been way better input than I thought it would turn out to be.
louwrentius大约 4 年前
Ceph is a beast and will require at least 2-3 technicians with intricate Ceph knowledge to run multiple (!) Ceph clusters in a manner that is responsible from a business-continuity standpoint.

Because you must be able to deal with Ceph quirks.

If you can shard your data over multiple independent stand-alone ZFS boxes, that would be much simpler and more robust. But it might not scale like Ceph.
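For illustration, a toy sketch of what hash-based placement across independent ZFS boxes could look like; the host names and replica count are made up, and a real deployment would also need a rebalancing story for when boxes are added or retired.

```python
# Deterministic placement of object keys onto stand-alone storage boxes,
# with one extra replica on the "next" box in the ring.
import hashlib

STORAGE_BOXES = ["zfs-01", "zfs-02", "zfs-03", "zfs-04"]

def place(object_key: str, replicas: int = 2) -> list[str]:
    digest = int.from_bytes(hashlib.sha256(object_key.encode()).digest()[:8], "big")
    start = digest % len(STORAGE_BOXES)
    # Primary plus the following box(es), wrapping around.
    return [STORAGE_BOXES[(start + i) % len(STORAGE_BOXES)] for i in range(replicas)]

print(place("users/123/photo.jpg"))  # e.g. ['zfs-03', 'zfs-04']
```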
sparrc大约 4 年前
Have you tried Backblaze B2 storage? It requires more work client-side but is around 1/4 to 1/5 the price.

The only issue is whether or not you have a CDN in front of this data. If you do, then Backblaze might not be much cheaper than S3 -> CloudFront. You'd save storage costs but easily exceed those savings in egress.
silviot大约 4 年前
I think if I _had_ to decide (I'm not the best-informed person on the matter) I'd lean towards LeoFS [1].

I have only read about it, never used it.

It advertises itself as exabyte-scalable and provides S3 and NFS access.

[1] https://leo-project.net/leofs/
sandreas大约 4 年前
If someone needs even more background:

http://web.archive.org/web/20201128103953/https://blog.amplitude.com/keepsafes-data-driven-approach-to-pricing
znpy大约 4 年前
You can buy an appliance from Cloudian and have your S3 on-premise, with support.

They're basically 100% S3-compatible.

I don't know the details of their pricing, but they're production-grade in the real sense of the word.

I am not affiliated with them in any way, but I interviewed with them a couple of years ago and left with a good impression.
christophilus大约 4 年前
Wasabi + BunnyCDN has worked like a charm for us. We've got about 50TB there, if I recall. Our bill is dramatically smaller than when we were on AWS. Wasabi has had some issues, notably a DNS snafu that took the service out for about 8 hours, if I recall. But overall, the savings have been worth it.
u678u大约 4 年前
Sounds like a standard business problem: make a spec and get the main 20 cloud providers to submit bids.
glitchc大约 4 年前
Compression is always a good alternative, which is especially effective when modification is infrequent.
joering2大约 4 年前
It would be cool to actually have a "blockchain" for something like this. I know that storing huge amounts of data is a niche market, but hear me out:

Everyone who wants to make extra money can join.

You join with your computer hooked up to the internet, with a piece of software running in the background.

You share a % of your hard drive and limit the speed that can be used to upload/download.

When someone needs to store 100PB of data (the "uploader"), they submit a "contract" on the blockchain. They also set the redundancy rate, meaning how many copies need to be spread around to guarantee the consistency of the data as a whole.

The "uploader" shares a file; the file is chopped into chunks and each chunk is encrypted with the uploader's private PGP key. The info about the chunks is uploaded to the blockchain and everyone gets a piece. In return, all parties that keep a piece of the uploader's data get paid a small %, either via PayPal or simply in crypto.

I think that would be a cool project, but someone would have to do back-of-napkin number crunching to see whether it would be profitable enough for data hoarders :)
plint大约 4 年前
I'm curious why distributed cloud storage systems such as Filecoin haven't been mentioned as a possible solution. Estimates of the cost of storage that I saw on "file.app" put it at something like 100x cheaper than S3.

Not worth the risk, or why?
Charon77大约 4 年前
Not from experience, but if I were given the task, I'd probably think about how that data could be distributed. Maybe use my own instance of IPFS, so each 'node' doesn't have to store all of the data.
rasz大约 4 年前
Just run another venture round and don't think too hard about this problem. If everything goes well it won't be your problem for much longer; if it goes badly then who cares anyway.
hemmert大约 4 年前
I happen to own exa-byte.com, in case you need a domain for it ;-)

(In 1998, in school, I looked up in our math book what would come after mega, giga... 20 years later, just as fresh and useless as on day one ;))
plasma大约 4 年前
Have you looked into storage tiering (e.g. moving objects to Glacier) for less active users?

Perhaps it's a mix of some app pattern changes and leveraging the storage tier options in AWS to reduce your cost.
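For anyone unfamiliar with lifecycle rules, here is a minimal boto3 sketch of that idea; the bucket name, prefix, and 90-day threshold are placeholders, and Glacier-class retrieval is slow, so this only fits data whose sub-second access requirement can be relaxed.

```python
# Sketch: have S3 itself transition rarely touched objects to a colder class.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-media-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "cold-user-media",
                "Status": "Enabled",
                "Filter": {"Prefix": "media/"},
                "Transitions": [
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```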
brudgers大约 4 年前
Is the storage of the data critical to the future growth of the business?
sgt大约 4 年前
Here's an unpopular answer - don't store 10PB of data. Find a way for your startup to work without needlessly having to store insane amounts of data that will likely never be needed.
daveguy大约 4 年前
At that scale I would contact AWS, Backblaze and Wasabi directly to see what improvements they can offer in terms of TCO (and potentially for a longer term contract).
punitvara大约 4 年前
See how Filecoin works, and how decentralized databases work. It should be way cheaper than AWS. Search for an S3-like API among decentralized databases and you will get your answer.
iamgopal大约 4 年前
Google Nearline etc. costs a bit less. Also, since you are coming from AWS, they may give you a good discount. Considering operations and maintenance, the cloud will be cheaper.
hsaliak大约 4 年前
Use Intelligent-Tiering or some kind of custom system that moves data into Glacier more aggressively based on access times. It can help a lot.
jkuria大约 4 年前
Always look to nature first. Nature never lies. DNA storage:

Escherichia coli, for instance, has a storage density of about 10^19 bits per cubic centimeter. At that density, all the world's current storage needs for a year could be well met by a cube of DNA measuring about one meter on a side.

There are several companies doing it: https://www.scientificamerican.com/article/dna-data-storage-is-closer-than-you-think/
nixgeek大约 4 年前
What happens to your business if you lose this data?
royalresolved大约 4 年前
I'm unsure if it's mature enough for your use right now (in particular, the retrieval market is undeveloped for fast access), but I wonder if you have looked at Filecoin?

https://file.app/
https://docs.filecoin.io/build/powergate/

(Disclosure: I am indirectly connected to Filecoin, but interested in genuine answers)
coverband大约 4 年前
Have you looked into Backblaze? They’re a lot cheaper than Amazon and have S3-compatible APIs.
xnx大约 4 年前
Off topic, but I'm shocked that anyone would trust uploading sensitive files (e.g. nudes) to this service. Photo vault type apps can be useful, but I would never want the content in those apps to upload to a small service like this based on their word that employees won't go through it.
SrslyJosh大约 4 年前
> no queries need to be performed on the data.

cat >/dev/null, obviously. ;-)
siavosh大约 4 年前
Not sure about the state of some of the decentralized solutions...
treeman79大约 4 年前
Tape drives. Semi-joke.

How often you access the data is another question.
SteveNuts大约 4 年前
Pure FlashBlade, 100%.

Feel free to AMA on it, I'm a huge fan.
aaccount大约 4 年前
How much is it costing to keep 10PB on AWS S3? According to calculator.s3.amazonaws.com it is USD 200,000+ per month.
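A back-of-the-envelope version of that calculator number; the per-GB list prices below are approximate published rates from around the time of the thread, and real bills also include request and egress charges.

```python
# Rough monthly storage-only cost for 10PB at two S3 storage classes.
PB = 10
GB = PB * 1024 * 1024  # ~10.5 million GB

for name, price_per_gb in [("S3 Standard", 0.021), ("S3 One Zone-IA", 0.01)]:
    print(f"{name}: ~${GB * price_per_gb:,.0f} per month")

# S3 Standard   : ~$220,000 per month
# S3 One Zone-IA: ~$105,000 per month
```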
johngalt大约 4 年前
900 LTO-U8 tapes
cuducos大约 4 年前
I'd store it in node_modules/
skuhn大约 4 年前
The right answer for you may have more to do with your business requirements than your technical requirements. I've done large-scale storage in cloud providers (S3, GCS, etc.) and on premise (I designed the early storage systems at Dropbox). I haven't found there to be a one-size-fits-all answer.

If you place a high value on engineering velocity and you already rely on managed services, then I would look to stay in S3. Do the legwork to gather competitive bids (GCS, Azure, maybe one second-tier option) and use that in your price negotiation. Negotiation is a skill, so depending on the experience in your team you may have better or worse results, but it should be possible to get some traction if you engage in good faith with AWS.

There is a considerable opportunity cost to moving that data to another cloud provider. No matter how well you plan and execute it, you're going to lose some amount of velocity for at least several months. In a worse scenario, you are running two parallel systems for a considerable amount of time and have to pay that overhead cost on your engineering team's productivity. In the worst-case scenario, you experience service degradation or even lose customer data. It's quite easy for 2-3 months to turn into 2-3 years when other higher-priority requirements appear, and it's also easy for unknowns to pop up and complicate your migration.

With all of that said, if the fully baked cost of migrating to another cloud provider (engineering time + temporary migration services + a period of duplicated costs between services + opportunity cost) is trajectory-changing for your business, then it certainly can be done. I feel like GCS is a bit better of a product vs S3, although S3 has managed to iron out some of its legacy cruft in the last few years. Azure is not my cup of tea. I have never seriously considered any other vendors in the space, although there are many.

Your other option is to build it. I've done it several times; people do it every day. You may need someone on the team who either has or can grow the skillset you're going to need: vendor negotiation, capacity planning, hardware qualification, and other operational tasks. You can save a bunch of money, but the opportunity cost can be even greater.

10PB is the equivalent of maybe 1-2 racks of servers in a world where you can easily get 40-50-drive systems with 10-18TB drives (of course, for redundancy you would need more like 2-2.5x that, and you need space to grow into so that you're always ahead of your user growth curve). At any rate, my point is that the deployment isn't particularly large, so you aren't going to see good economies of scale. If you expect to be in the 100+PB range in 6-12 months, this could still be the right option.

Personally, I would look to build a service like this in S3 and migrate to on-premise at an inflection point probably 2 orders of magnitude above yours, if the future growth curve dictated it. The migration time and cost will be even more onerous, but the flexibility while finding product/market fit probably countermands the cost overhead.

There is a third option, which is hosted storage where someone else runs the machines for you. Personally I see it as a stop-gap solution on the path to running the machines yourself, and so it's not very exciting. But it is a way to minimize your investment before fully committing.
jeffrallen大约 4 年前
On tape.
gamedna大约 4 年前
Context please.

1. Do you have paying customers already?
2. Can the startup weather large capex? Does opex work better for you?
3. Do you already have staff with sufficient bandwidth to support this, or will you need to hire?
4. What are the access patterns for the data?
5. What is the data growth rate?
6. What is the cost of losing some, or all, of this data?
7. What is your expected ROI?

TL;DR - storing and serving up the data is the easy part.
tusqasi大约 4 年前
Talk with Linus at LTT.
acd大约 4 年前
Use erasure coding.
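For context on what erasure coding buys you, a small sketch of the raw-capacity overhead for a few common (illustrative) data+parity layouts, with 3x replication included for comparison.

```python
# With k data shards and m parity shards you can lose any m shards and still
# rebuild, at a storage overhead of (k + m) / k.
def raw_pb_needed(logical_pb: float, k: int, m: int) -> float:
    return logical_pb * (k + m) / k

for k, m in [(1, 2), (4, 2), (8, 3), (10, 4)]:  # 3x replication shown as k=1, m=2
    print(f"EC {k}+{m}: {raw_pb_needed(10, k, m):.1f} PB raw for 10 PB logical")
```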
water8大约 4 年前
GlusterFS + ZFS
neverartful大约 4 年前
I'm way late to the conversation. There are a few things that I haven't seen mentioned (apologies if I overlooked them).

I have no idea how you evaluate the necessity of keeping the data safe, and that plays a huge factor in deciding what's appropriate. Amazon S3 makes it a no-brainer for having your data safe across failure domains. Of course, the same can be done with non-S3 solutions, but someone has to set it all up, test it, and pay for it.

My background in storage is mostly related to working with Ceph and Swift (both OpenStack Swift and SwiftStack) while being employed by various hardware vendors.

Some thoughts on Ceph:

- In my opinion, Ceph is better suited for block storage than object storage. To be fair, it does support object storage with the use of the Rados Gateway (RGW), and RGW does support the S3 API. However, Ceph has a strong consistency model and, in my opinion, strong consistency tends to be better suited to block storage. Why is this? For a 10PB cluster (or larger), failures of various types will be the norm (mostly disk failures). What does Ceph do when a disk fails? It goes to work right away to move whatever data was on the failed disk (using its redundant copies/fragments) to a new place. No big deal if it's only a single HDD that's in failed status at any given point in time. But what if you have a server, disk controller, or drive shelf fail? You get a whole bunch of data backfilling going on all at once. The other consideration with a strong consistency model is multi-site storage, which it is not so good for (due to higher latency for inter-site communication).

- Ceph has a ton of knobs, is very feature-rich, and is high on complexity (although it has improved). The open-source mechanisms for installing it and the admin tools have experienced (and continue to have) a high rate of churn. Do a quick search on how to install/deploy Ceph and you'll see multiple approaches. Same with admin tools. Should you strongly consider Ceph as an option, I would strongly advise you to license and use one of the 3rd-party software suites that (a) take the pain away from install/deploy/admin, and (b) reduce the amount of deep expertise that you would need to keep it running successfully. Examples of these 3rd-party Ceph admin suites are Croit [0] and OSNEXUS [1]. Alternatively, if you like the idea of a Ceph appliance, I would take a close look at SoftIron [2].

Aside from Ceph, it's worth taking a very close look at OpenStack Swift [3][4]. It's only object storage and has been around for about 10 years. It supports the S3 protocol and also has its own Swift protocol. It's open source and it has an eventually consistent data model. Eventual consistency is (IMO) a much better fit for a 10+PB cluster of objects. Why is this? Because failures can be handled with less urgency and at more opportune times. Additionally, an eventually consistent model makes multi-site storage MUCH easier to deal with.

I suggest going further and spending some quality time with the folks at SwiftStack [5]. Object storage is their game and they're very good at it. They can also help with on-prem vs hosted vs hybrid deployments.

Additionally, you would definitely want to use erasure coding (EC) as opposed to full replication. This is easy enough to do with either Swift or Ceph.

Disclaimers and disclosures - I am not currently (nor have I ever been) employed by any of the companies I mentioned above.

Dell EMC Technical Lead and co-author of these documents:
- Dell EMC Ready Architecture for Red Hat Ceph Storage 3.2 - Object Storage Architecture [6]
- Dell EMC Ready Architecture for SwiftStack Storage - Object Storage Architecture Guide [7]

Intel co-author of this document:
- "Accelerating Swift with Intel Cache Acceleration Software" [8]

[0] https://croit.io
[1] https://www.osnexus.com/technology/ceph
[2] https://softiron.com
[3] https://wiki.openstack.org/wiki/Swift
[4] https://github.com/openstack/swift
[5] https://www.swiftstack.com
[6] https://www.delltechnologies.com/resources/en-us/asset/technical-guides-support-information/solutions/red_hat_ceph_storage_v3-2_object_storage_architecture_guide.pdf
[7] https://infohub.delltechnologies.com/section-assets/solution-brief-swiftstack-1
[8] https://www.intel.sg/content/www/xa/en/software/intel-cache-acceleration-software-performance/intel-cache-acceleration-software-performance-accelerating-swift-white-paper.html
rkagerer大约 4 年前
Floppies. Lots of floppy disks. Like, 7B of them.
distroguy大约 4 年前
In my opinion, you're probably better off building and managing your own infrastructure at that scale, especially if you control the rest of the software stack that runs your platform. It would be best to go with an open source solution and invest in your own technology, infrastructure and people. This way, no matter what happens you can be in control of your data for as long as you want to, and you avoid vendor lock-in at every level.

If this isn't already something that your company is familiar with, you'll need people who know how to buy, build, test and manage infrastructure across datacentres, including servers and core networking. Understanding platforms like Linux will be critical, as well as monitoring and logging solutions (perhaps like Prometheus and Elastic).

The only solution that I know of which would scale to your requirements would be OpenStack Swift (https://wiki.openstack.org/wiki/Swift). It's explicitly designed as an eventually consistent object store, which makes it great for multi-region, and it scales. It is Apache 2.0 licensed, written in Python, with a simple REST API (plus support for S3).

The Swift architecture is pretty simple. It has 4 roles (proxy, account, container and object) which you can mix and match on your nodes and scale independently. The proxy nodes handle all your incoming traffic, like retrieving data from clients and sending it on to the object nodes and vice versa. Proxy nodes can be addressed independently rather than through a load balancer, which is one of the ways Swift is able to scale out so well. You could start with three and go up to dozens across regions, as required.

The object nodes are pretty simple: they are also Linux machines with a bunch of disks, each formatted with a simple XFS file system where they read and write data. Whole files are stored on disk, but very large files can be sharded automatically and spread across multiple nodes. You can use replication or erasure coding, and the data is scrubbed continuously, so if there is a corrupt object it will be replaced automatically.

Data is automatically kept on different nodes to avoid loss when a node dies, in which case new copies of the data are made automatically from existing nodes. You can also configure regions and zones to help determine the placement of data across the wider cluster. For example, you could say you want at least one copy of an object per datacentre.

I know that many large companies use Swift, and I've personally designed and built large clusters of over 100 nodes (with the SwiftStack product) across three datacentres. This gives us three regions (although we mostly use two) and we have a few different DNS entries as entry points into the cluster. For example, we have one at swift.domain.com which resolves to 12 proxy nodes across each region, then others which resolve to proxy nodes in one region only, e.g. swift-dc1.domain.com. This way users can go to a specific region if they want to, or just the wider cluster in general.

We used Linux on commodity hardware, stock 2RU HPE servers with 12 x 12 TB drives (so total cluster size is ~14PB raw), but I'm sure there's a better sweet spot out there. You could also create different node types, higher density or faster disk as required, perhaps even an "archive" tier. NVMe is ideal for the account and container services; the rest can be regular SATA/NL-SAS.

You want each drive to be addressed individually, so no multi-disk RAID arrays; however, each of our drives sits on its own single-member RAID-0 array in order to make use of some caching from the RAID controller (so 12 x RAID-0 arrays per object node).

Our cluster nodes connect to Cisco spine-and-leaf networking and have multiple networks: e.g. the routeable frontend network for accessing the proxy nodes, a private cluster network for accessing objects, and the replication network for sending objects around the cluster.

Ceph is another open source option, and while I love it as block storage for VMs, I'm not convinced that it's quite the right design for a large, distributed object store. Compared to Swift, the object store seems more of an afterthought and inherits a system designed for blocks. For example, it is synchronous and latency sensitive, so multi-region can be tricky. Could still be worth looking into, though.

Given the size of your data and the ongoing costs of keeping it in AWS, it might be worthwhile investing in a small proof of concept with Swift (and perhaps some others). If you can successfully move your data onto your own infrastructure, I'm sure you can not only save money but be in better control overall.

I've worked on upstream OpenStack and I'm sure the community would be very welcoming if you wanted to go that way. Swift is also just a really great piece of technology and I love seeing more people using it :-) Feel free to reach out if you want more details or some help, I'll be glad to do what I can.
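To make the Swift suggestion concrete, here is a minimal python-swiftclient sketch of uploading and reading back an object; the auth URL, account, and credentials are placeholders, and a Keystone-backed cluster would authenticate differently (auth_version='3' plus os_options).

```python
# Minimal sketch of talking to a Swift cluster with simple v1 (tempauth-style) auth.
from swiftclient.client import Connection

conn = Connection(
    authurl="https://swift.domain.com/auth/v1.0",  # placeholder auth endpoint
    user="account:user",
    key="secret",
)

conn.put_container("user-media")
with open("photo.jpg", "rb") as f:
    conn.put_object("user-media", "users/123/photo.jpg",
                    contents=f, content_type="image/jpeg")

headers, body = conn.get_object("user-media", "users/123/photo.jpg")
print(headers.get("content-length"))
```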
molszanski大约 4 年前
Pied Piper?
exdsq大约 4 年前
RAM?
zennzei大约 4 年前
Wasabi storage
artemist大约 4 年前
You almost certainly should not have 10PB of data. Not only is it extremely expensive, it is unlikely that millions of people have each allowed you to take gigabytes of their data. You are sitting on a huge violation of the CCPA, GDPR, and other privacy laws, as well as copyright issues. If you are scraping data off the internet, you likely have content that is illegal to possess in several different countries (such as child sexual abuse material or videos of ISIL killings). As a startup, you do not have the legal and technical capabilities to manage this data, so you should not have it.
JoelSchmoel大约 4 年前
Move to Oracle Cloud, and before everybody starts hammering me, look at this: https://www.oracle.com/cloud/economics/

I am not from Oracle, and I am also running a startup with growing pains. Oracle is a bit late to the cloud game, so they are loading up their customer base now; the ear-squeezing will come 3-5 years down the road. Maybe you can take advantage of this.