The conclusion completely handwaves the massive overheads that come with not only owning your own infrastructure, but also having to manage a complex platform stack and its security.

The security points, on which the main argument hinges, seem hyperfocused and in most cases misguided.

Redshift is not internet facing. If your Redshift is internet facing, you've messed up somewhere.

The CPU attack example given is for AMD Zen, which isn't what the most common AWS workloads run on. Further, a benefit of using a cloud provider is that they put mitigations in place for most exploits, whereas running your own stack means that's now on you; running your own stack does not excuse you from having to put mitigations in place.

In the examples again, the speed-to-market problems are a reflection of your organisation, not the cloud.

Stepping back a little, I'm thinking (as terrible as it is) that this is a case of blaming the tools, but never yourself; a lot of the problems the author describes are very specific to their own observations and reflect poor use and poor understanding of AWS in general.

Overall not a great article, with a headline designed for people who already dislike AWS/GCP/Azure.
Why is the choice always portrayed as if it's either cloud or on-prem/colocation?
Those are two extremes.
At work I use dedicated physical machines from Hetzner.
If I need an extra one they deliver it in a few minutes, and thanks to Ansible it's provisioned within a few minutes more (a sketch of that step follows below).
Hetzner keeps an eye on the hardware and replaces disks, PSUs and the like if needed.

I wouldn't often advocate colocation or on-prem, since that indeed comes with a whole set of headaches, but renting dedicated physical servers like we do offers a lot of flexibility, with very little overhead, at a price/performance ratio that makes AWS look like pure extortion.
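For the curious, "provisioned via Ansible" typically boils down to one command once the machine is reachable. A hedged sketch, driven from Python for illustration; the playbook name, inventory path, and host group are assumptions, not details from the parent comment:

```python
import subprocess

# Run the (hypothetical) provisioning playbook against the newly
# delivered machine. --limit restricts the run to the new host group.
subprocess.run(
    [
        "ansible-playbook",
        "provision.yml",              # hypothetical playbook
        "-i", "inventory/hosts.ini",  # hypothetical inventory
        "--limit", "new_hetzner_nodes",
    ],
    check=True,  # raise if the playbook fails
)
```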
Move stuff back on-prem. Great. But aside from all the usual practical matters (power, backups, spare parts, service contracts, etc...), software is rapidly becoming a roadblock to moving back to on-prem. If everything you have is running on Kubernetes or other open source software, great. If not, then there's an increasing number of roadblocks being put up:

* Some software is only available as SaaS anymore
* Jacking up prices to ridiculous levels for on-prem licenses (to favor their SaaS offering, of course)
* Intentionally kneecapping the on-prem software feature-wise
* Stifling development of the on-prem product
* Forcing you to use some parts of their cloud services to make other things work
* Dark patterns and endless nags in the software to push people towards their cloud services
* Poor documentation on how to install/use/maintain the software on-prem, or making it needlessly complex
* Slower response from the software vendor in case of security issues
* Making it impossible to export data from the cloud and import it on-prem

And I can make this list go on and on and on... My point being: for small-time firms that don't have the resources and rely solely on commercial software, moving back might not even be an option anymore.
With IAM you can restrict all DynamoDB endpoint access to a VPC endpoint (VPCE)/PrivateLink. But the insecurity of the public-facing endpoints is vastly overstated even without using a VPCE.

The rest is a bunch of FUD - I spent years going through these points with some of the world’s best security teams to secure some of the most systemically important workloads. These arguments are fairly tired.

I’ll tackle another one - speculative attacks. First, you certainly can get bare-metal exclusive access to hosts. But instances move around the broader infrastructure of an AZ, even if you’re using something like placement groups, which only assure a local affinity. The chance a bad actor can colocate on the same physical device as your workload and successfully attack through side channels is vanishingly low in larger regions. To target anyone specific you would need to mount such an enormous fishing expedition that it’s impractical. Further, cloud providers aren’t insensate to such attacks, and accounts doing that sort of topological mapping are easily detected. A better solution is to simply cycle your instances periodically to migrate your workloads around. For very sensitive workloads where the extraordinary unlikelihood isn’t sufficient, just get a bare-metal instance.

I don’t dissuade anyone from running data centers. But I’ve yet to find anyone running back.
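For anyone who hasn't seen the first point in practice, here is a minimal sketch of that kind of restriction, assuming the standard aws:SourceVpce condition key and boto3; the role name, policy name, and VPC endpoint ID are hypothetical placeholders, not anything from the parent comment:

```python
import json

import boto3  # AWS SDK for Python

# Hypothetical identity policy: deny every DynamoDB action unless the
# request arrives through one specific VPC endpoint. aws:SourceVpce is
# only present on requests made via a VPCE, and StringNotEquals matches
# when the key is absent, so public-internet requests are denied.
deny_outside_vpce = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyDynamoDBOutsideVpce",
            "Effect": "Deny",
            "Action": "dynamodb:*",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:SourceVpce": "vpce-0123456789abcdef0"  # placeholder ID
                }
            },
        }
    ],
}

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName="app-role",  # hypothetical role
    PolicyName="deny-dynamodb-outside-vpce",
    PolicyDocument=json.dumps(deny_outside_vpce),
)
```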
I'd say the attack surface of rolling your own tech in a datacenter is substantially higher.

Yes, DynamoDB has an API, but I'd wager an AWS engineer with good security skills has looked at it carefully. Do you have an equally skilled security expert on hand to look at the datacenter's stack, and then the same for whatever you're deploying on it?

Not all internet exposure is equal. Moving out of the cloud often makes sense, but security isn't the right motivation for it.
Is there some plugin that removes GIFs from blog posts? IMHO an article loses a lot of credibility if I also have to look at some unrelated GIF in an otherwise very well written article.
> *I used to “sell” computer leases about 20 years ago saying hey don’t buy a computer, rent it and upgrade it in a year. Turns out the fine print was terrible.*

This prompts the question of "So, what are you selling *this* year?"

I'm sympathetic to on-prem and datacenters, but maybe all the reaction GIFs are distracting the CIO/CTO from the new fine print?
No. We rolled our own stack for 10 years, until 2018, well after AWS and Azure were around.

We switched to Azure in 2018 and never looked back.

Sure, you trade security (do you really, though?) in exchange for:

- not needing to head to the DC because a power supply failed and a rando who was in the cage never plugged in the redundant one

- not having to be way over-capacity in scalability, or suddenly under-capacity and emergency-ordering some more 1Us

- not sifting through eBay to buy a spare hard drive that one of your boxes from 2011 needs but Dell no longer makes

and the list goes on ad infinitum.

This problem has been solved. It's time to move on. Any time a company spends tinkering with their stack is less time spent delivering something of value.
> in traditional datacenters, with Infrastructure and Support teams separate from Development (anti DevOps), there are/were strong human checks and balances. If your Devs wanted to make an API Internet accessible and connect it to what they thought was a “sanitized” database, they probably had to raise a change, submit some firewall rules, maybe talk to a DBA to get credentials.

This is just romanticizing things. In every deployment - whether it's "in cloud" or anywhere else - there is always the quick one-off change that someone makes. Probably for a valid reason, e.g. solving a production problem quickly. Chances are high that it will go unnoticed, until another problem manifests (and hopefully that's not a security issue!).

I would argue that in cloud setups the chances of that happening are actually slightly lower, because teams are incentivized to use immutable and declarative infrastructure. And there might be an audit log in place which tells you what changes have been made - although that still requires people to look at it, which again tends to happen only when problems show up.
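On the audit-log point: in AWS the obvious candidate is CloudTrail (the comment doesn't name a service, so take this as one possible reading). A small boto3 sketch that pulls the last day of write events, i.e. the quick one-off changes, looks roughly like this:

```python
from datetime import datetime, timedelta, timezone

import boto3  # AWS SDK for Python

cloudtrail = boto3.client("cloudtrail")

# Fetch the last 24 hours of management events that were writes
# (ReadOnly == "false"), i.e. actual changes made to the environment.
response = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "ReadOnly", "AttributeValue": "false"},
    ],
    StartTime=datetime.now(timezone.utc) - timedelta(days=1),
    EndTime=datetime.now(timezone.utc),
)

for event in response["Events"]:
    # Username can be absent for some event sources, hence the default.
    print(event["EventTime"], event.get("Username", "unknown"), event["EventName"])
```

Which only reinforces the caveat above: the trail exists, but someone still has to look at it before the next problem shows up.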
I thought for a second I was hearing a suggestion that by running your own hardware you can turn off all mitigations. Which should save you a ton of money; for the same core you may be getting more than double the performance. And since you're not running any unknown foreign workloads, no one would have access to run timing attacks on you.

The article pretty quickly moved off that point. And at points it seems to be saying that making it hard to set up infrastructure is a feature that will help the bottom line. Which has some truth, but at enormous emotional cost to your teams.

I do think this shift needs to happen, and I appreciate such a lengthy set of concerns being brought out. But I'm pretty lukewarm on this analysis.
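For what it's worth, you can see exactly which mitigations a given box is paying for before deciding anything: Linux publishes per-vulnerability status under sysfs. A minimal, Linux-only sketch (actually disabling them, e.g. booting with the kernel's mitigations=off parameter, is a separate and much riskier decision, for the reasons the comment weighs):

```python
from pathlib import Path

# Linux exposes the kernel's view of each speculative-execution issue
# as one file per vulnerability (Linux-only path; errors elsewhere).
VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

for entry in sorted(VULN_DIR.iterdir()):
    # Each file holds a one-line status such as
    # "Mitigation: Retpolines" or "Not affected".
    print(f"{entry.name}: {entry.read_text().strip()}")
```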
In my experience, small firms will struggle to hire excellent systems engineers who can manage a bare-metal setup; it's easier, however, to hire okay-ish SWEs who can design applications appropriate for the cloud.
> Serverless functions – still running on someone’s server… but if you have a function that you need to evoke infrequently and cold start times don’t matter that much, yeah good for cloud.<p>While I agree in principle, these functions don't exist in thin air. You will often want to store the results somewhere. And then protect them. So with time you are almost replicating a lot of infrastructure. And if you are a big org you will probably want to enable SecurityHub and other security/governance services like AWS Config and things get expensive again.