That's sure nice, but I'm waiting for AWS to switch to automatic sustained use discounts [0] like GCP offers.

[0]: https://cloud.google.com/compute/docs/sustained-use-discounts
Link to AWS Blog Post: https://aws.amazon.com/blogs/aws/new-per-second-billing-for-ec2-instances-and-ebs-volumes/
I really wish AWS would allow users to cap billing. Something that freezes all AWS services if the monthly bill exceeds X would make me a lot more comfortable when experimenting with AWS.
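The closest thing available today only alerts, it doesn't freeze anything: a CloudWatch billing alarm. A rough sketch, assuming boto3, billing alerts already enabled on the account, and a made-up SNS topic ARN:

    import boto3

    # Billing metrics only live in us-east-1.
    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    cloudwatch.put_metric_alarm(
        AlarmName="monthly-bill-over-100-usd",
        Namespace="AWS/Billing",
        MetricName="EstimatedCharges",
        Dimensions=[{"Name": "Currency", "Value": "USD"}],
        Statistic="Maximum",
        Period=21600,               # the billing metric only updates a few times a day
        EvaluationPeriods=1,
        Threshold=100.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # hypothetical topic
    )

It still won't stop anything from running, which is exactly the gap.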
Per-second billing is somewhat of a gimmick just so Amazon can say they are more granular than Google Compute. The difference between seconds and a minute of billing is fractions of a cent. Rounding errors.

The exception is that Google Compute has a 10-minute minimum, so if you are creating machines and destroying them quickly, per-second billing will be noticeable.
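To put rough numbers on that (a hypothetical $0.10/hr on-demand rate, not any real instance's price):

    # Cost of a 61-second workload at a hypothetical $0.10/hr rate.
    RATE_PER_HOUR = 0.10
    runtime_s = 61

    per_second = runtime_s / 3600 * RATE_PER_HOUR                                # ~$0.0017
    per_minute = -(-runtime_s // 60) / 60 * RATE_PER_HOUR                        # rounds up to 2 min, ~$0.0033
    per_minute_10min_floor = max(-(-runtime_s // 60), 10) / 60 * RATE_PER_HOUR   # ~$0.0167

The second-vs-minute gap really is a rounding error; the 10-minute floor is where the money is for short-lived machines.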
This is one of the better things to happen in EC2 in years for me. We have a bunch of scripts so a spot instance can track when it came online and shut itself down effectively. It took far too much fiddling around to work around AWS autoscaling and get efficient billing with the per-hour model.

In the end we came up with a model where we protect the instances from scale-in, and then at the end of each hour a cron tries to shut all the worker services down; if it can't, it spins them all up again to run for another hour, and if it can, it shuts the machine down (the instance's shutdown behaviour is set to terminate). The whole thing feels like a big kludge, and for our workload we still have a load of wasted resources: we end up balancing not bringing up machines too fast during a spike against the long tail of wasted resource afterwards. This change by EC2 is going to make it all much easier.
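For the curious, the end-of-hour cron boils down to something like this (a simplified sketch; the drain check and service names here are made up, and the real scripts also handle the scale-in protection):

    #!/usr/bin/env python3
    # End-of-hour check: try to drain the workers; if they all stop cleanly,
    # shut the machine down (shutdown behaviour terminates the instance),
    # otherwise bring them back up for another billed hour.
    import subprocess

    WORKER_SERVICES = ["worker-a", "worker-b"]  # hypothetical service names

    def drained(service):
        # Stand-in for "no in-flight jobs": did the service stop cleanly?
        return subprocess.run(["systemctl", "stop", service]).returncode == 0

    if all(drained(s) for s in WORKER_SERVICES):
        subprocess.run(["shutdown", "-h", "now"])
    else:
        for s in WORKER_SERVICES:
            subprocess.run(["systemctl", "start", s])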
Back to the future: this was how computing worked back in the punch card days. Minicomputers and personal computers were supposed to liberate you from this tyranny: computing so cheap that you could have a *whole* computer to yourself for a while!
Likely due to GCP competition. I believe GCP was always per-second? [Edit: Misremembered that, they were always per-minute. Lots of good information below directly from the related parties.]

Azure looks to be per-hour [Edit: Wrong again, they are per-minute as well. Oddly enough, I did check their pricing page before, but missed the per-minute paragraph and only saw the hourly pricing], but I'm seeing something about container instances possibly being per-second.
This should enable some entirely new use cases, especially around CI and automation in general.

Per-second billing greatly reduces the overhead of bringing up an instance for a short task and then killing it immediately, so I can just do that. There's no need to build a buffer layer that adds workers to a pool and leaves them there, just so you don't end up paying for 30 hours of instance time to run 30 two-minute tasks within an hour.
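As a back-of-the-envelope example (hypothetical $0.10/hr rate):

    RATE_PER_HOUR = 0.10  # hypothetical on-demand price
    hourly     = 30 * 1 * RATE_PER_HOUR            # 30 fresh instances, 1 billed hour each = $3.00
    per_second = 30 * 120 / 3600 * RATE_PER_HOUR   # 30 x 2-minute tasks = 1 hour of compute = $0.10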
I once considered writing an EC2 autoscaler that knew the exact launch timestamps of the instances so that it could avoid shutting down VMs that still had 59 minutes of "free" time left because they'd been up across another hour-long threshold. That sort of nonsense logic shouldn't be useful, but Amazon was giving a huge economic incentive for it.

This is certainly a long time coming.
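The sort of logic it would have taken, roughly (a sketch, never actually built; launch times would come from DescribeInstances):

    from datetime import datetime, timezone

    def paid_seconds_remaining(launch_time):
        """Seconds left in the current, already-paid hour under hourly billing."""
        elapsed = (datetime.now(timezone.utc) - launch_time).total_seconds()
        return 3600 - (int(elapsed) % 3600)

    def pick_victim(instances):
        """instances: list of (instance_id, launch_time) tuples. Terminate the one
        closest to its hour boundary, never the one with 59 minutes still paid for."""
        return min(instances, key=lambda i: paid_seconds_remaining(i[1]))[0]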
This is great news and a long time coming.

I really hope Amazon build something like Azure Container Instances [1], as per second billing would make this sort of thing feasible.

[1] https://azure.microsoft.com/en-us/services/container-instances/
Ah, finally. They've ruined my idea for an optimal EMR job runner. Under the old system, if you have a linearly scalable Hadoop job, it's cheaper to, say, use 60 instances to do some work in an hour vs 50 instances to do the work in 70 minutes, assuming you're getting rid of the cluster once you're done. No more!
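The old arithmetic, with a hypothetical per-instance-hour price and the cluster torn down the moment the job finishes:

    PRICE = 0.10  # hypothetical $/instance-hour

    # Hourly billing: 70 minutes rounds up to 2 billed hours per instance.
    sixty_for_an_hour    = 60 * 1 * PRICE   # $6.00
    fifty_for_70_minutes = 50 * 2 * PRICE   # $10.00 -- worse, despite less total compute

    # Per-second billing: pay only for the instance-seconds actually used.
    fifty_for_70_minutes_now = 50 * (70 / 60) * PRICE  # ~$5.83 -- now the cheaper option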
I think the per-second billing is off the point. How does it help if the EC2 instance takes tens of seconds to launch and tens of seconds to bootstrap?

To make the most of per-second billing, the compute unit should be deployable within seconds, e.g. an immutable, prebaked container. You launch containers on demand and pay by the second.
Really welcome, although per-millisecond would be better.

It's now possible to boot operating systems in milliseconds and have them carry out a task (for example, respond to a web request) and disappear again. Trouble is, the clouds (AWS, Google, Azure, Digital Ocean) do not have the ability to support such fast OS boot times. Per-second billing is a step in the right direction, but it needs to go further to millisecond billing, and the clouds need to support millisecond boot times.
Serverless advocates/engineers are probably the only people celebrating this; everyone else keeps waiting for self-renewing instance reservations... last time I forgot about them, it was too late.