This guy must have an incredibly large number of files if two XL instances running deletes for a month only cut his S3 usage by 10%. I routinely see 700 DELETEs per second from a small EC2 instance; if performance scales linearly with CPU speed, he should be able to do 700 * 8 * 2 = 11,200 DELETEs per second, or roughly 29 billion DELETEs per month. If that's only 10% of his objects, he must have about 290 billion objects stored.

Except that, oops, S3 only passed 100 billion objects a couple of months ago, and at its current rate of growth is probably still under 150 billion -- never mind 200 or 290 billion.

My guess is that the "FAIL" here lies in whatever process he's using for deleting files -- not in S3 itself.
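
For anyone who wants to check the arithmetic, here's the back-of-envelope calculation. The CPU ratio and the linear-scaling assumption are mine, not measured:

    # Back-of-envelope: how big would the bucket have to be for a
    # month of deletes from two XL instances to be only 10% of it?
    small_deletes_per_sec = 700          # observed from a small EC2 instance
    cpu_ratio = 8                        # assumed XL-to-small CPU ratio
    instances = 2
    seconds_per_month = 60 * 60 * 24 * 30

    deletes_per_sec = small_deletes_per_sec * cpu_ratio * instances  # 11,200
    deletes_per_month = deletes_per_sec * seconds_per_month          # ~29 billion
    implied_total_objects = deletes_per_month / 0.10                 # ~290 billion

    print(deletes_per_sec, deletes_per_month, implied_total_objects)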