Given the level of impact that this incident caused, I am surprised that the remediations did not go deeper. They ensured that the same problem could not happen again in the same way, but that's all. So an equivalent glitch somewhere down the road could lead to a similar result (or worse; some customers may not have the same "robust and resilient architectural approach to managing risk of outage or failure").<p>Examples of things they could have done to systematically guard against inappropriate service termination / deletion in the future:<p>1. When terminating a service, temporarily place it in a state where the service is unavailable but all data is retained and can be restored at the push of a button (sketched below); only discard the data after a few days. This gives the customer a window in which to report the problem.<p>2. Audit the deletion workflows for all services (they only mention having reviewed GCVE). Ensure that customers are notified in advance whenever any service is terminated, even if "the deletion was triggered as a result of a parameter being left blank by Google operators using the internal tool".<p>3. Add manual review for <i>any</i> termination of a service that is in active use, above a certain size.<p>Absent these broader measures, I don't find this postmortem to be in the slightest bit reassuring. Given the are-you-f*ing-kidding-me nature of the incident, I would have expected any sensible provider who takes the slightest pride in their service, or is even merely interested in protecting their reputation, to visibly go over the top in ensuring nothing like this could happen again. Instead, they've done the bare minimum. That says something bad about the culture at Google Cloud.
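<p>For what it's worth, the grace-period idea in point 1 is just the standard soft-delete pattern, and it's cheap to build. A minimal sketch of the shape of it in Python; the state names, the restore/purge functions, and the 7-day window are my own assumptions, not anything described in the postmortem:

    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone
    from enum import Enum
    from typing import List, Optional


    class State(Enum):
        ACTIVE = "active"
        PENDING_DELETION = "pending_deletion"   # service offline, data retained
        PURGED = "purged"                       # data actually destroyed


    GRACE_PERIOD = timedelta(days=7)  # assumed retention window, not Google's


    @dataclass
    class Service:
        name: str
        state: State = State.ACTIVE
        purge_after: Optional[datetime] = None


    def terminate(svc: Service) -> None:
        """Take the service offline but retain its data for the grace period."""
        svc.state = State.PENDING_DELETION
        svc.purge_after = datetime.now(timezone.utc) + GRACE_PERIOD
        # notify_customer(svc)  # hypothetical hook: notify even for
        #                       # internally-triggered deletions


    def restore(svc: Service) -> None:
        """The customer-facing 'push of a button': undo a pending termination."""
        if svc.state is not State.PENDING_DELETION:
            raise ValueError(f"{svc.name} is not pending deletion")
        svc.state = State.ACTIVE
        svc.purge_after = None


    def purge_expired(services: List[Service]) -> None:
        """Background job: the only place data is ever actually destroyed."""
        now = datetime.now(timezone.utc)
        for svc in services:
            if (svc.state is State.PENDING_DELETION
                    and svc.purge_after is not None
                    and now >= svc.purge_after):
                svc.state = State.PURGED
                svc.purge_after = None

<p>The important property is that the only code path that actually destroys data is a background purge job running after the grace period has elapsed; nothing an operator (or a blank parameter in an internal tool) can trigger is immediately destructive.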