This is not a critique; CloudFlare is clearly a solid, well-engineered system given its scale. Just look at their other post-mortems.

But it's kind of interesting: you can have all the redundant systems and smart software in the world, and some dude can still accidentally pull cables. Oh, humans!

Would love to see what other mitigations they came up with beyond the ones listed (apart from, presumably, putting 20 BRIGHT RED labels next to the patch panels saying DO NOT DISCONNECT, EVER EVER EVER!).

Perhaps one mitigation could be a better way to literally identify who's on site and call them up within seconds to ask what they just did?
>"Documentation: After the cables were removed from the patch panel, we lost valuable time identifying for data center technicians the critical cables providing external connectivity to be restored. We should take steps to ensure the various cables and panels are labeled for quick identification by anyone working to remediate the problem. This should expedite our ability to access the needed documentation."<p>So they failed to label their cables? I'm sorry but this is "datacenter 101" stuff. How are none of the cables plugged into your patch panels labeled? Every colo has a label gun you can borrow! Also remote hands will gladly send you a pic of a rack or cabinet to verify what they're looking at.
It’s strange to me that their remediation items did not include distributing these systems redundantly across multiple data centers, perhaps backed by a globally distributed database.

> we knew that the failback from disaster recovery would be very complex

Disaster recovery failover to a second data center (and failback) should not force a choice about whether to fail over at all. They should be able to fail over immediately, and the system should self-heal once the original data center comes back online, roughly the shape sketched below.
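To be concrete, here is a minimal sketch of the behaviour I mean, assuming a health-check endpoint in each data center and some hook for actually switching traffic (DNS, anycast withdrawal, whatever). The URLs, thresholds, and point_traffic_at hook are made up for illustration; a real controller would also need quorum and flap protection.

    # Sketch of "fail over automatically, fail back automatically" control loop.
    # All endpoints, thresholds, and the traffic-switch hook are hypothetical.
    import time
    import urllib.request

    PRIMARY = "https://control-plane.dc1.example.com/healthz"    # hypothetical
    SECONDARY = "https://control-plane.dc2.example.com/healthz"  # hypothetical
    FAILBACK_AFTER_HEALTHY_SECONDS = 300  # primary must stay up this long first

    def healthy(url):
        """Treat any HTTP 200 within 2 seconds as healthy."""
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                return resp.status == 200
        except OSError:
            return False

    def point_traffic_at(target):
        """Placeholder for the real traffic switch (DNS update, anycast, etc.)."""
        print("routing control-plane traffic to", target)

    def run():
        active = "primary"
        primary_healthy_since = None
        while True:
            if healthy(PRIMARY):
                if primary_healthy_since is None:
                    primary_healthy_since = time.monotonic()
                stable_for = time.monotonic() - primary_healthy_since
                # Self-heal: fail back only after the primary has been stable a while.
                if active == "secondary" and stable_for >= FAILBACK_AFTER_HEALTHY_SECONDS:
                    active = "primary"
                    point_traffic_at(PRIMARY)
            else:
                primary_healthy_since = None
                if active == "primary" and healthy(SECONDARY):
                    active = "secondary"
                    point_traffic_at(SECONDARY)
            time.sleep(10)

    if __name__ == "__main__":
        run()

The point is that neither direction requires a human to make a judgment call under pressure: the controller fails over as soon as the primary looks dead, and fails back on its own once the primary has been demonstrably stable.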
I'll just leave this here ... https://github.com/netbox-community/netbox
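And if you do go the NetBox route, its REST API makes the "which cables are undocumented" question easy to automate. A minimal sketch, assuming a NetBox instance at a hypothetical NETBOX_URL with a read-only API token, that walks the DCIM cables and flags the ones with no label (exactly the gap the post-mortem calls out):

    # Flag cables in NetBox that have no label.
    # NETBOX_URL and TOKEN are assumptions about your own deployment.
    import requests

    NETBOX_URL = "https://netbox.example.com"  # assumption: your NetBox instance
    TOKEN = "0123456789abcdef"                 # assumption: a read-only API token

    def unlabeled_cables():
        url = NETBOX_URL + "/api/dcim/cables/"
        headers = {"Authorization": "Token " + TOKEN}
        while url:
            page = requests.get(url, headers=headers, timeout=10).json()
            for cable in page["results"]:
                if not cable.get("label"):
                    yield cable["id"]
            url = page["next"]  # NetBox paginates; follow until exhausted

    if __name__ == "__main__":
        for cable_id in unlabeled_cables():
            print("cable", cable_id, "has no label")

Run that from a cron job and the "we lost valuable time identifying the critical cables" problem at least becomes visible long before someone is standing in front of the patch panel.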