Probably around 1-2 man-hours directly. The first bit was myself and my coworker running around trying to figure out what this was and how big of a deal it was after seeing the post on HN. I manually patched a few non-essential systems to make sure the patches took without a huge dependency-tree update, then my coworker rolled the patch out automatically to all the systems.<p>We spent some follow-up time checking Amazon's site to make sure our ELBs were updated (because they weren't by the time we patched) and sending out the post-mortem to the team. Our certs were already going to expire, so we renewed them and rolled them out again via automation.
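(Not part of the original write-up, but for anyone verifying hosts the same way: a quick sanity check is to look at the OpenSSL version the runtime links against, since 1.0.1 through 1.0.1f are the Heartbleed-affected releases. A minimal Python sketch follows; note that distro backports often fix the bug without bumping the version string, so the package changelog is the real authority.)

    import ssl

    # OpenSSL releases 1.0.1 through 1.0.1f shipped with Heartbleed (CVE-2014-0160);
    # 1.0.1g was the first fixed upstream release.
    VULNERABLE = {"1.0.1", "1.0.1a", "1.0.1b", "1.0.1c", "1.0.1d", "1.0.1e", "1.0.1f"}

    # ssl.OPENSSL_VERSION looks like "OpenSSL 1.0.1g 7 Apr 2014"; grab the version token.
    version = ssl.OPENSSL_VERSION.split()[1]

    if version in VULNERABLE:
        print(f"OpenSSL {version} is in the affected range -- patch and restart TLS services.")
    else:
        print(f"OpenSSL {version} is outside the affected range (still verify the package changelog).")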
Applying the security update took minutes, even on a large number of hosts, thanks to automation.<p>The harder part was working out how to treat things from there: did we need to assume we'd been hit in the past and regenerate certificates? That took a good couple of hours of debate with different people.<p>Call it half a day to be generous.
We run + manage our own servers: ~4 dedicated servers and a large number of VPSes. We lost about 7 man-days patching, cleaning up, updating certs, handling PR, etc.
About a half-day of work. Fixing our systems went pretty quickly, but we had to track down a lot of clients' accounts and systems to rekey their certs.
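(As a rough illustration of the rekeying step, not the commenter's actual tooling: "rekey" here means generating a brand-new private key rather than just reissuing the cert, since the old key has to be assumed leaked. A minimal sketch using the Python cryptography package; the hostname and file names are placeholders.)

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    # Generate a fresh RSA key; the old key must be treated as compromised.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # Build a CSR for the new key to send to the CA ("example.com" is a placeholder).
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.com")]))
        .sign(key, hashes.SHA256())
    )

    # Write out the new key (unencrypted here for brevity) and the CSR in PEM form.
    with open("example.com.key", "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.TraditionalOpenSSL,
            serialization.NoEncryption(),
        ))
    with open("example.com.csr", "wb") as f:
        f.write(csr.public_bytes(serialization.Encoding.PEM))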