Is it time to start seriously considering downtime as something that is not to be avoided, but rather embraced in small, controllable amounts? Hear me out. If you have a server and keep adding software and features to it without rebooting it, how will you know that on the next reboot all the software and features will start up and be functional again? Here, we have a case where a piece of networking equipment had to be replaced. Because such a replacement was so rare an occurrence, nobody at the datacenter likely understood what its effects would be.

On the other hand, if equipment is routinely powered down and/or unplugged, the technicians working on it will have a better idea of what goes where, which gizmo affects what doodad, etc. It just generally seems like a good idea to do this under controlled conditions rather than unexpected and uncontrollable ones.

I personally would rather have 95% uptime with close to 5% *scheduled* downtime than 99% uptime with 1% unscheduled downtime. The former scenario lets me set clear expectations with customers; the latter doesn't. I can't tell them what the hell is going on or when it will get better, and it often takes days to find out why it happened in the first place, if the cause is ever discovered at all.