One time we observed a dramatic drop-off in performance from one of our services starting on a particular day that week. I looked at recent releases and saw the drop coincided perfectly with one.

I asked the engineer in question to investigate, but after looking he said, "It's nothing I could be doing."

So I sat with him and used git-bisect to prove to him it was his commit: he had added trace logging inside a couple of tight loops in the hottest parts of the code base. I smiled.

"But it's trace. That's disabled in production. It can't be that," he said. But we had already proven it was that commit, and the only thing it changed was the additional logging.

Long story short, the logging library was filtering calls by level just before actually writing, rather than as close to the call site as possible. A design bug, for sure.

I had him swap out the library everywhere it was used.

Moral: logging is not free.
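To make the design point concrete, here's a minimal Java sketch (hypothetical class names, not the actual library from the story): the first logger filters by level only in its write path, so every trace call in a hot loop still pays full price; the second exposes a cheap level guard and a lazy message, so disabled levels cost next to nothing at the call site.

```java
import java.util.function.Supplier;

enum Level { TRACE, DEBUG, INFO, WARN, ERROR }

// The design bug from the story: the level check happens just before the
// write, so a trace() call in a hot loop still pays for building the
// message and walking the whole logging pipeline before being dropped.
class LateFilteringLogger {
    private final Level threshold = Level.INFO; // trace disabled in production

    void trace(String message) {
        write(Level.TRACE, message); // message was already built by the caller
    }

    private void write(Level level, String message) {
        if (level.compareTo(threshold) < 0) return; // filtered here: far too late
        System.out.println(level + ": " + message);
    }
}

// Filtering as close to the call site as possible: a cheap guard plus a
// lazy Supplier means a disabled level is a single comparison and a return.
class EarlyFilteringLogger {
    private final Level threshold = Level.INFO;

    boolean isTraceEnabled() {
        return Level.TRACE.compareTo(threshold) >= 0;
    }

    void trace(Supplier<String> message) {
        if (!isTraceEnabled()) return; // bail before the message is ever built
        System.out.println(Level.TRACE + ": " + message.get());
    }
}
```

With the buggy version, something like `logger.trace("state: " + expensiveDump())` (a hypothetical call) evaluates `expensiveDump()` and the concatenation on every loop iteration even though nothing is ever written; with the second version the Supplier is never invoked when trace is off.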
I don't understand why more people don't use Solaris Zones; they seem to me to be by far the superior solution, and with the work done by Joyent you now have modern LX-branded zones as well. Is the lack of adoption mainly due to the fact that it's Solaris and not Linux?

(Solaris lives on in Illumos et al.)
I know for a fact that Docker on OS X is pretty darn slow due to the way it handles the filesystem.

But using Dinghy sped everything up greatly because it uses NFS. Just in case anyone wanted to know.
Nice debugging story, but the conclusion was totally wrong! The author even knows this. If they had been logging at 3-4x the usual rate, they would have seen the same problem on bare metal too. Nothing to do with Docker or competing containers or whatever.