It doesn't have to be something complicated, but rather something sweet, something that leaves a taste so sweet it puts honey to shame.<p>Starting: some 4 years back, I worked at a place which did not let engineers use the internet. So, while constrained to MSDN documentation, I was not even able to communicate with fellow engineers across the table because, well, it looks odd to jump around the office. So I developed a simple chat utility in VB for the three of us. That was sweet.<p>What about you?
Once, some time ago, I was tasked with travelling to a Boeing lab in Seattle. Our shiny new piece of aircraft equipment was failing to start a dataload (a mechanism by which the software loaded on aircraft equipment can be updated without having to take the kit off the aircraft).<p>We had spent some time previously on this part of the software, and were fairly certain that we were following the protocol correctly.<p>This protocol (as it had been given to us) required quite strict response timing from our equipment: just a few milliseconds (ok, so this was some time ago).<p>We had to use an expensive (five-figure $) protocol analyser to show that, yes, our equipment was following the protocol correctly.<p>It turned out that the aircraft-wide dataload control system, kit provided by a large corporation, had failed to meet their side of the protocol. Our equipment's responses were being ignored, so our system just assumed the process had timed out.<p>Score 1 (one) for byte-level checking of message data (there's a sketch of that sort of check after the exchange below).<p>There followed an amusing telephone conference call with their technical team, of the form:
Q: so, are you meeting the timing requirements for this protocol?
A: well, I thought we were, but, err, um, we'll have to go and check..<p><sound of silence as they mute their side of the call>
...<p>A: um, yeah, um, we'll have to call you back..
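To make the "byte-level checking" concrete: what the protocol analyser essentially did was timestamp every frame on the wire and measure request-to-response gaps against the deadline, which is what lets you prove whose side of the link is at fault. A minimal Python sketch of that idea follows; the frame layout, direction labels, and the 3 ms deadline are all made-up illustrations, not the actual dataload protocol.

    RESPONSE_DEADLINE_S = 0.003  # assumed budget: "just a few milliseconds"

    def check_exchange(frames):
        """frames: list of (timestamp_s, direction, payload_bytes).
        Flags every request/response pair that misses the deadline,
        so blame lands on the right side of the link."""
        findings = []
        pending = None
        for ts, direction, payload in frames:
            if direction == "controller->unit":
                pending = (ts, payload)
            elif direction == "unit->controller" and pending:
                req_ts, req_payload = pending
                latency_ms = (ts - req_ts) * 1000
                if latency_ms > RESPONSE_DEADLINE_S * 1000:
                    findings.append(f"late reply ({latency_ms:.1f} ms) "
                                    f"to request {req_payload.hex()}")
                pending = None
        return findings

    # Toy capture: the unit answers at 1.2 ms, well inside the deadline,
    # so any "timeout" must come from the controller ignoring the reply.
    capture = [(0.0000, "controller->unit", bytes([0x02, 0x10])),
               (0.0012, "unit->controller", bytes([0x06]))]
    print(check_exchange(capture) or "unit met its timing")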
We have an image analysis pipeline at work, and one part of it involves peak detection. The peak detection algorithm we were using worked for normal data. One day a scientist came to me with data that was abnormal in that it had peaks with a high dynamic range, and the existing algorithm failed on it.<p>I thought about it a bit and was able to come up with a non-parametric test of randomness that detected whether a sequence of data was probably random or structured, and it was able to detect not only the usual large peaks but also those that were very small.<p>The reason I was really pleased with the result is that it was a case of applying a little thought to the problem, getting a nice probabilistic theory in place, and the resulting code was so straightforward and performant that it looked like a simplistic hack!
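The comment doesn't say which test was used, but one classic non-parametric randomness test with exactly this property is the Wald-Wolfowitz runs test: it only looks at whether values sit above or below the window median, so a peak registers as structure regardless of its amplitude. A minimal sketch (window width and z threshold are arbitrary choices here, not anything from the original pipeline):

    import math
    import random

    def runs_test_z(x):
        """Wald-Wolfowitz runs test about the median. Returns a z-score;
        a large |z| means the window is unlikely to be random noise. Only
        above/below-median signs are used, so tiny and huge peaks look
        the same to the test."""
        med = sorted(x)[len(x) // 2]
        signs = [v > med for v in x if v != med]  # drop ties at the median
        n1 = sum(signs)
        n2 = len(signs) - n1
        if n1 == 0 or n2 == 0:
            return 0.0
        runs = 1 + sum(a != b for a, b in zip(signs, signs[1:]))
        n = n1 + n2
        mean = 2.0 * n1 * n2 / n + 1.0
        var = 2.0 * n1 * n2 * (2.0 * n1 * n2 - n) / (n * n * (n - 1))
        return (runs - mean) / math.sqrt(var) if var > 0 else 0.0

    def structured_windows(signal, width=25, z_crit=2.5):
        """Flag window start indices where randomness is rejected,
        i.e. where a peak (of any size) probably lives."""
        return [i for i in range(len(signal) - width + 1)
                if abs(runs_test_z(signal[i:i + width])) > z_crit]

    # Toy data: low-level noise plus one huge peak and one tiny one
    # (200x dynamic range); both create long monotone runs of
    # above/below-median values, so both regions get flagged.
    random.seed(0)
    y = [random.gauss(0, 0.1) for _ in range(200)]
    for centre, height in ((50, 100.0), (150, 0.5)):
        for k in range(-10, 11):
            y[centre + k] += height * math.exp(-(k / 4.0) ** 2)
    print(structured_windows(y))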
The only one that comes to mind was a performance bug during a data migration.<p>I started putting logging in the code, recompiled and deployed, and with what was essentially a binary search nailed it down to a chunk of code that was taking too much time.<p>It happened to be an update-or-insert statement, and we had removed all indexes from all tables for the migration. Once the table grew sufficiently large, it was taking ages to search through it and update. Adding the indexes back sped up the process about 5x, so instead of 5 weeks we were looking at < 1 week for the migration.
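A quick way to see the failure mode: without an index, every update-or-insert has to scan the whole table, so total migration time grows roughly quadratically with row count, while the indexed version grows close to linearly. A hedged sqlite3 sketch; the table and column names are invented, and the original system isn't described:

    import sqlite3
    import time

    def migrate(rows, indexed):
        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE target (ext_id INTEGER, payload TEXT)")
        if indexed:
            db.execute("CREATE INDEX idx_target_ext ON target (ext_id)")
        t0 = time.perf_counter()
        for i in range(rows):
            # the migration's update-or-insert: update the row if it
            # exists, insert it otherwise
            cur = db.execute("UPDATE target SET payload = ? WHERE ext_id = ?",
                             ("v", i))
            if cur.rowcount == 0:
                db.execute("INSERT INTO target VALUES (?, ?)", (i, "v"))
        db.commit()
        return time.perf_counter() - t0

    n = 10_000
    print(f"no index:   {migrate(n, indexed=False):.2f}s")  # full scan per row
    print(f"with index: {migrate(n, indexed=True):.2f}s")   # index lookup per row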
I built a toolkit to automate the instrumentation and collection of line-coverage data from unit/integration tests. It eliminated an entire team's job (the team I had been managing). They were reassigned, and my client dropped me for someone at half my rate. It was a great feeling to create automation that directly saved money, but a little bittersweet when the robots took people's jobs, including mine :)
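The toolkit itself isn't described, but the core mechanism (instrument the code, run the tests, record which lines executed) is small enough to sketch. In Python the standard sys.settrace hook is one way to do it; everything below is illustrative, not the toolkit from the story:

    import sys
    from collections import defaultdict

    covered = defaultdict(set)  # filename -> executed line numbers

    def tracer(frame, event, arg):
        if event == "line":
            covered[frame.f_code.co_filename].add(frame.f_lineno)
        return tracer  # keep tracing nested calls

    def run_with_coverage(fn, *args, **kwargs):
        sys.settrace(tracer)
        try:
            return fn(*args, **kwargs)
        finally:
            sys.settrace(None)

    def classify(n):  # toy "code under test"
        if n < 0:
            return "negative"
        return "non-negative"

    run_with_coverage(classify, 5)
    for path, lines in covered.items():
        # the 'return "negative"' line never shows up: a coverage gap
        print(path, sorted(lines))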