If you're clever, you can do this kind of postprocessing without a big CPU hit on mobile phones. Phones compress video in hardware, and as part of that process the hardware looks for blocks of video that are roughly the same across frames. When it finds one, it attaches a pointer (a motion vector) indicating where the block should go in the next frame.

Taken in aggregate, the pointers in the compressed data stream effectively show you which way the image "shook" relative to the previous frame, and how far, sparing you the CPU cycles of computing this yourself. You can even detect rotation. All you need to do then is compensate for the shake by rewriting the compressed stream, "panning" (and perhaps rotating) in the opposite direction of the shake. In order to have room to pan, you need to emit a smaller rectangle than the original video.

This isn't sufficient for advanced stabilization, but it's a quick first pass (rough sketches of both steps are at the end of this comment).

I took a stab at writing this for the iPhone in 2010, but by then the writing was clearly on the wall: Apple was soon going to offer this functionality in hardware (and they do, on the iPhone 4S and 5), and they would only do a better job with every phone refresh. The only way one can hope to compete is to perform global optimizations across the entire video clip, which the hardware encoder can't do (e.g. via dynamic programming), or else to apply fancier transforms that are so CPU-intensive they kill the mobile experience. Good job on the part of the developers; the video looks great. As iPhone GPUs get more powerful, stabilization algorithms will only get better.

One business angle here is to give the app away for free and charge a dollar per video to deshake clips as a web service in the cloud.
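To make the first pass concrete, here's a rough sketch of the estimation step. The Python below assumes you've already parsed the stream into per-block (block_x, block_y, dx, dy) rows -- a made-up layout; how you actually extract motion vectors is codec-specific -- and recovers a global translation plus a small-angle rotation estimate:

    import numpy as np

    def estimate_global_motion(motion_vectors):
        # motion_vectors: (N, 4) array of (block_x, block_y, dx, dy) rows,
        # one per macroblock. (Hypothetical layout; parsing is codec-specific.)
        mv = np.asarray(motion_vectors, dtype=float)
        # Median rather than mean: robust against blocks that track a moving
        # subject instead of the camera shake.
        dx = np.median(mv[:, 2])
        dy = np.median(mv[:, 3])
        # Rotation: subtract the translation, then each residual vector points
        # roughly tangentially around the frame center. For a small rotation
        # theta, a block at offset (rx, ry) moves by about (-theta*ry, theta*rx),
        # so (rx*resy - ry*resx) / (rx^2 + ry^2) estimates theta per block.
        cx, cy = mv[:, 0].mean(), mv[:, 1].mean()
        rx, ry = mv[:, 0] - cx, mv[:, 1] - cy
        resx, resy = mv[:, 2] - dx, mv[:, 3] - dy
        r2 = rx * rx + ry * ry
        keep = r2 > 1e-6  # skip blocks sitting at the exact center
        if keep.any():
            theta = float(np.mean((rx[keep] * resy[keep] - ry[keep] * resx[keep]) / r2[keep]))
        else:
            theta = 0.0
        return float(dx), float(dy), theta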
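And a sketch of the compensation step: pan a smaller crop rectangle against the measured shake, clamping once the headroom runs out. The sign convention here is an assumption -- it depends on which way the codec's motion vectors point:

    def crop_rects(frame_size, shakes, margin=0.1):
        # frame_size: (width, height); shakes: per-frame (dx, dy) estimates.
        # margin: fraction of each edge reserved as panning headroom, so the
        # emitted rectangle is correspondingly smaller than the source video.
        w, h = frame_size
        mx, my = int(w * margin), int(h * margin)
        cw, ch = w - 2 * mx, h - 2 * my      # size of the output rectangle
        x, y = float(mx), float(my)          # crop origin, starts centered
        rects = []
        for dx, dy in shakes:
            # Pan the crop window against the shake; clamp at the margins.
            x = min(max(x - dx, 0.0), float(w - cw))
            y = min(max(y - dy, 0.0), float(h - ch))
            rects.append((int(round(x)), int(round(y)), cw, ch))
        return rects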
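As for the "global optimization across the entire clip" idea: the simplest offline version is to integrate the per-frame shake into a camera path and low-pass it over the whole clip. A plain moving average stands in below for the fancier dynamic-programming path optimizers; the point is only that this pass sees every frame at once, which a streaming hardware encoder can't:

    import numpy as np

    def path_corrections(shakes, window=31):
        # shakes: (N, 2) per-frame (dx, dy). Integrate into a camera path,
        # smooth it over the whole clip, and return how far to shift each
        # frame so it sits on the smooth path instead of the jittery one.
        path = np.cumsum(np.asarray(shakes, dtype=float), axis=0)
        kernel = np.ones(window) / window
        smooth = np.column_stack(
            [np.convolve(path[:, i], kernel, mode="same") for i in range(path.shape[1])]
        )
        return smooth - path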