Downsampling an image before blurring it is a pretty standard technique for getting more perceived bang (blur) for your buck. As long as you don't downsample too far, the blur adequately covers any artifacts the downsampling (and re-scaling) would otherwise introduce. The result is that a radius-N blur can be stretched a lot farther, because the downsampling and re-scaling add a blur of their own.

This performance optimization doesn't really work for variable-width blurs (like depth of field) on 3D scenes, though, because some parts of the image need to stay crisp while others are blurred. Downsampling the entire image would lose resolution in the crisp parts.

3D depth-of-field blurs are actually pretty interesting in their own right. They're variable width, which is just another way of saying that each pixel of the final image may be more or less blurry than its neighbors. Implementing this kind of variable blur is a tough task, and it's typically done by scaling a disk of random sampling coordinates up or down for each pixel. When the disk of coordinates is large, the samples land farther from the pixel it's centered on, so the resulting pixel gets a bigger blur. When the sampling disk is small, the surrounding samples stay close to the center pixel, creating a smaller blur. The size of this sampling disk is controlled by the pixel's depth relative to the near and far focus planes of the virtual camera.

I've glossed over a lot (like the artifacts that can result from 3D DoF), but Nvidia's GPU Gems has a great article on the subject: http://http.developer.nvidia.com/GPUGems3/gpugems3_ch28.html
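The downsample-then-blur trick from the first paragraph can be sketched roughly like this (a toy CPU version in NumPy; the function names and the naive box blur are illustrative, not any particular engine's implementation):

```python
import numpy as np

def downsample2(img):
    """Half resolution via 2x2 box averaging (crop odd edges)."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(img):
    """Nearest-neighbour 2x rescale back to full resolution."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def box_blur(img, radius):
    """Naive 2D box blur with edge clamping; O(r^2) per pixel, fine for a demo."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            out[y, x] = img[y0:y1, x0:x1].mean()
    return out

def cheap_wide_blur(img, radius):
    """Blur at half resolution, then rescale up: a radius-r blur there
    covers roughly 2r pixels at full resolution, for about 1/4 the work,
    and the blur hides the rescaling artifacts (up to a point)."""
    return upsample2(box_blur(downsample2(img), radius))
```

The work saved scales with the square of the downsample factor, which is why the technique is so common for large-radius UI and bloom blurs.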
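The scaled-sampling-disk idea for depth of field can be sketched as a minimal CPU version (NumPy; the shared unit-disk pattern and the `coc_radius` map standing in for per-pixel circle-of-confusion size are assumptions of this sketch, not the GPU Gems code):

```python
import numpy as np

def variable_blur(image, coc_radius, num_samples=16, seed=0):
    """Blur each pixel by averaging random samples on a disk whose radius
    varies per pixel. `image` is (H, W); `coc_radius` is (H, W), giving each
    pixel's sampling-disk radius in pixels (derived, in a real renderer, from
    that pixel's depth relative to the focus planes)."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    # One shared unit-disk sample pattern, scaled per pixel below.
    angles = rng.uniform(0.0, 2.0 * np.pi, num_samples)
    radii = np.sqrt(rng.uniform(0.0, 1.0, num_samples))  # uniform over disk area
    disk = np.stack([radii * np.cos(angles), radii * np.sin(angles)], axis=1)

    out = np.zeros_like(image, dtype=float)
    for y in range(h):
        for x in range(w):
            r = coc_radius[y, x]  # big r -> wide disk -> blurrier pixel
            acc = 0.0
            for dy, dx in disk * r:
                sy = min(max(int(round(y + dy)), 0), h - 1)  # clamp to edges
                sx = min(max(int(round(x + dx)), 0), w - 1)
                acc += image[sy, sx]
            out[y, x] = acc / num_samples
    return out
```

A pixel with `coc_radius` of 0 samples only itself and stays perfectly crisp, which is exactly why this approach works where the downsample-the-whole-image trick doesn't. A real GPU version does the same loop in a fragment shader with `texture` fetches and a precomputed Poisson-disk pattern.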