I saw their talk at SIGGRAPH in 2012. This is really interesting work.

The target image processing algorithms are low level. Think "How can I write *the fastest* 3x3 box blur for Sandy Bridge and later architectures", not "How can I speed up my face detector".

Examples of scheduling decisions that Halide deals with:

- Process the image in 4x4? 8x8? 16x16 chunks?
- Use a temporary buffer for an intermediate processing stage, accepting worse cache behavior in exchange for reusing results that are needed more than once?
- Use a sliding window the size of a couple of rows as a compromise?

This kind of work is the difference between a laggy and an interactive Gaussian Blur radius slider in Photoshop.

The Halide code shown at the talk replaced really hard-to-read code full of SIMD compiler intrinsics: dozens of lines doing something that would be five lines in a naive implementation. With Halide, it's almost as readable as the naive version, because the scheduling decisions are separated from the algorithm (see the sketch below).

For an application like Photoshop, this is a big win, because they will *never* choose code readability over performance. Performance wins every time. If they can get the same performance with readable code, they are very happy.

GPU code generation falls naturally out of the scheduling intelligence and the restricted problem domain.

I have never used Halide, so I do not intend to endorse it, but this line of inquiry is absolutely useful for a certain niche of programming.
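To make the algorithm/schedule split concrete, here is roughly what that separation looks like, adapted from the 3x3 blur example in Halide's own materials (the tile sizes and vector widths are illustrative, and the API may differ across Halide versions):

    #include "Halide.h"
    using namespace Halide;

    Func blur_3x3(Func input) {
        Func blur_x, blur_y;
        Var x, y, xi, yi;

        // The algorithm: what to compute, with no commitment
        // to order, storage, or vectorization.
        blur_x(x, y) = (input(x-1, y) + input(x, y) + input(x+1, y)) / 3;
        blur_y(x, y) = (blur_x(x, y-1) + blur_x(x, y) + blur_x(x, y+1)) / 3;

        // The schedule: the tiling/buffering/sliding-window decisions
        // listed above live here, separate from the math.
        blur_y.tile(x, y, xi, yi, 256, 32)   // process in 256x32 tiles
              .vectorize(xi, 8)              // 8-wide SIMD within a tile
              .parallel(y);                  // tiles run across cores
        blur_x.compute_at(blur_y, x)         // intermediate stage computed
              .vectorize(x, 8);              //   per tile, not whole-image

        return blur_y;
    }

Trying a different chunk size or a different intermediate-storage strategy means editing the schedule lines only; the two lines of algorithm stay untouched.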
As a computer vision researcher, this looks *very* interesting, although I have yet to understand how they intend to generalize highly complicated optimization patterns (access order, locality, pipelining, platform limitations, ...). Some algorithms (beyond the blur filter shown) require quite complicated access patterns on the image data and can usually only be optimized by hand. That doesn't mean they wouldn't benefit from general optimization at all, just that they might be much faster when hand-optimized. Still, if Halide produces faster code in some cases (e.g. filter operations, among others), it will be worth its salt.
As usual, the Unix Room at Bell Labs already had a go at this. It was called Pico and comes from 10th Edition Unix, circa 1984.

I have the print version of the book in my collection. It is interesting.

http://spinroot.com/pico/

http://spinroot.com/pico/tutorial.pdf

http://spinroot.com/pico/atttj.pdf
I might be a stickler here, but this strikes me as a good example of a "domain-specific language" rather than a general programming language.

Now, it's certainly possible to bolt general-purpose programming features onto a DSL (e.g. MATLAB or R), but the approach Halide takes, embedding the DSL in a general-purpose language (here: C++), is, in my eyes, vastly better suited for scaling up from experimentation code to something that can become part of a larger system (think OpenCV in robotics, etc.). A sketch of what that embedding buys you is below.
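Because the embedded pipeline is just C++, the rest of a system can call it like any other function. A minimal sketch of that idea, assuming a recent Halide API (the "realize" signature has changed between versions, and "brighten" is a name I made up for illustration):

    #include "Halide.h"

    // A plain C++ entry point the rest of a larger system can call;
    // the DSL stays an implementation detail behind this function.
    Halide::Buffer<uint8_t> brighten(Halide::Buffer<uint8_t> in) {
        Halide::Func f;
        Halide::Var x, y;
        // Widen to int32 before adding so the sum cannot wrap,
        // clamp to 255, then narrow back to uint8.
        f(x, y) = Halide::cast<uint8_t>(
            Halide::min(Halide::cast<int32_t>(in(x, y)) + 50, 255));
        return f.realize({in.width(), in.height()});
    }

No separate toolchain and no foreign-function boundary: the pipeline compiles, links, and debugs with the rest of the C++ codebase.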
We evaluated this a while ago at the company I used to work for, but it supports neither the Intel compiler nor the MS compiler, so it was an instant no-go. It is an interesting idea, though.
Can you combine this with AviSynth for rendering videos? It would be cool to be able to completely bypass Lightroom when I need to process individual images and do it in a single script instead.
Honestly, I was expecting something like in Blade Runner [0]. It's quite easy to mistake "language" for something spoken.

[0] http://www.imdb.com/title/tt0083658/quotes?item=qt1827458
Really cool name. In the rare case someone doesn't get it: http://en.wikipedia.org/wiki/Silver_halide
One wonders if you could build an OpenCV-compatible interface on top of Halide: keep the familiar API, but gain the potential to optimize across OpenCV function calls.

I want to write OpenCV code, but I also want the compiler to fix everything across function boundaries :-)
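For what it's worth, that cross-call optimization is close to what Halide's default scheduling already does: stages are fused unless you say otherwise. A hypothetical sketch (the stage names are mine, this is not an actual OpenCV binding, and boundary handling is omitted):

    #include "Halide.h"
    using namespace Halide;

    int main() {
        // In OpenCV, a blur followed by a threshold is two library calls,
        // with the blurred image materialized in memory in between.
        ImageParam in(UInt(8), 2);
        Func blurred, thresholded;
        Var x, y;

        // 2x2 box blur; widen to uint16 so the sum cannot overflow.
        Expr sum = cast<uint16_t>(in(x, y))   + cast<uint16_t>(in(x+1, y)) +
                   cast<uint16_t>(in(x, y+1)) + cast<uint16_t>(in(x+1, y+1));
        blurred(x, y) = cast<uint8_t>(sum / 4);

        // The threshold stage reads blurred directly. With no schedule
        // directives, blurred is computed inline inside thresholded's
        // loop nest: no intermediate buffer is ever allocated.
        thresholded(x, y) = select(blurred(x, y) > 128,
                                   cast<uint8_t>(255), cast<uint8_t>(0));
        return 0;
    }

Whether a real OpenCV-shaped wrapper could expose that without giving up fusion at the call boundary is exactly the interesting question.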
I cannot shake the feeling that this is just yet another iteration of the usual concept.

If you look at Google RenderScript, Microsoft C++ AMP, NVIDIA CUDA, OpenCL, and whatnot, you'll realize that they all do essentially the same thing; the differences are mostly at the syntax level (with a few caveats, but those are relatively minor). Some are more cumbersome to use than others.

Halide looks neat syntax-wise, but it doesn't seem to make the actual algorithms any easier to write. You still have to do the same mental work, at the same level, as with OpenCL.

All of these are to each other what Fortran, Pascal, and C are to each other: the same basic idea in a different package. I'm waiting for the first system that is the equivalent of C++, something that really brings new concepts to the table instead of just different syntax.