> In the general-purpose programming context, imagine if you could give examples of a program output (domain data) along with a skeleton of a program (source file with incomplete parts) and ask a system to fill in the holes.

This part reminds me of some of the capabilities of the Idris compiler [1]. In an Idris program you can leave "holes" to stand in for incomplete parts of a program [2], and the compiler can infer various bits of code from types and holes. In a demo of the in-progress Idris 2 compiler [3], Edwin Brady refers to it as a "lab assistant" and shows it writing a whole function when given only a function type.

[1] http://docs.idris-lang.org/en/latest/tutorial/interactive.html#editing-commands

[2] http://docs.idris-lang.org/en/latest/tutorial/typesfuns.html#holes

[3] https://www.youtube.com/watch?v=mOtKD7ml0NU
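For readers without Idris installed, GHC's typed holes are a rough Haskell analogue: write `_` where code is missing and the compiler reports the type the hole must have, the relevant bindings in scope, and (in newer GHCs) "valid hole fits", i.e. in-scope expressions of the right type. Unlike Idris's interactive editing, GHC only reports; it won't write the definition for you. A minimal sketch, with the error text paraphrased:

    -- A minimal sketch of GHC's typed holes, a close cousin of the
    -- Idris feature: `_` marks code we haven't written yet.
    swap :: (a, b) -> (b, a)
    swap (x, y) = (_, _)

    -- GHC rejects this with (roughly):
    --   Found hole: _ :: b
    --     Relevant bindings include x :: a, y :: b
    -- and newer GHCs list "valid hole fits" such as y.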
Modelling uncertainty is definitely a useful tool to have, but I'm not sure why the author expects there to be a "scientific" (a.k.a. mechanistic) way of doing it.

In normal programming, there's no foolproof formula for picking the best data structure or the best algorithm. If there were, we could just write one program to write all other programs and be done with it!
If you look at Linux kernel code you'll see annotations (the likely()/unlikely() macros) that hint at which branch of code is more probable, so the compiler can lay out the hot path accordingly.

Similarly, many JIT compilers already collect statistics on the fly; for instance, these are used to better predict which branches are most likely to be taken, so the code along them can be laid out and prefetched.
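To make the second point concrete, here's a toy sketch of the kind of per-branch statistics a runtime might keep: count taken vs. not-taken per branch site and predict whichever outcome has dominated so far. This is not any real JIT's internals, and the names (Site, record, predictTaken) are made up for illustration:

    import Data.IORef
    import qualified Data.Map.Strict as Map

    -- Toy model of per-branch runtime statistics: count how often
    -- each branch site is taken vs. not taken, and predict the
    -- historically more frequent outcome.
    type Site  = String                  -- hypothetical branch label
    type Stats = Map.Map Site (Int, Int) -- (taken, not taken)

    record :: IORef Stats -> Site -> Bool -> IO ()
    record ref site taken =
      modifyIORef' ref (Map.insertWith combine site delta)
      where
        delta = if taken then (1, 0) else (0, 1)
        combine (t, n) (t', n') = (t + t', n + n')

    predictTaken :: Stats -> Site -> Bool
    predictTaken stats site =
      case Map.lookup site stats of
        Just (t, n) -> t >= n
        Nothing     -> True  -- arbitrary default for unseen branches

Real JITs feed counters like these into decisions about which code to optimize; the kernel macros encode the same kind of hint statically, from the programmer's knowledge rather than measurement.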
> seems to me that the act of compiling knowledge into probabilistic models is still more art than science

Because there's no modularity: writing probability models is still like writing unstructured assembly, without the reusable, composable pieces that make ordinary programming scale.