Somewhat related -- a few months ago I wrote a decorator that allows function currying [0], like so:<p><pre><code> @curry
def add3(a,b,c): return a+b+c
# normal function application
>>> add3(1,2,3)
6
# add3(1,2) returns a unary function which is then applied to 3
>>> add3(1,2)(3)
6
# rebinding partially applied functions to another variable, then applying
>>> add2 = add3(100)
>>> add2(5,6)
111
>>> add1 = add2(3.14)
>>> add1(5)
108.14
>>> map(add1,range(5))
[103.14, 104.14, 105.14, 106.14, 107.14]
</code></pre>
[0] <a href="https://gist.github.com/grantslatton/9221084" rel="nofollow">https://gist.github.com/grantslatton/9221084</a>
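For the curious, one common way to build such a decorator is to count the function's parameters and keep accumulating positional arguments until the arity is satisfied. A minimal sketch along those lines (an assumed approach, not necessarily the gist's actual implementation):<p><pre><code> import functools
 import inspect

 def curry(fn):
     # Number of positional parameters still expected by fn
     arity = len(inspect.signature(fn).parameters)

     @functools.wraps(fn)
     def wrapper(*args):
         if len(args) >= arity:
             return fn(*args)
         # Too few arguments: return a curried partial application
         return curry(functools.partial(fn, *args))
     return wrapper

 @curry
 def add3(a, b, c):
     return a + b + c
</code></pre>
Note that inspect.signature understands functools.partial, so the arity shrinks correctly on each partial application.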
This is really cool. The AST transformation stuff here is neat, but relatively well-trodden ground.<p>The more impressive new science here is the lazy_function decorator, which is implemented as a bytecode transformer on the code object that lives inside the decorated function. The author built his own library for the bytecode stuff, which lives here: <a href="https://github.com/llllllllll/codetransformer" rel="nofollow">https://github.com/llllllllll/codetransformer</a>.
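To illustrate the substrate such a transformer works on (stdlib only -- this is not codetransformer's API): a Python function carries a code object that can be inspected and, since Python 3.8, rebuilt via CodeType.replace and swapped back in.<p><pre><code> import dis

 def add(a, b):
     return a + b

 code = add.__code__                          # the code object inside the function
 print(code.co_varnames[:code.co_argcount])   # the parameter names
 dis.dis(add)                                 # human-readable bytecode listing

 # A bytecode transformer would pass altered fields (co_code, co_consts, ...)
 # to replace(); here we do an identity transform and swap it back in:
 add.__code__ = code.replace()
 print(add(2, 3))
</code></pre>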
You said in a comment that you're looking for a use case for this technique, so I'll offer one for a related idea; perhaps we'll get ideas.<p>I've been toying with something similar lately, as a caching framework for scientific computations. I will have something like:<p><pre><code> x = load_big_file(filename) # takes 2 minutes
y = sqrt(1 / x ** 2) # takes 4 seconds
...
</code></pre>
Then, as my work proceeds, I will change and tweak and re-run in the same process... clearly a lot of my time would be saved by caching (though a different part each time, depending on what I tweak).<p>The way to go currently is joblib, which provides a decorator you put on a function to get basic caching. However, you have to take care to clear the cache manually if a function you depend on changes, and sometimes it will clear the cache itself because you changed something irrelevant to the computation.<p>So the lazy alternative I had in mind is a lazily evaluated tree similar to this, where the purpose is looking up a cache using the AST as key. What I have now looks more like this:<p><pre><code> @pure
def load_big_file(filename): ...
>>> x = load_big_file(lazy(fileref(filename))) # fileref is like a string but hashes by the file's timestamp
>>> y = sqrt(1 / x ** 2)
>>> print y
<lazy 23wfas
input:
v1: 32rwaa "/home/dagss/data/myfile.dat" @ 2015-03-03 08:43:23
program:
e0: 43wafa v1**2
e1: 4rfafq 1 / e0
e2: sqrt(e1)
>
</code></pre>
The point is that every node in the syntax tree carries a hash (leaves hash their value; inner nodes hash the operation together with their inputs' hashes). Then implementing a cache is simply<p><pre><code> NULL = object()
 result = cache.get(y, NULL)
 if result is NULL:
     result = cache[y] = compute(y)
</code></pre>
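A minimal sketch of that hash-by-construction idea (hypothetical names, not my actual framework): each node computes its key from its children's keys at construction time, Merkle-tree style, so structurally identical expressions get identical keys.<p><pre><code> import hashlib

 def _h(*parts):
     # Short content hash combining the given parts
     m = hashlib.sha256()
     for p in parts:
         m.update(str(p).encode())
         m.update(b'\x00')
     return m.hexdigest()[:8]

 class Expr:
     """A lazy expression node identified by a Merkle-style hash."""
     def __init__(self, op, *inputs):
         self.op, self.inputs = op, inputs
         # Inner node: hash of the operation plus its inputs' hashes
         self.key = _h(op, *(i.key for i in inputs))

 class Leaf(Expr):
     def __init__(self, value):
         self.op, self.inputs, self.value = 'leaf', (), value
         self.key = _h('leaf', value)

 # Two structurally identical trees produce the same key, so the key
 # can index a persistent cache across sessions:
 x = Leaf("/home/dagss/data/myfile.dat @ 2015-03-03")
 e0 = Expr("**2", x)
 e1 = Expr("1/", e0)
 e2 = Expr("sqrt", e1)
</code></pre>
Because a fileref hashes by timestamp, editing the file changes the leaf's key, which invalidates every node above it automatically.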
I'm leaning a bit towards explicit being better than implicit though (having to call compute(y) yourself, rather than evaluation happening transparently whenever the value is needed).