This is an instance of the "Maximally Powerful, Minimally Useful" principle from a few days ago [0]. Because the language being described is sufficiently minimal, it admits multiple valid, meaningful interpretations: the direct "just execute it" interpretation, a theoretical "compile this to some other language and interpret it there", and, most interestingly, interpreting these relations as a linear algebra problem.

It should be quite clear that a more "powerful" language would not admit this last meaning/interpretation/denotation.

[0] http://blog.higher-order.com/blog/2014/12/21/maximally-powerful/
Arithmetic confined to addition, subtraction, and multiplication by constants is Presburger arithmetic. It's decidable; there are automatic theorem provers for statements in it. The classic Nelson-Oppen prover converts the theorems to be proved into matrices, then uses linear programming over the rationals as the solution mechanism. That may be isomorphic to this approach.
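As a rough illustration of the linear-programming route (a sketch of the general idea only, not the actual Nelson-Oppen procedure; the function and the example constraints are mine), here is a feasibility check for a conjunction of linear inequalities over the rationals using SciPy's LP solver:

```python
# Minimal sketch: feasibility of linear constraints via linear programming.
# Illustrates "reduce to an LP feasibility check"; not the Nelson-Oppen procedure itself.
import numpy as np
from scipy.optimize import linprog

def satisfiable(A_ub, b_ub):
    """Is there an x with A_ub @ x <= b_ub (over the rationals/reals)?"""
    n = A_ub.shape[1]
    # Zero objective: we only care about feasibility, not optimality.
    res = linprog(c=np.zeros(n), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n, method="highs")
    return res.status == 0  # status 0 means a feasible optimum was found

# Example: x >= 1, y >= 1, x + y <= 4  -- satisfiable
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, -1.0, -1.0])
print(satisfiable(A, b))   # True

# Adding x + y <= 1 makes the system infeasible.
A2 = np.vstack([A, [1.0, 1.0]])
b2 = np.append(b, 1.0)
print(satisfiable(A2, b2))  # False
```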
<i>"The exponentiation algorithm is quite simple:<p>If the power is zero, we will return the identity matrix.
If the power is even (2N), we can recursively compute M^N, and then return the square of the obtained matrix.
If the power is odd (2N+1), it’s enough to recursively compute M^2N value, and return the obtained matrix, that has been multiplied by M."</i><p>Tidbit: that likely is the best approach in practice, but perhaps surprisingly, it is not optimal in the number of matrix multiplications. See <a href="http://en.m.wikipedia.org/wiki/Addition-chain_exponentiation" rel="nofollow">http://en.m.wikipedia.org/wiki/Addition-chain_exponentiation</a> and <a href="https://oeis.org/A003313" rel="nofollow">https://oeis.org/A003313</a>. Wikipedia has a link mentioning addition-subtraction chains, so it may even be beneficial to compute the inverse of the matrix in some cases (M^1023 likely is such a case, if inverting M isn't too hard)
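A minimal sketch of that squaring scheme in Python (hardcoded for 2x2 matrices; the names and the Fibonacci example are mine, not the article's code):

```python
# Exponentiation by squaring for 2x2 integer matrices (minimal sketch).

def mat_mul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def mat_pow(m, n):
    """Compute m**n with O(log n) matrix multiplications."""
    if n == 0:
        return [[1, 0], [0, 1]]                   # identity matrix
    half = mat_pow(m, n // 2)                     # M^N, where N = n // 2
    sq = mat_mul(half, half)                      # M^(2N)
    return sq if n % 2 == 0 else mat_mul(sq, m)   # extra factor of M when n is odd

# The Fibonacci matrix [[1, 1], [1, 0]]: its n-th power contains F(n+1), F(n), F(n-1).
print(mat_pow([[1, 1], [1, 0]], 10))  # [[89, 55], [55, 34]] -> F(10) = 55
```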
How about computing any Fibonacci number in *constant* time?

This can be done using the eigendecomposition trick for computing powers of matrices:
http://minireference.com/static/excerpts/fibonacci_in_constant_time.pdf

Caveat: it will fail for large N due to the limited precision of floats; it works fine on paper, though.

[update] Caveat 2: it assumes a model of computation where ϕ^N can be computed in constant time, which also only works on paper.
The author needs to be careful about which model they are using to show asymptotic runtime. Sure, this reduces to O(log n) matrix multiplications, but matrix multiplication is not a constant-time operation (it is somewhere between quadratic and cubic in the dimension of the matrix, assuming arithmetic operations are performed in constant time, which is itself not always a great assumption).
Link bait?

Original article: http://habrahabr.ru/post/148901/
Post date: 2012

More from the same author: http://habrahabr.ru/users/skidanovalex/topics/
More good ideas in this direction: Hashlife, http://en.wikipedia.org/wiki/Hashlife
> […] will calculate the 1 000 001st and 1 000 002nd numbers in 4 seconds. Classic version would take much more time, as the program has to operate with multi-thousand-digit numbers.

I'm not sure what classical algorithm for computing Fibonacci numbers the OP means, but Mathematica on my machine takes about 4 ms to compute the 1,000,001st Fibonacci number exactly…

Maybe by classical he means naive; in that case, doesn't it ever get tiring to pick as an example an algorithm that will be optimized away by any half-decent compiler?
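For reference, a fast-doubling sketch in Python (not what Mathematica does internally, just to show that exact bignum Fibonacci at this size is cheap):

```python
def fib_fast_doubling(n):
    """Return (F(n), F(n+1)) using the fast-doubling identities:
    F(2k)   = F(k) * (2*F(k+1) - F(k))
    F(2k+1) = F(k)**2 + F(k+1)**2
    """
    if n == 0:
        return (0, 1)
    a, b = fib_fast_doubling(n // 2)   # a = F(k), b = F(k+1), where k = n // 2
    c = a * (2 * b - a)                # F(2k)
    d = a * a + b * b                  # F(2k+1)
    return (d, c + d) if n % 2 else (c, d)

f, _ = fib_fast_doubling(1_000_001)
print(len(str(f)))  # about 209,000 decimal digits, computed in a fraction of a second
```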
(Slightly tangential)

> I have recently read an article about calculating the Nth Fibonacci number in O(log N) arithmetic operations.

Either you assume fixed-precision integers, in which case you just use a lookup table and call it O(1), or you assume arbitrary-precision integers, in which case you *cannot* have anything less than an O(n) algorithm, as the number of digits of the Nth Fibonacci number is O(n) (since the Fibonacci sequence grows exponentially).

I see this all over the place, and it's frustrating every time.
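A quick back-of-the-envelope check of that digit count, using the approximation F(N) ≈ ϕ^N/√5:

```python
import math

phi = (1 + math.sqrt(5)) / 2
# F(N) ~ phi**N / sqrt(5), so its decimal length grows linearly in N:
# digits(F(N)) = floor(N*log10(phi) - log10(sqrt(5))) + 1, i.e. about 0.209 * N.
for n in (100, 1_000, 1_000_000):
    digits = math.floor(n * math.log10(phi) - math.log10(math.sqrt(5))) + 1
    print(n, digits)   # 100 -> 21, 1000 -> 209, 1000000 -> 208988
```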
So basically this interpreter checks whether a loop contains only linear transformations of the variables in the lexical scope, and if so it builds the corresponding matrix and computes the appropriate power of it.

If so, that's fine, but it's probably a rare use case, since loops usually have conditional branches, don't they? And if they don't, the algorithm was poorly written in the first place, as the author should have noticed that a linear transformation was involved.
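Roughly, yes. A toy illustration of the reduction (my own made-up loop, not the article's interpreter): the body `x = x + 3*y + 1; y = 2*y` is affine in (x, y), so running it n times amounts to taking the n-th power of a single matrix acting on the homogeneous vector (x, y, 1).

```python
# Toy illustration: an affine loop body turned into one matrix power.
# Loop body, repeated n times:   x = x + 3*y + 1;   y = 2*y

def mat_mul(a, b):
    """Multiply two square matrices given as nested lists (exact ints)."""
    size = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(size)) for j in range(size)]
            for i in range(size)]

def mat_pow(m, n):
    """m**n by repeated squaring, O(log n) matrix multiplications."""
    result = [[int(i == j) for j in range(len(m))] for i in range(len(m))]  # identity
    while n:
        if n & 1:
            result = mat_mul(result, m)
        m = mat_mul(m, m)
        n >>= 1
    return result

# The body acts on the homogeneous column vector (x, y, 1).
STEP = [[1, 3, 1],   # x' = 1*x + 3*y + 1
        [0, 2, 0],   # y' = 2*y
        [0, 0, 1]]   # the constant 1 stays 1

def run_loop_fast(x, y, n):
    m = mat_pow(STEP, n)
    return (m[0][0]*x + m[0][1]*y + m[0][2],
            m[1][0]*x + m[1][1]*y + m[1][2])

def run_loop_naive(x, y, n):
    for _ in range(n):
        x, y = x + 3*y + 1, 2*y
    return x, y

print(run_loop_fast(1, 1, 20) == run_loop_naive(1, 1, 20))  # True
```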
I believe this technique is by Valiant?

Leslie G. Valiant: General Context-Free Recognition in Less than Cubic Time. J. Comput. Syst. Sci. 10(2): 308-315 (1975)