It's interesting to note that if you add a modulus to the calculation (i.e. if you calculate F(n) mod M), you can calculate ridiculously large values using the matrix/doubling method. You can, for instance, calculate the last 10 digits of the googolth Fibonacci number instantly: since you only need O(log n) multiplications, it's just a few hundred for n = 10^100. Doing it the "naive" dynamic programming way would take you far longer than the age of the universe.<p>It's a neat illustration of asymptotic running times.
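For reference, here's a minimal Python sketch of the doubling method with a modulus (the function name is mine), using the standard identities F(2k) = F(k)(2F(k+1) - F(k)) and F(2k+1) = F(k)^2 + F(k+1)^2:

```python
def fib_pair_mod(n, m):
    """Return (F(n) mod m, F(n+1) mod m) by fast doubling."""
    if n == 0:
        return (0, 1)
    a, b = fib_pair_mod(n >> 1, m)    # a = F(k), b = F(k+1), where k = n // 2
    c = a * ((2 * b - a) % m) % m     # F(2k)   = F(k) * (2*F(k+1) - F(k))
    d = (a * a + b * b) % m           # F(2k+1) = F(k)^2 + F(k+1)^2
    return (d, (c + d) % m) if n & 1 else (c, d)

# last 10 digits of the googolth Fibonacci number, in ~332 doubling steps
last10 = fib_pair_mod(10**100, 10**10)[0]
```

Since every intermediate value is reduced mod m, the numbers never grow, which is what makes the googol-sized index feasible.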
The fast exponentiation algorithm actually generalizes:<p><a href="http://kukuruku.co/hub/algorithms/automatic-algorithms-optimization-via-fast-matrix-exponentiation.html" rel="nofollow">http://kukuruku.co/hub/algorithms/automatic-algorithms-optim...</a>
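To make the linked generalization concrete: as I understand it, any fixed linear recurrence can be advanced in O(log n) steps by raising its companion matrix to a power with repeated squaring. A rough Python sketch (function names are mine, not from the article):

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(M, e):
    n = len(M)
    R = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    while e:
        if e & 1:
            R = mat_mul(R, M)
        M = mat_mul(M, M)
        e >>= 1
    return R

def linrec(coeffs, init, n):
    """n-th term of a(n) = coeffs[0]*a(n-1) + ... + coeffs[k-1]*a(n-k)."""
    k = len(coeffs)
    if n < k:
        return init[n]
    # companion matrix: first row holds the coefficients, then a shifted identity
    C = [list(coeffs)] + [[int(j == i) for j in range(k)] for i in range(k - 1)]
    P = mat_pow(C, n - k + 1)
    # the first row of P maps the state vector [a(k-1), ..., a(0)] to a(n)
    return sum(P[0][j] * init[k - 1 - j] for j in range(k))
```

Fibonacci is just the k = 2 case, `linrec([1, 1], [0, 1], n)`.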
There is a log-time way to do this: use Binet's formula (<a href="https://en.m.wikipedia.org/wiki/Fibonacci_number#Closed-form_expression" rel="nofollow">https://en.m.wikipedia.org/wiki/Fibonacci_number#Closed-form...</a>), but do the arithmetic in the field Q[√5].<p>Edit: Someone's implementation: <a href="http://ideone.com/NWQe38" rel="nofollow">http://ideone.com/NWQe38</a><p>Edit: constant time -> log time
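A minimal Python sketch of that idea (my own naming): instead of all of Q[√5], it's enough to work in the subring Z[φ], representing a + bφ as a pair (a, b) with φ^2 = φ + 1, since φ^n = F(n-1) + F(n)·φ.

```python
def zphi_mul(x, y):
    """Multiply a + b*phi by c + d*phi, using phi^2 = phi + 1."""
    a, b = x
    c, d = y
    return (a * c + b * d, a * d + b * c + b * d)

def fib(n):
    """F(n), read off from phi^n = F(n-1) + F(n)*phi, by binary exponentiation."""
    result = (1, 0)   # the element 1
    base = (0, 1)     # the element phi
    while n:
        if n & 1:
            result = zphi_mul(result, base)
        base = zphi_mul(base, base)
        n >>= 1
    return result[1]
```

All arithmetic stays in integers, so the answer is exact, and you still only do O(log n) multiplications.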
For really big numbers there are the generalisations of Karatsuba and Toom-Cook to higher numbers of splits, and beyond that there are the FFT-based multiplication methods, which I believe are the fastest general-case methods for very large numbers.<p>Also, the laddering scheme used for the 'exponentiation' can take different forms: left-to-right, right-to-left, or even a Montgomery ladder...<p>I seem to recall there is a 'fastest known' method for generating Fibonacci or Lucas numbers... but Google is not helping me.<p>Pretty sure I had seen it in this book: <a href="https://www.amazon.co.uk/Prime-Numbers-Computational-Carl-Pomerance/dp/0387252827" rel="nofollow">https://www.amazon.co.uk/Prime-Numbers-Computational-Carl-Po...</a> I'll have to check when I have access to it next. :)
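As an illustration of the ladder forms, a left-to-right binary ladder over the 2x2 Fibonacci matrix Q = [[1,1],[1,0]] might look like this in Python (a sketch, not from the book; the right-to-left variant would scan the bits from the bottom and square the base instead of the accumulator):

```python
def mat2_mul(X, Y):
    (a, b), (c, d) = X
    (e, f), (g, h) = Y
    return ((a * e + b * g, a * f + b * h),
            (c * e + d * g, c * f + d * h))

def fib_ltr(n):
    """F(n) via a left-to-right binary ladder on Q = [[1,1],[1,0]]."""
    Q = ((1, 1), (1, 0))
    R = ((1, 0), (0, 1))          # identity
    for bit in bin(n)[2:]:        # scan exponent bits from the top down
        R = mat2_mul(R, R)        # always square the accumulator
        if bit == '1':
            R = mat2_mul(R, Q)    # multiply in Q when the bit is set
    return R[0][1]                # Q^n = [[F(n+1), F(n)], [F(n), F(n-1)]]
```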
The BigInteger implementation in Java uses better multiplication algorithms in Java 1.8 than in the Java 1.6 version tested [1], switching to Toom-Cook [2] for even larger numbers. So in practice Fibonacci in Java becomes a benchmark of allocation rate.<p>[1]: <a href="http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/8-b132/java/math/BigInteger.java?av=f#1464" rel="nofollow">http://grepcode.com/file/repository.grepcode.com/java/root/j...</a><p>[2]: <a href="https://en.wikipedia.org/wiki/Toom%E2%80%93Cook_multiplication" rel="nofollow">https://en.wikipedia.org/wiki/Toom%E2%80%93Cook_multiplicati...</a>
For anyone interested in Fibonacci numbers: <i>Fibonacci Quarterly</i> has been published for more than fifty years.<p><a href="http://www.fq.math.ca/" rel="nofollow">http://www.fq.math.ca/</a>
Isn't there a constant-time implementation using the Golden Mean (i.e. Binet's formula)? What is the advantage of these algorithms over the constant-time one?
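For what it's worth, the floating-point version of Binet's formula stops being exact once F(n) outgrows double precision, which a quick Python check makes visible (the exact cutoff depends on the float implementation, so I'm not asserting a specific n):

```python
import math

PHI = (1 + math.sqrt(5)) / 2

def fib_binet(n):
    # round(phi^n / sqrt(5)): only exact while F(n) fits in a double
    return round(PHI ** n / math.sqrt(5))

def fib_exact(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# first index where the floating-point formula disagrees with the exact value
first_bad = next(n for n in range(200) if fib_binet(n) != fib_exact(n))
```

So the "constant time" version only buys you a bounded range of n; for exact big results you need exact arithmetic, and that is where the log-time methods come in.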
So it seems like matrix exponentiation is faster in practice than computing eigenvalues/eigenvectors? Take a look: <a href="http://mathoverflow.net/questions/62904/complexity-of-eigenvalue-decomposition" rel="nofollow">http://mathoverflow.net/questions/62904/complexity-of-eigenv...</a>
Product of Lucas Numbers Algorithm: "A fast algorithm for computing large Fibonacci numbers" - <a href="https://pdfs.semanticscholar.org/9652/9e1fedb0a372b825215c7b471a99088f4515.pdf" rel="nofollow">https://pdfs.semanticscholar.org/9652/9e1fedb0a372b825215c7b...</a>
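If I recall correctly, the relevant identities are F(2n) = F(n)*L(n) and L(2n) = L(n)^2 - 2*(-1)^n, which give F(2^k) as a product of Lucas numbers. A quick Python sketch of just the power-of-two case (my own naming, not the paper's full algorithm):

```python
def fib_pow2(k):
    """F(2**k) as the product F(1) * L(1) * L(2) * L(4) * ... * L(2**(k-1))."""
    f, lucas, n = 1, 1, 1                    # F(1) = 1, L(1) = 1
    for _ in range(k):
        f *= lucas                           # F(2n) = F(n) * L(n)
        lucas = lucas * lucas - 2 * (-1) ** n  # L(2n) = L(n)^2 - 2*(-1)^n
        n *= 2
    return f
```

Each doubling step costs one multiplication of numbers about the size of the answer so far, which is why this competes with the matrix method.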
The continued fraction of (1+√5)/2 is [1; 1, 1, 1, 1, 1, 1, ...]: it goes on forever.<p>There are more advanced formulas in this article by Curtis McMullen:
<a href="http://www.math.harvard.edu/~ctm/papers/home/text/papers/cf/cf.pdf" rel="nofollow">http://www.math.harvard.edu/~ctm/papers/home/text/papers/cf/...</a>
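And truncating that continued fraction gives exactly the ratios of consecutive Fibonacci numbers as the convergents, which is easy to check with exact rational arithmetic (a Python sketch):

```python
from fractions import Fraction

def convergent(k):
    """Evaluate the continued fraction [1; 1, 1, ..., 1] with k ones after the 1."""
    x = Fraction(1)
    for _ in range(k):
        x = 1 + 1 / x
    return x

# convergents: 1, 2, 3/2, 5/3, 8/5, ... i.e. F(k+2)/F(k+1), approaching (1+sqrt(5))/2
```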