By the way, if anyone is interested in good open source opportunities, computational linear algebra is nowhere near a solved problem, and there's real opportunity for impactful contribution. The tradeoff between working with the algebra directly and computing factorizations is one angle. Dense versus sparse matrices is another. Shared-memory parallelization versus distributed memory versus GPUs is another. Even on the GPU, the strategy differs depending on whether the entire matrix fits on a single GPU or has to be split across several. Incomplete or multilevel direct methods used as effective preconditioners for iterative methods are also important. Hell, even efficient direct techniques embedded in iterative solvers are important.

One way to get started is with a general numerical linear algebra book like Numerical Linear Algebra by Trefethen and Bau. There are better computational algorithms than the ones they present, but they do a good job of introducing the important factorizations and why we care about them. Then have a look at Tim Davis' book Direct Methods for Sparse Linear Systems. The codes from that book are online. Then try to reimplement these algorithms in other languages, parallelize them, or improve them. They are good algorithms, but better ones exist, and Tim's more recent codes are actively used by both MATLAB and Octave. After that, look for missing routines in open source libraries. For example, from a quick look, MAGMA currently lists routines it's missing relative to LAPACK.

Anyway, it's not a field for everyone, but it's one where good architecture and parallelization knowledge can have a positive impact. Nearly all engineering codes depend on good solvers, so the impact is wide.
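To make the preconditioning point concrete, here is a minimal sketch in Python/NumPy (not taken from any of the libraries above) of conjugate gradient with a plug-in preconditioner. A simple Jacobi (diagonal) preconditioner stands in for the incomplete or multilevel factorizations used in serious codes, and the test matrix is an illustrative SPD example, not anything from the source:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxiter=500):
    """Preconditioned conjugate gradient for SPD A.
    M_inv applies the (approximate) inverse of the preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Illustrative SPD system: 1-D Laplacian plus a varying diagonal shift,
# so that the diagonal preconditioner actually has something to do.
n = 100
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A += np.diag(np.linspace(1.0, 10.0, n))
b = np.ones(n)

jacobi = lambda r: r / np.diag(A)   # Jacobi (diagonal) preconditioner
x = pcg(A, b, jacobi)
```

The point of the `M_inv` callback is exactly the "direct techniques embedded in iterative solvers" angle: swapping the diagonal for an incomplete Cholesky or a multilevel solve changes only that one function.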
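As a taste of the kind of algorithm worth reimplementing from Davis' book, here is a sketch of a sparse lower-triangular solve on a matrix stored in compressed sparse column (CSC) form, in the spirit of cs_lsolve from his CSparse codes (translated to plain Python here; the toy 3x3 matrix is my own example, and the diagonal entry is assumed to come first in each column):

```python
def lsolve(n, Lp, Li, Lx, b):
    """Solve L x = b for lower-triangular L in CSC form:
    Lp are column pointers, Li row indices, Lx values.
    Assumes the diagonal entry is stored first in each column."""
    x = list(b)
    for j in range(n):
        x[j] /= Lx[Lp[j]]                # divide by the diagonal L[j,j]
        for p in range(Lp[j] + 1, Lp[j + 1]):
            x[Li[p]] -= Lx[p] * x[j]     # scatter column j's update
    return x

# L = [[2, 0, 0],
#      [1, 3, 0],
#      [0, 4, 5]] in CSC form; b chosen so the solution is all ones.
Lp = [0, 2, 4, 5]
Li = [0, 1, 1, 2, 2]
Lx = [2.0, 1.0, 3.0, 4.0, 5.0]
x = lsolve(3, Lp, Li, Lx, [2.0, 4.0, 9.0])
```

Note the solve touches only the stored nonzeros, so it runs in time proportional to the number of entries in L rather than n^2; making that property survive parallelization is exactly the kind of open problem mentioned above.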