> Consequences? You can’t use Rust arrays to represent a matrix bigger than 4×8, how useful is that?

I went and googled "rust linear algebra library" and immediately found https://nalgebra.org, which can represent such matrices just fine and even integrates with LAPACK. Did the author do even this most cursory research before springing to write their rant? This is like saying "Python fails hard at scientific computing" because the default list type isn't an efficient basis for representing matrices, while entirely ignoring NumPy.

Also, those nested types are fixed-size arrays, not pointers to heap-allocated data. So [[f32; 9]; 5] is a single contiguous block of memory representing a 5x9 matrix of 32-bit floats. It has the same memory layout as [f32; 45], but you can index it as e.g. my_matrix[3][2] to access the element at row 3, column 2 (zero-based). Again, I guess it's much easier to bash something than to try to understand it.
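A minimal sketch of that layout claim (the variable names here are mine, not the article's):

    fn main() {
        // [[f32; 9]; 5] is five rows of nine f32s: one contiguous 180-byte block,
        // allocated inline (here on the stack), with no per-row pointers.
        let mut m: [[f32; 9]; 5] = [[0.0; 9]; 5];
        m[3][2] = 1.0; // row index 3, column index 2 (zero-based)

        // Same size and layout as the flat 45-element array.
        assert_eq!(
            std::mem::size_of::<[[f32; 9]; 5]>(),
            std::mem::size_of::<[f32; 45]>()
        );

        // Viewed as a flat slice, element (r, c) sits at offset r * 9 + c.
        let flat: &[f32] =
            unsafe { std::slice::from_raw_parts(m.as_ptr() as *const f32, 45) };
        assert_eq!(flat[3 * 9 + 2], 1.0);
    }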
What happened to the readability of code? This looks dreadful. I am sure there is a way to come up with a better syntax without sacrificing the type safety aspects.

    fn main() {
        let my_str: Rc<RefCell<Box<MyStruct1>>> =
            Rc::new(RefCell::new(Box::new(MyStruct1)));
        my_trait_fn(Rc::new(RefCell::new(Box::new((**my_str.borrow()).clone()))));
        my_str.borrow().my_fn();
    }
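For context, the snippet as quoted doesn't include its surrounding definitions. Here is one minimal set under which it compiles; MyTrait, MyStruct1, my_fn, and my_trait_fn are stand-ins guessed from the names, not code taken from the article:

    use std::cell::RefCell;
    use std::rc::Rc;

    trait MyTrait {
        fn my_fn(&self);
    }

    // Clone is needed because the snippet clones the value out of the RefCell.
    #[derive(Clone)]
    struct MyStruct1;

    impl MyTrait for MyStruct1 {
        fn my_fn(&self) {}
    }

    // Generic over the trait, so the concrete Rc<RefCell<Box<MyStruct1>>> can be passed directly.
    fn my_trait_fn<T: MyTrait>(_arg: Rc<RefCell<Box<T>>>) {}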
I come from a Python/Java world, and I remember that when I first delved into embedded MCU programming in C/C++ it was a cognitive overload and a half. I think we need to take a step back: while I understand the differences between high-level languages (Java/Python) and C/C++, which carry the burden of allowing low-level programming, the readability of code often gets the least attention.

If we humans spend so much of our time writing and parsing code through our visual system, programming languages need to take this seriously and optimize their syntax for readability over expressivity.

I recall Kernighan's "The Elements of Programming Style", where he talks about making code understandable rather than "clever". The syntax of a programming language directly facilitates (or impedes) this idea.

Edit: Everyone keeps pointing out that it is a contrived example. Yes, I agree, but doesn't the syntax still make even a contrived example harder to read than it needs to be?
I wish the HN community would collectively decide to downvote posts like these, which reward people who use clickbaity titles to get attention at the cost of hiding more careful analyses.

The post can be summarized as a critique of a programming language that did not meet the author's desires, but the critique boils down to (1) dislike of the syntax, (2) a specific missing feature, and (3) a very specific feature that doesn't exist in any of the popular scientific-computing languages. It's fine to talk about not liking a tool, but it doesn't merit "fails hard".
Aside from all the discussion around these low-level complaints, the Rust community is well aware of the lack of higher-level scientific computing libraries, which IMO is the more relevant reason not to use Rust today:
<a href="http://www.arewelearningyet.com/scientific-computing/" rel="nofollow">http://www.arewelearningyet.com/scientific-computing/</a><p>Hell, Rust doesn't even have stable bindings around BLAS/LAPACK/LINPACK and friends which are the lifeblood of <i>every</i> language capable of scientific computing.<p>Give it time. Scientific computing was never the primary thrust for Rust, and other more pertinent domains have yet to reach complete maturity.[0]<p>[0]: <a href="http://www.arewewebyet.org/" rel="nofollow">http://www.arewewebyet.org/</a>
Lack of multidimensional arrays as a language primitive is a problem. It was a problem in C, C++, and Go too. There, you either have arrays of arrays, or some macro/template that does a multiply on the subscript. The reply in one of the Rust forums is that multidimensional arrays are only found in "domain-specific languages", like FORTRAN.

When someone brings this up, the usual excuses are trotted out:

- "It can be done with templates."

- "You can have an array of arrays."

Then there's bikeshedding. Some people want to be able to extract an arbitrary subarray slice from an N-dimensional array and treat it like a regular array. This is possible but complicated, and if implemented it slows down the simple operations. Then there are people who want N-dimensional arrays where N is determined at run time. (This now includes the machine learning people.) Endless arguments between these groups resulted in nothing being done in Go. I haven't followed that discussion for Rust.

The big advantage of doing this in the language is that there's a lot of hardware support for doing number-crunching fast on dense arrays, and you want the compiler to use it effectively. Matrix multiply on arrays of arrays is much slower than on a dense array where the compiler knows the spacing between the rows at compile time (see the sketch below).

At least have arrays as good as FORTRAN's. This is why some heavy number-crunching work is still done in FORTRAN.
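To make the row-spacing point concrete, here is a rough Rust sketch (the function names and signatures are mine): a matrix multiply over flat, row-major storage with an explicit stride, versus the same loops over a Vec of Vecs, where each row is a separate heap allocation and the compiler cannot assume the rows are adjacent.

    // Flat row-major storage: element (i, j) of an m x n matrix lives at i * n + j,
    // so the inner loop walks contiguous memory.
    fn matmul_flat(a: &[f64], b: &[f64], c: &mut [f64], m: usize, n: usize, p: usize) {
        for i in 0..m {
            for k in 0..n {
                let aik = a[i * n + k];
                for j in 0..p {
                    c[i * p + j] += aik * b[k * p + j];
                }
            }
        }
    }

    // Array-of-arrays storage: each row of `b` is its own allocation, so rows may be
    // scattered across the heap and every access goes through an extra indirection.
    fn matmul_nested(a: &[Vec<f64>], b: &[Vec<f64>], c: &mut [Vec<f64>]) {
        for i in 0..a.len() {
            for k in 0..b.len() {
                let aik = a[i][k];
                for j in 0..b[k].len() {
                    c[i][j] += aik * b[k][j];
                }
            }
        }
    }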
I am going through Rust by Example every weekend and also trying to write some Rust code on my own, and I find the 'too much symbols'/unreadable code and just the general complexity of the language very off-putting.

If I have to write low-level code, I might just stick with C or C++ - the devils I know - rather than try to tame a new incarnation of the devil that is Rust. The complexity and learning-curve costs of Rust far outweigh the benefits at this point.

Edit: If I sounded above like I am giving up on Rust, that's not yet the case. I am currently enjoying challenging myself to learn something complex, and to that effect Rust is holding my interest.
> then your matrix is allocated on the heap not the stack, meaning slower operations

This doesn't seem right at all... Does anyone know more? Maybe they meant slow to allocate/deallocate? The author's problem appears to require generating a lot of temporary data and then throwing it away. They might benefit from just writing their own pool allocator, so they don't have to wait on heap allocations (assuming that's their problem).
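One way to read the pool-allocator suggestion, as a rough sketch (Scratch and its methods are names I made up, not anything from the article): preallocate one buffer up front and reuse it for every temporary matrix, so the hot loop performs no heap allocations at all.

    // A tiny "pool": one preallocated buffer, handed out repeatedly as scratch space.
    struct Scratch {
        buf: Vec<f32>,
    }

    impl Scratch {
        fn new(capacity: usize) -> Self {
            Scratch { buf: vec![0.0; capacity] }
        }

        // Return a zeroed rows*cols view into the existing buffer; no new allocation.
        fn matrix(&mut self, rows: usize, cols: usize) -> &mut [f32] {
            let n = rows * cols;
            assert!(n <= self.buf.len());
            let view = &mut self.buf[..n];
            view.fill(0.0);
            view
        }
    }

    fn main() {
        let mut scratch = Scratch::new(9 * 9);
        for _ in 0..1_000 {
            // Each temporary 9x9 matrix reuses the same allocation.
            let tmp = scratch.matrix(9, 9);
            tmp[3 * 9 + 2] = 1.0;
        }
    }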
Re: (2), this RFC was merged: https://github.com/rust-lang/rfcs/pull/2133
I don't know if Nim would be my first choice for a future scientific-computing language. I think I'd be more focused on Julia or, recently, Crystal.

Julia has some extreme magic it can do to make your life easy. Crystal will attract the Ruby-like crowd, people who will create very neat and effective libraries.
I suspect Rust also fails hard at interactive code (UIs).

Why? Because Rust's memory management doesn't work well with cyclic structures, and UI closures tend to form exactly such cycles (a widget owns a callback that refers back to the widget). Closures have proven to be very useful for interactive code (see JavaScript).

Someone please prove me wrong. (I'm wondering how Firefox deals with this.)
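The usual answer to this is to break the cycle with a Weak reference; a minimal sketch (Button, label, and on_click are made-up names, not any real UI framework's API):

    use std::cell::RefCell;
    use std::rc::{Rc, Weak};

    // A toy widget that owns a callback which needs to reach back into the widget.
    struct Button {
        label: RefCell<String>,
        on_click: RefCell<Option<Box<dyn Fn()>>>,
    }

    fn main() {
        let button = Rc::new(Button {
            label: RefCell::new("click me".to_string()),
            on_click: RefCell::new(None),
        });

        // The closure captures a Weak, not an Rc, so button -> closure -> button
        // is not a strong reference cycle and the Button can still be freed.
        let weak: Weak<Button> = Rc::downgrade(&button);
        *button.on_click.borrow_mut() = Some(Box::new(move || {
            if let Some(b) = weak.upgrade() {
                *b.label.borrow_mut() = "clicked".to_string();
            }
        }));

        // Fire the callback.
        let cb = button.on_click.borrow();
        if let Some(f) = cb.as_ref() {
            f();
        }
        drop(cb);

        assert_eq!(*button.label.borrow(), "clicked");
    }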
> You might say, 30 lines, easy no? Well except that Go can be played on 9x9, 11x11, 13x13, 15x15 and 17x17 too, any oddxodd size, no way I'm implementing those 30 lines x 6 times, this is not fun

I'm curious whether the author has tried macros, which would let him write the code only once.
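A rough sketch of what that might look like with macro_rules! (define_board and the BoardN type names are hypothetical, not the article's code):

    // Generate a fixed-size board type for each supported side length from one definition.
    macro_rules! define_board {
        ($name:ident, $n:expr) => {
            struct $name {
                cells: [[u8; $n]; $n],
            }

            impl $name {
                fn new() -> Self {
                    $name { cells: [[0; $n]; $n] }
                }

                fn size(&self) -> usize {
                    $n
                }
            }
        };
    }

    define_board!(Board9, 9);
    define_board!(Board13, 13);
    define_board!(Board19, 19);

    fn main() {
        let board = Board19::new();
        assert_eq!(board.size(), 19);
        assert_eq!(board.cells.len(), 19);
    }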
Scientific computing doesn't really happen in low-level languages today. FORTRAN, C, and C++ are what power the higher-level languages.

Most of it is done in R, MATLAB, Python, and Julia, and for good reason. Most people doing scientific computing are not professional coders, and working at a higher level is what lets them be productive.