The performance graph is deceptive for two reasons: (1) Leaf with CuDNN v3 is a little slower than Torch with CuDNN v3, yet the bar for Leaf is positioned to the left of the one for Torch, and (2) there's a bar for Leaf with CuDNN v4, but none for Torch.<p>It's good to see alternatives to Torch, Theano, and TensorFlow, but it's important to be honest with the benchmarks so that people can make informed decisions about which framework to use.
I think Microsoft's approach with CNTK is far preferable to this. Rather than defining all the layers in Rust or C++, it uses a DSL to specify mathematical operations as a graph.<p>You can easily add new layer types, and recurrent connections are easy too - you just add a delay node.<p>Furthermore, since the configuration file format is fairly simple, it is possible to make GUI tools to visualise it and - in the future - edit it.
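To make the graph idea concrete, here's a rough sketch in Rust of a network described purely as data - a list of math-op nodes where a recurrent connection is just a Delay node pointing back at a later node. This is only an illustration of the concept; it isn't CNTK's config language or Leaf's API, and all the names are made up.

    // A network described as a graph of math ops rather than hard-coded layer classes.
    // A recurrent connection is nothing special: it's a Delay node that reads the
    // previous time step's value of another node.
    #[derive(Debug)]
    enum Node {
        Input { name: &'static str, dim: usize },
        Times { weight: &'static str, input: usize }, // weight matrix * node `input`
        Plus { a: usize, b: usize },
        Sigmoid { input: usize },
        Delay { input: usize, steps: usize },         // value of `input` from `steps` ago
    }

    fn main() {
        // h(t) = sigmoid(W * x(t) + R * h(t-1)), written purely as a graph.
        let graph = vec![
            Node::Input { name: "x", dim: 128 },   // 0: x(t)
            Node::Times { weight: "W", input: 0 }, // 1: W * x(t)
            Node::Delay { input: 5, steps: 1 },    // 2: h(t-1), i.e. node 5 one step back
            Node::Times { weight: "R", input: 2 }, // 3: R * h(t-1)
            Node::Plus { a: 1, b: 3 },             // 4: sum of the two paths
            Node::Sigmoid { input: 4 },            // 5: h(t)
        ];
        for (i, node) in graph.iter().enumerate() {
            println!("{}: {:?}", i, node);
        }
    }

A visualiser or GUI editor then only has to walk that list, which is roughly the property being claimed for the config-file approach.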
I'm honestly skeptical that Rust is all that appealing for this type of work. It just doesn't seem like (1) concerns such as performance and type safety are the top priority in this space, or (2) this offering is differentiated enough from what you already get from Java today.<p>Honestly, many modeling problems are clunky and inefficient at scale - but that's OK. By the time you need to scale badly enough, Java already has a significant set of libraries to support it.<p>I'm failing to see an answer to the one question I have: "why Rust?"
> super-human image recognition<p>That's a bold claim. As far as I know there was one paper that reported a model beating human scores in a specific test (imagenet, I believe). Whether that translates to "superhuman" results in general is followed by a very big question mark.<p>In general I really struggle to see how any algorithm that learns from examples, especially one that minimises a measure of error against further examples, can ever have better performance than the entities that actually compiled those examples in the first place (in other words, humans).<p>I'm saying: how is it possible to learn superhuman performance in anything from examples of mere human performance at the same task? I don't believe in magic.
I'm completely new to ML and what real-world applications it's suitable for. Are we at the point yet where you can train a computer to look at arbitrary images and count the number of people in them? What if the images were largely of the same background and only the number of people changed -- for example, a camera pointed at a queue of people to determine queue depth at a bus station.
I will take a look at it, but are the benchmarks comparable? To quote the site, "For now we can use C Rust wrappers for performant libraries." Torch is LuaJIT over C, and TensorFlow is Python and C++. Is it Rust making it fast, or the C libraries behind the interface code?
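For reference, a "C Rust wrapper" typically looks something like the sketch below: Rust declares the C symbol and calls straight into it, so the hot loop runs inside the C/CUDA library and Rust mostly contributes marshalling and safety around it. The function name and signature here are made up for illustration - they are not Leaf's or cuDNN's actual bindings, and you would have to link the real C library.

    // Illustrative FFI pattern only; `fake_conv_forward` is a stand-in for whatever
    // symbol the underlying C library actually exposes.
    extern "C" {
        fn fake_conv_forward(input: *const f32, len: usize, output: *mut f32) -> i32;
    }

    pub fn conv_forward(input: &[f32], output: &mut [f32]) -> Result<(), i32> {
        // All the numeric work happens on the other side of this call.
        let status = unsafe {
            fake_conv_forward(input.as_ptr(), input.len(), output.as_mut_ptr())
        };
        if status == 0 { Ok(()) } else { Err(status) }
    }

So to a first approximation the benchmark measures the underlying BLAS/cuDNN calls plus whatever overhead each framework's glue code adds.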
It's interesting to see "technical debt" become a more common term. Is there a rigid definition for it?<p>From the article: <i>"Leaf is lean and tries to introduce minimal technical debt to your stack."</i><p>What exactly does that mean?
This is very cool! When I presented it to my CTO, however, he said he doesn't think this will gain traction with data scientists over Scala or Python, as Rust is even more complex than Scala (which is not the simplest language out there, even though I'm a big fan of both Scala and Rust, and I know this might start a flame war).<p>Do you think data scientists can write their models directly using Leaf? Do you think there will need to be a DSL that translates from the R / Python world to something you can run on Leaf to make it happen?
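One possible answer to the second question is exactly the DSL route: a data scientist writes (or a Python/R tool generates) a small declarative spec, and the Rust side just parses it into layer configs. Here's a toy sketch of that shape - entirely hypothetical, not Leaf's actual API or config format:

    // Sketch of the "DSL in front, Rust underneath" idea: a tiny declarative spec
    // is parsed into layer configs that a Rust backend could then build and run.
    #[derive(Debug)]
    enum Layer {
        Dense { units: usize },
        Relu,
        Softmax,
    }

    fn parse_spec(spec: &str) -> Vec<Layer> {
        spec.lines()
            .filter_map(|line| {
                let mut parts = line.split_whitespace();
                match parts.next()? {
                    "dense" => Some(Layer::Dense { units: parts.next()?.parse().ok()? }),
                    "relu" => Some(Layer::Relu),
                    "softmax" => Some(Layer::Softmax),
                    _ => None,
                }
            })
            .collect()
    }

    fn main() {
        let spec = "dense 128\nrelu\ndense 10\nsoftmax";
        println!("{:?}", parse_spec(spec));
    }

Whether that's worth it over just shipping Python bindings is another question, but it would keep the Rust surface area small for people who don't want to learn the borrow checker.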
The benchmarks would be a lot more useful if the context around them were more obvious. In particular, it would be nice to know if the benchmarks are for a single input, or for a batch of inputs. If for a batch, then the batch size is important too. Maybe this stuff is somewhere on their site, but it shouldn't require digging.<p>Without this information it's hard to make a useful comparison at all.
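To make up numbers as an illustration: if one framework reports 20 ms for a batch of 64 (about 0.31 ms per image) and another reports 5 ms for a single image, the second looks 4x faster on the chart while actually having roughly 16x lower throughput. That's the kind of ambiguity the missing context leaves open.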
I'm glad that Rust has crossed the point where posts to HN that would have been "_ in Rust" are now just "_". I hope this means that Rust is starting to be used for its own merits rather than just for novelty.