I'm a bit confused. How useful is this if:

- Rust cannot compile to the GPU

- Neural network programs are usually not large, and therefore do not need the type safety that Rust offers

- All the cool neural network research is done on Keras/TensorFlow, so developing on that platform gives you access to new algorithms automatically

- Scripting in Python is effectively at least as fast as anything else, because you can use TensorFlow, which runs on the GPU
I'm confused by several of the API choices in the example. Why is the training set part of the network? I would have expected it to be a parameter to the train() function. Same for the activation function: shouldn't that be a property of each layer rather than fixed for the network as a whole? A rough sketch of what I'd expect is below.

I get that this is just in the early stages and more for learning than anything else, but the API doesn't seem very well thought out IMO.
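To be concrete, here's a minimal sketch of the shape I'd expect. None of these names (Network, Layer, Activation, train) are from the actual project; this is just hypothetical Rust illustrating the two points above: data passed to train(), activation owned by each layer.

```rust
// Hypothetical API sketch, not the project's actual code.

#[derive(Clone, Copy)]
enum Activation {
    Sigmoid,
    Relu,
}

struct Layer {
    weights: Vec<Vec<f64>>, // weights[output][input]
    biases: Vec<f64>,
    activation: Activation, // per-layer, not fixed for the whole network
}

struct Network {
    layers: Vec<Layer>,
}

impl Network {
    // The dataset is a parameter here, so the same network can be
    // trained on different data without rebuilding it.
    fn train(&mut self, inputs: &[Vec<f64>], targets: &[Vec<f64>], epochs: usize) {
        for _ in 0..epochs {
            for (_x, _t) in inputs.iter().zip(targets) {
                // forward pass, backprop, and weight updates would go here
            }
        }
    }
}

fn main() {
    let mut net = Network {
        layers: vec![Layer {
            weights: vec![vec![0.0; 2]; 1],
            biases: vec![0.0; 1],
            activation: Activation::Relu,
        }],
    };
    let inputs = vec![vec![0.0, 0.0], vec![1.0, 1.0]];
    let targets = vec![vec![0.0], vec![1.0]];
    net.train(&inputs, &targets, 10);
}
```

Decoupling the data from the network also makes it trivial to train on one set and evaluate on a held-out one, which you'd want almost immediately.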
Keep going! I really like being able to follow projects that start small, as opposed to "here is my 10,000-line toy project".

That being said, you will probably get some flak, mostly because of the insane amount of Rust evangelism people on HN have had to deal with.
There's nothing to see here.

Trivial NN implementations are a dime a dozen, and this one is no different. It's just a partial work in progress; it's not a "Show HN"; it's just a few hundred lines of toy code.

...and that's the same feedback it got on /r/rust last week.

I don't see why it's turned up here now.

(Just as a baseline: at this point, if you can't use your NN implementation to *at least* run a basic classifier on MNIST, it's probably not worth showing people.)