I would love for a non-Python-based deep learning framework to gain traction.<p>My initial impression, though, is that the scope is very broad. Trying to be scikit-learn, numpy, and torch all at once seems like a recipe for doing none of these things very well.<p>It's interesting to contrast this with the visions/aspirations of other new-ish deep learning frameworks. Starting with my favorite, JAX offers "composable function transformations + autodiff" (see the sketch at the end of this comment). Obviously there is still a tonne of work to do this well, support multiple accelerators, etc. But notably I think they made the right call to leave high-level abstractions (like fully-fledged NN libraries or optimisation libraries) out of the JAX core. It does what it says on the box. And it does it really, really well.<p>TinyGrad seems like another interesting case study, in the sense that it is aggressively pushing to reduce complexity and LOC while still providing the relevant abstractions to do ML on multiple accelerators. It is quite young still, and I have my doubts about how much traction it will gain. Still a cool project though, and I like to see people pushing in this direction.<p>PyTorch obviously still has a tonne of mind-share (and I love it), but it is interesting to see the complexity of that project grow beyond what is arguably necessary (e.g. having a "MultiHeadAttention" implementation in PyTorch itself is a mistake, in my opinion).
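To make the "composable function transformations" point concrete, here is a minimal JAX sketch. The toy linear-model loss and all the names in it are my own placeholders, nothing from the project being discussed; the point is just that grad, jit and vmap are ordinary higher-order functions that compose freely:

```python
import jax
import jax.numpy as jnp

# Toy squared-error loss for a linear model (placeholder example).
def loss(w, x, y):
    return jnp.mean((x @ w - y) ** 2)

# Each transformation returns a new function, so they stack in any order:
# gradient -> XLA compilation -> vectorisation over a batch of weight vectors.
grad_loss = jax.grad(loss)                            # d(loss)/d(w)
fast_grad = jax.jit(grad_loss)                        # compiled version
batched_grad = jax.vmap(fast_grad, in_axes=(0, None, None))

w = jnp.ones((8, 3))    # 8 candidate weight vectors
x = jnp.ones((16, 3))   # 16 examples, 3 features
y = jnp.zeros(16)
print(batched_grad(w, x, y).shape)  # (8, 3): one gradient per candidate
```

No NN library, no optimiser, no trainer class; everything above that is left to the ecosystem, which is exactly the scoping decision I'm praising.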
As someone who uses ML on embedded devices, I'm glad to see good alternatives in compiled languages. Nim seems like a very useful and pragmatic language in this regard. Certainly a huge step up from C and C++, which are still very entrenched.
I think solid libraries for deep learning are something we will see in practically all programming languages. In 10 years, a library covering today's core use cases will be as standard as a JSON parser or a web server in almost any ecosystem.
Having grown up with JavaScript, Python, and R, I’m kinda looking at learning a compiled language.<p>I’ve given a bit of thought to Rust, since polars is native to it and I want to move away from pandas.<p>Is Nim a good place to go?
IMO, no language without a Jupyter kernel can ever be a serious contender in the machine learning research space.<p>I was pretty skeptical of Jupyter until recently (because of accessibility concerns), but I just can't imagine my life without it any more. Incidentally, this gave me a much deeper appreciation and understanding of why people loved Lisp so much. An overpowered REPL is a useful tool indeed.<p>Fast compilation times are great and all, but the ability to modify a part of your code while keeping variable values intact is invaluable. This is particularly true if you have large datasets that are somewhat slow to load or models that are somewhat slow to train. When you're experimenting, you don't want to deal with two different scripts, one for training the model and one for loading and experimenting with it, particularly when both of them need to do the same dataset processing operations. Doing all of this in Jupyter is just so much easier (rough sketch of what I mean at the end of this comment).<p>With that said, this might be a great framework for deep learning on the edge. I can imagine this thing, coupled with a nice desktop GUI framework, being used in desktop apps that ship such models. Things like LLM Studio, Stable Diffusion, voice changers utilizing RVC (as virtual sound cards and/or VST plugins), or even internal, proprietary models to be used by company employees. Use cases where the model is already trained, you already know the model architecture, but you want a binary that can be distributed easily.
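Rough sketch of the notebook workflow I mean; every name and number below is a made-up placeholder, not any particular library's API:

```python
# %% Cell 1: slow setup, run once per kernel session
import time
import numpy as np

def load_dataset():
    time.sleep(5)  # stand-in for minutes of loading/preprocessing on a real dataset
    rng = np.random.default_rng(0)
    return rng.normal(size=(10_000, 32)), rng.integers(0, 2, size=10_000)

X, y = load_dataset()
feature_means = X.mean(axis=0)  # expensive state we want to keep alive

# %% Cell 2: fast iteration; edit and re-run this cell without re-running Cell 1
threshold = 0.1  # tweak freely; X, y and feature_means survive in the kernel
print((np.abs(feature_means) > threshold).sum(), "features above threshold")
```

The expensive state from Cell 1 stays live between edits to Cell 2, which is exactly what you lose with separate train/experiment scripts.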
Interesting that it "Supports tensors of up to 6 dimensions". Is it difficult to support an arbitrary number of dimensions, e.g. does Nim lack variadic generics?