If you actually want to understand and implement neural nets from scratch, look into 3Blue1Brown's videos as well as Andrew Ng's course.<p><a href="https://www.3blue1brown.com/topics/neural-networks" rel="nofollow">https://www.3blue1brown.com/topics/neural-networks</a><p><a href="https://www.coursera.org/learn/machine-learning" rel="nofollow">https://www.coursera.org/learn/machine-learning</a>
Interesting find. Just FYI, this repo has been the OG for building NNs from scratch for several years:<p><a href="https://github.com/eriklindernoren/ML-From-Scratch" rel="nofollow">https://github.com/eriklindernoren/ML-From-Scratch</a>
For autograd from scratch, see <a href="https://github.com/karpathy/micrograd" rel="nofollow">https://github.com/karpathy/micrograd</a> and/or<p><a href="https://windowsontheory.org/2020/11/03/yet-another-backpropagation-tutorial/" rel="nofollow">https://windowsontheory.org/2020/11/03/yet-another-backpropa...</a>
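The core idea behind a scalar autograd engine like micrograd fits in a few dozen lines. This is a simplified sketch of the technique, not micrograd's actual API: each operation records its inputs and a closure that applies the local chain rule, and `backward()` replays those closures in reverse topological order.

```python
class Value:
    """Minimal scalar autograd value (sketch in the spirit of micrograd)."""
    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None   # local chain-rule step, set by each op
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            # d(out)/d(self) = 1, d(out)/d(other) = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            # product rule: accumulate, don't overwrite
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

x = Value(3.0)
y = x * x + x      # y = x^2 + x
y.backward()
print(x.grad)      # dy/dx = 2x + 1 = 7.0
```

Everything else in a real engine (tensors, broadcasting, more ops) is layered on top of this same record-and-replay pattern.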
I think NNs are going to be a challenge as complexity grows.<p>I'm trying to make mobs behave autonomously in my 3D action MMO.<p>The memory (depth) I would need for that to succeed, and the processing power to do it in real time, are making my head spin.<p>Let's hope the Raspberry Pi 5 has some hardware to help with this.<p>At this point I'm probably going to have some state-machine AI (think mobs in Minecraft: check range/view, then target, and loop), but instead of being deterministic or purely random I'm going to add some NN randomness to the behaviour, so that it can be interesting without just adding quantity (more mobs).<p>So the inputs would be the map topography and the entities (mobs and players), and the output whether to engage or not; the backpropagation signal would be the success rate, I guess? Or am I thinking about this the wrong way?<p>I wonder what adding a _how_ to the same network after the _what_ would look like: probably a direction as output instead of just an entity id?
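The "engage or not" idea can be sketched with a tiny untrained network; all feature names and layer sizes below are made up for illustration. Sampling from the output probability, rather than thresholding it, is what produces the "NN randomness": identical situations can still play out differently.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-mob features: distance to player, player health ratio,
# number of nearby allies, line-of-sight flag. Names are illustrative only.
def mob_features(distance, player_health, allies_nearby, has_line_of_sight):
    return np.array([distance / 100.0, player_health, allies_nearby / 10.0,
                     float(has_line_of_sight)])

# Tiny MLP: 4 inputs -> 8 hidden -> 1 output (probability of engaging).
W1 = rng.normal(0, 0.5, (8, 4)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (1, 8)); b2 = np.zeros(1)

def engage_probability(x):
    h = np.tanh(W1 @ x + b1)                       # hidden activations
    return (1.0 / (1.0 + np.exp(-(W2 @ h + b2))))[0]  # sigmoid output

def decide(x):
    # Stochastic policy: engage with probability given by the network.
    return rng.random() < engage_probability(x)

x = mob_features(distance=30, player_health=0.4, allies_nearby=2,
                 has_line_of_sight=True)
p = engage_probability(x)
print(f"engage probability: {p:.2f}")
```

Note that training on a success rate would make this a policy-gradient setup (e.g. REINFORCE) rather than plain supervised backpropagation, since there is no per-decision "correct" label to compare against.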
I remember doing this in PHP (4? 5?) for my undergrad capstone project, because I had a looming due date and it was the dev environment I had readily available. There were no helpful libraries in that decade. It's a great way to really grok the material, and it makes me appreciate how spoiled we are today in the ML space.
Interesting read, but there are a few things I haven't understood. In the training [function](<a href="https://colab.research.google.com/drive/1YRp9k_ORH4wZMqXLNkc3Ir5w4B5f-8Pa?usp=sharing" rel="nofollow">https://colab.research.google.com/drive/1YRp9k_ORH4wZMqXLNkc...</a>):<p>1. In the instruction `hidden_layer.data[index] -= learning_rate * hidden_layer.grad.data[index]`, where was the `hidden_layer.grad` value updated?<p>2. From what I understand, we update the hidden_layer according to the slope of the error function (because we want to minimize it). But how are `error.backward()` and `hidden_layer.grad` connected?
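The link between the two can be illustrated with plain PyTorch (assuming the notebook uses PyTorch-like autograd): calling `backward()` on the error is exactly what writes into `.grad` on every leaf tensor that requires gradients, which the update line then reads.

```python
import torch

# A tensor that requires gradients participates in the autograd graph.
hidden_layer = torch.randn(3, requires_grad=True)

target = torch.tensor([1.0, 0.0, 1.0])
error = ((hidden_layer - target) ** 2).sum()

print(hidden_layer.grad)   # None: no backward pass has run yet

# backward() walks the recorded graph and writes d(error)/d(tensor)
# into the .grad attribute of every leaf tensor that requires grad.
error.backward()

print(hidden_layer.grad)   # now populated: 2 * (hidden_layer - target)

# The training-loop update then reads that freshly written .grad:
learning_rate = 0.1
with torch.no_grad():
    hidden_layer -= learning_rate * hidden_layer.grad
    hidden_layer.grad.zero_()   # clear accumulated grads before the next step
```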
For those interested in simple neural networks to CNN and RNNs implemented with just Numpy (including backprop):<p><a href="https://github.com/parasdahal/deepnet" rel="nofollow">https://github.com/parasdahal/deepnet</a>
If you are interested in learning what makes a deep learning library, and want to code one as a learning experience, you should check out Minitorch [0].<p>[0]: <a href="https://github.com/minitorch/" rel="nofollow">https://github.com/minitorch/</a>
How important is it to learn DNNs/NNs from scratch? I have several years of experience working in the tech industry, and I am learning DNNs to apply them in my domain, as well as for hobby side projects.<p>I did the ML Coursera course by Andrew Ng a few years ago. I liked the material, but I felt the course focused a little too much on the details and not enough on actual application.<p>Would learning DNNs from a book like
1. <a href="https://www.manning.com/books/deep-learning-with-pytorch" rel="nofollow">https://www.manning.com/books/deep-learning-with-pytorch</a> or
2. <a href="https://www.manning.com/books/deep-learning-with-python-second-edition?query=deep%20learning%20with%20python" rel="nofollow">https://www.manning.com/books/deep-learning-with-python-seco...</a><p>be a better approach for someone looking to learn concepts and application rather than the detailed mathematics behind them?<p>If yes, which of these two books (or an alternative) would you recommend?
Nice! I made a GPU-accelerated backpropagation library a while ago to learn about NNs. If you're interested, check it out here: <a href="https://github.com/zbendefy/machine.academy" rel="nofollow">https://github.com/zbendefy/machine.academy</a>
I did this for my AI class. You can watch the result here:
<a href="https://www.youtube.com/watch?v=w2x2t03xj2A" rel="nofollow">https://www.youtube.com/watch?v=w2x2t03xj2A</a>