I took a Machine Learning course this past winter, and this article would have been really helpful, since this concept (and gradient descent in general) was the one I struggled with most. While most resources show you the mechanics of neural networks, none I found were very good at explaining (to me) the purpose and meaning behind them. Sure, I could follow along and eventually figure out how to write my own neural network, and I did, but I honestly never completely understood what was going on. The problem with most ML texts/resources for people like me without a strong math background is that a lot of high-level math is presented without any explanation of which mathematical concepts are being used. I admit the onus is on me, the math dummy, to go out and learn the concepts involved, but it's difficult to look at a confusing algorithm chock full of unfamiliar ideas and know where to start. This article explains things nicely, and I hope to see more like it in ML.