Please note that I'm not the author of the presentation. It was made by Quentin de Laroussilhe (<a href="http://underflow.fr" rel="nofollow">http://underflow.fr</a>).<p>I had to make a copy to my Google account to keep the slides.
Worth mentioning that a Stanford Statistical Learning course [1] just started, and according to the lecturers there is a lot of overlap between the two areas.<p>[1] <a href="https://lagunita.stanford.edu/courses/HumanitiesSciences/StatLearning/Winter2016/about" rel="nofollow">https://lagunita.stanford.edu/courses/HumanitiesSciences/Sta...</a>
If you are just starting out with applied machine learning, I would focus heavily on understanding bias and variance, as it will really help you succeed. It's, I think, what largely separates the sklearn kiddies from the pros.
This really is a fantastic presentation for newcomers to the field. When I was taking these classes I found it difficult to keep all of the available algorithms organized in my mind. Here's an outline of his presentation:<p>Overview (5 slides)<p>General concepts (9 slides)<p>K-nearest neighbors (6 slides)<p>Decision trees (6 slides)<p>K-means (4 slides)<p>Gradient descent (2 slides)<p>Linear regression (9 slides)<p>Perceptron (6 slides)<p>Principal component analysis (6 slides)<p>Support vector machine (6 slides)<p>Bias and variance (4 slides)<p>Neural networks (6 slides)<p>Deep learning (15 slides)<p>I especially like the nonlinear SVM example on slides 57 and 58. It provides a visual of projecting data into a higher dimensional space.
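The idea behind that nonlinear SVM visual can be sketched in a few lines of NumPy (this is my own toy construction, not the slides' actual example): two concentric rings are not linearly separable in 2D, but after an explicit lift (x1, x2) → (x1, x2, x1² + x2²) they sit at different heights in 3D, where a flat plane separates them perfectly.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two concentric rings: first 100 points on radius 1 (class 0),
# next 100 points on radius 3 (class 1). Not linearly separable in 2D.
angles = rng.uniform(0, 2 * np.pi, size=200)
radii = np.where(np.arange(200) < 100, 1.0, 3.0)
X = np.column_stack([radii * np.cos(angles), radii * np.sin(angles)])
y = (np.arange(200) >= 100).astype(int)

# Lift each point into 3D by appending its squared distance from
# the origin: the inner ring lands at height 1, the outer at height 9.
Z = np.column_stack([X, (X ** 2).sum(axis=1)])

# In the lifted space, the horizontal plane z = 5 separates the classes.
pred = (Z[:, 2] > 5.0).astype(int)
accuracy = (pred == y).mean()  # 1.0: perfectly separable after the lift
```

A kernel SVM does essentially this, except the lift is implicit: the kernel computes inner products in the higher-dimensional space without ever constructing the lifted coordinates.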
Is nobody concerned about plagiarism here? I'm pretty sure I've seen a number of these slides and graphics elsewhere, but correct attributions seem to be missing.
Yes, thank you. I'm hoping to build an ANN this summer and don't have the luxury of taking an actual class.<p>Does anyone have any other resources?
That was a really good introduction :) Sort of like an executive summary: all the "why we care", plus some of the terms you might want to look up to actually learn the details.