If you're new to ML or data science, I'd recommend building a strong foundation in Bayesian statistics. It will help you understand how all of the "canonical" ML methods relate to one another, and it gives you a base to build on.

In particular, aspire to learn probabilistic graphical models and the libraries used to train them (Pyro, TensorFlow Probability, Edward, Stan). They have a steep learning curve, especially if you're new to the game, but the reward is great.

All of these methods have their place. SVMs do too, but they aren't great for probability calibration, and non-linear SVMs, like every kernel method, can scale absolutely terribly. Neural networks have their place as well: sometimes as a component of a larger statistical model, sometimes as a feature selector, sometimes in and of themselves. They're also very often the wrong choice for a problem.

Don't fall into the beginner trap: people tend to mistake "what is the hottest research topic" for "what is the right solution to my problem given my constraints (data limitations, time limitations, skill limitations, etc.)". Be realistic, don't use magical thinking, and have a strong basis in statistics to separate the beautiful non-bullshit from the bullshit that is frustratingly prevalent (everyone and their mother is an ML expert today).

EDIT: I also want to clarify: I don't mean to suggest the author is new to ML. I just mean this as general advice for anyone coming here who is new to DS/ML. The article looks great!
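
To make the Bayesian-foundation point concrete, here's a toy sketch (mine, not from any of the libraries above) of the kind of update those tools automate at scale: a conjugate Beta-Binomial posterior in plain Python. Function names and the prior choice are illustrative assumptions.

```python
# Toy Bayesian update: coin-flip bias with a Beta prior.
# Prior Beta(a, b); after observing k successes in n trials,
# the posterior is Beta(a + k, b + n - k) by conjugacy.

def beta_binomial_posterior(a, b, k, n):
    """Return posterior (a, b) after k successes in n trials."""
    return a + k, b + (n - k)

def posterior_mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

# Weak uniform prior Beta(1, 1); observe 7 heads in 10 flips.
a, b = beta_binomial_posterior(1, 1, 7, 10)
print(posterior_mean(a, b))  # 8/12 ≈ 0.667
```

Libraries like Pyro or Stan exist because most real models have no closed-form posterior like this one; they do the same kind of inference with MCMC or variational methods instead.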