The basic mechanisms for building a neural network from scratch are almost disappointingly simple (provided you know a little bit of calculus and linear algebra). And setting up a basic network in an existing framework is pretty trivial.<p>I'm currently busy with the neural networks and deep learning specialization on Coursera.<p><a href="https://www.coursera.org/specializations/deep-learning" rel="nofollow">https://www.coursera.org/specializations/deep-learning</a><p>The trick, as far as I can tell, lies in the various techniques for setting up your data, tuning your hyperparameters, and picking the right architecture for the job. At least, this seems to be the message of the course. It still seems to be a bit of an ad-hoc field: there are plenty of techniques to try, without the experts necessarily having more than a shallow theoretical understanding of why they actually work.<p>Then, of course, there are the experts and researchers who come up with entirely new architectures. Now that actually takes skill.
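<p>To make the "from scratch" claim concrete, here's roughly what those basic mechanisms amount to: a forward pass, the chain rule for gradients, and a gradient-descent update. This is just my own minimal NumPy sketch (the layer sizes, learning rate, and iteration count are arbitrary choices), trained on XOR:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: four inputs, four target outputs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two-layer network: 2 inputs -> 8 hidden units -> 1 output
W1 = rng.normal(0, 1, (2, 8))
b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr = 1.0
losses = []
for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)       # hidden activations
    out = sigmoid(h @ W2 + b2)     # predictions
    losses.append(np.mean((out - y) ** 2))

    # Backward pass: chain rule on mean squared error
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

With enough iterations this typically learns XOR, and it's maybe thirty lines. The hard part the course focuses on is everything around this: initialization, learning-rate schedules, regularization, and so on.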