I've been very interested in machine learning/neural nets for the past year, and have been reading papers on arXiv in detail. Having implemented some papers myself, trained the nets, and watched conference talks, I feel as if I am missing a chunk in my understanding.<p>I can understand how new advances are being made, but what I do not understand is how to discover them in the first place, and what the tradeoffs are. Why is going deeper better than a shallower, more connected architecture? Why are ResNets better than RNNs/LSTMs, when they seem like the same thing? It feels to me that we are picking up mud and throwing it against a wall to see what sticks. I have a physics background, so I'm more used to a deliberate-design way of thinking (we used this architecture because of X, Y, and Z).<p>Anyway, I was hoping to see if someone knows where I can find some mentorship where these questions are answered. Maybe someone records a professor's office hours? Is there an online meetup that discusses these issues? I have a particular interest in multiple object tracking and segmentation.<p>I will be applying to Google Brain soon, but I just wanted to have more of a fundamental understanding first. Thank you!