A few years ago, it seemed pretty straightforward to me: learn advanced math/stats/Python, learn to use (or build from scratch) a few algorithms for regression/Bayes/RNN/CNN/Markov models or whatever, and learn to use numpy/pytorch/tensorflow/pandas, etc.<p>Today, with the rise of LLMs and OpenAI, it seems a bit different. If your goal isn't to be a researcher working on foundation models, does it still make sense to go through all that trouble, given that there are multiple general-purpose models that can solve so many problems with no additional training?<p>I'm aware that the phrase "learn X" may be a bit vague. I mean learn in the same sense that someone would learn web or app development (i.e., know how to build it, fix problems with it, put it in production, maintain it, and improve it when necessary).