I am 38, a Linux admin and SRE. I have a nice job that lets me keep learning new things, and over the past few months I have been using AI at work to help me code (write and parse) and to learn complicated topics by having AI explain them in a very simplified way.<p>I understand that using AI to code is one application, and I am doing that; I also use AI in my Linux administration tasks. But just as people in machine learning learn math and Python, what should I learn to stay ahead of the curve with AI?<p>I know I should learn to write good prompts, which will be a never-ending task, but I am okay with that because I myself like simplifying things. Should I learn a lot of deep math to go deep into LLMs? I am totally lost here. Should I read some particular AI-related code base, for example the LangChain libraries? I don't know whether I would even be able to.<p>Please help me out here.
Anyone who’s tried to self-host a model will tell you that the DevOps experience of LLMs is a shit show right now. There is going to be plenty of opportunity to ply your trade around LLMs/AI.
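To make that concrete: a lot of the "DevOps around LLMs" work is exactly the kind of thing a Linux admin already knows, e.g. wrapping an inference server in a proper service unit. A minimal sketch, assuming a llama.cpp build installed as /usr/local/bin/llama-server and a quantized model at /srv/models/model.gguf (both paths, the `llm` user, and the flags are illustrative, not a recommendation):

```
[Unit]
Description=Local LLM inference server (llama.cpp, illustrative)
After=network-online.target
Wants=network-online.target

[Service]
# Paths, model file, and port are placeholders; adjust to your setup.
ExecStart=/usr/local/bin/llama-server --model /srv/models/model.gguf --port 8080
User=llm
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

None of this is exotic, which is the point: the ops side (service management, GPU drivers, monitoring, capacity) is where your existing skills transfer directly.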
Below every deployed AI system is a Linux kernel that supports CUDA. So maybe try diving into driver development and ML integration, rather than starting with basic ML and working your way up to AGI.