> suspect it is easier to see opportunities when you have some experience working with the tools<p>Yes, absolutely. The most effective way I know to develop that sort of intuition (not just in AI/ML, but in most subjects) is to try _and fail_ many times. You need to learn the boundaries of what works, what doesn't, and why. Pick a framework (or, when learning, you'd ideally start with one and develop the rest of your intuition by building those parts yourself), pick a project, and try to make it work. Focus on getting the ML bits solid rather than completing products if you want to get that experience faster (unless you also have no "product" experience and might benefit from seeing a few things through end-to-end).<p>> stay relevant in the long run<p>Outside of the mild uncertainty around AI replacing/changing the act of programming itself (and, for that, I haven't seen many great options other than learning how to leverage those tools yourself; keep in mind that most tasks will be slower at first, so you'll have a learning curve before you're as productive as before again, and you can't replace everything with current-gen AI; and we might be screwed anyway), I wouldn't worry about that in the slightest unless you explicitly want to go into AI/ML for some reason. Even in AI-heavy companies, only something like 10% of developers even tangentially touch AI stuff (outside of smallish startups where small employee counts admit more variance). Those other 90% of jobs are the same as ever.<p>> keep up my learning in these areas<p>In addition to the general concept of trying things and failing, which is extremely important (also a good way to learn math, programming, and linguistics), I'd advise against actively pursuing the latest trends until you have a good enough mentor or good enough intuition to have a feel for which ones are important. There are too many things happening, there's a lot of money on the line, and there are a lot of people selling rusty pickaxes for this gold rush (many intentionally, many because they don't know any better). It'll take way too much time, and you won't have a good enough signal-to-noise ratio for it to be worth it.<p>As one concrete recommendation, start following Yannic Kilcher on YouTube. He covers most of the more important latest models, papers, and ideas. Most of his opinions in the space are decent. I don't think he produces more than an hour per day or so of content (and at a relatively slow speaking rate, the thing the normal YT audience wants, so you can get away with 2x playback speed if you want to go a bit faster). Or find any good list of "foundational" papers to internalize (something like 5-20). Posting those is fairly common on HN; find somebody who looks like they've been studying the space for a while. Avoid advice from big-name AI celebrities. Find a mentor. The details don't matter too much, but as much as possible you'd like to find somebody moderately trustworthy and borrow their expert knowledge to separate the wheat from the chaff, and you'll get better results if their incentive structure is to produce good information rather than a lot of information.<p>Once you have some sort of background in what's possible, how it works, performance characteristics, ..., it's pretty easy to look at a new idea, new service, new business, ..., and tell if it's definitely viable, maybe viable, or full of crap. 
Your choice of libraries, frameworks, network topologies, ..., then becomes fairly easy.<p>>> other people saying to build something simple with LLMs and brag about it<p>Maybe. Playing with a thing is a great way to build intuition. That's not too dissimilar from what I recommended above. When it comes to what you're telling the world about yourself though, you want to make sure to build the right impression. If you have some evidence that you can lightly productize LLMs, that's in-demand right this second. If you publish the code to do so, that also serves as an artifact proving that you can code with some degree of competency. If you heavily advertise LLMs on your resume, though, that's also a signal that you don't have "real" ML experience. It'll, ideally, be weighed against the other signals, but you're painting a picture of yourself, and you want that picture to show the things you want shown.<p>> can't see any use case for AI/ML<p>As a rule of thumb (not universal, but a reasonable default until you've built up more intuition), AI/ML is a great solution when:<p>(1) You're doing a lot of _something_ with complicated rules<p>(2) You have a lot of data pertaining to that _something_<p>(3) There exists some reason why you're tolerant of errors<p>I won't expand that into all the possible things that might mean, but I'll highlight a couple to hopefully help start building a bit of intuition right away:<p>(a) Modern ML stuff is often written in dynamic languages and uses big models. That gives people weird impressions of what it's capable of. At $WORK we do millions of inferences per second. At home, I used ML inside a mouse driver to solve something libinput struggled with and locked up while handling. If you have a lot of data (mouse drivers generate bajillions of events), and there's some reasonable failure strategy (the mouse driver problem is just filtering out phantom events; if you reject a few real events per millisecond then your mouse just moves 0.1% slower or something, which you can adjust in your settings if you care), you can absolutely replace hysteresis and all that nonsense with a basic ML model that represents your system well (there's a rough sketch of the idea at the end of this comment). I've done tons of things beyond that, and the space of opportunities dwarfs anything I've written here. Low-latency ML is impactful.<p>(b) Even complicated, error-prone computer-vision tasks can have some mechanism by which they're tolerant of errors. Suppose you're trying to trap an entire family of wild hogs at once (otherwise they'll tend to go into hiding, produce a litter of problems, and never enter your trap again since they lost half their family in the process). You'd like a cheap way to monitor the trap over a period of time and determine which hogs are part of the family. Suppose you don't close the trap when you should have. What happens? You try again another day; no harm, no foul. Suppose you did close it when you shouldn't have? You're no worse off than without the automation, and if it's even 50-80% accurate (in practice you can do much, much better) then it saves you countless man-hours getting rid of the hogs, even if it takes a couple of tries.<p>(c) Look at something like plant identification apps. They're usually right, they crowd-source photos to go alongside predictions, they highlight poisonous lookalikes, they give a ranked list of candidates with a confidence for each, and the result is something easy for a person to go investigate via more reliable sources (genus, species, descriptions of physical characteristics, ...). 
I'm sure there exists _somebody_ who will ignore all the warnings, never look anything up, and eat poison hemlock thinking it's a particularly un-tasty carrot, but that person probably would have been screwed just as badly by a plant identification book or a particularly helpful friend showing them what wild carrots look like. For everyone else in the world, IMO the app is much easier to use and more reliable, given that it has mechanisms in place to handle the inevitable failures.
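<p>To make (a) a bit more concrete, here's the rough sketch I mentioned, in Python. Everything in it is hypothetical and made up for illustration: the feature choices, the make_features/train/is_real names, the synthetic training data, the logistic regression standing in for "a basic ML model", and the 0.2 threshold. It's not the actual driver code, just the shape of the approach: collect labeled event traces, fit a tiny model, and only drop events it's confident are phantom.

    # Hypothetical sketch of the phantom-event filter from (a). Feature choices,
    # thresholds, and the synthetic data are illustrative, not real driver code.
    import numpy as np

    def make_features(dt_ms, dx, dy):
        # dt_ms: time since the previous event; dx/dy: reported deltas. In this
        # toy setup, phantom events arrive "too fast" with implausibly big jumps.
        return np.array([dt_ms, abs(dx), abs(dy), abs(dx) + abs(dy)], dtype=float)

    def train(X, y, lr=0.1, epochs=2000):
        # Plain logistic regression by gradient descent. Inference is a single
        # dot product, cheap even at driver event rates.
        mu, sigma = X.mean(axis=0), X.std(axis=0) + 1e-9
        Xs = np.hstack([(X - mu) / sigma, np.ones((len(X), 1))])  # standardize, add bias
        w = np.zeros(Xs.shape[1])
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-Xs @ w))
            w -= lr * Xs.T @ (p - y) / len(y)
        return w, mu, sigma

    def is_real(model, features, threshold=0.2):
        # Err toward keeping events: only drop ones the model is confident about.
        w, mu, sigma = model
        x = np.append((features - mu) / sigma, 1.0)
        return 1.0 / (1.0 + np.exp(-x @ w)) >= threshold

    # Toy labeled data: 1 = real event, 0 = phantom. Real data would come from
    # logged traces of the misbehaving device.
    rng = np.random.default_rng(0)
    real = np.stack([make_features(rng.uniform(4, 12), rng.normal(0, 3), rng.normal(0, 3))
                     for _ in range(500)])
    phantom = np.stack([make_features(rng.uniform(0, 1), rng.normal(0, 40), rng.normal(0, 40))
                        for _ in range(500)])
    X, y = np.vstack([real, phantom]), np.concatenate([np.ones(500), np.zeros(500)])
    model = train(X, y)

    print(is_real(model, make_features(8.0, 2, -1)))    # plausible event -> True (keep)
    print(is_real(model, make_features(0.2, 55, -60)))  # suspicious jump -> False (drop)

<p>The particular model doesn't matter. The point is that once the problem fits the rule of thumb above (lots of events, lots of logged data, and a graceful failure mode where the worst case is a marginally slower cursor), a tiny model with dot-product inference is a perfectly reasonable replacement for hand-tuned filtering rules, and it's cheap enough to sit in an event-handling hot path.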