Obviously, things are moving very quickly. How are you keeping up?<p>I'm building a list of folks to follow on X (formerly Twitter, sigh...) and trawling the comments here on HN.
By and large, just ignore them? Life's a lot easier when you don't fancy yourself a 10x programmer, accept mediocrity, and just absorb best practices by osmosis a few years after they're developed by the bigshots.<p>This reminds me a lot of early Javascript: a thousand small companies each promising a 10% improvement to your workflow. Wait a couple years and it'll be three companies each offering 150% boosts instead, with clear instructions.<p>It's an exciting time if you like to be on the bleeding edge, but then you should be working with one of those companies. Otherwise, just wait for the dust to settle...
The same way I've successfully kept up on tech progress for 3+ decades now:<p>By sitting back, doing what I know, and letting everyone else burn their energy on "keeping up". Then, when the hype has died down and I see what really has worked and stuck with people, I go learn it.
I ignore them. If a technique is useful enough, it will become mainstream and won't need to be advertised by a small community of apostles on X or Mastodon.<p>I work as a junior programmer/sysadmin. Most of my work has centered on understanding the abstract, logical nature of the problems I solve. Thus I've had little trouble working out a routing policy in IOS, JunOS, OpenBSD or plain old Linux, for example. Or writing a small WSGI application to interface with an old water meter. Or fixing a broken Ubuntu 12 webserver that no one has accessed in years.<p>Judging by the comments on another HN thread today, GPTs still have serious trouble with abstraction, which I experienced firsthand when I asked one to fix an issue with the data structures I've been using in a project. So I don't think I'll be running out of things to do anytime soon.
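For a sense of scale, that kind of WSGI glue can be tiny. Here is a minimal sketch using only the standard library; the /reading endpoint and the fake_meter_reading() stub are invented for illustration, not the actual setup:

    # Hypothetical sketch: a tiny WSGI app fronting a legacy device.
    import json
    from wsgiref.simple_server import make_server

    def fake_meter_reading():
        # Stand-in for whatever serial/Modbus call the real meter would need.
        return {"liters": 12345, "unit": "L"}

    def app(environ, start_response):
        if environ.get("PATH_INFO") == "/reading":
            body = json.dumps(fake_meter_reading()).encode("utf-8")
            start_response("200 OK", [("Content-Type", "application/json"),
                                      ("Content-Length", str(len(body)))])
            return [body]
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"not found"]

    if __name__ == "__main__":
        with make_server("", 8000, app) as httpd:
            httpd.serve_forever()

Run it and hit http://localhost:8000/reading to get a JSON reading back; swapping the stub for the real device call is the only part that varies.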
Here are a few lesser-known resources I’ve found:<p>- aider - AI pair programming in your terminal <a href="https://aider.chat/" rel="nofollow noreferrer">https://aider.chat/</a><p>- Tips on coding with GPT (from the author of the above): <a href="https://news.ycombinator.com/item?id=36211879">https://news.ycombinator.com/item?id=36211879</a><p>- Cursor - The AI-first Code Editor <a href="https://cursor.sh/" rel="nofollow noreferrer">https://cursor.sh/</a>
LlamaIndex, HuggingFace, LangChain, and Ludwig function as partial aggregators of what is working and what works better than the alternatives. Most people in this space are juggling multiple models for virtually every step; you cannot and should not try to track them all yourself. You need aggregation and filtering against some objective and a clear purpose, or you're a boat on a turbulent sea.<p>One simple evaluation I use is how easy it is to spin up a working demo, in isolation or alongside other tools; that shows how far the idea has matured from concept to application. If there is no benchmark on the outputs before you start, it's more science/fantasizing than business. Maybe that's your goal, but don't delude yourself about what you're doing. I also look up project leads on LinkedIn and the like, since these things require push, and without a good promoter they usually don't pan out (see crypto, for example).<p>Buried lead: I've built a few tools that let me render the tree and graph structures of new projects as charts/schemas, to get quick visuals both for myself and for the LLMs I use to code. I've also built tooling to test rapidly: when everything is in flux, being able to measure quickly is valuable in and of itself.
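A minimal sketch of what such a tree-dump helper might look like (this is not the poster's actual tooling, just an illustration of the idea; the directories to skip are assumptions):

    # Illustrative only: dump a project's directory tree as indented text
    # so it can be pasted into an LLM prompt as lightweight context.
    import os
    import sys

    def print_tree(root, skip=(".git", "node_modules", "__pycache__")):
        for dirpath, dirnames, filenames in os.walk(root):
            # Prune noisy directories in place so os.walk skips them.
            dirnames[:] = sorted(d for d in dirnames if d not in skip)
            rel = os.path.relpath(dirpath, root)
            depth = 0 if rel == "." else rel.count(os.sep) + 1
            indent = "  " * depth
            print(f"{indent}{os.path.basename(os.path.abspath(dirpath))}/")
            for name in sorted(filenames):
                print(f"{indent}  {name}")

    if __name__ == "__main__":
        print_tree(sys.argv[1] if len(sys.argv) > 1 else ".")

Usage would be something like "python tree_dump.py path/to/project", with the output pasted ahead of the actual question in the prompt.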
Keep up? I just ask ChatGPT stuff and use Copilot autocomplete. Just doing that has been a huge help, and I'm not sure how much more I need at this time!
I mostly just ignore them. In the space I currently work in, both ChatGPT and Bard hallucinate like crazy: I get recommendations for things that don't exist, and when I point this out they just keep hallucinating. Python packages that don't exist at all, solutions that are impossible, step-by-step guides that are straight-up lies.<p>On top of that, I can't use any AI tools in my editor, as that would violate work policy, so no auto-completion for work stuff.<p>I use GH Copilot at home sometimes for fun, but mostly for generating mock data or quick examples I can give to students in another role I have.
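As an aside, mock data is a good fit for these tools because the output is easy to eyeball. A hand-rolled equivalent is only a few lines of standard-library Python anyway; the field names and ranges here are made up for illustration:

    # Illustrative only: the kind of throwaway mock-data generator
    # one might otherwise ask Copilot for. Fields and ranges are invented.
    import csv
    import random
    import sys

    FIRST = ["Alice", "Bob", "Chen", "Dana", "Emil"]
    LAST = ["Nguyen", "Smith", "Okafor", "Berg", "Kaur"]

    def fake_students(n):
        # Yield n rows of plausible-looking student records.
        for i in range(n):
            yield {
                "id": i + 1,
                "name": f"{random.choice(FIRST)} {random.choice(LAST)}",
                "grade": random.randint(1, 5),
            }

    if __name__ == "__main__":
        writer = csv.DictWriter(sys.stdout, fieldnames=["id", "name", "grade"])
        writer.writeheader()
        writer.writerows(fake_students(10))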
Simply reading Hacker News makes you better informed than about 80% of other engineers. Follow a few professors, entrepreneurs, and communities on Twitter/Reddit/Quora and read their popular posts to get to about 95%. Hunt for specific information to get to around 98%. Build something new and useful to get to 99%.
I read about the standout parts after the fact. I find any other approach (for any newish topic - not just AI) is unsustainable.<p>E.g. I see a lot of similar comments with multiple replies specifically about Copilot - but that's exactly what I'm talking about. ChatGPT, DALL-E, Copilot - these are the standout things that everybody not keeping up with AI knows about.
By the time things are useful, you start noticing a proportion of your colleagues using them (or the tools get imposed on you by your company). If that's not the case, the tools are probably not mature enough, or just passing fads.