I think there is a 3rd mode: "I know exactly what I want, I just need this thing to autocomplete 90%+ of it in one shot so I don't have to type it all". Like when you're building out a class and you know it needs to have certain variations of constructor, certain types, certain public and private methods, certain ways to iterate and deep copy, certain ways to pretty print and build up the plumbing to tie those to the default printing methods of the language, and so on.<p>It's not really the "I don't care" mode, as you care very much that it matches specifically what you want to build out rather than being "something which seems to work as a class if I paste it in". It's also not really "I want to learn something here", as you already know exactly what you want and you're not looking for it to deviate; you're just looking to have it appear a couple of times faster than if you typed it out. This is, more or less, "I want faster autocomplete for this task" usage.
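As a rough sketch of the kind of boilerplate that mode covers (hypothetical Python; the class name, fields, and methods are invented purely for illustration):

    import copy

    class PointCloud:
        # Hypothetical class: every name here is illustrative, not from the comment above.

        def __init__(self, points=None, label=""):
            self._points = list(points) if points is not None else []  # private state
            self.label = label                                         # public field

        @classmethod
        def from_pairs(cls, pairs, label=""):
            # One of the "certain variations of constructor".
            return cls([(float(x), float(y)) for x, y in pairs], label)

        def __iter__(self):
            # The "certain ways to iterate".
            return iter(self._points)

        def __deepcopy__(self, memo):
            # The "certain ways to deep copy": copies must not share the point list.
            new = type(self)(copy.deepcopy(self._points, memo), self.label)
            memo[id(self)] = new
            return new

        def __repr__(self):
            # Pretty printing wired into the language's default printing methods.
            return f"{type(self).__name__}(label={self.label!r}, n={len(self._points)})"

None of this requires any new decisions once you know what you want; it's exactly the kind of thing you'd rather have appear in one shot than type out by hand.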
Last time I asked an AI something, it started its response with "Yes, it's certainly possible to x with y." and closed its response with "in conclusion, unfortunately, it's not possible to x with y". In the same session, it told me one must press shift to get the number 1. I'm simultaneously amazed at its ability to generate what it can and disappointed at how it falls short so routinely. It'll get there eventually I'm sure, but I'm pretty dubious when people say they get a lot of value out of it.
To me, the problem has always seemed to be that people who use ChatGPT and the like default to the "I don't care" mode, and copy-paste blindly.<p>Personally I think this is the root cause of most sloppy AI code. If you just look at the generated code and you don't think "I would've come up with that", then the code is probably wrong.
I’ve been using O1 for some question/answering stuff around CMake and symbol resolution — stuff I know little about, yet stuff the internet knows a ton about.<p>O1 has been really useful, but just the practice of putting my convoluted question into words has often helped me figure out the answer without even clicking submit.
<i>Give a fuck</i> about what you’re doing. You get paid a lot of money to write quality software. No engineer’s default mode should ever be “I don’t care, I just want the end result”. We’re talking about pressing some keys on a keyboard. Do you want other engineering professions to take similar attitudes? Do you want to trust your life to some machine or structure designed by someone who just threw some prompts into an LLM and skimmed the results briefly before submitting to production?<p>Don’t rot your brain on this AI autocomplete stuff. Learn how to apply AI to do things that were previously impossible or unfeasible, <i>not</i> as a way to just save time or do things cheaper, as so many are tempted to.
Zero memory of chat conversations resonates with me.<p>At a practical level, this is a good reason to run your own AI plugin, even if it’s just a wrapper around some API.<p>You can log your requests and the responses, and then use a similarity score to periodically see what sorts of things you’re asking.<p>I may even update mine to hassle me and be like “you’re asking this a lot, maybe you should remember it…”<p>(If you can be bothered, rather than just reaching for Copilot.)
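A minimal sketch of that log-and-score idea, assuming a Python wrapper: the similarity score here is stdlib difflib (swap in embeddings for better recall), the log path is made up, and the actual model call is a stub you’d supply:

    import json
    import time
    from difflib import SequenceMatcher
    from pathlib import Path

    LOG = Path("ai_requests.jsonl")  # hypothetical log location

    def log_request(prompt: str, response: str) -> None:
        # Append every exchange so it can be mined later.
        with LOG.open("a") as f:
            f.write(json.dumps({"t": time.time(), "prompt": prompt,
                                "response": response}) + "\n")

    def similar_past_prompts(prompt: str, threshold: float = 0.8) -> list[str]:
        # Cheap string similarity over the log; good enough to spot repeats.
        if not LOG.exists():
            return []
        past = (json.loads(line)["prompt"] for line in LOG.read_text().splitlines())
        return [p for p in past if SequenceMatcher(None, prompt, p).ratio() >= threshold]

    def ask_with_nag(prompt: str, ask) -> str:
        # `ask` is whatever function actually calls your model's API.
        repeats = similar_past_prompts(prompt)
        if len(repeats) >= 3:
            print(f"You've asked something like this {len(repeats)} times; maybe remember it...")
        response = ask(prompt)
        log_request(prompt, response)
        return response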
For a reflection on the period /without/ AI, this text spends a lot of time on the time /with/ AI.<p>I understand that your goal was to review the "default" you got into, but I'd love to hear a lot more about the struggles (and counters to them) you experienced during the NAD itself.