I think of AI like a junior engineer.

If I understand the problem well enough, and have a really good description of what I want, as if I'm explaining it to a junior engineer, then it does an OK job.

At my last job, we had a coding "challenge" as part of the interview process, with a really good readme that described the problem, the task, and the goal, which we gave the candidate at the start of the session. I copy/pasted that readme into Copilot, and it did as good a job as any candidate we'd ever interviewed, and it only took a few minutes.

But whenever there are unknowns or ambiguities in the task, or I'm exploring a new concept, I find the AIs to be more of a hindrance. They can sometimes get something working, but not very well, or the code they generate is misleading and takes me down a blind path.

The thing for me, though, is that writing up a task clearly enough for a junior engineer to understand is harder than just doing the task myself. That extra work isn't the point of the exercise, though: my goal is to onboard the engineer and teach them how to do it, so they can accomplish similar tasks with less hand-holding in the future and eventually become a productive member of the team. That temporary increase in my workload is worth it down the line.

With AI, though, it's not going to learn to do better; I'm not teaching it anything. Every time I want to leverage it, I have to go through the harder work of clearly defining the problem and the goal all over again.