The space is moving so fast that if I had written down my workflows and workarounds just two months ago, much of them would be stale today. I think all of these recommendations need to list the models and harnesses being described front and center.
I use the LLM as a glorified search engine. Instead of googling, I ask it things.
It's fine for that, but it's hit or miss. Often the output is garbage and it's better to just use Google.<p>I don't use it much to generate code; I more often ask it higher-level questions, like when I need a math formula.
Is it really more efficient to have an LLM generate code, then review that code, fix errors, and spend time fully understanding it? I wish there were tangible stats and metrics around this. Is it really more efficient than just writing the code yourself, while using LLMs to look things up or demo solutions?
I've been experimenting with LLMs for coding for the past year - some wins, plenty of frustrations. Instead of writing another "AI will change everything" post, I collected practical insights from other senior engineers who've figured out what actually works. No hype, just real experiences from people in the trenches.
I think none of these offer much useful insight beyond the overarching idea that peer programming beats pure vibe coding.<p>The best structure I've found that leverages this idea is called BMAD; it treats the LLM as though it were a whole development team, orchestrated in a way that you have full control over.<p><a href="https://youtu.be/E_QJ8j74U_0" rel="nofollow">https://youtu.be/E_QJ8j74U_0</a>
<a href="https://github.com/bmadcode/BMAD-METHOD">https://github.com/bmadcode/BMAD-METHOD</a>
I would have said that Harper Reed's workflow (brainstorm a spec, then co-develop a plan, then execute using LLM codegen) is basically best practice today, and I'm surprised that the author adds that "I’ve not been successful using this technique to build a complete feature or prototype."<p>Here's an example of using this pattern with Brokk to solve a real-world bug: <a href="https://www.youtube.com/watch?v=t_7MqowT638" rel="nofollow">https://www.youtube.com/watch?v=t_7MqowT638</a>
The article references <a href="https://www.seangoedecke.com/practical-ai-techniques" rel="nofollow">https://www.seangoedecke.com/practical-ai-techniques</a>, which states:<p>> AI is much better than strong engineers at writing very short programs: in particular, it can produce ten to thirty lines of straightforward mostly-working code faster than any engineer.<p>> How can you leverage this? There’s not much demand for this kind of program in the day-to-day of a normal software engineer. Usually code either has to be a modification to a large program, or occasionally a short production-data script (such as a data backfill) where accuracy matters a lot more than speed.<p>While this may be technically correct — there’s little demand for standalone small programs — it overlooks a crucial reality: the demand for small code segments <i>within</i> larger workflows is enormous.<p>Software development (in my experience) is built around composing small units — helpers, glue code, input validation, test cases, config wrappers, etc. These aren’t standalone programs, but they’re written constantly. And they’re exactly the kind of 10–30 line tasks where LLMs are most effective.<p>Engineers who break down large tasks into AI-assisted microtasks can move faster. It’s not about replacing developers — it’s about amplifying them.
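As a concrete illustration of the kind of 10–30 line microtask this comment describes, here is the sort of small input-validation helper an LLM can typically produce in one shot. The field names and validation rules below are my own invented example, not taken from the article:

```python
import re

# A hypothetical ~20-line "glue" helper: validate a user-supplied signup
# record before handing it to a larger workflow. Field names and rules are
# illustrative placeholders.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_signup(record: dict) -> list[str]:
    """Return a list of human-readable problems; an empty list means valid."""
    problems = []
    email = record.get("email", "")
    if not EMAIL_RE.match(email):
        problems.append(f"invalid email: {email!r}")
    age = record.get("age")
    if not isinstance(age, int) or not (13 <= age <= 120):
        problems.append(f"age must be an integer between 13 and 120, got {age!r}")
    if len(record.get("username", "")) < 3:
        problems.append("username must be at least 3 characters")
    return problems
```

Nothing here is hard, but it is exactly the category of tedious, self-contained code that composes into larger features, which is where the 10–30 line sweet spot pays off.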
> Peer Programming with LLMs, For Senior+ Engineers<p>> [...] a collection of blog posts written by other senior or staff+ engineers exploring the use of LLM in their work<p>It seems to be <i>by</i> senior engineers if anything; I don't see anything in the linked articles indicating they're <i>for</i> senior engineers. It seems programmers of any seniority could find them useful, if they find LLMs useful.
Though I haven't tried it, I would probably enjoy peer programming with an LLM more than I do with a real person (which I have tried and hated).<p>I could assign the LLM the simple drudgery that I don't really want to do, such as writing tests, without feeling bad about it.<p>I could tell the LLM "that's the stupidest fucking thing I've ever seen" whereas I would not say that to a real person.
My main feeling is that it's great as long as I constrain it to working in a conceptual boundary that I can reason about, like a single system component where I'm telling it the API. That way I understand each piece as it gets built up. If you let it go too wide, it starts to make mistakes and I lose my mental model.
I write a lot of “defensive” C# code in my day job, expecting that someone very inexperienced or offshore will be working with it in the future (and that I will be reviewing it four months later, when no longer on the project). I call it “corporate coding”. Lots of interfaces that must be adhered to, IoC, injection, and annoyingly strong patterns. Anything that makes going off the rails a lot of work (the path of most resistance) glaring in code reviews. But key logic stays concentrated in a few taller files, so there's none of the drilling through abstractions, making it easy for a newbie to comprehend. I want to take some time combining a defensive coding approach with LLMs, particularly scoping the model to a certain project or folder in a layered architecture. Why let it know about the front end, back end, and database all at once? Of course it'll get discombobulated.<p>I've also been experimenting with giving an LLM coins and a budget: “You have 10 coins to spend doing x; you earn coins if you m,n,o and lose coins if you j,k,l.” This has reduced slop and increased succinctness. It will come back and recount what it's done, explaining the economy and its spending. I've had it ask, “All done boss, I have 2 left, how can I earn some more coins?” It's fun to spy on the thinking model working through the choices: “If I do this it'll cost me this, so maybe I should do this instead in 1 line of code and I'll earn 3 coins!”
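A minimal sketch of how the coin-budget idea could be packaged as a reusable prompt template. The specific earn/lose rules below are my own invention; the commenter left their actual x / m,n,o / j,k,l criteria unspecified:

```python
def coin_budget_prompt(task: str, budget: int = 10) -> str:
    """Build a system-prompt preamble implementing the coin-economy constraint.

    The spending/earning rules here are illustrative placeholders, not the
    original commenter's actual criteria.
    """
    return (
        f"You have {budget} coins to spend completing this task: {task}\n"
        "Spending: each file you touch costs 1 coin; each new abstraction "
        "(class, interface, helper module) costs 2 coins.\n"
        "Earning: you earn 1 coin for deleting dead code, and 3 coins for "
        "solving a sub-problem in a single line.\n"
        "When you finish, report your remaining balance and itemize what "
        "you spent and earned."
    )
```

The point of the pattern is that making cost explicit nudges the model toward small diffs and away from speculative scaffolding; the template just makes that incentive repeatable across tasks.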
Thanks for sharing pmabanugo, a couple of those posts are new to me too. If you’re taking submissions, I’ve been exploring how to make the most of these tools for the past few months, here’s my latest post:<p><a href="https://blog.scottlogic.com/2025/05/08/new-tools-new-flow-the-cognitive-shift-of-ai-powered-coding.html" rel="nofollow">https://blog.scottlogic.com/2025/05/08/new-tools-new-flow-th...</a>
I want to note that the headlines gave me an idea for a nonprofit: "Peer Programming with LLM's for <i>Seniors</i>."<p>Somebody jump on that. It's yours. :)
(Site is unreadable for me on Firefox 138, but the text is still there if you select all. Qutebrowser based on Chromium 130 doesn't render it either.)