> Today’s AI Still Has a PB&J Problem

If this is how you're modelling the problem, then I don't think you learned the right lesson from the PB&J "parable."

Here's a timeless bit of wisdom, several decades old at this point:

Managers think that if you can just replace code with *something else* that isn't *text with formal syntax*, then all of a sudden "regular people" (like them, maybe?) will be able to "program" a system. But it never works. And the reason it never works is *fundamental to how humans relate to computers*.

Hucksters continually reinvent the concept of "business rules engines" to sell to naive CTOs. As a manager, you might think it's a great idea to encode logic/constraints into some kind of database — maybe one you even "program" visually, like UML or something! — and to then have some tool run through and interpret those rules. You can update business rules "live and on the fly," without calling a programmer!

They think it's a great idea... until the first time they try to actually use such a system in anger to encode a real business process. Then they hit the PB&J problem. And, in the end, they have to get *programmers* to interface with the business rules engine for them.

What's going on there? What's missing in the interaction between a manager and a business rules engine that gets fixed by inserting a programmer?

There are actually two things:

1. *Mechanical sympathy.* The programmer knows the *solution domain*, and so can act as an advocate for it (in the same way a compiler does, but far more human-friendly, and with a longer-sighted, predictive, 10,000-ft architectural view). The programmer knows enough *about the machine* and *about how programs should be built* to know *what just won't work* — and so will *push back* on a half-assed design, rather than carrying the manager along in a *shared delusion* that what they're trying to do is going to work out.

2. *Iterative formalization.* The programmer knows *what information is needed* by a versatile union/superset of possible solution architectures in the solution space — not only to design a particular solution, but also to "work backward," comparing and contrasting which solution architectures might be a better fit given the design's parameters. And when the manager hasn't *provided* that information, the programmer knows to *ask questions*.

Asking the right questions to get the information needed to choose the right architecture and design a solution — that's called *requirements analysis*.

And no matter what fancy automatic "do what I mean" system you put between a manager and a machine — no matter how "smart" it might be — if it isn't playing the role of a programmer, both in *guiding the manager through the requirements-analysis process* and in *pushing back out of mechanical sympathy*... then you get PB&J.

That said, I don't think LLMs are fundamentally *incapable* of "doing what programmers do." The current generation just seems to be

1. highly sycophantic, and constitutionally afraid of speaking as an authority, pushing back, or telling the user they're wrong; and

2. trained to always try to solve the problem as stated, rather than asking questions "until satisfied."
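
To make the "business rules engine" pitch concrete, here's a minimal sketch of the kind of thing being sold (the names and schema are hypothetical, not any particular product): rules live as data, and a generic evaluator walks over them. It looks empowering right up until the real process needs a condition or interaction the rule schema never anticipated — exactly the questions a programmer would have asked up front.

```python
# Minimal sketch of a "business rules engine": rules are data, not code.
# (Hypothetical illustration, not any specific product or vendor's API.)
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]   # predicate over an "order" record
    action: Callable[[dict], None]      # side effect applied when it matches

def evaluate(rules: list[Rule], record: dict) -> None:
    """Apply every rule whose condition matches the record."""
    for rule in rules:
        if rule.condition(record):
            rule.action(record)

# A manager can "configure" rules like these without touching application code...
rules = [
    Rule(
        name="free_shipping_over_50",
        condition=lambda o: o["total"] >= 50,
        action=lambda o: o.update(shipping=0),
    ),
    Rule(
        name="flag_international",
        condition=lambda o: o["country"] != "US",
        action=lambda o: o.update(needs_review=True),
    ),
]

order = {"total": 72.0, "country": "DE", "shipping": 5.0}
evaluate(rules, order)
print(order)  # {'total': 72.0, 'country': 'DE', 'shipping': 0, 'needs_review': True}
```

...but the PB&J problem shows up the moment the real process needs rule ordering, exceptions to exceptions, or state the schema has no slot for. At that point someone has to do the mechanical-sympathy and requirements-analysis work, and that someone is a programmer.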