When AI agents are generating some, most, or all of your code, occasional git commits of the resulting source code aren't sufficient. You also need a tool that ties the generated code back to the prompts and AI interactions that produced it.

Here's a short technical explainer video of GOOD, a Git companion designed for this: https://github.com/specstoryai/getspecstory/blob/main/GOOD.md

The core tool will be free (as in beer), but it may or may not be FOSS. We'll figure that out soonish.

I would love some feedback on this!
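GOOD's actual mechanism is covered in the explainer linked above; purely as an illustration of the general idea (not GOOD's implementation), a workflow could attach the originating prompt to each commit with git notes. The function name and the "ai-prompts" notes ref below are made up for the sketch:

    # Hypothetical sketch: record the prompt that produced a change
    # alongside the commit it generated, using git notes.
    import subprocess

    def commit_with_prompt(message: str, prompt: str) -> None:
        # Stage and commit as usual.
        subprocess.run(["git", "add", "-A"], check=True)
        subprocess.run(["git", "commit", "-m", message], check=True)
        # Attach the originating prompt to the new commit under a
        # dedicated notes ref so it doesn't collide with ordinary notes.
        subprocess.run(
            ["git", "notes", "--ref=ai-prompts", "add", "-m", prompt, "HEAD"],
            check=True,
        )

    # Later, `git notes --ref=ai-prompts show <commit>` recovers the prompt.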
You need to have something people can try right now for a Show HN - take a look at https://news.ycombinator.com/showhn.html
This is really interesting. Could you use it even on teams that aren't using AI to generate code? It seems like it could help clarify intent for any kind of team. How different is it from well-commented code? Or could it be used to add comments to old, poorly documented code? Could the AI ever infer intent so developers don't have to document it - they just have to validate that GOOD correctly inferred their intent?
Love this -- as AI-generated code inevitably leads to more decisions being made with less intentionality and understanding, having the context the AI was given is key to not creating a mess over the long run!
THIS!!! The ability to retain intent is huge. It also creates an objective measure of the process, one that can be improved over time: essentially "intent + prompts = outcome".