Hi HN,

I built BabyStepsJs to track the development of my newborn daughter in the form of a skill tree.
It’s a mix of gamified documentation of her current progress and a bit of education on what she might discover next.
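For a rough idea of the mechanics: a skill tree like this is essentially a hierarchy, and D3 (the library used here, see the rules below) can lay one out and draw it in a few lines. A minimal, illustrative sketch, assuming D3 v7 is loaded; the skill data and layout are made up for illustration, not the actual BabyStepsJs code:

  // Toy skill hierarchy (illustrative, not the real data)
  const skills = {
    name: "Newborn",
    children: [
      { name: "Lifts head", children: [{ name: "Rolls over" }] },
      { name: "Smiles socially", children: [{ name: "Laughs" }] }
    ]
  };

  // Compute an x/y position for every node with the tree layout
  const root = d3.hierarchy(skills);
  d3.tree().size([440, 260])(root);

  const svg = d3.select("body").append("svg")
    .attr("width", 480).attr("height", 320);

  // One line per parent-child link
  svg.selectAll("line").data(root.links()).join("line")
    .attr("x1", d => d.source.x).attr("y1", d => d.source.y + 20)
    .attr("x2", d => d.target.x).attr("y2", d => d.target.y + 20)
    .attr("stroke", "#999");

  // One labeled circle per skill
  const node = svg.selectAll("g").data(root.descendants()).join("g")
    .attr("transform", d => `translate(${d.x},${d.y + 20})`);
  node.append("circle").attr("r", 5);
  node.append("text").attr("dy", -10).attr("text-anchor", "middle")
    .text(d => d.data.name);

The actual app is of course more elaborate (skill states, layout tweaks, etc.), but this is roughly the basic shape.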
This follows the same rules as a similar development experiment I ran a few months ago (see the cs-util-com/InstantScribe repo):

- Act mainly as the PM and QA for the AI (this time ChatGPT o1) and iteratively add user stories and requirements to extend the feature set.

- Keep each iteration to a single commit, so the AI’s proposed changes are easy to review in the git diff.

- Don’t modify the code much myself, beyond minor adjustments like tweaking colors, positions, etc.

- Make a few initial high-level decisions myself, such as choosing D3 for the graph.

Some findings/impressions from that process:

- I was able to prototype this in a few hours. The process was fun and let me change requirements as I learned more about what I actually wanted.

- Most changes were correct on the first try (using o1). With other models, I often noticed more random changes in unrelated parts of the app.

- o1 tends to strip comments, even when they provide useful context on what the code does.

- Some ideas (e.g. how to smartly arrange the graph) were suggestions o1 came up with on its own. It was often worth framing requirements around the "why" rather than the "how", giving the AI the freedom to propose solutions I might not have considered.

- More complex requirements were sometimes ignored for a while when they only lived in the "user stories" comment block at the top of the HTML file (a rough sketch of that block is below). Only after I explicitly pointed out that they weren’t implemented did the AI make an effort to address them.

In summary, I’m excited to continue exploring this development approach over the coming years to see whether increasingly complex applications can be built iteratively in this way.
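P.S. For illustration, the kind of "user stories" comment block I mean looks something like this (an illustrative sketch, not the literal contents of the file):

  <!--
    User stories / requirements (maintained here, implemented by the AI):
    - As a parent, I can mark a skill as achieved and record the date.
    - Skills whose prerequisites are not yet achieved appear greyed out.
    - The graph arranges itself so related skills stay close together.
    ...
  -->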