I saw Peter Roberts' AMA on HN yesterday (https://news.ycombinator.com/item?id=43363056) and wondered what would happen if I fine-tuned a model on all of his past answers.

I vibe-coded this with Claude Code every time my 8-month-old son had a nap today. I also fine-tuned OpenAI's GPT-4o mini to power it (a rough sketch of the pipeline is at the end of this post). Zero lines of code were manually written or modified by me; everything was done by Claude.

Writing the version you see now took about 45 minutes of LLM time and cost around $12.

I've spent no time improving the scraping and data preparation beyond the first version, so there's a ton of room for improvement. I might fix some low-hanging fruit during nap times tomorrow, but probably won't.

You can check it out on GitHub, including the data that was used for fine-tuning the model: https://github.com/s16h/proberts

I had fun!
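For anyone curious about the overall shape, here's a minimal sketch of the pipeline, assuming the Algolia HN Search API for scraping and OpenAI's chat fine-tuning JSONL format. The function names, system prompt, and cleanup steps are illustrative, not the repo's actual code:

    import json
    import requests
    from openai import OpenAI

    # Hypothetical sketch: pull proberts' HN comments via the Algolia HN
    # Search API, pair each one with its parent (the question it answers),
    # write OpenAI's chat fine-tuning JSONL, and start a fine-tuning job.

    HN_SEARCH = "https://hn.algolia.com/api/v1/search_by_date"

    def fetch_comments(author: str, pages: int = 3) -> list[dict]:
        """Fetch the author's comments, newest first, a page at a time."""
        comments = []
        for page in range(pages):
            resp = requests.get(
                HN_SEARCH,
                params={
                    "tags": f"comment,author_{author}",
                    "page": page,
                    "hitsPerPage": 100,
                },
                timeout=30,
            )
            resp.raise_for_status()
            comments.extend(resp.json()["hits"])
        return comments

    def fetch_parent(parent_id: int) -> str:
        """Fetch the parent item, i.e. the question being answered."""
        resp = requests.get(
            f"https://hn.algolia.com/api/v1/items/{parent_id}", timeout=30
        )
        resp.raise_for_status()
        # Note: Algolia returns HTML-escaped text; real data prep would
        # presumably strip/clean the markup before training.
        return resp.json().get("text") or ""

    def build_training_file(author: str, path: str = "training.jsonl") -> str:
        """Write question/answer pairs in OpenAI's chat fine-tuning format."""
        with open(path, "w") as f:
            for c in fetch_comments(author):
                question = fetch_parent(c["parent_id"])
                if not question or not c.get("comment_text"):
                    continue
                example = {
                    "messages": [
                        {"role": "system", "content": "You answer questions in the style of proberts on HN."},
                        {"role": "user", "content": question},
                        {"role": "assistant", "content": c["comment_text"]},
                    ]
                }
                f.write(json.dumps(example) + "\n")
        return path

    if __name__ == "__main__":
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        training_file = client.files.create(
            file=open(build_training_file("proberts"), "rb"),
            purpose="fine-tune",
        )
        job = client.fine_tuning.jobs.create(
            training_file=training_file.id,
            model="gpt-4o-mini-2024-07-18",
        )
        print("Fine-tuning job:", job.id)

Each JSONL line pairs a parent question with one of his answers, which is the messages shape OpenAI's fine-tuning endpoint expects for chat models.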