I've been tinkering with a hobby project lately, and I wonder how related it is to this.

I've been wanting to build a tool to help deck builders find and work with combos in card games -- specifically Magic: The Gathering, though it could apply to other games as well.

There is a wonderful dataset of combos available for download from Commander Spellbook -- currently over 26k combos, and growing all the time.

One thought I've had is to train my own embedding model so that cards that are likely to combo with each other embed close to one another. That way, even as new cards are printed, we can rapidly discover cards that are likely to combo with them. In practice, my first attempt at fine-tuning an embedding model proved lackluster, but I intend to refine my data and try again -- possibly after pre-training.

My second thought is to fine-tune an LLM on the text of existing combos -- give it the text of each card in the combo, then train it to predict the rest of the interactions. This is cool and all, but I don't entirely know how to train it to reliably answer "these cards don't combo" -- I fear it would tend to hallucinate interactions for cards that don't combo, and I don't know how to handle that.

Obviously any answers that come out of such a system would need to be vetted by humans before being added to the database, but it feels like this could be an interesting way to explore the game space, if nothing else.

Relatedly, a mathematical proof begins with a set of starting conditions and a conjecture, then works forward using established rules. In a similar way, a combo in Magic starts with a set of starting conditions and a conjecture ("this combo will result in infinite life", say, or "infinite damage"), then works forward, detailing how to use the established rules to accomplish the conjecture.

Anyways, it's an interesting use case to me, and I'm excited to learn more about the parallels. I don't know whether my embedding-model or LLM approach is worthwhile, and I would like to hear about other tactics I might employ!
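For the embedding idea, one common recipe is contrastive fine-tuning: treat cards that co-occur in a known combo as positive pairs and let the rest of the batch act as implicit negatives. Here's a minimal sketch using the sentence-transformers library -- the data shape (each combo as a list of card texts), the model name, and the hyperparameters are all my assumptions, not anything taken from the Commander Spellbook export:

```python
from itertools import combinations

def combo_pairs(combos):
    """Turn each combo (a list of card texts) into positive training pairs:
    every two cards that co-occur in a combo become one positive example."""
    pairs = []
    for cards in combos:
        for a, b in combinations(cards, 2):
            pairs.append((a, b))
    return pairs

# Hypothetical training loop with sentence-transformers (not run here).
# MultipleNegativesRankingLoss treats every other pair in the batch as a
# negative, so only positive pairs need to be supplied:
#
# from sentence_transformers import SentenceTransformer, InputExample, losses
# from torch.utils.data import DataLoader
#
# model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed base model
# train = [InputExample(texts=[a, b]) for a, b in combo_pairs(combos)]
# loader = DataLoader(train, shuffle=True, batch_size=32)
# loss = losses.MultipleNegativesRankingLoss(model)
# model.fit(train_objectives=[(loader, loss)], epochs=1)
```

Once trained, cosine similarity between a new card's embedding and the rest of the corpus would give a ranked list of likely combo partners, which could also serve as a cheap pre-filter in front of any LLM step.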
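On the hallucination worry, one tactic is to bake explicit negatives into the fine-tuning data: sample card sets that appear in no known combo and pair them with a canonical "these cards don't combo" completion, so refusing becomes a learned answer rather than an off-distribution one. A sketch of the data construction -- the field names and prompt format are made up for illustration, and note the labels are noisy, since a randomly sampled "negative" might just be an undiscovered combo:

```python
import random

def build_examples(combos, all_cards, neg_ratio=1.0, seed=0):
    """Build fine-tuning examples from a combo database.

    Positives: the cards of a known combo -> its interaction text.
    Negatives: random card pairs not in any known combo -> an explicit
    "no combo" answer, teaching the model to decline instead of invent.
    (Assumes most random pairs genuinely don't combo; labels are noisy.)
    """
    rng = random.Random(seed)
    known = {frozenset(c["cards"]) for c in combos}
    examples = [{"prompt": " + ".join(c["cards"]),
                 "completion": c["description"]} for c in combos]
    n_neg = int(len(combos) * neg_ratio)
    while n_neg > 0:
        cards = frozenset(rng.sample(all_cards, 2))
        if cards not in known:  # skip pairs that are actually combos
            examples.append({"prompt": " + ".join(sorted(cards)),
                             "completion": "These cards do not combo."})
            n_neg -= 1
    return examples
```

Harder negatives (pairs the embedding model above rates as similar but that aren't in the database) would probably teach the boundary better than uniform random sampling, at the cost of more label noise.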