From the article: "Should the mantle of 'creator' lie with the program or the programmer?"<p>Strangely enough, this was the exact dilemma that Clarissa faced in Episode 29 of Clarissa Explains It All, "Poetic Justice," in which she developed a program that generated poetry. When her program's poetry won the school's contest, she decided to publicly relinquish her award to her computer. <a href="http://en.wikipedia.org/wiki/List_of_Clarissa_Explains_It_All_episodes#Season_3:_1992.E2.80.931993" rel="nofollow">http://en.wikipedia.org/wiki/List_of_Clarissa_Explains_It_Al...</a>
Interesting application of genetic algorithms. The weird thing about GAs, though, and any optimiser, is that you need an "error function" that you will use to determine when one solution is better than another. Presumably he designed an error function for "fun", and his GA system finds local minima within it. But this is what I mean by "weird": you have to come up with a quantification of what makes a game fun, and _this_ is ultimately what determines what kinds of solutions you end up with. Regardless of how the optimiser works, the error function is what the programmer designs, and therefore the programmer is designing the space in which solutions can exist; so I would argue that any specific game is attributable to the programmer (or whoever designed the error function), and a specific game just happens to be one choice within the available search space.<p>Unless you want to manually try every iteration and rate it from 1 to 10, this implies that it's possible to come up with a "funness" model of human game playing. That in itself is sort of an interesting question, falling somewhere between philosophy and psychology, rather than computer science.
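To make the point above concrete, here is a minimal toy GA in Python. Everything in it is invented for illustration (it is not Browne's system): the "funness" fitness function is a stand-in that just rewards rulesets whose parameters land near hand-picked targets, which is exactly the sense in which whoever writes the error function defines the space of possible "discoveries".

```python
import random

# Hypothetical targets for three ruleset parameters, e.g. win-line
# length, lose-line length, board size. Purely illustrative.
TARGET = [4, 3, 6]

def funness(ruleset):
    # Higher = "more fun". The GA only ever sees this number, so this
    # function, not the optimiser, determines what can be found.
    return -sum(abs(a - b) for a, b in zip(ruleset, TARGET))

def mutate(ruleset, rate=0.3):
    return [g + random.choice([-1, 0, 1]) if random.random() < rate else g
            for g in ruleset]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=30, generations=100, seed=0):
    random.seed(seed)
    pop = [[random.randint(1, 10) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=funness, reverse=True)
        parents = pop[:pop_size // 2]          # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=funness)

best = evolve()
print(best, funness(best))
```

Swapping in a different `funness` changes which "games" evolve, without touching the optimiser at all, which is the commenter's point about attribution.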
This paper, "Automatic Design of Balanced Board Games" [2007] might be of interest for similar reasons: <a href="http://www.aaai.org/Papers/AIIDE/2007/AIIDE07-005.pdf" rel="nofollow">http://www.aaai.org/Papers/AIIDE/2007/AIIDE07-005.pdf</a>
A section of the article says this:
<i>Raf describes two requirements for serendipity to occur:
1. Active searching: The designer should not simply wait for inspiration to strike, but should immerse himself in ideas and look for harmonies between them...</i><p>It reminds me of one of my favorite quotes from Picasso:<p>“That inspiration comes, does not depend on me. The only thing I can do is make sure it catches me working.”
― Pablo Picasso
The principle of subset contradiction (as in Yavalath, where you win by making four in a row but lose if you make three first) is extremely interesting, and it also seems like it can be applied in a fairly simple, atomic way to various types of rulesets. I wonder if a Monte Carlo ruleset search system could integrate higher-level rule change "principles" such as subset contradiction, and apply them as operations to existing rulesets...
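As a sketch of that idea, here is a hypothetical Monte Carlo ruleset search where "subset contradiction" is one reusable operator among others. All the names, the ruleset representation, and the scoring stub are invented for illustration; a real system would score candidates with self-play playouts rather than the toy heuristic used here.

```python
import random

def subset_contradiction(ruleset):
    """Add a losing condition that is a strict subset of the win
    condition, Yavalath-style (win with 4 in a row, lose with 3)."""
    new = dict(ruleset)
    if ruleset["win_line"] > 2 and "lose_line" not in ruleset:
        new["lose_line"] = ruleset["win_line"] - 1
    return new

def widen_board(ruleset):
    # A second, unrelated operator, just to show operators composing.
    new = dict(ruleset)
    new["board_size"] = ruleset["board_size"] + 1
    return new

OPERATORS = [subset_contradiction, widen_board]

def score(ruleset):
    # Stand-in for playout-based evaluation; here we simply reward the
    # presence of a contradiction, purely for illustration.
    return 1.0 if "lose_line" in ruleset else 0.0

def monte_carlo_search(seed_rules, steps=50, seed=1):
    random.seed(seed)
    current = seed_rules
    best, best_score = seed_rules, score(seed_rules)
    for _ in range(steps):
        candidate = random.choice(OPERATORS)(current)
        if score(candidate) >= best_score:
            best, best_score = candidate, score(candidate)
            current = candidate
    return best

print(monte_carlo_search({"board_size": 5, "win_line": 4}))
```

The interesting design question is exactly the one raised above: the operators encode higher-level design "principles", so the search recombines ideas a human already articulated rather than inventing them from nothing.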
Not unrelated is Kevan Davis' <i>Ludemetic Game Generator</i> (2003), which randomly combines categories and mechanics from those at BoardGameGeek to create new (and largely useless) game ideas, with arbitrarily appropriate titles:<p><a href="http://kevan.org/ludeme" rel="nofollow">http://kevan.org/ludeme</a><p>Some of the random games sound more fun than others. ;)<p><pre><code> Game: "Indkub"
Categories: Industry / Manufacturing, Comic Book.
Mechanics: Set Collection, Hand Management.
Game: "Ugplay"
Categories: Prehistoric, Trains.
Mechanics: Acting, Trading.</code></pre>
A friend of mine implemented the GA-generated game Yavalath for Android. You can find it on Android Market at <a href="https://market.android.com/details?id=boardgamer.yavalath" rel="nofollow">https://market.android.com/details?id=boardgamer.yavalath</a>.<p>Yavalath has surprising depth, but it is no Battlefield 3.<p>It is nice to see BoardGameGeek listed on HN.
Hmm, I recall Trillion Credit Squadron back in the '80s, where the use of computers was almost required to play, and being at the bleeding edge of AI research would have been useful.