I've seen this before, but just now I noticed something:<p>>the fewer "repeat interactions" there are, the more distrust will spread.<p>Doesn't this explain quite well why big internet communities devolve into a cesspool? I won't behave like a jerk (or worse, a moron) if I know the people around me will recognise and avoid me, but as the odds of that happening approach zero, it becomes more advantageous to fling shit.
I’ve played this and it’s fantastic. Do play it to the end.<p>I found it extremely insightful, and its observations connected long-standing, disparate dots for me. It’s as if a big jumbled-up puzzle suddenly clicked into place.
I played this sometime in the past and had a lot of fun. I didn't want to, given the 30-minute play-time estimate, but I ended up also playing the anxiety demo [0] on the same site and spent about an hour total. It was incredibly informative and eye-opening.<p>[0]: <a href="https://ncase.me/anxiety-demo/" rel="nofollow">https://ncase.me/anxiety-demo/</a>
The outcome of the various games depends on the way rewards are distributed. If the reward for bad behaviour is high, obviously the bad guys are going to win; this is evident in the five-step "The Evolution of Distrust" section. Try reducing the rewards for bad behaviour and increasing the rewards for good behaviour :)<p>So, the moral of the story is: reward good behaviour.
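To make that concrete, here is a minimal round-robin sketch of the idea (my own toy code, not the game's; the strategy set and the default-ish payoff numbers of +2/+3/-1/0 are assumptions from memory). Bumping the reward for cheating a cooperator is enough to flip the winner:
<pre><code>def copycat(mine, theirs):
    # Tit-for-tat: cooperate first, then copy the opponent's last move.
    return theirs[-1] if theirs else 'C'

def always_cooperate(mine, theirs):
    return 'C'

def always_cheat(mine, theirs):
    return 'D'

def match(a, b, payoffs, rounds=10):
    both_coop, temptation, sucker, both_cheat = payoffs
    table = {('C', 'C'): (both_coop, both_coop),
             ('D', 'D'): (both_cheat, both_cheat),
             ('D', 'C'): (temptation, sucker),
             ('C', 'D'): (sucker, temptation)}
    ha, hb, sa, sb = [], [], 0, 0
    for _ in range(rounds):
        ma, mb = a(ha, hb), b(hb, ha)
        pa, pb = table[(ma, mb)]
        ha.append(ma); hb.append(mb); sa += pa; sb += pb
    return sa, sb

def tournament(payoffs):
    # Everyone plays everyone, including their own kind, as in the game's
    # population rounds; return each strategy's total score.
    strategies = [always_cooperate, always_cheat, copycat]
    totals = {s.__name__: 0 for s in strategies}
    for a in strategies:
        for b in strategies:
            score_a, _ = match(a, b, payoffs)
            totals[a.__name__] += score_a
    return totals

print(tournament((2, 3, -1, 0)))   # modest temptation: copycat comes out on top
print(tournament((2, 5, -1, 0)))   # juicy reward for cheating: always_cheat wins
</code></pre>
With the modest payoffs the nice-but-retaliatory copycat wins; once exploiting a cooperator pays +5, always_cheat takes over, which is exactly the "reward distribution decides the outcome" point.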
Game theory, just like essentially everything in math, physics, probability, and CS, is about adjoints, norms, and fixed points: <a href="https://github.com/adamnemecek/adjoint/" rel="nofollow">https://github.com/adamnemecek/adjoint/</a><p>A Nash equilibrium is a fixed point.
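For anyone who wants the fixed-point claim spelled out, this is the standard statement (not something taken from the linked repo): a mixed-strategy profile is a Nash equilibrium exactly when it is a fixed point of the best-response correspondence, and Nash's existence proof applies Kakutani's fixed-point theorem to that correspondence.
<pre><code>\sigma^* \text{ is a Nash equilibrium}
\;\iff\;
\sigma^* \in BR(\sigma^*),
\qquad\text{where}\qquad
BR(\sigma) \;=\; \prod_i \arg\max_{\sigma_i'} u_i(\sigma_i', \sigma_{-i}).
</code></pre>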
What happens if you add another level of meta to it, where the object of the game is to avoid complete randomness/undecidability, but also to keep the game going for as many iterations as possible, ideally with the lowest amount of mistakes (entropy?) and defections/cheats?<p>i.e. stuff keeps happening, there's a lot of predictability, but no easily determinable stabilisation endpoint?
In the sandbox, if you change the payoff of <i>both</i> parties cheating to -5, the angelic cooperator seems to win in most simulations.<p>In a way, this makes sense: if the penalty when both parties cheat is harsher than the upside of swindling your partner, "trust at all costs" wins out!<p>But on a societal level, I suppose this means that if you want to optimize for do-gooders, you should punish failure harshly, no matter which party is responsible...
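A quick back-of-the-envelope check of that intuition (my own single-round expected-value arithmetic, nothing as rich as the game's evolutionary simulation; the other three payoffs are the defaults as I remember them): with mutual cheating at -5, cheating only pays against a partner who is very unlikely to cheat themselves.
<pre><code>both_coop, temptation, sucker, both_cheat = 2, 3, -1, -5  # sandbox payoffs, both-cheat changed to -5

for p in (0.1, 0.3, 0.5):  # partner's probability of cheating
    ev_coop  = (1 - p) * both_coop  + p * sucker
    ev_cheat = (1 - p) * temptation + p * both_cheat
    print(f"p(cheat)={p}: EV cooperate={ev_coop:+.1f}, EV cheat={ev_cheat:+.1f}")
</code></pre>
So as soon as cheating is at all common, cheaters mostly bleed points against each other, which would explain why the unconditional cooperators end up on top.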
Interesting: in the "change the payoffs" version, if you set cooperate/cooperate to +2 (or +1) and make the cheat/cheat punishment -3 or -4 (or even harsher), you get a population that oscillates between more cheaters and more cooperators. The copycats get eaten instantly.
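If you want to poke at this outside the sandbox, below is a rough sketch of the game's evolutionary loop as I understand it (round-robin tournament, then the lowest scorers are culled and the top scorers are cloned). It uses the payoff numbers from the comment above, but it only includes three of the characters and no mistake rate, so don't expect it to reproduce the exact oscillation; it's just a starting point for experiments.
<pre><code>from collections import Counter

# Payoffs per the comment: cooperate/cooperate +2, cheat/cheat -4; the mixed
# case (+3 for the cheater, -1 for the victim) is assumed to stay at the defaults.
PAYOFFS = {('C', 'C'): (2, 2), ('D', 'D'): (-4, -4),
           ('D', 'C'): (3, -1), ('C', 'D'): (-1, 3)}
ROUNDS, GENERATIONS, REPLACE = 10, 15, 5

def always_cooperate(mine, theirs): return 'C'
def always_cheat(mine, theirs):     return 'D'
def copycat(mine, theirs):          return theirs[-1] if theirs else 'C'

def match(a, b):
    ha, hb, sa, sb = [], [], 0, 0
    for _ in range(ROUNDS):
        ma, mb = a(ha, hb), b(hb, ha)
        pa, pb = PAYOFFS[(ma, mb)]
        ha.append(ma); hb.append(mb); sa += pa; sb += pb
    return sa, sb

population = [always_cooperate] * 8 + [always_cheat] * 8 + [copycat] * 8
for generation in range(GENERATIONS):
    scores = [0] * len(population)
    for i in range(len(population)):
        for j in range(i + 1, len(population)):
            si, sj = match(population[i], population[j])
            scores[i] += si
            scores[j] += sj
    # Cull the worst REPLACE players and clone the best REPLACE players.
    ranked = [p for _, p in sorted(zip(scores, population), key=lambda t: t[0])]
    population = ranked[REPLACE:] + ranked[-REPLACE:]
    print(generation, dict(Counter(p.__name__ for p in population)))
</code></pre>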
I cannot recommend "The Evolution of Cooperation" (Robert Axelrod), the source for the game, highly enough.
It reads well despite being an academic book; Axelrod, a political scientist with a mathematics background, applies the theory in areas far beyond its origin. Richard Dawkins vets the biology side and wrote the foreword to a later edition.
This has been posted a couple of times, including very recently, but it never got much discussion.<p><a href="https://hn.algolia.com/?q=https%3A%2F%2Fncase.me%2Ftrust%2F" rel="nofollow">https://hn.algolia.com/?q=https%3A%2F%2Fncase.me%2Ftrust%2F</a>
What a great game.<p>It would be interesting to see what happens if the types "slide" towards the next personality (copycat into copykitten into always-cooperate) as they get positive feedback (and slide the other way, towards always-cheat, on negative feedback).
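One way the "sliding" idea could be modelled (purely my reading of the comment above, nothing from the game itself): give each agent a disposition score that its recent payoffs push up or down, and map the score onto a ladder of personalities.
<pre><code>LADDER = ["always_cheat", "copycat", "copykitten", "always_cooperate"]

class SlidingAgent:
    def __init__(self, disposition=1.5):
        self.disposition = disposition          # position on the ladder, as a float

    @property
    def personality(self):
        index = min(len(LADDER) - 1, max(0, round(self.disposition)))
        return LADDER[index]

    def feedback(self, payoff, rate=0.1):
        # Positive payoffs nudge the agent toward cooperation, negative ones
        # toward cheating; clamp so it stays on the ladder.
        self.disposition = min(len(LADDER) - 1.0,
                               max(0.0, self.disposition + rate * payoff))

agent = SlidingAgent()
for payoff in (2, 2, -1, -1, -1, -1, -1):       # a good run, then a string of losses
    agent.feedback(payoff)
    print(payoff, round(agent.disposition, 2), agent.personality)
</code></pre>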
When I started programming, the first mistake I ran into was ignoring Python's indentation rules, which produced errors even though the logic itself was correct.
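For anyone who hasn't hit this: Python treats indentation as part of the syntax, so a misaligned line fails at compile time even when every statement is fine on its own. A tiny, unrelated-to-the-game illustration:
<pre><code>bad = (
    "def greet():\n"
    "    x = 1\n"
    "      y = 2\n"       # over-indented relative to the line above
)
try:
    compile(bad, "example", "exec")
except IndentationError as err:
    print("IndentationError:", err.msg)   # "unexpected indent"
</code></pre>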
There should be another character, similar to the "Copykitten":<p>cooperate first, then if the other person ever cheats, cheat back forever.
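If I remember right, the game's later rounds already include this character under the name Grudger; in the iterated-prisoner's-dilemma literature it's the "grim trigger" strategy. As a strategy function in the same shape as the sketches above:
<pre><code>def grudger(my_history, their_history):
    # Cooperate until the other player cheats once, then cheat forever.
    return 'D' if 'D' in their_history else 'C'
</code></pre>
Against Copycat it cooperates forever; against Always Cheat it only loses the first round. Its weakness is a single mistake, which it never forgives.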