While experimenting with ways to interact with augmented reality, I thought Rock Paper Scissors offered people a simple game whose results could then drive augmented reality objects.<p>In Rock Paper Checkmate, people play against the computer and choose their weapon (rock, paper, or scissors) by making the corresponding hand gestures for the camera. If the player wins the bout, a "smart" AI makes a chess move for the player and a "dumb" AI makes a chess move for the computer. If the player loses, the opposite happens. So a game of chess can be autoplayed through successive bouts of Rock Paper Scissors (this logic is sketched at the end of this post).<p>Thanks to Steve Barnegren and his SwiftChess library, which handles all the underlying chess logic and AI: <a href="https://github.com/SteveBarnegren/SwiftChess" rel="nofollow">https://github.com/SteveBarnegren/SwiftChess</a><p>I trained a few image classification and hand pose classification models on large public image datasets but did not achieve the results I wanted, so I ended up training a hand pose classification model on a few hundred pictures I took myself. If the hand pose classifier often gets your chosen weapon wrong (e.g. Rock Paper Checkmate thinks you're playing Rock when you're actually playing Paper), I'd appreciate hearing about it.
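<p>For anyone curious about the gesture recognition: it follows the standard Vision + Create ML hand pose approach. Here is a simplified sketch rather than the app's literal code; RPSHandPoseClassifier and its "poses" input stand in for whatever Create ML generates when you export your own model:

    import CoreML
    import Vision

    final class WeaponClassifier {

        // Stand-in name for the Create ML-generated model class; the real
        // class name and its input/output names depend on your export.
        private let model = try? RPSHandPoseClassifier(configuration: MLModelConfiguration())

        private let request: VNDetectHumanHandPoseRequest = {
            let request = VNDetectHumanHandPoseRequest()
            request.maximumHandCount = 1 // one weapon hand at a time
            return request
        }()

        /// Returns "rock", "paper", or "scissors", or nil if no hand is found.
        func weapon(in pixelBuffer: CVPixelBuffer) throws -> String? {
            let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                                orientation: .up,
                                                options: [:])
            try handler.perform([request])
            guard let hand = request.results?.first, let model = model else { return nil }

            // keypointsMultiArray() packs the detected hand joints into the
            // MLMultiArray layout Create ML hand pose classifiers expect.
            let keypoints = try hand.keypointsMultiArray()
            return try model.prediction(poses: keypoints).label
        }
    }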
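<p>And the bout logic mentioned above, sketched in plain Swift. The type names here are illustrative, not the app's actual code; in the app, "smart" and "dumb" correspond to SwiftChess AI difficulty settings:

    enum Weapon: CaseIterable {
        case rock, paper, scissors

        // The weapon this one defeats.
        var beats: Weapon {
            switch self {
            case .rock: return .scissors
            case .paper: return .rock
            case .scissors: return .paper
            }
        }
    }

    enum MoveQuality { case smart, dumb }

    /// Which AI quality moves for each side after one bout, or nil on a
    /// draw (the chess game only advances when a bout has a winner).
    func moveQualities(player: Weapon, computer: Weapon)
        -> (player: MoveQuality, computer: MoveQuality)? {
        guard player != computer else { return nil }  // draw: replay the bout
        let playerWon = player.beats == computer
        return playerWon
            ? (player: .smart, computer: .dumb)       // win: smart move for you
            : (player: .dumb, computer: .smart)       // loss: smart move against you
    }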