Other tools for developing / comparing RL algorithms:<p>* BURLAP (from Brown-UMBC) <a href="https://github.com/jmacglashan/burlap" rel="nofollow">https://github.com/jmacglashan/burlap</a><p>* RL-Glue <a href="http://glue.rl-community.org/wiki/Main_Page" rel="nofollow">http://glue.rl-community.org/wiki/Main_Page</a><p>It also looks like some of the challenges come from ALE: <a href="https://github.com/mgbellemare/Arcade-Learning-Environment" rel="nofollow">https://github.com/mgbellemare/Arcade-Learning-Environment</a>
I don't know why it took me this long to realize, but this could be a sort of new-age journal: research published [on GitHub], reviewed by peers, and reproduced by others.
-- gdb@ was that on your mind as you built this?<p>I <i>really</i> hope they gain traction.
Can someone with knowledge of AI explain what this framework does compared to others (mainly open-source, but also proprietary ones), and whether it provides any advances in the field?
How often will environment versions change? Would a more sophisticated versioning scheme make sense [like Semantic Versioning?]<p>I don't really know what a backward-compatible change to an environment would even mean, but you know.
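For what it's worth, Gym bakes the version directly into the environment ID (e.g. <i>CartPole-v0</i>), so any change that affects scoring bumps the suffix and produces a distinct environment; there's no "minor"/"patch" distinction as in SemVer. A minimal sketch of what that naming convention implies (the <i>parse_env_id</i> helper is hypothetical, not part of Gym):

```python
def parse_env_id(env_id: str):
    """Split a Gym-style ID like 'CartPole-v0' into (name, version).

    Hypothetical helper for illustration: Gym itself just treats the
    whole string as an opaque ID; the '-vN' suffix is the entire
    versioning scheme, so a breaking change means a new ID, not a
    backward-compatible bump.
    """
    name, _, suffix = env_id.rpartition("-v")
    return name, int(suffix)

print(parse_env_id("CartPole-v0"))    # ('CartPole', 0)
print(parse_env_id("Breakout-v3"))    # ('Breakout', 3)
```

Under that scheme, results reported on <i>CartPole-v0</i> and <i>CartPole-v1</i> are simply never comparable, which is arguably the point.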
Awesome. I can't wait to play with this. I had actually been doing a side project with the same idea (though of course much simpler!). I just got a lot more free time :)