If you have mastered the basics (e.g. Russell and Norvig's <i>AIMA</i>, Hastie, Tibshirani, and Friedman's <i>The Elements of Statistical Learning</i>, Koller and Friedman's <i>PGM</i>), then I would suggest that the only place to really get a view of the state of the art is by reading papers.<p>In general, scientific books are overviews of a field, and such an overview can only be written with sufficient time for hindsight and synthesis. Even a thousand-page book such as Koller's <i>PGM</i> is littered with references and suggestions of papers to read for a deeper understanding.<p>One partial exception might be the Deep Learning book by Goodfellow, Bengio, and Courville, which was made public only a month or so ago. Even this, however, is just an overview. <a href="http://www.deeplearningbook.org/" rel="nofollow">http://www.deeplearningbook.org/</a>
The standard reference is, inevitably: <a href="http://aima.cs.berkeley.edu/" rel="nofollow">http://aima.cs.berkeley.edu/</a><p>Another textbook, often linked on HN and freely available online: <a href="http://artint.info/" rel="nofollow">http://artint.info/</a>