The kind of AI that gets public attention right now lacks a quality that can be described as "formal correctness", "actual reasoning", "rigorous thinking", "mathematical ability", "logic", "explainability", etc.

This is the quality that should be studied and developed in the symbolic AI approach. However, the actual symbolic AI work I know of seems to fall into one of two buckets:
1. "Let's solve a mathematical problem (e.g. winning at chess) and say that the solution is AI" (because humans can play chess, and now computers can too!)
2. "Let's make something like Prolog but with different solver algorithm / knowledge representation". Products like Cyc and Wolfram seem to work essentially in this manner, although with lots of custom coding for specific cases to make them practical. There's lots of work on separate aspects of this as well, like temporal and other modal logics.<p>I see the first bucket as just applied maths, not really AI. The second bucket is actually aimed at general reasoning, but the approaches and achievements in it are somewhat uninspiring, maybe because I don't know many of them.<p>So my broad question is: what is happening in such "logical AI" research/development in general? Are there any buckets I missed in the description above, or maybe my description is wrong to begin with? Are there any approaches that seem promising, and if so, how and why?<p>I would be grateful for suggestions of the books/blogs/other resources on the topic as well.