Just two systems is too simple, too naive.<p>The Society of Mind by Marvin Minsky is a must-read for anyone who wants to speak about AI.<p>The notion that there are quick and slow systems is an old one - reflexes vs. recognition is the simplest example: recognition is too slow.<p>But there are not two or three systems; there are... I cannot say how many, but if we believe that we have distinct "subsystems" to recognize eyes and mouths and faces by matching cues, there must be vast hierarchies of such sub-systems (agencies, in Minsky's terminology).<p>So some processes (agencies) are "slow", some are "fast", and some work with quick-and-dirty data - the way a sudden motion catches our attention before we are able to recognize what is going on.<p>The notion that each word we see does some "priming" - pre-fetching, in CS terms - leads to a vastly more complex view of how even seemingly simple tasks are performed.<p>So all we can do is recognize familiar (known in advance) patterns and look at their weights, as modern machine translation services do.
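As a toy illustration of that "priming as pre-fetching" analogy (my own sketch, not anything from Minsky - the concept graph, edge weights and decay factor are all invented), a single spreading-activation pass over a small graph shows how seeing one word can pre-activate its neighbors:

    # Toy spreading-activation "priming": seeing a word pre-activates related
    # concepts, like a pre-fetch warming a cache. All weights are made up.

    concept_graph = {
        "dog":    {"bark": 0.8, "animal": 0.6, "leash": 0.4},
        "bark":   {"dog": 0.8, "tree": 0.3},
        "animal": {"dog": 0.6, "cat": 0.7},
        "leash":  {"dog": 0.4, "walk": 0.5},
    }

    def prime(word, decay=0.5, depth=2):
        """Spread activation outward from `word`, decaying at each hop."""
        activation = {word: 1.0}
        frontier = [(word, 1.0)]
        for _ in range(depth):
            next_frontier = []
            for node, energy in frontier:
                for neighbor, weight in concept_graph.get(node, {}).items():
                    boost = energy * weight * decay
                    if boost > activation.get(neighbor, 0.0):
                        activation[neighbor] = boost
                        next_frontier.append((neighbor, boost))
            frontier = next_frontier
        return activation

    print(prime("dog"))  # "bark", "animal", "leash" (and their neighbors) are now pre-activated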
Two points:<p>1) Kahneman's model is probably too high level to apply to solving NLP problems. It's an interesting abstraction, not a road map to human reasoning.<p>2) System 2 is <i>Effortful Reasoning</i>, not just doing math. For example, if you were engaging system 2 to plan how long a project was going to take, system 2 would think about what kinds of questions were relevant to generating an answer (how long have similar projects taken, are there any differences between this project and the others, etc.) and then think about how to combine those answers to produce an estimate. Computers do not perform this kind of reasoning well at all.
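To make the distinction concrete, here is a rough sketch of the combination step described above, in a reference-class style (the project names, durations and adjustment factor are invented for illustration):

    # Toy "effortful" estimate: look up similar past projects, note how this
    # one differs, and combine into a forecast. Data and factor are made up.

    past_projects = [
        {"name": "API rewrite", "weeks": 10},
        {"name": "billing migration", "weeks": 14},
        {"name": "search revamp", "weeks": 8},
    ]

    def estimate_duration(past, difference_factor=1.2):
        """Base rate from similar projects, adjusted for known differences."""
        base_rate = sum(p["weeks"] for p in past) / len(past)
        return base_rate * difference_factor

    print(f"Estimated duration: {estimate_duration(past_projects):.1f} weeks")

The arithmetic is trivial; the hard part - deciding which reference class and which differences actually matter - is exactly what the sketch does not do, and that is the System 2 part computers struggle with.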
It kind of makes intuitive sense that "understanding" is a graph of nodes representing arbitrary (meaningless) concepts with edges doing all the work of imbuing meaning.<p>So how do we test this idea with a graph large enough and fast enough to approach the complexity of the brain? All computer architectures seem to deeply resist the graph architecture by forcing edges/nodes of large graphs to be sequentially accessed over a set of limited memory bus lines. Large computer networks force edges/nodes to be sequentially accessed over network access points, hubs and switches. Everywhere we look, graph traversal and transformation are unavoidably sequentialized at critical moments on today's hardware. What's the solution?
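To put the "sequentialized traversal" point in concrete terms, here is a minimal breadth-first traversal over an adjacency-list graph (the graph itself is just an example): each edge followed is a dependent memory lookup, so no matter how many cores you have, every step along a path still waits on the result of the previous one.

    from collections import deque

    # A tiny adjacency-list graph; at brain scale these lists would be
    # scattered across memory, and each hop is a dependent load.
    graph = {
        "a": ["b", "c"],
        "b": ["d"],
        "c": ["d", "e"],
        "d": ["e"],
        "e": [],
    }

    def bfs(start):
        """Breadth-first traversal: each frontier must finish fetching its
        neighbors before the next level can even be addressed."""
        visited = {start}
        frontier = deque([start])
        order = []
        while frontier:
            node = frontier.popleft()
            order.append(node)
            for neighbor in graph[node]:  # pointer-chasing: one lookup per edge
                if neighbor not in visited:
                    visited.add(neighbor)
                    frontier.append(neighbor)
        return order

    print(bfs("a"))  # ['a', 'b', 'c', 'd', 'e']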
I wish there were a mathematical theory of what 'understanding' is. For instance, I too suspect that whatever we call a function of a system, something that the system does, cannot simply arise from random parts coming together.<p>To be clear, I don't think a layer of sand has the function of filtering water. It does filter water, but it does so by virtue of a static property - in the first place the geometry of the system, not its mechanics.<p>There is something about systems that do something that I can't name, having limited knowledge of the subject.<p>Could anyone give me a hint about research done on this?