I can honestly say that this post has revolutionized my thoughts on AI. Primarily this is because of what I perceive as the thesis statement, which is:

"<AI> is the construction of weighted tables (choices, data, meta relations, whatever) from large sets of <prior data> by <method>"

This is kind of crazy, because I think it says you could make a Turing-capable AI by using large datasets of prior life data from humans. In essence, "<my life> is the construction of weighted tables from large sets of <life experience> by <human learning>." For example, if you had an AI that could learn through text, you could use extensive transcribed conversation logs of people, plus large time-activity logs, as your inputs.

If it could learn through video (i.e., it could view images, understand objects, object relations, and events in time, and attribute intent to the person behind actions and events), then you could instead feed it huge video logs of people's lives. If you wanted a copy of a person, you could feed it only a single individual; if you wanted a more general AI, you could feed it cross sections of the population.

In addition, there's a very cool meta aspect to the large-dataset concept: it can also be large datasets for deciding when to use, or to feed data to, specialized sub-AIs. For example, you might have a math sub-AI that has been trained on massive sets of math problems (or perhaps it could learn math through the video life logs of a person?). If it's then being used as part of a larger system, you'd want to know when to use it to solve problems and when to feed it experience inputs for further learning. In essence, it's tables of categories for experience types, with grown or paired sub-AIs for those types (see the routing sketch below).

I would wager that it is possible, right now, to create a chatbot that can pass the Turing test using the above, by feeding it the equivalent of mass IRC chat or some similarly huge dataset of human text interaction over a variety of topics. This would naturally need sub-AIs for mechanical things like grammar or parts of speech, and then possibly higher-level meta-AIs for interpreting intent, orchestrating long-form thought, or planning. In a way, it's layers of AI organized by level of thought abstraction. If it were a human, the most heavily used sub-AIs would occupy space proportional to their intensity of use within reconfigurable co-processor zones (sight: visual cortex; face recognition: occipital and temporal lobes; executive functions: frontal lobes, etc.).
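
To make the "weighted tables from large sets of prior data" idea concrete, here is a minimal sketch, assuming the table is literally a table of next-word frequencies built from chat-log text. The `build_table` and `generate` names and the toy `logs` corpus are invented for illustration; a real system would need far richer tables than word bigrams, but the shape of the idea is the same.

```python
from collections import defaultdict, Counter
import random

def build_table(corpus_lines):
    """Build a 'weighted table' mapping each word to a Counter of
    the words that followed it anywhere in the corpus."""
    table = defaultdict(Counter)
    for line in corpus_lines:
        words = line.split()
        for prev, nxt in zip(words, words[1:]):
            table[prev][nxt] += 1
    return table

def generate(table, start, length=10):
    """Walk the table, choosing each next word in proportion to its
    observed frequency -- the 'weights' in the weighted table."""
    word, out = start, [start]
    for _ in range(length):
        followers = table.get(word)
        if not followers:
            break
        word = random.choices(list(followers), weights=followers.values())[0]
        out.append(word)
    return " ".join(out)

# toy 'prior data': a few lines standing in for mass chat logs
logs = [
    "hello how are you",
    "how are you doing today",
    "i am doing fine thanks",
]
table = build_table(logs)
print(generate(table, "how"))
```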
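
And here is a rough sketch of the meta/routing idea, assuming the "table of categories for experience types" is just a function that maps an input to a category plus a dict that maps categories to sub-AIs. `MathSubAI`, `ChatSubAI`, `categorize`, and `route` are hypothetical names made up for this example, not anything from the post.

```python
# Hypothetical sub-AIs: each only needs a handle() and a learn() method.
class MathSubAI:
    def handle(self, text):
        # stand-in for a model trained on massive sets of math problems
        try:
            return str(eval(text, {"__builtins__": {}}))
        except Exception:
            return "cannot solve"
    def learn(self, text):
        pass  # would update this sub-AI's own weighted tables

class ChatSubAI:
    def handle(self, text):
        return "interesting, tell me more"
    def learn(self, text):
        pass

def categorize(text):
    """Toy stand-in for the meta-level table of categories for
    experience types: decide which sub-AI an input belongs to."""
    looks_mathy = any(ch in text for ch in "+-*/") and any(ch.isdigit() for ch in text)
    return "math" if looks_mathy else "chat"

SUB_AIS = {"math": MathSubAI(), "chat": ChatSubAI()}

def route(text):
    """Use the category table to pick a sub-AI, let it answer,
    and also feed it the input as further training experience."""
    sub = SUB_AIS[categorize(text)]
    sub.learn(text)
    return sub.handle(text)

print(route("2 + 3 * 4"))         # handled by the math sub-AI
print(route("how was your day"))  # handled by the chat sub-AI
```

In this framing, the higher-level meta-AIs for intent, long-form thought, or planning would just be additional routing layers stacked on top of `route`, one per level of thought abstraction.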