I emailed Chomsky a month back about thought and language. I hope he doesn't mind that I'm sharing it.
My email:

I was fascinated by your talk at the British Academy. I'm no linguist, so please forgive my ignorance.

If formal language is designed not for communication but for our thoughts, is it not the case that there are two different mechanisms of perception? One to understand the internal language (if I can call it a language), and one for the external language we use socially? It is bewildering. As I type this email, I'm thinking of the words that need to follow, whether because of my ability for pattern recognition or because of a conceptual understanding of the rules of the external language. I'm not sure if my ability is a natural phenomenon or a self-trained one (as far as English is concerned). English is not my native language; I acquired it by study. Recently, my thoughts have been primarily in English, and I can switch between English and my native language for thinking.

Is language the only way to think? Is it crazy to think that our brain does not use language to think at all? Can thought/perception, like language, have two different mechanisms, one for understanding external logic and one for internal logic? The internal system might be visual, physical, etc. This seems to lead to a structure problem: do internal thoughts have a far more complex structure? If dreams are thoughts, they are extremely complex, and that might be evidence that there are different systems. I strongly suspect this is because we have different involuntary mechanisms of perception, like the sense of smell.

Chomsky's reply:

Very little is understood about the interesting questions you're raising. To study them we'd have to have a way of accessing thinking by some means that does not involve conscious use of language – which is probably a superficial reflection of something happening within. But so far there are no good ways.
Historical correction: the article mentions the "AI Winter" as occurring in the 1970s. The AI Winter meant here is the one running from roughly the late 1980s through the mid-1990s, when the expert-systems approach was deemed a failure, funding for AI dropped considerably, and it was a mistake to put AI on your resume.

This opened a window that allowed the statistics-based AI techniques that currently dominate the field to mature.
We're still figuring out probabilistic knowledge representation. So much depends on context.

Earlier approaches are fraught with problems. An encoded fact that translates to "Obama is the president" is far too simplistic: it does not take into account the proper scope (president of the USA), the time period (from January 20th, 2009 onward), and so on.

All of these contexts, conversational (were we talking about politics?), locational (USA, Russia, etc.), need to be captured as well as possible and appropriately considered when trying to apply a learned fact. Maybe the context is hypothetical (an alternate history where McCain won the election) or fictional (President Garrett Walker in the TV show House of Cards).

The existing systems I've seen can't really handle all of that.

Somehow we humans are generally good at figuring out which contexts apply in a given situation, and at doing it quickly. What's tricky is that even within a single conversation with one person, the context may switch from historical to hypothetical and back again.
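A rough sketch of what a context-qualified fact might look like (field names and values here are my own invention, not any existing system's schema):

    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class ContextualFact:
        subject: str
        predicate: str
        obj: str
        scope: str                        # e.g. president of *the USA*
        valid_from: Optional[date] = None
        valid_to: Optional[date] = None   # None: still true as far as we know
        modality: str = "actual"          # "actual", "hypothetical", "fictional"

    # "Obama is the president" with its context made explicit:
    obama = ContextualFact("Barack Obama", "holds_office", "President",
                           scope="USA", valid_from=date(2009, 1, 20))

    # The same surface claim in a fictional context:
    walker = ContextualFact("Garrett Walker", "holds_office", "President",
                            scope="USA (House of Cards)", modality="fictional")

Even this leaves out the conversational context, which is exactly the part that shifts mid-dialogue.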
This is one of the better explanations of this fundamental shift in AI research. I remember even a decade ago this would have been a controversial stance.

Now it is very clear the neats have lost and the scruffies are on the rise. http://en.wikipedia.org/wiki/Neats_vs._scruffies
The article presents Marvin Minsky as an advocate of symbolic-logic approaches, and Perceptrons (the book) as an attack on perceptrons, neural nets, and connectionist theories. This is false on both counts.

1) Minsky was emblematic of the scruffies, and to this day scruffy approaches are associated with Minsky and MIT. He didn't believe that statistical or connectionist approaches were bad, just that they were not a silver bullet and were not going to solve every problem in AI. He embraced nearly every prominent approach to AI, and simultaneously criticized the near-religious beliefs each one spawned.

2) Minsky didn't attack perceptrons and neural nets. His book (written with Seymour Papert) was an objective criticism of them, laying out very clearly their mathematical limitations, as well as the philosophical problems with treating them as a catch-all solution. The best-known of those limitations is sketched below.
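(This example is mine, not the thread's.) The result most often cited from Perceptrons is that a single-layer perceptron cannot compute XOR, because the four input/output cases are not linearly separable. A brute-force search over a grid of weights comes up empty:

    import itertools

    # XOR truth table: ((x1, x2), target)
    cases = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

    def perceptron(w1, w2, b, x1, x2):
        return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

    grid = [i / 4 for i in range(-20, 21)]  # weights in [-5, 5], step 0.25
    solutions = [
        (w1, w2, b)
        for w1, w2, b in itertools.product(grid, repeat=3)
        if all(perceptron(w1, w2, b, x1, x2) == y for (x1, x2), y in cases)
    ]
    print(solutions)  # [] -- no linear threshold unit computes XOR

The grid search only demonstrates it; the underlying argument is algebraic. The four cases force b <= 0, w2 + b > 0, w1 + b > 0, and w1 + w2 + b <= 0; summing the two strict inequalities gives w1 + w2 + 2b > 0, which together with b <= 0 contradicts the last constraint.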
This shows the folly of modern approaches:

> Pornographic images, for instance, are frequently identified not by the presence of particular body parts or structural features, but by the dominance of certain colors in the images

Neural networks are unsuccessful because they are not detecting anything. They're just making statistical correlations that happen to be right for the specific set of training data they have received. There's no robustness whatsoever. Artificial gullibility.

If we want to teach computers to think, then we have to teach them to model the physical world we live in, plus the emotional and social world.

And even *that* is entirely separate from heavyweight abstract reasoning. Just because a computer can pass the Turing test doesn't mean it can spontaneously diagnose your medical problems, design a suspension bridge, or play Go.
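To make the quoted failure mode concrete, here is a toy "detector" of my own construction (thresholds invented for the example) that keys only on color dominance, the shortcut the article describes. It confidently flags a photo of a sand dune:

    # Classify solely by the fraction of "skin-toned" pixels.
    def looks_explicit(pixels):
        """pixels: list of (r, g, b) tuples, 0-255."""
        skin_like = sum(1 for r, g, b in pixels
                        if r > 95 and g > 40 and b > 20 and r > g > b)
        return skin_like / len(pixels) > 0.5

    beach_portrait = [(210, 160, 120)] * 80 + [(30, 90, 200)] * 20
    sand_dune = [(200, 150, 110)] * 100  # desert landscape, no people

    print(looks_explicit(beach_portrait))  # True
    print(looks_explicit(sand_dune))       # True -- false positive on sand

Nothing in it knows what skin, bodies, or people are; it correlates, and the correlation breaks the moment the palette coincides.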
Seems to me you can buy and sell a face detector (or find one open source on GitHub); all the better if somebody, or something, somewhere can identify the *concept* of a face detector.
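Indeed, the off-the-shelf version is a few lines. A sketch using OpenCV's stock Haar cascade, which ships with the opencv-python package (the image path is a placeholder):

    import cv2

    # Load OpenCV's bundled frontal-face Haar cascade.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    img = cv2.imread("photo.jpg")  # placeholder path
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print(f"found {len(faces)} face(s)")  # bounding boxes, not a concept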
State of the art in 1984:

https://www.youtube.com/watch?v=_S3m0V_ZF_Q