"Contextual adaptation" is what they want, but that doesn't mean it's coming in the near future. However, this does mean that funding for research on it will be available.<p>As I've said for years, the big lack in AI is in the "common sense" and unstructured manipulation area. Nobody can build something with squirrel levels of manipulation and agility, even in simulation. Robot manipulation in unstructured situations is still very poor. The people trying to simulate C. elegans at the neuron level can't get that to work, despite a full wiring diagram and years of effort.<p>Something very low level is not understood. There's a Nobel Prize waiting for whomever figures that out.
BTW, the Tay example in the slides is bogus. Zero AI involved. You should ask what's been cut off when you see a tweet reply like that. This is what's cut off: https://twitter.com/daviottenheimer/status/712889915533500416

Tay had a "repeat after me" feature (an ancient feature of just about any IRC chatbot: echo back a given string). That's all this is. A troll account issued the command "Repeat after me" and then tweeted "@TayandYou HITLER DID NOTHING WRONG!", and the Tay daemon dutifully repeated it back to the troll in that thread.
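For anyone unfamiliar with the pattern, here is a minimal sketch of that kind of echo command. This is purely illustrative: Tay's actual code is not public, so the trigger phrase and parsing below are assumptions, not its real implementation.

    import re

    def handle_message(text):
        """Minimal 'repeat after me' echo handler, in the style of old IRC bots.

        Hypothetical sketch only -- the trigger phrase and regex are
        assumptions for illustration; Tay's real implementation is not public.
        """
        match = re.match(r"(?is)repeat after me[:,]?\s*(.+)", text)
        if match:
            # Parrot the rest of the message back verbatim, with no filtering
            # or understanding of its content.
            return match.group(1)
        return None

    # Any string after the trigger phrase comes back unchanged:
    print(handle_message("repeat after me: HELLO WORLD"))  # -> HELLO WORLD

No model, no learning, no "AI" anywhere in the loop: the bot just returns whatever string follows the command.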
(2016?)

Also:

> First wave: Handcrafted knowledge

> Second wave: Statistical learning

> Third wave: Contextual adaptation

I understood the first two clearly enough, but the slides become increasingly ambiguous and fuzzy towards the end, and it seems to me they are mixing up a bunch of desiderata that are not self-evidently related.

It is not immediately obvious, for instance, that small "generative" models that are easy to interpret necessarily lead to better "abstraction" (whatever that means). And what any of this has to do with "contextual adaptation" is anyone's guess.

It is highly alarming (but sadly, from experience, unsurprising) to see such a fuzzy position document from such an important funding agency for AI.
It's good to see a major funding agency's perspective.

As a researcher, I like their non-hype way of defining AI as "programmed ability", which is accurate and realistic. It also sets AI clearly apart from real intelligence, which implies unanticipated behaviour.

I would like to know more about what they see as "abstracting", from their perspective.

We haven't got much further in our scientific understanding of intelligence: compare a psychology textbook bought today with one from ten years ago and you won't find any breakthrough change in how cognition is modeled. And however impressively some AI models perform certain tasks, none has ever taken me by surprise by asking a question out of the blue, which is one of my personal litmus tests for intelligence.
I wonder who is trying to get what funded? "Second wave" is going gangbusters, despite Gary Marcus' every-six-months rant; the review of statistical learning is reasonable given a barely technical audience, but the summary of "third wave" seems designed to extract large amounts of funding from people who aren't up to date on the state of the field.
Here is the presentation in the form of a blog post:

https://machinelearning.technicacuriosa.com/2017/03/19/a-darpa-perspective-on-artificial-intelligence/
A nice high-level overview of the three waves of AI (two have happened; the "contextual adaptation" wave is yet to occur). Includes examples of both successes and failures ("Young man holding a baseball bat", indeed :) ).

"Systems construct contextual explanatory models for classes of real world phenomena" is the next goal. That is, understanding plus being able to describe the reasoning behind the understanding.

No technical depth, really, but lots of words to google if you want to learn more.
Here's the presentation in video form, with a bit more context and detail:

https://youtu.be/-O01G3tSYpU
The most pressing dangers of AI, as most researchers see them:

- error rate too high

- you can trick a classifier with noise (adversarial examples; see the sketch below)

- it's racist sometimes

Actual dangers of AI:

- the stop problem

- infeasibility of sandboxing

- difficulty of aligning black boxes with human values
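On the second item ("trick a classifier with noise"): the canonical demonstration is the fast gradient sign method. A minimal sketch of the perturbation step, assuming the attacker can obtain the loss gradient with respect to the input from whatever model is being attacked; the epsilon value and the [0, 1] pixel range are illustrative assumptions, not fixed requirements.

    import numpy as np

    def fgsm_perturb(x, grad_wrt_x, epsilon=0.05):
        """Fast gradient sign method: nudge every input dimension a small
        step in the direction that increases the classifier's loss.

        x           -- input (e.g. an image with pixel values in [0, 1])
        grad_wrt_x  -- gradient of the model's loss with respect to x,
                       obtained from whatever model is being attacked
        epsilon     -- perturbation budget; small enough to look like noise
                       to a human, often enough to flip the prediction
        """
        x_adv = x + epsilon * np.sign(grad_wrt_x)
        return np.clip(x_adv, 0.0, 1.0)  # keep the result a valid image

The unsettling part is that the perturbation is computed, not random: it exploits the model's own gradients, which is exactly the kind of failure the "pressing dangers" list above tends to treat as a nuisance rather than a structural problem.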