I'd add robot manipulation in unstructured environments. Progress has been very slow, though it has picked up a little in recent years. Stanford's vision-guided robot assembly, 1973.[1] DARPA's robot manipulation project, 2012.[2]

[1] https://archive.org/details/sailfilm_pump

[2] https://www.youtube.com/watch?v=jeABMoYJGEU
This seems like a compendium of metrics for processes AI is making progress on now. Doing something like that seems like a fine idea - I can't judge the quality of these metrics, but it's hard to be excited by this.

However, what I think would be interesting would be for researchers to make a compendium of "human abilities", classifying and quantifying them as well as possible. One could then analyze the progress AI is making towards emulating those capacities.

Obviously, this would be a rather crude measure, but it could at least give some idea of AI's progress toward new capacities.
I made an alternative interface that displays the same data using D3: https://jamesscottbrown.github.io/ai-progress-vis/index.html
https://rodrigob.github.io/are_we_there_yet/build/ is something similar, although the last update was in Feb 2016.
There's no way we have reached human-level performance on image recognition tasks. My guess is that when there is ambiguity (e.g. 'ship' vs. 'boat'), the AI is better at learning which answer the labellers chose. Humans haven't looked through the training data, so they fall back on their real-life priors, which may not match the labellers' conventions.

Just a guess, but whether or not it's true, we're definitely not at human-level performance.
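One crude way to check this guess (a sketch with invented toy data - none of it comes from any actual benchmark): collect fresh human labels for the test set, then compare how often the model and the fresh humans each agree with the original labellers, per class. If the model's edge is concentrated in confusable classes like 'ship' vs 'boat', it has learned the labellers' conventions, not better perception.

    # Toy sketch, invented data: per-class agreement with the original
    # labellers, for the model vs. fresh human annotators.
    from collections import defaultdict

    def agreement_by_class(items, judge_key):
        """Fraction of items where the judge matches the original label, per class."""
        hits, totals = defaultdict(int), defaultdict(int)
        for it in items:
            cls = it["original"]
            totals[cls] += 1
            hits[cls] += (it[judge_key] == cls)
        return {cls: hits[cls] / totals[cls] for cls in totals}

    # Each item: the original labeller's choice, the model's prediction,
    # and a fresh human annotator's label. All values made up.
    items = [
        {"original": "ship", "model": "ship", "human": "boat"},
        {"original": "ship", "model": "ship", "human": "ship"},
        {"original": "boat", "model": "boat", "human": "ship"},
        {"original": "dog",  "model": "dog",  "human": "dog"},
        {"original": "dog",  "model": "cat",  "human": "dog"},
    ]

    model_agree = agreement_by_class(items, "model")
    human_agree = agreement_by_class(items, "human")
    for cls in sorted(model_agree):
        print(f"{cls}: model {model_agree[cls]:.2f}, human {human_agree[cls]:.2f}")

In this toy data the model "beats" humans only on the ship/boat pair, which is exactly the pattern you'd expect from learned labelling conventions rather than superhuman vision.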
Harvard professor: "We are Building Artificial Brains and Uploading Minds to Cloud right now"

https://www.youtube.com/watch?v=amwyBmWxESA
1) No robotics? No interaction with the physical world at all?

2) No measure of the AI's ability to teach others? How can you say an AI really understands something if it can't then (a) teach what it has learned, and (b) recognize which essential facts a tyro doesn't know or misunderstands?

3) No assessment of the AI's semantic interpretive skills, like those long emphasized by cognitive scientists -- e.g. in Doug Hofstadter's "Fluid Concepts and Creative Analogies": Miller analogies, double entendres, literary symbolism, poetry interpretation, and so on?

Without mastery of analogies, an AI will have all the cultural insightfulness of a portrait of leisure-suit Elvis in neon paint on velvet.
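For what it's worth, the closest thing to an analogy metric in common use is the word-vector parallelogram test, which is crude enough to underline the point. A toy sketch (the 2-d "embeddings" below are invented for illustration; real tests use learned embeddings over a large vocabulary):

    # Crude sketch of an "A is to B as C is to ?" test via vector
    # arithmetic, using made-up 2-d vectors purely for illustration.
    import math

    vectors = {
        "king":  (0.9, 0.8), "queen": (0.9, 0.2),
        "man":   (0.5, 0.8), "woman": (0.5, 0.2),
    }

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm

    def solve_analogy(a, b, c, vocab):
        # Parallelogram rule: answer should lie near b - a + c.
        target = tuple(vb - va + vc for va, vb, vc in zip(vocab[a], vocab[b], vocab[c]))
        candidates = {w: v for w, v in vocab.items() if w not in (a, b, c)}
        return max(candidates, key=lambda w: cosine(candidates[w], target))

    print(solve_analogy("man", "king", "woman", vectors))  # hoped-for: 'queen'

Vector arithmetic catches "man : king :: woman : queen" but has nothing to say about a double entendre or a line of poetry, which is rather my point.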