Do people generally read these things before upvoting? Legitimately curious.<p>As stated in the abstract, the paper lists some possible goals for achieving more general, human-like intelligence beyond the fancy function approximation we get from deep supervised learning, from both architectural and functional perspectives. In general I find the language fairly wishy-washy and the writing often awkward, but it is a nice summary of relevant thoughts and concepts. Beyond the abstract, here is a bit of summary and my thoughts.<p>For architectural aspects, it lists:<p>1) Unsupervised - Agrees with LeCun, Bengio, etc. But I'm not sure it's fair to conclude this yet; maybe it should be reinforcement learning? Our brains are prewired to do some things.<p>2) Compositional - basically hierarchical, aka deep. Seems reasonable.<p>3) Sparse and Distributed - again plausible, and empirically seen in deep learning. One reason ReLU neurons are nice is that they lead to sparser distributed representations (toy sketch right after this list).<p>4) Objectiveless - a metaphysical statement having to do with the Chinese room argument? This seems to mean not optimizing a fixed objective function with gradient descent, and instead "Clearly, the learning algorithm should have a goal, which might be defined very broadly such as the theory of curiosity, creativity and beauty described by J. Schmidhuber". Still seems vague to me.<p>5) Scalable - Again not the best choice of words; it seems to argue for parallelism, as well as a "hierarchical structure allowing for separate parallel local and global updates of synapses, scalability and unsupervised learning at the lower levels with more goal-oriented fine-tuning in higher regions." I am disappointed there was no discussion of memristors or neuromorphic computing here.
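<p>To make the ReLU point in (3) concrete, here is a toy numpy sketch (mine, not the paper's): ReLU clamps negative pre-activations to exactly zero, so a randomly initialized hidden layer already produces a code where many units are silent for any given input, i.e. sparse and distributed.

    import numpy as np

    # Toy illustration: ReLU zeroes out negative pre-activations, so the
    # hidden code contains many exact zeros (a sparse, distributed code).
    rng = np.random.default_rng(0)
    pre_activations = rng.standard_normal((4, 16))  # 4 inputs, 16 hidden units
    hidden = np.maximum(0.0, pre_activations)       # ReLU
    print("fraction of exactly-zero activations:", (hidden == 0).mean())  # ~0.5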
<p>For functional aspects, it lists:<p>1) Compression - sure, pattern matching is in a sense compression, so this seems fairly obvious.<p>2) Prediction - "Whereas the smoothness prior may be considered as a type of spatial coherence, the assumption that the world is mostly predictable corresponds to temporal or more generally spatiotemporal coherence. This is probably the most important ingredient of a general-purpose learning procedure." Again, reasonable enough.<p>3) Understanding - basically equivalent to predicting?<p>4) Sensorimotor - not clear to me? Similar to human eye movements?<p>5) Spatiotemporal Invariance - "one needs to inject additional context"; i.e., keeping constant concepts of things across space and time?<p>6) Context update/pattern completion - "The last functional component postulated by this paper is a continuous (in theory) loop between bottom-up predictions and top-down context." Constant cycling between prediction and world-state update, pretty clear (toy loop below).
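<p>For what it's worth, here is roughly how I picture that last loop, as a tiny numpy toy (again mine, not anything from the paper): a top-down "context" vector generates a prediction of the input, the bottom-up prediction error comes back, and the context is nudged in the direction that reduces that error, over and over.

    import numpy as np

    # Toy predict -> compare -> update loop (my reading, not the paper's algorithm).
    # "context" stands in for the top-down world state; each step it is nudged
    # in the direction that reduces the current bottom-up prediction error.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((8, 8)) * 0.3   # maps context to a predicted observation
    context = np.zeros(8)

    for t in range(200):
        observation = np.sin(t / 20.0 + np.arange(8))  # stand-in for sensory input
        prediction = W @ context                        # top-down prediction
        error = observation - prediction                # bottom-up surprise
        context += 0.1 * (W.T @ error)                  # gradient step on squared error w.r.t. context

    print("final prediction error:", round(float(np.linalg.norm(error)), 3))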