Seems like a variant of a Siamese network which uses binarized embedding vectors for predictions instead of the raw embedding vectors. What exactly is the novelty presented here?
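To make concrete what I mean by "binarized embeddings for predictions", here is a minimal sketch (not the paper's code; the encoder, dimensions, and similarity measures are my own stand-ins): a shared encoder produces a continuous embedding, which is then thresholded to a {-1, +1} code, and similarity is scored in Hamming space instead of cosine space.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((512, 64))   # stand-in for a shared, trained encoder

    def embed(x):
        return np.tanh(x @ W)            # raw (continuous) embedding

    def binarize(v):
        return np.sign(v)                # {-1, +1} code used for the prediction

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    def hamming_sim(a, b):
        return np.mean(a == b)           # fraction of matching bits

    x1, x2 = rng.standard_normal(512), rng.standard_normal(512)
    e1, e2 = embed(x1), embed(x2)
    print("raw cosine similarity:     ", cosine(e1, e2))
    print("binarized Hamming similarity:", hamming_sim(binarize(e1), binarize(e2)))

If that's all there is to it, the comparison step looks like standard Siamese-network practice, hence my question about the novelty.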
I would like to see deep learning working with embodied cognition somehow: http://www.jtoy.net/blog/grounded_language.html
You might also be interested in the recent work on the "resonator networks" VSA architecture [1-4] from the Olshausen lab at Berkeley (P. Kanerva, who created the influential SDM model [5], is one of the lab members).

It's a continuation of Plate's [6] and Kanerva's work in the '90s, and of Olshausen's groundbreaking work on sparse coding [7], which inspired the popular sparse autoencoders [8].

I find it especially promising that they found this superposition-based approach to be competitive with the optimization so prevalent in modern neural nets. Maybe backprop will die one day and be replaced with something more energy-efficient along these lines.

[1] https://redwood.berkeley.edu/wp-content/uploads/2020/11/frady2020resonator.pdf

[2] https://redwood.berkeley.edu/wp-content/uploads/2020/11/kent2020resonator.pdf

[3] https://arxiv.org/abs/2009.06734

[4] https://github.com/spencerkent/resonator-networks

[5] https://en.wikipedia.org/wiki/Sparse_distributed_memory

[6] https://www.amazon.com/Holographic-Reduced-Representation-Distributed-Structures/dp/1575864304

[7] http://www.scholarpedia.org/article/Sparse_coding

[8] https://web.stanford.edu/class/cs294a/sparseAutoencoder.pdf
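For anyone curious what a resonator network actually does, here is a toy numpy sketch of the basic binding/factorization loop as I understand it from [1, 2] (the dimension, codebook sizes, and iteration count are arbitrary choices for the demo, not anything from the papers): three bipolar codevectors are bound into one composite vector by elementwise multiplication, and the factors are recovered by iterating "unbind the current estimates of the other factors, then clean up against the codebook".

    import numpy as np

    rng = np.random.default_rng(1)
    D, K = 2000, 25                               # vector dimension, codebook size
    A, B, C = (np.sign(rng.standard_normal((K, D))) for _ in range(3))

    ia, ib, ic = 3, 7, 11                         # ground-truth factor indices
    s = A[ia] * B[ib] * C[ic]                     # bound composite (Hadamard product)

    def cleanup(codebook, v):
        """Project onto the codebook and re-binarize (the resonator's nonlinearity)."""
        return np.sign(codebook.T @ (codebook @ v))

    # start each estimate from the superposition of all codevectors in its codebook
    a_hat = np.sign(A.sum(0)); b_hat = np.sign(B.sum(0)); c_hat = np.sign(C.sum(0))
    for _ in range(50):
        a_hat = cleanup(A, s * b_hat * c_hat)     # unbind the other two estimates
        b_hat = cleanup(B, s * a_hat * c_hat)
        c_hat = cleanup(C, s * a_hat * b_hat)

    print("recovered:", (A @ a_hat).argmax(), (B @ b_hat).argmax(), (C @ c_hat).argmax())

The appealing part is that the whole search over the 25^3 possible factorizations happens through superposition and cleanup, no gradients anywhere, which is what I meant about it being competitive with optimization.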
What makes us humans intelligent and able to learn so quickly is our reasoning faculty, especially our conceptual reasoning capabilities. There is no intelligence or learning without that, just sophisticated ML/DL pattern matching and perception. Symbolic AI led to the first AI winter because a symbol is just an object that represents another object; that's not a lot to work with.

The AI industry needs to finally discover conceptual reasoning to actually achieve any understanding. In the meantime, huge sums of money, energy, and time are being wasted on ML/DL on the idea that, given enough data and processing power, intelligence will magically happen.

This IBM effort doesn't even remotely model how the human brain works.