At face value I'm suspicious; I thought there were genuine ambiguities or label errors in some of those datasets, which makes it surprising you could even meaningfully define 100% accuracy.<p>Also, reading the paper a bit, it's either badly written and I just don't understand what they're saying at all, or BS. It doesn't really explain anything about their implementation; it just says what they did and that they got 100% accuracy, and throws in a bunch of jargon. Maybe I'm just not familiar enough with this area, but the way it's laid out raises even more red flags.
Also, can anybody share a more informal, high-level intuition about what this 'Learning with signatures' approach is about? It seems to be a rather recent topic in learning (the paper cites 2019+ publications).
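Not the paper's method, but the general object it builds on: a path signature is a sequence of iterated integrals of a path, used as a fixed-size feature vector for sequential data (the level-1 terms are just the total increments; level-2 terms capture ordered cross-effects, e.g. signed area). A minimal NumPy sketch for a piecewise-linear path, truncated at depth 2 and combined segment-by-segment via Chen's identity (function name and structure are my own, purely illustrative):

```python
import numpy as np

def signature_level2(path):
    """Depth-2 truncated signature of a piecewise-linear path.

    path: (n_points, d) array of sample points. Returns (s1, s2) where
    s1[i]    = total increment of coordinate i, and
    s2[i, j] = iterated integral over s < t of dX^i_s dX^j_t.
    """
    path = np.asarray(path, dtype=float)
    d = path.shape[1]
    s1 = np.zeros(d)
    s2 = np.zeros((d, d))
    for k in range(len(path) - 1):
        inc = path[k + 1] - path[k]  # increment of this linear segment
        # Chen's identity: concatenating the running path with one segment
        # adds a cross term (old level-1 times new increment) plus the
        # segment's own level-2 part, which is 0.5 * outer(inc, inc).
        s2 += np.outer(s1, inc) + 0.5 * np.outer(inc, inc)
        s1 += inc
    return s1, s2
```

The antisymmetric part of `s2` is the Lévy area (how much one coordinate "leads" another), which is the kind of order-sensitive feature a plain histogram of the data can't see.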