Also, the most recent state of progress: "Predictive Coding Can Do Exact Backpropagation on Any Neural Network" (2021)<p><a href="https://arxiv.org/abs/2103.04689" rel="nofollow">https://arxiv.org/abs/2103.04689</a>
Here's a link with the reviewer comments: <a href="https://openreview.net/forum?id=PdauS7wZBfC" rel="nofollow">https://openreview.net/forum?id=PdauS7wZBfC</a> (praise to OpenReview!)
AstralCodexTen (formerly SlateStarCodex) has discussed this here: <a href="https://astralcodexten.substack.com/p/link-unifying-predictive-coding-with" rel="nofollow">https://astralcodexten.substack.com/p/link-unifying-predicti...</a><p>He mostly points to this post on LessWrong: <a href="https://www.lesswrong.com/posts/JZZENevaLzLLeC3zn/predictive-coding-has-been-unified-with-backpropagation" rel="nofollow">https://www.lesswrong.com/posts/JZZENevaLzLLeC3zn/predictive...</a>
If backprop is not needed, would this finding make automatic-differentiation functionality obsolete in DL frameworks, allowing these frameworks to become much simpler? Or is there still some constant factor that makes backprop favorable?
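For intuition on the constant-factor part, here is a rough toy sketch (my own numpy illustration, not the paper's code, and only for a plain MLP rather than the arbitrary computation graphs the paper covers). It recovers the backprop gradients from purely local prediction errors via an inference schedule, with no autodiff anywhere:

    # Rough sketch (illustrative names; assumes a plain MLP and MSE loss 0.5*||out - y||^2).
    import numpy as np

    rng = np.random.default_rng(0)
    f, df = np.tanh, lambda v: 1.0 - np.tanh(v) ** 2

    sizes = [4, 8, 8, 2]                      # toy 2-hidden-layer MLP
    W = [rng.normal(0, 0.5, (sizes[l + 1], sizes[l])) for l in range(len(sizes) - 1)]
    x_in, y = rng.normal(size=(4, 1)), rng.normal(size=(2, 1))

    def pc_gradients(W, x_in, y):
        # Predictive-coding style pass: initialise activities with a feedforward sweep,
        # clamp the output to the target, relax hidden activities with rate 1, and read
        # off layer l's weight gradient at step t = L-1-l, when its local error equals
        # (minus) the backprop delta for that layer.
        L = len(W)
        x = [x_in]
        for l in range(L):
            x.append(W[l] @ f(x[l]))          # feedforward initialisation
        x[L] = y.copy()                       # clamp output layer to the target
        grads = [None] * L
        for t in range(L):
            eps = [None] + [x[l + 1] - W[l] @ f(x[l]) for l in range(L)]  # local errors
            l = L - 1 - t
            grads[l] = -eps[l + 1] @ f(x[l]).T   # error = minus the backprop delta here
            for h in range(1, L):                # relax hidden activities (rate 1)
                x[h] = x[h] - (eps[h] - df(x[h]) * (W[h].T @ eps[h + 1]))
        return grads

    def bp_gradients(W, x_in, y):
        # Ordinary backprop for the same network and loss, as a reference.
        L = len(W)
        x = [x_in]
        for l in range(L):
            x.append(W[l] @ f(x[l]))
        delta = x[L] - y
        grads = [None] * L
        for l in reversed(range(L)):
            grads[l] = delta @ f(x[l]).T
            if l > 0:
                delta = df(x[l]) * (W[l].T @ delta)
        return grads

    for l, (a, b) in enumerate(zip(pc_gradients(W, x_in, y), bp_gradients(W, x_in, y))):
        print(f"layer {l}: relative difference {np.linalg.norm(a - b) / np.linalg.norm(b):.2e}")

Even in this exact variant, each layer's gradient only becomes available after the error has been relayed backwards one layer per relaxation step, and more general predictive-coding training runs many such inference iterations per batch. That overhead seems to be the constant factor that keeps reverse-mode autodiff favorable on current hardware; the appeal of the predictive-coding view is locality and biological plausibility rather than simpler frameworks.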