I was brainstorming side-project ideas.
I already implemented Shannon 1948 (A Mathematical Theory of Communication) and Hinton 1986 (Learning representations by back-propagating errors).

I was curious: if Shannon had collaborated with Hinton after 1986, what would they have made?

Would we have a wild neural network architecture?
Would we be using data compressors instead of weight matrices to represent neural networks?

What do you guys think? I bet it's a silly question.
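To make the question concrete, here's the kind of toy experiment I'm imagining (a minimal sketch in Python; the 2-4-1 XOR net and the 0.25 quantization step are illustrative assumptions, not anything from either paper): train a tiny net with plain backprop, then ask how many bits Shannon's entropy says an ideal compressor would need to store its quantized weights.

    import numpy as np
    from collections import Counter

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Tiny 2-4-1 sigmoid network trained with plain backprop (Hinton 1986).
    W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
    W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(5000):
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)   # MSE gradient through sigmoid
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= h.T @ d_out; b2 -= d_out.sum(axis=0)
        W1 -= X.T @ d_h;   b1 -= d_h.sum(axis=0)

    # Quantize every learned parameter to a 0.25 grid (an arbitrary choice),
    # then compute the Shannon entropy (Shannon 1948) of the symbol stream:
    # roughly the bits/weight an ideal entropy coder would need.
    params = np.concatenate([W1.ravel(), b1, W2.ravel(), b2])
    symbols = np.round(params * 4).astype(int)
    counts = Counter(symbols.tolist())
    p = np.array(list(counts.values())) / len(symbols)
    H = -(p * np.log2(p)).sum()
    print(f"~{H:.2f} bits/weight, ~{H * len(symbols):.0f} bits for the whole net")

Since this net has only 17 parameters, the entropy can't exceed log2(17) ≈ 4.1 bits/weight, already far below the 64 bits per float that vanilla backprop stores. That gap is roughly what the question is poking at.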
Here's the Twitter discussion if you'd like to contribute there: https://x.com/murage_kibicho/status/1883473056989172120