Hey, I wonder if we can use LLMs to learn learning patterns. I guess the bottleneck would be the curse of dimensionality when it comes to real-world problems, but I think maybe (correct me if I'm wrong) geographic/domain-specific attention networks could be used.

Maybe it's like:

1. Intention, context
2. Attention scanning for components
3. Attention network discovery
4. Rescan for missing components
5. If no relevant context exists or is found, fall back
6. Learned parameters are initially greedy
7. Storage of parameters gets reduced over time by other contributors

I guess this relies on the tough parts being there: induction, deduction, and abductive reasoning.

Can we fake reasoning to test hypotheses that alter the weights of whatever model we use for reasoning?
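
Just to make the numbered steps above concrete, here's a very rough Python sketch of the loop I have in mind. Every name in it is made up, and the "attention scan" is just keyword overlap standing in for a real attention mechanism, so treat it as pseudocode with runnable syntax rather than an actual design:

    from dataclasses import dataclass, field

    @dataclass
    class ParameterStore:
        """Steps 6-7: keep everything greedily at first, prune later."""
        params: dict = field(default_factory=dict)

        def add_greedy(self, key, value):
            # step 6: learned parameters are initially greedy -- store everything
            self.params[key] = value

        def prune(self, keep_fraction=0.5):
            # step 7: storage shrinks over time (here: drop the lowest-scoring half)
            ranked = sorted(self.params.items(), key=lambda kv: kv[1]["score"], reverse=True)
            keep = max(1, int(len(ranked) * keep_fraction))
            self.params = dict(ranked[:keep])

    def scan_components(intention, context):
        # step 2: "attention scan" -- keyword overlap as a stand-in for real attention
        return [c for c in context if any(tok in c for tok in intention.split())]

    def discover_network(components):
        # step 3: link components that co-occur; a real system would learn these edges
        return {(a, b) for a in components for b in components if a != b}

    def learn(intention, context, store):
        # step 1: intention + context arrive as inputs
        components = scan_components(intention, context)       # step 2
        network = discover_network(components)                 # step 3
        leftovers = [c for c in context if c not in components]
        components += scan_components(intention, leftovers)    # step 4: rescan
        if not components:                                      # step 5: nothing relevant, bail
            return None
        store.add_greedy(intention, {"network": network, "score": len(network)})
        return network

    store = ParameterStore()
    learn("route packets", ["route table", "packets queue", "unrelated note"], store)
    store.prune()  # later, "other contributors" would trigger something like this
    print(store.params)

The interesting/hard part is obviously what replaces the keyword scan and the co-occurrence network, which is where the induction/deduction/abduction question comes back in.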