Great that folks are starting to think about this; I think it's important to keep the power of AI decentralized. Though it seems like for this to be practical, they need to flesh out the Security section in the appendix to deal with bad actors. I like the probabilistic approach in lieu of full Byzantine fault tolerance. Would be interesting to see what kind of guarantees we could still have about correctness/convergence even with bad actors.
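To make the intuition concrete, here's a rough sketch of the kind of probabilistic spot-checking I have in mind. This is purely hypothetical, not the paper's actual mechanism; `call_with_spot_check`, `run_remote`, `run_trusted`, and `audit_prob` are made-up names for illustration:

```python
import random

def call_with_spot_check(run_remote, run_trusted, inputs, audit_prob=0.1, tol=1e-3):
    """Hypothetical probabilistic verification (not Petals' actual scheme).

    Each remote answer is re-checked on a trusted node with probability
    audit_prob. A peer that corrupts k answers escapes detection with
    probability (1 - audit_prob) ** k, which shrinks exponentially,
    without needing a full Byzantine quorum.
    """
    remote_out = run_remote(inputs)              # untrusted peer's result
    if random.random() < audit_prob:             # occasionally audit
        trusted_out = run_trusted(inputs)        # recompute on a trusted node
        if any(abs(r - t) > tol for r, t in zip(remote_out, trusted_out)):
            raise RuntimeError("spot check failed: blacklist this peer")
    return remote_out

# Example: a dishonest peer that doubles every activation gets caught quickly.
honest = lambda xs: [x * 1.0 for x in xs]
cheater = lambda xs: [x * 2.0 for x in xs]
try:
    for i in range(100):
        call_with_spot_check(cheater, honest, [1.0, 2.0, 3.0])
except RuntimeError as err:
    print(f"caught after {i + 1} calls: {err}")
```

The open question is what this buys you for training (where bad gradients can still nudge convergence) rather than inference, where a single wrong activation is easy to detect after the fact.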
Here is the main arXiv page (the HTML version is a mess in lockdown mode): https://arxiv.org/abs/2312.08361
You can check out their project at https://github.com/bigscience-workshop/petals
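For anyone who wants to kick the tires, the README shows roughly this kind of usage. Sketch from memory; the exact class names and the example checkpoint may differ between versions:

```python
# Rough usage sketch based on the Petals README (may not match the current release).
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "petals-team/StableBeluga2"  # a model hosted on the public swarm
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)  # connects to the swarm

inputs = tokenizer("A cat sat", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)  # transformer blocks run on volunteers' GPUs
print(tokenizer.decode(outputs[0]))
```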
I’ve long assumed something like this existed in a corporate lab but hadn’t been made public (or, ahem, “open”).

It’s an obvious move as open-source models get bigger; really happy to see this out in the world, especially with an HF author attached.
Wow! I was basically mulling this over earlier today in the HN learn Discord[1] for a practical project. If you want to tag along and do something useful, come say hi (the more the merrier!).

[1] https://discord.gg/SAAn3xXC