None of these solutions are ideal, although Zenodo's better than most.
As far as I can tell, they're all targeted more towards the final, authoritative release, so it seems you're still out of luck during the paper <i>writing</i> process.
What if I'm just trying to share a dataset/pre-trained model with remote collaborators?<p>I ran into this while doing some OCR experiments[1], where acquiring data and pre-trained models turned out to be the most time-consuming part of the whole enterprise.
This ended up adding enough extra hassle that I never got anything really interesting going, although figuring out how to containerize other people's code was educational.
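For what it's worth, the containerization itself didn't need to be fancy. A minimal sketch of the pattern I ended up with (the base image, file layout, and entry point are all placeholders here, not any particular project's setup):

```dockerfile
# Minimal sketch for containerizing someone else's research code.
# Placeholders: base image, requirements file, and entry point vary per project.
FROM python:3.9-slim

WORKDIR /app

# Copy the project in; pinning a specific commit beforehand means
# collaborators all build from the same code.
COPY . /app
RUN pip install --no-cache-dir -r requirements.txt

# Mount large data/models at runtime rather than baking them into the image.
VOLUME ["/data", "/models"]

ENTRYPOINT ["python", "run.py"]
```

Keeping the datasets and pre-trained weights out of the image (mounted as volumes instead) is what keeps the image small enough to actually share.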
Personally, I think I'll be relying on some combination of institutional repositories + torrents/IPFS for any large datasets/models I end up releasing in the future.<p>-----<p>1. <a href="https://github.com/rldotai/ocr-experiments" rel="nofollow">https://github.com/rldotai/ocr-experiments</a>
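In case it's useful to anyone, the IPFS side really is just a couple of commands. A sketch, assuming collaborators have an IPFS daemon running (or you pin through a pinning service so the content stays reachable); the paths and CID are placeholders:

```shell
# Add a dataset directory to IPFS; this prints a content hash (CID) to share
ipfs add -r ./my-dataset/

# Collaborators fetch it by CID
ipfs get <CID> -o my-dataset/

# Pin it on a node that stays online, or the content may become unreachable
ipfs pin add <CID>
```

The catch, as with torrents, is that someone has to keep seeding/pinning, which is where the institutional repository half of the combination comes in.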