I've always found R dependency and package management to be considerably worse than Python's. Years ago we needed to use a Microsoft mirror of CRAN to pin versions, and it routinely went down, making builds impossible. Pushing to CRAN is arbitrary and tedious; for instance, you _must_ support Oracle Solaris. Lastly, the maintainer of CRAN is notoriously prickly. This means it's far easier to get new algorithms onto PyPI than CRAN, which is a big reason why data science leans Python over R.<p>I feel like CRAN is a great example of how more restrictions and tests can actually make software quality worse when they prevent iteration and create toil for devs.
This gives a good overview of the pains of getting a package onto CRAN, but that is sadly only the beginning. If you end up getting users, the process described here becomes fractally bad for future updates, as you can also receive rejections for breaking any test in any dependent package.<p>Any test, in any package, even if it's running a regex over an error message whose wording you updated to remove a typo.
There’s probably a healthy balance somewhere between this and the PyPI approach. But I will say, R (including most CRAN libraries) is a dream to use for data analysis, whereas Python is an exercise in frustration. CRAN libraries are idiomatic, work without fuss, and feel like a cohesive ecosystem. In my experience, stitching together too many Python packages feels like holding an airplane together with tape.
In the R ecosystem, it seems like the faults of the language and packages are made up for by the ease of use of RStudio. I've tinkered/hacked in a lot of interpreted languages, but I've been most productive with R because of RStudio.