It's not a terrible idea to support the absolute basics like mean and variance, but anything beyond that (particularly models or tests) is not a good fit for a standard library. Once you hit even something as simple as linear regression, you run into questions of how to represent missing or discrete variables, how to handle collinearity, and whether to compute in online or batch mode, which can give different results. Tests in particular are fraught: if you're going to make them available for general consumption, they need a good explanation of when they're appropriate, which is basically a semester course in statistics and well out of scope for standard library docs.

Basically, the idea of "batteries included" should also mean that if something looks like you can put a D-cell in there, you're unlikely to blow your arm off.
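To make the online-vs-batch point concrete, here's a minimal sketch (my own illustration, not from the PEP): two single-pass "online" variance computations, the naive sum-of-squares formula and Welford's algorithm, can give visibly different answers on the same data once it sits far from zero. That's exactly the kind of behavioral choice a stdlib API would have to pin down.

    # Illustration only: two single-pass variance computations that disagree.
    def naive_variance(data):
        """Single-pass textbook formula; numerically fragile."""
        n, s, sq = 0, 0.0, 0.0
        for x in data:
            n += 1
            s += x
            sq += x * x
        return (sq - s * s / n) / (n - 1)

    def welford_variance(data):
        """Welford's single-pass algorithm; numerically stable."""
        n, mean, m2 = 0, 0.0, 0.0
        for x in data:
            n += 1
            delta = x - mean
            mean += delta / n
            m2 += delta * (x - mean)
        return m2 / (n - 1)

    # Shift the data far from zero and the answers diverge.
    data = [1e9 + x for x in (4.0, 7.0, 13.0, 16.0)]
    print(naive_variance(data))    # badly wrong, may even come out negative
    print(welford_variance(data))  # ~30.0, the correct sample variance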
Batteries included is a fine philosophy when starting a language, to encourage early adoption, but at this point I don't think it's worth adding new libraries to the stdlib. Here's why:

- It's very easy to find and install third-party modules.

- Once a library is added to the stdlib, its API is essentially frozen. This means we can end up stuck with less-than-ideal APIs (shutil/os, urllib2/urllib, etc.), or Guido & co. are stuck in a time-consuming PEP/deprecate/delete loop for even minor API improvements.

- Libraries outside the stdlib are free to evolve, and users of those libraries who don't want to stay on the bleeding edge are free to stay on old versions.
Just out of curiosity, I submitted this yesterday:

https://news.ycombinator.com/item?id=6190603

The URL was

    http://www.python.org/dev/peps/pep-0450/

While this is

    http://www.python.org/dev/peps/pep-0450

That is, exactly the same except for a trailing slash. Doesn't the deduplication algorithm handle this case?
> For many people, installing numpy may be difficult or impossible. For example, people in corporate environments may have to go through a difficult, time-consuming process before being permitted to install third-party software.

I do not regard this as a good justification for putting something in the standard library! If you don't have root access, use virtualenv (which you might want to do anyway) and install the package somewhere under your home directory.
Great idea, but while assembling this library, don't leave out permutations, combinations, and the binomial probability mass function (PMF) and cumulative distribution function (CDF). Small overhead, easy to implement, very useful. More here:

http://arachnoid.com/binomial_probability
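For what it's worth, those really are only a few lines of pure Python. A rough sketch (mine, not the implementation from that page), using math.comb from Python 3.8+:

    from math import comb  # Python 3.8+

    def binomial_pmf(k, n, p):
        """P(X = k) for X ~ Binomial(n, p)."""
        return comb(n, k) * p**k * (1 - p)**(n - k)

    def binomial_cdf(k, n, p):
        """P(X <= k), by summing the PMF."""
        return sum(binomial_pmf(i, n, p) for i in range(k + 1))

    # e.g. probability of at most 2 heads in 10 fair coin flips
    print(binomial_cdf(2, 10, 0.5))  # 0.0546875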
Reminds me of the story that made the rounds here a couple of years ago: The Python Standard Library - Where Modules Go To Die

https://news.ycombinator.com/item?id=3913182
Nice proposal. I think the problem is numpy itself. If you could just do pip install numeric_package, then nobody could complain. I don't quite understand why a package has to depend on LAPACK. I will probably switch to julia-lang, because numpy is (at least for me) not that great to work with.
I'm in favor. I was surprised and annoyed to find there wasn't a standard library module for doing Excel-level statistics. If you throw basic least-squares linear regression in there too, I can eliminate Excel from my physics classes.
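For reference, the single-predictor least-squares fit has a simple closed form and fits in a few lines. A rough sketch (my own, with made-up lab numbers):

    def linear_fit(xs, ys):
        """Ordinary least squares for y = m*x + b, one predictor."""
        n = len(xs)
        mean_x = sum(xs) / n
        mean_y = sum(ys) / n
        sxx = sum((x - mean_x) ** 2 for x in xs)
        sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        m = sxy / sxx
        b = mean_y - m * mean_x
        return m, b

    # hypothetical lab data: time (s) vs. speed (m/s) for a falling object
    m, b = linear_fit([0.1, 0.2, 0.3, 0.4], [1.0, 2.0, 2.9, 4.1])
    print(m, b)  # slope ~10.2, close to g for v = g*t + v0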
Kudos to PHP for apparently being ahead of the curve among dynamic languages with regard to statistics. Another interesting, yet unmentioned option is Clojure/Incanter.
I'm against this. Either you have to create a new statistics module or you have to include numpy/pandas/statsmodels in the standard library. In both cases it would essentially freeze the modules against further development outside the Python release cycle.