I think the part on "How arbitrary is the origin, really?" is not correct. The origin <i>is</i> arbitrary. As the Wikipedia article points out, you can pick <i>any</i> point, whether or not it is the origin, and use the James-Stein estimator to push your estimate towards that point, and it will improve your mean squared error.<p>If you pick a point to the left of your sample, then moving your estimate to the left will improve your mean squared error on average. If you pick a point to the right of your sample, then moving your estimate to the right will improve your mean squared error as well.<p>I'm still trying to come to grips with this, and below is conjecture on my part.
Imagine sampling many points from a 3-D Gaussian distribution (with identity covariance), making a nice cloud of points. Next choose any point P. P could be close to the cloud or far away; it doesn't matter. No matter which point P you pick, if you adjust all the points in your cloud of samples according to the James-Stein formula, moving them all towards your chosen point P by various amounts, then on average they will move closer to the center of your Gaussian distribution. This happens no matter where P is.<p>The cloud is, of course, centered around the center of the Gaussian distribution. As the points are pulled towards this arbitrary point P, some are pulled away from the center of the Gaussian, some are pulled towards the center, and some are pulled away from the center along the direction towards P but squeezed closer in the perpendicular directions. Anyhow, apparently everything ends up, on average, closer to the center of the Gaussian in the end.<p>I'm not entirely sure what to make of this result. Perhaps it means that mean squared error is a silly error metric?
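Here's a quick simulation of that claim (a sketch of my own; the particular mu, P, and trial count are arbitrary choices):<p><pre><code>  import numpy as np

  rng = np.random.default_rng(0)
  d, trials = 3, 200_000
  mu = np.array([1.0, -2.0, 0.5])    # center of the Gaussian (unknown to the estimator)
  P = np.array([10.0, 10.0, 10.0])   # arbitrary point to shrink towards

  x = rng.normal(mu, 1.0, size=(trials, d))         # the cloud of samples
  r2 = np.sum((x - P) ** 2, axis=1, keepdims=True)
  js = P + (1 - (d - 2) / r2) * (x - P)             # shrink each point towards P

  print("mean sq. distance to center, raw:", np.mean(np.sum((x - mu) ** 2, axis=1)))
  print("mean sq. distance to center, JS: ", np.mean(np.sum((js - mu) ** 2, axis=1)))
</code></pre><p>The improvement gets tiny as P moves far from the cloud, but as far as I can tell it never becomes negative.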
Sorry, I'm siding with the physicists here. If you're going to declare that your seemingly arbitrary choice of coordinate system is actually not arbitrary, but rather part of your prior information about where the mean of the distribution is suspected to be, then you have to put that in the initial problem statement.
Stein's paradox is bogus. Somebody needs to say that.<p>Here's one wikipedia example:<p><pre><code> > Suppose we are to estimate three unrelated parameters, such as the US wheat yield for 1993, the number of spectators at the Wimbledon tennis tournament in 2001, and the weight of a randomly chosen candy bar from the supermarket. Suppose we have independent Gaussian measurements of each of these quantities. Stein's example now tells us that we can get a better estimate (on average) for the vector of three parameters by simultaneously using the three unrelated measurements.
</code></pre>
Here's what's bogus about this: the "better estimate (on average)" is mathematically true ... for a certain definition of "better estimate", namely the expected squared error summed over all three quantities at once. But that definition is irrelevant to the real world. If you believe you get a better estimate of the US wheat yield by also estimating the number of Wimbledon spectators and the weight of a candy bar in a shop, then you probably believe in telepathy and astrology too.
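To make that definition concrete, here's a quick simulation (my own sketch; the true values are made up and everything is rescaled to unit measurement noise). The summed error improves, but an individual component can come out worse:<p><pre><code>  import numpy as np

  rng = np.random.default_rng(1)
  theta = np.array([0.0, 0.0, 4.0])   # three "unrelated" true values (made up)
  x = rng.normal(theta, 1.0, size=(200_000, 3))   # one noisy measurement of each

  r2 = np.sum(x ** 2, axis=1, keepdims=True)
  js = (1 - 1.0 / r2) * x             # James-Stein shrink towards the origin (d - 2 = 1)

  print("summed MSE, plain:", np.mean(np.sum((x - theta) ** 2, axis=1)))   # ~3.0
  print("summed MSE, JS:   ", np.mean(np.sum((js - theta) ** 2, axis=1)))  # smaller
  print("per-component MSE, plain:", np.mean((x - theta) ** 2, axis=0))
  print("per-component MSE, JS:   ", np.mean((js - theta) ** 2, axis=0))   # third one is worse
</code></pre><p>Only the sum is guaranteed to improve; nothing stops the wheat-yield component alone from getting worse, which is exactly why I don't find the summed metric meaningful.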
My intuition is that the problem is in using squares for the error. In 3-space, the volume available at a given error distance r grows like r^3, while the squared-error penalty grows only like r^2, so the penalty doesn't grow fast enough compared to the volume that can contain an error of that magnitude.<p>But I really don't know; it's just an intuition with no formalism behind it.
I do not get it: if the variance is too large, a single random sample is not very representative of the mean. Is it as simple as that?<p>Now, the specific formula may be complicated, but otherwise I do not understand the “paradox”. Or am I missing something?
I don’t understand the picture with the shaded circle. Sure, the area to the left is smaller, but it is also more likely to be sampled, because in a Gaussian, values closer to the mean are more likely. So the picture alone doesn’t prove anything.
Can someone confirm the validity of the section called "Can we derive the James-Stein estimator rigorously?"?<p>The claim that the best estimator must be smooth seemed surprising to me.
I'm horrible at stats, but is this saying that if I have 5 jars of pennies and I guess the number of pennies in each one, I can then take the average of all my guesses and the variance between the guesses, and use them to adjust each guess to a more likely answer with this method?
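In code I imagine something like this (a sketch assuming each guess is the true count plus noise of known variance, using the shrink-towards-the-group-mean version of the formula; all the numbers are made up):<p><pre><code>  import numpy as np

  guesses = np.array([312.0, 480.0, 295.0, 410.0, 388.0])  # my made-up guesses
  sigma2 = 50.0 ** 2     # assumed variance of my guessing error (needs to be known)

  k = len(guesses)                              # k = 5 jars; needs k >= 4
  mean = guesses.mean()
  s = np.sum((guesses - mean) ** 2)             # spread between the guesses
  shrink = max(0.0, 1 - (k - 3) * sigma2 / s)   # positive-part shrink factor
  adjusted = mean + shrink * (guesses - mean)   # pull each guess towards the mean
  print(adjusted)
</code></pre><p>Though if I understand it right, the catch is that I'd have to know (or estimate) how noisy my own guessing is; the spread between the guesses alone isn't enough.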