Expected value doesn't mean jack shit if the game can only be played once.<p>> Expected value (also known as EV, expectation, average, or mean value) is a long-run average value of random variables.<p>If you can only press a button once - you should take the guaranteed money in almost all circumstances (assuming your finances look like most Americans' - if you're already a millionaire... do what you want, this game doesn't matter much to you).<p>Basically - this is a dire misunderstanding of how statistics works in general. The population <i>at large</i> might be better off pressing the 50% at $50 million button (because then you are running this game many times and will likely achieve the expected value) - but as an individual, who can only roll the dice once, you are much better off just taking the immediate and guaranteed win.<p>And that's not even accounting for the drop-off in the marginal value of each dollar as you accumulate them - that first million is <i>far</i> more impactful than the next 49.
There's diminishing returns on the utility of money. If you're living paycheck to paycheck that guaranteed million is gonna give you a higher expected return of utility than the next 49 million combined. I disagree with the title calling it a "dumb" financial decision. It can be perfectly rational to take the million.
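The diminishing-utility point in the comments above can be made concrete. A quick sketch (Python), assuming the classic log-utility model and some hypothetical levels of existing wealth — neither is from the article:

```python
import math

def expected_log_utility(wealth, outcomes):
    """Expected log-utility over (probability, payout) pairs."""
    return sum(p * math.log(wealth + payout) for p, payout in outcomes)

RED = [(1.0, 1_000_000)]                  # guaranteed $1M
GREEN = [(0.5, 50_000_000), (0.5, 0)]     # coin flip for $50M

for wealth in (10_000, 100_000, 1_000_000):
    u_red = expected_log_utility(wealth, RED)
    u_green = expected_log_utility(wealth, GREEN)
    pick = "red" if u_red > u_green else "green"
    print(f"wealth ${wealth:>9,}: press {pick}")
```

Interestingly, plain log utility turns out to be only mildly risk-averse: in this sketch the crossover sits at a bit over $20K of existing wealth, so justifying the red button for a typical household actually needs a more strongly risk-averse utility function than log.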
I don't know how that article is written without mentioning utility theory in economics and the concept of diminishing marginal utility of money and the risk aversion it implies. No need to even bring in behavioral economics.<p><a href="https://en.wikipedia.org/wiki/Risk_aversion" rel="nofollow">https://en.wikipedia.org/wiki/Risk_aversion</a>
Nassim Taleb famously destroys those lines of reasoning in his books: game theory is impractical for most people, because it almost always ignores variables that don't exist in a lab but are crucial IRL.<p>Comments have been explaining which ones already apply to this article, I'm not going to repeat them.<p>But there is another example from Taleb that always makes me smile:<p>- If the other player tosses a coin and gets 9 tails in a row, what are the chances of getting tails on the next toss?<p>- 50%!<p>- No, 100%. The other player is cheating.
Honestly I'd hit the red button. I'd rather take a guaranteed payoff of my mortgage and all other debt, with plenty left over for a few neat toys, than chance walking away with nothing.
This is a nice concrete refutation of the fallacious reasoning in Pascal’s Mugging [0]. You can take this argument to its absurd conclusion by setting probability p arbitrarily small and the payout X arbitrarily large, such that p*X is arbitrarily greater than $1M, e.g. a 1/100000 chance of winning 100 trillion dollars.<p>[0] <a href="https://en.m.wikipedia.org/wiki/Pascal%27s_mugging" rel="nofollow">https://en.m.wikipedia.org/wiki/Pascal%27s_mugging</a>
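A hedged numerical sketch of that Pascal's-mugging variant (Python; the $50K starting wealth is an arbitrary assumption, and log utility is just one standard model): the EV is a billion dollars, yet the gamble is still worth less than a sure $1M.

```python
import math

wealth = 50_000                           # hypothetical starting wealth
p, jackpot = 1e-5, 100_000_000_000_000    # 1/100000 chance at $100 trillion

ev = p * jackpot                          # $1 billion in expectation
print(f"EV of the gamble: ${ev:,.0f}")

# Compare expected log-utility of the gamble vs a sure $1M:
u_gamble = p * math.log(wealth + jackpot) + (1 - p) * math.log(wealth)
u_sure = math.log(wealth + 1_000_000)
print(u_gamble > u_sure)
```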
"A 50% chance of winning $50 million would equate to an expected value of $25 million."<p>If you hit the green button you either get $50 million or $0. Hitting the red button gives $1 million.<p>Unless you don't want $1 million or don't need it, you're going to hit the red button and not the green button.
> The mathematical answer is you hit green every time.<p>Nope. There's a whole field of research about this - decision theory - which doesn't agree with this decision.<p>Most people appear to go with the minimax approach - they minimize potential losses (or in this example: maximize the minimal payout).<p>For one-time events it's a sound strategy.
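That maximize-the-minimal-payout rule can be sketched in a few lines (Python; the two-outcome payoff table below is just the article's button game):

```python
# Maximin: pick the action whose worst-case payout is largest.
actions = {
    "red":   [1_000_000, 1_000_000],   # guaranteed either way
    "green": [50_000_000, 0],          # win big or walk away with nothing
}

def maximin(actions):
    """Return the action with the best worst case."""
    return max(actions, key=lambda a: min(actions[a]))

print(maximin(actions))   # red: its worst case ($1M) beats green's ($0)
```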
Surprise surprise, people are not perfect emotionless economic units!<p>I don’t think it’s a good look for the “Director of Institutional Asset Management at Ritholtz Wealth Management” to call perfectly sensible decisions by people whose net worth is many orders of magnitude smaller than his “dumb” just because he can afford to pass up a guaranteed million.
The dumb is in the author thinking these are dumb decisions. $50M is nowhere near 50x as valuable as $1M, thus the green button is nowhere near 25x the value of the red button. For most people pushing red is the smart decision, not the dumb one.
I constantly run into situations where I spend money in ways that are financially non-optimal, but socially good (in my mind).<p>An easy to understand example is, I believe I should pay more in taxes and everyone as wealthy as I am should too.<p>I rent out an apartment, but only at the cost it takes to maintain it in good condition, because I think profiting off rent is unethical. This means I'm generally renting it out much, much cheaper than local rents, and my tenants can therefore build savings.
seriously flawed perspective. it's a 50% chance of nothing versus a 100% chance of a life-changing amount of money. if it was $1K:$25K or $100K:$2.5M then you'd take the risk.
It's a catchy headline, but the "decisions" used as examples aren't really "dumb" under the complete set of facts.<p>Really, what this is about is that the typical mathematics used to discuss a certain type of financial decision (mostly things like investments) uses an incomplete model that doesn't appropriately consider the actual values involved -- for example, failing to consider the <i>wildly nonlinear</i> curve of the marginal value of one dollar.
Admittedly, I'm not an expert at this stuff, but it seems like strictly using expected values to calculate optimum decisions can get you into some strange situations, like infinite expected value [1]. For some reason the author brings up lump sums vs annuities, which I don't think is at all comparable to betting (annuities from the US govt are guaranteed payments). That aside, a number of people have already mentioned Kelly criterion [2]. This strategy would tell you that you should take the guaranteed $1 million, but this is a long-run strategy. I personally would take the $1 million because it is guaranteed. I'm also not sure if relying on math for a one-off event like this makes sense.<p>[1] <a href="https://en.wikipedia.org/wiki/St._Petersburg_paradox" rel="nofollow">https://en.wikipedia.org/wiki/St._Petersburg_paradox</a>
[2] <a href="https://en.wikipedia.org/wiki/Kelly_criterion" rel="nofollow">https://en.wikipedia.org/wiki/Kelly_criterion</a>
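One hedged way to apply the Kelly criterion from [2] to this game (a simplifying framing, not the article's): treat the green button as staking the forfeited $1M for a net gain of $49M at even odds.

```python
def kelly_fraction(p, b):
    """Kelly-optimal fraction of bankroll to stake on a bet that
    pays b-to-1 net with win probability p: f* = p - (1 - p) / b."""
    return p - (1 - p) / b

# Green button as a wager: stake the guaranteed $1M for a net gain
# of $49M at 50/50 odds.
f_star = kelly_fraction(0.5, 49)
print(f"Kelly fraction: {f_star:.2%}")

# The stake is $1M, so Kelly approves only if that is under ~49%
# of your total bankroll -- i.e. roughly $2M+ to your name already.
print(f"minimum bankroll: ${1_000_000 / f_star:,.0f}")
```

So even a strategy built for repeated betting tells almost everyone to take the red button here, which matches the "long-run strategy" caveat above.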
Other commenters have mentioned marginal utility, but this article is basically explaining minimax [0] decision making - people (and chess AIs) tend to pick the option that minimizes worst case losses.<p>[0] <a href="https://en.wikipedia.org/wiki/Minimax" rel="nofollow">https://en.wikipedia.org/wiki/Minimax</a>
There is a documentary about the psychology of financial decisions (behavioral economics):
Mind Over Money: Nova (2010)
<a href="https://moviewise.wordpress.com/2013/01/14/mind-over-money-nova/" rel="nofollow">https://moviewise.wordpress.com/2013/01/14/mind-over-money-n...</a><p>"Emotion may lead you to make bad financial decisions. For example, people who feel sad will pay more, sometimes four times more, for a consumer product than those who do not feel sad."<p>The "Nash equilibrium" also delves a bit into the psychology of decision making:
<a href="https://www.reddit.com/r/math/comments/1tc80g/is_the_explanation_of_nashs_equilibrium_in_the/" rel="nofollow">https://www.reddit.com/r/math/comments/1tc80g/is_the_explana...</a>
All the classic economic models for choice that I've seen fail to consider that the perception of not only risk but reward are BOTH nonlinear. IMO, this has been an Achilles heel of classic price and game theory. The rise of behavioral economics in recent decades would seem to agree with this iconoclasm.<p>If I need $1 million right now or else a loved one dies, then it doesn't matter how big the reward of a riskier alternative choice may be. I take the million NOW. If the additional reward is subject to diminishing value as the offer rises, it's only rational for the decider to show diminished interest in choosing the greater reward (even if the marginal odds are only a tiny amount less likely).<p>Disregarding the reward curve of the individual is going to consistently misjudge economic choice and will surely be a poor basis for any economic model.
For those who say they would press the red button ...<p>* Imagine the payout on the red button were not $1M but $100K or $50K or $10K. Is there any point as it diminishes toward zero that would make you switch buttons?<p>* Imagine the payout on the green button were not $50M but $100M or $500M or $1B. Is there any point as it increases toward infinity that would make you switch buttons?<p>For those who say they would press the green button ...<p>* Imagine the payout on the red button were not $1M but $2M or $5M or $10M. Is there any point as it increases toward $50M that would make you switch buttons?<p>* Imagine the odds on the green button were not 1:2 but 1:3 or 1:5 or 1:10. At what point, as the odds diminish, would you switch buttons?
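One hedged way to put numbers on those switch-point questions: under log utility (one standard assumption, not the only one) the indifference point has a closed form. Solving log(W+R) = p·log(W+G) + (1-p)·log(W) for the guaranteed payout R gives R = (W+G)^p · W^(1-p) - W. A Python sketch with hypothetical wealth levels:

```python
def red_indifference(wealth, green_payout=50_000_000, p=0.5):
    """Guaranteed payout at which a log-utility agent is indifferent
    between the sure thing and a p-chance at green_payout."""
    return (wealth + green_payout) ** p * wealth ** (1 - p) - wealth

for w in (10_000, 100_000, 1_000_000, 10_000_000):
    print(f"wealth ${w:>10,}: switch to green below ${red_indifference(w):,.0f}")
```

Under this model an agent with $10K to their name sticks with the red $1M, while an agent with $1M already banked demands several million guaranteed before passing up the green button.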
> A 50% chance of winning $50 million would equate to an expected value of $25 million.<p>No it doesn't. Statistics is the science of populations of events, expected value applies only if you have a sufficiently large population.
Given my current financial situation, 1 million would let me retire immediately. What I see when I look at those buttons are: 100% chance of being able to retire early vs 50% chance of being able to retire early.
This is similar to the choice between a salary (quite predictable) and a startup (maybe a lot, might just as well be zero). Or, in a company, between doing consulting or in-house product development.
The discussion here seems to be whether, if we can capture all of the relevant details, a certain person is making a rationally optimal decision. Taking this to its logical conclusion, we're fitting math to a process of decision making and adjudicating which criteria are considered rational and which are not. Sure there is mathematics involved here, but it reads a lot more like a question of who is or isn't allowed agency, in this case in their economic and financial decisions.
I still don’t get how hitting the green button (50% at $50 million) is the “rational” choice - it isn’t. $1 million in your pocket, no matter what, is life-changing and (for the majority of us) far better than a 50% chance of getting nothing. Maybe if the value behind the red button had been smaller (let’s say $1,000 or even $100) then things would have been different, but, again, a guaranteed $1 million in one’s pocket is life-changing for most of us.
You can witness people buying lottery scratch cards every day in the UK and wonder why people are so dumb given the odds of actually winning a big prize.<p>But then bear in mind that this person maybe has a big bill to pay and only £5 to their name: do they keep the £5, knowing that it isn't going to make any difference, or take a wild chance that could?
What people say they will do on a hypothetical is not the same thing as what they will do if actually faced with the decision.<p>The way I've been able to deal with this personally is by thinking "what would I do if this was Monopoly money?" and then reconcile that with my emotional decision.
Given this choice I would press the green button.<p>If I was flat broke, living on the street, or in debt even, I would find investors to pay, say 5 @ 200k each, for me to press the green button and reward them 1mm each in case of payout.
Isn't the best solution here to find a wealthy investor and sell him the option to press the green button priced at $20M? You get $20M, and they get an instrument with an EV of $25M at the cost of $20M.
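The arithmetic of that trade, spelled out (Python; this assumes a risk-neutral buyer with deep pockets, and that the button is actually transferable, which the hypothetical may not allow):

```python
p, green, price = 0.5, 50_000_000, 20_000_000

seller_gets = price            # guaranteed, risk-free -- 20x the red button
buyer_ev = p * green - price   # buyer's expected profit on the option

print(f"seller locks in ${seller_gets:,}")
print(f"buyer's expected profit: ${buyer_ev:,}")
```

Any price strictly between $1M (the red button) and $25M (the button's EV) leaves both sides better off on paper; where it lands in that range depends on each side's risk tolerance.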
If there are multiple players with these buttons, the optimal strategy is to pool your resources and distribute the winnings evenly.<p>Teamwork makes the dream work.
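A sketch of why pooling works, assuming n independent players who all press green and split the winnings (Python): each player's share keeps the full $25M mean while its standard deviation shrinks like 1/sqrt(n).

```python
import math

def pooled_share_stats(n, p=0.5, prize=50_000_000):
    """Each share is Binomial(n, p) * prize / n; return its
    mean and standard deviation."""
    mean = p * prize
    std = prize * math.sqrt(p * (1 - p) / n)
    return mean, std

for n in (1, 10, 100, 10_000):
    mean, std = pooled_share_stats(n)
    print(f"{n:>6} players: mean ${mean:,.0f}, std ${std:,.0f}")
```

With one player the "share" swings $25M either way; with 10,000 players it is $25M give or take about $250K, which is why pooling converts the gamble into nearly the sure thing the EV promises.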
FYI, an IMHO more illustrative example of why we should think in terms of expected utility rather than expected cash flows:<p><a href="https://en.wikipedia.org/wiki/St._Petersburg_paradox" rel="nofollow">https://en.wikipedia.org/wiki/St._Petersburg_paradox</a><p>tl;dr: doubling your winnings on each throw of heads and paying out on the first tail is a game whose expected winnings diverge to positive infinity, yet probably no one would pay more than a few bucks to enter
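The paradox in code (Python; the $1 starting pot is one common convention for the game): every possible stopping round contributes $0.50 to the expected value, so the full sum diverges, yet simulated averages stay small.

```python
import math
import random

def truncated_ev(max_rounds):
    """Expected payout from games that end within max_rounds tosses:
    the k-th outcome pays 2**k with probability 2**-(k+1), so each
    round adds exactly $0.50 and the EV grows without bound."""
    return sum(2**k * 2**-(k + 1) for k in range(max_rounds))

print(truncated_ev(20))   # a 20-round cap is worth only $10
print(truncated_ev(40))   # doubling the cap adds just another $10

def play(rng):
    winnings = 1.0
    while rng.random() < 0.5:   # heads: double the pot and continue
        winnings *= 2
    return winnings             # first tail pays out

rng = random.Random(42)
sample_mean = sum(play(rng) for _ in range(100_000)) / 100_000
print(f"sample mean over 100k games: ${sample_mean:.2f}")
```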
The rational behaviour is to make everyone press the green button and then give away a million dollars to anyone who didn't get a prize but the obvious problem is that no such thing happens. Instead of cooperating some people insist on getting the full 50 million dollars as if they deserve it and were destined to get the money while the plebs who didn't get anything also deserve to stay poor.<p>In other words, the problem is that humans are cruel to each other and peace of mind vs other cruel people is worth more than a higher reward.