Conventional wisdom says risk decisions should be made by subjective preference: your risk tolerance or your utility function. But in 1956, John Kelly published a contrary result: that there is a calculable amount of risk that always does best in the long run. Most people think that taking more risk increases the probability of both very good and very bad outcomes. Kelly showed that beyond a certain point, more risk only increases the probability of bad outcomes. Moreover, taking less than optimal risk actually guarantees doing worse in the long run; it only appears to be a safer course.
To see why this is true, suppose you had $1,000 to use to make 100 even-money bets in a row. You know you will win exactly 60 out of the 100, but the order of your wins will be chosen by the bettor on the other side. You have to specify your bets in advance and you are never allowed to bet more than you have.
For one example, you could bet $24 each time. You'll win 60 and lose 40, for a net gain of 20 bets, or $480, ending up with $1,480. But this is the best you can do with fixed bets. If you bet less than $24, you end up with less than $1,480. If you bet $25 or more, the other bettor will arrange for the 40 losses to come first, wiping you out so you cannot make any more bets.
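The fixed-bet arithmetic above can be checked with a short sketch. The function name and structure here are illustrative, not from the original; it simulates the adversary's worst ordering, with all 40 losses placed first:

```python
# Fixed-bet strategy: $1,000 bankroll, 100 even-money bets, exactly 60 wins.
# With a fixed bet b you net (60 - 40) * b = 20 * b -- provided you can
# always cover the bet when the adversary front-loads the 40 losses.
def fixed_bet_outcome(bankroll, bet, wins=60, losses=40):
    """Even-money fixed bets; the other bettor puts all losses first."""
    order = [-1] * losses + [+1] * wins  # worst-case ordering
    for result in order:
        if bankroll < bet:  # cannot cover the bet: busted out of the game
            return 0
        bankroll += result * bet
    return bankroll

print(fixed_bet_outcome(1000, 24))  # 1480: survives the worst ordering
print(fixed_bet_outcome(1000, 25))  # 0: forty $25 losses exhaust $1,000
```

At $24 the bankroll bottoms out at $40 after the losing streak, just enough to keep betting; at $25 the fortieth loss leaves exactly nothing.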
Suppose instead you decide to bet 20 percent of your current bankroll each time. The nice thing about this rule is that the order of the wins and losses doesn't matter. You can always make the bet—you never go broke—and you always have the same amount ...
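The order-independence of the fractional rule can be seen directly: each win multiplies the bankroll by 1.2 and each loss by 0.8, and multiplication commutes, so only the counts matter. A minimal sketch (the function name is illustrative, not from the original):

```python
# Fractional betting: wager 20% of the current bankroll on each bet.
# A win multiplies the bankroll by 1.2, a loss by 0.8, so the final
# amount depends only on how many wins and losses occur, not their order.
import random

def fractional_outcome(bankroll, fraction, order):
    """order is a list of +1 (win) / -1 (loss) results."""
    for result in order:
        bankroll *= (1 + fraction) if result == +1 else (1 - fraction)
    return bankroll

shuffled = [+1] * 60 + [-1] * 40
random.shuffle(shuffled)

worst_first = fractional_outcome(1000, 0.20, [-1] * 40 + [+1] * 60)
any_order = fractional_outcome(1000, 0.20, shuffled)
print(round(worst_first, 2), round(any_order, 2))  # same amount either way
```

Both orderings give 1000 × 1.2^60 × 0.8^40, about $7,490, and the bankroll can never hit zero, since a loss only shrinks it by 20 percent.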