One of my favorite courses in college was “Experimental Economics”, where we examined how real live humans differ from *homo economicus* – the perfectly logical, profit-maximizing agent of classical economics.

A good example is the ultimatum game, a two-player game where one person has $10 and suggests a way to split it. The second player can accept, and then they go on their merry way, or decline, and both players get nothing. *Homo economicus* would accept $0.01, since she would be better off than when she started, but most of us would be pretty insulted by such an offer. Conversely, most of us probably would not offer only $0.01, even if it *is* the “best play”.

Another game I am intrigued by is called the St. Petersburg paradox, apparently first posed and studied by some combination of the Bernoulli family. The idea is that we will play a game where I flip a coin until it comes up tails. If it comes up heads *n* times in a row, you win 2^*n* dollars. The question is: *how much would you pay to play this game?*

A reasonable way to approach answering this is to compute that you have a 1/2 chance of winning $1, 1/4 chance of winning $2, 1/8 chance of winning $4, and so on. Hence, your “expected winnings” are (1/2)·$1 + (1/4)·$2 + (1/8)·$4 + … = 1/2 + 1/2 + 1/2 + …, which is a divergent sum. This means that if I offer to let you play this game for $100, a “supremely logical” person would accept. Or for $1,000,000. Or any number. On the other hand, I know very few people who would pay even $10 to play the game. The Wikipedia article above has plenty of good explanations for why, but this strikes me as a great example of where human behavior diverges from what is “optimum”.
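A quick sketch of that divergence (my own illustration, not from any reference): each possible outcome contributes exactly $0.50 to the expected value, so summing the first N outcomes gives N/2 dollars, growing without bound.

```python
def truncated_ev(terms):
    # payout of 2**n dollars occurs with probability (1/2)**(n+1),
    # so every term contributes exactly $0.50 to the expected value
    return sum(2 ** n * 0.5 ** (n + 1) for n in range(terms))

print(truncated_ev(10))   # 5.0
print(truncated_ev(100))  # 50.0
```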

Since I like creating some content every once in a while, I also have a Python script that plays this game using a random integer generator. Playing the game 10,000,000 times, the average payout was $11.66, and the largest payout was $4,194,304, meaning 22 heads in a row. This is somewhat meaningless, since we already calculated that the average payout will grow without bound as I perform more trials, but it gives an intuitive feel for how much the game might be worth.

from random import randint

def game():
    # flip a coin until tails (0); winnings start at $1 and double per head
    coin = randint(0, 1)
    count = 0
    tot = 1
    while coin != 0:
        tot += 2 ** count
        count += 1
        coin = randint(0, 1)
    return tot

def lotsogame(N):
    # play N games, returning the average payout and the largest payout
    tot = 0
    top = 0
    for j in range(N):
        n = game()
        tot += n
        if n > top:
            top = n
    return float(tot) / float(N), top

print(lotsogame(1000000))


It’s not illogical not to play the game. Expected winnings only make sense on the premise that you can play the game repeatedly. Each play, however, requires an investment of money, of which the average person doesn’t have an infinite supply; otherwise they wouldn’t play the game at all.

So, to be fair, you’d have to calculate what kind of losing runs you’d have as well. Given that a person has a fixed amount of money, there are only so many rounds they can play before they run out. Once they’ve run out of money, their further expected winnings are nothing. Thus the real expected winnings cannot be infinite, since there is a possibility of ending up with nothing.

I can see where you are coming from intuitively, but I still cannot work that out mathematically. Even allowing dependence on how many times you can play the game, or on how much money you already have (which I guess are somewhat equivalent), I cannot come up with a reasonable answer to “how much is playing this game worth?” other than “any amount of money.”

The Wikipedia article goes a bit into “expected utility”, which is the only reasonable way I see of dealing with the paradox, but it still shows that strict mathematical modeling, without factoring in human behavior, does not always make sense. I would be very interested if you could come up with a number (or indicate more precisely how to come up with one) for how much the game is worth.
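To make the expected-utility idea concrete, here is a rough sketch following Bernoulli’s classic log-utility resolution (my construction, assuming a player who values the winnings logarithmically rather than in raw dollars):

```python
import math

def log_utility_value(terms=60):
    # expected log-utility of the winnings: sum over outcomes of
    # P(n heads) * ln(2**n) = (1/2)**(n+1) * n * ln(2), a convergent sum
    eu = sum(0.5 ** (n + 1) * n * math.log(2) for n in range(terms))
    # certainty equivalent: the sure dollar amount with the same utility
    return math.exp(eu)

print(log_utility_value())  # about $2
```

Under this (admittedly crude) valuation, the infinite expected winnings collapse to a certainty equivalent of only a couple of dollars, which is much closer to what people actually offer to pay.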

I like the description of the “Finite St. Petersburg lotteries” in the Wikipedia article; it helps establish a more realistic expected value for a limited bank.

In the same way, you could calculate from the player’s perspective. Say a player starts with C money and the game costs K. This caps the number of rounds a player can play, which calls into question the expected value, since that is based on playing an infinite number of rounds. Given these starting conditions, there must be an expected number of rounds a player can go before having no cash left, and thus not being able to play the next round.

Using a Monte Carlo simulation, you should be able to estimate this number of rounds for various values of C and K. For each test, simply have the player play the game until they have no cash left, or until they exceed some threshold (they are rich). This will give the chance that a player ends up with nothing, as opposed to the infinite expected sum.

Converting that to a closed form calculation might be difficult, or even impossible.
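A rough sketch of the simulation described above (the bankroll C, cost K, and “rich” threshold here are illustrative parameters, not anything fixed by the game itself):

```python
import random

def play_once():
    # one St. Petersburg game: the payout doubles for every head before tails
    payout = 1
    while random.randint(0, 1) == 1:
        payout *= 2
    return payout

def ruin_probability(C, K, rich=10**6, trials=20000):
    # fraction of players who start with bankroll C, pay K per game,
    # and go broke before their cash reaches the 'rich' threshold
    ruined = 0
    for _ in range(trials):
        cash = C
        while K <= cash < rich:
            cash += play_once() - K
        if cash < rich:
            ruined += 1
    return ruined / trials

print(ruin_probability(100, 10))
```

With, say, C = $100 and K = $10, the vast majority of simulated players go broke long before getting rich, even though each individual game has an infinite expected value.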

Note, these types of expected values come up in poker as well (which I did a lot of work with). In a cash game, where a player can always put more money on the table, expected value can be considered valid. In a tournament, however, with fixed sums, expected values have to be treated very carefully, since there is a real chance the player will be removed from the tournament, at which point their expected value drops to 0.

That is great; maybe this suggests that the proper way to approach any such calculation is to specify not only the rules of the game, but also the money the bank starts with, B, and the money the player starts with, C.

When both B and C are infinite, you have the paradox, which is then understandably unreasonable. If just C is infinite, then the Wikipedia article has a closed-form solution for how much the game is worth, which plays well with intuition (for those who haven’t clicked through: $23.77, for a bank with literally all the money in the world). What happens when both B and C are finite is an interesting question, as are the applications to poker.
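Extending the earlier script, the finite-bank case can be sketched by capping each payout at the bank’s funds (the bank size below is an arbitrary example; under this cap, a bank of 2**m dollars works out to m/2 + 1 dollars of expected value):

```python
def capped_ev(bank, max_flips=200):
    # expected value when the bank can never pay more than its funds:
    # payouts of 2**n dollars are capped at `bank`, so the sum converges
    ev = 0.0
    for n in range(max_flips):
        ev += min(2 ** n, bank) * 0.5 ** (n + 1)
    return ev

print(capped_ev(2 ** 40))  # a bank of about $1.1 trillion: roughly $21
```

The divergence disappears the moment B is finite, which matches the intuition in the comments above: nobody, bank or player, actually has infinite money.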