A simple coin toss game?
Let’s imagine we are playing the following simple game:
In the beginning, we both have $N$ coins. Each round, we flip a coin, and if the result is “heads”, you give me one of your coins, while if the result is “tails”, I give you one of my coins. To make things a bit more interesting, let us assume that the coin is not necessarily fair, and that the probability for “heads” is $p \in (0, 1)$. The game ends as soon as one of us runs out of coins (that person loses).
We can now ask ourselves the question: Given some values for $N$ and $p$, what is the probability that I win the game?
For some special cases, the answer is rather trivial. Of course, for $N = 1$, I win with probability $p$, and surely for a fair coin (i.e., $p=0.5$), my chances of winning should be 50%. However, what happens for arbitrary $N$ and $p$? It turns out that the answer is not quite as simple as one might naïvely expect. (And when I say “one”, I mostly mean myself and my own expectation.)
When I encountered this question, I did what I always do in such cases: Bring out Python and just play around with some simulations. The result, which is of course just a very basic example of a random walk in one dimension, looks like this:
[Figure: simulated one-dimensional random walks for p = 0.56]
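In case you want to play along at home, a minimal sketch of such a simulation could look like this (the function names and trial count are my own choices, not the original code):

```python
import random

def play_game(N, p, rng):
    """Play one game; return True if I win, i.e. I end up with all 2N coins."""
    coins = N  # my current number of coins
    while 0 < coins < 2 * N:
        # "heads" (probability p) means you hand me a coin, "tails" the reverse
        coins += 1 if rng.random() < p else -1
    return coins == 2 * N

def estimate_win_probability(N, p, trials=100_000, seed=0):
    """Monte Carlo estimate of my win probability."""
    rng = random.Random(seed)
    wins = sum(play_game(N, p, rng) for _ in range(trials))
    return wins / trials

print(estimate_win_probability(N=3, p=0.56))  # ≈ 0.67 for this setting
```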
Let’s try to work out the math. Obviously, I win the game if you lose it, and one way for you to lose the game is if you lose $N$ coin flips in a row. The probability for that is $p^N$. However, there is also the scenario where you win 1 flip and lose $N+1$. The probability for that should be ${N+2 \choose 1} \cdot (1-p) \cdot p^{N+1}$, where ${N+2 \choose 1}$ is the binomial coefficient that counts how many different ways there are to win 1 out of $N+2$ flips. …oh, but wait! Now we forgot to take into account the fact that the game ends as soon as you lose $N$ flips in a row, so arrangements where your single win only comes after those $N$ losses have to be subtracted from ${N+2 \choose 1}$. Ugh, this is going to be a bit messy if we really want to sum up all possible cases. After all, the game can go back and forth arbitrarily often!
Okay, let’s maybe think about this problem in a different way. Let us introduce a function $P(n)$ which denotes the probability that I win the game given that I currently have $n$ coins. Since I lose the game if I don’t have any coins, we know that:
$$ P(0) = 0 $$
Likewise, once I have all the coins (and there are $2N$ coins in the game), I win:
$$ P(2N) = 1 $$
Our original question now is equivalent to computing $P(N)$. To that end, we need one more piece of knowledge about $P(n)$ that relates it to the way our game works:
$$ P(n) = p \cdot P(n+1) + (1-p) \cdot P(n-1) $$
Basically, my chance of winning the game at some arbitrary point is the average of my chance of winning the game if I gain one coin in the next round and my chance of winning if I lose one coin in the next round, weighted by the respective probabilities of these two events.
Now how do we work out $P(N)$ from this recursive definition? One possibility is to interpret the problem as a system of linear equations which we can solve using Gaussian elimination or the like. For simplicity, let us assume $N=3$, and $p=0.56$, that is, a slightly but not very biased coin.^{1} In this case, the corresponding linear system looks like this:
$$ \begin{pmatrix} 1.0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0.44 & -1.0 & 0.56 & 0 & 0 & 0 & 0 \\ 0 & 0.44 & -1.0 & 0.56 & 0 & 0 & 0 \\ 0 & 0 & 0.44 & -1.0 & 0.56 & 0 & 0 \\ 0 & 0 & 0 & 0.44 & -1.0 & 0.56 & 0 \\ 0 & 0 & 0 & 0 & 0.44 & -1.0 & 0.56 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1.0 \end{pmatrix} \cdot \begin{pmatrix} P(0) \\ P(1) \\ P(2) \\ P(3) \\ P(4) \\ P(5) \\ P(6) \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{pmatrix} $$
If we solve this equation system, we find:
$$ \begin{pmatrix} P(0) \\ P(1) \\ P(2) \\ P(3) \\ P(4) \\ P(5) \\ P(6) \end{pmatrix} \approx \begin{pmatrix} 0.000 \\ 0.280 \\ 0.500 \\ 0.673 \\ 0.809 \\ 0.916 \\ 1.000 \end{pmatrix} \,, $$
meaning that in this setting, I have approximately a 67% chance of winning the game. Not so bad for a coin that is only slightly biased, huh?
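For completeness, here is how one might set up and solve this system with NumPy (a sketch; the interior rows use the recursion rearranged to $(1-p)\,P(n-1) - P(n) + p\,P(n+1) = 0$, and the function name is my own):

```python
import numpy as np

def win_probabilities(N, p):
    """Solve the (2N+1) x (2N+1) linear system for P(0), ..., P(2N)."""
    size = 2 * N + 1
    A = np.zeros((size, size))
    b = np.zeros(size)
    A[0, 0] = 1.0    # boundary condition P(0) = 0
    A[-1, -1] = 1.0  # boundary condition P(2N) = 1
    b[-1] = 1.0
    for n in range(1, size - 1):
        # recursion P(n) = p * P(n+1) + (1-p) * P(n-1), rearranged to
        # (1-p) * P(n-1) - P(n) + p * P(n+1) = 0
        A[n, n - 1] = 1.0 - p
        A[n, n] = -1.0
        A[n, n + 1] = p
    return np.linalg.solve(A, b)

print(np.round(win_probabilities(3, 0.56), 3))  # P(3) is the fourth entry, ≈ 0.673
```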
It looks like we have found a way to solve our initial problem. That’s pretty cool! There’s only one caveat: Our solution involves solving a linear system of size $(2N + 1) \times (2N + 1)$. The naïve algorithm for this scales like $\mathcal{O}(N^3)$, and while you can do slightly better than this in practice, this is still going to be rather computationally expensive if I ask you about my chances of winning if we both start the game as billionaires.
Let’s see if we can do better. For that, let us take a look at the recursive definition of $P(n)$ again. It is hopefully not too hard to see that we can rewrite this as:
$$ \begin{pmatrix} P(n+1) \\ P(n) \end{pmatrix} = \begin{pmatrix} \frac{1}{p} & -\frac{1-p}{p} \\ 1 & 0 \end{pmatrix} \cdot \begin{pmatrix} P(n) \\ P(n-1) \end{pmatrix} = \begin{pmatrix} \frac{1}{p} & -\frac{1-p}{p} \\ 1 & 0 \end{pmatrix}^n \cdot \begin{pmatrix} P(1) \\ P(0) \end{pmatrix} $$
That already looks a bit nicer, because now it is only a fixed $2 \times 2$ matrix that we have to multiply with itself $n$ times, which scales linearly in $n$ (or even logarithmically, if we use repeated squaring). Unfortunately, this does not yet immediately help us, because we do not have easy access to $P(1)$. However, it might inspire the following approach:
Let us, for one second, forget that the $P(n)$ we are looking for is describing a probability, and let us treat it as a general function $P: \mathbb{Z} \to \mathbb{R}$ which is defined by the above recursion for all $n \in \mathbb{Z}$. Inspired by the above recursion, we may feel tempted to make the following ansatz:
$$ P(n) = \lambda^n \,. $$
If we plug this into the original recursion, we get:
$$ \lambda^{n} = p \cdot \lambda^{n+1} + (1-p) \cdot \lambda^{n-1} \quad\Longleftrightarrow\quad p \lambda^2 - \lambda + (1-p) = 0 \,. $$
Solving for $\lambda$ gives two solutions, $\lambda_1 = 1$ and $\lambda_2 = \frac{1}{p} - 1$. Is anybody going to be surprised now if I tell you that, of course, those are precisely the eigenvalues of the $2 \times 2$ matrix above? Right.
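If you do not trust me on that, here is a quick numerical check (a sketch; the matrix is the companion matrix of the recursion $P(n+1) = \frac{1}{p} P(n) - \frac{1-p}{p} P(n-1)$, written out for $p = 0.56$):

```python
import numpy as np

p = 0.56
# companion matrix of the recursion P(n+1) = (1/p) * P(n) - ((1-p)/p) * P(n-1)
M = np.array([[1.0 / p, -(1.0 - p) / p],
              [1.0, 0.0]])
eigenvalues = sorted(np.linalg.eigvals(M).real)
print(eigenvalues)    # smaller eigenvalue first
print((1.0 - p) / p)  # lambda_2 = 1/p - 1, for comparison
```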
If both these values for $\lambda$ satisfy the equation on the left-hand side of the arrow above, then, since that equation is linear, any linear combination of the two must also satisfy it. Therefore, our ansatz becomes:
$$ P(n) = a \cdot \lambda_1^n + b \cdot \lambda_2^n = a + b \cdot \left( \frac{1}{p} - 1 \right)^n \,. $$
The coefficients $a$ and $b$ are now easily determined from our boundary conditions above:
$$ P(0) = 0 \quad\Rightarrow\quad a + b \cdot \left( \frac{1}{p} - 1 \right)^0 = 0 \quad\Rightarrow\quad a = -b \\ P(2N) = 1 \quad\Rightarrow\quad a + b \cdot \left( \frac{1}{p} - 1 \right)^{2N} = 1 \quad\Rightarrow\quad b = \frac{1}{\left(\frac{1}{p} - 1 \right)^{2N} - 1} \\ $$
The expression for $b$ is only undefined for $p = 0.5$, which luckily is one of the trivial cases that we discussed at the very beginning. In all other cases, we find the following closed-form solution:
$$ P(n) = \frac{\left( \frac{1-p}{p} \right)^n - 1}{\left(\frac{1-p}{p}\right)^{2 N} - 1} $$
To get the answer to our original question—what is my chance of winning our game for given $N$ and $p$—we only need to evaluate this function at $N$:
$$ P_\text{win} = P(N) = \frac{1}{\left( \frac{1}{p} - 1 \right)^N + 1} \,. $$
If we do a quick sanity check, we find that for $p = 1$, $P_\text{win} = 1$, as expected. Likewise, for $p \to 0$, we find $P_\text{win} \to 0$. We can even plug in $p = 0.5$ here and get the expected answer $P_\text{win} = 0.5$.
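In code, the closed-form solution is now a one-liner, and the sanity checks are easy to reproduce (a sketch; the function name is my own, and $p = 0.5$ is special-cased per the trivial-case discussion at the beginning):

```python
def p_win(N, p):
    """Closed-form win probability 1 / ((1/p - 1)**N + 1)."""
    if p == 0.5:
        return 0.5  # fair coin: the trivial 50% case
    return 1.0 / ((1.0 / p - 1.0) ** N + 1.0)

print(round(p_win(3, 0.56), 3))  # 0.673, matching the linear-system result
print(p_win(1, 0.56))            # for N = 1 this reduces to p itself
```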
Finally, if we go just one step further and rewrite $\left( \frac{1}{p} - 1 \right)^N$ as $\exp \lbrace -N \cdot \log\left( \frac{p}{1-p} \right) \rbrace$, we get:
$$ P_\text{win} = \frac{1}{\exp \lbrace -N \cdot \text{logit}(p) \rbrace + 1} = \text{Sigmoid}\left( N \cdot \text{logit}(p) \right) \,, $$
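As a last sanity check, we can confirm numerically that the sigmoid form agrees with the closed form (a sketch using only the standard library; `sigmoid` and `logit` are written out by hand here rather than imported):

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

N, p = 3, 0.56
closed_form = 1.0 / ((1.0 / p - 1.0) ** N + 1.0)
print(abs(sigmoid(N * logit(p)) - closed_form) < 1e-12)  # True: the two forms agree
```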
which, after all, seems like a surprisingly elegant solution considering how hard we had to work to actually find it :)
This article, and in particular the solution of the problem presented here, is heavily based on discussions with A.K.

This value for $p$ matches a finding reported in The Guardian on January 4, 2002: “When spun on edge 250 times, a Belgian one-euro coin came up heads 140 times and tails 110.” In case you are wondering if this is actually sufficient evidence to cause suspicion, the ever-brilliant David MacKay has an answer for you (and yes, of course, it is a Bayesian argument). ↩︎