
How to Measure Euler’s Number by Shuffling Cards.

Previously on this blog, I have talked about Buffon’s Needle, which is a way to measure \pi using random numbers. In this entry, I will be talking about a way to measure another fundamental constant, e, also by using random numbers.

To measure e, we will shuffle a deck of cards many times and count the number of shuffles that result in the deck being deranged. This means that no card remains in its original position in the deck. This type of arrangement is called a “derangement”.

We will start by computing the theoretical probability (p) of getting a derangement, which turns out to be 1/e, or about 36.7%, for a sufficiently large number of cards. We will then experimentally measure e by shuffling cards.

To compute the theoretical probability p, we need to understand two numbers:
The first is the total number of arrangements that the cards can be in (R).
The second is how many of those arrangements are derangements (D).
The theoretical probability is the ratio of those two numbers:
p = D/R
and it approaches 1/e.
(Or you could say R/D approaches e.)

The first number is easy: it’s just the factorial of the number of cards in the deck (n).
R_n = n!
The second number D_n is not as straightforward. We have to count the number of derangements a deck can have.

To show a simple example of counting derangements, let’s look at a deck of only 3 cards:
There are 6 ways to arrange these 3 cards (3! = 6):
A,2,3  (the original order)
A,3,2
2,A,3
2,3,A  (this one is a derangement)
3,A,2  (this one is a derangement)
3,2,A

Only 2 of these arrangements have every card moved from its original position. So the number of derangements for 3 cards is 2. So in this case, the probability of getting a derangement is about 33%.
D_3 = 2
p_3 = D/R = 2/3! = 1/3
That’s close to 1/e, but not quite, because it turns out 3 cards are just not enough to produce that probability. We need more.
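Before deriving a general formula, this count can be sanity-checked by brute force. Here is a quick Python sketch (the helper name count_derangements is mine, not from the post) that enumerates every arrangement of an n-card deck and counts those with no card left in its original place:

```python
from itertools import permutations

def count_derangements(n):
    """Brute-force: count permutations of n cards with no card in its original spot."""
    return sum(
        all(card != pos for pos, card in enumerate(perm))
        for perm in permutations(range(n))
    )

print(count_derangements(3))  # -> 2, the two derangements listed above
```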

In general, with a deck of n cards, we can arrange any number k of those cards in P_{nk} number of ways, where:
P_{nk} = \frac{n!}{(n-k)!}
We can use this formula to count the number of ways there are to rearrange a subset k of the cards, while n-k cards remain in a fixed position (we are counting the number of permutations).

Note that if n-k \neq 0, then by definition, the corresponding arrangements will not be derangements. If we can count all of these, then we can subtract this number of arrangements from the total number n! to get the number of derangements.

Let’s see this in action by going back to the deck of 3 cards. If we select all the cards to rearrange, we can get all 6 possible permutations:
P_{3,3} = \frac{3!}{(3-3)!} = \frac{3!}{0!} = 3! = 6

If we select only 2 cards out of the 3 to rearrange, we only get a subset of the arrangements. However, if we count how many there are with the formula, we get 6 permutations:
P_{3,2} = \frac{3!}{(3-2)!} = \frac{3!}{1!} = 6
So there are 6 ways to arrange 2 cards (6 ways to hold 1 card fixed). But why do we get 6 permutations here? Well, let’s look at all the ways to do this.
If we select 2 and 3 to rearrange, we get 2 permutations: A,2,3 and A,3,2.
If we select A and 3 to rearrange, we get 2 permutations: A,2,3 and 3,2,A.
And if we select A and 2 to rearrange, we get 2 permutations: A,2,3 and 2,A,3.
Notice however that A,2,3 occurs 3 times, so there are actually 4 unique arrangements here, and they are not derangements by definition (each keeps at least one card in place). So we have 6 - 4 = 2 derangements for 3 cards.
But is there a way to count unique arrangements without doing it by hand like we did here? Let’s keep going with this pattern.

If we select only 1 card to rearrange, then that’s trivial, because you can’t rearrange one card.
P_{3,1} = \frac{3!}{(3-1)!} = \frac{3!}{2!} = 3
If selecting A: A,2,3
If selecting 2: A,2,3
If selecting 3: A,2,3
All 3 permutations are the same arrangement!

Then if we select no cards to rearrange, this is also trivial.
P_{3,0} = \frac{3!}{(3-0)!} = 1
If selecting nothing: A,2,3
That’s just 1 permutation.

All duplicate permutations are accounted for here. The number of permutations for P_{3,0} accounts for one of the duplicates in P_{3,1}:
P_{3,1} - P_{3,0} = 3 - 1 = 2
The permutations left in this difference account for 2 of the duplicates in P_{3,2}:
P_{3,2} - (P_{3,1} - P_{3,0}) = 6 - 2 = 4
And you are left with the 4 unique arrangements that are not derangements.
So to get the number of derangements, we just have to subtract this from the total number of arrangements P_{3,3}:
P_{3,3} - (P_{3,2} - (P_{3,1} - P_{3,0})) = 6 - 4 = 2
And so we see that D_3 = 2

We have a pattern here. We can see that by distributing the negative signs in the above equation we get:
D_3 = P_{3,3} - P_{3,2} + P_{3,1} - P_{3,0}

This can be expressed as a sum:
D_3 = \sum\limits_{k = 0}^3 P_{3,k} (-1)^{3-k}
Or more explicitly:
D_3 = \sum\limits_{k = 0}^3 \frac{3!}{(3-k)!} (-1)^{3-k}

It turns out that this pattern holds for decks of any number of cards n (just replace the 3 here):
D_n = \sum\limits_{k = 0}^n \frac{n!}{(n-k)!} (-1)^{n-k}

For a deck of 3 cards, the number of derangements is: D_3 = 2
For a deck of 4 cards: D_4 = 9
For a deck of 5 cards: D_5 = 44
which then give us the probabilities of derangement:
p_3 = 2/3! = 33.3\%
p_4 = 9/4! = 37.5\%
p_5 = 44/5! = 36.7\%
So by 4 cards, we start seeing a probability closer to 1/e.
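These values can be reproduced directly from the alternating sum. A short Python sketch (the function name derangements is mine):

```python
from math import factorial

def derangements(n):
    """D_n = sum_{k=0}^{n} n!/(n-k)! * (-1)^(n-k)"""
    return sum(factorial(n) // factorial(n - k) * (-1) ** (n - k)
               for k in range(n + 1))

for n in (3, 4, 5):
    p = derangements(n) / factorial(n)
    print(n, derangements(n), "%.1f%%" % (100 * p))
# 3 2 33.3%  /  4 9 37.5%  /  5 44 36.7%
```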

Now, to get the probability p of getting a derangement for n cards:
p_n = \frac{D_n}{R_n} = \frac{1}{n!}\,\sum\limits_{k = 0}^n \frac{n!}{(n-k)!} (-1)^{n-k}
The n! factors cancel out, and we are left with:
p_n = \sum\limits_{k = 0}^n \frac{1}{(n-k)!} (-1)^{n-k}

Since the order of addition doesn’t matter, we can change the direction of this sum to start at n and end at 0:
p_n = \sum\limits_{k = n}^0 \frac{1}{(n-k)!} (-1)^{n-k}
If we do some relabeling, the relationship to e will become more obvious. Let’s set m = n-k, so we can write:
p_n = \sum\limits_{m=0}^n \frac{1}{m!} (-1)^{m}
This looks like a truncated Taylor expansion of the exponential function e^x.
A Taylor expansion is a way of expressing a function as a polynomial.

The Taylor expansion of the exponential function is:
e^x = \sum\limits_{m=0}^\infty \frac{1}{m!} x^m
If we plug in x = -1 we get:
e^{-1} = \sum\limits_{m=0}^\infty \frac{1}{m!} (-1)^m
which is the limiting case of a deck with infinitely many cards:
p_\infty = \frac{1}{e}

This equivalence to the Taylor expansion is the reason why as you increase the number of cards, the probability of getting a derangement approaches 1/e.
To put it another way:
\frac{1}{p} = \frac{R}{D} \approx e.
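In Python, the truncated series can be evaluated directly to watch 1/p_n = R/D approach e (the function name p is mine):

```python
from math import e, factorial

def p(n):
    """p_n = sum_{m=0}^{n} (-1)^m / m!  (probability of a derangement for n cards)"""
    return sum((-1) ** m / factorial(m) for m in range(n + 1))

for n in (3, 5, 10, 52):
    print(n, 1.0 / p(n))
print("e =", e)  # 2.718281828459045
```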

Now that we have seen the theoretical probability, let’s do the experiment.
Take a standard 52-card deck, note the original positions of the cards, then shuffle and see if you have a derangement. Keep shuffling, noting each time whether the result is a derangement. Keep doing this about 40,000 times. Count the total number of shuffles R and the number of derangements D you get.
If you then take the ratio \frac{R}{D} you will get something approaching e.

Just kidding. That would be extremely tedious. Instead, we can have a computer do it for us.

For a deck of 52 cards, it seems to take about 40k shuffles for the ratio of shuffles to derangements to approach e. See the figure above.

The code used to compute e from shuffles is shown below:
Here I used MATLAB to take advantage of vectorization and logical array operations. The inputs are cards (the number of cards in a deck) and games (the number of shuffles). Wins (a win is when you get a derangement) are then counted.

function eulersnum = eulerdeck(cards, games)
  wins = 0;
  series = 1:cards;             % the original card positions
  for ii = 1:games
    shuffled = randperm(cards); % one random shuffle
    wins = wins + ~any(series == shuffled); % a win: no card matches its original position
  end
  eulersnum = games/wins;       % the ratio R/D, which approaches e
end

The known value of e is 2.7183…
To get an accurate measurement we should use a deck with a lot of cards, since that will include more terms of the Taylor expansion. We should also shuffle many times to be sure that we capture the probability.
So with a deck of 1000 cards, and 10 trials, each of 1000 shuffles, we measure e to be:
2.7265 \pm 0.1060
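For readers without MATLAB, here is my Python analogue of the same experiment (a sketch, not the original code): shuffle a deck of cards positions games times, count the shuffles that leave no card in place, and return the ratio.

```python
import random

def eulerdeck_py(cards, games):
    """Estimate e as (number of shuffles) / (number of derangements)."""
    series = list(range(cards))
    wins = 0
    for _ in range(games):
        shuffled = random.sample(series, cards)  # one random shuffle
        if all(a != b for a, b in zip(series, shuffled)):
            wins += 1  # every card moved: a derangement
    return games / wins

random.seed(1)
print(eulerdeck_py(52, 5000))  # lands near 2.7183
```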


More Buffon’s Needle

In my previous blog entry, I discussed Buffon’s needle (a method of measuring \pi using random numbers). I tried to recreate what would occur in physical reality without using trig functions in the measurement of \pi. The tricky part was trying to make a uniform probability distribution along the angle \theta, which is what you expect with physical needles being dropped. In this entry, I will do something more exact, but less physical.

In reality, the probability density in \theta is uniform due to there being no preferential direction for a needle to point. Simulating this with uniform random numbers without using trigonometric functions (to avoid just measuring Python’s \pi) is tricky. The way I did this was to assign uniform random numbers (uniform probability density) to a projection of a needle, either perpendicular or parallel to the lines (with a probability of 0.5 assigned to each case). This results in a probability distribution in \theta that isn’t quite uniform, but it’s close enough to approximate \pi after a large number of needles are dropped:
2 \frac{N_T}{N_C} \approx \pi
where N_T and N_C are the total number of needles dropped and number of needles crossing a line, respectively.
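The fair-projection scheme described above can be sketched in Python (this is my reconstruction of the idea from the previous post, with a hypothetical helper name buffon_fair): a coin flip decides whether the uniform random number becomes the perpendicular or the parallel projection of a unit needle.

```python
import math
import random

def buffon_fair(drops, seed=1):
    """Drop unit needles on unit-spaced lines; the uniform number is assigned
    to either projection with probability 1/2 each."""
    random.seed(seed)
    crossings = 0
    for _ in range(drops):
        y0 = random.uniform(0.0, 1.0)       # A-end position between two lines
        u = random.uniform(0.0, 1.0)        # uniform number for one projection
        v = math.sqrt(1.0 - u * u)          # the other projection of a unit needle
        ry = u if random.random() < 0.5 else v  # fair coin picks the perpendicular projection
        if math.floor(y0 + ry) > math.floor(y0):
            crossings += 1                  # the needle crosses a line
    return 2.0 * drops / crossings

print(buffon_fair(200000))  # close to pi, though not exact: the theta density is only nearly uniform
```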

Instead of assigning the uniform probability to either projection fairly, what if we assign it to just one? This was shown in the previous post. As the number of needles dropped increases, the ratio 2 \frac{N_T}{N_C} approaches some number around 2.5–2.6, shown in the figure below (\pi shown in red). Why is this the case?

The reason is that a uniform probability density along a projection (q) does not translate into a uniform probability density along \theta, see figure below. This means needles have a preferred direction. It turns out this can still be used to extract a measurement of \pi!
The uniform probability density is assigned to the projection q:
\rho_q = 1/L, for 0 \leq q \leq L, and \rho_q=0 elsewhere.
To get this in terms of \theta, some substitutions are made in the probability density integral:
\int_0^L \rho_q dq = 1
The variables q and \theta are related by:
\text{cos}(\theta) = q/L
so by taking the derivative we get the substitution:
dq = -L\text{sin}(\theta) d \theta

So now the probability density integral can be written as:
\int_0^{\pi/2} \rho_q L \text{sin}(\theta) d \theta
which is also:
= \int_0^{\pi/2} \rho_{\theta} d \theta
So it gives the form of the probability density for \theta in terms of the one for q:
\rho_{\theta} = \rho_q L\text{sin}(\theta)

Plug in \rho_q = 1/L to get:
\rho_{\theta} = \text{sin}(\theta)
and it is obviously NOT uniform, it’s just the sine function!
Plotting a histogram of the number of needles per angle \theta will give the shape of the sine function:

Following the same steps as in the previous entry, we can find out why, when a uniform probability is assigned to one projection, 2 N_T/N_C approaches about 2.55 instead of \pi.
For simplicity W=L, and again, for a needle to cross a line, x has to be below the upper bound:
x \leq \frac{L}{2}\text{sin}(\theta)
and the probability of a needle crossing a line is:
P = \int_0^{\pi/2} \int_0^{\frac{L}{2}\text{sin}(\theta)} \rho_x \rho_{\theta} \, d x \, d \theta
Plugging in the probability densities: \rho_x = 2/L and \rho_\theta = \text{sin}(\theta):
P = \int_0^{\pi/2} \text{sin}^2(\theta) d \theta = \pi/4
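This integral is easy to check numerically; here is a midpoint-rule sketch in Python:

```python
import math

n = 1_000_000
h = (math.pi / 2) / n
# midpoint rule for the integral of sin^2(theta) over [0, pi/2]
P = sum(math.sin((k + 0.5) * h) ** 2 for k in range(n)) * h
print(P, math.pi / 4)  # both about 0.7853981...
```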

P = \frac{N_C}{N_T} when the number of needles dropped is large, so by combining these equations:
2 \frac{N_T}{N_C} = \frac{8}{\pi} \approx 2.55
This is the number that is being approached!
But wait, \pi is there, so we can extract it by:
4 \frac{N_C}{N_T} = \pi

By plotting this ratio as a function of number of needles dropped, we see it approaching \pi. See the figure below.

Even though this isn’t how needles behave in reality, it’s a simpler way to measure \pi by using random numbers in a simulation. The code used to make these plots is shown below.

By the last drop:
2 \frac{N_T}{N_C} = 2.54
4 \frac{N_C}{N_T} = 3.14

import math
import numpy
import pylab
import random

CL = 0 # needles crossing a line
NN = 0 # total needles dropped

PIa = [] # running values of 2*NN/CL
PIb = [] # running values of 4*CL/NN
NNa = [] # needle counts (x axis for the plots)
ANGLES = [] # acute angle of each needle, for the histogram

for ii in range(100000):

    PPy = random.uniform(0.0,1.0) #A-end y position (lines separated by unit distance)

    RRx = random.uniform(0.0,1.0) #x projection (uniform: the biased case)
    RRy = numpy.sqrt(1.0 - RRx**2) #y projection of a unit-length needle

    QQy = PPy + RRy #B-end y position
    QQyf = math.floor(QQy)
    PPyf = math.floor(PPy)

    NN = NN + 1
    if (QQyf-PPyf > 0.0): #if crossing line
        CL = CL + 1

    ANGLES.append(math.acos(RRx)/math.pi) #acute angle via arccos, used only for the histogram

    if (CL==0): #avoid dividing by zero before the first crossing
        continue
    RAT = (1.0*CL)/(1.0*NN)
    PIa.append(2.0/RAT) #2 NT/NC
    PIb.append(4.0*RAT) #4 NC/NT
    NNa.append(NN)

print(PIa[-1])
print(PIb[-1])

pylab.figure()
pylab.plot(NNa, PIa)
pylab.xlabel("Needles dropped")
pylab.ylabel("2 NT/NC")

pylab.figure()
pylab.hist(ANGLES, bins=10)
pylab.xlabel("Acute angle in factors of pi")
pylab.ylabel("Needles dropped")

pylab.figure()
pylab.plot(NNa, PIb)
pylab.xlabel("Needles dropped")
pylab.ylabel("4 NC/NT")
pylab.show()

Imaginary Powers

Imaginary numbers (i = \sqrt{-1}) are used to relate Cartesian coordinates to polar coordinates, as shown in the figure below. The argument \varphi is the angle from the real axis along the unit circle, and e^{i\varphi} can be expressed as the sum of two terms, one real and one imaginary, giving the Cartesian (x,y) position.
e^{i\varphi} = \text{cos}(\varphi) + i \text{sin}(\varphi) = x + i y
(If you want to expand beyond the unit circle just multiply by the desired radius: e^{i\varphi} \rightarrow re^{i\varphi})

This is known as the Euler relation, and it gives us some neat results, like:
e^{i\pi} = -1
e^{i \pi /2} = i
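These identities are easy to verify with Python's cmath module (up to floating-point error):

```python
import cmath
import math

print(cmath.exp(1j * math.pi))      # approximately -1
print(cmath.exp(1j * math.pi / 2))  # approximately i
```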
But what happens when you exponentiate i with itself? i^i.

From the Euler relation we know that i=e^{i \pi /2}, so it’s really a matter of substitution:
i^i = \left(e^{i \pi / 2}\right)^i = e^{i^2 \pi / 2} = e^{-\pi / 2} = 0.20788...
So i^i is a real number!

But wait a second, this isn’t the whole answer. We can add or subtract 2\pi from any angle and wind up in the same place on the circle. So i = e^{i \pi /2 + i 2 \pi n}, where n is any integer. So we end up with a set of values:
i^i = e^{- \pi /2 - 2 \pi n}
which are all real!

The curve above goes through the set of values for i^i, where n is an integer.
i^i = \{...,111.32,0.20788,0.00038820,...\}
But we will treat 0.20788… as the primary value from here on, since it is the result when n=0, and having one value keeps things simple.
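Python's complex exponentiation uses the principal branch, so it returns the n=0 value; the rest of the set follows from the formula above. A quick sketch:

```python
import math

principal = (1j) ** 1j  # Python evaluates this on the principal branch
print(principal)        # approximately 0.20788 + 0j

# the multivalued set e^{-pi/2 - 2 pi n} for n = -1, 0, 1
values = [math.exp(-math.pi / 2 - 2 * math.pi * n) for n in (-1, 0, 1)]
print(values)  # approximately [111.32, 0.20788, 0.00038820]
```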

So what happens when we repeat this exponentiation? i^{i^i}
First, this can be written in a simpler way:
^3i = i^{i^i}
This operation is called “tetration” or “power tower”.
You start from the top and work your way down. So ^3i = i^{i^i} = i^{0.20788}
Now use the Euler relation again (ignoring 2 \pi n terms in the exponential):
i^{0.20788} = e^{i \pi/2 \times 0.20788} = e^{i 0.32653}
So the result is complex:
^3i = 0.94715+0.32076i

We can keep going using results from the last iteration:
^4i = i^{0.94715+0.32076 i} = 0.05009+0.60211 i

Eventually you find that, as m increases, ^mi converges on one number. This implies there’s a value for ^\infty i. After plugging and chugging we can eventually conclude that:
^\infty i = 0.43828 + 0.36059 i
Convergence can be seen in both the real and imaginary components, around m \approx 70 if we require the difference between iterations to be less than 0.1%. (See figures below).


Each result has a different radius associated with it, since the imaginary component of the previous result puts a real term in the exponential of the next. This real term can be expressed as a factor r:
e^{i\varphi} = a + bi \rightarrow i^{a+bi} = e^{i \pi/2 (a + bi)} = e^{-b\pi/2}e^{i a \pi/2} = r e^{i a \pi/2}
The radius can be found by multiplying the result by its complex conjugate and then taking the square root (i.e. the Pythagorean theorem).
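As a consistency check (a Python sketch using the converged value quoted above): the limit should be a fixed point of the map taking z to i^z, and its radius should come out the same whether computed from the components or from the factor r = e^{-b\pi/2}.

```python
import math

z = 0.43828 + 0.36059j                  # the converged value of the power tower
print(abs((1j) ** z - z))               # nearly zero: z is (approximately) a fixed point
print(abs(z))                           # radius via the Pythagorean theorem
print(math.exp(-z.imag * math.pi / 2))  # radius via the real factor e^{-b pi/2}
```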

We can then plot the radius of ^mi as a function of m (see plot below).


The Python code for making these plots is:

import numpy
import pylab

REAL = [] # real components
IMAG = [] # imaginary components
RADII = [] # radii
ITERA = [] # iteration numbers "m"

pp = 1j
for n in range(100):
    ppL = pp #save previous value for cut off
    pp = (1j)**pp #iterate power tower

    REAL.append(pp.real) #save real component
    IMAG.append(pp.imag) #save imaginary component
    RADII.append(abs(pp)) #save radius
    ITERA.append(n+2) #save iteration "m" (pp now holds ^m i with m = n+2)

    RA = 0.001 #ratio for cut off
    if (abs(pp.real - ppL.real)/(abs(ppL.real) + 0.00001) < RA) and (abs(pp.imag - ppL.imag)/(abs(ppL.imag) + 0.00001) < RA): #establish a cut off
        print(n)
        print(pp)
        print(numpy.sqrt(pp*pp.conjugate()).real) #radius
        break

RADII = [1] + RADII #add in the initial radius |i| = 1 for the plot
ITERA = [1] + ITERA #the initial value ^1 i = i is iteration m = 1

pylab.figure()
pylab.plot(ITERA[1:], IMAG)
pylab.xlabel("iteration m")
pylab.ylabel("imaginary component")

pylab.figure()
pylab.plot(ITERA[1:], REAL)
pylab.xlabel("iteration m")
pylab.ylabel("real component")

pylab.figure()
pylab.plot(ITERA, RADII)
pylab.xlabel("iteration m")
pylab.ylabel("radius")
pylab.show()