Tag Archives: physics

Strangeness from Adding a Dimension

The shortest distance between two points is a straight line (if there’s nothing in the way). But how can we prove this?

Imagine some arbitrarily curved path defined by the function y between two fixed points: this is clearly not the shortest path. Figure below shows a curved path between two fixed points (spanning the interval from x=a to x=b).


What is the distance along this path between these two fixed points? Solving this problem is clearly not as simple as using the Pythagorean theorem, or is it?

If we break up the length of the path s into infinitesimal segments of size ds, we can then use the Pythagorean theorem to characterize the length of the segments:
d\vec{s} = (dx,dy)
where dx and dy are the infinitesimal legs, and ds is the hypotenuse:
ds = \sqrt{dx^2 + dy^2}

Figure below shows an infinitesimal line segment of a curved path.


Computing the entire length of the path is then a matter of adding up all these infinitesimal ds segments. To do this, take note that the first derivative of the path’s function y is the ratio of the infinitesimal legs (this is the same thing as slope).
y_x = \frac{dy}{dx}
and so the length of the hypotenuse can be expressed in terms of the derivative of y:
ds = \sqrt{dx^2 + dy^2} = \sqrt{1 + \left(\frac{dy}{dx} \right)^2} \, dx = \sqrt{1 + y_x^2} \, dx

The total length s along the path from x = a to x = b is then the sum of all the ds segments from a to b. You find the sum of infinitesimals by taking the integral:
s =  {\displaystyle \int} ds = {\displaystyle \int_a^b} \sqrt{1 + y_x^2}\, dx
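If you’d like to check this integral numerically, here’s a minimal sketch using scipy (the endpoints and the two test paths are my own choices, not part of the derivation): it compares the length of the straight line y = x with the parabola y = x^2 between (0,0) and (1,1).

```python
import numpy as np
from scipy.integrate import quad

def path_length(y_x, a, b):
    """Arc length of a path from x=a to x=b, given its slope function y_x."""
    s, _ = quad(lambda x: np.sqrt(1.0 + y_x(x)**2), a, b)
    return s

straight = path_length(lambda x: 1.0, 0.0, 1.0)    # y = x,   slope 1
parabola = path_length(lambda x: 2.0*x, 0.0, 1.0)  # y = x^2, slope 2x

print(straight)   # 1.4142... = sqrt(2)
print(parabola)   # 1.4789..., longer than the straight line
```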

Alright, so we are able to measure the length of a path by evaluating this integral. But how does this prove the shortest path is a straight line? For this we need to delve into what is known as “calculus of variations”. What we want to do is minimize the distance along the path between two points, and see if the resulting function y defines a straight line.

For the sake of simplicity, we can express the integrand above as L.
s = {\displaystyle \int_a^b} L\, dx
where:
L = \sqrt{1 + y_x^2}

More generally, the integrand for a path from a to b can be a function of both position y and slope y_x, which we can write as:
L = L(y,y_x)
which is also known as a Lagrangian.

We want to minimize the length of the path, which means the length should not change under any small deviation, or variation, from the shortest path.
To characterize how much the length of a path changes under such a variation, we can vary the integral, using \delta to signify what is being varied. This works like a derivative operator:
\delta s = {\displaystyle \int_a^b} \delta L\, dx

Then inside the integral we have a varied Lagrangian:
\delta L = \frac{\partial L}{\partial y} \delta y + \frac{\partial L}{\partial y_x} \delta y_x

Here we’ll use the product rule to rewrite the second term, so that we are varying in terms of \delta y alone, and not both \delta y and \delta y_x:
\frac{d}{dx} \left( \frac{\partial L}{\partial y_x } \delta y \right) = \frac{d}{dx} \frac{\partial L}{\partial y_x } \delta y + \frac{\partial L}{\partial y_x } \delta y_x

\delta L = \frac{\partial L}{\partial y} \delta y - \frac{d}{dx} \frac{\partial L}{\partial y_x } \delta y  + \frac{d}{dx} \left( \frac{\partial L}{\partial y_x } \delta y \right)

= \left( \frac{\partial L}{\partial y} - \frac{d}{dx} \frac{\partial L}{\partial y_x } \right) \delta y + \frac{d}{dx} \left( \frac{\partial L}{\partial y_x } \delta y \right)
So now all the variation is only in terms of \delta y.

We can now look at the whole integral and evaluate it.
\delta s = {\displaystyle \int_a^b} \delta L dx

= {\displaystyle \int_a^b} \left( \frac{\partial L}{\partial y} - \frac{d}{dx} \frac{\partial L}{\partial y_x } \right) \delta y dx +  {\displaystyle \int_a^b} \frac{d}{dx} \left( \frac{\partial L}{\partial y_x } \delta y \right) dx

The second integral term is easily evaluated:
{\displaystyle \int_a^b} \frac{d}{dx} \left( \frac{\partial L}{\partial y_x } \delta y \right) dx = \frac{\partial L}{\partial y_x } \delta y \bigg|_a^b

At the endpoints of our path, x = a and x = b, the path is fixed by definition, so there is no variation there: \delta y = 0. The second term is therefore equal to 0, and we are left with:
\delta s = {\displaystyle \int_a^b} \left( \frac{\partial L}{\partial y} - \frac{d}{dx} \frac{\partial L}{\partial y_x } \right) \delta y dx

Figure below shows variation between the endpoints, but no variation at the endpoints.


Now, to minimize the path length, we set the variation to zero: \delta s = 0. And since \delta y \neq 0 is arbitrary between the endpoints, the factor multiplying it must itself vanish:
0 = \frac{\partial L}{\partial y} - \frac{d}{dx} \frac{\partial L}{\partial y_x }
This result is known as the Euler-Lagrange equation, and it is used to compute things like equations of motion.

We can then plug our Lagrangian from earlier, L = \sqrt{1 + y_x^2}, into the Euler-Lagrange equation. The first term is easy:
\frac{\partial L}{\partial y} = 0, since there’s no y dependence in our L.
So this only leaves the other term
\frac{d}{dx} \frac{\partial L}{\partial y_x } = 0
which implies
\frac{\partial L}{\partial y_x } = C
where C is an arbitrary constant of integration.

We then compute the remaining partial derivative:
\frac{\partial L}{\partial y_x } = \frac{y_x}{\sqrt{1 + y_x^2}}
\frac{y_x}{\sqrt{1 + y_x^2}} = C
Now we can solve for y_x so we can then get the form of y:
y_x = \frac{C}{\sqrt{1-C^2}}
Since C is an arbitrary constant, we can just say:
y_x = m
dy = m\,dx

Integrate both sides to get the form of y. We get a factor of x and an arbitrary constant b:
y = mx + b
This is a straight line!
So the path of minimal length is a straight line.
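As a sanity check, we can hand the whole calculation to sympy. This is a sketch that applies the Euler-Lagrange equation to our Lagrangian and solves the resulting differential equation:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
L = sp.sqrt(1 + y(x).diff(x)**2)

# Euler-Lagrange: dL/dy - d/dx(dL/dy_x) = 0
eq = sp.euler_equations(L, [y(x)], [x])[0]
expr = sp.simplify(eq.lhs)        # -> -y''/(1 + y'^2)^(3/2)

# The denominator never vanishes, so the numerator must: y'' = 0
ode = sp.Eq(sp.numer(sp.together(expr)), 0)
print(sp.dsolve(ode, y(x)))       # Eq(y(x), C1 + C2*x): a straight line
```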

Figure below shows a blue line with arbitrary variation, and a black line with minimal variation.

What about higher dimensions of space? We just saw a 1-D path in a 2-D space (the path of minimal length is a straight line). What about a 2-D surface in 3-D space? What is the surface of minimal area? Instead of minimizing length, we’ll now be minimizing surface area.

To characterize the nature of a surface we resort to infinitesimal segments again, except here they are planes instead of hypotenuses.
Figure below shows that a curved surface can be thought of as many infinitesimal planes stitched together:

Here we will use the fact that a plane can be defined by two vectors, or rather by the vector resulting from their cross product. (Note that the dz in d\vec{u} is the rise along x, namely z_x\,dx, while the dz in d\vec{v} is the rise along y, namely z_y\,dy.)
d\vec{u} = (dx,0,dz)
d\vec{v} = (0,dy,dz)
d\vec{A} = d\vec{u} \times d\vec{v} = (-dz\,dy, -dz \, dx, dx \, dy)

Figure below shows how infinitesimal vectors form an infinitesimal plane (segment of a surface):


The magnitude of the cross product vector defining this plane is equivalent to the area between the two original vectors (the area of the plane):
dA = \sqrt{dz^2 dy^2 + dz^2 dx^2 + dx^2 dy^2}
Using the same factoring method we used before, we can express the area in terms of derivatives:
dA = \sqrt{ \big( \frac{dz}{dx} \big)^2+ \big( \frac{dz}{dy} \big)^2 + 1} \, dxdy = \sqrt{z_x^2 + z_y^2 + 1} \, dxdy

To add up the infinitesimal area segments we take the integral along both dimensions x and y:
A = \iint\limits_{S} \sqrt{z_x^2 + z_y^2 + 1}\,dxdy

The integrand here is also a Lagrangian:
A = \iint\limits_{S} L \,dxdy
L = \sqrt{z_x^2 + z_y^2 + 1}

Just like with the line, this surface Lagrangian generalizes to:
L = L(z,z_x,z_y)
Notice now there are two derivatives in the Lagrangian.

We can vary this Lagrangian too to minimize the area:
\delta A = \iint\limits \delta L \,dxdy
\delta L = \frac{\partial L}{\partial z} \delta z + \frac{\partial L}{\partial z_x} \delta z_x + \frac{\partial L}{\partial z_y} \delta z_y

We can also use the same substitution trick we used before to vary only in terms of \delta z:
\frac{d}{dx} \left( \frac{\partial L}{\partial z_x } \delta z \right) = \frac{d}{dx} \frac{\partial L}{\partial z_x } \delta z + \frac{\partial L}{\partial z_x } \delta z_x
\frac{d}{dy} \left( \frac{\partial L}{\partial z_y } \delta z \right) = \frac{d}{dy} \frac{\partial L}{\partial z_y } \delta z + \frac{\partial L}{\partial z_y} \delta z_y

\delta L = \frac{\partial L}{\partial z} \delta z + \frac{d}{dx} \left( \frac{\partial L}{\partial z_x } \delta z \right) - \frac{d}{dx} \frac{\partial L}{\partial z_x } \delta z + \frac{d}{dy} \left( \frac{\partial L}{\partial z_y } \delta z \right) - \frac{d}{dy} \frac{\partial L}{\partial z_y } \delta z

\delta A = \iint \frac{\partial L}{\partial z} \delta z \,dxdy + \int \frac{\partial L}{\partial z_x } \delta z \,\Big|_{\text{bdry}} \,dy - \iint \frac{d}{dx} \frac{\partial L}{\partial z_x } \delta z \,dxdy + \int \frac{\partial L}{\partial z_y } \delta z \,\Big|_{\text{bdry}} \,dx - \iint \frac{d}{dy} \frac{\partial L}{\partial z_y } \delta z \,dxdy
The terms with evaluation bars are boundary terms. On the boundary the surface is fixed, just like the endpoints before, so \delta z = 0 there and those terms vanish.

Plugging \delta L back into the surface integral and setting the variation to zero:
\delta A = 0
0 = \iint \left(  \frac{\partial L}{\partial z}  - \frac{d}{dx} \frac{\partial L}{\partial z_x }  - \frac{d}{dy} \frac{\partial L}{\partial z_y } \right)\delta z \, dxdy

Between the boundaries there is variation, \delta z \neq 0, and it is arbitrary, so the factor multiplying it must vanish.
So the Euler-Lagrange equation in the 2-D case is:
0 = \frac{\partial L}{\partial z}  - \frac{d}{dx} \frac{\partial L}{\partial z_x }  - \frac{d}{dy} \frac{\partial L}{\partial z_y }

We can then plug in our Lagrangian into this Euler-Lagrange equation:
L = \sqrt{z_x^2 + z_y^2 + 1}

What we end up with is something not as simple as a straight line. We get an equation that describes what are called minimal surfaces:
0 = z_{xx} (z_y^2 + 1) + z_{yy} (z_x^2 + 1) - 2 z_x z_y z_{xy}
This equation is also known as Lagrange’s equation.

The higher dimensional analog of a straight line, the plane, satisfies Lagrange’s equation:
z = \alpha x + \beta y + \gamma
Plug it in and see. But it’s not the only solution!

In 2-D, the only solution is a straight line. The strangeness comes when we add a dimension. In 3-D, there are more solutions than just the plane!
Other solutions include catenoids, helicoids, and weird things like the Saddle Tower.

Figure below is a catenoid:


Minimal surfaces can be created by dipping wire frames into soapy water. Surprising to me, though, is that a sphere or spherical bubble, r^2 = x^2 + y^2 + z^2, is not a minimal surface! (If you plug it into Lagrange’s equation, you get a contradiction. A closed bubble is instead held up by the pressure difference between inside and outside.)
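Here is a sketch that tests all three claims symbolically: plug a plane, a catenoid (written locally as a graph z(x,y)), and a hemisphere into the left side of Lagrange’s equation with sympy. Only the first two should give zero.

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
alpha, beta, gamma, R = sp.symbols('alpha beta gamma R', positive=True)

def lagrange_lhs(z):
    """Left-hand side of the minimal surface (Lagrange) equation."""
    zx, zy = z.diff(x), z.diff(y)
    return (z.diff(x, 2)*(zy**2 + 1) + z.diff(y, 2)*(zx**2 + 1)
            - 2*zx*zy*z.diff(x, y))

plane    = alpha*x + beta*y + gamma
catenoid = sp.acosh(sp.sqrt(x**2 + y**2))  # one sheet of x^2 + y^2 = cosh^2(z)
sphere   = sp.sqrt(R**2 - x**2 - y**2)     # upper hemisphere as a graph

print(sp.simplify(lagrange_lhs(plane)))     # 0: minimal
print(sp.simplify(lagrange_lhs(catenoid)))  # 0: minimal
print(sp.simplify(lagrange_lhs(sphere)))    # nonzero: not minimal
```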

With no outside forces, a line gives the shortest distance between two points, and a plane, a catenoid, etc. give the 2-D version of this. But what happens when outside forces, like gravity, are present? The shortest path may no longer be a straight line. The minimal surface may no longer be a flat plane. This is where you can derive things like the shape of catenary cables and brachistochrones. I’ll leave the topic of applying forces for another day.

Ehrenfestival

The Ehrenfest theorem allows us to see how physical quantities evolve through time in terms of other physical quantities. In quantum mechanics, physical quantities, like momentum or position, are represented by operators. To get the average value of a physical quantity of a system, we apply the operator to a wavefunction. The wavefunction represents the physical system’s probabilistic behavior. The average of a physical quantity is called an “expectation value”.

The expectation value of an operator \mathscr{O} is:
\left< \mathscr{O} \right> = \left< \Psi \left| \mathscr{O} \right| \Psi \right> = \int dx \Psi^{\dagger} \mathscr{O} \Psi
where \Psi is a wavefunction \Psi(x).

This definition allows us to take a time derivative of the expectation value.
\frac{d}{dt}\left< \mathscr{O} \right> = \frac{d}{dt} \left< \Psi \left| \mathscr{O} \right| \Psi \right> = \frac{d}{dt} \int dx \Psi^{\dagger} \mathscr{O} \Psi
The total derivative moves inside the integral as a partial derivative, where we use the product rule to differentiate:
= \int dx \left( \frac{\partial \Psi^{\dagger}}{\partial t} \mathscr{O} \Psi + \Psi^{\dagger} \frac{\partial \mathscr{O}}{\partial t} \Psi + \Psi^{\dagger} \mathscr{O} \frac{\partial \Psi}{\partial t} \right)

A time derivative acting on a wave function is equivalent to the Hamiltonian operator acting on the wave function, with a factor of \frac{1}{i \hbar} (this is just the Schrödinger equation):
\frac{\partial}{\partial t} \Psi = \frac{1}{i \hbar} H \Psi
Take the complex conjugate:
\frac{\partial}{\partial t} \Psi^{\dagger} = \frac{-1}{i \hbar} \Psi^{\dagger} H

We can then swap the time derivatives for Hamiltonians:
\frac{d}{dt}\left< \mathscr{O} \right> = \int dx ( \frac{-1}{i \hbar} \Psi^{\dagger} H \mathscr{O} \Psi + \Psi^{\dagger} \frac{\partial}{\partial t} \mathscr{O} \Psi + \frac{1}{i \hbar} \Psi^{\dagger} \mathscr{O} H \Psi)
= \frac{-1}{i \hbar} \left< \Psi \left| H \mathscr{O} \right| \Psi \right> + \left< \Psi \left| \frac{\partial}{\partial t} \mathscr{O} \right| \Psi \right> + \frac{1}{i \hbar} \left< \Psi \left| \mathscr{O} H \right| \Psi \right>
The first and last terms can be combined using a commutator:
= \left< \Psi \left| \frac{\partial}{\partial t} \mathscr{O} \right| \Psi \right> + \frac{1}{i \hbar} \left< \Psi \left| [\mathscr{O},H] \right| \Psi \right>

So then we have the Ehrenfest theorem, which relates the time derivative of an expectation value to the expectation value of the operator’s time derivative plus a commutator term:
\frac{d}{dt}\left< \mathscr{O} \right> = \left< \frac{\partial}{\partial t} \mathscr{O} \right> + \frac{1}{i\hbar}\left< [\mathscr{O},H] \right>

The general form of the Hamiltonian H has a momentum-dependent kinetic energy term, and a position-dependent potential energy term V(x).
H = \frac{p^2}{2m} + V(x)

As an example, let’s see how changes in momentum over time can be expressed.
\frac{d}{dt}\left< p \right> = \left< \frac{\partial}{\partial t} p \right> + \frac{1}{i\hbar}\left< [p,H] \right>
Momentum doesn’t have an explicit time-dependence, so the first term is zero. Further, operators commute with powers of themselves: [p,p^2] = 0, so only the potential energy term of the Hamiltonian survives in the commutator. So we’re left with:
\frac{d}{dt}\left< p \right> = \frac{1}{i\hbar}\left< [p,V(x)] \right>

To see what this remaining commutator of operators reduces to, we will have to use a little calculus. Momentum expressed in terms of position is essentially the derivative operator:
p = -i\hbar \frac{d}{dx}. Keep in mind that potential energy is a function of position, which is why momentum does not commute with it.
Since we are dealing with operators, we need them to act on something: we’ll use a dummy wavefunction \Phi(x). Then just remember the product rule for derivatives:
[p,V]\Phi = -i\hbar[\frac{d}{dx},V]\Phi = -i\hbar(\frac{d}{dx}V\Phi - V\frac{d}{dx}\Phi) = -i\hbar(\frac{d}{dx}V)\Phi
So then we can see that the change in the expectation value of momentum with respect to time is:
\frac{d}{dt}\left< p \right> =-\left< \frac{d}{dx}V(x) \right>
On the left you have the rate of change of momentum over time, and on the right you have the change in potential energy over change in position. Both are ways of measuring force in classical physics. In fact, this is Newton’s second law!
\frac{d}{dt}\left< p \right> = \left< F \right>

If we do the same for change in expectation value of position with respect to time, we get an equation for velocity:
\frac{d}{dt}\left< x \right> = \left< \frac{\partial}{\partial t} x \right> + \frac{1}{i\hbar}\left< [x,H] \right>
Again, position has no explicit time-dependence, so the first term is zero. However, now position commutes with potential energy (since it’s just a function of position), and position does not commute with kinetic energy (momentum). So we’re left with:
\frac{d}{dt}\left< x \right> = \frac{1}{i\hbar} \frac{1}{2m}\left< [x,p^2] \right>
If we go through the dummy wavefunction process we’ll arrive at:
\frac{d}{dt}\left< x \right> = \frac{\left< p \right>}{m}
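Both relations are easy to check numerically. Below is a sketch (the grid, time step, and harmonic potential V(x) = x^2/2 are entirely my own choices) that evolves a displaced Gaussian with a split-step Fourier method and compares d<p>/dt with -<dV/dx>, and d<x>/dt with <p>/m:

```python
import numpy as np

hbar = m = 1.0
N, box, dt, steps = 1024, 40.0, 1e-3, 4000
x = np.linspace(-box/2, box/2, N, endpoint=False)
dx = x[1] - x[0]
k = 2*np.pi*np.fft.fftfreq(N, d=dx)
V = 0.5*x**2                 # assumed harmonic potential
dVdx = x                     # its derivative

psi = np.exp(-0.5*(x - 2.0)**2).astype(complex)   # displaced Gaussian
psi /= np.sqrt(np.sum(np.abs(psi)**2)*dx)

half_V = np.exp(-1j*V*dt/(2*hbar))                # half-step in V
kinetic = np.exp(-1j*hbar*k**2*dt/(2*m))          # full step in p^2/2m

xs, ps, Fs = [], [], []
for _ in range(steps):
    psi = half_V*np.fft.ifft(kinetic*np.fft.fft(half_V*psi))
    prob = np.abs(psi)**2
    nk = np.abs(np.fft.fft(psi))**2
    xs.append(np.sum(x*prob)*dx)                  # <x>
    ps.append(hbar*np.sum(k*nk)/np.sum(nk))       # <p>
    Fs.append(-np.sum(dVdx*prob)*dx)              # -<dV/dx>

xs, ps, Fs = np.array(xs), np.array(ps), np.array(Fs)
print(np.max(np.abs(np.gradient(ps, dt) - Fs)))    # small: d<p>/dt = -<dV/dx>
print(np.max(np.abs(np.gradient(xs, dt) - ps/m)))  # small: d<x>/dt = <p>/m
```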

What if we try a much more complicated operator, like the translation operator? I covered how this operator works in a previous blog entry. A translation operator shifts a wavefunction’s position by some distance a:
T(a)\Psi(x) = \Psi(x+a)
What does the change in the expectation value of this operator look like?
\frac{d}{dt}\left< T(a) \right> = \left< \frac{\partial}{\partial t} T(a) \right> + \frac{1}{i\hbar}\left< [T(a),H] \right>
There’s no explicit time dependence here, so the first term is zero. The commutator term is all that is left. We must now consider the explicit form of T(a).
T(a) = e^{a \frac{d}{dx}}
It only contains x derivative operators (momentum operators) and so it commutes with the kinetic energy term in the Hamiltonian, but not the potential energy term.
\frac{d}{dt}\left< T(a) \right> = \frac{1}{i\hbar}\left< [e^{a \frac{d}{dx}},V(x)] \right>
How do we evaluate something like this with a derivative operator in the exponent? This is what Taylor expansions are for!
e^{a \frac{d}{dx}} = 1+ \sum\limits_{n=1}^{\infty} \frac{a^n}{n!} (\frac{d}{dx})^n
After using a dummy wavefunction and Pascal’s triangle a bit, you get:
\frac{d}{dt}\left< T(a) \right> = \frac{1}{i\hbar}\sum\limits_{n=1}^{\infty} \frac{a^n}{n!} \sum\limits_{m=0}^{n-1}  \frac{n!}{m!(n-m)!} \left< ((\frac{d}{dx})^{n-m}V) (\frac{d}{dx})^m \right>
which, after substituting F = -\frac{d}{dx}V and \frac{d}{dx} = \frac{i}{\hbar} p, can be expressed as a sum of products of force derivatives and momentum:
\frac{d}{dt}\left< T(a) \right> = \sum\limits_{n=1}^{\infty} \sum\limits_{m=0}^{n-1} \left( \frac{-1}{i\hbar} \right)^{m+1} \frac{a^n}{m!(n-m)!} \left< ((\frac{d}{dx})^{n-m-1}F) p^m \right>
This tells us the change in the translation operator’s expectation value with respect to time can be quantified with this particular series of force and momentum products. The result looks messy, but it seems intuitive that force would be involved here.
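The Pascal’s-triangle step above is just the general Leibniz rule for repeated derivatives of a product. A quick sympy sketch confirms it for the first few n:

```python
import sympy as sp

x = sp.symbols('x')
V = sp.Function('V')(x)
Phi = sp.Function('Phi')(x)     # dummy wavefunction

for n in range(1, 5):
    commutator = sp.diff(V*Phi, x, n) - V*sp.diff(Phi, x, n)
    leibniz = sum(sp.binomial(n, m)*sp.diff(V, x, n - m)*sp.diff(Phi, x, m)
                  for m in range(n))
    print(n, sp.simplify(commutator - leibniz))   # 0 for every n
```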

An Intro to Time Evolution: The Heisenberg and Schrödinger Pictures

A quantum state is just a function that describes the probabilistic nature of a particle (or particles) in terms of measurable quantities. Measurable quantities are represented by hermitian operators that act on the state to give possible values. But what happens to a quantum state over time? How does it change? How do the measurable quantities change? Here I will elaborate on what is called “time evolution”, a method of evolving states and operators.
Evolving a state to a later time and including time dependence are done in the same way:
\left| \Psi (x,0) \right> \rightarrow \left| \Psi (x,t) \right>
This is the Schrödinger picture, which evolves states. In the Heisenberg picture, operators evolve. The pictures are equivalent, but are suited for different purposes. One can’t talk about one without talking about the other.
To evolve a state, we want to construct a linear operator that changes the argument of the function.
\mathscr{U} \left| \Psi (x,0) \right> = \left| \Psi(x,t) \right>
The operator should be unitary, to conserve probability.
P = \left< \Psi(x,0) | \Psi(x,0) \right> = \left< \Psi(x,t) | \Psi(x,t) \right> = \left< \Psi(x,0) \left| \mathscr{U}^{\dagger} \mathscr{U} \right| \Psi(x,0) \right>
\therefore \mathscr{U}^{\dagger} \mathscr{U} = 1

One way to construct this operator is to solve the time-dependent Schrödinger equation, with initial conditions imposed on it. For simplicity, let’s assume that the Hamiltonian operator H has no time dependence itself.
\frac{\partial}{\partial t} \Psi = \frac{-i}{\hbar} H \Psi
\ln(\Psi) = \frac{-i}{\hbar} \int H \,dt
\Psi (x,t) = e^{\frac{-i}{\hbar} H t} A
\Psi(x,0) = A
\Psi (x,t) = e^{\frac{-i}{\hbar} H t} \Psi (x,0)
\therefore \mathscr{U} = e^{\frac{-i}{\hbar} H t}
The resulting operator \mathscr{U} is a unitary time evolution operator.

In this simple case, one can also use the same approach used to construct the translation operator, except it will translate through time instead of space.
\Psi (x,t) \rightarrow \Psi (x,t + \Delta t)
\Psi (x, t + \Delta t) = \Psi(x,t) + \Delta t \frac{d}{dt} \Psi(x,t) + \Delta t^2 \frac{1}{2!} \frac{d^2}{dt^2} \Psi(x,t) + ... = e^{\Delta t \frac{d}{dt}} \Psi(x,t)
\therefore \mathscr{U} = e^{\Delta t \frac{d}{dt}} = e^{\frac{-i}{\hbar} H \Delta t}
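To make this concrete, here is a sketch that discretizes H on a grid (a finite-difference kinetic term plus an assumed harmonic potential, both my own choices), builds \mathscr{U} = e^{\frac{-i}{\hbar} H t} with scipy’s matrix exponential, and checks that it is unitary:

```python
import numpy as np
from scipy.linalg import expm

hbar = m = 1.0
N, box, t = 200, 20.0, 0.5
x = np.linspace(-box/2, box/2, N)
dx = x[1] - x[0]

# H = p^2/2m + V(x): central-difference second derivative plus a potential
D2 = (np.diag(np.ones(N-1), -1) - 2*np.eye(N) + np.diag(np.ones(N-1), 1))/dx**2
H = -(hbar**2/(2*m))*D2 + np.diag(0.5*x**2)        # assumed V(x) = x^2/2

U = expm(-1j*H*t/hbar)
print(np.max(np.abs(U.conj().T @ U - np.eye(N))))  # ~1e-13: U†U = 1
```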

The Schrödinger-style time evolution can be transformed into Heisenberg-style by pulling the time evolution operators from the state \Psi and attaching them to the operator \mathscr{O}.
\left< \Psi (x,0) \left| \mathscr{O} \right| \Psi (x,0) \right> \rightarrow \left< \Psi (x,t) | \mathscr{O} | \Psi (x,t) \right>
\left< \Psi (x,t) | \mathscr{O} | \Psi (x,t) \right> = \left< \Psi (x,0) \left| e^{\frac{i}{\hbar} H t} \mathscr{O} e^{\frac{-i}{\hbar} H t} \right| \Psi (x,0) \right> = \left< \Psi (x,0) \left| \mathscr{O}(t) \right| \Psi (x,0) \right>
The resulting time-evolved operator is then:
\mathscr{O}(t) = e^{\frac{i}{\hbar} H t} \mathscr{O} e^{\frac{-i}{\hbar} H t}

Taking the time derivative of this general operator will give the Heisenberg equation of motion. Plugging in an operator for \mathscr{O} will give an equation describing how that operator evolves with time.
\frac{d}{dt} \mathscr{O}(t) = \frac{i}{\hbar} e^{\frac{i}{\hbar} H t} H \mathscr{O} e^{\frac{-i}{\hbar} H t} + e^{\frac{i}{\hbar} H t} \frac{\partial \mathscr{O}}{\partial t} e^{\frac{-i}{\hbar} H t} + \frac{-i}{\hbar} e^{\frac{i}{\hbar} H t} \mathscr{O} H e^{\frac{-i}{\hbar} H t}
The exponential operator e^{\frac{\pm i}{\hbar} H t} commutes with H, since it is made of only H operators. In the first term and last term, the exponential operator can act on the \mathscr{O} to evolve it into \mathscr{O}(t), as shown previously. So the expression can be reduced to:
\frac{d}{dt} \mathscr{O}(t) = \frac{1}{i \hbar} [\mathscr{O}(t), H] + e^{\frac{i}{\hbar} H t} \frac{\partial \mathscr{O}}{\partial t} e^{\frac{-i}{\hbar} H t}
This is the Heisenberg equation of motion.

So, let’s try out a specific operator, to see how it will evolve with time.
First, we need to define the Hamiltonian operator:
H = \frac{p^2}{2m} + V(x)
The first term is kinetic energy in terms of momentum, and the second term is potential energy in terms of position. A Hamiltonian can be arbitrarily more complicated, but this form is fairly general, and relatively simple.
So let’s see how the position operator evolves over time; plug in x:
\frac{d}{dt} x = \frac{1}{i \hbar} [x, H] + e^{\frac{i}{\hbar} H t} \frac{\partial x}{\partial t} e^{\frac{-i}{\hbar} H t}
The operator x has no explicit time dependence, so \frac{\partial x}{\partial t} = 0. So we are left with:
\frac{d}{dt} x = \frac{1}{i \hbar} [x, H]
[x,H] = [x, \frac{p^2}{2m} + V(x)] = [x, p^2]/2m
Since V(x) depends only on x, it commutes with x.
[x,V(x)] = 0
However, x does not commute with p. This is what gives the uncertainty principle between position and momentum.
[x,p] = i \hbar
\therefore [x,p^2] = 2 i \hbar p
Using this result we can show:
[x,H] = i \hbar \frac{p}{m}
\therefore \frac{d}{dt} x = \frac{p}{m}
The equation of motion that describes the evolution of x is then:
x(t) = \int \frac{p}{m} dt
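These commutators can be verified with the dummy-wavefunction trick from earlier. A short sympy sketch:

```python
import sympy as sp

x, hbar = sp.symbols('x hbar')
Phi = sp.Function('Phi')(x)                  # dummy wavefunction
p = lambda f: -sp.I*hbar*sp.diff(f, x)       # momentum as -i*hbar*d/dx

comm_xp  = x*p(Phi) - p(x*Phi)               # [x, p] acting on Phi
comm_xp2 = x*p(p(Phi)) - p(p(x*Phi))         # [x, p^2] acting on Phi
print(sp.simplify(comm_xp))                        # I*hbar*Phi(x): [x,p] = i hbar
print(sp.simplify(comm_xp2 - 2*sp.I*hbar*p(Phi)))  # 0: [x,p^2] = 2 i hbar p
```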

The same can be done for the momentum operator.
\frac{d}{dt} p = \frac{1}{i \hbar} [p, H]
[p,H] = [p, \frac{p^2}{2m} + V(x)] = [p,V(x)] = -i \hbar \frac{\partial}{\partial x} V(x)
\frac{d}{dt} p = -\frac{\partial}{\partial x} V(x)
The equation of motion that describes the evolution of p is then:
p(t) = -\int \frac{\partial}{\partial x} V(x) dt
This corresponds to Newton’s law, and shows how momentum is linked to the potential energy V(x). The negative derivative of potential energy with respect to position gives force.

To make this more familiar, let’s set V(x) = 0, to get the free-particle scenario (no forces acting on the particle).
So now \frac{d}{dt} p = 0 and the equations of motion become:
p(t) = p(0)
x(t) = x(0) + \frac{p(0)}{m} t
which describes a particle moving at constant momentum (constant velocity).
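A sketch of this free-particle behavior: propagate a Gaussian exactly in momentum space (the grid and initial momentum are arbitrary choices of mine) and check that <p> stays constant while <x> advances by <p>t/m.

```python
import numpy as np

hbar = m = 1.0
N, box, t = 4096, 200.0, 3.0
x = np.linspace(-box/2, box/2, N, endpoint=False)
dx = x[1] - x[0]
k = 2*np.pi*np.fft.fftfreq(N, d=dx)

p0 = 1.5                                     # mean momentum (arbitrary)
psi0 = np.exp(-0.5*x**2 + 1j*p0*x/hbar)      # moving Gaussian
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2)*dx)

# free evolution: each momentum mode picks up a phase e^{-i hbar k^2 t / 2m}
psi_t = np.fft.ifft(np.exp(-1j*hbar*k**2*t/(2*m))*np.fft.fft(psi0))

def mean_x(psi):
    return np.real(np.sum(x*np.abs(psi)**2)*dx)

def mean_p(psi):
    nk = np.abs(np.fft.fft(psi))**2
    return hbar*np.sum(k*nk)/np.sum(nk)

print(mean_p(psi_t) - mean_p(psi0))                     # ~0: p(t) = p(0)
print(mean_x(psi_t) - mean_x(psi0) - mean_p(psi0)*t/m)  # ~0: x(t) = x(0) + p t/m
```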

Billiard Balls and the 90-Degree Rule

If you play pool, you may have noticed that when the cue ball strikes a target ball off-center, the two balls separate at a 90-degree angle after the collision. This is because the collision is very nearly elastic, and the balls are the same mass. All kinetic energy along the line connecting the two balls is transferred to the target ball, leaving the cue ball with no energy along that direction. Any remaining energy in the cue ball is along the line perpendicular to the line connecting the two balls. This is why they separate at a 90-degree angle. (Also note: if the cue ball strikes the target ball dead-center, there will be no energy along a perpendicular line; all energy will be transferred to the target ball, unless there is substantial inelasticity.)

So let’s explore the simple case, where we are assuming that there is negligible spinning and friction, and that the collision is perfectly elastic. Initially, the cue ball is put into motion toward the target ball, and the target ball is stationary.
First, we use conservation of momentum to get our first equation, where m_1 is the mass of the cue ball, m_2 is the mass of the target ball, \overrightarrow{v} represents their velocity vectors, with the subscripts i meaning “initial”, and f meaning “final”.
m_1 \overrightarrow{v}_{1,i} + m_2 \overrightarrow{v}_{2,i} = m_1 \overrightarrow{v}_{1,f} + m_2 \overrightarrow{v}_{2,f}   (1)
Now, since the masses of the balls are equal and the initial velocity of the second ball is 0, we can simplify the first equation to:
\overrightarrow{v}_{1,i} = \overrightarrow{v}_{1,f} + \overrightarrow{v}_{2,f}   (2)
Second, we use conservation of energy to get our second equation. (The factor \alpha is an elasticity factor, defined more carefully in the general case below; for now all that matters is that \alpha = 1 for a perfectly elastic collision.)
\alpha \left( \frac{1}{2} m_1 v_{1,i}^2 + \frac{1}{2} m_2 v_{2,i}^2 \right) = \frac{1}{2} m_1 v_{1,f}^2 + \frac{1}{2} m_2 v_{2,f}^2   (3)
Again, we can reduce this, since the initial velocity of the second ball is 0, the masses are equal, and the collision is perfectly elastic (\alpha = 1).
v_{1,i}^2 = v_{1,f}^2 + v_{2,f}^2   (4)
Now we can take equation 2 and square it.
v_{1,i}^2 = v_{1,f}^2 + v_{2,f}^2 + 2 \overrightarrow{v}_{1,f} \cdot \overrightarrow{v}_{2,f}
= v_{1,f}^2 + v_{2,f}^2 + 2 v_{1,f} v_{2,f} \cos(\theta)    (5)
where \theta is the angle between the two balls’ final velocity vectors.
Now, if you compare equations 4 and 5, you’ll see that their left-hand sides are equal, so their right-hand sides must be equal too. However, the cross term 2 v_{1,f} v_{2,f} \cos(\theta) shows up in equation 5, but not in 4. This means it must equal 0.
So then,
\cos(\theta) = 0
and this can only be true if \theta = \frac{\pi}{2}, which is equivalent to 90 degrees!
(If the target ball is struck dead-center, then the cross term goes to 0 from v_{1,f} = 0.)
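This is easy to check with a few lines of numpy. In a frictionless equal-mass elastic collision the target ball takes the velocity component along the line of centers and the cue ball keeps the perpendicular remainder, so their final velocities should always be orthogonal. A sketch with random geometry:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(5):
    v1i = rng.normal(size=2)                   # incoming cue-ball velocity
    phi = rng.uniform(0, 2*np.pi)
    n = np.array([np.cos(phi), np.sin(phi)])   # unit vector along line of centers
    v2f = (v1i @ n)*n                          # component transferred to target
    v1f = v1i - v2f                            # perpendicular part kept by cue
    print(v1f @ v2f)                           # ~0: they separate at 90 degrees
```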

In a more general case, where we include inelasticity and the possibility of the balls having different masses, we arrive at a more general formula for \theta. We’ll still assume negligible friction and spinning, and that the target ball is initially stationary.
So we still have conservation of momentum:
m_1 \overrightarrow{v}_{1,i} = m_1 \overrightarrow{v}_{1,f} + m_2 \overrightarrow{v}_{2,f}   (6)
And conservation of energy, but with some of the energy going to sound and heat. This is represented by an elasticity factor \alpha, which ranges from 1 (perfectly elastic) to 0 (perfectly inelastic).
\alpha \frac{1}{2} m_1 v_{1,i}^2 = \frac{1}{2} m_1 v_{1,f}^2 + \frac{1}{2} m_2 v_{2,f}^2   (7)
So we can take equation 6, and just like last time, we square it.
m_1^2 v_{1,i}^2 = m_1^2 v_{1,f}^2 + m_2^2 v_{2,f}^2 + 2 m_1 m_2 v_{1,f} v_{2,f} \cos(\theta)   (8)
Now divide equation 8 by m_1 on both sides, and you will have an expression for m_1 v_{1,i}^2:
m_1 v_{1,i}^2 = m_1 v_{1,f}^2 + \frac{m_2^2}{m_1} v_{2,f}^2 + 2 m_2 v_{1,f} v_{2,f} \cos(\theta),
which can be plugged into the left side of equation 7.
So now, you can solve for \theta in terms of the final velocities, masses, and elasticity factor.
The general result is then:
\cos(\theta) = \frac{1-\alpha}{2 \alpha} \frac{m_1}{m_2} \frac{v_{1,f}}{v_{2,f}} + \frac{1}{2 \alpha} \left(1 - \alpha \frac{m_2}{m_1}\right) \frac{v_{2,f}}{v_{1,f}}
We can get back the previous result, \cos(\theta) = 0, by setting m_1 = m_2 and \alpha = 1.
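The formula can be spot-checked numerically: pick masses and final velocities consistent with momentum conservation, infer \alpha from the energy balance, and compare the formula’s \cos(\theta) with the actual angle. A sketch (the numbers are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(1)
m1, m2 = 0.17, 0.16                     # arbitrary masses
v1i = np.array([2.0, 0.0])              # cue ball moving along x
v2f = 0.5*rng.normal(size=2)            # some final target-ball velocity
v1f = v1i - (m2/m1)*v2f                 # momentum conservation, eq. 6

KEi = 0.5*m1*(v1i @ v1i)
KEf = 0.5*m1*(v1f @ v1f) + 0.5*m2*(v2f @ v2f)
alpha = KEf/KEi                         # elasticity factor, from eq. 7
# (random draws may give alpha > 1; the check below is purely algebraic)

s1, s2 = np.linalg.norm(v1f), np.linalg.norm(v2f)
actual = (v1f @ v2f)/(s1*s2)            # cos(theta) between final velocities
formula = ((1-alpha)/(2*alpha)*(m1/m2)*(s1/s2)
           + 1/(2*alpha)*(1 - alpha*m2/m1)*(s2/s1))
print(actual - formula)                 # ~0
```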

Translating a Wave Function

In algebra, or pre-calc, you learn that you can change the position of a function by modifying its argument. In quantum physics this idea is used to displace wave functions. If a function starts off at one position and moves to another position, all that is needed is a change in argument. However, quantum physics likes to use linear operators to alter functions. What would an operator that changes the argument of a function look like? In this post, I will construct a 1-D example of such an operator.

A general wave function can be written as: \Psi (x), where the shape of \Psi is dependent on the spatial variable x.
To translate a function by distance a, modify the argument of \Psi,
To move the function right by a, \Psi(x) \rightarrow \Psi (x-a)
To move the function left by a, \Psi(x) \rightarrow \Psi(x+a)
Let’s just take the \Psi(x+a) example, and without loss of generality say that a can be positive or negative.
Next we can take advantage of Taylor expansions.
A function f(x) can be expanded around a point a: f(x) = f(a) + (x-a)\frac{d}{dx}f(a) + \frac{(x-a)^2}{2!} \frac{d^2}{dx^2}f(a) + ...
Here, in our example, we want to expand \Psi(x+a) around x, to express the translated function \Psi(x+a) in terms of the original function \Psi(x):
\Psi(x+a) = \Psi(x) + a\frac{d}{dx}\Psi(x) + \frac{a^2}{2!} \frac{d^2}{dx^2} \Psi(x) + ...
Note: \frac{d}{dx} = \frac{i}{\hbar} p, since p = -i\hbar \frac{d}{dx}.
A more complete version of this expression would be:
\Psi(x+a) = \sum\limits_{n=0}^{\infty} \frac{a^n}{n!} \frac{d^n}{dx^n} \Psi(x) = \Psi(x) + \sum\limits_{n=1}^{\infty} \frac{a^n}{n!} \frac{d^n}{dx^n} \Psi(x)
This sum’s structure is similar to the Taylor expansion of the exponential function.
e^x = 1 + x + x^2/2! + ... = \sum\limits_{n=0}^{\infty} \frac{x^n}{n!}
Every operator in the \Psi(x+a) expansion can be absorbed into a single simplified operator: \Psi(x+a) = e^{a\frac{d}{dx}} \Psi(x)
= e^{\frac{ai}{\hbar} p} \Psi(x).
This new operator e^{a\frac{d}{dx}} can be expanded to return to what we had before: e^{a\frac{d}{dx}} = \sum\limits_{n=0}^{\infty} \frac{a^n}{n!} \frac{d^n}{dx^n}.
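A numerical sketch of this operator: on a periodic grid, d/dx becomes multiplication by ik in Fourier space, so e^{a d/dx} is multiplication by e^{iak}. Applying it to a Gaussian should reproduce the shifted Gaussian (the grid and shift are arbitrary choices):

```python
import numpy as np

N, box, a = 1024, 40.0, 3.0
x = np.linspace(-box/2, box/2, N, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(N, d=x[1] - x[0])

psi = np.exp(-x**2)                                      # test wavefunction
shifted = np.fft.ifft(np.exp(1j*a*k)*np.fft.fft(psi))    # e^{a d/dx} psi
print(np.max(np.abs(shifted - np.exp(-(x + a)**2))))     # tiny: equals Psi(x+a)
```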
So the translated function is a version of the original function with a specific type of interference added, such that the structure is: Translated = Original + Interference.
\Psi(x+a) = \Psi(x) + \sum\limits_{n=1}^{\infty} \frac{a^n}{n!} \frac{d^n}{dx^n} \Psi(x) = \Psi(x) + \Delta(a)\Psi(x)
We can reduce the operators in the interference terms into an exponential in the usual way:
\Delta(a) = \sum\limits_{n=1}^{\infty} \frac{a^n}{n!} \frac{d^n}{dx^n}
= e^{a \frac{d}{dx} } - 1

The expectation value \left< e^{a\frac{d}{dx}} \right> characterizes the average overlap between \Psi(x) and \Psi(x+a).
\left< e^{a\frac{d}{dx}} \right> = \left< \Psi(x) \right| e^{a\frac{d}{dx}} \left| \Psi(x) \right>
= \left< \Psi(x) | \Psi(x+a) \right>
= \left< \Psi(x) \right| 1 + \Delta(a) \left| \Psi(x) \right>
= 1 + \left< \Delta(a) \right>
The expectation value of the translation operator is then unity plus the expectation value of the interference operator.
\left< \Delta (a) \right> = \sum\limits_{n=1}^{\infty} \frac{a^n}{n!} \left< \frac{d^n}{dx^n} \right>
= \sum\limits_{n=1}^{\infty} \frac{(ia)^n}{(\hbar)^n n!} \left< p^n \right>
The interference expectation value is shown to be an expectation value of a function of the momentum operator.
In the case of localized waves, as a gets much greater than the width of \Psi(x), the interference term approaches -1, since the overlap between the original and translated wave functions decreases. This is equivalent to the original and displaced wave functions becoming more and more orthogonal.
\lim\limits_{a\rightarrow \infty} \left< \Delta (a) \right> = -1
\lim\limits_{a\rightarrow \infty} \left< \Psi(x) | \Psi(x+a) \right> = 0
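A quick numerical sketch of this limit, using a normalized Gaussian as my own test function: the overlap \left< \Psi(x) | \Psi(x+a) \right> falls toward 0 as the shift a grows.

```python
import numpy as np

N, box = 4096, 100.0
x = np.linspace(-box/2, box/2, N, endpoint=False)
dx = x[1] - x[0]

def gaussian(x):
    g = np.exp(-x**2)
    return g/np.sqrt(np.sum(g**2)*dx)    # normalized test wavefunction

psi = gaussian(x)
for a in [0.0, 1.0, 2.0, 5.0, 10.0]:
    print(a, np.sum(psi*gaussian(x + a))*dx)   # 1.0, 0.61, 0.14, ... -> 0
```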