
# Hamilton's Equations

$\dot q = \frac{\partial H}{\partial p}, \quad \dot p = - \frac{\partial H}{\partial q}$

## Intuitive

Hamilton’s Equations show how the $q_i$'s and $p_i$'s undergo a ‘dance to the music of time’, a dance in which, as some $q_i$'s or $p_i$'s increase in value, others decrease in value, but always such as to keep the energy constant (in conservative systems), and always such as to keep the total action minimized, both instant by instant, and over the whole path between ‘surfaces-of-common-action’. This ‘dance’ is governed by one function, $H$; that is to say, while $H$ is different for different systems (orbiting planets, a statistical ensemble, an electrical circuit, positrons orbiting an atomic antinucleus, a spinning top, juggling pins, a flowing river and so on) yet within any one system there is just one overarching function (there is no need for individual functions, $H_1$, $H_2$, $\ldots$, $H_n$). (Jennifer Coopersmith, The Lazy Universe)

Now, how are we to visualize Hamilton's equations in terms of phase space? First, we should bear in mind what a single point Q of phase space actually represents. It corresponds to a particular set of values for all the position coordinates $x_1, x_2, \ldots$ and for all the momentum coordinates $p_1, p_2, \ldots$ That is to say, Q represents our entire physical system, with a particular state of motion specified for every single one of its constituent particles. Hamilton's equations tell us what the rates of change of all these coordinates are when we know their present values; i.e. they govern how all the individual particles are to move. Translated into phase-space language, the equations are telling us how a single point Q in phase space must move, given the present location of Q in phase space. Thus, at each point of phase space, we have a little arrow (more correctly, a vector) which tells us the way that Q is moving, in order to describe the evolution of our entire system in time. The whole arrangement of arrows constitutes what is known as a vector field (Fig. .11). Hamilton's equations thus define a vector field on phase space.

Let us see how physical determinism is to be interpreted in terms of phase space. For initial data at time t = 0, we would have a particular set of values specified for all the position and momentum coordinates; that is to say, we have a particular choice of point Q in phase space. To find the evolution of the system in time, we simply follow the arrows. Thus the entire evolution of our system with time - no matter how complicated that system might be - is described in phase space as just a single point moving along following the particular arrows that it encounters. We can think of the arrows as indicating the "velocity" of our point Q in phase space. For a "long" arrow, Q moves along swiftly, but if the arrow is "short", Q's motion will be sluggish. To see what our physical system is doing at time t, we simply look to see where Q has moved to, by that time, by following arrows in this way. Clearly, this is a deterministic procedure. The way that Q moves is completely determined by the Hamiltonian vector field.

page 174ff in "The Emperor's New Mind" by R. Penrose


## Concrete

Hamilton's equations are $$\dot q = \frac{\partial H}{\partial p} \\ \dot p = - \frac{\partial H}{\partial q}$$

These equations tell you how to move a point in phase space infinitesimally, given a scalar function $H$ on the phase space. Such a transformation is an infinitesimal canonical transformation.
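To make this concrete, here is a minimal numerical sketch (an illustration, not from any of the quoted sources): it integrates Hamilton's equations for a one-dimensional harmonic oscillator with the illustrative choice $H = (p^2 + q^2)/2$, using the symplectic leapfrog scheme, and checks that the energy stays (almost) constant, as the Intuitive section describes.

```python
import math

# Illustrative Hamiltonian H(q, p) = p**2/2 + q**2/2 (unit mass and spring
# constant).  Hamilton's equations give
#   dq/dt =  dH/dp = p,
#   dp/dt = -dH/dq = -q.
def dH_dp(q, p):
    return p

def dH_dq(q, p):
    return q

def leapfrog(q, p, dt, steps):
    """Integrate Hamilton's equations with the symplectic leapfrog scheme."""
    for _ in range(steps):
        p -= 0.5 * dt * dH_dq(q, p)   # half kick: dp/dt = -dH/dq
        q += dt * dH_dp(q, p)         # full drift: dq/dt = dH/dp
        p -= 0.5 * dt * dH_dq(q, p)   # half kick
    return q, p

q0, p0 = 1.0, 0.0
q1, p1 = leapfrog(q0, p0, dt=0.01, steps=1000)  # evolve to t = 10

E0 = 0.5 * (p0**2 + q0**2)
E1 = 0.5 * (p1**2 + q1**2)
# Exact solution for this start point: q(t) = cos(t), p(t) = -sin(t)
err_q = abs(q1 - math.cos(10.0))
err_p = abs(p1 + math.sin(10.0))
```

The leapfrog splitting is chosen here because it is itself a canonical transformation per step, which is why the energy error stays bounded instead of drifting.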

Time evolution of an Observable

In general, the total time derivative of a function $F$ of the generalized position and momentum coordinates, with no explicit time dependence (which is often the case), is

$$\frac{dF}{dt}=\frac{\partial F}{\partial q}\frac{dq}{dt}+\frac{\partial F}{\partial p}\frac{dp}{dt}$$

Using Hamilton's equations we get

$$\frac{dF}{dt}=\frac{\partial F}{\partial q}\frac{\partial H}{\partial p}-\frac{\partial F}{\partial p}\frac{\partial H}{\partial q}\equiv \{F,H\},$$

where $\{F,H\}$ denotes the Poisson bracket. Take note that there is a close connection between this equation and the Heisenberg equation in quantum mechanics: $$\frac{\mathrm{d}\hat F}{\mathrm{d}t} = -\frac{i}{\hbar}[\hat F,\hat H] + \frac{\partial \hat F}{\partial t}.$$
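The bracket relation for the time evolution of an observable can be checked numerically. The sketch below (an illustration with the hypothetical observable $F = qp$ for the harmonic oscillator $H = (q^2 + p^2)/2$) compares a finite-difference time derivative of $F$ along the exact flow with the Poisson bracket $\partial F/\partial q \, \partial H/\partial p - \partial F/\partial p \, \partial H/\partial q = p^2 - q^2$.

```python
import math

# Exact flow of H = (q**2 + p**2)/2 starting at q=1, p=0:
#   q(t) = cos(t),  p(t) = -sin(t).
def F(t):
    q, p = math.cos(t), -math.sin(t)
    return q * p          # the observable F(q, p) = q*p along the flow

def poisson_bracket_F_H(t):
    q, p = math.cos(t), -math.sin(t)
    # {F, H} = dF/dq * dH/dp - dF/dp * dH/dq = p*p - q*q
    return p**2 - q**2

t, h = 0.7, 1e-5
dF_dt = (F(t + h) - F(t - h)) / (2 * h)   # central-difference derivative
bracket = poisson_bracket_F_H(t)
```

The two numbers agree to high accuracy, illustrating $dF/dt = \{F, H\}$ for this example.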

Symmetries of Hamilton's equations - Canonical Transformations

Hamilton's equations can be written more compactly when we introduce the $2n$-dimensional vector $x\equiv ( q_1,\ldots,q_n, p_1,\ldots,p_n)$ and the $(2n \times 2n)$ matrix

$J\equiv \left[ \begin{array}{cc} 0 & I_n \\ -I_n & 0 \end{array} \right] .$

The equations then read

$$\dot x = J \frac{\partial H}{\partial x} .$$
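This compact form can be verified directly: with the ordering $x = (q_1,\ldots,q_n,p_1,\ldots,p_n)$, the block matrix with $+I$ in the upper-right reproduces $\dot q_i = \partial H/\partial p_i$ and $\dot p_i = -\partial H/\partial q_i$. A small sketch with the illustrative choice $H = \sum_i (q_i^2 + p_i^2)/2$:

```python
# Build the 2n x 2n matrix J for x = (q_1,...,q_n, p_1,...,p_n): +I in the
# upper-right block, -I in the lower-left block.
def make_J(n):
    J = [[0.0] * (2 * n) for _ in range(2 * n)]
    for i in range(n):
        J[i][n + i] = 1.0    # q-rows pick out +dH/dp
        J[n + i][i] = -1.0   # p-rows pick out -dH/dq
    return J

# Illustrative Hamiltonian H = sum_i (q_i**2 + p_i**2)/2, so dH/dx_i = x_i.
def grad_H(x):
    return list(x)

n = 2
J = make_J(n)
x = [1.0, 2.0, 3.0, 4.0]   # (q1, q2, p1, p2)
g = grad_H(x)
xdot = [sum(J[i][j] * g[j] for j in range(2 * n)) for i in range(2 * n)]
# Hamilton's equations predict (p1, p2, -q1, -q2) = (3, 4, -1, -2).
```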

The question is now: which transformations

$$q_i \to Q_i(q,p) \quad \text{and} \quad p_i \to P_i(q,p)$$

are allowed without changing the form of the equations? Since the Hamiltonian formalism puts $q_i$ and $p_i$ on an equal footing, there are even more such symmetries than in the Lagrangian formalism.

Example: Oscillator

A simple example is the harmonic oscillator in 3 variables. SO(3) rotations act on the configuration variables, preserving the action, so Noether’s theorem gives you 3 conserved quantities, the angular momentum variables. The moment map point of view, however, gives you much more. The phase space is 6-dimensional (3 positions + 3 momenta) and the Lie group Sp(6,R) of linear symplectic transformations acts on it, with a subgroup U(3) preserving the Hamiltonian. The U(3) includes the SO(3) rotations as a subgroup, but it is much larger (9 dimensions vs. 3), so the moment map gives you many more conserved quantities. After quantization, you learn that energy eigenstates are U(3) representations, telling you much more about them than what angular momentum tells you. (Source: https://www.math.columbia.edu/~woit/wordpress/?p=7146)

To analyze the type of transformations that are possible, we denote them as

$$x_i \to y_i(x).$$

We then have

$$\dot{y}_i = \frac{\partial y_i}{\partial x_j}\dot{x}_j = \frac{\partial y_i}{\partial x_j} J_{jk} \frac{\partial H}{\partial y_l}\frac{\partial y_l}{\partial x_k}$$

or in matrix notation $$\dot{y} = (\mathcal{J} J \mathcal{J}^T) \frac{\partial H}{\partial y},$$

where $\mathcal{J}_{ij} \equiv \frac{\partial y_i}{\partial x_j}$ is the Jacobian of the transformation. We can now see that the transformations which leave Hamilton's equations invariant are exactly those whose Jacobian $\mathcal{J}$ satisfies

$$\mathcal{J} J \mathcal{J}^T = J , \quad \text{or in index notation} \quad \frac{\partial y_i}{\partial x_j} J_{jk} \frac{\partial y_l}{\partial x_k} = J_{il}.$$

We call a Jacobian $\mathcal{J}$ that fulfills this property symplectic. A transformation whose Jacobian is symplectic is called a canonical transformation.

Take note that Poisson brackets are also invariant under canonical transformations. Conversely, all transformations that leave the fundamental Poisson brackets unchanged, $$\{ Q_i , Q_j\} =0 , \quad \{ P_i , P_j \} = 0 , \quad \{ Q_i , P_j \} = \delta_{ij},$$ are canonical transformations. For a proof, see page 104 here.

Example of a canonical transformation
\begin{eqnarray*} q_1 &=&Q_1\cos Q_2 \\ q_2 &=&Q_1\sin Q_2 \\ p_1 &=&P_1\cos Q_2-\frac{P_2}{Q_1}\sin Q_2 \\ p_2 &=&P_1\sin Q_2+\frac{P_2}{Q_1}\cos Q_2 \end{eqnarray*}

The associated Jacobian matrix is given by

$M=\left[ \begin{array}{cc} A & B \\ C & D \end{array} \right]$

where

\begin{eqnarray*} A &=&\left[ \begin{array}{cc} \cos Q_2 & -Q_1\sin Q_2 \\ \sin Q_2 & Q_1\cos Q_2 \end{array} \right] \\ B &=&\left[ \begin{array}{cc} 0 & 0 \\ 0 & 0 \end{array} \right] \\ C &=&\left[ \begin{array}{cc} \frac{P_2}{Q_1^2}\sin Q_2 & -P_1\sin Q_2-\frac{P_2}{Q_1}\cos Q_2 \\ -\frac{P_2}{Q_1^2}\cos Q_2 & P_1\cos Q_2-\frac{P_2}{Q_1}\sin Q_2 \end{array} \right] \\ D &=&\left[ \begin{array}{cc} \cos Q_2 & -\frac{\sin Q_2}{Q_1} \\ \sin Q_2 & \frac{\cos Q_2}{Q_1} \end{array} \right] \end{eqnarray*}

Using the fact that

$\left[ \begin{array}{cc} A & B \\ C & D \end{array} \right] ^T=\left[ \begin{array}{cc} A^T & C^T \\ B^T & D^T \end{array} \right]$

we check directly that $M^TJM=J$ (a matrix is symplectic exactly if its transpose is, so this is equivalent to the condition $\mathcal{J} J \mathcal{J}^T = J$ above):

$\left[ \begin{array}{cc} A^T & C^T \\ B^T & D^T \end{array} \right] \left[ \begin{array}{cc} 0 & I \\ -I & 0 \end{array} \right] \left[ \begin{array}{cc} A & B \\ C & D \end{array} \right] =\left[ \begin{array}{cc} 0 & I \\ -I & 0 \end{array} \right]$
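As a cross-check, the symplectic condition for this example can also be verified numerically: the sketch below builds the Jacobian $M$ of the transformation by central finite differences at an arbitrarily chosen point (an illustrative value, with $Q_1 \neq 0$) and tests $M^T J M = J$, using the convention $J = \left[\begin{smallmatrix} 0 & I \\ -I & 0 \end{smallmatrix}\right]$; the condition is insensitive to the overall sign of $J$.

```python
import math

# The polar-coordinate transformation from the example above, mapping the new
# coordinates y = (Q1, Q2, P1, P2) to the old ones x = (q1, q2, p1, p2):
def transform(y):
    Q1, Q2, P1, P2 = y
    return [Q1 * math.cos(Q2),
            Q1 * math.sin(Q2),
            P1 * math.cos(Q2) - (P2 / Q1) * math.sin(Q2),
            P1 * math.sin(Q2) + (P2 / Q1) * math.cos(Q2)]

# Jacobian M_ij = dx_i/dy_j via central finite differences.
def jacobian(f, y, h=1e-6):
    n = len(y)
    M = [[0.0] * n for _ in range(n)]
    for j in range(n):
        yp, ym = list(y), list(y)
        yp[j] += h
        ym[j] -= h
        fp, fm = f(yp), f(ym)
        for i in range(n):
            M[i][j] = (fp[i] - fm[i]) / (2 * h)
    return M

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

J = [[0.0, 0.0, 1.0, 0.0],
     [0.0, 0.0, 0.0, 1.0],
     [-1.0, 0.0, 0.0, 0.0],
     [0.0, -1.0, 0.0, 0.0]]

y0 = [1.3, 0.4, 0.7, -0.2]          # an arbitrary test point with Q1 != 0
M = jacobian(transform, y0)
MTJM = matmul(transpose(M), matmul(J, M))
max_err = max(abs(MTJM[i][j] - J[i][j]) for i in range(4) for j in range(4))
```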

Generalized Hamilton's equations and infinitesimal canonical transformations

Next, we consider infinitesimal canonical transformations, which will help us to understand normal canonical transformations better.

We can write an infinitesimal transformation as

\begin{align} q_i \to Q_i &= q_i + \alpha F_i(q,p) \notag \\ p_i \to P_i &= p_i + \alpha E_i(q,p) \notag , \end{align} where $\alpha$ is infinitesimally small.

The question we now need to answer is: which functions $F_i(q,p)$ and $E_i(q,p)$ are allowed such that these transformations are indeed canonical? As we found out above, canonical transformations are defined through their Jacobian. Thus we now calculate the Jacobian of our infinitesimal transformation

$\mathcal{J} = \left[ \begin{array}{cc} \delta_{ij} + \alpha \partial F_i/\partial q_j & \alpha \partial F_i / \partial p_j \\ \alpha \partial E_i/\partial q_j & \delta_{ij} + \alpha \partial E_i/\partial p_j \end{array} \right] .$ The defining condition for canonical transformations, $\mathcal{J} J \mathcal{J}^T \stackrel{!}{=} J$, then tells us (to first order in $\alpha$) that

$$\frac{\partial F_i}{\partial q_j} \stackrel{!}{=} - \frac{\partial E_j}{\partial p_i}$$ must hold, together with the symmetry conditions $\partial F_i/\partial p_j = \partial F_j/\partial p_i$ and $\partial E_i/\partial q_j = \partial E_j/\partial q_i$.

This equation is fulfilled if

$$F_i = \frac{\partial G}{\partial p_i } \quad \text{ and } \quad E_i = - \frac{\partial G}{\partial q_i },$$ since partial derivatives commute:

$$\frac{\partial F_i}{\partial q_j} = \frac{\partial^2 G}{\partial q_j\partial p_i } = \frac{\partial^2 G }{\partial p_i \partial q_j} \stackrel{!}{=} - \frac{\partial E_j}{\partial p_i} \checkmark$$

(The symmetry conditions hold for the same reason, since both off-diagonal blocks become second derivatives of $G$.)

The functions $G(q,p)$ therefore generate the infinitesimal transformations.
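This can be illustrated numerically. For the (hypothetical example) generator $G = q^2 p$ with one degree of freedom, the generated map is $Q = q + \alpha q^2$, $P = p - 2\alpha q p$; the deviation of its Jacobian from the symplectic condition should shrink quadratically with $\alpha$, i.e. the map is canonical to first order.

```python
# Infinitesimal map generated by the example function G(q, p) = q**2 * p:
#   Q = q + a * dG/dp = q + a * q**2,
#   P = p - a * dG/dq = p - 2 * a * q * p.
# Its Jacobian should satisfy Jac J Jac^T = J up to terms of order a**2.

def deviation(a, q, p):
    # Jacobian of the map (Q, P) with respect to (q, p):
    jac = [[1 + 2 * a * q, 0.0],
           [-2 * a * p, 1 - 2 * a * q]]
    # For a 2x2 matrix, Jac J Jac^T = det(Jac) * J, so the deviation from
    # the symplectic condition is measured by det(Jac) - 1.
    det = jac[0][0] * jac[1][1] - jac[0][1] * jac[1][0]
    return abs(det - 1.0)

q, p = 0.8, -0.5          # an arbitrary phase-space point (illustrative)
d1 = deviation(1e-3, q, p)   # deviation at step size a
d2 = deviation(2e-3, q, p)   # deviation at step size 2a
ratio = d2 / d1              # ~4 if the deviation is O(a**2)
```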

Next we can consider families of transformations $$q_i \to Q_i (q,p; \alpha) \quad \text{ and } \quad p_i \to P_i (q,p; \alpha) .$$ For each value of $\alpha$ we have a different transformation and

$$Q_i (q,p; 0) = q_i \quad \text{ and } \quad P_i (q,p; 0) = p_i .$$

Starting in one state of the system $(q,p)$ we can use canonical transformations to get to another state of the system, i.e. a different point in phase space. As we vary $\alpha$ continuously we trace out a path in phase space.

We can then insert our generating function $G$ into the formulas for the general infinitesimal transformation that we started with:

\begin{align} q_i \to Q_i (q,p; \alpha) &= q_i + \alpha F_i(q,p) = q_i + \alpha \frac{\partial G}{\partial p_i } \notag \\ p_i \to P_i(q,p; \alpha) &= p_i + \alpha E_i(q,p) = p_i - \alpha \frac{\partial G}{\partial q_i } \notag . \end{align}

We rewrite this as

\begin{align} \frac{Q_i (q,p; \alpha) - q_i}{\alpha} &= \frac{\partial G}{\partial p_i } \notag \\ \frac{P_i(q,p; \alpha) - p_i}{\alpha} &= - \frac{\partial G}{\partial q_i } \notag . \end{align}

Therefore, taking the limit $\alpha \to 0$, we get the derivatives with respect to $\alpha$

$$\frac{ dq_i}{d\alpha} = \frac{\partial G}{\partial p_i} \quad \text{ and } \quad \frac{ dp_i}{d\alpha} = - \frac{\partial G}{\partial q_i}.$$ ($\alpha$ is the parameter that parametrizes our curve in phase space.)

These look exactly like Hamilton's equations, but with the general generating function $G$ in place of the Hamiltonian and the general parameter $\alpha$ in place of time.

We can now understand Hamilton's equations, as written above, as just the special case where the transformation is a time translation. The corresponding generating function is the Hamiltonian.

Another example is $G= p_k$, i.e. the momentum along the $k$-th axis. The corresponding infinitesimal canonical transformation reads $q_i \to q_i + \alpha \delta _{ik}$ and $p_i \to p_i$, which is simply a translation along that axis. Therefore, we can say that translations are generated by the conjugate momentum.
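The finite flow of a generator is obtained by integrating these equations in $\alpha$. The sketch below (an illustration) uses the angular momentum $G = q_1 p_2 - q_2 p_1$ as generator; its flow equations are $dq_1/d\alpha = -q_2$, $dq_2/d\alpha = q_1$, $dp_1/d\alpha = -p_2$, $dp_2/d\alpha = p_1$, so the flow should rotate both the $q$-plane and the $p$-plane by the angle $\alpha$.

```python
import math

# Flow of the angular-momentum generator G = q1*p2 - q2*p1:
#   dq1/da =  dG/dp1 = -q2,   dq2/da =  dG/dp2 =  q1,
#   dp1/da = -dG/dq1 = -p2,   dp2/da = -dG/dq2 =  p1.
def flow(state, alpha, steps=5000):
    q1, q2, p1, p2 = state
    h = alpha / steps
    for _ in range(steps):  # simple Euler integration of the flow equations
        q1, q2, p1, p2 = q1 - h * q2, q2 + h * q1, p1 - h * p2, p2 + h * p1
    return q1, q2, p1, p2

alpha = 0.5
q1, q2, p1, p2 = flow((1.0, 0.0, 0.3, -0.4), alpha)

# Compare with an explicit rotation of (q1, q2) and (p1, p2) by alpha:
err = max(abs(q1 - math.cos(alpha)),
          abs(q2 - math.sin(alpha)),
          abs(p1 - (0.3 * math.cos(alpha) - (-0.4) * math.sin(alpha))),
          abs(p2 - (0.3 * math.sin(alpha) + (-0.4) * math.cos(alpha))))
```

This makes concrete the slogan that angular momentum generates rotations, just as the conjugate momentum generates translations.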

## Abstract

If a first-order ODE can be written in the form $$\dot s = \mathbb J \frac{\partial H}{\partial s}$$

where $s$ is a point on a $2n$-dimensional manifold $\mathcal S$, $H:\mathcal S \to \mathbb R$ is some function, and $\mathbb J$ is the $2n\times 2n$ symplectic matrix $$\mathbb J = \left( \begin{matrix} 0 & \mathbb{I}_{n\times n} \\ -\mathbb{I}_{n\times n} & 0 \end{matrix}\right),$$ the equations are said to form a Hamiltonian system with Hamiltonian function $H$.

## Why is it interesting?

Hamilton's Equations are a way of rewriting Newton's Second Law of Motion as a first-order system of differential equations, i.e., one that only contains first derivatives in time.

This comes at the cost of doubling the size of the system.

[E]verybody loves Hamilton’s equations: there are just two, and they summarize the entire essence of classical mechanics.

You get equations like Hamilton's whenever a system *extremizes something subject to constraints*. A moving particle minimizes action; a box of gas maximizes entropy. (John Baez)

## Origin

"Hamilton’s equations and the Maxwell relations—are mathematically just the same. They both say simply that partial derivatives commute." See https://johncarlosbaez.wordpress.com/2012/01/19/classical-mechanics-versus-thermodynamics-part-1/ and https://johncarlosbaez.wordpress.com/2012/01/23/classical-mechanics-versus-thermodynamics-part-2/


equations/hamiltons_equations.txt · Last modified: 2019/02/12 14:21 by 129.13.36.189