Geodesics - from intuition to equations

The simplest kind of geometry, taught in schools, is so-called Euclidean geometry - named after the ancient Greek mathematician Euclid, who described its basics around 300 BC in his "Elements". It is based on the notions of points, straight lines and planes, and it seems to correspond perfectly to our everyday experiences with various shapes. However, even in our immediate surroundings we can notice problems for which Euclidean geometry is insufficient.

Let's imagine, for example, that we are airline pilots and our task is to fly as quickly as possible from Warsaw, Poland to San Francisco. We take a world map and, knowing from Euclidean geometry that a straight line is the shortest path between two points, we draw such a line from Warsaw to San Francisco. We're getting ready to depart and fly along the course we plotted... but fortunately, our navigator friend tells us that we've fallen into a trap.

The trap is that the surface of the Earth isn't flat! The map we used to plot our straight line course is just a projection of a surface that is close to spherical in reality. Because of that, the red line on the map below is not the shortest path - the purple line is:

Red line - a straight line between Warsaw and San Francisco on the map. The purple line is the actual shortest path.

It might be a bit clearer if we take a look at the Earth in a spherical form:

The shortest path between Warsaw and San Francisco on a globe

Let's note that the purple line (or the black one in the second picture) is still straight in some sense. If we get on a plane and start flying straight ahead, we'll be flying along this path. We won't have to make any turns.

Such lines - analogous to straight lines, but on curved surfaces - are called geodesics and we'll talk a bit about plotting (or maybe rather: calculating) them here.

But before we dive into the details, we have to expand our conceptual apparatus a bit, so that we have the right vocabulary to talk about such general notions/spaces.


Let's start by looking at generalizations of the Euclidean space called manifolds.

What is a manifold? It's just a set of points which locally resembles a Euclidean space. What does that mean? To put it in simple terms: if we choose a point on our manifold and look at its close neighborhood, it will look like a Euclidean space - that is, it will be a good approximation to talk about straight lines and other shapes known from Euclidean geometry in this neighborhood. So, basically, it's possible to use Euclidean geometry in small neighborhoods of points of a manifold.

(How small does the "small neighborhood" need to be? Mathematicians use the notion of a limit in such cases. Simply put, the smaller the neighborhood, the closer its geometry will be to Euclidean geometry. Strictly speaking, it might only become exactly Euclidean when the size of the neighborhood is zero - which might not seem particularly useful, because we're talking about a single point then - but it turns out that it's enough for many interesting purposes.)

What can you do with manifolds? Primarily, you can introduce coordinate systems on them, which in this context are called charts. A chart is a function mapping a subset of the manifold (or, sometimes, the whole manifold) to a subset of a Euclidean space, usually identified with \mathbb{R}^n. This means that we assign n real numbers to every point of some part of our manifold - exactly as when coordinate systems are introduced on a plane, or in a Euclidean space. Here n is the dimension of the manifold - just like Euclidean spaces, manifolds can have arbitrary dimensions.

It might be that the shape of the manifold is so complex that it's impossible to define a chart covering all of it. It's not a problem as long as every part can be covered by some chart, and parts covered by different charts have some points in common. One can then describe different parts of the manifold in different charts, and translate the description from one chart to another in the common parts. A set of charts defined on a manifold is called an atlas.

Example: M is a manifold with two charts, one defined on the subset U_\alpha (green), the second on U_\beta (purple). The subsets have a part in common (cyan), where both charts can be used. It's thus possible to transition from one chart to the other, and vice versa (\tau_{\alpha,\beta} and \tau_{\beta,\alpha}).

The functions mapping one chart to another on a part of the manifold where multiple charts are defined, are called transition maps.
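As a small illustration of charts and transition maps - a Python sketch with names of my own choosing, not taken from any library - here are two charts on the unit circle (a 1-dimensional manifold), neither of which covers the whole circle, together with the transition map between them on the overlap:

```python
import math

# Two charts on the unit circle S^1 (a 1-dimensional manifold).
# Chart alpha covers the circle minus the point (-1, 0): angle in (-pi, pi).
# Chart beta covers the circle minus the point (1, 0): angle in (0, 2*pi).

def chart_alpha(p):
    x, y = p
    return math.atan2(y, x)                   # angle in (-pi, pi)

def chart_beta(p):
    x, y = p
    t = math.atan2(y, x)
    return t if t > 0 else t + 2 * math.pi    # angle in (0, 2*pi)

def tau_alpha_beta(theta):
    """Transition map from chart alpha to chart beta (on the overlap)."""
    return theta if theta > 0 else theta + 2 * math.pi

# A point in the upper-left part of the circle, covered by both charts;
# going through the transition map agrees with applying chart beta directly:
p = (math.cos(2.5), math.sin(2.5))
assert abs(tau_alpha_beta(chart_alpha(p)) - chart_beta(p)) < 1e-12
```

Note that neither chart alone could cover the whole circle continuously - an atlas of at least two charts is genuinely needed, which is typical for manifolds with nontrivial shapes.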

Because a chart is de facto a coordinate system, I'll just write about coordinate systems and transformations between them in the later part of the article.

Note that transition maps (the transformations between coordinate systems) are functions from \mathbb{R}^n to \mathbb{R}^n. Such functions may or may not be differentiable. The case when they are is precisely the subject of differential geometry; the manifold is then called a differentiable manifold.

Once we have a differentiable manifold and coordinates on it, we can also talk about vector, covector and tensor fields, and do various interesting things. But in order to talk about geodesics, we need another piece of the puzzle - the metric. Roughly speaking, a metric is something that defines distances between the points of our manifold, in a sense also defining its shape this way. A 2-dimensional manifold without a metric can be anything. It's the metric that allows us to distinguish between a part of a plane, a part of a sphere, or a part of a hyperbolic paraboloid.

I wrote a bit more about the metric and other notions related to differential geometry in the series Mathematics of black holes. It's unfinished, but I still recommend reading it before proceeding to the next part of this article. If you are not familiar with differential geometry, it should clarify some notions and notation I will be using later in the article. (This article will probably become a part of the series at some point, but I might have to rethink it first.)

Straight lines

Before we get to geodesics, let's consider how to describe simple straight lines in Euclidean geometry, but using the notions introduced above.

So, let's assume that our manifold is a simple Euclidean plane. We have Cartesian coordinates (x, y) on that plane, and the metric in these coordinates is just the identity matrix (see Part 3 - the metric). Say that we also have a curve given as a function t \mapsto (x(t), y(t)) (also called the parametric form). How can we tell whether this curve is actually a straight line?

Let's consider what is the simplest way of describing a straight line in the parametric form. We want to get different points on a straight line for different values of t. We can achieve that by choosing a point \vec{x_0} on the line and translating it by some vector along the line, depending on the value of t. It could look like this, for example:

 \vec{x}(t) = \vec{x_0} + \vec{v}t

(The similarity to the equation of straight, uniform motion is not a coincidence ;) )

Using a notation more similar to the one usually used in the context of differential geometry, we could also write it like this:

 x^\mu(t) = x_0^\mu + v^\mu t

This notation has a nice property - a complete lack of assumptions regarding the number of dimensions. This equation will look exactly the same on a plane as in a 16-dimensional space; it's only a question of what the range of the index \mu is.

What can we see here? The general parametric equation of a straight line is just a linear function of the parameter t (or, more accurately: n linear functions, one per coordinate). There is a simple differential equation whose solutions are exactly the linear functions. This equation is:

 \frac{d^2 x^\mu}{dt^2} = 0

Or: the second derivatives of the coordinates with respect to the parameter vanish.

If we denote the derivative with respect to t with a dot above the variable, which is a commonly used convention, the equation will look as follows:

 \ddot{x}^\mu = 0

It's easy to see that this is equivalent to the straight line equation above. If we take that equation and calculate the first derivative, the constant x_0^\mu will vanish, and the term v^\mu t will get reduced to v^\mu - a constant, which will vanish when we calculate the second derivative. This means that the straight line equation satisfies this differential equation.

And the other way round: if the second derivative of x^\mu is zero, it means that the first derivative is a constant, and x^\mu itself is a linear function of t.

This is then what the general equation of a straight line looks like in Euclidean geometry. Short and to the point.
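The claim is easy to check numerically. Below is a minimal Python sketch (the function names are my own): the second derivative of a parametric straight line x^\mu(t) = x_0^\mu + v^\mu t, approximated by central finite differences, vanishes for every t:

```python
# A straight line x^mu(t) = x0^mu + v^mu * t in 3 dimensions
x0 = [1.0, -2.0, 0.5]
v  = [0.3,  1.0, -4.0]

def x(t):
    return [x0[m] + v[m] * t for m in range(3)]

# central finite-difference approximation of the second derivative
def second_derivative(f, t, h=1e-3):
    return [(a - 2*b + c) / h**2
            for a, b, c in zip(f(t + h), f(t), f(t - h))]

# the second derivatives vanish along the whole line:
for t in (0.0, 1.0, 7.5):
    assert all(abs(d) < 1e-6 for d in second_derivative(x, t))
```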

(Small note: this is the equation of a straight line in so called affine parameterization, which means that t is proportional to the distance from the point at t=0. Other parameterizations are possible, in which the second derivatives of the coordinates don't necessarily vanish. However, it would complicate the reasoning, and every parameterization can be transformed into an affine parameterization, so I'll focus on this case only.)

We will now see how to start from this equation, and eventually get the general equation for geodesics.


After the warm-up in the Euclidean space, it is time to look at arbitrary manifolds. Let's assume that we have some manifold, a coordinate system x^\mu and a metric expressed in these coordinates g_{\mu\nu}. We are given a curve t \mapsto x^\mu(t) and we have to tell whether it is a geodesic.

A question that immediately comes to mind is: but how is a geodesic actually defined?

We'll exploit the fact that every manifold locally resembles Euclidean space. And if it locally resembles Euclidean space, then we can introduce coordinates resembling Cartesian coordinates on a small subset of it. Therefore, we'll say this: a curve is a geodesic (in an affine parameterization) if, for every point on this curve, when we introduce coordinates y^\alpha resembling Cartesian coordinates in the neighborhood of that point, the curve satisfies the equation \ddot{y}^\alpha = 0 in these coordinates.

So, in simpler terms: our curve is a geodesic if, when we take a point on it and look at a small neighborhood of this point - small enough for it to resemble Euclidean space - then in this neighborhood our curve resembles a straight line.

An illustration for the definition above: we introduce coordinates y resembling Cartesian coordinates in a neighborhood of the red point. In such coordinates, our line should satisfy the equation \ddot{y}^\alpha = 0.

As it turns out, this is enough to get the geodesic equation. We'll just need to specify some things in more detail, primarily: what does "coordinates resembling Cartesian coordinates" mean?

We'll define it using the metric. We'll say that coordinates y^\alpha locally resemble Cartesian coordinates, if:

  • the metric g expressed in these coordinates (let's denote it by h_{\alpha\beta}) is equal to the identity matrix at the point of interest (let's denote it by y_0)
  • the derivatives of the metric h at the point y_0 are 0.

Or, in equations:

 h_{\alpha\beta}(y_0) = \left( \begin{array}{cccc} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{array} \right)

 \frac{\partial}{\partial y^\gamma} h_{\alpha\beta} (y_0) = 0

As it turns out, it is always possible to choose coordinates such that these conditions are satisfied at a single point. If it's possible to choose ones such that the conditions are satisfied everywhere, then our manifold is a Euclidean space and our coordinates are Cartesian coordinates.

A digression: coordinate transformations

Since we will be operating on two coordinate systems in the later part of the article, it is worth reminding ourselves what transformations between coordinate systems look like, especially for points, vectors and the metric.

A coordinate transformation is given as a set of functions: every coordinate of one of the systems is expressed as a function of the coordinates of the other system:

 y^\alpha = y^\alpha(x^\mu)

 x^\mu = x^\mu(y^\alpha)

For example, we can have two coordinate systems on a plane: Cartesian (x,y) and polar (r, \vartheta). The transformations look like this, then:

 \left\{ \begin{array}{l} r = r(x,y) = \sqrt{x^2 + y^2} \\ \vartheta = \vartheta(x,y) = \arctan(\frac{y}{x}) \end{array} \right.

 \left\{ \begin{array}{l} x = x(r, \vartheta) = r \cos \vartheta \\ y = y(r, \vartheta) = r \sin \vartheta \end{array} \right.

Thus, if we know the coordinates of a point in one system, we just use these functions to transform them to the other coordinate system. (One caveat: the \arctan(\frac{y}{x}) formula only covers half of the plane; in practice, the two-argument arctangent is used, which handles all four quadrants.)
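A quick sanity check of these formulas in Python (a sketch with my own function names; the standard library's atan2 plays the role of \arctan(\frac{y}{x}) while correctly handling all quadrants):

```python
import math

def to_polar(x, y):
    # math.hypot computes sqrt(x^2 + y^2); atan2 handles all quadrants
    return math.hypot(x, y), math.atan2(y, x)

def to_cartesian(r, theta):
    return r * math.cos(theta), r * math.sin(theta)

# Transforming a point there and back gives the original coordinates:
x, y = 3.0, 4.0
r, theta = to_polar(x, y)
assert abs(r - 5.0) < 1e-12
x2, y2 = to_cartesian(r, theta)
assert abs(x2 - x) < 1e-12 and abs(y2 - y) < 1e-12
```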

How about vectors? Imagine that we have a vector v^\mu expressed in coordinates x and we want to express it as u^\alpha in coordinates y. Let's remember that a vector as a full mathematical object is actually a differential operator, in this case: v^\mu \frac{\partial}{\partial x^\mu} (see Part 2 - coordinates, vectors and the summation convention). When we express it in coordinates y as u^\alpha \frac{\partial}{\partial y^\alpha}, it is still the same vector. So:

 v^\mu \frac{\partial}{\partial x^\mu} = u^\alpha \frac{\partial}{\partial y^\alpha}

As the next step, we "move" \frac{\partial}{\partial y^\alpha} to the left side of the equation. Such an operation doesn't actually exist as a correct mathematical operation, but it's a useful mnemonic for remembering how to get a correct result - because the result below turns out to be correct:

 u^\alpha = v^\mu \frac{\partial y^\alpha}{\partial x^\mu}

Back to the example with a plane and polar and Cartesian coordinates: if v^x, v^y are the coordinates of a vector in Cartesian coordinates, and we want to calculate the polar coordinates (u^r, u^\vartheta), we can do it like this:

 u^r = v^x \frac{\partial r}{\partial x} + v^y \frac{\partial r}{\partial y}

 u^\vartheta = v^x \frac{\partial \vartheta}{\partial x} + v^y \frac{\partial \vartheta}{\partial y}

Let's note that the result will depend on which point we are performing the transformation at. The vector (1,1) in Cartesian coordinates will have different polar coordinates depending on which point it is bound to! Because of that, every transformation should be understood either as performed at a specific point, or as a function of coordinates (in one system or the other).
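This point dependence is easy to see numerically. Here is a Python sketch (the function name is mine) that applies the Jacobian of the Cartesian-to-polar transformation to the same vector (1,1) bound at two different points:

```python
import math

def vector_to_polar(vx, vy, x, y):
    """Transform vector components from Cartesian to polar at point (x, y)."""
    r2 = x*x + y*y
    r = math.sqrt(r2)
    # Partial derivatives of the transformation:
    #   dr/dx = x/r,        dr/dy = y/r,
    #   dtheta/dx = -y/r^2, dtheta/dy = x/r^2
    ur = vx * x / r + vy * y / r
    utheta = vx * (-y / r2) + vy * (x / r2)
    return ur, utheta

# The same Cartesian vector (1, 1) bound at two different points:
u1 = vector_to_polar(1.0, 1.0, 1.0, 0.0)   # at the point (1, 0)
u2 = vector_to_polar(1.0, 1.0, 0.0, 1.0)   # at the point (0, 1)
assert u1 != u2   # different polar components at different points
```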

Finally, the metric. We'll use a similar trick here as with the vectors, that is, we'll notice that the full geometrical object is actually g_{\mu\nu}dx^\mu dx^\nu, which has to be equal to h_{\alpha\beta}dy^\alpha dy^\beta. Hence:

 h_{\alpha\beta} = g_{\mu\nu} \frac{\partial x^\mu}{\partial y^\alpha} \frac{\partial x^\nu}{\partial y^\beta}

So, for example, if we have the metric in Cartesian coordinates and we want to transform it into polar coordinates, it will look like this:

 h_{rr} = g_{xx} \frac{\partial x}{\partial r} \frac{\partial x}{\partial r} + g_{xy} \frac{\partial x}{\partial r} \frac{\partial y}{\partial r} + g_{yx} \frac{\partial y}{\partial r} \frac{\partial x}{\partial r} + g_{yy} \frac{\partial y}{\partial r} \frac{\partial y}{\partial r}


I recommend completing this calculation, knowing that the metric in Cartesian coordinates is the identity matrix (that is, g_{xx} = g_{yy} = 1, g_{xy} = g_{yx} = 0), as an exercise for the Reader.
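If you'd like to check your answer to the exercise numerically (warning: the assertions below give away the result), here is a Python sketch applying the transformation formula h_{\alpha\beta} = g_{\mu\nu} \frac{\partial x^\mu}{\partial y^\alpha} \frac{\partial x^\nu}{\partial y^\beta} with the identity metric:

```python
import math

def polar_metric(r, theta):
    """Transform the identity (Cartesian) metric to polar coordinates
    using h_ab = g_mn * dx^m/dy^a * dx^n/dy^b with g = identity."""
    # Partial derivatives of (x, y) with respect to (r, theta):
    dx_dr, dy_dr = math.cos(theta), math.sin(theta)
    dx_dt, dy_dt = -r * math.sin(theta), r * math.cos(theta)
    h_rr = dx_dr**2 + dy_dr**2
    h_rt = dx_dr * dx_dt + dy_dr * dy_dt
    h_tt = dx_dt**2 + dy_dt**2
    return h_rr, h_rt, h_tt

h_rr, h_rt, h_tt = polar_metric(2.0, 0.7)
assert abs(h_rr - 1.0) < 1e-12        # h_rr = 1
assert abs(h_rt) < 1e-12              # h_{r theta} = 0
assert abs(h_tt - 4.0) < 1e-12        # h_{theta theta} = r^2
```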

As a final note: \frac{\partial x^\mu}{\partial y^\alpha} is a matrix (with values depending on the coordinates), called the Jacobian of the transformation. The derivatives of the transformation in the opposite direction - \frac{\partial y^\alpha}{\partial x^\mu} - constitute the inverse matrix. This means that:

 \frac{\partial x^\mu}{\partial y^\alpha} \frac{\partial y^\alpha}{\partial x^\nu} = \delta^\mu_\nu

where \delta^\mu_\nu is the so-called Kronecker delta - an identity matrix. (It is different from the metric in Cartesian coordinates in that it has one upper and one lower index, while the metric has two lower indices. This makes it an identity matrix in every coordinate system. The reason is beyond the scope of this article - what's important for us is that this is true.)

The Kronecker delta multiplied by some other value only "changes the index", that is, for example:

 g_{\mu\nu} \delta^\nu_\alpha = g_{\mu\alpha}
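The claim that the two Jacobians are mutually inverse matrices can also be verified numerically - a Python sketch for the 2-dimensional polar/Cartesian case (all names are mine):

```python
import math

def jac_cart_to_polar(x, y):
    # [[dr/dx, dr/dy], [dtheta/dx, dtheta/dy]]
    r2 = x*x + y*y
    r = math.sqrt(r2)
    return [[x / r, y / r], [-y / r2, x / r2]]

def jac_polar_to_cart(r, theta):
    # [[dx/dr, dx/dtheta], [dy/dr, dy/dtheta]]
    return [[math.cos(theta), -r * math.sin(theta)],
            [math.sin(theta),  r * math.cos(theta)]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# The product of the two Jacobians (at the same point!) is the identity:
x, y = 3.0, 4.0
r, theta = math.hypot(x, y), math.atan2(y, x)
product = matmul(jac_polar_to_cart(r, theta), jac_cart_to_polar(x, y))
for i in range(2):
    for j in range(2):
        assert abs(product[i][j] - (1.0 if i == j else 0.0)) < 1e-12
```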

Back to geodesics

Let's go back to our curve on the manifold. We have coordinates x^\mu and the metric expressed in these coordinates g_{\mu\nu}, and coordinates y^\alpha and the same metric expressed in these coordinates, h_{\alpha\beta}.

According to our definition, the geodesic equation in the coordinates y is:

 \frac{d^2}{dt^2} y^\alpha = 0

We shall now try to express it in coordinates x, which are our initial coordinates (reminder, y are just local coordinates, the ones resembling Cartesian ones).

The first derivatives of the coordinates of points on the curve: \frac{d y^\alpha}{dt} constitute the vector tangent to the curve. Since it's a vector, we know how to express it in coordinates x:

 \frac{d y^\alpha}{dt} = \frac{dx^\mu}{dt} \frac{\partial y^\alpha}{\partial x^\mu}

Let's calculate the derivatives of that with respect to t:

 \frac{d^2 y^\alpha}{dt^2} = \frac{d^2 x^\lambda}{dt^2} \frac{\partial y^\alpha}{\partial x^\lambda} + \frac{dx^\lambda}{dt} \frac{d}{dt} \frac{\partial y^\alpha}{\partial x^\lambda}

How do we calculate the derivative of the term \frac{\partial y^\alpha}{\partial x^\lambda}? Remember that this expression depends on the coordinates (be it x or y). The coordinates are, on the other hand, some functions of t along our curve. This means that we can use the chain rule and calculate derivatives with respect to coordinates, and multiply by derivatives of coordinates with respect to t:

 \frac{d}{dt} \frac{\partial y^\alpha}{\partial x^\lambda} = \frac{\partial^2 y^\alpha}{\partial x^\lambda \partial x^\nu} \frac{d x^\nu}{dt}


 \frac{d^2 y^\alpha}{dt^2} = \frac{d^2 x^\lambda}{dt^2} \frac{\partial y^\alpha}{\partial x^\lambda} + \frac{dx^\lambda}{dt} \frac{\partial^2 y^\alpha}{\partial x^\lambda \partial x^\nu} \frac{d x^\nu}{dt}

The second derivatives of y are 0 along a geodesic, so we get (switching the notation again to the "dotted" one):

 0 = \ddot{x}^\lambda \frac{\partial y^\alpha}{\partial x^\lambda} + \frac{\partial^2 y^\alpha}{\partial x^\lambda \partial x^\nu} \dot{x}^\lambda \dot{x}^\nu

Let's also multiply both sides by \frac{\partial x^\mu}{\partial y^\alpha}:

 0 = \ddot{x}^\lambda \frac{\partial y^\alpha}{\partial x^\lambda}\frac{\partial x^\mu}{\partial y^\alpha} + \frac{\partial^2 y^\alpha}{\partial x^\lambda \partial x^\nu} \dot{x}^\lambda \dot{x}^\nu \frac{\partial x^\mu}{\partial y^\alpha}

Now, remember that \frac{\partial y^\alpha}{\partial x^\lambda}\frac{\partial x^\mu}{\partial y^\alpha} = \delta^\mu_\lambda and that v^\lambda \delta^\mu_\lambda = v^\mu, which gives us:

 0 = \ddot{x}^\mu + \frac{\partial x^\mu}{\partial y^\alpha} \frac{\partial^2 y^\alpha}{\partial x^\lambda \partial x^\nu} \dot{x}^\lambda \dot{x}^\nu

Okay. We got some equation for the coordinates x, but it still contains derivatives of the transformations between x and y and vice versa. Can we get rid of y completely? It turns out that we can. We need to use the metric for that.

Let's remember that we still have the metric expressed in coordinates x as g_{\mu\nu} and in coordinates y as h_{\alpha\beta}. Because it's the same metric, just in different coordinates, we can write:

 g_{\mu\nu} = h_{\alpha\beta} \frac{\partial y^\alpha}{\partial x^\mu} \frac{\partial y^\beta}{\partial x^\nu}

Let's calculate the derivatives of the metric g with respect to coordinates x:

 \frac{\partial g_{\mu\nu}}{\partial x^\lambda} = \frac{\partial}{\partial x^\lambda} \left( h_{\alpha\beta} \frac{\partial y^\alpha}{\partial x^\mu} \frac{\partial y^\beta}{\partial x^\nu} \right)

We get:

 \frac{\partial g_{\mu\nu}}{\partial x^\lambda} = \frac{\partial h_{\alpha\beta}}{\partial x^\lambda} \frac{\partial y^\alpha}{\partial x^\mu} \frac{\partial y^\beta}{\partial x^\nu} + h_{\alpha\beta} \frac{\partial^2 y^\alpha}{\partial x^\mu \partial x^\lambda} \frac{\partial y^\beta}{\partial x^\nu} + h_{\alpha\beta}\frac{\partial y^\alpha}{\partial x^\mu} \frac{\partial^2 y^\beta}{\partial x^\nu \partial x^\lambda}

But wait - we assumed that the derivatives of the metric h are 0! (Well, we assumed that about the derivatives with respect to y, but if all the derivatives are 0 in one coordinate system, they are also 0 in all coordinate systems - I recommend checking that as an exercise.) This means that the whole first term vanishes and we get:

 \frac{\partial g_{\mu\nu}}{\partial x^\lambda} = h_{\alpha\beta} \frac{\partial^2 y^\alpha}{\partial x^\mu \partial x^\lambda} \frac{\partial y^\beta}{\partial x^\nu} + h_{\alpha\beta}\frac{\partial y^\alpha}{\partial x^\mu} \frac{\partial^2 y^\beta}{\partial x^\nu \partial x^\lambda}

(We will now denote \frac{\partial g_{\mu\nu}}{\partial x^\lambda} as g_{\mu\nu,\lambda} for the sake of brevity.)

Let's express h_{\alpha\beta} using g, now:

 h_{\alpha\beta} = g_{\rho\sigma} \frac{\partial x^\rho}{\partial y^\alpha} \frac{\partial x^\sigma}{\partial y^\beta}

After a substitution:

 g_{\mu\nu,\lambda} = g_{\rho\sigma} \frac{\partial x^\rho}{\partial y^\alpha} \frac{\partial x^\sigma}{\partial y^\beta} \left( \frac{\partial^2 y^\alpha}{\partial x^\mu \partial x^\lambda} \frac{\partial y^\beta}{\partial x^\nu} + \frac{\partial y^\alpha}{\partial x^\mu} \frac{\partial^2 y^\beta}{\partial x^\nu \partial x^\lambda}\right)

Let's remember again that the Jacobians of mutually inverse transformations give the Kronecker delta when multiplied:

 g_{\mu\nu,\lambda} = g_{\rho\sigma}  \left( \frac{\partial^2 y^\alpha}{\partial x^\mu \partial x^\lambda} \frac{\partial x^\rho}{\partial y^\alpha} \delta^\sigma_\nu + \frac{\partial^2 y^\beta}{\partial x^\nu \partial x^\lambda} \frac{\partial x^\sigma}{\partial y^\beta} \delta^\rho_\mu \right)


 g_{\mu\nu,\lambda} = g_{\rho\nu} \frac{\partial^2 y^\alpha}{\partial x^\mu \partial x^\lambda} \frac{\partial x^\rho}{\partial y^\alpha} + g_{\mu\sigma} \frac{\partial^2 y^\beta}{\partial x^\nu \partial x^\lambda} \frac{\partial x^\sigma}{\partial y^\beta}

Let's define \Gamma^\lambda_{\mu\nu} = \frac{\partial^2 y^\alpha}{\partial x^\mu \partial x^\nu} \frac{\partial x^\lambda}{\partial y^\alpha} and \Gamma_{\lambda\mu\nu} = g_{\lambda\sigma} \Gamma^\sigma_{\mu\nu}. Then:

 g_{\mu\nu,\lambda} = \Gamma_{\nu\mu\lambda} + \Gamma_{\mu\nu\lambda}

And the geodesic equation takes the form:

 0 = \ddot{x}^\mu + \Gamma^\mu_{\lambda\nu} \dot{x}^\lambda \dot{x}^\nu

Let us also note that \Gamma_{\lambda\mu\nu} = \Gamma_{\lambda\nu\mu}, that is, it is symmetric in the last two indices. That's because in the expression for this value, these indices only appear in the second derivatives of y, and derivatives are symmetric with respect to the order of differentiation, that is:

 \frac{\partial^2 y^\alpha}{\partial x^\mu \partial x^\nu} = \frac{\partial^2 y^\alpha}{\partial x^\nu \partial x^\mu}

We are now just a single step from the finish line. The only thing that's left is to write an expression for \Gamma, that doesn't contain y coordinates. We will get that thanks to the following equations:

 g_{\mu\nu,\lambda} = \Gamma_{\nu\mu\lambda} + \Gamma_{\mu\nu\lambda}

And with shuffled indices:

 g_{\mu\lambda,\nu} = \Gamma_{\lambda\mu\nu} + \Gamma_{\mu\lambda\nu} =  \Gamma_{\lambda\mu\nu} + \Gamma_{\mu\nu\lambda}

 g_{\lambda\nu,\mu} = \Gamma_{\nu\lambda\mu} + \Gamma_{\lambda\nu\mu} =  \Gamma_{\nu\mu\lambda} + \Gamma_{\lambda\mu\nu}

Adding the first two equations and subtracting the third one, we get:

 g_{\mu\nu,\lambda} + g_{\mu\lambda,\nu} - g_{\lambda\nu,\mu} = 2\Gamma_{\mu\nu\lambda}


 \Gamma_{\mu\lambda\nu} = \frac{1}{2} \left( g_{\mu\lambda,\nu} + g_{\mu\nu,\lambda} - g_{\lambda\nu,\mu} \right)

 \Gamma^\mu_{\lambda\nu} = g^{\mu\sigma} \Gamma_{\sigma\lambda\nu} = \frac{1}{2} g^{\mu\sigma} \left( g_{\sigma\lambda,\nu} + g_{\sigma\nu,\lambda} - g_{\lambda\nu,\sigma} \right)

g^{\mu\sigma} is called the inverse metric. It is the matrix that is inverse to the metric, that is, one such that g^{\mu\sigma}g_{\sigma\nu} = \delta^\mu_\nu.

As an aside, \Gamma^\mu_{\nu\lambda} are called Christoffel symbols - they are a certain measure of how curvilinear the coordinate system is, and by calculating their derivatives one can get the Riemann curvature tensor, which quantifies the curvature of the manifold. However, these are topics that are far beyond the scope of this article.
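The final formula for \Gamma is straightforward to turn into code. Below is a Python sketch - my own naive implementation, restricted to 2 dimensions, with the metric derivatives approximated by finite differences - that computes the Christoffel symbols from a metric and checks them against the flat plane in polar coordinates, where \Gamma^r_{\vartheta\vartheta} = -r and \Gamma^\vartheta_{r\vartheta} = \frac{1}{r}:

```python
def christoffel(metric, point, h=1e-6):
    """Gamma^mu_{lambda nu} = 1/2 g^{mu sigma} (g_{sigma lambda,nu}
    + g_{sigma nu,lambda} - g_{lambda nu,sigma}), with the metric
    derivatives taken by central finite differences.
    metric(point) returns g as a nested list; 2 dimensions only."""
    n = 2
    g = metric(point)
    # inverse of a 2x2 matrix
    det = g[0][0]*g[1][1] - g[0][1]*g[1][0]
    ginv = [[ g[1][1]/det, -g[0][1]/det],
            [-g[1][0]/det,  g[0][0]/det]]
    # dg[l][m][n] = d g_{mn} / d x^l  (central finite differences)
    dg = []
    for l in range(n):
        p_plus  = list(point); p_plus[l]  += h
        p_minus = list(point); p_minus[l] -= h
        gp, gm = metric(p_plus), metric(p_minus)
        dg.append([[(gp[a][b] - gm[a][b]) / (2*h) for b in range(n)]
                   for a in range(n)])
    # assemble Gamma^m_{lam nu} from the formula above
    return [[[0.5 * sum(ginv[m][s] * (dg[nu][s][lam] + dg[lam][s][nu]
                                      - dg[s][lam][nu])
                        for s in range(n))
              for nu in range(n)] for lam in range(n)] for m in range(n)]

# Check on the plane in polar coordinates (r, theta): g = diag(1, r^2)
def polar_metric(p):
    r, _ = p
    return [[1.0, 0.0], [0.0, r*r]]

G = christoffel(polar_metric, [2.0, 0.3])
assert abs(G[0][1][1] - (-2.0)) < 1e-5   # Gamma^r_{theta theta} = -r
assert abs(G[1][0][1] - 0.5)    < 1e-5   # Gamma^theta_{r theta} = 1/r
```

Nonzero Christoffel symbols in flat polar coordinates illustrate the point made above: they measure how curvilinear the coordinate system is, not (by themselves) whether the manifold is curved.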

The final equation

Thus, based on the intuition that a geodesic should be something analogous to a straight line on an arbitrary surface, we eventually got the equation:

 0 = \ddot{x}^\mu + \Gamma^\mu_{\lambda\nu} \dot{x}^\lambda \dot{x}^\nu


 \Gamma^\mu_{\lambda\nu} = \frac{1}{2} g^{\mu\sigma} \left( g_{\sigma\lambda,\nu} + g_{\sigma\nu,\lambda} - g_{\lambda\nu,\sigma} \right)

There are no y coordinates in these equations. This means that we can now find geodesics by knowing only the shape of the surface, given as a metric in an arbitrary coordinate system.

Example: geodesics on a sphere

As a final example, I will show how to use these equations to find the equations for geodesics on a sphere (called orthodromes). We will use the geographic coordinates, that is (\vartheta, \varphi) (\vartheta - latitude, from -\frac{\pi}{2} to \frac{\pi}{2}, \varphi - longitude, from -\pi to \pi), in which the metric of a sphere looks like the following:

 g_{\vartheta\vartheta} = R^2

 g_{\varphi\varphi} = R^2 \cos^2 \vartheta

 g_{\vartheta\varphi} = g_{\varphi\vartheta} = 0

R is the radius of the sphere here.

The inverse metric is:

 g^{\vartheta\vartheta} = \frac{1}{R^2}

 g^{\varphi\varphi} = \frac{1}{R^2 \cos^2 \vartheta}

 g^{\vartheta\varphi} = g^{\varphi\vartheta} = 0

The derivatives of the metric: only g_{\varphi\varphi} is non-constant, and it depends only on \vartheta, so the only nonzero derivative among the 8 possible ones is:

 g_{\varphi\varphi,\vartheta} = -2R^2 \sin\vartheta \cos\vartheta

The only nonzero Christoffel symbols with the lowered index are those where \varphi occurs twice and \vartheta once:

 \Gamma_{\varphi\varphi\vartheta} = \Gamma_{\varphi\vartheta\varphi} = \frac{1}{2}g_{\varphi\varphi,\vartheta} = -R^2 \sin\vartheta \cos\vartheta

 \Gamma_{\vartheta\varphi\varphi} = -\frac{1}{2}g_{\varphi\varphi,\vartheta} = R^2 \sin\vartheta \cos\vartheta

What's left is to raise the first index:

 \Gamma^\varphi_{\varphi\vartheta} = \Gamma^\varphi_{\vartheta\varphi} = g^{\varphi\varphi} \Gamma_{\varphi\varphi\vartheta} + g^{\varphi\vartheta} \Gamma_{\vartheta\varphi\vartheta} = -\tan \vartheta

 \Gamma^\vartheta_{\varphi\varphi} = g^{\vartheta\vartheta}\Gamma_{\vartheta\varphi\varphi} + g^{\vartheta\varphi}\Gamma_{\varphi\varphi\varphi} = \sin\vartheta \cos\vartheta

We can now write the geodesic equations:

 0 = \ddot{\vartheta} + \Gamma^\vartheta_{\vartheta\vartheta} \dot{\vartheta} \dot{\vartheta} + \Gamma^\vartheta_{\vartheta\varphi} \dot{\vartheta} \dot{\varphi} + \Gamma^\vartheta_{\varphi\vartheta} \dot{\varphi} \dot{\vartheta} + \Gamma^\vartheta_{\varphi\varphi} \dot{\varphi} \dot{\varphi} = \ddot{\vartheta} + \dot{\varphi}^2\sin\vartheta \cos\vartheta

 0 = \ddot{\varphi} + \Gamma^\varphi_{\vartheta\vartheta} \dot{\vartheta} \dot{\vartheta} + \Gamma^\varphi_{\vartheta\varphi} \dot{\vartheta} \dot{\varphi} + \Gamma^\varphi_{\varphi\vartheta} \dot{\varphi} \dot{\vartheta} + \Gamma^\varphi_{\varphi\varphi} \dot{\varphi} \dot{\varphi} = \ddot{\varphi} - 2\dot{\vartheta}\dot{\varphi}\tan \vartheta

Or, written differently:

 \ddot{\vartheta} = -\dot{\varphi}^2 \sin\vartheta\cos\vartheta

 \ddot{\varphi} = 2\dot{\vartheta} \dot{\varphi}\tan \vartheta

Using these equations, it's possible to plot an orthodrome, knowing the initial position and heading.
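As a sketch of how such a plot could be made - a naive Python implementation using a classical Runge-Kutta integrator, with all names my own - we can integrate the two equations above numerically. Starting on the equator with a due-east heading, the geodesic should trace out the equator itself, which is indeed a great circle:

```python
import math

def geodesic_rhs(state):
    """Right-hand side of the orthodrome equations on the sphere:
    theta'' = -phi'^2 sin(theta) cos(theta), phi'' = 2 theta' phi' tan(theta).
    (Note: the equations are singular at the poles, where the
    geographic coordinates themselves break down.)"""
    theta, phi, dtheta, dphi = state
    return [dtheta, dphi,
            -dphi * dphi * math.sin(theta) * math.cos(theta),
            2.0 * dtheta * dphi * math.tan(theta)]

def rk4_step(state, h):
    """One step of the classical 4th-order Runge-Kutta method."""
    k1 = geodesic_rhs(state)
    k2 = geodesic_rhs([s + h/2 * k for s, k in zip(state, k1)])
    k3 = geodesic_rhs([s + h/2 * k for s, k in zip(state, k2)])
    k4 = geodesic_rhs([s + h * k for s, k in zip(state, k3)])
    return [s + h/6 * (a + 2*b + 2*c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

def integrate(state, t_end, steps=1000):
    h = t_end / steps
    for _ in range(steps):
        state = rk4_step(state, h)
    return state

# Start on the equator (theta = 0) heading due east (theta' = 0, phi' = 1):
final = integrate([0.0, 0.0, 0.0, 1.0], math.pi)
assert abs(final[0]) < 1e-9            # latitude stays 0: still on the equator
assert abs(final[1] - math.pi) < 1e-9  # halfway around the sphere
```

To plot a route like Warsaw to San Francisco, one would instead start from Warsaw's coordinates with the appropriate initial heading and record the intermediate states.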


This article was intended to show that it's possible to derive the general geodesic equation using only the basics of differential geometry and the intuition about geodesics - that is, that they are lines that are "as straight as possible", which I tried to formalize a bit as satisfying the straight line equation in coordinates "locally resembling Cartesian coordinates". Whether I succeeded - it's not for me to judge, but I'll gladly learn from the comments :) I also encourage asking questions if something is unclear, and of course pointing out mistakes, which I probably didn't manage to avoid completely :)