[An Introduction to Exterior Calculus] 7. Powerful Calculations

By 苏剑林 | November 11, 2016

Here we will demonstrate how powerful the method from the previous section is for calculating the Riemann curvature tensor! We begin by listing all the formulas obtained so far. First, the conceptual ones:

\begin{aligned}&\omega^{\mu}=h_{\alpha}^{\mu}dx^{\alpha}\\ &d\boldsymbol{r}=\hat{\boldsymbol{e}}_{\mu} \omega^{\mu}\\ &ds^2 = \eta_{\mu\nu} \omega^{\mu}\omega^{\nu}\\ &\langle \hat{\boldsymbol{e}}_{\mu}, \hat{\boldsymbol{e}}_{\nu}\rangle = \eta_{\mu\nu}\end{aligned} \tag{65}

Then:

\begin{aligned}&d\eta_{\mu\nu}=\omega_{\nu\mu}+\omega_{\mu\nu}=\eta_{\nu\alpha}\omega_{\mu}^{\alpha}+\eta_{\mu \alpha}\omega_{\nu}^{\alpha}\\ &d\omega^{\mu}+\omega_{\nu}^{\mu}\land \omega^{\nu}=0\end{aligned} \tag{66}

These two help us determine $\omega_{\nu}^{\mu}$; next is:

\begin{equation} \mathscr{R}_{\nu}^{\mu} = d\omega_{\nu}^{\mu}+\omega_{\alpha}^{\mu} \land \omega_{\nu}^{\alpha} \tag{67} \end{equation}

Finally, if you want the components $\hat{R}^{\mu}_{\nu\beta\gamma}$ in the orthogonal frame, you write:

\begin{equation} \mathscr{R}_{\nu}^{\mu}=\sum_{\beta < \gamma} \hat{R}^{\mu}_{\nu\beta\gamma}\omega^{\beta}\land \omega^{\gamma} \tag{68} \end{equation}

If you want the $R^{\mu}_{\nu\beta\gamma}$ in the original frame, you write:

\begin{equation} (h^{-1})_{\mu'}^{\mu}\mathscr{R}^{\mu'}_{\nu'}h_{\nu}^{\nu'} = \sum_{\beta < \gamma} R^{\mu}_{\nu\beta\gamma}dx^{\beta}\land dx^{\gamma} \tag{69} \end{equation}

Then you simply read off $R^{\mu}_{\nu\beta\gamma}$ sequentially, like filling out a table.

Two-dimensional example: Sphere

Let's warm up with a two-dimensional example. We will calculate the Riemann curvature tensor for a sphere with the metric $ds^2 = d\theta^2 + \sin^2 \theta d\phi^2$.

We choose:

\begin{equation} \omega^1 = d\theta, \quad \omega^2 = \sin\theta d\phi \tag{70} \end{equation}

which means:

\begin{equation} \boldsymbol{h}=\begin{pmatrix}1&0\\0&\sin\theta\end{pmatrix},\quad \boldsymbol{\eta}=\begin{pmatrix}1&0\\0&1\end{pmatrix} \tag{71} \end{equation}

Since $\boldsymbol{\eta}$ is the identity matrix, $d\eta_{\mu\nu}=\eta_{\alpha \nu}\omega_{\mu}^{\alpha}+\eta_{\mu \alpha}\omega_{\nu}^{\alpha}$ tells us that $\omega_{\nu}^{\mu}$ is an antisymmetric matrix. Writing $d\omega^{\mu}+\omega_{\nu}^{\mu}\land \omega^{\nu}=0$ in matrix form:

\begin{equation} \begin{pmatrix} 0 & \omega_2^1 \\ -\omega_2^1 & 0 \end{pmatrix}\land \begin{pmatrix} d\theta \\ \sin\theta d\phi \end{pmatrix}=-d\begin{pmatrix} d\theta \\ \sin\theta d\phi \end{pmatrix}=-\begin{pmatrix} 0 \\ \cos\theta d\theta\land d\phi \end{pmatrix} \tag{72} \end{equation}

Due to antisymmetry, $\omega_{\nu}^{\mu}$ has only one independent component. It is not difficult to find $\omega_2^1=-\cos\theta d\phi$. This solution process can be done using a bit of "guess and check." Next, find $\mathscr{R}_{\nu}^{\mu} = d\omega_{\nu}^{\mu}+\omega_{\alpha}^{\mu} \land \omega_{\nu}^{\alpha}$, which is:
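As a quick cross-check (not part of the original derivation), this "guess and check" can be verified symbolically. The sympy sketch below represents a 1-form on the sphere by its coefficient vector in the basis $(d\theta, d\phi)$ and confirms that $\omega_2^1=-\cos\theta d\phi$ satisfies $d\omega^{\mu}+\omega_{\nu}^{\mu}\land \omega^{\nu}=0$:

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')

def wedge(a, b):
    """Coefficient of dtheta ^ dphi in the wedge of two 1-forms,
    each given as a coefficient vector [c_theta, c_phi]."""
    return a[0]*b[1] - a[1]*b[0]

def ext_d(a):
    """Coefficient of dtheta ^ dphi in the exterior derivative of a 1-form."""
    return sp.diff(a[1], theta) - sp.diff(a[0], phi)

# orthonormal coframe of Eq. (70): omega^1 = dtheta, omega^2 = sin(theta) dphi
w = [[1, 0], [0, sp.sin(theta)]]

# candidate connection form omega^1_2 = -cos(theta) dphi, with omega^2_1 = -omega^1_2
w12 = [0, -sp.cos(theta)]
conn = [[[0, 0], w12], [[-c for c in w12], [0, 0]]]

# verify d omega^mu + omega^mu_nu ^ omega^nu = 0 for mu = 1, 2
for mu in range(2):
    residual = ext_d(w[mu]) + sum(wedge(conn[mu][nu], w[nu]) for nu in range(2))
    print(mu + 1, sp.simplify(residual))   # prints "1 0" and "2 0"
```

Representing forms by coefficient vectors is enough here because in two dimensions every 2-form is a multiple of $d\theta\land d\phi$.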

\begin{aligned}\mathscr{R}_{\nu}^{\mu} = &d\begin{pmatrix} 0 & -\cos\theta d\phi \\ \cos\theta d\phi & 0 \end{pmatrix}\\ &+\begin{pmatrix} 0 & -\cos\theta d\phi \\ \cos\theta d\phi & 0 \end{pmatrix}\land \begin{pmatrix} 0 & -\cos\theta d\phi \\ \cos\theta d\phi & 0 \end{pmatrix} \end{aligned} \tag{73}

The matrix multiplication term is clearly zero. In fact, it can be proven that in any 2D space, $\omega_{\alpha}^{\mu} \land \omega_{\nu}^{\alpha}$ is identically zero. Therefore:

\begin{equation} \mathscr{R}_{\nu}^{\mu} = d\begin{pmatrix} 0 & -\cos\theta d\phi \\ \cos\theta d\phi & 0 \end{pmatrix}=\begin{pmatrix} 0 & \sin\theta d\theta\land d\phi \\ -\sin\theta d\theta\land d\phi & 0 \end{pmatrix} \tag{74} \end{equation}

And since:

\begin{equation} \mathscr{R}_{\nu}^{\mu}=\sum_{\beta < \gamma} \hat{R}^{\mu}_{\nu\beta\gamma}\omega^{\beta}\land \omega^{\gamma}=\hat{R}^{\mu}_{\nu 1 2 }\sin\theta d\theta \land d\phi \tag{75} \end{equation}

By comparison, we see:

\begin{equation} \hat{R}^{\mu}_{\nu 1 2 } = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \tag{76} \end{equation}

That is to say, in the orthogonal frame, we have $\hat{R}^{1}_{2 1 2 } = 1, \hat{R}^{2}_{1 1 2 } = -1$. Then, calculate $R^{\mu}_{\nu\beta\gamma}$ according to $(h^{-1})_{\mu'}^{\mu}\mathscr{R}^{\mu'}_{\nu'}h_{\nu}^{\nu'} = \sum_{\beta < \gamma} R^{\mu}_{\nu\beta\gamma}dx^{\beta}\land dx^{\gamma}$:

\begin{aligned}&\begin{pmatrix}1&0\\0&\sin\theta\end{pmatrix}^{-1}\begin{pmatrix} 0 & \sin\theta d\theta\land d\phi \\ -\sin\theta d\theta\land d\phi & 0 \end{pmatrix}\begin{pmatrix}1&0\\0&\sin\theta\end{pmatrix}\\ =&R^{\mu}_{\nu 12}d\theta\land d\phi\end{aligned} \tag{77}

Which yields:

\begin{equation} R^{\mu}_{\nu 1 2 } = \begin{pmatrix} 0 & \sin^2\theta \\ -1 & 0 \end{pmatrix} \tag{78} \end{equation}

This means $R^{1}_{2 1 2 } = \sin^2\theta, R^{2}_{1 1 2 } = -1$. The entire process involves only matrix multiplication, which is familiar to us and saves a lot of trouble compared to the tedious summation over multiple indices. Comparing the forms of $\hat{R}^{\mu}_{\nu\beta\gamma}$ and $R^{\mu}_{\nu\beta\gamma}$, one can also find that the orthogonal frame indeed provides significant simplification.
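The result can also be cross-checked against the classical Christoffel-symbol route. The sympy sketch below (my addition, not part of the original text) computes $R^{\mu}_{\nu\beta\gamma}$ directly from the coordinate formula quoted at the end of this post and recovers $R^{1}_{212}=\sin^2\theta$ and $R^{2}_{112}=-1$:

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')
x = [theta, phi]
n = 2

# metric of the sphere: ds^2 = dtheta^2 + sin^2(theta) dphi^2
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])
ginv = g.inv()

# Christoffel symbols Gamma^m_{i j} = (1/2) g^{mk} (d_j g_{ki} + d_i g_{kj} - d_k g_{ij})
Gamma = [[[sum(ginv[m, k]*(sp.diff(g[k, i], x[j]) + sp.diff(g[k, j], x[i])
                           - sp.diff(g[i, j], x[k]))/2 for k in range(n))
           for j in range(n)] for i in range(n)] for m in range(n)]

def riemann(m, nu, b, c):
    """R^m_{nu b c} from the classical coordinate formula."""
    res = sp.diff(Gamma[m][nu][c], x[b]) - sp.diff(Gamma[m][nu][b], x[c])
    res += sum(Gamma[m][a][b]*Gamma[a][nu][c] - Gamma[m][a][c]*Gamma[a][nu][b]
               for a in range(n))
    return sp.simplify(res)

print(riemann(0, 1, 0, 1))   # R^1_{212} = sin(theta)**2
print(riemann(1, 0, 0, 1))   # R^2_{112} = -1
```

Note how much index bookkeeping even this tiny 2D case requires; in the exterior-calculus route, all of it collapses into a couple of matrix operations.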

Four-dimensional example: Schwarzschild metric

The first exact solution to Einstein's field equations is the Schwarzschild metric, which is obtained by solving for a metric of the following form:

\begin{equation} ds^2= -e^{2\Phi}dt^2 + e^{2\Lambda} dr^2 + r^2 d\theta^2 + r^2 \sin^2\theta d\phi^2 \tag{79} \end{equation}

The starting point is to consider an isotropic metric, so $\Phi, \Lambda$ are assumed to be functions only of $r$. Let's calculate the Riemann curvature tensor for this case (intermediate-high difficulty).

Naturally, we choose:

\begin{equation} \omega^1 = e^{\Phi}dt, \quad \omega^2 = e^{\Lambda }dr, \quad \omega^3 = rd\theta, \quad \omega^4 = r\sin\theta d\phi \tag{80} \end{equation}

In this case:

\begin{equation} \boldsymbol{h}=\begin{pmatrix}e^{\Phi}&0&0&0\\0&e^{\Lambda }&0&0\\0&0&r&0\\0&0&0&r\sin\theta\end{pmatrix},\quad \boldsymbol{\eta}=\begin{pmatrix}-1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix} \tag{81} \end{equation}

From $d\eta_{\mu\nu}=\omega_{\nu\mu}+\omega_{\mu\nu}$ and combining it with the form of $\boldsymbol{\eta}$, we can conclude that $\omega_{\nu}^{\mu}$ has the following form:

\begin{equation} \begin{pmatrix}0&\omega_2^1&\omega_3^1&\omega_4^1\\ \omega_2^1&0&\omega_3^2&\omega_4^2\\ \omega_3^1&-\omega_3^2&0&\omega_4^3\\ \omega_4^1&-\omega_4^2&-\omega_4^3&0\end{pmatrix} \tag{82} \end{equation}

Its characteristic is that, viewed as a block matrix:

\begin{equation} \left(\begin{array}{c:ccc}0&\omega_2^1&\omega_3^1&\omega_4^1\\ \hdashline \omega_2^1&0&\omega_3^2&\omega_4^2\\ \omega_3^1&-\omega_3^2&0&\omega_4^3\\ \omega_4^1&-\omega_4^2&-\omega_4^3&0\end{array}\right) =\left(\begin{array}{c:c}E & F\\ \hdashline G&H\end{array}\right) \tag{83} \end{equation}

it is symmetric across the partition, i.e. $G = F^{\mathrm{T}}$, while the diagonal blocks $E, H$ are antisymmetric. The partition mirrors that of $\boldsymbol{\eta}$ into $\left(\begin{array}{c:c}-I & 0\\ \hdashline 0&I\end{array}\right)$, where $I$ denotes an identity matrix.
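To spell this step out (a small elaboration of the argument): since $\boldsymbol{\eta}$ is constant and diagonal, $d\eta_{\mu\nu}=0$ gives $0=\eta_{\nu\alpha}\omega_{\mu}^{\alpha}+\eta_{\mu \alpha}\omega_{\nu}^{\alpha}=\eta_{\nu\nu}\omega_{\mu}^{\nu}+\eta_{\mu\mu}\omega_{\nu}^{\mu}$ (no summation), hence

\begin{aligned}&\omega_{\nu}^{\mu}=-\omega_{\mu}^{\nu} \quad (\mu,\nu\ge 2,\ \eta_{\mu\mu}=\eta_{\nu\nu}=1)\\ &\omega_{\nu}^{1}=\omega_{1}^{\nu} \quad (\nu\ge 2,\ \eta_{11}=-1)\end{aligned}

which is exactly the pattern displayed above: antisymmetry among the spatial indices, symmetry between the time index and the spatial ones.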

With the specific form of $\omega_{\nu}^{\mu}$, we can write $d\omega^{\mu}+\omega_{\nu}^{\mu}\land \omega^{\nu}=0$:

\begin{aligned}&\begin{pmatrix}0&\omega_2^1&\omega_3^1&\omega_4^1\\ \omega_2^1&0&\omega_3^2&\omega_4^2\\ \omega_3^1&-\omega_3^2&0&\omega_4^3\\ \omega_4^1&-\omega_4^2&-\omega_4^3&0\end{pmatrix}\land \begin{pmatrix}e^{\Phi}dt\\ e^{\Lambda }dr\\rd\theta\\r\sin\theta d\phi\end{pmatrix}\\ =&-d\begin{pmatrix}e^{\Phi}dt\\ e^{\Lambda }dr\\rd\theta\\r\sin\theta d\phi\end{pmatrix}=-\begin{pmatrix}e^{\Phi} \dot{\Phi} dr\land dt\\ 0 \\dr\land d\theta\\ \sin\theta dr\land d\phi+ r\cos\theta d\theta\land d\phi \end{pmatrix}\end{aligned} \tag{84}

where the dot $\dot{}$ denotes a derivative with respect to $r$. With a bit of thought, the answers can be pinpointed quickly. For example, since the second row is identically 0, the forms $\omega_2^1, \omega_3^2, \omega_4^2$ can only involve $dt, d\theta, d\phi$ respectively. Combining this with the first row determines $\omega_2^1 = e^{\Phi-\Lambda}\dot{\Phi} dt$ and shows that $\omega_3^1, \omega_4^1$ involve only $d\theta, d\phi$. The third row then gives $\omega_3^1=0$ and $\omega_3^2=-e^{-\Lambda}d\theta$, and shows that $\omega_4^3$ involves only $d\phi$. Finally, the fourth row quickly yields $\omega_4^1=0, \omega_4^2 = -\sin\theta e^{-\Lambda} d\phi, \omega_4^3 = -\cos\theta d\phi$. Altogether:

\begin{equation} \omega_{\nu}^{\mu}=\begin{pmatrix}0& e^{\Phi-\Lambda}\dot{\Phi} dt & 0 & 0 \\ e^{\Phi-\Lambda}\dot{\Phi} dt &0&-e^{-\Lambda}d\theta&-\sin\theta e^{-\Lambda} d\phi\\ 0 &e^{-\Lambda}d\theta&0&-\cos\theta d\phi\\ 0&\sin\theta e^{-\Lambda} d\phi&\cos\theta d\phi&0\end{pmatrix} \tag{85} \end{equation}
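If desired, this matrix of connection forms can be verified mechanically. The sympy sketch below (my addition) encodes each 1-form as its coefficient vector in the basis $(dt, dr, d\theta, d\phi)$ and checks $d\omega^{\mu}+\omega_{\nu}^{\mu}\land \omega^{\nu}=0$ for all four rows:

```python
import sympy as sp

t, r, theta, phi = sp.symbols('t r theta phi')
x = [t, r, theta, phi]
Phi = sp.Function('Phi')(r)
Lam = sp.Function('Lambda')(r)

def wedge(a, b):
    """2-form coefficients {(i, j): c_ij} of the wedge of two 1-forms,
    each a coefficient vector in the basis (dt, dr, dtheta, dphi)."""
    return {(i, j): a[i]*b[j] - a[j]*b[i] for i in range(4) for j in range(i + 1, 4)}

def ext_d(a):
    """2-form coefficients of the exterior derivative of a 1-form."""
    return {(i, j): sp.diff(a[j], x[i]) - sp.diff(a[i], x[j])
            for i in range(4) for j in range(i + 1, 4)}

# coframe of Eq. (80)
w = [[sp.exp(Phi), 0, 0, 0],
     [0, sp.exp(Lam), 0, 0],
     [0, 0, r, 0],
     [0, 0, 0, r*sp.sin(theta)]]

# connection forms of Eq. (85); A is the dt-coefficient of omega^1_2
A = sp.exp(Phi - Lam)*sp.diff(Phi, r)
conn = [[[0]*4, [A, 0, 0, 0], [0]*4, [0]*4],
        [[A, 0, 0, 0], [0]*4, [0, 0, -sp.exp(-Lam), 0],
         [0, 0, 0, -sp.sin(theta)*sp.exp(-Lam)]],
        [[0]*4, [0, 0, sp.exp(-Lam), 0], [0]*4, [0, 0, 0, -sp.cos(theta)]],
        [[0]*4, [0, 0, 0, sp.sin(theta)*sp.exp(-Lam)],
         [0, 0, 0, sp.cos(theta)], [0]*4]]

# check d omega^mu + omega^mu_nu ^ omega^nu = 0 for every mu
for mu in range(4):
    res = ext_d(w[mu])
    for key, val in ((k, v) for nu in range(4)
                     for k, v in wedge(conn[mu][nu], w[nu]).items()):
        res[key] += val
    assert all(sp.simplify(v) == 0 for v in res.values())
print("first structure equation verified")
```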

Now we can calculate $\mathscr{R}_{\nu}^{\mu} = d\omega_{\nu}^{\mu}+\omega_{\alpha}^{\mu} \land \omega_{\nu}^{\alpha}$:

\begin{aligned}\mathscr{R}_{\nu}^{\mu} =&d\begin{pmatrix}0& e^{\Phi-\Lambda}\dot{\Phi} dt & 0 & 0 \\ e^{\Phi-\Lambda}\dot{\Phi} dt &0&-e^{-\Lambda}d\theta&-\sin\theta e^{-\Lambda} d\phi\\ 0 &e^{-\Lambda}d\theta&0&-\cos\theta d\phi\\ 0&\sin\theta e^{-\Lambda} d\phi&\cos\theta d\phi&0\end{pmatrix}\\ &+\begin{pmatrix}0& e^{\Phi-\Lambda}\dot{\Phi} dt & 0 & 0 \\ e^{\Phi-\Lambda}\dot{\Phi} dt &0&-e^{-\Lambda}d\theta&-\sin\theta e^{-\Lambda} d\phi\\ 0 &e^{-\Lambda}d\theta&0&-\cos\theta d\phi\\ 0&\sin\theta e^{-\Lambda} d\phi&\cos\theta d\phi&0\end{pmatrix}\\ &\land \begin{pmatrix}0& e^{\Phi-\Lambda}\dot{\Phi} dt & 0 & 0 \\ e^{\Phi-\Lambda}\dot{\Phi} dt &0&-e^{-\Lambda}d\theta&-\sin\theta e^{-\Lambda} d\phi\\ 0 &e^{-\Lambda}d\theta&0&-\cos\theta d\phi\\ 0&\sin\theta e^{-\Lambda} d\phi&\cos\theta d\phi&0\end{pmatrix} \end{aligned} \tag{86}

Which results in:

\begin{aligned}&\mathscr{R}_1^1=\mathscr{R}_2^2=\mathscr{R}_3^3=\mathscr{R}_4^4=0\\ &\mathscr{R}^1_2=\mathscr{R}^2_1=-e^{\Phi-\Lambda}(\ddot{\Phi}+\dot{\Phi}^2-\dot{\Phi}\dot{\Lambda}) dt\land dr\\ &\mathscr{R}^1_3=\mathscr{R}^3_1=-e^{\Phi-2\Lambda} \dot{\Phi} dt\land d\theta\\ &\mathscr{R}^1_4=\mathscr{R}^4_1=-e^{\Phi-2\Lambda} \dot{\Phi} \sin\theta dt\land d\phi\\ &\mathscr{R}^2_3=-\mathscr{R}^3_2=e^{-\Lambda}\dot{\Lambda} dr\land d\theta\\ &\mathscr{R}^2_4=-\mathscr{R}^4_2=e^{-\Lambda}\dot{\Lambda}\sin\theta dr\land d\phi\\ &\mathscr{R}^3_4=-\mathscr{R}^4_3=(1-e^{-2\Lambda})\sin\theta d\theta\land d\phi\end{aligned} \tag{87}
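As a spot check on one of these entries (this verification is my addition), the sympy sketch below recomputes $\mathscr{R}^3_4 = d\omega^3_4 + \omega^3_2\land \omega^2_4$; the remaining quadratic terms vanish because $\omega^3_1 = \omega^1_4 = 0$:

```python
import sympy as sp

r, theta, phi = sp.symbols('r theta phi')
x = [r, theta, phi]            # t plays no role in this entry
Lam = sp.Function('Lambda')(r)

def wedge(a, b):
    """2-form coefficients of the wedge of two 1-forms in (dr, dtheta, dphi)."""
    return {(i, j): a[i]*b[j] - a[j]*b[i] for i in range(3) for j in range(i + 1, 3)}

def ext_d(a):
    """2-form coefficients of the exterior derivative of a 1-form."""
    return {(i, j): sp.diff(a[j], x[i]) - sp.diff(a[i], x[j])
            for i in range(3) for j in range(i + 1, 3)}

w34 = [0, 0, -sp.cos(theta)]                   # omega^3_4 from Eq. (85)
w32 = [0, sp.exp(-Lam), 0]                     # omega^3_2
w24 = [0, 0, -sp.sin(theta)*sp.exp(-Lam)]      # omega^2_4

# R^3_4 = d omega^3_4 + omega^3_2 ^ omega^2_4
R34 = ext_d(w34)
for key, val in wedge(w32, w24).items():
    R34[key] += val

# coefficient of dtheta ^ dphi; equals (1 - exp(-2*Lambda)) * sin(theta)
print(sp.simplify(R34[(1, 2)]))
```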

From this, $\hat{R}^{\mu}_{\nu\beta\gamma}$ can be read off sequentially, for instance:

\begin{equation} \hat{R}^{1}_{212}=-e^{-2\Lambda}(\ddot{\Phi}+\dot{\Phi}^2-\dot{\Phi}\dot{\Lambda}), \quad\hat{R}^{1}_{313}=-\frac{1}{r}e^{-2\Lambda}\dot{\Phi} \tag{88} \end{equation}

and so on. If you wish, you can continue to find $R^{\mu}_{\nu\beta\gamma}$ because $\boldsymbol{h}$ is a diagonal matrix, which won't increase the labor much.
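To spell out how the first of these components is read off (a step left implicit above), compare coefficients of $dt\land dr$: since $\omega^{1}\land \omega^{2} = e^{\Phi+\Lambda}dt\land dr$,

\begin{equation} \mathscr{R}^1_2=-e^{\Phi-\Lambda}(\ddot{\Phi}+\dot{\Phi}^2-\dot{\Phi}\dot{\Lambda})dt\land dr = \hat{R}^{1}_{212}\,\omega^{1}\land \omega^{2} \quad\Rightarrow\quad \hat{R}^{1}_{212}=-e^{-2\Lambda}(\ddot{\Phi}+\dot{\Phi}^2-\dot{\Phi}\dot{\Lambda}) \end{equation}

The other components follow in exactly the same way, dividing each curvature 2-form by the corresponding $\omega^{\beta}\land \omega^{\gamma}$.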

If readers attempt these calculations themselves, they may still complain about the time required and feel that the simplification is modest. Nevertheless, the steps above are feasible even by hand; indeed, the whole computation shown here was done by the author by hand, without software such as Mathematica. I suspect no one has manually computed a Riemann curvature tensor in more than three dimensions from the original expression $R^{\mu}_{\nu\beta\gamma}=\frac{\partial \Gamma^{\mu}_{\nu\gamma}}{\partial x^{\beta}}-\frac{\partial \Gamma^{\mu}_{\nu\beta}}{\partial x^{\gamma}}+\Gamma^{\mu}_{\alpha\beta}\Gamma^{\alpha}_{\nu\gamma}-\Gamma^{\mu}_{\alpha\gamma}\Gamma^{\alpha}_{\nu\beta}$; setting the calculation itself aside, merely keeping track of the summation indices is hard. By comparison, the exterior-derivative technique is far more efficient. Of course, whichever method is used, it first takes some practice to become fluent, and even then the computation requires time and thought. It cannot be seen at a glance, unless you are a computer.