# Differential geometry

The subject of differential geometry[1][2] is the application of calculus to the problem of describing curves, surfaces, and volumes in two and three dimensions, as well as analogous structures in higher dimensions. The most immediate application of differential geometry in geophysics is the representation of curves and surfaces in geologic models. Seismic ray tracing is another application, as are other geometric aspects of solutions to partial differential equations, such as field lines and flow lines in problems from potential theory and from fluid dynamics.

"A curve ${\displaystyle {\boldsymbol {x}}(\sigma )}$, where ${\displaystyle \sigma }$ is a running parameter along the curve and ${\displaystyle ({\hat {{\boldsymbol {e}}_{1}}},{\hat {{\boldsymbol {e}}_{2}}},{\hat {{\boldsymbol {e}}_{3}}})}$ are the unit basis vectors in the respective ${\displaystyle (x_{1},x_{2},x_{3})}$ directions."

# The Theory of Curves

This exposition begins in 3 dimensions, but all of the results presented here generalize immediately to arbitrary dimensions.

## Definition of a Curve

In ${\displaystyle \mathbb {R} ^{3}}$ (three-dimensional Euclidean space) we define a curve as a vector-valued function ${\displaystyle {\boldsymbol {x}}}$, where

${\displaystyle {\boldsymbol {x}}(\sigma )\equiv \left(x_{1}(\sigma ),x_{2}(\sigma ),x_{3}(\sigma )\right)}$.

Here ${\displaystyle \sigma }$ is a running variable along the curve. You can think of this as being like the marks on a tape measure. We assume for now that ${\displaystyle {\boldsymbol {x}}\in C^{3}}$, meaning that the components of ${\displaystyle {\boldsymbol {x}}}$ are continuous and at least 3-times differentiable with respect to ${\displaystyle \sigma }$.

"In the limit as ${\displaystyle h\rightarrow 0}$, the vector line element defined by the difference ${\displaystyle {\boldsymbol {x}}(\sigma +h)-{\boldsymbol {x}}}$ tends to the tangent vector at the point ${\displaystyle {\boldsymbol {x}}(\sigma )}$. "

## The tangent vector to a curve

We can define the tangent to each point ${\displaystyle {\boldsymbol {x}}(\sigma )}$ as the first derivative of ${\displaystyle {\boldsymbol {x}}(\sigma )}$ with respect to ${\displaystyle \sigma }$

${\displaystyle {\boldsymbol {t}}\equiv {\boldsymbol {x}}^{\prime }(\sigma )\equiv \lim _{h\rightarrow 0}{\frac {{\boldsymbol {x}}(\sigma +h)-{\boldsymbol {x}}(\sigma )}{h}}=\left({\frac {dx_{1}}{d\sigma }},{\frac {dx_{2}}{d\sigma }},{\frac {dx_{3}}{d\sigma }}\right)\equiv {\frac {d{\boldsymbol {x}}}{d\sigma }}\equiv {\frac {dx_{i}}{d\sigma }}}$ where ${\displaystyle i=1,2,3}$ in ${\displaystyle \mathbb {R} ^{3}}$.

The last form with subscript ${\displaystyle i}$ is index notation.

The subtraction of the vector positions ${\displaystyle {\boldsymbol {x}}(\sigma +h)-{\boldsymbol {x}}(\sigma )}$ defines a "directed line segment" pointing from the point with the lesser value of ${\displaystyle \sigma }$ toward the point with the greater value of ${\displaystyle \sigma }$. As ${\displaystyle h\rightarrow 0}$, the vector

${\displaystyle {\frac {{\boldsymbol {x}}(\sigma +h)-{\boldsymbol {x}}(\sigma )}{h}}}$

tends to point in the direction tangent to the curve at the point ${\displaystyle {\boldsymbol {x}}(\sigma )}$.
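As a concrete check of this limiting process, the difference quotient can be evaluated for a small ${\displaystyle h}$ and compared with the analytic derivative. The example curve (a unit circle), step size, and evaluation point below are illustrative assumptions; this is a minimal sketch using NumPy.

```python
# Minimal sketch (assumed example): approximate the tangent vector
# t = dx/dsigma of the curve x(sigma) = (cos sigma, sin sigma, 0)
# by the difference quotient (x(sigma + h) - x(sigma)) / h.
import numpy as np

def x(sigma):
    # position vector of an example curve: a unit circle in the x1-x2 plane
    return np.array([np.cos(sigma), np.sin(sigma), 0.0])

def tangent_fd(sigma, h=1e-6):
    # forward-difference approximation of x'(sigma)
    return (x(sigma + h) - x(sigma)) / h

sigma0 = 0.3
t_numeric = tangent_fd(sigma0)
t_exact = np.array([-np.sin(sigma0), np.cos(sigma0), 0.0])  # analytic x'(sigma0)
print(np.allclose(t_numeric, t_exact, atol=1e-5))
```

As ${\displaystyle h}$ shrinks, the difference quotient converges to the analytic tangent, exactly as the limit definition states.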

## Arclength, the natural parameter of Curves

Suppose there is a coordinate ${\displaystyle s}$ such that the tangent vector is a unit vector

${\displaystyle {\dot {\boldsymbol {x}}}\equiv {\frac {d{\boldsymbol {x}}}{ds}}={\hat {t}}.}$ Here we use the "." to indicate differentiation with respect to this special coordinate ${\displaystyle s}$.

Alternatively, we may consider the running parameter ${\displaystyle \sigma }$ to be a function of this new parameter ${\displaystyle s}$ such that ${\displaystyle \sigma \equiv \sigma (s)}$. This means that we can write the unit tangent vector as

${\displaystyle {\hat {\boldsymbol {t}}}={\frac {d{\boldsymbol {x}}(\sigma (s))}{ds}}={\frac {d{\boldsymbol {x}}}{d\sigma }}{\frac {d\sigma }{ds}}\equiv {\boldsymbol {x}}^{\prime }{\dot {\sigma }}}$.

From our knowledge of vectors, we may also represent the unit tangent vector as the ratio of the vector to its magnitude

${\displaystyle {\hat {t}}\equiv {\frac {\boldsymbol {t}}{|{\boldsymbol {t}}|}}}$, implying that ${\displaystyle {\dot {\sigma }}\equiv {\frac {d\sigma }{ds}}={\frac {1}{|{\boldsymbol {x}}^{\prime }|}}}$, which further implies that ${\displaystyle \sigma (s)=\int _{s_{0}}^{s}{\frac {dl}{|{\boldsymbol {x}}^{\prime }(\sigma (l))|}}}$, where ${\displaystyle l}$ is a dummy variable of integration.

### But what is ${\displaystyle s}$?

We note that

${\displaystyle {\frac {d{\boldsymbol {x}}}{ds}}={\hat {\boldsymbol {t}}}}$ which implies that ${\displaystyle d{\boldsymbol {x}}={\hat {\boldsymbol {t}}}ds}$.

We can write formally

${\displaystyle d{\boldsymbol {x}}\cdot d{\boldsymbol {x}}={\hat {\boldsymbol {t}}}ds\cdot {\hat {\boldsymbol {t}}}ds={\hat {\boldsymbol {t}}}\cdot {\hat {\boldsymbol {t}}}(ds)^{2}=(ds)^{2}}$

Formally, we may also write

${\displaystyle d{\boldsymbol {x}}\cdot d{\boldsymbol {x}}=(dx_{1})^{2}+(dx_{2})^{2}+(dx_{3})^{2}=(ds)^{2}}$

meaning that ${\displaystyle ds\equiv {\sqrt {(dx_{1})^{2}+(dx_{2})^{2}+(dx_{3})^{2}}}}$ is differential arc length. Thus ${\displaystyle s}$ is arc length.

The arclength ${\displaystyle s}$ is called the natural parameter of differential geometry by some authors.
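To make the identification of ${\displaystyle s}$ with arclength concrete, one can integrate ${\displaystyle |{\boldsymbol {x}}^{\prime }(\sigma )|}$ numerically along a curve and compare with the known arclength. The circle radius and integration range below are assumed example values; this is a hedged sketch, not a general routine.

```python
# Minimal sketch (assumed example): arclength of a quarter circle of radius rho,
# computed as the integral of |x'(sigma)| d sigma with the trapezoidal rule.
import numpy as np

rho = 2.0  # assumed example radius

def x_prime(sigma):
    # tangent vector of the circle x(sigma) = (rho cos sigma, rho sin sigma, 0)
    return np.array([-rho * np.sin(sigma), rho * np.cos(sigma), 0.0])

sigmas = np.linspace(0.0, np.pi / 2, 1001)
speeds = np.array([np.linalg.norm(x_prime(sg)) for sg in sigmas])
# trapezoidal rule for the integral of speed over sigma
s_numeric = np.sum(0.5 * (speeds[1:] + speeds[:-1]) * np.diff(sigmas))

s_exact = rho * np.pi / 2  # arclength of a quarter circle of radius rho
print(abs(s_numeric - s_exact) < 1e-8)
```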

"The unit tangent vector ${\displaystyle {\boldsymbol {\hat {t}}}}$ and the unit principal normal vector ${\displaystyle {\boldsymbol {\hat {p}}}}$ at the point ${\displaystyle {\boldsymbol {x}}(\sigma )}$. "

### The Principal Normal vector to a curve

We can define a second vector of interest in our understanding of the applications of calculus to curves. This is the principal normal vector.

We define the principal normal vector ${\displaystyle {\boldsymbol {p}}}$ as the first derivative with respect to ${\displaystyle s}$ of the unit tangent vector

${\displaystyle {\boldsymbol {p}}\equiv {\dot {\hat {\boldsymbol {t}}}}={\ddot {\boldsymbol {x}}}}$

### But where does ${\displaystyle {\boldsymbol {p}}}$ point?

We consider the dot product of the unit tangent vector with itself

${\displaystyle {\hat {\boldsymbol {t}}}\cdot {\hat {\boldsymbol {t}}}=1}$

If we differentiate both sides of the dot product with respect to ${\displaystyle s}$

${\displaystyle {\frac {d}{ds}}({\hat {\boldsymbol {t}}}\cdot {\hat {\boldsymbol {t}}})=2{\hat {\boldsymbol {t}}}\cdot {\dot {\hat {\boldsymbol {t}}}}=2{\hat {\boldsymbol {t}}}\cdot {\boldsymbol {p}}=0}$.

Thus, the principal normal vector ${\displaystyle {\boldsymbol {p}}}$ is orthogonal to the tangent vector ${\displaystyle {\hat {\boldsymbol {t}}}}$.

We can define a unit principal normal vector ${\displaystyle {\hat {\boldsymbol {p}}}}$ by dividing by its magnitude

${\displaystyle {\boldsymbol {\hat {p}}}\equiv {\frac {\boldsymbol {p}}{|{\boldsymbol {p}}|}}={\frac {\ddot {\boldsymbol {x}}}{|{\ddot {\boldsymbol {x}}}|}}}$.

"Here ${\displaystyle {\boldsymbol {x}}(s)}$ is a circle of radius ${\displaystyle \rho }$. It's unit tangent vector is ${\displaystyle {\boldsymbol {\dot {x}}}(s)}$ and its unit principal normal vector is ${\displaystyle \rho {\boldsymbol {\ddot {x}}}}$. Here, the angle ${\displaystyle \sigma =s/\rho }$ shows the relationship between angle ${\displaystyle \sigma }$ in radians and arclength ${\displaystyle s}$.

## Simple examples on a circle

A circle of radius ${\displaystyle \rho }$ in the ${\displaystyle (x_{1},x_{2})}$-plane provides a simple demonstration of the tangent and principal normal vectors.

A circle may be represented as

${\displaystyle {\boldsymbol {x}}(\sigma )=\left(\rho \cos(\sigma ),\rho \sin(\sigma ),0\right)}$

where ${\displaystyle \rho }$ is the (constant) radius of the circle and ${\displaystyle \sigma }$ is the angular coordinate. This expression shows position on the circle of radius ${\displaystyle \rho }$ as a vector which points from the center of the circle to the point at coordinates ${\displaystyle (\rho ,\sigma )}$.

### The tangent to a circle

We may differentiate the components of ${\displaystyle {\boldsymbol {x}}(\sigma )}$ to obtain the tangent vector

${\displaystyle {\boldsymbol {t}}(\sigma )={\boldsymbol {x}}^{\prime }(\sigma )=\left(-\rho \sin(\sigma ),\rho \cos(\sigma ),0\right)}$.

This expression describes a tangent vector pointing in the counter-clockwise direction on the circle, where ${\displaystyle \sigma }$ increases in the counter-clockwise direction.

The magnitude of the tangent vector ${\displaystyle |{\boldsymbol {t}}|}$ is the square root of the sum of the squares of its components

${\displaystyle |{\boldsymbol {t}}|={\sqrt {\rho ^{2}\sin ^{2}(\sigma )+\rho ^{2}\cos ^{2}(\sigma )}}=\rho }$.

Thus, we may write the unit tangent vector as ${\displaystyle {\boldsymbol {\hat {t}}}={\frac {\boldsymbol {t}}{|{\boldsymbol {t}}|}}={\frac {\boldsymbol {t}}{\rho }}\equiv {\frac {d{\boldsymbol {x}}(\sigma (s))}{ds}}={\frac {d{\boldsymbol {x}}}{d\sigma }}{\frac {d\sigma }{ds}}}$.

Thus, for a circle

${\displaystyle {\frac {d\sigma }{ds}}={\frac {1}{\rho }}}$ which implies that ${\displaystyle d\sigma ={\frac {ds}{\rho }}}$

meaning that ${\displaystyle \sigma =\int _{s_{0}}^{s}{\frac {dl}{\rho }}}$. Here ${\displaystyle l}$ is a dummy variable of integration.

If we take ${\displaystyle \sigma \equiv 0}$ when ${\displaystyle s=s_{0}}$, and because ${\displaystyle \rho ={\mbox{const.}}}$ for a circle, we can relate ${\displaystyle \sigma }$ to arclength ${\displaystyle s}$ via ${\displaystyle \sigma =s/\rho .}$

### Arclength coordinates

We can rewrite the formulas for the circle and its tangent vector in terms of arclength ${\displaystyle s}$ using ${\displaystyle \sigma =s/\rho }$.

The formula for the circle becomes

${\displaystyle {\boldsymbol {x}}(s)=(\rho \cos(s/\rho ),\rho \sin(s/\rho ),0)}$

and the function describing its unit tangent vectors becomes

${\displaystyle {\boldsymbol {\hat {t}}}(s)={\boldsymbol {\dot {x}}}(s)=(-\sin(s/\rho ),\cos(s/\rho ),0)}$.

We may then take an additional derivative to obtain the principal normal vector (which is not a unit vector)

${\displaystyle {\boldsymbol {p}}={\boldsymbol {\ddot {x}}}(s)=(-(1/\rho )\cos(s/\rho ),-(1/\rho )\sin(s/\rho ),0)}$.

The magnitude of the principal normal vector is

${\displaystyle |{\boldsymbol {p}}|=|{\boldsymbol {\ddot {x}}}(s)|={\sqrt {(1/\rho ^{2})\cos ^{2}(s/\rho )+(1/\rho ^{2})\sin ^{2}(s/\rho )}}=1/\rho }$. We call ${\displaystyle 1/\rho }$ the curvature of the circle and ${\displaystyle \rho }$ the radius of curvature of the circle.

Thus the unit principal normal vector is

${\displaystyle {\boldsymbol {\hat {p}}}={\frac {{\boldsymbol {\ddot {x}}}(s)}{|{\boldsymbol {\ddot {x}}}(s)|}}=\rho {\boldsymbol {\ddot {x}}}=(-\cos(s/\rho ),-\sin(s/\rho ),0).}$

As we can see ${\displaystyle {\boldsymbol {x}}}$ points in a radial direction away from the center of the circle, ${\displaystyle {\boldsymbol {\dot {x}}}}$ points in the counter-clockwise tangent direction to the circle, and the principal normal vector ${\displaystyle \rho {\boldsymbol {\ddot {x}}}}$ points in the radial direction toward the center of the circle.
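These three directional claims can be checked numerically for the circle; the values of ${\displaystyle \rho }$ and ${\displaystyle s}$ below are assumed for illustration.

```python
# Minimal sketch (assumed example values): directional checks for the circle
# x(s) = (rho cos(s/rho), rho sin(s/rho), 0) in arclength coordinates.
import numpy as np

rho, s = 3.0, 1.25

x     = np.array([rho * np.cos(s / rho), rho * np.sin(s / rho), 0.0])
xdot  = np.array([-np.sin(s / rho), np.cos(s / rho), 0.0])
xddot = np.array([-(1 / rho) * np.cos(s / rho), -(1 / rho) * np.sin(s / rho), 0.0])

unit_tangent  = np.isclose(np.linalg.norm(xdot), 1.0)  # |x-dot| = 1
radial_orthog = np.isclose(np.dot(x, xdot), 0.0)       # tangent orthogonal to radius
inward_normal = np.allclose(rho * xddot, -x / rho)     # rho * x-ddot points toward center
print(unit_tangent and radial_orthog and inward_normal)
```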

## The Curvature vector ${\displaystyle {\boldsymbol {\kappa }}}$

The notions of the tangent, principal normal, and curvature generalize beyond circles to more general curves. We need only imagine that at each point of a more general curve there is a circle tangent to that curve, with a radius of curvature defined by the second derivative of the function describing the curve.

We may define a curvature vector

${\displaystyle {\boldsymbol {\kappa }}(s)\equiv \kappa (s){\boldsymbol {\hat {p}}}(s)}$

where we define ${\displaystyle \kappa (s)\equiv |{\boldsymbol {\ddot {x}}}(s)|=1/\rho (s)}$. The curvature is ${\displaystyle \kappa (s)}$ and the radius of curvature is ${\displaystyle \rho (s)}$. We note for future reference that

${\displaystyle {\dot {\boldsymbol {\hat {t}}}}=\kappa (s){\boldsymbol {\hat {p}}}}$.

## The binormal vector

You may have seen this coming. Given two unit vectors, we can define a third vector via the cross product. In this case, we define the unit binormal vector as the cross product of the unit tangent and the unit principal normal vectors

${\displaystyle {\boldsymbol {\hat {b}}}\equiv {\boldsymbol {\hat {t}}}\times {\boldsymbol {\hat {p}}}}$.

For curves confined to a plane, this vector always points in a constant direction normal to that plane. For more general curves, the orientation of this vector changes as we move along the curve.

As before, differentiating the dot product ${\displaystyle {\boldsymbol {\hat {b}}}\cdot {\boldsymbol {\hat {b}}}=1}$ with respect to ${\displaystyle s}$

${\displaystyle {\frac {d}{ds}}({\boldsymbol {\hat {b}}}\cdot {\boldsymbol {\hat {b}}})=2{\boldsymbol {\hat {b}}}\cdot {\boldsymbol {\dot {\hat {b}}}}=0}$

which indicates that the derivative of the binormal vector must be orthogonal to ${\displaystyle {\boldsymbol {\hat {b}}}}$, that is, it must lie in the ${\displaystyle ({\boldsymbol {\hat {t}}},{\boldsymbol {\hat {p}}})}$-plane.

We know that the binormal is perpendicular to both the tangent ${\displaystyle {\boldsymbol {\hat {t}}}}$ and the principal normal ${\displaystyle {\boldsymbol {\hat {p}}}}$ vectors. Hence ${\displaystyle {\boldsymbol {\hat {b}}}\cdot {\boldsymbol {\hat {t}}}=0}$ and ${\displaystyle {\boldsymbol {\hat {b}}}\cdot {\boldsymbol {\hat {p}}}=0}$.

Differentiating with respect to arclength ${\displaystyle s}$

${\displaystyle {\frac {d}{ds}}({\boldsymbol {\hat {b}}}\cdot {\boldsymbol {\hat {t}}})={\boldsymbol {\hat {b}}}\cdot {\boldsymbol {\dot {\hat {t}}}}+{\boldsymbol {\dot {\hat {b}}}}\cdot {\boldsymbol {\hat {t}}}=0}$.

Thus,

${\displaystyle {\boldsymbol {\dot {\hat {b}}}}\cdot {\boldsymbol {\hat {t}}}=-{\boldsymbol {\hat {b}}}\cdot {\boldsymbol {\dot {\hat {t}}}}=-{\boldsymbol {\hat {b}}}\cdot \kappa (s){\boldsymbol {\hat {p}}}=-\kappa (s)({\boldsymbol {\hat {b}}}\cdot {\boldsymbol {\hat {p}}})=0}$.

Hence ${\displaystyle {\boldsymbol {\dot {\hat {b}}}}}$ points in either the ${\displaystyle {\boldsymbol {\hat {p}}}}$ or the ${\displaystyle -{\boldsymbol {\hat {p}}}}$ direction.

## The Torsion

For curves not confined to a plane, it is possible for there not only to be a curvature, but also a "twist" of the curve. This twist will be related to the derivative of the binormal vector, as this ceases to be a constant vector for curves that are not confined to a single plane. To quantify this twist, we define the torsion ${\displaystyle \tau (s)}$ as being, from the Frenet equations, proportional to the derivative of the binormal vector

${\displaystyle {\boldsymbol {\dot {\hat {b}}}}\equiv -\tau (s){\boldsymbol {\hat {p}}}\qquad }$ which implies that ${\displaystyle \qquad {\boldsymbol {\hat {p}}}\cdot {\boldsymbol {\dot {\hat {b}}}}\equiv -\tau (s)({\boldsymbol {\hat {p}}}\cdot {\boldsymbol {\hat {p}}})=-\tau (s)}$.

Here the choice of the minus sign is a matter of convention.

To find out the value of torsion, we need a few more results. We begin with the cross product representation of the binormal vector ${\displaystyle {\boldsymbol {\hat {b}}}={\boldsymbol {\hat {t}}}\times {\boldsymbol {\hat {p}}}}$ and differentiate this with respect to arclength ${\displaystyle s}$

${\displaystyle {\boldsymbol {\dot {\hat {b}}}}={\boldsymbol {\dot {\hat {t}}}}\times {\boldsymbol {\hat {p}}}+{\boldsymbol {\hat {t}}}\times {\boldsymbol {\dot {\hat {p}}}}}$.

We can take the dot product of both sides of this expression with ${\displaystyle -{\boldsymbol {\hat {p}}}}$

${\displaystyle \tau (s)=-{\boldsymbol {\hat {p}}}\cdot {\boldsymbol {\dot {\hat {b}}}}=-{\boldsymbol {\hat {p}}}\cdot \left[{\boldsymbol {\dot {\hat {t}}}}\times {\boldsymbol {\hat {p}}}+{\boldsymbol {\hat {t}}}\times {\boldsymbol {\dot {\hat {p}}}}\right]}$.

We note that ${\displaystyle {\boldsymbol {\dot {\hat {t}}}}={\boldsymbol {\ddot {x}}}=\kappa (s){\boldsymbol {\hat {p}}}\qquad }$ and that ${\displaystyle \qquad {\boldsymbol {\hat {p}}}\times {\boldsymbol {\hat {p}}}=0}$

${\displaystyle \tau (s)=-{\boldsymbol {\hat {p}}}\cdot \left[\kappa (s)({\boldsymbol {\hat {p}}}\times {\boldsymbol {\hat {p}}})+{\boldsymbol {\hat {t}}}\times {\boldsymbol {\dot {\hat {p}}}}\right]}$

${\displaystyle \tau (s)=-{\boldsymbol {\hat {p}}}\cdot {\boldsymbol {\hat {t}}}\times {\boldsymbol {\dot {\hat {p}}}}}$.

We can make the following replacement ${\displaystyle {\boldsymbol {\hat {p}}}=\rho (s){\boldsymbol {\ddot {x}}}}$ to obtain

${\displaystyle \tau (s)=-\rho (s){\boldsymbol {\ddot {x}}}\cdot \left[{\boldsymbol {\hat {t}}}\times {\frac {d}{ds}}(\rho (s){\boldsymbol {\ddot {x}}})\right]}$

${\displaystyle \qquad =-\rho (s){\boldsymbol {\ddot {x}}}\cdot \left[{\boldsymbol {\hat {t}}}\times ({\dot {\rho }}(s){\boldsymbol {\ddot {x}}}+\rho (s){\boldsymbol {\bar {x}}})\right]}$.

Here we note that ${\displaystyle {\boldsymbol {\ddot {x}}}\cdot ({\boldsymbol {\hat {t}}}\times {\boldsymbol {\ddot {x}}})=0}$, and that ${\displaystyle {\boldsymbol {\bar {x}}}\equiv {\frac {d^{3}{\boldsymbol {x}}}{ds^{3}}}}$ (because MediaWiki's math mode does not recognize the LaTeX triple-dot derivative operator).

${\displaystyle \tau (s)=-\rho ^{2}(s)({\boldsymbol {\ddot {x}}}\cdot {\boldsymbol {\hat {t}}}\times {\boldsymbol {\bar {x}}})=\rho ^{2}(s)({\boldsymbol {\dot {x}}}\cdot {\boldsymbol {\ddot {x}}}\times {\boldsymbol {\bar {x}}})}$.

Finally, recalling that ${\displaystyle \rho ^{2}={\frac {1}{|{\boldsymbol {\ddot {x}}}\cdot {\boldsymbol {\ddot {x}}}|}}}$, the torsion may be written in the compact form

${\displaystyle \tau (s)={\frac {{\boldsymbol {\dot {x}}}(s)\cdot {\boldsymbol {\ddot {x}}}(s)\times {\boldsymbol {\bar {x}}}(s)}{|{\boldsymbol {\ddot {x}}}(s)\cdot {\boldsymbol {\ddot {x}}}(s)|}}}$.

"Here ${\displaystyle {\boldsymbol {x}}(s)}$ is a simple circular helix of radius 1. In this case, the angular coordinate is arclength ${\displaystyle s}$.

## Example: a simple circular helix with a linear third coordinate

We can write the equation of a simple helix, its tangent, principal normal, and binormal vectors as

${\displaystyle {\boldsymbol {x}}(s)=(\cos(s),\sin(s),cs)}$ the helix

${\displaystyle {\boldsymbol {\dot {x}}}(s)=(-\sin(s),\cos(s),c)}$ its tangent

${\displaystyle {\boldsymbol {\ddot {x}}}(s)=(-\cos(s),-\sin(s),0)}$ its principal normal

${\displaystyle {\boldsymbol {\bar {x}}}(s)=(\sin(s),-\cos(s),0)}$ its third derivative

### Torsion for a simple unit circular helix in arclength ${\displaystyle s}$ with a linear third coordinate

The torsion is given by the triple scalar product

${\displaystyle \tau (s)={\frac {{\boldsymbol {\dot {x}}}\cdot {\boldsymbol {\ddot {x}}}\times {\boldsymbol {\bar {x}}}}{|{\boldsymbol {\ddot {x}}}\cdot {\boldsymbol {\ddot {x}}}|}}={\frac {1}{|{\boldsymbol {\ddot {x}}}\cdot {\boldsymbol {\ddot {x}}}|}}\det {\begin{bmatrix}-\sin(s)&\cos(s)&c\\-\cos(s)&-\sin(s)&0\\\sin(s)&-\cos(s)&0\end{bmatrix}}=c(\cos ^{2}(s)+\sin ^{2}(s))=c.}$

Here, we note that ${\displaystyle |{\boldsymbol {\ddot {x}}}\cdot {\boldsymbol {\ddot {x}}}|=\sin ^{2}(s)+\cos ^{2}(s)=1.}$

Hence, for the simple unit circular helix in arclength coordinates, the torsion (the "twist") is the constant factor ${\displaystyle c}$ that lifts the helix off of the plane.
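The triple scalar product formula can be evaluated numerically for the unit helix; the values of ${\displaystyle c}$ and ${\displaystyle s}$ below are assumed for illustration, and the derivative vectors are entered from their analytic expressions.

```python
# Minimal sketch (assumed example values): torsion of the unit circular helix
# x(s) = (cos s, sin s, c s) from tau = (xdot . xddot x xbar) / (xddot . xddot).
import numpy as np

c, s = 0.4, 0.9

xdot  = np.array([-np.sin(s),  np.cos(s), c])    # first derivative of x
xddot = np.array([-np.cos(s), -np.sin(s), 0.0])  # second derivative of x
xbar  = np.array([ np.sin(s), -np.cos(s), 0.0])  # third derivative of x

tau = np.dot(xdot, np.cross(xddot, xbar)) / np.dot(xddot, xddot)
print(np.isclose(tau, c))  # the torsion equals the constant c
```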

The concept of torsion extends to more general curves than simple circular helices. Just as we can imagine a circle of radius ${\displaystyle \rho }$ tangent to a more general plane curve, we may also consider a helix tangent to a curve that is not confined to a plane.

### Helix of radius ${\displaystyle \rho }$

We can generalize the example of the helix to a circular helix of radius ${\displaystyle \rho }$ and with angular coordinate ${\displaystyle \sigma }$

${\displaystyle {\boldsymbol {x}}(\sigma )=(\rho \cos(\sigma ),\rho \sin(\sigma ),c\sigma )}$ the helix

${\displaystyle {\boldsymbol {x^{\prime }}}(\sigma )=(-\rho \sin(\sigma ),\rho \cos(\sigma ),c)}$ the tangent vectors to the helix

To calculate the unit tangent, we need the magnitude of the tangent vector ${\displaystyle {\boldsymbol {x^{\prime }}}(\sigma )}$

${\displaystyle |{\boldsymbol {x^{\prime }}}(\sigma )|={\sqrt {\rho ^{2}\sin ^{2}(\sigma )+\rho ^{2}\cos ^{2}(\sigma )+c^{2}}}={\sqrt {\rho ^{2}+c^{2}}}\equiv w}$.

As before

${\displaystyle {\boldsymbol {\hat {t}}}={\frac {\boldsymbol {x^{\prime }}}{|{\boldsymbol {x^{\prime }}}(\sigma )|}}={\boldsymbol {\dot {x}}}(\sigma (s))={\frac {d{\boldsymbol {x}}}{d\sigma }}{\frac {d\sigma }{ds}}}$

implying that ${\displaystyle {\frac {d\sigma }{ds}}={\frac {1}{w}}\rightarrow \sigma =s/w}$ for ${\displaystyle \sigma =0}$ when ${\displaystyle s=0}$.

#### Helix of radius ${\displaystyle \rho }$ in arclength ${\displaystyle s}$ coordinates

We can write the equation of a simple circular helix of arbitrary radius, and the associated derivative related quantities as

${\displaystyle {\boldsymbol {x}}(s)=\left(\rho \cos(s/w),\rho \sin(s/w),cs/w\right)}$ the helix in arclength coordinates

${\displaystyle {\boldsymbol {\dot {x}}}(s)=\left(-(\rho /w)\sin(s/w),(\rho /w)\cos(s/w),c/w\right)}$ the unit tangent vector

${\displaystyle {\boldsymbol {\ddot {x}}}(s)=\left(-(\rho /w^{2})\cos(s/w),-(\rho /w^{2})\sin(s/w),0\right)}$ the principal normal vector

${\displaystyle {\boldsymbol {\bar {x}}}(s)=\left((\rho /w^{3})\sin(s/w),-(\rho /w^{3})\cos(s/w),0\right)}$ the third derivative with respect to arclength

We note ${\displaystyle |{\boldsymbol {\ddot {x}}}\cdot {\boldsymbol {\ddot {x}}}|=(\rho ^{2}/w^{4})(\cos ^{2}(s/w)+\sin ^{2}(s/w))=(\rho ^{2}/w^{4})}$

And we write the torsion as

${\displaystyle \tau (s)={\frac {{\boldsymbol {\dot {x}}}\cdot {\boldsymbol {\ddot {x}}}\times {\boldsymbol {\bar {x}}}}{|{\boldsymbol {\ddot {x}}}\cdot {\boldsymbol {\ddot {x}}}|}}={\frac {1}{|{\boldsymbol {\ddot {x}}}\cdot {\boldsymbol {\ddot {x}}}|}}\det {\begin{bmatrix}-(\rho /w)\sin(s/w)&(\rho /w)\cos(s/w)&c/w\\-(\rho /w^{2})\cos(s/w)&-(\rho /w^{2})\sin(s/w)&0\\(\rho /w^{3})\sin(s/w)&-(\rho /w^{3})\cos(s/w)&0\end{bmatrix}}=(w^{4}/\rho ^{2})(c/w)(\rho ^{2}/w^{5})(\cos ^{2}(s/w)+\sin ^{2}(s/w))={\frac {c}{w^{2}}}={\frac {c}{\rho ^{2}+c^{2}}}.}$
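The closed form ${\displaystyle \tau =c/(\rho ^{2}+c^{2})}$ can be checked numerically; the parameter values below are assumed for illustration.

```python
# Minimal sketch (assumed example values): torsion of the radius-rho helix
# in arclength coordinates, checked against c / (rho^2 + c^2).
import numpy as np

rho, c, s = 2.0, 0.5, 1.7
w = np.sqrt(rho**2 + c**2)

xdot  = np.array([-(rho / w)    * np.sin(s / w),  (rho / w)    * np.cos(s / w), c / w])
xddot = np.array([-(rho / w**2) * np.cos(s / w), -(rho / w**2) * np.sin(s / w), 0.0])
xbar  = np.array([ (rho / w**3) * np.sin(s / w), -(rho / w**3) * np.cos(s / w), 0.0])

tau = np.dot(xdot, np.cross(xddot, xbar)) / np.dot(xddot, xddot)
print(np.isclose(tau, c / (rho**2 + c**2)))
```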

"The unit tangent vector ${\displaystyle {\boldsymbol {\hat {t}}}}$, the unit principal normal vector ${\displaystyle {\boldsymbol {\hat {p}}}}$, and the unit binormal vector ${\displaystyle {\boldsymbol {\hat {p}}}}$ constitute the basis of the Frenet coordinate frame. "

## The Frenet coordinate frame

The unit tangent, unit principal normal, and unit binormal vectors form a coordinate frame at each point along a curve, called the Frenet frame, named for the French astronomer and mathematician Jean Frédéric Frenet (1816-1900).

We recognize the following equivalent expressions, which follow from cyclic and anti-cyclic permutations of the vectors in the cross products

${\displaystyle \qquad {\boldsymbol {\hat {t}}}\times {\boldsymbol {\hat {p}}}={\boldsymbol {\hat {b}}}\qquad {\boldsymbol {\hat {p}}}\times {\boldsymbol {\hat {b}}}={\boldsymbol {\hat {t}}}\qquad {\boldsymbol {\hat {b}}}\times {\boldsymbol {\hat {t}}}={\boldsymbol {\hat {p}}}\qquad }$ and ${\displaystyle \qquad {\boldsymbol {\hat {p}}}\times {\boldsymbol {\hat {t}}}=-{\boldsymbol {\hat {b}}}\qquad {\boldsymbol {\hat {b}}}\times {\boldsymbol {\hat {p}}}=-{\boldsymbol {\hat {t}}}\qquad {\boldsymbol {\hat {t}}}\times {\boldsymbol {\hat {b}}}=-{\boldsymbol {\hat {p}}}}$ .
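These identities hold for any right-handed orthonormal triad, which can be verified numerically; the starting direction below is an assumed example.

```python
# Minimal sketch (assumed example): verify the cyclic cross-product identities
# for a right-handed orthonormal triad (t, p, b) built from an example direction.
import numpy as np

t = np.array([1.0, 2.0, 2.0])
t /= np.linalg.norm(t)              # unit "tangent"
helper = np.array([0.0, 0.0, 1.0])  # any vector not parallel to t
p = np.cross(t, helper)
p /= np.linalg.norm(p)              # unit "principal normal"
b = np.cross(t, p)                  # unit "binormal" completes the triad

cyclic = (np.allclose(np.cross(p, b), t) and
          np.allclose(np.cross(b, t), p))
anticyclic = (np.allclose(np.cross(p, t), -b) and
              np.allclose(np.cross(t, b), -p))
print(cyclic and anticyclic)
```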

From previous results, we recall that

${\displaystyle {\boldsymbol {\ddot {x}}}\equiv {\boldsymbol {\dot {\hat {t}}}}=\kappa (s){\boldsymbol {\hat {p}}}}$

We note also that we can differentiate the cross product representation of ${\displaystyle {\boldsymbol {\hat {p}}}}$

${\displaystyle {\boldsymbol {\dot {\hat {p}}}}={\boldsymbol {\dot {\hat {b}}}}\times {\boldsymbol {\hat {t}}}+{\boldsymbol {\hat {b}}}\times {\boldsymbol {\dot {\hat {t}}}}}$.

From the expression ${\displaystyle {\boldsymbol {\dot {\hat {t}}}}=\kappa (s){\boldsymbol {\hat {p}}}}$ and from the definition of torsion ${\displaystyle {\boldsymbol {\dot {\hat {b}}}}=-\tau (s){\boldsymbol {\hat {p}}}}$

${\displaystyle {\boldsymbol {\dot {\hat {p}}}}=-\tau (s){\boldsymbol {\hat {p}}}\times {\boldsymbol {\hat {t}}}+\kappa (s){\boldsymbol {\hat {b}}}\times {\boldsymbol {\hat {p}}}}$.

Because ${\displaystyle {\boldsymbol {\hat {p}}}\times {\boldsymbol {\hat {t}}}=-{\boldsymbol {\hat {b}}}}$ and ${\displaystyle {\boldsymbol {\hat {b}}}\times {\boldsymbol {\hat {p}}}=-{\boldsymbol {\hat {t}}}}$

${\displaystyle {\boldsymbol {\dot {\hat {p}}}}=\tau (s){\boldsymbol {\hat {b}}}-\kappa (s){\boldsymbol {\hat {t}}}}$.

### The Frenet equations

From the previous results we have the following autonomous system of ordinary differential equations in the arclength variable ${\displaystyle s}$. Writing out the dot derivatives as ${\displaystyle d/ds}$, these are

${\displaystyle {\frac {d}{ds}}{\boldsymbol {\hat {t}}}=\kappa (s){\boldsymbol {\hat {p}}}}$

${\displaystyle {\frac {d}{ds}}{\boldsymbol {\hat {p}}}=-\kappa (s){\boldsymbol {\hat {t}}}+\tau (s){\boldsymbol {\hat {b}}}}$

${\displaystyle {\frac {d}{ds}}{\boldsymbol {\hat {b}}}=-\tau (s){\boldsymbol {\hat {p}}}}$,

where the last equation follows from the definition of torsion.

We may write this system in matrix and vector notation as

${\displaystyle {\frac {d}{ds}}{\begin{bmatrix}{\boldsymbol {\hat {t}}}\\{\boldsymbol {\hat {p}}}\\{\boldsymbol {\hat {b}}}\end{bmatrix}}={\begin{bmatrix}0&\kappa (s)&0\\-\kappa (s)&0&\tau (s)\\0&-\tau (s)&0\end{bmatrix}}{\begin{bmatrix}{\boldsymbol {\hat {t}}}\\{\boldsymbol {\hat {p}}}\\{\boldsymbol {\hat {b}}}\end{bmatrix}}}$

The Frenet result is beautiful, but suffers from the flaw that while the Frenet frame is, pointwise, an orthogonal frame, it does not constitute an orthogonal coordinate system centered on the curve. This follows because ${\displaystyle {\boldsymbol {\dot {\hat {p}}}}}$ and ${\displaystyle {\boldsymbol {\dot {\hat {b}}}}}$ do not point parallel to ${\displaystyle {\boldsymbol {\hat {t}}}}$. In short, the principal normal and binormal vectors are not transported in a parallel way along the curve.
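The Frenet system can be checked numerically for the helix of radius ${\displaystyle \rho }$, whose curvature and torsion are the constants ${\displaystyle \kappa =\rho /w^{2}}$ and ${\displaystyle \tau =c/w^{2}}$; the parameter values and finite-difference step below are assumed for illustration.

```python
# Minimal sketch (assumed example values): finite-difference check of the
# Frenet equations for the helix of radius rho in arclength coordinates.
import numpy as np

rho, c = 2.0, 0.5
w = np.sqrt(rho**2 + c**2)
kappa, tau = rho / w**2, c / w**2  # curvature and torsion of the helix

def frame(s):
    # Frenet frame (t, p, b) of the helix at arclength s
    t = np.array([-(rho / w) * np.sin(s / w), (rho / w) * np.cos(s / w), c / w])
    p = np.array([-np.cos(s / w), -np.sin(s / w), 0.0])
    b = np.cross(t, p)
    return t, p, b

s0, h = 0.8, 1e-6
t0, p0, b0 = frame(s0)
t1, p1, b1 = frame(s0 + h)

# compare finite-difference derivatives with the Frenet right-hand sides
ok_t = np.allclose((t1 - t0) / h, kappa * p0, atol=1e-5)
ok_p = np.allclose((p1 - p0) / h, -kappa * t0 + tau * b0, atol=1e-5)
ok_b = np.allclose((b1 - b0) / h, -tau * p0, atol=1e-5)
print(ok_t and ok_p and ok_b)
```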

## The First Fundamental Form of Differential geometry

The notion of a curve can be generalized beyond orthogonal cartesian coordinates to more general coordinates. This is done by considering differential arclength in a more general coordinate system.

We recall that the unit tangent vector is defined as the first derivative with respect to arclength of position on a curve ${\displaystyle {\boldsymbol {x}}(s)}$

${\displaystyle {\frac {d{\boldsymbol {x}}}{ds}}={\boldsymbol {\hat {t}}}}$ implying that differential position is ${\displaystyle d{\boldsymbol {x}}={\boldsymbol {\hat {t}}}ds}$ and that differential arclength squared is given by the dot product of ${\displaystyle d{\boldsymbol {x}}}$ with itself

${\displaystyle d{\boldsymbol {x}}\cdot d{\boldsymbol {x}}=({\boldsymbol {\hat {t}}}\cdot {\boldsymbol {\hat {t}}})(ds)^{2}}$.

### More general parameterization

Suppose that ${\displaystyle {\boldsymbol {x}}=(x_{1},x_{2},x_{3})}$ is parameterized in terms of a new coordinate ${\displaystyle {\boldsymbol {\sigma }}=(\sigma ^{1},\sigma ^{2})}$ in 2-dimensions, or ${\displaystyle {\boldsymbol {\sigma }}=(\sigma ^{1},\sigma ^{2},\sigma ^{3})}$ in 3-dimensions, such that ${\displaystyle {\boldsymbol {x}}={\boldsymbol {x}}({\boldsymbol {\sigma }})}$. The superscripted indices indicate that ${\displaystyle {\boldsymbol {\sigma }}}$ need not be an orthogonal cartesian coordinate system.

In 3-dimensions we write

${\displaystyle {\boldsymbol {x}}({\boldsymbol {\sigma }})\equiv \left(x_{1}(\sigma ^{1},\sigma ^{2},\sigma ^{3}),x_{2}(\sigma ^{1},\sigma ^{2},\sigma ^{3}),x_{3}(\sigma ^{1},\sigma ^{2},\sigma ^{3})\right)}$ and the differential position is given by

${\displaystyle d{\boldsymbol {x}}\equiv {\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{1}}}d\sigma ^{1}+{\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{2}}}d\sigma ^{2}+{\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{3}}}d\sigma ^{3}}$ .

Formally, if we want the differential arclength for this new representation, it will be

${\displaystyle d{\boldsymbol {x}}\cdot d{\boldsymbol {x}}=\left({\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{1}}}d\sigma ^{1}+{\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{2}}}d\sigma ^{2}+{\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{3}}}d\sigma ^{3}\right)\cdot \left({\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{1}}}d\sigma ^{1}+{\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{2}}}d\sigma ^{2}+{\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{3}}}d\sigma ^{3}\right)}$.

Because the dot product is on the components of ${\displaystyle {\boldsymbol {x}}}$ the expression will be quite complicated if written out in complete detail. Fortunately we may make use of index notation to compactify this. We may write ${\displaystyle d{\boldsymbol {x}}}$ as

${\displaystyle d{\boldsymbol {x}}={\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{\alpha }}}d\sigma ^{\alpha }\qquad }$ for ${\displaystyle \qquad \alpha =1,2,3.}$

Because the dot product is on the components of ${\displaystyle {\boldsymbol {x}}}$ all possible combinations of the partial derivatives and differentials multiply each other. In index notation this is compactly represented as

${\displaystyle d{\boldsymbol {x}}\cdot d{\boldsymbol {x}}=\left({\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{\alpha }}}\cdot {\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{\beta }}}\right)d\sigma ^{\alpha }d\sigma ^{\beta }}$ where ${\displaystyle \beta =1,2,3}$, as well.

To further compactify the notation we define the tangents in the ${\displaystyle \sigma ^{\alpha }}$ and ${\displaystyle \sigma ^{\beta }}$ directions via

${\displaystyle {\boldsymbol {x}}_{\alpha }\equiv {\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{\alpha }}}\qquad }$ and ${\displaystyle \qquad {\boldsymbol {x}}_{\beta }\equiv {\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{\beta }}}}$

Thus, we can write the differential arclength squared as

${\displaystyle (ds)^{2}\equiv d{\boldsymbol {x}}\cdot d{\boldsymbol {x}}=({\boldsymbol {x}}_{\alpha }\cdot {\boldsymbol {x}}_{\beta })d\sigma ^{\alpha }d\sigma ^{\beta }}$ .

We can think of the quantity ${\displaystyle ({\boldsymbol {x}}_{\alpha }\cdot {\boldsymbol {x}}_{\beta })}$ as a 2x2 matrix in 2-dimensions and a 3x3 matrix in 3-dimensions, where the elements of the matrix are dot products of tangent vectors. We define this quantity as the metric tensor ${\displaystyle g_{\alpha \beta }\equiv ({\boldsymbol {x}}_{\alpha }\cdot {\boldsymbol {x}}_{\beta })}$. The metric tensor reduces to the identity matrix in orthogonal cartesian coordinates, and to a diagonal matrix, with values possibly other than 1 on the diagonal, in more general orthogonal curvilinear coordinates.

We call the expression, which is a quadratic form,

${\displaystyle (ds)^{2}=g_{\alpha \beta }d\sigma ^{\alpha }d\sigma ^{\beta }}$

the First Fundamental Form of Differential Geometry.
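As a concrete instance, the metric tensor can be assembled numerically from finite-difference tangent vectors. Plane polar coordinates are an assumed example here; the expected result is the familiar line element ${\displaystyle (ds)^{2}=dr^{2}+r^{2}d\theta ^{2}}$.

```python
# Minimal sketch (assumed example): metric tensor g_ab = x_a . x_b for plane
# polar coordinates x(r, theta) = (r cos theta, r sin theta).
import numpy as np

def x(sig):
    r, th = sig
    return np.array([r * np.cos(th), r * np.sin(th)])

def metric(sig, h=1e-6):
    # tangent vectors x_a = dx/dsigma^a by central differences
    dim = len(sig)
    cols = []
    for a in range(dim):
        e = np.zeros(dim)
        e[a] = h
        cols.append((x(sig + e) - x(sig - e)) / (2 * h))
    J = np.column_stack(cols)  # Jacobian; columns are the tangent vectors x_a
    return J.T @ J             # g_ab = x_a . x_b

sig = np.array([2.0, 0.7])     # assumed (r, theta) evaluation point
g = metric(sig)
print(np.allclose(g, np.diag([1.0, sig[0]**2]), atol=1e-6))  # ds^2 = dr^2 + r^2 dtheta^2
```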

## Covariant and Contravariant transformations of tensors

The "covariant" or "covariance" finds several meaning in physics. The first meaning refers to a particular transformation law for changing coordinate systems. Here, this will refer to two different possible tensor transformation laws and the reason why we have superscripted and subscripted indexes to denote components to vectors and higher order tensors.

An easy way to see why we naturally have two different types of components of a vector can be seen in the simple example of the full differential of a scalar function

${\displaystyle df={\frac {\partial f}{\partial \sigma ^{1}}}d\sigma ^{1}+{\frac {\partial f}{\partial \sigma ^{2}}}d\sigma ^{2}+{\frac {\partial f}{\partial \sigma ^{3}}}d\sigma ^{3}}$

The quantity ${\displaystyle df}$ is a scalar, possibly representing a family of level curves or surfaces. This scalar is the result of the inner or dot product of two vector quantities: the gradient of ${\displaystyle f}$, whose components are the partial derivatives of ${\displaystyle f}$ with respect to the coordinates, and the vector of differential displacements along the respective coordinate directions. Employing index notation, we may write

${\displaystyle df={\frac {\partial f}{\partial \sigma ^{\alpha }}}d\sigma ^{\alpha }=\left({\frac {\partial f}{\partial \sigma ^{1}}},{\frac {\partial f}{\partial \sigma ^{2}}},{\frac {\partial f}{\partial \sigma ^{3}}}\right)\cdot \left(d\sigma ^{1},d\sigma ^{2},d\sigma ^{3}\right)}$

which has the form of an inner or dot product of two vectors.

### Changing coordinate systems

Suppose we want to change from the ${\displaystyle {\boldsymbol {\sigma }}=(\sigma ^{1},\sigma ^{2},\sigma ^{3})}$ coordinates to a new coordinate system ${\displaystyle {\boldsymbol {y}}=(y^{1},y^{2},y^{3})}$.

We can transform the tangent vector

${\displaystyle {\frac {\partial f}{\partial \sigma ^{\alpha }}}{\frac {\partial \sigma ^{\alpha }}{\partial y^{\gamma }}}={\frac {\partial f}{\partial y^{\gamma }}}}$. Here ${\displaystyle \gamma =1,2,3}$.

We can transform the differentials (called the co-vector) as

${\displaystyle {\frac {\partial y^{\gamma }}{\partial \sigma ^{\alpha }}}d\sigma ^{\alpha }}$.

If we put these together, we have

${\displaystyle df={\frac {\partial f}{\partial \sigma ^{\alpha }}}{\frac {\partial \sigma ^{\alpha }}{\partial y^{\gamma }}}{\frac {\partial y^{\gamma }}{\partial \sigma ^{\beta }}}d\sigma ^{\beta }={\frac {\partial f}{\partial y^{\gamma }}}dy^{\gamma }}$. Note that the scalar function ${\displaystyle df}$ is "invariant" under the transformation from the ${\displaystyle {\boldsymbol {\sigma }}}$ coordinates to the ${\displaystyle {\boldsymbol {y}}}$ coordinates.

The tangent vector follows a covariant transformation law

${\displaystyle {\frac {\partial \sigma ^{\alpha }}{\partial y^{\gamma }}}{\frac {\partial f}{\partial \sigma ^{\alpha }}}={\frac {\partial f}{\partial y^{\gamma }}}}$

Whereas the co-vector obeys a "contravariant transformation law"

${\displaystyle {\frac {\partial y^{\gamma }}{\partial \sigma ^{\alpha }}}d\sigma ^{\alpha }=dy^{\gamma }}$.

### Generalizing the co-variant and contravariant transformation laws

In general, we note that the coordinate transformation ${\displaystyle {\boldsymbol {\sigma }}\rightarrow {\boldsymbol {y}}}$ transforms the covariant vector ${\displaystyle a_{\alpha }}$ as

${\displaystyle a_{\alpha }{\frac {\partial \sigma ^{\alpha }}{\partial y^{\gamma }}}={\bar {a}}_{\gamma }}$.

Here the bar reminds us that the components of ${\displaystyle {\bar {a}}_{\gamma }}$ are in the new ${\displaystyle {\boldsymbol {y}}}$ coordinates.

The corresponding transformation for a contravariant vector ${\displaystyle b^{\beta }}$ may be written as

${\displaystyle b^{\beta }{\frac {\partial y^{\gamma }}{\partial \sigma ^{\beta }}}={\bar {b}}^{\gamma }}$.

Again the bar is a reminder that the components of ${\displaystyle {\bar {a}}_{\gamma }}$ and ${\displaystyle {\bar {b}}^{\gamma }}$ are in the new ${\displaystyle {\boldsymbol {y}}}$ coordinates.

### Definition: a contraction

A contraction is the result of summing over repeated indexes where one index is covariant and the other is contravariant. For example

${\displaystyle a_{\gamma }b^{\gamma }=c}$

The contraction of a contravariant vector with a covariant vector is an invariant (a scalar).
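The invariance of a contraction can be verified numerically for a linear change of coordinates, for which the Jacobian matrix is constant. This is a minimal sketch assuming numpy; the random components and the matrix `J` are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=3)          # covariant components a_alpha
b = rng.normal(size=3)          # contravariant components b^beta
J = rng.normal(size=(3, 3))     # J[gamma, alpha] = dy^gamma / dsigma^alpha

a_bar = np.linalg.inv(J).T @ a  # covariant law: a-bar_gamma = a_alpha dsigma^alpha/dy^gamma
b_bar = J @ b                   # contravariant law: b-bar^gamma = b^beta dy^gamma/dsigma^beta

# The contraction a_gamma b^gamma is the same scalar in both coordinate systems.
print(np.isclose(a @ b, a_bar @ b_bar))
```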

"The surface ${\displaystyle {\boldsymbol {x}}(\sigma ^{1},\sigma ^{2})\equiv {\boldsymbol {x}}(\sigma ^{1},\sigma ^{2},\sigma ^{3})}$, with ${\displaystyle \sigma ^{3}={\mbox{const.}}}$ for points on the surface. The coordinate system ${\displaystyle (\sigma ^{1},\sigma ^{2})}$ is not, in general, an orthogonal coordinate system. "

# The Theory of Surfaces

A surface is a geometrical object parameterized in two dimensions via ${\displaystyle {\boldsymbol {x}}(\sigma ^{1},\sigma ^{2})}$. A family of level surfaces may be defined with an additional coordinate ${\displaystyle \sigma ^{3}}$ which is equal to a constant on a given surface ${\displaystyle {\boldsymbol {x}}(\sigma ^{1},\sigma ^{2},\sigma ^{3}={\mbox{const.}})}$.

The parameters ${\displaystyle (\sigma ^{1},\sigma ^{2})}$ describe coordinate curves on the surface allowing us to write expressions for tangent vectors to each coordinate curve

${\displaystyle {\boldsymbol {t}}_{1}\equiv {\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{1}}}\qquad }$ and ${\displaystyle \qquad {\boldsymbol {t}}_{2}\equiv {\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{2}}}}$.

The tangent vectors ${\displaystyle ({\boldsymbol {t}}_{1},{\boldsymbol {t}}_{2})}$ span the tangent plane at any given point ${\displaystyle {\boldsymbol {x}}(\sigma ^{1},\sigma ^{2})}$. Because the coordinates ${\displaystyle (\sigma ^{1},\sigma ^{2})}$ are not, in general, orthogonal coordinates, the tangent vectors are not, in general, orthogonal.

We recall that the components of the metric tensor ${\displaystyle g_{\alpha \beta }}$ are given by dot products of tangent vectors. In this case we note for future reference that ${\displaystyle g_{11}={\boldsymbol {x}}_{1}\cdot {\boldsymbol {x}}_{1}}$, meaning that ${\displaystyle |{\boldsymbol {x}}_{1}|={\sqrt {g_{11}}}}$.

"A plane spanned by (not unit) vectors ${\displaystyle ({\boldsymbol {t}}_{1},{\boldsymbol {t}}_{2})}$ in a non-orthogonal coordinate system. There are two ways of representing a vector ${\displaystyle {\boldsymbol {V}}}$ in terms of components. a) One way is by finding vectors that are parallel to the coordinate axes that sum to the vector by the parallelogram law. b) The other way is the traditional method of projecting orthogonally via the dot product."

## The Covariant and Contravariant components of a vector

Because we are no longer confined to considering only orthogonal coordinate frames, we have the possibility of representing the components of a given vector in either covariant or contravariant form. The mental picture that the reader may already have of representing a vector by its components is of taking the dot product of the vector with the unit basis vectors along the given coordinate axes. Thus we use the scalar product to project the vector perpendicularly onto each coordinate direction. This is the "normal" way to find the components of a vector in a coordinate system.

There is another way to form the components of a vector: find the two vectors along the coordinate axis directions that sum to our vector via the parallelogram law. For orthogonal cartesian coordinates these are the same components as those obtained from the perpendicular projection. In the case of a non-orthogonal coordinate frame, each projection is parallel to the other coordinate axis.

Suppose the vector ${\displaystyle {\boldsymbol {V}}}$ is in a plane spanned by the two vectors ${\displaystyle ({\boldsymbol {t}}_{1},{\boldsymbol {t}}_{2})}$ that are the tangent vectors to a point on a surface ${\displaystyle {\boldsymbol {x}}(\sigma ^{1},\sigma ^{2})}$. Because ${\displaystyle (\sigma ^{1},\sigma ^{2})}$ are not, in general, arclength coordinates on the surface, the tangent vectors ${\displaystyle ({\boldsymbol {t}}_{1},{\boldsymbol {t}}_{2})}$ are not, in general, unit tangent vectors.

### The Covariant components of a vector

We can easily form the covariant components of a vector ${\displaystyle {\boldsymbol {V}}}$ by noting formally that

${\displaystyle {\boldsymbol {V}}=a_{1}{\boldsymbol {\hat {t}}}_{1}+a_{2}{\boldsymbol {\hat {t}}}_{2}}$,

but we are also aware that the tangents are not unit tangents, so a factor of ${\displaystyle |{\boldsymbol {x}}_{\alpha }|={\sqrt {g_{\alpha \alpha }}}}$ may be moved from the unit tangents to the components

${\displaystyle {\boldsymbol {V}}={\frac {a_{1}}{\sqrt {g_{11}}}}{\boldsymbol {t}}_{1}+{\frac {a_{2}}{\sqrt {g_{22}}}}{\boldsymbol {t}}_{2}}$.

We can work this from the other direction by writing the dot product of ${\displaystyle {\boldsymbol {V}}}$ with the unit tangent vectors, respectively

${\displaystyle {\boldsymbol {V}}\cdot {\boldsymbol {\hat {t}}}_{\alpha }={\boldsymbol {V}}\cdot {\frac {{\boldsymbol {x}}_{\alpha }}{|{\boldsymbol {x}}_{\alpha }|}}\equiv {\frac {|V||{\boldsymbol {x}}_{\alpha }|\cos(\theta _{\alpha })}{|{\boldsymbol {x}}_{\alpha }|}}={\frac {|V||{\boldsymbol {x}}_{\alpha }|\cos(\theta _{\alpha })}{\sqrt {g_{\alpha \alpha }}}}}$. Here, the ${\displaystyle \theta _{\alpha }}$ is the angle between ${\displaystyle {\boldsymbol {V}}}$ and the respective ${\displaystyle {\boldsymbol {t}}_{\alpha }}$ direction.

When we compare this with the previous representation, we see that the covariant components ${\displaystyle a_{\alpha }}$ are

${\displaystyle a_{1}=({\boldsymbol {V}}\cdot {\boldsymbol {x}}_{1})\qquad }$ and ${\displaystyle \qquad a_{2}=({\boldsymbol {V}}\cdot {\boldsymbol {x}}_{2})}$.

The presence of the factor of ${\displaystyle {\sqrt {g_{\alpha \alpha }}}}$ reminds us that the coordinates are not arclength coordinates, otherwise, the tangent vectors would be unit tangent vectors.
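A small numeric sketch may make this concrete, assuming numpy; the plane tangent vectors and ${\displaystyle {\boldsymbol {V}}}$ below are illustrative, and the index-lowering relation ${\displaystyle a_{\alpha }=g_{\alpha \beta }a^{\beta }}$ used to invert the projection is the one derived later in this section:

```python
import numpy as np

# Covariant components a_alpha = V . x_alpha in a non-orthogonal plane frame.
x1 = np.array([2.0, 0.0])            # tangent x_1, so g_11 = 4
x2 = np.array([1.0, 1.5])            # tangent x_2, not orthogonal to x_1
V = np.array([1.0, 2.0])

a_cov = np.array([V @ x1, V @ x2])   # perpendicular (dot-product) projections
g = np.array([[x1 @ x1, x1 @ x2],
              [x2 @ x1, x2 @ x2]])   # metric tensor g_{alpha beta}

# Solving g a = a_cov undoes the index lowering; the resulting components
# reconstruct V by the parallelogram law.
a_con = np.linalg.solve(g, a_cov)
print(np.allclose(a_con[0] * x1 + a_con[1] * x2, V))
```

Note that in a non-orthogonal frame the dot-product components alone do not reconstruct ${\displaystyle {\boldsymbol {V}}}$; the metric is needed to pass between the two representations.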

"Details of the contravariant components of a vector ${\displaystyle {\boldsymbol {V}}}$. The contravariant components are the parallel projections of the vectors."

### The Contravariant Components of a vector

The contravariant components of a vector require a bit more work. We can begin the same way. If ${\displaystyle {\boldsymbol {V}}}$ is represented in terms of components in the respective tangent directions, this must be a linear combination of the tangent vectors

${\displaystyle {\boldsymbol {V}}=a^{1}{\boldsymbol {x}}_{1}+a^{2}{\boldsymbol {x}}_{2}=a^{1}{\sqrt {g_{11}}}{\boldsymbol {\hat {t}}}_{1}+a^{2}{\sqrt {g_{22}}}{\boldsymbol {\hat {t}}}_{2}}$, considering the geometry as a parallel projection to the opposing coordinate axis.

To find ${\displaystyle a^{1}}$ and ${\displaystyle a^{2}}$ we consider the cross products

${\displaystyle |{\boldsymbol {V}}\times {\boldsymbol {x}}_{1}|=|{\boldsymbol {V}}||{\boldsymbol {x}}_{1}|\sin(\theta _{1})}$

${\displaystyle |{\boldsymbol {V}}\times {\boldsymbol {x}}_{2}|=|{\boldsymbol {V}}||{\boldsymbol {x}}_{2}|\sin(\theta _{2})}$

${\displaystyle |{\boldsymbol {x}}_{1}\times {\boldsymbol {x}}_{2}|=|{\boldsymbol {x}}_{1}||{\boldsymbol {x}}_{2}|\sin(\theta _{1}+\theta _{2})={\sqrt {g_{11}}}{\sqrt {g_{22}}}\sin(\theta _{1}+\theta _{2})}$.

We note that

${\displaystyle \sin(\pi -\theta _{1}-\theta _{2})=\sin(\theta _{1}+\theta _{2})}$

and

${\displaystyle {\frac {|{\boldsymbol {V}}|}{\sin(\pi -\theta _{1}-\theta _{2})}}={\frac {|{\boldsymbol {V}}|}{\sin(\theta _{1}+\theta _{2})}}}$

Applying the law of sines we note that

${\displaystyle {\frac {|{\boldsymbol {V}}|}{\sin(\theta _{1}+\theta _{2})}}={\frac {a^{1}{\sqrt {g_{11}}}}{\sin(\theta _{2})}}={\frac {a^{2}{\sqrt {g_{22}}}}{\sin(\theta _{1})}}}$.

We may make use of the cross product relations to substitute for the sines in the law of sines to produce

${\displaystyle {\frac {|{\boldsymbol {V}}|{\sqrt {g_{11}}}{\sqrt {g_{22}}}}{|{\boldsymbol {x}}_{1}\times {\boldsymbol {x}}_{2}|}}={\frac {a^{1}{\sqrt {g_{11}}}|{\boldsymbol {V}}|{\sqrt {g_{22}}}}{|{\boldsymbol {V}}\times {\boldsymbol {x}}_{2}|}}\qquad }$ and ${\displaystyle \qquad {\frac {|{\boldsymbol {V}}|{\sqrt {g_{11}}}{\sqrt {g_{22}}}}{|{\boldsymbol {x}}_{1}\times {\boldsymbol {x}}_{2}|}}={\frac {a^{2}{\sqrt {g_{22}}}|{\boldsymbol {V}}|{\sqrt {g_{11}}}}{|{\boldsymbol {V}}\times {\boldsymbol {x}}_{1}|}}}$.

Solving for ${\displaystyle a^{1}}$ and ${\displaystyle a^{2}}$

${\displaystyle a^{1}={\frac {|{\boldsymbol {V}}\times {\boldsymbol {x}}_{2}|}{|{\boldsymbol {x}}_{1}\times {\boldsymbol {x}}_{2}|}}\qquad }$ and ${\displaystyle \qquad a^{2}={\frac {|{\boldsymbol {V}}\times {\boldsymbol {x}}_{1}|}{|{\boldsymbol {x}}_{1}\times {\boldsymbol {x}}_{2}|}}}$.

Here, ${\displaystyle |{\boldsymbol {x}}_{1}\times {\boldsymbol {x}}_{2}|}$ is the area of the parallelogram with sides given by the tangent vectors ${\displaystyle {\boldsymbol {x}}_{1}}$ and ${\displaystyle {\boldsymbol {x}}_{2}}$.

The quantity ${\displaystyle |{\boldsymbol {V}}||{\boldsymbol {x}}_{2}|\sin(\theta _{2})}$ is the area of the parallelogram with sides formed by the tangent vector ${\displaystyle {\boldsymbol {x}}_{2}}$ and the vector ${\displaystyle {\boldsymbol {V}}}$ and, similarly, the quantity ${\displaystyle |{\boldsymbol {V}}||{\boldsymbol {x}}_{1}|\sin(\theta _{1})}$ is the area of the parallelogram formed by the tangent vector ${\displaystyle {\boldsymbol {x}}_{1}}$ and the vector ${\displaystyle {\boldsymbol {V}}}$.

Thus, the contravariant components of ${\displaystyle {\boldsymbol {V}}}$ may be related to the ratios of parallelogram areas

${\displaystyle a^{1}={\frac {{\mbox{Parallelogram Area}}(|{\boldsymbol {V}}\times {\boldsymbol {x}}_{2}|)}{{\mbox{Parallelogram Area}}(|{\boldsymbol {x}}_{1}\times {\boldsymbol {x}}_{2}|)}}\qquad }$ and ${\displaystyle \qquad a^{2}={\frac {{\mbox{Parallelogram Area}}(|{\boldsymbol {V}}\times {\boldsymbol {x}}_{1}|)}{{\mbox{Parallelogram Area}}(|{\boldsymbol {x}}_{1}\times {\boldsymbol {x}}_{2}|)}}}$.
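In two dimensions the parallelogram areas are scalar cross products, so the contravariant components can be computed directly. This is a minimal sketch assuming numpy; the vectors are illustrative, and signed areas are used (rather than the magnitudes above) so that orientation is handled automatically:

```python
import numpy as np

# The 2-D analogue: parallelogram areas come from the scalar cross product.
def cross2(u, v):
    """Signed area of the parallelogram with sides u and v."""
    return u[0] * v[1] - u[1] * v[0]

x1 = np.array([2.0, 0.0])
x2 = np.array([1.0, 1.5])
V = np.array([1.0, 2.0])

# Contravariant components as ratios of parallelogram areas.
a1 = cross2(V, x2) / cross2(x1, x2)
a2 = cross2(x1, V) / cross2(x1, x2)

# The parallelogram law recovers V from its contravariant components.
print(np.allclose(a1 * x1 + a2 * x2, V))
```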

## Covariant and Contravariant transformation laws

We now have defined the covariant and contravariant components of an arbitrary vector ${\displaystyle {\boldsymbol {V}}}$ in a coordinate frame that is not, in general, orthogonal. The task now is to verify that these representations transform according to the respective covariant or contravariant transformation laws.

### Coordinate transformation of the covariant components of a vector

Above, we found that the representation of a vector ${\displaystyle {\boldsymbol {V}}}$ has covariant components in the nonorthogonal coordinate system ${\displaystyle {\boldsymbol {\sigma }}=(\sigma ^{1},\sigma ^{2})}$

${\displaystyle a_{\alpha }={\boldsymbol {x}}_{\alpha }\cdot {\boldsymbol {V}}}$ . Here ${\displaystyle {\boldsymbol {x}}_{\alpha }={\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{\alpha }}}}$ is the tangent to the coordinate curve in the ${\displaystyle \sigma ^{\alpha }}$ direction.

We consider transforming the covariant components to a new coordinate system ${\displaystyle {\boldsymbol {y}}=(y^{1},y^{2})}$ such that

${\displaystyle {\bar {a}}_{\beta }={\boldsymbol {\bar {x}}}_{\beta }\cdot {\boldsymbol {V}}={\boldsymbol {x}}_{\alpha }{\frac {\partial \sigma ^{\alpha }}{\partial y^{\beta }}}\cdot {\boldsymbol {V}}={\frac {\partial \sigma ^{\alpha }}{\partial y^{\beta }}}{\boldsymbol {x}}_{\alpha }\cdot {\boldsymbol {V}}={\frac {\partial \sigma ^{\alpha }}{\partial y^{\beta }}}a_{\alpha }}$.

Thus, ${\displaystyle {\bar {a}}_{\beta }={\frac {\partial \sigma ^{\alpha }}{\partial y^{\beta }}}a_{\alpha }\qquad }$ and ${\displaystyle \qquad {\frac {\partial y^{\beta }}{\partial \sigma ^{\alpha }}}{\bar {a}}_{\beta }=a_{\alpha }}$ in accordance with the covariant transformation law.

### Coordinate transformation of the contravariant components of a vector

Similarly, the representation of the vector ${\displaystyle {\boldsymbol {V}}}$ has contravariant components in the nonorthogonal coordinate system ${\displaystyle {\boldsymbol {y}}=(y^{1},y^{2})}$ with vectors tangent to the ${\displaystyle y^{\beta }}$ coordinate directions given by ${\displaystyle {\boldsymbol {\bar {x}}}_{\beta }}$ (indicated by the bar)

${\displaystyle {\boldsymbol {V}}={\bar {a}}^{\beta }{\boldsymbol {\bar {x}}}_{\beta }={\bar {a}}^{\beta }{\frac {\partial {\boldsymbol {x}}}{\partial y^{\beta }}}}$.

If we wanted to transform the tangent vectors into the ${\displaystyle {\boldsymbol {\sigma }}=(\sigma ^{1},\sigma ^{2})}$ coordinates, this would be the same as above, as the tangent behaves as a covariant vector

${\displaystyle {\boldsymbol {\bar {x}}}_{\beta }{\frac {\partial y^{\beta }}{\partial \sigma ^{\alpha }}}={\frac {\partial {\boldsymbol {x}}}{\partial y^{\beta }}}{\frac {\partial y^{\beta }}{\partial \sigma ^{\alpha }}}={\boldsymbol {x}}_{\alpha }}$.

In the ${\displaystyle {\boldsymbol {\sigma }}=(\sigma ^{1},\sigma ^{2})}$ coordinates we may write

${\displaystyle {\boldsymbol {V}}=a^{\alpha }{\boldsymbol {x}}_{\alpha }=a^{\alpha }\left({\frac {\partial y^{\beta }}{\partial \sigma ^{\alpha }}}\right){\boldsymbol {\bar {x}}}_{\beta }=\left(a^{\alpha }{\frac {\partial y^{\beta }}{\partial \sigma ^{\alpha }}}\right){\boldsymbol {\bar {x}}}_{\beta }={\bar {a}}^{\beta }{\boldsymbol {\bar {x}}}_{\beta }}$.

Comparing with the first expression for the contravariant representation of ${\displaystyle {\boldsymbol {V}}}$ we must conclude that

${\displaystyle a^{\alpha }{\frac {\partial y^{\beta }}{\partial \sigma ^{\alpha }}}={\bar {a}}^{\beta }\qquad }$ and ${\displaystyle \qquad a^{\alpha }={\frac {\partial \sigma ^{\alpha }}{\partial y^{\beta }}}{\bar {a}}^{\beta }}$

in accordance with the contravariant transformation law.

## Relating the covariant and the contravariant components of a vector

We may find the ${\displaystyle \alpha }$-th component of a vector ${\displaystyle {\boldsymbol {V}}}$ by taking the inner product with the tangent vector to the coordinate curve in the ${\displaystyle \alpha }$-th direction

${\displaystyle a_{\alpha }={\boldsymbol {x}}_{\alpha }\cdot {\boldsymbol {V}}={\boldsymbol {x}}_{\alpha }\cdot \left(a^{\beta }{\boldsymbol {x}}_{\beta }\right)=\left({\boldsymbol {x}}_{\alpha }\cdot {\boldsymbol {x}}_{\beta }\right)a^{\beta }=g_{\alpha \beta }a^{\beta }}$.

Here we recognize the metric tensor ${\displaystyle g_{\alpha \beta }=\left({\boldsymbol {x}}_{\alpha }\cdot {\boldsymbol {x}}_{\beta }\right)}$.

The metric tensor may be thought of as having the power to lower an index

${\displaystyle a_{\alpha }=g_{\alpha \beta }a^{\beta }}$.

We identify a matrix with ${\displaystyle g_{\alpha \beta }}$ that must be invertible if coordinate transformations are to be invertible, hence, there should be a ${\displaystyle \left(g_{\alpha \beta }\right)^{-1}\equiv g^{\alpha \beta }}$. Formally, this inverse should have the property that

${\displaystyle g^{\alpha \beta }a_{\alpha }=a^{\beta }\qquad }$ and ${\displaystyle \qquad g_{\beta \gamma }g^{\alpha \beta }a_{\alpha }=a_{\gamma }\qquad }$ implying that ${\displaystyle \qquad g_{\beta \gamma }g^{\alpha \beta }\equiv {\delta _{\gamma }}^{\alpha }={\begin{cases}1&\gamma =\alpha \\0&\gamma \neq \alpha \end{cases}}}$ which is called the Kronecker delta.

We can find the components of ${\displaystyle g^{\alpha \beta }}$ by formally considering the inverse of a 2x2 symmetric matrix ${\displaystyle A}$ (the metric tensor is symmetric).

Given ${\displaystyle A={\begin{bmatrix}a&c\\c&b\end{bmatrix}}}$, we know that ${\displaystyle A^{-1}={\begin{bmatrix}{\frac {b}{\det A}}&-{\frac {c}{\det A}}\\-{\frac {c}{\det A}}&{\frac {a}{\det A}}\end{bmatrix}}}$ where ${\displaystyle \det A=ab-c^{2}}$.

Applying these results to the metric tensor, we define ${\displaystyle g\equiv \det(g_{\alpha \beta })=g_{11}g_{22}-g_{12}^{2}}$ and ${\displaystyle {\frac {1}{g}}\equiv \det(g^{\alpha \beta })}$. Thus we may write

${\displaystyle g^{11}={\frac {g_{22}}{g}},\qquad g^{12}=g^{21}={\frac {-g_{12}}{g}},\qquad }$ and ${\displaystyle \qquad g^{22}={\frac {g_{11}}{g}}}$.
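These closed-form components of ${\displaystyle g^{\alpha \beta }}$ can be checked against a direct matrix inverse. A short sketch assuming numpy, with illustrative tangent vectors:

```python
import numpy as np

# Check the closed-form inverse of a 2x2 metric tensor.
x1 = np.array([2.0, 0.0, 1.0])
x2 = np.array([1.0, 1.5, 0.0])
g = np.array([[x1 @ x1, x1 @ x2],
              [x2 @ x1, x2 @ x2]])          # metric tensor g_{alpha beta}

det_g = g[0, 0] * g[1, 1] - g[0, 1]**2      # g = g_11 g_22 - g_12^2
g_inv = np.array([[ g[1, 1], -g[0, 1]],
                  [-g[0, 1],  g[0, 0]]]) / det_g

# g^{alpha beta} g_{beta gamma} reduces to the Kronecker delta.
print(np.allclose(g_inv @ g, np.eye(2)))
```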

"The tangent vectors ${\displaystyle {\boldsymbol {x}}_{1}}$ and ${\displaystyle {\boldsymbol {x}}_{2}}$ and the differential distances in the ${\displaystyle \sigma ^{1}}$ and ${\displaystyle \sigma ^{2}}$ directions given by ${\displaystyle d{\boldsymbol {x}}^{(1)}}$ and ${\displaystyle d{\boldsymbol {x}}^{(2)}}$, respectively."

## Surface area elements

In integral calculus we all learn that, under a change of coordinates, a differential area, volume, or hypervolume becomes the product of the increments in the transformed coordinates multiplied by the Jacobian of the transformation. In this section we see how our tensor representation relates to this classical representation.

We consider the tangents, ${\displaystyle {\boldsymbol {x}}_{1}}$ and ${\displaystyle {\boldsymbol {x}}_{2}}$, to a point on a surface ${\displaystyle {\boldsymbol {x}}(\sigma ^{1},\sigma ^{2})}$, where ${\displaystyle (\sigma ^{1},\sigma ^{2})}$ are not in general arclength coordinates. We further consider an area element ${\displaystyle dS}$ formed by the increments ${\displaystyle d{\boldsymbol {x}}^{(1)}}$ and ${\displaystyle d{\boldsymbol {x}}^{(2)}}$ along the tangent directions; this area element is given by their cross product. Here the superscripts ${\displaystyle (1)}$ and ${\displaystyle (2)}$ are labels rather than indexes.

We represent these differential distances by

${\displaystyle d{\boldsymbol {x}}^{(\alpha )}\equiv {\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{\alpha }}}d\sigma ^{\alpha }}$ where ${\displaystyle \alpha =1,2}$.

The differential area ${\displaystyle dS}$ is given by the cross product of these differential distances

${\displaystyle dS\equiv |d{\boldsymbol {x}}^{(1)}\times d{\boldsymbol {x}}^{(2)}|=\left|\left({\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{1}}}d\sigma ^{1}\right)\times \left({\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{2}}}d\sigma ^{2}\right)\right|}$

${\displaystyle \quad \qquad =\left|{\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{1}}}\times {\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{2}}}\right|d\sigma ^{1}d\sigma ^{2}=\left|{\boldsymbol {x}}_{1}\times {\boldsymbol {x}}_{2}\right|d\sigma ^{1}d\sigma ^{2}}$ .

We consider the area of the parallelogram spanned by the tangent vectors, represented by the cross product of the tangent vectors, where ${\displaystyle \theta }$ is the angle between the vectors, and noting ${\displaystyle \sin ^{2}(\theta )=1-\cos ^{2}(\theta )}$

${\displaystyle \left|{\boldsymbol {x}}_{1}\times {\boldsymbol {x}}_{2}\right|^{2}=|{\boldsymbol {x}}_{1}|^{2}|{\boldsymbol {x}}_{2}|^{2}\sin ^{2}(\theta )=|{\boldsymbol {x}}_{1}|^{2}|{\boldsymbol {x}}_{2}|^{2}\left(1-\cos ^{2}(\theta )\right)=|{\boldsymbol {x}}_{1}|^{2}|{\boldsymbol {x}}_{2}|^{2}-|{\boldsymbol {x}}_{1}|^{2}|{\boldsymbol {x}}_{2}|^{2}\cos ^{2}(\theta )}$

${\displaystyle \quad \qquad \qquad =({\boldsymbol {x}}_{1}\cdot {\boldsymbol {x}}_{1})({\boldsymbol {x}}_{2}\cdot {\boldsymbol {x}}_{2})-({\boldsymbol {x}}_{1}\cdot {\boldsymbol {x}}_{2})^{2}\equiv g_{11}g_{22}-g_{12}^{2}=\det(g_{\alpha \beta })=g}$.

Thus, the relationship between the determinant of the metric tensor and the Jacobian of the transformation from the ${\displaystyle {\boldsymbol {x}}}$ coordinates to the ${\displaystyle {\boldsymbol {\sigma }}}$ coordinates is established as

${\displaystyle |J|\equiv |{\boldsymbol {x}}_{1}\times {\boldsymbol {x}}_{2}|={\sqrt {g}}}$. Thus,

${\displaystyle dS=|J|d\sigma ^{1}d\sigma ^{2}={\sqrt {g}}d\sigma ^{1}d\sigma ^{2}}$.

Traditionally, the Jacobian of the transformation would be represented by

${\displaystyle |J|=\det \left|{\begin{bmatrix}{\hat {e}}_{1}&{\hat {e}}_{2}&{\hat {e}}_{3}\\{\frac {\partial x_{1}}{\partial \sigma ^{1}}}&{\frac {\partial x_{2}}{\partial \sigma ^{1}}}&{\frac {\partial x_{3}}{\partial \sigma ^{1}}}\\{\frac {\partial x_{1}}{\partial \sigma ^{2}}}&{\frac {\partial x_{2}}{\partial \sigma ^{2}}}&{\frac {\partial x_{3}}{\partial \sigma ^{2}}}\end{bmatrix}}\right|}$

which is equivalent to the cross product representation above.
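The area element ${\displaystyle dS={\sqrt {g}}\,d\sigma ^{1}d\sigma ^{2}}$ can be tested by recovering the area of a sphere, for which ${\displaystyle {\sqrt {g}}=R^{2}\sin \theta }$. A numerical sketch assuming numpy; the radius and grid size are arbitrary choices:

```python
import numpy as np

# Area of a sphere of radius R from dS = sqrt(g) dsigma^1 dsigma^2, with
# sigma^1 = theta, sigma^2 = phi and sqrt(g) = R^2 sin(theta).
R = 3.0
n = 1000
theta = (np.arange(n) + 0.5) * np.pi / n   # midpoint rule in theta
sqrt_g = R**2 * np.sin(theta)

# Integrate sqrt(g) over theta; the phi integral contributes a factor 2*pi.
area = np.sum(sqrt_g) * (np.pi / n) * 2 * np.pi
print(np.isclose(area, 4 * np.pi * R**2, rtol=1e-5))
```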

### The unit normal vector to a surface

A surface normal ${\displaystyle {\boldsymbol {\hat {n}}}}$ is the normal direction to the tangent plane at a given point on a surface.

Hence, in light of the previous section,

${\displaystyle {\boldsymbol {\hat {n}}}={\frac {{\boldsymbol {x}}_{1}\times {\boldsymbol {x}}_{2}}{|{\boldsymbol {x}}_{1}\times {\boldsymbol {x}}_{2}|}}={\frac {1}{\sqrt {g}}}({\boldsymbol {x}}_{1}\times {\boldsymbol {x}}_{2})}$ .
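For the sphere, this formula should return the outward radial direction. A finite-difference sketch assuming numpy; the evaluation point and step size are arbitrary:

```python
import numpy as np

# Unit normal n = (x_1 x x_2) / |x_1 x x_2| for a sphere of radius R.
R = 2.0
theta, phi = 0.8, 1.1

def x(th, ph):
    return R * np.array([np.sin(th) * np.cos(ph),
                         np.sin(th) * np.sin(ph),
                         np.cos(th)])

h = 1e-6
x1 = (x(theta + h, phi) - x(theta - h, phi)) / (2 * h)   # tangent x_1
x2 = (x(theta, phi + h) - x(theta, phi - h)) / (2 * h)   # tangent x_2

n_vec = np.cross(x1, x2)
n_hat = n_vec / np.linalg.norm(n_vec)
# The unit normal coincides with the outward radial direction x / R.
print(np.allclose(n_hat, x(theta, phi) / R, atol=1e-6))
```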

"A curve parameterized by nonorthogonal surface coordinates ${\displaystyle {\boldsymbol {x}}(\sigma ^{1},\sigma ^{2})}$ has a unit tangent ${\displaystyle {\boldsymbol {\hat {t}}}}$, unit principal normal ${\displaystyle {\boldsymbol {\hat {p}}}}$, unit binormal ${\displaystyle {\boldsymbol {\hat {b}}}}$. The tangent to the curve is also tangent to the surface, so the unit surface normal vector ${\displaystyle {\boldsymbol {\hat {n}}}}$ lies in the plane spanned by ${\displaystyle {\boldsymbol {\hat {p}}}}$ and ${\displaystyle {\boldsymbol {\hat {b}}}}$."

### The unit normal vector to a curve on a surface

A curve on a surface has its unit tangent, principal normal, and binormal vectors ${\displaystyle ({\boldsymbol {\hat {t}}},{\boldsymbol {\hat {p}}},{\boldsymbol {\hat {b}}})}$. There is no reason that the unit normal to the surface ${\displaystyle {\boldsymbol {\hat {n}}}}$ along the curve be co-aligned with the unit principal normal ${\displaystyle {\boldsymbol {\hat {p}}}}$ or the unit binormal vector ${\displaystyle {\boldsymbol {\hat {b}}}}$. What we do know is that because the tangent to a curve on a surface is also a tangent to the surface, the surface normal ${\displaystyle {\boldsymbol {\hat {n}}}}$ is orthogonal to ${\displaystyle {\boldsymbol {\hat {t}}}}$. Thus the unit tangent vector ${\displaystyle {\boldsymbol {\hat {t}}}}$ is orthogonal to the ${\displaystyle {\boldsymbol {\hat {p}}}}$, ${\displaystyle {\boldsymbol {\hat {b}}}}$, and ${\displaystyle {\boldsymbol {\hat {n}}}}$ vectors.

Let ${\displaystyle \theta }$ be the angle between the unit principal normal ${\displaystyle {\boldsymbol {\hat {p}}}}$ and the unit surface normal vector ${\displaystyle {\boldsymbol {\hat {n}}}}$, then

${\displaystyle {\boldsymbol {\hat {p}}}\cdot {\boldsymbol {\hat {n}}}=\cos(\theta )}$.

We recall that ${\displaystyle {\boldsymbol {\hat {p}}}={\frac {\boldsymbol {p}}{|{\boldsymbol {p}}|}}={\frac {\boldsymbol {\ddot {x}}}{|{\boldsymbol {\ddot {x}}}|}}={\frac {1}{\kappa (s)}}{\boldsymbol {\ddot {x}}}}$. It follows, therefore that

${\displaystyle \kappa _{n}(s)\equiv \kappa (s)\cos(\theta )={\boldsymbol {\ddot {x}}}\cdot {\boldsymbol {\hat {n}}}}$.

This quantity ${\displaystyle \kappa _{n}(s)}$ is the "normal curvature" of the curve, this being the curvature of the curve in a plane passing through the surface normal direction along the curve. Such a plane section is called the normal section along the curve by some authors.

## The Second Fundamental Form of Differential Geometry

The second derivatives of a curve are related to its curvature. The second derivatives of curves on a surface allow us to describe the curvature of a surface.

We continue with our notation of ${\displaystyle {\boldsymbol {x}}(\sigma ^{1},\sigma ^{2})}$ representing a surface, with a curve on the surface given by ${\displaystyle \sigma ^{\alpha }=\sigma ^{\alpha }(s)}$. The tangent vectors in the ${\displaystyle \sigma ^{\alpha }}$ direction are given by

${\displaystyle {\boldsymbol {x}}_{\alpha }\equiv {\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{\alpha }}}}$

which, in arclength coordinates is

${\displaystyle {\frac {d{\boldsymbol {x}}(s)}{ds}}={\frac {\partial {\boldsymbol {x}}}{\partial \sigma ^{\alpha }}}{\frac {d\sigma ^{\alpha }}{ds}}}$

The matrix of second derivatives is given by

${\displaystyle {\boldsymbol {x}}_{\alpha \beta }\equiv {\frac {\partial ^{2}{\boldsymbol {x}}}{\partial \sigma ^{\alpha }\partial \sigma ^{\beta }}}={\boldsymbol {x}}_{\beta \alpha }}$

Allowing us to write

${\displaystyle {\boldsymbol {\dot {x}}}={\boldsymbol {x}}_{\alpha }{\dot {\sigma }}^{\alpha }}$

${\displaystyle {\boldsymbol {\ddot {x}}}={\boldsymbol {\dot {x}}}_{\alpha }{\dot {\sigma }}^{\alpha }+{\boldsymbol {x}}_{\alpha }{\ddot {\sigma }}^{\alpha }}$

We note that

${\displaystyle {\boldsymbol {\dot {x}}}_{\alpha }={\frac {\partial {\boldsymbol {x}}_{\alpha }}{\partial \sigma ^{\beta }}}{\frac {d\sigma ^{\beta }}{ds}}={\boldsymbol {x}}_{\alpha \beta }{\dot {\sigma }}^{\beta }}$.

Substituting this in the expression above, we have

${\displaystyle {\boldsymbol {\ddot {x}}}={\boldsymbol {x}}_{\alpha \beta }{\dot {\sigma }}^{\alpha }{\dot {\sigma }}^{\beta }+{\boldsymbol {x}}_{\alpha }{\ddot {\sigma }}^{\alpha }}$.

If we take the dot product of the surface unit normal vector ${\displaystyle {\boldsymbol {\hat {n}}}}$ with both sides of this result we have

${\displaystyle {\boldsymbol {\hat {n}}}\cdot {\boldsymbol {\ddot {x}}}=({\boldsymbol {\hat {n}}}\cdot {\boldsymbol {x}}_{\alpha \beta }){\dot {\sigma }}^{\alpha }{\dot {\sigma }}^{\beta }+({\boldsymbol {\hat {n}}}\cdot {\boldsymbol {x}}_{\alpha }){\ddot {\sigma }}^{\alpha }}$.

Because ${\displaystyle {\boldsymbol {\hat {n}}}}$ is orthogonal to the tangent plane ${\displaystyle ({\boldsymbol {\hat {n}}}\cdot {\boldsymbol {x}}_{\alpha })=0}$ in the second term on the right yielding

${\displaystyle {\boldsymbol {\hat {n}}}\cdot {\boldsymbol {\ddot {x}}}=({\boldsymbol {\hat {n}}}\cdot {\boldsymbol {x}}_{\alpha \beta }){\dot {\sigma }}^{\alpha }{\dot {\sigma }}^{\beta }}$.

Alternatively, we could take the ${\displaystyle {\frac {\partial }{\partial \sigma ^{\beta }}}}$ of ${\displaystyle {\boldsymbol {\hat {n}}}\cdot {\boldsymbol {x}}_{\alpha }=0}$

${\displaystyle {\frac {\partial {\boldsymbol {\hat {n}}}}{\partial \sigma ^{\beta }}}\cdot {\boldsymbol {x}}_{\alpha }+{\boldsymbol {\hat {n}}}\cdot {\boldsymbol {x}}_{\alpha \beta }=0}$ .

We define the second fundamental tensor ${\displaystyle b_{\alpha \beta }}$ as

${\displaystyle b_{\alpha \beta }\equiv {\boldsymbol {\hat {n}}}\cdot {\boldsymbol {x}}_{\alpha \beta }=-{\boldsymbol {\hat {n}}}_{\beta }\cdot {\boldsymbol {x}}_{\alpha }}$.

This suggests that in arclength variables

${\displaystyle b_{\alpha \beta }{\dot {\sigma }}^{\alpha }{\dot {\sigma }}^{\beta }=(-{\boldsymbol {\hat {n}}}_{\beta }\cdot {\boldsymbol {x}}_{\alpha }){\dot {\sigma }}^{\alpha }{\dot {\sigma }}^{\beta }=-{\frac {d{\boldsymbol {\hat {n}}}}{ds}}\cdot {\frac {d{\boldsymbol {x}}}{ds}}}$.

Finally, we may infer the Second Fundamental Form of Differential Geometry

${\displaystyle b_{\alpha \beta }d\sigma ^{\alpha }d\sigma ^{\beta }=-d{\boldsymbol {x}}\cdot d{\boldsymbol {\hat {n}}}}$

where the right-hand side represents the dot product of incremental changes in position with incremental changes in the surface normal vector.
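The second fundamental tensor can be evaluated numerically from ${\displaystyle b_{\alpha \beta }={\boldsymbol {\hat {n}}}\cdot {\boldsymbol {x}}_{\alpha \beta }}$. The sketch below (assuming numpy; the finite-difference step and evaluation point are arbitrary) does this for a sphere, for which ${\displaystyle b_{\alpha \beta }=-g_{\alpha \beta }/R}$ when the outward normal is used:

```python
import numpy as np

# Second fundamental tensor of a sphere via finite differences.
R = 2.0

def x(th, ph):
    return R * np.array([np.sin(th) * np.cos(ph),
                         np.sin(th) * np.sin(ph),
                         np.cos(th)])

def second_fundamental(th, ph, h=1e-4):
    # First partials (tangents) and the outward unit normal.
    x1 = (x(th + h, ph) - x(th - h, ph)) / (2 * h)
    x2 = (x(th, ph + h) - x(th, ph - h)) / (2 * h)
    n_hat = np.cross(x1, x2)
    n_hat /= np.linalg.norm(n_hat)
    # Second partials x_{alpha beta}.
    x11 = (x(th + h, ph) - 2 * x(th, ph) + x(th - h, ph)) / h**2
    x22 = (x(th, ph + h) - 2 * x(th, ph) + x(th, ph - h)) / h**2
    x12 = (x(th + h, ph + h) - x(th + h, ph - h)
           - x(th - h, ph + h) + x(th - h, ph - h)) / (4 * h**2)
    b = np.array([[n_hat @ x11, n_hat @ x12],
                  [n_hat @ x12, n_hat @ x22]])
    g = np.array([[x1 @ x1, x1 @ x2],
                  [x2 @ x1, x2 @ x2]])
    return b, g

b, g = second_fundamental(0.9, 0.4)
# With the outward normal, b_{alpha beta} = -g_{alpha beta}/R for the sphere.
print(np.allclose(b, -g / R, atol=1e-5))
```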

## Normal Curvature

In the light of the results for the second fundamental tensor and the second fundamental form, we can return to results derived above for the normal curvature ${\displaystyle \kappa _{n}}$

${\displaystyle \kappa _{n}(s)=\kappa (s)\cos(\theta )={\boldsymbol {\ddot {x}}}\cdot {\boldsymbol {\hat {n}}}=({\boldsymbol {\hat {n}}}\cdot {\boldsymbol {x}}_{\alpha \beta }){\dot {\sigma }}^{\alpha }{\dot {\sigma }}^{\beta }=b_{\alpha \beta }{\dot {\sigma }}^{\alpha }{\dot {\sigma }}^{\beta }}$.

Introducing another variable ${\displaystyle t(s)}$ we may write

${\displaystyle \kappa (s)\cos(\theta )=b_{\alpha \beta }{\dot {\sigma }}^{\alpha }{\dot {\sigma }}^{\beta }=b_{\alpha \beta }{\frac {d\sigma ^{\alpha }}{dt}}{\frac {dt}{ds}}{\frac {d\sigma ^{\beta }}{dt}}{\frac {dt}{ds}}={\frac {b_{\alpha \beta }{\frac {d\sigma ^{\alpha }}{dt}}{\frac {d\sigma ^{\beta }}{dt}}}{\left({\frac {ds}{dt}}\right)^{2}}}}$

implying that

${\displaystyle \kappa (s)\cos(\theta )={\frac {b_{\alpha \beta }d\sigma ^{\alpha }d\sigma ^{\beta }}{\left(ds\right)^{2}}}={\frac {b_{\alpha \beta }d\sigma ^{\alpha }d\sigma ^{\beta }}{g_{\alpha \beta }d\sigma ^{\alpha }d\sigma ^{\beta }}}}$.

where we have substituted for ${\displaystyle (ds)^{2}}$ from the first fundamental form. Thus, the normal curvature ${\displaystyle \kappa _{n}(s)}$ is the second fundamental form divided by the first fundamental form. We can also define a normal curvature vector ${\displaystyle {\boldsymbol {k}}_{n}}$

${\displaystyle {\boldsymbol {k}}_{n}\equiv \kappa _{n}{\boldsymbol {\hat {n}}}={\frac {b_{\alpha \beta }d\sigma ^{\alpha }d\sigma ^{\beta }}{g_{\alpha \beta }d\sigma ^{\alpha }d\sigma ^{\beta }}}{\boldsymbol {\hat {n}}}}$

Note that the numerator and denominator of the normal curvature are evaluated separately: the repeated indexes are summed within the numerator and within the denominator, with no summation linking the two.
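For a sphere this ratio of forms is independent of direction. A minimal sketch assuming numpy, using the analytic ${\displaystyle g_{\alpha \beta }}$ and ${\displaystyle b_{\alpha \beta }}$ of the sphere (with the outward normal, so the normal curvature is ${\displaystyle -1/R}$ in every direction):

```python
import numpy as np

# Normal curvature on a sphere of radius R, evaluated at colatitude theta.
R = 2.0
theta = 0.9
g = np.diag([R**2, (R * np.sin(theta))**2])  # first fundamental tensor
b = -g / R                                   # second fundamental tensor (outward normal)

rng = np.random.default_rng(1)
dirs = rng.normal(size=(5, 2))               # arbitrary directions d sigma^alpha
# kappa_n = (second form) / (first form), each quadratic form summed separately.
kappas = [(d @ b @ d) / (d @ g @ d) for d in dirs]
print(np.allclose(kappas, -1 / R))
```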

### The extrema of Normal Curvature---the Principal Curvatures and the Principal Curvature directions

We want to find the maximum and minimum curvatures at a particular point. To do this, we take the expression for the normal curvature, differentiate it with respect to local coordinates, and set the result equal to zero.

If ${\displaystyle l^{\alpha }}$ are local coordinates in the neighborhood of a given location, the result for the normal curvature may be rearranged into the relation

${\displaystyle (b_{\alpha \beta }-\kappa _{n}g_{\alpha \beta })l^{\alpha }l^{\beta }\equiv a_{\alpha \beta }l^{\alpha }l^{\beta }=0}$

Differentiating with respect to ${\displaystyle l^{\gamma }}$ we have

${\displaystyle {\frac {\partial }{\partial l^{\gamma }}}(a_{\alpha \beta }l^{\alpha }l^{\beta })={\frac {\partial a_{\alpha \beta }}{\partial l^{\gamma }}}l^{\alpha }l^{\beta }+a_{\alpha \beta }{\frac {\partial l^{\alpha }}{\partial l^{\gamma }}}l^{\beta }+a_{\alpha \beta }{\frac {\partial l^{\beta }}{\partial l^{\gamma }}}l^{\alpha }}$.

Because the first and second fundamental tensors take constant values at any given point on a surface, the derivatives of ${\displaystyle a_{\alpha \beta }}$ vanish in the limit as the coordinates ${\displaystyle l^{\alpha }\rightarrow 0}$. We note also that the derivatives ${\displaystyle {\frac {\partial l^{\alpha }}{\partial l^{\gamma }}}\equiv \delta _{\gamma }^{\alpha }}$ and ${\displaystyle {\frac {\partial l^{\beta }}{\partial l^{\gamma }}}\equiv \delta _{\gamma }^{\beta }}$, yielding

${\displaystyle a_{\gamma \beta }l^{\beta }+a_{\alpha \gamma }l^{\alpha }=2a_{\gamma \alpha }l^{\alpha }=0}$

where the ${\displaystyle \beta }$ index was changed to ${\displaystyle \alpha }$ in the first term, and where ${\displaystyle a_{\gamma \alpha }}$ is recognized as being symmetric because ${\displaystyle b_{\alpha \beta }}$ and ${\displaystyle g_{\alpha \beta }}$ are symmetric. Finally we may write

${\displaystyle (b_{\alpha \gamma }-\kappa _{n}g_{\alpha \gamma })l^{\alpha }=0}$.

If we multiply by ${\displaystyle g^{\gamma \beta }}$

${\displaystyle (g^{\gamma \beta }b_{\alpha \gamma }-\kappa _{n}g^{\gamma \beta }g_{\alpha \gamma })l^{\alpha }=0}$

this has the effect of raising one of the indexes of the second fundamental tensor ${\displaystyle g^{\gamma \beta }b_{\alpha \gamma }={b_{\alpha }}^{\beta }}$ and of turning the metric tensor into a Kronecker delta ${\displaystyle g^{\gamma \beta }g_{\alpha \gamma }={\delta _{\alpha }}^{\beta }}$, resulting in

${\displaystyle ({b_{\alpha }}^{\beta }-\kappa _{n}{\delta _{\alpha }}^{\beta })l^{\alpha }=0}$

which is an eigenvalue problem. Nontrivial solutions exist only when

${\displaystyle \det({b_{\alpha }}^{\beta }-\kappa _{n}{\delta _{\alpha }}^{\beta })=0}$

with ${\displaystyle \kappa _{n}}$ being the eigenvalue. Because ${\displaystyle {b_{\alpha }}^{\beta }}$ is symmetric, the eigenvectors are orthogonal, representing two orthogonal principal directions of curvature, with the eigenvalues ${\displaystyle \kappa _{n}=\kappa _{1}}$ and ${\displaystyle \kappa _{n}=\kappa _{2}}$ representing the principal curvatures of the surface.
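Numerically, the principal curvatures are simply the eigenvalues of the mixed tensor ${\displaystyle {b_{\alpha }}^{\beta }=g^{\beta \gamma }b_{\alpha \gamma }}$. A minimal sketch (assuming Python with numpy; the cylinder values below are worked out by hand purely as an illustration):

```python
import numpy as np

# illustrative values, worked out by hand for a cylinder of radius 2,
# x(u, v) = (2 cos u, 2 sin u, v), with the outward unit normal:
g = np.array([[4.0, 0.0], [0.0, 1.0]])   # first fundamental tensor
b = np.array([[-2.0, 0.0], [0.0, 0.0]])  # second fundamental tensor

# mixed tensor b_alpha^beta = g^{beta gamma} b_{alpha gamma}
b_mixed = np.linalg.inv(g) @ b           # order is immaterial here: g is diagonal
kappas = np.linalg.eigvals(b_mixed)      # the principal curvatures

print(np.sort(kappas))                   # kappa_1 = -1/2, kappa_2 = 0
```

The cylinder has one flat direction (${\displaystyle \kappa =0}$ along the axis) and one curved direction (${\displaystyle \kappa =-1/2}$ with the outward normal), as the eigenvalues confirm.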

We can construct the solution to this problem with a few definitions. First, we represent the determinant of the second fundamental tensor as ${\displaystyle b\equiv \det(b_{\alpha \gamma })}$. The determinant of the contravariant form of the metric tensor is ${\displaystyle \det(g^{\beta \gamma })={\frac {1}{g}}}$. Hence ${\displaystyle \det({b_{\alpha }}^{\beta })=\det(g^{\beta \gamma })\det(b_{\alpha \gamma })={\frac {b}{g}}}$. We also recognize that ${\displaystyle \det({b_{\alpha }}^{\beta })=({b_{1}}^{1})({b_{2}}^{2})-({b_{1}}^{2})^{2}}$.

Furthermore, we know that the determinant of a matrix is the product of its eigenvalues, so ${\displaystyle {\frac {b}{g}}=\kappa _{1}\kappa _{2}}$. Writing the eigenvalue problem as the determinant

${\displaystyle \det({b_{\alpha }}^{\beta }-\kappa _{n}{\delta _{\alpha }}^{\beta })=\det {\begin{bmatrix}{b_{1}}^{1}-\kappa _{n}&{b_{1}}^{2}\\{b_{2}}^{1}&{b_{2}}^{2}-\kappa _{n}\end{bmatrix}}=0}$.

Multiplying this out, we obtain the characteristic equation of the eigenvalue problem,

${\displaystyle (b_{1}^{1}-\kappa _{n})(b_{2}^{2}-\kappa _{n})-(b_{1}^{2})^{2}=0}$

${\displaystyle (b_{1}^{1})(b_{2}^{2})-(b_{1}^{1}+b_{2}^{2})\kappa _{n}+(\kappa _{n})^{2}-(b_{1}^{2})^{2}=0}$

${\displaystyle (\kappa _{n})^{2}-(b_{1}^{1}+b_{2}^{2})\kappa _{n}+(b_{1}^{1})(b_{2}^{2})-(b_{1}^{2})^{2}=0}$.

Here we recognize that the last two terms are the determinant of ${\displaystyle {b_{\alpha }}^{\beta }}$, allowing us to rewrite the characteristic equation as

${\displaystyle (\kappa _{n})^{2}-({b_{1}}^{1}+{b_{2}}^{2})\kappa _{n}+{\frac {b}{g}}=0}$.

Another fact that we can make use of is that the trace of a matrix is the sum of its eigenvalues, so that ${\displaystyle {b_{1}}^{1}+{b_{2}}^{2}=\kappa _{1}+\kappa _{2}}$, and that the determinant of a matrix is the product of its eigenvalues, so ${\displaystyle \det({b_{\alpha }}^{\beta })=\kappa _{1}\kappa _{2}}$.

Given this information, we can construct the following equivalent forms of the characteristic equation

${\displaystyle \kappa _{n}^{2}-(\kappa _{1}+\kappa _{2})\kappa _{n}+\kappa _{1}\kappa _{2}=0}$

${\displaystyle \kappa _{n}^{2}-(\kappa _{1}+\kappa _{2})\kappa _{n}+{\frac {b}{g}}=0}$.

### Gaussian Curvature, Mean Curvature

The product of the principal curvatures is called the Gaussian curvature ${\displaystyle K\equiv \kappa _{1}\kappa _{2}}$.

The average of the principal curvatures is called the mean curvature ${\displaystyle H={\frac {1}{2}}(\kappa _{1}+\kappa _{2})}$.

This allows us to write an additional form of the characteristic equation of the principal curvatures

${\displaystyle \kappa _{n}^{2}-2H\kappa _{n}+K=0}$

where ${\displaystyle K}$ is the Gaussian curvature and ${\displaystyle H}$ is the mean curvature.
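Both invariants follow directly from the fundamental tensors: ${\displaystyle K={\frac {b}{g}}}$ and ${\displaystyle H={\frac {1}{2}}\mathrm {tr} ({b_{\alpha }}^{\beta })}$. A sketch (assuming Python with numpy; the sphere relation ${\displaystyle b_{\alpha \beta }=g_{\alpha \beta }/R}$ is an illustrative choice):

```python
import numpy as np

R = 3.0
# illustrative values: on a sphere of radius R the second fundamental tensor
# is proportional to the metric, b_ab = g_ab / R (inward normal convention)
g = np.array([[R**2, 0.0], [0.0, 0.25 * R**2]])   # metric at some point
b = g / R

K = np.linalg.det(b) / np.linalg.det(g)            # Gaussian curvature K = b/g
H = 0.5 * np.trace(np.linalg.inv(g) @ b)           # mean curvature H
print(K, H)                                        # 1/R^2 and 1/R
```

For a sphere of radius ${\displaystyle R}$ this recovers ${\displaystyle K=1/R^{2}}$ and ${\displaystyle H=1/R}$ independent of the particular coordinates chosen.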

Because ${\displaystyle b_{\alpha }^{\beta }}$ is symmetric, the eigenvectors of this matrix are orthogonal. These directions are called the principal directions.

### Umbilics and elliptic, parabolic, and hyperbolic points of a surface

The sign of the curvature of a surface is determined from an examination of the unit surface normal vector ${\displaystyle {\boldsymbol {\hat {n}}}}$ and the principal normal vector ${\displaystyle {\boldsymbol {\hat {p}}}}$. Both of these vectors lie in the plane perpendicular to the tangent to the curve in question. Therefore the curvature of the surface is given by

${\displaystyle {\boldsymbol {\hat {n}}}\cdot {\boldsymbol {\hat {p}}}={\begin{cases}>0&{\mbox{positive}}\\<0&{\mbox{negative}}\end{cases}}.}$

As we have seen before, the normal curvature is given by the ratio of the second and first fundamental forms

${\displaystyle {\boldsymbol {k}}_{n}\equiv \kappa _{n}{\boldsymbol {\hat {n}}}={\frac {b_{\alpha \beta }d\sigma ^{\alpha }d\sigma ^{\beta }}{g_{\alpha \beta }d\sigma ^{\alpha }d\sigma ^{\beta }}}{\boldsymbol {\hat {n}}}}$.

Here, because the first fundamental form is always positive, we see that the sign of the normal curvature is determined by the second fundamental tensor alone. The second fundamental form is positive or negative definite if and only if the determinant ${\displaystyle b_{11}b_{22}-(b_{12})^{2}}$, which has the same sign as ${\displaystyle \kappa _{1}\kappa _{2}}$, is positive. In this case the principal curvatures ${\displaystyle \kappa _{1}}$ and ${\displaystyle \kappa _{2}}$ both have the same sign, either positive or negative. The principal directions are orthogonal, and locally the surface curves like an inward curving (negative) or outward curving (positive) ellipsoid. Thus, any point of a surface that has principal curvatures of the same sign is called an elliptic point. All points of an ellipsoid are elliptic points.

If ${\displaystyle b_{11}b_{22}-(b_{12})^{2}=0}$, so that ${\displaystyle \kappa _{1}\kappa _{2}=0}$ because exactly one principal curvature is zero, then the point is called a parabolic point. A value ${\displaystyle \kappa _{n}=0}$ means that the normal curvature vanishes in that direction; a direction in which the normal curvature is zero is called an asymptotic direction.

If ${\displaystyle \kappa _{1}<0}$ and ${\displaystyle \kappa _{2}>0}$ at a point, then the point is called a hyperbolic (or saddle) point.
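This classification by the signs of the principal curvatures can be expressed as a small decision rule. A sketch (assuming Python; the function name and tolerance are illustrative choices, not from the text):

```python
def classify(k1, k2, tol=1e-12):
    """Classify a surface point from its principal curvatures kappa_1, kappa_2.

    The name and tolerance are illustrative choices, not from the text.
    """
    K = k1 * k2                      # Gaussian curvature
    if K > tol:
        return "elliptic"            # curvatures of the same sign
    if K < -tol:
        return "hyperbolic"          # saddle: curvatures of opposite sign
    # K = 0: parabolic if exactly one curvature vanishes, flat if both do
    return "flat" if abs(k1) < tol and abs(k2) < tol else "parabolic"

print(classify(0.5, 2.0))    # elliptic
print(classify(-1.0, 3.0))   # hyperbolic
print(classify(0.0, 1.0))    # parabolic
```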

If ${\displaystyle \kappa _{n}={\mbox{const.}}}$ for every direction at a point, then we note from

${\displaystyle (b_{\alpha \beta }-\kappa _{n}g_{\alpha \beta })l^{\alpha }=0}$ that the second fundamental tensor is proportional to the first fundamental tensor ${\displaystyle b_{\alpha \beta }=\lambda (\sigma ^{1},\sigma ^{2})g_{\alpha \beta }}$. Such a point is called an umbilic. In this case, the determinants of the first and second fundamental tensors are related via ${\displaystyle b=\lambda ^{2}g}$.

If ${\displaystyle \lambda \neq 0}$ then the point is called an elliptic umbilic. If ${\displaystyle \lambda =0}$ then ${\displaystyle b=0}$ and the point is a flat spot or a parabolic umbilic.

Umbilics may be considered pathological points on a surface.

### The Formulae of Weingarten

Just as the Frenet equations provide a natural coordinate frame ${\displaystyle ({\boldsymbol {\hat {t}}},{\boldsymbol {\hat {p}}},{\boldsymbol {\hat {b}}})}$ at each point of a curve, surfaces have a natural frame at each point.

This frame consists of the surface normal and the two tangent directions, given by ${\displaystyle ({\boldsymbol {x}}_{1},{\boldsymbol {x}}_{2},{\boldsymbol {\hat {n}}})}$ . As with the Frenet frame, we can find relationships between the members of this frame and their derivatives, though these are not, in general, arclength coordinates.

As earlier in these notes, we may differentiate the dot product of a unit vector with itself, ${\displaystyle {\boldsymbol {\hat {n}}}\cdot {\boldsymbol {\hat {n}}}=1}$, to show that the derivatives ${\displaystyle {\boldsymbol {\hat {n}}}_{\alpha }\equiv {\frac {\partial {\boldsymbol {\hat {n}}}}{\partial \sigma ^{\alpha }}}}$ are orthogonal to the unit surface normal

${\displaystyle {\frac {\partial ({\boldsymbol {\hat {n}}}\cdot {\boldsymbol {\hat {n}}})}{\partial \sigma ^{\alpha }}}=0\rightarrow {\boldsymbol {\hat {n}}}_{\alpha }\cdot {\boldsymbol {\hat {n}}}=0}$.

While we have retained the hat symbol in the derivative, we recognize that the derivative of a unit vector is, in general, not a unit vector.

Because the derivative of the unit surface normal is orthogonal to the normal itself, it lies in the tangent plane at the point where it is defined, and must therefore be a linear combination of the tangent vectors. Writing the coefficients of this combination as ${\displaystyle {C_{\alpha }}^{\mu }}$

${\displaystyle {\boldsymbol {\hat {n}}}_{\alpha }={C_{\alpha }}^{\mu }{\boldsymbol {x}}_{\mu }}$ .

We must find the value of ${\displaystyle {C_{\alpha }}^{\mu }}$. We can take the dot product of a tangent vector with each side of this equation

${\displaystyle {\boldsymbol {\hat {n}}}_{\alpha }\cdot {\boldsymbol {x}}_{\lambda }={C_{\alpha }}^{\mu }{\boldsymbol {x}}_{\mu }\cdot {\boldsymbol {x}}_{\lambda }}$.

We recognize that ${\displaystyle {\boldsymbol {x}}_{\mu }\cdot {\boldsymbol {x}}_{\lambda }\equiv g_{\mu \lambda }\qquad }$ and that ${\displaystyle \qquad {\boldsymbol {\hat {n}}}_{\alpha }\cdot {\boldsymbol {x}}_{\lambda }=-b_{\alpha \lambda }={C_{\alpha }}^{\mu }g_{\mu \lambda }}$.

The contravariant form of the metric tensor ${\displaystyle g^{\lambda \nu }}$ can be used to raise the index

${\displaystyle {\boldsymbol {\hat {n}}}_{\alpha }\cdot {\boldsymbol {x}}_{\lambda }g^{\lambda \nu }=-b_{\alpha \lambda }g^{\lambda \nu }=-{b_{\alpha }}^{\nu }={C_{\alpha }}^{\mu }g_{\mu \lambda }g^{\lambda \nu }={C_{\alpha }}^{\mu }{\delta _{\mu }}^{\nu }={C_{\alpha }}^{\nu }}$,

where we have used the fact that ${\displaystyle \qquad g_{\mu \lambda }g^{\lambda \nu }\equiv {\delta _{\mu }}^{\nu }}$.

Thus, we may write the formulae of Weingarten as

${\displaystyle {\boldsymbol {\hat {n}}}_{\alpha }=-{b_{\alpha }}^{\beta }{\boldsymbol {x}}_{\beta }}$.

Thus, the derivative of the surface normal vector is related to the tangent vectors through the mixed form of the second fundamental tensor.
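The formulae of Weingarten can be verified symbolically on a test surface. The sketch below (assuming Python with sympy; the paraboloid is an illustrative choice, not from the text) compares ${\displaystyle {\boldsymbol {\hat {n}}}_{1}}$ with ${\displaystyle -{b_{1}}^{\beta }{\boldsymbol {x}}_{\beta }}$:

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)

# illustrative test surface: the paraboloid x(u, v) = (u, v, (u^2 + v^2)/2)
x = sp.Matrix([u, v, (u**2 + v**2) / 2])
x_u, x_v = x.diff(u), x.diff(v)
n = x_u.cross(x_v)
n = n / n.norm()                         # unit surface normal

g = sp.Matrix([[x_u.dot(x_u), x_u.dot(x_v)],
               [x_v.dot(x_u), x_v.dot(x_v)]])
b = sp.Matrix([[n.dot(x.diff(u, 2)),      n.dot(x.diff(u).diff(v))],
               [n.dot(x.diff(u).diff(v)), n.dot(x.diff(v, 2))]])

B = b * g.inv()                          # mixed tensor b_alpha^beta

# Weingarten: n_alpha = -b_alpha^beta x_beta; check the alpha = 1 component
lhs = n.diff(u)
rhs = -(B[0, 0] * x_u + B[0, 1] * x_v)
print(sp.simplify(lhs - rhs))            # the zero vector
```

The matrix product ${\displaystyle b\,g^{-1}}$ implements the index raising ${\displaystyle b_{\alpha \gamma }g^{\gamma \beta }={b_{\alpha }}^{\beta }}$; the difference simplifies to the zero vector, as the formulae require.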

### The Formulae of Gauss

For the Formulae of Gauss, we need to consider derivatives of the tangent vectors.

The second partial derivative, which is the derivative of a tangent vector, must yield a result that depends both on the tangent directions and on the normal direction. Another way of looking at this is that we need to redefine the derivative in these non-orthogonal coordinates so that it acts like the derivative we are familiar with.

We write

${\displaystyle {\boldsymbol {x}}_{\alpha \beta }\equiv {\frac {\partial ^{2}{\boldsymbol {x}}}{\partial \sigma ^{\alpha }\partial \sigma ^{\beta }}}={\Gamma _{\alpha \beta }}^{\gamma }{\boldsymbol {x}}_{\gamma }+a_{\alpha \beta }{\boldsymbol {\hat {n}}}}$

The coefficients ${\displaystyle a_{\alpha \beta }}$ and ${\displaystyle {\Gamma _{\alpha \beta }}^{\gamma }}$ remain to be determined.

The term with the tangent vector can be eliminated by taking the dot product of both sides with the unit surface normal vector ${\displaystyle {\boldsymbol {\hat {n}}}}$. Because ${\displaystyle {\boldsymbol {\hat {n}}}\cdot {\boldsymbol {x}}_{\gamma }=0}$ and because ${\displaystyle {\boldsymbol {\hat {n}}}\cdot {\boldsymbol {x}}_{\alpha \beta }\equiv b_{\alpha \beta }}$, we may immediately identify that

${\displaystyle a_{\alpha \beta }\equiv b_{\alpha \beta }}$ yielding

${\displaystyle {\boldsymbol {x}}_{\alpha \beta }={\Gamma _{\alpha \beta }}^{\gamma }{\boldsymbol {x}}_{\gamma }+b_{\alpha \beta }{\boldsymbol {\hat {n}}}}$.

We may eliminate the term with ${\displaystyle {\boldsymbol {\hat {n}}}}$ by dotting both sides with a tangent vector

${\displaystyle {\boldsymbol {x}}_{\alpha \beta }\cdot {\boldsymbol {x}}_{\lambda }={\Gamma _{\alpha \beta }}^{\gamma }{\boldsymbol {x}}_{\gamma }\cdot {\boldsymbol {x}}_{\lambda }}$. Here, we recognize that ${\displaystyle {\boldsymbol {x}}_{\gamma }\cdot {\boldsymbol {x}}_{\lambda }\equiv g_{\gamma \lambda }}$.

Multiplying both sides by ${\displaystyle g^{\lambda \kappa }}$

${\displaystyle {\boldsymbol {x}}_{\alpha \beta }\cdot {\boldsymbol {x}}_{\lambda }g^{\lambda \kappa }={\Gamma _{\alpha \beta }}^{\gamma }g_{\gamma \lambda }g^{\lambda \kappa }={\Gamma _{\alpha \beta }}^{\gamma }{\delta _{\gamma }}^{\kappa }={\Gamma _{\alpha \beta }}^{\kappa }}$ where we note that ${\displaystyle g_{\gamma \lambda }g^{\lambda \kappa }\equiv {\delta _{\gamma }}^{\kappa }}$.

This permits us to solve for the coefficients

${\displaystyle {\Gamma _{\alpha \beta }}^{\kappa }={\boldsymbol {x}}_{\alpha \beta }\cdot {\boldsymbol {x}}^{\kappa }={\boldsymbol {x}}_{\alpha \beta }\cdot {\boldsymbol {x}}_{\lambda }g^{\lambda \kappa }}$.
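This prescription can be checked against a familiar example, the flat plane in polar coordinates, where the nonzero Christoffel symbols of the second kind are known to be ${\displaystyle {\Gamma _{\theta \theta }}^{r}=-r}$ and ${\displaystyle {\Gamma _{r\theta }}^{\theta }=1/r}$. A sketch (assuming Python with sympy):

```python
import sympy as sp

r, t = sp.symbols('r t', positive=True)

# flat plane in polar coordinates, embedded as x(r, t) = (r cos t, r sin t, 0)
x = sp.Matrix([r * sp.cos(t), r * sp.sin(t), 0])
coords = (r, t)

tang = [x.diff(c) for c in coords]                       # tangent vectors x_alpha
g = sp.Matrix(2, 2, lambda a, b: tang[a].dot(tang[b]))   # metric g_ab
ginv = g.inv()                                           # contravariant g^ab

def christoffel(a, b, k):
    """Gamma_{ab}^k = x_{ab} . x_lam g^{lam k}."""
    x_ab = x.diff(coords[a]).diff(coords[b])
    return sp.simplify(sum(x_ab.dot(tang[l]) * ginv[l, k] for l in range(2)))

print(christoffel(1, 1, 0))   # Gamma_{tt}^r = -r
print(christoffel(0, 1, 1))   # Gamma_{rt}^t = 1/r
```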

### Christoffel Symbols

The ${\displaystyle {\Gamma _{\alpha \beta }}^{\kappa }}$ are called Christoffel symbols. Though indexed, these items do not transform as tensors do, and are therefore symbols rather than tensors. There are variations on the notations for the Christoffel symbols.

### Christoffel symbols of the first kind

With all indices covariant, we define the Christoffel symbols of the first kind as ${\displaystyle \Gamma _{\alpha \beta \lambda }={\boldsymbol {x}}_{\alpha \beta }\cdot {\boldsymbol {x}}_{\lambda }}$.

We can immediately see that there is a symmetry of the first two indexes of the Christoffel symbol of the first kind

${\displaystyle \Gamma _{\alpha \beta \lambda }=\Gamma _{\beta \alpha \lambda }}$ because ${\displaystyle {\boldsymbol {x}}_{\alpha \beta }={\boldsymbol {x}}_{\beta \alpha }}$.

It is apparent that we can generate Christoffel symbols of the first kind by differentiating the covariant metric tensor

${\displaystyle {\frac {\partial g_{\alpha \lambda }}{\partial \sigma ^{\beta }}}={\frac {\partial ({\boldsymbol {x}}_{\alpha }\cdot {\boldsymbol {x}}_{\lambda })}{\partial \sigma ^{\beta }}}={\boldsymbol {x}}_{\alpha \beta }\cdot {\boldsymbol {x}}_{\lambda }+{\boldsymbol {x}}_{\alpha }\cdot {\boldsymbol {x}}_{\lambda \beta }\equiv \Gamma _{\alpha \beta \lambda }+\Gamma _{\lambda \beta \alpha }}$.

Permuting indexes

${\displaystyle {\frac {\partial g_{\lambda \beta }}{\partial \sigma ^{\alpha }}}\equiv \Gamma _{\lambda \alpha \beta }+\Gamma _{\beta \alpha \lambda }}$

and

${\displaystyle {\frac {\partial g_{\beta \alpha }}{\partial \sigma ^{\lambda }}}\equiv \Gamma _{\beta \lambda \alpha }+\Gamma _{\alpha \lambda \beta }}$ .

Adding the first two of these expressions and subtracting the third,

${\displaystyle {\frac {\partial g_{\lambda \beta }}{\partial \sigma ^{\alpha }}}+{\frac {\partial g_{\alpha \lambda }}{\partial \sigma ^{\beta }}}-{\frac {\partial g_{\beta \alpha }}{\partial \sigma ^{\lambda }}}=\Gamma _{\lambda \alpha \beta }+\Gamma _{\beta \alpha \lambda }+\Gamma _{\alpha \beta \lambda }+\Gamma _{\lambda \beta \alpha }-\Gamma _{\beta \lambda \alpha }-\Gamma _{\alpha \lambda \beta }}$.

Using the symmetry in the first two indexes, ${\displaystyle \Gamma _{\lambda \alpha \beta }=\Gamma _{\alpha \lambda \beta }}$ and ${\displaystyle \Gamma _{\lambda \beta \alpha }=\Gamma _{\beta \lambda \alpha }}$, four of the terms cancel in pairs, while ${\displaystyle \Gamma _{\beta \alpha \lambda }=\Gamma _{\alpha \beta \lambda }}$ combines the remaining two, leaving

${\displaystyle \Gamma _{\alpha \beta \lambda }={\frac {1}{2}}\left({\frac {\partial g_{\lambda \beta }}{\partial \sigma ^{\alpha }}}+{\frac {\partial g_{\alpha \lambda }}{\partial \sigma ^{\beta }}}-{\frac {\partial g_{\beta \alpha }}{\partial \sigma ^{\lambda }}}\right)}$,

which expresses the Christoffel symbols of the first kind entirely in terms of derivatives of the metric tensor.
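The combination of metric derivatives above can be tested symbolically against the defining expression ${\displaystyle \Gamma _{\alpha \beta \lambda }={\boldsymbol {x}}_{\alpha \beta }\cdot {\boldsymbol {x}}_{\lambda }}$. A sketch (assuming Python with sympy, again using the flat plane in polar coordinates as an illustrative surface):

```python
import sympy as sp

r, t = sp.symbols('r t', positive=True)

# flat plane in polar coordinates, embedded as x(r, t) = (r cos t, r sin t, 0)
x = sp.Matrix([r * sp.cos(t), r * sp.sin(t), 0])
coords = (r, t)
tang = [x.diff(c) for c in coords]
g = sp.Matrix(2, 2, lambda a, b: tang[a].dot(tang[b]))   # metric g_ab

def gamma_direct(a, b, lam):
    """Gamma_{ab lam} = x_{ab} . x_lam (the definition)."""
    return sp.simplify(x.diff(coords[a]).diff(coords[b]).dot(tang[lam]))

def gamma_metric(a, b, lam):
    """Gamma_{ab lam} from the metric derivatives alone:
    (d_a g_{lam b} + d_b g_{a lam} - d_lam g_{b a}) / 2."""
    da, db, dl = coords[a], coords[b], coords[lam]
    return sp.simplify((g[lam, b].diff(da)
                        + g[a, lam].diff(db)
                        - g[b, a].diff(dl)) / 2)

# the two expressions agree for every index combination
ok = all(gamma_direct(a, b, l) == gamma_metric(a, b, l)
         for a in range(2) for b in range(2) for l in range(2))
print(ok)   # True
```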