As I mentioned in the introduction, I assume that the reader knows what a derivative of a function is. It is a good foundation, but to get our hands wet in relativity, we need to expand that concept a bit. Let's then get to know the **partial derivative**. What is it?

Let's recall ordinary derivatives first. We denote the derivative of a function $f(x)$ as $f'(x)$ or $\frac{df}{dx}$. It means, basically, how fast the value of the function changes while we change the argument $x$. For example, when $f(x) = x^2$, then $f'(x) = 2x$.
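To make "how fast the value changes" concrete, here is a quick numerical sketch: a difference quotient with a small step $h$ approximates the derivative, and for $f(x) = x^2$ it lands close to $2x$. (The function and helper names here are just illustrative.)

```python
# Numerical check of the derivative of f(x) = x^2 via a finite difference.
def f(x):
    return x ** 2

def numeric_derivative(g, x, h=1e-6):
    # Central difference: (g(x+h) - g(x-h)) / (2h) approximates g'(x).
    return (g(x + h) - g(x - h)) / (2 * h)

print(numeric_derivative(f, 3.0))  # close to 2 * 3 = 6
```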

But what if the function depends on more than one variable? Like if we have a function $f(x, y) = x^2 + y^2$ that assigns to each point $(x, y)$ of the plane the square of its distance from the origin. How do we even define the derivative of such a function?

This problem is what partial derivatives solve. A partial derivative can be calculated with respect to any of the variables, so in this case we have two possibilities: $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$ (for the sake of simplicity those are sometimes denoted as $f_x$ and $f_y$ or $\partial_x f$ and $\partial_y f$). To calculate a partial derivative, one treats only the variable with respect to which we differentiate as a variable; the rest are treated as constants.

To present what this means we will use a linear function $f(x) = ax$, where $a$ is some constant. The derivative of this function is $f'(x) = a$. If $a$ was a variable from the beginning, this would be precisely the partial derivative with respect to $x$! Taking a function $f(x, a) = ax$ and treating $a$ as a constant, we get exactly the described situation. Thus, $\frac{\partial f}{\partial x} = a$. If we changed the symbols a bit and wrote $y$ instead of $a$, we would get:

$$\frac{\partial (yx)}{\partial x} = y$$

On the other hand, we can treat $x$ as a constant and $y$ as a variable and calculate $\frac{\partial f}{\partial y}$ - it is exactly analogous, and in this case we get $\frac{\partial f}{\partial y} = x$.
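We can verify both partial derivatives of $f(x, y) = yx$ symbolically. This sketch uses sympy (an assumption - any computer algebra system would do); `sp.diff(f, x)` holds every other symbol constant, exactly as described above.

```python
# Symbolic check: for f(x, y) = y*x, the partials are y and x.
import sympy as sp

x, y = sp.symbols("x y")
f = y * x

print(sp.diff(f, x))  # ∂f/∂x: y is held constant, result is y
print(sp.diff(f, y))  # ∂f/∂y: x is held constant, result is x
```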

Let's go back to our initial function, the square of the distance: $f(x, y) = x^2 + y^2$. To calculate $\frac{\partial f}{\partial x}$ we assume $y$ to be constant - which means that the whole $y^2$ part is constant and will vanish upon differentiation. So we only have to calculate the derivative of $x^2$, which gives us $\frac{\partial f}{\partial x} = 2x$. Voilà.

What about the derivative with respect to $y$? It's the same, but now it is $x$ that is constant and we get $\frac{\partial f}{\partial y} = 2y$.
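The same symbolic check works for the square-of-distance function (again assuming sympy is available):

```python
# Both partials of the squared distance from the origin.
import sympy as sp

x, y = sp.symbols("x y")
f = x**2 + y**2  # square of the distance from the origin

print(sp.diff(f, x))  # 2*x  (the y**2 term vanished)
print(sp.diff(f, y))  # 2*y  (the x**2 term vanished)
```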

As is the case with ordinary derivatives, with partial derivatives it is also possible to take higher-order derivatives (second derivative, third derivative, etc.). They are calculated exactly the same way as the ordinary ones - by differentiating the function, then differentiating the result again, and so on. The only difference is that in the case of partial derivatives there are more possible higher-order derivatives. The reason is simple - at each step we can choose any one of the many variables.

Let's say then that we have a function of $n$ variables: $f(x_1, x_2, \ldots, x_n)$. We have $n$ possible first derivatives: $\frac{\partial f}{\partial x_1}$, $\frac{\partial f}{\partial x_2}$, ..., $\frac{\partial f}{\partial x_n}$.

The number of second derivatives is then $n^2$: $\frac{\partial^2 f}{\partial x_1^2}$, $\frac{\partial^2 f}{\partial x_1 \partial x_2}$, ..., $\frac{\partial^2 f}{\partial x_1 \partial x_n}$, $\frac{\partial^2 f}{\partial x_2 \partial x_1}$, $\frac{\partial^2 f}{\partial x_2^2}$, ..., $\frac{\partial^2 f}{\partial x_n^2}$.

The number of third derivatives would be $n^3$, etc.
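The counting argument above is just "one ordered choice of variable per differentiation step", which we can sketch with the standard library (variable names here are illustrative):

```python
# Each ordered k-tuple of variables names one k-th order derivative,
# so there are n**k of them.
import itertools

variables = ["x1", "x2", "x3"]  # n = 3 variables

second = list(itertools.product(variables, repeat=2))
third = list(itertools.product(variables, repeat=3))
print(len(second))  # 9  == 3**2
print(len(third))   # 27 == 3**3
```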

It is not the whole truth, though. Not all of these derivatives differ. Actually, for the well-behaved (smooth) functions we will deal with, differentiating with respect to different variables is commutative, that is, it doesn't matter if we first differentiate with respect to $x$ and then $y$, or the opposite: $\frac{\partial^2 f}{\partial x \partial y} = \frac{\partial^2 f}{\partial y \partial x}$.
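This commutativity is easy to test symbolically. Here is a sketch with sympy on an arbitrarily chosen smooth function - the two mixed second derivatives come out identical:

```python
# Mixed partials commute for smooth functions (Schwarz's theorem).
import sympy as sp

x, y = sp.symbols("x y")
f = sp.sin(x * y) + x**3 * y**2  # an arbitrary smooth test function

xy = sp.diff(f, x, y)  # first with respect to x, then y
yx = sp.diff(f, y, x)  # first with respect to y, then x
print(sp.simplify(xy - yx))  # 0
```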

One more thing worth mentioning is that often expressions like or are treated as separate objects - **differential operators**. A differential operator is then just something that applied to a function will differentiate it. Differential operators can also be "multiplied", yielding higher-order operators: (now you can see why the higher-order derivatives are written as they are - why only the derivative symbol has a "power" in the "numerator", and the whole expression like in the "denominator"). Some other objects can also be created, but I will explain that in the next part.
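The "multiplication" of operators is just composition: applying $\frac{\partial}{\partial x}$ twice is the same as applying $\frac{\partial^2}{\partial x^2}$ once. A small sympy sketch (the test function is arbitrary):

```python
# Composing the operator d/dx with itself equals the second derivative.
import sympy as sp

x, y = sp.symbols("x y")
f = x**4 * y

twice = sp.diff(sp.diff(f, x), x)  # apply ∂/∂x, then ∂/∂x again
second = sp.diff(f, x, 2)          # apply ∂²/∂x² in one step
print(twice)   # 12*x**2*y
print(second)  # 12*x**2*y
```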

I recommend calculating some partial derivatives for yourself in order to acquire some familiarity with them. Example functions:

The task - calculate all their first and second derivatives. I will check the solutions for the readers who would like that ;)