Functions of several variables: Total differential and Taylor approximation
Propagation of error
Let $x$ and $y$ be two quantities that do not depend on each other and are measured independently, say with measured values $x_0$ and $y_0$, and with measurement errors $\Delta x$ and $\Delta y$. Suppose you need a composite variable $z = f(x, y)$. What will the error $\Delta z$ in $z$ be? The linearization of $f$ plays a role in answering this question. After all,
\[
\Delta z \approx \frac{\partial f}{\partial x}(x_0, y_0)\,\Delta x + \frac{\partial f}{\partial y}(x_0, y_0)\,\Delta y .
\]
For the product $z = x\,y$ this gives $\Delta z \approx y_0\,\Delta x + x_0\,\Delta y$, and after dividing by $z_0 = x_0 y_0$,
\[
\frac{\Delta z}{z_0} \approx \frac{\Delta x}{x_0} + \frac{\Delta y}{y_0}.
\]
For multiplication, the relative error in the result is the sum of the relative errors in the factors.
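As a quick numerical illustration (the numbers here are our own, chosen only for this sketch): suppose $x_0 = 20$ with $\Delta x = 0.2$ (relative error $1\%$) and $y_0 = 5$ with $\Delta y = 0.1$ (relative error $2\%$). For $z = x\,y$, so $z_0 = 100$, the rule predicts a relative error of about $1\% + 2\% = 3\%$; indeed
\[
\Delta z \approx y_0\,\Delta x + x_0\,\Delta y = 5 \cdot 0.2 + 20 \cdot 0.1 = 3,
\]
which is $3\%$ of $100$.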
Let us now have a look at the quotient of two variables, $z = x / y$. Via the total differential we find the following:
\[
\Delta z \approx \frac{1}{y_0}\,\Delta x - \frac{x_0}{y_0^{2}}\,\Delta y,
\qquad\text{and hence}\qquad
\frac{\Delta z}{z_0} \approx \frac{\Delta x}{x_0} - \frac{\Delta y}{y_0}.
\]
Since the errors $\Delta x$ and $\Delta y$ can have either sign, the worst case is obtained by adding their magnitudes. For division, the relative error in the result is the sum of the relative errors in the numerator and denominator.
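Again with illustrative numbers of our own: take $x_0 = 100$ with $\Delta x = 1$ (relative error $1\%$) and $y_0 = 4$ with $\Delta y = 0.08$ (relative error $2\%$). For $z = x / y$, so $z_0 = 25$, the rule predicts a relative error of about $3\%$, i.e. $\Delta z \approx 0.75$. The least favourable combination $x = 101$, $y = 3.92$ indeed gives
\[
\frac{101}{3.92} \approx 25.77,
\]
a deviation of roughly $0.77$ from $25$, in good agreement with the linear estimate.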
So far we have noted that the total differential suggests the rules governing the propagation of error in formulas; these rules do not take the sign of the errors into account. If we do take the sign into account, then it makes sense to study squares of deviations. We look at
\[
(\Delta z)^2 \approx \left(\frac{\partial f}{\partial x}(x_0, y_0)\,\Delta x\right)^2 + \left(\frac{\partial f}{\partial y}(x_0, y_0)\,\Delta y\right)^2,
\]
and in the above cases we move to 'quadratic addition' of the absolute or relative errors. This gives:
For addition and subtraction, the square of the absolute error in the result is equal to the sum of the squares of the absolute errors in the terms.
For multiplication and division, the square of the relative error in the result is equal to the sum of the squares of the relative errors in the factors.
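With the same illustrative numbers as before (ours, not from the text): relative errors of $1\%$ and $2\%$ in the factors of a product combine quadratically to
\[
\sqrt{(1\%)^2 + (2\%)^2} = \sqrt{5}\,\% \approx 2.2\%,
\]
somewhat smaller than the $3\%$ obtained by simply adding the relative errors. Likewise, for a sum with absolute errors $\Delta x = 0.2$ and $\Delta y = 0.1$, quadratic addition gives $\sqrt{0.2^2 + 0.1^2} \approx 0.22$ instead of $0.3$.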