Calculating with numbers: Decimal numbers
Significance and precision
A common misconception is that the number of decimals indicates the accuracy of a number, but strictly speaking this is incorrect: the number of decimals only indicates the precision of a number. Only the number of significant digits in a number says something about its accuracy (significance). These are the digits that really matter; digits without significance can be omitted.
The number of significant figures can be determined with the following rules:
- all digits different from zero are significant;
- all zeros between nonzero digits are significant;
- all zeros to the right of the decimal point are significant when they are preceded by a nonzero digit ("zeros on the right-hand side count");
- zeros that are only there to indicate the position of the decimal point are not significant;
- zeros that can be omitted without changing the numerical value are not significant ("zeros on the left-hand side of a decimal notation do not count").
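These rules are mechanical enough to automate. Below is a minimal Python sketch, under the assumption that the number is given as a plain decimal string without an exponent part; the helper name significant_digits is ours, not a standard function.

```python
def significant_digits(s: str) -> int:
    """Count the significant digits of a plain decimal string such as '0.0030'."""
    s = s.lstrip("+-")          # a sign does not affect significance
    has_point = "." in s
    digits = s.replace(".", "")
    # Leading zeros only fix the position of the decimal point: not significant.
    digits = digits.lstrip("0")
    # Trailing zeros of an integer written without a decimal point do not count.
    if not has_point:
        digits = digits.rstrip("0")
    return len(digits)

for s in ["1.205", "0.0123", "300", "300.", "0.0030"]:
    print(s, "->", significant_digits(s))
# 1.205 -> 4, 0.0123 -> 3, 300 -> 1, 300. -> 3, 0.0030 -> 2
```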
The following table of examples illustrates the concepts of significance and precision.
By precision, we mean the actual position of the right-most significant digit in the decimal notation.
In scientific notation, we write numerical values as a product of a number between 1 and 10 and some power of 10; the factor in front of the power of 10 then comprises only significant digits. We discuss scientific notation in the next section.
\[\begin{array}{|l|l|l|l|l|} \hline
\mathit{number} & \mathit{significance\phantom{XXXXXX}} & \mathit{precision} & \mathit{scientific} & E\mbox{-}notation\\
& & & \mathit{notation} & \\
\hline
1.205 & 4\mathrm{\; significant\;digits} & 3\mathrm{\; decimals} & 1.205 & \mathrm{1.205E0} \\
12.05 & 4\mathrm{\; significant\;digits} & 2\mathrm{\; decimals} & 1.205\times 10^1 & \mathrm{1.205E1} \\
0.0123 & 3\mathrm{\; significant\;digits} & 4\mathrm{\; decimals} & 1.23\times 10^{-2} & \mathrm{1.23E\,\mbox{-}2} \\
300 & 1\mathrm{\; significant\;digit} & \mathrm{hundreds} & 3\times 10^2 & \mathrm{3E2} \\
0300 & 1\mathrm{\; significant\;digit} & \mathrm{hundreds} & 3\times 10^2 & \mathrm{3E2} \\
300. & 3\mathrm{\; significant\;digits} & \mathrm{unity} & 3.00\times 10^2 & \mathrm{3.00E2} \\
30.10 & 4\mathrm{\; significant\;digits} & 2\mathrm{\; decimals} & 3.010\times 10^1 & \mathrm{3.010E1} \\
0.3 & 1\mathrm{\; significant\;digit} & 1\mathrm{\; decimal} & 3. \times 10^{-1} & \mathrm{3.E\,\mbox{-}1} \\
0.30 & 2\mathrm{\; significant\;digits} & 2\mathrm{\; decimals} & 3.0\times 10^{-1} & \mathrm{3.0E\,\mbox{-}1} \\
0.0030 & 2\mathrm{\; significant\;digits} & 4\mathrm{\; decimals} & 3.0 \times 10^{-3} & \mathrm{3.0E\,\mbox{-}3} \\ \hline
\end{array}\]
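As a check on the last column: Python's scientific ("e") format produces exactly this E-notation when the precision field is set to one less than the number of significant digits. A small sketch:

```python
print(f"{12.05:.3e}")   # 1.205e+01, four significant digits
print(f"{0.0123:.2e}")  # 1.23e-02,  three significant digits
print(f"{300:.0e}")     # 3e+02,     one significant digit
print(f"{0.0030:.1e}")  # 3.0e-03,   two significant digits
```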
Note also the convention that an integer such as 300 has only one significant figure. If you want to write it with three significant digits, you place a decimal point after it: 300. This convention sometimes gives "odd" results when rounding an integer to a given number of significant figures. Suppose, for example, that you want to round 301.25 to two significant figures: 301 is the obvious answer, but it consists of three significant digits and is therefore too precise. Rounding to exactly two digits is in fact not possible in this notation, and the best you can do is 300, a number with only one significant digit. With the number 310.25 you have no such problem and the result is 310. It is strange that one rounding turns out much more accurate than the other. The conclusion to draw from this is that scientific notation, discussed later, is really the only consistently workable format.
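In scientific notation this ambiguity disappears, because the number of digits in the leading factor is exactly the number of significant digits. A sketch reusing the format trick shown above (round_sig is a hypothetical helper, not a library function):

```python
def round_sig(x: float, n: int) -> str:
    """Round x to n significant figures; E-notation keeps the significance visible."""
    return f"{x:.{n - 1}e}"

print(round_sig(301.25, 2))  # 3.0e+02 -- two significant digits, no ambiguity
print(round_sig(310.25, 2))  # 3.1e+02
```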
Mathcentre video clips
Decimals (41:25)