Scientific Notation

Working with large and small numbers


Background Information

With the mathematical understanding of the universe that began with Isaac Newton (1642-1727) and the astronomers who immediately preceded him, there grew a need for a mathematical notation for working with very large and very small numbers.  But the actual use of exponential numbers in science seems to date to the measurements of electricity shortly after 1850.  The earliest use of the term scientific notation was about 1934, when it was noted as equivalent to condensed numbers.

The author apologizes to several decades of students for telling them that there is nothing scientific about scientific notation.  It now appears that the first serious use of this condensed notation outside of mathematics may well have been by physicists.  But the point remains that scientific notation is little more than a useful tool for representing large and small numbers, a tool worth learning.

The Rules of Scientific Notation

  1. Start with a large or small number.
  2. Move the decimal point until the value of the number is greater than or equal to one but smaller than ten.
  3. Multiply by the power of ten that makes the product equal to the original number.
original number = n • 10^x     where 1 ≤ n < 10 and x is a whole number (the number of places the decimal point moved)
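
For those who like to see the procedure written out as an algorithm, the three rules above can be sketched in a few lines of Python.  This is only an illustration of the rules, not part of the original lesson; the function name to_scientific is made up for the example.

    import math

    def to_scientific(value):
        # Return (n, x) such that value = n * 10**x with 1 <= |n| < 10,
        # following the rules above: move the decimal point one place at a
        # time and count the moves as the power of ten.
        if value == 0:
            return 0.0, 0              # zero has no standard exponent; use 0
        n = abs(value)
        x = 0
        while n >= 10:                 # decimal point moves left
            n /= 10
            x += 1
        while n < 1:                   # decimal point moves right
            n *= 10
            x -= 1
        return math.copysign(n, value), x

    # The examples below:
    print(to_scientific(1_000_000))        # (1.0, 6)
    print(to_scientific(234_000_000_000))  # about (2.34, 11)
    print(to_scientific(0.00000789))       # about (7.89, -6)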

Examples

1,000,000 = 1 x 10^6

234,000,000,000 = 2.34 x 10^11     Note that counting zeroes leads to error: the exponent 11 is the number of places the decimal point moved, not the number of zeroes.

0.000,007,89 = 7.89 x 10^-6     Note that numbers smaller than one have negative exponents.
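
As a check on work done by hand, most calculators and programming languages can display numbers in this notation directly.  For example, Python's built-in "e" format (shown here only as an aside) prints the same three results:

    print(format(1_000_000, ".2e"))        # 1.00e+06
    print(format(234_000_000_000, ".2e"))  # 2.34e+11
    print(format(0.00000789, ".2e"))       # 7.89e-06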

28 November 2003
last revised 30 May 2007
by D Trapp