exponents, roots, and irrationals
We won’t define the real numbers; that requires more time than we can allow here. We will simply use the reals, denoted ℝ, as something more than the rationals. This was essentially the state of affairs until around 1872, when Richard Dedekind finally discovered a way to construct the real numbers formally.
So the reals fit into our system of sets on the very top,
\text{ natural numbers } ⊊ \text{ whole numbers } ⊊ J ⊊ ℚ ⊊ ℝ.

Look up the term “Dedekind cut” for more on actually defining real numbers.
We will cover:
We’ve already used positive exponents when discussing the digit representation of numbers:
10 = 10 = 10^{1}
100 = 10 ⋅ 10 = 10^{2}
1000 = 10 ⋅ 10 ⋅ 10 = 10^{3}
10000 = 10 ⋅ 10 ⋅ 10 ⋅ 10 = 10^{4}
⋮

In general, for any number (integer, rational, or real), the number raised to a positive integer exponent is defined as:
For example,
Powers of negative numbers have signs that alternate:
With the symbolic definition, we can show other properties of exponentiation:
In general,
{(ab)}^{k} = {a}^{k}{b}^{k}.

For example,
1000 = 10^{3} = {(2 ⋅ 5)}^{3} = {2}^{3} ⋅ {5}^{3} = 8 ⋅ 125.

Or, when multiplying powers of the same number, the exponents add:

{a}^{k} ⋅ {a}^{m} = {a}^{k+m}.
For example,
10^{2} ⋅ 10^{3} = 100 ⋅ 1000 = 100000 = 10^{5}.

And numbers raised to powers multiple times multiply exponents, as in

{({a}^{k})}^{m} = {a}^{km}.
For example,
100^{2} = {(10^{2})}^{2} = 10^{4} = 10000.
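We can verify all three exponent rules on small numbers; here is a minimal Python sketch:

```python
# Check the three exponent rules on small examples.
a, b, k, m = 2, 5, 3, 2

assert (a * b) ** k == a ** k * b ** k   # (ab)^k = a^k b^k
assert a ** k * a ** m == a ** (k + m)   # a^k a^m = a^(k+m)
assert (a ** k) ** m == a ** (k * m)     # (a^k)^m = a^(km)

print((2 * 5) ** 3, 2 ** 3 * 5 ** 3)     # 1000 1000
```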

Consider the following relationship between integer exponents and division:
Reasoning inductively, we suspect that
{a}^{0} = {a}^{1}∕a = 1.

Using the rule above for adding exponents along with the additive identity property that k + 0 = k, we can deduce that
{a}^{k} = {a}^{k+0} = {a}^{k} ⋅ {a}^{0}.

So for any a ≠ 0, we can divide both sides by {a}^{k} to conclude

{a}^{0} = 1\quad \text{ when }a ≠ 0.

Why can’t we define this for a = 0? We have {0}^{k} = 0 for any integer k > 0. So 0 = {0}^{k} = {0}^{k+0} = {0}^{k} ⋅ {0}^{0} does not help to define {0}^{0}; we’re left with 0 = 0 ⋅ {0}^{0}. Because 0 ⋅ x = 0 for any x, {0}^{0} could be anything.
Examples:
Continuing inductively for a ≠ 0,
Again, we can use the fact that exponents add to derive this deductively:

1 = {a}^{0} = {a}^{k+(-k)} = {a}^{k} ⋅ {a}^{-k},

and so {a}^{-k} is the multiplicative inverse of {a}^{k}, which we previously showed to be {1\over {a}^{k}}. We have shown that

{a}^{-k} = {1\over {a}^{k}}

for all a ≠ 0.
For example:
{2}^{-2} = {1\over {2}^{2}} = {1\over 4}

is the inverse of

{2}^{2} = 4.

Also,

{\left ({2\over 3}\right )}^{-1} = {3\over 2}

is the multiplicative inverse of

{2\over 3}.

And

{\left ({2\over 3}\right )}^{-2} = {\left ({3\over 2}\right )}^{2} = {9\over 4}

is the multiplicative inverse of

{\left ({2\over 3}\right )}^{2} = {4\over 9}.
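Python’s Fraction type lets us check these inverse relationships exactly, without floating-point rounding:

```python
from fractions import Fraction

# A negative exponent gives the multiplicative inverse of the
# positive power: a^(-k) * a^k = a^0 = 1.
assert Fraction(2) ** -2 == Fraction(1, 4)
assert Fraction(2) ** -2 * Fraction(2) ** 2 == 1

two_thirds = Fraction(2, 3)
assert two_thirds ** -1 == Fraction(3, 2)
assert two_thirds ** -2 == Fraction(9, 4)
assert two_thirds ** -2 * two_thirds ** 2 == 1
```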

So we’ve played with division and exponents. Consider now reasoning inductively using the multiplication rule for exponents:
We call {a}^{{1\over 2} } the square root of a and write \sqrt{a}.
But \sqrt{a} is only defined some of the time. Over integers, there clearly is no integer b such that {b}^{2} = 2, so \sqrt{2} is not defined over the integers and fractional exponents are not closed over integers.
Also, the product of two negative numbers is positive, and the product of two positive numbers is positive, so there is no real number whose square is negative. Hence for real a,
\sqrt{a}\text{ is undefined for }a < 0.

Remember that {(-b)}^{2} = {(-1)}^{2} ⋅ {b}^{2} = {b}^{2}, so the square root may be either positive or negative!
In most circumstances, \sqrt{a} means the positive root, often called the principal square root. When you hit a square-root key or apply a square root in a spreadsheet, you get the principal square root.
Other rationals provide other roots:
{a}^{1} = {({a}^{1/3})}^{3},

so {a}^{1/3} is the cube root,

\root{3}\of{a} = {a}^{1/3}.

Here, though, {(-a)}^{3} = {(-1)}^{3} ⋅ {a}^{3} = -({a}^{3}), and there is no worry about the sign of the cube root.
Using {({a}^{k})}^{m} = {a}^{km}, we also have
{a}^{2/3} = \root{3}\of{{a}^{2}} = {(\root{3}\of{a})}^{2}.
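We can check these root identities numerically; since floating-point results are inexact, the sketch below compares with a tolerance:

```python
import math

a = 10.0
# a^(1/2) is the square root, a^(1/3) the cube root.
assert math.isclose(a ** 0.5, math.sqrt(a))
assert math.isclose((a ** (1 / 3)) ** 3, a)

# a^(2/3) = cube root of a^2 = square of the cube root of a.
assert math.isclose(a ** (2 / 3), (a * a) ** (1 / 3))
assert math.isclose(a ** (2 / 3), (a ** (1 / 3)) ** 2)
```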

The exponential operator can be defined on more than just the rationals, but we won’t go there. However, remember that I mentioned the rationals are dense in the reals: there is a rational number arbitrarily close to any real number.
There are more reals than rationals. This is a very non-obvious statement. To justify it, we will show that some familiar numbers cannot be rational.
Remember the table showing that there are as many integers as rationals? You cannot construct one for the reals. I might show that someday; it’s shockingly simple but still a mind-bender. But for now, a few simple examples suffice to make the point.
Theorem: The number \sqrt{2} is not rational.
Proof. Suppose \sqrt{2} were a rational number. Then
\sqrt{2} = {a\over
b}

for some integers a and b. We will show that for any such a and b, 2 must divide both, and so (a,b) ≥ 2. Previously, we explained that any fraction can be reduced to have (a,b) = 1. Proving that (a,b) ≥ 2 shows that we cannot write \sqrt{2} as a fraction.
Now if \sqrt{2} = {a\over b}, then 2 = {{a}^{2}\over {b}^{2}} and 2{b}^{2} = {a}^{2}. Because 2\mathrel{∣}2{b}^{2}, we also know that 2\mathrel{∣}{a}^{2}. In turn, 2\mathrel{∣}{a}^{2} and 2 being prime imply that 2\mathrel{∣}a and thus a = 2q for some integer q.
With a = 2q, {a}^{2} = 4{q}^{2}. And with {a}^{2} = 2{b}^{2}, 2{b}^{2} = 4{q}^{2} or {b}^{2} = 2{q}^{2}. Now 2\mathrel{∣}b as well as 2\mathrel{∣}a, so (a,b) ≥ 2. □
Theorem: Suppose x and n are positive integers and that \root{n}\of{x} is rational. Then \root{n}\of{x} is an integer.
Proof. Because \root{n}\of{x} is rational and positive, there are positive integers a and b such that
\root{n}\of{x} = {a\over
b}.

We can assume further that the fraction is in lowest terms, so (a,b) = 1. Now we show that b = 1.
As in the previous proof, \root{n}\of{x} = {a\over b} implies that x ⋅ {b}^{n} = {a}^{n}.
If b > 1, there is a prime p that divides b. And as before, p\mathrel{∣}b implies p\mathrel{∣}a, contradicting the assumption that (a,b) = 1. Thus b = 1 and \root{n}\of{x} is an integer. □
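This theorem gives a quick irrationality test: the n-th root of a positive integer is either an integer or irrational. A small sketch for the n = 2 case, using exact integer arithmetic:

```python
import math

def is_perfect_square(x: int) -> bool:
    # math.isqrt gives the integer square root exactly,
    # so this test involves no floating-point error.
    r = math.isqrt(x)
    return r * r == x

# sqrt(49) is the integer 7; sqrt(2) and sqrt(50) must be irrational.
assert is_perfect_square(49)
assert not is_perfect_square(2)
assert not is_perfect_square(50)
```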
With decimal expansions, we will see that rational numbers have repeating expansions, while irrational numbers have decimal expansions that never repeat. There are some fascinating properties of these expansions.
Irrational numbers come in two kinds, algebraic and transcendental. We won’t go into the difference in detail, but numbers like \sqrt{ 2} are algebraic, and numbers like π and e are transcendental.
Remember positional notation:
1\,234 = 1 ⋅ 10^{3} + 2 ⋅ 10^{2} + 3 ⋅ 10^{1} + 4 ⋅ 10^{0}.

Given negative exponents, we can expand to the right of 1{0}^{0}. General English notation uses a decimal point to separate the integer portion of the number from the rest.
So with the same notation,
Operations work in exactly the same digit-by-digit manner as before. When any position goes over 9, a factor of 10 carries into the next higher power of 10. If any digit becomes negative, a factor of 10 is borrowed from the next higher power of 10.
Other languages use a comma to separate the integer part from the rest and use a period to mark off groups of three digits on the other side, for example
1,234.567 = 1.234,567.

You may see this if you play with “locales” in various software packages. Obviously, this can lead to massive confusion among travellers. (A price of 1.234 is not less than 2 but rather greater than 1000.)
Typical international mathematical and science publications use a period to separate the integer and use a space to break groups of three:
1,234.567 = 1\,234.567.

What is the part to the right of the decimal point? It often is called the fractional part of the number, giving away that it is a representation of a fraction.
Here we consider the decimal representation of rational numbers {1\over a} for different integers a. We will see that the expansions fall into two categories:
For rational numbers, these are the only two possibilities.
We can find the decimal expansions by long division.
Two simple examples that terminate:
Dividing 2 into 1.0: the quotient digit is 5 with remainder 0, so the expansion is 0.5. Dividing 5 into 1.0: the quotient digit is 2 with remainder 0, so the expansion is 0.2.
Note that 2\mathrel{∣}10 and 5\mathrel{∣}10, so both expansions terminate immediately with {1\over 2} = .5 and {1\over 5} = .2.
Actually, all fractions whose denominator consists only of powers of 2 and 5 have terminating expansions. For example,
What if the denominator a in {1\over a} does not divide 10, or a ∤ 10? Then the expansion does not terminate, but it does repeat. If the denominator has no factors of 2 or 5, it repeats immediately.
Examples of repeating decimal expansions:
Dividing 3 into 1.000…: at every step, 3 goes into 10 three times with remainder 1, so the subproblem repeats and the expansion is 0.333….

Dividing 7 into 1.0000000…: the quotient digits are 1, 4, 2, 8, 5, 7 with successive remainders 3, 2, 6, 4, 5, 1. Once the remainder returns to 1, the whole pattern repeats, giving 0.1428571….
We write these with a bar over the repeating portion, as in
We say that 0.\overline{3} has a period of 1 and 0.\overline{142857} has a period of 6.
We could write 0.2 = 0.2\overline{0}, but generally we say that this terminates once we reach the repeating zeros.
If the denominator a contains factors of 2 or 5, the repeating portion begins a few places after the decimal point. For example, consider {1\over 6} = {1\over 2⋅3} and {1\over 45} = {1\over 5⋅9}:
Dividing 6 into 1.0000…: the first quotient digit is 1 with remainder 4; after that, 6 goes into 40 six times with remainder 4 at every step, so the expansion is 0.1666….

Dividing 45 into 1.0000…: the first quotient digit is 0, then 45 goes into 100 twice with remainder 10 at every subsequent step, so the expansion is 0.0222….
So the decimal representations are {1\over 6} = 0.1\overline{6} and {1\over 45} = 0.0\overline{2}.
Note that for all nonnegative integers k,
These tell us that the expansions have periods of 0, 0, and 1.
For seven,
so the period is of length 6.
For 45,
This is a little more complicated, but the pattern shows that there is one initial digit before hitting a repeating pattern, exactly like the expansion {1\over 45} = 0.0\overline{2}.
In each case, we are looking for the order of 10 modulo the denominator. Finding an integer with a large order modulo another integer is a building block in RSA encryption used in SSL (the https prefix in URLs).
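As a sketch, we can compute the period of 1/d by finding the order of 10 modulo the denominator, after first stripping the factors of 2 and 5 that only delay the repetition:

```python
def period_length(d: int) -> int:
    """Length of the repeating part of 1/d: the least k with
    10^k = 1 (mod d), after stripping the factors of 2 and 5."""
    for p in (2, 5):
        while d % p == 0:
            d //= p
    if d == 1:
        return 0          # terminating expansion
    k, r = 1, 10 % d
    while r != 1:
        r = (r * 10) % d
        k += 1
    return k

assert period_length(3) == 1    # 1/3 = 0.333...
assert period_length(7) == 6    # 1/7 = 0.142857...
assert period_length(6) == 1    # 1/6 = 0.1666...
assert period_length(45) == 1   # 1/45 = 0.0222...
assert period_length(2) == 0    # 1/2 = 0.5 terminates
```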
One common stumbling block for people is that the repeating decimal expansion is not unique.
Let
n = 0.\overline{9} = 0.9999\overline{9}.

Then multiplying n by 10 shifts the decimal over one but does not alter the pattern, so
10n = 9.\overline{9} = 9.9999\overline{9}.

Given these two equations, we can subtract n from 10n; the repeating parts cancel:

10n - n = 9.\overline{9} - 0.\overline{9} = 9.
With 9n = 9, we know n = 1. Thus 1 = 0.\overline{9}!
This is a consequence of sums over infinite sequences, a very interesting and useful topic for another course. But this technique is useful for proving that rationals have repeating expansions.
Theorem: A decimal expansion that repeats (or terminates) represents a rational number.
Proof. Let n be the number represented by a repeating decimal expansion. Without loss of generality, assume that n > 0 and that the integer portion is zero. Now let that expansion have d initial digits and then a period of length p. Here we let a terminating decimal be represented by trailing 0 digits with a period of 1.
For example, let d = 4 and p = 5. Then n looks like
n = 0.{d}_{1}{d}_{2}{d}_{3}{d}_{4}\overline{{p}_{1}{p}_{2}{p}_{3}{p}_{4}{p}_{5}}.

Then 1{0}^{d}n leaves the repeating portion to the right of the decimal. Following our example d = 4 and p = 5,
10^{4}n = {d}_{1}{d}_{2}{d}_{3}{d}_{4}.\overline{{p}_{1}{p}_{2}{p}_{3}{p}_{4}{p}_{5}}.

Because it repeats, 10^{d+p}n has the same pattern to the right of the decimal. In our running example,

10^{4+5}n = {d}_{1}{d}_{2}{d}_{3}{d}_{4}{p}_{1}{p}_{2}{p}_{3}{p}_{4}{p}_{5}.\overline{{p}_{1}{p}_{2}{p}_{3}{p}_{4}{p}_{5}}.

So 10^{d+p}n - 10^{d}n has zeros to the right of the decimal and is an integer k. In our example,

k = 10^{4+5}n - 10^{4}n = {d}_{1}{d}_{2}{d}_{3}{d}_{4}{p}_{1}{p}_{2}{p}_{3}{p}_{4}{p}_{5} - {d}_{1}{d}_{2}{d}_{3}{d}_{4},

where the digit strings are read as integers.

We assumed n > 0, so the difference above is a positive integer. The fractional parts cancel out.
Now n = {k\over 10^{d+p} - 10^{d}} is one integer over another and thus is rational. □
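The construction in this proof translates directly into code. Here is a Python sketch that recovers the exact fraction from the initial digits and the repeating block:

```python
from fractions import Fraction

def repeating_to_fraction(initial: str, repeat: str) -> Fraction:
    """0.<initial><repeat><repeat>... as an exact fraction, via
    k = 10^(d+p) n - 10^d n as in the proof."""
    d, p = len(initial), len(repeat)
    k = int(initial + repeat) - (int(initial) if initial else 0)
    return Fraction(k, 10 ** (d + p) - 10 ** d)

assert repeating_to_fraction("", "3") == Fraction(1, 3)       # 0.333...
assert repeating_to_fraction("1", "6") == Fraction(1, 6)      # 0.1666...
assert repeating_to_fraction("0", "2") == Fraction(1, 45)     # 0.0222...
assert repeating_to_fraction("", "142857") == Fraction(1, 7)  # 0.142857...
assert repeating_to_fraction("", "9") == 1                    # 0.999... = 1
```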
Theorem: All rational numbers have repeating or terminating decimal expansions.
Proof. This is a very different style of proof, using what we have called the pigeonhole principle. Without loss of generality, assume the rational number of interest is of the form {1\over d} for some positive integer d.
At each step in long division, there are only d possible remainders. If some remainder is 0, the expansion terminates.
If no remainder is 0, then there are only d - 1 possible remainders that can appear. If the expansion is taken to length d, some remainder must appear twice. Because of the long-division procedure, equal remainders leave equal subproblems, and thus the expansion repeats. □
So we know that any repeating or terminating decimal expansion represents a rational, and that all rationals have terminating or repeating decimal expansions.
Thus, we have the following:
Corollary: A number is rational if and only if it has a repeating (or terminating) decimal expansion.
So if there is no repeating portion, the number is irrational. One example,
0.101001000100001⋯,

has an increasing number of zero digits between each one digit. This number is irrational.
It’s beyond our scope to prove that π is irrational, but it is. Thus the digits of π do not repeat.
Percentage comes from per centum, part per 100. So a direct numerical equivalent to 85% is
85% = {85\over
100} = .85.

We can expand fractions to include decimals in the numerator and denominator. The decimals are just rationals in another form, and we already explored “complex fractions” with rational numerators and denominators.
So we can express decimal percentages,
85.75% = {85.75\over
100} = .8575.

Everything else “just works”. To convert a fraction into a percentage, there are two routes. One is to convert the denominator into 100:
{1\over
2} = {50\over
100} = 50%.

Another is to produce the decimal expansion and then multiply that by 100:
{1\over
7} = 0.\overline{142857} = 14.2857\overline{142857}%.

Converting a percentage into a proper fraction requires dropping the percentage into the numerator and then manipulating it appropriately:
85.75% = {85.75\over
100} = {{8575\over
100} \over
100} = {8575\over
10000} = {343\over
400}.
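Python’s Fraction type carries out these percentage conversions exactly:

```python
from fractions import Fraction

# 85.75% = 85.75/100; Fraction("85.75") parses the decimal string exactly.
p = Fraction("85.75") / 100
assert p == Fraction(343, 400)
assert float(p) == 0.8575

# Converting a fraction to a percentage: multiply by 100.
assert Fraction(1, 2) * 100 == 50   # 1/2 = 50%
```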

So far we have considered infinite expansions, ones that are not limited to a set number of digits. Computers (and calculators) cannot store infinite expansions that do not repeat, and those that do require more overhead than they are worth.
Instead, computers round infinite results to have at most a fixed number of significant digits. Operations on these limited representations incur some roundoff error, leading to a tension between computing speed and the precision of computed results. One important fact to bear in mind is that precision does not imply accuracy. The following is a very precise but completely inaccurate statement:
The moon is made of Camembert cheese.
First we’ll cover different rounding rules from the perspective of fixed-point arithmetic, or arithmetic using a set number of digits to the right of the decimal place. Then we’ll explain floating-point arithmetic, where the decimal point “floats” through a fixed number of significant digits.
We will not cover the errors in floating-point operations, but we will cover the errors that come from the typical binary representation of decimal data.
The points you need to take away from this are the following:
Despite the doom-like points above, floating-point arithmetic often provides results that are accurate enough. We won’t be able to cover why this is, but the high-level reasons include:
Generally, computer arithmetic can be modelled as computing the exact result and then rounding that exact result into an economical representation.
There are more rounding methods, but these suffice for our discussion. Rounding rules are hugely important in banking and finance, and there are quite a few versions required by different regulations and laws.
Examples of each rounding method above, rounding to two places after the decimal point:
initial number               truncate   round half up   round to nearest even
{1\over 3} = 0.\overline{3}        0.33       0.33            0.33
{1\over 7} = 0.\overline{142857}   0.14       0.14            0.14
0.444                        0.44       0.44            0.44
0.445                        0.44       0.45            0.44
0.4451                       0.44       0.45            0.45
0.446                        0.44       0.45            0.45
0.455                        0.45       0.46            0.46
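Python’s decimal module implements these rounding rules directly (ROUND_DOWN is truncation), so we can reproduce rows of the table:

```python
from decimal import Decimal, ROUND_DOWN, ROUND_HALF_UP, ROUND_HALF_EVEN

def round2(x: str, mode) -> str:
    # Round to two places after the decimal point under the given rule.
    return str(Decimal(x).quantize(Decimal("0.01"), rounding=mode))

assert round2("0.445", ROUND_DOWN) == "0.44"       # truncate
assert round2("0.445", ROUND_HALF_UP) == "0.45"    # round half up
assert round2("0.445", ROUND_HALF_EVEN) == "0.44"  # round to nearest even
assert round2("0.455", ROUND_HALF_EVEN) == "0.46"
assert round2("0.99455", ROUND_HALF_EVEN) == "0.99"
```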
Rounding error is the absolute difference between the exact number and the rounded, stored representation. In the table above, the rounding error in representing {1\over 3} is {1\over 3}  0.33 = {1\over 3}  {33\over 100} = {100\over 300}  {99\over 300} = {1\over 300} = 0.00\overline{3}. Note that here the rounding error is 1% of the exact result. That error is large because we use only two digits.
Note that you cannot round in stages. Consider round-to-nearest-even applied to 0.99455, rounding to two places after the point:
Incorrect (rounding in stages)   Correct (rounding once)
0.99455                          0.99455
0.9946
0.995
1.00                             0.99
Consider repeatedly dividing by 10 in fixed-point arithmetic that carries two digits beyond the decimal point:
So ((1 ÷ 10) ÷ 10) ÷ 10 evaluates to 0! This phenomenon is called underflow, where a number grows too small to be represented. A similar phenomenon, overflow, occurs when a number becomes too large to be represented. Computer arithmetics differ on how they handle over- and underflow, but generally overflow produces an ∞ symbol and underflow produces 0.
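We can simulate this two-digit fixed-point division with the decimal module:

```python
from decimal import Decimal, ROUND_HALF_EVEN

CENT = Decimal("0.01")  # two digits after the decimal point

def fixed_div10(x: Decimal) -> Decimal:
    # Divide by 10, then round back to two fractional digits.
    return (x / 10).quantize(CENT, rounding=ROUND_HALF_EVEN)

x = Decimal("1.00")
for _ in range(3):
    x = fixed_div10(x)   # 0.10, then 0.01, then 0.00
print(x)  # 0.00 -- the value has underflowed to zero
```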
Floating-point arithmetic compensates for this by carrying a fixed number of significant digits rather than a fixed number of fractional digits. The position of the decimal point is carried in an explicit integer exponent. This allows floating-point numbers to store a wider range and actually makes analysis of the round-off error easier.
In floating-point arithmetic,
This continues until we run out of representable range for the integer exponents. We leave the details of floating-point underflow for another day (if you’re unlucky).
Just as integers can be converted to other bases, fractional parts can be converted as well.
Each position to the right of the point (no longer the decimal point) corresponds to a power of the base. For binary, the typical computer representation,
So a binary fractional part can be expanded with powers of two:
0.1101_{2} = {1\over {2}^{1}} + {1\over {2}^{2}} + {0\over {2}^{3}} + {1\over {2}^{4}} = 0.8125.

To find a binary expansion, we need to carry out long division in base 2. I won’t ask you to do that.
The important part to recognize is that finite decimal expansions may have infinite, repeating binary expansions! Remember that in decimal, 2\mathrel{∣}10 and 5\mathrel{∣}10, so negative powers of 2 and 5 have terminating decimal expansions. In binary, only 2\mathrel{∣}2, so only powers of 2 have terminating binary expansions.
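The repeated-doubling algorithm below is a sketch of that base-2 long division: doubling the fraction and reading off the integer part gives the binary digits one at a time.

```python
def binary_fraction_bits(num: int, den: int, nbits: int) -> str:
    """First nbits binary digits of num/den (0 < num/den < 1)
    after the binary point, by repeated doubling."""
    bits = []
    for _ in range(nbits):
        num *= 2
        if num >= den:          # a 1 bit: subtract it off and continue
            bits.append("1")
            num -= den
        else:
            bits.append("0")
    return "".join(bits)

# 1/10 repeats in binary even though 0.1 terminates in decimal:
assert binary_fraction_bits(1, 10, 12) == "000110011001"
# 1/2 and 3/4 terminate, since their denominators are powers of 2:
assert binary_fraction_bits(1, 2, 4) == "1000"
assert binary_fraction_bits(3, 4, 4) == "1100"
```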
Numbers you expect to be exact are not. Consider 0.1. Its binary expansion is
0.1 = 0.0{\overline{0011}}_{2}.

A five-bit fixed-point representation would use

0.1 ≈ 0.00011_{2}.

The error in representing this with a five-digit fixed-point representation is 0.00625, or over 6%.
In a five-bit floating-point representation,

0.1 ≈ 1.1001_{2} ⋅ {2}^{-4}.

The error here is less than 0.0024, or under 2.4%. You can see what floating point gains here.
Ultimately, though, in a limited binary fractional representation, adding ten dimes does not equal one dollar! This is why programs slanted towards finance (e.g., spreadsheets) often use a form of decimal arithmetic. On current common hardware, decimal arithmetic is implemented in software rather than hardware and is orders of magnitude slower than binary arithmetic.
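The dime example, in binary floating point versus decimal arithmetic:

```python
# Ten dimes in binary floating point do not make a dollar...
total = sum([0.1] * 10)
print(total == 1.0)   # False
print(total)          # 0.9999999999999999

# ...but in exact decimal arithmetic they do.
from decimal import Decimal
assert sum([Decimal("0.10")] * 10) == Decimal("1.00")
```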