A Tale of Two Numbers

A few months ago, we had just finished talking about polynomials and were moving into matrices.  Because a lot of matrix concepts have analogs in the real numbers, we kicked things off with a review of some real number topics.  Specifically, I wanted to talk about solving linear equations using multiplicative inverses, as a preview of determinants and of using inverse matrices to solve linear systems.  For instance:

$latex \begin{array}{ll}
2x=8 & AX=B \\
2^{-1}2x = 2^{-1}8 & A^{-1}AX = A^{-1}B \\
1x = \frac{1}{2}8 & IX = A^{-1}B \\
x=4 & X = A^{-1}B
\end{array}&s=2$
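
If you want to see the right-hand column in action, here's a quick numerical sketch in Python with NumPy; the particular A and B are just numbers made up for illustration.

```python
import numpy as np

# A made-up 2x2 system AX = B, solved the same way as 2x = 8:
# multiply both sides (on the left) by the inverse of A.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[8.0],
              [13.0]])

X = np.linalg.inv(A) @ B   # X = A^{-1}B
print(X)                   # solution column vector: [2.2, 3.6]

# In practice, np.linalg.solve(A, B) gives the same answer without
# explicitly forming A^{-1}.
```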

As an aside, I threw out this series of equations in the hopes of (a) foreshadowing singular matrices, and (b) offering a justification for the lifelong prohibition against dividing by zero:

$latex \begin{array}{l}
0x=1 \\
0^{-1}0x = 0^{-1}1 \\
1x = \frac{1}{0}1 \\
x = \frac{1}{0}
\end{array}&s=2$

I thought this was just so beautiful.  Why can't we divide by zero?  Because zero doesn't have a multiplicative inverse.  There is no solution to 0x = 1, so 0⁻¹ must not exist!  Q.E.D.

As it turns out, Q.E.NOT.  One of my students said, "Why can't we just invent the inverse of zero?  Like we did with i?"

Again, we had just finished our discussion of polynomials, during which we had conjured the square root of -1 seemingly out of the clear blue sky.  They wanted to do the same thing with 1/0.  What an insightful and beautiful idea!  Consider the following stories, from my students' perspectives:

  1. When we're trying to solve quadratic equations, we might happen to run into something like x² = -1.  Now of course there is no real number whose square is -1, so for convenience let's just name this creature i (the square root of -1), and put it to good use immediately.
  2. When we're trying to solve linear equations, we might happen to run into something like 0x = 1.  Now of course there is no real number that, when multiplied by 0, yields 1, so for convenience let's just name this creature j (the multiplicative inverse of 0), and put it to good use immediately.

Why are we allowed to do the first thing, but not the second?  Why do we spend a whole chapter talking about the first thing, and an entire lifetime in contortions to avoid the second?  Both creatures were created, more or less on the spot, to patch up shortcomings in the real numbers.  What's the difference?

And this is the tricky part: how do I explain it within the confines of a high school algebra class?  Well, I can tell you what I tried to do...

Let's suppose that j is a legitimate mathematical entity in good standing with its peers, just like i.  Since we've defined j as the number that makes 0j = 1 true, it follows that 0 = 1/j.  Consider the following facts:

$latex \begin{array}{l}
2 \cdot 0 = 0 \\
2\frac{1}{j} = \frac{1}{j} \\
\frac{2}{j} = \frac{1}{j} \\
2 = 1
\end{array}&s=2$

In other words, I can pretty quickly show why j allows us to prove nonsensical results that lead to the dissolution of mathematics and perhaps the universe in general.  After all, if I'm allowed to prove that 2 = 1, then we can pretty much call the whole thing off.  What I can't show, at least with my current pedagogical knowledge, is why i doesn't lead to similar contradictions.

Therein lies the broad problem with proof.  It's difficult.  If there are low-hanging fruit on the counterexample tree, then I can falsify bad ideas right before my students' very eyes.  But if there are no counterexamples, then it becomes incredibly tough.  It's easy to show a contradiction, much harder to show an absence of contradiction.  I can certainly take my kids through confirming examples of why i is helpful and useful.  But in my 50 min/day with them, there's just no way I can organize a tour through the whole scope and beauty of complex numbers.  Let's be serious, there's no way that I can even individually appreciate their scope and beauty.

The complex numbers aren't just a set, or a group.  They're not even just a field.  They form an algebra (so do matrices, which brings a nice symmetry to this discussion), and algebras are strange and mysterious beings indeed.  I could spend the rest of my life learning why i leads to a rich and self-consistent system, so how am I supposed to give a satisfactory explanation?

Take it on faith, kids.  Good enough?

Update 3/20/12: My friend, Frank Romascavage, who is currently a graduate student in math at Bryn Mawr College (right down the road from my alma mater Villanova), pointed out the following on Facebook:

"We need to escape integral domains first so that we can have zero divisors!  Zero divisors give a quasi-invertibility condition (with respect to multiplication) on 0.  They aren't really true inverses, but they are somewhat close!  In $latex Z_{6}$ we have two zero divisors, 3 and 2, because 3 times 2 (as well as 2 times 3) in $latex Z_{6}$ is 0."

In many important ways, an integral domain is a generalization of the integers, which is why the two behave very much the same.  An integral domain is just a commutative ring (usually assumed to have a unity) with no zero divisors.  If two nonzero members of a ring, say a and b, satisfy ab = 0, then they are said to be zero divisors.  In other words, to "escape integral domains" is to move into a ring where the Zero Product Property no longer holds.  This means that, in non-integral domains, we can almost, sort of, a little bit, divide by zero.  Zero doesn't really have a true inverse, but it's close.  Frank's example is the numbers 2 and 3 in the ring of integers modulo 6, since 3 × 2 ≡ 0 (mod 6).  In fact, the ring of integers modulo n fails to be an integral domain unless n is prime.  CTL
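
To see Frank's example in action, here's a quick brute-force sketch in Python (just a throwaway check): it lists the zero divisors of the integers modulo n, confirming that Z_6 has them while Z_7, with 7 prime, has none.

```python
# Brute-force search for zero divisors in Z_n: nonzero elements a for which
# some nonzero b gives a*b ≡ 0 (mod n).
def zero_divisors(n):
    return [a for a in range(1, n)
            if any((a * b) % n == 0 for b in range(1, n))]

print(zero_divisors(6))   # [2, 3, 4] -- e.g. 2 * 3 ≡ 0 (mod 6)
print(zero_divisors(7))   # []        -- Z_7 is an integral domain (a field, in fact)
```

The output for Z_6 also picks up 4, since 4 × 3 ≡ 0 (mod 6); for n ≥ 2, Z_n has zero divisors exactly when n is composite.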

7 thoughts on “A Tale of Two Numbers”

  1. Christopher

    You have made me a very happy man! Thanks for writing this up. Lovely, lovely, lovely.

    To answer your (probably) rhetorical question... The burden isn't on you to prove that i plays nicely with the real numbers. Nope, you can challenge your student to harvest some fruit from the counterexample tree. Make clear, of course, that "lack of counterexamples" does not equal "proof", but you can also express an interest in what it occurs to him/her/them to try.

    1. Chris Lusto

      Well, I feel like I have to be accommodating to your requests since you are one of the approximately one people (take that, Tabitha!) who reads/comments on this sucker.

      And yes, the question is mostly rhetorical, but only in a throwing-up-of-hands sort of way. I'm still looking for a better answer, but I don't suspect a complete solution exists. We shall continue our exploration. Unfortunately, most of these kids will have to do it without me, because---according to my schedule---we're about done with $latex \mathbb{C}&s=2$.

  2. Matt

    One of my favorite "proofs" involving i is the following:

    1 = sqrt(1) = sqrt(1*1) = sqrt((-1)*(-1)) = sqrt(i^2*i^2) = sqrt(i^2)*sqrt(i^2) = i*i = -1. If you really want to mess with their heads, you can ask them where this proof breaks down, i.e. how it is that i does play nicely with the real numbers, after all. One of my high school teachers did this very thing, and I remember the lesson quite well to this day.
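
    For a quick numerical look at where it breaks, here's a small sketch using Python's cmath, which returns the principal square root:

```python
import cmath

a = b = -1 + 0j

print(cmath.sqrt(a * b))              # (1+0j):  sqrt((-1)*(-1)) = sqrt(1) = 1
print(cmath.sqrt(a) * cmath.sqrt(b))  # (-1+0j): sqrt(-1)*sqrt(-1) = i*i = -1

# The offending step is sqrt(i^2 * i^2) = sqrt(i^2)*sqrt(i^2): the principal
# square root does not distribute over multiplication of complex numbers.
```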

  3. LSquared

    As you can probably tell, I just discovered your blog, and am wending my way backwards through your (very interesting) posts. As you probably can't tell, I'm a mathematician-turned-math-educator...anyway, I can't resist a good mathematical puzzle.

    First, the lovely "equation" 1 = sqrt(1) = sqrt(1*1) = sqrt((-1)*(-1)) = sqrt(i^2*i^2) = sqrt(i^2)*sqrt(i^2) = i*i = -1. What's going on here (as my complex analysis professor used to insist with much vehemence) is that when you move from the reals to the complex numbers, there is no longer such a thing as a principal square root. You can't order the complex numbers, so you can't pick just one answer to a square root problem--you have to take the whole set, so sqrt(-1) isn't i, it's +-i, and sqrt(1) isn't 1, it's +-1. You can solve x^2 = a for any number you want, but square root isn't a function anymore. It's messy, but you don't get into trouble that way.

    Second, how about i playing nice with the real numbers? It mostly does--the complex numbers form a field, and if you do the adjoining roots to a field thing in abstract algebra, you can prove that there's a field isomorphism between the rationals with square root of 2 adjoined and the rationals with i adjoined, so in terms of field properties, i plays nice with the rational numbers just as much as square root 2 does. That's kind of the way you prove things are consistent for large systems--you can't directly prove that there are no inconsistencies in a large system, but you can prove that one large system is equivalent to another large system, so, for instance, I expect someone has proved that if the complex numbers are inconsistent then the real numbers have to be inconsistent, so it's equally trustworthy...as a field.

    Back to the first example, though, notice that I said you can't order the complex numbers (any choice you might make about, for instance, whether i is greater or less than 1 can be shown to be inconsistent), so that means there are properties of the real numbers that don't extend to the complex numbers. i is consistent with the real numbers in most ways, but not every property extends. Field properties, however, are pretty powerful--they are the ones that let you solve polynomial equations and such, so that makes the complex numbers awfully useful.

    1. Chris Lusto

      Everything you've said here, of course, is right on. At least I think so. I will have to take a whole bunch of it on faith as an economics-major-turned-Marine-artillery-officer-turned-weird-corporate-safety-guy-turned-math-educator (notice that "mathematician" doesn't make an appearance). And therein, really, is the problem. Not with you, or even with me---mostly---but with the audience in question.

      I have to try and make at least nominally convincing arguments for all these kinds of big-picture structural questions without any appeals to sets that are not totally ordered, or to the convenience of extension fields, or to consistency-by-proxy...even though all of those things are cool. They're just not accessible. That's my real problem here. It's not so much philosophical as practical. If you (or anybody) has an answer to that, I'll pay handsomely.

      Thanks so much for your (new) readership and comments.

    2. Emmanuel

      You are mostly correct. You run into the problem of branch cuts for defining the sqrt function, but you can define a principal branch as having arg(z) between certain values. The problem is that you have to define it and it can't be made continuous... over the whole complex plane. So you run into the nasty problems that arise with Riemann surfaces.

      R adjoin i is the unique algebraic closure of R, so there are no more possible roots to adjoin. Of course that isn't a great answer... Another interesting way to motivate the discussion might be with the idea that we want multiplication to be continuous, so x*x^(-1) = 1 for any value x. Try to compute the limit as x goes to 0 and we can't: the limit of 1/x as x goes to 0 is not uniquely defined as positive or negative infinity, so even if infinity were a number we still wouldn't have a continuous function that chose a multiplicative inverse.
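
      In symbols, the two one-sided limits disagree:

      $latex \displaystyle \lim_{x \to 0^{+}} \frac{1}{x} = +\infty, \qquad \lim_{x \to 0^{-}} \frac{1}{x} = -\infty&s=2$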

      There is an interesting relation between what you are talking about and the phenomenon that, as n increases, R^n has less and less structure (R^1 is a totally ordered field, R^2 is not totally ordered, R^3 with the cross product is not associative, etc.). R^2 is basically C, but some of the algebraic information is missing in R^2.

