Label Maker

If you've perused this blog, you know that I love probability.  I was fortunate enough to see Al Cuoco and Alicia Chiasson give a really cool presentation at this year's NCTM conference about exploring the probabilities of dice sums geometrically and algebraically.  Wheelhouse.  After we got done looking at some student work and pictures of distributions, Al nonchalantly threw out the following question:

Is it possible to change the integer labels on two dice [from the standard 1,2,3,4,5,6] such that the distribution of sums remains unchanged?

Of course he was much cooler than that.  I've significantly nerded up the language for the sake of brevity and clarity.  Still, good question, right?  And of course since our teacher has posed this tantalizing challenge, we know that the answer is yes, and now it's up to us to fill in the details.  Thusly:

First let's make use of the Cuoco/Chiasson observation that we can represent the throw of a standard die with the polynomial

$latex P(x) = x + x^2 + x^3 + x^4 + x^5 + x^6&s=2$

When we do it this way, the exponents represent the label values for each face, and the coefficients represent frequencies of each label landing face up (relative to the total sample space).  This is neither surprising, nor super helpful.  Each "sum" occurs once out of the six possible.  We knew this already.

What is super helpful is that we can include n dice in our toss by expanding n factors of P(x).  For two dice (the number in question), that looks like

$latex P(x)^2 = x^2 + 2x^3 + 3x^4 + 4x^5 + 5x^6 + 6x^7 + 5x^8 + 4x^9 + 3x^{10} + 2x^{11} + x^{12}&s=2$

You can easily confirm that this jibes with the standard diagram.  For instance the sum of 7 shows up most often (6 out of 36 times), which helps casinos make great heaps of money off of bettors on the come.  Take a moment.  Compare.
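You can also let a machine draw the diagram. Here's a quick Python sketch (nothing clever, just brute-force enumeration of the 36 ordered rolls) that reproduces the distribution of sums:

```python
from collections import Counter
from itertools import product

# Enumerate all 36 ordered rolls of two standard dice and tally the sums.
faces = [1, 2, 3, 4, 5, 6]
sums = Counter(a + b for a, b in product(faces, repeat=2))

for total in sorted(sums):
    print(total, sums[total])
# The sum 7 shows up 6 times out of 36, just as the x^7 coefficient says.
```

The tally for each sum matches the coefficient on the corresponding power of x in the expansion above.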

Okay, so now we know that the standard labels yield the standard distribution of sums.  The question, though, is whether there are any other labels that do so as well.  Here's where some abstract algebra comes in handy.  Let's assume that there are, in fact, dice out there that satisfy this property.  We can represent those with polynomials as well.  We know that the coefficient on each term must still be 1 (each face will still come up 1 out of 6 times), but we don't yet know about the exponents (labels).  So let's say the labels on the two dice are, respectively,

$latex x^{a_1} + x^{a_2} + x^{a_3} + x^{a_4} + x^{a_5} + x^{a_6}&s=2$

and

$latex x^{b_1} + x^{b_2} + x^{b_3} + x^{b_4} + x^{b_5} + x^{b_6}&s=2$.

If we want the same exact sum distribution, it had better be true that

$latex P(x)^2 = (x^{a_1} + x^{a_2} + x^{a_3} + x^{a_4} + x^{a_5} + x^{a_6})(x^{b_1} + x^{b_2} + x^{b_3} + x^{b_4} + x^{b_5} + x^{b_6})&s=2$.

For future convenience (trust me), let's call the first polynomial factor on the right hand side Q(x).  Great!  Now we just have to figure out what all the a's and b's are.  It helps that our polynomials belong to the ring Z[x], which is a unique factorization domain.  A little factoring practice will show us that

$latex P(x)^2 = x^2(x+1)^2(x^2+x+1)^2(x^2-x+1)^2&s=2$.
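If you don't trust your factoring practice, you can check it by multiplying coefficient lists. This is a Python sketch with a hand-rolled `poly_mul` helper (list index = exponent), verifying that the four irreducible factors multiply back to P(x); squaring then gives the factorization of P(x)^2:

```python
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (index = exponent)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# x, (x + 1), (x^2 + x + 1), (x^2 - x + 1) as coefficient lists
factors = [[0, 1], [1, 1], [1, 1, 1], [1, -1, 1]]
P = [1]
for f in factors:
    P = poly_mul(P, f)

print(P)  # [0, 1, 1, 1, 1, 1, 1] -> x + x^2 + x^3 + x^4 + x^5 + x^6
```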

We just have to rearrange these irreducible factors to get the answer we're looking for.  Due to a theorem that is too long and frightening to reproduce here [waves hands frantically], we know that the unique factorization of Q(x)---our polynomial with unknown exponents---must be of the form

$latex Q(x) = x^s(x+1)^t(x^2+x+1)^u(x^2-x+1)^v&s=2$,

where s, t, u, and v are all either 0, 1, or 2.  So that's good news: not too many possibilities to check.  In fact, we can make our lives a little easier.  First of all, notice that Q(1) must equal 6.  Right?  Plugging in 1 just adds up the coefficients, which count the 6 faces of that single die.  But then substituting 1 into the factored form gives us

$latex Q(1) = 1^s \cdot 2^t \cdot 3^u \cdot 1^v = 2^t \cdot 3^u = 6&s=2$

Clearly this means that t and u have to be 1, and we just have to nail down s and v.  Well, if we take a look at Q(0), we also quickly realize that s can't be 0: that would leave Q with a nonzero constant term, which amounts to a face labeled 0.  It can't be 2 either, because, if s is 2, then the smallest sum we could obtain on our dice would be 3---which is absolutely no good at all.  So s is 1 as well.  Let's see what happens in our three remaining cases, when v is 0, 1, and 2:

$latex \begin{array}{ll} v=0: & x + 2x^2 + 2x^3 + x^4 \\ v=1: & x + x^2 + x^3 + x^4 + x^5 + x^6 \\ v=2: & x + x^3 + x^4 + x^5 + x^6 + x^8 \end{array}&s=2$

Check out those strange and beautiful labels!  We can mark up the first die with the exponents from the v = 0 case, and the second die with the v = 2 case.  When we multiply those two polynomials together we get back $latex P(x)^2$, which is precisely what we needed (check if you like)!  Our other option, of course, is to label both dice with the v = 1 case, which corresponds to a standard die.  And, thanks to unique factorization, we can be sure that there are no other cases.  Not only have we found some different labels, we've found all of them!

If the a's on the first die are (1,2,2,3,3,4), then the b's end up being (1,3,4,5,6,8), and vice versa.  And, comfortingly, if the a's on the first die are (1,2,3,4,5,6), then so are the b's on the second one.
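And the punchline is easy to check directly. Here's a small Python sketch comparing the sum distributions of the standard pair and the strange new pair:

```python
from collections import Counter
from itertools import product

def sum_distribution(die1, die2):
    """Tally how often each sum appears over all ordered face pairs."""
    return Counter(a + b for a, b in product(die1, die2))

standard = sum_distribution([1, 2, 3, 4, 5, 6], [1, 2, 3, 4, 5, 6])
relabeled = sum_distribution([1, 2, 2, 3, 3, 4], [1, 3, 4, 5, 6, 8])

print(standard == relabeled)  # True
```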

Two dice with the v = 1 labels are what you find at every craps table in the country.  One die from each of the other two cases forms a pair of Sicherman dice, and they are the only other dice that yield the same sum distribution.  You could drop Sicherman dice in the middle of Vegas, and nobody would notice.  At least in terms of money changing hands.  The pit boss might take exception.  Come to think of it, I cannot stress enough how important it is that you not attempt to switch out dice in Vegas.  Your spine is also uniquely factorable...into irreducible vertebrae.

*This whole proof has been cribbed from Contemporary Abstract Algebra (2nd ed.), by Joseph A. Gallian.  If you want the whole citation, click his name and scroll down.*

A Tale of Two Numbers

A few months ago, we had just finished talking about polynomials and were moving into matrices.  Because a lot of matrix concepts have analogs in the real numbers, we kicked things off with a review of some real number topics.  Specifically, I wanted to talk about solving linear equations using multiplicative inverses as a preview of determinants and using inverse matrices for solving linear systems.  For instance:

$latex \begin{array}{ll}
2x=8 & AX=B \\
2^{-1}2x = 2^{-1}8 & A^{-1}AX = A^{-1}B \\
1x = \frac{1}{2}8 & IX = A^{-1}B \\
x=4 & X = A^{-1}B
\end{array}&s=2$
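The parallel is easy to play with numerically. Here's a Python sketch (the particular 2×2 matrix and right-hand side below are made-up example values) that solves the scalar equation and the matrix equation the same way, by multiplying by an inverse:

```python
# Scalar case: solve 2x = 8 by multiplying by the inverse of 2.
x = (1 / 2) * 8
print(x)  # 4.0

# Matrix case: solve AX = B via X = A^{-1}B, with a hand-rolled 2x2 inverse.
A = [[2.0, 1.0],
     [1.0, 1.0]]
B = [3.0, 2.0]

det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
# If det were 0, A would have no inverse -- the matrix analog of dividing by 0.
A_inv = [[ A[1][1] / det, -A[0][1] / det],
         [-A[1][0] / det,  A[0][0] / det]]

X = [A_inv[0][0] * B[0] + A_inv[0][1] * B[1],
     A_inv[1][0] * B[0] + A_inv[1][1] * B[1]]
print(X)  # [1.0, 1.0]
```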

As an aside, I threw out this series of equations in the hopes of (a) foreshadowing singular matrices, and (b) offering a justification for the lifelong prohibition against dividing by zero:

$latex \begin{array}{l}
0x=1 \\
0^{-1}0x = 0^{-1}1 \\
1x = \frac{1}{0}1 \\
x = \frac{1}{0}
\end{array}&s=2$

I thought this was just so beautiful.  Why can't we divide by zero?  Because zero doesn't have a multiplicative inverse.  There is no solution to 0x = 1, so $latex 0^{-1}$ must not exist!  Q.E.D.

As it turns out, Q.E.NOT.  One of my students said, "Why can't we just invent the inverse of zero?  Like we did with i?"

Again, we had just finished our discussion of polynomials, during which we had conjured the square root of -1 seemingly out of the clear blue sky.  They wanted to do the same thing with 1/0.  What an insightful and beautiful idea!  Consider the following stories, from my students' perspectives:

  1. When we're trying to solve quadratic equations, we might happen to run into something like $latex x^2 = -1$.  Now of course there is no real number whose square is -1, so for convenience let's just name this creature i (the square root of -1), and put it to good use immediately.
  2. When we're trying to solve linear equations, we might happen to run into something like 0x = 1.  Now of course there is no real number that, when multiplied by 0, yields 1, so for convenience let's just name this creature j (the multiplicative inverse of 0), and put it to good use immediately.

Why are we allowed to do the first thing, but not the second?  Why do we spend a whole chapter talking about the first thing, and an entire lifetime in contortions to avoid the second?  Both creatures were created, more or less on the spot, to patch up shortcomings in the real numbers.  What's the difference?

And this is the tricky part: how do I explain it within the confines of a high school algebra class?  Well, I can tell you what I tried to do...

Let's suppose that j is a legitimate mathematical entity in good standing with its peers, just like i.  Since we've defined j as the number that makes 0j = 1 true, it follows that 0 = 1/j.  Consider the following facts:

$latex \begin{array}{l}
2 \cdot 0 = 0 \\
2\frac{1}{j} = \frac{1}{j} \\
\frac{2}{j} = \frac{1}{j} \\
2 = 1
\end{array}&s=2$

In other words, I can pretty quickly show why j allows us to prove nonsensical results that lead to the dissolution of mathematics and perhaps the universe in general.  After all, if I'm allowed to prove that 2 = 1, then we can pretty much call the whole thing off.  What I can't show, at least with my current pedagogical knowledge, is why i doesn't lead to similar contradictions.

Therein lies the broad problem with proof.  It's difficult.  If there are low-hanging fruit on the counterexample tree, then I can falsify bad ideas right before my students' very eyes.  But if there are no counterexamples, then it becomes incredibly tough.  It's easy to show a contradiction, much harder to show an absence of contradiction.  I can certainly take my kids through confirming examples of why i is helpful and useful.  But in my 50 min/day with them, there's just no way I can organize a tour through the whole scope and beauty of complex numbers.  Let's be serious, there's no way that I can even individually appreciate their scope and beauty.

The complex numbers aren't just a set, or a group.  They're not even just a field.  They form an algebra (so do matrices, which brings a nice symmetry to this discussion), and algebras are strange and mysterious beings indeed.  I could spend the rest of my life learning why i leads to a rich and self-consistent system, so how am I supposed to give a satisfactory explanation?

Take it on faith, kids.  Good enough?

Update 3/20/12: My friend, Frank Romascavage, who is currently a graduate student in math at Bryn Mawr College (right down the road from my alma mater Villanova), pointed out the following on Facebook:

"We need to escape integral domains first so that we can have zero divisors!  Zero divisors give a quasi-invertibility condition (with respect to multiplication) on 0.  They aren't really true inverses, but they are somewhat close!  In $latex Z_{6}$ we have two zero divisors, 3 and 2, because 3 times 2 (as well as 2 times 3) in $latex Z_{6}$ is 0."

In many important ways, an integral domain is a generalization of the integers, which is why the two behave very much the same.  An integral domain is just a commutative ring (usually assumed to have a unity) with no zero divisors.  If a and b are two nonzero members of a ring and ab = 0, then a and b are said to be zero divisors.  In other words, to "escape integral domains" is to move into a ring where the Zero Product Property no longer holds.  This means that, in non-integral domains, we can almost, sort of, a little bit, divide by zero.  Zero doesn't really have a true inverse, but it's close.  Frank's example is the numbers 2 and 3 in the ring of integers modulo 6, since 3 x 2 = 0 (mod 6).  In fact, the ring of integers modulo n fails to be an integral domain in general, unless n is prime.  CTL
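Frank's example is easy to poke at numerically. Here's a little Python sketch that hunts for zero divisors mod n (it turns up 4 as well, since 4 · 3 = 12 = 0 mod 6):

```python
def zero_divisors(n):
    """Nonzero elements a of Z_n with some nonzero b such that a*b = 0 (mod n)."""
    return sorted({a for a in range(1, n)
                     for b in range(1, n) if (a * b) % n == 0})

print(zero_divisors(6))  # [2, 3, 4]
print(zero_divisors(7))  # [] -- mod a prime, there are none
```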

Greek to Me

I recently had a chance to do one of my favorite (and my students' least favorite) things: talk about words in math class.  Math words.  I also had the opportunity to use one of my favorite math-teacher-type resources: a dictionary.  I don't mean the glossary out of a math book, or a page from Wolfram MathWorld, or any one of the approximately 10.5 million web results (as of this writing) that the Google spits out when prompted with "math" + "dictionary."  I'm talking about a good, old fashioned English dictionary, one of three left in my room by the previous English-teaching occupant: Webster's Ninth New Collegiate, circa 1989.


Choosy

Something deeply unsettling is afoot in the land of math education when I'm teaching the same backwards thing in the same backwards way it was presented to me as a high school kid.  To wit, combinations.

Here is the current state of the art, according to the big boys of Advanced Algebra publishing:

Fundamental Counting Principle ====> Permutations ====> Combinations ====> Pascal's Triangle ====> Binomial Theorem  ====> Celebration

I submit that, when we do it this way, we're double-charging our students for their attention.  We bog them down in unnecessary algebraic trifling, and we go out of our way to delay the payoff for just as long as possible.  It's bad marketing, and it's bad teaching.  And we don't exactly get away scot-free in all this.  I know I'm in for a rough couple of weeks any time I have to close my opening lesson with, "Trust me."


The Dead Puppy Theorem

So for the past few months I've been telling my kids that, every time they write $latex (x+y)^2 = x^2+y^2$, they kill a puppy.  In fact, I will hereafter refer to that equation as the Dead Puppy Theorem, or DPT.  Since its discovery in early September, my students' usage of the DPT has accounted for more canine deaths than heartworms.