# What's in a Circle?

Recently I had the good fortune to both attend #TMC13 and share my very own personal favorite lesson during, appropriately, one of the "My Favorite" sessions.  And now I'll share it with you.

I was getting ready to start a unit on conic sections with my Advanced Algebra kids.  I was planning to start, as I assume many of you do, with circles.  I imagined it going something like this, Lusto's 10 Steps to Circle Mastery:

1. Get somebody, anybody in the room, to spout the definition of a circle, hopefully including a phrase such as, "the set of all points a fixed distance from a given point..."
2. Ask how, exactly, we compute that distance between two points.
3. The distance formula: d = √((x_2 - x_1)^2 + (y_2 - y_1)^2).
4. Why the distance formula is fugly.  Remember the Pythagorean Theorem?  Of course you do.  It's awesome.
5. In the plane: a^2 + b^2 = c^2.
6. In the Cartesian plane, centered at the origin: x^2 + y^2 = d^2.
7. This distance, d, has a special name in circles, right?  Right: x^2 + y^2 = r^2.
8. Appeal to function families and translations by ⟨h, k⟩.
9. (x - h)^2 + (y - k)^2 = r^2.
10. Boom.
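
The algebra in the list above is easy to sanity-check numerically.  Here's a quick sketch (mine, not part of the lesson) confirming that points a distance r from (h, k) satisfy the translated circle equation:

```python
import math

# The translated circle equation: a point (x, y) is on the circle
# centered at (h, k) with radius r exactly when
# (x - h)^2 + (y - k)^2 = r^2.
def on_circle(x, y, h, k, r, tol=1e-9):
    return abs((x - h) ** 2 + (y - k) ** 2 - r ** 2) < tol

# Walk around a circle of radius 5 centered at (2, -3) by angle and
# confirm every point satisfies the equation -- and that each point's
# distance from the center (the distance formula) really is r.
h, k, r = 2, -3, 5
points = [(h + r * math.cos(t), k + r * math.sin(t)) for t in (0, 0.7, 2.1, 4.4)]
print(all(on_circle(x, y, h, k, r) for x, y in points))                  # True
print(all(abs(math.hypot(x - h, y - k) - r) < 1e-9 for x, y in points))  # True
```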

Based on my hopes for Step 1, and based on my need for like five uninterrupted minutes to take attendance, find my coffee mug, &c., I hastily scribbled an extremely lazy and unimaginative warm-up discussion question on the whiteboard.  Four simple words that led to some surprising and amazing mathematical conversation.

What is a circle?

That's it.  My favorite lesson.  The whole thing.  And here's how it went.

I was walking around looking at/listening to all the different definitions the groups had come up with.  And they were nuts.  There were dubious claims about unquantifiable symmetry, sketchy sketches with line segments of indeterminate provenance, rampant appeals to a mysterious property known as roundness.  Most of the arguments were logically circular but, alas, mathematically not.  The word curvy appeared more than once.  It was a glorious disaster of handwaving and frustration.  I knew, deep in my reptilian brain, that this is what's known in the business as a "teachable moment."

Nailed it.

At this point I was basically just walking around being a jerk.  I was drawing all kinds of crazy figures that minimally conformed to what they were telling me a circle was, and getting lots of laughs in the process.  And then I had the thought, even deeper in my reptilian brain, that transformed the whole experience from an interesting activity into a bona fide lesson: Why the hell am I the one doing this?

So here's what the lesson eventually became, Lusto's 6 Instructions to Humans on the Brink of Amazing Mathematical Discussion:

1. In your groups, come up with an answer to the question on the board: What is a circle?  Write down the best definition you can all agree on.
2. Absolutely no book-looking or Googling.  If all goes well, you will be frustrated.  Your peers will frustrate you.  I will frustrate you.  Don't rob anybody else of this beautiful struggle.  If your definition includes the word locus, you are automatically disqualified from further participation.
3. Each group will have one representative present your definition to the class.  No clarification.  No on-the-fly editing.  No examples.  No pantomime.  Your definition will include, and be limited to, English words in some kind of semantically meaningful order.  Introduce variables at your own risk.
4. If you're going to refer to some other mathematical object (and I suspect you will), make sure it's not an object whose definition requires the concept of circle in the first place.  (Ancillary benefit: you will be one of the approximately .01% of the population who learns what "begging the question" actually means.)
5. Once a group presents a definition, here is your new job: construct a figure that meets the given definition precisely, but is not a circle.  Pick nits.  You are a counterexample machine.  A bonus of my undying respect for the most ridiculous non-circle of the day.
6. When you find a counterexample, make a note of the loophole you exploited.  What is non-circley about your figure?

After giving the instructions, I could pretty much just sit back for a while and watch things get awesome.  If there's one thing that's easy to do, it's get teenagers to argue with each other.  Granted, it's a little harder to get them to argue about math, but not much.  (They're basically ready to fight at all times; the MacGuffin is largely unimportant.)  So that's one thing that makes this lesson my favorite.  Another thing is that we ended up with a pretty bullet-proof circle definition by the end of the exercise.  When you spend a whole lot of time crawling through loopholes in the hopes of beating up on your peers, you find an awful lot of loopholes to close.  Talk about a fantastic mathematical habit of mind.  Yet another thing, and maybe the coolest, is that it led to some of the best questions/observations my kids ever came up with.  Here is a brief, paraphrased sampling, with my annotations as to why they're so great:

• Wait, if the circle is just the points on the edge, then how can a circle have area?  There's nothing that gets kids thinking precisely about mathematical language faster than the realization that every teacher in the world is using it incorrectly.  We should really be saying, the area of the region bounded by a circle, or the area of the circle's interior...  I had never thought twice about that, but now I sure as hell do.
• What does it mean to be "inside" a circle?  Similar to the above, but even more amazing.  The fact that a circle divides the plane into two disjoint regions is a completely nontrivial result.  It's basically a statement of the Jordan Curve Theorem, which was proved pretty recently in the history of mathematics.
• If a radius is a distance, then a "circle" depends on how you measure distance.  This one is on my Holy Shit List.  We had spent like a half hour one day talking about the Taxicab Plane, just because I thought it was cool and made the distance formula seem mildly less boring.  But somebody pointed out that circles would look totally different if we measured distances that way.  And yes!  They would!  At that point, I felt like I should probably just retire.
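
That last observation is easy to make concrete.  Here's a little sketch (my illustration, not something we did in class) of what a taxicab "circle" looks like:

```python
# The "circle" of radius 3 around the origin under the taxicab metric
# d((a,b),(c,d)) = |a - c| + |b - d|: the lattice points at taxicab
# distance exactly 3 trace out a diamond, not anything "round."
def taxicab(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

r = 3
diamond = sorted((x, y)
                 for x in range(-r, r + 1)
                 for y in range(-r, r + 1)
                 if taxicab((x, y), (0, 0)) == r)
print(diamond)
print((3, 0) in diamond and (0, 3) in diamond)  # True: the "circle" has corners
```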

The final reason this lesson is my favorite is probably also the reason that you should care.  There's nothing particularly special about the word circle.  You can take the sentence, What is a [widget]? and pick just about any mathematical widget that kids have some nascent intuition about.  Give it a shot.  Maybe it won't be your favorite, but it'll be pretty great.

# Inconvenient Truths

As happens with amazing frequency, Christopher Danielson said something interesting today on Twitter.

— Christopher (@Trianglemancsd) July 29, 2013

And, as also happens with impressive regularity, Max Ray chimed in with something that led to an interesting conversation --- which, in the end, culminated in my assertion that not everything that is mathematically true is pedagogically useful.  I would go further and say that a truth's usefulness is a function of the cognitive level at which it becomes both comprehensible and important --- but not before.

By way of an example, Cal Armstrong took a shot at me (cf. the Storify link above) for my #TMC13 assertion that it is completely defensible to say that a triangle (plus its interior) is a cone.  Because he is Canadian, I think he will find the following sentiment particularly agreeable: we're both right.  A triangle both is, and is not, a cone, depending on the context.  It might be helpful to think of it as Schrödinger's Coneangle: an object that exists as the superposition of two states (cone and triangle), collapsing into a particular state only when we make a measurement.  In this case, the "measurement" is actually made by our audience.

When I am speaking to an audience of relative mathematical maturity, I can (ahem...correctly) say that cone-ness is a very broadly defined property: given any topological space, X, we can build a cone over X by forming the quotient space

CX = (X × [0, 1]) / ~

with the equivalence relation ~ defined as follows: (x_1, 1) ~ (x_2, 1) for all x_1, x_2 in X.

If we take X to be the unit interval with the standard topology, we get a perfectly respectable Euclidean triangle (and its interior).  Intuitively, you can think of taking the Cartesian product of the interval with itself, which gives you a filled-in unit square, and then identifying one of the edges with a single point.  Boom, coneangle.  Which, like Sharknado, poses no logical problems.
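
For the concrete-minded, here's a tiny numerical sketch of that identification.  The particular embedding is my own illustration: send (x, t) to ((1 - t)x + t/2, t), so the square's top edge lands on a single apex.

```python
# One concrete realization of (X x [0,1]) / ~ when X is the unit
# interval: map (x, t) to ((1 - t)*x + t*0.5, t).  At t = 1, every x
# maps to the apex (0.5, 1), so the entire top edge of the unit square
# is identified to one point -- leaving a filled triangle.
def cone_map(x, t):
    return ((1 - t) * x + t * 0.5, t)

top_edge = [cone_map(x / 10, 1.0) for x in range(11)]
print(set(top_edge))          # {(0.5, 1.0)} -- the whole edge collapses

bottom_edge = [cone_map(x / 10, 0.0) for x in range(11)]
print(len(set(bottom_edge)))  # 11 -- the base is left alone
```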

Of course, it is a problem when you're talking to a middle school geometry student.  In that situation, saying that a triangle is a cone is both supremely unhelpful and ultimately dishonest.  What we really mean is that, in the particular domain of 3-dimensional Euclidean geometry, when we have a circle (disk) in a plane and one non-coplanar point, we can make this thing called a cone by taking all the line segments between the point and the base.  But to that student, in that phase of mathematical life, the particular domain is the only domain, and so we rightly omit the details.  In an eighth-grade geometry class, there is absolutely no good reason to introduce anything else.

Constructing a topological cone over the unit interval

We do this all the time as math teachers.  "Here, kid, is something that you can wrap your head around.  It will serve you quite well for a while.  Eventually we're going to admit that we weren't telling you the whole story --- maybe we were even lying a little bit --- but we'll refine the picture when you're ready.  Promise."

Which brings me back to Danielson's tweet.  From a mathematical point of view, there are all kinds of problems with saying that a rectangle has "two long sides and two short sides" (so many that I won't even attempt to name them).  But how bad is this lie?  Better yet, how bad is the spirit of this lie?  I think it depends on the audience.  I'm not sure it's so very wrong to draw a sharp (albeit technically imaginary) distinction for young children between squares and rectangles that are not squares.  It doesn't seem all that different to me, on a fundamental level, from saying that cones are 3-dimensional solids.  Or that you can't take the square root of a negative number.  Or that the sum of the interior angles of a quadrilateral is 360 degrees.  None of those statements is strictly true, but the truths are so very inconvenient for learners grappling with the concepts that we actually care about at the time.  It's not currently important that they grasp the complete picture.  And it's probably not feasible for them to do so, anyway.

Teaching mathematics is an iterative process, a feedback loop.  New information is encountered, reconciled with existing knowledge, and ultimately assimilated into a more complete understanding.  Today you will "know" that squares and rectangles are different.  Later, when you're ready to think about angle measure and congruence, you will learn that they are sometimes the same.  Today you will "know" that a times b can only be 0 if either a or b is zero.  And tomorrow you will learn about the ring of integers modulo 6.

I will tell you the truth, children.  But maybe not today.
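
(That mod-6 punchline, as a quick sketch of my own:)

```python
# In the integers mod 6, a * b can be 0 with neither factor 0 -- the
# "zero product property" is one of tomorrow's inconvenient truths.
zero_divisor_pairs = [(a, b) for a in range(1, 6) for b in range(1, 6)
                      if (a * b) % 6 == 0]
print(zero_divisor_pairs)  # [(2, 3), (3, 2), (3, 4), (4, 3)]
```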

# To the Limit...One More Time

There's an interesting article in this month's Mathematics Teacher about the effects of the particular language elements we use to communicate mathematical ideas.  The main thread revolves around limit concepts, primarily because they're both philosophically and practically confusing for many beginning calculus students, and because, it turns out, a teacher's particular choices regarding words and metaphors have an important impact on student (mis)understanding.

Limits embody a special relationship between mathematical process and mathematical object.  We speak of them in terms of variables "approaching" or "tending toward" particular values, but we subsequently manipulate them as static entities.  I can, for instance, talk about the limiting value of the expression 1/x as x grows without bound (a dynamic concept), but that limiting value is ultimately just a single real (static) number: zero.  There's an uncomfortable tension in that duality.

Even the notation is ambiguous.  Here's the fact I mentioned in the preceding paragraph, symbolically:

The arrow implies motion, but the equals sign implies assignment.  There are elements of both process and object.
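
You can feel that tension numerically.  A little sketch (my addition): the process marches on forever, while the object it points at is just the single number 0.

```python
# The process: 1/x for ever-larger x...
for k in range(1, 7):
    x = 10 ** k
    print(f"x = {x:>9}, 1/x = {1 / x:.7f}")

# ...never actually reaches the object, the real number 0:
print(all(1 / 10 ** k > 0 for k in range(1, 20)))  # True
```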

I've touched on this duality before, which has sparked some great conversations.  A few months ago, I had a supremely interesting email chat with Christopher Danielson after he pointed me toward the writings of Anna Sfard.  He has graciously agreed to allow me to reproduce that conversation here in its original form; I've only redacted some of the more boring pleasantries and collapsed some strings of shorter messages into longer ones.  Enjoy.

Chris Lusto
To: Christopher Danielson

Seriously, thanks for the Sfard tip.  I've read a few of the articles she has on her website (which, by the way, why are college professors' websites like the most aesthetically displeasing things on the internet?  Just use a white background and stop being weird.), and you were right: I dig her.  I read the article on duality [PDF] and had one major bone of contention.

I really like the idea of duality versus dichotomy, and she makes, I think, a compelling argument in general.  I just worry it might be a little too ambitious.  She hedges a little bit, saying things like "more often than not" mathematical objects can be conceived both operationally and structurally, but I still think this idea of duality runs into serious problems when infinite things come into play--and that's not exactly a trivial subset of "mathematical objects."

If we allow that operational conception is (a) just as valid/important as structural and (b) often, in fact, precedes structural conception, what are we to make of processes that never end, that never produce anything because they're always in production?  Sfard even says, "...interpreting a notion as a process implies regarding it as a potential rather than actual entity, which comes into existence upon request in a sequence of actions."  But what if we can't ever fulfill the request, because we're always on hold, waiting in vain for the end of an unending sequence?  And what about this business of "potential?"  That just smacks of the "potential infinities" of the ancient Greeks that held back western mathematics for a couple millennia.  It seems like we have to admit either (a) an infinite process can terminate in finite time in order to produce a structural object, or (b) these objects aren't really at all structural, because they live in the world of potentiality.  I don't find either of those particularly satisfying.  I think, in the case of infinite notions, the operational conception leads to a fundamental misconception, a la my student D.

Your thoughts?  Whenever you have a moment, of course.

Chris

Christopher Danielson
To: Chris Lusto

"Ambitious" describes Anna Sfard's intellectual habits very well, I think. She was in a half-time appointment at Michigan State (and half time at Haifa) for part of my grad school time, and she was on my dissertation committee. The woman is crazy smart. And it seems to be a characteristic of Israeli intellectuals to commit very strongly to one's ideas. Not a maybe or a perhaps to be found in her oeuvre, I don't think.

I have no explanation for the poor poor quality of academics' websites, except to say that it is representative of tech use in higher ed more generally. See also @EDTECHHULK on Twitter and Dan Meyer's comments here (esp. a couple screens down the page, at "Real Talk about Grad School"):

http://blog.mrmeyer.com/?p=12592

I'm still formulating thoughts on processes that never terminate. But I'm not sure I fully understand your objection. Your classroom scenarios seem to suggest that indeed process and object are both fundamentally important ways of thinking about infinity. And consider the language of limits..."as x goes to infinity" or even "as x grows without bound". Those are both process-based ways of talking, right?

csd

Chris Lusto
To: Christopher Danielson

I think Sfard's right that, in general, process and object are both important methods of mathematical conception.  And yeah, multiple representations are not only admissible, but probably desirable (thinking here, specifically, of HS algebra and the Lesh Model), but isn't operational understanding misleading when you're talking about infinity?

Thinking of f(x) = 2x as a process that doubles inputs is valuable, and so is a picture of the resulting object/graph.  And, in a case like this one, I don't think you lose or gain all that much with either vantage.  Sometimes it's helpful to think of the process, and other times the object.

But thinking of asymptotic behavior procedurally, for example, is very, very different from the object we call a "limit."  It's nice if students can understand that, as x gets larger, 1/x gets arbitrarily close to 0.  I mean, certainly if we hold a numerator constant and increase the denominator, this process yields subsequently smaller and smaller values.  But I think that's still like a mile away from understanding that lim_x-->∞ {1/x} = 0.  Like, is equal to.  Is identical to as an object.  Is just another name for.  Like, 23 + lim_x-->∞ {1/x} = 23.

If procedure (process) is linked to product (object)--like, say, "4 divided by 7" is linked to "4/7"--then how are we to reconcile a never-ending process with a finite, tangible product that can be manipulated like any other mathematical object?  Doesn't it force us to accept that 1/x eventually "gets to" 0 (which it doesn't), or that the limit is some kind of potential result (which it isn't) that can't really ever be called a proper object because the process is, by definition, never-ending?

I'm going to stop typing words, because I feel like as my words -->∞, my clarity --> 0.

C

Christopher Danielson
To: Chris Lusto

I see...so to boil it down to a debatable question...

Is the object necessarily the product of the process?

Do I have it right?

btw...if I got that question right, then I say 'no'.

I can think about 1,352,417 and treat it as an object, even though I can assure that I have never participated in any sort of process that yielded that number.

To say nothing of googolplex.

csd

Chris Lusto
To: Christopher Danielson

I think that's about right, but with one important qualification.

Is the object necessarily the product of the process?  Then I agree, no.  But you at least have the option of defining it either way.  Even if you've never constructed 1,352,417 widgets, there's nothing philosophically problematic with the process that did/could.  You're right, there isn't even a measly googol of anything, but that doesn't stop it from being the eventual result of (1+1+...+1).

So...

Is the object the result of the process?  Not necessarily, but that's not a huge problem for me.

Could the object be the result of the process?  If the answer is no (which my gut believes it to be in the infinite case), then how can we reasonably talk about it as both a process and an object?  Does the duality break down?

C

Christopher Danielson
To: Chris Lusto

See I don't see a huge difference philosophically between "a product that could be created by a known process, but not in my lifetime" (counting to googol) and "a product that could never be created" (infinity).

In both cases, for me, the process is (1) incomplete, and (2) hypothetical.
Why does it matter at the core whether the result is theoretically achievable or not? Either way, I've imagined it.

And I think imagination is key. I don't recall whether Sfard writes about that or not (probably not, since she's all language, no imagery). But I do think the transition from process to object is at least in part one involving imagination. I have to imagine the object into being in mathematics precisely because mathematical objects are abstract.

And when I'm struggling to understand a new object (say a limit), it is often helpful to imagine the process that produced it. But I don't have to see the process through to the end.

csd

Chris Lusto
To: Christopher Danielson

Think about our Hz conversation.  Even with arbitrarily huge numbers of wave combinations, we get sinusoidal waves.  I can get as close to a square wave as I want, but in order to actually obtain the square wave object, the process that got me arbitrarily close to my goal breaks down and fails.  The process is insufficient to the object.  The difference between the square wave and the sinusoidal wave that's arbitrarily close to square is ultimately qualitative, not just quantitative--and there's the rub.  Wasn't that precisely what you and Frank [Noschese] convinced me of?

C

Christopher Danielson
To: Chris Lusto

But the square wave is the limit. There's the object. The limit (process? object?) produces the square wave.

I have no idea what I convinced you of. But I know that the argument I was making was that polynomials---by definition---have finitely many terms. And e^x can be written as infinitely many terms, each one a polynomial. Is e^x a polynomial? By the letter of the law, no. But in spirit? Yes. And that's beautiful.

I got in trouble doing a CMP demonstration lesson once. I talked with students about a cylinder being a circular prism. The algebra teacher observing got upset with me because a prism has polygonal faces. Ergo, "circular prism" is nonsense.

I had occasion to follow up a year or so later with my former complex analysis professor from MSU grad school. He had absolutely no problem calling a cylinder a circular prism.  No problem at all.

What to learn? Unclear.

csd

Chris Lusto
To: Christopher Danielson

I see a huge distinction between "unachievable due to resource constraints" and "unachievable by definition."  Why is the possibility that CERN moved some particles faster than light a big deal?  We've already moved all kinds of stuff 99.999999% that fast in the lab.  The extra .000001% is practically trivial, but philosophically enormous.  It's that faster-than-light travel seemed to be not just practically impossible, but literally impossible--probability exactly 0.

The difference between almost 0 and 0, no matter how small, is mathematically gigantic.

This is seriously all kinds of fun, but I have to go do some domestic things.  To be continued...in finite time.

C

Christopher Danielson
To: Chris Lusto

That's the beautiful thing about email. It is at heart an asynchronous medium.

By the way, some would say that you have pointed to an important difference between mathematics and the sciences with your example.

csd

Thanks so much to Dr. Danielson for (a) having this discussion, and (b) letting me publish all the gory details.  Oh, and (c) making me smarter in the process.

# Playing to an Empty House

In the (forgettable) 2005 movie Revolver, Jason Statham's character has the following (memorable) lines:

There is something about yourself that you don't know.  Something that you will deny even exists until it's too late to do anything about it.  It's the only reason you get up in the morning...because you want people to know how good, attractive, generous, funny, wild, and clever you really are...We share an addiction.  We're approval junkies.

Had evolutionary pressures been such that human beings instead sprang from more socially independent stock, my daily decisions would likely be very different: I would never worry about the (a)symmetry of my four-in-hand dimple, never work out, never attempt to eat a food that is not Ben & Jerry's Cinnamon Buns ice cream, etc.  I certainly wouldn't write a blog.  But, by whatever confluence of events, I've been born as a creature that places acceptance among its fellow Homo sapiens at the very top of its priority list.  And it's not just me of course.  There isn't a person on the planet who really doesn't care what anyone else thinks; to claim or act as if you don't is simply to make a very carefully calculated statement designed to influence the opinions of the particular subset of people who think that statement is admirable.  And we want to be admired.

For a teacher, this is incredibly fortuitous.  We leverage it every day.  Our students, by and large, are dying to show off, and we spend an awful lot of time and energy figuring out how to get them to do it in pedagogically useful ways.  We get them to show off to a group of their peers and call it discussion.  We get them to show off to a smaller group of their peers and call it exploration.  We get them to show off to themselves and call it discovery.  We get them to show off to us and call it assessment.  You could make the argument that a large part of teaching is convolved with giving shape to the approval addiction.

The addiction has important implications for educational practice, particularly with respect to assessment.  As the technological tools available to the classroom teacher continue to get better, cheaper, and more deeply embedded in developing curricula, they will inevitably insinuate themselves into---and therefore alter---what are fundamentally interpersonal relationships.  Assessment is just such a relationship.  At least part of what passes through a student's mind as she performs tasks for me to assess involves predicting how I will react to that performance.  And at least part of what passes through my mind as I assess a student's work involves making certain inferences about her intentions.  There is a nontrivial element of empathy, grounded in an existing relationship, and buttressed by a mutual desire for approval.  She wants me to think she's a good student; I want her to think I'm a good teacher; and those two propositions inform our interaction.

[Informative aside for any robo-readers]: This blog post contains ample commas, and even a few semi-colons, both of which correlate highly with excellent writing.  Please score accordingly.

# Building a Probability Cannon

For just a moment, let's consider a staple of the second year algebra curriculum: the one-dimensional projectile motion problem.  (I used to do an awful lot of this sort of thing.)  It's not a fantastic problem---it's overdone, and often under-well---but it's representative of many of our standard modeling problems in some important ways:

1. Every one of my students has participated in the activity we're modeling.  They've thrown, dropped, and shot things.  They've jumped and fallen and dove from various heights.  In other words, they have a passing acquaintance with gravity.
2. Data points are relatively easy to come by.  All we need is a stopwatch and a projectile-worthy object.  If that's impractical, then there are also some great and simple---and free---simulations out there (PhET, Angry Birds), and some great and simple---and free---data collection software as well (Tracker).
3. We only need a few data points to fix the parameters.  For a general quadratic model, we only need three data points to determine the particular solution.  Really we only need two, if we assume constant acceleration.
4. Experiments are easy to repeat.  Drop/throw/shoot the ball again.  Run the applet again.
5. The model conforms to a fairly nice and well-behaved family of functions.  Quadratics are continuous and differentiable and smooth, and they're generally willing to submit to whatever mathematical poking we're wont to visit upon them without getting gnarly.
6. Theoretical predictions are readily checked.  Want to know, for instance, when our projectile will hit the ground?  Find the sensible zero of the function (it's pretty easy to sanity check its reasonableness---see #1 above).  Look at a table of values and step through the motion second-by-second (use a smaller delta t for an even better sense of what's going on).  Click RUN on your simulation, and wait until it stops (self-explanatory).  And, if you're completely dedicated, build yourself a cannon and put your money where your mouth is.
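
Condition 3 is easy to demonstrate concretely.  Here's a sketch; the sample heights below are invented, generated from the toy model h(t) = -4.9t^2 + 20t + 1.5:

```python
# Three (time, height) samples pin down h(t) = a*t^2 + b*t + c.
# Sample data is made up: h(t) = -4.9*t^2 + 20*t + 1.5, i.e. a launch
# at 20 m/s from 1.5 m up, measured at t = 0, 1, 2 seconds.
def fit_quadratic(h0, h1, h2):
    """Recover a, b, c from heights at t = 0, 1, 2."""
    c = h0
    a = (h2 - 2 * h1 + h0) / 2   # half the second finite difference
    b = h1 - h0 - a
    return a, b, c

a, b, c = fit_quadratic(1.5, 16.6, 21.9)
print(round(a, 6), round(b, 6), round(c, 6))  # -4.9 20.0 1.5
```

Two data points would do if we assumed the acceleration a = -4.9 up front, as in condition 3's parenthetical.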

Of course I've chosen to introduce this discussion with the example of projectile motion, but there are plenty of other candidates: length/area/volume, exponential growth and decay, linear speed and distance.  Almost without exception (in the algebra classroom), we model phenomena that satisfy the six conditions listed above.

Almost.  Because then we run into probability, and probability isn't so tame.  I'll grant that #1 still holds (though I'm not entirely convinced it holds in the same sense), but the other five conditions go out the window.

## Data points are NOT easy to come by.

I can already hear you protesting.  "Flip a coin...that's a data point!"  Well, yes.  Sort of.  But in the realm of probability, individual data points are ambiguous.  The ordered pair (3rd flip, heads) is very different from (3 seconds, 12 meters).  They're both measurements, but the first one has much, much higher entropy.  Interpretation becomes problematic.  Here's another example: My meteorologist's incredibly sophisticated model (dart board?) made the following prediction yesterday: P(rain) = 0.6.  In other words, the event "rain" was more likely than the event "not rain."  It did not rain yesterday.  How am I to understand this un-rain?  Was the model right?  If so, then I'm not terribly surprised it didn't rain.  Was the model wrong?  If so, then I'm not terribly surprised it didn't rain.  In what sense have I collected "data?"

And what if I'm interested in a compound event?  What if I want to know not just the result of a lone flip, but P(exactly 352 heads in 1000 flips)?  Now a single data point suddenly consists of 1000 trials.  So it turns out data points have the potential to be rather difficult to come by, which brings us to...
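
For what it's worth, that particular compound probability is at least computable exactly (a sketch of mine, no simulation required):

```python
import math

# A single "data point" for the compound event is a 1000-flip trial.
# The exact probability of exactly 352 heads in 1000 fair flips:
# C(1000, 352) * (1/2)^1000.
n, k = 1000, 352
p = math.comb(n, k) / 2 ** n
print(p)  # on the order of 10^-21 -- astronomically unlikely
```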

## We need an awful lot of data points.

I'm not talking about our 1000-flip trials here, which was just a result of my arbitrary choice of one particular problem.  I mean that, no matter what our trials consist of, we need to do a whole bunch of them in order to build a reliable model.  Two measurements in my projectile problem determine a unique curve and, in effect, answer any question I might want to ask.  Two measurements in a probabilistic setting tell me just about nothing.

Consider this historical problem born, like many probability problems, from gambling.  On each turn, a player rolls three dice and wins or loses money based on the sum (fill in your own details if you want; they're not so important for our purposes here).  As savvy and degenerate gamblers, we'd like to know which sums are more or less likely.  We have some nascent theoretical ideas, but we'd like to test one in particular.  Is the probability of rolling a sum of 9 equal to the probability of rolling a sum of 10?  It seems it should be: after all, there are six ways to roll a 9 ({6,2,1},{5,3,1},{5,2,2},{4,4,1},{4,3,2},{3,3,3}), and six ways to roll a 10 ({6,3,1},{6,2,2},{5,4,1},{5,3,2},{4,4,2},{4,3,3})*.  Done, right?

It turns out this isn't quite accurate.  For instance, the combination {6,2,1} treats all of the 3! = 6 permutations of those numbers as one event, which is bad mojo.  If you go through all 216 possibilities, you'll find that there are actually 27 ways to roll a 10, and only 25 ways to roll a 9, so the probabilities are in fact unequal.  Okay, no biggie, our experiment will certainly show this bias, right?  Well, it will, but if we want to be 95% experimentally certain that 10 is more likely, then we'll have to run through about 7,600 trials!  (For a derivation of this number---and a generally more expansive account---see Michael Lugo's blog post.)  In other words, the Law of Large Numbers is certainly our friend in determining probabilities experimentally, but it requires, you know, large numbers.
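
You can check the 25-versus-27 count by brute force over all 216 ordered rolls (a quick sketch):

```python
from itertools import product

# Count ordered rolls of three dice -- all 6^3 = 216 of them -- by sum,
# instead of collapsing {6,2,1} and its 3! orderings into "one way."
ways = {}
for roll in product(range(1, 7), repeat=3):
    s = sum(roll)
    ways[s] = ways.get(s, 0) + 1

print(ways[9], ways[10])   # 25 27
print(ways[10] > ways[9])  # True: a sum of 10 really is more likely
```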

*If you've ever taught probability, you know that this type of dice-sense is rampant.  Students consistently collapse distinct events based on superficial equivalence rather than true frequency.  Ask a room of high school students this question: "You flip a coin twice.  What's the probability of getting exactly one head?"  A significant number will say 1/3.  After all, there are three possibilities: no heads, one head, two heads.  Relatively few will immediately notice, without guidance, that "one head" is twice as likely as the other two outcomes.
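
The two-flip question in that footnote settles itself under the same brute enumeration:

```python
from itertools import product

# All four equally likely two-flip outcomes -- not three:
outcomes = list(product("HT", repeat=2))
print(outcomes)  # [('H', 'H'), ('H', 'T'), ('T', 'H'), ('T', 'T')]

one_head = [o for o in outcomes if o.count("H") == 1]
print(len(one_head) / len(outcomes))  # 0.5, not 1/3
```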

# Experiments are NOT easy to repeat.

I've already covered some of the practical issues here in terms of needing a lot of data points.  But beyond all that, there are also philosophical difficulties.  Normally, in science, when we talk about repeating experiments, we tend to use the word "reproduce."  Because that's exactly what we expect/are hoping for, right?  I conduct an experiment.  I get a result.  I (or someone else) conduct the experiment again.  I (they) get roughly the same result.  Depending on how we define our probability experiment, that might not be the case.  I flip a coin 10 times and count 3 heads.  You flip a coin 10 times and count 6 heads.  Experimental results that differ by 100% are not generally awesome in science.  In probability, they are the norm.

As an interesting, though somewhat tangential observation, note that there is another strange philosophical issue at play here.  Not only can events be difficult to repeat, but sometimes they are fundamentally unrepeatable.  Go back to my meteorologist's prediction for a moment.  How do I repeat the experiment of "live through yesterday and see whether it rains?"  And what does a 60% chance of rain even mean?  To a high school student (teacher) who deals almost exclusively in frequentist interpretations of probability, it means something like, "If we could experience yesterday one million times, about 600,000 of those experiences would include rain."  Which sounds borderline crazy.  And the Bayesian degree-of-belief interpretation isn't much more comforting: "I believe, with 60% intensity, that it will rain today."  How can we justify that level of belief without being able to test its reliability by being repeatedly correct?  Discuss.

# Probability distributions can be unwieldy.

Discrete distributions are conceptually easy, but cumbersome.  Continuous distributions are beautiful for modeling, but practically impossible for prior-to-calculus students (not just pre-calculus ones).  Even with the ubiquitous normal distribution, there is an awful lot of hand-waving going on in my classroom.  Distributions can make polynomials look like first-grade stuff.

# Theoretical predictions aren't so easily checked.

My theoretical calculations for the cereal box problem tell me that, on average, I expect to buy between 5 and 6 boxes to collect all the prizes.  But sometimes when I actually run through the experiment, it takes me northward of 20 boxes!  This is a teacher's nightmare.  We've done everything right, and then suddenly our results are off by a factor of 4.  Have we confirmed our theory?  Have we busted it?  Neither?  Blurg.  So what are we to do?

# We are to build a probability cannon!

With projectile motion problems, building a cannon is nice.  It's cool.  We get to launch things, which is awesome.  With probability, I submit that it's a necessity.  We need to generate data: it's the raw material from which conjecture is built, and the touchstone by which theory is tested.  We need to (metaphorically) shoot some stuff and see where it lands.  We need...simulations!

If your model converges quickly, then hand out some dice/coins/spinners.  If it doesn't, teach your students how to use their calculators for something besides screwing up order of operations.  Better yet, teach them how to tell a computer to do something instead of just watching/listening to it.  (Python is free.  If you own a Mac, you already have it.)  Impress them with your wizardry by programming, right in front of their eyes, and with only a few lines of code, dice/coins/spinners that can be rolled/flipped/spun millions of times with the push of a button.  Create your own freaking distributions with lovely, computer-generated histograms from your millions of trials.  Make theories.  Test theories.  Experience anomalous results.  See that they are anomalous.  Bend the LLN to your will.
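For the curious, here is one hypothetical dice cannon in Python; the function name and trial count are mine, but the whole thing really is only a few lines:

```python
import random
from collections import Counter

def cannon(trials):
    """Roll one fair die `trials` times and tally the faces."""
    return Counter(random.randint(1, 6) for _ in range(trials))

counts = cannon(100_000)
for face in range(1, 7):
    # Each face should settle near 1/6 ≈ 0.1667 of the trials.
    print(face, counts[face] / 100_000)
```

Swap `randint` for a coin (`random.choice("HT")`) or a weighted spinner (`random.choices`) and you've got the whole arsenal.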

# Exempli Gratia

NCTM was kind enough to tweet the following problem today, as I was in the middle of writing this post:

Okay, maybe the probability is just 1/2.  I mean, any argument I make for Kim must be symmetrically true for Kyle, right?  But wait, it says "greater than" and not "greater than or equal to," so maybe that changes things.  Kim's number will be different from Kyle's most of the time, and it will be greater half of the times it's different, so...slightly less than 1/2?  Or maybe I should break it down into mutually exclusive cases of {Kim rolls 1, Kim rolls 2, ... , Kim rolls 6}.  You know what, let's build a cannon.  Here it is, in Mathematica:
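(The Mathematica code originally appeared as a screenshot, which isn't reproduced here; this is my equivalent sketch in Python.)

```python
import random

def kim_wins(trials):
    """Estimate P(Kim's roll > Kyle's roll) for two fair dice."""
    wins = sum(random.randint(1, 6) > random.randint(1, 6) for _ in range(trials))
    return wins / trials

print(kim_wins(200_000))  # hovers around 5/12 ≈ 0.4167
```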

Okay, so it looks like my second conjecture is right; the probability is a little less than 1/2.  Blammo!  And it only took (after a few seconds of typing the code) 1.87 seconds to do a million trials.  Double blammo!  But how much less than 1/2?  Emboldened by my cannon results, I can turn back to the theory.  Now, if Kyle rolls a one, Kim will roll a not-one with probability 5/6.  Ditto two, three, four, five, and six.  So Kim's number is different from Kyle's 5/6 of the time.  And---back to my symmetry argument---there should be no reason for us to believe one or the other person will roll a bigger number, so Kim's number is larger 1/2 of 5/6 of the time, which is 5/12 of the time.  Does that work?  Well, since 5/12 ≈ 0.4167, which is convincingly close to 0.416159, I should say that it does.  Triple blammo and checkmate!

But we don't have to stop there.  What if I remove the condition that Kim's number is strictly greater?  What's the probability her number is greater than or equal to Kyle's?  Now my original appeal to symmetry doesn't require any qualification.  The probability ought simply be 1/2.  So...
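Re-aiming the cannon at the greater-than-or-equal version is a one-character change (again, my Python stand-in for the original Mathematica):

```python
import random

trials = 200_000
# Count the trials where Kim's roll is greater than OR EQUAL to Kyle's.
ties_or_wins = sum(
    random.randint(1, 6) >= random.randint(1, 6) for _ in range(trials)
)

print(ties_or_wins / trials)  # lands near 7/12 ≈ 0.583, not 1/2
```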

What what?  Why is the probability greater than 1/2 now?  Oh, right.  Kim's roll will be equal to Kyle's 1/6 of the time, and we already know it's strictly greater than Kyle's 5/12 of the time.  Since those two outcomes are mutually exclusive, we can just add the probabilities, and 1/6 + 5/12 = 7/12, which is about (yup yup) 0.583.  Not too shabby.

What if we add another person into the mix?  We'll let Kevin join in the fun, too.  What's the probability that Kim's number will be greater than both Kyle's and Kevin's?
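Once more in Python, since the original output isn't reproduced here:

```python
import random

trials = 200_000
wins = 0
for _ in range(trials):
    kim, kyle, kevin = (random.randint(1, 6) for _ in range(3))
    # Kim must strictly beat BOTH of her friends.
    if kim > kyle and kim > kevin:
        wins += 1

print(wins / trials)  # about 0.25
```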

It looks like the probability of Kim's number being greater than both of her friends' might just be about 1/4.  Why?  I leave it as an exercise to the reader.

That tweet-sized problem easily becomes an entire lesson with the help of a relatively simple probability cannon.  If that's not an argument for introducing them into your classroom, I don't know what is.

Thanks to Christopher Danielson for sparking this whole discussion.

# A Tale of Two Numbers

A few months ago, we had just finished talking about polynomials and were moving into matrices.  Because a lot of matrix concepts have analogs in the real numbers, we kicked things off with a review of some real number topics.  Specifically, I wanted to talk about solving linear equations using multiplicative inverses as a preview of determinants and using inverse matrices for solving linear systems.  For instance:

$latex \begin{array}{ll} 2x=8 & AX=B \\ 2^{-1}2x = 2^{-1}8 & A^{-1}AX = A^{-1}B \\ 1x = \frac{1}{2}8 & IX = A^{-1}B \\ x=4 & X = A^{-1}B \end{array}&s=2$

As an aside, I threw out this series of equations in the hopes of (a) foreshadowing singular matrices, and (b) offering a justification for the lifelong prohibition against dividing by zero:

$latex \begin{array}{l} 0x=1 \\ 0^{-1}0x = 0^{-1}1 \\ 1x = \frac{1}{0}1 \\ x = \frac{1}{0} \end{array}&s=2$

I thought this was just so beautiful.  Why can't we divide by zero?  Because zero doesn't have a multiplicative inverse.  There is no solution to 0x = 1, so 0⁻¹ must not exist!  Q.E.D.

As it turns out, Q.E.NOT.  One of my students said, "Why can't we just invent the inverse of zero?  Like we did with i?"

Again, we had just finished our discussion of polynomials, during which we had conjured the square root of -1 seemingly out of the clear blue sky.  They wanted to do the same thing with 1/0.  What an insightful and beautiful idea!  Consider the following stories, from my students' perspectives:

1. When we're trying to solve quadratic equations, we might happen to run into something like x² = -1.  Now of course there is no real number whose square is -1, so for convenience let's just name this creature i (the square root of -1), and put it to good use immediately.
2. When we're trying to solve linear equations, we might happen to run into something like 0x = 1.  Now of course there is no real number that, when multiplied by 0, yields 1, so for convenience let's just name this creature j (the multiplicative inverse of 0), and put it to good use immediately.

Why are we allowed to do the first thing, but not the second?  Why do we spend a whole chapter talking about the first thing, and an entire lifetime in contortions to avoid the second?  Both creatures were created, more or less on the spot, to patch up shortcomings in the real numbers.  What's the difference?

And this is the tricky part: how do I explain it within the confines of a high school algebra class?  Well, I can tell you what I tried to do...

Let's suppose that j is a legitimate mathematical entity in good standing with its peers, just like i.  Since we've defined j as the number that makes 0j = 1 true, it follows that 0 = 1/j.  Consider the following facts:

$latex \begin{array}{l} 2 \cdot 0 = 0 \\ 2\frac{1}{j} = \frac{1}{j} \\ \frac{2}{j} = \frac{1}{j} \\ 2 = 1 \end{array}&s=2$

In other words, I can pretty quickly show why j allows us to prove nonsensical results that lead to the dissolution of mathematics and perhaps the universe in general.  After all, if I'm allowed to prove that 2 = 1, then we can pretty much call the whole thing off.  What I can't show, at least with my current pedagogical knowledge, is why i doesn't lead to similar contradictions.

Therein lies the broad problem with proof.  It's difficult.  If there are low-hanging fruit on the counterexample tree, then I can falsify bad ideas right before my students' very eyes.  But if there are no counterexamples, then it becomes incredibly tough.  It's easy to show a contradiction, much harder to show an absence of contradiction.  I can certainly take my kids through confirming examples of why i is helpful and useful.  But in my 50 min/day with them, there's just no way I can organize a tour through the whole scope and beauty of complex numbers.  Let's be serious, there's no way that I can even individually appreciate their scope and beauty.

The complex numbers aren't just a set, or a group.  They're not even just a field.  They form an algebra (so do matrices, which brings a nice symmetry to this discussion), and algebras are strange and mysterious beings indeed.  I could spend the rest of my life learning why i leads to a rich and self-consistent system, so how am I supposed to give a satisfactory explanation?

Take it on faith, kids.  Good enough?

Update 3/20/12: My friend, Frank Romascavage, who is currently a graduate student in math at Bryn Mawr College (right down the road from my alma mater Villanova), pointed out the following on Facebook:

"We need to escape integral domains first so that we can have zero divisors!  Zero divisors give a quasi-invertibility condition (with respect to multiplication) on 0.  They aren't really true inverses, but they are somewhat close!  In $latex Z_{6}$ we have two zero divisors, 3 and 2, because 3 times 2 (as well as 2 times 3) in $latex Z_{6}$ is 0."

In many important ways, an integral domain is a generalization of the integers, which is why the two behave very much the same.  An integral domain is just a commutative ring (usually assumed to have a unity) with no zero divisors.  Two nonzero members of a ring, say a and b, are zero divisors if ab = 0.  In other words, to "escape integral domains" is to move into a ring where the Zero Product Property no longer holds.  This means that, in non-integral domains, we can almost, sort of, a little bit, divide by zero.  Zero doesn't really have a true inverse, but it's close.  Frank's example is the numbers 2 and 3 in the ring of integers modulo 6, since 3 x 2 = 0 (mod 6).  In fact, the ring of integers modulo n fails to be an integral domain in general, unless n is prime.  CTL
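To make Frank's point concrete, here's a quick (entirely optional) Python search for zero divisors modulo n:

```python
def zero_divisors(n):
    """Nonzero residues a mod n for which some nonzero b gives a*b ≡ 0 (mod n)."""
    return sorted({a for a in range(1, n)
                     for b in range(1, n) if (a * b) % n == 0})

print(zero_divisors(6))  # [2, 3, 4] -- note 4 qualifies too, since 4·3 = 12 ≡ 0 (mod 6)
print(zero_divisors(7))  # [] -- 7 is prime, so Z_7 is an integral domain
```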

# 0!rganized Emptiness

On the back of the fundamental counting principle, my class has just established the fact that we can use n! to count the number of possible arrangements of n unique objects.  This is fantastic, but we don't always want to arrange all of the n things available to us, which is okay.  We've also been introduced to the permutation function, which has the very nice property of counting ordered arrangements of r-sized subsets of our n objects.  Handy indeed.

Today we made an interesting observation: we now have not one, but two ways to count arrangements of, let's say, 7 objects.

1. We can fall back on our old friend, the factorial, and compute 7!
2. We can use our new friend, the permutation function, and compute $latex \bf{_7P_7}$

Since both expressions count the same thing, they ought to be equal, but then we run into this interesting tidbit when we evaluate (2):

$latex _7P_7 = \frac{7!}{(7-7)!} = \frac{7!}{0!}&s=2$,

which seems to imply that 0! = 1.  To say this is counterintuitive for my kids would be a severe understatement.  And in this moment of philosophical crisis, when the book might present itself as a palliative ally, students are instead met with this:

To prevent inconsistency?  How in the world are kids supposed to trust a mathematical resource that paints itself into a corner, only tacitly admits such, and then drops a bomb of a deus ex machina in order to save face?  I haven't been so angry since the ending of Lord of the Flies.  Especially when this problem appears two pages later:

Okay, 8!.  So how many ways can I arrange my bookshelf with a zero-volume reference set?  One: I can arrange an empty shelf in exactly one way.  And, since we already know that n! counts the ways I can arrange n objects, it follows naturally that this 1 way of arranging 0 things must also be represented by 0!.
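For the doubters, even Python's standard library takes a side here (`math.perm` computes nPr):

```python
import math

# nPr = n!/(n-r)!; with n = r = 7 the denominator is 0!, which must be 1
# for the permutation count to agree with the plain factorial.
assert math.perm(7, 7) == math.factorial(7) == 5040

print(math.factorial(0))  # 1
```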

There are a lot of good proofs/justifications available for the willing Googler, but this one, to me, seems like the most natural and straightforward for a high school classroom.  At a bare minimum, it's much, much better than, "Because I need it to be true for my own convenience."

Only a math textbook could take something so lovely and make it seem dirty.

# Cereal Boxes Redux

In my last post, my students were wrestling with a question about cereal prizes.  Namely, if there is one of three (uniformly distributed) prizes in every box, what's the probability that buying three boxes will result in my ending up with all three different prizes?  Not so great, turns out.  It's only 2/9.  Of course this raises another natural question: How many stupid freaking boxes do I have to buy in order to get all three prizes?

There's no answer, really.  No number of boxes will mathematically guarantee my success.  Just as I can theoretically flip a coin for as long as I'd like without ever getting tails, it's within the realm of possibility that no number of purchases will garner me all three prizes.  But, just like the coin, students get the sense that it's extremely unlikely that you'd buy lots and lots of boxes without getting at least one of each prize.  And they're right.  So let's tweak the question a little: How many boxes do I have to buy on average in order to get all three prizes?  That's more doable, at least experimentally.

I have three sections of Advanced Algebra with 25 - 30 students apiece.  I gave them all dice to simulate purchases and turned my classroom---for about ten minutes at least---into a mathematical sweatshop churning out Monte Carlo shopping sprees.  The average numbers of purchases needed to acquire all prizes were 5.12, 5.00, and 5.42.  How good are those estimates?

Simulating cereal purchases with dice

Here's my own simulation of 15,000 trials, generated in Python and plotted in R:
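(The plot isn't reproduced here, but a minimal reconstruction of the simulation, which may differ from my original script, looks something like this:)

```python
import random

def boxes_needed(n_prizes=3):
    """Buy boxes until all n_prizes distinct prizes have shown up."""
    collected, boxes = set(), 0
    while len(collected) < n_prizes:
        collected.add(random.randint(1, n_prizes))
        boxes += 1
    return boxes

trials = 15_000
mean = sum(boxes_needed() for _ in range(trials)) / trials
print(mean)  # settles near 5.5
```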

I ended up with a mean of 5.498 purchases, which is impressively close to the theoretical expected value of 5.5.  So our little experiment wasn't too bad, especially since I'm positive there was a fair amount of miscounting, and precisely one die that's still MIA from excessively enthusiastic randomization.

And now here's where I'm stuck.  I can show my kids the simulation results.  They have faith---even though we haven't formally talked about it yet---in the Law of Large Numbers, and this will thoroughly convince them the answer is about 5.5.  I can even tell them that the theoretical expected value is exactly 5.5.  I can even have them articulate that it will take them precisely one box to get the first new toy, and three boxes, on average, to get the last new toy (since the probability of getting it is 1/3, they feel in their bones that they should have to buy an average of 3 boxes to get it).  But I feel like we're still nowhere near justifying that the expected number of boxes for the second toy is 3/2.

For starters, a fair number of kids are still struggling with the idea that the expected value of a random variable doesn't have to be a value that the variable can actually attain.  I'm also not sure how to get at this next bit.  The absolute certainty of getting a new prize in the first box is self-evident.  The idea that, with a probability of success of 1/3, it ought "normally" to take 3 tries to succeed is intuitive.  But those just aren't enough data points to lead to the general conjecture (and truth) that, if the probability of success for a Bernoulli trial is p, then the expected number of trials to succeed is 1/p.  And that's exactly the fact we need to prove the theoretical solution.  Really, that's what we need basically to solve the problem completely for any number of prizes.  After that, it's straightforward:

The probability of getting the first new prize is n/n.  The probability of getting the second new prize is (n-1)/n ... all the way down until we get the last new prize with probability 1/n.  The expected numbers of boxes we need to get all those prizes are just the reciprocals of the probabilities, so we can add them all together...

If X is the number of boxes needed to get all n prizes, then

$latex E(X) = \frac{n}{n} + \frac{n}{n-1} + \cdots + \frac{n}{1} = n\left(\frac{1}{n} + \frac{1}{n-1} + \cdots + \frac{1}{1}\right) = n \cdot H_n&s=2$

where H_n is the nth harmonic number.  Boom.
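Exact arithmetic makes the formula easy to check; a small Python sketch using fractions:

```python
from fractions import Fraction

def expected_boxes(n):
    """E(X) = n * H_n, computed exactly as a fraction."""
    return n * sum(Fraction(1, k) for k in range(1, n + 1))

print(expected_boxes(3))         # 11/2, i.e. 5.5 boxes for three prizes
print(float(expected_boxes(5)))  # about 11.42 for five prizes
```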

Oh, but yeah, I'm stuck.

# Pruning Tree Diagrams

A few days ago we opened up with some group work surrounding the following problem.  I gave no guidance other than, "One representative will share your solution with the class."

My favorite cereal has just announced that it's going to start including prizes in the box.  There is one of three different prizes in every package.  My mom, being cheap and largely unwilling to purchase the kind of cereal that has prizes in it, has agreed to buy me exactly three boxes.  What is the probability that, at the end of opening the three boxes, I will have collected all three different prizes?

It's a very JV, training-wheels version of the coupon collector's problem, but it's nice for a couple of reasons:

1. The actual coupon collector's problem is several years out of reach, but it's a goody, so why not introduce the basics of it?
2. There is a meaningful conversation to be had about independence.  (Does drawing a prize from Box 1 change the probabilities for Box 2?  Truly?  Appreciably?  Is it okay to assume, for simplicity, that it doesn't?  How many prizes need to be out there in the world for us to feel comfortable treating this thing as if it were a drawing with replacement?  If everybody else is buying up cereal---and prizes---uniformly, does that bring things closer to true independence?  farther away?)
3. There are enough intuitive wrong answers to require some deeper discussion: e.g., 1/3 (Since all the probabilities along the way are 1/3, shouldn't the final probability of success also be 1/3?), 1/27 (There are three chances of 1/3 each, so I multiplied them together.), and 1/9 (There are three shots at three prizes, so nine outcomes, and I want the one where I get all different toys.)  The correct answer, by the by, is 6/27 or 2/9 (try it out).
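If "try it out" sounds tedious, enumerating the equally likely outcomes takes only a few lines of Python (my sketch):

```python
from itertools import product

# Every box independently holds prize 1, 2, or 3: 3^3 = 27 equally likely outcomes.
outcomes = list(product([1, 2, 3], repeat=3))
all_three = [o for o in outcomes if len(set(o)) == 3]

print(len(all_three), "of", len(outcomes))  # 6 of 27, i.e. 2/9
```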

Many groups jumped right into working with the raw numbers (see wrong answers above).  A few tried, with varying levels of success, to list all the outcomes individually (interestingly, a lot of these groups correctly counted 27 possibilities, but then woefully miscounted the number of successes...hmmm).  A small but determined handful of groups used tree diagrams to help them reason about outcomes sequentially.

This business of using tree diagrams was pleasantly surprising.  We hadn't yet introduced them in class, and I hadn't made any suggestions whatsoever about how to tackle the problem, so I thought it was nice to see a spark of recollection.  That said, it's not terribly surprising; presumably these kids have used them before.  But I did run across one student, Z, who interpreted his tree diagram in a novel way---to me, at least.

Most students, when looking at a tree diagram, hunt for paths that meet the criteria for success.  Here's a path where I get Prize 1, then Prize 2, then Prize 3.  Here's another where I get Prize 1, then Prize 3, then Prize 2...  The algorithm goes something like, follow a path event-by-event and, if you ultimately arrive at the compound event of interest, tally up a success.  Repeat until you're out of paths.  That is, most students see each path as a stand-alone entity to be checked, and then either counted or ignored.

What Z did was different in three important ways.  First of all, he found his solutions via subtraction rather than addition.  Second, he attacked the problem in a very visual---almost geometric---way.  And third, he didn't treat each path separately; rather, Z searched for equivalence classes of paths within the overall tree.

Z's (paraphrased) explanation goes as follows:

First I erased all of the straight paths, because they mean I get the same prize in every box.  Then I erased all of the paths that were almost straight, but had one segment that was crooked, which means I get two of the same prize.  And then I was left with the paths that were the most crooked, which means I get a different prize each time.

Looking at his diagram, I noticed that Z hadn't even labeled the segments; he simply drew the three stages, with three possibilities at each node, and then deleted everything that wasn't maximally crooked.  How awesome is that?  In fact, taking this tack made it really easy for him to answer more complicated followup questions.  Since he'd already considered the other cases, he could readily figure out the probability of getting three of the same prize (the 3 branches he pruned first), or getting only two different prizes (the next 18 trimmings).  He could even quickly recognize the probability of getting the same prize twice in a row, followed by a different one (the 6 branches he trimmed that went off in one direction, followed by a straight-crooked pattern).

Of course this method isn't particularly efficient.  He had to cut away 21 paths to get down to 6.  For n prizes and boxes, you end up pruning n^n − n! branches.  Since n^n grows much, much faster than n!, Z's algorithm becomes prohibitively tedious in a hurry.  If there are 5 prizes and 5 boxes, that's already 3005 branches that need to be lopped off.  So yes, it's inefficient, but then again so are tree diagrams.  Without more sophisticated tools under his belt, that's not too shabby.  What the algorithm lacks in computational efficiency, it makes up for in conceptual thoughtfulness.  I'll take that tradeoff any day of the week.
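For the record, the pruning count is easy to verify with a throwaway Python check:

```python
from math import factorial

# Branches Z must prune: n^n total paths minus the n! "maximally crooked" ones.
for n in (3, 5):
    print(n, n**n - factorial(n))  # 3 -> 21, 5 -> 3005
```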

Last week we started working with infinite geometric series, a topic I personally love.  First of all, it's one of the few places in a high school curriculum where deep, genuine philosophical questions bubble all the way up to the surface of a mathematical discussion.  Second, it marks the place in my own academic life where I experienced a religious conversion to Orthodox Mathematicism:

In the beginning there was a single term.  And to that term the Teacher did add another of smaller magnitude.  Then a third term, smaller still, appeared upon the right hand side of the chalkboard, and it was revealed to me that the terms did decrease exponentially.  My heart saw that this shrinking and adding proceedeth forever and ever, terms without end, Amen.  And lo, when I beheld the sum, it was finite, and I knew that it was Good.

If my introduction to convergent series was a baptism, then using one to demonstrate that .999... = 1 was my confirmation.  Now, having done the same thing with my students, I think it might be even more interesting from this side of the desk.  In particular, two of their questions/comments highlight two very different understandings of infinity and the real numbers.

First, the ingredients of a metaphor.  If you've ever been a runner, this is easy.  If not, I'm going to need you to go on a quick jog before you read any farther so you can appreciate the rest of this carefully crafted rhetorical device.  I'll wait...

When you drive the same stretch of road over and over again, you tend to experience it dynamically.  You pass a landmark, anticipate a curve, accelerate over a little rise.  The road changes in front of your eyes.  You see the road as a process.  But when you run along the same route, it looks completely different.  There is just this monolithic expanse of concrete laid out over the landscape.  You can creep around and explore its different features, but you experience the road essentially as a static object.  In other words, you experience the road as it actually is.  Keep this in mind as you read the following two questions from my actual students.

# D: "But Mr. Lusto, if .999... is exactly 1, then .999... plus .999... should equal exactly 2, but it doesn't.  It's 1.999...8."

What a freaking fantastic argument!  Here's a student who has accepted my proof, interpreted it, thought about it critically, and deduced a logical contradiction.  My heart swelled a little bit.  Unfortunately, the flaw in his reasoning highlights a fundamental misconception.  D is viewing .999... like a driver.  He sees it as a dynamic process, repeatedly appending a 9 to an ever-expanding sequence of 9s.  He might even accept that this can theoretically go on forever, but his point-of-view still gets him into some trouble.  When D mentally sums .999... and .999..., he's suggesting that there are two "last 9s" that, when added, produce a trailing 8.  But of course there are no "last 9s."  He's implicitly terminated the process prematurely (which is to say, at all).  Hence his objection, though thoroughly beautiful, is ultimately illusory.

# J: "But Mr. Lusto, if .999... equals 1, then doesn't 1.999... equal 2?  Then can't we write every number in two different ways?"

This student views .999... like a runner.  The reason that .999... and 1 can be meaningfully thought of as equal is because they represent the same static value.  They're just two different names for the same object.  Here's a student who sees .999... as it actually is.  And now, because of that, his concern is genuine.  The fact that many real numbers have two decimal representations (one with infinite trailing 0s, one with infinite trailing 9s) is a true mathematical/philosophical problem.  In fact, it's an important result: those sorts of numbers turn out to be dense in the reals (in the topological sense).  J may never care about, or even get enough math under his belt to understand, that statement,  but his view of the nature of infinity is already more nuanced than D's.

Something to think about next time you're driving.  Better yet, next time you're running.