2024-09-02

A Strange Commitment

Failed attempt to solve an elementary math problem

I was recently reminded of this anecdote from Junaid Mubeen, originally in Bright Magazine. Junaid recalls a time when he assisted his 12-year-old nephew on a math homework problem. As it turns out, the problem had no solution, Junaid could elegantly prove it, and the nephew excitedly accepted his reasoning. Then:

What happened next was revealing. My sister told me that my nephew devoted the rest of the evening to searching for a solution. That’s a strange commitment to make when you’ve just been convinced that no such solution exists.

My nephew was in a state of cognitive dissonance. He understood that this problem has no solution. And yet, this was his maths homework. From all his years of schooling, every maths question always had an answer. Why should this time be any different?

My nephew’s deference towards school maths, with closed questions and prescribed answers, betrayed his mathematical reasoning.

More at Medium.

2024-08-26

Phonics Makes a Comeback

Lee Anna Vasquez, a reading interventionist, uses a sound wall to teach students the articulatory features of phonemes

As a long-time instructor of remedial math at a community college (among other things), I've always felt strongly that our discipline is closely tied to basic natural-language literacy, and to the teaching of reading and writing. 

Teaching basic symbolic math is fundamentally the ground-floor of a particular writing language for the sciences. If a student entering an algebra class can't read a sentence (either to follow a direction, answer a word problem, or see the parallel with algebraic equations), attend to fine structure, or understand assertions about symbolic parts-of-speech, then they will flounder. I've written about this here multiple times (see: The War on Structure and Phonics and Bases). Note the subtitle of the blog with its essential spotlight on clear expression.

Two articles in American Educator in the last year or so give some hope that the science of structurally-focused reading instruction, with phonics instruction as a key part, may (with great struggle) finally be making a comeback. Both are by reading researchers who have been at the wheel for many decades.

The first is by Louisa C. Moats in Spring 2023, Creating Confident Readers. Moats has developed and taught LETRS (Language Essentials for Teachers of Reading and Spelling), a graduate-level program for in-service teachers, in some form since the early 1990s. It leans into teachers becoming knowledgeable about how sounds are articulated in the mouth, how they vary across languages, how they map to written graphemes in English, and how students can decode new texts on their own when necessary.

I think that Moats put the most interesting part at the end of her article (perhaps out of a sense of gently downplaying the section that might be considered the most inflammatory), which I'll highlight here:

Teachers often experience complex emotional reactions as they learn more about the science of reading and the structure of language. Some teachers express grief and regret over their past use of ineffective (but widespread) practices and anger that their prior opportunities to learn about teaching reading were inadequate or even misinformed. A common reaction of participating teachers to their experience in LETRS is, “Why didn’t anybody teach me these things before?” The value of the information is readily apparent when students begin to make progress. Student growth quickly validates teachers’ efforts to teach language, reading, and writing explicitly.

I immediately recognized this comment about “Why didn’t anybody teach me these things before?” — I've gotten this exact response from students in remedial algebra courses, at key moments where I was identifying and trying to fix broken understandings that their K-12 teachers had hopelessly mangled. Moats continues:

In translating concepts and guidance from research, we encourage teachers to confront and abandon ideas, practices, and programs that many have used or been taught—often under district or state standards and requirements—that do not align with current understandings grounded in evidence. For example, many districts are still wedded to programs and approaches based on “cueing systems,” a tenet of guided reading that does not recognize the central role of phonology or phonic decoding in learning to read and spell. An underlying assumption that reading is primarily a visual imprinting activity drives other misconceived but all-too-common practices, such as posting “sight” words on an alphabetic word wall regardless of the beginning sounds in the words (e.g., posting out, once, only, and often under o). Many district and state standards require kindergarten and first-grade readers to memorize dozens of words on flash cards or spell lists of words by rote visual memory, even though in reading science, all words are eventually learned “by sight” through a process of speech-to-print mapping, beginning with phoneme-level processing. Turning away from common but unsupported practices poses dilemmas for teachers and schools because the misconceived ideas have been established in reading education for so long. Many published programs have yet to catch up to the science...

We can see here implications of the primary mistake that proponents of the catastrophic "whole word" approach made — they thought about how proficient readers function (mostly identifying familiar, known words on sight), and assumed students could jump directly to that level without passing through the scaffolding phases that naturally occur beforehand (connecting spoken sounds to written symbols). Moats says this more directly elsewhere:

The ability to recognize printed words out of context, quickly and accurately, is gained not by a visual imprinting process, but by building a mental map connecting speech with print. By learning incrementally how graphemes (letters and letter combinations) represent speech, novice readers and spellers gradually build a mental storehouse of known words that can be instantly recognized and recalled. Every phase of this process depends on the ability to recognize and mentally manipulate the phonemes or speech sounds that make up words (phoneme awareness). From pre-alphabetic, to partial alphabetic, to full alphabetic, and then to consolidated word recognition and recall, children must gradually differentiate the sounds in spoken words and map them to letters and letter sequences.

And another interesting observation:

When teachers have not had ample opportunities to learn how to explain words’ spellings, they are much more inclined to believe—and teach—that the English writing system is chaotic and nonsensical. Believing that is the case too often leads educators to rely on “sight” word methods such as “using your eyes like a camera,” drilling with flash cards, telling students to look at pictures and use context to guess an unknown word, or reciting letter sequences to memorize words.

One central goal of LETRS is to put meaning over rote memorization. That’s why part of the phonics lesson plan is working with the meanings of words that students are learning to decode or spell. Our theoretical frameworks emphasize the importance of connecting sound, meaning, and spelling while the mental code-mapping process is under construction...

On the same theme, another article, by Linnea C. Ehri, appeared in Fall 2023: Phases of Development in Learning to Read and Spell Words. In large part, she reiterates the same natural structure noted by Moats above: (1) pre-alphabetic, (2) partial alphabetic, (3) full alphabetic, and (4) consolidated alphabetic phases. The thing I want to highlight here is the rather unsettling introduction of the word "phonics" in a tone that suggests trained K-12 reading instructors have likely never heard of it (!):

To move into the full alphabetic phase, children need to acquire the major letter-sound (grapheme-phoneme) relations of the writing system. They need to acquire decoding skill to sound out letters and blend the sounds to form words. The type of reading instruction that helps children master these skills is called phonics. In systematic phonics instruction, teachers follow a “scope and sequence” chart to teach the major letter-sound relations; they also teach segmenting sounds, decoding words, and spelling skills. Phonics instruction can reduce the time that students spend in the partial alphabetic phase and move them quickly into the full phase, typically by the end of kindergarten or in first grade. The skills children acquire help them store words in their memory for reading by sight and spelling words correctly.

Given that, phonics is the cornerstone of the piece's conclusion:

In school, children benefit most from systematic phonics instruction to acquire these skills. One great way to support your child’s growth at home is to create lots of opportunities for them to practice reading—and to talk about what you’ve read together to boost their comprehension. And if your child is not progressing through the four phases, be sure to go to their school to ask for additional supports.

I thought that both of these articles were well worth the time spent reading them. If solid, structural reading skills were widely taught, we could see benefits not just in language arts but also in math, science, logic, computing, and other technical fields.

2024-08-05

Duality of Floor and Modulus

Floor and modulus symbols

Have you ever considered the duality of the floor and modulus operators, in that either one can be expressed in terms of the other (along with a multiplication, a division, and a subtraction)?

This became practically important for me when my institution switched from the Blackboard learning management system to Brightspace. Both have testing systems with a question type that presents test-takers with randomly-determined base values filled into the question, and then automatically checks their answers via a mathematical formula set up by the instructor. On Blackboard this question type is called a "Calculated Formula", and its supported functions include floor but not modulus. On Brightspace it's called an "Arithmetic" question, and its list of functions has the inverse coverage: modulus but not floor.

Now I use this facility to make tests for the introductory C++ programming courses that I teach. Modulus is one of the basic arithmetic operators likely to be new to students, and one we want to test. Floor is arguably even more essential, in that it's what I need to represent integer division truncation, something really fundamental to how integer math works on a computing system.
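For concreteness, here's a minimal C++ illustration of the two behaviors being tested (the values are my own, not from any actual exam item):

```cpp
#include <iostream>

int main() {
    int a = 17, b = 5;
    std::cout << a / b << "\n";  // prints 3 -- integer division truncates the quotient 3.4
    std::cout << a % b << "\n";  // prints 2 -- modulus gives the remainder
    return 0;
}
```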

So, on Blackboard, if I want to test students' knowledge of the behavior of the expression \(a \% b\) (remember that in the students' view, the \(a\) and \(b\) will be filled in with literal numbers), then in the answer formula I have to use the expression: \(a - b * \lfloor a/b \rfloor\).

In other words: Use the floor function to find the largest whole-number of times that \(b\) goes into \(a\), and subtract that maximal product from \(a\), leaving the modulus as a remainder.
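Here's a quick C++ sketch of that identity, using std::floor to mirror what the Blackboard formula does (I'm assuming nonnegative operands, as in my question pools):

```cpp
#include <cmath>
#include <iostream>

int main() {
    int a = 17, b = 5;
    // Blackboard-style answer formula: a - b * floor(a / b)
    double viaFloor = a - b * std::floor(static_cast<double>(a) / b);
    std::cout << (a % b) << " == " << viaFloor << "\n";  // prints "2 == 2"
    return 0;
}
```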

Meanwhile, on Brightspace, if I want to assess awareness of the truncation that happens automatically with an integer division \(a / b\), then every time that occurs in the answer, I need to make use of the expression: \(a/b - (a/b) \% 1\).

That is: Use modulus by 1 to find the decimal remainder in the ratio \(a/b\), and subtract that from the full \(a/b\), leaving behind only the integer part.
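And the reverse direction, sketched with std::fmod standing in for the Brightspace modulus (again assuming nonnegative values):

```cpp
#include <cmath>
#include <iostream>

int main() {
    int a = 17, b = 5;
    double q = static_cast<double>(a) / b;   // 3.4
    // Brightspace-style answer formula: a/b - (a/b) % 1
    double viaMod = q - std::fmod(q, 1.0);   // 3.4 - 0.4 = 3.0
    std::cout << (a / b) << " == " << viaMod << "\n";  // prints "3 == 3"
    return 0;
}
```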

Having translated all of my test questions from one system to the other in the last week, I'd say that for CS purposes the floor function is more essential, as it's baked naturally into any integer math expression, so the work was overall more natural to do in Blackboard. Use of modulus is more of a special-case check in my assessments, so that arcane formula only showed up a few times. Switching to Brightspace, on the other hand, there were about twice as many instances where I had to conjure a floor truncation via the modulus operator that exists there.

Obviously, it would be best if either system supported both of these basic operators, but as we've discovered, you are operationally complete with either one of the pair.

2024-07-08

Yes, Euclid's Proof of Infinite Primes Uses Contradiction

It's common nowadays in conversations about the method of proof-by-contradiction for someone to pop in and say, "People think Euclid's proof of there being infinite prime numbers uses proof-by-contradiction, but it doesn't, it's a direct proof". For example, the current Wikipedia article on Euclid's theorem says this:

Euclid is often erroneously reported to have proved this result by contradiction beginning with the assumption that the finite set initially considered contains all prime numbers, though it is actually a proof by cases, a direct proof method. The philosopher Torkel Franzén, in a book on logic, states, "Euclid's proof that there are infinitely many primes is not an indirect proof..."

Okay, admittedly Euclid's theorem is not in its entirety structured as a proof by contradiction. Yes, there's a proof by cases, in which a number of the form \(\operatorname{lcm}(A, B, C) + 1\) is assessed as being either prime or not prime. But the core of that second case is clearly a proof by contradiction!

If we look at Euclid's original text for the second case, we see the following. Given that \(\operatorname{lcm}(A, B, C) + 1\) is not prime, Euclid takes \(G\) to be some prime number that divides it. He then reasons like this (looking at the Fitzpatrick translation of Heiberg's presentation of the Greek):

I say that \(G\) is not the same as any of \(A, B, C\). For, if possible, let it be... [some logic here]... The very thing is absurd. Thus, \(G\) is not the same as one of \(A, B, C\).

This form is patently a proof by contradiction, and the use of the phrase with "absurd" (in the original Greek, "ὅπερ ἄτοπον") highlights that fact.
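In modern notation, the omitted logic runs along these standard lines (my paraphrase, not Euclid's wording): suppose, for contradiction, that \(G\) is one of \(A, B, C\). Then \(G\) divides \(\operatorname{lcm}(A, B, C)\), and by assumption it also divides \(\operatorname{lcm}(A, B, C) + 1\), so

$$G \mid \big(\operatorname{lcm}(A, B, C) + 1\big) - \operatorname{lcm}(A, B, C) = 1,$$

which is absurd, since the prime \(G\) is greater than 1.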

While the overall superstructure of Euclid's theorem is not a proof by contradiction... Yes, Virginia, Euclid's theorem uses a proof by contradiction, and it's an essential part of his proof that there are infinite primes.

2024-02-12

A Game for Bored Math Teachers

Head with jigsaw puzzle inside

Let's say you're an instructor in a low-level math course; maybe something like college algebra, a liberal-arts math course, or something similar. It's possible that you consider this to be beneath you and you're bored in class. Here's a little game you can play with yourself that will spice things up a bit:

When you ask the class a question and someone answers incorrectly, see if you can conversationally edit the question so that the student's answer would have been right. That is, respond by starting with, "Well, that would be right if the problem said ____", and fill in the end of that sentence in an appropriate way.

I'm very fond of this technique. It actually accomplishes several things:

  • Makes things a bit more challenging for the instructor, keeping them on their toes
  • Cushions the "no, you're wrong" response to the student (a bit like the "shit sandwich" feedback protocol)
  • Forces you to diagnose & clarify the misdirected mental pathway for yourself and the student (in fact, usually the student has misperceived some pattern that's just adjacent to the given problem)

Try it and see how it feels. To be clear: I don't do this because I'm bored in class, but nevertheless I've found it to be a compelling and clarifying technique.

2024-01-22

Radicals and Absolute Values

Here's a fact that I've never seen expressed clearly, or in this way, in any of the several college-level algebra books from which I've taught. Say we're working in the domain of real numbers, and have a radical of some index over a variable to a power. In broad strokes, we can divide the power by the radical index -- however, in some cases, distressingly, you'd need an absolute value to express the result. The question is, exactly when do you need that absolute value?

$$\sqrt[n]{a^m} = a^{m \over n} \text{ or } |a^{m \over n}|? $$

Say we're in this situation, with \(m\) and \(n\) whole numbers, and \(m\) evenly divisible by \(n\). The primary issue is that when the initial power \(m\) is even, it makes the result nonnegative, wiping out any negatives that the base \(a\) might contribute. So when the reduced power \(m/n\) is odd, it would fraudulently claim to possibly produce negatives, which our initial expression cannot do -- and so it requires the absolute value as a correction.

Let's give more detail by inspecting all the permutations of even/odd possibilities between the starting and ending powers in the expression:

  1. Odd \(m\), odd \(m/n\): The odd starting power \(m\) can produce values of any sign, and so can the odd reduced power \(m/n\). So all is fine here, and we don't need the absolute value.
  2. Even \(m\), even \(m/n\): The even power \(m\) wipes out any negatives, and the reduced even power \(m/n\) does the same thing. So again they're aligned, and no absolute value is needed.
  3. Even \(m\), odd \(m/n\): This is the case alluded to above -- the even starting power \(m\) wipes out negatives, but the odd ending power \(m/n\) would deceive us into thinking negatives could be a possible product. This is the situation in which we need the absolute value as a correction.
  4. Odd \(m\), even \(m/n\): This case is impossible. If \(m/n\) is even, then any multiple of it (e.g., by \(n\) to produce \(m\)) is also even.

So it's only that third case, where the power switches from even to odd, in which we need the absolute value bars for full fidelity. Interestingly, since the fourth case can't happen, we could express the protocol briefly as follows:

When the powers switch parity, then we need absolute value bars.
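A concrete check, using \(a = -2\) as an example value of my own choosing: with \(m = 6\) and \(n = 2\) the parity switches from even to odd, and indeed

$$\sqrt{(-2)^6} = \sqrt{64} = 8 = \left|(-2)^3\right|, \quad \text{whereas } (-2)^3 = -8;$$

with \(m = 6\) and \(n = 3\) the reduced power stays even, and \(\sqrt[3]{(-2)^6} = \sqrt[3]{64} = 4 = (-2)^2\) with no absolute value needed.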