2021-11-02

Backtracking Detracking

Tracking station

There's an interesting article from the Brookings Institution last month, on the perpetual debate over tracking in the U.S. school system -- separating classes at the same grade by skill level. 

Whenever this comes up, I think about the giant debate that occurred in my high school system right around the time I graduated, when a new principal was hired with a mandate to detrack the school's entire curriculum. I didn't experience that myself, but my sister, two years younger than I am, did -- all of this about 30 years ago as I write. (More recently, a friend's child was in a junior high school where special-needs students were detracked into the same classroom; some of them would scream incoherently all day long and make any kind of learning impossible.)

As usual with education issues, the rocky shoals upon which all proud ships crash are in the mathematics discipline. You can pretty easily get away with a mix of skill levels in arts and social disciplines -- everyone reads the same texts (or whatnot), you accept that you'll get different levels of interpretation, and you can grade on a relative "best effort" basis. In the hard sciences (the disciplines built on math), things get harder -- you can get across some core concepts, but students with math skills will be able to dig deeper and make predictions and verifications in ways that others cannot. And in mathematics itself, the queen of the sciences, it's basically impossible -- if someone doesn't have the prerequisite ability to read, write, and think in the language, then absolutely nothing will make sense, and they won't be able to interface with it in any way, producing nothing but raw gobbledygook (as I've seen hundreds or thousands of times). A number of times on this blog I've called this the "brutal honesty" of mathematics. That said, it never stops a legion of arts and social-science people from dictating supposed solutions to the mathematics professors, as crazy as that sounds.

So in the recent article, Tom Loveless of Brookings notes that the "tracking" argument goes back much further than my own 30-year-old experience:

Research on tracking extends over a century. Hundreds upon hundreds of studies have not settled the debate. The literature is usually described as “mixed,” but with a clear warning that tracking can exacerbate gaps between high and low achievers.[1] Research is more plentiful on tracking as a problem, as a source of inequality, rather than detracking as a solution. Reformers have been hampered by a lack of empirical evidence that abolishing tracking would reduce inequities. Evaluations of untracked schools tend to be based on a small number of schools or on samples that were not scientifically selected to support generalizable findings...

These case studies indicate that detracking may work under certain conditions, but they are less persuasive evidence that abolishing tracking in favor of classes with students heterogeneous in ability, all studying the same curriculum, will work everywhere or even in most schools. A study that forcefully raises that question was conducted by David N. Figlio and Marianne E. Page. They analyzed data from the National Education Longitudinal Survey of 1988 (NELS:88), which followed a random sample of several thousand students from eighth grade through high school and into post-secondary education and work. Using several methods of identifying whether schools were tracked or untracked, Figlio and Page uncovered neutral to positive effects of tracking. The most surprising finding of the analysis was that students from disadvantaged backgrounds appeared to benefit from tracking. Figlio and Page concluded, “We can find no evidence that detracking America’s schools, as is currently in vogue, will improve outcomes among disadvantaged students. This trend may instead harm the very students that detracking is intended to help”.

Ironically, the data that Figlio and Page analyzed was current as of the year right before I graduated high school; but the article they published about it didn't appear until 14 years later. I wonder whether it would have made any difference at the time.

At least as interesting is what prompted the Brookings article at this time: back in May of this year the Washington Post had an article about a contentious push in the state of Virginia for detracking. After parental outcry, the state superintendent was forced to release a statement backtracking from the idea:

Under the VMPI plan, [parent] Fox said, “every student would be required to take the same math class through 10th grade of high school. There would be no classes for struggling students needing remedial help or for advanced students seeking accelerated math.”

When I called Virginia State Superintendent of Public Instruction James F. Lane to ask about this, he insisted that the state has no plans to eliminate tracking (separate classes for students at different levels) from kindergarten through 10th grade, even though the VMPI website strongly suggests that ending tracking is key to the suggested reforms...

Lane, the Virginia state superintendent, is an experienced administrator, having led three school districts. He seems to understand how politically poisonous it would be to tell parents that every child is going to be on the same math track through 10th grade...

Lane’s spokesman later told me “he does unequivocally denounce the idea that every student should be forced to take the exact same math courses at the same time without options for acceleration.”

Will this detracking debate go on ad infinitum?

Brookings: Does detracking promote educational equity?

Washington Post: Virginia allies with, then backs away from, controversial math anti-tracking movement

2021-10-18

Growth Mindset Theory: Failures to Replicate

Psychologist Carol Dweck's "growth mindset" theory has become a popular solution and intervention technique in (mostly American) schools at every level. We might say that it's become the new version of the "self-esteem" movement of the 80's. While Dweck first developed the theory in the 90's, it has really taken hold of the popular consciousness from the 2010's on.

Unfortunately, we should remember that psychology has an ongoing replication crisis affecting many of its landmark findings. Many of the "easy" ideas promising transformative effects have not borne fruit over the years, and some were later found to rest on tainted methods by the core researchers. Sure enough, in recent years many or most of the large-scale, high-quality attempts at replicating the claims of growth mindset have failed to do so. Here are a few examples:

Li, Y., & Bates, T. C. (2017). Does growth mindset improve children’s IQ, educational attainment or response to setbacks? Active-control interventions and data on children’s own mindsets. https://doi.org/10.31235/osf.io/tsdwy (Study done in China, students aged 9-13 years, N = 624)

No effect of the classic growth mindset manipulation was found for either moderate or more difficult material... children’s mindsets were unrelated to resilience to failure for either outcome measure... Finally, in 2 studies relating mindset to grades across a semester in school, the predicted association of growth mindset with improved grades was not supported. Neither was there any association of children’s mindsets with their grades at the start of the semester. Beliefs about the malleability of basic ability may not be related to resilience to failure or progress in school.

Bahník, Š., & Vranka, M. A. (2017). Growth mindset is not associated with scholastic aptitude in a large sample of university applicants. Personality and Individual Differences, 117, 139-143. https://doi.org/10.1016/j.paid.2017.05.046 (Study of university students taking an admissions test in the Czech Republic, N = 5,653).

We found that results in the test were slightly negatively associated with growth mindset (r = −0.03). Mindset showed no relationship with the number of test administrations participants signed up for and it did not predict change in the test results. The results show that the strength of the association between academic achievement and mindset might be weaker than previously thought.

Foliano, F., Rolfe, H., Buzzeo, J., Runge, J., & Wilkinson, D. (2019). Changing mindsets: Effectiveness trial. National Institute of Economic and Social Research. Summary at PsychBrief. (Study in England, Year 6 students, N = 4,584.)

The difference between the control group and the intervention group on all 3 primary outcomes [math, reading, GPS] was 0... The difference between the groups for all 4 secondary outcomes was also 0... This RCT was a highly powered test of the efficacy of growth mindset in a real-world environment across a wide range of schools in the England. The fact none of the primary or secondary outcomes were distinguishable from 0 raises serious questions as to the efficacy of growth mindset for Year 6 students... Given the evidence so far, it is unrealistic to expect growth mindset to have large and/or wide-scale impact.

Brez, C., Hampton, E. M., Behrendt, L., Brown, L., & Powers, J. (2020). Failure to Replicate: Testing a Growth Mindset Intervention for College Student Success. Basic and Applied Social Psychology, 42(6), 460-468. https://doi.org/10.1080/01973533.2020.1806845 (U.S. study, university math & psychology students, N = 2,607).

The pattern of findings is clear that the intervention had little impact on students’ academic success even among sub-samples of students who are traditionally assumed to benefit from this type of intervention (e.g., minority, low income, and first-generation students)... These findings support some of the emerging literature that demonstrates that growth mindset interventions may not be as effective as once thought... The proposition that a one-time intervention at the postsecondary level will result in long-term measurable student outcomes was not supported in the present study.

Sisk, V. F., Burgoyne, A. P., Sun, J., Butler, J. L., & Macnamara, B. N. (2018). To What Extent and Under Which Circumstances Are Growth Mind-Sets Important to Academic Achievement? Two Meta-Analyses. Psychological Science, 29(4), 549-571. https://doi.org/10.1177/0956797617739704 (U.S., meta-analysis of 273 studies, N = 365,915).

Our meta-analyses do not support this claim. Effect sizes were inconsistent across studies, but most analyses yielded small (or null) effects. Overall, the first meta-analysis demonstrated only a very weak relationship between mind-sets and academic achievement. Similarly, the second meta-analysis demonstrated only a very small overall effect of mind-set interventions on academic achievement... from a practical perspective, resources might be better allocated elsewhere than mind-set interventions.


Now, a not-uncommon defense in cases like these in psychology is that the attempts to replicate didn't properly recreate the conditions or variables for a true test. The counter-argument is that this amounts to an observer-expectancy effect -- in some cases a primary researcher has even argued that only they have the necessary knowledge to ever conduct a valid test. Indeed, Dweck has made the "not anyone can do a replication" argument (in a BuzzFeed News interview). In response, Nick Brown, who developed the GRIM (Granularity-Related Inconsistency of Means) test and found several errors in Dweck's seminal paper, said this:

The question I have is: If your effect is so fragile that it can only be reproduced [under strictly controlled conditions], then why do you think it can be reproduced by schoolteachers?

Finally, psychologist Russell Warne wrote on his blog:

I discovered the one characteristic that the studies that support mindset theory share and that all the studies that contradict the theory lack: Carol Dweck... So, there you go! Growth mindsets can improve academic performance –if you have Carol Dweck in charge of your intervention.

This is somewhat hyperbolic, but it clarifies the issue at stake. Growth mindset theory fits fairly snugly into the basket of psychological "quick fixes" caught up in the replication crisis, broadly cuts against long-standing findings from neuroscience on intelligence, and is racking up more failures to replicate as it garners more attention. Like similar principles that came before it, it's probably a bad bet that institutional interventions based on the theory will be worth the resources spent on them.

This post was initially written as an answer on Stack Exchange: Mathematics Educators. Thanks to the community there for reading and refining it.

2021-08-31

On Chained Relations

In all of the college math courses I teach -- basic algebra, precalculus, calculus, discrete mathematics, and so on -- there's a particular piece of syntax that perpetually trips up students, and it's this: chained relations.

To be clear, chained relations are compound mathematical statements with more than one relational symbol (equalities, inequalities, or both). Crack open any math textbook and you're bound to see much of the symbolic work written in that format. And yet my students are forever tripping over the difficulty of either reading or writing them. Have you ever noticed this? Let's consider several factors contributing to the difficulty:

  1. Even a single equality is hard for people to truly understand. Numerous academic papers have been written on this. More than one researcher has pointed out that the way the equals sign is used in grade-school problems and on calculators pushes people toward a functional understanding ("do something and get the answer") rather than a relational understanding.

  2. There is no explicit instruction in this form anywhere in the curriculum. To my knowledge, I've never seen the status of chained relations directly addressed or tested in any math textbook at any level (again, whether basic algebra, precalculus, and so on). At some point instructors simply start using the notation, and we assume students will pick it up by osmosis.

  3. The compound form is entirely foreign to a natural language like English. Consider something as simple as \(a = b = c\). Translated literally into English, it reads "a is equal to b is equal to c" -- a run-on sentence, disallowed by the rules of English grammar (what's really meant is the conjunction "a is equal to b, and b is equal to c"). But here the algebraic language permits an entirely novel mode of expression.
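
As an aside on that third point: one formal language that does permit this mode of expression is Python, whose chained comparisons unpack into exactly the conjunction a mathematician intends. A quick sketch with made-up values of my own:

    a, b, c = 5, 5, 5

    # Python reads "a == b == c" the way a mathematician reads "a = b = c":
    # as (a == b) and (b == c) -- not as (a == b) == c.
    print(a == b == c)            # True
    print((a == b) and (b == c))  # True: the same statement, spelled out

    # Mixed relations chain the same way: 0 <= x < 10 means (0 <= x) and (x < 10).
    x = 7
    print(0 <= x < 10)            # True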

Considering that last point, we might observe that there is (surprisingly, for such a basic matter) unresolved confusion about how one should even pronounce a simple chained relation out loud. For example:

(Note that while the question is essentially the same, those two queries have entirely different top-voted answers.)

In my opinion, the status of chained relations is one of those classic blind-spot issues submarined in math education, one that winds up troubling students throughout their careers. To instructors, it's "obvious" and never rises to consciousness as an issue. To students, it's a quagmire that's never clearly addressed or exercised.

To this end, I've found that I need to start my discrete mathematics classes with direct instruction on this issue: namely, a short document that I ask students to read -- and to which I'll refer them throughout the semester when mistakes are made. You can download it here:

On Chained Relations (PDF)

And then to practice reading them, a timed quiz at the Automatic Algebra site:

Quiz on Chained Relations

With that quiz, I've had different math-trained professionals try it and tell me variously that (a) it was entirely trivial and of unclear value, or (b) it was entirely impossible within the time allowed on the timer. Isn't that interesting? What do you think?

2021-07-20

Veritasium on Learning Styles

The Veritasium channel on YouTube recently released a very high-quality video on "The Biggest Myth in Education" -- to wit, the Learning Styles theory. It's much needed and much appreciated. It includes comments by the famed cognitive psychologist Daniel Willingham, with whom I had the chance to speak in person on this (constantly frustrating) subject a few years ago. From the climax:

Review articles of learning styles consistently conclude there is no credible evidence that learning styles exist. In a 2009 review, the researchers note: "The contrast between the enormous popularity of the learning styles approach within education and the lack of credible evidence for its utility is, in our opinion, striking and disturbing... If classification of students' learning styles has practical utility, it remains to be demonstrated."

Big thanks to Veritasium for producing this. Pass it on!



2021-07-06

Minimal Mental RSA Cryptosystem


As an exercise, I went hunting for the smallest possible RSA cryptosystem such that I could perform all of the encryption and decryption calculations in my head. This turns out to be quite silly, of course. But perhaps you could use it as a toy classroom example; or possibly, like me, you'll find that it helps you remember the process by being able to exercise the whole thing concretely in your head.

I'll use the same notation as in the Wikipedia: RSA (cryptosystem) article, which you may want to review (I'll avoid repeating the basic principles of the system here). So our system needs distinct primes p and q, the product n = pq, and decryption/encryption exponent keys d and e. I'll also use, as per the original RSA paper, φ(n) = (p − 1)(q − 1) as the value with which our keys need to avoid common factors.

Obviously, the fundamental calculation for both encryption and decryption is computing mᵉ mod n. That looks pretty simple, unless you have a large exponent e and you're trying to do it in your head. Secondarily, having big plaintext values m complicates the job. Being able to reduce modulo n helps, but even as an intermediate step I don't want to be raising double-digit m's to double-digit e's, say.
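
For what it's worth, that operation is exactly what Python's three-argument pow computes, which is handy for double-checking the hand computations below; a one-liner with toy values anticipating the system we'll build:

    # Modular exponentiation m^e mod n -- the operation in question -- for
    # double-checking the mental arithmetic later in this post.
    m, e, n = 12, 3, 15        # toy values matching the system built below
    print(pow(m, e, n))        # 3, since 12^3 = 1728 = 115*15 + 3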

So the first job is to find a usable encryption exponent e. As noted on Wikipedia, "the smallest (and fastest) possible value for e is 3". Let's just reflect on why that is for a second. You can't use e = 0, aside from collapsing every block to a value of 1, because gcd(0, φ(n)) = φ(n) ≥ 2, instead of the required 1 (since the minimal primes p and q are 2 and 3, the minimal φ(n) is (1)(2) = 2). You can't use e = 1, because that would send every block to itself, and not demonstrate any encryption at all. You can't use e = 2, because at least one of your pair of primes is odd, so either (p − 1) or (q − 1) has a factor of 2, which can't be shared by e; and for the same reason e can't be any other even number.

So our first possible option for the exponent is e = 3, the next is e = 5 (provided those don't divide φ(n)), and so on. I don't know about you, but I really don't think I can reliably do 5th or 7th powers of arbitrary numbers in my head (even modulo n), so I'm aiming to use e = 3 here.
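
If you want to see those options enumerated mechanically, here's a small Python sketch (the function name valid_exponents is just my own label):

    from math import gcd

    # Exponents e with 1 < e < phi and gcd(e, phi) = 1 are the legal choices.
    def valid_exponents(phi):
        return [e for e in range(2, phi) if gcd(e, phi) == 1]

    # For the toy system chosen below (p = 3, q = 5), phi(n) = 8:
    print(valid_exponents(8))   # [3, 5, 7] -- all odd, as argued above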

Now, what can we use for the primes p and q? Even using one as big as 7 causes problems: say p = 7; then (p − 1) = 6 = 2(3), at which point the factor of 3 blocks you from using our desired encryption key of e = 3. If I used a pair like 5 and 11, then n = 55, and I'd be poised to cube numbers as big as 54, which is way outside my mental times-tables. So it looks like I'll be compelled to just use 3 and 5 as our prime pair.

In this case, with p = 3 and q = 5, so that n = 15, we don't even have enough space to encode all the letters of the alphabet (I said this was silly, right?). Also, we'll only be able to encode one of our limited characters at a time, which loses sight of the fact that RSA is a block cipher; but let's choose to forgive that. We'll enumerate the first part of the alphabet as A = 0, B = 1, C = 2, ... O = 14. Also note that φ(n) = 2(4) = 8, so the only factor that e needs to avoid is 2, and we can indeed use e = 3.

What about our decryption key exponent d? Well, there's a goofy coincidence here: solving the required congruence de ≡ 1 (mod φ(n)), that is, 3d ≡ 1 (mod 8), we get d = 3 (because 3(3) = 9 = 8 + 1). Here we have d = e, so our encryption and decryption keys happen to be exactly the same thing. On the one hand, this seems like a degenerate example of how the RSA system should work; on the other hand, for mental arithmetic it's pretty darned nice that both keys are the smallest value possible. (As noted, I really didn't want to be raising values to a 7th power or something for decryption purposes!) So we can work with that.
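
Here's a quick Python sketch of that key setup, just as a sanity check (note that pow(e, -1, phi) computes the modular inverse only in Python 3.8 and later):

    from math import gcd

    p, q, e = 3, 5, 3
    n = p * q                  # 15
    phi = (p - 1) * (q - 1)    # 8
    assert gcd(e, phi) == 1    # e = 3 shares no factor with phi(n)

    d = pow(e, -1, phi)        # modular inverse: solves d*e ≡ 1 (mod phi)
    print(n, phi, d)           # 15 8 3 -- so indeed d = e = 3 here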

Here's an example. Let's say we want to mentally encrypt the plaintext message "MIND". Using our number-encoding scheme, these four characters are represented by the numbers (12, 8, 13, 3). Computing the necessary cubings-modulo-15 to encrypt, we can feasibly find in our head:

  • 12³ = (144)(12) = (135 + 9)(12) ≡ (9)(12) = 108 = 105 + 3 ≡ 3 (mod 15)
  • 8³ = (64)(8) = (60 + 4)(8) ≡ (4)(8) = 32 = 30 + 2 ≡ 2 (mod 15)
  • 13³ = (169)(13) = (165 + 4)(13) ≡ (4)(13) = 52 = 45 + 7 ≡ 7 (mod 15)
  • 3³ = (9)(3) = 27 = 15 + 12 ≡ 12 (mod 15)
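
Those four results are easy to confirm in Python, if you don't trust my mental arithmetic:

    plaintext = [12, 8, 13, 3]                      # M, I, N, D under A = 0 ... O = 14
    ciphertext = [pow(m, 3, 15) for m in plaintext]
    print(ciphertext)                               # [3, 2, 7, 12], matching the hand computation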

So our encrypted message is (3, 2, 7, 12), or "DCHM" if we send it as alphabetic characters. Decrypting is (fortunately or unfortunately, depending on your perspective) exercising the exact same mental algorithm:

  • 3³ ≡ 12 (mod 15) [as above]
  • 2³ = 8 ≡ 8 (mod 15) [super easy!]
  • 7³ = (49)(7) = (45 + 4)(7) ≡ (4)(7) = 28 = 15 + 13 ≡ 13 (mod 15)
  • 12³ ≡ 3 (mod 15) [as above]

And indeed we recover our original transmission of (12, 8, 13, 3), or "MIND".
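
And here's the whole round trip as a short Python sketch, in case you want to check it mechanically (the helper names encode, decode, and crypt are just my own labels):

    ALPHABET = "ABCDEFGHIJKLMNO"                    # only 15 symbols fit when n = 15

    def encode(text): return [ALPHABET.index(ch) for ch in text]
    def decode(nums): return "".join(ALPHABET[x] for x in nums)
    def crypt(nums):  return [pow(m, 3, 15) for m in nums]   # one map serves both e = 3 and d = 3

    cipher = crypt(encode("MIND"))
    print(decode(cipher))          # DCHM
    print(decode(crypt(cipher)))   # MIND -- decryption is the identical operation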

Okay, hopefully you can appreciate that example, because I had to work really, really hard to find one that wasn't totally nonsensical. Here's why: in our toy cryptosystem, most characters get sent to themselves under encryption! Specifically (looking at the numerical representations): 0 → 0, 1 → 1, 2 → 8, 3 → 12, 4 → 4, 5 → 5, 6 → 6, 7 → 13, 8 → 2, 9 → 9, 10 → 10, 11 → 11, 12 → 3, 13 → 7, 14 → 14.
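
If you'd rather not cube fifteen numbers by hand, here's a quick Python check of that table:

    mapping = {m: pow(m, 3, 15) for m in range(15)}
    moved = [m for m, c in mapping.items() if m != c]
    print(mapping)   # reproduces the table above
    print(moved)     # [2, 3, 7, 8, 12, 13] -- only these six values change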

In summary: Only the six plaintext values 2, 3, 7, 8, 12, and 13 get sent to other images; the other nine values all get sent to themselves. That is, the only letters that get transformed are C, D, H, I, M, N -- and you'll notice that I carefully constructed my example of "MIND" to use exclusively, and almost all of, these available characters. Note in particular that "I" is the only available vowel in that set (initially I considered encoding A = 1, B = 2, etc.; but in that case we get no vowels with images different from themselves.)

In case you want to memorize these transformations directly, it might help to observe the pattern: starting with 14 and wrapping around, you get 3 values sent to themselves, then 2 with different images, then 3 same, 2 different, 3 same, and 2 different (again totaling 6 characters sent to images different from themselves). And the decryption function is, obviously, identical.

Is that of any interest? As I said at the outset, it's pretty darned silly. We can only handle letters up to O, we lose the block-cipher action, the encryption and decryption keys are identical, and most of our values get sent to themselves under the transformation. But I thought it was an interesting puzzle to see if there was any example that I could compute mentally, without any mechanical or even pen-and-paper aid, and it turns out there is.

Leave a comment if you ever use that in a class or paper as a warm-up demonstration!