Thursday, May 31, 2012

Math Conference


Initial Thoughts on the CUNY 2012 Mathematics Conference

About two weeks ago, I had the good fortune of attending the CUNY 2012 Mathematics Conference on Effective Instructional Strategies. To a large degree, this is a product of CUNY's Improving Undergraduate Mathematics Learning (IML) program, which in 2009 vetted and funded 10 research programs at various of the school's colleges. Many of the final papers were presented at this conference, and I'll plan to spend several posts presenting some of my thoughts on them -- you can see the final reports here.

As noted in several of the reports -- and on this blog previously -- at least half my job is teaching non-credit remedial Arithmetic and Algebra classes for students who can't initially pass a placement test in those subjects, and so many of the research projects were looking for ways to improve that teaching. (Again: nationwide, about 60% of community-college students need such remediation, and only about 30% complete it after some number of years.) Interestingly, within an hour of my sitting down to write this, a top AP headline crossed the news wire on exactly this issue: "Experts: Remedial College Classes Need Fixing".

The strategies looked at in the different CUNY research projects were fairly wide-ranging, and usually required total overhauls of the classes in some way. In general, they seemed to usually be one of: (1) group work/project-based exploratory learning, (2) online/software-based homework and exercise applications, and (3) "inverted" classrooms where video lectures were watched before class, and discussion and exercise drills run in-class.

Compare this to my previous blog post. On the one hand, the American Educator article by Clark et al. on "Fully Guided Instruction" had me thinking that the group-work-exploration craze was petering out, but apparently that's not quite the case. Consider also, say, David Klein's anti-reformist essay "A Brief History of American K-12 Mathematics Education in the 20th Century" (section "Public Resistance to the NCTM Standards"):
To understand the public backlash against the NCTM math programs of the 1990s, one needs to understand some of the mathematical shortcomings of these programs... Student discovery group work was the preferred mode of learning, sometimes exclusively, and the guidelines for discovery projects were at best inefficient and often aimless... Arithmetic and algebra were radically de-emphasized. Mathematical definitions and proofs for the higher grades were generally deficient, missing entirely, or even incorrect. Some of the elementary school programs did not even provide books for students, as they might interfere with student discovery. 
I would say that each of these hallmarks of the 1990s programs appeared in at least one of the research projects from the IML.

At a certain point, one of the speakers referred to Clark's research finding that expert learners do well with discovery-based methods, while novices do better with fully-guided instruction -- that being the same Clark who inspired my previous blog post (see more here). I was nodding along with this line of reasoning as very important, and then the speaker concluded with the line, "And since most of our students are aged 20-25, they count as experts, and need self-directed methods", at which point I almost fell out of my chair. (If our students can't pass a basic arithmetic/algebra test, then I don't see how it's valid to conclude that they're experts.)

More thoughts later.

Monday, May 28, 2012

Fully Guided Instruction


Highly Recommended Reading by Clark, Kirschner, and Sweller in American Educator Magazine: "Putting Students on the Path to Learning: The Case for Fully Guided Instruction".


I suppose that the "math wars" of the 1990s aren't entirely over yet. For example, for several years I've gotten this one magazine every other month called the NEA Higher Education Advocate (which in this household is referred to as "the crappy teaching magazine"). Every edition has a central keynote article under the heading of "Thriving in Academe", which almost uniformly features a call to "reform"-type instructional techniques such as group-work projects, exploratory/discovery-based learning, and the like. When it arrives, I tend to say, "Ah, the Pravda has arrived."

Among the funny recurring jokes of the feature is that it always has an "Issues to Consider" section (an FAQ, basically), which frequently fields the question of "Won't this take more time and effort / not allow us to cover as many topics?" And the answer is usually some flavor of "Definitely!". For example, from the Dec-2008 article on "CRISP" (there's usually someone peddling a new acronym/system in every issue):
Won’t being C.R.I.S.P. cause me to sacrifice coverage? 

Of course. The sciences are especially concerned with complete nomenclature. They are worried that if a Biology 101 student doesn’t learn every bone, muscle, and organ in the body, the student won’t be prepared for Biology 102, not to mention advanced study in related fields such as nursing and exercise and sports science. However, since studies indicate that students will “forget” (they actually put the information in their short-term memories only) 75 to 90 percent of the material in three months anyway, shouldn’t you worry more that students develop skills and fundamental concepts? If students truly comprehend, for instance, how the bones work in general, shouldn’t they be able to figure out how a specific bone functions or know where to look it up? 


So in contrast, the American Educator magazine is what I call the "good teaching magazine", and seems to have higher-quality, more in-depth, and more interesting articles in each issue (it's published quarterly). The current "Fully Guided Instruction" article by Clark et al. was a bracing breath of fresh air, representing the opposite point of view: that having students "discover" principles on their own is a weaker technique than instructors cutting to the chase and simply telling them how things work in a straightforward manner (then modeling proper usage, and overseeing practice). It cites seemingly strong research that the two techniques suit different groups of students: strong students (with pre-existing deep background knowledge) work well with discovery-based learning, whereas weak students (those with deficiencies) do better with explicit direction. In fact:
Worse, a number of experiments found that less-skilled students who chose or were assigned to less-guided instruction received significantly lower scores on posttests than on pretest measures. For these relatively weak students, the failure to provide strong instructional support produced a measurable loss of learning.

So this seems particularly relevant to my work, over half of which is teaching remedial arithmetic and algebra at a large, urban community college (the stats for us, and nationwide, being about 60% of students taking remedial math, and only about 30% successfully completing it after 3 years).

And the other fascinating thing in the article was the description of a decades-long history of similar "discovery-based" reform efforts since at least the 1950's, each of which was given a new name and similarly came up empty in research-based results. Highly recommended reading.

Thursday, May 17, 2012

Necessary but Not Sufficient

Here's a random math/logic-book exercise I'd like to see:


The picture above shows a door with triple locks. Answer the following questions:

(a) Do you need the key to the top lock to enter this door? Yes.
(b) Will you be able to enter this door if you have only the key to the top lock? No.
(c) What's the technical term for this relationship? Necessary but not sufficient.


[Photo by LiGhtSynC under CC2.]
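For anyone who likes seeing the logic spelled out, the lock relationship can be sketched in a few lines of Python (the `can_enter` function and its three key parameters are my own hypothetical naming, not part of the exercise):

```python
def can_enter(top_key, middle_key, bottom_key):
    # The door opens only when all three locks can be opened.
    return top_key and middle_key and bottom_key

# Necessary: without the top key, the door never opens.
assert not can_enter(False, True, True)
# Not sufficient: the top key alone doesn't open the door.
assert not can_enter(True, False, False)
# All three keys together do open it.
assert can_enter(True, True, True)
```

In other words, the top key appears in every successful entry (necessary), but its presence alone doesn't guarantee one (not sufficient).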

Tuesday, May 8, 2012

"Something Highly Unlikely"


On the Need to Establish Hypotheses Prior to Testing; Or, The Fact that It Is Overwhelmingly Likely that Something Highly Unlikely Will Happen in Any Experiment.

Lately I've gotten in the habit of doing several card-drawing demonstrations in my statistics classes (as concrete examples of sampling, estimating a population mean, hypothesis testing, interpretation, etc.). Here's one motivating a test question that I sometimes ask: "Is it acceptable to decide what type of test to conduct [left-, right-, or two-tailed] by examining the sample data?" Say that I bring in a deck of playing cards, shuffle, and deal out 6 cards. For example, when I just did this at my desk I got this:



Now, what follows in this paragraph is an example of faulty reasoning -- Note that I just got duplicate 4's in this draw, and of course, that's highly unlikely with a standard deck (specifically, only about a 6% chance to get two or more 4's *). Therefore, one might conclude that I doctored the deck with extra 4's.

What's wrong with that reasoning? Well, we didn't establish the hypotheses prior to testing, so it's unfair and biased to use this as data in support of that hypothesis. Or perhaps it's better to look at it this way: It's overwhelmingly likely that something highly unlikely will happen in any such experiment (if you look at the data post-facto and labor to draw out some weird numerology-like pattern). Specifically, the chance of getting some duplicated card value from a standard deck in this case (not necessarily 4's) is actually 65%. **

So let's try this again: It is fair to use the first draw as suggestive of a new hypothesis. Let's hypothesize: "He doctored this deck with extra 4's". So if we shuffle and draw 6 cards again, then we should expect to see one or more 4's. And when I ran this experiment just now the result was:



Which rather obviously destroys the hypothesis; in this case I didn't get any 4's at all. (I did get duplicate 8's, but again, normal probability says that you'll usually get duplicate somethings from a standard deck when drawing 6 cards, so it's not really surprising or interesting at all.) To be doing interesting science, you have to establish coherent hypotheses in advance, and be able to predict and replicate your results.



Calculation Footnotes:
* Drawing 6 cards: Chance to get zero 4's is: 48P6/52P6 = 0.603. Chance to get an initial 4 and then all non-4's is: 4×48P5/52P6 = 0.056; so chance to get a single 4 in some order is 6×0.056 = 0.336. Sum of these is 0.603+0.336 = 0.939. Therefore, the chance to get two or more 4's is P(not zero or one 4)  = 1 - 0.939 = 0.061 ~ 6%.

** Drawing 6 cards: Chance to get no duplicates is 52/52×48/51×44/50×40/49×36/48×32/47 = 0.345. Therefore, the chance to get at least one duplicate is P(not zero duplicates) = 1 - 0.345 = 0.655 ~ 65%. (And as a sanity check, the preceding should be approximately 1/13 this, i.e., 65%/13 ~ 5% which does check out.)
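Both footnote calculations can be double-checked with a few lines of Python. This sketch counts hands with combinations rather than permutations, which gives the same answers as the formulas above:

```python
from math import comb

total = comb(52, 6)  # number of 6-card hands from a 52-card deck

# Footnote *: probability of two or more 4's in the hand.
p_zero_fours = comb(48, 6) / total               # hand avoids all four 4's
p_one_four = comb(4, 1) * comb(48, 5) / total    # exactly one 4
p_two_plus_fours = 1 - p_zero_fours - p_one_four

# Footnote **: probability that some card value is duplicated.
# The (i+1)-th card drawn must avoid the i values already seen (4 cards each).
p_no_duplicates = 1.0
for i in range(6):
    p_no_duplicates *= (52 - 4 * i) / (52 - i)
p_some_duplicate = 1 - p_no_duplicates

print(round(p_two_plus_fours, 3))   # 0.061
print(round(p_some_duplicate, 3))   # 0.655
```

Matching the footnotes: about a 6% chance of duplicate 4's specifically, but about a 65% chance of duplicate somethings.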