tag:blogger.com,1999:blog-77184627935169688832015-11-25T14:51:04.070-05:00AngryMath"Beauty is the Enemy of Expression"Deltahttp://www.blogger.com/profile/00705402326320853684noreply@blogger.comBlogger201125tag:blogger.com,1999:blog-7718462793516968883.post-87916056913718976702015-11-23T05:00:00.000-05:002015-11-23T05:00:07.566-05:00A Bunch of Dumb Things Journalists Say About PiA lovely rant by Dave Renfro, via Pat Ballew's blog, here:<br /><div style="text-align: center;"><span style="font-size: large;"><br /></span></div><div style="text-align: center;"><span style="font-size: large;"><a href="http://pballew.blogspot.com/2010/03/guest-blog-rant-from-dave-renfro.html">http://pballew.blogspot.com/2010/03/guest-blog-rant-from-dave-renfro.html</a></span></div><br /><br />Deltahttp://www.blogger.com/profile/00705402326320853684noreply@blogger.com0tag:blogger.com,1999:blog-7718462793516968883.post-29165681699488240102015-11-16T05:00:00.001-05:002015-11-16T05:00:07.521-05:00Joyous ExcitementDid you know that this week is the 100th anniversary of Einstein's completion of General Relativity? Specifically it was November 18, 1915 when Einstein drafted a paper that realized the final fix to his theories that would account for the previously unexplainable advance of the perihelion of Mercury. The next week he submitted this paper, "The field equations of gravitation", to the Prussian Academy of Sciences, which included what we now refer to simply as <a href="https://en.wikipedia.org/wiki/Einstein_field_equations">"Einstein's equations"</a>. <br /><br />Einstein later recalled of this seminal moment:<br /><blockquote class="tr_bq"><i>For a few days I was beside myself with joyous excitement.</i></blockquote><br />And further: <br /><blockquote class="tr_bq"><i> ... in all my life I have not laboured nearly so hard, and I have become imbued with great respect for mathematics, the subtler part of which I had in my simple-mindedness regarded as pure luxury until now. 
</i></blockquote><br />(Quotes from <a href="http://www-history.mcs.st-and.ac.uk/HistTopics/General_relativity.html">"General Relativity"</a> by J.J. O'Connor and E.F. Robertson at the School of Mathematics and Statistics, University of St. Andrews, Scotland). <br /><br /><br />Deltahttp://www.blogger.com/profile/00705402326320853684noreply@blogger.com0tag:blogger.com,1999:blog-7718462793516968883.post-67131474895848830302015-11-09T05:00:00.000-05:002015-11-12T18:36:08.598-05:00Measurement GranularityAnswering a question on StackExchange, I came across some very nice little articles by the Six Sigma system people on Measurement System Analysis: <br /><blockquote class="tr_bq"><i>Establishing the adequacy of your measurement system using a measurement system analysis process is fundamental to measuring your own business process capability and meeting the needs of your customer (specifications). Take, for instance, cycle time measurements: It can be measured in seconds, minutes, hours, days, months, years and so on. There is an appropriate measurement scale for every customer need/specification, and it is the job of the quality professional to select the scale that is most appropriate.</i></blockquote><br />I like this because this issue comes up a lot in the mathematics of game design: What is the most convenient and efficient scale for a particular system of measurement? And what should we be considering when we mindfully choose those units at the outset?<br /><br />One key example from my D&D gaming is that, at the outset, units of encumbrance (weight carried) were ludicrously set in <i>tenths-of-a-pound</i>, so tracking gear carried by any character involves adding up units in the hundreds or thousands, frequently requiring a calculator to do so. As a result, D&D encumbrance is infamous for being almost entirely unusable, and frequently discarded during play. 
My argument is that this is almost entirely due to an incorrect choice in measurement scale for the task -- equivalent to measuring a daily schedule in seconds, when what you really need is hours. I've recommended for a long time using the flavorfully archaic scale of "stone" weight (i.e., 14-pound units; <a href="http://deltasdnd.blogspot.com/2010/09/stone-encumbrance-detail-example.html">see here</a>), although the advantage could also be achieved by taking 5- or 10-pound units as the base. Likewise, I have a tendency to defend other Imperial units of weight as being useful in this sense (see: <a href="https://en.wikipedia.org/wiki/Human_scale">Human scale measurements</a>), although I might be biased just a bit for being so steeped in D&D (further example: a league is about how far one walks in an hour, etc.).<br /><br />The Six Sigma articles further show a situation where the difference in two production processes is discernible at one scale of measurement, but invisible at another incorrectly-chosen scale of measurement. 
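That effect -- a real difference that is plain at one granularity and invisible at a coarser one -- is easy to sketch numerically. In this toy example (all numbers invented for illustration), two processes whose cycle times differ by seven seconds are clearly distinguishable when measured in seconds, but report identically when each cycle is rounded to the nearest minute:

```python
def measure(seconds, unit_seconds):
    """Report a duration rounded to the nearest whole number of the chosen unit."""
    return round(seconds / unit_seconds)

# Two hypothetical processes whose true cycle times differ by 7 seconds:
a_seconds, b_seconds = 61, 68

in_seconds = (measure(a_seconds, 1), measure(b_seconds, 1))    # (61, 68): distinguishable
in_minutes = (measure(a_seconds, 60), measure(b_seconds, 60))  # (1, 1): difference invisible
```

Too coarse a unit erases the signal entirely; too fine a unit (tenth-pound encumbrance) buries it in bookkeeping.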
See more below:<br /><ul><li><a href="http://www.isixsigma.com/tools-templates/measurement-systems-analysis-msa-gage-rr/measurement-system-analysis-resolution-granularity/">Measurement System Analysis Resolution, Granularity </a></li><li><a href="http://www.isixsigma.com/tools-templates/measurement-systems-analysis-msa-gage-rr/proper-data-granularity-allows-stronger-analysis/">Proper Data Granularity Allows for Stronger Analysis </a></li><li><a href="http://www.isixsigma.com/tools-templates/measurement-systems-analysis-msa-gage-rr/measurement-systems-analysis-process-industries/">Measurement Systems Analysis in Process Industries </a></li></ul><br />Deltahttp://www.blogger.com/profile/00705402326320853684noreply@blogger.com2tag:blogger.com,1999:blog-7718462793516968883.post-46651750291859277342015-11-02T05:00:00.000-05:002015-11-12T18:31:56.318-05:00On Common CoreAs people boil the oil and man the ramparts for this decade's education-reform efforts, I've gotten more questions recently about what I think regarding Common Core. Fortunately, I had a chance to look at it as part of CUNY's ongoing attempts to refine our algebra remediation and exam structure.<br /><br />A few opening comments: First, this is purely in regard to the math side of things, and mostly just focused on the area of 6th-8th grade and high school Algebra I that my colleagues and I are largely involved in remediating (see the standards here: <a href="http://www.corestandards.org/Math/">http://www.corestandards.org/Math/</a>... and I would highlight the assertion that "Indeed, some of the highest priority content for college and career readiness comes from Grades 6-8.", <a href="http://www.corestandards.org/Math/Content/note-on-courses-transitions/courses-transitions/">Note on courses & transitions</a>). 
Second, we must distinguish what Common Core specifies and what it does not: it does dictate <i>things to know at the end of each grade level</i>, but <u>not</u><i> how they are to be taught</i>. In general:<br /><blockquote class="tr_bq"><i>The standards establish what students need to learn, but they do not dictate how teachers should teach. Teachers will devise their own lesson plans and curriculum, and tailor their instruction to the individual needs of the students in their classrooms.</i> (<a href="http://www.corestandards.org/about-the-standards/frequently-asked-questions/">Frequently Asked Questions</a>: What guidance do the Common Core Standards provide to teachers?)</blockquote>Specifically in regards to math: <br /><blockquote class="tr_bq"><i>The standards themselves do not dictate curriculum, pedagogy, or delivery of content. (<a href="http://www.corestandards.org/Math/Content/note-on-courses-transitions/courses-transitions/">Note on courses & transitions</a>)</i></blockquote>So this foreshadows a two-part answer:<br /><br /><h3>(1) I think the standards look great.</h3>Everything that I've seen in the standards themselves looks smart, rigorous, challenging, core to the subject, and pretty much indispensable to a traditional college curriculum in calculus, statistics, computer programming, and other STEM pursuits. I encourage you to read them at the link above. It includes pretty much everything in a standard algebra sequence for the last few centuries or so. <br /><br />I like the balanced requirement to achieve both conceptual understanding <i>and</i> procedural fluency (<a href="http://www.corestandards.org/Math/Practice/"> http://www.corestandards.org/Math/Practice/</a>). As always, my response in a lot of debates is, "<i>you need both</i>". And this reflects the process of presenting higher-level mathematics theorems: a careful proof, and then applications. 
The former guarantees correctness and understanding; the latter uses the theorem as a powerful shortcut to get work done more efficiently. <br /><br />Quick example that I came across last night: "<i>By the end of Grade 3, know from memory all products of two one-digit numbers.</i>" (<a href="http://www.corestandards.org/Math/Content/3/OA/">http://www.corestandards.org/Math/Content/3/OA/</a>). That's not a nonsense exercise; it's a necessary tool to later understand long division, factoring, fractions, rational versus irrational numbers, estimations, the Fundamental Theorems of Arithmetic and Algebra, etc. I was happy to spot that as a case example. (And I deeply wish that we could depend on all of our college students having that skill.) <br /><br />I like what I see for sample tests. Here are some examples from the nation-wide PARCC consortium (by Pearson, of course; <a href="http://parcc.pearson.com/practice-tests/math/">http://parcc.pearson.com/practice-tests/math/</a>): I'm looking at the 7th- and 8th-grade and Algebra I tests. They all come in two parts: Part I, short questions, multiple-choice, with no calculators allowed. Part II, more sophisticated questions, short-answer (<i>not</i> multiple choice), with calculators allowed. I think that's great: <i>you need both</i>. <br /><br />New York State writes their own Common Core tests instead of using PARCC, at least at the high school level (<a href="http://www.nysedregents.org/">http://www.nysedregents.org/</a>): here I'm looking mostly at Algebra I (<a href="http://www.nysedregents.org/algebraone/">http://www.nysedregents.org/algebraone/</a>). Again, a nice pattern of one part multiple-choice, the other part short-answer. I wish we could do that in our system. Now, the NYS Algebra I test is all-graphing-calculator mandatory, which sets my teeth on edge a bit compared to the PARCC tests. 
Maybe I could live with that as long as students have confirmed mental mastery at the 7th- and 8th-grade level (not that I can confirm that they do). Even the grading rubric shown here for NYS looks fine to me (approximately half-credit for calculation, and half-credit for conceptual understanding and approach on any problem; that's pretty close to what I've evolved to do in my own classes). <br /><br />In summary: Pretty great stuff as far as published standards and test questions (at least for 7th-8th grade math and Algebra I).<br /><br /><h3>(2) The implementation is possibly suspect. </h3>Having established rigorous standards and examinations, these don't solve some of the endemic problems in our primary education system. Granted that "Teachers will devise their own lesson plans and curriculum, and tailor their instruction to the individual needs of the students in their classrooms." (above):<br /><br />Most teachers in grades K-6, and even 7-8 in some places (note that's specifically the key grades highlighted above for "some of the highest priority content for college and career readiness") are not mathematics specialists. In fact, U.S. education school entrants are perennially the <a href="http://qz.com/493971/inside-chipotles-extremely-intense-39-point-checklist-for-good-management/">very weakest of all incoming college students in proficiency and attitude towards math</a> (also: <a href="http://www.angrymath.com/2014/12/academically-adrift.html">here</a>). If the teachers at these levels fundamentally don't understand math themselves -- don't understand the later algebra and STEM work that it prepares them for -- then I have a really tough time seeing how they can understand the Common Core requirements, or effectively select and implement appropriate mathematical curriculum for their classrooms. 
Sometimes I refer to students at this level as having "anti-knowledge" -- and I find that it's much easier to instruct a student who has <i>never heard of algebra ever</i> (which sometimes happens for graduates of certain religious programs) than it is to deconstruct and repair the incorrect conceptual frameworks of students with many years of broken instruction. <br /><br />Before I go on: The best solution to this would be to massively increase salary and benefits for all public-school teachers, and implement top-notch rigorous requirements for entry to education programs (as done in other top-performing nations). A second-best solution, which is probably more feasible in the near-term, would be to place mathematics-specialist teachers in all grades K-12.<br /><br />The other key problem I see is: how are the test scores generated? We already know that in many places students take tests, and then the test scores are arbitrarily inflated or scaled by the state institutions, manipulating them to guarantee some particular high percentage is deemed "passing" (regardless of actual proficiency, for political purposes). For example, the conversion chart for NYS Algebra I Common Core raw scores to final scores for this past August is shown below (from NYS regents link above): <br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-T7T1hNwm1RM/VghDVbpYFHI/AAAAAAAADtM/npVWf15jkLY/s1600/NYSCCAlgebraI-ConversionChart.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://4.bp.blogspot.com/-T7T1hNwm1RM/VghDVbpYFHI/AAAAAAAADtM/npVWf15jkLY/s320/NYSCCAlgebraI-ConversionChart.png" width="320" /></a></div><br />Now, this is a test that had a maximum total of 86 possible points. 
If we linearly converted this to a percentage, we would just multiply any score by 100/86 ≈ 1.16; it would add 14 points at the top of the scale, about 7 points at the middle, and 0 points at the bottom. But that's not what we see here -- it's a nonlinear scaling from raw to final. The top adds 14 points, but in the middle it adds 30 or more points in the raw range from 13 to 40. <br /><br />The final range is 0 to 100, allowing you to think it might be a percentage, but it's not. If we consider 60% to be minimal normal passing on a test, for this test that would occur at the 52-point raw score mark; but that gets scaled to a 73 final score, which usually means a middle-C grade. Looking at the 5 performance levels (more-or-less equivalent to A- through F- letter grades): A performance level of "3" is achieved with a raw score of just 30, which is only 30/86 ≈ 35% of the available points on the test. A performance level of "2" is achieved with a raw score of only 20, that is, 20/86 ≈ 23% of the available points on the test. And these low levels (near random-guessing) are considered acceptable for awards of a high school diploma (<a href="http://www.p12.nysed.gov/assessment/reports/commoncore/tr-a1-ela.pdf">www.p12.nysed.gov/assessment/reports/commoncore/tr-a1-ela.pdf</a>, p. 19): <br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-4CU-Lvlz2M0/VghGWI7mZwI/AAAAAAAADtY/r0Y1BbinbPA/s1600/RegentsPerformanceLevels.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="211" src="http://3.bp.blogspot.com/-4CU-Lvlz2M0/VghGWI7mZwI/AAAAAAAADtY/r0Y1BbinbPA/s320/RegentsPerformanceLevels.png" width="320" /></a></div><br />In summary: While the publicized standards and exam formats look fine to me, the devil is in the details. 
On the input end, actual curriculum and instruction are left as undefined behavior in the hands of primary-school teachers who are not specialists, and rarely empowered, and frequently the very weakest of all professionals in math skills and understanding. And on the output end, grading scales can be manipulated arbitrarily to show any desired passing rate, almost entirely disconnected from the actual level of mastery demonstrated in a cohort of students. So I fear that almost any number of students can go through a system like that and not actually meet the published Common Core standards to be ready for work in college or a career. <br /><br /><br />Deltahttp://www.blogger.com/profile/00705402326320853684noreply@blogger.com0tag:blogger.com,1999:blog-7718462793516968883.post-15816536022658974552015-10-26T05:00:00.000-04:002015-10-26T05:00:02.002-04:00Double Factorial TableThe <a href="https://en.wikipedia.org/wiki/Double_factorial">double factorial</a> is the product of a number and every <i>second</i> natural number less than itself. That is:<br /><br /><div style="text-align: center;">\(n!! = \prod_{k = 0}^{ \lceil n/2 \rceil - 1} (n - 2k) = n(n-2)(n-4)...\)</div><br />Presentation of the values for double factorials is usually split up into separate even- and odd- sequences. 
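For concreteness, the product above computes directly; a minimal Python sketch (by the empty-product convention, \(0!! = 1!! = 1\)):

```python
def double_factorial(n):
    """n!! = n * (n - 2) * (n - 4) * ..., stepping down by 2 until reaching 1 or 2.

    For n = 0 or 1 the loop body never runs, giving the empty product 1.
    """
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result
```

For example, `double_factorial(9)` multiplies 9 · 7 · 5 · 3 · 1 = 945, and even inputs bottom out at 2 (6!! = 6 · 4 · 2 = 48).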
Instead, I wanted to see the sequence all together, as below: <br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-Ua53g6RQ9j4/Vf2fRankanI/AAAAAAAADsw/q2cUs29962c/s1600/DoubleFactorialTable.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Values of the double factorial function" border="0" height="320" src="http://1.bp.blogspot.com/-Ua53g6RQ9j4/Vf2fRankanI/AAAAAAAADsw/q2cUs29962c/s320/DoubleFactorialTable.png" title="Double Factorial Table" width="151" /></a></div><br />Deltahttp://www.blogger.com/profile/00705402326320853684noreply@blogger.com0tag:blogger.com,1999:blog-7718462793516968883.post-3716737898871287132015-10-19T05:00:00.000-04:002015-10-19T05:00:05.958-04:00Geometry Formulas in TauHere's a modified geometry formula sheet so all the presentations of circular shapes are in terms of tau (not pi); tack it to your wall and see if anybody spots the difference.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-Z2yzqWUGFxw/Vf2bZHmeRvI/AAAAAAAADsk/L__jdLNQDN4/s1600/GeometryFormulasWithTau.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://3.bp.blogspot.com/-Z2yzqWUGFxw/Vf2bZHmeRvI/AAAAAAAADsk/L__jdLNQDN4/s320/GeometryFormulasWithTau.gif" width="256" /></a></div><br />(<a href="https://www.pinterest.com/pin/377176537513172091/">Original sheet here.</a>)Deltahttp://www.blogger.com/profile/00705402326320853684noreply@blogger.com0tag:blogger.com,1999:blog-7718462793516968883.post-19306243693066243732015-10-12T05:00:00.000-04:002015-10-12T11:14:09.008-04:00On ZerationIn my post last week on hyperoperations, I didn't talk much about the operation under addition, the zero-th operation in the hierarchy, which many refer to as "zeration". There is a surprising amount of disagreement about exactly how zeration should be defined. 
<br /><br /><a href="https://en.wikipedia.org/wiki/Peano_axioms">The standard Peano axioms defining the natural numbers stipulate a single operation called the "successor".</a> This is commonly written S(n), which indicates the next natural number after n. Later on, addition is defined in terms of repeated successor operations, and so forth. <br /><br /><a href="https://en.wikipedia.org/wiki/Hyperoperation#Definition">The traditional definition of zeration, per Goodstein, is: \(H_0(a, b) = b + 1\).</a> Now when I first saw this, I was surprised and taken aback. All the other operations start with \(a\) as a "base", and then effectively apply some simpler operation \(b\) times, so it seems odd to start with the \(b\) and just add one to it. (If anything, my expectation would have been to take \(a+1\), but that doesn't satisfy the regular recursive definition of \(H_n\) when you try to construct addition.) <br /><br />As it turns out, when you get to this basic level, you're doomed to lose many of the regular properties of the operations hierarchy. <a href="http://math.eretrandre.org/tetrationforum/showthread.php?tid=122">So there's nothing to do but start arguing about which properties to prioritize as "most fundamental" when constructing the definition.</a><br /><br />Here are some points in <b>favor</b> of the standard definition \(b+1\): (1) It does satisfy the recursive formula that repeated applications are equivalent to addition (\(H_1\)). (2) It does look passingly like counting by 1, i.e., the Peano "successor" operation. (3) It shares the key identity that \(H_n(a, 0) = 1\), for all \(n \ge 3\). (4) Since it is an elementary operation (addition, really), it can be extended from natural numbers to all real and complex numbers in a fashion which is analytic (infinitely differentiable). 
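Favorable point (1) can be checked mechanically: starting from the base case \(H_1(a, 0) = a\) and applying the standard zeration \(b\) times does rebuild addition. A quick sketch (function names are mine):

```python
def zeration(a, b):
    """Standard (Goodstein) zeration: H_0(a, b) = b + 1 (note it ignores a)."""
    return b + 1

def add_by_iterated_zeration(a, b):
    """Recover addition H_1(a, b): start from the base case H_1(a, 0) = a,
    then apply zeration b times, per H_1(a, b) = H_0(a, H_1(a, b - 1))."""
    result = a
    for _ in range(b):
        result = zeration(a, result)
    return result
```

Each application bumps the running total by 1, so after \(b\) steps we arrive at \(a + b\), as the recursion requires.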
<br /><br />But here are some points <b>against</b> the standard definition: (1) It is not "really" a binary operator like the rest of the hierarchy, in that it totally ignores the first parameter \(a\). (2) Because of its ignoring \(a\), it's not commutative like the other low-level operations n = 1 or 2 (yet like them it is still associative and distributive, or as I sometimes say, collective of the next higher operation). (3) For the same reason, it has no identity element (no way to recover the value \(a\), unique among the entire hyperoperations hierarchy). (4) It's the only hyperoperation which doesn't need a special base case for when \(b = 0\). (5) I might turn around favorable point #3 above and call it weird and unfavorable, in that it is misaligned in this way with operations n = 1 and 2, and it's the only case of one of the key identities being <i>added</i> at a lower level instead of being lost. See how weird that looks below?<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/--kwtOW8ccho/VfyFj-RI_jI/AAAAAAAADsQ/a8TUBy0C_Is/s1600/ZerationIdentities.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/--kwtOW8ccho/VfyFj-RI_jI/AAAAAAAADsQ/a8TUBy0C_Is/s1600/ZerationIdentities.png" /></a></div><br />So as a result, a variety of alternative definitions have been put forward. I think my favorite is \(H_0(a, b) = max(a, b) + 1\). Again, this looks a lot like counting; I might possibly explain it to a young student as "count one more than the largest number you've seen before". Points in <b>favor</b>: (1) Repeated applications are again the same as addition. (2) It is truly a binary operation. (3) It is commutative, and thus completes the trifecta of commutativity, associativity, and distribution/collection being true for all operations \(n < 3\). (4) It does have an identity element, in \(b = 0\). 
(5) It maintains the pattern of <i>losing</i> more of the high-level identities, and in fact perfects the situation in that <i>none</i> of the five identities hold for this zeration (all "no's" in the modified table above for \(n = 0\)). Points <b>against</b>: (1) It isn't exactly the same as the unary Peano successor function. (2) It's non-differentiable, and therefore cannot be extended to an analytic function over the fields of real or complex numbers.<br /><br />There are vocal proponents of a related possible re-definition: \(H_0(a, b) = max(a, b) + 1\) if a ≠ b, \(a + 2\) if a = b. The advantage here is that it matches some identities in other operations, like \(H_n(a, a) = H_{n+1}(a, 2)\) and \(H_n(2, 2) = 4\), but I'm less impressed by specific magic numbers like that (as compared to having commutativity and the pattern of actually <i>losing more identities</i>). The disadvantage is obviously that the possibility of adding 2 in the \(a+2\) case gets us even further away from the simple Peano successor function. <br /><br />And then some people want to establish commutativity so badly that they assert this: \(H_0(a, b) = \ln(e^a + e^b)\). That does get you commutativity, but at that point we're so far away from simple counting in natural numbers that I don't even want to think about it.<br /><br /><br />Final thought: While most people interpret the standard definition of zeration, \(H_0(a, b) = b + 1\), as "counting 1 more place from b", it makes more sense to my brain to turn that around and say that we are <i>"counting b places from 1"</i>. That is, ignoring the \(a\) parameter, start at the number 1 and apply the successor function repeatedly b times: \(S(S(S(...S(1))))\), with the \(S\) function appearing \(b\) times. This feels more like "basic" Peano counting, it maintains the sense of \(b\) being the number of times some simpler operation is applied, and it avoids defining zeration in terms of the higher operation of addition. 
And then you also need to stipulate a special base case for \(b = 0\), like all the other hyperoperations, namely \(H_0(a, 0) = 1\). <br /><br />So maybe the standard definition is the best we can do, and the closest expression of what Peano successor'ing in natural numbers (counting) really indicates. Perhaps we can't really have a "true" binary operator at level \(H_0\), at a point when we haven't even discovered what the number "2" is yet. <br /><br />P.S. Can we consider defining an operation one level even lower, perhaps \(H_{-1}(a, b) = 1\) which ignores <i>both</i> parameters, just returns the natural number 1, and loses every single one of the regular properties of hyperoperations (including recursivity in the next one up)? <br /><br /><br />Deltahttp://www.blogger.com/profile/00705402326320853684noreply@blogger.com3tag:blogger.com,1999:blog-7718462793516968883.post-40354459098553366992015-10-05T05:00:00.000-04:002015-10-05T05:00:04.163-04:00On HyperoperationsConsider the basic operations: Repeated counting is addition; repeated addition is multiplication; repeated multiplication is exponentiation. Hyperoperations are the idea of generally extending this sequence. This was first proposed as such, in a passing comment, by R. L. Goodstein in an article to the <i>Journal of Symbolic Logic</i>, "Transfinite Ordinals in Recursive Number Theory" (1947):<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-6bLiFM91yvM/VfuCoLClA7I/AAAAAAAADq4/nGoQWpl4ij0/s1600/Goodstein-Hypoeroperations.png" imageanchor="1"><img alt="" border="0" height="190" src="http://4.bp.blogspot.com/-6bLiFM91yvM/VfuCoLClA7I/AAAAAAAADq4/nGoQWpl4ij0/s320/Goodstein-Hypoeroperations.png" title="Goodstein's definition of hyperoperations (1947)" width="320" /></a></div><br />At this point, there are a lot of different ways of denoting these operations. There's \(H_n\) notation. There's the Knuth up-arrow notation. 
There's box notation and bracket notation. The Ackermann function means almost the same thing. Conway's chained arrow notation can be used to show them. Some people concisely symbolize the zero-th level operation (under addition) as \(a \circ b\), and the fourth operation (above exponentiation) as \(a \# b\). <a href="https://en.wikipedia.org/wiki/Hyperoperation#cite_ref-nega_16-1">Wikipedia reiterates Goodstein's original definition like so</a>, for \(H_n (a,b): (\mathbb N_0)^3 \to \mathbb N_0\):<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-HQZmXXVpbGE/VfuEv_aHK6I/AAAAAAAADrE/3bNRkA4GX9s/s1600/HyperoperationsDefinition.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="" border="0" height="105" src="http://1.bp.blogspot.com/-HQZmXXVpbGE/VfuEv_aHK6I/AAAAAAAADrE/3bNRkA4GX9s/s320/HyperoperationsDefinition.png" title="Wikipedia's definition of hyperoperations" width="320" /></a></div><br />Let's use Goodstein's suggested names for the levels above exponentiation. Repeated exponentiation is tetration; repeated tetration is pentation; repeated pentation is hexation; and so forth. Since I don't see them anywhere else online, below you'll find some partial hyperproduct tables for these next-level operations (<a href="http://www.superdan.net/download/blog/angrymath/HyperoperationTables.ods">and ODS spreadsheet here</a>). Of course, the values get large very fast; you'll see some entries in scientific notation, and then "#NUM!" indicates a place where my spreadsheet could no longer handle the value (that is, something greater than \(1 \times 10^{308}\)). 
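Goodstein's recursive definition translates nearly line-for-line into code. Here is a deliberately naive Python sketch (the function name `H` is mine); it is only practical for tiny arguments, since both the values and the recursion depth explode:

```python
def H(n, a, b):
    """Goodstein's hyperoperation hierarchy H_n(a, b) over the naturals.

    H_0 is zeration (b + 1); H_1 is addition, H_2 multiplication,
    H_3 exponentiation, H_4 tetration, and so on, each built by
    iterating the level below.
    """
    if n == 0:
        return b + 1                                   # zeration
    if b == 0:
        return a if n == 1 else (0 if n == 2 else 1)   # per-level base cases
    return H(n - 1, a, H(n, a, b - 1))                 # H_n(a,b) = H_{n-1}(a, H_n(a, b-1))
```

For example, `H(4, 2, 3)` gives \(2 \# 3 = 2^{2^2} = 16\), and `H(4, 3, 2)` gives \(3^3 = 27\), matching the tables below.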
<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-4-wH3LalKdQ/VfuKeX4ZjFI/AAAAAAAADr0/YhPh8llVFjM/s1600/TetrationTable.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Values of repeated exponentiation" border="0" height="221" src="http://1.bp.blogspot.com/-4-wH3LalKdQ/VfuKeX4ZjFI/AAAAAAAADr0/YhPh8llVFjM/s320/TetrationTable.png" title="Hyperoperation tetration table" width="320" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-Xs24vrw9x3g/VfuKFhmzX5I/AAAAAAAADrs/IfCmlebhWjU/s1600/PentationTable.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Values of repeated tetration" border="0" height="227" src="http://2.bp.blogspot.com/-Xs24vrw9x3g/VfuKFhmzX5I/AAAAAAAADrs/IfCmlebhWjU/s320/PentationTable.png" title="Hyperoperation pentation table" width="320" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-fftHCHmAjhc/VfuJgYNFqiI/AAAAAAAADrY/Le31DE6naYE/s1600/HexationTable.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Values of repeated pentation" border="0" height="233" src="http://3.bp.blogspot.com/-fftHCHmAjhc/VfuJgYNFqiI/AAAAAAAADrY/Le31DE6naYE/s320/HexationTable.png" title="Hyperoperation hexation table" width="320" /></a></div><br />From this point forward, the hyperoperation tables look passingly similar in this limited view. You have some fixed values in the first two rows and columns; the 2-by-2 result is eternally 4; and everything other than that is so astronomically huge that you can't even usefully write it in scientific notation. 
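The small tetration entries in the tables can also be spot-checked by direct iteration, without the full recursive machinery (a sketch; `tetration` is my name for \(H_4\)):

```python
def tetration(a, b):
    """a # b (H_4): iterate exponentiation b times, starting from
    the base case H_4(a, 0) = 1, building the right-associated tower."""
    result = 1
    for _ in range(b):
        result = a ** result
    return result
```

Note that Python's convention `0 ** 0 == 1` makes the fixed first-row pattern (0 for odd \(b\), 1 for even \(b\)) fall out automatically, and `tetration(2, 2) == 4` confirms the eternal 2-by-2 entry.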
Here are some identities suggested above that we can prove pretty easily for all hyperoperations \(n > 3\):<br /><ol><li>\(H_n(a, 0) = 1\) (by definition)</li><li>\(H_n(a, 1) = a\)</li><li>\(H_n(0, b) = \) 0 if b odd, 1 if b even</li><li>\(H_n(1, b) = 1\)</li><li>\(H_n(2, 2) = 4\)</li></ol>One passingly interesting question is how many of these master identities hold true in the lower operations (n = 1 to 3; addition, multiplication, and exponentiation); in short, each step further down the hierarchy loses more of these identities, as summarized here:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-ZblrA9ngVro/VfuOI8Ud-EI/AAAAAAAADsA/XIc8ToRiYks/s1600/HyperoperationIdentities.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="Key identities are lost in lower operations" border="0" src="http://2.bp.blogspot.com/-ZblrA9ngVro/VfuOI8Ud-EI/AAAAAAAADsA/XIc8ToRiYks/s1600/HyperoperationIdentities.png" title="Identities in lower operations" /></a></div><br />Now, to connect up to my post last week, recall the basic properties of real numbers taken as axiomatic at the start of most algebra and analysis classes. Addition and multiplication (n = 1 and 2) are <b>commutative</b> and <b>associative</b>; but exponents are not, and neither are any of the higher operations. <br /><br />Finally consider the general case of <b>distribution</b>, what in my algebra classes I summarize as the "General Distribution Rule" (<a href="http://www.angrymath.com/2012/07/power-rules.html">Principle #2 here</a>). 
Or perhaps based on last week's observation I might suggest it could be better phrased as "collecting terms of the next higher operation", like \(ac + bc = (a+b)c\) and \(a^c \cdot b^c = (a \cdot b)^c\), or in the general hyperoperational form: <br /><br /><div style="text-align: center;">\(H_n(H_{n+1}(a, c), H_{n+1}(b, c)) = H_{n+1}(H_n(a, b), c)\)</div><br />Well, just like commutativity and associativity, distribution in this general form also holds for n = 1 and 2, but <b>fails for higher operations</b>. Here's the first counterexample, using \(a \uparrow b\) for exponents (\(H_3\)), and \(a \# b\) for tetration (\(H_4\)):<br /><br /><div style="text-align: center;">\((2\#2)\uparrow (0\#2) = 4 \uparrow 1 = 4\), but</div><div style="text-align: center;">\((2 \uparrow 0)\#2 = 1 \# 2 = 1\). </div><br />Likewise, what I call the "Fundamental Rules of Exponents" (Principle #1 above, <a href="http://www.angrymath.com/2012/08/fundamental-rule-of-exponents.html">or also here</a>) works only for levels \(n \le 3\), and fails to be meaningful at higher levels of hyperoperation. <br /><br />Deltahttp://www.blogger.com/profile/00705402326320853684noreply@blogger.com0tag:blogger.com,1999:blog-7718462793516968883.post-88490026909415087592015-09-28T05:00:00.000-04:002015-09-28T10:31:27.146-04:00Why Is Distribution Prioritized Over Combining?So I've come up with this question that's been bothering me for weeks, and I've been searching and asking everyone and everywhere that I can. I suspect that it may have no answer. The question is this:<br /><blockquote class="tr_bq"><i>Consider the properties of real numbers that we take for granted at the start of an algebra or analysis class (commutativity, association, and distribution of multiplying over addition). Granted that the last one, distribution (the transformation \(a(b+c) = ab + ac\)), is effectively equivalent to what we might call "combining like terms" (the transformation \(ax + bx = (a+b)x\)). 
It seems like the latter is more fundamental and easier to intuit as an axiom, since it resembles simple addition of units (e.g., 3 feet + 5 feet = 8 feet). So historically and/or pedagogically, what was the reason for choosing the name and order we have ("distribution", \(a(b+c)=ab+ac\)), instead of the other option ("combining", \(ax+bx = (a+b)x\)) for the starting axiom?</i></blockquote>I suspect now that there simply isn't any reason that we can document. Some expansion on the problem:<br /><br /><a href="http://ocw.mit.edu/courses/mechanical-engineering/2-25-advanced-fluid-mechanics-fall-2013/dimensional-analysis/Rayleigh_similitude_1915_.pdf">In dimensional analysis, some call the idea of only adding or comparing like units the "Great Principle of Similitude".</a> This provides some of my motivation for wishing that we would start with this ("combining") and then derive distribution (using commutativity a few times). Note that this phrase is in many places erroneously attributed to Newton; in truth the earliest documented usage of the phrase is by Rayleigh in a letter to <i>Nature</i> (No. 2368, Vol. 95; March 18, 1915). I could probably write a whole post just on the hunt for this quote. Big thanks to Juan Santiago, who teaches a class by that name at Stanford (<a href="http://explorecourses.stanford.edu/search?view=catalog&academicYear=&page=0&q=ME&filter-departmentcode-ME=on&filter-coursestatus-Active=on&filter-term-Autumn=on">link</a>), for helping me track down the article. <br /><br /><a href="http://mathforum.org/library/drmath/view/52599.html">The Math Forum at Drexel discusses some history of the names of the basic properties.</a> The best that Doctor Peterson can track down is that terms such as "distribution" were first used in the late 1700's to 1800's (starting in French in a memoir by Francois Joseph Servois). No commentary on a <i>reason</i> for why this was picked over alternative formulations. 
But perhaps the fact that the original discussion was in terms of functions (not binary operators) provides a clue. (For the full French text, see <a href="http://jeff560.tripod.com/c.html">here</a> and search "commutative and distributive"). <br /><br /><a href="http://math.stackexchange.com/questions/1417856/why-is-distribution-prioritized-over-combining">Here's me asking the question at the StackExchange Mathematics site.</a> Unfortunately, most commentators considered it uninteresting. When it got no responses, I cross-posted to the Mathematics Educators site -- which is apparently a huge <i>faux pas</i>, and immediately got it down-moderated into oblivion. The only relevant answer to date was from Benjamin Dickman, who pointed to a very nice quote from Euclid: when he states a similar property in geometric terms (the area of a rectangle versus the summed areas of its slices), it happens to be in the same order as we present the distribution property. But still no word on any reason <i>why</i> it should be in that order and not the reverse. <br /><br />Observations from a few textbooks that I have lying around:<br /><ul><li>Rietz and Crathorne, <i>Introductory College Algebra</i> (1933). Article 4 shows combining like terms, and asserts that's justified by the associative law (which is nonsensical). The distributive property isn't presented until later, in Article 8.</li><li>Martin-Gay, <i>Prealgebra & Introductory Algebra</i>. In the purely numerical prealgebra section, this first shows up as distribution among numbers (Sec 1.5). But the first time it appears in the algebra section with variables it is in fact written and used for combining like terms (Sec 3.1: \(ac + bc = (a+b)c\), although still called the "distributive property"). Combining like terms is actually done even earlier than that on an intuitive basis (see Sec. 2.6, Example 4). 
Only later is the property presented and used to remove parentheses from variable expressions.</li><li>Bittinger's <i>Intermediate Algebra</i> shows standard distribution, followed immediately by use for combining like terms. Sullivan's <i>Algebra & Trigonometry</i> does the same. </li></ul>So my point with those sources is that even though distribution is usually presented in a removing-parentheses format, in practice many textbook authors find themselves unable to escape the need to use combining like terms at some earlier point in their presentation (Rietz and Crathorne, Martin-Gay). This observation bolsters my growing instinct that it would be more intuitive to present the property in that format in the first place (as Martin-Gay does, the first time it appears with variables), and then derive what we call distribution from that. <br /><br />Another thought is that while you can point to distribution as justifying the standard long-multiplication process (across decimal place value), the interior additions are implied and not explicit, and so they don't really serve to develop intuition in the same way that simple unit addition does. <br /><br />Therefore, I find myself fantasizing about the following. Write a slightly nonstandard algebra textbook that starts by assuming commutativity, association, and the combining-like-terms property (and shortly thereafter derives the distribution property). Perhaps for a better name it could be called "collection of like multiplications inside addition", or something like that.<br /><br />Do you think this would be a better set of axioms for a basic algebra class? Can you think of a solid historical or pedagogical reason why the name and presentation were not the other way around, like this? 
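For what it's worth, the derivation mentioned above -- the combining axiom plus a few applications of commutativity giving distribution -- takes only three lines:

```latex
% Assume commutativity of addition and multiplication, plus the
% combining axiom ax + bx = (a+b)x. Then distribution follows:
\begin{align*}
ab + ac &= ba + ca   && \text{commutativity of multiplication (twice)} \\
        &= (b + c)a  && \text{combining like terms, with } x = a \\
        &= a(b + c)  && \text{commutativity of multiplication}
\end{align*}
```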
Likely some more on this later.<br /><br />Deltahttp://www.blogger.com/profile/00705402326320853684noreply@blogger.com4tag:blogger.com,1999:blog-7718462793516968883.post-29545847217629596542015-09-21T05:00:00.000-04:002015-09-21T23:52:31.261-04:00Rational Numbers and Randomized Digits<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-8nGbt4i0wZ4/Ve3Md4bRJ4I/AAAAAAAADqk/e4FMYyd3pNM/s1600/d10.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="200" src="http://3.bp.blogspot.com/-8nGbt4i0wZ4/Ve3Md4bRJ4I/AAAAAAAADqk/e4FMYyd3pNM/s200/d10.png" width="200" /></a></div>Here's a quick thought experiment to develop intuition about the cardinality of rational versus irrational decimal numbers. We know that any rational number (a/b with integer a, b and b ≠ 0) has a decimal expansion that either terminates or repeats (and terminating is itself equivalent to ending with a repeating block of all 0's).<br /><br />Consider randomizing decimal digits in an infinite string (say, by using a standard d10 from a roleplaying game, shown above). How likely does it seem that at any point you'll start rolling repeated 0's, and nothing but 0's, until the end of time? It's obviously diminishingly unlikely, so effectively impossible that you'll roll a terminating decimal. Alternatively, how probable does it seem that you'll roll some particular block of digits, and then repeat them in exactly the same order, and keep doing so without fail an infinite number of times? Again, it seems effectively impossible.<br /><br />So this intuitively shows that if you pick any real number "at random" (in this case, generating random decimal digits one at a time), it's effectively certain that you'll produce an irrational number. The proportion of rational numbers can be seen to be practically negligible compared to the preponderance of irrationals. 
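(For contrast with the randomized-digit intuition: the reason every rational <i>does</i> terminate or repeat is a finite pigeonhole argument -- long division by b can only ever produce b distinct remainders, so some remainder must recur. A small sketch of my own, not code from this post:)

```python
def decimal_expansion(a, b):
    """Digits of a/b (with 0 <= a < b) by long division, plus the index
    where the expansion starts repeating (None if it terminates).
    Only b distinct remainders exist, so a repeat is forced within b steps."""
    digits, seen, r = [], {}, a
    while r and r not in seen:
        seen[r] = len(digits)   # remember where this remainder occurred
        r *= 10
        digits.append(r // b)   # next decimal digit
        r %= b
    return digits, seen.get(r)  # None: remainder hit 0, i.e. terminated

# 1/7 = 0.(142857): period-6 block repeating from the first digit.
print(decimal_expansion(1, 7))
# 3/8 = 0.375: terminates (equivalently, ends in repeating 0's).
print(decimal_expansion(3, 8))
```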
<br /><br /><br />Deltahttp://www.blogger.com/profile/00705402326320853684noreply@blogger.com24tag:blogger.com,1999:blog-7718462793516968883.post-31304500464992126692015-09-14T05:00:00.000-04:002015-09-14T05:00:01.013-04:00Algebra for CryptographyCryptography researcher Victor Shoup recently gave a talk at the Simons Institute at Berkeley. Richard Lipton quotes him in one of his interesting observations about cryptography:<br /><blockquote class="tr_bq"><span style="font-size: large;"><i><span style="color: #0066cc;"><span style="color: #000000;">He also made another point: For the basic type of systems under discussion, he averred that the mathematics needed to describe and understand them was essentially high school algebra. Or as he said, “at least high school algebra outside the US.” </span></span></i></span></blockquote><a href="https://rjlipton.wordpress.com/2015/08/14/cryptography-and-quicksand/">Quoted here.</a>Deltahttp://www.blogger.com/profile/00705402326320853684noreply@blogger.com0tag:blogger.com,1999:blog-7718462793516968883.post-75557101157202718732015-09-07T05:00:00.000-04:002015-09-07T05:00:09.099-04:00The MOOC Revolution that Wasn'tThree years ago I wrote a review of "Udacity Statistics 101" that went semi-viral, finding the MOOC course to be slapdash, unplanned, and in many cases pure nonsense (<a href="http://www.angrymath.com/2012/09/udacity-statistics-101.html">link</a>). I wound up personally corresponding with Sebastian Thrun (Stanford professor, founder of Udacity, head of Google's auto-car project) over it, and came away super skeptical of his work. 
Today here's a fantastic article about the fallen hopes for MOOCs and Thrun's Udacity in particular -- highly recommended; I'm jealous that I didn't write this.<br /><blockquote class="tr_bq"><i>Just a few short years after promising higher education for anyone with an Internet connection, MOOCs have scaled back their ambitions, content to become job training for the tech sector and for students who already have college degrees... <br /><br />“In 50 years,” Thrun told Wired, “there will be only 10 institutions in the world delivering higher education and Udacity has a shot at being one of them.”<br /><br />Three years later, Thrun and the other MOOC startup founders are now telling a different story. The latest tagline used by Thrun to describe his company: “Uber for Education.”</i></blockquote>I want to quote the whole thing here; probably best that you just go and read it. Big kudos to Audrey Watters for writing this (and tip to Cathy O'Neil for sharing a link).<br /><br /><div style="text-align: center;"><span style="font-size: large;"><a href="http://kernelmag.dailydot.com/issue-sections/headline-story/14046/mooc-revolution-uber-for-education/">The MOOC Revolution that Wasn't</a></span></div><br />Deltahttp://www.blogger.com/profile/00705402326320853684noreply@blogger.com0tag:blogger.com,1999:blog-7718462793516968883.post-75302607540097401912015-08-24T05:00:00.000-04:002015-08-24T05:00:03.188-04:00On Registered Clinical TrialsA new PLoS ONE study looks at the effect of mandatory pre-registration of medical study methods and outcome measures, starting in 2000. Major findings:<br /><ul><li>Studies finding positive effects fell from 57% prior to the registry to just 8% afterward.</li><li>"...focused on human randomized controlled trials that were funded by the US National Heart, Lung, and Blood Institute (NHLBI) [and so required advanced registration by a 1997 U.S. law]. 
The authors conclude that registration of trials seemed to be the dominant driver of the drastic change in study results."</li><li>"Steven Novella of Yale University in New Haven, Connecticut, called the study 'encouraging' but also 'a bit frightening' because it casts doubt on previous positive results...”</li><li>"Many online observers applauded the evident power of registration and transparency, including Novella, who wrote on his blog that all research involving humans should be registered before any data are collected. However, he says, this means that at least half of older, published clinical trials could be false positives. 'Loose scientific methods are leading to a massive false positive bias in the literature,' he writes."</li></ul><a href="http://www.nature.com/news/registered-clinical-trials-make-positive-findings-vanish-1.18181">Reported in Nature here.</a><br /><br />Deltahttp://www.blogger.com/profile/00705402326320853684noreply@blogger.com0tag:blogger.com,1999:blog-7718462793516968883.post-46943109278878198942015-08-16T05:00:00.000-04:002015-08-16T05:00:01.053-04:00Is Cohabitation Good for You?Last week, Ars Technica (and I'm sure other news sites) posted an article on a large-scale survey of health outcomes in Britain, under the headline, "Good news for unmarried couples — cohabitation is good for you" (subtitle: "Married partners tend to be healthy, but living with someone works just as well"). <a href="http://arstechnica.com/science/2015/08/unmarried-couples-get-health-benefits-too/">Link.</a><br /><br /><br />I'm actually hyper-critical about people who sling around the phrase "correlation does not imply causation" too much in improper cases, but here's a golden example where it does apply; the headline "cohabitation is good for you", is totally unwarranted. Now, the findings do say that married & cohabiting people are healthier than people who live alone. 
But this could be either X causes Y, or Y causes X, or other more complicated interactions. One hypothesis is that "cohabitation is good for you [by improving health]"; another hypothesis is that "being healthy is good for your prospects of getting a partner", i.e., healthy people make for more attractive marriage/cohabitation partners. If you think about it, I'd say that the latter is actually the more common-sense direction of the causation here.<br /><div class="commentBody"><div id="comment_body_221972"><br />How could the direction of this effect be formally disentangled? Well, you could be on the lookout for a "natural experiment" where someone who did manage to get married/cohabited breaks up or gets divorced, and see if their health degrades during the later period in which they lack a partner. Of course, the researchers here were smart enough to do exactly that, and an entire paragraph of the Ars Technica article is in fact devoted to these findings:<br /><br />"The study found that changes in status had no obvious impact—the transitions from/to marriage and nonmarital cohabitation did not have a detrimental effect on health. There wasn’t an obvious difference in these biomarkers when participants divorced and then remarried or cohabitated; they looked the same as participants who remained married. For men who divorced in their late 30s and didn’t remarry, the risk of metabolic syndromes in midlife was reduced."<br /><br />In other words, for anyone in the category of at least being healthy and attractive enough to get married/cohabited once, <b>being married or cohabited made no difference to their health</b>. 
Which to my eye is overwhelming evidence that the causation is in the other direction, i.e., these headlines of "cohabitation is good for you" are flat-out wrong.<br /><br />Might be a good example to include in my fall statistics course.<br /><br /></div></div>Deltahttp://www.blogger.com/profile/00705402326320853684noreply@blogger.com2tag:blogger.com,1999:blog-7718462793516968883.post-11383302002478283052015-07-20T05:00:00.000-04:002015-07-20T10:49:13.782-04:00The Difference a Teacher MakesI had an interesting natural experiment this past spring semester: I was teaching two remedial algebra courses, one in the afternoon, and one in the evening. Same calendar days, same class sizes, identical lectures from day to day, exact same tests, exact same numbers taking the final exam. In one class, only 23% of the registrants passed the final exam, while in the other class 60% passed. (Median scores on the final were 48% in one class and 80% in the other.)<br /><br />This got me to wondering: How much difference does the teacher make in these classes? And the honest answer is: not very much. To be humble about it, I could do everything humanly possible both inside and outside the classroom as a teacher, work at maximal effort all the time (and that is generally my goal), and have it make very little difference in the overall classroom result. The example here of enormous variation between two sections, identically treated by me as an instructor, really highlights this fact.<br /><br />On this point, I found a 2013 paper from ETS by Edward H. Haertel -- principally about the unreliability of teacher VAM scores -- that summarizes several studies as finding that the difference in test scores attributable to teacher proficiency is only about 10% (see p. 5). 
That actually seems about right based on my recent experiences.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-b5VJTm9MbCg/Va0JxYpgwNI/AAAAAAAADnk/VRNNVW4Tel8/s1600/TeacherInfluence.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="223" src="http://2.bp.blogspot.com/-b5VJTm9MbCg/Va0JxYpgwNI/AAAAAAAADnk/VRNNVW4Tel8/s320/TeacherInfluence.png" width="320" /></a></div><br /><a href="https://www.ets.org/s/pdf/23497_Angoff%20Report-web.pdf">https://www.ets.org/s/pdf/23497_Angoff%20Report-web.pdf</a><br /><br /><br />Deltahttp://www.blogger.com/profile/00705402326320853684noreply@blogger.com1tag:blogger.com,1999:blog-7718462793516968883.post-41948188832628502602015-06-08T05:00:00.000-04:002015-07-20T10:33:32.673-04:00Why Technology Won't Fix SchoolsKentaro Toyama is a professor at U. Michigan, a fellow at MIT, and a former researcher for Microsoft. He's just written a book titled "Geek Heresy: Rescuing Social Change from the Cult of Technology" (although I'd quibble with the title in one way: practically all geeks I know consider the following to be obvious and common-sense). He writes:<br /><blockquote class="tr_bq"><i>But no matter how good the design, and despite rigorous tests of impact, I have never seen technology systematically overcome the socio-economic divides that exist in education. Children who are behind need high-quality adult guidance more than anything else. Many people believe that technology “levels the playing field” of learning, but what I’ve discovered is that it does no such thing.</i></blockquote><br />And, oh, how much do I agree with the following!: <br /><blockquote class="tr_bq"><i>... what I’ve arrived at is something I think of as technology’s Law of Amplification: Technology’s primary effect is to amplify human forces. 
In education, technologies amplify whatever pedagogical capacity is already there.</i></blockquote><br /> More at the Washington Post (<a href="http://www.washingtonpost.com/posteverything/wp/2015/06/04/technology-wont-fix-americas-neediest-schools-it-makes-bad-education-worse/">link</a>). <br /><br />Deltahttp://www.blogger.com/profile/00705402326320853684noreply@blogger.com0tag:blogger.com,1999:blog-7718462793516968883.post-45252710616573927212015-06-01T05:00:00.000-04:002015-06-01T05:00:07.285-04:00Noam Chomsky on Corporate CollegesNoam Chomsky speaks on issues of non-teaching administrators taking over America's colleges, the use of part-time and non-governing faculty, and related issues:<br /><blockquote class="tr_bq"><i>The university is probably the social institution in our society that comes closest to democratic worker control. Within a department, for example, it’s pretty normal for at least the tenured faculty to be able to determine a substantial amount of what their work is like: what they’re going to teach, when they’re going to teach, what the curriculum will be. And most of the decisions about the actual work that the faculty is doing are pretty much under tenured faculty control.</i><br /><br /><i>Now, of course, there is a higher level of administrators that you can’t overrule or control. The faculty can recommend somebody for tenure, let’s say, and be turned down by the deans, or the president, or even the trustees or legislators. It doesn’t happen all that often, but it can happen and it does. And that’s always a part of the background structure, which, although it always existed, was much less of a problem in the days when the administration was drawn from the faculty and in principle recallable.</i><br /><br /><i>Under representative systems, you have to have someone doing administrative work, but they should be recallable at some point under the authority of the people they administer. That’s less and less true. 
There are more and more professional administrators, layer after layer of them, with more and more positions being taken remote from the faculty controls.</i></blockquote> <a href="http://www.salon.com/2014/10/10/noam_chomsky_corporate_business_models_are_hurting_american_universities_partner/?utm_source=facebook&utm_medium=socialflow">More at Salon.com. </a>Deltahttp://www.blogger.com/profile/00705402326320853684noreply@blogger.com0tag:blogger.com,1999:blog-7718462793516968883.post-29224474038061392962015-05-25T05:00:00.000-04:002015-05-25T05:00:00.980-04:00Online Courses Fail at Community CollegesMore evidence for one of the most uniformly-verified findings I've seen in education: online courses strike out for community college students. From a paper by researchers at U.C.-Davis, presented at the American Educational Research Association's conference in April: <br /><blockquote class="tr_bq"><i>“In every subject, students are doing better face-to-face,” said Cassandra Hart, one of the paper’s authors. “Other studies have found the same thing. There’s a strong body of evidence building up that students are not doing quite as well in online courses, at least as the courses are being designed now in the community college sector.”</i></blockquote><a href="http://www.alternet.org/education/studies-find-online-courses-not-working-well-community-colleges">More at Alternet.org.</a><br /><br />Deltahttp://www.blogger.com/profile/00705402326320853684noreply@blogger.com0tag:blogger.com,1999:blog-7718462793516968883.post-20252328554781063422015-05-18T05:00:00.000-04:002015-05-18T05:00:02.767-04:00Teaching Evolution in KentuckyA professor discusses teaching evolution in Kentucky. "Every time a student stomps out of my auditorium slamming the door on the way, I can’t help but question my abilities." 
(<a href="https://orionmagazine.org/article/defending-darwin/">Link.</a>)<br /><br />(Thanks to <a href="http://jonathanscottmiller.blogspot.com/">Jonathan Scott Miller</a> for the link.)<br /><br /><br />Deltahttp://www.blogger.com/profile/00705402326320853684noreply@blogger.com0tag:blogger.com,1999:blog-7718462793516968883.post-35686927681408766562015-05-11T05:00:00.000-04:002015-07-20T10:34:29.372-04:00On DefinitionsFrom the MathBabe blog by poster EllipticCurve, <a href="http://mathbabe.org/2015/04/18/aunt-pythias-and-uncle-aristippus-advice/#comments">here</a>:<br /><blockquote class="tr_bq"><span style="font-size: large;"><i>Mathematical definitions mean nothing until you actually use them in anger, i.e. to solve a problem...</i></span></blockquote>Deltahttp://www.blogger.com/profile/00705402326320853684noreply@blogger.com0tag:blogger.com,1999:blog-7718462793516968883.post-90441642065128734322015-05-04T05:00:00.000-04:002015-05-04T05:00:07.706-04:00Quad Partners Buys Inside Higher EdFrom Education News in January:<br /><blockquote class="tr_bq"><em>Although there has been no public announcement made, Quad Partners, a New York private equity firm devoted to the for-profit college industry, recently gained a controlling stake in the education trade publication Inside Higher Ed (IHE). The publication routinely reports on for-profit colleges and surrounding policy disputes, and the publication is now listed among investments on the Quad Partners website.</em></blockquote><a href="http://www.educationnews.org/higher-education/quad-partners-acquires-stake-in-inside-higher-ed/">Read more here. </a><br /><br /><br />Deltahttp://www.blogger.com/profile/00705402326320853684noreply@blogger.com0tag:blogger.com,1999:blog-7718462793516968883.post-62550240258819295482015-04-27T05:00:00.000-04:002015-07-20T10:35:32.893-04:00ETS on MillennialsA fascinating report on international education and job-ready skills from the Educational Testing Service. 
Particularly so, as it almost directly impinges on committee work that I've been doing lately. Core findings:<br /><ul><li>While U.S. millennials have far higher degree certifications than prior generations, their literacy, numeracy, and use-of-technology skills are demonstrably lower. </li><li>U.S. millennials rank 16th of 22 countries in literacy. They are 20th of 22 in numeracy. They are tied for last in technology-based problem solving.</li><li>Numeracy for U.S. millennials has been dropping across all percentiles since at least 2003.</li></ul><br /><a href="http://www.ets.org/s/research/30079/index.html">See the online report here.</a><br /><br /><br />Deltahttp://www.blogger.com/profile/00705402326320853684noreply@blogger.com0tag:blogger.com,1999:blog-7718462793516968883.post-58573101946618980572015-04-20T05:00:00.000-04:002015-04-20T05:00:05.769-04:00Causes of College Cost InflationFrom testimony at Ohio State (<a href="http://academeblog.org/2015/03/05/ohio-conference-president-provides-senate-testimony-on-the-decline-in-state-support-administrate-bloat-the-cost-of-intercollegiate-athletics-and-faculty-workload/">link</a>):<br /><ol><li>Decreased state funding</li><li>Administrative bloat</li><li>Cost of athletics</li></ol>(Thanks to<a href="http://jonathanscottmiller.blogspot.com/"> Jonathan Scott Miller</a> for the link.) <br /><br /><br />Deltahttp://www.blogger.com/profile/00705402326320853684noreply@blogger.com0tag:blogger.com,1999:blog-7718462793516968883.post-90462788584779744572015-04-13T05:00:00.000-04:002015-04-13T05:00:07.810-04:00Pupils Prefer PaperYou may have already seen this article on the work of Naomi S. Baron at American University: her studies show that for textbook-style reading and studying, young college students still prefer paper books over digital options. Why? Because of reading. <br /><blockquote class="tr_bq"><i>In years of surveys, Baron asked students what they liked least about reading in print. 
Her favorite response: “It takes me longer because I read more carefully.”...</i><br /><br /><i>Another significant problem, especially for college students, is distraction. The lives of millennials are increasingly lived on screens. In her surveys, Baron writes that she found “jaw-dropping” results to the question of whether students were more likely to multitask in hard copy (1 percent) vs. reading on-screen (90 percent). </i></blockquote><br /><a href="http://www.washingtonpost.com/local/why-digital-natives-prefer-reading-in-print-yes-you-read-that-right/2015/02/22/8596ca86-b871-11e4-9423-f3d0a1ec335c_story.html">Read the article at the Washington Post.</a><br /><br /><br />Deltahttp://www.blogger.com/profile/00705402326320853684noreply@blogger.com2tag:blogger.com,1999:blog-7718462793516968883.post-15191312758369228582015-04-06T05:00:00.000-04:002015-04-06T11:48:52.434-04:00Academically Adrift AgainOne more time, as we've pointed out here before (<a href="http://www.angrymath.com/2014/12/academically-adrift.html">link</a>), in this case from Jonathan Wai of Duke University: "<span class="anno-span">the rank order of cognitive skills of various majors and degree holders has remained remarkably constant for the last seven decades", with Education majors perennially the very lowest of performers (closely followed by Business and the Social Sciences). </span><br /><span class="anno-span"><br /></span><a href="http://qz.com/334926/your-college-major-is-a-pretty-good-indication-of-how-smart-you-are/"><span class="anno-span">See Wai's article and charts here.</span></a><br /><span class="anno-span"><br /></span><span class="anno-span"><br /></span>Deltahttp://www.blogger.com/profile/00705402326320853684noreply@blogger.com0