Tuesday, January 22, 2019

How Wrong Should You Be?







What this makes clear is that gaming our testing system can produce a form of imbalance.  We are all rather familiar with this, and it needs to be addressed somehow.  The best approach may be a set program that a student is expected to work through twice in order to bring his skill up to that 85 percent level.
 
For STEM students, the default has been first-year calculus.  Retaking it the following year would be a good idea, as it allows the focus to shift to the more difficult material.  Properly mastering analysis (delta-epsilon proofs) at this level is an excellent idea as well.
 
What matters in all areas of study is mastery.  The universities have failed terribly at much of that.  Cs and Ds get degrees, often in areas that are memory-dependent.


How Wrong Should You Be?



If you always get 100 percent on your tests, they aren’t hard enough. If you never get above 50 percent, you’re probably in the wrong major.


Image credit: Roy Mehta. Source: https://blogs.scientificamerican.com/observations/how-wrong-should-you-be/


My best friend in college was a straight-A student, an English major. In part he got all A’s because he is whip smart—his essays were systematically better than everyone else’s. But the other reason was that he refused to enroll in a course unless he was certain he would ace it. Consequently, he never really challenged himself to try something beyond his comfort zone. I, on the other hand, was not a straight-A student. My first semester I took atomic physics with Professor Delroy Baugh, self-proclaimed “Laser Guy.” I’d never taken a physics course before in my life, and as a reward for my willingness to transcend my comfort zone I received a D.


Somewhere between the two of us lies a sweet spot: if you only ever get 100 percent on your tests, they aren’t hard enough. If you never get above 50 percent, you’re probably in the wrong major. The trick is to be right enough, but not so right that you never allow yourself the opportunity to be wrong.


So, how wrong should you be?


An article from a team led by University of Arizona cognitive scientist Robert Wilson provides an answer: 15 percent. The researchers argue that a test is optimally difficult if the test-taker gets 85 percent of the questions right, with 15 percent incorrect. Any more than that and the test is too easy; any less and it is too hard. They call it “The Eighty-Five Percent Rule for Optimal Learning.”


Wilson and his colleagues derive the number from experiments in machine learning. Under loose assumptions, they show that the optimal error rate for training a broad class of deep learning algorithms is 15 percent. The fastest learning progress occurs when the error rate hits this sweet spot. They show that this number is also in line with previous work on learning in humans and animals.
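The full derivation is in the paper, but here is a rough sketch of where the number comes from, under the simplifying assumption of Gaussian decision noise (the paper's analysis covers a broader class of gradient-based learners). Let $\Delta$ be the difficulty of the task and $\beta$ the learner's precision, so the error rate is $\mathrm{ER} = \Phi(-\Delta\beta)$. Gradient-based learning improves precision at a rate proportional to how sensitive the error is to it:

$$\frac{d\beta}{dt} \;\propto\; -\frac{\partial\,\mathrm{ER}}{\partial\beta} \;=\; \Delta\,\phi(\Delta\beta).$$

Writing $x = \Delta\beta$ and holding the learner's current precision fixed, the learning speed scales with $x\,\phi(x) = x\,e^{-x^2/2}/\sqrt{2\pi}$, which is maximized at $x = 1$. The optimal error rate is therefore $\mathrm{ER}^* = \Phi(-1) \approx 0.159$, which is where the 15 percent comes from.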


The implications of the 85 percent rule in the classroom are straightforward. If you’re a teacher, your tests should be difficult enough that the average score is 85 percent. If you’re a student, the optimal level of challenge is about a B or a B+ average. An A might look nice on your transcript, but you stood to learn more from a class that was harder. Outside the classroom, the implications of the 85 percent rule are similar. If you are learning a new language, say on Duolingo, then you should be getting about 15 percent of the answers wrong. Otherwise, you’re not being challenged at the level that produces consistent improvement.
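In practice, hitting that target means adjusting difficulty as the learner improves. Here is a minimal Python sketch of one way to do it, a simple staircase of my own construction (not Duolingo's actual algorithm; the learner model and all the numbers are illustrative):

import random

TARGET = 0.85   # desired fraction of answers correct
STEP = 0.01     # how far difficulty moves per adjustment
WINDOW = 20     # number of recent answers to average over

def update_difficulty(difficulty, recent_results):
    """Nudge difficulty up when the learner is above target, down when below."""
    if len(recent_results) < WINDOW:
        return difficulty
    accuracy = sum(recent_results[-WINDOW:]) / WINDOW
    if accuracy > TARGET:
        difficulty += STEP
    elif accuracy < TARGET:
        difficulty -= STEP
    return max(0.0, min(1.0, difficulty))

def simulate(trials=2000, skill=0.5):
    """Toy learner: chance of a correct answer falls as difficulty outpaces skill."""
    difficulty, results = 0.5, []
    for _ in range(trials):
        p_correct = max(0.0, min(1.0, 1.0 - (difficulty - skill)))
        results.append(1 if random.random() < p_correct else 0)
        difficulty = update_difficulty(difficulty, results)
        skill = min(1.0, skill + 0.0001)  # assume slow improvement with practice
    return difficulty, sum(results[-200:]) / 200

if __name__ == "__main__":
    final_difficulty, recent_accuracy = simulate()
    print(f"final difficulty {final_difficulty:.2f}, recent accuracy {recent_accuracy:.2f}")

The point of the staircase is that the error rate, not the difficulty itself, is the controlled quantity: as the learner's skill rises, the difficulty follows it upward to keep accuracy pinned near 85 percent.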


But the 85 percent rule only holds for one class of problems, where the point is to build up expertise over many trials. With other kinds of problems you really don’t want to get them wrong at all. For example, should you believe in God? Pascal wagered that you should, because if you don’t there’ll be hell to pay. Most people make up their minds one way about this issue and tend not to change their position too frequently. This presents a paradox: suppose 50 percent of people believe in God and 50 percent don’t. Half are on the side opposite the truth. Yet no matter what evidence you present them with, they won’t admit they’re wrong. In this case we see an error rate of zero. Why is no one ever wrong about their belief in God?


Here another answer comes from cognitive science, in a paper by Samuel Gershman of Harvard University titled “How to Never Be Wrong.” In the paper Gershman considers the problem of auxiliary hypotheses. The idea is that any given theory comes with a set of undisclosed assumptions, which can protect the core theory from being disproved. For example, the seven-day creation story of Genesis is at odds with the fossil record. So if you accept the fossil record, do you have to forfeit Genesis?

Nope. All you have to do is note that a “day” doesn’t have to be 24 hours—especially not if God hasn’t created the sun and moon yet. The definition of day is merely an auxiliary hypothesis of the core theory that God created the world. Such auxiliary hypotheses form a “protective belt” around the core theory, deflecting contrary evidence from pesky one-offs, such as a handful of really old rocks, while leaving the main argument unscathed.
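Gershman frames this in Bayesian terms, and a toy illustration in Python shows the mechanism (my own construction with made-up numbers, not the paper's actual model): give the core theory and its auxiliary hypothesis a joint prior, condition on evidence that clashes with the literal reading, and watch the auxiliary hypothesis absorb the blow while belief in the core theory barely moves.

# Joint hypotheses: (core theory true/false) x (auxiliary: a "day" is 24h or an epoch)
priors = {
    ("core", "24h"):   0.45,
    ("core", "epoch"): 0.45,
    ("no_core", "-"):  0.10,
}

# Assumed likelihood of observing the fossil record under each combination
likelihood = {
    ("core", "24h"):   0.01,  # very old rocks clash with literal 24-hour days
    ("core", "epoch"): 0.90,  # long "days" accommodate the evidence
    ("no_core", "-"):  0.90,
}

# Bayes' rule: posterior proportional to prior times likelihood
evidence_prob = sum(priors[h] * likelihood[h] for h in priors)
posterior = {h: priors[h] * likelihood[h] / evidence_prob for h in priors}

p_core_before = priors[("core", "24h")] + priors[("core", "epoch")]
p_core_after = posterior[("core", "24h")] + posterior[("core", "epoch")]
print(f"belief in core theory: {p_core_before:.2f} -> {p_core_after:.2f}")
print(f"belief in 24h day:     {priors[('core', '24h')]:.2f} -> "
      f"{posterior[('core', '24h')]:.2f}")

Running this, the contrary evidence is almost entirely spent revising the definition of a day (belief in the 24-hour reading collapses from 0.45 to about 0.01), while belief in the core theory drops only from 0.90 to about 0.82. That is the protective belt in action.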


But never being wrong isn’t an especially good thing. To the contrary, being wrong is important because it is the first step on the way to being right. If you’re never wrong, you never learn anything you didn’t already know. So whether you’re taking a test from the Laser Guy or reconsidering your slate of metaphysical tenets, getting a few answers wrong is like salting a meal: a little bit makes the whole thing better; just don’t take it too far.
