Wednesday, March 24, 2010

When I recently flunked a test used to pick teachers for low-income schools, I cursed a system that values scores over motivation. A teacher, I reasoned, need not be cherry-picked up front; just let people into the system and screen them on performance afterward. But I changed my mind after reading Steven Pinker’s review of What the Dog Saw, by the New Yorker journalist Malcolm Gladwell. As Pinker writes pointedly:

Gladwell notes that I.Q. scores, teaching certificates and performance in college athletics are imperfect predictors of professional success. This sets up a “we” who is “used to dealing with prediction problems by going back and looking for better predictors.” Instead, Gladwell argues, “teaching should be open to anyone with a pulse and a college degree — and teachers should be judged after they have started their jobs, not before.”

But this “solution” misses the whole point of assessment, which is not clairvoyance but cost-effectiveness. To hire teachers indiscriminately and judge them on the job is an example of “going back and looking for better predictors”: the first year of a career is being used to predict the remainder. It’s simply the predictor that’s most expensive (in dollars and poorly taught students) along the accuracy-cost trade-off.

Subverting long-held “ideals of talent, intelligence and analytical prowess in favor of luck, opportunity, experience and intuition” has become a favorite pastime of populist writers like Gladwell, from whom, Pinker says, there is much to learn so long as they remain journalists, but of whom we should be wary when they don the hats of social scientists.

Pinker puts it brilliantly:

Improving the ability of your detection technology to discriminate signals from noise is always a good thing, because it lowers the chance you’ll mistake a target for a distractor or vice versa. But given the technology you have, there is an optimal threshold for a decision, which depends on the relative costs of missing a target and issuing a false alarm. By failing to identify this trade-off, Gladwell bamboozles his readers with pseudoparadoxes about the limitations of pictures and the downside of precise information.
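The trade-off Pinker describes can be made concrete with a toy sketch (my own illustrative numbers, not anything from the review): model the detector’s readings for noise and for a real target as unit-variance Gaussians centered at 0 and 2, then sweep a decision threshold and pick the one minimizing expected cost for different relative costs of a miss versus a false alarm.

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Toy unit-variance detector readings: noise centered at 0, signal at 2.
MU_NOISE, MU_SIGNAL = 0.0, 2.0

def expected_cost(t, c_miss, c_fa):
    miss = phi(t - MU_SIGNAL)               # P(signal reading falls below t)
    false_alarm = 1.0 - phi(t - MU_NOISE)   # P(noise reading exceeds t)
    return c_miss * miss + c_fa * false_alarm

thresholds = [i / 100 for i in range(-200, 401)]
best = {}
for c_miss, c_fa in [(1.0, 1.0), (10.0, 1.0)]:
    best[(c_miss, c_fa)] = min(thresholds,
                               key=lambda t: expected_cost(t, c_miss, c_fa))
    print(f"miss cost {c_miss:g}, false-alarm cost {c_fa:g}: "
          f"best threshold = {best[(c_miss, c_fa)]:.2f}")
# miss cost 1, false-alarm cost 1: best threshold = 1.00
# miss cost 10, false-alarm cost 1: best threshold = -0.15
```

Same detector, two different optimal thresholds: when a miss is ten times costlier than a false alarm, the threshold drops and many more blips get flagged. Better technology would pull the two distributions apart, but given any fixed technology, the threshold choice — and hence the cost trade-off — remains.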

I think the problem is generalization, which is a pain even in my own field of interest, psychology. How can research on trust conducted with American subjects (who come from a culture that, I assume, largely mistrusts neighbors) apply to, say, Indians (who hardly mistrust one another to begin with)? It still baffles me, and all the more so in popular books. Pinker again:

The problem with Gladwell’s generalizations about prediction is that he never zeroes in on the essence of a statistical problem and instead overinterprets some of its trappings. For example, in many cases of uncertainty, a decision maker has to act on an observation that may be either a signal from a target or noise from a distractor (a blip on a screen may be a missile or static; a blob on an X-ray may be a tumor or a harmless thickening).

The brilliant Nassim Taleb calls this the narrative fallacy: the past is a poor predictor of the future, and hindsight does less to increase our knowledge than to inflate our confidence in our hypotheses. We cannot reverse-engineer history, since we can hardly even explain what happened then.

So I’ve come to believe in the one alternative that is axiomatically precise and valid: mathematics. Here I come, and not about to disappear anytime soon!
