Studying Students, Studying Snow


Last week, the Washington Post published a curious little article by Jason Samenow, the paper’s weather editor.  The article is a post-mortem of sorts, in which the author analyzes some of the mistakes he made in forecasting a recent area snowstorm.  It caught my eye (and is relevant to this blog) for three reasons.

First of all: humility.  The author begins by saying that he’s dissatisfied at having made an inaccurate prediction, one that turned out to have “consequences for people’s daily routines,” and then he sets out to identify the root cause of his mistake.  I think I find this so delightful because it’s rare for researchers of any persuasion (hard sciences, soft sciences, social sciences, you name it) to dissect their errors, especially in a public, humble, and voluntary fashion.  So mad props to you, Mr. Samenow.  Mad props.

The second interesting thing about this article is the author’s conclusion: “What I take home from this is that we have to be a little bit more skeptical of the models and take clues from what’s happening outside our windows.”  Correct me if I’m wrong, but I’m pretty sure what just happened was this: a white-coated, lab-dwelling, equation-lovin’ researcher told us to go outside and get a clue about the actual reality of the thing we’re studying.  Hmm… I wonder if there are any parallels to education research here.  Again, Mr. Samenow FTW!

The final reason this article stuck in my mind is a bit more substantive and, unfortunately, a bit more dire.  Meteorologists study patterns in the weather and generalize from those patterns.  They use these generalizations to make predictions that help people make decisions about their lives.  When a meteorologist’s predictions align or fail to align with how events actually play out, (1) it’s obvious to everyone (e.g., did the predicted six inches of snow actually accumulate, or is there only a light dusting on the ground?) and (2) the turn-around time for observing the incorrect prediction is a matter of hours or days.

Education researchers do the same thing (study patterns, generalize, make predictions), except instead of studying the weather, they study kids.  (I suspect you can see where I’m going with this.)  When an education researcher’s predictions align or fail to align with how events actually play out, (1) it’s often very hard to discern (e.g., did learning through a Writer’s Workshop model develop Ana’s ability to think metacognitively about her own writing?) and (2) the turn-around time for observing the incorrect prediction is sometimes a matter of months, more often a matter of years.

So what? you’re probably thinking… five paragraphs and all you’ve got is that kids are different from weather systems?  Yeah, that’s pretty much my point, but gimme three more sentences, okay?  The danger comes when we approach education research as if it were meteorology.  In the latter, the elements, relationships, and processes under study are conspicuous, and it’s very easy to confirm or disconfirm predictions.  In the former, where we seek to understand what goes on inside kids’ minds and hearts, this is very much not the case.

gt


One thought on “Studying Students, Studying Snow”

  1. Pingback: The Misconstrual of Research for Popular Consumption… And What to Do About It | The Teaching Diablogue
