The Misconstrual of Research for Popular Consumption… And What to Do About It


What can hurricanes tell us about education research?  Quite a bit, it seems.

This is not the first time I’ve referenced an article by Jason Samenow, weather editor for the Washington Post, and I suspect it won’t be the last.  His recent commentary on the misinterpretation of hurricane predictions is eerily relevant to education researchers, who must also deal with the recasting of their findings by popular media outlets.

Samenow lists two concerns about the ways that meteorological research is distorted as it moves into general, nontechnical discourse:

  • the public is not presented with the full range of weather possibilities, just the eye-catching ones that involve “sexy model simulations” (in this case, hurricanes)
  • models that project more than five days into the future are fundamentally unsound, but this disclaimer is conspicuously absent from popular weather reports.

Sound familiar?  It should…

How many headlines have we read touting the newest silver bullet of education reform, which, upon further investigation, is not grounded in the quoted research?  How many times have we seen legitimate findings misquoted in support of insupportable claims?  How many times have we seen a reinterpretation of education research for the general public that is stripped of the necessary cautions and caveats?

A meteorologist makes predictions about the weather based on existing data, prior research, and theory; an education researcher makes predictions about students, teachers, and schools based on the same collection of knowledge.  Both scientists answer questions about which the lay-public cares deeply.  Both deal with a subject matter that is complicated and unpredictable.  Both use a set of analytic tools whose complexity far exceeds the public’s technical sophistication.

And both face the same challenges in ensuring the accurate dissemination of their findings.  So what’s a conscientious researcher to do if she wants her work correctly communicated to the public?

Samenow answers by sketching the role of a responsible meteorologist in this troubling dynamic of misinformation.  He discourages a campaign of damage control (i.e., publicly calling out those who publish distorted research), arguing that this is “a never-ending and unwinnable game of whack-a-mole.”  Instead, he urges scientists to “focus on educating their readers and viewers about the limitations of weather forecasts,” “discuss what is known and not known,” and “share good examples of colleagues doing this the right way.”

I’ll be honest: the firebrand in me is a little disappointed with Samenow’s modest, measured conclusion that “education is the only weapon we have in this fight against social media misinformation.”  In response to this misconstrual of the facts – especially when it’s intentional – heads need to roll, right?  At the intersection of education research and reform, amidst the rather severe notions of “accountability” that shape current policy, it seems to me that the stakes are just too high to tolerate reckless distortions of the truth.

But the longer I mull it over, the more I think maybe Samenow is right.  If we, as researchers, are in this for the long haul, if we really want our work to inform and educate society, then maybe a campaign of thoughtfulness, humility, and leading by example isn’t such a bad way to go.



4 thoughts on “The Misconstrual of Research for Popular Consumption… And What to Do About It”

  1. I agree with Samenow about educating people/readers/policymakers little by little about the complexity and uncertainties around the issue at hand, and I’m glad that you ended up siding with him. As with many complex issues in public policy, the problem is that we typically can’t ascertain whether someone is “recklessly distorting the truth” or merely interpreting certain pieces of information somewhat selectively, according to particular perspectives, models, questions, frameworks, etc. — which, in a way, we all do.

    “The Truth” tends to be quite the thorny territory, and to embark on some kind of “methodological crusade for truth” sounds… well, just like it sounds.

  2. Another recent example of the growing disconnect between the complexity of “scientific data” and the public’s perception of this complexity is DNA testing. Here’s a link to an excellent story on this published in the LA Times a number of years ago:

    You will recognize a lot of the same issues here. The big irony is that one of the authors then went on to gain fame as the writer of the infamous Value Added series of stories in the LA Times, which publicly “identified” what the paper termed “effective and ineffective” teachers — the most scandalous recent example of the misuse of “scientific” information that I know of.

  3. My favorite suggestion from Samenow is, “discuss what is known and not known.” Too often as a teacher, I feel I have to make decisions with limited research, using only my personal classroom experience. While relying on the experiences I have with my kids works well for adapting to their needs, it does not help me create structures and systems that could support more classrooms and students on a bigger level. That being said, as I peruse the interwebs for access to academic journals and reliable research, I too often run into two problems: the research on K-2 education and how our youngest learners learn is quite thin, and articles discussing research take it out of context.
    Sometimes it’s the researchers themselves who devote only one line, at the close of their arguments, to the limitations of their work and the work that still needs to be done. Perhaps they feel unable to admit that limitations exist and that they may not have all the answers. I think the public and researchers both need to shift the way they view the purpose of published information. It is to inform, not to give you all the answers or an “end-all, be-all” stance on a subject.
    If, perhaps, people felt safer admitting what they do not know or cannot answer, the curiosity of the public could lead to furthering the research and scope of many projects. I, for one, find my curiosity piqued when an article leaves me with more questions than answers, yet right now I don’t have the role or tools to develop and sustain a full-fledged research project. But I would fully support researchers who clearly state their intentions and limitations and engage in discussion with the public in a way that lets someone walk away making up their own mind about something rather than just mentally consuming it. Many times the media distorts the intent of researchers, and if more researchers did what Samenow suggests, it might just lead to a shift in the right direction.

    • Thanks for this, Adrienne. You hit the nail on the head with your two observations: (1) that the pickings are pretty slim when it comes to high-quality, K-2 education research and (2) that when research is “translated” from its original format (a study in a peer-reviewed, academic journal, for instance) to a more lay-public-friendly format, a lot is – shall we say – “lost in translation.”

      To your first point, there are probably lots of reasons why education researchers tend to study older students. One of the big ones, I suspect, is that it’s really, really expensive, time-consuming, and logistically difficult to study little kids in a meaningful way. Well, let me back up a step: it’s hard to study students of any age in a meaningful way. But through some sorta bizarre double-speak and groupthink, we’ve convinced ourselves that it’s okay to let standardized test scores stand in for “learning” in the case of older students. These scores are inexpensive and widely available for students in grades 3-12, and as a result, research questions tend to be oriented towards this age group.

      Thankfully (although the tides appear to be changing on this point), even the most out-of-touch, stodgy, tweed-wearing, university-dwelling, never-been-in-a-kindergarten-classroom academic realizes that the “data” you get from the standardized test of a 5-year-old is gonna be sketchy at best.

      The real shame here is that there are good ways to study the learning and development of students, particularly young ones. No, you can’t just give ‘em a test. But you can watch them. You can talk with them. You can talk with their teachers. You can talk with their parents. You can develop age-appropriate tasks that measure the particular thing you’re interested in. All of this is expensive, time-consuming, and logistically daunting, though… so it doesn’t get done.

      To your second point (about the “translation of research” for the general public), wow – there’s a dissertation-worthy topic. More to come… As ever, thanks for keeping the conversation going. Your voice is valuable and necessary; know that it is heard.

