The Misconstrual of Research for Popular Consumption… And What to Do About It


What can hurricanes tell us about education research?  Quite a bit, it seems.

This is not the first time I’ve referenced an article by Jason Samenow, weather editor for the Washington Post, and I suspect it won’t be the last.  His recent commentary on the misinterpretation of hurricane predictions is eerily relevant to education researchers, who must also deal with the recasting of their findings by popular media outlets.

Samenow lists two concerns about the ways that meteorological research is distorted as it moves into general, nontechnical discourse:

  • the public is not presented with the full range of weather possibilities, just the eye-catching ones that involve “sexy model simulations” (in this case, hurricanes)
  • models that project more than five days into the future are fundamentally unsound, but this disclaimer is conspicuously absent from popular weather reports.

Sound familiar?  It should…

How many headlines have we read touting the newest silver bullet of education reform, which, upon further investigation, is not grounded in the quoted research?  How many times have we seen legitimate findings misquoted in support of insupportable claims?  How many times have we seen a reinterpretation of education research for the general public that is stripped of the necessary cautions and caveats?

A meteorologist makes predictions about the weather based on existing data, prior research, and theory; an education researcher makes predictions about students, teachers, and schools based on the same collection of knowledge.  Both scientists answer questions about which the lay-public cares deeply.  Both deal with a subject matter that is complicated and unpredictable.  Both use a set of analytic tools whose complexity far exceeds the public’s technical sophistication.

And both face the same challenges in ensuring the accurate dissemination of their findings.  So what’s a conscientious researcher to do if she wants her work correctly communicated to the public?

Samenow answers by sketching the role of a responsible meteorologist in this troubling dynamic of misinformation.  He discourages a campaign of damage control (i.e., publicly calling out those who publish distorted research), arguing that this is “a never-ending and unwinnable game of whack-a-mole.”  Instead, he urges scientists to “focus on educating their readers and viewers about the limitations of weather forecasts,” “discuss what is known and not known,” and “share good examples of colleagues doing this the right way.”

I’ll be honest: the firebrand in me is a little disappointed with Samenow’s modest, measured conclusion that “education is the only weapon we have in this fight against social media misinformation.”  In response to this misconstrual of the facts – especially when it’s intentional – heads need to roll, right?  At the intersection of education research and reform, amidst the rather severe notions of “accountability” that shape current policy, it seems to me that the stakes are just too high to tolerate reckless distortions of the truth.

But the longer I mull it over, the more I think maybe Samenow is right.  If we, as researchers, are in this for the long haul, if we really want our work to inform and educate society, then maybe a campaign of thoughtfulness, humility, and leading by example isn’t such a bad way to go.


Let’s do an experiment: Put 100 Kindergartners into one classroom…


There’s a classroom in Detroit with nearly 100 kindergartners. The class is housed in what used to be the school library, and it is led by three teachers, so the student-teacher ratio is not as astronomical as the headline suggests, but the idea has still raised concerns. Brenda Scott Academy is a chronically failing school that is now part of the Education Achievement Authority (EAA). In addition to combining classrooms, EAA schools have a longer school year, serve students breakfast, lunch, and dinner, and focus on a student-centered teaching approach.

Newspaper reports appear outraged: “School puts nearly 100 kindergartners in one class in a teaching experiment”

Or at least dubious: “Last February, administrators began what they thought would be a worthwhile teaching experiment: combining three classes of kindergartners into one ‘hub’ and instructing nearly 100 youngsters together for a good part of the day.”

Comments from readers in the Washington Post and Detroit Free Press point out obvious concerns about putting so many 5-year-olds in a small space, along with the fact that the school apparently no longer has a library. Some sound incensed at the “warehousing” of kids, presumably to cut costs, and at the idea of “unique,” “wild,” “pet” experiments with our most at-risk children. Others point out the similarities to the “open-classroom fad” that faded out in the seventies.

When I saw the headline, I was incredulous. I mean, it sounds a little nutty, right? Well, school leaders clearly think they have a model that works. And as I looked into it in more detail, I came to think that the truly worrisome part of this story has very little to do with the 100 kids. Read on for that, but first, here’s what they are doing, according to the Detroit Free Press:

Each teacher has a homeroom, math and reading class. For reading and math, kids are put in a high-, middle- or low-level group and move to the corresponding teacher’s section. There, activities can include whole-group lessons, small-group lessons and instructional games on laptops. Writing is taught in homeroom.

The entire group spends time together, too, such as on a day in May when about 70 students (a number were absent) sat on a rug to watch a teacher demonstrate how to cut out a paper watering can from an outline. A paraprofessional helps out two hours a day.

And they seem to think it’s working:

“To be able to put (advanced students) together, they can really push each other, and just excel that much more and that much faster,” teacher Sara Ordaz said. “The same thing with our lowest kids.”

Ordaz said by mid-May, the highest-level math students were doing word problems.

“They’re just flying through,” she said.

Through testing and the students’ work on laptops, teachers say they can keep close tabs on their progress.

And one observer cited in the Free Press article did mention some positives, such as that she liked the co-teaching and co-planning aspects.

So, how could we know? Maybe they are innovating based on local needs and context. Maybe they are just winging it, trying something new.

“EAA officials are encouraged by their own internal test data that they say shows students making gains,” says the reporter.

“Research has shown smaller sizes work, but this model has pretty much in a sense, early on, has kind of proved that wrong,” says the principal.

With all due respect, I don’t think they have enough proof.

This school has chosen to try something that flies in the face of our best understandings in terms of class size and early childhood needs, so it seems their model might fail. On the other hand, they have incorporated promising ideas to promote collaborative teaching and individualized learning, and they have a longer school year, so the model might succeed.

The sad thing is, I don’t think we ever will know. Instead, we appear to be left with the worst of all possible worlds. The model quite possibly will make things worse, and if it works instead, it will likely be impossible for us to know that it did. Figuring out why it worked will prove even more difficult still.

In spite of what the headlines might claim, this is not an experiment. Instead, the headline suggests complete disregard for a meaningful definition of the word experiment. And the principal’s comments suggest a disrespect for research and an ignorance of what constitutes proof, at least in the scientific sense.

Now, the principal may have been speaking of proving his school’s model in the rhetorical or political sense, and the newspaper headline likely refers to “experiment” in the colloquial sense.

But I think that is actually a reflection of a larger problem. Putting aside the particular questions of class size or the needs of young children, this example illustrates how the principles of science, evidence, and proof are routinely ignored when it comes to education policy.

Far too often, research findings are used to bolster an argument for a decision that has already been made, rather than to inform a decision about a question of legitimate interest. Decisions are made based on rhetoric, politics, and hunches, and so the field of education is left largely unable to learn.

We make changes in policies, but we make them without thinking about what we wish to learn from the changes.

This is not an extreme case of a “wild” experiment, but rather a clear illustration of how, in our schools, decisions get made without regard to what we know, evidence is ignored in favor of expediency, and research is rhetorically refuted by anecdotes and opinions.