A veteran teacher turned coach shadows 2 students for 2 days – a sobering lesson learned


Granted, and...

The following account comes from a veteran HS teacher who just became a coach in her building. Because her experience is so vivid and sobering, I have kept her anonymous. But nothing she describes is any different from my own experience sitting in HS classes for long periods of time. And this report, of course, accords fully with the results of our student surveys.

I have made a terrible mistake.

I waited fourteen years to do something that I should have done my first year of teaching: shadow a student for a day. It was so eye-opening that I wish I could go back to every class of students I ever had right now and change a minimum of ten things – the layout, the lesson plan, the checks for understanding. Most of it!

This is the first year I am working in a school but not teaching…



The Misconstrual of Research for Popular Consumption… And What to Do About It


What can hurricanes tell us about education research?  Quite a bit, it seems.

This is not the first time I’ve referenced an article by Jason Samenow, weather editor for the Washington Post, and I suspect it won’t be the last.  His recent commentary on the misinterpretation of hurricane predictions is eerily relevant to education researchers, who must also deal with the recasting of their findings by popular media outlets.

Samenow lists two concerns about the ways that meteorological research is distorted as it moves into general, nontechnical discourse:

  • the public is not presented with the full range of weather possibilities, just the eye-catching ones that involve “sexy model simulations” (in this case, hurricanes)
  • models that project more than five days into the future are fundamentally unsound, but this disclaimer is conspicuously absent from popular weather reports.

Sound familiar?  It should…

How many headlines have we read touting the newest silver bullet of education reform, which, upon further investigation, is not grounded in the quoted research?  How many times have we seen legitimate findings misquoted in support of insupportable claims?  How many times have we seen a reinterpretation of education research for the general public that is stripped of the necessary cautions and caveats?

A meteorologist makes predictions about the weather based on existing data, prior research, and theory; an education researcher makes predictions about students, teachers, and schools based on the same collection of knowledge.  Both scientists answer questions about which the lay-public cares deeply.  Both deal with a subject matter that is complicated and unpredictable.  Both use a set of analytic tools whose complexity far exceeds the public’s technical sophistication.

And both face the same challenges in ensuring the accurate dissemination of their findings.  So what’s a conscientious researcher to do if she wants her work correctly communicated to the public?

Samenow answers by sketching the role of a responsible meteorologist in this troubling dynamic of misinformation.  He discourages a campaign of damage control (i.e., publicly calling out those who publish distorted research), arguing that this is “a never-ending and unwinnable game of whack-a-mole.”  Instead, he urges scientists to “focus on educating their readers and viewers about the limitations of weather forecasts,” “discuss what is known and not known,” and “share good examples of colleagues doing this the right way.”

I’ll be honest: the firebrand in me is a little disappointed with Samenow’s modest, measured conclusion that “education is the only weapon we have in this fight against social media misinformation.”  In response to this misconstrual of the facts – especially when it’s intentional – heads need to roll, right?  At the intersection of education research and reform, amidst the rather severe notions of “accountability” that shape current policy, it seems to me that the stakes are just too high to tolerate reckless distortions of the truth.

But the longer I mull it over, the more I think maybe Samenow is right.  If we, as researchers, are in this for the long haul, if we really want our work to inform and educate society, then maybe a campaign of thoughtfulness, humility, and leading by example isn’t such a bad way to go.


Let’s do an experiment: Put 100 Kindergartners into one classroom…


There’s a classroom in Detroit with nearly 100 kindergartners. The class is housed in what used to be the school library, and it is led by three teachers, so the student-teacher ratio is not as astronomical as the headline suggests, but the idea has still raised concerns. Brenda Scott Academy is a chronically failing school that is now part of the Education Achievement Authority (EAA). EAA schools also have a longer school year, serve students breakfast, lunch, and dinner, and focus on a student-centered teaching approach, along with combining classrooms.

Newspaper reports appear outraged: School puts nearly 100 kindergartners in one class in a teaching experiment

Or at least dubious: Last February, administrators began what they thought would be a worthwhile teaching experiment: combining three classes of kindergartners into one “hub” and instructing nearly 100 youngsters together for a good part of the day.

Comments from readers in the Washington Post and Detroit Free Press point out obvious concerns about putting so many 5-year-olds in a small space, along with the fact that the school apparently no longer has a library. Some sound incensed at the “warehousing” of kids, presumably to cut costs, and at the idea of “unique,” “wild,” “pet” experiments with our most at-risk children. Others point out the similarities to the “open-classroom fad” that faded out in the seventies.

When I saw the headline, I was incredulous. I mean, it sounds a little nutty, right? Well, school leaders clearly think they have a model that works. And as I looked into it in more detail, I came to think that the truly worrisome part of this story has very little to do with the 100 kids. Read on for that, but first, here’s what they are doing, according to the Detroit Free Press:

Each teacher has a homeroom, math and reading class. For reading and math, kids are put in a high-, middle- or low-level group and move to the corresponding teacher’s section. There, activities can include whole-group lessons, small-group lessons and instructional games on laptops. Writing is taught in homeroom.

The entire group spends time together, too, such as on a day in May when about 70 students (a number were absent) sat on a rug to watch a teacher demonstrate how to cut out a paper watering can from an outline. A paraprofessional helps out two hours a day.

And they seem to think it’s working:

“To be able to put (advanced students) together, they can really push each other, and just excel that much more and that much faster,” teacher Sara Ordaz said. “The same thing with our lowest kids.”

Ordaz said by mid-May, the highest-level math students were doing word problems.

“They’re just flying through,” she said.

Through testing and the students’ work on laptops, teachers say they can keep close tabs on their progress.

And one observer cited in the Free Press article did mention some positives, such as that she liked the co-teaching and co-planning aspects.

So, how could we know? Maybe they are innovating based on local needs and context. Maybe they are just winging it, trying something new.

“EAA officials are encouraged by their own internal test data that they say shows students making gains,” says the reporter.

“Research has shown smaller sizes work, but this model has pretty much in a sense, early on, has kind of proved that wrong,” says the principal.

With all due respect, I don’t think they have enough proof.

This school has chosen to try something that flies in the face of our best understandings in terms of class size and early childhood needs, so it seems their model might fail. On the other hand, they have incorporated promising ideas to promote collaborative teaching and individualized learning, and they have a longer school year, so the model might succeed.

The sad thing is, I don’t think we ever will know. Instead, we appear to be left with the worst of all possible worlds. The model quite possibly will make things worse; and if it doesn’t, if it works, it will likely be impossible for us to know why. And figuring out why it worked will prove even more difficult.

In spite of what the headlines might claim, this is not an experiment. Instead, the headline suggests complete disregard for a meaningful definition of the word experiment. And the principal’s comments suggest a disrespect for research and an ignorance of what constitutes proof, at least in the scientific sense.

Now, the principal may have been speaking of proving his school’s model in the rhetorical or political sense, and the newspaper headline likely refers to “experiment” in the colloquial sense.

But I think that is actually a reflection of a larger problem. Putting aside the particular questions of class size or the needs of young children, this example illustrates how the principles of science, evidence, and proof are routinely ignored when it comes to education policy.

Far too often, research findings are used to bolster an argument for a decision that has already been made, rather than to make a decision about a question that is of legitimate interest. Instead, decisions are made based on rhetoric, politics, and hunches. So the field of education is left largely unable to learn.

We make changes in policies, but we make them without thinking about what we wish to learn from the changes.

This is not an extreme case of a “wild” experiment, but rather a clear illustration of how, in our schools, decisions get made without regard to what we know, evidence is ignored in the face of expediency, and research is rhetorically refuted by anecdotes and opinions.


An IEP is not a SEP…


By definition, standardized tests are designed in such a way that the questions, conditions for administering, scoring procedures, and interpretations are consistent and predetermined.  Look carefully at this description: “consistent,” “predetermined.”

Having met Mr. Duncan some time ago, I am quite disturbed by his recent decision to expand the uses, interpretations, and accountability measures associated with the test scores of Students with Disabilities (not to be confused with the term “SPED students”).  The reality is that Students with Disabilities have long been exposed to the wonderful world of high-stakes testing.  And do you know what they get labeled?  We talk about these students as a “subgroup,” often “below average,” “below basic.”  And do you know why?  Because many of them enter the world of academic rigor “below the norm,” as measured and defined by psychological and psycho-educational evaluations, eligibility reports, and a host of other nationally normed evaluations.

When we engage in a conversation about the true academic abilities of Students with Disabilities, we have to consider more than their performance on random, out-of-context reading passages about “Astronauts!” on standardized tests.  We have to consider the whole child.  We have to consider the nature of these assessments, because paper-and-pencil tests and scantrons are not for everybody.  But Duncan’s decision doesn’t seem to take that into account.

As Special Educators, we fight tirelessly.  Do you know how long it takes to get the student who is three grade levels behind to a point where he is only one grade level behind?  All we have is 180 days…  180 days to rewrite what this student has been thinking about himself for quite some time: “I’m below basic, I’m not good enough” … “inadequate” … “failure.”

I am not saying that Students with Disabilities should not be exposed to standardized testing.  They have been for years.  But what I am saying is that this “standard” decision needs to consider the very nature of an IEP: it’s an “Individualized Education Plan,” not a “Standardized Education Plan.”

How can we as Special Educators work year after year to help students master goals that are individualized only to turn around and say, “I know you can’t add two-digit numbers, but I want you to take this standardized test where half the questions involve adding with two-digit numbers so that I can see where you are.”  Umm, what?

Look, after certain early developmental stages, students recognize and realize when they just can’t do something.  And they know that their teacher knows… Think of what it does to the trust and understanding between a student and teacher when that student has to sit in front of that teacher and repeatedly fail standardized tests.

Mr. Duncan needs to be in the presence of Students with Disabilities who are assessed with DIBELS, for example, an early literacy assessment.  These students’ IEPs may stipulate “extended time,” but DIBELS administration prohibits it.  The students never get close to scoring “benchmark,” and they know it.

Mr. Duncan needs to have a conversation with the students who are overwhelmed on a standardized test of 40 questions.  If these students’ IEPs mandate chunked and tiered assignments, how can we be surprised when they are unable to finish the test?

Mr. Duncan needs to spend time with the student who has limited working memory and processing speed, so that he knows how this student feels when trying to respond to even a single question in a “standard” way.  How can we be surprised if the student quickly bubbles in answers, just hoping to get the process over with quickly?

Mr. Duncan needs to be in a room with a student who cries from anxiety during a high-stakes testing session because he is overwhelmed trying to decode all the words in a non-fiction passage, just so that he can finally get to the comprehension questions.  True story.

Again, I am not saying that Students with Disabilities should not be exposed to standards-based measures, but I am opposed to Mr. Duncan’s one-size-fits-all approach.  I think the addition of a portfolio assessment, for example, would give us a more robust view of what students are capable of doing.

The reality is that Students with Disabilities sometimes come into our schools with disadvantages that are beyond their control (i.e., Autism Spectrum Disorders).  As Special Educators, it is our job to assure our students that growth is possible, growth matters.  But this must be individualized growth, not standardized growth.

Alexis Mays-Fields is an elementary school Special Education teacher in Washington, D.C.



Last week, Russ Whitehurst (director of the Brown Center on Education Policy at the Brookings Institution in Washington, DC) published an interesting essay proposing a novel approach to standards-based accountability.  I’ll summarize his main point below, but what I’d really like to talk about in this post is something that happens in the eighth paragraph of his several-page essay.  In two sentences, Whitehurst makes a common rhetorical move that I’ll call “bracketing.”  I want to discuss how this move affects our discourse and thinking about educational quality.

Whitehurst expresses concerns about melding the new rigor of the CCSS with the impractical “100% proficiency for 100% of students” approach of NCLB.  His novel solution involves a two-tiered accountability system: states and the federal government would be in charge of minimum competency standards, and schools and districts would take care of anything above and beyond the basics.  I trust this is a fair, if brief, summary of his article.  Now, on to his eighth paragraph…

“Note that my focus is test-based accountability.  Other things I’ll not cover here, such as students’ aspirations and soft skills, are important too.”

In his title (“The Future of Test-Based Accountability”), Whitehurst makes it clear that his discussion is about how to use test scores, not about what test scores tell us (or don’t), how they shape teaching and learning, or their unintended consequences.  These are all concerns that he chooses to “bracket.”

Careful thinking about the complex issues involved in education reform often requires us to set aside (or “bracket”) certain issues in order to narrow our focus and examine a particular issue in depth.  It is certainly defensible, then, for Whitehurst to “bracket” what he considers to be nonessential concerns within the context of his paper.

How does “bracketing” work?  Basically, by contracting the scope of the conversation.  It is a powerful silencing move because it essentially disallows discussion of the “bracketed” topic.  When the same issue is bracketed again and again, in a variety of contexts and discussions, then bringing it up can become difficult… you start to feel like that student who keeps raising his hand to say the same thing.  At some point, you begin to sense that the other students are rolling their eyes and getting irritated, and so you decide just to let it go.

This can become problematic in conversations about educational quality when citizens, policymakers, and researchers habitually (almost reflexively) “bracket” the same set of concerns and – this is the dangerous part – neglect to “un-bracket” them.  It seems to me that this has happened with the very concerns that Whitehurst “brackets,” concerns about the centrality of test scores in our concept of educational quality.

Recently, Arne Duncan announced a shift in federal policy that involves an unprecedented use of special-needs students’ standardized test scores. Predictably, he “brackets” the same issues Whitehurst does.

Next week we’ll hear from a special educator from Washington, DC, who will argue passionately against Duncan’s “bracketing.”  She’ll paint a very real picture of “students’ aspirations and soft skills” and argue that, particularly for special-needs students, the consequences of “bracketing” are just too high.


The Added Cost of Data


As a new contributor, let me just start with a big thank you! The amazing thing about this blog is its willingness to consider all voices and the value placed on the voice of the teacher. ¡Gracias!


This post is a response/addition to “The Cost of Data” written on June 16.

As a classroom teacher, I can very much attest to the ‘cost’ of data in terms of instructional time; however, that is not the only cost of data. As data continues to be used to secure funding, open and close schools, and hire and fire teachers, it’s important to also consider the following costs: the types of assessments collecting the data, and how the data is being used in the context of the day-to-day teaching of children.


In terms of the types of assessments being used, I think that there should be more transparency between the big corporations who profit from selling standards-aligned, PARCC-aligned resources and those who decide to use the new kinds of assessments. While I think PARCC is heading in the right direction with performance tasks that are based in the real world and promote critical thinking and problem solving, there is disturbingly little information on how these assessments will be adapted and used in the primary grades (a shocking trend). Having witnessed testing anxiety in my six-year-old students, I believe we have got to change the way we talk about testing and, consequently, data. The tests need to be developmentally appropriate and vetted by the people who actually have to use them: teachers. Amazingly enough, PARCC has posted sample assessments and asked for feedback, though I wish more teachers knew about it so they could actually give feedback.


When teachers don’t understand where the tests came from, the background of their development, or even how they address the standards, the cost is a wealth of information that no one knows how to use to inform instruction or to share with parents about student performance. The cost is not only a rise in student anxiety, but also in teacher anxiety, as teachers teach to and prepare for a test they don’t really understand simply because the results are so important.


This leads me to the ‘how’ of data. In the daily life of teaching, data is used in many ways: to make small groups, to decide what and when to reteach, to differentiate instruction, and to create and monitor interventions. When used correctly, data can be the best tool to tailor instruction for students. I’ve used it myself to engage parents in supporting students at home. I’ve seen it create ‘lightbulb’ moments where parents really see their children for the first time. Yet, there is a dark side to data as well. I have watched administrators sit in ESL or SPED meetings and use data to stereotype and pigeonhole students. I’ve seen it used to bully parents instead of inspire them. I’ve seen it used to scare and intimidate teachers instead of to help them grow. I’ve seen it shared with students in an effort to ‘motivate’ them to do better, only to leave them crying and wounded.


Data can be a wonderful tool for making schools and teachers better, but there is an added cost when the assessments are foreign and tied to high stakes, and when the data isn’t shared in a constructive way. The only way to keep making progress while using data is to have a conversation about it. Kudos, Glo, for keeping the dialogue going 🙂



Students teaching students: What can we learn?


After reading the last two posts on pre-service teacher education (here and here), I began to reflect on an experience I had before any formal knowledge of teaching. When I was in college, I spent several summers teaching with Summerbridge Hong Kong (SBHK), an English summer program that provided a low-cost English immersive experience to low-income local students.

Like many overseas language programs, SBHK hires relatively inexperienced high school and college students from the US, UK, and Canada to help local students improve their English skills and increase their exposure to international cultures. Unlike many such programs, SBHK focuses on the development and empowerment of its teachers as much as of its students.

Looking back, after graduate study and several additional years of teaching, I continue to wonder what more I can glean from the program’s philosophy of education and model of teacher training.

Rather than prescribing a curriculum, the program’s administrators scaffold teachers towards designing and delivering their own 4-week lessons. Before arriving in Hong Kong, teachers work with a prior SBHK teacher as a “virtual mentor” to begin developing a final project for their class. Once they arrive, they participate in a series of training workshops designed to introduce them to basic principles and techniques of teaching, as well as to help them break their final project into 3-5 themed “mini-units.” Each mini-unit focuses on a skill that students will need to complete the final project. Throughout the summer, teachers use a similar backwards approach on a smaller scale to develop individual lesson plans, each of which works towards the focus of the mini-unit as well as the final project for the class.

Every step of the way, the Dean of Faculty, an experienced teacher or graduate student in an educational field, provides feedback that is intended to support teachers in meeting their own and students’ goals in the classroom. Just as students are encouraged to think of English as more than just a school class, teachers are encouraged to think of teaching as more than just a job. This is facilitated by encouraging teachers to try new strategies and techniques and to learn from their mistakes rather than fearing consequences. Celebrating such attempts allows teachers to develop as teachers and to take pride in their teaching.

Despite its seemingly loose structure, this program transforms some 300 low-income Hong Kong high school students from disheartened English language learners into dynamic language users who by the end of the summer communicate with confidence on stage in front of hundreds of people. Over 90% of program graduates consistently pass the HKCEE (Hong Kong Certificate of Education Examination), compared to the 65% average of all Hong Kong students. While there is no such quantitative survey of teachers’ progress, in my years of working with this program I have yet to meet a single teacher who has described a negative experience. Regardless of how unrealistic it seems, such a model seems to work, at least for language education.

While there are obvious limitations to duplicating this exactly in the K-12 setting, structuring an environment in which teachers and students alike are encouraged to take ownership of their own classroom and education seems both productive and intuitive. Such a program can be structured enough to prescribe end goals for students and teachers through a semester, while remaining flexible enough for individual teachers to develop strategies optimal for their own classes or contexts. However, doing so requires parents and administrators to treat teachers as professionals and afford them the trust and respect such status deserves. In the same vein, it encourages the implementation of policies that increase teachers’ resources and confidence and encourage them to engage students and facilitate learning to the greatest degree possible.

However, such an approach is impossible to standardize. While every student in the country certainly deserves to benefit from the best teaching methods on hand, standardization assumes that there exists a single optimal approach to teaching regardless of region, student demographics, income level, familiarity with US culture, etc. Standardization and scripted lessons can result in lost learning opportunities, since spontaneous segues deviate from the script, or in inefficient teaching methods that neglect students’ personal interests or previous knowledge. One of the most memorable and enjoyable lessons I ever taught was on body parts and illnesses, to a night class of adults, each of whom had multiple tattoos and piercings.

Encouraging teachers to adapt their teaching to fit their own contexts rather than penalizing them for doing so provides spaces in which they and their students can have agency – and interest – in their own classrooms.


Geeta Aneja