News – New NAEP Trends Report


The new NAEP Trends Report just came out.

Here are highlights from the NAEP website:

Compared to the first assessment in 1971 for reading and in 1973 for mathematics, scores were higher in 2012 for 9- and 13-year-olds and not significantly different for 17-year-olds.

In both reading and mathematics at all three ages, Black students made larger gains from the early 1970s than White students.

Hispanic students made larger gains from the 1970s than White students in reading at all three ages and in mathematics at ages 13 and 17.

Female students have consistently outscored male students in reading at all three ages, but the gender gap narrowed from 1971 to 2012 at age 9.

At ages 9 and 13, the scores of male and female students were not significantly different in mathematics, but the gender gap in mathematics for 17-year-olds narrowed in comparison to 1973.

Full report:

Your thoughts?

Pretty Much Anyone? Part IV: Where the Rubber Meets the Road


The million-dollar question is: can pretty much anyone (as we’ve seen with MTP and ITP-P) be trained, in a general way, in ITP-C?  I believe the answer is no.  ITP-C is where the rubber meets the road.  To put it bluntly, this is where you’ve actually got to know what you’re talking about.  Given a particular grade, content area, and type of student, there is a non-general, non-universal, specific set of best practices, theories, techniques, tricks of the trade, common misconceptions, and potential trainwrecks that forms the guts of ITP-C.

Which participant structures are most appropriate for a collaborative learning model in a 12th grade science classroom?  What’s the best way to build those structures into the classroom culture?  How do those structures change when there are a large number of English Learners in the classroom?

Is critical peer-editing of creative writing even possible in kindergarten?  What does it look like?  How do you teach kids to do it? How can you ensure that it is a constructive process for all students, even those whose emerging literacy development tends to the edges of the bell curve?

The above two paragraphs illustrate (hopefully) that you can’t just make this stuff up.  That is, there is a real, non-trivial, nuanced, unobvious set of knowledge and skills associated with a specific grade level, content area, and type of student.  My own teaching experience in early elementary grades has prepared me to engage confidently with the questions in the second paragraph.  But if I’m being honest, I wouldn’t touch the first paragraph with a ten-foot pole.  I mean, sure, I could make up some nice-sounding sentence about “the importance of establishing classroom norms” or “conscientious decisions about grouping strategies,” but I would never foist all that generic stuff on a struggling, first-year high school biology teacher.  What such a teacher needs, in order to improve her practice, are detailed and precise techniques, strategies, and insights that speak to the specifics of her teaching situation.  That is, she needs ITP-C from someone who knows what they’re talking about.

Pretty Much Anyone? Part III: Pretty Much Anyone Can Talk About SMART Goals


Another distinction is in order, this one between two parts of ITP.  This distinction is best made by referencing a classic but now-outdated dichotomy in the philosophy of teaching (Shulman, 1986): pedagogy and content.  Insofar as ITP is a teaching endeavor (the teaching of teachers, that is), we can consider the pedagogy of ITP (ITP-P) and the content of ITP (ITP-C).  I think we need a diagram.

[image 1]

Trust me, all the acronyms are worth it… we’re about to get to the juicy stuff.

ITP-P involves such concepts as “how adult learners learn,” “how to engage and motivate adult learners,” “how to foster productive dialogue with colleagues or subordinates,” “how to deliver constructive feedback in a productive way,” etc.  Note that these skills can be taken as somewhat universal.  That is, they’re not tied to teaching teachers of a particular grade level, content area, or type of student.  And the implication, again, is that pretty much anyone can be trained in these skills.  For example, think of an instructional coach who encourages the teachers under her guidance to set SMART (Specific, Measurable, Attainable, Relevant, Time-bound) goals.  This improvement strategy is universally suitable (or not, as the case may be) for all types of teachers.

Pretty Much Anyone? Part II: Pretty Much Anyone Can Use a Rubric


Many teacher evaluation systems are predicated on the belief that MTP can be carried out by pretty much anyone who is sufficiently trained in the use of an observation protocol, scoring rubric, or other measurement instrument.  Observers or raters are trained in the use of the instrument, they practice until a threshold of reliability is achieved, and then they are sent forth to use the instrument.  Questions like, “How can someone who has never taught my grade, content area, or type of student measure the quality of my instruction?” are answered with reference to the fact that, after sufficient training, raters with teaching experience and those without produce similar ratings.  For the moment, let’s accept this answer and move on to address ITP.

Pretty Much Anyone? Part I: An Introduction


Teacher evaluation systems whose primary purpose is the development and support of teachers necessarily employ two processes: measuring teacher practice (MTP) and improving teacher practice (ITP).  ITP often goes by the name Professional Development.

In an ideal world, every developing teacher would be supported by a master teacher-mentor-coach, proficient in the particular content area, grade level, and student demographic that the developing teacher taught.  In reality, however, budgets and logistics demand systems that offer support that is not nearly so individualized.  In many schools a vice principal or instructional coach is responsible for observing and coaching all (or a large cluster of) teachers.  Certification courses exist, which aim to “help you build your collaboration, facilitation, coaching, and mentoring skills so you can create effective professional development for teachers.”

The purpose of this series of posts is to distinguish between MTP and ITP and pose the question, “what training or qualifications are required of those who are to carry out these processes?”

Let’s start by differentiating between MTP and ITP.  To oversimplify, the first process asks the question, “What is going on?” and the second process says, “Here’s how to improve what’s going on.”  Any system designed to support the development of teachers needs both processes, arranged in something of a feedback loop.  To oversimplify again, that feedback loop runs something like,

MTP: “I see that your execution of teaching skill X needs improvement.”

ITP: “Here’s how to improve teaching skill X.”

MTP: “I see that your execution of teaching skill X still needs improvement,” or “Your execution of teaching skill X has improved.  I now see that teaching skill Y needs improvement.”

ITP: “Here’s a different strategy to improve teaching skill X,” or “Here’s how to improve teaching skill Y.”

You get the picture.  MTP gathers the information and ITP acts on the information; both processes are necessary in the endeavor to support teachers in their development.

Extra! Extra! NCTQ Finds that UCLA’s Ed School Syllabi Wipe Out, Ohio State’s Hang Ten!


This blog is committed to the idea that research matters, that research methods matter, and that they matter to regular teachers, principals, and district policymakers.  It is also dedicated to figuring out why real research often doesn’t matter in the least, why it is so often disregarded, ignored, or unknown to practitioners and policymakers, and what we all can do about this sorry state of affairs.

And now, just as we’ve been trying to get this blog going, along comes a gift-wrapped controversy: the NCTQ report on the woes of our university-based teacher preparation programs.  It’s perfect because this report manages to be completely off-base and laughably preposterous in every specific claim while at the same time making a broad claim that is undeniably and inarguably true.  The idea that this report’s methodology can accurately rank one teacher preparation program as higher quality than another program is just plain silly, but the underlying idea that teacher preparation programs are collectively weak is just plain common knowledge.

So, first, this report got a ton of attention and was taken by many as a serious indictment of the quality of teacher preparation, in spite of the fact that it was based exclusively on examining course syllabi and online program materials.

Seriously.  This report purports to rate the quality of our nation’s ed schools based on their course syllabi.

Yet, it garnered headlines such as this one from the LA Times: “New teacher training study decries California universities.”  And here is the Times’ summary: “A controversial policy group singles out teacher training programs at UCLA and Loyola Marymount as hardly worth attending. But the schools say the report is flawed.”  And their analysis of the methods: “The researchers were trying to develop a consistent, relevant rating scale, including such measures as whether incoming teachers learn to analyze student performance data and whether they learn about phonics-based reading instruction. The council said its effort will evolve and should become increasingly reliable.”

And here was the Superintendent’s reaction:  “It’s widely agreed upon that there’s a problem” with teacher training, said L.A. schools Supt. John Deasy. “The report points out that California has an acute set of problems.”

Now, to be fair to the media, the report has also been widely and prominently criticized for its methodological weaknesses and inaccuracies (Linda Darling-Hammond) and ideological bias (Diane Ravitch).  So I won’t reiterate the details here.

But, because the NCTQ report is pointing to a problem that is widely perceived (you’d be hard-pressed to find a teacher who would say s/he was well prepared for the job on day one), and because the critics of the report are often perceived as politically motivated, the “controversy” is playing out much the way the LA Times’ subheading states: “others say the report is flawed.”

This report ends up feeling a lot like the LA Times’ value-added controversy: an inflammatory and deeply flawed way of attracting attention to a problem that everyone already knows about.  Surely there are better ways to fix the rotten planks in our education system than singling out individual teachers or ed schools to poke them in the eye?

This blog will attempt to be a place to find those other means: a place to discuss problems with strong opinions, but also with honest appraisals of the evidence on all sides.

And as for this NCTQ report?  It’s not that it had nothing of value to report.  UCLA, for instance, from my experience, could likely benefit from including a course focused on classroom management and reading Fred Jones, Rick Morris, Harry Wong, and others.  And analyzing teacher education syllabi and course descriptions can probably provide us with broad lessons about the inconsistent approaches of various programs.

But the true impact of the NCTQ report ought to be a cautionary tale: when reading a “study,” read the Limitations before the Conclusion.

Reform Clutter, Part III: Is It a Bad Thing?


This is a question that will never be answered because it’s ultimately a question about the nature of teaching and learning.  But we can’t throw up our hands quite yet.  If we dig a bit, we can understand the roots of this question; and, having understood precisely what there is to disagree about, we can disagree productively and respectfully.

There are undeniable benefits to consistency within a school.  I don’t think anyone would dispute that basic tenets of the dress code should apply whether you’re in math class or in the lunch room.  If you’re not allowed to run down the stairs when Ms. Green is the hallway monitor, then you oughtn’t be allowed to run on Mr. Grey’s clock either.  Organizational norms of conduct are just that: organizational.  That is, they’re intended to apply throughout the organization.

Furthermore, there are pedagogical advantages to consistency within a school.  Students transitioning from grade to grade (or even from subject to subject within a day) have an easier time making connections to prior learning (and across content areas) when there is consistency in curriculum and vocabulary.

Finally, consistency within a school just makes sense in terms of efficiency.  For example, if the kindergarten, 1st, and 2nd grade teachers all use a similar guided reading procedure, then kids don’t have to spend the first several weeks of each year settling into a new routine; they know the drill, they can hit the ground running, as it were.

All of this makes a strong case against reform clutter and illustrates why it’s good to “have everyone on the same page.”  There’s another side of the coin, though.  Simply put, everyone’s different.  Teachers are individuals, with varying personalities, styles, philosophies of teaching, particular strengths, levels of experience and expertise, backgrounds, types of professional training, beliefs, etc.  A given reform, be it a new curriculum series or a new approach to behavior management or a new protocol for recordkeeping, will naturally be a “better fit” for some teachers than for others.  It would be short-sighted to argue that teachers should only embrace policies that happen to sound fun to them: a reflective and improvement-minded teacher should be able and willing to adjust her practice in order to refine and improve it.  But it’s worth recognizing that a one-size-fits-all approach may fail to capitalize on (may, in fact, stifle) brilliant classroom practice that arises from the individuality of teachers.

For example, I taught with a woman whose approach to establishing classroom culture was to front-load heavily.  She would spend a lot of classroom time during the first few weeks of school in critical class-wide conversations, replete with role-plays and examples.  The students would create posters to illustrate their norms of conduct, and they would practice peer-consultation (a student-to-student discussion protocol to respond to peer behavior outside of those norms).  Having established such a solid foundation, this teacher spent very little subsequent instructional time dealing with disruptions to the learning environment.  A reference to a particular poster or a peer-consultation was usually sufficient to get everyone back on track.

I mention this example because it illustrates a unique system, designed by a particular teacher, which was inarguably effective.  Should all teachers in the building have been expected to adopt this system?  If a new, uniform system (one that was less effective for that particular teacher because it didn’t align with her belief that children should manage the culture in their classroom) would have compromised that teacher’s ability to teach, should she have been required to adopt it in the interest of uniformity?  In short, is it possible that reform clutter is sometimes the healthy and natural state of affairs that arises when a common goal (the education of students) unites a group of uniquely gifted practitioners?