Teaching is delicate work. What’s inside a student’s head is invisible, unique, and constantly evolving, and as teachers, our job is to know what’s in there and craft our instruction to match. Figuring out exactly what a student is thinking and why is hard, but I suspect most teachers are, like me, inveterate junkies for the process. It’s like careful detective work, a sort of cognitive investigation, in which we uncover confusion and map out the current state of understanding in order to build on that understanding.
This detective work is fundamental to the teaching and learning process, and good teachers have been doing it since the dawn of time. We currently call it lots of things (“data-driven instruction,” “responsive teaching,” “formative assessment,” etc.). We put it front and center in many conversations about effective teaching, and rightly so: you need to know what kids know in order to help them know more.
When we recognize the importance of up-to-date, accurate data on student understanding, we are faced with a tricky question: how frequently should we test students? Let me point out an important factor that I believe is often overlooked in answering this question.
We must admit that the balancing act between instruction and formal assessment is a zero-sum game. Put bluntly, we have ‘em for eight precious hours a day… how do we want to spend those hours? In my experience, decision-makers outside the classroom (administrators, researchers) often overlook the cost of the data, and even in cases where the trade-off is considered, the arithmetic can be faulty. What do I mean?
Well, sometimes calculating the cost of test data is as straightforward as it seems: if a particular test takes 45 minutes and class periods are 45 minutes long, then that data costs one period of instruction. Easy-peasy, right?
But sometimes simple arithmetic doesn’t work. What if a test takes 25 minutes and needs to be administered in the computer lab? Lining up, trekking to the lab, finding seats, booting up computers, logging on, and getting settled… maybe 5 minutes. And then logging off, lining up again, trekking back to class, and getting re-settled… another 5 minutes. (This is all if we’re lucky and if we’re talking about older kids.) Now we have 10 minutes left in the period, and the classroom zeitgeist is likely a bit jumbled. Several kids probably need to go to the bathroom because of testing nerves or excitement over the disruption in routine. Someone is crying because they’re worried about the test. Someone else left their jacket in the computer lab. Let’s face it: the period is over. A 25-minute test cost 45 minutes of instruction.
Or consider the phrase, dreaded by every classroom teacher ever: “oh, I’ll just be pulling kids to do some testing throughout the morning.” Sure, each individual assessment might only last 15 minutes, but can we say that only 15 minutes of instructional time was lost? Nope. The constant movement of individuals in and out compromises the learning community that naturally forms over the arc of a lesson: each kid is missing a different chunk of learning, and (for young students) the novelty of classmates coming and going can be seriously distracting.
What I’m saying is that assessment data is not free. We pay for it in instructional minutes, and it behooves us to think long and hard about whether the information we obtain is worth the price.