Saturday, May 21, 2016

Ten Teen

My two-and-a-half-year-old daughter did several things with numbers yesterday that I had never heard her do before.

We were out on a walk, and someone walking two dogs passed us.

(That was my count, anyway.)

"Five!" Mia exclaimed.

"Where do you see five?" I asked.

"Five dogs!" she answered, pointing back over her shoulder.

Then she looked ahead to another dog that was approaching.

"Six!" she proclaimed.

Two new things here:

  1. I had never heard her use a number greater than 2 to describe the total quantity in a group of objects. She has said "two books" and "one moon," but nothing over two that would show that she understands that a bigger number can be a total quantity. (We math teachers call that cardinality.)
  2. She said "five" and then she said "six." I know this doesn't sound like a big deal. But it was the first time I had heard her count on, without starting at 1. 
Sadly, the dogs passed by so quickly that we never had a chance to see whether there had been 5 dogs or 2 in that first group. (I am pretty sure I was right, though.)

Later, at dinner, she stretched her hand up in the air and started counting at 4.

"4, 5, 6, 7, 8, 9, 10, 11, 12, 16, 17, 18, 19, ten teen," she said.

"Yes!" I said. "We really should have a number called 'ten teen.'" 

"I don't think she knows about 13 and 14," my husband said.

Mia, overhearing him say "fourteen," immediately started counting at 40.

"40, 41, 42, 43, 44, 45, 46, 47, 48, 49, forty-ten!" she said happily.

"Yes," I said. "And the name for forty-ten is fifty!"

"50, 51, 52, 53, 54, 55, 56, 57, 58, 59, fifty-ten!" she continued.

"Yes," I said. "And the name for fifty-ten is sixty!"

What is so cool here is her understanding that there is a repeating pattern to our number system. She has only heard someone count above 30 about 3 times in her life, I would guess. But she has internalized something about the counting pattern.
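Spelled out in place-value terms, her invented names are exactly right -- each "something-ten" is just the next multiple of ten:

$$\text{forty-ten} = 40 + 10 = 50, \qquad \text{fifty-ten} = 50 + 10 = 60, \qquad \ldots, \qquad \text{ninety-ten} = 90 + 10 = 100$$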

And so we continued on to ninety-ten, at which point I told her that ninety-ten is called one hundred. I wasn't sure that was quite the right thing to say, since something big changes at 100, but she's only two and a half, so I kept it simple, and we kept counting together until we got tired of the game. 

Then I dictated notes to my husband, who jotted down on the back of an envelope what had just happened while I held our wiggly ten-month-old with one hand and tried to finish my dinner with the other.

Tuesday, June 5, 2012

What insects need to live

One of the objectives of our second-grade insect curriculum is that students should learn that insects need 4 things to live.

You know them, right? Quick! Name them!

I'm sure you got it right, but in case you didn't, here are the 4 things: food, water, air, and space.

In the past few years, we've started writing learning targets for our lessons, so that both the teachers and the students know what they are supposed to learn. One of our learning targets is this: "I can list what insects need to live." If you can list those 4 things, you get it right. If you can't, you have missed your target.

(We write more interesting learning targets than that one. But it is easiest to write and assess objectives that involve remembering facts, and hardest to write and assess objectives that involve thinking and analysis. This is one danger of objectives.)

A few weeks ago, in an inquiry group for science teachers, we took a deeper look at some student work having to do with this learning target. We looked at one student's observational drawing of a milkweed bug habitat, and we looked closely at partial transcripts of science discussions from two classrooms.

Using the Collaborative Assessment Conference to look at the drawing, we noticed that the student had painstakingly labeled the insects' food, water, and air holes with arrows. But when she wrote the word "space," she drew an arrow pointing to the word "space" itself. Her label was pointing at itself.

This prompted us to think about how abstract "space" is. Who decided insects need space, anyway? What does that mean? Do they need a space to live in? Do they need just enough space for their bodies so they don't get squished? We began to eye our list of four needs with some suspicion.

In the science talk transcripts, the students dug deeply into the idea of what insects need to live. That's not what the discussions were intended to be about -- the teachers had asked where insects live. But as students shared ideas about where they live, they naturally started to talk about what they need to survive. They talked about food, and that insects live in places where they can get the kind of food they need. They talked about protection -- insects live under logs because it is dark and safe and hard to find them.

Then one student said that insects need each other to survive.

We, the teachers, thought hard about that. It made us wonder: What does "live" mean? Does it mean that an individual insect lives? Or does it mean that a species survives? If it's the latter, they most certainly do need each other. 

The second graders, though, weren't thinking about reproduction. They were thinking about safety. They were pretty sure that some insects protect each other. If that was the case, didn't those insects need each other to survive? (If you're not sure about this, check out this video of fire ants making a raft so they can survive a flood in the jungle.)

[The students in one class designed an experiment to see if insects need each other to live. They took one mealworm and put it, alone, in a habitat with food, water, air, and space. It died. Mealworms die easily, so this is hardly incontrovertible evidence, but the second graders were pretty convinced.]

The more we thought about it, the sillier this learning target seemed to us. Do the curriculum writers have any idea of the diversity of insects on earth? Those insects need very different kinds of things to live in very different places. What is really interesting about insects is how they live in certain places so they can get what they need to live -- an idea the students began discussing almost immediately. This seems like a Big Idea about insects (and all living things) that could lead to all kinds of thinking and analysis, instead of just memorizing four things that insects supposedly need to live. 

This is one of the many stories that make me think we should always word our objectives as questions, not as answers. What do insects need to live? There are many answers, and we could investigate them all year.

Thursday, May 31, 2012

What I Learned at Harvard

It's been a year since I last wrote a blog post -- a year in which I haven't been teaching, but have been learning instead. My year of graduate school is at an end, and so I'd like to share the biggest thing I learned at Harvard:

We only really learn the things we figure out ourselves.

This actually seems so obvious to me that I'm a little embarrassed to write it down, but from looking at the world around me, it appears it's not so obvious.

This big take-away can be phrased in other ways. You can't make anyone learn anything they're not ready to learn. People don't learn anything if you tell it to them. You can't really teach anybody anything -- you can only guide them in exploring ideas.

It seems to be a more or less controversial statement depending on how you say it. And I'm not sure all those statements really mean the same thing, or are always true. But I do know a few things.

I know that the person who does the talking, the one who explains, is the one who learns. "Learners talk and teachers listen," as my wise Professor Duckworth wrote in my journal this year, and she's right. If I hear someone explain something that makes sense to me, I kind of understand it, but I don't really own it until I explain it to someone else -- or, better yet, to several other people. Even then, it's very possible I won't remember it a few weeks later. If I really want to learn it, permanently, I have to experience it, struggle with it, and figure it out myself.

Here's another part of this that I cannot believe I never figured out in all my years of teaching, and no one ever told me (or maybe they told me, but I didn't learn it): students only learn what they DO.

In other words, as I've heard said at least one hundred times this year, "Task predicts performance." The idea is well explained here, but what it means is that we only learn to do what we practice doing. If we practice a procedure until we have memorized it, we are only learning to memorize a procedure. If we work with a team to solve a construction problem with blocks, we are learning about constructing buildings, and learning to solve problems in teams. If we sit and listen to a teacher talk, we are learning to sit and listen to a teacher talk, but nothing more. We only learn to do the things we do.

The obvious problem here is that when we think of what it means to be a student, we think of students sitting and listening to the teacher. And when we envision a teacher, we envision someone standing in front of students, talking. According to what I've learned this year, in this scenario the person doing the learning is the teacher, because the teacher is the one talking and thinking. The students are learning to sit and listen.

This was brought home to me a few weeks ago. I'm part of a group that has Harvard professors come and speak with us for one hour on Friday afternoons. It's a chance for us to hear about different professors' work, even those whose classes we didn't get to take.

Until last Friday, the visits were almost indistinguishable. The professor would announce, "I'm just going to talk for 20 or 30 minutes, and then we can have a conversation." Then they would open a PowerPoint presentation with 40-60 slides, talk for 50 minutes, and entertain 2 or 3 questions before leaving.

A few weeks ago, Steve Seidel, a truly great teacher, came to speak with us. He brought 15 slides, and only got through 7. He told us about some thinking he'd been doing lately, with some quotes from Frederick Douglass that had pushed his thinking. He asked what the quotes made us think. Then he told us he had to give a talk soon, and would we think about the topic of his talk, and maybe connect it to the Douglass quotes, and tell him our thoughts. He took careful notes of each person's ideas, and thanked us for helping him plan his talk.

It was the polar opposite of the other Fridays, and I was struck by how much more useful it was. Unlike the other talks, I can still remember what we talked about, and I suspect I'll remember it for some time. Most importantly, Steve came to listen, to think with us, and to learn from us, and the result was that we thought and learned, too.

All of this thinking about how people really learn -- by doing and talking, not by listening -- and how to teach -- by listening -- has me trapped in a new quandary. All around me, I see people "teaching" by telling people things -- Harvard professors, elementary school teachers, and myself, on a pretty regular basis. But I don't think people are learning much this way.

Teachers, policy-makers, administrators, and academics need to learn that this isn't how people learn. And my instinct is to tell them this, to say, "You know, no one's gonna learn that if you just tell it to them."

You can see my problem, of course. No one will learn this from me telling them, because that's not how people learn. Teachers won't change their practice, and school leaders won't change their priorities, because I (or some other little pipsqueak) come along and say that people don't learn this way. People have to learn it for themselves, when they're ready to learn it. They'll be ready to learn it when they experience it, or when they closely watch their students and observe it. And there's no way to make that happen quickly.

Thursday, April 7, 2011

Talk

I'm part of a project this year that works with new(ish) teachers on teaching science.  One of the science teaching practices we're learning about and trying out is a "Science Talk."  A Science Talk is pretty much exactly what it sounds like.  You ask your class a question -- a question that doesn't really have a right answer, that perhaps can be interpreted in different ways -- and then, with relatively little guidance from you, your students talk about the question.

In our group, teachers videotape their Science Talks, then bring their videos, with typed transcripts of the conversation, to be studied.  We look at tiny portions of the transcript at a time, and we dig into the meaning of students' words.  In the midst of talk that can often, in the rush of a classroom, seem unimportant, we find evidence of students' understanding, ideas, connections, experiences, theories, and creativity. We discover concepts we want to return to, we wonder what students meant by a certain phrase, and we are constantly amazed by the depth of their thinking.

Two things most strike me about this work.  The first is that this idea, of throwing out an open-ended question, then asking students to explore it together, is not a revolutionary pedagogical practice.  This is not a new idea; teachers have been doing this for centuries.  However, it is not something we, who teach in this context at this time, do on a regular basis.  In fact, the idea is kind of daunting for many teachers.  A question without a right answer?  A conversation the teacher doesn't control?  Many of us have little experience with this kind of teaching, and it makes us nervous.

I also realize that I used to have more conversations like this in my classroom than I do now.  When I began teaching ten years ago, we had class conversations about the definition of a triangle, whether balls can move by themselves, how to design a fair experiment, why people have wars, what is the "middle number," whether you need a mother and a father, and where a life cycle begins.  I don't think my classroom was unusual in spending time on these questions.  These were some of my very favorite teaching moments -- really, they are why I teach. 

In the past few years, I have felt less freedom to spend time on such conversations.  A constant watchword of our profession now is data.  Where are the data?  What do the data tell us?  The important data in second grade are: what level are the kids reading at?  How many words can they read per minute?  How many sight words can they read?  How many sight words can they spell?  How many math facts can they solve in a minute?  How many of the students can write an organized explanation of how they get ready for school in the morning?

(I should say that these things are, of course, important, some more than others, and I am not against teaching them.  They just aren't that exciting.)

I've inherited (as have the teachers before me) two years of pretty low readers.  ("Low readers," I should write, in quotes.)  So the "data" aren't very good.  Maybe it's because of this, or maybe it would be this way even if they were "high readers" -- we need to spend more time on math and reading instruction.  Instruction that helps the data get better, of course.  Not necessarily instruction that helps them explore ideas, dig deeply into content, or talk about their thinking.

Because these kinds of conversations are so much rarer in classrooms today, it is really important to give the practice a name ("Science Talks"), practice it a bunch in our classes, and analyze the "data" that emerge from our students' voices.  This creates a space for more student talk, for less teacher talk, for more conversation in our classrooms.  By naming it, we legitimize it as a teaching practice, and by digging into our students' words and seeing how much richness we can find there, we offer it as an antidote to timed tests, multiple choice questions, and canned essay topics.

Did I say listening to students' ideas wasn't a revolutionary pedagogical practice? Maybe it is after all.

Thursday, March 31, 2011

Cheating and other hazards of high-stakes testing

There's been a lot of news lately about cases of confirmed or suspected cheating by administrators and teachers on high-stakes tests.  This is, of course, not surprising.  When a single measure such as test scores is used to make decisions about school funding, jobs, and whether or not to keep a school open, you can be sure there'll be outright cheating.  But there's a much stickier question that arises in this kind of environment, one that's less cut-and-dried than teachers telling kids to change their answers on a test: where do we draw the line between test preparation and cheating?  When does test prep render our test results less useful?   

I was enthralled, fascinated, and suffering from a bit of an intellectual crush a few months ago when I read Measuring Up: What Educational Testing Really Tells Us, by Dan Koretz.  (I used to call him Daniel, but now he's Dan to me.)  What amazed me most of all was that neither I, nor seemingly most of the education practitioners with whom I work, really understand very much about how educational testing works.  These tests are the single most powerful force driving our daily work -- and most of us just accept the conventional wisdom about them without blinking.

Conventional wisdom: tests don't measure everything that's important, but it's too hard to measure everything that's important.  (This is true.)  Since there's no way to measure everything important, we'll have to just focus on the tests, for lack of anything better.  (I've definitely moved more in this direction over the years, but that's a mistake.)  The tests do give you important information about how your school is doing -- and if your kids are doing well, there must be a lot of good teaching going on.  (Definitely not true.)

Probably the most important thing I learned from Koretz's book was the existence of Campbell's Law. Campbell's Law, it turns out, is a well-known rule of social science. It states:

The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.

Examples in Koretz's book, and in the blog post linked to above, include the unintended consequences of assessing the on-time record of airline flights and the death rate of heart surgeries. When data on the death rates of individual heart surgeons began to be collected and disseminated, in an effort to assess their surgical skill, surgeons became more reluctant to operate on the direst cases -- the patients who were, of course, most likely to die during surgery and sully their statistics. The surgeons' numbers improved, but health care for heart patients did not.

When data on the on-time rates of individual airlines' flights were publicized, every airline's data began to improve, even though individual travelers didn't notice an improvement in their own flight experiences. Why? Airlines began to pad their estimates of travel times, building in time for routine delays. Even if delayed, flights appeared to arrive "on time," and many flights arrived early. (This latter example isn't egregious -- it kind of seems like good planning. The point, though, is that the improving data gave a misleading impression of improving service.)

Campbell's Law has been coming up lately in the conversation about high-stakes testing (which makes me wish I'd written this blog post a few months ago, as I intended to -- now others have beaten me to the punch). It's essential that schools get good test scores, since their performance on tests is tied to pretty much everything -- so, scores will go up. But increased scores don't mean that students are learning more. They just mean that students are getting better at taking that particular test.

Koretz did a study (which was very hard to do, because no one wanted it done in their district) to measure this effect. In a large urban district, the average third-grade score on the standardized math test in 1986 was a grade-level equivalent of 4.3. Then the district switched to a new test. The next year, scores plummeted to an average of 3.7. Over the next three years, though, scores rose until third graders were again scoring at a grade-level equivalent of 4.3.

In the fourth year, Koretz also administered the old test -- the one that third graders had been doing well on four years earlier. And what do you think happened? On the new test, the one their teachers had been teaching to for four years, they scored an average of 4.3; on the old test, the average score was 3.7 -- exactly the score their counterparts had originally earned on the new test when it was first used.

In other words, students weren't getting better at math in those four years, even though their test scores were improving.  They were getting better at that one particular math test.

So why do we use educational testing?  Most of us would agree, I think, that we use tests so we can see what students know and can do: what they are learning.  When high-stakes test scores go up, they tell us what students are learning in just one realm -- they tell us that students are learning how to take specific tests.  Teachers are adjusting the content they teach to match those specific tests, and teachers are passing on test-taking tricks that help their students score better.

If we want tests that tell us what a specific school, grade, district, or country knows about math, though, high-stakes tests don't tell us much.  What amazes me is how little people are talking about this.  Campbell's Law is well known -- but I certainly didn't know about it.  No one in my school or my district said, "You know, because these tests have so much tied to them, they don't tell us much about what kids know."  Instead, everyone told us we had to do a better job of preparing our students to pass the tests.  As if that's why we became teachers.

A month or so ago, I attended (and walked out of) a staff meeting that was purportedly about how to teach our students to answer Open Response questions on the state's high-stakes test.  I thought it might be useful; I had noticed that my students weren't so good at reading a question, understanding what it was asking, and distilling what they knew about the answer into a few sentences.  I thought I might get a few ideas about how to help kids write more clearly about what they knew.

It turned out to be a presentation about tricks that help kids get higher scores on Open Response questions, not how to write about what you know in response to a question.  The speaker told us important details about how the questions are scored, including the fact that the organization of the response isn't scored at all.  All the scorers look for is the right content, so students don't need to worry about writing topic sentences or spelling.  "This is just about scoring better on Open Response," she said, "not about being better writers."

Please bear in mind: this woman was brought to our school by our school district.  This workshop was a version of one she had attended put on by the Department of Elementary and Secondary Education -- yes, the same people who bring us our state test.  So, they make the test, they create the benchmarks, they score the test -- and, they teach us how to help our kids do better on the test.  Do they want to measure what our students know about reading and writing?  Or do they just want better test scores?

I recently took an educational test -- the GRE.  And I did a lot of test prep for it.  I reviewed a lot of math concepts, things I once knew but have since forgotten.  Did I learn them deeply, in such a way that I still know them now, three months later?  No.  I memorized a few formulas and a lot of tricks, including the fact that, for example, the GRE mostly uses 3-4-5 triangles and 5-12-13 triangles.  (Doesn't that sound vaguely familiar from geometry?)
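(It should: those are Pythagorean triples, whole-number side lengths that satisfy $a^2 + b^2 = c^2$:

$$3^2 + 4^2 = 9 + 16 = 25 = 5^2, \qquad 5^2 + 12^2 = 25 + 144 = 169 = 13^2$$

Memorize the two common triples and you can skip the theorem entirely on test day.)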

Was I cheating?  Not technically.  Did I do well on the test?  Yes.  Did my performance indicate what I really know about math and can use on a daily basis?  Absolutely not.

I'm not advocating that we keep using educational tests, with their somewhat arcane language and strange scoring rules, and don't prepare students for them at all.  The risk we run then is that students' test scores will underestimate what they know and can do (which surely happens now as well).  But it does seem like a perverted system when we put so much of our teaching time and energy into helping kids beat the tests.  All this talk about extending the school day and the school year -- while students in grades 3 and up can spend up to 30 school days a year taking tests (not counting all the days they spend prepping for them).  Talk about lost learning time.

Ten years ago, I started out as a teacher who thought "standards" was a dirty word and "testing" was worse.  Over the years, my views became somewhat more mainstream, as standards became such a fact of teaching that we couldn't imagine schools without them and testing more and more determined our fate.  Now I'm on my way out of the classroom, at least for a year, and I'm coming back to where I started -- disillusioned by arbitrary standards and test results that tell us more about a student's socioeconomic background and test-taking smarts than they do about what she really knows.