Paying Students May Raise Test Scores, But The Lesson Is Not Over
(Image: Mai Ly Degnan for NPR)
Let's pretend I asked you to run a mile as fast as you can.
Now let's pretend I asked you to run a mile as fast as you can, and if you broke nine minutes, you'd get $90.
Which mile do you think would be faster?
A new study suggests that students taking a test behave like you or me: They do better with a little incentive. Dollars and cents, that is.
Jeff Livingston, the lead author of the study, is a professor of economics at Bentley University in Waltham, Mass. He also happens to be the husband of a high-school teacher, which makes him privy to a lot of discussion about the impact of high-stakes tests on the classroom.
"I hear my wife and other teacher friends express concerns all the time about teacher evaluation systems which use standardized tests as part of the metric," he says. "They constantly worry that such tests do not accurately measure what their students have learned."
Livingston set up a randomized, controlled trial. He reasoned that, if test performance increased substantially with a cash incentive, that would mean that typical standardized tests, given without the bonus, don't measure what students really know.
His subjects were students in nine middle and elementary schools in Chicago Heights, Ill. They were judged, based on previous performance, to be at risk of scoring below the passing mark on state reading and math tests.
The students were given a baseline test, then offered a tutor for about nine weeks. In various conditions of the study, a $90 reward was offered to children, their parents or their tutors, or was split among them.
At the end of the trial period, students took a "probe": a test with about 20 questions. Their tutors repeatedly reminded them of the reward, including right before the test.
The test was given within a week of an official district test covering the same material, to which no incentives were attached.
Sure enough, students scored "substantially better" on the tests for which they, their tutor or their parent stood to gain. The effect size was relatively large, between 0.3 and 0.5 standard deviations.
The incentives had the most consistent impact on the easiest exam questions. This suggested that the improvement came from students simply trying a bit harder while taking the test and maybe double-checking their answers.
So, what does this study really tell us?
There's a whole body of research on the idea of paying students to work harder in school and get better grades. Harvard's Roland G. Fryer Jr. is most associated with this work. His research shows a range of outcomes, but the effects are quite small on average.
Steven Levitt, of the University of Chicago and Freakonomics fame, has also done an experiment showing test score improvement with immediate rewards. But again, this seems to work best as a short-term intervention, not something that can improve school performance over the long haul.
In fact, psychology research suggests that paying students to do well in school could be counterproductive. It creates what is called "extrinsic," or external, motivation, which can paradoxically reduce students' intrinsic, or internal, motivations.
Douglas Harris, the director of the Education Research Alliance for New Orleans, has conducted his own research on financial incentives in education. He says, "In general, I'd be leery of drawing real-world implications from carefully crafted experiments."
Livingston did intend his study to have a real-world implication, though. Remember, he set out to investigate whether high-stakes tests are accurate measures of what students know — for the purposes of evaluating teachers as well as students.
Value-added measurement of teacher performance is a pretty hot topic in education policy these days.
A teacher is currently suing U.S. Education Secretary John B. King Jr., formerly the top education official in New York State, over what she argues is the unfairness of these value-added measures. A New York State Supreme Court judge ruled in her favor earlier this month, calling her rating "arbitrary" and "capricious."
Does the result that students respond to incentives on standardized tests undermine the premise of rating teachers based on changes in test scores?
Harris says, not necessarily. Remember, value-added is based on comparing students' scores on two tests taken at different times. If the students aren't given any special incentive on either test, then any bias in the results would at least partly cancel out.
But, says Harris, the basic result of Livingston's study does suggest something we have a lot of evidence about: the importance of student motivation. It's an "interesting angle" on the notion that "student scores are driven by factors beyond their own skills and knowledge."