Don't Grade Teachers With a Bad Algorithm
For more than a decade, a glitchy and unaccountable algorithm has been making life difficult for America's teachers. The good news is that its reign of terror might finally be drawing to a close.
I first became acquainted with the Value-Added Model in 2011, when a friend of mine, a high school principal in Brooklyn, told me that a complex mathematical system was being used to assess her teachers -- and to help decide such important matters as tenure. I offered to explain the formula to her if she could get it. She said she had tried, but had been told “it’s math, you wouldn’t understand it.”
This was the first sign that something very weird was going on, and that somebody was avoiding scrutiny by invoking the authority and trustworthiness of mathematics. Not cool. The results have actually been terrible, and may be partly to blame for a national teacher shortage.
The VAM -- actually a family of algorithms -- purports to determine how much “value” an individual teacher adds to a classroom. It relies on standardized test scores, and holds teachers accountable for what’s called student growth, which comes down to the difference between how well students actually performed on a test and how well a predictive model “expected” them to do.
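In spirit, the arithmetic is simple. The sketch below illustrates the growth-score idea in a few lines of Python; the field names and the plain averaging are my own illustrative assumptions, since real VAM implementations are proprietary and vastly more complex (they layer regression models, demographic controls, and error adjustments on top of this basic idea).

```python
# Hypothetical sketch of the "student growth" idea behind VAM scores.
# Real VAM systems are proprietary and far more elaborate; the field
# names and the simple averaging here are illustrative assumptions only.

def growth_scores(students):
    """For each student: growth = actual test score - model-predicted score."""
    return [s["actual"] - s["predicted"] for s in students]

def value_added(students):
    """A teacher's score, in this toy version, is the average growth."""
    scores = growth_scores(students)
    return sum(scores) / len(scores)

classroom = [
    {"actual": 78, "predicted": 72},  # beat the prediction: +6
    {"actual": 65, "predicted": 70},  # fell short: -5
    {"actual": 90, "predicted": 88},  # slightly above: +2
]

print(value_added(classroom))  # average growth: (6 - 5 + 2) / 3 = 1.0
```

Note that everything hinges on the predicted scores: if the baseline predictions are wrong -- or were inflated by cheating in a prior year -- the "growth" attributed to a teacher is wrong too.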
Derived in the 1980s from agricultural crop models, VAM got a big boost from the education reform movements of Presidents Bush and Obama. Bush’s No Child Left Behind Act called for federal standards, and Obama’s Race to the Top program offered states more than $4 billion in federal funds in exchange for instituting formal teacher assessments. Many states went for VAM, sometimes with bonuses and firings attached to the results.
Fundamental problems immediately arose. The most notable, statistically speaking, was inconsistency: The same person teaching the same course in the same way to similar students could get wildly different scores from year to year. Teachers sometimes received scores for classes they hadn’t taught, or lost their jobs due to mistakes in code. Some cheated to raise their students' test scores, creating false baselines that could lead to the firing of subsequent teachers (assuming they didn’t cheat, too).
Perhaps most galling was the sheer lack of accountability. The code was proprietary, which meant administrators didn't really understand the scores and appealing the model's conclusions was next to impossible. Although economists studied such things as the effects of high-scoring teachers on students' longer-term income, nobody paid adequate attention to the system's effect on the quality and motivation of teachers overall.
Happily, the tide appears to be turning. In 2015, a revamp of No Child Left Behind, called the Every Student Succeeds Act, removed the federal funding incentives that had supported the algorithm. In May 2016, a Long Island teacher named Sheri Lederman won a lawsuit against New York State in which a judge deemed the state's VAM-based rating system “arbitrary and capricious.” And earlier this month, a group of teachers in Houston, where VAM had been used for firings and bonuses, won a lawsuit in which they successfully argued that the algorithm's secretive and complex nature had effectively denied them due process.
VAM expert Audrey Amrein-Beardsley told me that the Houston decision, pertaining to the country's seventh-largest school district, might have a "snowball effect," influencing the outcome of other lawsuits across the country. Let’s hope so, because teachers deserve better.
This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.
To contact the author of this story:
Cathy O'Neil at email@example.com
To contact the editor responsible for this story:
Mark Whitehouse at firstname.lastname@example.org