Researchers at the University of Vermont have designed a new mathematical approach to judge when the gerrymandering of political districts goes beyond fairness and into the manipulation of voting. A team led by UVM mathematician Gregory S. Warrington published the new tool in the Election Law Journal under the title “Quantifying Gerrymandering Using the Vote Distribution.” But could gerrymandering itself soon become more difficult to pull off? Zack Stanton explores that question in an article for Politico. Here is an excerpt:
“While badly shaped districts are a fairly successful flag that somebody was trying to do something, they don’t really tell us what their agenda was, or whether it was nefarious or benign,” says Moon Duchin, a mathematician at Tufts University and an expert on gerrymandering. “Bad shapes are not necessarily bad, and good shapes are not necessarily good.”
For the past five years, Duchin has led Tufts’ Metric Geometry and Gerrymandering Group, a lab that has quietly upended conventional wisdom about how gerrymandering works by approaching the issue less as a political problem than as a mathematical one. As the country sprints into a new redistricting cycle, understanding redistricting in those terms has taken on new importance—especially in light of a controversial change to the Census Bureau data that will be used to draw the new district maps.
This year, for the first time, the Census Bureau has added random noise to its data that makes it slightly inaccurate at the smallest, most zoomed-in level, but accurate at an aggregate, wide-angle view. The approach, known as “differential privacy,” aims to protect the anonymity of census respondents amid a glut of third-party online data that could otherwise make it possible to personally identify census respondents. The move has prompted a wave of criticism that redistricting based on those “noisy” numbers will be inaccurate.
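The core idea behind differential privacy can be illustrated with the classic Laplace mechanism: add noise drawn from a Laplace distribution to each small count, so that no individual record can be inferred, while sums over many counts stay close to the truth. The sketch below is only a toy illustration of that principle; the Census Bureau’s actual algorithm (its “TopDown” disclosure-avoidance system) is far more elaborate, and the block counts, epsilon value, and helper names here are purely hypothetical.

```python
import math
import random

random.seed(42)  # reproducibility for this demo only

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

# Hypothetical per-block population counts (illustrative only).
block_counts = [random.randint(0, 100) for _ in range(1000)]

epsilon = 1.0          # privacy-loss budget: smaller means more privacy
scale = 1.0 / epsilon  # a single count query has sensitivity 1
noisy_counts = [c + laplace_noise(scale) for c in block_counts]

# Each block is perturbed, but the statewide total barely moves:
relative_error = abs(sum(noisy_counts) - sum(block_counts)) / sum(block_counts)
print(f"relative error of the aggregate: {relative_error:.5f}")
```

This is exactly the trade-off the article describes: the published number for any one census block is “slightly inaccurate,” yet because the independent noise terms largely cancel when summed, the wide-angle totals used to apportion and draw districts remain close to the true values.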