Photo of Alex Chohlas-Wood.
Photo credit: Simon Luethi
I'm a research fellow at the Harvard Kennedy School and the executive director of the Harvard Computational Policy Lab (CPL), which I lead from home in San Francisco. My work focuses on using technology and data science to support criminal legal reform. I received a PhD in computational social science from Stanford, and previously served as the director of analytics at the New York City Police Department. Here's what I've been up to recently:
  • Equitable algorithms

    The last few years have seen an explosion in research on how to constrain algorithms to avoid biased decision-making. My colleagues and I wrote a short guide for Nature Computational Science that synthesizes this research, illustrates drawbacks to several widely cited approaches, and outlines practical steps people can take in their quest for equitable algorithms. In a related working paper, we expand on how policymakers can use algorithms to make “fair” tradeoffs when resources are limited.
  • Pretrial nudges

    Failing to appear in court can land a person in jail and cause them to lose their job or housing. But many people fail to appear (FTA) simply because they forget about their court date. In a randomized controlled trial at the Santa Clara Public Defender’s Office, we found that text message reminders reduced FTA-related jail involvement by over 20%. Our findings extend previous studies which found that text message reminders can help people show up to court. In future work with Santa Clara, we hope to test whether reminder variants can help different types of clients, and whether financial assistance can help clients overcome barriers to court attendance.
  • Blind charging

    With colleagues at CPL, I helped design and implement a race-blind charging algorithm. Our open-source tool automatically masks race-related information in incident narratives to reduce the influence of race on charging decisions, as I describe on Twitter here. We piloted the algorithm at the San Francisco District Attorney’s office, and it’s now in daily use at the Yolo County District Attorney’s office, near Sacramento. Blind charging is the subject of a new law in California, AB 2778, that mandates race-blind charging decisions across the state by 2025. I plan to study the impacts of blind charging with a large-scale randomized controlled trial in 2024 and 2025. Blind charging has been covered in numerous press articles.
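The core idea behind blind charging can be sketched in a few lines. This is an illustrative toy only, not the actual open-source tool, which handles far more than explicit race descriptors (the term list, placeholder token, and function name below are all hypothetical):

```python
import re

# Hypothetical, highly simplified sketch of the masking idea: replace explicit
# race descriptors in an incident narrative with a neutral placeholder. The
# real tool must also handle names, neighborhoods, and other proxies for race.
RACE_TERMS = re.compile(
    r"\b(white|black|hispanic|latino|latina|asian)\b",
    flags=re.IGNORECASE,
)

def mask_narrative(text: str) -> str:
    """Return the narrative with explicit race descriptors masked."""
    return RACE_TERMS.sub("[RACE]", text)

print(mask_narrative("Officer observed a Black male leaving the store."))
# → Officer observed a [RACE] male leaving the store.
```

A prosecutor reviewing the masked narrative can then make an initial charging decision without seeing race-related details, which is the behavior AB 2778 mandates statewide.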
  • Disparate impact in policing

    In a paper, my colleagues and I described how data analysis can identify and measure racially disparate impacts in police stop policies, complementing other research which investigates whether bias exists in individual stops. We gave applied examples from a handful of major cities across the U.S., including Nashville, New York, Chicago, and Philadelphia. Our paper was published in the University of Chicago Law Review, and I wrote a Twitter thread about it here.
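As a toy illustration of the kind of policy-level analysis described above (not the paper's actual methodology, and with entirely made-up numbers), one simple starting point is comparing per-capita stop rates across groups:

```python
# Illustrative sketch: compare per-capita stop rates across two hypothetical
# groups. A large ratio is one signal of a potentially disparate impact,
# which a fuller analysis would then probe (e.g., against benchmarks).
stops = {"Group A": 12_000, "Group B": 4_000}       # stops per year (made up)
population = {"Group A": 60_000, "Group B": 40_000}  # residents (made up)

rates = {g: stops[g] / population[g] for g in stops}
ratio = rates["Group A"] / rates["Group B"]

print(rates)                 # per-capita stop rates by group
print(f"ratio: {ratio:.1f}")  # → ratio: 2.0
```

A rate ratio like this measures a policy's aggregate impact on each group, which is a different question from whether any individual officer's stop was biased.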
  • COVID-19 oriented reforms

    In a Washington Post piece, I argued (with colleagues from CPL) that the dramatic—but temporary, and patchwork—criminal justice reforms enacted in response to COVID-19 should be made permanent and expanded across the country.
  • Risk assessment instruments

    I wrote a briefer on the potential advantages and drawbacks of using risk assessment instruments in criminal justice settings for the Brookings Institution’s “AI and Bias” series on fairness in algorithms.
  • Patternizr

    I helped design and deploy a tool called Patternizr that helps speed up the investigation of serious historical crimes. My colleague and I described our approach in a paper in the INFORMS Journal on Applied Analytics, including a first-of-its-kind analysis from a police department demonstrating that the tool does not disproportionately recommend suspects of any particular race group. Patternizr was featured in several articles.
  • Nashville police department

    I worked with colleagues at CPL on a project with the city of Nashville to demonstrate that traffic stops were an ineffective tool for fighting crime. Since the release of our report, the city’s police department has reduced the use of traffic stops by 70%—an almost 90% reduction from their peak.
  • Auditron

    I designed an algorithm for the NYPD that looked for crimes that were misclassified as felonies or misdemeanors. Likely misclassifications were sent to an internal team for auditing and correction. I presented my approach at NYU’s Tyranny of the Algorithm? Predictive Analytics & Human Rights conference.
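    The audit-flagging step above can be sketched as follows. This is a hypothetical illustration of the general idea, not the NYPD's actual system; the field names, threshold, and the assumption of a pre-trained classifier score are all mine:

```python
# Hypothetical sketch: assume a trained classifier has already scored each
# complaint with a probability of being a felony. Records whose recorded
# class strongly disagrees with that score get queued for human audit.
def flag_for_audit(records, threshold=0.9):
    """Return records whose recorded class contradicts a confident model score."""
    flagged = []
    for r in records:
        p_felony = r["model_p_felony"]  # assumed output of a trained model
        if r["recorded_class"] == "misdemeanor" and p_felony >= threshold:
            flagged.append(r)  # recorded as misdemeanor, looks like a felony
        elif r["recorded_class"] == "felony" and p_felony <= 1 - threshold:
            flagged.append(r)  # recorded as felony, looks like a misdemeanor
    return flagged

records = [
    {"id": 1, "recorded_class": "misdemeanor", "model_p_felony": 0.97},
    {"id": 2, "recorded_class": "felony", "model_p_felony": 0.85},
    {"id": 3, "recorded_class": "felony", "model_p_felony": 0.02},
]
print([r["id"] for r in flag_for_audit(records)])  # → [1, 3]
```

    Keeping a human audit team in the loop, rather than reclassifying records automatically, means the model only surfaces candidates for review.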