An AI tool for helping police make custodial decisions

Summary:

Police employed an ML algorithm to classify individuals arrested on suspicion of an offense as high, medium, or low risk of future criminal activity. This classification determined whether they were charged or diverted to a rehabilitation program.

Client: 

Durham Constabulary, England

Problem Statement: 

Durham Constabulary aimed to reduce reoffending rates and alleviate pressure on the criminal justice system. Recognizing the high recidivism rates among petty offenders, the Constabulary sought innovative and efficient methods to manage these cases with their limited resources. Algorithmic tools promised to enhance decision-making and predictive capability by making better use of both internal and external data, including intelligence.


Results: 

  • Assessed 12,200 individuals a total of 22,265 times between 2016 and 2021. 
  • 98% accuracy in identifying low-risk individuals.
  • 88% accuracy in identifying high-risk individuals.
  • Reduced risk of dangerous errors (underestimating the likelihood of reoffending) compared to cautious errors (overestimating the likelihood of reoffending); see the sketch after this list.
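
One way to read these headline figures is as the share of forecasts in each risk band that proved correct over the two-year follow-up, with "dangerous" and "cautious" errors counted from opposite halves of the confusion matrix. The Python sketch below illustrates that reading; the counts are invented so that they reproduce the 98% and 88% figures above and are not Durham Constabulary's published validation data.

```python
import numpy as np

# Hypothetical confusion matrix: rows are the actual outcome over the
# two-year follow-up, columns are the HART-style forecast.
# The counts are invented to be consistent with the headline figures above;
# they are NOT Durham Constabulary's validation data.
labels = ["low", "medium", "high"]
confusion = np.array([
    [490,  60,   5],   # actually low risk
    [  8, 400,  55],   # actually medium risk
    [  2,  20, 440],   # actually high risk
])

# Share of forecasts in each risk band that proved correct (column-wise).
for j, label in enumerate(labels):
    correct = confusion[j, j] / confusion[:, j].sum()
    print(f"{label}-risk forecasts correct: {correct:.0%}")   # 98% / 83% / 88%

# Dangerous errors underestimate risk (forecast band below the actual band);
# cautious errors overestimate it (forecast band above the actual band).
dangerous = sum(confusion[i, j] for i in range(3) for j in range(3) if j < i)
cautious  = sum(confusion[i, j] for i in range(3) for j in range(3) if j > i)
print(f"dangerous errors: {dangerous}, cautious errors: {cautious}")  # 30 vs 120
```

Under this reading, the stated preference for cautious over dangerous errors shows up as a lower-left triangle (risk underestimated) that is deliberately kept much smaller than the upper-right triangle (risk overestimated).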

AI Solution Overview:

In 2012, Durham Constabulary started developing the Harm Assessment Risk Tool (HART), an AI decision support system aimed at helping custody officers decide whether a suspect should be detained or referred to a rehabilitation program. This tool was developed internally in collaboration with the University of Cambridge’s Centre for Evidence-based Policing.

HART was trained on 104,000 histories of individuals previously arrested and processed in Durham custody suites over a five-year period, with a two-year follow-up for each custody decision. Employing a method called “random forests,” the model combines the votes of a large ensemble of decision trees, each built from a different sample of the data, to evaluate numerous combinations of predictor values, focusing mainly on the suspect’s criminal history, age, gender, and geographical area.
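
The published descriptions identify the method and the main predictor families but not the exact feature encodings or cost mechanism, so the following is a minimal sketch rather than a reconstruction of HART: a scikit-learn random forest over synthetic custody records with invented column names (prior_offences, age_at_arrest, gender, postcode_area), using class weights as a rough stand-in for the cost-sensitive design that prefers cautious errors to dangerous ones.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Synthetic custody records for illustration only; the real predictors,
# encodings, and volume (~104,000 custody events) are not reproduced here.
rng = np.random.default_rng(0)
n = 2_000
records = pd.DataFrame({
    "prior_offences": rng.poisson(2.0, n),   # criminal-history count (invented)
    "age_at_arrest": rng.integers(18, 70, n),
    "gender": rng.integers(0, 2, n),         # already label-encoded
    "postcode_area": rng.integers(0, 50, n), # coarse geography code (invented)
})
# Hypothetical two-year follow-up outcome: 0 = low, 1 = medium, 2 = high risk.
outcome = rng.choice([0, 1, 2], size=n, p=[0.5, 0.3, 0.2])

# A random forest grows many decision trees, each on a bootstrap sample of the
# data and random subsets of the predictors, and combines their votes. The
# class weights below are only a rough stand-in for a cost-sensitive design
# that penalises missed high-risk cases more than cautious over-predictions.
model = RandomForestClassifier(
    n_estimators=500,
    class_weight={0: 1.0, 1: 1.0, 2: 5.0},
    random_state=0,
)
model.fit(records, outcome)

# Forecast the risk band for a new custody event (hypothetical values).
new_case = pd.DataFrame([{
    "prior_offences": 4, "age_at_arrest": 23, "gender": 1, "postcode_area": 12,
}])
print(model.predict(new_case))        # e.g. [2] -> high risk
print(model.predict_proba(new_case))  # per-class vote shares
```

Skewing the class weights toward the high-risk class pushes borderline cases upward rather than downward, which is one simple way to express the stated preference for overestimating rather than underestimating risk; HART’s actual cost ratios and validation procedure are documented in the references below.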

The ML algorithm enabled Durham police to classify suspects as high, medium, or low risk of future criminal activity, determining whether they should be charged or diverted to the Checkpoint rehabilitation program. Medium-risk individuals were eligible for the Checkpoint program and could avoid prosecution upon successful completion. High-risk individuals were not eligible for diversion.

HART faced significant criticism in early 2018 for incorporating race- and class-based stereotypes in its data inputs. In 2022, Durham Constabulary discontinued using HART due to the substantial resources required to continually refine and update the model to meet ethical and legal standards. That outcome ran counter to the initial rationale for adopting such predictive and analytical systems: to save costs, time, and resources and to improve efficiency.

Durham Constabulary has not disclosed the number of individuals diverted to the Checkpoint program or those charged and prosecuted following a HART assessment.

References: 

  1. Helping police make custody decisions using artificial intelligence. https://www.cam.ac.uk/research/features/helping-police-make-custody-decisions-using-artificial-intelligence
  2. Durham Constabulary’s AI decision aid for custody officers. A case study on the use of AI in government. https://www.centreforpublicimpact.org/assets/documents/ai-case-study-criminal-justice.pdf
  3. FOI reveals over 12,000 people profiled by flawed Durham police predictive AI tool. https://www.fairtrials.org/articles/news/foi-reveals-over-12000-people-profiled-by-flawed-durham-police-predictive-ai-tool/
  4. Algorithmic risk assessment policing models: Lessons from the Durham Constabulary HART model. https://shura.shu.ac.uk/17462/1/Executive%20summary%20Algorithmic%20Intelligence%20Analysis%20-%20Final%20version.pdf

Industry: Public Services

Vendor: The University of Cambridge’s Centre for Evidence-based Policing

Client: Durham Constabulary

Publication Date: 2017