Comparing Conventional and Machine-Learning Approaches to Risk Assessment in Domestic Abuse Cases
We compare predictions from a conventional protocol-based approach to risk assessment with those based on a machine-learning approach. We first show that the conventional predictions are less accurate than, and produce negative prediction errors at rates similar to, a simple Bayes classifier that makes use only of the base failure rate. Machine-learning algorithms based on the underlying risk assessment questionnaire do better under the assumption that negative prediction errors are more costly than positive prediction errors. Machine-learning models based on two-year criminal histories do better still. Indeed, adding the protocol-based features to the criminal histories adds little to the predictive power of the model. We suggest using the predictions based on criminal histories to prioritize incoming calls for service, and devising a more sensitive instrument to distinguish true from false positives that result from this initial screening.
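The benchmark described above, a Bayes classifier that uses only the base failure rate, can be sketched as follows. The base rate and cost values here are hypothetical illustrations, not figures from the paper: such a classifier ignores case-level features and predicts the class that minimizes expected misclassification cost.

```python
# A base-rate-only Bayes classifier under asymmetric error costs.
# It predicts "failure" (1) iff the expected cost of a negative
# prediction exceeds that of a positive one:
#   p * c_fn > (1 - p) * c_fp
# where p is the base failure rate, c_fn the cost of a false
# negative, and c_fp the cost of a false positive.

def base_rate_classifier(p: float, c_fn: float, c_fp: float) -> int:
    """Return 1 (predict failure) or 0 (predict no failure)."""
    return 1 if p * c_fn > (1 - p) * c_fp else 0

# Hypothetical base rate of 15%:
p = 0.15

# With symmetric costs, the classifier predicts the majority class (0).
pred_symmetric = base_rate_classifier(p, c_fn=1.0, c_fp=1.0)

# If false negatives are deemed ten times as costly, the same
# classifier flips to predicting failure for every case.
pred_asymmetric = base_rate_classifier(p, c_fn=10.0, c_fp=1.0)
```

Because this benchmark uses no case-level information, any feature-based model (questionnaire items or criminal histories) must beat it to demonstrate predictive value, which is the comparison the paper carries out.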
We thank Terence Chau for excellent research assistance. Special thanks also to Ian Hopkins, Rob Potts, Chris Sykes, as well as Gwyn Dodd, Emily Higham, Peter Langmead-Jones, Duncan Stokes and many others at the Greater Manchester Police for making this project possible. All findings, interpretations, and conclusions herein represent the views of the authors and not those of Greater Manchester Police, its leadership, its members, or the National Bureau of Economic Research. We thank Richard Berk, Ian Wiggett, and three anonymous referees for helpful comments. No financial support was received for this project.
Jeffrey Grogger, Sean Gupta, Ria Ivandic & Tom Kirchmaier, 2021. "Comparing Conventional and Machine-Learning Approaches to Risk Assessment in Domestic Abuse Cases," Journal of Empirical Legal Studies, vol. 18, no. 1, pp. 90-130.