The Dangers of Predictive Policing
In recent years, AI-powered predictive policing has moved out of the realm of science fiction and into reality. While this development brings potential opportunities for improved policing, it also carries significant risks. In this project, we explored and analyzed some of these risks and developed practical solutions for mitigating them.
Publication
We are excited to announce that we will soon be able to share the results of our most recent project. Work on the project deliverables is now finished, and the project is currently under external review. As soon as the review is completed, we will publish an initial report on our findings, with key results to follow in the coming months.
Bias and feedback loops
The development of more powerful and accessible artificial intelligence has made the widespread adoption of predictive policing practices possible. While some have celebrated this development, others have met it with scepticism and scrutiny. Watchdogs and researchers have repeatedly accused the new policing algorithms of bias and unfairness.
To better understand the risks associated with predictive policing, we undertook a comprehensive analysis of the technology powering these new practices. Through this analysis, we identified and mapped how the examined algorithms create unfairness through hidden bias and the generation of feedback loops.
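As a very simplified illustration of the feedback-loop mechanism (a toy sketch under our own assumptions, not a reconstruction of the algorithms we examined): when patrols are allocated based on recorded crime, and crime is only recorded where patrols are present, a small initial skew in the data compounds over time.

```python
# Toy simulation of a predictive-policing feedback loop (illustrative only).
# Patrols are sent to the district with the most *recorded* crime, and crime
# is only recorded where patrols are present, so an initial skew in the
# records grows round after round regardless of the true crime rates.

def simulate_feedback_loop(initial_records, true_rates, rounds=10):
    """Return per-district recorded-crime counts after several patrol rounds."""
    records = list(initial_records)
    for _ in range(rounds):
        # Send all patrols to the district with the highest recorded crime.
        target = max(range(len(records)), key=lambda i: records[i])
        # Only the patrolled district has its incidents observed and recorded.
        records[target] += true_rates[target]
    return records

# Two districts with identical true crime rates, but district 0 starts with
# slightly more recorded crime. The allocation rule then concentrates all
# attention there, and only its records keep growing.
records = simulate_feedback_loop(initial_records=[5.0, 4.0],
                                 true_rates=[1.0, 1.0],
                                 rounds=10)
```

Even with identical true crime rates, the district that happens to start with more recorded crime receives all of the patrol attention, and the gap in the records widens every round.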
Measuring and Enforcing Fairness
Having identified key weaknesses in existing predictive policing technology that lead to increased bias and the generation of feedback loops, we developed a possible solution to these issues. The solution required a way of measuring and enforcing fairness on the algorithm.
By taking an interdisciplinary approach to the problem, combining traditional software development with modern criminological theory, we were able to create an innovative and effective solution.