About a decade ago, police were praising the development of data-driven predictive policing, with tools such as PredPol becoming popular among police departments. These tools use machine learning, and more recently deep learning with neural networks, to predict where and when crime is likely to occur, based on geocoded crime data coupled with other temporal GIS datasets. This helped police focus their efforts, and it generally resulted in more arrests in the areas where patrols increased. Although data do show a reduction in crime in some areas of cities such as Los Angeles, Chicago, and Santa Cruz, others have indicated that these tools disproportionately target African American and minority neighborhoods. This has prompted a re-evaluation of prediction software that some researchers and activists say has racial bias built into it.
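The mechanics behind these forecasts can be simpler than the term "machine learning" suggests. As a rough illustration only (not PredPol's actual algorithm, whose details are proprietary), a minimal hotspot model might snap historical incident coordinates to a spatial grid and flag the cells with the most past incidents; the coordinates, cell size, and scoring below are all invented for the sketch:

```python
from collections import Counter

# Hypothetical geocoded incidents as (latitude, longitude) pairs.
# Real systems combine such point data with temporal GIS layers.
incidents = [
    (34.05, -118.25), (34.05, -118.26), (34.06, -118.25),
    (34.05, -118.25), (33.99, -118.40), (34.05, -118.25),
]

def to_cell(lat, lon):
    """Snap a coordinate to a grid cell (~1 km: two decimal places)."""
    return (round(lat, 2), round(lon, 2))

# Score each grid cell by its historical incident count.
counts = Counter(to_cell(lat, lon) for lat, lon in incidents)

# "Predicted hotspots" are simply the highest-scoring cells.
hotspots = counts.most_common(2)
print(hotspots)
```

Because the "prediction" is driven entirely by where past incidents were recorded, extra patrols in flagged cells tend to generate more recorded incidents there, which feed into the next forecast; this feedback loop is one source of the bias concerns raised by critics.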
With the recent police killing of George Floyd, social and political pressure has led police departments to reexamine their tactics, including their software tools. While cities such as Chicago and Los Angeles may have scaled back some of their use of predictive policing tools in response, Santa Cruz, a city where predictive policing was pioneered about a decade ago, is now the first city in the United States to ban the technology outright, because such predictive tools generally direct greater policing toward African American and minority neighborhoods. Short of an outright ban, Pittsburgh has also halted its use of predictive policing for now; Carnegie Mellon University had worked to develop algorithms that predicted where crimes might occur around the city. One concern is that such tools are not always transparent about how areas requiring more policing are determined. In other words, there could be built-in bias in such tools that reinforces racial profiling or encourages its practice. Mathematicians have also written recently in Notices of the American Mathematical Society, a widely read journal in the field, to encourage other mathematicians to stop working with police departments on developing predictive policing tools. They note that these tools have been shown to be biased and can reinforce racial profiling and bias in policing practices.
Facial recognition software has also come under fire; it is often used alongside predictive policing to estimate areas where crime could occur. In an upcoming book by Springer, authors claim that their facial recognition approach can even predict whether a person is likely to become a criminal, potentially long before any crime is committed. Thus, simply having someone’s picture could allow police to follow or profile individuals deemed likely criminals. Over 2,000 AI experts have expressed grave concerns about this upcoming publication, particularly because it advocates the use of facial recognition for predictive purposes and could accuse people of crimes before they are even committed. The anticipation is that these tools, too, will have a racial bias. The experts also point out that existing studies cast doubt on the results of this approach, calling into question not only the ethical considerations of the work but also its scientific merits.
The recent reversal against predictive policing represents a remarkable turnaround for such algorithms. It also opens the debate around the ethics of these tools. A cursory glance at academic publications and older news coverage generally shows the positive embrace these technologies once enjoyed, but events since the killing of George Floyd have prompted a reexamination of policing strategies, including the use of predictive policing. Does this mean predictive policing will disappear? Most likely not entirely, as many police departments will argue that these tools still deliver gains in reducing overall crime. What is likely, however, is that these tools will be inspected more closely for bias, including whether crime prediction factors in race, and that policing will be viewed more holistically, with other factors such as income and racial inequality having to be addressed in order to reduce crime in a long-term and sustainable fashion. Policing is likely to change in many areas as public and political pressure mounts, which will likely mean a reevaluation of how technology, including artificial intelligence, is used in policing efforts and of its role in creating public good.
 For more on Pittsburgh and its suspension of predictive policing, see: https://thecrimereport.org/2020/06/24/bias-concerns-prompt-pittsburgh-to-halt-predictive-policing/.
 For more on the mathematicians calling on other mathematicians to stop working with police departments on predictive policing, see: https://www.salon.com/2020/06/24/mathematicians-urge-peers-to-stop-working-on-racist-predictive-policing-technology/.
 For more on facial recognition software that could be used to predict who becomes a criminal, see: https://techcrunch.com/2020/06/23/ai-crime-prediction-open-letter-springer/.
 For holistic policing practices some are calling for, see: https://www.marketplace.org/2020/06/10/community-policing-social-services-mental-health-drug-use-housing-law-enforcement/.