University Of Chicago Researchers Think They’ve Built A Better Pre-Crime Mousetrap

By Josephine J. Romero

Jul 28, 2022

from the better-guesswork-equals-better-policing? dept

Here are just two of the many things the Securities and Exchange Commission forbids investment companies from putting in their marketing literature:

(B) Representations implying that future gain or income may be inferred from or predicted based on past investment performance; or

(C) Portrayals of past performance, made in a manner which would imply that gains or income realized in the past would be repeated in the future.

No one’s policing police tech with as much zeal, because that’s basically the entirety of predictive policing programs: the assumption that past crime data can project where future crimes are likely to occur.

Predictive policing programs, for the most part, combine garbage data generated by biased policing efforts with proprietary software to generate “heat maps” or “area voted most likely to contain a future crime” or whatever to give law enforcement agencies guidance on how to best deploy their limited resources.

The problem isn’t necessarily the software. But even if it’s robust as fuck, it’s still going to reflect the bias inherent in the raw data. Areas where minorities live tend to be over-policed. Minorities are arrested at rates far exceeding their share of the population. Years of overt racism have created skewed data sets that over-represent victims of systemic bias. Predictions based on that data will only produce more of the same racist policing. But this time it will look like science, rather than cops rousting Black kids just because they can.

Not only is predictive policing a tech-based recycling of decades of bad ideas, it never seems to deliver the crime reduction and community-based policing that advocates of these systems claim deployment will lead to.

Someone (well, several someones) claim they’ve finally gotten predictive policing right.

Scientists from the University of Chicago have developed a new algorithm that can predict future crime a week in advance with about 90% accuracy, and within a range of about 1,000 feet.

It does so by learning patterns from public data on violent and property crimes.

“We report an approach to predict crime in cities at the level of individual events, with predictive accuracy far greater than has been achieved in past,” the authors write.

Sounds great, but what is really being celebrated here? This tool may tell cops what they already know (or believe), but it’s not really a solution. It suggests enforcement and patrols should be concentrated where crimes are likely to occur simply because that’s where crimes have occurred in the past. Being right 90% of the time doesn’t mean more crimes will be prevented. Nor does it mean more cases will be closed. Software with better accuracy can’t change how cops respond to crimes. It can only put a few more cops in certain areas and hope that this somehow produces positive results.

Besides the obvious problem of declaring an area to be the host of future crimes (making everyone in the area a possible suspect until a crime is committed), there’s the problem of bias introduced by the data set. These researchers claim they can mitigate this omnipresent problem of predictive policing.

Somehow this helps?

It divides the city into “spatial tiles” roughly 1,000 feet across, and predicts crime within these areas.

Previous models relied more on traditional neighborhood or political boundaries, which are subject to bias.
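The tiling itself is a simple idea. Here’s a minimal sketch of what bucketing incidents into ~1,000-foot squares looks like — illustrative only, not the researchers’ code, and the coordinates are made-up assumptions:

```python
# Minimal sketch (NOT the researchers' code): bucketing incident
# coordinates into ~1,000-foot square "spatial tiles" instead of
# neighborhood or political boundaries. The incident positions here
# are hypothetical and assumed to be already projected to feet.
from collections import Counter

TILE_FEET = 1000  # approximate tile width described in the article

def tile_of(x_feet, y_feet):
    """Map a planar (x, y) position in feet to a tile index."""
    return (int(x_feet // TILE_FEET), int(y_feet // TILE_FEET))

# Hypothetical incident positions.
incidents = [(120, 4500), (980, 4990), (1500, 300), (130, 4600)]

# Count incidents per tile -- the raw input a grid-based model
# would learn patterns from.
counts = Counter(tile_of(x, y) for x, y in incidents)
print(counts)
```

The point of the grid is only that tile edges don’t follow neighborhood lines; as the article notes, it does nothing to clean up what’s inside the counts.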

That may prevent snap judgments when heat maps are first seen, but it seems like something better suited to, say, setting up Congressional districts than to preventing garbage data from generating garbage results. This only changes how the end results are displayed. It doesn’t somehow remove the bias from the underlying data.

And, for all its accuracy, the researchers acknowledged the improved software can’t really do much to reduce biased policing.

The research team also studied the police response to crime by analyzing the number of arrests following incidents, and comparing those rates among different neighborhoods.

They found that when crime levels in wealthier areas increased, that resulted in more arrests. But this did not happen in disadvantaged neighborhoods, suggesting an imbalance in police response and enforcement.
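That audit amounts to a simple comparison. As a rough illustration (the numbers below are invented, not the study’s data), it’s just arrest counts divided by incident counts, area by area:

```python
# Illustrative sketch with made-up numbers (NOT the study's data):
# comparing arrest rates following incidents across neighborhoods
# to surface a possible imbalance in police response.
incidents_by_area = {"wealthier": 200, "disadvantaged": 200}
arrests_by_area = {"wealthier": 90, "disadvantaged": 40}

rates = {
    area: arrests_by_area[area] / incidents_by_area[area]
    for area in incidents_by_area
}
for area, rate in rates.items():
    print(f"{area}: arrest rate {rate:.0%}")
```

Similar incident counts with very different arrest rates is exactly the kind of gap the authors say their tool can expose.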

But what if it wasn’t built for cops, but rather for the public and police oversight entities? Perhaps this is how the software should be used.

“We acknowledge the danger that powerful predictive tools place in the hands of over-zealous states in the name of civilian protection,” the authors conclude, “but here we demonstrate their unprecedented ability to audit enforcement biases and hold states accountable in ways inconceivable in the past.”

That sounds like a better use of predictive policing tech: tracking police enforcement activity rather than subjecting citizens to cops who treat everyone in a certain area like a suspect just because a computer told them criminal acts were in the forecast. But no government is willing to spend millions holding officers accountable or providing the public with better insight into law enforcement activities. Those millions have already been earmarked to buy cops more tech under the dubious assumption that past performance is indicative of future results.

Filed Under: bias, police, precrime, predictive policing
