Pre-crime is the use of a vast potpourri of information about everyday activities to try to predict and prevent future behavior. In “predictive” policing, computer algorithms identify signs of pre-crime in a realm in which we are all potential suspects. It recalls the state of affairs depicted in the 2002 movie “Minority Report,” in which psychic “precogs” discern which “criminals” to pursue before they commit a crime.

Hartford, CT is now using what some say looks an awful lot like pre-crime technology. "Like cities across the country, we've been grappling with ways to use this technology to make our residents safer and our communities stronger," Hartford Mayor Luke Bronin said in an interview with Vice News. "At the same time we're being very sensitive to concerns about civil liberties."

BriefCam, an Israeli-American video analytics company, provides the video search technology now being used in Hartford. BriefCam's software condenses hours of footage into compact summaries, referred to as "events." In a city like Hartford, where police have access to at least 700 surveillance cameras, the software reduces to minutes a review that would normally take days.

Law enforcement's foray into utilizing this type of technology is not without its critics. Local residents and state chapters of the ACLU are uneasy with police departments' use of technology for which they say the endgame seems disturbingly clear.

One example of pre-crime technology, currently employed in Fresno, CA, is described by the Washington Post:

"...officers raced to a recent 911 call about a man threatening his ex-girlfriend, a police operator in headquarters consulted software that scored the suspect’s potential for violence the way a bank might run a credit report.

The program scoured billions of data points, including arrest reports, property records, commercial databases, deep Web searches and the man’s social-media postings. It calculated his threat level as the highest of three color-coded scores: a bright red warning.

The man had a firearm conviction and gang associations, so out of caution police called a negotiator. The suspect surrendered, and police said the intelligence helped them make the right call — it turned out he had a gun."
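How such a score is assembled is proprietary, and the inputs and weights used in Fresno have not been published. Purely to illustrate the general idea of collapsing many risk signals into one of three color-coded tiers, here is a minimal sketch; every signal name, weight, and threshold in it is an assumption, not the vendor's:

```python
# Hypothetical sketch only: the scoring model used in Fresno is proprietary and
# its inputs and weights are not public. This just illustrates collapsing many
# risk signals into one of three color-coded tiers.

def threat_color(signals: dict) -> str:
    """Map a set of risk signals to a green/yellow/red tier."""
    # Assumed, illustrative weights; not the vendor's real ones.
    weights = {
        "firearm_conviction": 40,   # prior firearm conviction (0 or 1)
        "gang_association": 30,     # known gang ties (0 or 1)
        "violent_arrests": 10,      # per prior arrest for a violent offense
        "threatening_posts": 15,    # per flagged social-media post
    }
    score = sum(weights[k] * signals.get(k, 0) for k in weights)
    if score >= 60:
        return "red"
    if score >= 30:
        return "yellow"
    return "green"

# The suspect in the Post's account had a firearm conviction and gang ties;
# under these made-up thresholds that combination lands in the top tier.
print(threat_color({"firearm_conviction": 1, "gang_association": 1}))  # red
```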

In Chicago, the police have been applying machine learning and predictive analytics to police data sets, including crime incidents, arrests, and weather data. When data such as previous arrest records is combined with real-time IoT data (e.g., cameras paired with gunshot-detection sensors), it becomes easier to identify problem locations. Known as the ‘pre-crime’ initiative, the program was implemented through a collaboration between the Chicago Police Department and the University of Chicago Urban Labs.

The software used is HunchLab, a geographic prediction tool that employs data modeling to forecast risk in specific areas across the city. At-risk regions are highlighted on-screen, with recommendations for action displayed alongside them. The information is then collated into a ‘decision support system’ and made accessible to individual police officers on the beat.
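HunchLab's actual models are proprietary, but the general technique of grid-based risk scoring (aggregate historical incidents into small geographic cells, weight recent events and real-time sensor alerts more heavily, then surface the highest-scoring cells to a decision-support layer) can be sketched roughly as follows. All field names, weights, and the decay constant below are assumptions for illustration:

```python
# Minimal sketch of grid-based geographic risk scoring, in the spirit of tools
# like HunchLab. Field names, weights, and the decay constant are illustrative
# assumptions; the real models are proprietary and far more elaborate.
from collections import defaultdict
from math import exp

CELL_SIZE = 0.005  # degrees of lat/lon per grid cell (a few city blocks)

def cell_of(lat: float, lon: float) -> tuple:
    """Snap a coordinate to its grid cell."""
    return (round(lat / CELL_SIZE), round(lon / CELL_SIZE))

def risk_by_cell(incidents, sensor_alerts, now_day: float) -> dict:
    """Score each cell from past incidents (decayed by age) plus real-time alerts.

    incidents:     iterable of (lat, lon, day_of_event) historical reports
    sensor_alerts: iterable of (lat, lon) real-time alerts, e.g. gunshot detections
    """
    scores = defaultdict(float)
    for lat, lon, day in incidents:
        age = now_day - day
        scores[cell_of(lat, lon)] += exp(-age / 30.0)  # older events count less
    for lat, lon in sensor_alerts:
        scores[cell_of(lat, lon)] += 2.0               # assumed extra weight
    return dict(scores)

def top_cells(scores: dict, k: int = 5):
    """The cells a decision-support layer would surface to officers first."""
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]

# Example: two historical reports and one gunshot-sensor alert near one corner.
scores = risk_by_cell(
    incidents=[(41.8781, -87.6298, 90), (41.8782, -87.6301, 118)],
    sensor_alerts=[(41.8781, -87.6299)],
    now_day=120,
)
print(top_cells(scores, k=3))
```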

Adoption of pre-crime tech is beginning to trend in the US. PredPol, one of the leading systems on the market, is already being used by law enforcement in California, Florida, Maryland and other states.

Aside from civil liberties concerns, however, a flaw found in the design of this type of software indicates that predictive algorithms introduce a whole new set of problems of their own.

For example, when researchers in the US examined how PredPol predicts crime, they found something disturbing. The software apparently sets off a “feedback loop” that leads to law enforcement being dispatched repeatedly to certain neighborhoods, regardless of the actual crime rates in those neighborhoods.

According to New Scientist's article on the research:

"The problem stems from the logic that PredPol uses to decide where officers should be sent. If an officer is sent to a neighbourhood and then makes an arrest, the software takes this as indicating a good chance of more crimes in that area in future.

What this means, says Matt Kusner at the Alan Turing Institute in London, is that the PredPol system seems to be learning from reports recorded by the police – which may be higher in areas where there are more police – rather than from underlying crime rates."

“That’s how dangerous feedback loops are,” Joshua Loftus, Assistant Professor of Information, Operations and Management Sciences, said. These loops are only part of how PredPol makes its predictions, he said, but they may explain why predictive policing algorithms have sometimes appeared to recreate exactly the type of biases the software developers say they overcome.

So the crime rate in one neighborhood is overestimated, without accounting for the possibility that more crime is observed there simply because more officers have been sent there. It's essentially a computerized version of confirmation bias.

It may be possible to break the feedback loop, though. New Scientist noted that "the authors also modeled a different system, in which the algorithm only sent more officers to a neighborhood if the area’s crime rate was higher than expected. This led it to distribute officers in a way that much more closely matched the true crime rate."
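The toy simulation below is not PredPol's algorithm; it is only a sketch of the dynamic under assumed numbers. Two districts have similar true incident rates, but the naive rule reallocates patrols toward raw recorded counts, which are inflated wherever officers already are. The "corrected" rule is a rough stand-in for the alternative system the researchers modeled, shifting patrols only on the observed per-patrol rate rather than on raw counts:

```python
# Toy simulation of the feedback loop described above. The numbers and update
# rules are assumptions chosen to illustrate the dynamic; this is not PredPol's
# actual algorithm.
import random

random.seed(0)

TRUE_RATE = {"A": 0.10, "B": 0.12}  # assumed true per-patrol incident probabilities
TOTAL_PATROLS = 100

def simulate(corrected: bool, days: int = 200) -> dict:
    """Reallocate patrols daily from recorded incidents; return final share per district."""
    share = {"A": 0.5, "B": 0.5}  # start with an even split
    for _ in range(days):
        recorded = {}
        for d in share:
            patrols = int(TOTAL_PATROLS * share[d])
            # Incidents are only recorded where officers are present to observe them.
            recorded[d] = sum(random.random() < TRUE_RATE[d] for _ in range(patrols))
        if corrected:
            # Shift patrols based on the rate *per patrol*, not on raw counts.
            rate = {d: recorded[d] / max(1, int(TOTAL_PATROLS * share[d])) for d in share}
            total = sum(rate.values()) or 1.0
            target = {d: rate[d] / total for d in share}
        else:
            # Naive rule: allocate in proportion to raw recorded incidents,
            # which are inflated wherever more officers already patrol.
            total = sum(recorded.values()) or 1
            target = {d: recorded[d] / total for d in share}
        share = {d: 0.9 * share[d] + 0.1 * target[d] for d in share}  # smoothed update
    return share

print("naive:    ", simulate(corrected=False))  # drifts toward a single district
print("corrected:", simulate(corrected=True))   # stays near the true-rate ratio
```

Under the naive rule the patrol share drifts almost entirely toward one district even though the underlying rates differ only slightly; the corrected rule keeps the allocation close to the true-rate ratio.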

Loftus also indicated there are several other issues that need to be resolved before policing algorithms can truly be considered unbiased. “Human decisions affect every aspect of the design of the system,” he cautioned.

Pre-Crime Informational Video

Related Articles:

Facial Recognition Tech: EFF Engaged in Battle Against "Expanding Proliferation of Surveillance"