Can AI Predict Criminal Behavior?

A group of people walk in a futuristic city

The use of artificial intelligence (AI) in our modern digital world has grown exponentially since the field was first conceived in the 20th century. Our reliance on machine learning algorithms to spot patterns and make predictions has become prevalent in a multitude of industries, from finance and healthcare to video game development and even space exploration. So the obvious next question is: how will it be used by law enforcement?

Many sci-fi stories, like Philip K. Dick's Minority Report and its film adaptation, have explored the idea of precrime: foreseeing criminal activity and apprehending criminals before they commit an illegal act. With the rise of predictive analytics and our growing reliance on deep learning, this is no longer pure speculation. So how can AI be used to predict criminal behavior, and do the ends justify the means, or does it put the meaning of justice at risk?

Historical Context

Crime prediction is not a new endeavor; criminologists have spent decades trying to spot patterns in criminal behavior in order to prevent crime. Since the 19th century, law enforcement has drawn on tactics from sociology, psychology, and even pseudoscience like phrenology to detect supposed criminal tendencies.

By the 20th century, more grounded theories in modern psychology and criminology took hold, tying criminal activity to family bonds, childhood experiences, environmental factors, personality traits, and disorders, which made it easier to identify potential offenders. This led to an abundance of new record-keeping that gave law enforcement agencies a body of historical data. New methods like hotspot policing became common, using that historical information to identify areas prone to high crime.

How AI Predicts Criminal Behavior

Machine learning is a form of AI in which algorithms train on datasets to identify patterns and make predictions. Unlike humans, these algorithms can analyze large quantities of data at incredible speeds. More advanced deep learning models process this information through layers of learned weights and biases, producing actionable insights used in nearly every industry today, and law enforcement is no exception.

Law enforcement generates a vast amount of raw data from arrest records, police reports, body cam footage, and geographical data that is well suited for AI algorithms to process. Much like business forecasting or climate monitoring, AI models can use statistical reasoning and real-time analysis to produce forecasts and heat maps of expected criminal activity in a given area.
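In its simplest form, the heat-map approach described above amounts to binning historical incident locations into grid cells and flagging the busiest cells for patrol. The following is a minimal, hypothetical sketch of that idea; the coordinates and cell size are toy values invented for illustration, not any real system's data or API.

```python
from collections import Counter

# Hypothetical historical incident records: (x, y) coordinates of past reports.
# In a real system these would come from police report data; here they are toy values.
incidents = [
    (0.10, 0.20), (0.15, 0.22), (0.12, 0.18),  # cluster near grid cell (0, 0)
    (0.80, 0.90), (0.85, 0.95),                # cluster near grid cell (3, 3)
    (0.50, 0.50),                              # lone incident
]

def hotspot_grid(points, cell_size=0.25):
    """Bin incident coordinates into grid cells and count events per cell."""
    counts = Counter()
    for x, y in points:
        cell = (int(x // cell_size), int(y // cell_size))
        counts[cell] += 1
    return counts

def top_hotspots(points, k=2, cell_size=0.25):
    """Return the k grid cells with the most historical incidents."""
    counts = hotspot_grid(points, cell_size)
    return [cell for cell, _ in counts.most_common(k)]

print(top_hotspots(incidents))  # → [(0, 0), (3, 3)]
```

Real deployments layer far more on top of this (time-of-day effects, decay of old records, self-exciting point-process models), but the core logic of hotspot policing is this kind of frequency count over space.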

Potential Benefits of Using AI in Crime Prediction

Objectively speaking, implementing AI in law enforcement offers some obvious benefits that could help make cities safer. Enhanced hotspot patrolling can help police departments allocate resources more efficiently, preventing them from running thin and dispatching officers to the areas that need them most.

More importantly, predictive analytics opens up an entirely new category of law enforcement: preventing crime before it happens. By analyzing historical data, officers can be dispatched to areas of expected crime, arriving in time to stop a problem before it occurs. Computer vision surveillance can enhance these measures even further; facial recognition makes it easier for officers to identify suspects. These tracking systems can not only locate an individual expected to commit a crime, but could also be used to train future models on additional traits like body posture and temperature to flag people who appear suspicious.

Challenges and Concerns

Despite the benefits to security and public safety promised by AI surveillance and predictive analytics, these tools risk clear breaches of public trust, privacy, and basic civil liberties. Profiling is already a hot topic of debate around policing, especially regarding racial discrimination, and using AI to predict criminal behavior could exacerbate the issue.

Moreover, AI is not infallible. An over-reliance on algorithms to spot and arrest criminals before they commit a crime could easily increase the number of falsely incarcerated individuals, because even the best models make mistakes.

A predictive crime model can also become unreliable if it is trained on skewed data. This is especially dangerous in cities where specific demographics and racial groups are more likely to be falsely targeted by the police, generating biased records that directly harm innocent citizens.
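The danger compounds because policing data feeds back into itself: crimes are mostly recorded where officers already are, so a model that dispatches patrols based on past records can entrench an initial skew. Here is a deliberately simplified, hypothetical simulation of that feedback loop; both areas have an identical true incident rate, and every number is invented for illustration.

```python
import random

random.seed(0)

# Two areas with the SAME true underlying incident rate.
TRUE_RATE = 0.3          # chance a patrol visit records an incident
PATROLS_PER_DAY = 20

# Slightly skewed starting records, e.g. from past over-policing of area 1.
recorded = [10, 12]

for day in range(30):
    # Greedy dispatch: send patrols wherever the data says crime is highest.
    target = 0 if recorded[0] >= recorded[1] else 1
    # Incidents are only recorded where officers are present, so the
    # patrolled area keeps accumulating records regardless of the true rate.
    recorded[target] += sum(
        random.random() < TRUE_RATE for _ in range(PATROLS_PER_DAY)
    )

print(recorded)  # area 1's records balloon; area 0's never change
```

After 30 simulated days, area 1 has hundreds of recorded incidents and area 0 still has ten, even though nothing about the underlying neighborhoods differs. Researchers have described exactly this kind of runaway feedback loop in predictive policing systems.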

We know this is possible because AI models have been trained on biased data before. Amazon, for example, built a recruiting model to filter resumes and applications; unknown at the time, the model was screening out female applicants because its training data reflected real-world biases from a heavily male-dominated tech industry. Countries with disproportionate incarceration rates among minority groups would likely face the same problem.

Real-world Implementations & Case Studies

Many AI models focused on predicting criminal activity already exist and have been infamously deployed. The most prominent example is Geolitica, formerly known as PredPol, which was designed to predict property crimes using algorithms adapted from earthquake aftershock models. Developed jointly by the LAPD and UCLA, the program became highly controversial in Southern California after citizens claimed it only perpetuated racial discrimination. In 2020, the LAPD ended its use of the model, and an internal audit found no clear evidence that it reduced crime.

Earlier, in 2019, San Francisco, often called the tech capital of the world, banned government use of facial recognition technology in an 8-1 vote, with supervisors arguing that it would only serve to perpetuate racial discrimination. The city has since come under new scrutiny for its approval of police robotics.

Is AI Ready for Criminal Prediction?

The argument for and against predictive crime technology comes down to a clear question: do we prioritize safety or justice? To many, using predictive analytics to prevent crime offers an obvious benefit: greater public safety. With enough data, AI can identify the most at-risk neighborhoods and head off crime with additional patrols before it ever happens. To these proponents, fear of AI malfunctioning is easy to dismiss; we already trust AI with every other aspect of our digital lives, so why not law enforcement? If AI is accurate enough to drive a car, the argument goes, it is safe enough to predict crime.

On the other hand, many see this use of AI as a clear infringement on civil liberties and the meaning of justice. In the United States, everyone is presumed innocent until proven guilty; arresting someone, or even just monitoring them, based on a machine's prediction is a breach of that right. Moreover, these practices could widen the divide between marginalized communities and police in regions where relationships are already strained, deepening distrust on all sides and leading to more unjust police brutality.

Future Outlook

There is no doubt that AI has delivered immense benefits to societies worldwide, but it may not be the right solution for every problem. While certain AI algorithms can help monitor locales that need better policing, the best way to prevent crime is through stronger community support. Better schools, cheaper daycare, and extracurricular activities are just a few examples of how a municipal body can create safer environments by giving citizens the opportunities they need to succeed.

AI is a powerful beast, and we must be careful to avoid the all-too-real possibility of a dystopian future in which analytics and historical records are weighed over basic evidence and the occurrence of an actual crime. Instead, let us humanize our communities and build AI models that support those efforts, like enhancing our education system or placing accessible health clinics where they are needed most.

Keegan King

Keegan is an avid user and advocate for blockchain technology and its implementation in everyday life. He writes a variety of content related to cryptocurrencies while also creating marketing materials for law firms in the greater Los Angeles area. He was a part of the curriculum writing team for the bitcoin coursework at Emile Learning. Before being a writer, Keegan King was a business English Teacher in Busan, South Korea. His students included local businessmen, engineers, and doctors who all enjoyed discussions about bitcoin and blockchains. Keegan King’s favorite altcoin is Polygon.

https://www.linkedin.com/in/keeganking/