Artificial intelligence predicts crime one week in advance

Ishanu Chattopadhyay at the University of Chicago and his colleagues created an AI model that analyzed historical crime data from Chicago, Illinois, from 2014 to the end of 2016, then predicted crime levels for the weeks that followed this training period.

The AI model predicted the likelihood of certain crimes occurring across the city, which was divided into squares about 300 metres across, a week in advance with up to 90% accuracy.
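The article doesn't describe the model's internals, but the setup implies a standard first step for this kind of forecasting: snapping point-level incident records to a coarse spatial grid and aggregating them into weekly counts per tile, so that each tile has an event series a sequential model can be trained on. The Python sketch below is purely illustrative; the 300-metre tile size comes from the article, while the coordinates, records and helper functions are hypothetical.

```python
from collections import Counter
from datetime import date

CELL_METRES = 300  # approximate tile size described in the article

def cell_of(x_m: float, y_m: float) -> tuple[int, int]:
    """Map a point (metres on a local projection) to a grid-cell index."""
    return int(x_m // CELL_METRES), int(y_m // CELL_METRES)

def week_of(d: date) -> tuple[int, int]:
    """Bucket an event date into an ISO (year, week) pair."""
    iso = d.isocalendar()
    return iso[0], iso[1]

# Hypothetical incident records: (x metres, y metres, date of event)
incidents = [
    (120.0, 450.0, date(2016, 11, 2)),
    (140.0, 460.0, date(2016, 11, 3)),
    (900.0, 80.0, date(2016, 11, 10)),
]

# Weekly event counts per grid cell: the per-tile series a model could
# be trained on to forecast the following week.
counts = Counter((cell_of(x, y), week_of(d)) for x, y, d in incidents)
for (cell, week), n in sorted(counts.items()):
    print(f"cell {cell}, week {week}: {n} event(s)")
```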

It was also trained and tested on data from seven other major US cities, with a similar level of performance.

Crime prediction

Previous efforts to use AI models to predict crime have been controversial because they can perpetuate racial bias. In recent years, the Chicago Police Department has tested an algorithm that created a list of people deemed most at risk of being involved in a shooting, either as a victim or as a perpetrator.

Details of the algorithm and the list were initially kept secret, but when the list was finally released, it turned out that 56% of Black men in the city aged between 20 and 29 featured on it.

Chattopadhyay concedes that the data used by his model will also be biased, but says that efforts have been made to reduce the effect of bias and that the AI doesn't identify suspects, only potential sites of crime. "It would be great if you could know where homicides are going to happen," he adds.

Advances in machine learning and artificial intelligence have attracted the interest of governments that want security services to use these predictive tools to deter crime.

Early efforts to predict crime have been controversial, however, because they fail to take into account the systemic biases in police enforcement of the law and its complex relationship with crime and society.

Human bias

The researchers also used the data to look for areas where human bias may be affecting policing. They analyzed the number of arrests following crimes in neighbourhoods in Chicago with different socioeconomic levels. This showed that crimes in wealthier areas resulted in more arrests than crimes in poorer areas, suggesting bias in the police response.
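As a purely illustrative sketch of that kind of comparison (the figures below are invented, not the study's data), one could compute the arrest rate per reported crime for neighbourhoods grouped by income band and compare the bands:

```python
from collections import defaultdict

# Hypothetical records: (income band, crimes reported, arrests that followed)
records = [
    ("higher-income", 200, 52),
    ("higher-income", 180, 49),
    ("lower-income", 260, 38),
    ("lower-income", 240, 31),
]

# Aggregate crimes and arrests per income band.
totals = defaultdict(lambda: [0, 0])  # band -> [crimes, arrests]
for band, crimes, arrests in records:
    totals[band][0] += crimes
    totals[band][1] += arrests

# A persistent gap between the bands would point to the kind of
# disparity in police response the researchers describe.
for band, (crimes, arrests) in totals.items():
    print(f"{band}: {arrests / crimes:.1%} of reported crimes led to an arrest")
```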

Lawrence Sherman at the Cambridge Centre for Evidence-Based Policing in the UK says the finding may be "a reflection of deliberate discrimination by the police in certain areas".

Chattopadhyay says AI predictions could be used more safely to inform policy at the highest levels, rather than to directly allocate the resources of security services, and notes that he has made the data and algorithm used in the study available to everyone so that other researchers can examine the results and build on the work.
