August 29, 2023
AI has seen an immense surge in popularity over the past year. Despite rising concerns over its uses and applications, law enforcement organisations across the world have been introducing AI systems to streamline their workloads. For instance, experts have estimated that AI technologies could help cut emergency service response times by 20 to 35%. Yet alongside its many advantages, AI continues to generate controversy around personal security and privacy. Here are some of the ways AI is being implemented in law enforcement.

Facial Recognition Technology

Facial recognition has been used in law enforcement for over a decade. But, as with most AI technologies, its applications and accuracy have evolved considerably since. Facial recognition works through a combination of machine learning and data analysis.

First, the AI system needs to detect a human face. Once it recognises the specific features of a face, the captured image is compared against large datasets to confirm that it is indeed a face. Once validated, the image is then cross-checked against the law enforcement organisation’s database.

If the person identified by the AI is already in the system, facial recognition can confirm their identity. If they are not in the organisation’s database, the system can run additional checks, depending on its permissions. Either way, once a face has been identified, it is catalogued and stored in the system’s database.
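The cross-checking step described above is, at its core, a similarity search: a numerical representation (embedding) of the captured face is compared against stored embeddings. The sketch below is a simplified, hypothetical illustration of that matching step; the embedding values, database, and threshold are invented for demonstration, and real systems use high-dimensional embeddings from trained neural networks.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def match_face(embedding, database, threshold=0.9):
    """Return the best-matching identity, or None if nothing clears the threshold."""
    best_id, best_score = None, threshold
    for identity, stored in database.items():
        score = cosine_similarity(embedding, stored)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id

# Toy database of known identities (values are illustrative only)
db = {"person_a": [0.1, 0.9, 0.2], "person_b": [0.8, 0.1, 0.5]}
probe = [0.12, 0.88, 0.21]  # embedding of a newly captured face
print(match_face(probe, db))  # matches "person_a"
```

The threshold matters: set too low, the system misidentifies people; set too high, it misses genuine matches, which is exactly the trade-off behind the accuracy concerns discussed later in this article.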

More Than Facial Recognition

In September 2022, Devon and Cornwall Police installed AI software aimed at capturing and identifying driving offences. In just three days, it identified over 300 offences.

The AI system works similarly to facial recognition by capturing and processing images containing potential breaches: drivers using their phones, not wearing their seatbelts or speeding. The technology is thus able to capture, identify and process offences, which are then sent to a human officer for review.
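The key design point in the paragraph above is that the AI only flags candidates; a person makes the final call. A minimal sketch of that human-in-the-loop filter might look like the following (the plate numbers, offence labels, field names, and confidence threshold are all hypothetical, not drawn from the actual Devon and Cornwall system):

```python
# Hypothetical detections produced by an image-recognition model.
detections = [
    {"plate": "AB12 CDE", "offence": "phone_use", "confidence": 0.94},
    {"plate": "XY34 ZZZ", "offence": "no_seatbelt", "confidence": 0.41},
    {"plate": "GH56 IJK", "offence": "speeding", "confidence": 0.88},
]

def review_queue(detections, threshold=0.7):
    """Forward only high-confidence detections to a human reviewer."""
    return [d for d in detections if d["confidence"] >= threshold]

for item in review_queue(detections):
    print(item["plate"], item["offence"])
```

Filtering by confidence keeps the human workload manageable while ensuring no one is fined on the model's say-so alone.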

The introduction of the system was highly successful, reducing the force’s workload and streamlining the process of identifying and fining road offenders. What has also come to light, however, is the concerningly large number of drivers ignoring the law and endangering themselves and others. The police forces involved in the trial are now taking new measures to reduce such offences.

Check out our guide to find out more about the implications of AI.


Sharing Data Across Law Enforcement Organisations

Before the introduction of automated systems, police reports were written by hand and filed manually. Firstly, the hard-copy system needed massive amounts of physical storage space. Secondly, it was particularly prone to human error: documents could be accidentally lost, destroyed, or simply impossible to locate.

But the hard-copy system was not just a problem for individual departments: it created institution-wide issues. For instance, if a criminal was caught in a foreign country or jurisdiction, sharing information about that individual across multiple departments or organisations was slow and laborious, and sometimes the step was skipped altogether.

With the implementation of AI, law enforcement organisations can now handle and share large amounts of data without the need for human intervention at every step.
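The practical foundation of automated data sharing is structured, machine-readable records. The sketch below is a deliberately simplified illustration, with an invented case record, of how a digitised record can be serialised for transmission and parsed back by a receiving agency with no manual re-keying:

```python
import json

# Hypothetical structured case record (fields invented for illustration).
record = {
    "case_id": "2023-0042",
    "jurisdiction": "Devon and Cornwall",
    "offence": "vehicle theft",
    "status": "open",
}

payload = json.dumps(record)    # serialise for transmission to another agency
received = json.loads(payload)  # the receiving system parses it back

assert received == record       # nothing lost or garbled in transit
print(received["case_id"])
```

Because both sides agree on the format, the round trip is lossless, which is precisely what hand-copied paper records could never guarantee.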

Remote Monitoring

The use of drones across law enforcement organisations has also seen a recent rise in popularity. Drones can monitor and inspect sites without endangering officers, allowing for much safer and better-planned interventions.

Moreover, drones provide a much more flexible approach to areas which might otherwise be difficult to reach. This, combined with their ability to capture detailed images of specific individuals and locations, makes them a highly effective tool for law enforcement.

Using AI technology, law enforcement agencies can program drones to monitor areas for set periods of time, or even to identify and capture specific images (faces, weapons, etc.). For instance, researchers at the University of Maryland and the University of Zurich recently equipped a drone with cameras and a sonar system, making it capable of identifying and dodging objects thrown at it.

Predictive Policing

Predictive policing is still in its very early days, but the concept behind it is nonetheless captivating. Predictive policing would involve the use of large databases and data analytics to ‘predict and prevent criminal activity before it occurs.’

For instance, if an AI system identifies a pattern of crimes occurring in a specific area at specific times, law enforcement agencies could prepare in advance by tightening security around those places or sending out patrols at different times of day. Such an algorithm would therefore identify and categorise crimes based on variables like crime type, location and time.

Using this type of system would allow law enforcement to identify crime hotspots and patterns in ways humans could not, thanks to the sheer amount of data AI systems are able to analyse.
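At its simplest, hotspot detection is a matter of counting incidents grouped by the variables mentioned above (area, time, crime type) and flagging combinations that recur. The sketch below illustrates that idea on invented incident data; real predictive-policing systems use far richer statistical models, but the grouping-and-thresholding principle is the same:

```python
from collections import Counter

# Hypothetical incident log: (area, hour_of_day, crime_type)
incidents = [
    ("downtown", 23, "theft"), ("downtown", 23, "theft"),
    ("downtown", 22, "assault"), ("harbour", 14, "fraud"),
    ("downtown", 23, "theft"), ("harbour", 2, "theft"),
]

def hotspots(records, min_count=3):
    """Flag (area, hour) pairs whose incident count meets the threshold."""
    counts = Counter((area, hour) for area, hour, _ in records)
    return [pair for pair, n in counts.items() if n >= min_count]

print(hotspots(incidents))  # downtown at 23:00 recurs often enough to flag
```

A human analyst could do this for a handful of records; the value of AI-scale analysis is doing it across millions of records and many more variables at once.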

AI-driven patrol routing is already being implemented by select law enforcement agencies. AI technology is used to calculate and create the most efficient route for police patrols to take based on various factors – such as the time of day, traffic, high-risk areas of their jurisdictions and crime patterns.
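Patrol routing of the kind described above is an optimisation problem: balance the risk level of each location against the cost of travelling to it. The following is a minimal sketch using a greedy heuristic on invented coordinates and risk scores; deployed systems would use real map data, traffic feeds, and more sophisticated optimisers:

```python
import math

# Hypothetical patrol stops: name -> ((x, y) position, risk score from crime data)
stops = {
    "station": ((0, 0), 0.0),
    "market": ((2, 1), 0.8),
    "park": ((1, 3), 0.5),
    "docks": ((4, 2), 0.9),
}

def plan_route(stops, start="station"):
    """Greedy heuristic: repeatedly visit the unvisited stop with the
    best risk-to-distance ratio from the current position."""
    pos, _ = stops[start]
    remaining = {k: v for k, v in stops.items() if k != start}
    route = [start]
    while remaining:
        def score(item):
            (xy, risk) = item[1]
            return risk / (math.dist(pos, xy) + 1e-9)
        nxt, ((x, y), _) = max(remaining.items(), key=score)
        route.append(nxt)
        pos = (x, y)
        del remaining[nxt]
    return route

print(plan_route(stops))  # station first, then nearby high-risk stops
```

A greedy pass like this is fast enough to recompute routes as conditions change during a shift, which is the main operational advantage over a fixed patrol schedule.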

Preparing for interviews? Check out our guide to discussing the Police, Crime, Sentencing and Courts Bill.

Concerns

While it is widely agreed that the introduction of AI in law enforcement, such as facial recognition technology, has been largely beneficial, concerns are still being raised about the privacy risks posed by more advanced systems.

One concern with regard to AI is the lack of regulation surrounding certain programs’ access to personal data. This issue has dominated the discussion around AI ever since Italy decided to temporarily ban OpenAI’s ChatGPT. In response, governments have begun pushing for stricter regulation of AI, such as the EU’s AI Act.

Another issue, raised specifically in connection with facial recognition software, is its tendency to misidentify people, particularly people of colour. Such mistakes have already led to wrongful accusations, suggesting that the technology may not yet be ready to tackle issues pertaining to law enforcement.
