Artificial intelligence (AI) is changing the world. Today, a huge range of sectors is being disrupted by AI: everything from credit scoring to image recognition, and from logistics to crop health.
As regular readers of Exponential Investor will know, we are big fans of sci-fi nightmares – and what could be more dystopian than a justice system run by AI?
Crime has certainly been a controversial subject for Exponential Investor – so I expect today’s article will provoke a few letters. We’re going to look at how AI is changing the law-enforcement landscape. It’s a morally tricky area – but it’s one that is seeing big investment opportunities, as police and criminal justice systems around the world turn to AI to aid their decision-making.
Don’t miss out on your opportunity to profit from AI: get up to speed, with Frontier Tech Investor.
Robocop gets cerebral
As well as physical robot policemen and security guards, there are now virtual cops doing white-collar work. VALCRI, from i-intelligence, helps catch suspects by pulling together common factors from related crimes. For example, witnesses may report seeing similar-looking people at different crime scenes. If these crimes are separated in time and space, it’s very difficult for police officers to find a potential link. Yet it’s these links that allow police to catch infrequent or widely travelled offenders. However, a potential link is precisely that – potential. It doesn’t mean that the crimes were perpetrated by the same individual. There is a risk of a miscarriage of justice if crimes are erroneously lumped together and suspects blindly pursued.
I hope you’re as serious about profiting from these breakthroughs as we are about bringing you the latest investment thinking. Exponential Investor can take you a long way – but if you’re really serious about profiting from the AI revolution, you need to take a good look at our sister publication, Frontier Tech Investor, where you’ll get specific stock tips and more in-depth investment analysis.
Robojudge joins Robocop
Recently, a man in the US appealed his jail sentence on the grounds that it was handed to him by a robot. While this story may seem utterly bizarre, there is a bit of a backstory here.
Judges make decisions on sentencing by weighing up a range of guidelines. For example, in the UK, there are suggested sentences for various offences – such as carrying a knife. When considering a sentence, a judge will weigh up a variety of mitigating and aggravating factors for the offence – such as whether the knife carrier had boasted about planning to stab somebody. In addition, a judge will also weigh up factors that may influence the chance of recidivism. Someone with a long criminal history may expect a steeper sentence than a first-time offender. There are a wide variety of factors, and they have to be carefully weighted and combined in the sentencing process. Understandably, it’s very difficult to weigh up the myriad factors that may affect the probability of reoffending.
The judges’ balancing act is therefore difficult. Further, it must be performed while judges render themselves wilfully blind to particular factors (eg, ethnicity) that may correlate with reoffending – but which cannot legally be considered.
It’s possible that computers may be better at this task than humans are. They can weigh up the various issues without bias – and produce mathematically robust decisions. These algorithms (eg, COMPAS, by Equivant) can be constantly tweaked, as new data become available. In theory, such algorithms can make sentencing decisions consistent, fair and evidence-based.
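In outline, a risk-scoring tool of this kind combines weighted case factors into a single number, which is then mapped to a risk band. The sketch below is a purely hypothetical illustration of that idea – the factor names, weights and thresholds are invented for this article, and do not reflect how COMPAS or any real sentencing tool actually works:

```python
# Hypothetical sketch of a weighted risk score. All factor names,
# weights and thresholds are invented for illustration only.

WEIGHTS = {
    "prior_convictions": 0.5,    # each prior conviction raises the score
    "offence_severity": 1.0,     # 1 (minor) to 5 (serious)
    "aggravating_factors": 0.8,  # eg, premeditation, boasting of intent
    "mitigating_factors": -0.8,  # eg, cooperation, first-time offender
}

def risk_score(case: dict) -> float:
    """Combine the weighted factors into a single score."""
    return sum(WEIGHTS[k] * case.get(k, 0) for k in WEIGHTS)

def risk_band(score: float) -> str:
    """Map the raw score to a coarse, human-readable band."""
    if score < 2:
        return "low"
    if score < 5:
        return "medium"
    return "high"

case = {
    "prior_convictions": 3,
    "offence_severity": 4,
    "aggravating_factors": 1,
    "mitigating_factors": 2,
}
print(round(risk_score(case), 2), risk_band(risk_score(case)))
```

Even this toy version shows where the controversy lies: the weights and thresholds encode someone’s judgement, and unless they are published, a defendant has no way to scrutinise them.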
However, such algorithms are usually written by commercial companies, who may not want to expose their “secret sauce” to competitive replication (although VALCRI is one that’s open). Closed algorithms can lead to a very difficult legal and ethical situation. Police want to ensure they’re going after the right people. Likewise, judges want to offer a robust verdict, well-grounded in reason. Offenders, for their part, need to know that they have been sentenced fairly.
Here’s the problem
Police officers and judges are therefore increasingly relying on algorithms which they hope are consistent and fair – yet they may not have access to those algorithms’ reasoning. Of course, the inner workings of a human mind are private – but we do tend to trust the judgement of experienced, well-trained individuals.
We don’t extend the same trust to maths – which, in the last analysis, is all that these algorithms are. Presently, many criminal justice AIs are closed systems. They have invisible workings, and no case-by-case accountability. So they’re not just maths – they’re hidden maths.
Time will tell whether we are willing to allow this new approach to take root. It will lead to a society in which machines make decisions on freedom or incarceration, life or death – and a host of comparable matters. In time, we can expect to see similar algorithms deployed for everything from social services interventions to judgements on forced hospitalisation.
We have entered a world where the rule of law is potentially going to be replaced by the rule of robots. Society will have to decide whether the consistency of an algorithm is a benefit that’s worth sacrificing human judgement for. While nobody can bribe an algorithm, they may not trust its makers – or the data it has been trained on.
Trust in algorithms also overlooks one potentially vital factor in this discussion: the human dimension. Part of what makes us accept a legal system is that it’s delivered by humans, who are entrusted to carry out society’s collective will. We not only hope to face a justice system that’s fair, but also one that’s humane. We expect it to express mercy as well as censure – and normal people are as guided by shame and admonishment as they are by criminal sanction. If that humanity is replaced by AI, we may well find the resulting system inhumane and tyrannical.
It remains to be seen whether AI is seen as scrupulously fair, or merciless and unaccountable. Either way, we can expect a lot more robots in the criminal justice system.
Submit your plea to: firstname.lastname@example.org.
Category: Artificial Intelligence