NST Leader: Artificial intelligence in court

Artificial intelligence (AI), the art of using algorithms to make decisions, has now made a court appearance in Malaysia with a bang.

Not as a defendant — it would have been a novel case had it been so — but as an aid to help the magistrate with sentencing. Will AI stop there? Hard to tell.

Trends elsewhere suggest something more. But if a machine should one day sit on the bench, presiding over a court battle between man and man, it will be a surrender most ominous.

There are at least two reasons why we should not defer to machines. One, they lack humanity. It is true that AI can do many complicated things. And fast, too. But dispensing justice is not one of them. Justice is not all science. It is that, and more. There is much art in between. What makes justice just is the interplay of soul, conscience and compassion. AI has none of these. What machine learning is good at, though, is data crunching.

But from here to passing judgment is a leap machines can’t make. AI is only as good as the people who make it. And they are all techies. To allow AI a seat on the bench would mean dispensing with legal education.

Two, the danger of bias in AI predictive-modelling. AI proponents often argue that machines, unlike men, cannot hold personal prejudices, and that their predictions are therefore free of prejudice. Not so fast.

A Cornell University study conducted in 2017 points to plenty of such bias. It even warns of AI amplifying the prejudice it inherits. The study does recommend a so-called bias-free model for various decision-making settings, such as sentencing, policing and parole.

But the point is that data come with bias. What’s worse, AI can be manipulated. And it does get abused, as the Cambridge Analytica scandal tells us. Perhaps we can build machines to guard against machine bias. But to go down this path would mean machines ad nauseam.
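The point can be made concretely. Below is a minimal, purely hypothetical sketch in Python: two groups with invented numbers, where one group’s records are inflated by heavier historical policing. The scorer holds no prejudice of its own, yet it reads the skewed records as a difference in risk. Every figure and name here is made up for illustration.

```python
# Hypothetical past records: (group, was_rearrested). Group A was
# historically over-policed, so re-arrests among its members were
# recorded far more often -- even though, by assumption, the true
# reoffending rate is the same for both groups. All numbers invented.
records = (
    [("A", True)] * 40 + [("A", False)] * 60 +   # group A: 100 records
    [("B", True)] * 10 + [("B", False)] * 40     # group B: 50 records
)

def risk_rate(group: str) -> float:
    """Naive 'predictive' scorer: recorded re-arrest frequency."""
    hits = sum(1 for g, rearrested in records if g == group and rearrested)
    total = sum(1 for g, _ in records if g == group)
    return hits / total

print(risk_rate("A"))  # 0.4 -- rated twice as risky as group B
print(risk_rate("B"))  # 0.2 -- purely an artefact of skewed records
```

The scorer is arithmetic, not malice; the prejudice arrived with the data.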

A 2016 American case is instructive on the dangers of predictive-modelling. Here, one Eric L. Loomis, charged with eluding the police, was handed a six-year prison term for being a “high risk” to the community.

Unlike in the two Malaysian AI cases, the Wisconsin judge arrived at his sentencing decision in part based on a rating generated by a secret algorithm called Compas on the likelihood that Loomis would commit another crime. Compas’s calculation is derived from a survey of the defendant and information about his past. The New York Times’ screaming headline “In Wisconsin, a Backlash Against Using Data to Foretell Defendants’ Futures” was telling.
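Compas’s inner workings are secret, so what follows is only a guess at the general shape of such tools, not the algorithm itself: a weighted sum over questionnaire answers and criminal history, bucketed into a risk band. Every field, weight and threshold below is invented.

```python
from dataclasses import dataclass, field

@dataclass
class Defendant:
    prior_arrests: int                    # from court records
    age_at_first_arrest: int              # from court records
    survey_answers: list = field(default_factory=list)  # e.g. 0-4 each

def risk_band(d: Defendant) -> str:
    """Hypothetical weighted sum, bucketed into the label a judge sees."""
    score = (
        2.0 * d.prior_arrests
        + (1.5 if d.age_at_first_arrest < 21 else 0.0)
        + 0.5 * sum(d.survey_answers)
    )
    if score < 5:
        return "low risk"
    if score < 10:
        return "medium risk"
    return "high risk"

# The court sees only the band, never the weights or the arithmetic.
print(risk_band(Defendant(prior_arrests=3, age_at_first_arrest=19,
                          survey_answers=[2, 3, 1])))  # "high risk"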

There is both promise and peril in the use of AI, especially in our justice system. The promise is that AI is very good at data crunching. This is worth exploiting. Anything more is peril territory. Machines do fail, and in a big way.

There is so much we do not know about AI. Like how it arrives at certain conclusions. We may be moved to let AI into the courtroom because of its promise. But if we do, we must know that AI’s peril is not far behind.
