
Automatic Justice

One important decision a judge must make is whether to let a defendant out on bail until the trial. Flight risk is one major concern; most states also instruct judges to consider the threat to public safety. But what if we used computers to make that decision instead of judges? We’d see dramatically better results, at least according to a new working paper [1] published by the National Bureau of Economic Research. In fact, we could reduce the crimes these people commit by a quarter while jailing the same number of people. Alternatively, according to the study, we could reduce the number of people jailed by more than 40 percent while keeping crime rates the same.

This formulaic approach to justice, known as “risk assessment,” is becoming increasingly common. Criminal-justice officials from New Jersey [2] to California [3] are turning to algorithms for estimates of how likely offenders are to get into more trouble. In addition to bail, such tools can be used to make sentencing, probation, and parole decisions. (Robot juries remain far in the future.)

This development has prompted squeamish responses for a number of reasons, many of them valid. But if the new study is correct, the gains may prove too big to pass up. These tools may be the key to dramatically reducing our high incarceration rate without putting the public at risk.

Rather than relying on preexisting risk-assessment tools, the new study creates its own, using computerized “machine learning” on massive data sets compiled from real-life cases. The resulting formulas take account of various factors, including the current charge against the defendant, his priors, and whether he has failed to appear at a court date in the past. They do not consider any demographic traits except age.
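To make that concrete, here is a minimal sketch, in Python with scikit-learn and entirely synthetic data, of how a risk score of this general kind might be built. The feature names, the gradient-boosting model, and the 25 percent detention cutoff are illustrative assumptions, not the paper’s actual specification.

```python
# Illustrative sketch only: trains a risk model on synthetic bail data.
# The features and the gradient-boosting choice are assumptions for
# demonstration, not the NBER paper's actual specification.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000

# Synthetic case records: charge severity, prior convictions,
# past failures to appear, and age (the only demographic used).
X = np.column_stack([
    rng.integers(0, 5, n),        # current charge severity (0 = minor)
    rng.poisson(1.5, n),          # number of prior convictions
    rng.binomial(3, 0.2, n),      # past failures to appear in court
    rng.integers(18, 70, n),      # age
])

# Synthetic outcome: 1 if the defendant reoffends or skips court.
logits = 0.5 * X[:, 0] + 0.6 * X[:, 1] + 0.8 * X[:, 2] - 0.03 * X[:, 3] - 2.0
y = rng.random(n) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Each defendant gets a predicted probability of getting into trouble;
# a release rule then detains everyone above a chosen risk threshold.
risk = model.predict_proba(X_test)[:, 1]
threshold = np.quantile(risk, 0.75)   # detain the riskiest 25% (arbitrary cutoff)
print(f"Detention rate: {(risk >= threshold).mean():.0%}")
```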

The real-life judges made decisions quite different from what the algorithm would have recommended. In theory, this could happen because judges have important information that the algorithm doesn’t—but in reality the algorithm just works better, especially when it comes to identifying the riskiest cases. Incredibly, judges released about half of the defendants the algorithm considered to be in the riskiest 1 percent, and those defendants in fact committed new crimes more than half of the time. There are obvious gains in keeping these folks locked up while letting lower-risk offenders go, but for whatever reason human judges don’t seem to realize which defendants are truly the most likely to get into trouble if released.

A major concern about these algorithms is that they could be racially biased, even if they don’t directly consider race as a factor. Last year, though, I showed [4] that one specific accusation along these lines was overblown, and there is little sign of bias in the new study, at least relative to the status quo. The algorithm would have released slightly more Hispanics and slightly fewer blacks than the real-life judges did.

The authors also show that you can force the algorithm not to change the racial makeup of the jail population—or even to ensure that the same percentage of each racial group is released—without dramatically reducing its effectiveness. However, that approach is probably a lawsuit waiting to happen. In one example, the algorithm simply “stop[s] detaining black and Hispanic defendants once we have hit the exact number of each group detained by the judges, to ensure that the minority composition of the jail population is no higher under the algorithm’s release rule compared to the judges.” It’s a quota system, in other words, that would in effect set a separate risk threshold for each racial group to make sure the final numbers are politically correct.
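To illustrate what that release rule looks like mechanically, here is a minimal sketch in Python with pandas; the group labels, risk scores, and detention caps are made-up assumptions standing in for the paper’s data. Within each group, the riskiest defendants are detained until that group’s count matches the judges’ count, which is why the rule ends up with a different effective risk threshold for each group.

```python
# Illustrative sketch of the "quota" release rule described above:
# within each group, detain the highest-risk defendants, but stop once
# that group's detention count matches the judges' count. All names
# and numbers here are made up for demonstration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "risk": rng.random(1_000),                    # algorithmic risk score
    "group": rng.choice(["A", "B", "C"], 1_000),  # demographic group
})

# Caps equal to how many of each group the judges actually detained.
judge_detention_caps = {"A": 120, "B": 90, "C": 60}

df["detain"] = False
for group, cap in judge_detention_caps.items():
    members = df[df["group"] == group]
    # Detain the riskiest members of this group, up to the judges' count.
    to_detain = members.nlargest(cap, "risk").index
    df.loc[to_detain, "detain"] = True

print(df.groupby("group")["detain"].sum())
```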

None of this, of course, negates the legitimate criticisms of risk-assessment tools that I mentioned in last year’s post. Some of these tools are proprietary and thus can’t be contested effectively in court, raising due-process concerns. Wisconsin’s high court, for example, has upheld [5] the use of risk assessment in sentencing, but only as one factor among many, barring the full substitution of an algorithm’s discretion for judges’—the scenario explored in the study.

And, while the study focused on simple predictors like prior offenses, other risk-assessment tools involve extensive questionnaires that raise issues of their own. One popular tool, for instance, involves asking inmates whether there are gangs and drugs in their neighborhood, criteria on which it’s arguably unfair for the state to base consequential decisions.

These questions could also allow offenders to game the system. Most criminals aren’t geniuses, but it doesn’t take a genius to figure out which answers to such questions are more likely to get you released. (The questionnaires do contain some tricks [6], however, to catch offenders who aren’t telling the truth.)

But it is worth emphasizing that, in the context of bail, this new study suggests that we could lock up 40 percent fewer people without changing the crime rate. That is an enormous amount of practical good—fewer lives disrupted and less public money spent. And there is little reason to think humans are significantly more reliable when it comes to sentencing, parole, or probation.

People of good faith can disagree as to what, exactly, a risk-assessment tool should take into account. But it’s becoming harder to deny the immense potential such tools hold.

Robert VerBruggen is managing editor of The American Conservative.

6 Comments To "Automatic Justice"

#1 Comment By Cornel Lencar On February 26, 2017 @ 10:32 pm

Judges can indeed be biased: an Israeli study has shown that the rate of conviction is higher just before lunchtime than after judges have had their lunches, are no longer hungry, and are more indulgent.

However, given that states and the feds have agreements with the private prison system to maintain a certain rate of incarceration, such algorithms would ultimately be made illegal by state legislatures and Congress…

#2 Comment By Von Steuben On February 27, 2017 @ 8:45 am

Wait, why is it okay to reduce the effectiveness of the system for the sake of political correctness?

Using a quota system reintroduces bias, which the system is supposed to eliminate. Why do you just gloss over that? Is that supposed to comfort the reader?

If an objective algorithm that can’t see race is still producing disproportionate responses, maybe that means our racial stigmas aren’t so off the mark after all.

Pretty scary stuff.

#3 Comment By Kelvin On February 27, 2017 @ 12:38 pm

Cornel, even if it’s true that there are agreements in some states to provide a certain number of inmates to private prisons, that doesn’t mean we couldn’t reduce the incarceration rates in those states. No state has fully privatized its prison system, so in those states, the reduction in prisoners could be accomplished temporarily in the state-run prisons, until the agreements are renegotiated.

#4 Comment By William Dalton On February 27, 2017 @ 1:24 pm

Have things changed since I practiced criminal law? I thought every defendant, other than those charged with capital offenses, was entitled to bail. The only question for the judge is how high to set the bail. You can set it so high that only the wealthiest accused (who are rarely likely candidates for committing acts of public violence) will be released. That can keep a lot of people locked up awaiting trial. So is the computer algorithm designed to figure out how high a bail is just out of the defendant’s reach? Or whether he can afford to pay a bail bondsman?

Perhaps the surest path to reducing the number of offenses committed by those released on bail is simply to stop releasing accused who are already on probation or parole for prior offenses of which they have already been convicted.

#5 Comment By John Fargo On February 27, 2017 @ 7:13 pm

A hypothesis has been put forward: computers do a better job than judges in granting bail. Now, do trial testing to see whether the hypothesis is true. As a side note, why isn’t state redistricting done by computer?

#6 Comment By Forbe On February 27, 2017 @ 9:21 pm

If I’m guilty, try me by jury; if I’m innocent, I want a judge (or maybe an algorithm)!

Von Steuben

He didn’t gloss over the use of a quota system. I think he makes it clear it’s not a good (or just!) idea. I would strongly suspect that an algorithm showing racial bias without racial information being input would be an artefact of past racial bias in sentencing or conviction rates. Your racial stigma (it’s not ours, just yours), or your racist tendencies to put it more bluntly, remain quite off the mark. I hope you’re not a judge.