There was a great story in Slate yesterday called “Errors in Judgment — Were hundreds of criminals given the wrong sentences because lawyers messed up a basic work sheet?”
The background: the state of Maryland established a worksheet that scored the severity of a convict’s crime and his risk to society; it was intended to make sentencing more consistent and the administration of justice a little less arbitrary.
The problem? An all-too-common problem with anything to do with information and analysis: human error. Despite the high stakes (“months and years of freedom gained or lost”), researcher Emily Owens found that the system generated errors in 1 out of every 10 trials, even though there were multiple opportunities for the data to be reviewed and corrected.
And what did people find so hard about the worksheet? Simply looking at the right number! From the article (click on the little plus sign by the third-to-last paragraph):
“The work sheet generated separate ‘scores’ for the felon and his crime. The recommended sentence was then read off a table with offender and offense scores corresponding to the rows and columns of a grid. More than 90 percent of errors resulted from the person completing the work sheet entering the figure from a cell next to the correct one. (Using, say, a ruler to get to the correct cell would have prevented this.) The remaining errors came mostly from incorrect choice of criminal statute in calculating the offense score and from a handful of math errors (in operations that were literally as simple as adding two plus two).”
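That grid lookup is exactly the kind of task a computer never fumbles. Here’s a minimal sketch of what an automated version might look like; the grid values, score ranges, and sentences below are invented for illustration and are not Maryland’s actual guidelines:

```python
# Hypothetical sketch of an automated sentencing-grid lookup.
# The grid and sentences are made up -- not the real Maryland table.

SENTENCING_GRID = [
    # offense score ->  0            1           2
    ["probation",   "3 months",  "6 months"],   # offender score 0
    ["3 months",    "6 months",  "1 year"],     # offender score 1
    ["6 months",    "1 year",    "2 years"],    # offender score 2
]

def recommended_sentence(offender_score: int, offense_score: int) -> str:
    """Look up the recommended sentence directly by index.

    Indexing by score removes the failure mode the article describes:
    software can't accidentally read the cell next to the correct one.
    """
    if not 0 <= offender_score < len(SENTENCING_GRID):
        raise ValueError(f"offender score {offender_score} out of range")
    row = SENTENCING_GRID[offender_score]
    if not 0 <= offense_score < len(row):
        raise ValueError(f"offense score {offense_score} out of range")
    return row[offense_score]

print(recommended_sentence(1, 2))  # -> "1 year"
```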
Morals of the story for business intelligence deployments:
- Never overestimate the “information competency” of your users
- Eliminate manual processes where possible (“the Commission had already been at work developing an automated worksheet with the explicit goal of eliminating errors.”)
- Build in checks, balances, and collaboration: the answer to the “people problem” is more people reviewing the decision-making process (“multiple levels of evaluation helped to undo some of the damage”); one cheap software version of this is sketched below.
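On that last point, a simple way to encode “more people reviewing” in software is double-entry verification: two people fill in the worksheet independently, and any field where they disagree is flagged for review before the scores are used. A rough sketch, with made-up field names:

```python
# Hypothetical double-entry check: compare two independent fillings
# of the same worksheet and flag any field where they disagree.

def cross_check(entry_a: dict, entry_b: dict) -> list:
    """Return (field, value_a, value_b) for every mismatched field."""
    mismatches = []
    for field in sorted(entry_a.keys() | entry_b.keys()):
        if entry_a.get(field) != entry_b.get(field):
            mismatches.append((field, entry_a.get(field), entry_b.get(field)))
    return mismatches

clerk = {"offender_score": 4, "offense_score": 7}
reviewer = {"offender_score": 4, "offense_score": 8}  # disagreement

for field, a, b in cross_check(clerk, reviewer):
    print(f"Review needed: {field} entered as {a} and {b}")
```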