COMPAS (software)


COMPAS, an acronym for Correctional Offender Management Profiling for Alternative Sanctions, is a case management and decision support tool developed and owned by Northpointe used by U.S. courts to assess the likelihood of a defendant becoming a recidivist.
COMPAS has been used by the U.S. states of New York, Wisconsin, California, Florida's Broward County, and other jurisdictions.

Risk Assessment

The COMPAS software uses an algorithm to assess potential recidivism risk. Northpointe created risk scales for general and violent recidivism, and for pretrial misconduct. According to the COMPAS Practitioner's Guide, the scales were designed using behavioral and psychological constructs "of very high relevance to recidivism and criminal careers."
The Violent Recidivism Risk Scale is calculated as follows:

s = a(−w) + a_first(−w) + h_violence(w) + v_edu(w) + h_nc(w)

where s is the violent recidivism risk score, w is a weight multiplier, a is current age, a_first is the age at first arrest, h_violence is the history of violence, v_edu is vocation education level, and h_nc is history of noncompliance. The weight, w, is "determined by the strength of the item's relationship to person offense recidivism that we observed in our study data."
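The published form of the scale can be sketched in a few lines of Python. Note that the actual COMPAS weights are trade secrets; the function below uses a single hypothetical weight for illustration, and the input values are made up.

```python
def violent_recidivism_score(age, age_first_arrest, h_violence,
                             v_edu, h_nc, w=1.0):
    """Illustrative sketch of s = a(-w) + a_first(-w) + h_violence(w)
    + v_edu(w) + h_nc(w). The real item weights are proprietary;
    w=1.0 here is a placeholder, not Northpointe's value."""
    return (age * -w) + (age_first_arrest * -w) + \
           (h_violence * w) + (v_edu * w) + (h_nc * w)

# Because the two age terms enter with a negative weight, younger
# defendants lose less from them and so score higher, all else equal.
print(violent_recidivism_score(25, 18, 3, 2, 1))  # -37.0
```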

Critiques and legal rulings

In July 2016, the Wisconsin Supreme Court ruled that COMPAS risk scores can be considered by judges during sentencing, but the scores must be accompanied by warnings describing the tool's "limitations and cautions."
A general critique of the use of proprietary software such as COMPAS is that, because the algorithms it uses are trade secrets, they cannot be examined by the public or by affected parties, which may be a violation of due process. Additionally, simple, transparent, and more interpretable algorithms have been shown to make predictions approximately as accurately as the COMPAS algorithm.
Another general criticism of machine-learning-based algorithms is that, because they are data-dependent, biased data will likely yield biased results.

Accuracy

In 2016, Julia Angwin was co-author of a ProPublica investigation of the algorithm. The team found that “blacks are almost twice as likely as whites to be labeled a higher risk but not actually re-offend,” whereas COMPAS “makes the opposite mistake among whites: They are much more likely than blacks to be labeled lower-risk but go on to commit other crimes.” They also found that only 20 percent of people predicted to commit violent crimes actually went on to do so.
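The disparity described above is a difference in error rates between groups: a higher false positive rate for one group, a higher false negative rate for the other. A minimal sketch of how such rates are computed, using entirely synthetic labels (these numbers are illustrative and not ProPublica's data):

```python
def false_positive_rate(high_risk, reoffended):
    """Share of people who did NOT reoffend but were labeled high risk."""
    flagged = [h for h, r in zip(high_risk, reoffended) if not r]
    return sum(flagged) / len(flagged)

# 1 = labeled high risk / actually reoffended; 0 otherwise (synthetic data)
group_a = {"high": [1, 1, 0, 1, 0, 0], "re": [1, 0, 0, 1, 0, 0]}
group_b = {"high": [0, 1, 0, 0, 0, 1], "re": [0, 1, 0, 0, 1, 1]}

fpr_a = false_positive_rate(group_a["high"], group_a["re"])  # 0.25
fpr_b = false_positive_rate(group_b["high"], group_b["re"])  # 0.0
```

The point of the comparison is that two groups can receive similar overall predictive accuracy while one group bears a much higher false positive rate, which is the pattern ProPublica reported.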
In a letter, Northpointe criticized ProPublica's methodology and stated that it "does not agree that the results of your analysis, or the claims being made based upon that analysis, are correct or that they accurately reflect the outcomes from the application of the model."
Another team at the Community Resources for Justice, a criminal justice think tank, published a rebuttal of the investigation's findings. Among several objections, the CRJ rebuttal concluded that ProPublica's results "contradict several comprehensive existing studies concluding that actuarial risk can be predicted free of racial and/or gender bias."
A subsequent study showed that COMPAS software is no more accurate than predictions made by people with little or no criminal justice expertise. The researchers found that: "On average, they got the right answer 63 percent of their time, and the group's accuracy rose to 67 percent if their answers were pooled. COMPAS, by contrast, has an accuracy of 65 percent."