Artificial intelligence (AI) and machine learning technologies have progressed at an unprecedented rate over the last decade, particularly in industries such as healthcare, finance, and energy.

These areas benefit from massive amounts of data that can be sorted and analyzed using AI, but how well do AI tools perform when faced with ethical choices? How does AI assess the difference between right and wrong? At what point should AI be deployed in fields like law?

The decision to apply AI in law requires careful consideration of AI’s capability to discern and navigate complex ethical dilemmas. It also involves understanding the legal, social, and practical implications of its use and its consequences for society. To effectively implement AI in law, algorithms must account for existing racial bias.

The line between technological advancement and ethical responsibility

The debate over whether AI should be used for efficiency or to influence moral judgments is central to discussions about actuarial risk assessments in criminal administration. Actuarial risk assessments rely on historical data to estimate the probability of future events, often focusing on patterns in behaviour. Factors such as criminal history, employment, and substance abuse are assigned numerical values based on a ‘risk scale’ designed to measure an offender’s likelihood of reoffending.
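
To make that mechanism concrete, here is a minimal, purely illustrative sketch of how such a risk scale could work. The factors, weights, and cut-offs below are hypothetical and do not reproduce any real assessment instrument; the point is only that a handful of weighted inputs are collapsed into a single low, medium, or high label.

```python
# Toy illustration of an actuarial risk score: each factor is given a
# hand-picked weight, the weighted factors are summed, and the total is
# mapped onto a low/medium/high 'risk scale'.
# All factors, weights, and thresholds are invented for illustration.

ILLUSTRATIVE_WEIGHTS = {
    "prior_convictions": 2.0,   # criminal history
    "unemployed": 1.5,          # employment status (1 = unemployed)
    "substance_abuse": 1.0,     # documented substance abuse (1 = yes)
}

def risk_score(factors: dict[str, float]) -> float:
    """Sum each factor multiplied by its illustrative weight."""
    return sum(ILLUSTRATIVE_WEIGHTS[name] * value for name, value in factors.items())

def risk_level(score: float) -> str:
    """Bin the numeric score into low, medium, or high categories."""
    if score < 3:
        return "low"
    if score < 6:
        return "medium"
    return "high"

example = {"prior_convictions": 2, "unemployed": 1, "substance_abuse": 0}
score = risk_score(example)
print(score, risk_level(score))  # 5.5 medium
```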

Can AI tools — which use algorithms to assess individuals’ projected risk of criminal behaviour — enhance the efficiency of judicial decision-making? These algorithms incorporate ‘normative judgments’ — evaluations of the rightness or wrongness of an action — into their predictions. This is intended to reduce the influence of subjective biases, including racial biases, when sentencing individuals.

These tools may help identify individuals who need frequent carceral supervision — such as individuals with a high risk of re-offending — and those who could benefit from alternative interventions, such as rehabilitation programs or restorative justice approaches. Carceral supervision refers to detaining individuals in prisons, jails, and other correctional facilities as a form of punishment. 

In principle, if AI tools assess a defendant’s risk of reoffending — whether low, medium, or high — judges are more likely to arrive at a sentencing decision deemed ‘objectively correct,’ or the fairest outcome. In a 2021 article, Indiana University law professor Jessica Eaglin states that AI tools could reduce incarceration by consistently predicting who actually needs carceral supervision.

Over time, this methodology could shift the justice system from a punitive, incarceration-based approach to a more rehabilitative, individualized model. However, current risk assessments reinforce a racialized view of crime while appearing neutral. This undermines fair decision-making, as they fail to take an individualized approach to justice.

In his 2020 article published in the ACM Digital Library, Harvard University PhD candidate Ben Green argued that “algorithmic fairness narrows the scope of judgments about justice.” Essentially, risk assessments and other machine learning models promote criminal punishment by perpetuating existing carceral practices that are overdue for reevaluation.

While AI tools aim to reach the ‘right’ decision within given constraints, they have yet to fully capture the moral weight of ethical decisions, which are shaped by intangible human factors.

Critical race theory and its impact on law and technology

The use of AI tools in areas like criminal administration, law enforcement, and surveillance often aims to achieve specific goals, such as actuarial risk assessment. However, this approach overlooks the ethical dilemmas these tools present, which in turn reinforce existing systemic racial hierarchies in society.

Eaglin suggests that race and technology should be viewed as co-productive forces in achieving substantive justice. As critical race theory posits, race is a social construct shaped by people’s lived experiences and interactions, reinforced by institutional structures that perpetuate societal norms and racial categories. Technology must be understood in the same way, as it possesses its own political dimensions that either uphold or challenge racial hierarchies. 

We must look beyond the algorithm and question how AI tools are designed, and who holds discretion over the subjective choices that shape their use in sentencing. 

Understanding race and technology as a social phenomenon invites a more critical perspective on AI tools’ presumed objectivity. This is especially important for vulnerable populations, such as Black, Indigenous, and people of colour, who are disproportionately susceptible to wrongful convictions.

In 2019, Statistics Canada reported that “nearly one in five (18 per cent) Black people reported having ‘not very much’ or ‘no’ confidence in the police, which is more than double the proportion among the non-Indigenous, non-racialized population (8 per cent).”

These numbers are part of a larger conversation about how law, as a system of societal norms and legal entities, contributes to the social construction of race — a process mirrored in the technologies shaping our daily lives. Eaglin suggests that the challenge isn’t to restrict AI through law but to examine how law presents conflicting moral imperatives for the tools currently in use.