Artificial intelligence (AI) has begun disrupting legal systems: AI tools are generating fictitious legal cases that threaten to erode trust in justice worldwide.
The technology, already notorious for producing deepfakes and misinformation, is now reaching the judiciary as well.
Courts rely on lawyers to present accurate legal arguments so that disputes can be resolved. The emergence of AI-generated fake legal content is therefore alarming and raises serious legal and ethical concerns. Generative AI can "hallucinate": where gaps exist in its training data, it fills them with material that reads convincingly but is inaccurate.
The impact is severe when such errors infiltrate legal proceedings, especially given the time pressures lawyers face and the limited access many people have to legal services. Carelessness and shortcuts in legal research could tarnish the profession's reputation and diminish public faith in the administration of justice.
The most prominent incident is the 2023 US case Mata v Avianca, in which lawyers submitted a ChatGPT-researched brief containing citations to non-existent cases to a New York court. The repercussions were severe: the case was dismissed and the lawyers were sanctioned.
Other instances include Donald Trump's former lawyer, Michael Cohen, citing fictitious cases generated by Google Bard, while similar episodes in Canada and the UK underline the same troubling trend.
In response, legal regulators worldwide have issued guidelines, opinions and, in some cases, outright bans on the use of generative AI in legal work. Guidance is helpful, but a mandatory requirement that lawyers verify any AI-produced material is critical, especially for practitioners who are less aware of the technology's limitations.
Australian legal bodies have produced guides and articles on responsible AI use, but courts and the wider legal community could go further: establishing clear rules or practice notes on AI use in litigation, and making technology competence part of lawyers' ongoing education. These steps would help maintain public trust and uphold the integrity of the legal system and its professionals.