Tackle Self-Reps, Sovereigns & AI in Court

July 28, 2026 | 1:00 PM – 2:00 PM AEST

Australian courts are buckling under a surge of self-represented litigants deploying sovereign citizen tactics and unverified AI-generated arguments, turning routine cases into multi-year delays and risking unjust outcomes.

Key takeaways

  • Self-represented litigants have proliferated in Australian courts, a trend exacerbated by sovereign citizen pseudo-legal claims and generative AI misuse, driving sharp increases in case backlogs and judicial scrutiny in 2025.
  • Recent high-profile incidents, including violent clashes and police shootings linked to sovereign ideologies, alongside more than 80 documented AI hallucination cases (mostly by self-reps), have prompted courts to issue strict guidelines and costs orders.
  • The convergence creates non-obvious tensions: while AI offers access to justice for those unable to afford lawyers, unchecked use amplifies fringe arguments, erodes court efficiency, wastes resources, and risks miscarriages of justice.

Courts Under Strain

Australian courts face mounting pressure from a confluence of self-represented litigants (self-reps), sovereign citizen movements, and the rapid adoption of generative AI in legal proceedings. Self-representation has long strained resources, but recent years have brought sharper rises, particularly in family, civil, and lower courts, where high legal fees push people to go it alone. In Queensland, experts warn that matters that should resolve in days now stretch into years because of procedural chaos.

Sovereign citizens, adherents of pseudo-legal theories that reject state authority, amplify the problem. They flood courts with irrelevant filings, challenge judicial legitimacy, and invoke bizarre arguments in minor matters such as traffic fines or licensing disputes. A November 2025 report highlighted how their growing presence is clogging the system, while incidents such as a 2025 Victorian police shooting by an alleged sovereign citizen underscore risks that escalate well beyond delay.

Generative AI has turbocharged these issues since ChatGPT's launch in late 2022, with usage exploding in 2025. Researchers identified more than 80 court cases involving AI misuse, three-quarters of them by self-reps, often featuring fabricated citations or incoherent submissions. High Court Chief Justice Gageler described judges as 'human filters' for machine-generated content and deemed the situation unsustainable. Courts have responded with guidelines: Queensland updated its advice for non-lawyers in 2025 to warn of costs orders, Victoria's Law Reform Commission tabled a 2026 report with 30 recommendations for safe AI use, and federal and state benches issued practice notes mandating verification and disclosure.

The stakes are tangible. Delays inflate court backlogs, increase public costs, and deny timely justice, particularly to vulnerable parties in family or migration cases. Self-reps risk adverse costs orders or dismissal of their cases over AI errors, while sovereign tactics can lead to contempt findings or licence cancellations. Tensions arise between democratising access to justice via AI and preserving judicial integrity: fringe users exploit the tools to weaponise volume and pseudolaw, yet outright bans could widen inequality in a system where legal aid covers only a fraction of need.
