THE SILICON SHYSTER
Why AI and Law are a Match Made in Circuit-Board Hell
Welcome to the future. It’s exactly like the past, but with more electricity and significantly less accountability. If you’ve spent any time on LinkedIn lately, you’ve probably seen a “productivity guru” – usually someone who hasn’t worked a 60-hour billable week since the advent of the dial-up modem – claiming that Artificial Intelligence (AI) is about to make lawyers as obsolete as the fax machine. “Just prompt it!” they cry. “It’s free! It’s fast! It’s basically a Junior Associate that doesn’t complain about the coffee or ask for a raise! Hoorah! Hoorah!”
Yeah. Yeah. Yeah. We get it already.
At AJS, we love tech. We live for it. Our entire mission is to provide the digital backbone for modern law firms. But there’s a fine line between using a tool to sharpen your axe and handing the axe to a sleep-deprived robot that thinks common law is a popular genre of folk music. The reality is that for all its intelligence, AI is essentially a statistical parrot. It predicts the next most likely word in a sentence based on patterns, not on a deep-seated understanding of the Constitution of the Republic of South Africa or the subtle nuances of the Labour Relations Act (LRA).
The Jurisdictional Junkie – Why Your AI is Hallucinating
The problem with Large Language Models (LLMs) isn’t that they aren’t smart; it’s that they’re pathologically eager to please. They’re the ultimate “yes-men” of the digital age. They’re programmed to provide an answer, regardless of whether that answer exists in reality. If you ask an AI to draft an employment contract for a Cape Town-based boutique, it doesn’t look at the South African statutes and think, “I must ensure Section 189 compliance regarding retrenchments”.
Instead, it scans a trillion data points from Texas, Tasmania, and the moon, and stitches together a legal Frankenstein’s monster. It might give you a “Right to Work” clause from Florida (which is about as useful in South Africa as a chocolate furnace) or a force majeure clause that hasn’t been valid here since the turn of the century. Worse still, it might simply hallucinate. In AI terms, a hallucination is when the model confidently states a “fact” that’s entirely fabricated. Because it lacks a moral compass or a licence to practise, it doesn’t feel the cold sweat of professional negligence when it makes things up.
It just keeps on typing. Ho hum.
A Tale of Two Shysters – Fictional Citations, Real-World Repercussions
There is no rebrand for a lawyer caught lying to a judge. In the legal fraternity, your reputation is your only currency. When you present an AI hallucination as a legal authority, you aren’t just innovating – you’re committing professional suicide. Whether you’re a firm in Pietermaritzburg or a giant in Manhattan, the results of treating ChatGPT like Senior Counsel are the same – referrals, fines, and a permanent stain on your professional record that no amount of SEO can scrub away. No matter how much you pay a PR firm.
1. The South African Debacle – Mavundla v MEC
In the Pietermaritzburg High Court case of Mavundla v MEC: Department of Co-Operative Government and Traditional Affairs, we saw the dark side of “efficiency”. The legal team representing politician Philani Mavundla submitted a supplementary notice containing nine case citations. Seven of those nine – including a supposed landmark case called Pieterse v The Public Protector – were entirely fictitious. They didn’t exist in any law report, any database, or any reality other than the one generated by a chatbot.
Judge Elsje-Marié Bezuidenhout was not impressed by this “digital innovation”. Oh no. She dismissed the application with costs and took the unprecedented step of referring the entire legal team – senior attorneys and candidate practitioners alike – to the Legal Practice Council (LPC) for a professional misconduct investigation.
The message was clear – “The AI made me do it” is not a valid defence in a court of law.
2. The “Legal Genius” Blunder – Northbound Processing
If irony were an Olympic sport, this case would take the gold. The Gauteng High Court recently dealt with Northbound Processing (Pty) Ltd v South African Diamond and Precious Metals Regulator. The legal team used a subscription tool ironically named Legal Genius. This tool hallucinated several cases to support a mandamus application.
- A mandamus application (Latin for “we command”) is a high court order compelling a public authority, government official, or lower court to perform a mandatory public or statutory duty it has neglected. It is used as a last resort to end unreasonable administrative delays and ensure accountability.
While the court acknowledged there was no specific intent to mislead the bench, the result was the same. The practitioners were referred to the LPC. This case serves as a grim reminder that even “specialised” legal AI tools are prone to the same failures as general-purpose chatbots. Relying on them without manual, line-by-line verification by a human lawyer is a breach of the duty of care owed to the court.
3. Global Warnings – Mata v Avianca (USA) and Harber v HMRC (UK)
The South African experience isn’t unique. We must look at the OG of AI failures – Steven Schwartz in the US case of Mata v. Avianca. Schwartz submitted a brief filled with fake precedents like Varghese v China Southern Airlines. When the judge questioned the citations, Schwartz asked ChatGPT if the cases were real. The AI said “Yes,” and he believed it. He was fined $5,000, and his firm’s name is now a global punchline.
Similarly, in the UK, Harber v The Commissioners for HMRC saw a taxpayer provide fake cases to a tribunal to avoid penalties. The court struck out the evidence and issued a formal warning – the “AI excuse” has expired. Courts across the globe are moving from curiosity about AI to holding practitioners strictly accountable for its errors.
The Government’s “Oopsie” – Policy Drafted on Phantom Research
If you think the private sector has it bad, look at the South African Government. In April 2026, the Department of Communications and Digital Technologies (DCDT) and Home Affairs (DHA) were caught in a catastrophic fake research scandal. It was the ultimate embarrassment for a state trying to position itself as a Fourth Industrial Revolution leader.
Minister Solly Malatsi was forced to withdraw the Draft National AI Policy after a sharp-eyed academic noticed that the document cited at least six fictitious academic sources. These were not just minor errors. If only. The AI had invented entire journals and authors to provide evidence for the policy’s claims.
Simultaneously, Minister Leon Schreiber’s Department of Home Affairs had to suspend two senior officials after hallucinations were found in the Revised White Paper on Citizenship. It’s the ultimate irony – the very policy designed to regulate AI within our borders was sabotaged by unsupervised AI use. This wasn’t just a technical glitch. It was a fundamental failure of governance.
When the state stops verifying its own research, the foundation of public policy crumbles into digital dust.
The “Data Centre Hunger Games” in Cape Town
While we play with these digital toys, we’re building massive, resource-heavy shrines to house the servers that power them. Cape Town is currently the epicentre of the “Data Centre Hunger Games”. Four major new data centres – massive projects by giants like Equinix and Teraco – are projected to swallow a staggering 34% of Cape Town’s current electricity supply.
This isn’t just a line on a balance sheet. It’s a threat to the city’s infrastructure. It has sparked fierce opposition from groups like the Housing Assembly and Foxglove, who are represented by the Legal Resources Centre. These “energy hogs” threaten the city’s fragile grid and scarce water resources (required for cooling).
The legal battle over the King Air Industria site highlights a growing tension – do we prioritise the “clouds” of big tech over the lights and water of our citizens? We’re essentially dimming the city’s actual lights to power machines that, as proven by the DCDT and DHA scandals, are quite prone to lying to us.
It’s a high-priced trade-off for a technology that we still haven’t learned to control.
The CCMA – Where “Tech Disrupted” Employment Goes to Die
Let’s bring it down to the level of the average disruptive entrepreneur. Imagine Dave, a tech startup founder who thinks paying for an attorney to draft an employment contract is “old-school”. He uses a chatbot to generate a contract that looks professional. It has “Whereas” and “Heretofore” and all the fancy-shmancy, legal-sounding words.
Six months later, Dave fires an employee. He arrives at the CCMA only to realise that his AI-drafted contract is null and void. Under South African law, specifically the LRA and the Basic Conditions of Employment Act (BCEA), contracts must meet strict statutory requirements regarding notice periods, leave, and disciplinary procedures.
Dave’s AI, being trained on US data, included an “at-will” termination clause – meaning he thought he could fire someone for no reason. In South Africa, that is an automatic ticket to a massive payout. The CCMA Commissioner doesn’t care about Dave’s “slick” AI output. They apply South African common law and statutory protections. By trying to save R10,000 on a lawyer, Dave just cost himself R200,000 in settlements. The disruption Dave achieved was solely to his own bank account.
And Dave becomes another punch line.
How to Prompt (Without Ending Up in the LPC Disciplinary Office)
Look, we aren’t Luddites. AI is a fantastic tool for brainstorming, drafting a polite email to a difficult client, or summarising a 400-page meeting transcript. But for non-legal documents, you must follow the rules of “Prompt Engineering” to avoid the hallucination trap –
- Be a Dictator, not a Friend – don’t ask, “Can you maybe please write a blog?” Say, “Draft a 500-word blog for a legal tech website. Tone – Professional. Do not use fictional citations or invent case law.”
- Give it a Persona – tell the AI exactly who it is. “You are a professional legal technology expert with 20 years of experience in the South African legal sector”. This constrains the model’s output to a specific professional standard.
- Jurisdictional Handcuffs – explicitly state – “Limit all information and legal principles to the Republic of South Africa and its statutes”.
- The “Golden Rule” of Verification – treat every output as a lie until you prove it with a real source. If the AI gives you a fact, check it on Google or SAFLII. If you didn’t check it, you didn’t write it.
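For the technically inclined, the first three rules above translate neatly into how chat-style AI tools are actually driven under the hood – a persona and hard constraints in a “system” instruction, and the concrete task as the “user” instruction. The sketch below is purely illustrative (the wording, the `build_prompt` helper, and the message format are our own examples, not a specific product’s API), but it shows the structure most chat-based LLM services expect:

```python
# Illustrative only: assembling the "persona + jurisdictional handcuffs +
# dictator-style task" rules into the system/user message structure used
# by most chat-style LLM APIs. Names and wording are hypothetical examples.

PERSONA = (
    "You are a professional legal technology expert with 20 years of "
    "experience in the South African legal sector."
)

JURISDICTION = (
    "Limit all information and legal principles to the Republic of "
    "South Africa and its statutes."
)

TASK = (
    "Draft a 500-word blog for a legal tech website. Tone: professional. "
    "Do not use fictional citations or invent case law."
)

def build_prompt(persona, constraints, task):
    """Fold the persona and hard constraints into one system message,
    leaving the concrete instruction as the user message."""
    system = persona + " " + " ".join(constraints)
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": task},
    ]

messages = build_prompt(PERSONA, [JURISDICTION], TASK)
for m in messages:
    print(m["role"].upper(), "->", m["content"][:60], "...")
```

Note what the code cannot do: nothing in that structure stops the model from hallucinating. The fourth rule – line-by-line human verification against Google or SAFLII – has no programmatic substitute.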
While you ponder the above, if you’re in need of a service provider with a proven track record, want to find out how to incorporate a new tool into your existing practice management suite, or simply want to get started with legal tech, feel free to get in touch with AJS. We have the right combination of systems, resources, and business partnerships to assist you with incorporating supportive legal technology into your practice. Effortlessly.
AJS is always here to help you, wherever and whenever possible!
– Written by Alicia Koch on behalf of AJS
(Sources used and to whom we owe thanks – Michalsons here and here; Myers Attorneys; Schoeman Law; GCBSA; Moonstone; Daily Maverick; Lexology; GoLegal; CDH; Reuters; YouTube; Studocu and CNBC Africa).