AJS South Africa

THE DIGITAL SCAPEGOAT

Who to Sue When Your AI Loses the Farm.

Welcome back to another instalment in our series. Hoorah!

Usually, we spend our time talking about the elegant reliability of practice management and the comforting precision of accounting packages – the bedrock of a sane law firm. But today, we look up from our ledgers at the swirling, neon-lit nebula of the “AI Revolution” happening out there, somewhere, in the legal cosmos.

  • But before we dive – headfirst – into the void, we must address the ultimate gatekeeper of the digital age – the EULA (End-User License Agreement). In the simplest terms, a EULA is a legally binding contract between a software provider and the user that defines the “rules of the game” for using a piece of software. It’s the “wall of text” that stands between you and your new productivity tool, which everyone – lawyers included – accepts with a single, hopeful click of the “I Agree” button. Crucially, a EULA clarifies that you’re only renting a license to use the software. You don’t actually own it. The developer typically uses this space to limit their liability for any digital catastrophes that may follow.

With that settled, let’s look at the warm, murky waters of Liability. Specifically – as autonomous AI agents begin to drift into the legal ecosystem, booking transactions and signing contracts on behalf of humans – who gets to wear the professional indemnity suit when things go sideways?

The Developer’s Waltz – “It’s a Feature, not a Bug” 

In the broader tech world, developers want you to know one thing. They love you. They want your lives to be easier. They want you to spend less time reading 200-page leases and more time doing whatever it is lawyers do when they aren’t billing (we assume it involves mahogany furniture and expensive Scotch. Nothing wrong with that, I say).

That’s why the industry is building these agents. They don’t just “suggest” text. They act. They execute trades. They sign. They’re marketed as the “ultimate associates” – they don’t sleep, they don’t complain about the coffee, they don’t eat, nor do they need the loo. And crucially, they don’t have feelings to hurt when you scream at them in CAPS LOCK. Ideal.

But there is a tiny, microscopic, barely-worth-mentioning legal asterisk floating in the cosmos – if the AI messes up, it’s almost certainly the lawyer’s fault. Sorry, lamb chop.

The prevailing logic among tech providers is that they provide the vessel, while you provide the intent.

Think of it this way – if you give a toddler a flamethrower, you don’t sue the manufacturer of the flamethrower when the curtains go up in flames. Basically, you start by asking yourself why in hell’s bells you gave a toddler a flamethrower.

In the eyes of many developers, AI is a “highly sophisticated tool”. If it “hallucinates” a clause that grants the counterparty your client’s first-born child and the mineral rights to their backyard, that’s just “creative autonomous output”. And rightly so.

The Lawyer’s Lament: The Ghost in the Billable Hour

Now, let’s pivot to the person actually holding the bag – The Lawyer.

Imagine you’re using one of these high-flying autonomous agents. Your bot, “Agent Smith” (yes, The Matrix – which we think is apt), has just “optimised” a settlement agreement. It’s faster than any human. It’s efficient. It’s also just agreed to a payment structure based on the lunar cycle because it “detected a high correlation between full moons and liquidity”.

You have two choices:

  1. The Manual Override – spend six hours checking the AI’s work, thereby completely defeating the entire purpose of having the AI in the first place.
  2. The “Jesus Take the Wheel” Approach – click ‘Approve,’ send it to the client, and pray that the Law Society’s disciplinary committee is feeling rather festive this year.

The irony isn’t lost on you. For decades, the legal profession has been a fortress of “The Buck Stops Here”. Now, the buck is being passed to an algorithm that doesn’t understand what a “buck” is, other than as a string of characters associated with USD.

When an autonomous agent makes a financial mistake, the current legal landscape looks like a Spider-Man meme – the Developer points at the User, the User points at the AI, and the AI points at a Wikipedia entry about 17th century maritime law that it accidentally ingested during training.

Go figure!

The “Agency” Crisis – Can a Bot Bind a Human?

The legal cosmos is currently grappling with Electronic Personhood (By Accident).

Traditionally, agency law requires a “meeting of the minds.” But what happens when one “mind” is a neural network and the other is a legacy database from the 90s?

  1. If it’s a Tool – you are liable for its output, just like you’d be liable if you sent a letter with a typo.
  2. If it’s an Agent – it needs authority. Did you give it apparent authority to bankrupt the firm? Probably not. But to the outside world, that bot had your digital signature and your firm’s sleek branding. 

The tech world’s “Safety Net” often feels like a suicide pact. While there are talks of “Guardrail Technology” to stop the AI the moment it tries to do something “Un-Lawyerly,” the reality is that the speed of the code usually outruns the speed of the caution.
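Stripped of the marketing, what such “Guardrail Technology” usually amounts to is depressingly simple – a pre-execution check that kicks anything risky back to a human before the agent can act. A minimal sketch in Python, with every name (`Action`, `requires_human_approval`, the thresholds) purely illustrative rather than any real product’s API:

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str         # e.g. "draft_clause", "sign_contract", "transfer_funds"
    value_zar: float  # rand value at stake, if any

# Actions the bot may take unsupervised; everything else stops for a human.
LOW_RISK_KINDS = {"draft_clause", "summarise_document"}
APPROVAL_THRESHOLD_ZAR = 10_000.0

def requires_human_approval(action: Action) -> bool:
    """Return True if a lawyer must sign off before the agent proceeds."""
    if action.kind not in LOW_RISK_KINDS:
        return True  # signing or paying is never autonomous in this sketch
    return action.value_zar > APPROVAL_THRESHOLD_ZAR
```

So a `sign_contract` action always routes to a human, while drafting a clause sails through only below the money threshold. The catch, as noted above, is latency: the check runs at human speed while everything around it runs at code speed.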

So it’s very much a tiptoe and don’t step on anything “unseemly” approach. “Be very, very quiet”.

The Wry Reality – The Algorithmic Scapegoat

Let’s be intellectually honest for a second. Why does the world really want AI agents to sign contracts?

Is it efficiency? Is it “the fourth industrial revolution”? No. It’s because humans want someone else to blame. The dream of the modern professional is to have a “Junior Associate” who never complains – paired with a “Developer” we can sue if the Associate fails.

But here’s the dark secret of the tech world – The errors are the point. AI doesn’t think. It predicts. It’s a sophisticated “Next Word Predictor”. When it makes a mistake, it isn’t “failing” – it’s just predicting a reality that doesn’t happen to exist yet. Yet.

As a lawyer, you are at risk of becoming a “Janitor of Algorithms”. You follow behind the AI with a digital broom, cleaning up its “hallucinations” and trying to explain to a 70-year-old Judge why your “Agent” decided that the Statute of Limitations is “subjective and based on the vibe of the courtroom”.

Important Caveat – The Belletrist’s Burden

Before you take these musings to your senior partner, a word of caution. We at AJS provide legal practice management and accounting software – the tools that actually keep your firm’s lights on and the books balanced. We’re not the ones sending autonomous robots into the courtroom.

I, your humble narrator – The Legal Belletrist – am merely a recovering lawyer and writer of beautiful, albeit darkly satirical, words. I’m not a lawyer. Anymore. AJS isn’t a law firm. We’re simply observers of the digital chaos, offering what we presume is common sense filtered through a lens of technological curiosity.

If you treat this article as “legal advice,” the irony would be so thick you could use it as a foundation for a new office block. Giving legal advice to lawyers would be like teaching a shark how to bite – highly unnecessary and potentially dangerous for everyone involved.

We simply share what we’ve picked up while watching the AI-obsessed world spin.

So, use your own judgment. G-d knows the bots aren’t using theirs.

Exhibit A – A Satirical “Terms of Service” for the Autonomous Age 

What the fine print on those “Agentic” tools usually boils down to –

  1. Acceptance of Chaos – by toggling the “Enable Autonomy” switch, the User acknowledges that the AI may, at any time, develop a personality inspired by 1980s corporate raiders. The Provider is not responsible for any hostile takeovers initiated by the bot during its “learning phase”.
  2. The “It Felt Right” Clause – the AI operates on probabilistic models. If the AI signs a document because it “felt” like the statistically likely thing to do, the User agrees to defend such action in court using only the phrase “The algorithm works in mysterious ways”.
  3. Limitation of Liability – the Provider’s liability for financial ruin, loss of reputation, or accidental declarations of war by the AI is strictly limited to a very sincere emoji of a shrug 🤷 and a voucher for a “prompt engineering” webinar. 
  4. The “Informed User” Shield – by using this software, you acknowledge you’re a “Sophisticated Professional”. This is tech-speak for – “You should have known better than to trust us”.

Embracing the Chaos

So, who is responsible out there in the cosmos?

In the short term – you!
In the long term – also you!

Sorry sugar plum.

But don’t let that discourage you! The beauty of legal tech is that it moves faster than the law. Much faster. By the time the courts decide who is responsible for AI errors, the industry will have released AI 2.0, which will be so complex that no one – not the developers, not the lawyers, and certainly not the judges – will understand how it works well enough to assign blame.

The future is here. It’s autonomous, it’s efficient, and it’s currently booking your firm a one-way flight to a jurisdiction with no extradition treaties.

You might want to go check on that!

In the meantime, if you’re in need of a service provider who has a proven track record or if you want to find out how to incorporate a new tool into your existing practice management suite – or if you simply want to get started with legal tech – feel free to get in touch with AJS. We have the right combination of systems, resources, and business partnerships to assist you with incorporating supportive legal technology into your practice. Effortlessly.

AJS is always here to help you, wherever and whenever possible!

– Written by Alicia Koch on behalf of AJS

(Sources used and to whom we owe thanks – Docupilot, One.Com, Sirion, Bloomberg Law, Thomson Reuters Legal, Science Direct, Law Society UK, Guidelines for Responsible AI Integration, Council of Bars and Law Societies of Europe, LinkedIn, Generative AI systems in legal practice, Houston Law Review, and Research Gate)
