AI isn’t just assisting your business anymore. It’s beginning to make commitments on your behalf. Enter AI agents: from procurement bots ordering supplies to customer-facing chatbots offering refunds, autonomous systems are stepping into the domain once reserved for negotiators, lawyers, and executives.
South Africa’s Electronic Communications and Transactions Act (ECTA), read together with the common law of agency, already settles the question: these AI-driven transactions are not hypothetical. They are legally binding.
AI agents under the law: the basics
ECTA defines an “electronic agent” as a programme that independently initiates or responds to data messages. Section 20, which governs automated transactions, is unequivocal:
- Contracts may be concluded by electronic agents.
- A company deploying such an agent is presumed bound—even if no human reviewed its actions.
- Counterparties are not bound unless a natural person on their side had an opportunity to review the terms before the agreement was formed.
- Errors don’t automatically void a contract; they must be flagged and corrected promptly.
Drafted in 2002 for simple scripts, this framework applies seamlessly to today’s AI agents.
The deeper legal backdrop: agency principles
At common law, if you appoint an agent, you carry the risk.
- Authority: Agents need authorisation, whether express, implied, or apparent.
- Representation: Acts within authority bind the principal.
- Ratification & estoppel: Principals may be bound by unauthorised acts if they later ratify them, or if third parties reasonably rely on apparent authority.
Together with ECTA, the result is clear: deploying AI agents is functionally equivalent to appointing human ones.
The unexpected consequences for organisations
- Binding commitments without awareness: Systems can conclude multi-year contracts or extend warranties overnight, without any human sign-off. This is the evolutionary leap: laws written for early e-commerce now bind organisations in the age of AI.
- Shadow AI = shadow contracts: Employees plugging unauthorised tools into workflows can create obligations the board never approved. Pleading ignorance won’t save leadership when disputes arise; the law has already warned them.
- Error is not an escape: AI hallucinations or mis-prompts don’t vanish into the ether. Unless errors are flagged and corrected quickly, organisations may be forced to perform. Delay turns oversight into liability.
- Compliance failures at scale: An AI error is not just a single mistake—it is a systematised injustice. From privacy breaches to unfair terms, AI can replicate legal violations thousands of times, multiplying risk.
- Third-party AI risks: Counterparties’ bots may transact without oversight, dragging your business into disputes. The fiction of “intention” is legally useful, but the mismatch between machine behaviour and human accountability is widening.
- Governance gaps = legal gaps: Courts may interpret weak governance as tacit authorisation. By tolerating AI use or accepting its outputs, organisations may be deemed to have given authority, even when the board never intended it.
Strategic implications for leadership
The law is already clear: AI contracts bind. The real question is whether your governance is ready. Executives and boards must:
- Establish policies for authorised AI use.
- Require human oversight for high-value transactions.
- Keep logs of AI negotiations and actions.
- Build rapid error-notification and correction into workflows (see the sketch below for how these controls might fit together).
- Update contracts to allocate AI risks explicitly.
This is not a technology problem—it is a board-level governance mandate.
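For technology teams asked to implement these controls, here is a minimal sketch in Python of how the oversight gate, audit log, and error-notification hook might fit together. The names (`AgentGovernor`, `ProposedDeal`, `APPROVAL_THRESHOLD`) and the threshold value are illustrative assumptions, not a prescribed implementation: high-value deals are routed to a human approver, every action is written to an append-only audit log, and suspected errors are flagged the moment they surface.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
import json

# Illustrative threshold (assumed value, e.g. in ZAR) above which a human must sign off.
APPROVAL_THRESHOLD = 50_000

@dataclass
class ProposedDeal:
    counterparty: str
    description: str
    value: float

class AgentGovernor:
    """Wraps an AI agent's contracting actions with an oversight gate and audit log."""

    def __init__(self, log_path: str = "agent_audit.log"):
        self.log_path = log_path

    def _log(self, event: str, deal: ProposedDeal, detail: str = "") -> None:
        # Append-only audit trail: what was done, with whom, when, and the outcome.
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event": event,
            "counterparty": deal.counterparty,
            "value": deal.value,
            "detail": detail,
        }
        with open(self.log_path, "a") as f:
            f.write(json.dumps(record) + "\n")

    def execute(self, deal: ProposedDeal, human_approver=None) -> bool:
        """Conclude the deal only if it passes the human-oversight gate."""
        if deal.value >= APPROVAL_THRESHOLD:
            # High-value transactions require explicit human sign-off.
            if human_approver is None or not human_approver(deal):
                self._log("blocked", deal, "above threshold, no human sign-off")
                return False
            self._log("approved_by_human", deal)
        self._log("concluded", deal)
        return True

    def flag_error(self, deal: ProposedDeal, reason: str) -> None:
        # Rapid error notification: record the problem immediately so the
        # counterparty can be notified before delay turns it into liability.
        self._log("error_flagged", deal, reason)
```

In practice the approval callback would route to procurement or legal, and the log would feed the organisation’s record-keeping systems; the design point is that the gate and the audit trail exist before the agent acts, not after a dispute arises.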
How ITLawCo can help
We help clients turn AI contracting from a risk into a readiness advantage:
- AI governance audits: mapping authorised and shadow AI use.
- AI contracting playbooks: human-in-the-loop safeguards, error protocols, escalation templates.
- Board advisory: embedding AI risk into enterprise governance structures.
- Contractual shields: drafting protective clauses against AI-driven obligations.
Fast, fearless legal—ready for your AI agents.



