We need to talk more about agentive AI under POPIA. Let’s start with some context…

POPIA was written for a world where machines obeyed. Data was collected for a specified purpose (s 13), processed within tidy boundaries, and destroyed when no longer needed (s 14). It is a law of static intentions, straight lines, and bureaucratic certainties.

But agentive AI has entered the scene: systems that do not just respond to prompts but pursue goals, perceiving, deciding, and learning continuously, often without waiting for human instruction. They do not ask permission. And in that restless autonomy lies a collision between a law drafted for human clerks and a technology that behaves as if it were almost alive.
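
For readers who want the mechanics, a minimal sketch of that loop follows: perceive, decide, act, repeat. Every name and threshold in it is invented for illustration.

```python
# A minimal sketch of the perceive-decide-act loop behind the word
# "agentive". Everything here is invented for illustration.
import random

def perceive() -> float:
    """Stand-in for sensing the environment (a feed, a sensor, a queue)."""
    return random.random()

def decide(signal: float) -> str:
    """A goal-directed policy: act when the signal crosses a threshold."""
    return "act" if signal > 0.5 else "wait"

for step in range(5):  # a production agent would loop indefinitely
    print(f"step {step}: {decide(perceive())}")  # no human prompt anywhere
```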

The black box and the transparency illusion

Section 71 of POPIA forbids decisions with legal or substantial effect from being taken solely by automated processing—unless two conditions are met:

  • the data subject must be allowed to make representations (s 71(3)(a)); and
  • the data subject must be given “sufficient information about the underlying logic” (s 71(3)(b)).

The law presumes there is a logic to show, a neat syllogism. But agentive AI is less courtroom transcript than shifting mist. Its decision-making is emergent, non-linear, and opaque even to its makers. To tell data subjects they can contest such a decision while knowing the logic is unknowable is not protection but a farce.
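
There are partial engineering answers. One, sketched below in deliberately simplified Python (the scorer, weights, and threshold are all hypothetical), is to route decisions through an interpretable surrogate whose per-feature contributions can actually be disclosed under s 71(3)(b), with adverse outcomes deferred to a human.

```python
# A minimal sketch, not a product: an interpretable surrogate whose
# per-feature contributions can be disclosed to a data subject under
# s 71(3)(b). Weights, features, and threshold are hypothetical.
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    subject_id: str
    outcome: str
    contributions: dict = field(default_factory=dict)  # feature -> weight * value

WEIGHTS = {"income": 0.4, "missed_payments": -1.2, "tenure_years": 0.3}
BIAS, THRESHOLD = -0.5, 0.0

def score(subject_id: str, features: dict) -> DecisionRecord:
    # Each contribution is individually meaningful, so the "underlying
    # logic" of the decision can be shown, line by line.
    contributions = {f: WEIGHTS[f] * features[f] for f in WEIGHTS}
    total = BIAS + sum(contributions.values())
    # Adverse outcomes are deferred to a person, keeping the decision
    # from being made "solely" by automated processing (s 71(1)).
    outcome = "approve" if total >= THRESHOLD else "refer to human reviewer"
    return DecisionRecord(subject_id, outcome, contributions)

record = score("ZA-0001", {"income": 1.2, "missed_payments": 2, "tenure_years": 4})
print(record.outcome)        # refer to human reviewer
print(record.contributions)  # the disclosable, per-feature logic
```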

The machine that cannot forget

POPIA requires that personal information “must not be retained any longer than is necessary” (s 14(1)) and that, once destroyed, the destruction be done “in a manner that prevents its reconstruction” (s 14(5)). This presumes information is a detachable object. In agentive systems, it is not. Once ingested, data mutates into the system’s statistical “genome”, encoded in the very parameters that govern its behaviour. To demand erasure is to demand amnesia from a machine designed to learn forever. The law insists on forgetfulness; the machine insists on recall. The clash is not procedural but existential.
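
A toy model makes the clash concrete. In the sketch below (the data and the “model” are invented), deleting the raw record satisfies the letter of s 14, yet the learned parameter still carries its imprint; the only unambiguous remedy is retraining without it, precisely what a continuously learning agent is not built to do.

```python
# A minimal sketch of the retention clash. The "model" is just a mean,
# and the records are invented, but the point survives scaling: the
# parameter encodes the record even after the record is deleted.
records = {"A": 2.0, "B": 4.0, "C": 9.0}  # raw personal information

def train(data: dict) -> float:
    return sum(data.values()) / len(data)  # one learned parameter

model = train(records)  # the parameter now reflects A, B and C
del records["C"]        # s 14 deletion of the raw record...
print(model)            # 5.0 -- C's value still shapes the parameter

model = train(records)  # retraining without C is the unambiguous fix
print(model)            # 3.0 -- C's influence is finally gone
```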

The paradox of purpose

Section 13 commands that personal information be collected “for a specific, explicitly defined and lawful purpose”. Yet the very point of agentive AI is to exceed yesterday’s categories. A fraud-detection agent must absorb tomorrow’s schemes; a medical agent must spot anomalies no doctor has imagined. These systems thrive by enlarging their scope. And so the paradox: the better an agent does its job, the more it risks illegality. The law’s virtue of purpose becomes its vice, a cage that punishes progress.
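
One pragmatic containment, sketched below with invented purpose names, is a purpose register: a gate that checks every act of processing against the purposes declared at collection and blocks, or escalates, anything beyond them, forcing a fresh s 13 declaration before the agent’s scope grows.

```python
# A minimal sketch of a purpose register, with invented purpose names:
# processing is gated against the purposes declared at collection (s 13).
DECLARED_PURPOSES = {"fraud_detection", "account_servicing"}

class PurposeDriftError(Exception):
    """Raised when an agent reaches beyond its declared purposes."""

def process(record_id: str, purpose: str) -> None:
    if purpose not in DECLARED_PURPOSES:
        # The inferred goal may be genuinely useful, but POPIA demands a
        # fresh, explicitly defined purpose before processing proceeds.
        raise PurposeDriftError(f"{record_id}: '{purpose}' was never declared")
    print(f"processing {record_id} for {purpose}")

process("ZA-0001", "fraud_detection")          # within the declared purpose
try:
    process("ZA-0001", "marketing_profiling")  # an inferred, undeclared goal
except PurposeDriftError as err:
    print(f"blocked: {err}")                   # escalate for a new s 13 purpose
```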

Accountability in a hall of mirrors

Section 8 lays it bare: the responsible party “must ensure that the conditions set out in [Chapter 3]… are complied with at the time of the determination of the purpose and means of the processing and during the processing itself”. But what happens when the “means” shift overnight, altered by an algorithm retraining itself? When the “purpose” is quietly reframed by a system inferring new goals? To assign accountability here is to point into a hall of mirrors. Yet to let it vanish is intolerable. The danger is not that no one is responsible, but that everyone can plausibly deny responsibility.
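
Accountability can at least be made traceable. The sketch below assumes a simple hash-chained audit log (all names and events are illustrative): every shift in purpose or means becomes a timestamped, attributable, tamper-evident entry. It does not assign blame, but it makes plausible deniability harder.

```python
# A minimal sketch assuming a hash-chained audit log (names invented):
# every change to the purpose or means of processing becomes a
# timestamped, attributable, tamper-evident entry.
import hashlib
import json
import time

chain: list[dict] = []

def log_event(purpose: str, means: str, actor: str) -> None:
    prev = chain[-1]["hash"] if chain else "genesis"
    entry = {"ts": time.time(), "purpose": purpose,
             "means": means, "actor": actor, "prev": prev}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    chain.append(entry)

log_event("fraud_detection", "model v1, static rules", "information officer")
log_event("fraud_detection", "model v2, retrained nightly", "ml-pipeline")
# Each shift in "means" now has a timestamp, an actor, and a link to what
# came before: not a hall of mirrors, but a ledger.
print(json.dumps(chain, indent=2))
```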

How ITLawCo can help

At ITLawCo, we do not accept the false choice between innovation and rights. We build legal playbooks and products that turn POPIA’s lofty principles into actionable safeguards:

  • AI + POPIA governance engines: translating s 71 into compliance architectures that constrain agentive deployments.
  • Rapid resilience reviews: 48-hour tests of where adaptive learning collides with the obligations in ss 13 and 14.
  • Privacy-by-Design toolkits: frameworks embedding minimality (s 10), pseudonymisation, and explainability before an agent ever goes live.
  • Cross-border compliance pathways: mapping POPIA against article 22 of the GDPR and the rest of Africa’s automated-decision rules, ensuring African innovation scales lawfully.