After years of negotiation, the EU AI Act was finalised and published on 12 July 2024. It enters into force 20 days later, on 1 August 2024. That’s all well and good, but does the EU AI Act apply to South African businesses?
This article outlines what the AI Act means for businesses using AI, focusing on whether the Act applies to South African businesses, and suggests practical steps for complying with its obligations. Importantly, the Act introduces significant fines for those who fail to comply.
This article focuses only on AI users, known as “deployers” under the AI Act. Article 3 of the Act defines a “deployer” as a natural or legal person using an AI system under its authority, except where the AI system is used in the course of a personal, non-professional activity. In other words, this article doesn’t cover the obligations of AI system providers, importers, or distributors.
Does the EU AI Act apply to South African businesses?
Short answer
Yes, if you fall within the scope of the EU AI Act.
Longer answer
Like the EU GDPR, the AI Act applies outside the EU (see Article 2). This means it applies to organisations:
- whose place of establishment or location is outside the EU; and
- where the AI system’s output is used in the EU.
The AI Act doesn’t apply to me, but I still want to comply with it
Some South African businesses may want to understand the AI Act’s approach to risk assessment and regulation of AI to use it as a benchmark for their internal assessments of AI deployment. This approach is common for international businesses.
Are there any exceptions (where I don’t need to comply)?
Yes, the Act creates exceptions for specific AI systems, including those you use:
- solely for scientific research and development
- for personal, non-professional activities
- for research, testing, and development of AI systems or models before they are placed on the market or put into service (i.e., in a controlled environment, such as a laboratory or other simulated environment)
- that are released under free and open-source licences, unless they are “prohibited” or “high-risk” AI systems or are subject to the Act’s transparency obligations (e.g., systems designed to interact directly with natural persons)
What “AI” does the Act cover?
Definition of “AI”
The AI Act defines “AI” as:
- a machine-based system;
- designed to operate with varying levels of autonomy;
- that may exhibit adaptiveness after deployment;
- that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions;
- whose outputs can influence physical or virtual environments.
The European Commission is expected to publish further guidelines on this definition within the next six months.
Specific categories of AI
The AI Act addresses specific categories of AI:
- General-purpose AI (GPAI): AI systems with a broad range of uses (e.g., ChatGPT, Siri, Google Assistant, Alexa, and Google Translate). These systems are trained on large amounts of data, can perform a wide range of tasks regardless of how the model is placed on the market, and can be integrated into various downstream systems or applications.
- Prohibited: AI that presents an unacceptable risk to EU citizens
- High-risk: AI that creates a high risk to the health, safety, or fundamental rights of EU citizens
- Limited risk: AI that poses transparency risks, where people may not realise they are dealing with AI
- Minimal risk: AI performing generally simple tasks with no interaction with EU citizens
Timeframes for complying with the EU AI Act
While the AI Act will come into force on 1 August 2024, different provisions will start to apply at different times:
- most provisions will apply 24 months after entry into force (2 August 2026)
- provisions relating to prohibited AI systems will apply after six months (2 February 2025)
- obligations on providers and deployers relating to AI literacy will apply after six months (2 February 2025)
- obligations on GPAI providers and provisions relating to penalties will apply after 12 months (2 August 2025)
- provisions relating to high-risk AI systems under Annex I (AI systems forming a product or safety component) will apply after 36 months (2 August 2027)
The EU Commission has also launched the AI Pact, encouraging businesses to voluntarily comply with certain obligations of the AI Act before the regulatory deadlines.
How can I get in trouble for not complying?
Fines
Not complying with the AI Act could lead to significant fines for AI users:
- Prohibited AI infringements: up to the greater of 7% of global annual turnover or €35m
- High-risk and transparency infringements: up to the greater of 3% of global annual turnover or €15m
- Supply of incorrect information: up to the greater of 1% of global annual turnover or €7.5m
- SME caps: for SMEs, including start-ups, each fine is capped at the lower of the percentage of global turnover or the fixed amount (illustrated in the sketch below)
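To make the cap arithmetic concrete, here is a minimal sketch (not legal advice) of how the “greater of” and “lower of” rules play out. The function name and turnover figure are our own hypothetical illustrations, not terms from the Act:

```python
# Hypothetical illustration of the AI Act's fine caps.
# For most undertakings the cap is the GREATER of the percentage of
# global annual turnover and the fixed amount; for SMEs (including
# start-ups) it is the LOWER of the two.

def max_fine(turnover_eur: float, pct: float, fixed_eur: float, sme: bool = False) -> float:
    """Return the maximum possible fine under the cap structure."""
    pct_amount = turnover_eur * pct
    return min(pct_amount, fixed_eur) if sme else max(pct_amount, fixed_eur)

# Prohibited-AI infringement with a hypothetical €100m turnover:
# 7% of turnover is €7m, so the €35m fixed amount is the cap.
print(max_fine(100_000_000, 0.07, 35_000_000))            # 35000000.0

# The same infringement by an SME: the cap flips to the lower figure, €7m.
print(max_fine(100_000_000, 0.07, 35_000_000, sme=True))  # 7000000.0
```

In short, the same infringement can carry a much lower maximum fine for an SME, because the cap flips from the higher to the lower of the two figures.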
The EU AI Office
The European Commission established the EU AI Office on 29 May 2024 to support the implementation and enforcement of the AI Act. The office will foster research and innovation in trustworthy AI and position the EU as a leader in the global AI conversation. It aims to ensure coherent implementation of the AI Act by supporting governance bodies in Member States and will directly enforce the rules for GPAI models. The AI Office is preparing guidelines on the definition of an AI system and on the prohibitions, and will coordinate the drafting of codes of practice for the general-purpose AI obligations.
Explaining how the EU AI Act regulates AI
The EU AI Act takes a risk-based approach to regulation. An AI system can fall within more than one category, and obligations are set out in relation to each category.
GPAI
The AI Act imposes specific obligations on GPAI model providers and additional obligations on GPAI model providers with systemic risk. These obligations address concerns that some models could carry systemic risks if they are very capable or widely used. For instance, powerful models could cause serious accidents or be misused for far-reaching cyber-attacks.
Prohibited AI
Some types of AI that present an unacceptable risk to EU citizens are prohibited.
These include AI systems that:
- deploy subliminal, manipulative, or deceptive techniques that materially distort a person’s behaviour and are likely to cause significant harm; and
- evaluate or classify people based on their social behaviour or personality characteristics, leading to detrimental or unfavourable treatment.
Article 5 provides a detailed list of prohibited AI.
Tip: Most organisations now have an AI governance policy (if you don’t yet, we’re your plug). If the AI Act applies, you should now amend this policy to ensure that your list of “prohibited AI uses” reflects the “banned” AI under the EU AI Act.
High-risk AI
Most of the obligations in the AI Act apply to high-risk AI.
Definition of high-risk AI
An AI system is high-risk if it is a safety component of a product, or is itself a product, covered by the EU laws listed in Annex I of the AI Act and is required to undergo a third-party conformity assessment under those laws.
List of high-risk AI systems
Annex III of the AI Act lists AI systems that are considered high-risk.
There are exceptions where a system listed in Annex III is not considered high-risk (see Article 6(3)). The Commission can update this list and will provide guidelines with practical examples of high-risk and non-high-risk use cases. There will also be an EU database of high-risk systems that you can check before using a specific AI system.
Examples of high-risk AI
While many high-risk areas will apply only to some organisations, some areas will be relevant to many organisations. For example, AI systems will be high risk if they are intended to:
- be used for recruitment or selection (e.g., to place targeted job advertisements, analyse and filter job applications, and evaluate candidates)
- make decisions about promotions and termination, allocate tasks based on individual behaviours, personal traits or characteristics, and monitor and evaluate performance and behaviour
With more AI tools available for hiring, HR teams should be aware that the AI Act’s high-risk requirements may apply to everyday recruitment tools. This is important because, while most rules apply to providers of high-risk AI systems, AI users also have obligations.
Financial institutions
Financial institutions should note that AI systems intended to evaluate creditworthiness or establish credit scores are high-risk, as are AI systems intended for risk assessment and pricing in relation to life and health insurance.
Assessing whether your AI system is high-risk
If you’re considering deploying an AI system, you’ll need to establish whether it is high-risk: check whether it falls under Annex I or Annex III, and whether any of the Article 6(3) exceptions apply.
Obligations for deployers of high-risk AI systems
Deployers of high-risk AI systems have direct obligations under Article 26 of the AI Act, which include:
- Taking appropriate technical and organisational measures to ensure you use the system in accordance with its instructions for use
- Assigning human oversight of the AI system to someone who has the requisite competence, training, and authority, as well as the necessary support
- Monitoring the operation of the high-risk AI system, which includes informing the provider or distributor and the relevant market surveillance authority if the system presents a risk to health, safety, or fundamental rights or if there is a “serious incident”, and suspending its use
- Ensuring that input data under your control is relevant and sufficiently representative in view of the high-risk AI system’s intended purpose
- Keeping the logs automatically generated by the high-risk AI system for at least six months, where those logs are under your control
- Informing affected employees and their representatives that they will be subject to the system
- Using the information provided by the provider to carry out a Data Protection Impact Assessment, where applicable
Becoming a provider
In certain circumstances, deployers of high-risk AI systems may become providers (as defined in the AI Act) of a high-risk system, with the additional obligations that role carries. These include deployers who put their name or trademark on a high-risk system already on the market, make substantial modifications to a high-risk system, or modify an AI system (including GPAI) in a way that renders it high-risk (Article 25).
Tips:
- choose the most suitable person in your organisation to oversee high-risk AI systems and include this in your AI governance policy
- ensure teams involved in using high-risk AI systems understand their responsibilities and get legal sign-off before engaging with providers
- update your responsible AI policy to specify high-risk AI systems
Transparency obligations – Article 50
Transparency risks
Certain AI systems are subject to transparency obligations. These include systems used or intended to be used:
- to interact directly with EU citizens (e.g., chatbots)
- to generate synthetic audio, image, video, or text content (including GPAI)
- to generate or manipulate image, audio, or video content constituting a deep fake
- to generate or manipulate text that is published to inform the public on matters of public interest
Information requirements
These systems must provide specific information to individuals clearly and distinctly. This includes informing individuals that they are interacting with AI and disclosing that content has been artificially generated or manipulated. The AI Office will facilitate the creation of codes of practice to support compliance. These obligations are in addition to those imposed if a system is considered high-risk.
Tips
- ensure contracts clarify roles in the AI supply chain and compliance responsibilities
- consider potential regulation changes during the contract period
Minimal risk
AI literacy and codes of conduct
AI users must ensure their staff have a sufficient level of AI literacy when using AI systems. They’re also encouraged to adopt voluntary codes of conduct.
Tips
- create and manage internal AI policies and publish transparency notices
- provide internal training on safe AI use and avoiding infringing third-party intellectual property rights
- embed governance in your organisation to manage AI deployment and use in a safe environment
How we can help
- Attend our event: “Does the EU AI Act apply to my organisation?”
- Contact us to help you comply with the EU’s AI Act.