Artificial intelligence is transforming how South African businesses operate, from automated credit scoring and fraud detection to customer service chatbots and predictive analytics. Yet the legal framework governing AI remains fragmented, with no single dedicated statute addressing artificial intelligence. Instead, businesses must navigate a patchwork of existing legislation, emerging policy frameworks, and common law principles to manage the legal risks that AI systems create.
This guide examines the current regulatory landscape for AI in South Africa, identifies the legal obligations that already apply to AI-driven systems, and provides practical governance frameworks for businesses deploying artificial intelligence. For a broader overview of how technology law intersects with business operations, see our Software & Technology Law hub.
The Current State of AI Regulation in South Africa
South Africa does not have a dedicated AI statute. Unlike the European Union, which enacted the AI Act in 2024, and jurisdictions such as China and Brazil that are pursuing targeted AI rules, South Africa has so far favoured policy development over legislative action. The Department of Communications and Digital Technologies (DCDT) has been the primary government department driving AI policy discussions, culminating in the publication of the Draft National AI Policy Framework.
This does not mean that AI operates in a legal vacuum. Several existing statutes have direct application to AI systems. The Protection of Personal Information Act 4 of 2013 (POPIA), the Consumer Protection Act 68 of 2008 (CPA), the Promotion of Administrative Justice Act 3 of 2000 (PAJA), the Financial Advisory and Intermediary Services Act 37 of 2002 (FAIS), and South Africa's common law of delict all impose obligations that touch on how AI systems are developed, deployed, and monitored.
The Presidential Commission on the Fourth Industrial Revolution (PC4IR), established in 2019, released its report recommending the creation of an AI Institute and the development of a national AI strategy. While many of its recommendations have been slow to materialise into binding regulation, the report laid the policy groundwork for what is now a more structured approach to AI governance.
Key Takeaway
The absence of dedicated AI legislation does not mean the absence of legal risk. Businesses deploying AI must comply with existing statutes, and the pace of policy development suggests that dedicated regulation is on the horizon. Early adoption of governance frameworks will ease the transition when binding AI legislation is enacted.
The Draft National AI Policy Framework
The Draft National AI Policy Framework, published by the DCDT, represents South Africa's most comprehensive attempt to articulate a national approach to artificial intelligence. While not yet enacted into law, the framework signals the direction of future regulation and establishes principles that businesses should begin incorporating into their operations.
The framework is built around several core pillars. First, it emphasises a human-rights-based approach to AI, grounding governance in the values enshrined in the Constitution. Second, it calls for transparency and explainability, requiring that AI systems be designed in ways that allow their decision-making processes to be understood and scrutinised. Third, it promotes accountability, insisting that natural or juristic persons remain responsible for the outputs and impacts of AI systems they deploy.
The framework proposes a risk-based classification system that would categorise AI systems according to their potential impact on individuals and society. High-risk AI systems — such as those used in healthcare, criminal justice, financial services, and employment — would face more stringent regulatory requirements, including mandatory impact assessments, human oversight mechanisms, and ongoing monitoring obligations.
Crucially, the framework also addresses the question of institutional oversight. It proposes the establishment of a national AI governance body responsible for developing standards, conducting audits, and advising government on AI policy. The precise form this body will take — whether as a standalone regulator, a division within an existing body like the Information Regulator, or a multi-stakeholder forum — remains under discussion.
The Six Principles of the Draft Framework
1. Human rights and human oversight — AI must not undermine constitutional rights or operate without meaningful human control.
2. Safety and robustness — AI systems must be technically sound and operate as intended across their lifecycle.
3. Transparency and explainability — decisions made by AI must be capable of being understood and challenged.
4. Fairness and non-discrimination — AI must not produce biased outcomes or perpetuate existing inequalities.
5. Accountability — clear responsibility must be assigned for the outcomes of AI systems.
6. Privacy and data governance — AI must comply with data protection principles, particularly those in POPIA.
Existing Laws That Apply to AI
While South Africa awaits dedicated AI legislation, several existing statutes impose legal obligations on organisations that develop, deploy, or rely on AI systems. Understanding these obligations is the first step in building a compliant AI governance framework.
POPIA — The Protection of Personal Information Act
POPIA is the most directly relevant statute for AI systems that process personal information. Section 71 of POPIA provides data subjects with the right not to be subject to a decision based solely on automated processing of their personal information that has legal effects or significantly affects them. This provision directly constrains the use of AI for automated decision-making in contexts such as credit scoring, insurance underwriting, employment screening, and loan approvals.
Where a decision has been made on the basis of automated processing, the responsible party must notify the data subject and provide an opportunity for the individual to make representations. The responsible party must also be in a position to explain the logic involved in the automated decision. This "right to explanation" creates a practical requirement for AI systems to be interpretable and for businesses to maintain documentation of how their algorithms reach conclusions.
Beyond section 71, POPIA's general conditions for lawful processing apply to the data used to train and operate AI systems. The purpose limitation principle (section 13) requires that personal information collected for one purpose not be repurposed for AI training without the data subject's consent or another lawful basis. The data quality principle (section 16) requires that personal information be accurate and kept up to date — a principle with significant implications for AI systems that can perpetuate and amplify inaccuracies in their training data.
The Consumer Protection Act
The CPA protects consumers against unfair, unreasonable, and unjust business practices. Where AI systems interact with consumers — for example, through chatbots that provide advice, recommendation engines that influence purchasing decisions, or dynamic pricing algorithms — the CPA's provisions on fair dealing (section 40), product liability (section 61), and the right to fair, just, and reasonable terms (section 48) all apply.
Section 61 of the CPA establishes strict liability for harm caused by defective goods, which could extend to software products that incorporate AI if a court determines that the AI's output constitutes a "defect" within the meaning of the Act. This is a developing area of law, and businesses deploying AI in consumer-facing products should be aware of the potential exposure.
Common Law — Delict and Negligence
South Africa's common law of delict provides a cause of action for any person who suffers damage as a result of another's wrongful and culpable conduct. Where an AI system causes harm — for example, a self-driving vehicle involved in an accident, a diagnostic AI that provides an incorrect medical recommendation, or an algorithmic trading system that causes financial losses — the question of fault and causation becomes central.
Under the common law, fault typically requires either intention or negligence. The challenge with AI systems is that the "decision" causing harm may not be attributable to a specific human act of negligence but rather to the way the system was designed, trained, or deployed. Courts would likely assess whether the responsible party took reasonable steps to test, validate, and monitor the AI system — which underscores the importance of implementing robust governance frameworks from the outset.
Algorithmic Decision-Making — Legal Risks Under POPIA
Section 71 of POPIA deserves particular attention because it directly regulates the most common commercial application of AI: automated decision-making. The provision applies whenever a decision that has "legal effects" or "significantly affects" a data subject is based "solely" on automated processing, including profiling.
The scope of section 71 is broad. "Legal effects" include decisions that determine someone's legal rights, such as whether a loan application is approved, whether an insurance claim is accepted, or whether a person is shortlisted for employment. "Significantly affects" extends the provision beyond strictly legal consequences to encompass any decision that materially impacts a person's circumstances — such as determining the interest rate on a loan, the premium on an insurance policy, or eligibility for a service.
The word "solely" is legally significant. If a human being is meaningfully involved in the decision-making process — not merely rubber-stamping an algorithmic recommendation, but genuinely exercising independent judgment — then section 71 does not apply. This creates a practical incentive for businesses to build "human-in-the-loop" processes into their AI systems, ensuring that consequential decisions are reviewed by a competent person before being finalised.
Compliance Requirements Under Section 71
- Notify the data subject that a decision has been made by automated processing.
- Provide the data subject with an opportunity to make representations against the decision.
- Be able to explain the logic involved in the automated decision-making process.
- Reconsider the decision if the data subject submits representations.
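The four obligations above can be pictured as a simple decision-handling workflow. The sketch below is illustrative only, assuming a hypothetical record schema and notification channel; the class and function names are not drawn from POPIA or any regulator's guidance.

```python
from dataclasses import dataclass, field

@dataclass
class AutomatedDecision:
    """Record of a decision based solely on automated processing (hypothetical schema)."""
    subject_id: str
    outcome: str                  # e.g. "declined"
    logic_summary: str            # plain-language explanation of the model's logic
    notified: bool = False
    representations: list = field(default_factory=list)
    reconsidered: bool = False

def notify_subject(decision: AutomatedDecision) -> None:
    # Step 1: tell the data subject an automated decision was made, and
    # step 3: be in a position to explain the logic involved.
    print(f"To {decision.subject_id}: automated decision '{decision.outcome}'. "
          f"Logic: {decision.logic_summary}")
    decision.notified = True

def receive_representation(decision: AutomatedDecision, text: str) -> None:
    # Step 2: give the data subject an opportunity to make representations.
    decision.representations.append(text)

def reconsider(decision: AutomatedDecision, human_outcome: str) -> str:
    # Step 4: a competent human reviews the decision afresh once
    # representations have been received, with authority to change it.
    if decision.representations:
        decision.outcome = human_outcome
        decision.reconsidered = True
    return decision.outcome
```

The point of the sketch is structural: each statutory step leaves an auditable trace, which is also what a business would need to show the Information Regulator on enquiry.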
The Information Regulator has signalled its intent to scrutinise automated decision-making more closely. While enforcement actions specifically targeting AI have been limited, the Regulator's growing capacity and its expressed interest in algorithmic accountability suggest that businesses relying on automated processing should treat section 71 compliance as a priority rather than a theoretical concern.
Liability for AI-Generated Outputs
One of the most complex legal questions in AI governance is determining who bears liability when an AI system produces harmful, inaccurate, or unlawful outputs. Under South African law, liability can arise under several legal frameworks, depending on the circumstances.
Contractual liability arises where an AI system fails to perform in accordance with the terms of a contract. If a SaaS provider's AI tool produces inaccurate outputs that cause the customer to suffer losses, the customer may have a claim for breach of contract. The terms of the SaaS agreement become critical — particularly the warranties (if any) provided regarding the accuracy and fitness-for-purpose of AI outputs, and the limitation of liability clauses that purport to exclude consequential damages.
Delictual liability arises where an AI system causes harm through wrongful and culpable conduct. The deployer of an AI system owes a general duty of care to those foreseeably affected by the system's outputs. Breach of this duty — through inadequate testing, insufficient monitoring, or failure to implement appropriate safeguards — could give rise to a delictual claim.
Vicarious liability is another potential avenue. If an employee uses an AI system in the course and scope of their employment, and the AI produces an output that causes harm to a third party, the employer may be vicariously liable. This creates a strong incentive for businesses to implement clear AI usage policies, provide training on the limitations of AI tools, and establish review processes for AI-generated work product.
Intellectual property liability raises additional concerns. AI systems trained on copyrighted material may generate outputs that infringe third-party copyright. The position under the South African Copyright Act 98 of 1978 is unsettled. The Act was drafted long before AI was commercially viable, and the question of whether an AI system can be an "author" or whether its outputs qualify for copyright protection remains open. What is clear is that if an AI system reproduces substantial portions of copyrighted material without authorisation, the person deploying the system may face infringement claims.
AI in Financial Services — FSCA Considerations
The financial services sector has been among the earliest and most aggressive adopters of AI in South Africa, and it has accordingly attracted particular regulatory attention. The Financial Sector Conduct Authority (FSCA) has taken an active interest in how AI is being deployed for credit decisioning, insurance underwriting, robo-advisory services, and anti-money laundering compliance.
The Financial Advisory and Intermediary Services Act (FAIS) regulates the provision of financial advice and intermediary services. The critical question is whether an AI system that provides personalised investment recommendations or suggests financial products constitutes the rendering of "financial advice" under FAIS. If it does, the system — or more accurately, the financial services provider deploying it — must comply with FAIS's fit and proper requirements, record-keeping obligations, and the general duty to act honestly, fairly, and with due skill, care, and diligence.
The FSCA has published guidance indicating that AI-driven financial services tools must comply with the principles of Treating Customers Fairly (TCF), which require firms to demonstrate that customers receive products and services appropriate to their needs. Algorithmic bias in credit scoring — where AI models produce systematically unfavourable outcomes for particular demographic groups — is a particular concern. Such bias may constitute unfair discrimination prohibited by both the Promotion of Equality and Prevention of Unfair Discrimination Act 4 of 2000 (PEPUDA) and the Constitution.
The National Credit Act 34 of 2005 (NCA) adds another layer. When AI is used for credit risk assessment, the credit provider must still comply with the NCA's affordability assessment requirements and ensure that the automated process does not result in reckless lending. The credit provider cannot delegate its statutory obligations to an algorithm — the legal duty to conduct a proper assessment remains with the provider, regardless of the tools it uses.
Practical AI Governance Frameworks for SA Businesses
While waiting for dedicated AI legislation, South African businesses should proactively implement AI governance frameworks that anticipate regulatory requirements and manage legal risk. The following framework, drawn from international best practice and adapted for the South African legal context, provides a practical starting point.
1. AI Inventory and Risk Classification
Begin by creating a comprehensive inventory of all AI systems in use across the organisation. For each system, assess and classify the risk level based on the nature of the decisions it influences, the sensitivity of the data it processes, and the potential impact on individuals. High-risk systems — those that make or materially influence decisions about credit, insurance, employment, health, or legal matters — should be subject to the most rigorous governance controls.
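One way to operationalise this step is a simple rules-based classifier over the AI inventory. The risk tiers, fields, and decision domains below are illustrative assumptions for a sketch, not categories prescribed by the draft framework or any statute.

```python
from dataclasses import dataclass

# Decision domains treated here as high-risk: credit, insurance,
# employment, health and legal matters, per the classification above.
HIGH_RISK_DOMAINS = {"credit", "insurance", "employment", "health", "legal"}

@dataclass
class AISystem:
    name: str
    domain: str                 # business area the system's decisions affect
    processes_personal_info: bool
    influences_decisions: bool  # does it make or materially influence outcomes?

def classify_risk(system: AISystem) -> str:
    """Assign an illustrative risk tier: high / medium / low."""
    if system.domain in HIGH_RISK_DOMAINS and system.influences_decisions:
        return "high"      # most rigorous controls: impact assessment, human review
    if system.processes_personal_info:
        return "medium"    # POPIA conditions apply; monitor and document
    return "low"           # basic inventory record and periodic review

inventory = [
    AISystem("loan-scoring-model", "credit", True, True),
    AISystem("marketing-copy-assistant", "marketing", False, False),
]
```

Even a rudimentary classifier like this forces the organisation to answer, for every system, the questions that an impact assessment or regulator will eventually ask.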
2. Impact Assessments
Conduct AI impact assessments before deploying new AI systems or making material changes to existing ones. These assessments should evaluate the system's potential effects on data protection (POPIA compliance), fairness and non-discrimination (PEPUDA compliance), consumer rights (CPA compliance), and sector-specific regulatory requirements. Document the findings and the mitigation measures implemented.
3. Human Oversight Mechanisms
Implement "human-in-the-loop" processes for high-risk AI applications. This means ensuring that a competent human being reviews and approves consequential decisions before they are communicated to the affected individual. The reviewer must have the authority, training, and information necessary to override the AI's recommendation. Rubber-stamping does not constitute meaningful human oversight under POPIA section 71.
4. Transparency and Explainability
Design AI systems to be as interpretable as commercially practical. Maintain documentation of how models are trained, what data is used, how decisions are reached, and how the system is monitored over time. Be prepared to explain the logic of automated decisions to data subjects who exercise their rights under POPIA, and to regulators who may audit your systems.
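In practice, this documentation can be kept as a structured record alongside each deployed model. The fields below are an illustrative minimum, assumed for the sketch rather than prescribed by any regulator.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelRecord:
    """Illustrative documentation record for one deployed AI model."""
    name: str
    version: str
    training_data_sources: list[str]
    intended_use: str
    logic_summary: str            # plain-language explanation for s 71 requests
    last_reviewed: date
    known_limitations: list[str] = field(default_factory=list)

record = ModelRecord(
    name="credit-scoring-model",
    version="2.1",
    training_data_sources=["internal loan book 2018-2023"],
    intended_use="pre-screening of unsecured loan applications",
    logic_summary="weighs income stability, repayment history and debt ratio",
    last_reviewed=date(2025, 1, 15),
    known_limitations=["thin-file applicants under-represented in training data"],
)
```

Keeping a record of this shape means the "logic involved" can be explained to a data subject or regulator without reverse-engineering the model after the fact.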
5. Bias Testing and Monitoring
Implement processes to test AI systems for bias before deployment and on an ongoing basis. This includes testing for disparate impact on groups protected under PEPUDA and the Constitution — particularly race, gender, disability, and socio-economic status. Document the testing methodology, results, and corrective actions taken.
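A common starting point for disparate-impact testing is to compare favourable-outcome rates across groups, for instance against the "four-fifths" ratio used in some jurisdictions as a screening heuristic. The threshold and group inputs below are assumptions for illustration; South African law does not prescribe a specific numeric test.

```python
def favourable_rate(outcomes: list[bool]) -> float:
    """Share of favourable (True) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a: list[bool], group_b: list[bool]) -> float:
    """Ratio of the lower group's favourable rate to the higher group's.

    A value well below 1.0 suggests one group receives favourable
    outcomes far less often, and warrants investigation.
    """
    lower, higher = sorted([favourable_rate(group_a), favourable_rate(group_b)])
    return lower / higher if higher else 1.0

def flag_for_review(group_a, group_b, threshold: float = 0.8) -> bool:
    # 0.8 is the illustrative "four-fifths" screening threshold,
    # not a legal standard under PEPUDA or the Constitution.
    return disparate_impact_ratio(group_a, group_b) < threshold
```

A flag from a test like this is not itself a finding of unfair discrimination; it is the trigger for the documented investigation and corrective action described above.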
6. AI Usage Policies and Training
Develop and implement clear internal policies governing how employees may use AI tools. These policies should address the types of AI tools permitted, the contexts in which they may be used, the requirement for human review of AI outputs, prohibitions on inputting confidential or personal information into external AI systems, and the intellectual property implications of AI-generated content. Provide regular training to ensure compliance.
7. Vendor Due Diligence
When procuring AI systems from third-party vendors, conduct due diligence on the vendor's own AI governance practices, data handling procedures, and compliance posture. Ensure that the data processing agreement with the vendor adequately addresses AI-specific risks, including data sovereignty, model transparency, and liability allocation for algorithmic errors.
What's Coming — Anticipated Regulatory Developments
Several regulatory developments are anticipated in the short to medium term that will materially affect how South African businesses use AI.
Finalisation of the National AI Policy Framework: The draft framework is expected to be finalised and adopted as government policy. While it may not immediately take the form of binding legislation, it will provide the basis for sector-specific regulations and enforcement guidance issued by bodies like the Information Regulator and the FSCA.
Information Regulator guidance on automated processing: The Information Regulator is expected to issue guidance notes on the application of POPIA section 71 to AI systems. These guidance notes, while not binding law, will signal the Regulator's enforcement priorities and expectations regarding transparency, fairness, and human oversight.
Sector-specific AI rules: The financial services sector is likely to see the earliest AI-specific regulatory requirements. The FSCA's interest in algorithmic accountability, coupled with the Prudential Authority's focus on operational risk arising from AI, suggests that financial institutions may face mandatory AI governance requirements in the near term.
International influence: The European Union's AI Act will have an indirect effect on South African businesses that trade with EU-based entities or process data originating from the EU. Businesses with international operations should monitor the extraterritorial reach of the AI Act and assess whether voluntary compliance with its standards would be commercially prudent.
African Union developments: The African Union has been developing a continental AI strategy that may influence South Africa's approach. The AU's emphasis on leveraging AI for development while safeguarding human rights aligns broadly with South Africa's own policy direction, and regional harmonisation of AI governance standards is a stated objective.
Professional Guidance on AI Governance
AI governance is not merely a compliance exercise — it is a strategic imperative that protects your business from legal liability while building trust with customers, regulators, and stakeholders. The regulatory landscape is evolving rapidly, and businesses that establish robust governance frameworks now will be better positioned to adapt when binding legislation is enacted.
MJ Kotze Inc advises businesses on AI governance frameworks, POPIA compliance for automated decision-making, and the contractual arrangements needed to manage AI-related legal risk. For tailored advice, please contact us.
Need AI Governance Advice? Contact MJ Kotze Inc
Our team advises businesses on AI compliance, automated decision-making risks, and building governance frameworks that meet current and anticipated regulatory requirements.