Hidden Digital Dependency and the Case for a National Technology Dependency Audit in South Africa

Given SHINGANGE

Abstract

South Africa’s rapid digital transformation has been enabled largely through the adoption of foreign-owned and foreign-governed digital technologies, including cloud platforms, software ecosystems, cybersecurity tools, and global payment networks. While these technologies have improved efficiency, scale, and service delivery, they have also introduced a less visible but strategically significant risk: hidden digital dependency. This article argues that South Africa’s most serious digital vulnerability does not arise from overt hardware procurement or isolated vendor choices, but from embedded dependencies in control planes, identity systems, software update mechanisms, cybersecurity supply chains, and cross-border data governance regimes that lie beyond South Africa’s legal and political authority. Drawing on international political economy and security literature, particularly the concept of weaponised interdependence, and grounding the analysis in South Africa’s cybersecurity, data, and infrastructure governance frameworks, the article demonstrates that current national preparedness is fragmented and insufficient. It advances the case for a National Technology Dependency Audit as a proportionate, governance-aligned instrument to restore visibility, prioritise risk, and strengthen national resilience without pursuing technological isolation. The article concludes that resilient interdependence, rather than digital autarky, should be South Africa’s strategic objective in an increasingly contested digital environment.

Keywords: digital sovereignty; hidden digital dependency; weaponised interdependence; cybersecurity supply chains; cloud governance; South Africa.


1. Introduction

Digital infrastructure has become foundational to modern state capacity. In South Africa, digital systems underpin revenue collection, social grant disbursement, banking and payments, aviation and logistics, healthcare delivery, municipal services, and political communication. The state’s ability to govern, regulate, and deliver services increasingly assumes uninterrupted access to global digital platforms and networks. Yet this assumption is rarely interrogated at the level of national risk.

South Africa’s digital modernisation has been shaped primarily by pragmatic considerations: cost efficiency, scalability, skills availability, and speed of deployment. As a result, government departments, state-owned enterprises, and systemically important private-sector actors have adopted foreign cloud platforms, software ecosystems, and cybersecurity services as default infrastructure. While this trajectory has produced tangible short-term benefits, it has also created long-term structural dependencies that remain poorly understood within policy and security circles.

This article argues that South Africa faces a growing problem of hidden digital dependency, and that the absence of a national technology dependency audit represents a strategic governance failure. Existing policy instruments recognise aspects of digital risk, but they do not provide a consolidated national view of where foreign control intersects with critical digital functions. Without such visibility, preparedness remains reactive, fragmented, and overly dependent on assumptions of benign continuity.


2. Conceptualising hidden digital dependency

Hidden digital dependency refers to reliance on foreign-owned or foreign-governed digital capabilities that are essential to national continuity but are not treated as strategic dependencies. Modern digital architectures deliberately abstract control. Users interact with applications and dashboards, while authority over identity, encryption, updates, availability, and compliance resides elsewhere.

These dependencies typically manifest across several layers:

  • control-plane governance in cloud platforms,
  • identity and authentication services,
  • encryption key management and certificate authorities,
  • software update and patching ecosystems,
  • proprietary application programming interfaces and data formats, and
  • cross-border data governance regimes.

The critical distinction is between operational use and strategic control. A system may be hosted locally, staffed locally, and paid for locally, yet remain subject to external decisions regarding access, lawful disclosure, or termination. This distinction explains why digital dependency is a national security and sovereignty issue rather than a purely technical or commercial concern.
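
To make the distinction tangible, the sketch below illustrates how a single system could be catalogued so that operational use and strategic control are recorded separately. It is a minimal, illustrative sketch only: the record fields, the helper method, and the example system are hypothetical, and the layer tags simply restate the list above.

    from dataclasses import dataclass
    from enum import Enum

    class DependencyLayer(Enum):
        # The layers listed above, restated as tags.
        CONTROL_PLANE = "cloud control-plane governance"
        IDENTITY = "identity and authentication services"
        KEY_MANAGEMENT = "encryption key management and certificate authorities"
        UPDATES = "software update and patching ecosystems"
        INTERFACES = "proprietary APIs and data formats"
        DATA_GOVERNANCE = "cross-border data governance regimes"

    @dataclass
    class DependencyRecord:
        system: str                  # e.g. a hypothetical grants portal
        layer: DependencyLayer
        operated_by: str             # who runs and pays for it day to day
        controlled_by: str           # who can alter access, updates, or terms
        governing_jurisdiction: str  # whose law ultimately applies

        def is_hidden_dependency(self) -> bool:
            # "Hidden" in the article's sense: operational use is local,
            # but strategic control sits with another party.
            return self.controlled_by != self.operated_by

    # A locally hosted, locally staffed system whose identity provider is
    # governed abroad still registers as a strategic dependency.
    record = DependencyRecord(
        system="hypothetical-grant-portal",
        layer=DependencyLayer.IDENTITY,
        operated_by="national department",
        controlled_by="foreign platform provider",
        governing_jurisdiction="foreign",
    )
    print(record.is_hidden_dependency())  # True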


3. Weaponised interdependence and digital power

The concept of weaponised interdependence provides a useful analytical lens for understanding why hidden digital dependency matters at state level. Farrell and Newman argue that global economic and information networks are structured around hubs and chokepoints, and that actors who control these nodes can exploit them for coercive purposes. Power is often exercised indirectly, through private intermediaries complying with domestic law, export controls, or risk-averse corporate policies.

In the digital domain, these hubs include cloud control planes, dominant operating systems, app distribution platforms, global payment networks, and cybersecurity service providers. Control over these nodes enables surveillance, denial of access, and influence through standards and ecosystem rules.

For South Africa, the key issue is asymmetry. Dependence on a small number of global technology ecosystems concentrates risk and creates latent leverage, regardless of political intent. Even in the absence of formal sanctions, export controls, compliance overreach, and platform governance decisions can constrain access to essential services during periods of geopolitical stress. In this sense, hidden digital dependency constitutes a standing condition of vulnerability rather than a contingent threat.


4. South Africa’s digital governance architecture and its limits

South Africa is not without relevant policy instruments. The National Cybersecurity Policy Framework positions cybersecurity as a national interest and calls for the protection of critical information infrastructure. The Protection of Personal Information Act establishes principles for lawful processing and data protection. The Cybercrimes Act provides mechanisms for criminal investigation and cooperation. The National Policy on Data and Cloud articulates ambitions for a data-driven economy and provides policy direction on cloud adoption.

However, these instruments operate largely in silos. None mandates a systematic assessment of foreign technology dependency across critical national functions. Cybersecurity governance focuses on coordination and incident response rather than structural dependency. Data policy prioritises economic opportunity and inclusion rather than control and jurisdiction. Procurement decisions remain decentralised and sector-specific.

The result is fragmented preparedness. No single authority is responsible for understanding how foreign control, legal jurisdiction, and platform governance intersect across the national digital ecosystem. This fragmentation creates blind spots that only become visible during crises.


5. National security implications of hidden digital dependency

Hidden digital dependency generates several interrelated categories of national security risk.

First, jurisdictional risk arises when foreign legal regimes can compel technology providers to disclose data or restrict services through corporate entities, irrespective of where data is physically stored. Data location does not equate to data control.

Second, availability risk emerges when access to platforms, identity services, or software updates is degraded or denied due to compliance actions, geopolitical disruption, or corporate policy changes. Modern cloud platforms integrate identity, security monitoring, and administrative control into a single dependency stack.

Third, integrity risk arises from software supply chain compromise. Trusted update mechanisms and centrally managed platforms can be exploited or withdrawn, creating systemic exposure across multiple institutions simultaneously.

Fourth, lock-in risk constrains policy autonomy. Proprietary platforms and data formats raise switching costs and narrow exit options, creating indirect coercion even in the absence of explicit restrictions.

Finally, strategic leverage risk arises when concentrated dependency becomes a bargaining chip during diplomatic or economic disputes. South Africa’s current preparedness does not adequately address these risks because it treats them as isolated technical issues rather than interconnected structural vulnerabilities.


6. Sectoral exposure in South Africa

Hidden digital dependency is not evenly distributed. Its impact varies across sectors.

In government administration, digital identity and access management underpin grants, payroll, licensing, and secure communications. External governance of authentication services, certificate authorities, or key management creates a single point of failure for the digital state.

In financial systems, payment rails, clearing mechanisms, and fraud detection tools rely on global networks governed externally. International experience demonstrates that financial messaging and settlement access can be restricted rapidly, with cascading economic effects.

In aviation, transport, and logistics, air traffic management, port operations, and cargo systems depend on specialised software, satellite navigation, and real-time data exchange subject to export controls and certification regimes.

In health and social services, cloud-hosted systems process sensitive data and support social stability. Dependency without contingency planning magnifies both operational and political risk.

Across sectors, dependency mapping is typically treated as an operational concern rather than a strategic one, reinforcing the need for a national-level assessment.


7. Cloud governance, data sovereignty, and control

Cloud computing sits at the centre of South Africa’s hidden dependency problem. Policy debate has focused largely on data residency and economic development. Yet scholarship and policy analysis consistently demonstrate that data location does not equal control. Jurisdiction follows corporate domicile and legal obligation, not server geography.

While POPIA addresses personal information protection, it does not resolve conflicts of law or address national security data, metadata, or platform telemetry. International best practice emphasises control-plane independence, transparency in lawful access procedures, and tested exit mechanisms. These factors are not systematically assessed in South Africa’s current governance approach.


8. Cybersecurity supply chains as a dependency vector

Cybersecurity tooling itself introduces dependency. South African institutions increasingly rely on foreign-managed platforms for endpoint protection, threat detection, and incident response. These tools often require privileged access and centralised update mechanisms.

Supply chain incidents documented internationally demonstrate how compromise or withdrawal of trusted vendors can cascade across multiple organisations. Treating cybersecurity procurement as a routine operational matter overlooks jurisdictional exposure, export control risk, and platform governance. A credible dependency audit must therefore include defensive technologies, not only productive systems.


9. Sanctions, export controls, and platform governance

Contemporary sanctions and export controls increasingly target technology ecosystems rather than individual goods. Export controls on advanced computing, software, and components operate upstream, affecting entire supply chains. At the same time, platform governance increasingly functions as de facto sanctions enforcement, with access constrained through terms of service and compliance risk.

Denial can occur without formal designation of a country or institution, creating grey-zone exposure for non-aligned states. For South Africa, neutrality does not eliminate risk. Visibility and mitigation are therefore essential.


10. Why current preparedness is inadequate

South Africa’s preparedness gap is structural rather than technical. Accountability is fragmented across departments. Compliance with international standards is often mistaken for resilience. Most critically, preparedness is built on an implicit assumption of continuity in global digital access.

In an environment characterised by strategic competition, sanctions, and platform power, this assumption is no longer defensible. Preparedness that assumes continuity is not preparedness at all.


11. The case for a National Technology Dependency Audit

A National Technology Dependency Audit provides a structured means of restoring visibility. It identifies which digital capabilities are essential to national continuity, where foreign control is embedded, under what legal and contractual conditions access is governed, and what the impact of disruption would be.

The audit is diagnostic rather than prescriptive. It does not ban technology or dictate suppliers. Its value lies in enabling evidence-based prioritisation, coordination across sectors, and informed decision-making.
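
As a minimal illustration of what the audit's diagnostic output could look like, the sketch below structures an entry around the four questions above and tags it with the risk categories from section 5. The field names, numeric scales, and ranking rule are assumptions introduced here for illustration; they are not drawn from any existing audit methodology.

    from dataclasses import dataclass, field

    # Risk categories from section 5, used here as simple tags.
    RISK_CATEGORIES = {"jurisdictional", "availability", "integrity", "lock-in", "leverage"}

    @dataclass
    class AuditEntry:
        capability: str         # digital capability essential to national continuity
        foreign_control: str    # where foreign control is embedded
        access_conditions: str  # legal and contractual terms governing access
        disruption_impact: int  # 1 (minor) to 5 (severe); assumed scale
        substitutability: int   # 1 (easily replaced) to 5 (no realistic alternative)
        risks: set = field(default_factory=set)

        def __post_init__(self):
            assert self.risks <= RISK_CATEGORIES, "unknown risk category"

        def priority(self) -> int:
            # Illustrative rule only: severe impact combined with poor
            # substitutability should surface first for mitigation planning.
            return self.disruption_impact * self.substitutability

    entries = [
        AuditEntry("payment clearing", "foreign network operator",
                   "platform terms of service under foreign law", 5, 5,
                   {"availability", "leverage"}),
        AuditEntry("municipal website hosting", "foreign cloud region",
                   "standard commercial cloud contract", 2, 2, {"lock-in"}),
    ]

    for entry in sorted(entries, key=AuditEntry.priority, reverse=True):
        print(entry.capability, entry.priority(), sorted(entry.risks))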


12. Addressing objections

Claims that dependency audits deter investment misunderstand investor preferences for predictable governance. Concerns about isolation or censorship reflect governance risk, not inevitability. The objective is not digital autarky, but resilient interdependence: maintaining global connectivity while preserving national capability.


13. Conclusion

Hidden digital dependency is a present condition for South Africa, not a hypothetical future risk. Existing policies acknowledge aspects of digital risk but do not address foreign control holistically. A National Technology Dependency Audit is a proportionate, policy-aligned response that transforms intuition into evidence and reaction into preparation.

In an increasingly contested digital environment, South Africa’s strategic objective should not be control over global technology systems, but the capacity to govern, decide, and function under pressure. Without a dependency audit, that capacity remains uncertain.


References (Harvard)

BIS (2019). Export Administration Regulations and Entity List Amendments.

Couldry, N. and Mejias, U. (2019). The Costs of Connection. Stanford University Press.

Cory, N. (2017). Cross-Border Data Flows. ITIF.

DCDT (2024). National Policy on Data and Cloud. Government of South Africa.

Deibert, R. (2013). Black Code. Oxford University Press.

Drezner, D. (2015). Economic Statecraft. Princeton University Press.

ENISA (2021). Threat Landscape for Supply Chain Attacks.

Farrell, H. and Newman, A. (2019). ‘Weaponized Interdependence’, International Security, 44(1), 42–79.

Government of South Africa (2013). Protection of Personal Information Act.

Government of South Africa (2015). National Cybersecurity Policy Framework.

Government of South Africa (2019). Critical Infrastructure Protection Act.

Government of South Africa (2020). Cybercrimes Act.

Kello, L. (2017). The Virtual Weapon and International Order. Yale University Press.

Mueller, M. (2017). Will the Internet Fragment? Polity.

NIST (2022). Cybersecurity Supply Chain Risk Management (SP 800-161 Rev.1).

OECD (2020). Digital Security Risk Management.

Schneider, F. (2019). Cloud Sovereignty. SWP.

UNCTAD (2021). Digital Economy Report.

World Economic Forum (2023). Global Cybersecurity Outlook.

Article 4: Governing the Cognitive Domain – Why South Africa Is Structurally Unprepared for Influence Operations

Given SHINGANGE

The first three articles in this series established three core points. Article 1 defined influence operations as a defining feature of contemporary conflict, operating primarily in the cognitive domain. Article 2 examined how digital platforms and fragmented media ecosystems enable influence at scale. Article 3 demonstrated why South Africa is particularly exposed, drawing on empirical indicators such as rising identity salience, declining intergroup trust, and widespread perceptions of institutional unfairness.

These indicators are not abstract social trends. They are measurable signals of cognitive vulnerability. Article 4 therefore turns to the institutional question: despite the visibility of these signals, is South Africa structurally capable of recognising and responding to influence operations as a governance and security challenge?

The short answer is no—not because of a lack of concern or policy language, but because South Africa’s governance architecture remains fundamentally misaligned with the nature of cognitive and information-layer threats.

The Category Error in South Africa’s Security Thinking

South Africa continues to treat influence, disinformation, and narrative contestation as peripheral issues—communication problems, political risks, or media ethics concerns—rather than as core national security challenges. This is a category error. Influence operations operate below the threshold of traditional security responses, yet they shape the conditions under which democratic governance, social cohesion, and institutional legitimacy function.

The country’s security architecture reflects an earlier era of threat perception. Cybersecurity is framed largely in technical terms: systems, networks, critical infrastructure, and cybercrime. Strategic communications are treated as a government messaging function. Social cohesion is addressed through social policy and symbolic nation-building initiatives. These domains operate in silos, despite the fact that influence operations exploit precisely the gaps between them.

As a result, no single institution is responsible for understanding or defending the cognitive domain as a system.

Policy Without Strategy, Strategy Without Structure

South Africa does not suffer from a complete absence of policy. The National Cybersecurity Policy Framework (NCPF), now a decade old, acknowledges information security and cyber threats in broad terms. However, it offers little conceptual clarity on influence operations, cognitive security, or narrative resilience. More importantly, it does not translate these concerns into institutional design, roles, or accountability.

This reflects a deeper structural problem: policy has not been followed by strategy, and strategy has not been followed by structure. Influence operations cut across cybersecurity, intelligence, communications, education, and social trust, yet no coordinating mechanism exists to integrate these domains. Responsibility is diffused, and accountability is absent.

In such an environment, responses to influence-related incidents are necessarily reactive, fragmented, and politicised.

The Absence of Cognitive Security as a Governance Concept

One of the most significant gaps in South Africa’s security discourse is the absence of cognitive security as an explicit governance concept. There is no shared framework for understanding how identity, trust, perception, and information interact as security variables. As a result, influence is either over-securitised (treated as a threat to be suppressed) or under-securitised (dismissed as free speech, politics, or noise).

This false binary paralyses response. Cognitive security does not require censorship or information control. It requires the capacity to anticipate how narratives form, spread, and harden, and how institutional behaviour either mitigates or accelerates those processes. Without this conceptual foundation, even well-intentioned interventions risk undermining legitimacy further.

Institutional Trust as a Strategic Variable

Article 3 showed that trust erosion is a central vulnerability in South Africa’s cognitive battlespace. Yet trust is rarely treated as a strategic variable in governance design. Institutions measure performance through compliance, outputs, or political alignment, not through their contribution to societal trust and interpretive stability.

This omission is consequential. Influence operations thrive where institutions are perceived as opaque, inconsistent, or self-interested. Every governance failure, communication misstep, or policy contradiction becomes material for narrative exploitation. In this sense, institutional behaviour itself becomes part of the information environment.

South Africa’s challenge is therefore not only defensive, but reflexive. Institutions must recognise their role as narrative actors, whether they intend to be or not.

Why Tactical Responses Will Continue to Fail

Calls for fact-checking initiatives, platform regulation, or counter-disinformation units are understandable, but insufficient. These are tactical responses to a strategic problem. Without an overarching framework for cognitive security, such measures risk becoming symbolic, selectively enforced, or politically contested—further eroding trust.

Influence operations adapt faster than regulatory or bureaucratic processes. By the time a narrative is identified and countered, its cognitive effects may already be embedded. Resilience, not reaction, is therefore the appropriate objective.

Conclusion: Structure Follows Strategy, or Failure Persists

This article has argued that South Africa’s vulnerability to influence operations is not primarily a function of hostile actors or technological change. It is the result of structural misalignment: governance systems designed for a different era confronting threats they were never configured to address.

Influence operations exploit gaps between institutions, disciplines, and mandates. Until South Africa recognises the cognitive domain as a legitimate and shared security concern—and aligns policy, strategy, and structure accordingly—those gaps will remain exploitable.

The implication is not that South Africa needs more laws, louder messaging, or heavier regulation. It needs a coherent way of seeing. In the cognitive domain, perception is not merely the object of security; it is the terrain on which security is decided.

Article 1: Elections Are No Longer Just Political. They Are Cognitive

Given SHINGANGE

South Africa is moving toward the 2026 local government elections with an outdated understanding of how political power is contested. We still speak as if elections are mainly decided by party structures, policy promises, door-to-door campaigning, rallies, and media debates. Those tools still matter, but they no longer explain outcomes on their own, and in some cases, they are no longer decisive.

A modern election is also a contest over perception, emotion, identity, and trust. It is a contest over what people believe is happening, what they feel is at stake, who they blame, who they fear, and what they think is “obviously true”. This is the cognitive domain, where influence operations thrive.

If we continue to treat electoral manipulation as a problem of “fake news” alone, we will remain exposed. We will also respond in the wrong way, at the wrong time, and with the wrong tools. The result will not necessarily be a dramatic collapse. It will be gradual erosion, a slow weakening of public trust, social cohesion, and democratic legitimacy, often without a single headline moment that forces the country to wake up.

This article is the conceptual foundation for a series on influence operations and democratic resilience ahead of the 2026 local government elections. Before we can talk about policy responses, party readiness, or public resilience, we must first get the concepts right. If the concepts are wrong, everything built on them will be weak.

The problem with the “fake news” frame

South Africa’s public conversation about manipulation and elections is too often trapped in a narrow frame: fake news, misinformation, disinformation. This language is convenient because it suggests a simple problem with a simple fix: remove the lies, flag the posts, fact-check the claims, suspend the accounts. It also allows institutions to present the challenge as a content-moderation issue rather than a broader strategic threat.

But the fake news frame is incomplete and, in some cases, misleading.

The most effective influence operations do not depend on fabricated stories. Many rely on selective truth, edited context, emotional framing, strategic timing, repetition, and amplification through familiar voices. A true incident can be framed to provoke panic, rage, humiliation, or hatred. A genuine grievance can be escalated into a moral war. A real policy failure can be used to delegitimise the entire idea of governance, not simply a particular party or municipal leadership.

In other words, influence is often about impact, not accuracy.

This matters because if we keep looking only for lies, we will miss the more sophisticated operations that use truth as raw material. If we keep believing that fact-checking is the primary defence, we will discover too late that facts do not easily defeat identity threats, emotional narratives, or group belonging.

We must upgrade the frame.

What influence operations actually are

Influence operations are deliberate efforts to shape perception, attitudes, and behaviour at scale. They are designed to steer how audiences interpret reality, what they feel about it, and what they choose to do next. They work best when the target audience does not realise they are being influenced, or when influence feels natural, inevitable, and self-generated.

Influence operations are not only external threats. They are not only foreign. They are not only malicious. They are methods, and methods are used by different actors for different reasons. States, political campaigns, activist networks, commercial actors, and opportunistic groups use them. This does not mean all actors are equivalent, or that intent does not matter. It means that if we want to understand the environment honestly, we must accept that influencing behaviour is part of modern political competition and social contestation.

A crucial point must be stated clearly: influence operations do not create divisions from nothing. They identify pre-existing fractures and apply pressure. They exploit what is already emotionally charged, socially sensitive, or institutionally fragile.

In a society with deep inequality, high unemployment, persistent service delivery failures, historical trauma, and declining trust in institutions, the terrain is already prepared. This is not a moral judgment about citizens. It is a strategic assessment of the operating environment.

Persuasion, propaganda, and influence are not the same thing.

To build conceptual clarity, it helps to distinguish between persuasion, propaganda, and influence operations. These terms are often used interchangeably in South African commentary, which creates confusion and poor decision-making.

Persuasion is open. It is explicit. It is the normal activity of democratic politics. A party or candidate presents ideas and asks voters to agree, support, or participate. The audience understands that it is being persuaded. There is no requirement for concealment. Persuasion can be honest or dishonest, but its defining feature is that it is visible and transactional.

Propaganda is more ideological and directive. It tends to push a worldview, demand loyalty, suppress alternatives, and simplify reality into rigid binaries. Propaganda often seeks dominance rather than debate. It is not always covert, but it typically aims to shape what is acceptable to think and say, and to marginalise competing frames.

Influence operations are different. They are adaptive, indirect, and often covert or deniable. They do not primarily tell people what to think. They guide people toward conclusions they feel they reached independently. They work by shaping the environment in which people think, through emotional cues, social proof, selective exposure, and identity framing.

This distinction matters because the defences differ. If you think the problem is persuasion, you focus on counter-messaging and debate. If you think the problem is propaganda, you focus on media plurality and civic education. If you understand the problem as influence operations, you must address the cognitive and social conditions that make manipulation effective, not only the content itself.

The cognitive domain is the decisive terrain.

Conflict has evolved. In earlier eras, the decisive battlefield was physical. Later, it expanded into the digital domain of networks, systems, and infrastructure. Today, one of the most decisive terrains is cognitive: the domain of perception, emotion, identity, and trust.

The cognitive domain includes beliefs, emotional triggers, moral frameworks, social identity, and sensemaking. It is where people decide what is real, what matters, who is legitimate, and what action feels necessary.

This is where influence operations aim, because if you can shape perception and trust, you do not always need to change material conditions. If you can make institutions appear incompetent, illegitimate, or hostile, you can weaken governance without direct confrontation. If you can make communities distrust each other, you can destabilise social cohesion. If you can make citizens believe that outcomes are rigged, regardless of the evidence, you can undermine elections without hacking a single system.

It is important to be disciplined here. Saying elections are cognitive contests does not mean every political message is manipulation, or that citizens are mind-controlled, or that there is always a hidden hand. It means that perception and emotion are strategic variables, and that modern actors treat them accordingly. If institutions and the public refuse to recognise this, they fight with blunt tools against refined methods.

Influence operations are not always about changing votes.

Many people assume influence operations exist to persuade voters to support a particular party or candidate. That can happen, but it is not always the primary objective, and focusing only on vote shifting can blind us to other goals.

Influence operations often aim to disrupt rather than persuade. They may seek to increase confusion, deepen polarisation, exhaust attention, erode trust in institutions, fracture communities, or delegitimise election outcomes. In some cases, the goal is to reduce participation, increase apathy, and drive voters out of democratic engagement.

This is one reason why simplistic responses fail. If the operation is designed to make people believe “nothing is trustworthy” or “everyone is corrupt” or “the system is rigged”, then fact-checking individual claims does not address the core effect. It can even worsen the situation by making institutions look defensive or selective.

A society that loses trust is easier to manipulate, because cynicism becomes the default. When citizens are cynical, they accept claims that confirm their despair, and they reject information that demands patience, nuance, or institutional confidence.

Why elections create ideal conditions for influence

Elections combine several conditions that make societies cognitively vulnerable.

First, elections heighten emotion. People care about identity, belonging, and the future. Campaigns are built to trigger emotion because emotion drives attention and participation. That is not a flaw; it is politics. But it also creates an environment where manipulation can blend into normal campaigning and activism.

Second, elections compress time. People must make decisions quickly. Institutions must respond under pressure. Media cycles accelerate. There is less time for reflection, verification, and calm sensemaking. This is ideal for narratives that demand immediate reaction.

Third, elections create information overload. People are bombarded with claims, promises, scandals, and counterclaims. When information volume increases, attention becomes scarce. Under those conditions, people rely more on cognitive shortcuts, emotional cues, and group identity.

Fourth, elections intensify social comparison and status anxiety. People measure themselves against others, measure communities against other communities, and measure the country against imagined alternatives. This can trigger resentment, humiliation, and moral outrage, all of which are powerful drivers of mobilisation.

These conditions do not guarantee successful influence operations, but they increase the likelihood of success when influence actors exploit them.

Why local government elections are uniquely exposed

Local government elections are particularly vulnerable for reasons that are specific to South Africa’s lived reality.

Local governance is where citizens experience the state most directly. It is the level at which service delivery failures are felt in water, electricity, housing, sanitation, roads, refuse removal, safety, and local economic opportunity. When people feel neglected or disrespected, their anger is not abstract. It is personal.

Local politics is also closer to community identity. It is tied to neighbourhoods, wards, local leaders, and local disputes. This makes narratives more emotionally intense and more difficult to correct, because local information spreads through informal networks, community WhatsApp groups, and everyday social relationships where trust is relational rather than institutional.

Local elections also tend to have less comprehensive scrutiny than national elections. The information environment is more fragmented. Local media may be weaker. National attention is inconsistent. This creates gaps that can be exploited by actors seeking to seed and amplify narratives quickly.

Finally, local elections intersect with community-level mobilisation and protest dynamics. If trust collapses locally, the consequences can be immediate: protests, rejection of councillors, intimidation, violence, and paralysis of local governance. Influence operations do not need to produce a national crisis to be strategically effective. Local instability can be enough.

The uncomfortable truth about who conducts influence operations

Many South Africans are comfortable discussing “foreign interference” because it allows the country to imagine the threat as external and exceptional. Foreign actors can play a role, and it is reasonable to take that possibility seriously. But focusing only on foreign interference can become a form of denial.

Influence operations are also domestic. Political actors use influence techniques. Activist networks use influence techniques. Commercial interests use influence techniques. Opportunistic groups use influence techniques. Sometimes these actors coordinate. Often, they do not need to. Narratives can converge without central control, because different groups see benefit in amplifying the same emotional frames.

This is why influence operations are difficult to attribute and difficult to regulate. The behaviour often appears to be normal political engagement until its effects become destabilising. At that point, the response becomes politically sensitive because any intervention can be framed as suppression, bias, or censorship.

If South Africa wants resilience, it must accept that influence is not a rare anomaly. It is part of the modern environment. The question is how to preserve democratic participation and free expression while reducing vulnerability to manipulation and destabilisation.

Why “more information” is not a solution

A common assumption is that the solution to manipulation is better information. The logic goes like this: if citizens have more accurate information, they will make better decisions. That sounds reasonable, but it is incomplete.

People do not process information like machines. They process information through identity, emotion, trust, and social belonging. When a narrative threatens identity, facts can feel like an attack. When people are emotionally invested, correction can feel insulting. When trust is low, evidence is discounted because the source is assumed to be compromised.

This is why influence operations focus so heavily on trust. If you can undermine trust in institutions, media, and expertise, you can weaken the power of corrective information. If you can create the sense that “everyone lies”, then truth becomes just another weapon in a tribal conflict.

Therefore, resilience cannot be reduced to information supply. It must include cognitive resilience, emotional discipline, and institutional maturity.

The goal is not censorship. The goal is cognitive resilience.

Whenever influence operations are discussed, there is a legitimate fear that the conversation will be used to justify censorship, surveillance, or political control. In South Africa’s context, those concerns are real, and the solution must not become more damaging than the threat.

That said, rejecting censorship does not mean ignoring influence operations. It means we need a better goal.

The correct goal is cognitive resilience, the ability of institutions, political parties, media, and citizens to recognise manipulation, manage emotion responsibly, preserve trust where it is deserved, and sustain democratic participation without drifting into paranoia or cynicism.

Cognitive resilience has several components.

It requires conceptual clarity: knowing what influence operations are, how they work, and what signs to watch for. It requires institutional awareness: recognising that the cognitive domain is part of national stability and electoral integrity. It requires political maturity: parties competing hard without treating societal fractures as acceptable campaign tools. It requires public literacy: citizens learning to notice emotional triggers, moral-urgency tactics, and false binaries.

This is not about turning citizens into analysts or demanding perfection from people under stress. It is about building a culture of disciplined sensemaking.

A necessary shift in the questions we ask

As South Africa approaches the 2026 local government elections, the core questions should expand beyond party support and campaign slogans.

We should be asking:

  • Who is shaping the dominant narratives, and why are those narratives resonating now?
  • Who benefits from confusion, polarisation, and mistrust?
  • Which emotions are being amplified, and what behaviours do those emotions drive?
  • Where is institutional trust most fragile, and how is that being exploited?
  • How do communities move from frustration to mobilisation, and what narratives trigger escalation?

These are not academic questions. They are practical questions that determine stability, legitimacy, and the quality of democratic participation.

If we treat elections only as political contests, vulnerability is guaranteed. If we understand elections as cognitive contests as well, preparation becomes possible. That preparation does not require censorship. It requires seriousness.

Why this series exists

This article is the first in a series examining influence operations in the South African context, with a focus on elections and democratic resilience. The series will move from concepts to mechanisms, from vulnerabilities to actor ambiguity, and from risk analysis to practical resilience, for the state, political parties, and citizens.

The intention is not to inflame fear, or to accuse without evidence, or to turn every political disagreement into a security threat. The intention is to bring conceptual clarity to a domain that South Africa cannot afford to misunderstand.

The country has time to prepare for 2026, but that time must be used wisely. Influence operations thrive in denial, confusion, and late reaction. Resilience thrives in early clarity, calm discipline, and institutional maturity.

If South Africa wants elections that strengthen democracy rather than erode it, the cognitive domain cannot be treated as an afterthought. The contest is already underway. The only question is whether we will keep using outdated lenses to interpret it, or learn to see it clearly.

Series: Influence Operations, Elections, and Cognitive Security

Editor’s Note

Given SHINGANGE

South Africa is approaching the 2026 local government elections amid heightened uncertainty, institutional strain, and social tension. At the same time, the nature of political contestation has changed in ways that are not yet fully understood or openly discussed in the public domain.

This series explores influence operations and cognitive risks affecting elections, governance, and the strength of democracy in South Africa. Public attention tends to fix on the visible political contest, while overlooking the subtler, deliberate shaping of perception, emotion, trust, and behaviour at scale.

The articles that follow do not assume malicious intent by default, nor do they seek to attribute blame prematurely. They do not argue for censorship, political control, or the restriction of legitimate dissent. Instead, they aim to clarify concepts, examine structural vulnerabilities, and explore how influence operates in real social conditions, particularly during electoral periods.

This series is written from an independent analytical perspective. It draws on security, risk, and cognitive domains of analysis rather than partisan or activist framings. Where examples are discussed, the focus is on patterns and mechanisms, not on endorsing or condemning specific actors.

The intention is to contribute to a more mature public conversation, one that recognises that democratic participation is shaped not only by policies and institutions, but also by emotion, identity, narrative, and trust. Understanding these dynamics is a prerequisite for strengthening resilience without undermining democratic values.

The 2026 local government elections are not treated here as an isolated event, but as part of a broader trajectory in which elections increasingly unfold in contested cognitive environments. Whether South Africa is prepared for that reality remains an open question.

This series is offered as a starting point for reflection, debate, and preparation.

This note introduces a series of articles examining influence operations and democratic resilience ahead of South Africa’s 2026 local government elections.

South Africa’s Cybersecurity Failure Is Not About Policy Gaps. It Is About State Capability.

Given SHINGANGE

1. Introduction: South Africa’s Cybersecurity Problem Is Not a Knowledge Problem

South Africa does not suffer from a lack of cybersecurity knowledge, frameworks, or international guidance. It suffers from a persistent failure of execution, authority, and accountability. For more than a decade, the country has produced policies, frameworks, and institutional arrangements that acknowledge cybersecurity as a national priority. Yet cyber incidents continue to rise, critical services remain exposed, and state capacity to respond coherently remains weak.

This is not a technical problem. It is a governance problem.

The latest Guide to Developing a National Cybersecurity Strategy, 3rd Edition (2025) makes this distinction explicit. The Guide is no longer focused on helping states understand what cybersecurity is. It is focused on helping states translate intent into durable capability. In this respect, South Africa stands as a clear example of a country that has absorbed the language of cybersecurity without internalising its discipline.

More concerning is that South Africa’s cybersecurity posture remains poorly aligned with the reality of modern hybrid threats, where cyber operations, disinformation, influence campaigns, economic coercion, and institutional weakness intersect. The country continues to treat cybersecurity as a narrow ICT or compliance issue, while adversaries treat it as a tool of power, leverage, and strategic influence.

This article argues that South Africa’s cybersecurity weakness is not caused by the absence of strategy. It is caused by the inability or unwillingness of the state to convert strategy into authority, funding, skills, and enforcement.

2. What the Guide Actually Says, Not What We Prefer to Hear

The 2025 Guide is explicit in its intent. It positions national cybersecurity strategy as a living governance instrument, not a policy document to be published and forgotten. It introduces a lifecycle approach that forces states to confront uncomfortable realities, such as sustainable funding, institutional leadership, implementation sequencing, and performance measurement.

At its core, the Guide emphasises three non-negotiables:

First, clear leadership and mandate. A national cybersecurity strategy cannot succeed without a single, empowered authority that coordinates across government and society.

Second, implementation and sustainment. Strategies without funded action plans, timelines, and accountability mechanisms are meaningless.

Third, adaptability to evolving threats, including emerging technologies and hybrid threat models that blur the line between civilian, economic, and national security domains.

The 3rd Edition strengthens these points by focusing heavily on financing, monitoring, evaluation, and technological foresight. This shift is significant. It reflects a global recognition that many states no longer fail at the level of ideas, but at the level of execution.

South Africa’s problem is that it continues to behave as if drafting a strategy is the same as building capability.

3. Using the Guide as a Benchmark: Where South Africa Falls Short

When the Guide’s overarching principles are applied to South Africa, the gaps are immediate and systemic.

Clear leadership and authority

South Africa does not have a single, clearly empowered national cybersecurity authority with the political weight and operational mandate required to coordinate across government, regulators, state-owned entities, and the private sector. Responsibilities are dispersed across departments, agencies, and committees, many of which lack enforcement power.

This fragmentation violates one of the most basic principles of the Guide: cybersecurity governance requires clarity of leadership, not collaborative ambiguity.

Whole-of-government coordination

The Guide assumes that cybersecurity cuts across sectors and functions. In South Africa, coordination often exists in theory but collapses in practice. Interdepartmental processes are slow, politicised, and frequently undermined by competing mandates and budgetary silos.

Cybersecurity is discussed, but rarely prioritised when trade-offs must be made.

Risk-based prioritisation

South Africa continues to struggle with national-level cyber risk management. There is limited evidence of a continuously updated national cyber risk register that informs policy decisions, investment, or crisis preparedness. Risk assessments, where they exist, are often static and compliance-driven.
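
To illustrate the difference between a static, compliance-driven assessment and a register that is continuously updated, the sketch below shows what a single register entry might record and how stale entries could be flagged. The fields, scoring scales, and 90-day review cadence are assumptions made here for illustration; they are not requirements drawn from the Guide.

    from dataclasses import dataclass
    from datetime import date, timedelta

    REVIEW_INTERVAL = timedelta(days=90)  # assumed cadence, not taken from the Guide

    @dataclass
    class CyberRiskEntry:
        risk: str        # e.g. ransomware against a provincial health system
        owner: str       # institution accountable for treating the risk
        likelihood: int  # 1 to 5; assumed scale
        impact: int      # 1 to 5; assumed scale
        last_reviewed: date

        def severity(self) -> int:
            return self.likelihood * self.impact

        def is_stale(self, today: date) -> bool:
            # A register meant to inform decisions must flag entries that
            # nobody has revisited; static, compliance-driven registers fail here.
            return today - self.last_reviewed > REVIEW_INTERVAL

    register = [
        CyberRiskEntry("hypothetical grid-operator intrusion", "energy regulator",
                       4, 5, date(2025, 1, 15)),
    ]
    stale = [(r.risk, r.severity()) for r in register if r.is_stale(date.today())]
    print(stale)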

Sustainable funding and capacity

The Guide is unambiguous. Cybersecurity requires predictable, multi-year funding and sustained investment in people. South Africa’s approach remains ad hoc. Cybersecurity initiatives are launched without long-term funding commitments, resulting in fragile systems that degrade over time.

This is not a budgeting issue alone. It reflects a failure to treat cybersecurity as a strategic investment rather than a discretionary expense.

4. Lifecycle Failure in the South African Context

The Guide’s lifecycle model provides a useful diagnostic tool to understand where South Africa consistently fails.

Initiation without authority

Strategies are initiated without clearly designating a lead authority with the power to compel cooperation. Committees are created, but authority is diluted.

Stocktaking without consequence

Assessments are conducted, reports are written, and gaps are identified. Yet these findings rarely result in decisive action or structural reform.

Strategies without funding

Cybersecurity strategies are published without binding financial commitments. Action plans, if they exist, are aspirational rather than operational.

Action plans without enforcement

Implementing entities are named, but consequences for non-delivery are absent. Performance management is weak or non-existent.

Monitoring without accountability

Monitoring and evaluation processes are often procedural, producing reports that are noted rather than acted upon.

In short, South Africa moves through the motions of the lifecycle without internalising its discipline.

5. Focus Areas Applied to South Africa’s Reality

Governance

Governance remains fragmented. No central authority has the mandate or legitimacy to enforce national cybersecurity priorities across sectors. This leads to duplication, gaps, and institutional paralysis.

Critical infrastructure and essential services

Despite repeated warnings, the protection of critical infrastructure remains uneven. Cybersecurity requirements are inconsistently applied, oversight is weak, and interdependencies between sectors are poorly understood.

National cyber risk management

There is no mature, dynamic national cyber risk management framework that informs strategic decision-making. Risk insights are not systematically linked to investment or crisis planning.

Incident response and CSIRT maturity

South Africa’s incident response capability is uneven and insufficiently integrated across sectors. Information sharing remains limited, and large-scale national exercises are rare.

Skills, capacity, and awareness

The skills deficit is acute, not only at technical levels but at senior decision-making levels. Many leaders responsible for cybersecurity policy lack the expertise to understand the consequences of inaction or poor design.

Legislation and regulation

While laws exist, enforcement is inconsistent. Regulatory overlap creates confusion, while gaps remain in areas related to cyber-enabled hybrid threats.

International cooperation

South Africa participates in international forums, but domestic capacity limits its ability to translate cooperation into tangible resilience.

6. Hybrid Threats and the Blind Spot in South Africa’s Cyber Policy

One of the most serious shortcomings of South Africa’s cybersecurity posture is its failure to fully integrate hybrid threats into national cyber policy.

Cybersecurity is still treated as an ICT issue, separate from disinformation, influence operations, economic coercion, and cognitive manipulation. This separation is artificial and dangerous.

Hybrid threats exploit institutional weakness, social divisions, and governance gaps. They target trust, decision-making, and legitimacy. South Africa’s fragmented cybersecurity governance makes it particularly vulnerable to such operations.

The Guide implicitly recognises this reality through its emphasis on cross-sector coordination and technological foresight. South Africa has yet to operationalise this insight.

7. Strategic Risks of Continued Inaction

The risks of continued failure are not abstract.

Critical services remain exposed to disruption. Public trust in digital systems erodes. The state becomes increasingly vulnerable to foreign influence operations that exploit weak cyber governance. Crisis response capabilities remain inadequate during national emergencies or high-profile events.

Most importantly, cybersecurity failure undermines state credibility and sovereignty.

8. What South Africa Should Be Doing Now

South Africa does not need another strategy. It needs discipline.

First, designate a single national cybersecurity authority with clear legal and political authority.

Second, align funding with strategy through multi-year commitments embedded in national budgeting processes.

Third, establish enforceable accountability mechanisms for implementation.

Fourth, integrate cybersecurity fully into national security and hybrid threat frameworks.

Finally, invest in decision-maker capability, not only technical skills.

9. Conclusion: From Strategy Documents to State Capability

Cybersecurity is a test of governance. South Africa has repeatedly failed that test, not because it lacks guidance, but because it lacks the will and structure to act.

The 2025 Guide does not offer comfort. It offers a mirror. What South Africa sees in that mirror should be deeply unsettling.

The question is no longer whether the country understands cybersecurity. The question is whether it is prepared to govern it.