
Why training AI agents is as crucial as training your employees

Ashley Lechman

As AI technology rapidly advances, organisations must modernise their data security practices. Doing so not only protects sensitive information but also empowers employees in the digital age, paving the way for a more secure future.


As Data Privacy Day approaches on 28 January, the conversation surrounding data protection is evolving at a remarkable pace.

With the workforce expanding to include Artificial Intelligence (AI), it is now imperative that organisations not only educate their human employees but also prioritise the training of AI agents. In 2026, we are navigating a complex landscape in which both human instinct and artificial intelligence hold the keys to safeguarding the most sensitive data.

Anna Collard, Senior Vice President of Content Strategy and CISO Advisor at KnowBe4 Africa, presents a compelling argument against the outdated notion of pitting ‘people against technology’.

She introduces the concept of Human Risk Management+ (HRM+), a revolutionary approach that advocates for equal training of both humans and AI systems to create a cohesive and flexible defence mechanism.

“We have entered the era of ‘Dual Defence’,” Collard asserts. “With employees increasingly utilising AI tools to process data, and malicious actors employing AI to infiltrate and steal that data, organisations can no longer afford to rely solely on human education. Training workforce members and AI defence agents must go hand-in-hand to thwart threats that could otherwise go unnoticed.”

The changing landscape of human vulnerability

Statistics on data breaches still overwhelmingly point to a human factor, but the nature of that vulnerability has transformed.

Clicking on a suspicious link certainly remains a perilous action; however, today’s risk also manifests in an employee’s interactions with unmanaged AI tools and in their ability to recognise and respond to AI-generated threats that are often indistinguishable from reality.

“In 2026, a static defence cannot stop dynamic threats,” cautions Collard. “Cyber attackers are now using automation to enhance their social engineering techniques. If your protection relies only on an employee recalling a policy from six months ago, you will inevitably fail. We need Agentic AI – sophisticated defence agents that work in real-time alongside employees to provide supportive coaching and promptly intervene when risky behaviour is detected.”

Advancing beyond basic security awareness

In line with South Africa’s regulatory framework, specifically the Protection of Personal Information Act (POPIA), organisations must implement “reasonable technical and organisational measures” to protect data. However, Collard argues that the definition of “reasonable” is changing – it now encompasses the need for adaptability.

“Compliance checklists are static forms of assurance. Risk is anything but static,” she explains. “Modern data privacy requirements necessitate a defence strategy that evolves. This means employing AI to scrutinise user behaviour in real-time, adapting measures tailored to individual roles. For example, if an employee in finance becomes the target of a spear-phishing attack, your AI defence agents must step in immediately to provide relevant support and isolate the potential threat.”

Training a unified human-AI team

To establish this advanced level of security, KnowBe4 outlines a dual defence strategy that draws on the strengths of both humans and AI:

  • Train the human: Move beyond generic awareness by employing data-driven simulations that mimic specific AI-driven threats, such as deepfakes or complex business email compromise scams, tailored to each employee’s role.
  • Train the agent: Deploy AI Defence Agents (AIDA) that adapt and learn from your environment. In tandem with employee training, these security systems must recognise behavioural anomalies indicating potential mistakes.
  • Secure the interaction: Ensure that employees understand the privacy implications of using generative AI tools, to avoid the risks associated with Shadow AI, where sensitive data is inadvertently shared with public AI models.

The future is collaborative

Data Privacy Day represents more than just a reminder to change passwords; it calls for a fundamental shift in how we envision our workforce.

“We must stop framing the ‘human firewall’ as a standalone mechanism,” concludes Collard. “The future of data privacy relies on the synergy between human intuition and machine velocity. When we educate both employees and AI agents to operate as a harmonious team, we not only meet compliance requirements but also establish a self-healing, adaptive defence system that continuously improves with every breach attempt.”

BUSINESS REPORT