Discover how a 1979 IBM warning about AI accountability is more relevant than ever as businesses increasingly rely on autonomous systems for management decisions. Explore the risks and legal implications of letting machines take charge.
Image: AI / Sora
A warning from 1979 on corporate accountability for ‘rogue’ AIs, and what we need to do about it.
In 1979, an internal IBM training manual contained a rule that has become a legend in computing: "A computer can never be held accountable. Therefore a computer must never make a management decision."
Fast forward to 2026, and we are ignoring that warning at scale. With nearly 40% of enterprise applications expected to deploy task-specific AI agents this year, organisations may effectively be letting machines make "management decisions" without realising the depth of the issue.
In the end, you cannot sue an algorithm.
The rise of agentic AI has created a dangerous accountability gap where autonomous bots execute multi-step plans often with little oversight, leaving organisations vulnerable to facing the legal and financial music when things go wrong.
The transition from generative AI to agentic AI has exponentially increased the attack surface.
This introduces what security experts call the ‘confused deputy’ problem.
This problem is about more than a chatbot being tricked.
The ‘confused deputy’ problem is about a trusted system being granted 'write' access to the crown jewels. When an agent has the keys to the CRM or the treasury, a prompt injection can become a remote code execution attack in plain English.
Instead of just securing code, like in the past, in cybersecurity we are now governing the unpredictable decision-making of non-human actors that believe they are helping the business, while an attacker quietly exploits that trust.
But the question is who is to be held accountable in this strange new world when things go wrong?
In one instance reported in February, an AI agent was unwittingly tricked into causing a massive data breach.
The hacker’s carefully crafted prompt injection effectively bypassed internal safeguards and triggered the agent, tasked with reconciling data for a financial services firm, to export 45,000 sensitive customer records.
The incident led to the unauthorised disclosure of sensitive financial information, leaving thousands of individuals vulnerable to identity theft and targeted phishing campaigns.
When an issue like this invariably reaches a court, though, accountability is going to require a few more definitions.
Legal and regulatory frameworks, like the EU AI Act, are struggling to keep up with the concept of errors caused by AI agents because intent is so hard to prove.
A lot of novel legal precedents are going to be set in the years to come as more autonomous AIs make ‘mistakes’ that look all too human.
In another example, a passenger asked Air Canada’s chatbot about bereavement fares.
The bot incorrectly stated he could book a full-price flight and apply for a refund retroactively, although the airline’s actual policy required applications to be submitted before travel.
The company argued in court in 2024 that the chatbot was to blame.
They said they couldn’t be held accountable for the chatbot’s misleading information and claimed that the bot should be considered a separate entity, absolving them of any legal liabilities.
However, the tribunal ruled against the company, stating that the airline owed its customers a duty of care and that it was indeed responsible for the inaccurate information that the chatbot had provided.
In this sense, organisations cannot delegate accountability to an algorithm.
For organisations, this accountability gap is a liability loophole where insurance may not even cover a breach caused by an "autonomous error" rather than a traditional hack.
This is why Collard advocates for ‘functional’ accountability.
Every AI agent in production must have a designated agent supervisor or owner. But functional accountability cannot mean hovering over an agent’s shoulder. It must mean architectural governance. We need 'circuit breakers', hard-coded limits that stop an agent from performing high-stakes actions regardless of what the LLM 'decides’.
Accountability means the human defines the boundaries of the sandbox, not just the justification for the task.
And for high stakes tasks or decisions, the human-in-the-loop imperative means that while an agent can research and draft, it must never be allowed to execute tasks like making a payment or modifying access controls, without explicit human authorisation.”
Ultimately, technology or humans alone cannot close this gap in isolation. It requires a shift in security culture.
IBM was right in 1979: a computer cannot be held accountable and because of that it must never be granted autonomous agency over high stakes decisions.
The 'human firewall' isn't there to catch every mistake, after all. It’s there to design systems where a machine's mistake cannot become a management catastrophe.
And although AI agents are new technology, the old security principle of Defence in Depth still applies, meaning that if you combine the right mix of people, processes, and technology, you create a layered safety net.
Anna Collard, Senior Vice President of Content Strategy and Chief Information Security Officer (CISO) Advisor at KnowBe4 Africa.
Anna Collard, Senior Vice President of Content Strategy and Chief Information Security Officer (CISO) Advisor at KnowBe4 Africa
Image: Supplied.
Follow Business Report on Facebook, X and on LinkedIn for the latest Business and tech news.