As banks grapple with increasingly adaptable threats, the need for collaboration and innovation has never been more critical. With the stakes higher than ever, can the financial sector truly evolve to outsmart these dynamic attackers?
In the rapidly evolving landscape of cybersecurity, banks face an unprecedented challenge: a surge in sophisticated, AI-powered attacks that not only exploit vulnerabilities in real time but also mimic human behaviour, turning financial institutions’ own systems against them.
Experts warn that without collaborative incident reporting and the deployment of advanced AI-driven defences, the risk of fraud for customers will escalate dramatically.
“Rather than new types of fraud, what we’re seeing is a step-change in how adaptable, realistic, and efficient existing attack patterns have become,” said Andries Maritz, Principal Product Manager at the banking and payment authentication company, Entersekt.
He noted that the traditional methods of fraud are evolving, with a significant uptick in social engineering strategies and BIN-level attacks targeting card-not-present (CNP) and 3-D Secure (3DS) environments.
Maritz’s insights are alarming; fraudsters are no longer satisfied with simplistic schemes.
They are employing bots that can learn, adapt, and personalise their attacks by interacting with live banking systems. Such technology allows criminals to probe authentication protocols, gather information, and exploit weaknesses within the banks' own frameworks.
“These aren’t simple scripted attacks anymore. The attacks have become far more detailed. Traditional identifiers are being spoofed much more accurately, and in many cases, the behaviour being presented is increasingly human-like,” Maritz explained.
This evolution raises the critical question: how can banks differentiate between genuine customers and sophisticated bots?
The shift in tactics is concerning.
Attackers are using AI to create dynamic fraud strategies that evolve in real time, allowing them to discern which defences within a bank’s security framework are most susceptible.
Automation compounds these threats by enabling fraudsters to amass vast amounts of personal information, tailor their assaults to individuals, and skilfully bypass conventional detection methods.
“Advanced AI agents are replicating subtle human behaviours, such as typing speeds or mouse movements,” warned Maritz. “Just like us, these machines are learning through each interaction, and because they are acting at scale, their ability to improve becomes exponentially greater.”
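To make the idea of behavioural signals concrete, here is a minimal Python sketch of the kind of check defenders have long relied on: scoring inter-keystroke timings for machine-like regularity. The function name, the five-sample minimum, and the 0.15 coefficient-of-variation threshold are illustrative assumptions, not Entersekt’s or any bank’s actual detection logic.

```python
import statistics

def keystroke_bot_score(inter_key_ms: list[float]) -> float:
    """Return a crude 0-1 'bot likelihood' score from inter-keystroke gaps (ms).

    Human typing is irregular; classic scripted input is almost metronomic.
    The 0.15 coefficient-of-variation threshold is an illustrative assumption.
    """
    if len(inter_key_ms) < 5:
        return 0.0  # too little signal to judge either way
    mean = statistics.mean(inter_key_ms)
    cv = statistics.stdev(inter_key_ms) / mean if mean else 0.0
    # Low variability relative to the mean reads as machine-like.
    return max(0.0, min(1.0, (0.15 - cv) / 0.15))

# A scripted bot typing at a near-constant 100 ms per key scores close to 1.0
print(keystroke_bot_score([100, 101, 99, 100, 100, 101]))
# A human pattern with natural jitter scores 0.0
print(keystroke_bot_score([180, 95, 240, 130, 310, 160]))
```

The point of Maritz’s warning is that this class of static check is exactly what adaptive AI agents now defeat: by injecting human-like jitter into their typing and mouse movement, they score like the human example above, which is why behavioural telemetry increasingly has to be weighed alongside other context rather than trusted on its own.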
The scale of the threat is underscored by projections from Deloitte’s Center for Financial Services, which suggest that generative AI could push fraud losses in the United States to US$40 billion by 2027.
Yet, in the face of this unprecedented challenge, many organisations remain reliant on outdated methods.
Almost half (48%) of US companies still depend on human reviews and manual checks to combat fraud, a practice that Maritz argues is insufficient given the current scale of AI attacks.
Fraud reporting has also become less predictable. Some attacks trigger immediate alerts, with customers reporting fraud within minutes of an incident.
Other attacks may lie dormant, compromising accounts for weeks or months before money is misappropriated, making the window between initial breach and reporting highly variable.
Despite these daunting challenges, Maritz commended the banking sector’s response, noting significant investments in advanced tooling and anomaly detection that are driving industry-wide progress. However, he cautioned that data and signals siloed across disparate systems, often a consequence of regulatory and architectural constraints, leave gaps in fraud detection.
As banks navigate this turbulent environment, the role of the Chief Information Security Officer (CISO) becomes increasingly complex.
Reliance on fixed rules or static measures is becoming obsolete; Maritz advocated for robust system capabilities that support dynamic scoring and contextual risk assessments. “Fraud prevention is a team sport,” he emphasised.
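To illustrate what dynamic scoring and contextual risk assessment can look like in practice, the sketch below, a simplified assumption rather than any vendor’s production model, blends several session signals into a single score and maps it to a graduated response instead of a hard rule.

```python
from dataclasses import dataclass

@dataclass
class SessionContext:
    amount_vs_typical: float   # transaction amount divided by the customer's usual amount
    new_device: bool           # first time this device is seen on the account
    geo_velocity_kmh: float    # implied travel speed since the previous login
    behaviour_score: float     # 0-1 bot likelihood from behavioural signals

def contextual_risk(ctx: SessionContext) -> str:
    """Blend contextual signals into one score, then pick a graduated action.

    A static rule ("block every transfer over R10,000") looks at one signal in
    isolation; here each signal nudges a single score, and the response is
    graduated: allow, step-up authentication, or block. Weights and thresholds
    below are illustrative assumptions only.
    """
    score = 0.0
    score += min(ctx.amount_vs_typical / 10, 1.0) * 0.30
    score += 0.25 if ctx.new_device else 0.0
    score += min(ctx.geo_velocity_kmh / 900, 1.0) * 0.20  # roughly jet speed caps it
    score += ctx.behaviour_score * 0.25
    if score >= 0.7:
        return "block"
    if score >= 0.4:
        return "step-up"  # e.g. request an additional authentication factor
    return "allow"

# A large transfer from a brand-new device with bot-like behaviour
print(contextual_risk(SessionContext(8.0, True, 50.0, 0.9)))   # 'block'
# A routine payment from a known device and a human-looking session
print(contextual_risk(SessionContext(1.2, False, 0.0, 0.1)))   # 'allow'
```

The design choice worth noting is the middle “step-up” outcome: rather than blocking or allowing outright, a contextual engine can ask for additional authentication only when the combined signals warrant it.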
In this escalating arms race between banks and fraudsters, AI is both a weapon and a shield.
Maritz highlighted its potential to provide explainable models, rapid pattern detection, and dynamic intelligence sharing, all of which are essential for creating adaptive defences.
The financial sector must move away from reactive strategies based on individual incidents, evolving instead towards resilient systems that can keep pace with rapidly changing fraud landscapes.
BUSINESS REPORT