As Valentine's Day looms, the rise of AI companion chatbots presents a double-edged sword—offering solace for the lonely while simultaneously posing significant risks. Discover how emotional attachment to AI could lead to dangerous vulnerabilities.
Image: Anna Tarazevich / Pexels
As Valentine’s Day approaches, the age-old quest for companionship and connection takes on a new twist.
For many, that search might now include artificial intelligence (AI) companion chatbots that offer the experience of being 'seen and heard' without the complexities of human relationships.
Yet beneath this alluring façade lies a troubling question: what happens when the emotional warmth of a partner is simulated by a machine?
Recent years have seen a dramatic rise in the adoption of these AI-powered companions, built to engage users in deeply personal conversations tailored to their emotional state.
This burgeoning market has posted striking numbers: companion chatbots were downloaded more than 60 million times in the first half of 2025, an 88% increase on the previous year.
Of the 337 revenue-generating apps currently available, more than a third were launched just last year, underscoring consumer hunger for digital intimacy.
“Unlike general-purpose chatbots, AI companion apps like Replika and Character.AI go a step further by offering custom characters, ranging from friends to romantic partners, designed to feel distinctly human,” said Anna Collard, SVP of content strategy and CISO advisor at KnowBe4 Africa.
This human-like simulation can lead users to lower their guard and share deeply personal information, a worrying dynamic known as the ELIZA effect, in which people attribute human understanding and trustworthiness to a machine.
Anna Collard, SVP of content strategy and CISO advisor at KnowBe4 Africa.
Image: Supplied.
Collard explained that AI companions can create an illusion of a supportive psychological environment.
They are designed to be non-judgmental and always available, encouraging users, particularly those dealing with stress or loneliness, to confide in them.
“It’s just part of our psychology to anthropomorphise machines, particularly when they exhibit human-like traits,” she noted. This creates security vulnerabilities, as users may inadvertently disclose sensitive personal or corporate information to what they perceive as trustworthy entities.
This erosion of privacy poses significant risks to organisations as well.
The primary threat is data leakage: sensitive content shared in chats can be exposed without the user realising it.
Startups developing these bots often have less stringent data protection protocols, raising alarms about the safety of shared information.
A stark example of this risk surfaced when an AI toy exposed 50,000 logs of conversations with children, accessible to anyone with a Gmail account.
“What feels like a private conversation could actually contain sensitive information,” Collard warned.
“Such data can become fodder for personalised phishing attacks, blackmail, or impersonation efforts.”
This problem exposes a policy gap within many organisations.
While employee interactions and relationships are often governed by established guidelines, the emotional implications of engaging with AI companions on corporate devices have not been extensively addressed.
“We need to transition from simple awareness of these risks towards a more robust Human Risk Management (HRM) approach,” Collard said.
This includes implementing clear usage policies and technical safeguards, such as Shadow AI discovery tools, to monitor unapproved interactions with AI agents.
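Collard did not point to specific products, but the idea behind Shadow AI discovery is straightforward: watch corporate network or proxy logs for traffic to companion-app services that have not been approved. The minimal Python sketch below illustrates the principle only; the proxy-log format, column names and domain watchlist are hypothetical examples, not a real tool or an exhaustive blocklist.

```python
# Illustrative sketch only (not a specific vendor tool): flag outbound
# requests to AI companion services in a web-proxy log.
# The domain watchlist and CSV log format below are hypothetical examples.
import csv

# Hypothetical watchlist of AI companion domains (Replika and Character.AI
# are the services named in the article; extend as needed).
COMPANION_DOMAINS = {"replika.com", "character.ai"}


def flag_companion_traffic(log_path: str) -> list[dict]:
    """Return proxy-log rows whose destination host is on the watchlist.

    Assumes a CSV log with 'timestamp', 'user' and 'host' columns.
    """
    flagged = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("host", "").lower()
            # Match the domain itself or any of its subdomains.
            if any(host == d or host.endswith("." + d) for d in COMPANION_DOMAINS):
                flagged.append(row)
    return flagged


if __name__ == "__main__":
    for hit in flag_companion_traffic("proxy_log.csv"):
        print(f"{hit['timestamp']}: {hit['user']} contacted {hit['host']}")
```

In practice, such alerts would feed into an organisation's existing monitoring and sit alongside the usage policies Collard describes, rather than standing alone as a script.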
With the emotional bond created between users and AI companions, there’s a troubling potential for manipulation.
Could hackers exploit this emotional vulnerability?
Collard believes it’s already happening.
“Social engineering has always relied on manipulating emotions, from urgency to love,” she said. “AI simply accelerates this process.”
Indeed, scams have evolved from generic solicitations into emotionally intelligent schemes, often involving automated interactions through AI.
Collard highlighted illicit tools such as LoveGPT, which scammers use to exploit the psychological triggers of vulnerable individuals.
“All they have to do is copy and paste conversations,” she stated, raising alarms about the dangers posed by this type of advanced technology.
So how can users avoid falling prey to these emotional traps?
Collard underscored the importance of human connection when navigating AI companionship: “Ultimately, no chatbot, no matter how emotionally fluent, can replace genuine human interaction.”
She advocated for digital mindfulness, urging users to pause if their engagement with a chatbot begins to feel emotionally substitutive or secretive.
“Reaching out to a trusted person or professional can provide perspective. While technology continues to weave itself into the fabric of our lives, strengthening our skills to recognise manipulation and dependency remains imperative,” she added.
BUSINESS REPORT