In the wake of last week's alarming report from Anthropic—the AI safety company founded by former OpenAI researchers and creators of Claude, a leading AI assistant competing with ChatGPT—I find myself increasingly concerned about our collective capacity for discernment in this accelerating AI era.
And this comes from someone who considers Claude his favourite AI tool, one that can positively augment all kinds of important work.
The report by Anthropic researchers Ken Lebedev, Alex Moix and Jacob Klein reveals a disturbing evolution: a financially motivated "influence-as-a-service" operation that leveraged Claude to orchestrate over 100 distinct social media personas across X and Facebook.
This sophisticated network engaged with tens of thousands of authentic users while promoting narratives that, among other things, "supported or undermined European, Iranian, UAE, and Kenyan interests."
What makes this case particularly concerning is the operation's strategic approach.
Rather than aiming for viral content, these actors prioritised "persistence and longevity" through seemingly authentic engagement—gradually pulling users into politically aligned echo chambers through interactions that appeared organic.
Claude was used to make tactical engagement decisions, determining whether personas should like, share, comment on, or ignore specific posts based on their clients' political objectives.
The irony isn't lost on me that I'm discussing the misuse of a tool I've personally found transformative for my writing. That makes its weaponisation all the more troubling. Like any powerful technology, its value depends entirely on the intent and deployment tactics of those wielding it.
Deepfakes and digital doppelgängers

Meanwhile, Steven Bartlett—one of the UK's most prominent entrepreneurs, Dragon's Den investor, and host of the globally popular "Diary of a CEO" podcast—recently sounded the alarm about deepfake scams proliferating across social platforms.
"Over the last few weeks hundreds of deepfake ads have popped up on Meta and X promoting scams, using videos or images of me made with AI," he wrote, offering practical guidance on setting family code words and verifying urgent money requests through separate channels.
"Who cares if it's real?"

What's particularly unsettling about this moment isn't just the technology's capabilities, but our diminishing collective concern about objective reality and, dare I say, wisdom and truth.
In my recent piece about AI's disruption of my voiceover career, I confronted the stark warning from Marie Lora-Mungai—a respected African creative industries expert—that my "voice talent job is going to disappear." Yet despite this personal reckoning, I find myself less concerned about individual livelihoods than about our societal relationship with verifiable facts and basic common sense.
A 2021 interview for US tech publication The Verge captured this disorientation perfectly. During a promotional conversation about "The Matrix Awakens"—a video game based on Hollywood’s Matrix film franchise that itself explores the boundary between reality and simulation—actors Keanu Reeves and Carrie-Anne Moss discussed virtual identity.
Reeves recounted asking a friend's 13-year-old daughter about the difference between real and virtual worlds. Her response? "Who cares if it's real?" While Reeves seemed quite tickled by this response, I find it profoundly disturbing.
As someone who built a career in media before the algorithmic age fully set in, I find this sentiment less "awesome" than deeply concerning. It's a wake-up call for those of us who lived through the contrast—the before times—to educate and provide perspective to younger generations. Otherwise, we're all in serious trouble.
Recent Google search data from Ahrefs offers a sobering window into South African online behaviour, with gambling- and porn-related searches dominating our digital landscape. According to recent analyses, the term "Betway logging" (a common misspelling of "Betway login", widely typed by South Africans trying to access their Betway accounts) generates around 14 million monthly searches, while gambling sites feature prominently among the country's most visited websites.
This isn't merely a curiosity—it's a reflection of our collective attention and priorities in an increasingly digital society.
Against this backdrop, one of OpenAI CEO Sam Altman's recent X posts strikes me as alarmingly tone-deaf: "if you are not skillsmaxxing with o3 at minimum 3 hours every day, ngmi."
This mashup of corporate-speak and Gen Z slang, essentially suggesting that without dedicating at least three hours daily to OpenAI's latest o3 model you're "not gonna make it", perfectly encapsulates the anxious, FOMO-driven content creation and consumption patterns AI companies like OpenAI (currently ranked the ninth most visited site in South Africa) seek to normalise.
This messaging comes just a month after OpenAI's announcement of their Academy initiative, ostensibly designed to support AI literacy across diverse backgrounds. The contrast between educational outreach and pressurised adoption tactics reveals the complex corporate incentives at play.
To be clear, I'm not suggesting we reject AI innovation outright, or even that it wouldn’t be wise to upskill ourselves. My own experimentation with these tools has revealed their positive potential to augment our work rather than simply replace it. But we must resist the relentless pressure to mindlessly consume these technologies without critical engagement.
Andile Masuku is Co-founder and Executive Producer at African Tech Roundup. Connect and engage with Andile on X (@MasukuAndile) and via LinkedIn.
BUSINESS REPORT