The New Cybersecurity Battle: Why AI is Both the Weapon and the Shield
Every time a new technology arrives, fear arrives with it.
When the internet went mainstream, people said it would destroy privacy. When social media took off, the warning was that it would erode real relationships. Now, with AI agents and chatbots in everyday life, the headlines are the same: jobs will vanish, humans will lose control, and this is the most dangerous technology ever made.
History has a pattern. The fear is almost always louder than the reality.
AI agents and chatbots are tools. Extraordinarily powerful ones, yes, but tools nonetheless. Their impact depends on how we choose to use them. The real question isn’t whether AI is good or bad. It’s whether we’re building systems strong enough to handle it.
AI is Already Quietly Changing How We Work
Chances are, you’re already using AI. People use it to draft emails, summarize documents, organize schedules, and brainstorm. AI reduces friction and clears mental clutter.
Organizations are investing heavily because it moves the needle. AI analyzes data faster than humans, processes patterns at scale, and eliminates repetitive manual effort, freeing teams to focus on work that actually requires judgment. Humans remain far better at nuance and context. But AI? It’s exceptional at scanning, sorting, and spotting patterns, and it never gets tired.
But Efficiency Cuts Both Ways
Here’s the uncomfortable truth: any technology that makes life easier for most people will also make life easier for people who intend harm.
AI can generate highly convincing phishing emails, realistic fake websites, synthetic identity documents, deepfake videos, and voice clones that could fool people who know you. What’s changed isn’t just capability, it’s speed and accessibility. Creating a fake website once required real technical skill. Now it takes minutes. Fraudsters have always evolved faster than security systems. AI just put that evolution on steroids.
The numbers bear this out. Phishing emails surged 1,265% following the launch of ChatGPT, as AI made it trivially easy to write convincing, personalized scam messages. And deepfakes quadrupled globally between 2023 and 2024, accounting for 7% of all fraud attempts.
The most striking real-world example: in January 2024, an employee at engineering firm Arup in Hong Kong transferred US$25.6 million to fraudsters after a video call in which every participant, including the company’s CFO, was a deepfake. The scam began with a phishing email that the employee initially suspected. His doubts were erased when he joined what appeared to be a multi-person video call with familiar colleagues. It took 15 transactions before anyone realized what had happened.
Meanwhile, a McAfee survey found that 47% of Indian phone users have experienced AI voice scams, the highest rate globally, with 66% saying they would respond to a voice call from someone claiming to be a friend or family member in urgent need.
When “Looks Legitimate” is No Longer Enough
Take merchant onboarding in fintech or banking. Traditionally, a human reviewer visits a merchant’s website, reviews KYC documents, and cross-verifies details. This takes 8-12 minutes per merchant.
Today, a convincing website can be generated by AI in minutes. Documents can be synthetically created. Malware can be hidden behind interfaces that look completely legitimate. A human reviewer, no matter how experienced, cannot manually detect embedded scripts or subtle data manipulation without technological support.
“It looks fine” is no longer a reliable conclusion.
The same problem is spreading across industries. Forged or altered documents, including fake IDs, passports, and proof of address, now account for 50% of all fraud attempts globally. Synthetic identity fraud in financial services has grown from US$8 billion in 2020 to over US$30 billion today, roughly a fourfold increase in five years. Synthetic identity document fraud in North America alone spiked 311% in the year to Q1 2025, with e-commerce, healthtech, and fintech identified as the highest-risk industries.
The Shift That Cybersecurity Must Make Now
We’re entering a world where almost anyone can create a digital identity, launch an online storefront, generate content at scale, and fabricate supporting documents convincingly and quickly.
Static security cannot hold. Defenses need to become adaptive, continuous, context-aware, and AI-assisted. That means moving beyond perimeter checks and surface-level validation. It means behavioral analysis flagging a merchant whose transaction patterns shift overnight, or a user who logs in from three countries in six hours. It means continuous monitoring post-onboarding, not just a one-time check at the gate.
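To make that concrete, here is a minimal sketch of the kind of rule such a system might run continuously against login events. The field names, thresholds, and sample data are illustrative assumptions for this example, not a description of any particular product:

```python
from datetime import datetime, timedelta

# Illustrative login events; in practice these would stream from an auth log.
logins = [
    {"user": "u123", "country": "SG", "ts": datetime(2025, 3, 1, 9, 0)},
    {"user": "u123", "country": "DE", "ts": datetime(2025, 3, 1, 11, 30)},
    {"user": "u123", "country": "BR", "ts": datetime(2025, 3, 1, 14, 0)},
]

def flag_geo_velocity(events, window=timedelta(hours=6), max_countries=2):
    """Flag users seen in more than max_countries countries within one window."""
    flagged = set()
    events = sorted(events, key=lambda e: e["ts"])
    for i, start in enumerate(events):
        countries = {start["country"]}
        for later in events[i + 1:]:
            if later["user"] != start["user"]:
                continue
            if later["ts"] - start["ts"] > window:
                break  # events are time-sorted, so nothing later fits the window
            countries.add(later["country"])
        if len(countries) > max_countries:
            flagged.add(start["user"])
    return flagged

print(flag_geo_velocity(logins))  # {'u123'}: three countries inside six hours
```

In a real deployment, a rule like this would be one signal among many, feeding a risk score that a human reviews rather than triggering an automatic block.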
The scale of sophisticated fraud is accelerating. Between 2024 and 2025, multi-step fraud attacks (coordinated schemes involving several stages) rose 180% year-on-year. Waiting for a breach to update your defenses is no longer a viable strategy.
So What Does AI-Powered Defense Actually Look Like?
AI isn’t just the threat. It’s the most credible answer to the threat.
AI-driven security systems can scan websites instantly for malicious code, detect inconsistencies across data points, identify behavioral anomalies in real time, and flag suspicious patterns no human reviewer would catch at scale.
A new category of AI-native risk platforms is emerging to support exactly this.
Rather than replacing human reviewers, these systems are designed to strengthen their work. Instead of manually checking a site for visible red flags, AI can analyze deeper signals such as structure, hidden scripts, digital footprints, and behavioral patterns. Humans remain responsible for judgment and accountability, while the machine handles large-scale pattern detection.
It’s not about speed for the sake of it. It’s about clarity. Giving humans better information so they can make better decisions, faster.
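As an illustration of what those deeper signals can mean in practice, the sketch below checks a page for a few simple markers of hidden or obfuscated scripts. The specific patterns, and the use of the requests and BeautifulSoup libraries, are assumptions made for the example; a real platform would weigh far more signals:

```python
import re
import requests                      # assumed available: pip install requests
from bs4 import BeautifulSoup        # assumed available: pip install beautifulsoup4

# Illustrative heuristics only; a real rule set would be much broader.
SUSPICIOUS_PATTERNS = [
    re.compile(r"eval\s*\("),             # dynamic code execution
    re.compile(r"atob\s*\("),             # base64 decoding, common in obfuscated payloads
    re.compile(r"document\.write\s*\("),  # markup injected at render time
]

def scan_page(url: str) -> list[str]:
    """Return human-readable findings for one page, for a reviewer to judge."""
    findings = []
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    # Inspect inline scripts for obfuscation markers a visual check would miss.
    for script in soup.find_all("script"):
        code = script.string or ""
        for pattern in SUSPICIOUS_PATTERNS:
            if pattern.search(code):
                findings.append(f"inline script matches {pattern.pattern!r}")

    # Elements hidden via CSS sometimes hold cloaked content or form fields.
    for tag in soup.find_all(style=re.compile(r"display\s*:\s*none", re.I)):
        findings.append(f"hidden element: <{tag.name}>")

    return findings

if __name__ == "__main__":
    for finding in scan_page("https://example.com"):
        print(finding)
```

The point is not these particular heuristics but the division of labour: the machine surfaces findings at scale, and a human decides what they mean.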
Where Does This Leave Us?
AI agents and chatbots are not going away. Neither are the people trying to exploit them.
The organizations that thrive will be the ones that understand balance, using AI to strengthen detection and response, while keeping humans firmly in control of judgment. Cybersecurity in the age of AI is not about fighting technology. It’s about evolving with it, deliberately, responsibly, and ahead of the curve.
When we stop treating AI as either a villain or a miracle solution and start treating it as a powerful instrument that requires thoughtful use, we move from fear to responsibility.
And that’s how you build trust in a world where intelligence, both human and artificial, keeps getting more capable.