The State of Phishing Scams

By Nicole Jiang, Co-Founder and CEO of Fable Security

If you’re wondering whether phishing scams are changing and getting worse, the answer is “yes.” Today’s phishing scams have evolved from indiscriminate “spray and pray” blasts into high-volume, high-relevance campaigns. Attackers are using generative AI to spin up thousands of parallel campaigns that look like the real thing. First, they use AI to do deep research on the cheap. Cybercriminals can pull details from public sources, breached data, and even internal breadcrumbs if they gain partial system access—data employees leave in tickets, calendars, and collaboration tools like Slack.

Then, they use AI to craft hyper-realistic lures: invoices from known vendors, calls or videos in the voice or likeness of a company executive, communications from trusted partners. Finally, they use AI to deliver targeted messages at scale, even A/B testing subject lines and message copy. Because they’ve done all this research and matched the language and formatting of expected emails, calls, and text messages, their phishes bypass defenses and trick people. They look just like the vendor updates, policy changes, invoice issues, wire requests, MFA resets, HR messages, and customer service escalations that people receive every day.

The data reflects this uptick in sophistication. Zscaler’s 2025 ThreatLabz Phishing Report, based on an analysis of over 2 billion blocked phishing transactions between January and December 2024, found a shift toward more targeted, AI-driven attacks despite a 20% drop in overall phishing attempts. And Microsoft’s 2025 Digital Defense Report found that AI-powered phishing achieved a 54% click-through rate—roughly 4.5x more successful than traditional phishing campaigns. So attacks aren’t necessarily more numerous, but they are far more effective.

Why are phishing scams successful?

These scams succeed because attackers exploit people’s desire to move fast, get work done, and be helpful. Cybercriminals don’t just make the lures hyper-realistic; they also manufacture urgency. People sometimes act before thinking when they feel rushed, especially if they sense they might let someone down by not taking action. This is especially true when they trust the sender or when the request matches a real workflow. Attackers often keep the “ask” small: review a document, reset a password, approve an MFA prompt, pay an invoice, update payroll details. In multi-step scams, they start with small or easy requests and escalate from there.

Attackers also evade suspicion by blending in. They mimic the real tools people use day-to-day and the real processes they follow, then push victims onto familiar-looking login pages or file-sharing links. They can also reuse context from compromised email threads (sometimes called “thread hijacking”) so the message looks like part of an existing conversation, not a cold outreach. With the help of AI, it’s easy for them to do all of this convincingly, at scale.

What do CEOs need to know about phishing scams?

First, phishing is an operational risk, not just an IT nuisance. It drives credential theft, wire fraud, ransomware, and business interruption. It targets the functions that keep the company running—finance, payroll, IT support, procurement, and executives themselves. A single successful phishing message can bypass millions of dollars of security investment if it reaches the right person at the right moment.

Second, AI has changed the economics of phishing. Attackers can now research your organization quickly, generate convincing lures at scale, and test variations the way adtech pros optimize their campaigns. The obvious scams full of typos are being replaced by voice calls, text messages, and emails that look like routine business. This is reflected in our own data: In The art (and science!) of behavior change in human risk, we point out that nearly one in four threat-awareness campaigns our customers run focus on brand impersonations of commonly used technologies such as ChatGPT, Zoom, and Docusign. For the bad guys, these scams are cheap to make, easy to run, and hard to spot.

Third, phishing goes beyond people’s email inboxes. It now includes MFA fatigue attacks, QR codes, file-sharing lures, deepfake voice calls, and collaboration tool impersonation. CEOs should assume attackers will target real workflows, not just email. The question isn’t whether your filters catch most of it—they probably do. The question is what happens when one believable message reaches one busy or distracted employee. CEOs don’t need to understand every technical detail, but they do need to ensure the company measures human risk the same way it measures financial or operational risk.

What countermeasures work best to mitigate this threat?

Use layered controls that match how attacks actually work: good authentication and access hygiene, device updates, data lockdown, workflow hardening, and behavioral interventions.

First, access. Turn on phish-resistant MFA for high-risk roles and tighten admin privileges. Next, device updates. Keep OS software up to date and make sure laptops and phones (especially BYOD) comply with policy. Then, harden workflows: limit who can approve what in payment and account-change workflows, and establish an out-of-band process to validate things like disbursement requests (a minimal sketch of such a check follows below). And finally, prepare your people with an AI-powered human risk program that you can use both to drive behavior change, like device updates and MFA adoption, and to arm your people with precise guidance about emerging threats via same-day briefings.
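To make the out-of-band step concrete, here is a minimal sketch, in Python, of what a callback check on a payment-change request could look like. Everything here (the VERIFIED_CONTACTS table, the PaymentChangeRequest fields) is hypothetical; a real implementation would live inside your payments or ticketing system.

```python
# Hypothetical sketch: approve vendor account changes only after a human
# confirms the request via a callback to a known-good number.

from dataclasses import dataclass

# Contact numbers captured from the vendor master file at onboarding,
# never from the request itself -- that is the point of out-of-band checks.
VERIFIED_CONTACTS = {
    "acme-corp": "+1-555-0100",
}

@dataclass
class PaymentChangeRequest:
    vendor_id: str
    new_account: str
    requested_by: str
    callback_confirmed: bool = False  # set only after the manual callback

def confirm_via_callback(request: PaymentChangeRequest) -> bool:
    """True only if an approver verified the change on a known-good number."""
    known_number = VERIFIED_CONTACTS.get(request.vendor_id)
    if known_number is None:
        return False  # unknown vendor: escalate, never auto-approve
    return request.callback_confirmed

def approve(request: PaymentChangeRequest) -> str:
    return "approved" if confirm_via_callback(request) else "held for manual review"

# A convincing email asks to redirect payments; the change is held until
# an approver dials the number on file and records the confirmation.
req = PaymentChangeRequest("acme-corp", "NEW-ACCT-123", "ap.clerk@example.com")
print(approve(req))  # -> held for manual review
```

The design choice that matters is that the verification channel (the stored phone number) is independent of the channel the request arrived on, so a compromised inbox alone cannot complete the fraud.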

Where does an enterprise start with risk mitigation for phishing scams?

Start by understanding employees and assessing risk. Most organizations spread effort evenly, but attackers don’t. Build a clear picture of risk by role, access, tenure, geography, and behavior: who handles payments, who can reset credentials, who has privileged access, who sits in high-volume support queues. For example, Zscaler’s 2025 Phishing Report calls out that attackers target IT, HR, finance, and payroll with “deeper” campaigns—use that as your starting point. A simple way to think about scoring this is sketched below.
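As an illustration only, here is a toy risk-scoring sketch in Python. The factors, weights, and employee records are all made up; a real program would calibrate them against your own incident, HR, and identity data.

```python
# Hypothetical sketch: score human risk by role and access so effort
# concentrates where attackers do, rather than being spread evenly.

RISK_WEIGHTS = {
    "handles_payments": 3,
    "can_reset_credentials": 3,
    "privileged_access": 2,
    "high_volume_support_queue": 2,
    "tenure_under_1yr": 1,  # newer employees know fewer internal norms
}

def risk_score(employee: dict) -> int:
    """Sum the weights of every risk factor flagged for this employee."""
    return sum(w for factor, w in RISK_WEIGHTS.items() if employee.get(factor))

employees = [
    {"name": "AP clerk", "handles_payments": True, "tenure_under_1yr": True},
    {"name": "Helpdesk agent", "can_reset_credentials": True,
     "high_volume_support_queue": True},
    {"name": "Marketing analyst"},
]

# Prioritize controls and briefings for the highest-scoring roles first.
for e in sorted(employees, key=risk_score, reverse=True):
    print(e["name"], risk_score(e))
```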

Then pick a few of your biggest risks to fix first and design programs around them. These can combine technical controls and human risk programs. Examples are “account changes require a callback,” “only reset MFA if you requested it,” and “only upload content to our corporate-approved generative AI tools.” One way to keep such guardrails explicit is sketched below.
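One option is to encode guardrails as data that both technical controls and training content can reference, so the rules stay consistent across tooling and briefings. The rule set below is purely illustrative, not a product feature.

```python
# Hypothetical sketch: guardrails as a single auditable source of truth
# that enforcement tooling and awareness content can both draw from.

GUARDRAILS = [
    {"workflow": "vendor-account-change",
     "rule": "Require a callback to a known-good number before approval."},
    {"workflow": "mfa-reset",
     "rule": "Reset MFA only for requests the user initiated themselves."},
    {"workflow": "genai-usage",
     "rule": "Upload content only to corporate-approved generative AI tools."},
]

for g in GUARDRAILS:
    print(f"[{g['workflow']}] {g['rule']}")
```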

If your existing program feels like it isn’t working, it may be because you’re focusing on the wrong metrics. Many security awareness programs over-rotate on training completion and phishing click rates. We recommend broadening your approach to the things that actually reduce risk, from security hygiene to data-handling behaviors to susceptibility to phishing. Track outcomes: MFA adoption, OS compliance, time-to-report suspicious messages, and repeat offenders (a sketch of computing these follows below). That’s how you turn your anti-phishing program from a compliance exercise into measurable risk reduction.
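Here is an illustrative sketch of computing those outcome metrics. The field names and sample records are hypothetical stand-ins for exports from your identity provider, device management, and report-phish tooling.

```python
# Hypothetical sketch: outcome metrics instead of training-completion counts.

from datetime import datetime
from statistics import median

events = [
    {"user": "a", "mfa_enrolled": True, "os_compliant": True,
     "sim_clicks_90d": 0, "delivered_at": datetime(2025, 1, 6, 9, 2),
     "reported_at": datetime(2025, 1, 6, 9, 14)},
    {"user": "b", "mfa_enrolled": False, "os_compliant": True,
     "sim_clicks_90d": 3, "delivered_at": datetime(2025, 1, 6, 9, 2),
     "reported_at": None},
]

mfa_adoption = sum(e["mfa_enrolled"] for e in events) / len(events)
os_compliance = sum(e["os_compliant"] for e in events) / len(events)
repeat_offenders = [e["user"] for e in events if e["sim_clicks_90d"] >= 2]
report_minutes = [
    (e["reported_at"] - e["delivered_at"]).total_seconds() / 60
    for e in events if e["reported_at"]
]

print(f"MFA adoption: {mfa_adoption:.0%}")
print(f"OS compliance: {os_compliance:.0%}")
print(f"Median time-to-report (min): {median(report_minutes):.0f}")
print(f"Repeat offenders: {repeat_offenders}")
```

Trending these numbers by team and quarter, rather than reporting a single click rate, is what makes the program comparable to other operational risk metrics.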

About the Author

Nicole Jiang is the co-founder and CEO of Fable Security, the human risk platform that shapes employee behavior in real time. She was previously a founding team member and Head of Product at Abnormal Security, where she scaled the company from pre-revenue to a $5B valuation. Earlier in her career, Nicole held product and engineering roles at Mixpanel, Microsoft, Palantir Technologies, and Pixlee, building products across AI, SaaS, and security. She holds an engineering degree from the University of Waterloo.

