For years, the advice was simple: look for spelling mistakes and awkward phrasing in suspicious emails. That advice is now dangerously outdated. AI can write phishing emails indistinguishable from legitimate corporate communications — and it can personalize them at scale.
What AI-powered phishing looks like
Modern spear-phishing attacks use publicly available data (your LinkedIn, your company's website, press releases) to craft emails that reference real projects, real colleagues, and real context. A phishing email today might address you by name, mention your manager, reference a real meeting, and ask you to click a link with entirely plausible framing. Security researchers have documented cases where GPT-4 was used to generate thousands of unique, personalized phishing emails per hour.
Voice cloning: the phone call you can't trust
In 2024, a finance employee at a multinational company was tricked into transferring $25 million after a video call with what appeared to be the company's CFO but was in fact a deepfake generated from publicly available footage. Audio alone is even easier to fake: voice cloning tools can now reproduce someone's voice from as little as 30 seconds of audio found on YouTube or podcasts.
The new rules of skepticism
Since you can no longer rely on linguistic quality as a signal, shift your attention to: urgency and pressure (real organizations rarely demand immediate action), the channel (did this arrive via an unexpected path?), and the request itself (would my company actually ask for this by email?). When in doubt, verify via a separate, known-good channel — call the person directly using a number you already have.
Technical defenses
Organizations should implement DMARC, DKIM, and SPF email authentication to make domain spoofing harder. Hardware security keys (like YubiKeys) for MFA are phishing-resistant in a way that SMS codes and authenticator apps are not: the key cryptographically binds each credential to the legitimate site's origin, so even if you click a phishing link and try to sign in, the key simply will not authenticate on a look-alike domain.
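To make the DMARC piece concrete, a domain publishes its policy as a DNS TXT record of semicolon-separated tag=value pairs. The sketch below parses such a record; the record string and the report address are hypothetical examples, not any real domain's policy.

```python
# Minimal sketch: splitting a DMARC TXT record into its tag/value pairs.
# The example record below is hypothetical, not a real domain's policy.

def parse_dmarc(record: str) -> dict:
    """Parse a DMARC record like 'v=DMARC1; p=reject; ...' into a dict."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

record = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com; pct=100"
policy = parse_dmarc(record)
print(policy["p"])  # -> reject: receivers should refuse mail that fails authentication
```

The `p` tag is the important one for anti-phishing: `p=none` only monitors, while `p=quarantine` or `p=reject` tells receiving servers to actually act on spoofed mail that fails SPF/DKIM alignment.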