In early 2024, a finance employee at a multinational firm wired $25 million after a video call with someone they believed was their CFO. It wasn't. The entire call — every face, every voice — was a deepfake fabricated by threat actors who'd spent weeks building a detailed pretext. That single attack encapsulates why pretexting attack examples deserve your full attention: they don't exploit software vulnerabilities. They exploit human trust.
Pretexting is the backbone of the most damaging social engineering attacks happening right now. The Verizon 2023 Data Breach Investigations Report found that pretexting incidents nearly doubled year over year, now accounting for over 50% of all social engineering incidents. If your organization isn't training against these scenarios, you're leaving your biggest attack surface — your people — completely undefended.
What Makes Pretexting Different from Phishing?
Most people lump pretexting in with phishing. They're related, but they're not the same thing. Phishing casts a wide net — a mass email with a malicious link hoping someone clicks. Pretexting is a crafted narrative. The attacker creates a believable scenario and a fake identity to manipulate a specific target into handing over information, access, or money.
Think of phishing as the delivery mechanism and pretexting as the script. A phishing email that claims to be from your CEO asking for W-2 data? That's pretexting delivered via phishing. A phone call from someone claiming to be IT support who needs your password to "fix a server issue"? That's pretexting delivered via vishing (voice phishing).
The key ingredient is the pretext itself — a fabricated story designed to establish trust and urgency. Threat actors research their targets on LinkedIn, company websites, and social media to make these stories airtight.
5 Real Pretexting Attack Examples That Cost Organizations Millions
These aren't hypotheticals. Every example below is drawn from documented incidents. Study them, because your employees will face variations of each one.
1. The CEO Fraud Wire Transfer (Ubiquiti Networks, 2015)
Ubiquiti Networks disclosed in an SEC filing that attackers impersonated company executives and targeted the finance department. Using spoofed emails and a carefully constructed pretext of an urgent, confidential acquisition, the attackers convinced employees to wire $46.7 million to overseas accounts. The company recovered about $15 million. The pretext worked because it leveraged authority, urgency, and secrecy — three pillars of every effective social engineering attack.
2. The Tax Season W-2 Scam (Snapchat, 2016)
An attacker emailed Snapchat's payroll department posing as the CEO and requested W-2 tax data for current and former employees. The payroll team complied. The pretext was simple: it was tax season, the request seemed routine, and it appeared to come from the top. Personal data — Social Security numbers, salary information — for an undisclosed number of employees was exposed. Snapchat disclosed the breach publicly and offered affected employees identity theft protection.
3. The Vendor Impersonation Attack (Toyota Boshoku, 2019)
Toyota Boshoku Corporation, a Toyota subsidiary, lost $37 million when attackers impersonated a business partner and convinced a finance executive to change wire transfer payment information. The pretext involved fake but convincing email correspondence that mimicked an existing vendor relationship. By the time the company realized the bank account details had been swapped, the money was gone. This is textbook business email compromise (BEC), powered entirely by pretexting.
4. The IT Helpdesk Callback (MGM Resorts, 2023)
In September 2023, the Scattered Spider threat group brought MGM Resorts to a standstill. Reports indicate the attackers called the IT help desk, impersonated an employee they'd found on LinkedIn, and convinced the helpdesk to reset credentials. That single social engineering call gave them a foothold to deploy ransomware across MGM's systems. The attack cost MGM an estimated $100 million according to their SEC filing. The pretext? "Hi, I'm locked out of my account."
5. The Deepfake Video Call (Arup, 2024)
In early 2024, engineering firm Arup confirmed it was the victim of the deepfake video call attack referenced at the top of this post. An employee in the Hong Kong office joined a video conference where every other participant — including someone who appeared to be the CFO — was an AI-generated deepfake. The pretext involved a supposedly secret transaction requiring immediate wire transfers. The employee sent $25 million across 15 transactions before the fraud was discovered. This is the terrifying next evolution of the pretexting attack.
The Anatomy of a Pretext: How Threat Actors Build Their Story
Understanding the mechanics helps you spot these attacks. Every pretext follows a predictable structure.
Step 1: Reconnaissance
Attackers mine publicly available information. LinkedIn profiles reveal job titles, reporting structures, and recent hires. Company press releases announce mergers, new vendors, and leadership changes. Social media posts reveal travel schedules, conferences, and personal details. This is called open-source intelligence (OSINT), and it takes minutes — not days.
Step 2: Identity Fabrication
The attacker selects a persona. Common choices include a C-suite executive, an IT administrator, a vendor, a bank representative, or a new employee. They may register a lookalike email domain (your-company.com vs. yourcompany.com) or spoof the email header entirely.
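Lookalike domains can often be caught mechanically. The sketch below — a simple illustration, not a production filter — flags sender domains that are within a small edit distance of a trusted domain. The domain names and the threshold of 2 are illustrative assumptions; real defenses would also check homoglyphs and subdomain tricks.

```python
def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def is_lookalike(sender_domain: str, trusted_domain: str, threshold: int = 2) -> bool:
    """Flag domains that are close to, but not identical to, a trusted domain."""
    sender_domain = sender_domain.lower().rstrip(".")
    trusted_domain = trusted_domain.lower().rstrip(".")
    if sender_domain == trusted_domain:
        return False  # exact match is the legitimate domain itself
    return edit_distance(sender_domain, trusted_domain) <= threshold

print(is_lookalike("your-company.com", "yourcompany.com"))  # True
print(is_lookalike("yourcompany.com", "yourcompany.com"))   # False
```

A one-character insertion like the hyphen above is exactly the kind of change the human eye skims past but an edit-distance check catches.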
Step 3: The Hook
The initial contact establishes the scenario. It's always tied to something plausible: a compliance audit, an urgent payment, a system migration, a security incident. The story creates a reason for the target to act and a reason not to verify through normal channels.
Step 4: Exploitation
Once trust is established, the attacker makes their request. It might be credential theft ("I need your login to verify your account during the migration"), financial fraud ("wire this payment before end of business"), or data exfiltration ("send me the employee roster for the audit").
Step 5: Exit
The attacker disappears. Emails go dead. Phone numbers disconnect. By the time the target realizes something is wrong, the damage is done.
Why Traditional Security Tools Can't Stop Pretexting
Here's the uncomfortable truth: your firewall, your endpoint detection, your email gateway — none of them can reliably stop a well-crafted pretext. These attacks don't rely on malware or exploit code. They rely on a human being making a decision based on a convincing lie.
Email security tools catch some BEC attempts, but sophisticated attackers use compromised legitimate accounts or domains that pass SPF, DKIM, and DMARC checks. Phone-based pretexting bypasses email security entirely. And deepfake video calls? We're just beginning to grapple with that threat.
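To make the DMARC point concrete: a domain can publish a DMARC record and still do nothing about spoofed mail if its policy is `p=none` (monitor-only). This minimal sketch parses a DMARC TXT record string and reports whether the policy actually enforces anything; the record strings are hypothetical examples, and a real check would fetch the record from DNS.

```python
def parse_dmarc(record: str) -> dict:
    """Parse a DMARC TXT record like 'v=DMARC1; p=reject; rua=mailto:...'."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip().lower()] = value.strip()
    return tags

def dmarc_enforced(record: str) -> bool:
    # Only 'quarantine' and 'reject' policies act on authentication failures;
    # 'p=none' is monitor-only and does not block spoofed mail.
    return parse_dmarc(record).get("p") in ("quarantine", "reject")

print(dmarc_enforced("v=DMARC1; p=none; rua=mailto:reports@example.com"))    # False
print(dmarc_enforced("v=DMARC1; p=reject; rua=mailto:reports@example.com"))  # True
```

Even `p=reject` only helps against spoofed domains — it does nothing when the attacker sends from a compromised legitimate mailbox or a lookalike domain they control, which is why DMARC alone can't stop pretexting.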
This is exactly why security awareness isn't optional. It's your primary defense against pretexting. Your people need to recognize these patterns before they act on them.
How to Defend Your Organization Against Pretexting Attacks
Defense against pretexting requires layered controls — both technical and human. Here's what actually works in practice.
Train Employees with Realistic Scenarios
Generic "don't click suspicious links" training doesn't prepare anyone for a phone call from someone who sounds like their boss. You need scenario-based training that walks employees through real pretexting attack examples — like the ones above — and teaches them to recognize manipulation tactics. Our cybersecurity awareness training program covers pretexting, vishing, and BEC scenarios with practical, role-specific exercises.
Run Phishing Simulations That Include Pretexting
A phishing simulation that only tests "click or don't click" misses the point. Your simulations should include pretexting elements: emails impersonating executives, fake vendor payment requests, and IT impersonation scenarios. This builds real-world pattern recognition. We built our phishing awareness training for organizations specifically to include these multi-layered social engineering tests.
Implement Verification Procedures for Sensitive Requests
Every organization needs a policy that requires out-of-band verification for wire transfers, credential resets, and data requests. If the CFO emails asking for a $50,000 wire, the finance team picks up the phone and calls the CFO's known number — not the number in the email. This single control would have prevented the Ubiquiti, Toyota Boshoku, and Arup attacks.
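The out-of-band rule is simple enough to encode directly in a payments or ticketing workflow. The sketch below is one possible shape of that policy gate, under assumed names (`Request`, `SENSITIVE_ACTIONS`, `verified_out_of_band`) — it simply refuses to let a sensitive request proceed until someone records that it was verified through a known-good channel.

```python
from dataclasses import dataclass

# Actions that must never proceed on the strength of an email or call alone.
SENSITIVE_ACTIONS = {"wire_transfer", "credential_reset", "bulk_data_export"}

@dataclass
class Request:
    action: str
    amount_usd: float = 0.0
    verified_out_of_band: bool = False  # callback to a known number on file

def may_proceed(req: Request) -> bool:
    """Policy gate: sensitive actions require out-of-band verification."""
    if req.action in SENSITIVE_ACTIONS:
        return req.verified_out_of_band
    return True  # routine actions are unaffected

print(may_proceed(Request("wire_transfer", amount_usd=50_000)))  # False
print(may_proceed(Request("wire_transfer", amount_usd=50_000,
                          verified_out_of_band=True)))           # True
```

The point of putting the rule in the workflow rather than in a training slide is that urgency can't override it: the $50,000 wire simply cannot be released until the callback happens.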
Adopt Multi-Factor Authentication Everywhere
Multi-factor authentication (MFA) won't stop every pretexting attack, but it dramatically limits what an attacker can do with stolen credentials. Even if a helpdesk agent is tricked into resetting a password, MFA adds a second barrier. CISA recommends phishing-resistant MFA — hardware security keys or FIDO2 — as the strongest option. See CISA's MFA guidance for implementation details.
Move Toward Zero Trust Architecture
A zero trust model assumes every request — internal or external — could be malicious. This means continuous verification, least-privilege access, and network segmentation. Even if a pretexting attack gives a threat actor initial access, zero trust limits lateral movement. NIST Special Publication 800-207 provides the framework: NIST SP 800-207 - Zero Trust Architecture.
Limit Public Exposure of Employee Information
The less information attackers can gather during reconnaissance, the harder it is to build a convincing pretext. Review what your company publishes on its website. Coach employees on LinkedIn privacy settings. Consider limiting the visibility of your organizational chart and internal contact directories.
What Is a Pretexting Attack and How Does It Work?
A pretexting attack is a form of social engineering where a threat actor creates a fabricated scenario — a pretext — to trick a target into divulging sensitive information, transferring funds, or granting system access. Unlike opportunistic phishing, pretexting involves targeted research and a tailored narrative. The attacker impersonates a trusted figure (executive, vendor, IT staff) and uses urgency, authority, or familiarity to bypass the target's defenses. According to the Verizon DBIR, pretexting is now the leading social engineering tactic in confirmed data breach incidents.
The $4.88M Lesson Most Organizations Learn Too Late
IBM's Cost of a Data Breach Report pegged the global average cost of a data breach at $4.45 million in 2023 and $4.88 million in 2024 — a record high. Social engineering attacks, including pretexting, consistently rank among the costliest initial attack vectors because they bypass technical controls entirely and often go undetected for weeks.
The organizations that avoid these losses aren't the ones with the biggest security budgets. They're the ones that treat their employees as a security layer — trained, tested, and empowered to question unusual requests regardless of who's supposedly making them.
Your Employees Are Either Your Strongest Defense or Your Weakest Link
Every one of the pretexting attack examples above succeeded because a human being trusted the wrong person. Not because they were careless or unintelligent — because they were untrained. They didn't know what a pretext looks like, how urgency is weaponized, or when to pause and verify.
That's a solvable problem. Consistent, scenario-based training turns potential victims into human firewalls. Start with our cybersecurity awareness training to build a baseline, then layer in ongoing phishing simulations that test real pretexting scenarios.
The threat actors behind these attacks are investing hours into research and preparation. The question is whether your organization is investing even a fraction of that time into preparing your people to recognize and resist them.