In March 2022, the Lapsus$ group didn't exploit some zero-day vulnerability to breach Okta. They bribed an employee. They socially engineered their way into a third-party support provider and pivoted from there. No malware. No exploit kit. Just a human being making a decision. That single incident put up to 366 Okta customers at risk and reminded every security team on the planet that social engineering examples aren't hypothetical classroom scenarios — they're the primary attack vector behind the majority of breaches happening right now.
According to the 2021 Verizon Data Breach Investigations Report, 85% of breaches involved a human element. Social engineering was the top pattern in those breaches. If you're searching for real-world social engineering examples to understand the threat or build a case for training your team, you're in the right place. I'm going to walk through seven attacks I've personally encountered or analyzed — and show you exactly what made each one work.
What Is Social Engineering? (The 30-Second Version)
Social engineering is the art of manipulating people into giving up confidential information, access, or money. Threat actors exploit trust, urgency, fear, and authority instead of technical vulnerabilities. It's cheaper, faster, and more reliable than writing code.
The reason it works is simple: humans are wired to be helpful, to respect authority, and to act fast under pressure. Every social engineering example on this list exploits at least one of those instincts.
Example 1: The CEO Wire Transfer That Cost $2.3 Million
I consulted with a mid-size manufacturing firm that lost $2.3 million in a single wire transfer. The attack was textbook Business Email Compromise (BEC). The CFO received an email — apparently from the CEO — requesting an urgent wire to close an acquisition. The email address was spoofed with a single-character domain swap that nobody caught.
The FBI's 2021 Internet Crime Report logged nearly 20,000 BEC complaints totaling adjusted losses of almost $2.4 billion. That makes BEC the single most financially devastating cybercrime category the FBI tracks. This isn't rare. It's routine.
Why It Worked
The threat actor researched the company for weeks. They knew the CEO was traveling. They knew an acquisition was in progress. The email referenced real deal terms. The CFO had no out-of-band verification process — no phone call, no Slack message, nothing. Authority and urgency did the rest.
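Single-character domain swaps like the one above are cheap to catch in software: compare the sender's domain against your known-trusted domains by edit distance and flag anything that is close but not exact. A minimal sketch in Python (the domain names are made up for illustration; a real mail gateway would also handle homoglyphs and subdomains):

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def flag_lookalike(sender_domain: str, trusted_domains: list[str]) -> bool:
    """Flag domains that are close to, but not exactly, a trusted domain."""
    for trusted in trusted_domains:
        if sender_domain == trusted:
            return False  # exact match: legitimate sender
        if edit_distance(sender_domain, trusted) <= 2:
            return True   # near miss: likely a spoofed lookalike
    return False

# "acrne-corp.com" is two edits away from the real "acme-corp.com"
print(flag_lookalike("acrne-corp.com", ["acme-corp.com"]))  # True
```

Even a crude check like this would have flagged the CFO's email for review — which is the point: the control fires precisely when human attention fails.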
Example 2: The Phishing Email That Opened the Door to Ransomware
A healthcare organization I worked with got hit with ransomware that encrypted 14 servers and took their EHR system offline for nine days. The entry point? A phishing email disguised as a Microsoft 365 password expiration notice. One employee clicked, entered credentials on a convincing fake login page, and the attacker had a foothold.
Within 72 hours, the threat actor moved laterally, escalated privileges, and deployed ransomware. The organization paid a six-figure ransom because their backups were also compromised.
Why It Worked
The phishing page was a near-perfect clone of the real Microsoft login. The employee had no multi-factor authentication enabled. And the organization had never run a phishing simulation — nobody had ever tested whether employees could spot credential theft attempts. This is exactly why I recommend organizations start with phishing awareness training designed for teams. You can't defend against what you've never practiced recognizing.
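The core lesson of phishing training — trust the real hostname, not whether a familiar name appears somewhere in the URL — can be expressed in a few lines. A sketch of that check (the "evil.biz" domain is a made-up attacker domain, and a real allowlist would be maintained per identity provider):

```python
from urllib.parse import urlparse

# Hosts where it is legitimate to enter Microsoft 365 credentials
# (illustrative allowlist; maintain yours per identity provider).
LEGIT_LOGIN_HOSTS = {"login.microsoftonline.com", "login.live.com"}

def is_trusted_login_url(url: str) -> bool:
    """Accept only exact matches against known login hosts.

    Substring checks are unsafe: 'login.microsoftonline.com.evil.biz'
    contains the real host name but belongs to the attacker.
    """
    host = (urlparse(url).hostname or "").lower()
    return host in LEGIT_LOGIN_HOSTS

print(is_trusted_login_url("https://login.microsoftonline.com/common/oauth2"))  # True
print(is_trusted_login_url("https://login.microsoftonline.com.evil.biz/auth"))  # False
```

This is the same evaluation a trained employee performs by eye, which is why simulations matter: the skill only becomes reliable with practice.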
Example 3: The IT Help Desk Pretexting Call
This one still frustrates me. A threat actor called a company's IT help desk, claimed to be a remote employee locked out of their account, and talked a technician into resetting the password and disabling MFA. The caller knew the employee's full name, employee ID, and manager's name — all scraped from LinkedIn and a company newsletter posted publicly online.
This is pretexting — building a fabricated scenario to extract information or access. The Lapsus$ group used similar techniques repeatedly in their 2021-2022 campaign against major tech companies.
Why It Worked
The help desk had no identity verification protocol beyond "security questions" that were essentially public information. The technician wanted to be helpful. There was no callback procedure, no ticket escalation requirement, and no flag for MFA removal requests. Human desire to help, combined with zero process controls, handed over the keys.
Example 4: The USB Drop in the Parking Lot
This social engineering example feels like something out of a movie, but I've seen it work in penetration tests more times than I'd like to admit. During an authorized red team engagement for a financial services client, we dropped 20 branded USB drives in the parking lot and break room. Eight were plugged into corporate machines within 48 hours.
Each drive contained a payload that phoned home to our command-and-control server. In a real attack, that's initial access — game over for the perimeter.
Why It Worked
Curiosity. That's it. The drives were labeled "Q4 Salary Adjustments" with the company logo. People wanted to see if their name was on the list. No endpoint protection flagged the payload because it was a legitimate-looking script. Basic security awareness training would likely have stopped most of those plug-ins.
Example 5: The Vendor Impersonation Invoice Scam
A construction company I advised paid $187,000 to a fraudulent bank account. The attacker compromised a subcontractor's email account, monitored invoice threads for weeks, and then sent a message saying, "We've changed our banking details. Please update your records and send the next payment here."
The accounts payable team complied. They had no reason to question it — the email came from a known, trusted contact using their real email address. The money was gone within hours.
Why It Worked
The attacker didn't need to spoof anything. They had real access to the vendor's mailbox. The AP team had no policy requiring verbal confirmation for banking changes. This is a trust exploitation attack, and it's one of the most common social engineering examples in the BEC category.
Example 6: The LinkedIn Reconnaissance-to-Spear Phish Pipeline
I analyzed an attack against a defense contractor where the threat actor spent three months building a fake LinkedIn profile as a "recruiter." They connected with engineers, gathered information about projects, technologies, and internal tools, and then sent targeted spear-phishing emails referencing specific programs the engineers worked on.
The emails contained malicious PDFs disguised as job descriptions. Two engineers opened them. The attacker gained access to an internal development environment.
Why It Worked
Months of social media reconnaissance made the spear phish incredibly convincing. The engineers had no training on how threat actors use professional networks for targeting. The malicious documents exploited a known vulnerability that should have been patched. Multiple failures — but the social engineering was the entry point for all of them.
Example 7: The Fake Multi-Factor Authentication Push
This technique exploded in 2021 and into 2022. The attacker already has stolen credentials (often purchased on dark web markets). They attempt to log in and trigger an MFA push notification to the victim's phone. Then they do it again. And again. At 2 AM. Eventually, the exhausted or confused victim hits "Approve" just to make it stop.
This is MFA fatigue, and Lapsus$ reportedly used this exact method in multiple high-profile breaches. Multi-factor authentication is critical, but it's not a silver bullet when the human element is the weakest link.
Why It Worked
The victim didn't understand what the MFA prompt meant. They had no training on what to do when receiving unexpected authentication requests. A simple cybersecurity awareness training program would have taught them: unexpected MFA prompts mean someone has your password. Don't approve — report it and change your credentials immediately.
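Training is the fix on the human side, but defenders can also catch MFA fatigue server-side by counting push requests per account in a short sliding window. A minimal sketch with illustrative thresholds (these numbers are assumptions, not vendor recommendations):

```python
from collections import defaultdict, deque

class MfaPushMonitor:
    """Flag accounts receiving an unusual burst of MFA push requests."""

    def __init__(self, max_pushes: int = 3, window_seconds: int = 300):
        self.max_pushes = max_pushes
        self.window = window_seconds
        self.events = defaultdict(deque)  # user -> timestamps of recent pushes

    def record_push(self, user: str, timestamp: float) -> bool:
        """Return True if this push should trigger a fatigue-attack alert."""
        q = self.events[user]
        q.append(timestamp)
        # Drop events that have aged out of the sliding window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_pushes

monitor = MfaPushMonitor()
# Four pushes in two minutes at 2 AM — the fourth crosses the threshold.
alerts = [monitor.record_push("cfo@example.com", t) for t in (0, 30, 60, 120)]
print(alerts)  # [False, False, False, True]
```

An alert like this should lock the account pending a help desk callback, not just log a line — otherwise the victim is still one tired tap away from approving the attacker.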
The Common Thread Across All Social Engineering Examples
Every single attack on this list succeeded because of a human decision. Not a firewall misconfiguration. Not an unpatched server (though that sometimes made things worse). A person trusted the wrong email, approved the wrong request, or plugged in the wrong device.
This is why zero trust as a philosophy matters — not just for network architecture, but for human processes. Verify everything. Trust nothing by default. Build verification steps into every workflow that involves access, money, or sensitive data.
How Do You Defend Against Social Engineering?
Here's the practical framework I use with every organization I advise:
- Train continuously, not annually. One-and-done training doesn't change behavior. Monthly reinforcement does. Start with structured cybersecurity awareness training that covers the full spectrum of social engineering tactics.
- Run phishing simulations. You need to baseline your organization's susceptibility and measure improvement over time. Phishing simulation platforms built for organizations let you do exactly that without the guesswork.
- Implement out-of-band verification. Any request involving money transfers, credential resets, MFA changes, or access grants must be confirmed through a separate communication channel. Email alone is never sufficient.
- Limit public information exposure. Audit what your company and employees share on LinkedIn, social media, and public websites. Threat actors mine this data for pretexting and spear phishing.
- Deploy phishing-resistant MFA. Push notifications are vulnerable to fatigue attacks. FIDO2 security keys and number-matching MFA are significantly harder to exploit. CISA's MFA guidance is a solid starting point.
- Build a reporting culture. Employees who report suspicious emails or calls should be thanked, not interrogated. Make reporting easy, fast, and rewarded. The faster you know about a social engineering attempt, the faster you can contain it.
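One cheap technical signal that complements the process controls above: many BEC emails set a Reply-To address whose domain differs from the displayed From address, so replies go to the attacker. A sketch of that check using Python's standard email library (the addresses are made up; treat this as one signal in a triage pipeline, not a verdict):

```python
from email import message_from_string
from email.utils import parseaddr

def replyto_mismatch(raw_email: str) -> bool:
    """Flag messages whose Reply-To domain differs from the From domain."""
    msg = message_from_string(raw_email)
    from_domain = parseaddr(msg.get("From", ""))[1].rpartition("@")[2].lower()
    reply_domain = parseaddr(msg.get("Reply-To", ""))[1].rpartition("@")[2].lower()
    # Only flag when a Reply-To is present and points somewhere else.
    return bool(reply_domain) and reply_domain != from_domain

sample = (
    "From: CEO <ceo@acme-corp.com>\n"
    "Reply-To: ceo@mail-acme.biz\n"
    "Subject: Urgent wire\n\n"
    "Please process today."
)
print(replyto_mismatch(sample))  # True
```

Note the limits: in the vendor-compromise example above, the attacker sent from the real mailbox, so no header check would fire. That is exactly why out-of-band verification for banking changes has to be policy, not optional.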
The $4.24 Million Reason to Act Now
IBM's 2021 Cost of a Data Breach Report put the average breach cost at $4.24 million — the highest in 17 years of the study. Social engineering and phishing were among the top initial attack vectors. For small and mid-size businesses, a single successful attack can be existential.
I've watched companies recover from technical failures. I've rarely seen them recover gracefully from social engineering attacks that drained bank accounts or encrypted critical systems. The reputational damage alone takes years to repair.
Your Employees Are Either Your Strongest Defense or Your Biggest Vulnerability
Every social engineering example I've shared has a human at the center. That's not a weakness you can patch with software. It's a risk you manage through training, process, and culture.
The threat actors behind these attacks are patient, creative, and persistent. They study your organization. They study your people. They craft scenarios designed to bypass rational thinking and trigger emotional responses.
Your defense has to be equally deliberate. Start by getting your team into security awareness training that covers real-world social engineering examples — not abstract theory. Then layer in ongoing phishing simulations that keep the threat top of mind.
Because the next social engineering attack against your organization isn't a matter of if. It's a matter of when — and whether your people are ready for it.