In September 2023, a threat group known as Scattered Spider phoned MGM Resorts' IT help desk, impersonated an employee they had found on LinkedIn, and convinced a technician to reset credentials. The result: an estimated $100 million in losses, a ransomware lockout across casino floors and hotel systems, and weeks of operational chaos. The attackers didn't exploit a zero-day vulnerability. They made a phone call. That's social engineering, and these social engineering examples are exactly what your organization needs to study before the same playbook is used against you.

I've spent years watching these attacks evolve. The technology gets fancier, but the core mechanism never changes: a human being gets manipulated into doing something they shouldn't. This post breaks down seven real-world social engineering examples, explains the psychology behind each, and gives you specific steps to harden your people against them.

What Is Social Engineering? The 10-Second Answer

Social engineering is the psychological manipulation of people into performing actions or divulging confidential information. Unlike technical exploits, it targets the human layer — trust, urgency, authority, and fear. According to the 2024 Verizon Data Breach Investigations Report, 68% of breaches involved a human element, including social engineering and errors.

It's not just phishing emails. It's phone calls, text messages, in-person impersonation, deepfake audio, and even manipulated QR codes. Threat actors pick whatever channel gives them the best chance of bypassing your defenses.

Example 1: The MGM Help Desk Vishing Attack

Let's start with the one I opened with because it's the most instructive. Scattered Spider researched MGM employees on LinkedIn, identified a target, then called the IT help desk pretending to be that person. They convinced the help desk technician to reset multi-factor authentication credentials.

Once inside, they deployed ALPHV/BlackCat ransomware across MGM's infrastructure. Hotel key cards stopped working. Slot machines went dark. Reservation systems crashed.

Why It Worked

The attackers exploited authority bias and urgency. They sounded like a legitimate employee who needed immediate help. The help desk technician followed instinct — help the person on the phone — instead of following a rigid verification protocol.

Your Takeaway

Implement strict identity verification for any password or MFA reset request. No exceptions. A callback to a number already on file, a manager confirmation, or an in-person verification step would have stopped this cold. If your help desk can reset MFA based on a phone call alone, you have a critical vulnerability right now.
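That verification protocol can be made concrete in code. The sketch below is a minimal, hypothetical help-desk gate for MFA resets; the directory structure, field names, and the `may_reset_mfa` helper are illustrative assumptions, not a real ticketing API.

```python
# Sketch of a help-desk gate for MFA/password resets (illustrative only).
from dataclasses import dataclass

@dataclass
class ResetRequest:
    claimed_employee: str
    callback_number: str      # number the caller gave us
    manager_confirmed: bool   # manager verified via a separate channel?

# Numbers come from the HR system of record, never from the caller.
DIRECTORY_ON_FILE = {
    "j.doe": "+1-555-0100",
}

def may_reset_mfa(req: ResetRequest) -> bool:
    """Allow a reset only if the callback number is already on file
    AND a manager confirmed out-of-band. No exceptions path exists."""
    on_file = DIRECTORY_ON_FILE.get(req.claimed_employee)
    if on_file is None:
        return False  # unknown identity: escalate, don't reset
    # A mismatched number means the technician must call the
    # on-file number before touching any credentials.
    return req.callback_number == on_file and req.manager_confirmed
```

The key design choice is that the caller never supplies the trust anchor: the phone number and the manager confirmation both come from channels the attacker doesn't control.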

Example 2: The $25 Million Deepfake CFO Video Call

In early 2024, a finance employee in the Hong Kong office of Arup, a British multinational engineering firm, joined a video conference with what appeared to be the company's CFO and several other colleagues. Every other person on the call was a deepfake: AI-generated video and audio replicas. The employee was instructed to transfer approximately $25 million across multiple transactions.

By the time anyone realized what happened, the money was gone.

Why It Worked

This attack weaponized social proof and authority. The employee saw familiar faces and heard familiar voices. The presence of multiple "colleagues" on the call eliminated doubt. It felt routine.

Your Takeaway

Any wire transfer above a defined threshold needs out-of-band confirmation — a separate phone call to a known number, not a number provided in the same communication channel. Train your finance team specifically on deepfake risks. This isn't science fiction anymore; it's a documented attack vector.

Example 3: Business Email Compromise at Ubiquiti

Back in 2015, Ubiquiti Networks disclosed that attackers used business email compromise (BEC) to steal $46.7 million. Threat actors impersonated executives via spoofed email and instructed finance employees to wire funds to overseas accounts. This remains one of the clearest social engineering examples in corporate history.

BEC hasn't slowed down since. The FBI's 2023 Internet Crime Report recorded over $2.9 billion in BEC losses that year alone, making it one of the costliest cybercrime categories tracked.

Why It Worked

Authority and urgency. The emails appeared to come from C-suite executives and demanded quick action. Employees didn't want to question the boss.

Your Takeaway

Establish a dual-approval process for all wire transfers. Train employees that urgency is a red flag, not a reason to skip verification. If someone says "do this now and don't tell anyone," that's almost certainly an attack.

Example 4: The Twitter Internal Tool Takeover

In July 2020, a 17-year-old and his accomplices compromised Twitter's internal admin tools through a phone spear phishing campaign. Using vishing (voice phishing), they tricked employees into entering credentials on a fake internal VPN page. With those credentials, they hijacked verified accounts belonging to Barack Obama, Elon Musk, Apple, and others to run a Bitcoin scam.

The attackers collected over $100,000 in Bitcoin within hours.

Why It Worked

The attackers created a convincing pretext: they posed as IT support needing employees to log into a "new VPN portal." During the pandemic, remote work made this plausible. Employees were already used to new tools and changing processes.

Your Takeaway

Zero trust principles apply to people, not just networks. Never trust a request to enter credentials based solely on a phone call. Deploy phishing-resistant MFA like hardware security keys — they would have rendered those stolen credentials useless. And run regular phishing awareness training for your organization so employees recognize when they're being directed to credential-harvesting pages.
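Why do hardware keys render phished credentials useless? Because the authenticator's response is cryptographically bound to the origin the browser actually visited. The toy model below is NOT real WebAuthn (which uses public-key signatures over client data); HMAC stands in for the origin binding purely to illustrate the idea.

```python
# Toy model of origin-bound credentials (not actual FIDO2/WebAuthn).
import hashlib
import hmac
import os

device_secret = os.urandom(32)  # stays inside the hardware key

def sign(origin: str, challenge: bytes) -> bytes:
    # Real WebAuthn signs client data that includes the origin;
    # HMAC over (origin + challenge) models that binding here.
    return hmac.new(device_secret, origin.encode() + challenge,
                    hashlib.sha256).digest()

real_origin = "https://vpn.example.com"
fake_origin = "https://vpn-example.net"  # attacker's look-alike portal
challenge = os.urandom(16)

# Victim is lured to the fake site; the key signs *that* origin.
phished_response = sign(fake_origin, challenge)
# The real server recomputes over its own origin: no match, login fails.
assert not hmac.compare_digest(phished_response,
                               sign(real_origin, challenge))
```

A password typed into the fake VPN page works anywhere; an origin-bound response captured on a look-alike site verifies nowhere but the look-alike site.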

Example 5: QR Code Phishing (Quishing) at Parking Meters

Starting in late 2021 and continuing through 2024, cities across the U.S. — including Austin, Houston, and San Antonio — reported fraudulent QR codes placed on parking meters. Drivers scanned what they thought was a legitimate payment code and were directed to a convincing fake payment site that harvested credit card numbers.

This is social engineering in the physical world. No email required.

Why It Worked

People trust physical infrastructure. A sticker on a parking meter feels official. The urgency of avoiding a parking ticket eliminates critical thinking. The attack requires zero technical sophistication — just a printer and some adhesive.

Your Takeaway

Teach your employees that social engineering isn't limited to email. QR codes, USB drops, and physical tailgating are all attack vectors. A comprehensive cybersecurity awareness training program should cover the full spectrum, not just inbox threats.

Example 6: The Okta Customer Support Breach

In October 2023, Okta disclosed that a threat actor used a stolen service account credential to access its customer support management system. Here's the social engineering angle: the credential had been saved into a personal Google account signed in on an Okta employee's work laptop, and the compromised support system held customer-uploaded HAR files containing session tokens the attackers could replay.

Downstream victims included Cloudflare, BeyondTrust, and 1Password — all of which detected and contained the intrusion, but the chain started with a human misstep.

Why It Worked

Blending personal and work accounts on the same device created the opening. The threat actor didn't need to send a phishing email to Okta directly. They targeted the softer personal account and pivoted.

Your Takeaway

Enforce device hygiene policies. Personal accounts on work devices are a social engineering entry point. Implement conditional access policies that limit what can be accessed and from where. Zero trust isn't just a buzzword — it's a survival strategy.
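A conditional access policy boils down to evaluating request context, not just a valid token. The sketch below is a hypothetical policy check; the attribute names and the two-tier sensitivity model are illustrative assumptions, not any vendor's actual policy engine.

```python
# Sketch of a conditional-access decision (illustrative attributes).
from dataclasses import dataclass

@dataclass
class AccessContext:
    device_managed: bool           # corporate-enrolled, policy-compliant
    personal_account_signed_in: bool  # personal profile on same device
    resource_sensitivity: str      # "low" or "high"

def access_granted(ctx: AccessContext) -> bool:
    """High-sensitivity systems require a managed device with no
    personal accounts signed in alongside the work session."""
    if ctx.resource_sensitivity == "high":
        return ctx.device_managed and not ctx.personal_account_signed_in
    return True
```

Under a policy like this, a session token stolen via a personal account on a work laptop fails the posture check even though the token itself is valid.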

Example 7: Ransomware via Fake IT Support Calls

Throughout 2024 and into 2025, CISA has warned about threat actors impersonating IT support to install remote access tools on employee workstations. The playbook is simple: call an employee, claim you're from IT, say their computer has a security issue, and walk them through installing a legitimate remote management tool like AnyDesk or TeamViewer. Once installed, the attacker has full access.

Groups like Royal and Black Basta have used this exact technique to deploy ransomware across enterprise environments.

Why It Worked

Employees are conditioned to comply with IT requests. The tools being installed are legitimate software, so antivirus doesn't flag them. The attack feels helpful, not hostile.

Your Takeaway

Establish a clear policy: IT will never cold-call you and ask you to install software. Period. Make sure every employee knows this. Publish your IT team's real contact methods and train staff to verify by calling back through official channels.

The Psychology Behind Every Social Engineering Attack

Every one of these social engineering examples exploits the same small set of psychological triggers. Understanding them is your first line of defense.

  • Authority: The attacker poses as someone in power — a CEO, IT admin, or law enforcement officer.
  • Urgency: "This must be done now." Pressure kills critical thinking.
  • Social proof: "Everyone else on the call is doing it." The deepfake CFO attack nailed this.
  • Trust: Exploiting existing relationships or institutional trust, like a QR code on a city parking meter.
  • Fear: "Your account will be locked" or "You'll get a parking ticket."

When your employees can name these triggers in real time, they become dramatically harder to manipulate.

The $4.88M Lesson Most Organizations Learn Too Late

IBM's 2024 Cost of a Data Breach Report pegged the global average cost of a data breach at $4.88 million, with phishing among the most common initial attack vectors. That means many of the most expensive breaches start with a manipulated human, not a misconfigured firewall.

Security awareness isn't a checkbox exercise. It's risk reduction. And the data backs that up: organizations with security awareness training and phishing simulation programs detected and contained breaches faster, saving an average of hundreds of thousands of dollars per incident.

How to Protect Your Organization Starting Today

Here's what I recommend based on what actually works, not what sounds good in a slide deck:

1. Run Realistic Phishing Simulations

Not once a year. Monthly. Vary the scenarios — email, SMS, QR code, voice. Measure who clicks, who reports, and who ignores. Use results to target training, not to punish. Get started with phishing simulation and awareness training built for organizations.
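Measuring those simulations comes down to two numbers per campaign: click rate and report rate. The record format below is an assumption; map it to whatever your simulation platform exports.

```python
# Sketch: summarizing phishing-simulation results (hypothetical schema).
def campaign_metrics(results: list[dict]) -> dict:
    """Each record: {"user": ..., "clicked": bool, "reported": bool}."""
    total = len(results)
    clicked = sum(r["clicked"] for r in results)
    reported = sum(r["reported"] for r in results)
    return {
        "click_rate": clicked / total if total else 0.0,
        "report_rate": reported / total if total else 0.0,
    }

results = [
    {"user": "a", "clicked": True,  "reported": False},
    {"user": "b", "clicked": False, "reported": True},
    {"user": "c", "clicked": False, "reported": False},
    {"user": "d", "clicked": False, "reported": True},
]
metrics = campaign_metrics(results)  # click_rate 0.25, report_rate 0.5
```

A rising report rate matters as much as a falling click rate: it tells you employees are actively flagging threats, not just ignoring them.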

2. Deploy Phishing-Resistant MFA

SMS-based MFA is better than nothing, but it's vulnerable to SIM swapping and social engineering of telecom reps. Hardware security keys (FIDO2/WebAuthn) eliminate credential theft from phishing entirely.

3. Implement Verification Protocols for Sensitive Actions

Wire transfers, credential resets, software installations, and access changes all need out-of-band verification. Build these protocols into your operational playbooks, not just your security policy documents.

4. Train Continuously, Not Annually

One-and-done training doesn't work. People forget. Threats evolve. A continuous cybersecurity awareness training program keeps social engineering defense skills fresh and top of mind.

5. Adopt Zero Trust Architecture

Assume every request is potentially malicious until verified. Apply this to network access, identity verification, and even internal communications. The NIST Zero Trust Architecture framework (SP 800-207) is your starting blueprint.

Social Engineering Isn't Going Away — Your Defenses Need to Evolve

Every one of the social engineering examples in this post happened to sophisticated organizations with real security budgets. MGM, Twitter, Arup, Ubiquiti, Okta — none of them were negligent. They were human.

That's the point. Social engineering targets the one thing you can't patch: people. But you can train them. You can build processes that make manipulation harder. You can create a culture where verifying a suspicious request is rewarded, not seen as paranoia.

The threat actors are studying your org chart, your LinkedIn profiles, and your help desk procedures right now. The question isn't whether someone will try to socially engineer your organization in 2025. It's whether your people will recognize it when it happens.