In January 2024, a finance employee at engineering firm Arup wired $25 million to threat actors after joining a video call with what appeared to be the company's CFO and other colleagues. Every person on that call was a deepfake. The attackers never exploited a software vulnerability. They exploited trust. If you want to know how to spot social engineering, this is the case study that should keep you up at night — because the tactics are getting sharper, and the targets aren't just executives anymore.
Social engineering remains among the most common attack vectors behind data breaches. According to the Verizon 2025 Data Breach Investigations Report, the human element was involved in roughly 60% of breaches. Threat actors don't need to crack your firewall when they can simply ask an employee to hold the door open.
This post breaks down the specific signals, psychological triggers, and real-world patterns that mark a social engineering attempt. Whether you're securing a 10-person office or a 10,000-seat enterprise, these are the red flags your people need to recognize — today.
What Social Engineering Actually Looks Like in 2025
Forget the old image of a shady email from a foreign prince. Modern social engineering is precise, researched, and disturbingly personal. Threat actors scrape LinkedIn, company websites, press releases, and even court filings to build profiles on their targets.
Here's what I've seen in incident response engagements this year: attackers posing as IT helpdesk staff during a known system migration, sending Teams messages that reference the exact software being deployed. They knew project timelines, vendor names, and internal jargon. The phishing message didn't feel suspicious because it matched reality.
Social engineering isn't one technique. It's a category that includes phishing emails, vishing (voice phishing), smishing (SMS phishing), pretexting, baiting, tailgating, and business email compromise (BEC). The common thread is manipulation of human psychology rather than exploitation of code.
The FBI's Numbers Tell the Story
The FBI Internet Crime Complaint Center (IC3) reported that BEC alone accounted for over $2.9 billion in adjusted losses in 2023 — making it the costliest cybercrime category by a wide margin. Those numbers have continued trending upward. Every one of those losses started with a human being who didn't spot the con.
The 7 Red Flags: How to Spot Social Engineering in Real Time
Knowing how to spot social engineering comes down to recognizing patterns. Threat actors rely on a handful of psychological levers, and once you know them, the attacks become far easier to catch. Here are the seven signals I train every organization to watch for.
1. Urgency That Feels Manufactured
"This must be completed in the next 30 minutes or the account will be locked." Urgency is the most common manipulation tactic in social engineering. It's designed to bypass your critical thinking. Legitimate organizations almost never impose surprise deadlines on sensitive actions like wire transfers or credential resets.
Ask yourself: would this request still make sense if I waited two hours and verified it through a separate channel? A legitimate request survives that delay, so slow down and verify first.
2. Authority Without Verification
The attacker claims to be the CEO, a board member, an auditor, or law enforcement. They leverage the weight of authority to discourage questioning. In the Arup deepfake case, the entire deception rested on the perceived authority of the CFO.
Real leaders expect verification. If someone claiming to be your VP of Finance asks for an emergency wire transfer via email, pick up the phone and call the number you already have on file — not the one in the email signature.
3. Requests for Credentials or MFA Codes
No legitimate IT department will ever ask you to share your password or read back a multi-factor authentication code over the phone or via chat. This is credential theft, plain and simple. If someone asks for your MFA token, you are being targeted right now.
I've investigated cases where attackers called employees, identified themselves as the helpdesk, and asked for MFA codes to "verify the account migration." The employees complied because the caller knew their employee ID and the name of the migration project. That information came from LinkedIn and a press release.
4. Unusual Communication Channels
Your CFO has never once sent you a WhatsApp message — until today, with an urgent request. That shift in channel is a signal. Threat actors use unfamiliar channels because those channels often sit outside your organization's email security filters and logging.
Any time a request arrives on a channel that's abnormal for that sender, treat it as suspicious until verified.
5. Emotional Manipulation
Fear, curiosity, sympathy, greed. Social engineering attacks are designed to trigger an emotional response that overrides logical analysis. "Your account has been compromised" triggers fear. "You've been selected for a bonus" triggers greed. "A colleague needs emergency help" triggers sympathy.
Recognize the emotion before you act on it. If a message makes your heart rate spike, that's exactly when you should pause.
6. Mismatched Details
The email says it's from your bank, but the reply-to address is a Gmail account. The caller says they're from Microsoft, but they called your personal cell. The invoice is from a vendor you use, but the bank routing number has changed. These mismatches are the seams in the disguise.
Train your eyes to check sender addresses, hover over links before clicking, and verify financial details through established channels. These small habits catch the majority of phishing and BEC attempts.
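For email specifically, the reply-to mismatch can even be checked mechanically. Here's a minimal sketch using Python's standard email library; the sample message is invented for illustration, and real mail filters apply far more checks than this one:

```python
# Flag a common BEC tell: a Reply-To header that routes replies to a
# different domain than the claimed From address (headers per RFC 5322).
from email import message_from_string
from email.utils import parseaddr

def reply_to_mismatch(raw_message: str) -> bool:
    """Return True if Reply-To sends replies to a different domain than From."""
    msg = message_from_string(raw_message)
    _, from_addr = parseaddr(msg.get("From", ""))
    _, reply_addr = parseaddr(msg.get("Reply-To", ""))
    if not reply_addr:  # no Reply-To header: replies go back to From
        return False
    from_domain = from_addr.rsplit("@", 1)[-1].lower()
    reply_domain = reply_addr.rsplit("@", 1)[-1].lower()
    return from_domain != reply_domain

# Invented example: the bank's "From" domain, but replies go to Gmail.
sample = (
    "From: Accounts <billing@yourbank.example>\r\n"
    "Reply-To: <yourbank.support@gmail.com>\r\n"
    "Subject: Urgent: verify your account\r\n"
    "\r\nPlease confirm your details.\r\n"
)
print(reply_to_mismatch(sample))  # True
```

A mismatch isn't proof of fraud (newsletters and ticketing systems do this legitimately), but it's exactly the kind of seam worth a second look.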
7. Too-Good-to-Be-True Offers or Threats
"Click here to claim your $500 gift card" or "Failure to respond will result in immediate legal action." Both extremes — extraordinary rewards and extraordinary consequences — are hallmarks of social engineering. Real business communications rarely live at either pole.
Why Your Gut Isn't Enough: The Case for Structured Training
In my experience, most people think they can spot a scam. And most of them are wrong. The Verizon DBIR consistently shows that even security-aware employees fall for well-crafted pretexts. That's because social engineering specifically targets the brain's shortcuts — heuristics that evolved for survival, not for evaluating emails.
Structured cybersecurity awareness training gives employees a repeatable framework for evaluating requests. It turns a vague "that feels weird" into a concrete checklist: verify the sender, confirm through a second channel, check for urgency manipulation, never share credentials.
Phishing Simulations Change Behavior
Reading about phishing is one thing. Getting caught by a simulated phish is another. Organizations that run regular phishing awareness training for their teams see measurable reductions in click rates over time. The key is frequency and realism — quarterly simulations with post-click coaching, not an annual checkbox exercise.
The Cybersecurity and Infrastructure Security Agency (CISA) specifically recommends phishing simulation programs as a core component of organizational security awareness. This isn't optional anymore. It's a baseline.
What Is Social Engineering and Why Is It So Effective?
Social engineering is the deliberate manipulation of people into performing actions or divulging confidential information. It works because it exploits fundamental human traits: trust, helpfulness, respect for authority, and fear of consequences. Unlike a ransomware payload that triggers an endpoint detection alert, a social engineering attack bypasses every technical control by targeting the one system you can't patch — the human mind.
That's why zero trust principles matter beyond network architecture. A zero trust mindset means verifying every request regardless of who appears to be making it. Trust is established through verification, not assumption.
Real-World Social Engineering Attacks You Should Study
The MGM Resorts Breach (2023)
In September 2023, threat actors from the group Scattered Spider called the MGM Resorts IT helpdesk, impersonated an employee using information scraped from LinkedIn, and convinced a helpdesk technician to reset account credentials. That single phone call led to a ransomware attack that disrupted operations across MGM properties for over a week, costing the company an estimated $100 million.
The lesson: your helpdesk is a high-value target. Identity verification procedures for inbound calls are non-negotiable.
The Twitter Internal Tool Compromise (2020)
Attackers used phone-based social engineering to target Twitter employees, gaining access to internal admin tools. They then hijacked high-profile accounts including those of Barack Obama, Elon Musk, and Apple to run a cryptocurrency scam. The attackers were teenagers. The attack vector was a phone call.
Technical sophistication wasn't required. Persuasion was.
Business Email Compromise at Scale
BEC doesn't always make headlines, but it's devastatingly effective. A typical attack involves compromising or spoofing an executive's email account and instructing a finance team member to wire funds to a "new vendor account." The FBI's IC3 data shows these attacks hit organizations of every size, across every industry. The average loss per incident runs into six figures.
Building a Human Firewall: Practical Steps That Actually Work
Knowing how to spot social engineering is step one. Building organizational resilience requires deliberate, ongoing effort. Here's what works based on what I've seen deployed at organizations that rarely show up in breach reports.
Implement Verification Protocols
Every financial transaction above a set threshold requires voice verification through a known phone number. Every credential reset request requires identity confirmation through a pre-established method. Write these into policy. Enforce them without exception — especially for executives, who are the most common targets of pretexting attacks.
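As an illustration only, a policy like the one above can be encoded as a simple lookup so it's applied the same way every time. The threshold and step names here are invented assumptions, not a real product or standard:

```python
# Hypothetical sketch: map a sensitive request to the out-of-band
# verification steps policy requires. Threshold and steps are assumed.
WIRE_THRESHOLD_USD = 10_000

def required_verification(request_type: str, amount_usd: float = 0.0) -> list[str]:
    """Return the verification steps policy demands for this request."""
    steps = []
    if request_type == "wire_transfer" and amount_usd >= WIRE_THRESHOLD_USD:
        steps.append("voice callback to the phone number already on file")
        steps.append("second approver sign-off")
    elif request_type == "credential_reset":
        steps.append("identity confirmation via pre-established method")
        steps.append("ticket logged before the reset is performed")
    return steps

print(required_verification("wire_transfer", 250_000))
```

The point of writing it down, in code or in policy, is that "no exceptions" only holds when the procedure doesn't depend on anyone's judgment in the moment.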
Run Realistic Phishing Simulations
Deploy simulations that mirror real attack patterns — BEC emails, fake MFA alerts, credential harvesting pages, and even vishing attempts. Measure click rates, report rates, and time-to-report. Use the data to target additional training where it's needed. An effective phishing simulation program treats results as coaching opportunities, not punishment.
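The three metrics above are straightforward to compute once each simulation records who clicked and when (if ever) the message was reported. A minimal sketch with invented sample data:

```python
# Per-recipient simulation records: (clicked, minutes_to_report or None).
# The data below is made up for illustration.
from statistics import median

results = [
    (True, None), (False, 4), (True, 12), (False, None), (False, 7),
]

clicks = sum(1 for clicked, _ in results if clicked)
report_times = [mins for _, mins in results if mins is not None]

click_rate = clicks / len(results)
report_rate = len(report_times) / len(results)
median_time_to_report = median(report_times)

print(f"click rate: {click_rate:.0%}")                         # 40%
print(f"report rate: {report_rate:.0%}")                       # 60%
print(f"median time-to-report: {median_time_to_report} min")   # 7 min
```

Tracking report rate and time-to-report alongside click rate matters: a team that clicks occasionally but reports within minutes is in far better shape than one that never reports at all.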
Train on Current Threat Intelligence
Generic security training from three years ago won't prepare your team for deepfake-enhanced vishing or AI-generated phishing emails. Your security awareness training program should update content regularly to reflect emerging attack techniques. In 2025, that means covering AI-powered voice cloning, QR code phishing (quishing), and multi-channel social engineering campaigns.
Create a Culture Where Reporting Is Rewarded
Employees who report suspicious messages should be thanked, not questioned. If your team fears being mocked for "falling for it" or wasting IT's time, they'll stay silent. The organizations with the strongest security posture are the ones where employees report early and often, without hesitation.
Apply Zero Trust to Human Interactions
Zero trust isn't just a network architecture principle. Apply it to every request that involves sensitive data, money, or access. Verify the identity of the requester. Confirm the request through a separate channel. Assume compromise until verification is complete. This mindset stops BEC, pretexting, and impersonation attacks cold.
The $4.88M Lesson Most Organizations Learn Too Late
IBM's Cost of a Data Breach Report 2024 put the global average cost of a data breach at $4.88 million. Social engineering and phishing were among the most common initial attack vectors. The math is simple: investing in detection, training, and verification protocols costs a fraction of a single incident.
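A back-of-envelope version of that math, using the IBM average cited above. The training cost and the annual breach probabilities are invented assumptions for illustration; plug in your own figures:

```python
# Expected-loss comparison. Only AVG_BREACH_COST comes from the cited
# IBM report; every other number is an assumed placeholder.
AVG_BREACH_COST = 4_880_000       # IBM Cost of a Data Breach Report 2024
training_cost_per_year = 50_000   # assumed annual program cost
p_breach_untrained = 0.10         # assumed annual breach likelihood
p_breach_trained = 0.04           # assumed likelihood after training

expected_loss_untrained = p_breach_untrained * AVG_BREACH_COST
expected_loss_trained = p_breach_trained * AVG_BREACH_COST + training_cost_per_year

print(round(expected_loss_untrained - expected_loss_trained))  # 242800
```

Even with conservative assumptions, the program pays for itself many times over in expected terms, which is why the "fraction of a single incident" framing holds up.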
Every organization that suffered a major social engineering breach had one thing in common — they assumed their people would just "know" when something was off. That assumption is the real vulnerability.
Start building structured defenses now. Train your people on what social engineering actually looks like in 2025. Run phishing simulations that test real scenarios. Enforce verification protocols that make impersonation useless. The threat actors aren't waiting. Neither should you.