In July 2020, Twitter suffered one of the most visible cyber incidents of the year — a coordinated social engineering attack that hijacked high-profile accounts belonging to Barack Obama, Elon Musk, and Apple, among others. The attackers walked away with over $100,000 in Bitcoin. But what stood out to me wasn't the breach itself. It was how many organizations watching from the sidelines had no idea how to report a cyber incident if the same thing happened to them.
I've worked incident response cases where companies lost 48 critical hours because nobody knew who to call, what to document, or which federal agency to contact. Those hours cost them — in data, in money, and in legal exposure. This post walks you through exactly what to do, step by step, when a cyber incident hits your organization.
Why Knowing How to Report a Cyber Incident Matters More Than Ever
The FBI's Internet Crime Complaint Center (IC3) received 467,361 complaints in 2019, with reported losses exceeding $3.5 billion. By the time the 2020 numbers are finalized, those figures will almost certainly be higher — the pandemic has been a goldmine for threat actors. The FBI IC3 has explicitly warned about surging COVID-19 related phishing, credential theft, and ransomware campaigns targeting remote workforces.
Here's what actually happens in most organizations: an employee spots something suspicious, doesn't know what to do, sits on it for a day or two, maybe mentions it to a coworker. By the time it reaches someone with authority, the attacker has had days of unimpeded access. Reporting speed is the single biggest factor in limiting damage.
Step 1: Contain First, Then Report
Before you pick up the phone, stop the bleeding. Containment and reporting happen nearly simultaneously in a well-prepared organization, but containment comes first by a hair.
Immediate Containment Actions
- Isolate affected systems. Disconnect compromised machines from the network. Don't power them off — you may destroy forensic evidence.
- Disable compromised accounts. If credential theft is suspected, force password resets and revoke active sessions immediately.
- Preserve logs. Firewall logs, email server logs, endpoint detection logs — save everything. Timestamp your actions.
- Activate multi-factor authentication on any accounts that don't already have it. Stolen credentials alone won't be enough for the attacker to get back in.
I've seen organizations wipe machines in a panic, destroying the very evidence that would have helped law enforcement track the threat actor. Don't be that organization.
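The "timestamp your actions" advice is easy to automate so nobody has to reconstruct it from memory later. Here's a minimal sketch of a containment action log; the incident ID format and action names are illustrative, not a standard:

```python
from datetime import datetime, timezone

class ContainmentLog:
    """Append-only, timestamped record of containment actions."""

    def __init__(self, incident_id):
        self.incident_id = incident_id
        self.entries = []

    def record(self, action, detail):
        # UTC timestamps avoid ambiguity when responders span time zones.
        entry = {
            "time": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "detail": detail,
        }
        self.entries.append(entry)
        return entry

log = ContainmentLog("IR-2021-001")
log.record("isolate", "Disconnected FINANCE-PC-07 from the network; left powered on")
log.record("disable_account", "Forced password reset and revoked sessions for j.doe")
print(len(log.entries), "actions logged for", log.incident_id)
```

Even a log this simple becomes invaluable when law enforcement or regulators ask exactly what you did and when.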
Step 2: Report Internally Using a Defined Chain
Every organization needs a documented incident reporting chain. If you don't have one, build one today. Here's what a functional chain looks like:
Who Gets Notified and When
- IT Security Team / SOC: Immediately. They begin technical triage.
- CISO or IT Director: Within 30 minutes. They make escalation decisions.
- Legal Counsel: Within 1 hour. Breach notification laws have strict timelines — your lawyer needs to know early.
- Executive Leadership: Within 2 hours. They own the business risk decisions.
- Communications / PR: As directed by legal. Premature public statements create liability.
Document every notification with a timestamp. This paper trail matters for regulatory compliance and potential litigation. I've reviewed incident timelines in post-breach audits where the lack of documentation turned a manageable situation into a regulatory nightmare.
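The notification windows above can be encoded so nothing slips during the chaos of a live incident. A sketch, using the roles and deadlines listed in this chain (your own chain may differ):

```python
from datetime import datetime, timedelta, timezone

# Roles and maximum notification delays, measured from incident declaration,
# mirroring the chain described above.
ESCALATION_CHAIN = [
    ("IT Security / SOC", timedelta(minutes=0)),
    ("CISO / IT Director", timedelta(minutes=30)),
    ("Legal Counsel", timedelta(hours=1)),
    ("Executive Leadership", timedelta(hours=2)),
]

def overdue_notifications(declared_at, notified, now):
    """Return roles whose deadline has passed without a recorded notification."""
    late = []
    for role, delay in ESCALATION_CHAIN:
        if role not in notified and now >= declared_at + delay:
            late.append(role)
    return late

declared = datetime(2021, 1, 15, 9, 0, tzinfo=timezone.utc)
notified = {
    "IT Security / SOC": declared,
    "CISO / IT Director": declared + timedelta(minutes=20),
}
# 90 minutes in: Legal's 1-hour window has lapsed; executives still have time.
print(overdue_notifications(declared, notified, declared + timedelta(minutes=90)))
```

Wire something like this into your ticketing system and the escalation chain enforces itself instead of depending on whoever happens to remember it.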
Step 3: Report to Federal Law Enforcement
This is where most organizations hesitate. They worry about publicity, legal exposure, or losing control of the investigation. I understand the instinct, but reporting to law enforcement almost always works in your favor — especially when regulators later ask what you did in response.
FBI Internet Crime Complaint Center (IC3)
File a complaint at ic3.gov. This is the primary federal intake mechanism for cyber incidents. The IC3 complaint form asks for specific details: how the incident occurred, the financial impact, the type of attack, and any identifying information about the attacker. Be thorough. Vague reports go to the bottom of the pile.
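Gathering those details before you open the form makes for a stronger complaint. A sketch of an intake record your responders could fill in first; the field names are my own shorthand, not IC3's actual schema:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class IncidentIntake:
    # Fields roughly parallel to what the IC3 form asks for;
    # the names here are illustrative, not IC3's schema.
    how_it_occurred: str
    attack_type: str
    financial_loss_usd: float
    attacker_indicators: list = field(default_factory=list)

    def is_complete(self):
        """Vague reports get deprioritized; verify the basics are filled in."""
        return bool(self.how_it_occurred and self.attack_type)

intake = IncidentIntake(
    how_it_occurred="Finance clerk entered credentials on a spoofed login page",
    attack_type="Business email compromise",
    financial_loss_usd=84000.0,
    attacker_indicators=["reply-to domain registered 3 days before the fraud"],
)
print(intake.is_complete(), asdict(intake)["attack_type"])
```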
CISA (Cybersecurity and Infrastructure Security Agency)
If your organization operates in critical infrastructure — healthcare, financial services, energy, government — report to CISA as well. CISA provides technical assistance and can deploy incident response resources. They also share anonymized threat intelligence that helps protect other organizations from the same attack.
Local FBI Field Office
For significant incidents — ransomware demands over $50,000, data breaches exposing the PII of thousands of individuals, nation-state indicators — contact your local FBI field office directly. The IC3 online form is important, but a phone call to a field agent gets faster traction on serious cases.
What Qualifies as a Reportable Cyber Incident?
This is one of the most common questions I get, and it's worth answering directly: any unauthorized access to your systems, data, or accounts qualifies as a cyber incident worth reporting. You don't need to confirm a full-blown data breach before you act.
Specific Scenarios That Require Reporting
- Ransomware attack: Always report. Even if you pay (which I strongly advise against), law enforcement needs to know. Ransomware payments funded an estimated $350 million in criminal revenue in 2020 according to Chainalysis data.
- Business email compromise (BEC): The IC3 reported $1.7 billion in BEC losses in 2019. If a wire transfer was redirected, report immediately — the IC3's Recovery Asset Team has successfully frozen fraudulent transfers when notified quickly.
- Phishing that led to credential theft: If an employee gave up credentials to a phishing site, that's a reportable incident. The attacker now has access to your environment.
- Data exfiltration: Any confirmed or suspected theft of customer PII, financial records, or intellectual property.
- Insider threat: An employee or contractor who deliberately accessed data they shouldn't have.
When in doubt, report. No law enforcement agency has ever penalized an organization for over-reporting.
Step 4: Comply with State and Federal Notification Laws
All 50 U.S. states have data breach notification laws. The timelines and thresholds vary, but most require notification to affected individuals within 30 to 90 days of discovery. Some states impose stricter requirements and steeper penalties; California, for example, layers the CCPA's statutory damages for breaches on top of its separate breach notification statute.
Key Regulatory Bodies to Consider
- State Attorney General: Most states require you to notify the AG's office if the breach exceeds a certain number of affected residents (often 500 or more).
- HHS (Health & Human Services): If you handle protected health information under HIPAA, breach notification to HHS is mandatory.
- FTC (Federal Trade Commission): If your organization made security promises to customers that it failed to keep, the FTC may get involved. The FTC's enforcement actions against companies like Wyndham Hotels and LabMD established that inadequate cybersecurity can constitute an unfair business practice.
- SEC: Publicly traded companies have additional disclosure obligations.
Your legal counsel should map out which laws apply to your specific situation. This isn't optional — failure to notify carries its own penalties, independent of the breach itself.
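That mapping exercise can be roughed out in code as a first pass for counsel to review. This is a simplified sketch under assumed thresholds, not legal advice; the 500-resident AG threshold varies by state:

```python
def regulators_to_notify(handles_phi, publicly_traded,
                         affected_residents_by_state, ag_threshold=500):
    """First-pass list of likely regulatory notifications. Not legal advice;
    counsel must confirm actual obligations per jurisdiction."""
    targets = []
    # Many states require AG notice above a resident-count threshold.
    for state, count in affected_residents_by_state.items():
        if count >= ag_threshold:
            targets.append(f"{state} Attorney General")
    if handles_phi:
        targets.append("HHS (HIPAA breach notification)")
    if publicly_traded:
        targets.append("SEC disclosure review")
    return targets

print(regulators_to_notify(
    handles_phi=True,
    publicly_traded=False,
    affected_residents_by_state={"CA": 1200, "NV": 40},
))
```

Treat the output as a checklist starter for legal, not a final answer; the FTC, for instance, can't be reduced to a threshold check.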
Step 5: Document Everything for the After-Action Review
Once the immediate crisis subsides, the real work begins. Every incident should produce a detailed after-action report. I've led dozens of these reviews, and the organizations that learn from them are the ones that don't repeat the same mistakes.
What Your After-Action Report Should Include
- Timeline of the incident from first indicator to full containment
- Attack vector — how did the threat actor get in?
- Systems and data affected
- Response actions taken and their effectiveness
- Gaps in detection, response, or communication
- Specific remediation steps and responsible parties
- Updated security controls to prevent recurrence
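Those sections translate directly into a report skeleton your team can fill in while the details are fresh. A minimal generator; the section list simply mirrors the bullets above:

```python
SECTIONS = [
    "Timeline (first indicator to full containment)",
    "Attack vector",
    "Systems and data affected",
    "Response actions and effectiveness",
    "Gaps in detection, response, or communication",
    "Remediation steps and responsible parties",
    "Updated security controls",
]

def after_action_template(incident_id):
    """Render a plain-text after-action report skeleton."""
    lines = [f"After-Action Report: {incident_id}", "=" * 40]
    for i, section in enumerate(SECTIONS, start=1):
        lines.append(f"{i}. {section}")
        lines.append("   TODO")
    return "\n".join(lines)

print(after_action_template("IR-2021-001").splitlines()[2])
```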
The Verizon 2020 Data Breach Investigations Report found that 22% of breaches involved social engineering and 45% involved hacking. If your incident fell into either category, your after-action plan needs to address both the technical vulnerability and the human factor.
The Training Gap That Makes Reporting Fail
Here's the uncomfortable truth: reporting procedures are worthless if your employees can't recognize an incident in the first place. The same Verizon DBIR data shows that phishing is the top threat action in breaches. Your people are the first line of detection — and they need to be trained accordingly.
I recommend two specific investments. First, enroll your team in cybersecurity awareness training that covers incident recognition, social engineering tactics, and reporting protocols. This gives everyone a shared vocabulary and a clear understanding of what "suspicious" actually looks like.
Second, run regular phishing simulations through a dedicated phishing awareness training program. Simulations do something classroom training alone can't — they create muscle memory. When your employees have practiced spotting and reporting phishing emails dozens of times, they'll do it automatically when a real attack arrives.
Build a Zero-Blame Reporting Culture
The fastest way to kill incident reporting is to punish the reporter. I've seen organizations where employees hid compromised credentials for days because they were afraid of being fired. That fear gave attackers all the time they needed.
Adopt a zero-blame policy for good-faith reporting. If someone clicks a phishing link and reports it within minutes, that's a success — not a failure. The failure is the employee who clicks and says nothing. Your security awareness training should reinforce this message relentlessly.
A Reporting Checklist You Can Use Today
Pin this somewhere your incident response team can see it:
- Contain: Isolate systems, disable accounts, preserve evidence
- Internal notification: IT Security → CISO → Legal → Executives (with timestamps)
- Federal reporting: FBI IC3 complaint + CISA (if critical infrastructure) + local FBI field office (if severe)
- Legal compliance: State AG notification, HIPAA/HHS, FTC, SEC as applicable
- Customer notification: Per state breach notification law timelines
- Document: Maintain a detailed incident log throughout
- After-action review: Conduct within 2 weeks of containment
This isn't theoretical. This is the sequence I've followed in real incidents, and it works. The organizations that had this process documented before the incident handled it in hours. The ones that didn't took weeks — and paid the price in regulatory scrutiny, customer trust, and recovery costs.
Don't Wait for the Breach to Build Your Plan
The SolarWinds supply chain attack disclosed in December 2020 is a stark reminder that no organization is too sophisticated to be compromised. The question isn't whether you'll face a cyber incident — it's whether you'll know how to report a cyber incident when it happens.
Build your reporting chain now. Train your employees now. Test your process with tabletop exercises before a real threat actor tests it for you. The organizations that survive breaches with their reputation intact are the ones that responded quickly, reported transparently, and learned aggressively.
Your next step is straightforward: make sure every person in your organization knows what a cyber incident looks like and exactly who to tell when they see one. That single change will do more for your security posture than any tool you could buy.