The Clock Starts Ticking the Second You Discover a Breach

In February 2024, Change Healthcare suffered a ransomware attack that exposed the protected health information of over 100 million individuals. The fallout wasn't just technical — it was a cascading failure in communication, notification, and reporting that took months to untangle. If your organization ever faces a data breach, knowing how to report a data breach correctly isn't optional. It's the difference between a manageable incident and a regulatory nightmare.

I've worked with organizations that discovered a breach on a Friday afternoon and had no idea who to call first. The FBI? Their state attorney general? Their customers? The answer depends on the type of data compromised, where your customers live, and what industry you operate in. This guide walks you through the exact steps — with real deadlines, real agencies, and real consequences for getting it wrong.

Step 1: Confirm the Breach and Preserve Evidence

Before you report anything, you need to confirm that a breach actually occurred. A security alert isn't the same as a confirmed compromise. Your incident response team — or an external forensics firm — needs to determine what happened, what data was affected, and whether the threat actor still has access.

Preserve everything. Logs, affected systems, email headers, network captures. I've seen organizations wipe compromised servers in a panic, destroying the very evidence they needed for law enforcement and regulatory investigations. Document the timeline meticulously: when the breach started, when it was discovered, and every action taken afterward.
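Part of that preservation work can be automated. Here's a minimal Python sketch that fingerprints each preserved artifact with a SHA-256 hash and a UTC collection timestamp, so you can later demonstrate the evidence wasn't altered. The file names and manifest layout are illustrative, not a forensic standard:

```python
# Minimal evidence-manifest sketch: hash each preserved artifact and record
# a UTC collection timestamp. Illustrative only -- not a forensic standard.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def hash_file(path: Path) -> str:
    """SHA-256 hex digest of a file, read in 64 KiB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(evidence: list[Path]) -> dict:
    """Fingerprint each file so later tampering is detectable."""
    return {
        "collected_at_utc": datetime.now(timezone.utc).isoformat(),
        "files": [
            {"path": str(p), "sha256": hash_file(p), "bytes": p.stat().st_size}
            for p in evidence
        ],
    }

if __name__ == "__main__":
    # Demo with a temporary file standing in for a preserved log.
    import tempfile
    tmp = Path(tempfile.mkdtemp()) / "auth.log"
    tmp.write_text("Mar 14 02:11:07 host sshd[812]: Failed password for root\n")
    print(json.dumps(build_manifest([tmp]), indent=2))
```

Store the manifest and artifact copies somewhere the attacker can't reach — offline media or a separate, isolated account.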

Don't Skip the Scoping Phase

You can't write an accurate notification letter if you don't know what was stolen. Was it names and emails? Social Security numbers? Protected health information? Financial account data? The category of data dictates your legal obligations and who you must notify. Rushing past this step creates liability.

Step 2: Notify Law Enforcement

If the breach involves criminal activity — ransomware, credential theft, extortion, or a sophisticated threat actor — report it to law enforcement immediately. File a complaint with the FBI's Internet Crime Complaint Center (IC3). For ransomware specifically, CISA also wants to hear from you.

Reporting to law enforcement does not satisfy your state or federal notification obligations. It's a parallel track. But it's critical: the FBI may already be investigating the same group that hit you, and your forensic data could be the missing piece.

When to Contact CISA

The Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA) will require critical infrastructure entities to report significant cyber incidents to CISA within 72 hours — and ransom payments within 24 hours — once CISA's final rule takes effect. Even if you're not in critical infrastructure, CISA's reporting portal is a valuable resource. They can provide technical assistance and share threat intelligence that helps you contain the breach faster.

Step 3: Determine Your State Notification Requirements

Here's where knowing how to report a data breach gets complicated. All 50 U.S. states, the District of Columbia, and U.S. territories have their own breach notification laws. The deadlines, definitions of "personal information," and notification methods vary wildly.

  • California (CCPA/CPRA): Notification required "in the most expedient time possible and without unreasonable delay."
  • Texas: 60 days from discovery of the breach.
  • Florida: 30 days to individuals, 30 days to the Department of Legal Affairs.
  • New York (SHIELD Act): Notification to the AG "in the most expedient time possible."

If your customers are in multiple states, you must comply with every applicable state law — and in practice that means the strictest deadline wins, so most organizations work to the shortest window across all affected jurisdictions.

What About HIPAA and Financial Regulations?

If you handle protected health information, the HIPAA Breach Notification Rule requires you to notify affected individuals within 60 days of discovery, notify HHS (without unreasonable delay for breaches affecting 500 or more individuals; otherwise via an annual log), and alert prominent media outlets in any state where 500 or more residents are affected. Financial institutions under the Gramm-Leach-Bliley Act have their own set of requirements, enforced by the FTC and banking regulators.

The FTC's Health Breach Notification Rule also applies to health apps and non-HIPAA-covered entities — a gap many organizations don't realize exists until it's too late.

What Should a Breach Notification Include?

This is a common search question, so let me answer it directly. A proper data breach notification to affected individuals should include:

  • A description of what happened, including the date of the breach and the date of discovery.
  • The types of personal information involved (SSN, financial data, medical records, etc.).
  • Steps the organization is taking in response.
  • What individuals can do to protect themselves (credit monitoring, fraud alerts, password changes).
  • Contact information for questions, usually a dedicated call center or email address.
  • Contact information for the FTC, state attorney general, or relevant regulatory body.
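When letters are drafted under deadline pressure, a quick completeness check against the list above helps. A minimal sketch — the element names are my own labels, not statutory terms:

```python
# Sketch: verify a draft notification covers the elements listed above.
# Element names are illustrative labels, not statutory terms.
REQUIRED_ELEMENTS = {
    "what_happened",      # description, breach date, discovery date
    "data_types",         # SSN, financial, medical, etc.
    "our_response",       # containment and remediation steps
    "protective_steps",   # credit monitoring, fraud alerts, password changes
    "contact_info",       # call center or email for questions
    "regulator_contact",  # FTC, state AG, or other relevant body
}

def missing_elements(draft_sections: set[str]) -> set[str]:
    """Return required elements the draft does not yet cover."""
    return REQUIRED_ELEMENTS - draft_sections

if __name__ == "__main__":
    draft = {"what_happened", "data_types", "contact_info"}
    print(sorted(missing_elements(draft)))
    # ['our_response', 'protective_steps', 'regulator_contact']
```

A check like this fits naturally at the end of the drafting workflow, before legal review.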

Keep the language plain. Grade 8 reading level. I've reviewed notification letters written by legal teams that were so dense with legalese that recipients had no idea they'd been breached. That defeats the purpose and can draw regulatory scrutiny.

Step 4: Notify Your State Attorney General

Many states require you to notify the state attorney general's office in addition to affected individuals — especially when the breach exceeds a certain threshold (often 500 or 1,000 residents). Some states, like California, require you to submit a sample copy of the notification letter you sent to consumers.

Don't treat this as a formality. Attorneys general have enforcement power. The New York AG's office alone has pursued dozens of data breach enforcement actions in recent years, resulting in significant settlements.

Step 5: Notify Affected Individuals

Written notice — by mail or email, depending on state law — is the standard. Some states allow substitute notice (website posting plus media notice) if the cost of direct notice exceeds a threshold or if you lack contact information for affected individuals.

Timing matters here. Delayed notification erodes trust and invites lawsuits. The 2017 Equifax breach is the textbook example: the company waited six weeks after discovery to notify the public, fueling massive consumer backlash and a $700 million FTC settlement.

The $4.88M Lesson Most Organizations Learn Too Late

According to IBM's 2024 Cost of a Data Breach Report, the global average cost of a data breach hit $4.88 million. But here's the part most people miss: organizations with an incident response team and a tested response plan saved an average of $2.66 million per breach compared to those without.

Knowing how to report a data breach is one component of a broader incident response capability. If your employees can't recognize a phishing email or a social engineering attempt, you're fighting the battle after the walls have already been breached.

That's why foundational cybersecurity awareness training matters so much. Your people are your first line of defense — and your biggest vulnerability. Combine that training with realistic phishing simulations, and you dramatically reduce the likelihood of a breach that triggers this entire reporting cascade.

Build the Playbook Before You Need It

Every organization should have a written incident response plan that answers these questions before a breach occurs:

  • Who is on the incident response team? (IT, legal, communications, executive leadership)
  • Who has authority to engage external forensics and legal counsel?
  • What are the notification deadlines for every state where you have customers?
  • Who drafts the notification letters?
  • Do you have cyber insurance, and what does the policy require for reporting?
  • How do you handle multi-factor authentication resets and credential rotation during an active incident?

Test this plan annually. Run tabletop exercises. The worst time to figure out your reporting obligations is during an active breach with a threat actor still in your network.

Zero Trust Doesn't Mean Zero Reporting

Even organizations with mature zero trust architectures experience breaches. The 2024 Verizon Data Breach Investigations Report found that 68% of breaches involved a human element — social engineering, errors, or misuse. Technology alone won't save you. And when a breach does happen, a zero trust framework doesn't exempt you from notification requirements.

Your security posture should reduce breach frequency and blast radius. Your incident response plan should ensure you report correctly when prevention fails. Both are non-negotiable.

Don't Wait for the Breach to Start Learning

If you've read this far, you already understand that breach reporting is a legal, operational, and reputational challenge. The organizations that handle it well are the ones that prepared before the incident — with trained employees, tested playbooks, and clear communication channels.

Start with the fundamentals. Make sure every person in your organization knows what a phishing email looks like, what social engineering sounds like, and what to do when something doesn't look right. That single investment in security awareness can prevent the breach that triggers everything I just described.