In 2024, IBM's Cost of a Data Breach Report pegged the global average breach cost at $4.88 million — the highest ever recorded. That same report found that organizations with security awareness training programs saved an average of $258,629 per breach compared to those without. Yet when I talk to security leaders, most of them still can't articulate which security awareness metrics actually matter. They track completion rates, pat themselves on the back, and wonder why the board still questions their budget.
I've spent years building and evaluating security awareness programs, and here's the uncomfortable truth: most teams measure the wrong things. They count butts in seats instead of behavioral change. They report training hours instead of risk reduction. This post breaks down the specific metrics that demonstrate real program value — the ones that make CFOs stop asking "why do we spend money on this?"
Why Most Security Awareness Metrics Miss the Point
If your primary KPI is "percentage of employees who completed training," you're measuring compliance, not security. A 98% completion rate looks great on a slide deck. It tells you absolutely nothing about whether your workforce can spot a credential theft attempt disguised as an IT password reset.
The Verizon 2024 Data Breach Investigations Report found that 68% of breaches involved a human element — social engineering, errors, or misuse. That number hasn't budged much in years. Completion rates clearly aren't moving the needle. You need metrics that capture whether people actually behave differently after training.
Think of it this way: a hospital doesn't measure surgical training effectiveness by counting how many surgeons attended a lecture. They measure patient outcomes. Your security awareness metrics should measure security outcomes.
The 8 Security Awareness Metrics CISOs Should Track in 2025
1. Phishing Simulation Click Rate (and Its Trend Over Time)
This is the single most cited metric in the industry, and for good reason — it's a direct behavioral measurement. But the raw number matters far less than the trend. A 15% click rate in Q1 that drops to 4% by Q4 tells a powerful story.
Run phishing simulations monthly with varying difficulty levels. Track click rates by department, role, and seniority. I've seen executive teams click at 3x the rate of frontline employees because they're used to acting fast on urgent requests. That's actionable intel, not just a number. If you need a structured program to get started, phishing awareness training designed for organizations can help you build a consistent simulation cadence.
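If your simulation platform lets you export raw results, the per-group trend is easy to compute yourself. Here is a minimal sketch in Python, assuming a hypothetical export format with quarter, department, and clicked fields (not any specific vendor's schema):

```python
from collections import defaultdict

# Hypothetical export from a phishing simulation platform: one record per recipient.
results = [
    {"quarter": "2025-Q1", "department": "Finance", "clicked": True},
    {"quarter": "2025-Q1", "department": "Finance", "clicked": False},
    {"quarter": "2025-Q2", "department": "Finance", "clicked": False},
    {"quarter": "2025-Q1", "department": "Engineering", "clicked": False},
]

def click_rates(records):
    """Return {(quarter, department): click rate} so the trend per group is visible."""
    sent = defaultdict(int)
    clicked = defaultdict(int)
    for r in records:
        key = (r["quarter"], r["department"])
        sent[key] += 1
        clicked[key] += r["clicked"]
    return {key: clicked[key] / sent[key] for key in sent}

for (quarter, dept), rate in sorted(click_rates(results).items()):
    print(f"{quarter} {dept}: {rate:.1%}")
```

Slice the same export by role and seniority to surface patterns like the executive-team example above.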
2. Report Rate: The Metric That Matters More Than Clicks
Here's what separates good programs from great ones. It's not just whether employees avoid clicking a phishing email — it's whether they report it. Your report rate measures the percentage of simulated (and real) phishing emails that employees actively flag using your reporting tool.
A high report rate means your people aren't just passively avoiding threats. They're acting as human sensors in your threat detection ecosystem. This is the foundation of a zero trust culture where every employee participates in defense. Track report-to-click ratio specifically — you want that number climbing every quarter.
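A quick sketch of both numbers from basic campaign counts (the field names and figures are illustrative, not pulled from any particular tool):

```python
def report_metrics(delivered, clicked, reported):
    """Compute report rate and report-to-click ratio from simple campaign counts.

    delivered: simulated phishing emails delivered
    clicked:   recipients who clicked the lure
    reported:  recipients who flagged the email via the reporting button
    """
    report_rate = reported / delivered
    # Guard against a perfect campaign with zero clicks.
    report_to_click = reported / clicked if clicked else float("inf")
    return report_rate, report_to_click

rate, ratio = report_metrics(delivered=1_000, clicked=80, reported=240)
print(f"Report rate: {rate:.1%}, report-to-click ratio: {ratio:.1f}")  # 24.0%, 3.0
```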
3. Time to Report
Speed kills threat actors. If an employee reports a real phishing email within 2 minutes of receiving it, your SOC can pull that message from every inbox in the organization before most people even open it. If the average reporting time is 4 hours, you've already lost.
Measure median time-to-report on both simulations and real incidents. This metric directly correlates with your ability to contain social engineering attacks before they cause damage. I've watched organizations cut their median time to report from 3 hours to under 10 minutes within two quarters of focused training.
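Median is the right summary statistic here: one employee who reports a week later shouldn't mask an otherwise fast-reporting workforce. A small sketch, assuming hypothetical delivered/reported timestamp pairs pulled from mail logs and your reporting button:

```python
from datetime import datetime
from statistics import median

# Hypothetical (delivered_at, reported_at) pairs.
events = [
    ("2025-03-03 09:00", "2025-03-03 09:04"),
    ("2025-03-03 09:00", "2025-03-03 11:30"),
    ("2025-03-04 14:15", "2025-03-04 14:17"),
]

def median_minutes_to_report(pairs, fmt="%Y-%m-%d %H:%M"):
    """Median minutes between delivery and the employee's report."""
    deltas = [
        (datetime.strptime(reported, fmt) - datetime.strptime(delivered, fmt)).total_seconds() / 60
        for delivered, reported in pairs
    ]
    return median(deltas)

print(f"Median time to report: {median_minutes_to_report(events):.0f} minutes")
```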
4. Repeat Offender Rate
This one stings, but it's essential. What percentage of employees fail multiple phishing simulations within a rolling 12-month window? These repeat clickers represent your highest human risk. They're the ones a threat actor will eventually reach.
Track this segment separately. Give them additional, targeted training — not punitive measures, but genuinely different content. If your repeat offender rate isn't declining quarter over quarter, your training content isn't working for this population, and you need to change your approach.
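One way to compute the rate, sketched against a hypothetical failure log of employee IDs and failed-simulation dates:

```python
from collections import Counter
from datetime import date, timedelta

# Hypothetical failure log: (employee_id, date_of_failed_simulation).
failures = [
    ("e101", date(2024, 9, 12)),
    ("e101", date(2025, 2, 3)),
    ("e102", date(2024, 1, 20)),   # falls outside the rolling window
    ("e103", date(2025, 5, 6)),
]

def repeat_offender_rate(failure_log, headcount, as_of=None, window_days=365):
    """Share of employees with two or more failed simulations in the trailing window."""
    as_of = as_of or date.today()
    cutoff = as_of - timedelta(days=window_days)
    counts = Counter(emp for emp, when in failure_log if when >= cutoff)
    repeaters = sum(1 for n in counts.values() if n >= 2)
    return repeaters / headcount

rate = repeat_offender_rate(failures, headcount=500, as_of=date(2025, 6, 30))
print(f"Repeat offender rate: {rate:.1%}")  # 0.2% in this toy example
```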
5. Security Behavior Score (Composite Index)
Mature programs create a composite score that rolls multiple behaviors into a single index per employee or department. This might include phishing simulation performance, password hygiene (are they using the password manager?), multi-factor authentication enrollment, clean desk compliance, and incident reporting activity.
A composite security behavior score gives leadership a single number that trends over time. It's the "credit score" equivalent for human risk. Weight the components based on what matters most to your threat model. For most organizations, phishing resilience and MFA adoption carry the heaviest weight.
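Here is an illustrative weighting, not a prescription; substitute the components and weights that match your own threat model:

```python
# Hypothetical per-employee component scores, each normalized to 0-100.
components = {
    "phishing_resilience": 82,   # derived from simulation click/report history
    "mfa_enrollment": 100,       # enrolled in multi-factor authentication or not
    "password_hygiene": 70,      # e.g., password manager adoption
    "incident_reporting": 60,    # reporting activity relative to peers
}

# Illustrative weights: phishing resilience and MFA carry the most weight here.
weights = {
    "phishing_resilience": 0.40,
    "mfa_enrollment": 0.30,
    "password_hygiene": 0.20,
    "incident_reporting": 0.10,
}

def behavior_score(scores, weights):
    """Weighted average of normalized component scores; weights should sum to 1."""
    return sum(scores[name] * weights[name] for name in weights)

print(f"Composite behavior score: {behavior_score(components, weights):.0f}/100")  # 83
```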
6. Actual Incident Correlation
This is where the rubber meets the road. Can you correlate your training program with a reduction in actual security incidents? Track the number of successful phishing compromises, credential theft events, malware infections originating from email, and ransomware incidents quarter over quarter.
When your phishing simulation click rate drops from 22% to 5% and your actual email-originated incidents drop by 60% over the same period, you have a story the board will listen to. This is the metric that justifies every dollar in your awareness budget.
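With quarterly series like those (the numbers below are invented for illustration), the side-by-side change is only a few lines:

```python
# Hypothetical quarterly series: simulation click rate vs. confirmed email-originated incidents.
click_rate = [0.22, 0.14, 0.09, 0.05]
email_incidents = [18, 14, 9, 7]

def pct_change(series):
    """Relative change from the first to the last data point."""
    return (series[-1] - series[0]) / series[0]

print(f"Click rate change: {pct_change(click_rate):+.0%}")               # about -77%
print(f"Email-originated incident change: {pct_change(email_incidents):+.0%}")  # about -61%
```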
7. Training Engagement Quality
Go beyond completion rates. Measure assessment scores, knowledge retention (tested 30/60/90 days after training), and voluntary engagement with supplemental content. If you offer a cybersecurity awareness training program, look at how employees interact with the material — do they breeze through, or do they engage with scenarios and examples?
Low assessment scores after "completed" training tell you the content isn't landing. High completion with low retention tells you the format needs to change. These quality indicators help you refine content, not just check a box.
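A simple way to express retention decay, assuming you re-test the same material at 30, 60, and 90 days (the scores below are illustrative):

```python
# Hypothetical average assessment scores: immediately after training, then at follow-ups.
scores = {"day_0": 92, "day_30": 84, "day_60": 77, "day_90": 71}

def retention(scores, baseline="day_0"):
    """Express each follow-up score as a fraction of the immediate post-training score."""
    base = scores[baseline]
    return {checkpoint: value / base for checkpoint, value in scores.items() if checkpoint != baseline}

for checkpoint, kept in retention(scores).items():
    print(f"{checkpoint}: {kept:.0%} of initial knowledge retained")
```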
8. Cost Per Risk Reduction
This is the executive-level metric. Take your total program spend — training platform, simulation tools, staff time, content development — and divide it by your measurable risk reduction. You can define risk reduction as the percentage decrease in successful social engineering attacks, the reduction in incident response costs, or the improvement in your composite behavior score.
When you can say "we spent $150,000 on security awareness and reduced email-originated incidents by 55%, saving an estimated $1.2 million in potential breach costs," that's a conversation ender in the best possible way.
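A minimal sketch of that arithmetic; every input is an assumption you replace with your own incident counts and cost estimates:

```python
def program_value(program_cost, incidents_before, incidents_after, avg_cost_per_incident):
    """Back-of-the-envelope value calculation for an awareness program."""
    reduction = (incidents_before - incidents_after) / incidents_before
    avoided_cost = (incidents_before - incidents_after) * avg_cost_per_incident
    return reduction, avoided_cost, avoided_cost / program_cost

reduction, avoided, roi = program_value(
    program_cost=150_000,
    incidents_before=20,           # email-originated incidents last year (assumed)
    incidents_after=9,             # incidents this year (assumed, a 55% drop)
    avg_cost_per_incident=110_000, # your own incident-response cost estimate
)
print(f"Risk reduction: {reduction:.0%}, estimated savings: ${avoided:,.0f}, return: {roi:.1f}x")
```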
What Are Security Awareness Metrics?
Security awareness metrics are quantifiable measurements used to evaluate the effectiveness of an organization's security awareness training program. They go beyond simple completion tracking to measure actual employee behavior change, risk reduction, and program ROI. Key metrics include phishing simulation click rates, incident report rates, time to report, repeat offender rates, and correlation with real security incidents. Effective metrics help CISOs demonstrate that training investments translate to measurable reductions in human-caused security breaches.
Building a Measurement Dashboard That Tells a Story
Numbers without narrative are noise. Your security awareness metrics dashboard should tell a clear story: here's where we started, here's what we did, here's where we are, and here's the business impact.
I recommend structuring your dashboard in three tiers:
- Executive tier: Cost per risk reduction, actual incident correlation, and composite behavior score trend. Three numbers. That's it. This is what the board sees.
- Management tier: Department-level phishing click rates, report rates, repeat offender percentages, and training engagement quality. This is what department heads use to drive accountability.
- Operational tier: Individual simulation results, time-to-report distributions, assessment scores, knowledge retention decay curves, and MFA enrollment rates. This is what your security team uses to tune the program.
Each tier answers a different question. Executives ask "is this worth the investment?" Managers ask "where are my gaps?" Operators ask "what do I need to fix this week?"
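If you build the dashboard yourself, even a plain configuration mapping keeps the tiers honest; the metric names below are placeholders for whatever your tooling actually exposes:

```python
# Hypothetical dashboard definition: each tier answers one audience's question.
DASHBOARD_TIERS = {
    "executive": {
        "audience": "board",
        "question": "Is this worth the investment?",
        "metrics": ["cost_per_risk_reduction", "incident_correlation", "behavior_score_trend"],
    },
    "management": {
        "audience": "department heads",
        "question": "Where are my gaps?",
        "metrics": ["dept_click_rate", "dept_report_rate", "repeat_offender_pct", "engagement_quality"],
    },
    "operational": {
        "audience": "security team",
        "question": "What do I need to fix this week?",
        "metrics": ["individual_results", "time_to_report_distribution",
                    "assessment_scores", "retention_decay", "mfa_enrollment"],
    },
}

for tier, spec in DASHBOARD_TIERS.items():
    print(f"{tier}: {spec['question']} ({len(spec['metrics'])} metrics)")
```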
The Benchmarking Trap: Your Numbers vs. Industry Averages
I see teams get obsessed with industry benchmarks. "The average phishing click rate is 11%, and ours is 9%, so we're fine." That's dangerous thinking. Your threat model isn't average. Your adversaries don't care about industry medians.
Use benchmarks as a sanity check, not a target. The 2024 Verizon DBIR provides useful data on human-element breaches, but your target should be continuous improvement against your own baseline. If you went from 20% to 9%, that's excellent. If you've been stuck at 9% for three quarters, you have a plateau problem regardless of what the industry average says.
Common Mistakes That Corrupt Your Data
Running the Same Simulation Templates
If you send the same "package delivery" phishing template every quarter, your click rate will drop — not because employees got smarter, but because they memorized that specific email. Rotate templates. Vary difficulty. Include business email compromise scenarios, not just mass-phish templates. Your simulations should mirror what real threat actors actually send.
Punishing Reporters Instead of Rewarding Them
Nothing kills your report rate faster than punishing people who report false positives. If an employee reports a legitimate email as suspicious, that's a good thing. They're being cautious. The moment you make reporting feel risky, your most valuable metric — report rate — collapses.
Measuring Annually Instead of Continuously
Annual security awareness training with a single annual survey produces a single data point. You can't identify trends, seasonal patterns, or the impact of specific interventions with one measurement per year. Monthly simulations and quarterly knowledge assessments give you the data density you need to make real decisions.
Ignoring the "Why" Behind the Numbers
A sudden spike in click rates after months of improvement isn't necessarily a failure. Maybe you introduced a harder template. Maybe a new department onboarded 50 employees who haven't been through training yet. Context matters. Always pair quantitative metrics with qualitative analysis.
Tying Metrics to Regulatory and Framework Requirements
If your organization operates under NIST guidance, HIPAA, PCI DSS, or CMMC, your security awareness metrics serve double duty. NIST's Cybersecurity Framework (NIST CSF) emphasizes awareness and training under the Protect function. CISA's guidance on cybersecurity best practices reinforces the need for measurable training programs.
Map your metrics to specific framework controls. When an auditor asks how you satisfy NIST PR.AT-1 (awareness and training), you don't hand them a completion certificate. You hand them a trend report showing behavioral improvement across eight distinct KPIs over 12 months. That's the difference between checking a box and demonstrating security maturity.
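An illustrative mapping, not an authoritative crosswalk; verify the control identifiers against the framework revision your auditor actually uses:

```python
# Illustrative mapping from the KPIs in this post to controls they help evidence.
# Identifiers are examples only; confirm them against your applicable framework versions.
CONTROL_EVIDENCE = {
    "NIST CSF PR.AT-1": ["training_engagement_quality", "phishing_click_rate_trend"],
    "PCI DSS 12.6": ["training_engagement_quality", "report_rate"],
    "HIPAA 164.308(a)(5)": ["phishing_click_rate_trend", "repeat_offender_rate"],
}

def evidence_for(control):
    """Return the metric trend reports to attach for a given control."""
    return CONTROL_EVIDENCE.get(control, [])

print(evidence_for("NIST CSF PR.AT-1"))
```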
Making the Business Case: From Metrics to Money
Every security leader eventually faces the budget conversation. Here's the formula I use, and it works.
Start with IBM's breach cost data: $4.88 million average in 2024. Multiply by the probability of a breach in your industry in any given year (the Ponemon Institute publishes this data). That gives you your annualized breach exposure. Now show how your training program reduced the likelihood of a human-caused breach by a measurable percentage. The delta is your program's value.
For example: if your annualized breach exposure is $2 million and your program demonstrably reduced human-element risk by 40%, you've delivered $800,000 in risk reduction. If you spent $200,000 on the program, that's a 4:1 return. CFOs understand that math.
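The same math as a sketch; the breach probability below is a placeholder chosen to reproduce the $2 million exposure in the example, not a published figure:

```python
def awareness_roi(avg_breach_cost, annual_breach_probability, risk_reduction, program_cost):
    """Annualized exposure times demonstrated risk reduction, compared with program spend."""
    exposure = avg_breach_cost * annual_breach_probability  # annualized breach exposure
    value = exposure * risk_reduction                        # risk reduction delivered
    return exposure, value, value / program_cost

exposure, value, ratio = awareness_roi(
    avg_breach_cost=4_880_000,       # IBM 2024 global average
    annual_breach_probability=0.41,  # placeholder; substitute your industry's figure
    risk_reduction=0.40,             # measured drop in human-element risk
    program_cost=200_000,
)
print(f"Exposure: ${exposure:,.0f}, value delivered: ${value:,.0f}, return: {ratio:.0f}:1")
```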
Pair this with your trending security awareness metrics — declining click rates, rising report rates, fewer actual incidents — and you've built a case that survives scrutiny.
Start Measuring What Matters This Quarter
If you're still relying on completion percentages to justify your security awareness program, 2025 is the year to fix that. Pick three metrics from this list, establish baselines this quarter, and commit to monthly measurement.
Start with phishing simulation click rate, report rate, and repeat offender rate. Those three alone will transform how you understand human risk in your organization. Layer in composite scoring and incident correlation as your program matures.
The tools exist. The frameworks exist. The data exists. The only thing standing between you and a defensible, measurable security awareness program is the decision to start tracking what actually matters — and stop counting things that don't.