In 2022, Medibank — one of Australia's largest health insurers — suffered a breach that exposed 9.7 million customer records. The root cause? Compromised credentials. A single employee's stolen login led to one of the most damaging data breaches in Australian history. Medibank had security awareness training in place. But they couldn't tell you whether it was actually working.
That's the problem I see over and over again. Organizations run training, check the compliance box, and move on. Nobody asks the hard question: how to measure security awareness training in a way that actually tells you whether your people are getting better at stopping threats. If you can't measure it, you can't improve it — and your organization stays exposed.
This post gives you the specific metrics, methods, and benchmarks you need to turn your security awareness program from a checkbox exercise into a measurable line of defense. No fluff. Just the numbers and processes that work.
Why Most Organizations Get Measurement Wrong
Here's what typically happens. A company rolls out annual security awareness training. They track one metric: completion rate. Ninety-five percent of employees finish the course. Leadership gets a green slide in the quarterly deck. Everyone feels great.
Then a threat actor sends a well-crafted spear phishing email, and someone in accounting wires $200,000 to a fraudulent account. The completion rate didn't prevent anything.
The Verizon 2023 Data Breach Investigations Report found that 74% of all breaches involved the human element — including social engineering, errors, and misuse. That number hasn't meaningfully budged in years despite most organizations claiming they train their employees. The gap isn't in having training. It's in knowing whether the training changes behavior. That requires measurement beyond attendance.
The Metrics That Actually Tell You Something
If you want to know how to measure security awareness training with real precision, you need a layered set of metrics. No single number tells the whole story. Here are the ones I track in every program I advise on.
1. Phishing Simulation Click Rates
This is the most direct behavioral metric you have. Run regular phishing simulations and track what percentage of employees click malicious links, open suspicious attachments, or submit credentials on fake landing pages. Industry benchmark data puts the median click rate on phishing simulations across organizations at around 10-15%, but top-performing programs push this below 5%.
Track this monthly. Look at the trendline, not individual snapshots. If your click rate isn't dropping over six months, your training content isn't working. Platforms like the phishing awareness training at phishing.computersecurity.us let you run targeted simulations and measure exactly this.
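To make the trendline concrete, here is a minimal sketch of monthly click-rate tracking. It assumes simulation results arrive as simple per-campaign records; the field names (`month`, `delivered`, `clicked`) are illustrative, not tied to any specific platform's export format.

```python
from collections import defaultdict

def monthly_click_rates(results):
    """Compute the click rate per month from simulation campaign records.

    Each record is a dict with illustrative fields:
    'month' (e.g. '2024-03'), 'delivered' (int), 'clicked' (int).
    """
    delivered = defaultdict(int)
    clicked = defaultdict(int)
    for r in results:
        delivered[r["month"]] += r["delivered"]
        clicked[r["month"]] += r["clicked"]
    # Percentage of delivered simulation emails that were clicked, per month
    return {m: round(100 * clicked[m] / delivered[m], 1)
            for m in sorted(delivered)}

campaigns = [
    {"month": "2024-01", "delivered": 500, "clicked": 90},
    {"month": "2024-02", "delivered": 500, "clicked": 70},
    {"month": "2024-03", "delivered": 500, "clicked": 55},
]
rates = monthly_click_rates(campaigns)
# A falling trendline (18.0 -> 14.0 -> 11.0) is the signal that matters,
# not any single month's number
```

The point of aggregating by month rather than by campaign is exactly the trendline argument above: individual campaigns vary in difficulty, so only the multi-month slope tells you whether training is working.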
2. Reporting Rates
Click rates tell you who falls for attacks. Reporting rates tell you who fights back. This metric tracks how many employees actively report suspicious emails through your phishing report button or IT helpdesk.
A mature security culture doesn't just avoid threats — it surfaces them. I've seen organizations where reporting rates jump from 8% to over 40% within a year of consistent training. That's a force multiplier for your security operations team because every reported phish is a potential incident caught early.
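The reporting rate itself is a simple ratio, but it is worth computing against delivered emails rather than clicked ones, so the two metrics stay comparable. A quick sketch (the numbers are illustrative):

```python
def reporting_rate(delivered, reported):
    """Percent of simulated-phish recipients who reported the email."""
    return round(100 * reported / delivered, 1)

# Example: 500 simulated phish delivered
early = reporting_rate(500, 42)    # 8.4 -- a typical starting point
mature = reporting_rate(500, 210)  # 42.0 -- the kind of jump described above
```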
3. Time to Report
Speed matters. If an employee reports a phishing email 48 hours after receiving it, the damage may already be done. Track the median time between email delivery and employee report. Aim to get this under 10 minutes for simulated campaigns. This metric directly correlates with your ability to contain real credential theft attempts before they escalate.
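Computing the median delivery-to-report delay is straightforward once you have both timestamps per report. A sketch, assuming ISO-format timestamp pairs (the data shape is illustrative):

```python
from datetime import datetime
from statistics import median

def median_minutes_to_report(events):
    """Median minutes between email delivery and the employee's report.

    Each event is a (delivered_at, reported_at) pair of ISO timestamps.
    """
    deltas = []
    for delivered_at, reported_at in events:
        d = datetime.fromisoformat(delivered_at)
        r = datetime.fromisoformat(reported_at)
        deltas.append((r - d).total_seconds() / 60)
    return median(deltas)

events = [
    ("2024-03-01T09:00", "2024-03-01T09:04"),
    ("2024-03-01T09:00", "2024-03-01T09:12"),
    ("2024-03-01T09:00", "2024-03-01T10:30"),
]
mins = median_minutes_to_report(events)  # 12.0 -- above the 10-minute target
```

The median is the right summary here, not the mean: one employee reporting a day late would otherwise swamp the signal from everyone who reported within minutes.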
4. Repeat Offender Rate
Some employees click on every simulation you send. These are your highest-risk individuals. Track what percentage of your workforce has failed two or more phishing simulations in the past 12 months. This group needs targeted remediation training, not just another generic module. If your repeat offender rate stays flat, you're not reaching the people who need help most.
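Here is one way to compute the repeat offender rate, assuming you can pull a flat list of failure events (one employee ID per failed simulation) for the trailing 12 months; the data shape and IDs are illustrative:

```python
from collections import Counter

def repeat_offender_rate(failures, workforce_size, threshold=2):
    """Percent of the workforce that failed `threshold`+ simulations.

    `failures` is a list of employee IDs, one entry per failed
    simulation over the trailing 12 months.
    """
    counts = Counter(failures)
    repeat = sum(1 for c in counts.values() if c >= threshold)
    return round(100 * repeat / workforce_size, 1)

# emp-7 failed three times, emp-2 twice, emp-9 once
failures = ["emp-7", "emp-2", "emp-7", "emp-9", "emp-2", "emp-7"]
rate = repeat_offender_rate(failures, workforce_size=50)  # 4.0
```

The `Counter` also gives you the remediation list for free: anyone at or above the threshold goes into the targeted-training queue, not back into the generic module.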
5. Knowledge Assessment Scores
Behavioral metrics are king, but knowledge assessments still have a role. Short quizzes after training modules measure whether employees can identify social engineering tactics, understand data handling policies, and recognize the signs of ransomware. Track average scores and pass rates over time. A good baseline program like the cybersecurity awareness training at computersecurity.us includes assessments built into the curriculum.
6. Incident Metrics
Connect your training data to real-world security incidents. Track how many confirmed phishing incidents, credential compromises, and malware infections occur per quarter. Compare these numbers before and after training rollouts. This is the metric your CISO cares about most because it ties directly to risk reduction and the cost of a data breach — which IBM's 2023 report pegged at an average of $4.45 million globally.
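A simple before/after comparison of quarterly incident counts is often enough to make this case to leadership. A sketch with illustrative counts:

```python
from statistics import mean

def quarterly_incident_change(before, after):
    """Percent change in mean quarterly incidents before vs. after rollout."""
    b, a = mean(before), mean(after)
    return round(100 * (a - b) / b, 1)

# Confirmed phishing incidents per quarter (illustrative counts)
change = quarterly_incident_change(before=[14, 12, 16, 14],
                                   after=[10, 8, 9, 9])
# -35.7 -- a 35.7% drop in mean quarterly incidents after the rollout
```

Keep the caveat in mind when presenting this: incident counts move for many reasons (new email filtering, threat landscape shifts), so pair this number with your simulation metrics rather than claiming training caused the whole drop.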
How to Build a Measurement Framework from Scratch
Metrics without structure are just numbers. Here's the framework I use to turn those numbers into a decision-making tool.
Step 1: Establish Your Baseline
Before you change anything, measure where you are right now. Run a baseline phishing simulation without any prior warning. Send a knowledge assessment. Pull your last 12 months of security incident data. Document everything. This is your starting point, and every future improvement gets measured against it.
Step 2: Define Target Benchmarks
Set specific, time-bound goals. For example: reduce phishing click rates from 18% to under 8% within 12 months. Increase reporting rates to 30% within six months. Cut repeat offenders by half within a year. These benchmarks need to be realistic but aggressive enough to force real program improvement.
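Time-bound goals also tell you the pace you need to sustain. A trivial but useful calculation, using the example goal above:

```python
def required_monthly_drop(current_rate, target_rate, months):
    """Average percentage-point drop per month needed to hit a benchmark."""
    return round((current_rate - target_rate) / months, 2)

# The example goal above: 18% click rate down to under 8% in 12 months
drop = required_monthly_drop(18.0, 8.0, 12)  # 0.83 points per month
```

If two consecutive quarterly reviews show you falling behind that pace, that is your trigger to change content or frequency rather than waiting for the deadline.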
Step 3: Measure Monthly, Report Quarterly
Phishing simulations should run at least monthly, with different difficulty levels and social engineering techniques. Compile results into a quarterly report for leadership. Show trendlines, not just point-in-time numbers. Include department-level breakdowns because risk isn't evenly distributed — your finance and HR teams face different threats than engineering.
Step 4: Segment Your Data
Aggregate numbers hide problems. Break your metrics down by department, role, location, and seniority level. In my experience, executive teams and new hires are consistently the highest-risk groups. You can't fix what you can't see. Segmentation reveals where your training is working and where it's failing.
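Segmentation is just a group-by over per-recipient results. A minimal sketch, assuming one record per recipient with a segment field and a 0/1 click flag (the field names are illustrative):

```python
from collections import defaultdict

def click_rate_by_segment(results, key="department"):
    """Break the click rate down by a segment field (department, role, etc.)."""
    delivered = defaultdict(int)
    clicked = defaultdict(int)
    for r in results:
        delivered[r[key]] += 1
        clicked[r[key]] += r["clicked"]  # 1 if the employee clicked, else 0
    return {seg: round(100 * clicked[seg] / delivered[seg], 1)
            for seg in delivered}

results = [
    {"department": "finance", "clicked": 1},
    {"department": "finance", "clicked": 1},
    {"department": "finance", "clicked": 0},
    {"department": "engineering", "clicked": 0},
    {"department": "engineering", "clicked": 0},
    {"department": "engineering", "clicked": 1},
    {"department": "engineering", "clicked": 0},
]
rates = click_rate_by_segment(results)
# finance at 66.7 vs engineering at 25.0 -- the blended average hides this gap
```

Swap `key` for "role", "location", or "seniority" to produce the other cuts; the blended average across these seven recipients would mask a finance team clicking at more than twice the engineering rate.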
Step 5: Close the Loop with Targeted Remediation
When someone fails a phishing simulation, what happens next? If the answer is "nothing," your measurement program is incomplete. Effective programs trigger immediate micro-training for employees who click. Track whether remediation actually changes behavior by monitoring whether those individuals pass the next simulation. This closed-loop approach is what separates mature programs from compliance exercises.
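One way to quantify whether the loop is actually closing: for every failure that was followed by another simulation, check whether the next result was a pass. A sketch, assuming per-employee outcome histories in chronological order (the data shape is illustrative):

```python
def remediation_effectiveness(history):
    """Did employees who failed (and got micro-training) pass the next test?

    `history` maps employee ID -> chronologically ordered list of
    simulation outcomes, 'fail' or 'pass'. We look at every fail that
    has a follow-up simulation and count how often the next result
    is a pass.
    """
    followups = passes = 0
    for outcomes in history.values():
        for i, outcome in enumerate(outcomes[:-1]):
            if outcome == "fail":
                followups += 1
                passes += outcomes[i + 1] == "pass"
    return round(100 * passes / followups, 1) if followups else None

history = {
    "emp-1": ["fail", "pass", "pass"],
    "emp-2": ["fail", "fail", "pass"],
    "emp-3": ["pass", "fail", "pass"],
}
rate = remediation_effectiveness(history)  # 75.0 -- 3 of 4 fails became passes
```

If this number sits near your overall pass rate, your micro-training is adding nothing; if it sits well above it, the loop is working.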
What Is a Good Phishing Click Rate?
This is one of the most common questions I get, so here's a direct answer. A good phishing simulation click rate depends on your industry and program maturity, but here are general benchmarks based on industry data:
- Baseline (no prior training): 25-35% click rate is typical
- After 6 months of training: 15-20% is average
- Mature program (12+ months): Under 5% is the target
- Reporting rate in mature programs: 40%+ of recipients report the simulated phish
If your organization is above 20% after a year of active training, something fundamental needs to change — your content, your frequency, or your approach to the employees who keep failing.
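The benchmarks above can be encoded as a simple classifier for your quarterly report. This is a sketch of one possible banding, using only the thresholds listed above; the band labels are my own shorthand, not an industry standard:

```python
def benchmark_band(click_rate, months_of_training):
    """Place a click rate against the rough benchmarks listed above."""
    if months_of_training == 0:
        # 25-35% is the typical untrained baseline
        return "baseline" if click_rate <= 35 else "above baseline norms"
    if months_of_training >= 12:
        if click_rate < 5:
            return "mature-program target met"
        if click_rate > 20:
            return "fundamental change needed"
        return "progressing"
    # 15-20% is average after roughly six months
    return "on track" if click_rate <= 20 else "behind"

band = benchmark_band(click_rate=22.0, months_of_training=14)
# "fundamental change needed" -- above 20% after a year of training
```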
Connecting Awareness Metrics to Zero Trust
Measuring security awareness training doesn't exist in a vacuum. It feeds directly into your broader security posture. Organizations moving toward a zero trust architecture should use awareness metrics as one input into their risk scoring models.
An employee with a high repeat offender score might warrant additional multi-factor authentication requirements, restricted access to sensitive systems, or more frequent credential rotation. CISA's Zero Trust Maturity Model explicitly calls out workforce training as a foundational pillar; see that model for the full framework. Your awareness metrics give you the data to make risk-based access decisions.
The $4.45M Argument for Better Measurement
When you present these metrics to leadership, frame them in financial terms. IBM's 2023 Cost of a Data Breach report found that organizations with high levels of security awareness training saved an average of $232,867 per breach compared to those without. That number alone justifies the investment in proper measurement.
But the real ROI story is in prevention. Every phishing email your employees report instead of click is a potential incident avoided. Every percentage point drop in your click rate represents fewer credential theft events, fewer ransomware infections, and fewer late-night calls to your incident response team.
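For the board deck, a back-of-the-envelope cost-avoidance estimate helps. To be clear about the assumptions: the click-to-incident conversion rate and per-incident cost below are illustrative placeholders, not figures from the IBM report — plug in your own incident data before presenting.

```python
def estimated_cost_avoidance(baseline_clicks, current_clicks,
                             incident_conversion=0.01,
                             avg_incident_cost=50_000):
    """Rough annual cost-avoidance estimate from a drop in phishing clicks.

    `incident_conversion` (share of clicks that become real incidents)
    and `avg_incident_cost` are illustrative assumptions -- replace
    them with your organization's own incident data.
    """
    avoided_clicks = baseline_clicks - current_clicks
    avoided_incidents = avoided_clicks * incident_conversion
    return round(avoided_incidents * avg_incident_cost)

# 1,200 fewer clicks per year across all campaigns
savings = estimated_cost_avoidance(baseline_clicks=2000, current_clicks=800)
# 600000 -- under these placeholder assumptions
```

The exact dollar figure matters less than the structure: it forces the conversation from "people completed training" to "here is the risk we removed."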
How to Present Metrics to the Board
Boards don't want dashboards with 20 numbers. They want three things: Are we getting better? How do we compare to our industry? What's the financial impact? Build your quarterly report around those three questions. Use trendlines showing click rate reduction over time. Reference industry benchmarks from reports like the Verizon DBIR. And tie improvements to estimated cost avoidance using the IBM breach cost data.
Common Measurement Mistakes I See Constantly
After years of working with organizations on this exact problem, these are the mistakes that keep showing up.
Mistake 1: Only Measuring Annually
Annual training with an annual phishing test tells you almost nothing. Human behavior changes with reinforcement, not one-time events. The NIST Cybersecurity Framework emphasizes continuous improvement for a reason. Monthly simulations and quarterly assessments are the minimum cadence that produces actionable data.
Mistake 2: Using the Same Phishing Template Every Time
If you send the same "your password is expiring" email every month, employees learn to spot that specific email — not phishing in general. Vary your templates. Use current events, internal company themes, and different social engineering techniques like pretexting, authority impersonation, and urgency tactics. Your measurement only reflects real-world resilience if your simulations mirror real-world attacks.
Mistake 3: Shaming Employees Who Fail
I've watched organizations publish "wall of shame" lists of employees who clicked phishing simulations. This destroys trust and kills reporting rates. People stop reporting real suspicious emails because they're afraid of punishment. Measurement should drive education, not humiliation. Frame failures as learning opportunities, and your reporting rates will climb.
Mistake 4: Ignoring Qualitative Feedback
Numbers matter, but so does context. Survey your employees after training. Ask what they found useful, what confused them, and what threats they're most worried about. I've had employees surface real-world phishing attempts during post-training surveys that the security team didn't even know about. That qualitative data is gold.
Building a Culture That Measures What Matters
The organizations that get the best results from security awareness training are the ones that treat measurement as a continuous discipline, not a reporting exercise. They run diverse phishing simulations monthly. They track behavioral metrics alongside knowledge scores. They segment data by department and role. They tie awareness metrics to their broader zero trust and risk management strategies.
Start by establishing your baseline with a comprehensive training program like the cybersecurity awareness training at computersecurity.us. Layer in regular simulations through phishing.computersecurity.us. Then measure relentlessly using the framework above.
Knowing how to measure security awareness training is what separates organizations that survive from those that end up in the next breach headline. The data is there. You just have to decide to track it.