LimitedView
Research · 16 March 2026 · 7 min read

Measuring Security Culture: Beyond Phishing Click Rates

Phishing click rates dominate security culture measurement. They measure threat recognition in artificial conditions, not the automatic behaviour patterns that actually determine security outcomes.

Security culture measurement has converged on a small number of proxy metrics that are easy to collect, easy to report upward, and largely disconnected from the behavioural outcomes they are supposed to predict. Phishing simulation click rates are the most widely used. They are also among the least predictive of real-world security behaviour.

The problem is not that phishing simulations are useless. They are useful for measuring one specific thing: whether an employee can recognise a fabricated phishing attempt when they know they are operating in an environment where simulations occur. What they do not measure is what employees do when they encounter an unexpected, sophisticated, real threat at a moment of distraction or pressure.

How Do You Measure Security Culture?

Security culture is accurately measured through a combination of behavioural indicators, process adherence metrics, and leading indicators that reflect the conditions under which behaviour occurs, not simply whether employees passed a test.

Behavioural indicators include voluntary incident reporting rates, credential hygiene adherence measured at the infrastructure level, and compliance with security procedures in ambiguous situations where deviation would be undetected. These metrics are harder to collect than simulation click rates. They are substantially more predictive of how an organisation will perform under real incident conditions.

Process adherence metrics include the mean time from employee detection to escalation (how quickly do employees who notice something unusual report it?), the rate of approved access exceptions requested versus shadow IT usage, and the proportion of incidents first identified by non-security employees versus by security tooling. An organisation where non-technical employees routinely surface incidents before the SIEM does has a markedly different culture from one where the security team is the only reliable detection mechanism.
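As a minimal sketch of how two of these metrics could be computed, assuming a hypothetical incident log (the record fields and `first_detector` values are illustrative, not a real schema):

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records; field names are assumptions for illustration.
incidents = [
    {"detected": datetime(2026, 3, 1, 9, 0),   "escalated": datetime(2026, 3, 1, 9, 20),  "first_detector": "employee"},
    {"detected": datetime(2026, 3, 2, 14, 0),  "escalated": datetime(2026, 3, 2, 16, 0),  "first_detector": "siem"},
    {"detected": datetime(2026, 3, 3, 11, 30), "escalated": datetime(2026, 3, 3, 11, 45), "first_detector": "employee"},
]

def mean_time_to_escalate(records):
    """Mean minutes between detection of something unusual and formal escalation."""
    gaps = [(r["escalated"] - r["detected"]).total_seconds() / 60 for r in records]
    return mean(gaps)

def non_security_detection_share(records):
    """Proportion of incidents first surfaced outside the security tooling."""
    outside = sum(1 for r in records if r["first_detector"] != "siem")
    return outside / len(records)
```

In a real programme these records would come from a ticketing or SOAR system; the point is only that both metrics reduce to simple aggregations once detection and escalation events are timestamped.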

Leading indicators include training recency relative to current threat categories, repeat incident rates by cohort, and the gap between incident detection and training deployment. LimitedView's research identifies this last metric as particularly predictive: organisations with gaps under 72 hours between security events and related training deployments show significantly lower repeat incident rates in subsequent quarters.
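The training-deployment-lag metric can be sketched the same way, assuming hypothetical pairs of incident and training timestamps (the 72-hour threshold is the one cited above):

```python
from datetime import datetime

# Hypothetical (security event, related training deployed) timestamp pairs.
deployments = [
    (datetime(2026, 3, 1, 10, 0), datetime(2026, 3, 3, 9, 0)),   # 47-hour lag
    (datetime(2026, 3, 5, 8, 0),  datetime(2026, 3, 12, 8, 0)),  # 168-hour lag
]

def training_lag_hours(incident_ts, training_ts):
    """Hours between a security event and the related training deployment."""
    return (training_ts - incident_ts).total_seconds() / 3600

def within_72h_rate(pairs):
    """Share of trainings deployed within 72 hours of the triggering event."""
    hits = sum(1 for inc, tr in pairs if training_lag_hours(inc, tr) <= 72)
    return hits / len(pairs)
```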

What Metrics Indicate Strong Security Culture?

Strong security culture is indicated by a specific combination of metrics that collectively demonstrate behavioural change, not just awareness. No single metric is sufficient.

The most reliable positive indicator is a declining repeat incident rate by category over rolling 90-day periods. When employees who have been exposed to a category of threat, and have received timely, relevant training following that exposure, subsequently encounter the same threat category at a materially lower rate of successful exploitation, this indicates that training has produced genuine behaviour change rather than temporary awareness.
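A repeat incident rate by category over a 90-day window can be computed along these lines, assuming a hypothetical per-employee incident log (employee IDs and categories are invented for illustration):

```python
from datetime import date, timedelta

# Hypothetical per-employee incident log: (employee_id, category, date).
log = [
    ("alice", "phishing", date(2026, 1, 5)),
    ("alice", "phishing", date(2026, 2, 20)),  # repeat within 90 days
    ("bob",   "phishing", date(2026, 1, 10)),  # no repeat
    ("carol", "usb",      date(2026, 1, 8)),
    ("carol", "usb",      date(2026, 6, 1)),   # repeat, but outside the 90-day window
]

def repeat_rate_90d(records, category):
    """Share of exposed employees with another incident in the same
    category within 90 days of their first exposure."""
    firsts, repeats = {}, set()
    for emp, cat, d in sorted(records, key=lambda r: r[2]):
        if cat != category:
            continue
        if emp not in firsts:
            firsts[emp] = d
        elif d - firsts[emp] <= timedelta(days=90):
            repeats.add(emp)
    return len(repeats) / len(firsts)
```

Tracking this value per category on a rolling basis, and comparing cohorts that did and did not receive timely training after first exposure, is what separates genuine behaviour change from temporary awareness.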

LimitedView's research across 847 organisations and 650,000 employees shows that organisations with incident-triggered training delivery programmes achieve an average 64% reduction in repeat incidents within 90 days of initial exposure and training. Organisations with scheduled training programmes and equivalent content quality show no statistically significant improvement in repeat incident rates over the same period.

A second strong indicator is a high voluntary reporting rate. Specifically, the proportion of suspicious activity that employees report before it escalates to a detectable security event. Organisations with strong security cultures are characterised by employees who treat reporting as a default rather than an exceptional action. Measuring reporting rates requires a frictionless reporting mechanism and a culture where reports are acknowledged and acted upon rather than generating audit consequences for the reporter.

A third indicator is the breadth of detection: what proportion of incidents are first surfaced by employees outside the security team? When the answer is substantial, it indicates that security awareness has become genuinely distributed rather than residing primarily in the security function.

Why Are Phishing Click Rates a Poor Measure of Security Culture?

Phishing click rates are a poor measure of security culture because they measure recognition performance under artificial conditions rather than automatic behaviour under real ones.

When an employee knows their organisation runs phishing simulations, they are operating with a degree of vigilance that does not characterise their normal working state. The specific cognitive mode activated when checking email at 7am on a mobile device while managing competing priorities is fundamentally different from the mode activated when engaging carefully with a known simulation programme.

Phishing simulations test deliberate, consciously applied security knowledge. Real phishing attacks exploit the gap between that deliberate mode and the automatic processing that governs most routine decisions. An employee who correctly identifies 100% of simulation attempts may still click a sophisticated real-world spear-phishing link if it arrives at a moment when automatic processing is dominant and deliberate scrutiny has not been triggered.

Security culture, properly defined, is not what employees do when they are being tested. It is the automatic behaviour patterns they exhibit when they are not. Measuring it accurately requires observation of behaviour in real conditions, not simulated ones.

This matters operationally because security programmes optimised against phishing click rates can show sustained metric improvement while actual vulnerability to phishing remains unchanged or increases. The metric is improving because employees are more alert during the testing period. Behaviour outside that period is not being measured and therefore not being improved.

A Framework for Meaningful Measurement

Moving beyond phishing click rates does not require abandoning simulation-based tools. It requires treating them as one signal among several rather than as the primary indicator of programme effectiveness.

A more complete measurement framework tracks repeat incident rates by category and cohort as the outcome measure; voluntary reporting rates and mean time to report as the behavioural measure; training delivery lag after incidents as the infrastructure measure; and credential hygiene indicators such as password reuse rates and MFA adoption rates as the process measure. These metrics together describe an organisation's actual security behaviour profile rather than its simulation performance.
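The four metric families above can be collected into a single behaviour profile; this is only an illustrative aggregation, and the field names are assumptions rather than a standard schema:

```python
def behaviour_profile(repeat_rate_90d, voluntary_report_rate, mean_time_to_report_min,
                      training_lag_hours, mfa_adoption_rate, password_reuse_rate):
    """Group the four metric families into one security behaviour profile.
    All field names are illustrative assumptions, not a standard schema."""
    return {
        "outcome":        {"repeat_incident_rate_90d": repeat_rate_90d},
        "behaviour":      {"voluntary_report_rate": voluntary_report_rate,
                           "mean_time_to_report_min": mean_time_to_report_min},
        "infrastructure": {"training_lag_hours": training_lag_hours},
        "process":        {"mfa_adoption_rate": mfa_adoption_rate,
                           "password_reuse_rate": password_reuse_rate},
    }

profile = behaviour_profile(0.12, 0.60, 45, 36, 0.92, 0.08)
```

Reporting these together, rather than any one in isolation, is what distinguishes a behaviour profile from a simulation scorecard.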

LimitedView's research indicates that the organisations with the strongest actual security outcomes, as measured by incident rates, repeat incident rates, and mean time to contain, are not necessarily those with the lowest phishing simulation click rates. They are consistently those with the shortest gap between security events and training deployment, and the highest voluntary reporting rates. These metrics reflect the conditions under which security culture is built: relevant information delivered when the brain is most prepared to act on it, within an environment where reporting is normalised.

Security culture measurement should start from the outcomes it is trying to predict and work backwards to the metrics that most reliably predict them. Phishing click rates fail this test. Repeat incident rates, reporting rates, and training delivery infrastructure metrics pass it.
