Introduction: The Day the 100% Completion Rate Meant Nothing
For years, I was a true believer.
I built security awareness programs by the book, meticulously crafting modules, deploying them across the enterprise, and chasing the one metric that seemed to matter: 100% employee completion.
I thought a perfect score on a quiz was a proxy for a perfect defense.
I presented my dashboards to leadership with pride, showing neat, green bars that screamed “compliance.” We were secure, I thought, because we were trained.
Then came the breach that changed everything.
It wasn’t sophisticated.
It wasn’t a zero-day exploit unearthed by a shadowy nation-state actor.
It was a simple, elegant piece of social engineering—an urgent, well-crafted email that appeared to come from a trusted vendor.
And it sailed right past an employee who had not only completed but aced their annual training just one week prior.
The damage was significant, but the blow to my professional certainty was catastrophic.
That day, the 100% completion rate felt less like a shield and more like a tombstone marking the death of my assumptions.
That failure forced a painful re-evaluation.
My team and I had done everything “right” according to industry best practices.
We had checked every box.
Yet, when the moment of truth arrived, the system failed.
This wasn’t a failure of technology, but a failure of philosophy.
It led me to question the very foundation of our approach.
The problem wasn’t about finding a better training module or a more engaging video. The problem was that I was asking the wrong question.
I was asking, “How can I teach my employees to be more secure?” The real question, the one that unlocked a new and profoundly more effective approach, was this: Is “training” even the right word for what we need to do?
This report is the story of that journey—a journey from the ashes of a failed compliance-based model to the discovery of a new paradigm rooted in the established sciences of public health and human behavior.
It is a blueprint for moving beyond the illusion of “awareness” and toward the tangible, measurable goal of organizational resilience.
It is about treating your employees not as students to be lectured, but as a population whose collective immunity is your greatest asset.
Part I: The Anatomy of Failure: Deconstructing the “Awareness” Myth
Before we can build a new model, we must perform a clinical dissection of the old one.
The traditional approach to security awareness is not just slightly ineffective; it is systemically flawed, built on a foundation of incorrect assumptions about human psychology and organizational dynamics.
Its continued use, despite overwhelming evidence of its failure, is one of the great unexamined liabilities in modern business.
The Compliance Theater Problem
The fundamental flaw in most security awareness programs lies in their origin story.
They were not born from a desire to change behavior or reduce risk.
They were created to satisfy regulatory requirements and pass audits.1
Mandates from frameworks like the General Data Protection Regulation (GDPR), the Federal Information Security Modernization Act (FISMA), and the Gramm-Leach-Bliley Act (GLBA) all require some form of security awareness training.1
This has given rise to what can only be described as “Compliance Theater”—an elaborate performance designed to convince auditors that security is being taken seriously, while having little to no impact on an organization’s actual risk posture.
This compliance-first approach optimizes for the wrong outcomes.
Success is measured by metrics that are easy to document but meaningless in practice: completion rates, quiz scores, and the number of hours employees spent in training.1
Organizations celebrate achieving 100% training completion, even as their own data shows that these “trained” employees continue to click on phishing links and fall for social engineering attacks at alarming rates.1
The program becomes a box-ticking exercise, a ritual disconnected from the reality of the threat landscape.2
The Knowledge-Behavior Gap: The University of Chicago Bombshell
For decades, security training has been built on the “information deficit model”—the assumption that people engage in risky behavior because they lack knowledge, and that providing them with information will automatically lead to safer actions.1
A groundbreaking 2024 study from the University of Chicago delivered a fatal blow to this theory.
Researchers conducted a large-scale study within a real-world corporate environment to measure the effectiveness of typical training programs.
The findings were stark: there was no significant correlation between how recently an employee had completed their annual cybersecurity training and their ability to avoid simulated phishing attacks.3
Employees who had just undergone training performed no better than those who had not received it for over a year.
As lead researcher Grant Ho stated, “Our study suggests that these requirements are probably not providing good value in their current form”.3
This research provides empirical proof that the knowledge-behavior gap is a chasm.
We can fill an employee’s head with information about how to spot a phishing email, but that knowledge does not reliably translate into the correct action when they are facing a real-world scenario—under pressure, multitasking, and trying to get their actual job done.
The problem is not a lack of awareness; it is a failure of action in the critical moment.
The Psychology of Disengagement
Even if a program is designed with the best intentions, it often runs headlong into the realities of human psychology.
The very format of traditional training creates cognitive and emotional barriers that sabotage its effectiveness.
- Security Fatigue & Cognitive Overload: Modern employees are drowning in information. Their workdays are a constant barrage of emails, instant messages, notifications, and deadlines.4 In this high-pressure environment, a mandatory, generic, hour-long security module is perceived not as a helpful tool, but as an annoying interruption to their “real work”.1 A study from the National Institute of Standards and Technology (NIST) identified “security fatigue,” a state in which employees become so overwhelmed by constant warnings and requirements that they begin to actively avoid security best practices, reuse passwords, or ignore alerts altogether.5 Cognitive Load Theory further explains that bombarding learners with excessive or complex information impairs both attention and memory, making it less likely they will retain or apply the knowledge.5
- The Backfire Effect of Fear and Punishment: Many security programs, particularly phishing simulations, rely on fear-based messaging or punitive measures, shaming employees who click a link.5 While this may seem logical, research shows it is deeply counterproductive. A report from the SANS Institute found that positive reinforcement and reward-based approaches lead to significantly higher participation and long-term behavior change than punitive ones.5 Fear and shame do not foster a learning mindset; they foster avoidance, resentment, and a culture where employees are afraid to report mistakes—the very last thing you want in a security incident.
- Moral Licensing: Perhaps the most insidious psychological flaw is a phenomenon known as “moral licensing.” Research shows that when people perform a virtuous act (like dutifully completing their mandatory security training), they can subconsciously feel licensed to engage in less-virtuous behavior later.1 An employee who has “done their part” by passing the annual quiz may feel more justified in taking a security shortcut later in the week. This effect helps explain the paradoxical finding that some organizations with comprehensive training programs can experience higher rates of security incidents than those with minimal training.1 The training itself creates a false sense of security that lowers vigilance.
These individual failures coalesce into a self-perpetuating cycle.
Regulators mandate training, so organizations create easily auditable, one-size-fits-all programs.
These programs trigger psychological resistance and fail to change behavior.
Incidents continue, which auditors see as evidence that more training is needed, and the cycle begins anew.
The traditional model is not just a broken tool; it is a flawed system designed to perpetuate its own existence.
To escape this loop, we must change the system itself.
Table 1: The Two Models of Security Training
| Dimension | Traditional “Awareness” Model | Behavioral “Resilience” Model |
| --- | --- | --- |
| Primary Goal | Compliance & Audits 1 | Measurable Behavior Change & Risk Reduction 6 |
| Employee Role | Student (Passive Recipient) 7 | Population Member (Active Participant) 8 |
| Core Method | Knowledge Transfer (Annual CBT) 3 | Habit Formation & Environmental Design 9 |
| Key Metrics | Completion Rates, Quiz Scores 1 | Phishing Click Rates, MFA Adoption, Incident Reports 2 |
| Psychology | Information Deficit (Assumes ignorance) 1 | Behavioral Science (Addresses cognitive bias) 11 |
| Expected Outcome | A “Checked Box” 2 | Organizational Herd Immunity 12 |
Part II: The Epiphany: From Classroom to Community Health
The turning point in my own journey came from a place I never expected: the healthcare section of a news site.
I was reading about the devastating impact of ransomware on hospitals, where cyberattacks weren’t just causing financial damage but were forcing ambulances to be diverted, surgeries to be canceled, and patient care to be fundamentally compromised.13
A cyberattack in a German hospital was even linked to a patient’s death.14
It struck me with the force of a revelation.
We had been treating cybersecurity as an IT problem, a technical challenge to be solved with firewalls and software.
But here, in the starkest possible terms, it was being framed as a public health issue.15
An unpatched server wasn’t just a vulnerability; it was a vector for disease.
A successful phish wasn’t just a policy violation; it was a breakdown in community health that put lives at risk.
This reframing didn’t just give me a new metaphor; it gave me an entirely new scientific framework to understand and combat the problem.
A New Science: Cyber-Epidemiology
Epidemiology is the study of how diseases spread within populations and how to control them.
Applying its principles to cybersecurity provides a powerful, data-driven model for analyzing risk.17
Instead of seeing employees as individuals to be educated, we can see our organization as a population, and digital threats as pathogens that propagate through it.
One of the most established tools in epidemiology is the SEIR model, which categorizes a population into four states: Susceptible, Exposed, Infected, and Recovered.18
In a cybersecurity context, this model becomes a dynamic diagnostic tool:19
- Susceptible: These are the members of our population who are vulnerable to a specific digital pathogen. This could be a new hire who hasn’t been onboarded, an employee in a high-risk department like finance, or anyone who hasn’t enabled multi-factor authentication (MFA). They lack immunity.
- Exposed: This group has come into contact with the threat but is not yet infectious. Think of an employee who has received a sophisticated phishing email but has not yet clicked the link or downloaded the attachment. There is a window of opportunity for intervention.
- Infected: This is an individual or system that has been compromised. An employee’s credentials have been stolen, their machine is running malware, or they have been tricked into wiring funds to a fraudulent account. They are now a potential vector to spread the infection to others.
- Recovered: This group has been infected but has undergone remediation and now possesses some level of immunity. This could be an employee whose machine has been cleaned and who has just completed a targeted, real-time micro-training on the specific attack they fell for. They are now more resilient than they were before.
This model shatters the simplistic, binary view of “trained vs. untrained.” It allows us to see our organization as a living, dynamic ecosystem with varying levels of risk and resilience.
It transforms the CISO’s role from that of a school principal to that of an epidemiologist, tasked with monitoring the health of the population and managing outbreaks.
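To make the four states above concrete, here is a minimal sketch of a discrete-time SEIR simulation for a phishing “outbreak” in a 1,000-person organization. The transition rates (how fast lures spread, how often exposed staff click, how quickly compromised accounts are remediated) are illustrative assumptions for this sketch, not empirical values from the cited research.

```python
# Minimal discrete-time SEIR sketch of a phishing "outbreak" in a
# 1,000-person organization. All rates below are illustrative
# assumptions, not empirical values from the cited research.
def simulate_seir(days=30, n=1000, beta=0.6, sigma=0.5, gamma=0.25):
    """beta: transmission rate (lures reaching susceptible staff),
    sigma: rate at which Exposed staff click and become Infected,
    gamma: rate at which Infected accounts are remediated (Recovered)."""
    s, e, i, r = n - 1.0, 0.0, 1.0, 0.0  # one initially compromised account
    history = []
    for _ in range(days):
        new_e = beta * s * i / n   # susceptible staff who receive the lure
        new_i = sigma * e          # exposed staff who fall for it
        new_r = gamma * i          # cleaned machines + targeted micro-training
        s, e, i, r = s - new_e, e + new_e - new_i, i + new_i - new_r, r + new_r
        history.append((round(s), round(e), round(i), round(r)))
    return history

for day, (s, e, i, r) in enumerate(simulate_seir(), start=1):
    if day % 5 == 0:
        print(f"Day {day:2d}: S={s:4d} E={e:3d} I={i:3d} R={r:4d}")
```

Even a toy model like this makes the strategic point visible: raising gamma (faster remediation and micro-training) or lowering beta (fewer lures reaching susceptible staff) flattens the infection curve without requiring every individual to be perfect.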
The Ultimate Goal: Achieving Cybersecurity Herd Immunity
If we are managing a population’s health, what is our strategic goal? It is not 100% vaccination, which is often impossible.
The goal is herd immunity.
As defined by the World Health Organization (WHO), herd immunity (or population immunity) is the indirect protection from an infectious disease that occurs when a sufficient percentage of a population is immune, either through vaccination or prior infection.20
When this threshold is reached, the pathogen cannot find enough susceptible hosts to maintain its chain of transmission.
The spread of the disease slows and stops, which protects the most vulnerable members of the population—those who cannot be vaccinated for medical reasons, such as infants or the immunocompromised.20
For a highly contagious disease like measles, this threshold is around 95%; for polio, it’s about 80%.20
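The arithmetic behind these thresholds is the classic herd-immunity formula: the immune fraction must exceed 1 − 1/R0, where R0 is the pathogen’s basic reproduction number. A minimal sketch, using commonly cited R0 estimates for measles and polio (estimates vary, which is why the computed values bracket the WHO figures above); the “phishing strain” entry is a purely hypothetical illustration of the same math applied to a digital pathogen:

```python
# The classic herd-immunity threshold: the fraction of the population
# that must be immune so each infection causes fewer than one new case.
def herd_immunity_threshold(r0: float) -> float:
    return 1 - 1 / r0

# R0 values for measles and polio are commonly cited estimates;
# "Phishing strain X" is a purely hypothetical digital pathogen.
for pathogen, r0 in [("Measles", 16), ("Polio", 6), ("Phishing strain X", 2.5)]:
    print(f"{pathogen}: R0 = {r0} -> threshold ≈ {herd_immunity_threshold(r0):.0%}")
```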
Applying this concept to cybersecurity is transformative.12
Cybersecurity herd immunity means building such a high level of collective resilience within the organization that a single point of failure—one errant click, one compromised password—does not lead to a catastrophic, enterprise-wide breach.
The attack is contained because the surrounding “cells” (other employees, adjacent systems, connected vendors) are sufficiently “immune” and break the chain of transmission.12
A threat actor might successfully infect one host, but they cannot easily move laterally or escalate privileges because the ecosystem around them is secure.
This is not just a clever turn of phrase; it represents a fundamental paradigm shift.
The old model views the problem through a pedagogical lens: employees are “students,” the goal is knowledge acquisition, and the primary tool is a curriculum.
The public health model views the problem through an epidemiological lens: employees are a “population,” the goal is risk mitigation, and the primary tools are the cybersecurity equivalents of vaccination, hygiene promotion, and environmental sanitation.
This changes the questions we ask.
We stop asking, “Did everyone complete the annual training?” and start asking, “What is the R-naught (reproduction number) of this phishing strain in our organization?” and “What is our population’s overall immunity level to credential theft attacks?” It elevates the practice from a “soft” HR function to a hard, data-driven science focused on managing the health and resilience of the entire organizational ecosystem.
Part III: The Behavioral Science Toolkit: Your New Public Health Interventions
Adopting a public health paradigm gives us a new diagnosis and a new strategic goal.
But to achieve herd immunity, we need effective interventions.
An epidemiologist doesn’t just diagnose an outbreak; they deploy a toolkit of vaccines, hygiene protocols, and sanitation systems.
For our purposes, this toolkit comes directly from the field of behavioral science.
These are the practical, evidence-based methods for changing human behavior at scale.
Pillar 1: Engineering Secure Actions with the Fogg Behavior Model (B=MAP)
Dr. BJ Fogg, a behavior scientist at Stanford University, developed a brilliantly simple and powerful model for understanding why any behavior occurs.
The formula is B = MAP, which stands for Behavior = Motivation + Ability + Prompt.23
For any given behavior to happen, these three elements must converge at the same moment.
If a behavior doesn’t happen, at least one of the three is missing.24
This model gives us a precise diagnostic framework and a set of levers to pull:
- Motivation: This is a person’s desire to perform the action. Fogg identifies three core motivators: Sensation (seeking pleasure, avoiding pain), Anticipation (hope or fear), and Belonging (social acceptance or rejection).25 However, motivation is notoriously fickle and unreliable. It spikes and wanes, making it a poor foundation for designing lasting habits.27 A core principle of Fogg’s work is to not rely on high motivation to drive behavior change.
- Ability: This is a person’s capacity to do the behavior. Fogg argues this is the most important and reliable lever for change. Ability isn’t just about skill; it’s about simplicity. It is a function of our scarcest resources at any given moment, such as time, money, physical effort, or mental effort (“brain cycles”).23 The easier a behavior is to do, the more likely we are to do it, even with low motivation.28 This is the essence of Fogg’s “Tiny Habits” method: to create change, make the desired action radically simple.29 Instead of a vague goal like “Be more vigilant about phishing,” the tiny habit becomes “Hover over one link in an email before clicking.”
- Prompt: This is the cue or trigger that tells you to “do it now.” Without a prompt, nothing happens, even if motivation and ability are high.25 A person might be motivated and able to go for a run, but if their alarm doesn’t go off, they’ll stay in bed. For habits to stick, the prompt must be anchored to an existing, reliable routine in your life.27
The B=MAP model revolutionizes how we approach security failures.
When an employee fails to report a phishing email, we don’t blame them for “lack of awareness.” We use B=MAP as a diagnostic tool.
Was the “report phish” button hard to find or use (a failure of Ability)? Was there no clear, obvious cue reminding them to report it (a failure of Prompt)? Or did they fear being punished for receiving a phish in the first place (a failure of Motivation)? This approach shifts the focus from blaming the user to fixing the system.
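As a thought experiment, that diagnostic logic can be expressed in a few lines of code. The scores and the “action line” threshold below are assumptions made for the sketch; Fogg’s model is qualitative, and these numbers are not from his published work.

```python
# A thought-experiment rendering of B = MAP as a diagnostic. The scores
# and the "action line" threshold are assumptions for this sketch;
# Fogg's model is qualitative, and these numbers are not from his work.
from dataclasses import dataclass

@dataclass
class SecureBehavior:
    motivation: float   # 0..1: does the employee want to do it right now?
    ability: float      # 0..1: how simple is the action in the moment?
    prompted: bool      # is there a cue saying "do it now"?

    ACTION_LINE = 0.25  # assumed point where motivation x ability suffices

    def diagnose(self) -> str:
        if not self.prompted:
            return "Fix the Prompt: add a visible, timely cue (e.g., a report button)."
        if self.motivation * self.ability < self.ACTION_LINE:
            if self.ability <= self.motivation:
                return "Fix Ability: make the action radically simpler."
            return "Fix Motivation: remove fear of punishment; reward reporting."
        return "Behavior should occur; verify with measurement, not assumption."

# An employee wants to report phish (0.7) but the button is buried (0.2):
print(SecureBehavior(motivation=0.7, ability=0.2, prompted=True).diagnose())
```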
Pillar 2: Building Immunity with Atomic Habits and Digital Hygiene
If B=MAP provides the design for a single secure action, James Clear’s work in Atomic Habits provides the methodology for wiring that action into a durable, automatic habit—the cybersecurity equivalent of a long-lasting vaccine.
Clear argues that true behavior change is identity change.
The goal isn’t just to do a habit; it’s to become the type of person who does that habit.10
Every small action is a “vote” for the type of identity you want to build.32
The core mechanism for this is building better systems, not just setting goals.
A key technique is “habit stacking,” which is the practical application of Fogg’s Prompt lever.
You anchor a new tiny habit to a pre-existing one, creating a chain of behaviors that becomes automatic over time.10
This is how we build a foundation of “digital hygiene”—small, routine practices that collectively reduce risk, much like washing your hands protects against disease.33
By combining Fogg’s “start tiny” principle with Clear’s “habit stacking,” we can create simple, powerful recipes for building security immunity one small action at a time.
Table 2: The Digital Hygiene Habit Stack
| Existing Anchor Routine | New Tiny Security Habit | Instant Celebration |
| --- | --- | --- |
| After I pour my morning coffee… | I will open my password manager and check for any alerts.35 | A mental checkmark: “All clear for the day.” |
| After I sit down at my desk… | I will lock my screen when I step away (e.g., Win+L / Ctrl+Cmd+Q).9 | A feeling of professionalism: “My space is secure.” |
| Before I click on a link in an unexpected email… | I will hover my mouse over it to check the true destination.33 | A small nod of satisfaction: “Good catch.” |
| After I finish a video call… | I will check that my camera is physically covered or turned off. | A sense of relief: “My privacy is protected.” |
| When my phone prompts me for a software update… | I will tap “Install Tonight.”35 | A feeling of being responsible: “Keeping my device healthy.” |
Pillar 3: Designing a Secure Environment with Nudge Theory
The final pillar of our behavioral toolkit is designing the environment itself to make secure choices easier.
This is our public sanitation system.
The concept comes from Nobel laureate Richard Thaler and his work on “Nudge Theory.” A nudge is an aspect of the “choice architecture” that alters people’s behavior in a predictable way without forbidding any options or significantly changing their economic incentives.38
It’s about making the desired path the path of least resistance.
Cybersecurity is a field ripe for the application of nudges, which serve as the perfect antidote to security fatigue because they are timely, contextual, and require minimal cognitive effort from the user.40
Examples include:
- Visual Cues: A password strength meter that turns from red to green as a user types provides immediate, intuitive feedback, nudging them toward a stronger password.40 A bright, unmissable banner at the top of an email warning that the message came from an external sender nudges the user to apply a higher level of scrutiny.11 (A sketch of this kind of banner nudge appears after this list.)
- Timely Prompts: A warning that pops up at the moment an employee tries to upload a sensitive document to an unsanctioned cloud storage site is far more effective than a rule buried in a 100-page policy document.39 The nudge is delivered precisely when it is most relevant.
- Social Proof: Humans are social creatures who are heavily influenced by the behavior of their peers. A message that says, “85% of your department has already enabled MFA” leverages this tendency to nudge others toward compliance.38
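As a sketch of how the external-sender banner nudge above might work, here is illustrative logic for flagging external payment-related email. In production this would live in the mail gateway (for example, a transport rule or API-based filter); the keyword list, banner text, and function name are assumptions made for the illustration.

```python
# Illustrative logic for the external-sender / payment-keyword banner.
# In production this lives in the mail gateway (e.g., a transport rule);
# the keyword list, banner text, and function name are assumptions.
import re

PAYMENT_TERMS = re.compile(r"\b(invoice|payment|wire transfer|bank details)\b", re.I)
BANNER = ("CAUTION: This message came from outside your organization and "
          "mentions payments. Verify any request via a known phone number.")

def apply_banner_nudge(sender_domain: str, body: str,
                       internal_domain: str = "example.com") -> str:
    """Prepend a warning banner to external emails that mention payment terms."""
    if sender_domain != internal_domain and PAYMENT_TERMS.search(body):
        return f"{BANNER}\n\n{body}"
    return body

print(apply_banner_nudge("vendor.com", "Please process the attached invoice today."))
```

Note what makes this a nudge rather than a control: the email still arrives and the user can still act on it. The intervention simply raises scrutiny at the exact moment of risk.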
These three pillars—B=MAP, Atomic Habits, and Nudge Theory—are not a menu of options to choose from.
They form a deeply integrated system.
B=MAP is the strategic framework for designing any behavioral intervention.
Atomic Habits is the implementation methodology for teaching an individual how to adopt that behavior and make it part of their identity.
Nudge Theory provides the environmental tactics for making the entire process easier, more intuitive, and more likely to succeed.
You use Fogg to design the “vaccine,” Clear to create the “vaccination schedule,” and Thaler to ensure the “water supply is clean.” This integrated approach is what drives real, sustainable behavior change and builds lasting organizational immunity.
Part IV: The Resilient Organization: A Blueprint for a Behavior-First Security Program
Translating this new paradigm from theory into practice requires a fundamental restructuring of how we approach, staff, and measure security programs.
It demands a shift from producing training content to engineering behavioral outcomes.
This is the blueprint for building that new capability within your organization.
Redefining the Role: From Training Manager to Behavioral Risk Officer
The first and most critical step is recognizing that the person leading this effort is no longer a traditional training manager.
Their job is not to create PowerPoint slides or administer a learning management system (LMS).
Their job is to be an applied behavioral scientist.
This new role might be called a Behavioral Risk Officer or a Human Risk Manager.
- New Responsibilities: This role is focused on data, design, and influence. Core responsibilities include analyzing incident data to identify the top human risks, partnering with security operations and cyber threat intelligence teams to understand threat actor tactics, designing targeted behavioral interventions (habits, nudges, and environmental changes), and measuring the impact of these programs on behavioral metrics, not compliance scores.6 They are responsible for reducing human risk by making security simple and actionable for the workforce.6
- New Skillsets: The ideal candidate for this role possesses a unique blend of skills. While a foundational understanding of security concepts is necessary, deep technical expertise is secondary to a mastery of human behavior. The most successful professionals in this new field often come from backgrounds in communications, marketing, human resources, or psychology.44 The SANS Institute found that while 80% of security awareness professionals currently have technical backgrounds, a lack of communication and engagement skills is their biggest challenge.44 The Behavioral Risk Officer must be an expert communicator, capable of translating complex risks into simple, compelling messages that resonate with a non-technical audience.43
The Modern Security Program in Practice: A Step-by-Step Guide
Implementing a behavior-first program follows a clear, iterative cycle modeled on public health principles.
- Step 1: Epidemiological Assessment (Diagnose). The first step is to abandon the one-size-fits-all model. Instead, use data to identify your highest-risk populations and behaviors. Analyze security incident reports, help desk tickets, and threat intelligence to understand which departments are most susceptible to which specific threats.45 For example, your data might show that the Finance department is disproportionately targeted by Business Email Compromise (BEC) attacks, while your developers are more at risk from credential stuffing attacks due to password reuse on code repositories. This assessment allows you to allocate your resources where they will have the greatest impact.
- Step 2: Design Role-Based Interventions (Treat). For each high-risk group identified in Step 1, design a targeted “treatment plan” using the behavioral science toolkit. This is not a generic, hour-long course. It is a portfolio of small, specific interventions. For the Finance team, it might be a series of nudges that add warning banners to emails containing payment-related keywords. For developers, it might be a pre-commit hook in their coding environment that scans for and blocks API keys (a sketch of such a hook appears after Step 3). The goal is to design interventions that are tiny, easy, and fit seamlessly into the employee’s existing workflow.
- Step 3: Implement & Measure What Matters (Monitor). Roll out these interventions and shift your measurement focus entirely from compliance to behavior. Your new dashboard should not feature “training completion rates.” It should track Key Performance Indicators (KPIs) that directly reflect risk reduction (a small rollup sketch follows this list). These include:
- Reduction in successful phishing clicks across different departments.
- Increase in the rate of employee-reported suspicious emails.
- Adoption rates for critical security tools like MFA and password managers.
- Reduction in time-to-patch for software vulnerabilities on employee devices.
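As a sketch of how these KPIs might be rolled up, here is an illustrative computation of per-department click and report rates from raw phishing-simulation records. The record fields and sample data are hypothetical, invented only for the sketch.

```python
# Illustrative KPI rollup from raw phishing-simulation records.
# The record fields and sample data are hypothetical, for the sketch only.
from collections import defaultdict

records = [  # one row per simulated phish delivered
    {"dept": "Finance", "clicked": True,  "reported": False},
    {"dept": "Finance", "clicked": False, "reported": True},
    {"dept": "Engineering", "clicked": False, "reported": True},
    {"dept": "Engineering", "clicked": False, "reported": False},
]

stats = defaultdict(lambda: {"sent": 0, "clicked": 0, "reported": 0})
for rec in records:
    s = stats[rec["dept"]]
    s["sent"] += 1
    s["clicked"] += rec["clicked"]
    s["reported"] += rec["reported"]

for dept, s in stats.items():
    print(f"{dept}: click rate {s['clicked']/s['sent']:.0%}, "
          f"report rate {s['reported']/s['sent']:.0%}")
```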
This data-driven approach creates a continuous feedback loop.
You can see which interventions are working and which are not, allowing you to refine your approach over time, much like an epidemiologist tracks the efficacy of a public health campaign.
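Returning to the developer intervention from Step 2, here is a minimal pre-commit secret scanner. The patterns are deliberately simple and purely illustrative; real programs typically deploy dedicated tools such as gitleaks or detect-secrets rather than hand-rolled regexes.

```python
#!/usr/bin/env python3
# Minimal sketch of a pre-commit secret scanner (saved as .git/hooks/pre-commit
# and made executable). The regexes are deliberately simple and illustrative;
# real programs typically use dedicated tools such as gitleaks or detect-secrets.
import re
import subprocess
import sys

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
    re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{16,}"),
]

staged = subprocess.run(
    ["git", "diff", "--cached", "--unified=0"],
    capture_output=True, text=True,
).stdout

hits = [
    line for line in staged.splitlines()
    if line.startswith("+") and not line.startswith("+++")
    and any(p.search(line) for p in SECRET_PATTERNS)
]

if hits:
    print("Commit blocked: possible secrets in staged changes:")
    for line in hits:
        print("  ", line[:80])
    sys.exit(1)  # a nonzero exit status aborts the commit
```

Note the behavioral design at work: the check fires at the exact moment of risk (the Prompt), requires zero extra effort from the developer (Ability), and blocks the mistake before it becomes an incident.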
To illustrate the power of this model, consider a common high-risk group: Executive Assistants.
They are frequent targets of CEO fraud, possess broad access to sensitive information and calendars, and operate under immense pressure.45
A traditional program would give them the same generic phishing training as everyone else.
A behavioral program would diagnose their specific risk (urgent, authoritative requests for credentials or fund transfers) and design a targeted intervention.
This could be a simple habit: “After I receive an urgent financial request via email, I will take one deep breath and verbally confirm it with the executive.” This tiny, two-second pause is enough to break the spell of urgency and prevent a costly mistake.
It is this type of targeted, behavior-focused intervention that delivers a real, measurable return on investment.
Table 3: Role-Based Risk & Nudge Matrix
| Role | Primary Human Risk | Traditional (Failed) Approach | Behavioral (Effective) Intervention |
| --- | --- | --- | --- |
| Finance Dept. | Business Email Compromise (BEC) / Invoice Fraud 46 | Annual training on “how to spot fake invoices.” | Nudge: A bright yellow banner automatically appears on all emails containing words like “invoice,” “payment,” or “wire transfer” that originate from an external domain.40 Habit: After receiving a payment change request via email, I will verify it via a known phone number. |
| Executive Admin | CEO Fraud / Credential Harvesting 45 | “Be careful with urgent requests from executives.” | Nudge: A real-time pop-up on the executive’s calendar that says, “Reminder: CEO fraud is common. Please verbally confirm any unusual financial requests.”41 Habit: After receiving an urgent request, I will take one deep breath before acting. |
| Software Developer | Leaking secrets in code / Reusing credentials 46 | A policy document on secure coding standards. | Nudge: A pre-commit git hook that scans for API keys and warns the developer before they can commit. Habit: After cloning a new repo, I will create a .env file for my secrets. |
| HR Dept. | Payroll diversion / Personally Identifiable Information (PII) exfiltration | A module on HIPAA/GDPR compliance. | Nudge: A system that requires two-person approval for any changes to employee bank details. Habit: Before downloading a file with PII, I will confirm my screen is locked and no one is behind me. |
Conclusion: A Culture of Resilience, Not a Mandate for Compliance
The journey from that painful breach to this new understanding has been a profound one.
It required me to abandon long-held beliefs and embrace a new way of thinking.
The conclusion is inescapable: our industry’s traditional approach to the human side of security is broken.
It is a relic of a compliance-driven era that has been proven ineffective by both academic research and real-world experience.
Continuing to invest in programs that we know do not work is not just wasteful; it is a dereliction of our duty as risk leaders.
This report offers a different path.
It is a fundamental paradigm shift—from treating employees as a liability to be managed through lectures, to seeing them as the core of a resilient, collective immune system that can be cultivated through science.
It is about moving from the theater of compliance to the reality of behavior change.
The tools and frameworks outlined here—Cyber-Epidemiology, Herd Immunity, the Fogg Behavior Model, Atomic Habits, and Nudge Theory—are not theoretical constructs.
They are proven, practical, and powerful methodologies for engineering a safer, more resilient organization.
The call to action for leaders is clear.
Stop funding failure.
Stop chasing meaningless metrics.
Stop asking your people to do the impossible and then blaming them when they fail.
Instead, embrace your role as an organizational epidemiologist.
Start building a system that makes security the easy, automatic, and default choice.
Begin the work of designing a culture where secure habits are not a mandate to be endured, but an ingrained part of your organization’s DNA. The goal is not awareness.
The goal is immunity.
Works cited
- Why Most Security Awareness Training Is Actually Making You Less …, accessed on August 10, 2025, https://medium.com/deeptempo/why-most-security-awareness-training-is-actually-making-you-less-secure-5f4770d8f4e0
- Why Security Awareness Programs Fail and How to Improve Them – Defendify, accessed on August 10, 2025, https://www.defendify.com/blog/why-security-awareness-programs-fail/
- New Study Reveals Gaps in Common Types of Cybersecurity Training, accessed on August 10, 2025, https://cs.uchicago.edu/news/new-study-reveals-gaps-in-common-types-of-cybersecurity-training/
- Psychology and Cybersecurity: Rewiring Human Behavior to build Sustainable Cyber Resilience – ISACA Engage, accessed on August 10, 2025, https://engage.isaca.org/inprogressnewjerseychapter/blogs/felicia-hou/2025/06/26/psychology-and-cybersecurity-rewiring-human-behavi
- Why Most Cybersecurity Awareness Programs Fail in 2025 (and …, accessed on August 10, 2025, https://www.brside.com/academy-blog/why-most-cybersecurity-awareness-programs-fail-in-2025-(and-how-to-fix-them)
- Who and What is a Security Awareness Officer? – SANS Institute, accessed on August 10, 2025, https://www.sans.org/blog/who-and-what-is-a-security-awareness-officer/
- Is Traditional Security Training Enough? Cracking the Code on Human Behavior – HumanFirewall, accessed on August 10, 2025, https://humanfirewall.io/rethinking-cybersecurity-training/
- The Human Element: Psychology of Cybersecurity | AgileBlue, accessed on August 10, 2025, https://agileblue.com/the-human-element-psychology-of-cybersecurity-and-building-a-security-aware-culture/
- Cybersecurity: Make It a Habit! – UC IT security, accessed on August 10, 2025, https://security.ucop.edu/resources/security-awareness/habits.html
- Cybersecurity Habits Meet Neuroscience | Sileo.com, accessed on August 10, 2025, https://sileo.com/cybersecurity-habits/
- Psychology of Cybersecurity and Human Behavior – Identity Management Institute®, accessed on August 10, 2025, https://identitymanagementinstitute.org/psychology-of-cybersecurity-and-human-behavior/
- Creating cybersecurity herd immunity through third-party risk management – Techerati, accessed on August 10, 2025, https://www.techerati.com/news-hub/creating-cybersecurity-herd-immunity-through-third-party-risk-management/
- The importance of cybersecurity in protecting patient safety – American Hospital Association, accessed on August 10, 2025, https://www.aha.org/center/cybersecurity-and-risk-advisory-services/importance-cybersecurity-protecting-patient-safety
- Advocacy of cyber public health – PMC, accessed on August 10, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC8403267/
- Cyber Threats Pose a Public Health Risk – A Public-Private Partnership Prescription, accessed on August 10, 2025, https://www.naccho.org/blog/articles/cyber-threats-public-health-risk
- Healthcare and Public Health Cybersecurity – CISA, accessed on August 10, 2025, https://www.cisa.gov/topics/cybersecurity-best-practices/healthcare
- Applications of Epidemiology to Cybersecurity – ProQuest, accessed on August 10, 2025, https://search.proquest.com/openview/9d0ed8c941aef559f1ac9aaab721896a/1?pq-origsite=gscholar&cbl=396497
- Modelling Cyber Vulnerability using Epidemic Models – SciTePress, accessed on August 10, 2025, https://www.scitepress.org/papers/2017/64019/64019.pdf
- Using epidemiology to combat cyberthreats – Polytechnique Insights, accessed on August 10, 2025, https://www.polytechnique-insights.com/en/columns/digital/using-epidemiology-to-combat-cyberthreats/
- Coronavirus disease (COVID-19): Herd immunity, lockdowns and COVID-19 – World Health Organization (WHO), accessed on August 10, 2025, https://www.who.int/news-room/questions-and-answers/item/herd-immunity-lockdowns-and-covid-19
- Herd Immunity: History, Vaccines, Threshold & What It Means – Cleveland Clinic, accessed on August 10, 2025, https://my.clevelandclinic.org/health/articles/22599-herd-immunity
- Achieving herd immunity in cybersecurity – Inquirer Opinion, accessed on August 10, 2025, https://opinion.inquirer.net/150935/achieving-herd-immunity-in-cybersecurity
- The Fogg Model – Habit Weekly, accessed on August 10, 2025, https://www.habitweekly.com/models-frameworks/the-fogg-model
- Fogg Behavior Model, accessed on August 10, 2025, https://behaviordesign.stanford.edu/resources/fogg-behavior-model
- A Complete Guide to the Fogg Behavior Model | Triple Whale, accessed on August 10, 2025, https://www.triplewhale.com/blog/fogg-behavior-model
- Fogg Behavior Model – The Decision Lab, accessed on August 10, 2025, https://thedecisionlab.com/reference-guide/psychology/fogg-behavior-model
- Book Summary: Tiny Habits by BJ Fogg – To Summarise, accessed on August 10, 2025, https://www.tosummarise.com/book-summary-tiny-habits-by-bj-fogg/
- New Book Summary – Tiny Habits by BJ Fogg : r/BettermentBookClub – Reddit, accessed on August 10, 2025, https://www.reddit.com/r/BettermentBookClub/comments/10j7xq9/new_book_summary_tiny_habits_by_bj_fogg/
- Book Summary – Tiny Habits (B.J. Fogg) – Readingraphics, accessed on August 10, 2025, https://readingraphics.com/book-summary-tiny-habits/
- Tiny Habits By BJ Fogg – Book Highlights & Summary – Shilpa Kapilavai, accessed on August 10, 2025, https://shilpakapilavai.com/tiny-habits-by-bj-fogg-book-highlights-summary/
- What I’ve Learned from Atomic Habits by James Clear. : r/productivity – Reddit, accessed on August 10, 2025, https://www.reddit.com/r/productivity/comments/1c92nfz/what_ive_learned_from_atomic_habits_by_james_clear/
- How to hack your habits: author James Clear shares his life-changing strategy – Atlassian, accessed on August 10, 2025, https://www.atlassian.com/blog/teamwork/atomic-habits-james-clear
- 5 Digital Hygiene Habits to Help Your Organization Stay Safe Online – Health IT Answers, accessed on August 10, 2025, https://www.healthitanswers.net/5-digital-hygiene-habits-to-help-your-organization-stay-safe-online/
- Top 10 Digital Hygiene Practices for a Safer Online Life | John Jermain Memorial Library, accessed on August 10, 2025, https://www.johnjermain.org/top-10-digital-hygiene-practices-for-a-safer-online-life/
- Best Practices for Device Hygiene | McAfee, accessed on August 10, 2025, https://www.mcafee.com/learn/best-practices-for-device-hygiene/
- what are some simple habits to improve my personal cybersecurity? : r/AskNetsec – Reddit, accessed on August 10, 2025, https://www.reddit.com/r/AskNetsec/comments/1lo6u5m/what_are_some_simple_habits_to_improve_my/
- Dartmouth Guide to Digital Hygiene – Dartmouth Services, accessed on August 10, 2025, https://services.dartmouth.edu/TDClient/1806/Portal/KB/ArticleDet?ID=155669
- 2021 Volume 1 Nudging Our Way to Successful Information Security Awareness – ISACA, accessed on August 10, 2025, https://www.isaca.org/resources/isaca-journal/issues/2021/volume-1/nudging-our-way-to-successful-information-security-awareness
- Nudge Theory For Security Awareness | ThinkCyber, accessed on August 10, 2025, https://blog.thinkcyber.co.uk/introduction-to-nudge-theory-for-security-awareness
- What is the Nudge Theory For Security Awareness – Keepnet Labs, accessed on August 10, 2025, https://keepnetlabs.com/blog/what-is-the-nudge-theory-for-security-awareness
- Top five examples of nudge theory in action – CybSafe, accessed on August 10, 2025, https://www.cybsafe.com/blog/top-five-examples-of-nudge-theory-in-action/
- Cybersecurity Awareness and Training Manager (Remote) job in North Chicago, IL | AbbVie, accessed on August 10, 2025, https://careers.abbvie.com/en/job/cybersecurity-awareness-and-training-manager-remote-in-north-chicago-il-jid-4022
- Job Description for Security Awareness Officer – SANS Institute, accessed on August 10, 2025, https://www.sans.org/blog/job-description-for-security-awareness-officer
- SANS reveal top reasons for failure of enterprise security awareness programmes, accessed on August 10, 2025, https://www.intelligentciso.com/2017/07/19/sans-reveal-top-reasons-for-failure-of-enterprise-security-awareness-programmes/
- 7 Weaknesses of Security Awareness Training – cyberconIQ.com, accessed on August 10, 2025, https://cyberconiq.com/blog/7-weaknesses-of-security-awareness-training/
- Security Awareness Isn’t Dead—But It’s Not Enough – Keepnet Labs, accessed on August 10, 2025, https://keepnetlabs.com/blog/security-awareness-isn-t-dead-but-it-s-not-enough