

Threat modeling & secure development: Risk assessment is now a legal requirement – Is your team ready?

Publication date: 25.03.2026

For years, threat modeling was the mark of a mature security team: valuable, recommended, but ultimately optional. That era is over. With the EU Cyber Resilience Act and NIS2 now shaping how software must be built across Europe, threat modeling has quietly become a compliance obligation. The question is no longer whether your team should do it. It’s whether your team is equipped to do it well.

This article explores how regulations like the CRA and NIS2 impact development risk management. We examine why threat modeling is the industry-standard solution and demonstrate its practical application for developers, AppSec leads, and CTOs alike. 

Already familiar with the basics? This piece builds on our earlier article, “Threat Modeling: A Gate to Secure Development Culture”, which covers the cultural foundations and a phased approach to getting started.

 

1. What NIS2 and the CRA require in terms of risk management 

Neither CRA nor NIS2 uses the phrase “threat modeling” explicitly. What they do require, however, maps almost perfectly to what threat modeling produces. 

 

The Cyber Resilience Act (CRA) 

The CRA entered into force in December 2024, and its main obligations will apply from December 2027 to any product with digital elements placed on the EU market. Its core obligation is secure-by-design: security must be built in from the start, not bolted on after deployment. 

Concretely, the CRA requires manufacturers to: 

  • Assess cybersecurity risks as part of the product development process 
  • Document security decisions and their rationale 
  • Ensure vulnerabilities are identified and addressed throughout the lifecycle 
  • Apply the principle of security by default/design 

 

Each of these requirements corresponds directly to an output of a well-run threat modeling process: a risk register, documented design decisions, identified attack vectors, and architectural mitigations.

 

NIS2 

NIS2 applies to essential and important entities across critical sectors and extends security obligations deeper into supply chains than its predecessor. Among its risk management requirements: 

  • Security must be addressed in the development of systems and services 
  • Risks from suppliers and third-party components must be assessed 
  • Incidents must be preventable where possible through proactive risk management 

Again, threat modeling is one of the few structured techniques that produces the evidence these obligations call for, particularly around supply chain risk and proactive architectural decision-making. 

 

2. From good practice to compliance evidence 

Here’s where many teams fall short: they do threat modeling as an exercise, but they don’t treat it as a compliance artifact. The difference matters. A threat model produced as a whiteboard conversation is valuable for the team. A threat model that is documented, versioned, linked to architectural decisions, and revisited when the system changes becomes evidence. 

What that means in practice:

  • Traceability: Each identified threat should link to a mitigation decision or an accepted risk. “We identified this attack vector, here is what we did about it” is the sentence your compliance documentation needs to be able to produce. 
  • Repeatability: A one-off model for a single feature does not satisfy ongoing CRA obligations. Teams need a lightweight, repeatable process that can be applied at each design phase. 
  • Versioning: When the system changes, the threat model must change with it. This requires treating threat model outputs as living documents, not snapshots. 
  • Coverage: CRA’s scope includes third-party components and dependencies. Threat modeling must extend beyond your own code to the libraries, APIs, and services your product relies on. 

The gap between “we do threat modeling” and “we can demonstrate we do threat modeling” is precisely where the CRA and NIS2 will expose unprepared teams. You must be able to show that you thought about risk before you shipped. 
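The traceability, versioning, and coverage points above can be sketched as a minimal data structure. This is an illustrative schema only, not a format prescribed by the CRA or NIS2; every field name here is our own suggestion:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class Status(Enum):
    MITIGATED = "mitigated"   # a mitigation decision was taken
    ACCEPTED = "accepted"     # risk accepted, with documented rationale
    OPEN = "open"             # flagged explicitly, not yet resolved

@dataclass
class ThreatRecord:
    """One traceable entry in a threat register (illustrative schema)."""
    threat: str          # what could go wrong
    component: str       # system element affected, incl. third-party deps
    decision: str        # mitigation taken, or rationale for acceptance
    status: Status
    ticket: str          # link to the story/ticket that implements it
    recorded: date
    model_version: str   # revision of the threat model this belongs to

def unresolved(register: list[ThreatRecord]) -> list[ThreatRecord]:
    """Surface explicitly flagged, still-open risks for review."""
    return [r for r in register if r.status is Status.OPEN]

register = [
    ThreatRecord("SQL injection via search endpoint", "web API",
                 "parameterised queries enforced in code review",
                 Status.MITIGATED, "PROJ-101", date(2026, 3, 1), "v1.2"),
    ThreatRecord("Dependency confusion on internal package", "build pipeline",
                 "not yet addressed",
                 Status.OPEN, "PROJ-142", date(2026, 3, 1), "v1.2"),
]
```

Each record answers the “we identified this attack vector, here is what we did about it” question directly, and bumping `model_version` whenever the system changes keeps the register a living document rather than a snapshot.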

 

What it looks like for each role 

Threat modeling has a place for people of every experience level, but what it means and what it demands differ depending on where you sit. 

 

For developers and engineers 

The good news: you don’t need to become a security expert. Threat modeling at the developer level is about the habit of asking “what could go wrong?” before writing code, not after. 

In practice this could mean: 

  • Running a 30-60 minute structured conversation at the start of a new feature or significant change 
  • Using a simple framework like STRIDE or the Four Question Framework to guide the discussion 
  • Capturing threats and agreed mitigations in a lightweight template, linked to the relevant ticket or story 
  • Flagging unresolved risks explicitly rather than leaving them implicit

The output doesn’t need to be a formal document. It needs to be findable, attributable, and honest about what was considered and what was decided. 
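As an illustration of such a lightweight record, the session could be captured with a few STRIDE prompts. The categories are STRIDE’s; the prompt wording, function, and ticket IDs are hypothetical:

```python
# STRIDE prompts to walk through for each new feature (illustrative template).
STRIDE_PROMPTS = {
    "Spoofing": "Can anyone pretend to be another user or service?",
    "Tampering": "Can data or code be modified in transit or at rest?",
    "Repudiation": "Can an action be performed without leaving a trace?",
    "Information disclosure": "Can data leak to someone who shouldn't see it?",
    "Denial of service": "Can the feature be made unavailable?",
    "Elevation of privilege": "Can a user gain rights they shouldn't have?",
}

def session_notes(feature: str, ticket: str, findings: dict[str, str]) -> str:
    """Render a findable, honest record of a threat modeling session."""
    lines = [f"Threat modeling notes: {feature} ({ticket})"]
    for category, prompt in STRIDE_PROMPTS.items():
        # Honesty over polish: record what was NOT discussed, too.
        answer = findings.get(category, "not discussed")
        lines.append(f"- {category}: {prompt} -> {answer}")
    return "\n".join(lines)

print(session_notes(
    "password reset flow", "PROJ-207",
    {"Spoofing": "reset tokens are single-use and expire in 15 min",
     "Information disclosure": "OPEN - error messages reveal valid emails"},
))
```

Attaching the rendered notes to the relevant ticket makes them findable and attributable, and the explicit “not discussed” entries keep unresolved or unexamined risks visible rather than implicit.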

 

For AppSec/Security leads 

Your role shifts from doing threat modeling to enabling it at scale. Under CRA obligations, the question you need to answer is: can every team, for every significant feature, produce a threat model that a regulator could examine? 

That calls for a standardized process that can be executed without security expertise (templates, checklists, lightweight facilitation guides, training materials), and for integrating that process into existing development rituals such as design reviews, sprint planning, and architecture review boards. 

For you, threat modeling is a way to aggregate threats across teams to identify systemic risks and recurring patterns. 

Threat modeling also gives AppSec leads something they often lack: a structured way to prioritize. Not every threat needs the same response. A documented, risk-ranked backlog of security issues is far more defensible and actionable than an undifferentiated list of findings. 

 

For CTOs and CISOs 

NIS2 makes management bodies personally accountable for cybersecurity risk management in ways that previous frameworks largely did not, and under the CRA secure-by-design is now a product compliance requirement with potential market access implications. Check whether you are on track by asking your teams: 

  • Do we have a documented process for assessing security risks at design time? 
  • Can we produce evidence that we identified and addressed threats for our most recent major release? 
  • Does our supplier and dependency assessment include security risk criteria? 
  • Is our threat modeling process integrated into our SDLC, or does it exist as a parallel activity that gets skipped under time pressure? 

Investing in threat modeling capability is also becoming a market differentiator, as customers and partners increasingly scrutinize the security posture of their software suppliers. Consider the practical consequence of being unable to demonstrate compliance when a prospect’s procurement team asks for it.

 

Building the compliance bridge: a practical path 

If your team is just getting started, “Threat Modeling: A Gate to Secure Development Culture” walks through the cultural shift and phased approach. This article picks up where that one ends. 

If your team already has threat modeling in some form, even informally, the path to compliance-grade practice is shorter than you might think. It is mostly about systematizing what you already do. 

 


The Bottom Line 

Threat modeling has always been one of the highest-leverage investments a development team can make; it is a cornerstone of the ‘shift-left’ mindset. The CRA and NIS2 have now made it non-optional for a large share of the market. 

The teams that will navigate this transition most smoothly are the ones that have embedded threat modeling as a normal part of how they design and build software. Not a compliance exercise conducted under pressure, but a habit that produces better software as a side effect. 

 

The good news: you don’t need to build that habit alone. Frameworks exist, some tools are lightweight, and the expertise is available. 

What it takes is the decision to start. 

