One of my first jobs out of university was as a sysadmin for a medical testing laboratory. The servers I was responsible for were in a computer room in the office. I could touch them, I could physically see how they were networked together, and I could see the hardware that was installed. Each of them had an operating system installed, with applications running directly on it to support the business - our setup included an on-premises mail server, web server, firewall and data storage. The security for this was relatively simple: monitoring, firewall, patching, least-privilege access, and so on.
Nowadays, the average user does most of their web browsing on a phone, not a computer. Websites are larger and more complicated, and the browsers users run have grown complex to match. Where before we had bare-metal servers in computer rooms, we now run our applications in the cloud, where resources are defined via APIs and are created and destroyed regularly.
Approaching security for each piece of this new digital puzzle is complex. The security approaches of yore consisted of a list of "controls" - recommended security settings for a given piece of technology. The equivalent lists for modern systems are huge. While useful to a security practitioner, a list of controls does not help a business understand what it needs to do to achieve the security it desires, how long it's going to take or, of course, what it will cost.
Clients I've worked with who employ large numbers of security experts continue to struggle with this. As an organisation's IT estate grows in size and complexity, its security needs expand in step, keeping its security teams fully occupied. More and more organisations of all sizes are now building their own applications, which do not fit the generic mould that lists of controls are typically written for.
Here's a different approach which cuts through many of these problems: starting not with controls, but with risk. Risk, that is, to the business.
In a controls-driven way of thinking, we ask "what can be locked down?" and "what can be turned off?". Starting with risk changes the conversation to "what matters?" and "what will harm the business?". In other words, the business comes first.
This is great - we're now talking in a language non-technical folk can understand. Though we still mitigate risks with controls (which will be technical), deciding which risks to mitigate is a conversation the business needs to be part of, and risk-driven thinking enables that.
How then can we measure the risks an application poses?
Enter: the threat model.
Rather than being general, threat models are usually specific to the application you've built (generic threat models do exist and can be useful in specific circumstances, but that's a topic for another time). Threat models combine a few key sets of information into a simple document (a sketch of how one entry might look follows the list):
- threats that can manifest against the application (e.g. a hacker brute-forcing a user's password)
- where the threat could manifest (e.g. authentication micro-service, or a third-party API)
- the associated risks (e.g. user data can be read, changed and deleted without the user's knowledge or permission)
- the ways we could mitigate (e.g. password policy, MFA or rate limits on the authentication endpoint)
- what we've already mitigated
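To make that concrete, here's a minimal sketch of how a single entry might look if you chose to capture a threat model as structured data. The `ThreatModelEntry` type and its field names are my own invention for illustration - in practice a threat model is just as often a table in a wiki page or a shared document:

```python
from dataclasses import dataclass, field


@dataclass
class ThreatModelEntry:
    """One illustrative row of a threat model."""
    threat: str                  # what could happen
    component: str               # where it could manifest
    risk: str                    # what it would mean for the business
    possible_mitigations: list[str] = field(default_factory=list)
    implemented_mitigations: list[str] = field(default_factory=list)


# The brute-force example from the list above, expressed as a single entry.
brute_force = ThreatModelEntry(
    threat="A hacker brute-forces a user's password",
    component="Authentication micro-service",
    risk="User data can be read, changed and deleted without the user's "
         "knowledge or permission",
    possible_mitigations=[
        "Password policy",
        "MFA",
        "Rate limits on the authentication endpoint",
    ],
    implemented_mitigations=["Password policy"],
)
```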
This information is captured in a threat modelling session, where the engineering team and a threat modeller or security expert sit around the table. By combining this information, readers of the threat model are now crystal clear on:
- the list of possible mitigations
- what those mitigations are protecting against
- why particular mitigations should be implemented
- the outcome if the mitigations fail or are not implemented
(Where I've put "mitigations" above, these would previously have been "controls". "Mitigation" is a more helpful word, as we may put in place a mitigation that doesn't stop a threat actor in their tracks the way a control might. For example, we may trigger an alert to our security team for investigation, but allow the activity to continue.)
This sounds simple, and some readers may be thinking "surely there are already other documents that describe this in an easy-to-digest way?". It is simple, and yet without threat modelling, getting this information in one place is surprisingly elusive.
Threat modelling brings several benefits.
First, consider your engineering team. Say they're building a public API using AWS Lambda, and need to create a development environment where work on the application can be done away from the public API, and a production environment where the public API itself will be hosted. They decide for simplicity to host these in the same AWS account.
This train of thought might seem logical. To the security team, it will likely raise a red flag. To resolve the disagreement and reach a common understanding, a brief threat modelling session can establish the risks of combining the environments in one account. If the teams find the risk is not tolerable (for example, because the production environment will house sensitive customer data that should not be accessible to application developers), mitigations or alternative approaches can then be proposed. All deliberations are recorded in the threat model document.
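To make the record-keeping concrete, the outcome of that session might be captured along these lines, reusing the illustrative `ThreatModelEntry` sketch from earlier (the wording is hypothetical):

```python
# Recording the shared-account discussion with the illustrative structure above.
shared_account = ThreatModelEntry(
    threat="Application developers access production customer data through "
           "the shared AWS account",
    component="Single AWS account hosting both development and production",
    risk="Sensitive customer data is exposed to staff who have no need to see it",
    possible_mitigations=[
        "Split development and production into separate AWS accounts",
        "Restrict access to production data with least-privilege IAM policies",
        "Alert the security team when developers access production resources",
    ],
    implemented_mitigations=[],  # agreed in the session; empty until the work is done
)
```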
Whatever the outcome of the discussion, both teams will now have new information: engineers will have a deeper understanding of the *business risks* around the application they are building. Security will have a threat model for this application which they can build on and use in future conversations with engineers, as well as use in incident response and security audits. Everybody wins!
I've chosen a fairly contrived example to make the point, but more complex variations of this play out all the time: for example, in the last week I've had separate questions on the use of SSH bastions, exposing VMs to the internet, granting IAM credentials from a frontend and security boundaries in S3.
These questions around relaxing security or potentially unsafe configurations tend, in my experience, to create friction between security and engineering teams because of the teams' differing goals, particularly in larger organisations. A threat model brings the right data to the fore and gives both teams the context they need to reach a sensible compromise, with both walking away from the interaction knowing they contributed and weighed the options together.
Threat modelling is also collaborative. The part I most enjoy about threat modelling is getting a whole team together to threat model the things they're responsible for. The focus on business risks makes the consequences of attacks "real" for everyone around the table. And there's something about having a load of people in a room bouncing ideas off each other, coming up with potential threats and mitigations. Not only is it fun, it generally produces much better results. Far better than a security expert doing it alone.
By using a collaborative approach, security also becomes more visible. Visibility not only improves awareness of what our security is, why we need it and how we're improving it, but also improves confidence in our security approach and team. It sows and grows security culture in your organisation, especially among engineers, who are at the coal face of threat modelling their applications.
It's a cliche, but security is everyone's business. Not everyone can get into the technical detail about what needs to be done where, but threat modelling enables them to talk about business risk. This is the really crucial first step that makes security easier down the road. When speaking to people about this I draw a comparison to software requirements: understanding requirements properly makes software delivery easier and of higher quality. Security is exactly the same with risk - understanding your risks earlier makes it far easier to deliver quality security with less wastage down the line.
(On wastage: I worked with one client who had invested extensively in security tools for their cloud environment. The tools cost reasonable sums to license, and some required significant engineering effort to integrate into the rest of their stack. When the Log4j vulnerability surfaced, they discovered that many of these tools failed to answer the questions they needed to ask as part of incident response. This wastage, of time and money, could have been largely avoided with a risk-centric approach.)
We help tech businesses get started by truly understanding the security risks they face so they can realise these benefits for themselves. We train and equip existing teams to immediately threat model applications and build focussed security plans which deliver security with clarity, quality and meaningful, visible risk reduction.
Businesses today face new and daunting challenges to their operations, no matter the sector or geography they're in. By starting your cybersecurity conversations with risk, rather than controls, you turn security from daunting into achievable. And you can bring others with you and have fun along the way.