I've often heard the following question (or variants thereof):
How do I secure [this thing]?
Such a question rarely lends itself to a quick answer -- in almost all cases it prompts further questions: secure what, against what, in what cases, from whom? What options are you considering, and how will they help? Akamai InfoSec uses the Principals-Goals-Powers-Controls rubric to ask and answer these questions, and in so doing, helps guide development effort towards our goals.
How to Secure Anything
(Note: The Principals-Goals-Powers-Controls rubric was originally developed by Brian Sniffen and Michael Stone based on a key bit of advice from Joshua Guttman of WPI: "If you're not talking about an adversary, you aren't doing security.")
How do I secure [this thing]?
- What is 'this'? Define the scope of your system: what entities, systems, locations, and/or interactions do you wish to include? What do you wish to specifically exclude? We call the participants in such a system the **principals**. If you have more than one or two, drawing a system diagram can aid understanding.
- What do you mean by 'secure'? Define your security **goals**: what does "secure" mean? What must remain true about the system in order for it to be secure? What counts as an unacceptable loss?
- Secure it against whom? Define your adversaries: who might try to frustrate those goals? Adversaries don't need to be malicious; we usually include "Murphy" (as in Murphy's Law: that which can go wrong, will; natural accidents, in other words) and well-intentioned but mistaken actors ("I clicked 'delete' when I meant 'save'") in the set of adversaries.
- What are their capabilities? Consider what **powers** are available to those adversaries. Again, accidents should be taken into account in addition to deliberate, malicious activity. For each adversary, how can they use powers available to them to frustrate your security goals?
- What can you do to prevent that? Define and build a set of **controls** that limit or prevent the ability of adversary powers to affect your security goals.
Note that we don't consider adversary goals or motivations. Capability is more important than intent -- relying on good intentions is rarely an adequate security control.
Part of our role is helping teams analyze their own security: we help them answer these questions themselves, rather than answering for them. Our engineering teams produce a Security Considerations section as part of their architecture document, containing a list of security goals, an enumeration of adversaries and their powers, and a description of security controls that help achieve those goals in the face of those powers.
A non-technical example
How can this be applied in practice? Consider this somewhat-contrived, non-technical (and belatedly-seasonal) example.
Over holiday dinner, your uncle says: "I hear you're into security. Next year, I want to secure my bowl of Halloween candy -- how would I do that?"
Here's a conversation you might have:
You: "All right -- is it just your Halloween candy you're worried about, or do you want to prevent people playing tricks on you?"
Uncle: "No, I'm not worried about that. I drenched a bunch of TP'ers with the hose last year and since then they've left me alone."
You: "OK, then. What do you mean by 'secure'?"
Uncle: "You know, secure it. Prevent people from taking the whole thing. I have to work on Halloween night, and my security camera always catches a kid taking the whole bowl; that gets expensive."
You: "So your goal is to prevent anyone from taking more than their fair share?"
Uncle: "Yeah -- I want to make sure each person gets one, and only one, piece of candy."
You: "Hmm, that's a slightly different statement. It's both a negative goal -- people shouldn't be able to take too much -- and a positive goal -- everyone should be able to get candy. (Otherwise, you could 'prevent people from taking the whole thing' by not handing out any candy at all!) I think I understand. Is it just kids you're worried about, or everyone?"
Uncle: "Good point. I'm just worried about kids. Oh, and there was that time someone stepped on the side of the bowl and flung candy into the bushes..."
You: "OK, then, we'll consider accidents too. What have you tried so far?"
Uncle: "A couple years back I tried putting a sign on the bowl: 'One Piece per Person.' That didn't work -- they just poured it all out."
You: "Makes sense. The sort of person who'd take an entire bowl of candy isn't much deterred by signs."
Uncle: "The next year, I tried using a very heavy cast-iron pot to hold the candy. That prevented the bowl-flipping accident. I thought it'd prevent people from pouring it out, but one smart kid just scooped it out by the handful instead."
You: "Seems like an OK solution. Shame it didn't work."
Uncle: "I also tried a box with a small hole, but larger kids couldn't get their hands in. Making the hole larger just allowed kids with smaller hands to grab more."
You: "Clever, but understandably flawed."
Uncle: "I tried building a gadget to dispense candy on a timer, but it kept flaking out -- besides, I need to be able to handle a bunch of kids showing up at once."
You: "While I'd love to come up with a clever technical solution like that, it seems easiest to have a human in the loop. Perhaps you could try getting a backup to hand it out for you? I'm usually free that night."
Uncle: "Hmm, if you're willing, that'd work. See you next Halloween."
Grandfather (interjecting): "That doesn't work! There's no way you can prevent all your candy from disappearing -- I mean, what if a robber shows up and takes your candy at gunpoint?!"
You: "That's out-of-scope. We decided before that we're only considering greedy kids as adversaries -- besides, gun-toting criminals wandering around at night are probably after more than candy."
A note on objections and limited resources
As Grandpa points out, virtually any conceivable system has one or more adversaries whose powers defeat your controls -- in a computer security context, the archetypical "gun-toting robber" is the NSA or other hypothetical adversaries with nation-state budgets and compute farms the size of small towns. In almost any "how do I secure this?" discussion, you'll find someone pointing out that the proposed system isn't secure against the NSA, Chinese intelligence, malicious BIOS rootkits, evil employers with ceiling cameras and keyloggers, and so on ad infinitum. You can't call the system secure, claim those objectors, unless you take those adversaries into account. It's true that "This system is secure, full stop" is almost always a bad claim to make -- much like "this road is safe" or "this food is healthy." That's not the goal of this process -- an honest security analysis will almost certainly admit there are times when the security controls will fail, and that's OK.
As much as anything else, system security is subject to business pressures and thus must contend for limited resources: the time you spend ensuring the security of a system is (usually) time not spent adding new features or otherwise improving the day-to-day customer experience. Thus, in most cases (unless you work for the NSA, maybe) it makes little sense to have perfect security as a goal. This is where the Principals-Goals-Powers-Controls rubric can help: defining the scope of your system and its goals helps determine where to apply your limited budget (whether that be money, bandwidth, compute-hours, or person-hours; most likely some combination of those) towards getting the best return-on-investment.
A technical example
Bringing this back to information security, here's a common example -- password authentication -- analyzed using this rubric.
System: A password system for user authentication, consisting of a client (the browser), a server with a standard database, and a link between these systems.
- Security goals:
  - Passwords are kept secret from everyone but the end-user.
  - Only the end-user associated with a password can pass authentication.
- Adversaries and their powers:
  - End-users, who can transmit and check passwords over the network and can insert new passwords into the database by signing up.
  - Malicious actors who've compromised the system and can access the database directly.
  - A man-in-the-middle who can see traffic passing over the network.
- Security controls:
  - Passwords are hashed using bcrypt, whose built-in salting deters rainbow-table attacks arising from a database compromise.
  - Network rate-limiting and autobanning, in addition to the slowdown naturally provided by bcrypt, render a network brute-force attack infeasible.
  - The network link is encrypted using TLS to deter man-in-the-middle snooping. Access to the TLS private key is limited to trusted administrators.
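The salted-hashing control can be sketched in a few lines. The example above specifies bcrypt; as a stand-in, this sketch uses PBKDF2 from Python's standard-library `hashlib` (the per-password salt and deliberately slow hash are the same ideas; the iteration count here is an illustrative choice, not a recommendation from the text):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A random per-password salt means identical passwords hash to
    # different digests, which defeats precomputed rainbow tables.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    # Constant-time comparison avoids leaking a timing side-channel.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
print(check_password("hunter2", salt, digest))  # True
print(check_password("wrong", salt, digest))    # False
```

Note that only the salt and digest are stored; the high iteration count is also what makes network brute-forcing expensive for the attacker.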
- Known weaknesses:
  - Anyone who has access to the TLS private key can decrypt communications on the wire and reveal passwords. Anyone who can break the key, or who can otherwise conduct a TLS man-in-the-middle attack (for example, by using a malicious-but-trusted CA), can access these passwords as well.
  - Similarly, a user with root privileges on the server can access the passwords in unencrypted form; for example, by using a debugger on the server process.
  - Malware on the end-user's computer (a keylogger or malicious browser, for example) could access keyboard input and grab the password.
  - Perfect Forward Secrecy is also out of scope -- an adversary may record traffic flows in the hopes of *later* decrypting them. Also out of scope are side-channels such as timing or cache-based attacks.
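The rate-limiting control above is left unspecified; as one hypothetical sketch, a token bucket caps an adversary's sustained guess rate while tolerating short legitimate bursts:

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity` requests, refilling `rate` tokens/second."""

    def __init__(self, rate: float, capacity: float) -> None:
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last check,
        # capped at the bucket's capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per source IP throttles a single guesser without locking
# out everyone else; the first `capacity` back-to-back attempts succeed,
# after which attempts are limited to roughly `rate` per second.
bucket = TokenBucket(rate=1.0, capacity=5.0)
```

Keying the bucket by source IP (rather than by account) is itself a design choice with trade-offs: it stops a single noisy guesser, but a distributed adversary gets a fresh bucket per address.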
"How do I secure this?" is not a simple question. This rubric gives you the questions to ask in order to honestly assess the security of your system, and points the way towards improving it.