May 2009

Volume 24 Number 05

Security Briefs - A Conversation About Threat Modeling

By Michael Howard | May 2009

Contents

The Thespians
Scene I
Scene II
Scene III
Scene IV

Many years ago, I read a paper by Bill Bryant entitled "Designing an Authentication System: A Dialogue in Four Scenes" that explains the Kerberos authentication system. The paper reflects how grumpy, jaded security people think and react to questions from nonsecurity people. Often, when nonsecurity people ask security people about security, arriving at an appropriate and secure solution can be like pulling teeth because there is a communication mismatch between the domain expertise of the developer and that of the security professional. So I thought I would distill an amalgam of conversations I have had over many years into one long security chat. I do not pretend that this chat does justice to the original Kerberos paper, however, and I apologize profusely in advance.

A major goal of this chat is to tie in some of the major Security Development Lifecycle (SDL) requirements we impose on product teams here at Microsoft.

The Thespians

Paige: a young, bright software developer.

Michael: a simple security guy at Microsoft.

Scene I

A small hallway between two sets of cubicles, supposedly designed to enhance agile software development and communication.

Paige: Hey, grumps. I need your help building some software.

Michael: So?

Paige: Seriously, I want your help building this system I'm working on.

Michael: You mean you're going to design something, build it, pretend to test it, and then ask me to find the security vulnerabilities?

Paige: Don't be so grumpy. No, I want your help up front.

Pause: Michael wipes his hand down his face and forces a smile.

Michael: Wow! That's a first. OK, I'll help. What do you want?

Paige: We have the skeleton design laid out. It's for a Web application, but I want to make sure we're doing the right security things.

Michael: You mean you want to pass your Final Security Review?

Paige looks perplexed and a little agitated.

Paige: Don't be so GRUMPY! No, I want to do the right security things, I really do. I don't want this product in the headlines for the wrong reasons. I want to make sure any sensitive data is secured and I want to keep any emergency patches to a minimum. Our customers have told us they want to reduce patch deployment incidents.

Michael attempts a really bad Yoda impersonation.

Michael: Wise you are.

Paige: Where do we start?

Michael: Where's the threat model? Without a threat model we're just guessing.

Paige: Here it is. We've identified the threats, but I guess we need to make sure the mitigations are correct.

Michael leans over Paige's desk and looks over the threat model rapidly.

Michael: Good. Very Good. I'm impressed. It's a good threat model. If you squint, the application looks a lot like many Web-based applications: Web front end, Web back end, database server, application logic, storing customer data at the back end. Hmmm… pretty standard stuff.

Paige: OK, now what?

Michael: I see you've identified the first-order threats.

Paige: Actually, the threat modeling tool did that.

Michael: I know, I was being nice! Let's look at some of the biggies. At this point, I'm assuming the application diagram is accurate.

Paige: Yes, the design has been pretty stable for a couple of weeks.

Michael: Excellent. A good diagram is critical to the rest of the process, as it is the most important bridge between your application domain and my security domain.

Scene II

Michael pulls up a chair at Paige's desk to look at the threat modeling tool analysis in more detail.

Michael: The first thing you should know is that attacks happen and there is really nothing you can do to stop them. As long as attackers are drawing breath, they will attack. Web applications are a big target because they often provide the front end to sensitive data and they are often full of really bad security bugs.

Paige: Why are Web apps so full of security bugs?

Michael: I don't have empirical data, but I've seen thousands of bugs, reviewed more code than I care to remember, and heard every plausible and implausible excuse for not fixing bugs.

I think we see so many security bugs because of two things. First, there is a lot of Web code written today, and even a very low rate of security bugs in that much code still adds up to a large number of bugs.

Second, most developers just don't know the issues. It's sad, but true. Unfortunately, there are a large number of attackers out there who do know the issues.

Third, a lot of code is rushed to market. I hear "Get it out there" all too often. Don't get me wrong; your Web app has to provide real business value, and if it's stuck in the development labs, it's doing no one any good. But if you're building something that is highly exposed, such as a Web app, then you need to perform security due diligence so you don't put customers at risk.

Paige: You said there were two reasons, but gave three.

Michael: I know. I lied.

Michael furrows his brow.

Michael: When did you last go to security training?

Paige: Three weeks ago; it was the Web security class.

Michael: Excellent. Who taught the class?

Paige: Bryan.

Michael has a worried look on his face. Paige responds to the worried look.

Paige: WHAT?

Michael: Just kidding, you had one of our best guys. Did he tell you about his AJAX security book?

Paige: Of course.

Both Paige and Michael smirk.

Michael: OK, let's get back on course.

Short pause as Michael looks over the threat model.

Michael: You've noted that the database server stores potentially sensitive user data.

Paige: Yeah. There's a small client application that runs in the browser, written in C#. This code can upload files from the user to our back end via the Web server, and the files are stored in the file system along with various file metadata stored in SQL Server for rapid lookup. We might store billions of files eventually.

Michael: This raises the security and privacy bar substantially because the data is potentially sensitive; and because it is data, it is subject to information disclosure threats at the back-end data store and while it's in transit from the client to the server, right?

Paige: Er, yeah.

Michael: OK, that gives me a feel for the risk of your application. So let's start at the beginning.

I see you have a user—an external entity—on your system. You clearly need to authenticate users because they can be spoofed. How do you authenticate them?

Paige: We use a user name and password or Windows authentication.

Michael: Why both?

Paige: We have two distinct usage scenarios.

Paige shows the core scenarios in the threat modeling tool. The first is the Internet. The second is an intranet scenario.

Michael: What do you mean by intranet?

Paige: From the client code we detect whether the computer is domain joined, and if it is, we change a few of our internal settings to take full advantage of the Windows platform.

Paige shows the following code:

bool _fIsDomainJoined = true;
try {
    Domain d = Domain.GetComputerDomain();
} catch (ActiveDirectoryObjectNotFoundException e) {
    // not in a domain
    _fIsDomainJoined = false;
}

Michael grimaces.

Michael: There are two bugs in the code. The first is you're checking whether the machine, not the user account, is joined to the domain. It's possible to log on using a local account even if the machine is domain joined, so change the code to use Domain.GetCurrentDomain.

Next, you're failing open. If the code raises any other kind of exception and it's handled elsewhere in the code, then the code will use the domain-joined logic because the domain-joined flag is true. You need to change the code to set the domain-joined flag to true only within a successful try block.

Kudos, by the way, on only catching the exception you can handle. Many people catch all exceptions, even the exceptions they cannot handle, and that simply masks real code errors.
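A corrected sketch of Paige's check, applying both of Michael's fixes (fail closed, and check the user's domain rather than the machine's):

```csharp
using System.DirectoryServices.ActiveDirectory;

bool _fIsDomainJoined = false;  // fail closed: assume not domain joined
try {
    // Check the current user's domain, not the computer's.
    Domain d = Domain.GetCurrentDomain();
    _fIsDomainJoined = true;    // set to true only on success
} catch (ActiveDirectoryObjectNotFoundException) {
    // the user is not logged on with a domain account
}
```

Any unexpected exception now leaves the flag false, so the code no longer silently falls into the domain-joined path.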

Paige: OK, I get it.

Michael: Also, you should be specific in the threat model: change Intranet to Domain Joined.

Paige updates the threat modeling tool.

Michael: Tell me about the Windows authentication you use when domain joined.

Paige: We use NTLM or Kerberos.

Michael grins.

Michael: You mean you use Kerberos?

Paige: What do you mean?

Michael: You said this is a domain-joined scenario, so you should enforce Kerberos. You should move away from NTLM where possible and use only Kerberos. All currently supported domain-joined versions of Windows support Kerberos.

Paige: How do we enforce that?

Michael: Just make sure you are not requesting NTLM directly; look for "NTLM" in your code, and if you see it, come see me and we can set things right. Do not violate the SDL and require NTLM. This sample code is bad because it won't take advantage of Kerberos:

Michael shows Paige some C++ code that calls:

AcquireCredentialsHandle(NULL, NTLMSP_NAME, ...)

Paige makes a note about using Kerberos.
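In managed code, one way to express the same preference is to register only the Kerberos authentication type in a CredentialCache; this is a hedged sketch, and the service URI is hypothetical:

```csharp
using System;
using System.Net;

// Request Kerberos explicitly; nothing here names NTLM.
Uri server = new Uri("http://contoso-app/upload");   // hypothetical service URI
CredentialCache cache = new CredentialCache();
cache.Add(server, "Kerberos", CredentialCache.DefaultNetworkCredentials);

WebRequest request = WebRequest.Create(server);
request.Credentials = cache;   // only Kerberos will be offered for this URI
```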

Paige: What's so special about Kerberos?

Michael: I could simply say, don't use NTLM because it says so in the SDL. But here's why you should use Kerberos: it is more scalable than NTLM and it authenticates the client and the server. NTLM authenticates only the client.

Now tell me about the Internet-based authentication system. Is this your own system or something else?

Paige: We use Windows Live ID.

Michael: I'll be honest. As a security guy, I'm not a fan of user ID/password authentication systems.

Paige: But we need usability.

Michael: Exactly. I totally understand. Just make sure you're using the latest version of the Windows Live ID SDK.

We've addressed one threat: spoofing the user; mitigated with authentication, in this case Windows Live ID or Kerberos. Make sure you capture that in the threat model.

Paige: Done.

Michael: This application you have running in the browser is subject to all the STRIDE risk elements because it's a process. I want to focus on two of the big threats. We can focus on the others tomorrow.

Let's look at spoofing. Are you sure it's the real application and not some other application pretending to be your code? Let's also look at elevation of privilege. The last thing you want is to allow an attacker to get this code to perform tasks beyond its supposed capability. Many security pros claim that a secure program is a program that does what it's supposed to do, and no more.

Paige: We have this nailed. I think! First, the CAB file that's downloaded to the browser is digitally signed by a certificate authority that is subordinate to a root CA certificate installed by default in Windows and Linux, our two target platforms for this first release. Second, the code is C#, so we can restrict the permission set to only those permissions required by the code, which basically is only FileDialogPermission, UIPermission, DnsPermission, and SocketPermission. At least, that's what we think we need right now.

Michael smiles.

Michael: Wow. Good stuff! You're doing something I think is awesome: you're offloading the security work to either the runtime or the operating system rather than creating your own security infrastructure. If you do your own security stuff, you'll just get it wrong! You know what's cool about using digital signatures on the managed code application?

You just addressed the T threat in STRIDE: tampering. So make sure the threat model incorporates the fact that your digital signature is the T mitigation.

Scene III

Paige and Michael grab a coffee and head back to Paige's desk. Michael is wearing a white T-shirt, so he drinks his coffee with an outstretched neck!

Michael: Next is the server. Again, it's a process in the threat model, so it's subject to all the STRIDE threats. Let's only look at spoofing for a moment. How do you know the Web site is the site you really want and not a rogue?

Paige: Ha! We have this one sewn up. In fact, now that I think about what you said earlier, we might have two solutions. The first is we require SSL/TLS, which gives us server authentication, right?

Michael: Very good. As long as the common name, or CN, in the server certificate matches the domain name in the URL.

Paige: But wait. There's more. If we're using the intranet... I mean domain-joined scenario, we get server authentication by using Kerberos! Cool.

Michael: I'm impressed. OK, user spoofing addressed. Client code spoofing addressed. Server spoofing addressed. How about threats on the wire: the data flow from the client to the server, and the data flow from the server to the client? Clearly (no pun intended), both are subject to information disclosure and tampering (I and T in STRIDE) because they are data flows. The good news is they're already mitigated.

Paige: They are?

Michael: Yes.

Michael pauses for effect and to see if Paige gets the solution.

Paige: Ohhh, that's right. SSL/TLS! We get on-the-wire encryption and tamper detection for free!

Michael: Correctammundo.

Paige: You just said, "Correctammundo!"

Michael: I'm sorry. You are correct. For encryption, SSL/TLS uses symmetric-key encryption such as AES, and it uses message authentication codes for tamper detection. Oh, and make sure your Web server is configured to use 128-bit or better symmetric encryption—it's an SDL requirement.

Paige makes a note about encryption key length.

Michael: But I still think there's an elephant in the room. What about the data that flows from the client to the server and is then stored in the file system and in SQL Server? That data is potentially confidential, so how do you mitigate the information disclosure and tampering threats?

Paige: What do you mean? Is the data unprotected at the server?

Michael: Sure it is. Once the data pops out of that secure SSL/TLS channel it's back in the clear. So the question is, how do you protect the data from tampering and information disclosure by your administrators or, dare I say it, attackers if they compromise the back-end server?

Paige: Wow. I never thought of that.

A boyish smile spreads rapidly across Michael's face.

Michael: That's why I get paid the mediocre bucks!

Paige: OK, simple. We encrypt the data using, say, DES.

Michael makes a really annoying FAIL sound.

Paige: Explain.

Michael: Encryption can mitigate the information disclosure threat, but encryption by itself does not mitigate tampering threats, at least not without using sophisticated cipher modes. Also, you can't use DES because it's not secure enough, and using it is a violation of SDL policy. You can use AES for symmetric encryption.
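As a minimal sketch of the AES alternative Michael suggests (assuming keys and IVs are generated and stored securely elsewhere; the method name is hypothetical):

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

// Minimal AES encryption sketch. Key management (generation, storage,
// revocation, transport) is the hard part and is not addressed here.
static byte[] EncryptAes(byte[] plaintext, byte[] key, byte[] iv)
{
    using (Aes aes = Aes.Create())
    {
        aes.Key = key;   // 128-bit or longer key, per the SDL requirement
        aes.IV = iv;     // unique IV per message
        using (MemoryStream ms = new MemoryStream())
        using (CryptoStream cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
        {
            cs.Write(plaintext, 0, plaintext.Length);
            cs.FlushFinalBlock();
            return ms.ToArray();
        }
    }
}
```

Note that this addresses only information disclosure; tamper detection still needs a message authentication code such as HMACSHA256 over the ciphertext.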

Paige: How do we fix this, then?

Michael: I'll be honest, it's not trivial. Encryption is pretty easy. You just call some library code, pass in a key, the plaintext, initialization vector, cipher-type, key-size, cipher-mode, padding mode, and you're done.

Paige: You're being cynical.

Michael: I'm not. That's the easy part!

Paige: You've utterly lost me.

Michael: The hard part is the key management. How do you generate the encryption keys? Where do you store them? How do you revoke them? How do you transport them between computers or users? What if a key is lost?

Paige: We only allow the user to access his or her own data.

Michael: That makes things a little easier. But not much! Where possible, rely on the operating system.

Paige: So what can we use?

Scene IV

Michael takes a deep breath and swivels his chair toward Paige.

Michael: All right, here's what I would do. You have two distinct scenarios: Internet and domain-based. Domain-based is Windows-specific, so leverage the Windows operating system security primitives as much as possible.

Paige: OK, go on.

Michael: The SDL mandates that private data be protected using the Data Protection API (DPAPI), so just use that. Your client code can detect when it is executed by a domain-joined user, so just call CryptProtectData and CryptUnprotectData and leave it at that. Windows takes care of the key management as well as transferring keys between domain-joined machines when needed. Another advantage is that this solution imposes very little on the user. The user does not know the data is protected. It all happens transparently, and it's "psychologically acceptable" to the user.
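From C#, the DPAPI calls Michael names are exposed through the managed ProtectedData class; this is a sketch that requires Windows, and the sample string is hypothetical:

```csharp
using System.Security.Cryptography;
using System.Text;

// Protect data for the current user; Windows (DPAPI) manages the keys.
byte[] plaintext = Encoding.UTF8.GetBytes("sensitive user data");  // hypothetical payload
byte[] protectedBlob = ProtectedData.Protect(
    plaintext,
    null,                              // optional additional entropy
    DataProtectionScope.CurrentUser);  // key is tied to the user's logon

// Later, only the same user can unprotect the blob.
byte[] roundTrip = ProtectedData.Unprotect(
    protectedBlob, null, DataProtectionScope.CurrentUser);
```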

Paige looks puzzled.

Paige: What?

Michael: Schedule me next week, and we'll discuss a couple of important security principles you should know: the Bell-LaPadula Disclosure Model and the Saltzer and Schroeder Secure Design principles.

Paige opens up Outlook and adds an appointment for the following week.

Paige: Back to our application: what about the integrity of the data and tamper detection?

Michael: DPAPI addresses that too!

Paige: Seems like a no-brainer to me.

Michael: Yes, and you're SDL compliant!

Both Michael and Paige grin as Paige updates the threat model.

Paige: So what about the Internet scenario?

Michael: This is where it can get very tricky. To be honest, I don't have a solution for you right now, and designing one is difficult as I need to know some things about your application. What operating systems do you support? Should the server administrators really have access to the data? How do you generate and store the encryption keys? What are your auditing requirements? Assuming the encryption uses data derived from the user, what if the user forgets their password? I could keep going!

Paige: Do you want to discuss this tomorrow?

Michael: Sure, we should dedicate a whole hour—actually, make it two hours!

But before we wrap up this chat, I need to ask a couple of code-level questions. First, this is a Web application, so you're using the SDL-mandated ValidateRequest and ViewStateUserKey, right?

Paige: Absolutely.
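The two mitigations Paige confirms might look like this in an ASP.NET page (a sketch; the page class name is hypothetical). Request validation rejects suspicious markup in input, and ViewStateUserKey ties the view state to the current user to hinder one-click attacks:

```csharp
// In the .aspx page directive (request validation is on by default):
// <%@ Page ValidateRequest="true" ... %>

using System;
using System.Web.UI;

public partial class UploadPage : Page   // hypothetical page class
{
    protected void Page_Init(object sender, EventArgs e)
    {
        // Tie view state to this user's session.
        ViewStateUserKey = Session.SessionID;
    }
}
```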

Michael: And you're using an HTML encoding library where appropriate?

Paige: Yes. All input that comes from outside the server, and is then echoed back, including metadata from the database, is encoded using the Microsoft AntiXSS library.
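A hedged sketch of the encode-on-output pattern Paige describes, using the AntiXSS library she names (the helper and control names are hypothetical):

```csharp
using Microsoft.Security.Application;

// Untrusted data (here, file metadata read back from the database)
// is encoded just before it is written into HTML output.
string fileName = GetFileMetadataFromDatabase();   // hypothetical helper
outputLabel.Text = AntiXss.HtmlEncode(fileName);
```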

Michael: Good. And cookies?

Paige: Marked as HttpOnly.

Paige opens Visual Studio and shows the following line of code from the Web server:

cookie.HttpOnly = true;

Michael: Excellent! SQL statements?

Paige: Using stored procedures and parameterized queries, and denying access to underlying database objects.
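A sketch of the pattern Paige describes: a parameterized call to a stored procedure, so user input travels as typed parameters and is never concatenated into SQL text. The procedure name, parameter, and variables are hypothetical:

```csharp
using System.Data;
using System.Data.SqlClient;

// connectionString and ownerId are assumed to be defined elsewhere.
using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand("dbo.GetFileMetadata", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    // Input is bound as a typed parameter, not built into a SQL string.
    cmd.Parameters.Add("@ownerId", SqlDbType.NVarChar, 256).Value = ownerId;
    conn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        // ... read file metadata rows ...
    }
}
```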

Michael: Bryan taught you well!

Paige: Thanks. Well, that was a very useful chat. I learned a lot today.

Michael: Good to hear. We still have a great deal to cover, but I think we're off to a good start. We can look at the rest of your threat model once we've addressed the Internet back-end data security issues. But for the moment, I think we've addressed some big issues.

Michael gets up to go back to his cubicle. Then stops and turns around.

Michael: Oh, hang on. You will have a privacy statement on the Web site, right?

Paige: Yes. Our lawyers and privacy folks crafted one for us.

Michael: Good, that's another SDL requirement you've addressed.

Send your questions and comments for Security Briefs to briefs@microsoft.com.

Michael Howard is a Principal Security Program Manager at Microsoft focusing on secure process improvement and best practice. He is the coauthor of many security books including Writing Secure Code for Windows Vista, The Security Development Lifecycle, Writing Secure Code, and 19 Deadly Sins of Software Security.