Security Watch

Help Wanted—Need "People" People
I’m not really a people person. I’ve said so for years. My old college roommate, Joe, had a better way to put it, but the way he said it is unfortunately not suitable for print. Frankly, not being a people person is a good part of the reason I went into IT in the first place. People don’t seem to ever do what I want them to do, so I found it more comfortable to work with computers. They are easier to control.
This basic realization, however, has lately started worrying me a lot. Not being a people person does not bother me personally all that much. The problem is that I seem to be working in an industry full of people like me. We all seem to have gotten into computers because we were not quite what you may call "normal." My long-lost friend Rob Kolstad once had a very good way to describe Linux users that epitomizes the problem: "What do you mean there is no driver for that printer? Guess I’ll just have to write one." While many of us probably would not resort to writing drivers, we certainly would not think twice about tweaking the parameters of one, and most people who write software today could probably write a driver—whether they could write one without a security hole is a different matter though.
Users Are Not Like Us
Unfortunately, most users are not really like us, and were we to build computers and software solely for people like us we would face certain market problems when trying to actually sell them. Microsoft has, quite frankly, made a fortune because the company figured out early on how to make software that regular people could use. Perhaps even more importantly, Microsoft figured out how to build an ecosystem that allowed third-party developers to do the same. Are we there yet? Not by a long shot, but we (the IT industry) have certainly made great strides.
The strange thing, though, is that few of us "computer people" seem to think much of the people who are to use the services we provide, who are to buy the software we are selling, and who are to actually use the computers we manage as part of the corporate, school, home, or friendsneighborsandpeoplewemeetinthesupermarket IT department. This is in spite of the fact that those very same people are the ones who actually pay for the lube on our propeller hats. I have been delivering presentations to IT professionals and developers for the better part of 10 years. One surefire way to get a laugh from the audience is to joke about how much easier it would be to run systems and write software if we did not have all these pesky users.
Nowhere is this presenting a bigger problem than in the security niche of the general IT industry. While we make it clear that a particular vulnerability requires user interaction, we also know that in most cases, getting users to interact is easy. We just have to offer them something they value more than their private or corporate data, like chocolate or naked celebrities. In spite of the fact that we all know this, very little seems to be done about helping users get better at security. Instead, we give up and claim that users just can’t be taught. However, as an industry, IT will never fully succeed unless we can get users on our side.
For the past 20 years, the security industry has done little beyond just reacting to the bad guys, not proactively doing much to stop them. We place the bar in a certain place, and the bad guys jump over it. So we raise the bar, and the bad guys are stumped for a while, but eventually they jump over it again. It feels like we are stuck in an old BASIC program:
10 Detect new attacks
20 react_to_attack_by_raising_the_bar
30 goto 10
We are not making any real headway. Instead of simply reacting to the latest attack, we need to finally stop just raising the bar and instead move it to a different playing field: one where we get to write the rules!
To be fair, though, the reactive approach has led to some successes. For instance, even some industry followers are now conceding that Microsoft is getting ahead on security. But it has also created some new problems, as pointed out by Marcus Ranum a few months ago in his article "The Six Dumbest Ideas in Computer Security." Ranum discusses misguided, though commonly used, attempts and procedures to control attacks on your system. By and large, I really do agree with almost everything Marcus says. However, the part about giving up on users bothers me to no end. I agree that we need to work on preventing users from shooting themselves in the foot. The problem, though, is that we can’t always do that—at least not if we still want to allow people to do what they need or want to do with their computers. For instance, to block Web and e-mail attacks we would really need to block Web and e-mail entirely. In other words, to do what Marcus recommends we would, among other things, need to block TCP ports 80, 443, 110, 25, and 143 on the firewall in all directions, as well as all tunneled versions of the typical protocols on those ports. Are you willing to do that?
Tech Solutions Will Fail
Some players in the IT industry would have you believe the solution is anti-virus, anti-spyware, anti-rootkits and other things that collectively fall under the heading of anti-malware. None of that provides a complete solution in the long run, for several reasons. First, they will only ever be able to find things they are programmed to find, and until someone tells them to find something, they won’t. Second, there will always be a contingent of computers that are out of date for one reason or another. With the interdependencies we have in systems today, those computers endanger the rest of us.
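The "they will only ever find what they are programmed to find" problem can be sketched in a few lines. This is a toy illustration of signature matching, not any real product's detection engine; the signatures and payloads are entirely made up:

```python
# Toy signature scanner: it can only flag byte patterns it has been told
# about. Any variant it has not yet been taught simply passes.

KNOWN_SIGNATURES = {
    b"EVIL_PAYLOAD_V1",  # hypothetical signature of a known piece of malware
    b"EVIL_PAYLOAD_V2",
}

def scan(data: bytes) -> bool:
    """Return True if any known signature appears in the data."""
    return any(sig in data for sig in KNOWN_SIGNATURES)

known = b"...header...EVIL_PAYLOAD_V1...rest of file..."
mutated = b"...header...EVIL_PAYL0AD_V1...rest of file..."  # one byte changed

print(scan(known))    # True: the scanner was programmed to find this
print(scan(mutated))  # False: a trivially mutated variant slips through
```

A one-byte mutation is all it takes to defeat this kind of matching, which is why signature databases are perpetually playing catch-up with targeted or mutating attacks.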
Finally, anti-malware will not be able to stop anyone from voluntarily entering their passwords on a Web site that really does look like eBay, but is actually hosted in some teenager’s closet. Clever anti-phishing techniques may be able to warn users, but they won’t stop them. Nor will anti-malware prevent anyone from uploading sensitive organizational information to a hostile nation-state. In the next couple of years, anti-malware will probably come to be universally perceived as a good mechanism for protecting against known attacks, but not of much use against targeted or mutating attacks.
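To make the warn-but-not-stop point concrete, here is a toy look-alike-hostname heuristic. The trusted list, the similarity threshold, and the hostnames are all hypothetical; real anti-phishing checks are far more sophisticated, but they share the same limitation—the output is a warning, and the user still decides whether to type in the password:

```python
# Toy anti-phishing heuristic: warn when a hostname closely resembles,
# but is not, a trusted site. It can only warn; it cannot stop the click.
from difflib import SequenceMatcher

TRUSTED_SITES = ["ebay.com", "woodgrovebank.com"]  # hypothetical trusted list

def looks_like_phish(host: str, threshold: float = 0.8) -> bool:
    """Return True if host is suspiciously similar to a trusted site."""
    for trusted in TRUSTED_SITES:
        similarity = SequenceMatcher(None, host, trusted).ratio()
        if host != trusted and similarity >= threshold:
            return True
    return False

print(looks_like_phish("ebay.com"))  # False: the genuine site
print(looks_like_phish("e8ay.com"))  # True: warn the user, nothing more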
Other players in the industry would have you believe that things like outbound host-based firewalls will be the panacea. That would probably qualify as the seventh dumbest idea in security. These firewalls will not stop the vast majority of real attacks. First, they are still port-based, and the attacks are not. The attacks occur in the application layer and above. The fact that attack traffic goes over port 80 does not make it legitimate. Second, this argument assumes we can contain a host that has already been compromised, on that host! This would be a violation of the first immutable law of information security: "If a bad guy can persuade you to run his program on your computer, it’s not your computer anymore". The firewall may block current attacks, but that misses the point—we have to start thinking about how to block future attacks. Finally, host-based outbound firewalls are a perfect example of why pure technical solutions can fail. These firewalls ask users intelligent questions, such as the one you see in Figure 1.
Figure 1: What We Show the User
The problem is that these dialog boxes were not exactly written by people people. They were written by propeller heads, for propeller heads, because the propeller heads typically do not know any real people. When the average user is confronted with this dialog, he does not actually see it at all. What he sees is a lot like Figure 2.
Figure 2: What the User Actually Sees
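Returning to the port-based argument: a minimal sketch shows why "the traffic goes over port 80" tells a firewall nothing about legitimacy. The rule set and payloads here are invented for illustration:

```python
# Toy port-based outbound filter: the decision looks only at the destination
# port, so the payload is never inspected at all.

ALLOWED_PORTS = {80, 443, 25}  # hypothetical outbound policy

def port_filter(dst_port: int, payload: bytes) -> bool:
    """Return True if the traffic is allowed. Note: payload is ignored."""
    return dst_port in ALLOWED_PORTS

legit = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"
exfil = b"POST /drop HTTP/1.1\r\n\r\n" + b"...sensitive corporate data..."

print(port_filter(80, legit))  # True
print(port_filter(80, exfil))  # True: same verdict; the attack rides port 80
```

Because the application-layer content is invisible to a port-based rule, legitimate browsing and data exfiltration over port 80 get the identical verdict.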
For all these reasons, technical solutions to what are essentially people problems do not work very well. They may help technical people, but technical people do not need the help. In order to be successful the protection needs to sit as close to the problem as possible. Trying to use one object to protect another object generally will fail because only the object at risk can fully understand the intricacies of the risk. Objects, including people, therefore need to be empowered to protect themselves. A world where every object relies on some other object for its protection will ultimately fall down. At a deep technical level, many, if not most, buffer overflow vulnerabilities came about for this very reason—some object assumed it was protected by some other object.
Users Are the Problem
What this means is that even as we get better at securing the systems themselves, we are left with one fundamental problem: users are no smarter about security than they ever were. It was, and remains, far easier to hack people than it ever was to hack computers. The reason most criminals (with notable exceptions, such as convicted computer criminal Kevin Mitnick) have not taken to doing so on a massive scale may be because they went into IT for the same reason I did. But systems are getting better. There are fewer technical vulnerabilities to exploit and we are finding out about them sooner. All in all, this enables us to correct technical security problems quite quickly.
On the contrary, it seems like it is getting easier to hack people—or maybe I am just now realizing how easy and profitable it can be. Fifteen years ago, users had never heard of the Internet, nor had they done online banking, received an e-mail message telling them to click a link to select a password for their new credit card, entered their social security number or a credit card verification code on a Web site, or read their e-mail while sitting at Starbucks. These things were clearly beyond their world view back then. Now these are all common activities. People do not think twice about things that would have raised all sorts of alarms 15 years ago. Think about it. In 1991, what would your answer have been to the following request: "Go to cheaploans.woodgrovebank.com and fill out our easy application form, including your checking account number, your social security number, and your home address, and we will run a credit report and e-mail you three offers for a new mortgage in less than five minutes."
In a way, by enabling all these wonderful things online, we have also desensitized people to things that in many cases may be extremely malicious. By taking people out of their comfort zone, out of the cognitive space where they understood their world, we have put them into a world that feels thoroughly alien to them, where nothing seems out of the ordinary; a world where they have stopped questioning the legitimacy of anything that happens to them. Cognitive scientists claim that our cognitive faculties evolved in caves, where our major concerns were staying out of reach of the saber-toothed cat, killing a mammoth to get food, and finding a mate to club and drag home. I do not entirely agree with cognitive science, but it is clear that human cognitive faculties certainly did not evolve to deal with phishing attacks or a "Vulnerability in the Graphics Rendering Engine [which] Could Allow Remote Code Execution." In addition, by cultivating an industry of fear, the security industry has done little to help people cope with the new threats they are facing.
Like most people, I have burning thoughts that keep me awake at night. I think about the notion that all the indicators are pointing toward a disastrous showdown—a disastrous showdown which we are already starting to see in the phishing attacks that hit our mailboxes every day. Phishing is not a technical attack. It is a wetware attack—an attack against people. We can certainly stop it with technical means, but frankly, nobody wants to. Doing so would require us to shut off e-mail, or Web access, or both. That’s a losing proposition.
I firmly believe that writing off people is wrong. People are incredibly smart when you get right down to it. They have learned some extremely complicated things, like walking, talking, reading, even driving cars without crashing into things all that often. There is no reason to believe they could not be taught to make more intelligent security decisions. I am not saying they should become security experts; only that they need to learn that sending a blank, signed check to an unknown recipient is probably not a good idea. There is, of course, the possibility that we could embody this expertise in a computer program—an expert system—and have it make decisions on behalf of users. However, during the late 1990s expert systems were shown to usually end up as a more complicated abstraction than the real world they were intended to model. Besides, scientific research by Michael Davern (see, for example, his article "Discovering Value and Realizing Potential from Information Technology Investments") and others showed that users typically overrode the decisions made by the system anyway.
In order to develop solutions to these problems, we need to figure out what to tell people to help them make intelligent security decisions. What do IT people need to tell their users, as well as friends, family, neighbors, and the guy at Starbucks, about security? And how should they do it? Solving this will take real experts in cognitive psychology. We need to figure out how to transfer expertise from the people who understand security to those who do not. Maybe it is time for that scientific field to pick up the gauntlet and solve these problems. We need leadership in this area, and right now it worries me greatly that I am not seeing it.
The real solution, therefore, is that we—the people who design, write, implement, and manage software—have to learn how to deal with people. That is the only way we will be able to help people defend themselves. Defending themselves is the only way people can be safe. People are the new frontline in the infosec battle. Like it or not, that is where we are heading. Take passwords, for instance. We can solve the technical issues with passwords. What we have not figured out yet is how to solve the biggest problem of all—the fact that if I want someone’s password, the easiest way to get it remains to simply ask the person for it. Done right, there is an 80 percent or better chance I will get it.
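The claim that the technical side of passwords is solvable is worth a concrete sketch. Salted, deliberately slow hashing—here via PBKDF2 from Python's standard library, with parameter choices that are illustrative rather than a recommendation—means a stolen password database does not directly reveal the passwords. None of it helps once a user simply tells someone the password:

```python
# Sketch of the solvable "technical issues" side of passwords: store a
# salted, slow hash instead of the password itself.
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted PBKDF2 hash; return (salt, digest) for storage."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password, salt, digest):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, digest))  # True
print(verify("wrong guess", salt, digest))                   # False
```

The unsolved problem is the one in the paragraph above: no amount of hashing protects a password the user hands over voluntarily.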
How to proceed, then? Well, the sad fact is that the IT industry by and large won’t. There are some signs of improvement, such as an increase in the guidance available around selecting and managing passwords (see the documentation at "Password checker," for example). However, much of the IT industry does not consider it its job to solve these types of problems. The job of the IT industry is simply to improve the technical security of its products and to provide products that solve technical security problems. As a society, however, we must secure the ecosystem itself. That part is nobody’s job, since the success of a particular product is not clearly tied to it. It is hard to argue that Microsoft will sell more copies of Windows® by securing people. Oracle will not sell more support contracts by making people unbreakable. And Symantec certainly would not sell more copies of Norton Internet Security if people were better able to protect themselves. While the benefits of an ecosystem that can protect itself are available to all, the costs of getting us there are not clearly assignable to any current players in the marketplace. We are, by and large, left without much leadership.
As the IT industry solves its security problems, the battlefield changes. On this new battlefield, the generals have left, and the troops are unaware that they have even joined a battle; they have no weapons and little body armor.
Jesper Johansson is a senior security strategist with Microsoft, focusing on how customers can best deploy Microsoft products securely. He has a Ph.D. in MIS and has delivered speeches on security at conferences all over the world.
© 2008 Microsoft Corporation and CMP Media, LLC. All rights reserved; reproduction in part or in whole without permission is prohibited.