Chapter One. Psychological Acceptability Revisited
In 1987, Brian Reid wrote, “Programmer convenience is the antithesis of security, because it is going to become intruder convenience if the programmer’s account is ever compromised.”[*] This belief of the fundamental conflict between strong computer security mechanisms and usable computer systems pervades much of modern computing. According to this belief, in order to be secure, a computer system must employ security mechanisms that are sophisticated and complex—and therefore difficult to use.
Today a growing number of security researchers and practitioners realize that this belief contains an inherent contradiction. The reason has to do with the unanticipated result of increasing complexity. A fundamental precept of designing security mechanisms is that, as the mechanisms grow more complex, they become harder to configure, to manage, to maintain, and indeed even to implement correctly. Errors become more probable, thereby increasing the chances that mechanisms will be configured erroneously, mismanaged, maintained improperly, or implemented incorrectly. This weakens the security of the system. So the more complex a system is, the more secure it should be—yet the less secure it is likely to be, because of the complexity designed to add security!
Finding ways to maximize both the usability of a system and the security of a system has been a longstanding problem. Saltzer and Schroeder’s principle of psychological acceptability (see the sidebar) says that a security mechanism should not make accessing a resource, or taking some other action, more difficult than it would be if the security mechanism were not present. In practice, this principle states that a security mechanism should add as little as possible to the difficulty of the human performing some action.
Applying this principle raises a crucial issue: difficult for whom? A programmer may find setting access control permissions on a file easy; a secretary may find the same task difficult. Applying the principle of psychological acceptability requires taking into account the abilities, knowledge, and mental models of the people who will use the system. Unfortunately, on those infrequent occasions when the principle is applied, the developers often design the mechanism to meet their own expectations and models of the system. These are invariably different from the expectations and models of the system’s users, no matter whether the users are individuals at home or a team of system administrators at a large corporation.
As a result, security mechanisms are indeed cumbersome and less effective than they should be. To illustrate the problem, I focus on three examples in this chapter: passwords, patching, and configuration.
Passwords are a mechanism designed to authenticate a user—that is, to bind the identity of the user to an entity on the computer (such as a process). A password is a sequence of characters that confirms the user’s identity. If an attacker guesses the password associated with an identity, the attacker can impersonate the legitimate user with that identity.
Problems with passwords are well known; one of the earliest ARPANET RFCs warned that many passwords were easy to guess. But a well-known problem is usually an unsolved problem. Reid’s lament involved someone guessing a password on a poorly maintained system, and from there intruding upon a large number of systems at a major university. In the early 1990s, CERT announced that many attackers were using default administrative passwords to enter systems. In the early 2000s, a CERT advisory reported a “back door” account in a database system with a known password. SANS lists password selection issues as two of the current Top 20 Vulnerabilities, along with several exploits that depend upon accounts with no passwords or with passwords set by the vendor. For example, the SQLSnake/Spida Worm exploits an empty password for the default administrative account for Microsoft SQL Server.
The principle of psychological acceptability, taken literally, says that passwords should be unnecessary. But the use of passwords to protect systems adds minimal overhead for people who are using the system, provided that the passwords are easy to remember. To be effective, the passwords must also be difficult to guess. So, how can passwords be made easy to remember, yet difficult to guess?
One difficulty in solving this problem lies in balancing the ability of a human to remember a password that an attacker will find difficult to guess against the ingenuity of the attacker. The attacker has the advantage. People choose passwords that they can remember easily. Unfortunately, these are usually easy to guess. Experiments by Morris and Thompson and others were able to guess user passwords for between 25% and 80% of the users. The users typically picked dictionary words, names, and other common words. Amusingly, in one experiment, the analyst was able to determine who was dating whom, because many passwords were, or were derived from, the names of the users’ partners.
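The mechanics of such guessing are straightforward. The sketch below shows the shape of a dictionary attack in the style of the Morris and Thompson experiments; the account data, salts, and the deliberately tiny dictionary are all invented for illustration.

```python
import hashlib

def hash_password(password: str, salt: str) -> str:
    """Hash a salted password, roughly as a login system stores it."""
    return hashlib.sha256((salt + password).encode()).hexdigest()

# Hypothetical stored entries: (username, salt, hash of the chosen password).
stored = [
    ("alice", "x1", hash_password("barbara1", "x1")),
    ("bob",   "y2", hash_password("dragon",   "y2")),
]

# A tiny guessing dictionary: common words, names, and simple variants.
dictionary = ["password", "dragon", "barbara", "barbara1", "letmein"]

def guess_passwords(stored, dictionary):
    """Hash every dictionary word and compare it against every stored hash."""
    cracked = {}
    for user, salt, pw_hash in stored:
        for word in dictionary:
            if hash_password(word, salt) == pw_hash:
                cracked[user] = word
                break
    return cracked

# Both accounts fall to a five-word dictionary.
print(guess_passwords(stored, dictionary))
```

The attacker never needs to reverse the hash; a fast loop over likely candidates suffices, which is why dictionary words and names fall so quickly.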
Part of the problem is that different users have different ideas of what constitutes a password that is difficult to guess. When warned not to use names as passwords, one user changed his password to “Barbara1”. Foreign words are also common; one guessed password was a Mandarin phrase meaning “henpecked husband.” Another was the Japanese word for “security.” In the latter case, the American user was stunned when someone guessed the password quickly, because he never expected an attacker to try a Japanese word.
System administrators, system programmers, and others who have been the victims of attacks involving guessed passwords, or who run programs that guess passwords as a preventative measure, usually understand the need for passwords that are difficult to guess, and appreciate how resourceful password guessers can be. Users of home systems, who surf the Web, exchange email, write letters, print cards, and balance budgets, may or may not understand the need for good passwords, and almost always underestimate how resourceful attackers can be. The success of war driving, in which people attempt to piggyback onto wireless networks, attests to this. Most home wireless access points are left configured with default settings that allow anyone to use the network without a password and further allow the network to be administered with the default password. Many users simply plug in their equipment, notice that it works, and never bother to read the accompanying manual—let alone configure their equipment for secure operation. These users do not make this choice deliberately, and are generally unaware of the consequences.
Attempts to educate users meet with varied success. The most successful methods involve providing immediate feedback to the user, with an explanation of why the proposed password is poor. This must be done carefully. One organization circulated a memorandum describing how to select good passwords. The memo gave several examples. Attackers simply tried the passwords used in the examples, and found that several users had used them.
The proper selection of passwords is a classic human factors problem. Assigning passwords selected at random can be shown to maximize the expected time needed to guess a password. But passwords with randomly selected characters are difficult to remember. So, random passwords, and especially multiple random passwords, result in people either writing the passwords down on paper or forgetting them. Either outcome defeats the purpose of passwords. A proper selection method must somehow balance the need to remember a password with the need to make that password as random as possible.
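The two poles of that balance can be sketched as follows. A fully random character string maximizes guessing resistance but is hard to remember; a random passphrase built from a word list is one common compromise (an approach widely used, though not one prescribed here). The word list below is a toy example.

```python
import secrets
import string

def random_password(length: int = 12) -> str:
    """Uniformly random characters: maximal guessing resistance, minimal memorability."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

def passphrase(words, count: int = 4) -> str:
    """Random words from a list: less randomness per character, but memorable."""
    return "-".join(secrets.choice(words) for _ in range(count))

wordlist = ["correct", "horse", "battery", "staple", "orbit", "velvet"]
print(random_password())     # hard to guess, hard to remember
print(passphrase(wordlist))  # e.g. four hyphenated words
```

Note the use of the `secrets` module rather than `random`: password generation needs a cryptographically strong source of randomness.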
Proactive password checking subjects a user-proposed password to a number of tests to determine how likely the password is to be guessed. This is a viable approach, provided that the tests are well drawn. One potential problem is that an attacker can determine from the tests which potential passwords need not be tried. The set of potential passwords must be large enough to prevent attackers from trying them all. A USENET posting illustrated the necessity of this requirement. It described a (mock) set of characteristics of passwords that were difficult to guess. It then asserted that only one word met these criteria, so everyone had to use the same password!
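A minimal proactive checker might look like the sketch below. The specific tests and the forbidden-word list are illustrative assumptions; a real checker draws on large dictionaries and many more transformations, and must be careful that its rules do not shrink the password space too far.

```python
import string

# A tiny dictionary of forbidden words; a real checker uses a large one.
FORBIDDEN = {"password", "secret", "barbara", "dragon"}

def check_password(proposed: str) -> list:
    """Return a list of reasons the proposed password is easy to guess."""
    problems = []
    if len(proposed) < 8:
        problems.append("shorter than 8 characters")
    if proposed.lower().strip(string.digits) in FORBIDDEN:
        problems.append("a dictionary word, possibly with digits appended")
    if proposed.isdigit():
        problems.append("digits only")
    if len(set(proposed.lower())) <= 2:
        problems.append("too few distinct characters")
    return problems

print(check_password("Barbara1"))     # flags the dictionary word
print(check_password("tr0ub4dor&3"))  # passes these tests
```

Returning the reasons, not just a yes/no answer, supports the immediate, explained feedback that education efforts have found most effective.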
Various attempts to balance the needs of memory and randomness mix randomly generated passwords with human-selected passwords. One common approach, used by Microsoft, Apple, and other vendors, is to supply a “wallet” or “key ring” for passwords. The user enters her passwords, and their associated target, into the key ring, and chooses a “master password” to encipher the ring. Whenever a password is needed, the user supplies the single master password, and the system deciphers the appropriate entry in the ring. This allows the user to save many passwords at the price of remembering only one. An obvious extension allows the passwords on the key ring to be generated randomly.
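The key-ring mechanism just described can be sketched as follows. The `KeyRing` class and the `xor_stream` cipher are illustrative inventions for this sketch only; a real wallet derives its key the same way (a slow KDF such as PBKDF2) but enciphers entries with an authenticated cipher such as AES-GCM, not a toy stream construction.

```python
import hashlib
import secrets

def derive_key(master_password: str, salt: bytes) -> bytes:
    """Stretch the master password into a key (PBKDF2, as real wallets do)."""
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 200_000)

def xor_stream(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy stream cipher for illustration only -- not production cryptography."""
    out, counter = bytearray(), 0
    while len(out) < len(data):
        block = hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

class KeyRing:
    """Stores {site: enciphered password}; one master password unlocks all."""
    def __init__(self, master_password: str):
        self.salt = secrets.token_bytes(16)
        self.key = derive_key(master_password, self.salt)
        self.entries = {}

    def store(self, site: str, password: str):
        nonce = secrets.token_bytes(16)
        self.entries[site] = (nonce, xor_stream(self.key, nonce, password.encode()))

    def fetch(self, site: str) -> str:
        nonce, ciphertext = self.entries[site]
        return xor_stream(self.key, nonce, ciphertext).decode()

ring = KeyRing("one-hard-master-password")
ring.store("mail.example.com", "Xq7#pL2v")
print(ring.fetch("mail.example.com"))
```

The structure makes the trade-off visible: every stored password is only as safe as the single key derived from the master password.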
This approach tries to implement the principle of psychological acceptability by making passwords as invisible as possible. The user needs to remember only one password for all her different systems. But an attacker without access to the key ring must discover a different password for each system for that user. If the passwords are chosen randomly, and the set of possible passwords is large enough, guessing the chosen password is highly unlikely.
There are two important weaknesses to this approach. The first lies in the phrase “without access to the key ring.” If the attacker gains that access, she needs to guess only the master password to discover all the other passwords. So, the problem of password guessing has not been eliminated; it has been reduced to the user having to select one password that is difficult to guess. The second problem springs from this need. What happens if the user forgets her master password? In most implementations of the key ring, the system cannot recover the master password (because if the system can do so, an attacker can also). Hence, the user must change all passwords on the key ring, as the originals cannot be recovered either, and select a new master password.
This demonstrates a failure to meet one aspect of the principle of psychological acceptability. If the security mechanism depends upon a human, what happens if the human fails? Logic dictates that this should never happen, and if it does, it is the human’s problem. But logic must account for the frailties of human beings, and the principle of psychological acceptability speaks to human failure. How do you recover?
Another approach is to base authentication on criteria in addition to a password, such as possession of a smart card or a biometrics measurement. In principle, if a password is discovered, the attacker cannot immediately gain access to the protected system. Again, the principle of psychological acceptability comes into play; the additional requirement must be acceptable. Swiping an identification card, or entering a number displayed on a token, might be acceptable. In most cultures and computing environments, testing the DNA of the user would not be.
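A number displayed on a token is typically a time-based one-time code. The sketch below follows the general shape of RFC 6238 (TOTP), using only the standard library; the shared secret is a made-up value, and a real token provisions the secret securely.

```python
import hashlib
import hmac
import struct
import time

def one_time_code(secret: bytes, timestep: int = 30, digits: int = 6, now=None) -> str:
    """Time-based one-time code in the style of RFC 6238 (TOTP)."""
    t = time.time() if now is None else now
    counter = int(t // timestep)                      # which 30-second window
    msg = struct.pack(">Q", counter)                  # counter as 8 big-endian bytes
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"shared-token-secret"
# Token and server compute the same code from the shared secret and the clock.
print(one_time_code(secret))
```

Because token and server share only a secret and a clock, a guessed password alone no longer suffices; the attacker must also produce the current code.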
Other authentication techniques abound. The chapters in Part II of this book, Authentication Mechanisms, discuss several, including variants on passwords. The key question that one must answer in order to use the authentication techniques described in those chapters is whether the techniques balance effectiveness and usability to the satisfaction of both the users and the managers.
Correcting problems such as default or empty passwords poses problems when the vendor distributes the correction. As an example, for many years, Microsoft’s SQL Server was distributed with an empty password on its administrator account. This changed when the SQLSnake/Spida Worm exploited that empty password to acquire access to servers running that database. Microsoft issued a patch to update the server and add a password.
A patch is an update to a program or system designed to enhance its functionality or to solve an existing problem. In the context of security, it is a mechanism used to fix a security problem by updating the system. The patch, embodied in a program or script, is placed on the system to be patched, and then executed. The execution causes the system to be updated.
Ideally, patching should never be necessary. Systems should be correct and secure when delivered. But in practice, even if such systems could be created, their deployment into various environments would mean that the systems would need to be changed to meet the needs of the specific environment in which they are used. So, patching will not go away. However, it should be minimal, and as invisible as possible. Specifically, the principle of psychological acceptability implies that patching systems should require little to no intervention by the system administrator or user.
Unfortunately, several considerations make invisible patching difficult.
The first difficulty is collecting all of the necessary patches. In a homogeneous network, only one vendor’s patches need to be gathered, but a single vendor may offer a wide variety of systems. The patches for one system likely will not apply to another system. If the network has systems from many vendors, the problem of gathering and managing the patches is severe. Various tools, such as Cro-Magnon, and management schemes that combine several tools attempt to ease this task. All require configuration, maintenance, and knowledgeable system administrators.
The second difficulty is system-specific conflicts. When vendors write and test a patch, they do so for their current distribution. But customers tailor the systems to meet their needs. If the tailoring conflicts with the patch, the patch may inhibit the system from functioning correctly.
Two examples will demonstrate the problem. In the first example, a site runs a version of the Unix operating system that uses a nonstandard, but secure, mail server program. When a patch for that system is released, the system administrator updates the system programs, and then reinstalls them. This entire process is automated, so the system administrator runs two commands: one to update the source code for the system, and the other to compile and reinstall all changed programs. But, whenever the system’s standard mail server is one of the programs patched, the system administrator must reinstall the nonstandard mail server. Because of the architecture of the updating process, this requires a separate set of commands. This violates the principle of psychological acceptability, because maintaining the security mechanism (the nonstandard mail server) is a visible process. The system administrator must be aware of the updating process, and check that the standard mail server is not updated or reinstalled.
The second example comes from the world of finance. Many large brokerage houses run their own financial software. As the brokerage houses write this software themselves, and use it throughout the world, they must ensure that nothing interferes with these programs. If the programs cease to function, the houses will lose large sums of money because they will not be able to trade on the stock markets or carry out their other financial functions. When a vendor sends them a security patch, the brokerage houses dare not install that patch on their most important production systems because the patch may interfere with their programs. The vendor does not have copies of these programs, and so has no way to test for interference. Instead, the houses install the patch on a test system or network, and determine for themselves if there is a conflict. Again, the process of maintaining a secure system should be invisible to the system administrators, but because of the nature of the system, transparency means a possible violation of the availability aspects of the site’s security policy. The conflict seems irreconcilable.
This conflict is exacerbated by automatic downloading and installation of patches. On the surface, doing so makes the patching invisible. If there are no conflicts between the patch and the current configuration, this is true. But if there are conflicts, the user may find a system that does not function as expected, with no clear reason for the failure.
This happened with a recent patch for Microsoft’s Windows XP system. Service Pack 2 provided many modifications to improve both system functionality and security. Therein lay the problem. Among the enhancements was the activation of Windows Firewall, which blocks certain connections from the Internet. This meant that many servers and clients, including IIS, some FTP clients, and many games, would not function correctly. After installing the patch, users had to adjust various firewall settings to allow these programs to function as they did before the patch was installed. These are exactly the problems that the principle of psychological acceptability forbids.
In the extreme, one patch may improve security, but disable necessary features. In this case, the user must decide between an effective security mechanism and a necessary functionality. Thus, the security mechanism is as obtrusive as possible, clearly violating the principle of psychological acceptability. The best example of this is another patch that Microsoft issued to fix a vulnerability in SQL Server. This patch eliminated the vulnerability exploited by the Slammer worm, but under certain conditions interfered with correct SQL Server operations. A subsequent patch fixed the problem.
The third difficulty with automating the patching process is understanding the trustworthiness of the source. If the patch comes from the vendor, and is digitally signed using the vendor’s private key, then the contents of the patch are as trustworthy as the vendor is. But some vendors have distributed patches through less secure channels, such as USENET newsgroups or unsigned downloads. Some systems automatically check digital signatures on patches, but many others do not; faced with the choice, many users will not bother to check, either. Unverified or unverifiable patches may contain Trojan horses or other back doors designed to allow attackers entry. One such attack was demonstrated when a repository of security-related programs was broken into, and the attackers replaced a security program designed to filter network connections with one that allowed attackers to gain administrator access to the system on which it was installed. Sometimes even signatures cannot be trusted. An attacker tricked VeriSign, Inc., into issuing two certificates used to authenticate installers and ActiveX components (but not for updating Windows) to someone claiming to be from Microsoft Corporation. Although the certificates were cancelled as soon as the hoax was discovered, the attacker could have produced digitally signed fake patches during the time the newly issued certificates remained valid.
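An automated installer can at least refuse to proceed unless the patch verifies. The sketch below checks both a published checksum and a signature before installation; since a real deployment verifies a public-key signature against the vendor's code-signing certificate, the HMAC here is only a self-contained stand-in, and the patch bytes and keys are invented.

```python
import hashlib
import hmac

def verify_patch(patch_bytes: bytes, expected_sha256: str,
                 vendor_key: bytes, signature: bytes) -> bool:
    """Refuse to install unless both the checksum and the signature match.

    HMAC stands in for a public-key signature so the sketch stays
    self-contained; the structure of the check is the same.
    """
    checksum_ok = hashlib.sha256(patch_bytes).hexdigest() == expected_sha256
    signature_ok = hmac.compare_digest(
        hmac.new(vendor_key, patch_bytes, hashlib.sha256).digest(), signature)
    return checksum_ok and signature_ok

patch = b"binary patch contents"
key = b"vendor signing key"
good_sig = hmac.new(key, patch, hashlib.sha256).digest()
digest = hashlib.sha256(patch).hexdigest()

print(verify_patch(patch, digest, key, good_sig))              # genuine patch
print(verify_patch(patch + b"trojan", digest, key, good_sig))  # tampered patch
```

Note the use of `hmac.compare_digest`, which compares in constant time; an ordinary `==` on secret-derived bytes can leak information through timing.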
Finally, the need to tailor patching techniques to the level of the target audience is amply demonstrated by the problems that home users face. With most vendors, home users must go to the vendors’ web sites to learn about patches, or subscribe to an automated patch notification system. Because users rarely take such proactive actions on their own, some vendors are automating the patching mechanisms in an attempt to make these mechanisms invisible to the user. As most home users reconfigure their systems very little, this effort to satisfy the principle of psychological acceptability may work well. However, the technology is really too new for us to draw any reliable conclusions.
Building a secure system does not assure its security: the system must also be installed and operated securely. Configuration is a key component of secure installation and operation, because it constrains what users and the system processes can do in the particular environment where the system is used. For example, a computer configured to be secure in a university research environment (in which information is accessible to everyone inside the research group) would be considered nonsecure in a military environment (in which information is accessible only to those with a demonstrated need to know). Different configurations allow a system to be used securely in different environments.
The decisions about configuration settings that a vendor faces when constructing patches are, to say the least, daunting. The vendor must balance the need to take into account the security policy of the sites to which the patch will be distributed with the need to provide a minimal level of security for those sites that cannot, or do not, reconfigure an installed patch. The principle of psychological acceptability dictates that, whatever course is followed, the installers of the patch not only should be able to alter the default configuration with a minimum of effort, but also should be able to determine whether they need to alter the default configuration with a minimum of effort.
An example will illustrate the dilemma. It first arose with a system designed for academic research. One version was widely distributed with file permissions set by default to allow any user on the system to read, write, and execute files on the system. Once the system was installed, the file permissions could be reset to allow accesses appropriate to the site. This approach violated the principle of fail-safe defaults, because the system was distributed with access control permissions set to allow all accesses. It also required all system administrators to take action to protect the system. An advantage of this is that it forced administrators to develop a security policy, even if only a highly informal one. But the price was that system administrators had to apply mechanisms after the system was installed, violating the principle of psychological acceptability. Had the system been distributed with rights set to some less open configuration, system administrators would not need to act immediately to protect the system. This would have been a less egregious violation of the principle of psychological acceptability. Fortunately, for the most part, system administrators understood enough to apply the necessary changes, and knew of the need when they received the system.
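The action those administrators had to take can itself be sketched. The audit below walks a directory tree and reports files whose POSIX permission bits allow any user to write them, which is the kind of check an administrator would run immediately after installing such a permissively configured system; the paths shown are placeholders.

```python
import os
import stat

def world_writable(root: str):
    """Walk a directory tree and report files any user may write to."""
    findings = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.stat(path).st_mode
            except OSError:
                continue  # file vanished or is unreadable; skip it
            if mode & stat.S_IWOTH:  # the "other write" permission bit
                findings.append(path)
    return findings

# Example use: audit a freshly installed tree before exposing the system.
# for path in world_writable("/usr/local"):
#     print("world-writable:", path)
```

A distribution shipped with safe defaults would make this sweep unnecessary; with open defaults, it becomes the administrator's first task.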
The conflict between security and ease of use arises in configurations not related to patching. Many programs allow the user to define macros, or sequences of instructions that augment or replace standard functions. For example, Microsoft Word allows the user to take special actions upon opening a file. These actions are programmed using a powerful macro language. This language allows special-purpose documents to be constructed, text to be inserted into documents, and other useful functions. But attackers have written computer viruses and worms in this language and embedded them in documents: the Melissa virus executed when an infected file was opened using Microsoft Word. Among other actions, the virus infected a commonly used template file, so any other file referencing that template would also be infected. The benefit of added functionality brought with it an added security threat.
The solution was to allow the user to configure Microsoft Word to display a warning box before executing a macro. This box would ask the user if macros were to be enabled or disabled. Whether this solution works depends upon the user’s understanding that macros pose a threat, and the user being able to assess whether the macro is likely to be malicious given the particular file being opened. The wording and context of the warning, and the amount and quality of information it gives, is critical to help a naive user make this assessment. If macro languages must be supported, and a user can make the indicated assessment, this solution is as unobtrusive as possible and yet protects the user against macro viruses. It is an attempt to apply the principle of psychological acceptability.
The lesson that we draw from the three illustrations provided in this chapter is that the solution to the problem of developing psychologically acceptable security mechanisms depends upon the context in which those mechanisms are to be used. In an environment in which only trusted users have access to a system, simple passwords are sufficient; but in a more public environment, more complex passwords or alternate authentication mechanisms become necessary. Patches designed for a known environment can modify the system with little or no user action; patches applied in an environment different from the one for which they are designed risk creating security problems. Complex configurations lead to errors, and the less computer-savvy the users are, the worse the security problems will be.
This lesson suggests an approach to improving the current state of the art. Testing mechanisms by placing them in environments in which they will be used, and analyzing the way in which those mechanisms are used, will show potential problems quickly. But this requires using human subjects to test the mechanisms. Testing mechanisms on the populations that will actually use them provides useful data. Testing them on the programmers and designers of those mechanisms may give some insight into potential problems. However, the latter testing will not reveal the problems arising from errors in installation, configuration, and operation by users unfamiliar with the mechanisms’ design and implementation.
The principle of psychological acceptability is being applied more often now than it has been in the past. We have far to go, however. The primary problem with its current application is the range of users to which it must be applied. How can one create mechanisms that are easy to install, provide the protection mechanisms necessary, and are unobtrusive in use, for people ranging in skill from novice home computer users to system administrators who manage hundreds of computers from many different vendors? This remains an open question—one that may very well be insoluble.
Nevertheless, the current state of the art leaves room for considerable improvement.
About the Author
Matt Bishop is a professor in the Department of Computer Science at the University of California at Davis. He studies the analysis of vulnerabilities in computer systems, policy models, and formal modeling of access controls. He is active in information assurance education, is a charter member of the Colloquium on Information Systems Security Education, and has presented tutorials at many conferences. He wrote the textbooks Computer Security: Art and Science and Introduction to Computer Security (both from Addison Wesley).
[*] Brian Reid, “Reflections on Some Recent Widespread Computer Break-Ins,” Communications of the ACM 30:2 (Feb. 1987), 105.
 Jerome Saltzer and Michael Schroeder, “The Protection of Information in Computer Systems,” Proceedings of the IEEE 63:9 (1975), 1278–1308.
 Matt Bishop, Computer Security: Art and Science (Reading, MA: Addison Wesley Professional, 2003).
 Bob Metcalfe, “The Stockings Were Hung by the Chimney with Care,” RFC 602 (1973).
 Robert Morris and Ken Thompson, “Password Security: A Case History,” Communications of the ACM 22:11 (Nov. 1979), 594–597.
 Matt Bishop and Daniel Klein, “Improving System Security via Proactive Password Checking,” Computers & Security 14:3 (Apr. 1995), 233–249.
 Frans Meulenbroeks, “Rules for the Selection of Passwords,” rec.humor.funny (July 3, 1992); http://www.netfunny.com/rhf/jokes/92q3/selpass.html.
 Jeremy Bargin and Seth Taplin, “Cro-Magnon: A Patch Hunter-Gatherer,” Proceedings of the 13th LISA Conference (Nov. 1999), 87–94.
 David Ressman and John Valdés, “Use of Cfengine for Automated, Multi-Platform Software and Patch Distribution,” Proceedings of the 14th LISA Conference (Dec. 2000), 207–218.
 Microsoft Corp., “Some Programs Seem to Stop Working After You Install Windows XP Service Pack 2,” Article ID 842242 (Sept. 28, 2004); http://support.microsoft.com/default.aspx?kbid=842242.
 Microsoft Corp., “Elevation of Privilege in SQL Server Web Tasks (Q316333),” Microsoft Security Bulletin MS02-061 (Oct. 16, 2002); http://www.microsoft.com/technet/security/bulletin/MS02-061.mspx.
 Microsoft Corp., “FIX: Handle Leak Occurs in SQL Server When Service or Application Repeatedly Connects and Disconnects with Shared Memory Network Library,” Article ID 317748 (Oct. 30, 2002); http://support.microsoft.com/default.aspx?scid=kb;en-us;317748.
 Saltzer and Schroeder, 1282.
 Microsoft Corp., “Word Macro Virus Alert ‘Melissa Macro Virus’,” Article ID 224567 (Aug. 9, 2004); http://support.microsoft.com/default.aspx?scid=kb;en-us;224567.