This chapter looks at the way in which software security is integrated into the application life cycle and provides practical advice that will help you understand the content of later chapters.
We define the term secure application to mean an application designed with security in mind. We firmly believe that truly effective software security is achieved only when it is completely integrated into the application development process and is understood to be every bit as important as stability, performance, and feature completeness.
We recommend that you read this chapter twice. Read it now to help build a context for the technical content of the following chapters. When you have finished reading this book and have a better grasp of .NET security, read this chapter again, and consider how you can improve your development process to best implement the advice and recommendations we make.
With few exceptions, a design is produced for an application before development begins; for smaller projects, the programmers may produce the design, which may be closely related to the implementation and contain low-level technical details. Larger projects usually engage an application architect to produce a more abstract design, leaving development of its components to individual development teams.
Security is an important part of the design process and cannot be left until the implementation phase. A fully integrated security policy will provide the greatest protection against your application being subverted and simplify the process of integrating security functionality into your code. You cannot retrofit a comprehensive security model into a design.
As the application designer, you need to have an understanding of the security capabilities provided by the platform that the application uses, in the same way that you must understand the features and functions of other components, such as databases and operating systems. This knowledge is important even if you will not be involved in the implementation of the application. Where possible we have written each chapter so that an explanation of the security features offered by .NET is separate from the details of how to apply the functionality during coding; we recommend that architects working at even the most abstract levels should read the latter material.
The first step towards applying security to an application design is to identify the restricted resources and secrets, two concepts that we introduced in Chapter 1. Recall that a restricted resource is functionality to which you wish to control access, and a secret is some piece of data that you wish to hide from third parties.
Functional resources are the features that your application provides—for example, the ability to approve a loan within a banking application. These resources are easy to identify and are defined in the functional specification for the application.
External resources are those that underpin your application—for example, a database. Access to these resources should be coordinated with access to your functional resources, so that, for example, users who are unable to approve loans through a functional resource are not able to edit the database directly to achieve the same effect. This coordination illustrates the need for the wider security view that we introduced in Chapter 1.
Subversion resources do not appear significant at first glance but can be used in conjunction with a functional or external resource to subvert your application or the platform on which your application executes. For example, one such resource is the ability to write data to a file that the operating system uses to enforce security policy.
Creating the list of restricted resources associated with your application is the foundation for understanding the trust relationships that you need to define, which we discuss in the next section. We make the following suggestions to assist in developing your skills in identifying restricted resources:
Consider the way your application interacts with other systems. Think carefully about the way in which your application depends on other services. Access to some resources may need to be restricted in order to protect other systems, even though they cannot be used to subvert your application.
Apply common sense. Do not follow the business specification slavishly—as an architect, you are responsible for designing an application that satisfies all of the business and technical objectives of the project, even those that are not stated explicitly. By applying some common sense, you can often identify resources that must be restricted in order to achieve the wider objectives of your organization.
Define and follow design standards. By applying a common design methodology to all of your projects, you can create patterns of functionality that are recognized easily as restricted resources.
Open your design to review. Do not work in isolation—ask for, and act on, the comments of your colleagues. Different people think in different ways, and we have found that reviewing application designs in groups is especially effective for identifying subversion resources.
Once you have identified the restricted resources your application uses, you can define the trust levels that your application will require of its users or clients before granting access to those resources. Trust can be granted to a wide range of entities, including users, code, external libraries, and different computers.
When you trust a user, you are granting the individual an ability represented by a restricted functional resource. For example, if you were developing a banking application, you might allow loan officers to approve loan requests. .NET supports this approach with role-based security, where you define the types of user that will use the system, and consider which resources each requires to perform his work. We discuss .NET role-based security in Chapter 10.
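The role-based model described above can be sketched in a few lines of code. The sketch below is a language-agnostic Python illustration of the idea, not the .NET API (Chapter 10 covers the real .NET role-based security classes); the role and account names are hypothetical.

```python
# A minimal sketch of role-based access to a restricted functional
# resource. All names here are hypothetical illustrations, not .NET APIs.

class User:
    def __init__(self, name, roles):
        self.name = name
        self.roles = set(roles)

def approve_loan(user, loan_id):
    # The restricted functional resource: only users trusted with the
    # "LoanOfficer" role may approve loans.
    if "LoanOfficer" not in user.roles:
        raise PermissionError(f"{user.name} may not approve loans")
    return f"loan {loan_id} approved by {user.name}"

officer = User("alice", ["LoanOfficer"])
teller = User("bob", ["Teller"])
print(approve_loan(officer, 42))
```

The point of the sketch is that the restricted resource (loan approval) is guarded by a trust level (the role), rather than by checking individual user identities throughout the code.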
If you grant trust to code, or to a class library, then you grant trust to the assembly that contains the classes; we introduced assemblies in Chapter 2. .NET uses attributes of the assembly (the strong name, the publisher, etc.), known as evidence, to grant trust to assemblies. We discuss evidence in Chapter 6, and its uses in Chapter 7, Chapter 8, and Chapter 9.
When you require trust, you are in effect asking to consume the restricted resources of another application. For example, if your application depends on access to a database, then you must ensure that the database server will accept connections from your computer, and that your application is able to read and write the data it needs. Trust is a chain of relationships that extends through the services on which your application depends, your application itself, and the services or users that make use of your application.
Identifying trust is the process of examining your restricted resources and establishing how access to them should be allocated. You should also examine how the services you depend on assign trust and ensure that your application design provides for complying with their trust demands. We make the following suggestions to assist in developing your skills in identifying trust:
Assign the smallest amount of trust required to perform a task. Define levels of trust that are closely associated with the tasks that will be performed by your users, and ensure that you are granting the smallest amount of access to restricted resources that the user requires to perform her tasks. Do not group trust into large, poorly defined levels in the name of “simplicity”; such an approach will lead to a poor application design.
Consider the effect of the trust chain and the possible subversion it creates. Ensure that your application is not a tool for gaining illegal access to a more important resource. For example, if a database trusts your application to read and write data, does your application provide an easier hacking target than the database itself? Your application should not grant trust to external resources in a way that bypasses security measures implemented by those resources.
Consider the real-world organization of trust. Think about the way in which trust is assigned to the users of your application in the real world, and consider following that model as the basis for specifying roles in your application.
Consider what overlapping trust allows. Examine the trust that your application requires, and think about the effect that granting multiple trust levels will have. For example, you may have defined one trust level that grants access to read the list of customer accounts and another trust level that grants the ability to send emails. If you assign both of these trust levels to a single user, you have granted the ability to email details of the customer account outside of the company. Try to think how a user might tie together apparently unrelated resources and activities to subvert your security policies.
Document your decisions clearly. When you document the design of your application, explain why you have defined your trust levels, and clearly illustrate the way in which these trust levels should be assigned to user roles or services. Clearly documenting this information will simplify the development process and serve as an authoritative reference during application testing.
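The advice on overlapping trust can be made concrete with a simple audit over the trust levels you have defined. The Python sketch below is a hypothetical illustration (the role and permission names are invented for this example, and the customer-accounts-plus-email combination comes from the scenario above); it flags combinations of individually harmless grants that together enable data leakage.

```python
# Sketch: detecting dangerous permission overlaps when trust levels
# are combined. Role and permission names are hypothetical.

ROLE_GRANTS = {
    "AccountViewer": {"read_customer_accounts"},
    "Mailer": {"send_email"},
}

# Combinations that together enable data leakage, even though each
# permission is harmless on its own (e.g., emailing account details
# outside the company).
DANGEROUS_COMBINATIONS = [
    {"read_customer_accounts", "send_email"},
]

def combined_grants(roles):
    grants = set()
    for role in roles:
        grants |= ROLE_GRANTS[role]
    return grants

def audit(roles):
    # Return every dangerous combination enabled by this set of roles.
    grants = combined_grants(roles)
    return [combo for combo in DANGEROUS_COMBINATIONS if combo <= grants]

print(audit(["AccountViewer"]))            # safe on its own
print(audit(["AccountViewer", "Mailer"]))  # flags the overlap
```

Running such an audit whenever trust levels are assigned to a user makes the "consider what overlapping trust allows" advice a repeatable check rather than a one-time design review.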
Identifying secrets is usually a simpler process than identifying restricted resources; you must examine each type of data that your application creates or processes, and decide if that data requires protection. In Part III, we discuss the measures that you can take to protect data, but we offer the following advice to assist you in ensuring that you correctly classify your application’s data:
Consider who owns the data. If you process data that is created or supplied by another application, you should protect the data to at least the same level as that application does. Your application should not be an easy means of accessing data that is better protected elsewhere.
Consider your legal obligations. You may have legal obligations to ensure that certain information remains private, especially the personal details of your customers. Seek legal advice to establish your responsibilities.
Consider the effect of disclosure. Understand the impact of disclosing the data that your application works with, and use this information to assess what data should be a secret. Remember, sometimes data is protected for reasons of public image rather than practical considerations. For example, the damage to your company’s reputation exceeds the damage to a credit card holder if you expose his card number to the world. The credit card provider limits the cardholder’s liability, but there is no limit to the amount of damage bad publicity can do to your business.
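One practical outcome of classifying your data is that the classification can be enforced mechanically wherever data crosses a trust boundary. The sketch below is a hedged Python illustration (the field names and classification levels are hypothetical, and Part III discusses the real protection measures); it redacts fields classified as secret before they reach an untrusted destination such as a log file.

```python
# Sketch: classifying application data and redacting secrets before
# they leave a trusted boundary (e.g., in log output). Field names
# and classification levels are hypothetical.

CLASSIFICATION = {
    "customer_name": "internal",
    "card_number": "secret",    # legal and public-image exposure
    "loan_amount": "internal",
    "branch_city": "public",
}

def redact(record):
    # Replace secret fields so they are never written to logs.
    return {field: ("<redacted>" if CLASSIFICATION.get(field) == "secret"
                    else value)
            for field, value in record.items()}

record = {"customer_name": "A. Smith", "card_number": "4111111111111111"}
print(redact(record))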
You must accept that you cannot design or implement an application that is invulnerable to attack—even the best security can be broken. As part of an application design, you should always specify what actions will be taken in the event of a security breach, and define a plan of action that ensures that your application fails gracefully, and does not expose other applications and services to subversion.
It is not acceptable to simply write security events to a log file and hope that someone acts on them. As the designer of the application, you have a responsibility to minimize the risk to which you expose your clients and customers. The details of the failure plan will differ based on the purpose and complexity of the application, but should include, at a minimum, details of how security breaches are to be dealt with and what immediate actions should be taken to protect your data and restricted resources from further compromise.
The developer takes the application design from the architect and develops the classes that form the application implementation. The developer must have a good working knowledge of software security, and especially the security features provided by the development platform. This knowledge is required so that the developer can correctly program the security policy as part of the implementation process.
The developer's approach is far more concrete than the abstract approach taken during design; the broad policy laid out in the application design must be transformed into a robust and accurate implementation.
We do not suggest that the developer should follow the application design to the exclusion of everything else. As the developer, you have an obligation to assess the practicality and suitability of the application design, and the security policy the design defines. Question the design appropriately and bear in mind that your in-depth knowledge of programming security should be used to collaborate on improving a faulty design, rather than as a weapon in a political or cultural war. Nonetheless, respect the purpose of the application design, and do not deviate from it unless you have the architect’s permission—needlessly deviating from the design will lead to implementation defects, which can present unforeseen opportunities to attack and subvert your application.
The developer is often responsible for making implementation decisions, such as the strength of cryptography used to protect secrets or the way security roles are employed. There is often a temptation to adopt new and exciting technologies, which is a dangerous approach when applied to application security. Security is best established by using tried-and-tested techniques, and by using algorithms and implementations that have been subjected to extensive testing.
As you will see in later chapters, .NET security is implemented by the developer but is configured by the system administrator. You should implement your security policy to simplify the configuration wherever possible, and to use default settings that offer a reasonable level of security without any configuration at all. You cannot expect a system administrator to have the in-depth knowledge required to develop the application or the time to invest in learning the intricacies of your application. Document the default settings you have used, and explain their significance. We offer the following advice to assist you in developing applications securely:
Ensure that someone knows when you make a change. Implementing changes in isolation is likely to open security holes in your application. Components of a software system are often highly dependent on each other. Unless told of a change, other people working from the original design will assume that your components function as specified and will make implementation decisions for their own components based on those assumptions.
Do not be afraid to ask questions. You should always seek clarification when you do not understand part of the application design; many developers feel that this is a sign of “weakness,” but our experience is that confusion is often caused by lack of clarity in the design or an error made by the architect. Always make sure you understand all of the design before implementing the application.
Take the time to understand the business. You will find it easier to understand the decisions made in the application design if you take the time to understand the business problem the application is intended to solve. Remember that the architect is the “bridge” between the business problem and the technical solution, and decisions that may appear to have no technical justification are often influenced by business factors.
Do not rely on untested security libraries. Developers are usually responsible for selecting third-party tools and libraries for the application implementation. We recommend that you select security libraries from reputable companies and submit their products to your own security-testing procedure (see Section 4.3).
Apply rigorous unit testing. You should test all of the classes that you develop as part of the application implementation. This testing should not only test the expected behavior, but also make sure that unexpected inputs or actions do not expose security weaknesses. In this regard, your unit testing is a simplified form of the security testing that we prescribe below.
Remove any default accounts before deployment. It is usual to create default user accounts or trust levels that simplify unit testing; you must ensure that these are disabled or removed before the application is tested and deployed.
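The last item above lends itself to automation: a pre-deployment check can refuse to proceed while default test accounts remain. The sketch below is a hypothetical Python illustration (the account names and the check itself are invented for this example), not part of any real deployment tool.

```python
# Sketch: a pre-deployment check that blocks shipping while default
# test accounts remain enabled. Account names are hypothetical.

DEFAULT_TEST_ACCOUNTS = {"test", "admin_default", "demo"}

def deployment_blockers(accounts):
    """Return the default accounts that must be removed before deployment."""
    return sorted(set(accounts) & DEFAULT_TEST_ACCOUNTS)

leftover = deployment_blockers(["alice", "bob", "test"])
if leftover:
    print("deployment blocked: remove default accounts:", leftover)
```

Wiring such a check into the build or release process removes the reliance on someone remembering to disable the accounts by hand.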
As we stated in Chapter 1, security testing is unlike ordinary application testing—the security tester looks for ways to subvert the security of an application prior to its deployment. Effective security testing can significantly reduce the number of security defects in an application and can highlight flaws in the application design. We offer the following advice to assist you in security testing applications:
Play the part of the employee. Do not limit your simulated attacks to those you expect a hacker to make—be sure to determine if it is possible for a disgruntled employee to subvert the application security. Employees are usually assigned more trust in an application security model, which can sometimes provide easier routes of attack.
Test beyond the application itself. Your testing should include attacks on the systems on which the application depends, including database, directory, and email servers. In the case of .NET, you should also see if you can subvert your application via an attack on the runtime components. Poor configuration or a poor understanding of security functionality can often provide an avenue for an attack that can subvert the application indirectly.
Test beyond the application design. Do not fall into the trap of simply testing to ensure that the application design has been correctly implemented; this is functional testing, and it does not offer many insights into security failures.
Monitor trends in general attack strategies. Expand your range of simulated attacks by monitoring the way real attacks are performed. Your customers may furnish you with descriptions of attacks they have seen, and you can learn from the way other applications and services are attacked.
There is a growing awareness of the value of security testing, and tools have started to emerge to assist in the testing process. The first generation of tools is focused on testing the configuration of an application and the .NET runtime, but work is progressing on more complex software that will automate the application of common types of attack. See the Microsoft .NET home page for information about .NET testing tools in general and some links to security-testing tools.
The system administrator is responsible for installing and configuring the application. This task includes assigning user accounts to the roles defined by .NET role-based security (see Chapter 10) and assigning appropriate levels of trust to the assemblies that make up the application. Equally important is the configuration of the services required by the application, such as database and directory servers.
If you are the system administrator, you have an obligation to gain an understanding of how the application should be configured, and to spend the time to determine how the security configuration is best tailored to your enterprise. You have a reasonable expectation that the software publisher will provide you with a robust and functional application, and the software publisher has a reasonable expectation that you will install and configure its application by following its instructions and by applying your knowledge of the company.
Nonetheless, you should consider carefully the levels of trust that you assign to a publisher's assemblies, and ensure that you are not granting an application more permissions than it requires to perform correctly. You should also ensure that the way you configure the application, and the services that it depends on, does not compromise the security of your corporate network.
Your final obligation is to monitor the application in order to watch for security defects or breaches and to report these problems to the software publisher. See Section 4.6 for an explanation of how the management of security continues for as long as the application is in use.
Once a system administrator has installed and configured the application, it can be executed using the Common Language Runtime (CLR). The CLR is a complex piece of software, and a detailed discussion of how the CLR works is outside the scope of this book. In this section, we discuss the two facets of CLR operation that relate specifically to security.
Verification is the first aspect of the CLR we are interested in. Before executing a managed application, the CLR completes a verification process that is the first step toward enforcing .NET security. If the application is made up of strong-named assemblies (see Chapter 2 for details), then the contents of the assemblies are checked and compared with the value of the digital signature; the CLR will not execute code from any assembly that has been tampered with. If the contents of the assembly are unchanged, the CLR verifies that the code contained within the assemblies is type-safe, meaning that the code does not perform illegal operations, access memory directly, or try to access type members incorrectly.
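The integrity part of this check—comparing the contents of a strong-named assembly against the value recorded when it was signed—rests on a simple idea, sketched below in Python. This is a simplified illustration only: the real CLR verifies an RSA signature over the assembly's hash rather than comparing bare hashes.

```python
import hashlib

# Sketch of the idea behind the CLR's strong-name integrity check:
# a hash recorded when the assembly was signed is compared with a
# hash of the contents at load time. (The real check verifies an RSA
# signature over the hash; this is a simplified illustration.)

def content_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"IL code and metadata of the assembly"
recorded_hash = content_hash(original)   # stored at signing time

def verify(data: bytes, expected: str) -> bool:
    # Any change to the contents produces a different hash, so
    # tampered code is refused before it ever executes.
    return content_hash(data) == expected

print(verify(original, recorded_hash))
print(verify(b"tampered contents", recorded_hash))
```

Because the comparison happens before execution, tampering is caught at load time rather than after the modified code has already run.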
If the code contained in the assemblies is type-safe, and the contents of the assembly have not been tampered with, then the CLR examines the evidence of each assembly, and grants permissions to the code within that assembly based on the .NET security policy configuration. We discuss evidence in Chapter 6, permissions in Chapter 7, and security policies in Chapter 8.
The important fact, and one you must understand, is that the security implementation (performed by the developer) and the configuration (performed by the system administrator) are combined by the CLR when the application is started to determine how the application is allowed to execute. .NET security is not simply prescribed by the software publisher; it is something that requires the cooperation and understanding of the customer.
It is extremely difficult to develop and deploy software that is reasonably secure, and impossible to develop and deploy software that is invulnerable to attack. Reconcile yourself to the fact that your application may be subverted, and plan accordingly.
Do not stop thinking about the security of your application when you have deployed the final product, or even when the system administrator has completed the configurations and users are making use of the functionality. The impact of security lasts as long as the life of the application itself; you should be prepared to monitor for security breaches, and have a plan in place to deal with them.
An effective weapon against hacking is education; you should ensure that your customers understand how you have applied security within your application, assist them in recognizing an in-progress attack, and help them tell when a hacker has bypassed or subverted your application security. You should aim to build a relationship with your clients that makes it possible for them to report security problems to you, and endeavor to respond to such reports in a responsible and sensible manner. We believe that as a publisher of software, your responsibilities include:
If you are successful in establishing a way in which customers can report security attacks and defects, you have a responsibility to use this information to assess the impact of potential problems and act to reduce the risk to your customers and users.
You should portray the software security techniques you apply to your applications in an accurate and honest manner, and not make unreasonable or unlikely claims. You should educate your customers, ensuring that they understand that no security precautions are impervious to concerted attack, and help them to develop a plan of action to execute in the event of a successful subversion of your security measures. Such a plan should contain a reasonable approach to assessing the impact of the attack and steps to take to restore the application to service.
Your customers can benefit from knowing how other customers are being attacked and whether those attacks are successful. This information allows an informed assessment of the security risk as it affects their wider enterprise. We recommend that you be open and transparent in your handling of security matters, and make as much information available to your customers as possible.
You should use the customer reports you receive to identify security defects in your application, and issue updates or workarounds quickly, effectively and, ideally, without charge. Security defects are unlike other software defects and can expose your customer to a wide range of risks, beyond the compromise of your application; a clear and concise security fix policy can only enhance your reputation as a software publisher, and will help you to build a more stable and secure application.
The software products that we consider trustworthy are published by companies that understand security to be a necessity. These companies accept the inevitability of security defects being discovered in their products, and they act quickly and decisively to provide solutions to their customers. We strongly recommend that you do the same for your customers.