A deceptively simple question to ask, rather more complex to answer.
When first starting out in the world of security, it can be difficult to understand, or even to know, where to look first. The successful hacks you will read about in the news paint a picture of Neo-like adversaries who have a seemingly infinite range of options open to them with which to craft their highly complex attacks. Thought about like this, security can feel like an unwinnable field that almost defies reason.
While it is true that security is a complex and ever-changing field, it is also true that there are some relatively simple first principles that, once understood, will be the undercurrent to all subsequent security knowledge you acquire. Approach security as a journey, not a destination—one that starts with a small number of fundamentals upon which you will continue to build iteratively, relating new developments back to familiar concepts.
With this in mind, and regardless of our backgrounds, it is important that we all understand some key security principles before we begin. We will also take a look at the ways in which security has traditionally been approached, and why that approach is no longer as effective as it once was now that Agile is becoming more ubiquitous.
Security for development teams tends to focus on information security (as compared to physical security like doors and walls, or personnel security like vetting procedures). Information security looks at security practices and procedures during the inception of a project, during the implementation of a system, and on through the operation of the system.
While we will be talking mostly about information security in this book, for the sake of brevity we will just use security to refer to it. If another part of the security discipline is being referred to, such as physical security, then it will be called out explicitly.
As engineers we often discuss the technology choices of our systems and their environment. Security forces us to expand past the technology. Security can perhaps best be thought of as the overlap between that technology and the people who interact with it day-to-day as shown in Figure 1-1.
So what can this picture tell us? It can be simply viewed as an illustration that security is more than just about the technology and must, in its very definition, also include people.
People don’t need technology to do bad things or to take advantage of each other; such activities happened well before computers entered our lives, and we tend to refer to them simply as crime. People have evolved for millennia to lie, cheat, and steal items of value to further themselves and their community. When people start interacting with technology, however, this becomes a potent combination of motivations, objectives, and opportunity. In these situations, certain motivated groups of people will use the concerted circumvention of technology to further some very human end goal, and it is this activity that security is tasked with preventing.
However, it should be noted that technological improvements have widened the fraternity of people who can commit such crime, whether that be by providing greater levels of instruction, or widening the reach of motivated criminals to cover worldwide services. With the internet, worldwide telecommunication, and other advances, you are much more easily attacked now than you could have been before, and for the perpetrators there is a far lower risk of getting caught. The internet and related technologies made the world a much smaller place and in doing so have made the asymmetries even starker—the costs have fallen, the paybacks increased, and the chance of being caught drastically reduced. In this new world, geographical distance to the richest targets has essentially been reduced to zero for attackers, while at the same time there is still the old established legal system of treaties and process needed for cross-jurisdictional investigations and extraditions—this aside from the varying definitions of what constitutes a computer crime in different regions. Technology and the internet also help shield perpetrators from identification: no longer do you need to be inside a bank to steal its money—you can be half a world away.
Circumvention is used deliberately to avoid any implicit moral judgments whenever insecurity is discussed.
The more technologies we have in our lives, the more opportunities we have to both use and benefit from them. The flip side of this is that society’s increasing reliance on technology creates greater opportunities, incentives, and benefits for its misuse. The greater our reliance on technology, the greater our need for that technology to be stable, safe, and available. When this stability and security comes into question, our businesses and communities suffer. The same picture can also help to illustrate this interdependence between the uptake of technology by society and the need for security in order to maintain its stability and safety, as shown in Figure 1-2.
As technology becomes ever more present in the fabric of society, the approaches taken to thinking about its security become increasingly important.
A fundamental shortcoming of classical approaches to information security is failing to recognize that people are just as important as technology. This is an area we hope to provide a fresh perspective to in this book.
There was a time that security was the exclusive worry of government and geeks. Now, with the internet being an integral part of people’s lives the world over, securing the technologies that underlie it is something that is pertinent to a larger part of society than ever before.
If you use technology, security matters because a failure in security can directly harm you and your communities.
If you build technology, you are now the champion of keeping it stable and secure so that we can improve our business and society on top of its foundation. No longer is security an area you can mentally outsource:
You are responsible for considering the security of the technology.
You provide the means for people to embrace security in their everyday lives.
Failure to accept this responsibility means the technology you build will be fundamentally flawed and fail in one of its primary functions.
Security, or securing software more specifically, is about minimizing risk. It is the field in which we attempt to reduce the likelihood that our people, systems, and data will be used in a way that would cause financial or physical harm, or damage to our organization’s reputation.
Most security practices are about preventing bad things from happening to your information or systems. But risk calculation isn’t about stopping things; it’s about understanding what could happen, and how, so that you can prioritize your improvements.
To calculate risk you need to know what things are likely to happen to your organization and your system, how likely they are to happen, and the cost of them happening. This allows you to work out how much money and effort to spend on protecting against those events.
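The calculation described above can be sketched as a toy prioritization exercise. Everything here is invented for illustration (the scenario names, the 1-5 likelihood scale, and the dollar figures); real risk models are considerably more nuanced, but the shape of the reasoning is the same: combine likelihood with cost, then rank.

```python
# Illustrative only: a minimal risk-prioritization sketch.
# Each scenario: (description, likelihood on a 1-5 scale, impact in dollars).
# The scenarios and numbers are made up for this example.
scenarios = [
    ("Credential stuffing against login", 4, 50_000),
    ("SQL injection in reporting module", 2, 200_000),
    ("Lost unencrypted laptop", 3, 75_000),
]

# A naive risk score: likelihood x impact, used only to rank scenarios.
ranked = sorted(scenarios, key=lambda s: s[1] * s[2], reverse=True)

for name, likelihood, impact in ranked:
    print(f"{name}: score {likelihood * impact}")
```

Note that even this crude ranking surfaces something useful: the scenario with the highest likelihood is not necessarily the one to address first once impact is factored in.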
Vulnerability is about exposure. Outside the security field, vulnerability is how we talk about being open to harm either physically or emotionally. In a systems and security sense, we use the word vulnerability to describe any flaw in a system, component, or process that would allow our data, systems, or people to be misused, exposed, or harmed in some way.
You may hear phrases such as, “a new vulnerability has been discovered in…software” or perhaps, “The attacker exploited a vulnerability in…” as you start to read about this area in more depth. In these examples, the vulnerability was a flaw in an application’s construction, configuration or business logic that allowed an attacker to do something outside the scope of what was authorized or intended. The exploitation of the vulnerability is the actual act of exercising the flaw itself, or the way in which the problem is taken advantage of.
Likelihood is the way we measure how easy (or likely) it is that an attacker would be able (and motivated) to exploit a vulnerability.
Likelihood is a very subjective measurement and has to take into account many different factors. In a simple risk calculation you may see this simplified down to a number, but for clarity, here are the types of things we should consider when calculating likelihood:
Do you need to be a deep technical specialist, or will a passing high-level knowledge be enough?
Does the exploit work reliably? What about over the different versions, platforms, and architectures where the vulnerability may be found? The more reliable the exploit, the less likely attacks are to cause a side effect that is noticeable: this makes it a safer exploit for an attacker to use, as it can reduce the chances of detection.
Does the exploitation of the vulnerability lend itself well to automation? This can help its inclusion in things like exploit kits or self-propagating code (worms), which means you are more likely to be subject to indiscriminate exploit attempts.
Do you need the ability to communicate directly with a particular system on a network, or a particular set of user privileges? Do you need to have already compromised one or more other parts of the system to make use of the vulnerability?
For the majority of businesses, we measure impact in terms of money lost. This could be actual theft of funds (via credit card theft or fraud, for example), or it could be cost of recovering from a breach. Cost of recovery often includes not just addressing the vulnerability, but also:
Responding to the incident itself
Repairing other systems or data that may have been damaged or destroyed
Implementing new approaches to help increase the security of the system in an effort to prevent a repeat
Increased audit, insurance, and compliance costs
Marketing costs and public relations
Increased operating costs or less favorable rates from suppliers
At the more serious end of the scale are those of us who build control systems or applications that have direct impact on human lives. In those circumstances, measuring the impact of a security issue is much more personal, and may include death and injury to individuals or groups of people.
In a world where we are rapidly moving toward automation of driving and many physical roles in society, computerized medical devices, and computers in every device in our homes, the impact of security vulnerabilities will move toward an issue of protecting people rather than just money or reputation.
We are used to the idea that we can remove the imperfections from our systems. Bugs can be squashed and inefficiencies removed by clever design. In fact, we can perfect the majority of things we build and control ourselves.
Risk is a little different.
Risk is about external influences to our systems, organizations, and people. These influences are mostly outside of our control (economists often refer to such things as externalities). They could be groups or individuals with their own motivations and plans, vendors and suppliers with their own approaches and constraints, or environmental factors.
As we don’t control risk or its causes, we can never fully avoid it. It would be an impossible and fruitless task to attempt to do so. Instead we must focus on understanding our risks, minimizing them (and their impacts) where we can, and maintaining watch across our domain for new and emerging risks.
The acceptance of risk is also perfectly OK, as long as it is mindful and the risk being accepted is understood. Blindly accepting risks, however, is a recipe for disaster and is something you should be on the lookout for, as it can occur all too easily.
While we are on this mission to minimize and mitigate risks, we also have to be aware that we live in an environment of limits and finite resources. Whether we like it or not, there are only 24 hours in a day, and we all need to sleep somewhere in that period. Our organizations all have budgets and a limited number of people and resources to throw at problems.
As a result, there are few organizations that can actually address every risk they face. Most will only be able to mitigate or reduce a small number. Once our resources are spent, we can only make a list of the risks that remain, do our best to monitor the situation, and understand the consequence of not addressing them.
The smaller our organizations are, the more acute this can be. Remember though, even the smallest teams with the smallest budget can do something. Being small or resource poor is not an excuse for doing nothing, but an opportunity to do the best you can to secure your systems, using existing technologies and skills in creative ways.
Choosing which risks to address can be hard and isn’t a perfect science. Throughout this book, we will give you tools and ideas for understanding and measuring your risks more accurately so that you can make the best use of however much time and however many resources you have.
While we would all love to believe that it would take a comic-book caliber super villain to attack us or our applications, we need to face a few truths.
There is a range of individuals and groups that could or would attempt to exploit vulnerabilities in your applications or processes. Each has their own story, motivations, and resources; and we need to know how these things come together to put our organizations at risk.
In recent years we have been caught up with using the word cyber to describe any attacker that approaches via our connected technologies or over the internet. This has led to the belief that there is only one kind of attacker and that they probably come from a nation-state actor somewhere “far away.”
The military traditionally recognized four theaters of war in which countries can legally fight: land, sea, air, and space. When nations started using the internet to interfere and interact with one another, a new theater of war was recognized: cyber, and from there the name has stuck.
Once the government started writing cyber strategies and talking about cyber crime, it was inevitable that large vendors would follow the nomenclature, and from there we have arrived at a place where it is commonplace to hear about the various cybers and their associated threats. Unfortunately cyber has become the all-encompassing marketing term used to both describe threats and brand solutions. This commercialization and over-application has had the effect of diluting the term and making it something that has become an object of derision for many in the security community. In particular, those who are more technically and/or offensively focused often use “cyber” as mockery.
While some of us (including more than one of your authors) might struggle with using the word “cyber,” it is undeniably a term well understood by nonsecurity and nontechnical people; alternative terms such as “information security,” “infosec,” “commsec,” or “digital security” are all far more opaque to many. With this in mind, if using “cyber” helps you get bigger and more important points across to those who are less familiar with the security space, or whose roles are more focused on PR and marketing, then so be it. When in more technical conversations, or when interacting with people more toward the hacker end of the infosec spectrum, be aware that using the term may devalue your message or render it moot altogether.
That’s simply not the case.
There are many types of attackers out there: the young, impetuous, and restless; automated scripts and roaming search engines looking for targets; disgruntled ex-employees; organized crime; and the politically active. The range of attackers is far more varied than the “cyber” label would lead us to believe.
When you are trying to examine the attackers that might be interested in your organization you must consider both your organization’s systems and its people. When you do this, there are three different aspects of the attacker’s profile or persona worth considering:
Their motivations and objectives (why they want to attack and what they hope to gain)
Their resources (what they can do, what they can use to do it, and the time they have available to invest)
Their access (what they can get hold of, into, or information from)
When we try to understand which attacker profiles our organization should protect against, how likely each is to attack, and what impact it would have, we have to look at all of these attributes in the context of our organization, its values, practices, and operations.
Security is how we achieve this, and we get it by upholding a set of values.
Before anything else, stop for a second and understand what it is that you are trying to secure, what are the crown jewels in your world, and where are they kept? It is surprising how many people embark on their security adventure without this understanding, and as such waste a lot of time and money trying to protect the wrong things.
Every field has its traditional acronyms, and confidentiality, integrity, and availability (CIA) is a treasure in traditional security fields. It is used to describe and remember the three tenets of secure systems—the features that we strive to protect.
There are very few systems now that allow all people to do all things. We separate our application users into roles and responsibilities. We want to ensure that only those people we can trust, who have authenticated and been authorized to act, can access and interact with our data.
Maintaining this control is the essence of confidentiality.
When taking responsibility for this data, we do so under the assumption that we will keep it in a controlled state: from the moment we are entrusted with data, we understand and can control the ways in which it is modified (who can change it, when it can be changed, and in what ways). Maintaining data integrity is not about keeping data preserved and unchanged; it is about subjecting it to a controlled and predictable set of actions such that we understand and preserve its current state.
A system that can’t be accessed or used in the way that it was intended is no use to anyone. Our businesses and lives rely on our ability to interact with and access data and systems on a nearly continuous basis.
The cynical (rather than witty) among us will say that to secure a system well, we should power it down, encase it in concrete, and drop it to the bottom of the ocean. This, however, wouldn’t really help us meet the requirement for availability.
Security requires that we keep our data, systems, and people safe without getting in the way of interacting with them.
This means finding a balance between the controls (or measures) we take to restrict access or protect information and the functionality we expose to our users as part of our application. As we will discuss, it is this balance that provides a big challenge in our information sharing and always-connected society.
Nonrepudiation is proof of both the origin and integrity of data; put another way, it is the assurance that an activity cannot be denied as having taken place. Nonrepudiation is the counterpart to auditability; taken together, they provide the foundation upon which every activity in our system, every change and every task, should be traceable to an individual or an authorized action.
This mechanism of linking activity to a usage narrative or an individual’s behavior gives us the ability to tell the story of our data. We can recreate and step through the changes and accesses made, and build a timeline. This timeline can help us identify suspicious activity, investigate security incidents or misuse, and even debug functional flaws in our systems.
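As a loose illustration of how activity can be made traceable and tamper-evident, here is a minimal hash-chained audit log sketch. The function names and record structure are assumptions made for this example; a production system would also need digital signatures, trusted timestamps, and protected append-only storage to support genuine nonrepudiation.

```python
# Illustrative sketch of a tamper-evident audit trail using a hash chain.
import hashlib
import json

def append_entry(log, actor, action):
    """Append an entry whose hash covers the previous entry's hash,
    so any later modification breaks the chain."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {"actor": actor, "action": action, "prev": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(record)
    return log

def verify(log):
    """Recompute each hash in order; return False if any entry was altered."""
    prev = "0" * 64
    for entry in log:
        record = {"actor": entry["actor"], "action": entry["action"], "prev": prev}
        payload = json.dumps(record, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, "alice", "update record 42")
append_entry(log, "bob", "export report")
print(verify(log))                          # True: chain is intact
log[0]["action"] = "nothing to see here"
print(verify(log))                          # False: tampering is detectable
```

The chaining is what turns a plain list of events into a timeline you can trust enough to investigate incidents with: altering any past entry invalidates every hash that follows it.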
One of the main drivers for security programs in many organizations is compliance with legal or industry-specific regulatory frameworks. These dictate how our businesses are required to operate, and how we need to design, build, and operate our systems.
Love them or hate them, regulations have been—and continue to be—the catalyst for security change, and often provide us with the management buy-in and support that we need to drive security initiatives and changes. Compliance “musts” can sometimes be the only way to convince people to do some of the tough but necessary things required for security and privacy.
Something to be eyes-wide-open about from the outset is that compliance and regulation are related but distinct from security. You can be compliant and insecure, as well as secure and noncompliant. In an ideal world, you will be both compliant and secure; however, it is worth noting that one does not necessarily ensure the other.
These concepts are so important, in fact, that Chapter 14, Compliance, is devoted to them.
What follows is an (almost certainly incomplete) collection of common misconceptions people have about security. Once you start looking for them, you will see them exhibited with worrying frequency, not only in the tech industry, but also in the mass media and your workplace in general.
Security is not black and white; however, the concept of something being secure or insecure is one that is chased and stated countless times per day. For any sufficiently complex system, a statement of (in)security in an absolute sense is incredibly difficult, if not impossible, to make, as it all depends on context.
The goal of a secure system is to ensure the appropriate level of control is put in place to mitigate the threats you see are relevant to that system’s use case. If the use case changes, so do the controls that are needed to render that system secure. Likewise, if the threats the system faces change, the controls must evolve to take into account the changes.
Security from who? Security against what? and How could that mitigation be circumvented? are all questions that should be on the tip of your tongue when considering the security of any system.
No organization or system will ever be “secure.” There is no merit badge to earn, and nobody will come and tell you that your security work is done and you can go home now. Security is a culture, a lifestyle choice if you prefer, and a continuous approach to understanding and reacting to the world around us. This world and its influences on us are always changing, and so must we.
It is much more useful to think of security as a vector to follow rather than a point to be reached. Vectors have a magnitude and a direction, and you should think about the direction you want to go in pursuit of security and how fast you’d like to travel. It is a path you will continue to walk forever.
The classic security focus is summed up by the old joke about two men out hunting when they stumble upon a lion. The first man stops to do up his shoes, and the second turns to him and cries, “Are you crazy? You can’t outrun a lion.” The first man replies, “I don’t have to outrun the lion. I just have to outrun you.”
Your systems will be secure if the majority of attackers would get more benefit by attacking somebody else. For most organizations, actually affecting the attackers’ motivations or behavior is impossible, so your best defense is to make it so difficult or expensive to attack you that it’s not worth it.
Security tools, threats, and approaches are always evolving. Just look at how software development has changed in the last five years. Think about how many new languages and libraries have been released and how many conferences and papers have been presented with new ideas. Security is no different. Both the offensive (attacker) security worlds and the defensive are continually updating their approaches and developing new techniques. As quickly as the attackers discover a new vulnerability and weaponize it, the defenders spring to action and develop mitigations and patches. It’s a field where you can’t stop learning or trying, much like software development.
Despite the rush of vendors and specialists available to bring security to your organization and systems, the real truth is you don’t need anything special to get started with security. Very few of the best security specialists have a certificate or special status that says they have passed a test; they just live and breathe their subject every day. Doing security is about attitude, culture, and approach. Don’t wait for the perfect time, tool, or training course to get started. Just do something.
As you progress on your security journey, you will inevitably be confronted by vendors who want to sell you all kinds of solutions that will do the security for you. While there are many tools that can make meaningful contributions to your overall security, don’t fall into the trap of adding to an ever-growing pile. Complexity is the enemy of security, and more things almost always means more complexity (even if those things are security things). A rule of thumb followed by one of the authors of this book is not to add a new solution unless it lets you decommission two, which may be something to keep in mind.
If you picked up this book, there is a good chance that you are either a developer who wants to know more about this security thing, or you are a security geek who feels you should learn some more about this Agile thing you hear all the developers rabbiting on about. (If you don’t fall into either of these groups, then we’ll assume you have your own, damn fine reasons for reading an Agile security book and just leave it at that.)
One of the main motivations for writing this book was that, despite the need for developers and security practitioners to deeply understand each other’s rationale, motivations, and goals, the reality we have observed over the years is that such understanding (and dare we say empathy) is rarely the case. What’s more, things often go beyond merely not quite understanding each other and step into the realm of actively trying to minimize interactions; or worse, actively undermining the efforts of their counterparts.
It is our hope that some of the perspectives and experience captured in this book will help remove some of the misunderstandings, and potentially even distrust, that exist between developers and security practitioners, and shine a light into what the others do and why.