Chapter 1. Introducing Vulnerability Management
Vulnerability management is one of the foundational practices of an effective cybersecurity program. It focuses on identifying, classifying, prioritizing, remediating, and mitigating vulnerabilities in software and hardware systems. A complete vulnerability management program accomplishes more than just detection. It establishes a proactive approach to security, addressing known weaknesses before attackers can exploit them rather than reacting after the fact. By continuously scanning for, analyzing, and addressing vulnerabilities, it helps organizations significantly reduce their attack surface and safeguard critical data and network infrastructure.
New threats constantly emerge, and new exposures are discovered daily, making vulnerability management a continuous process rather than a one-time undertaking. Building a vulnerability management program has always been, and still is, crucial because vulnerabilities pose a significant risk when left unaddressed or poorly managed. Unmitigated vulnerabilities can lead to unauthorized access, data breaches, and system failures, any of which can have catastrophic effects on business operations and data protection.
Cybersecurity constantly evolves, and the impact of vulnerabilities extends far beyond immediate security concerns; vulnerabilities can disrupt business productivity, stymie operations, erode customer trust, and ultimately result in substantial financial losses.
A Brief History of Vulnerability Management
Vulnerability management has undergone significant transformations over the years, evolving with technological changes and the maturation of the cybersecurity industry. In its infancy, cybersecurity was predominantly concerned with physical security and basic network protection. Early approaches to identifying and managing vulnerabilities were rudimentary, focusing on immediate threats using the limited tools and techniques available. This nascent stage laid the groundwork for what would, in time, become a complex discipline and part of a more holistic approach to cybersecurity.
The focus of vulnerability management shifted dramatically as the internet emerged and the global population grew increasingly connected. The internet has made it easier than ever for individuals to share information, allowing data to travel in the blink of an eye. As this technology became ubiquitous, threats and cybercriminals evolved. Attacks were no longer the result of a single malicious actor whose actions affected one organization at a time. Attacks became broader, and threat actors grew bolder and more organized.
The surge in cybercrime started with the Morris Worm in 1988, the first major multiorganizational attack, which exploited known vulnerabilities and impacted thousands of computers. Similar attacks followed, with the ILOVEYOU virus in 2000 using email as a vector and WannaCry ransomware devastating hundreds of thousands of unpatched computers in 2017. Attacks were not purely limited to malware. Equifax’s 2017 data breach, for example, stemmed from attackers exploiting an unpatched vulnerability and stealing the personal data of approximately 147 million people. Each of these attacks could have been averted if vulnerabilities had been properly managed.
The threat landscape was only part of the catalyst for the growth of vulnerability management. Industry standards and regulations also evolved to help manage emerging threats. Establishing dedicated cybersecurity organizations like the Computer Emergency Response Team (CERT) and creating the first vulnerability databases introduced systematic approaches for identifying, reporting, and managing vulnerabilities. Similarly, government-led initiatives and regulations, including the General Data Protection Regulation (GDPR) and the Health Insurance Portability and Accountability Act (HIPAA), have heavily influenced the development and implementation of industry-wide vulnerability management practices.
However, the focus on regulatory compliance still overlooked some vulnerabilities that exposed organizations to the risk of a breach. Vulnerability management naturally supported compliance, but too many organizations treated the compliance process as a checklist rather than as a guide for structuring a stronger cybersecurity program.
Tracking Vulnerabilities
Vulnerability tracking is one of many critical tasks in vulnerability management. It started with tools that employed various methods and systems to systematically identify and monitor potential security flaws. Each method was tailored to address different aspects of the vulnerability management process. These methods include automated scanning of networks, applications, and systems to detect known vulnerabilities based on signatures or heuristics. The tracking process then grew to include configuration management tools to assess systems against established security benchmarks and identify deviations that may pose risks.
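To make the signature-matching approach concrete, the short Python sketch below compares an installed software inventory against a toy feed of known-vulnerable versions. Every identifier, package name, and version here is invented for illustration; real scanners consume far richer signature data than a single fix version per package.

from dataclasses import dataclass

@dataclass
class Signature:
    cve_id: str      # identifier of the tracked vulnerability (hypothetical here)
    package: str     # affected software package
    fixed_in: tuple  # first version containing the fix

def parse_version(version):
    """Turn a dotted version string such as '2.4.49' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def find_matches(inventory, signatures):
    """Report every signature whose package is installed below the fix version."""
    findings = []
    for sig in signatures:
        installed = inventory.get(sig.package)
        if installed is not None and parse_version(installed) < sig.fixed_in:
            findings.append((sig.cve_id, sig.package, installed))
    return findings

# Hypothetical inventory and signature feed.
inventory = {"examplehttpd": "2.4.49", "examplessl": "3.0.8"}
signatures = [Signature("CVE-AAAA-0001", "examplehttpd", (2, 4, 51))]

for cve_id, package, version in find_matches(inventory, signatures):
    print(f"{cve_id}: {package} {version} is below the fixed version")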
Later evolutions integrated threat intelligence platforms, which correlate vulnerability data with active threats in the wild, providing contextual insights that enhance the understanding and categorization of vulnerabilities. These tools and techniques pinpoint and categorize vulnerabilities to facilitate communication within the cybersecurity community. This categorization is essential for prioritizing response efforts and effectively conveying the severity and implications of vulnerabilities to stakeholders.
Understanding CVEs
Common Vulnerabilities and Exposures (CVEs) were introduced by the MITRE Corporation in 1999 to standardize the tracking and classification of vulnerabilities. Each CVE is a unique identifier assigned to a specific, publicly disclosed security vulnerability. The standardized structure of CVEs is crucial for effective vulnerability management, as it allows for precise and consistent communication about specific vulnerabilities across different platforms and organizations globally.
The role of CVEs extends beyond identification. CVEs provide crucial information for managing and prioritizing security threats. They also establish a universally recognized reference point, helping facilitate quicker decisions about how to address and mitigate risks. This organizes and streamlines response efforts, ensuring that the most critical vulnerabilities receive immediate attention.
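As a concrete illustration of the identifier format, the minimal Python sketch below validates and parses CVE IDs of the documented CVE-YYYY-NNNN form, where the sequence number is four or more digits; the sample IDs are only examples.

import re

# CVE identifiers: "CVE-", a four-digit year, then a sequence of 4+ digits.
CVE_PATTERN = re.compile(r"^CVE-(\d{4})-(\d{4,})$")

def parse_cve(identifier):
    """Return (year, sequence_number) for a well-formed CVE ID, else None."""
    match = CVE_PATTERN.match(identifier)
    if match is None:
        return None
    return int(match.group(1)), int(match.group(2))

print(parse_cve("CVE-2021-44228"))  # (2021, 44228)
print(parse_cve("not-a-cve"))       # None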
Understanding CVSS Scores
While CVEs help standardize the discussion of vulnerabilities, the Common Vulnerability Scoring System (CVSS) provides a standard method for assessing and scoring their severity. The CVSS standardizes how the impact, complexity, and exploitability of vulnerabilities are evaluated by assigning a numerical score from 0 to 10. This system has several components, starting with a base score that measures the intrinsic qualities of a vulnerability. Additionally, temporal and environmental scores account for factors that change over time or vary across user environments.
Higher scores indicate more severe vulnerabilities that have a more significant impact and are easier to exploit. Teams often prioritize these vulnerabilities for remediation over lower-scoring vulnerabilities that are either harder to exploit or far less impactful.
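To illustrate how the numerical scale translates into action, here is a minimal Python sketch that maps a CVSS v3.1 base score to the qualitative rating bands published in the CVSS v3.1 specification and sorts hypothetical findings by score. Real prioritization would weigh far more context than a single number.

def cvss_severity(score):
    """Map a CVSS v3.1 base score (0.0-10.0) to its qualitative rating."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

# Hypothetical findings, sorted so the highest-scoring items surface first.
findings = {"CVE-AAAA-0001": 9.8, "CVE-AAAA-0002": 5.3, "CVE-AAAA-0003": 7.5}
for cve_id, score in sorted(findings.items(), key=lambda kv: -kv[1]):
    print(f"{cve_id}: {score} ({cvss_severity(score)})")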
The CVSS is integral to organizations making informed decisions about remediation priorities. It helps companies efficiently allocate their resources, allowing them to focus on patching or mitigating vulnerabilities that pose the most significant risk.
Modern Approaches
CVSS scores are not the only way to determine the risk of a given vulnerability. The Exploit Prediction Scoring System (EPSS) model was created to estimate the probability that a software vulnerability will be exploited in the wild. It leverages historical exploit data and the characteristics of vulnerabilities to provide a score that helps organizations prioritize vulnerabilities based on their actual risk of being exploited. While the EPSS model is a valuable step forward, its usefulness depends heavily on the accuracy and completeness of the data used to generate the scores.
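The sketch below shows one way such a score might feed prioritization: findings are ranked primarily by EPSS exploitation probability, with the CVSS base score as a tiebreaker. The CVE identifiers and scores are invented for illustration, and the blending rule is an assumption, not a prescribed EPSS workflow.

from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float  # severity, 0.0-10.0
    epss: float  # estimated exploitation probability, 0.0-1.0

findings = [
    Finding("CVE-AAAA-1111", cvss=9.8, epss=0.02),
    Finding("CVE-AAAA-2222", cvss=7.5, epss=0.91),
    Finding("CVE-AAAA-3333", cvss=8.1, epss=0.40),
]

# A severe but rarely exploited bug can rank below a moderate one that
# attackers actively target.
for f in sorted(findings, key=lambda f: (f.epss, f.cvss), reverse=True):
    print(f"{f.cve_id}: EPSS={f.epss:.2f}, CVSS={f.cvss}")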
Around the same time the EPSS model was introduced, the Cybersecurity and Infrastructure Security Agency (CISA) developed the CISA Known Exploited Vulnerabilities (KEV) catalog, a curated list of vulnerabilities that are actively exploited by cyber adversaries and verified by partner agencies and the private sector. The KEV catalog provides accurate data on known threats, helping organizations prioritize remediation efforts based on vulnerabilities that pose significant and proven risks to their networks and systems. However, the CISA KEV catalog is not an exhaustive list of every vulnerability that could threaten an organization.
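Because CISA publishes the KEV catalog as a machine-readable JSON feed, checking findings against it is straightforward to automate. The sketch below assumes the feed URL and the "vulnerabilities"/"cveID" field names as published at the time of writing; verify both against CISA’s site before relying on them. The second CVE in the example list is hypothetical.

import json
import urllib.request

# Assumed feed location; confirm the current URL on CISA's site.
KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")

def load_kev_ids(url=KEV_URL):
    """Download the KEV feed and return the set of cataloged CVE IDs."""
    with urllib.request.urlopen(url) as response:
        catalog = json.load(response)
    return {entry["cveID"] for entry in catalog["vulnerabilities"]}

def flag_known_exploited(findings, kev_ids):
    """Return only the findings with proven in-the-wild exploitation."""
    return [cve for cve in findings if cve in kev_ids]

kev_ids = load_kev_ids()
print(flag_known_exploited(["CVE-2021-44228", "CVE-AAAA-0001"], kev_ids))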
The Challenges of Vulnerability Management
Vulnerability management has matured significantly over the years, yet addressing challenges and gaps is still a struggle. This is partly due to the shifting threat landscape and the continuously evolving nature of technology and cyberattacks. Numerous products developed for vulnerability management offer varying capabilities and features. Unfortunately, no single solution is the perfect answer. At best, trade-offs are made to balance operational ease with organizational fit. This results in solutions that excel in certain environments but leave gaps in visibility or coverage in others.
While a vulnerability management program is crucial to an organization’s security posture, several significant weaknesses make traditional vulnerability management less effective in providing the necessary mitigations demanded by the modern threat landscape.
Alert Overflow
One of the pervasive challenges in vulnerability management programs is managing the overwhelming volume of alerts generated by various security solutions. Organizations employ multiple tools to detect vulnerabilities, but not all reported issues are actionable or genuine. Many alerts turn out to be false positives—findings that initially seem valid but are ultimately deemed irrelevant upon closer examination. This flood of incorrect alerts consumes substantial time and resources as analysts must verify each alert, contributing to security inefficiencies.
The consequences of these false positives extend beyond wasted resources; they lead to a phenomenon known as alert fatigue. As analysts encounter a high volume of alerts that do not translate into real threats, there’s a growing tendency to view new alerts skeptically. This skepticism can result in a slower response to investigating alerts, potentially overlooking genuine vulnerabilities. The challenge, therefore, is not just in identifying vulnerabilities but also in enhancing the accuracy of the detection tools to reduce false positives and, in turn, lower alert volume.
Reliance on Agent-Based or Agentless Solutions
Vulnerability management tools often rely on agent-based scanning or agentless methods. Agent-based scanning provides in-depth, continuous monitoring of each device, but it is resource intensive and time-consuming. It also brings significant administrative overhead, as each new device added to the network requires a manual installation of the agent. Dependency on operating system compatibility can also limit the scanning scope, because agent-based solutions don’t always effectively cover network devices such as routers and switches.
Alternatively, agentless scanning, although advantageous for its minimal impact on system resources and ease of deployment, struggles to provide the depth of visibility and continuous monitoring needed, particularly in decentralized networks. The lack of installed agents means that any devices operating behind personal or remote networks—common in today’s remote work and mobile environments—are often beyond the reach of agentless scans, leaving potential vulnerabilities unchecked. While agentless systems scan various devices regardless of operating systems, they often provide a less granular view of the organization’s security posture than agent-based systems. Unlike agent-based systems, agentless systems are unable to access operating systems at lower levels, limiting their visibility into running processes and memory. Additionally, this method’s reliance on network accessibility means that any network disruptions impede the ability to conduct thorough scans, introducing gaps in security monitoring that attackers could leverage to their advantage.
Limited Visibility
Vulnerability management tools, used alone, have limited visibility and rarely address the need for proactive asset discovery. Traditional scanning tools often fall short when addressing assets in the cloud. They struggle to maintain visibility due to the dynamic nature of cloud services, where virtual assets frequently spin up and down. This ephemeral quality leads to missed scans and unmonitored periods of vulnerability exposure. Conversely, some tools are specifically cloud-centric; they excel at detecting vulnerabilities in the cloud but suffer the same lack of visibility for on-premises assets. Considering that most organizations are hybrid, multiple solutions are often needed to cover the potential attack surface.
Challenges Detecting Misconfigurations
Similarly, not every tool detects all varieties of issues. Some specialize in detecting vulnerabilities associated with different versions of software or services yet fail to detect misconfigurations, and those missed misconfigurations provide easy targets for attackers. To address this, organizations are often forced to adopt multiple solutions to get full coverage. Using multiple vulnerability management tools requires additional time and personnel to manage, operate, and maintain them. It often requires multiple dashboards for a complete picture of organizational vulnerabilities, which comes with its own challenges.
Complexity
Managing multiple vulnerability management tools, each with its own dashboard, adds significant complexity to a cybersecurity program. It creates a fractional view, often leading to missed or improperly prioritized vulnerabilities because a user cannot assess all the data from multiple tools simultaneously. This is a serious problem because, in many cases, if taken together, this data indicates that a vulnerability is more significant than it appears in the fractional view.
However, the challenge does not end there, as many vulnerability management solutions force a trade-off between complexity and customization. While offering the ability to tailor features and functionalities to specific needs, customizable solutions tend to introduce greater complexity into the security processes and infrastructure. This complexity manifests in more intricate setup processes, higher maintenance requirements, or a steeper learning curve.
On the one hand, high levels of customization allow organizations to fine-tune their security measures to precisely address unique risks, integrate seamlessly with existing systems, and align with internal workflows and policies. On the other hand, this customization can complicate system management, potentially requiring dedicated resources for continuous configuration adjustments and updates.
Lack of Timely Updates
Any vulnerability management solution is only as good as the data from which it draws conclusions. Numerous vulnerability databases are out there, each with a different selection of vulnerabilities. The diversity and scope of these databases can vary significantly, affecting the comprehensiveness of the vulnerability management process. For example, some databases may focus on vulnerabilities in widely used commercial software, while others might include more extensive data on open source projects or less common applications. This variability can lead to disparities in security coverage, with some systems better protected than others based on the data sources utilized by their respective vulnerability management tools.
Performance limitations exist for each of these databases based on their frequency of updates and ability to provide data promptly. Latency in registering vulnerabilities leaves organizations exposed: a vulnerability can be actively exploited in the wild yet remain undetectable if databases do not contain information about it. Delays can also stem from a database’s ability to serve information to products and vendors. Those with limited resources may not have the infrastructure to publish timely updates, which in turn delays the products that depend on them.
Similarly, when vulnerabilities are first discovered, there is a period when they are unknown to the public and the affected software developers, leaving no time for preventive patches or software updates. These zero-day vulnerabilities can be exploited to bypass security measures and compromise systems before defenses are implemented. This makes them particularly dangerous and challenging for cybersecurity teams, as they must rapidly identify, assess, and mitigate these threats without prior knowledge or preparation.