
Death to Vulnerability Management As We Know It


Vulnerability management concepts are changing. The idea that vulnerability management is limited to scanning alone is being replaced with a wider, more comprehensive view. The discipline is transforming into vulnerability identification, an umbrella term for any service or activity centered on identifying vulnerabilities. This can include scanning and penetration testing, as well as breach and attack simulations.

Legacy ideas about vulnerability management tend to result in an incomplete understanding of the asset environment. Typical vulnerability management programs do not keep track of everything that’s out there, especially externally facing components that may have dropped off the radar. There also tend to be shortcomings in defining scanning profiles, including what to scan for and which issues should be considered critical and flagged with the highest priority.

What Today’s Vulnerability Management Should Look Like

One of the first steps in improving a vulnerability management program is performing vulnerability scans at least quarterly while keeping a close watch on remediations to ensure standards are met. Too often, companies have older, orphan systems that are not properly locked down, or externally facing servers that lack proper security configurations. Every quarter (or, even better, every month), a business should scan to validate new assets, determine how those assets are being used, and link each asset to the team that owns it.

Another essential step is the creation of a risk register. This register should be the cornerstone of an effective vulnerability management program as it provides an assessment of vulnerabilities compared to the risk tolerance for a business. Risk tolerance is determined by variables such as potential liability that your organization could face, particularly in industries such as banking, insurance, or healthcare, and the likelihood of an attack from a variety of threats, such as criminal hackers or nation-states.

This type of risk evaluation can answer key questions such as:

  • What is considered a critical area of protection?
  • When a patch is released for a critical area, will the SLA be 60 or 90 days?
  • Will logs be kept for a typical 14-day rotation or several months to trace back the source of an attack?
  • Will logs be deleted or placed into cold storage?
  • Will peak traffic be captured separately?
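The answers to these questions can be captured directly in the register. Here is a minimal, hypothetical sketch in Python; the fields, SLA values, and retention periods are illustrative assumptions, not a standard, and real registers vary by organization:

```python
# Illustrative risk-register entry. Every value below is an assumption
# chosen for demonstration, not a recommended baseline.

risk_register = [
    {
        "area": "externally facing web servers",
        "risk_tolerance": "low",            # business tolerates little exposure here
        "patch_sla_days": 60,               # the 60- vs 90-day SLA question above
        "log_retention_days": 180,          # months of logs, not a 14-day rotation
        "log_disposition": "cold storage",  # archived rather than deleted
        "capture_peak_traffic": True,       # peak traffic captured separately
    },
]

# Entries like this let a team look up, per protected area, exactly what
# the agreed response standard is when a new vulnerability is disclosed.
critical_areas = [e["area"] for e in risk_register if e["risk_tolerance"] == "low"]
```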

This risk register process should also include an evaluation of asset management. Determine whether the list of all assets, both inside and outside the protected environment, is current and accurate. Keep in mind that an orphan system that has fallen off an asset list may have no endpoint protection, monitoring, or activity logging, making it ripe for exploitation.
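As an illustration of that asset check, here is a hedged Python sketch of flagging orphan systems by comparing the known inventory against hosts actually found by a discovery scan; the host names are invented:

```python
# Hypothetical sketch: diff the asset inventory against discovered hosts.
# Names are illustrative, not real systems.

inventory = {"web-01", "db-01", "mail-01"}          # assets the inventory knows about
discovered = {"web-01", "db-01", "legacy-ftp-01"}   # hosts a discovery scan found

orphans = discovered - inventory    # live hosts nobody is tracking
stale = inventory - discovered      # inventory entries no scan can reach

# Orphans need owners, endpoint protection, and logging; stale records
# may be outdated and should be reviewed for removal.
print(sorted(orphans))
print(sorted(stale))
```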

How to Improve Visibility Into Vulnerability Management

Most organizations will also need a good way to centralize vulnerability management data. In many companies, the results of key security operations such as penetration tests live in a PDF, governance, risk, and compliance (GRC) data lives in another location, and the massive files of scan data may sit in yet another spot. Data from scanning, penetration tests, and GRC all need to be moved to a single source to enable the data orchestration that drives decision making. A company should be able to analyze all of this information to validate whether the number of vulnerable assets is increasing or decreasing, and to determine the criticality of the vulnerabilities uncovered.
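As a sketch of what that single source might look like, the following Python snippet normalizes findings from different tools into one shared schema. The field names and sample records are assumptions for illustration, not any particular product’s format:

```python
# Hypothetical normalization of scanner, pentest, and GRC findings into
# one schema so they can be analyzed together. All names are invented.

def normalize(source, raw):
    """Map a tool-specific record into a common finding shape."""
    return {
        "asset": raw.get("host") or raw.get("asset_name"),
        "title": raw.get("name") or raw.get("finding"),
        "severity": raw.get("severity", "unknown").lower(),
        "source": source,
    }

scan_result = {"host": "web-01", "name": "Outdated TLS", "severity": "Medium"}
pentest_result = {"asset_name": "web-01", "finding": "Default credentials",
                  "severity": "Critical"}

findings = [normalize("scanner", scan_result),
            normalize("pentest", pentest_result)]
```

With every finding in the same shape, questions like "is the count of critical findings on this asset rising?" become simple queries instead of manual PDF review.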

And this brings us to our next point. For vulnerability management to be truly effective, this centralized repository needs to feed what we call a vulnerability management orchestration platform. Such a platform can map trends to identify whether a security environment is heading in the right direction, performing analytics across severities, engagements, custom tags, and timeframes. That insight is especially useful when budgetary decisions need to be made to allocate resources.
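A minimal sketch of such trend analytics, assuming a simple list of findings tagged with quarter and severity (the data is invented for illustration):

```python
# Hypothetical trend mapping: count open findings per quarter and severity.
# Sample data is fabricated to show the technique only.

from collections import Counter

findings = [
    {"quarter": "2023-Q1", "severity": "critical"},
    {"quarter": "2023-Q1", "severity": "high"},
    {"quarter": "2023-Q2", "severity": "critical"},
    {"quarter": "2023-Q2", "severity": "critical"},
]

trend = Counter((f["quarter"], f["severity"]) for f in findings)

# A rising count of criticals quarter over quarter signals the program
# is not keeping pace with new exposure.
q1_criticals = trend[("2023-Q1", "critical")]
q2_criticals = trend[("2023-Q2", "critical")]
```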

Analysis in Action

To understand how this tool can be used, consider a business whose external asset list grows continually, year after year, yet whose vulnerability counts are not going down. This probably indicates that the business needs a new approach to information security. Similarly, if a company’s penetration tests come back every year showing that default password usage is commonplace, a new password policy is probably needed. These examples are typical of the actionable visibility a vulnerability management orchestration platform can provide.

What should you look for when evaluating this type of technology? A vulnerability management orchestration platform should support host-based remediation efforts by consolidating all findings for an asset, regardless of where the risk was identified. It should be able to import findings and automatically populate standardized reports. The right solution will also let you tag essential assets, enabling rapid filtering for analytics that highlight the problems needing the most attention.
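A hedged sketch of those two capabilities, host-based consolidation and tag filtering, in Python; the assets, tags, and findings below are hypothetical:

```python
# Hypothetical consolidation of findings per asset, regardless of source,
# plus filtering by a custom tag. All names and tags are invented.

from collections import defaultdict

findings = [
    {"asset": "web-01", "title": "SQL injection", "source": "pentest"},
    {"asset": "web-01", "title": "Outdated TLS", "source": "scanner"},
    {"asset": "db-01", "title": "Weak password policy", "source": "grc"},
]
tags = {"web-01": {"internet-facing", "pci"}, "db-01": {"pci"}}

# Host-based view: every finding for an asset in one place.
by_asset = defaultdict(list)
for f in findings:
    by_asset[f["asset"]].append(f)

# Rapid filtering: all findings on internet-facing assets.
urgent = [f for asset, items in by_asset.items()
          if "internet-facing" in tags.get(asset, set())
          for f in items]
```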

With the right tools and processes in place, vulnerability management can expand far beyond its original definition. By truly taking on the role of identification, classification—and ultimately mitigation—of vulnerabilities, this new integrated and comprehensive concept can help you find and close all the hidden openings into your technology infrastructure.

TEAMARES helps organizations identify, classify, prioritize, remediate, and mitigate software vulnerabilities. Talk to a TEAMARES expert to learn how to take action on your vulnerability data.

About the Author: 

Quentin Rhodes-Herrera, CyberOne’s director of professional services, leads the offensive and defensive teams known as TEAMARES. He is an experienced security professional with expertise in security analysis, physical security, risk assessment, and penetration testing. Quentin’s diverse background is built from a variety of staff and leadership positions in IT, with specific experience in threat and vulnerability management, penetration testing, network operations, process improvement, standards development, and interoperability testing.