Why Do Vulnerability Management?


Original article here – https://www.linkedin.com/pulse/vulnerability-management-101-lawrence-munro/

Threat and vulnerability management is one of the most important activities in measuring risk and protecting an organisation’s estate. This security hygiene process is the front line against automated malware, opportunistic attacks and more capable aggressors.

Observing many of the largest breaches of the last ten years, you’ll notice that many (if not most) were caused by basic failings in security hygiene. Simple vulnerabilities such as missing patches and ‘OWASP Top 10’ issues (e.g. SQL injection) are all too often the cause. The advice I always give organisations is “don’t run before you can walk”. I often see organisations spend millions of dollars on ‘advanced’ (expensive) agents that block malware client-side and sandboxing technologies at the gateway. However, when the organisation is penetration tested (or worse, breached), the report shows that the tester / attacker compromised the network via a missing patch or a basic configuration issue.

An adage I often use to explain the threat landscape and the importance of getting the basic vulnerability management right is as follows:

Two men are out walking in the woods when they come across a huge grizzly bear. The bear looks furious and, from what they can gather, is about to attack. One man turns to the other and says, “What are we going to do? Do you think we can outrun a bear?”, to which the other man (now at a jog) responds, “I don’t have to outrun the bear, I only need to outrun you!”.

Arguably, most cyber-attacks are perpetrated by criminals looking to make money. They utilise an operating model akin to that of any other for-profit business and are largely planned, but opportunistic. Business is good for cyber criminals, but like any money-orientated organisation they want to maximise profitability by selecting the right targets. Target choice is a lot less selective than you may think, as many attacks begin opportunistically via a compromise from automated malware or cursory reconnaissance. By failing at the basics, an organisation will appear a softer target and draw more attention as a result. A common mistake organisations make is assuming they’re not a target, and therefore that their risk of compromise is lower. If you have money flowing through your business, however much, you’re interesting to an attacker if your security is soft. Moreover, you can get caught by malware ‘targeting’ home users, where the attackers don’t realise they’ve hit a commercial entity and demand $50 (or the current bitcoin equivalent) to remove some sort of ransomware (that could be costing you millions).

Key Jargon

There is a lot of jargon associated with vulnerability management. Below is a list of key terms that are required to have a meaningful discussion of this topic.

  • False Positive (FP) – A false positive is a finding that is incorrectly reported within a result set. I.e. the vulnerability does not exist, but the tool says it does.
  • False Negative (FN) – A false negative is where a vulnerability does exist, but it is not found or disclosed by the tool.
  • Vulnerability – A vulnerability is a misconfiguration or software bug that introduces a potential attack vector to an asset.
  • Exploit – Tactics, Techniques and Procedures used to compromise or subvert controls or expected behaviour of an asset, such as a software program.
  • Port Scan – A process of interrogating remote services, by means of sending partially or fully formed packets of varying types, to establish the state of a remote port. This is typically done using TCP/IP on packet switched networks using an automated tool, such as Nmap.
  • Scope – The scope is a list of targets that are documented as being relevant during an assessment. All assets are deemed either ‘in’ or ‘out’ of scope.
  • CVE – CVE stands for Common Vulnerabilities and Exposures, a program launched by MITRE in 1999 that provides a library of known vulnerabilities and attempts to provide a universal language for describing vulnerabilities and risk.
  • CNA – A CVE Numbering Authority is an organisation authorised to assign CVE numbers to specific vulnerabilities. CNAs exist to scale the processing of CVE numbering and are approved by MITRE before they can operate.
  • Gold Build – A pre-configured image of an operating system or software that is security hardened. These form the basis of secure deployments to save time configuring each image from a security perspective.

Vulnerability Assessments vs. Vulnerability Management

Vulnerability Assessments

Vulnerability assessments are unitary, point-in-time, exception-based exercises that utilise scanning technology and the expertise of a human operator. The role of the automated tool is to map your environment, discover your assets and then check for known vulnerabilities and misconfigurations. The role of the operator is to ensure full asset discovery, tune the scanner to gain confidence that coverage and safety are appropriate, and then process the output of the scanner, i.e. remove FPs and chase down FNs. The output of the exercise is compiled into a report (most tools will generate this automatically) and distributed to relevant stakeholders for processing.

Vulnerability Management

A vulnerability management program is a more structured and process-driven approach. Scans are performed regularly (typically weekly, monthly or quarterly) by an operator, with the asset inventory constantly refreshed and the scanning tool kept updated. There are also management processes around the handling of vulnerabilities, and many organisations employ ticketing systems to track progress. A basic loop for vulnerability management looks like this: discover assets, scan them, triage the findings (removing false positives), prioritise and assign remediation, verify the fixes, then repeat.

Risk Registers and Tracking Issues

After performing scans and removing false positives, you need to be able to manage and process the outputs of the exercise. I find that maintaining a risk register is useful for this. A risk register can be as complex as you want to make it, but simple is best. The primary function of a risk register is to help you organise your thoughts and track your decisions on what to fix or not fix now that you have some information. Typically, a risk register will include not just risks generated from IT vulnerabilities, but risks from a wider organisational perspective too. It is wise to investigate what other stakeholders within your organisation are using to manage their risks, as these risks will often overlap with those of other departments. They may have a flashy tool that could help in some way too. Creating another silo for risk decisions within your organisation, or even within your function, will just create further issues in the longer term.

As your scanner outputs vulnerabilities, you need to convert them into risks so you can make good decisions on what to fix and when. A basic way of doing this is to use one of the following two equations (where P/I/T relate to the vulnerability; a short code sketch follows the list):

  • Risk = Probability x Impact
  • Risk = Threat x Probability x Impact
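
To make this concrete, here is a minimal sketch of the first equation in Python. The 1–5 scales and the example findings are illustrative assumptions, not output from any particular scanner:

```python
# Minimal sketch: score and rank findings with Risk = Probability x Impact.
# The 1-5 scales and example findings are illustrative assumptions only.

findings = [
    {"id": "VULN-001", "title": "Missing SMB patch", "probability": 5, "impact": 5},
    {"id": "VULN-002", "title": "Self-signed certificate", "probability": 2, "impact": 2},
    {"id": "VULN-003", "title": "SQL injection in login form", "probability": 4, "impact": 5},
]

for f in findings:
    f["risk"] = f["probability"] * f["impact"]  # Risk = Probability x Impact

# Highest risk first, so remediation effort goes where it matters most.
for f in sorted(findings, key=lambda f: f["risk"], reverse=True):
    print(f"{f['id']}: {f['title']} (risk={f['risk']})")
```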

These equations aren’t overly useful in the grand scheme of things, but they give you an acid test for which dimensions matter. Most tools will classify vulnerabilities already, normally using at least CVSS scoring, which can help you decide what is and isn’t a risk more quickly and simply. Note, though, that CVSS has a lot of metrics (especially in v3.0) and vendors will often use only the ‘base’ score for simplicity. The drawback is that temporal and environmental metrics are not considered, so make sure you understand whatever scoring system you’re using in depth, and whether you need to adjust ratings when adding them to your risk register.
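
As a small illustration of looking beyond the single base score, the sketch below (a simple helper of my own, not a vendor API) unpacks a CVSS v3.x vector string so each metric can be inspected and, if needed, adjusted for your environment. The example vector is illustrative:

```python
# Minimal sketch: unpack a CVSS v3.x vector string into its metrics so
# temporal/environmental components aren't silently ignored.

def parse_cvss_vector(vector: str) -> dict:
    """Split e.g. 'CVSS:3.1/AV:N/AC:L/...' into {'AV': 'N', 'AC': 'L', ...}."""
    parts = vector.split("/")
    # parts[0] is the version label, e.g. 'CVSS:3.1', so skip it.
    return dict(p.split(":", 1) for p in parts[1:])

metrics = parse_cvss_vector("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H")
print(metrics["AV"], metrics["C"])  # -> N H (network attack vector, high confidentiality impact)
```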

A risk register will typically contain the following data points at a minimum (a minimal code sketch follows the list):

  • A unique identifier for that risk
  • A description of the risk
  • An owner
  • A description of the impact of the risk
  • A risk score
  • A treatment approach
  • Treatment cost
  • A decision on the treatment of the risk (what to do, if anything)
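
As a minimal sketch, the fields above map naturally onto a simple structure. The field names and the CSV output below are illustrative assumptions rather than a prescribed format; many organisations will use a GRC tool or a shared spreadsheet instead:

```python
# Minimal sketch of a risk register entry covering the fields above.
import csv
from dataclasses import dataclass, fields, astuple

@dataclass
class RiskEntry:
    risk_id: str              # unique identifier
    description: str
    owner: str
    impact: str
    risk_score: int
    treatment_approach: str   # e.g. remediate, mitigate, transfer, accept
    treatment_cost: str
    decision: str

register = [
    RiskEntry("RISK-001", "Unpatched SMB service on file server",
              "IT Ops", "Remote code execution, data loss", 25,
              "Remediate", "Low (patch window)", "Patch within 7 days"),
]

# Persist to CSV so the register lives somewhere shared, not in one person's head.
with open("risk_register.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow([f.name for f in fields(RiskEntry)])
    writer.writerows(astuple(e) for e in register)
```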

Utilise (or develop) a Common Language to Talk About and Measure Risk

It’s important to think about how you’re going to record and talk about risk when managing vulnerabilities. Everyone has finite resources, and in order to maximise efficiency and budget you need to be able to compare one finding to another (and make quick and accurate decisions). You also want to be able to identify the same vulnerability whether it appears in a scan, incident, pen test, red team or vendor notification, and understand when / whether it’s the same underlying issue. I don’t think this is currently a solved problem, but there are pieces of the puzzle that can help you build your own approach. As mentioned above, one of the most useful is CVSS. It’s by no means perfect, but it is widely adopted: the NVD, Microsoft and most scanning vendors use it to score vulnerabilities. If you’re just starting out, it’s best not to try anything flashy, so adopting CVSS and its associated nomenclature is a good bet. A good introduction to CVSS can be found here: https://www.first.org/cvss/specification-document

Automate the Things

The keys to good vulnerability management are getting good data, being able to make quick decisions and fixing the real issues fast. The best and cheapest way to accomplish this is to automate as much of the process as possible and streamline the data down to the least information required. A lot of scanning platforms now integrate (via API) with ticketing and workflow systems, and can create tickets administered from the scanning platform itself. I’d encourage looking into whether your organisation already has some sort of system that can process issue-type tickets; common systems include Jira and ServiceNow, and automation servers such as Jenkins can also be wired into the workflow.
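
As a hedged sketch of what such an integration can look like, the snippet below raises a Jira ticket for a finding via Jira’s REST API (POST /rest/api/2/issue). The URL, project key, credentials and the `finding` fields are placeholder assumptions, and your scanning platform’s native integration may make hand-rolling this unnecessary:

```python
# Sketch: create a Jira ticket for a scanner finding via the REST API.
# URL, project key and credentials are placeholders; prefer an API token
# or secrets manager over hard-coded passwords in real use.
import requests

JIRA_URL = "https://jira.example.com"    # placeholder
AUTH = ("svc-vulnmgmt", "app-password")  # placeholder

def raise_ticket(finding: dict) -> str:
    payload = {
        "fields": {
            "project": {"key": "SEC"},      # assumed project key
            "issuetype": {"name": "Task"},
            "summary": f"[{finding['severity']}] {finding['title']} on {finding['host']}",
            "description": finding["detail"],
        }
    }
    resp = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=payload, auth=AUTH)
    resp.raise_for_status()
    return resp.json()["key"]  # e.g. 'SEC-123'

print(raise_ticket({"severity": "High", "title": "Outdated OpenSSH",
                    "host": "10.0.0.5", "detail": "Reported by weekly scan."}))
```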

Additionally, when checking for FP / FNs, it’s worth developing or looking for scripts that do manual checks. Often, you’ll find that scanning platforms don’t have the most verbose logging and you’ll need to verify FPs and FNs using some sort of script. Penetration testers often have a lot of these types of scripts and tools, so it’s worth being friendly to your in-house testers or third-party providers to get access. Most can be found by searching on Google / GitHub if you know what you’re looking for though.
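
For example, a small script along the following lines can corroborate a version-based finding using Nmap’s service detection; the host, port and expected version string are illustrative assumptions:

```python
# Sketch: double-check a scanner finding by re-probing the service
# version with Nmap's -sV detection (requires nmap on the PATH).
import subprocess

def verify_service_version(host: str, port: int, expect: str) -> bool:
    """Return True if Nmap's version detection output mentions `expect`."""
    out = subprocess.run(
        ["nmap", "-sV", "-p", str(port), host],
        capture_output=True, text=True, check=True,
    ).stdout
    return expect.lower() in out.lower()

# e.g. the scanner claims an end-of-life OpenSSH; does Nmap agree?
if verify_service_version("10.0.0.5", 22, "OpenSSH 7.2"):
    print("Version corroborated - likely a true positive.")
else:
    print("Version not confirmed - investigate as a possible false positive.")
```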

Use Free Stuff First

There are a host of tools available to baseline configurations and missing patches, which are free. Before you buy an expensive tool, there is a lot you can do to check your security posture in advance. I’ve listed some of my favourites below (thanks to James McKinley for nudging me to include this – watch his LinkedIn for a more comprehensive post on this topic!):

  • Microsoft Security Compliance Toolkit (I still like MBSA too!) – “The Security Compliance Toolkit (SCT) is a set of tools that allows enterprise security administrators to download, analyse, test, edit, and store Microsoft-recommended security configuration baselines for Windows and other Microsoft products.” https://docs.microsoft.com/en-us/windows/security/threat-protection/security-compliance-toolkit-10
  • OpenVAS – “OpenVAS is a full-featured vulnerability scanner. Its capabilities include unauthenticated testing, authenticated testing, various high level and low-level internet and industrial protocols, performance tuning for large-scale scans and a powerful internal programming language to implement any type of vulnerability tests.” http://www.openvas.org/about.html
  • Metasploit – Metasploit is an exploit framework that helps penetration testers find and exploit vulnerabilities. It’s useful for confirming that vulnerabilities exist and trying out post-exploitation threat models. https://www.metasploit.com
  • CIS Benchmarks – CIS provides documents defining the most secure configurations of a huge range of operating systems and software packages. They also provide pre-built secure images for Amazon and Azure via their stores. You can get commercial membership, based on the size of your organisation, that gives you access to a lot more content. Some scanning vendors have turned these checks into scripts that work with their products. They’re really useful for creating ‘gold builds’. https://www.cisecurity.org/cis-benchmarks/

Infrastructure Scanners

Overview of How Infrastructure Scanners Work

Infrastructure vulnerability scanners are fairly simple conceptually, although many vendors add lots of ‘bells and whistles’ to enhance the user experience (UX) and management elements of the solution. There are typically two types of scanning option, internal and external (although some cover applications too), normally referred to as profiles or policies. In most cases, this is managed by configuring a scanning policy that defines a subset of checks you wish to perform against the target hosts. Most scanning suites will have pre-configured profiles for the common scan types, such as ‘best practice’, ‘PCI ASV’, ‘internal’ or ‘external’.

Port Scanning and Discovery

In order to identify and then track assets, you need to ‘discover’ them first. Typically, a scanner will initiate a port scan against the IP addresses you have specified within the tool. Normally, this will be the entire IP address range of an organisation or a network segment. This is referred to as the scope, and it comprises all (or a subset of) the internal and external network ranges. It is important to cover the whole range and not just the ‘known active’ hosts, as part of the value of this exercise is to sanity-check your knowledge of your estate as well as to identify rogue hosts. If you want more information on how port scanning works, Wikipedia does a much better job than I will: https://en.wikipedia.org/wiki/Port_scanner.
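
As a hedged sketch, the helper below performs an independent discovery sweep with Nmap’s ping scan (-sn) and parses its grepable output (-oG -); the target range is an illustrative assumption:

```python
# Sketch: a simple discovery sweep to sanity-check the scanner's own
# asset discovery (requires nmap on the PATH).
import subprocess

def discover_hosts(cidr: str) -> list[str]:
    out = subprocess.run(
        ["nmap", "-sn", "-oG", "-", cidr],
        capture_output=True, text=True, check=True,
    ).stdout
    # Grepable lines look like: "Host: 10.0.0.5 (name)  Status: Up"
    return [line.split()[1] for line in out.splitlines()
            if line.startswith("Host:") and "Status: Up" in line]

print(discover_hosts("10.0.0.0/24"))  # sweep the whole range, not just known hosts
```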

Asset Management

An often overlooked, but nevertheless important, element of vulnerability management is asset management. In order to effectively manage vulnerabilities, you must first understand your estate and infrastructure by mapping it. All decent platforms will have the capability to manage assets, with many having lots of nice features to track assets over time and even in real time using a range of monitoring options. Additionally, many platforms integrate with third-party asset management, configuration and patch management platforms, meaning robust processes can be automated. I’d always encourage the use of these types of features, as time is of the essence and the assurance that actions are actually performed is much higher. A lot of time and effort should be given to this element of TVM, as missing an asset can lead to missed vulnerabilities and increase the chances of a breach. It’s best to validate the scans performed by the vulnerability management platform using a specialist tool, such as Nmap (https://nmap.org/) or Masscan (https://github.com/robertdavidgraham/masscan). If you’re not confident doing this yourself, a good way of validating is to request full port scans from a penetration tester, explaining in advance what you’re trying to achieve.
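
One simple way to do that validation is to diff the platform’s exported inventory against an independent sweep (for example, the discover_hosts() helper sketched earlier, or Masscan output). The sets below are hard-coded for illustration:

```python
# Sketch: surface assets the platform missed and stale entries it still carries.
platform_assets = {"10.0.0.5", "10.0.0.12", "10.0.0.30"}    # exported from the tool
independent_sweep = {"10.0.0.5", "10.0.0.12", "10.0.0.44"}  # e.g. Nmap/Masscan results

missed = independent_sweep - platform_assets  # rogue / unknown hosts
stale = platform_assets - independent_sweep   # possibly decommissioned

print("Not in platform inventory:", sorted(missed))
print("In inventory but not seen:", sorted(stale))
```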

In order to validate that all assets have been discovered, a good practice is to perform additional rounds of OSINT periodically, to see if anything has been missed. A multitude of Internet-based services constantly crawl and trawl the Internet creating indices of whatever they find. The content can vary, but will often include useful information such as version numbers from service headers, which can easily be cross-referenced to a vulnerability. Many of these sites make this information publicly available and searchable (such as Google, Bing and Shodan) and negate the need for attackers to send packets to your infrastructure. This attack surface can change all the time, especially within the application world, so staying up-to-date is key. The bug bounty industry has been powered by ephemeral bugs (bugs that appear for short periods of time), so understanding your assets as close to real time as possible is important, as the bad guys will be watching too.
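
For example, the official shodan Python library (pip install shodan) makes this kind of periodic OSINT check easy to script. The API key and search query below are placeholder assumptions:

```python
# Sketch: see what the Internet already knows about your estate via Shodan.
import shodan

api = shodan.Shodan("YOUR_API_KEY")         # placeholder
results = api.search('org:"Example Corp"')  # or e.g. net:"203.0.113.0/24"

for match in results["matches"]:
    # Service banners often leak version numbers that map straight to CVEs.
    first_line = match["data"].splitlines()[0] if match["data"] else ""
    print(f"{match['ip_str']}:{match['port']} - {first_line}")
```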

It is also important to validate discovered assets with the various stakeholders within your organisation, to make sure they agree with the world-view created by the tool. I’ve lost track of the number of times large organisations have discovered they own a whole swathe of applications nobody knew about.

Scanning Policy / Profile Management

A policy or profile is a preconfigured series of checks that alters the scanning behaviour of the platform. For the sake of consistency, I will be using the term ‘policy’ to describe this concept throughout this post. The scanner itself has a range of different ‘checks’ it’s able to perform and will typically categorise them for easy administration. A check is the unit of scanning activity and typically performs a simple test to identify whether a vulnerability exists on the target asset(s) or not.

All good TVM solutions come with preconfigured policies out-of-the-box. For the most part, policies are additive in the sense that the checks performed in a policy are a subset of the exhaustive capability of the platform.

Scanning policies exist for several good reasons:

  • Some checks are more aggressive or invasive than others; most platforms have a ‘safe checks’ option that lets you turn off anything risky.
  • Internal network environments are different from external perimeter environments, so some checks are superfluous.
  • Often you will have knowledge about the target environment, so you won’t need to waste time running certain categories of check. E.g. if you wish to scan a single host running Linux, you can disable all the Windows category checks and any others that are unrelated.
  • From previous experience, you may know that certain checks can create unwanted side-effects, such as denial of service (DoS) conditions. A common ‘gotcha’ with vulnerability scanners is their propensity for taking down printers!
  • You may wish to remove ‘noisy’ checks that produce a lot of traffic when you’re scanning at peak times (or god forbid, over a VPN).

As a rule of thumb, I’d suggest spending a lot of time getting your asset inventory right before building custom scanning policies. Once you’re confident you know your assets well, you can tune a policy that fits.

Authenticated Scans and Benchmarking

When scanning network infrastructure, accuracy is very important, as you want to have confidence in the results and reduce time spent checking false positives. A key method of enhancing the accuracy of a scan is to perform the assessment while authenticated. All good platforms will have the capability to add credentials for scanning and will be able to perform authenticated checks. Most platforms will support a range of authentication mechanisms, such as SMB, SSH, SAML, Kerberos and plaintext. From a security perspective, you should use an encrypted method, of course!

When a check is performed unauthenticated during a scan, it will typically use a method such as banner grabbing to establish whether a system is vulnerable or not. Banner grabbing works by interrogating a port (sending valid requests to it) and then searching for specific information in the response that may confirm the version of software running on the host. The scanner may also look at the packets received and try to fingerprint the version of software running on the target. These techniques can sometimes be unreliable, due to backporting of patches, intentional obfuscation of the header as a defensive measure, security hardening that removes header exposure altogether or unreliable fingerprinting technology within the scanner. Authenticated checks work differently, as they interrogate the operating system directly (this typically requires administrative privileges). On Microsoft Windows, for example, the scanner will interact with a mix of Windows components and the Windows Registry to return accurate version numbers.
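
Here is a minimal sketch of banner grabbing at the socket level; the target host and port are illustrative, and it relies on services (such as SSH and SMTP) that announce themselves on connect. Remember that, as noted above, banners can be backported, hardened away or deliberately falsified:

```python
# Sketch: connect to a port and read whatever the service announces.
import socket

def grab_banner(host: str, port: int, timeout: float = 3.0) -> str:
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        try:
            return sock.recv(1024).decode(errors="replace").strip()
        except socket.timeout:
            return ""  # service didn't volunteer a banner

# SSH servers identify themselves on connect, e.g. "SSH-2.0-OpenSSH_8.9p1".
print(grab_banner("10.0.0.5", 22))
```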

Authenticated scanning also makes it possible to implement host build reviews and benchmarking using many of the leading scanning tools. Benchmarking is an audit process that compares the current state of a host or operating system against a standard. Gold builds are useful here, as they save a lot of time later and raise the general security baseline.
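
As an illustrative sketch of the idea (the setting names and values below are assumptions, not drawn from any specific CIS benchmark), a build review essentially boils down to diffing observed configuration against the gold baseline:

```python
# Sketch: flag configuration drift from a hardened 'gold build' baseline.
GOLD_BASELINE = {
    "PasswordComplexity": "Enabled",
    "SMBv1": "Disabled",
    "MinimumPasswordLength": "14",
}

observed = {  # illustrative values gathered from a host
    "PasswordComplexity": "Enabled",
    "SMBv1": "Enabled",            # drift from the gold build
    "MinimumPasswordLength": "8",
}

for setting, expected in GOLD_BASELINE.items():
    actual = observed.get(setting, "<missing>")
    status = "OK" if actual == expected else "DRIFT"
    print(f"{status:5} {setting}: expected {expected}, found {actual}")
```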

Compliance != Security

It’s important to understand that there is a key difference between compliance and security. Security frameworks such as ISO 27001 and PCI DSS are good starting points on the journey towards security, but being compliant is not the same as being secure: compliance is a point-in-time assessment against a fixed checklist, while attackers are not constrained by your audit scope.

Cons and Pitfalls of TVM

Scanning technologies are only as good as the vulnerabilities they’re configured to detect. The vast majority of the vulnerability information processed and included by a vendor comes from third parties. As discussed earlier in this post, the MITRE CVE program is widely considered the de facto database for acknowledgement and documentation of vulnerabilities, and is typically the primary source of information for all scanning vendors. CSO magazine reported that over 6,000 vulnerabilities went unassigned by MITRE in 2015. While all the good scanning vendors do their own research, this is a worrying statistic and casts doubt on the coverage you can expect from your tools. This is not to say that vulnerability scanners don’t have huge value, just that you should not consider them a silver bullet or exhaustive.

Final Tips and Tricks for Network Scanners

  • Periodically validate asset information manually (banner grabbing, a different port scanner)
  • Include UDP scanning
  • Don’t scan printers, mainframes or ICS infrastructure; they can fall over easily
  • Get to know the business stakeholders and asset owners well; issues can sometimes happen, and good relationships can smooth the process of recovery!
  • Scan frequently – things change a lot; some platforms have nice features like retroactive notifications for new vulnerabilities, which is useful
  • Do authenticated scans wherever possible; this increases accuracy
  • Learn how to test and exploit common vulnerabilities to check false positives (or ask a pen tester)
  • Ignore a lot of the SSL noise! Many scanners will report SSL issues at low, medium and high risk. Processing and remediating these can be a full-time job if you have a large legacy web estate, and most SSL bugs are theoretical or require huge resources to exploit.
  • Tail scanner logs or proxy the traffic to see what’s happening (if you can’t get raw logs)
