Microsoft Azure Sentinel is fast becoming a very powerful SIEM and, in my opinion, it is going to take the lead for the following reasons:
Most enterprise organisations use Windows operating systems for desktops and servers, and these large fleets require threat detection.
Sysmon and Windows Event Collection are the de facto options for monitoring the Windows operating system, giving access to digital forensic information. There is no need for an expensive EDR solution; you can also use other open-source tools for deep dives into, and remote control of, the Windows OS. (Microsoft also provides SCCM for Windows fleet management.)
Published 11 June 2020 – ID G00718877 – 23 min read
Network detection and response (formerly known as network traffic analysis) vendors are adding more automated and manual response features to their solutions. Here, we provide an overview of the market and highlight some of the key vendors to be considered by security and risk management leaders.
Applying machine learning and other analytical techniques to network traffic is helping enterprises detect suspicious traffic that other security tools are missing.
Network detection and response (NDR) remains a crowded market with a low barrier to entry, as many vendors can apply common analytical techniques to traffic monitored from a SPAN port. Customer references from a broad set of vendors are generally satisfied with their tools.
Response capabilities fall into two categories: manual and automatic. Vendors have been actively enhancing their manual (threat hunting and incident response) features, and have been adding partners to broaden their automatic response functionality.
To improve infrastructure security and the detection of suspicious network traffic, security and risk management leaders should:
Implement behavioral-based NDR tools to complement signature-based detection solutions.
Include NDR-as-a-feature solutions in their evaluations, if they are available from their current security information and event management (SIEM), firewall or other security vendors.
Decide early in the evaluation process whether they desire automated or manual response capabilities. A clearly defined response strategy is valuable in selecting a shortlist of NDR vendors.
NDR solutions primarily use non-signature-based techniques (for example, machine learning or other analytical techniques) to detect suspicious traffic on enterprise networks. NDR tools continuously analyze raw traffic and/or flow records (for example, NetFlow) to build models that reflect normal network behavior. When the NDR tools detect suspicious traffic patterns, they raise alerts. In addition to monitoring north/south traffic that crosses the enterprise perimeter, NDR solutions can also monitor east/west communications by analyzing traffic from strategically placed network sensors.

Response is also an important function of NDR solutions. Automatic responses (for example, sending commands to a firewall so that it drops suspicious traffic) or manual responses (for example, providing threat hunting and incident response tools) are common elements of NDR tools. In 2019, Gartner named this market “network traffic analysis.” This year, we renamed it “network detection and response,” because this term more accurately reflects the functionality of these solutions.
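As a concrete illustration of the baselining approach described above, the sketch below builds a per-host model of "normal" flow sizes and flags outliers. It is a toy example under assumed flow-record fields and thresholds, not a description of any vendor's detection engine:

```python
from statistics import mean, stdev

# Hypothetical historical flow records: (source host, bytes transferred),
# as might be extracted from NetFlow or IPFIX.
history = [
    ("10.0.0.5", 1200), ("10.0.0.5", 1350), ("10.0.0.5", 1100),
    ("10.0.0.7", 800),  ("10.0.0.7", 950),  ("10.0.0.7", 870),
]

# Build a per-host baseline of byte counts observed during "normal" operation.
baseline = {}
for host, nbytes in history:
    baseline.setdefault(host, []).append(nbytes)

def is_suspicious(host, nbytes, k=3.0):
    """Flag a flow whose byte count deviates more than k standard
    deviations from the host's historical mean (a simple z-score test)."""
    samples = baseline.get(host)
    if not samples or len(samples) < 2:
        return True  # no baseline yet: treat unseen hosts as suspicious
    mu, sigma = mean(samples), stdev(samples)
    return sigma > 0 and abs(nbytes - mu) > k * sigma

print(is_suspicious("10.0.0.5", 1250))   # within the host's normal range
print(is_suspicious("10.0.0.5", 50000))  # exfiltration-sized outlier
```

Real NDR engines model many more dimensions (ports, peers, timing, protocol metadata), but the principle is the same: learn a baseline, then alert on deviation.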
Dozens of vendors claim to analyze network traffic (or flow records) and to detect suspicious activity on the network. We have applied the following criteria to identify the most relevant vendors.

Inclusion Criteria

Vendors must:
Analyze raw network packet traffic or traffic flows (for example, NetFlow records) in real time or near real time.
Monitor and analyze north/south traffic (as it crosses the perimeter), as well as east/west traffic (as it moves laterally throughout the network).
Be able to model normal network traffic and highlight suspicious traffic that falls outside the normal range.
Offer behavioral techniques (non-signature-based detection), such as machine learning or advanced analytics that detect network anomalies.
Provide automatic or manual response capabilities to react to the detection of suspicious network traffic.
Exclusion Criteria

We exclude solutions that:
Require a prerequisite component — for example, those that require a SIEM or firewall platform.
Emphasize network forensics over detection functionality, primarily through the storage and analysis of full PCAP data.
Work primarily on log analysis.
Are based primarily on analytics of user session activity — for example, user and entity behavior analytics (UEBA) technology.
Focus primarily on analyzing traffic in Internet of Things (IoT) or operational technology (OT) environments, because specialized solutions are optimized to address this use case.
Vendors are focused on enhancing their detection and response capabilities. For detection, we expect vendors to continue enhancing their ability to detect suspicious patterns in encrypted traffic. Some vendors will add the ability to terminate, decrypt and analyze TLS traffic natively in their sensors. However, most vendors, particularly the ones with out-of-band sensors, will enhance their ability to detect suspicious traffic without decrypting the TLS traffic and inspecting the payload. Some vendors detect suspicious SSL/TLS server certificates for this purpose. Also, some vendors use techniques such as analyzing the length of individual packets, the timing between packets, the duration of connections and other methods to detect suspicious TLS traffic. We expect that more vendors will enhance their solutions with similar functionality.

Vendors will also be enhancing their response capabilities. For automated responses, they will broaden partnerships with firewall vendors (send commands to firewalls to drop suspicious traffic), network access control (NAC) vendors (send commands to the NAC solution to isolate an endpoint), security orchestration, automation and response (SOAR) vendors (respond to events with playbooks), endpoint detection and response (EDR) vendors (to contain compromised endpoints) and other security vendors. For manual response, vendors will improve their threat hunting and incident response functions by improving workflow features (for example, helping incident responders prioritize which security events they need to respond to first).
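The side-channel techniques mentioned above (packet lengths, inter-packet timing, connection duration) can be sketched as a simple feature extractor over an encrypted connection; the feature set and the beaconing example are illustrative assumptions, not any vendor's model:

```python
# Extract features from an encrypted connection without touching the
# payload: packet lengths, inter-packet timing and total duration.

def extract_features(packets):
    """packets: list of (timestamp_seconds, length_bytes) tuples."""
    times = [t for t, _ in packets]
    sizes = [s for _, s in packets]
    gaps = [b - a for a, b in zip(times, times[1:])]
    return {
        "mean_len": sum(sizes) / len(sizes),
        "max_len": max(sizes),
        "mean_gap": sum(gaps) / len(gaps) if gaps else 0.0,
        "duration": times[-1] - times[0],
        "n_packets": len(packets),
    }

# Example: small, uniformly spaced packets (one per minute) resemble
# command-and-control beaconing, even though the payload stays opaque.
beacon = [(i * 60.0, 120) for i in range(10)]
f = extract_features(beacon)
print(f["mean_len"], f["mean_gap"], f["duration"])  # 120.0 60.0 540.0
```

A detection model would feed vectors like this into a classifier or anomaly detector; the point is that useful signal survives encryption.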
Here, we analyze the segments of the NDR market:
Pure-play NDR companies. The vendors in this category are mostly smaller specialty companies whose only product is an NDR solution.
Network-centric companies: Several companies that have historically targeted network use cases, such as network performance monitoring and diagnostics (NPMD; see “Market Guide for Network Performance Monitoring and Diagnostics”), have developed solutions to address security use cases. These network-centric solutions were already monitoring network traffic, and these vendors have applied analytical techniques, such as machine learning, to detect anomalous traffic.
Others. A few vendors do not fit cleanly in the two categories defined above. For example, large, diversified network security providers, such as Cisco and Hillstone Networks, also offer NDR solutions. Cisco has Stealthwatch, and Hillstone has the Server Breach Detection System.
Table 1 highlights the NDR vendors that meet our inclusion criteria and were not eliminated by our exclusion criteria.
Table 1: Representative Vendors in Network Detection and Response

Vendor | Product, Service or Solution Name
Awake Security | Awake Security Platform
Darktrace | Enterprise Immune System
Flowmon | Flowmon Anomaly Detection System (ADS)
Hillstone Networks | Server Breach Detection System (sBDS)
Source: Gartner (June 2020)

Please refer to Note 2 for a list of other vendors that we are tracking. The list of vendors in this Market Guide is not exhaustive. This section is intended to provide more understanding of the market and its offerings.
Based in Santa Clara, California, Awake Security uses supervised machine learning, unsupervised machine learning and some deep learning techniques to detect suspicious traffic. Awake does not decrypt TLS traffic. It also does not use JA3 signatures, but Awake has developed its own application/TLS fingerprinting algorithms. It also uses encrypted traffic analysis techniques. For example, it can identify attempts to tunnel malicious traffic over DNS and other protocols.

Awake’s solution includes manual and automatic response capabilities. Its Ava tool performs automated threat hunting, incident triage and response. Awake partners with multiple firewall vendors, orchestration tools and other solutions to enforce automated responses. Awake sells the solution as an annual subscription, based on aggregate throughput. Virtual appliances are available at no charge, and physical devices are available for a fee. Customers can deploy Awake in two modes. With the first option, no sensitive customer data ever leaves the customer’s environment. With the second option, customers deploy the central analytics and management in an Awake-hosted cloud. In this scenario, each customer’s data is isolated and can only be accessed by the customer that owns the data. Awake also offers a managed network detection and response service built on the technology platform.
Blue Hexagon is based in Sunnyvale, California. It launched its network and IaaS (Amazon Web Services [AWS] and Microsoft Azure) network detection solution in 2019, with a cloud management console. The vendor serves the U.S. market and plans to expand internationally in 2020. Blue Hexagon’s detection engine inspects network traffic and files, and is based on deep learning to detect threats. The solution cannot decrypt TLS. It relies on TLS handshake and tunnel characteristics to detect anomalies in encrypted traffic, using its deep learning models. The vendor uses threat intelligence feeds, but also uses deep learning to classify sources as malicious.

Blue Hexagon can be deployed in-line and out-of-band. When deployed out-of-band, it integrates with endpoint security and firewall solutions, as well as SIEM, SOAR and AWS/Azure, to provide automated response. When deployed in-line (“bump in the wire” or through ICAP), it can directly block traffic. Licensing for Blue Hexagon follows a traditional network security approach, with hardware purchase (the virtual appliance is free of charge) and licensing based on required bandwidth, which includes vendor support. IaaS pricing can be bandwidth-based or per hour.
Headquartered in Columbia, Maryland, Bricata is a network security vendor primarily targeting the U.S. and European markets. The vendor’s solution leverages the Suricata IDPS module for signature-based controls and the Zeek (formerly Bro) engine for protocol and behavioral analysis, while capturing full-packet traffic data for retrospective analysis. Bricata is a highly customizable solution, where users can tune detections and create specialized detections. Bricata also includes the Cylance Infinity engine for file analysis. The network sensors and centralized management are available in physical and virtual appliances. They can also be deployed on the main IaaS platforms. The sensors do not decrypt TLS traffic, and rely on JA3 fingerprinting to provide encrypted session analysis. The vendor recently released the ability to tag alerts based on the MITRE ATT&CK framework, to aggregate similar events in the dashboard, and to run files in the Cuckoo Sandbox.

The vendor’s response capabilities rely on SIEM and SOAR integration, and API documentation is available to create custom response scenarios with firewall, NAC and other products. Bricata’s software pricing is based on aggregated bandwidth of inspected traffic. Customers can also purchase hardware appliances through Bricata’s channel partners.
Cisco, based in San Jose, California, offers two deployment options for its Stealthwatch solution. Stealthwatch Enterprise collects, stores and analyzes information in the customer’s environment. Stealthwatch Cloud is a SaaS offering. It can monitor a customer’s private network or a public cloud environment (through integrations with AWS, Azure or Google Cloud Platform). Stealthwatch detects suspicious traffic primarily by analyzing NetFlow, IPFIX or sFlow records. Stealthwatch uses multiple analytical techniques to detect suspicious traffic, including supervised machine learning, unsupervised machine learning and some deep learning algorithms. The solution does not decrypt TLS traffic. Stealthwatch uses Cisco’s Encrypted Traffic Analytics (ETA) functionality to analyze TLS traffic without decrypting it.

Stealthwatch provides historical information to enable a security analyst to manually respond to incidents. It also enables automated responses through integration with Cisco’s Identity Services Engine (ISE). Stealthwatch alarms and events can be shared with Cisco’s SecureX platform, where responses can be automated via SecureX playbooks. Stealthwatch is sold as a subscription based on the necessary flows per second, network device count or total monthly flows.
Corelight is headquartered in San Francisco, California, serving customers primarily in North America and Europe. The vendor’s founders created the Zeek (formerly Bro) network monitoring framework, and the solution’s sensors are available as appliances (physical and virtual), on AWS and, more recently, on Azure. Corelight uses Zeek as its main engine, as a support for its own detections and for integrating third-party threat intelligence feeds. Corelight mainly relies on its own analysis of the traffic metadata, and can also extract files to forward them to third-party file inspection devices. Corelight Sensors do not decrypt TLS, but the vendor recently added additional encrypted traffic analysis for SSH — to detect brute-force attempts and interactive connections — and for TLS, including JA3 fingerprinting and certificate analysis.

As Corelight Sensors are more frequently deployed out of band, the vendor focused its response capabilities on integrating with a broad portfolio of SIEM and SOAR tools. Customers interested in Corelight will purchase hardware appliances and attached subscriptions based on sensors’ expected bandwidth capacity.
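JA3 fingerprinting, referenced throughout these vendor profiles, hashes fields from the TLS ClientHello so that client software can be identified without decrypting anything. A minimal sketch of the published JA3 recipe follows; the example field values are made up for illustration:

```python
import hashlib

def ja3_fingerprint(version, ciphers, extensions, curves, point_formats):
    """Compute a JA3 fingerprint: the MD5 of five ClientHello fields,
    with fields joined by ',' and values within a field joined by '-'."""
    parts = [
        str(version),
        "-".join(map(str, ciphers)),
        "-".join(map(str, extensions)),
        "-".join(map(str, curves)),
        "-".join(map(str, point_formats)),
    ]
    return hashlib.md5(",".join(parts).encode()).hexdigest()

# Illustrative (made-up) values drawn from a TLS ClientHello. The same
# client software produces the same digest on every connection, so the
# hash can be matched against known-malware fingerprints.
fp = ja3_fingerprint(771, [4865, 4866], [0, 10, 11], [29, 23], [0])
print(fp)  # a stable 32-character hex digest for this client profile
```

Because the fingerprint is deterministic for a given TLS stack, defenders can blocklist or alert on digests associated with malware families, which is why so many NDR sensors compute it even when they cannot decrypt the session.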
Darktrace is based in Cambridge, U.K., and San Francisco, California. Its detection capability is primarily based on unsupervised machine learning, and it also utilizes supervised machine learning and deep learning algorithms. To analyze encrypted traffic, Darktrace relies primarily on unsupervised machine learning to detect unusual and anomalous JA3 fingerprints. Darktrace offers a SaaS module to monitor traffic between users and Microsoft Office 365. In 2019, Darktrace introduced the Cyber AI Analyst capability. It uses analytical techniques to automatically investigate threats detected by Darktrace’s flagship Enterprise Immune System (EIS). Cyber AI Analyst surfaces the most important incidents on a dashboard, and it provides written reports on these incidents.

Darktrace’s optional Antigena tool automates the response to incidents detected by EIS. It sends commands to leading firewall vendors to drop suspicious traffic. It also integrates with some SOAR tools, some EDR tools and NAC tools. Cyber AI Analyst is Darktrace’s primary tool for automatically investigating and responding to threats. Pricing for EIS is based on an annual subscription. The price of Antigena for Network is 50% of the cost of the EIS license. The price of Antigena for Email is based on the number of users in the organization.
ExtraHop is a large network monitoring and security vendor, based in Seattle, Washington. It launched its NDR product, named Reveal(x), in January 2018. The vendor quickly gained visibility on shortlists among its existing customers and across multiple regions in pure NDR evaluations. ExtraHop delivers Reveal(x) as a self-service on-premises or IaaS appliance solution, or as cloud-hosted SaaS. Reveal(x) sensors extract enriched metadata to feed multiple analysis engines and build correlated security events. ExtraHop also offers full-packet capture or event-triggered packet capture. Users can drill down from summary metadata into the raw packets, as Reveal(x) allows filtering and downloading of only the range of packets required. Reveal(x) can decrypt TLS traffic, if given access to the server secret keys or the symmetric session key, and relies on JA3 fingerprinting and other traffic analysis techniques when decryption is not an option. ExtraHop detection capabilities leverage a combination of techniques, including rule- and reputation-based controls, but also combine supervised and unsupervised machine learning to detect anomalies and deviation from normal network behaviors.

ExtraHop chose to integrate with ticketing, SIEM and SOAR for automated orchestration, and with firewalls or endpoint protection solutions for automated response. Reveal(x) is priced as a set of subscriptions, which depends on the number of endpoints and so-called “critical assets” combined with bandwidth tiers. Additional features, such as full-packet capture and physical appliances, are priced separately.
Fidelis is based in Bethesda, Maryland. In addition to its NDR solution, the vendor also sells its own EDR and deception products. Fidelis combines multiple techniques to detect malicious traffic, including supervised and unsupervised machine learning, signatures, and statistical analysis. In April 2020, Fidelis launched a stand-alone TLS decryption appliance. It plans to add TLS decryption as an option on its sensors in 3Q20. It also uses JA3 signatures and machine learning techniques to analyze encrypted TLS traffic.

Fidelis Network does not directly integrate with any firewall solutions. It provides automated responses, such as packet drops, TCP resets and email quarantine, as well as quarantining files and custom playbooks, through its integration with its own EDR tool, Fidelis Endpoint. Fidelis also integrates with Carbon Black Cloud and other EDR tools. Fidelis can export data to SIEM and SOAR products. Manual response capabilities include the ability to search metadata, which can be stored for as long as the customer decides to keep it. Fidelis Network is licensed on an aggregate bandwidth and metadata storage model. An on-premises license can be purchased as a subscription or a perpetual model. A cloud license (managed from the cloud with data stored in the cloud) can only be licensed as a subscription.
FireEye is a global security company, based in Milpitas, California. FireEye SmartVision is its NDR solution, specializing in server-side traffic. SmartVision physical or virtual sensors are typically deployed to intercept client-to-server traffic. SmartVision detection engines heavily leverage IDS and threat intelligence rule-based controls. FireEye products are powered by a proprietary Multi-Vector Execution (MVX) engine, which can be hosted on-premises or in the cloud. FireEye Network Forensics provides full-packet capture and analysis of traffic. Machine learning techniques also apply to traffic and file analysis.

FireEye SmartVision response capabilities are available through the vendor’s orchestration and endpoint solutions, or via numerous integrations. Additional investigation tools are part of the FireEye Helix threat hunting and managed security service offering. The SmartVision solution can be purchased with a perpetual license (customers buy appliances), or as an annual subscription (based on Mbps of throughput or on a per-user basis).
Flowmon is based in Brno, Czechia. Its detection algorithms are based on a combination of multiple techniques, including machine learning, heuristics, statistical and signature-based methods. Flowmon does not decrypt TLS traffic. It uses encrypted traffic analysis techniques to look for indicators of compromise and compliance-related risks. It also uses JA3 fingerprints, but it does not rely heavily on this technique. Flowmon can ingest flow data (for example, NetFlow, IPFIX and others) from the network infrastructure, but it achieves the best results when customers implement its probes. These probes generate metadata that provides visibility into Layer 7 traffic across multiple protocols. The probes also include a memory buffer to support event-triggered packet captures.

Flowmon supports some automated response capabilities through formal partnerships and integration with Cisco’s NAC tool, Fortinet and Hillstone firewalls, and some other products. The tool also enables manual response by providing the ability to query and analyze origin data for threat hunting and incident analysis. Flowmon’s detection engine is licensed per volume of processed flows per second (fps). Customers can purchase yearly subscriptions or perpetual licenses. Flowmon collectors are licensed based on performance (fps) and storage capacity. Stand-alone probes are licensed per number of interfaces and speeds.
Based in Santa Clara, California, Gigamon offers the ThreatINSIGHT solution, which is based on technology from its acquisition of ICEBRG in 2018. ThreatINSIGHT uses a combination of techniques to detect suspicious traffic, including supervised and unsupervised machine learning, deep learning, and signatures. ThreatINSIGHT can analyze decrypted TLS traffic when it is coupled with Gigamon’s SSL decryption feature (an optional component of Gigamon’s flagship GigaVUE network packet broker). To analyze TLS traffic that has not been decrypted, ThreatINSIGHT uses JA3 signatures, and it applies machine learning techniques to detect anomalous patterns of communication within the encrypted traffic stream.

When compared to many of its competitors, ThreatINSIGHT has limited integrations with technology partners to automatically respond to detections. It integrates with Demisto, Splunk and Mimecast, but it does not have any partnerships with firewall vendors (to drop suspicious traffic) or NAC vendors (to isolate a compromised endpoint). The Insight Query Language (IQL) feature allows incident responders to perform threat hunting and incident response by searching through a store of metadata. ThreatINSIGHT is available as a subscription service, priced according to bandwidth. As part of the subscription, every ThreatINSIGHT customer receives a dedicated Technical Account Manager, regardless of their size.
With headquarters in Brno, Czechia, GREYCORTEX is a pure-play NDR vendor offering a solution called MENDEL. GREYCORTEX offers its solution mainly in Europe and the Asia/Pacific region. MENDEL consists of virtual and physical appliances. It can work as a single device, combining traffic gathering (sensors) and analysis (collectors), and expand to a three-tier architecture by adding centralized management to handle multiple collectors. GREYCORTEX combines numerous supervised and unsupervised machine learning models, then correlates their output with rule-based controls. It also provides solutions for ICS/SCADA networks. MENDEL supports configurable packet capture, uses JA3 fingerprinting for TLS analysis and supports TLS decryption.

MENDEL can automatically block traffic by instrumenting third-party network and security devices, leveraging their management APIs. The default configuration includes one month of searchable metadata. Two pricing models are available. Customers can purchase perpetual licenses based on sensor throughput and flows per second. Alternatively, customers can purchase a subscription license, also based on sensor throughput and flows per second (the subscription price includes support).
Hillstone Networks is a large network security vendor, based in Suzhou, China, with regional headquarters in Santa Clara, California. Its Server Breach Detection System (sBDS) can be deployed as a stand-alone product, and its threat detection sensors can also be bundled in the vendor’s centralized analytics solution (i-Source). Hillstone’s solution combines the various engines from its security portfolio, including IDS and malware inspection, but does not decrypt or analyze TLS sessions. Its use of unsupervised machine learning is focused on baselining client-to-server traffic patterns and spotting deviations.

Hillstone’s NDR solution integrates with other products from the vendor for incident response. Pricing is based on appliance purchase and attached subscriptions.
Based in Fulton, Maryland, IronNet targets large enterprises that are concerned about attacks from nation states. Its solution uses a combination of behavioral detection techniques, including supervised and unsupervised machine learning and some deep learning. It also uses statistical analysis and some heuristic techniques to detect suspicious traffic. IronNet does not decrypt TLS traffic, and it does not support JA3 fingerprints. However, it uses a range of artificial intelligence and machine learning techniques to detect suspicious TLS traffic.

Unlike many vendors in this market, IronNet does not automatically respond to threats by integrating with firewalls to drop suspicious network traffic. However, it does integrate with leading SOAR and SIEM products. IronNet has strong manual hunt capabilities, enabling threat hunters to investigate across network flow data and pull packet capture (PCAP) on any flow (not just what IronDefense deems as high risk). The Expert System feature in the IronDefense product prioritizes threats and provides contextual information for incident responders. The solution also provides a crowdsourcing feature that enables communities of peer enterprises to collaborate against targeted threats. Pricing for IronDefense is based on a flat monthly fee based on analytical throughput (not ingest throughput) or by number of users. Customers must purchase IronDefense physical or virtual sensors.
On 4 June 2020, VMware announced the intent to acquire Lastline. Gartner expects the deal to close by the end of June. After the deal has closed, Gartner expects that VMware will integrate Lastline technology into its NSX product.

Lastline is based in San Mateo, California. Its Defender product uses a combination of techniques to detect suspicious traffic, including supervised and unsupervised machine learning, and some deep learning functions. It also uses signatures, statistical analysis and heuristics, as well as a sandbox to detect malicious files. Defender does not natively decrypt TLS traffic. Instead, it applies anomaly detection to JA3 hashes. It also applies encrypted traffic analysis techniques to detect suspicious traffic without inspecting the payload.

Lastline’s automated response with firewall vendors (to send a command to the firewall, so it drops suspicious traffic) is limited to only Check Point Software Technologies. However, Lastline integrates with many other security products, including VMware Carbon Black Cloud, Symantec (Blue Coat), Splunk (Phantom), Trend Micro (TippingPoint), Palo Alto Networks and several others. When the Lastline sensors are deployed in-line, they can block suspicious traffic. For manual response, Lastline provides good threat hunting and incident response capabilities. The solution includes the open-source Kibana search and visualization product. Lastline has also built a query language to do more complex searches. The solution includes a triage functionality that correlates multiple alerts into a single high-fidelity alert. Defender is sold as a subscription. Organizations can purchase based on either the number of protected hosts or the number of protected users.
Based in Kennebunk, Maine, Plixer is a network performance monitoring and security vendor, offering an NDR solution based around Scrutinizer. Its customer base is mainly in the U.S. and Europe. Scrutinizer is deployed as physical/virtual sensors or as a SaaS. Scrutinizer collects metadata from the existing network infrastructure (switches, routers, firewalls, packet brokers, etc.), as well as from Plixer FlowPro, which is an optional sensor. The vendor recently acquired endpoint monitoring software, which promises to add more endpoint-related monitoring. Plixer offers integration with Endace for full-packet capture. Scrutinizer includes multiple rule-based and heuristic detections that flag network anomalies and security incidents. It complements these techniques with traffic baselining for anomaly detection and JA3 fingerprinting for TLS session analysis.

Scrutinizer’s response capabilities include incident-based and threshold-based triggers to update firewalls or other network equipment through API calls. Threat hunting capabilities are integral to Scrutinizer. Plixer’s subscription licensing is based on flow rate and the number of metadata-exporting network devices.
Vectra is a global NDR vendor, with headquarters in San Jose, California. Vectra Cognito is the company’s main product offering. The vendor was early to the NDR market with its Cognito platform. Vectra is highly visible in Gartner client inquiries across the Americas and EMEA regions, and growing in the Asia/Pacific region. Cognito Detect, the NDR product, leverages physical appliance sensors and virtual machines deployable on hypervisors and on IaaS platforms, and can interact with some SaaS through APIs to gather SaaS events. The analysis engine (Vectra Brain) can be deployed on-premises or on public cloud. Vectra uses supervised machine learning to detect global threats, and combines it with threat intelligence for more accurate detection of known bad actors. It uses unsupervised learning models for more contextualized anomaly detection. The vendor uses JA3 fingerprinting and other techniques to provide detection coverage for encrypted traffic, but does not decrypt TLS. Vectra provides easy-to-understand dashboards, and a “campaign view,” which puts multiple events in context and eases the investigation. Vectra recently launched a beta program for an Office 365 monitoring offering, and released Lockdown, an event aggregation and automated response (via partner integrations) feature that is part of Cognito Detect.

Vectra’s Lockdown solution integrates with endpoint controls, firewalls, SOAR and SIEM to provide response capabilities. It can also directly integrate with the infrastructure, taking down workloads or temporarily disabling compromised user accounts. Vectra’s pricing, in addition to the hardware costs, is based on the number of active monitored IP addresses. Additional subscriptions are available to forward enriched, Zeek-formatted data in real time to a third-party data lake (Cognito Stream), or to a SaaS that is integrated with Cognito Detect (Cognito Recall) for threat hunting purposes.
Enterprises should strongly consider NDR solutions to complement signature-based tools and network sandboxes. Many Gartner clients have reported that NDR tools have detected suspicious network traffic that other perimeter security tools had missed.

When evaluating NDR vendors, assess these factors:
Response — Some vendors focus more on automated responses (for example, sending a command to a firewall to drop suspicious traffic), whereas other vendors focus more on manual responses (for example, providing strong threat hunting tools). Enterprises should decide which approach is a better fit for them and should analyze the vendors with response features that best meet their requirements.
Pure-play versus NDR as a feature — Is it more sensible to implement NDR as a feature from another technology vendor (for example, SIEM), or do you require a more full-featured, pure-play NDR solution from one of the vendors analyzed in this Market Guide?
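The automated-response option discussed above (pushing a drop rule to a firewall when a detection fires) might be wired up as in the sketch below. The API endpoint, payload shape and token are hypothetical, not any real firewall vendor's interface:

```python
# Sketch of an automated-response hook: when an NDR detection fires,
# push a block rule to a firewall over its management API.
# FIREWALL_API, API_TOKEN and the rule schema are made-up placeholders.
import json
from urllib import request

FIREWALL_API = "https://firewall.example.com/api/v1/block-rules"
API_TOKEN = "REPLACE_ME"  # placeholder credential

def block_ip(src_ip, reason, dry_run=True):
    """Build (and optionally submit) a drop rule for the given source IP."""
    rule = {"action": "drop", "source": src_ip, "comment": reason}
    if dry_run:
        return json.dumps(rule)  # return the rule for review, do not enforce
    req = request.Request(
        FIREWALL_API,
        data=json.dumps(rule).encode(),
        headers={"Authorization": f"Bearer {API_TOKEN}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return resp.read().decode()

print(block_ip("203.0.113.9", "NDR alert: suspected C2 beaconing"))
```

Teams favoring manual response would route the same rule into a ticket or SOAR playbook for analyst approval rather than enforcing it automatically, which is exactly the trade-off the Response factor above asks you to decide.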
Note 1: Representative Vendor Selection
These vendors were selected because they met Gartner’s inclusion criteria, and were not eliminated by our exclusion criteria.
Note 2: Other Vendors That We Are Tracking
IoT and OT Specialization Vendors
NDR as a Feature Vendors
IBM (QRadar Network Insights)
Palo Alto Networks (Cortex XDR)
Kaspersky (see Note 3)
Qianxin Technology Co., Ltd. (SkyEye)
Tencent (T-Sec NTA)
Note 3: Kaspersky
In September 2017, the U.S. government ordered all federal agencies to remove Kaspersky’s software from their systems. Several media reports, citing unnamed intelligence sources, made additional claims. Gartner is unaware of any evidence brought forward in this matter. At the same time, Kaspersky’s initial complaints have been dismissed by a U.S. District of Columbia Court. Kaspersky has launched a transparency center in Zurich where trusted stakeholders can inspect and evaluate product internals. Kaspersky has also committed to store and process customer data in Zurich, Switzerland. Gartner clients, especially those who work closely with U.S. federal agencies, should consider this information in their risk analysis and continue to monitor this situation for updates.
Selecting the Right SOC Model for Your Organization
Published 24 February 2020 – ID G00464962 – 22 min read
An SOC provides centralized security event monitoring and threat detection and response capabilities, and may support other security operations’ functions and business unit requirements. This research helps security and risk management leaders identify the best SOC model for their organization.
Security operations centers (SOCs) will fail in their mission without a clear target operating model, and if their deliverables are not tightly coupled to business use cases, risks and outcomes.
A hybrid SOC working with external providers is a credible option that is increasingly being adopted by many organizations, specifically midsize enterprises.
Organizations are increasingly interested in multifunction SOCs, extending SOC duties to incident response, threat intelligence and threat hunting, while adding OT/ICS/IoT in scope.
Building, implementing, running and sustaining a fully staffed 24/7 SOC is cost-prohibitive for most organizations.
Security and risk management leaders responsible for security operations should:
Develop an SOC target operating model, taking into account current risks and threats, as well as the business objectives, focusing on specific threat detection and response use cases.
Use managed detection and response (MDR) or other security services to offset the cost of 24/7 SOC operations and to fill coverage and skills gaps, tactically or as a long-term strategy.
Expand the SOC’s capabilities beyond just SIEM solutions to provide greater visibility into the IT, OT and IoT environment where appropriate, but do not expect a full SOC/NOC integration.
Likewise, plan for SOC functions beyond reactive incident monitoring and into threat detection and response, and even proactive threat hunting.
Strategic Planning Assumption
By 2024, 25% of all organizations will have an SOC function, up from 10% today. This will range from small part-time virtual SOCs to fully staffed full-time SOCs, to outsourcing of SOC services to an external provider, or a combination of these.
Security operations centers (SOCs) have historically been adopted by only very large organizations requiring centralized and consolidated security operations focused on security event monitoring, and threat detection and response, usually delivered 24/7.
This has changed, and SOCs are becoming more ubiquitous as organizations large and small shift security efforts from prevention only to a blend of prevention and detection.
Gartner defines an SOC as a construct with the following characteristics:
A mission, usually focused on threat detection and response.
A facility, dedicated to the SOC, either physical or virtual.
A team, often operating in around-the-clock shifts to provide 24/7 coverage.
A set of processes and workflows that support the SOC’s functions.
A tool or set of tools to help predict, prevent, detect, assess and respond to security threats and incidents.
However, the SOC does not always have to be a physical facility with hundreds of analysts working around the clock. Gartner has seen less mature, as well as resource-constrained organizations employ staff members to perform security operational functions on an ad hoc basis and remotely (that is, where there is a virtual SOC function being delivered). While SOC is the ubiquitous term, other terms such as cybersecurity operations center, cyber defense center and cyber fusion center are often used.
Gartner observes a renewed interest from incoming inquiries in merging both the NOC and SOC functions for economies of scale. Although a fully fused NOC/SOC approach is not a viable alternative at scale, the common set of functions between NOC and SOC needs to be identified, and a decision has to be made on where each function will live. At the very least, continuously improving coordination between the NOC and SOC should be encouraged.
An organization cannot buy an outsourced SOC. Outsourced services still feed into an organization’s own security operations regardless of how informal that may be. A hybrid SOC usually connotes an SOC where one or more of the core functions are performed using outsourced security services. It is the most common form of SOC across all organizations, as most organizations will leverage some types of security services (for example, reverse malware engineering is a common function).
SOCs’ main mission is focused on the following functions, with threat detection and response being the most common across SOCs. The SOC needs to be clearly aligned to its target operating model, as defined in “Create an SOC Operating Model to Drive Success.” If a set of functions is not delivered out of the SOC, this could indicate that these functions are performed by another internal structure, an external service provider or are not aligned to the organization’s security use cases:
Security event monitoring, detection, investigation and alert triaging
Security incident response management, including malware analysis and forensic analysis
Threat intelligence management (ingestion, production, curation and dissemination)
Risk-based vulnerability management (notably, the prioritization of patching)
Security device management and maintenance (for the SOC technology stack)
Development of data and metrics for compliance reporting/management
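The risk-based vulnerability management function listed above is essentially a prioritization exercise: raw severity scores are weighted by business context so that patching effort goes where risk is highest. The toy scoring scheme below is illustrative only, not a Gartner formula.

```python
def patch_priority(cvss, asset_criticality, exploited_in_wild):
    """Toy risk-based prioritization: weight raw CVSS (0-10) by asset
    criticality (1-5) and boost actively exploited vulnerabilities."""
    score = cvss * asset_criticality
    if exploited_in_wild:
        score *= 2  # known exploitation outweighs raw severity
    return score

# Hypothetical findings: a critical CVSS on a low-value asset vs. an
# actively exploited flaw on a crown-jewel system.
vulns = [
    ("CVE-A", patch_priority(9.8, 2, False)),
    ("CVE-B", patch_priority(7.5, 5, True)),
]
vulns.sort(key=lambda v: v[1], reverse=True)
print(vulns[0][0])  # → CVE-B
```

Even this crude model shows why context matters: the lower-CVSS vulnerability on a critical, actively exploited asset ends up at the top of the patch queue.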
Figure 1 describes the main functions of an SOC across all SOC models.
Figure 1. Modern SOC Components
Depending on the functions and capabilities provided, a fully functional SOC running 24/7 requires at least eight to 12 full-time employees (see “How to Plan, Design, Operate and Evolve a SOC”). This does not include capacity for management, staff turnover, personal time off or other special activities like malware reverse engineering, forensics and threat analysis that may need to be performed by the SOC staff.
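The eight-to-12 FTE figure follows directly from shift arithmetic: keeping one seat staffed 24/7 consumes 168 hours a week, or about 4.2 analysts at 40 hours each, before any allowance for leave, training or turnover. A rough sketch, where the 20% absence factor is an assumption:

```python
import math

def soc_fte_estimate(seats_per_shift, hours_per_week=40, absence_factor=0.20):
    """Rough FTE count needed to keep N seats staffed 24/7.

    absence_factor is an assumed allowance for leave, training and
    turnover; it is not a Gartner-published figure.
    """
    coverage_hours = 24 * 7  # 168 staffed hours per seat per week
    raw = seats_per_shift * coverage_hours / hours_per_week
    return math.ceil(raw * (1 + absence_factor))

print(soc_fte_estimate(2))  # → 11 (2 * 168/40 = 8.4, * 1.2 → ceil 11)
```

With two analysts on duty at all times, the estimate lands squarely in the eight-to-12 range cited above, and it still excludes management, malware reverse engineering and other specialist work.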
Ideally, an SOC should be located in a dedicated, physical environment (such as an isolated room) with heightened levels of physical access required. Due to the sensitive nature of incident investigations, as well as the potential for tampering with potential evidence and hiding malicious tracks, physical access to the facility needs to be restricted to authorized personnel only. The SOC’s infrastructure (network, systems, applications) should be isolated or segmented from the production network to prevent internal breaches affecting the operations of the SOC. Furthermore, the technology infrastructure used for monitoring and investigations within the SOC should be isolated and separated from the internet. Finally, the SOC will often have its own independent internet connectivity so that it can continue to operate and perform investigations even if the corporate network is, for example, under a distributed denial of service (DDoS) attack. Based on Gartner client inquiries, however, this is not always the case. Although some organizations build/manage SOCs with high levels of physical protection and isolation, as described above, most organizations opt for a traditional office environment and simple isolation measures.
Five main models of SOC have emerged, which can be mapped along the maturity of the SOC processes and workflows in an organization, as described in Figure 2.
Figure 2. Five Models of SOC
These models are further described in Table 1 and the sections below.
Table 1: Five Primary Operational SOC Models for Typical Organizations
Virtual SOC
When to select: No dedicated facility available.
Characteristics: No dedicated facility; part-time and geographically distributed team members; activated when an incident is discovered.
Multifunction SOC
Typical maturity of SOC workflows: Low to medium.
When to select: Simple SOC with IoT/OT/ICS and some 24/7 NOC duties in scope.
Characteristics: Dedicated facility with a dedicated team performing not just security but some other critical 24/7 IT operations from the same facility to reduce costs; some formalized processes and workflows available.
Hybrid SOC
Typical maturity of SOC workflows: Low to very high.
When to select: Security operations can be performed by the organization’s internal staff 24/7, 8-5 on weekdays, or 8-5 every day with some responsibilities offloaded to an external provider; primary model when fully delegated to an MSSP or an MDR provider.
Characteristics: Mixes internal resources and outsourced security services; dedicated and semidedicated staff, either internal or outsourced. Any SOC model can be qualified as hybrid when it uses outsourced security services.
Dedicated SOC
Typical maturity of SOC workflows: Medium to high.
When to select: Need for self-contained, in-house, dedicated 24/7 threat detection and response.
Characteristics: Fully in-house, 24/7 operations; incident response, threat hunting (TH) and threat intelligence (TI) functions and teams in place.
Command SOC
Typical maturity of SOC workflows: High to very high.
When to select: Need to coordinate other SOCs.
Characteristics: Manages and coordinates other SOCs and activities; coordinates response across all SOCs for major incidents; provides threat intelligence, situational awareness and additional expertise; rarely directly involved in day-to-day operations.
Source: Gartner (February 2020)
A virtual SOC (vSOC) does not reside in a dedicated facility, nor does it have a common war room.
Instead, it is composed of team members who may have other duties and functions. Since there may not be dedicated tools for the SOC, like a SIEM, team members rely on available IT, and sometimes security technologies, and become active when a security incident occurs. In addition to a lack of SOC tools and SOC expertise, the lack of formalized processes and workflows for both the detection and the response phase is a typical attribute of a vSOC. Things are done reactively and ad hoc, using the available people and tools, usually in a best-effort, nondeterministic way.
A vSOC is typically suited to smaller enterprises that experience only infrequent incidents and/or do not have resources for a more encompassing SOC. Sometimes an organization can only afford an IT person or a handful of people who can, on a part-time basis, review alerts generated by the firewall or an antivirus, or periodically review critical logs in support of a threat detection and response function.
The defining attribute of a multifunction SOC is to bring IoT/OT/ICS in scope for the SOC, and/or to deliver on other critical 24/7 IT operations from the same facility to reduce costs.
This model is usually adopted by less mature organizations that need to deliver multiple use cases from the same facility, and that may not have dedicated expertise in IT, security and OT. These use cases are usually simple enough, from both the NOC and the SOC standpoint, to be delivered by common tools and common people. However, factors such as politics, budget and process maturity levels can lead to staff members doing multiple things, but none of them well. NOCs adhere to the Information Technology Infrastructure Library (ITIL) definitions of incident and incident management, which is generally not the right approach for security incidents. The ITIL’s focus is on events that cause a disruption of service, with the goal of restoring the service as quickly and efficiently as possible. Security and risk management leaders must not let this convergence distract from the mission of the SOC and its ability to help securely deliver and enable business outcomes.
Organizations engaged in this model always start by mapping available telemetry, tools, and expertise, and defining common use cases, processes and workflows for the multifunction SOC (see “Align NetOps and SecOps Tool Objectives With Shared Use Cases”). These can include not only IT and security devices and users, but also IoT/OT/ICS.
The defining attribute for a hybrid SOC is to mix both internal resources with outsourced ones, while leveraging external security services for the delivery of some or most of the SOC functions.
One or more dedicated people are responsible for ongoing SOC operations, involving semidedicated team members and third parties, as required. If an organization cannot operate 24/7, the resulting gap can be covered by a number of providers, resulting in a hybrid SOC model. These providers might include an MSSP (see “Magic Quadrant for Managed Security Services, Worldwide”), a managed detection and response (MDR) service provider (see “Market Guide for Managed Detection and Response Services”), a co-managed SIEM service provider, or sometimes a special security consulting provider or system integrator (SI) for such services as specialized incident response/forensics. Only large enterprises are able to afford and commit to dedicated, 24/7 internal SOCs. However, many organizations desire some form of internal security operations capability (although limited), even if they are using an external provider for a majority of their security monitoring needs.
The hybrid SOC model can reduce the cost of 24/7 operations. Therefore, it is well suited not only for small to midsize enterprises, and especially for those working extensively with third parties, but also to larger organizations and mature SOCs that can selectively outsource some security services.
Furthermore, it allows the organization to maintain stable security operations while internal capabilities are developed over time. During this time, any resource gaps can be filled, and existing security resources can shift their focus to other activities, such as deeper investigations of incidents. As such, this model is also adopted by organizations that have a desire to build insourced competencies but (1) need an immediate solution to their problem, (2) have limited expertise to be autonomous right away, and (3) want to leverage the security service provider for knowledge transfer and continuous expertise gathering.
The defining attribute of a dedicated SOC is to have a 24/7 centralized threat detection and response function, with a dedicated facility, IT, and security infrastructure and team, and robust processes and workflows. It is self-contained, possessing all of the resources required for continuous day-to-day security operations.
A fully centralized SOC is suited for large enterprises with multiple business units and geographically dispersed locations, sensitive environments, and high-risk, high-security requirements, as well as service providers that provide MSSs. Specifically, large enterprises choose to build, implement and run their own SOCs when:
Laws, regulations or governance issues prevent the outsourcing option.
There are concerns about specific/targeted threats.
Specialized expertise and knowledge about the business cannot be outsourced.
The organization’s technology stack is not supported by third-party security services.
Recently, Gartner has seen large enterprises with a complex and distinct set of use cases and/or very widespread security mandates fusing traditional security operations with more contemporary functions. Examples of these extended use cases include, but are not limited to, threat intelligence, cyber incident response and OT/Internet of Things (IoT) security. There are, however, both advantages and disadvantages to doing this. For example, fusing incident response as part of the SOC will allow tighter integration between detection and response, and is an essential factor needed for security operational success (see “Prepare for the Inevitable With an Effective Security Incident Response Plan”). On the other end of the spectrum, it can create separation of duties conflicts and/or pull the security event monitoring resources away from the incident response tasks, thus affecting the effectiveness of the monitoring during an actual incident (see “How to Plan, Design, Operate and Evolve a SOC”).
Dedicated SOCs usually keep most functions in house and minimize security services. However, even large dedicated SOCs can outsource some very specific functions, such as reverse malware engineering. Strictly speaking, most dedicated SOCs are also very advanced hybrid SOCs.
The defining attribute of a command SOC is to support and manage several SOCs, and not be involved in day-to-day operations.
Very large and/or distributed organizations that have regional offices with a certain operating independence, service providers offering MSSs and those providing shared services (for example, government agencies) may have more than one SOC under their purview. Where these SOCs are required to run autonomously, they will function as centralized or distributed SOCs. In some instances, the SOCs will work together, but must be managed hierarchically. In those cases, one SOC should be designated as the command SOC. The command SOC coordinates security intelligence gathering, produces threat intelligence, curates and fuses these for consumption by all other SOCs, in addition to providing additional expertise and skills such as forensic investigations and/or threat analysis. Sometimes, this is how a computer emergency response team (CERT) functions in smaller countries where they are serving as an aggregation and coordination point more than delivering day-to-day security operations.
Benefits and Uses
Improved Threat Management
Many organizations already routinely implement and/or employ a variety of security technologies and services designed to prevent and detect threats, as well as harden and protect assets. When these solutions are managed in silos, organizations lose the opportunity to centrally consolidate, normalize, correlate and monitor these threats in real time, and will at best waste valuable time and resources, and at worst miss obvious threats that an SOC could have easily detected. This value is realized by using the SOC as the central point for reconciling and managing these threats.
Reduction in MTTD and MTTR Incidents
Integrated security event monitoring gives the security operations team better visibility and enables it to correlate patterns and surface suspicious activities. Effective detection and escalation of incidents and close coordination between the individual teams within a defined workflow and process allow an organization to detect and respond faster, improving both mean time to detect (MTTD) and mean time to remediate (MTTR).
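MTTD and MTTR reduce to simple averages over incident timestamps. A minimal sketch with made-up incident records:

```python
from datetime import datetime
from statistics import mean

def mttd_mttr(incidents):
    """Mean time to detect and mean time to remediate, in hours,
    from (occurred, detected, resolved) timestamp triples."""
    fmt = "%Y-%m-%d %H:%M"
    detect, remediate = [], []
    for occurred, detected, resolved in incidents:
        t0 = datetime.strptime(occurred, fmt)
        t1 = datetime.strptime(detected, fmt)
        t2 = datetime.strptime(resolved, fmt)
        detect.append((t1 - t0).total_seconds() / 3600)
        remediate.append((t2 - t1).total_seconds() / 3600)
    return mean(detect), mean(remediate)

# Fabricated sample incidents, for illustration only
incidents = [
    ("2020-02-01 08:00", "2020-02-01 10:00", "2020-02-01 16:00"),
    ("2020-02-03 22:00", "2020-02-04 02:00", "2020-02-04 12:00"),
]
mttd, mttr = mttd_mttr(incidents)
print(mttd, mttr)  # → 3.0 8.0
```

The hard part in practice is not the arithmetic but capturing an honest "occurred" timestamp, which usually only emerges during the investigation.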
Centralization and Consolidation of Security Functions
Consolidating security functions in an SOC can provide cost efficiencies, enable cost sharing and leverage economies of scale while maximizing the available expertise, skills and resources. For larger organizations with a distributed geographical environment, especially those with local governance requirements, centralizing some security operations functions can help provide a centralized view, as well as a set of core security services, to all entities, while respecting local regulations.
An SOC is often the operational model of choice for large and some midsize enterprises to meet regulatory requirements mandating security event monitoring, vulnerability management and incident response functions. Furthermore, an SOC can improve compliance auditing and reporting across the organization, but an SOC would typically not be built for compliance-only use cases.
Gartner indicates SOC spending tends to be a significant percentage of an organization’s total security budget (see “SOC Development Roadmap”) — 57% of organizations spend over 20% of their total security budget on the SOC. However, clients seem to be split between insourcing and outsourcing their SOC (see “Setting Up a Security Operations Center (SOC)”). In addition, increased SOC spending is sustained by:
Maturing of information security programs
Centralization of incident detection, threat detection and response capabilities, as well as consolidation of security operations functions expanded throughout the entire organization
Increased adoption of external service support for security event monitoring and device management
In 2019, Gartner saw a 39% increase of inquiries from clients requesting assistance on both building and maturing their security operations through the lens of an SOC. These clients have security operations functions that are either conducted by internal staff, supported by an external provider offering MSSs to offload some of the SOC functions from the organization internally, or provided in the form of regionally or vertically aligned shared services.
Lack of Improvement in Breach Response Efficiency/Capabilities
With threat management as a major driver for adopting an SOC, most will be judged by how they perform in that function and will be measured by the speed and efficacy of security event monitoring and threat detection and response.
Organizations adopting the SOC model should carefully evaluate how this investment translates to less frequent and severe breaches, and compare it to their own pre-SOC state. Furthermore, security technologies are not silver bullets. SOCs may become overwhelmed by the vast number of alerts generated by an expanding number of security tools. Although this is a common issue, there is no simple solution to avoid this quandary. After all, some organizations genuinely have a lot of malicious activity, which leads to alert overload. Better SIEM tuning to minimize noise, use of advanced analytics for better detection, and use of automation for alert triage and faster response are often used to reduce the alert flood.
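One common tuning tactic mentioned above, reducing the alert flood through aggregation, can be as simple as collapsing duplicate alerts before they reach an analyst. A toy sketch; the field names are assumptions:

```python
from collections import Counter

def aggregate_alerts(alerts):
    """Collapse raw alerts on (rule, source IP) so analysts triage one
    aggregated record per noisy source instead of thousands of events."""
    counts = Counter((a["rule"], a["src_ip"]) for a in alerts)
    return [{"rule": r, "src_ip": ip, "count": n}
            for (r, ip), n in counts.most_common()]

# Fabricated alert stream: one chatty scanner plus one rare detection
raw = [{"rule": "port-scan", "src_ip": "10.1.1.1"}] * 500 + \
      [{"rule": "dns-tunnel", "src_ip": "10.2.2.2"}]
summary = aggregate_alerts(raw)
print(len(raw), "raw alerts ->", len(summary), "triage records")
# → 501 raw alerts -> 2 triage records
```

Aggregation does not reduce the underlying malicious activity; it only ensures the analyst sees one record per pattern, which is exactly what SOAR playbooks and SIEM correlation rules automate at scale.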
Skills, Expertise and Staff Retention
Staff retention for SOC analysts is generally difficult. Even service providers that can offer a career path and progression struggle to keep their SOC analysts for longer than three to four years. As a result of the shift-based and repetitive work, in addition to a rare and sought-after skill set, the SOC analyst role is often seen as a steppingstone role. This trend is further exacerbated by a global shortage in available qualified staff (see “Adapt Your Traditional Staffing Practices for Cybersecurity”).
An understaffed SOC, or one staffed with inexperienced analysts, will be ineffective and will struggle to achieve its objective of rapid detection and response to threats and incidents, despite all the spend on technology and data collection. Prolonged understaffing also increases analyst attrition. To avoid starting an SOC project that can never succeed due to resource constraints, seek out alternatives such as MSSs or other forms of hybrid and outsourced security event monitoring, like MDR service providers. Alternatively, start with non-24/7 coverage and expand later when the resources are available.
Regardless of the SOC model implemented, Gartner recommends developing an SOC staff retention strategy from the start, as well as maintaining a continuous hiring capacity, which can help the organization maintain the SOC with the minimum, yet optimum staff required (see “Develop Existing Security Staff to Excel in the Digital Era”).
Return on Investment Demonstration
Security and risk management leaders need to understand that success is not just about achieving security operations metrics, but also about external metrics that align with the business. Important starting points are understanding your market, your message and the media you should use. For example, concerns over detection rates, open tickets per analyst and ticket closure rates are warranted. However, do not lose sight of the fact that the business is mainly concerned with addressing these questions:
Can we continue to deliver our products/services?
What competitive disruptions or players in our market will cause clients to shift from our products/services?
To ensure your organization has the most appropriate security metrics, start with the end in mind and first develop tightly defined goals and metrics the SOC needs to deliver against that align to the business outcomes. Also, make sure that a sustainable budget is secured for the first two to three years of the SOC operation. It will often take this amount of time for people, processes and technology to be integrated into your organization and delivering at a reasonable level of proficiency.
Security and risk management leaders involved in incident monitoring, threat detection and response, and/or other adjacent security operations functions (such as threat hunting and threat intelligence) should benefit from efficiencies by formalizing all relevant duties within a security operations center. This SOC will then:
Gather and centralize required security personnel. These can be present either physically or virtually, and can belong to the organization’s security, operations, IT or network teams, or belong to a service provider. Likewise, these resources can be assigned on a full-time or part-time basis.
Define repeatable and automatable processes and workflows. These will depend on the scope of the SOC and should tend to address not only threat detection but also response. When an outside service provider is involved, it is particularly important to define the “who is doing what, when” by using a responsible, accountable, consulted, informed (RACI) matrix to define roles and responsibilities, and to expose integrations and communications between the client and the service provider.
Appropriately implement tools. Depending on scope, these tools (which can include, for example, CLM, SIEM, SOAR, SIRP or ITSM) should be selected and implemented to not only support current SOC requirements, but also current or planned SOC scope creep beyond security. This includes, for example, supporting the IT operations team and its NOC, or the ICS owners and their IoT ecosystem.
The scope of the SOC can then be defined along the following two dimensions:
Breadth of scope. As an example, does the SOC address only a subset of the infrastructure, or a subset of the user population, entire BUs or even the entire organization?
Depth of scope. As an example, does the SOC address basic, best-practice cyber-hygiene use cases, or does it address more complex use cases such as advanced persistent threat (APT) or insider threat? Does it include the IoT ecosystem, and does it deliver some NOC services as well?
Based on the scope of the SOC along these two dimensions, available expertise and resources, and strategic appetite for insourcing versus outsourcing, organizations can engage in an SOC initiative using one of the models described in this research note.
Note 1: ITIL 4 Incident and Incident Management Definitions
The definition of “incident” was revised in ITIL 2 as “an event which is not part of the standard operation of a service and which causes or may cause disruption to or a reduction in the quality of services and customer productivity.” Failure of one disk from a mirror set would fall into this category. ITIL 4 refers to incident management as a practice, describing key activities, inputs, outputs and roles. The primary objective of the incident management ITIL process is to return the IT service to users as quickly as possible.
Published 29 April 2020 – ID G00394281 – 61 min read
Modern application design and the continued adoption of DevSecOps are expanding the scope of the AST market. Security and risk management leaders will need to meet tighter deadlines and test more complex applications by seamlessly integrating and automating AST in the software delivery life cycle.
Strategic Planning Assumptions
By 2025, 70% of attacks against containers will be from known vulnerabilities and misconfigurations that could have been remediated.
By 2025, organizations will speed up their remediation of coding vulnerabilities identified by SAST by 30% with code suggestions applied from automated solutions, up from less than 1% today, reducing time spent fixing bugs by 50%.
By 2024, the provision of a detailed, regularly updated software bill of materials by software vendors will be a non-negotiable requirement for at least half of enterprise software buyers, up from less than 5% in 2019.
Gartner’s view of the market is focused on transformational technologies or approaches delivering on the future needs of end users.
Gartner defines the application security testing (AST) market as the buyers and sellers of products and services designed to analyze and test applications for security vulnerabilities.
We identify four main AST technologies:
Static AST (SAST) technology analyzes an application’s source, bytecode or binary code for security vulnerabilities, typically at the programming and/or testing software life cycle (SLC) phases.
Dynamic AST (DAST) technology analyzes applications in their dynamic, running state during testing or operational phases. It simulates attacks against an application (typically web-enabled applications and services and APIs), analyzes the application’s reactions and, thus, determines whether it is vulnerable.
Interactive AST (IAST) technology combines elements of DAST simultaneously with instrumentation of the application under test. It is typically implemented as an agent within the test runtime environment (for example, instrumenting the Java Virtual Machine [JVM] or .NET CLR) that observes operations or attacks and identifies vulnerabilities.
Software composition analysis (SCA) technology is used to identify open-source and third-party components in use in an application, their known security vulnerabilities, and typically adversarial license restrictions.
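As a concrete illustration of what SAST-style analysis does, the toy scanner below walks a Python syntax tree and flags calls to a small denylist of dangerous functions. Real SAST engines use data-flow and taint analysis across whole codebases; this sketch only shows the shape of the approach, and the denylist is an assumption for the example.

```python
import ast

DANGEROUS_CALLS = {"eval", "exec", "pickle.loads"}  # toy rule set

def scan_source(source):
    """Toy static-analysis pass: walk the AST and flag calls to
    denylisted functions, reporting the line number of each finding."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            fn = node.func
            if isinstance(fn, ast.Name):
                name = fn.id
            elif isinstance(fn, ast.Attribute):
                name = f"{getattr(fn.value, 'id', '?')}.{fn.attr}"
            else:
                name = None
            if name in DANGEROUS_CALLS:
                findings.append((node.lineno, name))
    return findings

code = "x = eval(input())\nprint(x)\n"
print(scan_source(code))  # → [(1, 'eval')]
```

The same denylist idea, applied to a manifest of third-party packages instead of source code, is roughly what an SCA tool does when matching components against known-vulnerability databases.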
AST can be delivered as a tool or as a subscription service. Many vendors offer both options to reflect enterprise requirements for a product and a service.
The 2020 Magic Quadrant will focus on a vendor’s SAST, DAST, SCA and IAST offerings, and on their maturity and features as tools or as a service. AST vendors innovating or partnering to provide these capabilities were also included.
Gartner has observed the major driver in the evolution of the AST market is the need to support enterprise DevOps initiatives. Customers require offerings that provide high-assurance, high-value findings while not unnecessarily slowing down development efforts. Clients expect offerings to fit earlier in the development process, with testing often driven by developers rather than security specialists. As a result, this market evaluation focuses more heavily on the buyer’s needs when it comes to supporting rapid and accurate testing capable of being integrated in an increasingly automated fashion throughout the software development life cycle. In addition, Gartner recognizes the growing relevance of containers as an attractive technology for application development, especially for cloud-native applications. We have added support for containers as a factor in the 2020 Magic Quadrant.
Gartner has observed that enterprises today increasingly employ AST for mobile apps. The toolsets for AST, as well as techniques for behavioral analysis, are often employed to analyze source, byte or binary code, and observe the behavior of mobile apps to identify coding, design, packaging, deployment and runtime conditions that introduce security vulnerabilities. While these capabilities are valued, they do not drive the current or evolving needs of customers in the AST space, and thus are similarly not a primary focus of this Magic Quadrant.
Figure 1. Magic Quadrant for Application Security Testing
Source: Gartner (April 2020)
Vendor Strengths and Cautions
Based in the U.S. and France, CAST is a software intelligence vendor whose product is used to analyze software composition, architecture, flaws, quality grades and cloud readiness. In addition to its code quality testing offering, CAST provides enterprise SAST with the CAST Application Intelligence Platform (AIP). The vendor also offers CAST Highlight, which provides SAST pattern analysis and SCA. The CAST Security Dashboard enables application security professionals to prioritize and resolve application security vulnerabilities. The vendor also provides a desktop version called CAST Lite.
During the past 12 months, CAST continued to expand its language and framework coverage; improved its SCA offering (including the addition of transitive dependencies and visual representation of dependencies); and optimized its scanning for complex projects. CAST also worked on false positive reduction, including the introduction of its autoblackboxing capability, which allows users to fine-tune and customize their analysis (for example, including external code or recognizing and suppressing specific false positives). CAST also introduced AIP Console, which allows for automated application discovery, configuration and setup.
CAST will appeal to large enterprises requiring a solution that combines security testing with code quality testing, and to existing CAST AIP clients that already use the platform for quality testing.
CAST offers a single solution that can be used for quality analysis as well as security analysis, which can be appealing to organizations with DevSecOps use cases.
Client feedback highly rated the ability to get a single view of issues across security, quality and architecture. CAST’s analysis engine provides an architectural blueprint of the software, which helps test composite applications written in multiple languages, visualize the architecture to improve code security (for example, by detecting insider threats via rogue data access) and reduce false positives.
The vendor provides a scoring mechanism that can be calibrated to organization-specific criteria to track whether an application’s health is increasing or deteriorating from security, reliability and multiple other standpoints.
CAST provides the ability to set up a plan of action based on a particular objective, such as reducing technical debt or improving the security score.
Client feedback favorably rated the scalability and performance of the SAST engine in analyzing larger applications.
Clients perceive CAST as an application quality testing solution provider, rather than an established application security vendor.
The vendor does not provide SCA as part of its main SAST offering, AIP, but only with CAST Highlight.
CAST’s SAST solution is missing key software development life cycle (SDLC) integration features, such as a spellchecker, incremental scanning and, most importantly, an integrated development environment (IDE) plug-in.
CAST clients often cite setup, implementation and customization as areas for improvement. Also, the vendor does not provide 24/7 support.
CAST does not provide DAST or IAST, and has no partnerships to deliver either.
Known originally for its SAST offering, Checkmarx has expanded the scope of its portfolio to include SCA, IAST and — via a partnership — managed DAST. An on-demand interactive educational offering, CxCodebashing, provides developers with just-in-time training about vulnerabilities within code. The vendor’s SCA product is essentially new this year, with an internally developed version replacing a previous OEM offering while retaining the same name, CxOSA. The SCA offering also supports new container scanning capabilities to aid in identifying problematic open source in images. Another change is the addition of a Docker and Linux-based SAST scanning engine. This addresses past complaints around a requirement for Windows to support local scanning engines, and also enables a new “elastic” scanning facility allowing customers to add (or remove) scanning engines to reflect changing workloads. Another update offers expanded prioritization of results based on a confidence rating (derived from a machine learning [ML] algorithm) and other variables, such as user-defined policies, severity ratings, age and several others.
Checkmarx offers a mix of deployment options for most of its products, with identical capabilities available in on-premises, cloud and managed service forms. Based in Tel Aviv, the vendor offers a global presence in North and South America, Europe, and the Asia/Pacific region, including Japan. Principal support centers are located in Texas, Israel and India. Checkmarx was acquired on 16 March 2020 by private equity firm Hellman & Friedman from Insight Ventures, which retains a minority interest. As this acquisition occurred following the deadline for this Magic Quadrant, any impact on the vendor’s position was not addressed.
The vendor’s portfolio competes well for various use cases, including DevSecOps, cloud-native development and more traditional development approaches where SAST is a central requirement. SAST capabilities support a broad variety of programming languages and frameworks, and include support for incremental and parallel tests.
CxIAST employs a passive scanning model and results are correlated with SAST findings, as are issues discovered within open-source packages. This helps with validation of results, and can aid in confirming that a vulnerability is within executable code.
Tool integration within IDEs and the build environment is frequently cited as a strength by customers.
Remediation guidance, augmented by the optional CxCodebashing education component, helps developers understand vulnerabilities and how they can be resolved. A graph-based display of code execution paths and vulnerabilities highlights a proposed “best fix” location. Also, chat-based guidance provides fix advice from Checkmarx support staff.
The product suite offers guidance on the prioritization of vulnerabilities, with reports factoring in data such as the severity of the vulnerability, impact, source and sink information, and confidence level. Confidence levels are derived from a mix of technologies, including an ML algorithm to validate results and correlation between SAST findings and those discovered by IAST or SCA tests.
Through its various components, the Checkmarx portfolio offers basic support for both API security testing and container scanning. The vendor indicates that it plans to continue investment in these areas.
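The prioritization scheme described above, in which severity, confidence, age and policy inputs feed a single ranking, can be sketched in a few lines. The field names and weighting below are illustrative assumptions, not Checkmarx’s actual model:

```python
from dataclasses import dataclass

# Hypothetical model of a finding; field names are illustrative,
# not Checkmarx's actual schema.
@dataclass
class Finding:
    name: str
    severity: int         # 1 (low) .. 4 (critical)
    confidence: float     # 0..1, e.g. from an ML classifier
    age_days: int         # how long the finding has been open
    policy_weight: float  # user-defined policy multiplier

def priority(f: Finding) -> float:
    """Blend severity, confidence, age and policy into one score."""
    age_factor = min(f.age_days / 30.0, 2.0)  # cap the age boost
    return f.severity * f.confidence * f.policy_weight * (1.0 + age_factor)

def triage(findings: list[Finding]) -> list[Finding]:
    """Return findings ordered from highest to lowest priority."""
    return sorted(findings, key=priority, reverse=True)
```

The point of such a blend is that a high-severity finding with low ML confidence can rank below a medium-severity finding the engine is nearly certain about, which is the behavior development teams typically want from automated triage.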
Reflecting its history, most of the vendor’s customers use its CxSAST product, although Checkmarx continues to invest in expanding its portfolio and capabilities, and its other products show growth.
CxDAST is based on a third-party technology relationship and is only available as part of a managed service offering. For use cases where DAST is a primary — or the only — element of an AST effort, the offering may be less attractive.
CxOSA, despite retaining the existing name and feature set, is essentially a new product and is available only as an add-on to the CxSAST product.
Licensing continues to be raised as a source of dissatisfaction by some customers, which may be a consequence of the mix of pricing models offered. Especially for SAST, these are generally based on the number of users or projects/applications — an approach that is emerging as an industry standard. When combined with multiple license models (perpetual, term and subscription), prospective customers gain flexibility, along with complexity. Rankings for negotiation flexibility, pricing and value are on par with competitive vendors, and are generally positive.
Based in the U.S., Contrast Security is an AST vendor that also sells in the U.K., EU and the Asia/Pacific region. The Contrast platform consists of three primary products: IAST (Contrast Assess), SCA (Contrast OSS) and RASP (Contrast Protect). Contrast Assess incorporates Contrast OSS, which automatically performs SCA through both static scans and runtime analysis as part of the Contrast platform. Contrast Protect can be licensed independently or jointly with Contrast Assess. The vendor also offers a central management console, Contrast TeamServer, which can be delivered as a service or on-premises. The testing approach, known as self-testing or passive IAST, does not require an external scanning component to generate attack patterns to identify vulnerabilities; rather, it is driven by application test activity, such as quality assurance (QA), executed automatically or manually.
Contrast is a good fit for organizations pursuing a DevOps methodology and looking for approaches to insert automated, continuous security testing that is developer-centric. Organizations that have developers with previous security experience favor Contrast for its lower operational complexity and a quick start into DevSecOps. Some are skipping the traditional SAST/DAST starting point and going straight to IAST. Contrast offers service integrations with the Eclipse, Rational Application Developer for WebSphere Software, IntelliJ IDEA, Visual Studio (VS) Code and VS IDEs through plug-ins that users can install from the vendor’s public IDE marketplace. Contrast provides a comprehensive REST API, as well as out-of-the-box integrations with common DevOps tools such as Chef, Puppet, Jenkins, Azure Pipelines, Maven and Gradle.
Contrast Assess, combined with the vendor’s SCA product (Contrast OSS), is a good choice for organizations leveraging a DevOps or agile approach, offering a quick starting point and rapid integration across the entire SDLC. Gartner client feedback indicates that this also helps in embedding AST among development teams without security testing expertise, because the agent can identify vulnerabilities through normal application testing. Contrast Assess is one of the most broadly adopted IAST solutions and continues to compete on nearly every IAST shortlist.
Contrast’s reporting tool, TeamServer, provides a comprehensive view of code, dependencies, vulnerabilities and project security status in an easy-to-use, intuitive platform. Status is reported as a grade (A through F), making it simple to consume status quickly across complex DevSecOps projects. It also includes a tool for representing dependencies and services in the form of a map, which makes it easier to visualize the attack surface.
Contrast has put significant effort into scanning COTS software, making it a good choice for enterprises with large implementations of third-party code that might be concerned with COTS application security and dependencies on third-party application libraries.
Clients highly rate the ease of use of the tool and the vendor’s support. Contrast introduced a Community Edition for Assess and Protect to allow users to utilize the fully functional platform for a limited number of applications.
Contrast’s platform provides IAST, SCA and RASP for Java, .NET Framework, .NET Core, Node.js, Ruby and Python.
Contrast Security offers a full IAST and SCA solution, and does not provide stand-alone SAST or DAST tools or services, although its IAST tools can do similar testing in some cases.
Client feedback suggests that, due to the passive testing model, effective test coverage requires clients to have mature test automation capabilities or to run Contrast Assess in conjunction with DAST or “DAST-lite” tools. To address this, Contrast introduced a “route coverage” feature to give clients visibility into their test coverage by highlighting which parts of the application were exercised or still need to be covered.
Contrast can test mobile application back ends, but not the client-side code of the mobile app, and does not conduct behavioral analysis or check front-end code vulnerabilities, such as DOM-based XSS.
Contrast does not feature some of the nice-to-have ongoing support mechanisms that organizations with no AST experience often look for (for example, IDE gamification, human-checked results), although it does support chat with staff for specific questions.
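The route-coverage idea noted above, comparing the routes an application registers against those actually exercised by tests, can be illustrated with a minimal sketch. All names here are hypothetical, not Contrast’s API:

```python
# Minimal illustration of route coverage: compare the routes an app
# registers against the routes actually exercised during testing.
# All names are hypothetical, not Contrast's API.

class RouteCoverage:
    def __init__(self) -> None:
        self.registered: set[str] = set()
        self.exercised: set[str] = set()

    def register(self, route: str) -> None:
        """Called when the framework maps a route at startup."""
        self.registered.add(route)

    def record_hit(self, route: str) -> None:
        """Called by instrumentation when a request reaches a route."""
        self.exercised.add(route)

    def uncovered(self) -> set[str]:
        """Routes tests never reached: blind spots for passive IAST."""
        return self.registered - self.exercised

    def coverage(self) -> float:
        if not self.registered:
            return 1.0
        return len(self.exercised & self.registered) / len(self.registered)
```

Because passive IAST only sees code that test traffic actually exercises, surfacing the uncovered set tells teams exactly where to extend their test automation.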
GitLab is a global company with headquarters in the U.S. GitLab provides a continuous integration/continuous delivery (CI/CD)-enabling platform and offers AST as part of its Ultimate/Gold tier. The vendor combines proprietary and open-source scanner results within its own workflows, and provides SAST and DAST. GitLab also provides SCA functionality with Dependency Scanning, and open-source scanning capabilities with Container Scanning and License Compliance. A new entrant in this Magic Quadrant, GitLab introduced support for Java, remediation recommendations and a security dashboard in the past 12 months. It also integrated technology from its acquisition of Gemnasium into its SCA offering, and added, among other features, Secret Detection to its SAST. This functionality scans the content of the repository to identify credentials and other sensitive information that should not be left unprotected in the code.
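Secret detection of this kind is typically pattern-based. A minimal sketch follows, with regexes chosen for illustration rather than taken from GitLab’s actual rule set:

```python
import re

# Illustrative patterns only -- production scanners ship far larger,
# regularly updated rule sets and add entropy checks.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_password": re.compile(r"password\s*=\s*['\"][^'\"]{8,}['\"]", re.IGNORECASE),
}

def scan_text(text: str) -> list[tuple[int, str]]:
    """Return (line_number, rule_name) pairs for suspected secrets."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for rule, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, rule))
    return hits
```

Running such a scan on every commit, as GitLab does in the merge request flow, catches credentials before they reach a shared branch, where removing them from history becomes much harder.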
GitLab will prove a good fit for organizations that use its platform as a development environment, and for organizations looking for a broader development CI/CD-enabling solution that comes with a developer-friendly and affordable security scanning option.
GitLab has a single platform for development and security for the entire SDLC, which allows for easier integration of security, as well as easier acceptance and adoption for developers. Security professionals have visibility into the vulnerabilities at the time the code is committed, and when modifications, approvals and exceptions are made, and can also enforce security policies in the merge request flow.
The vendor’s SAST, Secret Detection, DAST, Dependency Scanning, Container Scanning and License Compliance offerings are included in the Ultimate/Gold tier. Its pricing is publicly available, and provides a relatively affordable option.
GitLab provides DAST on a developer’s individual code changes within the code repository. It does so by recreating a review application based on the code that is already committed in the repository.
Users can configure requirements for pipelines, and ensure that some, or all, of the security scans are a part of that.
GitLab provides container scanning for vulnerabilities, and for code deployments in Docker containers and those using Kubernetes.
GitLab’s SAST lacks features that are available in more mature offerings. Language coverage is limited and the dashboard lacks the granularity and customizability of more established tools. Its SAST offering lacks features such as quick fix recommendations. Although GitLab can test developer code before merging it, it does not have an IDE plug-in and does not provide real-time spell checking.
GitLab is new to the AST space and Gartner clients haven’t traditionally considered it a security vendor. Its security offering is relatively new, and doesn’t have extensive end-user feedback.
GitLab’s AST comes as part of the broader development platform. Organizations that do not use GitLab for development will find stand-alone security scanning from the vendor impractical.
The vendor does not provide specific mobile AST support and its DAST offering is essentially Open Web Application Security Project’s (OWASP’s) open-source ZAP tool.
HCL Software is, at least in name, a newcomer to this Magic Quadrant, having acquired IBM’s AppScan products and technologies after the company exited the application security business. The acquisition was preceded by a two-year span in which HCL was responsible for development and maintenance of the product line, while IBM continued the sales and marketing functions. HCL AppScan is suitable for a variety of use cases, making it attractive to larger organizations with a mix of requirements. HCL Software is based in India. Regional sales and support offices are located in North and Central America, Europe, and several countries in the Asia/Pacific region.
The overall structure of the product portfolio remains largely unchanged, albeit somewhat complex. On-premises products include AppScan Source for SAST, and AppScan Standard and AppScan Enterprise for desktop and on-premises DAST, respectively. AppScan Enterprise Server is an on-premises server platform for sharing policies, results and DAST scanning manually and via automation. Service-based offerings are all grouped under the AppScan on Cloud brand and include both SAST and DAST support. HCL’s IAST offering, called Glass Box, is largely an extension of — and tightly integrated with — its DAST products (both on-premises and cloud-based versions). Software composition analysis is provided by the AppScan on Cloud service, and is based on an HCL static analysis engine coupled with an OEM database provided by WhiteSource. Mobile testing is available via AppScan Source for static analysis, and AppScan on Cloud for DAST, IAST and behavioral monitoring. API-specific tests are delivered through a combination of SAST and DAST. In general, products can be deployed on-premises, in the cloud or in a hybrid arrangement.
During the past 12 months, significant effort has been expended on reworking the product line to offer more standard functionality across platforms. For example, its Bring Your Own Language capability enables more consistent language coverage across platforms. Support for Apex, Ruby and Golang, available in the cloud version of AppScan, was added to the on-premises version of the product. Customers and partners can also use the capability, enabling further customization.
AppScan enjoys a good reputation for DAST scanning, sharing the same basic technology across the portfolio. The desktop-based AppScan Standard is a customizable offering especially suited for manual assessments. Incremental scanning allows for faster scans, and an “action-based” browser recording technology enables testing of complex workflows and improved insight into single-page applications where not all activity is captured in standard GET/POST operations.
AppScan, while still owned by IBM, was one of the first products to heavily leverage ML techniques for application security tasks, including the provision of Intelligent Finding Analytics (IFA), which helps improve accuracy and identify a “best fix” location for vulnerabilities. Under HCL, progress has continued with an effort to apply ML-based analytics to DAST findings generated by the vendor’s cloud customers to significantly improve speed and accuracy.
HCL offers good support for mobile application testing, leveraging its SAST, DAST, SCA and IAST components, as well as behavioral analysis.
Support for DevOps environments is competitive with other vendors and includes integrations into common IDEs and CI/CD toolchain components. Developers can perform scans in a private sandbox, reviewing results before committing code. The tools provide standard explanatory and supportive information, supplemented by optimal fix information and vulnerability grouping provided by IFA. No formal computer-based training or “just in time” training is provided, although such support — increasingly a staple of AST tools — is reportedly on the roadmap.
Any change in ownership is potentially disruptive, although the two-year transfer period from IBM to HCL appears to have eased the transition. However, HCL is at a disadvantage in acquiring new customers, given its current lack of brand awareness in the market. Thus, while the vendor offers a similar product vision as other portfolio vendors, it is ranked lower for its ability to execute.
The AppScan portfolio is robust, but complex, with inconsistent features across platforms. For example, Open Source Analysis is only available in the cloud, and mobile testing can span environments. HCL is taking steps — such as with the Bring Your Own Language facility — to rationalize features across the full range of the portfolio, although the result is not yet complete.
AppScan’s IAST capability is tightly integrated with the DAST offering and cannot be purchased independently. A passive IAST approach, increasingly in favor among DevOps teams, was released on 25 March 2020, after the deadline for this evaluation, and therefore is not considered.
The overall pricing model for HCL’s portfolio is complex. First, cloud offerings are based on a subscription model, but on-premises products are only available with traditional perpetual licenses (including a term-based variation). That disparity complicates purchasing for organizations wishing to pursue a hybrid deployment model. Other pricing metrics vary and are based on the number of applications, users (with varied types of user licenses on offer) and per-scan pricing. Buyers must evaluate multiple options to obtain optimal pricing terms.
Based in the U.K., Micro Focus is a global provider of AST products and services under the well-known Fortify brand. Micro Focus has a broad global sales reach, with a strong presence in the North American, EMEA and Central American markets. Fortify offers Static Code Analyzer (SAST), WebInspect (DAST and IAST), Software Security Center (its console), Application Defender (monitoring and RASP) and Fortify Audit Workbench (AWB). Fortify provides its AST as a product, as well as in the cloud, with Fortify on Demand (FoD). The hybrid model allows the FoD tools to scan code and integrate results with the Fortify reporting tool and the developer environment.
During the past year, Fortify has expanded language support (26 app stacks for SAST) and integration with common CI/CD tools like Jenkins/Jira. Micro Focus has also expanded its partnership with Sonatype to a full OEM agreement and integrated Sonatype’s software composition analysis directly into FoD, although it still supports Black Duck and WhiteSource. Fortify’s AST offerings should be considered by enterprises looking for a comprehensive set of AST capabilities — either as a product or service, or combined — with enterprise-class reporting and integration capabilities.
Micro Focus has put investment into a more DevSecOps developer-centric model. This includes moving DAST more fully into the hands of development by providing coordination between FoD scans and code in the IDE. It is focusing on eliminating impediments to fully automated workflows with features like macro autogeneration and API scanning improvements. Fortify supports cloud-friendly deployment models and simplified orchestration, and is adding support for containerization. To facilitate a faster, cleaner DevSecOps model, Fortify has added RESTful APIs and a command line interface for both static and dynamic testing.
Fortify is an excellent fit for large enterprises with multiple, complex projects and a variety of coding styles and experience levels. It has shown flexibility and strength in dealing with issues such as legacy code replacement and modern development styles like microservices, and has experience in M&A activity.
Swagger-supported RESTful APIs and the integrated Fortify Ecosystem were built to support modern DevSecOps organizations, a marked improvement over older versions of the product suite. Open-source integrations, both in FoD and with SSC, Jira and Octane automation, are also important steps in this direction.
Fortify offers mobile testing directly with FoD, as well as through its Static Code Analyzer and WebInspect tools in support of mobile application scanning.
While no one has completely solved the issue of false positives, Micro Focus has made significant improvements in simplifying and reducing FPs. Micro Focus has extended its Fortify Audit Assistant feature to allow teams the flexibility to either manually review artificial intelligence (AI) predictions on issues, or to opt in to “automatic predictions,” which allow for a completely in-band automated triaging of findings.
While Fortify has begun to show the results of Micro Focus’ investment, overall market awareness has not yet caught up. Gartner client inquiry calls do not yet reflect the new functionality and are still dominated by discussions about the older versions of the product suite.
Fortify is known for its depth and accuracy of results, which meets the needs of enterprise customers that then leverage contextual-based analysis. Less mature organizations looking for incremental improvements over time may experience challenges with the complexity and volume of unfiltered results.
While Fortify offers highly flexible license and pricing models, during inquiries clients report that the pricing remains complicated and the on-premises operational complexity is high.
Automated scans are faster than they were in older versions of the product, and a good fit for DevSecOps, but optional human-audited scan results in FoD are out of band and can take significantly longer. Fortify balances this challenge to human auditing by providing customers with the option to enable in-band, AI-driven audits without human intervention, both on-premises and with FoD.
Founded in 2009 in Buenos Aires, Argentina, Onapsis is a U.S.-based company with centers in the U.S., Germany and Argentina. In June 2019, it acquired Virtual Forge, a prominent player in the SAP code security space. Onapsis has established or strengthened relationships with leading strategic system integrators, managed security service providers (MSSPs), technology alliance partners and value-added resellers (VARs), such as Accenture, Deloitte, Optiv, deepwatch and others, to offer services to protect organizations using SAP and Oracle.
The business-critical application space has traditionally used code reviews by developers and security personnel, and has relied on existing defense in-depth measures to protect these applications. Onapsis offers standard AST tools (SAST/DAST) and makes it easy for ERP developers to integrate them into their existing processes. Onapsis is strictly a business-application-based tool supporting the common languages used in development (e.g., ABAP, ABAP Objects, Business Server Pages [BSP], Business Warehouse Objects, SAPUI5, XSJS and SQLScript). The vendor is a good fit for companies developing tools (in-house or as a third party) that want to adopt a more repeatable DevSecOps process.
Onapsis supports the DevSecOps cycle with plug-ins and services that fit into existing business-critical developer workflows.
The vendor has good support for SAP and Oracle applications as they move to the cloud, such as S/4HANA, C/4HANA, Workday, Salesforce, SuccessFactors, Ariba and others.
Its data flow and tracking options are especially useful for monitoring compliance risks in applications in financial services, human capital management (HCM), supply chain management (SCM) and other applications.
Onapsis supports a number of complex programming languages and offers a good web-based interface for scanning and managing results across multiple projects that fits well with other ERP development tools.
The vendor also supports SAP HANA Studio, Eclipse, SAP Web IDE and SAP ABAP development workbench, with similar workflows and processes across the different development IDEs.
Although Onapsis enjoys extensive cooperation with SAP and Oracle, there is some risk as both are still competitors in this space with their own products (e.g., SAP’s Code Vulnerability Analyzer).
With a focus on applications supported by SAP and Oracle, overall programming language support is limited compared to other tools in the AST space, but is focused on common business-critical application developers.
Onapsis has IDE plug-ins for its toolsets, but the experience varies significantly between them. Scan results are available through PDF reports within the developer environment, or via a web interface. Onapsis also offers full integration with SAP’s cloud-based Web IDE, which provides a fully integrated developer experience; the same is true for ABAP.
DAST support is limited to workflow and call graph analysis.
Traditionally known for its DAST solutions, including InsightAppSec, Rapid7 has begun to position other products in its portfolio as application security solutions. This includes the vulnerability assessment solution InsightVM, which provides some software composition analysis as part of its container assessment capabilities. The vendor’s tCell product — a RASP offering acquired in late 2018 — provides insights into code execution and vulnerabilities, generally postdeployment. As a RASP offering, tCell relies on the same basic technology as many IAST testing tools, but is designed as an application protection solution, not a testing tool.
Rapid7 retains its reputation for having a strong DAST offering, and is especially suited for use cases where the combination of DAST and vulnerability assessment is valued — such as testing the security of web-based applications, especially where organizations face strong compliance requirements. The addition of tCell provides organizations with an opportunity to work with RASP-based app protection and the insights it can provide. Improvements over the past year include enhancements to authentication support, with the addition of multiple authentication techniques enabling improved application scanning. The vendor has also added support for multiple application frameworks (such as Angular, React and others), improving its ability to test single-page applications, which are increasingly common. Integration is provided with Jira and a variety of CI/CD tools (with additional support available via API), but most in-depth analysis of results takes place in the product’s dashboard. (A Chrome browser extension enables developers and others to interact regarding results without directly accessing the dashboard.)
Rapid7 is based in the U.S., with sales and support offices primarily located in North America and EMEA, and with some presence in the Asia/Pacific region. InsightAppSec is offered as a cloud-based service, with options for on-premises deployments and as a managed service.
Rapid7 continues to enjoy a strong reputation for its DAST tool, especially in support of in-depth custom manual assessments. Tests can be performed interactively, allowing for the manipulation of parameters, and aiding troubleshooting and the validation of fixes.
Rapid7’s Universal Translator technology analyzes requests to identify various formats, parses them and normalizes the data to a standard form to create similar attacks across tested formats. For formats that cannot be crawled, such as JSON and REST web services, this is accomplished via user-recorded traffic.
Expanded support for application frameworks makes Rapid7 an attractive choice for testing modern, single-page applications.
Rapid7 continues to enjoy good marks from most users for the product’s ease of use, dashboard and reporting. For example, developers are provided information such as recommendations, description and error information, and attack replay functionality, which enables them to understand, patch and retest vulnerabilities.
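The normalization step behind a capability like Universal Translator can be sketched as follows. This is a simplified illustration under stated assumptions (two body formats, one level of nesting), not Rapid7’s implementation:

```python
import json
from urllib.parse import parse_qsl

def normalize_body(content_type: str, body: str) -> dict[str, str]:
    """Normalize a request body into a flat parameter map, so the
    same attack payloads can be applied regardless of wire format."""
    if "application/json" in content_type:
        data = json.loads(body)
        flat = {}
        # Flatten one level of nesting for illustration.
        for key, value in data.items():
            if isinstance(value, dict):
                for sub, subval in value.items():
                    flat[f"{key}.{sub}"] = str(subval)
            else:
                flat[key] = str(value)
        return flat
    if "application/x-www-form-urlencoded" in content_type:
        return dict(parse_qsl(body))
    raise ValueError(f"unsupported content type: {content_type}")

def inject_payload(params: dict[str, str], target: str, payload: str) -> dict[str, str]:
    """Substitute an attack payload into one normalized parameter,
    leaving the original request untouched."""
    mutated = dict(params)
    mutated[target] = payload
    return mutated
```

Once every format is reduced to the same key-value shape, a single library of attack payloads can be replayed against JSON APIs, form posts and other formats alike, which is the economy such a translator is designed to achieve.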
Rapid7’s inclusion of vulnerability assessment and RASP in its application security portfolio expands the scope of its offering beyond DAST, but the additional tools don’t offer feature parity with competitive solutions. For example, while InsightVM and tCell help identify vulnerabilities in built applications and containers, they do not warn of restrictive open-source licenses — a standard capability for SCA tools. (Rapid7 announced a partnership with SCA specialist Snyk as this Magic Quadrant was being finalized. Any resulting improvements in SCA capabilities will be reflected in future evaluations, as those changes materialize.)
While test results are highly detailed, the tools lack direct integration with IDEs, prompting developers to switch to the InsightAppSec dashboard (or browser extension) to review data and supporting information. It is possible to incorporate vulnerability data into a Jira ticket, which would assist in providing information to a developer more directly.
While individual Rapid7 products are built on a common platform, they lack the correlation of results across tools that other vendors provide, such as between IAST and SCA. However, correlation is provided between DAST and a selection of other vendors’ SAST tools. (Rapid7 lacks a SAST offering of its own.)
Rapid7 does not support distributed scanning.
Based in the U.S., Synopsys is a global company with offerings in the software and semiconductor areas. While Synopsys has been executing a strategy to expand its AST portfolio during the past five years, 2019 was primarily spent on integrating the products together technologically and consolidating their offerings. This has been successful, and the market now sees these products as a well-integrated whole with significant movement from single point solutions to multiproduct purchases.
The Polaris Software Integrity Platform has become the central management tool for all Synopsys AST products (except its DAST managed service, which is still stand-alone). Code Sight, the vendor’s IDE plug-in management tool, has been integrated into the product suite as well, with the goal of providing a complete in-editor experience for developer-based security testing. While primarily aimed at DevSecOps organizations, this developer-centric model is recommended by Gartner as a best practice, and all developers, regardless of methodology, benefit from that approach. Synopsys should be considered by organizations that want a complete AST offering with variety in AST technologies, assessment depth, deployment options and licensing.
In January 2020, Synopsys bought DAST and API security provider Tinfoil Security and is adding it to its suite of products; however, this acquisition occurred after the cut-off date for this Magic Quadrant and our analysis does not take it into account.
The Synopsys suite is a relatively easy entry point for organizations that may be just starting to take a developer-centric approach to security, as well as more advanced organizations that find integrating and managing a set of point solutions to be too time-consuming.
The Code Sight plug-in is a good fit for DevOps shops. It has strong integration with IDEs to provide feedback early in the development phase. The Code Sight plug-in leverages the IDE to act as an interface to all tools on Polaris, with an emphasis on remediation. This fits well with most development teams, regardless of maturity.
Support for CI/CD tools (for example, Jenkins and Jira reporting) has increased significantly in 2019, with support in Coverity, Seeker and Black Duck being used as part of the overall build/test/deploy cycle.
Seeker continues to be one of the most broadly adopted IAST solutions, with good SDLC integration. Synopsys has an agent-only IAST for Seeker that does not require an inducer. This supports the passive testing model offered by some IAST competitors.
Seeker compliance reports now offer GDPR and Common Attack Pattern Enumeration and Classification vulnerability tracking, in addition to its PCI DSS, OWASP and CWE tracking.
Gartner client feedback indicates that vulnerability explanations and fix recommendations are limited, compared with those of some competitors.
Gartner clients from small and midsize businesses have expressed that, despite interest in the vendor’s solutions, the price is often outside their budgets, especially for nascent programs, leading them to seek less costly alternatives. Synopsys’ sales process is also complicated, and clients have reported trouble navigating it.
Synopsys offers DAST only as a managed service. Synopsys AST managed services are orchestrated through a cloud-based portal that is separate from Polaris; however, managed service testing results can be viewed through the Polaris reporting tool. Emphasis for dynamic testing is concentrated on the Seeker IAST product line.
While Seeker has reports for various regulatory compliance regimes, compliance is often much more complicated than a set of scans. Users should be aware that they are responsible for the full scope of audit and regulatory compliance measures.
Headquartered in the U.S., Veracode is an AST provider with a strong presence in the North American market, as well as in the European market. The Veracode offering includes a family of SAST, DAST, IAST and SCA services surrounded by a policy management and analytics hub, as well as e-learning modules. Greenlight is a SAST plug-in for the Eclipse, IntelliJ and Visual Studio IDEs. Veracode also provides mobile AST and an application attestation program called Veracode Verified, which enables companies to provide a third-party attestation of their products’ security level to a prospective buyer.
During the past 12 months, Veracode introduced support for modern application deployments in the cloud and containers. Also, it merged its original SCA offering and the recently acquired SourceClear SCA product into a new SCA offering that can scan both locally and in the cloud. Veracode also further extended its language coverage and introduced continuous alerting on new vulnerabilities. On 1 October 2019, Veracode released its IAST, which can run in the build phase and the QA test environment.
Veracode will meet the requirements of organizations looking for a comprehensive portfolio of AST services along with tailored AST advice, broad language coverage, and ease of implementation and use.
Gartner clients rate highly the quick setup, ease of use and scalability of the solution, as well as the vendor’s willingness to work with customer requirements.
Veracode’s services include tailored vulnerability and remediation advice, and reviews of the mitigations where needed, which can be useful to reduce remediation time and in organizations where developers are not application security experts. Veracode results come with “fix first” recommendations that consider how easy an issue is to fix and how much impact it has, and then recommend the best location to fix the issue.
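The “fix first” idea described above — weighing how easy an issue is to fix against its impact — can be illustrated with a minimal sketch. The findings, scores and weighting below are hypothetical and are not Veracode’s actual algorithm:

```python
# Illustrative findings with hypothetical impact (1-10) and effort (1-10) scores.
findings = [
    {"issue": "XSS in search page", "impact": 7, "effort": 2},
    {"issue": "SQL injection in login", "impact": 9, "effort": 5},
    {"issue": "Verbose error messages", "impact": 3, "effort": 1},
]

def fix_first_order(findings):
    """Rank findings by impact per unit of fix effort, highest first.

    This is one simple heuristic; commercial tools use richer signals
    (exploitability, location, data sensitivity, etc.).
    """
    return sorted(findings, key=lambda f: f["impact"] / f["effort"], reverse=True)

for f in fix_first_order(findings):
    print(f["issue"])
```

The design choice worth noting is the ratio rather than raw impact: cheap, high-impact fixes surface first, which is what makes such guidance actionable for developers who are not security experts.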
Veracode feeds the intelligence collected from its cloud-based scans back to its engine and database. This is used to improve accuracy through SaaS-based learning, to speed SCA updates, and to provide advice for rapid response to known vulnerabilities.
Veracode’s SCA offering allows both agent-based local and cloud-based scanning, and provides a unique database with 50% more vulnerabilities than the National Vulnerability Database. Veracode can also scan and test third-party applications or SaaS offerings with the provider’s consent, as well as COTS applications such as those provided by independent software vendors. To help with the focus on exposed applications, Veracode’s SCA offering can deprioritize vulnerabilities by checking whether they are in the execution path of the application.
Veracode does not offer AST tools that can be installed on-premises, only AST as a service. It provides Internal Scanning Management that can be located on the client’s network to support the testing of internal applications, with scanning configured and controlled via the cloud service.
Veracode does not offer dynamic scanning of APIs, a capability increasingly available from competitors, relying instead on static and interactive AST. Veracode also does not provide API discovery.
Some Gartner clients have cited first line of support from the vendor as an item to be improved. Additionally, even though Veracode has a worldwide presence, it only provides support in English.
WhiteHat Security’s Sentinel platform continues to stand out in use cases where DAST is a requirement, including web-based applications and APIs, both in production and preproduction. In addition, partly by virtue of a partnership with NowSecure, it ranks well for mobile AST, where it combines behavioral testing with SAST and DAST scans of popular mobile languages such as Java, Objective-C and Swift. Software composition analysis is also provided and is now available as a stand-alone product offering. Customers continue to compliment the vendor on human and ML-based augmentations to testing, including validation of results and optional penetration testing and business logic assessments. WhiteHat continues to be unique with its Directed Remediation capabilities, where fixes developed by the WhiteHat Threat Research Center are automatically suggested to developers for selected findings. It was the first to offer chat-based assistance to developers for help in understanding specific vulnerabilities, although other vendors have also begun to provide this service. WhiteHat’s offerings are service-based, although the vendor offers a virtual appliance for local scanning, with results sent to the cloud for verification, correlation and inclusion in dashboards and reporting.
WhiteHat was acquired by NTT Security in July 2019 and operates as an independent subsidiary. Sales and support capabilities have traditionally focused heavily on North America. The vendor has also maintained a limited presence in Europe and the Asia/Pacific region. The NTT acquisition opens the possibility of broader sales and support channels.
WhiteHat has a strong reputation among Gartner clients as a DAST-as-a-service provider and should be considered by buyers seeking an AST SaaS platform.
WhiteHat continues to execute toward its strategy of addressing the requirements of DevOps organizations with differentiated SAST, SCA and DAST products for the development, build and deployment phases of the life cycle. Generally, options earlier in the process — such as SAST and SCA for developers — are optimized for fast return of results by limiting the scope of testing. Later phases provide more in-depth checks and add options for human verification and testing. The vendor continues to expand ML-based automated verification to help speed the process, and to better align to the needs of rapidly iterating development teams.
WhiteHat’s customers continue to value the vendor’s strong support services. As noted, these include vulnerability verification, manual business logic assessments/penetration testing and the ability to leverage its Threat Research Center engineers to discuss findings.
WhiteHat SAST remediation capabilities extend beyond identifying the optimal point of remediation: for a portion of Java and C# findings, custom code patches are automatically provided that can be copied and pasted into the code to fix the identified vulnerabilities.
WhiteHat Sentinel Dynamic provides continuous, production-safe DAST of production websites with automatic detection and assessment, and alerts for newly discovered vulnerabilities.
DAST results can be fed to a variety of web application firewall solutions, enabling the creation of rules to mitigate vulnerabilities until they can be remediated in code.
WhiteHat does not offer an IAST solution. It does use SAST findings to inform DAST scans for improved accuracy.
Customer feedback indicates some dissatisfaction with the products’ user interfaces. IDE plug-ins, for example, are functional, but supplementary and explanatory information is often poorly formatted. Findings can be fed to defect tracking systems, such as Jira.
WhiteHat’s SAST offering has limited language support, compared with competitive offerings.
WhiteHat does not offer AST as a tool, only as a cloud service. However, it can provide an on-premises virtual appliance that performs scans at a customer’s site, feeding results to the cloud for verification, correlation and inclusion in dashboards for reporting and analysis.
Vendors Added and Dropped
We review and adjust our inclusion criteria for Magic Quadrants as markets change. As a result of these adjustments, the mix of vendors in any Magic Quadrant may change over time. A vendor’s appearance in a Magic Quadrant one year and not the next does not necessarily indicate that we have changed our opinion of that vendor. It may be a reflection of a change in the market and, therefore, changed evaluation criteria, or of a change of focus by that vendor.
Onapsis, HCL Software and GitLab were added to this Magic Quadrant.
Acunetix, IBM and Qualys were dropped from this Magic Quadrant based on our inclusion and exclusion criteria.
Inclusion and Exclusion Criteria
For Gartner clients, Magic Quadrant and Critical Capabilities research identifies and then analyzes the most relevant providers and their products in a market. Gartner uses, by default, an upper limit of 20 vendors to support the identification of the most relevant providers in a market. On some specific occasions, the upper limit may be extended where the intended research value to our clients might otherwise be diminished. The inclusion criteria represent the specific attributes that analysts believe are necessary for inclusion in this research.
To qualify for inclusion, vendors needed to meet the following criteria as of 1 November 2019:
Market participation: Provide a dedicated AST solution (product, service or both) that covers at least two of the following four AST capabilities: SCA, SAST, DAST or IAST, as described in the Market Definition/Description section.
During the past four quarters (4Q18 and the first three quarters of 2019):
Must have generated at least $22 million of AST revenue, including $17 million in North America and/or Europe, the Middle East and Africa (excluding professional services revenue)
Technical capabilities relevant to Gartner clients:
Provide a repeatable, consistent subscription-based engagement model (if the vendor provides AST as a service) using mainly its own testing tools to enable its testing capabilities. Specifically, technical capabilities must include:
An offering primarily focused on security tests to identify software security vulnerabilities, with templates to report against OWASP top 10 vulnerabilities
An offering with the ability to integrate via plug-in, API or command line integration into CI/CD tools (such as Jenkins) and bug-tracking tools (such as Jira)
For SAST products and/or services:
Provide a direct plug-in for Eclipse or Visual Studio IDE at a minimum
For DAST products and/or services:
Provide a stand-alone AST solution with dedicated web-application-layer dynamic scanning capabilities
Support for web scripting and automation tools such as Selenium
For IAST products and/or services:
Support for Java and .NET applications
For SCA products and/or services:
Ability to scan for commonly known malware
Ability to scan for out-of-date vulnerable libraries
Ability to integrate with application registries and container registries
Ability to scan open-source OS components for known vulnerabilities and to map to common vulnerabilities and exposures (CVEs)
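The CVE-mapping requirement above can be illustrated with a minimal sketch. Real SCA tools match components against curated databases such as the NVD; the component names, versions and advisory data below are entirely hypothetical:

```python
# Hypothetical advisory database: (component, version) -> known CVE identifiers.
KNOWN_VULNS = {
    ("examplelib", "1.2.0"): ["CVE-2019-0001"],
    ("parserkit", "2.0.1"): ["CVE-2019-0002", "CVE-2019-0003"],
}

def scan_dependencies(pinned_deps):
    """Return a CVE finding for each pinned (name, version) dependency
    that appears in the advisory database."""
    findings = []
    for name, version in pinned_deps:
        for cve in KNOWN_VULNS.get((name.lower(), version), []):
            findings.append({"component": name, "version": version, "cve": cve})
    return findings

# An application's pinned dependency list, as a build tool might report it.
deps = [("examplelib", "1.2.0"), ("safelib", "3.1.4")]
print(scan_dependencies(deps))
```

The exact-version lookup keeps the sketch simple; production SCA tools additionally resolve version ranges, transitive dependencies and container image layers.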
Business capabilities relevant to Gartner clients: Have phone, email and/or web customer support. They must offer contract, console/portal, technical documentation and customer support in English (either as the product’s/service’s default language or as an optional localization).
We will not include vendors in this research that:
Focus only on mobile platforms or a single platform/language
Provide services, but not on a repeatable, predefined subscription basis — for example, providers of custom consulting application testing services, contract pen testing or professional services
Provide network vulnerability scanning but do not offer a stand-alone AST capability, or offer only limited web application layer dynamic scanning
Offer only protocol testing and fuzzing solutions, debuggers, memory analyzers, and/or attack generators
Primarily focus on runtime protection
Focus on application code quality and integrity testing solutions or basic security testing solutions, which have limited AST capabilities
Open-Source Software Considerations
Magic Quadrants are used to evaluate the commercial offerings, sales execution, vision, marketing and support of products in the market. This excludes the evaluation of open-source software (OSS) or vendor products that rely heavily on or bundle open-source tools.
Several vendors that are not evaluated in this Magic Quadrant are present in the AST space or in markets that overlap with AST. These vendors do not currently meet our inclusion criteria; however, they either provide AST features or address specific AST requirements and use cases.
These providers range from consultancies and professional services to related solution categories, including:
Business-critical application security
Application security orchestration and correlation (ASOC)
Application security requirements and threat management (ASRTM)
Crowdsourced security testing platforms (CSSTPs)
Container security solutions
Ability to Execute
Product or Service: This criterion assesses the core goods and services that compete in and/or serve the defined market. This includes current product and service capabilities, quality, feature sets, skills, etc. These can be offered natively or through OEM agreements/partnerships, as defined in the Market Definition/Description section and detailed in the subcriteria. This criterion specifically evaluates current core AST product/service capabilities, quality and accuracy, and feature sets. Also, the efficacy and quality of ancillary capabilities and integration into the SDLC are valued.
Overall Viability: Viability includes an assessment of the organization’s overall financial health, as well as the financial and practical success of the business unit. It assesses the likelihood of the organization to continue to offer and invest in the product, as well as the product’s position in the current portfolio. Specifically, we look at the vendor’s focus on AST, its growth and estimated AST market share, and its customer base.
Sales Execution/Pricing: This criterion looks at the organization’s capabilities in all presales activities and the structure that supports them. This includes deal management, pricing and negotiation, presales support and the overall effectiveness of the sales channel.
We are looking at capabilities such as how the vendor supports proofs of concept or pricing options for both simple and complex use cases. The evaluation also includes feedback received from clients on experiences with vendor sales support, pricing and negotiations.
Market Responsiveness/Record: This criterion assesses the ability to respond, change direction, be flexible and achieve competitive success as opportunities develop, competitors act, customer needs evolve and market dynamics change. It also considers the vendor’s history of responsiveness to changing market demands. We evaluate how the vendor’s broader application security capabilities match with enterprises’ functional requirements, and the vendor’s track record in delivering innovative features when the market demands them. We also account for vendors’ appeal with security technologies complementary to AST.
Marketing Execution: This criterion assesses the clarity, quality, creativity and efficacy of programs designed to deliver the organization’s message in order to influence the market, promote the brand, increase awareness of products and establish a positive identification in the minds of customers. This mind share can be driven by a combination of publicity, promotional activity, thought leadership, social media, referrals and sales activities. We evaluate elements such as the vendor’s reputation and credibility among security specialists.
Customer Experience: We look at the products and services and/or programs that enable customers to achieve anticipated results. Specifically, this includes quality supplier/buyer interactions, technical support or account support. This may also include ancillary tools, customer support programs, availability of user groups, service-level agreements, etc.
Operations: This criterion assesses the ability of the organization to meet goals and commitments. Factors include quality of the organizational structure, skills, experiences, programs, systems and other vehicles that enable the organization to operate effectively and efficiently.
Table 1: Ability to Execute Evaluation Criteria — Product or Service, Overall Viability, Sales Execution/Pricing, Market Responsiveness/Record, Marketing Execution, Customer Experience and Operations
Source: Gartner (April 2020)
Completeness of Vision
Market Understanding: This refers to the ability to understand customer needs and translate them into products and services. Vendors that show a clear vision of their market listen to and understand customer demands, and can shape or enhance market changes with their added vision. It includes the vendor’s ability to understand buyers’ needs and translate them into effective and usable AST (SAST, DAST, IAST and SCA) products and services.
In addition to examining a vendor’s key competencies in this market, we assess its awareness of the importance of:
Integration with the SDLC (including emerging and more flexible approaches)
Assessment of third-party and open-source components
The tool’s ease of use and integration with the enterprise infrastructure and processes
How this awareness translates into its AST products and services
Marketing Strategy: We look for clear, differentiated messaging consistently communicated internally, and externalized through social media, advertising, customer programs and positioning statements. The visibility and credibility of the vendor’s meeting the needs of an evolving market is also a consideration.
Sales Strategy: We look for a sound strategy for selling that uses the appropriate networks, including: direct and indirect sales, marketing, service, and communication. In addition, we look for partners that extend the scope and depth of market reach, expertise, technologies, services, and the vendor’s customer base. Specifically, we look at how a vendor reaches the market with its solution and sells it — for example, leveraging partners and resellers, security reports, or web channels.
Offering (Product) Strategy: We look for an approach to product development and delivery that emphasizes market differentiation, functionality, methodology and features as they map to current and future requirements. Specifically, we are looking at the product and service AST offering, and how its extent and modularity can meet different customer requirements and testing program maturity levels. We evaluate the vendor’s development and delivery of a solution that is differentiated from the competition in a way that uniquely addresses critical customer requirements. We also look at how offerings can integrate relevant non-AST functionality that can enhance the security of applications overall.
Business Model: This criterion assesses the design, logic and execution of the organization’s business proposition to achieve continued success.
Vertical/Industry Strategy: We assess the strategy to direct resources (sales, product, development), skills and products to meet the specific needs of individual market segments, including verticals.
Innovation: We look for direct, related, complementary and synergistic layouts of resources, expertise or capital for investment, consolidation, defensive or preemptive purposes. Specifically, we assess how vendors are innovating to address evolving client requirements to support testing for DevOps initiatives as well as API security testing, serverless and microservices architecture. We also evaluate developing methods to make security testing more accurate. We value innovations in IAST, but also in areas such as containers, training and integration with the developers’ existing software development methodology.
Geographic Strategy: This criterion evaluates the vendor’s strategy to direct resources, skills and offerings to meet the specific needs of geographies outside the “home” or native geography, either directly or through partners, channels and subsidiaries, as appropriate for that geography and market. We evaluate the worldwide availability and support for the offering, including local language support for tools, consoles and customer service.
Table 2: Completeness of Vision Evaluation Criteria — Market Understanding, Marketing Strategy, Sales Strategy, Offering (Product) Strategy, Business Model, Vertical/Industry Strategy, Innovation and Geographic Strategy
Source: Gartner (April 2020)
Leaders in the AST market demonstrate breadth and depth of AST products and services. Leaders typically provide mature, reputable SAST and DAST, and demonstrate vision through development of other emerging AST techniques, such as container support, in their solutions. Leaders also should provide organizations with AST-as-a-service delivery models for testing, or with a choice of a tool and AST as a service, as well as an enterprise-class reporting framework supporting multiple users, groups and roles, ideally via a single management console. Leaders should be able to support the testing of mobile applications and should exhibit strong execution in the core AST technologies they offer. While they may excel in specific AST categories, Leaders should offer a complete platform with strong market presence, growth and client retention.
Challengers in this Magic Quadrant are vendors that have executed consistently, often with strength in a particular technology (for example, SAST, DAST or IAST) or by focusing on a single delivery model (for example, on AST as a service only). In addition, they have demonstrated substantial competitive capabilities against the Leaders in their particular focus area, and have demonstrated momentum in their customer base in terms of overall size and growth.
Visionaries in this Magic Quadrant are vendors in the AST market with a strong vision that addresses its evolving needs. They include vendors that provide innovative capabilities to accommodate DevOps, integrate into the SDLC or identify vulnerabilities. Visionaries may not execute as consistently as Leaders or Challengers.
Niche Players offer viable, dependable solutions that meet the needs of specific buyers. Niche Players fare well with buyers looking for a “best of breed” or “best fit” solution to address a particular business or technical use case that matches the vendor’s focus. Niche Players may address subsets of the overall market. Enterprises tend to pick Niche Players when the focus is on a few important functions or on specific vendor expertise, or when they have an established relationship with the vendor. Niche Players typically focus on a specific type of AST technology or delivery model, or a specific geographic region.
The need for application security is ubiquitous across small, midsize and large organizations. With new data privacy requirements, the consequences of a security breach are no longer limited to reputational damage, but also can involve substantial fines and penalties. Vendors have been offering core AST technologies and additional support offerings for well over a decade, and they have matured in speed and efficacy, but common code problems still remain. Most solutions in the market provide some form of code scanning capability, security training services, program development services and remediation support in a growing variety of ways to support developers and security professionals. DevSecOps, agile, and a general demand for greater automation and speed have led to the maturing of the market and the evolution of both full platform solutions offering a wide variety of commonly used testing tools and specialty solutions that offer a deeper dive into a particular technology or combine security testing with other features like code quality.
In general, better accuracy, faster results, easier integrations and enhanced remediation guidance are top of mind for vendors in this market. It has become simpler for end users to find vulnerabilities using AST tools integrated into their workflow or development environment. Solutions that make it easy for developers to be successful at security mesh well with the DevSecOps philosophy (see “Integrating Security Into the DevSecOps Toolchain”) while freeing up some security resources otherwise dedicated to running code scans. In general, anything developers have to remember to do will be forgotten, whereas checks integrated into their existing workflow come naturally. However, Gartner client inquiry feedback still indicates a need to improve remediation guidance, increase testing speed and accuracy, and simplify the operation of AST solutions to support clients adopting, integrating and scaling AST programs.
These challenges are not solved solely by the right technology; they often require changes in organizational culture, better collaboration and sound practices. Still, incompatible security technologies can impede progress, in which case development and security teams risk being driven further apart rather than becoming better collaborators. To cope with these challenges, organizations should:
Require solutions that expose and integrate automated functionality through plug-ins (including IDE, build, repository, QA and preproduction) into the SDLC. This will enable developers to fix issues earlier in the process, and it will improve coordination between development and security.
Favor vendors that specialize in comprehensive testing of APIs, applications deployed in containers and other aspects of modern development (e.g., single-page applications, microservices, serverless, edge computing, etc.) to support those use cases. Clients increasingly are seeking out point solutions with a specific focus on these technologies, particularly with respect to testing their APIs.
Require solutions that provide SCA, which is a critical or mandatory feature of an overall approach to security testing of applications, because open-source and third-party components are proliferating in applications that enterprises build. Vendors in the industry are introducing their own SCA solutions, as well as partnering with specialized SCA vendors. Gartner clients should pay special attention to those SCA solutions that offer OSS governance capabilities to enable the organization to proactively enforce its policy with respect to OSS when components are being onboarded or pulled in from external repositories and package managers. This should be further augmented with production time SCA, such as that available from container security products to alert to new vulnerabilities as they become known.
Favor a risk-based approach to vulnerability management rather than a “fix all the bugs” mentality. Too often, the perfect becomes the enemy of the good, wasting time and resources and demotivating developers and teams. There is often a trade-off to be made between speed and depth, so buyers should ensure that any resulting diminishment in the accuracy of results that often accompanies lower turnaround times remains acceptable.
Press vendors for specifics on their machine learning roadmaps with respect to false positive reduction, and on how ML techniques will be employed to enhance their solutions. Buyers should look past ML hype and marketing to better understand specifics on how the proposed ML implementations will meaningfully improve areas such as enhancing accuracy, automating remediation efforts or achieving better testing coverage. Gartner clients should weigh vendor plans with respect to ML-based improvements, particularly when considering longer-term engagements, and consider the applicability of the proposed approaches. Artificial intelligence (AI) and ML are overused marketing terms, making it difficult to distinguish between hyperbole and genuine value, and should be evaluated closely.
Current Gartner forecasts place the size of the AST market (sales of SAST, DAST and IAST tools) at $1.33 billion by the end of 2020. Through 2022, the AST market is projected to have a 10% compound annual growth rate (CAGR), indicating that the market is growing slightly faster than the overall security market, which is projected to grow at a CAGR of 9% over the same period. Initial examination of updated vendor results suggests the market is growing at a faster pace than originally projected. This is believed to be a function of both increasing buyer demand for core AST tools, and the growing importance of associated solutions not currently included in the base forecast (such as SCA and mobile AST). Analysis of data continues, and any revisions to the forecast will be published in Gartner’s quarterly Information Security Market Forecast.
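The growth figures above can be sanity-checked with simple compounding. The 2021 and 2022 values below are straightforward extrapolations from the stated $1.33 billion base and 10% CAGR, not published Gartner forecasts:

```python
base_2020 = 1.33  # AST market size in $B at end of 2020, from the forecast above
cagr = 0.10       # stated compound annual growth rate through 2022

def project(base, rate, years):
    """Compound a base value forward by `years` at annual growth `rate`."""
    return base * (1 + rate) ** years

for year in (2021, 2022):
    size = project(base_2020, cagr, year - 2020)
    print(f"{year}: ${size:.2f}B")  # 2021: $1.46B, 2022: $1.61B
```

At a 10% CAGR the market would reach roughly $1.61 billion by the end of 2022, versus about $1.58 billion had it grown at the overall security market's 9% rate.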
2019 continued to be a busy year of buyouts and mergers in the AST market. In June 2019, HCL Technologies completed its acquisition of IBM’s AppScan product suite as part of its $1.8 billion deal for a variety of IBM products. Also, in July 2019, NTT Security closed its buyout of WhiteHat Security. NTT is keeping the WhiteHat brand distinct from NTT Security, but this does significantly expand WhiteHat’s global coverage and partner network. Rapid7 made two purchases, acquiring tCell (runtime application self-protection) in late 2018, and NetFort (network monitoring) in mid-2019. In June, Onapsis completed its acquisition of Virtual Forge and has begun integrating its CodeProfiler suite into the Onapsis product line. Late in 2018, Checkmarx purchased Custodela, an Ontario-based provider of software security program development and consulting services focused on DevSecOps. Finally, in January 2020, Synopsys acquired Tinfoil Security and intends to merge its DAST and API testing product suite with its existing enterprise AST platform (all acquisitions after the Magic Quadrant cut-off date are noted in this research, but their capabilities are not included in the vendors’ evaluations).
In addition to this activity, we’ve seen some interesting moves by infrastructure players like Microsoft and VMware to make inroads into secure development. In 2018, Microsoft bought GitHub, arguably the world’s leading development repository. In 2019, GitHub acquired Semmle, a code analytics platform, and became a CVE Numbering Authority. The CVE system provides references for publicly disclosed information about security vulnerabilities and exposures, putting GitHub in a unique position for finding and disclosing code vulnerabilities. Also, on 30 December 2019, VMware announced that it was acquiring Pivotal Software for $2.7 billion (both Pivotal and VMware are part of Dell). This puts VMware in a strong position to manage, among other things, the container and software-defined network security spaces. While it’s still early, Gartner has seen a marked increase in inquiries about container security, so both of these moves are interesting.
The market continues to exhibit signs of increasing consolidation and commoditization, at least with respect to SAST, DAST and SCA for traditional web applications. However, as we can see from the placements in the 2020 AST Magic Quadrant, there continues to be a strong demand for specialty solutions that offer in-depth coverage of specific areas or combine traditional AST with other testing (e.g., code quality, enterprise applications, etc.).
In 2019, the number of Gartner end-user client conversations on DevSecOps and AST increased by 50% over 2018. While most clients do not have a full or even majority DevOps team, many techniques from DevOps are easily adapted to existing coding disciplines. This includes a focus on making security an integral part of the developer work cycle and eliminating “security gates” late in the process. Other trends in 2019 included a rise in interest in container security. While containers continue to be a minor part of the market compared to more traditional applications, inquiry was up 65% over 2018. Similarly, inquiry regarding scanning for known vulnerabilities in open-source code (SCA) rose 20% in 2019.
In general, we have seen the following DevSecOps trends emerging in our client inquiries:
Integration of security and compliance testing seamlessly into DevSecOps, so developers never have to leave their CI or CD toolchain environments
Teams embracing a “developers own their code” philosophy, which extends into security (as well as performance, reliability and code quality)
Scanning for known vulnerabilities and misconfigurations in all open-source and third-party components
An emphasis on removing vulnerabilities with the highest severity and risk, rather than trying to remove all known vulnerabilities in custom code
Giving developers more autonomy to use new types of tools and approaches to minimize friction (such as interactive AST) to replace traditional static and dynamic testing
Scaling their information security teams into DevOps by using a security champion/coach model rather than putting them directly on the teams (which has scalability and cultural issues)
Treating all automation scripts, templates, images and blueprints with the same level of assurance they would apply to any source code
Increased interest in containerization
And we see those trends beginning to be reflected in the toolsets, including:
There is increased availability of SCA tools as part of product offerings across the Magic Quadrant participants.
IDE security plug-ins have not only become the normal expectation for buyers, but increasingly they are expecting the IDE to be the main conduit for reporting, fix suggestions, lessons, gamification and other developer-centric security activity. Anything that requires developers to go “out of band” is generally disfavored.
Fix suggestions are becoming more context-aware, not only with specific instructions, but also with options for involving human review and guidance from tool providers. Tool vendors are providing more options for including some human review of results in addition to ML for the elimination of false positives.
Vendors are starting to deliver options for covering some of the container and microservice attack surfaces, although full container scanning is still a bit off.
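The severity-first triage trend noted above (remove the highest-severity vulnerabilities rather than chase every known issue) is often enforced as a build gate in the CI pipeline. A minimal sketch follows; the finding structure, severity labels and threshold are illustrative assumptions, not any vendor's actual output format:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    rule: str
    severity: str  # e.g. "critical", "high", "medium", "low" (assumed labels)

SEVERITY_RANK = {"critical": 4, "high": 3, "medium": 2, "low": 1}

def gate(findings: list[Finding], fail_at: str = "high") -> bool:
    """Return True if the build should pass: no finding at or above the threshold."""
    threshold = SEVERITY_RANK[fail_at]
    return all(SEVERITY_RANK[f.severity] < threshold for f in findings)

findings = [Finding("sql-injection", "critical"), Finding("weak-hash", "medium")]
print(gate(findings))  # → False: a critical finding blocks the build
```

A gate like this lets lower-severity findings flow into the backlog without stopping the pipeline, which is exactly the friction reduction developers are asking for.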
This year’s Magic Quadrant shows two distinct trends: one broadening, and one deepening. The first trend is a movement toward all-inclusive platforms that do SAST/DAST/IAST/SCA as well as integrated reporting, CI/CD pipeline integration and a robust developer experience in the IDE. While each vendor will have specific strengths and weaknesses in individual tools, the common theme is that they are full, broad-spectrum platforms. The second trend is movement by some vendors to concentrate on doing a few things very well, often combining aspects of deep security testing with other functions such as code quality analysis, business-critical apps or specific types of testing not covered well by the broad-spectrum players. Both trends result in more choices for security leads and heads of development, both of whom can be purchase decision makers.
We have four notable market observations:
Clients with experienced security staff are looking more seriously at using IAST solutions. Gartner saw a 40% increase in inquiry volume around IAST in 2019. For organizations with staff that have previously used SAST/DAST, IAST becomes a viable quick-start alternative, especially if they are making their first AST purchase and the staff are experienced in DevSecOps from previous work. It fits well into the DevSecOps workflow and gives developers the opportunity to mix and correlate aspects of both dynamic testing and static analysis. While this is still a small percentage of the volume of DevSecOps calls, its growth represents an interesting, if minor, trend.
Container/microservice security is beginning to appear as an important trend in AST. In 2019, Gartner saw a 60% increase in the number of clients asking about container security. While this still represents a small portion of our call volume on AST, we feel it’s significant. Vendors are beginning to address container security concerns by repurposing some of their existing product suites (e.g., SCA for scanning OS components, SAST for payload scanning, etc.). These solutions do not yet cover the full, complex attack surface that containers represent.
Human-assisted DevSecOps is being offered by more vendors to reduce false positives and to assist developers in their IDE and developer environments. While ML continues to do the heavy lifting for false positive reduction, AST vendors are increasingly offering the option to have results reviewed by humans who can help remove false positives. While fast DevOps organizations continue to prefer automated, rapid turnaround times, other organizations with less rigid deadlines and less security experience are taking advantage of FP reduction via human review. Similarly, while many organizations are adopting a “developer security coach” model for assisting coders grappling with security tasks, some are opting to use coaches from vendors provided through chat or other dedicated channels. This supports the goal of making security easy for developers to consume and provides rapid response to common questions.
Many clients are still seeking “one-stop shop” vendors that offer multiple technologies as part of a unified platform, a trend we noted in 2019. To support this effort, buyers are prioritizing vendors that provide multiple technologies and deployment options. Feedback from clients suggests that efforts to “glue together” various specialty tools suffer from complexity and reporting problems (i.e., the results of one tool not being consumable by others, resulting in a loss of context). Efforts to correlate these in-house do not yield the same level of rich data and project tracking and reporting as integrated, enterprisewide platform providers. Application vulnerability correlation helps with this.
Product/Service: Core goods and services offered by the vendor for the defined market. This includes current product/service capabilities, quality, feature sets, skills and so on, whether offered natively or through OEM agreements/partnerships as defined in the market definition and detailed in the subcriteria.
Overall Viability: Viability includes an assessment of the overall organization’s financial health, the financial and practical success of the business unit, and the likelihood that the individual business unit will continue investing in the product, will continue offering the product and will advance the state of the art within the organization’s portfolio of products.
Sales Execution/Pricing: The vendor’s capabilities in all presales activities and the structure that supports them. This includes deal management, pricing and negotiation, presales support, and the overall effectiveness of the sales channel.
Market Responsiveness/Record: Ability to respond, change direction, be flexible and achieve competitive success as opportunities develop, competitors act, customer needs evolve and market dynamics change. This criterion also considers the vendor’s history of responsiveness.
Marketing Execution: The clarity, quality, creativity and efficacy of programs designed to deliver the organization’s message to influence the market, promote the brand and business, increase awareness of the products, and establish a positive identification with the product/brand and organization in the minds of buyers. This “mind share” can be driven by a combination of publicity, promotional initiatives, thought leadership, word of mouth and sales activities.
Customer Experience: Relationships, products and services/programs that enable clients to be successful with the products evaluated. Specifically, this includes the ways customers receive technical support or account support. This can also include ancillary tools, customer support programs (and the quality thereof), availability of user groups, service-level agreements and so on.
Operations: The ability of the organization to meet its goals and commitments. Factors include the quality of the organizational structure, including skills, experiences, programs, systems and other vehicles that enable the organization to operate effectively and efficiently on an ongoing basis.
Completeness of Vision
Market Understanding: Ability of the vendor to understand buyers’ wants and needs and to translate those into products and services. Vendors that show the highest degree of vision listen to and understand buyers’ wants and needs, and can shape or enhance those with their added vision.
Marketing Strategy: A clear, differentiated set of messages consistently communicated throughout the organization and externalized through the website, advertising, customer programs and positioning statements.
Sales Strategy: The strategy for selling products that uses the appropriate network of direct and indirect sales, marketing, service, and communication affiliates that extend the scope and depth of market reach, skills, expertise, technologies, services and the customer base.
Offering (Product) Strategy: The vendor’s approach to product development and delivery that emphasizes differentiation, functionality, methodology and feature sets as they map to current and future requirements.
Business Model: The soundness and logic of the vendor’s underlying business proposition.
Vertical/Industry Strategy: The vendor’s strategy to direct resources, skills and offerings to meet the specific needs of individual market segments, including vertical markets.
Innovation: Direct, related, complementary and synergistic layouts of resources, expertise or capital for investment, consolidation, defensive or pre-emptive purposes.
Geographic Strategy: The vendor’s strategy to direct resources, skills and offerings to meet the specific needs of geographies outside the “home” or native geography, either directly or through partners, channels and subsidiaries as appropriate for that geography and market.
Critical Capabilities for Security Information and Event Management
Published 24 February 2020 – ID G00381141 – 52 min read
Security information and event management solutions keep evolving to address demands across a range of buyers and requirements. Security and risk management leaders responsible for security operations should use this research to evaluate and select the most appropriate solutions.
SIEM solution capabilities and support, specifically for consumption models, architecture, analytics, user monitoring and operations, are increasingly varying across vendors.
SIEM vendors are trying to solve, with varying degrees of success, the inherent complexities in deploying and operating SIEM tools. However, most SIEM solutions are still too complex for buyers with limited resources and expertise.
Big data technologies as core components of SIEM solutions are starting to become table stakes — for example, Hadoop or Elasticsearch, which are now leveraged by most SIEM solutions.
SIEM vendors have embraced security orchestration, automation and response (SOAR) via native capabilities, OEM and partnerships, or deeper integrations with leading SOAR vendors.
Although most SIEM buyers continue to purchase on-premises software or appliance SIEM solutions, SaaS SIEM is gaining traction, and more SIEM tools are offered as SaaS SIEM only.
IT security and risk management leaders responsible for security monitoring and operations:
Focus your evaluation on the critical capabilities that align to your use cases (e.g., forensics, advanced threat detection and response), requirements, and current and future IT environments (e.g., on-premises versus cloud-based services).
Improve response by leveraging the SOAR-type functionality that SIEM solutions now provide natively before purchasing a dedicated SOAR tool.
Give preference to SIEM solutions that can be consumed as a service to minimize overhead and management if you don’t have complex, on-premises SIEM architecture requirements and are, or plan to be, a heavy user of cloud-based services.
Strategic Planning Assumptions
By 2022, 50% of all SIEM tools will be cloud-native and delivered as a service from the vendor, up from 20% today.
By 2022, 75% of all SIEM vendors in the Gartner Magic Quadrant will offer advanced analytics features, as well as orchestration and automation features, up from 30% today.
What You Need to Know
This document was revised on 2 March 2020. The document you are viewing is the corrected version. For more information, see the Corrections page on gartner.com.
Security and risk management leaders evaluating SIEM solutions must start by clearly understanding and describing their scope as well as their use cases, and then defining specific requirements from these inputs in conjunction with applicable stakeholders. These stakeholders should typically go beyond security and IT, and include such teams as audit, lines of business, legal and human resources. Additional factors to consider when evaluating SIEM solutions include:
The scale and complexity of the deployment — for example, the types and locations of data sources in scope for distributed organizations or hybrid multicloud environments.
Architectural considerations for deployment and consumption — for example, will the solution be deployed on-premises, in the cloud, via a hybrid approach or consumed as software as a service (SaaS)?
Operational roles, such as use of internal resources versus service providers, and managed SIEM services
Applicable compliance regimes and mandates, such as data retention and reporting requirements
Gartner recommends that organizations initiate any SIEM project with a clear understanding of their use cases (see “How to Build Use Cases for Your SIEM”) to achieve long-term value from deploying a SIEM solution.
A phased acquisition and implementation approach, in which the most critical drivers and quick wins are implemented in the first phases and more-complex use cases occur in later phases, is also recommended. This will also allow the organization to rightsize its SIEM resources, both from a licensing and an operational costs perspective. This also requires being particularly careful with the selection and implementation of the foundational pieces.
Developing a multiyear, yet fluid, project roadmap for the SIEM solution’s operation and expansion, with input from applicable stakeholders, aligned to the overall business direction as well as information security strategies, will ensure that any solution purchased remains fit for purpose. (For more information on SIEM deployments, see “How to Architect and Deploy a SIEM Solution”). Such a roadmap needs to be revisited regularly, typically after an audit or a significant incident, or as new laws and regulations are adopted. Finally, organizations need to evaluate the SIEM solution vendors’ deployment and ongoing support capabilities, taking into account the resources and expertise available internally to the organization, and through the SIEM vendors and third-party service providers.
Critical Capabilities Use-Case Graphics
Figure 1. Vendors’ Product Scores for Basic Searching and Reporting Use Case
Source: Gartner (February 2020)
Figure 2. Vendors’ Product Scores for Compliance and Control Monitoring Use Case
Source: Gartner (February 2020)
Figure 3. Vendors’ Product Scores for Basic Security Monitoring Use Case
Source: Gartner (February 2020)
Figure 4. Vendors’ Product Scores for Complex Security Monitoring Use Case
Source: Gartner (February 2020)
Figure 5. Vendors’ Product Scores for Advanced Threat Detection and Response Use Case
Source: Gartner (February 2020)
AT&T Cybersecurity, part of the AT&T Business portfolio, is headquartered in Dallas. AT&T Cybersecurity’s SIEM solution is called Unified Security Management (USM) Anywhere, and is delivered as a SaaS SIEM from the Amazon Web Services (AWS) cloud. The solution includes several built-in capabilities, including asset discovery; vulnerability assessment; intrusion detection system (IDS) for network and cloud; and an endpoint detection and response (EDR) agent. The solution has an app framework that enables integrations with third-party security products for detection, automation and response. USM Anywhere also includes basic user and entity behavior analytics (UEBA) functions. Additional offerings include AT&T Alien Labs, which provides a threat intelligence subscription as part of the core SIEM product via the Open Threat Exchange (OTX) threat intelligence sharing capability. An on-premises software deployment, USM Appliance, is still available and supported. Prospective customers should know that there is no feature parity between the two offerings.
USM Anywhere runs in AWS, and customers deploy data collection sensors as VMware or Hyper-V images, or templates for 13 supported AWS regions, Microsoft Azure or Google Cloud Platform. Integrations with SaaS solutions are provided via AlienApps. Country-specific data storage is available for the U.S., Ireland, Germany, Japan, Australia, the U.K., Canada, India and Brazil. The solution includes an optional EDR agent for Windows, Linux and Mac. Weekly updates to data collectors are managed by the vendor. EDR agents can autoupdate or be managed by customers.
Detections and alerting in the USM Anywhere platform are largely dependent on the real-time detection rules and analytics methods created and maintained by the vendor based on internally sourced threat research. All rules, response templates, algorithms and associated threat research are included in the solution. Alarms can be viewed in the context of a kill chain or the ATT&CK framework.
USM Anywhere pricing is generally based on data volume (GB per month).
Dell Technologies (RSA)
RSA is a business within Dell Technologies, which is headquartered in Round Rock, Texas. RSA’s NetWitness Platform (RSA NWP) is composed of a variety of components (versions introduced in April 2019 and reviewed for this research):
RSA NetWitness Logs (version 11.3)
RSA NetWitness Endpoint (version 11.3)
RSA NetWitness Network (version 11.3)
RSA NetWitness UEBA (version 11.3), with competencies derived from the acquisition of Fortscale in 2018
RSA NetWitness Orchestrator (version 4.5 introduced in July 2019), an OEM of Demisto’s SOAR solution
RSA NWP can be deployed in a variety of formats, ranging from software to physical and virtual deployments — on-premises and in public cloud services, such as AWS and Azure. It is flexible in how and where those components are installed to support simple deployments through complex, n-tier deployments across on-premises and cloud — infrastructure as a service (IaaS) — environments, as well as geographically distributed environments.
Various on-premises log and data sources, as well as contextual sources, are supported, in addition to common SaaS vendors (Office 365, Salesforce) and cloud service providers (CSPs; such as AWS, Azure and Google Cloud).
Decoders are used to collect logs and data, as well as perform analytics. Concentrators aggregate and index metadata. Smaller environments can use hybrid decoders/concentrators for collection and indexing. Event Stream Analysis is responsible for real-time analysis, correlation and notification. Archivers provide long-term data retention. All of these components are licensed together as part of a single SIEM SKU in the metered pricing model. RSA NWP Endpoint Insights is a free Windows and Linux endpoint agent to capture and forward endpoint logs, whereas the for-pay RSA NWP Endpoint agent has more EDR capabilities. RSA NetWitness Server is the single user interface (UI) into the solution. RSA NetWitness Platform Live is a cloud-based content distribution service. RSA also offers the for-pay NetWitness UEBA for advanced analytics, and NetWitness Orchestrator for SOAR capabilities.
RSA NWP pricing is generally based on data volume (GB per day).
Exabeam is headquartered in San Mateo, California. Its SIEM, branded as Security Management Platform (SMP), is composed of seven products (version 2019.2 was introduced in February 2019 and reviewed for this research):
Data Lake (version i31) for CLM-type functionality
Advanced Analytics (version i48), a user-focused UEBA
Threat Hunter (version i48), for forensics, investigations and searches
Entity Analytics (version i48), an entity-focused UEBA
Incident Responder (version i48), a workflow-automation-focused SOAR
Case Manager (version i48), an incident response/case management platform
Cloud Connectors (SkyFormation v2.4) from the SkyFormation acquisition for cloud coverage
Exabeam takes a modular approach to its platform, where different solutions can be purchased individually and run in various combinations. Buyers seeking a SIEM solution will need to purchase Data Lake, Advanced Analytics and/or Entity Analytics (depending on the use cases), and Incident Responder. The platform leverages several big data technologies like Elasticsearch, HDFS, MongoDB, Kafka and Spark.
Exabeam SMP is available as an on-premises deployment or as a SaaS-based solution (Exabeam SaaS Cloud). SMP is offered as physical or virtual appliances. Docker container versions are also available. The solution can be deployed in IaaS, like AWS, Azure and Google Cloud, and hybrid options — through a combination of appliances and software installations — are supported. Exabeam also has partners to deliver managed SIEM on a per-customer basis. Multitenancy is supported via a combination of logical data segmentation and role-based access control (RBAC) controls. Content and integrations can be downloaded by customers and from the Exabeam community portal.
Exabeam SMP pricing is generally based on the number of employees in the organization.
FireEye is headquartered in Milpitas, California. FireEye’s SIEM capabilities are delivered by its Helix platform, which integrates with other FireEye security solutions for email, network and endpoint that are sold separately. FireEye Threat Intelligence, supporting detections and threat hunting, and FireEye Security Orchestrator, for workflow automation, are also integrated into the Helix platform. The solution includes (at no added cost) virtual sensors to collect network metadata for monitoring. FireEye also offers services branded Expertise On Demand to provide support for tuning detection rules, investigating alerts and responding to incidents.
FireEye Helix is offered as a SaaS solution, deployed in multiple regions in AWS and managed by FireEye. Data collection is handled via one or more Communications Broker agents running on customer premises or in customer cloud environments. Helix orchestration capabilities are typically deployed on-premises to support integration with vulnerability scanning solutions. Several integrated FireEye security solutions also run in the cloud, but can be optionally operated on-premises, on physical or virtual systems. These solutions also include Communications Broker connectivity to Helix.
FireEye creates and maintains detection rules and analytics algorithms based on internally sourced threat research from Mandiant incident response engagement findings and threat intelligence services. Alerts are created by a combination of real-time rule matching and the postprocessing application of threat intelligence and indicators of compromise against events ingested by the platform. All rules, algorithms and associated threat research are included in the solution, and users can add rules to support additional use cases.
FireEye Helix pricing is generally based on data velocity (events per second [EPS]).
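As the pricing notes throughout this research show, some vendors price on data velocity (EPS) and others on data volume (GB per day or per month). A rough conversion under an assumed average event size makes quotes comparable; the 500-byte average below is an illustrative assumption, not a vendor figure, and real averages vary widely by log source:

```python
def eps_to_gb_per_day(eps: float, avg_event_bytes: int = 500) -> float:
    """Convert sustained events/second to raw GB/day (86,400 seconds per day)."""
    return eps * avg_event_bytes * 86_400 / 1e9

# 2,000 EPS at an assumed 500-byte average event ≈ 86.4 GB/day of raw log data
print(round(eps_to_gb_per_day(2_000), 1))  # → 86.4
```

Buyers should measure their own average event size before using a conversion like this in licensing negotiations.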
Fortinet is headquartered in Sunnyvale, California. The Fortinet SIEM solution (version 5.2.1 introduced in June 2019 and reviewed for this research) is composed of the following components:
FortiSIEM Advanced Agent (a for-pay, server-focused endpoint agent for Windows and Linux with some file integrity monitoring [FIM] and EDR capabilities)
FortiGuard IOC (a for-pay threat intelligence subscription feed)
FortiInsight (a for-pay pure-play UEBA tool derived from the ZoneFox acquisition)
Fortinet FortiSIEM is part of Fortinet’s Security Fabric, which allows enhanced collaboration and integration between several of the vendor’s portfolio solutions (e.g., Fortinet FortiSandbox) for additional, multitool use cases.
All FortiSIEM nodes can be deployed as virtual machines, and each component can be deployed on-premises or in a public/private cloud as long as the hypervisor is supported. For small installations, FortiSIEM can be deployed as a single virtual appliance with a local disk or as a hardware appliance (sold by Fortinet). As complexity and performance requirements grow, customers can scale vertically with bigger appliances, and/or can scale horizontally by adding Worker and Collector appliances (virtual or hardware).
FortiSIEM can manage most common on-premises data sources and cloud sources (although Google Cloud Platform, for example, is not supported), and can also ingest network flow data from firewalls and other network devices in the form of NetFlow version 5, version 9, IPFIX, sFlow, JFlow, and Cisco AVC, VPC Flow and Syslog.
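Flow formats such as NetFlow v5 have fixed, publicly documented binary layouts, which is why so many SIEM collectors can ingest them. As a sketch of what a collector does first with each datagram, the following parses the 24-byte NetFlow v5 packet header (per Cisco's published format); the returned fields are a simplification:

```python
import struct

# NetFlow v5 packet header: 24 bytes, big-endian
# version, count, sys_uptime, unix_secs, unix_nsecs,
# flow_sequence, engine_type, engine_id, sampling_interval
V5_HEADER = struct.Struct("!HHIIIIBBH")

def parse_v5_header(datagram: bytes) -> dict:
    (version, count, _sys_uptime, _unix_secs, _unix_nsecs,
     flow_seq, _engine_type, _engine_id, _sampling) = V5_HEADER.unpack_from(datagram)
    if version != 5:
        raise ValueError(f"not a NetFlow v5 datagram (version={version})")
    return {"version": version, "flow_records": count, "sequence": flow_seq}

# A synthetic header claiming 2 flow records
pkt = V5_HEADER.pack(5, 2, 0, 0, 0, 100, 0, 0, 0)
print(parse_v5_header(pkt))  # → {'version': 5, 'flow_records': 2, 'sequence': 100}
```

The `count` field tells the collector how many 48-byte flow records follow the header, and the sequence number lets it detect dropped export packets.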
FortiSIEM’s distributed architecture consists of three kinds of nodes: Supervisor, Worker and Collector. An additional Report Server node is needed when integrating third-party business intelligence software with FortiSIEM. The Collector node discovers devices in remote locations, gathers events from these devices, parses and then preprocesses the events with enrichment, compresses them, and forwards them to the Supervisor or Worker nodes. Workers process and store events, and perform partial queries sent to the Supervisor. The Supervisor node processes the partial query/rule results from the Worker nodes to produce the final result. FortiSIEM clients can select between a proprietary FortiSIEM NoSQL event database or Elasticsearch for event storage, while the Supervisor node provides a single system database image to the user by unifying the Discovery and Event databases.
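The Worker/Supervisor split described above is essentially a scatter-gather query: each Worker aggregates over the events it stores locally, and the Supervisor merges the partial results into the final answer. A minimal sketch, with event shape and the counted field chosen for illustration:

```python
from collections import Counter

def worker_partial_count(local_events: list[dict], field: str) -> Counter:
    """Each Worker aggregates only the events it stores."""
    return Counter(e[field] for e in local_events)

def supervisor_merge(partials: list[Counter]) -> Counter:
    """The Supervisor combines partial results into the final query result."""
    total = Counter()
    for p in partials:
        total += p
    return total

worker_a = [{"src": "10.0.0.1"}, {"src": "10.0.0.2"}]
worker_b = [{"src": "10.0.0.1"}]
partials = [worker_partial_count(w, "src") for w in (worker_a, worker_b)]
print(supervisor_merge(partials))  # → Counter({'10.0.0.1': 2, '10.0.0.2': 1})
```

This division of labor is what allows horizontal scaling by adding Worker nodes: query cost grows with the data on each node, not with the cluster as a whole.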
FortiSIEM pricing is generally based on the number of assets in scope (number of IP addresses).
HanSight is headquartered in Beijing, China. HanSight Enterprise SIEM (version 5.1 introduced in May 2019 and reviewed for this research) is the core product that is part of an ecosystem of solutions that includes HanSight UBA, HanSight NTA, and HanSight TIP (all also at version 5.1). Other solutions available within the HanSight ecosystem include vulnerability management, asset discovery and data loss prevention (DLP). HanSight does not have its own EDR and cloud workload protection platform (CWPP) tools, and instead has partnerships with several Chinese security technology vendors (e.g., 360 Total Security, Magic Shield and Qingteng).
HanSight Enterprise SIEM is available as software, as a hardware appliance (aimed at midsize enterprises) or as a per-customer hosted environment. There are five components that can be deployed in various configurations (all in one, individually, combinations) — Data Collector Clients, NTA, Central Management, Threat/Detection and Incident Management and Elasticsearch. Real-time analytics are performed by the Security Analytics Engine, while batch processing is handled through HanSight Query Language (HQL) search and the UEBA module called HanSight UBA. HQL can implement machine learning algorithms, and the HanSight Notebook allows more-advanced users to create their own machine learning in Python or Java.
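Since advanced users can author their own machine learning in Python via the HanSight Notebook, a baseline-deviation check of the kind UEBA modules run is a natural first exercise. The sketch below is generic z-score anomaly detection under assumed data (daily login counts for one user), not HanSight's actual API:

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], today: float, z_threshold: float = 3.0) -> bool:
    """Flag today's count if it deviates from the baseline by more than z_threshold sigma."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > z_threshold

logins_per_day = [12, 9, 11, 10, 13, 12, 10]  # one user's historical baseline
print(is_anomalous(logins_per_day, 80))  # → True: a sudden spike stands out
```

Production UEBA adds peer-group comparison, seasonality and risk scoring on top, but the core idea is this per-entity baseline.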
HanSight Enterprise SIEM pricing is generally based on data volume (EPS).
IBM Security is headquartered in Cambridge, Massachusetts. The QRadar Security Intelligence Platform (SIP; version 7.3.2 introduced in May 2019 and reviewed for this research) consists of QRadar SIEM and other separately priced components:
IBM QRadar Vulnerability Manager provides vulnerability assessment.
Network monitoring support in SIP includes IBM QRadar Network Insights for network flows, and the IBM QRadar Network Packet Capture appliance.
IBM QRadar Incident Forensics provides investigation support.
IBM QRadar Advisor with Watson provides automated research for threats and actors.
IBM QRadar User Behavior Analytics (UBA) is a free add-on module for user-monitoring use cases.
IBM Resilient SOAR, which supports bidirectional integration with the QRadar SIEM solution.
In addition to these IBM QRadar components, IBM offers the Security App Exchange, with integrations developed by IBM and third parties.
QRadar uses a proprietary data store for environmental and event data. Alerts (offenses) are created by a combination of real-time correlation and baselining (vendor-provided and user-created or modified rules) and machine learning analytics to detect anomalous behaviors.
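Real-time correlation rules of the kind described above typically match a pattern of events inside a sliding time window and raise an offense when a threshold is crossed. A minimal sketch of one such rule (repeated authentication failures from one source); the event tuple shape, threshold and window are illustrative assumptions, not QRadar's rule syntax:

```python
from collections import defaultdict, deque

def correlate(events, threshold=5, window=60):
    """Raise an offense when `threshold` failed logins from one source occur within `window` seconds."""
    recent = defaultdict(deque)  # source -> timestamps of recent failures
    offenses = []
    for ts, source, outcome in events:  # events assumed sorted by timestamp
        if outcome != "failure":
            continue
        q = recent[source]
        q.append(ts)
        while q and ts - q[0] > window:  # slide the window forward
            q.popleft()
        if len(q) >= threshold:
            offenses.append((source, ts))
            q.clear()  # reset so one burst raises one offense
    return offenses

events = [(t, "10.0.0.9", "failure") for t in range(0, 50, 10)]  # 5 failures in 50s
print(correlate(events))  # → [('10.0.0.9', 40)]
```

Vendor-shipped rule libraries are collections of patterns like this, tuned with thresholds, baselines and exceptions so they fire on attacks rather than noisy but benign sources.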
Deployment options include on-premises software, hardware and virtual appliance options (for most components), or SaaS delivery by IBM via the QRadar on Cloud offering. Smaller deployments can be addressed in an all-in-one appliance, and larger deployments can scale horizontally with additional appliances.
IBM QRadar pricing is generally based on data velocity (EPS).
LogPoint is headquartered in Copenhagen, Denmark. LogPoint’s SIEM solution is composed of the following modules (generally introduced in June 2019 and reviewed for this research):
LogPoint Core SIEM (version 6.6.1)
LogPoint UEBA (version 2.1.0)
LogPoint Director Console (version 1.5.0)
LogPoint Director Fabric (version 1.5.0)
LogPoint Applied Analytics (version 2.0)
LogPoint can be deployed as an all-in-one virtual or hardware appliance running a hardened version of Ubuntu Linux 16.04, combining the Collector, Backend and Search head. Scalability is achieved via the use of larger appliances and/or additional Collector/Backend modules deployed in key locations. These collection instances parse, normalize, enrich, filter, route, compress and buffer event data. Larger and more complex deployments (e.g., a federated model) introduce LogPoint Director to manage the various components. LogPoint, through the use of NiFi, supports connections to query remote data lakes, including Hadoop and Elastic data stores. LogPoint utilizes Apache Lucene and NoSQL flat-file storage split into several components: raw log data, key/value-pair data and enriched data (if available).
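The parse, normalize, enrich and filter stages that collection instances perform compose naturally into a pipeline. The following is a generic sketch of that stage chain, not LogPoint internals; the log line format, field names and asset-database lookup are illustrative assumptions:

```python
def parse(line: str) -> dict:
    """Split a raw log line into structured fields (assumed 'ts host message' layout)."""
    ts, host, msg = line.split(" ", 2)
    return {"ts": ts, "host": host, "msg": msg}

def normalize(event: dict) -> dict:
    """Canonicalize fields so downstream rules match consistently."""
    event["host"] = event["host"].lower()
    return event

def enrich(event: dict, asset_db: dict) -> dict:
    """Attach context, here a network zone from a hypothetical asset database."""
    event["zone"] = asset_db.get(event["host"], "unknown")
    return event

def pipeline(lines, asset_db):
    for line in lines:
        event = enrich(normalize(parse(line)), asset_db)
        if event["zone"] != "unknown":  # filter stage: route only known assets
            yield event

assets = {"web01": "dmz"}
out = list(pipeline(["1700000000 WEB01 GET /login", "1700000001 lap77 beacon"], assets))
print(out)  # only the known asset survives the filter
```

In a real collector the final stage would compress and buffer the surviving events before forwarding them to the Backend, rather than yielding them locally.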
The analysis tier in LogPoint is delivered in a hybrid model. Inside LogPoint SIEM, the analysis node is a component that consumes data from many LogPoint Backends. The Backends stream data in real time to the analytics node, where it is processed for real-time correlation and alerting. Whenever an analyst requires forensic data or conducts a long-term analysis, queries are executed against the data stores.
The for-pay LogPoint UEBA is delivered through a cloud service that relays insights back to LogPoint SIEM.
LogPoint SIEM pricing is generally based on the number of assets in the organization.
LogRhythm, headquartered in Boulder, Colorado, brands its SIEM solution as LogRhythm NextGen SIEM Platform (version 7.4 introduced in October 2018 and reviewed for this research). The core SIEM component is the XDR Stack and includes the DetectX, AnalytiX and RespondX components.
Additional modules for user and network monitoring include:
UserXDR (for UEBA capabilities)
NetworkXDR (for NTA capabilities)
LogRhythm System Monitor (aka SysMon version 7.4), a host agent for data collection and EDR capabilities available in Lite and Pro versions
Network Monitor (aka NetMon version 3.9 and NetMon Freemium), the means to collect network data to support NetworkXDR
The platform leverages a mix of Windows and Linux OS, as well as Microsoft SQL and Elasticsearch for data management.
The LogRhythm platform supports large and smaller organizations with two versions — LogRhythm Enterprise and LogRhythm XM (an all-in-one appliance). These can be deployed either on-premises as software, a physical appliance or virtual appliance. IaaS deployments are supported as well, and hybrid models of IaaS and on-premises, and mixing and matching of virtual, physical and software installs, are also supported. LogRhythm’s SaaS SIEM offering is called LogRhythm Cloud. LogRhythm UserXDR is only available as a SaaS offering. Multitenancy is natively supported in the solution. Content is made available to customers via the LogRhythm Knowledge Base and is updated on a weekly basis.
LogRhythm’s core SIEM product pricing is generally based on velocity (messages per second [MPS]).
ManageEngine has headquarters in India (Chennai), as well as in the U.S. (Austin, Texas). ManageEngine’s core SIEM product is Log360, and there are several modules, individually licensed, that address security and IT operations use cases. These include (versions as of July 2019 and reviewed for this research):
ManageEngine ADAudit Plus (version 6.0; Active Directory change auditing and reporting)
ManageEngine EventLog Analyzer (version 12.0.5; central log management)
ManageEngine Cloud Security Plus (version 4.0; CLM and SIEM for AWS and Azure)
ManageEngine Log360 UEBA (version 4.0; user and entity behavior analysis)
ManageEngine DataSecurity Plus (version 5.0; data discovery and file server auditing)
ManageEngine O365 Manager Plus (version 4.3; Office 365 security and compliance)
ManageEngine Exchange Reporter Plus (version 5.4; Exchange Server change audits and reporting)
Event data is ingested via agent-based and agentless methods, including log import. Alerts are created via real-time correlation rules packaged with the solution or developed by users. The UEBA add-on module detects anomalous activities.
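A typical vendor-packaged correlation rule of the kind mentioned above can be sketched as a sliding-window threshold: N failed logons from the same source within a time window raise an alert. The threshold, window and field names below are assumptions for illustration, not ManageEngine's shipped content.

```python
# Hedged sketch of a real-time correlation rule of the kind SIEM
# vendors ship out of the box: N failed logons from one source
# within a time window raise an alert. Thresholds and field names
# are illustrative assumptions.
from collections import defaultdict, deque

WINDOW_SECONDS = 60
THRESHOLD = 5

class FailedLogonRule:
    def __init__(self):
        self._seen = defaultdict(deque)   # src_ip -> recent timestamps

    def feed(self, event: dict):
        """Return an alert dict when the threshold is crossed, else None."""
        if event.get("action") != "logon_failure":
            return None
        q = self._seen[event["src_ip"]]
        q.append(event["ts"])
        while q and event["ts"] - q[0] > WINDOW_SECONDS:
            q.popleft()                   # expire entries outside the window
        if len(q) >= THRESHOLD:
            return {"rule": "brute_force", "src_ip": event["src_ip"],
                    "count": len(q)}
        return None

rule = FailedLogonRule()
```

Feeding five failures with timestamps inside one minute triggers the alert; a stray failure long afterward does not, because old entries have expired.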
ManageEngine Log360 software can be deployed on-premises on physical or virtual systems. The ManageEngine Log360 Cloud solution stores the data collected by the log management module, EventLog Analyzer; however, it is not a SaaS-based SIEM tool. Typical small deployments would make use of the EventLog Analyzer and ADAudit Plus components.
Log360 pricing is generally based on the number of assets in scope (number of IP addresses).
McAfee is headquartered in Santa Clara, California. McAfee Enterprise Security Manager (ESM) version 11.2.1 (reviewed for this research) was introduced in July 2019 and is composed of the following modules:
McAfee Event Receiver (ERC), for collection and correlation of data
McAfee Enterprise Log Search (ELS), for Elastic-based log search
McAfee Enterprise Log Manager (ELM), for long-term log storage and management
McAfee Advanced Correlation Engine (ACE), for dedicated correlation including risk and ruleless (behavior-based) correlation, as well as statistical and baseline anomaly detection
Some McAfee ESM modules can be deployed as physical or virtual appliances. Appliances are available in both hardware and virtual (on-premises or cloud) form for ESM, ERC, ELS, ELM, ACE and ADM. Virtual machines can be deployed in AWS, Azure and Oracle cloud environments. They can also be deployed in on-premises virtualized environments running ESX, KVM, Hyper-V and Xen.
The McAfee ERC allows agentless collection from 500-plus data sources. The ERC performs a number of functions, including raw log collection, log parsing and data enrichment. The ERC uses Kafka to publish data to which other components within the environment can subscribe to obtain raw and parsed data. The ERC also performs real-time alerting and simple analytics. McAfee SIEM Collector is an optional agent, deployable via SCCP, SCCM or McAfee ePO, capable of collecting various logs (e.g., traditional Windows Event logs, flat-file logs such as DNS or DHCP, or customer-defined application logs). The SIEM Collector supports servers acting as a Windows Event Forwarding host. The SIEM Collector can provide a native SQL connector to Microsoft SQL and Oracle RDBMS systems to allow customized queries against database views and tables.
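The publish/subscribe pattern the ERC uses via Kafka — one producer, multiple independent consumers per topic — can be illustrated with a pure in-memory stand-in. This is not the Kafka client API; topic names and payloads are invented for the example.

```python
# Pure-Python illustration of the publish/subscribe pattern the ERC
# uses via Kafka: events are published to topics, and multiple
# downstream components subscribe independently. This is an
# in-memory stand-in, not the Kafka client API; topic names and
# payloads are assumptions.
from collections import defaultdict

class Bus:
    def __init__(self):
        self._subs = defaultdict(list)    # topic -> subscriber callbacks

    def subscribe(self, topic: str, callback):
        self._subs[topic].append(callback)

    def publish(self, topic: str, message: dict):
        for cb in self._subs[topic]:      # fan out to every subscriber
            cb(message)

bus = Bus()
raw_store, parsed_alerts = [], []
bus.subscribe("events.raw", raw_store.append)          # e.g., a log archive
bus.subscribe("events.parsed", parsed_alerts.append)   # e.g., an analytics tier
bus.publish("events.raw", {"line": "<134>sshd: failed password"})
bus.publish("events.parsed", {"action": "logon_failure"})
```

The decoupling shown here is the point: the collector does not need to know which analytics or storage components consume its output.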
McAfee ESM pricing is generally based on data velocity (EPS).
Micro Focus, headquartered in Newbury, U.K., offers the ArcSight platform as its SIEM solution. Micro Focus ArcSight’s architecture is now based on big data technologies (e.g., Kafka bus for improved data transfers and horizontal scaling of data ingestion) and is composed of the following components (versions are those reviewed for this research):
ArcSight Enterprise Security Manager (ESM; version 7.0 introduced in May 2019), providing core SIEM functions of real-time analytics and monitoring and incident management
ArcSight Logger (version 6.7.1 introduced in May 2019), providing event and data processing and storage
ArcSight Transformation Hub (version 3.0 introduced in July 2019) as part of the Security Open Data Platform (SODP) for data management and routing
Interset UEBA (version 5.8 as of the July 2019 cutoff date for this research) for user and entity monitoring
ArcSight Investigate (version 2.3 introduced in July 2018) for data searching and visualizations to support incident investigation and threat hunting use cases
ArcSight Management Center (ArcMC; version 2.92 introduced in July 2019) is the stand-alone utility used to manage ArcSight components
SmartConnector (version 7.13 introduced in July 2019), Micro Focus’ content for data parsing and normalization
In addition, there are premium add-ons that cover additional content, and Micro Focus has other products in its portfolio that complement ArcSight. Content and third-party technology integrations are managed through the ArcSight Marketplace.
ArcSight components are provided as stand-alone appliances or software that can be installed on the customer’s own infrastructure (physical and virtual) or installed in IaaS. SaaS is not available, but hosted offerings can be provided by third parties. Multitenant functionality is native to ArcSight ESM for managed security service providers (MSSPs), managed service providers (MSPs) and organizations that need to monitor multiple organizations in a single solution.
ArcSight pricing is generally based on data velocity (EPS).
Rapid7 is headquartered in Boston, Massachusetts. Its SaaS SIEM offering is InsightIDR, and the underlying AWS-based Insight platform includes these other modules:
InsightVM (vulnerability assessment)
InsightAppSec (application security)
InsightOps (log management for IT operations)
Rapid7 offers Insight Agent as its preferred endpoint agent to enable telemetry gathering and basic bidirectional response integration capabilities with Rapid7 InsightIDR, Rapid7 InsightVM and Rapid7 InsightOps. InsightIDR also offers integration with InsightVM, which enables customers to deploy one agent across the environment to instrument and collect vulnerability assessment data while performing detection and response functions. Insight Collectors provide event ingestion, and an unlimited number of Collectors or Agents are available without additional licenses. Rapid7 also offers a managed service built around the Insight platform, including managed detection and response and vulnerability management.
Data is ingested via Collectors (syslog, plus directories, AWS CloudTrail, Azure Event Hubs, etc.) and Agents (which are optional for event collection, but required for response actions and vulnerability management). Alerts are created by vendor-provided and customer-developed rules, and with statistical and machine learning analytics based on open-source, proprietary and AWS technologies. Basic response capabilities using Agents are available from InsightIDR, with more complete integrations available for InsightConnect.
InsightIDR pricing is generally based on the number of assets in scope (number of IP addresses).
Securonix is headquartered in Addison, Texas. Its SIEM platform is Securonix SNYPR (version 6.2 was introduced in October 2018 and the CU4 update in May 2019; both were reviewed for this research).
Securonix SNYPR is available primarily as cloud-delivered SaaS service, although MSSPs and key customers are known to run SNYPR either on-premises or in their own clouds.
Securonix is capable of managing hybrid multicloud environments, with support for most cloud IaaS and PaaS as well as many SaaS applications, on-premises logs and other contextual data, as well as network traffic and flows.
Securonix SIEM architecture is composed of three tiers — ingestion, compute/storage, and master console tiers. The ingestion and data acquisition tier is composed of remote ingesters (aka RIN), which are deployed on-premises in the customer data center or in the cloud, depending on the source of the log events. The data is compressed, encrypted and forwarded by the RINs to the compute/storage tier, where it gets processed — parsed, normalized, enriched, indexed, stored (for compliance) and analyzed for potential threats. The master console tier provides UI and administration services.
Securonix SNYPR is based on the Hadoop stack, and all data is stored and managed in typical big data components: Kafka, Solr, HDFS, HBase and Spark. Analytics are done via a Lambda architecture with streaming and batch-processing services. Analytics applications such as event correlation utilize Spark streaming services to analyze the data in real time, whereas applications requiring lookup of historical data (e.g., risk scoring) utilize the batch-processing layer. Batch processing is also utilized for alerting on historical data, training machine learning algorithms and running hunting queries on historical data.
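The split between a streaming (speed) layer and a batch layer in a Lambda architecture can be sketched in a few lines. The scoring weights and event schema below are illustrative assumptions, not Securonix's actual analytics.

```python
# Hedged sketch of the Lambda pattern: a speed layer updates
# counters incrementally per event, while a batch layer recomputes
# risk scores over the full history. The weights and event schema
# are illustrative assumptions, not Securonix's analytics.
from collections import Counter

class SpeedLayer:
    """Streaming view: incremental per-user event counts."""
    def __init__(self):
        self.counts = Counter()

    def on_event(self, event: dict):
        self.counts[event["user"]] += 1   # cheap, real-time update

def batch_risk_scores(history: list) -> dict:
    """Batch view: recompute a score from all historical events."""
    weights = {"logon_failure": 3, "file_delete": 2, "logon": 0}
    scores = Counter()
    for e in history:
        scores[e["user"]] += weights.get(e["action"], 1)
    return dict(scores)

speed = SpeedLayer()
for e in [{"user": "alice", "action": "logon_failure"}] * 2:
    speed.on_event(e)

history = [{"user": "alice", "action": "logon_failure"},
           {"user": "alice", "action": "file_delete"},
           {"user": "bob", "action": "logon"}]
scores = batch_risk_scores(history)
```

The trade-off this illustrates: the speed layer answers "what just happened" instantly, while the batch layer periodically produces the more expensive, history-aware view (e.g., risk scores).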
Securonix SIEM pricing is generally based on the number of employees in the organization.
SolarWinds is headquartered in Austin, Texas. SolarWinds’ SIEM solution is Security Event Manager (SEM; version 6.7 introduced in May 2019 and reviewed for this research). SEM includes core SIEM features that provide data management, real-time correlation and log searching to support threat and compliance monitoring, investigations and response.
SEM’s architecture is straightforward, with only two components: a virtual appliance for all SEM features and functions, and a multifunction endpoint agent that provides log collection and forwarding, FIM, EDR (including active response functionality), and lightweight DLP capabilities.
Scalability can be achieved either by increasing resources to a virtual appliance or by splitting the SEM database and appliances across multiple virtual machines. Multitenancy is not native, but through the use of a master console, multiple SEMs can be viewed in a single pane.
SEM is complemented with other products in the SolarWinds portfolio for ticketing and case management, user monitoring, and network and application monitoring, through for-pay solutions such as Access Rights Manager, Identity Monitor, Service Desk, Server & Application Monitor and Papertrail, among others.
SolarWinds pricing is generally based on the number of assets in scope (number of IP addresses).
Splunk, headquartered in San Francisco, provides SIEM solutions via a combination of:
Splunk Enterprise for core log management capabilities, delivered either on-premises (Splunk Enterprise version 7.3 introduced in May 2019) or as SaaS SIEM via Splunk Cloud
Splunk Enterprise Security (ES version 5.3.1 introduced in July 2019), also available on-premises or in the cloud as a service
Additional (on-premises-only and for-pay) premium apps include Splunk User Behavior Analytics (UBA) and Splunk Phantom, as well as Splunk Security Essentials for Ransomware and Splunk App for PCI Compliance for more specific use cases
Splunk ES provides core SIEM features and functionality on top of the Splunk core. Splunk UBA complements Splunk ES’s rule, correlation and basic statistical analytics through the addition of machine learning capabilities focused on user and entity anomaly monitoring. SOAR capabilities in Phantom are an enhancement over the adaptive response framework incident management and automation natively provided in Splunk ES.
Splunk is primarily delivered as software, or via SaaS as Splunk Cloud. Splunk can be installed on customer hardware, in virtual environments or in IaaS. Splunk’s architecture contains only Universal Forwarders, Search Heads and Indexers. Universal Forwarders are agents that provide log collection and forwarding. Indexers and Search Heads are the two main components to collect, analyze and visualize data and outputs. The architecture is scalable both horizontally and vertically using these components. Splunk Stream provides a means for collecting network data off the wire. Buyers looking for an appliance-based approach can find offerings from various third parties.
Splunk Enterprise and Splunk Cloud pricing is generally based on data volume (GB per day).
The SIEM market is evolving to address wide demands across a range of buyers.
SIEM tools’ role as the central threat detection technology in enterprises is confirmed, while they are increasingly natively addressing the response phase after the detection. Gartner continues to see security operations centers (SOCs) built around a SIEM solution to deliver threat detection and response services.
At the same time, SIEM continues penetrating the small and midsize business segment, which does not have, nor does it plan to have, the required expertise to manage a SIEM and pursue advanced use cases. SIEM vendors are addressing this need with:
Easier consumption models, such as SaaS SIEM along with predictable pricing models
Stronger packaging of the content, availability of an app store and overall better user experience around the inherent complexity of these tools
Use of advanced analytics to supplement the lack of this skill set in organizations
Use of automation features to offset the limited availability of skilled staff in organizations
SIEM solutions are modernizing across the log management, analytics and operations tiers to deal with evolving buyer requirements, which requires enhancing features and adding new functionality. As an example, the ability to ingest and analyze data from cloud environments is now expected from clients looking for a SIEM tool. As described in “Technology Insight for the Modern SIEM,” standard architectures have emerged for SIEM tools to address complexities in data management, analytics and operations, via a well-architected three-layer approach. This is done to address the following issues:
As security teams deal with the challenges of increasing volumes of data from a variety of new sources with growing velocities, SIEM technology vendors are adopting big data technologies, such as Hadoop, NoSQL, Elasticsearch and Kafka to replace legacy data management capabilities oriented around proprietary methods and relational databases.
To cope with an evermore hostile threat environment, both external attackers and insider threats, SIEM vendors are adding more sophisticated analytics methods, such as machine learning, to complement existing analytics capabilities, in addition to custom content focused on specific types of threats, such as ransomware or threat profiles modeled after their tactics, techniques and procedures. UEBA technologies (see “Market Guide for User and Entity Behavior Analytics”) have been quickly embraced by existing SIEM vendors (either via self-developed technology or acquisition, or through white labeling), in addition to UEBA vendors that have pivoted to the SIEM technology market. Machine-readable threat intelligence is increasingly made available, both with the core SIEM solution and as a premium feature. However, the quality of out-of-the-box threat intelligence and support for third-party feeds varies among vendors.
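The kind of statistical baseline a UEBA layer adds on top of rules can be shown with a minimal z-score check against a user's own history. The threshold and the "files downloaded per day" scenario are assumptions for illustration.

```python
# Minimal sketch of the statistical baselining a UEBA layer adds on
# top of correlation rules: flag a daily count that deviates strongly
# from a user's own history. The z-score threshold and scenario are
# illustrative assumptions.
import statistics

def is_anomalous(history: list, today: float, z_threshold: float = 3.0) -> bool:
    """Flag `today` if it sits more than z_threshold standard
    deviations above the historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:                       # flat history: any increase is odd
        return today > mean
    return (today - mean) / stdev > z_threshold

# e.g., a user who normally downloads ~10 files a day suddenly pulls 500
baseline = [8, 12, 9, 11, 10, 13, 9]
```

Real UEBA products layer many such models (peer-group comparison, time-of-day profiles, rarity scoring) and combine them into a risk score, but the per-model idea is the same: learn the entity's normal, then score the deviation.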
At the operational tier, SIEM solution buyer requirements are driving demand for more sophisticated case and incident management features, as well as ways to measure, track, report on and improve the mean time to detect (MTTD) and mean time to respond (MTTR) to threats. The automation of specific activities done manually by SIEM tool users in both investigating events and alerts, as well as in initiating response actions, strongly aligns to the SOAR tool market and its use cases. As a result, SOAR features (see “Innovation Insight for Security Orchestration, Automation and Response”) are starting to be added to SIEM solutions in a trajectory similar to the adoption of UEBA — via acquisitions, white-label partnerships, third-party integrations and native development.
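The MTTD and MTTR metrics mentioned above reduce to simple averages over incident timestamps. The sketch below assumes epoch-second fields and measures MTTR from detection to resolution, which is one common convention; organizations should pin down their own definitions before reporting.

```python
# Hedged sketch of MTTD/MTTR computed from incident timestamps
# (epoch seconds). Field names are assumptions; MTTR here is
# measured from detection to resolution, one common convention.
def mean_time(incidents, start_key, end_key):
    deltas = [i[end_key] - i[start_key] for i in incidents]
    return sum(deltas) / len(deltas)

incidents = [
    {"occurred": 0,   "detected": 600,  "resolved": 4200},
    {"occurred": 100, "detected": 1300, "resolved": 2500},
]
mttd = mean_time(incidents, "occurred", "detected")   # (600 + 1200) / 2 = 900 s
mttr = mean_time(incidents, "detected", "resolved")   # (3600 + 1200) / 2 = 2400 s
```

Tracking these two numbers over time is what lets a SOC demonstrate that SOAR-style automation is actually shortening the detection and response loop.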
Some SIEM buyers opt for and prefer vendors providing a full portfolio of security solutions that have (pre)integrated their offering within a platform approach, offering a range of threat management — as well as broader security operations — capabilities that complement the vendor’s core SIEM solution. These threat management solutions typically include multifunction endpoint agents, or sensors, that can provide log collection and forwarding, FIM, EDR, and even DLP functions. Network monitoring technologies provide similar capabilities, such as data collection and forwarding of network flow and metadata, partial and full packet capture, and threat detection (via signatures and network traffic analytics). For buyers who are underinvested in these complementary threat detection solutions, the integration and ease of use with the core SIEM solution are beneficial; however, for those buyers who have already invested in third-party solutions, this approach can be less beneficial and reduces flexibility.
As integration with a vendor’s existing technology portfolio, as well as with third-party technologies, increases in importance, in addition to buyer demands for easier management and use of SIEM tools, SIEM vendors are adding or improving centralized management capabilities. These capabilities may follow the app store or marketplace model, or be provided via a support website, where precanned integrations, packaged content and other SIEM solution updates can be accessed.
Context is key for effective threat detection and response. SIEM solutions are becoming more sophisticated in their ability to consume, but also generate, contextual information. The enrichment of log event data upon ingestion at the data tier is becoming more common. The adoption of APIs for data collection, as well as accessing contextual data held in federated repositories (e.g., CMDBs, IAM tools, vulnerability assessment tools), is expanding and continues to grow across vendors. These integrations are also proving important to support the adoption and expansion of orchestration and automation features.
Additionally, log management-as-a-service vendors are beginning to compete in the SIEM space by adding core SIEM capabilities. (For more information on central log management, see “Use Central Log Management for Security Event Monitoring Use Cases”). The trend toward leveraging cloud services to locate the SIEM solution closer to the data source, as well as enable advanced analytics such as UEBA, is a clear catalyst for the current and short-term/midterm SaaS SIEM adoption.
Product/Service Class Definition
SIEM technologies provide core security information management (SIM) and security event management functions, along with a variety of advanced features and complementary solutions and capabilities. This supports near-real-time security event monitoring, threat detection (both real-time and via historical analysis), incident investigation and response, and compliance requirements. Core functions include:
The collection of security event information from a wide variety of sources in a central repository where it can be processed and stored in various forms (e.g., raw version, enriched, normalized)
Real-time and historical analysis, and alerting on potential threats
Reporting and dashboards
Searching across historical data for forensics and threat hunting
Workflow and case management
Integrations and automation for extending the value proposition and achieving more functionality
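The first core function above, collecting events and storing them in various forms, can be sketched as a raw line kept verbatim alongside a derived normalized record. The regex, field names and category label are assumptions for illustration.

```python
# Illustrative sketch of "stored in various forms": a raw
# syslog-style line is kept verbatim while a normalized record is
# derived for analytics. The regex, schema and category label are
# assumptions.
import re

PATTERN = re.compile(
    r"(?P<ts>\S+) (?P<host>\S+) sshd: Failed password for (?P<user>\S+) "
    r"from (?P<src_ip>\S+)"
)

def ingest(raw: str) -> dict:
    m = PATTERN.match(raw)
    record = {"raw": raw}                # raw version, always retained
    if m:                                # normalized form, when parseable
        record.update(m.groupdict())
        record["category"] = "authentication_failure"
    return record

event = ingest("2020-06-11T10:00:00Z web01 sshd: Failed password "
               "for alice from 203.0.113.7")
```

Retaining the raw line matters for forensics and compliance; the normalized fields are what correlation rules and searches actually run against.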
SIEM technology is typically deployed to:
Monitor, correlate and analyze activity across multiple systems and applications
Discover external and internal threats
Monitor the activities of users and specific types of users, such as those with privileged access (both internal and third parties), and users with access to critical data assets such as intellectual property, and executives
Monitor server and database resource access, and offer some data exfiltration monitoring capabilities
Provide compliance reporting
Provide analytics and workflow to support incident response, and increasingly the ability to orchestrate and automate actions and workflows, powering SOC types of use cases
SIEM technology aggregates and analyzes the event data produced by networks, devices, systems and applications. The primary data source has been time-series-based log data; however, SIEM technology is evolving to process (e.g., for real-time monitoring) and leverage (e.g., for incident investigation and response) other forms of data to obtain context about users, IT assets, data, applications, threats and vulnerabilities (e.g., Active Directory [AD], configuration management database [CMDB], vulnerability management data, HR information and threat intelligence).
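The contextual data sources listed above (AD, CMDB and the like) are typically joined against events at ingestion or alert time. The sketch below uses hypothetical in-memory lookup tables; real deployments query the actual directory and CMDB via connectors.

```python
# Hedged sketch of contextual enrichment: joining a log event
# against asset (CMDB-like) and identity (AD-like) lookups. The
# tables and field names are illustrative assumptions.
CMDB = {"10.0.0.5": {"hostname": "db01", "criticality": "high"}}
DIRECTORY = {"alice": {"department": "finance", "privileged": True}}

def enrich(event: dict) -> dict:
    out = dict(event)                    # leave the original event intact
    out["asset"] = CMDB.get(event.get("dst_ip"), {})
    out["identity"] = DIRECTORY.get(event.get("user"), {})
    return out

alert = enrich({"user": "alice", "dst_ip": "10.0.0.5", "action": "query"})
# a privileged user touching a high-criticality asset can be ranked higher
```

This is why the context integrations matter for triage: the same raw event scores very differently depending on who acted and what asset was touched.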
Critical Capabilities Definition
Here, we evaluate nine capabilities across SIEM technologies. Security and risk management leaders should use this research to understand the differing capabilities across the SIEM technology landscape aligned to their specific use cases.
Architecture and Deployment
This capability encompasses the architecture of the solution and its modules, its deployment alternatives, as well as horizontal and vertical scalability.
SIEM solution architectures must support a variety of buyer environments, ranging from smaller enterprises that may only need a single appliance solution to global enterprises and MSSPs with complex environments that require distributed, n-tier architectures, and even enterprises looking for a cloud-delivered, SaaS SIEM. SIEM tool buyers must evaluate the complexity of deployments, such as an all-in-one box approach versus individual or combined modules to support large-scale deployments, or their ability to send potentially massive amounts of logs through their gateway for SaaS SIEMs. They also need to assess how to deploy those components — for example, using physical or virtual appliances or software, provided as a service, or a combination thereof. To deal with the increase in the volume, velocity and variety (and retention) of data across organizations of all sizes, SIEM solutions are adopting and leveraging big data technologies. Buyers in organizations with “cloud first” policies are looking to SIEM solutions that are delivered as a service or that can be installed in IaaS or hybrid deployments. Support for integrating with an array of security and nonsecurity technologies is also increasingly important, with the growing adoption of APIs in security and other IT technologies.
Cloud Readiness
This capability focuses on emerging or more established use cases centered on the cloud, as well as the use of cloud as a deployment alternative for the SIEM solution.
As organizations are looking to benefit from the cloud, some adopt a cloud-first approach that prioritizes the use of cloud services, while most organizations have a hybrid environment with a combination of on-premises assets (e.g., firewalls, routers, servers), and cloud workloads in the form of SaaS solutions (e.g., Office 365) or IaaS/PaaS (e.g., AWS, Azure or Google Cloud Platform). This trend has been sustained over the past few years, to the point where a SIEM tool’s ability to operate in cloud and hybrid environments is now key for most, if not all, organizations.
This critical capability will take into account: (1) the tool’s ability to be delivered as a cloud service (and in this case, whether the solution is an on-premises image merely hosted by the vendor, or whether the solution is a genuine cloud-native SaaS service); and (2) the scope of cloud providers and services that the SIEM tool can natively offer use cases for (and in this case, whether the SIEM tool can interact with them in a bidirectional way).
Operations and Support
SIEM solutions are recognized as complex technologies that can be difficult to deploy, and require ongoing maintenance and support to stay fit for purpose.
Combined with a shortage of available SIEM engineering expertise, the ability to deploy and sustain the administration of SIEM tools becomes increasingly challenging. An integrated management console and user experience that enables efficient management of the SIEM solution is important. From this management console, log and data source management, administration of analytics content (e.g., correlation rules, whitelist matching and machine learning algorithms), reporting, and user administration through role-based access control are provided. Likewise, the SIEM tool should provide an easy way to define and manage automated responses via playbooks and workflow integration.
Vendor support is most visible in maintenance contracts purchased by SIEM tool buyers. This includes product support (e.g., patches, hotfixes, version upgrades and content updates), as well as human support to assist SIEM solution owners. Support is typically provided via remote means, but can vary across options such as email, phone and even access to Slack channels. Support may also be provided in tiers to help buyers who may not need, or can’t afford, expensive support plans. Finally, support for implementation is a crucial element for new SIEM implementations and upgrades, and may be provided directly by the vendor or through designated third parties.
Data Management Capabilities
This capability captures the SIEM tool’s ability to properly and easily manage data, from standard logs from security devices, to NetFlow, packet captures, vulnerability scanning data and external context data.
External context data includes machine-readable threat intelligence (MRTI) or user and asset context, such as AD or CMDB.
The ability to support data acquisition of IaaS and SaaS and IoT/OT devices is increasingly important as well, and clients expect this feature to be provided natively by the tools, as outlined in the Cloud Readiness capability.
A SIEM tool needs access to the right data, and these data points can range widely from on-premises sources to cloud compute sources, and encompass security data sources as well as nonsecurity ones, such as IT, organizational and HR data. These data points can be structured or unstructured, and collected via syslog, push or pull, or invoking API calls. In addition to getting the data via connectors, the SIEM needs corresponding parsers in order to make those data points insightful. Once collected, the data can be stored in raw form or normalized, enriched or contextualized form, or a combination thereof. Tools can also offer compression capabilities to minimize storage requirements, often at the expense of performance. Tools that offer an organization the ability to convince a court that the evidence is sound need to look at the full life cycle of data management, from secure transport of the data sources with nonrepudiation to secure storage with guaranteed integrity of each event and event sequence. Capabilities to provide fine-grained access control (usually RBAC) to logs, along with obfuscation and anonymization features and flexibility to provide multiple retention policies, will also be taken into account.
Analytics
This capability describes the tool's ability to offer the right analytics to get the most accurate insights for multiple use cases.
Once the data is collected, it needs to be analyzed in as close to real time as possible to detect threats quickly. To achieve a good level of accuracy, several analytics methods can run in parallel, ranging from simple pattern matching to more complex supervised and/or unsupervised machine learning. Some analytics are open, and clients can understand and modify them, whereas others are not, and behave more like a black box. Some SIEM tools' analytics are both powerful and flexible enough to extend beyond threat detection and into adjacent use cases (such as some fraud use cases), and can offer a bench to evolve existing machine learning models or create new ones. Likewise, some SIEM tools map the analytics to known approaches such as Lockheed Martin's Cyber Kill Chain or the MITRE ATT&CK framework to better understand what is happening in the organization.
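Mapping detections to MITRE ATT&CK, as described above, usually amounts to tagging each alert with a technique ID so alerts can be grouped by adversary behavior. The rule-to-technique table below is an illustrative assumption (T1110 is the real ATT&CK ID for Brute Force); real products ship much larger mappings.

```python
# Minimal sketch of tagging alerts with MITRE ATT&CK technique IDs
# so they can be grouped by adversary behavior. The rule-to-technique
# table is an illustrative assumption; T1110 is ATT&CK's Brute Force.
ATTACK_MAP = {
    "brute_force": ("T1110", "Brute Force"),
    "pass_the_hash": ("T1550.002", "Pass the Hash"),
}

def tag_alert(alert: dict) -> dict:
    technique = ATTACK_MAP.get(alert["rule"])
    if technique:                        # leave unmapped rules untagged
        alert["attack_id"], alert["attack_name"] = technique
    return alert

tagged = tag_alert({"rule": "brute_force", "src_ip": "203.0.113.7"})
```

Once alerts carry technique IDs, coverage dashboards can show which parts of the ATT&CK matrix the detection content actually addresses.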
While the analytics dynamically compute risk scores for users and/or entities based on actions or events taking place, the tool needs to help organizations prioritize their threat landscape by offering an intuitive UI.
Response and Incident Management
An organization’s ability to quickly open cases at any moment (such as directly from the real-time monitoring UI or during a threat hunt) with full context around the case will accelerate case resolution and improve the overall security posture.
For larger organizations dealing with a deluge of alerts, the tool can apply analytics to help triage cases to the right analysts by mapping skill sets to the focus of the case based on each analyst’s load (e.g., endpoint-centric cases assigned to available endpoint experts). For complex cases, the tool can likewise help the collaboration of multiple analysts by providing simple Slack-like features or advanced RBAC, where analysts with different clearance levels can still collaborate on cases.
Incidents or cases should have the ability to change the status (preconfigured and custom), support notes and annotations, and assign to other users. Integrations with enterprise IT help desk and IT service management (ITSM) systems, enabling interaction with business units outside of security (such as IT operations) are commonly required by buyers.
All steps performed during a case need to be logged and kept securely, as an organization should always be ready to go to court and demonstrate court-ready evidence. Attributes such as nonrepudiation, integrity of each event and integrity of the event sequence are important for incident and case management.
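The event-sequence integrity property described above is commonly achieved with a hash chain: each entry's hash covers the previous entry's hash, so tampering with or reordering any entry breaks verification. The sketch below is a minimal illustration; a real implementation would also sign entries to provide nonrepudiation.

```python
# Hedged sketch of event-sequence integrity for a case audit trail:
# each entry's hash covers the previous hash, so tampering or
# reordering breaks the chain. A real system would also sign
# entries for nonrepudiation.
import hashlib
import json

def append_entry(chain: list, entry: dict) -> list:
    prev = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(entry, sort_keys=True) + prev
    chain.append({"entry": entry,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})
    return chain

def verify(chain: list) -> bool:
    prev = "0" * 64
    for link in chain:
        payload = json.dumps(link["entry"], sort_keys=True) + prev
        if hashlib.sha256(payload.encode()).hexdigest() != link["hash"]:
            return False                 # entry altered or out of order
        prev = link["hash"]
    return True

trail = []
append_entry(trail, {"action": "case_opened", "analyst": "alice"})
append_entry(trail, {"action": "note_added", "analyst": "bob"})
```

Editing any earlier entry invalidates every subsequent hash, which is exactly the property a court-ready audit trail needs.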
Interfaces that offer the ability to drill down on an alert and/or integrate with forensics and threat hunting capabilities allow rapid and contextual reactions to monitoring (e.g., open a case or launch an investigation).
Content Packaging and Management
Content for SIEM includes collectors and parsers for data sources, complete use cases, compliance packages, rules and models for analytics, and response and playbook capabilities.
All this content must be organized, packaged and managed easily to minimize operational costs of accessing, modifying and deploying content.
This content is required for the SIEM tool to function properly, and leveraging extensive and well organized content provided natively by the vendor, integrators, consultants or the community offers significant value to all organizations, especially the smaller ones, or the larger ones initiating their SIEM journey.
SIEM vendors tend to build ecosystems of technology alliances with complementary security and nonsecurity vendors to offer a rich and robust set of connectors, parsers and additional content such as analytics and/or response capabilities. In this case, ecosystem density is key for larger environments that implement nonstandard technologies or applications. Leveraging vendor-provided native content is particularly important for the response and playbook features, as these connections are bidirectional and integrations need to be done on both sides.
In addition to raw content, the tool should offer a management framework for accessing, updating and managing this content, and enabling its functionality. Particularly important for first-time SIEM buyers and those with limited resources, predefined functions and ease of deployment and support are valued over advanced functionality and extensive customization. The use of an app-store-type feature to provide a centralized location for locating and installing new content, integrations and other features is beneficial for all organizations and use cases.
Forensics and Threat Hunting
Investigation capabilities encompass the ability to use a SIEM tool to search for particular evidence to investigate an incident, be used as a forensics tool or to support threat hunting.
Search features and functionality are fundamental in a SIEM tool, and need to accommodate a wide range of organizational maturity. While some may prefer to search using a descriptive taxonomy and drop-down menus, others may prefer free-flowing search queries with either regular expression (regex) or boolean, or simple vendor-specific language. Response times can vary widely across SIEM tools, and preference should be given to quick response time for wide searches across large datasets, and for a user-intuitive visualization that will help analysts uncover interesting events or patterns. The ability to pivot from result to result, using simple clicks rather than copying and pasting into another console, is important, as is the ability to open a case at any point in a hunt. Finally, SIEM tools capable of searching across several data stores in large and mature organizations should be privileged.
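As a sketch of the free-flowing regex query style described above, the snippet below runs a hypothetical "who logged in to the VPN concentrator?" search over a few log lines. The log format, hostnames and field layout are invented for illustration:

```python
import re

# Hypothetical log lines, standing in for a SIEM's indexed event store.
logs = [
    "2020-06-13T02:14:07Z vpn1 sshd[311]: Accepted password for alice from 10.0.0.5",
    "2020-06-13T09:41:22Z web1 nginx: GET /index.html 200",
    "2020-06-14T23:05:51Z vpn1 sshd[412]: Accepted password for bob from 10.0.0.9",
]

# A free-form regex query: successful logins on any "vpnN" host.
query = re.compile(r"vpn\d+ sshd.*Accepted password for (\w+)")

hits = [m.group(1) for line in logs if (m := query.search(line))]
print(hits)  # ['alice', 'bob']
```

A real SIEM would run this kind of query against indexed storage rather than a linear scan, which is why response times across large datasets differ so much between tools.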
User Experience and User Interface
Because of their flexibility and sheer breadth, SIEM tools are complex to operate and manage. Thus, it's important for the tool to offer a user experience that is appropriate for both the size and maturity of the client organization, as well as the use case to implement.
Some SIEM tools assume that the client will have a high maturity level, and will offer workflows and operational models that are very efficient but require deeper expertise, while others clearly cater to less mature organizations and offer more guidance to the user, usually at the cost of efficiency.
The user experience encompasses the UI and the presentation layer of these tools, including dashboards, reports, alerts, how configurable these are and how well suited they are for the audience that the tools cater to.
Setup and ongoing management of the tool also vary widely, based on the client environment and the size of the deployment. Some tools aim to be used in SOCs with dozens or hundreds of security analysts, implying requirements to facilitate the description of the SOC team to help triage cases. Other tools typically operate in environments with no dedicated SIEM expertise, and forcing clients to describe their SOC expertise would not make sense.
Tools that offer a user experience that can scale with growing environments should be privileged.
Basic Searching and Reporting
This is the simplest SIEM use case: searching and reporting on the pool of logs.
These searches are often used for basic queries such as “Who logged in this weekend to our VPN concentrator?” and sometimes then run as periodic reports. Often the users are less mature organizations that do not have the skill set, availability or appetite to embark on more complex real-time monitoring use cases. However, these organizations also understand the benefit of using SIEM tools for searching and reporting across the enterprise, and want a path to more sophisticated real-time monitoring use cases.
A tool’s ability to rapidly allow nonexpert users to get an answer to their question, benefit from relevant visualizations, and intuitively pivot to more searches or to the creation of reports will be key to this use case.
Compliance and Control Monitoring
This use case is aimed at demonstrating compliance with specific mandates like PCI DSS, HIPAA or SOX, or best practices or control policies like ISO 27001 or CIS Controls.
These use cases tend to demonstrate that a particular event or series of events has indeed happened, or, on the contrary, never happened. Often in the form of checklists that can be hundreds or thousands of lines long that auditors need to validate, these compliance and control monitoring use cases are hard to manage manually. SIEM tools that offer stronger solutions for compliance and control monitoring use cases will facilitate the whole life cycle for this use case — from providing predefined content that caters to specific compliance and controls, to the way this content is organized in packages that are easy to customize based on unique needs, to how the vendor approaches the updates and new content for these packages, to how easily the reports can be generated and shared with auditors in an automatic way.
Basic Security Monitoring
This use case supports basic broad-based threat detection use cases, as well as capabilities that help new and less mature SIEM buyers and users.
These include ease of deployment and operations, real-time monitoring, and analytics. It typically includes first-time SIEM solution buyers, and buyers focused on less sophisticated use cases. These buyers may be more likely to adopt “single box” solutions and as-a-service and hosted offerings. The focus is on solutions that are easier to implement and manage with packaged content (analytics, reports and responses) that solve discrete threat-monitoring use cases (e.g., ransomware) that do not focus on cloud or IoT/OT security.
Complex Security Monitoring
This use case focuses on SIEM solutions with complex architectures (n-tier, hybrid), environments and user populations, as well as big data-type log and event challenges.
N-tier or hybrid architectures are required to support environments with challenges such as distributed geographies and multiple environments (on-premises, IaaS, SaaS) for data collection; high volumes, velocities and varieties of data collection; and multitenancy requirements. The scope for some use cases could include IoT/OT. Event monitoring, both real time and historic, leverages a variety of analytics in varying degrees of complexity. Best-of-breed security technologies may be employed, which requires integrations for both data collection and incident investigation and response activities.
Advanced Threat Detection and Response
This use case focuses on the early discovery and analysis of advanced and targeted attacks, and the ability to rapidly respond to those attacks.
Advanced threat hunting activities enhanced by the tool are also included in this use case.
Organizations operating in high-risk verticals and environments face an ever-increasing hostile external threat landscape, where adversaries target specific organizations and attempt to compromise them with persistence and an arsenal of tools and tactics in order to achieve their goals. SIEM solution buyers facing these advanced and persistent threat actors look to monitor, detect and respond to attacks across the range of the attack chain in near real time, and through advanced analytics and threat hunting across historic log events and data. These buyers usually seek complementary host and network threat detection and forensics tools, as well as other technologies like SOAR, which are directly integrated, or at least well integrated, with their SIEM solution, to facilitate the rapid investigation and response to detected threats.
In this case, there is a requirement to cross the boundaries between on-premises, cloud and IoT/OT scopes in order to offer a unified view of threats across the full threat landscape.
Vendors Added and Dropped
FireEye and HanSight were added to the SIEM Critical Capabilities research this year, based on their meeting the SIEM Magic Quadrant inclusion criteria.
BlackStratus, Netsurion-EventTracker and Venustech were dropped this year because they did not meet the Magic Quadrant inclusion criteria for revenue or geographic presence.
In this research, we’ve included software products for evaluation. The inclusion criteria are the same as for the SIEM Magic Quadrant:
The product must provide SIM and security event management capabilities to end-user customers via software and/or appliance and/or SaaS.
The SIEM features, functionality and add-on solutions must be generally available as of 31 July 2019.
The product must support data capture and analysis from heterogeneous, third-party sources (that is, other than from the SIEM vendors’ products/SaaS), including from market-leading network technologies, endpoints/servers, cloud (IaaS, SaaS) and business applications.
The vendor must have SIEM (product/SaaS license and maintenance, excluding managed services) revenue exceeding $32 million for the 12 months prior to 30 June 2019, or have 100 production customers as of the end of that same period. Production customers are defined as those who have licensed the SIEM and are monitoring production environments with the SIEM. Gartner requires that vendors provide a written confirmation of achievement of this requirement and others that stipulate revenue or customer thresholds. The confirmation must be from an appropriate finance executive within the organization.
The vendor must receive 15% of SIEM product/SaaS revenue for 12 months prior to 30 June 2019 from outside the geographical region of the vendor’s headquarters location, and must have at least 10 production customers in each of at least two of the following geographies: North America, EMEA, the Asia/Pacific region or Latin America.
The vendor must have sales and marketing operations (via print/email campaigns and/or local language translations for sales/marketing materials) targeting at least two of the following geographies as of 30 June 2019: North America, EMEA, the Asia/Pacific region or Latin America.
Excluded from evaluation are capabilities that are available only through a managed service relationship, that is, SIEM functionality that is available to customers only when they sign up for a vendor’s managed security, managed detection and response, managed SIEM or other managed service offering. By “managed services,” we mean those in which the customer engages the vendor to establish, monitor, escalate and/or respond to alerts/incidents/cases.
Table 1: Weighting for Critical Capabilities in Use Cases
Basic Searching and Reporting
Compliance and Control Monitoring
Basic Security Monitoring
Complex Security Monitoring
Advanced Threat Detection and Response
Operations and Support
Data Management Capabilities
Response and Incident Management
Content Packaging and Management
Forensics and Threat Hunting
User Experience and User Interface
Source: Gartner (February 2020)
Critical Capabilities Rating
Table 2: Product/Service Rating on Critical Capabilities
Dell Technologies (RSA)
Operations and Support
Data Management Capabilities
Response and Incident Management
Content Packaging and Management
Forensics and Threat Hunting
User Experience and User Interface
Source: Gartner (February 2020)
Table 3 shows the product/service scores for each use case. The scores, which are generated by multiplying the use-case weightings by the product/service ratings, summarize how well the critical capabilities are met for each use case.
Table 3: Product Score in Use Cases
Dell Technologies (RSA)
Basic Searching and Reporting
Compliance and Control Monitoring
Basic Security Monitoring
Complex Security Monitoring
Advanced Threat Detection and Response
Source: Gartner (February 2020)
To determine an overall score for each product/service in the use cases, multiply the ratings in Table 2 by the weightings shown in Table 1.
Critical Capabilities Methodology
This methodology requires analysts to identify the critical capabilities for a class of products or services. Each capability is then weighted in terms of its relative importance for specific product or service use cases. Next, products/services are rated in terms of how well they achieve each of the critical capabilities. A score that summarizes how well they meet the critical capabilities for each use case is then calculated for each product/service.
“Critical capabilities” are attributes that differentiate products/services in a class in terms of their quality and performance. Gartner recommends that users consider the set of critical capabilities as some of the most important criteria for acquisition decisions.
In defining the product/service category for evaluation, the analyst first identifies the leading uses for the products/services in this market. What needs are end-users looking to fulfill, when considering products/services in this market? Use cases should match common client deployment scenarios. These distinct client scenarios define the Use Cases.
The analyst then identifies the critical capabilities. These capabilities are generalized groups of features commonly required by this class of products/services. Each capability is assigned a level of importance in fulfilling that particular need; some sets of features are more important than others, depending on the use case being evaluated.
Each vendor’s product or service is evaluated in terms of how well it delivers each capability, on a five-point scale. These ratings are displayed side-by-side for all vendors, allowing easy comparisons between the different sets of features.
Ratings and summary scores range from 1.0 to 5.0:
1 = Poor or Absent: most or all defined requirements for a capability are not achieved
To determine an overall score for each product in the use cases, the product ratings are multiplied by the weightings to come up with the product score in use cases.
The critical capabilities Gartner has selected do not represent all capabilities for any product; therefore, they may not represent those most important for a specific use situation or business objective. Clients should use a critical capabilities analysis as one of several sources of input about a product before making a product/service decision.
Comprehensive Explanation: What is a SIEM (in 2020 and beyond)
SIEM unifies Threat Detection and Hunting.
This is an old topic worth revisiting and level setting with the latest advancements, concepts and lessons from decades of unsuccessful SIEM deployments! It is worth revisiting because a lot of people don’t understand the value of a SIEM, and even fewer understand how to effectively operationalise one and achieve business outcomes using its power.
After reading this you will gain enough insight into the basics of SIEM.
I am continually asked the same questions around SIEM design, so I am glad to finally brain dump this knowledge and share it with the community.
(SIEM in the public cloud is beyond the scope of this article. While all the information here is relevant, I will write another article focusing specifically on threat detection for public cloud environments.)
Security Information and Event Management
A SIEM seeks to provide a holistic approach to an organisation’s IT security. A SIEM represents a combination of services, appliances and software products. It performs real-time collection of log data from devices, applications and hosts. It also processes the collected log data, enabling real-time analysis of security alerts generated by network hardware and applications, advanced correlation of security and operational events, and real-time alerting and scheduled reporting.
SIEM technology is used in many enterprise organizations to provide real time reporting and long term analysis of security events. SIEM products evolved from two previously distinct product categories, namely security information management (SIM) and security event management (SEM).
Table 1 shows this evolution.
Table 1. SIM and SEM Product Features Incorporated into SIEM
Combined SIEM Product: real-time reporting, log collection, normalization, correlation, aggregation
SIEM combines the essential functions of SIM and SEM products to provide a comprehensive view of the enterprise network using the following functions:
Log collection of event records from sources throughout the organization provides important forensic tools and helps to address compliance reporting requirements.
Normalization maps log messages from different systems into a common data model, enabling the organization to connect and analyze related events, even if they are initially logged in different source formats.
Correlation links logs and events from disparate systems or applications, speeding detection of and reaction to security threats.
Aggregation reduces the volume of event data by consolidating duplicate event records.
Reporting presents the correlated aggregated event data in real-time monitoring and long-term summaries.
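The normalization and aggregation functions above can be sketched in a few lines. The source names, field mappings and common schema below are invented for illustration:

```python
from collections import Counter

def normalize(raw):
    """Map a source-specific record into a common data model (fields are illustrative)."""
    if raw["source"] == "windows":
        return {"user": raw["TargetUserName"], "action": "logon", "host": raw["Computer"]}
    if raw["source"] == "linux":
        return {"user": raw["user"], "action": "logon", "host": raw["hostname"]}
    raise ValueError("unknown source")

raw_events = [
    {"source": "windows", "TargetUserName": "alice", "Computer": "dc01"},
    {"source": "linux", "user": "alice", "hostname": "web01"},
    {"source": "windows", "TargetUserName": "alice", "Computer": "dc01"},  # duplicate record
]

# Normalization: different source formats, one schema.
normalized = [normalize(e) for e in raw_events]

# Aggregation: consolidate duplicate normalized events into one record with a count.
aggregated = Counter(tuple(sorted(e.items())) for e in normalized)
for event, count in aggregated.items():
    print(dict(event), "x", count)
```

Once events share a schema, correlation and reporting can treat a Windows logon and a Linux logon as the same kind of event.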
The internal IT environment consists of the services, networking equipment, applications and components that an organisation wants to protect and prevent intrusion into. To protect these assets and data, you can deploy protection in the form of firewalls, antivirus, IPS/IDS and authentication. Protection examples include:
Secure Access Service Edge
Despite all of the systems and effort put into these solutions, those trying to breach that environment will get in. Once they are in, detecting and responding to their attack is time critical.
A SIEM receives or taps into all of this activity, continually receiving thousands of logs per second from the devices and systems within the environment. The SIEM processes log data to make meaning of what is actually happening on a device (detection), and analytics are used to analyse the activity, providing more input into what is actually happening.
SIEM solutions also provide the ability to analyse historic log data and generate reports for compliance purposes, as well as providing digital forensics and fulfilling additional parts of an overall information security strategy.
SIEM solutions centralise log data within IT environments, augmenting security measures and enabling real-time analysis. A SIEM is constantly watching, monitoring and analysing events and alerts within the environment in an effort to detect attacks and intrusions.
Fourth Wave of SIEM
SIEM sometimes gets a bad name because, while it is incredibly powerful, it takes an enormous amount of skill and effort to get working. This is not the fault of the SIEM itself: it requires data from all of your IT environment, and gathering that data is what typically causes massive delays in a successful SIEM deployment. (This can be easily solved. Keep reading.) SIEM has evolved into very mature platforms; ArcSight, for example, has 20+ years of evolution. Read the ArcSight history here.
1. Compliance: PCI-DSS drove the first phase of SIEM deployment, with compliance as the business outcome.
2. Detection: organisations then started to detect bad things in network activity.
3. SOC: this phase was when customers started to build SOCs.
4. Threat hunting: SOCs now develop threat hunting utilising NDR, EDR, SIEM and SOAR.
SIEM processes all types of machine data produced by devices in an IT environment.
Machine data is one of the most underused and undervalued assets of any organization. But some of the most important insights that you can gain—across IT and the business—are hidden in this data: where things went wrong, how to optimize the customer experience, the fingerprints of fraud. All of these insights can be found in the machine data that’s generated by the normal operations of your organization.
Machine data is valuable because it contains a definitive record of all the activity and behavior of your customers, users, transactions, applications, servers, networks and mobile devices. It includes configurations, data from APIs, message queues, change events, the output of diagnostic commands, call detail records and sensor data from industrial systems, and more.
The challenge with leveraging machine data is that it comes in a dizzying array of unpredictable formats, and traditional monitoring and analysis tools weren’t designed for the variety, velocity, volume or variability of this data.
In computing, syslog/ˈsɪslɒɡ/ is a standard for message logging. It allows separation of the software that generates messages, the system that stores them, and the software that reports and analyzes them. Each message is labeled with a facility code, indicating the software type generating the message, and assigned a severity level.
The syslog protocol, defined in RFC 3164, provides a transport that allows a device to send event notification messages across IP networks to event message collectors, also known as syslog servers. The protocol is simply designed to transport these event messages from the generating device to the collector. The collector doesn’t send back an acknowledgment of the receipt of the messages.
Syslog uses the User Datagram Protocol (UDP), port 514, for communication. Being a connectionless protocol, UDP does not provide acknowledgments. Additionally, at the application layer, syslog servers do not send acknowledgments back to the sender for receipt of syslog messages. Consequently, the sending device generates syslog messages without knowing whether the syslog server has received the messages. In fact, the sending devices send messages even if the syslog server does not exist.
The syslog packet size is limited to 1024 bytes and carries the following information:
Computer system designers may use syslog for system management and security auditing as well as general informational, analysis, and debugging messages. A wide variety of devices, such as printers, routers, and message receivers across many platforms use the syslog standard. This permits the consolidation of logging data from different types of systems in a central repository. Implementations of syslog exist for many operating systems.
When operating over a network, syslog uses a client-server architecture where a syslog server listens for and logs messages coming from clients.
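The fire-and-forget behaviour described above can be sketched as a minimal UDP sender. The facility/severity values, hostname and message are illustrative, and the PRI calculation follows the RFC 3164 convention:

```python
import socket

def format_syslog(message, facility=1, severity=6):
    # PRI = facility * 8 + severity (facility 1 = user-level, severity 6 = informational).
    pri = facility * 8 + severity
    packet = f"<{pri}>{message}".encode()
    assert len(packet) <= 1024, "RFC 3164 limits the packet to 1024 bytes"
    return packet

def send_syslog(message, host="127.0.0.1", port=514):
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        # Fire and forget: UDP is connectionless and the collector never replies,
        # so this succeeds even if no syslog server is listening.
        sock.sendto(format_syslog(message), (host, port))

send_syslog("myapp: user alice logged in")
```

This mirrors why syslog delivery is unreliable by design: the sender has no way of knowing whether the collector received, or even exists to receive, the message.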
SIEM is a mandatory requirement for compliance audits such as PCI-DSS, ISO 27001, the Sarbanes–Oxley Act of 2002 (thanks, Enron) and other standards.
The Payment Card Industry (PCI) Security Standards Council was founded by five global payment brands: American Express, Discover Financial Services, JCB International, MasterCard, and Visa. These five payment brands had a common vision of strengthening security policies across the industry to prevent data breaches for businesses that accept and process payment cards. Together they drafted and released the first version of PCI Data Security Standard (PCI DSS 1.0) on December 15, 2004.
PCI DSS is a regulation with twelve requirements that serve as a security baseline to secure payment card data.
Requirement 10: Track and monitor all access to network resources and cardholder data.
Requirement 11.5: Deploy a change detection mechanism (for example, file integrity monitoring tools) to alert personnel to unauthorized modification (including changes, additions, and deletions) of critical system files, configuration files or content files. Configure the software to perform critical file comparisons at least weekly. Implement a process to respond to any alerts generated by the change-detection solution.
Depending on your PCI-DSS merchant level and number of Credit Card transactions you process, you will need to adhere to different levels of PCI-Auditing.
Cyber Threat Intelligence
Threat intelligence, or cyber threat intelligence, is information an organization uses to understand the threats that have targeted, will target or are currently targeting the organization. This information is used to prepare for, prevent and identify cyber threats looking to take advantage of valuable resources.
Cyber threat intelligence consists of many kinds of information, including indicators of compromise (IOCs) and indicators of attack (IOAs).
Indicators of compromise (IOCs) are “pieces of forensic data, such as data found in system log entries or files, that identify potentially malicious activity on a system or network.” Indicators of compromise aid information security and IT professionals in detecting data breaches, malware infections, or other threat activity. By monitoring for indicators of compromise, organizations can detect attacks and act quickly to prevent breaches from occurring or limit damages by stopping attacks in earlier stages.
Indicators of compromise act as breadcrumbs that lead infosec and IT pros to detect malicious activity early in the attack sequence. These unusual activities are the red flags that indicate a potential or in-progress attack that could lead to a data breach or systems compromise.
Indicators of attack are similar to IOCs, but rather than focusing on forensic analysis of a compromise that has already taken place, indicators of attack focus on identifying attacker activity while an attack is in process. Indicators of compromise help answer the question “What happened?” while indicators of attack can help answer questions like “What is happening and why?” A proactive approach to detection uses both IOAs and IOCs to discover security incidents or threats in as close to real time as possible.
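An IOC sweep of the kind described above can be sketched as a simple set lookup against a threat-intelligence feed. All indicator values and event fields below are made up for illustration:

```python
# Hypothetical threat-intelligence feed: known-bad IPs and file hashes.
bad_ips = {"203.0.113.7", "198.51.100.23"}
bad_hashes = {"44d88612fea8a8f36de82e1278abb02f"}  # example hash only

events = [
    {"src_ip": "10.0.0.5", "file_md5": None},
    {"src_ip": "203.0.113.7", "file_md5": None},  # IOC hit: bad source IP
    {"src_ip": "10.0.0.9", "file_md5": "44d88612fea8a8f36de82e1278abb02f"},  # IOC hit: bad hash
]

def ioc_hits(event):
    """Return every indicator this event matched, as (type, value) pairs."""
    hits = []
    if event["src_ip"] in bad_ips:
        hits.append(("ip", event["src_ip"]))
    if event["file_md5"] in bad_hashes:
        hits.append(("hash", event["file_md5"]))
    return hits

alerts = [(i, ioc_hits(e)) for i, e in enumerate(events) if ioc_hits(e)]
print(alerts)
```

In a real SIEM the feed would contain millions of indicators and be refreshed continuously, but the matching principle is the same: enrich every incoming event against the current indicator set.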
APTs and Tactics, Techniques and Procedures (TTPs)
SIEM can utilise cyber threat intelligence (IOCs, IOAs, TTPs) and correlate it with IT environment log data to detect threats in real time and in historic log data.
Correlation Rules, Behaviour Patterns, Pattern Matching, Anomaly Detection, Conditions, Thresholds, Network Modelling and Machine Learning (Phew, give me a pay rise.)
Correlation is one of the key components of any effective SIEM tool. As information from across your digital environment feeds into a SIEM, it uses correlation to identify any possible issues. It does so by comparing sequences of activity against preset rules, conditions and thresholds. SIEMs allow sophisticated ways to implement risk based rules.
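As a sketch of the rule-based correlation described above, here is a hypothetical brute-force rule: alert when a user has several failed logins followed by a success within a time window. The thresholds, window and events are all illustrative:

```python
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)  # correlation window
THRESHOLD = 3                  # failed logins before a success is suspicious

events = [
    ("2020-06-13 10:00:01", "bob", "fail"),
    ("2020-06-13 10:00:09", "bob", "fail"),
    ("2020-06-13 10:00:15", "bob", "fail"),
    ("2020-06-13 10:00:30", "bob", "success"),
    ("2020-06-13 11:00:00", "alice", "success"),
]

def correlate(events):
    alerts, failures = [], {}
    for ts, user, outcome in events:
        t = datetime.fromisoformat(ts)
        # Keep only this user's failures inside the sliding window.
        recent = [f for f in failures.get(user, []) if t - f <= WINDOW]
        if outcome == "fail":
            failures[user] = recent + [t]
        elif outcome == "success" and len(recent) >= THRESHOLD:
            alerts.append((user, t))  # brute force followed by a successful login
    return alerts

print(correlate(events))  # one alert, for bob
```

Production rules layer many such conditions and thresholds, and risk-based rules would weight the alert by asset criticality and user privilege rather than firing unconditionally.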
The latest SIEMs can now implement anomaly detection via machine learning.
All integrated with Threat Intelligence information.
The brains inside a SIEM are based on correlation rules, pattern matching, conditions, thresholds and, increasingly, machine learning via unsupervised and supervised models.
Supervised Machine Learning
Unsupervised Machine Learning
Network Modelling and Risk Scoring
“Use case” is the term used for threat detection expressed in business context; it combines value and context in the SIEM platform.
You can catch just about everything with ArcSight Default Content and SIGMA Rules! The rest you need to pay someone like me to workshop and write.
Machine Data Sources
Amazon Web Services
Security & Compliance, IT Operations
Data from AWS can support service monitoring, alarms and dashboards for metrics, and can also track security-relevant activities, such as login and logout events.
APM Tool Logs
Security & Compliance, IT Operations
APM tool logs can provide end-to-end measurement of complex, multi-tier applications, and be used to perform post-hoc forensic analytics on security incidents that span multiple systems.
Security & Compliance, IT Operations, Application Delivery
Authentication data can help identify users that are struggling to log in to applications and provide insight into potentially anomalous behaviors, such as activities from different locations within a specified time period.
Security & Compliance, IT Operations
Firewall data can provide visibility into blocked traffic in case an application is having communication problems. It can also be used to help identify traffic to malicious and unknown domains.
Industrial Control Systems (ICS)
Security & Compliance, Internet of Things, Business Analytics
ICS data provides visibility into the uptime and availability of critical assets, and can play a major role in identifying when these systems have fallen victim to malicious activity.
Security & Compliance, Internet of Things, Business Analytics
Medical device data can support patient monitoring and provide insights to optimize patient care. It can also help identify compromised protected health information.
Security & Compliance, IT Operations
Network protocol data can provide visibility into the network’s role in overall availability and performance of critical services. It’s also an important source for identifying advanced persistent threats.
Security & Compliance, IT Operations, Internet of Things
Sensor data can provide visibility into system performance and support compliance reporting of devices. It can also be used to proactively identify systems that require maintenance.
Security & Compliance, IT Operations
System logs are key to troubleshooting system problems and can be used to alert security teams to network attacks, a security breach or compromised software.
Security & Compliance, IT Operations, Business Analytics
Web logs are critical in debugging web application and server problems, and can also be used to detect attacks, such as SQL injections.
SIEM Data formats
Typical formats supported by SIEM platforms to ingest log data:
In the realm of security event management, the myriad of event formats streaming from disparate devices makes for complex integration. The Common Event Format by ArcSight promotes interoperability between various event- or log-generating devices.
Although each vendor has its own format for reporting event information, these event formats often lack the key information necessary to integrate the events from their devices.
The ArcSight standard attempts to improve the interoperability of infrastructure devices by aligning the logging output from various technology vendors.
Common Event Format (CEF) is a Logging and Auditing file format from ArcSight and is an extensible, text-based format designed to support multiple device types by offering the most relevant information.
Message syntaxes are reduced to work with ArcSight normalization. Specifically, the Common Event Format defines a syntax for log records comprising a standard header and a variable extension formatted as key-value pairs. The format can be readily adopted by vendors of both security and non-security devices.
This format contains the most relevant event information, making it easy for event consumers to parse and use them. To simplify integration, the syslog message format is used as a transport mechanism.
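The header-plus-extension structure described above can be sketched as follows. The vendor, product and event values are invented, and a production encoder would also escape special characters (pipes, backslashes, equals signs) as the CEF specification requires:

```python
def to_cef(vendor, product, version, event_class_id, name, severity, **ext):
    # CEF header: seven pipe-separated fields after the "CEF:0" version prefix.
    header = f"CEF:0|{vendor}|{product}|{version}|{event_class_id}|{name}|{severity}"
    # CEF extension: space-separated key=value pairs.
    extension = " ".join(f"{k}={v}" for k, v in ext.items())
    return f"{header}|{extension}"

msg = to_cef("MyVendor", "MyIDS", "1.0", "100", "Port scan detected", 5,
             src="10.0.0.5", dst="10.0.0.9", spt=51514)
print(msg)
# CEF:0|MyVendor|MyIDS|1.0|100|Port scan detected|5|src=10.0.0.5 dst=10.0.0.9 spt=51514
```

In practice this string would then be carried inside a syslog message, as the paragraph above notes, giving consumers both a standard transport and a standard payload to parse.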
Time normalisation ensures that timestamps all reflect the same time zone, so that events from different time zones can be correlated.
Time is an important piece of threat detection. Some time zones around the world don’t observe Daylight Saving Time (DST), and some are offset by half an hour from others. In addition to time zone issues, some devices don’t include a time in the log message at all. A SIEM needs to timestamp each log against a single time zone.
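A sketch of normalizing device timestamps to a single zone (UTC), including one of the half-hour-offset zones mentioned above; the offsets are hard-coded for illustration, whereas a real SIEM would resolve them per log source:

```python
from datetime import datetime, timezone, timedelta

def to_utc(local_ts, utc_offset_minutes):
    """Attach the source device's UTC offset, then convert to UTC."""
    tz = timezone(timedelta(minutes=utc_offset_minutes))
    return datetime.fromisoformat(local_ts).replace(tzinfo=tz).astimezone(timezone.utc)

# The same instant, logged by devices in Sydney (UTC+10) and Adelaide (UTC+9:30):
a = to_utc("2020-06-13 20:00:00", 10 * 60)
b = to_utc("2020-06-13 19:30:00", 9 * 60 + 30)
print(a == b)  # True: both normalize to 2020-06-13 10:00:00+00:00
```

Without this step, correlation rules with time windows (such as the brute-force example earlier) would silently miss sequences that span devices in different zones.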
Data Enrichment (Meta data extracting, tagging and enrichment)
The SIEM parses each log message, breaks it down into core components and adds context, e.g. a customer tag.
Log data is not uniform. Logs follow a standard transport protocol, but the information within isn’t standardised across log source providers, so a SIEM has to process each log into a unified threat detection taxonomy and universal schema in order to run mathematical rules over it.
Log information needs to be assigned to a common schema, so that a [User Log on] message from various systems — Unix, Windows, Active Directory, AWS, etc. — is always tagged as User Log on to assist threat detection search rules.
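A minimal sketch of that taxonomy mapping, assuming a hypothetical lookup table (the native event IDs shown are illustrative examples of vendor-specific identifiers):

```python
# Hypothetical mapping table: (source, vendor-specific event ID) on the
# left, a unified threat-detection taxonomy tag on the right.
TAXONOMY = {
    ("windows", "4624"):       "user_logon",
    ("unix", "sshd.accepted"): "user_logon",
    ("aws", "ConsoleLogin"):   "user_logon",
    ("windows", "4625"):       "user_logon_failed",
}

def normalise_event(source: str, native_id: str, fields: dict) -> dict:
    """Attach the common-schema tag so one search rule matches logons
    from Unix, Windows, AWS, etc. alike."""
    tag = TAXONOMY.get((source, native_id), "unclassified")
    return {"taxonomy": tag, "source": source, "native_id": native_id, **fields}

event = normalise_event("aws", "ConsoleLogin", {"user": "alice"})
```

A single detection rule can now search `taxonomy == "user_logon"` instead of enumerating every vendor's native identifier.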
Threat and Risk Contextualisation
Evaluate each log and assign a risk-based priority value, e.g. information from edge services / DMZ, or authentication sources such as Active Directory, DNS, etc.
Events are collections of logs created after processing with Threat Intelligence and/or correlation rules. An Event is an actionable log item sent to human Analysts for further triage, investigation and reporting.
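A hedged sketch of how such risk-based prioritisation might be weighted — the zones, categories and numeric weights below are all invented for illustration:

```python
# Invented weighting: events touching DMZ/edge assets or authentication
# services score higher than ordinary internal traffic.
ZONE_WEIGHT = {"dmz": 3, "edge": 3, "auth": 2, "internal": 1}
TAXONOMY_WEIGHT = {"user_logon_failed": 2, "user_logon": 1}

def risk_priority(zone: str, taxonomy: str) -> int:
    """Combine asset zone and event taxonomy into a priority value."""
    return ZONE_WEIGHT.get(zone, 1) * TAXONOMY_WEIGHT.get(taxonomy, 1)

# A failed logon against a DMZ asset outranks a routine internal logon.
high = risk_priority("dmz", "user_logon_failed")
low = risk_priority("internal", "user_logon")
```

The priority value feeds the triage queue, so analysts see the DMZ failures before the internal noise.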
Sizing SIEM solutions
Sizing a SIEM solution begins with the basic list of devices that you want to monitor. See the example device list below;
Windows Server (Active Directory)
Windows Server (DNS)
Fortinet Firewall (IDS/IPS/VPN)
Citrix Access Gateway
SIEM Sizing (Events Per Second)
Critical to the sizing and design of a SIEM platform is determining the Events Per Second produced by the quantity of devices.
You need to determine and estimate the following SIEM fundamentals;
Events Per Second
Events Per Day
Online Retention Period and required Storage in GB
Archive Retention Period and required Storage in GB
Network Bandwidth peak requirements (GB per second across all devices)
EPS average (Day, Week, Month, etc.)
Estimated Device Growth over 3 years
EPS Headroom (Allow 10-30%)
Recovery Point Objective
Recovery Time Objective
Event / Alert Size (512 bytes per event is a rough estimate.)
SIEM Sizing Rosetta Stone
GB (1 GB = 1,000,000,000 BYTES)
EPS (1 EVENT = 600 BYTES)
Storage and Archival are critical for any Security Logging platform
Raw Event Size
Normalised Event Size
Online Retention Period
Events Per Day
GB Storage per day/Retention time.
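Putting the Rosetta Stone figures together, a back-of-the-envelope sizing calculation might look like the sketch below. The 600 bytes/event, 1 GB = 10⁹ bytes and 30% headroom figures come from the lists above; the 1,000 EPS input and 90-day retention are purely illustrative:

```python
def siem_storage_gb(eps_average: float,
                    retention_days: int,
                    event_bytes: int = 600,     # Rosetta Stone: 1 event = 600 bytes
                    headroom: float = 0.30,     # 10-30% EPS headroom, worst case
                    gb: int = 1_000_000_000):   # Rosetta Stone: 1 GB = 1e9 bytes
    """Estimate daily and total retained storage from average EPS."""
    eps_peak = eps_average * (1 + headroom)
    events_per_day = eps_peak * 86_400          # seconds in a day
    gb_per_day = events_per_day * event_bytes / gb
    return gb_per_day, gb_per_day * retention_days

# e.g. 1,000 EPS average with 30% headroom, retained online for 90 days
per_day, total = siem_storage_gb(1000, 90)
```

With these assumptions, 1,000 average EPS works out to roughly 67 GB per day and about 6 TB over a 90-day online retention window — before compression, replication or normalised-copy overhead, which vary by platform.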
It is vital to understand the way your SIEM platform receives and processes data. What is the schema format — schema on read or schema on write? Is it using distributed search or in-memory real-time analytics? The last thing you want to do is HOARD data: collecting without understanding what you are collecting, being too scared to get rid of any of it, and never extracting value from it. Don’t turn into that guy, because the Finance department will start knocking on your door, and the day will come when you have to justify the spend and prove business results. If you ever get breached and can’t even extract useful information from the tons of data you stored, you might need to find another job.
An overwhelming number of log sources without proper sanitisation and normalisation can lead to a massive amount of useless information in the SIEM, leading to alert fatigue.
False-Positive and False-Negatives
A false positive state is when the SIEM identifies an activity as an attack but the activity is acceptable behavior. A false positive is a false alarm.
A false negative state is the most serious and dangerous state. This is when the SIEM identifies an activity as acceptable when the activity is actually an attack. That is, a false negative is when the SIEM fails to catch an attack. This is the most dangerous state since the security professional has no idea that an attack took place.
False positives, on the other hand, range from a minor inconvenience to a significant drain on analyst time. However, with the right amount of overhead, false positives can be successfully adjudicated; false negatives cannot.
Airport Security: a “false positive” is when ordinary items such as keys or coins get mistaken for weapons (machine goes “beep”)
Medical screening: low-cost tests given to a large group can produce many false positives (saying you have a disease when you don’t), after which you are asked to take more accurate tests.
Antivirus software: a “false positive” is when a normal file is thought to be a virus
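These rates can be quantified from a simple confusion matrix. A small sketch with made-up counts:

```python
def alert_quality(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Rates from a detection confusion matrix:
    tp = real attacks alerted, fp = false alarms,
    tn = benign activity correctly ignored, fn = missed attacks."""
    return {
        # share of benign activity that triggered a false alarm
        "false_positive_rate": fp / (fp + tn),
        # share of real attacks the SIEM failed to catch (the dangerous one)
        "false_negative_rate": fn / (fn + tp),
        # share of fired alerts that were real attacks
        "precision": tp / (tp + fp),
    }

# Invented numbers: 40 true alerts, 60 false alarms,
# 9,900 quiet benign events, 10 missed attacks.
q = alert_quality(tp=40, fp=60, tn=9900, fn=10)
```

Note the asymmetry described above: a low false-positive rate still leaves analysts adjudicating a queue of false alarms, while any non-zero false-negative rate means attacks nobody ever saw.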
Popular SYSLOG Servers
Log Sources Categories
SIEM – Real-Time vs Search
As data volumes grow, it becomes increasingly difficult to gain critical insights from them, for SIEMs and other data analytics platforms alike. A SIEM needs to detect threats in real time and search years of log source archives at the same time. So you are trying to solve two critical problems at once;
Security Event Management
Real-Time Streaming Data Analytics
Security Information Management
Searching Large Data sets at scale and speed
These two requirements are incredibly difficult to solve at scale. So, lo and behold, open source to the rescue: Apache Kafka and Apache Hadoop provide solutions for both of these requirements.
A streaming platform has three key capabilities:
Publish and subscribe to streams of records, similar to a message queue or enterprise messaging system.
Store streams of records in a fault-tolerant durable way.
Process streams of records as they occur.
Kafka is generally used for two broad classes of applications:
Building real-time streaming data pipelines that reliably get data between systems or applications
Building real-time streaming applications that transform or react to the streams of data
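Kafka itself needs a running broker, so as an illustration only, here is a stdlib-only Python sketch of the publish → store → process-as-it-occurs pattern those capabilities describe. The in-memory queue stands in for a Kafka topic; none of this is Kafka's actual API:

```python
from queue import Queue

def publish(topic: Queue, records):
    """Producer side: publish each record to the stream."""
    for record in records:
        topic.put(record)
    topic.put(None)  # sentinel marking end of stream for this sketch

def consume(topic: Queue):
    """Consumer side: react to each record as it arrives,
    rather than batching and querying later."""
    alerts = []
    while (record := topic.get()) is not None:
        if record.get("taxonomy") == "user_logon_failed":
            alerts.append(record)
    return alerts

stream = Queue()
publish(stream, [{"taxonomy": "user_logon"},
                 {"taxonomy": "user_logon_failed"}])
alerts = consume(stream)
```

Kafka adds what this toy lacks: durable fault-tolerant storage of the stream, partitioning for scale, and many independent consumer groups reading the same records.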
Apache Hadoop (aka Data Lake)
The Apache™ Hadoop® project develops open-source software for reliable, scalable, distributed computing.
Security Orchestration, Automation and Response (SOAR)
This subject is beyond the scope of this article. I will dive into this in the near future.
Leading SIEM Vendor Solutions
ArcSight Data Platform
ArcSight practically invented the SIEM industry, with a 20+ year product portfolio; it created the CEF format for cyber security, and now supports Apache Kafka and Apache Hadoop, integrating unsupervised machine learning via Vertica, IDOL and Interset.
Splunk
While gaining popularity for general-purpose IT monitoring, Splunk also has solid capability in Security and Big Data Analytics. Splunk Enterprise is the base solution, extended by Splunk Enterprise Security, Splunk UBA, Splunk Cloud, Splunk Phantom and the Splunk Machine Learning Toolkit. Splunk uses the Common Information Model.
IBM QRadar
Another original SIEM vendor. I don’t have any hands-on experience with QRadar.
ELK Security Onion / HELK
The fastest-growing open source search stack. ELK is open source, and Elastic is a very powerful platform that recently acquired Endgame. The ELK stack consists of Elasticsearch, Kibana, Logstash and Beats, with ECS (Elastic Common Schema) as its common schema.
Popular due to McAfee Enterprise license agreements.
100% Windows Server based, with no Linux edition. Very complex to deploy, and requires high resources and ongoing application administration. Does include SYSMON, FIM, NETMON, UEBA and SOAR as part of the solution.
FireEye / Mandiant
Premium products for banking and defence-grade technology, combined with 24/7 DFIR SOC services. So this is a product solution paired with arguably the best DFIR team (Mandiant). Very expensive. The HX, NX and MX product lines cover Endpoint, Network and Cloud SIEM respectively.
Thank you for reading this article, and please support my writing. In the next article, I will look at log collection and SIEM design patterns in the cloud.
If you would like to sponsor my next article or this blog, please get in touch.