Microsoft Azure Sentinel

Microsoft Azure Sentinel is fast becoming a very powerful SIEM and, IMO, it's going to take the lead for the following reasons:

For all of the above reasons, I am going to learn Azure Sentinel in more depth and hopefully build a cyber range using my MSDN subscription.


Presales Toolkit

I've developed various tools for presales of complex software solutions, and I'm going to table them here:

  • Sales and SE working together EQ+IQ=$$$$
  • RFP Evaluation Method
  • RFP Response template
  • Public service procurement panels (I did this in every company so far. Seriously! haha)
  • POC Best Practice
  • Sales and Influencing Engagement Method
  • Win Plan
  • Reference Architectures, HLD, DD, Configs, Assessments, ROI, Maturity Strategy.
  • Channel Strategy

Market Guide for Network Detection and Response

Published 11 June 2020 – ID G00718877 – 23 min read

Network detection and response (formerly known as network traffic analysis) vendors are adding more automated and manual response features to their solutions. Here, we provide an overview of the market and highlight some of the key vendors to be considered by security and risk management leaders.


Key Findings

  • Applying machine learning and other analytical techniques to network traffic is helping enterprises detect suspicious traffic that other security tools are missing.
  • Network detection and response (NDR) remains a crowded market with a low barrier to entry, as many vendors can apply common analytical techniques to traffic monitored from a SPAN port. Customer references from a broad set of vendors are generally satisfied with their tools.
  • Response capabilities fall into two categories: manual and automatic. Vendors have been actively enhancing their manual (threat hunting and incident response) features, and have been adding partners to broaden their automatic response functionality.


Recommendations

To improve infrastructure security and the detection of suspicious network traffic, security and risk management leaders should:

  • Implement behavioral-based NDR tools to complement signature-based detection solutions.
  • Include NDR-as-a-feature solutions in their evaluations, if they are available from their current security information and event management (SIEM), firewall or other security vendors.
  • Decide early in the evaluation process whether they want automated or manual response capabilities. A clearly defined response strategy is valuable in selecting a shortlist of NDR vendors.

Market Definition

NDR solutions primarily use non-signature-based techniques (for example, machine learning or other analytical techniques) to detect suspicious traffic on enterprise networks. NDR tools continuously analyze raw traffic and/or flow records (for example, NetFlow) to build models that reflect normal network behavior. When the NDR tools detect suspicious traffic patterns, they raise alerts. In addition to monitoring north/south traffic that crosses the enterprise perimeter, NDR solutions can also monitor east/west communications by analyzing traffic from strategically placed network sensors.

Response is also an important function of NDR solutions. Automatic responses (for example, sending commands to a firewall so that it drops suspicious traffic) or manual responses (for example, providing threat hunting and incident response tools) are common elements of NDR tools. In 2019, Gartner named this market “network traffic analysis.” This year, we renamed it “network detection and response,” because this term more accurately reflects the functionality of these solutions.
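To make the definition concrete, the detection loop described above (baseline normal behavior from flow records, then alert on deviations) can be sketched in a few lines of Python. This is a deliberately simplified illustration, not any vendor's actual algorithm; the hosts, byte counts and z-score threshold are invented for the example.

```python
# Illustrative sketch: baseline the bytes each host sends per interval from
# NetFlow-style records, then flag intervals that deviate strongly from that
# host's historical behavior.
from collections import defaultdict
from statistics import mean, stdev

def build_baseline(flows):
    """flows: iterable of (host, bytes_sent) samples, one per interval."""
    history = defaultdict(list)
    for host, nbytes in flows:
        history[host].append(nbytes)
    return {h: (mean(v), stdev(v)) for h, v in history.items() if len(v) > 1}

def suspicious(baseline, host, nbytes, threshold=3.0):
    """Flag traffic more than `threshold` standard deviations above normal."""
    if host not in baseline:
        return True  # never-seen host: worth a look
    mu, sigma = baseline[host]
    if sigma == 0:
        return nbytes != mu
    return (nbytes - mu) / sigma > threshold

baseline = build_baseline([
    ("10.0.0.5", 1200), ("10.0.0.5", 1100), ("10.0.0.5", 1300),
    ("10.0.0.9", 400), ("10.0.0.9", 500),
])
print(suspicious(baseline, "10.0.0.5", 1250))   # normal volume -> False
print(suspicious(baseline, "10.0.0.5", 90000))  # exfiltration-like spike -> True
```

Real NDR products model many more dimensions (ports, protocols, peer sets, time of day), but the baseline-and-deviate structure is the same.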

Market Description

Dozens of vendors claim to analyze network traffic (or flow records) and to detect suspicious activity on the network. We have applied the following criteria to identify the most relevant vendors.

Inclusion Criteria

Vendors must:

  • Analyze raw network packet traffic or traffic flows (for example, NetFlow records) in real time or near real time.
  • Monitor and analyze north/south traffic (as it crosses the perimeter), as well as east/west traffic (as it moves laterally throughout the network).
  • Be able to model normal network traffic and highlight suspicious traffic that falls outside the normal range.
  • Offer behavioral techniques (non-signature-based detection), such as machine learning or advanced analytics that detect network anomalies.
  • Provide automatic or manual response capabilities to react to the detection of suspicious network traffic.

Exclusion Criteria

We exclude solutions that:

  • Require a prerequisite component — for example, those that require a SIEM or firewall platform.
  • Emphasize network forensics over detection functionality, primarily through the storage and analysis of full PCAP data.
  • Work primarily on log analysis.
  • Are based primarily on analytics of user session activity — for example, user and entity behavior analytics (UEBA) technology.
  • Focus primarily on analyzing traffic in Internet of Things (IoT) or operational technology (OT) environments, because specialized solutions are optimized to address this use case.

Market Direction

Vendors are focused on enhancing their detection and response capabilities. For detection, we expect vendors to continue enhancing their ability to detect suspicious patterns in encrypted traffic. Some vendors will add the ability to terminate, decrypt and analyze TLS traffic natively in their sensors. However, most vendors, particularly the ones with out-of-band sensors, will enhance their ability to detect suspicious traffic without decrypting the TLS traffic and inspecting the payload. Some vendors detect suspicious SSL/TLS server certificates for this purpose. Also, some vendors use techniques such as analyzing the length of individual packets, the timing between packets, the duration of connections and other methods to detect suspicious TLS traffic. We expect that more vendors will enhance their solutions with similar functionality.

Vendors will also be enhancing their response capabilities. For automated responses, they will broaden partnerships with firewall vendors (send commands to firewalls to drop suspicious traffic), network access control vendors (send commands to the network access control [NAC] solution to isolate an endpoint), security orchestration, automation and response (SOAR) vendors (respond to events with playbooks), endpoint detection and response (EDR) vendors (to contain compromised endpoints) and other security vendors. For manual response, vendors will improve their threat hunting and incident response functions by improving workflow features (for example, helping incident responders prioritize which security events they need to respond to first).
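The metadata-only techniques mentioned above (packet lengths, inter-packet timing, connection duration) can be illustrated with a small sketch. The feature set below is invented for the example; commercial products use far richer models.

```python
# Hedged sketch of the metadata-only approach: derive simple features
# (packet sizes, inter-arrival times, duration) from an encrypted flow
# without touching the payload.
def flow_features(packets):
    """packets: list of (timestamp_seconds, payload_length) tuples."""
    times = [t for t, _ in packets]
    sizes = [s for _, s in packets]
    gaps = [b - a for a, b in zip(times, times[1:])]
    return {
        "duration": times[-1] - times[0],
        "mean_size": sum(sizes) / len(sizes),
        "mean_gap": sum(gaps) / len(gaps) if gaps else 0.0,
        # Beaconing malware often sends fixed-size packets on a fixed cadence.
        "uniform_sizes": len(set(sizes)) == 1,
    }

# A long-lived flow of identical small packets at regular intervals is a
# classic beaconing pattern, detectable even though the payload stays opaque.
beacon = [(i * 30.0, 128) for i in range(10)]  # one 128-byte packet every 30 s
f = flow_features(beacon)
print(f["uniform_sizes"], f["mean_gap"])  # True 30.0
```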

Market Analysis

Here, we analyze the segments of the NDR market:

  • Pure-play NDR companies. The vendors in this category are mostly smaller specialty companies whose only product is an NDR solution.
  • Network-centric companies. Several companies that have historically targeted network use cases, such as network performance monitoring and diagnostics (NPMD; see “Market Guide for Network Performance Monitoring and Diagnostics”), have developed solutions to address security use cases. These network-centric solutions were already monitoring network traffic, and these vendors have applied analytical techniques, such as machine learning, to detect anomalous traffic.
  • Others. A few vendors do not fit cleanly in the two categories defined above. For example, large, diversified network security providers, such as Cisco and Hillstone Networks, also offer NDR solutions. Cisco has Stealthwatch, and Hillstone has the Server Breach Detection System.

Representative Vendors

Market Introduction

Table 1 highlights the NDR vendors that meet our inclusion criteria and were not eliminated by our exclusion criteria.

Table 1: Representative Vendors in Network Detection and Response

Vendor | Product, Service or Solution Name
Awake Security | Awake Security Platform
Blue Hexagon | Blue Hexagon
Corelight | Corelight Sensors
Darktrace | Enterprise Immune System
Fidelis Cybersecurity | Fidelis Elevate
Flowmon | Flowmon Anomaly Detection System (ADS)
Hillstone Networks | Server Breach Detection System (sBDS)
Lastline | Lastline Defender
Vectra | Cognito Detect

Source: Gartner (June 2020)

Please refer to Note 2 for a list of other vendors that we are tracking. The vendors listed in this Market Guide are not an exhaustive list. This section is intended to provide more understanding of the market and its offerings.

Vendor Profiles

Awake Security

Based in Santa Clara, California, Awake Security uses supervised machine learning, unsupervised machine learning and some deep learning techniques to detect suspicious traffic. Awake does not decrypt TLS traffic. It also does not use JA3 signatures, but Awake has developed its own application/TLS fingerprinting algorithms. It also uses encrypted traffic analysis techniques. For example, it can identify attempts to tunnel malicious traffic over DNS and other protocols.

Awake’s solution includes manual and automatic response capabilities. Its Ava tool performs automated threat hunting, incident triage and response. Awake partners with multiple firewall vendors, orchestration tools and other solutions to enforce automated responses. Awake sells the solution as an annual subscription, based on aggregate throughput. Virtual appliances are available at no charge, and physical devices are available for a fee. Customers can deploy Awake in two modes. With the first option, no customer-sensitive data ever leaves the customer’s environment. With the second option, customers deploy the central analytics and management in an Awake-hosted cloud. In this scenario, each customer’s data is isolated and can only be accessed by the customer that owns the data. Awake also offers a managed network detection and response service built on the same technology platform.

Blue Hexagon

Blue Hexagon is based in Sunnyvale, California. It launched its network and IaaS (Amazon Web Services [AWS] and Microsoft Azure) detection solution in 2019, with a cloud management console. The vendor serves the U.S. market and plans to expand internationally in 2020. Blue Hexagon’s detection engine inspects network traffic and files, and is based on deep learning to detect threats. The solution cannot decrypt TLS. It relies on TLS handshake and tunnel characteristics to detect anomalies in encrypted traffic, using its deep learning models. The vendor uses threat intelligence feeds, but also uses deep learning to classify sources as malicious.

Blue Hexagon can be deployed in-line and out-of-band. When deployed out-of-band, it integrates with endpoint security and firewall solutions, as well as SIEM, SOAR and AWS/Azure, to provide automated response. When deployed in-line (“bump in the wire” or through ICAP), it can directly block traffic. Licensing for Blue Hexagon follows a traditional network security approach, with hardware purchase (the virtual appliance is free of charge) and licensing based on required bandwidth, which includes vendor support. IaaS pricing can be bandwidth-based or per hour.


Bricata

Headquartered in Columbia, Maryland, Bricata is a network security vendor primarily targeting the U.S. and European markets. The vendor’s solution leverages the Suricata IDPS module for signature-based controls and the Zeek (formerly Bro) engine for protocol and behavioral analysis, while capturing full-packet traffic data for retrospective analysis. Bricata is a highly customizable solution, where users can tune detections and create specialized detections. Bricata also includes the Cylance Infinity engine for file analysis. The network sensors and centralized management are available in physical and virtual appliances. They can also be deployed on the main IaaS platforms. The sensors do not decrypt TLS traffic, and rely on JA3 fingerprinting to provide encrypted session analysis. The vendor recently released the ability to tag alerts based on the MITRE ATT&CK framework, to aggregate similar events in the dashboard, and to run files in the Cuckoo Sandbox.

The vendor’s response capabilities rely on SIEM and SOAR integration, and API documentation is available to create custom response scenarios with firewall, NAC and other products. Bricata’s software pricing is based on the aggregated bandwidth of inspected traffic. Customers can also purchase hardware appliances through Bricata’s channel partners.
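Bricata, like several other vendors profiled here, relies on JA3 fingerprinting for encrypted session analysis. A simplified sketch of how such a fingerprint is computed: the decimal values of selected TLS ClientHello fields are joined (dash-separated within a field, comma-separated between fields) and MD5-hashed. Real implementations also strip GREASE values, which this sketch omits, and the field values in the example call are invented.

```python
# Simplified illustration of a JA3-style TLS client fingerprint.
import hashlib

def ja3_fingerprint(version, ciphers, extensions, curves, point_formats):
    """Hash the decimal ClientHello field values into a 32-char hex digest."""
    fields = [
        str(version),
        "-".join(map(str, ciphers)),
        "-".join(map(str, extensions)),
        "-".join(map(str, curves)),
        "-".join(map(str, point_formats)),
    ]
    ja3_string = ",".join(fields)  # e.g. "771,4865-4866,0-10,29,0"
    return hashlib.md5(ja3_string.encode()).hexdigest()

# Example ClientHello field values (invented for illustration).
print(ja3_fingerprint(771, [4865, 4866, 4867], [0, 10, 11], [29, 23], [0]))
```

Because the same client software tends to emit the same ClientHello, the resulting hash is stable across connections, which is what makes a rare or known-bad fingerprint a useful pivot point for analysts.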


Cisco

Cisco, based in San Jose, California, offers two deployment options for its Stealthwatch solution. Stealthwatch Enterprise collects, stores and analyzes information in the customer’s environment. Stealthwatch Cloud is a SaaS offering. It can monitor a customer’s private network or a public cloud environment (through integrations with AWS, Azure or Google Cloud Platform). Stealthwatch detects suspicious traffic primarily by analyzing NetFlow, IPFIX or sFlow records. Stealthwatch uses multiple analytical techniques to detect suspicious traffic, including supervised machine learning, unsupervised machine learning and some deep learning algorithms. The solution does not decrypt TLS traffic. Stealthwatch uses Cisco’s Encrypted Traffic Analysis (ETA) functionality to analyze TLS traffic without decrypting it.

Stealthwatch provides historical information to enable a security analyst to manually respond to incidents. It also enables automated responses through integration with Cisco’s Identity Services Engine (ISE). Stealthwatch alarms and events can be shared with Cisco’s SecureX platform, where responses can be automated via SecureX playbooks. Stealthwatch is sold as a subscription based on the necessary flows per second, network device count or total monthly flows.


Corelight

Corelight is headquartered in San Francisco, California, serving customers primarily in North America and Europe. The vendor’s founders created the Zeek (formerly Bro) network monitoring framework, and the solution’s sensors are available as physical and virtual appliances, on AWS and, more recently, on Azure. Corelight uses Zeek as its main engine, as a foundation for its own detections, and for integrating third-party threat intelligence feeds. Corelight mainly relies on its own analysis of the traffic metadata, and can also extract files and forward them to third-party file inspection devices. Corelight Sensors do not decrypt TLS, but the vendor recently added encrypted traffic analysis for SSH (to detect brute force attempts and interactive connections) and TLS, including JA3 fingerprinting and certificate analysis.

As Corelight Sensors are most frequently deployed out of band, the vendor has focused its response capabilities on integrating with a broad portfolio of SIEM and SOAR tools. Customers interested in Corelight purchase hardware appliances and attached subscriptions based on the sensors’ expected bandwidth capacity.


Darktrace

Darktrace is based in Cambridge, U.K., and San Francisco, California. Its detection capability is primarily based on unsupervised machine learning, and it also utilizes supervised machine learning and deep learning algorithms. To analyze encrypted traffic, Darktrace relies primarily on unsupervised machine learning to detect unusual and anomalous JA3 fingerprints. Darktrace offers a SaaS module to monitor traffic between users and Microsoft Office 365. In 2019, Darktrace introduced the Cyber AI Analyst capability. It uses analytical techniques to automatically investigate threats detected by Darktrace’s flagship Enterprise Immune System (EIS). Cyber AI Analyst surfaces the most important incidents on a dashboard, and it provides written reports on these incidents.

Darktrace’s optional Antigena tool automates the response to incidents detected by EIS. It sends commands to leading firewall vendors to drop suspicious traffic. It also integrates with some SOAR, EDR and NAC tools. Cyber AI Analyst is Darktrace’s primary tool for automatically investigating and responding to threats. Pricing for EIS is based on an annual subscription. The price for Antigena for Network is 50% of the cost of the EIS license. The price for Antigena for Email is based on the number of users in the organization.


ExtraHop

ExtraHop is a large network monitoring and security vendor, based in Seattle, Washington. It launched its NDR product, named Reveal(x), in January 2018. The vendor quickly gained visibility on shortlists among its existing customers and across multiple regions in pure NDR evaluations. ExtraHop delivers Reveal(x) as a self-service on-premises or IaaS appliance solution, or as cloud-hosted SaaS. Reveal(x) sensors extract enriched metadata to feed multiple analysis engines and build correlated security events. ExtraHop also offers full-packet capture or event-triggered packet capture. Users can drill down from summary metadata into the raw packets, as Reveal(x) allows filtering and downloading of only the range of packets required. Reveal(x) can decrypt TLS traffic, if given access to the server secret keys or the symmetric session key, and relies on JA3 fingerprinting and other traffic analysis techniques when decryption is not an option. ExtraHop detection capabilities leverage a combination of techniques, including rule- and reputation-based controls, but also combine supervised and unsupervised machine learning to detect anomalies and deviations from normal network behaviors.

ExtraHop chose to integrate with ticketing, SIEM and SOAR for automated orchestration, and with firewalls and endpoint protection solutions for automated response. Reveal(x) is priced as a set of subscriptions, which depends on the number of endpoints and so-called “critical assets,” combined with bandwidth tiers. Additional features, such as full-packet capture and physical appliances, are priced separately.


Fidelis Cybersecurity

Fidelis is based in Bethesda, Maryland. In addition to its NDR solution, the vendor also sells its own EDR and deception products. Fidelis combines multiple techniques to detect malicious traffic, including supervised and unsupervised machine learning, signatures, and statistical analysis. In April 2020, Fidelis launched a stand-alone TLS decryption appliance, and it plans to add TLS decryption as an option on its sensors in 3Q20. It also uses JA3 signatures and machine learning techniques to analyze encrypted TLS traffic.

Fidelis Network does not directly integrate with any firewall solutions. It provides automated responses, such as packet drops, TCP resets and email quarantine, as well as quarantining files and custom playbooks, through its integration with its own EDR tool, Fidelis Endpoint. Fidelis also integrates with Carbon Black Cloud and other EDR tools. Fidelis can export data to SIEM and SOAR products. Manual response capabilities include the ability to search metadata, which can be stored for as long as the customer decides to keep it. Fidelis Network is licensed on an aggregate bandwidth and metadata storage model. An on-premises license can be purchased on a subscription or perpetual model. A cloud license (managed from the cloud, with data stored in the cloud) can only be licensed as a subscription.


FireEye

FireEye is a global security company, based in Milpitas, California. FireEye SmartVision is its NDR solution, specializing in server-side traffic. SmartVision physical or virtual sensors are typically deployed to intercept client-to-server traffic. SmartVision detection engines heavily leverage IDS and threat intelligence rule-based controls. FireEye products are powered by a proprietary Multi-Vector Execution (MVX) engine, which can be hosted on-premises or in the cloud. FireEye Network Forensics provides full-packet capture and analysis of traffic. Machine learning techniques are also applied to traffic and file analysis.

FireEye SmartVision response capabilities are available through the vendor’s orchestration and endpoint solutions, or via numerous integrations. Additional investigation tools are part of the FireEye Helix threat hunting and managed security service offering. The SmartVision solution can be purchased with a perpetual license (customers buy appliances) or as an annual subscription (based on Mbps of throughput or on a per-user basis).


Flowmon

Flowmon is based in Brno, Czechia. Its detection algorithms are based on a combination of multiple techniques, including machine learning, heuristics, statistical and signature-based methods. Flowmon does not decrypt TLS traffic. It uses encrypted traffic analysis techniques to look for indicators of compromise and compliance-related risks. It also uses JA3 fingerprints, but it does not rely heavily on this technique. Flowmon can ingest flow data (for example, NetFlow, IPFIX and others) from the network infrastructure, but it achieves the best results when customers implement its probes. These probes generate metadata that provides visibility into Layer 7 traffic across multiple protocols. The probes also include a memory buffer to support event-triggered packet captures.

Flowmon supports some automated response capabilities through formal partnerships and integration with Cisco’s NAC tool, Fortinet and Hillstone firewalls, and some other products. The tool also enables manual response by providing the ability to query and analyze origin data for threat hunting and incident analysis. Flowmon’s detection engine is licensed per volume of processed flows per second (fps). Customers can purchase yearly subscriptions or perpetual licenses. Flowmon collectors are licensed based on performance (fps) and storage capacity. Stand-alone probes are licensed per number of interfaces and speeds.


Based in Santa Clara, California, Gigamon’s ThreatINSIGHT solution is based on technology from its acquisition of ICEBRG in 2018. ThreatINSIGHT uses a combination of techniques to detect suspicious traffic, including supervised and unsupervised machine learning, deep learning, and signatures. ThreatINSIGHT can analyze decrypted TLS traffic when it is coupled with Gigamon’s SSL decryption feature (an optional component of Gigamon’s flagship GigaVUE network packet broker). To analyze unencrypted TLS traffic, ThreatINSIGHT uses JA3 signatures and it applies machine learning techniques to detect anomalous patterns of communication within the encrypted traffic stream.When compared to many of its competitors, ThreatINSIGHT has limited integrations with technology partners to automatically respond to detections. It integrates with Demisto, Splunk and Mimecast, but it does not have any partnerships with firewall vendors (to drop suspicious traffic) or NAC vendors (to isolate a compromised endpoint). The Insight Query Language (IQL) feature allows incident responders to perform threat hunting and incident response by searching through a store of metadata. ThreatINSIGHT is available as a subscription service, priced according to bandwidth. As part of the subscription, every ThreatINSIGHT customer receives a dedicated Technical Account Manager, regardless of their size.


GREYCORTEX

With headquarters in Brno, Czechia, GREYCORTEX is a pure-play NDR vendor offering a solution called MENDEL. GREYCORTEX offers its solution mainly in Europe and the Asia/Pacific region. MENDEL consists of virtual and physical appliances. It can work as a single device, combining traffic gathering (sensors) and analysis (collectors), and expand to a three-tier architecture by adding centralized management to handle multiple collectors. GREYCORTEX combines numerous supervised and unsupervised machine learning models, then correlates their output with rule-based controls. It also provides solutions for ICS/SCADA networks. GREYCORTEX NDR supports configurable packet capture, uses JA3 fingerprinting for TLS analysis, and supports TLS decryption.

MENDEL can automatically block threats by instrumenting third-party network and security devices, leveraging their management APIs. The default configuration includes one month of searchable metadata. Two pricing models are available. Customers can purchase perpetual licenses based on sensor throughput and flows per second. Alternatively, customers can purchase a subscription license, also based on sensor throughput and flows per second (the subscription price includes support).

Hillstone Networks

Hillstone Networks is a large network security vendor, based in Suzhou, China, with regional headquarters in Santa Clara, California. Its Server Breach Detection System (sBDS) can be deployed as a stand-alone product, and its threat detection sensors can also be bundled in the vendor’s centralized analytics solution (i-Source). Hillstone’s solution combines the various engines from its security portfolio, including IDS and malware inspection, but does not decrypt or analyze TLS sessions. Its use of unsupervised machine learning is focused on baselining client-to-server traffic patterns and spotting deviations.

Hillstone’s NDR solution integrates with other products from the vendor for incident response. Pricing is based on appliance purchase and attached subscriptions.


IronNet

Based in Fulton, Maryland, IronNet targets large enterprises that are concerned about attacks from nation states. Its solution uses a combination of behavioral detection techniques, including supervised and unsupervised machine learning and some deep learning. It also uses statistical analysis and some heuristic techniques to detect suspicious traffic. IronNet does not decrypt TLS traffic, and it does not support JA3 fingerprints. However, it uses a range of artificial intelligence and machine learning techniques to detect suspicious TLS traffic.

Unlike many vendors in this market, IronNet does not automatically respond to threats by integrating with firewalls to drop suspicious network traffic. However, it does integrate with leading SOAR and SIEM products. IronNet has strong manual hunt capabilities, enabling threat hunters to investigate across network flow data and pull packet capture (PCAP) on any flow (not just what IronDefense deems as high risk). The Expert System feature in the IronDefense product prioritizes threats and provides contextual information for incident responders. The solution also provides a crowdsourcing feature that enables communities of peer enterprises to collaborate against targeted threats. Pricing for IronDefense is based on a flat monthly fee based on analytical throughput (not ingest throughput) or by number of users. Customers must purchase IronDefense physical or virtual sensors.


Lastline

On 4 June 2020, VMware announced the intent to acquire Lastline. Gartner expects the deal to close by the end of June. After the deal has closed, Gartner expects that VMware will integrate Lastline technology into its NSX product.

Lastline is based in San Mateo, California. Its Defender product uses a combination of techniques to detect suspicious traffic, including supervised and unsupervised machine learning, and some deep learning functions. It also uses signatures, statistical analysis and heuristics, as well as a sandbox to detect malicious files. Defender does not natively decrypt TLS traffic. Instead, it applies anomaly detection to JA3 hashes. It also applies encrypted traffic analysis techniques to detect suspicious traffic without inspecting the payload.

Lastline’s automated response with firewall vendors (to send a command to the firewall, so it drops suspicious traffic) is limited to only Check Point Software Technologies. However, Lastline integrates with many other security products, including VMware Carbon Black Cloud, Symantec (Blue Coat), Splunk (Phantom), Trend Micro (Tipping Point), Palo Alto Networks and several others. When the Lastline sensors are deployed in-line, they can block suspicious traffic. For manual response, Lastline provides good threat hunting and incident response capabilities. The solution includes the open-source Kibana search and visualization product. Lastline has also built a query language to do more complex searches. The solution includes a triage functionality that correlates multiple alerts into a single high-fidelity alert. Defender is sold as a subscription. Organizations can purchase based on either the number of protected hosts or the number of protected users.


Plixer

Based in Kennebunk, Maine, Plixer is a network performance monitoring and security vendor, offering an NDR solution based around Scrutinizer. Its customer base is mainly in the U.S. and Europe. Scrutinizer is deployed as physical/virtual sensors or as a SaaS. Scrutinizer collects metadata from the existing network infrastructure (switches, routers, firewalls, packet brokers, etc.), as well as from Plixer FlowPro, an optional sensor. The vendor recently acquired endpoint monitoring software, which promises to add more endpoint-related monitoring. Plixer offers integration with Endace for full-packet capture. Scrutinizer includes multiple rule-based and heuristic detections that identify network anomalies and security incidents. It complements these techniques with traffic baselining for anomaly detection and JA3 fingerprinting for TLS session analysis.

Scrutinizer’s response capabilities include incident-based and threshold-based triggers to update firewalls or other network equipment through API calls. Threat hunting capabilities are integral to Scrutinizer. Plixer’s subscription licensing is based on flow rate and the number of metadata-exporting network devices.


Vectra

Vectra is a global NDR vendor, with headquarters in San Jose, California. Vectra Cognito is the company’s main product offering, and the vendor was early to the NDR market with its Cognito platform. Vectra is highly visible in Gartner client inquiries across the Americas and EMEA regions, and is growing in the Asia/Pacific region. Cognito Detect, the NDR product, leverages physical appliance sensors and virtual machines deployable on hypervisors and on IaaS platforms, and can interact with some SaaS applications through APIs to gather SaaS events. The analysis engine (Vectra Brain) can be deployed on-premises or in the public cloud. Vectra uses supervised machine learning to detect global threats, and combines it with threat intelligence for more accurate detection of known bad actors. It uses unsupervised learning models for more contextualized anomaly detection. The vendor uses JA3 fingerprinting and other techniques to provide detection coverage for encrypted traffic, but does not decrypt TLS. Vectra provides easy-to-understand dashboards and a “campaign view,” which puts multiple events in context and eases investigation. Vectra recently launched a beta program for an Office 365 monitoring offering, and released Lockdown, an event aggregation and automated response (via partner integrations) feature that is part of Cognito Detect.

Vectra’s Lockdown solution integrates with endpoint controls, firewalls, SOAR and SIEM to provide response capabilities. It can also directly integrate with the infrastructure, taking down workloads or temporarily disabling compromised user accounts. Vectra’s pricing, in addition to the hardware costs, is based on the number of active monitored IP addresses. Additional subscriptions are available to forward enriched, Zeek-formatted data in real time to a third-party data lake (Cognito Stream), or to a SaaS that is integrated with Cognito Detect (Cognito Recall) for threat hunting purposes.

Market Recommendations

Enterprises should strongly consider NDR solutions to complement signature-based tools and network sandboxes. Many Gartner clients have reported that NDR tools have detected suspicious network traffic that other perimeter security tools had missed.

When evaluating NDR vendors, assess these factors:

  • Response — Some vendors focus more on automated responses (for example, sending a command to a firewall to drop suspicious traffic), whereas other vendors focus more on manual responses (for example, providing strong threat hunting tools). Enterprises should decide which approach is a better fit for them and should analyze the vendors with response features that best meet their requirements.
  • Pure-play versus NDR as a feature — Is it more sensible to implement NDR as a feature from another technology vendor (for example, SIEM), or do you require a more full-featured, pure-play NDR solution from one of the vendors analyzed in this Market Guide?

Note 1: Representative Vendor Selection

These vendors were selected because they met Gartner’s inclusion criteria, and were not eliminated by our exclusion criteria.

Note 2: Other Vendors That We Are Tracking

IoT and OT Specialization Vendors

  • Armis
  • Cyberbit

NDR as a Feature Vendors

  • IBM (QRadar Network Insights)
  • LogRhythm (NetMon)
  • Palo Alto Networks (Cortex XDR)

Other Vendors

  • Accedian
  • aizoOn
  • Braintrace
  • cPacket
  • Kaspersky (see Note 3)
  • Lumu
  • MistNet
  • MixMode
  • Noble
  • Nominet
  • Quadminers
  • Qianxin Technology Co., Ltd. (SkyEye)
  • Qihoo 360
  • RSA
  • Stellar Cyber
  • Tencent (T-Sec NTA)
  • ThreatBook
  • Vehere

Note 3: Kaspersky

In September 2017, the U.S. government ordered all federal agencies to remove Kaspersky’s software from their systems. Several media reports, citing unnamed intelligence sources, made additional claims. Gartner is unaware of any evidence brought forward in this matter. At the same time, Kaspersky’s initial complaints have been dismissed by a U.S. District of Columbia Court. Kaspersky has launched a transparency center in Zurich where trusted stakeholders can inspect and evaluate product internals. Kaspersky has also committed to store and process customer data in Zurich, Switzerland. Gartner clients, especially those who work closely with U.S. federal agencies, should consider this information in their risk analysis and continue to monitor this situation for updates.

Selecting the Right SOC Model for Your Organization

Selecting the Right SOC Model for Your Organization

Published 24 February 2020 – ID G00464962 – 22 min read

An SOC provides centralized security event monitoring and threat detection and response capabilities, and may support other security operations functions and business unit requirements. This research helps security and risk management leaders identify the best SOC model for their organization.


Key Findings

  • Security operations centers (SOCs) will fail in their mission without a clear target operating model, and if their deliverables are not tightly coupled to business use cases, risks and outcomes.
  • A hybrid SOC working with external providers is a credible option that is increasingly being adopted by many organizations, specifically midsize enterprises.
  • Organizations are increasingly interested in multifunction SOCs, extending SOC duties to incident response, threat intelligence and threat hunting, while adding OT/ICS/IoT in scope.
  • Building, implementing, running and sustaining a fully staffed 24/7 SOC is cost-prohibitive for most organizations.


Security and risk management leaders responsible for security operations should:
  • Develop an SOC target operating model, taking into account current risks and threats, as well as the business objectives, focusing on specific threat detection and response use cases.
  • Use managed detection and response (MDR) or other security services to offset the cost of 24/7 SOC operations and to fill coverage and skills gaps, tactically or as a long-term strategy.
  • Expand the SOC’s capabilities beyond just SIEM solutions to provide greater visibility into the IT, OT and IoT environment where appropriate, but do not expect a full SOC/NOC integration.
  • Likewise, plan for SOC functions beyond reactive incident monitoring and into threat detection and response, and even proactive threat hunting.

Strategic Planning Assumption

By 2024, 25% of all organizations will have an SOC function, up from 10% today. This will range from small part-time virtual SOCs to fully staffed full-time SOCs, to outsourcing of SOC services to an external provider, or a combination of these.


Security operations centers (SOCs) have historically been adopted by only very large organizations requiring centralized and consolidated security operations focused on security event monitoring, and threat detection and response, usually delivered 24/7.
This has changed, and SOCs are becoming more ubiquitous as organizations large and small shift security efforts from prevention only to a blend of prevention and detection.


Gartner defines an SOC as a construct with the following characteristics:
  • A mission, usually focused on threat detection and response.
  • A facility, dedicated to the SOC, either physical or virtual.
  • A team, often operating in around-the-clock shifts to provide 24/7 coverage.
  • A set of processes and workflows that support the SOC’s functions.
  • A tool or set of tools to help predict, prevent, detect, assess and respond to security threats and incidents.
However, the SOC does not always have to be a physical facility with hundreds of analysts working around the clock. Gartner has seen less mature, as well as resource-constrained, organizations employ staff members to perform security operational functions on an ad hoc basis and remotely (that is, a virtual SOC function is being delivered). While SOC is the ubiquitous term, other terms such as cybersecurity operations center, cyber defense center and cyber fusion center are often used.
Gartner observes a renewed interest from incoming inquiries in merging both the NOC and SOC functions for economies of scale. Although a fully fused NOC/SOC approach is not a viable alternative at scale, the common set of functions between NOC and SOC needs to be identified, and a decision has to be made on where each function will live. At the very least, continuously improving coordination between the NOC and SOC should be encouraged.
An organization cannot buy an outsourced SOC. Outsourced services still feed into an organization’s own security operations regardless of how informal that may be. A hybrid SOC usually connotes an SOC where one or more of the core functions are performed using outsourced security services. It is the most common form of SOC across all organizations, as most organizations will leverage some types of security services (for example, reverse malware engineering is a common function).


SOCs’ main mission is focused on the following functions, with threat detection and response being the most common across SOCs. The SOC needs to be clearly aligned to its target operating model, as defined in “Create an SOC Operating Model to Drive Success.” If a set of functions is not delivered out of the SOC, this could indicate that these functions are performed by another internal structure, an external service provider or are not aligned to the organization’s security use cases:
  • Security event monitoring, detection, investigation and alert triaging
  • Security incident response management, including malware analysis and forensic analysis
  • Threat intelligence management (ingestion, production, curation and dissemination)
  • Risk-based vulnerability management (notably, the prioritization of patching)
  • Threat hunting
  • Security device management and maintenance (for the SOC technology stack)
  • Development of data and metrics for compliance reporting/management
Figure 1 describes the main functions of an SOC across all SOC models.

Figure 1. Modern SOC Components

Depending on the functions and capabilities provided, a fully functional SOC running 24/7 requires at least eight to 12 full-time employees (see “How to Plan, Design, Operate and Evolve a SOC”). This does not include capacity for management, staff turnover, personal time off or other special activities like malware reverse engineering, forensics and threat analysis that may need to be performed by the SOC staff.
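The eight-to-12 figure follows from simple coverage arithmetic: a week has 168 hours, and each analyst covers roughly 40 of them. A back-of-the-envelope sketch, assuming 40-hour weeks and, as the text notes, ignoring management, leave and special activities:

```python
import math

def min_24x7_headcount(analysts_per_shift, weekly_hours_per_analyst=40):
    """Lower bound on staff needed to cover 168 hours/week of shifts.

    Ignores management overhead, turnover, personal time off and
    specialist activities, so real staffing needs are higher.
    """
    return math.ceil(168 * analysts_per_shift / weekly_hours_per_analyst)

print(min_24x7_headcount(2))  # 9  (two analysts on shift at all times)
print(min_24x7_headcount(3))  # 13 (three analysts on shift at all times)
```

Even before accounting for leave and attrition, two-to-three analysts per shift lands in the eight-to-12-plus range, which is why 24/7 operations are cost-prohibitive for most organizations.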
Ideally, an SOC should be located in a dedicated, physical environment (such as an isolated room) with heightened levels of physical access required. Due to the sensitive nature of incident investigations, as well as the potential for tampering with potential evidence and hiding malicious tracks, physical access to the facility needs to be restricted to authorized personnel only. The SOC’s infrastructure (network, systems, applications) should be isolated or segmented from the production network to prevent internal breaches affecting the operations of the SOC. Furthermore, the technology infrastructure used for monitoring and investigations within the SOC should be isolated and separated from the internet. Finally, the SOC will often have its own independent internet connectivity so that it can continue to operate and perform investigations even if the corporate network is, for example, under a distributed denial of service (DDoS) attack. Based on Gartner client inquiries, however, this is not always the case. Although some organizations build/manage SOCs with high levels of physical protection and isolation, as described above, most organizations opt for a traditional office environment and simple isolation measures.

SOC Models

Five main models of SOC have emerged, which can be mapped along the maturity of the SOC processes and workflows in an organization, as described in Figure 2.

Figure 2. Five Models of SOC

These models are further described in Table 1 and the sections below.

Table 1: Five Primary Operational SOC Models for Typical Organizations

SOC Model: Virtual SOC
Typical Maturity of SOC Workflows: Very low
Main Attribute: No dedicated facility
When to Select:
  • No dedicated facility available
  • Part-time and geographically distributed team members
  • Activated when an incident is discovered

SOC Model: Multifunction SOC
Typical Maturity of SOC Workflows: Low to medium
Main Attribute: Simple SOC with IoT/OT/ICS and some 24/7 NOC
When to Select:
  • Dedicated facility with a dedicated team performing, not just security, but some other critical 24/7 IT operations from the same facility to reduce costs
  • Availability of some formalized processes and workflows

SOC Model: Hybrid SOC
Typical Maturity of SOC Workflows: Low to very high
Main Attribute: Mixes internal resources and outsourced security services; any SOC model can be qualified as hybrid when it uses outsourced security services
When to Select:
  • Dedicated and semidedicated staff, either internal or outsourced
  • Security operations can be performed by the organization’s internal staff 24/7, 8-5 on weekdays, or 8-5 every day with some responsibilities offloaded to an external provider
  • Primary model when fully delegated to an MSSP or an MDR provider

SOC Model: Dedicated SOC
Typical Maturity of SOC Workflows: Medium to high
Main Attribute: Self-contained, in-house, dedicated 24/7 threat detection and response
When to Select:
  • Dedicated facility
  • Dedicated team
  • Fully in-house, 24/7 operations
  • Incident response, TH and TI functions and teams in place

SOC Model: Command SOC
Typical Maturity of SOC Workflows: High to very high
Main Attribute: Manages and coordinates other SOCs and activities
When to Select:
  • Need to coordinate other SOCs
  • Coordinate response across all SOCs for major incidents
  • Provide threat intelligence, situational awareness and additional expertise
  • Rarely directly involved in day-to-day operations

Source: Gartner (February 2020)

Virtual SOC
A virtual SOC (vSOC) does not reside in a dedicated facility, nor does it have a common war room.
Instead, it is composed of team members who may have other duties and functions. Since there may not be dedicated tools for the SOC, such as a SIEM, team members rely on available IT (and sometimes security) technologies, and become active when a security incident occurs. In addition to a lack of SOC tools and SOC expertise, the lack of formalized processes and workflows for both the detection and response phases is a typical attribute of a vSOC. Things are done reactively and ad hoc, using the available people and tools, usually in a best-effort, nondeterministic way.
A vSOC is typically suited to smaller enterprises that experience only infrequent incidents and/or do not have resources for a more encompassing SOC. Sometimes an organization can only afford an IT person or a handful of people who can, on a part-time basis, review alerts generated by the firewall or an antivirus, or periodically review critical logs in support of a threat detection and response function.

Multifunction SOC
The defining attribute of a multifunction SOC is to bring IoT/OT/ICS in scope for the SOC, and/or to deliver on other critical 24/7 IT operations from the same facility to reduce costs.
This model is usually adopted by less mature organizations that need to deliver multiple use cases from the same facility, and that may not have dedicated expertise in IT, security and OT. These use cases are usually simple enough, both from the NOC as well as SOC standpoint, to be delivered by common tools and common people. However, factors such as politics, budget and process maturity levels can lead to staff members doing multiple things, but none of them well. NOCs adhere to the Information Technology Infrastructure Library (ITIL) definitions of incident and incident management, which is generally not the right approach to take in terms of security incidents. The ITIL’s focus is on events that cause a disruption of service, with the goal of restoring the service as quickly and efficiently as possible. Security and risk management leaders must never be distracted by this convergence or else it may affect the mission of the SOC and its ability to help securely deliver and enable business outcomes.
Organizations engaged in this model always start by mapping available telemetry, tools, and expertise, and defining common use cases, processes and workflows for the multifunction SOC (see “Align NetOps and SecOps Tool Objectives With Shared Use Cases”). These can include not only IT and security devices and users, but also IoT/OT/ICS.

Hybrid SOC
The defining attribute for a hybrid SOC is to mix both internal resources with outsourced ones, while leveraging external security services for the delivery of some or most of the SOC functions.
One or more dedicated people are responsible for ongoing SOC operations, involving semidedicated team members and third parties, as required. If an organization cannot operate 24/7, the resulting gap can be covered by a number of providers, resulting in a hybrid SOC model. These providers might include an MSSP (see “Magic Quadrant for Managed Security Services, Worldwide”), a managed detection and response (MDR) service provider (see “Market Guide for Managed Detection and Response Services”), a co-managed SIEM service provider, or sometimes a special security consulting provider or system integrator (SI) for such services as specialized incident response/forensics. Only large enterprises are able to afford and commit to dedicated, 24/7 internal SOCs. However, many organizations desire some form of internal security operations capability (although limited), even if they are using an external provider for a majority of their security monitoring needs.
The hybrid SOC model can reduce the cost of 24/7 operations. Therefore, it is well suited not only for small to midsize enterprises, and especially for those working extensively with third parties, but also to larger organizations and mature SOCs that can selectively outsource some security services.
Furthermore, it allows the organization to maintain stable security operations while internal capabilities are developed over time. During this time, any resource gaps can be filled, and existing security resources can shift their focus to other activities, such as deeper investigations of incidents. As such, this model is also adopted by organizations that have a desire to build insourced competencies but (1) need an immediate solution to their problem, (2) have limited expertise to be autonomous right away, and (3) want to leverage the security service provider for knowledge transfer and continuous expertise gathering.
Driving adoption of this model are a shortage and gap in the availability of skills and expertise, general budget constraints, and the considerable cost of 24/7 security operations. As an example, Gartner has seen increased interest in and adoption of co-managed SIEM services (see “How and When to Use Co-managed Security Information and Event Management”).

Dedicated SOC
The defining attribute of a dedicated SOC is to have a 24/7 centralized threat detection and response function, with a dedicated facility, IT, and security infrastructure and team, and robust processes and workflows. It is self-contained, possessing all of the resources required for continuous day-to-day security operations.
A fully centralized SOC is suited for large enterprises with multiple business units and geographically dispersed locations, sensitive environments, and high-risk, high-security requirements, as well as service providers that provide MSSs. Specifically, large enterprises choose to build, implement and run their own SOCs when:
  • Laws, regulations or governance issues prevent the outsourcing option.
  • There are concerns about specific/targeted threats.
  • Specialized expertise and knowledge about the business cannot be outsourced.
  • The organization’s technology stack is not supported by third-party security services.
Recently, Gartner has seen large enterprises with a complex and distinct set of use cases and/or very widespread security mandates fusing traditional security operations with more contemporary functions. Examples of these extended use cases include, but are not limited to, threat intelligence, cyber incident response and OT/Internet of Things (IoT) security. There are, however, both advantages and disadvantages to doing this. For example, fusing incident response into the SOC allows tighter integration between detection and response, and is an essential factor for security operational success (see “Prepare for the Inevitable With an Effective Security Incident Response Plan”). On the other end of the spectrum, it can create separation-of-duties conflicts and/or pull security event monitoring resources away to incident response tasks, thus affecting the effectiveness of the monitoring during an actual incident (see “How to Plan, Design, Operate and Evolve a SOC”).
Dedicated SOCs usually keep most functions in house and minimize security services. However, even large dedicated SOCs can outsource some very specific functions, such as reverse malware engineering. Strictly speaking, most dedicated SOCs are also very advanced hybrid SOCs.

Command SOC
The defining attribute of a command SOC is to support and manage several SOCs, and not be involved in day-to-day operations.
Very large and/or distributed organizations that have regional offices with a certain operating independence, service providers offering MSSs and those providing shared services (for example, government agencies) may have more than one SOC under their purview. Where these SOCs are required to run autonomously, they will function as centralized or distributed SOCs. In some instances, the SOCs will work together, but must be managed hierarchically. In those cases, one SOC should be designated as the command SOC. The command SOC coordinates security intelligence gathering, produces threat intelligence, curates and fuses these for consumption by all other SOCs, in addition to providing additional expertise and skills such as forensic investigations and/or threat analysis. Sometimes, this is how a computer emergency response team (CERT) functions in smaller countries where they are serving as an aggregation and coordination point more than delivering day-to-day security operations.

Benefits and Uses

Improved Threat Management
Many organizations already routinely implement and/or employ a variety of security technologies and services designed to prevent and detect threats, as well as harden and protect assets. When these solutions are managed in silos, organizations lose the opportunity to centrally consolidate, normalize, correlate and monitor these threats in real time, and will at best waste valuable time and resources, and at worst miss obvious threats that an SOC could have easily detected. This value is realized by using the SOC as the central point for reconciling and managing these threats.
Reduction in MTTD and MTTR Incidents
Integrated security event monitoring gives the security operations team better visibility and enables it to correlate patterns and surface suspicious activities. Effective detection and escalation of incidents and close coordination between the individual teams within a defined workflow and process allow an organization to detect and respond faster, improving both mean time to detect (MTTD) and mean time to remediate (MTTR).
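MTTD and MTTR are straightforward averages over incident timestamps: time from occurrence to detection, and from detection to remediation. A minimal sketch with hypothetical incident records:

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records: (occurred, detected, remediated)
incidents = [
    (datetime(2020, 1, 1, 9, 0), datetime(2020, 1, 1, 10, 0), datetime(2020, 1, 1, 14, 0)),
    (datetime(2020, 1, 2, 9, 0), datetime(2020, 1, 2, 12, 0), datetime(2020, 1, 2, 15, 0)),
]

# MTTD: mean of (detected - occurred); MTTR: mean of (remediated - detected)
mttd_hours = mean((d - o).total_seconds() / 3600 for o, d, _ in incidents)
mttr_hours = mean((r - d).total_seconds() / 3600 for _, d, r in incidents)
print(mttd_hours, mttr_hours)  # 2.0 3.5
```

In practice, the "occurred" timestamp is often only established after the fact during incident investigation, which is why MTTD is harder to measure accurately than MTTR.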
Centralization and Consolidation of Security Functions
Consolidating security functions in an SOC can provide cost efficiencies, enable cost sharing and leverage economies of scale while maximizing the available expertise, skills and resources. For larger organizations with a distributed geographical environment, especially those with local governance requirements, centralizing some security operations functions can help provide a centralized view, as well as a set of core security services, to all entities, while respecting local regulations.
Regulatory Compliance
An SOC is often the operational model of choice for large and some midsize enterprises to meet regulatory requirements mandating security event monitoring, vulnerability management and incident response functions. Furthermore, an SOC can improve compliance auditing and reporting across the organization, but an SOC would typically not be built for compliance-only use cases.

Adoption Rate

Gartner data indicates that SOC spending tends to be a significant percentage of an organization’s total security budget (see “SOC Development Roadmap”); 57% of organizations spend over 20% of security’s total budget on the SOC. However, clients seem to be split between insourcing and outsourcing their SOC (see “Setting Up a Security Operations Center (SOC)”). In addition, increased SOC spending is sustained by:
  • Maturing of information security programs
  • Centralization of incident detection, threat detection and response capabilities, as well as consolidation of security operations functions expanded throughout the entire organization
  • Current and future legislation and regulatory frameworks that mandate security event monitoring and detection and response capabilities (see “A Technical Solution Landscape to Support Selected GDPR Requirements”)
  • An increase in risks/threats via breaches and incidents
  • Growth of technology usage due to digitalization of business (see “Hype Cycle for Threat-Facing Technologies, 2019”)
  • Increased adoption of external service support for security event monitoring and device management
In 2019, Gartner saw a 39% increase in inquiries from clients requesting assistance on both building and maturing their security operations through the lens of an SOC. These clients have security operations functions that are either conducted by internal staff, supported by an external provider offering MSSs to offload some of the SOC functions, or provided in the form of regionally or vertically aligned shared services.


Lack of Improvement in Breach Response Efficiency/Capabilities
With threat management as a major driver for adopting an SOC, most will be judged by how they perform in that function and will be measured by the speed and efficacy of security event monitoring and threat detection and response.
Organizations adopting the SOC model should carefully evaluate how this investment translates to less frequent and severe breaches, and compare it to their own pre-SOC state. Furthermore, security technologies are not silver bullets. SOCs may become overwhelmed by the vast number of alerts generated by an expanding number of security tools. Although this is a common issue, there is no simple solution to avoid this quandary. After all, some organizations genuinely have a lot of malicious activity, which leads to alert overload. Better SIEM tuning to minimize noise, use of advanced analytics for better detection, and use of automation for alert triage and faster response are often used to reduce the alert flood.
Skills, Expertise and Staff Retention
Staff retention for SOC analysts is generally difficult. Even service providers that can offer a career path and progression struggle to keep their SOC analysts for longer than three to four years. As a result of the shift-based and repetitive work, in addition to a rare and sought-after skill set, the SOC analyst role is often seen as a steppingstone role. This trend is further exacerbated by a global shortage in available qualified staff (see “Adapt Your Traditional Staffing Practices for Cybersecurity”).
An understaffed SOC or one staffed with inexperienced analysts will be ineffective and will struggle to achieve its objective of rapid detection and response to threats and incidents, despite all the spend on technology and data collection. It will also increase analyst attrition if left understaffed for longer periods. To avoid starting an SOC project that can never succeed due to resource constraints, seek out alternatives such as MSSs or other forms of hybrid and outsourced security event monitoring, like MDR service providers. Alternatively, start with non-24/7 coverage and expand later when the resources are available.
Regardless of the SOC model implemented, Gartner recommends developing an SOC staff retention strategy from the start, as well as maintaining a continuous hiring capacity, which can help the organization maintain the SOC with the minimum, yet optimum staff required (see “Develop Existing Security Staff to Excel in the Digital Era”).
Return on Investment Demonstration
Security and risk management leaders need to understand that success is not just about achieving security operations metrics, but also about external metrics that align with the business. Important starting points are understanding your market, your message and the media you should use. For example, concerns over detection rates, open tickets per analyst and ticket closure rates are warranted. However, do not lose sight of the fact that the business is mainly concerned with addressing these questions:
  • Can we continue to deliver our products/services?
  • What competitive disruptions or players in our market will cause clients to shift from our products/services?
  • Are we conducting our activities legally?
For more information on aligning security metrics with business objectives, see “Develop Key Risk Indicators and Security Metrics That Influence Business Decision Making.”
To ensure your organization has the most appropriate security metrics, start with the end in mind and first develop tightly defined goals and metrics the SOC needs to deliver against that align to the business outcomes. Also, make sure that a sustainable budget is secured for the first two to three years of the SOC operation. It will often take this amount of time for people, processes and technology to be integrated into your organization and delivering at a reasonable level of proficiency.


Security and risk management leaders involved in incident monitoring, threat detection and response, and/or other adjacent security operations functions (such as threat hunting and threat intelligence) should benefit from efficiencies by formalizing all relevant duties within a security operations center. This SOC will then:
  • Gather and centralize required security personnel. These can be present either physically or virtually, and can belong to the organization’s security, operations, IT or network teams, or belong to a service provider. Likewise, these resources can be assigned on a full-time or part-time basis.
  • Define repeatable and automatable processes and workflows. These will depend on the scope of the SOC and should tend to address not only threat detection but also response. When an outside service provider is involved, it is then particularly important to define “who is doing what, when” by using a responsible, accountable, consulted, informed (RACI) matrix to define roles and responsibilities, and to expose integrations and communications between the client and the service provider.
  • Appropriately implement tools. Depending on scope, these tools (which can include, for example, CLM, SIEM, SOAR, SIRP or ITSM) should be selected and implemented to not only support current SOC requirements, but also current or planned SOC scope creep beyond security. This includes, for example, supporting the IT operations team and its NOC, or the ICS owners and their IoT ecosystem.
The scope of the SOC can then be defined along the following two dimensions:
  • Breadth of scope. As an example, does the SOC address only a subset of the infrastructure, or a subset of the user population, entire BUs or even the entire organization?
  • Depth of scope. As an example, does the SOC address basic, best-practice cyber-hygiene use cases, or does it address more complex use cases such as advanced persistent threat (APT) or insider threat? Does it include the IoT ecosystem, and does it deliver some NOC services as well?
Based on the scope of the SOC along these two dimensions, available expertise and resources, and strategic appetite for insourcing versus outsourcing, organizations can engage in an SOC initiative using one of the models described in this research note.

Note 1: ITIL 4 Incident and Incident Management Definitions

The definition of “incident” was revised in ITIL 2 as “an event which is not part of the standard operation of a service and which causes or may cause disruption to or a reduction in the quality of services and customer productivity.” Failure of one disk from a mirror set would fall in this category. ITIL 4 refers to incident management as a practice, describing key activities, inputs, outputs and roles. The primary objective of the incident management ITIL process is to return the IT service to users as quickly as possible.

Magic Quadrant for Application Security Testing

Magic Quadrant for Application Security Testing

Published 29 April 2020 – ID G00394281 – 61 min read

Modern application design and the continued adoption of DevSecOps are expanding the scope of the AST market. Security and risk management leaders will need to meet tighter deadlines and test more complex applications by seamlessly integrating and automating AST in the software delivery life cycle.

Strategic Planning Assumptions

By 2025, 70% of attacks against containers will be from known vulnerabilities and misconfigurations that could have been remediated.
By 2025, organizations will speed up their remediation of coding vulnerabilities identified by SAST by 30% with code suggestions applied from automated solutions, up from less than 1% today, reducing time spent fixing bugs by 50%.
By 2024, the provision of a detailed, regularly updated software bill of materials by software vendors will be a non-negotiable requirement for at least half of enterprise software buyers, up from less than 5% in 2019.

Market Definition/Description

Gartner’s view of the market is focused on transformational technologies or approaches delivering on the future needs of end users.
Gartner defines the application security testing (AST) market as the buyers and sellers of products and services designed to analyze and test applications for security vulnerabilities.
We identify four main AST technologies:
  • Static AST (SAST) technology analyzes an application’s source, bytecode or binary code for security vulnerabilities, typically at the programming and/or testing software life cycle (SLC) phases.
  • Dynamic AST (DAST) technology analyzes applications in their dynamic, running state during testing or operational phases. It simulates attacks against an application (typically web-enabled applications and services and APIs), analyzes the application’s reactions and, thus, determines whether it is vulnerable.
  • Interactive AST (IAST) technology combines elements of DAST simultaneously with instrumentation of the application under test. It is typically implemented as an agent within the test runtime environment (for example, instrumenting the Java Virtual Machine [JVM] or .NET CLR) that observes operation or attacks and identifies vulnerabilities.
  • Software composition analysis (SCA) technology is used to identify open-source and third-party components in use in an application, their known security vulnerabilities and, typically, adversarial license restrictions.
AST can be delivered as a tool or as a subscription service. Many vendors offer both options to reflect enterprise requirements for a product and a service.
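To make the distinction between these techniques concrete, the following minimal sketch (hypothetical code, not taken from any vendor's product) shows the kind of tainted data flow a SAST engine would flag, user input reaching a SQL sink, alongside the parameterized fix that remediation guidance typically suggests:

```python
# Minimal illustration of a source-to-sink flaw a SAST engine detects
# (CWE-89, SQL injection) and its standard remediation.
import sqlite3

def find_user_unsafe(conn, username):
    # SAST finding: user-controlled 'username' flows into the query text,
    # letting crafted input rewrite the SQL statement.
    query = "SELECT id FROM users WHERE name = '%s'" % username
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Remediation: a parameterized query keeps data out of the SQL grammar.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

# Injected input leaks every row through the unsafe path, none through the safe one.
print(find_user_unsafe(conn, "x' OR '1'='1"))
print(find_user_safe(conn, "x' OR '1'='1"))
```

A DAST tool, by contrast, would find the same flaw from the outside by sending the crafted input against the running application and observing the response.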
The 2020 Magic Quadrant will focus on a vendor’s SAST, DAST, SCA and IAST offerings, maturity and features as tools or as a service. AST vendors innovating or partnering for these were also included.
Gartner has observed the major driver in the evolution of the AST market is the need to support enterprise DevOps initiatives. Customers require offerings that provide high-assurance, high-value findings while not unnecessarily slowing down development efforts. Clients expect offerings to fit earlier in the development process, with testing often driven by developers rather than security specialists. As a result, this market evaluation focuses more heavily on the buyer’s needs when it comes to supporting rapid and accurate testing capable of being integrated in an increasingly automated fashion throughout the software development life cycle. In addition, Gartner recognizes the growing relevance of containers as an attractive technology for application development, especially for cloud-native applications. We have added support for containers as a factor in the 2020 Magic Quadrant.
Gartner has observed that enterprises today increasingly employ AST for mobile apps. The toolsets for AST, as well as techniques for behavioral analysis, are often employed to analyze source, byte or binary code, and observe the behavior of mobile apps to identify coding, design, packaging, deployment and runtime conditions that introduce security vulnerabilities. While these capabilities are valued, they do not drive the current or evolving needs of customers in the AST space, and thus are similarly not a primary focus of this Magic Quadrant.

Magic Quadrant

Figure 1. Magic Quadrant for Application Security Testing

Source: Gartner (April 2020)

Vendor Strengths and Cautions

CAST


Based in the U.S. and France, CAST is a software intelligence vendor whose product is used to analyze software composition, architecture, flaws, quality grades and cloud readiness. In addition to its code quality testing offering, CAST provides enterprise SAST with the CAST Application Intelligence Platform (AIP). The vendor also offers CAST Highlight, which provides SAST pattern analysis and SCA. The CAST Security Dashboard enables application security professionals to prioritize and resolve application security vulnerabilities. The vendor also provides a desktop version called CAST Lite.
During the past 12 months, CAST continued to expand its language and framework coverage; improved its SCA offering (including the addition of transitive dependencies and visual representation of dependencies); and optimized its scanning for complex projects. CAST also worked on false positive reduction, including the introduction of its autoblackboxing capability. This allows users to fine-tune and customize their analysis (for example, including external code or recognizing and suppressing specific false positives). CAST also introduced AIP Console, which allows for automated application discovery, configuration and setup.
CAST will appeal to large enterprises requiring a solution that combines security testing with code quality testing, and to existing CAST AIP clients that already use the platform for quality testing.

Strengths

  • CAST offers a single solution that can be used for quality analysis as well as security analysis, which can be appealing to organizations with DevSecOps use cases.
  • Client feedback highly rated the ability to get a single view into issues across security, quality and architecture. CAST’s analysis engine provides an architectural blueprint of the software, which helps test composite applications written in multiple languages, visualize the architecture (improving code security by detecting insider threats via rogue data access) and reduce false positives.
  • The vendor provides a scoring mechanism that can be calibrated to organization-specific criteria to track whether an application’s health is increasing or deteriorating from security, reliability and multiple other standpoints.
  • CAST provides the ability to set up a plan of action based on a particular objective, such as reducing technical debt or improving the security score.
  • Client feedback favorably rated the scalability and performance of the SAST engine in analyzing larger applications.

Cautions

  • Clients perceive CAST as an application quality testing solution provider, rather than an established application security vendor.
  • The vendor does not provide SCA as part of its main SAST offering, AIP, but only with CAST Highlight.
  • CAST’s SAST solution is missing key software development life cycle (SDLC) integration features, such as a spellchecker, incremental scanning and, most importantly, an integrated development environment (IDE) plug-in.
  • CAST clients often cite setup, implementation and customization as areas for improvement. Also, the vendor does not provide 24/7 support.
  • CAST does not provide DAST or IAST, and has no partnerships to deliver either.

Checkmarx


Known originally for its SAST offering, Checkmarx has expanded the scope of its portfolio to include SCA, IAST and — via a partnership — managed DAST. An on-demand interactive educational offering, CxCodebashing, provides developers with just-in-time training about vulnerabilities within code. The vendor’s SCA product is essentially new this year, with an internally developed version replacing a previous OEM offering retaining the same name, CxOSA. The SCA offering also supports new container scanning capabilities to aid in identifying problematic open source in images. Another change is the addition of a Docker and Linux-based SAST scanning engine. This addresses past complaints around a requirement for Windows to support local scanning engines, and also enables a new “elastic” scanning facility allowing customers to add (or remove) scanning engines to reflect changing workloads. Another update offers expanded prioritization of results based on a confidence rating (derived from a machine learning [ML] algorithm) and other variables, such as user-defined policies, severity ratings, age and several others.
Checkmarx offers a mix of deployment options for most of its products, with identical capabilities available in on-premises, cloud and managed service forms. Based in Tel Aviv, the vendor offers a global presence in North and South America, Europe, and the Asia/Pacific region, including Japan. Principal support centers are located in Texas, Israel and India. Checkmarx was acquired on 16 March 2020 by private equity firm Hellman & Friedman from Insight Ventures, which retains a minority interest. As this acquisition occurred following the deadline for this Magic Quadrant, any impact on the vendor’s position was not addressed.

Strengths

  • The vendor’s portfolio competes well for various use cases, including DevSecOps, cloud-native development and more traditional development approaches where SAST is a central requirement. SAST capabilities support a broad variety of programming languages and frameworks, and include support for incremental and parallel tests.
  • CxIAST employs a passive scanning model and results are correlated with SAST findings, as are issues discovered within open-source packages. This helps with validation of results, and can aid in confirming that a vulnerability is within executable code.
  • Tool integration within IDEs and the build environment is frequently cited as a strength by customers.
  • Remediation guidance, augmented by the optional CxCodebashing education component, helps developers understand vulnerabilities and how they can be resolved. A graph-based display of code execution paths and vulnerabilities highlights a proposed “best fix” location. Also, chat-based guidance provides fix advice from Checkmarx support staff.
  • The product suite offers guidance on the prioritization of vulnerabilities, with reports factoring in data such as the severity of the vulnerability, impact, source and sink information, and confidence level. Confidence levels are derived from a mix of technologies, including an ML algorithm to validate results and correlation between SAST findings and those discovered by IAST or SCA tests.
  • Through its various components, the Checkmarx portfolio offers basic support for both API security testing and container scanning. The vendor indicates that it plans to continue investment in these areas.

Cautions

  • Reflecting its history, the bulk of the vendor’s customers are for its CxSAST product, although Checkmarx continues to invest in expanding its portfolio and capabilities, and other products show growth.
  • CxDAST is based on a third-party technology relationship and is only available as part of a managed service offering. For use cases where DAST is a primary — or the only — element of an AST effort, the offering may be less attractive.
  • CxOSA, despite retaining the existing name and feature set, is essentially a new product and is available only as an add-on to the CxSAST product.
  • Licensing continues to be raised as a source of dissatisfaction by some customers, which may be a consequence of the mix of pricing models offered. Especially for SAST, these are generally based on the number of users or projects/applications — an approach that is emerging as an industry standard. When combined with multiple license models (perpetual, term and subscription), prospective customers gain flexibility, along with complexity. Rankings for negotiation flexibility, pricing and value are on par with competitive vendors, and are generally positive.

Contrast Security

Based in the U.S., Contrast Security is an AST vendor that also sells in the U.K., EU and the Asia/Pacific region. The Contrast platform consists of three primary products: IAST (Contrast Assess), SCA (Contrast OSS) and RASP (Contrast Protect). Contrast Assess incorporates Contrast OSS, which automatically performs SCA through both static scans and runtime analysis as part of the Contrast platform. Contrast Protect can be licensed independently or jointly with Contrast Assess. The vendor also offers a central management console, Contrast TeamServer, which can be delivered as a service or on-premises. The testing approach, known as self-testing or passive IAST, does not require an external scanning component to generate attack patterns to identify vulnerabilities; rather, it is driven by application test activity, such as quality assurance (QA), executed automatically or manually.
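Agent-based IAST of this kind is typically enabled by attaching the agent to the application runtime. For a JVM application the pattern looks roughly like the following (the agent path and the server-name property are illustrative assumptions, not Contrast's exact distribution layout; consult the vendor's documentation):

```sh
# Hypothetical invocation: attach the IAST agent to the JVM under test.
# The agent instruments classes at load time and reports vulnerabilities
# exercised by normal QA traffic, with no separate attack scanner required.
java -javaagent:/opt/contrast/contrast.jar \
     -Dcontrast.server.name=qa-server-01 \
     -jar my-application.jar
```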
Contrast is a good fit for organizations pursuing a DevOps methodology and looking for approaches to insert automated, continuous security testing that is developer-centric. Organizations that have developers with previous security experience favor Contrast for its lower operational complexity and a quick start into DevSecOps. Some are skipping the traditional SAST/DAST starting point and going straight to IAST. Contrast offers service integrations with the Eclipse, Rational Application Developer for WebSphere Software, IntelliJ IDEA, Visual Studio (VS) Code and VS IDEs through plug-ins that users can install from the vendor’s public IDE marketplace. Contrast provides a comprehensive REST API, as well as out-of-the-box integrations with common DevOps tools such as Chef, Puppet, Jenkins, Azure Pipelines, Maven and Gradle.

Strengths

  • Contrast Assess, combined with the vendor’s SCA product (Contrast OSS), is a good choice for organizations leveraging a DevOps or agile approach, offering a quick starting point and rapid integration across the entire SDLC. Gartner client feedback indicates that this also helps in embedding AST among development teams without security testing expertise, because the agent can identify vulnerabilities through normal application testing. Contrast Assess is one of the most broadly adopted IAST solutions and continues to compete on nearly every IAST shortlist.
  • Contrast’s reporting tool, TeamServer, provides a comprehensive view of code, dependencies, vulnerabilities and project security status in an easy-to-use, intuitive platform. Status is reported as a grade (A through F), making it simple to consume status quickly across complex DevSecOps projects. It also includes a tool for representing dependencies and services in the form of a map, which makes it easier to visualize the attack surface.
  • Contrast has put significant effort into scanning COTS software, making it a good choice for enterprises with large implementations of third-party code that might be concerned with COTS application security and dependencies on third-party application libraries.
  • Clients highly rate the ease of use of the tool and the vendor’s support. Contrast introduced a Community Edition for Assess and Protect to allow users to utilize the fully functional platform for a limited number of applications.
  • Contrast’s platform provides AST, SCA and RASP for Java, .NET Framework, .NET Core, Node.js, Ruby and Python.

Cautions

  • Contrast Security offers a full IAST and SCA solution, and does not provide stand-alone SAST or DAST tools or services, although its IAST tools can do similar testing in some cases.
  • Client feedback suggests that, due to the passive testing model, effective test coverage requires clients to have mature test automation capabilities or to run Contrast Assess in conjunction with DAST or “DAST-lite” tools. To address this, Contrast introduced a “route coverage” feature to give clients visibility into their test coverage by highlighting which parts of the application were exercised or still need to be covered.
  • Contrast can test mobile application back ends, but not the client-side code of the mobile app, and does not conduct behavioral analysis or check front-end code vulnerabilities, such as DOM-based XSS.
  • Contrast does not feature some of the nice-to-have ongoing support mechanisms that organizations with no AST experience often look for (for example, IDE gamification, human-checked results), although it does support chat with staff for specific questions.

GitLab


GitLab is a global company with headquarters in the U.S. GitLab provides a continuous integration/continuous delivery (CI/CD)-enabling platform and offers AST as part of its Ultimate/Gold tier. The vendor combines proprietary and open-source scanner results within its own workflows, and provides SAST and DAST. GitLab also provides SCA functionality with Dependency Scanning, and open-source scanning capabilities with Container Scanning and License Compliance. A new entrant in the Magic Quadrant, in the past 12 months GitLab introduced support for Java, remediation recommendations and a security dashboard. It also integrated technology from its acquisition of Gemnasium into its SCA offering. GitLab also added, among other features, Secret Detection to its SAST. This functionality scans the content of the repository to identify credentials and other sensitive information that should not be left unprotected in the code.
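As a sketch of how these scanners are wired into a pipeline, GitLab's documented approach is to include its maintained CI templates from a project's .gitlab-ci.yml (template names follow GitLab's published conventions; verify availability for your tier and version):

```yaml
# .gitlab-ci.yml: pull GitLab's maintained scanner jobs into the pipeline
include:
  - template: Security/SAST.gitlab-ci.yml                # static analysis
  - template: Security/Secret-Detection.gitlab-ci.yml    # committed credentials
  - template: Security/Dependency-Scanning.gitlab-ci.yml # SCA
  - template: Security/Container-Scanning.gitlab-ci.yml  # image vulnerabilities
```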
GitLab will prove a good fit for organizations that use its platform as a development environment, and for organizations looking for a broader development CI/CD-enabling solution that comes with a developer-friendly and affordable security scanning option.

Strengths

  • GitLab has a single platform for development and security for the entire SDLC, which allows for easier integration of security, as well as easier acceptance and adoption for developers. Security professionals have visibility into the vulnerabilities at the time the code is committed, and when modifications, approvals and exceptions are made, and can also enforce security policies in the merge request flow.
  • The vendor’s SAST, Secret Detection, DAST, Dependency Scanning, Container Scanning and License Compliance offerings are included in the Ultimate/Gold tier. Its pricing is publicly available, and it provides a relatively affordable option.
  • GitLab provides DAST on a developer’s individual code changes within the code repository. It does so by recreating a review application based on the code that is already committed in the repository.
  • Users can configure requirements for pipelines, and ensure that some, or all, of the security scans are a part of that.
  • GitLab provides container scanning for vulnerabilities, and for code deployments in Docker containers and those using Kubernetes.

Cautions

  • GitLab’s SAST lacks features that are available in more mature offerings. Language coverage is limited and the dashboard lacks the granularity and customizability of more established tools. Its SAST offering lacks features such as quick fix recommendations. Although GitLab can test developer code before merging it, it does not have an IDE plug-in and does not provide real-time spell checking.
  • GitLab is new to the AST space and Gartner clients haven’t traditionally considered it a security vendor. Its security offering is relatively new, and doesn’t have extensive end-user feedback.
  • GitLab’s AST comes as part of the broader development platform. Organizations that do not use GitLab for development will find stand-alone security scanning from the vendor impractical.
  • The vendor does not provide specific mobile AST support and its DAST offering is essentially Open Web Application Security Project’s (OWASP’s) open-source ZAP tool.

HCL Software

HCL Software is, at least in name, a newcomer to this Magic Quadrant, having acquired IBM’s AppScan products and technologies after the company exited the application security business. The acquisition was preceded by a two-year span in which HCL was responsible for development and maintenance of the product line, while IBM continued the sales and marketing functions. HCL AppScan is suitable for a variety of use cases, making it attractive to larger organizations with a mix of requirements. HCL Software is based in India. Regional sales and support offices are located in North and Central America, Europe, and several countries in the Asia/Pacific region.
The overall structure of the product portfolio remains largely unchanged, albeit somewhat complex. On-premises products include AppScan Source for SAST, and AppScan Standard and AppScan Enterprise for desktop and on-premises DAST, respectively. AppScan Enterprise Server is an on-premises server platform for sharing policies, results and DAST scanning manually and via automation. Service-based offerings are all grouped under the AppScan on Cloud brand and include both SAST and DAST support. HCL’s IAST offering, called Glass Box, is largely an extension of — and tightly integrated with — its DAST products (both on-premises and cloud-based versions). Software composition analysis is provided by the AppScan on Cloud service, and is based on an HCL static analysis engine coupled with an OEM database provided by WhiteSource. Mobile testing is available via AppScan Source for static analysis, and AppScan on Cloud for DAST, IAST and behavioral monitoring. API-specific tests are delivered through a combination of SAST and DAST. In general, products can be deployed on-premises, in the cloud or in a hybrid arrangement.
During the past 12 months, significant effort has been expended on reworking the product line to offer more standard functionality across platforms. For example, its Bring Your Own Language capability enables more consistent language coverage across platforms. Support for Apex, Ruby and Golang, available in the cloud version of AppScan, was added to the on-premises version of the product. Customers and partners can also use the capability, enabling further customization.

Strengths

  • AppScan enjoys a good reputation for DAST scanning, sharing the same basic technology across the portfolio. The desktop-based AppScan Standard is a customizable offering especially suited for manual assessments. Incremental scanning allows for faster scans, and an “action-based” browser recording technology enables testing of complex workflows and improved insight into single-page applications where not all activity is captured in standard GET/POST operations.
  • AppScan, while still owned by IBM, was one of the first products to heavily leverage ML techniques for application security tasks, including the provision of Intelligent Finding Analytics (IFA), which helps improve accuracy and identify a “best fix” location for vulnerabilities. Under HCL, progress has continued with an effort to apply ML-based analytics to DAST findings generated by the vendor’s cloud customers to significantly improve speed and accuracy.
  • HCL offers good support for mobile application testing, leveraging its SAST, DAST, SCA and IAST components, as well as behavioral analysis.
  • Support for DevOps environments is competitive with other vendors and includes integrations into common IDEs and CI/CD toolchain components. Developers can perform scans in a private sandbox, reviewing results before committing code. The tools provide standard explanatory and supportive information, supplemented by optimal fix information and vulnerability grouping provided by IFA. No formal computer-based training or “just in time” training is provided, although such support — increasingly a staple of AST tools — is reportedly on the roadmap.

Cautions

  • Any change in ownership is potentially disruptive, although the two-year transfer period from IBM to HCL appears to have eased the transition. However, HCL is at a disadvantage in acquiring new customers, given its current lack of brand awareness in the market. Thus, while the vendor offers a similar product vision as other portfolio vendors, it is ranked lower for its ability to execute.
  • The AppScan portfolio is robust, but complex, with inconsistent features across platforms. For example, Open Source Analysis is only available in the cloud, and mobile testing can span environments. HCL is taking steps — such as with the Bring Your Own Language facility — to rationalize features across the full range of the portfolio, although the result is not yet complete.
  • AppScan’s IAST capability is tightly integrated with the DAST offering and cannot be purchased independently. A passive IAST approach, increasingly in favor among DevOps teams, was released on 25 March 2020, after the deadline for this evaluation, and therefore is not considered.
  • The overall pricing model for HCL’s portfolio is complex. First, cloud offerings are based on a subscription model, but on-premises products are only available with traditional perpetual licenses (including a term-based variation). That disparity complicates purchasing for organizations wishing to pursue a hybrid deployment model. Other pricing metrics vary and are based on the number of applications, users (with varied types of user licenses on offer) and per-scan pricing. Buyers must evaluate multiple options to obtain optimal pricing terms.

Micro Focus

Based in the U.K., Micro Focus is a global provider of AST products and services under the well-known Fortify brand. Micro Focus’ sales organization has a broad global reach, with a strong presence in the North American, EMEA and Central American markets. Fortify offers Static Code Analyzer (SAST), WebInspect (DAST and IAST), Software Security Center (its console), Application Defender (monitoring and RASP) and Fortify Audit Workbench (AWB). Fortify provides its AST as a product, as well as in the cloud with Fortify on Demand (FoD). The hybrid model allows the FoD tools to scan code and integrate results with the Fortify reporting tool and the developer environment.
During the past year, Fortify has expanded language support (26 app stacks for SAST) and integration with common CI/CD tools like Jenkins/Jira. Micro Focus has also expanded its partnership with Sonatype to a full OEM agreement and integrated its Static Code Analyzer tool directly into FoD, although it still supports Black Duck and WhiteSource. Fortify’s AST offerings should be considered by enterprises looking for a comprehensive set of AST capabilities — either as a product or service, or combined — with enterprise-class reporting and integration capabilities.
Micro Focus has put investment into a more DevSecOps developer-centric model. This includes moving DAST more fully into the hands of development by providing coordination between FoD scans and code in the IDE. It is focusing on eliminating impediments to fully automated workflows with features like macro autogeneration and API scanning improvements. Fortify supports cloud-friendly deployment models and simplified orchestration, and is adding support for containerization. To facilitate a faster, cleaner DevSecOps model, Fortify has added RESTful APIs and a command line interface for both static and dynamic testing.
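As an illustration of the command line interface for static testing, Fortify's long-standing sourceanalyzer workflow follows a translate-then-scan pattern (the build ID and file names below are placeholders; exact flags can vary by product version):

```sh
# Translate the sources while the normal build runs, tagged with a build ID.
sourceanalyzer -b myapp mvn clean package
# Scan the translated code and write findings to an FPR file for review
# in Audit Workbench or Software Security Center.
sourceanalyzer -b myapp -scan -f results.fpr
```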

Strengths

  • Fortify is an excellent fit for large enterprises with multiple, complex projects and a variety of coding styles and experience levels. It has shown flexibility and strength in dealing with issues such as legacy code replacement and modern development styles like microservices, and has experience supporting M&A activity.
  • Swagger-supported RESTful APIs and the integrated Fortify Ecosystem were built to support modern DevSecOps organizations, a marked improvement over older versions of the product suite. Open-source integrations, both in FoD and with SSC, Jira and Octane automation, are also important steps in this direction.
  • Fortify offers mobile testing with FoD directly, as well as tools with SCA and WebInspect in support of mobile application scanning.
  • While no one has completely solved the issue of false positives, Micro Focus has made significant improvements in simplifying and reducing them. Micro Focus has extended its Fortify Audit Assistant feature to allow teams the flexibility either to manually review artificial intelligence (AI) predictions on issues, or to opt in to “automatic predictions,” which allow for completely in-band automated triaging of findings.

Cautions

  • While Fortify has begun to show the results of Micro Focus’ investment, overall market awareness has not yet caught up. Gartner client inquiry calls do not yet reflect the new functionality and are still dominated by discussions about the older versions of the product suite.
  • Fortify is known for its depth and accuracy of results, which meets the needs of enterprise customers that then leverage contextual-based analysis. Less mature organizations looking for incremental improvements over time may experience challenges with the complexity and volume of unfiltered results.
  • While Fortify offers highly flexible license and pricing models, during inquiries clients report that the pricing remains complicated and the on-premises operational complexity is high.
  • Automated scans are faster than they were in older versions of the product, and a good fit for DevSecOps, but optional human-audited scan results in FoD are out of band and can take significantly longer. Fortify balances this challenge by providing customers with the option to enable in-band, AI-driven audits without human intervention, both on-premises and with FoD.

Onapsis


Founded in 2009 in Buenos Aires, Argentina, Onapsis is a U.S.-based company with centers in the U.S., Germany and Argentina. In June 2019, it acquired Virtual Forge, a prominent player in the SAP code security space. Onapsis has established or strengthened relationships with leading strategic system integrators, managed security service providers (MSSPs), technology alliance partners and value-added resellers (VARs), such as Accenture, Deloitte, Optiv, deepwatch and others, to offer services to protect organizations using SAP and Oracle.
The business-critical application space has traditionally used code reviews by developers and security personnel, and has relied on existing defense-in-depth measures to protect these applications. Onapsis offers standard AST tools (SAST/DAST) and makes it easy for ERP developers to integrate them into their existing processes. Onapsis is strictly a business-application-focused tool supporting the common languages used in development (e.g., ABAP, ABAP Objects, Business Server Pages [BSP], Business Warehouse Objects, SAPUI5, XSJS and SQLScript). The vendor is a good fit for companies developing tools (in-house or as a third party) that want to adopt a more repeatable DevSecOps process.

Strengths

  • Onapsis supports the DevSecOps cycle with plug-ins and services that fit into existing business-critical developer workflows.
  • The vendor has good support for SAP and Oracle applications as they move to the cloud, such as S/4HANA, C/4HANA, Workday, Salesforce, SuccessFactors, Ariba and others.
  • Its data flow and tracking options are especially useful for monitoring compliance risks in applications in financial services, human capital management (HCM), supply chain management (SCM) and other applications.
  • Onapsis supports a number of complex programming languages and offers a good web-based interface for scanning and managing results across multiple projects that fits well with other ERP development tools.
  • The vendor also supports SAP HANA Studio, Eclipse, SAP Web IDE and SAP ABAP development workbench, with similar workflows and processes across the different development IDEs.

Cautions

  • Although Onapsis enjoys extensive cooperation with SAP and Oracle, there is some risk as both are still competitors in this space with their own products (e.g., SAP’s Code Vulnerability Analyzer).
  • With a focus on applications supported by SAP and Oracle, overall programming language support is limited compared to other tools in the AST space, but is focused on common business-critical application developers.
  • Onapsis has an IDE plug-in for its toolsets, but the experience varies significantly between them. Results of the scans are available through PDF reports within the developer environment, or via a web interface. Onapsis also offers full integration with SAP’s cloud-based Web IDE, which provides a fully integrated developer experience. For ABAP, there is also a fully integrated experience.
  • DAST support is limited to workflow and call graph analysis.

Rapid7


Traditionally known for its DAST solutions, including InsightAppSec, Rapid7 has begun to position other products in its portfolio as application security solutions. This includes the vulnerability assessment solution InsightVM, which provides some software composition analysis as part of its container assessment capabilities. The vendor’s tCell product — a RASP offering acquired in late 2018 — provides insights into code execution and vulnerabilities, generally postdeployment. As a RASP offering, tCell relies on the same basic technology as many IAST testing tools, but is designed as an application protection solution, not a testing tool.
Rapid7 retains its reputation for having a strong DAST offering, and is especially suited for use cases where the combination of DAST and vulnerability assessment is valued — such as testing the security of web-based applications, especially where organizations face strong compliance requirements. The addition of tCell provides organizations with an opportunity to work with RASP-based app protection and the insights it can provide. Improvements over the past year include enhancements to authentication support, with the addition of multiple authentication techniques enabling improved application scanning. The vendor has also added support for multiple application frameworks (such as Angular, React and others), improving its ability to test single-page applications, which are increasingly common. Integration is provided with Jira and a variety of CI/CD tools (with additional support available via API), but most in-depth analysis of results takes place in the product’s dashboard. (A Chrome browser extension enables developers and others to interact regarding results without directly accessing the dashboard.)
Rapid7 is based in the U.S., with sales and support offices primarily located in North America and EMEA, and with some presence in the Asia/Pacific region. InsightAppSec is offered as a cloud-based service, with options for on-premises deployments and as a managed service.
Strengths
  • Rapid7 continues to enjoy a strong reputation for its DAST tool, especially in support of in-depth custom manual assessments. Tests can be performed interactively, allowing for the manipulation of parameters, and aiding troubleshooting and the validation of fixes.
  • Rapid7’s Universal Translator technology analyzes requests to identify various formats, parses them and normalizes the data to a standard form to create similar attacks across tested formats. For formats that cannot be crawled, such as JSON and REST web services, this is accomplished via user-recorded traffic.
  • Expanded support for application frameworks makes Rapid7 an attractive choice for testing modern, single-page applications.
  • Rapid7 continues to enjoy good marks from most users for the product’s ease of use, dashboard and reporting. For example, developers are provided information such as recommendations, description and error information, and attack replay functionality, which enables them to understand, patch and retest vulnerabilities.
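To make the Universal Translator idea concrete, the sketch below shows the general technique of normalizing differently formatted request bodies into a common parameter map, applying the same attack payload, and rendering the result back into the original format. This is a minimal, hypothetical illustration of the approach, not Rapid7’s implementation.

```python
import json
from urllib.parse import parse_qsl, urlencode


def normalize(body: str, content_type: str) -> dict:
    """Parse differently formatted request bodies into one parameter map."""
    if content_type == "application/json":
        return json.loads(body)
    if content_type == "application/x-www-form-urlencoded":
        return dict(parse_qsl(body))
    raise ValueError(f"unsupported format: {content_type}")


def inject(params: dict, payload: str) -> dict:
    """Apply the same attack payload to every parameter, format-agnostically."""
    return {name: payload for name in params}


def render(params: dict, content_type: str) -> str:
    """Serialize the mutated parameters back into the original format."""
    if content_type == "application/json":
        return json.dumps(params)
    return urlencode(params)
```

Because the attack logic operates on the normalized map, the same test case is generated for a JSON API and an HTML form alike; formats that cannot be crawled would be populated from recorded traffic instead of a parser.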
Cautions
  • Rapid7’s inclusion of vulnerability assessment and RASP in its application security portfolio expands the scope of its offering beyond DAST, but the additional tools don’t offer feature parity with competitive solutions. For example, while InsightVM and tCell help identify vulnerabilities in built applications and containers, they do not warn of restrictive open-source licenses — a standard capability for SCA tools. (Rapid7 announced a partnership with SCA specialist Snyk as this Magic Quadrant was being finalized. Any resulting improvements in SCA capabilities will be reflected in future evaluations, as those changes materialize.)
  • While test results are highly detailed, the tools lack direct integration with IDEs, prompting developers to switch to the InsightAppSec dashboard (or browser extension) to review data and supporting information. It is possible to incorporate vulnerability data into a Jira ticket, which would assist in providing information to a developer more directly.
  • While individual Rapid7 products are built on a common platform, they lack the correlation of results across tools that other vendors provide, such as between IAST and SCA. However, correlation is provided between DAST and a selection of other vendors’ SAST tools. (Rapid7 lacks a SAST offering of its own.)
  • Rapid7 does not support distributed scanning.

Synopsys
Based in the U.S., Synopsys is a global company with offerings in the software and semiconductor areas. While Synopsys has been executing a strategy to expand its AST portfolio during the past five years, 2019 was primarily spent integrating the products technologically and consolidating its offerings. This has been successful, and the market now sees these products as a well-integrated whole, with significant movement from single point solutions to multiproduct purchases.
The Polaris Software Integrity Platform has become the central management tool for all Synopsys AST products (except its DAST managed service, which is still stand-alone). Code Sight, the vendor’s IDE plug-in management tool, has been integrated into the product suite as well, with the goal of providing a complete in-editor experience for developer-based security testing. While primarily aimed at DevSecOps organizations, this developer-centric model is recommended by Gartner as a best practice, and all developers, regardless of methodology, benefit from that approach. Synopsys should be considered by organizations looking for a complete AST offering that want variety in AST technologies, assessment depth, deployment options and licensing.
In January 2020, Synopsys bought DAST and API security provider Tinfoil Security and is adding it to its suite of products; however, this acquisition occurred after the cut-off date for this Magic Quadrant and our analysis does not take it into account.
Strengths
  • The Synopsys suite is a relatively easy entry point for organizations that may be just starting to take a developer-centric approach to security, as well as more advanced organizations that find integrating and managing a set of point solutions to be too time-consuming.
  • The Code Sight plug-in is a good fit for DevOps shops. It has strong integration with IDEs to provide feedback early in the development phase. The Code Sight plug-in leverages the IDE to act as an interface to all tools on Polaris, with an emphasis on remediation. This fits well with most development teams, regardless of maturity.
  • Support for CI/CD tools (for example, Jenkins and Jira reporting) has increased significantly in 2019, with support in Coverity, Seeker and Black Duck being used as part of the overall build/test/deploy cycle.
  • Seeker continues to be one of the most broadly adopted IAST solutions, with good SDLC integration. Synopsys has an agent-only IAST for Seeker that does not require an inducer. This supports the passive testing model offered by some IAST competitors.
  • Seeker compliance reports now offer GDPR and Common Attack Pattern Enumeration and Classification vulnerability tracking, in addition to PCI DSS, OWASP and CWE tracking.
Cautions
  • Gartner client feedback indicates that the vulnerability clarification and fix recommendation is limited, compared with some of the competitors.
  • Gartner clients from small and midsize businesses have expressed that, despite interest in the vendor’s solutions, the price is often outside their budgets, especially for nascent programs, leading them to seek less costly alternatives. Synopsys’ sales process is also complicated, and clients have reported trouble navigating it.
  • Synopsys offers DAST only as a managed service. Synopsys AST managed services are orchestrated through a cloud-based portal that is separate from Polaris; however, managed service testing results can be viewed through the Polaris reporting tool. Emphasis for dynamic testing is concentrated on the Seeker IAST product line.
  • While Seeker has reports for various regulatory compliance regimes, compliance is often much more complicated than a set of scans. Users should be aware that they are responsible for the full scope of audit and regulatory compliance measures.

Veracode
Headquartered in the U.S., Veracode is an AST provider with a strong presence in the North American market, as well as in the European market. The Veracode offering includes a family of SAST, DAST, IAST and SCA services surrounded by a policy management and analytics hub, as well as e-learning modules. Greenlight is a SAST plug-in for the Eclipse, IntelliJ and Visual Studio IDEs. Veracode also provides mobile AST and an application attestation program called Veracode Verified, which enables companies to provide a third-party attestation of their products’ security level to a prospective buyer.
During the past 12 months, Veracode introduced support for modern application deployments in the cloud and containers. Also, it merged its original SCA offering and the recently acquired SourceClear SCA product into a new SCA offering that can scan both locally and in the cloud. Veracode also further extended its language coverage and introduced continuous alerting on new vulnerabilities. On 1 October 2019, Veracode released its IAST, which can run in the build phase and the QA test environment.
Veracode will meet the requirements of organizations looking for a comprehensive portfolio of AST services along with tailored AST advice, broad language coverage, and ease of implementation and use.
Strengths
  • Gartner clients rate highly the quick setup, ease of use and scalability of the solution, as well as the vendor’s willingness to work with customer requirements.
  • Veracode’s services include tailored vulnerability and remediation advice, and reviews of the mitigations where needed, which can be useful to reduce remediation time and in organizations where developers are not application security experts. Veracode results come with “fix first” recommendations that consider how easy an issue is to fix and how much impact it has, and then recommend the best location to fix the issue.
  • Veracode feeds the intelligence collected from its cloud-based scans back to its engine and database. This is used to improve accuracy through SaaS learning and faster SCA updates, as well as to provide advice for rapid response to known vulnerabilities.
  • Veracode’s SCA offering allows both agent-based local and cloud-based scanning, and provides a unique database with 50% more vulnerabilities than the National Vulnerability Database. Veracode can also test third-party applications or SaaS offerings with their owners’ consent, as well as COTS applications such as the ones provided by independent software vendors. To help with the focus on exposed applications, Veracode’s SCA offering can deprioritize vulnerabilities by checking whether they are in the execution path of the application.
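The execution-path check described above is, in essence, a reachability analysis over the application’s call graph: a vulnerable function that cannot be reached from any entry point is a lower-priority finding. The sketch below illustrates the general technique with a breadth-first search; the call-graph and finding structures are hypothetical, and this is not Veracode’s implementation.

```python
from collections import deque


def reachable(call_graph: dict, entry_points: list) -> set:
    """Breadth-first search: every function reachable from the entry points."""
    seen = set(entry_points)
    queue = deque(entry_points)
    while queue:
        fn = queue.popleft()
        for callee in call_graph.get(fn, []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return seen


def prioritize(findings: list, call_graph: dict, entry_points: list) -> list:
    """Deprioritize findings whose affected function is never executed."""
    live = reachable(call_graph, entry_points)
    return [dict(f, priority="high" if f["function"] in live else "low")
            for f in findings]
```

A vulnerability in a dead, never-called helper would surface as "low" priority, letting teams concentrate remediation effort on components that are actually in the execution path.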
Cautions
  • Veracode does not offer AST tools that can be installed on-premises, only AST as a service. It provides Internal Scanning Management that can be located on the client’s network to support the testing of internal applications, with scanning configured and controlled via the cloud service.
  • Veracode does not offer dynamic scanning of APIs, a capability increasingly available from competitors, relying instead on static and interactive AST. Veracode also does not offer API discovery.
  • Some Gartner clients have cited first line of support from the vendor as an item to be improved. Additionally, even though Veracode has a worldwide presence, it only provides support in English.

WhiteHat Security

WhiteHat Security’s Sentinel platform continues to stand out in use cases where DAST is a requirement, including web-based applications and APIs, both in production and preproduction. In addition, partly by virtue of a partnership with NowSecure, it ranks well for mobile AST, where it combines behavioral testing with SAST and DAST scans of popular mobile languages such as Java, Objective-C and Swift. Software composition analysis is also provided and is now available as a stand-alone product offering. Customers continue to compliment the vendor on human and ML-based augmentations to testing, including validation of results and optional penetration testing and business logic assessments. WhiteHat continues to be unique with its Directed Remediation capabilities, where fixes developed by the WhiteHat Threat Research Center are automatically suggested to developers for selected findings. It was the first to offer chat-based assistance to developers for help in understanding specific vulnerabilities, although other vendors have also begun to provide this service. WhiteHat’s offerings are service-based, although the vendor offers a virtual appliance for local scanning, with results sent to the cloud for verification, correlation and inclusion in dashboards and reporting.
WhiteHat was acquired by NTT Security in July 2019 and operates as an independent subsidiary. Sales and support capabilities have traditionally focused heavily on North America. The vendor has also maintained a limited presence in Europe and the Asia/Pacific region. The NTT acquisition opens the possibility of broader sales and support channels.
Strengths
  • WhiteHat has a strong reputation among Gartner clients as a DAST-as-a-service provider and should be considered by buyers seeking an AST SaaS platform.
  • WhiteHat continues to execute toward its strategy of addressing the requirements of DevOps organizations with differentiated SAST, SCA and DAST products for the development, build and deployment phases of the life cycle. Generally, options earlier in the process — such as SAST and SCA for developers — are optimized for fast return of results by limiting the scope of testing. Later phases provide more in-depth checks and add options for human verification and testing. The vendor continues to expand ML-based automated verification to help speed the process, and to better align to the needs of rapidly iterating development teams.
  • WhiteHat’s customers continue to value the vendor’s strong support services. As noted, these include vulnerability verification, manual business logic assessments/penetration testing and the ability to leverage its Threat Research Center engineers to discuss findings.
  • WhiteHat SAST remediation capabilities extend beyond identifying the optimal point of remediation: for a portion of Java and C# findings, the tool automatically provides custom code patches that can be copied and pasted into the code to fix identified vulnerabilities.
  • WhiteHat Sentinel Dynamic provides continuous, production-safe DAST of production websites with automatic detection and assessment, and alerts for newly discovered vulnerabilities.
  • DAST results can be fed to a variety of web application firewall solutions, enabling the creation of rules to mitigate vulnerabilities until they can be remediated in code.
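The DAST-to-WAF handoff amounts to translating each confirmed finding into a "virtual patch" rule that blocks the exploiting request until the code is fixed. The sketch below renders a ModSecurity-style rule from a hypothetical finding record; the field names and rule template are illustrative assumptions, not the output format of any particular product.

```python
def finding_to_waf_rule(finding: dict, rule_id: int) -> str:
    """Render a ModSecurity-style virtual-patch rule for one DAST finding.

    `finding` is a hypothetical record with `path`, `param`, `pattern`
    and `vuln` fields; real products emit richer schemas.
    """
    return (
        # Match requests to the vulnerable endpoint...
        f'SecRule REQUEST_URI "@beginsWith {finding["path"]}" '
        f'"id:{rule_id},phase:2,deny,status:403,'
        f"msg:'Virtual patch: {finding['vuln']}',chain\"\n"
        # ...and, chained, block only when the vulnerable parameter
        # carries the exploiting pattern.
        f'    SecRule ARGS:{finding["param"]} "@rx {finding["pattern"]}"'
    )
```

Because the rule is scoped to the specific endpoint and parameter, legitimate traffic continues to flow while the exploit path is blocked, buying time for a code-level fix.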
Cautions
  • WhiteHat does not offer an IAST solution, although it does use SAST findings to inform DAST scans for improved accuracy.
  • Customer feedback indicates some dissatisfaction with the products’ user interfaces. IDE plug-ins, for example, are functional, but supplementary and explanatory information is often poorly formatted. Findings can be fed to defect tracking systems, such as Jira.
  • WhiteHat’s SAST offering has limited language support, compared with competitive offerings.
  • WhiteHat does not offer AST as a tool, only as a cloud service. However, it can provide an on-premises virtual appliance that performs scans at a customer’s site, feeding results to the cloud for verification, correlation and inclusion in dashboards for reporting and analysis.

Vendors Added and Dropped

We review and adjust our inclusion criteria for Magic Quadrants as markets change. As a result of these adjustments, the mix of vendors in any Magic Quadrant may change over time. A vendor’s appearance in a Magic Quadrant one year and not the next does not necessarily indicate that we have changed our opinion of that vendor. It may be a reflection of a change in the market and, therefore, changed evaluation criteria, or of a change of focus by that vendor.

Added
Onapsis, HCL Software and GitLab were added to this Magic Quadrant.

Dropped
Acunetix, IBM and Qualys were dropped from this Magic Quadrant based on our inclusion and exclusion criteria.

Inclusion and Exclusion Criteria

For Gartner clients, Magic Quadrant and Critical Capabilities research identifies and then analyzes the most relevant providers and their products in a market. Gartner uses, by default, an upper limit of 20 vendors to support the identification of the most relevant providers in a market. On some specific occasions, the upper limit may be extended where the intended research value to our clients might otherwise be diminished. The inclusion criteria represent the specific attributes that analysts believe are necessary for inclusion in this research.
To qualify for inclusion, vendors needed to meet the following criteria as of 1 November 2019:
  • Market participation: Provide a dedicated AST solution (product, service or both) that covers at least two of the following four AST capabilities: SCA, SAST, DAST or IAST, as described in the Market Definition/Description section.
  • Market traction:
    • During the past four quarters (4Q18 and the first three quarters of 2019):
      • Must have generated at least $22 million of AST revenue, including $17 million in North America and/or Europe, the Middle East and Africa (excluding professional services revenue)
  • Technical capabilities relevant to Gartner clients:
    • Provide a repeatable, consistent subscription-based engagement model (if the vendor provides AST as a service) using mainly its own testing tools to enable its testing capabilities. Specifically, technical capabilities must include:
      • An offering primarily focused on security tests to identify software security vulnerabilities, with templates to report against OWASP top 10 vulnerabilities
      • An offering with the ability to integrate via plug-in, API or command line integration into CI/CD tools (such as Jenkins) and bug-tracking tools (such as Jira)
    • For SAST products and/or services:
      • Support for Java, C#, PHP and JavaScript at a minimum
      • Provide a direct plug-in for Eclipse or Visual Studio IDE at a minimum
    • For DAST products and/or services:
      • Provide a stand-alone AST solution with dedicated web-application-layer dynamic scanning capabilities.
      • Support for web scripting and automation tools such as Selenium
    • For IAST products and/or services:
      • Support for Java and .NET applications
    • For SCA products and/or services:
      • Ability to scan for commonly known malware
      • Ability to scan for out-of-date vulnerable libraries
    • For containers:
      • Ability to integrate with application registries and container registries
      • Ability to scan open-source OS components for known vulnerabilities and to map to common vulnerabilities and exposures (CVEs)
  • Business capabilities relevant to Gartner clients: Have phone, email and/or web customer support. They must offer contract, console/portal, technical documentation and customer support in English (either as the product’s/service’s default language or as an optional localization).
We will not include vendors in this research that:
  • Focus only on mobile platforms or a single platform/language
  • Provide services, but not on a repeatable, predefined subscription basis — for example, providers of custom consulting application testing services, contract pen testing or professional services
  • Provide network vulnerability scanning but do not offer a stand-alone AST capability, or offer only limited web application layer dynamic scanning
  • Offer only protocol testing and fuzzing solutions, debuggers, memory analyzers, and/or attack generators
  • Primarily focus on runtime protection
  • Focus on application code quality and integrity testing solutions or basic security testing solutions, which have limited AST capabilities

Open-Source Software Considerations

Magic Quadrants are used to evaluate the commercial offerings, sales execution, vision, marketing and support of products in the market. This excludes the evaluation of open-source software (OSS) or vendor products that rely heavily on or bundle open-source tools.

Other Players

Several vendors that are not evaluated in this Magic Quadrant are present in the AST space or in markets that overlap with AST. These vendors do not currently meet our inclusion criteria; however, they either provide AST features or address specific AST requirements and use cases.
These providers range from consultancies and professional services to related solution categories, including:
  • Business-critical application security
  • Application security orchestration and correlation (ASOC)
  • Application security requirements and threat management (ASRTM)
  • Crowdsourced security testing platforms (CSSTPs)
  • API-security-focused solutions
  • Container security solutions

Evaluation Criteria

Ability to Execute

Product or Service: This criterion assesses the core goods and services that compete in and/or serve the defined market. This includes current product and service capabilities, quality, feature sets, skills, etc. These can be offered natively or through OEM agreements/partnerships, as defined in the Market Definition/Description section and detailed in the subcriteria. This criterion specifically evaluates current core AST product/service capabilities, quality and accuracy, and feature sets. Also, the efficacy and quality of ancillary capabilities and integration into the SDLC are valued.
Overall Viability: Viability includes an assessment of the organization’s overall financial health, as well as the financial and practical success of the business unit. It assesses the likelihood of the organization to continue to offer and invest in the product, as well as the product’s position in the current portfolio. Specifically, we look at the vendor’s focus on AST, its growth and estimated AST market share, and its customer base.
Sales Execution/Pricing: This criterion looks at the organization’s capabilities in all presales activities and the structure that supports them. This includes deal management, pricing and negotiation, presales support and the overall effectiveness of the sales channel.
We are looking at capabilities such as how the vendor supports proofs of concept or pricing options for both simple and complex use cases. The evaluation also includes feedback received from clients on experiences with vendor sales support, pricing and negotiations.
Market Responsiveness/Record: This criterion assesses the ability to respond, change direction, be flexible and achieve competitive success as opportunities develop, competitors act, customer needs evolve and market dynamics change. It also considers the vendor’s history of responsiveness to changing market demands. We evaluate how the vendor’s broader application security capabilities match with enterprises’ functional requirements, and the vendor’s track record in delivering innovative features when the market demands them. We also account for vendors’ appeal with security technologies complementary to AST.
Marketing Execution: This criterion assesses the clarity, quality, creativity and efficacy of programs designed to deliver the organization’s message in order to influence the market, promote the brand, increase awareness of products and establish a positive identification in the minds of customers. This mind share can be driven by a combination of publicity, promotional activity, thought leadership, social media, referrals and sales activities. We evaluate elements such as the vendor’s reputation and credibility among security specialists.
Customer Experience: We look at the products and services and/or programs that enable customers to achieve anticipated results. Specifically, this includes quality supplier/buyer interactions, technical support or account support. This may also include ancillary tools, customer support programs, availability of user groups, service-level agreements, etc.
Operations: This criterion assesses the ability of the organization to meet goals and commitments. Factors include quality of the organizational structure, skills, experiences, programs, systems and other vehicles that enable the organization to operate effectively and efficiently.

Table 1: Ability to Execute Evaluation Criteria

  • Product or Service
  • Overall Viability
  • Sales Execution/Pricing
  • Market Responsiveness/Record
  • Marketing Execution
  • Customer Experience
  • Operations: Not Rated
Source: Gartner (April 2020)

Completeness of Vision

Market Understanding: This refers to the ability to understand customer needs and translate them into products and services. Vendors that show a clear vision of their market listen to and understand customer demands, and can shape or enhance market changes with their added vision. It includes the vendor’s ability to understand buyers’ needs and translate them into effective and usable AST (SAST, DAST, IAST and SCA) products and services.
In addition to examining a vendor’s key competencies in this market, we assess its awareness of the importance of:
  • Integration with the SDLC (including emerging and more flexible approaches)
  • Assessment of third-party and open-source components
  • The tool’s ease of use and integration with the enterprise infrastructure and processes
We also assess how this awareness translates into the vendor’s AST products and services.
Marketing Strategy: We look for clear, differentiated messaging consistently communicated internally, and externalized through social media, advertising, customer programs and positioning statements. The vendor’s visibility and credibility in meeting the needs of an evolving market are also considerations.
Sales Strategy: We look for a sound strategy for selling that uses the appropriate networks, including direct and indirect sales, marketing, service and communication. In addition, we look for partners that extend the scope and depth of market reach, expertise, technologies, services, and the vendor’s customer base. Specifically, we look at how a vendor reaches the market with its solution and sells it — for example, leveraging partners and resellers, security reports, or web channels.
Offering (Product) Strategy: We look for an approach to product development and delivery that emphasizes market differentiation, functionality, methodology and features as they map to current and future requirements. Specifically, we are looking at the product and service AST offering, and how its extent and modularity can meet different customer requirements and testing program maturity levels. We evaluate the vendor’s development and delivery of a solution that is differentiated from the competition in a way that uniquely addresses critical customer requirements. We also look at how offerings can integrate relevant non-AST functionality that can enhance the security of applications overall.
Business Model: This criterion assesses the design, logic and execution of the organization’s business proposition to achieve continued success.
Vertical/Industry Strategy: We assess the strategy to direct resources (sales, product, development), skills and products to meet the specific needs of individual market segments, including verticals.
Innovation: We look for direct, related, complementary and synergistic layouts of resources, expertise or capital for investment, consolidation, defensive or preemptive purposes. Specifically, we assess how vendors are innovating to address evolving client requirements to support testing for DevOps initiatives as well as API security testing, serverless and microservices architecture. We also evaluate developing methods to make security testing more accurate. We value innovations in IAST, but also in areas such as containers, training and integration with the developers’ existing software development methodology.
Geographic Strategy: This criterion evaluates the vendor’s strategy to direct resources, skills and offerings to meet the specific needs of geographies outside the “home” or native geography, either directly or through partners, channels and subsidiaries, as appropriate for that geography and market. We evaluate the worldwide availability and support for the offering, including local language support for tools, consoles and customer service.

Table 2: Completeness of Vision Evaluation Criteria

  • Market Understanding
  • Marketing Strategy
  • Sales Strategy
  • Offering (Product) Strategy
  • Business Model: Not Rated
  • Vertical/Industry Strategy: Not Rated
  • Geographic Strategy
Source: Gartner (April 2020)

Quadrant Descriptions

Leaders
Leaders in the AST market demonstrate breadth and depth of AST products and services. Leaders typically provide mature, reputable SAST and DAST, and demonstrate vision through development of other emerging AST techniques, such as container support, in their solutions. Leaders also should provide organizations with AST-as-a-service delivery models for testing, or with a choice of a tool and AST as a service, as well as an enterprise-class reporting framework supporting multiple users, groups and roles, ideally via a single management console. Leaders should be able to support the testing of mobile applications and should exhibit strong execution in the core AST technologies they offer. While they may excel in specific AST categories, Leaders should offer a complete platform with strong market presence, growth and client retention.

Challengers
Challengers in this Magic Quadrant are vendors that have executed consistently, often with strength in a particular technology (for example, SAST, DAST or IAST) or by focusing on a single delivery model (for example, on AST as a service only). In addition, they have demonstrated substantial competitive capabilities against the Leaders in their particular focus area, and have demonstrated momentum in their customer base in terms of overall size and growth.

Visionaries
Visionaries in this Magic Quadrant are AST vendors with a strong vision that addresses the evolving needs of the market. This includes vendors that provide innovative capabilities to accommodate DevOps, integrate into the SDLC or identify vulnerabilities. Visionaries may not execute as consistently as Leaders or Challengers.

Niche Players

Niche Players offer viable, dependable solutions that meet the needs of specific buyers. They fare well with buyers looking for a “best of breed” or “best fit” solution to address a particular business or technical use case that matches the vendor’s focus. Niche Players may address subsets of the overall market. Enterprises tend to pick Niche Players when the focus is on a few important functions, on specific vendor expertise, or when they have an established relationship with the vendor. Niche Players typically focus on a specific type of AST technology or delivery model, or a specific geographic region.

Market Overview
The need for application security is ubiquitous across small, midsize and large organizations. With new data privacy requirements, the consequences of a security breach are no longer limited to reputational damage, but also can involve substantial fines and penalties. Vendors have been offering core AST technologies and additional support offerings for well over a decade, and they have matured in speed and efficacy, but common code problems still remain. Most solutions in the market provide some form of code scanning capability, security training services, program development services and remediation support in a growing variety of ways to support developers and security professionals. DevSecOps, agile, and a general demand for greater automation and speed have led to the maturing of the market and the evolution of both full platform solutions offering a wide variety of commonly used testing tools and specialty solutions that offer a deeper dive into a particular technology or combine security testing with other features like code quality.
In general, better accuracy, faster results, easier integrations and enhanced remediation guidance are top of mind for vendors in this market. It has become simpler for end users to find vulnerabilities using AST tools integrated into their workflow or development environment. Solutions that make it easy for developers to be successful at security mesh well with the DevSecOps philosophy (see “Integrating Security Into the DevSecOps Toolchain”) while freeing up some security resources otherwise dedicated to running code scans. Anything developers must remember to do will tend to be forgotten; checks integrated into their existing workflow happen naturally. However, Gartner client inquiry feedback still indicates a need to improve remediation guidance, increase testing speed and accuracy, and simplify the operation of AST solutions to support clients adopting, integrating and scaling AST programs.
These challenges are not solved solely by the right technology; they often require changes in organizational culture, better collaboration and sound practices. Still, incompatible security technologies can impede progress, in which case development and security teams risk being driven further apart rather than becoming better collaborators. To cope with these challenges, organizations should:
  • Require solutions that expose and integrate automated functionality through plug-ins (including IDE, build, repository, QA and preproduction) into the SDLC. This will enable developers to fix issues earlier in the process, and it will improve coordination between development and security.
  • Favor vendors that specialize in comprehensive testing of APIs, applications deployed in containers and other aspects of modern development (e.g., single-page applications, microservices, serverless, edge computing, etc.) to support those use cases. Clients increasingly are seeking out point solutions with a specific focus on these technologies, particularly with respect to testing their APIs.
  • Require solutions that provide SCA, which is a critical or mandatory feature of an overall approach to security testing of applications, because open-source and third-party components are proliferating in applications that enterprises build. Vendors in the industry are introducing their own SCA solutions, as well as partnering with specialized SCA vendors. Gartner clients should pay special attention to those SCA solutions that offer OSS governance capabilities to enable the organization to proactively enforce its policy with respect to OSS when components are being onboarded or pulled in from external repositories and package managers. This should be further augmented with production time SCA, such as that available from container security products to alert to new vulnerabilities as they become known.
  • Favor a risk-based approach to vulnerability management rather than a “fix all the bugs” mentality. Too often, the perfect becomes the enemy of the good, wasting time and resources and demotivating developers and teams. There is often a trade-off to be made between speed and depth, so buyers should ensure that any resulting diminishment in the accuracy of results that often accompanies lower turnaround times remains acceptable.
  • Press vendors for specifics on their roadmap with respect to false positive reduction and how ML techniques will be employed to enhance their solutions. Buyers should look past ML hype and marketing to better understand specifics on how the proposed ML implementations will meaningfully improve areas such as enhancing accuracy, automating remediation efforts or achieving better testing coverage. Gartner clients should weigh vendor plans with respect to ML-based improvements, particularly when considering longer-term engagements, and consider the applicability of the proposed approaches. Artificial intelligence (AI) and ML are overused marketing terms, making it difficult to distinguish between hyperbole and genuine value, so such claims should be evaluated closely.

Market Overview

Current Gartner forecasts place the size of the AST market (sales of SAST, DAST and IAST tools) at $1.33 billion by the end of 2020. Through 2022, the AST market is projected to have a 10% compound annual growth rate (CAGR), indicating that the market is growing slightly faster than the overall security market, which is projected to grow at a CAGR of 9% over the same period. Initial examination of updated vendor results suggests the market is growing at a faster pace than originally projected. This is believed to be a function of both increasing buyer demand for core AST tools, and the growing importance of associated solutions not currently included in the base forecast (such as SCA and mobile AST). Analysis of data continues, and any revisions to the forecast will be published in Gartner’s quarterly Information Security Market Forecast.
2019 continued to be a busy year of buyouts and mergers in the AST market. In June 2019, HCL Technologies completed its acquisition of IBM’s AppScan product suite as part of its $1.8 billion deal for a variety of IBM products. Also, in July 2019, NTT Security closed its buyout of WhiteHat Security. NTT is keeping the WhiteHat brand distinct from NTT Security, but this does significantly expand WhiteHat’s global coverage and partner network. Rapid7 made two purchases, acquiring tCell (runtime application self-protection) in late 2018, and NetFort (network monitoring) in mid-2019. In June, Onapsis completed its acquisition of Virtual Forge and has begun integrating its CodeProfiler suite into the Onapsis product line. Late in 2018, Checkmarx purchased Custodela, an Ontario-based provider of software security program development and consulting services focused on DevSecOps. Finally, in January 2020, Synopsys acquired Tinfoil Security and intends to merge its DAST and API testing product suite with its existing enterprise AST platform (all acquisitions after the Magic Quadrant cut-off date are noted in this research, but their capabilities are not included in the vendors’ evaluations).
In addition to this activity, we’ve seen some interesting moves by infrastructure players like Microsoft and VMware to make inroads into secure development. In 2018, Microsoft bought GitHub, arguably the world’s leading development repository. In 2019, GitHub acquired Semmle, a code analytics platform, and became a CVE Numbering Authority. The CVE system provides references for publicly disclosed information about security vulnerabilities and exposures, putting GitHub in a unique position for finding and disclosing code vulnerabilities. Also, on 30 December 2019, VMware announced that it was acquiring Pivotal Software for $2.7 billion (both Pivotal and VMware are part of Dell). This puts VMware in a strong position to manage, among other things, the container and software-defined networking security spaces. While it’s still early, Gartner has seen a marked increase in inquiries about container security, so both of these moves are interesting.
The market continues to exhibit signs of increasing consolidation and commoditization, at least with respect to SAST, DAST and SCA for traditional web applications. However, as we can see from the placements in the 2020 AST Magic Quadrant, there continues to be a strong demand for specialty solutions that offer in-depth coverage of specific areas or combine traditional AST with other testing (e.g., code quality, enterprise applications, etc.).
In 2019, the number of Gartner end-user client conversations on DevSecOps and AST increased by 50% over 2018. While most clients do not have a full or even majority DevOps team, many techniques out of the DevOps method are easily adapted to existing coding disciplines. This includes a focus on making security an integral part of the developer work cycle and eliminating “security gates” late in the process. Other trends in 2019 included a rise in interest in container security. While containers continue to be a minor part of the market compared to more traditional applications, inquiry was up 65% over 2018. Similarly, inquiry regarding scanning for known vulnerabilities in open-source code (SCA) rose 20% in 2019.
In general, we have seen the following DevSecOps trends emerging in our client inquiries:
  • Integration of security and compliance testing seamlessly into DevSecOps, so developers never have to leave their CI or CD toolchain environments
  • Teams embracing a “developers own their code” philosophy, which extends into security (as well as performance, reliability and code quality)
  • Scanning for known vulnerabilities and misconfigurations in all open-source and third-party components
  • An emphasis on removing vulnerabilities with the highest severity and risk, rather than trying to remove all known vulnerabilities in custom code
  • Giving developers more autonomy to use new types of tools and approaches to minimize friction (such as interactive AST) to replace traditional static and dynamic testing
  • Scaling their information security teams into DevOps by using a security champion/coach model rather than putting them directly on the teams (which has scalability and cultural issues)
  • Treating all automation scripts, templates, images and blueprints with the same level of assurance they would apply to any source code
  • Increased interest in containerization
And we see those trends beginning to be reflected in the toolsets, including:
  • There is increased availability of SCA tools as part of product offerings across the Magic Quadrant participants.
  • IDE security plug-ins have not only become the normal expectation for buyers, but increasingly they are expecting the IDE to be the main conduit for reporting, fix suggestions, lessons, gamification and other developer-centric security activity. Anything that requires developers to go “out of band” is generally disfavored.
  • Fix suggestions are becoming more context-aware, not only with specific instructions, but also with options for involving human review and guidance from tool providers. Tool vendors are providing more options for including some human review of results in addition to ML for the elimination of false positives.
  • Vendors are starting to deliver options for covering some of the container and microservice attack surfaces, although full container scanning is still a bit off.
See “12 Things to Get Right for Successful DevSecOps” for more on best practices for developers.
This year’s Magic Quadrant shows two distinct trends: One broadening, and one deepening. The first trend is a movement toward all-inclusive platforms that do SAST/DAST/IAST/SCA as well as integrated reporting, CI/CD pipeline integration and a robust developer experience in the IDE. While each vendor will have specific strengths and weaknesses in individual tools, the common theme is that they are full, broad-spectrum platforms. The second trend is movement by some vendors to concentrate on doing a few things very well, often combining aspects of deep security testing with other functions such as code quality analysis, business-critical apps or specific types of testing not covered well by the broad-spectrum players. Both trends result in more choices for security leads and heads of development, both of which can be purchase decision makers.
We have four notable market observations:
  • Clients with experienced security staff are looking more seriously at using IAST solutions. Gartner saw a 40% increase in inquiry volume around IAST in 2019. For organizations with staff that have previously used SAST/DAST, IAST becomes a viable quick-start alternative, especially if they are making their first AST purchase and the staff are experienced in DevSecOps from previous work. It fits well into the DevSecOps workflow and gives developers the opportunity to mix and correlate aspects of both dynamic testing and static analysis. While this is still a small percentage of the volume of DevSecOps calls, its growth represents an interesting, if minor, trend.
  • Container/microservice security is beginning to appear as an important trend in AST. In 2019, Gartner saw a 60% increase in the number of clients asking about container security. While this still represents a small portion of our call volume on AST, we feel it’s significant. Vendors are beginning to address container security concerns by repurposing some of their existing product suites (e.g., SCA for scanning OS components, SAST for payload scanning, etc.). These solutions do not yet cover the full, complex attack surface that containers represent.
  • Human-assisted DevSecOps is being offered by more vendors to reduce false positives and to assist developers in their IDE and developer environments. While ML continues to do the heavy lifting for false positive reduction, AST vendors are increasingly offering the option to have results reviewed by humans who can help remove false positives. While fast DevOps organizations continue to prefer automated, rapid turnaround times, other organizations with less rigid deadlines and less security experience are taking advantage of FP reduction via human review. Similarly, while many organizations are adopting a “developer security coach” model for assisting coders grappling with security tasks, some are opting to use coaches from vendors provided through chat or other dedicated channels. This supports the goal of making security easy for developers to consume and provides rapid response to common questions.
  • Many clients are still seeking “one-stop shop” vendors that offer multiple technologies as part of a unified platform, a trend we noted in 2019. To support this effort, buyers are prioritizing vendors that provide multiple technologies and deployment options. Feedback from clients suggests that efforts to “glue together” various specialty tools suffer from complexity and reporting problems (i.e., the results of one tool not being consumable by others, resulting in a loss of context). Efforts to correlate these in-house do not yield the same level of rich data and project tracking and reporting as integrated, enterprisewide platform providers. Application vulnerability correlation helps with this.


Evaluation Criteria Definitions

Ability to Execute

Product/Service: Core goods and services offered by the vendor for the defined market. This includes current product/service capabilities, quality, feature sets, skills and so on, whether offered natively or through OEM agreements/partnerships as defined in the market definition and detailed in the subcriteria.
Overall Viability: Viability includes an assessment of the overall organization’s financial health, the financial and practical success of the business unit, and the likelihood that the individual business unit will continue investing in the product, will continue offering the product and will advance the state of the art within the organization’s portfolio of products.
Sales Execution/Pricing: The vendor’s capabilities in all presales activities and the structure that supports them. This includes deal management, pricing and negotiation, presales support, and the overall effectiveness of the sales channel.
Market Responsiveness/Record: Ability to respond, change direction, be flexible and achieve competitive success as opportunities develop, competitors act, customer needs evolve and market dynamics change. This criterion also considers the vendor’s history of responsiveness.
Marketing Execution: The clarity, quality, creativity and efficacy of programs designed to deliver the organization’s message to influence the market, promote the brand and business, increase awareness of the products, and establish a positive identification with the product/brand and organization in the minds of buyers. This “mind share” can be driven by a combination of publicity, promotional initiatives, thought leadership, word of mouth and sales activities.
Customer Experience: Relationships, products and services/programs that enable clients to be successful with the products evaluated. Specifically, this includes the ways customers receive technical support or account support. This can also include ancillary tools, customer support programs (and the quality thereof), availability of user groups, service-level agreements and so on.
Operations: The ability of the organization to meet its goals and commitments. Factors include the quality of the organizational structure, including skills, experiences, programs, systems and other vehicles that enable the organization to operate effectively and efficiently on an ongoing basis.

Completeness of Vision

Market Understanding: Ability of the vendor to understand buyers’ wants and needs and to translate those into products and services. Vendors that show the highest degree of vision listen to and understand buyers’ wants and needs, and can shape or enhance those with their added vision.
Marketing Strategy: A clear, differentiated set of messages consistently communicated throughout the organization and externalized through the website, advertising, customer programs and positioning statements.
Sales Strategy: The strategy for selling products that uses the appropriate network of direct and indirect sales, marketing, service, and communication affiliates that extend the scope and depth of market reach, skills, expertise, technologies, services and the customer base.
Offering (Product) Strategy: The vendor’s approach to product development and delivery that emphasizes differentiation, functionality, methodology and feature sets as they map to current and future requirements.
Business Model: The soundness and logic of the vendor’s underlying business proposition.
Vertical/Industry Strategy: The vendor’s strategy to direct resources, skills and offerings to meet the specific needs of individual market segments, including vertical markets.
Innovation: Direct, related, complementary and synergistic layouts of resources, expertise or capital for investment, consolidation, defensive or pre-emptive purposes.
Geographic Strategy: The vendor’s strategy to direct resources, skills and offerings to meet the specific needs of geographies outside the “home” or native geography, either directly or through partners, channels and subsidiaries as appropriate for that geography and market.

Critical Capabilities for Security Information and Event Management

Comprehensive Explanation: What is a SIEM (in 2020 and beyond)?



SIEM unifies Threat Detection and Hunting.

This is an old topic worth revisiting and level-setting with the latest advancements, concepts and lessons from decades of unsuccessful SIEM deployments! It is worth revisiting because a lot of people don’t understand the value of a SIEM, and even fewer understand how to effectively operationalise one and achieve business outcomes from its power.

After reading this you will gain enough insight into the basics of SIEM.

I am continually asked the same questions around SIEM design, so I am glad to finally brain dump this knowledge and share it with the community.

(SIEM in public cloud is beyond the scope of this article. While all of this information remains relevant, I will write another article focusing specifically on threat detection for public cloud environments.)

Security Information and Event Management

A SIEM seeks to provide a holistic approach to an organisation’s IT security. A SIEM represents a combination of services, appliances, and software products. It performs real-time collection of log data from devices, applications and hosts. It also processes the collected log data, enabling real-time analysis of security alerts generated by network hardware and applications, advanced correlation of security and operational events, and real-time alerting and scheduled reporting.

SIEM technology is used in many enterprise organizations to provide real-time reporting and long-term analysis of security events. SIEM products evolved from two previously distinct product categories, namely security information management (SIM) and security event management (SEM).

Table 1 shows this evolution.

Table 1. SIM and SEM Product Features Incorporated into SIEM

Separate SIM and SEM products:

  • Security Information Management (SIM): log collection, archiving, historical reporting, forensics
  • Security Event Management (SEM): real-time reporting, log collection, normalization, correlation, aggregation

Combined SIEM product:

  • Log collection, archiving, historical reporting, forensics, real-time reporting, normalization, correlation, aggregation


SIEM combines the essential functions of SIM and SEM products to provide a comprehensive view of the enterprise network using the following functions:

  • Log collection of event records from sources throughout the organization provides important forensic tools and helps to address compliance reporting requirements.
  • Normalization maps log messages from different systems into a common data model, enabling the organization to connect and analyze related events, even if they are initially logged in different source formats.
  • Correlation links logs and events from disparate systems or applications, speeding detection of and reaction to security threats.
  • Aggregation reduces the volume of event data by consolidating duplicate event records.
  • Reporting presents the correlated, aggregated event data in real-time monitoring views and long-term summaries.
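
As a rough illustration of how normalization and aggregation fit together, the sketch below maps two hypothetical source formats onto a common event schema and then consolidates duplicate records. All field names and source formats here are illustrative assumptions, not any vendor’s actual schema:

```python
# Minimal sketch of SIEM-style normalization and aggregation.
# The "firewall" and "auth" field names below are made up for illustration.
from collections import Counter

def normalize(raw: dict, source: str) -> dict:
    """Map source-specific fields onto a common event schema."""
    if source == "firewall":
        return {"src_ip": raw["src"], "event": "fw_" + raw["act"]}
    if source == "auth":
        return {"src_ip": raw["client"], "event": "auth_" + raw["result"]}
    raise ValueError(f"unknown source: {source}")

def aggregate(events: list) -> Counter:
    """Consolidate duplicate events by (src_ip, event) to cut volume."""
    return Counter((e["src_ip"], e["event"]) for e in events)

raw_events = [
    ({"src": "10.0.0.5", "act": "deny"}, "firewall"),
    ({"src": "10.0.0.5", "act": "deny"}, "firewall"),      # duplicate record
    ({"client": "10.0.0.5", "result": "failure"}, "auth"),
]
normalized = [normalize(r, s) for r, s in raw_events]
counts = aggregate(normalized)  # two firewall denies collapse into one keyed count
```

Once events share a schema, correlation across sources becomes a matter of joining on common fields such as `src_ip`, which is exactly what the different log formats prevent before normalization.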

An internal IT environment consists of the services, networking equipment, applications and components that an organisation wants to protect from intrusion. To protect these assets and data, you can deploy protection in the form of firewalls, antivirus, IPS/IDS and authentication. Examples of protection technologies include:

  • Firewalls
  • Antivirus
  • IPS
  • IDS
  • Authentication
  • Web Security
  • Email Security
  • Traffic Capture
  • WAF
  • DLP
  • FIM
  • Secure Access Service Edge
  • MFA
  • EDR

Despite all of the systems and effort put into these solutions, those trying to breach the environment will eventually get in. Once they are in, detecting and responding to their attack is time critical.

A SIEM receives or taps into all of this activity, continually receiving thousands of logs per second from the devices and systems within the environment. The SIEM processes log data to make meaning of what is actually happening on a device (detection), and analytics are used to analyse data activity, providing more insight into what is actually happening.

SIEM solutions also provide the ability to analyse historical log data and generate reports for compliance purposes, as well as supporting digital forensics and fulfilling additional parts of an overall information security strategy.

SIEM solutions centralise log data within IT environments, augmenting security measures and enabling real-time analysis. A SIEM is constantly watching, monitoring and analysing events and alerts within the environment in an effort to detect attacks and intrusions.

Fourth Wave of SIEM

SIEM sometimes gets a bad name: it is incredibly powerful, yet it takes an enormous amount of skill and effort to get working. That is not because of the SIEM itself, but because it requires data from across your IT environment, and gathering that data typically causes massive delays in a successful SIEM deployment. (This can be solved. Keep reading.) SIEM has evolved into very mature platforms; ArcSight, for example, has 20+ years of evolution behind it. Read the ArcSight history here.

  • First Wave
    • PCI-DSS drove the first phase of SIEM deployments, with compliance as the business outcome.
  • Second Wave
    • Organisations then started using SIEM to detect malicious activity in the network.
  • Third Wave
    • Customers started to build SOCs.
  • Fourth Wave
    • SOCs developing Threat Hunting utilising NDR, EDR, SIEM and SOAR.

Machine Data

SIEM processes all types of machine data produced by devices in an IT environment.

Machine data is one of the most underused and undervalued assets of any organization. But some of the most important insights that you can gain—across IT and the business—are hidden in this data: where things went wrong, how to optimize the customer experience, the fingerprints of fraud. All of these insights can be found in the machine data that’s generated by the normal operations of your organization.

Machine data is valuable because it contains a definitive record of all the activity and behavior of your customers, users, transactions, applications, servers, networks and mobile devices. It includes configurations, data from APIs, message queues, change events, the output of diagnostic commands, call detail records and sensor data from industrial systems, and more.

The challenge with leveraging machine data is that it comes in a dizzying array of unpredictable formats, and traditional monitoring and analysis tools weren’t designed for the variety, velocity, volume or variability of this data.


In computing, syslog /ˈsɪslɒɡ/ is a standard for message logging. It allows separation of the software that generates messages, the system that stores them, and the software that reports and analyzes them. Each message is labeled with a facility code, indicating the software type generating the message, and assigned a severity level.

The syslog protocol, defined in RFC 3164, provides a transport that allows a device to send event notification messages across IP networks to event message collectors, also known as syslog servers. The protocol is simply designed to transport these event messages from the generating device to the collector; the collector does not send back an acknowledgment of receipt.

Syslog uses the User Datagram Protocol (UDP), port 514, for communication. Being a connectionless protocol, UDP does not provide acknowledgments. Additionally, at the application layer, syslog servers do not send acknowledgments back to the sender for receipt of syslog messages. Consequently, the sending device generates syslog messages without knowing whether the syslog server has received the messages. In fact, the sending devices send messages even if the syslog server does not exist.
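
The fire-and-forget nature of UDP syslog can be shown in a few lines of Python. This is a sketch only: the destination host, port and facility/severity values are example choices, and `sendto()` succeeds whether or not any collector is actually listening:

```python
# Sketch: sending a syslog message over UDP/514. UDP is connectionless,
# so this call returns without any acknowledgment from a collector.
import socket

def send_syslog(message: str, host: str = "127.0.0.1", port: int = 514,
                facility: int = 4, severity: int = 6) -> bytes:
    pri = facility * 8 + severity            # PRI value = facility * 8 + severity
    datagram = f"<{pri}>{message}".encode()  # PRI is prepended in angle brackets
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(datagram, (host, port))     # fire and forget: no receipt comes back
    return datagram

# facility 4 (security/auth), severity 6 (informational) -> PRI 38
payload = send_syslog("Jun 11 10:00:00 host1 app: user login")
```

Because nothing confirms delivery, a device keeps emitting messages even when the configured syslog server does not exist, exactly as described above.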

The syslog packet size is limited to 1024 bytes and carries the following information:

  • Facility
  • Severity
  • Hostname
  • Timestamp
  • Message
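
These fields can be pulled apart with a simple parser. The sketch below handles only the classic RFC 3164 layout shown in that RFC’s own example message; real-world syslog varies widely, so treat this as illustrative rather than a complete implementation:

```python
# Illustrative parser for the RFC 3164 fields listed above.
import re

SYSLOG_RE = re.compile(
    r"<(?P<pri>\d{1,3})>"                              # PRI = facility*8 + severity
    r"(?P<timestamp>\w{3} [ \d]\d \d\d:\d\d:\d\d) "    # e.g. "Oct 11 22:14:15"
    r"(?P<hostname>\S+) "
    r"(?P<message>.*)"
)

def parse_syslog(line: str) -> dict:
    m = SYSLOG_RE.match(line)
    if not m:
        raise ValueError("not an RFC 3164 style message")
    fields = m.groupdict()
    pri = int(fields.pop("pri"))
    fields["facility"] = pri // 8    # facility code
    fields["severity"] = pri % 8     # severity level
    return fields

# The example message from RFC 3164 itself: PRI 34 = facility 4, severity 2
event = parse_syslog("<34>Oct 11 22:14:15 mymachine su: 'su root' failed for lonvick")
```

This is essentially what a SIEM connector does at intake, before the normalization step described earlier.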

Computer system designers may use syslog for system management and security auditing as well as general informational, analysis, and debugging messages. A wide variety of devices, such as printers, routers, and message receivers across many platforms use the syslog standard. This permits the consolidation of logging data from different types of systems in a central repository. Implementations of syslog exist for many operating systems.

When operating over a network, syslog uses a client-server architecture where a syslog server listens for and logs messages coming from clients.

The syslog protocol is defined by Request for Comments (RFC) documents published by the Internet Engineering Task Force (Internet standards). The following RFCs define the syslog protocol:

  • The BSD syslog Protocol. RFC 3164 (obsoleted by The Syslog Protocol, RFC 5424).
  • Reliable Delivery for syslog. RFC 3195.
  • The Syslog Protocol. RFC 5424.
  • TLS Transport Mapping for Syslog. RFC 5425.
  • Transmission of Syslog Messages over UDP. RFC 5426.
  • Textual Conventions for Syslog Management. RFC 5427.
  • Signed Syslog Messages. RFC 5848.
  • Datagram Transport Layer Security (DTLS) Transport Mapping for Syslog. RFC 6012.
  • Transmission of Syslog Messages over TCP. RFC 6587.

More reading on syslog:


A SIEM is a mandatory requirement for compliance audits against standards such as PCI-DSS, ISO 27001 and the Sarbanes–Oxley Act of 2002 (thanks, Enron), among others.

The Payment Card Industry (PCI) Security Standards Council was founded by five global payment brands: American Express, Discover Financial Services, JCB International, MasterCard, and Visa. These five payment brands had a common vision of strengthening security policies across the industry to prevent data breaches for businesses that accept and process payment cards. Together they drafted and released the first version of PCI Data Security Standard (PCI DSS 1.0) on December 15, 2004.

PCI DSS is a regulation with twelve requirements that serve as a security baseline to secure payment card data.

  • PCI-DSS v3.2.1 requirements:
    • Requirement 10: Track and monitor all access to network resources and cardholder data.
    • Requirement 11.5: Deploy a change-detection mechanism (for example, file integrity monitoring tools) to alert personnel to unauthorized modification (including changes, additions, and deletions) of critical system files, configuration files or content files. Configure the software to perform critical file comparisons at least weekly. Implement a process to respond to any alerts generated by the change-detection solution.
    • PCI DSS v3.2.1 Quick Reference Guide

Depending on your PCI-DSS merchant level and number of Credit Card transactions you process, you will need to adhere to different levels of PCI-Auditing.

Cyber Threat Intelligence

Threat intelligence, or cyber threat intelligence, is information an organization uses to understand the threats that have targeted, are currently targeting, or will target the organization. This information is used to prepare for, prevent, and identify cyber threats looking to take advantage of valuable resources.

Cyber threat intelligence consists of many types of information, including indicators of compromise (IoCs) and indicators of attack (IoAs).

Indicators of compromise (IOCs) are “pieces of forensic data, such as data found in system log entries or files, that identify potentially malicious activity on a system or network.” Indicators of compromise aid information security and IT professionals in detecting data breaches, malware infections, or other threat activity. By monitoring for indicators of compromise, organizations can detect attacks and act quickly to prevent breaches from occurring or limit damages by stopping attacks in earlier stages.

Indicators of compromise act as breadcrumbs that lead infosec and IT pros to detect malicious activity early in the attack sequence. These unusual activities are the red flags that indicate a potential or in-progress attack that could lead to a data breach or systems compromise.

Indicators of attack are similar to IoCs, but rather than focusing on forensic analysis of a compromise that has already taken place, indicators of attack focus on identifying attacker activity while an attack is in progress. Indicators of compromise help answer the question “What happened?” while indicators of attack can help answer questions like “What is happening and why?” A proactive approach to detection uses both IOAs and IOCs to discover security incidents or threats in as close to real time as possible.

Example IoCs;

  • Unusual Outbound Network Traffic
  • Anomalies in Privileged User Account Activity
  • Geographical Irregularities
  • Log-In Red Flags
  • Increases in Database Read Volume
  • HTML Response Sizes
  • Large Numbers of Requests for the Same File
  • Mismatched Port-Application Traffic
  • Suspicious Registry or System File Changes
  • Unusual DNS Requests
  • Unexpected Patching of Systems
  • Mobile Device Profile Changes
  • Bundles of Data in the Wrong Place
  • Web Traffic with Unhuman Behavior
  • Signs of DDoS Activity
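
At its simplest, matching events against a threat-intelligence feed is set membership on normalized fields. The sketch below shows the idea; the feed structure and all indicator values are made up for illustration:

```python
# Minimal sketch of IoC matching: comparing observed log fields
# against a threat-intelligence feed. Indicator values are fictional
# documentation addresses, not a real feed.
ioc_feed = {
    "ip": {"203.0.113.66", "198.51.100.23"},   # known-bad source addresses
    "domain": {"evil.example.net"},            # known-bad domains
}

def match_iocs(event: dict) -> list:
    """Return (indicator_type, value) pairs from the event that hit the feed."""
    hits = []
    for ioc_type, values in ioc_feed.items():
        if event.get(ioc_type) in values:
            hits.append((ioc_type, event[ioc_type]))
    return hits

# One event field matches the feed, so exactly one hit is raised
alert = match_iocs({"ip": "203.0.113.66", "domain": "intranet.local"})
```

A real SIEM does this at scale across every inbound event and against feeds that update continuously, which is why IoC freshness matters as much as the matching logic itself.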

APTs and Tactics, Techniques and Procedures (TTPs)

A SIEM can utilise cyber threat intelligence (IoCs, IoAs and TTPs) and correlate it with IT environment log data to detect threats in both real-time and historical log data.

Correlation rules, behaviour patterns, pattern matching, anomaly detection, conditions, thresholds, network modelling and machine learning. (Phew, give me a pay rise.)

Correlation is one of the key components of any effective SIEM tool. As information from across your digital environment feeds into a SIEM, the SIEM uses correlation to identify possible issues by comparing sequences of activity against preset rules, conditions and thresholds. SIEMs allow sophisticated ways to implement risk-based rules.

The latest SIEMs can now implement anomaly detection via machine learning.

All integrated with Threat Intelligence information.

The brains inside a SIEM are based on correlation rules, pattern matching, conditions and thresholds, now augmented by machine learning via supervised and unsupervised models:

  • Correlation Rules
  • Pattern Matching
  • Conditions
  • Thresholds
  • Supervised Machine Learning
  • Unsupervised Machine Learning
  • Network Modelling and Risk Scoring
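
To make the conditions-and-thresholds idea concrete, here is a minimal sketch of one classic correlation rule: N failed logins from the same source within a sliding time window. The rule name, threshold and window values are illustrative, not any product’s defaults:

```python
# Sketch of a threshold correlation rule: repeated failed logins from
# one source inside a sliding time window. Values are illustrative.
from collections import defaultdict, deque

WINDOW_SECONDS = 300   # correlate events within a 5-minute window
THRESHOLD = 5          # fire on the 5th failure inside the window

class FailedLoginRule:
    def __init__(self):
        self.failures = defaultdict(deque)   # src_ip -> failure timestamps

    def feed(self, src_ip: str, timestamp: float, success: bool) -> bool:
        """Return True when this event causes the rule to fire."""
        if success:
            return False
        window = self.failures[src_ip]
        window.append(timestamp)
        while window and timestamp - window[0] > WINDOW_SECONDS:
            window.popleft()                 # expire events outside the window
        return len(window) >= THRESHOLD

rule = FailedLoginRule()
# Five failures 10 seconds apart: only the fifth event trips the threshold
fired = [rule.feed("10.0.0.9", t, success=False) for t in range(0, 50, 10)]
```

Production correlation engines layer many such rules, chain them (rule output feeding other rules) and attach risk scores, but the window-and-threshold mechanic above is the common core.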

Use Case

“Use case” is the term for a threat detection scenario framed in business context: it combines detection logic with the business value and context delivered in the SIEM platform.

Leading SIEM platforms such as ArcSight have built-in ESM default content use cases covering around 80% of typical threat detection requirements. There are also third-party use case libraries, including SOC Prime ATT&CK® content and the SIGMA generic SIEM rule format (see SIGMA Rules).

You can catch just about everything with ArcSight Default Content and SIGMA Rules! The rest you need to pay someone like me to workshop and write.

Machine Data Sources

Each data type below is listed with its typical use cases, followed by an example of its value:

  • Amazon Web Services (Security & Compliance, IT Operations): Data from AWS can support service monitoring, alarms and dashboards for metrics, and can also track security-relevant activities, such as login and logout events.
  • APM Tool Logs (Security & Compliance, IT Operations): APM tool logs can provide end-to-end measurement of complex, multi-tier applications, and be used to perform post-hoc forensic analytics on security incidents that span multiple systems.
  • Authentication (Security & Compliance, IT Operations, Application Delivery): Authentication data can help identify users that are struggling to log in to applications and provide insight into potentially anomalous behaviors, such as activities from different locations within a specified time period.
  • Firewall (Security & Compliance, IT Operations): Firewall data can provide visibility into blocked traffic in case an application is having communication problems. It can also be used to help identify traffic to malicious and unknown domains.
  • Industrial Control Systems (ICS) (Security & Compliance, Internet of Things, Business Analytics): ICS data provides visibility into the uptime and availability of critical assets, and can play a major role in identifying when these systems have fallen victim to malicious activity.
  • Medical Devices (Security & Compliance, Internet of Things, Business Analytics): Medical device data can support patient monitoring and provide insights to optimize patient care. It can also help identify compromised protected health information.
  • Network Protocols (Security & Compliance, IT Operations): Network protocol data can provide visibility into the network’s role in overall availability and performance of critical services. It’s also an important source for identifying advanced persistent threats.
  • Sensor Data (Security & Compliance, IT Operations, Internet of Things): Sensor data can provide visibility into system performance and support compliance reporting of devices. It can also be used to proactively identify systems that require maintenance.
  • System Logs (Security & Compliance, IT Operations): System logs are key to troubleshooting system problems and can be used to alert security teams to network attacks, a security breach or compromised software.
  • Web Server (Security & Compliance, IT Operations, Business Analytics): Web logs are critical in debugging web application and server problems, and can also be used to detect attacks, such as SQL injections.

SIEM Data Formats

Typical formats supported by SIEM platforms to ingest log data:

Syslog, SNMP, SMTP, SCP, FTP, flat file, SQL query, database reader, cloud APIs, REST API, XML, secure syslog, Cisco FireSIGHT and SDEE, Check Point LEA, AWS GuardDuty, CloudWatch, AWS S3, JDBC, etc.

Common Event Format (CEF)

In the realm of security event management, a myriad of event formats streaming from disparate devices makes for a complex integration. Common Event Format by ArcSight promotes interoperability between various event- or log-generating devices.

Although each vendor has its own format for reporting event information, these formats often lack the key information necessary to integrate events across devices.
The ArcSight standard improves the interoperability of infrastructure devices by aligning the logging output from various technology vendors.
Common Event Format (CEF) is a logging and auditing file format from ArcSight: an extensible, text-based format designed to support multiple device types by carrying the most relevant information. Message syntaxes are reduced to work with ArcSight normalisation. Specifically, CEF defines a syntax for log records comprising a standard header and a variable extension formatted as key-value pairs, and can be readily adopted by vendors of both security and non-security devices.
This format contains the most relevant event information, making it easy for event consumers to parse and use. To simplify integration, the syslog message format is used as the transport mechanism.
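A minimal sketch of what that looks like in practice: building and parsing a CEF:0 record in Python. The vendor/product values are invented, and real CEF additionally requires escaping of pipes and backslashes, so treat this as an illustration rather than a full implementation:

```python
def build_cef(vendor, product, version, sig_id, name, severity, **ext):
    """Build a CEF:0 record: a pipe-delimited header plus a key=value extension.

    Note: real CEF requires escaping '|' and '\\' in header values; this
    sketch omits that for brevity.
    """
    header = f"CEF:0|{vendor}|{product}|{version}|{sig_id}|{name}|{severity}|"
    extension = " ".join(f"{k}={v}" for k, v in ext.items())
    return header + extension

def parse_cef(record):
    """Split a CEF record back into header fields and extension pairs."""
    fields = record.split("|", 7)  # the header contains exactly 7 pipes
    keys = ["cef_version", "vendor", "product", "version",
            "signature_id", "name", "severity"]
    parsed = dict(zip(keys, fields[:7]))
    parsed["extension"] = dict(kv.split("=", 1) for kv in fields[7].split())
    return parsed

# Illustrative vendor/product names, not a real device.
msg = build_cef("ExampleVendor", "DemoFW", "1.0", "100", "Port Scan", 5,
                src="10.0.0.1", dst="2.1.2.2", spt="1232")
```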


  • Time Normalisation
    • Ensures timestamps all reflect the same time zone, so that events from different time zones can be correlated.
    • Time is a critical element of threat detection. Some time zones around the world don't observe Daylight Saving Time (DST), and some are offset by half an hour from their neighbours. In addition, some devices don't include a time in the log message at all. A SIEM needs to timestamp every log against a single reference time zone.
  • Data Enrichment (metadata extraction, tagging and enrichment)
    • The SIEM parses each log message into its core components and adds context, e.g. a customer tag.
    • Log data is not uniform: the transport may follow a standard protocol, but the information inside is not standardised across log source providers, so a SIEM has to process each log into a unified threat detection taxonomy and universal schema before it can run its rules.
    • Log information needs to be mapped into a common schema, so that a [User Log on] message from Unix, Windows, Active Directory, AWS, etc. is always tagged as User Log on, which assists threat detection search rules.
  • Threat and Risk Contextualisation
    • Evaluate each log and assign a risk-based priority value, e.g. extra weight for edge services / DMZ, or authentication sources such as Active Directory, DNS, etc.
An example of a raw, vendor-specific log message that needs this treatment:

May 11 10:00:39 scrooge SG_child[808]: [ID 748625] m:WR-SG-SUMMARY c:X (http) GET / => http://bali/ , status:200 , redirection URL: , referer: , mapping:bali , request size: 421 , backend response size: 12960 , audit token:- , time statistics (microseconds): [request total 16617 , allow/deny filters 1290 , backend responsiveness 11845 , response processing 1643 , ICAP reqmod  , ICAP respmod  ] timestamp: [2012-05-11 10:00:39] [ rid:T6zHJ38AAAEAAAo2BCwAAAMk sid:910e5dd02df49434d0db9b445ebba975 ip: ]
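The time-normalisation step above can be sketched as parsing a naive device timestamp and re-stamping it in UTC. The UTC+09:30 device offset used here is an assumed example (a half-hour zone, one of the awkward cases mentioned above):

```python
from datetime import datetime, timezone, timedelta

def normalise_timestamp(raw, device_tz_offset_minutes):
    """Parse a naive device timestamp and re-stamp it in UTC.

    The offset is given in minutes because some time zones are offset
    by half an hour rather than a whole hour.
    """
    local = datetime.strptime(raw, "%Y-%m-%d %H:%M:%S")
    tz = timezone(timedelta(minutes=device_tz_offset_minutes))
    return local.replace(tzinfo=tz).astimezone(timezone.utc)

# Example: a device in an assumed UTC+09:30 zone.
utc_ts = normalise_timestamp("2012-05-11 10:00:39", 570)
```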

Security Schema


Events are collections of logs created after processing with threat intelligence and/or correlation rules. An event is an actionable item sent to human analysts for further triage, investigation and reporting.

Sizing SIEM solutions

Sizing a SIEM solution begins with a basic list of the devices that you want to monitor. See the example device list collection tool below;

Device List
| Device Type | Vendor | Model | Location | Quantity |
| --- | --- | --- | --- | --- |
| Windows Server (Active Directory) | Microsoft | | | 1 |
| Windows Server (DNS) | Microsoft | | | 1 |
| AWS (CloudTrail) | AWS | | | 1 |
| Fortinet Firewall (IDS/IPS/VPN) | Fortinet | | | 1 |
| Citrix Access Gateway | Citrix | | | 1 |

SIEM Sizing (Events Per Second)

Critical to the sizing and design of a SIEM platform is determining the Events Per Second (EPS) produced by the devices on your list.

You need to determine and estimate the following SIEM fundamentals;

  • Events Per Second (EPS)
  • Events Per Day (EPD)
  • Online retention period and required storage in GB
  • Retention period and required storage in GB
  • Network bandwidth peak requirements (GB per second across all devices)
  • EPS peak
  • EPS average (day, week, month, etc.)
  • Estimated device growth over 3 years
  • EPS headroom (allow 10-30%)
  • Recovery Point Objective (RPO)
  • Recovery Time Objective (RTO)
  • Uptime requirement
  • Event / alert size (roughly 512 bytes per event is a common estimate)
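Tying a few of these fundamentals together, the basic sizing arithmetic might look like the sketch below. All per-device EPS figures are invented for illustration, and the 600-byte average event size follows the Rosetta Stone rule of thumb:

```python
# Back-of-the-envelope SIEM sizing sketch. The per-device EPS figures
# are illustrative assumptions, not vendor benchmarks.
EVENT_SIZE_BYTES = 600          # rough average normalised event size
HEADROOM = 0.30                 # 30% EPS headroom

device_eps = {                  # assumed average EPS per device
    "Active Directory": 50,
    "DNS": 20,
    "AWS CloudTrail": 10,
    "Fortinet Firewall": 200,
    "Citrix Gateway": 15,
}

avg_eps = sum(device_eps.values())
sized_eps = avg_eps * (1 + HEADROOM)       # ingest/licence target
events_per_day = avg_eps * 86_400          # seconds in a day
gb_per_day = events_per_day * EVENT_SIZE_BYTES / 1_000_000_000
```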

SIEM Sizing Rosetta Stone

  • GB: 1 GB = 1,000,000,000 bytes
  • EPS: 1 event = 600 bytes

Storage and Archival are critical for any Security Logging platform

  • Raw Event Size
  • Normalised Event Size
  • Retention Time
  • Online Retention Period
  • Events Per Day
  • Compression Ratio
  • GB Storage per day/Retention time.
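Those storage variables combine into a simple calculation; the compression ratio and retention periods below are illustrative assumptions:

```python
# Sketch of storage sizing over a retention period. The 10:1 compression
# ratio and the retention figures are illustrative assumptions.
RAW_EVENT_BYTES = 600
EVENTS_PER_DAY = 25_000_000
COMPRESSION_RATIO = 10          # e.g. assumed 10:1 compression on archive
ONLINE_DAYS = 90                # hot, searchable storage
ARCHIVE_DAYS = 365              # compressed long-term retention

raw_gb_per_day = EVENTS_PER_DAY * RAW_EVENT_BYTES / 1_000_000_000
online_gb = raw_gb_per_day * ONLINE_DAYS
archive_gb = raw_gb_per_day * ARCHIVE_DAYS / COMPRESSION_RATIO
```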


It is vital to understand the way your SIEM platform receives and processes data. What is the schema format: schema on read or schema on write? Does it use distributed search or in-memory real-time analytics? The last thing you want to do is hoard data without understanding what you are collecting, too scared to get rid of it yet unable to get any value from it. Don't turn into this guy, because the Finance department will start knocking on your door, and the day will come when you have to justify the spend and prove business results. If you ever get breached and can't even extract useful information after storing tons of data, you might need to find another job.



An overwhelming number of log sources without proper sanitisation and normalisation can lead to a massive amount of useless information in the SIEM, which in turn leads to alert fatigue.

False-Positive and False-Negatives

A false positive state is when the SIEM identifies an activity as an attack but the activity is acceptable behaviour. A false positive is a false alarm.

A false negative state is the most serious and dangerous state. This is when the SIEM identifies an activity as acceptable when the activity is actually an attack. That is, a false negative is when the SIEM fails to catch an attack. This is the most dangerous state, since the security professional has no idea that an attack took place.

False positives, on the other hand, are an inconvenience at best and can cause significant issues at worst. However, with the right amount of overhead, false positives can be successfully adjudicated; false negatives cannot.

  • Airport security: a "false positive" is when ordinary items such as keys or coins get mistaken for weapons (the machine goes "beep").
  • Medical screening: low-cost tests given to a large group can give many false positives (saying you have a disease when you don't), which are then followed up with more accurate tests.
  • Antivirus software: a "false positive" is when a normal file is flagged as a virus.
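The same idea can be expressed numerically. A small sketch with made-up alert counts: precision tells you how noisy the SIEM is, and the false-negative rate tells you how much it misses:

```python
def alert_quality(tp, fp, tn, fn):
    """Summarise alert quality from confusion-matrix counts.

    precision: fraction of fired alerts that were real attacks.
    fn_rate:   fraction of real attacks the SIEM missed.
    """
    precision = tp / (tp + fp)
    fn_rate = fn / (tp + fn)
    return precision, fn_rate

# Illustrative numbers: 40 true alerts, 60 false alarms, 10 missed attacks.
precision, fn_rate = alert_quality(tp=40, fp=60, tn=9890, fn=10)
```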

Popular SYSLOG Servers

  • ArcSight Logger
  • Nagios
  • Zabbix
  • Logstash
  • NXLog

Log Sources Categories

  • Operating Systems
    • Windows
    • Linux
    • OSX
  • Mobile
    • iOS
    • Android
    • Microsoft
    • Windows Phone
  • OT/IOT
    • err no clue
  • APIs
  • Databases
  • Policy Devices
    • Firewalls
    • IDS/IPS
    • Authentication
    • Antivirus
  • Network Devices
    • Switches
    • Firewalls
    • Routers
  • Applications
  • Entities/Users
  • Public Cloud

SIEM – Real-Time vs Search

As the volume of data grows, it becomes increasingly difficult for SIEMs and other analytics platforms to gain critical insights from it. SIEMs need to detect threats in real time and search years of log source archives at the same time, so you are trying to solve two critical problems at once;

  1. Security Event Management 
    1. Real-Time Streaming Data Analytics
  2. Security Information Management
    1. Searching Large Data sets at scale and speed

These two requirements are incredibly difficult to solve at scale. So, lo and behold, open source to the rescue: Apache Kafka and Apache Hadoop provide solutions for these two requirements.

Apache Kafka

A streaming platform has three key capabilities:

  • Publish and subscribe to streams of records, similar to a message queue or enterprise messaging system.
  • Store streams of records in a fault-tolerant durable way.
  • Process streams of records as they occur.

Kafka is generally used for two broad classes of applications:

  • Building real-time streaming data pipelines that reliably get data between systems or applications
  • Building real-time streaming applications that transform or react to the streams of data
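To illustrate the publish/subscribe and offset-based consumption pattern Kafka provides, here is a deliberately simplified in-memory sketch. This mimics the pattern only; it is not the Kafka client API:

```python
from collections import defaultdict

class MiniBroker:
    """In-memory stand-in for a streaming platform: each topic is an
    append-only log, and consumers read from an offset, as Kafka
    consumers do. Illustration only, not the Kafka API."""

    def __init__(self):
        self.topics = defaultdict(list)

    def publish(self, topic, record):
        self.topics[topic].append(record)

    def consume(self, topic, offset=0):
        """Yield all records from the given offset onward."""
        yield from self.topics[topic][offset:]

broker = MiniBroker()
# A log shipper publishes raw events; the SIEM pipeline consumes them.
broker.publish("raw-logs", {"src": "fw01", "msg": "deny tcp 10.0.0.1"})
broker.publish("raw-logs", {"src": "ad01", "msg": "logon failure"})
consumed = list(broker.consume("raw-logs"))
```

The offset parameter is what lets a real streaming consumer replay or resume a stream without the producer resending anything.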

Apache Hadoop (aka Data Lake)

The Apache™ Hadoop® project develops open-source software for reliable, scalable, distributed computing.

Security Orchestration, Automation and Response (SOAR)

This subject is beyond the scope of this article. I will dive into this in the near future.

Leading SIEM Vendor Solutions

  • ArcSight Data Platform
    • ArcSight practically invented the SIEM industry, with a 20+ year product portfolio, and created the CEF format for cyber security. It now supports Apache Kafka and Apache Hadoop, and integrates unsupervised machine learning via Vertica, IDOL and Interset.
  • Splunk
    • While gaining popularity for general-purpose IT monitoring, Splunk also has real capability in security and big data analytics. Splunk Enterprise is the base solution, extended with Splunk Enterprise Security, Splunk UBA, Splunk Cloud, Splunk Phantom and the Splunk Machine Learning Toolkit. Splunk uses the Common Information Model.
  • IBM QRadar
    • Another original SIEM vendor.
    • I don’t have any experience with QRadar.
  • ELK Security Onion / HELK
    • The fastest-growing open source search stack. The ELK stack (Elasticsearch, Logstash, Kibana, Beats) is open source, and Elastic is a very powerful platform that recently acquired Endgame. Elastic defines ECS, the Elastic Common Schema.
  • McAfee Nitro
    • Popular due to McAfee Enterprise license agreements.
  • LogRhythm
    • 100% Windows Server based, with no Linux edition. Very complex to deploy, and requires substantial resources and application administration. Does include SYSMON, FIM, NETMON, UEBA and SOAR as part of the solution.
  • FireEye / Mandiant
    • Premium products offering banking- and defence-grade technology combined with 24/7 DFIR SOC services, so this is a product solution plus arguably the best DFIR team (Mandiant). Very expensive. HX, NX and MX product lines cover endpoint, network and cloud SIEM.

Thank you for reading this article, and please support my sharing. In the next article, I will look at log collection and SIEM design patterns in the cloud.

If you would like to sponsor my next article or this blog, please get in touch.