5 Questions That CISOs Must Answer Before Adopting Artificial Intelligence


Published 29 August 2018 – ID G00350259 – 23 min read

Mentions of artificial intelligence in Gartner security inquiries rose 76% in the past 12 months. Security and risk management leaders and chief information security officers in early-adopting organizations must articulate the potential benefits and risks of machine learning techniques.


Key Findings

  • The most frequent question about artificial intelligence in security inquiries is, “What is the state of AI in security?” SRM leaders are tempted to believe bold promises about its benefits but are often ill-prepared to evaluate the impact of AI on their mission.
  • A sample review of Leaders from 11 Gartner Magic Quadrants shows that more than 80% of these security vendors include AI in their marketing message. Machine learning and deep neural networks are the most frequently cited techniques.
  • When considering the use of AI, CISOs need to identify whether they have to fight “resistance to change” or “fear of missing out” biases in order to rebalance their AI adoption strategy.
  • Today’s use of AI in security addresses use cases that older techniques can already handle. Most organizations lack the evaluation frameworks and metrics to benchmark these new techniques against the older ones they have already paid for.
  • Machine learning is also vulnerable to attacks, highlighting that AI is no silver bullet.


Recommendations

SRM leaders and CISOs building an information security management strategy should:
  • Focus on the desired outcome. Define evaluation metrics to measure the quality of AI results in order to help free purchasing decisions from the confusion of marketing hype. New technical approaches in your defense portfolio must achieve measurable results.
  • Assess the impact of using machine learning on staff and data privacy. Identify skills gaps and required training. Classify relevant regulatory requirements based on how AI might impact them.
  • Utilize AI as a complementary technique, beginning with experimental engagements. The low maturity of AI does not preclude utilization, but recency of implementations invites caution.
  • Discourage DIY approaches on AI. The maturity of AI techniques has not yet grown past the Peak of Inflated Expectations.

Strategic Planning Assumptions

By 2022, replacement of conventional security approaches with machine learning technologies will actually make a third of enterprises less secure.
By 2021, 60% of security vendors will claim AI-driven capabilities, more than half of them undeservedly, to justify premium prices.
In 2020, AI will become a positive net job motivator, creating 2.3 million jobs while eliminating only 1.8 million jobs.


Despite existing definitions, “artificial intelligence” is often used as a relative term. Marketers and journalists use the term loosely to describe a broad range of functions. The hype has veiled artificial intelligence in a fog of exaggerated expectations and vague details — even fear: The thought of AI taking over our jobs often generates anxiety. Yet within context, an SRM leader can appreciate AI for how it can help achieve better security outcomes.
The promise of organizations gaining the ability to predict future attacks is a marketing smokescreen hiding real progress in advanced diagnostic analytics and in risk scoring. To be clear, AI has the potential to enhance the effectiveness of a security team. In the area of anomaly detection and security analytics, humans working with AI accomplish much more than without it. However, it is far more realistic to strive for “smart automation,” in which trained professionals execute tasks complemented by automation, than for fully automated AI features.
It is a serious responsibility for SRM leaders and CISOs to determine if, and how, an emerging technology might benefit their organization. In theory, it is not the provider’s choice to make this determination or to shape it. The benefits are something that the technology should prove by itself.
Developing the right strategic approach to effectively incorporate AI in SRM programs requires preliminary work. Security leaders must answer these three questions before reaching out to technology providers (see Figure 1):
  1. Do you need artificial intelligence for the problem you are trying to solve?
  2. How can you measure that it is worth the investment?
  3. What will be the impact of using AI for this?

Figure 1. Three Preliminary Questions to Ask About Artificial Intelligence

Source: Gartner (August 2018)


This research examines the questions SRM leaders most frequently ask Gartner analysts relating to AI implementations in their practices. With sufficient understanding of the capabilities of AI — and the best ways to determine how AI can serve the organization — leaders can utilize these promising technologies with reasonable expectations.

What Should CISOs and Their Team Know About Artificial Intelligence?

“Artificial Intelligence Primer for 2018,” a Gartner analysis that lists upcoming research on the topic, includes the following definition of artificial intelligence:
Artificial intelligence refers to systems that change behaviors without being explicitly programmed based on data collected, usage analysis and other observations. These systems learn to identify and classify input patterns, probabilistically predict and operate unsupervised.
Using artificial intelligence simply means using algorithms to classify information and predict outcomes faster — and in greater volume — than humans can.
AI implementations today are “special-purposed AI” that are limited to specific narrow use cases. This is the case for any AI claim in security today. “Artificial General Intelligence” (AGI) designates a general-purpose AI that does not exist yet (see “Hype Cycle for Artificial Intelligence”).

Artificial Intelligence Is in the Eye of the Beholder

CISOs should remember that — as with any popular term — there are many interpretations of “artificial intelligence.” A service provider might use the term to describe its most basic feature, leveraging static intelligence such as a knowledge base of attack techniques, whereas “artificial intelligence” sets much higher expectations for prospective customers.
To find the appropriate AI solution for the enterprise’s needs, SRM leaders must correctly translate the marketing hype that has heralded AI in misleading terms. Several common marketing buzzwords have turned out to have much less dramatic but nonetheless useful meanings (see Table 1).

Table 1: Navigating AI Buzzwords

| Term used as a buzzword | Most likely meaning |
| --- | --- |
| “Next generation” | “Our latest release”; a holistic approach |
| Artificial intelligence | Machine learning; algorithms processing large amounts of data |
| Deep learning (deep neural network) | Multistep machine learning |

Source: Gartner (August 2018)

Understanding the Basics of AI

More advanced algorithms can immediately reveal crucial information about vulnerabilities, attacks, threats, incidents and responses. Engaging the proper AI function could result in a noticeable augmentation of a security team’s capabilities (see Figure 2).

Figure 2. High-Level Descriptions of AI Concepts and Security Use Cases

Source: Gartner (August 2018)


The most frequent concepts CISOs will encounter when discussing artificial intelligence are probabilistic reasoning (often generalized as “machine learning”) and computational logic (often encountered as rule-based systems). (See “Artificial Intelligence Hype: Managing Business Leadership Expectations” for a more detailed explanation.)
Machine learning is a data science technical discipline that automatically extracts patterns (knowledge) from data. At a high level, machine learning:
  1. Extracts target “features” from data (e.g., email metadata)
  2. Automatically trains a decision logic, called a “model,” with data (e.g., Bayesian networks)
  3. Applies the model on a given input (e.g., emails, files) to determine and estimate output (classify an email as spam or find malware)
There are three major subdisciplines of machine learning, which relate to the types of observation provided:
  • Supervised learning: Using precategorized data as a training set for the model (e.g., a dataset of known good emails and known spams)
  • Unsupervised learning: Using unlabeled data to train the model (e.g., network traffic)
  • Reinforcement learning: Using the results of an action to adjust behavior (subsequent actions) in similar circumstances (e.g., behavior anomaly detection)
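
The three steps above can be sketched with a toy supervised-learning example: a naive Bayes spam filter that extracts word features, trains per-class word counts on labeled emails and applies the resulting model to classify new input. The dataset, features and labels here are invented for illustration; production filters are trained on far larger corpora and richer features.

```python
import math
from collections import Counter

# Step 1: feature extraction -- reduce each email to a bag of word tokens.
def features(text):
    return text.lower().split()

# Toy labeled training set (invented for illustration).
training = [
    ("win free money now", "spam"),
    ("free prize claim now", "spam"),
    ("meeting agenda for monday", "ham"),
    ("project status report attached", "ham"),
]

# Step 2: train the model -- per-class word counts for naive Bayes.
counts = {"spam": Counter(), "ham": Counter()}
labels = Counter()
for text, label in training:
    counts[label].update(features(text))
    labels[label] += 1

vocab = set(counts["spam"]) | set(counts["ham"])

# Step 3: apply the model -- score an input and pick the likelier class.
def classify(text):
    scores = {}
    for label in counts:
        # Log prior plus log likelihood with add-one smoothing.
        score = math.log(labels[label] / sum(labels.values()))
        total = sum(counts[label].values())
        for word in features(text):
            score += math.log((counts[label][word] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("claim your free money"))   # -> "spam"
print(classify("monday project meeting"))  # -> "ham"
```

The same three-step skeleton — features, trained model, applied decision — underlies far more elaborate security classifiers.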
A deep neural network, or deep learning, expands machine learning by discovering intermediate representations. This allows SRM leaders to tackle more complex problems and to solve problems with higher accuracy (see “Innovation Insight for Deep Learning”). A typical example of using deep learning is image processing. In security, some complex problems, such as building risk scores from various sources and analyzing network traffic, might benefit from using this approach.
Machine learning and deep neural network implementations are opaque. Even with knowledge of the mathematics behind these concepts, it is difficult for the user to identify the source data and rationale behind the output.
Gartner has published extensive research on the various uses of AI under the Artificial Intelligence key initiative. The following sampling of research documents offers more background on artificial intelligence:
  • “Hype Cycle for Artificial Intelligence”
  • “Artificial Intelligence Hype: Managing Business Leadership Expectations”
  • “Machine Learning: FAQ From Clients”
  • “Innovation Insight for Deep Learning”
  • Learn that machine learning and deep learning behave as black box algorithms with your organization’s data. This will have implications not only on data privacy but also on tool evaluation, as “production proof of concept” might neither prove easily feasible nor show future performance.
  • As staff from various teams learn about AI, enforce knowledge sharing and build an internal knowledge base on the topic.
  • Acknowledge AI hype; hear “algorithm” when vendors or markets say “artificial intelligence,” “machine learning” or any other related buzzword.
  • Develop a baseline understanding of AI concepts for members of the security team who might deal with the related technologies to optimize costs and avoid unneeded purchases.

What Is AI’s Expected Impact on SRM?

Artificial intelligence’s promise is to automatically process data and apply analytics functions much better than human teams can without aid. Improved automation and data analytics apply to security analytics — such as SIEM, network traffic analysis, user and entity behavior analytics — and infrastructure protection — endpoint protection, web application firewall, bot mitigation, cloud access security brokers. AI solutions promise to offer improved efficiency and speed to find more attacks, reduce false alerts and perform faster detect-and-respond functions.
The use of AI is also visible in integrated risk management, where its promise is to better support risk-based decision making by identifying and prioritizing risk.
Machine learning is already pervasive in many security markets. It might incrementally improve the benefits of existing technologies when implemented as a feature and could also answer new needs when machine learning is at the core of a new product (see Figure 3).

Figure 3. What Should You Expect From Using Machine Learning?

Source: Gartner (August 2018)


To engage AI with reasonable expectations of an improved SRM practice, CISOs will wish to gain at least a minimum understanding of requirements. They should prepare to be the leading agent of the company’s security use of AI. They should become familiar with the capabilities of AI-based systems to assess the potential of these systems for greater efficiency and effectiveness.
More importantly, SRM leaders should be the voice of reason and clarity, setting suitable expectations for stakeholders and employees around the reality of AI versus its exaggerations. Staff and colleagues must also follow their lead and concentrate their energies on several crucial actions in a security practice. These understandings are described in Figure 4.

Figure 4. Security Roles and Required Understandings in AI Engagements

Source: Gartner (August 2018)


Because many algorithms must consume large amounts of data, CISOs should engage early with privacy leaders to understand the implications of using an AI product or feature on data security and privacy.
Technical advisors and security operations need a more in-depth understanding of AI technology. They should start by defining the right evaluation metrics to assess the efficacy of the new techniques and to avoid being influenced by the coolness factor of a new shiny tool. SRM leaders should appreciate that lack of transparency and trouble with evaluation of tool effectiveness are the key problems.
There is no assurance that machine-learning-driven results are better than those of alternate techniques. Available feedback is still scarce. When evaluating solutions with AI claims, security leaders should focus on the outcome, not the technique. Any new technical approach added to your defense portfolio needs to achieve measurable results.
Machine learning is fallible. It can offer incorrect or incomplete conclusions when using insufficient data, domain insights or compute infrastructure. There may be no mathematical solution designed for the organization’s specific needs. In many areas of security where AI is unproved, including anomaly detection, the efficacy of machine learning is difficult to benchmark. Part of the reason for that is the lack of reliable security metrics.
Machine learning is also vulnerable to attacks. Attacks on classification algorithms introduce just enough noise to change the diagnostic.1 As they have with every previous defense technique, attackers will adapt, and they might already be leveraging AI to improve their own attacks.
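
The noise-injection attack described above can be illustrated with a deliberately simplified linear detector. The weights, features and threshold are hypothetical; the sketch only shows how a small, targeted perturbation can push a correctly flagged sample back across the decision boundary.

```python
# A toy linear detector: score = w . x, flag as malicious if score > threshold.
# Weights, features and threshold are hypothetical, chosen for illustration.
weights = [0.9, -0.4, 0.7]     # "learned" feature weights
threshold = 0.5

def score(x):
    return sum(w * xi for w, xi in zip(weights, x))

def is_malicious(x):
    return score(x) > threshold

sample = [0.8, 0.1, 0.3]       # a sample the detector correctly flags
assert is_malicious(sample)    # score = 0.72 - 0.04 + 0.21 = 0.89

# Evasion: move each feature a small step *against* the sign of its weight,
# injecting just enough noise to cross the decision boundary.
epsilon = 0.2
def perturb(x):
    return [xi - epsilon * (1 if w > 0 else -1) for w, xi in zip(weights, x)]

evasive = perturb(sample)
print(score(sample), score(evasive))  # 0.89 vs. roughly 0.49
print(is_malicious(evasive))          # False: the diagnostic has flipped
```

Real evasion attacks follow the same logic against far larger models, which is one reason AI-based controls need adversarial testing of their own.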
  • Inventory the areas where AI techniques are already available to improve existing solutions. Test if they actually improve upon them, as anticipated.
  • Identify new categories of solutions leveraging AI techniques that could help fill a gap in your security posture.
  • Define AI-related roles, responsibilities and required understanding in the security team (see Figure 4).
  • Focus everyone’s attention on the security outcome, not the availability of an AI technique. This requires investing resources and time on heavy testing before scaling the use of an AI technique.

What Is the State of Artificial Intelligence in Security?

As shown in the “Hype Cycle for Artificial Intelligence, 2017,” the most useful techniques for security (machine learning and deep learning) are at the Peak of Inflated Expectations. This suggests that early adopters will undergo a period of experimentation before optimum results will be achieved.
This general immaturity of AI is one of the reasons why it is probably a bad idea for security organizations to attempt an autonomous DIY — build your own AI — approach to security objectives. SRM leaders should recognize that exchanges of knowledge with other teams in the organization, or with peers in their vertical industry, may be required to hasten payback. Specialized resources are scarce, and AI tools and frameworks are not fully mature yet.
Similarly, most technology providers’ AI initiatives related to SRM are immature. Even when excluding false claims, the solutions and engagements in AI from security vendors are recent. This is apparent in the form of a lot of “AI version 1.0” implementations in many security products. These implementations might also rely on third-party AI frameworks.
Gartner estimates that many of today’s AI implementations would not pass due diligence testing in proving that they achieve significantly better results than other existing techniques.
Some vendors rebrand statistical analysis with a new name. For example, for a long time web application firewalls have used statistical approaches to provide automated application pattern learning, which is now called AI. These AI maturity levels, ranging from immature to experimental, do not preclude utilization. Security leaders should treat AI as emerging technologies, adding them as experimental, complementary controls.
In an enterprise survey conducted in March 2018, “cybersecurity” ranks as the most frequent, most critical and most prominent AI project within organizations (see Figure 5). Gartner predicts that, by 2021, 50% of enterprises will have added unsupervised machine learning to their fraud detection solution suites (see “Begin Investing Now in Enhanced Machine-Learning Capabilities for Fraud Detection”).

Figure 5. Cybersecurity Leads AI Use in Enterprises

Source: Gartner (August 2018)

SRM leaders may find that machine learning techniques deliver better results than signature-based controls or other heuristics, but they must build a system to measure and compare results between the various techniques used for the same purpose.

The Promise of Predictive Security Analytics

Predictive analytics is one of the four categories of analytics (descriptive, diagnostic and prescriptive are the others). It is also an area where the promise and expectations largely exceed the current state of AI. A solution in this category is designed to answer the question, “What will happen?”
An example of true prediction can be found in nonsecurity areas, such as predictive maintenance. A security equivalent might forecast a company’s likelihood of compromise. For example: “We estimate that you have a 90% chance of being breached by malware within two weeks, because you have publicly exposed servers and we are seeing active infections in organizations from the same vertical in your region.”
In “Combine Predictive and Prescriptive Analytics to Drive High-Impact Decisions,” Gartner offered a reasonable explanation of what predictive analytics addresses:
Predictive analytics addresses the question, “What is likely to happen?” It relies on such techniques as predictive modeling, regression analysis, forecasting, multivariate statistics and pattern matching.
“What is likely to happen?” can be answered with a probability, which is admittedly less appealing than a true prediction of the future. Arguably, a risk score is closer to advanced diagnostics than it is to a prediction. Still, many security analytics providers actively leverage the “predictive analytics” message to present their scoring mechanisms.
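
The point that a risk score is diagnostics dressed up as prediction can be made concrete with a minimal sketch. The indicators, weights and bias below are entirely hypothetical; the takeaway is that the “prediction” is a probability computed from currently observable signals.

```python
import math

# A toy "predictive" risk score: combine observed indicators into a
# probability with a logistic function. Every indicator, weight and bias
# here is hypothetical; real scoring models are fitted to incident data.
def breach_likelihood(indicators, weights, bias):
    z = bias + sum(weights[name] * value for name, value in indicators.items())
    return 1 / (1 + math.exp(-z))  # squash the score into a 0..1 probability

weights = {
    "publicly_exposed_servers": 1.5,
    "active_infections_in_sector": 2.0,
    "unpatched_critical_cves": 1.0,
}
bias = -3.0  # baseline likelihood when no indicator is present

# An organization where all three indicators are observed (1 = present).
org = {
    "publicly_exposed_servers": 1,
    "active_infections_in_sector": 1,
    "unpatched_critical_cves": 1,
}

p = breach_likelihood(org, weights, bias)
print(f"Estimated likelihood of compromise: {p:.0%}")  # roughly 82%
```

Nothing here looks into the future: the output is a diagnosis of present conditions expressed as a probability, which is why such scores sit closer to advanced diagnostics than to prediction.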
  • Rationalize any temptation to do-it-yourself AI for security.
  • Treat security products and features leveraging AI as emerging yet unproven technologies.
  • Beware before giving any security budget to self-proclaimed seers.

What Should You Ask Security Vendors About AI?

Evaluating emerging technologies is hard because there is no feedback, no template for building requirements and many uncertainties around long-term value. CISOs should ensure that their team avoids evaluation biases:
When considering the use of artificial intelligence, CISOs need to identify whether they have to fight “resistance to change” or “fear of missing out” biases.
One important consideration for leaders to note is that adding artificial intelligence as a feature of an existing product might look less impactful than adding a new product, but in reality it is not that simple. A simple feature might “call home” and leak data to the vendor’s cloud infrastructure, which could break privacy commitments. By contrast, a stand-alone appliance with local machine learning at its core, deployed in detection mode only, would not be that risky to evaluate.
When algorithms are applied locally, and there is no data sharing, or when these risks have been handled appropriately, testing an AI feature is easy. Adding a new algorithm as a feature of an existing platform is another way to test how the new feature can augment technology.
The section below includes some of the content from “Questions to Ask Vendors That Say They Have ‘Artificial Intelligence.’” In the context of using AI for security, we can group them in relevant categories and add some specific questions.
Artificial Intelligence Mechanics
  • What algorithms does the product use to analyze data?
  • Which analytics methods (ML and others) contribute to AI functionality?
  • How do you upgrade the AI engine (e.g., the model) once deployed in production?
  • For each analytic method mentioned above, please indicate the alternate technical approaches used to solve the same problem (on your product or on competitors’ products).
  • How can we see what happens with data that is related to my project?
  • What data and compute requirements will you need to build the models for the solution?
  • Does your product send anything outside of our organization once deployed (“call home”)?
    • If yes, please describe (and provide samples of) what the product sends.
    • If yes, please describe configuration options available to control/disable the feature.
    • If yes, please describe the security measures you (the vendor) deploy to ensure the security of your customers’ information.
  • How can we view/control data used by the solution?
  • Can we wipe data in specific situations — e.g., data related to a departing employee?
Security Benefits
  • What are the security and performance metrics relevant to measure the results from AI?
  • Could you provide third-party resources, such as independent testing reports, on your AI technology?
  • Can you provide peer references with a similar use case to ours?
  • What resources are available to gather and refine data that the AI solution can use so that its outcomes improve?
Process and People
  • How does your solution integrate in our enterprise workflow (e.g., incident response, ticketing)?
  • Does your solution integrate with third-party security solutions?
  • How much staff time should we expect to devote to tuning and maintaining the solution?
  • Please list available training courses for personnel operating the solution.
  • Please describe available reports security operations can use to communicate about the solution.
Any use of metrics for security might be helpful in determining the potential performance benefits of a solution. However, leaders should use rationalized metrics that are specific to their organization’s operational expectations.
Measuring AI success is a challenge. SRM leaders and CISOs should focus on measuring the potential benefits that any solution offers against an identified threat vector rather than on whether it uses AI or not.
For some of these technologies, a competitive Proof of Concept process will reveal to the selecting organization the realities of the AI capability offered by the vendor while shortening the evaluation period.
  • Use and fine-tune the above list of questions when surveying vendors with claims of AI use.
  • Don’t create specific metrics for AI but evaluate the outcome against the same metrics that you use for other techniques.
  • Test, test, test. The novelty of these techniques implies a lack of independent testing and difficulty getting peer references with long enough experience on the product.
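
Outcome-focused evaluation of the kind recommended above can be as simple as scoring every candidate control, AI-based or not, against the same labeled test set with the same metrics. The ground truth and tool verdicts below are fabricated for illustration:

```python
# Compare two detection techniques against one labeled test set, using the
# same metrics for both. Ground truth and verdicts are fabricated examples.
def metrics(truth, verdicts):
    tp = sum(t and v for t, v in zip(truth, verdicts))
    fp = sum(v and not t for t, v in zip(truth, verdicts))
    fn = sum(t and not v for t, v in zip(truth, verdicts))
    tn = sum(not t and not v for t, v in zip(truth, verdicts))
    return {
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
        "false_positive_rate": fp / (fp + tn) if fp + tn else 0.0,
    }

# True = malicious sample, False = benign sample.
truth = [True, True, True, True, False, False, False, False, False, False]
# Hypothetical verdicts: the signature tool misses attacks; the ML tool
# catches them all but raises false alerts.
signature_tool = [True, True, False, False, False, False, False, False, False, False]
ml_tool = [True, True, True, True, True, True, False, False, False, False]

for name, verdicts in (("signature", signature_tool), ("machine learning", ml_tool)):
    m = metrics(truth, verdicts)
    print(name, {k: round(v, 2) for k, v in m.items()})
```

On this fabricated data the machine learning tool reaches full recall but at a higher false-positive rate (0.33 versus 0.0 for the signature tool) — exactly the kind of tradeoff that only shared metrics make visible.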

Should You Fire Your Security Team (and Find a New Job)?

According to Gartner’s CIO survey, a third of organizations do not have a full-time security specialist, and over 25% of respondents indicate that they need employees with digital security skills. The widespread, ongoing shortage of qualified security professionals means that these skills are hard, if not impossible, to find. This shortage leaves security teams without sufficient numbers to complete SRM’s complex mission.
However, this doesn’t mean that technologies and new techniques, such as AI, are the sole answer to this challenge. With the development of new security roles, and the commensurate acquisition of new security competencies and skills, organizations can manage risk from digital business initiatives by assigning the right people with the right skills and competencies to the right roles.
Does this mean that CIOs should fire the CISO and the security team? No. Security and risk leaders must shift their view from hiring only to optimizing their security function.
Indeed, some functions will be automated, such as log management. Others may be replaced or augmented by machine learning capabilities (see the “Market Guide for Endpoint Detection and Response Solutions”), but that does not mean that leaders are in any way, shape or form ready to disband their security team. AI is a capability that will enable tools to become more powerful and efficient. However, those tools will only be as good as the practitioners using them — the corollary to “it’s the poor carpenter who blames his tools.”
Contrary to any warnings that AI’s ascendance means the disappearance of the human worker, Gartner predicts that, by 2020, AI will create more jobs (2.3 million) than it eliminates (1.8 million). Maintaining and tuning AI implementations may mean new hires with rare and expensive skills.
SRM leaders have already gone through several transformations in recent years. Adding AI to the enterprise’s security portfolio has similarities with transformations that CISOs have already navigated (see Table 2).

Table 2: Using Artificial Intelligence Compared to Recent Trends

| Recent trend | Similarities with using artificial intelligence |
| --- | --- |
| Cloud-based security | Trusting a third-party provider to process your data; “black box” technology and little configurability at the start |
| Next-generation firewall | Shift in expected results for existing technologies |
| Network sandboxing | Uncontrollable hype in an emerging market |

Source: Gartner (August 2018)
SRM leaders should have learned from the previous big technology shifts that hiring an “expert” rarely works for emerging areas because the required skills are not available yet.
Artificial intelligence pioneers have learned to aim for fairly “soft” outcomes when they engage AI. These leaders focus on worker augmentation, not on worker replacement.
The skills required to embrace emerging technologies change faster than most organizations can adapt. More importantly, these skills will open up the security industry to new roles, such as “data security scientist” and “threat hunter” (see “New Security Roles Emerge as Digital Ecosystems Take Over”). The data security scientist role, for example, incorporates data science and analytics into security functions and applications. Specifically, this role determines how machine learning and knowledge-based artificial intelligence can be deployed to automate tasks and orchestrate security functions, using algorithms and mathematical models to reduce risk.
To sharpen the specific expertise of your staff to incorporate those new roles, consider a variety of skills development exercises and development platforms. Talent-development platforms include universities, institutions, SANS training courses, conferences and table-top exercises. At the most mature point, consider the use of a cyber range (see “Boost Resilience and Deliver Digital Dexterity With Cyber Ranges”).
SRM leaders overseeing information security management should:
  • Shift focus from hiring and buying to optimizing existing security programs.
  • Consider artificial intelligence as a capability with the potential to improve the effectiveness and efficiency of the tools at the disposal of SRM practitioners. It is not a magical replacement for them.
  • Prioritize experimentation so that AI engagements are directed at challenges for which you lack the resources or worker base to succeed.
  • Build a decision framework for incorporating AI that favors fair evaluation and handles privacy impacts.
  • Use AI as an opportunity to strengthen or build your operational security metrics and RFP requirements.
  • Optimize your security function by getting rid of manual processes and developing your staff’s expertise; do not remove your security team.


Evidence

The 2018 Gartner CIO Survey was conducted online from 20 April 2017 to 26 June 2017 among Gartner Executive Programs members and other CIOs. Qualified respondents were the seniormost IT leader (CIO) for their overall organization or a part of their organization (for example, a business unit or region). The total sample is 3,160, with representation from all geographies and industry sectors (public and private). The survey was developed collaboratively by a team of Gartner analysts and was reviewed, tested and administered by Gartner’s Research Data and Analytics team.
Gartner’s Security and Risk Survey was conducted between 24 February 2017 and 22 March 2017 to better understand how risk management planning, operations, budgeting and buying are performed, especially in the following areas:
  • Risk and security management
  • Security technologies and identity and access management (IAM)
  • Business continuity management
  • Security compliance and audit management
  • Privacy
The research was conducted online among 712 respondents in five countries: U.S. (n = 141), Brazil (n = 143), Germany (n = 140), U.K. (n = 144) and India (n = 144).
Qualifying organizations have at least 100 employees and $50 million in total annual revenue for FY16. All industry segments qualified, with the exception of IT services and software and IT hardware manufacturing.
Further, each of the five technology-focused sections of the questionnaire required the respondents to have at least some involvement or familiarity with one of the technology domains we explored.
Interviews were conducted online and in a native language and averaged 19 minutes. The sample universe was drawn from external panels of IT and business professionals. The survey was developed collaboratively by a team of Gartner analysts who follow these IT markets and was reviewed, tested and administered by Gartner’s Research Data and Analytics team.
Disclaimer: “Total” results do not represent “global” findings and are a simple average of results for the targeted countries, industries and company size segments in this survey.
