How to Monitor and Analyze AWS Managed Microsoft AD Security Logs Using Amazon CloudWatch and Splunk

 

https://aws.amazon.com/blogs/apn/how-to-monitor-and-analyze-aws-managed-microsoft-ad-security-logs-using-amazon-cloudwatch-and-splunk/

 

AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) makes it possible for you to monitor and analyze security events of your directory in near real-time.

You can now forward security event logs from your directory to Amazon CloudWatch Logs in the Amazon Web Services (AWS) account of your choice, and centrally monitor events using AWS services or third-party applications such as Splunk, an AWS Partner Network (APN) Advanced Technology Partner with the AWS Security Competency.

In this post, I will show you an example of how to detect and respond to unauthorized or unusual activity. For example, account lockouts may result from a user who forgot their password. However, a bad actor could be attempting unauthorized access, or running a denial of service attack against your users.

By detecting account lockouts, you may be able to distinguish between a user who innocently lost access and an attacker, and respond appropriately.

I will also explore how to monitor and create near-real-time alerts for account lockouts in your AWS Managed Microsoft AD using Amazon CloudWatch Logs and Splunk. I’ll accomplish this in four steps:

  1. Enable log forwarding to Amazon CloudWatch Logs.
  2. Configure your Splunk environment.
  3. Stream logs from Amazon CloudWatch Logs to Splunk using an AWS Lambda function.
  4. Configure the monitor account lockouts dashboard.

Assumptions and Solution Architecture

For the purposes of this post, I am assuming you already created an AWS Managed Microsoft AD directory and configured a fine-grained password policy that enforces the account lockout policy (not enabled by default).

In this example, I configured a password policy that locks an account after three failed login attempts. I’ve also assumed you are already using Splunk Cloud, a cloud-native way to monitor cloud services.

If you don’t have one already, sign up here and verify your email. This takes you to a login page where you can spin up your Splunk Cloud within minutes.

Splunk Security Logs-1

As you can see, I’ve enabled AWS Managed Microsoft AD log forwarding to Amazon CloudWatch Logs, configured Splunk, used an AWS Lambda function to push the event logs from Amazon CloudWatch Logs to Splunk, and then configured the Splunk dashboard to monitor account lockouts.

You can use Amazon Kinesis Data Firehose as an alternative to an AWS Lambda function. In this post, I’ll use Splunk Cloud instead of Splunk Enterprise, because it eliminates the need to deploy and manage infrastructure.

Step 1: Enable Log Forwarding to Amazon CloudWatch Logs

Follow these steps to enable log forwarding from your directory to Amazon CloudWatch Logs:

  • Open the AWS Management Console, select Directory Service, and then select the directory whose logs you want to forward (in my case, corp.com).
  • On the details page, select the Networking & Security tab, and then choose Enable under the Log Forwarding section.

Splunk Security Logs-2

  • Create a new CloudWatch Logs log group, or select an existing one, to contain the security logs from your domain controllers. If you have a central security team that monitors your cloud activity from a separate central account, you can send the security logs to a log group in their account.

    In this example, I’ll create a new log group in the same account as the directory. Select the Create a New Log Group option and use the suggested log group name. Choose Enable, and then wait 5-10 minutes for the security logs of each domain controller to become available in Amazon CloudWatch Logs.

    Note that AWS Directory Service will create or reuse a resource policy with permissions to publish the security logs to the specified log group.

Splunk Security Logs-3
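
If you prefer to script this step, the console actions above map to a couple of API calls. Below is a minimal Python (boto3) sketch under stated assumptions: the directory ID and log group name are placeholders, and the resource policy mirrors what the console creates for you; verify the details against the AWS Directory Service documentation before relying on it.

    import json

    import boto3

    ds = boto3.client("ds")
    logs = boto3.client("logs")

    directory_id = "d-1234567890"  # placeholder: your directory ID
    log_group = "/aws/directoryservice/d-1234567890-securitylogs"  # placeholder name

    # Create the log group that will receive the domain controllers' security logs.
    try:
        logs.create_log_group(logGroupName=log_group)
    except logs.exceptions.ResourceAlreadyExistsException:
        pass  # reuse an existing group

    # Allow Directory Service to publish into the group. The console sets up an
    # equivalent resource policy automatically (see the note above).
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "ds.amazonaws.com"},
            "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
            "Resource": "arn:aws:logs:*:*:log-group:%s:*" % log_group,
        }],
    }
    logs.put_resource_policy(
        policyName="DSSecurityLogsPolicy",  # placeholder policy name
        policyDocument=json.dumps(policy),
    )

    # Start forwarding the directory's security logs to the log group.
    ds.create_log_subscription(DirectoryId=directory_id, LogGroupName=log_group)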

Step 2: Configure Your Splunk Environment

As I mentioned in the solution architecture overview, I am using an AWS Lambda function to push event logs from Amazon CloudWatch Logs to Splunk. To receive the event logs into Splunk, I must first configure a Splunk HTTP Event Collector (HEC) by following these steps:

  • Open the Splunk management console, select Settings, then Data Inputs, and choose Add New HTTP Event Collector. Here’s a list of properties you must configure:

Splunk Security Logs-4

Below is my configuration example:

Splunk Security Logs-5

  • Enable HEC through the Global Settings dialog box. On the Data Inputs page, select HTTP Event Collector and choose Global Settings. Select Enabled for the All Tokens option.

Splunk Security Logs-6
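
Before wiring anything to the new token, it can help to send HEC a test event. The sketch below uses Python with the requests package; the host and token are placeholders for your own Splunk Cloud values, and a successful call returns {"text": "Success", "code": 0}.

    import json

    import requests

    # Placeholders: replace with your Splunk Cloud input host and HEC token.
    HEC_URL = "https://input-prd-example.cloud.splunk.com:8088/services/collector/event"
    HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

    resp = requests.post(
        HEC_URL,
        headers={"Authorization": "Splunk " + HEC_TOKEN},
        data=json.dumps({"event": "HEC smoke test", "sourcetype": "manual"}),
        timeout=10,
        # Some trial instances use self-signed certificates; if so, you may
        # need verify=False for this one-off test.
    )
    resp.raise_for_status()
    print(resp.json())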

Step 3: Stream Logs from Amazon CloudWatch Logs to Splunk

Now that I’ve enabled log forwarding to Amazon CloudWatch Logs and configured Splunk, I’ll create an AWS Lambda function to stream logs from CloudWatch Logs to Splunk. To accomplish this, I will use a predefined Splunk CloudWatch log-processing blueprint in Lambda by following these steps:

  • Open the AWS Management Console, select Lambda, and then choose Create Function. Select the Blueprints option, and search for “splunk.” Select the “splunk-cloudwatch-logs-processor” Lambda blueprint and choose Configure.

Splunk Security Logs-7

  • In the Basic Information section, provide a Name for your Lambda function, and create or select an AWS Identity and Access Management (IAM) role that grants Lambda the logs:CreateLogGroup, logs:CreateLogStream, and logs:PutLogEvents permissions.

    In this example, I created a new role from the template shown below. The Lambda function will attach the AWSLambdaBasicExecutionRole managed policy, which includes the permissions listed above.

Splunk Security Logs-8

  • In the CloudWatch Logs Trigger section, select the Log Group to which you are forwarding AWS Managed Microsoft AD security logs (see Step 1). Provide a Filter Name, and make sure you check the Enable Trigger option.

Splunk Security Logs-9

  • In the Environment Variables section, provide the values for the following variables according to your Splunk configuration in Step 2:
Splunk Security Logs-10

  • Choose Create Function, and AWS subscribes this Lambda function to the selected log group. With this, Amazon CloudWatch Logs triggers the subscribed Lambda function each time CloudWatch receives a new security event from AWS Managed Microsoft AD. (A sketch of the function’s logic appears at the end of this step.)

Splunk Security Logs-11

  • After a few minutes, you’ll see your directory security events in your Splunk environment. To view them from your Splunk dashboard, click Search & Reporting and run the query index=main, then choose the appropriate values from Selected Fields in the left pane. This auto-populates the search query, for example:

      index=main host="input-prd-p-69vgmjstn6rc.cloud.splunk.com:8088" source="lambda:DSSecurityLogs"

Splunk Security Logs-12
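
The blueprint itself is written in Node.js, but its flow is easy to sketch. The Python outline below is my illustration of that flow, not the blueprint’s actual code: CloudWatch Logs hands the function a base64-encoded, gzip-compressed batch, and each contained log event is posted to HEC. The SPLUNK_HEC_URL and SPLUNK_HEC_TOKEN names are assumed to match the environment variables you set above; confirm them against your own configuration.

    import base64
    import gzip
    import json
    import os
    import urllib.request

    HEC_URL = os.environ["SPLUNK_HEC_URL"]    # e.g., https://<host>:8088/services/collector/event
    HEC_TOKEN = os.environ["SPLUNK_HEC_TOKEN"]

    def handler(event, context):
        # CloudWatch Logs delivers the batch as base64-encoded, gzipped JSON.
        payload = gzip.decompress(base64.b64decode(event["awslogs"]["data"]))
        batch = json.loads(payload)
        log_events = batch.get("logEvents", [])

        # Forward each security event to the Splunk HTTP Event Collector.
        for log_event in log_events:
            body = json.dumps({
                "time": log_event["timestamp"] / 1000.0,   # ms -> epoch seconds
                "source": "lambda:" + context.function_name,
                "event": log_event["message"],
            }).encode("utf-8")
            req = urllib.request.Request(
                HEC_URL,
                data=body,
                headers={"Authorization": "Splunk " + HEC_TOKEN},
            )
            urllib.request.urlopen(req)

        return "Forwarded %d events" % len(log_events)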

Step 4: Configure the Monitor Account Lockouts Dashboard

Now that Splunk is receiving security events, I am ready to create a dashboard in Splunk where I can monitor the account lockouts of the directory. Active Directory generates Event ID 4740 every time an account lockout occurs. To monitor this specific event, I need to install the Splunk Add-on for Microsoft Windows, which enables Splunk to understand and parse Windows logs.

From your Splunk dashboard, click on Find More Apps and search for “Splunk Add-on for Microsoft Windows.”

Splunk Security Logs-13

The Splunk Add-on for Microsoft Windows provides Common Information Model mappings for Windows events, and enables the dashboard and alerting that I’ll configure in the next steps. Click Install beside the add-on.

Splunk Security Logs-14

Splunk can now process the log files as Windows security events. Next, I will use Splunk searches to configure a dashboard report that shows details of account lockouts:

  • Create a query to search for the account lockout events (Event ID 4740). Here’s an example of the query:

      sourcetype=xmlwineventlog EventCode=4740
      | table _time TargetUserName, TargetDomainName
      | rename TargetDomainName as "Caller Computer Name"

Splunk Security Logs-15

  • Save the query by selecting the option Save As Dashboard Panel and provide the requested information. See here for more details on creating Dashboards.

Splunk Security Logs-16

  • You can now see the account lockout events in your Splunk dashboard report.

Splunk Security Logs-17.2

Congratulations! You can now monitor your AWS Managed Microsoft AD security event logs using Splunk in near real-time. Splunk provides additional monitoring and alerting capabilities, such as sending an alert email every time an account lockout occurs.
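
As one possible way to script such an alert, here is a sketch using the Splunk SDK for Python (splunklib). The host, credentials, recipient address, and search name are all placeholders, and the saved-search keys follow savedsearches.conf conventions; Splunk Cloud may restrict REST access on the management port, so treat this as illustrative and verify it against your environment.

    import splunklib.client as client

    # Placeholders: your Splunk management endpoint and credentials.
    service = client.connect(
        host="example.cloud.splunk.com",
        port=8089,
        username="admin",
        password="changeme",
    )

    # Scheduled search that emails whenever an account lockout (4740) appears.
    service.saved_searches.create(
        "AD Account Lockout Alert",  # placeholder search name
        "search sourcetype=xmlwineventlog EventCode=4740",
        **{
            "is_scheduled": "1",
            "cron_schedule": "*/5 * * * *",           # run every 5 minutes
            "dispatch.earliest_time": "-5m",
            "dispatch.latest_time": "now",
            "alert_type": "number of events",
            "alert_comparator": "greater than",
            "alert_threshold": "0",
            "actions": "email",
            "action.email.to": "secops@example.com",  # placeholder recipient
        },
    )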

Summary

In this post, I have demonstrated how you can monitor your AWS Managed Microsoft AD directory’s security events using Amazon CloudWatch Logs and Splunk in near real-time.

I used the account lockout event as an example that helps you stay informed and take appropriate action even before your end users reach out to you. In addition, I showed you how to accomplish this using cloud-based services only, which makes monitoring your directory security events easier and more cost-effective because it eliminates the need to deploy and manage additional infrastructure or monitoring tools.


Incident Response Plan

  1. Readiness and Detection Review
  2. Business Impact Analysis (BIA)
  3. Computer Security Incident Response Plan (CSIRP) Development
  4. Computer Incident Response Team (CIRT) Development
  5. First Incident Responder Training
  6. Tabletop Exercises
  7. Attack Simulation & Response Exercise (Red / Purple Team)
  8. Annual Review
  9. Digital Forensics and Incident Response (DFIR)
  10. Security Orchestration, Automation and Response (SOAR)

Reference

Everyone in sales needs to watch these videos:

A Conference Call – https://www.youtube.com/watch?v=kNz82r5nyUw

Meeting Backup – https://www.youtube.com/watch?v=wU99CCWr77k

The Expert – https://www.youtube.com/watch?v=BKorP55Aqvg

Email in Real Life – https://www.youtube.com/watch?v=HTgYHHKs0Zw

A Video Conference in Real Life – https://www.youtube.com/watch?v=JMOOG7rWTPg

Stuff Business People Say – https://www.youtube.com/watch?v=MHg_M_zKA6Y

Working from Home – https://www.youtube.com/watch?v=co_DNpTMKXk

Influence: The Psychology of Persuasion, Revised Edition

One of the best sales training books


  1. Reciprocation
    • People generally feel obliged to return favors offered to them. This trait is embodied in all human cultures and is one of the human characteristics that allow us to live as a society. Compliance professionals often play on this trait by offering a small gift to potential customers. Studies have shown that even if the gift is unwanted, it will influence the recipient to reciprocate.

      A variation on this theme is to ask for a particularly big favor. When this is turned down, a smaller favor is asked for. This is likely to be successful because a concession on one side (the down-scaling of the favor) will be reciprocated by a concession by the other party (agreement to the smaller favor).

    • Practice
      • Give free stuff and information
      • Follow up with educational emails and state clear action points
  2. Commitment and Consistency
    • People have a general desire to appear consistent in their behavior. People generally also value consistency in others. Compliance professionals can exploit the desire to be consistent by having someone make an initial, often small, commitment. Requests can then be made that are in keeping with this initial commitment.

      People also have a strong desire to stand by commitments made by providing further justification and reasons for supporting them. This pattern of behavior toward or resulting in a negative outcome is called escalation of commitment.

    • Practices
      • Ask for buy in
      • Ask for approval process and procurement process
      • Sell something small
  3. Social Proof
    • People generally look to other people similar to themselves when making decisions. This is particularly noticeable in situations of uncertainty or ambiguity. This trait has led compliance professionals to provide fake information on what others are doing. Examples of this are staged interviews on television advertisements or “infomercials”.
    • Practices
      • Case Study
      • Example References (Large Bank)
      • Customer Reference
  4. Liking
    • People are more likely to agree to offers from people whom they like. There are several factors that can influence people to like some people more than others:
      • Physical attractiveness can give people a “halo” effect whereby others are more likely to trust them and think of them as smarter and more talented.
      • People tend to like people who are most like themselves.
      • People tend to like those who pay them compliments.
      • People who they are forced to cooperate with to achieve a common goal tend to form a trust with those people.
      • People tend to like people that make them laugh. For example, many lectures start with a joke.

      Any one of the above methods may not help influence people, but used in combination, their effects can be magnified.

      • Practices
        • Dress sharp for meetings
        • Match the person’s body language and personality
        • Find out what they did on the weekend and bring that up in next meeting
        • Find commonality (Family, Sports, etc.)
        • Make a joke
  5. Authority
    • The Milgram experiment, run by Stanley Milgram, provided some of the most stunning insights into how influential authority can be over others. People often act in an automated fashion to commands from authority, even if their instincts suggest the commands should not be followed.
      • Practice
        • Gartner References
        • Bringing in experts
  6. Scarcity
    • People tend to want things as they become less available. This has led advertisers to promote goods as “limited availability” or “short time only”. It has also been shown that when information is restricted (such as through censorship), people want the information more and will hold that information in higher regard.

      Items are also given a higher value when they were once in high supply but have now become scarce.

    • Practice
      • Offer the discount for a limited time only – scarcity of budget
      • Allocating resources takes time – scarcity of resources
      • Project timelines must be met to achieve business outcomes – scarcity of time

AI Myth

The promise of organizations gaining the ability to predict future attacks is a marketing smokescreen hiding real progress in advanced diagnostic analytics and in risk scoring. To be clear, AI has potential to enhance the effectiveness of a security team. In the area of anomaly detection and security analytics, humans working with AI accomplish much more than without it. However, it is far more realistic to strive for “smart automation,” executing tasks with trained professional human effort that is complemented by automation rather than by fully automated AI features.

Gartner Report: 5 Questions That CISOs Must Answer Before Adopting Artificial Intelligence

How to weigh the potential benefits and risks of machine learning

5 Questions That CISOs Must Answer Before Adopting Artificial Intelligence

Published 29 August 2018 – ID G00350259 – 23 min read


Mentions of artificial intelligence in Gartner security inquiries rose 76% in the past 12 months. Security and risk management (SRM) leaders and chief information security officers (CISOs) in early-adopting organizations must articulate the potential benefits and risks of machine learning techniques.

Overview

Key Findings

  • The most frequent question about artificial intelligence in security inquiries is, “What is the state of AI in security?” SRM leaders are tempted to believe bold promises about its benefits but are often ill-prepared to evaluate the impact of AI on their mission.
  • A sample review of Leaders from 11 Gartner Magic Quadrants shows that more than 80% of these security vendors include AI in their marketing message. Machine learning and deep neural networks are the most frequently cited techniques.
  • When considering the use of AI, CISOs need to identify whether they have to fight “resistance to change” or “fear of missing out” biases in order to rebalance their AI adoption strategy.
  • Today’s use of AI in security addresses use cases that other techniques can address. Most organizations lack evaluation frameworks and metrics to benchmark these new techniques against the older ones they already paid for.
  • Machine learning is also vulnerable to attacks, highlighting that AI is no silver bullet.

Recommendations

SRM leaders and CISOs building information security management strategy should:
  • Focus on the desired outcome. Define evaluation metrics to measure the quality of AI results in order to help free purchasing decisions from the confusion of marketing hype. New technical approaches in your defense portfolio must achieve measurable results.
  • Assess the impact of using machine learning on staff and data privacy. Identify skills gaps and required training. Classify relevant regulatory requirements based on how AI might impact them.
  • Utilize AI as a complementary technique, beginning with experimental engagements. The low maturity of AI does not preclude utilization, but recency of implementations invites caution.
  • Discourage DIY approaches on AI. The maturity of AI techniques has not yet grown past the Peak of Inflated Expectations.

Strategic Planning Assumptions

By 2022, replacement of conventional security approaches by machine learning technologies will actually make organizations less secure for a third of enterprises.
By 2021, 60% of security vendors will claim AI-driven capabilities, more than half of them undeservedly, to justify premium prices.
In 2020, AI will become a positive net job motivator, creating 2.3 million jobs while eliminating only 1.8 million jobs.

Analysis

Despite existing definitions, “artificial intelligence” is often used as a relative term. Marketers and journalists use the term loosely to describe a broad range of functions. The hype has veiled artificial intelligence in a fog of exaggerated expectations and vague details — even fear: The thought of AI taking over our jobs often generates anxiety. Yet within context, an SRM leader can appreciate AI for how it can help achieve better security outcomes.
The promise of organizations gaining the ability to predict future attacks is a marketing smokescreen hiding real progress in advanced diagnostic analytics and in risk scoring. To be clear, AI has potential to enhance the effectiveness of a security team. In the area of anomaly detection and security analytics, humans working with AI accomplish much more than without it. However, it is far more realistic to strive for “smart automation,” executing tasks with trained professional human effort that is complemented by automation rather than by fully automated AI features.
It is a serious responsibility for SRM leaders and CISOs to determine if, and how, an emerging technology might benefit their organization. In theory, it is not the provider’s choice to make this determination or to shape it. The benefits are something that the technology should prove by itself.
Developing the right strategic approach to effectively incorporate AI in SRM programs requires preliminary work. Security leaders must answer these three questions before reaching out to technology providers (see Figure 1):
  1. Do you need artificial intelligence for the problem you are trying to solve?
  2. How can you measure that it is worth the investment?
  3. What will be the impact of using AI for this?

Figure 1. Three Preliminary Questions to Ask About Artificial Intelligence

Source: Gartner (August 2018)

This research examines the questions SRM leaders most frequently ask Gartner analysts relating to AI implementations in their practices. With sufficient understanding of the capabilities of AI — and the best ways to determine how AI can serve the organization — leaders can utilize these promising technologies with reasonable expectations.

What Should CISOs and Their Team Know About Artificial Intelligence?

“Artificial Intelligence Primer for 2018,” a Gartner analysis that lists upcoming research on the topic, includes the following definition of artificial intelligence:
Artificial intelligence refers to systems that change behaviors without being explicitly programmed based on data collected, usage analysis and other observations. These systems learn to identify and classify input patterns, probabilistically predict and operate unsupervised.
Using artificial intelligence simply means using algorithms to classify information and predict outcomes faster — and in greater volume — than humans can.
AI implementations today are “special-purposed AI” that are limited to specific narrow use cases. This is the case for any AI claim in security today. “Artificial General Intelligence” (AGI) designates a general-purpose AI that does not exist yet (see “Hype Cycle for Artificial Intelligence”).

Artificial Intelligence Is in the Eye of the Beholder

CISOs should remember that — as with any popular term — there are many interpretations of “artificial intelligence.” Service providers might use artificial intelligence to describe the most basic feature, leveraging static artificial intelligence, such as a knowledge base of attack techniques, whereas using the term “artificial intelligence” sets much higher expectations for prospective customers.
To find the appropriate AI solution for the enterprise’s needs, SRM leaders must correctly translate the marketing hype that has heralded AI in misleading terms. Several common marketing buzzwords have turned out to have much less dramatic but nonetheless useful meanings (see Table 1).

Table 1: Navigating AI Buzzwords

Term used as a buzzword                Most likely meaning
“Next generation”                      “Our latest release”
Holistic approach                      Multifunction
Artificial intelligence                Algorithms
Machine learning                       Algorithms processing large amounts of data
Deep learning (deep neural network)    Multistep machine learning
Predictive                             Diagnostic
Source: Gartner (August 2018)

Understanding the Basics of AI

More advanced algorithms can immediately reveal crucial information about vulnerabilities, attacks, threats, incidents and responses. Engaging the proper AI function could result in a noticeable augmentation of a security team’s capabilities (see Figure 2).

Figure 2. High-Level Descriptions of AI Concepts and Security Use Cases

Source: Gartner (August 2018)

The most frequent concepts CISOs will encounter when discussing artificial intelligence are probabilistic reasoning (often generalized as “machine learning”) and computational logic (often referred to as rule-based systems). (See “Artificial Intelligence Hype: Managing Business Leadership Expectations” for a more detailed explanation.)
Machine learning is a data science technical discipline that automatically extracts patterns (knowledge) from data. At a high level, machine learning:
  1. Extracts target “features” from data (e.g., email metadata)
  2. Automatically trains a decision logic, called a “model,” with data (e.g., Bayesian networks)
  3. Applies the model on a given input (e.g., emails, files) to determine and estimate output (classify an email as spam or find malware)
There are three major subdisciplines of machine learning, which relate to the types of observation provided:
  • Supervised learning: Using precategorized data as a training set for the model (e.g., a dataset of known good emails and known spams)
  • Unsupervised learning: Using unlabeled data to train the model (e.g., network traffic)
  • Reinforcement learning: Using the results of an action to adjust behavior (subsequent actions) in similar circumstances (e.g., behavior anomaly detection)
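To make the email example concrete, here is a toy supervised learning sketch in Python with scikit-learn (my illustration, not part of the Gartner report). It follows the three steps above: extract features from the data, train a model on precategorized examples, then apply the model to new input for a probabilistic classification.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    # 1. Extract target "features" from data: word counts from email text.
    emails = [
        "win a free prize now", "claim your free money",
        "meeting agenda for tomorrow", "please review the attached report",
    ]
    labels = [1, 1, 0, 0]  # precategorized training set: 1 = spam, 0 = legitimate

    vectorizer = CountVectorizer()
    features = vectorizer.fit_transform(emails)

    # 2. Automatically train a decision logic (the "model") with the data.
    model = MultinomialNB()
    model.fit(features, labels)

    # 3. Apply the model to a given input to estimate the output.
    new_email = vectorizer.transform(["free prize meeting"])
    print(model.predict(new_email))        # e.g., [1] -> classified as spam
    print(model.predict_proba(new_email))  # probability estimate per class
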
A deep neural network, or deep learning, expands machine learning by discovering intermediate representations. This allows SRM leaders to tackle more complex problems and to solve problems with higher accuracy (see “Innovation Insight for Deep Learning”). A typical example of using deep learning is image processing. In security, some complex problems, such as building risk scores from various sources and analyzing network traffic, might benefit from using this approach.
Machine learning and deep neural network implementations are opaque. Even with knowledge of the mathematics behind these concepts, it is difficult for the user to identify the source data and rationale behind the output.
Gartner has published extensive research on the various uses of AI under the Artificial Intelligence key initiative. A sampling of research documents (see the list below) will let you learn more about artificial intelligence.
  • “Hype Cycle for Artificial Intelligence”
  • “Artificial Intelligence Hype: Managing Business Leadership Expectations”
  • “Machine Learning: FAQ From Clients”
  • “Innovation Insight for Deep Learning”
Recommendations:
  • Learn that machine learning and deep learning behave as black box algorithms with your organization’s data. This will have implications not only on data privacy but also on tool evaluation, as “production proof of concept” might neither prove easily feasible nor show future performance.
  • As staff from various teams learn about AI, enforce knowledge sharing and build an internal knowledge base on the topic.
  • Acknowledge AI hype; hear “algorithm” when vendors or markets say “artificial intelligence,” “machine learning” or any other related buzzword.
  • Develop a baseline understanding of AI concepts for members of the security team who might deal with the related technologies to optimize costs and avoid unneeded purchases.

What Is AI’s Expected Impact on SRM?

Artificial intelligence’s promise is to automatically process data and apply analytics functions much better than human teams can without aid. Improved automation and data analytics apply to security analytics (such as SIEM, network traffic analysis, and user and entity behavior analytics) and infrastructure protection (endpoint protection, web application firewalls, bot mitigation, and cloud access security brokers). AI solutions promise improved efficiency and speed: finding more attacks, reducing false alerts, and performing faster detect-and-respond functions.
The use of AI is also visible in integrated risk management, where its promise is to better support risk-based decision making by identifying and prioritizing risk.
Machine learning is already pervasive in many security markets. It might incrementally improve the benefits of existing technologies when implemented as a feature and could also answer new needs when machine learning is at the core of a new product (see Figure 3).

Figure 3. What Should You Expect From Using Machine Learning?

Source: Gartner (August 2018)

To engage AI with reasonable expectations of an improved SRM practice, CISOs will wish to gain at least a minimum understanding of requirements. They should prepare to be the leading agent of the company’s security use of AI. They should become familiar with the capabilities of AI-based systems to assess the potential of these systems for greater efficiency and effectiveness.
More importantly, SRM leaders should be the voice of reason and clarity, setting suitable expectations for stakeholders and employees around the reality of AI versus its exaggerations. Staff and colleagues must also follow their lead and concentrate their energies on several crucial actions in a security practice. These understandings are described in Figure 4.

Figure 4. Security Roles and Required Understandings in AI Engagements

Source: Gartner (August 2018)

Because many algorithms must consume large amounts of data, CISOs should engage early with privacy leaders to understand the implications of using an AI product or feature on data security and privacy.
Technical advisors and security operations need a more in-depth understanding of AI technology. They should start by defining the right evaluation metrics to assess the efficacy of the new techniques and to avoid being influenced by the coolness factor of a new shiny tool. SRM leaders should appreciate that lack of transparency and trouble with evaluation of tool effectiveness are the key problems.
There is no assurance that machine-learning-driven results are better than alternate techniques. Available feedback is still scarce. When evaluating solutions that have AI claims, security leaders should focus on the outcome, not the technique. Any new technical approach added to your defense portfolio needs to achieve measurable results.
Machine learning is fallible. It can offer incorrect or incomplete conclusions when using insufficient data, domain insights or compute infrastructure. There may be no mathematical solution designed for the organization’s specific needs. In many areas of security where AI is unproved, including anomaly detection, the efficacy of machine learning is difficult to benchmark. Part of the reason for that is the lack of reliable security metrics.
Machine learning is also vulnerable to attacks. Attacks on classification algorithms introduce just enough noise to change the diagnostic. As they have with every new defense technique, attackers will adapt, and they might already be leveraging AI to improve their attacks.
Recommendations:
  • Inventory the areas where AI techniques are already available to improve existing solutions. Test if they actually improve upon them, as anticipated.
  • Identify new categories of solutions leveraging AI techniques that could help fill a gap in your security posture.
  • Define AI-related roles, responsibilities and required understanding in the security team (see Figure 4).
  • Focus everyone’s attention on the security outcome, not the availability of an AI technique. This requires investing resources and time on heavy testing before scaling the use of an AI technique.

What Is the State of Artificial Intelligence in Security?

As shown in the “Hype Cycle for Artificial Intelligence, 2017,” the most useful techniques for security (machine learning and deep learning) are at the Peak of Inflated Expectations. This suggests that early adopters will undergo a period of experimentation before optimum results will be achieved.
This low maturity of AI in general is one of the reasons why it is probably a bad idea for security organizations to try an autonomous DIY — build your own AI — approach to implement AI for security objectives. SRM leaders should recognize that exchanges of knowledge with other teams in the organization or within vertical industries may be required to hasten payback first. Specialized resources are scarce, and the AI tools and framework are not fully mature yet.
Similarly, most technology providers’ AI initiatives related to SRM are immature. Even when excluding false claims, the solutions and engagements in AI from security vendors are recent. This is apparent in the form of a lot of “AI version 1.0” implementations in many security products. These implementations might also rely on third-party AI frameworks.
Gartner estimates that many of today’s AI implementations would not pass due diligence testing in proving that they achieve significantly better results than other existing techniques.
Some vendors rebrand statistical analysis with a new name. For example, for a long time web application firewalls have used statistical approaches to provide automated application pattern learning, which is now called AI. These AI maturity levels, ranging from immature to experimental, do not preclude utilization. Security leaders should treat AI as emerging technologies, adding them as experimental, complementary controls.
In the Enterprise survey, conducted in March 2018, “cybersecurity” emerges as the most frequent, most critical and most prominent AI project within organizations (see Figure 5). Gartner predicts that, by 2021, 50% of enterprises will have added unsupervised machine learning to their fraud detection solution suites (see “Begin Investing Now in Enhanced Machine-Learning Capabilities for Fraud Detection”).

Figure 5. Cybersecurity Leads AI Use in Enterprises

Source: Gartner (August 2018)

SRM leaders may find that machine learning techniques deliver better results than signature-based controls or other heuristics, but they must build a system to measure and compare results between the various techniques used for the same purpose.

The Promise of Predictive Security Analytics

Predictive analytics is one of the four categories of analytics (descriptive, diagnostic and prescriptive are the others). It is also an area where the promise and expectations largely exceed the current state of AI. A solution in this category is designed to answer the question, “What will happen?”
An example of true prediction can be found in nonsecurity areas, such as predictive maintenance. A security example might forecast a company’s likelihood of compromise. For example, “We estimate that you have 90% chance of being breached by malware within two weeks. This likelihood is because you have servers publicly exposed and we are seeing active infection in organizations from the same vertical in your region.”
In “Combine Predictive and Prescriptive Analytics to Drive High-Impact Decisions,” Gartner offered a reasonable explanation of what predictive analytics addresses:
Predictive analytics addresses the question, “What is likely to happen?” It relies on such techniques as predictive modeling, regression analysis, forecasting, multivariate statistics and pattern matching.
“What is likely to happen?” can be answered with a probability, which is admittedly less appealing than a true prediction of the future. Arguably, a risk score is closer to advanced diagnostics than it is to a prediction. Still, many security analytics providers actively leverage the “predictive analytics” message to present their scoring mechanisms.
Recommendations:
  • Rationalize any temptation to do-it-yourself AI for security.
  • Treat security products and features leveraging AI as emerging yet unproven technologies.
  • Beware before giving any security budget to self-proclaimed seers.

What Should You Ask Security Vendors About AI?

Evaluating emerging technologies is hard because there is no feedback, no template for building requirements, and a lot of uncertainty around long-term value. CISOs should ensure that their team avoids evaluation biases:
When considering the use of artificial intelligence, CISOs need to identify whether they have to fight “resistance to change” or “fear of missing out” biases.
One important consideration for leaders to note is that adding artificial intelligence as a feature of an existing product might look less impactful than adding a new product, but in reality it’s not that simple. The simple feature might “call home” and leak data to the vendor’s cloud infrastructure, which could break some privacy commitments. By contrast, a stand-alone appliance product with local machine learning at its core, deployed in detection-only mode, wouldn’t be that risky to evaluate.
When algorithms are applied locally, and there is no data sharing, or when these risks have been handled appropriately, testing an AI feature is easy. Adding a new algorithm as a feature of an existing platform is another way to test how the new feature can augment technology.
The section below includes some of the content from “Questions to Ask Vendors That Say They Have ‘Artificial Intelligence.’” In the context of using AI for security, we can group them in relevant categories and add some specific questions.
Artificial Intelligence Mechanics
  • What algorithms does the product use to analyze data?
  • Which analytics methods (ML and others) contribute to AI functionality?
  • How do you upgrade the AI engine (e.g., the model) once deployed in production?
  • For each analytic method mentioned above, please indicate the alternate technical approaches used to solve the same problem (on your product or on competitors’ products).
Privacy
  • How can we see what happens with data that is related to my project?
  • What data and compute requirements will you need to build the models for the solution?
  • Does your product send anything outside of our organization once deployed (“call home”)?
    • If yes, please describe (and provide samples) of what the product sends.
    • If yes, please describe configuration options available to control/disable the feature.
    • If yes, please describe the security measures you (the vendor) deploy to ensure the security of your customers’ information.
  • How can we view/control data used by the solution?
  • Can we wipe data in specific situations — e.g., data related to a departing employee?
Security Benefits
  • What are the security and performance metrics relevant to measure the results from AI?
  • Could you provide third-party resources, such as independent testing reports, on your AI technology?
  • Can you provide peer references with a similar use case to ours?
  • What resources are available to gather and refine data that the AI solution can use so that its outcomes improve?
Process and People
  • How does your solution integrate in our enterprise workflow (e.g., incident response, ticketing)?
  • Does your solution integrate with third-party security solutions?
  • How should I expect to devote staff and time to tune and maintain the solution?
  • Please list available training courses for personnel operating the solution.
  • Please describe available reports security operations can use to communicate about the solution.
Any use of metrics for security might be helpful in determining the potential performance benefits of a solution. However, leaders should use rationalized metrics that are specific to their organization’s operational expectations.
Measuring AI success is a challenge. SRM leaders and CISOs should focus on measuring the potential benefits that any solution offers against an identified threat vector rather than on whether it uses AI or not.
For some of these technologies, a competitive Proof of Concept process will reveal to the selecting organization the realities of the AI capability offered by the vendor while shortening the evaluation period.
Recommendations:
  • Use and fine-tune the above list of questions when surveying vendors with claims of AI use.
  • Don’t create specific metrics for AI but evaluate the outcome against the same metrics that you use for other techniques.
  • Test, test, test. The novelty of these techniques implies a lack of independent testing and difficulty getting peer references with long enough experience on the product.

Should You Fire Your Security Team (and Find a New Job)?

According to Gartner’s CIO survey, a third of organizations do not have a full-time security specialist. Over 25% of respondents indicate that they need employees with digital security skills. The widespread and ongoing shortage of qualified security professionals means that these skills are hard, if not impossible, to find. These skills shortages leave security teams without sufficient numbers to complete SRM’s complex mission.
However, this doesn’t mean that technologies and new techniques, such as AI, are by themselves the answer to this challenge. With the development of new security roles, and the commensurate acquisition of new security competencies and skills, organizations can manage risk from digital business initiatives by assigning the right people with the right skills and competencies to the right roles.
Does this mean that CIOs should fire the CISO and the security team? No. Security and risk leaders must shift their view from hiring only to optimizing their security function.
Indeed, some functions will be automated, such as log management. Others may be replaced or augmented by machine learning capabilities (see the “Market Guide for Endpoint Detection and Response Solutions”), but that does not mean that leaders are in any way, shape or form ready to disband their security team. AI is a capability that will enable tools to become more powerful and efficient. However, those tools will only be as good as the practitioners using them — the corollary to “it’s the poor carpenter who blames his tools.”
Contrary to any warnings that AI’s ascendance means the disappearance of the human worker, Gartner predicts that, by 2020, AI will create more jobs (2.3 million) than it eliminates (1.8 million). Maintaining and tuning AI implementations may mean new hires with rare and expensive skills in the future.
SRM leaders have already gone through several transformations in recent years. Adding AI to enterprise’s security portfolio has some similarities with these transformations that CISOs have already gone through (see Table 2).

Table 2: Using Artificial Intelligence Compared to Recent Trends

Recent trend               Similarities with using artificial intelligence
Cloud-based security       Trusting a third-party provider to process your data;
                           “black box” technology and little configurability at start
Next-generation firewall   Shift in expected results for existing technologies
Network sandboxing         Uncontrollable hype in an emerging market
Source: Gartner (August 2018)
SRM leaders should have learned from the previous big technology shifts that hiring an “expert” rarely works for emerging areas because the required skills are not available yet.
Artificial intelligence pioneers have learned to aim for fairly “soft” outcomes when they engage AI. These leaders focus on worker augmentation, not on worker replacement.
The skills required to embrace emerging technologies change faster than most organizations can adapt. More importantly, these skills will open up the security industry to new roles such as “data security scientist” and “threat hunter” (see “New Security Roles Emerge as Digital Ecosystems Take Over”). The data security scientist role, for example, incorporates data science and analytics into security functions and applications. Specifically, this role determines how machine learning and knowledge-based artificial intelligence can be deployed to automate tasks and orchestrate security functions by using algorithms and mathematical models to reduce risk.
To sharpen the specific expertise of your staff to incorporate those new roles, consider a variety of skills development exercises and development platforms. Talent-development platforms include universities, institutions, SANS training courses, conferences and table-top exercises. At the most mature point, consider the use of a cyber range (see “Boost Resilience and Deliver Digital Dexterity With Cyber Ranges”).
Recommendations:
SRM leaders overseeing information security management should:
  • Shift focus from hiring and buying to optimizing existing security programs.
  • Consider artificial intelligence as a capability with the potential to improve the effectiveness and efficiency of the tools at the disposal of SRM practitioners. It is not a magical replacement for them.
  • Prioritize experimentation so that AI engagements are directed at challenges in which you lack the resources or corporate worker base to succeed.
  • Build a decision framework for incorporating AI that favors fair evaluation and handles privacy impacts.
  • Use AI as an opportunity to strengthen or build your operational security metrics and RFP requirements.
  • Optimize your security function by getting rid of manual processes and developing your staff’s expertise; do not remove your security team.

Evidence

The 2018 Gartner CIO Survey was conducted online from 20 April 2017 to 26 June 2017 among Gartner Executive Programs members and other CIOs. Qualified respondents were the seniormost IT leader (CIO) for their overall organization or a part of their organization (for example, a business unit or region). The total sample is 3,160, with representation from all geographies and industry sectors (public and private). The survey was developed collaboratively by a team of Gartner analysts and was reviewed, tested and administered by Gartner’s Research Data and Analytics team.
Gartner’s Security and Risk Survey was conducted between 24 February 2017 and 22 March 2017 to better understand how risk management planning, operations, budgeting and buying are performed, especially in the following areas:
  • Risk and security management
  • Security technologies and identity and access management (IAM)
  • Business continuity management
  • Security compliance and audit management
  • Privacy
The research was conducted online among 712 respondents in five countries: U.S. (n = 141), Brazil (n = 143), Germany (n = 140), U.K. (n = 144) and India (n = 144).
Qualifying organizations have at least 100 employees and $50 million in total annual revenue for FY16. All industry segments qualified, with the exception of IT services and software and IT hardware manufacturing.
Further, each of the five technology-focused sections of the questionnaire required the respondents to have at least some involvement or familiarity with one of the technology domains we explored.
Interviews were conducted online and in a native language and averaged 19 minutes. The sample universe was drawn from external panels of IT and business professionals. The survey was developed collaboratively by a team of Gartner analysts who follow these IT markets and was reviewed, tested and administered by Gartner’s Research Data and Analytics team.
Disclaimer: “Total” results do not represent “global” findings and are a simple average of results for the targeted countries, industries and company size segments in this survey.